Search (44 results, page 1 of 3)

  • Filter: type_ss:"p"
  • Filter: language_ss:"e"
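  The two filters above are Solr filter queries, and the per-result scoring trees below are Lucene "explain" output. Below is a minimal sketch of the kind of request that would reproduce this page, written in Python with the requests library; the endpoint URL is an assumption, and the original query string is not shown on this page, so it is left as a placeholder:

    import requests

    # Hypothetical Solr endpoint; host and core name are assumptions.
    SOLR_URL = "http://localhost:8983/solr/catalog/select"

    params = {
        "q": "...",                                # the query itself is not shown on this page
        "fq": ['type_ss:"p"', 'language_ss:"e"'],  # the two active filters above
        "rows": 20,                                # 44 results on 3 pages suggests 20 per page
        "debugQuery": "true",                      # emits the per-document explain trees below
    }
    response = requests.get(SOLR_URL, params=params).json()
    print(response["debug"]["explain"])            # scoring explanations keyed by document id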
  1. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.03
    0.026202893 = product of:
      0.07860868 = sum of:
        0.06933434 = product of:
          0.20800301 = sum of:
            0.20800301 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.20800301 = score(doc=862,freq=2.0), product of:
                0.37010026 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.043654136 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.009274333 = weight(_text_:in in 862) [ClassicSimilarity], result of:
          0.009274333 = score(doc=862,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1561842 = fieldWeight in 862, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.33333334 = coord(2/6)
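    The tree above is standard Lucene ClassicSimilarity (TF-IDF) scoring, where tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and coord(m/n) scales a score by the fraction of query clauses that matched. A minimal Python sketch that reproduces the 0.026202893 above from the constants shown in the tree (the script is illustrative, not part of the search system):

      import math

      def idf(doc_freq, max_docs):
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      QUERY_NORM, MAX_DOCS, FIELD_NORM = 0.043654136, 44218, 0.046875

      # term "3a": docFreq=24, freq=2.0 in doc 862
      idf_3a       = idf(24, MAX_DOCS)                     # ~8.478011
      query_weight = idf_3a * QUERY_NORM                   # ~0.37010026
      field_weight = math.sqrt(2.0) * idf_3a * FIELD_NORM  # tf * idf * fieldNorm, ~0.56201804
      w_3a         = query_weight * field_weight * (1 / 3) # coord(1/3), ~0.06933434

      # term "in": docFreq=30841, freq=6.0 in doc 862
      idf_in = idf(30841, MAX_DOCS)                        # ~1.3602545
      w_in   = (idf_in * QUERY_NORM) * (math.sqrt(6.0) * idf_in * FIELD_NORM)  # ~0.009274333

      print((w_3a + w_in) * (2 / 6))                       # coord(2/6), ~0.026202893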
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT on their ability to reproduce human-level comprehension and compelling text generation. Two task challenges (summary and question answering) prompt ChatGPT to produce original content (98-99%) from a single text entry and from sequential questions originally posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work is a metric and a simple grammatical set for understanding the writing mechanics of chatbots, evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
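    As a rough illustration of the detection step this abstract describes, the 2019 GPT-2 output detector is available as a RoBERTa checkpoint on Hugging Face; a minimal sketch, assuming this historical checkpoint id (which may now live under the openai-community namespace) matches the detector the authors used:

      from transformers import pipeline

      # Hosted checkpoint of OpenAI's 2019 GPT-2 output detector (assumption:
      # this is the same detector the paper scored against).
      detector = pipeline("text-classification",
                          model="roberta-base-openai-detector")

      result = detector("Passage to score for machine authorship ...")[0]
      print(result)  # e.g. {'label': 'Fake', 'score': ...}; 'Fake' flags likely machine text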
    Source
    https://arxiv.org/abs/2212.06721
  2. Wormell, I.: Multifunctional information work : new demands for training? (1995) 0.02
    0.022646975 = product of:
      0.06794092 = sum of:
        0.012365777 = weight(_text_:in in 3371) [ClassicSimilarity], result of:
          0.012365777 = score(doc=3371,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2082456 = fieldWeight in 3371, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=3371)
        0.05557514 = product of:
          0.11115028 = sum of:
            0.11115028 = weight(_text_:ausbildung in 3371) [ClassicSimilarity], result of:
              0.11115028 = score(doc=3371,freq=2.0), product of:
                0.23429902 = queryWeight, product of:
                  5.3671665 = idf(docFreq=560, maxDocs=44218)
                  0.043654136 = queryNorm
                0.47439498 = fieldWeight in 3371, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3671665 = idf(docFreq=560, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3371)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    The paper calls for an integrated approach to information science education in which disciplinary interaction is predicated on the forging of formal, informal, and sustainable links with researchers and practitioners in other fields. The modern information profession, in order to promote its creativity and strengthen its development, has to go beyond its traditional roles and functions and extend the profession's horizons. LIS education and training programmes must therefore aim to foster professionals who will one day create new jobs, not just fill the old ones.
    Source
    Paper presented at the IFLA Conference 1995 in Istanbul
    Theme
    Ausbildung
  3. Madsen, M.: Teaching bibliography, bibliographic control and bibliographical competence (2000) 0.01
    0.013893785 = product of:
      0.083362706 = sum of:
        0.083362706 = product of:
          0.16672541 = sum of:
            0.16672541 = weight(_text_:ausbildung in 5408) [ClassicSimilarity], result of:
              0.16672541 = score(doc=5408,freq=2.0), product of:
                0.23429902 = queryWeight, product of:
                  5.3671665 = idf(docFreq=560, maxDocs=44218)
                  0.043654136 = queryNorm
                0.71159244 = fieldWeight in 5408, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3671665 = idf(docFreq=560, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5408)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Theme
    Ausbildung
  4. McIlwaine, J.: Bibliographical control : self-instruction from individualised investigations (2000) 0.01
    0.013893785 = product of:
      0.083362706 = sum of:
        0.083362706 = product of:
          0.16672541 = sum of:
            0.16672541 = weight(_text_:ausbildung in 5410) [ClassicSimilarity], result of:
              0.16672541 = score(doc=5410,freq=2.0), product of:
                0.23429902 = queryWeight, product of:
                  5.3671665 = idf(docFreq=560, maxDocs=44218)
                  0.043654136 = queryNorm
                0.71159244 = fieldWeight in 5410, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3671665 = idf(docFreq=560, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5410)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Theme
    Ausbildung
  5. Snyman, R.: Bibliographic control : is the current training still relevant (2000) 0.01
    0.013893785 = product of:
      0.083362706 = sum of:
        0.083362706 = product of:
          0.16672541 = sum of:
            0.16672541 = weight(_text_:ausbildung in 5413) [ClassicSimilarity], result of:
              0.16672541 = score(doc=5413,freq=2.0), product of:
                0.23429902 = queryWeight, product of:
                  5.3671665 = idf(docFreq=560, maxDocs=44218)
                  0.043654136 = queryNorm
                0.71159244 = fieldWeight in 5413, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3671665 = idf(docFreq=560, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5413)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Theme
    Ausbildung
  6. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.01
    0.007032239 = product of:
      0.021096716 = sum of:
        0.006310384 = weight(_text_:in in 1171) [ClassicSimilarity], result of:
          0.006310384 = score(doc=1171,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10626988 = fieldWeight in 1171, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1171)
        0.014786332 = product of:
          0.029572664 = sum of:
            0.029572664 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
              0.029572664 = score(doc=1171,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.19345059 = fieldWeight in 1171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1171)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Logical rules are essential for uncovering the logical connections between relations, which can improve reasoning performance and provide interpretable results on knowledge graphs (KGs). Although there have been many efforts to mine meaningful logical rules over KGs, existing methods suffer from computationally intensive searches over the rule space and a lack of scalability to large-scale KGs. In addition, they often ignore the semantics of relations, which is crucial for uncovering logical connections. Recently, large language models (LLMs) have shown impressive performance in natural language processing and various applications, owing to their emergent abilities and generalizability. In this paper, we propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs. Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs to prompt LLMs to generate logical rules. To refine the generated rules, a rule ranking module estimates rule quality by incorporating facts from existing KGs. Finally, a rule validator harnesses the reasoning ability of LLMs to validate the logical correctness of the ranked rules through chain-of-thought reasoning. ChatRule is evaluated on four large-scale KGs with respect to different rule quality metrics and downstream tasks, showing the effectiveness and scalability of our method.
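    As a toy illustration of the ranking stage described above (estimating rule quality against facts in an existing KG), the sketch below scores a candidate rule body -> head by its support over a tiny fact set; all data and names here are hypothetical, not the paper's API:

      # Support of the rule sister(x,y) & mother(y,z) -> aunt(x,z) over toy facts.
      facts = {("alice", "sister", "bob"), ("bob", "mother", "carol"),
               ("alice", "aunt", "carol"), ("dave", "sister", "erin")}

      def support(body, head):
          """Fraction of body instantiations (x, z) whose head fact also holds."""
          hits = [(x, z)
                  for (x, r1, y) in facts if r1 == body[0]
                  for (y2, r2, z) in facts if r2 == body[1] and y2 == y]
          if not hits:
              return 0.0
          return sum((x, head, z) in facts for (x, z) in hits) / len(hits)

      print(support(("sister", "mother"), "aunt"))  # 1.0 on this toy KG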
    Date
    23.11.2023 19:07:22
  7. Allen, G.G.: Change in the catalogue in the context of library management (1976) 0.00
    0.003365538 = product of:
      0.020193228 = sum of:
        0.020193228 = weight(_text_:in in 1575) [ClassicSimilarity], result of:
          0.020193228 = score(doc=1575,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.34006363 = fieldWeight in 1575, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.125 = fieldNorm(doc=1575)
      0.16666667 = coord(1/6)
    
  8. Jouguelet, S.: Subject access and the marketplace for bibliographic information in France (1989) 0.00
    0.0029448462 = product of:
      0.017669076 = sum of:
        0.017669076 = weight(_text_:in in 998) [ClassicSimilarity], result of:
          0.017669076 = score(doc=998,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.29755569 = fieldWeight in 998, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=998)
      0.16666667 = coord(1/6)
    
    Abstract
    Also includes a description of the subject indexing systems in the national bibliographies
  9. Guizzardi, G.; Guarino, N.: Semantics, ontology and explanation (2023) 0.00
    0.0028220895 = product of:
      0.016932536 = sum of:
        0.016932536 = weight(_text_:in in 976) [ClassicSimilarity], result of:
          0.016932536 = score(doc=976,freq=20.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.28515202 = fieldWeight in 976, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=976)
      0.16666667 = coord(1/6)
    
    Abstract
    The terms 'semantics' and 'ontology' are increasingly appearing together with 'explanation', not only in the scientific literature, but also in organizational communication. However, all of these terms are also being significantly overloaded. In this paper, we discuss their strong relation under particular interpretations. Specifically, we discuss a notion of explanation termed ontological unpacking, which aims at explaining symbolic domain descriptions (conceptual models, knowledge graphs, logical specifications) by revealing their ontological commitment in terms of their assumed truthmakers, i.e., the entities in one's ontology that make the propositions in those descriptions true. To illustrate this idea, we employ an ontological theory of relations to explain (by revealing the hidden semantics of) a very simple symbolic model encoded in the standard modeling language UML. We also discuss the essential role played by ontology-driven conceptual models (resulting from this form of explanation processes) in properly supporting semantic interoperability tasks. Finally, we discuss the relation between ontological unpacking and other forms of explanation in philosophy and science, as well as in the area of Artificial Intelligence.
  10. Hausser, R.: Language and nonlanguage cognition (2021) 0.00
    0.0025241538 = product of:
      0.015144923 = sum of:
        0.015144923 = weight(_text_:in in 255) [ClassicSimilarity], result of:
          0.015144923 = score(doc=255,freq=16.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.25504774 = fieldWeight in 255, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=255)
      0.16666667 = coord(1/6)
    
    Abstract
    A basic distinction in agent-based data-driven Database Semantics (DBS) is between language and nonlanguage cognition. Language cognition transfers content between agents by means of raw data. Nonlanguage cognition maps between content and raw data inside the focus agent. Recognition applies a concept type to raw data, resulting in a concept token. In language recognition, the focus agent (hearer) takes raw language data (surfaces) produced by another agent (speaker) as input, while nonlanguage recognition takes raw nonlanguage data as input. In either case, the output is content stored in the agent's onboard short-term memory. Action adapts a concept type to a purpose, resulting in a token. In language action, the focus agent (speaker) produces language-dependent surfaces for another agent (hearer), while nonlanguage action produces intentions for a nonlanguage purpose. In either case, the output is raw action data. As long as the procedural implementation of placeholder values works properly, it is compatible with the DBS requirement of input-output equivalence between the natural prototype and the artificial reconstruction.
  11. Pejtersen, A.M.; Jensen, H.; Speck, P.; Villumsen, S.; Weber, S.: Catalogs for children : the Book House project on visualization of database retrieval and classification (1993) 0.00
    0.0024665273 = product of:
      0.014799163 = sum of:
        0.014799163 = weight(_text_:in in 6232) [ClassicSimilarity], result of:
          0.014799163 = score(doc=6232,freq=22.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.24922498 = fieldWeight in 6232, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6232)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper describes the Book House system, which is designed to support children's information retrieval in libraries as part of their education. It is a shareware program available on CD-ROM and discs, and comprises functionality for database searching as well as for the classification and storage of book information in the database. The system concept is based on an understanding of children's domain structures and their capabilities for categorizing information needs in connection with their activities in public libraries, in school libraries, or in schools. These structures are visualized in the interface using metaphors and multimedia technology. Through the use of text, images, and animation, the Book House supports children, even at a very early age, in learning by doing in an enjoyable way that builds on their previous experiences with computer games. Both words and pictures can be used for searching, which makes the system suitable for all age groups. Even children who have not yet learned to read properly can, by selecting pictures, search for and find books they would like to have read aloud. Thus, at the very beginning of their school period, they can learn to search for books on their own. For the library community itself, such a system provides an extended service that will increase the number of children's own searches and also improve the relevance, quality, and utilization of the collections in the libraries. Market research on the need for an annual indexing service for books in the Book House format is in preparation by the Danish Library Center
  12. Holley, R.P.: Classification in the USA (1985) 0.00
    0.0023797948 = product of:
      0.014278769 = sum of:
        0.014278769 = weight(_text_:in in 1730) [ClassicSimilarity], result of:
          0.014278769 = score(doc=1730,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.24046129 = fieldWeight in 1730, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.125 = fieldNorm(doc=1730)
      0.16666667 = coord(1/6)
    
  13. Jouguelet, S.: Subject indexing in France : tools and projects (1985) 0.00
    0.0023797948 = product of:
      0.014278769 = sum of:
        0.014278769 = weight(_text_:in in 1742) [ClassicSimilarity], result of:
          0.014278769 = score(doc=1742,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.24046129 = fieldWeight in 1742, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.125 = fieldNorm(doc=1742)
      0.16666667 = coord(1/6)
    
  14. Satija, M.P.: Classification and indexing in India : a state-of-the-art (1992) 0.00
    0.0023797948 = product of:
      0.014278769 = sum of:
        0.014278769 = weight(_text_:in in 1539) [ClassicSimilarity], result of:
          0.014278769 = score(doc=1539,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.24046129 = fieldWeight in 1539, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.125 = fieldNorm(doc=1539)
      0.16666667 = coord(1/6)
    
  15. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.00
    0.0021859813 = product of:
      0.013115887 = sum of:
        0.013115887 = weight(_text_:in in 5365) [ClassicSimilarity], result of:
          0.013115887 = score(doc=5365,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.22087781 = fieldWeight in 5365, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=5365)
      0.16666667 = coord(1/6)
    
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify the different approaches to and uses of Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the category structure itself or use it as a tool for processing and analyzing document corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular the refinement and improvement of search expressions and the construction of textual corpora. However, the available work shows that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
  16. Robertson, S.E.: OKAPI at TREC-1 (1994) 0.00
    0.0021034614 = product of:
      0.012620768 = sum of:
        0.012620768 = weight(_text_:in in 7953) [ClassicSimilarity], result of:
          0.012620768 = score(doc=7953,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21253976 = fieldWeight in 7953, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=7953)
      0.16666667 = coord(1/6)
    
    Abstract
    Describes the work carried out on the TREC-2 project following the results of the TREC-1 project. Experiments were conducted on the OKAPI experimental text information retrieval system, investigating a number of alternative probabilistic term weighting functions in place of the 'standard' Robertson-Sparck Jones weighting function used in TREC-1
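    The 'standard' Robertson-Sparck Jones weight mentioned here has a well-known closed form; a minimal sketch with the usual 0.5 smoothing (the example numbers are borrowed from this page's index statistics purely for illustration):

      import math

      def rsj_weight(N, n, R=0, r=0):
          """Robertson-Sparck Jones relevance weight for a term.
          N: docs in collection, n: docs containing the term,
          R: known relevant docs, r: relevant docs containing the term."""
          return math.log(((r + 0.5) * (N - n - R + r + 0.5)) /
                          ((n - r + 0.5) * (R - r + 0.5)))

      # With no relevance information (R = r = 0) this reduces to an idf-like
      # log((N - n + 0.5) / (n + 0.5)):
      print(rsj_weight(N=44218, n=3622))  # ~2.42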
  17. Wilk, D.: Problems in the use of Library of Congress Subject Headings as the basis for Hebrew subject headings in the Bar-Ilan University Library (2000) 0.00
    0.0021034614 = product of:
      0.012620768 = sum of:
        0.012620768 = weight(_text_:in in 5416) [ClassicSimilarity], result of:
          0.012620768 = score(doc=5416,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21253976 = fieldWeight in 5416, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=5416)
      0.16666667 = coord(1/6)
    
  18. Butcher, J.E.; Trotter, R.: Building on PRECIS : strategies for online subject access in the British Library (1989) 0.00
    0.0020823204 = product of:
      0.012493922 = sum of:
        0.012493922 = weight(_text_:in in 996) [ClassicSimilarity], result of:
          0.012493922 = score(doc=996,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21040362 = fieldWeight in 996, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=996)
      0.16666667 = coord(1/6)
    
  19. Scheich, P.; Skorsky, M.; Vogt, F.; Wachter, C.; Wille, R.: Conceptual data systems (1992) 0.00
    0.0020823204 = product of:
      0.012493922 = sum of:
        0.012493922 = weight(_text_:in in 3147) [ClassicSimilarity], result of:
          0.012493922 = score(doc=3147,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21040362 = fieldWeight in 3147, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=3147)
      0.16666667 = coord(1/6)
    
    Footnote
    Appears in the proceedings of the 16th annual conference of the Gesellschaft für Klassifikation, 1992, in Dortmund
  20. Goldberg, J.: Classification of religion in LCC (2000) 0.00
    0.0020823204 = product of:
      0.012493922 = sum of:
        0.012493922 = weight(_text_:in in 5402) [ClassicSimilarity], result of:
          0.012493922 = score(doc=5402,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21040362 = fieldWeight in 5402, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=5402)
      0.16666667 = coord(1/6)
    
