Search (12 results, page 1 of 1)

  • year_i:[2010 TO 2020}
  • theme_ss:"Hypertext"
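  The two active filters above use Lucene/Solr query syntax: the range year_i:[2010 TO 2020} includes 2010 but excludes 2020 (a square bracket is inclusive, a curly brace exclusive), and theme_ss:"Hypertext" is an exact match on a string field. As a rough illustration only, the request below shows how such a filtered search could be issued against a Solr-style backend; the host, port and core name are placeholders, while the two field names are taken from the filters themselves.

      import requests

      # Hedged sketch: the endpoint and core name ("catalog") are assumptions,
      # not taken from this export; only the two fq values come from the page.
      params = {
          "q": "*:*",
          "fq": ['year_i:[2010 TO 2020}', 'theme_ss:"Hypertext"'],  # active filters
          "rows": 12,             # one page of 12 results, as above
          "debugQuery": "true",   # asks Solr for per-document score explanations
      }
      response = requests.get("http://localhost:8983/solr/catalog/select", params=params)
      print(response.json()["response"]["numFound"])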
  1. Baião Salgado Silva, G.; Lima, G.Â. Borém de Oliveira: Using topic maps in establishing compatibility of semantically structured hypertext contents (2012) 0.02
    0.021239052 = product of:
      0.053097628 = sum of:
        0.005898632 = weight(_text_:a in 633) [ClassicSimilarity], result of:
          0.005898632 = score(doc=633,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.11032722 = fieldWeight in 633, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=633)
        0.047198996 = sum of:
          0.015787644 = weight(_text_:information in 633) [ClassicSimilarity], result of:
            0.015787644 = score(doc=633,freq=8.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.19395474 = fieldWeight in 633, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0390625 = fieldNorm(doc=633)
          0.031411353 = weight(_text_:22 in 633) [ClassicSimilarity], result of:
            0.031411353 = score(doc=633,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.19345059 = fieldWeight in 633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=633)
      0.4 = coord(2/5)
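    The explain tree above can be re-derived by hand. Under Lucene's ClassicSimilarity, each matched term contributes queryWeight x fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf(freq) * idf * fieldNorm with tf(freq) = sqrt(freq); the sum over the matched terms is then scaled by the coord factor. The short Python sketch below (a plain re-computation, not a call into the Lucene API) reproduces the 0.021239052 reported for doc 633 from the numbers in the tree.

      import math

      def term_score(freq, idf, query_norm, field_norm):
          query_weight = idf * query_norm                      # idf * queryNorm
          field_weight = math.sqrt(freq) * idf * field_norm    # tf * idf * fieldNorm
          return query_weight * field_weight

      query_norm = 0.046368346
      field_norm = 0.0390625

      s_a    = term_score(6.0, 1.153047,  query_norm, field_norm)  # ~0.005898632
      s_info = term_score(8.0, 1.7554779, query_norm, field_norm)  # ~0.015787644
      s_22   = term_score(2.0, 3.5018296, query_norm, field_norm)  # ~0.031411353

      score = 0.4 * (s_a + (s_info + s_22))   # coord(2/5) = 0.4
      print(score)                            # ~0.021239052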
    
    Abstract
    Considering the characteristics of hypertext systems and problems such as cognitive overload and the disorientation of users, this project studies subject hypertext documents that have undergone conceptual structuring using facets for content representation and improvement of information retrieval during navigation. The main objective was to assess the possibility of the application of topic map technology for automating the compatibilization process of these structures. For this purpose, two dissertations from the UFMG Information Science Post-Graduation Program were adopted as samples. Both dissertations had been duly analyzed and structured on the MHTX (Hypertextual Map) prototype database. The faceted structures of both dissertations, which had been represented in conceptual maps, were then converted into topic maps. It was then possible to use the merge property of the topic maps to promote the semantic interrelationship between the maps and, consequently, between the hypertextual information resources proper. The merge results were then analyzed in the light of theories dealing with the compatibilization of languages developed within the realm of information technology and librarianship from the 1960s on. The main goals accomplished were: (a) the detailed conceptualization of the merge process of the topic maps, considering the possible compatibilization levels and the applicability of this technology in the integration of faceted structures; and (b) the production of a detailed sequence of steps that may be used in the implementation of topic maps based on faceted structures.
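    The merge step this abstract relies on can be pictured with a very small example: in the topic maps model, topics that share a subject identifier are merged and their names and occurrences pooled, which is what allows two independently built faceted structures to be interrelated. The sketch below uses invented topic data (not the content of the two UFMG dissertations) purely to illustrate that rule.

      # Hedged sketch of topic-map merging: topics sharing a subject identifier
      # are collapsed into one. All identifiers and values are invented.
      def merge_topic_maps(map_a, map_b):
          merged = {}
          for topic_map in (map_a, map_b):
              for identifier, topic in topic_map.items():
                  entry = merged.setdefault(identifier, {"names": set(), "occurrences": set()})
                  entry["names"].update(topic["names"])
                  entry["occurrences"].update(topic["occurrences"])
          return merged

      map_a = {"http://example.org/subject/hypertext":
                   {"names": {"Hypertext"}, "occurrences": {"dissertation 1, chapter 3"}}}
      map_b = {"http://example.org/subject/hypertext":
                   {"names": {"Hipertexto"}, "occurrences": {"dissertation 2, chapter 2"}}}

      # The shared identifier collapses both topics into one, interlinking the resources.
      print(merge_topic_maps(map_a, map_b))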
    Date
    22.2.2013 11:39:23
    Type
    a
  2. Robinson, L.; Maguire, M.: The rhizome and the tree : changing metaphors for information organisation (2010) 0.01
    0.008234787 = product of:
      0.020586967 = sum of:
        0.009535614 = weight(_text_:a in 3957) [ClassicSimilarity], result of:
          0.009535614 = score(doc=3957,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17835285 = fieldWeight in 3957, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3957)
        0.011051352 = product of:
          0.022102704 = sum of:
            0.022102704 = weight(_text_:information in 3957) [ClassicSimilarity], result of:
              0.022102704 = score(doc=3957,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.27153665 = fieldWeight in 3957, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3957)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Purpose - The paper aims to review Deleuze and Guattari's concept of the rhizome as a model for information organisation. Design/methodology/approach - This is a critical review of selected literature. Findings - The rhizome concept is a promising model for understanding hyperlinked information services. It may be of practical value, particularly if it can be integrated with more traditional forms of information organisation. More research, conceptual and practical, is needed before this can be achieved. Research limitations/implications - The literature review is not comprehensive, and the conclusions are open-ended. Originality/value - This is the only paper to review the rhizome concept in this way.
    Type
    a
  3. Krajewski, M.: Paper machines : about cards & catalogs, 1548-1929 (2011) 0.01
    0.0075114607 = product of:
      0.018778652 = sum of:
        0.0076151006 = weight(_text_:a in 735) [ClassicSimilarity], result of:
          0.0076151006 = score(doc=735,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14243183 = fieldWeight in 735, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=735)
        0.011163551 = product of:
          0.022327103 = sum of:
            0.022327103 = weight(_text_:information in 735) [ClassicSimilarity], result of:
              0.022327103 = score(doc=735,freq=16.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.27429342 = fieldWeight in 735, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=735)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    "Krajewski draws on recent German media theory and on a rich array of European and American sources in this thought-provoking account of the index card as a tool of information management. In investigating the road from the slips of paper of the 16th century to the data processing of the 20th, Krajewski highlights its twists and turns--failures and unintended consequences, reinventions, and surprising transfers."--Ann M. Blair, Henry Charles Lea Professor of History, Harvard University, and author of Too Much to Know: Managing Scholarly Information before the Modern Age -- Ann Blair "This is a fascinating, original, continuously surprising, and meticulously researched study of the long history of the emergence of card systems for organizing not only libraries but business activities in Europe and the United States. It is particularly important for English language readers due to its European perspective and the extraordinary range of German and other resources on which it draws." --W. Boyd Rayward, Professor Emeritus, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign -- W. Boyd Rayward "Markus Krajewski has done the history of cataloguing and the history of information management a considerable service: I recommend it highly." -- Professor Tom Wilson, Editor-in-Chief, Information Research
    Footnote
    Review in: JASIST 64(2013) no.2, p.431-432 (A. Black)
    LCSH
    Information organization / History
    Series
    History and foundations of information science
    Subject
    Information organization / History
  4. Schmolz, H.: Anaphora resolution and text retrieval : a linguistic analysis of hypertexts (2015) 0.01
    0.007189882 = product of:
      0.017974705 = sum of:
        0.0068111527 = weight(_text_:a in 1172) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=1172,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 1172, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=1172)
        0.011163551 = product of:
          0.022327103 = sum of:
            0.022327103 = weight(_text_:information in 1172) [ClassicSimilarity], result of:
              0.022327103 = score(doc=1172,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.27429342 = fieldWeight in 1172, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1172)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    RSWK
    Englisch / Anapher <Syntax> / Hypertext / Information Retrieval / Korpus <Linguistik>
    Subject
    Englisch / Anapher <Syntax> / Hypertext / Information Retrieval / Korpus <Linguistik>
  5. Ferreira, R.S.; Graça Pimentel, M. de; Cristo, M.: A wikification prediction model based on the combination of latent, dyadic, and monadic features (2018) 0.01
    0.0066757645 = product of:
      0.01668941 = sum of:
        0.0127425 = weight(_text_:a in 4119) [ClassicSimilarity], result of:
          0.0127425 = score(doc=4119,freq=28.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.23833402 = fieldWeight in 4119, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4119)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 4119) [ClassicSimilarity], result of:
              0.007893822 = score(doc=4119,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 4119, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4119)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Considering repositories of web documents that are semantically linked and created in a collaborative fashion, as in the case of Wikipedia, a key problem faced by content providers is the placement of links in the articles. These links must support user navigation and provide a deeper semantic interpretation of the content. Current wikification methods exploit machine learning techniques to capture characteristics of the concepts and their associations. In previous work, we proposed a preliminary prediction model combining traditional predictors with a latent component which captures the concept graph topology by means of matrix factorization. In this work, we provide a detailed description of our method and a deeper comparison with a state-of-the-art wikification method using a sample of Wikipedia, and report a gain of up to 13% in F1 score. We also provide a comprehensive analysis of the model performance showing the importance of the latent predictor component and the attributes derived from the associations between the concepts. Moreover, we include an analysis that allows us to conclude that the model is resilient to ambiguity without including a disambiguation phase. We finally report the positive impact of selecting training samples from specific content quality classes.
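    The combination the abstract describes - a latent component from matrix factorization plus dyadic and monadic predictors - can be sketched roughly as follows: factorize the concept-link graph, use the dot product of two concepts' latent vectors as one feature, and feed it together with pairwise and per-node features into an ordinary classifier. Everything in the sketch (the use of SVD, the particular features, logistic regression, the toy data) is an assumption for illustration, not the authors' actual pipeline.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      A = rng.integers(0, 2, size=(40, 40)).astype(float)   # toy concept-to-concept link matrix

      # Latent component: a rank-k factorization of the link graph (plain SVD here,
      # standing in for the matrix factorization mentioned in the abstract).
      k = 5
      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      P, Q = U[:, :k] * s[:k], Vt[:k, :].T

      def features(i, j):
          latent = P[i] @ Q[j]                          # latent (factorized) predictor
          common = float((A[i] * A[:, j]).sum())        # dyadic: shared neighbours
          out_deg, in_deg = A[i].sum(), A[:, j].sum()   # monadic: node degrees
          return [latent, common, out_deg, in_deg]

      pairs = [(i, j) for i in range(40) for j in range(40) if i != j]
      X = np.array([features(i, j) for i, j in pairs])
      y = np.array([A[i, j] for i, j in pairs])          # 1 = a link should be placed

      model = LogisticRegression(max_iter=1000).fit(X, y)
      print(model.score(X, y))                           # fit of the toy model to its own data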
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.3, S.380-394
    Type
    a
  6. Ridi, R.: Hypertext (2018) 0.01
    0.00652538 = product of:
      0.01631345 = sum of:
        0.0067426977 = weight(_text_:a in 4537) [ClassicSimilarity], result of:
          0.0067426977 = score(doc=4537,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12611452 = fieldWeight in 4537, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4537)
        0.009570752 = product of:
          0.019141505 = sum of:
            0.019141505 = weight(_text_:information in 4537) [ClassicSimilarity], result of:
              0.019141505 = score(doc=4537,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23515764 = fieldWeight in 4537, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4537)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Hypertexts are multilinear, granular, interactive, integrable and multimedia documents describable with graph theory and composed of several information units (nodes) interconnected by links that users can freely and repeatedly traverse along many different possible paths. Hypertexts are particularly widespread in the digital environment, but they existed (and still exist) also in non-digital forms, such as paper encyclopedias and printed academic journals, both consisting of information subunits densely linked to one another. This article reviews the definitions, characteristics, components, typologies, history and applications of hypertexts, with particular attention to their theoretical and practical developments from 1945 to the present day and to their use for the organization of information and knowledge.
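    The graph-theoretic view taken here - information units as nodes, links as edges, readings as paths - can be made concrete in a few lines; the node names below are invented and serve only to show how several distinct reading paths coexist in one small hypertext.

      # Hedged sketch: a hypertext as a directed graph, enumerating the distinct
      # reading paths between two nodes. Node names are invented.
      links = {
          "intro":        ["history", "definitions"],
          "history":      ["applications"],
          "definitions":  ["typologies", "applications"],
          "typologies":   ["applications"],
          "applications": [],
      }

      def reading_paths(node, target, path=()):
          path = path + (node,)
          if node == target:
              yield path
              return
          for nxt in links.get(node, []):
              yield from reading_paths(nxt, target, path)

      for p in reading_paths("intro", "applications"):
          print(" -> ".join(p))   # three different paths through the same material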
    Type
    a
  7. Finnemann, N.O.: Hypertext configurations : genres in networked digital media (2017) 0.01
    0.006474727 = product of:
      0.016186817 = sum of:
        0.010661141 = weight(_text_:a in 3525) [ClassicSimilarity], result of:
          0.010661141 = score(doc=3525,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.19940455 = fieldWeight in 3525, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3525)
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 3525) [ClassicSimilarity], result of:
              0.011051352 = score(doc=3525,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 3525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3525)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The article presents a conceptual framework for distinguishing different sorts of heterogeneous digital materials. The hypothesis is that a wide range of heterogeneous data resources can be characterized and classified according to their particular configurations of hypertext features such as scripts, links, interactive processes, and time scalings, and that the hypertext configuration is a major, but not the sole, source of the messiness of big data. The notion of hypertext will be revalidated, placed at the center of the interpretation of networked digital media, and used in the analysis of the fast-growing amounts of heterogeneous digital collections, assemblages, and corpora. The introduction summarizes the wider background of a fast-changing data landscape.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.4, S.845-854
    Type
    a
  8. Hammwöhner, R.: Hypertext (2013) 0.01
    0.00588199 = product of:
      0.014704974 = sum of:
        0.0068111527 = weight(_text_:a in 708) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=708,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 708, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=708)
        0.007893822 = product of:
          0.015787644 = sum of:
            0.015787644 = weight(_text_:information in 708) [ClassicSimilarity], result of:
              0.015787644 = score(doc=708,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.19395474 = fieldWeight in 708, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.078125 = fieldNorm(doc=708)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Grundlagen der praktischen Information und Dokumentation. Handbuch zur Einführung in die Informationswissenschaft und -praxis. 6., völlig neu gefaßte Ausgabe. Hrsg. von R. Kuhlen, W. Semar u. D. Strauch. Begründet von Klaus Laisiepen, Ernst Lutterbeck, Karl-Heinrich Meyer-Uhlenried
    Type
    a
  9. Schmolz, H.: Anaphora resolution and text retrieval : a linguistic analysis of hypertexts (2013) 0.01
    0.00588199 = product of:
      0.014704974 = sum of:
        0.0068111527 = weight(_text_:a in 1810) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=1810,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 1810, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=1810)
        0.007893822 = product of:
          0.015787644 = sum of:
            0.015787644 = weight(_text_:information in 1810) [ClassicSimilarity], result of:
              0.015787644 = score(doc=1810,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.19395474 = fieldWeight in 1810, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1810)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    Winner of the 2014 VFI Dissertation Prize: "A convincing and thorough linguistic and quantitative analysis of a text element that has so far received little attention in information retrieval, based on a large, purpose-built hypertext corpus, including the evaluation of the author's own resolution rules for use in future IR systems."
  10. Lima, G.A.B. de Oliveira: Conceptual modeling of hypertexts : methodological proposal for the management of semantic content in digital libraries (2012) 0.01
    0.0055105956 = product of:
      0.013776489 = sum of:
        0.007078358 = weight(_text_:a in 451) [ClassicSimilarity], result of:
          0.007078358 = score(doc=451,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.13239266 = fieldWeight in 451, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=451)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 451) [ClassicSimilarity], result of:
              0.013396261 = score(doc=451,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 451, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=451)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This research continues the implementation of the Hypertext Map prototype (MHTX) proposed by Lima (2004), with the general objective of turning the MHTX into a semantic content management product that facilitates navigation in context, supported by customizable, easy-to-use software with high-end desktop/web interfaces that sustain the operation of its functions. In the long run, these studies also aim to simplify information organization, access and retrieval processes in digital libraries, making archive management by authors, content managers and information professionals possible.
    Source
    Categories, contexts and relations in knowledge organization: Proceedings of the Twelfth International ISKO Conference 6-9 August 2012, Mysore, India. Eds.: Neelameghan, A. u. K.S. Raghavan
    Type
    a
  11. Khalifa, M.; Shen, K.N.: Applying semantic networks to hypertext design : effects on knowledge structure acquisition and problem solving (2010) 0.00
    0.004725861 = product of:
      0.011814652 = sum of:
        0.007078358 = weight(_text_:a in 3708) [ClassicSimilarity], result of:
          0.007078358 = score(doc=3708,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.13239266 = fieldWeight in 3708, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3708)
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 3708) [ClassicSimilarity], result of:
              0.009472587 = score(doc=3708,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 3708, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3708)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    One of the key objectives of knowledge management is to transfer knowledge quickly and efficiently from experts to novices, who are different in terms of the structural properties of domain knowledge or knowledge structure. This study applies experts' semantic networks to hypertext navigation design and examines the potential of the resulting design, i.e., semantic hypertext, in facilitating knowledge structure acquisition and problem solving. Moreover, we argue that the level of sophistication of the knowledge structure acquired by learners is an important mediator influencing the learning outcomes (in this case, problem solving). The research model was empirically tested with a situated experiment involving 80 business professionals. The results of the empirical study provided strong support for the effectiveness of semantic hypertext in transferring knowledge structure and reported a significant full mediating effect of knowledge structure sophistication. Both theoretical and practical implications of this research are discussed.
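    The central design move - deriving a hypertext's navigation structure from an expert's semantic network - can be sketched very simply: each concept becomes a page and each labelled relation becomes a typed link on that page. The concepts and relation labels below are invented placeholders, not the expert networks used in the study.

      # Hedged sketch: turning an expert semantic network into typed hypertext links.
      # All concepts and relation labels are invented for illustration.
      semantic_network = [
          ("cash flow", "is-part-of", "financial statement"),
          ("cash flow", "influences", "liquidity"),
          ("liquidity", "is-measured-by", "current ratio"),
      ]

      def links_for(concept, network):
          """Typed outgoing links for the page about `concept`."""
          return [(relation, target) for source, relation, target in network
                  if source == concept]

      for relation, target in links_for("cash flow", semantic_network):
          print(f"cash flow --[{relation}]--> {target}")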
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.8, S.1673-1685
    Type
    a
  12. Frank, I.: Fortschritt durch Rückschritt : vom Bibliothekskatalog zum Denkwerkzeug. Eine Idee (2016) 0.00
    0.0010897844 = product of:
      0.005448922 = sum of:
        0.005448922 = weight(_text_:a in 3982) [ClassicSimilarity], result of:
          0.005448922 = score(doc=3982,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10191591 = fieldWeight in 3982, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=3982)
      0.2 = coord(1/5)
    
    Type
    a