Search (406 results, page 1 of 21)

  • theme_ss:"Automatisches Indexieren"
  1. Zhitomirsky-Geffet, M.; Prebor, G.; Bloch, O.: Improving proverb search and retrieval with a generic multidimensional ontology (2017) 0.08
    0.08482992 = product of:
      0.113106556 = sum of:
        0.01179477 = weight(_text_:a in 3320) [ClassicSimilarity], result of:
          0.01179477 = score(doc=3320,freq=14.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.20223314 = fieldWeight in 3320, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3320)
        0.09614516 = weight(_text_:70 in 3320) [ClassicSimilarity], result of:
          0.09614516 = score(doc=3320,freq=2.0), product of:
            0.27085114 = queryWeight, product of:
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.05058132 = queryNorm
            0.35497418 = fieldWeight in 3320, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.046875 = fieldNorm(doc=3320)
        0.0051666284 = product of:
          0.010333257 = sum of:
            0.010333257 = weight(_text_:information in 3320) [ClassicSimilarity], result of:
              0.010333257 = score(doc=3320,freq=2.0), product of:
                0.088794395 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.05058132 = queryNorm
                0.116372846 = fieldWeight in 3320, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3320)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
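
    The score breakdown above is standard Lucene "explain" output for the ClassicSimilarity (TF-IDF) ranking model. As a minimal sketch, the arithmetic of the _text_:70 leaf of this entry can be reproduced from the values shown; the constants below are copied from the tree, the variable names are ours:

      import math

      freq = 2.0               # termFreq of "70" in doc 3320
      doc_freq = 567           # docFreq from the idf line
      max_docs = 44218         # maxDocs from the idf line
      query_norm = 0.05058132  # queryNorm
      field_norm = 0.046875    # fieldNorm (field length normalization)

      tf = math.sqrt(freq)                             # 1.4142135
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 5.354766
      query_weight = idf * query_norm                  # 0.27085114
      field_weight = tf * idf * field_norm             # 0.35497418
      print(query_weight * field_weight)               # 0.09614516

    The per-term products are summed and multiplied by the coordination factor coord(3/4) = 0.75 (three of the four query clauses matched), which yields the 0.08482992 shown for this entry. All remaining entries follow the same scheme.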
    
    Abstract
    The goal of this research is to develop a generic ontological model for proverbs that unifies potential classification criteria and various characteristics of proverbs to enable their effective retrieval and large-scale analysis. Because proverbs can be described and indexed by multiple characteristics and criteria, we built a multidimensional ontology suitable for proverb classification. To evaluate the effectiveness of the constructed ontology for improving search and retrieval of proverbs, a large-scale user experiment was conducted with 70 users who were asked to search a proverb repository using ontology-based and free-text search interfaces. The comparative analysis of the results shows that the use of this ontology helped to substantially improve search recall, precision, user satisfaction, and efficiency, and to minimize user effort during the search process. A practical contribution of this work is an automated web-based proverb search and retrieval system which incorporates the proposed ontological scheme and an initial corpus of ontology-based annotated proverbs.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.1, S.141-153
    Type
    a
  2. Qualität in der Inhaltserschließung (2021) 0.07
    0.07479054 = product of:
      0.09972072 = sum of:
        0.004203046 = weight(_text_:a in 753) [ClassicSimilarity], result of:
          0.004203046 = score(doc=753,freq=4.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.072065435 = fieldWeight in 753, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=753)
        0.09064653 = weight(_text_:70 in 753) [ClassicSimilarity], result of:
          0.09064653 = score(doc=753,freq=4.0), product of:
            0.27085114 = queryWeight, product of:
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.05058132 = queryNorm
            0.33467287 = fieldWeight in 753, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.03125 = fieldNorm(doc=753)
        0.0048711435 = product of:
          0.009742287 = sum of:
            0.009742287 = weight(_text_:information in 753) [ClassicSimilarity], result of:
              0.009742287 = score(doc=753,freq=4.0), product of:
                0.088794395 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.05058132 = queryNorm
                0.10971737 = fieldWeight in 753, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=753)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Volume 70 of the BIPRA series (Bibliotheks- und Informationspraxis) addresses quality in subject indexing in the context of established procedures and technological innovations. Where the heterogeneous products of different methods and systems meet, minimum requirements for the quality of subject indexing must be defined. The question of quality is currently being discussed intensively in various contexts and is taken up in this volume. Authors active in the field describe, each from their own perspective, different aspects of metadata, authority data, formats, indexing procedures, and indexing policy. The volume is intended as a practical aid and as a stimulus for the discussion of quality in subject indexing.
    Content
    Contents:
    • Editorial / Michael Franke-Maier, Anna Kasprzik, Andreas Ledl und Hans Schürmann
    • Qualität in der Inhaltserschließung - Ein Überblick aus 50 Jahren (1970-2020) / Andreas Ledl
    • Fit for Purpose - Standardisierung von inhaltserschließenden Informationen durch Richtlinien für Metadaten / Joachim Laczny
    • Neue Wege und Qualitäten - Die Inhaltserschließungspolitik der Deutschen Nationalbibliothek / Ulrike Junger und Frank Scholze
    • Wissensbasen für die automatische Erschließung und ihre Qualität am Beispiel von Wikidata / Lydia Pintscher, Peter Bourgonje, Julián Moreno Schneider, Malte Ostendorff und Georg Rehm
    • Qualitätssicherung in der GND / Esther Scheven
    • Qualitätskriterien und Qualitätssicherung in der inhaltlichen Erschließung - Thesenpapier des Expertenteams RDA-Anwendungsprofil für die verbale Inhaltserschließung (ET RAVI)
    • Coli-conc - Eine Infrastruktur zur Nutzung und Erstellung von Konkordanzen / Uma Balakrishnan, Stefan Peters und Jakob Voß
    • Methoden und Metriken zur Messung von OCR-Qualität für die Kuratierung von Daten und Metadaten / Clemens Neudecker, Karolina Zaczynska, Konstantin Baierer, Georg Rehm, Mike Gerber und Julián Moreno Schneider
    • Datenqualität als Grundlage qualitativer Inhaltserschließung / Jakob Voß
    • Bemerkungen zu der Qualitätsbewertung von MARC-21-Datensätzen / Rudolf Ungváry und Péter Király
    • Named Entity Linking mit Wikidata und GND - Das Potenzial handkuratierter und strukturierter Datenquellen für die semantische Anreicherung von Volltexten / Sina Menzel, Hannes Schnaitter, Josefine Zinck, Vivien Petras, Clemens Neudecker, Kai Labusch, Elena Leitner und Georg Rehm
    • Ein Protokoll für den Datenabgleich im Web am Beispiel von OpenRefine und der Gemeinsamen Normdatei (GND) / Fabian Steeg und Adrian Pohl
    • Verbale Erschließung in Katalogen und Discovery-Systemen - Überlegungen zur Qualität / Heidrun Wiesenmüller
    • Inhaltserschließung für Discovery-Systeme gestalten / Jan Frederik Maas
    • Evaluierung von Verschlagwortung im Kontext des Information Retrievals / Christian Wartena und Koraljka Golub
    • Die Qualität der Fremddatenanreicherung FRED / Cyrus Beck
    • Quantität als Qualität - Was die Verbünde zur Verbesserung der Inhaltserschließung beitragen können / Rita Albrecht, Barbara Block, Mathias Kratzer und Peter Thiessen
    • Hybride Künstliche Intelligenz in der automatisierten Inhaltserschließung / Harald Sack
    Editor
    Franke-Maier, M., A. Kasprzik, A. Ledl u. H. Schürmann
    Footnote
    Cf.: https://www.degruyter.com/document/doi/10.1515/9783110691597/html. DOI: https://doi.org/10.1515/9783110691597. Reviewed in: Information - Wissenschaft und Praxis 73(2022) H.2-3, S.131-132 (B. Lorenz u. V. Steyer). Further review in: o-bib 9(2022) Nr.3 (Martin Völkl) [https://www.o-bib.de/bib/article/view/5843/8714].
    Series
    Bibliotheks- und Informationspraxis; 70
  3. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.07
    0.07454625 = product of:
      0.1490925 = sum of:
        0.0118880095 = weight(_text_:a in 402) [ClassicSimilarity], result of:
          0.0118880095 = score(doc=402,freq=2.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.20383182 = fieldWeight in 402, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=402)
        0.13720448 = sum of:
          0.02755535 = weight(_text_:information in 402) [ClassicSimilarity], result of:
            0.02755535 = score(doc=402,freq=2.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.3103276 = fieldWeight in 402, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.125 = fieldNorm(doc=402)
          0.10964913 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
            0.10964913 = score(doc=402,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.61904186 = fieldWeight in 402, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=402)
      0.5 = coord(2/4)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
    Type
    a
  4. Junger, U.; Scholze, F.: Neue Wege und Qualitäten : die Inhaltserschließungspolitik der Deutschen Nationalbibliothek (2021) 0.07
    0.0702139 = product of:
      0.1404278 = sum of:
        0.0044580037 = weight(_text_:a in 365) [ClassicSimilarity], result of:
          0.0044580037 = score(doc=365,freq=2.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.07643694 = fieldWeight in 365, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=365)
        0.13596979 = weight(_text_:70 in 365) [ClassicSimilarity], result of:
          0.13596979 = score(doc=365,freq=4.0), product of:
            0.27085114 = queryWeight, product of:
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.05058132 = queryNorm
            0.5020093 = fieldWeight in 365, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.046875 = fieldNorm(doc=365)
      0.5 = coord(2/4)
    
    Pages
    S.55-70
    Series
    Bibliotheks- und Informationspraxis; 70
    Type
    a
  5. Golub, K.; Lykke, M.; Tudhope, D.: Enhancing social tagging with automated keywords from the Dewey Decimal Classification (2014) 0.07
    0.06955013 = product of:
      0.0927335 = sum of:
        0.008307 = weight(_text_:a in 2918) [ClassicSimilarity], result of:
          0.008307 = score(doc=2918,freq=10.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.14243183 = fieldWeight in 2918, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2918)
        0.08012097 = weight(_text_:70 in 2918) [ClassicSimilarity], result of:
          0.08012097 = score(doc=2918,freq=2.0), product of:
            0.27085114 = queryWeight, product of:
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.05058132 = queryNorm
            0.29581183 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2918)
        0.0043055234 = product of:
          0.008611047 = sum of:
            0.008611047 = weight(_text_:information in 2918) [ClassicSimilarity], result of:
              0.008611047 = score(doc=2918,freq=2.0), product of:
                0.088794395 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.05058132 = queryNorm
                0.09697737 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2918)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Purpose - The purpose of this paper is to explore the potential of applying the Dewey Decimal Classification (DDC) as an established knowledge organization system (KOS) for enhancing social tagging, with the ultimate purpose of improving subject indexing and information retrieval. Design/methodology/approach - Over 11,000 Intute metadata records in politics were used. In total, 28 politics students were each given four tasks, in which a total of 60 resources were tagged in two different configurations: one with uncontrolled social tags only, and another with uncontrolled social tags as well as suggestions from a controlled vocabulary. The controlled vocabulary was DDC, which also comprised mappings from the Library of Congress Subject Headings. Findings - The results demonstrate the importance of controlled vocabulary suggestions for indexing and retrieval: they help produce ideas of which tags to use, make it easier to find a focus for the tagging, ensure consistency, and increase the number of access points in retrieval. The value and usefulness of the suggestions proved to depend on their quality, both in conceptual relevance to the user and in appropriateness of the terminology. Originality/value - No research has investigated the enhancement of social tagging with suggestions from the DDC, an established KOS, in a user trial comparing social tagging alone with social tagging enhanced by the suggestions. This paper is a final reflection on all aspects of the study.
    Source
    Journal of documentation. 70(2014) no.5, S.801-828
    Type
    a
  6. Hlava, M.M.K.: Automatic indexing : comparing rule-based and statistics-based indexing systems (2005) 0.07
    0.06522796 = product of:
      0.13045593 = sum of:
        0.010402009 = weight(_text_:a in 6265) [ClassicSimilarity], result of:
          0.010402009 = score(doc=6265,freq=2.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.17835285 = fieldWeight in 6265, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=6265)
        0.12005392 = sum of:
          0.024110932 = weight(_text_:information in 6265) [ClassicSimilarity], result of:
            0.024110932 = score(doc=6265,freq=2.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.27153665 = fieldWeight in 6265, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.109375 = fieldNorm(doc=6265)
          0.09594299 = weight(_text_:22 in 6265) [ClassicSimilarity], result of:
            0.09594299 = score(doc=6265,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.5416616 = fieldWeight in 6265, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=6265)
      0.5 = coord(2/4)
    
    Source
    Information outlook. 9(2005) no.8, S.22-23
    Type
    a
  7. Sack, H.: Hybride Künstliche Intelligenz in der automatisierten Inhaltserschließung (2021) 0.05
    0.05030158 = product of:
      0.10060316 = sum of:
        0.0044580037 = weight(_text_:a in 372) [ClassicSimilarity], result of:
          0.0044580037 = score(doc=372,freq=2.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.07643694 = fieldWeight in 372, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=372)
        0.09614516 = weight(_text_:70 in 372) [ClassicSimilarity], result of:
          0.09614516 = score(doc=372,freq=2.0), product of:
            0.27085114 = queryWeight, product of:
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.05058132 = queryNorm
            0.35497418 = fieldWeight in 372, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.046875 = fieldNorm(doc=372)
      0.5 = coord(2/4)
    
    Series
    Bibliotheks- und Informationspraxis; 70
    Type
    a
  8. Biebricher, N.; Fuhr, N.; Lustig, G.; Schwantner, M.; Knorz, G.: The automatic indexing system AIR/PHYS : from research to application (1988) 0.05
    0.050158218 = product of:
      0.100316435 = sum of:
        0.007430006 = weight(_text_:a in 1952) [ClassicSimilarity], result of:
          0.007430006 = score(doc=1952,freq=2.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.12739488 = fieldWeight in 1952, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=1952)
        0.092886426 = sum of:
          0.024355719 = weight(_text_:information in 1952) [ClassicSimilarity], result of:
            0.024355719 = score(doc=1952,freq=4.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.27429342 = fieldWeight in 1952, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.078125 = fieldNorm(doc=1952)
          0.06853071 = weight(_text_:22 in 1952) [ClassicSimilarity], result of:
            0.06853071 = score(doc=1952,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.38690117 = fieldWeight in 1952, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=1952)
      0.5 = coord(2/4)
    
    Date
    16. 8.1998 12:51:22
    Footnote
    Wiederabgedruckt in: Readings in information retrieval. Ed.: K. Sparck Jones u. P. Willett. San Francisco: Morgan Kaufmann 1997. S.513-517.
    Source
    Proceedings of the 11th annual conference on research and development in information retrieval. Ed.: Y. Chiaramella
    Type
    a
  9. Kutschekmanesch, S.; Lutes, B.; Moelle, K.; Thiel, U.; Tzeras, K.: Automated multilingual indexing : a synthesis of rule-based and thesaurus-based methods (1998) 0.05
    0.048130207 = product of:
      0.09626041 = sum of:
        0.010507616 = weight(_text_:a in 4157) [ClassicSimilarity], result of:
          0.010507616 = score(doc=4157,freq=4.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.18016359 = fieldWeight in 4157, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=4157)
        0.0857528 = sum of:
          0.017222093 = weight(_text_:information in 4157) [ClassicSimilarity], result of:
            0.017222093 = score(doc=4157,freq=2.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.19395474 = fieldWeight in 4157, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.078125 = fieldNorm(doc=4157)
          0.06853071 = weight(_text_:22 in 4157) [ClassicSimilarity], result of:
            0.06853071 = score(doc=4157,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.38690117 = fieldWeight in 4157, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=4157)
      0.5 = coord(2/4)
    
    Source
    Information und Märkte: 50. Deutscher Dokumentartag 1998, Kongreß der Deutschen Gesellschaft für Dokumentation e.V. (DGD), Rheinische Friedrich-Wilhelms-Universität Bonn, 22.-24. September 1998. Hrsg. von Marlies Ockenfeld u. Gerhard J. Mantwill
    Type
    a
  10. Riloff, E.: An empirical study of automated dictionary construction for information extraction in three domains (1996) 0.04
    0.04354715 = product of:
      0.0870943 = sum of:
        0.008406092 = weight(_text_:a in 6752) [ClassicSimilarity], result of:
          0.008406092 = score(doc=6752,freq=4.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.14413087 = fieldWeight in 6752, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=6752)
        0.078688204 = sum of:
          0.023863636 = weight(_text_:information in 6752) [ClassicSimilarity], result of:
            0.023863636 = score(doc=6752,freq=6.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.2687516 = fieldWeight in 6752, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0625 = fieldNorm(doc=6752)
          0.054824565 = weight(_text_:22 in 6752) [ClassicSimilarity], result of:
            0.054824565 = score(doc=6752,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.30952093 = fieldWeight in 6752, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=6752)
      0.5 = coord(2/4)
    
    Abstract
    AutoSlog is a system that addresses the knowledge-engineering bottleneck for information extraction. AutoSlog automatically creates domain-specific dictionaries for information extraction, given an appropriate training corpus. Describes experiments with AutoSlog in the terrorism, joint-ventures, and microelectronics domains. Compares the performance of AutoSlog across the three domains, discusses the lessons learned, and presents results from two experiments which demonstrate that novice users can generate effective dictionaries using AutoSlog.
    Date
    6. 3.1997 16:22:15
    Type
    a
  11. Pintscher, L.; Bourgonje, P.; Moreno Schneider, J.; Ostendorff, M.; Rehm, G.: Wissensbasen für die automatische Erschließung und ihre Qualität am Beispiel von Wikidata : die Inhaltserschließungspolitik der Deutschen Nationalbibliothek (2021) 0.04
    0.041917987 = product of:
      0.083835974 = sum of:
        0.003715003 = weight(_text_:a in 366) [ClassicSimilarity], result of:
          0.003715003 = score(doc=366,freq=2.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.06369744 = fieldWeight in 366, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=366)
        0.08012097 = weight(_text_:70 in 366) [ClassicSimilarity], result of:
          0.08012097 = score(doc=366,freq=2.0), product of:
            0.27085114 = queryWeight, product of:
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.05058132 = queryNorm
            0.29581183 = fieldWeight in 366, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.354766 = idf(docFreq=567, maxDocs=44218)
              0.0390625 = fieldNorm(doc=366)
      0.5 = coord(2/4)
    
    Series
    Bibliotheks- und Informationspraxis; 70
    Type
    a
  12. Newman, D.J.; Block, S.: Probabilistic topic decomposition of an eighteenth-century American newspaper (2006) 0.04
    0.037711255 = product of:
      0.07542251 = sum of:
        0.010402009 = weight(_text_:a in 5291) [ClassicSimilarity], result of:
          0.010402009 = score(doc=5291,freq=8.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.17835285 = fieldWeight in 5291, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5291)
        0.0650205 = sum of:
          0.017049003 = weight(_text_:information in 5291) [ClassicSimilarity], result of:
            0.017049003 = score(doc=5291,freq=4.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.1920054 = fieldWeight in 5291, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5291)
          0.047971494 = weight(_text_:22 in 5291) [ClassicSimilarity], result of:
            0.047971494 = score(doc=5291,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.2708308 = fieldWeight in 5291, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5291)
      0.5 = coord(2/4)
    
    Abstract
    We use a probabilistic mixture decomposition method to determine topics in the Pennsylvania Gazette, a major colonial U.S. newspaper, from 1728 to 1800. We assess the value of several topic decomposition techniques for historical research and compare the accuracy and efficacy of various methods. After determining the topics covered by the 80,000 articles and advertisements in the entire 18th-century run of the Gazette, we calculate how the prevalence of those topics changed over time, and give historically relevant examples of our findings. This approach reveals important information about the content of this colonial newspaper, and suggests the value of such approaches to a more complete understanding of early American print culture and society. (A toy sketch of this kind of decomposition follows this entry.)
    Date
    22. 7.2006 17:32:00
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.6, S.753-767
    Type
    a
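
    The "probabilistic mixture decomposition" of entry 12 belongs to the family of methods now known as topic models. A minimal sketch of the idea, using scikit-learn's LatentDirichletAllocation as a stand-in; the toy corpus, the number of topics, and all parameter choices are illustrative assumptions, not the authors' setup:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      # Toy stand-in for the 80,000 Gazette articles and advertisements.
      docs = [
          "ship arrived port cargo tobacco",
          "runaway servant reward subscriber",
          "assembly governor act province law",
      ]

      counts = CountVectorizer().fit_transform(docs)   # bag-of-words matrix
      lda = LatentDirichletAllocation(n_components=2, random_state=0)
      doc_topics = lda.fit_transform(counts)           # per-document topic mixtures

      # Topic prevalence over the corpus; grouping this by publication year
      # gives the prevalence-over-time curves the abstract describes.
      print(doc_topics.mean(axis=0))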
  13. Lepsky, K.; Vorhauer, J.: Lingo - ein open source System für die Automatische Indexierung deutschsprachiger Dokumente (2006) 0.04
    0.037273124 = product of:
      0.07454625 = sum of:
        0.0059440047 = weight(_text_:a in 3581) [ClassicSimilarity], result of:
          0.0059440047 = score(doc=3581,freq=2.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.10191591 = fieldWeight in 3581, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=3581)
        0.06860224 = sum of:
          0.013777675 = weight(_text_:information in 3581) [ClassicSimilarity], result of:
            0.013777675 = score(doc=3581,freq=2.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.1551638 = fieldWeight in 3581, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0625 = fieldNorm(doc=3581)
          0.054824565 = weight(_text_:22 in 3581) [ClassicSimilarity], result of:
            0.054824565 = score(doc=3581,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.30952093 = fieldWeight in 3581, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3581)
      0.5 = coord(2/4)
    
    Abstract
    Lingo is a freely available (open source) system for the automatic indexing of German-language documents. High configurability and flexibility for different deployment scenarios were the main goals in lingo's development. The paper demonstrates the benefit of linguistically based automatic indexing for information retrieval. The linguistic functionality lingo provides for improving retrieval is presented and illustrated with examples: base-form recognition, compound recognition and decomposition, word relations, lexical and algorithmic multi-word group recognition, and OCR error correction. The open architecture of lingo is described, and possible deployment scenarios and limits of application are identified. (A toy illustration of compound decomposition follows this entry.)
    Date
    24. 3.2006 12:22:02
    Type
    a
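
    Among the functions listed for entry 13, compound decomposition is the most distinctive for German. Purely as a toy illustration (lingo itself is a Ruby system whose actual lexicon and algorithm are not reproduced here), a greedy dictionary-based splitter might look like this:

      LEXICON = {"dokument", "verwaltung", "system"}
      LINKING = ("s", "es", "n", "en")   # common German linking elements

      def decompose(word, lexicon=LEXICON):
          """Return a split of `word` into lexicon entries, or None."""
          if word in lexicon:
              return [word]
          for i in range(len(word) - 1, 0, -1):
              head, rest = word[:i], word[i:]
              if head not in lexicon:
                  continue
              # Try the bare remainder first, then with a linking element removed.
              for link in ("",) + LINKING:
                  if rest.startswith(link):
                      tail = decompose(rest[len(link):], lexicon)
                      if tail:
                          return [head] + tail
          return None

      print(decompose("dokumentverwaltungssystem"))
      # -> ['dokument', 'verwaltung', 'system']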
  14. Bordoni, L.; Pazienza, M.T.: Documents automatic indexing in an environmental domain (1997) 0.04
    0.037014455 = product of:
      0.07402891 = sum of:
        0.009008404 = weight(_text_:a in 530) [ClassicSimilarity], result of:
          0.009008404 = score(doc=530,freq=6.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.1544581 = fieldWeight in 530, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=530)
        0.0650205 = sum of:
          0.017049003 = weight(_text_:information in 530) [ClassicSimilarity], result of:
            0.017049003 = score(doc=530,freq=4.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.1920054 = fieldWeight in 530, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0546875 = fieldNorm(doc=530)
          0.047971494 = weight(_text_:22 in 530) [ClassicSimilarity], result of:
            0.047971494 = score(doc=530,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.2708308 = fieldWeight in 530, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=530)
      0.5 = coord(2/4)
    
    Abstract
    Describes an application of Natural Language Processing (NLP) techniques in HIRMA (Hypertextual Information Retrieval Managed by ARIOSTO) to the problem of document indexing: the system incorporates NLP techniques to determine the subject of document texts and to associate them with relevant semantic indexes. Briefly describes the overall system, the details of its implementation on a corpus of scientific abstracts on environmental topics, and experimental evidence of the system's behaviour. Analyzes in detail an experiment designed to evaluate the system's retrieval ability in terms of recall and precision.
    Source
    International forum on information and documentation. 22(1997) no.1, S.17-28
    Type
    a
  15. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.04
    0.036187917 = product of:
      0.072375834 = sum of:
        0.0073553314 = weight(_text_:a in 5001) [ClassicSimilarity], result of:
          0.0073553314 = score(doc=5001,freq=4.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.12611452 = fieldWeight in 5001, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5001)
        0.0650205 = sum of:
          0.017049003 = weight(_text_:information in 5001) [ClassicSimilarity], result of:
            0.017049003 = score(doc=5001,freq=4.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.1920054 = fieldWeight in 5001, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5001)
          0.047971494 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
            0.047971494 = score(doc=5001,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.2708308 = fieldWeight in 5001, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5001)
      0.5 = coord(2/4)
    
    Abstract
    A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate actual searching conditions as closely as possible. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves, user and information-specialist ignorance of the subject vocabulary in use, and general language problems. Across fields it was found that the social sciences had the best retrieval rate, science the next best, and arts and humanities the lowest. Ways to enhance and supplement keyword-in-title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
    Type
    a
  16. Wolfekuhler, M.R.; Punch, W.F.: Finding salient features for personal Web pages categories (1997) 0.03
    0.034517683 = product of:
      0.069035366 = sum of:
        0.009008404 = weight(_text_:a in 2673) [ClassicSimilarity], result of:
          0.009008404 = score(doc=2673,freq=6.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.1544581 = fieldWeight in 2673, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2673)
        0.06002696 = sum of:
          0.012055466 = weight(_text_:information in 2673) [ClassicSimilarity], result of:
            0.012055466 = score(doc=2673,freq=2.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.13576832 = fieldWeight in 2673, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2673)
          0.047971494 = weight(_text_:22 in 2673) [ClassicSimilarity], result of:
            0.047971494 = score(doc=2673,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.2708308 = fieldWeight in 2673, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2673)
      0.5 = coord(2/4)
    
    Abstract
    Examines techniques that discover features in sets of pre-categorized documents, such that similar documents can be found on the WWW. Examines techniques which classify training examples with high accuracy, then explains why this is not necessarily useful. Describes a method for extracting word clusters from the raw document features. Results show that the clustering technique is successful in discovering word groups in personal Web pages which can be used to find similar information on the WWW. (A generic sketch of word clustering follows this entry.)
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue of papers from the 6th International World Wide Web conference, held 7-11 Apr 1997, Santa Clara, California
    Type
    a
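
    The abstract of entry 16 leaves the cluster-extraction step unspecified. Purely as a hedged sketch of the general idea, words can be clustered by the profile of documents they occur in; the data and the use of scikit-learn here are illustrative assumptions, not the authors' method:

      from sklearn.cluster import KMeans
      from sklearn.feature_extraction.text import CountVectorizer

      # Toy stand-in for pre-categorized personal Web pages.
      pages = [
          "hiking trails maps gear boots",
          "gear boots tents camping",
          "compiler parser grammar tokens",
          "parser tokens syntax tree",
      ]

      vec = CountVectorizer()
      X = vec.fit_transform(pages)          # docs x words count matrix
      words = vec.get_feature_names_out()
      word_profiles = X.T.toarray()         # each word described by its documents

      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(word_profiles)
      for c in range(2):
          print(c, [w for w, l in zip(words, labels) if l == c])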
  17. Renz, M.: Automatische Inhaltserschließung im Zeichen von Wissensmanagement (2001) 0.03
    0.03261398 = product of:
      0.06522796 = sum of:
        0.0052010044 = weight(_text_:a in 5671) [ClassicSimilarity], result of:
          0.0052010044 = score(doc=5671,freq=2.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.089176424 = fieldWeight in 5671, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5671)
        0.06002696 = sum of:
          0.012055466 = weight(_text_:information in 5671) [ClassicSimilarity], result of:
            0.012055466 = score(doc=5671,freq=2.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.13576832 = fieldWeight in 5671, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5671)
          0.047971494 = weight(_text_:22 in 5671) [ClassicSimilarity], result of:
            0.047971494 = score(doc=5671,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.2708308 = fieldWeight in 5671, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5671)
      0.5 = coord(2/4)
    
    Date
    22. 3.2001 13:14:48
    Source
    nfd Information - Wissenschaft und Praxis. 52(2001) H.2, S.69-78
    Type
    a
  18. Ward, M.L.: The future of the human indexer (1996) 0.03
    0.029586587 = product of:
      0.059173174 = sum of:
        0.0077214893 = weight(_text_:a in 7244) [ClassicSimilarity], result of:
          0.0077214893 = score(doc=7244,freq=6.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.13239266 = fieldWeight in 7244, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=7244)
        0.051451683 = sum of:
          0.010333257 = weight(_text_:information in 7244) [ClassicSimilarity], result of:
            0.010333257 = score(doc=7244,freq=2.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.116372846 = fieldWeight in 7244, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046875 = fieldNorm(doc=7244)
          0.041118424 = weight(_text_:22 in 7244) [ClassicSimilarity], result of:
            0.041118424 = score(doc=7244,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.23214069 = fieldWeight in 7244, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=7244)
      0.5 = coord(2/4)
    
    Abstract
    Considers the principles of indexing and the intellectual skills involved, in order to determine what would be required of automatic indexing systems to supplant or complement the human indexer. Good indexing requires: considerable prior knowledge of the literature; judgement as to what to index and at what depth; reading skills; abstracting skills; and classification skills. Illustrates these features with a detailed description of the abstracting and indexing processes involved in generating entries for the mechanical engineering database POWERLINK. Briefly assesses the possibility of replacing human indexers with specialist indexing software, with particular reference to the Object Analyzer from the InTEXT automatic indexing system, using the criteria described for human indexers. At present it is unlikely that the automatic indexer will replace the human indexer, but when more primary texts are available in electronic form, it may be a useful productivity tool for dealing with large quantities of low-grade texts (should they be wanted in the database).
    Date
    9. 2.1997 18:44:22
    Source
    Journal of librarianship and information science. 28(1996) no.4, S.217-225
    Type
    a
  19. Plaunt, C.; Norgard, B.A.: An association-based method for automatic indexing with a controlled vocabulary (1998) 0.03
    0.029382242 = product of:
      0.058764484 = sum of:
        0.012321272 = weight(_text_:a in 1794) [ClassicSimilarity], result of:
          0.012321272 = score(doc=1794,freq=22.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.21126054 = fieldWeight in 1794, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1794)
        0.046443213 = sum of:
          0.012177859 = weight(_text_:information in 1794) [ClassicSimilarity], result of:
            0.012177859 = score(doc=1794,freq=4.0), product of:
              0.088794395 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.05058132 = queryNorm
              0.13714671 = fieldWeight in 1794, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1794)
          0.034265354 = weight(_text_:22 in 1794) [ClassicSimilarity], result of:
            0.034265354 = score(doc=1794,freq=2.0), product of:
              0.17712717 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05058132 = queryNorm
              0.19345059 = fieldWeight in 1794, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1794)
      0.5 = coord(2/4)
    
    Abstract
    In this article, we describe and test a two-stage algorithm based on a lexical collocation technique which maps from the lexical clues contained in a document representation into a controlled vocabulary list of subject headings. Using a collection of 4,626 INSPEC documents, we create a 'dictionary' of associations between the lexical items contained in the titles, authors, and abstracts, and the controlled vocabulary subject headings assigned to those records by human indexers, using a likelihood ratio statistic as the measure of association. In the deployment stage, we use the dictionary to predict which of the controlled vocabulary subject headings best describe new documents when they are presented to the system. Our evaluation of this algorithm, in which we compare the automatically assigned subject headings to the subject headings assigned to the test documents by human catalogers, shows that we can obtain results comparable to, and consistent with, human cataloging. In effect we have cast this as a classic partial-match information retrieval problem. We consider the problem to be one of 'retrieving' (or assigning) the most probably 'relevant' (or correct) controlled vocabulary subject headings to a document, based on the clues contained in that document. (A sketch of the likelihood-ratio measure follows this entry.)
    Date
    11. 9.2000 19:53:22
    Source
    Journal of the American Society for Information Science. 49(1998) no.10, S.888-902
    Type
    a
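
    Entry 19's association measure is a likelihood-ratio statistic over term/subject-heading co-occurrence. A minimal sketch using Dunning's log-likelihood ratio on a 2x2 contingency table; the counts and the example pair are illustrative, and the paper's exact bookkeeping over titles, authors, and abstracts is not reproduced:

      import math

      def _ll(xs):
          # sum of x*log(x), with 0*log(0) taken as 0
          return sum(x * math.log(x) for x in xs if x > 0)

      def log_likelihood_ratio(k11, k12, k21, k22):
          """Dunning's G^2 for a term/heading contingency table:
          k11 docs with both, k12 term only, k21 heading only, k22 neither."""
          rows = [k11 + k12, k21 + k22]
          cols = [k11 + k21, k12 + k22]
          n = sum(rows)
          return 2 * (_ll([k11, k12, k21, k22]) + _ll([n]) - _ll(rows + cols))

      # e.g. a term such as "clustering" against a heading such as
      # "Information retrieval" over a 4,626-document collection
      print(log_likelihood_ratio(30, 70, 20, 4506))

    In the spirit of the abstract, strongly associated pairs form the dictionary that is then used to propose headings for new documents.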
  20. Fuhr, N.; Niewelt, B.: Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.03
    0.029186752 = product of:
      0.058373503 = sum of:
        0.010402009 = weight(_text_:a in 262) [ClassicSimilarity], result of:
          0.010402009 = score(doc=262,freq=2.0), product of:
            0.05832264 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05058132 = queryNorm
            0.17835285 = fieldWeight in 262, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=262)
        0.047971494 = product of:
          0.09594299 = sum of:
            0.09594299 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.09594299 = score(doc=262,freq=2.0), product of:
                0.17712717 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05058132 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    20.10.2000 12:22:23
    Type
    a

Types

  • a 364
  • el 32
  • x 15
  • m 14
  • s 8
  • d 1
  • p 1