Search (36 results, page 1 of 2)

  • theme_ss:"Automatisches Indexieren"
  1. Riloff, E.: An empirical study of automated dictionary construction for information extraction in three domains (1996) 0.07
    0.0687215 = product of:
      0.137443 = sum of:
        0.11336221 = weight(_text_:engineering in 6752) [ClassicSimilarity], result of:
          0.11336221 = score(doc=6752,freq=2.0), product of:
            0.23872319 = queryWeight, product of:
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.044434052 = queryNorm
            0.47486886 = fieldWeight in 6752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.0625 = fieldNorm(doc=6752)
        0.024080802 = product of:
          0.048161604 = sum of:
            0.048161604 = weight(_text_:22 in 6752) [ClassicSimilarity], result of:
              0.048161604 = score(doc=6752,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.30952093 = fieldWeight in 6752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6752)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
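     The explain tree above is standard Lucene ClassicSimilarity output: each term weight is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the per-clause sums are scaled by the coord() factors. A minimal sketch that recomputes the first term weight from the numbers shown (assuming Lucene's classic idf formula, 1 + ln(maxDocs / (docFreq + 1)), and tf = √freq):

```python
import math

def classic_similarity_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute one ClassicSimilarity term weight as shown in the explain tree:
    weight = queryWeight * fieldWeight,
    queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # Lucene's classic idf
    tf = math.sqrt(freq)                               # classic tf
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# Values from the first hit ("engineering" in doc 6752):
w = classic_similarity_weight(freq=2.0, doc_freq=557, max_docs=44218,
                              query_norm=0.044434052, field_norm=0.0625)
print(round(w, 6))  # ≈ 0.113362, matching the explain tree
```

     The displayed document score then follows as (0.11336221 + 0.024080802) × coord(2/4) = 0.0687215.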
    
    Abstract
    AutoSlog is a system that addresses the knowledge-engineering bottleneck for information extraction: given an appropriate training corpus, it automatically creates domain-specific dictionaries for information extraction. Describes experiments with AutoSlog in the terrorism, joint-ventures and microelectronics domains, compares its performance across the three domains, discusses the lessons learned, and presents results from two experiments which demonstrate that novice users can generate effective dictionaries using AutoSlog.
    Date
    6. 3.1997 16:22:15
  2. Ward, M.L.: The future of the human indexer (1996) 0.05
    0.051541127 = product of:
      0.103082255 = sum of:
        0.08502165 = weight(_text_:engineering in 7244) [ClassicSimilarity], result of:
          0.08502165 = score(doc=7244,freq=2.0), product of:
            0.23872319 = queryWeight, product of:
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.044434052 = queryNorm
            0.35615164 = fieldWeight in 7244, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.046875 = fieldNorm(doc=7244)
        0.0180606 = product of:
          0.0361212 = sum of:
            0.0361212 = weight(_text_:22 in 7244) [ClassicSimilarity], result of:
              0.0361212 = score(doc=7244,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.23214069 = fieldWeight in 7244, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7244)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Considers the principles of indexing and the intellectual skills involved, in order to determine what would be required of automatic indexing systems if they were to supplant or complement the human indexer. Good indexing requires: considerable prior knowledge of the literature; judgement as to what to index and at what depth; reading skills; abstracting skills; and classification skills. Illustrates these features with a detailed description of the abstracting and indexing processes involved in generating entries for the mechanical engineering database POWERLINK. Briefly assesses the possibility of replacing human indexers with specialist indexing software, with particular reference to the Object Analyzer from the InTEXT automatic indexing system, using the criteria described for human indexers. At present it is unlikely that the automatic indexer will replace the human indexer, but once more primary texts are available in electronic form it may be a useful productivity tool for dealing with large quantities of low-grade texts (should they be wanted in the database).
    Date
    9. 2.1997 18:44:22
  3. Nohr, H.: Grundlagen der automatischen Indexierung : ein Lehrbuch (2003) 0.05
    0.04681192 = product of:
      0.18724768 = sum of:
        0.18724768 = sum of:
          0.16316688 = weight(_text_:lehrbuch in 1767) [ClassicSimilarity], result of:
            0.16316688 = score(doc=1767,freq=6.0), product of:
              0.30775926 = queryWeight, product of:
                6.926203 = idf(docFreq=117, maxDocs=44218)
                0.044434052 = queryNorm
              0.530177 = fieldWeight in 1767, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                6.926203 = idf(docFreq=117, maxDocs=44218)
                0.03125 = fieldNorm(doc=1767)
          0.024080802 = weight(_text_:22 in 1767) [ClassicSimilarity], result of:
            0.024080802 = score(doc=1767,freq=2.0), product of:
              0.15560047 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044434052 = queryNorm
              0.15476047 = fieldWeight in 1767, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1767)
      0.25 = coord(1/4)
    
    Date
    22. 6.2009 12:46:51
    Footnote
    Rez. in: nfd 54(2003) H.5, S.314 (W. Ratzek): "To extract decision-relevant data from the ever-growing flood of more or less relevant documents, companies, public administrations and information services must develop, deploy and maintain effective and efficient filtering systems. Holger Nohr's textbook offers the first fundamental introduction to the topic of automatic indexing. As the introduction puts it: "How you gather, manage, and use information will determine whether you win or lose" (Bill Gates). The first chapter, "Introduction", focuses on the basics: it describes the connections between document management systems, information retrieval and indexing for planning, decision and innovation processes, in both profit and non-profit organisations. At the end of the introductory chapter Nohr takes up the debate between intellectual and automatic indexing, leading into the second chapter, "Automatic Indexing". Here the author gives an overview of, among other things, problems of automatic language processing and indexing, and various automatic indexing methods, e.g. simple keyword extraction / full-text inversion, statistical methods, and pattern-matching methods. Nohr then treats the methods of automatic indexing in depth, with many examples, in the extensive third chapter. The fourth chapter, "Keyphrase Extraction", has the status of a passe-partout: "An intermediate stage on the way from automatic indexing to the automatic generation of textual summaries (Automatic Text Summarization) is formed by approaches that extract key phrases from documents (Keyphrase Extraction). The boundaries between automatic indexing methods and those of text summarization are fluid." (p. 91) Using NCR's Extractor/Copernic Summarizer as an example, Nohr describes how this works.
    In the fifth chapter, "Information Extraction", Nohr addresses a problem that deserves even stronger emphasis in the field: "The steadily rising number of electronic documents makes it desirable not only to index these documents automatically but also to extract the relevant information from them automatically, for example in order to transfer it into operational information systems for further processing or analysis." (p. 103) The sixth chapter treats "indexing and retrieval methods" as mutually dependent procedures, focusing on relevance ranking and relevance feedback as well as the use of computational-linguistic methods in searching. "Evaluating automatic indexing" sets the thematic end point, dealing above all with the quality of an indexing run and with the common retrieval measures used in retrieval tests. It is also worth highlighting that each chapter opens with a statement of learning objectives, and that control questions for each chapter are provided at the back of the book. The numerous practical examples, a list of abbreviations and a subject index increase the book's usefulness. Reading it improved this reviewer's understanding of how the LIS toolkit, business informatics (data warehousing in particular) and artificial intelligence fit together. "Grundlagen der automatischen Indexierung" should be required reading in library science programmes as well. Holger Nohr's textbook is also suitable for the LIS professional who wants to refresh a more or less solid knowledge of automatic indexing quickly, clearly and informatively."
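     Among the basic methods the review lists is simple keyword extraction / full-text inversion (Volltextinvertierung). A minimal sketch of such an inverted index, mapping each word to the set of documents containing it (a toy illustration, not the book's implementation):

```python
from collections import defaultdict

def invert(docs):
    """Full-text inversion: map every word to the ids of the documents
    in which it occurs."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

idx = invert({1: "automatische Indexierung", 2: "intellektuelle Indexierung"})
print(sorted(idx["indexierung"]))  # [1, 2]
```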
  4. Faraj, N.: Analyse d'une methode d'indexation automatique basée sur une analyse syntaxique de texte (1996) 0.03
    0.028340552 = product of:
      0.11336221 = sum of:
        0.11336221 = weight(_text_:engineering in 685) [ClassicSimilarity], result of:
          0.11336221 = score(doc=685,freq=2.0), product of:
            0.23872319 = queryWeight, product of:
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.044434052 = queryNorm
            0.47486886 = fieldWeight in 685, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.0625 = fieldNorm(doc=685)
      0.25 = coord(1/4)
    
    Abstract
    Evaluates an automatic indexing method based on syntactic text analysis combined with statistical analysis, testing many combinations of term categories and weighting methods. The experiment, conducted on a software engineering corpus, shows a systematic improvement when syntactic term phrases are used as index terms rather than individual words alone.
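     The comparison in the abstract — single words versus term phrases as index terms — can be illustrated with a toy sketch. The adjacent-word-pair heuristic below is my own crude stand-in for the paper's syntactic phrase analysis, not its actual method:

```python
import re
from collections import Counter

def index_terms(text, use_phrases=False):
    """Count candidate index terms: single words, or (as a rough proxy for
    syntactic term phrases) adjacent word pairs."""
    words = re.findall(r"[a-z]+", text.lower())
    if not use_phrases:
        return Counter(words)
    return Counter(" ".join(pair) for pair in zip(words, words[1:]))

doc = "software engineering improves software quality"
print(index_terms(doc)["software"])                                # 2
print(index_terms(doc, use_phrases=True)["software engineering"])  # 1
```

     Phrase terms like "software engineering" are more specific than either word alone, which is the intuition behind the improvement the paper reports.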
  5. Witschel, H.F.: Terminology extraction and automatic indexing : comparison and qualitative evaluation of methods (2005) 0.03
    0.025049746 = product of:
      0.100198984 = sum of:
        0.100198984 = weight(_text_:engineering in 1842) [ClassicSimilarity], result of:
          0.100198984 = score(doc=1842,freq=4.0), product of:
            0.23872319 = queryWeight, product of:
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.044434052 = queryNorm
            0.41972876 = fieldWeight in 1842, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1842)
      0.25 = coord(1/4)
    
    Abstract
    Many terminology engineering processes involve the task of automatic terminology extraction: before the terminology of a given domain can be modelled, organised or standardised, important concepts (or terms) of this domain have to be identified and fed into terminological databases. These serve in further steps as a starting point for compiling dictionaries, thesauri or maybe even terminological ontologies for the domain. For the extraction of the initial concepts, extraction methods are needed that operate on specialised language texts. On the other hand, many machine learning or information retrieval applications require automatic indexing techniques. In machine learning applications concerned with the automatic clustering or classification of texts, feature vectors are often needed that describe the contents of a given text briefly but meaningfully. These feature vectors typically consist of a fairly small set of index terms together with weights indicating their importance. Short but meaningful descriptions of document contents as provided by good index terms are also useful to humans: some knowledge management applications (e.g. topic maps) use them as a set of basic concepts (topics). The author believes that the tasks of terminology extraction and automatic indexing have much in common and can thus benefit from the same set of basic algorithms. It is the goal of this paper to outline some methods that may be used in both contexts, but also to find the discriminating factors between the two tasks that call for the variation of parameters or the application of different techniques. The discussion of these methods is based on statistical, syntactical and especially morphological properties of (index) terms. The paper concludes with qualitative and quantitative results comparing statistical and morphological methods.
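     The short feature vectors described in the abstract — a small set of index terms with importance weights — can be sketched with plain tf-idf weighting. This is a generic illustration under my own assumptions (simple tf × smoothed idf), not the specific statistical or morphological methods the paper compares:

```python
import math
from collections import Counter

def top_index_terms(doc_tokens, corpus, k=2):
    """Keep only the k highest-weighted index terms of a document,
    weighted by tf * idf over the given corpus."""
    n = len(corpus)
    df = Counter()                       # document frequency per term
    for d in corpus:
        df.update(set(d))
    tf = Counter(doc_tokens)
    weights = {t: tf[t] * math.log((n + 1) / (df[t] + 1)) for t in tf}
    return sorted(weights, key=weights.get, reverse=True)[:k]

corpus = [["terminology", "extraction", "the"],
          ["automatic", "indexing", "the"],
          ["index", "terms", "the"]]
top = top_index_terms(corpus[0], corpus, k=2)
print(top)  # the ubiquitous "the" gets weight 0 and drops out
```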
    Source
    TKE 2005: Proc. of Terminology and Knowledge Engineering (TKE) 2005
  6. Scherer, B.: Automatische Indexierung und ihre Anwendung im DFG-Projekt "Gemeinsames Portal für Bibliotheken, Archive und Museen (BAM)" (2003) 0.02
    0.017712845 = product of:
      0.07085138 = sum of:
        0.07085138 = weight(_text_:engineering in 4283) [ClassicSimilarity], result of:
          0.07085138 = score(doc=4283,freq=2.0), product of:
            0.23872319 = queryWeight, product of:
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.044434052 = queryNorm
            0.29679304 = fieldWeight in 4283, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4283)
      0.25 = coord(1/4)
    
    Footnote
    Master's thesis in the Information Engineering programme, submitted for the degree of Master of Science in Information Science.
  7. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.01
    0.012040401 = product of:
      0.048161604 = sum of:
        0.048161604 = product of:
          0.09632321 = sum of:
            0.09632321 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.09632321 = score(doc=402,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  8. Fuhr, N.; Niewelt, B.: Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.01
    0.010535351 = product of:
      0.042141404 = sum of:
        0.042141404 = product of:
          0.08428281 = sum of:
            0.08428281 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.08428281 = score(doc=262,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    20.10.2000 12:22:23
  9. Hlava, M.M.K.: Automatic indexing : comparing rule-based and statistics-based indexing systems (2005) 0.01
    0.010535351 = product of:
      0.042141404 = sum of:
        0.042141404 = product of:
          0.08428281 = sum of:
            0.08428281 = weight(_text_:22 in 6265) [ClassicSimilarity], result of:
              0.08428281 = score(doc=6265,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.5416616 = fieldWeight in 6265, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6265)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information outlook. 9(2005) no.8, S.22-23
  10. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.01
    0.0090303 = product of:
      0.0361212 = sum of:
        0.0361212 = product of:
          0.0722424 = sum of:
            0.0722424 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.0722424 = score(doc=58,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    14. 6.2015 22:12:44
  11. Hauer, M.: Automatische Indexierung (2000) 0.01
    0.0090303 = product of:
      0.0361212 = sum of:
        0.0361212 = product of:
          0.0722424 = sum of:
            0.0722424 = weight(_text_:22 in 5887) [ClassicSimilarity], result of:
              0.0722424 = score(doc=5887,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.46428138 = fieldWeight in 5887, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5887)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Wissen in Aktion: Wege des Knowledge Managements. 22. Online-Tagung der DGI, Frankfurt am Main, 2.-4.5.2000. Proceedings. Hrsg.: R. Schmidt
  12. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.01
    0.0090303 = product of:
      0.0361212 = sum of:
        0.0361212 = product of:
          0.0722424 = sum of:
            0.0722424 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.0722424 = score(doc=2051,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    14. 6.2015 22:12:56
  13. Hauer, M.: Tiefenindexierung im Bibliothekskatalog : 17 Jahre intelligentCAPTURE (2019) 0.01
    0.0090303 = product of:
      0.0361212 = sum of:
        0.0361212 = product of:
          0.0722424 = sum of:
            0.0722424 = weight(_text_:22 in 5629) [ClassicSimilarity], result of:
              0.0722424 = score(doc=5629,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.46428138 = fieldWeight in 5629, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5629)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    B.I.T.online. 22(2019) H.2, S.163-166
  14. Biebricher, N.; Fuhr, N.; Lustig, G.; Schwantner, M.; Knorz, G.: The automatic indexing system AIR/PHYS : from research to application (1988) 0.01
    0.007525251 = product of:
      0.030101003 = sum of:
        0.030101003 = product of:
          0.060202006 = sum of:
            0.060202006 = weight(_text_:22 in 1952) [ClassicSimilarity], result of:
              0.060202006 = score(doc=1952,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.38690117 = fieldWeight in 1952, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1952)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    16. 8.1998 12:51:22
  15. Kutschekmanesch, S.; Lutes, B.; Moelle, K.; Thiel, U.; Tzeras, K.: Automated multilingual indexing : a synthesis of rule-based and thesaurus-based methods (1998) 0.01
    0.007525251 = product of:
      0.030101003 = sum of:
        0.030101003 = product of:
          0.060202006 = sum of:
            0.060202006 = weight(_text_:22 in 4157) [ClassicSimilarity], result of:
              0.060202006 = score(doc=4157,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.38690117 = fieldWeight in 4157, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4157)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information und Märkte: 50. Deutscher Dokumentartag 1998, Kongreß der Deutschen Gesellschaft für Dokumentation e.V. (DGD), Rheinische Friedrich-Wilhelms-Universität Bonn, 22.-24. September 1998. Hrsg. von Marlies Ockenfeld u. Gerhard J. Mantwill
  16. Tsareva, P.V.: Algoritmy dlya raspoznavaniya pozitivnykh i negativnykh vkhozdenii deskriptorov v tekst i protsedura avtomaticheskoi klassifikatsii tekstov (1999) 0.01
    0.007525251 = product of:
      0.030101003 = sum of:
        0.030101003 = product of:
          0.060202006 = sum of:
            0.060202006 = weight(_text_:22 in 374) [ClassicSimilarity], result of:
              0.060202006 = score(doc=374,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.38690117 = fieldWeight in 374, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=374)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1. 4.2002 10:22:41
  17. Stankovic, R. et al.: Indexing of textual databases based on lexical resources : a case study for Serbian (2016) 0.01
    0.007525251 = product of:
      0.030101003 = sum of:
        0.030101003 = product of:
          0.060202006 = sum of:
            0.060202006 = weight(_text_:22 in 2759) [ClassicSimilarity], result of:
              0.060202006 = score(doc=2759,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.38690117 = fieldWeight in 2759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2759)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1. 2.2016 18:25:22
  18. Tsujii, J.-I.: Automatic acquisition of semantic collocation from corpora (1995) 0.01
    0.0060202004 = product of:
      0.024080802 = sum of:
        0.024080802 = product of:
          0.048161604 = sum of:
            0.048161604 = weight(_text_:22 in 4709) [ClassicSimilarity], result of:
              0.048161604 = score(doc=4709,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.30952093 = fieldWeight in 4709, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4709)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    31. 7.1996 9:22:19
  19. Lepsky, K.; Vorhauer, J.: Lingo - ein open source System für die Automatische Indexierung deutschsprachiger Dokumente (2006) 0.01
    0.0060202004 = product of:
      0.024080802 = sum of:
        0.024080802 = product of:
          0.048161604 = sum of:
            0.048161604 = weight(_text_:22 in 3581) [ClassicSimilarity], result of:
              0.048161604 = score(doc=3581,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.30952093 = fieldWeight in 3581, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3581)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    24. 3.2006 12:22:02
  20. Probst, M.; Mittelbach, J.: Maschinelle Indexierung in der Sacherschließung wissenschaftlicher Bibliotheken (2006) 0.01
    0.0060202004 = product of:
      0.024080802 = sum of:
        0.024080802 = product of:
          0.048161604 = sum of:
            0.048161604 = weight(_text_:22 in 1755) [ClassicSimilarity], result of:
              0.048161604 = score(doc=1755,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.30952093 = fieldWeight in 1755, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1755)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 3.2008 12:35:19