Search (17 results, page 1 of 1)

  • × theme_ss:"Automatisches Indexieren"
  • × type_ss:"a"
  • × year_i:[2000 TO 2010}
  1. Rasmussen, E.M.: Indexing and retrieval for the Web (2002) 0.02
    0.022696622 = product of:
      0.09078649 = sum of:
        0.045056276 = weight(_text_:wide in 4285) [ClassicSimilarity], result of:
          0.045056276 = score(doc=4285,freq=8.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.342674 = fieldWeight in 4285, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4285)
        0.045730207 = weight(_text_:web in 4285) [ClassicSimilarity], result of:
          0.045730207 = score(doc=4285,freq=28.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.47219574 = fieldWeight in 4285, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4285)
      0.25 = coord(2/8)
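    A note on the score breakdown: the tree above is standard Lucene ClassicSimilarity explain output. A minimal sketch (Python; the helper names are ours for illustration, not Lucene's API) recomputes the displayed document score from the values in the tree:

      import math

      def idf(doc_freq, max_docs):
          # Classic Lucene idf: 1 + ln(maxDocs / (docFreq + 1))
          return 1 + math.log(max_docs / (doc_freq + 1))  # 4.4307585 for 1430/44218

      def term_score(freq, idf_value, query_norm, field_norm):
          # Recompute one weight(...) node of the explain tree.
          tf = math.sqrt(freq)                        # 2.828427 for freq=8.0
          query_weight = idf_value * query_norm       # 0.13148437 for _text_:wide
          field_weight = tf * idf_value * field_norm  # 0.342674
          return query_weight * field_weight          # 0.045056276

      # Values copied from the explain tree above (doc 4285):
      w_wide = term_score(8.0, 4.4307585, 0.029675366, 0.02734375)
      w_web = term_score(28.0, 3.2635105, 0.029675366, 0.02734375)

      # coord(2/8): only two of the eight query terms occur in this document.
      print((w_wide + w_web) * 2 / 8)  # ~0.022696622, the score shown above

    The same arithmetic, with different freq, idf, and fieldNorm values, accounts for every score in this result list.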
    
    Abstract
    The introduction and growth of the World Wide Web (WWW, or Web) have resulted in a profound change in the way individuals and organizations access information. In terms of volume, nature, and accessibility, the characteristics of electronic information are significantly different from those of even five or six years ago. Control of, and access to, this flood of information rely heavily on automated techniques for indexing and retrieval. According to Gudivada, Raghavan, Grosky, and Kasanagottu (1997, p. 58), "The ability to search and retrieve information from the Web efficiently and effectively is an enabling technology for realizing its full potential." Almost 93 percent of those surveyed consider the Web an "indispensable" Internet technology, second only to e-mail (Graphics, Visualization & Usability Center, 1998). Although there are other ways of locating information on the Web (browsing or following directory structures), 85 percent of users identify Web pages by means of a search engine (Graphics, Visualization & Usability Center, 1998). A more recent study conducted by the Stanford Institute for the Quantitative Study of Society confirms the finding that searching for information is second only to e-mail as an Internet activity (Nie & Erbring, 2000, online). In fact, Nie and Erbring conclude, "... the Internet today is a giant public library with a decidedly commercial tilt. The most widespread use of the Internet today is as an information search utility for products, travel, hobbies, and general information. Virtually all users interviewed responded that they engaged in one or more of these information gathering activities."
    Techniques for automated indexing and information retrieval (IR) have been developed, tested, and refined over the past 40 years, and are well documented (see, for example, Agosti & Smeaton, 1996; Baeza-Yates & Ribeiro-Neto, 1999a; Frakes & Baeza-Yates, 1992; Korfhage, 1997; Salton, 1989; Witten, Moffat, & Bell, 1999). With the introduction of the Web, and the capability to index and retrieve via search engines, these techniques have been extended to a new environment. They have been adopted, altered, and in some cases extended to include new methods. "In short, search engines are indispensable for searching the Web, they employ a variety of relatively advanced IR techniques, and there are some peculiar aspects of search engines that make searching the Web different than more conventional information retrieval" (Gordon & Pathak, 1999, p. 145). The environment for information retrieval on the World Wide Web differs from that of "conventional" information retrieval in a number of fundamental ways. The collection is very large and changes continuously, with pages being added, deleted, and altered. Wide variability in the size, structure, focus, quality, and usefulness of documents makes Web documents much more heterogeneous than a typical electronic document collection. The wide variety of document types includes images, video, audio, and scripts, as well as many different document languages. Duplication of documents and sites is common. Documents are interconnected through networks of hyperlinks. Because of the size and dynamic nature of the Web, preprocessing all documents requires considerable resources and is often not feasible, certainly not on the frequent basis required to ensure currency. Query length is usually much shorter than in other environments (only a few words), and user behavior differs from that in other environments. These differences make the Web a novel environment for information retrieval (Baeza-Yates & Ribeiro-Neto, 1999b; Bharat & Henzinger, 1998; Huang, 2000).
  2. Nohr, H.: Theorie des Information Retrieval II : Automatische Indexierung (2004) 0.01
    0.013324158 = product of:
      0.053296633 = sum of:
        0.016391123 = weight(_text_:data in 8) [ClassicSimilarity], result of:
          0.016391123 = score(doc=8,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.17468026 = fieldWeight in 8, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=8)
        0.036905512 = product of:
          0.073811024 = sum of:
            0.073811024 = weight(_text_:mining in 8) [ClassicSimilarity], result of:
              0.073811024 = score(doc=8,freq=4.0), product of:
                0.16744171 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.029675366 = queryNorm
                0.44081625 = fieldWeight in 8, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=8)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    A large share of the information in organizations (by some estimates up to 80%) resides in unstructured documents. In the past, solutions were developed for managing structured information; the same must now be achieved for unstructured information. Alongside data mining methods for data analysis, there are attempts to apply text mining (Lit. 06) to text analysis. Searching a repository for specific documents requires effective content recognition and labelling, i.e., assigning documents to subject areas or storing suitable index terms as metadata. To this end, document contents must be represented, that is, indexed or classified. Document analysis also serves to control the flow of information and documents; the goal is to trigger a "workflow on mail receipt". Based on recognized features, document analysis can automatically route incoming mail to the responsible clerk or the competent organizational unit within the company (invoices to accounting, orders to sales). Document analysis is also needed when employees are to receive relevant documents automatically through a personal information filter. For reasons of system integration, indexing solutions are built into the feature set of DMS and workflow products. Fig. 1 shows the architecture of such systems, with the indexing or classification function at the centre of the application, where it handles both the representation of documents (metadata) and subsequent retrieval.
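    The mail-routing step described in this abstract amounts to feature-based rule matching. A minimal sketch of the idea (Python; the trigger terms and department names are invented for illustration and are not taken from the article):

      # Hypothetical routing rules: sets of trigger terms mapped to departments.
      ROUTING_RULES = [
          ({"rechnung", "invoice", "betrag"}, "accounting"),
          ({"auftrag", "bestellung", "order"}, "sales"),
      ]

      def route_document(text, default="manual triage"):
          # Route a document to the first department whose triggers it mentions.
          tokens = set(text.lower().split())
          for triggers, department in ROUTING_RULES:
              if tokens & triggers:
                  return department
          return default

      print(route_document("Rechnung Nr. 4711, Betrag 250 EUR"))  # accounting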
  3. Goller, C.; Löning, J.; Will, T.; Wolff, W.: Automatic document classification : a thorough evaluation of various methods (2000) 0.00
    0.004827458 = product of:
      0.038619664 = sum of:
        0.038619664 = weight(_text_:wide in 5480) [ClassicSimilarity], result of:
          0.038619664 = score(doc=5480,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.29372054 = fieldWeight in 5480, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=5480)
      0.125 = coord(1/8)
    
    Abstract
    (Automatic) document classification is generally defined as content-based assignment of one or more predefined categories to documents. Usually, machine learning, statistical pattern recognition, or neural network approaches are used to construct classifiers automatically. In this paper we thoroughly evaluate a wide variety of these methods on a document classification task for German text. We evaluate different feature construction and selection methods and various classifiers. Our main results are: (1) feature selection is necessary not only to reduce learning and classification time, but also to avoid overfitting (even for Support Vector Machines); (2) surprisingly, our morphological analysis does not improve classification quality compared to a letter 5-gram approach; (3) Support Vector Machines are significantly better than all other classification methods
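    Result (2) above compares morphological analysis against letter 5-grams. A minimal sketch of character n-gram feature extraction (Python; illustrative only, not the authors' code):

      from collections import Counter

      def letter_ngrams(text, n=5):
          # Character n-gram counts; padding captures word-boundary grams.
          padded = " " + text.lower() + " "
          return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))

      print(letter_ngrams("Automatische Indexierung").most_common(3))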
  4. Snajder, J.; Dalbelo Basic, B.D.; Tadic, M.: Automatic acquisition of inflectional lexica for morphological normalisation (2008) 0.00
    0.003914421 = product of:
      0.031315368 = sum of:
        0.031315368 = product of:
          0.062630735 = sum of:
            0.062630735 = weight(_text_:mining in 2910) [ClassicSimilarity], result of:
              0.062630735 = score(doc=2910,freq=2.0), product of:
                0.16744171 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.029675366 = queryNorm
                0.37404498 = fieldWeight in 2910, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2910)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    Due to natural language morphology, words can take on various morphological forms. Morphological normalisation, often used in information retrieval and text mining systems, conflates morphological variants of a word to a single representative form. In this paper, we describe an approach to lexicon-based inflectional normalisation. This approach sits between stemming and lemmatisation and is suitable for morphological normalisation of inflectionally complex languages. To eliminate the immense effort required to compile the lexicon by hand, we focus on the problem of automatically acquiring an inflectional morphological lexicon from raw corpora. We propose a convenient and highly expressive morphology representation formalism on which the acquisition procedure is based. Our approach is applied to the morphologically complex Croatian language, but it should be equally applicable to other languages of similar morphological complexity. Experimental results show that our approach can be used to acquire a lexicon whose linguistic quality allows for rather good normalisation performance.
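    A minimal sketch of lexicon-based inflectional normalisation (Python; the tiny lexicon is invented for illustration, whereas the paper acquires such lexica automatically from raw corpora):

      # Each inflected form maps to one representative form.
      LEXICON = {
          "knjiga": "knjiga", "knjige": "knjiga",  # Croatian 'book'
          "gradovi": "grad", "gradova": "grad",    # Croatian 'city'
      }

      def normalise(token):
          # Fall back to the surface form for out-of-lexicon tokens.
          return LEXICON.get(token.lower(), token)

      print([normalise(t) for t in "knjige gradova".split()])  # ['knjiga', 'grad']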
  5. Hauer, M.: Silicon Valley Vorarlberg : Maschinelle Indexierung und semantisches Retrieval verbessert den Katalog der Vorarlberger Landesbibliothek (2004) 0.00
    0.0037801738 = product of:
      0.03024139 = sum of:
        0.03024139 = weight(_text_:web in 2489) [ClassicSimilarity], result of:
          0.03024139 = score(doc=2489,freq=6.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.3122631 = fieldWeight in 2489, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2489)
      0.125 = coord(1/8)
    
    Abstract
    Ten years of the Internet have changed the world around libraries considerably. The Web OPAC was one answer from libraries, but is a Web OPAC still enough in the age of the Internet? Apart from the Web front end, it is still the old catalogue. Roughly 90% of all user searches in libraries are topic searches, and a share of these searches returns nothing. It is easy to measure that zero items were found, and the reasons have been investigated again and again: plural instead of singular forms, overly specific search terms, spelling or operating errors. Too little studied, however, are the searches that do not end in a loan, since in many of those cases a retrieval failure can be assumed as well. Finally, according to the assessment of many librarians, 80% of borrowed books are read no further than the table of contents (except in reference libraries) and are returned only after weeks. A politician, in today's jargon, would call this "a communication problem"; a controller, an insufficient use of capital. More and more students and researchers take the easier path: their exchange of knowledge increasingly takes place elsewhere. Libraries (as a function) are indispensable for scholarly communication. The task is therefore to find, and actually take, paths that bring the treasures of libraries (as institutions) to their audience more efficiently. The use of information retrieval technology, new indexing methods, and new content are approaches to this. The existing consortium structures and dependencies, however, have by no means fostered the innovative project presented here. As innovation research shows, innovation almost always arises at the periphery: it began in Bregenz.
  6. Hlava, M.M.K.: Automatic indexing : comparing rule-based and statistics-based indexing systems (2005) 0.00
    0.003518027 = product of:
      0.028144216 = sum of:
        0.028144216 = product of:
          0.056288432 = sum of:
            0.056288432 = weight(_text_:22 in 6265) [ClassicSimilarity], result of:
              0.056288432 = score(doc=6265,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.5416616 = fieldWeight in 6265, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6265)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Source
    Information outlook. 9(2005) no.8, S.22-23
  7. Hauer, M.: Neue Qualitäten in Bibliotheken : Durch Content-Ergänzung, maschinelle Indexierung und modernes Information Retrieval können Recherchen in Bibliothekskatalogen deutlich verbessert werden (2004) 0.00
    0.003491975 = product of:
      0.0279358 = sum of:
        0.0279358 = weight(_text_:web in 886) [ClassicSimilarity], result of:
          0.0279358 = score(doc=886,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.2884563 = fieldWeight in 886, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=886)
      0.125 = coord(1/8)
    
    Abstract
    Dandelon.com has been in operation since spring 2004 as a new, open, international science portal. Initial retrieval tests attest to clearly better search results than in conventional OPACs or union catalogues. Its data come from intelligentCAPTURE and from library catalogues. intelligentCAPTURE captures content via scanning, file import, or web spidering and indexes it with morphosyntactic and semantic methods. The processed content and the index data are passed on to library systems and to dandelon.com. Dandelon.com is freely accessible to end users and at the same time serves as an exchange hub and catalogue extension for the participating libraries. New content can thus be indexed cost-effectively and with good performance.
  8. Hauer, M.: Automatische Indexierung (2000) 0.00
    0.0030154518 = product of:
      0.024123615 = sum of:
        0.024123615 = product of:
          0.04824723 = sum of:
            0.04824723 = weight(_text_:22 in 5887) [ClassicSimilarity], result of:
              0.04824723 = score(doc=5887,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.46428138 = fieldWeight in 5887, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5887)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Source
    Wissen in Aktion: Wege des Knowledge Managements. 22. Online-Tagung der DGI, Frankfurt am Main, 2.-4.5.2000. Proceedings. Hrsg.: R. Schmidt
  9. Souza, R.R.; Raghavan, K.S.: ¬A methodology for noun phrase-based automatic indexing (2006) 0.00
    0.002618981 = product of:
      0.020951848 = sum of:
        0.020951848 = weight(_text_:web in 173) [ClassicSimilarity], result of:
          0.020951848 = score(doc=173,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.21634221 = fieldWeight in 173, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=173)
      0.125 = coord(1/8)
    
    Abstract
    The scholarly community is increasingly employing the Web both for publication of scholarly output and for locating and accessing relevant scholarly literature. Organization of this vast body of digital information assumes significance in this context. The sheer volume of digital information to be handled makes traditional indexing and knowledge representation strategies ineffective and impractical, so it is worth exploring new approaches. One such approach considers the intrinsic semantics of document texts. Based on the hypothesis that noun phrases in a text are semantically rich in terms of their ability to represent the subject content of the document, this approach seeks to identify and extract noun phrases instead of single keywords, and to use them as descriptors. This paper presents a methodology developed for extracting noun phrases from Portuguese texts. The results of an experiment carried out to test the adequacy of the methodology are also presented.
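    A minimal sketch of pattern-based noun-phrase extraction over POS-tagged text (Python; the tag set and sample sentence are invented for illustration, and the paper's methodology for Portuguese is considerably more elaborate):

      TAGGED = [("automatic", "ADJ"), ("indexing", "NOUN"), ("of", "ADP"),
                ("digital", "ADJ"), ("documents", "NOUN"), ("improves", "VERB"),
                ("retrieval", "NOUN")]

      def noun_phrases(tagged):
          # Collect maximal ADJ/NOUN runs that contain at least one noun.
          out, phrase = [], []
          for word, tag in tagged + [("", "END")]:  # sentinel flushes the last run
              if tag in ("ADJ", "NOUN"):
                  phrase.append((word, tag))
              else:
                  if any(t == "NOUN" for _, t in phrase):
                      out.append(" ".join(w for w, _ in phrase))
                  phrase = []
          return out

      print(noun_phrases(TAGGED))  # ['automatic indexing', 'digital documents', 'retrieval']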
  10. Roberts, D.; Souter, C.: ¬The automation of controlled vocabulary subject indexing of medical journal articles (2000) 0.00
    0.0024586683 = product of:
      0.019669347 = sum of:
        0.019669347 = weight(_text_:data in 711) [ClassicSimilarity], result of:
          0.019669347 = score(doc=711,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.2096163 = fieldWeight in 711, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=711)
      0.125 = coord(1/8)
    
    Abstract
    This article discusses the possibility of automating sophisticated subject indexing of medical journal articles. Approaches to subject descriptor assignment in information retrieval research are usually based either upon the manual descriptors in the database or upon generation of search parameters from the text of the article. The principles of the Medline indexing system are described, followed by a summary of a pilot project based upon the Amed database. The results suggest that a more extended study, based upon Medline, should encompass several components: extraction of 'concept strings' from titles and abstracts of records, based upon linguistic features characteristic of medical literature; use of the Unified Medical Language System (UMLS) for identification of controlled vocabulary descriptors; and coordination of descriptors, utilising features of the Medline indexing system. The emphasis should be on system manipulation of data, based upon input, available resources, and specifically designed rules.
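    A minimal sketch of the controlled-vocabulary step (Python; the mini-vocabulary is invented for illustration, whereas a real implementation would query the UMLS Metathesaurus):

      # Hypothetical mapping from extracted concept strings to descriptors.
      VOCABULARY = {
          "heart attack": "Myocardial Infarction",
          "myocardial infarction": "Myocardial Infarction",
          "high blood pressure": "Hypertension",
      }

      def to_descriptors(concept_strings):
          # Keep only strings the vocabulary recognises; deduplicate descriptors.
          return sorted({VOCABULARY[s.lower()] for s in concept_strings if s.lower() in VOCABULARY})

      print(to_descriptors(["Heart attack", "high blood pressure", "fatigue"]))
      # ['Hypertension', 'Myocardial Infarction']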
  11. Mongin, L.; Fu, Y.Y.; Mostafa, J.: Open Archives Data Service prototype and automated subject indexing using D-Lib archive content as a testbed (2003) 0.00
    0.0024586683 = product of:
      0.019669347 = sum of:
        0.019669347 = weight(_text_:data in 1167) [ClassicSimilarity], result of:
          0.019669347 = score(doc=1167,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.2096163 = fieldWeight in 1167, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1167)
      0.125 = coord(1/8)
    
  12. Medelyan, O.; Witten, I.H.: Domain-independent automatic keyphrase indexing with small training sets (2008) 0.00
    0.0024586683 = product of:
      0.019669347 = sum of:
        0.019669347 = weight(_text_:data in 1871) [ClassicSimilarity], result of:
          0.019669347 = score(doc=1871,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.2096163 = fieldWeight in 1871, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1871)
      0.125 = coord(1/8)
    
    Abstract
    Keyphrases are widely used in both physical and digital libraries as a brief, but precise, summary of documents. They help organize material based on content, provide thematic access, represent search results, and assist with navigation. Manual assignment is expensive because trained human indexers must reach an understanding of the document and select appropriate descriptors according to defined cataloging rules. We propose a new method that enhances automatic keyphrase extraction by using semantic information about terms and phrases gleaned from a domain-specific thesaurus. The key advantage of the new approach is that it performs well with very little training data. We evaluate it on a large set of manually indexed documents in the domain of agriculture, compare its consistency with a group of six professional indexers, and explore its performance on smaller collections of documents in other domains and of French and Spanish documents.
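    A minimal sketch of thesaurus-restricted keyphrase extraction in the spirit of this approach (Python; the thesaurus and the scoring rule are invented for illustration, whereas the paper learns its ranking from training data):

      THESAURUS = {"soil erosion", "crop rotation", "irrigation", "soil"}

      def keyphrases(text, top_k=2):
          # Only phrases found in the domain thesaurus become candidates.
          tokens = text.lower().split()
          counts = {}
          for n in (1, 2):  # unigram and bigram candidates
              for i in range(len(tokens) - n + 1):
                  phrase = " ".join(tokens[i:i + n])
                  if phrase in THESAURUS:
                      counts[phrase] = counts.get(phrase, 0) + 1
          # Score by frequency times phrase length (longer = more specific).
          ranked = sorted(counts, key=lambda p: counts[p] * len(p.split()), reverse=True)
          return ranked[:top_k]

      print(keyphrases("crop rotation reduces soil erosion and soil erosion harms yields"))
      # ['soil erosion', 'soil']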
  13. Rädler, K.: In Bibliothekskatalogen "googlen" : Integration von Inhaltsverzeichnissen, Volltexten und WEB-Ressourcen in Bibliothekskataloge (2004) 0.00
    0.0021824844 = product of:
      0.017459875 = sum of:
        0.017459875 = weight(_text_:web in 2432) [ClassicSimilarity], result of:
          0.017459875 = score(doc=2432,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.18028519 = fieldWeight in 2432, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2432)
      0.125 = coord(1/8)
    
  14. Lepsky, K.; Vorhauer, J.: Lingo - ein open source System für die Automatische Indexierung deutschsprachiger Dokumente (2006) 0.00
    0.0020103012 = product of:
      0.01608241 = sum of:
        0.01608241 = product of:
          0.03216482 = sum of:
            0.03216482 = weight(_text_:22 in 3581) [ClassicSimilarity], result of:
              0.03216482 = score(doc=3581,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.30952093 = fieldWeight in 3581, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3581)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    24. 3.2006 12:22:02
  15. Probst, M.; Mittelbach, J.: Maschinelle Indexierung in der Sacherschließung wissenschaftlicher Bibliotheken (2006) 0.00
    0.0020103012 = product of:
      0.01608241 = sum of:
        0.01608241 = product of:
          0.03216482 = sum of:
            0.03216482 = weight(_text_:22 in 1755) [ClassicSimilarity], result of:
              0.03216482 = score(doc=1755,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.30952093 = fieldWeight in 1755, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1755)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    22. 3.2008 12:35:19
  16. Renz, M.: Automatische Inhaltserschließung im Zeichen von Wissensmanagement (2001) 0.00
    0.0017590135 = product of:
      0.014072108 = sum of:
        0.014072108 = product of:
          0.028144216 = sum of:
            0.028144216 = weight(_text_:22 in 5671) [ClassicSimilarity], result of:
              0.028144216 = score(doc=5671,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.2708308 = fieldWeight in 5671, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5671)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    22. 3.2001 13:14:48
  17. Newman, D.J.; Block, S.: Probabilistic topic decomposition of an eighteenth-century American newspaper (2006) 0.00
    0.0017590135 = product of:
      0.014072108 = sum of:
        0.014072108 = product of:
          0.028144216 = sum of:
            0.028144216 = weight(_text_:22 in 5291) [ClassicSimilarity], result of:
              0.028144216 = score(doc=5291,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.2708308 = fieldWeight in 5291, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5291)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    22. 7.2006 17:32:00