Search (42 results, page 1 of 3)

  • Filter: theme_ss:"Automatisches Indexieren"
  1. Junger, U.; Schwens, U.: ¬Die inhaltliche Erschließung des schriftlichen kulturellen Erbes auf dem Weg in die Zukunft : Automatische Vergabe von Schlagwörtern in der Deutschen Nationalbibliothek (2017) 0.06
    0.06171181 = product of:
      0.12342362 = sum of:
        0.12342362 = sum of:
          0.08910345 = weight(_text_:mining in 3780) [ClassicSimilarity], result of:
            0.08910345 = score(doc=3780,freq=2.0), product of:
              0.28585905 = queryWeight, product of:
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.05066224 = queryNorm
              0.31170416 = fieldWeight in 3780, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3780)
          0.034320172 = weight(_text_:22 in 3780) [ClassicSimilarity], result of:
            0.034320172 = score(doc=3780,freq=2.0), product of:
              0.17741053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05066224 = queryNorm
              0.19345059 = fieldWeight in 3780, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3780)
      0.5 = coord(1/2)
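    A note on the figures above: each hit's score is a Lucene "explain" tree for the ClassicSimilarity (TF-IDF) ranking formula, as the [ClassicSimilarity] tags indicate. The minimal Python sketch below reproduces the arithmetic of this first tree, assuming ClassicSimilarity's standard definitions (tf = sqrt(freq), idf = ln(maxDocs/(docFreq+1)) + 1); the variable names mirror the explain output.

        import math

        def tf(freq: float) -> float:
            # ClassicSimilarity term frequency: sqrt(freq) -> 1.4142135 for freq=2.0
            return math.sqrt(freq)

        def idf(doc_freq: int, max_docs: int) -> float:
            # ClassicSimilarity inverse document frequency -> 5.642448 for "mining"
            return math.log(max_docs / (doc_freq + 1)) + 1.0

        def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
            query_weight = idf(doc_freq, max_docs) * query_norm             # queryWeight
            field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm  # fieldWeight
            return query_weight * field_weight                              # weight(_text_:term)

        # Quantities reported for hit 1 (doc 3780):
        QUERY_NORM, FIELD_NORM = 0.05066224, 0.0390625
        mining  = term_score(2.0, 425,  44218, QUERY_NORM, FIELD_NORM)  # ~0.08910345
        term_22 = term_score(2.0, 3622, 44218, QUERY_NORM, FIELD_NORM)  # ~0.034320172
        coord   = 1 / 2  # coord(1/2): fraction of top-level query clauses matched
        print((mining + term_22) * coord)                                # ~0.06171181

    The same recipe accounts for every score tree in this result list; only freq, docFreq, and fieldNorm change from hit to hit.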
    
    Abstract
    We live in the 21st century, and much of what would have been dismissed as science fiction a hundred or even fifty years ago is now reality. Space probes fly to Mars, conduct experiments there, and send data back to Earth. Robots handle routine tasks, for example in industry or in medicine. Digitization, artificial intelligence, and automated processes have become an integral part of everyday life, and many of these processes rest on learning algorithms. The ongoing digital transformation is global and encompasses all areas of life and work: the economy, society, and politics. It opens up new possibilities from which libraries can benefit as well. The sharp rise in digital publications, which form an important and proportionally ever larger part of our cultural heritage, should prompt libraries to take up these possibilities actively. The ability to analyze digital content, for example through text and data mining (TDM), and the development of technical methods for linking content and relating it semantically offer room to rethink library subject indexing as well. The German National Library (DNB) has therefore been investigating for several years how the processes for the subject indexing of media works can be improved and supported by machines. It maintains a regular collegial exchange with other libraries that are actively engaged with the same question, as well as with European national libraries that are in turn interested in the topic and in the DNB's experience. As a national library with extensive holdings of digital publications, the DNB has also built up expertise in digital long-term preservation and is valued within its network of partners as a competent interlocutor.
    Date
    19. 8.2017 9:24:22
  2. Lowe, D.B.; Dollinger, I.; Koster, T.; Herbert, B.E.: Text mining for type of research classification (2021) 0.06
    0.059772413 = product of:
      0.11954483 = sum of:
        0.11954483 = product of:
          0.23908965 = sum of:
            0.23908965 = weight(_text_:mining in 720) [ClassicSimilarity], result of:
              0.23908965 = score(doc=720,freq=10.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                0.83639 = fieldWeight in 720, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.046875 = fieldNorm(doc=720)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This project brought together undergraduate students in Computer Science with librarians to mine abstracts of articles from the Texas A&M University Libraries' institutional repository, OAKTrust, in order to probe the creation of new metadata to improve discovery and use. The mining task consisted simply of classifying the articles into two categories of research type: basic research ("for understanding," "curiosity-based," or "knowledge-based") and applied research ("use-based"). These categories are fundamental, especially for funders, but are also important to researchers. The mining-to-classification steps took several iterations, but ultimately we achieved good results with the toolkit BERT (Bidirectional Encoder Representations from Transformers); a minimal classification sketch follows this record. The project and its workflows represent a preview of what may lie ahead in crafting metadata with text-mining techniques to enhance discoverability.
    Theme
    Data Mining
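    As a sketch of the kind of pipeline the record above describes (not the project's actual code), the following uses the Hugging Face transformers pipeline API for binary text classification. The model name is a placeholder: meaningful basic-vs-applied labels would require a checkpoint fine-tuned on labeled abstracts.

        # Minimal sketch, not the project's code: BERT-style classification of an
        # abstract. "bert-base-uncased" is a placeholder; meaningful labels require
        # a checkpoint fine-tuned on basic-vs-applied training data.
        from transformers import pipeline

        classifier = pipeline("text-classification", model="bert-base-uncased")
        abstract = "We measure fundamental thermodynamic properties of novel alloys."
        print(classifier(abstract))  # e.g. [{'label': ..., 'score': ...}]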
  3. Short, M.: Text mining and subject analysis for fiction; or, using machine learning and information extraction to assign subject headings to dime novels (2019) 0.05
    0.054016102 = product of:
      0.108032204 = sum of:
        0.108032204 = product of:
          0.21606441 = sum of:
            0.21606441 = weight(_text_:mining in 5481) [ClassicSimilarity], result of:
              0.21606441 = score(doc=5481,freq=6.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                0.75584245 = fieldWeight in 5481, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5481)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article describes multiple experiments in text mining at Northern Illinois University that were undertaken to improve the efficiency and accuracy of cataloging. It focuses narrowly on subject analysis of dime novels, a format of inexpensive fiction that was popular in the United States between 1860 and 1915. NIU holds more than 55,000 dime novels in its collections, which it is in the process of comprehensively digitizing. Classification, keyword extraction, named-entity recognition, clustering, and topic modeling are discussed as means of assigning subject headings to improve their discoverability by researchers and to increase the productivity of digitization workflows.
    Theme
    Data Mining
  4. Moulaison-Sandy, H.; Adkins, D.; Bossaller, J.; Cho, H.: ¬An automated approach to describing fiction : a methodology to use book reviews to identify affect (2021) 0.04
    0.044103958 = product of:
      0.088207915 = sum of:
        0.088207915 = product of:
          0.17641583 = sum of:
            0.17641583 = weight(_text_:mining in 710) [ClassicSimilarity], result of:
              0.17641583 = score(doc=710,freq=4.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                0.61714274 = fieldWeight in 710, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=710)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Subject headings and genre terms are notoriously difficult to apply, yet they are important for fiction. The current project functions as a proof of concept: it uses a text-mining methodology to identify affective information (emotion and tone) about fiction titles from professional book reviews, as a potential first step toward automating the subject analysis process. Findings are presented and discussed, comparing results to the range of aboutness and isness information in library cataloging records. The methodology is likewise presented, and the article explores how future work might expand on the current project to enhance catalog records through text mining.
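    By way of illustration only (this is not the authors' methodology), off-the-shelf sentiment scoring can approximate the tone of a review; the sketch below uses NLTK's VADER analyzer on an invented review text.

        # Loose stand-in for affect identification: score the emotional tone of a
        # book review with NLTK's VADER sentiment analyzer. Assumes nltk is
        # installed; the review text is invented.
        import nltk
        from nltk.sentiment import SentimentIntensityAnalyzer

        nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch
        sia = SentimentIntensityAnalyzer()

        review = "A dark, unsettling novel whose bleak humor never lets the reader rest."
        print(sia.polarity_scores(review))  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}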
  5. Nohr, H.: Theorie des Information Retrieval II : Automatische Indexierung (2004) 0.03
    0.031502828 = product of:
      0.063005656 = sum of:
        0.063005656 = product of:
          0.12601131 = sum of:
            0.12601131 = weight(_text_:mining in 8) [ClassicSimilarity], result of:
              0.12601131 = score(doc=8,freq=4.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                0.44081625 = fieldWeight in 8, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=8)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A large share of the information held by organizations - by some estimates up to 80% - exists in unstructured documents. Solutions have long been developed for managing structured information; the same now needs to be achieved for unstructured information. Alongside data mining methods for data analysis, there are attempts to apply text mining (Lit. 06) to text analysis. To be able to search a repository for specific documents, effective content recognition and labeling is required, i.e., documents must be assigned to subject areas and suitable index terms stored as metadata. To this end, document contents must be represented, that is, indexed or classified. Document analysis also serves to control the flow of information and documents; the goal is to trigger a workflow directly upon receipt of incoming mail. Based on recognized features, document analysis can automatically route incoming mail to the responsible clerk or organizational unit (invoices to accounting, orders to sales). Document analyses are also needed when employees are to have relevant documents delivered automatically through a personal information filter. For reasons of system integration, indexing solutions are built into the feature set of DMS and workflow products. Figure 1 shows an architecture of such systems, with the indexing or classification function at the center of the application, where it serves both the representation of documents (metadata) and later retrieval.
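    As a toy illustration of the mail-routing idea described above (the keyword rules and names are assumptions, not from the source), a feature-based router might look like this:

        # Toy sketch of the "workflow upon receipt of mail" idea: route a document
        # to an organizational unit based on recognized features. The keyword
        # rules are illustrative assumptions, not from the source.
        ROUTES = {"invoice": "accounting", "order": "sales"}

        def route(text: str) -> str:
            t = text.lower()
            for keyword, unit in ROUTES.items():
                if keyword in t:
                    return unit
            return "manual triage"  # fallback when no feature is recognized

        print(route("Invoice no. 4711 for office supplies"))  # -> accounting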
  6. Wolfe, E.W.: ¬A case study in automated metadata enhancement : Natural Language Processing in the humanities (2019) 0.03
    0.031186208 = product of:
      0.062372416 = sum of:
        0.062372416 = product of:
          0.12474483 = sum of:
            0.12474483 = weight(_text_:mining in 5236) [ClassicSimilarity], result of:
              0.12474483 = score(doc=5236,freq=2.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                0.4363858 = fieldWeight in 5236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5236)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Black Book Interactive Project at the University of Kansas (KU) is developing an expanded corpus of novels by African American authors, with an emphasis on lesser known writers and a goal of expanding research in this field. Using a custom metadata schema with an emphasis on race-related elements, each novel is analyzed for a variety of elements such as literary style, targeted content analysis, historical context, and other areas. Librarians at KU have worked to develop a variety of computational text analysis processes designed to assist with specific aspects of this metadata collection, including text mining and natural language processing, automated subject extraction based on word sense disambiguation, harvesting data from Wikidata, and other actions.
  7. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.03
    0.027456136 = product of:
      0.054912273 = sum of:
        0.054912273 = product of:
          0.109824546 = sum of:
            0.109824546 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.109824546 = score(doc=402,freq=2.0), product of:
                0.17741053 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05066224 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  8. Šnajder, J.; Dalbelo Bašić, B.D.; Tadić, M.: Automatic acquisition of inflectional lexica for morphological normalisation (2008) 0.03
    0.026731037 = product of:
      0.053462073 = sum of:
        0.053462073 = product of:
          0.10692415 = sum of:
            0.10692415 = weight(_text_:mining in 2910) [ClassicSimilarity], result of:
              0.10692415 = score(doc=2910,freq=2.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                0.37404498 = fieldWeight in 2910, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2910)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Due to natural language morphology, words can take on various morphological forms. Morphological normalisation - often used in information retrieval and text mining systems - conflates morphological variants of a word to a single representative form. In this paper, we describe an approach to lexicon-based inflectional normalisation. This approach is in between stemming and lemmatisation, and is suitable for morphological normalisation of inflectionally complex languages. To eliminate the immense effort required to compile the lexicon by hand, we focus on the problem of acquiring automatically an inflectional morphological lexicon from raw corpora. We propose a convenient and highly expressive morphology representation formalism on which the acquisition procedure is based. Our approach is applied to the morphologically complex Croatian language, but it should be equally applicable to other languages of similar morphological complexity. Experimental results show that our approach can be used to acquire a lexicon whose linguistic quality allows for rather good normalisation performance.
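    To make the idea concrete, here is a toy sketch of lexicon-based inflectional normalisation (the entries are illustrative English assumptions, not the authors' Croatian lexicon): each surface form is conflated to a single representative form, sitting between stemming and full lemmatisation.

        # Toy sketch of lexicon-based inflectional normalisation. The lexicon
        # entries are illustrative assumptions; a real inflectional lexicon would
        # be acquired automatically from raw corpora, as the abstract describes.
        LEXICON = {
            "indexes": "index", "indexed": "index", "indexing": "index",
            "libraries": "library", "library": "library",
        }

        def normalise(token: str) -> str:
            # Unknown forms fall through unchanged (unlike aggressive stemming).
            return LEXICON.get(token.lower(), token.lower())

        print([normalise(t) for t in "Indexing libraries".split()])  # ['index', 'library']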
  9. Search Engines and Beyond : Developing efficient knowledge management systems, April 19-20 1999, Boston, Mass (1999) 0.03
    0.025202263 = product of:
      0.050404526 = sum of:
        0.050404526 = product of:
          0.10080905 = sum of:
            0.10080905 = weight(_text_:mining in 2596) [ClassicSimilarity], result of:
              0.10080905 = score(doc=2596,freq=4.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                0.352653 = fieldWeight in 2596, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2596)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Ramana Rao (Inxight, Palo Alto, CA): 7 ± 2 Insights on achieving Effective Information Access
    Session One: Updates and a twelve month perspective
      Danny Sullivan (Search Engine Watch, US / England): Portalization and other search trends
      Carol Tenopir (University of Tennessee): Search realities faced by end users and professional searchers
    Session Two: Today's search engines and beyond
      Daniel Hoogterp (Retrieval Technologies, McLean, VA): Effective presentation and utilization of search techniques
      Rick Kenny (Fulcrum Technologies, Ontario, Canada): Beyond document clustering: The knowledge impact statement
      Gary Stock (Ingenius, Kalamazoo, MI): Automated change monitoring
      Gary Culliss (Direct Hit, Wellesley Hills, MA): User popularity ranked search engines
      Byron Dom (IBM, CA): Automatically finding the best pages on the World Wide Web (CLEVER)
      Peter Tomassi (LookSmart, San Francisco, CA): Adding human intellect to search technology
    Session Three: Panel discussion: Human v automated categorization and editing
      Ev Brenner (New York, NY) - Chairman; James Callan (University of Massachusetts, MA); Marc Krellenstein (Northern Light Technology, Cambridge, MA); Dan Miller (Ask Jeeves, Berkeley, CA)
    Session Four: Updates and a twelve month perspective
      Steve Arnold (AIT, Harrods Creek, KY): Review: The leading edge in search and retrieval software
      Ellen Voorhees (NIST, Gaithersburg, MD): TREC update
    Session Five: Search engines now and beyond
      Intelligent agents - John Snyder (Muscat, Cambridge, England): Practical issues behind intelligent agents
      Text summarization - Therese Firmin (Dept of Defense, Ft George G. Meade, MD): The TIPSTER/SUMMAC evaluation of automatic text summarization systems
      Cross language searching - Elizabeth Liddy (TextWise, Syracuse, NY): A conceptual interlingua approach to cross-language retrieval
      Video search and retrieval - Armon Amir (IBM, Almaden, CA): CueVideo: Modular system for automatic indexing and browsing of video/audio
      Speech recognition - Michael Witbrock (Lycos, Waltham, MA): Retrieval of spoken documents
      Visualization - James A. Wise (Integral Visuals, Richland, WA): Information visualization in the new millennium: Emerging science or passing fashion?
      Text mining - David Evans (Claritech, Pittsburgh, PA): Text mining - towards decision support
  10. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.02
    0.024024118 = product of:
      0.048048235 = sum of:
        0.048048235 = product of:
          0.09609647 = sum of:
            0.09609647 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.09609647 = score(doc=262,freq=2.0), product of:
                0.17741053 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05066224 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20.10.2000 12:22:23
  11. Hlava, M.M.K.: Automatic indexing : comparing rule-based and statistics-based indexing systems (2005) 0.02
    0.024024118 = product of:
      0.048048235 = sum of:
        0.048048235 = product of:
          0.09609647 = sum of:
            0.09609647 = weight(_text_:22 in 6265) [ClassicSimilarity], result of:
              0.09609647 = score(doc=6265,freq=2.0), product of:
                0.17741053 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05066224 = queryNorm
                0.5416616 = fieldWeight in 6265, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6265)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information outlook. 9(2005) no.8, S.22-23
  12. Scherer, B.: Automatische Indexierung und ihre Anwendung im DFG-Projekt "Gemeinsames Portal für Bibliotheken, Archive und Museen (BAM)" (2003) 0.02
    0.022275863 = product of:
      0.044551726 = sum of:
        0.044551726 = product of:
          0.08910345 = sum of:
            0.08910345 = weight(_text_:mining in 4283) [ClassicSimilarity], result of:
              0.08910345 = score(doc=4283,freq=2.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                0.31170416 = fieldWeight in 4283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4283)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Automatic indexing has attracted growing interest for several years now, owing to the rising flood of information. There are, however, still reservations relative to intellectual indexing regarding quality and the greater effort of system implementation and maintenance. More recent developments from the field of knowledge management, such as methods from artificial intelligence, information extraction, text mining, and automatic classification, are intended to upgrade and improve automatic indexing, so that a more intelligent, more content-based subject indexing can be achieved. Besides presenting the foundations and methods of automatic indexing and recent developments, this master's thesis also describes options for evaluation. Its focus is the possible application of automatic indexing in the DFG project "Gemeinsames Portal für Bibliotheken, Archive und Museen (BAM)", where the library-style subject indexing of texts is in the foreground. In an extensive test, three German linguistic systems are combined with statistical methods (some of which are already integrated into the systems) and evaluated, though only on the basis of the index terms they produce. In conclusion, the results, and thus the quality (with respect to the index terms), of intellectual and automatic indexing still differ significantly. The reasons lie in semantic problems yet to be solved and in the matching against terms from a thesaurus, which an automatic indexing system cannot always reproduce. Depending on the system, and possibly through the integration of a thesaurus, content enrichment with the index terms can be achieved to the benefit of retrieval.
  13. Lu, K.; Mao, J.: ¬An automatic approach to weighted subject indexing : an empirical study in the biomedical domain (2015) 0.02
    0.022275863 = product of:
      0.044551726 = sum of:
        0.044551726 = product of:
          0.08910345 = sum of:
            0.08910345 = weight(_text_:mining in 4005) [ClassicSimilarity], result of:
              0.08910345 = score(doc=4005,freq=2.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                0.31170416 = fieldWeight in 4005, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4005)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Subject indexing is an intellectually intensive process that has many inherent uncertainties. Existing manual subject indexing systems generally produce binary outcomes for whether or not to assign an indexing term. This does not sufficiently reflect the extent to which the indexing terms are associated with the documents. On the other hand, the idea of probabilistic or weighted indexing was proposed a long time ago and has seen success in capturing uncertainties in the automatic indexing process. One hurdle to overcome in implementing weighted indexing in manual subject indexing systems is the practical burden that could be added to the already intensive indexing process. This study proposes a method to infer automatically the associations between subject terms and documents through text mining. By uncovering the connections between MeSH descriptors and document text, we are able to derive the weights of MeSH descriptors manually assigned to documents. Our initial results suggest that the inference method is feasible and promising. The study has practical implications for improving subject indexing practice and providing better support for information retrieval.
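    As a deliberately simple illustration of the idea (a stand-in, not the authors' inference method), a descriptor's weight could be approximated by the normalized frequency of its occurrences in the document text:

        # Toy sketch: derive a weight for a manually assigned descriptor from
        # document text as a normalized occurrence frequency. This stands in for
        # the paper's text mining approach; it does not reproduce it.
        import re

        def descriptor_weight(text: str, descriptor: str) -> float:
            tokens = re.findall(r"\w+", text.lower())
            hits = sum(t == descriptor.lower() for t in tokens)
            return hits / len(tokens) if tokens else 0.0

        print(descriptor_weight("Mining text for mining insights", "mining"))  # 0.4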
  14. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.02
    0.0205921 = product of:
      0.0411842 = sum of:
        0.0411842 = product of:
          0.0823684 = sum of:
            0.0823684 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.0823684 = score(doc=58,freq=2.0), product of:
                0.17741053 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05066224 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    14. 6.2015 22:12:44
  15. Hauer, M.: Automatische Indexierung (2000) 0.02
    0.0205921 = product of:
      0.0411842 = sum of:
        0.0411842 = product of:
          0.0823684 = sum of:
            0.0823684 = weight(_text_:22 in 5887) [ClassicSimilarity], result of:
              0.0823684 = score(doc=5887,freq=2.0), product of:
                0.17741053 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05066224 = queryNorm
                0.46428138 = fieldWeight in 5887, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5887)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Wissen in Aktion: Wege des Knowledge Managements. 22. Online-Tagung der DGI, Frankfurt am Main, 2.-4.5.2000. Proceedings. Hrsg.: R. Schmidt
  16. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.02
    0.0205921 = product of:
      0.0411842 = sum of:
        0.0411842 = product of:
          0.0823684 = sum of:
            0.0823684 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.0823684 = score(doc=2051,freq=2.0), product of:
                0.17741053 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05066224 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    14. 6.2015 22:12:56
  17. Hauer, M.: Tiefenindexierung im Bibliothekskatalog : 17 Jahre intelligentCAPTURE (2019) 0.02
    0.0205921 = product of:
      0.0411842 = sum of:
        0.0411842 = product of:
          0.0823684 = sum of:
            0.0823684 = weight(_text_:22 in 5629) [ClassicSimilarity], result of:
              0.0823684 = score(doc=5629,freq=2.0), product of:
                0.17741053 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05066224 = queryNorm
                0.46428138 = fieldWeight in 5629, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5629)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    B.I.T.online. 22(2019) H.2, S.163-166
  18. Biebricher, N.; Fuhr, N.; Lustig, G.; Schwantner, M.; Knorz, G.: ¬The automatic indexing system AIR/PHYS : from research to application (1988) 0.02
    0.017160086 = product of:
      0.034320172 = sum of:
        0.034320172 = product of:
          0.068640344 = sum of:
            0.068640344 = weight(_text_:22 in 1952) [ClassicSimilarity], result of:
              0.068640344 = score(doc=1952,freq=2.0), product of:
                0.17741053 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05066224 = queryNorm
                0.38690117 = fieldWeight in 1952, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1952)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    16. 8.1998 12:51:22
  19. Kutschekmanesch, S.; Lutes, B.; Moelle, K.; Thiel, U.; Tzeras, K.: Automated multilingual indexing : a synthesis of rule-based and thesaurus-based methods (1998) 0.02
    0.017160086 = product of:
      0.034320172 = sum of:
        0.034320172 = product of:
          0.068640344 = sum of:
            0.068640344 = weight(_text_:22 in 4157) [ClassicSimilarity], result of:
              0.068640344 = score(doc=4157,freq=2.0), product of:
                0.17741053 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05066224 = queryNorm
                0.38690117 = fieldWeight in 4157, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4157)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information und Märkte: 50. Deutscher Dokumentartag 1998, Kongreß der Deutschen Gesellschaft für Dokumentation e.V. (DGD), Rheinische Friedrich-Wilhelms-Universität Bonn, 22.-24. September 1998. Hrsg. von Marlies Ockenfeld u. Gerhard J. Mantwill
  20. Tsareva, P.V.: Algoritmy dlya raspoznavaniya pozitivnykh i negativnykh vkhozdenii deskriptorov v tekst i protsedura avtomaticheskoi klassifikatsii tekstov (1999) 0.02
    0.017160086 = product of:
      0.034320172 = sum of:
        0.034320172 = product of:
          0.068640344 = sum of:
            0.068640344 = weight(_text_:22 in 374) [ClassicSimilarity], result of:
              0.068640344 = score(doc=374,freq=2.0), product of:
                0.17741053 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05066224 = queryNorm
                0.38690117 = fieldWeight in 374, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=374)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 4.2002 10:22:41

Languages

  • e 24
  • d 17
  • ru 1

Types

  • a 37
  • el 3
  • x 3
  • m 1