Search (16 results, page 1 of 1)

  • theme_ss:"Automatisches Indexieren"
  • theme_ss:"Retrievalstudien"
  • type_ss:"a"
  1. Rapke, K.: Automatische Indexierung von Volltexten für die Gruner+Jahr Pressedatenbank (2001) 0.06
    0.06317672 = product of:
      0.09476508 = sum of:
        0.03901083 = weight(_text_:im in 5863) [ClassicSimilarity], result of:
          0.03901083 = score(doc=5863,freq=6.0), product of:
            0.1442303 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.051022716 = queryNorm
            0.27047595 = fieldWeight in 5863, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5863)
        0.055754256 = product of:
          0.08363138 = sum of:
            0.025961377 = weight(_text_:online in 5863) [ClassicSimilarity], result of:
              0.025961377 = score(doc=5863,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16765618 = fieldWeight in 5863, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5863)
            0.05767 = weight(_text_:retrieval in 5863) [ClassicSimilarity], result of:
              0.05767 = score(doc=5863,freq=10.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.37365708 = fieldWeight in 5863, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5863)
          0.6666667 = coord(2/3)
      0.6666667 = coord(2/3)
    
    Abstract
    Retrieval tests are the most widely accepted method for justifying new subject indexing procedures against traditional ones. As part of a diploma thesis, two fundamentally different systems for automatic subject indexing were tested and evaluated on the press database of the publishing house Gruner + Jahr (G+J). The study compared natural-language retrieval with Boolean retrieval. The two systems are Autonomy, by Autonomy Inc., and DocCat, which IBM adapted to the database structure of the G+J press database. The former is a probabilistic system based on natural-language retrieval; DocCat, by contrast, is based on Boolean retrieval and is a learning system that indexes on the basis of an intellectually created training template. Methodologically, the evaluation starts from the real working context of the G+J text documentation department. The tests are assessed from both statistical and qualitative points of view. One result is that DocCat shows some shortcomings compared with intellectual subject indexing that still have to be remedied, while Autonomy's natural-language retrieval, in this setting and for the specific requirements of the G+J text documentation, cannot be used as it stands.
    Source
    Information Research & Content Management: Orientierung, Ordnung und Organisation im Wissensmarkt; 23. DGI-Online-Tagung der DGI und 53. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. DGI, Frankfurt am Main, 8.-10.5.2001. Proceedings. Hrsg.: R. Schmidt
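    The expanded relevance breakdown under result 1 is Lucene's ClassicSimilarity explain output: each matching term contributes queryWeight (idf * queryNorm) times fieldWeight (sqrt(tf) * idf * fieldNorm), partial sums are scaled by coord(matching clauses / total clauses), and the outer product applies a second coord factor. A minimal Python sketch that simply re-applies the factors printed in the tree above (all constants are copied from the explain output, nothing is recomputed from the index):

      import math

      # Constants read from the ClassicSimilarity explain tree of result 1 (doc 5863).
      query_norm = 0.051022716
      field_norm = 0.0390625  # lengthNorm stored for this document's text field

      def term_score(freq, idf):
          """queryWeight (idf * queryNorm) times fieldWeight (sqrt(tf) * idf * fieldNorm)."""
          query_weight = idf * query_norm
          field_weight = math.sqrt(freq) * idf * field_norm
          return query_weight * field_weight

      im = term_score(freq=6.0, idf=2.8267863)         # ~0.03901083
      online = term_score(freq=2.0, idf=3.0349014)     # ~0.02596138
      retrieval = term_score(freq=10.0, idf=3.024915)  # ~0.05767

      # The nested clause matched 2 of its 3 sub-queries, the outer query 2 of 3 clauses.
      inner = (online + retrieval) * (2 / 3)
      total = (im + inner) * (2 / 3)
      print(round(total, 8))  # ~0.06317672, the score shown for result 1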
  2. Munkelt, J.: Erstellung einer DNB-Retrieval-Testkollektion (2018) 0.04
    0.044433918 = product of:
      0.066650875 = sum of:
        0.054615162 = weight(_text_:im in 4310) [ClassicSimilarity], result of:
          0.054615162 = score(doc=4310,freq=6.0), product of:
            0.1442303 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.051022716 = queryNorm
            0.37866634 = fieldWeight in 4310, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4310)
        0.012035711 = product of:
          0.03610713 = sum of:
            0.03610713 = weight(_text_:retrieval in 4310) [ClassicSimilarity], result of:
              0.03610713 = score(doc=4310,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23394634 = fieldWeight in 4310, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4310)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
    Since autumn 2017, the German National Library (DNB) has been carrying out the subject indexing of certain types of publications purely by machine. The quality of this procedure, which can decisively shape the way library processes are organised, is controversial among experts. Their positions are first set out in sufficient detail before the need for a quality assessment of the procedure, and the foundations of such an assessment, are explained. A central component of any future assessment is a test collection; its creation and documentation are the focus of this thesis. In this context, the history of test collections and the requirements for well-designed ones are also discussed. Finally, a retrieval test is carried out that demonstrates that the compiled test collection is fit for use. Its results serve solely to verify that the collection works; an assessment of the quality of automatic subject indexing, either in this specific case or in general, is not undertaken and is not the aim of this work.
  3. Rapke, K.: Automatische Indexierung von Volltexten für die Gruner+Jahr Pressedatenbank (2001) 0.04
    0.04232825 = product of:
      0.06349237 = sum of:
        0.03822265 = weight(_text_:im in 6386) [ClassicSimilarity], result of:
          0.03822265 = score(doc=6386,freq=4.0), product of:
            0.1442303 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.051022716 = queryNorm
            0.26501122 = fieldWeight in 6386, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.046875 = fieldNorm(doc=6386)
        0.025269728 = product of:
          0.07580918 = sum of:
            0.07580918 = weight(_text_:retrieval in 6386) [ClassicSimilarity], result of:
              0.07580918 = score(doc=6386,freq=12.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.49118498 = fieldWeight in 6386, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6386)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
    Retrieval tests are the most widely accepted method for justifying new subject indexing procedures against traditional ones. As part of a diploma thesis, two fundamentally different systems for automatic subject indexing were tested and evaluated on the press database of the publishing house Gruner + Jahr (G+J). The study compared natural-language retrieval with Boolean retrieval. The two systems are Autonomy, by Autonomy Inc., and DocCat, which IBM adapted to the database structure of the G+J press database. The former is a probabilistic system based on natural-language retrieval; DocCat, by contrast, is based on Boolean retrieval and is a learning system that indexes on the basis of an intellectually created training template. Methodologically, the evaluation starts from the real working context of the G+J text documentation department. The tests are assessed from both statistical and qualitative points of view. One result is that DocCat shows some shortcomings compared with intellectual subject indexing that still have to be remedied, while Autonomy's natural-language retrieval, in this setting and for the specific requirements of the G+J text documentation, cannot be used as it stands.
  4. Oberhauser, O.; Labner, J.: OPAC-Erweiterung durch automatische Indexierung : Empirische Untersuchung mit Daten aus dem Österreichischen Verbundkatalog (2002) 0.03
    0.032404803 = product of:
      0.0486072 = sum of:
        0.03822265 = weight(_text_:im in 883) [ClassicSimilarity], result of:
          0.03822265 = score(doc=883,freq=4.0), product of:
            0.1442303 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.051022716 = queryNorm
            0.26501122 = fieldWeight in 883, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.046875 = fieldNorm(doc=883)
        0.010384551 = product of:
          0.031153653 = sum of:
            0.031153653 = weight(_text_:online in 883) [ClassicSimilarity], result of:
              0.031153653 = score(doc=883,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.20118743 = fieldWeight in 883, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=883)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
    Following on from the MILOS I and MILOS II projects of the 1990s, which examined the suitability of an automatic indexing procedure for library catalogues, an empirical study was carried out on a representative sample of title records from the Austrian union catalogue (Österreichischer Verbundkatalog). The aim was to examine and assess whether this procedure could be deployed in the union's online catalogues. In keeping with real OPAC usage, only the effect on the basic index ("all fields") enriched with automatically generated terms was investigated. To this end, 100 queries were run first against the original basic index and then against the enriched basic index in an OPAC under Aleph 500. The tests showed an increase in relevant hits with only slight losses in precision, a reduction in zero-hit results, and yielded insights into the effect of existing verbal subject indexing.
  5. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.03
    0.026800975 = product of:
      0.080402926 = sum of:
        0.080402926 = product of:
          0.12060438 = sum of:
            0.07221426 = weight(_text_:retrieval in 5001) [ClassicSimilarity], result of:
              0.07221426 = score(doc=5001,freq=8.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.46789268 = fieldWeight in 5001, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5001)
            0.048390117 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
              0.048390117 = score(doc=5001,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.2708308 = fieldWeight in 5001, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5001)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate actual searching conditions as closely as possible. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves, user and information specialist ignorance of the subject vocabulary in use, and general language problems. Across fields it was found that the social sciences had the best retrieval rate, with science having the next best, and arts and humanities the lowest. Ways to enhance and supplement keyword-in-title searching on the computer and in printed indexes are discussed.
    Date
    14.3.1996 13:22:21
  6. Lepsky, K.; Siepmann, J.; Zimmermann, A.: Automatische Indexierung für Online-Kataloge : Ergebnisse eines Retrievaltests (1996) 0.02
    0.01610068 = product of:
      0.04830204 = sum of:
        0.04830204 = product of:
          0.07245306 = sum of:
            0.03634593 = weight(_text_:online in 3251) [ClassicSimilarity], result of:
              0.03634593 = score(doc=3251,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23471867 = fieldWeight in 3251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3251)
            0.03610713 = weight(_text_:retrieval in 3251) [ClassicSimilarity], result of:
              0.03610713 = score(doc=3251,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23394634 = fieldWeight in 3251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3251)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Examines the effectiveness of automated indexing and presents the results of a study of information retrieval from a segment (40,000 items) of the ULB Düsseldorf database. The segment was selected randomly and all the documents included were indexed automatically. The search topics covered 50 subject areas ranging from economic growth to alternative energy sources. There were 876 relevant documents in the database segment for the 50 search topics in total; the number per topic ranged from 1 to 244 references, with an average of 17.52 documents per topic. Therefore it seems that, in the immediate future, automatic indexing should be used in combination with intellectual indexing.
  7. Mielke, B.: Wider einige gängige Ansichten zur juristischen Informationserschließung (2002) 0.01
    0.012740883 = product of:
      0.03822265 = sum of:
        0.03822265 = weight(_text_:im in 2145) [ClassicSimilarity], result of:
          0.03822265 = score(doc=2145,freq=4.0), product of:
            0.1442303 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.051022716 = queryNorm
            0.26501122 = fieldWeight in 2145, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.046875 = fieldNorm(doc=2145)
      0.33333334 = coord(1/3)
    
    Abstract
    Starting from assumptions about legal information indexing that are common in legal informatics, the paper describes the main results of an empirical study of the retrieval effectiveness of searches in legal databases. At its centre are the questions of whether intellectual subject indexing is necessary, on the one hand, and how effective so-called keyword searching is, on the other. The results of the study, which also included a comparison between an information system based on a Boolean retrieval model and a system based on statistical methods, suggest that analytically derived assumptions found in the legal informatics literature, such as the danger of excessively large result sets in keyword searching, cannot be confirmed empirically. Nor do intellectual indexing methods (assignment of subject headings) prove superior to automatic indexing; on the contrary, on an identical document collection the statistical method achieves a higher recall.
  8. Grummann, M.: Sind Verfahren zur maschinellen Indexierung für Literaturbestände Öffentlicher Bibliotheken geeignet? : Retrievaltests von indexierten ekz-Daten mit der Software IDX (2000) 0.01
    0.01201222 = product of:
      0.03603666 = sum of:
        0.03603666 = weight(_text_:im in 1879) [ClassicSimilarity], result of:
          0.03603666 = score(doc=1879,freq=2.0), product of:
            0.1442303 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.051022716 = queryNorm
            0.24985497 = fieldWeight in 1879, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.0625 = fieldNorm(doc=1879)
      0.33333334 = coord(1/3)
    
    Abstract
    Automatic indexing standardises and enlarges the search vocabulary of a library catalogue through various methods (including base-form reduction, splitting of compounds, and word derivations). A retrieval test on a non-fiction collection typical of public libraries shows that this procedure improves the results of OPAC searches, despite "flowery" title wording. Compared with conventional indexing methods (title keywords and subject headings), more relevant titles are found without increasing the "ballast" at the same time. Automatic indexing cannot replace subject heading assignment, however, but only supplement it.
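    The methods named in the abstract above (base-form reduction, compound splitting, derivations) all work by adding extra index terms alongside the surface form of a title word. A toy Python sketch of that idea; the miniature word list and suffix rules are invented for illustration and are not taken from the IDX software:

      # Illustrative only: a miniature index-term expander in the spirit of
      # base-form reduction and dictionary-based compound splitting.
      KNOWN_WORDS = {"bibliothek", "katalog", "automatisch", "indexierung"}
      SUFFIXES = ("en", "er", "e", "s")

      def base_form(word):
          """Naive suffix stripping; stands in for real lemmatisation."""
          w = word.lower()
          for suffix in SUFFIXES:
              if w.endswith(suffix) and w[: -len(suffix)] in KNOWN_WORDS:
                  return w[: -len(suffix)]
          return w

      def split_compound(word):
          """Greedy decompounding against the word list, e.g. 'Bibliothekskatalog'."""
          w = word.lower()
          for i in range(len(w) - 2, 2, -1):
              head, tail = w[:i], w[i:]
              if head.endswith("s") and head[:-1] in KNOWN_WORDS:
                  head = head[:-1]  # drop the German linking 's'
              if head in KNOWN_WORDS and tail in KNOWN_WORDS:
                  return [head, tail]
          return []

      def index_terms(title):
          terms = set()
          for token in title.split():
              terms.add(token.lower())
              terms.add(base_form(token))
              terms.update(split_compound(token))
          return terms

      print(index_terms("Automatische Indexierung Bibliothekskatalog"))
      # adds 'automatisch', 'bibliothek' and 'katalog' alongside the surface forms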
  9. Toepfer, M.; Seifert, C.: Content-based quality estimation for automatic subject indexing of short texts under precision and recall constraints 0.01
    0.011500487 = product of:
      0.03450146 = sum of:
        0.03450146 = product of:
          0.051752187 = sum of:
            0.025961377 = weight(_text_:online in 4309) [ClassicSimilarity], result of:
              0.025961377 = score(doc=4309,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16765618 = fieldWeight in 4309, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4309)
            0.025790809 = weight(_text_:retrieval in 4309) [ClassicSimilarity], result of:
              0.025790809 = score(doc=4309,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16710453 = fieldWeight in 4309, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4309)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Semantic annotations have to satisfy quality constraints to be useful for digital libraries, which is particularly challenging on large and diverse datasets. Confidence scores of multi-label classification methods typically refer only to the relevance of particular subjects, disregarding indicators of insufficient content representation at the document-level. Therefore, we propose a novel approach that detects documents rather than concepts where quality criteria are met. Our approach uses a deep, multi-layered regression architecture, which comprises a variety of content-based indicators. We evaluated multiple configurations using text collections from law and economics, where the available content is restricted to very short texts. Notably, we demonstrate that the proposed quality estimation technique can determine subsets of the previously unseen data where considerable gains in document-level recall can be achieved, while upholding precision at the same time. Hence, the approach effectively performs a filtering that ensures high data quality standards in operative information retrieval systems.
    Content
    This is an authors' manuscript version of a paper accepted for the proceedings of TPDL-2018, Porto, Portugal, Sept 10-13. The final authenticated publication will be available online at https://doi.org/ (DOI to be added as soon as available).
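    The approach described in entry 9 shifts quality estimation from individual subject terms to whole documents: a regression model over content-based indicators predicts, per document, how reliable the automatic annotations will be, and only documents above a confidence threshold are released for automatic indexing. A minimal sketch of that filtering idea, assuming scikit-learn; the indicators, the model configuration and the threshold are placeholders, not the paper's actual architecture:

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)

      # Toy content-based indicators per document (e.g. text length, vocabulary
      # overlap with the training corpus, top classifier confidence).
      X_train = rng.random((200, 3))
      # Toy target: observed annotation quality (e.g. per-document F1) on held-out data.
      y_train = X_train.mean(axis=1) + 0.05 * rng.standard_normal(200)

      quality_model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                                   random_state=0).fit(X_train, y_train)

      X_new = rng.random((10, 3))
      predicted_quality = quality_model.predict(X_new)

      threshold = 0.6  # chosen on validation data so the precision constraint holds
      accepted = predicted_quality >= threshold
      print(f"{accepted.sum()} of {len(accepted)} documents accepted for automatic indexing")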
  10. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.01
    0.01075336 = product of:
      0.03226008 = sum of:
        0.03226008 = product of:
          0.09678023 = sum of:
            0.09678023 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.09678023 = score(doc=262,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    20.10.2000 12:22:23
  11. Wan, T.-L.; Evens, M.; Wan, Y.-W.; Pao, Y.-Y.: Experiments with automatic indexing and a relational thesaurus in a Chinese information retrieval system (1997) 0.01
    0.008970889 = product of:
      0.026912667 = sum of:
        0.026912667 = product of:
          0.080738 = sum of:
            0.080738 = weight(_text_:retrieval in 956) [ClassicSimilarity], result of:
              0.080738 = score(doc=956,freq=10.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.5231199 = fieldWeight in 956, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=956)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This article describes a series of experiments with an interactive Chinese information retrieval system named CIRS and an interactive relational thesaurus. Two important issues have been explored: whether thesauri enhance the retrieval effectiveness of Chinese documents, and whether automatic indexing can compete with manual indexing in a Chinese information retrieval system. Recall and precision are used to measure and evaluate the effectiveness of the system. Statistical analysis of the recall and precision measures suggests that the use of the relational thesaurus does improve retrieval effectiveness both in the automatic indexing environment and in the manual indexing environment, and that automatic indexing is at least as good as manual indexing.
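    Recall and precision, the two measures used in the study above, can be computed per query from the set of retrieved documents and the set of relevant documents. A minimal Python sketch with hypothetical document identifiers:

      def precision_recall(retrieved, relevant):
          """Set-based precision and recall for a single query."""
          retrieved, relevant = set(retrieved), set(relevant)
          hits = retrieved & relevant
          precision = len(hits) / len(retrieved) if retrieved else 0.0
          recall = len(hits) / len(relevant) if relevant else 0.0
          return precision, recall

      retrieved = ["d1", "d2", "d3", "d4", "d5"]    # documents returned for one query
      relevant = ["d2", "d4", "d7", "d9"]           # documents judged relevant
      print(precision_recall(retrieved, relevant))  # (0.4, 0.5)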
  12. Buckley, C.; Allan, J.; Salton, G.: Automatic routing and retrieval using Smart : TREC-2 (1995) 0.01
    0.006877549 = product of:
      0.020632647 = sum of:
        0.020632647 = product of:
          0.06189794 = sum of:
            0.06189794 = weight(_text_:retrieval in 5699) [ClassicSimilarity], result of:
              0.06189794 = score(doc=5699,freq=8.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.40105087 = fieldWeight in 5699, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5699)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The Smart information retrieval project emphasizes completely automatic approaches to the understanding and retrieval of large quantities of text. The work in the TREC-2 environment continues, performing both routing and ad hoc experiments. The ad hoc work extends investigations into combining global similarities, which give an overall indication of how a document matches a query, with local similarities identifying a smaller part of the document that matches the query. The performance of the ad hoc runs is good, but it is clear that full advantage is not yet being taken of the available local information. The routing experiments use conventional relevance feedback approaches to routing, but with a much greater degree of query expansion than was previously done: the length of a query vector is increased by a factor of 5 to 10 by adding terms found in previously seen relevant documents. This approach improves effectiveness by 30-40% over the original query.
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
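    The query expansion described in entry 12 follows the conventional relevance feedback pattern: terms from documents already judged relevant are added to the query vector, greatly increasing its length. A minimal Rocchio-style sketch of that pattern; the weights and the toy term vectors are illustrative, not Smart/TREC-2 data:

      from collections import Counter

      ALPHA, BETA = 1.0, 0.75  # weights for the original query and the feedback centroid

      def rocchio_expand(query, relevant_docs, max_terms=10):
          """query and docs are term -> weight mappings (e.g. tf-idf)."""
          centroid = Counter()
          for doc in relevant_docs:
              for term, weight in doc.items():
                  centroid[term] += weight / len(relevant_docs)
          expanded = Counter({term: ALPHA * w for term, w in query.items()})
          for term, weight in centroid.items():
              expanded[term] += BETA * weight
          # keep only the highest-weighted terms to bound the query length
          return dict(expanded.most_common(max_terms))

      query = {"routing": 1.0, "retrieval": 0.8}
      relevant_docs = [{"retrieval": 0.6, "feedback": 0.9},
                       {"smart": 0.7, "feedback": 0.4}]
      print(rocchio_expand(query, relevant_docs))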
  13. Lochbaum, K.E.; Streeter, A.R.: Comparing and combining the effectiveness of latent semantic indexing and the ordinary vector space model for information retrieval (1989) 0.01
    0.005956133 = product of:
      0.017868398 = sum of:
        0.017868398 = product of:
          0.05360519 = sum of:
            0.05360519 = weight(_text_:retrieval in 3458) [ClassicSimilarity], result of:
              0.05360519 = score(doc=3458,freq=6.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.34732026 = fieldWeight in 3458, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3458)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    A retrieval system was built to find individuals with appropriate expertise within a large research establishment on the basis of the documents they authored. The expert-locating system uses a new method for automatic indexing and retrieval based on singular value decomposition, a matrix decomposition technique related to factor analysis. Organizational groups, represented by the documents they write, and the terms contained in these documents are fit simultaneously into a 100-dimensional "semantic" space. User queries are positioned in the semantic space, and the most similar groups are returned to the user. Here we compared the standard vector space model with this new technique and found that combining the two methods improved performance over either alone. We also examined the effects of various experimental variables on the system's retrieval accuracy, in particular the effects of term weighting functions in the semantic space construction and in query construction, suffix stripping, and using lexical units larger than a single word.
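    Entry 13 describes latent semantic indexing: terms and documents are placed in a low-dimensional space via a truncated singular value decomposition, queries are folded into the same space, and similarity is measured there. A minimal numpy sketch; the random matrix and the 2-dimensional space stand in for the paper's document collection and its 100-dimensional semantic space:

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.random((8, 5))  # term-document matrix (terms x documents)

      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      k = 2                                    # reduced dimensionality
      U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

      doc_vecs = Vt_k.T                        # documents as rows of V_k
      q = rng.random(8)                        # query as a term vector
      q_vec = (q @ U_k) / s_k                  # fold the query into the reduced space

      def cosine(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      ranking = sorted(range(doc_vecs.shape[0]),
                       key=lambda i: cosine(q_vec, doc_vecs[i]), reverse=True)
      print(ranking)  # document indices, most similar to the query first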
  14. Gödert, W.; Liebig, M.: Maschinelle Indexierung auf dem Prüfstand : Ergebnisse eines Retrievaltests zum MILOS II Projekt (1997) 0.01
    0.005673688 = product of:
      0.017021064 = sum of:
        0.017021064 = product of:
          0.05106319 = sum of:
            0.05106319 = weight(_text_:retrieval in 1174) [ClassicSimilarity], result of:
              0.05106319 = score(doc=1174,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.33085006 = fieldWeight in 1174, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1174)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The test ran from Nov 95 to Aug 96 at the Fachhochschule für Bibliothekswesen in Cologne (College of Librarianship). The test basis was a database of 190,000 book titles published between 1990 and 1995. The MILOS II mechanized indexing methods proved helpful in avoiding or reducing the number of unsatisfied and zero-result retrieval searches. Retrieval based on mechanized indexing is three times more successful than retrieval from title keyword data. MILOS II also used a standardized semantic vocabulary. Mechanized indexing demands high-quality software and output data.
  15. Chevallet, J.-P.; Bruandet, M.F.: Impact de l'utilisation de multi terms sur la qualité des réponses d'un système de recherche d'information à indexation automatique (1999) 0.00
    0.004585033 = product of:
      0.013755098 = sum of:
        0.013755098 = product of:
          0.041265294 = sum of:
            0.041265294 = weight(_text_:retrieval in 6253) [ClassicSimilarity], result of:
              0.041265294 = score(doc=6253,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.26736724 = fieldWeight in 6253, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6253)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Footnote
    Translation of the title: Impact of the use of multi-terms on the quality of the answers of an information retrieval system based on automatic indexing
  16. Munkelt, J.; Schaer, P.; Lepsky, K.: Towards an IR test collection for the German National Library (2018) 0.00
    0.0034615172 = product of:
      0.010384551 = sum of:
        0.010384551 = product of:
          0.031153653 = sum of:
            0.031153653 = weight(_text_:online in 4311) [ClassicSimilarity], result of:
              0.031153653 = score(doc=4311,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.20118743 = fieldWeight in 4311, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4311)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Automatic content indexing is one of the innovations that are increasingly changing the way libraries work. In theory, it promises a cataloguing service that would hardly be possible with humans in terms of speed, quantity and perhaps quality. The German National Library (DNB) has recognised this potential and is increasingly relying on automatic indexing of its catalogue content. The DNB took a major step in this direction in 2017, announced in two papers. The announcement was rather restrained, but the content of the papers is all the more explosive for the library community: since September 2017, the DNB has discontinued the intellectual indexing of series B and H and has switched to an automatic process for these series. The subject indexing of online publications (series O) has been purely automatic since 2010; from September 2017, monographs and periodicals published outside the publishing industry as well as university publications are no longer indexed by people. This raises the question: what is the quality of the automatic indexing compared with manual work, or, in other words, to what degree can automatic indexing replace people without a significant drop in quality?