Search (85 results, page 3 of 5)

  • × theme_ss:"Volltextretrieval"
  1. Pirkola, A.; Jarvelin, K.: ¬The effect of anaphor and ellipsis resolution on proximity searching in a text database (1995) 0.00
    0.004274482 = product of:
      0.029921371 = sum of:
        0.008737902 = weight(_text_:information in 4088) [ClassicSimilarity], result of:
          0.008737902 = score(doc=4088,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16796975 = fieldWeight in 4088, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4088)
        0.021183468 = weight(_text_:retrieval in 4088) [ClassicSimilarity], result of:
          0.021183468 = score(doc=4088,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.23632148 = fieldWeight in 4088, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4088)
      0.14285715 = coord(2/14)
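    The relevance breakdowns attached to each entry are Lucene ClassicSimilarity explanations. Reading the first one as a worked example (all constants are taken from the listing itself; the combining formula is the standard ClassicSimilarity one, restated here only for orientation):
      weight(t) = (idf_t \cdot queryNorm) \cdot (\sqrt{tf_t} \cdot idf_t \cdot fieldNorm), \qquad score = coord \cdot \sum_t weight(t)
      weight(\text{information}) = (1.7554779 \cdot 0.029633347) \cdot (\sqrt{6} \cdot 1.7554779 \cdot 0.0390625) \approx 0.0087379
      weight(\text{retrieval}) = (3.024915 \cdot 0.029633347) \cdot (\sqrt{4} \cdot 3.024915 \cdot 0.0390625) \approx 0.0211835
      score = \tfrac{2}{14} \cdot (0.0087379 + 0.0211835) \approx 0.0042745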
    
    Abstract
    So far, methods for ellipsis and anaphor resolution have been developed and the effects of anaphor resolution have been analyzed in the context of statistical information retrieval of scientific abstracts. No significant improvements have been observed. Analyzes the effects of ellipsis and anaphor resolution on proximity searching in a full text database. Anaphora and ellipses are classified on the basis of the type of their correlates / antecedents rather than, as is traditional, on the basis of their own linguistic type. The classification differentiates between proper names and common nouns as basic words, compound words, and phrases. The study was carried out in a newspaper article database containing 55,000 full text articles. A set of 154 keyword pairs in different categories was created. Human resolution of keyword ellipses and anaphora was performed to identify sentences and paragraphs which would match proximity searches after resolution. Findings indicate that ellipsis and anaphor resolution is most relevant for proper name phrases and only marginal in the other keyword categories. Therefore the recall effect of restricted resolution of proper name phrases only was analyzed for keyword pairs containing at least one proper name phrase. Findings indicate a recall increase of 38.2% in sentence searches and 28.8% in paragraph searches when proper name ellipses were resolved. The recall increase was 17.6% in sentence searches and 19.8% in paragraph searches when proper name anaphora were resolved. A simple and computationally justifiable resolution method might be developed for proper name phrases only to support keyword-based full text information retrieval. Discusses elements of such a method.
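    As an illustration of the sentence-level proximity matching evaluated in the study, a minimal Python sketch follows; the sample text, the keyword pair and the naive sentence splitting are invented for illustration and are not taken from the study.
      import re

      def sentences(text):
          # naive splitter on end-of-sentence punctuation; good enough for the illustration
          return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]

      def same_sentence(text, word_a, word_b):
          # True if both (lower-case) keywords occur within one sentence
          return any(word_a in s and word_b in s
                     for s in (x.lower() for x in sentences(text)))

      # hypothetical pair: the anaphor "the company" hides the antecedent "Nokia"
      raw      = "Nokia reported record sales. The company will hire 500 engineers."
      resolved = "Nokia reported record sales. Nokia will hire 500 engineers."

      print(same_sentence(raw, "nokia", "engineers"))       # False: keywords fall in different sentences
      print(same_sentence(resolved, "nokia", "engineers"))  # True: resolution creates the sentence-level hit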
    Source
    Information processing and management. 32(1996) no.2, S.199-216
  2. Blake, P.: Leading edge : Verity keeps it in the family (1997) 0.00
    0.0042119347 = product of:
      0.029483542 = sum of:
        0.020922182 = weight(_text_:web in 7398) [ClassicSimilarity], result of:
          0.020922182 = score(doc=7398,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.21634221 = fieldWeight in 7398, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=7398)
        0.00856136 = weight(_text_:information in 7398) [ClassicSimilarity], result of:
          0.00856136 = score(doc=7398,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 7398, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=7398)
      0.14285715 = coord(2/14)
    
    Abstract
    Verity Search 97 software will index and search e-mail, attachments, folders and documents on local and network disk drives. The Internet may be searched via the same front end, and changes to particular documents or pages may be monitored. Documents may be viewed in their native formats, including ASCII, HTML, PDF and popular word processor formats, with highlighted search terms. Agents may be launched onto the Internet to retrieve information according to a user-specified profile. The software can index about 700 MB an hour. Describes the search technology, which includes fuzzy logic and natural language searching. The Web version of Personal Search 97 works with Netscape Navigator or Microsoft Internet Explorer, while the Exchange version will work regardless of any attachment to an Exchange server. Search 97 Personal improves online time and access time and allows searches to be refined offline.
    Source
    Information world review. 1997, no.122, S.15-16
  3. Wacholder, N.; Byrd, R.J.: Retrieving information from full text using linguistic knowledge (1994) 0.00
    0.00406575 = product of:
      0.028460251 = sum of:
        0.0104854815 = weight(_text_:information in 8524) [ClassicSimilarity], result of:
          0.0104854815 = score(doc=8524,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.20156369 = fieldWeight in 8524, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=8524)
        0.01797477 = weight(_text_:retrieval in 8524) [ClassicSimilarity], result of:
          0.01797477 = score(doc=8524,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.20052543 = fieldWeight in 8524, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=8524)
      0.14285715 = coord(2/14)
    
    Abstract
    Examines how techniques in the field of natural language processing can be applied to the analysis of text in information retrieval. State-of-the-art text searching programs cannot distinguish, for example, between occurrences of AIDS, the sickness, and aids as tools, or between library school and school library, nor equate such terms as online and on-line, which are variants of the same form. To make these distinctions, systems must incorporate knowledge about the meaning of words in context. Research in natural language processing has concentrated on the automatic 'understanding' of language: how to analyze the grammatical structure and meaning of text. Although many aspects of this research remain experimental, describes how these techniques can be used to recognize spelling variants, names, acronyms, and abbreviations.
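    A toy Python sketch of the kind of normalisation argued for here; the variant table and the crude case rule for AIDS/aids are invented placeholders, not the authors' method.
      # toy normaliser: conflate spelling variants, keep case-bearing senses apart
      VARIANTS = {"on-line": "online", "e-mail": "email"}   # assumed variant table

      def normalise(token):
          t = VARIANTS.get(token.lower(), token.lower())
          if token == "AIDS":        # all caps: the disease
              return "AIDS<disease>"
          if t == "aids":            # lower case: tools that aid
              return "aids<tool>"
          return t

      print([normalise(t) for t in ["On-line", "AIDS", "aids", "retrieval"]])
      # ['online', 'AIDS<disease>', 'aids<tool>', 'retrieval']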
    Imprint
    Medford, NJ : Learned Information
  4. Huang, Y.-L.: ¬A theoretic and empirical research of cluster indexing for Mandarine Chinese full text document (1998) 0.00
    0.004004761 = product of:
      0.028033325 = sum of:
        0.0070627616 = weight(_text_:information in 513) [ClassicSimilarity], result of:
          0.0070627616 = score(doc=513,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13576832 = fieldWeight in 513, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=513)
        0.020970564 = weight(_text_:retrieval in 513) [ClassicSimilarity], result of:
          0.020970564 = score(doc=513,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.23394634 = fieldWeight in 513, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=513)
      0.14285715 = coord(2/14)
    
    Abstract
    Since most popular commercialized systems for full text retrieval are designed with full text scanning and a Boolean logic query mode, these systems use an oversimplified relationship between the indexing form and the content of a document. Reports the use of Singular Value Decomposition (SVD) to develop a Cluster Indexing Model (CIM) based on a Vector Space Model (VSM) in order to explore the index theory of cluster indexing for Chinese full text documents. From a series of experiments, it was found that the indexing performance of CIM is better than that of the traditional VSM and has almost equivalent effectiveness to the authority control of index terms.
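    A small numpy sketch of the SVD step such a cluster indexing model builds on; the toy term-document matrix, the rank k and the LSI-style fold-in are illustrative assumptions, not the paper's actual CIM.
      import numpy as np

      # toy term-document matrix (rows = terms, columns = documents); counts are invented
      A = np.array([[2., 0., 1., 0.],
                    [1., 1., 0., 0.],
                    [0., 2., 0., 1.],
                    [0., 0., 1., 2.]])

      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      k = 2                               # keep the two strongest latent dimensions
      docs_k = Vt[:k].T                   # one row per document in the reduced space

      def fold_in(query_counts):
          # project a raw term-count query into the same k-dimensional space
          return np.asarray(query_counts, dtype=float) @ U[:, :k] / s[:k]

      q = fold_in([1, 0, 0, 1])           # query mentioning term 0 and term 3
      sims = docs_k @ q / (np.linalg.norm(docs_k, axis=1) * np.linalg.norm(q) + 1e-12)
      print(np.argsort(-sims))            # documents ranked by cosine similarity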
    Source
    Bulletin of library and information science. 1998, no.24, S.44-68
  5. McKinin, E.J.; Sievert, M.E.; Johnson, D.; Mitchell, J.A.: ¬The Medline/full-text research project (1991) 0.00
    0.003790876 = product of:
      0.02653613 = sum of:
        0.00856136 = weight(_text_:information in 5385) [ClassicSimilarity], result of:
          0.00856136 = score(doc=5385,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 5385, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5385)
        0.01797477 = weight(_text_:retrieval in 5385) [ClassicSimilarity], result of:
          0.01797477 = score(doc=5385,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.20052543 = fieldWeight in 5385, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=5385)
      0.14285715 = coord(2/14)
    
    Abstract
    This project was designed to test the relative efficacy of index terms and full text for the retrieval of documents in those MEDLINE journals for which full-text searching was also available. The full-text files used were MEDIS from Mead Data Central and CCML from BRS Information Technologies. One hundred clinical medical topics were searched in these two files as well as in the MEDLINE file to accumulate the necessary data. It was found that full text identified significantly more relevant articles than did the indexed file. Most relevant items missed in the full-text files, but identified in MEDLINE, were missed because the searcher failed to account for some aspect of natural language, used a logical or positional operator that was too restrictive, or included a concept which was implied, but not expressed, in the natural language. Very few of the unique relevant full-text citations would have been retrieved by title or abstract alone. Finally, as of July 1990, the most current issue of a journal was just as likely to appear in MEDLINE as in one of the full-text files.
    Source
    Journal of the American Society for Information Science. 42(1991), S.297-307
  6. Mallinson, P.: Developments in free text retrieval systems (1993) 0.00
    0.003706335 = product of:
      0.05188869 = sum of:
        0.05188869 = weight(_text_:retrieval in 4931) [ClassicSimilarity], result of:
          0.05188869 = score(doc=4931,freq=6.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.5788671 = fieldWeight in 4931, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=4931)
      0.071428575 = coord(1/14)
    
    Abstract
    Describes a typical traditional 1989 free text system and discusses developments in data storage, in search strategy and in the storage and retrieval of real time data. Outlines the following areas in which free text systems are likely to develop: standards; integration; dynamic data exchange; improved user interfaces; and better retrieval methods
  7. Rösener, C.: ¬Die Stecknadel im Heuhaufen : Natürlichsprachlicher Zugang zu Volltextdatenbanken (2005) 0.00
    0.003419585 = product of:
      0.023937095 = sum of:
        0.0069903214 = weight(_text_:information in 548) [ClassicSimilarity], result of:
          0.0069903214 = score(doc=548,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1343758 = fieldWeight in 548, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=548)
        0.016946774 = weight(_text_:retrieval in 548) [ClassicSimilarity], result of:
          0.016946774 = score(doc=548,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.18905719 = fieldWeight in 548, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=548)
      0.14285715 = coord(2/14)
    
    Abstract
    Curiously, the very means that today's information and knowledge society provides for acquiring and exchanging information have at the same time created a new and increasingly acute problem: it is becoming ever harder for the individual to select the genuinely relevant items from the enormous wealth of information on offer. This work investigates the possibility of improving the information seeker's access to full text databases by means of natural language interfaces. The scientific questions involved are first treated in detail. The author then describes various solution approaches and presents their successful implementation using a natural language interface for the Brockhaus Multimedial 2004.
    Content
    Enthält die Kapitel: 2: Wissensrepräsentation 2.1 Deklarative Wissensrepräsentation 2.2 Klassifikationen des BMM 2.3 Thesauri und Ontologien: existierende kommerzielle Software 2.4 Erstellung eines Thesaurus im Rahmen des LeWi-Projektes 3: Analysekomponenten 3.1 Sprachliche Phänomene in der maschinellen Textanalyse 3.2 Analysekomponenten: Lösungen und Forschungsansätze 3.3 Die Analysekomponenten im LeWi-Projekt 4: Information Retrieval 4.1 Grundlagen des Information Retrieval 4.2 Automatische Indexierungsmethoden und -verfahren 4.3 Automatische Indexierung des BMM im Rahmen des LeWi-Projektes 4.4 Suchstrategien und Suchablauf im LeWi-Kontext
  8. Kugler, A.: Automatisierte Volltexterschließung von Retrodigitalisaten am Beispiel historischer Zeitungen (2018) 0.00
    0.0033447412 = product of:
      0.046826374 = sum of:
        0.046826374 = weight(_text_:bibliothek in 4595) [ClassicSimilarity], result of:
          0.046826374 = score(doc=4595,freq=4.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.38489348 = fieldWeight in 4595, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.046875 = fieldNorm(doc=4595)
      0.071428575 = coord(1/14)
    
    Content
    Cf. http://journals.ub.uni-heidelberg.de/index.php/bibliothek/article/view/48394. See also URN (PDF): http://nbn-resolving.de/urn:nbn:de:bsz:16-pb-483949.
    Source
    Perspektive Bibliothek. 7(2018) H.1, S.33-54
  9. Rosemann, L.: ¬Die Volltextabfrage und das Alleinstellungsmerkmal des physischen Buches (2006) 0.00
    0.003331996 = product of:
      0.023323972 = sum of:
        0.016555622 = weight(_text_:bibliothek in 5142) [ClassicSimilarity], result of:
          0.016555622 = score(doc=5142,freq=2.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.13608038 = fieldWeight in 5142, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.0234375 = fieldNorm(doc=5142)
        0.0067683496 = weight(_text_:information in 5142) [ClassicSimilarity], result of:
          0.0067683496 = score(doc=5142,freq=10.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1301088 = fieldWeight in 5142, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=5142)
      0.14285715 = coord(2/14)
    
    Content
    "Mit Google Print bzw. mittlerweile Google Book Search und dem Projekt zur Übernahme eines brancheneigenen Portals zur Pflege und Bereitstellung digitaler Daten aus Büchern namens "Volltextsuche online" initiiert durch den Börsenverein des Deutschen Buchhandels tritt ein Thema auf den Plan, das im deutschen Sprachraum lange vernachlässigt wurde: allgemein gesprochen, die Notwendigkeit der Texterschließung durch Indexierung - sei sie gedruckt in Form von Registern im Anhang von Sach- und wissenschaftlichen Büchern oder ungedruckt in Form sog. Volltextabfragen per Suchmaske am Computer. Angesichts der exponentiell wachsenden Menge an Wissen ist es gut, wenn hierzu Überlegungen angestellt werden und damit die Chance besteht, neben der wirtschaftlichen auch über die wissenstheoretische Dimension dieser Dienste nachzudenken. Zweifellos wird die Bedeutung der Indexierung von Fließtext aus wissenstheoretischer Sicht in Zukunft noch weiter wachsen und bedeutet im Falle einer Online-Plattform (wenn sie denn in naher Zukunft eine hinreichend große Menge an Büchern in ihrem Datenbestand aufweisen wird) die Erfüllung eines Traumes für die wissenschaftliche Arbeit: Es ist fantastisch, in Millisekunden das Vorhandensein von Personen, Termen, Phrasen und Wortkomposita zu ermitteln, um die Ein- bzw. Nichteinschlägigkeit eines Buches und - mehr noch -vieler Bücher für die eigene Arbeit eindeutig beantworten zu können. Es ist fantastisch, im Trefferfall die gesuchte Information sogleich auf dem Monitor exzerpieren zu können oder sich auch bei ausbleibenden Treffern das Durcharbeiten eines ganzen Buches, vielleicht sogar einer halben Bibliothek ersparen zu können. Dabei ist das letztere Resultat mindestens eine genauso wichtige Information wie die erste, denn auch sie wird- man darf fast sagen, so gut wie immer - zu einer unglaublichen Ersparnis an Zeit verhelfen; hier bedeutet allein schon die Verringerung der Datenmenge einen Zuwachs an Wissen unter minimalem Zeitaufwand. Angesichts dieser Diagnose ist die These zu wagen, die digitale Revolution beginnt erst wirklich bei der Nutzung der Volltexte selbst als Datenquelle zur Wissensabfrage.
    . . . Ich plädiere hier aus den oben genannten wissenstheoretischen Gründen nicht nur für die Aufrechterhaltung eines Mindestmaßes an Registern und Indexen im Anhang von physischen Büchern, sondern sogar für deren Ausbau, deren standardmäßige Zugabe bei Sach- und wissenschaftlichen Büchern gerade angesichts der Volltextnutzung durch Online-Abfragen. Warum? Hierzu sechs Argumente: 1. Wie oben bereits angerissen, lehrt die Erfahrung bei CD-ROM-Zugaben zu opulenten Werken, dass Parallelmedien mit Parallelinhalten von den Nutzern nicht wirklich angenommen werden; es ist umständlich, zur Auffindung bestimmter Textstellen den Computer befragen zu müssen und die Fundstellen dann zwischen zwei Buchdeckeln nachzuschlagen. 2. Über frei wählbare Suchbegriffe seitens des Nutzers ist noch keine Qualität der Suchergebnisse garantiert. Erst das Einrechnen entsprechender Verweisungsbegriffe und Synonyme in die Suchabfrage führt zu Qualität des Ergebnisses. Die scheinbar eingesparten Kosten einer einmaligen bzw. abonnementartigen Investition in eine Online-Verfügbarkeit der Buchinhalte vonseiten der Verlage werden dann über die Hintertür doch wieder fällig, wenn sich nämlich herausstellt, dass Nutzer bei der von ihnen gesuchten Information nicht fündig werden, weil sie unter dem "falschen", d.h. entweder ihnen nicht bekannten oder einem ihnen gerade nicht präsenten Schlagwort gesucht haben. Die Online-Suchabfrage, die auf den ersten Blick höchst nutzerfreundlich erscheint, da eine ungeheure Menge an Titeln die Abfrage umfasst, erweist sich womöglich als wenig brauchbar, wenn sich die Trefferqualität aus den genannten Gründen als beschränkt herausstellt. 3. Nur bei entsprechenden Restriktionen des Zugangs bzw. der präsentierten Textausschnitte werden die Verlage es gewährleistet sehen, dass die Nutzerin, der Nutzer nicht vom Kauf des physischen Buches Abstand nehmen. Nur wenn die Nutzer wissen, dass ihnen gerade jene Informationen am Bildschirm vorenthalten werden, die sie im zu erwerbenden Buch mit Gewissheit finden werden, werden sie das Buch noch erwerben wollen. Wer auf die Schnelle nur ein Kochrezept aus einem teuer bebilderten Kochbuch der Oberklasse abrufen kann, wird das teure Kochbuch eben nicht mehr kaufen. Analog stellt sich die Frage, ob nicht aus diesem Grunde auch Bibliotheken erwägen werden, angesichts der elektronischen Präsenz teuerer physischer Bücher auf den Erwerb der Letzteren zu verzichten, wohl wissend, dass den Wissenschaftlern im Zweifel einige Mausklicks genügen, um die gewünschte Begriffsrecherche erschöpfend beantwortet zu finden.
    4. Vermutlich wird sich aufgrund der genannten Gründe der Buchservice Volltextsuche als heterogen darstellen: Einige Verlage werden gar nicht mitspielen, andere werden ein Buch im Vollzugriff, ein anderes nur zum Teil, ein drittes nur als Metainformation usw. indizieren lassen. Dies wird letztlich ebenfalls die Trefferqualität schmälern, da der Nutzer dann wiederum wissen muss, genau welche Informationen und Texte ihm bei seiner Suche vorenthalten werden. Das gedruckte Sachbuch wird gegen seinen eigenen digitalen Klon ein Alleinstellungsmerkmal brauchen, um weiterhin attraktiv zu sein. 5. Ein solches Alleinstellungsmerkmal würde m.E. maßgeblich durch die Erstellung von gedruckten Registern bereits in der Druckausgabe erreicht werden. Damit würde die Druckausgabe tatsächlich an Wert gewinnen und der Buchkäufer erhielte einen echten Mehrwert. Zum einen spiegelt sich bereits in der Erstellung konventioneller gedruckter Register die zweite digitale Revolution wider: Moderne Registererstellung basiert heutzutage ebenfalls auf der digitalen Verwertung des Volltextes. Zum anderen erfordert das "Registermachen" zugleich die Erbringung jener o.g. sachdienlichen Mehrinformationen wie Verweisungsbegriffe, vernünftige Klassifizierungen, nicht-redundante Begriffsauswahl etc., die nur begrenzt automatisierbar sind und Fachwissen erfordern. Erst diese beiden Komponenten lassen die Indexierung schlussendlich zu einer hochwertigen Aufbereitung sequentieller Information werden. 6. Genau diese Mehr- und Metainformationen, die die vorausgegangene Erstellung eines Print-Vollregisters geliefert hat, lassen sich dann in den Suchalgorithmus der Online-Suche zur Qualitätssteigerung der Treffer einrechnen."
    Source
    Information - Wissenschaft und Praxis. 57(2006) H.4, S.217-218
  10. Markey, K.; Atherton, P.; Newton, C.: ¬An analysis of controlled vocabulary and free text search statements in online searches (1980) 0.00
    0.002995795 = product of:
      0.04194113 = sum of:
        0.04194113 = weight(_text_:retrieval in 1401) [ClassicSimilarity], result of:
          0.04194113 = score(doc=1401,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.46789268 = fieldWeight in 1401, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.109375 = fieldNorm(doc=1401)
      0.071428575 = coord(1/14)
    
    Theme
    Verbale Doksprachen im Online-Retrieval
  11. Magennis, M.: Expert rule-based query expansion (1995) 0.00
    0.002995795 = product of:
      0.04194113 = sum of:
        0.04194113 = weight(_text_:retrieval in 5181) [ClassicSimilarity], result of:
          0.04194113 = score(doc=5181,freq=8.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.46789268 = fieldWeight in 5181, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5181)
      0.071428575 = coord(1/14)
    
    Abstract
    Examines how, for term-based free text retrieval, Interactive Query Expansion (IQE) provides better retrieval performance than Automatic Query Expansion (AQE), but the performance of IQE depends on the strategy employed by the user to select expansion terms. The aim is to build an expert query expansion system using term selection rules based on expert users' strategies. It is expected that such a system will achieve better performance for novice or inexperienced users than either AQE or IQE. The procedure is to discover expert IQE users' term selection strategies through observation and interrogation, to construct a rule-based query expansion (RQE) system based on these, and to compare the resulting retrieval performance with that of comparable AQE and IQE systems.
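    A minimal Python sketch of rule-based selection of expansion terms in the spirit described above; the candidate list, the rules and all thresholds are invented placeholders, not Magennis's actual rules.
      # candidate expansion terms as (collection frequency, association with the query); figures are invented
      CANDIDATES = {
          "ir":        (900,   0.82),
          "indexing":  (400,   0.55),
          "the":       (99999, 0.90),
          "thesaurus": (120,   0.35),
      }

      def expand(query_terms, candidates, max_new=2):
          # expert-style rules: skip query terms, reject very common terms,
          # require a minimum association score, add at most max_new terms
          picked = []
          for term, (freq, assoc) in sorted(candidates.items(), key=lambda kv: -kv[1][1]):
              if term in query_terms or freq > 10000 or assoc < 0.4:
                  continue
              picked.append(term)
              if len(picked) == max_new:
                  break
          return query_terms + picked

      print(expand(["information", "retrieval"], CANDIDATES))
      # ['information', 'retrieval', 'ir', 'indexing']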
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  12. Marcus, J.: Full text year in review : 1996 (1996) 0.00
    0.0028179463 = product of:
      0.039451245 = sum of:
        0.039451245 = weight(_text_:web in 7737) [ClassicSimilarity], result of:
          0.039451245 = score(doc=7737,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.4079388 = fieldWeight in 7737, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=7737)
      0.071428575 = coord(1/14)
    
    Abstract
    Reviews developments in full text databases in 1996. Online services are differentiated through quantity rather than niche specializations of content. Full text databases are appearing on the WWW. Examines examples of trade magazines on the WWW from the networking and data communications area. Covers: Network World Fusion; Data Communications on the Web; Communications Week Interactive; Network Computing Online; LAN Times Online; and LAN on the Web.
  13. Ashford, J.H.: Free text retrieval in the Welsh language : problems, and proposed working practice (1995) 0.00
    0.002420968 = product of:
      0.033893548 = sum of:
        0.033893548 = weight(_text_:retrieval in 6509) [ClassicSimilarity], result of:
          0.033893548 = score(doc=6509,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.37811437 = fieldWeight in 6509, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=6509)
      0.071428575 = coord(1/14)
    
    Abstract
    A bilingual Welsh-English full text database is planned for Inspection Reports of Her Majesty's Inspectors of Schools for Wales. Special requirements for free text retrieval in the Welsh language are identified, and practical solutions are proposed for problems arising from the use of standard text database products, some of which may also apply to other lesser-used languages
  14. Voorbij, H.: Title keywords and subject descriptors : a comparison of subject search entries of books in the humanities and social sciences (1998) 0.00
    0.0022955346 = product of:
      0.032137483 = sum of:
        0.032137483 = weight(_text_:wide in 4721) [ClassicSimilarity], result of:
          0.032137483 = score(doc=4721,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.24476713 = fieldWeight in 4721, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4721)
      0.071428575 = coord(1/14)
    
    Abstract
    In order to compare the value of subject descriptors and title keywords as entries to subject searches, two studies were carried out. Both studies concentrated on monographs in the humanities and social sciences, held by the online public access catalogue of the National Library of the Netherlands. In the first study, a comparison was made by subject librarians between the subject descriptors and the title keywords of 475 records. They could express their opinion on a scale from 1 (descriptor is exactly or almost the same as word in title) to 7 (descriptor does not appear in title at all). It was concluded that 37 per cent of the records are considerably enhanced by a subject descriptor, and 49 per cent slightly or considerably enhanced. In the second study, subject librarians performed subject searches using title keywords and subject descriptors on the same topic. The relative recall amounted to 48 per cent and 86 per cent respectively. Failure analysis revealed the reasons why so many records that were found by subject descriptors were not found by title keywords. First, although completely meaningless titles hardly ever appear, the title of a publication does not always offer sufficient clues for title keyword searching. In those cases, descriptors may enhance the record of a publication. A second and even more important task of subject descriptors is controlling the vocabulary. Many relevant titles cannot be retrieved by title keyword searching because of the wide diversity of ways of expressing a topic. Descriptors take away the burden of vocabulary control from the user.
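    The abstract does not define relative recall; on the usual pooled reading (an assumption, not a statement from the study), each search method is scored against the union of relevant records retrieved by both methods:
      \mathrm{relative\ recall}(m) = \frac{|\text{relevant records retrieved by method } m|}{|\text{relevant records retrieved by either method}|}
    On that reading, title keywords recovered 48 per cent and subject descriptors 86 per cent of the pooled relevant set.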
  15. Trinkwalder, A.: Wortdetektive : Volltext-Suchmaschinen für Festplatte und Intranet (2000) 0.00
    0.0021398535 = product of:
      0.029957948 = sum of:
        0.029957948 = weight(_text_:retrieval in 5318) [ClassicSimilarity], result of:
          0.029957948 = score(doc=5318,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.33420905 = fieldWeight in 5318, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=5318)
      0.071428575 = coord(1/14)
    
    Abstract
    Often it is our own untidiness that makes important texts impossible to find; sometimes, though, the sheer volume simply cannot be tamed and sensibly sorted. Text retrieval systems comb through the thicket and promise a quick route to valuable information.
  16. Dow Jones unveils knowledge indexing system (1997) 0.00
    0.0017118829 = product of:
      0.023966359 = sum of:
        0.023966359 = weight(_text_:retrieval in 751) [ClassicSimilarity], result of:
          0.023966359 = score(doc=751,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.26736724 = fieldWeight in 751, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=751)
      0.071428575 = coord(1/14)
    
    Abstract
    Dow Jones Interactive Publishing has developed a sophisticated automatic knowledge indexing system that will allow searchers of the Dow Jones News / Retrieval service to get highly targeted results from a search in the service's Publications Library. Instead of relying on a thesaurus of company names, the new system uses a combination of that basic algorithm plus unique rules based on the editorial styles of individual publications in the Library. Dow Jones has also announced its acceptance of the definitions of 'selected full text' and 'full text' from Bibliodata's Fulltext Sources Online directory.
  17. Pritchard-Schoch, T.: Comparing natural language retrieval : Win & Freestyle (1995) 0.00
    0.0017118829 = product of:
      0.023966359 = sum of:
        0.023966359 = weight(_text_:retrieval in 2546) [ClassicSimilarity], result of:
          0.023966359 = score(doc=2546,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.26736724 = fieldWeight in 2546, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=2546)
      0.071428575 = coord(1/14)
    
  18. Gross, T.; Taylor, A.G.; Joudrey, D.N.: Still a lot to lose : the role of controlled vocabulary in keyword searching (2015) 0.00
    0.0014978976 = product of:
      0.020970564 = sum of:
        0.020970564 = weight(_text_:retrieval in 2007) [ClassicSimilarity], result of:
          0.020970564 = score(doc=2007,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.23394634 = fieldWeight in 2007, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2007)
      0.071428575 = coord(1/14)
    
    Theme
    Verbale Doksprachen im Online-Retrieval
  19. Tenopir, C.; Ro, J.S.: Full text databases (1990) 0.00
    0.0014268934 = product of:
      0.019976506 = sum of:
        0.019976506 = weight(_text_:information in 1916) [ClassicSimilarity], result of:
          0.019976506 = score(doc=1916,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.3840108 = fieldWeight in 1916, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=1916)
      0.071428575 = coord(1/14)
    
    Footnote
    Rez. in: Information technology and libraries. 10(1991) S.156-157 (E. Kanter)
    Series
    New directions in information management; 21
  20. Sormunen, E.: Free-text searching in full-text databases : probing system limits (1993) 0.00
    0.0014268934 = product of:
      0.019976506 = sum of:
        0.019976506 = weight(_text_:information in 7120) [ClassicSimilarity], result of:
          0.019976506 = score(doc=7120,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.3840108 = fieldWeight in 7120, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=7120)
      0.071428575 = coord(1/14)
    
    Imprint
    Oxford : Learned Information
    Source
    Online information 93: 17th International Online Meeting Proceedings, London, 7.-9.12.1993. Ed. by D.I. Raitt et al

Languages

  • e 67
  • d 14
  • nl 2
  • chi 1
  • f 1

Types

  • a 77
  • m 3
  • x 3
  • s 2
  • r 1