Search (10 results, page 1 of 1)

  • theme_ss:"Computerlinguistik"
  • theme_ss:"Volltextretrieval"
  1. Warner, A.J.: ¬The role of linguistic analysis in full-text retrieval (1994) 0.01
    0.008234787 = product of:
      0.020586967 = sum of:
        0.009535614 = weight(_text_:a in 2992) [ClassicSimilarity], result of:
          0.009535614 = score(doc=2992,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17835285 = fieldWeight in 2992, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=2992)
        0.011051352 = product of:
          0.022102704 = sum of:
            0.022102704 = weight(_text_:information in 2992) [ClassicSimilarity], result of:
              0.022102704 = score(doc=2992,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.27153665 = fieldWeight in 2992, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2992)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Imprint
    Medford, NJ : Learned Information
    Type
    a
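  The score breakdowns in this result list follow Lucene's ClassicSimilarity explain format: each per-term score is queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf(freq) * idf * fieldNorm, and the partial sums are scaled by the coord factors. As a minimal sketch, the Python below reproduces the score of entry 1 from the factors listed above; the variable names are illustrative, not Lucene API calls.

    import math

    QUERY_NORM = 0.046368346  # queryNorm from the explain output above

    def term_score(freq, idf, field_norm):
        """ClassicSimilarity per-term score: queryWeight * fieldWeight."""
        query_weight = idf * QUERY_NORM                    # idf * queryNorm
        field_weight = math.sqrt(freq) * idf * field_norm  # tf(freq) * idf * fieldNorm
        return query_weight * field_weight

    # _text_:a           -> freq=2.0, idf=1.153047,  fieldNorm=0.109375
    score_a = term_score(2.0, 1.153047, 0.109375)            # ~0.009535614
    # _text_:information -> freq=2.0, idf=1.7554779, fieldNorm=0.109375, then coord(1/2)
    score_info = term_score(2.0, 1.7554779, 0.109375) * 0.5  # ~0.011051352

    print((score_a + score_info) * 2 / 5)  # coord(2/5) -> ~0.008234787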
  2. Wenzel, F.: Semantische Eingrenzung im Freitext-Retrieval auf der Basis morphologischer Segmentierungen (1980) 0.01
    0.0070104985 = product of:
      0.017526247 = sum of:
        0.009632425 = weight(_text_:a in 2037) [ClassicSimilarity], result of:
          0.009632425 = score(doc=2037,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18016359 = fieldWeight in 2037, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=2037)
        0.007893822 = product of:
          0.015787644 = sum of:
            0.015787644 = weight(_text_:information in 2037) [ClassicSimilarity], result of:
              0.015787644 = score(doc=2037,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.19395474 = fieldWeight in 2037, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2037)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The basic problem in free-text retrieval is that the retrieval language is not properly adapted to that of the author. Morphological segmentation, where words with the same root are grouped together in the inverted file, is an effective way to reduce noise and information loss, providing high recall but low precision. (A toy sketch of this root-based grouping follows this entry.)
    Type
    a
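  As a toy illustration of the root-based grouping described in entry 2, the sketch below builds an inverted file keyed on word roots so that inflected variants share one posting list. The suffix-stripping rule and the sample documents are illustrative assumptions, not the paper's segmentation method.

    from collections import defaultdict

    def stem(word):
        """Toy segmentation: strip a few common German suffixes (illustrative only)."""
        for suffix in ("ungen", "ung", "en", "es", "e", "s"):
            if word.endswith(suffix) and len(word) > len(suffix) + 2:
                return word[:-len(suffix)]
        return word

    def build_inverted_file(docs):
        """Index documents under word roots, so inflected forms of the
        same root share one posting list."""
        index = defaultdict(set)
        for doc_id, text in docs.items():
            for word in text.lower().split():
                index[stem(word)].add(doc_id)
        return index

    docs = {1: "morphologische Segmentierung", 2: "morphologischen Segmentierungen"}
    index = build_inverted_file(docs)
    print(index[stem("segmentierung")])  # both documents are found: {1, 2}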
  3. Wacholder, N.; Byrd, R.J.: Retrieving information from full text using linguistic knowledge (1994) 0.00
    0.0049160775 = product of:
      0.012290194 = sum of:
        0.004086692 = weight(_text_:a in 8524) [ClassicSimilarity], result of:
          0.004086692 = score(doc=8524,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.07643694 = fieldWeight in 8524, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=8524)
        0.008203502 = product of:
          0.016407004 = sum of:
            0.016407004 = weight(_text_:information in 8524) [ClassicSimilarity], result of:
              0.016407004 = score(doc=8524,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.20156369 = fieldWeight in 8524, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=8524)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Examines how techniques in the field of natural language processing can be applied to the analysis of text in information retrieval. State-of-the-art text searching programs cannot distinguish, for example, between occurrences of AIDS the sickness and aids as tools, or between library school and school, nor can they equate terms such as online and on-line, which are variants of the same form. To make these distinctions, systems must incorporate knowledge about the meaning of words in context. Research in natural language processing has concentrated on the automatic 'understanding' of language: how to analyze the grammatical structure and meaning of text. Although many aspects of this research remain experimental, describes how these techniques can be used to recognize spelling variants, names, acronyms, and abbreviations. (A toy variant-conflation sketch follows this entry.)
    Imprint
    Medford, NJ : Learned Information
    Type
    a
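  As a toy illustration of the variant handling discussed in entry 3, the sketch below conflates hyphen and case variants (on-line / online) while protecting listed acronyms, so that AIDS the sickness is not folded into aids the tool. The rules are illustrative assumptions, not the authors' system.

    # Acronyms that must keep their form (illustrative list).
    KNOWN_ACRONYMS = {"AIDS"}

    def normalize_variant(token):
        """Conflate hyphen and case spelling variants, but leave listed
        acronyms untouched so that AIDS != aids."""
        if token in KNOWN_ACRONYMS:
            return token
        return token.replace("-", "").lower()

    print([normalize_variant(t) for t in ["on-line", "online", "AIDS", "aids"]])
    # ['online', 'online', 'AIDS', 'aids']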
  4. Rösener, C.: ¬Die Stecknadel im Heuhaufen : Natürlichsprachlicher Zugang zu Volltextdatenbanken (2005) 0.00
    0.003277385 = product of:
      0.008193462 = sum of:
        0.002724461 = weight(_text_:a in 548) [ClassicSimilarity], result of:
          0.002724461 = score(doc=548,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.050957955 = fieldWeight in 548, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=548)
        0.0054690014 = product of:
          0.010938003 = sum of:
            0.010938003 = weight(_text_:information in 548) [ClassicSimilarity], result of:
              0.010938003 = score(doc=548,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1343758 = fieldWeight in 548, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=548)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The means that today's information and knowledge society offers for obtaining and exchanging information have, curiously, created a new and ever more acute problem at the same time: it is becoming increasingly difficult for the individual to select the genuinely relevant items from the enormous volume of information on offer. This work investigates whether natural-language interfaces can improve the information seeker's access to full-text databases. The underlying scientific questions are first treated in detail; the author then describes various approaches to a solution and, using a natural-language interface for the Brockhaus Multimedial 2004, presents their successful implementation.
    Content
    Contains the chapters: 2: Knowledge representation 2.1 Declarative knowledge representation 2.2 Classifications of the BMM 2.3 Thesauri and ontologies: existing commercial software 2.4 Building a thesaurus within the LeWi project 3: Analysis components 3.1 Linguistic phenomena in machine text analysis 3.2 Analysis components: solutions and research approaches 3.3 The analysis components in the LeWi project 4: Information Retrieval 4.1 Foundations of information retrieval 4.2 Automatic indexing methods and procedures 4.3 Automatic indexing of the BMM within the LeWi project 4.4 Search strategies and search flow in the LeWi context
    5: Interaction 5.1 Question-answering and dialogue systems: research and projects 5.2 Representation and visualization of knowledge 5.3 The dialogue system within the LeWi project 5.4 Result display and answer presentation in the LeWi context 6: Test environments and results 7: Results and outlook 7.1 Starting situation 7.2 Conclusions 7.3 Outlook Appendix A Excerpts from the coarse and fine classification of the BMM Appendix B MPRO - formal description of the most important features ... Appendix C Question typology with example sentences (excerpt) Appendix D Semantic features in the morphological lexicon (excerpt) Appendix E Example rules for question-type assignment Appendix F List of the possible searches in the LeWi dialogue module (excerpt) Appendix G Complete dialogue tree at the start of the project Appendix H Status states for determining follow-up questions (excerpt)
  5. Schwarz, C.: Freitextrecherche: Grenzen und Möglichkeiten (1982) 0.00
    0.0021795689 = product of:
      0.010897844 = sum of:
        0.010897844 = weight(_text_:a in 1349) [ClassicSimilarity], result of:
          0.010897844 = score(doc=1349,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.20383182 = fieldWeight in 1349, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=1349)
      0.2 = coord(1/5)
    
    Type
    a
  6. Godby, J.: Two techniques for the identification of phrases in full text (1995) 0.00
    0.0019071229 = product of:
      0.009535614 = sum of:
        0.009535614 = weight(_text_:a in 6829) [ClassicSimilarity], result of:
          0.009535614 = score(doc=6829,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17835285 = fieldWeight in 6829, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=6829)
      0.2 = coord(1/5)
    
    Type
    a
  7. Magennis, M.: Expert rule-based query expansion (1995) 0.00
    0.001651617 = product of:
      0.008258085 = sum of:
        0.008258085 = weight(_text_:a in 5181) [ClassicSimilarity], result of:
          0.008258085 = score(doc=5181,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1544581 = fieldWeight in 5181, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5181)
      0.2 = coord(1/5)
    
    Abstract
    Examines how, for term-based free-text retrieval, Interactive Query Expansion (IQE) provides better retrieval performance than Automatic Query Expansion (AQE), although the performance of IQE depends on the strategy employed by the user to select expansion terms. The aim is to build an expert query expansion system using term selection rules based on expert users' strategies. It is expected that such a system will achieve better performance for novice or inexperienced users than either AQE or IQE. The procedure is to discover expert IQE users' term selection strategies through observation and interrogation, to construct a rule-based query expansion (RQE) system based on these, and to compare the resulting retrieval performance with that of comparable AQE and IQE systems. (A toy rule-based selection sketch follows this entry.)
    Type
    a
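  As a toy illustration of the rule-based expansion idea in entry 7, the sketch below applies simple selection rules to candidate expansion terms. The candidate statistics and thresholds are illustrative assumptions, not rules elicited from expert searchers.

    # Candidate expansion terms with toy statistics: co-occurrence with the
    # query terms and document frequency (all values invented).
    candidates = {
        "retrieval": {"cooccurrence": 0.8, "doc_freq": 20772},
        "index":     {"cooccurrence": 0.6, "doc_freq": 3000},
        "the":       {"cooccurrence": 0.9, "doc_freq": 44000},
    }

    def select_expansion_terms(candidates, min_cooccurrence=0.5, max_doc_freq=40000):
        """Rule-based selection: keep terms that co-occur strongly with the
        query but are not so frequent that they would only add noise."""
        return [
            term
            for term, stats in candidates.items()
            if stats["cooccurrence"] >= min_cooccurrence
            and stats["doc_freq"] <= max_doc_freq
        ]

    print(select_expansion_terms(candidates))  # ['retrieval', 'index']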
  8. Graham, T.: ¬The free language approach to online catalogues : the user (1985) 0.00
    0.0016346768 = product of:
      0.008173384 = sum of:
        0.008173384 = weight(_text_:a in 1215) [ClassicSimilarity], result of:
          0.008173384 = score(doc=1215,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 1215, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=1215)
      0.2 = coord(1/5)
    
    Type
    a
  9. Allen, E.E.: Searching, naturally (1998) 0.00
    0.0016346768 = product of:
      0.008173384 = sum of:
        0.008173384 = weight(_text_:a in 2602) [ClassicSimilarity], result of:
          0.008173384 = score(doc=2602,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 2602, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=2602)
      0.2 = coord(1/5)
    
    Type
    a
  10. Pritchard-Schoch, T.: Comparing natural language retrieval : Win & Freestyle (1995) 0.00
    0.001541188 = product of:
      0.00770594 = sum of:
        0.00770594 = weight(_text_:a in 2546) [ClassicSimilarity], result of:
          0.00770594 = score(doc=2546,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14413087 = fieldWeight in 2546, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=2546)
      0.2 = coord(1/5)
    
    Abstract
    Reports on a comparison of two natural language interfaces to full-text legal databases: WIN for access to WESTLAW databases and FREESTYLE for access to the LEXIS database. Thirty legal issues, expressed as natural language queries, were presented to identical libraries in both systems. The top 20 ranked documents from each search were analyzed and reviewed for relevance to the legal issue. (A precision-at-20 sketch follows this entry.)
    Type
    a
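  The evaluation described in entry 10, judging the top 20 ranked documents per query for relevance, corresponds to computing precision at rank 20 for each system. A minimal sketch with made-up relevance judgements:

    def precision_at_k(judgements, k=20):
        """Fraction of the top-k retrieved documents judged relevant."""
        top_k = judgements[:k]
        return sum(top_k) / len(top_k)

    # 1 = relevant, 0 = not relevant, for one query's top 20 results (invented).
    win_judgements = [1, 1, 0, 1] * 5        # 15 of 20 relevant
    freestyle_judgements = [1, 0, 0, 1] * 5  # 10 of 20 relevant
    print(precision_at_k(win_judgements), precision_at_k(freestyle_judgements))  # 0.75 0.5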