Search (59 results, page 1 of 3)

  • theme_ss:"Computerlinguistik"
  1. Ruge, G.: Sprache und Computer : Wortbedeutung und Termassoziation. Methoden zur automatischen semantischen Klassifikation (1995) 0.12
    0.11525136 = product of:
      0.23050272 = sum of:
        0.23050272 = sum of:
          0.17613843 = weight(_text_:memory in 1534) [ClassicSimilarity], result of:
            0.17613843 = score(doc=1534,freq=2.0), product of:
              0.31615055 = queryWeight, product of:
                6.30326 = idf(docFreq=219, maxDocs=44218)
                0.050156675 = queryNorm
              0.5571347 = fieldWeight in 1534, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.30326 = idf(docFreq=219, maxDocs=44218)
                0.0625 = fieldNorm(doc=1534)
          0.054364298 = weight(_text_:22 in 1534) [ClassicSimilarity], result of:
            0.054364298 = score(doc=1534,freq=2.0), product of:
              0.17564014 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050156675 = queryNorm
              0.30952093 = fieldWeight in 1534, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1534)
      0.5 = coord(1/2)
    
    Content
    Contains the following chapters: (1) Motivation; (2) Language philosophical foundations; (3) Structural comparison of extensions; (4) Earlier approaches towards term association; (5) Experiments; (6) Spreading-activation networks or memory models; (7) Perspective. Appendices: Heads and modifiers of 'car'. Glossary. Index. [Language and computer. Word semantics and term association. Methods towards an automatic semantic classification]
    Footnote
    Rez. in: Knowledge organization 22(1995) no.3/4, S.182-184 (M.T. Rolland)
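The score breakdown shown for this first entry is Lucene's ClassicSimilarity explain output. As a minimal sketch, the per-term weight can be recomputed from the factors it reports (the constants below are assumptions copied from that output, not independently derived):

```python
import math

# Recompute the ClassicSimilarity weight for the term "memory" in entry 1,
# using the factors reported by the explain output above.
tf = math.sqrt(2.0)           # term-frequency factor: sqrt(freq), with freq = 2
idf = 6.30326                 # inverse document frequency (docFreq=219, maxDocs=44218)
query_norm = 0.050156675      # queryNorm: makes scores comparable across queries
field_norm = 0.0625           # fieldNorm: length normalization stored per field

query_weight = idf * query_norm        # "queryWeight" line in the explain tree
field_weight = tf * idf * field_norm   # "fieldWeight" line in the explain tree
weight = query_weight * field_weight   # final weight(_text_:memory ...)

print(round(weight, 4))
```

Multiplying this term weight into the `sum of:` and `coord(1/2)` factors of the tree reproduces the 0.12 document score shown above (small last-digit differences are float32 rounding in Lucene).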
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  3. Bowker, L.: Information retrieval in translation memory systems : assessment of current limitations and possibilities for future development (2002) 0.07
    Abstract
    A translation memory system is a new type of human language technology (HLT) tool that is gaining popularity among translators. Such tools allow translators to store previously translated texts in a type of aligned bilingual database, and to recycle relevant parts of these texts when producing new translations. Currently, these tools retrieve information from the database using superficial character string matching, which often results in poor precision and recall. This paper explains how translation memory systems work, and it considers some possible ways of introducing more sophisticated information retrieval techniques into such systems by taking syntactic and semantic similarity into account. Some of the suggested techniques are inspired by those used in other areas of HLT, and some by techniques used in information science.
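The "superficial character string matching" this abstract criticizes is typically an edit-distance similarity over stored source segments. A minimal sketch of such fuzzy retrieval (the helper names and example segments are hypothetical, not from Bowker's paper):

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    # Normalized similarity in [0, 1], roughly how TM "fuzzy match"
    # percentages are usually defined.
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

# Rank stored source segments against a new sentence to translate.
tm = ["The printer is out of paper.",
      "The printer is out of ink.",
      "Close the file menu."]
query = "The printer is out of toner."
best = max(tm, key=lambda seg: similarity(query, seg))
```

Because this matching sees only characters, syntactically or semantically equivalent segments with different surface forms score poorly, which is exactly the precision/recall limitation the paper addresses.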
  4. Kitano, H.: Speech-to-speech translation : a massively parallel memory-based approach (19??) 0.07
    
  5. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.04
    
    Source
    https://arxiv.org/abs/2212.06721
  6. Yang, Y.; Wilbur, J.: Using corpus statistics to remove redundant words in text categorization (1996) 0.03
    
    Abstract
    This article studies aggressive word removal in text categorization to reduce the noise in free texts and to enhance the computational efficiency of categorization. We use a novel stop word identification method to automatically generate domain-specific stoplists which are much larger than a conventional domain-independent stoplist. In our tests with 3 categorization methods on text collections from different domains/applications, significant numbers of words were removed without sacrificing categorization effectiveness. In the test of the Expert Network method on CACM documents, for example, an 87% removal of unique words reduced the vocabulary of documents from 8,002 distinct words to 1,045 words, which resulted in a 63% time savings and a 74% memory savings in the computation of category ranking, with a 10% precision improvement on average over not using word removal. It is evident in this study that automated word removal based on corpus statistics has a practical and significant impact on the computational tractability of categorization methods in large databases.
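The corpus-statistics idea can be sketched as ranking the vocabulary by a collection statistic and cutting an aggressive fraction into a domain-specific stoplist. This is only an illustration of the removal scheme under a simple frequency ranking; Yang and Wilbur's actual word-importance statistic differs:

```python
from collections import Counter

def domain_stoplist(docs: list[list[str]], remove_frac: float) -> set[str]:
    """Build a domain-specific stoplist from corpus statistics.
    Sketch only: ranks by collection frequency and removes the top
    remove_frac of the vocabulary (hypothetical stand-in for the
    paper's stop word identification statistic)."""
    freq = Counter(w for doc in docs for w in doc)
    ranked = [w for w, _ in freq.most_common()]   # most frequent first
    cut = int(len(ranked) * remove_frac)
    return set(ranked[:cut])

docs = [["the", "cache", "misses", "the", "memory"],
        ["the", "memory", "bus", "stalls"],
        ["a", "cache", "line", "fill"]]
stop = domain_stoplist(docs, remove_frac=0.5)
filtered = [[w for w in doc if w not in stop] for doc in docs]
```

At the paper's reported scale (removing 87% of 8,002 distinct words), such a stoplist is far larger than a conventional domain-independent one, which is what yields the time and memory savings in category ranking.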
  7. Langenscheidt und TRADOS binden über drei Millionen Übersetzungen in Terminologie-Datenbanken ein (2003) 0.03
    
    Content
    "The Langenscheidt KG, Munich, will from autumn 2003 offer over three million translations from its dictionaries and specialized dictionaries for the "MultiTerm" terminology databases of TRADOS, the world's leading provider of language technology. TRADOS's translation memory and terminology software has a market share of over eighty percent and is used above all by international companies and professional translators to produce valuable multilingual content. In the latest version of the terminology management software "MultiTerm", the vocabularies of seven general-language and eleven specialized Langenscheidt dictionaries can now be integrated, expanding the database, when fully exploited, by over three million headwords and phrases. This not only eases terminology work considerably; the uniform working interface also makes translating faster and more convenient. MultiTerm is available as a single-user version as well as a server-based network or online version. Interested parties will find further information and the respective contact persons at www.langenscheidt.de/b2b/ebusiness or www.trados.com/multiterm and www.trados.com/contact."
  8. Zimmermann, H.H.: Maschinelle und Computergestützte Übersetzung (2004) 0.03
    
    Abstract
    In what follows, Machine Translation (MT) means the fully automatic translation of a text in one natural language into another natural language. Human Translation (HT) means the intellectual translation of a text, with or without machine lexical aids and with or without word processing. Computer-aided translation (CAT) means, on the one hand, an intellectual translation that builds on a machine pre-translation/rough translation (MT) which is subsequently revised intellectually (post-editing); on the other hand, it means an intellectual translation in which a translation memory and/or a terminology bank is used before or during the intellectual translation process. ICAT means a special variant of CAT in which a user without (sufficient) knowledge of the target language is supported in translating from his or her native language in such a way that the target-language equivalent is relatively error-free.
  9. Hausser, R.: Language and nonlanguage cognition (2021) 0.03
    
    Abstract
    A basic distinction in agent-based data-driven Database Semantics (DBS) is between language and nonlanguage cognition. Language cognition transfers content between agents by means of raw data. Nonlanguage cognition maps between content and raw data inside the focus agent. Recognition applies a concept type to raw data, resulting in a concept token. In language recognition, the focus agent (hearer) takes raw language-data (surfaces) produced by another agent (speaker) as input, while nonlanguage recognition takes raw nonlanguage-data as input. In either case, the output is a content which is stored in the agent's onboard short term memory. Action adapts a concept type to a purpose, resulting in a token. In language action, the focus agent (speaker) produces language-dependent surfaces for another agent (hearer), while nonlanguage action produces intentions for a nonlanguage purpose. In either case, the output is raw action data. As long as the procedural implementation of place holder values works properly, it is compatible with the DBS requirement of input-output equivalence between the natural prototype and the artificial reconstruction.
  10. Warner, A.J.: Natural language processing (1987) 0.03
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  11. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.02
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  12. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    
    Date
    8.10.2000 11:52:22
  13. Somers, H.: Example-based machine translation : Review article (1999) 0.02
    
    Date
    31. 7.1996 9:22:19
  14. New tools for human translators (1997) 0.02
    
    Date
    31. 7.1996 9:22:19
  15. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.02
    
    Date
    28. 2.1999 10:48:22
  16. ¬Der Student aus dem Computer (2023) 0.02
    
    Date
    27. 1.2023 16:22:55
  17. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.02
    
    Date
    15. 3.2000 10:22:37
  18. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    
    Date
    1. 3.2013 14:56:22
  19. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.02
    
    Source
    c't. 2000, H.22, S.230-231
  20. SIGIR'92 : Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (1992) 0.02
    
    Content
     HARMAN, D.: Relevance feedback revisited; AALBERSBERG, I.J.: Incremental relevance feedback; TAGUE-SUTCLIFFE, J.: Measuring the informativeness of a retrieval process; LEWIS, D.D.: An evaluation of phrasal and clustered representations on a text categorization task; BLOSSEVILLE, M.J., G. HÉBRAIL, M.G. MONTEIL and N. PÉNOT: Automatic document classification: natural language processing, statistical analysis, and expert system techniques used together; MASAND, B., G. LINOFF and D. WALTZ: Classifying news stories using memory based reasoning; KEEN, E.M.: Term position ranking: some new test results; CROUCH, C.J. and B. YANG: Experiments in automatic statistical thesaurus construction; GREFENSTETTE, G.: Use of syntactic context to produce term association lists for text retrieval; ANICK, P.G. and R.A. FLYNN: Versioning of full-text information retrieval system; BURKOWSKI, F.J.: Retrieval activities in a database consisting of heterogeneous collections; DEERWESTER, S.C., K. WACLENA and M. LaMAR: A textual object management system; NIE, J.-Y.: Towards a probabilistic modal logic for semantic-based information retrieval; WANG, A.W., S.K.M. WONG and Y.Y. YAO: An analysis of vector space models based on computational geometry; BARTELL, B.T., G.W. COTTRELL and R.K. BELEW: Latent semantic indexing is an optimal special case of multidimensional scaling; GLAVITSCH, U. and P. SCHÄUBLE: A system for retrieving speech documents; MARGULIS, E.L.: N-Poisson document modelling; HESS, M.: An incrementally extensible document retrieval system based on linguistics and logical principles; COOPER, W.S., F.C. GEY and D.P. DABNEY: Probabilistic retrieval based on staged logistic regression; FUHR, N.: Integration of probabilistic fact and text retrieval; CROFT, B., L.A. SMITH and H. TURTLE: A loosely-coupled integration of a text retrieval system and an object-oriented database system; DUMAIS, S.T. and J. NIELSEN: Automating the assignment of submitted manuscripts to reviewers; GOST, M.A. and M. MASOTTI: Design of an OPAC database to permit different subject searching accesses; ROBERTSON, A.M. and P. WILLETT: Searching for historical word forms in a database of 17th century English text using spelling correction methods; FOX, E.A., Q.F. CHEN and L.S. HEATH: A faster algorithm for constructing minimal perfect hash functions; MOFFAT, A. and J. ZOBEL: Parameterised compression for sparse bitmaps; GRANDI, F., P. TIBERIO and P. ZEZULA: Frame-sliced partitioned parallel signature files; ALLEN, B.: Cognitive differences in end user searching of a CD-ROM index; SONNENWALD, D.H.: Developing a theory to guide the process of designing information retrieval systems; CUTTING, D.R., J.O. PEDERSEN, D. KARGER and J.W. TUKEY: Scatter/Gather: a cluster-based approach to browsing large document collections; CHALMERS, M. and P. CHITSON: Bead: Explorations in information visualization; WILLIAMSON, C. and B. SHNEIDERMAN: The dynamic HomeFinder: evaluating dynamic queries in a real-estate information exploring system

Languages

  • e 41
  • d 18

Types

  • a 44
  • el 6
  • m 6
  • s 4
  • p 3
  • x 2
  • d 1