Search (87 results, page 1 of 5)

  • language_ss:"e"
  • theme_ss:"Computerlinguistik"
  • type_ss:"a"
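  The relevance score printed after each entry (0.07, 0.03, …) comes from Lucene's ClassicSimilarity (tf-idf) ranking. As a rough sketch, not the full scoring pipeline, the per-term fieldWeight it combines can be computed like this:

  ```python
  import math

  # Hedged sketch of Lucene ClassicSimilarity's per-term fieldWeight
  # (tf * idf * fieldNorm), which underlies the relevance scores shown here.

  def idf(doc_freq: int, max_docs: int) -> float:
      # idf = 1 + ln(maxDocs / (docFreq + 1))
      return 1.0 + math.log(max_docs / (doc_freq + 1))

  def field_weight(freq: float, doc_freq: int, max_docs: int,
                   field_norm: float) -> float:
      # tf = sqrt(termFreq); fieldWeight = tf * idf * fieldNorm
      return math.sqrt(freq) * idf(doc_freq, max_docs) * field_norm

  # e.g. a term occurring twice in a field, in 3622 of 44218 documents:
  print(field_weight(2.0, 3622, 44218, 0.046875))  # ≈ 0.2321
  ```

  The final document score additionally multiplies in query normalization and coordination factors before being truncated for display.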
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.07
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Paolillo, J.C.: Linguistics and the information sciences (2009) 0.07
    Content
    Available online at: http://dx.doi.org/10.1081/E-ELIS3-120044491. Cf.: http://www.tandfonline.com/doi/book/10.1081/E-ELIS3.
    Date
    27. 8.2011 14:22:33
  3. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.03
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon, and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes the development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language System (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
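    The spelling-suggestion approach described above rests on measuring similarity between words. A generic sketch of that idea using Levenshtein edit distance (an illustration only, not NLM's actual AZdict/ChemSpell code):

    ```python
    def edit_distance(a: str, b: str) -> int:
        # Levenshtein distance via dynamic programming: the number of
        # insertions, deletions, and substitutions turning a into b.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    print(edit_distance("toxicolgy", "toxicology"))  # 1 (one missing letter)
    ```

    A suggestion system would rank dictionary entries by this (or a similar) distance from the misspelled query term.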
  4. Schwarz, C.: THESYS: Thesaurus Syntax System : a fully automatic thesaurus building aid (1988) 0.03
    Abstract
    THESYS is based on the natural language processing of free-text databases. It yields statistically evaluated correlations between words in the database. These correlations correspond to traditional thesaurus relations. The person who has to build a thesaurus is thus assisted by the proposals made by THESYS. THESYS is being tested on commercial databases under real-world conditions. It is part of a text processing project at Siemens called TINA (Text-Inhalts-Analyse). Software from TINA is currently being applied and evaluated by the US Department of Commerce for patent search and indexing (REALIST: REtrieval Aids by Linguistics and STatistics)
    Date
    6. 1.1999 10:22:07
  5. Moisl, H.: Artificial neural networks and Natural Language Processing (2009) 0.03
    Footnote
    Cf.: http://www.tandfonline.com/doi/book/10.1081/E-ELIS3.
  6. Liddy, E.D.: Natural language processing for information retrieval (2009) 0.03
    Footnote
    Cf.: http://www.tandfonline.com/doi/book/10.1081/E-ELIS3.
  7. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.03
    Source
    https://arxiv.org/abs/2212.06721
  8. Heinrichs, J.: Language theory for the computer : monodimensional semantics or multidimensional semiotics? (1996) 0.03
    Abstract
    Computer linguistics continues to be in need of an integrative language-theory model. Maria Theresia Rolland proposes such a model in her book 'Sprachverarbeitung durch Logotechnik' (1994). Relying upon the language theory of Leo Weisgerber, she pursues a purely 'content-oriented' approach, by which she understands an approach in terms of the semantics of words. Starting from the 'implications' of word contents, she attempts to construct a complete grammar of the German language. The reviewer begins his comments with an immanent critique, calling attention to a number of serious contradictions in Rolland's concept, among them her refusal to take syntax into account despite its undeniable real presence. In the second part of his comments, the reviewer then takes up his own semiotic language theory, published in 1981, showing that semantics is but one of four semiotic dimensions of language, the other dimensions being the sigmatic, the pragmatic, and the syntactic. Without taking all four dimensions into account, no theory can offer an adequate integrative language model. Indeed, without all four dimensions, one cannot even develop an adequate grammar of German sentence construction. The fourfold semiotic model also discloses the universally valid structures of language as the intersubjective expression of human self-awareness. Only on the basis of these universal structures, it is argued, is it possible to identify the specific structures of a native language, and that on all four levels. This position has important consequences for the problems of computer translation and the comparative study and use of languages
    Footnote
    Reflections on M.T. Rolland's book 'Sprachverarbeitung durch Logotechnik'
  9. Warner, A.J.: Natural language processing (1987) 0.02
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  10. Prasad, A.R.D.; Kar, B.B.: Parsing Boolean search expression using definite clause grammars (1994) 0.02
    Abstract
    Briefly discusses the role of search languages in information retrieval and broadly groups the search languages into four categories. Explains the idea of definite clause grammars and demonstrates how parsers for Boolean logic-based search languages can easily be developed. Presents partial Prolog code for the parser that was used in an object-oriented bibliographic database management system
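    The grammar-driven parsing idea can be sketched in a few lines. This is a Python recursive-descent analogue of a definite-clause-grammar parser for a small assumed Boolean query grammar (expr := term (AND|OR term)*; term := NOT term | '(' expr ')' | word), not the authors' Prolog code:

    ```python
    import re

    def tokenize(query):
        # words, parentheses; operators are the upper-case words AND/OR/NOT
        return re.findall(r'\(|\)|\w+', query)

    def parse(tokens):
        expr, rest = parse_expr(tokens)
        if rest:
            raise ValueError(f"trailing tokens: {rest}")
        return expr

    def parse_expr(tokens):
        left, tokens = parse_term(tokens)
        while tokens and tokens[0] in ('AND', 'OR'):
            op, tokens = tokens[0], tokens[1:]
            right, tokens = parse_term(tokens)
            left = (op, left, right)       # left-associative binary operators
        return left, tokens

    def parse_term(tokens):
        if tokens[0] == 'NOT':
            sub, tokens = parse_term(tokens[1:])
            return ('NOT', sub), tokens
        if tokens[0] == '(':
            sub, tokens = parse_expr(tokens[1:])
            return sub, tokens[1:]         # drop the closing ')'
        return ('WORD', tokens[0]), tokens[1:]

    print(parse(tokenize("cat AND (dog OR NOT mouse)")))
    ```

    The resulting tuple tree can then be evaluated against an index, which is essentially what the paper's Prolog DCG version produces declaratively.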
  11. Zaitseva, E.M.: Developing linguistic tools of thematic search in library information systems (2023) 0.02
    Abstract
    Within the R&D program "Information support of research by scientists and specialists on the basis of RNPLS&T Open Archive - the system of scientific knowledge aggregation", the RNPLS&T analyzes the use of linguistic tools for thematic search in modern library information systems and the prospects for their development. The author defines the key common characteristics of the e-catalogs of the largest Russian libraries revealed at the first stage of the analysis. Based on these common characteristics and a detailed comparative analysis, the author outlines and substantiates the vectors for enhancing the search interfaces of e-catalogs. The focus is on linguistic tools for thematic search in library information systems; the key vectors suggested are: use of thematic search at different search levels with clear-cut level differentiation; use of combined functionality within the thematic search system; implementation of classification search in all e-catalogs; hierarchical representation of classifications; and use of matching systems for classification information retrieval languages and, in the long term, for classification and verbal information retrieval languages and various verbal information retrieval languages. The author formulates practical recommendations to improve thematic search in library information systems.
  12. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.02
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  13. Ruge, G.: A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    Date
    8.10.2000 11:52:22
  14. Somers, H.: Example-based machine translation : Review article (1999) 0.02
    Date
    31. 7.1996 9:22:19
  15. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.02
    Date
    28. 2.1999 10:48:22
  16. Krueger, S.: Getting more out of NEXIS (1996) 0.02
    Abstract
    The MORE search command on the LEXIS/NEXIS online databases analyzes the words in a retrieved document, selects and creates a FREESTYLE search, and retrieves the 25 most relevant documents. Shows how MORE works and gives advice about when and when not to use it
  17. Griffith, C.: FREESTYLE: LEXIS-NEXIS goes natural (1994) 0.02
    Abstract
    Describes FREESTYLE, the associative language search engine, developed by Mead Data Central for its LEXIS/NEXIS online service. The special feature of the associative language in FREESTYLE allows users to enter search descriptions in plain English
  18. Notess, G.R.: Up and coming search technologies (2000) 0.02
  19. Hsinchun, C.: Knowledge-based document retrieval framework and design (1992) 0.02
    Abstract
    Presents research on the design of knowledge-based document retrieval systems in which a semantic network was adopted to represent subject knowledge and classification scheme knowledge and experts' search strategies and user modelling capability were modelled as procedural knowledge. These functionalities were incorporated into a prototype knowledge-based retrieval system, Metacat. Describes a system, the design of which was based on the blackboard architecture, which was able to create a user profile, identify task requirements, suggest heuristics-based search strategies, perform semantic-based search assistance, and assist online query refinement
  20. Robertson, S.E.; Sparck Jones, K.: Relevance weighting of search terms (1976) 0.02
    Abstract
    Examines statistical techniques for exploiting relevance information to weight search terms. These techniques are presented as a natural extension of weighting methods using information about the distribution of index terms in documents in general. A series of relevance weighting functions is derived and is justified by theoretical considerations. In particular, it is shown that specific weighted search methods are implied by a general probabilistic theory of retrieval. Different applications of relevance weighting are illustrated by experimental results for test collections
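    The best-known weighting function from this line of work is the Robertson/Sparck Jones relevance weight with the 0.5 continuity correction. A minimal sketch (one member of the family of functions the paper derives, stated here from the standard form of the formula):

    ```python
    import math

    def rsj_weight(N: int, n: int, R: int, r: int) -> float:
        # Robertson/Sparck Jones relevance weight, point-5 form:
        # N docs in the collection, n containing the term,
        # R known relevant docs, r relevant docs containing the term.
        return math.log(((r + 0.5) / (R - r + 0.5)) /
                        ((n - r + 0.5) / (N - n - R + r + 0.5)))
    ```

    With no relevance information (R = r = 0) the weight reduces to an idf-like quantity, which is the sense in which the paper presents relevance weighting as a natural extension of term-distribution weighting.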
