Search (28 results, page 1 of 2)

  • author_ss:"Croft, W.B."
  1. Belkin, N.J.; Croft, W.B.: Information filtering and information retrieval : two sides of the same coin? (1992) 0.29
    0.29082578 = product of:
      0.38776773 = sum of:
        0.022096837 = weight(_text_:for in 6093) [ClassicSimilarity], result of:
          0.022096837 = score(doc=6093,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 6093, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.09375 = fieldNorm(doc=6093)
        0.19179249 = weight(_text_:computing in 6093) [ClassicSimilarity], result of:
          0.19179249 = score(doc=6093,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.73337615 = fieldWeight in 6093, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.09375 = fieldNorm(doc=6093)
        0.17387839 = product of:
          0.34775677 = sum of:
            0.34775677 = weight(_text_:machinery in 6093) [ClassicSimilarity], result of:
              0.34775677 = score(doc=6093,freq=2.0), product of:
                0.35214928 = queryWeight, product of:
                  7.448392 = idf(docFreq=69, maxDocs=44218)
                  0.047278564 = queryNorm
                0.9875266 = fieldWeight in 6093, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.448392 = idf(docFreq=69, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6093)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Source
    Communications of the Association for Computing Machinery. 35(1992) no.12, S.29-38
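
The scoring breakdown above is Lucene ClassicSimilarity (TF-IDF) "explain" output. As a rough cross-check, the small Python sketch below recombines the values reported for result 1 (idf, queryNorm, fieldNorm, term frequencies, coord factors) into the listed score of 0.29082578. The idf and norm values are taken verbatim from the explain tree rather than rederived, and the helper name term_weight is ours, not Lucene's.

    from math import sqrt

    QUERY_NORM = 0.047278564   # queryNorm reported in the explain tree
    FIELD_NORM = 0.09375       # fieldNorm(doc=6093)

    def term_weight(idf, freq):
        """ClassicSimilarity per-term weight = queryWeight * fieldWeight."""
        query_weight = idf * QUERY_NORM               # idf * queryNorm
        field_weight = sqrt(freq) * idf * FIELD_NORM  # tf * idf * fieldNorm
        return query_weight * field_weight

    w_for       = term_weight(idf=1.8775425, freq=2.0)
    w_computing = term_weight(idf=5.5314693, freq=2.0)
    w_machinery = term_weight(idf=7.448392,  freq=2.0) * 0.5   # inner coord(1/2)

    score = (w_for + w_computing + w_machinery) * 0.75          # outer coord(3/4)
    print(score)   # ~0.2908258, matching the reported 0.29082578
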
  2. Belkin, N.J.; Croft, W.B.: Retrieval techniques (1987) 0.01
    0.012811186 = product of:
      0.051244743 = sum of:
        0.051244743 = product of:
          0.10248949 = sum of:
            0.10248949 = weight(_text_:22 in 334) [ClassicSimilarity], result of:
              0.10248949 = score(doc=334,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.61904186 = fieldWeight in 334, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=334)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Annual review of information science and technology. 22(1987), S.109-145
  3. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.01
    0.008006992 = product of:
      0.032027967 = sum of:
        0.032027967 = product of:
          0.064055935 = sum of:
            0.064055935 = weight(_text_:22 in 3103) [ClassicSimilarity], result of:
              0.064055935 = score(doc=3103,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.38690117 = fieldWeight in 3103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3103)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    27. 2.1999 20:55:22
  4. Croft, W.B.; Turtle, H.R.: Retrieval strategies for hypertext (1993) 0.01
    0.0073656123 = product of:
      0.02946245 = sum of:
        0.02946245 = weight(_text_:for in 4711) [ClassicSimilarity], result of:
          0.02946245 = score(doc=4711,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.33190575 = fieldWeight in 4711, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.125 = fieldNorm(doc=4711)
      0.25 = coord(1/4)
    
  5. Croft, W.B.: Clustering large files of documents using the single link method (1977) 0.01
    0.0073656123 = product of:
      0.02946245 = sum of:
        0.02946245 = weight(_text_:for in 5489) [ClassicSimilarity], result of:
          0.02946245 = score(doc=5489,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.33190575 = fieldWeight in 5489, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.125 = fieldNorm(doc=5489)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science. 28(1977), S.341-344
  6. Shneiderman, B.; Byrd, D.; Croft, W.B.: Clarifying search : a user-interface framework for text searches (1997) 0.01
    0.0073656123 = product of:
      0.02946245 = sum of:
        0.02946245 = weight(_text_:for in 1471) [ClassicSimilarity], result of:
          0.02946245 = score(doc=1471,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.33190575 = fieldWeight in 1471, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.125 = fieldNorm(doc=1471)
      0.25 = coord(1/4)
    
  7. Shneiderman, B.; Byrd, D.; Croft, W.B.: Clarifying search : a user-interface framework for text searches (1997) 0.01
    0.0073656123 = product of:
      0.02946245 = sum of:
        0.02946245 = weight(_text_:for in 1258) [ClassicSimilarity], result of:
          0.02946245 = score(doc=1258,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.33190575 = fieldWeight in 1258, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=1258)
      0.25 = coord(1/4)
    
    Abstract
    Current user interfaces for textual database searching leave much to be desired: individually, they are often confusing, and as a group, they are seriously inconsistent. We propose a four-phase framework for user-interface design: the framework provides common structure and terminology for searching while preserving the distinct features of individual collections and search mechanisms. Users will benefit from faster learning, increased comprehension, and better control, leading to more effective searches and higher satisfaction.
  8. Croft, W.B.: Combining approaches to information retrieval (2000) 0.01
    0.0067657465 = product of:
      0.027062986 = sum of:
        0.027062986 = weight(_text_:for in 6862) [ClassicSimilarity], result of:
          0.027062986 = score(doc=6862,freq=12.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.3048749 = fieldWeight in 6862, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=6862)
      0.25 = coord(1/4)
    
    Abstract
    The combination of different text representations and search strategies has become a standard technique for improving the effectiveness of information retrieval. Combination, for example, has been studied extensively in the TREC evaluations and is the basis of the "meta-search" engines used on the Web. This paper examines the development of this technique, including both experimental results and the retrieval models that have been proposed as formal frameworks for combination. We show that combining approaches for information retrieval can be modeled as combining the outputs of multiple classifiers based on one or more representations, and that this simple model can provide explanations for many of the experimental results. We also show that this view of combination is very similar to the inference net model, and that a new approach to retrieval based on language models supports combination and can be integrated with the inference net model
    Source
    Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft
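
As a loose illustration of the combination idea summarized in the abstract above (treating each retrieval run as one "classifier" whose document scores are merged), here is a minimal sketch of the standard CombSUM fusion heuristic. It is offered only as an example of combining ranked outputs; it is not the inference-network or language-model treatment the paper develops, and the runs and document ids are invented.

    def min_max(scores):
        """Normalize one run's scores to [0, 1]."""
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {doc: (s - lo) / span for doc, s in scores.items()}

    def comb_sum(runs):
        """CombSUM: sum each document's normalized scores over all runs."""
        fused = {}
        for run in runs:
            for doc, s in min_max(run).items():
                fused[doc] = fused.get(doc, 0.0) + s
        return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

    # Two hypothetical runs, e.g. different query representations or strategies
    run_a = {"d1": 12.0, "d2": 7.5, "d3": 3.1}
    run_b = {"d2": 0.81, "d4": 0.66, "d1": 0.40}
    print(comb_sum([run_a, run_b]))   # d2 ranks first: retrieved highly by both runs
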
  9. Murdock, V.; Kelly, D.; Croft, W.B.; Belkin, N.J.; Yuan, X.: Identifying and improving retrieval for procedural questions (2007) 0.01
    0.0067657465 = product of:
      0.027062986 = sum of:
        0.027062986 = weight(_text_:for in 902) [ClassicSimilarity], result of:
          0.027062986 = score(doc=902,freq=12.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.3048749 = fieldWeight in 902, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=902)
      0.25 = coord(1/4)
    
    Abstract
    People use questions to elicit information from other people in their everyday lives, and yet the most common method of obtaining information from a search engine is by posing keywords. There has been research that suggests users are better at expressing their information needs in natural language; however, the vast majority of work to improve document retrieval has focused on queries posed as sets of keywords or Boolean queries. This paper focuses on improving document retrieval for the subset of natural language questions asking about how something is done. We classify questions as asking either for a description of a process or for a statement of fact, with better than 90% accuracy. Further, we identify non-content features of documents relevant to questions asking about a process. Finally, we demonstrate that we can use these features to significantly improve the precision of document retrieval results for questions asking about a process. Our approach, based on exploiting the structure of documents, shows a significant improvement in precision at rank one for questions asking about how something is done.
  10. Jing, Y.; Croft, W.B.: An association thesaurus for information retrieval (199?) 0.01
    0.0064449105 = product of:
      0.025779642 = sum of:
        0.025779642 = weight(_text_:for in 4494) [ClassicSimilarity], result of:
          0.025779642 = score(doc=4494,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.29041752 = fieldWeight in 4494, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4494)
      0.25 = coord(1/4)
    
    Abstract
    Although commonly used in both commercial and experimental information retrieval systems, thesauri have not demonstrated consistent benefits for retrieval performance, and it is difficult to construct a thesaurus automatically for large text databases. In this paper, an approach, called PhraseFinder, is proposed to construct collection-dependent association thesauri automatically using large full-text document collections. The association thesaurus can be accessed through natural language queries in INQUERY, an information retrieval system based on the probabilistic inference network. Experiments are conducted in INQUERY to evaluate different types of association thesauri, and thesauri constructed for a variety of collections
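
PhraseFinder itself is an INQUERY component and is not reproduced here; the sketch below only illustrates the general notion of a collection-dependent association thesaurus, in which terms that frequently co-occur within the same documents are recorded as associations. The tokenization, the co-occurrence window (whole documents), and the threshold are simplifying assumptions, not the paper's method.

    from collections import Counter, defaultdict
    from itertools import combinations

    def build_association_thesaurus(docs, min_cooccur=2):
        """Count within-document term co-occurrences; keep the frequent pairs."""
        cooccur = defaultdict(Counter)
        for doc in docs:
            terms = set(doc.lower().split())          # crude tokenization
            for a, b in combinations(sorted(terms), 2):
                cooccur[a][b] += 1
                cooccur[b][a] += 1
        return {t: [assoc for assoc, n in c.most_common() if n >= min_cooccur]
                for t, c in cooccur.items()}

    docs = ["query expansion improves retrieval",
            "automatic query expansion for text retrieval",
            "retrieval models and query expansion"]
    print(build_association_thesaurus(docs)["query"])   # ['expansion', 'retrieval']
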
  11. Croft, W.B.; Thompson, R.H.: Support for browsing in an intelligent text retrieval system (1989) 0.01
    0.0064449105 = product of:
      0.025779642 = sum of:
        0.025779642 = weight(_text_:for in 5004) [ClassicSimilarity], result of:
          0.025779642 = score(doc=5004,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.29041752 = fieldWeight in 5004, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.109375 = fieldNorm(doc=5004)
      0.25 = coord(1/4)
    
  12. Croft, W.B.; Thompson, R.H.: I3R: a new approach to the design of document retrieval systems (1987) 0.01
    0.0064449105 = product of:
      0.025779642 = sum of:
        0.025779642 = weight(_text_:for in 3898) [ClassicSimilarity], result of:
          0.025779642 = score(doc=3898,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.29041752 = fieldWeight in 3898, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.109375 = fieldNorm(doc=3898)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science. 38(1987), S.389-404
  13. Xu, J.; Croft, W.B.: Topic-based language models for distributed retrieval (2000) 0.01
    0.0061762533 = product of:
      0.024705013 = sum of:
        0.024705013 = weight(_text_:for in 38) [ClassicSimilarity], result of:
          0.024705013 = score(doc=38,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.27831143 = fieldWeight in 38, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=38)
      0.25 = coord(1/4)
    
    Abstract
    Effective retrieval in a distributed environment is an important but difficult problem. Lack of effectiveness appears to have two major causes. First, existing collection selection algorithms do not work well on heterogeneous collections. Second, relevant documents are scattered over many collections and searching a few collections misses many relevant documents. We propose a topic-oriented approach to distributed retrieval. With this approach, we structure the document set of a distributed retrieval environment around a set of topics. Retrieval for a query involves first selecting the right topics for the query and then dispatching the search process to collections that contain such topics. The content of a topic is characterized by a language model. In environments where the labeling of documents by topics is unavailable, document clustering is employed for topic identification. Based on these ideas, three methods are proposed to suit different environments. We show that all three methods improve effectiveness of distributed retrieval
    Source
    Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft
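
Very roughly, the topic-oriented selection described in the abstract above can be pictured as: represent each topic by a unigram language model, rank topics by how well they "generate" the query, and dispatch the search only to collections holding the top topics. The Dirichlet smoothing, the mu value, and the topic-to-collection mapping below are generic placeholders, not the specific methods evaluated in the paper.

    from math import log

    def topic_log_likelihood(query_terms, topic_tf, background, mu=1000):
        """Query log-likelihood under a Dirichlet-smoothed topic language model."""
        topic_len = sum(topic_tf.values())
        ll = 0.0
        for t in query_terms:
            p_bg = background.get(t, 1e-6)
            ll += log((topic_tf.get(t, 0) + mu * p_bg) / (topic_len + mu))
        return ll

    def select_collections(query_terms, topics, topic_to_collections, background, k=2):
        """Rank topics by query likelihood; return collections for the top k."""
        ranked = sorted(topics,
                        key=lambda t: topic_log_likelihood(query_terms, topics[t], background),
                        reverse=True)
        chosen = set()
        for topic in ranked[:k]:
            chosen.update(topic_to_collections[topic])
        return chosen

    topics  = {"ir": {"retrieval": 40, "query": 25}, "db": {"sql": 50, "index": 20}}
    mapping = {"ir": {"collectionA"}, "db": {"collectionB"}}
    print(select_collections(["query", "retrieval"], topics, mapping, background={}, k=1))
    # -> {'collectionA'}
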
  14. Croft, W.B.: What do people want from information retrieval? : the top 10 research issues for companies that use and sell IR systems (1995) 0.01
    0.0055242092 = product of:
      0.022096837 = sum of:
        0.022096837 = weight(_text_:for in 3402) [ClassicSimilarity], result of:
          0.022096837 = score(doc=3402,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 3402, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.09375 = fieldNorm(doc=3402)
      0.25 = coord(1/4)
    
  15. Ballesteros, L.; Croft, W.B.: Statistical methods for cross-language information retrieval (1998) 0.01
    0.0055242092 = product of:
      0.022096837 = sum of:
        0.022096837 = weight(_text_:for in 6303) [ClassicSimilarity], result of:
          0.022096837 = score(doc=6303,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 6303, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.09375 = fieldNorm(doc=6303)
      0.25 = coord(1/4)
    
  16. Croft, W.B.: Advances in information retrieval : Recent research from the Center for Intelligent Information Retrieval (2000) 0.01
    0.0055242092 = product of:
      0.022096837 = sum of:
        0.022096837 = weight(_text_:for in 6860) [ClassicSimilarity], result of:
          0.022096837 = score(doc=6860,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 6860, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=6860)
      0.25 = coord(1/4)
    
    Content
    Contains the contributions: CROFT, W.B.: Combining approaches to information retrieval; GREIFF, W.R.: The use of exploratory data analysis in information retrieval research; PONTE, J.M.: Language models for relevance feedback; PAPKA, R. and J. ALLAN: Topic detection and tracking: event clustering as a basis for first story detection; CALLAN, J.: Distributed information retrieval; XU, J. and W.B. CROFT: Topic-based language models for distributed retrieval; LU, Z. and K.S. McKINLEY: The effect of collection organization and query locality on information retrieval system performance; BALLESTEROS, L.A.: Cross-language retrieval via transitive translation; SANDERSON, M. and D. LAWRIE: Building, testing, and applying concept hierarchies; RAVELA, S. and C. LUO: Appearance-based global similarity retrieval of images
  17. Luk, R.W.P.; Leong, H.V.; Dillon, T.S.; Chan, A.T.S.; Croft, W.B.; Allan, J.: A survey in indexing and searching XML documents (2002) 0.01
    0.0055242092 = product of:
      0.022096837 = sum of:
        0.022096837 = weight(_text_:for in 460) [ClassicSimilarity], result of:
          0.022096837 = score(doc=460,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 460, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=460)
      0.25 = coord(1/4)
    
    Abstract
    XML holds the promise to yield (1) a more precise search by providing additional information in the elements, (2) a better integrated search of documents from heterogeneous sources, (3) a powerful search paradigm using structural as well as content specifications, and (4) data and information exchange to share resources and to support cooperative search. We survey several indexing techniques for XML documents, grouping them into flatfile, semistructured, and structured indexing paradigms. Searching techniques and supporting techniques for searching are reviewed, including full text search and multistage search. Because searching XML documents can be very flexible, various search result presentations are discussed, as well as database and information retrieval system integration and XML query languages. We also survey various retrieval models, examining how they would be used or extended for retrieving XML documents. To conclude the article, we discuss various open issues that XML poses with respect to information retrieval and database research.
    Source
    Journal of the American Society for Information Science and technology. 53(2002) no.6, S.415-437
  18. Tavakoli, L.; Zamani, H.; Scholer, F.; Croft, W.B.; Sanderson, M.: Analyzing clarification in asynchronous information-seeking conversations (2022) 0.01
    0.0055242092 = product of:
      0.022096837 = sum of:
        0.022096837 = weight(_text_:for in 496) [ClassicSimilarity], result of:
          0.022096837 = score(doc=496,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 496, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=496)
      0.25 = coord(1/4)
    
    Abstract
    This research analyzes human-generated clarification questions to provide insights into how they are used to disambiguate and provide a better understanding of information needs. A set of clarification questions is extracted from posts on the Stack Exchange platform. A novel taxonomy is defined for the annotation of the questions and their responses. We investigate the clarification questions in terms of whether they add any information to the post (the initial question posted by the asker) and the accepted answer, which is the answer chosen by the asker. After identifying which clarification questions are more useful, we investigate the characteristics of these questions in terms of their types and patterns. Non-useful clarification questions are identified, and their patterns are compared with useful clarifications. Our analysis indicates that the most useful clarification questions have similar patterns, regardless of topic. This research contributes to an understanding of clarification in conversations and can provide insight for clarification dialogues in conversational search scenarios and for the possible system generation of clarification requests in information-seeking conversations.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.3, S.449-471
  19. Croft, W.B.; Harper, D.J.: Using probabilistic models of document retrieval without relevance information (1979) 0.01
    0.0052082743 = product of:
      0.020833097 = sum of:
        0.020833097 = weight(_text_:for in 4520) [ClassicSimilarity], result of:
          0.020833097 = score(doc=4520,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.23469281 = fieldWeight in 4520, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=4520)
      0.25 = coord(1/4)
    
    Abstract
    Based on a probabilistic model, the paper proposes strategies for the initial search and an intermediate search. Retrieval experiments with the Cranfield collection of 1,400 documents show that this initial search strategy is better than conventional search strategies, both in terms of retrieval effectiveness and in terms of the number of queries that retrieve relevant documents. The intermediate search is a useful substitute for a relevance feedback search. A cluster search would be an effective alternative strategy.
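
A widely cited consequence of this line of work is that, with no relevance information available, the probabilistic term weight collapses to an idf-like quantity, so an initial search can rank documents by a constant per matching term plus the sum of such weights. The sketch below uses the commonly quoted form log((N - n_i)/n_i) purely as an approximation; the exact weighting and the intermediate-search strategy in the paper differ in detail.

    from math import log

    def initial_score(query_terms, doc_terms, doc_freq, num_docs, c=1.0):
        """No-relevance-information ranking: constant c per matching term plus
        an idf-like weight log((N - n_i) / n_i) for each query term in the doc."""
        score = 0.0
        for t in query_terms:
            if t in doc_terms:
                n_i = doc_freq[t]
                score += c + log((num_docs - n_i) / n_i)
        return score

    # Toy numbers in the spirit of the 1,400-document Cranfield collection
    df  = {"boundary": 40, "layer": 55, "flow": 300}
    doc = {"boundary", "layer", "experiment"}
    print(initial_score(["boundary", "layer", "flow"], doc, df, num_docs=1400))
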
  20. Liu, X.; Croft, W.B.: Statistical language modeling for information retrieval (2004) 0.01
    0.0051468783 = product of:
      0.020587513 = sum of:
        0.020587513 = weight(_text_:for in 4277) [ClassicSimilarity], result of:
          0.020587513 = score(doc=4277,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.2319262 = fieldWeight in 4277, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4277)
      0.25 = coord(1/4)
    
    Abstract
    This chapter reviews research and applications in statistical language modeling for information retrieval (IR), which has emerged within the past several years as a new probabilistic framework for describing information retrieval processes. Generally speaking, statistical language modeling, or more simply language modeling (LM), involves estimating a probability distribution that captures statistical regularities of natural language use. Applied to information retrieval, language modeling refers to the problem of estimating the likelihood that a query and a document could have been generated by the same language model, given the language model of the document either with or without a language model of the query. The roots of statistical language modeling date to the beginning of the twentieth century, when Markov tried to model letter sequences in works of Russian literature (Manning & Schütze, 1999). Zipf (1929, 1932, 1949, 1965) studied the statistical properties of text and discovered that the frequency of words decays as a power function of each word's rank. However, it was Shannon's (1951) work that inspired later research in this area. In 1951, eager to explore the applications of his newly founded information theory to human language, Shannon used a prediction game involving n-grams to investigate the information content of English text. He evaluated n-gram models' performance by comparing their cross-entropy on texts with the true entropy estimated using predictions made by human subjects. For many years, statistical language models have been used primarily for automatic speech recognition. Since 1980, when the first significant language model was proposed (Rosenfeld, 2000), statistical language modeling has become a fundamental component of speech recognition, machine translation, and spelling correction.
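
As a concrete, deliberately simplified illustration of the query-likelihood idea sketched in the abstract above, the code below scores a document by the log-probability that its smoothed unigram language model generates the query. Jelinek-Mercer smoothing and the lambda value are standard textbook choices, not specifics of this chapter.

    from math import log

    def query_log_likelihood(query_terms, doc_tf, coll_tf, lam=0.5):
        """log P(query | document LM), Jelinek-Mercer smoothing:
        P(t|d) = (1 - lam) * tf(t,d)/|d| + lam * cf(t)/|C|."""
        doc_len = sum(doc_tf.values()) or 1
        coll_len = sum(coll_tf.values()) or 1
        ll = 0.0
        for t in query_terms:
            p_doc = doc_tf.get(t, 0) / doc_len
            p_coll = coll_tf.get(t, 0) / coll_len or 1e-9   # floor for unseen terms
            ll += log((1 - lam) * p_doc + lam * p_coll)
        return ll

    doc  = {"language": 3, "model": 2, "retrieval": 1}
    coll = {"language": 300, "model": 250, "retrieval": 800, "speech": 150}
    print(query_log_likelihood(["language", "model"], doc, coll))   # higher is better
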