Search (34 results, page 1 of 2)

  • author_ss:"Sparck Jones, K."
  1. Sparck Jones, K.; Jackson, D.M.: The use of automatically obtained keyword classification for information retrieval (1970) 0.07
    0.0677851 = product of:
      0.18076026 = sum of:
        0.09446257 = weight(_text_:retrieval in 5177) [ClassicSimilarity], result of:
          0.09446257 = score(doc=5177,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.75622874 = fieldWeight in 5177, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.125 = fieldNorm(doc=5177)
        0.06844692 = weight(_text_:use in 5177) [ClassicSimilarity], result of:
          0.06844692 = score(doc=5177,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.5413059 = fieldWeight in 5177, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.125 = fieldNorm(doc=5177)
        0.017850775 = weight(_text_:of in 5177) [ClassicSimilarity], result of:
          0.017850775 = score(doc=5177,freq=2.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.27643585 = fieldWeight in 5177, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.125 = fieldNorm(doc=5177)
      0.375 = coord(3/8)
    
    Source
    Information storage and retrieval. 5(1970), S.175-201
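    The score breakdowns in this listing follow Lucene's ClassicSimilarity (TF-IDF). As a hedged sketch (reconstructed from the explanation tree above, not the catalogue's actual code), the per-term weights and the final score for record 1 can be reproduced like this:

```python
import math

# Lucene ClassicSimilarity arithmetic, reconstructed from the explain tree.
def idf(doc_freq: int, max_docs: int) -> float:
    # idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_weight(freq: float, doc_freq: int, max_docs: int,
                query_norm: float, field_norm: float) -> float:
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm                     # queryWeight = idf * queryNorm
    field_weight = math.sqrt(freq) * i * field_norm   # tf(freq) = sqrt(freq)
    return query_weight * field_weight

# weight(_text_:retrieval in 5177): freq=4, docFreq=5836, maxDocs=44218
w = term_weight(4.0, 5836, 44218, query_norm=0.041294612, field_norm=0.125)
print(round(w, 8))  # ≈ 0.09446257

# Document score = coord(3/8) * sum of the three matching term weights
doc_score = 0.375 * (w + 0.06844692 + 0.017850775)
print(round(doc_score, 7))  # ≈ 0.0677851
```

The same arithmetic reproduces every tree in this listing; only freq, docFreq, and fieldNorm change per term and document.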
  2. Sparck Jones, K.: A statistical interpretation of term specificity and its application in retrieval (2004) 0.05
    0.05337083 = product of:
      0.10674166 = sum of:
        0.041327372 = weight(_text_:retrieval in 4420) [ClassicSimilarity], result of:
          0.041327372 = score(doc=4420,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.33085006 = fieldWeight in 4420, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4420)
        0.029945528 = weight(_text_:use in 4420) [ClassicSimilarity], result of:
          0.029945528 = score(doc=4420,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.23682132 = fieldWeight in 4420, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4420)
        0.022089208 = weight(_text_:of in 4420) [ClassicSimilarity], result of:
          0.022089208 = score(doc=4420,freq=16.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.34207192 = fieldWeight in 4420, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4420)
        0.013379549 = product of:
          0.026759097 = sum of:
            0.026759097 = weight(_text_:on in 4420) [ClassicSimilarity], result of:
              0.026759097 = score(doc=4420,freq=6.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.29462588 = fieldWeight in 4420, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4420)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    The exhaustivity of document descriptions and the specificity of index terms are usually regarded as independent. It is suggested that specificity should be interpreted statistically, as a function of term use rather than of term meaning. The effects on retrieval of variations in term specificity are examined, experiments with three test collections showing, in particular, that frequently-occurring terms are required for good overall performance. It is argued that terms should be weighted according to collection frequency, so that matches on less frequent, more specific, terms are of greater value than matches on frequent terms. Results for the test collections show that considerable improvements in performance are obtained with this very simple procedure.
    Source
    Journal of documentation. 60(2004) no.5, S.493-502
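    The abstract above is the classic statement of collection-frequency (idf) weighting, and the idf values in this listing's score explanations implement exactly that statistic. A small illustration, using the ClassicSimilarity idf formula and document frequencies taken from the explanation trees in this result list:

```python
import math

# idf rewards specific (rare) terms over frequent ones, as the paper argues.
def idf(doc_freq: int, max_docs: int = 44218) -> float:
    return 1.0 + math.log(max_docs / (doc_freq + 1))

for term, df in [("computers", 625), ("retrieval", 5836), ("of", 25162)]:
    print(f"{term:10s} df={df:6d} idf={idf(df):.4f}")
# A match on the rare term "computers" (idf ≈ 5.26) contributes far more
# than a match on the near-ubiquitous "of" (idf ≈ 1.56).
```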
  3. Needham, R.M.; Sparck Jones, K.: Keywords and clumps (1985) 0.05
    0.048159793 = product of:
      0.096319586 = sum of:
        0.03267216 = weight(_text_:retrieval in 3645) [ClassicSimilarity], result of:
          0.03267216 = score(doc=3645,freq=10.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.26155996 = fieldWeight in 3645, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3645)
        0.014972764 = weight(_text_:use in 3645) [ClassicSimilarity], result of:
          0.014972764 = score(doc=3645,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.11841066 = fieldWeight in 3645, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3645)
        0.017463053 = weight(_text_:of in 3645) [ClassicSimilarity], result of:
          0.017463053 = score(doc=3645,freq=40.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.2704316 = fieldWeight in 3645, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3645)
        0.031211607 = product of:
          0.062423214 = sum of:
            0.062423214 = weight(_text_:computers in 3645) [ClassicSimilarity], result of:
              0.062423214 = score(doc=3645,freq=4.0), product of:
                0.21710795 = queryWeight, product of:
                  5.257537 = idf(docFreq=625, maxDocs=44218)
                  0.041294612 = queryNorm
                0.28752154 = fieldWeight in 3645, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.257537 = idf(docFreq=625, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3645)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    The selection that follows was chosen as it represents "a very early paper on the possibilities allowed by computers in documentation." In the early 1960s computers were being used to provide simple automatic indexing systems wherein keywords were extracted from documents. The problem with such systems was that they lacked vocabulary control; thus documents related in subject matter were not always collocated in retrieval. To improve retrieval by improving recall is the raison d'être of vocabulary control tools such as classifications and thesauri. The question arose whether it was possible by automatic means to construct classes of terms which, when substituted one for another, could be used to improve retrieval performance. One of the first theoretical approaches to this question was initiated by R.M. Needham and Karen Sparck Jones at the Cambridge Language Research Institute in England. The question was later pursued using experimental methodologies by Sparck Jones, who, as a Senior Research Associate in the Computer Laboratory at the University of Cambridge, has devoted her life's work to research in information retrieval and automatic natural language processing. Based on the principles of numerical taxonomy, automatic classification techniques start from the premise that two objects are similar to the degree that they share attributes in common. When these two objects are keywords, their similarity is measured in terms of the number of documents they index in common. Step 1 in automatic classification is to compute mathematically the degree to which two terms are similar. Step 2 is to group together those terms that are "most similar" to each other, forming equivalence classes of intersubstitutable terms. The technique for forming such classes varies and is the factor that characteristically distinguishes different approaches to automatic classification.
The technique used by Needham and Sparck Jones, that of clumping, is described in the selection that follows. Questions that must be asked are whether the use of automatically generated classes really does improve retrieval performance and whether there is a true economic advantage in substituting mechanical for manual labor. Several years after her work with clumping, Sparck Jones was to observe that while it was not wholly satisfactory in itself, it was valuable in that it stimulated research into automatic classification. To this it might be added that it was valuable in that it introduced to library/information science the methods of numerical taxonomy, thus stimulating us to think again about the fundamental nature and purpose of classification. In this connection it might be useful to review how automatically derived classes differ from those of manually constructed classifications: 1) the manner of their derivation is purely a posteriori, the ultimate operationalization of the principle of literary warrant; 2) the relationship between members forming such classes is essentially statistical; the members of a given class are similar to each other not because they possess the class-defining characteristic but by virtue of sharing a family resemblance; and finally, 3) automatically derived classes are not related meaningfully one to another, that is, they are not ordered in traditional hierarchical and precedence relationships.
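    The two steps described above can be sketched in a few lines. This is an illustrative toy only: the postings data are invented, and Jaccard similarity with a greedy grouping stands in for the actual clumping procedure of the paper:

```python
from itertools import combinations

# Toy inverted index: keyword -> set of document ids (invented data).
postings = {
    "retrieval":  {1, 2, 3, 4},
    "search":     {1, 2, 3},
    "indexing":   {2, 3, 4},
    "thesaurus":  {5, 6},
    "vocabulary": {5, 6, 7},
}

def similarity(a: str, b: str) -> float:
    # Step 1: overlap of the documents two keywords index in common,
    # normalised (Jaccard) so that frequent terms do not dominate.
    da, db = postings[a], postings[b]
    return len(da & db) / len(da | db)

def clump(threshold: float = 0.4) -> list[set[str]]:
    # Step 2: greedily merge term pairs whose similarity exceeds a
    # threshold into classes of intersubstitutable terms.
    classes: list[set[str]] = []
    for a, b in combinations(postings, 2):
        if similarity(a, b) >= threshold:
            for c in classes:
                if a in c or b in c:
                    c.update({a, b})
                    break
            else:
                classes.append({a, b})
    return classes

print(clump())  # two classes: the "retrieval" clump and the "thesaurus" clump
```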
    Footnote
    Original in: Journal of documentation 20(1964) no.1, S.5-15.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
  4. Sparck Jones, K.: Revisiting classification for retrieval (2005) 0.03
    0.031852387 = product of:
      0.0849397 = sum of:
        0.058445733 = weight(_text_:retrieval in 4328) [ClassicSimilarity], result of:
          0.058445733 = score(doc=4328,freq=8.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.46789268 = fieldWeight in 4328, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4328)
        0.011044604 = weight(_text_:of in 4328) [ClassicSimilarity], result of:
          0.011044604 = score(doc=4328,freq=4.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.17103596 = fieldWeight in 4328, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4328)
        0.01544937 = product of:
          0.03089874 = sum of:
            0.03089874 = weight(_text_:on in 4328) [ClassicSimilarity], result of:
              0.03089874 = score(doc=4328,freq=8.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.34020463 = fieldWeight in 4328, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4328)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Purpose - This short note seeks to respond to Hjørland and Pedersen's paper "A substantive theory of classification for information retrieval", which starts from Sparck Jones's "Some thoughts on classification for retrieval", originally published in 1970. Design/methodology/approach - The note comments on the context in which the 1970 paper was written, and on Hjørland and Pedersen's views, emphasising the need for well-grounded classification theory and application. Findings - The note maintains that text-based, a posteriori, classification, as increasingly found in applications, is likely to be more useful, in general, than a priori classification. Originality/value - The note elaborates on points made in a well-received earlier paper.
    Source
    Journal of documentation. 61(2005) no.5, S.598-601
    Theme
    Klassifikationssysteme im Online-Retrieval
  5. Sparck Jones, K.: Some thoughts on classification for retrieval (2005) 0.03
    0.031654 = product of:
      0.08441067 = sum of:
        0.055226028 = weight(_text_:retrieval in 4392) [ClassicSimilarity], result of:
          0.055226028 = score(doc=4392,freq=14.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.442117 = fieldWeight in 4392, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4392)
        0.023667008 = weight(_text_:of in 4392) [ClassicSimilarity], result of:
          0.023667008 = score(doc=4392,freq=36.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.36650562 = fieldWeight in 4392, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4392)
        0.0055176322 = product of:
          0.0110352645 = sum of:
            0.0110352645 = weight(_text_:on in 4392) [ClassicSimilarity], result of:
              0.0110352645 = score(doc=4392,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.121501654 = fieldWeight in 4392, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4392)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Purpose - This paper, originally published in 1970 (Journal of documentation. 26(1970), S.89-101), considered the suggestion that classifications for retrieval should be constructed automatically, and raised some serious problems concerning the sorts of classification which were required, and the way in which formal classification theories should be exploited, given that a retrieval classification is required for a purpose. These difficulties had not been sufficiently considered, and the paper, therefore, aims to attempt an analysis of them, though no solutions of immediate application could be suggested. Design/methodology/approach - Starting with the illustrative proposition that a polythetic, multiple, unordered classification is required in automatic thesaurus construction, this is considered in the context of classification in general, where eight sorts of classification can be distinguished, each covering a range of class definitions and class-finding algorithms. Findings - Since there is generally no natural or best classification of a set of objects as such, the evaluation of alternative classifications requires either formal criteria of goodness of fit, or, if a classification is required for a purpose, a precise statement of that purpose. In any case a substantive theory of classification is needed, which does not exist; and, since sufficiently precise specifications of retrieval requirements are also lacking, the only currently available approach to automatic classification experiments for information retrieval is to do enough of them. Originality/value - Gives insights into the classification of material for information retrieval.
    Source
    Journal of documentation. 61(2005) no.5, S.571-581
    Theme
    Klassifikationssysteme im Online-Retrieval
  6. Robertson, S.E.; Sparck Jones, K.: Simple, proven approaches to text retrieval (1997) 0.03
    0.030648123 = product of:
      0.08172833 = sum of:
        0.046674512 = weight(_text_:retrieval in 4532) [ClassicSimilarity], result of:
          0.046674512 = score(doc=4532,freq=10.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.37365708 = fieldWeight in 4532, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4532)
        0.021389665 = weight(_text_:use in 4532) [ClassicSimilarity], result of:
          0.021389665 = score(doc=4532,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.1691581 = fieldWeight in 4532, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4532)
        0.013664153 = weight(_text_:of in 4532) [ClassicSimilarity], result of:
          0.013664153 = score(doc=4532,freq=12.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.21160212 = fieldWeight in 4532, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4532)
      0.375 = coord(3/8)
    
    Abstract
    This technical note describes straightforward techniques for document indexing and retrieval that have been solidly established through extensive testing and are easy to apply. They are useful for many different types of text material, are viable for very large files, and have the advantage that they do not require special skills or training for searching, but are easy for end users. The document and text retrieval methods described here have a sound theoretical basis, are well established by extensive testing, and the ideas involved are now implemented in some commercial retrieval systems. Testing in the last few years has, in particular, shown that the methods presented here work very well with full texts, not only title and abstracts, and with large files of texts containing three quarters of a million documents. These tests, the TREC Tests (see Harman 1993 - 1997; IP&M 1995), have been rigorous comparative evaluations involving many different approaches to information retrieval. These techniques depend on the use of simple terms for indexing both request and document texts; on term weighting exploiting statistical information about term occurrences; on scoring for request-document matching, using these weights, to obtain a ranked search output; and on relevance feedback to modify request weights or term sets in iterative searching. The normal implementation is via an inverted file organisation using a term list with linked document identifiers, plus counting data, and pointers to the actual texts. The user's request can be a word list, phrases, sentences or extended text.
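    A minimal sketch of the pipeline this note describes: an inverted file mapping terms to document identifiers with counts, a collection-frequency term weight, and a ranked output for a request. The data, the exact weight formula, and the function names here are illustrative assumptions, not the authors' implementation:

```python
import math
from collections import Counter, defaultdict

# Toy document file (invented texts).
docs = {
    1: "simple terms for indexing request and document texts",
    2: "term weighting uses statistical information about term occurrences",
    3: "relevance feedback modifies request weights in iterative searching",
}

# Inverted file: term -> {doc id: within-document count}.
index: dict[str, dict[int, int]] = defaultdict(dict)
for doc_id, text in docs.items():
    for term, count in Counter(text.split()).items():
        index[term][doc_id] = count

def search(request: str) -> list[tuple[int, float]]:
    n = len(docs)
    scores: Counter = Counter()
    for term in request.split():
        matches = index.get(term, {})
        if not matches:
            continue
        weight = math.log(1 + n / len(matches))  # collection-frequency weight
        for doc_id, count in matches.items():
            scores[doc_id] += count * weight
    return scores.most_common()  # ranked search output

print(search("term weighting for request texts"))
```

Relevance feedback, the remaining ingredient the note lists, would adjust the request's term weights from documents the user marks relevant.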
    Issue
    May, 1997, Update of 1994 and 1996 versions.
    Series
    Technical Report TR356, University of Cambridge, Computer Laboratory
  7. Sparck Jones, K.; Rijsbergen, C.J. van: Progress in documentation : Information retrieval test collection (1976) 0.03
    0.030546788 = product of:
      0.0814581 = sum of:
        0.050615493 = weight(_text_:retrieval in 4161) [ClassicSimilarity], result of:
          0.050615493 = score(doc=4161,freq=6.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.40520695 = fieldWeight in 4161, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4161)
        0.017463053 = weight(_text_:of in 4161) [ClassicSimilarity], result of:
          0.017463053 = score(doc=4161,freq=10.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.2704316 = fieldWeight in 4161, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4161)
        0.013379549 = product of:
          0.026759097 = sum of:
            0.026759097 = weight(_text_:on in 4161) [ClassicSimilarity], result of:
              0.026759097 = score(doc=4161,freq=6.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.29462588 = fieldWeight in 4161, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4161)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Many retrieval experiments have been based on inadequate test collections, and current research is hampered by the lack of proper collections. This short review does not attempt a fully documented survey of all the collections used in the past decade: hopefully representative examples have been studied to throw light on the requirements test collections should meet, to show how past collections have been defective, and to suggest guidelines for a future "ideal" test collection. The specifications for this collection can be taken as an indirect comment on our present state of knowledge of major retrieval system variables, and experience in conducting experiments.
    Source
    Journal of documentation. 32(1976) no.1, S.59-75
  8. Sparck Jones, K.: Some thoughts on classification for retrieval (1970) 0.03
    0.0298677 = product of:
      0.0796472 = sum of:
        0.051129367 = weight(_text_:retrieval in 4327) [ClassicSimilarity], result of:
          0.051129367 = score(doc=4327,freq=12.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.40932083 = fieldWeight in 4327, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4327)
        0.023000197 = weight(_text_:of in 4327) [ClassicSimilarity], result of:
          0.023000197 = score(doc=4327,freq=34.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.35617945 = fieldWeight in 4327, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4327)
        0.0055176322 = product of:
          0.0110352645 = sum of:
            0.0110352645 = weight(_text_:on in 4327) [ClassicSimilarity], result of:
              0.0110352645 = score(doc=4327,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.121501654 = fieldWeight in 4327, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4327)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    The suggestion that classifications for retrieval should be constructed automatically raises some serious problems concerning the sorts of classification which are required, and the way in which formal classification theories should be exploited, given that a retrieval classification is required for a purpose. These difficulties have not been sufficiently considered, and the paper therefore attempts an analysis of them, though no solution of immediate application can be suggested. Starting with the illustrative proposition that a polythetic, multiple, unordered classification is required in automatic thesaurus construction, this is considered in the context of classification in general, where eight sorts of classification can be distinguished, each covering a range of class definitions and class-finding algorithms. The problem which follows is that since there is generally no natural or best classification of a set of objects as such, the evaluation of alternative classifications requires either formal criteria of goodness of fit, or, if a classification is required for a purpose, a precise statement of that purpose. In any case a substantive theory of classification is needed, which does not exist; and since sufficiently precise specifications of retrieval requirements are also lacking, the only currently available approach to automatic classification experiments for information retrieval is to do enough of them.
    Footnote
    Wiederabdruck in: Journal of documentation. 61(2005) no.5, S.571-581.
    Source
    Journal of documentation. 26(1970), S.89-101
    Theme
    Klassifikationssysteme im Online-Retrieval
  9. Sparck Jones, K.: The role of artificial intelligence in information retrieval (1991) 0.03
    0.0292208 = product of:
      0.077922136 = sum of:
        0.047231287 = weight(_text_:retrieval in 4811) [ClassicSimilarity], result of:
          0.047231287 = score(doc=4811,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.37811437 = fieldWeight in 4811, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=4811)
        0.021862645 = weight(_text_:of in 4811) [ClassicSimilarity], result of:
          0.021862645 = score(doc=4811,freq=12.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.33856338 = fieldWeight in 4811, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=4811)
        0.008828212 = product of:
          0.017656423 = sum of:
            0.017656423 = weight(_text_:on in 4811) [ClassicSimilarity], result of:
              0.017656423 = score(doc=4811,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.19440265 = fieldWeight in 4811, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4811)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Presents a view of the scope of artificial intelligence (AI) in information retrieval (IR). Considers potential roles of AI in IR, evaluating AI from a realistic point of view and within a wide information management potential, not just because AI is itself insufficiently developed, but because many information management tasks are properly shallow information processing ones. There is nevertheless an important place for specific applications of AI or AI-derived technology when particular constraints can be placed on the information management tasks involved
    Source
    Journal of the American Society for Information Science. 42(1991) no.8, S.558-565
  10. Sparck Jones, K.: Reflections on TREC : TREC-2 (1995)
    Abstract
    Discusses the TREC programme as a major enterprise in information retrieval research. It reviews its structure as an evaluation exercise, characterises the methods of indexing and retrieval being tested within it in terms of the approaches to system performance factors these represent; analyses the test results for solid, overall conclusions that can be drawn from them; and, in the light of the particular features of the test data, assesses TREC both for generally applicable findings that emerge from it and for directions it offers for future research
  11. Sparck Jones, K.: Reflections on TREC (1997)
    Abstract
    This paper discusses the Text REtrieval Conferences (TREC) programme as a major enterprise in information retrieval research. It reviews its structure as an evaluation exercise, characterises the methods of indexing and retrieval being tested within it in terms of the approaches to system performance factors these represent; analyses the test results for solid, overall conclusions that can be drawn from them; and, in the light of the particular features of the test data, assesses TREC both for generally applicable findings that emerge from it and for directions it offers for future research
    Source
    From classification to 'knowledge organization': Dorking revisited or 'past is prelude'. A collection of reprints to commemorate the forty year span between the Dorking Conference (First International Study Conference on Classification Research 1957) and the Sixth International Study Conference on Classification Research (London 1997). Ed.: A. Gilchrist
  12. Sparck Jones, K.: ¬A statistical interpretation of term specificity and its application in retrieval (1972)
    Source
    Journal of documentation. 28(1972), S.11-21
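    This 1972 paper is the origin of inverse document frequency (IDF) weighting: term specificity is interpreted statistically, so that terms occurring in fewer documents are more specific and deserve higher weight. A minimal sketch of that idea, using the paper's log2(N/n) form (the function name is illustrative, not from the paper):

```python
import math

def idf(total_docs: int, docs_with_term: int) -> float:
    """Inverse document frequency in the spirit of Sparck Jones (1972):
    the fewer documents a term occurs in, the more specific it is,
    and the higher its weight. Uses the log2(N/n) form."""
    return math.log2(total_docs / docs_with_term)

# A term found in 2 of 1024 documents outweighs one found in 512 of 1024.
rare = idf(1024, 2)      # log2(512) = 9.0
common = idf(1024, 512)  # log2(2)   = 1.0
```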
  13. Sparck Jones, K.; Jones, G.J.F.; Foote, J.T.; Young, S.J.: Experiments in spoken document retrieval (1996)
    Abstract
    Describes experiments in the retrieval of spoken documents in multimedia systems. Speech documents pose a particular problem for retrieval since their words, as well as their contents, are unknown. Addresses this problem, for a video mail application, by combining state-of-the-art speech recognition with established document retrieval technologies so as to provide an effective and efficient retrieval tool. Tests with a small spoken message collection show that retrieval precision for the spoken file can reach 90% of that obtained when the same file is used, as a benchmark, in text transcription form
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones u. P. Willett. San Francisco: Morgan Kaufmann 1997. S.493-502.
  14. Lewis, D.D.; Sparck Jones, K.: Natural language processing for information retrieval (1997)
    Footnote
    Reprinted from: Communications of the ACM 39(1996) no.1, S.92-101
    Source
    From classification to 'knowledge organization': Dorking revisited or 'past is prelude'. A collection of reprints to commemorate the forty year span between the Dorking Conference (First International Study Conference on Classification Research 1957) and the Sixth International Study Conference on Classification Research (London 1997). Ed.: A. Gilchrist
  15. Lewis, D.D.; Sparck Jones, K.: Natural language processing for information retrieval (1996)
    Source
    Communications of the Association for Computing Machinery. 39(1996) no.1, S.92-101
  16. Sparck Jones, K.; Walker, S.; Robertson, S.E.: ¬A probabilistic model of information retrieval : development and comparative experiments - part 1 (2000)
  17. Sparck Jones, K.; Walker, S.; Robertson, S.E.: ¬A probabilistic model of information retrieval : development and comparative experiments - part 2 (2000)
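    The probabilistic model developed across these two papers is best known today through the Okapi BM25 ranking function. A minimal sketch of one term's BM25 contribution, assuming common parameter defaults (k1 = 1.2, b = 0.75) and a smoothed, non-negative IDF variant; names and defaults here are illustrative, not prescribed by the papers:

```python
import math

def bm25_term_score(tf: float, doc_len: float, avg_doc_len: float,
                    N: int, n: int, k1: float = 1.2, b: float = 0.75) -> float:
    """One term's BM25 contribution, in the spirit of the probabilistic
    model of Sparck Jones, Walker and Robertson. N is the collection
    size, n the number of documents containing the term, tf the term's
    frequency in the document being scored."""
    idf = math.log((N - n + 0.5) / (n + 0.5) + 1.0)  # smoothed variant
    # Term-frequency saturation with document-length normalisation.
    norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return idf * norm
```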
  18. Sparck Jones, K.: Search term relevance weighting given little relevance information (1979)
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones u. P. Willett. San Francisco: Morgan Kaufmann 1997. S.329-338.
    Source
    Journal of documentation. 35(1979), S.30-48
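    The weighting studied in this paper is now commonly called the Robertson-Sparck Jones relevance weight; when relevance information is scarce, the document counts are smoothed with 0.5 point estimates. A hedged sketch of that formula (variable names are mine):

```python
import math

def rsj_weight(N: int, n: int, R: int = 0, r: int = 0) -> float:
    """Robertson-Sparck Jones relevance weight with the 0.5 point
    estimates used when little relevance information is available.
    N: collection size, n: documents containing the term,
    R: known relevant documents, r: relevant documents with the term."""
    return math.log(((r + 0.5) * (N - n - R + r + 0.5)) /
                    ((n - r + 0.5) * (R - r + 0.5)))

# With no relevance information at all (R = r = 0) this reduces to an
# IDF-like weight: log((N - n + 0.5) / (n + 0.5)).
```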
  19. Strzalkowski, T.; Sparck Jones, K.: NLP track at TREC-5 (1997)
    Imprint
    Gaithersburg, MD : National Institute of Standards and Technology
    Source
    The Fifth Text Retrieval Conference (TREC-5). Ed.: E.M. Voorhees u. D.K. Harman
  20. Sparck Jones, K.: Metareflections on TREC (2005)
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees, u. D.K. Harman