Search (8 results, page 1 of 1)

  • author_ss:"Mills, J."
  1. Cleverdon, C.W.; Mills, J.: The testing of index language devices (1997) 0.02
    0.024013238 = product of:
      0.048026476 = sum of:
        0.015682328 = weight(_text_:information in 576) [ClassicSimilarity], result of:
          0.015682328 = score(doc=576,freq=2.0), product of:
            0.10106951 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.057573788 = queryNorm
            0.1551638 = fieldWeight in 576, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=576)
        0.032344148 = product of:
          0.064688295 = sum of:
            0.064688295 = weight(_text_:organization in 576) [ClassicSimilarity], result of:
              0.064688295 = score(doc=576,freq=2.0), product of:
                0.20527108 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.057573788 = queryNorm
                0.31513596 = fieldWeight in 576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0625 = fieldNorm(doc=576)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
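The explain trees in this listing follow Lucene's ClassicSimilarity TF-IDF formula, and their arithmetic can be checked directly. A minimal sketch, assuming the standard ClassicSimilarity definitions (tf = sqrt(termFreq), idf = 1 + ln(maxDocs / (docFreq + 1)), per-term score = queryWeight * fieldWeight); the function names here are illustrative, not Lucene API:

```python
import math

def classic_idf(doc_freq, max_docs):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    # queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm,
    # with tf = sqrt(termFreq); per-term score = queryWeight * fieldWeight
    idf = classic_idf(doc_freq, max_docs)
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# "_text_:information" in doc 576, using the numbers from the tree above
s = term_score(freq=2.0, doc_freq=20772, max_docs=44218,
               query_norm=0.057573788, field_norm=0.0625)
print(s)  # ≈ 0.015682328, matching the explain tree
```

The document-level score then multiplies the sum of the matching term scores by the coord factor, e.g. coord(2/4) = 0.5 when two of four query clauses match.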
    
    Imprint
    The Hague : International Federation for Information and Documentation (FID)
    Source
    From classification to 'knowledge organization': Dorking revisited or 'past is prelude'. A collection of reprints to commemorate the forty-year span between the Dorking Conference (First International Study Conference on Classification Research 1957) and the Sixth International Study Conference on Classification Research (London 1997). Ed.: A. Gilchrist
  2. Mills, J.: Inadequacies of existing general classification schemes (1969) 0.01
    0.006861019 = product of:
      0.027444076 = sum of:
        0.027444076 = weight(_text_:information in 1282) [ClassicSimilarity], result of:
          0.027444076 = score(doc=1282,freq=2.0), product of:
            0.10106951 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.057573788 = queryNorm
            0.27153665 = fieldWeight in 1282, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=1282)
      0.25 = coord(1/4)
    
    Source
    Classification and information control. Papers representing the work of the Classification Research Group during 1960-1968
  3. Mills, J.: Classification of a subject field (1957) 0.01
    0.006861019 = product of:
      0.027444076 = sum of:
        0.027444076 = weight(_text_:information in 565) [ClassicSimilarity], result of:
          0.027444076 = score(doc=565,freq=2.0), product of:
            0.10106951 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.057573788 = queryNorm
            0.27153665 = fieldWeight in 565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=565)
      0.25 = coord(1/4)
    
    Source
    Proceedings of the International Study Conference on Classification for Information Retrieval, held at Beatrice Webb House, Dorking, England, 13.-17.5.1957
  4. Mills, J.: Faceted classification and logical division in information retrieval (2004) 0.01
    0.0065750163 = product of:
      0.026300065 = sum of:
        0.026300065 = weight(_text_:information in 831) [ClassicSimilarity], result of:
          0.026300065 = score(doc=831,freq=10.0), product of:
            0.10106951 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.057573788 = queryNorm
            0.2602176 = fieldWeight in 831, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=831)
      0.25 = coord(1/4)
    
    Abstract
    The main object of the paper is to demonstrate in detail the role of classification in information retrieval (IR) and the design of classificatory structures by the application of logical division to all forms of the content of records, subject and imaginative. The natural product of such division is a faceted classification. The latter is seen not as a particular kind of library classification but the only viable form enabling the locating and relating of information to be optimally predictable. A detailed exposition of the practical steps in facet analysis is given, drawing on the experience of the new Bliss Classification (BC2). The continued existence of the library as a highly organized information store is assumed. But, it is argued, it must acknowledge the relevance of the revolution in library classification that has taken place. It considers also how alphabetically arranged subject indexes may utilize controlled use of categorical (generically inclusive) and syntactic relations to produce similarly predictable locating and relating systems for IR.
    Footnote
    Article in a special issue: The philosophy of information
  5. Mills, J.; Lodge, D.: Affect, emotional intelligence and librarian-user interaction (2006) 0.01
    0.0061989846 = product of:
      0.024795938 = sum of:
        0.024795938 = weight(_text_:information in 625) [ClassicSimilarity], result of:
          0.024795938 = score(doc=625,freq=20.0), product of:
            0.10106951 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.057573788 = queryNorm
            0.2453355 = fieldWeight in 625, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=625)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The purpose of this paper is to explore practical ways in which librarians may better assist, understand and manage a library user's experience. Design/methodology/approach - This paper is based on earlier work by Mills where 34 academics were interviewed on their information seeking behaviour. The concepts of affect and emotional intelligence have been introduced so information professionals can obtain a clearer understanding of the information environment. Findings - In order to connect more closely with their user populations, information professionals could consider the following: embrace the key tenets of emotional intelligence as useful assistance strategies in user-librarian interaction; understand that personal interaction is important for many users; understand that such interaction can offer valuable insights into user understandings of the role of the library; understand that there is more to a library than resource access; understand that not all users share the same perceptions as librarians of the information values of such tools as catalogues and databases; appreciate that users see many roles for a library and that these are individually constructed based upon past experience and current needs; recognise that extending the physical boundaries of the library into user communities is important for role development; and accept that the key marketing strategy of commercial retailers, to get customers to "buy" and return to buy, is relevant in environments such as libraries. Originality/value - The paper builds upon research on the information seeking behaviour of academics and explores the idea that users select information sources for more than cognitive reasons, i.e. just to find out. The importance of the emotional aspect of user interaction with sources, including information professionals, in their search for information has been neglected. It is necessary to re-examine why and for what reasons users discriminate in their choice of information sources.
  6. Cleverdon, C.W.; Mills, J.: The testing of index language devices (1963) 0.01
    0.005880873 = product of:
      0.023523493 = sum of:
        0.023523493 = weight(_text_:information in 577) [ClassicSimilarity], result of:
          0.023523493 = score(doc=577,freq=2.0), product of:
            0.10106951 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.057573788 = queryNorm
            0.23274569 = fieldWeight in 577, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=577)
      0.25 = coord(1/4)
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. pp. 98-110.
  7. Mills, J.: Bliss Bibliographic Classification First Edition (2009) 0.00
    0.003920582 = product of:
      0.015682328 = sum of:
        0.015682328 = weight(_text_:information in 3808) [ClassicSimilarity], result of:
          0.015682328 = score(doc=3808,freq=2.0), product of:
            0.10106951 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.057573788 = queryNorm
            0.1551638 = fieldWeight in 3808, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3808)
      0.25 = coord(1/4)
    
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  8. Cleverdon, C.W.; Mills, J.: The testing of index language devices (1985) 0.00
    0.0027722702 = product of:
      0.011089081 = sum of:
        0.011089081 = weight(_text_:information in 3643) [ClassicSimilarity], result of:
          0.011089081 = score(doc=3643,freq=4.0), product of:
            0.10106951 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.057573788 = queryNorm
            0.10971737 = fieldWeight in 3643, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=3643)
      0.25 = coord(1/4)
    
    Abstract
    A landmark event in the twentieth-century development of subject analysis theory was a retrieval experiment, begun in 1957, by Cyril Cleverdon, Librarian of the Cranfield Institute of Technology. For this work he received the Professional Award of the Special Libraries Association in 1962 and the Award of Merit of the American Society for Information Science in 1970. The objective of the experiment, called Cranfield I, was to test the ability of four indexing systems (UDC, Facet, Uniterm, and Alphabetic Subject Headings) to retrieve material responsive to questions addressed to a collection of documents. The experiment was ambitious in scale, consisting of eighteen thousand documents and twelve hundred questions. Prior to Cranfield I, the question of what constitutes good indexing was approached subjectively, and reference was made to assumptions in the form of principles that should be observed or user needs that should be met. Cranfield I was the first large-scale effort to use objective criteria for determining the parameters of good indexing. Its creative impetus was the definition of user satisfaction in terms of precision and recall. Out of the experiment emerged the definition of recall as the percentage of relevant documents retrieved and precision as the percentage of retrieved documents that were relevant. Operationalizing the concept of user satisfaction, that is, making it measurable, meant that it could be studied empirically and manipulated as a variable in mathematical equations. Much has been made of the fact that the experimental methodology of Cranfield I was seriously flawed. This is unfortunate, as it tends to diminish Cleverdon's contribution, which was not methodological (such contributions can be left to benchmark researchers) but rather creative: the introduction of a new paradigm, one that proved to be eminently productive.
    The criticism leveled at the methodological shortcomings of Cranfield I underscored the need for more precise definitions of the variables involved in information retrieval. Particularly important was the need for a definition of the dependent variable, index language. Like the definitions of precision and recall, that of index language provided a new way of looking at the indexing process. It was a re-visioning that stimulated research activity and led not only to a better understanding of indexing but also to the design of better retrieval systems. Cranfield I was followed by Cranfield II. While Cranfield I was a wholesale comparison of four indexing "systems," Cranfield II aimed to single out various individual factors in index languages, called "indexing devices," and to measure how variations in these affected retrieval performance. The following selection represents the thinking at Cranfield midway between these two notable retrieval experiments.
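The Cranfield definitions quoted in the abstract above (recall as the percentage of relevant documents retrieved, precision as the percentage of retrieved documents that were relevant) reduce to a few lines of arithmetic. A minimal sketch with an invented toy example; the function name and document IDs are illustrative, not from the source:

```python
def precision_recall(retrieved, relevant):
    # Cranfield definitions: precision = relevant-and-retrieved / retrieved;
    # recall = relevant-and-retrieved / relevant
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

# Toy collection: four documents retrieved, three actually relevant
p, r = precision_recall(retrieved=[1, 2, 3, 4], relevant=[2, 4, 5])
print(p, r)  # 0.5 and 0.666...: half the retrieved docs are relevant,
             # and two of the three relevant docs were found
```

Treating user satisfaction this way, as two measurable ratios, is what let Cranfield manipulate it "as a variable in mathematical equations."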