Search (8819 results, page 2 of 441)

  1. Gnoli, C.: Classification transcends library business : the case of BiblioPhil (2010) 0.11
    0.11082558 = product of:
      0.16623837 = sum of:
        0.068818994 = weight(_text_:interest in 3698) [ClassicSimilarity], result of:
          0.068818994 = score(doc=3698,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.27446008 = fieldWeight in 3698, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3698)
        0.09741938 = sum of:
          0.0632301 = weight(_text_:classification in 3698) [ClassicSimilarity], result of:
            0.0632301 = score(doc=3698,freq=10.0), product of:
              0.16072905 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05046903 = queryNorm
              0.39339557 = fieldWeight in 3698, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3698)
          0.034189284 = weight(_text_:22 in 3698) [ClassicSimilarity], result of:
            0.034189284 = score(doc=3698,freq=2.0), product of:
              0.17673394 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05046903 = queryNorm
              0.19345059 = fieldWeight in 3698, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3698)
      0.6666667 = coord(2/3)
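    The indented breakdown above is Lucene "explain" output for its classic TF-IDF similarity. As a minimal sketch (plain Python with our own function name, not a Lucene API call), each weight(...) leaf combines tf, idf, queryNorm and fieldNorm like this:

```python
import math

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute one weight(...) leaf of a Lucene ClassicSimilarity
    explanation tree: score = queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                             # tf(freq) = sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                  # idf * queryNorm
    field_weight = tf * idf * field_norm             # tf * idf * fieldNorm
    return query_weight * field_weight

# First leaf of result 1 (_text_:interest in doc 3698):
s = term_score(freq=2.0, doc_freq=835, max_docs=44218,
               query_norm=0.05046903, field_norm=0.0390625)
# s ~ 0.068818994, matching the leaf score shown above
```

    The per-document score is then the sum of the matching leaves scaled by the coordination factor, e.g. coord(2/3) = 0.6666667 when two of three query terms match.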
    
    Abstract
    Although bibliographic classifications usually adopt a perspective different from that of object classifications, the two have obvious relationships. These become especially relevant when users are looking for knowledge scattered in a wide variety of forms and media. This is an increasingly common situation, as library catalogues now coexist in the global digital environment with catalogues of archives, of museums, of commercial products, and many other information resources. In order to make the subject content of all these resources searchable, a broader conception of classification is needed, one that can be applied to any knowledge item, rather than only to bibliographic materials. To illustrate this we take the example of research on bagpipes in Northern Italian folklore. For this kind of research, the most effective search strategy is a cross-media one, looking for many different knowledge sources such as published documents, police archives, painting details, museum specimens, and organizations devoted to related subjects. To provide satisfying results for this kind of search, the traditional disciplinary approach to classification is not sufficient. Tools are needed in which knowledge items dealing with a phenomenon of interest can be retrieved independently of the other topics with which they are combined, the disciplinary context, and the medium where they occur. This can be made possible if the basic units of classification are taken to be the phenomena treated, as recommended in the León Manifesto, rather than disciplines or other aspect features. The concept of bagpipes should be retrievable and browsable in any combination with other phenomena, disciplines, media etc. Examples are given of information sources that could be managed by this freely-faceted technique of classification.
    Date
    22. 7.2010 20:40:08
  2. Umlauf, K.: Systematik im Umbruch : systematische Aufstellung, Präsentation und Reader Interest Classification in öffentlichen Bibliotheken (1996) 0.11
    0.11061023 = product of:
      0.16591534 = sum of:
        0.13763799 = weight(_text_:interest in 396) [ClassicSimilarity], result of:
          0.13763799 = score(doc=396,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.54892015 = fieldWeight in 396, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.078125 = fieldNorm(doc=396)
        0.028277358 = product of:
          0.056554716 = sum of:
            0.056554716 = weight(_text_:classification in 396) [ClassicSimilarity], result of:
              0.056554716 = score(doc=396,freq=2.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.35186368 = fieldWeight in 396, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.078125 = fieldNorm(doc=396)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
  3. Liu, R.-L.: ¬A passage extractor for classification of disease aspect information (2013) 0.11
    0.10637534 = product of:
      0.159563 = sum of:
        0.068818994 = weight(_text_:interest in 1107) [ClassicSimilarity], result of:
          0.068818994 = score(doc=1107,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.27446008 = fieldWeight in 1107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1107)
        0.090744 = sum of:
          0.056554716 = weight(_text_:classification in 1107) [ClassicSimilarity], result of:
            0.056554716 = score(doc=1107,freq=8.0), product of:
              0.16072905 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05046903 = queryNorm
              0.35186368 = fieldWeight in 1107, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1107)
          0.034189284 = weight(_text_:22 in 1107) [ClassicSimilarity], result of:
            0.034189284 = score(doc=1107,freq=2.0), product of:
              0.17673394 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05046903 = queryNorm
              0.19345059 = fieldWeight in 1107, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1107)
      0.6666667 = coord(2/3)
    
    Abstract
    Retrieval of disease information is often based on several key aspects such as etiology, diagnosis, treatment, prevention, and symptoms of diseases. Automatic identification of disease aspect information is thus essential. In this article, I model the aspect identification problem as a text classification (TC) problem in which a disease aspect corresponds to a category. The disease aspect classification problem poses two challenges to classifiers: (a) a medical text often contains information about multiple aspects of a disease and hence produces noise for the classifiers and (b) text classifiers often cannot extract the textual parts (i.e., passages) about the categories of interest. I thus develop a technique, PETC (Passage Extractor for Text Classification), that extracts passages (from medical texts) for the underlying text classifiers to classify. Case studies on thousands of Chinese and English medical texts show that PETC enhances a support vector machine (SVM) classifier in classifying disease aspect information. PETC also performs better than three state-of-the-art classifier enhancement techniques, including two passage extraction techniques for text classifiers and a technique that employs term proximity information to enhance text classifiers. The contribution is of significance to evidence-based medicine, health education, and healthcare decision support. PETC can be used in those application domains in which a text to be classified may have several parts about different categories.
    Date
    28.10.2013 19:22:57
  4. Camacho-Miñano, M.-del-Mar; Núñez-Nickel, M.: ¬The multilayered nature of reference selection (2009) 0.11
    0.10502851 = product of:
      0.15754277 = sum of:
        0.082582794 = weight(_text_:interest in 2751) [ClassicSimilarity], result of:
          0.082582794 = score(doc=2751,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.3293521 = fieldWeight in 2751, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.046875 = fieldNorm(doc=2751)
        0.07495997 = sum of:
          0.03393283 = weight(_text_:classification in 2751) [ClassicSimilarity], result of:
            0.03393283 = score(doc=2751,freq=2.0), product of:
              0.16072905 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05046903 = queryNorm
              0.21111822 = fieldWeight in 2751, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.046875 = fieldNorm(doc=2751)
          0.04102714 = weight(_text_:22 in 2751) [ClassicSimilarity], result of:
            0.04102714 = score(doc=2751,freq=2.0), product of:
              0.17673394 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05046903 = queryNorm
              0.23214069 = fieldWeight in 2751, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2751)
      0.6666667 = coord(2/3)
    
    Abstract
    Why authors choose some references in preference to others is a question that is still not wholly answered despite its being of interest to scientists. The relevance of references is twofold: They are a mechanism for tracing the evolution of science, and because they enhance the image of the cited authors, citations are a widely known and used indicator of scientific endeavor. Following an extensive review of the literature, we selected all papers that seek to answer the central question and demonstrate that the existing theories are not sufficient: Neither citation nor indicator theory provides a complete and convincing answer. Some perspectives in this arena remain, which are isolated from the core literature. The purpose of this article is to offer a fresh perspective on a 30-year-old problem by extending the context of the discussion. We suggest reviving the discussion about citation theories with a new perspective, that of the readers, by layers or phases, in the final choice of references, allowing for a new classification in which any paper, to date, could be included.
    Date
    22. 3.2009 19:05:07
  5. Classification : options and opportunities (1995) 0.10
    0.104873404 = product of:
      0.1573101 = sum of:
        0.09732476 = weight(_text_:interest in 3753) [ClassicSimilarity], result of:
          0.09732476 = score(doc=3753,freq=4.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.38814518 = fieldWeight in 3753, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3753)
        0.059985336 = product of:
          0.11997067 = sum of:
            0.11997067 = weight(_text_:classification in 3753) [ClassicSimilarity], result of:
              0.11997067 = score(doc=3753,freq=36.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.7464156 = fieldWeight in 3753, product of:
                  6.0 = tf(freq=36.0), with freq of:
                    36.0 = termFreq=36.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3753)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Explores a wide range of options surrounding the choice and application of library and bibliographic classification systems. It provides detailed descriptions of alternative provisions in the much-used DDC and LCC systems, as well as descriptions and discussions of several alternative systems, such as BC, UDC and Reader Interest Classification
    Content
    Contains the contributions: LANGRIDGE, D.W.: Alternative starting points in classification; THOMAS, A.R.: Blissful beliefs: Henry Evelyn Bliss counsels on classification; WEINBERG, B.H.: Library classification and information retrieval thesauri: comparison and contrast; LOSEE, R.M.: How to study classification systems and their appropriateness for individual institutions; SHORT, E.C.: Knowledge and the educational purposes of higher education: implications for the design of a classification scheme; CHAN, L.M.: Library of Congress Classification: alternative provisions; MITCHELL, J.S.: Options in the Dewey Decimal Classification system: the current perspective; THOMAS, A.R.: Bliss Classification update; STRACHAN, P.D. u. F.H.M. OOMES: Universal Decimal Classification update; HSU, K.M.: The classification schemes of the Research Libraries of the New York Public Library; SAPIIE, J.: Reader-interest classification: the user-friendly schemes; WINKE, R.C.: Intentional use of multiple classification schemes in United States libraries; CHRESSANTHIS, J.D.: The reclassification decision: Dewey or Library of Congress?; PATTIE, L.-Y.W.: Reclassification revisited: an automated approach; KOH, G.S.: Options in classification available through modern technology; TROTTER, R.: Electronic Dewey: the CD-ROM version of the Dewey Decimal Classification
    Object
    Reader interest classification
    Series
    Cataloging and classification quarterly; vol.19, nos.3/4
  6. Seetharama, S.: Role of classification in information services generation (1992) 0.10
    0.10356945 = product of:
      0.15535417 = sum of:
        0.110110395 = weight(_text_:interest in 2531) [ClassicSimilarity], result of:
          0.110110395 = score(doc=2531,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.43913615 = fieldWeight in 2531, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0625 = fieldNorm(doc=2531)
        0.045243774 = product of:
          0.09048755 = sum of:
            0.09048755 = weight(_text_:classification in 2531) [ClassicSimilarity], result of:
              0.09048755 = score(doc=2531,freq=8.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.5629819 = fieldWeight in 2531, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2531)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Mentions the areas of application of the postulates and principles of Ranganathan's general theory of knowledge classification. Indicates that classification, in the sense of organizing concepts, is involved in the generation, storage, retrieval, and dissemination of information, especially in designing databases, construction of user interest profiles, subject description of documents, arrangement and presentation of information, etc. Demonstrates, with an example, the application of Ranganathan's postulates and principles of classification in the preparation of an information analysis and consolidation product
  7. Beaudoin, J.E.: Content-based image retrieval methods and professional image users (2016) 0.10
    0.10315509 = product of:
      0.15473263 = sum of:
        0.13763799 = weight(_text_:interest in 2637) [ClassicSimilarity], result of:
          0.13763799 = score(doc=2637,freq=8.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.54892015 = fieldWeight in 2637, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2637)
        0.017094642 = product of:
          0.034189284 = sum of:
            0.034189284 = weight(_text_:22 in 2637) [ClassicSimilarity], result of:
              0.034189284 = score(doc=2637,freq=2.0), product of:
                0.17673394 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046903 = queryNorm
                0.19345059 = fieldWeight in 2637, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2637)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This article reports the findings of a qualitative research study that examined professional image users' knowledge of, and interest in using, content-based image retrieval (CBIR) systems in an attempt to clarify when and where CBIR methods might be applied. The research sought to determine the differences in the perceived usefulness of CBIR technologies among image user groups from several domains and explicate the reasons given regarding the utility of CBIR systems for their professional tasks. Twenty participants (archaeologists, architects, art historians, and artists), individuals who rely on images of cultural materials in the performance of their work, took part in the study. The findings of the study reveal that interest in CBIR methods varied among the different professional user communities. Individuals who showed an interest in these systems were primarily those concerned with the formal characteristics (i.e., color, shape, composition, and texture) of the images being sought. In contrast, those participants who expressed a strong interest in images of known items, images illustrating themes, and/or items from specific locations believe concept-based searches to be the most direct route. These image users did not see a practical application for CBIR systems in their current work routines.
    Date
    22. 1.2016 12:32:25
  8. Mixter, J.; Childress, E.R.: FAST (Faceted Application of Subject Terminology) users : summary and case studies (2013) 0.10
    0.10118445 = product of:
      0.15177667 = sum of:
        0.13763799 = weight(_text_:interest in 2011) [ClassicSimilarity], result of:
          0.13763799 = score(doc=2011,freq=8.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.54892015 = fieldWeight in 2011, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2011)
        0.014138679 = product of:
          0.028277358 = sum of:
            0.028277358 = weight(_text_:classification in 2011) [ClassicSimilarity], result of:
              0.028277358 = score(doc=2011,freq=2.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.17593184 = fieldWeight in 2011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2011)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Over the past ten years, various organizations, both public and private, have expressed interest in implementing FAST in their cataloging workflows. As interest in FAST has grown, so too has interest in knowing how FAST is being used and by whom. Since 2002 eighteen institutions (see table 1) in six countries have expressed interest in learning more about FAST and how it could be implemented in cataloging workflows. Currently OCLC is aware of nine agencies that have actually adopted or support FAST for resource description. This study, the first systematic census of FAST users undertaken by OCLC, was conducted, in part, to address these inquiries. Its purpose was to examine: how FAST is being utilized; why FAST was chosen as the cataloging vocabulary; what benefits FAST provides; and what can be done to enhance the value of FAST. Interview requests were sent to all parties that had previously contacted OCLC about FAST. Of the eighteen organizations contacted, sixteen agreed to provide information about their decision whether to use FAST (nine adopters, seven non-adopters).
    Footnote
    Rez. in: Cataloging and classification quarterly 53(2015) no.2, S.247-249 (Shelby E. Harken)
  9. Gödert, W.: Klassifikatorische Inhaltserschließung : Ein Übersichtsartikel als kommentierter Literaturbericht (1990) 0.10
    0.09952843 = product of:
      0.14929265 = sum of:
        0.110110395 = weight(_text_:interest in 5143) [ClassicSimilarity], result of:
          0.110110395 = score(doc=5143,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.43913615 = fieldWeight in 5143, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0625 = fieldNorm(doc=5143)
        0.03918226 = product of:
          0.07836452 = sum of:
            0.07836452 = weight(_text_:classification in 5143) [ClassicSimilarity], result of:
              0.07836452 = score(doc=5143,freq=6.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.48755667 = fieldWeight in 5143, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5143)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Identifies the interest in questions of classified catalogues generated by the development of on-line catalogues, establishing a difference between 2 large areas: free access to information items in a systematic arrangement and expressing the contents of books by means of notational symbols in a classification system in a local catalogue. Examines the elements and structure of classification systems, the internationally important universal classifications, the procedures for book display and systematic processing in West German public libraries and exhibition techniques in West German academic libraries. Covers universal and faceted classifications, as well as classification systems in on-line catalogues
  10. Béthery, A.: Liberté bien ordonnée : les classifications encyclopédiques revues et corrigées (1988) 0.10
    0.09745094 = product of:
      0.14617641 = sum of:
        0.11678971 = weight(_text_:interest in 2532) [ClassicSimilarity], result of:
          0.11678971 = score(doc=2532,freq=4.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.46577424 = fieldWeight in 2532, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.046875 = fieldNorm(doc=2532)
        0.029386694 = product of:
          0.058773387 = sum of:
            0.058773387 = weight(_text_:classification in 2532) [ClassicSimilarity], result of:
              0.058773387 = score(doc=2532,freq=6.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.3656675 = fieldWeight in 2532, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2532)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The current trend of simplifying user access to documents in public libraries in France has led to strong criticism of the traditional use of decimal classification, and growing popularity for classifying by centres of interest. The notion of locating documents 'where the reader expects to find them' does not bear reasoned analysis: this approach depends on the subjective attitudes of the reader, whose preconceptions are unknown. Public libraries serve readers of all types, and therefore the classification used must be based on general objective criteria. Argues for the retention of traditional encyclopedic classifications (UDC or Dewey), which despite their drawbacks, are based on subject structures known to everyone, and allow for updating to accommodate new concepts. Classification can operate with visual labelling systems, to simplify access: this approach provides ready identification of centres of interest without discarding the real advantages of universality.
  11. Rafferty, P.: ¬The representation of knowledge in library classification schemes (2001) 0.10
    0.095837384 = product of:
      0.14375608 = sum of:
        0.082582794 = weight(_text_:interest in 640) [ClassicSimilarity], result of:
          0.082582794 = score(doc=640,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.3293521 = fieldWeight in 640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.046875 = fieldNorm(doc=640)
        0.061173283 = product of:
          0.122346565 = sum of:
            0.122346565 = weight(_text_:classification in 640) [ClassicSimilarity], result of:
              0.122346565 = score(doc=640,freq=26.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.76119757 = fieldWeight in 640, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.046875 = fieldNorm(doc=640)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This article explores the representation of knowledge through the discursive practice of 'general' or 'universal' classification schemes. These classification schemes were constructed within a philosophical framework which viewed 'man' as the central focus in the universe, which believed in progress through science and research, and which privileged written documentation over other forms. All major classification schemes are built on clearly identifiable systems of knowledge, and all classification schemes, as discursive formations, regulate the ways in which knowledge is made accessible. Of particular interest in determining how knowledge is represented in classification schemes are the following: - Main classes: classification theorists have attempted to 'discipline epistemology' in the sense of imposing main class structures with the view to simplifying access to knowledge in documents for library users. - Notational language: a number of classification theorists were particularly interested in the establishment of symbolic languages through notation. The article considers these aspects of classification theory in relation to: the Dewey Decimal Classification scheme; Otlet and La Fontaine's Universal Bibliographic Classification and the International Institute of Bibliography; Henry Evelyn Bliss's Bibliographic Classification; and S.R. Ranganathan's Colon Classification.
  12. Moeller, R.; Becnel, K.: Why on earth would we not genrefy the books? : a study of Reader-Interest Classification in school libraries (2019) 0.10
    0.09579128 = product of:
      0.14368692 = sum of:
        0.11919801 = weight(_text_:interest in 5266) [ClassicSimilarity], result of:
          0.11919801 = score(doc=5266,freq=6.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.47537887 = fieldWeight in 5266, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5266)
        0.024488911 = product of:
          0.048977822 = sum of:
            0.048977822 = weight(_text_:classification in 5266) [ClassicSimilarity], result of:
              0.048977822 = score(doc=5266,freq=6.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.3047229 = fieldWeight in 5266, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5266)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Through their work as instructors in a master of library science program, the authors observed a sharp increase in students' desire to adopt the reader-interest classification approach of genrefication for their school libraries' fiction collections. In order to better understand this trend, the researchers interviewed seven school librarians regarding their motivations for genrefying their libraries' fiction collections; the challenges they encountered during or after the genrefication process; and any benefits they perceived as having resulted from the implementation of genrefication. The data suggest that the librarians' interests in genrefication stem mostly from the lack of time they have to help individual students find materials, and the lack of time students are given out of the instructional day to explore the libraries' fiction collections. The participants felt that reclassifying the library's fiction collection by genre gave students more ownership of the fiction collection and allowed them to find materials that genuinely interested them. The significant challenges the librarians faced in the reorganization process speak to challenges regarding the ways in which librarians attempt to provide access to diverse materials for all patrons.
    Object
    Reader Interest Classification
  13. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.10
    0.09533234 = product of:
      0.1429985 = sum of:
        0.068818994 = weight(_text_:interest in 2765) [ClassicSimilarity], result of:
          0.068818994 = score(doc=2765,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.27446008 = fieldWeight in 2765, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2765)
        0.07417951 = sum of:
          0.039990224 = weight(_text_:classification in 2765) [ClassicSimilarity], result of:
            0.039990224 = score(doc=2765,freq=4.0), product of:
              0.16072905 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05046903 = queryNorm
              0.24880521 = fieldWeight in 2765, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2765)
          0.034189284 = weight(_text_:22 in 2765) [ClassicSimilarity], result of:
            0.034189284 = score(doc=2765,freq=2.0), product of:
              0.17673394 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05046903 = queryNorm
              0.19345059 = fieldWeight in 2765, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2765)
      0.6666667 = coord(2/3)
    
    Abstract
Passages can be hidden within a text to circumvent their disallowed transfer. Such release of compartmentalized information is of concern to all corporate and governmental organizations. Passage retrieval is well studied; we posit, however, that passage detection is not. Passage retrieval is the determination of the degree of relevance of blocks of text, namely passages, comprising a document. Rather than determining the relevance of a document in its entirety, passage retrieval determines the relevance of the individual passages. As such, modified traditional information-retrieval techniques compare terms found in user queries with the individual passages to determine a similarity score for passages of interest. In passage detection, passages are classified into predetermined categories. More often than not, passage detection techniques are deployed to detect hidden paragraphs in documents. That is, to hide information, hidden text is injected into a document's passages. Rather than matching query terms against passages to determine their relevance, the passages are classified using text-mining techniques. Those documents with hidden passages are defined as infected. Thus, simply stated, passage retrieval is the search for passages relevant to a user query, while passage detection is the classification of passages. That is, in passage detection, passages are labeled with one or more categories from a set of predetermined categories. We present a keyword-based dynamic passage approach (KDP) and demonstrate that KDP statistically significantly outperforms (99% confidence) the other document-splitting approaches by 12% to 18% in the passage-detection and passage category-prediction tasks. Furthermore, we evaluate the effects of feature selection, passage length, ambiguous passages, and finally training-data category distribution on passage-detection accuracy.
    Date
    22. 3.2009 19:14:43
  14. Mahesh, K.: Highly expressive tagging for knowledge organization in the 21st century (2014) 0.10
    0.09533234 = product of:
      0.1429985 = sum of:
        0.068818994 = weight(_text_:interest in 1434) [ClassicSimilarity], result of:
          0.068818994 = score(doc=1434,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.27446008 = fieldWeight in 1434, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1434)
        0.07417951 = sum of:
          0.039990224 = weight(_text_:classification in 1434) [ClassicSimilarity], result of:
            0.039990224 = score(doc=1434,freq=4.0), product of:
              0.16072905 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05046903 = queryNorm
              0.24880521 = fieldWeight in 1434, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1434)
          0.034189284 = weight(_text_:22 in 1434) [ClassicSimilarity], result of:
            0.034189284 = score(doc=1434,freq=2.0), product of:
              0.17673394 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05046903 = queryNorm
              0.19345059 = fieldWeight in 1434, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1434)
      0.6666667 = coord(2/3)
    
    Abstract
Knowledge organization of large-scale content on the Web requires substantial amounts of semantic metadata that is expensive to generate manually. Recent developments in Web technologies have enabled any user to tag documents and other forms of content, thereby generating metadata that could help organize knowledge. However, merely adding one or more tags to a document is highly inadequate to capture the aboutness of the document and thereby to support powerful semantic functions such as automatic classification, question answering or true semantic search and retrieval. This is true even when the tags used are labels from a well-designed classification system such as a thesaurus or taxonomy. There is a strong need to develop a semantic tagging mechanism with sufficient expressive power to capture the aboutness of each part of a document or dataset or multimedia content in order to enable applications that can benefit from knowledge organization on the Web. This article proposes a highly expressive mechanism of using ontology snippets as semantic tags that map portions of a document or a part of a dataset or a segment of a multimedia content to concepts and relations in an ontology of the domain(s) of interest.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  15. Information retrieval: new systems and current research : Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94 (1996) 0.09
    0.09164122 = product of:
      0.13746183 = sum of:
        0.110110395 = weight(_text_:interest in 6945) [ClassicSimilarity], result of:
          0.110110395 = score(doc=6945,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.43913615 = fieldWeight in 6945, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0625 = fieldNorm(doc=6945)
        0.027351426 = product of:
          0.054702852 = sum of:
            0.054702852 = weight(_text_:22 in 6945) [ClassicSimilarity], result of:
              0.054702852 = score(doc=6945,freq=2.0), product of:
                0.17673394 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046903 = queryNorm
                0.30952093 = fieldWeight in 6945, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6945)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    The 13 papers cover a wide range of specialist interest subjects grouped under the headings: logic and information retrieval; natural language; weighting and indexing strategies; user interfaces; and information policy
  16. Liu, Y.: Precision One MediaSource : film/video locator on CD-ROM (1995) 0.09
    0.09164122 = product of:
      0.13746183 = sum of:
        0.110110395 = weight(_text_:interest in 7744) [ClassicSimilarity], result of:
          0.110110395 = score(doc=7744,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.43913615 = fieldWeight in 7744, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0625 = fieldNorm(doc=7744)
        0.027351426 = product of:
          0.054702852 = sum of:
            0.054702852 = weight(_text_:22 in 7744) [ClassicSimilarity], result of:
              0.054702852 = score(doc=7744,freq=2.0), product of:
                0.17673394 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046903 = queryNorm
                0.30952093 = fieldWeight in 7744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7744)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
Precision One MediaSource First Edition is the first film and video listing on CD-ROM containing bibliographic records and information about rental sources. It was co-produced by the Brodart Co., Pennsylvania, and the Consortium of College and University Media Centres (CCUMC) and requires an IBM-compatible PC with a hard disk, CD-ROM drive and DOS 3.3 or higher. MediaSource is intended for educational and business users and is of particular interest to public, school and academic libraries. Discusses installation, the interface and searching, data quality and documentation.
    Date
    22. 6.1997 16:34:51
  17. Mischo, W.H.; Lee, J.: End-user searching in bibliographic databases (1987) 0.09
    0.09164122 = product of:
      0.13746183 = sum of:
        0.110110395 = weight(_text_:interest in 336) [ClassicSimilarity], result of:
          0.110110395 = score(doc=336,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.43913615 = fieldWeight in 336, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0625 = fieldNorm(doc=336)
        0.027351426 = product of:
          0.054702852 = sum of:
            0.054702852 = weight(_text_:22 in 336) [ClassicSimilarity], result of:
              0.054702852 = score(doc=336,freq=2.0), product of:
                0.17673394 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046903 = queryNorm
                0.30952093 = fieldWeight in 336, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=336)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The growing interest in end user or direct patron access to on-line bibliographic databases is reviewed with references to online catalogues, databases, and CD-ROMs. The literature of end user searching is surveyed with notes on: user training, software search aids, end user services in libraries: characterisation of end user searches; the role of librarians; and CD-ROMs as end user media
    Source
    Annual review of information science and technology. 22(1987), S.227-263
  18. Pichappan, P.; Sangaranachiyar, S.: Ageing approach to scientific eponyms (1996) 0.09
    0.09164122 = product of:
      0.13746183 = sum of:
        0.110110395 = weight(_text_:interest in 80) [ClassicSimilarity], result of:
          0.110110395 = score(doc=80,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.43913615 = fieldWeight in 80, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0625 = fieldNorm(doc=80)
        0.027351426 = product of:
          0.054702852 = sum of:
            0.054702852 = weight(_text_:22 in 80) [ClassicSimilarity], result of:
              0.054702852 = score(doc=80,freq=2.0), product of:
                0.17673394 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046903 = queryNorm
                0.30952093 = fieldWeight in 80, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=80)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
Report presented at the 16th National Indian Association of Special Libraries and Information Centres Seminar Special Interest Group Meeting on Informetrics in Bombay, 19-22 Dec 94
  19. Dempsey, L.; Russell, R.; Kirriemur, J.W.: Towards distributed library systems : Z39.50 in a European context (1996) 0.09
    0.09164122 = product of:
      0.13746183 = sum of:
        0.110110395 = weight(_text_:interest in 127) [ClassicSimilarity], result of:
          0.110110395 = score(doc=127,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.43913615 = fieldWeight in 127, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0625 = fieldNorm(doc=127)
        0.027351426 = product of:
          0.054702852 = sum of:
            0.054702852 = weight(_text_:22 in 127) [ClassicSimilarity], result of:
              0.054702852 = score(doc=127,freq=2.0), product of:
                0.17673394 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046903 = queryNorm
                0.30952093 = fieldWeight in 127, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=127)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Z39.50 is an information retrieval protocol. It has generated much interest but is so far little deployed in UK systems and services. Gives a functional overview of the protocol itself and the standards background, describes some European initiatives which make use of it, and outlines various issues to do with its future use and acceptance. Z39.50 is a crucial building block of future distributed information systems but it needs to be considered alongside other protocols and services to provide useful applications
    Source
    Program. 30(1996) no.1, S.1-22
  20. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.09
    0.09164122 = product of:
      0.13746183 = sum of:
        0.110110395 = weight(_text_:interest in 3376) [ClassicSimilarity], result of:
          0.110110395 = score(doc=3376,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.43913615 = fieldWeight in 3376, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0625 = fieldNorm(doc=3376)
        0.027351426 = product of:
          0.054702852 = sum of:
            0.054702852 = weight(_text_:22 in 3376) [ClassicSimilarity], result of:
              0.054702852 = score(doc=3376,freq=2.0), product of:
                0.17673394 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046903 = queryNorm
                0.30952093 = fieldWeight in 3376, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3376)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
This chapter presents ontologies and their role in the creation of the Semantic Web. Ontologies hold special interest because they are very closely related to the way we understand the world. They provide common understanding, the very first step to successful communication. In the following sections, we will present ontologies and how they are created and used. We will describe available tools for specifying and working with ontologies.
    Date
    31. 7.2010 16:58:22
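The explain trees attached to these entries follow Lucene's ClassicSimilarity (TF-IDF) scoring, and every number in them can be recomputed from the printed inputs. A minimal sketch, assuming Lucene's classic formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), per-term weight = queryWeight x fieldWeight), reproduces the score of entry 15 (doc 6945):

```python
import math

# Sketch of Lucene ClassicSimilarity scoring, reconstructed from the
# explain trees above; names follow the explain output.

def idf(doc_freq: int, max_docs: int) -> float:
    # idf(docFreq, maxDocs) = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, max_docs: int,
               query_norm: float, field_norm: float) -> float:
    # weight(_text_:term) = queryWeight * fieldWeight, where
    #   queryWeight = idf * queryNorm
    #   fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm
    field_weight = math.sqrt(freq) * i * field_norm
    return query_weight * field_weight

# Entry 15 (doc 6945): queryNorm and fieldNorm are taken from the tree.
QUERY_NORM = 0.05046903
interest = term_score(2.0, 835, 44218, QUERY_NORM, 0.0625)
# The "22" clause sits inside a nested sum, scaled by coord(1/2).
w22 = term_score(2.0, 3622, 44218, QUERY_NORM, 0.0625) * 0.5
# Two of three top-level clauses matched, hence the outer coord(2/3).
doc_score = (interest + w22) * (2.0 / 3.0)
```

The inner coord(1/2) and outer coord(2/3) factors simply record how many of the nested query clauses matched in each sum.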
