Search (156 results, page 1 of 8)

  • theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  1. Zeng, M.L.; Panzer, M.; Salaba, A.: Expressing classification schemes with OWL 2 Web Ontology Language : exploring issues and opportunities based on experiments using OWL 2 for three classification schemes 0.02
    0.01859981 = product of:
      0.086799115 = sum of:
        0.03271481 = weight(_text_:web in 3130) [ClassicSimilarity], result of:
          0.03271481 = score(doc=3130,freq=4.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.4079388 = fieldWeight in 3130, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3130)
        0.03752027 = weight(_text_:frankfurt in 3130) [ClassicSimilarity], result of:
          0.03752027 = score(doc=3130,freq=2.0), product of:
            0.10213336 = queryWeight, product of:
              4.1562657 = idf(docFreq=1882, maxDocs=44218)
              0.024573348 = queryNorm
            0.36736545 = fieldWeight in 3130, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1562657 = idf(docFreq=1882, maxDocs=44218)
              0.0625 = fieldNorm(doc=3130)
        0.016564032 = product of:
          0.049692094 = sum of:
            0.049692094 = weight(_text_:2010 in 3130) [ClassicSimilarity], result of:
              0.049692094 = score(doc=3130,freq=2.0), product of:
                0.117538005 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.024573348 = queryNorm
                0.4227747 = fieldWeight in 3130, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3130)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    Based on research into three general classification schemes, this paper discusses issues encountered when expressing classification schemes in SKOS and explores opportunities for resolving major issues using the OWL 2 Web Ontology Language.
    Source
    Paradigms and conceptual systems in knowledge organization: Proceedings of the Eleventh International ISKO conference, Rome, 23-26 February 2010, ed. Claudio Gnoli, Indeks, Frankfurt/M.
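    The score shown with each entry is Lucene's "explain" output for its [ClassicSimilarity] (classic TF-IDF) ranking model, and every factor needed to reproduce it appears in the tree. Reading the "web" term of entry 1 as a worked example, with the values copied from the score tree above:

      \mathrm{queryWeight} = \mathrm{idf} \cdot \mathrm{queryNorm} = 3.2635105 \times 0.024573348 \approx 0.08019538
      \mathrm{fieldWeight} = \sqrt{\mathrm{tf}} \cdot \mathrm{idf} \cdot \mathrm{fieldNorm} = \sqrt{4.0} \times 3.2635105 \times 0.0625 \approx 0.4079388
      \mathrm{weight}(\text{web}) = \mathrm{queryWeight} \cdot \mathrm{fieldWeight} \approx 0.03271481

    The entry score is then the coordination factor times the sum of the per-term weights: 0.21428572 \times (0.03271481 + 0.03752027 + 0.016564032) \approx 0.0186, which rounds to the 0.02 displayed after the title.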
  2. Jacob, E.K.: Proposal for a classification of classifications built on Beghtol's distinction between "Naïve Classification" and "Professional Classification" (2010) 0.02
    0.016329218 = product of:
      0.07620301 = sum of:
        0.008695048 = weight(_text_:information in 2945) [ClassicSimilarity], result of:
          0.008695048 = score(doc=2945,freq=6.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.20156369 = fieldWeight in 2945, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2945)
        0.014905514 = weight(_text_:retrieval in 2945) [ClassicSimilarity], result of:
          0.014905514 = score(doc=2945,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.20052543 = fieldWeight in 2945, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2945)
        0.052602448 = product of:
          0.07890367 = sum of:
            0.058927573 = weight(_text_:2010 in 2945) [ClassicSimilarity], result of:
              0.058927573 = score(doc=2945,freq=5.0), product of:
                0.117538005 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.024573348 = queryNorm
                0.5013491 = fieldWeight in 2945, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2945)
            0.019976096 = weight(_text_:22 in 2945) [ClassicSimilarity], result of:
              0.019976096 = score(doc=2945,freq=2.0), product of:
                0.08605168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.024573348 = queryNorm
                0.23214069 = fieldWeight in 2945, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2945)
          0.6666667 = coord(2/3)
      0.21428572 = coord(3/14)
    
    Abstract
    Argues that Beghtol's (2003) use of the terms "naive classification" and "professional classification" is valid because they are nominal definitions and that the distinction between these two types of classification points up the need for researchers in knowledge organization to broaden their scope beyond traditional classification systems intended for information retrieval. Argues that work by Beghtol (2003), Kwasnik (1999) and Bailey (1994) offers direction for the development of a classification of classifications based on the pragmatic dimensions of extant classification systems. With reference to: Beghtol, C.: Naïve classification systems and the global information society. In: Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine. Würzburg: Ergon Verlag 2004. S.19-22. (Advances in knowledge organization; vol.9)
    Source
    Knowledge organization. 37(2010) no.2, S.111-120
    Year
    2010
  3. Fripp, D.: Using linked data to classify web documents (2010) 0.01
    0.014840477 = product of:
      0.06925556 = sum of:
        0.04048251 = weight(_text_:web in 4172) [ClassicSimilarity], result of:
          0.04048251 = score(doc=4172,freq=8.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.50479853 = fieldWeight in 4172, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4172)
        0.00585677 = weight(_text_:information in 4172) [ClassicSimilarity], result of:
          0.00585677 = score(doc=4172,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.13576832 = fieldWeight in 4172, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4172)
        0.02291628 = product of:
          0.06874884 = sum of:
            0.06874884 = weight(_text_:2010 in 4172) [ClassicSimilarity], result of:
              0.06874884 = score(doc=4172,freq=5.0), product of:
                0.117538005 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.024573348 = queryNorm
                0.5849073 = fieldWeight in 4172, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4172)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    Purpose - The purpose of this paper is to find a relationship between traditional faceted classification schemes and semantic web document annotators, particularly in the linked data environment. Design/methodology/approach - A consideration of the conceptual ideas behind faceted classification and linked data architecture is made. Analysis of selected web documents is performed using Calais' Semantic Proxy to support the considerations. Findings - Technical language aside, the principles of both approaches are very similar. Modern classification techniques have the potential to automatically generate metadata to drive more precise information recall by including a semantic layer. Originality/value - Linked data have not been explicitly considered in this context before in the published literature.
    Source
    Aslib proceedings. 62(2010) no.6, S.585-595
    Theme
    Semantic Web
    Year
    2010
  4. Broughton, V.: ¬The need for a faceted classification as the basis of all methods of information retrieval (2006) 0.01
    0.012529018 = product of:
      0.05846875 = sum of:
        0.020446755 = weight(_text_:web in 2874) [ClassicSimilarity], result of:
          0.020446755 = score(doc=2874,freq=4.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.25496176 = fieldWeight in 2874, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2874)
        0.010247213 = weight(_text_:information in 2874) [ClassicSimilarity], result of:
          0.010247213 = score(doc=2874,freq=12.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.23754507 = fieldWeight in 2874, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2874)
        0.027774787 = weight(_text_:retrieval in 2874) [ClassicSimilarity], result of:
          0.027774787 = score(doc=2874,freq=10.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.37365708 = fieldWeight in 2874, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2874)
      0.21428572 = coord(3/14)
    
    Abstract
    Purpose - The aim of this article is to estimate the impact of faceted classification and the faceted analytical method on the development of various information retrieval tools over the latter part of the twentieth and early twenty-first centuries. Design/methodology/approach - The article presents an examination of various subject access tools intended for retrieval of both print and digital materials to determine whether they exhibit features of faceted systems. Some attention is paid to use of the faceted approach as a means of structuring information on commercial web sites. The secondary and research literature is also surveyed for commentary on and evaluation of facet analysis as a basis for the building of vocabulary and conceptual tools. Findings - The study finds that faceted systems are now very common, with a major increase in their use over the last 15 years. Most LIS subject indexing tools (classifications, subject heading lists and thesauri) now demonstrate features of facet analysis to a greater or lesser degree. A faceted approach is frequently taken to the presentation of product information on commercial web sites, and there is an independent strand of theory and documentation related to this application. There is some significant research on semi-automatic indexing and retrieval (query expansion and query formulation) using facet analytical techniques. Originality/value - This article provides an overview of an important conceptual approach to information retrieval, and compares different understandings and applications of this methodology.
    Footnote
    Contribution to a special issue: UK library & information schools: UCL SLAIS.
  5. Beghtol, C.: Naïve classification systems and the global information society (2004) 0.01
    0.012372 = product of:
      0.057736002 = sum of:
        0.008366814 = weight(_text_:information in 3483) [ClassicSimilarity], result of:
          0.008366814 = score(doc=3483,freq=8.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.19395474 = fieldWeight in 3483, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3483)
        0.017566316 = weight(_text_:retrieval in 3483) [ClassicSimilarity], result of:
          0.017566316 = score(doc=3483,freq=4.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.23632148 = fieldWeight in 3483, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3483)
        0.031802874 = product of:
          0.04770431 = sum of:
            0.031057559 = weight(_text_:2010 in 3483) [ClassicSimilarity], result of:
              0.031057559 = score(doc=3483,freq=2.0), product of:
                0.117538005 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.024573348 = queryNorm
                0.2642342 = fieldWeight in 3483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3483)
            0.016646748 = weight(_text_:22 in 3483) [ClassicSimilarity], result of:
              0.016646748 = score(doc=3483,freq=2.0), product of:
                0.08605168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.024573348 = queryNorm
                0.19345059 = fieldWeight in 3483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3483)
          0.6666667 = coord(2/3)
      0.21428572 = coord(3/14)
    
    Abstract
    Classification is an activity that transcends time and space and that bridges the divisions between different languages and cultures, including the divisions between academic disciplines. Classificatory activity, however, serves different purposes in different situations. Classifications for information retrieval can be called "professional" classifications and classifications in other fields can be called "naïve" classifications because they are developed by people who have no particular interest in classificatory issues. The general purpose of naïve classification systems is to discover new knowledge. In contrast, the general purpose of information retrieval classifications is to classify pre-existing knowledge. Different classificatory purposes may thus inform systems that are intended to span the cultural specifics of the globalized information society. This paper builds on previous research into the purposes and characteristics of naïve classifications. It describes some of the relationships between the purpose and context of a naïve classification, the units of analysis used in it, and the theory that the context and the units of analysis imply.
    Footnote
    Cf.: Jacob, E.K.: Proposal for a classification of classifications built on Beghtol's distinction between "Naïve Classification" and "Professional Classification". In: Knowledge organization. 37(2010) no.2, S.111-120.
    Pages
    S.19-22
    Source
    Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine
  6. Ellis, D.; Vasconcelos, A.: Ranganathan and the Net : using facet analysis to search and organise the World Wide Web (1999) 0.01
    0.012032174 = product of:
      0.056150142 = sum of:
        0.030050473 = weight(_text_:web in 726) [ClassicSimilarity], result of:
          0.030050473 = score(doc=726,freq=6.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.37471575 = fieldWeight in 726, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=726)
        0.0050200885 = weight(_text_:information in 726) [ClassicSimilarity], result of:
          0.0050200885 = score(doc=726,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.116372846 = fieldWeight in 726, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=726)
        0.021079581 = weight(_text_:retrieval in 726) [ClassicSimilarity], result of:
          0.021079581 = score(doc=726,freq=4.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.2835858 = fieldWeight in 726, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=726)
      0.21428572 = coord(3/14)
    
    Abstract
    This article gives a cheerfully brief and undetailed account of how to make a faceted classification system, then describes information retrieval and searching on the web. It concludes by saying that facets would be excellent in helping users search and browse the web, but offers no real clues as to how this can be done.
    Theme
    Klassifikationssysteme im Online-Retrieval
  7. Gnoli, C.; Mei, H.: Freely faceted classification for Web-based information retrieval (2006) 0.01
    0.011865708 = product of:
      0.055373304 = sum of:
        0.024536107 = weight(_text_:web in 534) [ClassicSimilarity], result of:
          0.024536107 = score(doc=534,freq=4.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.3059541 = fieldWeight in 534, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=534)
        0.0050200885 = weight(_text_:information in 534) [ClassicSimilarity], result of:
          0.0050200885 = score(doc=534,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.116372846 = fieldWeight in 534, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=534)
        0.02581711 = weight(_text_:retrieval in 534) [ClassicSimilarity], result of:
          0.02581711 = score(doc=534,freq=6.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.34732026 = fieldWeight in 534, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=534)
      0.21428572 = coord(3/14)
    
    Abstract
    In free classification, each concept is expressed by a constant notation, and classmarks are formed by free combinations of them, allowing the retrieval of records from a database by searching any of the component concepts. A refinement of free classification is freely faceted classification, where notation can include facets, expressing the kind of relations held between the concepts. The Integrative Level Classification project aims at testing free and freely faceted classification by applying them to small bibliographical samples in various domains. A sample, called the Dandelion Bibliography of Facet Analysis, is described here. Experience was gained by using this system to classify 300 specialized papers dealing with facet analysis itself, recording them in a MySQL database, and building a Web interface that exploits freely faceted notation. The interface is written in PHP and uses string functions to process the queries and to yield relevant results selected and ordered according to the principles of integrative levels.
    Theme
    Klassifikationssysteme im Online-Retrieval
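    A minimal sketch of the retrieval idea described in entry 7: records carry a freely faceted classmark, and a record can be retrieved by any of its component concepts. The classmark syntax, facet-indicator characters and sample records below are invented for illustration; they are not the actual Integrative Level Classification notation or the project's PHP code.

      import re

      # Hypothetical records keyed by id, each with a freely faceted classmark in which
      # concept notations are joined by facet indicators (here ":" stands for any relation).
      records = {
          1: "mq:an97",      # concept "mq" in some relation to concept "an97"
          2: "an97:x3",
          3: "mq",
      }

      def components(classmark):
          """Split a classmark into its component concept notations."""
          return {part for part in re.split(r"[:()]", classmark) if part}

      def search(concept):
          """Return ids of records whose classmark contains the given concept notation."""
          return sorted(rid for rid, cm in records.items() if concept in components(cm))

      print(search("an97"))   # -> [1, 2]: a record is retrievable by any of its component concepts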
  8. Frické, M.: Logic and the organization of information (2012) 0.01
    0.009126513 = product of:
      0.042590395 = sum of:
        0.017529441 = weight(_text_:web in 1782) [ClassicSimilarity], result of:
          0.017529441 = score(doc=1782,freq=6.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.21858418 = fieldWeight in 1782, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1782)
        0.012764533 = weight(_text_:information in 1782) [ClassicSimilarity], result of:
          0.012764533 = score(doc=1782,freq=38.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.29590017 = fieldWeight in 1782, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1782)
        0.012296421 = weight(_text_:retrieval in 1782) [ClassicSimilarity], result of:
          0.012296421 = score(doc=1782,freq=4.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.16542503 = fieldWeight in 1782, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1782)
      0.21428572 = coord(3/14)
    
    Abstract
    Logic and the Organization of Information closely examines the historical and contemporary methodologies used to catalogue information objects (books, ebooks, journals, articles, web pages, images, emails, podcasts and more) in the digital era. This book provides an in-depth technical background for digital librarianship, and covers a broad range of theoretical and practical topics including: classification theory, topic annotation, automatic clustering, generalized synonymy and concept indexing, distributed libraries, semantic web ontologies and Simple Knowledge Organization System (SKOS). It also analyzes the challenges facing today's information architects, and outlines a series of techniques for overcoming them. Logic and the Organization of Information is intended for practitioners and professionals working at a design level as a reference book for digital librarianship. Advanced-level students, researchers and academics studying information science, library science, digital libraries and computer science will also find this book invaluable.
    Footnote
    Rez. in: J. Doc. 70(2014) no.4: "Books on the organization of information and knowledge, aimed at a library/information audience, tend to fall into two clear categories. Most are practical and pragmatic, explaining the "how" as much or more than the "why". Some are theoretical, in part or in whole, showing how the practice of classification, indexing, resource description and the like relates to philosophy, logic, and other foundational bases; the books by Langridge (1992) and by Svenonius (2000) are well-known examples of this latter kind. To this category certainly belongs a recent book by Martin Frické (2012). The author takes the reader for an extended tour through a variety of aspects of information organization, including classification and taxonomy, alphabetical vocabularies and indexing, cataloguing and FRBR, and aspects of the semantic web. The emphasis throughout is on showing how practice is, or should be, underpinned by formal structures; there is a particular emphasis on first order predicate calculus. The advantages of a greater, and more explicit, use of symbolic logic is a recurring theme of the book. There is a particularly commendable historical dimension, often omitted in texts on this subject. It cannot be said that this book is entirely an easy read, although it is well written with a helpful index, and its arguments are generally well supported by clear and relevant examples. It is thorough and detailed, but thereby seems better geared to the needs of advanced students and researchers than to the practitioners who are suggested as a main market. For graduate students in library/information science and related disciplines, in particular, this will be a valuable resource. I would place it alongside Svenonius' book as the best insight into the theoretical "why" of information organization. It has evoked a good deal of interest, including a set of essay commentaries in Journal of Information Science (Gilchrist et al., 2013). Introducing these, Alan Gilchrist rightly says that Frické deserves a salute for making explicit the fundamental relationship between the ancient discipline of logic and modern information organization. If information science is to continue to develop, and make a contribution to the organization of the information environments of the future, then this book sets the groundwork for the kind of studies which will be needed." (D. Bawden)
    LCSH
    Information Systems
    Information storage and retrieval systems
    Subject
    Information Systems
    Information storage and retrieval systems
  9. Kwasnik, B.H.: ¬The role of classification in knowledge representation (1999) 0.01
    0.008338684 = product of:
      0.038913857 = sum of:
        0.017349645 = weight(_text_:web in 2464) [ClassicSimilarity], result of:
          0.017349645 = score(doc=2464,freq=2.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.21634221 = fieldWeight in 2464, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2464)
        0.014905514 = weight(_text_:retrieval in 2464) [ClassicSimilarity], result of:
          0.014905514 = score(doc=2464,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.20052543 = fieldWeight in 2464, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2464)
        0.006658699 = product of:
          0.019976096 = sum of:
            0.019976096 = weight(_text_:22 in 2464) [ClassicSimilarity], result of:
              0.019976096 = score(doc=2464,freq=2.0), product of:
                0.08605168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.024573348 = queryNorm
                0.23214069 = fieldWeight in 2464, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2464)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    A fascinating, broad-ranging article about classification, knowledge, and how they relate. Hierarchies, trees, paradigms (a two-dimensional classification that can look something like a spreadsheet), and facets are covered, with descriptions of how they work and how they can be used for knowledge discovery and creation. Kwasnik outlines how to make a faceted classification: choose facets, develop facets, analyze entities using the facets, and make a citation order. Facets are useful for many reasons: they do not require complete knowledge of the entire body of material; they are hospitable, flexible, and expressive; they do not require a rigid background theory; they can mix theoretical structures and models; and they allow users to view things from many perspectives. Facets do have faults: it can be hard to pick the right ones; it is hard to show relations between them; and it is difficult to visualize them. The coverage of the other methods is equally thorough and there is much to consider for anyone putting a classification on the web.
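    A toy illustration of the four steps listed in the annotation above (choose facets, develop them, analyze entities with them, apply a citation order). The facets, values, entities and the slash notation are invented for the example:

      # 1./2. Choose and develop facets with their values (all invented for illustration).
      facets = {
          "function": ["seating", "storage"],
          "material": ["wood", "metal", "glass"],
          "location": ["indoor", "outdoor"],
      }
      citation_order = ["function", "material", "location"]   # 4. decide a citation order

      # 3. Analyze entities using the facets.
      entities = {
          "park bench":     {"function": "seating", "material": "wood",  "location": "outdoor"},
          "filing cabinet": {"function": "storage", "material": "metal", "location": "indoor"},
      }

      def classmark(analysis):
          """Combine one entity's facet values into a compound class, in citation order."""
          return " / ".join(analysis[f] for f in citation_order if f in analysis)

      for name, analysis in entities.items():
          print(name, "->", classmark(analysis))
      # park bench -> seating / wood / outdoor
      # filing cabinet -> storage / metal / indoor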
    Source
    Library trends. 48(1999) no.1, S.22-47
    Theme
    Klassifikationssysteme im Online-Retrieval
  10. Putkey, T.: Using SKOS to express faceted classification on the Semantic Web (2011) 0.01
    0.008331071 = product of:
      0.03887833 = sum of:
        0.025863327 = weight(_text_:web in 311) [ClassicSimilarity], result of:
          0.025863327 = score(doc=311,freq=10.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.32250395 = fieldWeight in 311, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=311)
        0.0047329846 = weight(_text_:information in 311) [ClassicSimilarity], result of:
          0.0047329846 = score(doc=311,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.10971737 = fieldWeight in 311, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=311)
        0.008282016 = product of:
          0.024846047 = sum of:
            0.024846047 = weight(_text_:2010 in 311) [ClassicSimilarity], result of:
              0.024846047 = score(doc=311,freq=2.0), product of:
                0.117538005 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.024573348 = queryNorm
                0.21138735 = fieldWeight in 311, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.03125 = fieldNorm(doc=311)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    This paper looks at Simple Knowledge Organization System (SKOS) to investigate how a faceted classification can be expressed in RDF and shared on the Semantic Web. Statement of the Problem Faceted classification outlines facets as well as subfacets and facet values. Hierarchical relationships and associative relationships are established in a faceted classification. RDF is used to describe how a specific URI has a relationship to a facet value. Not only does RDF decompose "information into pieces," but by incorporating facet values RDF also gives the URI the hierarchical and associative relationships expressed in the faceted classification. Combining faceted classification and RDF creates more knowledge than if the two stood alone. An application understands the subject-predicate-object relationship in RDF and can display hierarchical and associative relationships based on the object (facet) value. This paper continues to investigate if the above idea is indeed useful, used, and applicable. If so, how can a faceted classification be expressed in RDF? What would this expression look like? Literature Review This paper used the same articles as the paper A Survey of Faceted Classification: History, Uses, Drawbacks and the Semantic Web (Putkey, 2010). In that paper, appropriate resources were discovered by searching in various databases for "faceted classification" and "faceted search," either in the descriptor or title fields. Citations were also followed to find more articles, as well as by searching the Internet for the same terms. To retrieve the documents about RDF, searches combined "faceted classification" and "RDF," looking for these words in either the descriptor or title.
    Methodology Based on information from the research papers, further research was done on SKOS, on examples of SKOS and of shared faceted classifications in the Semantic Web, and on how to express SKOS in RDF/XML. Once confident with these ideas, the author used a faceted taxonomy created in a Vocabulary Design class and encoded it using SKOS. Instead of writing RDF in a program such as Notepad, a thesaurus tool was used to create the taxonomy according to SKOS standards and then export the thesaurus in RDF/XML format. These processes and tools are then analyzed. Results The initial statement of the problem was simply an extension of the survey paper done earlier in this class. To continue, further research was done into SKOS, a standard for expressing thesauri, taxonomies and faceted classifications so they can be shared on the semantic web.
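    A small sketch of the encoding step described above, using the rdflib library to express two facet values as SKOS concepts with a hierarchical (skos:broader) and an associative (skos:related) link, then serializing the graph to Turtle. The namespace, concept names and labels are placeholders, not the taxonomy from the paper:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/facets/")   # placeholder namespace
      g = Graph()
      g.bind("skos", SKOS)
      g.bind("ex", EX)

      g.add((EX.scheme, RDF.type, SKOS.ConceptScheme))
      g.add((EX.scheme, SKOS.prefLabel, Literal("Example faceted scheme", lang="en")))

      for concept, label in ((EX.genre, "Genre"), (EX.documentary, "Documentary")):
          g.add((concept, RDF.type, SKOS.Concept))
          g.add((concept, SKOS.inScheme, EX.scheme))
          g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))

      g.add((EX.documentary, SKOS.broader, EX.genre))       # hierarchical relationship
      g.add((EX.genre, SKOS.narrower, EX.documentary))
      g.add((EX.documentary, SKOS.related, EX.nonfiction))  # associative relationship (target left untyped here)

      print(g.serialize(format="turtle"))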
  11. Hjoerland, B.: Theories of knowledge organization - theories of knowledge (2017) 0.01
    0.008175849 = product of:
      0.03815396 = sum of:
        0.020241255 = weight(_text_:web in 3494) [ClassicSimilarity], result of:
          0.020241255 = score(doc=3494,freq=2.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.25239927 = fieldWeight in 3494, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3494)
        0.010144223 = weight(_text_:information in 3494) [ClassicSimilarity], result of:
          0.010144223 = score(doc=3494,freq=6.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.23515764 = fieldWeight in 3494, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3494)
        0.007768482 = product of:
          0.023305446 = sum of:
            0.023305446 = weight(_text_:22 in 3494) [ClassicSimilarity], result of:
              0.023305446 = score(doc=3494,freq=2.0), product of:
                0.08605168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.024573348 = queryNorm
                0.2708308 = fieldWeight in 3494, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3494)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Pages
    S.22-36
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  12. Shera, J.H.: Pattern, structure, and conceptualization in classification for information retrieval (1957) 0.01
    0.00805116 = product of:
      0.056358118 = sum of:
        0.014198954 = weight(_text_:information in 1287) [ClassicSimilarity], result of:
          0.014198954 = score(doc=1287,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.3291521 = fieldWeight in 1287, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=1287)
        0.042159162 = weight(_text_:retrieval in 1287) [ClassicSimilarity], result of:
          0.042159162 = score(doc=1287,freq=4.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.5671716 = fieldWeight in 1287, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=1287)
      0.14285715 = coord(2/14)
    
    Source
    Proceedings of the International Study Conference on Classification for Information Retrieval, held at Beatrice Webb House, Dorking, England, 13.-17.5.1957
  13. Denton, W.: Putting facets on the Web : an annotated bibliography (2003) 0.01
    0.0076992605 = product of:
      0.03592988 = sum of:
        0.025042059 = weight(_text_:web in 2467) [ClassicSimilarity], result of:
          0.025042059 = score(doc=2467,freq=24.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.3122631 = fieldWeight in 2467, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
        0.004677191 = weight(_text_:information in 2467) [ClassicSimilarity], result of:
          0.004677191 = score(doc=2467,freq=10.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.10842399 = fieldWeight in 2467, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
        0.006210631 = weight(_text_:retrieval in 2467) [ClassicSimilarity], result of:
          0.006210631 = score(doc=2467,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.08355226 = fieldWeight in 2467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
      0.21428572 = coord(3/14)
    
    Abstract
    This is a classified, annotated bibliography about how to design faceted classification systems and make them usable on the World Wide Web. It is the first of three works I will be doing. The second, based on the material here and elsewhere, will discuss how to actually make the faceted system and put it online. The third will be a report of how I did just that, what worked, what didn't, and what I learned. Almost every article or book listed here begins with an explanation of what a faceted classification system is, so I won't (but see Steckel in Background below if you don't already know). They all agree that faceted systems are very appropriate for the web. Even pre-web articles (such as Duncan's in Background, below) assert that hypertext and facets will go together well. Combined, it is possible to take a set of documents and classify them or apply subject headings to describe what they are about, then build a navigational structure so that any user, no matter how he or she approaches the material, no matter what his or her goals, can move and search in a way that makes sense to them, but still get to the same useful results as someone else following a different path to the same goal. There is no one way that everyone will always use when looking for information. The more flexible the organization of the information, the more accommodating it is. Facets are more flexible for hypertext browsing than any enumerative or hierarchical system.
    Consider movie listings in newspapers. Most Canadian newspapers list movie showtimes in two large blocks, for the two major theatre chains. The listings are ordered by region (in large cities), then theatre, then movie, and finally by showtime. Anyone wondering where and when a particular movie is playing must scan the complete listings. Determining what movies are playing in the next half hour is very difficult. When movie listings went onto the web, most sites used a simple faceted organization, always with movie name and theatre, and perhaps with region or neighbourhood (thankfully, theatre chains were left out). They make it easy to pick a theatre and see what movies are playing there, or to pick a movie and see what theatres are showing it. To complete the system, the sites should allow users to browse by neighbourhood and showtime, and to order the results in any way they desired. Thus could people easily find answers to such questions as, "Where is the new James Bond movie playing?" "What's showing at the Roxy tonight?" "I'm going to be out in Little Finland this afternoon with three hours to kill starting at 2 ... is anything interesting playing?" A hypertext, faceted classification system makes more useful information more easily available to the user. Reading the books and articles below in chronological order will show a certain progression: suggestions that faceting and hypertext might work well, confidence that facets would work well if only someone would make such a system, and finally the beginning of serious work on actually designing, building, and testing faceted web sites. There is a solid basis of how to make faceted classifications (see Vickery in Recommended), but their application online is just starting. Work on XFML (see Van Dijck's work in Recommended), the Exchangeable Faceted Metadata Language, will make this easier. If it follows previous patterns, parts of the Internet community will embrace the idea and make open source software available for others to reuse. It will be particularly beneficial if professionals in both information studies and computer science can work together to build working systems, standards, and code. Each can benefit from the other's expertise in what can be a very complicated and technical area. One particularly nice thing about this area of research is that people interested in combining facets and the web often have web sites where they post their writings.
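    The movie-listing example above is essentially faceted filtering over one small table. A sketch of that navigation, with invented listings data and field names; any combination of facets narrows the set, and any facet can order the result:

      # Invented sample listings; on a real site these would come from the listings database.
      listings = [
          {"movie": "Film A", "theatre": "Roxy",      "neighbourhood": "Little Finland", "showtime": "14:30"},
          {"movie": "Film A", "theatre": "Paramount", "neighbourhood": "Downtown",       "showtime": "19:00"},
          {"movie": "Film B", "theatre": "Roxy",      "neighbourhood": "Little Finland", "showtime": "21:15"},
      ]

      def browse(order_by="showtime", **facets):
          """Keep rows matching every requested facet value, ordered by any chosen facet."""
          rows = [r for r in listings if all(r.get(f) == v for f, v in facets.items())]
          return sorted(rows, key=lambda r: r[order_by])

      print(browse(movie="Film A"))                    # where and when is this movie playing?
      print(browse(theatre="Roxy"))                    # what's showing at the Roxy?
      print(browse(neighbourhood="Little Finland"))    # what's playing near me this afternoon?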
    This bibliography is not meant to be exhaustive, but unfortunately it is not as complete as I wanted. Some books and articles are not included, but they may be used in my future work. (These include two books and one article by B.C. Vickery: Faceted Classification Schemes (New Brunswick, NJ: Rutgers, 1966), Classification and Indexing in Science, 3rd ed. (London: Butterworths, 1975), and "Knowledge Representation: A Brief Review" (Journal of Documentation 42 no. 3 (September 1986): 145-159); and A.C. Foskett's "The Future of Faceted Classification" in The Future of Classification, edited by Rita Marcella and Arthur Maltby (Aldershot, England: Gower, 2000): 69-80). Nevertheless, I hope this bibliography will be useful for those both new to or familiar with faceted hypertext systems. Some very basic resources are listed, as well as some very advanced ones. Some example web sites are mentioned, but there is no detailed technical discussion of any software. The user interface to any web site is extremely important, and this is briefly mentioned in two or three places (for example the discussion of lawforwa.org (see Example Web Sites)). The larger question of how to display information graphically and with hypertext is outside the scope of this bibliography. There are five sections: Recommended, Background, Not Relevant, Example Web Sites, and Mailing Lists. Background material is either introductory, advanced, or of peripheral interest, and can be read after the Recommended resources if the reader wants to know more. The Not Relevant category contains articles that may appear in bibliographies but are not relevant for my purposes.
    Theme
    Klassifikationssysteme im Online-Retrieval
  14. Green, R.; Panzer, M.: ¬The ontological character of classes in the Dewey Decimal Classification 0.01
    0.0067605386 = product of:
      0.047323767 = sum of:
        0.03283024 = weight(_text_:frankfurt in 3530) [ClassicSimilarity], result of:
          0.03283024 = score(doc=3530,freq=2.0), product of:
            0.10213336 = queryWeight, product of:
              4.1562657 = idf(docFreq=1882, maxDocs=44218)
              0.024573348 = queryNorm
            0.32144478 = fieldWeight in 3530, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1562657 = idf(docFreq=1882, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3530)
        0.014493528 = product of:
          0.043480583 = sum of:
            0.043480583 = weight(_text_:2010 in 3530) [ClassicSimilarity], result of:
              0.043480583 = score(doc=3530,freq=2.0), product of:
                0.117538005 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.024573348 = queryNorm
                0.36992785 = fieldWeight in 3530, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3530)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Source
    Paradigms and conceptual systems in knowledge organization: Proceedings of the Eleventh International ISKO conference, Rome, 23-26 February 2010, ed. Claudio Gnoli, Indeks, Frankfurt/M.
  15. Ranganathan, S.R.: Library classification as a discipline (1957) 0.01
    0.006641868 = product of:
      0.046493076 = sum of:
        0.01171354 = weight(_text_:information in 564) [ClassicSimilarity], result of:
          0.01171354 = score(doc=564,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.27153665 = fieldWeight in 564, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=564)
        0.034779534 = weight(_text_:retrieval in 564) [ClassicSimilarity], result of:
          0.034779534 = score(doc=564,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.46789268 = fieldWeight in 564, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.109375 = fieldNorm(doc=564)
      0.14285715 = coord(2/14)
    
    Source
    Proceedings of the International Study Conference on Classification for Information Retrieval, held at Beatrice Webb House, Dorking, England, 13.-17.5.1957
  16. Slavic, A.: On the nature and typology of documentary classifications and their use in a networked environment (2007) 0.01
    0.005696636 = product of:
      0.026584301 = sum of:
        0.0050200885 = weight(_text_:information in 780) [ClassicSimilarity], result of:
          0.0050200885 = score(doc=780,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.116372846 = fieldWeight in 780, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=780)
        0.014905514 = weight(_text_:retrieval in 780) [ClassicSimilarity], result of:
          0.014905514 = score(doc=780,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.20052543 = fieldWeight in 780, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=780)
        0.006658699 = product of:
          0.019976096 = sum of:
            0.019976096 = weight(_text_:22 in 780) [ClassicSimilarity], result of:
              0.019976096 = score(doc=780,freq=2.0), product of:
                0.08605168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.024573348 = queryNorm
                0.23214069 = fieldWeight in 780, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=780)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    Networked orientated standards for vocabulary publishing and exchange and proposals for terminological services and terminology registries will improve sharing and use of all knowledge organization systems in the networked information environment. This means that documentary classifications may also become more applicable for use outside their original domain of application. The paper summarises some characteristics common to documentary classifications and explains some terminological, functional and implementation aspects. The original purpose behind each classification scheme determines the functions that the vocabulary is designed to facilitate. These functions influence the structure, semantics and syntax, scheme coverage and format in which classification data are published and made available. The author suggests that attention should be paid to the differences between documentary classifications as these may determine their suitability for a certain purpose and may impose different requirements with respect to their use online. As we speak, many classifications are being created for knowledge organization and it may be important to promote expertise from the bibliographic domain with respect to building and using classification systems.
    Date
    22.12.2007 17:22:31
    Theme
    Klassifikationssysteme im Online-Retrieval
  17. Vickery, B.C.: Relations between subject fields : problems of constructing a general classification (1957) 0.01
    0.005693029 = product of:
      0.039851204 = sum of:
        0.010040177 = weight(_text_:information in 566) [ClassicSimilarity], result of:
          0.010040177 = score(doc=566,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.23274569 = fieldWeight in 566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=566)
        0.029811028 = weight(_text_:retrieval in 566) [ClassicSimilarity], result of:
          0.029811028 = score(doc=566,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.40105087 = fieldWeight in 566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=566)
      0.14285715 = coord(2/14)
    
    Source
    Proceedings of the International Study Conference on Classification for Information Retrieval, held at Beatrice Webb House, Dorking, England, 13.-17.5.1957
  18. ¬The need for a faceted classification as the basis of all methods of information retrieval : Memorandum of the Classification Research Group (1997) 0.01
    0.005671358 = product of:
      0.039699506 = sum of:
        0.011593399 = weight(_text_:information in 562) [ClassicSimilarity], result of:
          0.011593399 = score(doc=562,freq=6.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.2687516 = fieldWeight in 562, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=562)
        0.028106106 = weight(_text_:retrieval in 562) [ClassicSimilarity], result of:
          0.028106106 = score(doc=562,freq=4.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.37811437 = fieldWeight in 562, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=562)
      0.14285715 = coord(2/14)
    
    Footnote
    Reprinted from: Proceedings of the International Study Conference on Classification for Information Retrieval, Dorking. London: Aslib 1957.
    Imprint
    The Hague : International Federation for Information and Documentation (FID)
  19. Zhang, J.; Zeng, M.L.: ¬A new similarity measure for subject hierarchical structures (2014) 0.01
    0.0056436416 = product of:
      0.026336994 = sum of:
        0.008366814 = weight(_text_:information in 1778) [ClassicSimilarity], result of:
          0.008366814 = score(doc=1778,freq=8.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.19395474 = fieldWeight in 1778, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1778)
        0.012421262 = weight(_text_:retrieval in 1778) [ClassicSimilarity], result of:
          0.012421262 = score(doc=1778,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.16710453 = fieldWeight in 1778, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1778)
        0.0055489163 = product of:
          0.016646748 = sum of:
            0.016646748 = weight(_text_:22 in 1778) [ClassicSimilarity], result of:
              0.016646748 = score(doc=1778,freq=2.0), product of:
                0.08605168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.024573348 = queryNorm
                0.19345059 = fieldWeight in 1778, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1778)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    Purpose - The purpose of this paper is to introduce a new similarity method to gauge the differences between two subject hierarchical structures. Design/methodology/approach - In the proposed similarity measure, nodes on two hierarchical structures are projected onto a two-dimensional space, respectively, and both structural similarity and subject similarity of nodes are considered in the similarity between the two hierarchical structures. The extent to which the structural similarity impacts on the similarity can be controlled by adjusting a parameter. An experiment was conducted to evaluate soundness of the measure. Eight experts whose research interests were information retrieval and information organization participated in the study. Results from the new measure were compared with results from the experts. Findings - The evaluation shows strong correlations between the results from the new method and the results from the experts. It suggests that the similarity method achieved satisfactory results. Practical implications - Hierarchical structures that are found in subject directories, taxonomies, classification systems, and other classificatory structures play an extremely important role in information organization and information representation. Measuring the similarity between two subject hierarchical structures allows an accurate overarching understanding of the degree to which the two hierarchical structures are similar. Originality/value - Both structural similarity and subject similarity of nodes were considered in the proposed similarity method, and the extent to which the structural similarity impacts on the similarity can be adjusted. In addition, a new evaluation method for a hierarchical structure similarity was presented.
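    The abstract names the ingredients of the measure (a structural component, a subject component, and a parameter controlling how much the structural part contributes) but not the formula itself. A generic weighted combination consistent with that description, with placeholder component similarities rather than the authors' actual definitions, would be:

      \mathrm{sim}(H_1, H_2) = \alpha \, \mathrm{sim}_{\mathrm{struct}}(H_1, H_2) + (1 - \alpha) \, \mathrm{sim}_{\mathrm{subj}}(H_1, H_2), \qquad 0 \le \alpha \le 1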
    Date
    8. 4.2015 16:22:13
  20. Karamuftuoglu, M.: Need for a systemic theory of classification in information science (2007) 0.01
    0.005585574 = product of:
      0.039099015 = sum of:
        0.013281905 = weight(_text_:information in 615) [ClassicSimilarity], result of:
          0.013281905 = score(doc=615,freq=14.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.3078936 = fieldWeight in 615, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=615)
        0.02581711 = weight(_text_:retrieval in 615) [ClassicSimilarity], result of:
          0.02581711 = score(doc=615,freq=6.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.34732026 = fieldWeight in 615, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=615)
      0.14285715 = coord(2/14)
    
    Abstract
    In the article, the author aims to clarify some of the issues surrounding the discussion regarding the usefulness of a substantive classification theory in information science (IS) by means of a broad perspective. By utilizing a concrete example from the High Accuracy Retrieval from Documents (HARD) track of a Text REtrieval Conference (TREC), the author suggests that the bag of words approach to information retrieval (IR) and techniques such as relevance feedback have significant limitations in expressing and resolving complex user information needs. He argues that a comprehensive analysis of information needs involves explicating often-implicit assumptions made by the authors of scholarly documents, as well as everyday texts such as news articles. He also argues that progress in IS can be furthered by developing general theories that are applicable to multiple domains. The concrete example of application of the domain-analytic approach to subject analysis in IS to the aesthetic evaluation of works of information arts is used to support this argument.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.13, S.1977-1987

Types

  • a 141
  • m 11
  • el 7
  • s 4