Search (251 results, page 1 of 13)

  • theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  1. Qin, J.: Evolving paradigms of knowledge representation and organization : a comparative study of classification, XML/DTD and ontology (2003) 0.08
    0.08131836 = product of:
      0.16263673 = sum of:
        0.0067648874 = weight(_text_:information in 2763) [ClassicSimilarity], result of:
          0.0067648874 = score(doc=2763,freq=4.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.10971737 = fieldWeight in 2763, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.010943705 = weight(_text_:for in 2763) [ClassicSimilarity], result of:
          0.010943705 = score(doc=2763,freq=8.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.16595288 = fieldWeight in 2763, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.016842863 = weight(_text_:the in 2763) [ClassicSimilarity], result of:
          0.016842863 = score(doc=2763,freq=38.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.30393726 = fieldWeight in 2763, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.016545137 = weight(_text_:of in 2763) [ClassicSimilarity], result of:
          0.016545137 = score(doc=2763,freq=38.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.30123898 = fieldWeight in 2763, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.016842863 = weight(_text_:the in 2763) [ClassicSimilarity], result of:
          0.016842863 = score(doc=2763,freq=38.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.30393726 = fieldWeight in 2763, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.094697274 = sum of:
          0.07566263 = weight(_text_:communities in 2763) [ClassicSimilarity], result of:
            0.07566263 = score(doc=2763,freq=6.0), product of:
              0.18632571 = queryWeight, product of:
                5.3049703 = idf(docFreq=596, maxDocs=44218)
                0.035122856 = queryNorm
              0.4060772 = fieldWeight in 2763, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.3049703 = idf(docFreq=596, maxDocs=44218)
                0.03125 = fieldNorm(doc=2763)
          0.019034648 = weight(_text_:22 in 2763) [ClassicSimilarity], result of:
            0.019034648 = score(doc=2763,freq=2.0), product of:
              0.12299426 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.035122856 = queryNorm
              0.15476047 = fieldWeight in 2763, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2763)
      0.5 = coord(6/12)
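The tree above is Lucene ClassicSimilarity "explain" output. Each leaf follows standard tf-idf: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf · queryNorm, fieldWeight = tf · idf · fieldNorm, and the leaf score is queryWeight · fieldWeight. A minimal sketch reproducing the `information` leaf, with the inputs taken from the tree (the formulas are generic ClassicSimilarity, not specific to this system):

```python
import math

# Inputs from the explain tree for _text_:information in doc 2763
doc_freq, max_docs = 20772, 44218
query_norm = 0.035122856
freq, field_norm = 4.0, 0.03125

idf = 1 + math.log(max_docs / (doc_freq + 1))  # ~1.7554779, as in the tree
tf = math.sqrt(freq)                           # 2.0 = tf(freq=4.0)
query_weight = idf * query_norm                # ~0.0616574
field_weight = tf * idf * field_norm           # ~0.10971737
score = query_weight * field_weight            # ~0.0067648874
```

The top-level 0.5 = coord(6/12) factor then scales the summed leaf scores by the fraction of query terms the document matched.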
    
    Abstract
The different points of view on knowledge representation and organization from various research communities reflect underlying philosophies and paradigms in these communities. This paper reviews differences and relations in knowledge representation and organization and generalizes four paradigms: integrative and disintegrative pragmatism, and integrative and disintegrative epistemologism. Examples such as classification, XML schemas, and ontologies are compared based on how they specify concepts, build data models, and encode knowledge organization structures. 1. Introduction Knowledge representation (KR) is a term that several research communities use to refer to somewhat different aspects of the same research area. The artificial intelligence (AI) community considers KR as simply "something to do with writing down, in some language or communications medium, descriptions or pictures that correspond in some salient way to the world or a state of the world" (Duce & Ringland, 1988, p. 3). It emphasizes the ways in which knowledge can be encoded in a computer program (Bench-Capon, 1990). For the library and information science (LIS) community, KR is literally a synonym of knowledge organization, i.e., KR refers to the process of organizing knowledge into classifications, thesauri, or subject heading lists. KR has another meaning in LIS: it "encompasses every type and method of indexing, abstracting, cataloguing, classification, records management, bibliography and the creation of textual or bibliographic databases for information retrieval" (Anderson, 1996, p. 336). Adding the social dimension to knowledge organization, Hjoerland (1997) states that knowledge is a part of human activities and tied to the division of labor in society, which should be the primary organization of knowledge. Knowledge organization in LIS is secondary or derived, because knowledge is organized in learned institutions and publications. 
These different points of view on KR suggest that an essential difference in the understanding of KR between AI and LIS lies in the source of representation: whether KR targets human activities or derivatives (knowledge produced) from human activities. This difference also determines their difference in purpose: in AI, KR is mainly computer-application oriented or pragmatic, and the result of representation is used to support decisions on human activities, while in LIS, KR is conceptually oriented or abstract, and the result of representation is used for access to derivatives from human activities.
    Date
    12. 9.2004 17:22:35
    Source
    Challenges in knowledge representation and organization for the 21st century: Integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference Granada, Spain, July 10-13, 2002. Ed.: M. López-Huertas
  2. Maniez, J.: Du bon usage des facettes : des classifications aux thésaurus (1999) 0.06
    
    Date
    1. 8.1996 22:01:00
    Footnote
    Translated title: The good use of facets: from classifications to thesauri
    Language
    f
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  3. Campbell, G.: A queer eye for the faceted guy : how a universal classification principle can be applied to a distinct subculture (2004) 0.05
    
    Abstract
    The results of a small qualitative study of gay and lesbian information users suggest that facet analysis as it is increasingly practised in the field of information architecture provides a promising avenue for improving information access to gay and lesbian information resources. Findings indicated that gay and lesbian information users have an acute sense of categorization grounded in the need to identify gay-positive physical and social spaces, and in their finely-honed practices of detecting gay "facets" to general information themes. They are also, however, very flexible and adaptable in their application of gay-related facet values, which suggests that browsing systems will have to be designed with considerable care.
    Content
    1. Introduction The title of this paper is taken from a TV show that has gained considerable popularity in North America: A Queer Eye for the Straight Guy, in which a group of gay men subject a helpless straight male to a complete fashion makeover. In facet analysis, this would probably be seen as an "operation upon" something, and the Bliss Bibliographic Classification would place it roughly two-thirds of the way along its facet order, after "types" and "materials," but before "space" and "time." But the link between gay communities and facet analysis extends beyond the facetious title. As Web-based information resources for gay and lesbian users continue to grow, Web sites that cater to, or at least refrain from discriminating against, gay and lesbian users are faced with a daunting challenge when trying to organize these diverse resources in a way that facilitates congenial browsing. And principles of faceted classification, with their emphasis on clear and consistent principles of subdivision and their care in defining the order of subdivisions, offer an important opportunity to use time-honoured classification principles to serve the growing needs of these communities. If faceted organization schemes are to work, however, we need to know more about gay and lesbian users, and how they categorize themselves and their information sources. This paper presents the results of an effort to learn more.
    Source
    Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine
  4. Svenonius, E.: Facets as semantic categories (1979) 0.05
    
    Abstract
    The paper looks at the semantic and syntactic components of facet definition. In synthetic classificatory languages, primitive terms are categorized into facets; facet information, then, is used in stating the syntactic rules for combining primitive terms into the acceptable (well-formed) complex expressions in the language. In other words, the structure of a synthetic classificatory language can be defined in terms of the facets recognized in the language and the syntactic rules employed by the language. Thus, facets are the "grammatical categories" of classificatory languages, and their definition is the first step in formulating structural descriptions of such languages. As well, the study of how facets are defined can give some insight into how language is used to embody information.
    Source
    Klassifikation und Erkenntnis II. Proc. der Plenarvorträge und der Sektion 2 u. 3 "Wissensdarstellung und Wissensvermittlung" der 3. Fachtagung der Gesellschaft für Klassifikation, Königstein/Ts., 5.-6.4.1979
  5. Zhang, J.; Zeng, M.L.: A new similarity measure for subject hierarchical structures (2014) 0.05
    
    Abstract
    Purpose - The purpose of this paper is to introduce a new similarity method to gauge the differences between two subject hierarchical structures. Design/methodology/approach - In the proposed similarity measure, nodes on two hierarchical structures are projected onto a two-dimensional space, respectively, and both structural similarity and subject similarity of nodes are considered in the similarity between the two hierarchical structures. The extent to which the structural similarity impacts on the similarity can be controlled by adjusting a parameter. An experiment was conducted to evaluate the soundness of the measure. Eight experts whose research interests were information retrieval and information organization participated in the study. Results from the new measure were compared with results from the experts. Findings - The evaluation shows strong correlations between the results from the new method and the results from the experts. It suggests that the similarity method achieved satisfactory results. Practical implications - Hierarchical structures that are found in subject directories, taxonomies, classification systems, and other classificatory structures play an extremely important role in information organization and information representation. Measuring the similarity between two subject hierarchical structures allows an accurate overarching understanding of the degree to which the two hierarchical structures are similar. Originality/value - Both structural similarity and subject similarity of nodes were considered in the proposed similarity method, and the extent to which the structural similarity impacts on the similarity can be adjusted. In addition, a new evaluation method for a hierarchical structure similarity was presented.
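The abstract describes a weighted blend of a structural component (from projected node positions) and a subject component, tuned by one parameter. The sketch below illustrates only that general shape; the projection scheme, the function names, and the default `alpha` are assumptions for illustration, not the paper's actual formulas:

```python
import math

def node_similarity(pos_a, pos_b, subj_sim, alpha=0.5):
    """Blend structural and subject similarity for a pair of nodes.

    pos_a, pos_b: (x, y) positions after projecting each hierarchy onto
    the unit square (the projection itself is not specified here).
    subj_sim: subject similarity of the two nodes, in [0, 1].
    alpha: weight of the structural component -- the paper's adjustable
    parameter; 0.5 is an arbitrary default, not the paper's value.
    """
    max_dist = math.sqrt(2)                 # diagonal of the unit square
    dist = math.dist(pos_a, pos_b)
    structural = 1 - dist / max_dist        # closer projections -> more similar
    return alpha * structural + (1 - alpha) * subj_sim
```

Averaging such node-pair scores over matched nodes would then give one overall similarity for the two hierarchies, with `alpha` controlling how much the structural component dominates.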
    Date
    8. 4.2015 16:22:13
    Source
    Journal of documentation. 70(2014) no.3, S.364-391
  6. Jacob, E.K.: Proposal for a classification of classifications built on Beghtol's distinction between "Naïve Classification" and "Professional Classification" (2010) 0.05
    
    Abstract
    Argues that Beghtol's (2003) use of the terms "naive classification" and "professional classification" is valid because they are nominal definitions, and that the distinction between these two types of classification points up the need for researchers in knowledge organization to broaden their scope beyond traditional classification systems intended for information retrieval. Argues that work by Beghtol (2003), Kwasnik (1999) and Bailey (1994) offers direction for the development of a classification of classifications based on the pragmatic dimensions of extant classification systems. Refers to: Beghtol, C.: Naïve classification systems and the global information society. In: Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine. Würzburg: Ergon Verlag 2004. S.19-22. (Advances in knowledge organization; vol.9)
    Content
    Contribution in a special issue: A Festschrift for Clare Beghtol
  7. Szostak, R.: Classifying science : phenomena, data, theory, method, practice (2004) 0.05
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.34689236 = fieldWeight in 325, product of:
              9.380832 = tf(freq=88.0), with freq of:
                88.0 = termFreq=88.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0234375 = fieldNorm(doc=325)
        0.016381439 = product of:
          0.032762878 = sum of:
            0.032762878 = weight(_text_:communities in 325) [ClassicSimilarity], result of:
              0.032762878 = score(doc=325,freq=2.0), product of:
                0.18632571 = queryWeight, product of:
                  5.3049703 = idf(docFreq=596, maxDocs=44218)
                  0.035122856 = queryNorm
                0.17583658 = fieldWeight in 325, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3049703 = idf(docFreq=596, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=325)
          0.5 = coord(1/2)
      0.5 = coord(6/12)
    
    Abstract
    Classification is the essential first step in science. The study of science, as well as the practice of science, will thus benefit from a detailed classification of different types of science. In this book, science - defined broadly to include the social sciences and humanities - is first unpacked into its constituent elements: the phenomena studied, the data used, the theories employed, the methods applied, and the practices of scientists. These five elements are then classified in turn. Notably, the classifications of both theory types and methods allow the key strengths and weaknesses of different theories and methods to be readily discerned and compared. Connections across classifications are explored: should certain theories or phenomena be investigated only with certain methods? What is the proper function and form of scientific paradigms? Are certain common errors and biases in scientific practice associated with particular phenomena, data, theories, or methods? The classifications point to several ways of improving both specialized and interdisciplinary research and teaching, and especially of enhancing communication across communities of scholars. The classifications also support a superior system of document classification that would allow searches by theory and method used as well as causal links investigated.
    Content
     Contents: - Chapter 1: Classifying Science: 1.1. A Simple Classificatory Guideline - 1.2. The First "Cut" (and Plan of Work) - 1.3. Some Preliminaries - Chapter 2: Classifying Phenomena and Data: 2.1. Classifying Phenomena - 2.2. Classifying Data - Chapter 3: Classifying Theory: 3.1. Typology of Theory - 3.2. What Is a Theory? - 3.3. Evaluating Theories - 3.4. Types of Theory and the Five Types of Causation - 3.5. Classifying Individual Theories - 3.6. Advantages of a Typology of Theory - Chapter 4: Classifying Method: 4.1. Classifying Methods - 4.2. Typology of Strengths and Weaknesses of Methods - 4.3. Qualitative Versus Quantitative Analysis Revisited - 4.4. Evaluating Methods - 4.5. Classifying Particular Methods Within The Typology - 4.6. Advantages of a Typology of Methods - Chapter 5: Classifying Practice: 5.1. Errors and Biases in Science - 5.2. Typology of (Critiques of) Scientific Practice - 5.3. Utilizing This Classification - 5.4. The Five Types of Ethical Analysis - Chapter 6: Drawing Connections Across These Classifications: 6.1. Theory and Method - 6.2. Theory (Method) and Phenomena (Data) - 6.3. Better Paradigms - 6.4. Critiques of Scientific Practice: Are They Correlated with Other Classifications? - Chapter 7: Classifying Scientific Documents: 7.1. Faceted or Enumerative? - 7.2. Classifying By Phenomena Studied - 7.3. Classifying By Theory Used - 7.4. Classifying By Method Used - 7.5 Links Among Subjects - 7.6. Type of Work, Language, and More - 7.7. Critiques of Scientific Practice - 7.8. Classifying Philosophy - 7.9. Evaluating the System - Chapter 8: Concluding Remarks: 8.1. The Classifications - 8.2. Advantages of These Various Classifications - 8.3. Drawing Connections Across Classifications - 8.4. Golden Mean Arguments - 8.5. Why Should Science Be Believed? - 8.6. How Can Science Be Improved? - 8.7. How Should Science Be Taught?
    Footnote
     Review in: KO 32(2005) no.2, S.93-95 (H. Albrechtsen): "The book deals with mapping of the structures and contents of sciences, defined broadly to include the social sciences and the humanities. According to the author, the study of science, as well as the practice of science, could benefit from a detailed classification of different types of science. The book defines five universal constituents of the sciences: phenomena, data, theories, methods and practice. For each of these constituents, the author poses five questions, in the well-known 5W format: Who, What, Where, When, Why? - with the addition of the question How? (Szostak 2003). Two objectives of the author's endeavor stand out: 1) decision support for university curriculum development across disciplines and decision support for university students at advanced levels of education in selection of appropriate courses for their projects and to support cross-disciplinary inquiry for researchers and students; 2) decision support for researchers and students in scientific inquiry across disciplines, methods and theories. The main prospective audience of this book is university curriculum developers, university students and researchers, in that order of priority. The heart of the book is the chapters unfolding the author's ideas about how to classify phenomena and data, theory, method and practice, by use of the 5W inquiry model. . . .
     Despite its methodological flaws and lack of empirical foundation, the book could potentially bring new ideas to current discussions within the practices of curriculum development and knowledge management as well as design of information systems, and classification schemes as tools for knowledge sharing, decision-making and knowledge exploration. I hesitate to recommend the book to students, except to students at advanced levels of study, because of its biased presentation of the new ideas and its basis on secondary literature."
    LCSH
    Classification of sciences
    Series
    Information Science & Knowledge Management ; 7
    Subject
    Classification of sciences
  8. Dousa, T.M.: Categories and the architectonics of system in Julius Otto Kaiser's method of systematic indexing (2014) 0.05
    0.04558039 = product of:
      0.09116078 = sum of:
        0.005979372 = weight(_text_:information in 1418) [ClassicSimilarity], result of:
          0.005979372 = score(doc=1418,freq=2.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.09697737 = fieldWeight in 1418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1418)
        0.006839816 = weight(_text_:for in 1418) [ClassicSimilarity], result of:
          0.006839816 = score(doc=1418,freq=2.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.103720546 = fieldWeight in 1418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1418)
        0.021600515 = weight(_text_:the in 1418) [ClassicSimilarity], result of:
          0.021600515 = score(doc=1418,freq=40.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.3897913 = fieldWeight in 1418, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1418)
        0.023243912 = weight(_text_:of in 1418) [ClassicSimilarity], result of:
          0.023243912 = score(doc=1418,freq=48.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.42320424 = fieldWeight in 1418, product of:
              6.928203 = tf(freq=48.0), with freq of:
                48.0 = termFreq=48.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1418)
        0.021600515 = weight(_text_:the in 1418) [ClassicSimilarity], result of:
          0.021600515 = score(doc=1418,freq=40.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.3897913 = fieldWeight in 1418, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1418)
        0.011896656 = product of:
          0.023793312 = sum of:
            0.023793312 = weight(_text_:22 in 1418) [ClassicSimilarity], result of:
              0.023793312 = score(doc=1418,freq=2.0), product of:
                0.12299426 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035122856 = queryNorm
                0.19345059 = fieldWeight in 1418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1418)
          0.5 = coord(1/2)
      0.5 = coord(6/12)
    
    Abstract
     Categories, or concepts of high generality representing the most basic kinds of entities in the world, have long been understood to be a fundamental element in the construction of knowledge organization systems (KOSs), particularly faceted ones. Commentators on facet analysis have tended to foreground the role of categories in the structuring of controlled vocabularies and the construction of compound index terms, and the implications of this for subject representation and information retrieval. Less attention has been paid to the variety of ways in which categories can shape the overall architectonic framework of a KOS. This case study explores the range of functions that categories took in structuring various aspects of an early analytico-synthetic KOS, Julius Otto Kaiser's method of Systematic Indexing (SI). Within SI, categories not only functioned as mechanisms to partition an index vocabulary into smaller groupings of terms and as elements in the construction of compound index terms but also served as means of defining the units of indexing, or index items, incorporated into an index; determining the organization of card index files and the articulation of the guide card system serving as navigational aids thereto; and setting structural constraints to the establishment of cross-references between terms. In all these ways, Kaiser's system of categories contributed to the general systematicity of SI.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  9. Olson, H.A.: Wind and rain and dark of night : classification in scientific discourse communities (2008) 0.05
    0.045182753 = product of:
      0.10843861 = sum of:
        0.014216291 = weight(_text_:for in 2270) [ClassicSimilarity], result of:
          0.014216291 = score(doc=2270,freq=6.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.21557912 = fieldWeight in 2270, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=2270)
        0.020078024 = weight(_text_:the in 2270) [ClassicSimilarity], result of:
          0.020078024 = score(doc=2270,freq=24.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.36231726 = fieldWeight in 2270, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.046875 = fieldNorm(doc=2270)
        0.021303395 = weight(_text_:of in 2270) [ClassicSimilarity], result of:
          0.021303395 = score(doc=2270,freq=28.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.38787308 = fieldWeight in 2270, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2270)
        0.020078024 = weight(_text_:the in 2270) [ClassicSimilarity], result of:
          0.020078024 = score(doc=2270,freq=24.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.36231726 = fieldWeight in 2270, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.046875 = fieldNorm(doc=2270)
        0.032762878 = product of:
          0.065525755 = sum of:
            0.065525755 = weight(_text_:communities in 2270) [ClassicSimilarity], result of:
              0.065525755 = score(doc=2270,freq=2.0), product of:
                0.18632571 = queryWeight, product of:
                  5.3049703 = idf(docFreq=596, maxDocs=44218)
                  0.035122856 = queryNorm
                0.35167316 = fieldWeight in 2270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3049703 = idf(docFreq=596, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2270)
          0.5 = coord(1/2)
      0.41666666 = coord(5/12)
    
    Content
     Classifications of natural phenomena demonstrate the applicability of discourse analysis in finding the importance of concepts such as warrant for categorization and classification. Temperature scales provide a body of official literature for close consideration. Official documents of the International Bureau of Weights and Measures (BIPM) reveal the reasoning behind choices affecting these standards. A more cursory scrutiny of the Saffir-Simpson Scale through scholarly publications and documentation from the National Institute of Standards and Technology (NIST) indicates the potential of this form of analysis. The same holds true for an examination of the definition of what is a planet as determined by the International Astronomical Union. As Sayers, Richardson, and Bliss have indicated, there seem to be principles and a reliance on context that bridge the differences between natural and artificial, scientific and bibliographic classifications.
    Source
    Culture and identity in knowledge organization: Proceedings of the Tenth International ISKO Conference 5-8 August 2008, Montreal, Canada. Ed. by Clément Arsenault and Joseph T. Tennis
  10. Molholt, P.: Qualities of classification schemes for the Information Superhighway (1995) 0.04
    0.044761945 = product of:
      0.08952389 = sum of:
        0.008456109 = weight(_text_:information in 5562) [ClassicSimilarity], result of:
          0.008456109 = score(doc=5562,freq=4.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.13714671 = fieldWeight in 5562, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5562)
        0.01675406 = weight(_text_:for in 5562) [ClassicSimilarity], result of:
          0.01675406 = score(doc=5562,freq=12.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.2540624 = fieldWeight in 5562, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5562)
        0.018706596 = weight(_text_:the in 5562) [ClassicSimilarity], result of:
          0.018706596 = score(doc=5562,freq=30.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.33756918 = fieldWeight in 5562, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5562)
        0.015003879 = weight(_text_:of in 5562) [ClassicSimilarity], result of:
          0.015003879 = score(doc=5562,freq=20.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.27317715 = fieldWeight in 5562, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5562)
        0.018706596 = weight(_text_:the in 5562) [ClassicSimilarity], result of:
          0.018706596 = score(doc=5562,freq=30.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.33756918 = fieldWeight in 5562, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5562)
        0.011896656 = product of:
          0.023793312 = sum of:
            0.023793312 = weight(_text_:22 in 5562) [ClassicSimilarity], result of:
              0.023793312 = score(doc=5562,freq=2.0), product of:
                0.12299426 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035122856 = queryNorm
                0.19345059 = fieldWeight in 5562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5562)
          0.5 = coord(1/2)
      0.5 = coord(6/12)
    
    Abstract
     For my segment of this program I'd like to focus on some basic qualities of classification schemes. These qualities are critical to our ability to truly organize knowledge for access. As I see it, there are at least five qualities of note. The first one of these properties that I want to talk about is "authoritative." By this I mean standardized, but I mean more than standardized, with a built-in consensus-building process. A classification scheme constructed by a collaborative, consensus-building process carries the approval, and the authority, of the discipline groups that contribute to it and that it affects... The next property of classification systems is "expandable," living, responsive, with a clear locus of responsibility for its continuous upkeep. The worst thing you can do with a thesaurus, or a classification scheme, is to finish it. You can't ever finish it because it reflects ongoing intellectual activity... The third property is "intuitive." That is, the system has to be approachable, it has to be transparent, or at least capable of being transparent. It has to have an underlying logic that supports the classification scheme but doesn't dominate it... The fourth property is "organized and logical." I advocate very strongly, and agree with Lois Chan, that classification must be based on a rule-based structure, on somebody's world-view of the syndetic structure... The fifth property is "universal," by which I mean the classification scheme needs to be usable by any specific system or application, and be available as a language for multiple purposes.
    Footnote
    Paper presented at the 36th Allerton Institute, 23-25 Oct 94, Allerton Park, Monticello, IL: "New Roles for Classification in Libraries and Information Networks: Presentation and Reports"
    Source
    Cataloging and classification quarterly. 21(1995) no.2, S.19-22
  11. Connaway, L.S.; Sievert, M.C.: Comparison of three classification systems for information on health insurance (1996) 0.04
    0.044644516 = product of:
      0.08928903 = sum of:
        0.009566996 = weight(_text_:information in 7242) [ClassicSimilarity], result of:
          0.009566996 = score(doc=7242,freq=2.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.1551638 = fieldWeight in 7242, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=7242)
        0.010943705 = weight(_text_:for in 7242) [ClassicSimilarity], result of:
          0.010943705 = score(doc=7242,freq=2.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.16595288 = fieldWeight in 7242, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=7242)
        0.017280413 = weight(_text_:the in 7242) [ClassicSimilarity], result of:
          0.017280413 = score(doc=7242,freq=10.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.31183305 = fieldWeight in 7242, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0625 = fieldNorm(doc=7242)
        0.015182858 = weight(_text_:of in 7242) [ClassicSimilarity], result of:
          0.015182858 = score(doc=7242,freq=8.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.27643585 = fieldWeight in 7242, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=7242)
        0.017280413 = weight(_text_:the in 7242) [ClassicSimilarity], result of:
          0.017280413 = score(doc=7242,freq=10.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.31183305 = fieldWeight in 7242, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0625 = fieldNorm(doc=7242)
        0.019034648 = product of:
          0.038069297 = sum of:
            0.038069297 = weight(_text_:22 in 7242) [ClassicSimilarity], result of:
              0.038069297 = score(doc=7242,freq=2.0), product of:
                0.12299426 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035122856 = queryNorm
                0.30952093 = fieldWeight in 7242, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7242)
          0.5 = coord(1/2)
      0.5 = coord(6/12)
    
    Abstract
     Reports results of a comparative study of 3 classification schemes: LCC, DDC and NLM Classification to determine their effectiveness in classifying materials on health insurance. Examined 2 hypotheses: that there would be no differences in the scatter of the 3 classification schemes; and that there would be overlap between all 3 schemes but no difference in the classes into which the subject was placed. There was subject scatter in all 3 classification schemes and little overlap between the 3 systems
    Date
    22. 4.1997 21:10:19
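Each entry's headline figure combines the per-clause weights listed in its explanation tree: the matched clauses are summed and scaled by the coord factor (the fraction of query clauses that matched). As an illustrative sketch, not the database's own code, the figures from entry 11 above combine as follows:

```python
# Entry 11 above: six of twelve query clauses matched -> coord(6/12) = 0.5
clause_scores = [
    0.009566996,  # information
    0.010943705,  # for
    0.017280413,  # the
    0.015182858,  # of
    0.017280413,  # the (second field clause)
    0.019034648,  # 22 (already scaled by its inner coord(1/2))
]
coord = 6 / 12
total = coord * sum(clause_scores)  # ~0.044644516, the entry's headline score
```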
  12. Wang, Z.; Chaudhry, A.S.; Khoo, C.S.G.: Using classification schemes and thesauri to build an organizational taxonomy for organizing content and aiding navigation (2008) 0.04
    0.04456541 = product of:
      0.08913082 = sum of:
        0.012655946 = weight(_text_:information in 2346) [ClassicSimilarity], result of:
          0.012655946 = score(doc=2346,freq=14.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.20526241 = fieldWeight in 2346, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=2346)
        0.010943705 = weight(_text_:for in 2346) [ClassicSimilarity], result of:
          0.010943705 = score(doc=2346,freq=8.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.16595288 = fieldWeight in 2346, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.03125 = fieldNorm(doc=2346)
        0.021164097 = weight(_text_:the in 2346) [ClassicSimilarity], result of:
          0.021164097 = score(doc=2346,freq=60.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.38191593 = fieldWeight in 2346, product of:
              7.745967 = tf(freq=60.0), with freq of:
                60.0 = termFreq=60.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.03125 = fieldNorm(doc=2346)
        0.013685644 = weight(_text_:of in 2346) [ClassicSimilarity], result of:
          0.013685644 = score(doc=2346,freq=26.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.2491759 = fieldWeight in 2346, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=2346)
        0.021164097 = weight(_text_:the in 2346) [ClassicSimilarity], result of:
          0.021164097 = score(doc=2346,freq=60.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.38191593 = fieldWeight in 2346, product of:
              7.745967 = tf(freq=60.0), with freq of:
                60.0 = termFreq=60.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.03125 = fieldNorm(doc=2346)
        0.009517324 = product of:
          0.019034648 = sum of:
            0.019034648 = weight(_text_:22 in 2346) [ClassicSimilarity], result of:
              0.019034648 = score(doc=2346,freq=2.0), product of:
                0.12299426 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035122856 = queryNorm
                0.15476047 = fieldWeight in 2346, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2346)
          0.5 = coord(1/2)
      0.5 = coord(6/12)
    
    Abstract
     Purpose - Potential and benefits of classification schemes and thesauri in building organizational taxonomies cannot be fully utilized by organizations. Empirical data on building an organizational taxonomy by the top-down approach of using classification schemes and thesauri appear to be lacking. The paper seeks to make a contribution in this regard. Design/methodology/approach - A case study of building an organizational taxonomy was conducted in the information studies domain for the Division of Information Studies at Nanyang Technological University, Singapore. The taxonomy was built by using the Dewey Decimal Classification, the Information Science Taxonomy, two information systems taxonomies, and three thesauri (ASIS&T, LISA, and ERIC). Findings - Classification schemes and thesauri were found to be helpful in creating the structure and categories related to the subject facet of the taxonomy, but organizational community sources had to be consulted and several methods had to be employed. The organizational activities and stakeholders' needs had to be identified to determine the objectives, facets, and the subject coverage of the taxonomy. Main categories were determined by identifying the stakeholders' interests and consulting organizational community sources and domain taxonomies. Category terms were selected from terminologies of classification schemes, domain taxonomies, and thesauri against the stakeholders' interests. Hierarchical structures of the main categories were constructed in line with the stakeholders' perspectives and the navigational role, taking advantage of structures/term relationships from classification schemes and thesauri. Categories were determined in line with the concepts and the hierarchical levels. The format of categories was made uniform according to a commonly used standard. The consistency principle was employed to make the taxonomy structure and categories neater.
     Validation of the draft taxonomy through consultations with the stakeholders further refined the taxonomy. Originality/value - No similar study could be traced in the literature. The steps and methods used in the taxonomy development, and the information studies taxonomy itself, will be helpful for library and information schools and other similar organizations in their effort to develop taxonomies for organizing content and aiding navigation on organizational sites.
    Date
    7.11.2008 15:22:04
    Source
    Journal of documentation. 64(2008) no.6, S.842-876
    Theme
    Information Resources Management
  13. Beghtol, C.: Response to Hjoerland and Nicolaisen (2004) 0.04
    Abstract
    I am writing to correct some of the misconceptions that Hjoerland and Nicolaisen appear to have about my paper in the previous issue of Knowledge Organization. I would like to address aspects of two of these misapprehensions. The first is the faulty interpretation they have given to my use of the term "naïve classification," and the second is the kinds of classification systems that they appear to believe are discussed in my paper as examples of "naïve classifications." First, the term "naïve classification" is directly analogous to the widely-understood and widely-accepted term "naïve indexing." It is not analogous to the terms to which Hjoerland and Nicolaisen compare it (i.e., "naïve physics", "naïve biology"). The term as I have defined it is not pejorative. It does not imply that the scholars who have developed naïve classifications have not given profoundly serious thought to their own scholarly work. My paper distinguishes between classifications for new knowledge developed by scholars in the various disciplines for the purposes of advancing disciplinary knowledge ("naïve classifications") and classifications for previously existing knowledge developed by information professionals for the purposes of creating access points in information retrieval systems ("professional classifications"). This distinction rests primarily on the purpose of the kind of classification system in question and only secondarily on the knowledge base of the scholars who have created it. Hjoerland and Nicolaisen appear to have misunderstood this point, which is made clearly and adequately in the title, in the abstract and throughout the text of my paper.
    Second, the paper posits that these different reasons for creating classification systems strongly influence the content and extent of the two kinds of classifications, but not necessarily their structures. By definition, naïve classifications for new knowledge have been developed for discrete areas of disciplinary inquiry in new areas of knowledge. These classifications do not attempt to classify the whole of that disciplinary area. That is, naïve classifications have an explicit purpose that is significantly different from the purpose of the major disciplinary classifications Hjoerland and Nicolaisen provide as examples of classifications they think I discuss under the rubric of "naïve classifications" (e.g., classifications for the entire field of archaeology, biology, linguistics, music, psychology, etc.). My paper is not concerned with these important classifications for major disciplinary areas. Instead, it is concerned solely and specifically with scholarly classifications for small areas of new knowledge within these major disciplines (e.g., cloth of aresta, double harpsichords, child-rearing practices, anomalous phenomena, etc.). Thus, I have nowhere suggested or implied that the broad disciplinary classifications mentioned by Hjoerland and Nicolaisen are appropriately categorized as "naïve classifications." For example, I have not associated the Periodic System of the Elements with naïve classifications, as Hjoerland and Nicolaisen state that I have done. Indeed, broad classifications of this type fall well outside the definition of naïve classifications set out in my paper. In this case, too, I believe that Hjoerland and Nicolaisen have misunderstood an important point in my paper. I agree with a number of points made in Hjoerland and Nicolaisen's paper. In particular, I agree that researchers in the knowledge organization field should adhere to the highest standards of scholarly and scientific precision.
For that reason, I am glad to have had the opportunity to respond to their paper.
    Footnote
    In response to: Hjoerland, B., J. Nicolaisen: Scientific and scholarly classifications are not "naïve": a comment to Beghtol (2003). In: Knowledge organization. 31(2004) no.1, S.55-61. - Cf. the reply by Nicolaisen and Hjoerland in KO 31(2004) no.3, S.199-201.
  14. Beghtol, C.: Naïve classification systems and the global information society (2004) 0.04
    Abstract
    Classification is an activity that transcends time and space and that bridges the divisions between different languages and cultures, including the divisions between academic disciplines. Classificatory activity, however, serves different purposes in different situations. Classifications for information retrieval can be called "professional" classifications and classifications in other fields can be called "naïve" classifications because they are developed by people who have no particular interest in classificatory issues. The general purpose of naïve classification systems is to discover new knowledge. In contrast, the general purpose of information retrieval classifications is to classify pre-existing knowledge. Different classificatory purposes may thus inform systems that are intended to span the cultural specifics of the globalized information society. This paper builds on previous research into the purposes and characteristics of naïve classifications. It describes some of the relationships between the purpose and context of a naïve classification, the units of analysis used in it, and the theory that the context and the units of analysis imply.
    Footnote
    Cf.: Jacob, E.K.: Proposal for a classification of classifications built on Beghtol's distinction between "Naïve Classification" and "Professional Classification". In: Knowledge organization. 37(2010) no.2, S.111-120.
    Pages
    S.19-22
    Source
    Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine
  15. Fairthorne, R.A.: Temporal structure in bibliographic classification (1985) 0.04
    Abstract
    This paper, presented at the Ottawa Conference on the Conceptual Basis of the Classification of Knowledge in 1971, is one of Fairthorne's more perceptive works and deserves a wide audience, especially as it breaks new ground in classification theory. In discussing the notion of discourse, he makes a "distinction between what discourse mentions and what discourse is about" [emphasis added], considered as a "fundamental factor to the relativistic nature of bibliographic classification" (p. 360). A table of mathematical functions, for example, describes exactly something represented by a collection of digits, but, without a preface, this table does not fit into a broader context. Some indication of the author's intent is needed to fit the table into a broader context. This intent may appear in a title, chapter heading, class number or some other aid. Discourse on and discourse about something "cannot be determined solely from what it mentions" (p. 361). Some kind of background is needed. Fairthorne further develops the theme that knowledge about a subject comes from previous knowledge, thus adding a temporal factor to classification. "Some extra textual criteria are needed" in order to classify (p. 362). For example, "documents that mention the same things, but are on different topics, will have different ancestors, in the sense of preceding documents to which they are linked by various bibliographic characteristics ... [and] ... they will have different descendants" (p. 363). The classifier has to distinguish between documents that "mention exactly the same thing" but are not about the same thing. The classifier does this by classifying "sets of documents that form their histories, their bibliographic world lines" (p. 363). The practice of citation is one method of performing the linking and presents a "fan" of documents connected by a chain of citations to past work.
The fan is seen as the effect of generations of documents, each generation connected to the previous one, and all ancestral to the present document. Thus, there are levels in temporal structure (that is, antecedent and successor documents), and these require that documents be identified in relation to other documents. This gives a set of documents an "irrevocable order," a loose order which Fairthorne calls "bibliographic time," and which is "generated by the fact of continual growth" (p. 364). He does not consider "bibliographic time" to be an equivalent to physical time because bibliographic events, as part of communication, require delay. Sets of documents, as indicated above, rather than single works, are used in classification. While an event, a person, or a unique feature of the environment may create a class of one (such as the French Revolution, Napoleon, or Niagara Falls), revolutions, emperors, and waterfalls are sets which, as sets, will subsume individuals and make normal classes.
    The fan of past documents may be seen across time as a philosophical "wake," translated documents as a sideways relationship and future documents as another fan spreading forward from a given document (p. 365). The "overlap of reading histories can be used to detect common interests among readers," (p. 365) and readers may be classified accordingly. Finally, Fairthorne rejects the notion of a "general" classification, which he regards as a mirage, to be replaced by a citation-type network to identify classes. An interesting feature of his work lies in his linkage between old and new documents via bibliographic methods (citations, authors' names, imprints, style, and vocabulary) rather than topical (subject) terms. This is an indirect method of creating classes. The subject (aboutness) is conceived as a finite, common sharing of knowledge over time (past, present, and future) as opposed to the more common hierarchy of topics in an infinite schema assumed to be universally useful. Fairthorne, a mathematician by training, was a prolific writer on the foundations of classification and information. His professional career includes work with the Royal Engineers Chemical Warfare Section and the Royal Aircraft Establishment (RAE). He was the founder of the Computing Unit which became the RAE Mathematics Department.
    Footnote
    Original in: Ottawa Conference on the Conceptual Basis of the Classification of Knowledge, Ottawa, 1971. Ed.: Jerzy A. Wojciechowski. Pullach: Verlag Dokumentation 1974. S.404-412.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
  16. Slavic, A.: On the nature and typology of documentary classifications and their use in a networked environment (2007) 0.04
    Abstract
    Network-oriented standards for vocabulary publishing and exchange, together with proposals for terminological services and terminology registries, will improve the sharing and use of all knowledge organization systems in the networked information environment. This means that documentary classifications may also become more applicable for use outside their original domain of application. The paper summarises some characteristics common to documentary classifications and explains some terminological, functional and implementation aspects. The original purpose behind each classification scheme determines the functions that the vocabulary is designed to facilitate. These functions influence the structure, semantics and syntax, scheme coverage and format in which classification data are published and made available. The author suggests that attention should be paid to the differences between documentary classifications, as these may determine their suitability for a certain purpose and may impose different requirements with respect to their use online. As many classifications are currently being created for knowledge organization, it may be important to promote expertise from the bibliographic domain with respect to building and using classification systems.
    Date
    22.12.2007 17:22:31
  17. Kaula, P.N.: Canons in analytico-synthetic classification (1979) 0.04
    Abstract
    Presentation of the rules (canons) which S.R. Ranganathan laid down for the three planes of work (the idea plane, the verbal plane, and the notational plane), with an explanation of each of these 34 canons, indispensable tools for the establishment of any classification system. An overall survey of the canons is given.
    Source
    Klassifikation und Erkenntnis II. Proc. der Plenarvorträge und der Sektion 2 u. 3 "Wissensdarstellung und Wissensvermittlung" der 3. Fachtagung der Gesellschaft für Klassifikation, Königstein/Ts., 5.-6.4.1979
  18. Tennis, J.T.: ¬The strange case of eugenics : a subject's ontogeny in a long-lived classification scheme and the question of collocative integrity (2012) 0.04
    Abstract
    This article introduces the problem of collocative integrity present in long-lived classification schemes that undergo several changes. A case study of the subject "eugenics" in the Dewey Decimal Classification is presented to illustrate this phenomenon. Eugenics is strange because of the kinds of changes it undergoes. The article closes with a discussion of subject ontogeny as the name for this phenomenon and describes implications for information searching and browsing.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.7, S.1350-1359
  19. Gnoli, C.; Ledl, A.; Park, Z.; Trzmielewski, M.: Phenomenon-based vs. disciplinary classification : possibilities for evaluating and for mapping (2018)
    
    Source
    Challenges and opportunities for knowledge organization in the digital age: proceedings of the Fifteenth International ISKO Conference, 9-11 July 2018, Porto, Portugal / organized by: International Society for Knowledge Organization (ISKO), ISKO Spain and Portugal Chapter, University of Porto - Faculty of Arts and Humanities, Research Centre in Communication, Information and Digital Culture (CIC.digital) - Porto. Eds.: F. Ribeiro u. M.E. Cerveira
  20. Dousa, T.M.; Ibekwe-SanJuan, F.: Epistemological and methodological eclecticism in the construction of knowledge organization systems (KOSs) : the case of analytico-synthetic KOSs (2014)
    
    Abstract
    In recent years, Hjørland has developed a typology of basic epistemological approaches to KO that identifies four basic positions - empiricism, rationalism, historicism/hermeneutics, and pragmatism -with which to characterize the epistemological bases and methodological orientation of KOSs. Although scholars of KO have noted that the design of a single KOS may incorporate epistemological-methodological features from more than one of these approaches, studies of concrete examples of epistemologico-methodological eclecticism have been rare. In this paper, we consider the phenomenon of epistemologico-methodological eclecticism in one theoretically significant family of KOSs - namely analytico-synthetic, or faceted, KOSs - by examining two cases - Julius Otto Kaiser's method of Systematic Indexing (SI) and Brian Vickery's method of facet analysis (FA) for document classification. We show that both of these systems combined classical features of rationalism with elements of empiricism and pragmatism and argue that such eclecticism is the norm, rather than the exception, for such KOSs in general.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
