Search (38 results, page 1 of 2)

  • × theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  • × type_ss:"a"
  1. Slavic, A.: On the nature and typology of documentary classifications and their use in a networked environment (2007) 0.04
    
    Abstract
    Network-oriented standards for vocabulary publishing and exchange, together with proposals for terminological services and terminology registries, will improve the sharing and use of all knowledge organization systems in the networked information environment. This means that documentary classifications may also become more applicable for use outside their original domain of application. The paper summarises some characteristics common to documentary classifications and explains some terminological, functional and implementation aspects. The original purpose behind each classification scheme determines the functions that the vocabulary is designed to facilitate. These functions influence the structure, semantics and syntax, scheme coverage and the format in which classification data are published and made available. The author suggests that attention should be paid to the differences between documentary classifications, as these may determine their suitability for a certain purpose and may impose different requirements with respect to their use online. Many classifications are currently being created for knowledge organization, and it may be important to promote expertise from the bibliographic domain with respect to building and using classification systems.
    Date
    22.12.2007 17:22:31
  2. Qin, J.: Evolving paradigms of knowledge representation and organization : a comparative study of classification, XML/DTD and ontology (2003) 0.03
    
    Abstract
    The different points of view on knowledge representation and organization from various research communities reflect the underlying philosophies and paradigms in those communities. This paper reviews differences and relations in knowledge representation and organization and generalizes four paradigms: integrative and disintegrative pragmatism, and integrative and disintegrative epistemologism. Examples such as classification, XML schemas, and ontologies are compared based on how they specify concepts, build data models, and encode knowledge organization structures. 1. Introduction Knowledge representation (KR) is a term that several research communities use to refer to somewhat different aspects of the same research area. The artificial intelligence (AI) community considers KR as simply "something to do with writing down, in some language or communications medium, descriptions or pictures that correspond in some salient way to the world or a state of the world" (Duce & Ringland, 1988, p. 3). It emphasizes the ways in which knowledge can be encoded in a computer program (Bench-Capon, 1990). For the library and information science (LIS) community, KR is literally a synonym of knowledge organization, i.e., KR refers to the process of organizing knowledge into classifications, thesauri, or subject heading lists. KR has another meaning in LIS: it "encompasses every type and method of indexing, abstracting, cataloguing, classification, records management, bibliography and the creation of textual or bibliographic databases for information retrieval" (Anderson, 1996, p. 336). Adding a social dimension to knowledge organization, Hjoerland (1997) states that knowledge is a part of human activities, tied to the division of labor in society, which should be the primary organization of knowledge. Knowledge organization in LIS is secondary or derived, because knowledge is organized in learned institutions and publications.
These different points of view on KR suggest that an essential difference in the understanding of KR between AI and LIS lies in the source of representation: whether KR targets human activities themselves or the derivatives (the knowledge produced) of human activities. This difference also determines their difference in purpose: in AI, KR is mainly computer-application oriented, or pragmatic, and the result of representation is used to support decisions on human activities, while in LIS, KR is conceptually oriented, or abstract, and the result of representation is used for access to the derivatives of human activities.
    Date
    12. 9.2004 17:22:35
  3. Wang, Z.; Chaudhry, A.S.; Khoo, C.S.G.: Using classification schemes and thesauri to build an organizational taxonomy for organizing content and aiding navigation (2008) 0.03
    
    Abstract
    Purpose - The potential and benefits of classification schemes and thesauri in building organizational taxonomies have not been fully utilized by organizations. Empirical data on building an organizational taxonomy by the top-down approach of using classification schemes and thesauri appear to be lacking. The paper seeks to make a contribution in this regard. Design/methodology/approach - A case study of building an organizational taxonomy was conducted in the information studies domain for the Division of Information Studies at Nanyang Technological University, Singapore. The taxonomy was built using the Dewey Decimal Classification, the Information Science Taxonomy, two information systems taxonomies, and three thesauri (ASIS&T, LISA, and ERIC). Findings - Classification schemes and thesauri were found to be helpful in creating the structure and categories related to the subject facet of the taxonomy, but organizational community sources had to be consulted and several methods had to be employed. The organizational activities and stakeholders' needs had to be identified to determine the objectives, facets, and subject coverage of the taxonomy. Main categories were determined by identifying the stakeholders' interests and consulting organizational community sources and domain taxonomies. Category terms were selected from the terminologies of classification schemes, domain taxonomies, and thesauri against the stakeholders' interests. Hierarchical structures of the main categories were constructed in line with the stakeholders' perspectives and the navigational role, taking advantage of the structures and term relationships from classification schemes and thesauri. Categories were determined in line with the concepts and the hierarchical levels. The format of the categories was made uniform according to a commonly used standard. The consistency principle was employed to make the taxonomy structure and categories neater.
Validation of the draft taxonomy through consultations with the stakeholders further refined the taxonomy. Originality/value - No similar study could be traced in the literature. The steps and methods used in the taxonomy development, and the information studies taxonomy itself, will be helpful for library and information schools and other similar organizations in their efforts to develop taxonomies for organizing content and aiding navigation on organizational sites.
    Date
    7.11.2008 15:22:04
  4. Maniez, J.: ¬Des classifications aux thesaurus : du bon usage des facettes (1999) 0.02
    
    Date
    1. 8.1996 22:01:00
  5. Maniez, J.: ¬Du bon usage des facettes : des classifications aux thésaurus (1999) 0.02
    
    Date
    1. 8.1996 22:01:00
  6. Foskett, D.J.: Systems theory and its relevance to documentary classification (2017) 0.02
    
    Date
    6. 5.2017 18:46:22
  7. Fripp, D.: Using linked data to classify web documents (2010) 0.02
    
    Abstract
    Purpose - The purpose of this paper is to find a relationship between traditional faceted classification schemes and semantic web document annotators, particularly in the linked data environment. Design/methodology/approach - A consideration of the conceptual ideas behind faceted classification and linked data architecture is made. Analysis of selected web documents is performed using Calais' Semantic Proxy to support the considerations. Findings - Technical language aside, the principles of both approaches are very similar. Modern classification techniques have the potential to automatically generate metadata to drive more precise information recall by including a semantic layer. Originality/value - Linked data have not been explicitly considered in this context before in the published literature.
  8. Kashyap, M.M.: Likeness between Ranganathan's postulations based approach to knowledge classification and entity relationship data modelling approach (2003) 0.02
    
    Abstract
    This paper describes the postulations-based approach to facet classification articulated by S. R. Ranganathan for knowledge classification and for the design of faceted library classification schemes, and the entity-relationship data modelling and analysis approach set out by Peter Pin-Shan Chen, both further modified by other experts. Efforts have been made to show the parallelism between the two approaches. It points out that both theoretical approaches are concerned with the organisation of knowledge or information and apply almost identical theoretical principles, concepts, and techniques for the design and development of a framework for the organisation of knowledge, information, or data in their respective domains. It states that the two approaches are complementary and supplementary to each other. The paper also argues that Ranganathan's postulations-based, or analytico-synthetic, approach to knowledge classification can be applied to developing efficient data retrieval systems, in addition to its use in the data analysis and modelling domain.
  9. Slavic, A.; Cordeiro, M.I.: Core requirements for automation of analytico-synthetic classifications (2004) 0.02
    
    Abstract
    The paper analyses the importance of data presentation and modelling and its role in improving the management, use and exchange of analytico-synthetic classifications in automated systems. Inefficiencies in this respect hinder the automation of classification systems that offer the possibility of building compound index/search terms. The lack of machine-readable data expressing the semantics and structure of a classification vocabulary has negative effects on information management and retrieval, thus restricting the potential of both automated systems and the classifications themselves. The authors analysed the data representation structure of three general analytico-synthetic classification systems (BC2 - Bliss Bibliographic Classification; BSO - Broad System of Ordering; UDC - Universal Decimal Classification) and put forward some core requirements for classification data representation.
  10. Cordeiro, M.I.; Slavic, A.: Data models for knowledge organization tools : evolution and perspectives (2003) 0.01
    
    Abstract
    This paper focuses on the need for knowledge organization (KO) tools, such as library classifications, thesauri and subject heading systems, to be fully disclosed and available in the open network environment. The authors look at the place and value of traditional library knowledge organization tools in relation to the technical environment and expectations of the Semantic Web. Future requirements in this context are explored, stressing the need for KO systems to support semantic interoperability. In order to be fully shareable KO tools need to be reframed and reshaped in terms of conceptual and data models. The authors suggest that some useful approaches to this already exist in methodological and technical developments within the fields of ontology modelling and lexicographic and terminological data interchange.
  11. McCool, M.; St. Amant, K.: Field dependence and classification : implications for global information systems (2009) 0.01
    
    Abstract
    This article describes research designed to assess the interaction between culture and classification. Mounting evidence in cross-cultural psychology has indicated that culture may affect classification, which is an important dimension to global information systems. Data were obtained through three classification tasks, two of which were adapted from recent studies in cross-cultural psychology. Data were collected from 36 participants, 19 from China and 17 from the United States. The results of this research indicate that Chinese participants appear to be more field dependent, which may be related to a cultural preference for relationships instead of categories.
  12. Connaway, L.S.; Sievert, M.C.: Comparison of three classification systems for information on health insurance (1996) 0.01
    
    Date
    22. 4.1997 21:10:19
  13. Belayche, C.: ¬A propos de la classification de Dewey (1997) 0.01
    
    Source
    Bulletin d'informations de l'Association des Bibliothecaires Francais. 1997, no.175, S.22-23
  14. Lin, W.-Y.C.: ¬The concept and applications of faceted classifications (2006) 0.01
    
    Date
    27. 5.2007 22:19:35
  15. Lorenz, B.: Zur Theorie und Terminologie der bibliothekarischen Klassifikation (2018) 0.01
    
    Pages
    S.1-22
  16. Zarrad, R.; Doggaz, N.; Zagrouba, E.: Wikipedia HTML structure analysis for ontology construction (2018) 0.01
    
    Abstract
    Previously, the main problem of information extraction was gathering enough data. Today, the challenge is not to collect data but to interpret and represent them in order to deduce information. Ontologies are considered suitable solutions for organizing information. Classic methods for ontology construction from textual documents rely on natural language analysis and are generally based on statistical or linguistic approaches. However, these approaches do not consider the document structure, which provides additional knowledge; in fact, the structural organization of documents also conveys meaning. In this context, new approaches focus on document structure analysis to extract knowledge. This paper describes a methodology for ontology construction from web data, and especially from Wikipedia articles. It focuses mainly on document structure in order to extract the main concepts and their relations. The proposed methods extract not only taxonomic and non-taxonomic relations but also provide the labels describing the non-taxonomic relations. The extraction of non-taxonomic relations is established by analyzing the hierarchy of titles in each document. Pattern matching is also applied in order to extract known semantic relations. We also propose applying a refinement step to the extracted relations in order to keep only those that are relevant. The refinement process is performed by applying the transitive property, checking the nature of the relations, and analyzing taxonomic relations with inverted arguments. Experiments have been performed on French Wikipedia articles related to the medical field. Ontology evaluation is performed by comparing the result to gold standards.
  17. Winske, E.: ¬The development and structure of an urban, regional, and local documents classification scheme (1996) 0.01
    
    Footnote
    Paper presented at conference on 'Local documents, a new classification scheme' at the Research Caucus of the Florida Library Association Annual Conference, Fort Lauderdale, Florida 22 Apr 95
  18. Olson, H.A.: Sameness and difference : a cultural foundation of classification (2001) 0.01
    
    Date
    10. 9.2000 17:38:22
  19. Hjoerland, B.: Theories of knowledge organization - theories of knowledge (2017) 0.01
    
    Pages
    S.22-36
  20. Farradane, J.E.L.: ¬A scientific theory of classification and indexing and its practical applications (1950) 0.01
    
    Abstract
    A classification is a theory of the structure of knowledge. From a discussion of the nature of truth, it is held that scientific knowledge is the only knowledge which can be regarded as true. The method of induction from empirical data is therefore applied to the construction of a classification. Items of knowledge are divided into uniquely definable terms, called isolates, and the relations between them, called operators. It is shown that only four basic operators exist, expressing appurtenance, equivalence, reaction and causation; using symbols for these operators, all subjects can be analysed in a linear form called an analet. With the addition of the permissible permutations of such analets, formed according to simple rules, alphabetical arrangement of the first terms provides a complete, logical subject index. Examples are given, and possible difficulties are considered. A classification can then be constructed by selection of deductive relations, arranged in hierarchical form. The nature of possible classifications is discussed. It is claimed that such an inductively constructed classification is the only true representation of the structure of knowledge, and that these principles provide a simple technique for accurately and fully indexing and classifying any given set of data, with complete flexibility.