Search (41 results, page 1 of 3)

  • theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  1. Slavic, A.: On the nature and typology of documentary classifications and their use in a networked environment (2007) 0.03
    0.025035713 = product of:
      0.050071426 = sum of:
        0.031038022 = weight(_text_:data in 780) [ClassicSimilarity], result of:
          0.031038022 = score(doc=780,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 780, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=780)
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 780) [ClassicSimilarity], result of:
              0.038066804 = score(doc=780,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 780, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=780)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     Network-oriented standards for vocabulary publishing and exchange, and proposals for terminological services and terminology registries, will improve the sharing and use of all knowledge organization systems in the networked information environment. This means that documentary classifications may also become more applicable outside their original domain of application. The paper summarises some characteristics common to documentary classifications and explains some terminological, functional and implementation aspects. The original purpose behind each classification scheme determines the functions that the vocabulary is designed to facilitate. These functions influence the structure, semantics and syntax, the scheme coverage, and the format in which classification data are published and made available. The author suggests that attention should be paid to the differences between documentary classifications, as these may determine their suitability for a certain purpose and may impose different requirements with respect to their use online. Many classifications are currently being created for knowledge organization, and it may be important to promote expertise from the bibliographic domain in building and using classification systems.
    Date
    22.12.2007 17:22:31
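     The indented breakdown above each result is Lucene "explain" output for the ClassicSimilarity scoring model. A minimal Python sketch, reproducing the figures shown for result 1 from the formulas the output names (tf = sqrt(freq); idf = 1 + ln(maxDocs/(docFreq+1)); queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm; queryNorm and fieldNorm taken as given):
       import math

       def idf(doc_freq, max_docs):
           # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
           return 1.0 + math.log(max_docs / (doc_freq + 1))

       def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
           tf = math.sqrt(freq)                  # tf = sqrt(term frequency)
           w = idf(doc_freq, max_docs)
           query_weight = w * query_norm         # queryWeight in the output
           field_weight = tf * w * field_norm    # fieldWeight in the output
           return query_weight * field_weight

       QN = 0.046827413                          # queryNorm, taken from the output
       data_part = term_score(2.0, 5088, 44218, QN, 0.046875)        # ~0.031038022
       date_part = term_score(2.0, 3622, 44218, QN, 0.046875) * 0.5  # * coord(1/2)
       print((data_part + date_part) * 0.5)      # * coord(2/4) -> ~0.025035713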
  2. Fripp, D.: Using linked data to classify web documents (2010) 0.02
    0.018105512 = product of:
      0.07242205 = sum of:
        0.07242205 = weight(_text_:data in 4172) [ClassicSimilarity], result of:
          0.07242205 = score(doc=4172,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.48910472 = fieldWeight in 4172, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4172)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The purpose of this paper is to find a relationship between traditional faceted classification schemes and semantic web document annotators, particularly in the linked data environment. Design/methodology/approach - A consideration of the conceptual ideas behind faceted classification and linked data architecture is made. Analysis of selected web documents is performed using Calais' Semantic Proxy to support the considerations. Findings - Technical language aside, the principles of both approaches are very similar. Modern classification techniques have the potential to automatically generate metadata to drive more precise information recall by including a semantic layer. Originality/value - Linked data have not been explicitly considered in this context before in the published literature.
  3. Kashyap, M.M.: Likeness between Ranganathan's postulations based approach to knowledge classification and entity relationship data modelling approach (2003) 0.02
    0.017350782 = product of:
      0.06940313 = sum of:
        0.06940313 = weight(_text_:data in 2045) [ClassicSimilarity], result of:
          0.06940313 = score(doc=2045,freq=10.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.46871632 = fieldWeight in 2045, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2045)
      0.25 = coord(1/4)
    
    Abstract
     This paper describes the postulations-based approach to facet classification articulated by S. R. Ranganathan for knowledge classification and for the design of faceted schemes of library classification, and the entity-relationship data modelling and analysis approach set out by Peter Pin-Shan Chen, both further modified by other experts. It shows the parallelism between the two approaches, pointing out that both are concerned with the organisation of knowledge or information and apply similar theoretical principles, concepts, and techniques in designing and developing a framework for the organisation of knowledge, information, or data in their respective domains. The two approaches are complementary and supplementary to each other. The paper also argues that Ranganathan's postulations-based, analytico-synthetic approach to knowledge classification can be applied to developing efficient data retrieval systems, in addition to the data analysis and modelling domain.
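     A toy sketch of the parallelism the abstract claims: the same bibliographic domain modelled once as ER-style entities and relationships (after Chen) and once as facets cited in Ranganathan's PMEST order. All domain names and facet values here are illustrative assumptions, not taken from the paper:
       from dataclasses import dataclass

       @dataclass
       class Entity:                      # ER view, after Chen
           name: str
           attributes: tuple

       @dataclass
       class Relationship:
           name: str
           participants: tuple            # names of the entities involved

       document = Entity("Document", ("title", "year"))
       subject = Entity("Subject", ("term",))
       treats = Relationship("treats", ("Document", "Subject"))

       # Facet view: the same domain analysed into facets and cited in a
       # fixed order, after Ranganathan's PMEST formula
       facets = {"Personality": "Library science", "Matter": "Classification",
                 "Energy": "Indexing", "Space": "India", "Time": "1960s"}
       CITATION_ORDER = ("Personality", "Matter", "Energy", "Space", "Time")
       print("; ".join(facets[f] for f in CITATION_ORDER))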
  4. Qin, J.: Evolving paradigms of knowledge representation and organization : a comparative study of classification, XML/DTD and ontology (2003) 0.02
    0.016690476 = product of:
      0.03338095 = sum of:
        0.020692015 = weight(_text_:data in 2763) [ClassicSimilarity], result of:
          0.020692015 = score(doc=2763,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.1397442 = fieldWeight in 2763, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.012688936 = product of:
          0.025377871 = sum of:
            0.025377871 = weight(_text_:22 in 2763) [ClassicSimilarity], result of:
              0.025377871 = score(doc=2763,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.15476047 = fieldWeight in 2763, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2763)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     The different points of view on knowledge representation and organization in various research communities reflect underlying philosophies and paradigms in those communities. This paper reviews differences and relations in knowledge representation and organization and generalizes four paradigms: integrative and disintegrative pragmatism, and integrative and disintegrative epistemologism. Examples such as classification, XML schemas, and ontologies are compared based on how they specify concepts, build data models, and encode knowledge organization structures. 1. Introduction Knowledge representation (KR) is a term that several research communities use to refer to somewhat different aspects of the same research area. The artificial intelligence (AI) community considers KR as simply "something to do with writing down, in some language or communications medium, descriptions or pictures that correspond in some salient way to the world or a state of the world" (Duce & Ringland, 1988, p. 3). It emphasizes the ways in which knowledge can be encoded in a computer program (Bench-Capon, 1990). For the library and information science (LIS) community, KR is literally a synonym of knowledge organization, i.e., KR refers to the process of organizing knowledge into classifications, thesauri, or subject heading lists. KR has another meaning in LIS: it "encompasses every type and method of indexing, abstracting, cataloguing, classification, records management, bibliography and the creation of textual or bibliographic databases for information retrieval" (Anderson, 1996, p. 336). Adding the social dimension to knowledge organization, Hjoerland (1997) states that knowledge is a part of human activities and tied to the division of labor in society, which should be the primary organization of knowledge. Knowledge organization in LIS is secondary or derived, because knowledge is organized in learned institutions and publications. These different points of view on KR suggest that an essential difference in the understanding of KR between AI and LIS lies in the source of representation: whether KR targets human activities or derivatives (knowledge produced) from human activities. This difference also determines their difference in purpose: in AI, KR is mainly computer-application oriented, or pragmatic, and the result of representation is used to support decisions on human activities; in LIS, KR is conceptually oriented, or abstract, and the result of representation is used for access to derivatives from human activities.
    Date
    12. 9.2004 17:22:35
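     A small sketch of the comparison the abstract describes: one concept specified under three paradigms. The notation, element names and triples are hypothetical stand-ins, not drawn from the paper:
       import xml.etree.ElementTree as ET

       # 1. Classification: a notation fixing the concept's place in one hierarchy
       classed = {"notation": "025.04", "caption": "Information storage and retrieval"}

       # 2. XML: a schema-like element structure fixing names and nesting
       resource = ET.Element("resource")
       ET.SubElement(resource, "subject").text = classed["caption"]

       # 3. Ontology: concepts with explicitly typed relations between them
       triples = [("InformationRetrieval", "subClassOf", "InformationScience"),
                  ("Document", "isAbout", "InformationRetrieval")]

       print(classed["notation"])
       print(ET.tostring(resource, encoding="unicode"))
       print(triples)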
  5. Wang, Z.; Chaudhry, A.S.; Khoo, C.S.G.: Using classification schemes and thesauri to build an organizational taxonomy for organizing content and aiding navigation (2008) 0.02
    0.016690476 = product of:
      0.03338095 = sum of:
        0.020692015 = weight(_text_:data in 2346) [ClassicSimilarity], result of:
          0.020692015 = score(doc=2346,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.1397442 = fieldWeight in 2346, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=2346)
        0.012688936 = product of:
          0.025377871 = sum of:
            0.025377871 = weight(_text_:22 in 2346) [ClassicSimilarity], result of:
              0.025377871 = score(doc=2346,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.15476047 = fieldWeight in 2346, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2346)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     Purpose - The potential and benefits of classification schemes and thesauri in building organizational taxonomies cannot be fully utilized by organizations. Empirical data on building an organizational taxonomy by the top-down approach of using classification schemes and thesauri appear to be lacking. The paper seeks to make a contribution in this regard. Design/methodology/approach - A case study of building an organizational taxonomy was conducted in the information studies domain for the Division of Information Studies at Nanyang Technological University, Singapore. The taxonomy was built using the Dewey Decimal Classification, the Information Science Taxonomy, two information systems taxonomies, and three thesauri (ASIS&T, LISA, and ERIC). Findings - Classification schemes and thesauri were found to be helpful in creating the structure and categories related to the subject facet of the taxonomy, but organizational community sources had to be consulted and several methods had to be employed. The organizational activities and stakeholders' needs had to be identified to determine the objectives, facets, and subject coverage of the taxonomy. Main categories were determined by identifying the stakeholders' interests and consulting organizational community sources and domain taxonomies. Category terms were selected from the terminologies of classification schemes, domain taxonomies, and thesauri against the stakeholders' interests. Hierarchical structures of the main categories were constructed in line with the stakeholders' perspectives and the navigational role, taking advantage of the structures and term relationships from classification schemes and thesauri. Categories were determined in line with the concepts and the hierarchical levels. The format of categories was made uniform according to a commonly used standard, and the consistency principle was employed to make the taxonomy structure and categories neater. Validation of the draft taxonomy through consultations with the stakeholders further refined the taxonomy. Originality/value - No similar study could be traced in the literature. The steps and methods used in the taxonomy development, and the information studies taxonomy itself, will be helpful for library and information schools and other similar organizations in their efforts to develop taxonomies for organizing content and aiding navigation on organizational sites.
    Date
    7.11.2008 15:22:04
  6. Slavic, A.; Cordeiro, M.I.: Core requirements for automation of analytico-synthetic classifications (2004) 0.02
    0.015519011 = product of:
      0.062076043 = sum of:
        0.062076043 = weight(_text_:data in 2651) [ClassicSimilarity], result of:
          0.062076043 = score(doc=2651,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.4192326 = fieldWeight in 2651, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2651)
      0.25 = coord(1/4)
    
    Abstract
     The paper analyses the importance of data presentation and modelling and its role in improving the management, use and exchange of analytico-synthetic classifications in automated systems. Inefficiencies in this respect hinder the automation of classification systems that offer the possibility of building compound index/search terms. The lack of machine-readable data expressing the semantics and structure of a classification vocabulary has negative effects on information management and retrieval, thus restricting the potential of both automated systems and classifications themselves. The authors analysed the data representation structure of three general analytico-synthetic classification systems (BC2, the Bliss Bibliographic Classification; BSO, the Broad System of Ordering; UDC, the Universal Decimal Classification) and put forward some core requirements for classification data representation.
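     A minimal sketch of the kind of machine-readable class record the authors call for, under assumed field names: notation, caption, a semantic hierarchy link, and the number-building connectors a class may take part in. The synthesis check is illustrative, not the authors' specification:
       from dataclasses import dataclass, field
       from typing import Optional

       @dataclass
       class ClassRecord:
           notation: str                     # e.g. a UDC-style number
           caption: str
           broader: Optional[str] = None     # semantic hierarchy, not notation shape
           combinable: bool = True           # may take part in number-building
           connectors: list = field(default_factory=list)  # e.g. [":", "+", "/"]

       computing = ClassRecord("004", "Computer science", connectors=[":", "+"])
       medicine = ClassRecord("61", "Medical sciences", broader="6", connectors=[":"])

       def synthesize(a: ClassRecord, b: ClassRecord, connector: str) -> str:
           # A compound number is legal only if both records allow the connector
           assert a.combinable and connector in a.connectors and connector in b.connectors
           return f"{a.notation}{connector}{b.notation}"

       print(synthesize(computing, medicine, ":"))   # 004:61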
  7. Cordeiro, M.I.; Slavic, A.: Data models for knowledge organization tools : evolution and perspectives (2003) 0.01
    0.013439858 = product of:
      0.053759433 = sum of:
        0.053759433 = weight(_text_:data in 2632) [ClassicSimilarity], result of:
          0.053759433 = score(doc=2632,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3630661 = fieldWeight in 2632, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2632)
      0.25 = coord(1/4)
    
    Abstract
     This paper focuses on the need for knowledge organization (KO) tools, such as library classifications, thesauri and subject heading systems, to be fully disclosed and available in the open network environment. The authors look at the place and value of traditional library knowledge organization tools in relation to the technical environment and expectations of the Semantic Web. Future requirements in this context are explored, stressing the need for KO systems to support semantic interoperability. In order to be fully shareable, KO tools need to be reframed and reshaped in terms of conceptual and data models. The authors suggest that some useful approaches to this already exist in methodological and technical developments within the fields of ontology modelling and lexicographic and terminological data interchange.
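     One later realization of such conceptual and data models is SKOS. A minimal sketch of a shareable, UDC-like two-concept fragment using the rdflib library (rdflib 6+; the namespace and labels are illustrative):
       from rdflib import Graph, Literal, Namespace
       from rdflib.namespace import RDF, SKOS

       EX = Namespace("http://example.org/udc/")     # hypothetical namespace
       g = Graph()
       g.bind("skos", SKOS)

       applied, medicine = EX["6"], EX["61"]
       for concept, label in [(applied, "Applied sciences"),
                              (medicine, "Medical sciences")]:
           g.add((concept, RDF.type, SKOS.Concept))
           g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))
       g.add((medicine, SKOS.broader, applied))      # hierarchy as explicit data

       print(g.serialize(format="turtle"))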
  8. McCool, M.; St. Amant, K.: Field dependence and classification : implications for global information systems (2009) 0.01
    0.012802532 = product of:
      0.051210128 = sum of:
        0.051210128 = weight(_text_:data in 2854) [ClassicSimilarity], result of:
          0.051210128 = score(doc=2854,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34584928 = fieldWeight in 2854, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2854)
      0.25 = coord(1/4)
    
    Abstract
    This article describes research designed to assess the interaction between culture and classification. Mounting evidence in cross-cultural psychology has indicated that culture may affect classification, which is an important dimension to global information systems. Data were obtained through three classification tasks, two of which were adapted from recent studies in cross-cultural psychology. Data were collected from 36 participants, 19 from China and 17 from the United States. The results of this research indicate that Chinese participants appear to be more field dependent, which may be related to a cultural preference for relationships instead of categories.
  9. Zarrad, R.; Doggaz, N.; Zagrouba, E.: Wikipedia HTML structure analysis for ontology construction (2018) 0.01
    0.011199882 = product of:
      0.04479953 = sum of:
        0.04479953 = weight(_text_:data in 4302) [ClassicSimilarity], result of:
          0.04479953 = score(doc=4302,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.30255508 = fieldWeight in 4302, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4302)
      0.25 = coord(1/4)
    
    Abstract
     Previously, the main problem of information extraction was to gather enough data. Today, the challenge is not to collect data but to interpret and represent them in order to deduce information. Ontologies are considered suitable solutions for organizing information. Classic methods for ontology construction from textual documents rely on natural language analysis and are generally based on statistical or linguistic approaches. However, these approaches do not consider the document structure, which provides additional knowledge: the structural organization of documents also conveys meaning. In this context, new approaches focus on document structure analysis to extract knowledge. This paper describes a methodology for ontology construction from web data, and especially from Wikipedia articles, that focuses on document structure in order to extract the main concepts and their relations. The proposed methods extract not only taxonomic and non-taxonomic relations but also the labels describing the non-taxonomic relations. The extraction of non-taxonomic relations is established by analyzing the hierarchy of titles in each document. Pattern matching is also applied in order to extract known semantic relations. We also propose refining the extracted relations to keep only those that are relevant. The refinement process is performed by applying the transitive property, checking the nature of the relations, and analyzing taxonomic relations with inverted arguments. Experiments have been performed on French Wikipedia articles related to the medical field, and the ontology is evaluated by comparison against gold standards.
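     A simplified sketch of the core step described here: reading an article's heading hierarchy and proposing a relation between each title and the nearest shallower title above it. Relation labelling, pattern matching and refinement are omitted, and the HTML snippet is invented:
       from html.parser import HTMLParser

       class HeadingCollector(HTMLParser):
           """Collect (level, title) pairs from h1-h6 tags."""
           def __init__(self):
               super().__init__()
               self.headings = []
               self._level = None
           def handle_starttag(self, tag, attrs):
               if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
                   self._level = int(tag[1])
           def handle_data(self, data):
               if self._level is not None and data.strip():
                   self.headings.append((self._level, data.strip()))
                   self._level = None

       def relations(headings):
           # Relate each title to the nearest shallower title above it
           stack, rels = [], []
           for level, title in headings:
               while stack and stack[-1][0] >= level:
                   stack.pop()
               if stack:
                   rels.append((stack[-1][1], title))
               stack.append((level, title))
           return rels

       p = HeadingCollector()
       p.feed("<h2>Treatment</h2><h3>Drug therapy</h3><h3>Surgery</h3>")
       print(relations(p.headings))
       # [('Treatment', 'Drug therapy'), ('Treatment', 'Surgery')]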
  10. Farradane, J.E.L.: A scientific theory of classification and indexing and its practical applications (1950) 0.01
    0.010973599 = product of:
      0.043894395 = sum of:
        0.043894395 = weight(_text_:data in 1654) [ClassicSimilarity], result of:
          0.043894395 = score(doc=1654,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 1654, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1654)
      0.25 = coord(1/4)
    
    Abstract
     A classification is a theory of the structure of knowledge. From a discussion of the nature of truth, it is held that scientific knowledge is the only knowledge which can be regarded as true. The method of induction from empirical data is therefore applied to the construction of a classification. Items of knowledge are divided into uniquely definable terms, called isolates, and the relations between them, called operators. It is shown that only four basic operators exist, expressing appurtenance, equivalence, reaction and causation; using symbols for these operators, all subjects can be analysed in a linear form called an analet. With the addition of the permissible permutations of such analets, formed according to simple rules, alphabetical arrangement of the first terms provides a complete, logical subject index. Examples are given, and possible difficulties are considered. A classification can then be constructed by selection of deductive relations, arranged in hierarchical form. The nature of possible classifications is discussed. It is claimed that such an inductively constructed classification is the only true representation of the structure of knowledge, and that these principles provide a simple technique for accurately and fully indexing and classifying any given set of data, with complete flexibility.
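     A toy sketch of analets and their permuted index entries. The operator symbols are ad hoc and plain rotation stands in for Farradane's permutation rules, which the abstract does not spell out:
       # Farradane's four basic operators, with ad hoc symbols
       OPS = {"appurtenance": "/p", "equivalence": "/e",
              "reaction": "/r", "causation": "/c"}

       # An analet: isolates linked by operators in linear form
       analet = ["heat", OPS["causation"], "expansion", OPS["appurtenance"], "metals"]

       def index_entries(analet):
           """Rotate the isolates so each can lead an alphabetical index entry;
           a simplified stand-in for the paper's permutation rules."""
           isolates = analet[0::2]
           for i in range(len(isolates)):
               yield ", ".join(isolates[i:] + isolates[:i])

       for entry in sorted(index_entries(analet)):
           print(entry)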
  11. Szostak, R.: Classifying science : phenomena, data, theory, method, practice (2004) 0.01
    0.010973599 = product of:
      0.043894395 = sum of:
        0.043894395 = weight(_text_:data in 325) [ClassicSimilarity], result of:
          0.043894395 = score(doc=325,freq=16.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 325, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0234375 = fieldNorm(doc=325)
      0.25 = coord(1/4)
    
    Abstract
    Classification is the essential first step in science. The study of science, as well as the practice of science, will thus benefit from a detailed classification of different types of science. In this book, science - defined broadly to include the social sciences and humanities - is first unpacked into its constituent elements: the phenomena studied, the data used, the theories employed, the methods applied, and the practices of scientists. These five elements are then classified in turn. Notably, the classifications of both theory types and methods allow the key strengths and weaknesses of different theories and methods to be readily discerned and compared. Connections across classifications are explored: should certain theories or phenomena be investigated only with certain methods? What is the proper function and form of scientific paradigms? Are certain common errors and biases in scientific practice associated with particular phenomena, data, theories, or methods? The classifications point to several ways of improving both specialized and interdisciplinary research and teaching, and especially of enhancing communication across communities of scholars. The classifications also support a superior system of document classification that would allow searches by theory and method used as well as causal links investigated.
    Content
     Contents: - Chapter 1: Classifying Science: 1.1. A Simple Classificatory Guideline - 1.2. The First "Cut" (and Plan of Work) - 1.3. Some Preliminaries - Chapter 2: Classifying Phenomena and Data: 2.1. Classifying Phenomena - 2.2. Classifying Data - Chapter 3: Classifying Theory: 3.1. Typology of Theory - 3.2. What Is a Theory? - 3.3. Evaluating Theories - 3.4. Types of Theory and the Five Types of Causation - 3.5. Classifying Individual Theories - 3.6. Advantages of a Typology of Theory - Chapter 4: Classifying Method: 4.1. Classifying Methods - 4.2. Typology of Strengths and Weaknesses of Methods - 4.3. Qualitative Versus Quantitative Analysis Revisited - 4.4. Evaluating Methods - 4.5. Classifying Particular Methods Within The Typology - 4.6. Advantages of a Typology of Methods - Chapter 5: Classifying Practice: 5.1. Errors and Biases in Science - 5.2. Typology of (Critiques of) Scientific Practice - 5.3. Utilizing This Classification - 5.4. The Five Types of Ethical Analysis - Chapter 6: Drawing Connections Across These Classifications: 6.1. Theory and Method - 6.2. Theory (Method) and Phenomena (Data) - 6.3. Better Paradigms - 6.4. Critiques of Scientific Practice: Are They Correlated with Other Classifications? - Chapter 7: Classifying Scientific Documents: 7.1. Faceted or Enumerative? - 7.2. Classifying By Phenomena Studied - 7.3. Classifying By Theory Used - 7.4. Classifying By Method Used - 7.5. Links Among Subjects - 7.6. Type of Work, Language, and More - 7.7. Critiques of Scientific Practice - 7.8. Classifying Philosophy - 7.9. Evaluating the System - Chapter 8: Concluding Remarks: 8.1. The Classifications - 8.2. Advantages of These Various Classifications - 8.3. Drawing Connections Across Classifications - 8.4. Golden Mean Arguments - 8.5. Why Should Science Be Believed? - 8.6. How Can Science Be Improved? - 8.7. How Should Science Be Taught?
    Footnote
     Review in: KO 32(2005) no.2, pp.93-95 (H. Albrechtsen): "The book deals with the mapping of the structures and contents of the sciences, defined broadly to include the social sciences and the humanities. According to the author, the study of science, as well as the practice of science, could benefit from a detailed classification of different types of science. The book defines five universal constituents of the sciences: phenomena, data, theories, methods and practice. For each of these constituents, the author poses five questions, in the well-known 5W format: Who, What, Where, When, Why? - with the addition of the question How? (Szostak 2003). Two objectives of the author's endeavor stand out: 1) decision support for university curriculum development across disciplines, for university students at advanced levels of education in selecting appropriate courses for their projects, and for cross-disciplinary inquiry by researchers and students; 2) decision support for researchers and students in scientific inquiry across disciplines, methods and theories. The main prospective audience of this book is university curriculum developers, university students and researchers, in that order of priority. The heart of the book is the chapters unfolding the author's ideas about how to classify phenomena and data, theory, method and practice, by use of the 5W inquiry model. . . .
  12. Maniez, J.: Des classifications aux thesaurus : du bon usage des facettes (1999) 0.01
    0.009516701 = product of:
      0.038066804 = sum of:
        0.038066804 = product of:
          0.07613361 = sum of:
            0.07613361 = weight(_text_:22 in 6404) [ClassicSimilarity], result of:
              0.07613361 = score(doc=6404,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.46428138 = fieldWeight in 6404, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6404)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1. 8.1996 22:01:00
  13. Maniez, J.: Du bon usage des facettes : des classifications aux thésaurus (1999) 0.01
    0.009516701 = product of:
      0.038066804 = sum of:
        0.038066804 = product of:
          0.07613361 = sum of:
            0.07613361 = weight(_text_:22 in 3773) [ClassicSimilarity], result of:
              0.07613361 = score(doc=3773,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.46428138 = fieldWeight in 3773, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3773)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1. 8.1996 22:01:00
  14. Foskett, D.J.: Systems theory and its relevance to documentary classification (2017) 0.01
    0.009516701 = product of:
      0.038066804 = sum of:
        0.038066804 = product of:
          0.07613361 = sum of:
            0.07613361 = weight(_text_:22 in 3176) [ClassicSimilarity], result of:
              0.07613361 = score(doc=3176,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.46428138 = fieldWeight in 3176, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3176)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    6. 5.2017 18:46:22
  15. Hillman, D.J.: Mathematical classification techniques for nonstatic document collections, with particular reference to the problem of relevance (1965) 0.01
    0.009052756 = product of:
      0.036211025 = sum of:
        0.036211025 = weight(_text_:data in 5516) [ClassicSimilarity], result of:
          0.036211025 = score(doc=5516,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 5516, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5516)
      0.25 = coord(1/4)
    
    Abstract
     It is first argued that classification schemes have an essentially hypothetical nature: their adoption is not anything which can be true or false. Such schemes are therefore corrigible and susceptible of modification as fresh data accrue. These schemes are tools for the logical analysis of the structure of recorded knowledge, and their use amounts to the adoption of a hypothesis. ... It is therefore imperative that classification schemes be devised which do allow us to deal with sets of documents that change with time. The formal bases of two such schemes are next described. They are known, respectively, as implicative lattices and subtractive lattices ...
  16. Szostak, R.: A grammatical approach to subject classification in museums (2017) 0.01
    0.009052756 = product of:
      0.036211025 = sum of:
        0.036211025 = weight(_text_:data in 4136) [ClassicSimilarity], result of:
          0.036211025 = score(doc=4136,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 4136, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4136)
      0.25 = coord(1/4)
    
    Abstract
    Several desiderata of a system of subject classification for museums are identified. The limitations of existing approaches are reviewed. It is argued that an approach which synthesizes basic concepts within a grammatical structure can achieve the goals of subject classification in museums while addressing diverse challenges. The same approach can also be applied in galleries, archives, and libraries. The approach is described in some detail and examples are provided of its application. The article closes with brief discussions of thesauri and linked open data.
  17. Broughton, V.: Faceted classification as a basis for knowledge organization in a digital environment : the Bliss Bibliographic Classification as a model for vocabulary management and the creation of multidimensional knowledge structures (2003) 0.01
    0.0077595054 = product of:
      0.031038022 = sum of:
        0.031038022 = weight(_text_:data in 2631) [ClassicSimilarity], result of:
          0.031038022 = score(doc=2631,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 2631, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2631)
      0.25 = coord(1/4)
    
    Abstract
     The paper examines the way in which classification schemes can be applied to the organization of digital resources. The case is argued for the particular suitability of schemes based on faceted principles for the organization of complex digital objects. Details are given of a co-operative project between the School of Library, Archive & Information Studies, University College London, and the United Kingdom Higher Education gateways Arts and Humanities Data Service and Humbul, in which a faceted knowledge structure is being developed for the indexing and display of digital materials within a new combined humanities portal.
  18. Loehrlein, A.J.; Lemieux, V.L.; Bennett, M.: ¬The classification of financial products (2014) 0.01
    0.0077595054 = product of:
      0.031038022 = sum of:
        0.031038022 = weight(_text_:data in 1196) [ClassicSimilarity], result of:
          0.031038022 = score(doc=1196,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 1196, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1196)
      0.25 = coord(1/4)
    
    Abstract
     In the wake of the global financial crisis, the U.S. Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank) was enacted to provide increased transparency in financial markets. In response to Dodd-Frank, a series of rules relating to swaps record keeping has been issued, and one such rule calls for the creation of a financial products classification system. The manner in which financial products are classified will have a profound effect on data integration and analysis in the financial industry. This article considers various approaches that can be taken when classifying financial products and recommends the use of facet analysis. The article argues that this type of analysis is flexible enough to accommodate multiple viewpoints and rigorous enough to facilitate inferences that are based on the hierarchical structure. Various use cases pertaining to the organization of financial products are examined; they confirm the practical utility of taxonomies designed according to faceted principles.
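     A small sketch of what a faceted description of a financial product, plus the hierarchical inference the article credits facet analysis with, could look like. The facet names, terms and hierarchy are hypothetical, not taken from the article or the Dodd-Frank rules:
       # Hypothetical facets describing one product
       product = {"instrument": "interest rate swap",
                  "asset_class": "rates",
                  "settlement": "cash",
                  "tenor": "5Y"}

       # Hypothetical broader-term hierarchy within the instrument facet
       BROADER = {"interest rate swap": "swap", "swap": "derivative"}

       def is_a(term, ancestor):
           # Walk the hierarchy: faceted terms still support inference
           while term is not None:
               if term == ancestor:
                   return True
               term = BROADER.get(term)
           return False

       print(is_a(product["instrument"], "derivative"))  # True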
  19. Broughton, V.; Slavic, A.: Building a faceted classification for the humanities : principles and procedures (2007) 0.01
    0.007315732 = product of:
      0.029262928 = sum of:
        0.029262928 = weight(_text_:data in 2875) [ClassicSimilarity], result of:
          0.029262928 = score(doc=2875,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.19762816 = fieldWeight in 2875, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=2875)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - This paper aims to provide an overview of principles and procedures involved in creating a faceted classification scheme for use in resource discovery in an online environment. Design/methodology/approach - Facet analysis provides an established rigorous methodology for the conceptual organization of a subject field, and the structuring of an associated classification or controlled vocabulary. This paper explains how that methodology was applied to the humanities in the FATKS project, where the objective was to explore the potential of facet analytical theory for creating a controlled vocabulary for the humanities, and to establish the requirements of a faceted classification appropriate to an online environment. A detailed faceted vocabulary was developed for two areas of the humanities within a broader facet framework for the whole of knowledge. Research issues included how to create a data model which made the faceted structure explicit and machine-readable and provided for its further development and use. Findings - In order to support easy facet combination in indexing, and facet searching and browsing on the interface, faceted classification requires a formalized data structure and an appropriate tool for its management. The conceptual framework of a faceted system proper can be applied satisfactorily to humanities, and fully integrated within a vocabulary management system. Research limitations/implications - The procedures described in this paper are concerned only with the structuring of the classification, and do not extend to indexing, retrieval and application issues. Practical implications - Many stakeholders in the domain of resource discovery consider developing their own classification system and supporting tools. The methods described in this paper may clarify the process of building a faceted classification and may provide some useful ideas with respect to the vocabulary maintenance tool. Originality/value - As far as the authors are aware there is no comparable research in this area.
  20. Parrochia, D.: Mathematical theory of classification (2018) 0.01
    0.006466255 = product of:
      0.02586502 = sum of:
        0.02586502 = weight(_text_:data in 4308) [ClassicSimilarity], result of:
          0.02586502 = score(doc=4308,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 4308, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4308)
      0.25 = coord(1/4)
    
    Abstract
     One of the main topics of scientific research, classification is the operation of distributing objects into classes or groups which are, in general, less numerous than the objects themselves. From Antiquity to the Classical Age, it has a long history in which philosophers (Aristotle) and natural scientists (Linnaeus) took a great part. But from the nineteenth century (with the growth of chemistry and information science) and the twentieth century (with the arrival of mathematical models and computer science), mathematics (especially the theory of orders and the theory of graphs or hypergraphs) has allowed us to compute all the possible partitions, chains of partitions, covers, hypergraphs or systems of classes that can be constructed on a domain. In spite of these advances, most classifications are still based on the evaluation of resemblances between the objects that constitute the empirical data. For technical and epistemological reasons detailed below, however, all such classifications remain very unstable. We lack a real algebra of classifications that could explain their properties and the relations existing between them. Though the aim of a general theory of classifications may be wishful thinking, a recent conjecture gives hope that the existence of a metaclassification (a classification of all classification schemes) is possible.
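     The objects mentioned here are computable for small domains. A minimal sketch enumerating every partition of a finite set; the count for a four-element set is the Bell number B(4) = 15:
       def partitions(items):
           """Enumerate all partitions of a finite set, one of the basic
           objects of the mathematical theory of classification."""
           if not items:
               yield []
               return
           head, rest = items[0], items[1:]
           for part in partitions(rest):
               # head joins an existing block, or starts a new one
               for i in range(len(part)):
                   yield part[:i] + [part[i] + [head]] + part[i + 1:]
               yield [[head]] + part

       print(sum(1 for _ in partitions(list("abcd"))))  # 15 = B(4)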