Search (47 results, page 1 of 3)

  • theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  1. Molholt, P.: Qualities of classification schemes for the Information Superhighway (1995) 0.05
    0.047725018 = product of:
      0.19090007 = sum of:
        0.1773545 = weight(_text_:property in 5562) [ClassicSimilarity], result of:
          0.1773545 = score(doc=5562,freq=8.0), product of:
            0.25336683 = queryWeight, product of:
              6.335595 = idf(docFreq=212, maxDocs=44218)
              0.039991006 = queryNorm
            0.699991 = fieldWeight in 5562, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              6.335595 = idf(docFreq=212, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5562)
        0.013545574 = product of:
          0.027091147 = sum of:
            0.027091147 = weight(_text_:22 in 5562) [ClassicSimilarity], result of:
              0.027091147 = score(doc=5562,freq=2.0), product of:
                0.1400417 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039991006 = queryNorm
                0.19345059 = fieldWeight in 5562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5562)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    For my segment of this program I'd like to focus on some basic qualities of classification schemes. These qualities are critical to our ability to truly organize knowledge for access. As I see it, there are at least five qualities of note. The first one of these properties that I want to talk about is "authoritative." By this I mean standardized, but I mean more than standardized, with a built-in consensus-building process. A classification scheme constructed by a collaborative, consensus-building process carries the approval, and the authority, of the discipline groups that contribute to it and that it affects... The next property of classification systems is "expandable," living, responsive, with a clear locus of responsibility for its continuous upkeep. The worst thing you can do with a thesaurus, or a classification scheme, is to finish it. You can't ever finish it because it reflects ongoing intellectual activity... The third property is "intuitive." That is, the system has to be approachable, it has to be transparent, or at least capable of being transparent. It has to have an underlying logic that supports the classification scheme but doesn't dominate it... The fourth property is "organized and logical." I advocate very strongly, and agree with Lois Chan, that classification must be based on a rule-based structure, on somebody's world-view of the syndetic structure... The fifth property is "universal," by which I mean the classification scheme needs to be usable by any specific system or application, and be available as a language for multiple purposes.
    Source
    Cataloging and classification quarterly. 21(1995) no.2, S.19-22
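    [Editorial aside] The relevance figures attached to each record above are Lucene explanation trees for the classic TF-IDF similarity. As an illustration only (a minimal Python sketch, not part of the retrieval software that produced this listing), the snippet below recombines, up to floating-point rounding, the factors shown for record 1: tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and a coord factor for the fraction of query clauses that matched.

      import math

      def term_score(freq, idf, query_norm, field_norm):
          # One query term's contribution in one field under ClassicSimilarity.
          tf = math.sqrt(freq)                  # ~2.828427 for freq=8
          query_weight = idf * query_norm       # ~0.2533668 for the 'property' clause
          field_weight = tf * idf * field_norm  # ~0.699991
          return query_weight * field_weight

      # Factors copied from the explanation of record 1 (doc 5562) above.
      property_part = term_score(freq=8.0, idf=6.335595,
                                 query_norm=0.039991006, field_norm=0.0390625)
      part_22 = term_score(freq=2.0, idf=3.5018296,
                           query_norm=0.039991006, field_norm=0.0390625)

      # The '22' clause sits in a nested query with coord(1/2); the outer query
      # applies coord(2/8) because 2 of its 8 clauses matched this document.
      score = (property_part + part_22 * 0.5) * (2 / 8)
      print(score)  # ~0.0477250, matching the displayed 0.047725018 up to rounding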
  2. Green, R.: Relational aspects of subject authority control : the contributions of classificatory structure (2015) 0.01
    0.014339966 = product of:
      0.057359863 = sum of:
        0.04381429 = weight(_text_:network in 2282) [ClassicSimilarity], result of:
          0.04381429 = score(doc=2282,freq=2.0), product of:
            0.17809492 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.039991006 = queryNorm
            0.2460165 = fieldWeight in 2282, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2282)
        0.013545574 = product of:
          0.027091147 = sum of:
            0.027091147 = weight(_text_:22 in 2282) [ClassicSimilarity], result of:
              0.027091147 = score(doc=2282,freq=2.0), product of:
                0.1400417 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039991006 = queryNorm
                0.19345059 = fieldWeight in 2282, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2282)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The structure of a classification system contributes in a variety of ways to representing semantic relationships between its topics in the context of subject authority control. We explore this claim using the Dewey Decimal Classification (DDC) system as a case study. The DDC links its classes into a notational hierarchy, supplemented by a network of relationships between topics, expressed in class descriptions and in the Relative Index (RI). Topics/subjects are expressed both by the natural language text of the caption and notes (including Manual notes) in a class description and by the controlled vocabulary of the RI's alphabetic index, which shows where topics are treated in the classificatory structure. The expression of relationships between topics depends on paradigmatic and syntagmatic relationships between natural language terms in captions, notes, and RI terms; on the meaning of specific note types; and on references recorded between RI terms. The specific means used in the DDC for capturing hierarchical (including disciplinary), equivalence and associative relationships are surveyed.
    Date
    8.11.2015 21:27:22
  3. Zarrad, R.; Doggaz, N.; Zagrouba, E.: Wikipedia HTML structure analysis for ontology construction (2018) 0.01
    0.011084656 = product of:
      0.08867725 = sum of:
        0.08867725 = weight(_text_:property in 4302) [ClassicSimilarity], result of:
          0.08867725 = score(doc=4302,freq=2.0), product of:
            0.25336683 = queryWeight, product of:
              6.335595 = idf(docFreq=212, maxDocs=44218)
              0.039991006 = queryNorm
            0.3499955 = fieldWeight in 4302, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.335595 = idf(docFreq=212, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4302)
      0.125 = coord(1/8)
    
    Abstract
    Previously, the main problem of information extraction was to gather enough data. Today, the challenge is not to collect data but to interpret and represent them in order to deduce information. Ontologies are considered suitable solutions for organizing information. The classic methods for ontology construction from textual documents rely on natural language analysis and are generally based on statistical or linguistic approaches. However, these approaches do not consider the document structure, which provides additional knowledge. In fact, the structural organization of documents also conveys meaning. In this context, new approaches focus on document structure analysis to extract knowledge. This paper describes a methodology for ontology construction from web data and especially from Wikipedia articles. It focuses mainly on document structure in order to extract the main concepts and their relations. The proposed methods extract not only taxonomic and non-taxonomic relations but also give the labels describing non-taxonomic relations. The extraction of non-taxonomic relations is established by analyzing the hierarchy of titles in each document. Pattern matching is also applied in order to extract known semantic relations. We also propose to apply a refinement to the extracted relations in order to keep only those that are relevant. The refinement process is performed by applying the transitive property, checking the nature of the relations and analyzing taxonomic relations having inverted arguments. Experiments have been performed on French Wikipedia articles related to the medical field. Ontology evaluation is performed by comparing the ontology to gold standards.
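    [Editorial aside] The title-hierarchy idea described above can be pictured with a small sketch. The snippet below is only an illustration of the general approach (collecting heading levels from an article and proposing labelled, non-taxonomic relations between the article concept and its section titles); the function names, the labelling heuristic and the sample page are hypothetical, not the authors' implementation.

      from html.parser import HTMLParser

      class HeadingParser(HTMLParser):
          # Collect (level, title) pairs from the <h1>..<h6> tags of an article.
          def __init__(self):
              super().__init__()
              self.headings = []
              self._level = None

          def handle_starttag(self, tag, attrs):
              if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
                  self._level = int(tag[1])

          def handle_data(self, data):
              if self._level is not None and data.strip():
                  self.headings.append((self._level, data.strip()))
                  self._level = None

      def candidate_relations(html_text, main_concept):
          # Hypothetical heuristic: every sub-heading of the article is proposed
          # as the target of a labelled, non-taxonomic relation of the main concept.
          parser = HeadingParser()
          parser.feed(html_text)
          return [(main_concept, "has_" + title.lower().replace(" ", "_"), title)
                  for level, title in parser.headings if level > 1]

      page = "<h1>Diabetes</h1><h2>Symptoms</h2><h2>Treatment</h2><h3>Insulin therapy</h3>"
      for triple in candidate_relations(page, "Diabetes"):
          print(triple)
      # ('Diabetes', 'has_symptoms', 'Symptoms')
      # ('Diabetes', 'has_treatment', 'Treatment')
      # ('Diabetes', 'has_insulin_therapy', 'Insulin therapy')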
  4. Qin, J.: Evolving paradigms of knowledge representation and organization : a comparative study of classification, XML/DTD and ontology (2003) 0.01
    0.011054387 = product of:
      0.04421755 = sum of:
        0.03338109 = weight(_text_:computer in 2763) [ClassicSimilarity], result of:
          0.03338109 = score(doc=2763,freq=4.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.22840683 = fieldWeight in 2763, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.010836459 = product of:
          0.021672918 = sum of:
            0.021672918 = weight(_text_:22 in 2763) [ClassicSimilarity], result of:
              0.021672918 = score(doc=2763,freq=2.0), product of:
                0.1400417 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039991006 = queryNorm
                0.15476047 = fieldWeight in 2763, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2763)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The different points of view on knowledge representation and organization from various research communities reflect underlying philosophies and paradigms in these communities. This paper reviews differences and relations in knowledge representation and organization and generalizes four paradigms: integrative and disintegrative pragmatism, and integrative and disintegrative epistemologism. Examples such as classification, XML schemas, and ontologies are compared based on how they specify concepts, build data models, and encode knowledge organization structures. 1. Introduction Knowledge representation (KR) is a term that several research communities use to refer to somewhat different aspects of the same research area. The artificial intelligence (AI) community considers KR as simply "something to do with writing down, in some language or communications medium, descriptions or pictures that correspond in some salient way to the world or a state of the world" (Duce & Ringland, 1988, p. 3). It emphasizes the ways in which knowledge can be encoded in a computer program (Bench-Capon, 1990). For the library and information science (LIS) community, KR is literally the synonym of knowledge organization, i.e., KR is referred to as the process of organizing knowledge into classifications, thesauri, or subject heading lists. KR has another meaning in LIS: it "encompasses every type and method of indexing, abstracting, cataloguing, classification, records management, bibliography and the creation of textual or bibliographic databases for information retrieval" (Anderson, 1996, p. 336). Adding the social dimension to knowledge organization, Hjoerland (1997) states that knowledge is a part of human activities and tied to the division of labor in society, which should be the primary organization of knowledge. Knowledge organization in LIS is secondary or derived, because knowledge is organized in learned institutions and publications. These different points of view on KR suggest that an essential difference in the understanding of KR between AI and LIS lies in the source of representation: whether KR targets human activities or derivatives (knowledge produced) from human activities. This difference also determines their difference in purpose: in AI, KR is mainly computer-application oriented or pragmatic, and the result of representation is used to support decisions on human activities, while in LIS KR is conceptually oriented or abstract, and the result of representation is used for access to derivatives from human activities.
    Date
    12. 9.2004 17:22:35
  5. Olson, H.A.: Sameness and difference : a cultural foundation of classification (2001) 0.01
    0.009892544 = product of:
      0.07914035 = sum of:
        0.07914035 = sum of:
          0.041212745 = weight(_text_:resources in 166) [ClassicSimilarity], result of:
            0.041212745 = score(doc=166,freq=2.0), product of:
              0.14598069 = queryWeight, product of:
                3.650338 = idf(docFreq=3122, maxDocs=44218)
                0.039991006 = queryNorm
              0.28231642 = fieldWeight in 166, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.650338 = idf(docFreq=3122, maxDocs=44218)
                0.0546875 = fieldNorm(doc=166)
          0.037927605 = weight(_text_:22 in 166) [ClassicSimilarity], result of:
            0.037927605 = score(doc=166,freq=2.0), product of:
              0.1400417 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.039991006 = queryNorm
              0.2708308 = fieldWeight in 166, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=166)
      0.125 = coord(1/8)
    
    Date
    10. 9.2000 17:38:22
    Source
    Library resources and technical services. 45(2001) no.3, S.115-122
  6. Hjoerland, B.: Theories of knowledge organization - theories of knowledge (2017) 0.01
    0.009892544 = product of:
      0.07914035 = sum of:
        0.07914035 = sum of:
          0.041212745 = weight(_text_:resources in 3494) [ClassicSimilarity], result of:
            0.041212745 = score(doc=3494,freq=2.0), product of:
              0.14598069 = queryWeight, product of:
                3.650338 = idf(docFreq=3122, maxDocs=44218)
                0.039991006 = queryNorm
              0.28231642 = fieldWeight in 3494, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.650338 = idf(docFreq=3122, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3494)
          0.037927605 = weight(_text_:22 in 3494) [ClassicSimilarity], result of:
            0.037927605 = score(doc=3494,freq=2.0), product of:
              0.1400417 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.039991006 = queryNorm
              0.2708308 = fieldWeight in 3494, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3494)
      0.125 = coord(1/8)
    
    Pages
    S.22-36
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  7. Vickery, B.C.: Systematic subject indexing (1985) 0.01
    0.008867725 = product of:
      0.0709418 = sum of:
        0.0709418 = weight(_text_:property in 3636) [ClassicSimilarity], result of:
          0.0709418 = score(doc=3636,freq=2.0), product of:
            0.25336683 = queryWeight, product of:
              6.335595 = idf(docFreq=212, maxDocs=44218)
              0.039991006 = queryNorm
            0.2799964 = fieldWeight in 3636, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.335595 = idf(docFreq=212, maxDocs=44218)
              0.03125 = fieldNorm(doc=3636)
      0.125 = coord(1/8)
    
    Abstract
    - adding a relational term ("operator") to identify and join terms;
    - indicating grammatical case with terms where this would help clarify relationships; and
    - analyzing elementary terms to reveal fundamental categories where needed.
    He further added that a standard order for showing relational factors was highly desirable. Eventually, some years later, he was able to suggest such an order. This was accepted by his peers in the Classification Research Group, and utilized by Derek Austin in PRECIS (q.v.). Vickery began where Farradane began - with perception (a sound base according to current cognitive psychology). From this came further recognition of properties, parts, constituents, organs, effects, reactions, operations (physical and mental), added to the original "identity," "difference," "class membership," and "species." By defining categories more carefully, Vickery arrived at six (in addition to space (geographic) and time):
    - personality, thing, substance (e.g., dog, bicycle, rose)
    - part (e.g., paw, wheel, leaf)
    - substance (e.g., copper, water, butter)
    - action (e.g., scattering)
    - property (e.g., length, velocity)
    - operation (e.g., analysis, measurement)
    Thus, as early as 1953, the foundations were already laid for research that ultimately produced very sophisticated systems, such as PRECIS.
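    [Editorial aside] The standard citation order mentioned above can be made concrete with a toy sketch. The snippet below is purely illustrative: the category names follow the list in the abstract, but the term assignments, the ordering routine and the ' / ' rendering are hypothetical, not Vickery's notation.

      # Vickery-style categories (plus space and time), in a fixed citation order.
      CITATION_ORDER = ["thing", "part", "substance", "action",
                        "property", "operation", "space", "time"]

      def compound_subject(facets):
          # Arrange the categorized terms according to the citation order;
          # the ' / ' separator is just for display in this sketch.
          return " / ".join(facets[category] for category in CITATION_ORDER
                            if category in facets)

      # A toy compound subject: measuring the copper composition of bearings.
      print(compound_subject({
          "thing": "machine", "part": "bearing", "substance": "copper",
          "property": "composition", "operation": "measurement",
      }))
      # machine / bearing / copper / composition / measurement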
  8. Hurt, C.D.: Classification and subject analysis : looking to the future at a distance (1997) 0.01
    0.008762858 = product of:
      0.07010286 = sum of:
        0.07010286 = weight(_text_:network in 6929) [ClassicSimilarity], result of:
          0.07010286 = score(doc=6929,freq=2.0), product of:
            0.17809492 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.039991006 = queryNorm
            0.3936264 = fieldWeight in 6929, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.0625 = fieldNorm(doc=6929)
      0.125 = coord(1/8)
    
    Abstract
    Classic classification schemes are uni-dimensional, with few exceptions. One of the challenges of distance education and new learning strategies is that the proliferation of course work defies the traditional categorization. The rigidity of most present classification schemes does not mesh well with the burgeoning fluidity of the academic environment. One solution is a return to a largely forgotten area of study - classification theory. Some suggestions for exploration are nonmonotonic logic systems, neural network models, and non-library models.
  9. Facets: a fruitful notion in many domains : special issue on facet analysis (2008) 0.01
    0.008227873 = product of:
      0.03291149 = sum of:
        0.025552073 = weight(_text_:computer in 3262) [ClassicSimilarity], result of:
          0.025552073 = score(doc=3262,freq=6.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.17483756 = fieldWeight in 3262, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3262)
        0.0073594186 = product of:
          0.014718837 = sum of:
            0.014718837 = weight(_text_:resources in 3262) [ClassicSimilarity], result of:
              0.014718837 = score(doc=3262,freq=2.0), product of:
                0.14598069 = queryWeight, product of:
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.039991006 = queryNorm
                0.10082729 = fieldWeight in 3262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3262)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Footnote
    Rez. in: KO 36(2009) no.1, S.62-63 (K. La Barre): "This special issue of Axiomathes presents an ambitious dual agenda. It attempts to highlight aspects of facet analysis (as used in LIS) that are shared by cognate approaches in philosophy, psychology, linguistics and computer science. Secondarily, the issue aims to attract others to the study and use of facet analysis. The authors represent a blend: those with a lifetime involvement with facet analysis, such as Vickery, Broughton, Beghtol, and Dahlberg; those with well-developed research agendas, such as Tudhope and Priss; and relative newcomers, such as Gnoli, Cheti and Paradisi, and Slavic. Omissions are inescapable, but a more balanced issue would have resulted from inclusion of at least one researcher from the Indian school of facet theory. Another valuable addition might have been a reaction to the issue by one of the chief critics of facet analysis. Potentially useful, but absent, is a comprehensive bibliography of the resources, now scattered throughout the issue, for those wishing to engage in further study. Several of the papers assume relative familiarity with facet analytical concepts and definitions, some of which are contested even within LIS. Gnoli's introduction (p. 127-130) traces the trajectory, extensions and new developments of this analytico-synthetic approach to subject access, while providing a laundry list of cognate approaches that are similar to facet analysis. This brief essay and the article by Priss (p. 243-255) directly address this first part of Gnoli's agenda. Priss provides detailed discussion of facet-like structures in computer science (p. 245-246), and outlines the similarity between Formal Concept Analysis and facets. This comparison is equally fruitful for researchers in computer science and library and information science. By bridging into a discussion of visualization challenges for facet display, Priss also invites further research. Many of the remaining papers comprehensively detail the intellectual heritage of facet analysis (Beghtol; Broughton, p. 195-198; Dahlberg; Tudhope and Binding, p. 213-215; Vickery). Beghtol's (p. 131-144) examination of the origins of facet theory through the lens of the textbooks written by Ranganathan's mentor W.C.B. Sayers (1881-1960), Manual of Classification (1926, 1944, 1955), and a textbook written by Mills, A Modern Outline of Classification (1964), serves to reveal the deep intellectual heritage of the changes in classification theory over time, as well as Ranganathan's own influence on and debt to Sayers.
  10. Classification Research Group: ¬The need for a faceted classification as the basis of all methods of information retrieval (1985) 0.01
    0.006650794 = product of:
      0.05320635 = sum of:
        0.05320635 = weight(_text_:property in 3640) [ClassicSimilarity], result of:
          0.05320635 = score(doc=3640,freq=2.0), product of:
            0.25336683 = queryWeight, product of:
              6.335595 = idf(docFreq=212, maxDocs=44218)
              0.039991006 = queryNorm
            0.2099973 = fieldWeight in 3640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.335595 = idf(docFreq=212, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3640)
      0.125 = coord(1/8)
    
    Abstract
    The technique chosen was S. R. Ranganathan's facet analysis (q.v.). This method works from the bottom up: a term is categorized according to its parent class, as a kind, state, property, action, operation upon something, result of an operation, agent, and so on. These modes of definition represent characteristics of division. Following the publication of this paper, the group worked for over ten years developing systems following this general pattern with various changes and experimental arrangements. Ranganathan's Colon Classification was the original of this type of method, but the Group rejected his contention that there are only five fundamental categories to be found in the knowledge base. They did, in fact, end up with varying numbers of categories in the experimental systems which they ultimately were to make. Notation was also recognized as a problem, being complex, illogical, lengthy, obscure and hard to understand. The Group tried to develop a rationale for notation, both as an ordering and as a finding device. To describe and represent a class, a notation could be long, but as a finding device, brevity would be preferable. The Group was to experiment with this aspect of classification and produce a number of interesting results. The Classification Research Group began meeting informally to discuss classification matters in 1952 and continues to meet, usually in London, to the present day. Most of the British authors whose work is presented in these pages have been members for most of the Group's life and continue in it. The Group maintains the basic position outlined in this paper to the present day. Its experimental approach has resulted in much more information about the nature and functions of classification systems. The ideal system has yet to be found. Classification research is still a promising area. The future calls for more experimentation based on reasoned approaches, following the example set by the Classification Research Group.
  11. Cordeiro, M.I.; Slavic, A.: Data models for knowledge organization tools : evolution and perspectives (2003) 0.01
    0.006572143 = product of:
      0.052577145 = sum of:
        0.052577145 = weight(_text_:network in 2632) [ClassicSimilarity], result of:
          0.052577145 = score(doc=2632,freq=2.0), product of:
            0.17809492 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.039991006 = queryNorm
            0.29521978 = fieldWeight in 2632, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.046875 = fieldNorm(doc=2632)
      0.125 = coord(1/8)
    
    Abstract
    This paper focuses on the need for knowledge organization (KO) tools, such as library classifications, thesauri and subject heading systems, to be fully disclosed and available in the open network environment. The authors look at the place and value of traditional library knowledge organization tools in relation to the technical environment and expectations of the Semantic Web. Future requirements in this context are explored, stressing the need for KO systems to support semantic interoperability. In order to be fully shareable, KO tools need to be reframed and reshaped in terms of conceptual and data models. The authors suggest that some useful approaches to this already exist in methodological and technical developments within the fields of ontology modelling and lexicographic and terminological data interchange.
  12. Denton, W.: Putting facets on the Web : an annotated bibliography (2003) 0.01
    0.0062900716 = product of:
      0.025160287 = sum of:
        0.014752497 = weight(_text_:computer in 2467) [ClassicSimilarity], result of:
          0.014752497 = score(doc=2467,freq=2.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.100942515 = fieldWeight in 2467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
        0.01040779 = product of:
          0.02081558 = sum of:
            0.02081558 = weight(_text_:resources in 2467) [ClassicSimilarity], result of:
              0.02081558 = score(doc=2467,freq=4.0), product of:
                0.14598069 = queryWeight, product of:
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.039991006 = queryNorm
                0.14259133 = fieldWeight in 2467, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2467)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Consider movie listings in newspapers. Most Canadian newspapers list movie showtimes in two large blocks, for the two major theatre chains. The listings are ordered by region (in large cities), then theatre, then movie, and finally by showtime. Anyone wondering where and when a particular movie is playing must scan the complete listings. Determining what movies are playing in the next half hour is very difficult. When movie listings went onto the web, most sites used a simple faceted organization, always with movie name and theatre, and perhaps with region or neighbourhood (thankfully, theatre chains were left out). They make it easy to pick a theatre and see what movies are playing there, or to pick a movie and see what theatres are showing it. To complete the system, the sites should allow users to browse by neighbourhood and showtime, and to order the results in any way they desired. Thus could people easily find answers to such questions as, "Where is the new James Bond movie playing?" "What's showing at the Roxy tonight?" "I'm going to be out in Little Finland this afternoon with three hours to kill starting at 2 ... is anything interesting playing?" A hypertext, faceted classification system makes more useful information more easily available to the user. Reading the books and articles below in chronological order will show a certain progression: suggestions that faceting and hypertext might work well, confidence that facets would work well if only someone would make such a system, and finally the beginning of serious work on actually designing, building, and testing faceted web sites. There is a solid basis of how to make faceted classifications (see Vickery in Recommended), but their application online is just starting. Work on XFML (see Van Dijck's work in Recommended), the Exchangeable Faceted Metadata Language, will make this easier. If it follows previous patterns, parts of the Internet community will embrace the idea and make open source software available for others to reuse. It will be particularly beneficial if professionals in both information studies and computer science can work together to build working systems, standards, and code. Each can benefit from the other's expertise in what can be a very complicated and technical area. One particularly nice thing about this area of research is that people interested in combining facets and the web often have web sites where they post their writings.
    This bibliography is not meant to be exhaustive, but unfortunately it is not as complete as I wanted. Some books and articles are not included, but they may be used in my future work. (These include two books and one article by B.C. Vickery: Faceted Classification Schemes (New Brunswick, NJ: Rutgers, 1966), Classification and Indexing in Science, 3rd ed. (London: Butterworths, 1975), and "Knowledge Representation: A Brief Review" (Journal of Documentation 42 no. 3 (September 1986): 145-159); and A.C. Foskett's "The Future of Faceted Classification" in The Future of Classification, edited by Rita Marcella and Arthur Maltby (Aldershot, England: Gower, 2000): 69-80.) Nevertheless, I hope this bibliography will be useful for those new to or already familiar with faceted hypertext systems. Some very basic resources are listed, as well as some very advanced ones. Some example web sites are mentioned, but there is no detailed technical discussion of any software. The user interface to any web site is extremely important, and this is briefly mentioned in two or three places (for example the discussion of lawforwa.org (see Example Web Sites)). The larger question of how to display information graphically and with hypertext is outside the scope of this bibliography. There are five sections: Recommended, Background, Not Relevant, Example Web Sites, and Mailing Lists. Background material is either introductory, advanced, or of peripheral interest, and can be read after the Recommended resources if the reader wants to know more. The Not Relevant category contains articles that may appear in bibliographies but are not relevant for my purposes.
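    [Editorial aside] Denton's movie-listing scenario translates almost directly into a faceted data structure. The sketch below is illustrative only; the field names and sample data are invented, and a real site would add showtime ranges and result ordering.

      from dataclasses import dataclass

      @dataclass
      class Showing:
          movie: str
          theatre: str
          neighbourhood: str
          showtime: str  # "HH:MM"; a real system would use proper time objects

      LISTINGS = [
          Showing("Goldfinger", "Roxy", "Little Finland", "14:30"),
          Showing("Goldfinger", "Paramount", "Downtown", "19:00"),
          Showing("Metropolis", "Roxy", "Little Finland", "21:15"),
      ]

      def browse(showings, **facets):
          # Keep only the showings whose attributes match every requested facet;
          # any combination of facets (or none) is a valid starting point.
          return [s for s in showings
                  if all(getattr(s, name) == value for name, value in facets.items())]

      # "What's showing at the Roxy tonight?"
      print([s.movie for s in browse(LISTINGS, theatre="Roxy")])
      # "Is anything playing in Little Finland this afternoon?"
      print([(s.movie, s.showtime) for s in browse(LISTINGS, neighbourhood="Little Finland")])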
  13. Wang, Z.; Chaudhry, A.S.; Khoo, C.S.G.: Using classification schemes and thesauri to build an organizational taxonomy for organizing content and aiding navigation (2008) 0.01
    0.005652882 = product of:
      0.045223057 = sum of:
        0.045223057 = sum of:
          0.02355014 = weight(_text_:resources in 2346) [ClassicSimilarity], result of:
            0.02355014 = score(doc=2346,freq=2.0), product of:
              0.14598069 = queryWeight, product of:
                3.650338 = idf(docFreq=3122, maxDocs=44218)
                0.039991006 = queryNorm
              0.16132367 = fieldWeight in 2346, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.650338 = idf(docFreq=3122, maxDocs=44218)
                0.03125 = fieldNorm(doc=2346)
          0.021672918 = weight(_text_:22 in 2346) [ClassicSimilarity], result of:
            0.021672918 = score(doc=2346,freq=2.0), product of:
              0.1400417 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.039991006 = queryNorm
              0.15476047 = fieldWeight in 2346, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2346)
      0.125 = coord(1/8)
    
    Date
    7.11.2008 15:22:04
    Theme
    Information Resources Management
  14. Zackland, M.; Fontaine, D.: Systematic building of conceptual classification systems with C-KAT (1996) 0.01
    0.0051633734 = product of:
      0.041306987 = sum of:
        0.041306987 = weight(_text_:computer in 5145) [ClassicSimilarity], result of:
          0.041306987 = score(doc=5145,freq=2.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.28263903 = fieldWeight in 5145, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5145)
      0.125 = coord(1/8)
    
    Source
    International journal of human-computer studies. 44(1996) no.5, S.603-627
  15. Frické, M.: Logic and the organization of information (2012) 0.00
    0.0044716126 = product of:
      0.0357729 = sum of:
        0.0357729 = weight(_text_:computer in 1782) [ClassicSimilarity], result of:
          0.0357729 = score(doc=1782,freq=6.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.24477258 = fieldWeight in 1782, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1782)
      0.125 = coord(1/8)
    
    Abstract
    Logic and the Organization of Information closely examines the historical and contemporary methodologies used to catalogue information objects (books, ebooks, journals, articles, web pages, images, emails, podcasts and more) in the digital era. This book provides an in-depth technical background for digital librarianship, and covers a broad range of theoretical and practical topics including: classification theory, topic annotation, automatic clustering, generalized synonymy and concept indexing, distributed libraries, semantic web ontologies and Simple Knowledge Organization System (SKOS). It also analyzes the challenges facing today's information architects, and outlines a series of techniques for overcoming them. Logic and the Organization of Information is intended for practitioners and professionals working at a design level as a reference book for digital librarianship. Advanced-level students, researchers and academics studying information science, library science, digital libraries and computer science will also find this book invaluable.
    LCSH
    Computer science
    Subject
    Computer science
  16. Batty, D.: ¬The future of DDC in the perspective of current classification research (1989) 0.00
    0.004425749 = product of:
      0.035405993 = sum of:
        0.035405993 = weight(_text_:computer in 2070) [ClassicSimilarity], result of:
          0.035405993 = score(doc=2070,freq=2.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.24226204 = fieldWeight in 2070, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=2070)
      0.125 = coord(1/8)
    
    Source
    Classification theory in the computer age: conversations across the disciplines. Proc. from the Conf. 18.-19.11.1988, Albany, NY
  17. Maniez, J.: ¬Des classifications aux thesaurus : du bon usage des facettes (1999) 0.00
    0.004063672 = product of:
      0.032509375 = sum of:
        0.032509375 = product of:
          0.06501875 = sum of:
            0.06501875 = weight(_text_:22 in 6404) [ClassicSimilarity], result of:
              0.06501875 = score(doc=6404,freq=2.0), product of:
                0.1400417 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039991006 = queryNorm
                0.46428138 = fieldWeight in 6404, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6404)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    1. 8.1996 22:01:00
  18. Maniez, J.: ¬Du bon usage des facettes : des classifications aux thésaurus (1999) 0.00
    0.004063672 = product of:
      0.032509375 = sum of:
        0.032509375 = product of:
          0.06501875 = sum of:
            0.06501875 = weight(_text_:22 in 3773) [ClassicSimilarity], result of:
              0.06501875 = score(doc=3773,freq=2.0), product of:
                0.1400417 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039991006 = queryNorm
                0.46428138 = fieldWeight in 3773, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3773)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    1. 8.1996 22:01:00
  19. Foskett, D.J.: Systems theory and its relevance to documentary classification (2017) 0.00
    0.004063672 = product of:
      0.032509375 = sum of:
        0.032509375 = product of:
          0.06501875 = sum of:
            0.06501875 = weight(_text_:22 in 3176) [ClassicSimilarity], result of:
              0.06501875 = score(doc=3176,freq=2.0), product of:
                0.1400417 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039991006 = queryNorm
                0.46428138 = fieldWeight in 3176, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3176)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    6. 5.2017 18:46:22
  20. Parrochia, D.: Mathematical theory of classification (2018) 0.00
    0.0036881242 = product of:
      0.029504994 = sum of:
        0.029504994 = weight(_text_:computer in 4308) [ClassicSimilarity], result of:
          0.029504994 = score(doc=4308,freq=2.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.20188503 = fieldWeight in 4308, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4308)
      0.125 = coord(1/8)
    
    Abstract
    One of the main topics of scientific research, classification is the operation of distributing objects into classes or groups which are, in general, less numerous than the objects themselves. From Antiquity to the Classical Age, it has a long history in which philosophers (Aristotle) and natural scientists (Linnaeus) took a great part. But from the nineteenth century (with the growth of chemistry and information science) and the twentieth century (with the arrival of mathematical models and computer science), mathematics (especially the theory of orders and the theory of graphs or hypergraphs) has allowed us to compute all the possible partitions, chains of partitions, covers, hypergraphs or systems of classes we can construct on a domain. In spite of these advances, most classifications are still based on the evaluation of resemblances between the objects that constitute the empirical data. However, all these classifications remain, for technical and epistemological reasons we detail below, very unstable. We lack a real algebra of classifications, which could explain their properties and the relations existing between them. Though the aim of a general theory of classifications is surely a wishful thought, a recent conjecture gives hope that a metaclassification (or classification of all classification schemes) is possible.
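    [Editorial aside] Parrochia's point that order and graph theory let us compute all the possible partitions of a domain can be made concrete: the number of partitions of an n-element set is the Bell number (15 for four elements, 52 for five), so exhaustively enumerating "all classifications" is only feasible for toy domains. A minimal sketch, with an invented four-term domain:

      def partitions(items):
          # Yield every partition of the list 'items' as a list of blocks.
          if not items:
              yield []
              return
          first, rest = items[0], items[1:]
          for smaller in partitions(rest):
              # Put 'first' into each existing block in turn ...
              for i, block in enumerate(smaller):
                  yield smaller[:i] + [[first] + block] + smaller[i + 1:]
              # ... or let it open a block of its own.
              yield [[first]] + smaller

      domain = ["bird", "fish", "bat", "whale"]
      all_classifications = list(partitions(domain))
      print(len(all_classifications))  # 15, the Bell number B(4)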

Languages

  • e 42
  • f 3
  • chi 1
  • d 1

Types

  • a 42
  • el 3
  • m 3
  • s 2