Search (35 results, page 1 of 2)

  • × theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  • × year_i:[2010 TO 2020}
  1. Frické, M.: Logic and the organization of information (2012) 0.01
    0.011005791 = product of:
      0.051360358 = sum of:
        0.021139 = weight(_text_:web in 1782) [ClassicSimilarity], result of:
          0.021139 = score(doc=1782,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.21858418 = fieldWeight in 1782, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1782)
        0.01539293 = weight(_text_:information in 1782) [ClassicSimilarity], result of:
          0.01539293 = score(doc=1782,freq=38.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.29590017 = fieldWeight in 1782, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1782)
        0.014828428 = weight(_text_:retrieval in 1782) [ClassicSimilarity], result of:
          0.014828428 = score(doc=1782,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.16542503 = fieldWeight in 1782, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1782)
      0.21428572 = coord(3/14)
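
    The breakdown above is Lucene/Solr "explain" output for the ClassicSimilarity (classic TF-IDF) ranking model that this catalogue's search engine appears to use. As a purely illustrative aside - the constants are copied from the explain tree, but the code itself is not part of the original record - the first partial score and the final document score can be re-derived roughly as follows (Python):

      import math

      # Constants copied from the explain tree for the term "web" in doc 1782.
      freq, doc_freq, max_docs = 6.0, 4597, 44218
      query_norm, field_norm = 0.029633347, 0.02734375

      tf = math.sqrt(freq)                             # 2.4494898 = tf(freq=6.0)
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.2635105 = idf(docFreq=4597, maxDocs=44218)
      query_weight = idf * query_norm                  # 0.09670874 = queryWeight
      field_weight = tf * idf * field_norm             # 0.21858418 = fieldWeight in 1782
      partial_web = query_weight * field_weight        # 0.021139   = weight(_text_:web in 1782)

      # The document score multiplies the sum of the matching term weights by the
      # coordination factor coord(3/14): 3 of the 14 query clauses matched.
      score = (partial_web + 0.01539293 + 0.014828428) * (3 / 14)
      print(round(partial_web, 6), round(score, 9))    # ~0.021139  ~0.011005791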
    
    Abstract
    Logic and the Organization of Information closely examines the historical and contemporary methodologies used to catalogue information objects - books, ebooks, journals, articles, web pages, images, emails, podcasts and more - in the digital era. This book provides an in-depth technical background for digital librarianship, and covers a broad range of theoretical and practical topics including: classification theory, topic annotation, automatic clustering, generalized synonymy and concept indexing, distributed libraries, semantic web ontologies and Simple Knowledge Organization System (SKOS). It also analyzes the challenges facing today's information architects, and outlines a series of techniques for overcoming them. Logic and the Organization of Information is intended for practitioners and professionals working at a design level as a reference book for digital librarianship. Advanced-level students, researchers and academics studying information science, library science, digital libraries and computer science will also find this book invaluable.
    Footnote
    Review in: J. Doc. 70(2014) no.4: "Books on the organization of information and knowledge, aimed at a library/information audience, tend to fall into two clear categories. Most are practical and pragmatic, explaining the "how" as much as or more than the "why". Some are theoretical, in part or in whole, showing how the practice of classification, indexing, resource description and the like relates to philosophy, logic, and other foundational bases; the books by Langridge (1992) and by Svenonius (2000) are well-known examples of this latter kind. To this category certainly belongs a recent book by Martin Frické (2012). The author takes the reader for an extended tour through a variety of aspects of information organization, including classification and taxonomy, alphabetical vocabularies and indexing, cataloguing and FRBR, and aspects of the semantic web. The emphasis throughout is on showing how practice is, or should be, underpinned by formal structures; there is a particular emphasis on first-order predicate calculus. The advantages of a greater, and more explicit, use of symbolic logic are a recurring theme of the book. There is a particularly commendable historical dimension, often omitted in texts on this subject. It cannot be said that this book is entirely an easy read, although it is well written with a helpful index, and its arguments are generally well supported by clear and relevant examples. It is thorough and detailed, but thereby seems better geared to the needs of advanced students and researchers than to the practitioners who are suggested as a main market. For graduate students in library/information science and related disciplines, in particular, this will be a valuable resource. I would place it alongside Svenonius' book as the best insight into the theoretical "why" of information organization. It has evoked a good deal of interest, including a set of essay commentaries in the Journal of Information Science (Gilchrist et al., 2013). Introducing these, Alan Gilchrist rightly says that Frické deserves a salute for making explicit the fundamental relationship between the ancient discipline of logic and modern information organization. If information science is to continue to develop, and make a contribution to the organization of the information environments of the future, then this book sets the groundwork for the kind of studies which will be needed." (D. Bawden)
    LCSH
    Information Systems
    Information storage and retrieval systems
    Subject
    Information Systems
    Information storage and retrieval systems
  2. Hjoerland, B.: Theories of knowledge organization - theories of knowledge (2017) 0.01
    0.009859371 = product of:
      0.046010397 = sum of:
        0.024409214 = weight(_text_:web in 3494) [ClassicSimilarity], result of:
          0.024409214 = score(doc=3494,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25239927 = fieldWeight in 3494, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3494)
        0.012233062 = weight(_text_:information in 3494) [ClassicSimilarity], result of:
          0.012233062 = score(doc=3494,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23515764 = fieldWeight in 3494, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3494)
        0.009368123 = product of:
          0.028104367 = sum of:
            0.028104367 = weight(_text_:22 in 3494) [ClassicSimilarity], result of:
              0.028104367 = score(doc=3494,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.2708308 = fieldWeight in 3494, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3494)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Pages
    S.22-36
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  3. Fripp, D.: Using linked data to classify web documents (2010) 0.01
    0.007983028 = product of:
      0.05588119 = sum of:
        0.048818428 = weight(_text_:web in 4172) [ClassicSimilarity], result of:
          0.048818428 = score(doc=4172,freq=8.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.50479853 = fieldWeight in 4172, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4172)
        0.0070627616 = weight(_text_:information in 4172) [ClassicSimilarity], result of:
          0.0070627616 = score(doc=4172,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13576832 = fieldWeight in 4172, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4172)
      0.14285715 = coord(2/14)
    
    Abstract
    Purpose - The purpose of this paper is to find a relationship between traditional faceted classification schemes and semantic web document annotators, particularly in the linked data environment. Design/methodology/approach - A consideration of the conceptual ideas behind faceted classification and linked data architecture is made. Analysis of selected web documents is performed using Calais' Semantic Proxy to support the considerations. Findings - Technical language aside, the principles of both approaches are very similar. Modern classification techniques have the potential to automatically generate metadata to drive more precise information recall by including a semantic layer. Originality/value - Linked data have not been explicitly considered in this context before in the published literature.
    Theme
    Semantic Web
  4. Jacob, E.K.: Proposal for a classification of classifications built on Beghtol's distinction between "Naïve Classification" and "Professional Classification" (2010) 0.01
    0.0078193005 = product of:
      0.036490068 = sum of:
        0.0104854815 = weight(_text_:information in 2945) [ClassicSimilarity], result of:
          0.0104854815 = score(doc=2945,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.20156369 = fieldWeight in 2945, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2945)
        0.01797477 = weight(_text_:retrieval in 2945) [ClassicSimilarity], result of:
          0.01797477 = score(doc=2945,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.20052543 = fieldWeight in 2945, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2945)
        0.008029819 = product of:
          0.024089456 = sum of:
            0.024089456 = weight(_text_:22 in 2945) [ClassicSimilarity], result of:
              0.024089456 = score(doc=2945,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.23214069 = fieldWeight in 2945, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2945)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    Argues that Beghtol's (2003) use of the terms "naive classification" and "professional classification" is valid because they are nominal definitions and that the distinction between these two types of classification points up the need for researchers in knowledge organization to broaden their scope beyond traditional classification systems intended for information retrieval. Argues that work by Beghtol (2003), Kwasnik (1999) and Bailey (1994) offers direction for the development of a classification of classifications based on the pragmatic dimensions of extant classification systems. With reference to: Beghtol, C.: Naïve classification systems and the global information society. In: Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine. Würzburg: Ergon Verlag 2004. S.19-22. (Advances in knowledge organization; vol.9)
  5. Zhang, J.; Zeng, M.L.: ¬A new similarity measure for subject hierarchical structures (2014) 0.01
    0.0068057464 = product of:
      0.03176015 = sum of:
        0.010089659 = weight(_text_:information in 1778) [ClassicSimilarity], result of:
          0.010089659 = score(doc=1778,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.19395474 = fieldWeight in 1778, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1778)
        0.014978974 = weight(_text_:retrieval in 1778) [ClassicSimilarity], result of:
          0.014978974 = score(doc=1778,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.16710453 = fieldWeight in 1778, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1778)
        0.0066915164 = product of:
          0.020074548 = sum of:
            0.020074548 = weight(_text_:22 in 1778) [ClassicSimilarity], result of:
              0.020074548 = score(doc=1778,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.19345059 = fieldWeight in 1778, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1778)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    Purpose - The purpose of this paper is to introduce a new similarity method to gauge the differences between two subject hierarchical structures. Design/methodology/approach - In the proposed similarity measure, the nodes of the two hierarchical structures are each projected onto a two-dimensional space, and both structural similarity and subject similarity of nodes are considered in the similarity between the two hierarchical structures. The extent to which structural similarity influences the overall similarity can be controlled by adjusting a parameter. An experiment was conducted to evaluate the soundness of the measure. Eight experts whose research interests were information retrieval and information organization participated in the study. Results from the new measure were compared with results from the experts. Findings - The evaluation shows strong correlations between the results from the new method and the results from the experts, suggesting that the similarity method achieves satisfactory results. Practical implications - Hierarchical structures found in subject directories, taxonomies, classification systems, and other classificatory structures play an extremely important role in information organization and information representation. Measuring the similarity between two subject hierarchical structures allows an accurate overarching understanding of the degree to which the two hierarchical structures are similar. Originality/value - Both structural similarity and subject similarity of nodes were considered in the proposed similarity method, and the extent to which structural similarity influences the overall similarity can be adjusted. In addition, a new evaluation method for hierarchical structure similarity was presented.
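
    The abstract does not give the formula itself; purely as a hedged, generic illustration of a parameter-controlled blend of structural and subject similarity (not the authors' actual measure), such a combination could look like this in Python:

      def combined_similarity(structural_sim: float, subject_sim: float, alpha: float = 0.5) -> float:
          """Blend structural and subject similarity of two hierarchical structures.
          alpha controls how strongly structural similarity influences the result
          (alpha=0: subject only, alpha=1: structure only). Generic sketch only."""
          assert 0.0 <= alpha <= 1.0
          return alpha * structural_sim + (1.0 - alpha) * subject_sim

      print(combined_similarity(0.8, 0.4, alpha=0.3))  # ~0.52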
    Date
    8. 4.2015 16:22:13
  6. Dousa, T.M.: Categories and the architectonics of system in Julius Otto Kaiser's method of systematic indexing (2014) 0.01
    0.0057247113 = product of:
      0.02671532 = sum of:
        0.0050448296 = weight(_text_:information in 1418) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=1418,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 1418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1418)
        0.014978974 = weight(_text_:retrieval in 1418) [ClassicSimilarity], result of:
          0.014978974 = score(doc=1418,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.16710453 = fieldWeight in 1418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1418)
        0.0066915164 = product of:
          0.020074548 = sum of:
            0.020074548 = weight(_text_:22 in 1418) [ClassicSimilarity], result of:
              0.020074548 = score(doc=1418,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.19345059 = fieldWeight in 1418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1418)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    Categories, or concepts of high generality representing the most basic kinds of entities in the world, have long been understood to be a fundamental element in the construction of knowledge organization systems (KOSs), particularly faceted ones. Commentators on facet analysis have tended to foreground the role of categories in the structuring of controlled vocabularies and the construction of compound index terms, and the implications of this for subject representation and information retrieval. Less attention has been paid to the variety of ways in which categories can shape the overall architectonic framework of a KOS. This case study explores the range of functions that categories took on in structuring various aspects of an early analytico-synthetic KOS, Julius Otto Kaiser's method of Systematic Indexing (SI). Within SI, categories not only functioned as mechanisms to partition an index vocabulary into smaller groupings of terms and as elements in the construction of compound index terms, but also served as means of defining the units of indexing, or index items, incorporated into an index; of determining the organization of card index files and the articulation of the guide card system serving as a navigational aid thereto; and of setting structural constraints on the establishment of cross-references between terms. In all these ways, Kaiser's system of categories contributed to the general systematicity of SI.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  7. Putkey, T.: Using SKOS to express faceted classification on the Semantic Web (2011) 0.01
    0.0052709323 = product of:
      0.036896523 = sum of:
        0.03118895 = weight(_text_:web in 311) [ClassicSimilarity], result of:
          0.03118895 = score(doc=311,freq=10.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.32250395 = fieldWeight in 311, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=311)
        0.005707573 = weight(_text_:information in 311) [ClassicSimilarity], result of:
          0.005707573 = score(doc=311,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.10971737 = fieldWeight in 311, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=311)
      0.14285715 = coord(2/14)
    
    Abstract
    This paper looks at Simple Knowledge Organization System (SKOS) to investigate how a faceted classification can be expressed in RDF and shared on the Semantic Web. Statement of the problem - Faceted classification outlines facets as well as subfacets and facet values. Hierarchical relationships and associative relationships are established in a faceted classification. RDF is used to describe how a specific URI has a relationship to a facet value. Not only does RDF decompose "information into pieces," but by incorporating facet values RDF also gives the URI the hierarchical and associative relationships expressed in the faceted classification. Combining faceted classification and RDF creates more knowledge than if the two stood alone. An application understands the subject-predicate-object relationship in RDF and can display hierarchical and associative relationships based on the object (facet) value. This paper continues to investigate whether the above idea is indeed useful, used, and applicable. If so, how can a faceted classification be expressed in RDF? What would this expression look like? Literature review - This paper used the same articles as the paper A Survey of Faceted Classification: History, Uses, Drawbacks and the Semantic Web (Putkey, 2010). In that paper, appropriate resources were discovered by searching various databases for "faceted classification" and "faceted search," either in the descriptor or title fields. Citations were also followed to find more articles, and the Internet was searched for the same terms. To retrieve the documents about RDF, searches combined "faceted classification" and "RDF," looking for these words in either the descriptor or title.
    Methodology - Based on information from research papers, further research was done on SKOS, on examples of SKOS and of shared faceted classifications on the Semantic Web, and on how to express SKOS in RDF/XML. Once confident with these ideas, the author used a faceted taxonomy created in a Vocabulary Design class and encoded it using SKOS. Instead of writing the RDF by hand in a program such as Notepad, a thesaurus tool was used to create the taxonomy according to SKOS standards and then export it in RDF/XML format. These processes and tools are then analyzed. Results - The initial statement of the problem was simply an extension of the survey paper done earlier in this class. To continue the research, more investigation was done into SKOS - a standard for expressing thesauri, taxonomies and faceted classifications so that they can be shared on the Semantic Web.
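
    To make the encoding step concrete, the sketch below (Python with the rdflib library; the facet and its values are invented for illustration and do not come from the paper) shows how one facet value with a hierarchical and an associative relationship might be expressed in SKOS and exported as RDF/XML, roughly as the thesaurus tool described above would do:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/facets/")  # hypothetical namespace
      g = Graph()
      g.bind("skos", SKOS)
      g.bind("ex", EX)

      # Treat a facet as a concept scheme and a facet value as a SKOS concept.
      g.add((EX.materialFacet, RDF.type, SKOS.ConceptScheme))
      g.add((EX.oak, RDF.type, SKOS.Concept))
      g.add((EX.oak, SKOS.prefLabel, Literal("oak", lang="en")))
      g.add((EX.oak, SKOS.inScheme, EX.materialFacet))
      g.add((EX.oak, SKOS.broader, EX.wood))       # hierarchical relationship
      g.add((EX.oak, SKOS.related, EX.furniture))  # associative relationship

      print(g.serialize(format="xml"))  # RDF/XML, ready to be shared as linked data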
  8. Zarrad, R.; Doggaz, N.; Zagrouba, E.: Wikipedia HTML structure analysis for ontology construction (2018) 0.00
    0.003739008 = product of:
      0.026173055 = sum of:
        0.017435152 = weight(_text_:web in 4302) [ClassicSimilarity], result of:
          0.017435152 = score(doc=4302,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 4302, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4302)
        0.008737902 = weight(_text_:information in 4302) [ClassicSimilarity], result of:
          0.008737902 = score(doc=4302,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16796975 = fieldWeight in 4302, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4302)
      0.14285715 = coord(2/14)
    
    Abstract
    Previously, the main problem of information extraction was to gather enough data. Today, the challenge is not to collect data but to interpret and represent them in order to deduce information. Ontologies are considered suitable solutions for organizing information. The classic methods for ontology construction from textual documents rely on natural language analysis and are generally based on statistical or linguistic approaches. However, these approaches do not consider the document structure, which provides additional knowledge. In fact, the structural organization of documents also conveys meaning. In this context, new approaches focus on document structure analysis to extract knowledge. This paper describes a methodology for ontology construction from web data, and especially from Wikipedia articles. It focuses mainly on document structure in order to extract the main concepts and their relations. The proposed methods extract not only taxonomic and non-taxonomic relations but also give the labels describing non-taxonomic relations. The extraction of non-taxonomic relations is established by analyzing the title hierarchy in each document. Pattern matching is also applied in order to extract known semantic relations. We also propose to apply a refinement to the extracted relations in order to keep only those that are relevant. The refinement process is performed by applying the transitive property, checking the nature of the relations, and analyzing taxonomic relations having inverted arguments. Experiments have been performed on French Wikipedia articles related to the medical field. Ontology evaluation is performed by comparing the resulting ontology to gold standards.
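
    As a rough sketch of the title-hierarchy idea only (not the authors' actual pipeline; the example HTML is invented), the nested section headings of a Wikipedia-style page can be read as candidate relations between the concepts they name:

      from bs4 import BeautifulSoup  # pip install beautifulsoup4

      def heading_relations(html: str):
          """Return (parent, child) pairs from the h2/h3 heading hierarchy."""
          soup = BeautifulSoup(html, "html.parser")
          relations, current_h2 = [], None
          for tag in soup.find_all(["h2", "h3"]):
              title = tag.get_text(strip=True)
              if tag.name == "h2":
                  current_h2 = title
              elif current_h2 is not None:
                  relations.append((current_h2, title))  # h3 nested under the last h2
          return relations

      html = "<h2>Treatment</h2><h3>Antibiotics</h3><h3>Surgery</h3><h2>History</h2>"
      print(heading_relations(html))  # [('Treatment', 'Antibiotics'), ('Treatment', 'Surgery')]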
  9. Keshet, Y.: Classification systems in the light of sociology of knowledge (2011) 0.00
    0.0035099457 = product of:
      0.02456962 = sum of:
        0.017435152 = weight(_text_:web in 4493) [ClassicSimilarity], result of:
          0.017435152 = score(doc=4493,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 4493, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4493)
        0.0071344664 = weight(_text_:information in 4493) [ClassicSimilarity], result of:
          0.0071344664 = score(doc=4493,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13714671 = fieldWeight in 4493, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4493)
      0.14285715 = coord(2/14)
    
    Abstract
    Purpose - Classification is an important process in making sense of the world, and has a pronounced social dimension. This paper aims to compare folksonomy, a new social classification system currently being developed on the web, with conventional taxonomy in the light of theoretical sociological and anthropological approaches. The co-existence of these two types of classification system raises the questions: Will and should taxonomies be hybridized with folksonomies? What can each of these systems contribute to information-searching processes, and how can the sociology of knowledge provide an answer to these questions? This paper also aims to address these issues. Design/methodology/approach - This paper is situated at the meeting point of the sociology of knowledge, epistemology and information science and aims at examining systems of classification in the light of both classical theory and current late-modern sociological and anthropological approaches. Findings - Using theoretical approaches current in the sociology of science and knowledge, the paper envisages two divergent possible outcomes. Originality/value - While concentrating on classification systems, this paper addresses the more general social issue of what we know and how it is known. The concept of hybrid knowledge is suggested in order to illuminate the epistemological basis of late-modern knowledge, which is constructed by hybridizing contradictory modern knowledge categories, such as the subjective with the objective and the social with the natural. Integrating tree-like taxonomies with folksonomies - or, in other words, generating a naturalized structural order of objective relations with social, subjective classification systems - can create a vast range of hybrid knowledge.
  10. Gnoli, C.: Metadata about what? : distinguishing between ontic, epistemic, and documental dimensions in knowledge organization (2012) 0.00
    0.0035099457 = product of:
      0.02456962 = sum of:
        0.017435152 = weight(_text_:web in 323) [ClassicSimilarity], result of:
          0.017435152 = score(doc=323,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=323)
        0.0071344664 = weight(_text_:information in 323) [ClassicSimilarity], result of:
          0.0071344664 = score(doc=323,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13714671 = fieldWeight in 323, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=323)
      0.14285715 = coord(2/14)
    
    Abstract
    The spread of many new media and formats is changing the scenario faced by knowledge organizers: as printed monographs are no longer the only standard form of knowledge carrier, the traditional kind of knowledge organization (KO) systems based on academic disciplines is put into question. A sounder foundation can be provided by an analysis of the different dimensions concurring to form the content of any knowledge item - what Brian Vickery described as the steps "from the world to the classifier." The ultimate referents of documents are the phenomena of the real world, which can be ordered by ontology, the study of what exists. Phenomena coexist in subjects with the perspectives by which they are considered, pertaining to epistemology, and with the formal features of knowledge carriers, adding a further, pragmatic layer. All these dimensions can be accounted for in metadata, but this is often done in mixed ways, making indexes less rigorous and interoperable. For example, while facet analysis was originally developed for subject indexing, many "faceted" interfaces today mix subject facets with form facets, and schemes presented as "ontologies" for the "semantic Web" also code for non-semantic information. In bibliographic classifications, phenomena are often confused with the disciplines dealing with them, the latter being assumed to be the most useful starting point, for users will have either one or another perspective. A general citation order of dimensions - phenomena, perspective, carrier - is recommended, helping to concentrate the most relevant information at the beginning of headings.
  11. Tennis, J.T.: Never facets alone : the evolving thought and persistent problems in Ranganathan's theories of classification (2017) 0.00
    0.003211426 = product of:
      0.022479981 = sum of:
        0.017435152 = weight(_text_:web in 5800) [ClassicSimilarity], result of:
          0.017435152 = score(doc=5800,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 5800, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5800)
        0.0050448296 = weight(_text_:information in 5800) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=5800,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 5800, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5800)
      0.14285715 = coord(2/14)
    
    Abstract
    Shiyali Ramamrita Ranganathan's theory of classification spans a number of works over a number of decades. And while he was devoted to solving many problems in the practice of librarianship, and is known as the father of library science in India (Garfield, 1984), his work in classification revolves around one central concern. His classification research addressed the problems that arose from introducing new ideas into a scheme for classification while maintaining a meaningful hierarchical and systematically arranged order of classes. This is because hierarchical and systematically arranged classes are the defining characteristic of useful classification. To lose this order through the addition of new classes is to introduce confusion, if not chaos, and to move toward a useless classification - or at least one that requires complete revision. In the following chapter, I outline the stages, and the elements of those stages, in Ranganathan's thought on classification from 1926-1972, as well as posthumous work that continues his agenda. Facets figure prominently in all of these stages, but for Ranganathan to achieve his goal he must continually add to this central feature of his theory of classification. I will close this chapter with an outline of persistent problems that represent research fronts for the field. Chief among these are what to do about scheme change and the open question of the rigor of information modeling in light of semantic web developments.
  12. Blake, J.: Some issues in the classification of zoology (2011) 0.00
    0.0031590632 = product of:
      0.02211344 = sum of:
        0.0071344664 = weight(_text_:information in 4845) [ClassicSimilarity], result of:
          0.0071344664 = score(doc=4845,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13714671 = fieldWeight in 4845, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4845)
        0.014978974 = weight(_text_:retrieval in 4845) [ClassicSimilarity], result of:
          0.014978974 = score(doc=4845,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.16710453 = fieldWeight in 4845, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4845)
      0.14285715 = coord(2/14)
    
    Abstract
    This paper identifies and discusses features of the classification of mammals that are relevant to the bibliographic classification of the subject. The tendency of zoological classifications to change, the differing sizes of groups of species, the use zoologists make of groupings other than taxa, and the links in zoology between classification and nomenclature are identified as key themes the bibliographic classificationist needs to be aware of. The impact of cladistics, a novel classificatory method and philosophy adopted by zoologists in the last few decades, is identified as the defining feature of the current, rather turbulent, state of zoological classification. However, because zoologists still employ some non-cladistic classifications, because cladistic classifications are in some ways unsuited to optimal information storage and retrieval, and because some of their consequences for zoological classification are as yet unknown, bibliographic classifications cannot be modelled entirely on them.
    Content
    This paper is based on a thesis of the same title, completed as part of an MA in Library and Information Studies at University College London in 2009, and available at http://62.32.98.6/elibsql2uk_Z10300UK_Documents/Catalogued_PDFs/Some_issues_in_the_classification_of_zoology.PDF. Thanks are due to Vanda Broughton, who supervised the MA thesis; and to Diane Tough of the Natural History Museum, London and Ann Sylph of the Zoological Society of London, who both provided valuable insights into the classification of zoological literature.
  13. Gnoli, C.: Classifying phenomena : part 4: themes and rhemes (2018) 0.00
    0.002011945 = product of:
      0.014083615 = sum of:
        0.0060537956 = weight(_text_:information in 4152) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=4152,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 4152, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4152)
        0.008029819 = product of:
          0.024089456 = sum of:
            0.024089456 = weight(_text_:22 in 4152) [ClassicSimilarity], result of:
              0.024089456 = score(doc=4152,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.23214069 = fieldWeight in 4152, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4152)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    This is the fourth in a series of papers on classification based on phenomena instead of disciplines. Together with the types, levels and facets that have been discussed in the previous parts, themes and rhemes are further structural components of such a classification. In a statement or in a longer document, a base theme and several particular themes can be identified. The base theme should be cited first in a classmark, followed by the particular themes, each with its own facets. In some cases, rhemes can also be expressed, that is, new information provided about a theme, converting an abstract statement ("wolves, affected by cervids") into a claim that something actually occurs ("wolves are affected by cervids"). In the Integrative Levels Classification, rhemes can be expressed by special deictic classes, including those for actual specimens, anaphoras, unknown values, conjunctions and spans, the whole universe, anthropocentric favoured classes, and favoured host classes. These features, together with rules for pronunciation, make a classification of phenomena a true language that may be suitable for many uses.
    Date
    17. 2.2018 18:22:25
  14. Dimensions of knowledge : facets for knowledge organization (2017) 0.00
    0.001245368 = product of:
      0.017435152 = sum of:
        0.017435152 = weight(_text_:web in 4154) [ClassicSimilarity], result of:
          0.017435152 = score(doc=4154,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 4154, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4154)
      0.071428575 = coord(1/14)
    
    Abstract
    The identification and contextual definition of concepts is the core of knowledge organization. The full expression of comprehension is accomplished through the use of an extension device called the facet. A facet is a category of dimensional characteristics that cross the hierarchical array of concepts to provide extension, or breadth, to the contexts in which they are discovered or expressed in knowledge organization systems. The use of the facet in knowledge organization has a rich history arising in the mid-nineteenth century. As it has matured through more than a century of application, the notion of the facet in knowledge organization has taken on a variety of meanings, from that of simple categories used in web search engines to the more sophisticated idea of intersecting dimensions of knowledge. This book describes the state of the art of the understanding of facets in knowledge organization today.
  15. Foskett, D.J.: Systems theory and its relevance to documentary classification (2017) 0.00
    0.001147117 = product of:
      0.016059637 = sum of:
        0.016059637 = product of:
          0.04817891 = sum of:
            0.04817891 = weight(_text_:22 in 3176) [ClassicSimilarity], result of:
              0.04817891 = score(doc=3176,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.46428138 = fieldWeight in 3176, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3176)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Date
    6. 5.2017 18:46:22
  16. Smiraglia, R.P.; Heuvel, C. van den: Classifications and concepts : towards an elementary theory of knowledge interaction (2013) 0.00
    0.0010699268 = product of:
      0.014978974 = sum of:
        0.014978974 = weight(_text_:retrieval in 1758) [ClassicSimilarity], result of:
          0.014978974 = score(doc=1758,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.16710453 = fieldWeight in 1758, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1758)
      0.071428575 = coord(1/14)
    
    Abstract
    Purpose - This paper seeks to outline the central role of concepts in the knowledge universe, and the intertwining roles of works, instantiations, and documents. In particular the authors are interested in ontological and epistemological aspects of concepts and in the question of the extent to which there is a need for natural languages to link concepts to create meaningful patterns. Design/methodology/approach - The authors describe the quest for the smallest elements of knowledge from a historical perspective. They focus on the metaphor of the universe of knowledge and its impact on the classification and retrieval of concepts. They outline the major components of an elementary theory of knowledge interaction. Findings - The paper outlines the major components of an elementary theory of knowledge interaction that is based on the structure of knowledge rather than on the content of documents, in which semantics becomes not a matter of synonymous concepts, but rather of coordinating knowledge structures. The evidence is derived from existing empirical research. Originality/value - The paper shifts the bases for knowledge organization from a search for a universal order to an understanding of a universal structure within which many context-dependent orders are possible.
  17. Adler, M.; Harper, L.M.: Race and ethnicity in classification systems : teaching knowledge organization from a social justice perspective (2018) 0.00
    9.986174E-4 = product of:
      0.013980643 = sum of:
        0.013980643 = weight(_text_:information in 5518) [ClassicSimilarity], result of:
          0.013980643 = score(doc=5518,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.2687516 = fieldWeight in 5518, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=5518)
      0.071428575 = coord(1/14)
    
    Abstract
    Classification and the organization of information are directly connected to issues surrounding social justice, diversity, and inclusion. This paper is written from the standpoint that political and epistemological aspects of knowledge organization are fundamental to research and practice and suggests ways to integrate social justice and diversity issues into courses on the organization of information.
    Content
    Contribution to a special issue: 'Race and Ethnicity in Library and Information Science: An Update'.
  18. Szostak, R.: ¬A pluralistic approach to the philosophy of classification : a case for "public knowledge" (2015) 0.00
    8.737902E-4 = product of:
      0.012233062 = sum of:
        0.012233062 = weight(_text_:information in 5541) [ClassicSimilarity], result of:
          0.012233062 = score(doc=5541,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23515764 = fieldWeight in 5541, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5541)
      0.071428575 = coord(1/14)
    
    Abstract
    Any classification system should be evaluated with respect to a variety of philosophical and practical concerns. This paper explores several distinct issues: the nature of a work, the value of a statement, the contribution of information science to philosophy, the nature of hierarchy, ethical evaluation, pre- versus postcoordination, the lived experience of librarians, and formalization versus natural language. It evaluates a particular approach to classification in terms of each of these but draws general lessons for philosophical evaluation. That approach to classification emphasizes the free combination of basic concepts representing both real things in the world and the relationships among these; works are also classified in terms of theories, methods, and perspectives applied.
    Content
    Contribution to a special issue: 'Exploring Philosophies of Information'.
    Theme
    Information
  19. Tennis, J.T.: ¬The strange case of eugenics : a subject's ontogeny in a long-lived classification scheme and the question of collocative integrity (2012) 0.00
    8.153676E-4 = product of:
      0.011415146 = sum of:
        0.011415146 = weight(_text_:information in 275) [ClassicSimilarity], result of:
          0.011415146 = score(doc=275,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.21943474 = fieldWeight in 275, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=275)
      0.071428575 = coord(1/14)
    
    Abstract
    This article introduces the problem of collocative integrity present in long-lived classification schemes that undergo several changes. A case study of the subject "eugenics" in the Dewey Decimal Classification is presented to illustrate this phenomenon. Eugenics is strange because of the kinds of changes it undergoes. The article closes with a discussion of subject ontogeny as the name for this phenomenon and describes implications for information searching and browsing.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.7, S.1350-1359
  20. Lorenz, B.: Zur Theorie und Terminologie der bibliothekarischen Klassifikation (2018) 0.00
    7.6474476E-4 = product of:
      0.010706427 = sum of:
        0.010706427 = product of:
          0.032119278 = sum of:
            0.032119278 = weight(_text_:22 in 4339) [ClassicSimilarity], result of:
              0.032119278 = score(doc=4339,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.30952093 = fieldWeight in 4339, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4339)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Pages
    S.1-22