Search (65 results, page 1 of 4)

  • theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  1. Denton, W.: Putting facets on the Web : an annotated bibliography (2003) 0.02
    0.02268027 = product of:
      0.08505101 = sum of:
        0.0069713015 = product of:
          0.013942603 = sum of:
            0.013942603 = weight(_text_:online in 2467) [ClassicSimilarity], result of:
              0.013942603 = score(doc=2467,freq=6.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.14519453 = fieldWeight in 2467, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2467)
          0.5 = coord(1/2)
        0.01945211 = weight(_text_:software in 2467) [ClassicSimilarity], result of:
          0.01945211 = score(doc=2467,freq=4.0), product of:
            0.12552431 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031640913 = queryNorm
            0.15496688 = fieldWeight in 2467, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
        0.03224443 = weight(_text_:web in 2467) [ClassicSimilarity], result of:
          0.03224443 = score(doc=2467,freq=24.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.3122631 = fieldWeight in 2467, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
        0.026383169 = weight(_text_:site in 2467) [ClassicSimilarity], result of:
          0.026383169 = score(doc=2467,freq=2.0), product of:
            0.1738463 = queryWeight, product of:
              5.494352 = idf(docFreq=493, maxDocs=44218)
              0.031640913 = queryNorm
            0.15176146 = fieldWeight in 2467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.494352 = idf(docFreq=493, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
      0.26666668 = coord(4/15)
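     The indented tree above is Lucene's ClassicSimilarity "explain" output for this hit: each matching term contributes a fieldWeight (tf(freq) x idf x fieldNorm) multiplied by a queryWeight (idf x queryNorm), the contributions are summed, and the sum is scaled by the coord factors. Purely as a reading aid, the sketch below recomputes the displayed score from the numbers printed in the tree; the variable and function names are mine, the constants are the ones shown above.

```python
import math

# Constants copied from the explanation tree for result 1 (doc 2467).
QUERY_NORM = 0.031640913
FIELD_NORM = 0.01953125

def term_score(freq, idf):
    """fieldWeight (tf * idf * fieldNorm) times queryWeight (idf * queryNorm)."""
    tf = math.sqrt(freq)                    # e.g. 2.4494898 for freq=6.0
    field_weight = tf * idf * FIELD_NORM    # e.g. 0.14519453 for "online"
    query_weight = idf * QUERY_NORM         # e.g. 0.096027054 for "online"
    return field_weight * query_weight

online   = term_score(6.0,  3.0349014) * 0.5   # inner coord(1/2) on the nested clause
software = term_score(4.0,  3.9671519)
web      = term_score(24.0, 3.2635105)
site     = term_score(2.0,  5.494352)

score = (online + software + web + site) * (4 / 15)   # outer coord(4/15)
print(round(score, 8))                                 # ~0.02268027, the score shown above
```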
    
    Abstract
    This is a classified, annotated bibliography about how to design faceted classification systems and make them usable on the World Wide Web. It is the first of three works I will be doing. The second, based on the material here and elsewhere, will discuss how to actually make the faceted system and put it online. The third will be a report of how I did just that, what worked, what didn't, and what I learned. Almost every article or book listed here begins with an explanation of what a faceted classification system is, so I won't (but see Steckel in Background below if you don't already know). They all agree that faceted systems are very appropriate for the web. Even pre-web articles (such as Duncan's in Background, below) assert that hypertext and facets will go together well. Combined, it is possible to take a set of documents and classify them or apply subject headings to describe what they are about, then build a navigational structure so that any user, no matter how he or she approaches the material, no matter what his or her goals, can move and search in a way that makes sense to them, but still get to the same useful results as someone else following a different path to the same goal. There is no one way that everyone will always use when looking for information. The more flexible the organization of the information, the more accommodating it is. Facets are more flexible for hypertext browsing than any enumerative or hierarchical system.
     Consider movie listings in newspapers. Most Canadian newspapers list movie showtimes in two large blocks, for the two major theatre chains. The listings are ordered by region (in large cities), then theatre, then movie, and finally by showtime. Anyone wondering where and when a particular movie is playing must scan the complete listings. Determining what movies are playing in the next half hour is very difficult. When movie listings went onto the web, most sites used a simple faceted organization, always with movie name and theatre, and perhaps with region or neighbourhood (thankfully, theatre chains were left out). They make it easy to pick a theatre and see what movies are playing there, or to pick a movie and see what theatres are showing it. To complete the system, the sites should allow users to browse by neighbourhood and showtime, and to order the results in any way they desire. Thus people could easily find answers to such questions as, "Where is the new James Bond movie playing?" "What's showing at the Roxy tonight?" "I'm going to be out in Little Finland this afternoon with three hours to kill starting at 2 ... is anything interesting playing?" A hypertext, faceted classification system makes more useful information more easily available to the user. Reading the books and articles below in chronological order will show a certain progression: suggestions that faceting and hypertext might work well, confidence that facets would work well if only someone would make such a system, and finally the beginning of serious work on actually designing, building, and testing faceted web sites. There is a solid basis for how to make faceted classifications (see Vickery in Recommended), but their application online is just starting. Work on XFML (see Van Dijck's work in Recommended), the Exchangeable Faceted Metadata Language, will make this easier. If it follows previous patterns, parts of the Internet community will embrace the idea and make open source software available for others to reuse. It will be particularly beneficial if professionals in both information studies and computer science can work together to build working systems, standards, and code. Each can benefit from the other's expertise in what can be a very complicated and technical area. One particularly nice thing about this area of research is that people interested in combining facets and the web often have web sites where they post their writings.
     This bibliography is not meant to be exhaustive, but unfortunately it is not as complete as I wanted. Some books and articles are not included, but they may be used in my future work. (These include two books and one article by B.C. Vickery: Faceted Classification Schemes (New Brunswick, NJ: Rutgers, 1966), Classification and Indexing in Science, 3rd ed. (London: Butterworths, 1975), and "Knowledge Representation: A Brief Review" (Journal of Documentation 42 no. 3 (September 1986): 145-159); and A.C. Foskett's "The Future of Faceted Classification" in The Future of Classification, edited by Rita Marcella and Arthur Maltby (Aldershot, England: Gower, 2000): 69-80). Nevertheless, I hope this bibliography will be useful both to those new to faceted hypertext systems and to those familiar with them. Some very basic resources are listed, as well as some very advanced ones. Some example web sites are mentioned, but there is no detailed technical discussion of any software. The user interface to any web site is extremely important, and this is briefly mentioned in two or three places (for example the discussion of lawforwa.org (see Example Web Sites)). The larger question of how to display information graphically and with hypertext is outside the scope of this bibliography. There are five sections: Recommended, Background, Not Relevant, Example Web Sites, and Mailing Lists. Background material is either introductory, advanced, or of peripheral interest, and can be read after the Recommended resources if the reader wants to know more. The Not Relevant category contains articles that may appear in bibliographies but are not relevant for my purposes.
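     A brief, hypothetical illustration of the movie-listing example in the abstract above (not taken from any of the listed works): each record is described on every facet the abstract mentions, so the same data can be filtered by movie, theatre, neighbourhood, showtime, or any combination of them.

```python
# Invented sample data; every record carries a value for each facet.
listings = [
    {"movie": "New Bond Film", "theatre": "Roxy",  "neighbourhood": "Little Finland", "showtime": "14:00"},
    {"movie": "New Bond Film", "theatre": "Odeon", "neighbourhood": "Downtown",       "showtime": "19:30"},
    {"movie": "Local Drama",   "theatre": "Roxy",  "neighbourhood": "Little Finland", "showtime": "21:15"},
]

def browse(records, **facet_values):
    """Return the records matching every supplied facet value, in any combination."""
    return [r for r in records
            if all(r[facet] == value for facet, value in facet_values.items())]

print(browse(listings, movie="New Bond Film"))                              # where is it playing?
print(browse(listings, theatre="Roxy"))                                     # what's showing at the Roxy?
print(browse(listings, neighbourhood="Little Finland", showtime="14:00"))   # nearby this afternoon?
```

     Because no single facet order is privileged, each of the abstract's example questions is answered by the same structure, which is exactly the flexibility argued for above.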
    Theme
    Klassifikationssysteme im Online-Retrieval
  2. Giunchiglia, F.; Zaihrayeu, I.; Farazi, F.: Converting classifications into OWL ontologies (2009) 0.01
    0.0095606195 = product of:
      0.07170464 = sum of:
        0.033011325 = weight(_text_:software in 4690) [ClassicSimilarity], result of:
          0.033011325 = score(doc=4690,freq=2.0), product of:
            0.12552431 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031640913 = queryNorm
            0.2629875 = fieldWeight in 4690, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=4690)
        0.038693316 = weight(_text_:web in 4690) [ClassicSimilarity], result of:
          0.038693316 = score(doc=4690,freq=6.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.37471575 = fieldWeight in 4690, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4690)
      0.13333334 = coord(2/15)
    
    Abstract
     Classification schemes, such as the DMoZ web directory, provide a convenient and intuitive way for humans to access classified contents. While easy for humans to deal with, classification schemes remain hard for automated software agents to reason about. Among other things, this difficulty stems from the ambiguous nature of the natural language used to describe classification categories. In this paper we describe how classification schemes can be converted into OWL ontologies, thus enabling reasoning on them by Semantic Web applications. The proposed solution is based on a two-phase approach in which category names are first encoded in a concept language and then, together with the structure of the classification scheme, are converted into an OWL ontology. We demonstrate the practical applicability of our approach by showing how the results of reasoning on these OWL ontologies can help improve the organization and use of web directories.
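     A rough sketch of the general idea, hedged: this is not the authors' pipeline, only a minimal Python/rdflib illustration of how the edges of a toy classification scheme could be rendered as an OWL class hierarchy; the category labels and namespace are invented.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import OWL, RDF, RDFS

# Toy classification scheme: child category -> parent category (invented labels).
scheme = {
    "Computers/Software/Databases": "Computers/Software",
    "Computers/Software":           "Computers",
    "Computers":                    None,
}

EX = Namespace("http://example.org/classification#")
g = Graph()
g.bind("owl", OWL)

def to_uri(label):
    return EX[label.replace("/", "_").replace(" ", "_")]

for category, parent in scheme.items():
    cls = to_uri(category)
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.label, Literal(category.split("/")[-1])))
    if parent is not None:
        # Simplified second phase: an edge in the scheme becomes a subclass axiom.
        g.add((cls, RDFS.subClassOf, to_uri(parent)))

print(g.serialize(format="turtle"))
```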
  3. Kwasnik, B.H.: ¬The role of classification in knowledge representation (1999) 0.01
    0.008972007 = product of:
      0.04486003 = sum of:
        0.009659718 = product of:
          0.019319436 = sum of:
            0.019319436 = weight(_text_:online in 2464) [ClassicSimilarity], result of:
              0.019319436 = score(doc=2464,freq=2.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.20118743 = fieldWeight in 2464, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2464)
          0.5 = coord(1/2)
        0.022339594 = weight(_text_:web in 2464) [ClassicSimilarity], result of:
          0.022339594 = score(doc=2464,freq=2.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.21634221 = fieldWeight in 2464, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2464)
        0.01286072 = product of:
          0.02572144 = sum of:
            0.02572144 = weight(_text_:22 in 2464) [ClassicSimilarity], result of:
              0.02572144 = score(doc=2464,freq=2.0), product of:
                0.110801086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031640913 = queryNorm
                0.23214069 = fieldWeight in 2464, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2464)
          0.5 = coord(1/2)
      0.2 = coord(3/15)
    
    Abstract
     A fascinating, broad-ranging article about classification, knowledge, and how they relate. Hierarchies, trees, paradigms (a two-dimensional classification that can look something like a spreadsheet), and facets are covered, with descriptions of how they work and how they can be used for knowledge discovery and creation. Kwasnik outlines how to make a faceted classification: choose facets, develop facets, analyze entities using the facets, and make a citation order. Facets are useful for many reasons: they do not require complete knowledge of the entire body of material; they are hospitable, flexible, and expressive; they do not require a rigid background theory; they can mix theoretical structures and models; and they allow users to view things from many perspectives. Facets do have faults: it can be hard to pick the right ones; it is hard to show relations between them; and it is difficult to visualize them. The coverage of the other methods is equally thorough and there is much to consider for anyone putting a classification on the web.
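     The four steps attributed to Kwasnik above (choose facets, develop facets, analyze entities using the facets, make a citation order) can be illustrated with a deliberately tiny sketch; the facets, codes and example entity below are invented, not drawn from the article.

```python
# Steps 1-2: chosen facets and their (abbreviated) values -- all invented.
facets = {
    "Material": {"wood": "W", "steel": "S"},
    "Process":  {"carving": "C", "welding": "L"},
    "Place":    {"Canada": "N", "Japan": "J"},
}

# Step 4: the citation order fixes the sequence of facets in the classmark.
citation_order = ["Material", "Process", "Place"]

def classify(entity):
    """Step 3: analyze an entity facet by facet and build its classmark."""
    return "".join(facets[f][entity[f]] for f in citation_order if f in entity)

print(classify({"Material": "wood", "Process": "carving", "Place": "Japan"}))  # -> WCJ
```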
    Source
    Library trends. 48(1999) no.1, S.22-47
    Theme
    Klassifikationssysteme im Online-Retrieval
  4. Classification research for knowledge representation and organization : Proc. of the 5th Int. Study Conf. on Classification Research, Toronto, Canada, 24.-28.6.1991 (1992) 0.01
    0.008664933 = product of:
      0.043324664 = sum of:
        0.008365562 = product of:
          0.016731124 = sum of:
            0.016731124 = weight(_text_:online in 2072) [ClassicSimilarity], result of:
              0.016731124 = score(doc=2072,freq=6.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.17423344 = fieldWeight in 2072, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2072)
          0.5 = coord(1/2)
        0.016505662 = weight(_text_:software in 2072) [ClassicSimilarity], result of:
          0.016505662 = score(doc=2072,freq=2.0), product of:
            0.12552431 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031640913 = queryNorm
            0.13149375 = fieldWeight in 2072, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2072)
        0.018453438 = weight(_text_:evaluation in 2072) [ClassicSimilarity], result of:
          0.018453438 = score(doc=2072,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.139036 = fieldWeight in 2072, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2072)
      0.2 = coord(3/15)
    
    Content
     Contains the contributions: SVENONIUS, E.: Classification: prospects, problems, and possibilities; BEALL, J.: Editing the Dewey Decimal Classification online: the evolution of the DDC database; BEGHTOL, C.: Toward a theory of fiction analysis for information storage and retrieval; CRAVEN, T.C.: Concept relation structures and their graphic display; FUGMANN, R.: Illusory goals in information science research; GILCHRIST, A.: UDC: the 1990's and beyond; GREEN, R.: The expression of syntagmatic relationships in indexing: are frame-based index languages the answer?; HUMPHREY, S.M.: Use and management of classification systems for knowledge-based indexing; MIKSA, F.L.: The concept of the universe of knowledge and the purpose of LIS classification; SCOTT, M. and A.F. FONSECA: Methodology for functional appraisal of records and creation of a functional thesaurus; ALBRECHTSEN, H.: PRESS: a thesaurus-based information system for software reuse; AMAESHI, B.: A preliminary AAT compatible African art thesaurus; CHATTERJEE, A.: Structures of Indian classification systems of the pre-Ranganathan era and their impact on the Colon Classification; COCHRANE, P.A.: Indexing and searching thesauri, the Janus or Proteus of information retrieval; CRAVEN, T.C.: A general versus a special algorithm in the graphic display of thesauri; DAHLBERG, I.: The basis of a new universal classification system seen from a philosophy of science point of view; DRABENSTOTT, K.M., RIESTER, L.C. and B.A. DEDE: Shelflisting using expert systems; FIDEL, R.: Thesaurus requirements for an intermediary expert system; GREEN, R.: Insights into classification from the cognitive sciences: ramifications for index languages; GROLIER, E. de: Towards a syndetic information retrieval system; GUENTHER, R.: The USMARC format for classification data: development and implementation; HOWARTH, L.C.: Factors influencing policies for the adoption and integration of revisions to classification schedules; HUDON, M.: Term definitions in subject thesauri: the Canadian literacy thesaurus experience; HUSAIN, S.: Notational techniques for the accommodation of subjects in Colon Classification 7th edition: theoretical possibility vis-à-vis practical need; KWASNIK, B.H. and C. JORGERSEN: The exploration by means of repertory grids of semantic differences among names of official documents; MICCO, M.: Suggestions for automating the Library of Congress Classification schedules; PERREAULT, J.M.: An essay on the prehistory of general categories (II): G.W. Leibniz, Conrad Gesner; REES-POTTER, L.K.: How well do thesauri serve the social sciences?; REVIE, C.W. and G. SMART: The construction and the use of faceted classification schema in technical domains; ROCKMORE, M.: Structuring a flexible faceted thesaurus record for corporate information retrieval; ROULIN, C.: Sub-thesauri as part of a metathesaurus; SMITH, L.C.: UNISIST revisited: compatibility in the context of collaboratories; STILES, W.G.: Notes concerning the use of chain indexing as a possible means of simulating the inductive leap within artificial intelligence; SVENONIUS, E., LIU, S. and B. SUBRAHMANYAM: Automation in chain indexing; TURNER, J.: Structure in data in the Stockshot database at the National Film Board of Canada; VIZINE-GOETZ, D.: The Dewey Decimal Classification as an online classification tool; WILLIAMSON, N.J.: Restructuring UDC: problems and possibilities; WILSON, A.: The hierarchy of belief: ideological tendentiousness in universal classification; WILSON, B.F.: An evaluation of the systematic botany schedule of the Universal Decimal Classification (English full edition, 1979); ZENG, L.: Research and development of classification and thesauri in China; CONFERENCE SUMMARY AND CONCLUSIONS
    Theme
    Klassifikationssysteme im Online-Retrieval
  5. Broughton, V.: ¬The need for a faceted classification as the basis of all methods of information retrieval (2006) 0.01
    0.0076110926 = product of:
      0.057083193 = sum of:
        0.030755727 = weight(_text_:evaluation in 2874) [ClassicSimilarity], result of:
          0.030755727 = score(doc=2874,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.23172665 = fieldWeight in 2874, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2874)
        0.026327467 = weight(_text_:web in 2874) [ClassicSimilarity], result of:
          0.026327467 = score(doc=2874,freq=4.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.25496176 = fieldWeight in 2874, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2874)
      0.13333334 = coord(2/15)
    
    Abstract
    Purpose - The aim of this article is to estimate the impact of faceted classification and the faceted analytical method on the development of various information retrieval tools over the latter part of the twentieth and early twenty-first centuries. Design/methodology/approach - The article presents an examination of various subject access tools intended for retrieval of both print and digital materials to determine whether they exhibit features of faceted systems. Some attention is paid to use of the faceted approach as a means of structuring information on commercial web sites. The secondary and research literature is also surveyed for commentary on and evaluation of facet analysis as a basis for the building of vocabulary and conceptual tools. Findings - The study finds that faceted systems are now very common, with a major increase in their use over the last 15 years. Most LIS subject indexing tools (classifications, subject heading lists and thesauri) now demonstrate features of facet analysis to a greater or lesser degree. A faceted approach is frequently taken to the presentation of product information on commercial web sites, and there is an independent strand of theory and documentation related to this application. There is some significant research on semi-automatic indexing and retrieval (query expansion and query formulation) using facet analytical techniques. Originality/value - This article provides an overview of an important conceptual approach to information retrieval, and compares different understandings and applications of this methodology.
  6. Zhang, J.; Zeng, M.L.: ¬A new similarity measure for subject hierarchical structures (2014) 0.01
    0.007228325 = product of:
      0.054212436 = sum of:
        0.043495167 = weight(_text_:evaluation in 1778) [ClassicSimilarity], result of:
          0.043495167 = score(doc=1778,freq=4.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.327711 = fieldWeight in 1778, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1778)
        0.010717267 = product of:
          0.021434534 = sum of:
            0.021434534 = weight(_text_:22 in 1778) [ClassicSimilarity], result of:
              0.021434534 = score(doc=1778,freq=2.0), product of:
                0.110801086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031640913 = queryNorm
                0.19345059 = fieldWeight in 1778, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1778)
          0.5 = coord(1/2)
      0.13333334 = coord(2/15)
    
    Abstract
     Purpose - The purpose of this paper is to introduce a new similarity method to gauge the differences between two subject hierarchical structures. Design/methodology/approach - In the proposed similarity measure, nodes on the two hierarchical structures are each projected onto a two-dimensional space, and both structural similarity and subject similarity of nodes are considered in the similarity between the two hierarchical structures. The extent to which the structural similarity impacts on the similarity can be controlled by adjusting a parameter. An experiment was conducted to evaluate the soundness of the measure. Eight experts whose research interests were information retrieval and information organization participated in the study. Results from the new measure were compared with results from the experts. Findings - The evaluation shows strong correlations between the results from the new method and the results from the experts. It suggests that the similarity method achieved satisfactory results. Practical implications - Hierarchical structures that are found in subject directories, taxonomies, classification systems, and other classificatory structures play an extremely important role in information organization and information representation. Measuring the similarity between two subject hierarchical structures allows an accurate overarching understanding of the degree to which the two hierarchical structures are similar. Originality/value - Both structural similarity and subject similarity of nodes were considered in the proposed similarity method, and the extent to which the structural similarity impacts on the similarity can be adjusted. In addition, a new evaluation method for hierarchical structure similarity was presented.
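     The abstract does not give the formula itself, so the following is only an assumption-laden toy sketch of the general idea of an adjustable trade-off between structural similarity and subject similarity; it is not Zhang and Zeng's measure.

```python
def combined_similarity(structural_sim, subject_sim, alpha=0.5):
    """Toy weighted combination: alpha controls how much structure contributes.

    Both inputs are assumed to be normalized to [0, 1]; this is NOT the
    authors' formula, only an illustration of an adjustable trade-off.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * structural_sim + (1.0 - alpha) * subject_sim

# Emphasize subject similarity of the nodes (low alpha) or structure (high alpha).
print(combined_similarity(0.8, 0.4, alpha=0.2))  # 0.48
print(combined_similarity(0.8, 0.4, alpha=0.8))  # 0.72
```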
    Date
    8. 4.2015 16:22:13
  7. Zarrad, R.; Doggaz, N.; Zagrouba, E.: Wikipedia HTML structure analysis for ontology construction (2018) 0.01
    0.006582941 = product of:
      0.049372055 = sum of:
        0.030755727 = weight(_text_:evaluation in 4302) [ClassicSimilarity], result of:
          0.030755727 = score(doc=4302,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.23172665 = fieldWeight in 4302, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4302)
        0.01861633 = weight(_text_:web in 4302) [ClassicSimilarity], result of:
          0.01861633 = score(doc=4302,freq=2.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.18028519 = fieldWeight in 4302, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4302)
      0.13333334 = coord(2/15)
    
    Abstract
     Previously, the main problem of information extraction was to gather enough data. Today, the challenge is not to collect data but to interpret and represent them in order to deduce information. Ontologies are considered suitable solutions for organizing information. The classic methods for ontology construction from textual documents rely on natural language analysis and are generally based on statistical or linguistic approaches. However, these approaches do not consider the document structure, which provides additional knowledge. In fact, the structural organization of documents also conveys meaning. In this context, new approaches focus on document structure analysis to extract knowledge. This paper describes a methodology for ontology construction from web data and especially from Wikipedia articles. It focuses mainly on document structure in order to extract the main concepts and their relations. The proposed methods extract not only taxonomic and non-taxonomic relations but also give the labels describing non-taxonomic relations. The extraction of non-taxonomic relations is established by analyzing the title hierarchy in each document. Pattern matching is also applied in order to extract known semantic relations. We also propose to apply a refinement to the extracted relations in order to keep only those that are relevant. The refinement process is performed by applying the transitive property, checking the nature of the relations and analyzing taxonomic relations having inverted arguments. Experiments have been performed on French Wikipedia articles related to the medical field. The ontology is evaluated by comparing it to gold standards.
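     As a hedged illustration of the title-hierarchy idea (not the authors' system), the sketch below runs BeautifulSoup over a made-up HTML fragment and pairs each heading with the nearest higher-level heading, yielding candidate parent-child relations.

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Made-up Wikipedia-style fragment; only the heading structure matters here.
html = """
<h2>Diabetes</h2>
  <h3>Causes</h3>
  <h3>Treatment</h3>
    <h4>Insulin therapy</h4>
"""

soup = BeautifulSoup(html, "html.parser")
stack, relations = [], []

for tag in soup.find_all(["h2", "h3", "h4", "h5", "h6"]):
    level = int(tag.name[1])
    # Drop headings that are not ancestors of the current one.
    while stack and stack[-1][0] >= level:
        stack.pop()
    if stack:
        relations.append((stack[-1][1], tag.get_text(strip=True)))  # (parent, child)
    stack.append((level, tag.get_text(strip=True)))

print(relations)
# [('Diabetes', 'Causes'), ('Diabetes', 'Treatment'), ('Treatment', 'Insulin therapy')]
```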
  8. Ellis, D.; Vasconcelos, A.: Ranganathan and the Net : using facet analysis to search and organise the World Wide Web (1999) 0.01
    0.0064470717 = product of:
      0.048353035 = sum of:
        0.009659718 = product of:
          0.019319436 = sum of:
            0.019319436 = weight(_text_:online in 726) [ClassicSimilarity], result of:
              0.019319436 = score(doc=726,freq=2.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.20118743 = fieldWeight in 726, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=726)
          0.5 = coord(1/2)
        0.038693316 = weight(_text_:web in 726) [ClassicSimilarity], result of:
          0.038693316 = score(doc=726,freq=6.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.37471575 = fieldWeight in 726, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=726)
      0.13333334 = coord(2/15)
    
    Abstract
    This article gives a cheerfully brief and undetailed account of how to make a faceted classification system, then describes information retrieval and searching on the web. It concludes by saying that facets would be excellent in helping users search and browse the web, but offers no real clues as to how this can be done.
    Theme
    Klassifikationssysteme im Online-Retrieval
  9. Lin, W.-Y.C.: ¬The concept and applications of faceted classifications (2006) 0.01
    0.0062578344 = product of:
      0.046933755 = sum of:
        0.029786127 = weight(_text_:web in 5083) [ClassicSimilarity], result of:
          0.029786127 = score(doc=5083,freq=2.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.2884563 = fieldWeight in 5083, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=5083)
        0.017147627 = product of:
          0.034295253 = sum of:
            0.034295253 = weight(_text_:22 in 5083) [ClassicSimilarity], result of:
              0.034295253 = score(doc=5083,freq=2.0), product of:
                0.110801086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031640913 = queryNorm
                0.30952093 = fieldWeight in 5083, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5083)
          0.5 = coord(1/2)
      0.13333334 = coord(2/15)
    
    Abstract
     The concept of faceted classification has a long history and importance in human civilization. Recently, more and more consumer Web sites have adopted the idea of facet analysis to organize and display their products or services. The aim of this article is to review the origin and development of faceted classification, as well as its concepts, essence, advantages and limitations. Further, the applications of faceted classification in various domains are explored.
    Date
    27. 5.2007 22:19:35
  10. Gnoli, C.; Mei, H.: Freely faceted classification for Web-based information retrieval (2006) 0.01
    0.005500357 = product of:
      0.041252676 = sum of:
        0.009659718 = product of:
          0.019319436 = sum of:
            0.019319436 = weight(_text_:online in 534) [ClassicSimilarity], result of:
              0.019319436 = score(doc=534,freq=2.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.20118743 = fieldWeight in 534, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=534)
          0.5 = coord(1/2)
        0.031592958 = weight(_text_:web in 534) [ClassicSimilarity], result of:
          0.031592958 = score(doc=534,freq=4.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.3059541 = fieldWeight in 534, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=534)
      0.13333334 = coord(2/15)
    
    Abstract
     In free classification, each concept is expressed by a constant notation, and classmarks are formed by free combinations of them, allowing the retrieval of records from a database by searching any of the component concepts. A refinement of free classification is freely faceted classification, where notation can include facets expressing the kind of relations holding between the concepts. The Integrative Level Classification project aims at testing free and freely faceted classification by applying them to small bibliographical samples in various domains. A sample, called the Dandelion Bibliography of Facet Analysis, is described here. Experience was gained by using this system to classify 300 specialized papers dealing with facet analysis itself, recording them in a MySQL database, and building a Web interface exploiting freely faceted notation. The interface is written in PHP and uses string functions to process the queries and to yield relevant results selected and ordered according to the principles of integrative levels.
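     The project's interface is written in PHP; purely as an illustration of the string-processing idea, the sketch below (Python, with an invented notation that is not the ILC scheme) splits a freely combined classmark into its component concept codes so that a record can be retrieved by any of them.

```python
import re

# Invented notation: two-character concept codes, some wrapped in brackets as facets.
records = {
    "r1": "mq(3e)np",
    "r2": "np(0h)ct",
}

def components(classmark):
    """Split a classmark into its component concept codes (toy tokenizer)."""
    return re.findall(r"[a-z0-9]{2}", classmark)

def search(concept):
    """Return the ids of all records whose classmark contains the concept."""
    return [rid for rid, cm in records.items() if concept in components(cm)]

print(search("np"))  # ['r1', 'r2'] -- any component concept retrieves the record
```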
    Theme
    Klassifikationssysteme im Online-Retrieval
  11. Hjoerland, B.: Theories of knowledge organization - theories of knowledge (2017) 0.01
    0.005475605 = product of:
      0.041067034 = sum of:
        0.026062861 = weight(_text_:web in 3494) [ClassicSimilarity], result of:
          0.026062861 = score(doc=3494,freq=2.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.25239927 = fieldWeight in 3494, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3494)
        0.015004174 = product of:
          0.030008348 = sum of:
            0.030008348 = weight(_text_:22 in 3494) [ClassicSimilarity], result of:
              0.030008348 = score(doc=3494,freq=2.0), product of:
                0.110801086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031640913 = queryNorm
                0.2708308 = fieldWeight in 3494, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3494)
          0.5 = coord(1/2)
      0.13333334 = coord(2/15)
    
    Pages
    S.22-36
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  12. Putkey, T.: Using SKOS to express faceted classification on the Semantic Web (2011) 0.01
    0.0052988958 = product of:
      0.039741717 = sum of:
        0.006439812 = product of:
          0.012879624 = sum of:
            0.012879624 = weight(_text_:online in 311) [ClassicSimilarity], result of:
              0.012879624 = score(doc=311,freq=2.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.13412495 = fieldWeight in 311, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.03125 = fieldNorm(doc=311)
          0.5 = coord(1/2)
        0.033301905 = weight(_text_:web in 311) [ClassicSimilarity], result of:
          0.033301905 = score(doc=311,freq=10.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.32250395 = fieldWeight in 311, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=311)
      0.13333334 = coord(2/15)
    
    Abstract
     This paper looks at Simple Knowledge Organization System (SKOS) to investigate how a faceted classification can be expressed in RDF and shared on the Semantic Web. Statement of the Problem: Faceted classification outlines facets as well as subfacets and facet values. Hierarchical relationships and associative relationships are established in a faceted classification. RDF is used to describe how a specific URI has a relationship to a facet value. Not only does RDF decompose "information into pieces," but by incorporating facet values RDF also gives the URI the hierarchical and associative relationships expressed in the faceted classification. Combining faceted classification and RDF creates more knowledge than if the two stood alone. An application understands the subject-predicate-object relationship in RDF and can display hierarchical and associative relationships based on the object (facet) value. This paper continues to investigate whether the above idea is indeed useful, used, and applicable. If so, how can a faceted classification be expressed in RDF? What would this expression look like? Literature Review: This paper used the same articles as the paper A Survey of Faceted Classification: History, Uses, Drawbacks and the Semantic Web (Putkey, 2010). In that paper, appropriate resources were discovered by searching in various databases for "faceted classification" and "faceted search," either in the descriptor or title fields. Citations were also followed to find more articles, and the Internet was searched for the same terms. To retrieve the documents about RDF, searches combined "faceted classification" and "RDF," looking for these words in either the descriptor or title.
     Methodology: Based on information from research papers, more research was done on SKOS, on examples of SKOS and shared faceted classifications on the Semantic Web, and on how to express SKOS in RDF/XML. Once confident with these ideas, the author used a faceted taxonomy created in a Vocabulary Design class and encoded it using SKOS. Instead of writing RDF in a program such as Notepad, a thesaurus tool was used to create the taxonomy according to SKOS standards and then export the thesaurus in RDF/XML format. These processes and tools are then analyzed. Results: The initial statement of the problem was simply an extension of the survey paper done earlier in this class. To continue the research, further work was done on SKOS - a standard for expressing thesauri, taxonomies and faceted classifications so they can be shared on the Semantic Web.
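     To make the SKOS encoding concrete, here is a small hedged sketch (Python with rdflib; the URIs, labels and relationships are invented, not the taxonomy from the Vocabulary Design class) showing a facet value as a skos:Concept with one hierarchical (broader) and one associative (related) relationship, serialized to RDF/XML as in the paper's export format.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/facets#")
g = Graph()
g.bind("skos", SKOS)

scheme = EX.MaterialsFacet
g.add((scheme, RDF.type, SKOS.ConceptScheme))

for label in ("Wood", "Hardwood", "Carpentry"):
    concept = EX[label]
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))
    g.add((concept, SKOS.inScheme, scheme))

g.add((EX.Hardwood, SKOS.broader, EX.Wood))     # hierarchical relationship
g.add((EX.Carpentry, SKOS.related, EX.Wood))    # associative relationship

print(g.serialize(format="xml"))                # RDF/XML serialization
```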
    Issue
    [Online: http://unllib.unl.edu/LPP/].
  13. Beghtol, C.: General classification systems : structural principles for multidisciplinary specification (1998) 0.00
    0.004266575 = product of:
      0.031999312 = sum of:
        0.009659718 = product of:
          0.019319436 = sum of:
            0.019319436 = weight(_text_:online in 44) [ClassicSimilarity], result of:
              0.019319436 = score(doc=44,freq=2.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.20118743 = fieldWeight in 44, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=44)
          0.5 = coord(1/2)
        0.022339594 = weight(_text_:web in 44) [ClassicSimilarity], result of:
          0.022339594 = score(doc=44,freq=2.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.21634221 = fieldWeight in 44, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=44)
      0.13333334 = coord(2/15)
    
    Abstract
    In this century, knowledge creation, production, dissemination and use have changed profoundly. Intellectual and physical barriers have been substantially reduced by the rise of multidisciplinarity and by the influence of computerization, particularly by the spread of the World Wide Web (WWW). Bibliographic classification systems need to respond to this situation. Three possible strategic responses are described: 1) adopting an existing system; 2) adapting an existing system; and 3) finding new structural principles for classification systems. Examples of these three responses are given. An extended example of the third option uses the knowledge outline in the Spectrum of Britannica Online to suggest a theory of "viewpoint warrant" that could be used to incorporate differing perspectives into general classification systems
  14. Szostak, R.: ¬A pluralistic approach to the philosophy of classification : a case for "public knowledge" (2015) 0.00
    0.0040595494 = product of:
      0.060893238 = sum of:
        0.060893238 = weight(_text_:evaluation in 5541) [ClassicSimilarity], result of:
          0.060893238 = score(doc=5541,freq=4.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.4587954 = fieldWeight in 5541, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5541)
      0.06666667 = coord(1/15)
    
    Abstract
    Any classification system should be evaluated with respect to a variety of philosophical and practical concerns. This paper explores several distinct issues: the nature of a work, the value of a statement, the contribution of information science to philosophy, the nature of hierarchy, ethical evaluation, pre- versus postcoordination, the lived experience of librarians, and formalization versus natural language. It evaluates a particular approach to classification in terms of each of these but draws general lessons for philosophical evaluation. That approach to classification emphasizes the free combination of basic concepts representing both real things in the world and the relationships among these; works are also classified in terms of theories, methods, and perspectives applied.
  15. Mai, J.E.: Classification of the Web : challenges and inquiries (2004) 0.00
    0.0039714836 = product of:
      0.059572253 = sum of:
        0.059572253 = weight(_text_:web in 3075) [ClassicSimilarity], result of:
          0.059572253 = score(doc=3075,freq=8.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.5769126 = fieldWeight in 3075, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3075)
      0.06666667 = coord(1/15)
    
    Abstract
    This paper discusses the challenges faced by investigations into the classification of the Web and outlines inquiries that are needed to use principles for bibliographic classification to construct classifications of the Web. This paper suggests that the classification of the Web meets challenges that call for inquiries into the theoretical foundation of bibliographic classification theory.
  16. Slavic, A.: On the nature and typology of documentary classifications and their use in a networked environment (2007) 0.00
    0.003536217 = product of:
      0.026521625 = sum of:
        0.013660905 = product of:
          0.02732181 = sum of:
            0.02732181 = weight(_text_:online in 780) [ClassicSimilarity], result of:
              0.02732181 = score(doc=780,freq=4.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.284522 = fieldWeight in 780, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=780)
          0.5 = coord(1/2)
        0.01286072 = product of:
          0.02572144 = sum of:
            0.02572144 = weight(_text_:22 in 780) [ClassicSimilarity], result of:
              0.02572144 = score(doc=780,freq=2.0), product of:
                0.110801086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031640913 = queryNorm
                0.23214069 = fieldWeight in 780, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=780)
          0.5 = coord(1/2)
      0.13333334 = coord(2/15)
    
    Abstract
    Networked orientated standards for vocabulary publishing and exchange and proposals for terminological services and terminology registries will improve sharing and use of all knowledge organization systems in the networked information environment. This means that documentary classifications may also become more applicable for use outside their original domain of application. The paper summarises some characteristics common to documentary classifications and explains some terminological, functional and implementation aspects. The original purpose behind each classification scheme determines the functions that the vocabulary is designed to facilitate. These functions influence the structure, semantics and syntax, scheme coverage and format in which classification data are published and made available. The author suggests that attention should be paid to the differences between documentary classifications as these may determine their suitability for a certain purpose and may impose different requirements with respect to their use online. As we speak, many classifications are being created for knowledge organization and it may be important to promote expertise from the bibliographic domain with respect to building and using classification systems.
    Date
    22.12.2007 17:22:31
    Theme
    Klassifikationssysteme im Online-Retrieval
  17. Fripp, D.: Using linked data to classify web documents (2010) 0.00
    0.0034750483 = product of:
      0.052125722 = sum of:
        0.052125722 = weight(_text_:web in 4172) [ClassicSimilarity], result of:
          0.052125722 = score(doc=4172,freq=8.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.50479853 = fieldWeight in 4172, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4172)
      0.06666667 = coord(1/15)
    
    Abstract
    Purpose - The purpose of this paper is to find a relationship between traditional faceted classification schemes and semantic web document annotators, particularly in the linked data environment. Design/methodology/approach - A consideration of the conceptual ideas behind faceted classification and linked data architecture is made. Analysis of selected web documents is performed using Calais' Semantic Proxy to support the considerations. Findings - Technical language aside, the principles of both approaches are very similar. Modern classification techniques have the potential to automatically generate metadata to drive more precise information recall by including a semantic layer. Originality/value - Linked data have not been explicitly considered in this context before in the published literature.
    Theme
    Semantic Web
  18. Bosch, M.: Ontologies, different reasoning strategies, different logics, different kinds of knowledge representation : working together (2006) 0.00
    0.0030094802 = product of:
      0.0451422 = sum of:
        0.0451422 = weight(_text_:web in 166) [ClassicSimilarity], result of:
          0.0451422 = score(doc=166,freq=6.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.43716836 = fieldWeight in 166, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=166)
      0.06666667 = coord(1/15)
    
    Abstract
     Recent experience in the building, maintenance and reuse of ontologies has shown that the most efficient approach is the collaborative one. However, communication between collaborators such as IT professionals, librarians, web designers and subject matter experts is difficult and time consuming. This is because there are different reasoning strategies, different logics and different kinds of knowledge representation in Semantic Web applications. This article intends to be a reference scheme. It uses concise and simple explanations that can be used in common by specialists from different backgrounds working together on a Semantic Web application.
  19. Hjoerland, B.: Theories of knowledge organization - theories of knowledge (2013) 0.00
    0.002899678 = product of:
      0.043495167 = sum of:
        0.043495167 = weight(_text_:evaluation in 789) [ClassicSimilarity], result of:
          0.043495167 = score(doc=789,freq=4.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.327711 = fieldWeight in 789, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0390625 = fieldNorm(doc=789)
      0.06666667 = coord(1/15)
    
    Abstract
    Any ontological theory commits us to accept and classify a number of phenomena in a more or less specific way-and vice versa: a classification tends to reveal the theoretical outlook of its creator. Objects and their descriptions and relations are not just "given," but determined by theories. Knowledge is fallible, and consensus is rare. By implication, knowledge organization has to consider different theories/views and their foundations. Bibliographical classifications depend on subject knowledge and on the same theories as corresponding scientific and scholarly classifications. Some classifications are based on logical distinctions, others on empirical examinations, and some on mappings of common ancestors or on establishing functional criteria. To evaluate a classification is to involve oneself in the research which has produced the given classification. Because research is always based more or less on specific epistemological ideals (e.g., empiricism, rationalism, historicism, or pragmatism), the evaluation of classification includes the evaluation of the epistemological foundations of the research on which given classifications have been based. The field of knowledge organization itself is based on different approaches and traditions such as user-based and cognitive views, facet-analytical views, numeric taxonomic approaches, bibliometrics, and domain-analytic approaches. These approaches and traditions are again connected to epistemological views, which have to be considered. Only the domain-analytic view is fully committed to exploring knowledge organization in the light of subject knowledge and substantial scholarly theories.
  20. Ullah, A.; Khusro, S.; Ullah, I.: Bibliographic classification in the digital age : current trends & future directions (2017) 0.00
    0.0028705348 = product of:
      0.04305802 = sum of:
        0.04305802 = weight(_text_:evaluation in 5717) [ClassicSimilarity], result of:
          0.04305802 = score(doc=5717,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.32441732 = fieldWeight in 5717, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5717)
      0.06666667 = coord(1/15)
    
    Abstract
     Bibliographic classification is among the core activities of Library & Information Science that bring order and proper management to the holdings of a library. Compared to printed media, digital collections present numerous challenges regarding their preservation, curation, organization and resource discovery & access. Therefore, a truly native perspective needs to be adopted for bibliographic classification in digital environments. In this research article, we have investigated and reported different approaches to the bibliographic classification of digital collections. The article also contributes two evaluation frameworks that evaluate the existing classification schemes and systems. The article presents a bird's-eye view to help researchers reach a generalized and holistic approach to bibliographic classification research, and identifies new research avenues.


Languages

  • e 56
  • d 4
  • f 3
  • chi 1
  • i 1

Types

  • a 57
  • m 6
  • el 3
  • s 3