Search (49 results, page 1 of 3)

  • theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  1. Classification research for knowledge representation and organization : Proc. of the 5th Int. Study Conf. on Classification Research, Toronto, Canada, 24.-28.6.1991 (1992) 0.02
    0.02322064 = product of:
      0.09288256 = sum of:
        0.09288256 = weight(_text_:graphic in 2072) [ClassicSimilarity], result of:
          0.09288256 = score(doc=2072,freq=4.0), product of:
            0.29924196 = queryWeight, product of:
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.045191016 = queryNorm
            0.31039283 = fieldWeight in 2072, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2072)
      0.25 = coord(1/4)
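
     The indented breakdown above (repeated for each result below) is Lucene's ClassicSimilarity "explain" output for the ranking query. A minimal Python sketch, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))), reproduces this entry's score from the factors listed:

       import math

       def classic_similarity_score(freq, doc_freq, max_docs, query_norm, field_norm, coord):
           """Recompute a single-term Lucene ClassicSimilarity score from the
           factors shown in the 'explain' tree above."""
           idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
           query_weight = idf * query_norm                  # queryWeight
           tf = math.sqrt(freq)                             # tf(freq)
           field_weight = tf * idf * field_norm             # fieldWeight
           return query_weight * field_weight * coord       # score = weight * coord

       # Factors for _text_:graphic in doc 2072 (result no. 1):
       score = classic_similarity_score(freq=4.0, doc_freq=159, max_docs=44218,
                                        query_norm=0.045191016, field_norm=0.0234375,
                                        coord=0.25)
       print(round(score, 8))  # ~0.02322064, i.e. the displayed 0.02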
    
    Content
     Contains the contributions: SVENONIUS, E.: Classification: prospects, problems, and possibilities; BEALL, J.: Editing the Dewey Decimal Classification online: the evolution of the DDC database; BEGHTOL, C.: Toward a theory of fiction analysis for information storage and retrieval; CRAVEN, T.C.: Concept relation structures and their graphic display; FUGMANN, R.: Illusory goals in information science research; GILCHRIST, A.: UDC: the 1990's and beyond; GREEN, R.: The expression of syntagmatic relationships in indexing: are frame-based index languages the answer?; HUMPHREY, S.M.: Use and management of classification systems for knowledge-based indexing; MIKSA, F.L.: The concept of the universe of knowledge and the purpose of LIS classification; SCOTT, M. and A.F. FONSECA: Methodology for functional appraisal of records and creation of a functional thesaurus; ALBRECHTSEN, H.: PRESS: a thesaurus-based information system for software reuse; AMAESHI, B.: A preliminary AAT compatible African art thesaurus; CHATTERJEE, A.: Structures of Indian classification systems of the pre-Ranganathan era and their impact on the Colon Classification; COCHRANE, P.A.: Indexing and searching thesauri, the Janus or Proteus of information retrieval; CRAVEN, T.C.: A general versus a special algorithm in the graphic display of thesauri; DAHLBERG, I.: The basis of a new universal classification system seen from a philosophy of science point of view; DRABENSTOTT, K.M., RIESTER, L.C. and B.A. DEDE: Shelflisting using expert systems; FIDEL, R.: Thesaurus requirements for an intermediary expert system; GREEN, R.: Insights into classification from the cognitive sciences: ramifications for index languages; GROLIER, E. de: Towards a syndetic information retrieval system; GUENTHER, R.: The USMARC format for classification data: development and implementation; HOWARTH, L.C.: Factors influencing policies for the adoption and integration of revisions to classification schedules; HUDON, M.: Term definitions in subject thesauri: the Canadian literacy thesaurus experience; HUSAIN, S.: Notational techniques for the accommodation of subjects in Colon Classification 7th edition: theoretical possibility vis-à-vis practical need; KWASNIK, B.H. and C. JORGERSEN: The exploration by means of repertory grids of semantic differences among names of official documents; MICCO, M.: Suggestions for automating the Library of Congress Classification schedules; PERREAULT, J.M.: An essay on the prehistory of general categories (II): G.W. Leibniz, Conrad Gesner; REES-POTTER, L.K.: How well do thesauri serve the social sciences?; REVIE, C.W. and G. SMART: The construction and the use of faceted classification schema in technical domains; ROCKMORE, M.: Structuring a flexible faceted thesaurus record for corporate information retrieval; ROULIN, C.: Sub-thesauri as part of a metathesaurus; SMITH, L.C.: UNISIST revisited: compatibility in the context of collaboratories; STILES, W.G.: Notes concerning the use of chain indexing as a possible means of simulating the inductive leap within artificial intelligence; SVENONIUS, E., LIU, S. and B. SUBRAHMANYAM: Automation in chain indexing; TURNER, J.: Structure in data in the Stockshot database at the National Film Board of Canada; VIZINE-GOETZ, D.: The Dewey Decimal Classification as an online classification tool; WILLIAMSON, N.J.: Restructuring UDC: problems and possibilities; WILSON, A.: The hierarchy of belief: ideological tendentiousness in universal classification; WILSON, B.F.: An evaluation of the systematic botany schedule of the Universal Decimal Classification (English full edition, 1979); ZENG, L.: Research and development of classification and thesauri in China; CONFERENCE SUMMARY AND CONCLUSIONS
  2. Kwasnik, B.H.: The role of classification in knowledge representation (1999) 0.02
    0.021289835 = product of:
      0.08515934 = sum of:
        0.08515934 = sum of:
          0.048422787 = weight(_text_:methods in 2464) [ClassicSimilarity], result of:
            0.048422787 = score(doc=2464,freq=2.0), product of:
              0.18168657 = queryWeight, product of:
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.045191016 = queryNorm
              0.26651827 = fieldWeight in 2464, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.046875 = fieldNorm(doc=2464)
          0.03673655 = weight(_text_:22 in 2464) [ClassicSimilarity], result of:
            0.03673655 = score(doc=2464,freq=2.0), product of:
              0.15825124 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045191016 = queryNorm
              0.23214069 = fieldWeight in 2464, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2464)
      0.25 = coord(1/4)
    
    Abstract
     A fascinating, broad-ranging article about classification, knowledge, and how they relate. Hierarchies, trees, paradigms (a two-dimensional classification that can look something like a spreadsheet), and facets are covered, with descriptions of how they work and how they can be used for knowledge discovery and creation. Kwasnik outlines how to make a faceted classification: choose facets, develop facets, analyze entities using the facets, and make a citation order. Facets are useful for many reasons: they do not require complete knowledge of the entire body of material; they are hospitable, flexible, and expressive; they do not require a rigid background theory; they can mix theoretical structures and models; and they allow users to view things from many perspectives. Facets do have faults: it can be hard to pick the right ones; it is hard to show relations between them; and it is difficult to visualize them. The coverage of the other methods is equally thorough and there is much to consider for anyone putting a classification on the web.
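     The four steps listed above (choose facets, develop them, analyze entities against them, fix a citation order) can be illustrated with a small sketch; the facet names, foci, and the example entity below are invented for illustration and are not taken from Kwasnik's article:

       # Illustrative only: a tiny faceted classification with a citation order.
       # Facet names and foci are hypothetical, not from Kwasnik's article.
       facets = {
           "material": ["paper", "film", "digital"],
           "process":  ["indexing", "retrieval", "display"],
           "agent":    ["human", "machine"],
       }
       citation_order = ["process", "material", "agent"]  # order in which facets are cited

       def classify(entity, analysis):
           """Analyze an entity facet by facet and compose its class string in citation order."""
           for facet, focus in analysis.items():
               if focus not in facets[facet]:
                   raise ValueError(f"{focus!r} is not a focus of facet {facet!r}")
           notation = "/".join(analysis[f] for f in citation_order if f in analysis)
           return entity, notation

       print(classify("thesaurus display software",
                      {"process": "display", "material": "digital", "agent": "machine"}))
       # ('thesaurus display software', 'display/digital/machine')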
    Source
    Library trends. 48(1999) no.1, S.22-47
  3. Wang, Z.; Chaudhry, A.S.; Khoo, C.S.G.: Using classification schemes and thesauri to build an organizational taxonomy for organizing content and aiding navigation (2008) 0.02
    0.01753612 = product of:
      0.07014448 = sum of:
        0.07014448 = sum of:
          0.045653444 = weight(_text_:methods in 2346) [ClassicSimilarity], result of:
            0.045653444 = score(doc=2346,freq=4.0), product of:
              0.18168657 = queryWeight, product of:
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.045191016 = queryNorm
              0.25127584 = fieldWeight in 2346, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.03125 = fieldNorm(doc=2346)
          0.024491036 = weight(_text_:22 in 2346) [ClassicSimilarity], result of:
            0.024491036 = score(doc=2346,freq=2.0), product of:
              0.15825124 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045191016 = queryNorm
              0.15476047 = fieldWeight in 2346, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2346)
      0.25 = coord(1/4)
    
    Abstract
     Purpose - The potential and benefits of classification schemes and thesauri for building organizational taxonomies are not fully exploited by organizations, and empirical data on building an organizational taxonomy top-down from classification schemes and thesauri appear to be lacking. The paper seeks to make a contribution in this regard. Design/methodology/approach - A case study of building an organizational taxonomy was conducted in the information studies domain for the Division of Information Studies at Nanyang Technological University, Singapore. The taxonomy was built using the Dewey Decimal Classification, the Information Science Taxonomy, two information systems taxonomies, and three thesauri (ASIS&T, LISA, and ERIC). Findings - Classification schemes and thesauri were found to be helpful in creating the structure and categories of the subject facet of the taxonomy, but organizational community sources had to be consulted and several methods had to be employed. Organizational activities and stakeholders' needs had to be identified to determine the objectives, facets, and subject coverage of the taxonomy. Main categories were determined by identifying the stakeholders' interests and consulting organizational community sources and domain taxonomies. Category terms were selected from the terminologies of classification schemes, domain taxonomies, and thesauri against the stakeholders' interests. Hierarchical structures of the main categories were constructed in line with the stakeholders' perspectives and the navigational role, taking advantage of structures and term relationships from classification schemes and thesauri. Categories were determined in line with the concepts and the hierarchical levels. The format of categories was made uniform according to a commonly used standard, and the consistency principle was employed to make the taxonomy structure and categories neater. Validation of the draft taxonomy through consultations with the stakeholders further refined it. Originality/value - No similar study could be traced in the literature. The steps and methods used in the taxonomy development, and the information studies taxonomy itself, will be helpful for library and information schools and similar organizations in their efforts to develop taxonomies for organizing content and aiding navigation on organizational sites.
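     One step of the top-down procedure described above, selecting category terms from several schemes and thesauri against the stakeholders' interests, can be sketched roughly as follows; the candidate terms, source labels, and interests are hypothetical and not taken from the case study:

       # Hypothetical sketch: keep candidate category terms from several source
       # vocabularies that match a stakeholder interest, recording their provenance.
       candidates = {
           "DDC captions":     ["Information storage and retrieval", "Library operations"],
           "ASIS&T thesaurus": ["Information retrieval", "Knowledge organization systems"],
           "ERIC thesaurus":   ["Information Retrieval", "Cataloging"],
       }
       stakeholder_interests = {"information retrieval", "knowledge organization systems"}

       def select_terms(candidates, interests):
           """Return matching terms (lower-cased) mapped to the vocabularies they came from."""
           selected = {}
           for source, terms in candidates.items():
               for term in terms:
                   if term.lower() in interests:
                       selected.setdefault(term.lower(), []).append(source)
           return selected

       print(select_terms(candidates, stakeholder_interests))
       # {'information retrieval': ['ASIS&T thesaurus', 'ERIC thesaurus'],
       #  'knowledge organization systems': ['ASIS&T thesaurus']}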
    Date
    7.11.2008 15:22:04
  4. Tkalac, S.; Mateljan, V.: Neke karakteristike notacijskih shema (1996) 0.01
    0.011413361 = product of:
      0.045653444 = sum of:
        0.045653444 = product of:
          0.09130689 = sum of:
            0.09130689 = weight(_text_:methods in 655) [ClassicSimilarity], result of:
              0.09130689 = score(doc=655,freq=4.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.5025517 = fieldWeight in 655, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0625 = fieldNorm(doc=655)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     Presents a short review of fundamental knowledge representation methods: logical, graphical, structured and procedural notational schemes. Special attention is given to classifications of notational schemes and to the characteristics on which those classifications are based. Knowledge representation is one of the central problems in artificial intelligence, but a complete theory of it does not exist; it remains a set of methods that are used, with more or less success, in attempts to solve a given problem. The characteristics of knowledge representation schemes therefore play a significant role.
  5. Szostak, R.: Classifying science : phenomena, data, theory, method, practice (2004) 0.01
    0.010483841 = product of:
      0.041935366 = sum of:
        0.041935366 = product of:
          0.08387073 = sum of:
            0.08387073 = weight(_text_:methods in 325) [ClassicSimilarity], result of:
              0.08387073 = score(doc=325,freq=24.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.4616232 = fieldWeight in 325, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=325)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Classification is the essential first step in science. The study of science, as well as the practice of science, will thus benefit from a detailed classification of different types of science. In this book, science - defined broadly to include the social sciences and humanities - is first unpacked into its constituent elements: the phenomena studied, the data used, the theories employed, the methods applied, and the practices of scientists. These five elements are then classified in turn. Notably, the classifications of both theory types and methods allow the key strengths and weaknesses of different theories and methods to be readily discerned and compared. Connections across classifications are explored: should certain theories or phenomena be investigated only with certain methods? What is the proper function and form of scientific paradigms? Are certain common errors and biases in scientific practice associated with particular phenomena, data, theories, or methods? The classifications point to several ways of improving both specialized and interdisciplinary research and teaching, and especially of enhancing communication across communities of scholars. The classifications also support a superior system of document classification that would allow searches by theory and method used as well as causal links investigated.
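     The document classification proposed here, in which a work is described by the phenomena it studies, the theory and method it applies, and the causal links it investigates, lends itself to a simple faceted record; the field values below are hypothetical illustrations, not drawn from the book:

       from dataclasses import dataclass, field

       @dataclass
       class DocumentRecord:
           """Hypothetical faceted record along the lines sketched in the abstract above:
           phenomena studied, theory and method applied, causal links investigated."""
           title: str
           phenomena: list = field(default_factory=list)
           theory: str = ""
           method: str = ""
           causal_links: list = field(default_factory=list)  # (cause, effect) pairs

       docs = [
           DocumentRecord("Study A", phenomena=["literacy", "income"],
                          theory="rational choice", method="survey",
                          causal_links=[("income", "literacy")]),
           DocumentRecord("Study B", phenomena=["literacy"],
                          theory="constructivism", method="case study"),
       ]

       # Search by method and by causal link investigated, as such a classification is meant to allow.
       print([d.title for d in docs if d.method == "survey"])                      # ['Study A']
       print([d.title for d in docs if ("income", "literacy") in d.causal_links])  # ['Study A']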
    Content
     Contents: - Chapter 1: Classifying Science: 1.1. A Simple Classificatory Guideline - 1.2. The First "Cut" (and Plan of Work) - 1.3. Some Preliminaries - Chapter 2: Classifying Phenomena and Data: 2.1. Classifying Phenomena - 2.2. Classifying Data - Chapter 3: Classifying Theory: 3.1. Typology of Theory - 3.2. What Is a Theory? - 3.3. Evaluating Theories - 3.4. Types of Theory and the Five Types of Causation - 3.5. Classifying Individual Theories - 3.6. Advantages of a Typology of Theory - Chapter 4: Classifying Method: 4.1. Classifying Methods - 4.2. Typology of Strengths and Weaknesses of Methods - 4.3. Qualitative Versus Quantitative Analysis Revisited - 4.4. Evaluating Methods - 4.5. Classifying Particular Methods Within The Typology - 4.6. Advantages of a Typology of Methods - Chapter 5: Classifying Practice: 5.1. Errors and Biases in Science - 5.2. Typology of (Critiques of) Scientific Practice - 5.3. Utilizing This Classification - 5.4. The Five Types of Ethical Analysis - Chapter 6: Drawing Connections Across These Classifications: 6.1. Theory and Method - 6.2. Theory (Method) and Phenomena (Data) - 6.3. Better Paradigms - 6.4. Critiques of Scientific Practice: Are They Correlated with Other Classifications? - Chapter 7: Classifying Scientific Documents: 7.1. Faceted or Enumerative? - 7.2. Classifying By Phenomena Studied - 7.3. Classifying By Theory Used - 7.4. Classifying By Method Used - 7.5. Links Among Subjects - 7.6. Type of Work, Language, and More - 7.7. Critiques of Scientific Practice - 7.8. Classifying Philosophy - 7.9. Evaluating the System - Chapter 8: Concluding Remarks: 8.1. The Classifications - 8.2. Advantages of These Various Classifications - 8.3. Drawing Connections Across Classifications - 8.4. Golden Mean Arguments - 8.5. Why Should Science Be Believed? - 8.6. How Can Science Be Improved? - 8.7. How Should Science Be Taught?
    Footnote
     Review in: KO 32(2005) no.2, S.93-95 (H. Albrechtsen): "The book deals with mapping of the structures and contents of sciences, defined broadly to include the social sciences and the humanities. According to the author, the study of science, as well as the practice of science, could benefit from a detailed classification of different types of science. The book defines five universal constituents of the sciences: phenomena, data, theories, methods and practice. For each of these constituents, the author poses five questions, in the well-known 5W format: Who, What, Where, When, Why? - with the addition of the question How? (Szostak 2003). Two objectives of the author's endeavor stand out: 1) decision support for university curriculum development across disciplines, and for advanced university students in selecting appropriate courses for their projects, as well as support for cross-disciplinary inquiry by researchers and students; 2) decision support for researchers and students in scientific inquiry across disciplines, methods and theories. The main prospective audience of this book is university curriculum developers, university students and researchers, in that order of priority. The heart of the book is the chapters unfolding the author's ideas about how to classify phenomena and data, theory, method and practice, by use of the 5W inquiry model. . . .
  6. Maniez, J.: Des classifications aux thesaurus : du bon usage des facettes (1999) 0.01
    0.009184138 = product of:
      0.03673655 = sum of:
        0.03673655 = product of:
          0.0734731 = sum of:
            0.0734731 = weight(_text_:22 in 6404) [ClassicSimilarity], result of:
              0.0734731 = score(doc=6404,freq=2.0), product of:
                0.15825124 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191016 = queryNorm
                0.46428138 = fieldWeight in 6404, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6404)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1. 8.1996 22:01:00
  7. Maniez, J.: Du bon usage des facettes : des classifications aux thésaurus (1999) 0.01
    0.009184138 = product of:
      0.03673655 = sum of:
        0.03673655 = product of:
          0.0734731 = sum of:
            0.0734731 = weight(_text_:22 in 3773) [ClassicSimilarity], result of:
              0.0734731 = score(doc=3773,freq=2.0), product of:
                0.15825124 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191016 = queryNorm
                0.46428138 = fieldWeight in 3773, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3773)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1. 8.1996 22:01:00
  8. Foskett, D.J.: Systems theory and its relevance to documentary classification (2017) 0.01
    0.009184138 = product of:
      0.03673655 = sum of:
        0.03673655 = product of:
          0.0734731 = sum of:
            0.0734731 = weight(_text_:22 in 3176) [ClassicSimilarity], result of:
              0.0734731 = score(doc=3176,freq=2.0), product of:
                0.15825124 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191016 = queryNorm
                0.46428138 = fieldWeight in 3176, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3176)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    6. 5.2017 18:46:22
  9. Hjoerland, B.: The methodology of constructing classification schemes : a discussion of the state-of-the-art (2003) 0.01
    0.0090230545 = product of:
      0.036092218 = sum of:
        0.036092218 = product of:
          0.072184436 = sum of:
            0.072184436 = weight(_text_:methods in 2760) [ClassicSimilarity], result of:
              0.072184436 = score(doc=2760,freq=10.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.397302 = fieldWeight in 2760, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2760)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     Special classifications have been somewhat neglected in KO compared to general classifications. The methodology of constructing special classifications is, however, also important for the methodology of constructing general classification schemes. The methodology of constructing special classifications can be regarded as one among about a dozen approaches to domain analysis. The methodology of (special) classification in LIS has been dominated by the rationalistic facet-analytic tradition, which, however, neglects the question of the empirical basis of classification. The empirical basis is much better grasped by, for example, bibliometric methods. Even the combination of rational and empirical methods is insufficient. This presentation will provide evidence for the necessity of historical and pragmatic methods for the methodology of classification and will point to the necessity of analyzing "paradigms". The presentation covers the methods of constructing classifications from Ranganathan to the design of ontologies in computer science and further to the recent "paradigm shift" in classification research. 1. Introduction Classification of a subject field is one among about eleven approaches to analyzing a domain that are specific to information science and in my opinion define the special competencies of information specialists (Hjoerland, 2002a). Classification and knowledge organization are commonly regarded as core qualifications of librarians and information specialists. Seen from this perspective one expects a firm methodological basis for the field. This paper tries to explore the state-of-the-art concerning the methodology of classification. 2. Classification: Science or non-science? As it is part of the curriculum at universities and a subject of scientific journals and conferences like ISKO, one expects classification/knowledge organization to be a scientific or scholarly activity and a scientific field. However, very often when information specialists classify or index documents and when they revise classification systems, the methods seem to be rather ad hoc. Research libraries or scientific databases may employ people with adequate subject knowledge. When information scientists construct or evaluate systems, they very often elicit the knowledge from "experts" (Hjoerland, 2002b, p. 260). Mostly no specific arguments are provided for the specific decisions in these processes.
  10. The need for a faceted classification as the basis of all methods of information retrieval : Memorandum of the Classification Research Group (1997) 0.01
    0.008070464 = product of:
      0.032281857 = sum of:
        0.032281857 = product of:
          0.064563714 = sum of:
            0.064563714 = weight(_text_:methods in 562) [ClassicSimilarity], result of:
              0.064563714 = score(doc=562,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.35535768 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0625 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  11. Szostak, R.: Interdisciplinarity and the classification of scholarly documents by phenomena, theories and methods (2007) 0.01
    0.008070464 = product of:
      0.032281857 = sum of:
        0.032281857 = product of:
          0.064563714 = sum of:
            0.064563714 = weight(_text_:methods in 1135) [ClassicSimilarity], result of:
              0.064563714 = score(doc=1135,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.35535768 = fieldWeight in 1135, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1135)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  12. Beghtol, C.: Classification for information retrieval and classification for knowledge discovery : relationships between "professional" and "naïve" classifications (2003) 0.01
    0.0071333502 = product of:
      0.028533401 = sum of:
        0.028533401 = product of:
          0.057066802 = sum of:
            0.057066802 = weight(_text_:methods in 3021) [ClassicSimilarity], result of:
              0.057066802 = score(doc=3021,freq=4.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.31409478 = fieldWeight in 3021, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3021)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     Classification is a transdisciplinary activity that occurs during all human pursuits. Classificatory activity, however, serves different purposes in different situations. In information retrieval, the primary purpose of classification is to find knowledge that already exists, but one of the purposes of classification in other fields is to discover new knowledge. In this paper, classifications for information retrieval are called "professional" classifications because they are devised by people who have a professional interest in classification, and classifications for knowledge discovery are called "naive" classifications because they are devised by people who have no particular interest in studying classification as an end in itself. This paper compares the overall purposes and methods of these two kinds of classifications and provides a general model of the relationships between the two kinds of classificatory activity in the context of information studies. This model addresses issues of the influence of scholarly activity and communication on the creation and revision of classifications for the purposes of information retrieval and for the purposes of knowledge discovery. Further comparisons elucidate the relationships between the universality of classificatory methods and the specific purposes served by naive and professional classification systems.
  13. Szostak, R.: Classification, interdisciplinarity, and the study of science (2008) 0.01
    0.0071333502 = product of:
      0.028533401 = sum of:
        0.028533401 = product of:
          0.057066802 = sum of:
            0.057066802 = weight(_text_:methods in 1893) [ClassicSimilarity], result of:
              0.057066802 = score(doc=1893,freq=4.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.31409478 = fieldWeight in 1893, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1893)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - This paper aims to respond to the 2005 paper by Hjørland and Nissen Pedersen by suggesting that an exhaustive and universal classification of the phenomena that scholars study, and the methods and theories they apply, is feasible. It seeks to argue that such a classification is critical for interdisciplinary scholarship. Design/methodology/approach - The paper presents a literature-based conceptual analysis, taking Hjørland and Nissen Pedersen as its starting point. Hjørland and Nissen Pedersen had identified several difficulties that would be encountered in developing such a classification; the paper suggests how each of these can be overcome. It also urges a deductive approach as complementary to the inductive approach recommended by Hjørland and Nissen Pedersen. Findings - The paper finds that an exhaustive and universal classification of scholarly documents in terms of (at least) the phenomena that scholars study, and the theories and methods they apply, appears to be both possible and desirable. Practical implications - The paper suggests how such a project can be begun. In particular it stresses the importance of classifying documents in terms of causal links between phenomena. Originality/value - The paper links the information science, interdisciplinary, and study of science literatures, and suggests that the types of classification outlined above would be of great value to scientists/scholars, and that they are possible.
  14. Zarrad, R.; Doggaz, N.; Zagrouba, E.: Wikipedia HTML structure analysis for ontology construction (2018) 0.01
    0.0071333502 = product of:
      0.028533401 = sum of:
        0.028533401 = product of:
          0.057066802 = sum of:
            0.057066802 = weight(_text_:methods in 4302) [ClassicSimilarity], result of:
              0.057066802 = score(doc=4302,freq=4.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.31409478 = fieldWeight in 4302, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4302)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     Previously, the main problem of information extraction was to gather enough data. Today, the challenge is not to collect data but to interpret and represent them in order to deduce information. Ontologies are considered suitable solutions for organizing information. The classic methods for ontology construction from textual documents rely on natural language analysis and are generally based on statistical or linguistic approaches. However, these approaches do not consider the document structure, which provides additional knowledge: the structural organization of documents also conveys meaning. In this context, new approaches focus on document structure analysis to extract knowledge. This paper describes a methodology for ontology construction from web data, and especially from Wikipedia articles. It focuses mainly on document structure in order to extract the main concepts and their relations. The proposed methods extract not only taxonomic and non-taxonomic relations but also provide the labels describing the non-taxonomic relations. Non-taxonomic relations are extracted by analyzing the hierarchy of titles in each document. Pattern matching is also applied in order to extract known semantic relations. We also propose refining the extracted relations in order to keep only those that are relevant. The refinement process applies the transitive property, checks the nature of the relations, and analyzes taxonomic relations having inverted arguments. Experiments have been performed on French Wikipedia articles related to the medical field. The ontology is evaluated by comparing it to gold standards.
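     The core step described above, analyzing the hierarchy of titles in each article to obtain relations between concepts, can be sketched as a pass over heading levels; the heading data and the pairing rule below are a simplified stand-in, not the authors' implementation:

       # Simplified stand-in for the titles-hierarchy analysis: each deeper heading
       # is related to the nearest shallower heading above it. Not the authors' code.
       headings = [
           (1, "Diabetes"),
           (2, "Causes"),
           (3, "Genetic factors"),
           (2, "Treatment"),
           (3, "Insulin therapy"),
       ]

       def relations_from_headings(headings):
           """Yield (parent concept, child concept) pairs from (level, title) sequences."""
           stack = []  # open ancestor headings as (level, title)
           for level, title in headings:
               while stack and stack[-1][0] >= level:
                   stack.pop()
               if stack:
                   yield stack[-1][1], title
               stack.append((level, title))

       print(list(relations_from_headings(headings)))
       # [('Diabetes', 'Causes'), ('Causes', 'Genetic factors'),
       #  ('Diabetes', 'Treatment'), ('Treatment', 'Insulin therapy')]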
  15. Mirorikawa, N.: Structures of classification systems : hierarchical and multidimensional (1996) 0.01
    0.0070616566 = product of:
      0.028246626 = sum of:
        0.028246626 = product of:
          0.056493253 = sum of:
            0.056493253 = weight(_text_:methods in 6583) [ClassicSimilarity], result of:
              0.056493253 = score(doc=6583,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.31093797 = fieldWeight in 6583, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6583)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     Considers classification systems from a structural point of view. Distinguishes between 2 kinds of methods of categorizing classification systems: the first categorizes by structure, either hierarchical or multidimensional; the second by style of expression, either enumerative or synthetic. Identifies 4 leading classification systems according to their structures: DDC, LCC, UDC and Colon Classification. Focuses on DDC, referring to 2 interpretations of its structure, one of which is hierarchical and the other partially multidimensional. Also relates this to the interpretation of the notation '0', read in one instance as 'generalities' and in another as 'coordination sign'.
  16. Kochar, R.S.: Library classification systems (1998) 0.01
    0.0070616566 = product of:
      0.028246626 = sum of:
        0.028246626 = product of:
          0.056493253 = sum of:
            0.056493253 = weight(_text_:methods in 931) [ClassicSimilarity], result of:
              0.056493253 = score(doc=931,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.31093797 = fieldWeight in 931, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=931)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
     Contents: Preface. 1. Classification systems. 2. Automatic classification. 3. Knowledge classification. 4. Reflections on library classification. 5. General classification schemes. 6. Hierarchical classification. 7. Faceted classification. 8. Present methods and future directions. Index.
  17. Foskett, D.J.: Systems theory and its relevance to documentary classification (2017) 0.01
    0.0070616566 = product of:
      0.028246626 = sum of:
        0.028246626 = product of:
          0.056493253 = sum of:
            0.056493253 = weight(_text_:methods in 3617) [ClassicSimilarity], result of:
              0.056493253 = score(doc=3617,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.31093797 = fieldWeight in 3617, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3617)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     In view of the impact of systems theory on the construction of classification systems, the two major contributions of Dewey are summarized, as well as the new methods of facet analysis and organization brought into classification by Ranganathan. With the latter's "canonical" solution for the contents and arrangement of main classes, however, contemporary philosophical thought regarding the organization of knowledge seems to have been neglected. The work of the Classification Research Group and elsewhere on integrative level theory will improve the science of classification system construction. Besides this, the influence of psychology and linguistics on the recognition of relationships between concepts is outlined, as well as some practical implications of the systems approach for classification. (I.C.)
  18. Szostak, R.: A pluralistic approach to the philosophy of classification : a case for "public knowledge" (2015) 0.01
    0.0070616566 = product of:
      0.028246626 = sum of:
        0.028246626 = product of:
          0.056493253 = sum of:
            0.056493253 = weight(_text_:methods in 5541) [ClassicSimilarity], result of:
              0.056493253 = score(doc=5541,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.31093797 = fieldWeight in 5541, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5541)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Any classification system should be evaluated with respect to a variety of philosophical and practical concerns. This paper explores several distinct issues: the nature of a work, the value of a statement, the contribution of information science to philosophy, the nature of hierarchy, ethical evaluation, pre- versus postcoordination, the lived experience of librarians, and formalization versus natural language. It evaluates a particular approach to classification in terms of each of these but draws general lessons for philosophical evaluation. That approach to classification emphasizes the free combination of basic concepts representing both real things in the world and the relationships among these; works are also classified in terms of theories, methods, and perspectives applied.
  19. Connaway, L.S.; Sievert, M.C.: Comparison of three classification systems for information on health insurance (1996) 0.01
    0.006122759 = product of:
      0.024491036 = sum of:
        0.024491036 = product of:
          0.048982073 = sum of:
            0.048982073 = weight(_text_:22 in 7242) [ClassicSimilarity], result of:
              0.048982073 = score(doc=7242,freq=2.0), product of:
                0.15825124 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191016 = queryNorm
                0.30952093 = fieldWeight in 7242, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7242)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 4.1997 21:10:19
  20. Belayche, C.: A propos de la classification de Dewey (1997) 0.01
    0.006122759 = product of:
      0.024491036 = sum of:
        0.024491036 = product of:
          0.048982073 = sum of:
            0.048982073 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
              0.048982073 = score(doc=1171,freq=2.0), product of:
                0.15825124 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191016 = queryNorm
                0.30952093 = fieldWeight in 1171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1171)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Bulletin d'informations de l'Association des Bibliothecaires Francais. 1997, no.175, S.22-23