Search (30 results, page 1 of 2)

  • author_ss:"Szostak, R."
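
The line above is the active filter for this result list. A minimal sketch of how such a filter is typically passed to a Solr-style backend follows; that the backend is Solr is an assumption (suggested by the author_ss field suffix and the Lucene explain output below), and the endpoint and paging parameters are illustrative placeholders. Only the field-query syntax author_ss:"Szostak, R." is taken from the page itself.

    from urllib.parse import urlencode

    # Hypothetical request parameters for the result list shown here; only the
    # field-query syntax author_ss:"Szostak, R." comes from the page itself.
    params = {
        "fq": 'author_ss:"Szostak, R."',   # the active facet filter
        "rows": 20,                        # page size (assumed)
        "start": 0,                        # offset for page 1
    }
    print(urlencode(params))   # query string to append to the search handler URL (assumed)
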
  1. Szostak, R.: Speaking truth to power in classification : response to Fox's review of my work; KO 39:4, 300 (2013) 0.03
    0.025881905 = product of:
      0.07764571 = sum of:
        0.07764571 = sum of:
          0.007594823 = weight(_text_:a in 591) [ClassicSimilarity], result of:
            0.007594823 = score(doc=591,freq=2.0), product of:
              0.04968032 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04308612 = queryNorm
              0.15287387 = fieldWeight in 591, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.09375 = fieldNorm(doc=591)
          0.07005089 = weight(_text_:22 in 591) [ClassicSimilarity], result of:
            0.07005089 = score(doc=591,freq=2.0), product of:
              0.15088025 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04308612 = queryNorm
              0.46428138 = fieldWeight in 591, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=591)
      0.33333334 = coord(1/3)
    
    Date
    22. 2.2013 12:35:05
    Type
    a
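
The nested score breakdown shown with each hit is standard Lucene ClassicSimilarity (TF-IDF) explain output: each matching term clause contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm, and the sum over matching clauses is scaled by the coordination factor coord. A minimal sketch that reproduces the 0.03 shown for hit 1 from the figures in its explain tree follows; the function names are illustrative, and the idf formula 1 + ln(maxDocs / (docFreq + 1)) is the usual ClassicSimilarity definition, assumed here rather than taken from the page.

    import math

    # Minimal sketch (not the database's own code) of Lucene ClassicSimilarity
    # scoring, using the figures from the explain tree of hit 1 (doc 591).

    MAX_DOCS = 44218            # maxDocs reported in the explain tree
    QUERY_NORM = 0.04308612     # queryNorm shared by all clauses of this query

    def idf(doc_freq, max_docs=MAX_DOCS):
        # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def clause_score(freq, doc_freq, field_norm, query_norm=QUERY_NORM):
        # One matching term clause contributes queryWeight * fieldWeight, with
        # queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm.
        term_idf = idf(doc_freq)
        query_weight = term_idf * query_norm
        field_weight = math.sqrt(freq) * term_idf * field_norm   # tf = sqrt(freq)
        return query_weight * field_weight

    # Hit 1 matches the clauses _text_:a and _text_:22, both with freq=2.0
    # and fieldNorm=0.09375 in doc 591.
    clause_a  = clause_score(freq=2.0, doc_freq=37942, field_norm=0.09375)  # ~0.0075948
    clause_22 = clause_score(freq=2.0, doc_freq=3622,  field_norm=0.09375)  # ~0.0700509

    # Only one of the three top-level query clauses matched: coord(1/3).
    score = (clause_a + clause_22) / 3.0
    print(round(score, 9))   # ~0.025881905, displayed as 0.03 in the hit list

From hit 3 onward only the _text_:a clause matches, and it sits inside a nested boolean clause, so those explain trees carry an extra coord(1/2) factor: for hit 3, 0.011721508 * 0.5 * 0.33333334 = 0.0019535846, displayed as 0.00.
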
  2. Szostak, R.: Skepticism and knowledge organization (2014) 0.02
    0.01617885 = product of:
      0.048536547 = sum of:
        0.048536547 = sum of:
          0.007673528 = weight(_text_:a in 1404) [ClassicSimilarity], result of:
            0.007673528 = score(doc=1404,freq=6.0), product of:
              0.04968032 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04308612 = queryNorm
              0.1544581 = fieldWeight in 1404, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1404)
          0.04086302 = weight(_text_:22 in 1404) [ClassicSimilarity], result of:
            0.04086302 = score(doc=1404,freq=2.0), product of:
              0.15088025 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04308612 = queryNorm
              0.2708308 = fieldWeight in 1404, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1404)
      0.33333334 = coord(1/3)
    
    Abstract
    The key argument of this paper is that the field of knowledge organization can potentially provide a powerful - and indeed the only powerful - response to the skeptical claims that are common in the contemporary academy. Though skeptical arguments have an important place in our field - the present author readily confesses to having learned much in responding to such arguments - it would be unfortunate if the field of knowledge organization were to assume the correctness of a skeptical outlook. Rather, the field should essay to combat the sources of skepticism. Strategies for doing so are outlined.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
    Type
    a
  3. Szostak, R.: A pluralistic approach to the philosophy of classification : a case for "public knowledge" (2015) 0.00
    0.0019535846 = product of:
      0.005860754 = sum of:
        0.005860754 = product of:
          0.011721508 = sum of:
            0.011721508 = weight(_text_:a in 5541) [ClassicSimilarity], result of:
              0.011721508 = score(doc=5541,freq=14.0), product of:
                0.04968032 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04308612 = queryNorm
                0.23593865 = fieldWeight in 5541, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5541)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Any classification system should be evaluated with respect to a variety of philosophical and practical concerns. This paper explores several distinct issues: the nature of a work, the value of a statement, the contribution of information science to philosophy, the nature of hierarchy, ethical evaluation, pre- versus postcoordination, the lived experience of librarians, and formalization versus natural language. It evaluates a particular approach to classification in terms of each of these but draws general lessons for philosophical evaluation. That approach to classification emphasizes the free combination of basic concepts representing both real things in the world and the relationships among these; works are also classified in terms of theories, methods, and perspectives applied.
    Type
    a
  4. Szostak, R.: Classifying relationships (2012) 0.00
    0.0018086678 = product of:
      0.005426003 = sum of:
        0.005426003 = product of:
          0.010852006 = sum of:
            0.010852006 = weight(_text_:a in 1923) [ClassicSimilarity], result of:
              0.010852006 = score(doc=1923,freq=12.0), product of:
                0.04968032 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04308612 = queryNorm
                0.21843673 = fieldWeight in 1923, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1923)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper develops a classification of relationships among things, with many potential uses within information science. Unlike previous classifications of relationships, it is hoped that this classification will provide benefits that exceed the costs of application. The major theoretical innovation is to stress the importance of causal relationships, albeit not exclusively. The paper also stresses the advantages of using compounds of simpler terms: verbs compounded with other verbs, adverbs, or things. The classification builds upon a review of the previous literature and a broad inductive survey of potential sources in a recent article in this journal. The result is a classification that is both manageable in size and easy to apply and yet encompasses all of the relationships necessary for classifying documents or even ideas.
    Type
    a
  5. Szostak, R.: Toward a classification of relationships (2012) 0.00
    0.0018086678 = product of:
      0.005426003 = sum of:
        0.005426003 = product of:
          0.010852006 = sum of:
            0.010852006 = weight(_text_:a in 131) [ClassicSimilarity], result of:
              0.010852006 = score(doc=131,freq=12.0), product of:
                0.04968032 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04308612 = queryNorm
                0.21843673 = fieldWeight in 131, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=131)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Several attempts have been made to develop a classification of relationships, but none of these have been widely accepted or applied within information science. It would seem that information scientists, while appreciating the potential value of a classification of relationships, have found all previous classifications to be too complicated in application relative to the benefits they provide. This paper begins by reviewing previous attempts and drawing lessons from these. It then surveys a range of sources within and beyond the field of knowledge organization that can together provide the basis for the development of a novel classification of relationships. One critical insight is that relationships governing causation/influence should be accorded priority.
    Type
    a
  6. Szostak, R.: Classifying for social diversity (2014) 0.00
    0.0017901169 = product of:
      0.0053703506 = sum of:
        0.0053703506 = product of:
          0.010740701 = sum of:
            0.010740701 = weight(_text_:a in 1378) [ClassicSimilarity], result of:
              0.010740701 = score(doc=1378,freq=16.0), product of:
                0.04968032 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04308612 = queryNorm
                0.2161963 = fieldWeight in 1378, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1378)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     This paper argues that a new approach to classification best supports and respects social diversity. We should want a classification that facilitates communication both within groups and across groups. We should also want no group to be privileged within the classification. These goals are best accomplished through a truly universal classification, grounded in basic concepts, that classifies works in terms of authorial perspective. Strategies for classifying perspective are discussed. The paper then addresses issues of classification structure. It follows a feminist approach to classification, and shows how a web-of-relations approach can be instantiated in a classification. Finally the paper turns to classificatory process. The key argument here is that much (perhaps all) of the concern regarding the possibility that classes can be subdivided into subclasses in multiple ways, each favored by different groups or individuals, simply vanishes within a web-of-relations approach. The reason is that most of these supposed ways of subdividing classes are in fact ways of subdividing different relationships among classes.
    Type
    a
  7. Szostak, R.: Employing a synthetic approach to subject classification across galleries, libraries, archives, and museums (2016) 0.00
    0.0016877383 = product of:
      0.005063215 = sum of:
        0.005063215 = product of:
          0.01012643 = sum of:
            0.01012643 = weight(_text_:a in 4930) [ClassicSimilarity], result of:
              0.01012643 = score(doc=4930,freq=8.0), product of:
                0.04968032 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04308612 = queryNorm
                0.20383182 = fieldWeight in 4930, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4930)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Knowledge organization for a sustainable world: challenges and perspectives for cultural, scientific, and technological sharing in a connected society : proceedings of the Fourteenth International ISKO Conference 27-29 September 2016, Rio de Janeiro, Brazil / organized by International Society for Knowledge Organization (ISKO), ISKO-Brazil, São Paulo State University ; edited by José Augusto Chaves Guimarães, Suellen Oliveira Milani, Vera Dodebei
    Type
    a
  8. Szostak, R.: Facet analysis using grammar (2017) 0.00
    0.001667843 = product of:
      0.0050035287 = sum of:
        0.0050035287 = product of:
          0.010007057 = sum of:
            0.010007057 = weight(_text_:a in 3866) [ClassicSimilarity], result of:
              0.010007057 = score(doc=3866,freq=20.0), product of:
                0.04968032 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04308612 = queryNorm
                0.20142901 = fieldWeight in 3866, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3866)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Basic grammar can achieve most/all of the goals of facet analysis without requiring the use of facet indicators. Facet analysis is thus rendered far simpler for classificationist, classifier, and user. We compare facet analysis and grammar, and show how various facets can be represented grammatically. We then address potential challenges in employing grammar as subject classification. A detailed review of basic grammar supports the hypothesis that it is feasible to usefully employ grammatical construction in subject classification. A manageable - and programmable - set of adjustments is required as classifiers move fairly directly from sentences in a document (or object or idea) description to formulating a subject classification. The user likewise can move fairly quickly from a query to the identification of relevant works. A review of theories in linguistics indicates that a grammatical approach should reduce ambiguity while encouraging ease of use. This paper applies the recommended approach to a small sample of recently published books. It finds that the approach is feasible and results in a more precise subject description than the subject headings assigned at present. It then explores PRECIS, an indexing system developed in the 1970s. Though our approach differs from PRECIS in many important ways, the experience of PRECIS supports our conclusions regarding both feasibility and precision.
    Type
    a
  9. Szostak, R.: Classifying scholarly theories and methods (2003) 0.00
    0.0016510803 = product of:
      0.004953241 = sum of:
        0.004953241 = product of:
          0.009906482 = sum of:
            0.009906482 = weight(_text_:a in 2104) [ClassicSimilarity], result of:
              0.009906482 = score(doc=2104,freq=10.0), product of:
                0.04968032 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04308612 = queryNorm
                0.19940455 = fieldWeight in 2104, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2104)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     This paper develops a simple yet powerful typology of scholarly theory, based on the 5W questions: "Who?", "What?", "Where?", "When?", and "Why?". It also develops a list of the twelve distinct methods used by scholars. These are then evaluated in terms of the 5W questions. Classifying theory types and methods allows scholars and students to better appreciate the advantages and disadvantages of different theory types and methods. Classifications of theory and method can and should be important components of a system for classifying scholarly documents. Researchers and students are presently limited in their ability to search by theory type or method. As a result, scholars often "re-invent" previous research of which they were unaware.
    Type
    a
  10. Gnoli, C.; Pullman, T.; Cousson, P.; Merli, G.; Szostak, R.: Representing the structural elements of a freely faceted classification (2011) 0.00
    0.0015822549 = product of:
      0.0047467644 = sum of:
        0.0047467644 = product of:
          0.009493529 = sum of:
            0.009493529 = weight(_text_:a in 4825) [ClassicSimilarity], result of:
              0.009493529 = score(doc=4825,freq=18.0), product of:
                0.04968032 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04308612 = queryNorm
                0.19109234 = fieldWeight in 4825, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4825)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Freely faceted classifications allow for free combination of concepts across all knowledge domains, and for sorting of the resulting compound classmarks. Starting from work by the Classification Research Group, the Integrative Levels Classification (ILC) project has produced a first edition of a general freely faceted scheme. The system is managed as a MySQL database, and can be browsed through a Web interface. The ILC database structure provides a case for identifying and representing the structural elements of any freely faceted classification. These belong to both the notational and the verbal planes. Notational elements include: arrays, chains, deictics, facets, foci, place of definition of foci, examples of combinations, subclasses of a faceted class, groupings, related classes; verbal elements include: main caption, synonyms, descriptions, included terms, related terms, notes. Encoding of some of these elements in an international mark-up format like SKOS can be problematic, especially as this does not provide for faceted structures, although approximate SKOS equivalents are identified for most of them.
    Source
     Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic and E. Civallero
    Type
    a
  11. Szostak, R.: Classifying the humanities (2014) 0.00
    0.0015502865 = product of:
      0.0046508596 = sum of:
        0.0046508596 = product of:
          0.009301719 = sum of:
            0.009301719 = weight(_text_:a in 1084) [ClassicSimilarity], result of:
              0.009301719 = score(doc=1084,freq=12.0), product of:
                0.04968032 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04308612 = queryNorm
                0.18723148 = fieldWeight in 1084, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1084)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     A synthetic and universal approach to classification which allows the free combination of basic concepts would better address a variety of challenges in classifying both humanities scholarship and the works of art (including literature) that humanists study. Four key characteristics of this classificatory approach are stressed: a universal non-discipline-based approach, a synthetic approach that allows free combination of any concepts but stresses a sentence-like structure, emphasis on basic concepts (for which there are broadly shared understandings across groups and individuals), and finally classification of works also in terms of the theories, methods, and perspectives applied. The implications of these four characteristics, alone or (often) in concert, for many aspects of classification in the humanities are discussed. Several advantages are found both for classifying humanities scholarship and works of art. These four characteristics are each found in the Basic Concepts Classification (which is briefly compared to other faceted classifications), but each could potentially be adopted elsewhere as well.
    Type
    a
  12. Szostak, R.: Classification, interdisciplinarity, and the study of science (2008) 0.00
    0.0014917641 = product of:
      0.0044752923 = sum of:
        0.0044752923 = product of:
          0.008950585 = sum of:
            0.008950585 = weight(_text_:a in 1893) [ClassicSimilarity], result of:
              0.008950585 = score(doc=1893,freq=16.0), product of:
                0.04968032 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04308612 = queryNorm
                0.18016359 = fieldWeight in 1893, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1893)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - This paper aims to respond to the 2005 paper by Hjørland and Nissen Pedersen by suggesting that an exhaustive and universal classification of the phenomena that scholars study, and the methods and theories they apply, is feasible. It seeks to argue that such a classification is critical for interdisciplinary scholarship. Design/methodology/approach - The paper presents a literature-based conceptual analysis, taking Hjørland and Nissen Pedersen as its starting point. Hjørland and Nissen Pedersen had identified several difficulties that would be encountered in developing such a classification; the paper suggests how each of these can be overcome. It also urges a deductive approach as complementary to the inductive approach recommended by Hjørland and Nissen Pedersen. Findings - The paper finds that an exhaustive and universal classification of scholarly documents in terms of (at least) the phenomena that scholars study, and the theories and methods they apply, appears to be both possible and desirable. Practical implications - The paper suggests how such a project can be begun. In particular it stresses the importance of classifying documents in terms of causal links between phenomena. Originality/value - The paper links the information science, interdisciplinary, and study of science literatures, and suggests that the types of classification outlined above would be of great value to scientists/scholars, and that they are possible.
    Content
     Refers to: Hjoerland, B., K.N. Pedersen: A substantive theory of classification for information retrieval. In: Journal of documentation. 61(2005) no.5, pp.582-597. - See also: Hjoerland, B.: Core classification theory : a reply to Szostak. In: Journal of documentation. 64(2008) no.3, pp.333-342.
    Type
    a
  13. Szostak, R.: The basic concepts classification (2012) 0.00
    0.0014917641 = product of:
      0.0044752923 = sum of:
        0.0044752923 = product of:
          0.008950585 = sum of:
            0.008950585 = weight(_text_:a in 821) [ClassicSimilarity], result of:
              0.008950585 = score(doc=821,freq=4.0), product of:
                0.04968032 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04308612 = queryNorm
                0.18016359 = fieldWeight in 821, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=821)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
     Categories, contexts and relations in knowledge organization: Proceedings of the Twelfth International ISKO Conference 6-9 August 2012, Mysore, India. Eds.: Neelameghan, A. and K.S. Raghavan
    Type
    a
  14. Szostak, R.: A grammatical approach to subject classification in museums (2017) 0.00
    0.0014767711 = product of:
      0.004430313 = sum of:
        0.004430313 = product of:
          0.008860626 = sum of:
            0.008860626 = weight(_text_:a in 4136) [ClassicSimilarity], result of:
              0.008860626 = score(doc=4136,freq=8.0), product of:
                0.04968032 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04308612 = queryNorm
                0.17835285 = fieldWeight in 4136, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4136)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Several desiderata of a system of subject classification for museums are identified. The limitations of existing approaches are reviewed. It is argued that an approach which synthesizes basic concepts within a grammatical structure can achieve the goals of subject classification in museums while addressing diverse challenges. The same approach can also be applied in galleries, archives, and libraries. The approach is described in some detail and examples are provided of its application. The article closes with brief discussions of thesauri and linked open data.
    Type
    a
  15. Szostak, R.: Universal and domain-specific classifications from an interdisciplinary perspective (2010) 0.00
    0.0014616244 = product of:
      0.004384873 = sum of:
        0.004384873 = product of:
          0.008769746 = sum of:
            0.008769746 = weight(_text_:a in 3516) [ClassicSimilarity], result of:
              0.008769746 = score(doc=3516,freq=6.0), product of:
                0.04968032 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04308612 = queryNorm
                0.17652355 = fieldWeight in 3516, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3516)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    A universal non-discipline-based classification is a complement to, rather than substitute for, domain-specific classifications. Cognitive work analysis suggests that especially interdisciplinary researchers but also specialized researchers would benefit from both types of classification. Both practical and theoretical considerations point to complementarity. The research efforts of scholars pursuing both types of classification can thus usefully reinforce each other.
    Type
    a
  16. Szostak, R.; Gnoli, C.; López-Huertas, M.: Interdisciplinary knowledge organization 0.00
    0.0014616244 = product of:
      0.004384873 = sum of:
        0.004384873 = product of:
          0.008769746 = sum of:
            0.008769746 = weight(_text_:a in 3804) [ClassicSimilarity], result of:
              0.008769746 = score(doc=3804,freq=24.0), product of:
                0.04968032 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04308612 = queryNorm
                0.17652355 = fieldWeight in 3804, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3804)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This book proposes a novel approach to classification, discusses its myriad advantages, and outlines how such an approach to classification can best be pursued. It encourages a collaborative effort toward the detailed development of such a classification. This book is motivated by the increased importance of interdisciplinary scholarship in the academy, and the widely perceived shortcomings of existing knowledge organization schemes in serving interdisciplinary scholarship. It is designed for scholars of classification research, knowledge organization, the digital environment, and interdisciplinarity itself. The approach recommended blends a general classification with domain-specific classification practices. The book reaches a set of very strong conclusions:
     - Existing classification systems serve interdisciplinary research and teaching poorly.
     - A novel approach to classification, grounded in the phenomena studied rather than disciplines, would serve interdisciplinary scholarship much better. It would also have advantages for disciplinary scholarship. The productivity of scholarship would thus be increased.
     - This novel approach is entirely feasible. Various concerns that might be raised can each be addressed. The broad outlines of what a new classification would look like are developed.
     - This new approach might serve as a complement to or a substitute for existing classification systems.
     - Domain analysis can and should be employed in the pursuit of a general classification. This will be particularly important with respect to interdisciplinary domains.
     - Though the impetus for this novel approach comes from interdisciplinarity, it is also better suited to the needs of the Semantic Web, and a digital environment more generally.
     Though the primary focus of the book is on classification systems, most chapters also address how the analysis could be extended to thesauri and ontologies. The possibility of a universal thesaurus is explored. The classification proposed has many of the advantages sought in ontologies for the Semantic Web. The book is therefore of interest to scholars working in these areas as well.
  17. Szostak, R.: Complex concepts into basic concepts (2011) 0.00
    0.0012919056 = product of:
      0.0038757168 = sum of:
        0.0038757168 = product of:
          0.0077514336 = sum of:
            0.0077514336 = weight(_text_:a in 4926) [ClassicSimilarity], result of:
              0.0077514336 = score(doc=4926,freq=12.0), product of:
                0.04968032 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04308612 = queryNorm
                0.15602624 = fieldWeight in 4926, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4926)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Interdisciplinary communication, and thus the rate of progress in scholarly understanding, would be greatly enhanced if scholars had access to a universal classification of documents or ideas not grounded in particular disciplines or cultures. Such a classification is feasible if complex concepts can be understood as some combination of more basic concepts. There appear to be five main types of concept theory in the philosophical literature. Each provides some support for the idea of breaking complex into basic concepts that can be understood across disciplines or cultures, but each has detractors. None of these criticisms represents a substantive obstacle to breaking complex concepts into basic concepts within information science. Can we take the subject entries in existing universal but discipline-based classifications, and break these into a set of more basic concepts that can be applied across disciplinary classes? The author performs this sort of analysis for Dewey classes 300 to 339.9. This analysis will serve to identify the sort of 'basic concepts' that would lie at the heart of a truly universal classification. There are two key types of basic concept: the things we study (individuals, rocks, trees), and the relationships among these (talking, moving, paying).
    Type
    a
  18. Szostak, R.; Gnoli, C.: Classifying by phenomena, theories and methods : examples with focused social science theories (2008) 0.00
    0.0012789214 = product of:
      0.003836764 = sum of:
        0.003836764 = product of:
          0.007673528 = sum of:
            0.007673528 = weight(_text_:a in 2250) [ClassicSimilarity], result of:
              0.007673528 = score(doc=2250,freq=6.0), product of:
                0.04968032 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04308612 = queryNorm
                0.1544581 = fieldWeight in 2250, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2250)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
     This paper shows how a variety of theories employed across a range of social sciences could be classified in terms of theory type. In each case, notation within the Integrative Levels Classification is provided. The paper thus illustrates how one key element of the León Manifesto, namely that scholarly documents should be classified in terms of the theory(ies) applied, can be achieved in practice.
    Type
    a
  19. Szostak, R.; Smiraglia, R.P.: Comparative approaches to interdisciplinary KOSs : use cases of converting UDC to BCC (2017) 0.00
    0.0012789214 = product of:
      0.003836764 = sum of:
        0.003836764 = product of:
          0.007673528 = sum of:
            0.007673528 = weight(_text_:a in 3874) [ClassicSimilarity], result of:
              0.007673528 = score(doc=3874,freq=6.0), product of:
                0.04968032 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04308612 = queryNorm
                0.1544581 = fieldWeight in 3874, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3874)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     We take a small sample of works and compare how these are classified within both the Universal Decimal Classification and the Basic Concepts Classification. We examine notational length, expressivity, network effects, and the number of subject strings. One key finding is that BCC typically synthesizes many more terms than UDC in classifying a particular document - but the length of classificatory notations is roughly equivalent for the two KOSs. BCC captures documents with fewer subject strings (generally one) but these are more complex.
    Type
    a
  20. Szostak, R.: Facet analysis without facet indicators (2017) 0.00
    0.0012658038 = product of:
      0.0037974115 = sum of:
        0.0037974115 = product of:
          0.007594823 = sum of:
            0.007594823 = weight(_text_:a in 4159) [ClassicSimilarity], result of:
              0.007594823 = score(doc=4159,freq=2.0), product of:
                0.04968032 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04308612 = queryNorm
                0.15287387 = fieldWeight in 4159, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4159)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Type
    a