Search (106 results, page 2 of 6)

  • theme_ss:"Universale Facettenklassifikationen"
  1. Austin, D.: The theory of integrative levels reconsidered as the basis of a general classification (1969)
    
    Type
    a
  2. Foskett, D.J.: Classification for a general index language: a review of recent research by the Classification Research Group (1970)
    
  3. Smiraglia, R.P.: A brief introduction to facets in knowledge organization (2017)
    
    Type
    a
  4. Broughton, V.: Concepts and terms in the faceted classification : the case of UDC (2010)
    
    Abstract
    Recent revision of UDC classes has aimed at implementing a more faceted approach. Many compound classes have been removed from the main tables, and more radical revisions of classes (particularly those for Medicine and Religion) have introduced a rigorous analysis, a clearer sense of citation order, and building of compound classes according to a more logical system syntax. The faceted approach provides a means of formalizing the relationships in the classification and making them explicit for machine recognition. In the Bliss Bibliographic Classification (BC2) (which has been a source for both UDC classes mentioned above), terminologies are encoded for automatic generation of hierarchical and associative relationships. Nevertheless, difficulties are encountered in vocabulary control, and a similar phenomenon is observed in UDC. Current work has revealed differences in the vocabulary of humanities and science, notably the way in which terms in the humanities should be handled when these are semantically complex. Achieving a balance between rigour in the structure of the classification and the complexity of natural language expression remains partially unresolved at present, but provides a fertile field for further research.
    Content
    Teil von: Papers from Classification at a Crossroads: Multiple Directions to Usability: International UDC Seminar 2009-Part 2
    Type
    a
  5. Broughton, V.: Facet analysis as a tool for modelling subject domains and terminologies (2011)
    
    Abstract
    Facet analysis is proposed as a general theory of knowledge organization, with an associated methodology that may be applied to the development of terminology tools in a variety of contexts and formats. Faceted classifications originated as a means of representing complexity in semantic content that facilitates logical organization and effective retrieval in a physical environment. This is achieved through meticulous analysis of concepts, their structural and functional status (based on fundamental categories), and their inter-relationships. These features provide an excellent basis for the general conceptual modelling of domains, and for the generation of KOS other than systematic classifications. This is demonstrated by the adoption of a faceted approach to many web search and visualization tools, and by the emergence of a facet based methodology for the construction of thesauri. Current work on the Bliss Bibliographic Classification (Second Edition) is investigating the ways in which the full complexity of faceted structures may be represented through encoded data, capable of generating intellectually and mechanically compatible forms of indexing tools from a single source. It is suggested that a number of research questions relating to the Semantic Web could be tackled through the medium of facet analysis.
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic u. E. Civallero
    Type
    a
  6. Satija, M.P.: Use of Colon Classification (1986)
    
    Type
    a
  7. Kaiser, J.: Systematic indexing (1926)
    
    Type
    a
  8. Grolier, E. de: A study of general categories applicable to classification and coding in documentation (1962)
    
  9. Dahlberg, I.: A faceted classification of general concepts (2011)
    
    Abstract
     General concepts are form-categorial concepts which, attached to a specific concept of a classification system or thesaurus, can help to widen, sometimes even in a syntactical sense, the understanding of a case. In some existing universal classification systems such concepts have been named "auxiliaries" or, as in the Colon Classification (CC), "common isolates". Such auxiliaries, however, list different kinds of concepts side by side, e.g. concepts of space and time, of races and languages, and of kinds of documents, alongside concepts of kinds of general activities, properties, persons, and institutions. The latter kinds form part of the nine aspects governing the facets in the Information Coding Classification (ICC), through the principle of using a Systematifier for the subdivision of subject groups and fields. Based on this principle, and using and extending existing systems of such concepts, e.g. the one A. Diemer presented to the German Thesaurus Committee as well as those found in the UDC, in CC, and attached to the Subject Heading System of the German National Library, a faceted classification is proposed for critical assessment, necessary improvement, and possible later use in classification systems and thesauri.
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic u. E. Civallero
    Type
    a
  10. Szostak, R.: Facet analysis using grammar (2017)
    
    Abstract
     Basic grammar can achieve most, if not all, of the goals of facet analysis without requiring the use of facet indicators. Facet analysis is thus rendered far simpler for classificationist, classifier, and user. We compare facet analysis and grammar, and show how various facets can be represented grammatically. We then address potential challenges in employing grammar as the basis of subject classification. A detailed review of basic grammar supports the hypothesis that it is feasible to usefully employ grammatical construction in subject classification. A manageable, and programmable, set of adjustments is required as classifiers move fairly directly from sentences in a document (or object or idea) description to formulating a subject classification. The user likewise can move fairly quickly from a query to the identification of relevant works. A review of theories in linguistics indicates that a grammatical approach should reduce ambiguity while encouraging ease of use. This paper applies the recommended approach to a small sample of recently published books. It finds that the approach is feasible and results in a more precise subject description than the subject headings assigned at present. It then explores PRECIS, an indexing system developed in the 1970s. Though our approach differs from PRECIS in many important ways, the experience of PRECIS supports our conclusions regarding both feasibility and precision.
    Type
    a
  11. Dahlberg, I.: Why a new universal classification system is needed (2017)
    
    Abstract
     The research history of the last 70 years highlights various systems for the content assessment and retrieval of scientific literature, such as universal classifications, thesauri, ontologies, etc., which have followed developments of their own, notwithstanding a general trend towards interoperability, i.e. either to become instruments for cooperation or to widen their scope to encompass neighbouring fields within their framework. In the case of thesauri and ontologies, the endeavour to upgrade them into a universal system was bound to miscarry. This paper purports to indicate ways to gain from past experience and possibly rally material achievements, while updating and promoting the ontologically based, faceted Information Coding Classification as a progressive universal system fit to meet whatever requirements arise in the fields of information and science at large.
    Type
    a
  12. Kaiser, J.O.: Systematic indexing (1985)
    
    Abstract
     A native of Germany and a former teacher of languages and music, Julius Otto Kaiser (1868-1927) came to the Philadelphia Commercial Museum to be its librarian in 1896. Faced with the problem of making "information" accessible, he developed a method of indexing he called systematic indexing. The first draft of his scheme, published in 1896-97, was an important landmark in the history of subject analysis. R. K. Olding credits Kaiser with making the greatest single advance in indexing theory since Charles A. Cutter, and John Metcalfe eulogizes him by observing that "in sheer capacity for really scientific and logical thinking, Kaiser's was probably the best mind that has ever applied itself to subject indexing." Kaiser was an admirer of "system." By systematic indexing he meant indicating information not with natural language expressions as, for instance, Cutter had advocated, but with artificial expressions constructed according to formulas. Kaiser grudged natural language its approximateness, its vagaries, and its ambiguities. The formulas he introduced were to provide a "machinery for regularising or standardising language" (paragraph 67). Kaiser recognized three categories or "facets" of index terms: (1) terms of concretes, representing things, real or imaginary (e.g., money, machines); (2) terms of processes, representing either conditions attaching to things or their actions (e.g., trade, manufacture); and (3) terms of localities, representing, for the most part, countries (e.g., France, South Africa). Expressions in Kaiser's index language were called statements. Statements consisted of sequences of terms, the syntax of which was prescribed by formula. These formulas specified sequences of terms by reference to category types.
     Only three citation orders were permitted: (1) a term in the concrete category followed by one in the process category (e.g., Wool-Scouring); (2) a country term followed by a process term (e.g., Brazil-Education); and (3) a concrete term followed by a country term, followed by a process term (e.g., Nitrate-Chile-Trade). Kaiser's system was a precursor of two of the most significant developments in twentieth-century approaches to subject access: the special-purpose use of language for indexing, thus the concept of index language, which was to emerge as a generative idea at the time of the second Cranfield experiment (1966), and the use of facets to categorize subject indicators, which was to become the characterizing feature of analytico-synthetic indexing methods such as the Colon Classification. In addition to its visionary quality, Kaiser's work is notable for its meticulousness and honesty, as can be seen, for instance, in his observations about the difficulties in facet definition.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a
  13. Gnoli, C.: The meaning of facets in non-disciplinary classifications (2006)
    
    Abstract
     Disciplines are felt by many to be a constraint in classification, though they are a structuring principle of most bibliographic classification schemes. A non-disciplinary approach has been explored by the Classification Research Group, and research in this direction has recently been resumed by the Integrative Level Classification project. This paper focuses on the role and the definition of facets in non-disciplinary schemes. A generalized definition of facets is suggested with reference to predicate logic, allowing for facets of phenomena as well as facets of disciplines. The general categories under which facets are often subsumed can be related ontologically to the evolutionary sequence of integrative levels. As a facet can be semantically connected with phenomena from any other part of a general scheme, its values can belong to three types, here called special extra-defined foci, general extra-defined foci, and context-defined foci. Non-disciplinary freely faceted classification is being tested by applying it to small bibliographic samples stored in a MySQL database and by developing Web search interfaces to demonstrate possible uses of the described techniques.
    Source
    Knowledge organization for a global learning society: Proceedings of the 9th International ISKO Conference, 4-7 July 2006, Vienna, Austria. Hrsg.: G. Budin, C. Swertz u. K. Mitgutsch
    Type
    a
  14. Gnoli, C.; Pullman, T.; Cousson, P.; Merli, G.; Szostak, R.: Representing the structural elements of a freely faceted classification (2011)
    
    Abstract
    Freely faceted classifications allow for free combination of concepts across all knowledge domains, and for sorting of the resulting compound classmarks. Starting from work by the Classification Research Group, the Integrative Levels Classification (ILC) project has produced a first edition of a general freely faceted scheme. The system is managed as a MySQL database, and can be browsed through a Web interface. The ILC database structure provides a case for identifying and representing the structural elements of any freely faceted classification. These belong to both the notational and the verbal planes. Notational elements include: arrays, chains, deictics, facets, foci, place of definition of foci, examples of combinations, subclasses of a faceted class, groupings, related classes; verbal elements include: main caption, synonyms, descriptions, included terms, related terms, notes. Encoding of some of these elements in an international mark-up format like SKOS can be problematic, especially as this does not provide for faceted structures, although approximate SKOS equivalents are identified for most of them.
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic u. E. Civallero
    Type
    a
  15. Barité, M.; Rauch, M.: Systematifier : in rescue of a useful tool in domain analysis (2017)
    
    Abstract
     Literature on the systematifier is remarkably limited in knowledge organization. Dahlberg created the procedure in the seventies as a guide for the construction of classification systems and showed its applicability in systems she developed. According to her initial conception, all disciplines should be structured in the following sequence: Foundations and theories - Subjects of study - Methods - Influences - Applications - Environment. The nature of the procedure is determined in this study and the concept is situated in relation to the domain analysis methodologies. As a tool for organizing the map of a certain domain, it is associated with a rationalist perspective and the top-down design of systems construction. It would require a reassessment of its scope in order to ensure its applicability to multidisciplinary and interdisciplinary domains. Among other conclusions, it is highlighted that the greatest potential of the systematifier lies in the fact that, as a methodological device, it can act as: (i) an analyzer of a subject area; (ii) an organizer of its main terms; and (iii) an identifier of links, bridges, and intersection points with other knowledge areas.
    Type
    a
  16. Broughton, V.: Finding Bliss on the Web : some problems of representing faceted terminologies in digital environments
    
    Abstract
    The Bliss Bibliographic Classification is the only example of a fully faceted general classification scheme in the Western world. Although it is the object of much interest as a model for other tools it suffers from the lack of a web presence, and remedying this is an immediate objective for its editors. Understanding how this might be done presents some challenges, as the scheme is semantically very rich and complex in the range and nature of the relationships it contains. The automatic management of these is already in place using local software, but exporting this to a common data format needs careful thought and planning. Various encoding schemes, both for traditional classifications, and for digital materials, represent variously: the concepts; their functional roles; and the relationships between them. Integrating these aspects in a coherent and interchangeable manner appears to be achievable, but the most appropriate format is as yet unclear.
    Type
    a
  17. Rajaram, S.: Principles for helpful sequence and their relevance in technical writings : a study (2015)
    
    Abstract
    A modest attempt is made in this paper to show how Ranganathan's Principles for Helpful Sequence are relevant in technical writings as writers need to organise the knowledge in a helpful sequence. Instead of relying on intuition, a deliberate understanding of the Principles for Helpful Sequence as recognised by Ranganathan would be more useful in bringing out effective products. The paper first outlines the eight Principles for Helpful Sequence and then goes on to explore the relevance of each of these eight principles to a wide range of technical documents. The paper concludes that an understanding of these principles is part of the core competencies of technical writers even in the web environment.
    Type
    a
  18. Austin, D.: Differences between library classifications and machine-based subject retrieval systems : some inferences drawn from research in Britain, 1963-1973 (1979)
    
    Source
    Ordering systems for global information networks. Proc. of the 3rd Int. Study Conf. on Classification Research, Bombay 1975. Ed. by A. Neelameghan
    Type
    a
  19. Dahlberg, I.: The future of classification in libraries and networks : a theoretical point of view (1995)
    
    Abstract
     Some time ago, some people said classification was dead and we did not need it any more; they probably thought that subject headings could do the job of the necessary subject analysis and shelving of books. In 1984, however, the attitude suddenly changed, when an OCLC study by Karen Markey began to show what could be done in the computer even with an "outdated system" such as the Dewey Decimal Classification: once visible on a screen, it demonstrated the helpfulness of a classified library catalogue in the form of an OPAC, and classification was brought back into the minds of doubtful librarians and of all those who thought they would not need it any longer. But the problem, once phrased as "We are stuck with the two old systems, LCC and DDC", has not found a solution and is still with us today. We know that our systems are outdated, but we still seem unable to replace them with better ones. What then should one do and advise, knowing that we need something better? Perhaps a new universal ordering system which more adequately represents and mediates the world of our present-day knowledge? If we were to develop it from scratch, how would we create and implement it in such a way that it would be acceptable to the majority of the present intellectual world population?
    Type
    a
  20. Aschero, B.; Negrini, G.; Zanola, R.; Zozi, P.: Systematifier : a guide for the systematization of Italian literature (1995)
    
    Type
    a

Languages

  • e 100
  • d 5
  • chi 1

Types

  • a 93
  • el 9
  • m 6
  • s 4
  • b 1