Search (22 results, page 1 of 2)

  • × theme_ss:"Klassifikationssysteme im Online-Retrieval"
  • × year_i:[2000 TO 2010}
  1. Chandler, A.; LeBlanc, J.: Exploring the potential of a virtual undergraduate library collection based on the hierarchical interface to LC Classification (2006) 0.04
    0.03847589 = product of:
      0.07695178 = sum of:
        0.07695178 = sum of:
          0.03456243 = weight(_text_:data in 769) [ClassicSimilarity], result of:
            0.03456243 = score(doc=769,freq=2.0), product of:
              0.16488427 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.052144732 = queryNorm
              0.2096163 = fieldWeight in 769, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.046875 = fieldNorm(doc=769)
          0.04238935 = weight(_text_:22 in 769) [ClassicSimilarity], result of:
            0.04238935 = score(doc=769,freq=2.0), product of:
              0.18260197 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052144732 = queryNorm
              0.23214069 = fieldWeight in 769, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=769)
      0.5 = coord(1/2)
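The explain tree above is Lucene ClassicSimilarity output, and its arithmetic can be checked from the constants it prints: tf = sqrt(termFreq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and each clause score = queryWeight × fieldWeight, with coord(1/2) halving the sum because only one of the two top-level query clauses matched. A minimal sketch that reproduces the numbers for doc 769:

```python
import math

QUERY_NORM = 0.052144732   # queryNorm from the explain tree
MAX_DOCS = 44218

def idf(doc_freq, max_docs=MAX_DOCS):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def clause_score(freq, doc_freq, field_norm):
    tf = math.sqrt(freq)                            # tf = sqrt(termFreq)
    query_weight = idf(doc_freq) * QUERY_NORM       # queryWeight
    field_weight = tf * idf(doc_freq) * field_norm  # fieldWeight
    return query_weight * field_weight

# weight(_text_:data ...) and weight(_text_:22 ...) for doc 769
data_score = clause_score(freq=2.0, doc_freq=5088, field_norm=0.046875)
term22_score = clause_score(freq=2.0, doc_freq=3622, field_norm=0.046875)

# coord(1/2): only one of the two top-level clauses matched
total = 0.5 * (data_score + term22_score)
print(total)  # ≈ 0.03847589, the score shown next to entry 1
```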
    
    Abstract
    The Hierarchical Interface to Library of Congress Classification (HILCC) is a system developed by the Columbia University Library to leverage call number data from the MARC holdings records in Columbia's online catalog to create a structured, hierarchical menuing system that provides subject access to the library's electronic resources. In this paper, the authors describe a research initiative at the Cornell University Library to discover whether the Columbia HILCC scheme can be used as developed, or in modified form, to create a virtual undergraduate print collection outside the context of the traditional online catalog. Their results indicate that, with certain adjustments, an HILCC model can indeed be used to represent the holdings of a large research library's undergraduate collection of approximately 150,000 titles, but that such a model is not infinitely scalable and may require a new approach to browsing such a large information space.
    Date
    10. 9.2000 17:38:22
  2. Slavic, A.: On the nature and typology of documentary classifications and their use in a networked environment (2007) 0.04
    0.03847589 = product of:
      0.07695178 = sum of:
        0.07695178 = sum of:
          0.03456243 = weight(_text_:data in 780) [ClassicSimilarity], result of:
            0.03456243 = score(doc=780,freq=2.0), product of:
              0.16488427 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.052144732 = queryNorm
              0.2096163 = fieldWeight in 780, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.046875 = fieldNorm(doc=780)
          0.04238935 = weight(_text_:22 in 780) [ClassicSimilarity], result of:
            0.04238935 = score(doc=780,freq=2.0), product of:
              0.18260197 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052144732 = queryNorm
              0.23214069 = fieldWeight in 780, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=780)
      0.5 = coord(1/2)
    
    Abstract
    Network-oriented standards for vocabulary publishing and exchange, together with proposals for terminological services and terminology registries, will improve the sharing and use of all knowledge organization systems in the networked information environment. This means that documentary classifications may also become more applicable for use outside their original domain of application. The paper summarises some characteristics common to documentary classifications and explains some terminological, functional and implementation aspects. The original purpose behind each classification scheme determines the functions that the vocabulary is designed to facilitate. These functions influence the structure, semantics and syntax, scheme coverage and the format in which classification data are published and made available. The author suggests that attention should be paid to the differences between documentary classifications, as these may determine their suitability for a certain purpose and may impose different requirements with respect to their use online. Many classifications are currently being created for knowledge organization, and it may be important to promote expertise from the bibliographic domain with respect to building and using classification systems.
    Date
    22.12.2007 17:22:31
  3. Concise UNIMARC Classification Format : Draft 5 (20000125) (2000) 0.02
    0.02304162 = product of:
      0.04608324 = sum of:
        0.04608324 = product of:
          0.09216648 = sum of:
            0.09216648 = weight(_text_:data in 4421) [ClassicSimilarity], result of:
              0.09216648 = score(doc=4421,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.5589768 = fieldWeight in 4421, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.125 = fieldNorm(doc=4421)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Object
    UNIMARC for classification data
  4. Lim, E.: Southeast Asian subject gateways : an examination of their classification practices (2000) 0.02
    0.021194674 = product of:
      0.04238935 = sum of:
        0.04238935 = product of:
          0.0847787 = sum of:
            0.0847787 = weight(_text_:22 in 6040) [ClassicSimilarity], result of:
              0.0847787 = score(doc=6040,freq=2.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.46428138 = fieldWeight in 6040, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6040)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:42:47
  5. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.02
    0.017662229 = product of:
      0.035324458 = sum of:
        0.035324458 = product of:
          0.070648916 = sum of:
            0.070648916 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.070648916 = score(doc=611,freq=2.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 12:54:24
  6. Quick Guide to Publishing a Classification Scheme on the Semantic Web (2008) 0.02
    0.017460302 = product of:
      0.034920603 = sum of:
        0.034920603 = product of:
          0.069841206 = sum of:
            0.069841206 = weight(_text_:data in 3061) [ClassicSimilarity], result of:
              0.069841206 = score(doc=3061,freq=6.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.42357713 = fieldWeight in 3061, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3061)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This document describes in brief how to express the content and structure of a classification scheme, and metadata about a classification scheme, in RDF using the SKOS vocabulary. RDF allows data to be linked to and/or merged with other RDF data by semantic web applications. The Semantic Web, which is based on the Resource Description Framework (RDF), provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Publishing classification schemes in SKOS will unify the great many existing classification efforts in the framework of the Semantic Web.
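The pattern the guide describes can be sketched without any RDF tooling: each class becomes a skos:Concept carrying its notation and caption, with skos:broader links expressing the hierarchy, all inside one skos:ConceptScheme. The scheme URI, notations, and captions below are invented for illustration, and a real deployment would use an RDF library rather than string assembly:

```python
def to_skos_turtle(scheme_uri, concepts):
    """concepts: list of (notation, caption, broader_notation_or_None).
    Emits a small SKOS fragment in Turtle syntax."""
    lines = [
        "@prefix skos: <http://www.w3.org/2004/02/skos/core#> .",
        "",
        f"<{scheme_uri}> a skos:ConceptScheme .",
        "",
    ]
    for notation, caption, broader in concepts:
        uri = f"{scheme_uri}/{notation}"
        lines.append(f"<{uri}> a skos:Concept ;")
        lines.append(f'    skos:notation "{notation}" ;')
        lines.append(f'    skos:prefLabel "{caption}"@en ;')
        if broader:
            # hierarchy: link each class to its parent class
            lines.append(f"    skos:broader <{scheme_uri}/{broader}> ;")
        lines.append(f"    skos:inScheme <{scheme_uri}> .")
        lines.append("")
    return "\n".join(lines)

print(to_skos_turtle(
    "http://example.org/scheme",            # hypothetical namespace
    [("6", "Technology", None),
     ("62", "Engineering", "6")],
))
```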
  7. Slavic, A.; Cordeiro, M.I.: Core requirements for automation of analytico-synthetic classifications (2004) 0.02
    0.017281216 = product of:
      0.03456243 = sum of:
        0.03456243 = product of:
          0.06912486 = sum of:
            0.06912486 = weight(_text_:data in 2651) [ClassicSimilarity], result of:
              0.06912486 = score(doc=2651,freq=8.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.4192326 = fieldWeight in 2651, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2651)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The paper analyses the importance of data presentation and modelling and their role in improving the management, use and exchange of analytico-synthetic classifications in automated systems. Inefficiencies, in this respect, hinder the automation of classification systems that offer the possibility of building compound index/search terms. The lack of machine-readable data expressing the semantics and structure of a classification vocabulary has negative effects on information management and retrieval, thus restricting the potential of both automated systems and classifications themselves. The authors analysed the data representation structure of three general analytico-synthetic classification systems (BC2 - Bliss Bibliographic Classification; BSO - Broad System of Ordering; UDC - Universal Decimal Classification) and put forward some core requirements for classification data representation.
  8. Tunkelang, D.: Dynamic category sets : an approach for faceted search (2006) 0.01
    0.014256276 = product of:
      0.028512552 = sum of:
        0.028512552 = product of:
          0.057025105 = sum of:
            0.057025105 = weight(_text_:data in 3082) [ClassicSimilarity], result of:
              0.057025105 = score(doc=3082,freq=4.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.34584928 = fieldWeight in 3082, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3082)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this paper, we present Dynamic Category Sets, a novel approach that addresses the vocabulary problem for faceted data. In their paper on the vocabulary problem, Furnas et al. note that "the keywords that are assigned by indexers are often at odds with those tried by searchers." Faceted search systems exhibit an interesting aspect of this problem: users do not necessarily understand an information space in terms of the same facets as the indexers who designed it. Our approach addresses this problem by employing a data-driven approach to discover sets of values across multiple facets that best match the query. When there are multiple candidates, we offer a clarification dialog that allows the user to disambiguate them.
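The idea can be illustrated with a toy sketch (this is not the authors' implementation): given faceted records, score each facet:value pair by how well the records carrying it account for the query terms, so that users need not know which facet the indexers assigned a term to. The records and facet names below are invented:

```python
from collections import Counter

# invented toy data: each record maps facet -> value
records = [
    {"genre": "opera", "era": "baroque", "region": "italy"},
    {"genre": "opera", "era": "classical", "region": "austria"},
    {"genre": "symphony", "era": "classical", "region": "austria"},
]

def candidate_value_sets(query_terms, records):
    """Score each facet:value pair that matches a query term by the
    number of query terms its supporting records account for; return
    candidates sorted by support (best match first)."""
    support = Counter()
    for rec in records:
        matched = sum(1 for t in query_terms if t in rec.values())
        if matched:
            for facet, value in rec.items():
                if value in query_terms:
                    support[(facet, value)] += matched
    return support.most_common()

# the query terms span two facets the user never had to name
print(candidate_value_sets(["opera", "austria"], records))
```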
  9. Doyle, B.: ¬The classification and evaluation of Content Management Systems (2003) 0.01
    0.014129783 = product of:
      0.028259566 = sum of:
        0.028259566 = product of:
          0.056519132 = sum of:
            0.056519132 = weight(_text_:22 in 2871) [ClassicSimilarity], result of:
              0.056519132 = score(doc=2871,freq=2.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.30952093 = fieldWeight in 2871, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2871)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    30. 7.2004 12:22:52
  10. Peereboom, M.: DutchESS : Dutch Electronic Subject Service - a Dutch national collaborative effort (2000) 0.01
    0.014129783 = product of:
      0.028259566 = sum of:
        0.028259566 = product of:
          0.056519132 = sum of:
            0.056519132 = weight(_text_:22 in 4869) [ClassicSimilarity], result of:
              0.056519132 = score(doc=4869,freq=2.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.30952093 = fieldWeight in 4869, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4869)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:39:23
  11. Van Dijck, P.: Introduction to XFML (2003) 0.01
    0.014129783 = product of:
      0.028259566 = sum of:
        0.028259566 = product of:
          0.056519132 = sum of:
            0.056519132 = weight(_text_:22 in 2474) [ClassicSimilarity], result of:
              0.056519132 = score(doc=2474,freq=2.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.30952093 = fieldWeight in 2474, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2474)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    http://www.xml.com/lpt/a/2003/01/22/xfml.html
  12. Alex, H.; Heiner-Freiling, M.: Melvil (2005) 0.01
    0.0123635605 = product of:
      0.024727121 = sum of:
        0.024727121 = product of:
          0.049454242 = sum of:
            0.049454242 = weight(_text_:22 in 4321) [ClassicSimilarity], result of:
              0.049454242 = score(doc=4321,freq=2.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.2708308 = fieldWeight in 4321, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4321)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Beginning in January 2006, Die Deutsche Bibliothek will launch a new web offering named Melvil, an outcome of its commitment to the DDC and the DDC Deutsch project. The web service is based on the translation of the 22nd edition of the DDC, which appears as a print edition from K. G. Saur, Munich, in October 2005. Beyond that, it offers features that support classifiers in their work and, for the first time, allow end users a verbal search across DDC-indexed titles. The Melvil web service comprises three applications: MelvilClass, MelvilSearch and MelvilSoap.
  13. Ferris, A.M.: If you buy it, will they use it? : a case study on the use of Classification web (2006) 0.01
    0.0123635605 = product of:
      0.024727121 = sum of:
        0.024727121 = product of:
          0.049454242 = sum of:
            0.049454242 = weight(_text_:22 in 88) [ClassicSimilarity], result of:
              0.049454242 = score(doc=88,freq=2.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.2708308 = fieldWeight in 88, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=88)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10. 9.2000 17:38:22
  14. Sandner, M.; Jahns, Y.: Kurzbericht zum DDC-Übersetzer- und Anwendertreffen bei der IFLA-Konferenz 2005 in Oslo, Norwegen (2005) 0.01
    0.010707158 = product of:
      0.021414315 = sum of:
        0.021414315 = product of:
          0.04282863 = sum of:
            0.04282863 = weight(_text_:22 in 4406) [ClassicSimilarity], result of:
              0.04282863 = score(doc=4406,freq=6.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.23454636 = fieldWeight in 4406, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4406)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    "On 16 August 2005, the annual meeting of the DDC translators and of the worldwide Dewey user institutions (national libraries, producers of national bibliographies) took place in Oslo as part of this year's IFLA conference. The German translation, completed in the summer of 2005, will be available in print in four volumes at the end of the year, published by K. G. Saur in Munich (ISBN 3-598-11651-9), and will be accompanied in 2006 by the DDC textbook (ISBN 3-598-11748-5), likewise translated into German for the first time. New translations of DDC 22 are planned for the following languages: Arabic (with the growing need to revise class 200, Religion), French (most recently a new abridged edition 14 appeared; a four-volume print edition and a French web version are now envisaged), Swedish, and Vietnamese (for which a version of the German translation tool, adapted to the language and script, will be used).
    In general: unlike earlier new editions of the standard edition, DDC 22 is an edition without a general reworking of an entire class. It nevertheless contains numerous changes and expansions in almost all disciplines and in many auxiliary tables. A special edition of class 200, Religion, has also appeared. The current abridged edition of DDC 22 (14, from 2004) takes all of these innovations into account. The electronic version likewise exists in a full variant (WebDewey) and an abridged variant (Abridged WebDewey) and always reflects the latest state of the classification. A tutorial for using WebDewey is available at www.oclc.org/dewey/resources/tutorial. In this electronic version the index contains far more synthesized notations and verbal entry points (derived from the title data of WorldCat) than the print edition, as well as mappings to the most recent authority records from LCSH and MeSH. Current news: the membership of the EPC (Editorial Policy Committee) changed over the past year. This highest DDC body has set priorities for the current work plan. It was agreed that larger revision projects will in future be put up for professional discussion via the Dewey website, in a kind of public-comment procedure. www.oclc.org/dewey/discussion/."
  15. Chan, L.M.; Childress, E.; Dean, R.; O'Neill, E.T.; Vizine-Goetz, D.: ¬A faceted approach to subject data in the Dublin Core metadata record (2001) 0.01
    0.010080709 = product of:
      0.020161418 = sum of:
        0.020161418 = product of:
          0.040322836 = sum of:
            0.040322836 = weight(_text_:data in 6109) [ClassicSimilarity], result of:
              0.040322836 = score(doc=6109,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.24455236 = fieldWeight in 6109, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6109)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  16. Place, E.: International collaboration on Internet subject gateways (2000) 0.01
    0.0088311145 = product of:
      0.017662229 = sum of:
        0.017662229 = product of:
          0.035324458 = sum of:
            0.035324458 = weight(_text_:22 in 4584) [ClassicSimilarity], result of:
              0.035324458 = score(doc=4584,freq=2.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.19345059 = fieldWeight in 4584, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4584)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:35:35
  17. Broughton, V.; Slavic, A.: Building a faceted classification for the humanities : principles and procedures (2007) 0.01
    0.008146443 = product of:
      0.016292887 = sum of:
        0.016292887 = product of:
          0.032585774 = sum of:
            0.032585774 = weight(_text_:data in 2875) [ClassicSimilarity], result of:
              0.032585774 = score(doc=2875,freq=4.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.19762816 = fieldWeight in 2875, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2875)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - This paper aims to provide an overview of principles and procedures involved in creating a faceted classification scheme for use in resource discovery in an online environment. Design/methodology/approach - Facet analysis provides an established rigorous methodology for the conceptual organization of a subject field, and the structuring of an associated classification or controlled vocabulary. This paper explains how that methodology was applied to the humanities in the FATKS project, where the objective was to explore the potential of facet analytical theory for creating a controlled vocabulary for the humanities, and to establish the requirements of a faceted classification appropriate to an online environment. A detailed faceted vocabulary was developed for two areas of the humanities within a broader facet framework for the whole of knowledge. Research issues included how to create a data model which made the faceted structure explicit and machine-readable and provided for its further development and use. Findings - In order to support easy facet combination in indexing, and facet searching and browsing on the interface, faceted classification requires a formalized data structure and an appropriate tool for its management. The conceptual framework of a faceted system proper can be applied satisfactorily to the humanities, and fully integrated within a vocabulary management system. Research limitations/implications - The procedures described in this paper are concerned only with the structuring of the classification, and do not extend to indexing, retrieval and application issues. Practical implications - Many stakeholders in the domain of resource discovery consider developing their own classification system and supporting tools. The methods described in this paper may clarify the process of building a faceted classification and may provide some useful ideas with respect to the vocabulary maintenance tool. Originality/value - As far as the authors are aware there is no comparable research in this area.
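A data model that "makes the faceted structure explicit and machine-readable", as the abstract puts it, can be sketched as a small record type plus a traversal helper; the field names and example terms below are assumptions for illustration, not the FATKS project's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FacetTerm:
    notation: str                  # position in the schedule, e.g. "P1.2"
    caption: str                   # display caption for the term
    facet: str                     # facet the term belongs to
    broader: Optional[str] = None  # notation of the parent term, if any
    synonyms: list = field(default_factory=list)

# a two-term toy vocabulary keyed by notation
vocabulary = {
    t.notation: t for t in [
        FacetTerm("P1", "Persons", facet="agent"),
        FacetTerm("P1.2", "Performers", facet="agent", broader="P1"),
    ]
}

def ancestors(notation):
    """Walk the broader chain, making the hierarchy machine-traversable
    (e.g. for browsing or query expansion)."""
    chain = []
    term = vocabulary.get(notation)
    while term and term.broader:
        term = vocabulary[term.broader]
        chain.append(term.notation)
    return chain

print(ancestors("P1.2"))
```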
  18. Broughton, V.; Lane, H.: ¬The Bliss Bibliographic Classification in action : moving from a special to a universal faceted classification via a digital platform (2004) 0.01
    0.007200507 = product of:
      0.014401014 = sum of:
        0.014401014 = product of:
          0.028802028 = sum of:
            0.028802028 = weight(_text_:data in 2633) [ClassicSimilarity], result of:
              0.028802028 = score(doc=2633,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.17468026 = fieldWeight in 2633, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2633)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper examines the differences in the functional requirements of a faceted classification system when used in a conventional print-based environment (where the emphasis is on the browse function of the classification) as compared to its application to digital collections (where the retrieval function is paramount). The use of the second edition of Bliss's Bibliographic Classification (BC2) as a general classification for the physical organization of undergraduate collections in the University of Cambridge is described. The development of an online tool for indexing of digital resources using the Bliss terminologies is also described, and the advantages of facet analysis for data structuring and system syntax within the prototype tool are discussed. The move from the print-based environment to the digital makes different demands on both the content and the syntax of the classification, and while the conceptual structure remains similar, manipulation of the scheme and the process of content description can be markedly different.
  19. Oberhauser, O.: Implementierung und Parametrisierung klassifikatorischer Recherchekomponenten im OPAC (2005) 0.01
    0.0061817802 = product of:
      0.0123635605 = sum of:
        0.0123635605 = product of:
          0.024727121 = sum of:
            0.024727121 = weight(_text_:22 in 3353) [ClassicSimilarity], result of:
              0.024727121 = score(doc=3353,freq=2.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.1354154 = fieldWeight in 3353, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3353)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 58(2005) H.1, S.22-37
  20. Devadason, F.J.; Intaraksa, N.; Patamawongjariya, P.; Desai, K.: Faceted indexing application for organizing and accessing internet resources (2003) 0.01
    0.005760405 = product of:
      0.01152081 = sum of:
        0.01152081 = product of:
          0.02304162 = sum of:
            0.02304162 = weight(_text_:data in 3966) [ClassicSimilarity], result of:
              0.02304162 = score(doc=3966,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.1397442 = fieldWeight in 3966, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3966)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Organizing and providing access to the resources on the Internet has been a problem area in spite of the availability of sophisticated search engines and other software tools. There have been several attempts to organize the resources on the WWW. Some of them have tried to use traditional library classification schemes such as the Library of Congress Classification, the Dewey Decimal Classification and others. However, there is a need to assign proper subject headings to them and present them in a logical or hierarchical sequence to cater to the need for browsing. This paper attempts to describe an experimental system designed to organize and provide access to web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. A prototype software system has been designed to create a database of records specifying web documents according to the Dublin Core and to input a faceted subject heading according to DSIS. Synonymous terms are added to the standard terms in the heading using appropriate symbols. Once the data are entered along with a description and URL of the web document, the record is stored in the system. More than one faceted subject heading can be assigned to a record depending on the content of the original document. The system stores the surrogates and keeps the faceted subject headings separately after establishing a link. Search is carried out on index entries derived from the faceted subject heading using the chain indexing technique. If a single term is input, the system searches for its presence in the faceted subject headings and displays the subject headings in a sorted sequence reflecting an organizing sequence. If the number of retrieved headings is too large (running into more than a page), the user has the option of entering another search term to be searched in combination. The system searches the subject headings already retrieved and looks for those containing the second term. The retrieved faceted subject headings can be displayed and browsed. When the relevant subject heading is selected, the system displays the records with their URLs. Using the URL, the original document on the web can be accessed. The prototype system, developed under the Windows NT environment using ASP and a web server, is undergoing rigorous testing. The database and index management routines need further development.
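The chain-indexing step the abstract relies on can be sketched in a few lines: each link of a subject-heading chain (general to specific) becomes the lead term of one index entry, qualified by its superordinate links in reverse order. The example chain is invented, and real chain procedure also prunes false and unsought links, which this sketch omits:

```python
def chain_index_entries(chain):
    """chain: list of terms ordered general -> specific.
    Each entry leads with one link, followed by the links above it in
    reverse (most specific context first), per classic chain procedure."""
    entries = []
    for i in range(len(chain) - 1, -1, -1):
        lead = chain[i]
        qualifiers = list(reversed(chain[:i]))  # superordinate links
        entries.append(", ".join([lead] + qualifiers) if qualifiers else lead)
    return entries

print(chain_index_entries(["Technology", "Engineering", "Bridges"]))
# -> ['Bridges, Engineering, Technology', 'Engineering, Technology', 'Technology']
```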