Search (93 results, page 1 of 5)

  • year_i:[2000 TO 2010}
  • theme_ss:"Klassifikationssysteme im Online-Retrieval"
  1. Lim, E.: Southeast Asian subject gateways : an examination of their classification practices (2000) 0.04
    0.04149951 = product of:
      0.08299902 = sum of:
        0.08299902 = sum of:
          0.008118451 = weight(_text_:a in 6040) [ClassicSimilarity], result of:
            0.008118451 = score(doc=6040,freq=2.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.15287387 = fieldWeight in 6040, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.09375 = fieldNorm(doc=6040)
          0.07488057 = weight(_text_:22 in 6040) [ClassicSimilarity], result of:
            0.07488057 = score(doc=6040,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.46428138 = fieldWeight in 6040, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=6040)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:42:47
    Type
    a
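
    The explain trees shown for each record are Lucene's ClassicSimilarity arithmetic: each matching term contributes queryWeight (idf x queryNorm) times fieldWeight (sqrt(tf) x idf x fieldNorm), the term contributions are summed, and the sum is scaled by the coordination factor. As a minimal sketch (not part of the catalogue; names are illustrative only), the factors printed above for record 1 can be recombined like this:

      import math

      query_norm = 0.046056706                      # queryNorm shared by both terms

      def term_score(freq, idf, field_norm):
          # queryWeight * fieldWeight for one query term in one field
          query_weight = idf * query_norm
          field_weight = math.sqrt(freq) * idf * field_norm
          return query_weight * field_weight

      s_a  = term_score(2.0, 1.153047, 0.09375)     # weight(_text_:a in 6040)  -> ~0.00812
      s_22 = term_score(2.0, 3.5018296, 0.09375)    # weight(_text_:22 in 6040) -> ~0.07488
      print((s_a + s_22) * 0.5)                     # coord(1/2); ~0.0415, matching the listed
                                                    # 0.04149951 up to 32-bit float rounding

    The same pattern repeats for every record below; only the term frequency, idf and fieldNorm values change.
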
  2. Peereboom, M.: DutchESS : Dutch Electronic Subject Service - a Dutch national collaborative effort (2000) 0.03
    0.031011326 = product of:
      0.062022652 = sum of:
        0.062022652 = sum of:
          0.012102271 = weight(_text_:a in 4869) [ClassicSimilarity], result of:
            0.012102271 = score(doc=4869,freq=10.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.22789092 = fieldWeight in 4869, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0625 = fieldNorm(doc=4869)
          0.04992038 = weight(_text_:22 in 4869) [ClassicSimilarity], result of:
            0.04992038 = score(doc=4869,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.30952093 = fieldWeight in 4869, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4869)
      0.5 = coord(1/2)
    
    Abstract
    This article gives an overview of the design and organisation of DutchESS, a Dutch information subject gateway created as a national collaborative effort of the National Library and a number of academic libraries. The combined centralised and distributed model of DutchESS is discussed, as well as its selection policy, its metadata format, classification scheme and retrieval options. Also some options for future collaboration on an international level are explored
    Date
    22. 6.2002 19:39:23
    Type
    a
  3. Doyle, B.: The classification and evaluation of Content Management Systems (2003) 0.03
    0.03037249 = product of:
      0.06074498 = sum of:
        0.06074498 = sum of:
          0.0108246 = weight(_text_:a in 2871) [ClassicSimilarity], result of:
            0.0108246 = score(doc=2871,freq=8.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.20383182 = fieldWeight in 2871, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0625 = fieldNorm(doc=2871)
          0.04992038 = weight(_text_:22 in 2871) [ClassicSimilarity], result of:
            0.04992038 = score(doc=2871,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.30952093 = fieldWeight in 2871, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2871)
      0.5 = coord(1/2)
    
    Abstract
    This is a report on how Doyle and others made a faceted classification scheme for content management systems and made it browsable on the web (see CMS Review in Example Web Sites, below). They discuss why and how they did it, their use of OPML and XFML, how they researched terms and categories, and they also include their taxonomy. It is interesting to see facets used in a business environment.
    Date
    30. 7.2004 12:22:52
    Type
    a
  4. Van Dijck, P.: Introduction to XFML (2003) 0.03
    0.028787265 = product of:
      0.05757453 = sum of:
        0.05757453 = sum of:
          0.007654148 = weight(_text_:a in 2474) [ClassicSimilarity], result of:
            0.007654148 = score(doc=2474,freq=4.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.14413087 = fieldWeight in 2474, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0625 = fieldNorm(doc=2474)
          0.04992038 = weight(_text_:22 in 2474) [ClassicSimilarity], result of:
            0.04992038 = score(doc=2474,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.30952093 = fieldWeight in 2474, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2474)
      0.5 = coord(1/2)
    
    Abstract
    Van Dijck builds up an example of actual XFML by showing how to organize tourist information about what restaurants in what cities feature which kind of music: <facet id="city">City</facet> and <topic id="ny" facetid="city"><name>New York</name></topic> combine to mean that New York is the name of a city internally represented as "ny". It is written in the usual clear and practical style of articles on xml.com. Highly recommended as an introduction for anyone interested in XFML.
    Source
    http://www.xml.com/lpt/a/2003/01/22/xfml.html
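
    As a side note, the two XFML elements quoted in the abstract can be inspected with nothing more than the standard library. The fragment below is a minimal sketch, not taken from the article, and wraps the quoted elements in a hypothetical root element purely so they parse:

      import xml.etree.ElementTree as ET

      fragment = """
      <xfml>
        <facet id="city">City</facet>
        <topic id="ny" facetid="city"><name>New York</name></topic>
      </xfml>
      """
      root = ET.fromstring(fragment)
      facets = {f.get("id"): f.text for f in root.findall("facet")}
      for topic in root.findall("topic"):
          facet_label = facets[topic.get("facetid")]
          # prints: New York is a City -> id ny
          print(topic.findtext("name"), "is a", facet_label, "-> id", topic.get("id"))
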
  5. Ferris, A.M.: If you buy it, will they use it? : a case study on the use of Classification web (2006) 0.03
    0.026575929 = product of:
      0.053151857 = sum of:
        0.053151857 = sum of:
          0.009471525 = weight(_text_:a in 88) [ClassicSimilarity], result of:
            0.009471525 = score(doc=88,freq=8.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.17835285 = fieldWeight in 88, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=88)
          0.043680333 = weight(_text_:22 in 88) [ClassicSimilarity], result of:
            0.043680333 = score(doc=88,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.2708308 = fieldWeight in 88, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=88)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents a study conducted at the University of Colorado at Boulder (CU-Boulder) to assess the extent to which its catalogers were using Classification Web (Class Web), the subscription-based, online cataloging documentation resource provided by the Library of Congress. In addition, this paper will explore assumptions made by management regarding CU-Boulder catalogers' use of the product, possible reasons for the lower-than-expected use, and recommendations for promoting a more efficient and cost-effective use of Class Web at other institutions similar to CU-Boulder.
    Date
    10. 9.2000 17:38:22
    Type
    a
  6. Chandler, A.; LeBlanc, J.: Exploring the potential of a virtual undergraduate library collection based on the hierarchical interface to LC Classification (2006) 0.03
    0.025451606 = product of:
      0.050903212 = sum of:
        0.050903212 = sum of:
          0.013462927 = weight(_text_:a in 769) [ClassicSimilarity], result of:
            0.013462927 = score(doc=769,freq=22.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.25351265 = fieldWeight in 769, product of:
                4.690416 = tf(freq=22.0), with freq of:
                  22.0 = termFreq=22.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=769)
          0.037440285 = weight(_text_:22 in 769) [ClassicSimilarity], result of:
            0.037440285 = score(doc=769,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 769, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=769)
      0.5 = coord(1/2)
    
    Abstract
    The Hierarchical Interface to Library of Congress Classification (HILCC) is a system developed by the Columbia University Library to leverage call number data from the MARC holdings records in Columbia's online catalog to create a structured, hierarchical menuing system that provides subject access to the library's electronic resources. In this paper, the authors describe a research initiative at the Cornell University Library to discover if the Columbia HILCC scheme can be used as developed or in modified form to create a virtual undergraduate print collection outside the context of the traditional online catalog. Their results indicate that, with certain adjustments, an HILCC model can indeed be used to represent the holdings of a large research library's undergraduate collection of approximately 150,000 titles, but that such a model is not infinitely scalable and may require a new approach to browsing such a large information space.
    Date
    10. 9.2000 17:38:22
    Type
    a
  7. Alex, H.; Heiner-Freiling, M.: Melvil (2005) 0.02
    0.024208048 = product of:
      0.048416097 = sum of:
        0.048416097 = sum of:
          0.0047357627 = weight(_text_:a in 4321) [ClassicSimilarity], result of:
            0.0047357627 = score(doc=4321,freq=2.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.089176424 = fieldWeight in 4321, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4321)
          0.043680333 = weight(_text_:22 in 4321) [ClassicSimilarity], result of:
            0.043680333 = score(doc=4321,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.2708308 = fieldWeight in 4321, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4321)
      0.5 = coord(1/2)
    
    Abstract
    Starting in January 2006, Die Deutsche Bibliothek will launch a new web service named Melvil, a result of its commitment to the DDC and to the project DDC Deutsch. The web service is based on the translation of the 22nd edition of the DDC, which appears in print from K. G. Saur Verlag in October 2005. Beyond the print content, it offers features that support classifiers in their work and, for the first time, allow end users to search DDC-indexed titles by verbal terms. The Melvil web service comprises three applications: MelvilClass, MelvilSearch and MelvilSoap.
    Type
    a
  8. Slavic, A.: On the nature and typology of documentary classifications and their use in a networked environment (2007) 0.02
    0.022779368 = product of:
      0.045558736 = sum of:
        0.045558736 = sum of:
          0.008118451 = weight(_text_:a in 780) [ClassicSimilarity], result of:
            0.008118451 = score(doc=780,freq=8.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.15287387 = fieldWeight in 780, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=780)
          0.037440285 = weight(_text_:22 in 780) [ClassicSimilarity], result of:
            0.037440285 = score(doc=780,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 780, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=780)
      0.5 = coord(1/2)
    
    Abstract
    Networked orientated standards for vocabulary publishing and exchange and proposals for terminological services and terminology registries will improve sharing and use of all knowledge organization systems in the networked information environment. This means that documentary classifications may also become more applicable for use outside their original domain of application. The paper summarises some characteristics common to documentary classifications and explains some terminological, functional and implementation aspects. The original purpose behind each classification scheme determines the functions that the vocabulary is designed to facilitate. These functions influence the structure, semantics and syntax, scheme coverage and format in which classification data are published and made available. The author suggests that attention should be paid to the differences between documentary classifications as these may determine their suitability for a certain purpose and may impose different requirements with respect to their use online. As we speak, many classifications are being created for knowledge organization and it may be important to promote expertise from the bibliographic domain with respect to building and using classification systems.
    Date
    22.12.2007 17:22:31
    Type
    a
  9. Sandner, M.; Jahns, Y.: Kurzbericht zum DDC-Übersetzer- und Anwendertreffen bei der IFLA-Konferenz 2005 in Oslo, Norwegen (2005) 0.02
    0.020098079 = product of:
      0.040196158 = sum of:
        0.040196158 = sum of:
          0.0023678814 = weight(_text_:a in 4406) [ClassicSimilarity], result of:
            0.0023678814 = score(doc=4406,freq=2.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.044588212 = fieldWeight in 4406, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.02734375 = fieldNorm(doc=4406)
          0.037828278 = weight(_text_:22 in 4406) [ClassicSimilarity], result of:
            0.037828278 = score(doc=4406,freq=6.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23454636 = fieldWeight in 4406, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=4406)
      0.5 = coord(1/2)
    
    Content
    "Am 16. August 2005 fand in Oslo im Rahmen der heurigen IFLA-Konferenz das alljährliche Treffen der DDC-Übersetzer und der weltweiten DeweyAnwender-Institutionen (Nationalbibliotheken, Ersteller von Nationalbibliografien) statt. Die im Sommer 2005 bereits abgeschlossene deutsche Übersetzung wird in der Druckfassung Ende des Jahres in 4 Bänden vorliegen, beim K. G. Saur Verlag in München erscheinen (ISBN 3-598-11651-9) und 2006 vom ebenfalls erstmals ins Deutsche übersetzten DDC-Lehrbuch (ISBN 3-598-11748-5) begleitet. Pläne für neu startende Übersetzungen der DDC 22 gibt es für folgende Sprachen: Arabisch (mit der wachsenden Notwendigkeit, Klasse 200 Religion zu revidieren), Französisch (es erschien zuletzt eine neue Kurzausgabe 14, nun werden eine vierbändige Druckausgabe und eine frz. Webversion anvisiert), Schwedisch, Vietnamesisch (hierfür wird eine an die Sprache und Schrift angepasste Version des deutschen Übersetzungstools zum Einsatz kommen).
    Allgemein DDC 22 ist im Gegensatz zu den früheren Neuauflagen der Standard Edition eine Ausgabe ohne generelle Überarbeitung einer gesamten Klasse. Sie enthält jedoch zahlreiche Änderungen und Expansionen in fast allen Disziplinen und in vielen Hilfstafeln. Es erschien auch eine Sonderausgabe der Klasse 200, Religion. In der aktuellen Kurzausgabe der DDC 22 (14, aus 2004) sind all diese Neuerungen berücksichtigt. Auch die elektronische Version exisitiert in einer vollständigen (WebDewey) und in einer KurzVariante (Abridged WebDewey) und ist immer auf dem jüngsten Stand der Klassifikation. Ein Tutorial für die Nutzung von WebDewey steht unter www.oclc.org /dewey/ resourcesitutorial zur Verfügung. Der Index enthält in dieser elektronischen Fassung weit mehr zusammengesetzte Notationen und verbale Sucheinstiege (resultierend aus den Titeldaten des "WorldCat") als die Druckausgabe, sowie Mappings zu den aktuellsten Normdatensätzen aus LCSH und McSH. Aktuell Die personelle Zusammensetzung des EPC (Editorial Policy Committee) hat sich im letzten Jahr verändert. Dieses oberste Gremium der DDC hat Prioritäten für den aktuellen Arbeitsplan festgelegt. Es wurde vereinbart, größere Änderungsvorhaben via Dewey-Website künftig wie in einem Stellungnahmeverfahren zur fachlichen Diskussion zu stellen. www.oclc.org/dewey/discussion/."
    Type
    a
  10. Place, E.: International collaboration on Internet subject gateways (2000) 0.02
    0.017291464 = product of:
      0.034582928 = sum of:
        0.034582928 = sum of:
          0.0033826875 = weight(_text_:a in 4584) [ClassicSimilarity], result of:
            0.0033826875 = score(doc=4584,freq=2.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.06369744 = fieldWeight in 4584, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4584)
          0.03120024 = weight(_text_:22 in 4584) [ClassicSimilarity], result of:
            0.03120024 = score(doc=4584,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.19345059 = fieldWeight in 4584, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4584)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:35:35
    Type
    a
  11. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.02
    0.01560012 = product of:
      0.03120024 = sum of:
        0.03120024 = product of:
          0.06240048 = sum of:
            0.06240048 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.06240048 = score(doc=611,freq=2.0), product of:
                0.16128273 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046056706 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 12:54:24
  12. Oberhauser, O.: Implementierung und Parametrisierung klassifikatorischer Recherchekomponenten im OPAC (2005) 0.01
    0.012594428 = product of:
      0.025188856 = sum of:
        0.025188856 = sum of:
          0.00334869 = weight(_text_:a in 3353) [ClassicSimilarity], result of:
            0.00334869 = score(doc=3353,freq=4.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.06305726 = fieldWeight in 3353, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3353)
          0.021840166 = weight(_text_:22 in 3353) [ClassicSimilarity], result of:
            0.021840166 = score(doc=3353,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.1354154 = fieldWeight in 3353, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3353)
      0.5 = coord(1/2)
    
    Location
    A
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 58(2005) H.1, S.22-37
    Type
    a
  13. National Seminar on Classification in the Digital Environment : Papers contributed to the National Seminar on Classification in the Digital Environment, Bangalore, 9-11 August 2001 (2001) 0.01
    0.010067122 = product of:
      0.020134244 = sum of:
        0.020134244 = sum of:
          0.007654148 = weight(_text_:a in 2047) [ClassicSimilarity], result of:
            0.007654148 = score(doc=2047,freq=64.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.14413087 = fieldWeight in 2047, product of:
                8.0 = tf(freq=64.0), with freq of:
                  64.0 = termFreq=64.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.015625 = fieldNorm(doc=2047)
          0.012480095 = weight(_text_:22 in 2047) [ClassicSimilarity], result of:
            0.012480095 = score(doc=2047,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.07738023 = fieldWeight in 2047, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=2047)
      0.5 = coord(1/2)
    
    Date
    2. 1.2004 10:35:22
    Editor
    Neelameghan, A. u. K.N. Prasad
    Footnote
    Rez. in: Knowledge organization 30(2003) no.1, S.40-42 (J.-E. Mai): "Introduction: This is a collection of papers presented at the National Seminar on Classification in the Digital Environment held in Bangalore, India, on August 9-11, 2001. The collection contains 18 papers dealing with various issues related to knowledge organization and classification theory. The issue of transferring the knowledge, traditions, and theories of bibliographic classification to the digital environment is an important one, and I was excited to learn that proceedings from this seminar were available. Many of us experience frustration on a daily basis due to poorly constructed Web search mechanisms and Web directories. As a community devoted to making information easily accessible, we have something to offer the Web community, and a seminar on the topic was indeed much needed. Below are brief summaries of the 18 papers presented at the seminar. The order of the summaries follows the order of the papers in the proceedings. The titles of the papers are given in parentheses after the authors' names. AHUJA and WESLEY (From "Subject" to "Need": Shift in Approach to Classifying Information on the Internet/Web) argue that traditional bibliographic classification systems fail in the digital environment. One problem is that bibliographic classification systems have been developed to organize library books on shelves and as such are unidimensional and tied to the paper-based environment. Another problem is that they are "subject" oriented in the sense that they assume a relatively stable universe of knowledge containing basic and fixed compartments of knowledge that can be identified and represented. Ahuja and Wesley suggest that classification in the digital environment should be need-oriented instead of subject-oriented ("One important link that binds knowledge and human being is his societal need. ... Hence, it will be ideal to organise knowledge based upon need instead of subject." (p. 10)).
    AHUJA and SATIJA (Relevance of Ranganathan's Classification Theory in the Age of Digital Libraries) note that traditional bibliographic classification systems have been applied in the digital environment with only limited success. They find that the "inherent flexibility of electronic manipulation of documents or their surrogates should allow a more organic approach to allocation of new subjects and appropriate linkages between subject hierarchies." (p. 18). Ahuja and Satija also suggest that it is necessary to shift from a "subject" focus to a "need" focus when applying classification theory in the digital environment. They find Ranganathan's framework applicable in the digital environment. Although Ranganathan's focus is "subject oriented and hence emphasise the hierarchical and linear relationships" (p. 26), his framework "can be successfully adopted with certain modifications ... in the digital environment." (p. 26). SHAH and KUMAR (Model for System Unification of Geographical Schedules (Space Isolates)) report on a plan to develop a single schedule for geographical subdivision that could be used across all classification systems. The authors argue that this is needed in order to facilitate interoperability in the digital environment. SAN SEGUNDO MANUEL (The Representation of Knowledge as a Symbolization of Productive Electronic Information) distills different approaches and definitions of the term "representation" as it relates to representation of knowledge in the library and information science literature and field. SHARADA (Linguistic and Document Classification: Paradigmatic Merger Possibilities) suggests the development of a universal indexing language. The foundation for the universal indexing language is Chomsky's Minimalist Program and Ranganathan's analytico-synthetic classification theory; according to the author, based on these approaches, it "should not be a problem" (p. 62) to develop a universal indexing language.
    SELVI (Knowledge Classification of Digital Information Materials with Special Reference to Clustering Technique) finds that it is essential to classify digital material since the amount of material that is becoming available is growing. Selvi suggests using automated classification to "group together those digital information materials or documents that are 'most similar'" (p. 65). This can be attained by using cluster analysis methods. PRADHAN and THULASI (A Study of the Use of Classification and Indexing Systems by Web Resource Directories) compare and contrast the classificatory structures of Google, Yahoo, and Looksmart's directories and compare the directories to Dewey Decimal Classification, Library of Congress Classification and Colon Classification's classificatory structures. They find differences between the directories' and the bibliographic classification systems' classificatory structures and principles. These differences stem from the fact that bibliographic classification systems are used to "classify academic resources for the research community" (p. 83) and directories "aim to categorize a wider breadth of information groups, entertainment, recreation, govt. information, commercial information" (p. 83). NEELAMEGHAN (Hierarchy, Hierarchical Relation and Hierarchical Arrangement) reviews the concept of hierarchy and the formation of hierarchical structures across a variety of domains. NEELAMEGHAN and PRASAD (Digitized Schemes for Subject Classification and Thesauri: Complementary Roles) demonstrate how thesaural relationships (NT, BT, and RT) can be applied to a classification scheme, the Colon Classification in this case. NEELAMEGHAN and ASUNDI (Metadata Framework for Describing Embodied Knowledge and Subject Content) propose to use the Generalized Facet Structure framework, which is based on Ranganathan's General Theory of Knowledge Classification, as a framework for describing the content of documents in a metadata element set for the representation of web documents. CHUDAMANI (Classified Catalogue as a Tool for Subject Based Information Retrieval in both Traditional and Electronic Library Environment) explains why the classified catalogue is superior to the alphabetic catalogue and argues that the same is true in the digital environment.
    PARAMESWARAN (Classification and Indexing: Impact of Classification Theory on PRECIS) reviews the PRECIS system and finds that "it could not escape from the impact of the theory of classification" (p. 131). The author further argues that the purpose of classification and subject indexing is the same and that both approaches depend on syntax. This leads to the conclusion that "there is an absolute syntax as the Indian theory of classification points out" (p. 131). SATYAPAL and SANJIVINI SATYAPAL (Classifying Documents According to Postulational Approach: 1. SATSAN - A Computer Based Learning Package) and SATYAPAL and SANJIVINI SATYAPAL (Classifying Documents According to Postulational Approach: 2. Semi-Automatic Synthesis of CC Numbers) present an application to automate classification using a facet classification system, in this case, the Colon Classification system. GAIKAIWARI (An Interactive Application for Faceted Classification Systems) presents an application, called SRR, for managing and using a faceted classification scheme in a digital environment. IYER (Use of Instructional Technology to Support Traditional Classroom Learning: A Case Study) describes a course on "Information and Knowledge Organization" that she teaches at the University at Albany (SUNY). The course is a conceptual course that introduces the student to various aspects of knowledge organization. GOPINATH (Universal Classification: How can it be used?) lists fifteen uses of universal classifications and discusses the entities of a number of disciplines. GOPINATH (Knowledge Classification: The Theory of Classification) briefly reviews the foundations for research in automatic classification, summarizes the history of classification, and places Ranganathan's thought in the history of classification.
    Discussion: The proceedings of the National Seminar on Classification in the Digital Environment give some insights. However, the depth of analysis and discussion is very uneven across the papers. Some of the papers have substantive research content while others appear to be notes used in the oral presentation. The treatments of the topics are very general in nature. Some papers have a very limited list of references while others have no bibliography. No index has been provided. The transfer of bibliographic knowledge organization theory to the digital environment is an important topic. However, as the papers at this conference have shown, it is also a difficult task. Of the 18 papers presented at this seminar on classification in the digital environment, only 4-5 papers actually deal directly with this important topic. The remaining papers deal with issues that are more or less relevant to classification in the digital environment without explicitly discussing the relation. The reason could be that the authors take up issues in knowledge organization that still need to be investigated and clarified before their application in the digital environment can be considered. Nonetheless, one wishes that the knowledge organization community would discuss the application of classification theory in the digital environment in greater detail. It is obvious from the comparisons of the classificatory structures of bibliographic classification systems and Web directories that these are different and that they probably should be different, since they serve different purposes. Interesting questions in the transformation of bibliographic classification theories to the digital environment are: "Given the existing principles in bibliographic knowledge organization, what are the optimum principles for organization of information, irrespective of context?" and "What are the fundamental theoretical and practical principles for the construction of Web directories?" Unfortunately, the papers presented at this seminar do not attempt to answer or discuss these questions."
  14. MacLennan, A.: Classification and the Internet (2000) 0.00
    0.004101291 = product of:
      0.008202582 = sum of:
        0.008202582 = product of:
          0.016405163 = sum of:
            0.016405163 = weight(_text_:a in 3150) [ClassicSimilarity], result of:
              0.016405163 = score(doc=3150,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.3089162 = fieldWeight in 3150, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3150)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    The future of classification. Ed. R. Marcella u. A. Maltby
    Type
    a
  15. Slavic, A.: Interface to classification : some objectives and options (2006) 0.00
    0.0032090992 = product of:
      0.0064181983 = sum of:
        0.0064181983 = product of:
          0.012836397 = sum of:
            0.012836397 = weight(_text_:a in 2131) [ClassicSimilarity], result of:
              0.012836397 = score(doc=2131,freq=20.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.24171482 = fieldWeight in 2131, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2131)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This is a preprint to be published in the Extensions & Corrections to the UDC. The paper explains the basic functions of browsing and searching that need to be supported in relation to analytico-synthetic classifications such as Universal Decimal Classification (UDC), irrespective of any specific, real-life implementation. UDC is an example of a semi-faceted system that can be used, for instance, for both post-coordinate searching and hierarchical/facet browsing. The advantages of using a classification for IR, however, depend on the strength of the GUI, which should provide a user-friendly interface to classification browsing and searching. The power of this interface is in supporting visualisation that will 'convert' what is potentially a user-unfriendly indexing language based on symbols, to a subject presentation that is easy to understand, search and navigate. A summary of the basic functions of searching and browsing a classification that may be provided on a user-friendly interface is given and examples of classification browsing interfaces are provided.
  16. Saeed, H.; Chaudhry, A.S.: Using Dewey decimal classification scheme (DDC) for building taxonomies for knowledge organisation (2002) 0.00
    0.0030255679 = product of:
      0.0060511357 = sum of:
        0.0060511357 = product of:
          0.012102271 = sum of:
            0.012102271 = weight(_text_:a in 4461) [ClassicSimilarity], result of:
              0.012102271 = score(doc=4461,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.22789092 = fieldWeight in 4461, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4461)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Terms drawn from DDC indexes and IEEE Web Thesaurus were merged with DDC hierarchies to build a taxonomy in the domain of computer science. When displayed as a directory structure using a shareware tool MyInfo, the resultant taxonomy appeared to be a promising tool for categorisation that can facilitate browsing of information resources in an electronic environment.
    Type
    a
  17. Hjoerland, B.; Pedersen, K.N.: A substantive theory of classification for information retrieval (2005) 0.00
    0.0029294936 = product of:
      0.005858987 = sum of:
        0.005858987 = product of:
          0.011717974 = sum of:
            0.011717974 = weight(_text_:a in 1892) [ClassicSimilarity], result of:
              0.011717974 = score(doc=1892,freq=24.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.22065444 = fieldWeight in 1892, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1892)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - To suggest that a theory of classification for information retrieval (IR), asked for by Spärck Jones in a 1970 paper, presupposes a full implementation of a pragmatic understanding. Part of the Journal of Documentation celebration, "60 years of the best in information research". Design/methodology/approach - Literature-based conceptual analysis, taking Sparck Jones as its starting-point. Analysis involves distinctions between "positivism" and "pragmatism" and "classical" versus Kuhnian understandings of concepts. Findings - Classification, both manual and automatic, for retrieval benefits from drawing upon a combination of qualitative and quantitative techniques, a consideration of theories of meaning, and the adding of top-down approaches to IR in which divisions of labour, domains, traditions, genres, document architectures etc. are included as analytical elements and in which specific IR algorithms are based on the examination of specific literatures. Introduces an example illustrating the consequences of a full implementation of a pragmatist understanding when handling homonyms. Practical implications - Outlines how to classify from a pragmatic-philosophical point of view. Originality/value - Provides, emphasizing a pragmatic understanding, insights of importance to classification for retrieval, both manual and automatic. - Vgl. auch: Szostak, R.: Classification, interdisciplinarity, and the study of science. In: Journal of documentation. 64(2008) no.3, S.319-332.
    Type
    a
  18. LaBarre, K.: Adventures in faceted classification: a brave new world or a world of confusion? (2004) 0.00
    0.0029000505 = product of:
      0.005800101 = sum of:
        0.005800101 = product of:
          0.011600202 = sum of:
            0.011600202 = weight(_text_:a in 2634) [ClassicSimilarity], result of:
              0.011600202 = score(doc=2634,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.21843673 = fieldWeight in 2634, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2634)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A preliminary, purposive survey of definitions and current applications of facet analytical theory (FA) is used to develop a framework for the analysis of Websites. This set of guidelines may well serve to highlight commonalities and differences among FA applications on the Web. Rather than identifying FA as the terrain of a particular interest group, the goal is to explore current practices, uncover common misconceptions, extend understanding, and highlight developments that augment the traditional practice of FA and faceted classification (FC).
    Type
    a
  19. Louie, A.J.; Maddox, E.L.; Washington, W.: Using faceted classification to provide structure for information architecture (2003) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 2471) [ClassicSimilarity], result of:
              0.011481222 = score(doc=2471,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 2471, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2471)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This is a short, but very thorough and very interesting, report on how the writers built a faceted classification for some legal information and used it to structure a web site with navigation and searching. There is a good summary of why facets work well and how they fit into bibliographic control in general. The last section is about their implementation of a web site for the Washington State Bar Association's Council for Legal Public Education. Their classification uses three facets: Purpose (the general aim of the document, e.g. Resources for K-12 Teachers), Topic (the subject of the document), and Type (the legal format of the document). See Example Web Sites, below, for a discussion of the site and a problem with its design.
    Content
    A very large PDF of the six-foot-wide illustrated poster from their poster session is available at http://depts.washington.edu/pettt/presentations/conf_2003/IASummit-Poster-Louie.pdf.
  20. Devadason, F.J.; Intaraksa, N.; Patamawongjariya, P.; Desai, K.: Faceted indexing based system for organizing and accessing Internet resources (2002) 0.00
    0.0027127487 = product of:
      0.0054254974 = sum of:
        0.0054254974 = product of:
          0.010850995 = sum of:
            0.010850995 = weight(_text_:a in 97) [ClassicSimilarity], result of:
              0.010850995 = score(doc=97,freq=42.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20432885 = fieldWeight in 97, product of:
                  6.4807405 = tf(freq=42.0), with freq of:
                    42.0 = termFreq=42.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=97)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Organizing and providing access to the resources on the Internet has been a problem area in spite of the availability of sophisticated search engines and other software tools. There have been several attempts to organize the resources on the World Wide Web. Some of them have tried to use traditional library classification schemes such as the Library of Congress Classification, the Dewey Decimal Classification and others. However, there is a need to assign proper subject headings to them and present them in a logical or hierarchical sequence to cater to the need for browsing. This paper attempts to describe an experimental system designed to organize and provide access to web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. A prototype software system has been designed to create a database of records specifying Web documents according to the Dublin Core and to input a faceted subject heading according to DSIS. Synonymous terms are added to the standard terms in the heading using appropriate symbols. Once the data are entered along with a description and the URL of the web document, the record is stored in the system. More than one faceted subject heading can be assigned to a record depending on the content of the original document. The system stores the surrogates and keeps the faceted subject headings separately after establishing a link. The search is carried out on index entries derived from the faceted subject heading using the chain indexing technique. If a single term is input, the system searches for its presence in the faceted subject headings and displays the subject headings in a sorted sequence reflecting an organizing sequence. If the number of retrieved headings is too large (running into more than a page) the user has the option of entering another search term to be searched in combination. The system searches subject headings already retrieved and looks for those containing the second term. The retrieved faceted subject headings can be displayed and browsed. When the relevant subject heading is selected, the system displays the records with their URLs. Using the URL, the original document on the web can be accessed. The prototype system, developed in a Windows NT environment using ASP and a web server, is under rigorous testing. The database and index management routines need further development.
    An interesting but somewhat confusing article telling how the writers described web pages with Dublin Core metadata, including a faceted classification, and built a system that lets users browse the collection through the facets. They seem to want to cover too much in a short article, and unnecessary space is given over to screen shots showing how Dublin Core metadata was entered. The screen shots of the resulting browsable system are, unfortunately, not as enlightening as one would hope, and there is no discussion of how the system was actually written or the technology behind it. Still, it could be worth reading as an example of such a system and how it is treated in journals.
    Footnote
    Vgl. auch: Devadason, F.J.: Facet analysis and Semantic Web: musings of a student of Ranganathan. Unter: http://www.geocities.com/devadason.geo/FASEMWEB.html#FacetedIndex.
    Type
    a
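
    The chain-indexing step described in the abstract above (index entries derived from a faceted subject heading) can be pictured with a minimal sketch; the example heading is invented for illustration and the code is not taken from the paper:

      def chain_index_entries(heading, sep=" : "):
          # Each link becomes a lead term, qualified by its broader links in reverse order.
          links = heading.split(sep)
          return [sep.join([links[i]] + links[:i][::-1]) for i in range(len(links) - 1, -1, -1)]

      for entry in chain_index_entries("India : Agriculture : Rice : Diseases"):
          print(entry)
      # Diseases : Rice : Agriculture : India
      # Rice : Agriculture : India
      # Agriculture : India
      # India
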

Languages

  • e 77
  • d 15
  • hu 1

Types

  • a 71
  • el 19
  • m 5
  • s 2
  • p 1
  • x 1