Search (19 results, page 1 of 1)

  • theme_ss:"Klassifikationssysteme im Online-Retrieval"
  • type_ss:"el"
  • year_i:[2000 TO 2010}
  1. Van Dijck, P.: Introduction to XFML (2003) 0.03
    0.028787265 = product of:
      0.05757453 = sum of:
        0.05757453 = sum of:
          0.007654148 = weight(_text_:a in 2474) [ClassicSimilarity], result of:
            0.007654148 = score(doc=2474,freq=4.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.14413087 = fieldWeight in 2474, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0625 = fieldNorm(doc=2474)
          0.04992038 = weight(_text_:22 in 2474) [ClassicSimilarity], result of:
            0.04992038 = score(doc=2474,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.30952093 = fieldWeight in 2474, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2474)
      0.5 = coord(1/2)
    
    Abstract
    Van Dijck builds up an example of actual XFML by showing how to organize tourist information about which restaurants in which cities feature which kinds of music: <facet id="city">City</facet> and <topic id="ny" facetid="city"><name>New York</name></topic> combine to mean that New York is the name of a city internally represented as "ny". It is written in the usual clear and practical style of articles on xml.com. Highly recommended as an introduction for anyone interested in XFML.
    Source
    http://www.xml.com/lpt/a/2003/01/22/xfml.html
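    Example
    To make the snippet quoted in the abstract concrete, here is a minimal sketch using Python's standard library to assemble the same facet/topic pair; the xfml root element is assumed for illustration and is not part of the quoted example.
      import xml.etree.ElementTree as ET

      root = ET.Element("xfml")                        # root element name assumed for illustration
      facet = ET.SubElement(root, "facet", id="city")
      facet.text = "City"                              # <facet id="city">City</facet>, as quoted above
      topic = ET.SubElement(root, "topic", id="ny", facetid="city")
      ET.SubElement(topic, "name").text = "New York"   # topic "ny" names the city New York

      print(ET.tostring(root, encoding="unicode"))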
  2. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.02
    0.01560012 = product of:
      0.03120024 = sum of:
        0.03120024 = product of:
          0.06240048 = sum of:
            0.06240048 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.06240048 = score(doc=611,freq=2.0), product of:
                0.16128273 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046056706 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22.08.2009 12:54:24
  3. Slavic, A.: Interface to classification : some objectives and options (2006) 0.00
    0.0032090992 = product of:
      0.0064181983 = sum of:
        0.0064181983 = product of:
          0.012836397 = sum of:
            0.012836397 = weight(_text_:a in 2131) [ClassicSimilarity], result of:
              0.012836397 = score(doc=2131,freq=20.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.24171482 = fieldWeight in 2131, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2131)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This is a preprint to be published in the Extensions & Corrections to the UDC. The paper explains the basic functions of browsing and searching that need to be supported in relation to analytico-synthetic classifications such as the Universal Decimal Classification (UDC), irrespective of any specific, real-life implementation. UDC is an example of a semi-faceted system that can be used, for instance, for both post-coordinate searching and hierarchical/facet browsing. The advantages of using a classification for IR, however, depend on the strength of the GUI, which should provide a user-friendly interface to classification browsing and searching. The power of this interface lies in supporting visualisation that will 'convert' what is potentially a user-unfriendly indexing language based on symbols into a subject presentation that is easy to understand, search and navigate. A summary of the basic functions of searching and browsing a classification that may be provided in a user-friendly interface is given, and examples of classification browsing interfaces are provided.
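    Example
    A toy Python sketch of the hierarchical browsing function described above; the UDC-style notations, captions, and the colon combination are illustrative placeholders, not an excerpt from the scheme.
      # Naive prefix-based browsing over a tiny, invented classification fragment.
      classes = {
          "5": "Natural sciences",
          "53": "Physics",
          "53:004": "Physics in relation to computing (synthesized number)",
          "54": "Chemistry",
      }

      def browse(notation):
          """One browsing step: list the classes filed under `notation` (by notation prefix)."""
          return {n: caption for n, caption in classes.items()
                  if n.startswith(notation) and n != notation}

      print(browse("5"))   # the user drills down from "Natural sciences"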
  4. Louie, A.J.; Maddox, E.L.; Washington, W.: Using faceted classification to provide structure for information architecture (2003) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 2471) [ClassicSimilarity], result of:
              0.011481222 = score(doc=2471,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 2471, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2471)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This is a short, but very thorough and very interesting, report on how the writers built a faceted classification for some legal information and used it to structure a web site with navigation and searching. There is a good summary of why facets work well and how they fit into bibliographic control in general. The last section is about their implementation of a web site for the Washington State Bar Association's Council for Legal Public Education. Their classification uses three facets: Purpose (the general aim of the document, e.g. Resources for K-12 Teachers), Topic (the subject of the document), and Type (the legal format of the document). See Example Web Sites, below, for a discussion of the site and a problem with its design.
    Content
    A very large PDF of the six-foot-wide illustrated poster from their poster session is available at http://depts.washington.edu/pettt/presentations/conf_2003/IASummit-Poster-Louie.pdf.
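    Example
    A rough Python sketch of the three-facet scheme named in the abstract (Purpose, Topic, Type); the controlled values and the sample document are invented placeholders, not the Council's actual lists.
      # One controlled value per facet; anything off-list is rejected.
      FACETS = {
          "Purpose": {"Resources for K-12 Teachers", "Public legal education"},
          "Topic": {"Courts", "Civil rights"},
          "Type": {"Lesson plan", "Brochure", "Statute"},
      }

      def classify(document, **values):
          """Attach one controlled value per facet, rejecting anything off-list."""
          for facet, value in values.items():
              if value not in FACETS.get(facet, set()):
                  raise ValueError(f"{value!r} is not a controlled value of the {facet} facet")
              document[facet] = value
          return document

      doc = classify({"title": "Small Claims Court Guide"},
                     Purpose="Public legal education", Topic="Courts", Type="Brochure")
      print(doc)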
  5. Fast, K.; Leise, F.; Steckel, M.: What is a controlled vocabulary? (2002) 0.00
    0.00270615 = product of:
      0.0054123 = sum of:
        0.0054123 = product of:
          0.0108246 = sum of:
            0.0108246 = weight(_text_:a in 2417) [ClassicSimilarity], result of:
              0.0108246 = score(doc=2417,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20383182 = fieldWeight in 2417, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2417)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    An online series of articles explaining controlled vocabularies and, in particular, faceted classification. It is not yet finished, but what they have covered is very well done, practical and informative, with useful advice and a full treatment. It is worth reading now, and when they actually get to performing facet analysis and making a faceted system, it will make a very useful reference.
  6. Fast, K.; Leise, F.; Steckel, M.: Creating a controlled vocabulary (2003) 0.00
    0.00270615 = product of:
      0.0054123 = sum of:
        0.0054123 = product of:
          0.0108246 = sum of:
            0.0108246 = weight(_text_:a in 2461) [ClassicSimilarity], result of:
              0.0108246 = score(doc=2461,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20383182 = fieldWeight in 2461, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2461)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    An online series of articles explaining controlled vocabularies and, in particular, faceted classification. It is not yet finished, but what they have covered is very well done, practical and informative, with useful advice and a full treatment. It is worth reading now, and when they actually get to performing facet analysis and making a faceted system, it will make a very useful reference.
  7. Fast, K.; Leise, F.; Steckel, M.: Controlled vocabularies : a glosso-thesaurus (2003) 0.00
    0.00270615 = product of:
      0.0054123 = sum of:
        0.0054123 = product of:
          0.0108246 = sum of:
            0.0108246 = weight(_text_:a in 2469) [ClassicSimilarity], result of:
              0.0108246 = score(doc=2469,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20383182 = fieldWeight in 2469, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2469)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    An online series of articles explaining controlled vocabularies and, in particular, faceted classification. It is not yet finished, but what they have covered is very well done, practical and informative, with useful advice and a full treatment. It is worth reading now, and when they actually get to performing facet analysis and making a faceted system, it will make a very useful reference.
  8. Quick Guide to Publishing a Classification Scheme on the Semantic Web (2008) 0.00
    0.0023678814 = product of:
      0.0047357627 = sum of:
        0.0047357627 = product of:
          0.009471525 = sum of:
            0.009471525 = weight(_text_:a in 3061) [ClassicSimilarity], result of:
              0.009471525 = score(doc=3061,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17835285 = fieldWeight in 3061, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3061)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This document describes in brief how to express the content and structure of a classification scheme, and metadata about a classification scheme, in RDF using the SKOS vocabulary. RDF allows data to be linked to and/or merged with other RDF data by Semantic Web applications. The Semantic Web, which is based on the Resource Description Framework (RDF), provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Publishing classification schemes in SKOS will unify the great many existing classification efforts within the framework of the Semantic Web.
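    Example
    The guide itself gives the details; as a minimal sketch of the idea, assuming the rdflib library is available and using an invented scheme URI and class, one class of a classification scheme could be expressed in SKOS like this:
      # Express one class of a fictitious classification scheme in SKOS/RDF.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/classification/")
      g = Graph()
      g.bind("skos", SKOS)

      scheme = EX["scheme"]
      g.add((scheme, RDF.type, SKOS.ConceptScheme))
      g.add((scheme, SKOS.prefLabel, Literal("Example classification", lang="en")))

      cls = EX["2.1"]                                  # one class of the scheme
      g.add((cls, RDF.type, SKOS.Concept))
      g.add((cls, SKOS.notation, Literal("2.1")))      # the class number
      g.add((cls, SKOS.prefLabel, Literal("Example class", lang="en")))
      g.add((cls, SKOS.inScheme, scheme))

      print(g.serialize(format="turtle"))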
  9. Fast, K.; Leise, F.; Steckel, M.: Facets and controlled vocabularies : an annotated bibliography (2003) 0.00
    0.0023435948 = product of:
      0.0046871896 = sum of:
        0.0046871896 = product of:
          0.009374379 = sum of:
            0.009374379 = weight(_text_:a in 2900) [ClassicSimilarity], result of:
              0.009374379 = score(doc=2900,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17652355 = fieldWeight in 2900, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2900)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    An online series of articles explaining controlled vocabularies and, in particular, faceted classification. It is not yet finished, but what they have covered is very well done, practical and informative, with useful advice and a full treatment. It is worth reading now, and when they actually get to performing facet analysis and making a faceted system, it will make a very useful reference.
  10. Fast, K.; Leise, F.; Steckel, M.: All about facets and controlled vocabularies (2002) 0.00
    0.0023435948 = product of:
      0.0046871896 = sum of:
        0.0046871896 = product of:
          0.009374379 = sum of:
            0.009374379 = weight(_text_:a in 5141) [ClassicSimilarity], result of:
              0.009374379 = score(doc=5141,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17652355 = fieldWeight in 5141, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5141)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    An online series of articles explaining controlled vocabularies and, in particular, faceted classification. It is not yet finished, but what they have covered is very well done, practical and informative, with useful advice and a full treatment. It is worth reading now, and when they actually get to performing facet analysis and making a faceted system, it will make a very useful reference.
  11. Fast, K.; Leise, F.; Steckel, M.: Synonym rings and authority files (2003) 0.00
    0.0023435948 = product of:
      0.0046871896 = sum of:
        0.0046871896 = product of:
          0.009374379 = sum of:
            0.009374379 = weight(_text_:a in 2468) [ClassicSimilarity], result of:
              0.009374379 = score(doc=2468,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17652355 = fieldWeight in 2468, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2468)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    An online series of articles explaining controlled vocabularies and, in particular, faceted classification. It is not yet finished, but what they have covered is very well done, practical and informative, with useful advice and a full treatment. It is worth reading now, and when they actually get to performing facet analysis and making a faceted system, it will make a very useful reference.
  12. Beagle, D.: Visualizing keyword distribution across multidisciplinary c-space (2003) 0.00
    0.002325213 = product of:
      0.004650426 = sum of:
        0.004650426 = product of:
          0.009300852 = sum of:
            0.009300852 = weight(_text_:a in 1202) [ClassicSimilarity], result of:
              0.009300852 = score(doc=1202,freq=42.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17513901 = fieldWeight in 1202, product of:
                  6.4807405 = tf(freq=42.0), with freq of:
                    42.0 = termFreq=42.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1202)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The concept of c-space is proposed as a visualization schema relating containers of content to cataloging surrogates and classification structures. Possible applications of keyword vector clusters within c-space could include improved retrieval rates through the use of captioning within visual hierarchies, tracings of semantic bleeding among subclasses, and access to buried knowledge within subject-neutral publication containers. The Scholastica Project is described as one example, following a tradition of research dating back to the 1980s. Preliminary focus group assessment indicates that this type of classification rendering may offer digital library searchers enriched entry strategies and an expanded range of re-entry vocabularies. Those of us who work in traditional libraries typically assume that our systems of classification, Library of Congress Classification (LCC) and Dewey Decimal Classification (DDC), are descriptive rather than prescriptive. In other words, LCC classes and subclasses approximate natural groupings of texts that reflect an underlying order of knowledge, rather than arbitrary categories prescribed by librarians to facilitate efficient shelving. Philosophical support for this assumption has traditionally been found in a number of places, from the archetypal tree of knowledge, to Aristotelian categories, to the concept of discursive formations proposed by Michel Foucault. Gary P. Radford has elegantly described an encounter with Foucault's discursive formations in the traditional library setting: "Just by looking at the titles on the spines, you can see how the books cluster together...You can identify those books that seem to form the heart of the discursive formation and those books that reside on the margins. Moving along the shelves, you see those books that tend to bleed over into other classifications and that straddle multiple discursive formations. You can physically and sensually experience...those points that feel like state borders or national boundaries, those points where one subject ends and another begins, or those magical places where one subject has morphed into another..."
    But what happens to this awareness in a digital library? Can discursive formations be represented in cyberspace, perhaps through diagrams in a visualization interface? And would such a schema be helpful to a digital library user? To approach this question, it is worth taking a moment to reconsider what Radford is looking at. First, he looks at titles to see how the books cluster. To illustrate, I scanned one hundred books on the shelves of a college library under subclass HT 101-395, defined by the LCC subclass caption as Urban groups. The City. Urban sociology. Of the first 100 titles in this sequence, fifty included the word "urban" or variants (e.g. "urbanization"). Another thirty-five used the word "city" or variants. These keywords appear to mark their titles as the heart of this discursive formation. The scattering of titles not using "urban" or "city" used related terms such as "town," "community," or in one case "skyscrapers." So we immediately see some empirical correlation between keywords and classification. But we also see a problem with the commonly used search technique of title-keyword. A student interested in urban studies will want to know about this entire subclass, and may wish to browse every title available therein. A title-keyword search on "urban" will retrieve only half of the titles, while a search on "city" will retrieve just over a third. There will be no overlap, since no titles in this sample contain both words. The only place where both words appear in a common string is in the LCC subclass caption, but captions are not typically indexed in library Online Public Access Catalogs (OPACs). In a traditional library, this problem is mitigated when the student goes to the shelf looking for any one of the books and suddenly discovers a much wider selection than the keyword search had led him to expect. But in a digital library, the issue of non-retrieval can be more problematic, as studies have indicated. Micco and Popp reported that, in a study funded partly by the U.S. Department of Education, 65 of 73 unskilled users searching for material on U.S./Soviet foreign relations found some material but never realized they had missed a large percentage of what was in the database.
    Type
    a
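    Example
    The retrieval gap described in the abstract reduces to simple arithmetic; this sketch restates the reported counts (50 "urban" titles, 35 "city" titles, no overlap, out of 100) rather than re-deriving them from the shelflist.
      total_titles = 100
      urban_hits = 50     # titles containing "urban" or a variant
      city_hits = 35      # titles containing "city" or a variant
      overlap = 0         # no title in the sample contains both words

      print(f"'urban' retrieves {urban_hits / total_titles:.0%} of the subclass")
      print(f"'city' retrieves {city_hits / total_titles:.0%}")
      print(f"either keyword retrieves {(urban_hits + city_hits - overlap) / total_titles:.0%};"
            " the rest is missed by a title-keyword search")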
  13. Robbio, A. de; Maguolo, D.; Marini, A.: Scientific and general subject classifications in the digital world (2001) 0.00
    0.0022438213 = product of:
      0.0044876426 = sum of:
        0.0044876426 = product of:
          0.008975285 = sum of:
            0.008975285 = weight(_text_:a in 2) [ClassicSimilarity], result of:
              0.008975285 = score(doc=2,freq=22.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.16900843 = fieldWeight in 2, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In the present work we discuss opportunities, problems, tools and techniques encountered when interconnecting discipline-specific subject classifications, primarily organized as search devices in bibliographic databases, with general classifications originally devised for book shelving in public libraries. We first state the fundamental distinction between topical (or subject) classifications and object classifications. Then we trace the structural limitations that have constrained subject classifications since their library origins, and the devices that were used to overcome the gap with genuine knowledge representation. After recalling some general notions on structure, dynamics and interferences of subject classifications and of the objects they refer to, we sketch a synthetic overview of discipline-specific classifications in Mathematics, Computing and Physics, on one hand, and of general classifications on the other. In this setting we present The Scientific Classifications Page, which collects groups of Web pages produced by a pool of software tools for developing hypertextual presentations of single or paired subject classifications from sequential source files, as well as facilities for gathering information from KWIC lists of classification descriptions. Further we propose a concept-oriented methodology for interconnecting subject classifications, with the concrete support of a relational analysis of the whole Mathematics Subject Classification through its evolution since 1959. Finally, we recall a very basic method for interconnection provided by coreference in bibliographic records among index elements from different systems, and point out the advantages of establishing the conditions for a more widespread application of such a method. A part of these contents was presented under the title Mathematics Subject Classification and related Classifications in the Digital World at the Eighth International Conference Crimea 2001, "Libraries and Associations in the Transient World: New Technologies and New Forms of Cooperation", Sudak, Ukraine, June 9-17, 2001, in a special session on electronic libraries, electronic publishing and electronic information in science chaired by Bernd Wegner, Editor-in-Chief of Zentralblatt MATH.
  14. Panzer, M.: Towards the "webification" of controlled subject vocabulary : a case study involving the Dewey Decimal Classification (2007) 0.00
    0.0020506454 = product of:
      0.004101291 = sum of:
        0.004101291 = product of:
          0.008202582 = sum of:
            0.008202582 = weight(_text_:a in 538) [ClassicSimilarity], result of:
              0.008202582 = score(doc=538,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1544581 = fieldWeight in 538, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=538)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The presentation will briefly introduce a series of major principles for bringing subject terminology to the network level. A closer look at one KOS in particular, the Dewey Decimal Classification, should help to gain more insight into the perceived difficulties and potential benefits of building taxonomy services out of, and on top of, classic large-scale vocabularies or taxonomies.
  15. Tunkelang, D.: Dynamic category sets : an approach for faceted search (2006) 0.00
    0.0020506454 = product of:
      0.004101291 = sum of:
        0.004101291 = product of:
          0.008202582 = sum of:
            0.008202582 = weight(_text_:a in 3082) [ClassicSimilarity], result of:
              0.008202582 = score(doc=3082,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1544581 = fieldWeight in 3082, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3082)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this paper, we present Dynamic Category Sets, a novel approach that addresses the vocabulary problem for faceted data. In their paper on the vocabulary problem, Furnas et al. note that "the keywords that are assigned by indexers are often at odds with those tried by searchers." Faceted search systems exhibit an interesting aspect of this problem: users do not necessarily understand an information space in terms of the same facets as the indexers who designed it. Our approach addresses this problem with a data-driven method that discovers the sets of values across multiple facets that best match the query. When there are multiple candidates, we offer a clarification dialog that allows the user to disambiguate them.
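    Example
    The paper's own algorithm is not reproduced here; as a deliberately naive Python illustration of the general idea (ranking facet values by how well the documents that carry them match a free-text query), with invented data:
      docs = [
          {"text": "live jazz club in manhattan", "facets": {"city": "New York", "genre": "Jazz"}},
          {"text": "blues bar with late shows", "facets": {"city": "Chicago", "genre": "Blues"}},
          {"text": "jazz brunch spot downtown", "facets": {"city": "New York", "genre": "Jazz"}},
      ]

      def facet_values_for_query(query, docs):
          """Score each facet value by the number of query-matching documents that carry it."""
          terms = set(query.lower().split())
          scores = {}
          for doc in docs:
              if terms & set(doc["text"].split()):          # the document matches the query
                  for facet, value in doc["facets"].items():
                      scores[(facet, value)] = scores.get((facet, value), 0) + 1
          return sorted(scores.items(), key=lambda kv: -kv[1])

      print(facet_values_for_query("jazz", docs))
      # [(('city', 'New York'), 2), (('genre', 'Jazz'), 2)]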
  16. Pika, J.: Universal Decimal Classification at the ETH-Bibliothek Zürich : a Swiss perspective (2007) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 5899) [ClassicSimilarity], result of:
              0.008118451 = score(doc=5899,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 5899, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5899)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The ETH library has been using the UDC for the past twenty-five years, yet most users have hardly taken any notice of it. The query in today's NEBIS-OPAC (formerly ETHICS) is based on verbal search with trilingual descriptors and corresponding related search terms, including, for example, synonyms as well as user-friendly expressions from scientific journals (scientific jargon), to facilitate the dialogue with the OPAC. A single UDC number, standing behind these descriptors, connects them to the related document titles, regardless of language. Thus the user actually works with the UDC without realizing it. This paper describes the experience with this OPAC and the work behind it.
    Type
    a
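    Example
    A toy sketch of the arrangement the abstract describes (multilingual descriptors resolving to one language-independent UDC number that links them to titles); the number, descriptors and titles below are invented, not NEBIS data.
      descriptor_to_udc = {
          "Erdbeben": "550.34",                 # German descriptor
          "earthquakes": "550.34",              # English descriptor
          "tremblements de terre": "550.34",    # French descriptor
      }
      udc_to_titles = {
          "550.34": ["Seismology: an introduction", "Erdbebenkunde"],
      }

      def search(descriptor):
          """Verbal search: the user never sees the UDC number doing the work."""
          udc = descriptor_to_udc.get(descriptor)
          return udc_to_titles.get(udc, [])

      print(search("Erdbeben") == search("earthquakes"))   # True: same UDC class behind both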
  17. Denton, W.: Putting facets on the Web : an annotated bibliography (2003) 0.00
    0.0017433972 = product of:
      0.0034867944 = sum of:
        0.0034867944 = product of:
          0.006973589 = sum of:
            0.006973589 = weight(_text_:a in 2467) [ClassicSimilarity], result of:
              0.006973589 = score(doc=2467,freq=34.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.13131565 = fieldWeight in 2467, product of:
                  5.8309517 = tf(freq=34.0), with freq of:
                    34.0 = termFreq=34.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2467)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This is a classified, annotated bibliography about how to design faceted classification systems and make them usable on the World Wide Web. It is the first of three works I will be doing. The second, based on the material here and elsewhere, will discuss how to actually make the faceted system and put it online. The third will be a report of how I did just that, what worked, what didn't, and what I learned. Almost every article or book listed here begins with an explanation of what a faceted classification system is, so I won't (but see Steckel in Background below if you don't already know). They all agree that faceted systems are very appropriate for the web. Even pre-web articles (such as Duncan's in Background, below) assert that hypertext and facets will go together well. Combined, it is possible to take a set of documents and classify them or apply subject headings to describe what they are about, then build a navigational structure so that any user, no matter how he or she approaches the material, no matter what his or her goals, can move and search in a way that makes sense to them, but still get to the same useful results as someone else following a different path to the same goal. There is no one way that everyone will always use when looking for information. The more flexible the organization of the information, the more accommodating it is. Facets are more flexible for hypertext browsing than any enumerative or hierarchical system.
    Consider movie listings in newspapers. Most Canadian newspapers list movie showtimes in two large blocks, for the two major theatre chains. The listings are ordered by region (in large cities), then theatre, then movie, and finally by showtime. Anyone wondering where and when a particular movie is playing must scan the complete listings. Determining what movies are playing in the next half hour is very difficult. When movie listings went onto the web, most sites used a simple faceted organization, always with movie name and theatre, and perhaps with region or neighbourhood (thankfully, theatre chains were left out). They make it easy to pick a theatre and see what movies are playing there, or to pick a movie and see what theatres are showing it. To complete the system, the sites should allow users to browse by neighbourhood and showtime, and to order the results in any way they desire. Thus could people easily find answers to such questions as, "Where is the new James Bond movie playing?" "What's showing at the Roxy tonight?" "I'm going to be out in Little Finland this afternoon with three hours to kill starting at 2 ... is anything interesting playing?" A hypertext, faceted classification system makes more useful information more easily available to the user. Reading the books and articles below in chronological order will show a certain progression: suggestions that faceting and hypertext might work well, confidence that facets would work well if only someone would make such a system, and finally the beginning of serious work on actually designing, building, and testing faceted web sites. There is a solid basis of how to make faceted classifications (see Vickery in Recommended), but their application online is just starting. Work on XFML (see Van Dijck's work in Recommended), the Exchangeable Faceted Metadata Language, will make this easier. If it follows previous patterns, parts of the Internet community will embrace the idea and make open source software available for others to reuse. It will be particularly beneficial if professionals in both information studies and computer science can work together to build working systems, standards, and code. Each can benefit from the other's expertise in what can be a very complicated and technical area. One particularly nice thing about this area of research is that people interested in combining facets and the web often have web sites where they post their writings.
    This bibliography is not meant to be exhaustive, but unfortunately it is not as complete as I wanted. Some books and articles are not included, but they may be used in my future work. (These include two books and one article by B.C. Vickery: Faceted Classification Schemes (New Brunswick, NJ: Rutgers, 1966), Classification and Indexing in Science, 3rd ed. (London: Butterworths, 1975), and "Knowledge Representation: A Brief Review" (Journal of Documentation 42 no. 3 (September 1986): 145-159); and A.C. Foskett's "The Future of Faceted Classification" in The Future of Classification, edited by Rita Marcella and Arthur Maltby (Aldershot, England: Gower, 2000): 69-80). Nevertheless, I hope this bibliography will be useful for those both new to or familiar with faceted hypertext systems. Some very basic resources are listed, as well as some very advanced ones. Some example web sites are mentioned, but there is no detailed technical discussion of any software. The user interface to any web site is extremely important, and this is briefly mentioned in two or three places (for example the discussion of lawforwa.org (see Example Web Sites)). The larger question of how to display information graphically and with hypertext is outside the scope of this bibliography. There are five sections: Recommended, Background, Not Relevant, Example Web Sites, and Mailing Lists. Background material is either introductory, advanced, or of peripheral interest, and can be read after the Recommended resources if the reader wants to know more. The Not Relevant category contains articles that may appear in bibliographies but are not relevant for my purposes.
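    Example
    The movie-listings scenario above lends itself to a small Python sketch; the showings are invented, but the point that any facet, alone or combined, can lead the browse carries over directly.
      showings = [
          {"movie": "Die Another Day", "theatre": "Roxy", "neighbourhood": "Little Finland", "time": "14:30"},
          {"movie": "Die Another Day", "theatre": "Paramount", "neighbourhood": "Downtown", "time": "19:00"},
          {"movie": "Solaris", "theatre": "Roxy", "neighbourhood": "Little Finland", "time": "21:15"},
      ]

      def browse(**facets):
          """Start from any facet (or combination) and narrow the listings."""
          return [s for s in showings
                  if all(s[facet] == value for facet, value in facets.items())]

      print(browse(movie="Die Another Day"))          # "Where is the new James Bond movie playing?"
      print(browse(theatre="Roxy"))                   # "What's showing at the Roxy tonight?"
      print(browse(neighbourhood="Little Finland"))   # "...in Little Finland this afternoon..."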
  18. XFML Core - eXchangeable Faceted Metadata Language (2003) 0.00
    0.0016913437 = product of:
      0.0033826875 = sum of:
        0.0033826875 = product of:
          0.006765375 = sum of:
            0.006765375 = weight(_text_:a in 6673) [ClassicSimilarity], result of:
              0.006765375 = score(doc=6673,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12739488 = fieldWeight in 6673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6673)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The specification for XFML, a markup language designed to handle faceted classifications. Browsing the site (http://www.xfml.org/) will reveal news about XFML and links to related software and web sites. XFML is not an officially recognized Internet standard, but is the de facto standard.
  19. Schallier, W.: Why organize information if you can find it? : UDC and libraries in an Internet world (2007) 0.00
    0.0016913437 = product of:
      0.0033826875 = sum of:
        0.0033826875 = product of:
          0.006765375 = sum of:
            0.006765375 = weight(_text_:a in 549) [ClassicSimilarity], result of:
              0.006765375 = score(doc=549,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12739488 = fieldWeight in 549, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=549)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Belgians Otlet and La Fontaine created the Universal Decimal Classification in order to collect and organize the world's knowledge. This happened in an age when information was almost exclusively made available by libraries. Since the advent of the internet, the quantity of information outside libraries has become enormous and keeps growing every day. The internet is accessible to anybody; it is fundamentally unorganized, and its content changes constantly. Collecting and organizing the world's knowledge seems to have become an impossible ambition. Perhaps it is even unnecessary, since search engines make information retrievable now. And why would we organize information if we can find it? So what will be the role of UDC and libraries in this internet environment? Libraries can still play a role as a major information provider if they adapt fully to the expectations of the modern end user. The design and the functionalities of online catalogues should allow maximal accessibility, usability and active participation of the end user in the internet environment. Metadata, like UDC, should maximize the visibility of information, enrich it, and invite the end user to assign metadata himself.
    Type
    a