Search (60 results, page 1 of 3)

  • theme_ss:"Klassifikationssysteme im Online-Retrieval"
  • type_ss:"a"
  • year_i:[2000 TO 2010}
  1. Beagle, D.: Visualizing keyword distribution across multidisciplinary c-space (2003) 0.05
    0.05316878 = product of:
      0.07975317 = sum of:
        0.07035815 = weight(_text_:sociology in 1202) [ClassicSimilarity], result of:
          0.07035815 = score(doc=1202,freq=2.0), product of:
            0.30495512 = queryWeight, product of:
              6.9606886 = idf(docFreq=113, maxDocs=44218)
              0.043811057 = queryNorm
            0.2307164 = fieldWeight in 1202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.9606886 = idf(docFreq=113, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1202)
        0.009395021 = product of:
          0.018790042 = sum of:
            0.018790042 = weight(_text_:of in 1202) [ClassicSimilarity], result of:
              0.018790042 = score(doc=1202,freq=56.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.2742677 = fieldWeight in 1202, product of:
                  7.483315 = tf(freq=56.0), with freq of:
                    56.0 = termFreq=56.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1202)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
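    The relevance breakdown above is Lucene "explain" output for ClassicSimilarity (TF-IDF) scoring. As a rough illustration, each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(termFreq) * idf * fieldNorm, and the clause sums are scaled by the coord factors. A minimal Python sketch, assuming exactly this arithmetic and reusing the figures displayed for this entry:

```python
from math import isclose, sqrt

def term_score(freq, idf, query_norm, field_norm):
    # One ClassicSimilarity term weight: queryWeight * fieldWeight.
    tf = sqrt(freq)                      # 1.4142135 for freq=2.0
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.043811057

# weight(_text_:sociology in 1202): freq=2, idf=6.9606886, fieldNorm=0.0234375
sociology = term_score(2.0, 6.9606886, QUERY_NORM, 0.0234375)

# weight(_text_:of in 1202): freq=56, idf=1.5637573, fieldNorm=0.0234375,
# wrapped in coord(1/2)
of_clause = term_score(56.0, 1.5637573, QUERY_NORM, 0.0234375) * 0.5

# outer coord(2/3): two of the three query clauses matched this record
total = (sociology + of_clause) * (2.0 / 3.0)

print(round(total, 8))                   # ~0.05316878, the score shown above
assert isclose(total, 0.05316878, rel_tol=1e-5)
```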
    
    Abstract
    The concept of c-space is proposed as a visualization schema relating containers of content to cataloging surrogates and classification structures. Possible applications of keyword vector clusters within c-space could include improved retrieval rates through the use of captioning within visual hierarchies, tracings of semantic bleeding among subclasses, and access to buried knowledge within subject-neutral publication containers. The Scholastica Project is described as one example, following a tradition of research dating back to the 1980s. Preliminary focus group assessment indicates that this type of classification rendering may offer digital library searchers enriched entry strategies and an expanded range of re-entry vocabularies. Those of us who work in traditional libraries typically assume that our systems of classification, Library of Congress Classification (LCC) and Dewey Decimal Classification (DDC), are descriptive rather than prescriptive. In other words, LCC classes and subclasses approximate natural groupings of texts that reflect an underlying order of knowledge, rather than arbitrary categories prescribed by librarians to facilitate efficient shelving. Philosophical support for this assumption has traditionally been found in a number of places, from the archetypal tree of knowledge, to Aristotelian categories, to the concept of discursive formations proposed by Michel Foucault. Gary P. Radford has elegantly described an encounter with Foucault's discursive formations in the traditional library setting: "Just by looking at the titles on the spines, you can see how the books cluster together...You can identify those books that seem to form the heart of the discursive formation and those books that reside on the margins. Moving along the shelves, you see those books that tend to bleed over into other classifications and that straddle multiple discursive formations. You can physically and sensually experience...those points that feel like state borders or national boundaries, those points where one subject ends and another begins, or those magical places where one subject has morphed into another..."
    But what happens to this awareness in a digital library? Can discursive formations be represented in cyberspace, perhaps through diagrams in a visualization interface? And would such a schema be helpful to a digital library user? To approach this question, it is worth taking a moment to reconsider what Radford is looking at. First, he looks at titles to see how the books cluster. To illustrate, I scanned one hundred books on the shelves of a college library under subclass HT 101-395, defined by the LCC subclass caption as Urban groups. The City. Urban sociology. Of the first 100 titles in this sequence, fifty included the word "urban" or variants (e.g. "urbanization"). Another thirty-five used the word "city" or variants. These keywords appear to mark their titles as the heart of this discursive formation. The scattering of titles not using "urban" or "city" used related terms such as "town," "community," or in one case "skyscrapers." So we immediately see some empirical correlation between keywords and classification. But we also see a problem with the commonly used search technique of title-keyword. A student interested in urban studies will want to know about this entire subclass, and may wish to browse every title available therein. A title-keyword search on "urban" will retrieve only half of the titles, while a search on "city" will retrieve just over a third. There will be no overlap, since no titles in this sample contain both words. The only place where both words appear in a common string is in the LCC subclass caption, but captions are not typically indexed in library Online Public Access Catalogs (OPACs). In a traditional library, this problem is mitigated when the student goes to the shelf looking for any one of the books and suddenly discovers a much wider selection than the keyword search had led him to expect. But in a digital library, the issue of non-retrieval can be more problematic, as studies have indicated. Micco and Popp reported that, in a study funded partly by the U.S. Department of Education, 65 of 73 unskilled users searching for material on U.S./Soviet foreign relations found some material but never realized they had missed a large percentage of what was in the database.
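    The arithmetic in the paragraph above (half of the subclass retrieved by "urban", about a third by "city", none by both) is easy to reproduce. A small, purely illustrative Python sketch with invented stand-in titles, not the actual HT 101-395 sample:

```python
# Invented stand-in titles for an LCC subclass; not the actual HT 101-395 sample.
subclass_titles = [
    "Urban sociology in the twentieth century",
    "The city and its region",
    "Urbanization and community change",
    "Skyscrapers and the shape of towns",
]

def title_keyword_hits(titles, keyword):
    """Titles a simple title-keyword search would retrieve (substring match)."""
    return [t for t in titles if keyword.lower() in t.lower()]

for kw in ("urban", "city"):
    hits = title_keyword_hits(subclass_titles, kw)
    print(f"{kw!r}: {len(hits)} of {len(subclass_titles)} titles retrieved")

# Browsing the whole subclass (the shelf, or an indexed LCC caption) reaches all of them.
print(f"class browse: {len(subclass_titles)} of {len(subclass_titles)} titles")
```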
  2. Lim, E.: Southeast Asian subject gateways : an examination of their classification practices (2000) 0.03
    0.028477818 = product of:
      0.08543345 = sum of:
        0.08543345 = sum of:
          0.014203937 = weight(_text_:of in 6040) [ClassicSimilarity], result of:
            0.014203937 = score(doc=6040,freq=2.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.20732689 = fieldWeight in 6040, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.09375 = fieldNorm(doc=6040)
          0.07122952 = weight(_text_:22 in 6040) [ClassicSimilarity], result of:
            0.07122952 = score(doc=6040,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.46428138 = fieldWeight in 6040, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=6040)
      0.33333334 = coord(1/3)
    
    Date
    22. 6.2002 19:42:47
  3. Peereboom, M.: DutchESS : Dutch Electronic Subject Service - a Dutch national collaborative effort (2000) 0.02
    0.022886775 = product of:
      0.06866033 = sum of:
        0.06866033 = sum of:
          0.021173978 = weight(_text_:of in 4869) [ClassicSimilarity], result of:
            0.021173978 = score(doc=4869,freq=10.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.3090647 = fieldWeight in 4869, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0625 = fieldNorm(doc=4869)
          0.047486346 = weight(_text_:22 in 4869) [ClassicSimilarity], result of:
            0.047486346 = score(doc=4869,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.30952093 = fieldWeight in 4869, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4869)
      0.33333334 = coord(1/3)
    
    Abstract
    This article gives an overview of the design and organisation of DutchESS, a Dutch information subject gateway created as a national collaborative effort of the National Library and a number of academic libraries. The combined centralised and distributed model of DutchESS is discussed, as well as its selection policy, its metadata format, classification scheme and retrieval options. Some options for future collaboration at an international level are also explored.
    Date
    22. 6.2002 19:39:23
  4. Doyle, B.: The classification and evaluation of Content Management Systems (2003) 0.02
    0.020292649 = product of:
      0.060877945 = sum of:
        0.060877945 = sum of:
          0.0133916 = weight(_text_:of in 2871) [ClassicSimilarity], result of:
            0.0133916 = score(doc=2871,freq=4.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.19546966 = fieldWeight in 2871, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0625 = fieldNorm(doc=2871)
          0.047486346 = weight(_text_:22 in 2871) [ClassicSimilarity], result of:
            0.047486346 = score(doc=2871,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.30952093 = fieldWeight in 2871, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2871)
      0.33333334 = coord(1/3)
    
    Abstract
    This is a report on how Doyle and others built a faceted classification scheme for content management systems and made it browsable on the web (see CMS Review in Example Web Sites, below). They discuss why and how they did it, their use of OPML and XFML, and how they researched terms and categories; they also include their taxonomy. It is interesting to see facets used in a business environment.
    Date
    30. 7.2004 12:22:52
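    The faceted approach described in the entry above boils down to assigning each system a value in several independent facets and letting users combine them as filters. A minimal sketch under that assumption; the facet names and systems below are invented for illustration and are not taken from Doyle's taxonomy:

```python
from typing import Dict, List

# Invented example records; each key other than "name" is a facet.
cms_items: List[Dict[str, str]] = [
    {"name": "CMS Alpha", "license": "open source", "platform": "PHP",  "audience": "enterprise"},
    {"name": "CMS Beta",  "license": "commercial",  "platform": "Java", "audience": "enterprise"},
    {"name": "CMS Gamma", "license": "open source", "platform": "Java", "audience": "small team"},
]

def browse(items: List[Dict[str, str]], **facet_values: str) -> List[Dict[str, str]]:
    """Return items matching every requested facet value (faceted narrowing)."""
    return [
        item for item in items
        if all(item.get(facet) == value for facet, value in facet_values.items())
    ]

# Narrowing by two facets at once, as a faceted web interface would allow:
for item in browse(cms_items, license="open source", platform="Java"):
    print(item["name"])   # -> CMS Gamma
```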
  5. Ferris, A.M.: If you buy it, will they use it? : a case study on the use of Classification web (2006) 0.02
    0.02002593 = product of:
      0.060077786 = sum of:
        0.060077786 = sum of:
          0.018527232 = weight(_text_:of in 88) [ClassicSimilarity], result of:
            0.018527232 = score(doc=88,freq=10.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.2704316 = fieldWeight in 88, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0546875 = fieldNorm(doc=88)
          0.041550554 = weight(_text_:22 in 88) [ClassicSimilarity], result of:
            0.041550554 = score(doc=88,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.2708308 = fieldWeight in 88, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=88)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents a study conducted at the University of Colorado at Boulder (CU-Boulder) to assess the extent to which its catalogers were using Classification Web (Class Web), the subscription-based, online cataloging documentation resource provided by the Library of Congress. In addition, this paper will explore assumptions made by management regarding CU-Boulder catalogers' use of the product, possible reasons for the lower-than-expected use, and recommendations for promoting a more efficient and cost-effective use of Class Web at other institutions similar to CU-Boulder.
    Date
    10. 9.2000 17:38:22
  6. Chandler, A.; LeBlanc, J.: Exploring the potential of a virtual undergraduate library collection based on the hierarchical interface to LC Classification (2006) 0.02
    0.017165082 = product of:
      0.051495243 = sum of:
        0.051495243 = sum of:
          0.015880484 = weight(_text_:of in 769) [ClassicSimilarity], result of:
            0.015880484 = score(doc=769,freq=10.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.23179851 = fieldWeight in 769, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.046875 = fieldNorm(doc=769)
          0.03561476 = weight(_text_:22 in 769) [ClassicSimilarity], result of:
            0.03561476 = score(doc=769,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.23214069 = fieldWeight in 769, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=769)
      0.33333334 = coord(1/3)
    
    Abstract
    The Hierarchical Interface to Library of Congress Classification (HILCC) is a system developed by the Columbia University Library to leverage call number data from the MARC holdings records in Columbia's online catalog to create a structured, hierarchical menuing system that provides subject access to the library's electronic resources. In this paper, the authors describe a research initiative at the Cornell University Library to discover if the Columbia HILCC scheme can be used as developed or in modified form to create a virtual undergraduate print collection outside the context of the traditional online catalog. Their results indicate that, with certain adjustments, an HILCC model can indeed be used to represent the holdings of a large research library's undergraduate collection of approximately 150,000 titles, but that such a model is not infinitely scalable and may require a new approach to browsing such a large information space.
    Date
    10. 9.2000 17:38:22
  7. Slavic, A.: On the nature and typology of documentary classifications and their use in a networked environment (2007) 0.02
    0.01597191 = product of:
      0.047915727 = sum of:
        0.047915727 = sum of:
          0.01230097 = weight(_text_:of in 780) [ClassicSimilarity], result of:
            0.01230097 = score(doc=780,freq=6.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.17955035 = fieldWeight in 780, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.046875 = fieldNorm(doc=780)
          0.03561476 = weight(_text_:22 in 780) [ClassicSimilarity], result of:
            0.03561476 = score(doc=780,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.23214069 = fieldWeight in 780, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=780)
      0.33333334 = coord(1/3)
    
    Abstract
    Networked orientated standards for vocabulary publishing and exchange and proposals for terminological services and terminology registries will improve sharing and use of all knowledge organization systems in the networked information environment. This means that documentary classifications may also become more applicable for use outside their original domain of application. The paper summarises some characteristics common to documentary classifications and explains some terminological, functional and implementation aspects. The original purpose behind each classification scheme determines the functions that the vocabulary is designed to facilitate. These functions influence the structure, semantics and syntax, scheme coverage and format in which classification data are published and made available. The author suggests that attention should be paid to the differences between documentary classifications as these may determine their suitability for a certain purpose and may impose different requirements with respect to their use online. As we speak, many classifications are being created for knowledge organization and it may be important to promote expertise from the bibliographic domain with respect to building and using classification systems.
    Date
    22.12.2007 17:22:31
  8. Alex, H.; Heiner-Freiling, M.: Melvil (2005) 0.01
    0.0069250925 = product of:
      0.020775277 = sum of:
        0.020775277 = product of:
          0.041550554 = sum of:
            0.041550554 = weight(_text_:22 in 4321) [ClassicSimilarity], result of:
              0.041550554 = score(doc=4321,freq=2.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.2708308 = fieldWeight in 4321, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4321)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Starting in January 2006, Die Deutsche Bibliothek will launch a new web offering named Melvil, a result of its commitment to the DDC and to the DDC Deutsch project. The web service is based on the translation of the 22nd edition of the DDC, which appears as a print edition from K. G. Saur Verlag in October 2005. Beyond that, it offers additional features that support classifiers in their work and, for the first time, enable a verbal (keyword) search of DDC-indexed titles for end users. The Melvil web service comprises three applications: MelvilClass, MelvilSearch and MelvilSoap.
  9. Sandner, M.; Jahns, Y.: Kurzbericht zum DDC-Übersetzer- und Anwendertreffen bei der IFLA-Konferenz 2005 in Oslo, Norwegen (2005) 0.01
    0.0059973057 = product of:
      0.017991917 = sum of:
        0.017991917 = product of:
          0.035983834 = sum of:
            0.035983834 = weight(_text_:22 in 4406) [ClassicSimilarity], result of:
              0.035983834 = score(doc=4406,freq=6.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.23454636 = fieldWeight in 4406, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4406)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    "Am 16. August 2005 fand in Oslo im Rahmen der heurigen IFLA-Konferenz das alljährliche Treffen der DDC-Übersetzer und der weltweiten DeweyAnwender-Institutionen (Nationalbibliotheken, Ersteller von Nationalbibliografien) statt. Die im Sommer 2005 bereits abgeschlossene deutsche Übersetzung wird in der Druckfassung Ende des Jahres in 4 Bänden vorliegen, beim K. G. Saur Verlag in München erscheinen (ISBN 3-598-11651-9) und 2006 vom ebenfalls erstmals ins Deutsche übersetzten DDC-Lehrbuch (ISBN 3-598-11748-5) begleitet. Pläne für neu startende Übersetzungen der DDC 22 gibt es für folgende Sprachen: Arabisch (mit der wachsenden Notwendigkeit, Klasse 200 Religion zu revidieren), Französisch (es erschien zuletzt eine neue Kurzausgabe 14, nun werden eine vierbändige Druckausgabe und eine frz. Webversion anvisiert), Schwedisch, Vietnamesisch (hierfür wird eine an die Sprache und Schrift angepasste Version des deutschen Übersetzungstools zum Einsatz kommen).
    Allgemein DDC 22 ist im Gegensatz zu den früheren Neuauflagen der Standard Edition eine Ausgabe ohne generelle Überarbeitung einer gesamten Klasse. Sie enthält jedoch zahlreiche Änderungen und Expansionen in fast allen Disziplinen und in vielen Hilfstafeln. Es erschien auch eine Sonderausgabe der Klasse 200, Religion. In der aktuellen Kurzausgabe der DDC 22 (14, aus 2004) sind all diese Neuerungen berücksichtigt. Auch die elektronische Version exisitiert in einer vollständigen (WebDewey) und in einer KurzVariante (Abridged WebDewey) und ist immer auf dem jüngsten Stand der Klassifikation. Ein Tutorial für die Nutzung von WebDewey steht unter www.oclc.org /dewey/ resourcesitutorial zur Verfügung. Der Index enthält in dieser elektronischen Fassung weit mehr zusammengesetzte Notationen und verbale Sucheinstiege (resultierend aus den Titeldaten des "WorldCat") als die Druckausgabe, sowie Mappings zu den aktuellsten Normdatensätzen aus LCSH und McSH. Aktuell Die personelle Zusammensetzung des EPC (Editorial Policy Committee) hat sich im letzten Jahr verändert. Dieses oberste Gremium der DDC hat Prioritäten für den aktuellen Arbeitsplan festgelegt. Es wurde vereinbart, größere Änderungsvorhaben via Dewey-Website künftig wie in einem Stellungnahmeverfahren zur fachlichen Diskussion zu stellen. www.oclc.org/dewey/discussion/."
  10. Place, E.: International collaboration on Internet subject gateways (2000) 0.00
    0.0049464945 = product of:
      0.014839483 = sum of:
        0.014839483 = product of:
          0.029678967 = sum of:
            0.029678967 = weight(_text_:22 in 4584) [ClassicSimilarity], result of:
              0.029678967 = score(doc=4584,freq=2.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.19345059 = fieldWeight in 4584, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4584)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 6.2002 19:35:35
  11. Mills, J.: Faceted classification and logical division in information retrieval (2004) 0.00
    0.0045843013 = product of:
      0.013752903 = sum of:
        0.013752903 = product of:
          0.027505806 = sum of:
            0.027505806 = weight(_text_:of in 831) [ClassicSimilarity], result of:
              0.027505806 = score(doc=831,freq=30.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.4014868 = fieldWeight in 831, product of:
                  5.477226 = tf(freq=30.0), with freq of:
                    30.0 = termFreq=30.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=831)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The main object of the paper is to demonstrate in detail the role of classification in information retrieval (IR) and the design of classificatory structures by the application of logical division to all forms of the content of records, subject and imaginative. The natural product of such division is a faceted classification. The latter is seen not as a particular kind of library classification but the only viable form enabling the locating and relating of information to be optimally predictable. A detailed exposition of the practical steps in facet analysis is given, drawing on the experience of the new Bliss Classification (BC2). The continued existence of the library as a highly organized information store is assumed. But, it is argued, it must acknowledge the relevance of the revolution in library classification that has taken place. It considers also how alphabetically arranged subject indexes may utilize controlled use of categorical (generically inclusive) and syntactic relations to produce similarly predictable locating and relating systems for IR.
    Footnote
    Article in a special issue: The philosophy of information
  12. Ellis, D.; Vasconcelos, A.: The relevance of facet analysis for World Wide Web subject organization and searching (2000) 0.00
    0.004428855 = product of:
      0.013286565 = sum of:
        0.013286565 = product of:
          0.02657313 = sum of:
            0.02657313 = weight(_text_:of in 2477) [ClassicSimilarity], result of:
              0.02657313 = score(doc=2477,freq=28.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.38787308 = fieldWeight in 2477, product of:
                  5.2915025 = tf(freq=28.0), with freq of:
                    28.0 = termFreq=28.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2477)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Different forms of indexing and search facilities available on the Web are described. Use of facet analysis to structure hypertext concept structures is outlined in relation to work on (1) development of hypertext knowledge bases for designers of learning materials and (2) construction of knowledge based hypertext interfaces. The problem of lack of closeness between page designers and potential users is examined. Facet analysis is suggested as a way of alleviating some difficulties associated with this problem of designing for the unknown user.
    This is a revised version of the earlier article by Ellis and Vasconcelos (1999) (see Not Relevant, below), though that is not indicated, and much of it is identical, word for word. There is a new section covering the work of Elizabeth Duncan, which is useful and informative, but the reader is better advised to go to the originals if available.
    Source
    Journal of Internet cataloging. 2(2000) nos.3/4, S.97-114
  13. LaBarre, K.: A multi faceted view : use of facet analysis in the practice of website organization and access (2006) 0.00
    0.0044112457 = product of:
      0.013233736 = sum of:
        0.013233736 = product of:
          0.026467472 = sum of:
            0.026467472 = weight(_text_:of in 257) [ClassicSimilarity], result of:
              0.026467472 = score(doc=257,freq=40.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.38633084 = fieldWeight in 257, product of:
                  6.3245554 = tf(freq=40.0), with freq of:
                    40.0 = termFreq=40.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=257)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In 2001, information architects and knowledge management specialists charged with designing websites and access to corporate knowledge bases seemingly re-discovered a legacy form of information organization and access: faceted analytico-synthetic theory (FAST). Instrumental in creating new and different ways for people to engage with the digital content of the Web, the members of this group have clearly recognized that faceted approaches have the potential to improve access to information on the web. Some of these practitioners explicitly use the forms and language of FAST, while others seem to mimic the forms implicitly (Adkisson, 2003). The focus of this ongoing research study is two-fold. First, access and organizational structures in a stratified random sample of 200 DMOZ websites were examined for evidence of the use of FAST. Second, in the context of unstructured interviews, the understanding and use of FAST among a group of eighteen practitioners is uncovered. This is a preliminary report of the website component capture and interview phases of this research study. Future work will involve formalizing a set of feature guidelines drawn from the initial phases of this research study. Preliminary observations will be drawn from the first phase of this study.
    Source
    Knowledge organization for a global learning society: Proceedings of the 9th International ISKO Conference, 4-7 July 2006, Vienna, Austria. Hrsg.: G. Budin, C. Swertz u. K. Mitgutsch
  14. Hjoerland, B.; Pedersen, K.N.: A substantive theory of classification for information retrieval (2005) 0.00
    0.0044112457 = product of:
      0.013233736 = sum of:
        0.013233736 = product of:
          0.026467472 = sum of:
            0.026467472 = weight(_text_:of in 1892) [ClassicSimilarity], result of:
              0.026467472 = score(doc=1892,freq=40.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.38633084 = fieldWeight in 1892, product of:
                  6.3245554 = tf(freq=40.0), with freq of:
                    40.0 = termFreq=40.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1892)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - To suggest that a theory of classification for information retrieval (IR), asked for by Spärck Jones in a 1970 paper, presupposes a full implementation of a pragmatic understanding. Part of the Journal of Documentation celebration, "60 years of the best in information research". Design/methodology/approach - Literature-based conceptual analysis, taking Sparck Jones as its starting-point. Analysis involves distinctions between "positivism" and "pragmatism" and "classical" versus Kuhnian understandings of concepts. Findings - Classification, both manual and automatic, for retrieval benefits from drawing upon a combination of qualitative and quantitative techniques, a consideration of theories of meaning, and the adding of top-down approaches to IR in which divisions of labour, domains, traditions, genres, document architectures etc. are included as analytical elements and in which specific IR algorithms are based on the examination of specific literatures. Introduces an example illustrating the consequences of a full implementation of a pragmatist understanding when handling homonyms. Practical implications - Outlines how to classify from a pragmatic-philosophical point of view. Originality/value - Provides, emphasizing a pragmatic understanding, insights of importance to classification for retrieval, both manual and automatic. - Vgl. auch: Szostak, R.: Classification, interdisciplinarity, and the study of science. In: Journal of documentation. 64(2008) no.3, S.319-332.
    Source
    Journal of documentation. 61(2005) no.5, S.582-597
  15. Sparck Jones, K.: Some thoughts on classification for retrieval (2005) 0.00
    0.004184875 = product of:
      0.012554625 = sum of:
        0.012554625 = product of:
          0.02510925 = sum of:
            0.02510925 = weight(_text_:of in 4392) [ClassicSimilarity], result of:
              0.02510925 = score(doc=4392,freq=36.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.36650562 = fieldWeight in 4392, product of:
                  6.0 = tf(freq=36.0), with freq of:
                    36.0 = termFreq=36.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4392)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - This paper, originally published in 1970 (Journal of documentation. 26(1970), S.89-101), considered the suggestion that classifications for retrieval should be constructed automatically and raised some serious problems concerning the sorts of classification which were required, and the way in which formal classification theories should be exploited, given that a retrieval classification is required for a purpose. These difficulties had not been sufficiently considered, and the paper, therefore, aims to attempt an analysis of them, though no solutions of immediate application could be suggested. Design/methodology/approach - Starting with the illustrative proposition that a polythetic, multiple, unordered classification is required in automatic thesaurus construction, this is considered in the context of classification in general, where eight sorts of classification can be distinguished, each covering a range of class definitions and class-finding algorithms. Findings - Since there is generally no natural or best classification of a set of objects as such, the evaluation of alternative classifications requires either formal criteria of goodness of fit, or, if a classification is required for a purpose, a precise statement of that purpose. In any case a substantive theory of classification is needed, which does not exist; and, since sufficiently precise specifications of retrieval requirements are also lacking, the only currently available approach to automatic classification experiments for information retrieval is to do enough of them. Originality/value - Gives insights into the classification of material for information retrieval.
    Source
    Journal of documentation. 61(2005) no.5, S.571-581
  16. LaBarre, K.: Faceted navigation and browsing features in new OPACs : a more robust solution to problems of information seekers? (2007) 0.00
    0.0041003237 = product of:
      0.01230097 = sum of:
        0.01230097 = product of:
          0.02460194 = sum of:
            0.02460194 = weight(_text_:of in 688) [ClassicSimilarity], result of:
              0.02460194 = score(doc=688,freq=24.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.3591007 = fieldWeight in 688, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=688)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    At the end of 2005, impending digitization efforts and several developments related to the creation of access and discovery tools for informational and cultural objects resulted in a series of responses that continue to ripple throughout the library, museum and archive communities. These developments have broad implications for all three communities because of the goals shared by each in the creation of description, control and enhanced access to informational and cultural objects. This position paper will consider new implementations of faceted navigation and browsing features in online catalogs. It is also a response to challenges to develop interwoven approaches to the study of information seeking and the design and implementation of search and discovery systems. Urgently needed during this time of experimentation, development and implementation is a framework for system evaluation and critical analysis of needed and missing features that is grounded in traditional principles, borne out by practice. Such a framework could extend feature analysis protocols established during the early years of online catalog development.
  17. Slavic, A.; Cordeiro, M.I.: Core requirements for automation of analytico-synthetic classifications (2004) 0.00
    0.003925761 = product of:
      0.011777283 = sum of:
        0.011777283 = product of:
          0.023554565 = sum of:
            0.023554565 = weight(_text_:of in 2651) [ClassicSimilarity], result of:
              0.023554565 = score(doc=2651,freq=22.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.34381276 = fieldWeight in 2651, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2651)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The paper analyses the importance of data presentation and modelling and its role in improving the management, use and exchange of analytico-synthetic classifications in automated systems. Inefficiencies, in this respect, hinder the automation of classification systems that offer the possibility of building compound index/search terms. The lack of machine readable data expressing the semantics and structure of a classification vocabulary has negative effects on information management and retrieval, thus restricting the potential of both automated systems and classifications themselves. The authors analysed the data representation structure of three general analytico-synthetic classification systems (BC2-Bliss Bibliographic Classification; BSO-Broad System of Ordering; UDC-Universal Decimal Classification) and put forward some core requirements for classification data representation
    Source
    Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine
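    What "machine readable data expressing the semantics and structure of a classification vocabulary" might minimally contain can be sketched as follows. The notations and captions are invented for illustration, and this is not the data model proposed by Slavic and Cordeiro:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ClassRecord:
    notation: str                       # class number, e.g. "316"
    caption: str                        # verbal heading of the class
    broader: Optional[str] = None       # notation of the parent class
    combinable_with: List[str] = field(default_factory=list)  # e.g. auxiliary tables

# Illustrative records only; not an excerpt from BC2, BSO or UDC data.
scheme = {
    r.notation: r
    for r in (
        ClassRecord("316", "Sociology"),
        ClassRecord("316.334.56", "Urban sociology", broader="316",
                    combinable_with=["place auxiliaries"]),
    )
}

def hierarchy(notation: str) -> List[str]:
    """Walk broader links upward - the structure automated browsing and retrieval need."""
    chain, current = [], scheme.get(notation)
    while current is not None:
        chain.append(f"{current.notation}  {current.caption}")
        current = scheme.get(current.broader) if current.broader else None
    return chain

print("\n".join(hierarchy("316.334.56")))
```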
  18. LaBarre, K.: Adventures in faceted classification: a brave new world or a world of confusion? (2004) 0.00
    0.0039058835 = product of:
      0.01171765 = sum of:
        0.01171765 = product of:
          0.0234353 = sum of:
            0.0234353 = weight(_text_:of in 2634) [ClassicSimilarity], result of:
              0.0234353 = score(doc=2634,freq=16.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.34207192 = fieldWeight in 2634, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2634)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    A preliminary, purposive survey of definitions and current applications of facet analytical theory (FA) is used to develop a framework for the analysis of Websites. This set of guidelines may well serve to highlight commonalities and differences among FA applications on the Web. Rather than identifying FA as the terrain of a particular interest group, the goal is to explore current practices, uncover common misconceptions, extend understanding, and highlight developments that augment the traditional practice of FA and faceted classification (FC).
    Source
    Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine
  19. Kwasnik, B.H.: Commercial Web sites and the use of classification schemes : the case of Amazon.Com (2003) 0.00
    0.003743066 = product of:
      0.0112291975 = sum of:
        0.0112291975 = product of:
          0.022458395 = sum of:
            0.022458395 = weight(_text_:of in 2696) [ClassicSimilarity], result of:
              0.022458395 = score(doc=2696,freq=20.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.32781258 = fieldWeight in 2696, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2696)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The structure and use of the classification for books on the amazon.com website are described and analyzed. The contents of this very large website are changing constantly and the access mechanisms have the main purpose of enabling searchers to find books for purchase. This includes finding books the searcher knows about at the start of the search, as well as those that might present themselves in the course of searching and that are related in some way. Underlying the many access paths to books is a classification scheme comprising a rich network of terms in an enumerative and multihierarchical structure.
    Source
    Challenges in knowledge representation and organization for the 21st century: Integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference Granada, Spain, July 10-13, 2002. Ed.: M. López-Huertas
  20. Broughton, V.; Lane, H.: Classification schemes revisited : applications to Web indexing and searching (2000) 0.00
    0.00355646 = product of:
      0.0106693795 = sum of:
        0.0106693795 = product of:
          0.021338759 = sum of:
            0.021338759 = weight(_text_:of in 2476) [ClassicSimilarity], result of:
              0.021338759 = score(doc=2476,freq=26.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.31146988 = fieldWeight in 2476, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2476)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Basic skills of classification and subject indexing have been little taught in British library schools since automation was introduced into libraries. However, development of the Internet as a major medium of publication has stretched the capability of search engines to cope with retrieval. Consequently, there has been interest in applying existing systems of knowledge organization to electronic resources. Unfortunately, the classification systems have been adopted without a full understanding of modern classification principles. Analytico-synthetic schemes have been used crudely, as in the case of the Universal Decimal Classification (UDC). The fully faceted Bliss Bibliographical Classification, 2nd edition (BC2) with its potential as a tool for electronic resource retrieval is virtually unknown outside academic libraries
    Content
    A short discussion of using classification systems to organize the web, one of many such discussions. The authors are both involved with BC2 and naturally think it is the best system for organizing information online. They list reasons why faceted classifications are best (e.g. no theoretical limits to specificity or exhaustivity; easier to handle complex subjects; flexible enough to accommodate different user needs) and take a brief look at how BC2 works. They conclude with a discussion of how and why it should be applied to online resources, and a plea for recognition of the importance of classification and subject analysis skills, even when full-text searching is available and databases respond instantly.
    Source
    Journal of Internet cataloging. 2(2000) nos.3/4, S.143-155

Languages

  • e (English) 56
  • d (German) 3
  • hu (Hungarian) 1