Search (35 results, page 1 of 2)

  • theme_ss:"Klassifikationssysteme im Online-Retrieval"
  1. Sandner, M.; Jahns, Y.: Kurzbericht zum DDC-Übersetzer- und Anwendertreffen bei der IFLA-Konferenz 2005 in Oslo, Norwegen (2005) 0.04
    0.03533584 = product of:
      0.0883396 = sum of:
        0.046268314 = weight(_text_:books in 4406) [ClassicSimilarity], result of:
          0.046268314 = score(doc=4406,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.18689486 = fieldWeight in 4406, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4406)
        0.04207128 = weight(_text_:22 in 4406) [ClassicSimilarity], result of:
          0.04207128 = score(doc=4406,freq=6.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.23454636 = fieldWeight in 4406, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4406)
      0.4 = coord(2/5)
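     The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown for this hit. As a minimal sketch, the listed numbers can be re-derived from the standard ClassicSimilarity formulas that the tree itself shows (tf = sqrt(freq), idf = 1 + ln(numDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, document score = coord * sum of per-term scores); the queryNorm value is simply taken from the listing, since it depends on the full query:

```python
# Minimal sketch (not from the source): re-derives the explain tree of hit 1 above,
# assuming Lucene ClassicSimilarity formulas, which the listed numbers follow:
#   tf          = sqrt(termFreq)
#   idf         = 1 + ln(numDocs / (docFreq + 1))
#   queryWeight = idf * queryNorm
#   fieldWeight = tf * idf * fieldNorm
#   doc score   = coord * sum(queryWeight * fieldWeight per matching term)
import math

NUM_DOCS = 44218
QUERY_NORM = 0.051222645      # taken from the listing; depends on the full query


def idf(doc_freq: int) -> float:
    return 1.0 + math.log(NUM_DOCS / (doc_freq + 1))


def term_score(freq: float, doc_freq: int, field_norm: float) -> float:
    query_weight = idf(doc_freq) * QUERY_NORM
    field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm
    return query_weight * field_weight


books = term_score(freq=2.0, doc_freq=956, field_norm=0.02734375)   # ~0.046268
num22 = term_score(freq=6.0, doc_freq=3622, field_norm=0.02734375)  # ~0.042071
score = (2 / 5) * (books + num22)                                   # coord(2/5) -> ~0.035336
print(books, num22, score)
```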
    
    Content
    "Am 16. August 2005 fand in Oslo im Rahmen der heurigen IFLA-Konferenz das alljährliche Treffen der DDC-Übersetzer und der weltweiten DeweyAnwender-Institutionen (Nationalbibliotheken, Ersteller von Nationalbibliografien) statt. Die im Sommer 2005 bereits abgeschlossene deutsche Übersetzung wird in der Druckfassung Ende des Jahres in 4 Bänden vorliegen, beim K. G. Saur Verlag in München erscheinen (ISBN 3-598-11651-9) und 2006 vom ebenfalls erstmals ins Deutsche übersetzten DDC-Lehrbuch (ISBN 3-598-11748-5) begleitet. Pläne für neu startende Übersetzungen der DDC 22 gibt es für folgende Sprachen: Arabisch (mit der wachsenden Notwendigkeit, Klasse 200 Religion zu revidieren), Französisch (es erschien zuletzt eine neue Kurzausgabe 14, nun werden eine vierbändige Druckausgabe und eine frz. Webversion anvisiert), Schwedisch, Vietnamesisch (hierfür wird eine an die Sprache und Schrift angepasste Version des deutschen Übersetzungstools zum Einsatz kommen).
     The latest first: The DDC editors presented a new information platform, "025.431: The Dewey blog", reachable since the beginning of July at http://ddc.typepad.com/. Also new is OCLC's five-language "DeweyBrowser" with a colour-coded navigation system; the prototype already opens onto a catalogue of 125,000 e-books and can be tried out at http://ddcresearch.oclc.org/ebooks/fileServer. Since April 2005 OCLC has offered a new current-awareness service for the DDC with different focal points: Dewey Mappings, Dewey News, DeweyTips, Dewey Updates, Deweyjournal (the latter picks up topics from all four areas); subscriptions at http://www.oclc.org/dewey/syndicated/rss.htm. Important for open-stack shelving: the segmentation of Dewey numbers has been reduced. From September 2005 the Library of Congress assigns only a single segmentation mark, namely at the point where the respective number ends in the English abridged edition. The beginning of a number segment from Table 1, Standard Subdivisions, is therefore no longer marked. For building shelf marks the Dewey Cutter program is available; download at www.oclc.org/dewey/support/program.
     General: Unlike earlier new editions of the standard edition, DDC 22 is an edition without a general revision of an entire class. It nevertheless contains numerous changes and expansions in almost all disciplines and in many auxiliary tables. A special edition of class 200, Religion, has also been published. All of these changes are reflected in the current abridged edition of DDC 22 (14, from 2004). The electronic version likewise exists in a full variant (WebDewey) and an abridged variant (Abridged WebDewey) and always reflects the latest state of the classification. A tutorial on using WebDewey is available at www.oclc.org/dewey/resources/tutorial. In this electronic version the index contains far more synthesized notations and verbal access points (derived from the title data of WorldCat) than the print edition, as well as mappings to the most recent authority records from LCSH and MeSH. Current: The membership of the EPC (Editorial Policy Committee) changed over the past year. This highest DDC body has set priorities for the current work plan. It was agreed that larger change proposals will in future be put up for professional discussion via the Dewey website, in a kind of public comment procedure: www.oclc.org/dewey/discussion/."
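     The segmentation rule reported above (one mark only, placed where the number ends in the English abridged edition) can be pictured with a small sketch; the apostrophe as mark character and the example number are assumptions for illustration, not taken from the report:

```python
# Small sketch (not an official OCLC/LoC tool) of the segmentation practice reported
# above: a single mark, placed where the full number ends in the English abridged
# edition. The apostrophe as mark character and the sample number are assumptions.
def segment(full_number: str, abridged_number: str, mark: str = "'") -> str:
    """Insert a single segmentation mark at the end of the abridged form."""
    if full_number == abridged_number or not full_number.startswith(abridged_number):
        return full_number            # nothing to mark
    cut = len(abridged_number)
    return full_number[:cut] + mark + full_number[cut:]


print(segment("943.086092", "943.086"))  # -> 943.086'092 (example number is invented)
```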
  2. Kwasnik, B.H.: Commercial Web sites and the use of classification schemes : the case of Amazon.Com (2003) 0.03
    0.03172685 = product of:
      0.15863423 = sum of:
        0.15863423 = weight(_text_:books in 2696) [ClassicSimilarity], result of:
          0.15863423 = score(doc=2696,freq=8.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.6407824 = fieldWeight in 2696, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.046875 = fieldNorm(doc=2696)
      0.2 = coord(1/5)
    
    Abstract
    The structure and use of the classification for books on the amazon.com website are described and analyzed. The contents of this very large website are changing constantly and the access mechanisms have the main purpose of enabling searchers to find books for purchase. This includes finding books the searcher knows about at the start of the search, as well as those that might present themselves in the course of searching and that are related in some way. Underlying the many access paths to books is a classification scheme comprising a rich network of terms in an enumerative and multihierarchical structure.
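     Kwasnik's point that these access paths rest on an enumerative, multihierarchical network of terms can be illustrated with a toy sketch in which one category has several parents; the category names are invented, not Amazon's actual browse categories:

```python
# Toy sketch of a multihierarchical (polyhierarchical) classification: one category
# may sit under several parents, unlike a strict tree. Category names are invented.
from collections import defaultdict

parents: dict[str, set[str]] = defaultdict(set)   # child -> set of parents


def add_edge(parent: str, child: str) -> None:
    parents[child].add(parent)


add_edge("Books", "Computers & Internet")
add_edge("Books", "Business & Investing")
add_edge("Computers & Internet", "E-Commerce")
add_edge("Business & Investing", "E-Commerce")    # second parent -> multihierarchy


def paths_to_root(cat: str) -> list[list[str]]:
    """All upward paths from a category to a top-level category."""
    if cat not in parents:
        return [[cat]]
    return [[cat] + path for parent in parents[cat] for path in paths_to_root(parent)]


print(paths_to_root("E-Commerce"))
# e.g. [['E-Commerce', 'Computers & Internet', 'Books'],
#       ['E-Commerce', 'Business & Investing', 'Books']] (order may vary)
```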
  3. ¬The UDC : Essays for a new decade (1990) 0.03
    0.026173312 = product of:
      0.13086656 = sum of:
        0.13086656 = weight(_text_:books in 661) [ClassicSimilarity], result of:
          0.13086656 = score(doc=661,freq=4.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.5286185 = fieldWeight in 661, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0546875 = fieldNorm(doc=661)
      0.2 = coord(1/5)
    
    LCSH
    Classification / Books
    Subject
    Classification / Books
  4. Pasanen-Tuomainen, I.: Analysis of subject searching in the TENTTU books database (1992) 0.02
    0.022434268 = product of:
      0.11217134 = sum of:
        0.11217134 = weight(_text_:books in 4252) [ClassicSimilarity], result of:
          0.11217134 = score(doc=4252,freq=4.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.45310158 = fieldWeight in 4252, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.046875 = fieldNorm(doc=4252)
      0.2 = coord(1/5)
    
    Abstract
     Presents a pilot study for an Internordic project to monitor the use of online catalogues in the Nordic technological university libraries. Focuses on the use of classification in subject searching, how the UDC is used and the extent of its use. Studies user interaction with the OPACs and improvements to information retrieval in the catalogues using the transaction log method to gather data. The pilot study examines the TENTTU Books database which is the online union catalogue of the Helsinki Univ. of Technology Library, a multilingual database with true information retrieval. The Internordic study itself will make comparisons between the TENTTU system and the new Virginia Tech Library System. Discusses the users monitored, method of analysis, subject searching in the database, results and how the UDC codes were used. Compares this to other studies conducted in Finland and evaluates the project
  5. Hill, J.S.: Online classification number access : some practical considerations (1984) 0.02
    0.022207877 = product of:
      0.111039385 = sum of:
        0.111039385 = weight(_text_:22 in 7684) [ClassicSimilarity], result of:
          0.111039385 = score(doc=7684,freq=2.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.61904186 = fieldWeight in 7684, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=7684)
      0.2 = coord(1/5)
    
    Source
    Journal of academic librarianship. 10(1984), S.17-22
  6. Micco, M.: Suggestions for automating the Library of Congress Classification schedules (1992) 0.02
    0.021151232 = product of:
      0.105756156 = sum of:
        0.105756156 = weight(_text_:books in 2108) [ClassicSimilarity], result of:
          0.105756156 = score(doc=2108,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.42718828 = fieldWeight in 2108, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0625 = fieldNorm(doc=2108)
      0.2 = coord(1/5)
    
    Abstract
     It will not be an easy task to automate the Library of Congress Classification schedules because it is a very large system and also because it developed long before automation. The designers were creating a system for shelving books efficiently and had not even imagined the constraints imposed by automation. A number of problems and possible solutions are discussed. The MARC format proposed for classification has some serious problems which are identified
  7. Welty, C.A.; Jenkins, J.: Formal ontology for subject (1999) 0.02
    0.021151232 = product of:
      0.105756156 = sum of:
        0.105756156 = weight(_text_:books in 4962) [ClassicSimilarity], result of:
          0.105756156 = score(doc=4962,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.42718828 = fieldWeight in 4962, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0625 = fieldNorm(doc=4962)
      0.2 = coord(1/5)
    
    Abstract
    Subject based classification is an important part of information retrieval, and has a long history in libraries, where a subject taxonomy was used to determine the location of books on the shelves. We have been studying the notion of subject itself, in order to determine a formal ontology of subject for a large scale digital library card catalog system. Deep analysis reveals a lot of ambiguity regarding the usage of subjects in existing systems and terminology, and we attempt to formalize these notions into a single framework for representing it.
  8. Beagle, D.: Visualizing keyword distribution across multidisciplinary c-space (2003) 0.02
    0.020985337 = product of:
      0.10492668 = sum of:
        0.10492668 = weight(_text_:books in 1202) [ClassicSimilarity], result of:
          0.10492668 = score(doc=1202,freq=14.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.42383775 = fieldWeight in 1202, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1202)
      0.2 = coord(1/5)
    
    Abstract
     The concept of c-space is proposed as a visualization schema relating containers of content to cataloging surrogates and classification structures. Possible applications of keyword vector clusters within c-space could include improved retrieval rates through the use of captioning within visual hierarchies, tracings of semantic bleeding among subclasses, and access to buried knowledge within subject-neutral publication containers. The Scholastica Project is described as one example, following a tradition of research dating back to the 1980s. Preliminary focus group assessment indicates that this type of classification rendering may offer digital library searchers enriched entry strategies and an expanded range of re-entry vocabularies. Those of us who work in traditional libraries typically assume that our systems of classification, Library of Congress Classification (LCC) and Dewey Decimal Classification (DDC), are descriptive rather than prescriptive. In other words, LCC classes and subclasses approximate natural groupings of texts that reflect an underlying order of knowledge, rather than arbitrary categories prescribed by librarians to facilitate efficient shelving. Philosophical support for this assumption has traditionally been found in a number of places, from the archetypal tree of knowledge, to Aristotelian categories, to the concept of discursive formations proposed by Michel Foucault. Gary P. Radford has elegantly described an encounter with Foucault's discursive formations in the traditional library setting: "Just by looking at the titles on the spines, you can see how the books cluster together...You can identify those books that seem to form the heart of the discursive formation and those books that reside on the margins. Moving along the shelves, you see those books that tend to bleed over into other classifications and that straddle multiple discursive formations. You can physically and sensually experience...those points that feel like state borders or national boundaries, those points where one subject ends and another begins, or those magical places where one subject has morphed into another..."
    But what happens to this awareness in a digital library? Can discursive formations be represented in cyberspace, perhaps through diagrams in a visualization interface? And would such a schema be helpful to a digital library user? To approach this question, it is worth taking a moment to reconsider what Radford is looking at. First, he looks at titles to see how the books cluster. To illustrate, I scanned one hundred books on the shelves of a college library under subclass HT 101-395, defined by the LCC subclass caption as Urban groups. The City. Urban sociology. Of the first 100 titles in this sequence, fifty included the word "urban" or variants (e.g. "urbanization"). Another thirty-five used the word "city" or variants. These keywords appear to mark their titles as the heart of this discursive formation. The scattering of titles not using "urban" or "city" used related terms such as "town," "community," or in one case "skyscrapers." So we immediately see some empirical correlation between keywords and classification. But we also see a problem with the commonly used search technique of title-keyword. A student interested in urban studies will want to know about this entire subclass, and may wish to browse every title available therein. A title-keyword search on "urban" will retrieve only half of the titles, while a search on "city" will retrieve just over a third. There will be no overlap, since no titles in this sample contain both words. The only place where both words appear in a common string is in the LCC subclass caption, but captions are not typically indexed in library Online Public Access Catalogs (OPACs). In a traditional library, this problem is mitigated when the student goes to the shelf looking for any one of the books and suddenly discovers a much wider selection than the keyword search had led him to expect. But in a digital library, the issue of non-retrieval can be more problematic, as studies have indicated. Micco and Popp reported that, in a study funded partly by the U.S. Department of Education, 65 of 73 unskilled users searching for material on U.S./Soviet foreign relations found some material but never realized they had missed a large percentage of what was in the database.
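     A small sketch of the retrieval gap described above, using only the counts reported in the passage (100 titles in LCC subclass HT 101-395, 50 with "urban", 35 with "city", no overlap); the recall arithmetic is illustrative and not part of the original study:

```python
# Sketch of the retrieval gap described above, using only the counts reported in the
# passage for LCC subclass HT 101-395; the recall arithmetic is illustrative.
class_size   = 100   # titles shelved under HT 101-395 ("Urban groups. The City. ...")
titles_urban = 50    # titles containing "urban" or a variant
titles_city  = 35    # titles containing "city" or a variant
overlap      = 0     # no sampled title contains both words


def recall(retrieved: int, relevant: int) -> float:
    return retrieved / relevant


print(recall(titles_urban, class_size))                          # 0.50 -> half the class
print(recall(titles_city, class_size))                           # 0.35 -> just over a third
print(recall(titles_urban + titles_city - overlap, class_size))  # 0.85 -> still misses 15 titles
```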
  9. Lim, E.: Southeast Asian subject gateways : an examination of their classification practices (2000) 0.02
    0.016655907 = product of:
      0.083279535 = sum of:
        0.083279535 = weight(_text_:22 in 6040) [ClassicSimilarity], result of:
          0.083279535 = score(doc=6040,freq=2.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.46428138 = fieldWeight in 6040, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=6040)
      0.2 = coord(1/5)
    
    Date
    22. 6.2002 19:42:47
  10. National Seminar on Classification in the Digital Environment : Papers contributed to the National Seminar an Classification in the Digital Environment, Bangalore, 9-11 August 2001 (2001) 0.02
    0.016127585 = product of:
      0.040318962 = sum of:
        0.026439039 = weight(_text_:books in 2047) [ClassicSimilarity], result of:
          0.026439039 = score(doc=2047,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.10679707 = fieldWeight in 2047, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.015625 = fieldNorm(doc=2047)
        0.013879923 = weight(_text_:22 in 2047) [ClassicSimilarity], result of:
          0.013879923 = score(doc=2047,freq=2.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.07738023 = fieldWeight in 2047, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.015625 = fieldNorm(doc=2047)
      0.4 = coord(2/5)
    
    Date
    2. 1.2004 10:35:22
    Footnote
     Review in: Knowledge organization 30(2003) no.1, S.40-42 (J.-E. Mai): "Introduction: This is a collection of papers presented at the National Seminar on Classification in the Digital Environment held in Bangalore, India, on August 9-11, 2001. The collection contains 18 papers dealing with various issues related to knowledge organization and classification theory. The issue of transferring the knowledge, traditions, and theories of bibliographic classification to the digital environment is an important one, and I was excited to learn that proceedings from this seminar were available. Many of us experience frustration on a daily basis due to poorly constructed Web search mechanisms and Web directories. As a community devoted to making information easily accessible we have something to offer the Web community, and a seminar on the topic was indeed much needed. Below are brief summaries of the 18 papers presented at the seminar. The order of the summaries follows the order of the papers in the proceedings. The titles of the papers are given in parentheses after the authors' names. AHUJA and WESLEY (From "Subject" to "Need": Shift in Approach to Classifying Information on the Internet/Web) argue that traditional bibliographic classification systems fail in the digital environment. One problem is that bibliographic classification systems have been developed to organize library books on shelves and as such are unidimensional and tied to the paper-based environment. Another problem is that they are "subject" oriented in the sense that they assume a relatively stable universe of knowledge containing basic and fixed compartments of knowledge that can be identified and represented. Ahuja and Wesley suggest that classification in the digital environment should be need-oriented instead of subject-oriented ("One important link that binds knowledge and human being is his societal need. ... Hence, it will be ideal to organise knowledge based upon need instead of subject." (p. 10)).
  11. Spiteri, L.: ¬A simplified model for facet analysis : Ranganathan 101 (1998) 0.02
    0.015863424 = product of:
      0.079317115 = sum of:
        0.079317115 = weight(_text_:books in 3842) [ClassicSimilarity], result of:
          0.079317115 = score(doc=3842,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.3203912 = fieldWeight in 3842, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.046875 = fieldNorm(doc=3842)
      0.2 = coord(1/5)
    
    Abstract
     Ranganathan's canons, principles, and postulates can easily confuse readers, especially because he revised and added to them in various editions of his many books. The Classification Research Group, which drew on Ranganathan's work as the basis for its classification theory but developed it in its own way, never clearly organized all of its equivalent canons and principles. In this article Spiteri gathers the fundamental rules from both systems and compares and contrasts them. She distills her own, clearer set of principles for constructing facets, stating the subject of a document, and designing notation. Spiteri's "simplified model" is clear and understandable, but certainly not simplistic. The model does not include methods for making a faceted system, but will serve as a very useful guide for turning initial work into a rigorous classification. Highly recommended
  12. Comaromi, C.L.: Summation of classification as an enhancement of intellectual access to information in an online environment (1990) 0.01
    0.013879924 = product of:
      0.06939962 = sum of:
        0.06939962 = weight(_text_:22 in 3576) [ClassicSimilarity], result of:
          0.06939962 = score(doc=3576,freq=2.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.38690117 = fieldWeight in 3576, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=3576)
      0.2 = coord(1/5)
    
    Date
    8. 1.2007 12:22:40
  13. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.01
    0.013879924 = product of:
      0.06939962 = sum of:
        0.06939962 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
          0.06939962 = score(doc=611,freq=2.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.38690117 = fieldWeight in 611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=611)
      0.2 = coord(1/5)
    
    Date
    22. 8.2009 12:54:24
  14. McGarry, D.: Displays of bibliographic records in call number order : functions of the displays and data elements needed (1992) 0.01
    0.01321952 = product of:
      0.0660976 = sum of:
        0.0660976 = weight(_text_:books in 2384) [ClassicSimilarity], result of:
          0.0660976 = score(doc=2384,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.2669927 = fieldWeight in 2384, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2384)
      0.2 = coord(1/5)
    
    Abstract
    Online displays of bibliographic records in call number order can serve various functions. A literature search showed no papers or books discussing this topic directly. Various displays from online catalogues available via the Internet were examined, as were displays sent to the author by colleagues. A number of the displays were uninformative to the extent that the identification of works associated with call numbers was difficult or impossible without follow-up searching of the individual bibliographic records. Other displays provided information where further searching of the database would not be required for most purposes. Displays noted ranged from displays with call numbers alone, with no bibliographic information, to records including main entry, title, statement of responsibility, place, publisher, and date. Suggestions of useful data elements to be included in displays of bibliographic records in call number order are made for the following functions: shelflisting, cataloguing, catalogue maintenance, reference, public searches, acquisition and collection development, and inventory control. Recommendations are made that the following data elements should be present in call number displays: entire call number as a sequencing element; main entry; entire title proper, and the date. Concern is expressed that the call number filing arrangement be that followed in traditional shelflists, and a suggestion is made that possible consensus on the placement of the data elements within a display be considered in the future
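     A minimal sketch of the kind of call-number-ordered display McGarry recommends, carrying the full call number as the sequencing element plus main entry, title proper, and date; the sample records are invented, and a plain string sort stands in for true shelflist filing order:

```python
# Minimal sketch of a call-number-ordered display carrying the data elements McGarry
# recommends: full call number (sequencing element), main entry, title proper, date.
# The sample records are invented; a plain string sort is a simplification of
# traditional shelflist filing arrangement.
from dataclasses import dataclass


@dataclass
class BibRecord:
    call_number: str
    main_entry: str
    title: str
    date: str


records = [
    BibRecord("HT151 .B7 1995", "Author, B.", "A study of cities", "1995"),
    BibRecord("HT119 .A5 2001", "Author, A.", "Urban communities", "2001"),
]

for rec in sorted(records, key=lambda r: r.call_number):
    print(f"{rec.call_number}  {rec.main_entry}: {rec.title} ({rec.date})")
```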
  15. Pollitt, A.S.: ¬The application of Dewey Classification in a view-based searching OPAC (1998) 0.01
    0.01321952 = product of:
      0.0660976 = sum of:
        0.0660976 = weight(_text_:books in 73) [ClassicSimilarity], result of:
          0.0660976 = score(doc=73,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.2669927 = fieldWeight in 73, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0390625 = fieldNorm(doc=73)
      0.2 = coord(1/5)
    
    Abstract
    This paper examines issues relating to the use of the Dewey Decimal Classification (DDC) in a future development of view-based searching to Online Public Access Catalogues (OPAC). View-based searching systems, exercising the principles of fully faceted classification techniques for both bibliographic and corporate database retrieval applications, are now being applied to utilise Dewey concept hierarchies in a University OPAC. Issues of efficiency and effectiveness in the evolving organisation and classification of information within libraries are examined to explain why fully faceted classification schemes have yet to realise their full potential in libraries. The key to their application in OPACs lies in the use of faceted classification as pre-coordinated indexing and abandoning the single dimension relative ordering of books on shelves. The need to maintain a single relative physical position on a bookshelf is the major source of complexity in classification. Extensive latent benefits will be realised when systematic subject arrangements, providing alternative views onto OPACs, are coupled to view-based browser and search techniques. Time and effort will be saved, and effectiveness increased, as rapid access is provided to the most appropriate information to satisfy the needs of the user. A future for Dewey Classification divorced from its decimal notation is anticipated
  16. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.01
    0.011777506 = product of:
      0.05888753 = sum of:
        0.05888753 = weight(_text_:22 in 4379) [ClassicSimilarity], result of:
          0.05888753 = score(doc=4379,freq=4.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.32829654 = fieldWeight in 4379, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=4379)
      0.2 = coord(1/5)
    
    Abstract
     On 29 and 30 October 2009 the second international UDC seminar, on the theme "Classification at a Crossroad", took place at the Royal Library in The Hague. Like the first conference of this kind in 2007, it was organized by the UDC Consortium (UDCC). At the centre of this year's event was the indexing of the World Wide Web with better use of classifications (in particular, of course, the UDC), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search and multilingual access also played a role. 135 participants from 35 countries came to The Hague for the event. With 22 papers from 14 different countries the programme covered a broad range, with the United Kingdom most strongly represented with five contributions. On both conference days the thematic focus was set by the opening papers, which were then explored in more depth in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
  17. Denton, W.: Putting facets on the Web : an annotated bibliography (2003) 0.01
    0.01144844 = product of:
      0.0572422 = sum of:
        0.0572422 = weight(_text_:books in 2467) [ClassicSimilarity], result of:
          0.0572422 = score(doc=2467,freq=6.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.23122245 = fieldWeight in 2467, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
      0.2 = coord(1/5)
    
    Abstract
     Consider movie listings in newspapers. Most Canadian newspapers list movie showtimes in two large blocks, for the two major theatre chains. The listings are ordered by region (in large cities), then theatre, then movie, and finally by showtime. Anyone wondering where and when a particular movie is playing must scan the complete listings. Determining what movies are playing in the next half hour is very difficult. When movie listings went onto the web, most sites used a simple faceted organization, always with movie name and theatre, and perhaps with region or neighbourhood (thankfully, theatre chains were left out). They make it easy to pick a theatre and see what movies are playing there, or to pick a movie and see what theatres are showing it. To complete the system, the sites should allow users to browse by neighbourhood and showtime, and to order the results in any way they desire. People could thus easily find answers to such questions as, "Where is the new James Bond movie playing?" "What's showing at the Roxy tonight?" "I'm going to be out in Little Finland this afternoon with three hours to kill starting at 2 ... is anything interesting playing?" A hypertext, faceted classification system makes more useful information more easily available to the user. Reading the books and articles below in chronological order will show a certain progression: suggestions that faceting and hypertext might work well, confidence that facets would work well if only someone would make such a system, and finally the beginning of serious work on actually designing, building, and testing faceted web sites. There is a solid basis of how to make faceted classifications (see Vickery in Recommended), but their application online is just starting. Work on XFML (see Van Dijck's work in Recommended), the Exchangeable Faceted Metadata Language, will make this easier. If it follows previous patterns, parts of the Internet community will embrace the idea and make open source software available for others to reuse. It will be particularly beneficial if professionals in both information studies and computer science can work together to build working systems, standards, and code. Each can benefit from the other's expertise in what can be a very complicated and technical area. One particularly nice thing about this area of research is that people interested in combining facets and the web often have web sites where they post their writings.
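     A toy sketch of the faceted listing described above, with invented sample data: each showing carries the facets movie, theatre, neighbourhood, and showtime, and a query may fix any subset of them, in any order:

```python
# Toy sketch of the faceted movie listing described above, with invented sample data.
# Each showing carries four facets; a query may fix any subset of them, in any order.
from datetime import time

showings = [
    {"movie": "New Bond Film", "theatre": "Roxy",      "neighbourhood": "Little Finland", "showtime": time(14, 30)},
    {"movie": "New Bond Film", "theatre": "Paramount", "neighbourhood": "Downtown",       "showtime": time(19, 0)},
    {"movie": "Other Film",    "theatre": "Roxy",      "neighbourhood": "Little Finland", "showtime": time(21, 15)},
]


def find(**facets):
    """Return the showings matching every supplied facet value."""
    return [s for s in showings if all(s[key] == value for key, value in facets.items())]


print(find(movie="New Bond Film"))                     # "Where is the new James Bond movie playing?"
print(find(theatre="Roxy"))                            # "What's showing at the Roxy tonight?"
print([s for s in find(neighbourhood="Little Finland")
       if s["showtime"] >= time(14, 0)])               # "... in Little Finland this afternoon?"
```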
     This bibliography is not meant to be exhaustive, but unfortunately it is not as complete as I wanted. Some books and articles are not included, but they may be used in my future work. (These include two books and one article by B.C. Vickery: Faceted Classification Schemes (New Brunswick, NJ: Rutgers, 1966), Classification and Indexing in Science, 3rd ed. (London: Butterworths, 1975), and "Knowledge Representation: A Brief Review" (Journal of Documentation 42 no. 3 (September 1986): 145-159); and A.C. Foskett's "The Future of Faceted Classification" in The Future of Classification, edited by Rita Marcella and Arthur Maltby (Aldershot, England: Gower, 2000): 69-80). Nevertheless, I hope this bibliography will be useful for those both new to and familiar with faceted hypertext systems. Some very basic resources are listed, as well as some very advanced ones. Some example web sites are mentioned, but there is no detailed technical discussion of any software. The user interface to any web site is extremely important, and this is briefly mentioned in two or three places (for example the discussion of lawforwa.org (see Example Web Sites)). The larger question of how to display information graphically and with hypertext is outside the scope of this bibliography. There are five sections: Recommended, Background, Not Relevant, Example Web Sites, and Mailing Lists. Background material is either introductory, advanced, or of peripheral interest, and can be read after the Recommended resources if the reader wants to know more. The Not Relevant category contains articles that may appear in bibliographies but are not relevant for my purposes.
  18. Doyle, B.: ¬The classification and evaluation of Content Management Systems (2003) 0.01
    0.011103938 = product of:
      0.055519693 = sum of:
        0.055519693 = weight(_text_:22 in 2871) [ClassicSimilarity], result of:
          0.055519693 = score(doc=2871,freq=2.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.30952093 = fieldWeight in 2871, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=2871)
      0.2 = coord(1/5)
    
    Date
    30. 7.2004 12:22:52
  19. Peereboom, M.: DutchESS : Dutch Electronic Subject Service - a Dutch national collaborative effort (2000) 0.01
    0.011103938 = product of:
      0.055519693 = sum of:
        0.055519693 = weight(_text_:22 in 4869) [ClassicSimilarity], result of:
          0.055519693 = score(doc=4869,freq=2.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.30952093 = fieldWeight in 4869, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=4869)
      0.2 = coord(1/5)
    
    Date
    22. 6.2002 19:39:23
  20. Van Dijck, P.: Introduction to XFML (2003) 0.01
    0.011103938 = product of:
      0.055519693 = sum of:
        0.055519693 = weight(_text_:22 in 2474) [ClassicSimilarity], result of:
          0.055519693 = score(doc=2474,freq=2.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.30952093 = fieldWeight in 2474, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=2474)
      0.2 = coord(1/5)
    
    Source
    http://www.xml.com/lpt/a/2003/01/22/xfml.html