Search (38 results, page 1 of 2)

  • theme_ss:"Klassifikationssysteme im Online-Retrieval"
  1. Tunkelang, D.: Dynamic category sets : an approach for faceted search (2006) 0.02
    0.022415739 = product of:
      0.13449442 = sum of:
        0.13449442 = weight(_text_:problem in 3082) [ClassicSimilarity], result of:
          0.13449442 = score(doc=3082,freq=8.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.6565352 = fieldWeight in 3082, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3082)
      0.16666667 = coord(1/6)
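The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf · idf · fieldNorm, queryWeight = idf · queryNorm, and the final score is queryWeight · fieldWeight scaled by the coord factor. A minimal sketch that reproduces the numbers for this record (the function name is ours, not Lucene's API):

```python
import math

def classic_similarity(freq, doc_freq, max_docs, field_norm, query_norm, coord):
    """Recompute one term branch of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                           # 2.828427 for freq=8.0
    idf = 1 + math.log(max_docs / (doc_freq + 1))  # 4.244485
    query_weight = idf * query_norm                # 0.20485485 (queryWeight)
    field_weight = tf * idf * field_norm           # 0.6565352  (fieldWeight)
    return query_weight * field_weight * coord     # coord = matched/total clauses

# Values from result 1 above (term "problem" in doc 3082):
score = classic_similarity(freq=8.0, doc_freq=1723, max_docs=44218,
                           field_norm=0.0546875, query_norm=0.04826377,
                           coord=1 / 6)
print(score)  # ≈ 0.022415739
```

For records matching two query terms (e.g. result 2 below), the per-term weights are summed before the coord(2/6) factor is applied.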
    
    Abstract
    In this paper, we present Dynamic Category Sets, a novel approach that addresses the vocabulary problem for faceted data. In their paper on the vocabulary problem, Furnas et al. note that "the keywords that are assigned by indexers are often at odds with those tried by searchers." Faceted search systems exhibit an interesting aspect of this problem: users do not necessarily understand an information space in terms of the same facets as the indexers who designed it. Our approach addresses this problem by employing a data-driven approach to discover sets of values across multiple facets that best match the query. When there are multiple candidates, we offer a clarification dialog that allows the user to disambiguate them.
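The core move of the approach described above can be sketched in a few lines: scan the values of every facet for matches against the free-text query terms and collect the candidate (facet, value) pairs that a clarification dialog would then present. The facet data and function below are invented for illustration and are not from the paper:

```python
# Toy facet inventory (invented values, not Tunkelang's data).
facets = {
    "genre": ["jazz", "classical", "rock"],
    "format": ["CD", "vinyl", "MP3"],
    "era": ["1950s", "1960s"],
}

def candidate_values(query):
    """Collect (facet, value) pairs whose value matches any query term."""
    terms = query.lower().split()
    hits = []
    for facet, values in facets.items():
        for v in values:
            if any(t in v.lower() for t in terms):
                hits.append((facet, v))
    return hits

matches = candidate_values("1950s jazz")
print(matches)  # [('genre', 'jazz'), ('era', '1950s')]
```

When more than one candidate set survives, the paper's clarification dialog asks the user to disambiguate rather than guessing.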
  2. Oberhauser, O.: Implementierung und Parametrisierung klassifikatorischer Recherchekomponenten im OPAC (2005) 0.02
    0.018836789 = product of:
      0.056510366 = sum of:
        0.033623606 = weight(_text_:problem in 3353) [ClassicSimilarity], result of:
          0.033623606 = score(doc=3353,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.1641338 = fieldWeight in 3353, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3353)
        0.02288676 = weight(_text_:22 in 3353) [ClassicSimilarity], result of:
          0.02288676 = score(doc=3353,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.1354154 = fieldWeight in 3353, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3353)
      0.33333334 = coord(2/6)
    
    Abstract
     The interest in classificatory indexing and retrieval that has reawakened in recent years has, to all appearances, not yet sufficiently reached the vendors of integrated library systems. How else can it be explained that the OPAC module of a leading system such as Aleph 500 offers virtually no features for classification-based searching? In fact, the situation we find today is hardly changed from that of the former Bibos system: notations of one or more classification systems can be catalogued in the MAB category designated for this purpose (700, plus indicators) and can then be searched and displayed. But which user knows what these notations actually mean? Who takes the trouble to find this out in order to search by them? Essentially, this is the same problem that afflicted the systematic card catalogue and made it a retrieval instrument that was laboriously produced but little used, accepted (of necessity) only where no verbal subject catalogue existed. One might object that, in contrast to earlier systems, Aleph 500 at least allows the browsing of indexes, so that the OPAC can offer an index of the assigned notations (or several such indexes where more than one classification system is in use). True, but what does browsing the notation index offer the uninitiated, other than an alphabetical list of cryptic codes? One might further object that the Aleph 500 OPAC provides so-called search services ("services"), which allow hypertext navigation onward from certain elements of a full record display. Correct, but with these one can merely browse the index again, or display all other works carrying the same notation, that is, a code whose meaning is usually unknown.
     How popular might this feature be with the public? Another objection would point to the thesaurus module now offered by the vendor, which could presumably also be used for classification systems. But how many libraries in our consortium have so far been willing to pay separately for this module, which one might actually expect to be part of the base system? Finally, one might object that, unlike in the Bibos era, it is now possible to implement subject schemes and classifications as authority files and to use these in retrieval for verbal entry points into classificatory searching, or at least for displaying the class captions in the full record display. Correct: this is possible and was in fact once attempted for the MSC (Mathematics Subject Classification, also known as the "AMS classification"). That project, begun under system version 11.5, stalled after some time, however, and regrettably never made its way into the following version (14.2). Even if one may hope that it can be resumed under the new version 16, the example nevertheless points to the fundamental problems of the authority-file approach (additional effort, continuity). Moreover, implementing a dedicated authority file is probably only worthwhile for a larger or more complex classification system, whereas for smaller subject schemes one would hardly consider it.
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 58(2005) H.1, S.22-37
  3. Hill, J.S.: Online classification number access : some practical considerations (1984) 0.02
    0.017437533 = product of:
      0.104625195 = sum of:
        0.104625195 = weight(_text_:22 in 7684) [ClassicSimilarity], result of:
          0.104625195 = score(doc=7684,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.61904186 = fieldWeight in 7684, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=7684)
      0.16666667 = coord(1/6)
    
    Source
    Journal of academic librarianship. 10(1984), S.17-22
  4. National Seminar on Classification in the Digital Environment : Papers contributed to the National Seminar on Classification in the Digital Environment, Bangalore, 9-11 August 2001 (2001) 0.02
    0.015452298 = product of:
      0.046356894 = sum of:
        0.033278745 = weight(_text_:problem in 2047) [ClassicSimilarity], result of:
          0.033278745 = score(doc=2047,freq=6.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.16245036 = fieldWeight in 2047, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.015625 = fieldNorm(doc=2047)
        0.013078149 = weight(_text_:22 in 2047) [ClassicSimilarity], result of:
          0.013078149 = score(doc=2047,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.07738023 = fieldWeight in 2047, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.015625 = fieldNorm(doc=2047)
      0.33333334 = coord(2/6)
    
    Date
    2. 1.2004 10:35:22
    Footnote
     Rez. in: Knowledge organization 30(2003) no.1, S.40-42 (J.-E. Mai): "Introduction: This is a collection of papers presented at the National Seminar on Classification in the Digital Environment held in Bangalore, India, on August 9-11, 2001. The collection contains 18 papers dealing with various issues related to knowledge organization and classification theory. The issue of transferring the knowledge, traditions, and theories of bibliographic classification to the digital environment is an important one, and I was excited to learn that proceedings from this seminar were available. Many of us experience frustration on a daily basis due to poorly constructed Web search mechanisms and Web directories. As a community devoted to making information easily accessible we have something to offer the Web community, and a seminar on the topic was indeed much needed. Below are brief summaries of the 18 papers presented at the seminar. The order of the summaries follows the order of the papers in the proceedings. The titles of the papers are given in parentheses after the authors' names. AHUJA and WESLEY (From "Subject" to "Need": Shift in Approach to Classifying Information on the Internet/Web) argue that traditional bibliographic classification systems fail in the digital environment. One problem is that bibliographic classification systems have been developed to organize library books on shelves and as such are unidimensional and tied to the paper-based environment. Another problem is that they are "subject" oriented in the sense that they assume a relatively stable universe of knowledge containing basic and fixed compartments of knowledge that can be identified and represented. Ahuja and Wesley suggest that classification in the digital environment should be need-oriented instead of subject-oriented ("One important link that binds knowledge and human being is his societal need. ... Hence, it will be ideal to organise knowledge based upon need instead of subject." (p. 10)).
     AHUJA and SATIJA (Relevance of Ranganathan's Classification Theory in the Age of Digital Libraries) note that traditional bibliographic classification systems have been applied in the digital environment with only limited success. They find that the "inherent flexibility of electronic manipulation of documents or their surrogates should allow a more organic approach to allocation of new subjects and appropriate linkages between subject hierarchies." (p. 18). Ahuja and Satija also suggest that it is necessary to shift from a "subject" focus to a "need" focus when applying classification theory in the digital environment. They find Ranganathan's framework applicable in the digital environment. Although Ranganathan's focus is "subject oriented and hence emphasise the hierarchical and linear relationships" (p. 26), his framework "can be successfully adopted with certain modifications ... in the digital environment." (p. 26). SHAH and KUMAR (Model for System Unification of Geographical Schedules (Space Isolates)) report on a plan to develop a single schedule for geographical subdivision that could be used across all classification systems. The authors argue that this is needed in order to facilitate interoperability in the digital environment. SAN SEGUNDO MANUEL (The Representation of Knowledge as a Symbolization of Productive Electronic Information) distills different approaches and definitions of the term "representation" as it relates to representation of knowledge in the library and information science literature and field. SHARADA (Linguistic and Document Classification: Paradigmatic Merger Possibilities) suggests the development of a universal indexing language. The foundation for the universal indexing language is Chomsky's Minimalist Program and Ranganathan's analytico-synthetic classification theory; according to the author, based on these approaches, it "should not be a problem" (p. 62) to develop a universal indexing language.
  5. Ellis, D.; Vasconcelos, A.: ¬The relevance of facet analysis for World Wide Web subject organization and searching (2000) 0.01
    0.01358599 = product of:
      0.08151594 = sum of:
        0.08151594 = weight(_text_:problem in 2477) [ClassicSimilarity], result of:
          0.08151594 = score(doc=2477,freq=4.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.39792046 = fieldWeight in 2477, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.046875 = fieldNorm(doc=2477)
      0.16666667 = coord(1/6)
    
    Abstract
    Different forms of indexing and search facilities available on the Web are described. Use of facet analysis to structure hypertext concept structures is outlined in relation to work on (1) development of hypertext knowledge bases for designers of learning materials and (2) construction of knowledge based hypertext interfaces. The problem of lack of closeness between page designers and potential users is examined. Facet analysis is suggested as a way of alleviating some difficulties associated with this problem of designing for the unknown user.
  6. Lim, E.: Southeast Asian subject gateways : an examination of their classification practices (2000) 0.01
    0.013078149 = product of:
      0.0784689 = sum of:
        0.0784689 = weight(_text_:22 in 6040) [ClassicSimilarity], result of:
          0.0784689 = score(doc=6040,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.46428138 = fieldWeight in 6040, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=6040)
      0.16666667 = coord(1/6)
    
    Date
    22. 6.2002 19:42:47
  7. Pollitt, A.S.; Tinker, A.J.; Braekevelt, P.A.J.: Improving access to online information using dynamic faceted classification (1998) 0.01
    0.011207869 = product of:
      0.06724721 = sum of:
        0.06724721 = weight(_text_:problem in 4427) [ClassicSimilarity], result of:
          0.06724721 = score(doc=4427,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.3282676 = fieldWeight in 4427, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4427)
      0.16666667 = coord(1/6)
    
    Abstract
    The human natural ability to store and process images and speech provides clues for improving access to online information. The principles underpinning the maps people use in their minds can be applied to maps that can be presented at the user interface to online systems. Traditional classification organizes information into structured hierarchies and simplifies the search problem, but has serious limitations. Discusses the prospects for improving access to online information through the application of dynamic faceted classification. Presents a glimpse into the navigation of n-dimensional information space for future library OPACs using a modified DDC
  8. Pollitt, A.S.; Tinker, A.J.: Enhanced view-based searching through the decomposition of Dewey Decimal Classification codes (2000) 0.01
    0.011092915 = product of:
      0.06655749 = sum of:
        0.06655749 = weight(_text_:problem in 6486) [ClassicSimilarity], result of:
          0.06655749 = score(doc=6486,freq=6.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.32490072 = fieldWeight in 6486, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.03125 = fieldNorm(doc=6486)
      0.16666667 = coord(1/6)
    
    Abstract
    The scatter of items dealing with similar concepts through the physical library is a consequence of a classification process that produces a single notation to enable relative location. Compromises must be made to place an item where it is most appropriate for a given user community. No such compromise is needed with a digital library where the item can be considered to occupy a very large number of relative locations, as befits the needs of the user. Interfaces to these digital libraries can reuse the knowledge structures of their physical counterparts yet still address the problem of scatter. View-based searching is an approach that takes advantage of the knowledge structures but addresses the problem of scatter by applying a facetted approach to information retrieval. This paper describes the most recent developments in the implementation of a view-based searching system for a University Library OPAC. The user interface exploits the knowledge structures in the Dewey Decimal Classification Scheme (DDC) in navigable views with implicit Boolean searching. DDC classifies multifaceted items by building a single relative code from components. These codes may already have been combined in the schedules or be built according to well-documented instructions. Rules can be applied to decode these numbers to provide codes for each additional facet. To enhance the retrieval power of the view-based searching system, multiple facet codes are being extracted through decomposition from single Dewey Class Codes. This paper presents the results of applying automatic decomposition in respect of Geographic Area and the creation of a view (by Geographic Area) for the full collection of over 250,000 library items. This is the first step in demonstrating how the problem of scatter of subject matter across the disciplines of the Dewey Decimal Classification and the physical library collection can be addressed through the use of facets and view-based searching
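As a rough illustration of the decomposition step described above (not Pollitt and Tinker's actual rules): DDC builds numbers such as 330.9436, the economic conditions of Austria, where the standard-subdivision marker "09" introduces a Table 2 area notation, so a decomposer can recover the geographic facet by spotting that marker. The area table below is a tiny invented sample, and the naive marker search is a toy, not the documented DDC number-building instructions:

```python
# Illustrative only: a tiny sample standing in for DDC Table 2 (areas).
AREA_TABLE = {"41": "British Isles", "43": "Central Europe; Germany",
              "436": "Austria", "44": "France"}

def geographic_facet(ddc):
    """Extract an area facet from a built DDC number via the '09' marker.

    Toy logic: the first '09' in the digit string is taken as the
    standard-subdivision marker; real DDC numbers need the schedules.
    """
    digits = ddc.replace(".", "")
    marker = digits.find("09")
    if marker == -1:
        return None
    area = digits[marker + 2:]
    for length in range(len(area), 0, -1):  # longest-prefix match
        if area[:length] in AREA_TABLE:
            return area[:length], AREA_TABLE[area[:length]]
    return None

print(geographic_facet("330.9436"))  # ('436', 'Austria')
```

A facet code recovered this way can then populate a Geographic Area view of the kind the paper reports for its 250,000-item collection.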
  9. Comaromi, C.L.: Summation of classification as an enhancement of intellectual access to information in an online environment (1990) 0.01
    0.010898459 = product of:
      0.06539075 = sum of:
        0.06539075 = weight(_text_:22 in 3576) [ClassicSimilarity], result of:
          0.06539075 = score(doc=3576,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.38690117 = fieldWeight in 3576, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=3576)
      0.16666667 = coord(1/6)
    
    Date
    8. 1.2007 12:22:40
  10. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.01
    0.010898459 = product of:
      0.06539075 = sum of:
        0.06539075 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
          0.06539075 = score(doc=611,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.38690117 = fieldWeight in 611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=611)
      0.16666667 = coord(1/6)
    
    Date
    22. 8.2009 12:54:24
  11. Drabenstott, K.M.: Classification to the rescue : handling the problems of too many and too few retrievals (1996) 0.01
    0.009606745 = product of:
      0.05764047 = sum of:
        0.05764047 = weight(_text_:problem in 5164) [ClassicSimilarity], result of:
          0.05764047 = score(doc=5164,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.28137225 = fieldWeight in 5164, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.046875 = fieldNorm(doc=5164)
      0.16666667 = coord(1/6)
    
    Abstract
     The first studies of online catalog use demonstrated that the problems of too many and too few retrievals plagued the earliest online catalog users. Despite 15 years of system development, implementation, and evaluation, these problems still adversely affect the subject searches of today's online catalog users. In fact, the large-retrievals problem has grown more acute due to the growth of online catalog databases. This paper explores the use of library classifications for consolidating and summarizing high-posted subject searches and for handling subject searches that result in no or too few retrievals. Findings are presented in the form of generalizations about retrievals and library classifications, needed improvements to classification terminology, and suggestions for improved functionality to facilitate the display of retrieved titles in online catalogs
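The consolidation idea the abstract describes, summarizing a high-posted search by classification rather than listing every title, can be sketched in a few lines; the class numbers here are invented examples:

```python
from collections import Counter

# Invented DDC numbers for a set of retrieved titles:
hits = ["025.04", "025.43", "004.678", "020.285", "025.524"]

# Summarize the retrieval set by top-level class instead of listing titles.
summary = Counter(h.split(".")[0] for h in hits)
print(summary.most_common())  # [('025', 3), ('004', 1), ('020', 1)]
```

Mapping each class number to its caption would then give the user a compact, browsable overview of a large result set.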
  12. Louie, A.J.; Maddox, E.L.; Washington, W.: Using faceted classification to provide structure for information architecture (2003) 0.01
    0.009606745 = product of:
      0.05764047 = sum of:
        0.05764047 = weight(_text_:problem in 2471) [ClassicSimilarity], result of:
          0.05764047 = score(doc=2471,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.28137225 = fieldWeight in 2471, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.046875 = fieldNorm(doc=2471)
      0.16666667 = coord(1/6)
    
    Abstract
    This is a short, but very thorough and very interesting, report on how the writers built a faceted classification for some legal information and used it to structure a web site with navigation and searching. There is a good summary of why facets work well and how they fit into bibliographic control in general. The last section is about their implementation of a web site for the Washington State Bar Association's Council for Legal Public Education. Their classification uses three facets: Purpose (the general aim of the document, e.g. Resources for K-12 Teachers), Topic (the subject of the document), and Type (the legal format of the document). See Example Web Sites, below, for a discussion of the site and a problem with its design.
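The three-facet scheme described above lends itself to a simple data model: each document carries one value per facet, and navigation is conjunctive filtering over those values. A toy sketch with invented documents (only the facet names and the "Resources for K-12 Teachers" example come from the report):

```python
# Invented documents tagged with the report's three facets.
docs = [
    {"title": "Mock Trial Kit", "purpose": "Resources for K-12 Teachers",
     "topic": "Courts", "type": "Lesson plan"},
    {"title": "Tenant Rights FAQ", "purpose": "Public Legal Education",
     "topic": "Housing", "type": "FAQ"},
    {"title": "Juvenile Justice Overview", "purpose": "Resources for K-12 Teachers",
     "topic": "Courts", "type": "Article"},
]

def facet_counts(docs, facet):
    """Count documents per value of one facet (for a navigation sidebar)."""
    counts = {}
    for d in docs:
        counts[d[facet]] = counts.get(d[facet], 0) + 1
    return counts

def select(docs, **facets):
    """Return titles matching every chosen facet value (conjunctive filter)."""
    return [d["title"] for d in docs
            if all(d[f] == v for f, v in facets.items())]

print(facet_counts(docs, "purpose"))
print(select(docs, purpose="Resources for K-12 Teachers", topic="Courts"))
```

Each facet stays small and flat, which is what makes this structure easy to render as site navigation.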
  13. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.01
    0.009247649 = product of:
      0.055485893 = sum of:
        0.055485893 = weight(_text_:22 in 4379) [ClassicSimilarity], result of:
          0.055485893 = score(doc=4379,freq=4.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.32829654 = fieldWeight in 4379, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=4379)
      0.16666667 = coord(1/6)
    
    Abstract
     On 29 and 30 October 2009 the second international UDC seminar, on the theme "Classification at a Crossroad", took place at the Royal Library in The Hague. Like the first conference of this kind in 2007, it was organized by the UDC Consortium (UDCC). This year's event focused on the indexing of the World Wide Web with better use of classifications (in particular, of course, the UDC), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search and multilingual access also played a role. 135 participants from 35 countries came to The Hague. With 22 papers from 14 different countries the programme covered a broad range, the United Kingdom being most strongly represented with five contributions. On both conference days the daily themes were set by the opening talks and then explored further in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
  14. Doyle, B.: ¬The classification and evaluation of Content Management Systems (2003) 0.01
    0.008718766 = product of:
      0.052312598 = sum of:
        0.052312598 = weight(_text_:22 in 2871) [ClassicSimilarity], result of:
          0.052312598 = score(doc=2871,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.30952093 = fieldWeight in 2871, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=2871)
      0.16666667 = coord(1/6)
    
    Date
    30. 7.2004 12:22:52
  15. Peereboom, M.: DutchESS : Dutch Electronic Subject Service - a Dutch national collaborative effort (2000) 0.01
    0.008718766 = product of:
      0.052312598 = sum of:
        0.052312598 = weight(_text_:22 in 4869) [ClassicSimilarity], result of:
          0.052312598 = score(doc=4869,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.30952093 = fieldWeight in 4869, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=4869)
      0.16666667 = coord(1/6)
    
    Date
    22. 6.2002 19:39:23
  16. Van Dijck, P.: Introduction to XFML (2003) 0.01
    0.008718766 = product of:
      0.052312598 = sum of:
        0.052312598 = weight(_text_:22 in 2474) [ClassicSimilarity], result of:
          0.052312598 = score(doc=2474,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.30952093 = fieldWeight in 2474, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=2474)
      0.16666667 = coord(1/6)
    
    Source
    http://www.xml.com/lpt/a/2003/01/22/xfml.html
  17. Bambey, D.: Thesauri und Klassifikationen im Netz : Neue Herausforderungen für klassische Werkzeuge (2000) 0.01
    0.008005621 = product of:
      0.04803372 = sum of:
        0.04803372 = weight(_text_:problem in 5505) [ClassicSimilarity], result of:
          0.04803372 = score(doc=5505,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.23447686 = fieldWeight in 5505, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5505)
      0.16666667 = coord(1/6)
    
    Abstract
     The intensified discussion about qualitatively better search and indexing methods on the Internet has also meant that thesauri and classifications have once again become a topic, and the subject of projects, among specialist information providers and in the academic library sector. Such swings of fortune are a familiar phenomenon, for subject-based methodological instruments have always fared poorly in times of technological surges. When the technological possibilities then have to be critically reconsidered and the problems of quality assurance become apparent, the problem of reconciling technological procedures with subject- and content-related requirements inevitably moves back into the centre of interest. My remarks address above all current problems in the production and retrieval of information, or more precisely of specialist information, questions of quality assurance, and the role that classifications and thesauri play, or could play, in this context. The aspect of user acceptance in particular is given greater emphasis here. The point about newer approaches is explained in somewhat more detail using the example of linking different thesauri and classifications by means of so-called cross-concordances. In what follows I refer above all to the social sciences and especially to education: this is the subject background of the Fachinformationssystem Bildung and the Deutscher Bildungsserver, in whose context I deal with the problems addressed here
  18. Sparck Jones, K.: Some thoughts on classification for retrieval (1970) 0.01
    0.008005621 = product of:
      0.04803372 = sum of:
        0.04803372 = weight(_text_:problem in 4327) [ClassicSimilarity], result of:
          0.04803372 = score(doc=4327,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.23447686 = fieldWeight in 4327, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4327)
      0.16666667 = coord(1/6)
    
    Abstract
     The suggestion that classifications for retrieval should be constructed automatically raises some serious problems concerning the sorts of classification which are required, and the way in which formal classification theories should be exploited, given that a retrieval classification is required for a purpose. These difficulties have not been sufficiently considered, and the paper therefore attempts an analysis of them, though no solution of immediate application can be suggested. Starting with the illustrative proposition that a polythetic, multiple, unordered classification is required in automatic thesaurus construction, this is considered in the context of classification in general, where eight sorts of classification can be distinguished, each covering a range of class definitions and class-finding algorithms. The problem which follows is that since there is generally no natural or best classification of a set of objects as such, the evaluation of alternative classifications requires either formal criteria of goodness of fit, or, if a classification is required for a purpose, a precise statement of that purpose. In any case a substantive theory of classification is needed, which does not exist; and since sufficiently precise specifications of retrieval requirements are also lacking, the only currently available approach to automatic classification experiments for information retrieval is to do enough of them
  19. Wyly, B.: What lies ahead for classification in information networks? : report of a panel discussion (1995) 0.01
    0.008005621 = product of:
      0.04803372 = sum of:
        0.04803372 = weight(_text_:problem in 5568) [ClassicSimilarity], result of:
          0.04803372 = score(doc=5568,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.23447686 = fieldWeight in 5568, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5568)
      0.16666667 = coord(1/6)
    
    Abstract
     Ia McIlwaine, head of the Classification Research Group and editor of the UDC, noticed that the session's title invited crystal ball gazing, a talent she denied possessing. However, she admitted that she had asked the Classification Research Group to engage in such an exercise with her. The Group found, like the participants at the Allerton Institute were finding, that the contemplation of classification's future provided more questions than answers, but the questions were well worth considering. Her talk focused on a problem which originates in the difference between classifiers' uses and users' uses for classification systems. For users, who speak with the paraphrased self-confidence of Humpty Dumpty, a subject is a subject because they say it is. McIlwaine pointed out that this process of "saying" is at the heart of the users' needs which should be addressed by classification systems. Users use words to approach information systems and their associated classification systems. Classifiers need to recognize that this is the use to which their systems will be put. A body of users external to the classification process will make very different demands upon the system as compared to the users of the classification system who are also the creators of the system. Users desire information grouped for individual usefulness, and the groupings need to be according to words through which users can approach the system.
  20. Dack, D.: Australian attends conference on Dewey (1989) 0.01
    0.0076289205 = product of:
      0.04577352 = sum of:
        0.04577352 = weight(_text_:22 in 2509) [ClassicSimilarity], result of:
          0.04577352 = score(doc=2509,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.2708308 = fieldWeight in 2509, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2509)
      0.16666667 = coord(1/6)
    
    Date
    8.11.1995 11:52:22