Search (27 results, page 1 of 2)

  • theme_ss:"Klassifikationssysteme im Online-Retrieval"
  • type_ss:"a"
  • year_i:[2000 TO 2010} (half-open range: 2000 inclusive, 2010 exclusive; a query sketch follows this list)
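  The active filters above are plain Lucene/Solr field queries; the year filter uses the half-open range syntax noted above. A minimal sketch of how such a result page might be requested from a Solr-style endpoint follows; the host, port and core name are assumptions for illustration, and only the field names and filter values themselves come from this page.

```python
from urllib.parse import urlencode

# Hypothetical Solr endpoint and core name; only the field names and filter
# values come from the result page above.
SOLR_SELECT = "http://localhost:8983/solr/biblio/select"

params = [
    ("q", "*:*"),
    # The three active filters shown in the facet bar.
    ("fq", 'theme_ss:"Klassifikationssysteme im Online-Retrieval"'),
    ("fq", 'type_ss:"a"'),
    ("fq", "year_i:[2000 TO 2010}"),  # half-open range: 2000 <= year < 2010
    ("rows", "20"),                   # hits per page (this page lists 20 of 27)
    ("start", "0"),                   # offset 0 = page 1
    ("wt", "json"),
    ("debugQuery", "true"),           # asks Solr for per-document score explanations
]

print(SOLR_SELECT + "?" + urlencode(params))
```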
  1. Doyle, B.: ¬The classification and evaluation of Content Management Systems (2003) 0.06
    0.05908383 = product of:
      0.11816766 = sum of:
        0.11816766 = sum of:
          0.061605897 = weight(_text_:systems in 2871) [ClassicSimilarity], result of:
            0.061605897 = score(doc=2871,freq=4.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.38414678 = fieldWeight in 2871, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0625 = fieldNorm(doc=2871)
          0.056561764 = weight(_text_:22 in 2871) [ClassicSimilarity], result of:
            0.056561764 = score(doc=2871,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.30952093 = fieldWeight in 2871, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2871)
      0.5 = coord(1/2)
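    The score tree above is Lucene ClassicSimilarity (TF-IDF) explain output, and the same breakdown recurs for every entry below. As a check on the arithmetic, the sketch below recomputes entry 1's score from the quantities shown: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, and a final coord factor of 1/2 because one of two top-level query clauses matched. The constants are read directly from the tree; this is a reconstruction sketch, not the engine's own code.

```python
import math

MAX_DOCS = 44218
QUERY_NORM = 0.052184064
FIELD_NORM = 0.0625                 # fieldNorm(doc=2871)

def idf(doc_freq: int, max_docs: int = MAX_DOCS) -> float:
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq: float, doc_freq: int) -> float:
    tf = math.sqrt(freq)                        # tf = sqrt(term frequency)
    query_weight = idf(doc_freq) * QUERY_NORM   # e.g. 0.16037... for "systems"
    field_weight = tf * idf(doc_freq) * FIELD_NORM
    return query_weight * field_weight

score = (term_score(freq=4.0, doc_freq=5561)    # weight(_text_:systems) = 0.0616...
         + term_score(freq=2.0, doc_freq=3622)) # weight(_text_:22)      = 0.0566...
score *= 0.5                                    # coord(1/2)

print(score)   # ~0.059084, matching entry 1's 0.05908383 up to float rounding
```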
    
    Abstract
    This is a report on how Doyle and others built a faceted classification scheme for content management systems and made it browsable on the web (see CMS Review in Example Web Sites, below). They discuss why and how they built it, their use of OPML and XFML, how they researched terms and categories, and they include the resulting taxonomy. It is interesting to see facets used in a business environment.
    Date
    30. 7.2004 12:22:52
  2. Slavic, A.: On the nature and typology of documentary classifications and their use in a networked environment (2007) 0.04
    0.044312872 = product of:
      0.088625744 = sum of:
        0.088625744 = sum of:
          0.04620442 = weight(_text_:systems in 780) [ClassicSimilarity], result of:
            0.04620442 = score(doc=780,freq=4.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.28811008 = fieldWeight in 780, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.046875 = fieldNorm(doc=780)
          0.042421322 = weight(_text_:22 in 780) [ClassicSimilarity], result of:
            0.042421322 = score(doc=780,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.23214069 = fieldWeight in 780, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=780)
      0.5 = coord(1/2)
    
    Abstract
    Network-oriented standards for vocabulary publishing and exchange, and proposals for terminological services and terminology registries, will improve the sharing and use of all knowledge organization systems in the networked information environment. This means that documentary classifications may also become more applicable for use outside their original domain of application. The paper summarises some characteristics common to documentary classifications and explains some terminological, functional and implementation aspects. The original purpose behind each classification scheme determines the functions that the vocabulary is designed to facilitate. These functions influence the structure, semantics and syntax, scheme coverage and format in which classification data are published and made available. The author suggests that attention should be paid to the differences between documentary classifications as these may determine their suitability for a certain purpose and may impose different requirements with respect to their use online. At present, many classifications are being created for knowledge organization, and it may be important to promote expertise from the bibliographic domain with respect to building and using classification systems.
    Date
    22.12.2007 17:22:31
  3. Oberhauser, O.: Implementierung und Parametrisierung klassifikatorischer Recherchekomponenten im OPAC (2005) 0.02
    0.021902062 = product of:
      0.043804124 = sum of:
        0.043804124 = sum of:
          0.019058352 = weight(_text_:systems in 3353) [ClassicSimilarity], result of:
            0.019058352 = score(doc=3353,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.118839346 = fieldWeight in 3353, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3353)
          0.024745772 = weight(_text_:22 in 3353) [ClassicSimilarity], result of:
            0.024745772 = score(doc=3353,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.1354154 = fieldWeight in 3353, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3353)
      0.5 = coord(1/2)
    
    Abstract
    The renewed interest in classificatory indexing and retrieval seen in recent years has apparently not yet adequately reached the vendors of integrated library systems. How else can it be explained that the OPAC module of a leading system such as Aleph 500 shows virtually no features for classification-based searching? In fact we find today a situation hardly changed from that of the former Bibos system: notations from one or more classification schemes can be catalogued in the MAB category designated for this purpose (700, together with indicators) and can then be searched and displayed. But which user knows what these notations actually mean? Who takes the trouble to find out, in order then to search by them? Essentially the same problem arises here that already afflicted the classified card catalogue and made it a laboriously produced but little-used retrieval instrument, one that was only (and inevitably) accepted where a verbal subject catalogue was lacking. One could object that, unlike in the past, Aleph 500 at least allows indexes to be browsed, so that the OPAC can offer an index of the assigned notations (or several such indexes if more than one classification scheme is used). Certainly, but what does browsing the notation index give the uninitiated user, other than an alphabetical list of cryptic codes? One could further object that the Aleph 500 OPAC offers the so-called "services", which allow hypertextual navigation onward from certain elements of a full record display. True, but with these one can only browse the index again or display all other works carrying the same notation, that is, a code whose meaning is usually unknown. How popular is such a feature likely to be with users? Another objection would point to the thesaurus module now offered by the vendor, which could presumably also be used for classification schemes. But how many libraries in our consortium have so far been willing to pay separately for this module, which one would really expect to be part of the base system? Finally, one might object that, in contrast to the Bibos era, it is now possible to implement subject schemes and classifications as authority files and to use them in retrieval for verbal entry points into classificatory searching, or at least for displaying the class captions in the full record view. Correct - this is possible, and it was even attempted once for the MSC (Mathematics Subject Classification, also known as the "AMS classification"). That project, begun under system version 11.5, stalled after some time, however, and regrettably never made its way into the following version (14.2). One may hope that it can be resumed under the new version 16, but the example points to the fundamental problems of the authority-file approach (additional effort, continuity). Moreover, implementing a dedicated authority file is probably only worthwhile for a larger or more complex classification scheme, whereas for smaller subject schemes one would hardly consider it.
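    One concrete point in the abstract is that notations are opaque to users unless the catalogue can show the corresponding class captions, for example by carrying the classification as an authority file. A minimal sketch of that idea is given below; the notations and captions are invented for illustration and do not come from the article.

```python
# Hypothetical notation -> caption authority data; the entries are invented examples.
CAPTIONS = {
    "025.4": "Knowledge organization / classification",
    "681.3": "Data processing, computer science",
}

def decorate_notations(record_notations):
    """Return display strings pairing each notation with its caption, if known."""
    return [f"{n} - {CAPTIONS.get(n, 'caption not available')}" for n in record_notations]

print(decorate_notations(["025.4", "999"]))
# ['025.4 - Knowledge organization / classification', '999 - caption not available']
```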
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 58(2005) H.1, S.22-37
  4. Lim, E.: Southeast Asian subject gateways : an examination of their classification practices (2000) 0.02
    0.021210661 = product of:
      0.042421322 = sum of:
        0.042421322 = product of:
          0.084842645 = sum of:
            0.084842645 = weight(_text_:22 in 6040) [ClassicSimilarity], result of:
              0.084842645 = score(doc=6040,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.46428138 = fieldWeight in 6040, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6040)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:42:47
  5. Slavic, A.; Cordeiro, M.I.: Core requirements for automation of analytico-synthetic classifications (2004) 0.02
    0.01633573 = product of:
      0.03267146 = sum of:
        0.03267146 = product of:
          0.06534292 = sum of:
            0.06534292 = weight(_text_:systems in 2651) [ClassicSimilarity], result of:
              0.06534292 = score(doc=2651,freq=8.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.4074492 = fieldWeight in 2651, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2651)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The paper analyses the importance of data presentation and modelling and its role in improving the management, use and exchange of analytico-synthetic classifications in automated systems. Inefficiencies, in this respect, hinder the automation of classification systems that offer the possibility of building compound index/search terms. The lack of machine readable data expressing the semantics and structure of a classification vocabulary has negative effects on information management and retrieval, thus restricting the potential of both automated systems and classifications themselves. The authors analysed the data representation structure of three general analytico-synthetic classification systems (BC2-Bliss Bibliographic Classification; BSO-Broad System of Ordering; UDC-Universal Decimal Classification) and put forward some core requirements for classification data representation
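    The abstract's core argument is that analytico-synthetic schemes need machine-readable data that expresses the semantics and structure of the vocabulary, not just flat notation strings. One possible shape for such a record is sketched below; the field names and sample entries are illustrative assumptions, not the data model of BC2, BSO or UDC.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ClassRecord:
    """One class in an analytico-synthetic scheme, in machine-readable form."""
    notation: str                          # e.g. an invented notation "62-5"
    caption: str                           # verbal class description
    broader: Optional[str] = None          # notation of the broader class
    facet: Optional[str] = None            # facet/category the class belongs to
    synthesized_from: List[str] = field(default_factory=list)  # components of a built notation

# Illustrative records only; notations and captions are invented.
records = [
    ClassRecord("62", "Engineering", facet="discipline"),
    ClassRecord("62-5", "Engineering - control aspects", broader="62",
                facet="discipline", synthesized_from=["62", "-5"]),
]

for r in records:
    print(r.notation, "<", r.broader or "(top)", "|", r.caption)
```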
  6. Peereboom, M.: DutchESS : Dutch Electronic Subject Service - a Dutch national collaborative effort (2000) 0.01
    0.014140441 = product of:
      0.028280882 = sum of:
        0.028280882 = product of:
          0.056561764 = sum of:
            0.056561764 = weight(_text_:22 in 4869) [ClassicSimilarity], result of:
              0.056561764 = score(doc=4869,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.30952093 = fieldWeight in 4869, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4869)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:39:23
  7. Koch, T.: ¬Az internetforrasok toketesebb leirasahoz, szervezesehez es keresesehez alkalmas oszatlyozasi rendszerek hasznalata (2000) 0.01
    0.013613109 = product of:
      0.027226217 = sum of:
        0.027226217 = product of:
          0.054452434 = sum of:
            0.054452434 = weight(_text_:systems in 3210) [ClassicSimilarity], result of:
              0.054452434 = score(doc=3210,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.339541 = fieldWeight in 3210, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3210)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Translation of the title: The use of improved classification systems for the description, management and searching of Internet sources
  8. Alex, H.; Heiner-Freiling, M.: Melvil (2005) 0.01
    0.012372886 = product of:
      0.024745772 = sum of:
        0.024745772 = product of:
          0.049491543 = sum of:
            0.049491543 = weight(_text_:22 in 4321) [ClassicSimilarity], result of:
              0.049491543 = score(doc=4321,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2708308 = fieldWeight in 4321, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4321)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In January 2006, Die Deutsche Bibliothek will launch a new web offering named Melvil, a result of its commitment to the DDC and to the DDC Deutsch project. The web service is based on the translation of the 22nd edition of the DDC, which appears in print from K. G. Saur Verlag in October 2005, but it offers additional features that support classifiers in their work and, for the first time, enable verbal searching of DDC-indexed titles for end users. The Melvil web service comprises three applications: MelvilClass, MelvilSearch and MelvilSoap.
  9. Ferris, A.M.: If you buy it, will they use it? : a case study on the use of Classification web (2006) 0.01
    0.012372886 = product of:
      0.024745772 = sum of:
        0.024745772 = product of:
          0.049491543 = sum of:
            0.049491543 = weight(_text_:22 in 88) [ClassicSimilarity], result of:
              0.049491543 = score(doc=88,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2708308 = fieldWeight in 88, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=88)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10. 9.2000 17:38:22
  10. Chowdhury, S.; Chowdhury, G.G.: Using DDC to create a visual knowledge map as an aid to online information retrieval (2004) 0.01
    0.012175934 = product of:
      0.024351869 = sum of:
        0.024351869 = product of:
          0.048703738 = sum of:
            0.048703738 = weight(_text_:systems in 2643) [ClassicSimilarity], result of:
              0.048703738 = score(doc=2643,freq=10.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.3036947 = fieldWeight in 2643, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2643)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    1. Introduction Web search engines and digital libraries usually expect users to use search terms that most accurately represent their information needs. Finding the most appropriate search terms to represent an information need is an age-old problem in information retrieval. Keyword or phrase search may produce good search results as long as the search terms or phrase(s) match those used by the authors and have been chosen for indexing by the information retrieval system concerned. Since this does not always happen, a large number of false drops are produced by information retrieval systems. The retrieval results become worse in very large systems that deal with millions of records, such as Web search engines and digital libraries. Vocabulary control tools are used to improve the performance of text retrieval systems. Thesauri, the most common type of vocabulary control tool used in information retrieval, appeared in the late fifties, designed for use with the emerging post-coordinate indexing systems of that time. They are used to exert terminology control in indexing, and to aid in searching by allowing the searcher to select appropriate search terms. A large volume of literature exists describing the design features, and experiments with the use, of thesauri in various types of information retrieval systems (see, for example, Furnas et al., 1987; Bates, 1986, 1998; Milstead, 1997; and Shiri et al., 2002).
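    The passage above explains why bare keyword matching produces false drops and how thesauri help searchers pick better terms. The toy sketch below illustrates thesaurus-assisted query expansion in that spirit; the vocabulary and relationships are invented for illustration and are not taken from the paper.

```python
# Toy thesaurus: preferred term -> entry terms (UF) and narrower terms (NT); invented data.
THESAURUS = {
    "classification": {"UF": ["categorization", "taxonomy"], "NT": ["faceted classification"]},
    "retrieval": {"UF": ["searching"], "NT": ["online retrieval"]},
}

def expand(term: str) -> set[str]:
    """Expand a search term with its entry terms (UF) and narrower terms (NT)."""
    entry = THESAURUS.get(term, {})
    return {term, *entry.get("UF", []), *entry.get("NT", [])}

query = ["classification", "retrieval"]
expanded = [expand(t) for t in query]
print(expanded)

# A document matches if it contains at least one term from every expanded set.
doc = {"taxonomy", "online retrieval", "DDC"}
print(all(doc & group for group in expanded))   # True
```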
  11. Broughton, V.; Lane, H.: Classification schemes revisited : applications to Web indexing and searching (2000) 0.01
    0.011789299 = product of:
      0.023578597 = sum of:
        0.023578597 = product of:
          0.047157194 = sum of:
            0.047157194 = weight(_text_:systems in 2476) [ClassicSimilarity], result of:
              0.047157194 = score(doc=2476,freq=6.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.29405114 = fieldWeight in 2476, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2476)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Basic skills of classification and subject indexing have been little taught in British library schools since automation was introduced into libraries. However, development of the Internet as a major medium of publication has stretched the capability of search engines to cope with retrieval. Consequently, there has been interest in applying existing systems of knowledge organization to electronic resources. Unfortunately, the classification systems have been adopted without a full understanding of modern classification principles. Analytico-synthetic schemes have been used crudely, as in the case of the Universal Decimal Classification (UDC). The fully faceted Bliss Bibliographical Classification, 2nd edition (BC2) with its potential as a tool for electronic resource retrieval is virtually unknown outside academic libraries
    Content
    A short discussion of using classification systems to organize the web, one of many such discussions. The authors are both involved with BC2 and naturally think it is the best system for organizing information online. They list reasons why faceted classifications are best (e.g. no theoretical limits to specificity or exhaustivity; easier to handle complex subjects; flexible enough to accommodate different user needs) and take a brief look at how BC2 works. They conclude with a discussion of how and why it should be applied to online resources, and a plea for recognition of the importance of classification and subject analysis skills, even when full-text searching is available and databases respond instantly.
  12. Sandner, M.; Jahns, Y.: Kurzbericht zum DDC-Übersetzer- und Anwendertreffen bei der IFLA-Konferenz 2005 in Oslo, Norwegen (2005) 0.01
    0.010715233 = product of:
      0.021430466 = sum of:
        0.021430466 = product of:
          0.042860933 = sum of:
            0.042860933 = weight(_text_:22 in 4406) [ClassicSimilarity], result of:
              0.042860933 = score(doc=4406,freq=6.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.23454636 = fieldWeight in 4406, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4406)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    "Am 16. August 2005 fand in Oslo im Rahmen der heurigen IFLA-Konferenz das alljährliche Treffen der DDC-Übersetzer und der weltweiten DeweyAnwender-Institutionen (Nationalbibliotheken, Ersteller von Nationalbibliografien) statt. Die im Sommer 2005 bereits abgeschlossene deutsche Übersetzung wird in der Druckfassung Ende des Jahres in 4 Bänden vorliegen, beim K. G. Saur Verlag in München erscheinen (ISBN 3-598-11651-9) und 2006 vom ebenfalls erstmals ins Deutsche übersetzten DDC-Lehrbuch (ISBN 3-598-11748-5) begleitet. Pläne für neu startende Übersetzungen der DDC 22 gibt es für folgende Sprachen: Arabisch (mit der wachsenden Notwendigkeit, Klasse 200 Religion zu revidieren), Französisch (es erschien zuletzt eine neue Kurzausgabe 14, nun werden eine vierbändige Druckausgabe und eine frz. Webversion anvisiert), Schwedisch, Vietnamesisch (hierfür wird eine an die Sprache und Schrift angepasste Version des deutschen Übersetzungstools zum Einsatz kommen).
    Allgemein DDC 22 ist im Gegensatz zu den früheren Neuauflagen der Standard Edition eine Ausgabe ohne generelle Überarbeitung einer gesamten Klasse. Sie enthält jedoch zahlreiche Änderungen und Expansionen in fast allen Disziplinen und in vielen Hilfstafeln. Es erschien auch eine Sonderausgabe der Klasse 200, Religion. In der aktuellen Kurzausgabe der DDC 22 (14, aus 2004) sind all diese Neuerungen berücksichtigt. Auch die elektronische Version exisitiert in einer vollständigen (WebDewey) und in einer KurzVariante (Abridged WebDewey) und ist immer auf dem jüngsten Stand der Klassifikation. Ein Tutorial für die Nutzung von WebDewey steht unter www.oclc.org /dewey/ resourcesitutorial zur Verfügung. Der Index enthält in dieser elektronischen Fassung weit mehr zusammengesetzte Notationen und verbale Sucheinstiege (resultierend aus den Titeldaten des "WorldCat") als die Druckausgabe, sowie Mappings zu den aktuellsten Normdatensätzen aus LCSH und McSH. Aktuell Die personelle Zusammensetzung des EPC (Editorial Policy Committee) hat sich im letzten Jahr verändert. Dieses oberste Gremium der DDC hat Prioritäten für den aktuellen Arbeitsplan festgelegt. Es wurde vereinbart, größere Änderungsvorhaben via Dewey-Website künftig wie in einem Stellungnahmeverfahren zur fachlichen Diskussion zu stellen. www.oclc.org/dewey/discussion/."
  13. Chandler, A.; LeBlanc, J.: Exploring the potential of a virtual undergraduate library collection based on the hierarchical interface to LC Classification (2006) 0.01
    0.010605331 = product of:
      0.021210661 = sum of:
        0.021210661 = product of:
          0.042421322 = sum of:
            0.042421322 = weight(_text_:22 in 769) [ClassicSimilarity], result of:
              0.042421322 = score(doc=769,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.23214069 = fieldWeight in 769, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=769)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10. 9.2000 17:38:22
  14. Tudhope, D.; Binding, C.; Blocks, D.; Cuncliffe, D.: Representation and retrieval in faceted systems (2003) 0.01
    0.009625921 = product of:
      0.019251842 = sum of:
        0.019251842 = product of:
          0.038503684 = sum of:
            0.038503684 = weight(_text_:systems in 2703) [ClassicSimilarity], result of:
              0.038503684 = score(doc=2703,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.24009174 = fieldWeight in 2703, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2703)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper discusses two inter-related themes: the retrieval potential of faceted thesauri and XML representations of fundamental facets. Initial findings are discussed from the ongoing 'FACET' project, in collaboration with the National Museum of Science and Industry. The work discussed seeks to take advantage of the structure afforded by faceted systems for multi-term queries and flexible matching, focusing in this paper on the Art and Architecture Thesaurus. A multi-term matching function yields ranked results with partial matches via semantic term expansion, based on a measure of distance over the semantic index space formed by thesaurus relationships. Our intention is to drive the system from general representations and a common query structure and interface. To this end, we are developing an XML representation based on work by the Classification Research Group on fundamental facets or categories. The XML representation maps categories to particular thesauri and hierarchies. The system interface, which is configured by the mapping, incorporates a thesaurus browser with navigation history together with a term search facility and drag and drop query builder.
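    The matching function described here, ranked partial matches obtained by expanding query terms over thesaurus relationships and weighting them by semantic distance, can be sketched roughly as follows. The toy thesaurus, the per-link decay factor and the scoring rule are assumptions for illustration; they are not the FACET project's actual algorithm or data.

```python
# Toy thesaurus graph: term -> directly related terms (invented data).
RELATED = {
    "furnace": ["kiln", "oven"],
    "kiln": ["furnace", "pottery kiln"],
    "oven": ["furnace"],
}
DECAY = 0.5   # assumed penalty per relationship traversed

def expansion_weights(term: str, depth: int = 2) -> dict[str, float]:
    """Expand a query term over thesaurus links, weighting by semantic distance."""
    weights, frontier = {term: 1.0}, [term]
    for d in range(1, depth + 1):
        nxt = []
        for t in frontier:
            for r in RELATED.get(t, []):
                w = DECAY ** d
                if w > weights.get(r, 0.0):
                    weights[r] = w
                    nxt.append(r)
        frontier = nxt
    return weights

def score(query_terms: list[str], doc_terms: set[str]) -> float:
    """Partial-match score: best expansion weight reached in the document, per query term."""
    total = 0.0
    for q in query_terms:
        w = expansion_weights(q)
        total += max((v for t, v in w.items() if t in doc_terms), default=0.0)
    return total / len(query_terms)

print(score(["furnace"], {"pottery kiln"}))   # 0.25: matched two relationship links away
```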
  15. O'Neill, E.T.; Childress, E.; Dean, R.; Kammerer, K.; Vizine-Goetz, D.; Chan, L.M.; El-Hoshy, L.: FAST: faceted application of subject terminology (2003) 0.01
    0.009625921 = product of:
      0.019251842 = sum of:
        0.019251842 = product of:
          0.038503684 = sum of:
            0.038503684 = weight(_text_:systems in 3816) [ClassicSimilarity], result of:
              0.038503684 = score(doc=3816,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.24009174 = fieldWeight in 3816, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3816)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Library of Congress Subject Headings schema (LCSH) is by far the most commonly used and widely accepted subject vocabulary for general application. It is the de facto universal controlled vocabulary and has been a model for developing subject heading systems by many countries. However, LCSH's complex syntax and rules for constructing headings restrict its application by requiring highly skilled personnel and limit the effectiveness of automated authority control. Recent trends, driven to a large extent by the rapid growth of the Web, are forcing changes in bibliographic control systems to make them easier to use, understand, and apply, and subject headings are no exception. The purpose of adapting the LCSH with a simplified syntax to create FAST is to retain the very rich vocabulary of LCSH while making the schema easier to understand, control, apply, and use. The schema maintains upward compatibility with LCSH, and any valid set of LC subject headings can be converted to FAST headings.
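    FAST keeps the LCSH vocabulary but breaks pre-coordinated headings into separate facets. The toy sketch below conveys the general flavour of that decomposition for a subdivided heading string; the heuristics and facet assignments are invented for illustration and are far cruder than OCLC's actual, authority-driven conversion.

```python
import re

# Invented, deliberately crude heuristics; real FAST conversion relies on authority files.
CHRONOLOGICAL = re.compile(r"^\d{2}(th|st|nd|rd) century$|^\d{4}(-\d{4})?$")
FORM_TERMS = {"Periodicals", "Exhibitions", "Bibliography"}

def facet_heading(lcsh: str) -> dict[str, list[str]]:
    """Split a subdivided LCSH-style string into FAST-like facet buckets (toy version)."""
    facets: dict[str, list[str]] = {"topical": [], "geographic": [], "chronological": [], "form": []}
    parts = [p.strip() for p in lcsh.split("--")]
    facets["topical"].append(parts[0])
    for p in parts[1:]:
        if CHRONOLOGICAL.match(p):
            facets["chronological"].append(p)
        elif p in FORM_TERMS:
            facets["form"].append(p)
        else:
            # Everything else is treated as topical here; a real converter would
            # consult authority records to recognise geographic names and so on.
            facets["topical"].append(p)
    return facets

print(facet_heading("Art--History--20th century--Exhibitions"))
# {'topical': ['Art', 'History'], 'geographic': [], 'chronological': ['20th century'], 'form': ['Exhibitions']}
```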
  16. Place, E.: International collaboration on Internet subject gateways (2000) 0.01
    0.008837775 = product of:
      0.01767555 = sum of:
        0.01767555 = product of:
          0.0353511 = sum of:
            0.0353511 = weight(_text_:22 in 4584) [ClassicSimilarity], result of:
              0.0353511 = score(doc=4584,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.19345059 = fieldWeight in 4584, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4584)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:35:35
  17. Saeed, H.; Chaudry, A.S.: Potential of bibliographic tools to organize knowledge on the Internet : the use of Dewey Decimal classification scheme for organizing Web-based information resources (2001) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 6739) [ClassicSimilarity], result of:
              0.03267146 = score(doc=6739,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 6739, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6739)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Possibilities are being explored to use traditional bibliographic tools, like Dewey Decimal Classification (DDC), Library of Congress Classification (LCC), Library of Congress Subject Headings (LCSH), and Universal Decimal Classification (UDC), to improve the organization of information resources on the Internet. The most recent edition of DDC, with its enhanced features, has greater potential than other traditional approaches. A review of selected Web sites that use DDC to organize Web resources indicates, however, that the full potential of the DDC scheme for this purpose has not been realized. While the review found that the DDC classification structure was more effective when compared with other knowledge organization systems, we conclude that DDC needs to be further enhanced to make it more suitable for this application. As widely reported in the professional literature, OCLC has conducted research on the potential of DDC for organizing Web resources. Such research, however, is experimental and should be supplemented by empirical studies with user participation.
  18. Binding, C.; Tudhope, D.: Integrating faceted structure into the search process (2004) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 2627) [ClassicSimilarity], result of:
              0.03267146 = score(doc=2627,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 2627, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2627)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The nature of search requirements is perceived to be changing, fuelled by a growing dissatisfaction with the marginal accuracy and often overwhelming quantity of results from simple keyword matching techniques. Traditional search interfaces fail to acknowledge and utilise the implicit underlying structure present within a typical keyword query. Faceted structure can (and should) play a significant role in this area - acting as the basis for mediation between searcher and indexer, and guiding query formulation and reformulation by interactively educating the user about the native domain. This paper discusses the possible benefits of applying faceted knowledge organization systems to enhance query structure, query visualisation and the overall query process, drawing on the outcomes of a recently completed research project.
  19. Vizine-Goetz, D.; Thompson, R.: Towards DDC-classified displays of Netfirst search results : subject access issues (2003) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 3815) [ClassicSimilarity], result of:
              0.03267146 = score(doc=3815,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 3815, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3815)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    To determine the potential benefits of providing classified displays of search results, we analyzed the classification features of the OCLC NetFirst database using criteria developed by the Subject Analysis Committee (SAC) Subcommittee on Metadata and Classification. We also studied NetFirst search logs to better understand how the classification-based searching and limiting functions implemented in the system are being used. Our findings suggest that to increase the use of classification-based features in systems for general users, classificatory functions must be well integrated with the basic search and display functions.
  20. Mills, J.: Faceted classification and logical division in information retrieval (2004) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 831) [ClassicSimilarity], result of:
              0.03267146 = score(doc=831,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 831, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=831)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The main object of the paper is to demonstrate in detail the role of classification in information retrieval (IR) and the design of classificatory structures by the application of logical division to all forms of the content of records, subject and imaginative. The natural product of such division is a faceted classification. The latter is seen not as a particular kind of library classification but as the only viable form enabling the locating and relating of information to be optimally predictable. A detailed exposition of the practical steps in facet analysis is given, drawing on the experience of the new Bliss Classification (BC2). The continued existence of the library as a highly organized information store is assumed. But, it is argued, it must acknowledge the relevance of the revolution in library classification that has taken place. The paper also considers how alphabetically arranged subject indexes may utilize controlled use of categorical (generically inclusive) and syntactic relations to produce similarly predictable locating and relating systems for IR.