Search (100 results, page 1 of 5)

  • theme_ss:"Klassifikationssysteme im Online-Retrieval"
  1. Ardo, A.; Lundberg, S.: A regional distributed WWW search and indexing service : the DESIRE way (1998) 0.04
    0.044392314 = product of:
      0.11098078 = sum of:
        0.068586886 = weight(_text_:system in 4190) [ClassicSimilarity], result of:
          0.068586886 = score(doc=4190,freq=8.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.41757566 = fieldWeight in 4190, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=4190)
        0.042393893 = weight(_text_:22 in 4190) [ClassicSimilarity], result of:
          0.042393893 = score(doc=4190,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.23214069 = fieldWeight in 4190, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=4190)
      0.4 = coord(2/5)
    
    Abstract
    Creates an open, metadata-aware system for distributed, collaborative WWW indexing. The system has 3 main components: a harvester (for collecting information), a database (for making the collection searchable), and a user interface (for making the information available). All components can be distributed across networked computers, thus supporting scalability. The system is metadata-aware and thus allows searches on several fields, including title, document author and URL. Nordic Web Index (NWI) is an application using this system to create a regional Nordic Web-indexing service. NWI is built using 5 collaborating service points within the Nordic countries. The NWI databases can be used to build additional services.
    Date
    1. 8.1996 22:08:06
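  The score breakdowns attached to each hit follow Lucene's ClassicSimilarity (tf-idf) formula: each matching term contributes queryWeight (idf × queryNorm) times fieldWeight (√tf × idf × fieldNorm), and the sum is scaled by the coord factor (matching terms / query terms). As a minimal sketch (function names are illustrative, not Lucene's API), the first result's score of 0.044392314 can be reproduced from the values in its explain tree:

  ```python
  import math

  def classic_idf(doc_freq, max_docs):
      # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
      return 1.0 + math.log(max_docs / (doc_freq + 1))

  def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
      idf = classic_idf(doc_freq, max_docs)
      query_weight = idf * query_norm                     # queryWeight
      field_weight = math.sqrt(freq) * idf * field_norm   # tf * idf * fieldNorm
      return query_weight * field_weight

  # Result 1 (doc 4190): 2 of 5 query terms match, so coord = 2/5
  query_norm = 0.052150324
  s_system = term_score(8.0, 5152, 44218, query_norm, 0.046875)
  s_22     = term_score(2.0, 3622, 44218, query_norm, 0.046875)
  score = (s_system + s_22) * (2 / 5)
  print(round(score, 9))  # ≈ 0.044392314
  ```

  The same arithmetic, with different term frequencies and fieldNorm values, accounts for every explain tree in this listing.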
  2. Chandler, A.; LeBlanc, J.: Exploring the potential of a virtual undergraduate library collection based on the hierarchical interface to LC Classification (2006) 0.04
    0.036356855 = product of:
      0.090892136 = sum of:
        0.048498247 = weight(_text_:system in 769) [ClassicSimilarity], result of:
          0.048498247 = score(doc=769,freq=4.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.29527056 = fieldWeight in 769, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=769)
        0.042393893 = weight(_text_:22 in 769) [ClassicSimilarity], result of:
          0.042393893 = score(doc=769,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.23214069 = fieldWeight in 769, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=769)
      0.4 = coord(2/5)
    
    Abstract
    The Hierarchical Interface to Library of Congress Classification (HILCC) is a system developed by the Columbia University Library to leverage call number data from the MARC holdings records in Columbia's online catalog to create a structured, hierarchical menuing system that provides subject access to the library's electronic resources. In this paper, the authors describe a research initiative at the Cornell University Library to discover if the Columbia HILCC scheme can be used as developed or in modified form to create a virtual undergraduate print collection outside the context of the traditional online catalog. Their results indicate that, with certain adjustments, an HILCC model can indeed be used to represent the holdings of a large research library's undergraduate collection of approximately 150,000 titles, but that such a model is not infinitely scalable and may require a new approach to browsing such a large information space.
    Date
    10. 9.2000 17:38:22
  3. Dack, D.: Australian attends conference on Dewey (1989) 0.04
    0.035787422 = product of:
      0.08946855 = sum of:
        0.040009014 = weight(_text_:system in 2509) [ClassicSimilarity], result of:
          0.040009014 = score(doc=2509,freq=2.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.2435858 = fieldWeight in 2509, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2509)
        0.049459543 = weight(_text_:22 in 2509) [ClassicSimilarity], result of:
          0.049459543 = score(doc=2509,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.2708308 = fieldWeight in 2509, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2509)
      0.4 = coord(2/5)
    
    Abstract
    Edited version of a report to the Australian Library and Information Association on the Conference on classification theory in the computer age, Albany, New York, 18-19 Nov 88, and on the meeting of the Dewey Editorial Policy Committee which preceded it. The focus of the Editorial Policy Committee meeting lay in the following areas: browsing; potential for improved subject access; system design; potential conflict between shelf location and information retrieval; and users. At the Conference on classification theory in the computer age the following papers were presented: Applications of artificial intelligence to bibliographic classification, by Irene Travis; Automation and classification, by Elaine Svenonius; Subject classification and language processing for retrieval in large data bases, by Diana Scott; Implications for information processing, by Carol Mandel; and Implications for information science education, by Richard Halsey.
    Date
    8.11.1995 11:52:22
  4. Vizine-Goetz, D.: OCLC investigates using classification tools to organize Internet data (1998) 0.04
    0.035787422 = product of:
      0.08946855 = sum of:
        0.040009014 = weight(_text_:system in 2342) [ClassicSimilarity], result of:
          0.040009014 = score(doc=2342,freq=2.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.2435858 = fieldWeight in 2342, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2342)
        0.049459543 = weight(_text_:22 in 2342) [ClassicSimilarity], result of:
          0.049459543 = score(doc=2342,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.2708308 = fieldWeight in 2342, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2342)
      0.4 = coord(2/5)
    
    Abstract
    The knowledge structures that form traditional library classification schemes hold great potential for improving resource description and discovery on the Internet and for organizing electronic document collections. The advantages of assigning subject tokens (classes) to documents from a scheme like the DDC system are well documented
    Date
    22. 9.1997 19:16:05
  5. Kent, R.E.: Organizing conceptual knowledge online : metadata interoperability and faceted classification (1998) 0.04
    0.035787422 = product of:
      0.08946855 = sum of:
        0.040009014 = weight(_text_:system in 57) [ClassicSimilarity], result of:
          0.040009014 = score(doc=57,freq=2.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.2435858 = fieldWeight in 57, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=57)
        0.049459543 = weight(_text_:22 in 57) [ClassicSimilarity], result of:
          0.049459543 = score(doc=57,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.2708308 = fieldWeight in 57, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=57)
      0.4 = coord(2/5)
    
    Abstract
    Conceptual Knowledge Markup Language (CKML), an application of XML, is a new standard being promoted for the specification of online conceptual knowledge (Kent and Shrivastava, 1998). CKML follows the philosophy of Conceptual Knowledge Processing (Wille, 1982), a principled approach to knowledge representation and data analysis, which advocates the development of methodologies and techniques to support people in their rational thinking, judgement and actions. CKML was developed and is being used in the WAVE networked information discovery and retrieval system (Kent and Neuss, 1994) as a standard for the specification of conceptual knowledge
    Date
    30.12.2001 16:22:41
  6. Frâncu, V.; Sabo, C.-N.: Implementation of a UDC-based multilingual thesaurus in a library catalogue : the case of BiblioPhil (2010) 0.03
    0.030674934 = product of:
      0.076687336 = sum of:
        0.034293443 = weight(_text_:system in 3697) [ClassicSimilarity], result of:
          0.034293443 = score(doc=3697,freq=2.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.20878783 = fieldWeight in 3697, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3697)
        0.042393893 = weight(_text_:22 in 3697) [ClassicSimilarity], result of:
          0.042393893 = score(doc=3697,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.23214069 = fieldWeight in 3697, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=3697)
      0.4 = coord(2/5)
    
    Abstract
    In order to enhance the use of Universal Decimal Classification (UDC) numbers in information retrieval, the authors have represented classification with multilingual thesaurus descriptors and implemented this solution in an automated way. The authors illustrate a solution implemented in a BiblioPhil library system. The standard formats used are UNIMARC for subject authority records (i.e. the UDC-based multilingual thesaurus) and MARC XML support for data transfer. The multilingual thesaurus was built according to existing standards, the constituent parts of the classification notations being used as the basis for search terms in the multilingual information retrieval. The verbal equivalents, descriptors and non-descriptors, are used to expand the number of concepts and are given in Romanian, English and French. This approach saves the time of the indexer and provides more user-friendly and easier access to the bibliographic information. The multilingual aspect of the thesaurus enhances information access for a greater number of online users
    Date
    22. 7.2010 20:40:56
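  The mechanism the BiblioPhil paper describes - letting verbal equivalents in several languages stand in for UDC notations at search time - can be sketched roughly as follows. The notations and descriptors here are invented examples for illustration, not data from the BiblioPhil thesaurus:

  ```python
  # Illustrative UDC-notation -> multilingual-descriptor mapping (invented sample data)
  thesaurus = {
      "025.4":  {"en": "Classification", "ro": "Clasificare", "fr": "Classification"},
      "81'374": {"en": "Dictionaries",   "ro": "Dicționare",  "fr": "Dictionnaires"},
  }

  def expand_query(term):
      """Find UDC notations whose descriptors (in any language) match a search term."""
      term = term.lower()
      return [notation for notation, labels in thesaurus.items()
              if any(term == label.lower() for label in labels.values())]

  print(expand_query("Clasificare"))  # matches notation 025.4 via the Romanian descriptor
  ```

  A user typing a Romanian, English or French descriptor thus reaches the same notation, which is the multilingual access gain the abstract claims.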
  7. Hill, J.S.: Online classification number access : some practical considerations (1984) 0.02
    0.022610078 = product of:
      0.11305039 = sum of:
        0.11305039 = weight(_text_:22 in 7684) [ClassicSimilarity], result of:
          0.11305039 = score(doc=7684,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.61904186 = fieldWeight in 7684, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=7684)
      0.2 = coord(1/5)
    
    Source
    Journal of academic librarianship. 10(1984), S.17-22
  8. Liu, S.; Svenonius, E.: DORS: DDC online retrieval system (1991) 0.02
    0.020448659 = product of:
      0.10224329 = sum of:
        0.10224329 = weight(_text_:system in 1155) [ClassicSimilarity], result of:
          0.10224329 = score(doc=1155,freq=10.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.62248504 = fieldWeight in 1155, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=1155)
      0.2 = coord(1/5)
    
    Abstract
    A model system, the Dewey Online Retrieval System (DORS), was implemented as an interface to an online catalog for the purpose of experimenting with classification-based search strategies and generally seeking further understanding of the role of traditional classifications in automated information retrieval. Specifications for a classification retrieval interface were enumerated and rationalized and the system was developed in accordance with them. The feature that particularly distinguishes the system and enables it to meet its stated specifications is an automatically generated chain index
  9. Oberhauser, O.: Implementierung und Parametrisierung klassifikatorischer Recherchekomponenten im OPAC (2005) 0.02
    0.017893711 = product of:
      0.044734277 = sum of:
        0.020004507 = weight(_text_:system in 3353) [ClassicSimilarity], result of:
          0.020004507 = score(doc=3353,freq=2.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.1217929 = fieldWeight in 3353, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3353)
        0.024729772 = weight(_text_:22 in 3353) [ClassicSimilarity], result of:
          0.024729772 = score(doc=3353,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.1354154 = fieldWeight in 3353, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3353)
      0.4 = coord(2/5)
    
    Abstract
    The interest in classificatory indexing and retrieval that has reawakened in recent years has apparently not yet sufficiently reached the vendors of integrated library systems. How else could it be explained that the OPAC module of a leading system such as Aleph 500 offers practically no features for classification-based searching? In fact, the situation we find today is barely changed from that of the former Bibos system: notations of one or more classification systems can be catalogued in the MAB category designated for that purpose (700, with indicators) and can then be searched and displayed. But which user knows what these notations actually mean? Who takes the trouble to find this out and then search by them? This is essentially the same problem that afflicted the classified card catalogue and made it a laboriously produced but little-used retrieval instrument, accepted (of necessity) only where a verbal subject catalogue was lacking. One might object that, compared with earlier times, Aleph 500 at least permits browsing of indexes, so that the OPAC can offer an index of the assigned notations (or several such indexes if more than one classification system is used). Granted, but what does browsing the notation index give the uninitiated user, other than an alphabetical list of cryptic codes? One might further object that the Aleph 500 OPAC provides the so-called search services ("services"), by means of which one can navigate hypertextually onward from certain elements of a full record display. Correct, but with these one can merely browse the index again, or display all other works carrying the same notation - that is, a code whose meaning is usually unknown.
How popular is this feature likely to be with the public? Another objection would be to point to the thesaurus module now offered by the vendor, which could presumably also be used for classification systems. But how many libraries in our consortium have so far been willing to pay separately for this module, which one might actually expect to be part of the base system? Finally, one might object that, in contrast to the Bibos era, it is now possible to implement classification schemes as authority files and to use these at retrieval time for verbal entry points into classificatory searching, or at least for displaying class captions in the full record view. Correct - this is possible, and it was even once attempted for the MSC (Mathematics Subject Classification, also known as the "AMS classification"). That project, begun under system version 11.5, stalled after some time, however, and regrettably never found its way into the following version (14.2). Even if one may hope that it can be resumed under the new version 16, the example points to the fundamental problems of the authority-file approach (additional effort, continuity). Moreover, implementing a dedicated authority file is probably worthwhile only for a larger or more complex classification system; for smaller schemes one would hardly consider it.
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 58(2005) H.1, S.22-37
  10. Lim, E.: Southeast Asian subject gateways : an examination of their classification practices (2000) 0.02
    0.016957557 = product of:
      0.084787786 = sum of:
        0.084787786 = weight(_text_:22 in 6040) [ClassicSimilarity], result of:
          0.084787786 = score(doc=6040,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.46428138 = fieldWeight in 6040, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=6040)
      0.2 = coord(1/5)
    
    Date
    22. 6.2002 19:42:47
  11. Ishikawa, T.; Nakamura, H.; Nakamura, Y.: UDC number automatic combination system (1994) 0.02
    0.016003607 = product of:
      0.08001803 = sum of:
        0.08001803 = weight(_text_:system in 7732) [ClassicSimilarity], result of:
          0.08001803 = score(doc=7732,freq=8.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.4871716 = fieldWeight in 7732, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7732)
      0.2 = coord(1/5)
    
    Abstract
    In a large-scale classification system, such as UDC, users are often troubled during the process of finding a relevant classification number for their concept or term and producing (combining) a final compound classification number. UDC tables are now computerized in many language editions, and the MRF was released as a master file (database) by the UDCC in 1993. In this paper, a system function is described for a man-machine interactive system to support compound UDC number assignment, and the necessary re-organization of UDC data/file formats is considered for use in automatic classification number combination.
  12. Devadason, F.J.; Intaraksa, N.; Patamawongjariya, P.; Desai, K.: Faceted indexing based system for organizing and accessing Internet resources (2002) 0.02
    0.016003607 = product of:
      0.08001803 = sum of:
        0.08001803 = weight(_text_:system in 97) [ClassicSimilarity], result of:
          0.08001803 = score(doc=97,freq=32.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.4871716 = fieldWeight in 97, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02734375 = fieldNorm(doc=97)
      0.2 = coord(1/5)
    
    Abstract
    Organizing and providing access to the resources on the Internet has been a problem area in spite of the availability of sophisticated search engines and other software tools. There have been several attempts to organize the resources on the World Wide Web. Some of them have tried to use traditional library classification schemes such as the Library of Congress Classification, the Dewey Decimal Classification and others. However, there is a need to assign proper subject headings to them and present them in a logical or hierarchical sequence to cater to the need for browsing. This paper attempts to describe an experimental system designed to organize and provide access to web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. A prototype software system has been designed to create a database of records specifying web documents according to the Dublin Core and to input a faceted subject heading according to DSIS. Synonymous terms are added to the standard terms in the heading using appropriate symbols. Once the data are entered along with a description and the URL of the web document, the record is stored in the system. More than one faceted subject heading can be assigned to a record depending on the content of the original document. The system stores the surrogates and keeps the faceted subject headings separately after establishing a link. The search is carried out on index entries derived from the faceted subject heading using the chain indexing technique. If a single term is input, the system searches for its presence in the faceted subject headings and displays the subject headings in a sorted sequence reflecting an organizing sequence.
If the number of retrieved headings is too large (running into more than a page), the user has the option of entering another search term to be searched in combination. The system searches the subject headings already retrieved and looks for those containing the second term. The retrieved faceted subject headings can be displayed and browsed. When the relevant subject heading is selected, the system displays the records with their URLs. Using the URL, the original document on the web can be accessed. The prototype system, developed in a Windows NT environment using ASP and a web server, is under rigorous testing. The database and index management routines need further development.
    An interesting but somewhat confusing article telling how the writers described web pages with Dublin Core metadata, including a faceted classification, and built a system that lets users browse the collection through the facets. They seem to want to cover too much in a short article, and unnecessary space is given over to screen shots showing how Dublin Core metadata was entered. The screen shots of the resulting browsable system are, unfortunately, not as enlightening as one would hope, and there is no discussion of how the system was actually written or the technology behind it. Still, it could be worth reading as an example of such a system and how it is treated in journals.
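  Chain indexing, on which both Devadason papers rely, derives one index entry per link of the subject chain, qualifying each lead term by its broader context in reverse (most specific context first). A minimal sketch of the procedure - the terms and function name are illustrative, not taken from the DSIS prototype:

  ```python
  def chain_index_entries(chain):
      """Derive chain-index entries from a faceted subject heading.

      Starting at the most specific link, each link becomes a lead term,
      qualified by its broader terms listed in reverse order.
      """
      entries = []
      for i in range(len(chain) - 1, -1, -1):
          lead = chain[i]
          context = list(reversed(chain[:i]))
          entries.append((lead, context))
      return entries

  heading = ["Technology", "Computers", "Internet", "Indexing"]
  for lead, ctx in chain_index_entries(heading):
      # e.g. "Indexing: Internet, Computers, Technology"
      print(f"{lead}: {', '.join(ctx)}" if ctx else lead)
  ```

  Searching any single link then finds the headings it occurs in, which is why a one-term query in the prototype can surface the full faceted headings for browsing.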
  13. Drabenstott, K.M.; Riester, L.C.; Dede, B.A.: Shelflisting using expert systems (1992) 0.02
    0.015839463 = product of:
      0.07919731 = sum of:
        0.07919731 = weight(_text_:system in 2101) [ClassicSimilarity], result of:
          0.07919731 = score(doc=2101,freq=6.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.48217484 = fieldWeight in 2101, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=2101)
      0.2 = coord(1/5)
    
    Abstract
    A prototype expert system for the computer science section (QA75 to QA76.95) of the Library of Congress Classification was built using the Mahogany Professional expert system shell. The prototype demonstrates an expert system application in which the system is enlisted as an intelligent job aid to assist users during the actual performance of shelflisting.
  14. Devadason, F.J.; Intaraksa, N.; Patamawongjariya, P.; Desai, K.: Faceted indexing application for organizing and accessing internet resources (2003) 0.02
    0.015165131 = product of:
      0.075825654 = sum of:
        0.075825654 = weight(_text_:system in 3966) [ClassicSimilarity], result of:
          0.075825654 = score(doc=3966,freq=22.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.46164727 = fieldWeight in 3966, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=3966)
      0.2 = coord(1/5)
    
    Abstract
    Organizing and providing access to the resources on the Internet has been a problem area in spite of the availability of sophisticated search engines and other software tools. There have been several attempts to organize the resources on the WWW. Some of them have tried to use traditional library classification schemes such as the Library of Congress Classification, the Dewey Decimal Classification and others. However, there is a need to assign proper subject headings to them and present them in a logical or hierarchical sequence to cater to the need for browsing. This paper attempts to describe an experimental system designed to organize and provide access to web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. A prototype software system has been designed to create a database of records specifying web documents according to the Dublin Core and to input a faceted subject heading according to DSIS. Synonymous terms are added to the standard terms in the heading using appropriate symbols. Once the data are entered along with a description and the URL of the web document, the record is stored in the system. More than one faceted subject heading can be assigned to a record depending on the content of the original document. The system stores the surrogates and keeps the faceted subject headings separately after establishing a link. The search is carried out on index entries derived from the faceted subject heading using the chain indexing technique. If a single term is input, the system searches for its presence in the faceted subject headings and displays the subject headings in a sorted sequence reflecting an organizing sequence.
If the number of retrieved headings is too large (running into more than a page), the user has the option of entering another search term to be searched in combination. The system searches the subject headings already retrieved and looks for those containing the second term. The retrieved faceted subject headings can be displayed and browsed. When the relevant subject heading is selected, the system displays the records with their URLs. Using the URL, the original document on the web can be accessed. The prototype system, developed in a Windows NT environment using ASP and a web server, is under rigorous testing. The database and index management routines need further development.
  15. National Seminar on Classification in the Digital Environment : Papers contributed to the National Seminar an Classification in the Digital Environment, Bangalore, 9-11 August 2001 (2001) 0.01
    0.014797437 = product of:
      0.036993593 = sum of:
        0.022862293 = weight(_text_:system in 2047) [ClassicSimilarity], result of:
          0.022862293 = score(doc=2047,freq=8.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.13919188 = fieldWeight in 2047, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.015625 = fieldNorm(doc=2047)
        0.014131298 = weight(_text_:22 in 2047) [ClassicSimilarity], result of:
          0.014131298 = score(doc=2047,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.07738023 = fieldWeight in 2047, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.015625 = fieldNorm(doc=2047)
      0.4 = coord(2/5)
    
    Date
    2. 1.2004 10:35:22
    Footnote
    AHUJA and SATIJA (Relevance of Ranganathan's Classification Theory in the Age of Digital Libraries) note that traditional bibliographic classification systems have been applied in the digital environment with only limited success. They find that the "inherent flexibility of electronic manipulation of documents or their surrogates should allow a more organic approach to allocation of new subjects and appropriate linkages between subject hierarchies" (p. 18). Ahuja and Satija also suggest that it is necessary to shift from a "subject" focus to a "need" focus when applying classification theory in the digital environment. They find Ranganathan's framework applicable in the digital environment: although Ranganathan's focus is "subject oriented and hence emphasise the hierarchical and linear relationships" (p. 26), his framework "can be successfully adopted with certain modifications ... in the digital environment" (p. 26). SHAH and KUMAR (Model for System Unification of Geographical Schedules (Space Isolates)) report on a plan to develop a single schedule for geographical subdivision that could be used across all classification systems. The authors argue that this is needed in order to facilitate interoperability in the digital environment. SAN SEGUNDO MANUEL (The Representation of Knowledge as a Symbolization of Productive Electronic Information) distills different approaches to and definitions of the term "representation" as it relates to representation of knowledge in the library and information science literature and field. SHARADA (Linguistic and Document Classification: Paradigmatic Merger Possibilities) suggests the development of a universal indexing language. The foundation for the universal indexing language is Chomsky's Minimalist Program and Ranganathan's analytico-synthetic classification theory; according to the author, based on these approaches, it "should not be a problem" (p. 62) to develop a universal indexing language.
PARAMESWARAN (Classification and Indexing: Impact of Classification Theory on PRECIS) reviews the PRECIS system and finds that "it could not escape from the impact of the theory of classification" (p. 131). The author further argues that classification and subject indexing serve the same purpose and that both approaches depend on syntax, which leads to the conclusion that "there is an absolute syntax as the Indian theory of classification points out" (p. 131). SATYAPAL and SANJIVINI SATYAPAL (Classifying Documents According to Postulational Approach: 1. SATSAN - A Computer Based Learning Package) and SATYAPAL and SANJIVINI SATYAPAL (Classifying Documents According to Postulational Approach: 2. Semi-Automatic Synthesis of CC Numbers) present an application to automate classification using a faceted classification system, in this case the Colon Classification system. GAIKAIWARI (An Interactive Application for Faceted Classification Systems) presents an application, called SRR, for managing and using a faceted classification scheme in a digital environment. IYER (Use of Instructional Technology to Support Traditional Classroom Learning: A Case Study) describes a course on "Information and Knowledge Organization" that she teaches at the University at Albany (SUNY); the course is conceptual and introduces students to various aspects of knowledge organization. GOPINATH (Universal Classification: How can it be used?) lists fifteen uses of universal classifications and discusses the entities of a number of disciplines. GOPINATH (Knowledge Classification: The Theory of Classification) briefly reviews the foundations for research in automatic classification, summarizes the history of classification, and places Ranganathan's thought within that history.
  16. Comaromi, C.L.: Summation of classification as an enhancement of intellectual access to information in an online environment (1990) 0.01
    0.014131299 = product of:
      0.07065649 = sum of:
        0.07065649 = weight(_text_:22 in 3576) [ClassicSimilarity], result of:
          0.07065649 = score(doc=3576,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.38690117 = fieldWeight in 3576, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=3576)
      0.2 = coord(1/5)
    
    Date
    8. 1.2007 12:22:40
  17. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.01
    0.014131299 = product of:
      0.07065649 = sum of:
        0.07065649 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
          0.07065649 = score(doc=611,freq=2.0), product of:
            0.18262155 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052150324 = queryNorm
            0.38690117 = fieldWeight in 611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=611)
      0.2 = coord(1/5)
    
    Date
    22. 8.2009 12:54:24
  18. Gödert, W.: Facet classification in online retrieval (1991) 0.01
    0.014000239 = product of:
      0.07000119 = sum of:
        0.07000119 = weight(_text_:system in 5825) [ClassicSimilarity], result of:
          0.07000119 = score(doc=5825,freq=12.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.42618635 = fieldWeight in 5825, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5825)
      0.2 = coord(1/5)
    
    Abstract
The study of faceted classification systems has primarily been directed towards their application in precombined catalogues or bibliographies, not so much towards their use in postcoordinate retrieval systems. Argues that faceted classification systems are in some respects superior to other techniques of online retrieval, insofar as facet and concept analysis is combined with an expressive notational system to guide a form of retrieval that uses Boolean operators (for combining the facets regardless of any single citation order) and truncation (for retrieving hierarchically different sets of documents). This point of view is demonstrated by two examples: the first uses a short classification system derived from B. Buchanan, and the second is built upon the classification system used by Library and Information Science Abstracts (LISA). Further discussion is concerned with some possible consequences which could be derived from retrieval with PRECIS strings
    "Online retrieval" conjures up a very different mental image now than in 1991, the year this article was written, and the year Tim Berners-Lee first revealed the new hypertext system he called the World Wide Web. Gödert shows that truncation and Boolean logic, combined with notation from a faceted classification system, will be a powerful way of searching for information. It undoubtedly is, but no system built now would require a user searching for material on "nervous systems of bone fish" to enter "Fdd$ and Leaa$". This is worth reading for someone interested in seeing how searching and facets can go together, but the web has made this article quite out of date.
  19. Gowtham, M.S.; Kamat, S.K.: ¬An expert system as a tool to classification (1995) 0.01
    0.013717378 = product of:
      0.068586886 = sum of:
        0.068586886 = weight(_text_:system in 3735) [ClassicSimilarity], result of:
          0.068586886 = score(doc=3735,freq=8.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.41757566 = fieldWeight in 3735, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3735)
      0.2 = coord(1/5)
    
    Abstract
Describes the development, by the Defence Metallurgical Research Laboratory, Hyderabad, India, of an expert system for the classification of technical documents, using the UDC schedule for metallurgy as knowledge base and the UDC classification rules as rule base. The scheme was modified from its enumerative structure to an analytico-synthetic structure, which is better suited to such an expert system. Among the benefits of the expert system: it interacts with the classifier, keeping them on the route suggested by the classification scheme; it alerts the classifier to minor variations in the scheme, so that these are not overlooked; it leads to consistency in class number generation; and, by leading the classifier through all the facet groups, it ensures that all the concepts of the subject are incorporated in the class number, which is not possible in the manual scheme
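The working principle of such a system, a rule base that walks the classifier through each facet group in a fixed citation order and synthesizes the class number step by step, can be sketched in a few lines. The schedule fragments and notation below are invented for illustration (loosely UDC-flavoured), not taken from the actual system:

```python
# Minimal sketch of rule-based class number synthesis in an
# analytico-synthetic scheme. The facet order and schedule fragments
# are hypothetical, not the real UDC metallurgy schedule.

FACET_ORDER = ["material", "process", "property"]     # fixed citation order
KNOWLEDGE_BASE = {                                    # schedule fragments
    "material": {"steel": "669.14", "aluminium": "669.71"},
    "process":  {"welding": ".052", "casting": ".041"},
    "property": {"corrosion": ":620.19"},
}

def synthesize(concepts):
    """Build a class number from the classifier's concepts, facet by facet.
    Visiting every facet group in citation order is what guarantees that
    no concept of the subject is overlooked and that synthesis is
    consistent across classifiers."""
    number = ""
    for facet in FACET_ORDER:                         # enforced route
        term = concepts.get(facet)
        if term is None:
            continue                                  # facet absent from subject
        try:
            number += KNOWLEDGE_BASE[facet][term]
        except KeyError:
            raise ValueError(f"'{term}' is not in the {facet} schedule")
    return number
```

For example, `synthesize({"material": "steel", "process": "welding"})` yields `"669.14.052"`, and a concept missing from the schedule raises an error instead of silently producing an inconsistent number, mirroring the consistency benefit the abstract claims.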
  20. Micco, M.: Suggestions for automating the Library of Congress Classification schedules (1992) 0.01
    0.012932867 = product of:
      0.064664334 = sum of:
        0.064664334 = weight(_text_:system in 2108) [ClassicSimilarity], result of:
          0.064664334 = score(doc=2108,freq=4.0), product of:
            0.1642502 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.052150324 = queryNorm
            0.3936941 = fieldWeight in 2108, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=2108)
      0.2 = coord(1/5)
    
    Abstract
Automating the Library of Congress Classification schedules will not be an easy task, both because it is a very large system and because it was developed long before automation. The designers were creating a system for shelving books efficiently and had not even imagined the constraints that automation would impose. A number of problems and possible solutions are discussed. The MARC format proposed for classification has some serious problems, which are identified