Search (6 results, page 1 of 1)

  • year_i:[2000 TO 2010}
  • type_ss:"r"
  1. Carey, K.; Stringer, R.: The power of nine : a preliminary investigation into navigation strategies for the new library with special reference to disabled people (2000) 0.02
    0.02119053 = product of:
      0.04238106 = sum of:
        0.04238106 = product of:
          0.08476212 = sum of:
            0.08476212 = weight(_text_:22 in 234) [ClassicSimilarity], result of:
              0.08476212 = score(doc=234,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.46428138 = fieldWeight in 234, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=234)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    22 p.
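  The score breakdown shown under each result follows Lucene's ClassicSimilarity (TF-IDF) model. As a minimal sketch, assuming the classic formulas tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm, the tree for result 1 can be reproduced as follows (the helper name classic_score is illustrative; the numeric values are copied from the explanation above):

```python
import math

# Minimal sketch of Lucene ClassicSimilarity (TF-IDF) scoring,
# reproducing the explanation tree for doc 234 (term "22") above.

def idf(doc_freq: int, max_docs: int) -> float:
    # Classic idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def classic_score(freq, doc_freq, max_docs, query_norm, field_norm, coord):
    term_idf = idf(doc_freq, max_docs)          # 3.5018296 = idf(docFreq=3622, maxDocs=44218)
    query_weight = term_idf * query_norm        # 0.18256627 = queryWeight
    tf = math.sqrt(freq)                        # 1.4142135 = tf(freq=2.0)
    field_weight = tf * term_idf * field_norm   # 0.46428138 = fieldWeight
    return coord * query_weight * field_weight

# Values copied from the explanation of result 1; the two nested
# coord(1/2) factors combine to 0.25.
score = classic_score(freq=2.0, doc_freq=3622, max_docs=44218,
                      query_norm=0.05213454, field_norm=0.09375,
                      coord=0.5 * 0.5)
print(round(score, 8))  # ~0.02119053
```

  The same computation, with idf(docFreq=4974, maxDocs=44218) for the term "classification" and the respective fieldNorm values, yields the scores of results 2-6 below.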
  2. Final Report to the ALCTS CCS SAC Subcommittee on Metadata and Subject Analysis (2001) 0.01
    0.011684213 = product of:
      0.023368426 = sum of:
        0.023368426 = product of:
          0.04673685 = sum of:
            0.04673685 = weight(_text_:classification in 5016) [ClassicSimilarity], result of:
              0.04673685 = score(doc=5016,freq=2.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.28149095 = fieldWeight in 5016, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5016)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The charge for the SAC Subcommittee on Metadata and Subject Analysis states: Identify and study the major issues surrounding the use of metadata in the subject analysis and classification of digital resources. Provide discussion forums and programs relevant to these issues. Discussion forums should begin by Annual 1998. The continued need for the subcommittee should be reexamined by SAC no later than 2001.
  3. Colomb, R.M.: Quality of ontologies in interoperating information systems (2002) 0.01
    0.010223686 = product of:
      0.020447372 = sum of:
        0.020447372 = product of:
          0.040894743 = sum of:
            0.040894743 = weight(_text_:classification in 7858) [ClassicSimilarity], result of:
              0.040894743 = score(doc=7858,freq=2.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.24630459 = fieldWeight in 7858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7858)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The focus of this paper is on the quality of ontologies as they relate to interoperating information systems. Quality is not a property of something but a judgment, so it must be relative to some purpose, and it generally involves recognition of design tradeoffs. Ontologies used for information systems interoperability have much in common with classification systems in information science, knowledge-based systems, and programming languages, and inherit quality characteristics from each of these older areas. Factors peculiar to the new field lead to some additional characteristics relevant to quality, some of which are more profitably considered quality aspects not of the ontology as such, but of the environment through which the ontology is made available to its users. Suggestions are presented as to how to use these factors in producing quality ontologies.
  4. Landry, P.; Zumer, M.; Clavel-Merrin, G.: Report on cross-language subject access options (2006) 0.01
    0.00876316 = product of:
      0.01752632 = sum of:
        0.01752632 = product of:
          0.03505264 = sum of:
            0.03505264 = weight(_text_:classification in 2433) [ClassicSimilarity], result of:
              0.03505264 = score(doc=2433,freq=2.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.21111822 = fieldWeight in 2433, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2433)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This report presents the results of a desktop-based study of projects and initiatives in the area of linking and mapping subject tools. While its goal is to provide areas of further study for cross-language subject access in the European Library, and specifically the national libraries of the Ten New Member States, it is not restricted to cross-language mappings, since some of the tools used to create links across thesauri or subject headings in the same language may also be appropriate for cross-language mapping. Tools reviewed have been selected to represent a variety of approaches (e.g. subject heading to subject heading, thesaurus to thesaurus, classification to subject heading), reflecting the variety of subject access tools in use in the European Library. The results show that there is no single solution that would be appropriate for all libraries, but that parts of several initiatives may be applicable on a technical, organisational or content level.
  5. Reiner, U.: VZG-Projekt Colibri : Bewertung von automatisch DDC-klassifizierten Titeldatensätzen der Deutschen Nationalbibliothek (DNB) (2009) 0.01
    0.0073026326 = product of:
      0.014605265 = sum of:
        0.014605265 = product of:
          0.02921053 = sum of:
            0.02921053 = weight(_text_:classification in 2675) [ClassicSimilarity], result of:
              0.02921053 = score(doc=2675,freq=2.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.17593184 = fieldWeight in 2675, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2675)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The VZG project Colibri/DDC has been concerned since 2003 with automatic procedures for Dewey Decimal Classification (DDC). The aim of the project is uniform DDC indexing of bibliographic title records and support for DDC experts and DDC laypersons, e.g. in the analysis and synthesis of DDC notations, their quality control, and DDC-based searching. The present report concentrates on the first larger automatic DDC classification run and the first automatic and intellectual evaluation with the classification component vc_dcl1. The basis for this was the 25,653 title records (12 weekly/monthly deliveries) of the Deutsche Nationalbibliografie, series A, B and H, made available by the Deutsche Nationalbibliothek (DNB) in November 2007. After an explanation of the automatic DDC classification and automatic evaluation in chapter 2, chapter 3 discusses the DNB report "Colibri_Auswertung_DDC_Endbericht_Sommer_2008". Facts are clarified and questions are raised whose answers will set the course for the further classification tests. Considerations going beyond chapter 3 on continuing the automatic DDC classification are presented in chapter 4. The report serves a deeper understanding of the automatic procedures.
  6. Sykes, J.: Making solid business decisions through intelligent indexing taxonomies : a white paper prepared for Factiva, Factiva, a Dow Jones and Reuters Company (2003) 0.01
    0.0058421064 = product of:
      0.011684213 = sum of:
        0.011684213 = product of:
          0.023368426 = sum of:
            0.023368426 = weight(_text_:classification in 721) [ClassicSimilarity], result of:
              0.023368426 = score(doc=721,freq=2.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.14074548 = fieldWeight in 721, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.03125 = fieldNorm(doc=721)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In 2000, Factiva published "The Value of Indexing," a white paper emphasizing the strategic importance of accurate categorization, based on a robust taxonomy, for later retrieval of documents stored in commercial or in-house content repositories. Since that time, there has been resounding agreement among those who use Web-based systems and those who design them that search engines alone are not the answer for effective information retrieval. High-quality categorization is crucial if users are to be able to find the right answers in repositories of articles and documents that are expanding at phenomenal rates. Companies continue to invest in technologies that will help them organize and integrate their content. A March 2002 article in EContent suggests a typical taxonomy implementation usually costs around $100,000. The article also cites a Merrill Lynch study that predicts the market for search and categorization products, now at about $600 million, will more than double by 2005. Classification activities are not new. In the third century B.C., Callimachus of Cyrene managed the ancient Library of Alexandria. To help scholars find items in the collection, he created an index of all the scrolls organized according to a subject taxonomy. Factiva's parent companies, Dow Jones and Reuters, each have more than 20 years of experience with developing taxonomies and painstaking manual categorization processes and also have a solid history with automated categorization techniques. This experience and expertise put Factiva at the leading edge of developing and applying categorization technology today. This paper will update readers about enhancements made to the Factiva Intelligent Indexing™ taxonomy. It examines the value these enhancements bring to Factiva's news and business information service, and the value brought to clients who license the Factiva taxonomy as a fundamental component of their own Enterprise Knowledge Architecture. There is a behind-the-scenes look at how Factiva classifies a huge stream of incoming articles published in a variety of formats and languages. The paper concludes with an overview of new Factiva services and solutions that are designed specifically to help clients improve productivity and make solid business decisions by precisely finding information in their own ever-expanding content repositories.