Search (109 results, page 1 of 6)

  • theme_ss:"Konzeption und Anwendung des Prinzips Thesaurus"
  1. Greenberg, J.: User comprehension and application of information retrieval thesauri (2004) 0.04
    0.044501208 = product of:
      0.13350362 = sum of:
        0.038461216 = weight(_text_:cataloging in 5008) [ClassicSimilarity], result of:
          0.038461216 = score(doc=5008,freq=2.0), product of:
            0.14721331 = queryWeight, product of:
              3.9411201 = idf(docFreq=2334, maxDocs=44218)
              0.037353165 = queryNorm
            0.26126182 = fieldWeight in 5008, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9411201 = idf(docFreq=2334, maxDocs=44218)
              0.046875 = fieldNorm(doc=5008)
        0.024758326 = weight(_text_:data in 5008) [ClassicSimilarity], result of:
          0.024758326 = score(doc=5008,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.2096163 = fieldWeight in 5008, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=5008)
        0.070284076 = weight(_text_:processing in 5008) [ClassicSimilarity], result of:
          0.070284076 = score(doc=5008,freq=6.0), product of:
            0.15121111 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.037353165 = queryNorm
            0.4648076 = fieldWeight in 5008, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=5008)
      0.33333334 = coord(3/9)
    
    Abstract
    While information retrieval thesauri may improve search results, there is little research documenting whether general information system users employ these vocabulary tools. This article explores user comprehension and searching with thesauri. Data was gathered as part of a larger empirical query-expansion study involving the ProQuest Controlled Vocabulary. The results suggest that users' knowledge of thesauri is extremely limited. After receiving a basic thesaurus introduction, however, users indicate a desire to employ these tools. The most significant result was that users preferred employing thesauri through interactive processing, or a combination of automatic and interactive processing, over exclusively automatic processing. This article defines information retrieval thesauri, summarizes research results, considers circumstances underlying users' knowledge and searching with thesauri, and highlights future research needs.
    Source
    Cataloging and classification quarterly. 37(2004) nos.3/4, S.xx-xx
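
The relevance breakdowns shown with each record are Lucene ClassicSimilarity (TF-IDF) explain output. As a sanity check, the 0.0445 total for record 1 can be recomputed directly from the idf, term-frequency, queryNorm, fieldNorm and coord values reported above; the short Python sketch below does so (all constants are copied from the explain tree, only the variable names are mine).

```python
import math

# Reconstruction of the ClassicSimilarity (TF-IDF) breakdown shown for record 1
# (doc 5008). For each matching query term:
#   score = queryWeight * fieldWeight
#         = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
# and the document score is the sum over matching terms, scaled by
# coord = matching terms / query terms (here 3/9).

QUERY_NORM = 0.037353165
FIELD_NORM = 0.046875          # length normalization for doc 5008
COORD = 3 / 9                  # 3 of 9 query terms matched

terms = {
    # term: (term frequency in field, idf)
    "cataloging": (2.0, 3.9411201),
    "data":       (2.0, 3.1620505),
    "processing": (6.0, 4.048147),
}

total = 0.0
for term, (freq, idf) in terms.items():
    query_weight = idf * QUERY_NORM
    field_weight = math.sqrt(freq) * idf * FIELD_NORM
    contribution = query_weight * field_weight
    total += contribution
    print(f"{term:11s} {contribution:.9f}")

print(f"document score: {total * COORD:.9f}")   # ~0.0445, matching the 0.04 shown
```

The same recipe applies to every record on this page; only the matched terms, their frequencies and the fieldNorm differ.
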
  2. Busch, J.A.: Building and accessing vocabulary resources for networked resource discovery and navigation (1998) 0.03
    0.031313088 = product of:
      0.09393926 = sum of:
        0.028884713 = weight(_text_:data in 2346) [ClassicSimilarity], result of:
          0.028884713 = score(doc=2346,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.24455236 = fieldWeight in 2346, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2346)
        0.047341615 = weight(_text_:processing in 2346) [ClassicSimilarity], result of:
          0.047341615 = score(doc=2346,freq=2.0), product of:
            0.15121111 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.037353165 = queryNorm
            0.3130829 = fieldWeight in 2346, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2346)
        0.017712934 = product of:
          0.035425868 = sum of:
            0.035425868 = weight(_text_:22 in 2346) [ClassicSimilarity], result of:
              0.035425868 = score(doc=2346,freq=2.0), product of:
                0.13080442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037353165 = queryNorm
                0.2708308 = fieldWeight in 2346, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2346)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Date
    22. 9.1997 19:16:05
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  3. Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010) 0.03
    0.029249936 = product of:
      0.13162471 = sum of:
        0.102740005 = weight(_text_:germany in 604) [ClassicSimilarity], result of:
          0.102740005 = score(doc=604,freq=2.0), product of:
            0.22275731 = queryWeight, product of:
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.037353165 = queryNorm
            0.46121946 = fieldWeight in 604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.0546875 = fieldNorm(doc=604)
        0.028884713 = weight(_text_:data in 604) [ClassicSimilarity], result of:
          0.028884713 = score(doc=604,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.24455236 = fieldWeight in 604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=604)
      0.22222222 = coord(2/9)
    
    Abstract
    iQvoc is a new open source SKOS-XL vocabulary management tool developed by the Federal Environment Agency, Germany, and innoQ Deutschland GmbH. Its immediate purpose is maintaining and publishing reference vocabularies in the upcoming Linked Data cloud of environmental information, but it may easily be adapted to host any SKOS-XL compliant vocabulary. iQvoc is implemented as a Ruby on Rails application running on top of JRuby - the Java implementation of the Ruby programming language. To improve the user experience when editing content, iQvoc makes heavy use of the jQuery JavaScript library.
  4. Cheti, A.; Viti, E.: Functionality and merits of a faceted thesaurus : the case of the Nuovo soggettario (2023) 0.03
    0.02613402 = product of:
      0.07840206 = sum of:
        0.038461216 = weight(_text_:cataloging in 1181) [ClassicSimilarity], result of:
          0.038461216 = score(doc=1181,freq=2.0), product of:
            0.14721331 = queryWeight, product of:
              3.9411201 = idf(docFreq=2334, maxDocs=44218)
              0.037353165 = queryNorm
            0.26126182 = fieldWeight in 1181, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9411201 = idf(docFreq=2334, maxDocs=44218)
              0.046875 = fieldNorm(doc=1181)
        0.024758326 = weight(_text_:data in 1181) [ClassicSimilarity], result of:
          0.024758326 = score(doc=1181,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.2096163 = fieldWeight in 1181, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1181)
        0.015182514 = product of:
          0.030365027 = sum of:
            0.030365027 = weight(_text_:22 in 1181) [ClassicSimilarity], result of:
              0.030365027 = score(doc=1181,freq=2.0), product of:
                0.13080442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037353165 = queryNorm
                0.23214069 = fieldWeight in 1181, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1181)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    The Nuovo soggettario, the official Italian subject indexing system edited by the National Central Library of Florence, is made up of interactive components, the core of which is a general thesaurus and some rules of a conventional syntax for subject string construction. The Nuovo soggettario Thesaurus complies with ISO 25964:2011-2013, IFLA LRM, and the FAIR principles (findability, accessibility, interoperability, and reusability). Its open data are available in Zthes, MARC21, and SKOS formats and allow for interoperability with library, archive, and museum databases. The Thesaurus's macrostructure is organized into four fundamental macro-categories, thirteen categories, and facets. The facets allow for the orderly development of hierarchies, thereby limiting polyhierarchies and promoting the grouping of homogeneous concepts. This paper addresses the main features and peculiarities which have characterized the consistent development of this categorical structure and its effects on the syntactic sphere in a predominantly pre-coordinated usage context.
    Date
    26.11.2023 18:59:22
    Source
    Cataloging and classification quarterly. 61(2023) no.5-6, S.708-733
  5. Park, Y.C.; Choi, K.-S.: Automatic thesaurus construction using Bayesian networks (1996) 0.02
    0.022397658 = product of:
      0.10078946 = sum of:
        0.04668475 = weight(_text_:data in 6581) [ClassicSimilarity], result of:
          0.04668475 = score(doc=6581,freq=4.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.3952563 = fieldWeight in 6581, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=6581)
        0.054104704 = weight(_text_:processing in 6581) [ClassicSimilarity], result of:
          0.054104704 = score(doc=6581,freq=2.0), product of:
            0.15121111 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.037353165 = queryNorm
            0.35780904 = fieldWeight in 6581, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0625 = fieldNorm(doc=6581)
      0.22222222 = coord(2/9)
    
    Abstract
    Automatic thesaurus construction is accomplished by extracting term relations mechanically. A popular method uses statistical analysis to discover the term relations. For low-frequency terms, the statistical information cannot be reliably used to decide term relationships. This problem is referred to as the data sparseness problem (a toy numerical illustration follows this entry). Many studies have shown that low-frequency terms are of most use in thesaurus construction. Characterizes the statistical behaviour of terms by using an inference network. Develops a formal approach using a Bayesian network for the data sparseness problem.
    Source
    Information processing and management. 32(1996) no.5, S.543-553
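
The data sparseness problem mentioned in the abstract of record 5 can be made concrete with a toy example. The sketch below uses a simple Dice coefficient over invented co-occurrence counts (not the paper's Bayesian network, which is not reproduced here) to show why association estimates for low-frequency terms are unstable: one additional observation barely moves the estimate for a frequent pair but doubles it for a rare one.

```python
# Hypothetical illustration of the "data sparseness problem": the same
# association measure (a Dice coefficient over document co-occurrence counts)
# is stable for frequent terms but swings wildly for rare ones.
# All counts below are invented for illustration only.

def dice(cooccur: int, freq_a: int, freq_b: int) -> float:
    """Dice association between two terms from their document frequencies."""
    return 2 * cooccur / (freq_a + freq_b)

# Frequent pair: one extra co-occurrence barely changes the estimate.
print(dice(400, 1000, 1000), dice(401, 1000, 1000))   # 0.400 -> 0.401

# Rare pair: one extra co-occurrence doubles it.
print(dice(1, 2, 2), dice(2, 2, 2))                   # 0.5   -> 1.0
```
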
  6. Milstead, J.L.: Thesauri in a full-text world (1998) 0.02
    0.022366492 = product of:
      0.067099474 = sum of:
        0.02063194 = weight(_text_:data in 2337) [ClassicSimilarity], result of:
          0.02063194 = score(doc=2337,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.17468026 = fieldWeight in 2337, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2337)
        0.03381544 = weight(_text_:processing in 2337) [ClassicSimilarity], result of:
          0.03381544 = score(doc=2337,freq=2.0), product of:
            0.15121111 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.037353165 = queryNorm
            0.22363065 = fieldWeight in 2337, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2337)
        0.012652095 = product of:
          0.02530419 = sum of:
            0.02530419 = weight(_text_:22 in 2337) [ClassicSimilarity], result of:
              0.02530419 = score(doc=2337,freq=2.0), product of:
                0.13080442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037353165 = queryNorm
                0.19345059 = fieldWeight in 2337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2337)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Date
    22. 9.1997 19:16:05
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  7. Petersen, T.: Information on images : the Art and Architecture Thesaurus (1989) 0.02
    0.01959795 = product of:
      0.08819077 = sum of:
        0.040849157 = weight(_text_:data in 3565) [ClassicSimilarity], result of:
          0.040849157 = score(doc=3565,freq=4.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.34584928 = fieldWeight in 3565, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3565)
        0.047341615 = weight(_text_:processing in 3565) [ClassicSimilarity], result of:
          0.047341615 = score(doc=3565,freq=2.0), product of:
            0.15121111 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.037353165 = queryNorm
            0.3130829 = fieldWeight in 3565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3565)
      0.22222222 = coord(2/9)
    
    Abstract
    The Art and Architecture Thesaurus (AAT) was designed as a comprehensive vocabulary in its domain. Its faceted, hierarchically arranged structure allows for powerful indexing and retrieval capabilities, while its planned network of related term relationships makes it especially amenable to natural language processing. To gauge the AAT's effectiveness as a search tool against natural language queries, an experiment was carried out on DIALOG. There are three art databases on DIALOG, as well as a number of other databases that contain art-related material. The experiment used queries culled from reference librarians at art and architecture libraries.
  8. Lacasta, J.; Falquet, G.; Nogueras Iso, J.N.; Zarazaga-Soria, J.: ¬A software processing chain for evaluating thesaurus quality (2017) 0.02
    0.018254451 = product of:
      0.08214503 = sum of:
        0.024758326 = weight(_text_:data in 3485) [ClassicSimilarity], result of:
          0.024758326 = score(doc=3485,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.2096163 = fieldWeight in 3485, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3485)
        0.057386704 = weight(_text_:processing in 3485) [ClassicSimilarity], result of:
          0.057386704 = score(doc=3485,freq=4.0), product of:
            0.15121111 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.037353165 = queryNorm
            0.3795138 = fieldWeight in 3485, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=3485)
      0.22222222 = coord(2/9)
    
    Abstract
    Thesauri are knowledge models commonly used for information classification and retrieval whose structure is defined by standards that describe the main features the concepts and relations must have. However, following these standards requires a deep knowledge of the field the thesaurus is going to cover and experience in their creation. To help in this task, this paper describes a software processing chain that provides different validation components that evaluate the quality of the main thesaurus features (a sketch of one such structural check follows this entry).
    Source
    Semantic keyword-based search on structured data sources: COST Action IC1302. Second International KEYSTONE Conference, IKC 2016, Cluj-Napoca, Romania, September 8-9, 2016, Revised Selected Papers. Eds.: A. Calì, A. et al
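
As a rough illustration of what one validation component in such a processing chain might check (a generic structural test, not one of the authors' actual components), the Python sketch below flags orphan concepts and cycles in the broader-term hierarchy of a toy thesaurus; the data and function names are invented for the example.

```python
# Minimal sketch of a structural quality check for a thesaurus:
# flag orphan concepts (no broader term and never used as one) and
# cycles in the broader-term hierarchy. Toy data only.

broader = {              # concept -> its broader term(s)
    "cats": {"mammals"},
    "mammals": {"animals"},
    "animals": set(),
    "widgets": set(),    # orphan: no broader term and never used as one
}

used_as_broader = {b for parents in broader.values() for b in parents}
orphans = [c for c, parents in broader.items()
           if not parents and c not in used_as_broader]

def has_cycle(concept, seen=()):
    """Walk the broader chain and report whether a concept reaches itself."""
    if concept in seen:
        return True
    return any(has_cycle(b, seen + (concept,)) for b in broader.get(concept, ()))

print("orphan concepts:", orphans)                      # ['widgets']
print("cycles:", [c for c in broader if has_cycle(c)])  # []
```
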
  9. Rahmstorf, G.: Information retrieval using conceptual representations of phrases (1994) 0.02
    0.016798241 = product of:
      0.075592086 = sum of:
        0.03501356 = weight(_text_:data in 7862) [ClassicSimilarity], result of:
          0.03501356 = score(doc=7862,freq=4.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.29644224 = fieldWeight in 7862, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=7862)
        0.040578526 = weight(_text_:processing in 7862) [ClassicSimilarity], result of:
          0.040578526 = score(doc=7862,freq=2.0), product of:
            0.15121111 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.037353165 = queryNorm
            0.26835677 = fieldWeight in 7862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=7862)
      0.22222222 = coord(2/9)
    
    Abstract
    The information retrieval problem is described starting from an analysis of the concepts 'user's information request' and 'information offerings of texts'. It is shown that natural language phrases are a more adequate medium for expressing information requests and information offerings than character-string-based query and indexing languages complemented by Boolean operators. The phrases must be represented as concepts to reach a language-invariant level for rule-based relevance analysis. The special type of representation called advanced thesaurus is used for the semantic representation of natural language phrases and for relevance processing. The analysis of the retrieval problem leads to a symmetric system structure.
    Series
    Studies in classification, data analysis, and knowledge organization
    Source
    Information systems and data analysis: prospects - foundations - applications. Proc. of the 17th Annual Conference of the Gesellschaft für Klassifikation, Kaiserslautern, March 3-5, 1993. Ed.: H.-H. Bock et al
  10. Crouch, C.J.: ¬An approach to the automatic construction of global thesauri (1990) 0.01
    0.014456567 = product of:
      0.06505455 = sum of:
        0.047341615 = weight(_text_:processing in 4042) [ClassicSimilarity], result of:
          0.047341615 = score(doc=4042,freq=2.0), product of:
            0.15121111 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.037353165 = queryNorm
            0.3130829 = fieldWeight in 4042, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4042)
        0.017712934 = product of:
          0.035425868 = sum of:
            0.035425868 = weight(_text_:22 in 4042) [ClassicSimilarity], result of:
              0.035425868 = score(doc=4042,freq=2.0), product of:
                0.13080442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037353165 = queryNorm
                0.2708308 = fieldWeight in 4042, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4042)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Date
    22. 4.1996 3:39:53
    Source
    Information processing and management. 26(1990) no.5, S.629-640
  11. Welhouse, Z.; Lee, J.H.; Bancroft, J.: "What am I fighting for?" : creating a controlled vocabulary for video game plot metadata (2015) 0.01
    0.014048787 = product of:
      0.06321954 = sum of:
        0.038461216 = weight(_text_:cataloging in 2015) [ClassicSimilarity], result of:
          0.038461216 = score(doc=2015,freq=2.0), product of:
            0.14721331 = queryWeight, product of:
              3.9411201 = idf(docFreq=2334, maxDocs=44218)
              0.037353165 = queryNorm
            0.26126182 = fieldWeight in 2015, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9411201 = idf(docFreq=2334, maxDocs=44218)
              0.046875 = fieldNorm(doc=2015)
        0.024758326 = weight(_text_:data in 2015) [ClassicSimilarity], result of:
          0.024758326 = score(doc=2015,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.2096163 = fieldWeight in 2015, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2015)
      0.22222222 = coord(2/9)
    
    Abstract
    A video game's plot is one of its defining features, and prior research confirms the importance of plot metadata to users through persona analysis, interviews, and surveys. However, existing organizational systems, including library catalogs, game-related websites, and traditional plot classification systems, do not adequately describe the plot information of video games, in other words, what the game is really about. We attempt to address the issue by creating a controlled vocabulary based on a domain analysis involving a review of relevant literature and existing data structures. The controlled vocabulary is constructed in a pair structure for maximizing flexibility and extensibility. Adopting this controlled vocabulary for describing plot information of games will allow for useful search and collocation of video games.
    Source
    Cataloging and classification quarterly. 53(2015) no.2, S.157-189
  12. Nielsen, M.L.: Thesaurus construction : key issues and selected readings (2004) 0.01
    0.013907635 = product of:
      0.062584355 = sum of:
        0.04487142 = weight(_text_:cataloging in 5006) [ClassicSimilarity], result of:
          0.04487142 = score(doc=5006,freq=2.0), product of:
            0.14721331 = queryWeight, product of:
              3.9411201 = idf(docFreq=2334, maxDocs=44218)
              0.037353165 = queryNorm
            0.30480546 = fieldWeight in 5006, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9411201 = idf(docFreq=2334, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5006)
        0.017712934 = product of:
          0.035425868 = sum of:
            0.035425868 = weight(_text_:22 in 5006) [ClassicSimilarity], result of:
              0.035425868 = score(doc=5006,freq=2.0), product of:
                0.13080442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037353165 = queryNorm
                0.2708308 = fieldWeight in 5006, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5006)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Date
    18. 5.2006 20:06:22
    Source
    Cataloging and classification quarterly. 37(2004) nos.3/4, S.57-74
  13. Aitchison, J.; Dextre Clarke, S.G.: ¬The Thesaurus : a historical viewpoint, with a look to the future (2004) 0.01
    0.011920829 = product of:
      0.05364373 = sum of:
        0.038461216 = weight(_text_:cataloging in 5005) [ClassicSimilarity], result of:
          0.038461216 = score(doc=5005,freq=2.0), product of:
            0.14721331 = queryWeight, product of:
              3.9411201 = idf(docFreq=2334, maxDocs=44218)
              0.037353165 = queryNorm
            0.26126182 = fieldWeight in 5005, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9411201 = idf(docFreq=2334, maxDocs=44218)
              0.046875 = fieldNorm(doc=5005)
        0.015182514 = product of:
          0.030365027 = sum of:
            0.030365027 = weight(_text_:22 in 5005) [ClassicSimilarity], result of:
              0.030365027 = score(doc=5005,freq=2.0), product of:
                0.13080442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037353165 = queryNorm
                0.23214069 = fieldWeight in 5005, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5005)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Date
    22. 9.2007 15:46:13
    Source
    Cataloging and classification quarterly. 37(2004) nos.3/4, S.5-21
  14. Lambert, N.: Of thesauri and computers : reflections on the need for thesauri (1995) 0.01
    0.011834323 = product of:
      0.053254455 = sum of:
        0.0330111 = weight(_text_:data in 3734) [ClassicSimilarity], result of:
          0.0330111 = score(doc=3734,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.2794884 = fieldWeight in 3734, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=3734)
        0.020243352 = product of:
          0.040486705 = sum of:
            0.040486705 = weight(_text_:22 in 3734) [ClassicSimilarity], result of:
              0.040486705 = score(doc=3734,freq=2.0), product of:
                0.13080442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037353165 = queryNorm
                0.30952093 = fieldWeight in 3734, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3734)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Most indexed databases now include their thesauri and/or coding in their bibliographic files, searchable at the databases' online connect rates. Assesses the searchability of these on the different hosts. Thesauri and classifications are also available as diskette or CD-ROM products. Describes a number of these, highlighting the diskette thesaurus from IFI/Plenum Data for its flexible databases, the CLAIMS Uniterm and Comprehensive indexes to US chemical patents
    Source
    Searcher. 3(1995) no.8, S.18-22
  15. Wang, J.: Automatic thesaurus development : term extraction from title metadata (2006) 0.01
    0.011707324 = product of:
      0.052682955 = sum of:
        0.032051016 = weight(_text_:cataloging in 5063) [ClassicSimilarity], result of:
          0.032051016 = score(doc=5063,freq=2.0), product of:
            0.14721331 = queryWeight, product of:
              3.9411201 = idf(docFreq=2334, maxDocs=44218)
              0.037353165 = queryNorm
            0.21771818 = fieldWeight in 5063, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9411201 = idf(docFreq=2334, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5063)
        0.02063194 = weight(_text_:data in 5063) [ClassicSimilarity], result of:
          0.02063194 = score(doc=5063,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.17468026 = fieldWeight in 5063, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5063)
      0.22222222 = coord(2/9)
    
    Abstract
    The application of thesauri in networked environments is seriously hampered by the challenges of introducing new concepts and terminology into the formal controlled vocabulary, which is critical for enhancing its retrieval capability. The author describes an automated process of adding new terms to thesauri as entry vocabulary by analyzing the association between words/phrases extracted from bibliographic titles and subject descriptors in the metadata record (subject descriptors are terms assigned from controlled vocabularies of thesauri to describe the subjects of the objects [e.g., books, articles] represented by the metadata records). The investigated approach uses a corpus of metadata for scientific and technical (S&T) publications in which the titles contain substantive words for key topics. The three steps of the method are (a) extracting words and phrases from the title field of the metadata; (b) applying a method to identify and select the specific and meaningful keywords based on the associated controlled vocabulary terms from the thesaurus used to catalog the objects; and (c) inserting selected keywords into the thesaurus as new terms (most of them are in hierarchical relationships with the existing concepts), thereby updating the thesaurus with new terminology that is being used in the literature. The effectiveness of the method was demonstrated by an experiment with the Chinese Classification Thesaurus (CCT) and bibliographic data in China Machine-Readable Cataloging Record (MARC) format (CNMARC) provided by Peking University Library. This approach is equally effective in large-scale collections and in other languages.
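
The three-step process described in the abstract above lends itself to a small illustration. The Python sketch below mimics the flavor of steps (a)-(c) on invented records, using a crude co-occurrence count as a stand-in for the author's actual keyword selection criteria, which are not reproduced here; the proposed hierarchical placement is likewise only a heuristic guess.

```python
# Rough sketch of steps (a)-(c): extract title phrases, keep candidates not
# yet in the thesaurus, and propose frequent ones as new terms under the
# descriptor they most often co-occur with. Toy data and a crude heuristic.
from collections import Counter, defaultdict
import re

thesaurus = {"machine learning", "neural networks"}   # existing preferred terms

records = [  # (title, assigned descriptors) -- invented examples
    ("Transfer learning for image classification", {"machine learning"}),
    ("Transfer learning with deep neural networks", {"neural networks"}),
    ("A survey of transfer learning", {"machine learning"}),
]

# (a) extract candidate two-word phrases from titles
def phrases(title):
    words = re.findall(r"[a-z]+", title.lower())
    return {" ".join(pair) for pair in zip(words, words[1:])}

# (b) keep phrases not yet in the thesaurus and count which descriptor
#     each one co-occurs with
cooc = defaultdict(Counter)
for title, descriptors in records:
    for phrase in phrases(title) - thesaurus:
        for d in descriptors:
            cooc[phrase][d] += 1

# (c) propose frequent candidates as new terms under their best descriptor
for phrase, counts in cooc.items():
    if sum(counts.values()) >= 2:
        best, _ = counts.most_common(1)[0]
        print(f"candidate new term: {phrase!r} -> narrower of {best!r}")
```
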
  16. Pollard, A.: ¬A hypertext-based thesaurus as subject browsing aid for bibliographic databases (1993) 0.01
    0.010520359 = product of:
      0.09468323 = sum of:
        0.09468323 = weight(_text_:processing in 4713) [ClassicSimilarity], result of:
          0.09468323 = score(doc=4713,freq=2.0), product of:
            0.15121111 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.037353165 = queryNorm
            0.6261658 = fieldWeight in 4713, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.109375 = fieldNorm(doc=4713)
      0.11111111 = coord(1/9)
    
    Source
    Information processing and management. 29(1993) no.3, S.345-358
  17. Bellamy, L.M.; Bickham, L.: Thesaurus development for subject cataloging (1989) 0.01
    0.010467817 = product of:
      0.09421036 = sum of:
        0.09421036 = weight(_text_:cataloging in 2262) [ClassicSimilarity], result of:
          0.09421036 = score(doc=2262,freq=12.0), product of:
            0.14721331 = queryWeight, product of:
              3.9411201 = idf(docFreq=2334, maxDocs=44218)
              0.037353165 = queryNorm
            0.63995814 = fieldWeight in 2262, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.9411201 = idf(docFreq=2334, maxDocs=44218)
              0.046875 = fieldNorm(doc=2262)
      0.11111111 = coord(1/9)
    
    Abstract
    The biomedical book collection in the Genentech Library and Information Services was first inventoried and cataloged in 1983 when it totaled about 2000 titles. Cataloging records were retrieved from the OCLC system and used as a basis for cataloging. A year of cataloging produced a list of 1900 subject terms. More than one term describing the same concept often appears on the list, and no hierarchical structure related the terms to one another. As the collection grew, the subject catalog became increasingly inconsistent. To bring consistency to subject cataloging, a thesaurus of biomedical terms was constructed using the list of subject headings as a basis. This thesaurus follows the broad categories of the National Library of Medicine's Medical Subject Headings and, with some exceptions, the Guidelines for the Establishment and Development of Monolingual Thesauri. It has enabled the cataloger to provide greater in-depth subject analysis of materials added to the collection and to assign subject headings to cataloging records consistently.
  18. ¬The thesaurus: review, renaissance and revision (2004) 0.01
    0.008546937 = product of:
      0.07692243 = sum of:
        0.07692243 = weight(_text_:cataloging in 3272) [ClassicSimilarity], result of:
          0.07692243 = score(doc=3272,freq=2.0), product of:
            0.14721331 = queryWeight, product of:
              3.9411201 = idf(docFreq=2334, maxDocs=44218)
              0.037353165 = queryNorm
            0.52252364 = fieldWeight in 3272, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9411201 = idf(docFreq=2334, maxDocs=44218)
              0.09375 = fieldNorm(doc=3272)
      0.11111111 = coord(1/9)
    
    Source
    Cataloging and classification quarterly. 37(2004) nos.3/4
  19. Thomas, A.R.: Teach yourself thesaurus : exercises, reading, resources (2004) 0.01
    0.0070508635 = product of:
      0.06345777 = sum of:
        0.06345777 = weight(_text_:cataloging in 4855) [ClassicSimilarity], result of:
          0.06345777 = score(doc=4855,freq=4.0), product of:
            0.14721331 = queryWeight, product of:
              3.9411201 = idf(docFreq=2334, maxDocs=44218)
              0.037353165 = queryNorm
            0.43106002 = fieldWeight in 4855, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9411201 = idf(docFreq=2334, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4855)
      0.11111111 = coord(1/9)
    
    Abstract
    A rationale for self-instruction in thesaurus making is presented. Some definitions of a thesaurus are given and sources suitable to begin self-tuition indicated. A sound grasp of grammar is emphasized and appropriate readings and exercises recommended. Readings in classification, facet analysis, and subject cataloging are described. An approach for deconstruction and reconstruction of sections of classification systems and thesauri is proposed and explained. Procedures for using exercises in thesaurus construction are detailed. The means of examining individual thesauri is suggested. The availability and use of free software are described. The creation of opportunities for self-learning is considered.
    Source
    Cataloging and classification quarterly. 37(2004) nos.3/4, S.23-34
  20. Dextre Clarke, S.G.; Will, L.D.; Cochard, N.: ¬The BS8723 thesaurus data model and exchange format, and its relationship to SKOS (2008) 0.01
    0.006418825 = product of:
      0.057769425 = sum of:
        0.057769425 = weight(_text_:data in 6051) [ClassicSimilarity], result of:
          0.057769425 = score(doc=6051,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.48910472 = fieldWeight in 6051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.109375 = fieldNorm(doc=6051)
      0.11111111 = coord(1/9)
    

Languages

  • e 91
  • d 11
  • f 4
  • sp 2

Types

  • a 90
  • el 9
  • s 6
  • m 5
  • n 3
  • x 2
  • b 1
  • r 1