Search (17223 results, page 19 of 862)

  1. Yoon, J.W.: Towards a user-oriented thesaurus for non-domain-specific image collections (2009) 0.17
    0.16883837 = product of:
      0.20260604 = sum of:
        0.01025785 = weight(_text_:und in 4221) [ClassicSimilarity], result of:
          0.01025785 = score(doc=4221,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.09795051 = fieldWeight in 4221, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=4221)
        0.048947323 = weight(_text_:anwendung in 4221) [ClassicSimilarity], result of:
          0.048947323 = score(doc=4221,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.21396513 = fieldWeight in 4221, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.03125 = fieldNorm(doc=4221)
        0.016014574 = weight(_text_:des in 4221) [ClassicSimilarity], result of:
          0.016014574 = score(doc=4221,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.12238726 = fieldWeight in 4221, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.03125 = fieldNorm(doc=4221)
        0.06839625 = weight(_text_:prinzips in 4221) [ClassicSimilarity], result of:
          0.06839625 = score(doc=4221,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.25292668 = fieldWeight in 4221, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.03125 = fieldNorm(doc=4221)
        0.058990043 = product of:
          0.117980085 = sum of:
            0.117980085 = weight(_text_:thesaurus in 4221) [ClassicSimilarity], result of:
              0.117980085 = score(doc=4221,freq=14.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.5403279 = fieldWeight in 4221, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4221)
          0.5 = coord(1/2)
      0.8333333 = coord(5/6)
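    The nested "product of / sum of" blocks above (and in the entries that follow) are Lucene ClassicSimilarity explain() output. Below is a minimal Python sketch of how the displayed numbers combine, using the values shown in the explanation of result 1 (doc 4221) and assuming the standard ClassicSimilarity (TF-IDF) formulas:

    from math import log, sqrt

    def idf(doc_freq, max_docs):
        # Lucene ClassicSimilarity idf, as printed in the "idf(docFreq=..., maxDocs=...)" lines
        return 1.0 + log(max_docs / (doc_freq + 1.0))

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        # queryWeight * fieldWeight, as shown in each "weight(_text_:...)" branch
        tf = sqrt(freq)
        query_weight = idf(doc_freq, max_docs) * query_norm
        field_weight = tf * idf(doc_freq, max_docs) * field_norm
        return query_weight * field_weight

    # Values copied from the explanation of result 1; queryNorm and fieldNorm are shared.
    q_norm, f_norm, max_docs = 0.04725067, 0.03125, 44218
    parts = [
        term_score(2.0, 13101, max_docs, q_norm, f_norm),        # und
        term_score(2.0, 948, max_docs, q_norm, f_norm),          # anwendung
        term_score(2.0, 7536, max_docs, q_norm, f_norm),         # des
        term_score(2.0, 392, max_docs, q_norm, f_norm),          # prinzips
        term_score(14.0, 1182, max_docs, q_norm, f_norm) * 0.5,  # thesaurus, inner coord(1/2)
    ]
    print(sum(parts) * 5.0 / 6.0)   # outer coord(5/6) -> approx. 0.16883837

    The inner "product of: ... 0.5 = coord(1/2)" around the thesaurus term corresponds to the 0.5 factor in the last list entry, and the final "0.8333333 = coord(5/6)" is the 5/6 factor applied to the sum.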
    
    Abstract
    This study explored how user-supplied tags can be applied to designing a thesaurus that reflects the unique features of image documents. Tags from the popular image-sharing Web site Flickr were examined in terms of two central components of a thesaurus (selected concepts and their semantic relations) as well as the features of image documents. Shatford's facet category and Rosch et al.'s basic-level theory were adopted for examining concepts to be included in a thesaurus. The results suggested that the best approach to Color and Generic category descriptors is to focus on basic-level terms and to include frequently used superordinate- and subordinate-level terms. In the Abstract category, it was difficult to specify a set of abstract terms that can be used consistently and dominantly, so it was suggested to enhance browsability using hierarchical and associative relations. Study results also indicate a need for greater inclusion of Specific category terms, which were shown to be an important tool in establishing related tags. Regarding semantic relations, the study indicated that in the identification of related terms, it is important that descriptors not be limited only to the category in which a main entry belongs but broadened to include terms from other categories as well. Although future studies are needed to ensure the effectiveness of this user-oriented approach, this study yielded promising results, demonstrating that user-supplied tags can be a helpful tool in selecting concepts to be included in a thesaurus and in identifying semantic relations among the selected concepts. It is hoped that the results of this study will provide a practical guideline for designing a thesaurus for image documents that takes into account both the unique features of these documents and the unique information-seeking behaviors of general users.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  2. Ma, X.; Carranza, E.J.M.; Wu, C.; Meer, F.D. van der; Liu, G.: ¬A SKOS-based multilingual thesaurus of geological time scale for interoperability of online geological maps (2011) 0.17
    0.16883837 = product of:
      0.20260604 = sum of:
        0.01025785 = weight(_text_:und in 4800) [ClassicSimilarity], result of:
          0.01025785 = score(doc=4800,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.09795051 = fieldWeight in 4800, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=4800)
        0.048947323 = weight(_text_:anwendung in 4800) [ClassicSimilarity], result of:
          0.048947323 = score(doc=4800,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.21396513 = fieldWeight in 4800, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.03125 = fieldNorm(doc=4800)
        0.016014574 = weight(_text_:des in 4800) [ClassicSimilarity], result of:
          0.016014574 = score(doc=4800,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.12238726 = fieldWeight in 4800, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.03125 = fieldNorm(doc=4800)
        0.06839625 = weight(_text_:prinzips in 4800) [ClassicSimilarity], result of:
          0.06839625 = score(doc=4800,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.25292668 = fieldWeight in 4800, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.03125 = fieldNorm(doc=4800)
        0.058990043 = product of:
          0.117980085 = sum of:
            0.117980085 = weight(_text_:thesaurus in 4800) [ClassicSimilarity], result of:
              0.117980085 = score(doc=4800,freq=14.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.5403279 = fieldWeight in 4800, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4800)
          0.5 = coord(1/2)
      0.8333333 = coord(5/6)
    
    Abstract
    The usefulness of online geological maps is hindered by linguistic barriers. Multilingual geoscience thesauri alleviate linguistic barriers of geological maps. However, the benefits of multilingual geoscience thesauri for online geological maps are less studied. In this regard, we developed a multilingual thesaurus of geological time scale (GTS) to alleviate linguistic barriers of GTS records among online geological maps. We extended the Simple Knowledge Organization System (SKOS) model to represent the ordinal hierarchical structure of GTS terms. We collected GTS terms in seven languages and encoded them into a thesaurus by using the extended SKOS model. We implemented methods of characteristic-oriented term retrieval in JavaScript programs for accessing Web Map Services (WMS), recognizing GTS terms, and making translations. With the developed thesaurus and programs, we set up a pilot system to test recognitions and translations of GTS terms in online geological maps. Results of this pilot system proved the accuracy of the developed thesaurus and the functionality of the developed programs. Therefore, with proper deployments, SKOS-based multilingual geoscience thesauri can be functional for alleviating linguistic barriers among online geological maps and, thus, improving their interoperability.
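    As a rough, hedged illustration of the approach described in the abstract (not the authors' code), the Python/rdflib sketch below encodes one geological time scale term as a SKOS concept with multilingual preferred labels; the namespace and the ordinalPosition property are invented stand-ins for the ordinal-structure extension the paper describes:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    GTS = Namespace("http://example.org/gts#")   # hypothetical namespace
    g = Graph()
    g.bind("skos", SKOS)

    # One GTS term as a SKOS concept with language-tagged preferred labels
    g.add((GTS.Jurassic, RDF.type, SKOS.Concept))
    g.add((GTS.Jurassic, SKOS.prefLabel, Literal("Jurassic", lang="en")))
    g.add((GTS.Jurassic, SKOS.prefLabel, Literal("Jura", lang="de")))
    g.add((GTS.Jurassic, SKOS.prefLabel, Literal("侏罗纪", lang="zh")))
    g.add((GTS.Jurassic, SKOS.broader, GTS.Mesozoic))
    # Invented extension property standing in for the paper's ordinal hierarchy extension
    g.add((GTS.Jurassic, GTS.ordinalPosition, Literal(8)))

    print(g.serialize(format="turtle"))

    A record like this, looked up by any of its language-tagged labels, is the kind of entry a term-recognition and translation program over WMS layers would consult.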
    Content
    Article Outline 1. Introduction 2. SKOS-based multilingual thesaurus of geological time scale 2.1. Addressing the insufficiency of SKOS in the context of the Semantic Web 2.2. Addressing semantics and syntax/lexicon in multilingual GTS terms 2.3. Extending SKOS model to capture GTS structure 2.4. Summary of building the SKOS-based MLTGTS 3. Recognizing and translating GTS terms retrieved from WMS 4. Pilot system, results, and evaluation 5. Discussion 6. Conclusions See: http://www.sciencedirect.com/science?_ob=MiamiImageURL&_cid=271720&_user=3865853&_pii=S0098300411000744&_check=y&_origin=&_coverDate=31-Oct-2011&view=c&wchp=dGLbVlt-zSkzS&_valck=1&md5=e2c1daf53df72d034d22278212578f42&ie=/sdarticle.pdf.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  3. Willis, C.; Losee, R.M.: ¬A random walk on an ontology : using thesaurus structure for automatic subject indexing (2013) 0.17
    0.16883837 = product of:
      0.20260604 = sum of:
        0.01025785 = weight(_text_:und in 1016) [ClassicSimilarity], result of:
          0.01025785 = score(doc=1016,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.09795051 = fieldWeight in 1016, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=1016)
        0.048947323 = weight(_text_:anwendung in 1016) [ClassicSimilarity], result of:
          0.048947323 = score(doc=1016,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.21396513 = fieldWeight in 1016, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.03125 = fieldNorm(doc=1016)
        0.016014574 = weight(_text_:des in 1016) [ClassicSimilarity], result of:
          0.016014574 = score(doc=1016,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.12238726 = fieldWeight in 1016, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.03125 = fieldNorm(doc=1016)
        0.06839625 = weight(_text_:prinzips in 1016) [ClassicSimilarity], result of:
          0.06839625 = score(doc=1016,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.25292668 = fieldWeight in 1016, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.03125 = fieldNorm(doc=1016)
        0.058990043 = product of:
          0.117980085 = sum of:
            0.117980085 = weight(_text_:thesaurus in 1016) [ClassicSimilarity], result of:
              0.117980085 = score(doc=1016,freq=14.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.5403279 = fieldWeight in 1016, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1016)
          0.5 = coord(1/2)
      0.8333333 = coord(5/6)
    
    Abstract
    Relationships between terms and features are an essential component of thesauri, ontologies, and a range of controlled vocabularies. In this article, we describe ways to identify important concepts in documents using the relationships in a thesaurus or other vocabulary structures. We introduce a methodology for the analysis and modeling of the indexing process based on a weighted random walk algorithm. The primary goal of this research is the analysis of the contribution of thesaurus structure to the indexing process. The resulting models are evaluated in the context of automatic subject indexing using four collections of documents pre-indexed with 4 different thesauri (AGROVOC [UN Food and Agriculture Organization], high-energy physics taxonomy [HEP], National Agricultural Library Thesaurus [NALT], and medical subject headings [MeSH]). We also introduce a thesaurus-centric matching algorithm intended to improve the quality of candidate concepts. In all cases, the weighted random walk improves automatic indexing performance over matching alone with an increase in average precision (AP) of 9% for HEP, 11% for MeSH, 35% for NALT, and 37% for AGROVOC. The results of the analysis support our hypothesis that subject indexing is in part a browsing process, and that using the vocabulary and its structure in a thesaurus contributes to the indexing process. The amount that the vocabulary structure contributes was found to differ among the 4 thesauri, possibly due to the vocabulary used in the corresponding thesauri and the structural relationships between the terms. Each of the thesauri and the manual indexing associated with it is characterized using the methods developed here.
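    As a hedged sketch of the general technique named in the abstract (a weighted random walk over thesaurus relations), not the authors' exact algorithm or weights, the following Python fragment redistributes weight from terms matched in a document to structurally related descriptors; the toy graph, terms, and relation weights are invented:

    import random
    from collections import defaultdict

    # Toy thesaurus graph: term -> list of (related term, relation weight); all values invented.
    graph = {
        "maize": [("cereals", 0.6), ("corn oil", 0.4)],
        "cereals": [("maize", 0.5), ("grain crops", 0.5)],
        "corn oil": [("maize", 1.0)],
        "grain crops": [("cereals", 1.0)],
    }

    def weighted_random_walk(start_terms, steps=10000, restart=0.15):
        """Count visits of a walk that restarts at the terms matched in the document."""
        visits = defaultdict(int)
        current = random.choice(start_terms)
        for _ in range(steps):
            visits[current] += 1
            if random.random() < restart or current not in graph:
                current = random.choice(start_terms)
                continue
            neighbours, weights = zip(*graph[current])
            current = random.choices(neighbours, weights=weights, k=1)[0]
        return {term: count / float(steps)
                for term, count in sorted(visits.items(), key=lambda x: -x[1])}

    # Terms matched directly in the document act as starting points; the walk then
    # promotes candidate descriptors that are structurally close in the thesaurus.
    print(weighted_random_walk(["maize"]))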
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  4. Dextre Clarke, S.G.; Gilchrist, A.; Will, L.: Revision and extension of thesaurus standards (2004) 0.17
    0.16519178 = product of:
      0.19823015 = sum of:
        0.01025785 = weight(_text_:und in 2615) [ClassicSimilarity], result of:
          0.01025785 = score(doc=2615,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.09795051 = fieldWeight in 2615, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=2615)
        0.048947323 = weight(_text_:anwendung in 2615) [ClassicSimilarity], result of:
          0.048947323 = score(doc=2615,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.21396513 = fieldWeight in 2615, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.03125 = fieldNorm(doc=2615)
        0.016014574 = weight(_text_:des in 2615) [ClassicSimilarity], result of:
          0.016014574 = score(doc=2615,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.12238726 = fieldWeight in 2615, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.03125 = fieldNorm(doc=2615)
        0.06839625 = weight(_text_:prinzips in 2615) [ClassicSimilarity], result of:
          0.06839625 = score(doc=2615,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.25292668 = fieldWeight in 2615, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.03125 = fieldNorm(doc=2615)
        0.054614164 = product of:
          0.10922833 = sum of:
            0.10922833 = weight(_text_:thesaurus in 2615) [ClassicSimilarity], result of:
              0.10922833 = score(doc=2615,freq=12.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.5002464 = fieldWeight in 2615, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2615)
          0.5 = coord(1/2)
      0.8333333 = coord(5/6)
    
    Abstract
    The current standards for monolingual and multilingual thesauri are long overdue for an update. This applies to the international standards ISO 2788 and ISO 5964, as well as the corresponding national standards in several countries and the American standard ANSI/NISO Z39.19. Work is now under way in the UK and in the USA to revise and extend the standards, with particular emphasis on interoperability needs in our world of vast electronic networks. Work in the UK is starting with the British Standards, in the hope of leading on to one international standard to serve all. Some of the issues still under discussion include the treatment of facet analysis, coverage of additional types of controlled vocabulary such as classification schemes, taxonomies and ontologies, and mapping from one vocabulary to another. 1. Are thesaurus standards still needed? Since the 1960s, even before the renowned Cranfield experiments (Cleverdon et al., 1966; Cleverdon, 1967), arguments have raged over the usefulness or otherwise of controlled vocabularies. The case has never been proved definitively one way or the other. At the same time, a recognition has become widespread that no one search method can answer all retrieval requirements. In today's environment of very large networks of resources, the skilled information professional uses a range of techniques. Among these, controlled vocabularies are valued alongside others. The first international standard for monolingual thesauri was issued in 1974. In those days, the main application was for postcoordinate indexing and retrieval from document collections or bibliographic databases. For many information professionals the only practicable alternative to a thesaurus was a classification scheme. And so the thesaurus developed a strong following. After computer systems with full text search capability became widely available, however, the arguments against controlled vocabularies gained more followers. The cost of building and maintaining a thesaurus or a classification scheme was a strong disincentive. Today's databases are typically immense compared with those three decades ago. Full text searching is taken for granted, not just in discrete databases but across all the resources in an intranet or even the Internet. But intranets have brought particular frustration as users discover that despite all the computer power, they cannot find items which they know to be present on the network. So the trend against controlled vocabularies is now being reversed, as many information professionals are turning to them for help. Standards to guide them are still in demand.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  5. Shiri, A.A.; Revie, C.; Chowdhurry, G.: Assessing the impact of user interaction with thesaural knowledge structures : a quantitative analysis framework (2003) 0.17
    0.16519178 = product of:
      0.19823015 = sum of:
        0.01025785 = weight(_text_:und in 2766) [ClassicSimilarity], result of:
          0.01025785 = score(doc=2766,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.09795051 = fieldWeight in 2766, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=2766)
        0.048947323 = weight(_text_:anwendung in 2766) [ClassicSimilarity], result of:
          0.048947323 = score(doc=2766,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.21396513 = fieldWeight in 2766, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.03125 = fieldNorm(doc=2766)
        0.016014574 = weight(_text_:des in 2766) [ClassicSimilarity], result of:
          0.016014574 = score(doc=2766,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.12238726 = fieldWeight in 2766, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.03125 = fieldNorm(doc=2766)
        0.06839625 = weight(_text_:prinzips in 2766) [ClassicSimilarity], result of:
          0.06839625 = score(doc=2766,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.25292668 = fieldWeight in 2766, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.03125 = fieldNorm(doc=2766)
        0.054614164 = product of:
          0.10922833 = sum of:
            0.10922833 = weight(_text_:thesaurus in 2766) [ClassicSimilarity], result of:
              0.10922833 = score(doc=2766,freq=12.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.5002464 = fieldWeight in 2766, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2766)
          0.5 = coord(1/2)
      0.8333333 = coord(5/6)
    
    Abstract
    Thesauri have been important information and knowledge organisation tools for more than three decades. The recent emergence and phenomenal growth of the World Wide Web has created new opportunities to introduce thesauri as information search and retrieval aids to end user communities. While the number of web-based and hypertextual thesauri continues to grow, few investigations have yet been carried out to evaluate how end-users, for whom all these efforts are ostensibly made, interact with and make use of thesauri for query building and expansion. The present paper reports a pilot study carried out to determine the extent to which a thesaurus-enhanced search interface to a web-based database aided end-users in their selection of search terms. The study also investigated the ways in which users interacted with the thesaurus structure, terms, and interface. Thesaurus-based searching and browsing behaviours adopted by users while interacting with the thesaurus-enhanced search interface were also examined. 1. Introduction The last decade has witnessed the emergence of a broad range of applications for knowledge structures in general and thesauri in particular. A number of researchers have predicted that thesauri will increasingly be used in retrieval rather than for indexing (Milstead, 1998; Aitchison et al., 1997) and that their application in information retrieval systems will become more diverse due to the growth of fulltext databases accessed over the Internet (Williamson, 2000). Some researchers have emphasised the need for tailoring the structure and content of thesauri as tools for end-user searching (Bates, 1986; Strong and Drott, 1986; Anderson and Rowley, 1991; Lopez-Huertas, 1997) while others have suggested thesaurus-enhanced user interfaces to support query formulation and expansion (Pollitt et al., 1994; Jones et al., 1995; Beaulieu, 1997). The recent phenomenal growth of the World Wide Web has created new opportunities to introduce thesauri as information search and retrieval aids to end user communities. While the number of web-based and hypertextual thesauri continues to grow, few investigations have been carried out to evaluate the ways in which end-users interact with and make use of online thesauri for query building and expansion. The work reported here expands on a pilot study (Shiri and Revie, 2001) carried out to investigate user - thesaurus interaction in the domains of biology and veterinary medicine.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  6. ¬The thesaurus: review, renaissance and revision (2004) 0.16
    0.162721 = product of:
      0.19526519 = sum of:
        0.013325338 = weight(_text_:und in 3243) [ClassicSimilarity], result of:
          0.013325338 = score(doc=3243,freq=6.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.12724145 = fieldWeight in 3243, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3243)
        0.036710493 = weight(_text_:anwendung in 3243) [ClassicSimilarity], result of:
          0.036710493 = score(doc=3243,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.16047385 = fieldWeight in 3243, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3243)
        0.01201093 = weight(_text_:des in 3243) [ClassicSimilarity], result of:
          0.01201093 = score(doc=3243,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.091790445 = fieldWeight in 3243, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3243)
        0.051297184 = weight(_text_:prinzips in 3243) [ClassicSimilarity], result of:
          0.051297184 = score(doc=3243,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.189695 = fieldWeight in 3243, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3243)
        0.08192125 = product of:
          0.1638425 = sum of:
            0.1638425 = weight(_text_:thesaurus in 3243) [ClassicSimilarity], result of:
              0.1638425 = score(doc=3243,freq=48.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.7503696 = fieldWeight in 3243, product of:
                  6.928203 = tf(freq=48.0), with freq of:
                    48.0 = termFreq=48.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3243)
          0.5 = coord(1/2)
      0.8333333 = coord(5/6)
    
    Content
    Contains the contributions: Spiteri, L.F.: Word association testing and thesaurus construction: a pilot study. Aitchison, J., S.G. Dextre-Clarke: The Thesaurus: a historical viewpoint, with a look to the future. Thomas, A.R.: Teach yourself thesaurus: exercises, reading, resources. Shearer, J.R.: A practical exercise in building a thesaurus. Nielsen, M.L.: Thesaurus construction: key issues and selected readings. Riesland, M.A.: Tools of the trade: vocabulary management software. Will, L.: Thesaurus consultancy. Owens, L.A., P.A. Cochrane: Thesaurus evaluation. Greenberg, J.: User comprehension and application of information retrieval thesauri. Johnson, E.H.: Distributed thesaurus Web services. Thomas, A.R., S.K. Roe: An interview with Dr. Amy J. Warner. Landry, P.: Multilingual subject access: the linking approach of MACS.
    Footnote
    Review in: KO 32(2005) no.2, pp.95-97 (A. Gilchrist): "It might be thought unfortunate that the word thesaurus is assonant with prehistoric beasts, but as this book clearly demonstrates, the thesaurus is undergoing a notable revival, and we can remind ourselves that the word comes from the Greek thesauros, meaning a treasury. This is a useful and timely source book, bringing together ten chapters, following an Editorial introduction and culminating in an interview with a member of the team responsible for revising the NISO Standard Guidelines for the construction, format and management of monolingual thesauri; formal proof of the thesaural renaissance. Though predominantly an American publication, it is good to see four English authors as well as one from Canada and one from Denmark; and with a good balance of academics and practitioners. This has helped to widen the net in the citing of useful references. While the techniques of thesaurus construction are still basically sound, the Editors, in their introduction, point out that the thesaurus, in its sense of an information retrieval tool, is almost exactly 50 years old, and that the information environment of today is radically different. They claim three purposes for the compilation: "to acquaint or remind the Library and Information Science community of the history of the development of the thesaurus and standards for thesaurus construction. to provide bibliographies and tutorials from which any reader can become more grounded in her or his understanding of thesaurus construction, use and evaluation. to address topics related to thesauri but that are unique to the current digital environment, or network of networks." This last purpose, understandably, tends to be the slightly more tentative part of the book, but as Rosenfeld and Morville said in their book Information architecture for the World Wide Web, "thesauri [will] become a key tool for dealing with the growing size and importance of web sites and intranets". The evidence supporting their belief has been growing steadily in the seven years since the first edition was published.
    The didactic parts of the book are a collection of exercises, readings and resources constituting a "Teach yourself" chapter written by Alan Thomas, ending with the warning that "New challenges include how to devise multi-functional and user-sensitive vocabularies, corporate taxonomies and ontologies, and how to apply the transformative technology to them." This is absolutely right, and there is a need for some good writing that would tackle these issues. Another chapter, by James Shearer, skilfully manages to compress a practical exercise in building a thesaurus into some twenty A5 size pages. The third chapter in this set, by Marianne Lykke Nielsen, contains extensive reviews of key issues and selected readings under eight headings from the concept of the thesaurus, through the various construction stages and ending with automatic construction techniques. . . . This is a useful and approachable book. It is a pity that the index is such a poor advertisement for vocabulary control and usefulness."
    RSWK
    Thesaurus
    Informations- und Dokumentationswissenschaft / Information Retrieval / Inhaltserschließung / Thesaurus (BVB)
    Subject
    Thesaurus
    Informations- und Dokumentationswissenschaft / Information Retrieval / Inhaltserschließung / Thesaurus (BVB)
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  7. Li, K.W.; Yang, C.C.: Automatic crosslingual thesaurus generated from the Hong Kong SAR Police Department Web Corpus for Crime Analysis (2005) 0.16
    0.16122639 = product of:
      0.19347167 = sum of:
        0.01025785 = weight(_text_:und in 3391) [ClassicSimilarity], result of:
          0.01025785 = score(doc=3391,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.09795051 = fieldWeight in 3391, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=3391)
        0.048947323 = weight(_text_:anwendung in 3391) [ClassicSimilarity], result of:
          0.048947323 = score(doc=3391,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.21396513 = fieldWeight in 3391, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.03125 = fieldNorm(doc=3391)
        0.016014574 = weight(_text_:des in 3391) [ClassicSimilarity], result of:
          0.016014574 = score(doc=3391,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.12238726 = fieldWeight in 3391, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.03125 = fieldNorm(doc=3391)
        0.06839625 = weight(_text_:prinzips in 3391) [ClassicSimilarity], result of:
          0.06839625 = score(doc=3391,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.25292668 = fieldWeight in 3391, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.03125 = fieldNorm(doc=3391)
        0.049855687 = product of:
          0.09971137 = sum of:
            0.09971137 = weight(_text_:thesaurus in 3391) [ClassicSimilarity], result of:
              0.09971137 = score(doc=3391,freq=10.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.45666042 = fieldWeight in 3391, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3391)
          0.5 = coord(1/2)
      0.8333333 = coord(5/6)
    
    Abstract
    For the sake of national security, very large volumes of data and information are generated and gathered daily. Much of this data and information is written in different languages, stored in different locations, and may be seemingly unconnected. Crosslingual semantic interoperability is a major challenge to generate an overview of this disparate data and information so that it can be analyzed, shared, searched, and summarized. The recent terrorist attacks and the tragic events of September 11, 2001 have prompted increased attention on national security and criminal analysis. Many Asian countries and cities, such as Japan, Taiwan, and Singapore, have been advised that they may become the next targets of terrorist attacks. Semantic interoperability has been a focus in digital library research. Traditional information retrieval (IR) approaches normally require a document to share some common keywords with the query. Generating the associations for the related terms between the two term spaces of users and documents is an important issue. The problem can be viewed as the creation of a thesaurus. Apart from this, terrorists and criminals may communicate through letters, e-mails, and faxes in languages other than English. The translation ambiguity significantly exacerbates the retrieval problem. The problem is expanded to crosslingual semantic interoperability. In this paper, we focus on the English/Chinese crosslingual semantic interoperability problem. However, the developed techniques are not limited to English and Chinese languages but can be applied to many other languages. English and Chinese are popular languages in the Asian region. Much information about national security or crime is communicated in these languages. An efficient automatically generated thesaurus between these languages is important to crosslingual information retrieval between English and Chinese languages. To facilitate crosslingual information retrieval, a corpus-based approach uses the term co-occurrence statistics in parallel or comparable corpora to construct a statistical translation model to cross the language boundary. In this paper, the text-based approach to align English/Chinese Hong Kong Police press release documents from the Web is first presented. We also introduce an algorithmic approach to generate a robust knowledge base based on statistical correlation analysis of the semantics (knowledge) embedded in the bilingual press release corpus. The research output consisted of a thesaurus-like, semantic network knowledge base, which can aid in semantics-based crosslingual information management and retrieval.
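    A minimal sketch of the corpus-based idea mentioned in the abstract, namely associating terms across languages from their co-occurrence in aligned document pairs. The tiny "parallel corpus", the Dice-style association measure, and all terms below are invented for illustration and do not reproduce the paper's actual weighting scheme:

    from collections import Counter
    from itertools import product

    # Toy aligned press releases: (English terms, Chinese terms) per aligned document pair.
    aligned_docs = [
        ({"robbery", "arrest"}, {"抢劫", "拘捕"}),
        ({"robbery", "weapon"}, {"抢劫", "武器"}),
        ({"arrest", "weapon"}, {"拘捕", "武器"}),
    ]

    pair_counts = Counter()
    en_counts, zh_counts = Counter(), Counter()
    for en_terms, zh_terms in aligned_docs:
        en_counts.update(en_terms)
        zh_counts.update(zh_terms)
        pair_counts.update(product(en_terms, zh_terms))

    def association(en, zh):
        # Simple Dice-style association from co-occurrence counts (illustrative only).
        return 2.0 * pair_counts[(en, zh)] / (en_counts[en] + zh_counts[zh])

    ranked = sorted(((association(en, zh), en, zh) for en, zh in pair_counts), reverse=True)
    for score, en, zh in ranked[:5]:
        print(f"{en} <-> {zh}: {score:.2f}")

    With this toy data the correct translation pairs come out on top, which is the basic effect a co-occurrence-based crosslingual thesaurus relies on.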
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  8. Mooers, C.N.: ¬The indexing language of an information retrieval system (1985) 0.16
    0.16074404 = product of:
      0.19289285 = sum of:
        0.0089756185 = weight(_text_:und in 3644) [ClassicSimilarity], result of:
          0.0089756185 = score(doc=3644,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.085706696 = fieldWeight in 3644, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3644)
        0.042828906 = weight(_text_:anwendung in 3644) [ClassicSimilarity], result of:
          0.042828906 = score(doc=3644,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.18721949 = fieldWeight in 3644, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3644)
        0.019817024 = weight(_text_:des in 3644) [ClassicSimilarity], result of:
          0.019817024 = score(doc=3644,freq=4.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.1514465 = fieldWeight in 3644, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3644)
        0.059846714 = weight(_text_:prinzips in 3644) [ClassicSimilarity], result of:
          0.059846714 = score(doc=3644,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.22131084 = fieldWeight in 3644, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3644)
        0.06142459 = sum of:
          0.039018244 = weight(_text_:thesaurus in 3644) [ClassicSimilarity], result of:
            0.039018244 = score(doc=3644,freq=2.0), product of:
              0.21834905 = queryWeight, product of:
                4.6210785 = idf(docFreq=1182, maxDocs=44218)
                0.04725067 = queryNorm
              0.17869665 = fieldWeight in 3644, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.6210785 = idf(docFreq=1182, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3644)
          0.022406347 = weight(_text_:22 in 3644) [ClassicSimilarity], result of:
            0.022406347 = score(doc=3644,freq=2.0), product of:
              0.16546379 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04725067 = queryNorm
              0.1354154 = fieldWeight in 3644, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3644)
      0.8333333 = coord(5/6)
    
    Footnote
    Reprint of the original article with commentary by the editors
    Original in: Information retrieval today: papers presented at an Institute conducted by the Library School and the Center for Continuation Study, University of Minnesota, Sept. 19-22, 1962. Ed. by Wesley Simonton. Minneapolis, Minn.: The Center, 1963. S.21-36.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  9. Nikolai, R.: Thesaurusföderationen : Ein Rahmenwerk für die flexible Integration von heterogenen, autonomen Thesauri (2002) 0.16
    0.15993637 = product of:
      0.19192365 = sum of:
        0.033534694 = weight(_text_:und in 165) [ClassicSimilarity], result of:
          0.033534694 = score(doc=165,freq=38.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.32021725 = fieldWeight in 165, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0234375 = fieldNorm(doc=165)
        0.036710493 = weight(_text_:anwendung in 165) [ClassicSimilarity], result of:
          0.036710493 = score(doc=165,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.16047385 = fieldWeight in 165, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.0234375 = fieldNorm(doc=165)
        0.02942065 = weight(_text_:des in 165) [ClassicSimilarity], result of:
          0.02942065 = score(doc=165,freq=12.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.22483975 = fieldWeight in 165, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0234375 = fieldNorm(doc=165)
        0.051297184 = weight(_text_:prinzips in 165) [ClassicSimilarity], result of:
          0.051297184 = score(doc=165,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.189695 = fieldWeight in 165, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.0234375 = fieldNorm(doc=165)
        0.040960625 = product of:
          0.08192125 = sum of:
            0.08192125 = weight(_text_:thesaurus in 165) [ClassicSimilarity], result of:
              0.08192125 = score(doc=165,freq=12.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.3751848 = fieldWeight in 165, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=165)
          0.5 = coord(1/2)
      0.8333333 = coord(5/6)
    
    Abstract
    In recent years, the growing demand of the "information society" for information has been met by rapidly growing information systems that hold heterogeneous information in a globally distributed yet easily accessible form. Such modern information systems and data-intensive applications can be regarded as an essential component of "distributed information environments" that provide universal access to information from a wide range of fields of human knowledge. Characteristic properties of such large information systems are that they are based on large, partly autonomous information sources that are often (loosely) connected via open computer networks, that they support a large number of users, that they offer an infrastructure providing easy access to various services, and that the quality of these services is decisive for their success. Given such large volumes of available data, services that support the targeted retrieval of information (information retrieval) are of particular importance. Thesauri are a proven tool for supporting this process. They offer a uniform and consistent vocabulary that can serve as the basis for semantic information retrieval. For document collections that frequently span several subject fields and may also be multilingual, however, traditional domain thesauri, which as a rule exist in only one language, are no longer sufficient. Even the document holdings of a specialized information system often extend to concepts from neighbouring fields. A vocabulary that is both more comprehensive and more specialized is therefore required.
    Information systems often use thesauri that are each tailored to the particular needs of their users. When information systems are integrated, an integration of their thesauri also becomes necessary, for example to help users obtain information from different information sources. As early as 1990, DG XIII of the European Union compiled a list of 1,000 frequently used thesauri worldwide. Linking these thesauri would be an important step forward for the shared use of terminology. Since building a new thesaurus, but also manually integrating existing thesauri, incurs immense costs (as an example, creating an initial version of the general environmental thesaurus GEMET required several person-years), new solutions are needed that provide an integrated view of the vocabularies of several thesauri at financially justifiable expense. Moreover, the classical form of thesaurus integration does not do justice to the loose coupling of information systems. The technical prerequisites for logically bringing together distributed, heterogeneous thesauri are largely in place thanks to local and global networking.
    Objective: This thesis develops a framework for the loose integration of heterogeneous and autonomous thesauri, called thesaurus federations. The concept of thesaurus federations is intended to meet the demand of modern information systems for vocabularies that are both more comprehensive and more specialized, exploiting new technological possibilities. The integration approach to be developed uses as its basis the already existing thesauri, which were created at great expense (component thesauri), and links their vocabularies so that they appear as one overall vocabulary. Existing approaches for integrated access to different information systems and for the simultaneous use of different terminologies are based on so-called multi-thesaurus systems. A major criticism of these approaches is that each of them addresses only partial aspects. What is missing is a holistic framework that considers the aspects of integration, the handling of conflicts and incompleteness, the use in information retrieval, and finally the assessment of the quality of the integrated vocabulary. Such a framework is developed here for the first time. In doing so, care must be taken that users are not overwhelmed by the complexity of the overall vocabulary; among other things, dynamically showing and hiding participating thesauri is to be supported. Existing multi-thesaurus approaches furthermore do not take into account the autonomy of the thesauri, which is desirable in distributed information systems, nor their frequently given heterogeneity. To meet these requirements, our approach is oriented towards the concepts of federated database systems, but without the restriction of integrating only thesauri managed by database management systems. The emphasis here is on semantic integration, which in federated database systems is often only a side issue. New integration methods at the semantic level (concept integration), which in contrast to known approaches take into account the results of a computer-supported analysis of the content and quality of the thesauri and are configured accordingly, are intended to enable improved semi-automatic integration and, for the first time, an evaluation of the integration results. These methods should exploit the richness of the information in the thesauri themselves and be able to access further knowledge sources in order to minimize the necessary human effort. The thesaurus federation offers its services as value-added services and for this purpose accesses the heterogeneous component thesauri participating in the federation, whose autonomy is preserved. To enable broad use of the developed approach, the concept is in principle independent of any particular subject field. Even if a (semi-)automatic integration that respects autonomy is inferior to a super-thesaurus created by manual methods and by adapting the participating thesauri, it is possibly the only practicable way to build and maintain a flexibly scalable multi-thesaurus system.
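    Purely as an illustration of the federation idea described above (not the architecture actually implemented in the thesis), a thesaurus federation can be pictured as a thin façade that forwards look-ups to autonomous component thesauri, merges the answers, and supports dynamically attaching and detaching components; all class and term names below are invented:

    class ComponentThesaurus:
        """Stand-in for an autonomous component thesaurus (names are invented)."""
        def __init__(self, name, broader):
            self.name = name
            self._broader = broader          # term -> broader term

        def lookup(self, term):
            return {"thesaurus": self.name, "term": term,
                    "broader": self._broader.get(term)}

    class ThesaurusFederation:
        """Merged, read-only view over component thesauri that stay autonomous."""
        def __init__(self):
            self._components = {}

        def attach(self, component):         # dynamic inclusion of a component
            self._components[component.name] = component

        def detach(self, name):              # dynamic exclusion of a component
            self._components.pop(name, None)

        def lookup(self, term):
            hits = [c.lookup(term) for c in self._components.values()]
            return [h for h in hits if h["broader"] is not None]

    fed = ThesaurusFederation()
    fed.attach(ComponentThesaurus("GEMET-like", {"soil": "environment"}))
    fed.attach(ComponentThesaurus("AGROVOC-like", {"soil": "natural resources"}))
    print(fed.lookup("soil"))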
    RSWK
    Thesaurus / Föderiertes System
    Subject
    Thesaurus / Föderiertes System
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  10. Wenke, M.: Schlagwortkatalog und Schlagwortindex : eine Untersuchung über die Zweckmäßigkeit ihrer Anwendung und mögliche Kombinationsformen am Beispiel des Katalogwerks einer Großstadtbücherei (1970) 0.16
    0.15893738 = product of:
      0.31787476 = sum of:
        0.058027163 = weight(_text_:und in 234) [ClassicSimilarity], result of:
          0.058027163 = score(doc=234,freq=4.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.55409175 = fieldWeight in 234, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.125 = fieldNorm(doc=234)
        0.19578929 = weight(_text_:anwendung in 234) [ClassicSimilarity], result of:
          0.19578929 = score(doc=234,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.85586053 = fieldWeight in 234, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.125 = fieldNorm(doc=234)
        0.0640583 = weight(_text_:des in 234) [ClassicSimilarity], result of:
          0.0640583 = score(doc=234,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.48954904 = fieldWeight in 234, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.125 = fieldNorm(doc=234)
      0.5 = coord(3/6)
    
  11. Fischer, D.H.: Converting a thesaurus to OWL : Notes on the paper "The National Cancer Institute's Thesaurus and Ontology" (2004) 0.16
    0.15613104 = product of:
      0.18735726 = sum of:
        0.0089756185 = weight(_text_:und in 2362) [ClassicSimilarity], result of:
          0.0089756185 = score(doc=2362,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.085706696 = fieldWeight in 2362, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2362)
        0.042828906 = weight(_text_:anwendung in 2362) [ClassicSimilarity], result of:
          0.042828906 = score(doc=2362,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.18721949 = fieldWeight in 2362, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2362)
        0.014012752 = weight(_text_:des in 2362) [ClassicSimilarity], result of:
          0.014012752 = score(doc=2362,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.10708885 = fieldWeight in 2362, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2362)
        0.059846714 = weight(_text_:prinzips in 2362) [ClassicSimilarity], result of:
          0.059846714 = score(doc=2362,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.22131084 = fieldWeight in 2362, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2362)
        0.061693266 = product of:
          0.12338653 = sum of:
            0.12338653 = weight(_text_:thesaurus in 2362) [ClassicSimilarity], result of:
              0.12338653 = score(doc=2362,freq=20.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.56508845 = fieldWeight in 2362, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2362)
          0.5 = coord(1/2)
      0.8333333 = coord(5/6)
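    A minimal sketch, assuming the ClassicSimilarity formula sketched earlier, that recomputes the total score of entry 11 (doc=2362) from the constants printed in its explain tree; the term frequencies and document frequencies are taken from the listing, everything else is a reconstruction for checking purposes.
      import math

      # Consistency check (not part of the original record): recompute the score
      # shown for entry 11 (doc=2362) from the values in the explain tree above,
      # assuming idf = 1 + ln(maxDocs / (docFreq + 1)).
      MAX_DOCS = 44218
      QUERY_NORM = 0.04725067
      FIELD_NORM = 0.02734375          # fieldNorm(doc=2362)

      def idf(doc_freq):
          return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

      def term_weight(freq, doc_freq):
          query_weight = idf(doc_freq) * QUERY_NORM            # e.g. ~0.22876309 for "anwendung"
          field_weight = math.sqrt(freq) * idf(doc_freq) * FIELD_NORM
          return query_weight * field_weight

      terms = {                        # term: (freq in doc, docFreq in collection)
          "und":       (2.0, 13101),
          "anwendung": (2.0, 948),
          "des":       (2.0, 7536),
          "prinzips":  (2.0, 392),
          "thesaurus": (20.0, 1182),
      }

      # "thesaurus" sits in a nested clause with coord(1/2), hence the extra 0.5.
      raw = sum(term_weight(*v) for t, v in terms.items() if t != "thesaurus")
      raw += 0.5 * term_weight(*terms["thesaurus"])

      score = raw * 5.0 / 6.0          # coord(5/6): five of six query clauses matched
      print(round(score, 8))           # ~0.15613104, the value listed for entry 11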
    
    Abstract
    The paper analysed here is a kind of position paper. In order to get a better understanding of the reported work I used the retrieval interface of the thesaurus, the so-called NCI DTS Browser accessible via the Web, and I perused the cited OWL file with numerous "Find" and "Find next" string searches. In addition the file was imported into Protégé 2000, Release 2.0, with OWL Plugin 1.0 and Racer Plugin 1.7.14. At the end of the paper's introduction the authors say: "In the following sections, this paper will describe the terminology development process at NCI, and the issues associated with converting a description logic based nomenclature to a semantically rich OWL ontology." While I will not deal with the first part, i.e. the terminology development process at NCI, I do not see the thesaurus as a description logic based nomenclature, nor do I see that its current state and conversion already result in a "rich" OWL ontology. What does "rich" mean here? In my view there is a great quantity of concepts and links but a very poor description logic structure of the kind that enables inferences. And what does the following really mean, which is said a few lines previously: "Although editors have defined a number of named ontologic relations to support the description-logic based structure of the Thesaurus, additional relationships are considered for inclusion as required to support dependent applications."
    According to my findings several relations available in the thesaurus query interface as "roles" are not used, i.e. there are not yet any assertions with them. And those which are used do not contribute to complete concept definitions of concepts which represent thesaurus main entries. In other words: The authors claim to already have a "description logic based nomenclature", where there is not yet one which deserves that title by being much more than a thesaurus with strict subsumption and additional inheritable semantic links. In the last section of the paper the authors say: "The most time consuming process in this conversion was making a careful analysis of the Thesaurus to understand the best way to translate it into OWL." "For other conversions, these same types of distinctions and decisions must be made. The expressive power of a proprietary encoding can vary widely from that in OWL or RDF. Understanding the original semantics and engineering a solution that most closely duplicates it is critical for creating a useful and accurate ontology." My question is: What decisions were made and are they exemplary, can they be recommended as "the best way"? I raise strong doubts with respect to that, and I miss more profound discussions of the issues at stake. The following notes are dedicated to a critical description and assessment of the results of that conversion activity. They are written in a tutorial style more or less addressing students, but myself being a learner especially in the field of medical knowledge representation I do not speak "ex cathedra".
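    To make the reviewer's distinction concrete, here is a minimal, hypothetical sketch (invented class and property names, not taken from the NCI Thesaurus OWL file) contrasting a thesaurus-style link, which only adds strict subsumption plus an inheritable associative tie, with a description-logic style definition whose role restriction a reasoner can actually exploit.
      from rdflib import Graph, Namespace, BNode
      from rdflib.namespace import OWL, RDF, RDFS, SKOS

      # Illustrative only: EX and all class/property names are invented.
      EX = Namespace("http://example.org/onto#")
      g = Graph()
      g.bind("ex", EX)

      # (1) Thesaurus-style modelling: subsumption plus a semantic link
      #     that carries no definitional force.
      g.add((EX.Gene_Product, RDF.type, OWL.Class))
      g.add((EX.Protein, RDF.type, OWL.Class))
      g.add((EX.Protein, RDFS.subClassOf, EX.Gene_Product))   # BT/NT
      g.add((EX.Protein, SKOS.related, EX.Protein_Domain))    # associative link

      # (2) Description-logic modelling: the role enters the class
      #     definition via a restriction, so it supports inferences.
      g.add((EX.encodedBy, RDF.type, OWL.ObjectProperty))
      restriction = BNode()
      g.add((restriction, RDF.type, OWL.Restriction))
      g.add((restriction, OWL.onProperty, EX.encodedBy))
      g.add((restriction, OWL.someValuesFrom, EX.Gene))
      g.add((EX.Protein, RDFS.subClassOf, restriction))

      print(g.serialize(format="turtle"))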
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  12. ¬3rd Infoterm Symposiums Terminology Work in Subject Fields, Vienna, 12.-14.11.1991 (1992) 0.15
    0.1518617 = product of:
      0.18223403 = sum of:
        0.01025785 = weight(_text_:und in 4648) [ClassicSimilarity], result of:
          0.01025785 = score(doc=4648,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.09795051 = fieldWeight in 4648, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=4648)
        0.048947323 = weight(_text_:anwendung in 4648) [ClassicSimilarity], result of:
          0.048947323 = score(doc=4648,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.21396513 = fieldWeight in 4648, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.03125 = fieldNorm(doc=4648)
        0.016014574 = weight(_text_:des in 4648) [ClassicSimilarity], result of:
          0.016014574 = score(doc=4648,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.12238726 = fieldWeight in 4648, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.03125 = fieldNorm(doc=4648)
        0.06839625 = weight(_text_:prinzips in 4648) [ClassicSimilarity], result of:
          0.06839625 = score(doc=4648,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.25292668 = fieldWeight in 4648, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.03125 = fieldNorm(doc=4648)
        0.038618047 = product of:
          0.07723609 = sum of:
            0.07723609 = weight(_text_:thesaurus in 4648) [ClassicSimilarity], result of:
              0.07723609 = score(doc=4648,freq=6.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.35372764 = fieldWeight in 4648, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4648)
          0.5 = coord(1/2)
      0.8333333 = coord(5/6)
    
    Content
    Enthält 47 Beiträge zu den Schwerpunkten der Tagung: Biology and related fields - Engineering and natural sciences - Medicine - Information science and information technology - Law and economics - Social sciences and humanities - Terminology research and interdisciplinary aspects; darunter: OESER, E. u. G. BUDIN: Explication and representation of qualitative biological and medical concepts: the example of the pocket knowledge data base on carnivores; HOHENEGGER, J.: Species as the basic units in taxonomy and nomenclature; LAVIETER, L. de, J.A. DESCHAMPS u. B. FELLUGA: A multilingual environmental thesaurus: past, present, and future; TODESCHINI, C. u. G. Thoemig: The thesaurus of the International Nuclear Information System: experiences in an international environment; CITKINA, F.: Terminology of mathematics: contrastive analysis as a basis for standardization and harmonization; WALKER, D.G.: Technology and engineering terminology: translation problems encountered and suggested solutions; VERVOOM, A.J.: Terminology and engineering sciences; HIRS, W.M.: ICD-10, a missed chance and a new opportunity for medical terminology standardization; THOMAS, P.: Subject indexes in medical literature; RAHMSTORF, G.: Analysis of information technology terms; NEGRINI, G.: Indexing language for research projects and its graphic display; BATEWICZ, M.: Impact of modern information technology on knowledge transfer services and terminology; RATZINGER, M.: Multilingual product description (MPD): a European project; OHLY, H.P.: Terminology of the social sciences and social context approaches; BEAUGRANDE, R. de: Terminology and discourse between the social sciences and the humanities; MUSKENS, G.: Terminological standardisation and socio-linguistic diversity: dilemmas of crosscultural sociology; SNELL, B.: Terminology ten years on; ZHURAVLEV, V.F.: Standard ontological structures of systems of concepts of active knowledge; WRIGHT, S.E.: Terminology standardization in standards societies and professional associations in the United States; DAHLBERG, I.: The terminology of subject fields - reconsidered; AHMAD, K. u. H. Fulford: Terminology of interdisciplinary fields: a new perspective; DATAA, J.: Full-text databases as a terminological support for translation
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  13. Rolland-Thomas, P.: Thesaural codes : an appraisal of their use in the Library of Congress Subject Headings (1993) 0.15
    0.1518617 = product of:
      0.18223403 = sum of:
        0.01025785 = weight(_text_:und in 549) [ClassicSimilarity], result of:
          0.01025785 = score(doc=549,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.09795051 = fieldWeight in 549, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=549)
        0.048947323 = weight(_text_:anwendung in 549) [ClassicSimilarity], result of:
          0.048947323 = score(doc=549,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.21396513 = fieldWeight in 549, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.03125 = fieldNorm(doc=549)
        0.016014574 = weight(_text_:des in 549) [ClassicSimilarity], result of:
          0.016014574 = score(doc=549,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.12238726 = fieldWeight in 549, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.03125 = fieldNorm(doc=549)
        0.06839625 = weight(_text_:prinzips in 549) [ClassicSimilarity], result of:
          0.06839625 = score(doc=549,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.25292668 = fieldWeight in 549, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.03125 = fieldNorm(doc=549)
        0.038618047 = product of:
          0.07723609 = sum of:
            0.07723609 = weight(_text_:thesaurus in 549) [ClassicSimilarity], result of:
              0.07723609 = score(doc=549,freq=6.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.35372764 = fieldWeight in 549, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.03125 = fieldNorm(doc=549)
          0.5 = coord(1/2)
      0.8333333 = coord(5/6)
    
    Abstract
    LCSH has been known as such since 1975. It has always created headings to serve the LC collections rather than following a theoretical basis. It started to replace cross reference codes by thesaural codes in 1986, in a mechanical fashion. It was in no way transformed into a thesaurus. Its encyclopedic coverage, its pre-coordinate concepts make it substantially distinct, considering that thesauri usually map a restricted field of knowledge and use uniterms. The questions raised are whether the new symbols comply with thesaurus standards and if they are true to one or to several models. Explanations and definitions from other lists of subject headings and thesauri, literature in the field of classification and subject indexing will provide some answers. For instance, a see reference leads from a subject heading not used to another or others that are used. Exceptionally it will lead from a specific term to a more general one. Some equate a see reference with the equivalence relationship. Such relationships are indicated by USE in LCSH. See also references are made from the broader subject to narrower parts of it and also between associated subjects. They suggest lateral or vertical connexions as well as reciprocal relationships. They serve a coordination purpose for some, lay down a methodical search itinerary for others. Since their inception in the 1950s thesauri have been devised for indexing and retrieving information in the fields of science and technology. Eventually they were extended to a number of social sciences and humanities. Research derived from thesauri was voluminous. Numerous guidelines were designed. They did not discriminate between the "hard" sciences and the social sciences. RT relationships are widely but diversely used in numerous controlled vocabularies. LCSH's aim is to achieve a list almost free of RT and SA references. It thus restricts relationships to BT/NT, USE and UF. This raises the question as to whether all fields of knowledge can "fit" in the Procrustean bed of BT/NT, i.e., genus/species relationships. Standard codes were devised. It was soon realized that BT/NT, well suited to the genus/species couple, could not signal a whole-part relationship. In LCSH, BT and NT function as reciprocals; the whole-part relationship is taken into account by ISO. It is amply elaborated upon by authors. The part-whole connexion is sometimes studied apart. The decision to replace cross reference codes was an improvement. Relations can now be distinguished, though the distinct needs of numerous fields of knowledge are not attended to. Topic inclusion, and topic-subtopic, could provide the missing link where genus/species or whole/part are inadequate. Distinct codes, BT/NT and whole/part, should be provided. Sorting relationships with mechanical means can only lead to confusion.
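    A small, hypothetical sketch (invented example headings) of the relation codes discussed above, including the separate whole/part codes the author argues for; it only illustrates how reciprocity between codes could be checked mechanically.
      # Reciprocal pairs: BT/NT and USE/UF, RT symmetric; BTP/NTP stand in here
      # for distinct whole/part codes that LCSH's BT/NT would otherwise absorb.
      RECIPROCAL = {"BT": "NT", "NT": "BT", "USE": "UF", "UF": "USE",
                    "RT": "RT", "BTP": "NTP", "NTP": "BTP"}

      relations = [
          ("Poodles", "BT", "Dogs"),          # genus/species
          ("Engines", "NTP", "Automobiles"),  # part of a whole, not a kind of it
          ("Automobiles", "RT", "Trucks"),    # associative
          ("Cars", "USE", "Automobiles"),     # equivalence
      ]

      def missing_reciprocals(rels):
          """Return the reciprocal entries a consistent vocabulary would also need."""
          have = set(rels)
          return [(t2, RECIPROCAL[code], t1)
                  for (t1, code, t2) in rels
                  if (t2, RECIPROCAL[code], t1) not in have]

      for entry in missing_reciprocals(relations):
          print(entry)   # e.g. ('Dogs', 'NT', 'Poodles'), ('Automobiles', 'BTP', 'Engines'), ...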
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  14. Wissensspeicher in digitalen Räumen : Nachhaltigkeit, Verfügbarkeit, semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008 (2010) 0.15
    0.14855632 = product of:
      0.22283447 = sum of:
        0.042294197 = weight(_text_:und in 774) [ClassicSimilarity], result of:
          0.042294197 = score(doc=774,freq=34.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.40386027 = fieldWeight in 774, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=774)
        0.06922197 = weight(_text_:anwendung in 774) [ClassicSimilarity], result of:
          0.06922197 = score(doc=774,freq=4.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.3025924 = fieldWeight in 774, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.03125 = fieldNorm(doc=774)
        0.022648027 = weight(_text_:des in 774) [ClassicSimilarity], result of:
          0.022648027 = score(doc=774,freq=4.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.17308173 = fieldWeight in 774, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.03125 = fieldNorm(doc=774)
        0.08867026 = sum of:
          0.06306301 = weight(_text_:thesaurus in 774) [ClassicSimilarity], result of:
            0.06306301 = score(doc=774,freq=4.0), product of:
              0.21834905 = queryWeight, product of:
                4.6210785 = idf(docFreq=1182, maxDocs=44218)
                0.04725067 = queryNorm
              0.2888174 = fieldWeight in 774, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.6210785 = idf(docFreq=1182, maxDocs=44218)
                0.03125 = fieldNorm(doc=774)
          0.025607252 = weight(_text_:22 in 774) [ClassicSimilarity], result of:
            0.025607252 = score(doc=774,freq=2.0), product of:
              0.16546379 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04725067 = queryNorm
              0.15476047 = fieldWeight in 774, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=774)
      0.6666667 = coord(4/6)
    
    Abstract
    In diesem Band 11 der Reihe Fortschritte der Wissensorganisation mit dem Titel Wissensspeicher in digitalen Räumen sind Beiträge enthalten, die auf die Vorträge der 11. Tagung der Deutschen Sektion der International Society for Knowledge Organization 2008 in Konstanz zurückgehen. Diese Texte mußten nur in einigen Fällen aktualisiert werden, was für die damalige wie heutige Aktualität und Relevanz spricht. Manche damals neuen Ansätze sind gar inzwischen zum gängigen Standard geworden.
    Content
    Inhalt: A. Grundsätzliche Fragen (aus dem Umfeld) der Wissensorganisation Markus Gottwald, Matthias Klemm und Jan Weyand: Warum ist es schwierig, Wissen zu managen? Ein soziologischer Deutungsversuch anhand eines Wissensmanagementprojekts in einem Großunternehmen H. Peter Ohly: Wissenskommunikation und -organisation. Quo vadis? Helmut F. Spinner: Wissenspartizipation und Wissenschaftskommunikation in drei Wissensräumen: Entwurf einer integrierten Theorie B. Dokumentationssprachen in der Anwendung Felix Boteram: Semantische Relationen in Dokumentationssprachen vom Thesaurus zum semantischen Netz Jessica Hubrich: Multilinguale Wissensorganisation im Zeitalter der Globalisierung: das Projekt CrissCross Vivien Petras: Heterogenitätsbehandlung und Terminology Mapping durch Crosskonkordanzen - eine Fallstudie Manfred Hauer, Uwe Leissing und Karl Rädler: Query-Expansion durch Fachthesauri Erfahrungsbericht zu dandelon.com, Vorarlberger Parlamentsinformationssystem und vorarlberg.at
    C. Begriffsarbeit in der Wissensorganisation Ingetraut Dahlberg: Begriffsarbeit in der Wissensorganisation Claudio Gnoli, Gabriele Merli, Gianni Pavan, Elisabetta Bernuzzi, and Marco Priano: Freely faceted classification for a Web-based bibliographic archive The BioAcoustic Reference Database Stefan Hauser: Terminologiearbeit im Bereich Wissensorganisation - Vergleich dreier Publikationen anhand der Darstellung des Themenkomplexes Thesaurus Daniel Kless: Erstellung eines allgemeinen Standards zur Wissensorganisation: Nutzen, Möglichkeiten, Herausforderungen, Wege D. Kommunikation und Lernen Gerald Beck und Simon Meissner: Strukturierung und Vermittlung von heterogenen (Nicht-)Wissensbeständen in der Risikokommunikation Angelo Chianese, Francesca Cantone, Mario Caropreso, and Vincenzo Moscato: ARCHAEOLOGY 2.0: Cultural E-Learning tools and distributed repositories supported by SEMANTICA, a System for Learning Object Retrieval and Adaptive Courseware Generation for e-learning environments Sonja Hierl, Lydia Bauer, Nadja Böller und Josef Herget: Kollaborative Konzeption von Ontologien in der Hochschullehre: Theorie, Chancen und mögliche Umsetzung Marc Wilhelm Küster, Christoph Ludwig, Yahya Al-Haff und Andreas Aschenbrenner: TextGrid: eScholarship und der Fortschritt der Wissenschaft durch vernetzte Angebote
    E. Metadaten und Ontologien Thomas Baker: Dublin Core Application Profiles: current approaches Georg Hohmann: Die Anwendung des CIDOC für die semantische Wissensrepräsentation in den Kulturwissenschaften Elena Semenova: Ontologie als Begriffssystem. Theoretische Überlegungen und ihre praktische Umsetzung bei der Entwicklung einer Ontologie der Wissenschaftsdisziplinen F. Repositorien und Ressourcen Christiane Hümmer: TELOTA - Aspekte eines Wissensportals für geisteswissenschaftliche Forschung Philipp Scham: Integration von Open-Access-Repositorien in Fachportale Benjamin Zapilko: Dynamisches Browsing im Kontext von Informationsarchitekturen
  15. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.15
    0.14595625 = product of:
      0.1751475 = sum of:
        0.01025785 = weight(_text_:und in 4639) [ClassicSimilarity], result of:
          0.01025785 = score(doc=4639,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.09795051 = fieldWeight in 4639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.048947323 = weight(_text_:anwendung in 4639) [ClassicSimilarity], result of:
          0.048947323 = score(doc=4639,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.21396513 = fieldWeight in 4639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.016014574 = weight(_text_:des in 4639) [ClassicSimilarity], result of:
          0.016014574 = score(doc=4639,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.12238726 = fieldWeight in 4639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.06839625 = weight(_text_:prinzips in 4639) [ClassicSimilarity], result of:
          0.06839625 = score(doc=4639,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.25292668 = fieldWeight in 4639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.031531505 = product of:
          0.06306301 = sum of:
            0.06306301 = weight(_text_:thesaurus in 4639) [ClassicSimilarity], result of:
              0.06306301 = score(doc=4639,freq=4.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.2888174 = fieldWeight in 4639, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4639)
          0.5 = coord(1/2)
      0.8333333 = coord(5/6)
    
    Object
    Art and architecture thesaurus
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  16. Weidemüller, H. U.: RSWK-Anwendung in der Deutschen Bibliothek : Kettenbildung und Permutationsverfahren (1986) 0.14
    0.1390702 = product of:
      0.2781404 = sum of:
        0.050773766 = weight(_text_:und in 556) [ClassicSimilarity], result of:
          0.050773766 = score(doc=556,freq=4.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.4848303 = fieldWeight in 556, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.109375 = fieldNorm(doc=556)
        0.17131563 = weight(_text_:anwendung in 556) [ClassicSimilarity], result of:
          0.17131563 = score(doc=556,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.74887794 = fieldWeight in 556, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.109375 = fieldNorm(doc=556)
        0.05605101 = weight(_text_:des in 556) [ClassicSimilarity], result of:
          0.05605101 = score(doc=556,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.4283554 = fieldWeight in 556, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.109375 = fieldNorm(doc=556)
      0.5 = coord(3/6)
    
    Abstract
    Darstellung des Verfahrens der maschinellen Kettenbildung und des Permutationsverfahrens in der Deutschen Bibliographie
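    A deliberately simplified sketch (invented example chain) of the rotation idea behind machine-generated permutations of a subject heading chain; the actual RSWK/Deutsche Bibliographie permutation patterns are considerably more restrictive than plain rotation.
      # Grobskizze: aus einer Schlagwortkette werden rotierte Eintraege erzeugt,
      # so dass jedes Kettenglied einmal an erster Stelle steht.
      def permutierte_eintraege(kette):
          eintraege = []
          for i in range(len(kette)):
              rotiert = kette[i:] + kette[:i]
              eintraege.append(" / ".join(rotiert))
          return eintraege

      kette = ["Berlin", "Museum", "Geschichte"]   # erfundene Beispielkette
      for eintrag in permutierte_eintraege(kette):
          print(eintrag)
      # Berlin / Museum / Geschichte
      # Museum / Geschichte / Berlin
      # Geschichte / Berlin / Museum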
  17. Probleme der Katalogisierung in Parlaments- und Behördenbibliotheken (1990) 0.14
    0.1390702 = product of:
      0.2781404 = sum of:
        0.050773766 = weight(_text_:und in 4738) [ClassicSimilarity], result of:
          0.050773766 = score(doc=4738,freq=4.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.4848303 = fieldWeight in 4738, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.109375 = fieldNorm(doc=4738)
        0.17131563 = weight(_text_:anwendung in 4738) [ClassicSimilarity], result of:
          0.17131563 = score(doc=4738,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.74887794 = fieldWeight in 4738, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.109375 = fieldNorm(doc=4738)
        0.05605101 = weight(_text_:des in 4738) [ClassicSimilarity], result of:
          0.05605101 = score(doc=4738,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.4283554 = fieldWeight in 4738, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.109375 = fieldNorm(doc=4738)
      0.5 = coord(3/6)
    
    Content
    Enthält Beiträge zu: konzeptionelle EDV-Überlegungen in Behördenbibliotheken; Umstellungsfragen des Kartenkataloges; Anwendung der RSWK in Spezialbibliotheken
    Series
    Arbeitshefte der Arbeitsgemeinschaft der Parlaments- und Behördenbibliotheken; 44
  18. Schubert, P.: Revision von Aufbau und Anwendung des kontrollierten Vokabulars einer bibliografischen Datensammlung zum Thema Dramaturgie (2003) 0.14
    0.1390702 = product of:
      0.2781404 = sum of:
        0.050773766 = weight(_text_:und in 2518) [ClassicSimilarity], result of:
          0.050773766 = score(doc=2518,freq=4.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.4848303 = fieldWeight in 2518, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.109375 = fieldNorm(doc=2518)
        0.17131563 = weight(_text_:anwendung in 2518) [ClassicSimilarity], result of:
          0.17131563 = score(doc=2518,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.74887794 = fieldWeight in 2518, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.109375 = fieldNorm(doc=2518)
        0.05605101 = weight(_text_:des in 2518) [ClassicSimilarity], result of:
          0.05605101 = score(doc=2518,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.4283554 = fieldWeight in 2518, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.109375 = fieldNorm(doc=2518)
      0.5 = coord(3/6)
    
    Imprint
    Potsdam : Fachhochschule, Institut für Information und Dokumentation
  19. Hedden, H.: ¬The accidental taxonomist (2012) 0.14
    0.1382601 = product of:
      0.16591212 = sum of:
        0.01025785 = weight(_text_:und in 2915) [ClassicSimilarity], result of:
          0.01025785 = score(doc=2915,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.09795051 = fieldWeight in 2915, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=2915)
        0.048947323 = weight(_text_:anwendung in 2915) [ClassicSimilarity], result of:
          0.048947323 = score(doc=2915,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.21396513 = fieldWeight in 2915, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.03125 = fieldNorm(doc=2915)
        0.016014574 = weight(_text_:des in 2915) [ClassicSimilarity], result of:
          0.016014574 = score(doc=2915,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.12238726 = fieldWeight in 2915, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.03125 = fieldNorm(doc=2915)
        0.06839625 = weight(_text_:prinzips in 2915) [ClassicSimilarity], result of:
          0.06839625 = score(doc=2915,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.25292668 = fieldWeight in 2915, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.03125 = fieldNorm(doc=2915)
        0.022296138 = product of:
          0.044592276 = sum of:
            0.044592276 = weight(_text_:thesaurus in 2915) [ClassicSimilarity], result of:
              0.044592276 = score(doc=2915,freq=2.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.20422474 = fieldWeight in 2915, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2915)
          0.5 = coord(1/2)
      0.8333333 = coord(5/6)
    
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  20. Liedloff, V.: Anwendung eines existenten Klassifikationssystems im Bereich der computerunterstützten Inhaltsanalyse (1985) 0.14
    0.13776785 = product of:
      0.20665178 = sum of:
        0.025386883 = weight(_text_:und in 2921) [ClassicSimilarity], result of:
          0.025386883 = score(doc=2921,freq=4.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.24241515 = fieldWeight in 2921, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2921)
        0.08565781 = weight(_text_:anwendung in 2921) [ClassicSimilarity], result of:
          0.08565781 = score(doc=2921,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.37443897 = fieldWeight in 2921, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2921)
        0.028025504 = weight(_text_:des in 2921) [ClassicSimilarity], result of:
          0.028025504 = score(doc=2921,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.2141777 = fieldWeight in 2921, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2921)
        0.06758159 = product of:
          0.13516317 = sum of:
            0.13516317 = weight(_text_:thesaurus in 2921) [ClassicSimilarity], result of:
              0.13516317 = score(doc=2921,freq=6.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.6190234 = fieldWeight in 2921, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2921)
          0.5 = coord(1/2)
      0.6666667 = coord(4/6)
    
    Abstract
    In universitärer Grundlagenforschung wurde das Computergestützte TeXterschließungssystem (CTX) entwickelt. Es ist ein wörterbuchorientiertes Verfahren, das aufbauend auf einer wort- und satzorientierten Verarbeitung von Texten zu einem deutschsprachigen Text/Dokument formal-inhaltliche Stichwörter (Grundformen, systemintern "Deskriptoren" genannt) erstellt. Diese dienen als Input für die Computer-Unterstützte Inhaltsanalyse (CUI). Mit Hilfe eines Thesaurus werden die Deskriptoren zu Oberbegriffen zusammengefaßt und die durch CTX erstellte Deskriptorliste über eine Vergleichsliste auf die Kategorien (=Oberbegriffe) des Thesaurus abgebildet. Das Ergebnis wird über mathematisch-statistische Auswertungsverfahren weiterverarbeitet. Weitere Vorteile der Einbringung eines Thesaurus werden genannt.
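    A minimal sketch (invented descriptors and categories) of the mapping step described above: CTX descriptors are mapped via a comparison list (Vergleichsliste) onto the thesaurus categories (Oberbegriffe) and then counted as input for the statistical analysis.
      from collections import Counter

      # Erfundene Beispieldaten; nur zur Veranschaulichung des Abbildungsschritts.
      vergleichsliste = {            # Deskriptor -> Thesaurus-Kategorie (Oberbegriff)
          "inflation": "Wirtschaft",
          "arbeitslosigkeit": "Wirtschaft",
          "wahl": "Politik",
          "partei": "Politik",
      }

      ctx_deskriptoren = ["inflation", "wahl", "partei", "inflation", "steuer"]

      kategorien = Counter(
          vergleichsliste.get(d, "unklassifiziert") for d in ctx_deskriptoren
      )
      print(kategorien)   # Counter({'Wirtschaft': 2, 'Politik': 2, 'unklassifiziert': 1})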
