Search (77 results, page 2 of 4)

  • × language_ss:"e"
  • × theme_ss:"Konzeption und Anwendung des Prinzips Thesaurus"
  • × year_i:[1990 TO 2000}
  1. Milstead, J.L.; Berger, M.C.: ¬The Engineering Information thesaurus development project (1993) 0.01
    0.008150326 = product of:
      0.020375814 = sum of:
        0.009437811 = weight(_text_:a in 5292) [ClassicSimilarity], result of:
          0.009437811 = score(doc=5292,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17652355 = fieldWeight in 5292, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=5292)
        0.010938003 = product of:
          0.021876005 = sum of:
            0.021876005 = weight(_text_:information in 5292) [ClassicSimilarity], result of:
              0.021876005 = score(doc=5292,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2687516 = fieldWeight in 5292, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5292)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Reports on the development of a thesaurus by Engineering Information, Inc. for use in indexing its databases. The concepts in the former, highly precoordinated indexing vocabulary were converted into postcoordinate descriptors, and a full set of thesaural relationships was developed. Issues to be resolved in developing the vocabulary included the degree of postcoordination that was appropriate, the need to make the thesaurus usable with retrospective indexing that could not be converted, and the demands on in-house staff during the development and conversion process.
    Source
    Information services and use. 13(1993) no.1, S.71-80
    Type
    a
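The relevance figure attached to each hit is a Lucene ClassicSimilarity explain tree, and its arithmetic can be checked directly: each leaf score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = √tf × idf × fieldNorm, and the coord factors scale partially matched clauses. A minimal sketch, with the constants copied from the tree of result 1 above, reproduces its score:

```python
import math

def classic_weight(tf, idf, query_norm, field_norm):
    """One ClassicSimilarity leaf: queryWeight * fieldWeight."""
    query_weight = idf * query_norm                  # idf(t) * queryNorm
    field_weight = math.sqrt(tf) * idf * field_norm  # tf(t) * idf(t) * fieldNorm
    return query_weight * field_weight

# Constants from result 1 (doc 5292); both query terms occur 6 times
IDF_A, IDF_INFO = 1.153047, 1.7554779
QNORM, FNORM = 0.046368346, 0.0625

w_a    = classic_weight(6.0, IDF_A, QNORM, FNORM)     # weight(_text_:a)    = 0.009437811
w_info = classic_weight(6.0, IDF_INFO, QNORM, FNORM)  # weight(_text_:information)

# coord(1/2) on the inner clause, coord(2/5) on the outer query
score = (w_a + w_info * 0.5) * (2 / 5)
print(score)  # ≈ 0.008150326
```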
  2. Park, Y.C.; Choi, K.-S.: Automatic thesaurus construction using Bayesian networks (1996) 0.01
    0.007931474 = product of:
      0.019828685 = sum of:
        0.010897844 = weight(_text_:a in 6581) [ClassicSimilarity], result of:
          0.010897844 = score(doc=6581,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.20383182 = fieldWeight in 6581, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=6581)
        0.0089308405 = product of:
          0.017861681 = sum of:
            0.017861681 = weight(_text_:information in 6581) [ClassicSimilarity], result of:
              0.017861681 = score(doc=6581,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.21943474 = fieldWeight in 6581, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6581)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Automatic thesaurus construction is accomplished by extracting term relations mechanically. A popular method uses statistical analysis to discover the term relations. For low frequency terms, the statistical information of the terms cannot be used reliably for deciding the relationship of terms. This problem is referred to as the data sparseness problem. Many studies have shown that low frequency terms are of most use in thesaurus construction. Characterizes the statistical behaviour of terms by using an inference network. Develops a formal approach using a Bayesian network for the data sparseness problem.
    Source
    Information processing and management. 32(1996) no.5, S.543-553
    Type
    a
  3. Conlon, S.P.N.; Evens, M.; Ahlswede, T.: Developing a large lexical database for information retrieval, parsing, and text generation systems (1993) 0.01
    0.0077931583 = product of:
      0.019482896 = sum of:
        0.0100103095 = weight(_text_:a in 5813) [ClassicSimilarity], result of:
          0.0100103095 = score(doc=5813,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18723148 = fieldWeight in 5813, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5813)
        0.009472587 = product of:
          0.018945174 = sum of:
            0.018945174 = weight(_text_:information in 5813) [ClassicSimilarity], result of:
              0.018945174 = score(doc=5813,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23274569 = fieldWeight in 5813, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5813)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Shows that it is possible to construct a lexical database by combining material from a number of machine-readable sources. Discusses the kind of lexical information required for applications in information retrieval and in other natural language processing areas, such as database interfaces and automatic filing systems. Describes the organization of the lexical database, which is stored in an Oracle relational database management system, and the design of the tables that comprise the database. In addition to the traditional alphabetic listing, access is provided from roots to derived forms and from derived forms to roots, and also through lexical and semantic relations between words, so that the database functions as a thesaurus as well as a dictionary. The database is designed to be open-ended and self-defined. Every attribute of every table is defined in the database itself. The lexical database can easily be extended through an SQL forms interface that facilitates additions to the tables.
    Source
    Information processing and management. 29(1993) no.5, S.415-431
    Type
    a
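The relational organization described in the abstract of item 3 can be sketched in a few lines. The schema below is illustrative only (SQLite standing in for Oracle; the table, column, and relation names are assumptions, not the authors' schema), but it shows the access path from a root to its derived forms through a relations table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE word (id INTEGER PRIMARY KEY, form TEXT UNIQUE);
    CREATE TABLE relation (
        source INTEGER REFERENCES word(id),
        target INTEGER REFERENCES word(id),
        rel    TEXT            -- e.g. 'derived-from', 'synonym-of'
    );
""")
for w in ("index", "indexing", "indexer", "catalogue"):
    con.execute("INSERT INTO word (form) VALUES (?)", (w,))
con.executemany(
    "INSERT INTO relation SELECT s.id, t.id, ? FROM word s, word t "
    "WHERE s.form = ? AND t.form = ?",
    [("derived-from", "indexing", "index"),
     ("derived-from", "indexer", "index"),
     ("synonym-of", "catalogue", "index")],
)

# Root -> derived forms: the access path the abstract describes
derived = con.execute(
    "SELECT s.form FROM relation r "
    "JOIN word s ON s.id = r.source JOIN word t ON t.id = r.target "
    "WHERE t.form = ? AND r.rel = 'derived-from' ORDER BY s.form",
    ("index",),
).fetchall()
print([f for (f,) in derived])  # ['indexer', 'indexing']
```

Swapping the `rel` value in the final query walks the other relation types, which is what lets one set of tables serve as both dictionary and thesaurus.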
  4. Jarvelin, K.: ¬A deductive data model for thesaurus navigation and query expansion (1996) 0.01
    0.0073474604 = product of:
      0.01836865 = sum of:
        0.009437811 = weight(_text_:a in 5625) [ClassicSimilarity], result of:
          0.009437811 = score(doc=5625,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17652355 = fieldWeight in 5625, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=5625)
        0.0089308405 = product of:
          0.017861681 = sum of:
            0.017861681 = weight(_text_:information in 5625) [ClassicSimilarity], result of:
              0.017861681 = score(doc=5625,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.21943474 = fieldWeight in 5625, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5625)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Describes a deductive data model based on 3 abstraction levels for representing vocabularies for information retrieval: the conceptual level, the expression level, and the occurrence level. The proposed data model can be used for the representation and navigation of indexing and retrieval thesauri, and as a vocabulary source for concept-based query expansion in heterogeneous retrieval environments.
    Series
    Finnish information studies; 5
  5. Harter, S.P.; Cheng, Y.-R.: Colinked descriptors : improving vocabulary selection for end-user searching (1996) 0.01
    0.0073028165 = product of:
      0.01825704 = sum of:
        0.01155891 = weight(_text_:a in 4216) [ClassicSimilarity], result of:
          0.01155891 = score(doc=4216,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.2161963 = fieldWeight in 4216, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4216)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 4216) [ClassicSimilarity], result of:
              0.013396261 = score(doc=4216,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 4216, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4216)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This article introduces a new concept and technique for information retrieval called 'colinked descriptors'. Borrowed from an analogous idea in bibliometrics - cocited references - colinked descriptors provide a theory and method for identifying search terms that, by hypothesis, will be superior to those entered initially by a searcher. The theory suggests a means of moving automatically from 2 or more initial search terms, to other terms that should be superior in retrieval performance to the 2 original terms. A research project designed to test this colinked descriptor hypothesis is reported. The results suggest that the approach is effective, although methodological problems in testing the idea are reported. Algorithms to generate colinked descriptors can be incorporated easily into system interfaces, front-end or pre-search systems, or help software, in any database that employs a thesaurus. The potential use of colinked descriptors is a strong argument for building richer and more complex thesauri that reflect as many legitimate links among descriptors as possible
    Source
    Journal of the American Society for Information Science. 47(1996) no.4, S.311-325
    Type
    a
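The colinked-descriptor technique of item 5 is straightforward to prototype: by analogy with co-citation, a third descriptor qualifies when it is co-assigned with each of the two seed terms somewhere in the database. A minimal sketch over a toy document-descriptor index; the data and the document-count ranking heuristic are assumptions for illustration, not Harter and Cheng's exact method:

```python
from collections import Counter

# Toy index: document -> assigned descriptors (hypothetical data)
index = {
    "d1": {"thesauri", "indexing", "vocabulary control"},
    "d2": {"thesauri", "online searching", "vocabulary control"},
    "d3": {"indexing", "online searching", "query expansion"},
    "d4": {"thesauri", "query expansion", "online searching"},
}

def colinked(term_a, term_b, index):
    """Rank third descriptors that are co-assigned with BOTH seed
    terms, by the number of documents supporting the link."""
    docs_a = {d for d, terms in index.items() if term_a in terms}
    docs_b = {d for d, terms in index.items() if term_b in terms}
    support = Counter()
    for d in docs_a | docs_b:
        for t in index[d] - {term_a, term_b}:
            # t must link to term_a in some document and to term_b in another
            if any(t in index[x] for x in docs_a) and any(t in index[x] for x in docs_b):
                support[t] += 1
    return support.most_common()

print(colinked("thesauri", "indexing", index))
```

The ranked list is the set of candidate replacement search terms that, per the hypothesis, should outperform the two terms the searcher started with.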
  6. Pollard, R.: Hypertext presentation of thesauri used in on-line searching (1990) 0.01
    0.0068851607 = product of:
      0.017212901 = sum of:
        0.010897844 = weight(_text_:a in 4892) [ClassicSimilarity], result of:
          0.010897844 = score(doc=4892,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.20383182 = fieldWeight in 4892, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=4892)
        0.006315058 = product of:
          0.012630116 = sum of:
            0.012630116 = weight(_text_:information in 4892) [ClassicSimilarity], result of:
              0.012630116 = score(doc=4892,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1551638 = fieldWeight in 4892, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4892)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Explores the strengths and limitations of hypertext for the online presentation of thesauri used in information retrieval. Examines the ability of hypertext to support each of 3 common types of thesaurus display: graphic, alphabetical, and hierarchical. Presents a design for a hypertext-based hierarchical display that addresses many inadequacies of printed hierarchical displays. Illustrates how the design might be implemented using a commercially available hypertext system. Considers issues related to the implementation and evaluation of hypertext-based thesauri.
    Type
    a
  7. Hudon, M.: Multilingual thesaurus construction : integrating the views of different cultures in one gateway to knowledge and concepts (1997) 0.01
    0.0068817483 = product of:
      0.01720437 = sum of:
        0.011678694 = weight(_text_:a in 1804) [ClassicSimilarity], result of:
          0.011678694 = score(doc=1804,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.21843673 = fieldWeight in 1804, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1804)
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 1804) [ClassicSimilarity], result of:
              0.011051352 = score(doc=1804,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 1804, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1804)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Based on the premise that in a multilingual thesaurus all languages are equal, reviews the options and solutions offered by the guidelines to the developer of specialized thesauri. Introduces other problems of a sociocultural, and even of a truly political, nature, which are prominent features in the daily life of the thesaurus designer but with which the theory and the guidelines do not deal very well. Focuses in turn on semantic, managerial, and technological aspects of multilingual thesaurus construction, from the perspective of giving equal treatment to all languages involved.
    Footnote
    Contribution to a special issue devoted to papers read at the 1996 Electronic Access to Fiction research seminar at Copenhagen, Denmark
    Source
    Information services and use. 17(1997) nos.2/3, S.111-123
    Type
    a
  8. Srinivasan, P.: Thesaurus construction (1992) 0.01
    0.0066833766 = product of:
      0.016708441 = sum of:
        0.0100103095 = weight(_text_:a in 3504) [ClassicSimilarity], result of:
          0.0100103095 = score(doc=3504,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18723148 = fieldWeight in 3504, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3504)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 3504) [ClassicSimilarity], result of:
              0.013396261 = score(doc=3504,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 3504, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3504)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Thesauri are valuable structures for information retrieval systems. A thesaurus provides a precise and controlled vocabulary which serves to coordinate document indexing and document retrieval. In both indexing and retrieval, a thesaurus may be used to select the most appropriate terms. Additionally, the thesaurus can assist the searcher in reformulating search strategies if required. Examines the important features of thesauri. This should allow the reader to differentiate between thesauri. Next, a brief overview of the manual thesaurus construction process is given. 2 major approaches for automatic thesaurus construction have been selected for detailed examination. The first is on thesaurus construction from collections of documents, and the 2nd on thesaurus construction by merging existing thesauri. These 2 methods were selected since they rely on statistical techniques alone and are also significantly different from each other. Programs written in the C language accompany the discussion of these approaches.
    Source
    Information retrieval: data structures and algorithms. Ed.: W.B. Frakes u. R. Baeza-Yates
    Type
    a
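The first automatic approach examined in item 8, thesaurus construction from a document collection by statistical techniques alone, is commonly realized by comparing term-document vectors: terms whose occurrence patterns are similar across the collection become candidate thesaurus neighbours. A schematic sketch over toy data (Srinivasan's chapter uses C programs; this Python restatement and its sample collection are illustrative assumptions):

```python
import math
from collections import defaultdict

# Toy collection (hypothetical); each term gets a term-document vector
docs = [
    "thesaurus construction for document retrieval",
    "automatic thesaurus construction from document collections",
    "query expansion using a search thesaurus",
    "statistical analysis of term cooccurrence in retrieval",
]

vectors = defaultdict(lambda: [0.0] * len(docs))
for j, text in enumerate(docs):
    for term in text.split():
        vectors[term][j] += 1.0      # raw term frequency per document

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Candidate related terms for 'thesaurus', strongest association first
related = sorted(
    ((t, cosine(vectors["thesaurus"], v)) for t, v in vectors.items() if t != "thesaurus"),
    key=lambda p: p[1], reverse=True,
)
print(related[:3])
```

In a real collection the raw frequencies would be weighted and a similarity threshold applied before a term pair is admitted to the thesaurus.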
  9. Chen, H.; Martinez, J.; Kirchhoff, A.; Ng, T.D.; Schatz, B.R.: Alleviating search uncertainty through concept associations : automatic indexing, co-occurrence analysis, and parallel computing (1998) 0.01
    0.0066833766 = product of:
      0.016708441 = sum of:
        0.0100103095 = weight(_text_:a in 5202) [ClassicSimilarity], result of:
          0.0100103095 = score(doc=5202,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18723148 = fieldWeight in 5202, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5202)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 5202) [ClassicSimilarity], result of:
              0.013396261 = score(doc=5202,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 5202, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5202)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    In this article, we report research on an algorithmic approach to alleviating search uncertainty in a large information space. Grounded on object filtering, automatic indexing, and co-occurrence analysis, we performed a large-scale experiment using a parallel supercomputer (SGI Power Challenge) to analyze 400.000+ abstracts in an INSPEC computer engineering collection. Two system-generated thesauri, one based on a combined object filtering and automatic indexing method, and the other based on automatic indexing only, were compared with the human-generated INSPEC subject thesaurus. Our user evaluation revealed that the system-generated thesauri were better than the INSPEC thesaurus in 'concept recall', but in 'concept precision' the 3 thesauri were comparable. Our analysis also revealed that the terms suggested by the 3 thesauri were complementary and could be used to significantly increase 'variety' in search terms and thereby reduce search uncertainty.
    Source
    Journal of the American Society for Information Science. 49(1998) no.3, S.206-216
    Type
    a
  10. Hudon, M.: ¬A preliminary investigation of the usefulness of semantic relations and of standardized definitions for the purpose of specifying meaning in a thesaurus (1998) 0.01
    0.0066833766 = product of:
      0.016708441 = sum of:
        0.0100103095 = weight(_text_:a in 55) [ClassicSimilarity], result of:
          0.0100103095 = score(doc=55,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18723148 = fieldWeight in 55, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=55)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 55) [ClassicSimilarity], result of:
              0.013396261 = score(doc=55,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 55, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=55)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The terminological consistency of indexers working with a thesaurus as an indexing aid remains low. This suggests that indexers cannot perceive easily or very clearly the meaning of each descriptor available as an index term. This paper presents the background and some of the findings of a small-scale experiment designed to study the effect on interindexer terminological consistency of modifying the nature of the semantic information given with descriptors in a thesaurus. The study also provided some insights into the respective usefulness of standardized definitions and of traditional networks of hierarchical and associative relationships as means of providing essential meaning information in the thesaurus used as an indexing aid.
    Type
    a
  11. Spiteri, L.F.: ¬The essential elements of faceted thesauri (1999) 0.01
    0.0066833766 = product of:
      0.016708441 = sum of:
        0.0100103095 = weight(_text_:a in 5362) [ClassicSimilarity], result of:
          0.0100103095 = score(doc=5362,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18723148 = fieldWeight in 5362, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5362)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 5362) [ClassicSimilarity], result of:
              0.013396261 = score(doc=5362,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 5362, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5362)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The goal of this study is to evaluate, compare, and contrast how facet analysis is used to construct the systematic or faceted displays of a selection of information retrieval thesauri. More specifically, the study seeks to examine which principles of facet analysis are used in the thesauri, and the extent to which different thesauri apply these principles in the same way. A measuring instrument was designed for the purpose of evaluating the structure of faceted thesauri. This instrument was applied to fourteen faceted information retrieval thesauri. The study reveals that the thesauri do not share a common definition of what constitutes a facet. In some cases, the thesauri apply both enumerative-style classification and facet analysis to arrange their indexing terms. A number of the facets used in the thesauri are not homogeneous or mutually exclusive. The principle of synthesis is used in only 50% of the thesauri, and no one citation order is used consistently by the thesauri.
    Type
    a
  12. Aitchison, J.: Subject control : Thesaurus construction standards (1991) 0.01
    0.006654713 = product of:
      0.016636781 = sum of:
        0.00770594 = weight(_text_:a in 7930) [ClassicSimilarity], result of:
          0.00770594 = score(doc=7930,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14413087 = fieldWeight in 7930, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=7930)
        0.0089308405 = product of:
          0.017861681 = sum of:
            0.017861681 = weight(_text_:information in 7930) [ClassicSimilarity], result of:
              0.017861681 = score(doc=7930,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.21943474 = fieldWeight in 7930, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7930)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Standards for the international exchange of bibliographic information: papers presented at a course held at the School of Library, Archive and Information Studies, University College, London, 3-18 August 1990. Ed.: I.C. McIlwaine
    Type
    a
  13. Jing, Y.; Croft, W.B.: ¬An association thesaurus for information retrieval (199?) 0.01
    0.00652538 = product of:
      0.01631345 = sum of:
        0.0067426977 = weight(_text_:a in 4494) [ClassicSimilarity], result of:
          0.0067426977 = score(doc=4494,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12611452 = fieldWeight in 4494, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4494)
        0.009570752 = product of:
          0.019141505 = sum of:
            0.019141505 = weight(_text_:information in 4494) [ClassicSimilarity], result of:
              0.019141505 = score(doc=4494,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23515764 = fieldWeight in 4494, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4494)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Although commonly used in both commercial and experimental information retrieval systems, thesauri have not demonstrated consistent benefits for retrieval performance, and it is difficult to construct a thesaurus automatically for large text databases. In this paper, an approach, called PhraseFinder, is proposed to construct collection-dependent association thesauri automatically using large full-text document collections. The association thesaurus can be accessed through natural language queries in INQUERY, an information retrieval system based on the probabilistic inference network. Experiments are conducted in INQUERY to evaluate different types of association thesauri, and thesauri constructed for a variety of collections
  14. Chen, H.; Ng, T.: ¬An algorithmic approach to concept exploration in a large knowledge network (automatic thesaurus consultation) : symbolic branch-and-bound search versus connectionist Hopfield Net Activation (1995) 0.01
    0.0065180818 = product of:
      0.016295204 = sum of:
        0.01155891 = weight(_text_:a in 2203) [ClassicSimilarity], result of:
          0.01155891 = score(doc=2203,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.2161963 = fieldWeight in 2203, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 2203) [ClassicSimilarity], result of:
              0.009472587 = score(doc=2203,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 2203, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2203)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge-based systems and to alleviate the limitations of the manual browsing approach, develops 2 spreading-activation-based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g. multiple thesauri). One algorithm, which is based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The 2nd algorithm, which is based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify 'convergent' concepts for some initial queries (a parallel, heuristic search process). Tests these 2 algorithms on a large text-based knowledge network of about 13.000 nodes (terms) and 80.000 directed links in the area of computing technologies.
    Source
    Journal of the American Society for Information Science. 46(1995) no.5, S.348-369
    Type
    a
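The parallel relaxation idea of item 14 can be illustrated with a small spreading-activation loop: activation flows from seed concepts along weighted links, every node updates in parallel each round, and the process stops when the network stabilizes. This is a simplified sketch, not the authors' exact Hopfield formulation; the network, weights, and tanh transfer function are all assumptions:

```python
import math

# Toy directed concept network (hypothetical association weights)
links = {
    "thesaurus": {"indexing": 0.8, "vocabulary": 0.6},
    "indexing": {"retrieval": 0.7, "thesaurus": 0.5},
    "vocabulary": {"retrieval": 0.4},
    "retrieval": {"query expansion": 0.9},
    "query expansion": {},
}

def spread(seeds, links, max_rounds=20, theta=0.1):
    """Parallel relaxation: each round every node recomputes its
    activation from its in-neighbours; seed nodes stay clamped at 1."""
    act = {n: (1.0 if n in seeds else 0.0) for n in links}
    for _ in range(max_rounds):
        nxt = {}
        for node in links:
            net = sum(act[src] * out[node]
                      for src, out in links.items() if node in out)
            nxt[node] = 1.0 if node in seeds else math.tanh(net)
        if all(abs(nxt[n] - act[n]) < 1e-6 for n in act):
            break                      # network has converged
        act = nxt
    # report concepts activated above the threshold
    return {n: round(a, 3) for n, a in act.items() if a > theta}

print(spread({"thesaurus"}, links))
```

Concepts whose converged activation clears the threshold are the 'convergent' concepts offered to the searcher; nearer neighbours of the seed end up with higher activation.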
  15. Fischer, D.H.; Möhr, W.; Rostek, L.: ¬A modular, object-oriented and generic approach for building terminology maintenance systems (1996) 0.01
    Abstract
    Five years ago, we raised the question: is there a data model general enough that all existing thesauri can be represented as specializations of this general model without loss of information? The answer was not given at that time, but we referred to the principle of abstraction, well supported in object-oriented data modelling. We gained the empirical basis for that process of abstraction by modelling existing thesauri and a terminological dictionary; an abstracting view was afterwards presented in a paper at the TKE'93 conference. The present paper reports on a third step of abstraction with its very concrete consequences, embodied in a software system called TerminologyFramework (TFw)
    Type
    a
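    The "general model that any thesaurus specializes" idea in the abstract above can be sketched as follows. This is a hedged illustration, not the TerminologyFramework (TFw) itself: the class and relation names are assumptions of this sketch.

    ```python
    # Minimal sketch of a generic thesaurus entry: relation types are data,
    # not hard-wired attributes, so a particular thesaurus (BT/NT/RT/USE/UF,
    # or any other scheme) is just one configuration of the same class.
    from dataclasses import dataclass, field

    @dataclass
    class Entry:
        term: str
        # relation type -> list of related entries
        relations: dict = field(default_factory=dict)

        def relate(self, rel: str, other: "Entry") -> None:
            """Record a typed relation from this entry to another."""
            self.relations.setdefault(rel, []).append(other)

        def related(self, rel: str) -> list:
            """Entries reachable via the given relation type (possibly empty)."""
            return self.relations.get(rel, [])

    # Usage: a conventional BT/NT pair expressed in the generic model.
    vehicle = Entry("vehicles")
    car = Entry("cars")
    vehicle.relate("NT", car)   # narrower term
    car.relate("BT", vehicle)   # broader term
    ```

    Because relation types are ordinary strings, a new thesaurus with unusual relations specializes the model by configuration rather than by changing the class, which is the loss-free representation the authors were after.
    
    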
  16. Schmitz-Esser, W.: New approaches in thesaurus application (1991) 0.01
    Abstract
    To show the difference and explain the move to a new kind of thesaurus in the information science area, some of the main characteristics of conventional thesauri are pointed out, as well as their side-effects. The new approaches for thesaurus application are seen in (1) expert systems, (2) interface systems, (3) object-oriented design and programming, (4) hypertext systems, (5) machine translation, and (6) machine abstracting. These areas are briefly described, including the new problems they might create. A discussion of the limitations of the new thesaurus application areas concludes the article, which finally calls for an awareness of the new possibilities of thesaurus-based retrieval
    Type
    a
  17. ¬3rd Infoterm Symposium Terminology Work in Subject Fields, Vienna, 12.-14.11.1991 (1992) 0.01
    Content
    Contains 47 contributions on the main topics of the conference: Biology and related fields - Engineering and natural sciences - Medicine - Information science and information technology - Law and economics - Social sciences and humanities - Terminology research and interdisciplinary aspects; among them: OESER, E. and G. BUDIN: Explication and representation of qualitative biological and medical concepts: the example of the pocket knowledge data base on carnivores; HOHENEGGER, J.: Species as the basic units in taxonomy and nomenclature; LAVIETER, L. de, J.A. DESCHAMPS and B. FELLUGA: A multilingual environmental thesaurus: past, present, and future; TODESCHINI, C. and G. THOEMIG: The thesaurus of the International Nuclear Information System: experiences in an international environment; CITKINA, F.: Terminology of mathematics: contrastive analysis as a basis for standardization and harmonization; WALKER, D.G.: Technology and engineering terminology: translation problems encountered and suggested solutions; VERVOOM, A.J.: Terminology and engineering sciences; HIRS, W.M.: ICD-10, a missed chance and a new opportunity for medical terminology standardization; THOMAS, P.: Subject indexes in medical literature; RAHMSTORF, G.: Analysis of information technology terms; NEGRINI, G.: Indexing language for research projects and its graphic display; BATEWICZ, M.: Impact of modern information technology on knowledge transfer services and terminology; RATZINGER, M.: Multilingual product description (MPD): a European project; OHLY, H.P.: Terminology of the social sciences and social context approaches; BEAUGRANDE, R. de: Terminology and discourse between the social sciences and the humanities; MUSKENS, G.: Terminological standardisation and socio-linguistic diversity: dilemmas of crosscultural sociology; SNELL, B.: Terminology ten years on; ZHURAVLEV, V.F.: Standard ontological structures of systems of concepts of active knowledge; WRIGHT, S.E.: Terminology standardization in standards societies and professional associations in the United States; DAHLBERG, I.: The terminology of subject fields - reconsidered; AHMAD, K. and H. FULFORD: Terminology of interdisciplinary fields: a new perspective; DATAA, J.: Full-text databases as a terminological support for translation
    Editor
    Krommer-Benz, M. and A. Manu
  18. Nielsen, M.L.: Future thesauri : what kind of conceptual knowledge do searchers need? (1998) 0.01
    Abstract
    For more than thirty years, thesauri have been valuable tools in information retrieval. Originally, the basic function of the thesaurus was to help the indexer transform concepts and their relationships, as expressed in the language of documents, into the more regularised indexing language of catalogues and databases. In the nineties, another important purpose of the thesaurus is to guide the searcher to the best search terms. In spite of this new role, the design of thesauri has remained more or less stable. This paper explores the demands placed on thesauri in relation to searching. Findings are presented in the form of generalisations and illustrated in relation to a real-life situation. Suggestions for improved functionality are presented in the form of a prototype thesaurus record. The new role as a conceptual searching tool also influences the construction process; therefore, the paper ends with a discussion of new methods for thesaurus construction
    Type
    a
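    A searcher-oriented thesaurus record of the kind the abstract above proposes might look roughly like this. The field names and the expansion rule are illustrative assumptions of this sketch, not Nielsen's prototype.

    ```python
    # Sketch of a thesaurus record aimed at searchers rather than indexers:
    # alongside the usual BT/NT/RT structure it carries a scope note and
    # equivalence terms that can feed directly into query expansion.
    record = {
        "descriptor": "information retrieval",
        "scope_note": "Searching for and delivering documents that match an information need.",
        "broader": ["information science"],
        "narrower": ["online searching", "relevance feedback"],
        "related": ["thesauri", "indexing"],
        "use_for": ["document retrieval", "IR"],
    }

    def expand_query(rec: dict) -> list:
        """Collect candidate search terms a searcher could OR together.

        Broader terms are excluded here on the assumption that they widen
        the search more than most searchers want by default.
        """
        terms = [rec["descriptor"]]
        for field in ("narrower", "related", "use_for"):
            terms.extend(rec[field])
        return terms
    ```

    The point of the sketch is the shift in audience: the same conceptual structure, presented so that a searcher can pick expansion terms instead of an indexer picking the one preferred descriptor.
    
    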
  19. Schmitz-Esser, W.: Thesauri facing new challenges (1990) 0.01
    Abstract
    The chairman of the thesaurus software seminar held on 14.8.1990 in Darmstadt introduces the topic by asking the following 10 questions and providing his answers to them: (1) what is new in the view? (2) what is the real point of attraction? (3) cannot information retrieval profit from machine processing of language? (4) can we do better now? (5) how can we do better? (6) when does fully automatic IR arrive? (7) thesauri for machine-aided IR - how do we get there? (8) which is the right way, which is the model, what to standardize? (9) can IR people do it alone? (10) are there advanced information services with a truly human interface?
    Type
    a
  20. McCray, A.T.; Nelson, S.J.: ¬The representation of meaning in the UMLS (1995) 0.01
    Abstract
    The Unified Medical Language System (UMLS) knowledge sources provide detailed information about biomedical naming systems and databases. The Metathesaurus contains biomedical terminology from an increasing number of biomedical thesauri, and the Semantic Network provides a structure that encompasses and unifies the thesauri included in the Metathesaurus. Addresses some fundamental principles underlying the design and development of the Metathesaurus and Semantic Network. Describes the formal properties of the Semantic Network. Considers the principle of semantic locality and how it is reflected in the UMLS knowledge sources. Discusses the issues involved in attempting to reuse knowledge and the potential for reuse of the UMLS knowledge sources
    Source
    Methods of information in medicine. 34(1995) nos.1/2, S.193-201
    Type
    a
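    The relationship the abstract above describes between the Metathesaurus and the Semantic Network can be sketched as follows. This is an illustrative toy, not the actual UMLS data model: the type links, the concept-to-type assignments, and the locality test are assumptions of this sketch (real UMLS concepts carry unique identifiers and richer relations).

    ```python
    # Sketch: concepts in a Metathesaurus-like store are assigned semantic
    # types drawn from a small Semantic Network; "semantic locality" here
    # means the two concepts' types are directly linked in that network.
    semantic_network = {
        # hypothetical semantic type -> types it is linked to
        "Disease or Syndrome": {"Pathologic Function"},
        "Pathologic Function": {"Biologic Function"},
        "Pharmacologic Substance": {"Chemical"},
    }

    metathesaurus = {
        # concept term -> its semantic type
        "myocardial infarction": "Disease or Syndrome",
        "ischemia": "Pathologic Function",
        "aspirin": "Pharmacologic Substance",
    }

    def semantically_local(a: str, b: str) -> bool:
        """True if the two concepts' semantic types are directly linked."""
        ta, tb = metathesaurus[a], metathesaurus[b]
        return (tb in semantic_network.get(ta, set())
                or ta in semantic_network.get(tb, set()))
    ```

    The design point is the one the paper stresses: the small, stable Semantic Network unifies many source thesauri, so questions about relatedness can be answered at the type level even when the source vocabularies disagree at the term level.
    
    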

Types

  • a 69
  • el 2
  • m 2
  • n 2
  • s 2
  • b 1
  • r 1