Search (48 results, page 1 of 3)

  • theme_ss:"Konzeption und Anwendung des Prinzips Thesaurus"
  1. Park, Y.C.; Choi, K.-S.: Automatic thesaurus construction using Bayesian networks (1996) 0.02
    0.02218583 = product of:
      0.13311498 = sum of:
        0.13311498 = weight(_text_:problem in 6581) [ClassicSimilarity], result of:
          0.13311498 = score(doc=6581,freq=6.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.64980143 = fieldWeight in 6581, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0625 = fieldNorm(doc=6581)
      0.16666667 = coord(1/6)
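    The breakdown above (and in each result that follows) is Lucene explain() output for the ClassicSimilarity TF-IDF model: tf(freq) = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), a clause's weight is tf x idf x queryWeight x fieldNorm with queryWeight = idf x queryNorm, and the sum of clause weights is scaled by the coordination factor coord(matching clauses / total clauses). A minimal Python sketch reproducing the arithmetic of this first result from the constants shown in the tree:

```python
import math

# Constants copied from the explain tree for result 1 (doc 6581).
FREQ       = 6.0         # occurrences of "problem" in the field
DOC_FREQ   = 1723        # documents containing the term
MAX_DOCS   = 44218       # documents in the index
QUERY_NORM = 0.04826377
FIELD_NORM = 0.0625      # field length normalization
COORD      = 1 / 6       # 1 of 6 query clauses matched

tf  = math.sqrt(FREQ)                          # 2.4494898
idf = 1 + math.log(MAX_DOCS / (DOC_FREQ + 1))  # 4.244485

query_weight = idf * QUERY_NORM                # 0.20485485
field_weight = tf * idf * FIELD_NORM           # 0.64980143

print(f"{query_weight * field_weight * COORD:.8f}")  # ~0.02218583
```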
    
    Abstract
    Automatic thesaurus construction is accomplished by extracting term relations mechanically. A popular method uses statistical analysis to discover the term relations. For low-frequency terms, however, the statistical information cannot be used reliably to decide the relationship of terms. This problem is referred to as the data sparseness problem. Many studies have shown that low-frequency terms are of most use in thesaurus construction. Characterizes the statistical behaviour of terms by using an inference network. Develops a formal approach using a Bayesian network for the data sparseness problem.
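    The data sparseness problem the abstract refers to is easy to reproduce with toy data: under a naive co-occurrence statistic such as pointwise mutual information (used here only as a stand-in for whatever association measure a given system employs), a single chance pairing of two rare terms already produces the strongest possible association score:

```python
import math
from collections import Counter
from itertools import combinations

# Toy corpus of documents reduced to term sets (illustrative only).
docs = [
    {"thesaurus", "term", "relation"},
    {"thesaurus", "construction", "term"},
    {"bayesian", "network", "inference"},   # rare terms, freq = 1
]

n = len(docs)
freq = Counter(t for d in docs for t in d)
co = Counter(frozenset(p) for d in docs for p in combinations(sorted(d), 2))

def pmi(a: str, b: str) -> float:
    """Pointwise mutual information of two terms over the corpus."""
    p_ab = co[frozenset((a, b))] / n
    if p_ab == 0:
        return float("-inf")
    return math.log(p_ab / ((freq[a] / n) * (freq[b] / n)))

print(pmi("thesaurus", "term"))    # ~0.41, from two reliable co-occurrences
print(pmi("bayesian", "network"))  # ~1.10, higher, from one chance pairing
```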
  2. Mooers, C.N.: ¬The indexing language of an information retrieval system (1985) 0.02
    0.018836789 = product of:
      0.056510366 = sum of:
        0.033623606 = weight(_text_:problem in 3644) [ClassicSimilarity], result of:
          0.033623606 = score(doc=3644,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.1641338 = fieldWeight in 3644, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3644)
        0.02288676 = weight(_text_:22 in 3644) [ClassicSimilarity], result of:
          0.02288676 = score(doc=3644,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.1354154 = fieldWeight in 3644, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3644)
      0.33333334 = coord(2/6)
    
    Abstract
    Calvin Mooers' work toward the resolution of the problem of ambiguity in indexing went unrecognized for years. At the time he introduced the "descriptor" - a term with a very distinct meaning - indexers were, for the most part, taking index terms directly from the document, without either rationalizing them with context or normalizing them with some kind of classification. It is ironic that Mooers' term came to be attached to the popular but unsophisticated indexing methods which he was trying to root out. Simply expressed, what Mooers did was to take the dictionary definitions of terms and redefine them so clearly that they could not be used in any context except that provided by the new definition. He did, at great pains, construct such meanings for over four hundred words; disambiguation and specificity were sought after and found for these words. He proposed that all indexers adopt this method so that when the index supplied a term, it also supplied the exact meaning for that term as used in the indexed document. The same term used differently in another document would be defined differently and possibly renamed to avoid ambiguity. The disambiguation was achieved by using unabridged dictionaries and other sources of defining terminology. In practice, this tends to produce circularity in definition, that is, word A refers to word B which refers to word C which refers to word A. It was necessary, therefore, to break this chain by creating a new, definitive meaning for each word. Eventually, means such as those used by Austin (q.v.) for PRECIS achieved the same purpose, but by much more complex means than just creating a unique definition of each term. Mooers, however, was probably the first to realize how confusing undefined terminology could be. Early automatic indexers dealt with distinct disciplines and, as long as they did not stray beyond disciplinary boundaries, a quick and dirty keyword approach was satisfactory. The trouble came when attempts were made to make a combined index for two or more distinct disciplines. A number of processes have since been developed, mostly involving tagging of some kind or use of strings. Mooers' solution has rarely been considered seriously and probably would be extremely difficult to apply now because of so much interdisciplinarity. But for a specific, well-defined field, it is still well worth considering. Mooers received training in mathematics and physics from the University of Minnesota and the Massachusetts Institute of Technology. He was the founder of Zator Company, which developed and marketed a coded card information retrieval system, and of Rockford Research, Inc., which engages in research in information science. He is the inventor of the TRAC computer language.
    Footnote
    Original in: Information retrieval today: papers presented at an Institute conducted by the Library School and the Center for Continuation Study, University of Minnesota, Sept. 19-22, 1962. Ed. by Wesley Simonton. Minneapolis, Minn.: The Center, 1963. S.21-36.
  3. Pimenov, E.N.: Normativnost' i nekotorye problem razrabotki tezauruzov i drugikh lingvistiicheskikh sredstv IPS (2000) 0.02
    0.016011242 = product of:
      0.09606744 = sum of:
        0.09606744 = weight(_text_:problem in 3281) [ClassicSimilarity], result of:
          0.09606744 = score(doc=3281,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.46895373 = fieldWeight in 3281, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.078125 = fieldNorm(doc=3281)
      0.16666667 = coord(1/6)
    
  4. Röttsches, H.: Thesauruspflege im Verbund der Bibliotheken der obersten Bundesbehörden (1989) 0.02
    0.015257841 = product of:
      0.09154704 = sum of:
        0.09154704 = weight(_text_:22 in 4199) [ClassicSimilarity], result of:
          0.09154704 = score(doc=4199,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.5416616 = fieldWeight in 4199, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.109375 = fieldNorm(doc=4199)
      0.16666667 = coord(1/6)
    
    Source
    Mitteilungen der Arbeitsgemeinschaft der Parlaments- und Behördenbibliotheken. 1989, H.67, S.1-22
  5. Rahmstorf, G.: Information retrieval using conceptual representations of phrases (1994) 0.01
    0.01358599 = product of:
      0.08151594 = sum of:
        0.08151594 = weight(_text_:problem in 7862) [ClassicSimilarity], result of:
          0.08151594 = score(doc=7862,freq=4.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.39792046 = fieldWeight in 7862, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.046875 = fieldNorm(doc=7862)
      0.16666667 = coord(1/6)
    
    Abstract
    The information retrieval problem is described starting from an analysis of the concepts 'user's information request' and 'information offerings of texts'. It is shown that natural language phrases are a more adequate medium for expressing information requests and information offerings than character-string-based query and indexing languages complemented by Boolean operators. The phrases must be represented as concepts to reach a language-invariant level for rule-based relevance analysis. The special type of representation called an advanced thesaurus is used for the semantic representation of natural language phrases and for relevance processing. The analysis of the retrieval problem leads to a symmetric system structure.
  6. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.01
    0.013078149 = product of:
      0.0784689 = sum of:
        0.0784689 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
          0.0784689 = score(doc=4483,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.46428138 = fieldWeight in 4483, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=4483)
      0.16666667 = coord(1/6)
    
    Date
    15. 3.2000 10:22:37
  7. Maniez, J.: ¬Des classifications aux thesaurus : du bon usage des facettes (1999) 0.01
    0.013078149 = product of:
      0.0784689 = sum of:
        0.0784689 = weight(_text_:22 in 6404) [ClassicSimilarity], result of:
          0.0784689 = score(doc=6404,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.46428138 = fieldWeight in 6404, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=6404)
      0.16666667 = coord(1/6)
    
    Date
    1. 8.1996 22:01:00
  8. Maniez, J.: ¬Du bon usage des facettes : des classifications aux thésaurus (1999) 0.01
    0.013078149 = product of:
      0.0784689 = sum of:
        0.0784689 = weight(_text_:22 in 3773) [ClassicSimilarity], result of:
          0.0784689 = score(doc=3773,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.46428138 = fieldWeight in 3773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=3773)
      0.16666667 = coord(1/6)
    
    Date
    1. 8.1996 22:01:00
  9. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.01
    0.013078149 = product of:
      0.0784689 = sum of:
        0.0784689 = weight(_text_:22 in 3895) [ClassicSimilarity], result of:
          0.0784689 = score(doc=3895,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.46428138 = fieldWeight in 3895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=3895)
      0.16666667 = coord(1/6)
    
    Date
    24. 8.2005 19:20:22
  10. Li, K.W.; Yang, C.C.: Automatic crosslingual thesaurus generated from the Hong Kong SAR Police Department Web Corpus for Crime Analysis (2005) 0.01
    0.0128089925 = product of:
      0.07685395 = sum of:
        0.07685395 = weight(_text_:problem in 3391) [ClassicSimilarity], result of:
          0.07685395 = score(doc=3391,freq=8.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.375163 = fieldWeight in 3391, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.03125 = fieldNorm(doc=3391)
      0.16666667 = coord(1/6)
    
    Abstract
    For the sake of national security, very large volumes of data and information are generated and gathered daily. Much of this data and information is written in different languages, stored in different locations, and may be seemingly unconnected. Crosslingual semantic interoperability is a major challenge to generate an overview of this disparate data and information so that it can be analyzed, shared, searched, and summarized. The recent terrorist attacks and the tragic events of September 11, 2001 have prompted increased attention on national security and criminal analysis. Many Asian countries and cities, such as Japan, Taiwan, and Singapore, have been advised that they may become the next targets of terrorist attacks. Semantic interoperability has been a focus in digital library research. Traditional information retrieval (IR) approaches normally require a document to share some common keywords with the query. Generating the associations for the related terms between the two term spaces of users and documents is an important issue. The problem can be viewed as the creation of a thesaurus. Apart from this, terrorists and criminals may communicate through letters, e-mails, and faxes in languages other than English. The translation ambiguity significantly exacerbates the retrieval problem. The problem is expanded to crosslingual semantic interoperability. In this paper, we focus on the English/Chinese crosslingual semantic interoperability problem. However, the developed techniques are not limited to English and Chinese but can be applied to many other languages. English and Chinese are popular languages in the Asian region. Much information about national security or crime is communicated in these languages. An efficient automatically generated thesaurus between these languages is important to crosslingual information retrieval between English and Chinese. To facilitate crosslingual information retrieval, a corpus-based approach uses the term co-occurrence statistics in parallel or comparable corpora to construct a statistical translation model to cross the language boundary. In this paper, the text-based approach to aligning English/Chinese Hong Kong Police press release documents from the Web is first presented. We also introduce an algorithmic approach to generate a robust knowledge base based on statistical correlation analysis of the semantics (knowledge) embedded in the bilingual press release corpus. The research output consisted of a thesaurus-like, semantic network knowledge base, which can aid in semantics-based crosslingual information management and retrieval.
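    As a rough illustration of the corpus-based approach described above, the sketch below scores English/Chinese term associations from aligned document pairs with a Dice-style coefficient; the data and the choice of coefficient are illustrative stand-ins, not the paper's actual corpus or correlation analysis:

```python
from collections import Counter

# Aligned document pairs (English press release, Chinese counterpart),
# reduced to term sets; the data is invented for illustration.
pairs = [
    ({"crime", "arrest"}, {"罪案", "拘捕"}),
    ({"crime", "police"}, {"罪案", "警察"}),
    ({"police", "patrol"}, {"警察", "巡逻"}),
]

en_freq: Counter = Counter()
zh_freq: Counter = Counter()
co: Counter = Counter()
for en_doc, zh_doc in pairs:
    en_freq.update(en_doc)
    zh_freq.update(zh_doc)
    co.update((e, z) for e in en_doc for z in zh_doc)

def association(e: str, z: str) -> float:
    """Dice-style correlation between an English and a Chinese term."""
    return 2 * co[(e, z)] / (en_freq[e] + zh_freq[z])

print(association("crime", "罪案"))  # 1.0: consistently co-occurring pair
print(association("crime", "警察"))  # 0.5: weaker, incidental association
```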
  11. Doerr, M.: Semantic problems of thesaurus mapping (2001) 0.01
    0.011321658 = product of:
      0.067929946 = sum of:
        0.067929946 = weight(_text_:problem in 5902) [ClassicSimilarity], result of:
          0.067929946 = score(doc=5902,freq=4.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.33160037 = fieldWeight in 5902, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5902)
      0.16666667 = coord(1/6)
    
    Abstract
    With networked information access to heterogeneous data sources, the problem of terminology provision and interoperability of controlled vocabulary schemes such as thesauri becomes increasingly urgent. Solutions are needed to improve the performance of full-text retrieval systems and to guide the design of controlled terminology schemes for use in structured data, including metadata. Thesauri are created in different languages, with different scope and points of view and at different levels of abstraction and detail, to accommodate access to a specific group of collections. In any wider search accessing distributed collections, the user would like to start with familiar terminology and let the system find the correspondences to other terminologies in order to retrieve equivalent results from all addressed collections. This paper investigates possible semantic differences that may hinder the unambiguous mapping and transition from one thesaurus to another. It focuses on the differences in meaning of terms and their relations as intended by their creators for indexing and querying a specific collection, in contrast to methods investigating the statistical relevance of terms for objects in a collection. It develops a notion of optimal mapping, paying particular attention to the intellectual quality of mappings between terms from different vocabularies and to problems of polysemy. Proposals are made to limit the vagueness introduced by the transition from one vocabulary to another. The paper shows ways in which thesaurus creators can improve their methodology to meet the challenges of networked access to distributed collections created under varying conditions. For system implementers, the discussion will lead to a better understanding of the complexity of the problem.
  12. Busch, D.: Organisation eines Thesaurus für die Unterstützung der mehrsprachigen Suche in einer bibliographischen Datenbank im Bereich Planen und Bauen (2016) 0.01
    0.011321658 = product of:
      0.067929946 = sum of:
        0.067929946 = weight(_text_:problem in 3308) [ClassicSimilarity], result of:
          0.067929946 = score(doc=3308,freq=4.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.33160037 = fieldWeight in 3308, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3308)
      0.16666667 = coord(1/6)
    
    Abstract
    The problem of multilingual search has been gaining importance recently, since much useful specialist information is published worldwide in different languages. RSWBPlus is a bibliographic database covering the literature of planning and building, containing German- and English-language metadata entries. Until recently it was difficult to find entries whose language differed from the query language: German-language queries, for example, retrieved only German-language entries, even though the database also contained potentially useful English-language entries. To solve this problem, after a survey of existing approaches, RSWBPlus was extended to support multilingual (cross-lingual) search based on a bilingual concept-based thesaurus. The thesaurus was generated automatically from existing thesauri: the entries of the source thesauri were converted to SKOS format (Simple Knowledge Organisation System), merged automatically, and finally loaded into a target thesaurus, likewise maintained in SKOS. Apache Jena and MS SQL Server are used to access the target thesaurus. During multilingual search, query terms are expanded with the corresponding translations and synonyms in German and English; this expansion can happen either at runtime or semi-automatically. The improved retrieval system can help German-speaking users in particular to find relevant English-language entries. Using SKOS increases the interoperability of the thesauri and simplifies both the construction of the target thesaurus and access to its entries.
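    A minimal sketch of the runtime query expansion step, using rdflib against a toy SKOS fragment; the concept and its labels are invented here, whereas the real system draws them from the merged German/English target thesaurus:

```python
from rdflib import Graph
from rdflib.namespace import RDF, SKOS

# Toy bilingual SKOS fragment (invented concept and labels).
g = Graph()
g.parse(format="turtle", data="""
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/thesaurus/> .
ex:c1 a skos:Concept ;
    skos:prefLabel "Baustoff"@de, "building material"@en ;
    skos:altLabel  "Werkstoff"@de .
""")

def expand(term: str) -> set[str]:
    """Expand a query term with all labels of concepts that carry it."""
    expanded = {term}
    for concept in g.subjects(RDF.type, SKOS.Concept):
        labels = {str(lab) for pred in (SKOS.prefLabel, SKOS.altLabel)
                  for lab in g.objects(concept, pred)}
        if term in labels:
            expanded |= labels
    return expanded

print(expand("Baustoff"))  # {'Baustoff', 'Werkstoff', 'building material'}
```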
  13. Maniez, J.: Relationships in thesauri : some critical remarks (1988) 0.01
    0.011207869 = product of:
      0.06724721 = sum of:
        0.06724721 = weight(_text_:problem in 806) [ClassicSimilarity], result of:
          0.06724721 = score(doc=806,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.3282676 = fieldWeight in 806, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0546875 = fieldNorm(doc=806)
      0.16666667 = coord(1/6)
    
    Abstract
    After reviewing some fundamental distinctions in relationships (paradigmatic/syntagmatic, interconceptual/structural), the author proposes a functional approach for investigating the relationships in thesauri. The comparison between three closely related types of semantic fields (lexical, conceptual, thesaural) shows the specific function of relationships in all of these intellectual tools. In information retrieval the two main functions are location of relevant concepts and search for exhaustivity. A clear distinction of these aims can contribute to solving the difficult problem of the choice of 'related terms'. It is suggested that their usefulness relies upon empirical rather than semantic proximity. Some practical propositions are made for the choice and display of relationships in thesauri.
  14. Schmitz-Esser, W.: New approaches in thesaurus application (1991) 0.01
    0.011207869 = product of:
      0.06724721 = sum of:
        0.06724721 = weight(_text_:problem in 2111) [ClassicSimilarity], result of:
          0.06724721 = score(doc=2111,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.3282676 = fieldWeight in 2111, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2111)
      0.16666667 = coord(1/6)
    
    Abstract
    To show the difference and explain the move to a new kind of thesaurus in the information science area, some of the main characteristics of conventional thesauri are pointed out, as well as their side-effects. The new approaches to thesaurus application are seen to exist in (1) expert systems, (2) interface systems, (3) object-oriented design and programming, (4) hypertext systems, (5) machine translation, and (6) machine abstracting. These areas are briefly described, including the new problems they might create. A discussion of the limitations of the new thesaurus application areas concludes the article, which closes with a challenge to become aware of the new possibilities of thesaural retrieval.
  15. Somers, H.L.: Observations on standards and guidelines concerning thesaurus construction (1981) 0.01
    0.011207869 = product of:
      0.06724721 = sum of:
        0.06724721 = weight(_text_:problem in 5217) [ClassicSimilarity], result of:
          0.06724721 = score(doc=5217,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.3282676 = fieldWeight in 5217, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5217)
      0.16666667 = coord(1/6)
    
    Abstract
    An attempt is made to compare the existing standards and guidelines for thesaurus construction and development, focusing particularly on the ISO and BSI standards as well as on the guidelines suggested by Aitchison and Gilchrist, and UNISIST. The different facets/aspects considered are: linguistic aspects of thesauri; formal requirements suggested by the standards/guidelines, with special emphasis on problems associated with compound terms, homographs, forms of terms, etc.; semantic relationships between terms - synonymy, BT/NT, and associativity; problems peculiar to multilingual thesauri, especially the problem of inexact equivalence between terms; and presentation and arrangement of terms in a thesaurus.
  16. Jones, S.; Gatford, M.; Robertson, S.; Hancock-Beaulieu, M.; Secker, J.; Walker, S.: Interactive thesaurus navigation : intelligence rules OK? (1995) 0.01
    0.011207869 = product of:
      0.06724721 = sum of:
        0.06724721 = weight(_text_:problem in 180) [ClassicSimilarity], result of:
          0.06724721 = score(doc=180,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.3282676 = fieldWeight in 180, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0546875 = fieldNorm(doc=180)
      0.16666667 = coord(1/6)
    
    Abstract
    We discuss whether it is feasible to build intelligent rule- or weight-based algorithms into general-purpose software for interactive thesaurus navigation. We survey some approaches to the problem reported in the literature, particularly those involving the assignment of 'link weights' in a thesaurus network, and point out some problems of both principle and practice. We then describe investigations which entailed logging the behavior of thesaurus users and testing the effect of thesaurus-based query enhancement in an IR system using term weighting, in an attempt to identify successful strategies to incorporate into automatic procedures. The results cause us to question many of the assumptions made by previous researchers in this area.
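    A toy version of the weight-based expansion the paper examines: seed terms keep full weight, related terms inherit their link weight, and weak links are pruned by a threshold (the link table and threshold value are hypothetical):

```python
# Hypothetical link weights between thesaurus terms (0..1, higher = closer).
LINKS = {
    ("thesaurus", "controlled vocabulary"): 0.9,
    ("thesaurus", "ontology"): 0.5,
    ("thesaurus", "dictionary"): 0.2,
}

def expand_query(terms: set[str], min_weight: float = 0.4) -> dict[str, float]:
    """Expand seed terms along weighted thesaurus links."""
    weighted = {t: 1.0 for t in terms}
    for (a, b), w in LINKS.items():
        for seed, other in ((a, b), (b, a)):
            if seed in terms and w >= min_weight:
                weighted[other] = max(weighted.get(other, 0.0), w)
    return weighted

print(expand_query({"thesaurus"}))
# {'thesaurus': 1.0, 'controlled vocabulary': 0.9, 'ontology': 0.5}
```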
  17. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.01
    0.011092915 = product of:
      0.06655749 = sum of:
        0.06655749 = weight(_text_:problem in 4639) [ClassicSimilarity], result of:
          0.06655749 = score(doc=4639,freq=6.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.32490072 = fieldWeight in 4639, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
      0.16666667 = coord(1/6)
    
    Abstract
    This thesis focuses on conversion of vocabularies for representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability is dependent on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
    We refine the problem statement into three research questions. The first two focus on the problem of conversion of a vocabulary to a Semantic Web representation from its original format. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
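    In its simplest form, the conversion the thesis studies maps a flat legacy vocabulary record onto SKOS triples. A sketch under invented assumptions (the record layout, URIs, and labels are illustrative, not the thesis's actual input format):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/vocab/")

# A flat source record as it might appear in a legacy vocabulary file.
record = {"id": "1234", "term": "Thesauri", "broader": "5678",
          "used_for": ["Subject heading lists"]}

g = Graph()
concept = EX[record["id"]]
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal(record["term"], lang="en")))
g.add((concept, SKOS.broader, EX[record["broader"]]))
for alt in record["used_for"]:
    g.add((concept, SKOS.altLabel, Literal(alt, lang="en")))

print(g.serialize(format="turtle"))
```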
  18. Riege, U.: Thesaurus und Klassifikation Sozialwissenschaften : Entwicklung der elektronischen Versionen (1998) 0.01
    0.010898459 = product of:
      0.06539075 = sum of:
        0.06539075 = weight(_text_:22 in 4158) [ClassicSimilarity], result of:
          0.06539075 = score(doc=4158,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.38690117 = fieldWeight in 4158, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=4158)
      0.16666667 = coord(1/6)
    
    Source
    Information und Märkte: 50. Deutscher Dokumentartag 1998, Kongreß der Deutschen Gesellschaft für Dokumentation e.V. (DGD), Rheinische Friedrich-Wilhelms-Universität Bonn, 22.-24. September 1998. Hrsg. von Marlies Ockenfeld u. Gerhard J. Mantwill
  19. Alkämper, H.: ¬Die Neugestaltung des Parlamentsthesaurus PARTHES (1998) 0.01
    0.010898459 = product of:
      0.06539075 = sum of:
        0.06539075 = weight(_text_:22 in 4162) [ClassicSimilarity], result of:
          0.06539075 = score(doc=4162,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.38690117 = fieldWeight in 4162, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=4162)
      0.16666667 = coord(1/6)
    
    Source
    Information und Märkte: 50. Deutscher Dokumentartag 1998, Kongreß der Deutschen Gesellschaft für Dokumentation e.V. (DGD), Rheinische Friedrich-Wilhelms-Universität Bonn, 22.-24. September 1998. Hrsg. von Marlies Ockenfeld u. Gerhard J. Mantwill
  20. Schenkel, M.: Vom Leitkartenthesaurus zum Online-Bibliotheksthesaurus : Die Revision des Dokumentationssystems der Bibliothek des Deutschen Bundestages (1949-1998) (1998) 0.01
    0.010898459 = product of:
      0.06539075 = sum of:
        0.06539075 = weight(_text_:22 in 4163) [ClassicSimilarity], result of:
          0.06539075 = score(doc=4163,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.38690117 = fieldWeight in 4163, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=4163)
      0.16666667 = coord(1/6)
    
    Source
    Information und Märkte: 50. Deutscher Dokumentartag 1998, Kongreß der Deutschen Gesellschaft für Dokumentation e.V. (DGD), Rheinische Friedrich-Wilhelms-Universität Bonn, 22.-24. September 1998. Hrsg. von Marlies Ockenfeld u. Gerhard J. Mantwill

Languages

  • e 32
  • d 10
  • f 4
  • ru 1
  • sp 1

Types

  • a 43
  • el 6
  • m 2
  • n 1
  • x 1