Search (26 results, page 1 of 2)

  • × language_ss:"e"
  • × theme_ss:"Konzeption und Anwendung des Prinzips Thesaurus"
  • × type_ss:"el"
  1. Tavakolizadeh-Ravari, M.: Analysis of the long term dynamics in thesaurus developments and its consequences (2017) 0.01
    0.010460943 = product of:
      0.06276566 = sum of:
        0.014807926 = weight(_text_:und in 3081) [ClassicSimilarity], result of:
          0.014807926 = score(doc=3081,freq=20.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.3097467 = fieldWeight in 3081, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=3081)
        0.027356375 = weight(_text_:informationswissenschaft in 3081) [ClassicSimilarity], result of:
          0.027356375 = score(doc=3081,freq=4.0), product of:
            0.09716552 = queryWeight, product of:
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.021569785 = queryNorm
            0.28154406 = fieldWeight in 3081, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.03125 = fieldNorm(doc=3081)
        0.0046665967 = weight(_text_:in in 3081) [ClassicSimilarity], result of:
          0.0046665967 = score(doc=3081,freq=14.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.15905021 = fieldWeight in 3081, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=3081)
        0.014807926 = weight(_text_:und in 3081) [ClassicSimilarity], result of:
          0.014807926 = score(doc=3081,freq=20.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.3097467 = fieldWeight in 3081, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=3081)
        0.0011268335 = weight(_text_:s in 3081) [ClassicSimilarity], result of:
          0.0011268335 = score(doc=3081,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.048049565 = fieldWeight in 3081, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.03125 = fieldNorm(doc=3081)
      0.16666667 = coord(5/30)
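    The relevance figures are Lucene "explain" output for the ClassicSimilarity (TF-IDF) model: each matching term contributes queryWeight (idf x queryNorm) times fieldWeight (sqrt(tf) x idf x fieldNorm), and the sum over matching terms is scaled by the coordination factor coord(5/30). As a sanity check, the sketch below recomputes the "und" contribution and the final score of entry 1 from the numbers printed in the tree above; the idf expression in the comment is the standard ClassicSimilarity definition and reproduces the published value.

```python
import math

# Numbers copied from the explain tree of entry 1 (term "und", doc 3081).
freq, idf, query_norm, field_norm = 20.0, 2.216367, 0.021569785, 0.03125

tf = math.sqrt(freq)                         # ClassicSimilarity: tf = sqrt(freq)
query_weight = idf * query_norm              # -> 0.04780656
field_weight = tf * idf * field_norm         # -> 0.3097467
term_score = query_weight * field_weight     # -> 0.014807926

# idf follows 1 + ln(maxDocs / (docFreq + 1)); this reproduces 2.216367.
assert abs((1 + math.log(44218 / (13101 + 1))) - idf) < 1e-5

# Document score: sum of the five matching term contributions, scaled by
# the coordination factor coord(5/30).
contributions = [0.014807926, 0.027356375, 0.0046665967, 0.014807926, 0.0011268335]
doc_score = sum(contributions) * 5 / 30
print(round(term_score, 9), round(doc_score, 9))   # ~0.014807926  ~0.010460943
```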
    
    Abstract
    The thesis analyses the dynamic development and use of thesaurus terms and, in addition, examines the factors that influence the number of index terms per document or journal. MeSH and the corresponding database MEDLINE served as the objects of investigation. The main conclusions are: 1. The MeSH thesaurus has grown logarithmically through three distinct phases. Such a thesaurus should follow the equation T = 3,076.6 ln(d) - 22,695 + 0.0039 d (T = terms, ln = natural logarithm, d = documents). To construct such a thesaurus one therefore needs about 1,600 documents covering the various topics of the thesaurus's domain. The dynamic development of thesauri such as MeSH requires the introduction of one new term for every 256 newly indexed documents. 2. The distribution of thesaurus terms yields three categories: heavily used, normally used and rarely used headings. The last group is in a test phase, while the newly added descriptors in the first and second categories drive thesaurus growth. 3. There is a logarithmic relationship between the number of index terms per article and its page count, for articles of one to twenty-one pages. 4. Journal articles that appear in MEDLINE with an abstract receive almost two additional descriptors. 5. The findability of non-English-language documents in MEDLINE is lower than that of English documents. 6. Articles from journals with an impact factor between 0 and 15 do not receive more index terms than those from the other journals covered by MEDLINE. 7. Within an indexing system, different journals carry more or less weight in terms of their findability. The distribution of index terms per page shows that MEDLINE contains three categories of publications; in addition, there are a few strongly favoured journals.
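    To make consequence 1 concrete, the sketch below evaluates the growth model quoted in the abstract (decimal separators converted from the German notation of the original). T(d) crosses zero near d of about 1,600 documents, and the linear coefficient 0.0039 corresponds to roughly one new descriptor per 256 newly indexed documents.

```python
import math

def thesaurus_size(d: float) -> float:
    """Growth model quoted in the abstract:
    T(d) = 3076.6 * ln(d) - 22695 + 0.0039 * d"""
    return 3076.6 * math.log(d) - 22695 + 0.0039 * d

# T crosses zero near d ~ 1,600 documents, the "starting set" the abstract
# mentions; for large d the growth rate approaches the linear coefficient,
# i.e. roughly one new descriptor per 1/0.0039 ~ 256 indexed documents.
print(round(thesaurus_size(1600), 1))   # close to zero
print(round(1 / 0.0039))                # ~256
```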
    Footnote
    Dissertation, Humboldt-Universität zu Berlin - Institut für Bibliotheks- und Informationswissenschaft.
    Imprint
    Berlin : Humboldt-Universität zu Berlin / Institut für Bibliotheks- und Informationswissenschaft
    Pages
    128 S
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  2. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.01
    0.0065348013 = product of:
      0.049011007 = sum of:
        0.014048031 = weight(_text_:und in 3895) [ClassicSimilarity], result of:
          0.014048031 = score(doc=3895,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.29385152 = fieldWeight in 3895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=3895)
        0.014048031 = weight(_text_:und in 3895) [ClassicSimilarity], result of:
          0.014048031 = score(doc=3895,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.29385152 = fieldWeight in 3895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=3895)
        0.0033805002 = weight(_text_:s in 3895) [ClassicSimilarity], result of:
          0.0033805002 = score(doc=3895,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.14414869 = fieldWeight in 3895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.09375 = fieldNorm(doc=3895)
        0.017534448 = product of:
          0.035068896 = sum of:
            0.035068896 = weight(_text_:22 in 3895) [ClassicSimilarity], result of:
              0.035068896 = score(doc=3895,freq=2.0), product of:
                0.07553371 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021569785 = queryNorm
                0.46428138 = fieldWeight in 3895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3895)
          0.5 = coord(1/2)
      0.13333334 = coord(4/30)
    
    Date
    24. 8.2005 19:20:22
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  3. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.00
    0.0038025428 = product of:
      0.038025428 = sum of:
        0.011706693 = weight(_text_:und in 539) [ClassicSimilarity], result of:
          0.011706693 = score(doc=539,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.24487628 = fieldWeight in 539, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=539)
        0.011706693 = weight(_text_:und in 539) [ClassicSimilarity], result of:
          0.011706693 = score(doc=539,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.24487628 = fieldWeight in 539, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=539)
        0.0146120405 = product of:
          0.029224081 = sum of:
            0.029224081 = weight(_text_:22 in 539) [ClassicSimilarity], result of:
              0.029224081 = score(doc=539,freq=2.0), product of:
                0.07553371 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021569785 = queryNorm
                0.38690117 = fieldWeight in 539, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=539)
          0.5 = coord(1/2)
      0.1 = coord(3/30)
    
    Date
    26.12.2011 13:22:07
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  4. Hill, L.: New Protocols for Gazetteer and Thesaurus Services (2002) 0.00
    0.0034410476 = product of:
      0.020646285 = sum of:
        0.004682677 = weight(_text_:und in 1206) [ClassicSimilarity], result of:
          0.004682677 = score(doc=1206,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.09795051 = fieldWeight in 1206, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=1206)
        0.0030550044 = weight(_text_:in in 1206) [ClassicSimilarity], result of:
          0.0030550044 = score(doc=1206,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.1041228 = fieldWeight in 1206, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=1206)
        0.007099094 = product of:
          0.021297282 = sum of:
            0.021297282 = weight(_text_:l in 1206) [ClassicSimilarity], result of:
              0.021297282 = score(doc=1206,freq=4.0), product of:
                0.0857324 = queryWeight, product of:
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.021569785 = queryNorm
                0.24841578 = fieldWeight in 1206, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1206)
          0.33333334 = coord(1/3)
        0.004682677 = weight(_text_:und in 1206) [ClassicSimilarity], result of:
          0.004682677 = score(doc=1206,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.09795051 = fieldWeight in 1206, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=1206)
        0.0011268335 = weight(_text_:s in 1206) [ClassicSimilarity], result of:
          0.0011268335 = score(doc=1206,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.048049565 = fieldWeight in 1206, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.03125 = fieldNorm(doc=1206)
      0.16666667 = coord(5/30)
    
    Abstract
    The Alexandria Digital Library Project announces the online publication of two protocols to support querying and response interactions using distributed services: one for gazetteers and one for thesauri. These protocols have been developed for our own purposes and also to support the general interoperability of gazetteers and thesauri on the web. See <http://www.alexandria.ucsb.edu/~gjanee/gazetteer/> and <http://www.alexandria.ucsb.edu/~gjanee/thesaurus/>. For the gazetteer protocol, we have provided a page of test forms that can be used to experiment with the operational functions of the protocol in accessing two gazetteers: the ADL Gazetteer and the ESRI Gazetteer (ESRI has participated in the development of the gazetteer protocol). We are in the process of developing a thesaurus server and a simple client to demonstrate the use of the thesaurus protocol. We are soliciting comments on both protocols. Please remember that we are seeking protocols that are essentially "simple" and easy to implement and that support basic operations - they should not duplicate all of the functions of specialized gazetteer and thesaurus interfaces. We continue to discuss ways of handling various issues and to further develop the protocols. For the thesaurus protocol, outstanding issues include the treatment of multilingual thesauri and the degree to which the language attribute should be supported; whether the Scope Note element should be changed to a repeatable Note element; the best way to handle the hierarchical report for multi-hierarchies where portions of the hierarchy are repeated; and whether support for searching by term identifiers is redundant and unnecessary given that the terms themselves are unique within a thesaurus. For the gazetteer protocol, we continue to work on validation of query and report XML documents and on implementing the part of the protocol designed to support the submission of new entries to a gazetteer. We would like to encourage open discussion of these protocols through the NKOS discussion list (see the NKOS webpage at <http://nkos.slis.kent.edu/>) and the CGGR-L discussion list that focuses on gazetteer development (see ADL Gazetteer Development page at <http://www.alexandria.ucsb.edu/gazetteer>).
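    The note describes XML query/report interactions with distributed gazetteer and thesaurus services. Purely as an illustration of that interaction style, the sketch below builds a toy query document and parses a mocked-up report; the element names are invented for the example and are not taken from the ADL protocol specifications.

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch only: the element names below are invented to show the
# general query/report interaction style and are NOT the ADL protocol schema.
query = ET.Element("thesaurus-query")
ET.SubElement(query, "get-related", {"term": "estuaries", "relation": "broader"})
print(ET.tostring(query, encoding="unicode"))

# A conforming service would answer with a report document; here we simply
# parse a mocked-up response instead of calling a live endpoint.
report = ET.fromstring(
    '<report term="estuaries">'
    '<broader>coastal landforms</broader>'
    '<related>tidal flats</related>'
    '</report>'
)
for relation in report:
    print(relation.tag, "->", relation.text)
```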
    Source
    D-Lib magazine. 8(2002) no.3, x S
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  5. Assem, M. van; Menken, M.R.; Schreiber, G.; Wielemaker, J.; Wielinga, B.: ¬A method for converting thesauri to RDF/OWL (2004) 0.00
    0.00326992 = product of:
      0.024524398 = sum of:
        0.008194685 = weight(_text_:und in 4644) [ClassicSimilarity], result of:
          0.008194685 = score(doc=4644,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.17141339 = fieldWeight in 4644, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4644)
        0.0053462577 = weight(_text_:in in 4644) [ClassicSimilarity], result of:
          0.0053462577 = score(doc=4644,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.1822149 = fieldWeight in 4644, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4644)
        0.008194685 = weight(_text_:und in 4644) [ClassicSimilarity], result of:
          0.008194685 = score(doc=4644,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.17141339 = fieldWeight in 4644, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4644)
        0.0027887707 = weight(_text_:s in 4644) [ClassicSimilarity], result of:
          0.0027887707 = score(doc=4644,freq=4.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.118916616 = fieldWeight in 4644, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4644)
      0.13333334 = coord(4/30)
    
    Abstract
    This paper describes a method for converting existing thesauri and related resources from their native format to RDF(S) and OWL. The method identifies four steps in the conversion process. In each step, decisions have to be taken with respect to the syntax or semantics of the resulting representation. Each step is supported through a number of guidelines. The method is illustrated through conversions of two large thesauri: MeSH and WordNet.
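    As a rough illustration of what one such conversion step can produce, the sketch below turns an invented thesaurus record into RDF(S) triples with rdflib; the namespace, the Term class and the property names are assumptions for the example, not the schema defined in the paper.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Rough sketch of one conversion step; all names here are invented.
EX = Namespace("http://example.org/thesaurus/")
g = Graph()
g.bind("ex", EX)

record = {"id": "t42", "label": "Estuaries", "broader": "t40"}

term = EX[record["id"]]
g.add((term, RDF.type, EX.Term))                     # one resource per term
g.add((term, RDFS.label, Literal(record["label"])))  # lexical label
g.add((term, EX.broader, EX[record["broader"]]))     # hierarchy as a property

print(g.serialize(format="turtle"))
```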
    Pages
    S.17-31
    Series
    Lecture notes in computer science; no.3298
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  6. Assem, M. van; Gangemi, A.; Schreiber, G.: Conversion of WordNet to a standard RDF/OWL representation (2006) 0.00
    0.0029625234 = product of:
      0.022218924 = sum of:
        0.0070240153 = weight(_text_:und in 4641) [ClassicSimilarity], result of:
          0.0070240153 = score(doc=4641,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.14692576 = fieldWeight in 4641, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=4641)
        0.0064806426 = weight(_text_:in in 4641) [ClassicSimilarity], result of:
          0.0064806426 = score(doc=4641,freq=12.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.22087781 = fieldWeight in 4641, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4641)
        0.0070240153 = weight(_text_:und in 4641) [ClassicSimilarity], result of:
          0.0070240153 = score(doc=4641,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.14692576 = fieldWeight in 4641, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=4641)
        0.0016902501 = weight(_text_:s in 4641) [ClassicSimilarity], result of:
          0.0016902501 = score(doc=4641,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.072074346 = fieldWeight in 4641, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=4641)
      0.13333334 = coord(4/30)
    
    Abstract
    This paper presents an overview of the work in progress at the W3C to produce a standard conversion of WordNet to the RDF/OWL representation language in use in the Semantic Web community. Such a standard representation is useful to provide application developers a high-quality resource and to promote interoperability. Important requirements in this conversion process are that it should be complete and should stay close to WordNet's conceptual model. The paper explains the steps taken to produce the conversion and details design decisions such as the composition of the class hierarchy and properties, the addition of suitable OWL semantics and the chosen format of the URIs. Additional topics include a strategy to incorporate OWL and RDFS semantics in one schema such that both RDF(S) infrastructure and OWL infrastructure can interpret the information correctly, problems encountered in understanding the Prolog source files and the description of the two versions that are provided (Basic and Full) to accommodate different usages of WordNet.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  7. Michel, D.: Taxonomy of Subject Relationships (1997) 0.00
    0.0027822906 = product of:
      0.027822906 = sum of:
        0.011706693 = weight(_text_:und in 5346) [ClassicSimilarity], result of:
          0.011706693 = score(doc=5346,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.24487628 = fieldWeight in 5346, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=5346)
        0.004409519 = weight(_text_:in in 5346) [ClassicSimilarity], result of:
          0.004409519 = score(doc=5346,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.15028831 = fieldWeight in 5346, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=5346)
        0.011706693 = weight(_text_:und in 5346) [ClassicSimilarity], result of:
          0.011706693 = score(doc=5346,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.24487628 = fieldWeight in 5346, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=5346)
      0.1 = coord(3/30)
    
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
    Konzeption und Anwendung des Prinzips Thesaurus
  8. Lee, M.; Baillie, S.; Dell'Oro, J.: TML: a Thesaural Markup Language (200?) 0.00
    0.0027094386 = product of:
      0.020320788 = sum of:
        0.0070240153 = weight(_text_:und in 1622) [ClassicSimilarity], result of:
          0.0070240153 = score(doc=1622,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.14692576 = fieldWeight in 1622, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=1622)
        0.0045825066 = weight(_text_:in in 1622) [ClassicSimilarity], result of:
          0.0045825066 = score(doc=1622,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.1561842 = fieldWeight in 1622, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1622)
        0.0070240153 = weight(_text_:und in 1622) [ClassicSimilarity], result of:
          0.0070240153 = score(doc=1622,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.14692576 = fieldWeight in 1622, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=1622)
        0.0016902501 = weight(_text_:s in 1622) [ClassicSimilarity], result of:
          0.0016902501 = score(doc=1622,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.072074346 = fieldWeight in 1622, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=1622)
      0.13333334 = coord(4/30)
    
    Abstract
    Thesauri are used to provide controlled vocabularies for resource classification. Their use can greatly assist document discovery because thesauri mandate a consistent shared terminology for describing documents. A particular thesaurus classifies documents according to an information community's needs. As a result, there are many different thesaural schemas. This has led to a proliferation of schema-specific thesaural systems. In our research, we exploit schematic regularities to design a generic thesaural ontology and specify it as a markup language. The language provides a common representational framework in which to encode the idiosyncrasies of specific thesauri. This approach has several advantages: it offers consistent syntax and semantics in which to express thesauri; it allows general purpose thesaural applications to leverage many thesauri; and it supports a single thesaural user interface by which information communities can consistently organise, score and retrieve electronic documents.
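    As an illustration of the idea of a generic thesaural markup, the sketch below serialises one term and its relationships as XML; the element names are invented for the example and do not reproduce the actual TML schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical, invented element names -- not the actual TML schema.
term = ET.Element("term", {"id": "t42"})
ET.SubElement(term, "preferred").text = "Estuaries"
ET.SubElement(term, "broader", {"ref": "t40"})
ET.SubElement(term, "related", {"ref": "t77"})
ET.SubElement(term, "scopeNote").text = "Tidal river mouths."

print(ET.tostring(term, encoding="unicode"))
```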
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  9. Tudhope, D.; Alani, H.; Jones, C.: Augmenting thesaurus relationships : possibilities for retrieval (2001) 0.00
    0.0025801647 = product of:
      0.019351235 = sum of:
        0.0058533465 = weight(_text_:und in 1520) [ClassicSimilarity], result of:
          0.0058533465 = score(doc=1520,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.12243814 = fieldWeight in 1520, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1520)
        0.006236001 = weight(_text_:in in 1520) [ClassicSimilarity], result of:
          0.006236001 = score(doc=1520,freq=16.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.21253976 = fieldWeight in 1520, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1520)
        0.0058533465 = weight(_text_:und in 1520) [ClassicSimilarity], result of:
          0.0058533465 = score(doc=1520,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.12243814 = fieldWeight in 1520, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1520)
        0.0014085418 = weight(_text_:s in 1520) [ClassicSimilarity], result of:
          0.0014085418 = score(doc=1520,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.060061958 = fieldWeight in 1520, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1520)
      0.13333334 = coord(4/30)
    
    Abstract
    This paper discusses issues concerning the augmentation of thesaurus relationships, in light of new application possibilities for retrieval. We first discuss a case study that explored the retrieval potential of an augmented set of thesaurus relationships by specialising standard relationships into richer subtypes, in particular hierarchical geographical containment and the associative relationship. We then locate this work in a broader context by reviewing various attempts to build taxonomies of thesaurus relationships, and conclude by discussing the feasibility of hierarchically augmenting the core set of thesaurus relationships, particularly the associative relationship. We discuss the possibility of enriching the specification and semantics of Related Term (RT) relationships, while maintaining compatibility with traditional thesauri via a limited hierarchical extension of the associative (and hierarchical) relationships. This would be facilitated by distinguishing the type of term from the (sub)type of relationship and explicitly specifying semantic categories for terms following a faceted approach. We first illustrate how hierarchical spatial relationships can be used to provide more flexible retrieval for queries incorporating place names in applications employing online gazetteers and geographical thesauri. We then employ a set of experimental scenarios to investigate key issues affecting use of the associative (RT) thesaurus relationships in semantic distance measures. Previous work has noted the potential of RTs in thesaurus search aids but also the problem of uncontrolled expansion of query term sets. Results presented in this paper suggest the potential for taking account of the hierarchical context of an RT link and specialisations of the RT relationship.
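    One way to operationalise such augmented relationships in retrieval is to weight traversal costs differently per relationship type and take the cheapest path as a semantic distance. The sketch below does this with arbitrary example weights; it is a generic illustration, not the measure evaluated in the paper.

```python
import heapq

# Generic illustration: relation weights are arbitrary assumptions.
WEIGHTS = {"BT": 1.0, "NT": 1.0, "RT": 1.5}

# term -> list of (neighbouring term, relationship type)
links = {
    "wetlands":    [("estuaries", "NT"), ("ecology", "RT")],
    "estuaries":   [("wetlands", "BT"), ("tidal flats", "RT")],
    "tidal flats": [("estuaries", "RT")],
    "ecology":     [("wetlands", "RT")],
}

def semantic_distance(start: str, goal: str) -> float:
    """Cheapest traversal cost between two terms (Dijkstra)."""
    queue, seen = [(0.0, start)], set()
    while queue:
        cost, term = heapq.heappop(queue)
        if term == goal:
            return cost
        if term in seen:
            continue
        seen.add(term)
        for neighbour, rel in links.get(term, []):
            heapq.heappush(queue, (cost + WEIGHTS[rel], neighbour))
    return float("inf")

print(semantic_distance("wetlands", "tidal flats"))   # 1.0 (NT) + 1.5 (RT) = 2.5
```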
    Pages
    21 S
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
    Konzeption und Anwendung des Prinzips Thesaurus
  10. Eckert, K.: ¬The ICE-map visualization (2011) 0.00
    0.0024840718 = product of:
      0.024840716 = sum of:
        0.009365354 = weight(_text_:und in 4743) [ClassicSimilarity], result of:
          0.009365354 = score(doc=4743,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.19590102 = fieldWeight in 4743, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=4743)
        0.006110009 = weight(_text_:in in 4743) [ClassicSimilarity], result of:
          0.006110009 = score(doc=4743,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.2082456 = fieldWeight in 4743, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=4743)
        0.009365354 = weight(_text_:und in 4743) [ClassicSimilarity], result of:
          0.009365354 = score(doc=4743,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.19590102 = fieldWeight in 4743, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=4743)
      0.1 = coord(3/30)
    
    Abstract
    In this paper, we describe in detail the Information Content Evaluation Map (ICE-Map Visualization, formerly referred to as IC Difference Analysis). The ICE-Map Visualization is a visual data mining approach for all kinds of concept hierarchies that uses statistics about concept usage to help a user in the evaluation and maintenance of the hierarchy. It consists of a statistical framework that employs the notion of information content from information theory, as well as a visualization of the hierarchy and the result of the statistical analysis by means of a treemap.
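    The statistical side of the approach rests on corpus-based information content. A simplified sketch, assuming p(c) is estimated from the usage counts of a concept's subtree and IC(c) = -log p(c); the exact statistic and normalisation used by the ICE-Map framework may differ.

```python
import math

# Simplified illustration of corpus-based information content over a concept
# hierarchy; counts and concept names are invented.
children = {"topic": ["thesauri", "ontologies"], "thesauri": [], "ontologies": []}
usage    = {"topic": 5, "thesauri": 40, "ontologies": 15}   # e.g. indexing counts

def subtree_count(concept: str) -> int:
    return usage[concept] + sum(subtree_count(c) for c in children[concept])

total = subtree_count("topic")

def information_content(concept: str) -> float:
    """IC(c) = -log p(c), with p(c) estimated from the concept's subtree usage."""
    return -math.log(subtree_count(concept) / total)

for c in usage:
    print(c, round(information_content(c), 3))   # topic 0.0, thesauri 0.405, ontologies 1.386
```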
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  11. Martínez-González, M.M.; Alvite-Díez, M.L.: Thesauri and Semantic Web : discussion of the evolution of thesauri toward their integration with the Semantic Web (2019) 0.00
    0.0022578656 = product of:
      0.01693399 = sum of:
        0.0058533465 = weight(_text_:und in 5997) [ClassicSimilarity], result of:
          0.0058533465 = score(doc=5997,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.12243814 = fieldWeight in 5997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
        0.0038187557 = weight(_text_:in in 5997) [ClassicSimilarity], result of:
          0.0038187557 = score(doc=5997,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.1301535 = fieldWeight in 5997, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
        0.0058533465 = weight(_text_:und in 5997) [ClassicSimilarity], result of:
          0.0058533465 = score(doc=5997,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.12243814 = fieldWeight in 5997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
        0.0014085418 = weight(_text_:s in 5997) [ClassicSimilarity], result of:
          0.0014085418 = score(doc=5997,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.060061958 = fieldWeight in 5997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
      0.13333334 = coord(4/30)
    
    Abstract
    Thesauri are Knowledge Organization Systems (KOS), that arise from the consensus of wide communities. They have been in use for many years and are regularly updated. Whereas in the past thesauri were designed for information professionals for indexing and searching, today there is a demand for conceptual vocabularies that enable inferencing by machines. The development of the Semantic Web has brought a new opportunity for thesauri, but thesauri also face the challenge of proving that they add value to it. The evolution of thesauri toward their integration with the Semantic Web is examined. Elements and structures in the thesaurus standard, ISO 25964, and SKOS (Simple Knowledge Organization System), the Semantic Web standard for representing KOS, are reviewed and compared. Moreover, the integrity rules of thesauri are contrasted with the axioms of SKOS. How SKOS has been applied to represent some real thesauri is taken into account. Three thesauri are chosen for this aim: AGROVOC, EuroVoc and the UNESCO Thesaurus. Based on the results of this comparison and analysis, the benefits that Semantic Web technologies offer to thesauri, how thesauri can contribute to the Semantic Web, and the challenges that would help to improve their integration with the Semantic Web are discussed.
    Source
    IEEE Access. 7(2019) no.153, S.151-170
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  12. Jing, Y.; Croft, W.B.: ¬An association thesaurus for information retrieval (199?) 0.00
    0.0022562696 = product of:
      0.022562696 = sum of:
        0.008194685 = weight(_text_:und in 4494) [ClassicSimilarity], result of:
          0.008194685 = score(doc=4494,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.17141339 = fieldWeight in 4494, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4494)
        0.0061733257 = weight(_text_:in in 4494) [ClassicSimilarity], result of:
          0.0061733257 = score(doc=4494,freq=8.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.21040362 = fieldWeight in 4494, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4494)
        0.008194685 = weight(_text_:und in 4494) [ClassicSimilarity], result of:
          0.008194685 = score(doc=4494,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.17141339 = fieldWeight in 4494, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4494)
      0.1 = coord(3/30)
    
    Abstract
    Although commonly used in both commercial and experimental information retrieval systems, thesauri have not demonstrated consistent benefits for retrieval performance, and it is difficult to construct a thesaurus automatically for large text databases. In this paper, an approach, called PhraseFinder, is proposed to construct collection-dependent association thesauri automatically using large full-text document collections. The association thesaurus can be accessed through natural language queries in INQUERY, an information retrieval system based on the probabilistic inference network. Experiments are conducted in INQUERY to evaluate different types of association thesauri, and thesauri constructed for a variety of collections
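    The general idea of a collection-dependent association thesaurus can be illustrated with simple co-occurrence counts, as in the toy sketch below; PhraseFinder itself works on extracted phrases and uses different association statistics, so this only conveys the flavour of the approach.

```python
from collections import Counter
from itertools import combinations

# Toy collection: each document reduced to a set of terms (invented data).
documents = [
    {"thesaurus", "retrieval", "query"},
    {"thesaurus", "indexing"},
    {"retrieval", "query", "ranking"},
]

cooccur = Counter()
for terms in documents:
    for a, b in combinations(sorted(terms), 2):
        cooccur[(a, b)] += 1

def associations(term: str, k: int = 3):
    """Most frequently co-occurring terms for a query term."""
    scored = [(pair[1] if pair[0] == term else pair[0], n)
              for pair, n in cooccur.items() if term in pair]
    return sorted(scored, key=lambda x: -x[1])[:k]

print(associations("thesaurus"))   # [('query', 1), ('retrieval', 1), ('indexing', 1)]
```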
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  13. Gladun, A.; Rogushina, J.: Development of domain thesaurus as a set of ontology concepts with use of semantic similarity and elements of combinatorial optimization (2021) 0.00
    0.0022562696 = product of:
      0.022562696 = sum of:
        0.008194685 = weight(_text_:und in 572) [ClassicSimilarity], result of:
          0.008194685 = score(doc=572,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.17141339 = fieldWeight in 572, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=572)
        0.0061733257 = weight(_text_:in in 572) [ClassicSimilarity], result of:
          0.0061733257 = score(doc=572,freq=8.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.21040362 = fieldWeight in 572, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=572)
        0.008194685 = weight(_text_:und in 572) [ClassicSimilarity], result of:
          0.008194685 = score(doc=572,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.17141339 = fieldWeight in 572, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=572)
      0.1 = coord(3/30)
    
    Abstract
    We consider the use of ontological background knowledge in intelligent information systems and analyse how it can be reduced to match the specifics of a particular user task. Such reduction aims to simplify knowledge processing without losing significant information. We propose methods for generating task thesauri from a domain ontology that contain the subset of ontological concepts and relations usable in solving the task. Combinatorial optimization is used to minimize the task thesaurus. In this approach, semantic similarity estimates are used to determine the significance of a concept for the user task. Practical examples of applying optimized thesauri to semantic retrieval and competence analysis demonstrate the efficiency of the proposed approach.
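    A minimal sketch of the reduction idea, assuming similarity scores between ontology concepts and the task are already available: keep the most significant concepts under a size budget. The paper's combinatorial-optimization formulation is more involved; the scores and budget below are invented for the example.

```python
# Made-up similarity estimates between ontology concepts and the user task.
similarity_to_task = {
    "thesaurus": 0.9, "descriptor": 0.7, "ontology": 0.6,
    "treemap": 0.1, "gazetteer": 0.2,
}

def task_thesaurus(scores: dict, budget: int) -> list:
    """Keep the `budget` concepts with the highest similarity to the task."""
    return sorted(scores, key=scores.get, reverse=True)[:budget]

print(task_thesaurus(similarity_to_task, budget=3))
# ['thesaurus', 'descriptor', 'ontology']
```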
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  14. Will, L.D.: Publications on thesaurus construction and use : including some references to facet analysis, taxonomies, ontologies, topic maps and related issues (2005) 0.00
    0.0021852495 = product of:
      0.03277874 = sum of:
        0.01638937 = weight(_text_:und in 3192) [ClassicSimilarity], result of:
          0.01638937 = score(doc=3192,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.34282678 = fieldWeight in 3192, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.109375 = fieldNorm(doc=3192)
        0.01638937 = weight(_text_:und in 3192) [ClassicSimilarity], result of:
          0.01638937 = score(doc=3192,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.34282678 = fieldWeight in 3192, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.109375 = fieldNorm(doc=3192)
      0.06666667 = coord(2/30)
    
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  15. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.00
    0.0021044814 = product of:
      0.01578361 = sum of:
        0.004682677 = weight(_text_:und in 4639) [ClassicSimilarity], result of:
          0.004682677 = score(doc=4639,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.09795051 = fieldWeight in 4639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.0052914224 = weight(_text_:in in 4639) [ClassicSimilarity], result of:
          0.0052914224 = score(doc=4639,freq=18.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.18034597 = fieldWeight in 4639, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.004682677 = weight(_text_:und in 4639) [ClassicSimilarity], result of:
          0.004682677 = score(doc=4639,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.09795051 = fieldWeight in 4639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.0011268335 = weight(_text_:s in 4639) [ClassicSimilarity], result of:
          0.0011268335 = score(doc=4639,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.048049565 = fieldWeight in 4639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
      0.13333334 = coord(4/30)
    
    Abstract
    This thesis focuses on conversion of vocabularies for representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability is dependent on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
    We refine the problem statement into three research questions. The first two focus on the problem of conversion of a vocabulary to a Semantic Web representation from its original format. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
    Pages
    IV, 186 S
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  16. Thesaurus software (2001) 0.00
    0.0020754572 = product of:
      0.020754572 = sum of:
        0.008194685 = weight(_text_:und in 6773) [ClassicSimilarity], result of:
          0.008194685 = score(doc=6773,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.17141339 = fieldWeight in 6773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6773)
        0.004365201 = weight(_text_:in in 6773) [ClassicSimilarity], result of:
          0.004365201 = score(doc=6773,freq=4.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.14877784 = fieldWeight in 6773, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6773)
        0.008194685 = weight(_text_:und in 6773) [ClassicSimilarity], result of:
          0.008194685 = score(doc=6773,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.17141339 = fieldWeight in 6773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6773)
      0.1 = coord(3/30)
    
    Abstract
    Members offer comments and suggest resources on programs for creating, maintaining, and publishing thesauri. Formerly a tool for writers and indexers, the thesaurus has taken on a new role as an essential component of the corporate information infrastructure. Many people are using word processor or database programs to create and maintain thesauri, while others are using specialized tools that perform consistency checks and offer special reporting capabilities. Some also use thesaurus modules integrated into another application, such as web publishing, content management, or e-commerce. This article includes material from our own experience, email responses from members, and comments from participants in our seminars and roundtables. There's also an introduction to thesauri in a corporate information management system.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  17. Quick Guide to Publishing a Thesaurus on the Semantic Web (2008) 0.00
    0.0020754572 = product of:
      0.020754572 = sum of:
        0.008194685 = weight(_text_:und in 4656) [ClassicSimilarity], result of:
          0.008194685 = score(doc=4656,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.17141339 = fieldWeight in 4656, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4656)
        0.004365201 = weight(_text_:in in 4656) [ClassicSimilarity], result of:
          0.004365201 = score(doc=4656,freq=4.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.14877784 = fieldWeight in 4656, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4656)
        0.008194685 = weight(_text_:und in 4656) [ClassicSimilarity], result of:
          0.008194685 = score(doc=4656,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.17141339 = fieldWeight in 4656, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4656)
      0.1 = coord(3/30)
    
    Abstract
    This document describes in brief how to express the content and structure of a thesaurus, and metadata about a thesaurus, in RDF. Using RDF allows data to be linked to and/or merged with other RDF data by semantic web applications. The Semantic Web, which is based on the Resource Description Framework (RDF), provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries.
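    A minimal sketch of what such an RDF publication can look like with rdflib, covering both metadata about the thesaurus (as a skos:ConceptScheme) and one concept linked to it; the URIs and literals are invented for the example.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, RDF, SKOS

EX = Namespace("http://example.org/thesaurus/")
g = Graph()
g.bind("skos", SKOS)
g.bind("dcterms", DCTERMS)

# Metadata about the thesaurus itself, modelled as a concept scheme.
scheme = EX["scheme"]
g.add((scheme, RDF.type, SKOS.ConceptScheme))
g.add((scheme, DCTERMS.title, Literal("Example Thesaurus", lang="en")))
g.add((scheme, DCTERMS.creator, Literal("Example Agency")))

# One concept, attached to the scheme.
concept = EX["t42"]
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal("Estuaries", lang="en")))
g.add((concept, SKOS.inScheme, scheme))

print(g.serialize(format="turtle"))
```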
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  18. Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010) 0.00
    0.0019476034 = product of:
      0.019476034 = sum of:
        0.008194685 = weight(_text_:und in 604) [ClassicSimilarity], result of:
          0.008194685 = score(doc=604,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.17141339 = fieldWeight in 604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=604)
        0.0030866629 = weight(_text_:in in 604) [ClassicSimilarity], result of:
          0.0030866629 = score(doc=604,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.10520181 = fieldWeight in 604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=604)
        0.008194685 = weight(_text_:und in 604) [ClassicSimilarity], result of:
          0.008194685 = score(doc=604,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.17141339 = fieldWeight in 604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=604)
      0.1 = coord(3/30)
    
    Abstract
    iQvoc is a new open source SKOS-XL vocabulary management tool developed by the Federal Environment Agency, Germany, and innoQ Deutschland GmbH. Its immediate purpose is maintaining and publishing reference vocabularies in the upcoming Linked Data cloud of environmental information, but it may be easily adapted to host any SKOS-XL compliant vocabulary. iQvoc is implemented as a Ruby on Rails application running on top of JRuby - the Java implementation of the Ruby Programming Language. To improve the user experience when editing content, iQvoc makes heavy use of the JavaScript library jQuery.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  19. Dextre Clarke, S.G.: Overview of ISO NP 25964 : structured vocabularies for information retrieval (2007) 0.00
    0.0019339453 = product of:
      0.019339453 = sum of:
        0.0070240153 = weight(_text_:und in 535) [ClassicSimilarity], result of:
          0.0070240153 = score(doc=535,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.14692576 = fieldWeight in 535, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=535)
        0.0052914224 = weight(_text_:in in 535) [ClassicSimilarity], result of:
          0.0052914224 = score(doc=535,freq=8.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.18034597 = fieldWeight in 535, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=535)
        0.0070240153 = weight(_text_:und in 535) [ClassicSimilarity], result of:
          0.0070240153 = score(doc=535,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.14692576 = fieldWeight in 535, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=535)
      0.1 = coord(3/30)
    
    Abstract
    ISO 2788 and ISO 5964, the international standards for monolingual and multilingual thesauri respectively dated 1986 and 1985, are very much in need of revision. A proposal to revise them was recently approved by the relevant subcommittee, ISO TC46/SC9. The work will be based on BS 8723, a five part standard of which Parts 1 and 2 were published in 2005, Parts 3 and 4 are scheduled for publication in 2007, and Part 5 is still in draft. This subsession will address aspects of the whole revision project. It is conceived as a panel session starting with a brief overview from the project leader. Then there are three presentations of 15 minutes, plus 5 minutes each for specific questions. At the end we have 20 minutes for questions to any or all of the panel, and discussion of issues from the workshop participants.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  20. Assem, M. van; Malaisé, V.; Miles, A.; Schreiber, G.: ¬A method to convert thesauri to SKOS (2006) 0.00
    0.0017789633 = product of:
      0.017789632 = sum of:
        0.0070240153 = weight(_text_:und in 4642) [ClassicSimilarity], result of:
          0.0070240153 = score(doc=4642,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.14692576 = fieldWeight in 4642, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=4642)
        0.003741601 = weight(_text_:in in 4642) [ClassicSimilarity], result of:
          0.003741601 = score(doc=4642,freq=4.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.12752387 = fieldWeight in 4642, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4642)
        0.0070240153 = weight(_text_:und in 4642) [ClassicSimilarity], result of:
          0.0070240153 = score(doc=4642,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.14692576 = fieldWeight in 4642, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=4642)
      0.1 = coord(3/30)
    
    Abstract
    Thesauri can be useful resources for indexing and retrieval on the Semantic Web, but often they are not published in RDF/OWL. To convert thesauri to RDF for use in Semantic Web applications, and to ensure the quality and utility of the conversion, a structured method is required. Moreover, if different thesauri are to be interoperable without complicated mappings, a standard schema for thesauri is required. This paper presents a method for conversion of thesauri to the SKOS RDF/OWL schema, which is a proposal for such a standard under development by the W3C's Semantic Web Best Practices Working Group. We apply the method to three thesauri: IPSV, GTAA and MeSH. With these case studies we evaluate our method and the applicability of SKOS for representing thesauri.
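    The sketch below shows the conventional relation mapping such a conversion rests on (BT to skos:broader, NT to skos:narrower, RT to skos:related, UF to skos:altLabel), applied to an invented sample record; the paper's method adds further analysis and vocabulary-specific decisions on top of this.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

# Conventional thesaurus-to-SKOS mapping; sample record is invented.
RELATION_MAP = {"BT": SKOS.broader, "NT": SKOS.narrower, "RT": SKOS.related}

EX = Namespace("http://example.org/thesaurus/")
g = Graph()
g.bind("skos", SKOS)

record = {"id": "t42", "PT": "Estuaries", "UF": ["River mouths"],
          "BT": ["t40"], "RT": ["t77"]}

concept = EX[record["id"]]
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal(record["PT"], lang="en")))
for label in record["UF"]:                       # non-preferred terms
    g.add((concept, SKOS.altLabel, Literal(label, lang="en")))
for rel, prop in RELATION_MAP.items():           # hierarchical / associative links
    for target in record.get(rel, []):
        g.add((concept, prop, EX[target]))

print(g.serialize(format="turtle"))
```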
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus