Search (3 results, page 1 of 1)

  • author_ss:"Wang, S."
  • theme_ss:"Semantische Interoperabilität"
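  The two active filters above are fielded restrictions on the underlying search index. As a rough reconstruction of the request behind this result list (assuming a Solr-style backend in which author_ss and theme_ss are string fields; the host, core name and parameter values are placeholders, not this site's actual endpoint), a minimal sketch in Python:

      # Illustrative reconstruction of the faceted query; endpoint and core name are made up.
      from urllib.parse import urlencode

      params = {
          "q": "*:*",                                          # match everything ...
          "fq": ['author_ss:"Wang, S."',                       # ... then filter by author
                 'theme_ss:"Semantische Interoperabilität"'],  # ... and by theme facet
          "rows": 10,
          "wt": "json",
      }
      # doseq=True expands the list into two separate fq parameters,
      # which is how Solr expects multiple filter queries.
      print("http://localhost:8983/solr/literature/select?" + urlencode(params, doseq=True))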
  1. Hollink, L.; Assem, M. van; Wang, S.; Isaac, A.; Schreiber, G.: Two variations on ontology alignment evaluation : methodological issues (2008)
    Abstract
    Evaluation of ontology alignments is in practice done in two ways: (1) assessing individual correspondences and (2) comparing the alignment to a reference alignment. However, this type of evaluation does not guarantee that an application that uses the alignment will perform well. In this paper, we contribute to current ontology alignment evaluation practice by proposing two alternative evaluation methods that take into account characteristics of a usage scenario without requiring a full-fledged end-to-end evaluation. We compare the different evaluation approaches in three case studies, focusing on methodological issues. Each case study considers an alignment between a different pair of ontologies, ranging from rich and well-structured to small and poorly structured. This enables us to draw conclusions about the use of different evaluation approaches in different settings.
    Date
    29.07.2011 14:44:56
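    The reference-based evaluation mentioned in the abstract above (approach 2) usually reduces to precision and recall over sets of correspondences. A minimal sketch of that idea (the pair representation and the toy data are illustrative; this shows the standard measure, not the authors' specific procedure):

      # Precision/recall/F1 of a candidate alignment against a reference alignment.
      # Correspondences are reduced to (source_concept, target_concept) pairs;
      # relation types and confidence values are ignored for simplicity.

      def evaluate_alignment(candidate: set[tuple[str, str]],
                             reference: set[tuple[str, str]]) -> dict[str, float]:
          correct = candidate & reference
          precision = len(correct) / len(candidate) if candidate else 0.0
          recall = len(correct) / len(reference) if reference else 0.0
          f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
          return {"precision": precision, "recall": recall, "f1": f1}

      # Toy data, purely illustrative.
      candidate = {("ex1:Cat", "ex2:Felines"), ("ex1:Dog", "ex2:Canines")}
      reference = {("ex1:Cat", "ex2:Felines"), ("ex1:Bird", "ex2:Aves")}
      print(evaluate_alignment(candidate, reference))   # precision 0.5, recall 0.5, f1 0.5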
  2. Wang, S.; Isaac, A.; Schopman, B.; Schlobach, S.; Meij, L. van der: Matching multilingual subject vocabularies (2009)
    Abstract
    Most libraries and other cultural heritage institutions use controlled knowledge organisation systems, such as thesauri, to describe their collections. Unfortunately, because most of these institutions use different systems, unified access to heterogeneous collections is difficult. Things are even worse in an international context, where concepts carry labels in different languages. To overcome the multilingual interoperability problem among European libraries, extensive work has been done to manually map concepts between knowledge organisation systems, which is a tedious and expensive process. Within the TELplus project, we developed and evaluated methods to discover these mappings automatically, using different ontology matching techniques. In experiments on the major French, English and German subject heading lists Rameau, LCSH and SWD, we show that we can automatically produce mappings of surprisingly good quality, even with relatively naive translation and matching methods.
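    The abstract describes a translate-then-match pipeline over subject heading lists. A deliberately naive sketch of that idea (concept identifiers, labels and the translation table are placeholders, not data or code from the TELplus project):

      # Naive label matching between two subject vocabularies after translating labels
      # into a common language. The translation step is stubbed with a tiny dictionary;
      # a real pipeline would use a translation service or a multilingual lexicon.

      SWD_SAMPLE = {"swd:0001": "Semantische Interoperabilität", "swd:0002": "Katalogisierung"}
      LCSH_SAMPLE = {"lcsh:A1": "Semantic interoperability", "lcsh:A2": "Cataloging"}

      TRANSLATIONS = {"semantische interoperabilität": "semantic interoperability",
                      "katalogisierung": "cataloging"}          # illustrative only

      def normalize(label: str) -> str:
          return " ".join(label.lower().split())

      def translate(label: str) -> str:
          return TRANSLATIONS.get(normalize(label), normalize(label))

      def match(source: dict[str, str], target: dict[str, str]) -> list[tuple[str, str]]:
          # Index the target by normalized label, then look up translated source labels.
          # Exact string equality only - hence "relatively naive".
          index = {normalize(lbl): cid for cid, lbl in target.items()}
          return [(sid, index[translate(lbl)]) for sid, lbl in source.items()
                  if translate(lbl) in index]

      print(match(SWD_SAMPLE, LCSH_SAMPLE))
      # [('swd:0001', 'lcsh:A1'), ('swd:0002', 'lcsh:A2')]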
  3. Wang, S.; Isaac, A.; Schlobach, S.; Meij, L. van der; Schopman, B.: Instance-based semantic interoperability in the cultural heritage (2012)
    Source
    Semantic Web journal. 3(2012) no.1, pp.45-64
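    The title of this last result refers to instance-based matching: instead of comparing labels, concepts from two vocabularies are related through the overlap of the objects (for example, book records) indexed with them. A rough sketch under that reading (Jaccard overlap and the 0.5 threshold are illustrative choices, not necessarily the measure used in the paper):

      # Instance-based concept matching: two concepts are considered a candidate match
      # when the sets of objects annotated with them overlap strongly.

      def jaccard(a: set[str], b: set[str]) -> float:
          return len(a & b) / len(a | b) if a | b else 0.0

      def instance_based_matches(voc1: dict[str, set[str]], voc2: dict[str, set[str]],
                                 threshold: float = 0.5) -> list[tuple[str, str, float]]:
          return [(c1, c2, jaccard(i1, i2))
                  for c1, i1 in voc1.items()
                  for c2, i2 in voc2.items()
                  if jaccard(i1, i2) >= threshold]

      # Toy dually indexed collection: book IDs carrying concepts from both vocabularies.
      voc1 = {"v1:music": {"b1", "b2", "b3"}, "v1:history": {"b4", "b5"}}
      voc2 = {"v2:musik": {"b1", "b2", "b3", "b6"}, "v2:geschichte": {"b4", "b7"}}
      print(instance_based_matches(voc1, voc2))
      # [('v1:music', 'v2:musik', 0.75)]  - the history pair stays below the threshold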