Search (4 results, page 1 of 1)

  • author_ss:"Assem, M. van"
  • type_ss:"el"
  1. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.05
    0.046775818 = product of:
      0.093551636 = sum of:
        0.093551636 = sum of:
          0.048973244 = weight(_text_:work in 4649) [ClassicSimilarity], result of:
            0.048973244 = score(doc=4649,freq=2.0), product of:
              0.20127523 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.054837555 = queryNorm
              0.2433148 = fieldWeight in 4649, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=4649)
          0.04457839 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
            0.04457839 = score(doc=4649,freq=2.0), product of:
              0.19203177 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.054837555 = queryNorm
              0.23214069 = fieldWeight in 4649, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4649)
      0.5 = coord(1/2)
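     The breakdown above is Lucene's ClassicSimilarity (tf-idf) explain output: each matching term contributes queryWeight (idf * queryNorm) times fieldWeight (tf * idf * fieldNorm), and the sum is scaled by the coordination factor. A minimal Python sketch of the same arithmetic, with the numbers copied from the tree (function and variable names are ours, not Lucene API calls):

       import math

       def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
           # ClassicSimilarity components as shown in the explain tree
           idf = 1.0 + math.log(max_docs / (doc_freq + 1))
           tf = math.sqrt(freq)
           query_weight = idf * query_norm
           field_weight = tf * idf * field_norm
           return query_weight * field_weight

       query_norm = 0.054837555
       w_work = term_weight(2.0, 3060, 44218, query_norm, 0.046875)  # ~0.048973
       w_22   = term_weight(2.0, 3622, 44218, query_norm, 0.046875)  # ~0.044578
       coord  = 1 / 2  # only 1 of 2 query clauses matched
       print((w_work + w_22) * coord)  # ~0.046776, the document score above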
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
    Date
    26.12.2011 13:40:22
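     A note on the Web-based measures named in the abstract above: both pointwise mutual information (PMI) and the (normalized) Google distance are computed from search-engine hit counts. A minimal sketch under their usual definitions; the hit counts below are made up for illustration and the function names are ours:

       import math

       def pmi(hits_xy, hits_x, hits_y, total_pages):
           # PMI(x, y) = log( P(x, y) / (P(x) * P(y)) )
           p_xy = hits_xy / total_pages
           p_x, p_y = hits_x / total_pages, hits_y / total_pages
           return math.log(p_xy / (p_x * p_y))

       def ngd(hits_xy, hits_x, hits_y, total_pages):
           # NGD(x, y) = (max(log f(x), log f(y)) - log f(x, y))
           #             / (log N - min(log f(x), log f(y)))
           lx, ly, lxy, ln_n = (math.log(v) for v in (hits_x, hits_y, hits_xy, total_pages))
           return (max(lx, ly) - lxy) / (ln_n - min(lx, ly))

       # Terms that co-occur often relative to their individual frequencies
       # come out as semantically close (high PMI, low NGD).
       print(pmi(hits_xy=5000, hits_x=80000, hits_y=60000, total_pages=10**10))
       print(ngd(hits_xy=5000, hits_x=80000, hits_y=60000, total_pages=10**10))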
  2. Assem, M. van; Gangemi, A.; Schreiber, G.: Conversion of WordNet to a standard RDF/OWL representation (2006) 0.01
    0.012243311 = product of:
      0.024486622 = sum of:
        0.024486622 = product of:
          0.048973244 = sum of:
            0.048973244 = weight(_text_:work in 4641) [ClassicSimilarity], result of:
              0.048973244 = score(doc=4641,freq=2.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.2433148 = fieldWeight in 4641, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4641)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     This paper presents an overview of the work in progress at the W3C to produce a standard conversion of WordNet to the RDF/OWL representation language in use in the Semantic Web community. Such a standard representation is useful to provide application developers with a high-quality resource and to promote interoperability. Important requirements in this conversion process are that it should be complete and should stay close to WordNet's conceptual model. The paper explains the steps taken to produce the conversion and details design decisions such as the composition of the class hierarchy and properties, the addition of suitable OWL semantics and the chosen format of the URIs. Additional topics include a strategy to incorporate OWL and RDFS semantics in one schema such that both RDF(S) infrastructure and OWL infrastructure can interpret the information correctly, problems encountered in understanding the Prolog source files, and the description of the two versions that are provided (Basic and Full) to accommodate different usages of WordNet.
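     For a sense of what such a conversion produces, the sketch below builds a few triples for one synset with rdflib. The namespace and term names follow the general shape of the W3C WordNet-in-RDF schema (synsets, word senses and words linked by dedicated properties) but should be read as illustrative, not as the exact vocabulary of the published conversion:

       from rdflib import Graph, Namespace, Literal
       from rdflib.namespace import RDF

       WN_SCHEMA = Namespace("http://www.w3.org/2006/03/wn/wn20/schema/")
       WN_INST   = Namespace("http://www.w3.org/2006/03/wn/wn20/instances/")

       g = Graph()
       synset = WN_INST["synset-dog-noun-1"]
       sense  = WN_INST["wordsense-dog-noun-1"]
       word   = WN_INST["word-dog"]

       g.add((synset, RDF.type, WN_SCHEMA.NounSynset))       # synset class
       g.add((synset, WN_SCHEMA.containsWordSense, sense))   # synset -> sense
       g.add((sense,  WN_SCHEMA.word, word))                 # sense -> word
       g.add((word,   WN_SCHEMA.lexicalForm, Literal("dog", lang="en")))

       print(g.serialize(format="turtle"))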
  3. Schreiber, G.; Amin, A.; Assem, M. van; Boer, V. de; Hardman, L.; Hildebrand, M.; Hollink, L.; Huang, Z.; Kersen, J. van; Niet, M. de; Omelayenko, B.; Ossenbruggen, J. van; Siebes, R.; Taekema, J.; Wielemaker, J.; Wielinga, B.: MultimediaN E-Culture demonstrator (2006) 0.01
    0.012243311 = product of:
      0.024486622 = sum of:
        0.024486622 = product of:
          0.048973244 = sum of:
            0.048973244 = weight(_text_:work in 4648) [ClassicSimilarity], result of:
              0.048973244 = score(doc=4648,freq=2.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.2433148 = fieldWeight in 4648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4648)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     The main objective of the MultimediaN E-Culture project is to demonstrate how novel semantic-web and presentation technologies can be deployed to provide better indexing and search support within large virtual collections of cultural heritage resources. The architecture is fully based on open web standards, in particular XML, SVG, RDF/OWL and SPARQL. One basic hypothesis underlying this work is that the use of explicit background knowledge in the form of ontologies/vocabularies/thesauri is particularly useful for information retrieval in knowledge-rich domains. This paper gives some details about the internals of the demonstrator.
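     As an illustration of the kind of semantic search such an architecture supports, the sketch below runs a SPARQL query with rdflib against a hypothetical sample of collection metadata, using background vocabulary (SKOS concept labels) to match artworks to a query term. The file name, properties and data are assumptions made for the sketch, not details of the demonstrator itself:

       from rdflib import Graph

       g = Graph()
       g.parse("collection-sample.ttl", format="turtle")  # assumed local sample data

       QUERY = """
       PREFIX dc:   <http://purl.org/dc/elements/1.1/>
       PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

       SELECT ?work ?title WHERE {
         ?work dc:title ?title ;
               dc:subject ?concept .
         ?concept skos:prefLabel ?label .
         FILTER(CONTAINS(LCASE(STR(?label)), "tulip"))
       }
       """

       for row in g.query(QUERY):
           print(row.work, row.title)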
  4. Assem, M. van; Rijgersberg, H.; Wigham, M.; Top, J.: Converting and annotating quantitative data tables (2010) 0.01
    0.010202759 = product of:
      0.020405518 = sum of:
        0.020405518 = product of:
          0.040811036 = sum of:
            0.040811036 = weight(_text_:work in 4705) [ClassicSimilarity], result of:
              0.040811036 = score(doc=4705,freq=2.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.20276234 = fieldWeight in 4705, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4705)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Companies, governmental agencies and scientists produce a large amount of quantitative (research) data, consisting of measurements ranging from, e.g., the surface temperature of an ocean to the viscosity of a sample of mayonnaise. Such measurements are stored in tables in, e.g., spreadsheet files and research reports. To integrate and reuse such data, it is necessary to have a semantic description of the data. However, the notation used is often ambiguous, making automatic interpretation and conversion to RDF or another suitable format difficult. For example, the table header cell "f(Hz)" refers to frequency measured in Hertz, but the symbol "f" can also refer to the unit farad or the quantities force or luminous flux. Current annotation tools for this task either work on less ambiguous data or perform a more limited task. We introduce new disambiguation strategies based on an ontology, which allow us to improve performance on "sloppy" datasets not yet targeted by existing systems.
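     The "f(Hz)" example suggests the core of the disambiguation idea: the symbol alone is ambiguous, but the unit in parentheses narrows the candidate quantities. A minimal sketch of that idea; the tiny symbol and unit tables stand in for the ontology the paper actually uses, and all names are ours:

       import re

       SYMBOL_CANDIDATES = {
           "f": ["frequency", "force", "luminous flux", "farad (unit)"],
       }
       UNIT_TO_QUANTITY = {
           "Hz": "frequency",
           "N": "force",
           "lm": "luminous flux",
       }

       def annotate_header(cell):
           # Parse a header cell of the form "<symbol>(<unit>)", e.g. "f(Hz)".
           m = re.fullmatch(r"\s*(\w+)\s*\(\s*([^)]+)\s*\)\s*", cell)
           if not m:
               return None
           symbol, unit = m.group(1), m.group(2)
           candidates = SYMBOL_CANDIDATES.get(symbol, [])
           quantity = UNIT_TO_QUANTITY.get(unit)
           # Keep only candidate quantities consistent with the unit.
           resolved = [c for c in candidates if c == quantity]
           return {"symbol": symbol, "unit": unit, "quantity": resolved or candidates}

       print(annotate_header("f(Hz)"))  # quantity resolved to 'frequency'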