Search (5 results, page 1 of 1)

  • author_ss:"Boer, V. de"
  1. Schreiber, G.; Amin, A.; Assem, M. van; Boer, V. de; Hardman, L.; Hildebrand, M.; Hollink, L.; Huang, Z.; Kersen, J. van; Niet, M. de; Omelayenko, B.; Ossenbruggen, J. van; Siebes, R.; Taekema, J.; Wielemaker, J.; Wielinga, B.: MultimediaN E-Culture demonstrator (2006) 0.04
    0.038344346 = product of:
      0.07668869 = sum of:
        0.01029941 = weight(_text_:information in 4648) [ClassicSimilarity], result of:
          0.01029941 = score(doc=4648,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.116372846 = fieldWeight in 4648, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4648)
        0.066389285 = weight(_text_:standards in 4648) [ClassicSimilarity], result of:
          0.066389285 = score(doc=4648,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.29545712 = fieldWeight in 4648, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.046875 = fieldNorm(doc=4648)
      0.5 = coord(2/4)
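    The explain tree above can be reproduced by hand. Below is a minimal sketch of the ClassicSimilarity (TF-IDF) arithmetic, using the constants copied from the tree; the function name `classic_score` is ours for illustration, not a Lucene API:

    ```python
    import math

    def classic_score(freq, idf, query_norm, field_norm):
        """One term's contribution, per the explain tree:
        score = queryWeight * fieldWeight."""
        tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
        query_weight = idf * query_norm       # queryWeight = idf * queryNorm
        field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
        return query_weight * field_weight

    # Constants from the explain tree for doc 4648:
    info = classic_score(2.0, 1.7554779, 0.050415643, 0.046875)  # _text_:information
    std  = classic_score(2.0, 4.4569545, 0.050415643, 0.046875)  # _text_:standards

    # coord(2/4): only 2 of the 4 query terms matched this document,
    # so the summed term scores are scaled by 2/4.
    total = (info + std) * (2 / 4)
    # total reproduces the 0.038344346 shown above, to float precision
    ```

    The same arithmetic with the other idf/fieldNorm constants reproduces every score in this result list.
    
    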
    
    Abstract
    The main objective of the MultimediaN E-Culture project is to demonstrate how novel semantic-web and presentation technologies can be deployed to provide better indexing and search support within large virtual collections of cultural-heritage resources. The architecture is fully based on open web standards, in particular XML, SVG, RDF/OWL and SPARQL. One basic hypothesis underlying this work is that the use of explicit background knowledge in the form of ontologies/vocabularies/thesauri is particularly useful for information retrieval in knowledge-rich domains. This paper gives some details about the internals of the demonstrator.
  2. Hennicke, S.; Olensky, M.; Boer, V. de; Isaac, A.; Wielemaker, J.: ¬A data model for cross-domain data representation : the "Europeana Data Model" in the case of archival and museum data (2010) 0.04
    0.038344346 = product of:
      0.07668869 = sum of:
        0.01029941 = weight(_text_:information in 4664) [ClassicSimilarity], result of:
          0.01029941 = score(doc=4664,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.116372846 = fieldWeight in 4664, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4664)
        0.066389285 = weight(_text_:standards in 4664) [ClassicSimilarity], result of:
          0.066389285 = score(doc=4664,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.29545712 = fieldWeight in 4664, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.046875 = fieldNorm(doc=4664)
      0.5 = coord(2/4)
    
    Abstract
    This paper reports on ongoing work in EuropeanaConnect on converting heterogeneous, cross-domain data to a common data model. The "Europeana Data Model" (EDM) provides the means to accommodate data from different domains while largely preserving the original metadata semantics. We give an introduction to the EDM and demonstrate how important principles of two different metadata standards can be represented in EDM: one from the library domain ("Bibliopolis"), and one from the archive domain based on the "Encoded Archival Description" (EAD) standard. We conclude that the EDM offers a feasible approach to the issue of heterogeneous data interoperability in a digital library environment.
    Source
    Information und Wissen: global, sozial und frei? Proceedings of the 12. Internationales Symposium für Informationswissenschaft (ISI 2011); Hildesheim, 9.-11. März 2011. Eds.: J. Griesbaum, T. Mandl u. C. Womser-Hacker
  3. Schreiber, G.; Amin, A.; Assem, M. van; Boer, V. de; Hardman, L.; Hildebrand, M.; Omelayenko, B.; Ossenbruggen, J. van; Wielemaker, J.; Wielinga, B.; Tordai, A.; Aroyo, L.: Semantic annotation and search of cultural-heritage collections : the MultimediaN E-Culture demonstrator (2008) 0.02
    0.016597321 = product of:
      0.066389285 = sum of:
        0.066389285 = weight(_text_:standards in 4646) [ClassicSimilarity], result of:
          0.066389285 = score(doc=4646,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.29545712 = fieldWeight in 4646, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.046875 = fieldNorm(doc=4646)
      0.25 = coord(1/4)
    
    Abstract
    In this article we describe a Semantic Web application for semantic annotation and search in large virtual collections of cultural-heritage objects, indexed with multiple vocabularies. During the annotation phase we harvest, enrich and align collection metadata and vocabularies. The semantic-search facilities support keyword-based queries of the graph (currently 20M triples), resulting in semantically grouped result clusters, all representing potential semantic matches of the original query. We show two sample search scenarios. The annotation and search software is open source and is already being used by third parties. All software is based on established Web standards, in particular HTML/XML, CSS, RDF/OWL, SPARQL and JavaScript.
  4. Boer, V. de; Porter, A.L.; Someren, M. v.: Extracting historical time periods from the Web (2010) 0.01
    0.006007989 = product of:
      0.024031956 = sum of:
        0.024031956 = weight(_text_:information in 3988) [ClassicSimilarity], result of:
          0.024031956 = score(doc=3988,freq=8.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.27153665 = fieldWeight in 3988, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3988)
      0.25 = coord(1/4)
    
    Abstract
    In this work we present an automatic method for the extraction of time periods related to ontological concepts from the Web. The method consists of two parts: an Information Extraction phase and a Semantic Representation phase. In the Information Extraction phase, temporal information about events associated with the target instance is extracted from Web documents. The resulting distribution is normalized and a model is fitted to it. This distribution is then converted into a Semantic Representation in the second phase. We present the method and describe experiments where time periods for four different types of concepts are extracted and converted to a time representation vocabulary, based on the TIMEX2 annotation standard.
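    The two-phase idea in the abstract can be sketched in a few lines. This is a minimal illustration under our own assumptions, not the paper's actual method: the names `extract_years` and `fit_period` are hypothetical, phase 1 is reduced to a regular-expression scan for year mentions, and phase 2 fits a crude mean-plus-spread model instead of the paper's distribution fitting:

    ```python
    import re
    import statistics

    # Years 1000-2029; a real system would use richer temporal extraction.
    YEAR = re.compile(r"\b(1[0-9]{3}|20[0-2][0-9])\b")

    def extract_years(documents):
        """Phase 1 (sketch): pull candidate year mentions from document text."""
        years = []
        for text in documents:
            years.extend(int(y) for y in YEAR.findall(text))
        return years

    def fit_period(years):
        """Phase 2 (sketch): fit mean +/- std dev and return a (begin, end) period."""
        mu = statistics.mean(years)
        sigma = statistics.pstdev(years)
        return (round(mu - sigma), round(mu + sigma))

    docs = [
        "The Dutch Golden Age is usually dated from 1588 to 1672.",
        "Rembrandt painted the Night Watch in 1642, at the height of the era.",
    ]
    print(fit_period(extract_years(docs)))
    ```

    The resulting (begin, end) pair is what would then be serialized into a time representation vocabulary in the Semantic Representation phase.
    
    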
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.9, S.1888-1908
  5. Estrada, L.M.; Hildebrand, M.; Boer, V. de; Ossenbruggen, J. van: Time-based tags for fiction movies : comparing experts to novices using a video labeling game (2017) 0.00
    0.0021457102 = product of:
      0.008582841 = sum of:
        0.008582841 = weight(_text_:information in 3347) [ClassicSimilarity], result of:
          0.008582841 = score(doc=3347,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 3347, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3347)
      0.25 = coord(1/4)
    
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.2, S.348-364