Search (6 results, page 1 of 1)

  • author_ss:"Boer, V. de"
  1. Boer, V. de; Porter, A.L.; Someren, M. v.: Extracting historical time periods from the Web (2010) 0.01
    Abstract
     In this work we present an automatic method for the extraction of time periods related to ontological concepts from the Web. The method consists of two parts: an Information Extraction phase and a Semantic Representation phase. In the Information Extraction phase, temporal information about events associated with the target instance is extracted from Web documents. The resulting distribution is normalized and a model is fitted to it. In the second phase, this distribution is converted into a Semantic Representation. We present the method and describe experiments in which time periods for four different types of concepts are extracted and converted to a time representation vocabulary based on the TIMEX2 annotation standard.
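     The two-phase method this abstract describes can be pictured as a minimal pipeline. The sketch below is our own simplifying illustration, not the authors' implementation: the regular expression, the helper names, and the mean-plus/minus-one-standard-deviation period model are all assumptions.

```python
import re
import statistics

def extract_years(documents):
    # Information Extraction phase (illustrative): collect four-digit
    # years mentioned in Web documents about the target instance.
    years = []
    for doc in documents:
        years.extend(int(y) for y in re.findall(r"\b(1[0-9]{3}|20[0-9]{2})\b", doc))
    return years

def fit_period(years):
    # Fit a simple model to the year distribution and derive a time
    # period (here: mean +/- one population standard deviation).
    mu = statistics.mean(years)
    sigma = statistics.pstdev(years)
    return {"begin": round(mu - sigma), "end": round(mu + sigma)}

# Hypothetical snippets retrieved for the concept "Renaissance".
docs = ["The Renaissance began around 1450 in Italy.",
        "It flourished between 1500 and 1527.",
        "By 1600 the movement had spread across Europe."]
period = fit_period(extract_years(docs))
```

     The resulting begin/end pair is what the Semantic Representation phase would then express in a time vocabulary such as one based on TIMEX2.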
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.9, S.1888-1908
    Type
    a
  2. Hennicke, S.; Olensky, M.; Boer, V. de; Isaac, A.; Wielemaker, J.: A data model for cross-domain data representation : the "Europeana Data Model" in the case of archival and museum data (2010) 0.01
    Abstract
     This paper reports on ongoing work in EuropeanaConnect on converting heterogeneous, cross-domain data to a common data model. The "Europeana Data Model" (EDM) provides the means to accommodate data from different domains while largely retaining the original metadata notion. We give an introduction to the EDM and demonstrate how important metadata principles of two different metadata standards can be represented in EDM: one from the library domain ("Bibliopolis") and one from the archive domain, based on the "Encoded Archival Description" (EAD) standard. We conclude that the EDM offers a feasible approach to the problem of heterogeneous data interoperability in a digital library environment.
    Source
    Information und Wissen: global, sozial und frei? Proceedings des 12. Internationalen Symposiums für Informationswissenschaft (ISI 2011) ; Hildesheim, 9. - 11. März 2011. Hrsg.: J. Griesbaum, T. Mandl u. C. Womser-Hacker
    Type
    a
  3. Estrada, L.M.; Hildebrand, M.; Boer, V. de; Ossenbruggen, J. van: Time-based tags for fiction movies : comparing experts to novices using a video labeling game (2017) 0.01
    Abstract
     The cultural heritage sector has embraced social tagging as a way to increase access to online content and to engage users with digital collections. In this article, we build on two current lines of research: (a) we use Waisda?, an existing labeling game, to add time-based annotations to content, and (b) in this context, we investigate the role of experts in human-based computation (nichesourcing). We report on a small-scale experiment in which we applied Waisda? to content from film archives. We study the differences in the type of time-based tags between experts and novices for film clips in a crowdsourcing setting. The findings show high similarity in the number and type of tags (mostly factual). In the less frequent tags, however, experts used more domain-specific terms. We conclude that competitive games are not suited to eliciting real expert-level descriptions. We also confirm that providing guidelines based on conceptual frameworks better suited to describing moving images in a time-based fashion could increase tag quality, enabling more innovative tag-based services for online audiovisual heritage.
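     The expert-novice comparison reported above rests on measuring how similar two sets of tags are. A minimal sketch follows; the Jaccard overlap is our choice of similarity measure, and the tag values are invented examples, not data from the study.

```python
def jaccard(tags_a, tags_b):
    # Overlap between two tag sets: |A intersection B| / |A union B|.
    a, b = set(tags_a), set(tags_b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical time-based tags for one film clip.
expert = {"car", "night", "close-up", "mise-en-scene"}
novice = {"car", "night", "man", "street"}
similarity = jaccard(expert, novice)
```

     A high overlap on frequent factual tags, with divergence only in the rarer domain-specific terms, would match the pattern the study reports.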
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.2, S.348-364
    Type
    a
  4. Schreiber, G.; Amin, A.; Assem, M. van; Boer, V. de; Hardman, L.; Hildebrand, M.; Hollink, L.; Huang, Z.; Kersen, J. van; Niet, M. de; Omelayenko, B.; Ossenbruggen, J. van; Siebes, R.; Taekema, J.; Wielemaker, J.; Wielinga, B.: MultimediaN E-Culture demonstrator (2006) 0.00
    Abstract
     The main objective of the MultimediaN E-Culture project is to demonstrate how novel semantic-web and presentation technologies can be deployed to provide better indexing and search support within large virtual collections of cultural-heritage resources. The architecture is fully based on open Web standards, in particular XML, SVG, RDF/OWL and SPARQL. One basic hypothesis underlying this work is that the use of explicit background knowledge in the form of ontologies, vocabularies and thesauri is particularly useful for information retrieval in knowledge-rich domains. This paper gives some details about the internals of the demonstrator.
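     A SPARQL-backed keyword search like the one the demonstrator provides can be pictured as generating a query against the RDF graph. The shape below is our illustrative assumption; the rdfs:label schema and the query form are not taken from the demonstrator's actual code.

```python
def keyword_query(term):
    # Build a SPARQL query that finds resources whose label contains
    # the keyword, case-insensitively (illustrative schema: rdfs:label).
    return f"""
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?resource ?label WHERE {{
  ?resource rdfs:label ?label .
  FILTER(CONTAINS(LCASE(STR(?label)), LCASE("{term}")))
}}"""

query = keyword_query("rembrandt")
```

     Sending such a query to a SPARQL endpoint would return label-matching resources, which the demonstrator then groups into semantic result clusters.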
  5. Boer, V. de; Wielemaker, J.; Gent, J. van; Hildebrand, M.; Isaac, A.; Ossenbruggen, J. van; Schreiber, G.: Supporting linked data production for cultural heritage institutes : the Amsterdam Museum case study (2012) 0.00
    Abstract
    Within the cultural heritage field, proprietary metadata and vocabularies are being transformed into public Linked Data. These efforts have mostly been at the level of large-scale aggregators such as Europeana where the original data is abstracted to a common format and schema. Although this approach ensures a level of consistency and interoperability, the richness of the original data is lost in the process. In this paper, we present a transparent and interactive methodology for ingesting, converting and linking cultural heritage metadata into Linked Data. The methodology is designed to maintain the richness and detail of the original metadata. We introduce the XMLRDF conversion tool and describe how it is integrated in the ClioPatria semantic web toolkit. The methodology and the tools have been validated by converting the Amsterdam Museum metadata to a Linked Data version. In this way, the Amsterdam Museum became the first 'small' cultural heritage institution with a node in the Linked Data cloud.
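     The conversion step described above can be sketched as mapping each field of an XML metadata record to a subject-predicate-object triple. This stdlib sketch only illustrates the idea; the element names, record, and base URI are hypothetical, and the real XMLRDF tool applies configurable rewrite rules rather than this one-to-one mapping.

```python
import xml.etree.ElementTree as ET

def record_to_triples(xml_text, base="http://example.org/"):
    # Map a flat XML metadata record to (subject, predicate, object)
    # triples, one triple per child element.
    root = ET.fromstring(xml_text)
    subject = base + root.attrib.get("id", "record")
    return [(subject, base + child.tag, child.text or "") for child in root]

# Hypothetical museum metadata record.
record = '<record id="am-1234"><title>Stadsgezicht</title><creator>Unknown</creator></record>'
triples = record_to_triples(record)
```

     Keeping the mapping this direct is what preserves the richness of the original metadata, in contrast to abstracting everything to an aggregator's common schema first.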
    Type
    a
  6. Schreiber, G.; Amin, A.; Assem, M. van; Boer, V. de; Hardman, L.; Hildebrand, M.; Omelayenko, B.; Ossenbruggen, J. van; Wielemaker, J.; Wielinga, B.; Tordai, A.; Aroyo, L.: Semantic annotation and search of cultural-heritage collections : the MultimediaN E-Culture demonstrator (2008) 0.00
    Abstract
     In this article we describe a Semantic Web application for semantic annotation and search in large virtual collections of cultural-heritage objects indexed with multiple vocabularies. During the annotation phase we harvest, enrich and align collection metadata and vocabularies. The semantic-search facilities support keyword-based queries of the graph (currently 20M triples), resulting in semantically grouped result clusters, all representing potential semantic matches of the original query. We show two sample search scenarios. The annotation and search software is open source and is already being used by third parties. All software is based on established Web standards, in particular HTML/XML, CSS, RDF/OWL, SPARQL and JavaScript.