Search (13 results, page 1 of 1)

  • Filter: author_ss:"Isaac, A."
  1. Hollink, L.; Assem, M. van; Wang, S.; Isaac, A.; Schreiber, G.: Two variations on ontology alignment evaluation : methodological issues (2008) 0.03
    0.028408606 = product of:
      0.11363442 = sum of:
        0.071807064 = weight(_text_:case in 4645) [ClassicSimilarity], result of:
          0.071807064 = score(doc=4645,freq=4.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.41216385 = fieldWeight in 4645, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=4645)
        0.04182736 = weight(_text_:studies in 4645) [ClassicSimilarity], result of:
          0.04182736 = score(doc=4645,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 4645, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=4645)
      0.25 = coord(2/8)
    
    Abstract
    Evaluation of ontology alignments is in practice done in two ways: (1) assessing individual correspondences and (2) comparing the alignment to a reference alignment. However, this type of evaluation does not guarantee that an application which uses the alignment will perform well. In this paper, we contribute to the current ontology alignment evaluation practices by proposing two alternative evaluation methods that take into account some characteristics of a usage scenario without doing a full-fledged end-to-end evaluation. We compare different evaluation approaches in three case studies, focussing on methodological issues. Each case study considers an alignment between a different pair of ontologies, ranging from rich and well-structured to small and poorly structured. This enables us to conclude on the use of different evaluation approaches in different settings.
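    The nested breakdown above is Lucene's ClassicSimilarity "explain" output for this result. As a sanity check rather than a description of the search engine's internals, the following minimal Python sketch recombines the factors the explanation reports (tf, idf, queryNorm, fieldNorm, coord) into the displayed document score; the numeric constants are copied from the explanation above, and the helper name term_score is ours, not part of the system.

    ```python
    import math

    # Recombine the ClassicSimilarity factors shown for result 1 (doc=4645):
    # score = coord * sum_t(queryWeight_t * fieldWeight_t), where
    # queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm.

    QUERY_NORM = 0.03962768  # queryNorm reported in the explanation
    FIELD_NORM = 0.046875    # fieldNorm(doc=4645)

    def term_score(freq: float, idf: float) -> float:
        """Contribution of one query term: queryWeight * fieldWeight."""
        tf = math.sqrt(freq)                  # ClassicSimilarity: tf = sqrt(termFreq)
        query_weight = idf * QUERY_NORM       # e.g. 4.3964143 * 0.03962768 = 0.1742197
        field_weight = tf * idf * FIELD_NORM  # e.g. 2.0 * 4.3964143 * 0.046875 = 0.41216385
        return query_weight * field_weight

    case    = term_score(freq=4.0, idf=4.3964143)  # ~0.071807064
    studies = term_score(freq=2.0, idf=3.9902744)  # ~0.04182736

    coord = 2 / 8  # 2 of the 8 query clauses matched this document
    print(round(coord * (case + studies), 9))  # ~0.028408606, the displayed score
    ```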
  2. Hennicke, S.; Olensky, M.; Boer, V. de; Isaac, A.; Wielemaker, J.: A data model for cross-domain data representation : the "Europeana Data Model" in the case of archival and museum data (2010) 0.01
    0.0063469075 = product of:
      0.05077526 = sum of:
        0.05077526 = weight(_text_:case in 4664) [ClassicSimilarity], result of:
          0.05077526 = score(doc=4664,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 4664, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=4664)
      0.125 = coord(1/8)
    
  3. Boer, V. de; Wielemaker, J.; Gent, J. van; Hildebrand, M.; Isaac, A.; Ossenbruggen, J. van; Schreiber, G.: Supporting linked data production for cultural heritage institutes : the Amsterdam Museum case study (2012) 0.01
    0.0052890894 = product of:
      0.042312715 = sum of:
        0.042312715 = weight(_text_:case in 265) [ClassicSimilarity], result of:
          0.042312715 = score(doc=265,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.24286987 = fieldWeight in 265, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=265)
      0.125 = coord(1/8)
    
  4. Wallis, R.; Isaac, A.; Charles, V.; Manguinhas, H.: Recommendations for the application of Schema.org to aggregated cultural heritage metadata to increase relevance and visibility to search engines : the case of Europeana (2017) 0.01
    0.0052890894 = product of:
      0.042312715 = sum of:
        0.042312715 = weight(_text_:case in 3372) [ClassicSimilarity], result of:
          0.042312715 = score(doc=3372,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.24286987 = fieldWeight in 3372, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3372)
      0.125 = coord(1/8)
    
  5. Wang, S.; Isaac, A.; Schopman, B.; Schlobach, S.; Meij, L. van der: Matching multilingual subject vocabularies (2009) 0.01
    0.005011469 = product of:
      0.040091753 = sum of:
        0.040091753 = weight(_text_:libraries in 3035) [ClassicSimilarity], result of:
          0.040091753 = score(doc=3035,freq=4.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.30797386 = fieldWeight in 3035, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.046875 = fieldNorm(doc=3035)
      0.125 = coord(1/8)
    
    Abstract
    Most libraries and other cultural heritage institutions use controlled knowledge organisation systems, such as thesauri, to describe their collections. Unfortunately, as most of these institutions use different such systems, unified access to heterogeneous collections is difficult. Things are even worse in an international context when concepts have labels in different languages. In order to overcome the multilingual interoperability problem between European libraries, extensive work has been done to manually map concepts from different knowledge organisation systems, which is a tedious and expensive process. Within the TELplus project, we developed and evaluated methods to automatically discover these mappings, using different ontology matching techniques. In experiments on the major French, English and German subject heading lists Rameau, LCSH and SWD, we show that we can automatically produce mappings of surprisingly good quality, even when using relatively naive translation and matching methods.
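    The abstract mentions "relatively naive translation and matching methods". The following minimal Python sketch illustrates that general idea only and is not the TELplus pipeline: labels are translated to a pivot language, normalized, and a mapping is proposed on exact match. The vocabularies, labels and the toy translation table are hypothetical stand-ins for RAMEAU, LCSH and a real translation service.

    ```python
    # Naive label-based matching across languages (illustrative sketch only).
    from unicodedata import normalize

    def norm(label: str) -> str:
        """Lowercase and strip accents so 'Bibliothèques' matches 'bibliotheques'."""
        folded = normalize("NFKD", label).encode("ascii", "ignore").decode()
        return folded.lower().strip()

    # Toy French->English translations standing in for a real translation service.
    FR_TO_EN = {"bibliotheques": "libraries", "musique": "music"}

    rameau = {"R1": "Bibliothèques", "R2": "Musique"}         # hypothetical RAMEAU labels
    lcsh   = {"L1": "Libraries", "L2": "Music", "L3": "Law"}  # hypothetical LCSH labels

    mappings = []
    for r_id, r_label in rameau.items():
        translated = FR_TO_EN.get(norm(r_label), norm(r_label))
        for l_id, l_label in lcsh.items():
            if norm(l_label) == translated:
                mappings.append((r_id, l_id))

    print(mappings)  # [('R1', 'L1'), ('R2', 'L2')]
    ```

    Real matchers would of course go beyond exact string equality (using vocabulary structure, co-occurrence in collection objects, or external knowledge sources, as the abstract notes), but even this naive scheme conveys how candidate mappings can be generated automatically.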
  6. Isaac, A.; Baker, T.: Linked data practice at different levels of semantic precision : the perspective of libraries, archives and museums (2015) 0.00
    0.004176224 = product of:
      0.033409793 = sum of:
        0.033409793 = weight(_text_:libraries in 2026) [ClassicSimilarity], result of:
          0.033409793 = score(doc=2026,freq=4.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.25664487 = fieldWeight in 2026, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2026)
      0.125 = coord(1/8)
    
    Abstract
    Libraries, archives and museums rely on structured schemas and vocabularies to indicate classes in which a resource may belong. In the context of linked data, key organizational components are the RDF data model, element schemas and value vocabularies, with simple ontologies having minimally defined classes and properties in order to facilitate reuse and interoperability. Simplicity over formal semantics is a tenet of the open-world assumption underlying ontology languages central to the Semantic Web, but the result is a lack of constraints, data quality checks and validation capacity. Inconsistent use of vocabularies and ontologies that do not follow formal semantics rules and logical concept hierarchies further complicate the use of Semantic Web technologies. The Simple Knowledge Organization System (SKOS) helps make existing value vocabularies available in the linked data environment, but it exchanges precision for simplicity. Incompatibilities between simple organized vocabularies, Resource Description Framework Schemas and OWL ontologies and even basic notions of subjects and concepts prevent smooth translations and challenge the conversion of cultural institutions' unique legacy vocabularies for linked data. Adopting the linked data vision requires accepting loose semantic interpretations. To avoid semantic inconsistencies and illogical results, cultural organizations following the linked data path must be careful to choose the level of semantics that best suits their domain and needs.
  7. Isaac, A.: After EDLproject : controlled Vocabularies in TELPlus (2007) 0.00
    0.0040267524 = product of:
      0.03221402 = sum of:
        0.03221402 = product of:
          0.06442804 = sum of:
            0.06442804 = weight(_text_:22 in 116) [ClassicSimilarity], result of:
              0.06442804 = score(doc=116,freq=2.0), product of:
                0.13876937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962768 = queryNorm
                0.46428138 = fieldWeight in 116, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=116)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  8. Isaac, A.: Aligning thesauri for an integrated access to Cultural Heritage Resources (2007) 0.00
    0.0037023628 = product of:
      0.029618902 = sum of:
        0.029618902 = weight(_text_:case in 553) [ClassicSimilarity], result of:
          0.029618902 = score(doc=553,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.17000891 = fieldWeight in 553, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.02734375 = fieldNorm(doc=553)
      0.125 = coord(1/8)
    
    Abstract
    Currently, a number of efforts are being carried out to integrate collections from different institutions containing heterogeneous material. Examples of such projects are The European Library [1] and the Memory of the Netherlands [2]. A crucial point for their success is the ability to provide unified access on top of the different collections, e.g. using one single vocabulary for querying or browsing the objects they contain. This is made difficult by the fact that objects from different collections are often described using different vocabularies - thesauri, classification schemes - and are therefore not interoperable at the semantic level. To solve this problem, one can turn to semantic links - mappings - between the elements of the different vocabularies. If one knows that a concept C from a vocabulary V is semantically equivalent to a concept D from vocabulary W, then an appropriate search engine can return all the objects indexed against D for a query for objects described using C. We thus gain access to other collections while using a single vocabulary. This is however an ideal situation, and hard alignment work is required to reach it. Several projects in the past have tried to implement such a solution, like MACS [3] and Renardus [4]. They have demonstrated very interesting results, but also highlighted the difficulty of manually aligning all the different vocabularies involved in practical cases, which sometimes contain hundreds of thousands of concepts. To alleviate this problem, a number of tools have been proposed that provide candidate mappings between two input vocabularies, making alignment a (semi-)automatic task. Recently, the Semantic Web community has produced many such alignment tools. Several techniques are found, depending on the material they exploit: labels of concepts, structure of vocabularies, collection objects and external knowledge sources. Throughout our presentation, we will present a concrete heterogeneity case where alignment techniques have been applied to build a (pilot) browser, developed in the context of the STITCH project [5]. This browser enables unified access to two collections of illuminated manuscripts, using either the description vocabulary of the first collection, Mandragore [6], or the one used by the second, Iconclass [7]. In our talk, we will also make the case for using unified representations of the vocabularies' semantic and lexical information. In addition to easing the use of the alignment tools that take these vocabularies as input, turning to a standard representation format helps in designing applications that are more generic, like the browser we demonstrate. We give pointers to SKOS [8], an open and web-enabled format currently developed by the Semantic Web community.
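    To make the query-expansion idea in this abstract concrete ("a search engine can return all the objects indexed against D for a query using C"), here is a minimal Python sketch under assumed data: the concept identifiers, mappings and indexed objects below are hypothetical and merely echo the Mandragore/Iconclass setting of the STITCH pilot.

    ```python
    # Query expansion over concept mappings (illustrative sketch, hypothetical data).

    # mapping: concept in vocabulary V -> equivalent concept(s) in vocabulary W
    MAPPINGS = {"mandragore:ange": {"iconclass:11G"}}

    # two collections, each indexed with its own vocabulary
    INDEX = {
        "mandragore:ange": {"bnf:ms-fr-1-f12"},
        "iconclass:11G": {"kb:76-e-5-f3", "kb:78-d-38-f9"},
    }

    def search(concept: str) -> set[str]:
        """Return objects indexed against the concept or any concept mapped to it."""
        concepts = {concept} | MAPPINGS.get(concept, set())
        results: set[str] = set()
        for c in concepts:
            results |= INDEX.get(c, set())
        return results

    print(search("mandragore:ange"))  # objects from both collections
    ```

    In practice such equivalences would typically be published in a standard representation such as SKOS mapping relations (e.g. skos:exactMatch), which is exactly the role the abstract assigns to SKOS [8].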
  9. Manguinhas, H.; Charles, V.; Isaac, A.; Miles, T.; Lima, A.; Neroulidis, A.; Ginouves, V.; Atsidis, D.; Hildebrand, M.; Brinkerink, M.; Gordea, S.: Linking subject labels in cultural heritage metadata to MIMO vocabulary using CultuurLink (2016) 0.00
    0.0035436437 = product of:
      0.02834915 = sum of:
        0.02834915 = weight(_text_:libraries in 3107) [ClassicSimilarity], result of:
          0.02834915 = score(doc=3107,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.2177704 = fieldWeight in 3107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.046875 = fieldNorm(doc=3107)
      0.125 = coord(1/8)
    
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
  10. Isaac, A.; Raemy, J.A.; Meijers, E.; Valk, S. De; Freire, N.: Metadata aggregation via linked data : results of the Europeana Common Culture project (2020) 0.00
    0.0035436437 = product of:
      0.02834915 = sum of:
        0.02834915 = weight(_text_:libraries in 39) [ClassicSimilarity], result of:
          0.02834915 = score(doc=39,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.2177704 = fieldWeight in 39, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.046875 = fieldNorm(doc=39)
      0.125 = coord(1/8)
    
    Abstract
    Digital cultural heritage resources are widely available on the web through the digital libraries of heritage institutions. To address the difficulties of discoverability in cultural heritage, the common practice is metadata aggregation, where centralized efforts like Europeana facilitate discoverability by collecting the resources' metadata. We present the results of the linked data aggregation task conducted within the Europeana Common Culture project, which attempted an innovative approach to aggregation based on linked data made available by cultural heritage institutions. This task ran for one year with participation of eleven organizations, involving the three member roles of the Europeana network: data providers, intermediary aggregators, and the central aggregation hub, Europeana. We report on the challenges that were faced by data providers, the standards and specifications applied, and the resulting aggregated metadata.
  11. Wang, S.; Isaac, A.; Schlobach, S.; Meij, L. van der; Schopman, B.: Instance-based semantic interoperability in the cultural heritage (2012) 0.00
    0.0029530365 = product of:
      0.023624292 = sum of:
        0.023624292 = weight(_text_:libraries in 125) [ClassicSimilarity], result of:
          0.023624292 = score(doc=125,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.18147534 = fieldWeight in 125, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0390625 = fieldNorm(doc=125)
      0.125 = coord(1/8)
    
    Content
    Contribution to the special topic "Semantic Web and Reasoning for Cultural Heritage and Digital Libraries": http://www.semantic-web-journal.net/content/instance-based-semantic-interoperability-cultural-heritage http://www.semantic-web-journal.net/sites/default/files/swj157_1.pdf.
  12. Isaac, A.; Schlobach, S.; Matthezing, H.; Zinn, C.: Integrated access to cultural heritage resources through representation and alignment of controlled vocabularies (2008) 0.00
    0.0023624292 = product of:
      0.018899433 = sum of:
        0.018899433 = weight(_text_:libraries in 3398) [ClassicSimilarity], result of:
          0.018899433 = score(doc=3398,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.14518027 = fieldWeight in 3398, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03125 = fieldNorm(doc=3398)
      0.125 = coord(1/8)
    
    Content
    This paper is based on a talk given at "Information Access for the Global Community, An International Seminar on the Universal Decimal Classification", held on 4-5 June 2007 in The Hague, The Netherlands. An abstract of this talk will be published in Extensions and Corrections to the UDC, an annual publication of the UDC consortium. Contribution to the special issue "Digital libraries and the semantic web: context, applications and research".
  13. Summers, E.; Isaac, A.; Redding, C.; Krech, D.: LCSH, SKOS and Linked Data (2008) 0.00
    0.002348939 = product of:
      0.018791512 = sum of:
        0.018791512 = product of:
          0.037583023 = sum of:
            0.037583023 = weight(_text_:22 in 2631) [ClassicSimilarity], result of:
              0.037583023 = score(doc=2631,freq=2.0), product of:
                0.13876937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962768 = queryNorm
                0.2708308 = fieldWeight in 2631, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2631)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas