Search (156 results, page 1 of 8)

  • Filter: theme_ss:"Semantische Interoperabilität"
  1. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.09
    0.092324466 = product of:
      0.18464893 = sum of:
        0.18464893 = sum of:
          0.08582799 = weight(_text_:web in 8365) [ClassicSimilarity], result of:
            0.08582799 = score(doc=8365,freq=2.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.50479853 = fieldWeight in 8365, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.109375 = fieldNorm(doc=8365)
          0.09882093 = weight(_text_:22 in 8365) [ClassicSimilarity], result of:
            0.09882093 = score(doc=8365,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.5416616 = fieldWeight in 8365, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=8365)
      0.5 = coord(1/2)
    
    Date
    22. 6.2015 16:08:38
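     The breakdown above is Lucene's ClassicSimilarity "explain" output: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the per-term contributions are summed and scaled by the coord factor. A minimal Python sketch, assuming nothing beyond the factors shown above, reproduces the first result's score:

       from math import sqrt

       def classic_sim_term(freq, idf, query_norm, field_norm):
           """One term's contribution in Lucene's ClassicSimilarity explain output."""
           tf = sqrt(freq)                       # tf(freq) = sqrt(termFreq)
           query_weight = idf * query_norm       # queryWeight = idf * queryNorm
           field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
           return query_weight * field_weight

       QUERY_NORM = 0.052098576  # queryNorm shared by all terms in the breakdown above

       # Factors for doc 8365 (result 1), copied from the explanation tree:
       web = classic_sim_term(freq=2.0, idf=3.2635105, query_norm=QUERY_NORM, field_norm=0.109375)
       t22 = classic_sim_term(freq=2.0, idf=3.5018296, query_norm=QUERY_NORM, field_norm=0.109375)

       score = (web + t22) * 0.5  # coord(1/2): one of two top-level query clauses matched
       print(f"{score:.9f}")      # ~0.092324466, the figure listed for result 1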
  2. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.08
    0.08428172 = sum of:
      0.068955295 = product of:
        0.20686588 = sum of:
          0.20686588 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
            0.20686588 = score(doc=1000,freq=2.0), product of:
              0.4416923 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.052098576 = queryNorm
              0.46834838 = fieldWeight in 1000, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1000)
        0.33333334 = coord(1/3)
      0.015326426 = product of:
        0.030652853 = sum of:
          0.030652853 = weight(_text_:web in 1000) [ClassicSimilarity], result of:
            0.030652853 = score(doc=1000,freq=2.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.18028519 = fieldWeight in 1000, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1000)
        0.5 = coord(1/2)
    
    Content
     Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the accompanying presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
  3. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.08
    0.08147512 = product of:
      0.16295023 = sum of:
        0.16295023 = sum of:
          0.11353976 = weight(_text_:web in 4184) [ClassicSimilarity], result of:
            0.11353976 = score(doc=4184,freq=14.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.6677857 = fieldWeight in 4184, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4184)
          0.049410466 = weight(_text_:22 in 4184) [ClassicSimilarity], result of:
            0.049410466 = score(doc=4184,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.2708308 = fieldWeight in 4184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4184)
      0.5 = coord(1/2)
    
    Abstract
     The Internet as a medium is in flux, and with it the conditions under which content is published and received are changing. What opportunities do the two visions of the future currently being discussed in parallel, the Social Web and the Semantic Web, offer? To answer this question, the article examines the foundations of both models with respect to application and technology, but also highlights their shortcomings as well as the added value of a combination appropriate to the medium. Using the grammatical online information system grammis as an example, a strategy for integratively exploiting the respective strengths is outlined.
    Date
    22. 1.2011 10:38:28
    Source
     Kommunikation, Partizipation und Wirkungen im Social Web, vol. 1. Ed.: A. Zerfaß et al.
    Theme
    Semantic Web
  4. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.06
    0.061869845 = product of:
      0.12373969 = sum of:
        0.12373969 = sum of:
          0.07432922 = weight(_text_:web in 759) [ClassicSimilarity], result of:
            0.07432922 = score(doc=759,freq=6.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.43716836 = fieldWeight in 759, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0546875 = fieldNorm(doc=759)
          0.049410466 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
            0.049410466 = score(doc=759,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.2708308 = fieldWeight in 759, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=759)
      0.5 = coord(1/2)
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
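     The interoperability gap the abstract describes can be made concrete with a small, purely hypothetical sketch: two well-formed XML fragments describe the same resource under different element names, and the equivalence between those names has to be supplied outside XML itself (the role the paper assigns to an ontology layer such as SHOE on top of RDF):

       import xml.etree.ElementTree as ET

       # Two hypothetical fragments following different domain DTDs; both are valid
       # XML, but XML alone does not say that <author> and <creator> mean the same.
       publisher_a = ET.fromstring("<book><author>J. Heflin</author></book>")
       publisher_b = ET.fromstring("<work><creator>J. Heflin</creator></work>")

       element_map = {"creator": "author"}  # hand-made, hypothetical crosswalk
       merged = {
           element_map.get(child.tag, child.tag): child.text
           for record in (publisher_a, publisher_b)
           for child in record
       }
       print(merged)  # {'author': 'J. Heflin'}: the equivalence is asserted by us, not by XML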
    Date
    11. 5.2013 19:22:18
    Theme
    Semantic Web
  5. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.05
    0.048338976 = product of:
      0.09667795 = sum of:
        0.09667795 = sum of:
          0.03678342 = weight(_text_:web in 4379) [ClassicSimilarity], result of:
            0.03678342 = score(doc=4379,freq=2.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.21634221 = fieldWeight in 4379, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.046875 = fieldNorm(doc=4379)
          0.059894532 = weight(_text_:22 in 4379) [ClassicSimilarity], result of:
            0.059894532 = score(doc=4379,freq=4.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.32829654 = fieldWeight in 4379, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4379)
      0.5 = coord(1/2)
    
    Abstract
     On 29 and 30 October 2009 the second international UDC seminar on the topic "Classification at a Crossroad" took place at the Royal Library in The Hague. Like the first conference of this kind in 2007, it was organized by the UDC Consortium (UDCC). The focus of this year's event was subject access to the World Wide Web through better use of classifications (in particular, of course, the UDC), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search, and multilingual access also played a role. 135 participants from 35 countries came to The Hague. With 22 papers from 14 different countries the programme covered a broad range of topics, with the United Kingdom most strongly represented with five contributions. On both conference days the thematic focus was set by the opening lectures, which were then explored in greater depth in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
  6. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.05
    0.048338976 = product of:
      0.09667795 = sum of:
        0.09667795 = sum of:
          0.03678342 = weight(_text_:web in 1967) [ClassicSimilarity], result of:
            0.03678342 = score(doc=1967,freq=2.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.21634221 = fieldWeight in 1967, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
          0.059894532 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
            0.059894532 = score(doc=1967,freq=4.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.32829654 = fieldWeight in 1967, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
      0.5 = coord(1/2)
    
    Abstract
     This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Source
    Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn
  7. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.05
    0.048268706 = product of:
      0.09653741 = sum of:
        0.09653741 = product of:
          0.28961223 = sum of:
            0.28961223 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.28961223 = score(doc=306,freq=2.0), product of:
                0.4416923 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.052098576 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
     Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  8. Galinski, C.: Fragen der semantischen Interoperabilität brechen jetzt überall auf (o.J.) 0.05
    0.04718572 = product of:
      0.09437144 = sum of:
        0.09437144 = sum of:
          0.052019615 = weight(_text_:web in 4183) [ClassicSimilarity], result of:
            0.052019615 = score(doc=4183,freq=4.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.3059541 = fieldWeight in 4183, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.046875 = fieldNorm(doc=4183)
          0.042351827 = weight(_text_:22 in 4183) [ClassicSimilarity], result of:
            0.042351827 = score(doc=4183,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.23214069 = fieldWeight in 4183, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4183)
      0.5 = coord(1/2)
    
    Content
    "Der Begriff der semantischen Interoperabilität ist aufgetreten mit dem Semantic Web, einer Konzeption von Tim Berners-Lee, der sagt, das zunehmend die Computer miteinander über hochstandardisierte Sprachen, die wenig mit Natürlichsprachlichkeit zu tun haben, kommunizieren werden. Was er nicht sieht, ist dass rein technische Interoperabilität nicht ausreicht, um die semantische Interoperabilität herzustellen." ... "Der Begriff der semantischen Interoperabilität ist aufgetreten mit dem Semantic Web, einer Konzeption von Tim Berners-Lee, der sagt, das zunehmend die Computer miteinander über hochstandardisierte Sprachen, die wenig mit Natürlichsprachlichkeit zu tun haben, kommunizieren werden. Was er nicht sieht, ist dass rein technische Interoperabilität nicht ausreicht, um die semantische Interoperabilität herzustellen."
    Date
    22. 1.2011 10:16:32
  9. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.05
    0.046162233 = product of:
      0.092324466 = sum of:
        0.092324466 = sum of:
          0.042913996 = weight(_text_:web in 3283) [ClassicSimilarity], result of:
            0.042913996 = score(doc=3283,freq=2.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.25239927 = fieldWeight in 3283, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3283)
          0.049410466 = weight(_text_:22 in 3283) [ClassicSimilarity], result of:
            0.049410466 = score(doc=3283,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.2708308 = fieldWeight in 3283, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3283)
      0.5 = coord(1/2)
    
    Theme
    Semantic Web
  10. Godby, C.J.; Smith, D.; Childress, E.: Encoding application profiles in a computational model of the crosswalk (2008) 0.04
    0.044192746 = product of:
      0.08838549 = sum of:
        0.08838549 = sum of:
          0.053092297 = weight(_text_:web in 2649) [ClassicSimilarity], result of:
            0.053092297 = score(doc=2649,freq=6.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.3122631 = fieldWeight in 2649, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2649)
          0.03529319 = weight(_text_:22 in 2649) [ClassicSimilarity], result of:
            0.03529319 = score(doc=2649,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.19345059 = fieldWeight in 2649, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2649)
      0.5 = coord(1/2)
    
    Abstract
     OCLC's Crosswalk Web Service (Godby, Smith and Childress, 2008) formalizes the notion of crosswalk, as defined in Gill et al. (n.d.), by hiding technical details and permitting the semantic equivalences to emerge as the centerpiece. One outcome is that metadata experts, who are typically not programmers, can enter the translation logic into a spreadsheet that can be automatically converted into executable code. In this paper, we describe the implementation of the Dublin Core Terms application profile in the management of crosswalks involving MARC. A crosswalk that encodes an application profile extends the typical format with two columns: one that annotates the namespace to which an element belongs, and one that annotates a 'broader-narrower' relation between a pair of elements, such as Dublin Core coverage and Dublin Core Terms spatial. This information is sufficient to produce scripts written in OCLC's Semantic Equivalence Expression Language (or Seel), which are called from the Crosswalk Web Service to generate production-grade translations. With its focus on elements that can be mixed, matched, added, and redefined, the application profile (Heery and Patel, 2000) is a natural fit with the translation model of the Crosswalk Web Service, which attempts to achieve interoperability by mapping one pair of elements at a time.
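     As a rough illustration of the extended crosswalk format described above (a hedged sketch with invented rows, not OCLC's Seel or Crosswalk Web Service code), each row pairs a source and a target element and carries the two extra columns (target namespace and a broader/narrower relation); translation then proceeds one element pair at a time:

       from dataclasses import dataclass

       @dataclass
       class CrosswalkRow:
           source_element: str    # e.g. a MARC field
           target_element: str    # e.g. a Dublin Core (Terms) element
           target_namespace: str  # namespace annotation column
           relation: str = "equivalent"  # or "broader" / "narrower"

       # Invented example rows in the spirit of the dc:coverage / dcterms:spatial pair:
       rows = [
           CrosswalkRow("MARC 522", "coverage", "dc", relation="broader"),
           CrosswalkRow("MARC 522", "spatial", "dcterms", relation="narrower"),
       ]

       def translate(record: dict, rows: list) -> dict:
           """Map one element pair at a time, as in the article's translation model."""
           out = {}
           for row in rows:
               if row.source_element in record:
                   key = f"{row.target_namespace}:{row.target_element}"
                   out.setdefault(key, []).append(record[row.source_element])
           return out

       print(translate({"MARC 522": "Brazil"}, rows))
       # {'dc:coverage': ['Brazil'], 'dcterms:spatial': ['Brazil']}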
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  11. Dunsire, G.; Nicholson, D.: Signposting the crossroads : terminology Web services and classification-based interoperability (2010) 0.04
    0.044192746 = product of:
      0.08838549 = sum of:
        0.08838549 = sum of:
          0.053092297 = weight(_text_:web in 4066) [ClassicSimilarity], result of:
            0.053092297 = score(doc=4066,freq=6.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.3122631 = fieldWeight in 4066, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4066)
          0.03529319 = weight(_text_:22 in 4066) [ClassicSimilarity], result of:
            0.03529319 = score(doc=4066,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.19345059 = fieldWeight in 4066, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4066)
      0.5 = coord(1/2)
    
    Abstract
     The focus of this paper is the provision of terminology- and classification-based interoperability data via web services, initially using interoperability data based on the use of a Dewey Decimal Classification (DDC) spine, but with an aim to explore other possibilities in time, including the use of other spines. The High-Level Thesaurus Project (HILT) Phase IV developed pilot web services based on SRW/U, SOAP, and SKOS to deliver machine-readable terminology and cross-terminology mapping data likely to be useful to information services wishing to enhance their subject search or browse services. It also developed an associated toolkit to help information services technical staff to embed HILT-related functionality within service interfaces. Several UK information services have created illustrative user interface enhancements using HILT functionality, and these will demonstrate what is possible. HILT currently has the following subject schemes mounted and available: DDC, CAB, GCMD, HASSET, IPSV, LCSH, MeSH, NMR, SCAS, UNESCO, and AAT. It also has high-level mappings between some of these schemes and DDC and some deeper pilot mappings available.
    Date
    6. 1.2011 19:22:48
  12. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.04
    0.044150814 = product of:
      0.08830163 = sum of:
        0.08830163 = sum of:
          0.06006708 = weight(_text_:web in 168) [ClassicSimilarity], result of:
            0.06006708 = score(doc=168,freq=12.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.35328537 = fieldWeight in 168, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.03125 = fieldNorm(doc=168)
          0.028234553 = weight(_text_:22 in 168) [ClassicSimilarity], result of:
            0.028234553 = score(doc=168,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.15476047 = fieldWeight in 168, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=168)
      0.5 = coord(1/2)
    
    Abstract
    Ontologies are viewed as the silver bullet for many applications, but in open or evolving systems, different parties can adopt different ontologies. This increases heterogeneity problems rather than reducing heterogeneity. This book proposes ontology matching as a solution to the problem of semantic heterogeneity, offering researchers and practitioners a uniform framework of reference to currently available work. The techniques presented apply to database schema matching, catalog integration, XML schema matching and more. Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
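     A toy sketch of the matching idea summarised above (illustrative only, using invented mini-ontologies, not the techniques from the book): correspondences between entities of two ontologies are proposed from label similarity and tagged with a relation type:

       from difflib import SequenceMatcher

       onto_a = ["Person", "Author", "Publication"]
       onto_b = ["Agent", "Creator", "Document", "Publication"]

       def similarity(a: str, b: str) -> float:
           return SequenceMatcher(None, a.lower(), b.lower()).ratio()

       correspondences = []
       for e1 in onto_a:
           best = max(onto_b, key=lambda e2: similarity(e1, e2))
           score = similarity(e1, best)
           if score > 0.6:  # arbitrary threshold
               relation = "equivalence" if score > 0.95 else "related"
               correspondences.append((e1, relation, best, round(score, 2)))

       # Contains at least the exact label match ('Publication', 'equivalence', 'Publication', 1.0)
       print(correspondences)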
    Date
    20. 6.2012 19:08:22
    LCSH
    World wide web
    RSWK
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    Subject
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    World wide web
  13. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2014) 0.04
    0.04028248 = product of:
      0.08056496 = sum of:
        0.08056496 = sum of:
          0.030652853 = weight(_text_:web in 1962) [ClassicSimilarity], result of:
            0.030652853 = score(doc=1962,freq=2.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.18028519 = fieldWeight in 1962, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1962)
          0.049912106 = weight(_text_:22 in 1962) [ClassicSimilarity], result of:
            0.049912106 = score(doc=1962,freq=4.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.27358043 = fieldWeight in 1962, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1962)
      0.5 = coord(1/2)
    
    Abstract
    This article reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The article discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the Dewey Decimal Classification [DDC] (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Footnote
     Contribution to the special issue "Beyond libraries: Subject metadata in the digital environment and Semantic Web", containing papers from the IFLA Satellite Post-Conference of the same name, 17-18 August 2012, Tallinn.
  14. Lauser, B.; Johannsen, G.; Caracciolo, C.; Hage, W.R. van; Keizer, J.; Mayr, P.: Comparing human and automatic thesaurus mapping approaches in the agricultural domain (2008) 0.03
    0.03297302 = product of:
      0.06594604 = sum of:
        0.06594604 = sum of:
          0.030652853 = weight(_text_:web in 2627) [ClassicSimilarity], result of:
            0.030652853 = score(doc=2627,freq=2.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.18028519 = fieldWeight in 2627, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2627)
          0.03529319 = weight(_text_:22 in 2627) [ClassicSimilarity], result of:
            0.03529319 = score(doc=2627,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.19345059 = fieldWeight in 2627, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2627)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge organization systems (KOS), like thesauri and other controlled vocabularies, are used to provide subject access to information systems across the web. Due to the heterogeneity of these systems, mapping between vocabularies becomes crucial for retrieving relevant information. However, mapping thesauri is a laborious task, and thus big efforts are being made to automate the mapping process. This paper examines two mapping approaches involving the agricultural thesaurus AGROVOC, one machine-created and one human created. We are addressing the basic question "What are the pros and cons of human and automatic mapping and how can they complement each other?" By pointing out the difficulties in specific cases or groups of cases and grouping the sample into simple and difficult types of mappings, we show the limitations of current automatic methods and come up with some basic recommendations on what approach to use when.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  15. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2009) 0.03
    0.03297302 = product of:
      0.06594604 = sum of:
        0.06594604 = sum of:
          0.030652853 = weight(_text_:web in 3628) [ClassicSimilarity], result of:
            0.030652853 = score(doc=3628,freq=2.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.18028519 = fieldWeight in 3628, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3628)
          0.03529319 = weight(_text_:22 in 3628) [ClassicSimilarity], result of:
            0.03529319 = score(doc=3628,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.19345059 = fieldWeight in 3628, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3628)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: To develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems. Design/methodology/approach: Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies UKAT and ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework. Findings: The major findings showed that given the large variety of terminology resources distributed on the web, the proposed middleware service is essential to integrate technically and semantically the different terminology resources in order to facilitate subject cross-browsing. A set of recommendations are also made outlining the important approaches and features that support such a cross browsing middleware service.
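     The spine approach in the abstract can be illustrated with a minimal, hypothetical mapping table (not the prototype's actual mappings): terms from different vocabularies are anchored to a DDC class, and cross-browsing goes from one vocabulary through the spine to the others:

       # Hypothetical mappings of vocabulary terms onto a DDC "spine" class
       to_spine = {
           ("UKAT", "Information retrieval"): "025.04",
           ("ACM CCS", "Information search and retrieval"): "025.04",
       }

       def cross_browse(vocab: str, term: str) -> list:
           """Return terms from other vocabularies mapped to the same spine class."""
           ddc = to_spine.get((vocab, term))
           if ddc is None:
               return []
           return [key for key, cls in to_spine.items() if cls == ddc and key[0] != vocab]

       print(cross_browse("UKAT", "Information retrieval"))
       # [('ACM CCS', 'Information search and retrieval')]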
    Content
    This paper is a pre-print version presented at the ISKO UK 2009 conference, 22-23 June, prior to peer review and editing. For published proceedings see special issue of Aslib Proceedings journal.
  16. Marcondes, C.H.: Towards a vocabulary to implement culturally relevant relationships between digital collections in heritage institutions (2020) 0.03
    0.03297302 = product of:
      0.06594604 = sum of:
        0.06594604 = sum of:
          0.030652853 = weight(_text_:web in 5757) [ClassicSimilarity], result of:
            0.030652853 = score(doc=5757,freq=2.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.18028519 = fieldWeight in 5757, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5757)
          0.03529319 = weight(_text_:22 in 5757) [ClassicSimilarity], result of:
            0.03529319 = score(doc=5757,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.19345059 = fieldWeight in 5757, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5757)
      0.5 = coord(1/2)
    
    Abstract
     Cultural heritage institutions are publishing their digital collections over the web as LOD. This is a new step in the patrimonialization and curatorial processes developed by such institutions. Many of these collections are thematically superimposed and complementary. Frequently, objects in these collections present culturally relevant relationships, such as a book about a painting, or a draft or sketch of a famous painting, etc. LOD technology enables such heritage records to be interlinked, achieving interoperability and adding value to digital collections, thus empowering heritage institutions. An aim of this research is to characterize such culturally relevant relationships and organize them in a vocabulary. Use cases or examples of relationships between objects suggested by curators or mentioned in the literature and in conceptual models such as FRBR/LRM, CIDOC CRM and RiC-CM were collected and used as examples of, or inspiration for, culturally relevant relationships. The relationships identified are collated and compared to identify those with the same or similar meaning, then synthesized and normalized. A set of thirty-three culturally relevant relationships is identified and formalized as an LOD property vocabulary to be used by digital curators to interlink digital collections. The results presented are provisional and a starting point to be discussed, tested, and enhanced.
    Date
    4. 3.2020 14:22:41
  17. Krause, J.: Semantic heterogeneity : comparing new semantic web approaches with those of digital libraries (2008) 0.03
    0.027630107 = product of:
      0.055260215 = sum of:
        0.055260215 = product of:
          0.11052043 = sum of:
            0.11052043 = weight(_text_:web in 1908) [ClassicSimilarity], result of:
              0.11052043 = score(doc=1908,freq=26.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.65002745 = fieldWeight in 1908, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1908)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - To demonstrate that newer developments in the semantic web community, particularly those based on ontologies (simple knowledge organization system and others) mitigate common arguments from the digital library (DL) community against participation in the Semantic web. Design/methodology/approach - The approach is a semantic web discussion focusing on the weak structure of the Web and the lack of consideration given to the semantic content during indexing. Findings - The points criticised by the semantic web and ontology approaches are the same as those of the DL "Shell model approach" from the mid-1990s, with emphasis on the centrality of its heterogeneity components (used, for example, in vascoda). The Shell model argument began with the "invisible web", necessitating the restructuring of DL approaches. The conclusion is that both approaches fit well together and that the Shell model, with its semantic heterogeneity components, can be reformulated on the semantic web basis. Practical implications - A reinterpretation of the DL approaches of semantic heterogeneity and adapting to standards and tools supported by the W3C should be the best solution. It is therefore recommended that - although most of the semantic web standards are not technologically refined for commercial applications at present - all individual DL developments should be checked for their adaptability to the W3C standards of the semantic web. Originality/value - A unique conceptual analysis of the parallel developments emanating from the digital library and semantic web communities.
    Footnote
     Contribution to a special issue on "Digital libraries and the semantic web: context, applications and research".
    Theme
    Semantic Web
  18. Gracy, K.F.; Zeng, M.L.; Skirvin, L.: Exploring methods to improve access to Music resources by aligning library Data with Linked Data : a report of methodologies and preliminary findings (2013) 0.03
    0.026378417 = product of:
      0.052756835 = sum of:
        0.052756835 = sum of:
          0.024522282 = weight(_text_:web in 1096) [ClassicSimilarity], result of:
            0.024522282 = score(doc=1096,freq=2.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.14422815 = fieldWeight in 1096, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.03125 = fieldNorm(doc=1096)
          0.028234553 = weight(_text_:22 in 1096) [ClassicSimilarity], result of:
            0.028234553 = score(doc=1096,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.15476047 = fieldWeight in 1096, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1096)
      0.5 = coord(1/2)
    
    Abstract
    As a part of a research project aiming to connect library data to the unfamiliar data sets available in the Linked Data (LD) community's CKAN Data Hub (thedatahub.org), this project collected, analyzed, and mapped properties used in describing and accessing music recordings, scores, and music-related information used by selected music LD data sets, library catalogs, and various digital collections created by libraries and other cultural institutions. This article reviews current efforts to connect music data through the Semantic Web, with an emphasis on the Music Ontology (MO) and ontology alignment approaches; it also presents a framework for understanding the life cycle of a musical work, focusing on the central activities of composition, performance, and use. The project studied metadata structures and properties of 11 music-related LD data sets and mapped them to the descriptions commonly used in the library cataloging records for sound recordings and musical scores (including MARC records and their extended schema.org markup), and records from 20 collections of digitized music recordings and scores (featuring a variety of metadata structures). The analysis resulted in a set of crosswalks and a unified crosswalk that aligns these properties. The paper reports on detailed methodologies used and discusses research findings and issues. Topics of particular concern include (a) the challenges of mapping between the overgeneralized descriptions found in library data and the specialized, music-oriented properties present in the LD data sets; (b) the hidden information and access points in library data; and (c) the potential benefits of enriching library data through the mapping of properties found in library catalogs to similar properties used by LD data sets.
    Date
    28.10.2013 17:22:17
  19. Dellschaft, K.; Hachenberg, C.: Repräsentation von Wissensorganisationssystemen im Semantic Web : ein Best Practice Guide (2011) 0.03
    0.026279347 = product of:
      0.052558694 = sum of:
        0.052558694 = product of:
          0.10511739 = sum of:
            0.10511739 = weight(_text_:web in 4782) [ClassicSimilarity], result of:
              0.10511739 = score(doc=4782,freq=12.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.6182494 = fieldWeight in 4782, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4782)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     This document presents terms, principles, and methods that have proven helpful in creating Semantic Web-conformant representations of knowledge organization systems (KOS) such as thesauri and classification schemes. It is aimed at organizations, for example libraries, that want to publish their traditional knowledge organization systems on the Semantic Web. The procedures and principles described here are not to be understood as normative; they are merely meant to help readers profit from the experience gathered so far and to ease the entry into the most important concepts and techniques of the Semantic Web. In many places the document also points to further literature on the topic and to relevant standards and specifications from the Semantic Web domain.
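     A minimal sketch (with a hypothetical concept URI) of the kind of representation the guide addresses: a thesaurus entry modelled as a SKOS concept with a preferred label and a broader relation, built with rdflib and serialized as Turtle:

       from rdflib import Graph, Literal, Namespace
       from rdflib.namespace import RDF, SKOS

       EX = Namespace("http://example.org/thesaurus/")
       g = Graph()
       g.bind("skos", SKOS)

       g.add((EX.SemanticInteroperability, RDF.type, SKOS.Concept))
       g.add((EX.SemanticInteroperability, SKOS.prefLabel,
              Literal("Semantische Interoperabilität", lang="de")))
       g.add((EX.SemanticInteroperability, SKOS.broader, EX.Interoperability))

       print(g.serialize(format="turtle"))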
    Theme
    Semantic Web
  20. Reasoning Web : Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures (2017) 0.03
    0.025416005 = product of:
      0.05083201 = sum of:
        0.05083201 = product of:
          0.10166402 = sum of:
            0.10166402 = weight(_text_:web in 3934) [ClassicSimilarity], result of:
              0.10166402 = score(doc=3934,freq=22.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.59793836 = fieldWeight in 3934, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3934)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This volume contains the lecture notes of the 13th Reasoning Web Summer School, RW 2017, held in London, UK, in July 2017. In 2017, the theme of the school was "Semantic Interoperability on the Web", which encompasses subjects such as data integration, open data management, reasoning over linked data, database to ontology mapping, query answering over ontologies, hybrid reasoning with rules and ontologies, and ontology-based dynamic systems. The papers of this volume focus on these topics and also address foundational reasoning techniques used in answer set programming and ontologies.
    Content
    Neumaier, Sebastian (et al.): Data Integration for Open Data on the Web - Stamou, Giorgos (et al.): Ontological Query Answering over Semantic Data - Calì, Andrea: Ontology Querying: Datalog Strikes Back - Sequeda, Juan F.: Integrating Relational Databases with the Semantic Web: A Reflection - Rousset, Marie-Christine (et al.): Datalog Revisited for Reasoning in Linked Data - Kaminski, Roland (et al.): A Tutorial on Hybrid Answer Set Solving with clingo - Eiter, Thomas (et al.): Answer Set Programming with External Source Access - Lukasiewicz, Thomas: Uncertainty Reasoning for the Semantic Web - Calvanese, Diego (et al.): OBDA for Log Extraction in Process Mining
    RSWK
    Ontologie <Wissensverarbeitung> / Semantic Web
    Series
     Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Subject
    Ontologie <Wissensverarbeitung> / Semantic Web
    Theme
    Semantic Web

Languages

  • e 119
  • d 33
  • pt 1

Types

  • a 98
  • el 51
  • m 13
  • s 7
  • x 6
  • r 4
  • p 2
  • n 1