Search (163 results, page 1 of 9)

  • theme_ss:"Semantische Interoperabilität"
  1. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.11
    0.10684322 = product of:
      0.16026482 = sum of:
        0.06606405 = weight(_text_:wide in 4379) [ClassicSimilarity], result of:
          0.06606405 = score(doc=4379,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.29372054 = fieldWeight in 4379, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4379)
        0.09420077 = sum of:
          0.035840917 = weight(_text_:web in 4379) [ClassicSimilarity], result of:
            0.035840917 = score(doc=4379,freq=2.0), product of:
              0.1656677 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.050763648 = queryNorm
              0.21634221 = fieldWeight in 4379, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.046875 = fieldNorm(doc=4379)
          0.05835985 = weight(_text_:22 in 4379) [ClassicSimilarity], result of:
            0.05835985 = score(doc=4379,freq=4.0), product of:
              0.17776565 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050763648 = queryNorm
              0.32829654 = fieldWeight in 4379, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4379)
      0.6666667 = coord(2/3)
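    The nested breakdown above is Lucene's ClassicSimilarity "explain" output for this result. As an editorial illustration (not part of the catalogue record), the following minimal Python sketch reproduces that arithmetic from the values shown in the tree: each matched term contributes queryWeight x fieldWeight, the term scores are summed, and the sum is scaled by the coordination factor coord(2/3) because only two of the three query clauses matched.

      from math import sqrt

      query_norm = 0.050763648           # queryNorm shared by all terms
      field_norm = 0.046875              # fieldNorm(doc=4379)

      def term_score(term_freq, idf):
          query_weight = idf * query_norm                      # queryWeight = idf * queryNorm
          field_weight = sqrt(term_freq) * idf * field_norm    # tf * idf * fieldNorm
          return query_weight * field_weight

      wide = term_score(2.0, 4.4307585)   # ~0.06606405
      web = term_score(2.0, 3.2635105)    # ~0.03584092
      d22 = term_score(4.0, 3.5018296)    # ~0.05835985 (the token "22")

      score = (wide + (web + d22)) * 2.0 / 3.0                 # coord(2/3)
      print(round(score, 8))              # ~0.10684322, the document score shown above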
    
    Abstract
    On 29 and 30 October 2009 the second international UDC Seminar on the theme "Classification at a Crossroad" took place at the Royal Library in The Hague. As with the first conference of this kind in 2007, it was organized by the UDC Consortium (UDCC). This year's event focused on indexing the World Wide Web through better use of classifications (in particular, of course, the UDC), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search and multilingual access also played a role. 135 participants from 35 countries came to The Hague for the event. With 22 papers from 14 different countries the programme covered a broad range of topics, with the United Kingdom most strongly represented with five contributions. On both conference days the thematic focus was set by the opening lectures, which were then explored in greater depth in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
  2. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.10
    0.09888324 = product of:
      0.14832486 = sum of:
        0.062285792 = weight(_text_:wide in 168) [ClassicSimilarity], result of:
          0.062285792 = score(doc=168,freq=4.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.2769224 = fieldWeight in 168, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=168)
        0.08603907 = sum of:
          0.058527973 = weight(_text_:web in 168) [ClassicSimilarity], result of:
            0.058527973 = score(doc=168,freq=12.0), product of:
              0.1656677 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.050763648 = queryNorm
              0.35328537 = fieldWeight in 168, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.03125 = fieldNorm(doc=168)
          0.027511096 = weight(_text_:22 in 168) [ClassicSimilarity], result of:
            0.027511096 = score(doc=168,freq=2.0), product of:
              0.17776565 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050763648 = queryNorm
              0.15476047 = fieldWeight in 168, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=168)
      0.6666667 = coord(2/3)
    
    Abstract
    Ontologies are viewed as the silver bullet for many applications, but in open or evolving systems, different parties can adopt different ontologies. This increases heterogeneity problems rather than reducing heterogeneity. This book proposes ontology matching as a solution to the problem of semantic heterogeneity, offering researchers and practitioners a uniform framework of reference to currently available work. The techniques presented apply to database schema matching, catalog integration, XML schema matching and more. Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
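    As a reading aid for the notion of "correspondences" described above, here is a minimal, illustrative Python sketch of how a single correspondence between entities of two ontologies might be represented (a pair of entities, a relation and a confidence value); the field names and example URIs are assumptions, not the book's formal notation.

      from dataclasses import dataclass

      @dataclass
      class Correspondence:
          entity1: str       # URI of an entity in ontology O1
          entity2: str       # URI of an entity in ontology O2
          relation: str      # e.g. "=" (equivalence), "<" (subsumption), "disjoint"
          confidence: float  # degree of confidence in [0, 1]

      # A tiny alignment: two candidate correspondences between two ontologies.
      alignment = [
          Correspondence("http://o1.example/Book", "http://o2.example/Monograph", "=", 0.85),
          Correspondence("http://o1.example/Author", "http://o2.example/Creator", "<", 0.70),
      ]
      print(alignment[0])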
    Date
    20. 6.2012 19:08:22
    LCSH
    World wide web
    RSWK
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    Subject
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    World wide web
  3. Smith, A.: Simple Knowledge Organization System (SKOS) (2022) 0.08
    0.08297856 = product of:
      0.124467835 = sum of:
        0.093428686 = weight(_text_:wide in 1094) [ClassicSimilarity], result of:
          0.093428686 = score(doc=1094,freq=4.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.4153836 = fieldWeight in 1094, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
        0.031039147 = product of:
          0.062078293 = sum of:
            0.062078293 = weight(_text_:web in 1094) [ClassicSimilarity], result of:
              0.062078293 = score(doc=1094,freq=6.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.37471575 = fieldWeight in 1094, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1094)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    SKOS (Simple Knowledge Organization System) is a recommendation from the World Wide Web Consortium (W3C) for representing controlled vocabularies, taxonomies, thesauri, classifications, and similar systems for organizing and indexing information as linked data elements in the Semantic Web, using the Resource Description Framework (RDF). The SKOS data model is centered on "concepts", which can have preferred and alternate labels in any language as well as other metadata, and which are identified by addresses on the World Wide Web (URIs). Concepts are grouped into hierarchies through "broader" and "narrower" relations, with "top concepts" at the broadest conceptual level. Concepts are also organized into "concept schemes", also identified by URIs. Other relations, mappings, and groupings are also supported. This article discusses the history of the development of SKOS and provides notes on adoption, uses, and limitations.
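    To make the SKOS data model sketched above concrete, the following short example builds one concept with preferred and alternative labels, a broader relation and a concept scheme using the Python rdflib library (an assumed tool for this illustration; the example.org URIs are placeholders, not a real vocabulary).

      from rdflib import Graph, Literal, URIRef
      from rdflib.namespace import RDF, SKOS

      g = Graph()
      scheme = URIRef("http://example.org/scheme/animals")
      mammal = URIRef("http://example.org/concept/mammal")
      dog = URIRef("http://example.org/concept/dog")

      g.add((dog, RDF.type, SKOS.Concept))
      g.add((dog, SKOS.prefLabel, Literal("dog", lang="en")))       # preferred label
      g.add((dog, SKOS.prefLabel, Literal("Hund", lang="de")))      # ... in any language
      g.add((dog, SKOS.altLabel, Literal("domestic dog", lang="en")))
      g.add((dog, SKOS.broader, mammal))                            # hierarchy
      g.add((dog, SKOS.inScheme, scheme))                           # concept scheme
      g.add((mammal, SKOS.topConceptOf, scheme))                    # top concept

      print(g.serialize(format="turtle"))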
  4. Neubauer, G.: Visualization of typed links in linked data (2017) 0.08
    0.08006412 = product of:
      0.12009617 = sum of:
        0.07785724 = weight(_text_:wide in 3912) [ClassicSimilarity], result of:
          0.07785724 = score(doc=3912,freq=4.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.34615302 = fieldWeight in 3912, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.04223893 = product of:
          0.08447786 = sum of:
            0.08447786 = weight(_text_:web in 3912) [ClassicSimilarity], result of:
              0.08447786 = score(doc=3912,freq=16.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.5099235 = fieldWeight in 3912, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3912)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The thesis deals with visualizations of typed links in Linked Data. The scientific fields that broadly frame the content of the contribution are the Semantic Web, the Web of Data and information visualization. The Semantic Web, which was invented by Tim Berners-Lee in 2001, represents an extension of the World Wide Web (Web 2.0). Current research concerns the linkability of information on the World Wide Web. In order to make such connections perceivable and processable, visualizations are the most important requirement and a central part of data processing. In the context of the Semantic Web, representations of interconnected information are handled by means of graphs. The primary purpose of this work is to describe the design of Linked Data visualization concepts, whose principles are introduced in a theoretical approach. Starting from this context, a step-by-step expansion of the information, with the aim of offering practical guidelines, leads to the interlinking of the design guidelines developed. By describing the designs of two alternative visualizations for a standardized web application that visualizes Linked Data as a network, a test of their compatibility could be carried out. The practical part therefore covers the design phase, the results and the future requirements of the project that emerged from the testing.
    Theme
    Semantic Web
  5. Krause, J.: Shell Model, Semantic Web and Web Information Retrieval (2006) 0.07
    0.066569686 = product of:
      0.09985453 = sum of:
        0.055053383 = weight(_text_:wide in 6061) [ClassicSimilarity], result of:
          0.055053383 = score(doc=6061,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.24476713 = fieldWeight in 6061, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6061)
        0.044801146 = product of:
          0.08960229 = sum of:
            0.08960229 = weight(_text_:web in 6061) [ClassicSimilarity], result of:
              0.08960229 = score(doc=6061,freq=18.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.5408555 = fieldWeight in 6061, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6061)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The mid-1990s were marked by increased enthusiasm for the possibilities of the WWW, which has only recently given way - at least in relation to scientific information - to a more differentiated weighing of its advantages and disadvantages. Web Information Retrieval originated as a specialized discipline with great commercial significance (for an overview see Lewandowski 2005). Besides the new technological structure that enables the indexing and searching (in seconds) of unimaginable amounts of data worldwide, new assessment processes for the ranking of search results are being developed, which use the link structures of the Web. They are the main innovation with respect to the traditional "mother discipline" of Information Retrieval. From the beginning, commercial search engines have applied the link structures of Web pages in a wide array of variations. From the perspective of scientific information, link topology based approaches were in essence trying to solve a self-created problem: on the one hand, it quickly became clear that the openness of the Web led to a previously unknown increase in available information, but this also caused the quality of the Web pages searched to become a problem - and with it the relevance of the results. The gatekeeper function of traditional information providers, which narrows every user query down to high-quality sources, was lacking. Therefore, the recognition of the "authoritativeness" of Web pages by general search engines such as Google was one of the most important factors for their success.
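    The link-topology ranking alluded to above can be pictured with a simplified PageRank-style iteration; the tiny link graph and the damping factor below are assumptions for demonstration and are not taken from the article.

      links = {            # page -> pages it links to
          "A": ["B", "C"],
          "B": ["C"],
          "C": ["A"],
          "D": ["C"],
      }
      pages = list(links)
      rank = {p: 1.0 / len(pages) for p in pages}
      d = 0.85             # damping factor

      for _ in range(50):  # fixed-point iteration
          rank = {
              p: (1 - d) / len(pages)
                 + d * sum(rank[q] / len(links[q]) for q in pages if p in links[q])
              for p in pages
          }

      # Pages with many (well-ranked) incoming links score as more "authoritative".
      print(sorted(rank.items(), key=lambda kv: -kv[1]))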
    Theme
    Semantic Web
  6. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.06
    0.059972547 = product of:
      0.17991763 = sum of:
        0.17991763 = sum of:
          0.08362881 = weight(_text_:web in 8365) [ClassicSimilarity], result of:
            0.08362881 = score(doc=8365,freq=2.0), product of:
              0.1656677 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.050763648 = queryNorm
              0.50479853 = fieldWeight in 8365, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.109375 = fieldNorm(doc=8365)
          0.09628883 = weight(_text_:22 in 8365) [ClassicSimilarity], result of:
            0.09628883 = score(doc=8365,freq=2.0), product of:
              0.17776565 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050763648 = queryNorm
              0.5416616 = fieldWeight in 8365, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=8365)
      0.33333334 = coord(1/3)
    
    Date
    22. 6.2015 16:08:38
  7. Tang, J.; Liang, B.-Y.; Li, J.-Z.: Toward detecting mapping strategies for ontology interoperability (2005) 0.06
    0.056613877 = product of:
      0.084920816 = sum of:
        0.055053383 = weight(_text_:wide in 3367) [ClassicSimilarity], result of:
          0.055053383 = score(doc=3367,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.24476713 = fieldWeight in 3367, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3367)
        0.029867431 = product of:
          0.059734862 = sum of:
            0.059734862 = weight(_text_:web in 3367) [ClassicSimilarity], result of:
              0.059734862 = score(doc=3367,freq=8.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.36057037 = fieldWeight in 3367, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3367)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Ontology mapping is one of the core tasks for ontology interoperability. It aims to find semantic relationships between entities (i.e. concepts, attributes, and relations) of two ontologies. It benefits many applications, such as the integration of ontology-based web data sources and the interoperability of agents or web services. To reduce users' effort as much as possible, (semi-)automatic ontology mapping is becoming more and more important to bring this about. In the existing literature, many approaches have attracted considerable interest by combining several different similarity/mapping strategies (so-called multi-strategy based mapping). However, experiments show that multi-strategy based mapping does not always outperform its single-strategy counterpart. In this paper, we mainly aim to deal with two problems: (1) for a new, unseen mapping task, should we select a multi-strategy based algorithm or just one single-strategy based algorithm? (2) if the task is suited to multi-strategy mapping, how should the strategies be selected for the final combined scenario? We propose an approach for detecting multiple strategies for ontology mapping. The results obtained so far show that multi-strategy detection improves precision and recall significantly.
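    The multi-strategy idea discussed above can be pictured as combining several element-level similarity measures into one score. The sketch below mixes a string-based and a token-overlap strategy with fixed weights; the strategies, weights and example entities are illustrative assumptions, not the authors' method.

      from difflib import SequenceMatcher

      def label_similarity(a, b):
          # String-based strategy: edit-distance-style similarity of labels.
          return SequenceMatcher(None, a.lower(), b.lower()).ratio()

      def comment_similarity(a, b):
          # Naive linguistic strategy: token overlap of comments/definitions.
          ta, tb = set(a.lower().split()), set(b.lower().split())
          return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

      def combined(e1, e2, w_label=0.6, w_comment=0.4):
          return (w_label * label_similarity(e1["label"], e2["label"])
                  + w_comment * comment_similarity(e1["comment"], e2["comment"]))

      e1 = {"label": "Author", "comment": "person who wrote the work"}
      e2 = {"label": "Creator", "comment": "agent or person who created the work"}
      print(round(combined(e1, e2), 3))   # accept as a mapping if above a threshold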
    Content
    Paper presented at the Workshop on The Semantic Computing Initiative (SeC 2005) --- From Semantic Web to Semantic World --- held in conjunction with The 14th Int'l Conf. on World Wide Web (WWW2005); see: http://www.instsec.org/2005ws/.
  8. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.05
    0.054748103 = product of:
      0.082122155 = sum of:
        0.06718844 = product of:
          0.20156533 = sum of:
            0.20156533 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.20156533 = score(doc=1000,freq=2.0), product of:
                0.43037477 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050763648 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
        0.014933716 = product of:
          0.029867431 = sum of:
            0.029867431 = weight(_text_:web in 1000) [ClassicSimilarity], result of:
              0.029867431 = score(doc=1000,freq=2.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.18028519 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. See: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
  9. Piscitelli, F.A.: Library linked data models : library data in the Semantic Web (2019) 0.05
    0.053946227 = product of:
      0.08091934 = sum of:
        0.055053383 = weight(_text_:wide in 5478) [ClassicSimilarity], result of:
          0.055053383 = score(doc=5478,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.24476713 = fieldWeight in 5478, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5478)
        0.025865955 = product of:
          0.05173191 = sum of:
            0.05173191 = weight(_text_:web in 5478) [ClassicSimilarity], result of:
              0.05173191 = score(doc=5478,freq=6.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.3122631 = fieldWeight in 5478, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5478)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This exploratory study examined Linked Data (LD) schemas/ontologies and data models proposed or in use by libraries around the world using MAchine Readable Cataloging (MARC) as a basis for comparison of the scope and extensibility of these potential new standards. The researchers selected 14 libraries from national libraries, academic libraries, government libraries, public libraries, multi-national libraries, and cultural heritage centers currently developing Library Linked Data (LLD) schemas. The choices of models, schemas, and elements used in each library's LD can create interoperability issues for LD services because of substantial differences between schemas and data models evolving via local decisions. The researchers observed that a wide variety of vocabularies and ontologies were used for LLD including common web schemas such as Dublin Core (DC)/DCTerms, Schema.org and Resource Description Framework (RDF), as well as deprecated schemas such as MarcOnt and rdagroup1elements. A sharp divide existed as well between LLD schemas using variations of the Functional Requirements for Bibliographic Records (FRBR) data model and those with different data models or even with no listed data model. Libraries worldwide are not using the same elements or even the same ontologies, schemas and data models to describe the same materials using the same general concepts.
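    The interoperability issue observed in the study (the same material described with different vocabularies) can be illustrated with a small rdflib sketch: the same book is described once with DCTerms and once with Schema.org terms, and a consumer that only knows one vocabulary misses the other description. The library, URIs and property choices are assumptions for illustration, not data from the study.

      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import DCTERMS, RDF

      SCHEMA = Namespace("https://schema.org/")
      book = URIRef("http://example.org/book/123")

      dc_view = Graph()
      dc_view.add((book, DCTERMS.title, Literal("Ontology Matching")))
      dc_view.add((book, DCTERMS.creator, Literal("Euzenat, J.")))

      schema_view = Graph()
      schema_view.add((book, RDF.type, SCHEMA.Book))
      schema_view.add((book, SCHEMA.name, Literal("Ontology Matching")))
      schema_view.add((book, SCHEMA.author, Literal("Euzenat, J.")))

      # A consumer that only looks for dcterms:title finds nothing in the second view:
      print(list(schema_view.objects(book, DCTERMS.title)))   # []
      print(list(schema_view.objects(book, SCHEMA.name)))     # [Literal("Ontology Matching")]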
    Theme
    Semantic Web
  10. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.05
    0.052924983 = product of:
      0.15877494 = sum of:
        0.15877494 = sum of:
          0.11063052 = weight(_text_:web in 4184) [ClassicSimilarity], result of:
            0.11063052 = score(doc=4184,freq=14.0), product of:
              0.1656677 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.050763648 = queryNorm
              0.6677857 = fieldWeight in 4184, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4184)
          0.048144415 = weight(_text_:22 in 4184) [ClassicSimilarity], result of:
            0.048144415 = score(doc=4184,freq=2.0), product of:
              0.17776565 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050763648 = queryNorm
              0.2708308 = fieldWeight in 4184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4184)
      0.33333334 = coord(1/3)
    
    Abstract
    The Internet as a medium is changing, and with it its conditions of publication and reception. What opportunities do the currently parallel-discussed visions of the Social Web and the Semantic Web offer? To answer this question, the article examines the foundations of both models in terms of application and technology, but also highlights their shortcomings as well as the added value of a combination appropriate to the medium. Using the grammatical online information system grammis as an example, a strategy for the integrative use of their respective strengths is outlined.
    Date
    22. 1.2011 10:38:28
    Source
    Kommunikation, Partizipation und Wirkungen im Social Web, Band 1. Hrsg.: A. Zerfaß u.a
    Theme
    Semantic Web
  11. Haslhofer, B.: ¬A Web-based mapping technique for establishing metadata interoperability (2008) 0.04
    0.04246226 = product of:
      0.06369339 = sum of:
        0.03892862 = weight(_text_:wide in 3173) [ClassicSimilarity], result of:
          0.03892862 = score(doc=3173,freq=4.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.17307651 = fieldWeight in 3173, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3173)
        0.024764767 = product of:
          0.049529534 = sum of:
            0.049529534 = weight(_text_:web in 3173) [ClassicSimilarity], result of:
              0.049529534 = score(doc=3173,freq=22.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.29896918 = fieldWeight in 3173, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3173)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The integration of metadata from distinct, heterogeneous data sources requires metadata interoperability, which is a qualitative property of metadata information objects that is not given by default. The technique of metadata mapping allows domain experts to establish metadata interoperability in a certain integration scenario. Mapping solutions, as a technical manifestation of this technique, are already available for the intensively studied domain of database system interoperability, but they rarely exist for the Web. If we consider the amount of steadily increasing structured metadata and corresponding metadata schemes on the Web, we can observe a clear need for a mapping solution that can operate in a Web-based environment. To achieve that, we first need to build its technical core, which is a mapping model that provides the language primitives to define mapping relationships. Existing Semantic Web languages such as RDFS and OWL define some basic mapping elements (e.g., owl:equivalentProperty, owl:sameAs), but do not address the full spectrum of semantic and structural heterogeneities that can occur among distinct, incompatible metadata information objects. Furthermore, it is still unclear how to process defined mapping relationships during run-time in order to deliver metadata to the client in a uniform way. As the main contribution of this thesis, we present an abstract mapping model, which reflects the mapping problem on a generic level and provides the means for reconciling incompatible metadata. Instance transformation functions and URIs take a central role in that model. The former cover a broad spectrum of possible structural and semantic heterogeneities, while the latter bind the complete mapping model to the architecture of the World Wide Web. On the concrete, language-specific level we present a binding of the abstract mapping model for the RDF Vocabulary Description Language (RDFS), which allows us to create mapping specifications among incompatible metadata schemes expressed in RDFS. The mapping model is embedded in a cyclic process that categorises the requirements a mapping solution should fulfil into four subsequent phases: mapping discovery, mapping representation, mapping execution, and mapping maintenance. In this thesis, we mainly focus on mapping representation and on the transformation of mapping specifications into executable SPARQL queries. For mapping discovery support, the model provides an interface for plugging in schema and ontology matching algorithms. For mapping maintenance we introduce the concept of a simple, but effective mapping registry. Based on the mapping model, we propose a Web-based mediator-wrapper architecture that allows domain experts to set up mediation endpoints that provide a uniform SPARQL query interface to a set of distributed metadata sources. The involved data sources are encapsulated by wrapper components that expose the contained metadata and the schema definitions on the Web and provide a SPARQL query interface to these metadata. In this thesis, we present the OAI2LOD Server, a wrapper component for integrating metadata that are accessible via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). In a case study, we demonstrate how mappings can be created in a Web environment and how our mediator-wrapper architecture can easily be configured in order to integrate metadata from various heterogeneous data sources without the need to install any mapping solution or metadata integration solution in a local system environment.
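    As a rough illustration of the general idea of executing mappings as queries (not the thesis' own mapping language), the sketch below states one metadata record with dc:creator and rewrites it to foaf:maker via a SPARQL CONSTRUCT query in the Python rdflib library; the library, namespaces and query are assumptions for this example.

      from rdflib import Graph, Literal, URIRef
      from rdflib.namespace import DC

      g = Graph()
      doc = URIRef("http://example.org/doc/1")
      g.add((doc, DC.creator, Literal("Haslhofer, B.")))

      # "dc:creator corresponds to foaf:maker", expressed as a transformation query.
      construct = """
      PREFIX dc:   <http://purl.org/dc/elements/1.1/>
      PREFIX foaf: <http://xmlns.com/foaf/0.1/>
      CONSTRUCT { ?doc foaf:maker ?name }
      WHERE     { ?doc dc:creator ?name }
      """
      for triple in g.query(construct).graph:
          print(triple)   # the document now carries foaf:maker "Haslhofer, B."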
    Content
    The integration of metadata from distinct, heterogeneous data sources requires metadata interoperability, a property that is not given by default. Metadata mapping techniques enable domain experts to establish metadata interoperability in a specific integration context, and mapping solutions are meant to provide the necessary support. While such solutions already exist for the established field of interoperable databases, this is not the case for Web environments. Considering the amount of steadily growing structured metadata and metadata schemas on the Web, a need for Web-based mapping solutions becomes apparent. The core of such a solution is a mapping model that defines the language constructs required to specify mappings. Existing Semantic Web languages such as RDFS or OWL do offer basic mapping elements (e.g. owl:equivalentProperty, owl:sameAs), but they do not address the full spectrum of semantic and structural heterogeneities that can occur between distinct, incompatible metadata objects. In addition, technical approaches for turning previously defined mappings into executable queries are missing. As the central scientific contribution of this dissertation, an abstract mapping model is presented that reflects the mapping problem on a generic level and offers approaches for reconciling incompatible schemas. Instance transformation functions and URIs play a central role in this model: the former bridge a broad spectrum of possible semantic and structural heterogeneities, while the latter tie the mapping model into the architecture of the World Wide Web. On a concrete, language-specific level, a binding of the abstract model to the RDF Vocabulary Description Language (RDFS) is presented, which enables mappings between different metadata schemas expressed in RDFS. The mapping model is embedded in a cyclic mapping process that categorizes the requirements for mapping solutions into four successive phases: mapping discovery, mapping representation, mapping execution and mapping maintenance. This dissertation focuses mainly on the representation phase and on the transformation of mapping specifications into executable SPARQL queries. To support the discovery phase, the mapping model offers an interface for integrating schema or ontology matching algorithms. For the maintenance phase, a simple but effective mapping registry concept is presented. Based on the mapping model, a Web-based mediator-wrapper architecture is proposed that allows domain experts to define SPARQL mediation endpoints. The data sources to be integrated must be encapsulated by wrapper components, which expose the contained metadata on the Web and provide SPARQL access to them. As an example of such a wrapper component, the OAI2LOD Server is presented, which allows the integration of data sources that expose their metadata via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). In a case study we show how mappings can be created in Web environments and how, after a few simple configuration steps, our mediator-wrapper architecture can integrate metadata from distinct, heterogeneous data sources without the need to install a mapping or metadata integration solution in a local system environment.
  12. Tennis, J.T.: Versioning concept schemes for persistent retrieval (2006) 0.04
    0.04062552 = product of:
      0.060938276 = sum of:
        0.044042703 = weight(_text_:wide in 1956) [ClassicSimilarity], result of:
          0.044042703 = score(doc=1956,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.1958137 = fieldWeight in 1956, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1956)
        0.016895572 = product of:
          0.033791143 = sum of:
            0.033791143 = weight(_text_:web in 1956) [ClassicSimilarity], result of:
              0.033791143 = score(doc=1956,freq=4.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.2039694 = fieldWeight in 1956, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1956)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Things change. Words change, meaning changes and use changes both words and meaning. In information access systems this means concept schemes such as thesauri or classification schemes change. They always have. Concept schemes that have survived have evolved over time, moving from one version, often called an edition, to the next. If we want to manage how words and meanings - and as a consequence use - change in an effective manner, and if we want to be able to search across versions of concept schemes, we have to track these changes. This paper explores how we might expand SKOS, a World Wide Web Consortium (W3C) draft recommendation, in order to do that kind of tracking. The Simple Knowledge Organization System (SKOS) Core Guide is sponsored by the Semantic Web Best Practices and Deployment Working Group. The second draft, edited by Alistair Miles and Dan Brickley, was issued in November 2005. SKOS is a "model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, other types of controlled vocabulary and also concept schemes embedded in glossaries and terminologies" in RDF. How SKOS handles versioning in concept schemes is an open issue. The current draft guide suggests using OWL and DCTERMS as mechanisms for concept scheme revision. As it stands, an editor of a concept scheme can make notes or declare in OWL that more than one version exists. This paper adds to the SKOS Core by introducing a tracking system for changes in concept schemes. We call this tracking system vocabulary ontogeny. Ontogeny is a biological term for the development of an organism during its lifetime. Here we use the ontogeny metaphor to describe how vocabularies change over their lifetime. Our purpose here is to create a conceptual mechanism that will track these changes and in so doing enhance information retrieval and prevent document loss through versioning, thereby enabling persistent retrieval.
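    The kind of change tracking argued for above can be pictured with a small sketch that diffs one concept between two editions of a scheme and records the changes; the data and field names are invented for illustration and do not reproduce the paper's "ontogeny" mechanism.

      v1 = {"prefLabel": "Aeroplanes", "broader": {"Vehicles"}}
      v2 = {"prefLabel": "Aircraft", "broader": {"Vehicles", "Flying machines"}}

      changes = []
      if v1["prefLabel"] != v2["prefLabel"]:
          changes.append(("prefLabel changed", v1["prefLabel"], v2["prefLabel"]))
      for added in v2["broader"] - v1["broader"]:
          changes.append(("broader added", None, added))
      for removed in v1["broader"] - v2["broader"]:
          changes.append(("broader removed", removed, None))

      # A per-version change log like this is what would let a system retrieve
      # documents indexed under either edition of the vocabulary.
      print(changes)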
  13. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.04
    0.0401897 = product of:
      0.12056909 = sum of:
        0.12056909 = sum of:
          0.07242467 = weight(_text_:web in 759) [ClassicSimilarity], result of:
            0.07242467 = score(doc=759,freq=6.0), product of:
              0.1656677 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.050763648 = queryNorm
              0.43716836 = fieldWeight in 759, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0546875 = fieldNorm(doc=759)
          0.048144415 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
            0.048144415 = score(doc=759,freq=2.0), product of:
              0.17776565 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050763648 = queryNorm
              0.2708308 = fieldWeight in 759, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=759)
      0.33333334 = coord(1/3)
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
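    Why XML alone does not deliver semantic interoperability can be shown with a tiny example: two schemas use different element names for the same fact, so a purely syntactic consumer misses half the data until an explicit equivalence is declared. The element names and the mapping table below are illustrative assumptions, not the paper's SHOE language.

      import xml.etree.ElementTree as ET

      doc_a = ET.fromstring("<record><author>Heflin, J.</author></record>")
      doc_b = ET.fromstring("<record><creator>Hendler, J.</creator></record>")

      # Without shared semantics, a consumer looking for <author> misses doc_b:
      print([e.text for e in doc_a.iter("author")])   # ['Heflin, J.']
      print([e.text for e in doc_b.iter("author")])   # []

      # The missing piece is a machine-readable statement of equivalence:
      mapping = {"author": "author", "creator": "author"}
      merged = [(mapping[e.tag], e.text)
                for doc in (doc_a, doc_b) for e in doc if e.tag in mapping]
      print(merged)   # [('author', 'Heflin, J.'), ('author', 'Hendler, J.')]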
    Date
    11. 5.2013 19:22:18
    Theme
    Semantic Web
  14. Balakrishnan, U.; Voß, J.: ¬The Cocoda mapping tool (2015) 0.04
    0.039629713 = product of:
      0.05944457 = sum of:
        0.03853737 = weight(_text_:wide in 4205) [ClassicSimilarity], result of:
          0.03853737 = score(doc=4205,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.171337 = fieldWeight in 4205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4205)
        0.020907203 = product of:
          0.041814405 = sum of:
            0.041814405 = weight(_text_:web in 4205) [ClassicSimilarity], result of:
              0.041814405 = score(doc=4205,freq=8.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.25239927 = fieldWeight in 4205, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4205)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Since the 1990s we have seen an explosion of information, and with it an increasing need for data and information aggregation systems that store and manage information. However, most information sources apply different Knowledge Organization Systems (KOS) to describe the content of stored data. This heterogeneous mix of KOS in different systems complicates access and the seamless sharing of information and knowledge. Concordances, also known as cross-concordances or terminology mappings, map different KOS to each other to improve information retrieval in such a heterogeneous mix of systems (Mayr 2010, Keil 2012). Mappings are also considered a valuable and essential working tool for coherent indexing with different terminologies. However, despite efforts at standardization (e.g. SKOS, ISO 25964-2, Keil 2012, Soergel 2011), there is a significant scarcity of concordances, which has led to an inability to establish uniform exchange formats as well as methods and tools for maintaining mappings and making them easily accessible. This is particularly true in the field of library classification schemes. In essence, there is a lack of infrastructure for the provision and exchange of concordances, their management and quality assessment, as well as tools that would enable semi-automatic generation of mappings. The project "coli-conc" therefore aims to address this gap by creating the necessary infrastructure. This includes the specification of a data format for the exchange of concordances (JSKOS), the specification and implementation of web APIs to query concordance databases (JSKOS-API), and a modular web application to enable uniform access to knowledge organization systems, concordances and concordance assessments (Cocoda).
    The focus of the project "coli-conc" lies in the semi-automatic creation of mappings between different KOS in general, and between two important library classification schemes in particular: the Dewey Decimal Classification (DDC) and the Regensburg Classification (RVK). In the year 2000, the national libraries of Germany, Austria and Switzerland adopted the DDC in an endeavor to develop a nation-wide classification scheme. Historically, however, academic libraries in the German-speaking regions have been using their own home-grown systems, the most prominent and popular being the RVK. With the launch of the DDC, building concordances between DDC and RVK has become imperative, although it is still rare. The delay in building comprehensive concordances between these two systems is due to the major challenges posed by their sheer size (38,000 classes in DDC and ca. 860,000 classes in RVK), the strong disparity in their respective structures, and the variation in the perception and representation of concepts. The challenge is compounded geometrically for any manual attempt in this direction. Although there have been efforts on automatic mappings (OAEI Library Track 2012 -- 2014 and e.g. Pfeffer 2013) in recent years, such concordances carry the risk of inaccurate mappings, and the approaches are more suitable for mapping suggestions than for automatic generation of concordances (Lauser 2008; Reiner 2010). The project "coli-conc" will facilitate the creation, evaluation, and reuse of mappings with a public collection of concordances and a web application for mapping management. The proposed presentation will give an introduction to the tools and standards created and planned in the project "coli-conc". This includes preliminary work on DDC concordances (Balakrishnan 2013), an overview of the software concept and technical architecture (Voß 2015), and a demonstration of the Cocoda web application.
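    To give an impression of what an exchangeable concordance entry of the kind described above might look like, here is a rough JSON sketch in the spirit of JSKOS; the field names, URIs and match type are assumptions for illustration, so the JSKOS specification remains the normative reference.

      import json

      mapping = {
          "from": {"memberSet": [{"uri": "http://example.org/ddc/020",
                                  "notation": ["020"]}]},
          "to": {"memberSet": [{"uri": "http://example.org/rvk/AN-50000",
                                "notation": ["AN 50000"]}]},
          "type": ["http://www.w3.org/2004/02/skos/core#closeMatch"],
      }
      print(json.dumps(mapping, indent=2))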
  15. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.04
    0.03826275 = product of:
      0.057394125 = sum of:
        0.027526692 = weight(_text_:wide in 4232) [ClassicSimilarity], result of:
          0.027526692 = score(doc=4232,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.122383565 = fieldWeight in 4232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
        0.029867431 = product of:
          0.059734862 = sum of:
            0.059734862 = weight(_text_:web in 4232) [ClassicSimilarity], result of:
              0.059734862 = score(doc=4232,freq=32.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.36057037 = fieldWeight in 4232, product of:
                  5.656854 = tf(freq=32.0), with freq of:
                    32.0 = termFreq=32.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4232)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    After the launch of the World Wide Web, it became clear that searching documents on the Web would not be trivial. Well-known engines to search the web, like Google, focus on search in web documents using keywords. The documents are structured and indexed to ensure keywords match documents as accurately as possible. However, searching by keywords does not always suffice. It is often the case that users do not know exactly how to formulate the search query or which keywords guarantee retrieving the most relevant documents. Besides that, it occurs that users rather want to browse information than look up something specific. It turned out that there is a need for systems that enable more interactivity and facilitate the gradual refinement of search queries to explore the Web. Users expect more from the Web because the short keyword-based queries they pose during search do not suffice for all cases. On top of that, the Web is changing structurally. The Web comprises, apart from a collection of documents, more and more linked data, pieces of information structured so they can be processed by machines. The consequently applied semantics allow users to indicate their search intentions to machines exactly. This is made possible by describing data following controlled vocabularies, concept lists composed by experts, published on the Web with unique identifiers. Even so, it is still not trivial to explore data on the Web. There is a large variety of vocabularies, and various data sources use different terms to identify the same concepts.
    This PhD thesis describes how to effectively explore linked data on the Web. The main focus is on scenarios where users want to discover relationships between resources rather than finding out more about something specific. Searching for a specific document or piece of information fits in the theoretical framework of information retrieval and is associated with exploratory search. Exploratory search goes beyond 'looking up something' when users are seeking more detailed understanding, further investigation or navigation of the initial search results. The ideas behind exploratory search and querying linked data merge when it comes to the way knowledge is represented and indexed by machines - how data is structured and stored for optimal searchability. Queries and information should be aligned to facilitate that searches also reveal connections between results. This implies that they take into account the same semantic entities, relevant at that moment. To realize this, we research three techniques that are evaluated one by one in an experimental set-up to assess how well they succeed in their goals. In the end, the techniques are applied to a practical use case that focuses on forming a bridge between the Web and the use of digital libraries in scientific research. Our first technique focuses on the interactive visualization of search results. Linked data resources can be brought in relation with each other at will. This leads to complex and diverse graph structures. Our technique facilitates navigation and supports a workflow starting from a broad overview of the data and allows narrowing down until the desired level of detail, to then broaden again. To validate the flow, two visualizations were implemented and presented to test users. The users judged the usability of the visualizations, how the visualizations fit in the workflow and to which degree their features seemed useful for the exploration of linked data.
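    The relationship discovery described above can be pictured as path finding over typed links: a path between two resources is itself an explanation of how they relate. The sketch below uses the networkx library on an invented miniature graph; nodes and edge labels are placeholders, not data from the thesis.

      import networkx as nx

      G = nx.Graph()
      G.add_edge("dbr:Tim_Berners-Lee", "dbr:World_Wide_Web", label="inventor_of")
      G.add_edge("dbr:World_Wide_Web", "dbr:Semantic_Web", label="extended_by")
      G.add_edge("dbr:Semantic_Web", "dbr:SKOS", label="uses")

      path = nx.shortest_path(G, "dbr:Tim_Berners-Lee", "dbr:SKOS")
      for a, b in zip(path, path[1:]):
          print(a, "--", G.edges[a, b]["label"], "-->", b)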
    Theme
    Semantic Web
  16. Mao, M.: Ontology mapping : towards semantic interoperability in distributed and heterogeneous environments (2008) 0.04
    0.03732645 = product of:
      0.055989675 = sum of:
        0.044042703 = weight(_text_:wide in 4659) [ClassicSimilarity], result of:
          0.044042703 = score(doc=4659,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.1958137 = fieldWeight in 4659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=4659)
        0.011946972 = product of:
          0.023893945 = sum of:
            0.023893945 = weight(_text_:web in 4659) [ClassicSimilarity], result of:
              0.023893945 = score(doc=4659,freq=2.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.14422815 = fieldWeight in 4659, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4659)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This dissertation studies ontology mapping: the problem of finding semantic correspondences between similar elements of different ontologies. In the dissertation, elements denote classes or properties of ontologies. The goal of this research is to use ontology mapping to make heterogeneous information more accessible. The World Wide Web (WWW) is now widely used as a universal medium for information exchange. Semantic interoperability among different information systems in the WWW is limited due to information heterogeneity and the non-semantic nature of HTML and URLs. Ontologies have been suggested as a way to solve the problem of information heterogeneity by providing formal, explicit definitions of data and reasoning ability over related concepts. Given that no universal ontology exists for the WWW, work has focused on finding semantic correspondences between similar elements of different ontologies, i.e., ontology mapping. Ontology mapping can be done either by hand or using automated tools. Manual mapping becomes impractical as the size and complexity of ontologies increase. Full or semi-automated mapping approaches have been examined by several research studies. Previous full or semi-automated mapping approaches include analyzing linguistic information of elements in ontologies, treating ontologies as structural graphs, applying heuristic rules and machine learning techniques, and using probabilistic and reasoning methods, etc. In this work, two generic ontology mapping approaches are proposed. One is the PRIOR+ approach, which utilizes both information retrieval and artificial intelligence techniques in the context of ontology mapping. The other is the non-instance learning based approach, which experimentally explores machine learning algorithms to solve the ontology mapping problem without requiring any instances. The results of PRIOR+ on different tests at the OAEI ontology matching campaign 2007 are encouraging. The non-instance learning based approach has shown potential for solving the ontology mapping problem on OAEI benchmark tests.
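    The information-retrieval flavour of matching mentioned above can be illustrated by turning each ontology element into a bag-of-words "profile" (label plus comment) and ranking candidates by cosine similarity. This is a generic baseline sketch, not the PRIOR+ algorithm itself; the example elements are invented.

      from collections import Counter
      from math import sqrt

      def profile(element):
          return Counter((element["label"] + " " + element["comment"]).lower().split())

      def cosine(p, q):
          dot = sum(p[t] * q[t] for t in p)
          norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
          return dot / norm if norm else 0.0

      source = {"label": "Article", "comment": "a written work published in a journal"}
      candidates = [
          {"label": "JournalArticle", "comment": "work published in a scholarly journal"},
          {"label": "Person", "comment": "a human being"},
      ]
      ranked = sorted(candidates, key=lambda e: cosine(profile(source), profile(e)), reverse=True)
      print([e["label"] for e in ranked])   # JournalArticle is the better candidate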
  17. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.03
    0.031400256 = product of:
      0.09420077 = sum of:
        0.09420077 = sum of:
          0.035840917 = weight(_text_:web in 1967) [ClassicSimilarity], result of:
            0.035840917 = score(doc=1967,freq=2.0), product of:
              0.1656677 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.050763648 = queryNorm
              0.21634221 = fieldWeight in 1967, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
          0.05835985 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
            0.05835985 = score(doc=1967,freq=4.0), product of:
              0.17776565 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050763648 = queryNorm
              0.32829654 = fieldWeight in 1967, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and /or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Source
    Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn
  18. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.03
    0.031354606 = product of:
      0.09406382 = sum of:
        0.09406382 = product of:
          0.28219146 = sum of:
            0.28219146 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.28219146 = score(doc=306,freq=2.0), product of:
                0.43037477 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050763648 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    See: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  19. Galinski, C.: Fragen der semantischen Interoperabilität brechen jetzt überall auf (o.J.) 0.03
    0.030651119 = product of:
      0.09195335 = sum of:
        0.09195335 = sum of:
          0.05068671 = weight(_text_:web in 4183) [ClassicSimilarity], result of:
            0.05068671 = score(doc=4183,freq=4.0), product of:
              0.1656677 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.050763648 = queryNorm
              0.3059541 = fieldWeight in 4183, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.046875 = fieldNorm(doc=4183)
          0.041266643 = weight(_text_:22 in 4183) [ClassicSimilarity], result of:
            0.041266643 = score(doc=4183,freq=2.0), product of:
              0.17776565 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050763648 = queryNorm
              0.23214069 = fieldWeight in 4183, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4183)
      0.33333334 = coord(1/3)
    
    Content
    "Der Begriff der semantischen Interoperabilität ist aufgetreten mit dem Semantic Web, einer Konzeption von Tim Berners-Lee, der sagt, das zunehmend die Computer miteinander über hochstandardisierte Sprachen, die wenig mit Natürlichsprachlichkeit zu tun haben, kommunizieren werden. Was er nicht sieht, ist dass rein technische Interoperabilität nicht ausreicht, um die semantische Interoperabilität herzustellen." ... "Der Begriff der semantischen Interoperabilität ist aufgetreten mit dem Semantic Web, einer Konzeption von Tim Berners-Lee, der sagt, das zunehmend die Computer miteinander über hochstandardisierte Sprachen, die wenig mit Natürlichsprachlichkeit zu tun haben, kommunizieren werden. Was er nicht sieht, ist dass rein technische Interoperabilität nicht ausreicht, um die semantische Interoperabilität herzustellen."
    Date
    22. 1.2011 10:16:32
  20. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.03
    0.029986273 = product of:
      0.08995882 = sum of:
        0.08995882 = sum of:
          0.041814405 = weight(_text_:web in 3283) [ClassicSimilarity], result of:
            0.041814405 = score(doc=3283,freq=2.0), product of:
              0.1656677 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.050763648 = queryNorm
              0.25239927 = fieldWeight in 3283, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3283)
          0.048144415 = weight(_text_:22 in 3283) [ClassicSimilarity], result of:
            0.048144415 = score(doc=3283,freq=2.0), product of:
              0.17776565 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050763648 = queryNorm
              0.2708308 = fieldWeight in 3283, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3283)
      0.33333334 = coord(1/3)
    
    Theme
    Semantic Web

Languages

  • e 126
  • d 33
  • pt 1

Types

  • a 102
  • el 53
  • m 13
  • s 7
  • x 6
  • r 5
  • p 2
  • n 1