Search (80 results, page 1 of 4)

  • Filter: theme_ss:"Semantische Interoperabilität"
  1. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.25
    
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
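    The relevance figure after each hit is a Lucene ClassicSimilarity (tf-idf) score. As an illustrative sketch (not the engine itself), a single per-term weight can be recomputed from the engine's explain parameters for the top hit; freq, docFreq, maxDocs, fieldNorm, and queryNorm below are taken from that breakdown:

    ```python
    import math

    def classic_similarity(freq, doc_freq, max_docs, field_norm, query_norm):
        """Recompute one ClassicSimilarity term weight (illustrative sketch)."""
        tf = math.sqrt(freq)                             # 1.4142135 for freq=2
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 8.478011 for docFreq=24
        query_weight = idf * query_norm                  # idf * queryNorm
        field_weight = tf * idf * field_norm             # tf * idf * fieldNorm
        return query_weight * field_weight

    score = classic_similarity(freq=2.0, doc_freq=24, max_docs=44218,
                               field_norm=0.0546875, query_norm=0.038207654)
    print(round(score, 6))  # ≈ 0.212394, the per-term weight for hit 1
    ```

    The final document score additionally multiplies in coord factors (the fraction of query terms matched), which is how the per-term weights combine into the 0.25 shown on the title line.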
  2. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.18
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the accompanying presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
  3. Amarger, F.; Chanet, J.-P.; Haemmerlé, O.; Hernandez, N.; Roussey, C.: SKOS sources transformations for ontology engineering : agronomical taxonomy use case (2014) 0.03
    
    Abstract
    Sources like thesauri or taxonomies are already used as input in the ontology development process, and some of them are also published as Linked Open Data in the SKOS format. Reusing this type of source to build an ontology is not an easy task: the ontology developer has to face different syntaxes and different modelling goals. In this paper we propose a new methodology to transform several non-ontological sources into a single ontology. We take into account the redundancy of the knowledge extracted from the sources, in order to discover consensual knowledge, and use Ontology Design Patterns (ODPs) to guide the transformation process. We have evaluated our methodology by creating an ontology on wheat taxonomy from three sources: the Agrovoc thesaurus, the TaxRef taxonomy, and the NCBI taxonomy.
    Source
    Metadata and semantics research: 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings. Eds.: S. Closs et al
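    The redundancy heuristic in the abstract above (keep knowledge attested by more than one source) can be sketched as a simple vote over subclass assertions extracted from the taxonomies. The triples and the two-of-three threshold here are illustrative assumptions, not the authors' actual pipeline:

    ```python
    from collections import Counter

    def consensual_triples(sources, min_support=2):
        """Keep only (child, parent) assertions attested by at least
        min_support of the input taxonomies (illustrative sketch)."""
        votes = Counter(t for triples in sources for t in set(triples))
        return {t for t, n in votes.items() if n >= min_support}

    # Hypothetical extracts from Agrovoc, TaxRef, and the NCBI taxonomy:
    agrovoc = [("Triticum aestivum", "Triticum")]
    taxref  = [("Triticum aestivum", "Triticum"), ("Triticum durum", "Triticum")]
    ncbi    = [("Triticum durum", "Triticum")]

    # Each assertion is attested by two of the three sources, so both survive:
    print(consensual_triples([agrovoc, taxref, ncbi]))
    ```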
  4. Gabler, S.: Thesauri - a Toolbox for Information Retrieval (2023) 0.01
    
    Source
    Bibliothek: Forschung und Praxis. 47(2023) H.2, S.189-199
  5. Ehrig, M.; Studer, R.: Wissensvernetzung durch Ontologien (2006) 0.01
    
    Abstract
    In computer science, ontologies are formal models of an application domain that support communication between human and/or machine actors and thus facilitate the exchange and sharing of knowledge within organizations. Using ontologies for the structured representation of knowledge has therefore become increasingly widespread in recent years; thousands of ontologies already exist worldwide. To enable interoperability between software agents or web services built on them, semantic integration of the ontologies is an indispensable prerequisite. As is easy to see, purely manual creation of the mappings is no longer feasible beyond a certain size, complexity, and rate of change of the ontologies; automatic or semi-automatic techniques must support the user. The integration problem has occupied research and industry for many years, for example in database integration. What is new, however, is the possibility of drawing on the complex semantic information contained in ontologies. This chapter introduces a general six-step process for ontology integration based on these semantic structures. Extensions address efficiency and optimal user involvement in the process. In addition, two applications in which the process has been successfully deployed are presented, and a concluding summary points to current trends. Since the approaches can in principle be transferred to any schema with a semantic basis, the scope of this research extends well beyond pure ontology applications.
  6. Mayr, P.; Petras, V.: Crosskonkordanzen : Terminologie Mapping und deren Effektivität für das Information Retrieval 0.01
    
    Abstract
    The German Federal Ministry of Education and Research funded a major initiative for the creation of cross-concordances, which was completed in 2007. The initiative's task was the organization, creation, and management of cross-concordances between controlled vocabularies (thesauri, classifications, descriptor lists) in the social sciences and other fields. 64 cross-concordances comprising more than 500,000 relations were implemented. In the project's final phase, an extensive evaluation was carried out to test the effectiveness of the cross-concordances in different information systems. The article reports on the cross-concordance work and the evaluation results.
  7. Binz, V.; Rühle, S.: KIM - Das Kompetenzzentrum Interoperable Metadaten (2009) 0.01
    
    Source
    Bibliothek: Forschung und Praxis. 33(2009) H.3, S.370-374
  8. Shaw, R.; Rabinowitz, A.; Golden, P.; Kansa, E.: Report on and demonstration of the PeriodO period gazetteer (2015) 0.01
    
    Abstract
    The PeriodO period gazetteer documents definitions of historical period names. Each entry of the gazetteer identifies the definition of a single period. To be included in the gazetteer, a definition must a) give the period a name, b) impose some temporal bounds on the period, c) have some implicit or explicit association with a geographical region, and d) have been formally or informally published in some citable source. Much care has been put into giving period definitions stable identifiers that can be resolved to RDF representations of period definitions. Anyone can propose additions of new definitions to PeriodO, and we have implemented an open source web service and browser-based client for distributed versioning and collaborative maintenance of the gazetteer.
  9. Krötzsch, M.; Hitzler, P.; Ehrig, M.; Sure, Y.: Category theory in ontology research : concrete gain from an abstract approach (2004 (?)) 0.01
    
    Abstract
    The focus of research on representing and reasoning with knowledge has traditionally been on single specifications and appropriate inference paradigms to draw conclusions from such data. Accordingly, this is also an essential aspect of ontology research, which has received much attention in recent years. But ontologies introduce a new challenge based on the distributed nature of most of their applications, which requires relating heterogeneous ontological specifications and integrating information from multiple sources. These problems have of course been recognized, but many current approaches still lack the deep formal backgrounds on which today's reasoning paradigms are founded. Here we propose category theory as a well-explored and very extensive mathematical foundation for modelling distributed knowledge. A particular prospect is to derive conclusions from the structure of those distributed knowledge bases, as is needed, for example, when merging ontologies.
  10. Staub, P.: Semantische Interoperabilität : Voraussetzung für Geodaten-Infrastrukturen (2009) 0.01
    
    Abstract
    The shared, integrated use of distributed, heterogeneous geodata is a major challenge. One important approach to data integration is the development of geodata infrastructures (GDIs), which provide a technical and organizational framework for the interoperable use of distributed geodata at the regional, national, or international level. GDIs place high demands on interoperability that go beyond "classical" interoperability via OGC web services. One of the main problems is the semantic heterogeneity of existing data models. To use these in an integrated fashion within a GDI, it must be possible to define and execute model transformations. A current research approach offers a solution by combining OGC web services with a new concept for semantic model transformations.
  11. Borst, T.: Repositorien auf ihrem Weg in das Semantic Web : semantisch hergeleitete Interoperabilität als Zielstellung für künftige Repository-Entwicklungen (2014) 0.01
    
    Source
    Bibliothek: Forschung und Praxis. 38(2014) H.2, S.257-265
  12. Kim, J.-M.; Shin, H.; Kim, H.-J.: Schema and constraints-based matching and merging of Topic Maps (2007) 0.01
    
    Abstract
    In this paper, we propose a multi-strategic matching and merging approach to find correspondences between ontologies based on the syntactic or semantic characteristics and constraints of Topic Maps. Our multi-strategic matching approach consists of a linguistic module and a Topic Maps constraints-based module. The linguistic module computes similarities between concepts using morphological analysis, string normalization, tokenization, and language-dependent heuristics. The Topic Maps constraints-based module takes advantage of several Topic Maps-specific techniques such as topic property-based, hierarchy-based, and association-based matching. This composite matching procedure need not generate the cross-product of all topic pairs from the two ontologies, because unmatched pairs can be pruned using the characteristics and constraints of the Topic Maps. Merging follows the matching operations: we define a MERGE function to integrate two Topic Maps into a new Topic Map that satisfies merge requirements such as entity preservation, property preservation, relation preservation, and conflict resolution. For our experiments, we used oriental philosophy ontologies, western philosophy ontologies, the Yahoo western philosophy dictionary, and the Wikipedia philosophy ontology as input ontologies. Our experiments show that the automatically generated matching results conform to the outputs generated manually by domain experts and can be of great benefit to the subsequent merging operations.
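    The linguistic module's normalization-and-tokenization step can be approximated by a token-overlap measure. The normalization rules and the Jaccard score below are stand-in assumptions for illustration, not the paper's actual language-dependent heuristics:

    ```python
    import re

    def normalize(label):
        """Lowercase, strip punctuation, and tokenize a topic label."""
        return set(re.findall(r"[a-z0-9]+", label.lower()))

    def label_similarity(a, b):
        """Jaccard overlap of normalized tokens (illustrative stand-in
        for the linguistic module's similarity computation)."""
        ta, tb = normalize(a), normalize(b)
        return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

    print(label_similarity("Western Philosophy", "philosophy, western"))  # → 1.0
    print(label_similarity("Topic Maps", "Ontology"))                     # → 0.0
    ```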
  13. Mayr, P.; Mutschke, P.; Petras, V.: Reducing semantic complexity in distributed digital libraries : Treatment of term vagueness and document re-ranking (2008) 0.01
    
    Abstract
    Purpose - The general science portal "vascoda" merges structured, high-quality information collections from more than 40 providers on the basis of search engine technology (FAST) and a concept which treats semantic heterogeneity between different controlled vocabularies. First experiences with the portal show some weaknesses of this approach, which surface in most metadata-driven Digital Libraries (DLs) or subject-specific portals. The purpose of the paper is to propose models to reduce the semantic complexity in heterogeneous DLs. The aim is to introduce value-added services (treatment of term vagueness and document re-ranking) that gain a certain quality in DLs if they are combined with heterogeneity components established in the project "Competence Center Modeling and Treatment of Semantic Heterogeneity". Design/methodology/approach - Two methods, derived from scientometrics and network analysis, will be implemented with the objective of re-ranking result sets by the following structural properties: ranking of the results by core journals (so-called Bradfordizing) and ranking by centrality of authors in co-authorship networks. Findings - The methods focus on the query and on the result side of a search and are designed to positively influence each other. Conceptually, they will improve the search quality and guarantee that the most relevant documents in result sets will be ranked higher. Originality/value - The central impact of the paper lies in the integration of three structural value-adding methods, which aim at reducing the semantic complexity represented in distributed DLs at several stages of the information retrieval process: query construction, search and ranking, and re-ranking.
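    Bradfordizing, as mentioned in the abstract above, re-ranks a result set so that articles from the few "core" journals (those contributing the most hits) come first. A minimal sketch, where the journal counts and the stable tie-breaking by original retrieval order are illustrative assumptions:

    ```python
    from collections import Counter

    def bradfordize(results):
        """Re-rank (doc_id, journal) hits so that documents from
        high-frequency ('core') journals come first; Python's stable
        sort keeps the original retrieval order among ties."""
        freq = Counter(journal for _, journal in results)
        return sorted(results, key=lambda r: -freq[r[1]])

    # Hypothetical result set: JASIST contributes three of the five hits.
    hits = [("doc1", "J.Doc"), ("doc2", "JASIST"), ("doc3", "JASIST"),
            ("doc4", "Rare J."), ("doc5", "JASIST")]
    print(bradfordize(hits)[0])  # → ('doc2', 'JASIST'), a core-journal hit
    ```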
  14. Tang, J.; Liang, B.-Y.; Li, J.-Z.: Toward detecting mapping strategies for ontology interoperability (2005) 0.01
    
    Abstract
    Ontology mapping is one of the core tasks for ontology interoperability. It aims to find semantic relationships between the entities (i.e. concepts, attributes, and relations) of two ontologies, and it benefits many applications, such as the integration of ontology-based web data sources and the interoperability of agents or web services. To reduce users' effort as much as possible, (semi-)automatic ontology mapping is becoming more and more important. In the existing literature, many approaches have found considerable interest by combining several different similarity/mapping strategies (multi-strategy based mapping). However, experiments show that multi-strategy based mapping does not always outperform its single-strategy counterpart. In this paper, we mainly aim to deal with two problems: (1) for a new, unseen mapping task, should we select a multi-strategy based algorithm or a single-strategy based algorithm? (2) if the task is suitable for multi-strategy mapping, how should the strategies for the final combined scenario be selected? We propose an approach of multiple-strategy detection for ontology mapping. The results obtained so far show that multi-strategy detection improves precision and recall significantly.
  15. Suchowolec, K.; Lang, C.; Schneider, R.: Re-designing online terminology resources for German grammar (2016) 0.01
    
    Abstract
    The compilation of terminological vocabularies plays a central role in the organization and retrieval of scientific texts. Both simple keyword lists and sophisticated modellings of relationships between terminological concepts can make a most valuable contribution to the analysis, classification, and finding of appropriate digital documents, either on the Web or within local repositories. This seems especially true for long-established scientific fields with various theoretical and historical branches, such as linguistics, where the use of terminology within documents from different origins is sometimes far from consistent. In this short paper, we report on the early stages of a project that aims at the re-design of an existing domain-specific KOS for grammatical content, grammis. In particular, we deal with the terminological part of grammis and present the state of the art of this online resource as well as the key re-design principles. Further, we raise questions regarding the ramifications of the Linked Open Data and Semantic Web approaches for our re-design decisions.
  16. Köbler, J.; Niederklapfer, T.: Kreuzkonkordanzen zwischen RVK-BK-MSC-PACS der Fachbereiche Mathematik und Physik (2010) 0.01
    
    Date
    29. 3.2011 10:47:10
    29. 3.2011 10:57:42
    Pages
    22 p.
  17. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.01
    
    Abstract
    On 29 and 30 October 2009, the second international UDC seminar, on the theme "Classification at a Crossroad", took place at the Royal Library in The Hague. Like the first conference of this kind in 2007, it was organized by the UDC Consortium (UDCC). This year's event focused on indexing the World Wide Web through better use of classifications (the UDC in particular), including user-friendly representations of information and knowledge; standards, new technologies and services, semantic search, and multilingual access also played a role. 135 participants from 35 countries came to The Hague. The programme covered a broad range with 22 papers from 14 different countries, with the United Kingdom most strongly represented at five contributions. On both conference days the thematic focus was set by the opening talks, which were then explored in depth in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
  18. Semenova, E.; Stricker, M.: ¬Eine Ontologie der Wissenschaftsdisziplinen : Entwicklung eines Instrumentariums für die Wissenskommunikation (2007) 0.01
    
    Abstract
    Interdisciplinarity, as a hallmark of modern scholarly life, presupposes efficient scholarly communication in research and teaching, in which the partners can reach an understanding through a common language. Classifications and thesauri play an important role here. It can be observed, however, that existing instruments are too inflexible in their structure to adequately represent the complexly interwoven fields of science in their dynamic development, to contribute to a (self-)understanding of the nature and structure of the scholarly landscape, or to support successful knowledge exchange. Ontologies open up new ways of addressing these tasks. In some individual disciplines and subfields there is lively activity in this regard, but a cross-disciplinary instrument is still lacking. This talk presents the DFG-funded project "Entwicklung einer Ontologie der Wissenschaftsdisziplinen" (Development of an Ontology of Academic Disciplines). Its aim is to close the gap described above and to create a comprehensive ontology for indexing, retrieval and data exchange in scholarly communication. This ontology is intended to support efficient knowledge communication, especially in interdisciplinary projects, to make available resources findable, and to highlight possible junctions for future cooperation. Starting from a critique of existing instruments, a conceptual model is currently being developed for describing academic disciplines, their central facets, and their interdisciplinary relationships with one another. The model, inspired by the Topic Maps paradigm, is based on a manageable set of central concepts together with relations that are, in principle, inverse. A corresponding ontology will be expressible in a variety of (technical) description formats. This lays the foundation for the project's focus: developing flexible, distributed, user- and maintenance-friendly technical implementations and deploying them with cooperation partners.
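The abstract above describes a model built from a small set of central concepts and relations that are inverse in principle. A minimal sketch of that idea, assuming hypothetical relation names and discipline labels (none of this is taken from the cited project), might store each asserted relation together with its automatically derived inverse:

```python
# Hypothetical relation names and their inverses (illustrative only).
INVERSES = {
    "subdiscipline_of": "has_subdiscipline",
    "uses_method_of": "provides_method_for",
}

class DisciplineModel:
    """Tiny Topic-Maps-inspired store of (subject, relation, object) triples."""

    def __init__(self):
        self.triples = set()

    def assert_relation(self, subj, rel, obj):
        # Store the relation and, if an inverse is defined, its inverse too.
        self.triples.add((subj, rel, obj))
        inv = INVERSES.get(rel)
        if inv:
            self.triples.add((obj, inv, subj))

    def related(self, subj, rel):
        # All objects reachable from subj via rel.
        return {o for s, r, o in self.triples if s == subj and r == rel}

m = DisciplineModel()
m.assert_relation("Bioinformatik", "subdiscipline_of", "Informatik")
m.assert_relation("Bioinformatik", "uses_method_of", "Statistik")

# The inverse direction was never asserted explicitly, yet is queryable:
print(m.related("Informatik", "has_subdiscipline"))
```

Keeping the inverse pairs in one table mirrors the "manageable set of central concepts" the abstract mentions: each new relation type costs exactly one entry, and both query directions come for free.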
  19. Stempfhuber, M.; Zapilko, M.B.: ¬Ein Ebenenmodell für die semantische Integration von Primärdaten und Publikationen in Digitalen Bibliotheken (2013)
    
    Abstract
    Digital libraries currently face the challenge of meeting the changed information needs of their scholarly users: offering integrated access to different types of information (e.g. publications, primary data, researcher and organisation profiles, research project information), which are increasingly available in digital form, and making these available in virtual research environments. The resulting challenges of structural and semantic heterogeneity stem from a wide field of different metadata standards, subject indexing methods, and indexing approaches for different types of information. So far, however, no generally accepted, integrating model exists for the organisation and retrieval of knowledge in digital libraries. This paper surveys current research developments and activities that address the problem of semantic interoperability in digital libraries, and presents a model for integrated search across textual data (e.g. publications) and factual data (e.g. primary data) that takes up various approaches from current research and relates them to one another. Embedded in the research cycle, traditional subject indexing methods for publications here meet newer ontology-based approaches, which appear better suited to representing more complex information and relationships (e.g. in social-science survey data). The advantages of the model are (1) the easy reusability of existing knowledge organisation systems and (2) low effort in concept modelling by means of ontologies.
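The core of the integrated-search idea in this abstract is that textual records (publications) and factual records (primary data) are both indexed against shared concepts, so a single query retrieves both information types. A minimal sketch under that assumption (the record structure and concept labels are invented for illustration, not taken from the paper):

```python
# Two information types, both mapped to shared concepts from a
# (hypothetical) knowledge organisation system.
publications = [
    {"id": "pub1", "type": "publication", "concepts": {"unemployment", "migration"}},
    {"id": "pub2", "type": "publication", "concepts": {"education"}},
]
primary_data = [
    {"id": "ds1", "type": "dataset", "concepts": {"unemployment"}},
]

def integrated_search(concept):
    """Return every record indexed with the concept, regardless of type."""
    return [r for r in publications + primary_data if concept in r["concepts"]]

# One query, two information types in the result set:
hits = integrated_search("unemployment")
print([(r["id"], r["type"]) for r in hits])
```

The point of the sketch is the shared concept layer: because both record types reference the same vocabulary, the search logic itself needs no knowledge of whether a hit is a publication or a dataset, which is what makes existing knowledge organisation systems reusable across types.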
  20. Gracy, K.F.: Enriching and enhancing moving images with Linked Data : an exploration in the alignment of metadata models (2018)
    
    Abstract
    The purpose of this paper is to examine the current state of Linked Data (LD) in archival moving image description, and propose ways in which current metadata records can be enriched and enhanced by interlinking such metadata with relevant information found in other data sets.
    Design/methodology/approach: Several possible metadata models for moving image production and archiving are considered, including models from records management, digital curation, and the recent BIBFRAME AV Modeling Study. This research also explores how mappings between archival moving image records and relevant external data sources might be drawn, and what gaps exist between current vocabularies and what is needed to record and make accessible the full lifecycle of archiving through production, use, and reuse.
    Findings: The author notes several major impediments to implementation of LD for archival moving images. The various pieces of information about creators, places, and events found in moving image records are not easily connected to relevant information in other sources because they are often not semantically defined within the record and can be hidden in unstructured fields. Libraries, archives, and museums must work on aligning the various vocabularies and schemas of potential value for archival moving image description to enable interlinking between vocabularies currently in use and those which are used by external data sets. Alignment of vocabularies is often complicated by mismatches in granularity between vocabularies.
    Research limitations/implications: The focus is on how these models inform functional requirements for access and other archival activities, and how the field might benefit from having a common metadata model for critical archival descriptive activities.
    Practical implications: By having a shared model, archivists may more easily align current vocabularies and develop new vocabularies and schemas to address the needs of moving image data creators and scholars.
    Originality/value: Moving image archives, like other cultural institutions with significant heritage holdings, can benefit tremendously from investing in the semantic definition of information found in their information databases. While commercial entities such as search engines and data providers have already embraced the opportunities that semantic search provides for resource discovery, most non-commercial entities are just beginning to do so. Thus, this research addresses the benefits and challenges of enriching and enhancing archival moving image records with semantically defined information via LD.
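A key obstacle named in the findings is that creator names sit in unstructured fields and so cannot be linked directly to external data sets. A hedged sketch of one common workaround, matching on normalised name forms before minting a link (all names, URIs, and the `normalize` helper here are invented for illustration, not the paper's method):

```python
def normalize(name):
    """Reduce a free-text name to an order- and case-insensitive key."""
    parts = [p.strip() for p in name.replace(",", " ").split()]
    return " ".join(sorted(p.lower() for p in parts))

# Hypothetical external data set: canonical label -> Linked Data URI.
external_dataset = {
    "Welles, Orson": "http://example.org/entity/orson-welles",
}
# Index the external side under the same normalised key.
index = {normalize(label): uri for label, uri in external_dataset.items()}

def link_creator(free_text_name):
    """Return a URI if the normalised forms match, else None."""
    return index.get(normalize(free_text_name))

# "Orson Welles" and "Welles, Orson" normalise to the same key,
# so the unstructured record field can be interlinked:
print(link_creator("Orson Welles"))
```

Real alignments would of course add authority control and disambiguation on top of string normalisation; the sketch only shows why semantically undefined fields force this extra matching step in the first place.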

Languages

  • e 56
  • d 24

Types

  • a 55
  • el 23
  • m 4
  • s 3
  • x 3
  • p 1
  • r 1