Search (104 results, page 1 of 6)

  • Active filter: theme_ss:"Semantische Interoperabilität"
  1. Metadata and semantics research : 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings (2014) 0.05
    Abstract
    This book constitutes the refereed proceedings of the 8th Metadata and Semantics Research Conference, MTSR 2014, held in Karlsruhe, Germany, in November 2014. The 23 full papers and 9 short papers presented were carefully reviewed and selected from 57 submissions. The papers are organized in several sessions and tracks. They cover the following topics: metadata and linked data: tools and models; (meta) data quality assessment and curation; semantic interoperability, ontology-based data access and representation; big data and digital libraries in health, science and technology; metadata and semantics for open repositories, research information systems and data infrastructure; metadata and semantics for cultural collections and applications; semantics for agriculture, food and environment.
    Content
    Metadata and linked data.- Tools and models.- (Meta)data quality assessment and curation.- Semantic interoperability, ontology-based data access and representation.- Big data and digital libraries in health, science and technology.- Metadata and semantics for open repositories, research information systems and data infrastructure.- Metadata and semantics for cultural collections and applications.- Semantics for agriculture, food and environment.
    LCSH
    Computer science
    Text processing (Computer science)
    Series
    Communications in computer and information science; 478
    Subject
    Computer science
    Text processing (Computer science)
  2. Metadata and semantics research : 9th Research Conference, MTSR 2015, Manchester, UK, September 9-11, 2015, Proceedings (2015) 0.04
    LCSH
    Computer science
    Text processing (Computer science)
    Series
    Communications in computer and information science; 544
    Subject
    Computer science
    Text processing (Computer science)
  3. Dobrev, P.; Kalaydjiev, O.; Angelova, G.: From conceptual structures to semantic interoperability of content (2007) 0.04
    Abstract
    Smart applications behave intelligently because they understand, at least partially, the context in which they operate. To do this, they need not only a formal domain model but also formal descriptions of the data they process and of their own operational behaviour. Interoperability of smart applications is based on formalised definitions of all their data and processes. This paper studies the semantic interoperability of data in the case of eLearning and describes an experiment and its assessment. New content is imported into a knowledge-based learning environment without real updates of the original domain model, which is encoded as a knowledge base of conceptual graphs. A component called a mediator enables the import by assigning dummy metadata annotations to the imported items. However, some functionality of the original system is lost when processing the imported content, owing to the lack of proper metadata annotation, which cannot be associated fully automatically. The paper therefore presents an interoperability scenario in which appropriate content items are viewed from the perspective of the original world and can be (partially) reused there.
    Series
    Lecture notes in computer science: Lecture notes in artificial intelligence ; 4604
    Source
    Conceptual structures: knowledge architectures for smart applications: 15th International Conference on Conceptual Structures, ICCS 2007, Sheffield, UK, July 22-27, 2007; proceedings. Eds.: U. Priss et al.
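    A minimal sketch of the mediator idea described in this entry: imported items receive placeholder ("dummy") metadata annotations so the host system can handle them without a domain-model update. All class, field and concept names below are hypothetical, not taken from the paper.

```python
# Hypothetical mediator: imports external content by attaching dummy
# metadata annotations, standing in for annotations that would normally
# be derived from the conceptual-graph knowledge base.
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    item_id: str
    payload: str
    metadata: dict = field(default_factory=dict)

class Mediator:
    DUMMY = {"concept": "UnclassifiedTopic", "source": "external-import"}

    def import_item(self, item: ContentItem, repository: dict) -> None:
        if not item.metadata:                 # no usable annotation supplied
            item.metadata = dict(self.DUMMY)  # placeholder annotation
        repository[item.item_id] = item

repo: dict = {}
Mediator().import_item(ContentItem("ex1", "Lesson on conceptual graphs"), repo)
print(repo["ex1"].metadata)  # {'concept': 'UnclassifiedTopic', 'source': 'external-import'}
```

    As the abstract notes, such placeholder annotations cost functionality: anything in the host system that depends on proper metadata degrades for the imported items.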
  4. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.03
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
  5. Levergood, B.; Farrenkopf, S.; Frasnelli, E.: ¬The specification of the language of the field and interoperability : cross-language access to catalogues and online libraries (CACAO) (2008) 0.03
    Abstract
    The CACAO Project (Cross-language Access to Catalogues and Online Libraries) has been designed to implement natural language processing and cross-language information retrieval techniques to provide cross-language access to information in libraries, a critical issue in the linguistically diverse European Union. This project report addresses two metadata-related challenges for the library community in this context: "false friends" (identical words having different meanings in different languages) and term ambiguity. The possible solutions involve enriching the metadata with attributes specifying language or the source authority file, or associating potential search terms to classes in a classification system. The European Library will evaluate an early implementation of this work in late 2008.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
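    One of the two solutions the report proposes, enriching the metadata with a language attribute, amounts in code to keying the term index on (term, language) pairs instead of bare strings. A toy illustration with a classic false friend; the index contents are invented:

```python
# "Gift" is a false friend: present in English, poison in German.
# Keying the index on (term, language) keeps the two senses apart.
term_index = {
    ("gift", "en"): ["concept:Present"],
    ("gift", "de"): ["concept:Poison"],
}

def lookup(term: str, lang: str) -> list:
    return term_index.get((term.lower(), lang), [])

print(lookup("Gift", "de"))  # ['concept:Poison'] - not the English sense
print(lookup("Gift", "en"))  # ['concept:Present']
```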
  6. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.03
    Abstract
    One of the major problems facing systems for Computer Aided Design (CAD), Architecture Engineering and Construction (AEC) and Geographic Information Systems (GIS) applications today is the lack of interoperability among the various systems. When integrating software applications, substantial difficulties can arise in translating information from one application to the other. In this paper, we focus on semantic difficulties that arise in software integration. Applications may use different terminologies to describe the same domain. Even when applications use the same terminology, they often associate different semantics with the terms. This obstructs information exchange among applications. To circumvent this obstacle, we need some way of explicitly specifying the semantics for each terminology in an unambiguous fashion. Ontologies can provide such specification. It will be the task of this paper to explain what ontologies are and how they can be used to facilitate interoperability between software systems used in computer aided design, architecture engineering and construction, and geographic information processing.
    Date
    3.12.2016 18:39:22
  7. Zhang, X.: Concept integration of document databases using different indexing languages (2006) 0.03
    Abstract
    An integrated information retrieval system generally contains multiple databases that are inconsistent in terms of their content and indexing. This paper proposes a rough set-based transfer (RST) model for integration of the concepts of document databases using various indexing languages, so that users can search through the multiple databases using any of the current indexing languages. The RST model aims to effectively create meaningful transfer relations between the terms of two indexing languages, provided a number of documents are indexed with them in parallel. In our experiment, the indexing concepts of two databases respectively using the Thesaurus of Social Science (IZ) and the Schlagwortnormdatei (SWD) are integrated by means of the RST model. Finally, this paper compares the results achieved with a cross-concordance method, a conditional probability based method and the RST model.
    Source
    Information processing and management. 42(2006) no.1, S.121-135
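    The rough set-based transfer model itself is too involved for a few lines, but the conditional-probability baseline the paper compares against is easy to sketch: from documents indexed in parallel with both vocabularies, estimate P(target term | source term) and keep pairs above a threshold. The data and term names below are invented:

```python
# Conditional-probability baseline for term transfer between two indexing
# languages, estimated from parallel-indexed documents (invented data).
from collections import Counter, defaultdict

# Each document: (terms from vocabulary A, terms from vocabulary B)
parallel_docs = [
    ({"Migration"}, {"Wanderung"}),
    ({"Migration", "Arbeitsmarkt"}, {"Wanderung", "Arbeitsmarkt"}),
    ({"Arbeitsmarkt"}, {"Beschäftigung"}),
]

count_a = Counter()
co_occur = defaultdict(Counter)
for terms_a, terms_b in parallel_docs:
    for a in terms_a:
        count_a[a] += 1
        for b in terms_b:
            co_occur[a][b] += 1

def transfer_relations(term_a: str, threshold: float = 0.5) -> dict:
    """Target terms b with estimated P(b | term_a) >= threshold."""
    n = count_a[term_a]
    return {b: c / n for b, c in co_occur[term_a].items() if c / n >= threshold}

print(transfer_relations("Migration"))  # {'Wanderung': 1.0, 'Arbeitsmarkt': 0.5}
```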
  8. Vlachidis, A.; Tudhope, D.: ¬A knowledge-based approach to information extraction for semantic interoperability in the archaeology domain (2016) 0.02
    Abstract
    The article presents a method for automatic semantic indexing of archaeological grey-literature reports using empirical (rule-based) Information Extraction techniques in combination with domain-specific knowledge organization systems. The semantic annotation system (OPTIMA) performs the tasks of Named Entity Recognition, Relation Extraction, Negation Detection, and Word-Sense Disambiguation using hand-crafted rules and terminological resources for associating contextual abstractions with classes of the standard ontology CIDOC Conceptual Reference Model (CRM) for cultural heritage and its archaeological extension, CRM-EH. Relation Extraction (RE) performance benefits from a syntactic-based definition of RE patterns derived from domain-oriented corpus analysis. The evaluation also shows clear benefit in the use of assistive natural language processing (NLP) modules relating to Word-Sense Disambiguation, Negation Detection, and Noun Phrase Validation, together with controlled thesaurus expansion. The semantic indexing results demonstrate the capacity of rule-based Information Extraction techniques to deliver interoperable semantic abstractions (semantic annotations) with respect to the CIDOC CRM and archaeological thesauri. Major contributions include recognition of relevant entities using shallow parsing NLP techniques driven by a complementary use of ontological and terminological domain resources, and empirical derivation of context-driven RE rules for the recognition of semantic relationships from phrases of unstructured text.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.5, S.1138-1152
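    The terminological side of this approach, surface forms from domain thesauri mapped to ontology classes, can be pictured with a toy gazetteer matcher. The CRM/CRM-EH class labels below are illustrative only, and the real system layers hand-crafted rules, disambiguation and negation detection on top of this:

```python
# Toy gazetteer-driven entity recognition: thesaurus surface forms are
# mapped to (illustrative) CIDOC CRM / CRM-EH style class labels.
import re

gazetteer = {
    "hearth":  "crm-eh:Context",
    "pit":     "crm-eh:Context",
    "roman":   "crm:E52.Time-Span",
    "pottery": "crm:E19.Physical_Object",
}

pattern = re.compile(r"\b(" + "|".join(map(re.escape, gazetteer)) + r")\b",
                     re.IGNORECASE)

def annotate(text: str) -> list:
    return [(m.group(0), gazetteer[m.group(0).lower()])
            for m in pattern.finditer(text)]

report = "A Roman pit containing pottery was cut by a later hearth."
print(annotate(report))
# [('Roman', 'crm:E52.Time-Span'), ('pit', 'crm-eh:Context'),
#  ('pottery', 'crm:E19.Physical_Object'), ('hearth', 'crm-eh:Context')]
```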
  9. Victorino, M.; Terto de Holanda, M.; Ishikawa, E.; Costa Oliveira, E.; Chhetri, S.: Transforming open data to linked open data using ontologies for information organization in big data environments of the Brazilian Government : the Brazilian database Government Open Linked Data - DBgoldbr (2018) 0.02
    Abstract
    The Brazilian Government has made a massive volume of structured, semi-structured and unstructured public data available on the web to ensure that the administration is as transparent as possible. Subsequently, providing applications with enough capability to handle this "big data environment", so that vital and decisive information is readily accessible, has become a tremendous challenge. In this environment, data processing is done via new approaches in the area of information and computer science, involving technologies and processes for collecting, representing, storing and disseminating information. Along these lines, this paper presents a conceptual model, the technical architecture and the prototype implementation of a tool, named DBgoldbr, designed to classify government public information with the help of ontologies, by transforming open data into linked open data. To achieve this objective, we used soft systems methodology to identify problems, to collect users' needs and to design solutions according to the objectives of specific groups. The DBgoldbr tool was designed to facilitate the search for open data made available by many Brazilian government institutions, so that this data can be reused to support the evaluation and monitoring of social programs, in order to support the design and management of public policies.
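    The core step described here, turning tabular open data into linked open data by minting URIs and typing rows against an ontology, can be sketched with rdflib. The namespace, ontology class and column names below are invented, not DBgoldbr's:

```python
# Hedged sketch of an open-data -> linked-open-data step: each CSV row
# becomes an RDF resource typed against an (invented) ontology class.
import csv
import io
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/gov/")
rows = io.StringIO("id,program,year\n42,Bolsa Familia,2018\n")

g = Graph()
g.bind("ex", EX)
for row in csv.DictReader(rows):
    subject = URIRef(EX["program/" + row["id"]])          # mint a URI per row
    g.add((subject, RDF.type, EX.SocialProgram))          # type against ontology
    g.add((subject, RDFS.label, Literal(row["program"])))
    g.add((subject, EX.year, Literal(int(row["year"]))))

print(g.serialize(format="turtle"))
```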
  10. Cheng, Y.-Y.; Xia, Y.: ¬A systematic review of methods for aligning, mapping, merging taxonomies in information sciences (2023) 0.02
    Abstract
    The purpose of this study is to provide a systematic literature review on taxonomy alignment methods in information science to explore the common research pipeline and characteristics.
    Design/methodology/approach: The authors implement a five-step systematic literature review process relating to taxonomy alignment. They take a knowledge organization system (KOS) perspective, specifically examining the KOS level of "taxonomies".
    Findings: They synthesize the matching dimensions of 28 taxonomy alignment studies in terms of the taxonomy input, approach and output. In the input dimension, they develop three characteristics: tree shapes, variable names and symmetry; for approach: methodology, unit of matching, comparison type and relation type; for output: the number of merged solutions and whether the original taxonomies are preserved in the solutions.
    Research limitations/implications: The main research implications of this study are threefold: (1) to enhance the understanding of the characteristics of a taxonomy alignment work; (2) to provide a novel categorization of taxonomy alignment approaches into natural language processing, logic-based and heuristic-based approaches; (3) to provide a methodological guideline on the must-include characteristics for future taxonomy alignment research.
    Originality/value: There is no existing comprehensive review on the alignment of taxonomies, and no other mapping survey has discussed the comparison from a KOS perspective. Using a KOS lens is critical to understanding the broader picture of what other similar systems of organization exist, and enables us to define taxonomies more precisely.
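    Of the three approach families the review names, the heuristic-based one is the simplest to illustrate: align two taxonomies by a label-normalization heuristic. Real systems add structural and logical evidence; the taxonomies below are invented:

```python
# Toy heuristic-based taxonomy alignment: match nodes by normalized
# label equality (invented taxonomies; real systems use richer evidence).
def normalize(label: str) -> str:
    return label.lower().rstrip("s").replace("-", " ")

taxonomy_a = {"Databases": "A.1", "Data Mining": "A.2", "E-Science": "A.3"}
taxonomy_b = {"database": "B.7", "data mining": "B.8", "Archives": "B.9"}

alignment = [
    (id_a, id_b)
    for label_a, id_a in taxonomy_a.items()
    for label_b, id_b in taxonomy_b.items()
    if normalize(label_a) == normalize(label_b)
]
print(alignment)  # [('A.1', 'B.7'), ('A.2', 'B.8')]
```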
  11. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.02
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  12. Sakr, S.; Wylot, M.; Mutharaju, R.; Le-Phuoc, D.; Fundulaki, I.: Linked data : storing, querying, and reasoning (2018) 0.02
    Abstract
    This book describes efficient and effective techniques for harnessing the power of Linked Data by tackling the various aspects of managing its growing volume: storing, querying, reasoning, provenance management and benchmarking. To this end, Chapter 1 introduces the main concepts of the Semantic Web and Linked Data and provides a roadmap for the book. Next, Chapter 2 briefly presents the basic concepts underpinning Linked Data technologies that are discussed in the book. Chapter 3 then offers an overview of various techniques and systems for centrally querying RDF datasets, and Chapter 4 outlines various techniques and systems for efficiently querying large RDF datasets in distributed environments. Subsequently, Chapter 5 explores how streaming requirements are addressed in current, state-of-the-art RDF stream data processing. Chapter 6 covers performance and scaling issues of distributed RDF reasoning systems, while Chapter 7 details benchmarks for RDF query engines and instance matching systems. Chapter 8 addresses the provenance management for Linked Data and presents the different provenance models developed. Lastly, Chapter 9 offers a brief summary, highlighting and providing insights into some of the open challenges and research directions. Providing an updated overview of methods, technologies and systems related to Linked Data, this book is mainly intended for students and researchers who are interested in the Linked Data domain. It enables students to gain an understanding of the foundations and underpinning technologies and standards for Linked Data, while researchers benefit from the in-depth coverage of the emerging and ongoing advances in Linked Data storing, querying, reasoning, and provenance management systems. Further, it serves as a starting point to tackle the next research challenges in the domain of Linked Data management.
    LCSH
    Computer science
    Subject
    Computer science
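    The book's "storing and querying" thread in its smallest form: parse a few triples into an in-memory RDF store and run a SPARQL query over it, sketched here with rdflib (the data is invented):

```python
# Smallest version of storing and querying Linked Data: an in-memory
# RDF graph queried with SPARQL via rdflib.
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix ex: <http://example.org/> .
    ex:book1 ex:title "Linked Data" ; ex:year 2018 .
    ex:book2 ex:title "Semantic Web Primer" ; ex:year 2012 .
""", format="turtle")

results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?title WHERE { ?b ex:title ?title ; ex:year ?y . FILTER(?y > 2015) }
""")
for row in results:
    print(row.title)  # Linked Data
```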
  13. Celli, F. et al.: Enabling multilingual search through controlled vocabularies : the AGRIS approach (2016) 0.02
    Series
    Communications in computer and information science; 672
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  14. Köbler, J.; Niederklapfer, T.: Kreuzkonkordanzen zwischen RVK-BK-MSC-PACS der Fachbereiche Mathematik und Physik (2010) 0.01
    Date
    29. 3.2011 10:47:10
    29. 3.2011 10:57:42
    Pages
    22 pages
  15. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.01
    Abstract
    On 29 and 30 October 2009, the second international UDC Seminar, on the theme "Classification at a Crossroad", took place at the Royal Library in The Hague. Like the first conference of this kind in 2007, it was organized by the UDC Consortium (UDCC). The focus of this year's event was the indexing of the World Wide Web through better use of classifications (in particular, of course, the UDC), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search and multilingual access also played a role. 135 participants from 35 countries came to The Hague for the event. With 22 papers from 14 different countries, the programme covered a broad range of topics, with the United Kingdom most strongly represented with five contributions. On both conference days, the day's emphases were set by the opening talks, which were then explored in greater depth in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
  16. Hubain, R.; Wilde, M. De; Hooland, S. van: Automated SKOS vocabulary design for the biopharmaceutical industry (2016) 0.01
    Abstract
    Ensuring quick and consistent access to large collections of unstructured documents is one of the biggest challenges facing knowledge-intensive organizations. Designing specific vocabularies to index and retrieve documents is often deemed too expensive, full-text search being preferred despite its known limitations. However, the process of creating controlled vocabularies can be partly automated thanks to natural language processing and machine learning techniques. With a case study from the biopharmaceutical industry, we demonstrate how small organizations can use an automated workflow in order to create a controlled vocabulary to index unstructured documents in a semantically meaningful way.
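    The tail end of such a workflow, emitting the vocabulary itself, is straightforward once candidate terms exist. A hedged sketch using rdflib's SKOS namespace, where the candidate terms are assumed to come from the upstream NLP/term-extraction stage and everything else is invented:

```python
# Hedged sketch of the final step of an automated SKOS workflow: turn
# extracted candidate terms (with synonyms) into SKOS concepts.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, SKOS

VOC = Namespace("http://example.org/biopharma/")
candidates = {"monoclonal antibody": ["mAb"], "bioreactor": []}

g = Graph()
g.bind("skos", SKOS)
for term, synonyms in candidates.items():
    concept = URIRef(VOC[term.replace(" ", "_")])
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.prefLabel, Literal(term, lang="en")))
    for alt in synonyms:
        g.add((concept, SKOS.altLabel, Literal(alt, lang="en")))

print(g.serialize(format="turtle"))
```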
  17. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.01
    Series
    Communications in computer and information science; 672
  18. Candela, G.: ¬An automatic data quality approach to assess semantic data from cultural heritage institutions (2023) 0.01
    Date
    22. 6.2023 18:23:31
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.7, S.866-878
  19. Kim, J.-M.; Shin, H.; Kim, H.-J.: Schema and constraints-based matching and merging of Topic Maps (2007) 0.01
    Source
    Information processing and management. 43(2007) no.4, S.930-945
  20. Burstein, M.; McDermott, D.V.: Ontology translation for interoperability among Semantic Web services (2005) 0.01
    Abstract
    Research on semantic web services promises greater interoperability among software agents and web services by enabling content-based automated service discovery and interaction. Although this is to be based on use of shared ontologies published on the semantic web, services produced and described by different developers may well use different, perhaps partly overlapping, sets of ontologies. Interoperability will depend on ontology mappings and architectures supporting the associated translation processes. The question we ask is: does the traditional approach of introducing mediator agents to translate messages between requestors and services work in such an open environment? This article reviews some of the processing assumptions that were made in the development of the semantic web service modeling ontology OWL-S and argues that, as a practical matter, the translation function cannot always be isolated in mediators. Ontology mappings need to be published on the semantic web just as ontologies themselves are. The translation for service discovery, service process model interpretation, task negotiation, service invocation, and response interpretation may then be distributed to various places in the architecture so that translation can be done in the specific goal-oriented informational contexts of the agents performing these processes. We present arguments for assigning translation responsibility to particular agents in the cases of service invocation, response translation, and matchmaking.
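    The article's central object, an ontology mapping used to translate messages between a requester's vocabulary and a service's, can be pictured with a minimal property-rewriting sketch; the mapping table and message fields are invented:

```python
# Minimal picture of ontology translation between a requester and a
# service that use overlapping but different ontologies (invented data).
mapping = {  # requester ontology -> service ontology
    "travel:departureCity": "geo:origin",
    "travel:arrivalCity":   "geo:destination",
}

def translate(message: dict, ontology_map: dict) -> dict:
    """Rewrite property names; unmapped properties pass through unchanged,
    which is exactly where mediation or negotiation becomes necessary."""
    return {ontology_map.get(prop, prop): value for prop, value in message.items()}

request = {"travel:departureCity": "Berlin", "travel:arrivalCity": "Rome",
           "travel:seatPreference": "aisle"}
print(translate(request, mapping))
# {'geo:origin': 'Berlin', 'geo:destination': 'Rome', 'travel:seatPreference': 'aisle'}
```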

Languages

  • e (English) 85
  • d (German) 19

Types

  • a 69
  • el 26
  • m 9
  • s 5
  • x 4
  • p 2
  • n 1
  • r 1