Search (63 results, page 1 of 4)

  • theme_ss:"Semantische Interoperabilität"
  1. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.11
    
    Content
    Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  2. Slavic, A.: Mapping intricacies : UDC to DDC (2010) 0.07
    
    Content
    "Last week, I received an email from Yulia Skora in Ukraine who is working on the mapping between UDC Summary and BBK (Bibliographic Library Classification) Summary. It reminded me of yet another challenging area of work. When responding to Yulia I realised that the issues with mapping, for instance, UDC Summary to Dewey Summaries [pdf] are often made more difficult because we have to deal with classification summaries in both systems and we cannot use a known exactMatch in many situations. In 2008, following advice received from colleagues in the HILT project, two of our colleagues quickly mapped 1000 classes of Dewey Summaries to UDC Master Reference File as a whole. This appeared to be relatively simple. The mapping in this case is simply an answer to a question "and how would you say e.g. Art metal work in UDC?" But when in 2009 we realised that we were going to release 2000 classes of UDC Summary as linked data, we decided to wait until we had our UDC Summary set defined and completed to be able to publish it mapped to the Dewey Summaries. As we arrived at this stage, little did we realise how much more complex the reversed mapping of UDC Summary to Dewey Summaries would turn out to be. Mapping the Dewey Summaries to UDC highlighted situations in which the logic and structure of two systems do not agree. Especially because Dewey tends to enumerate combinations of subject and attributes that do not always logically belong together. For instance, 850 Literatures of Italian, Sardinian, Dalmatian, Romanian, Rhaeto-Romanic languages Italian literature. This class mixes languages from three different subgroups of Romance languages. Italian and Sardinian belong to Italo Romance sub-family; Romanian and Dalmatian are Balkan Romance languages and Rhaeto Romance is the third subgroup that includes Friulian Ladin and Romanch. 
As UDC literature is based on a strict classification of language families, Dewey class 850 has to be mapped to 3 narrower UDC classes 821.131 Literature of Italo-Romance Languages , 821.132 Literature of Rhaeto-Romance languages and 821.135 Literature of Balkan-Romance Languages, or to a broader class 821.13 Literature of Romance languages. Hence we have to be sure that we have all these classes listed in the UDC Summary to be able to express UDC-DDC many-to-one, specific-to-broader relationships.
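The many-to-one, specific-to-broader mapping described above can be sketched as a small data structure. This is an illustrative sketch in Python, not the project's actual mapping format; the SKOS-style relation names (narrowMatch, broadMatch) are an assumption here.

```python
# Sketch of the DDC 850 -> UDC mapping from the text, expressed as
# SKOS-style match relations (class captions in comments, illustrative only).
DDC_TO_UDC = {
    "850": [  # Literatures of Italian, Sardinian, Dalmatian, Romanian,
              # Rhaeto-Romanic languages; Italian literature
        ("821.131", "narrowMatch"),  # Literature of Italo-Romance languages
        ("821.132", "narrowMatch"),  # Literature of Rhaeto-Romance languages
        ("821.135", "narrowMatch"),  # Literature of Balkan-Romance languages
        ("821.13",  "broadMatch"),   # Literature of Romance languages
    ],
}

def targets(ddc_class, relation):
    """Return the UDC classes linked to a DDC class via a given relation."""
    return [udc for udc, rel in DDC_TO_UDC.get(ddc_class, []) if rel == relation]

print(targets("850", "narrowMatch"))  # ['821.131', '821.132', '821.135']
```

The point of the structure is exactly the one the post makes: a single Dewey class may need several specific UDC targets, or one broader one, so both relation types have to be expressible.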
  3. Dunsire, G.; Willer, M.: Initiatives to make standard library metadata models and structures available to the Semantic Web (2010) 0.04
    
    Abstract
    This paper describes recent initiatives to make standard library metadata models and structures available to the Semantic Web, including IFLA standards such as Functional Requirements for Bibliographic Records (FRBR), Functional Requirements for Authority Data (FRAD), and International Standard Bibliographic Description (ISBD) along with the infrastructure that supports them. The FRBR Review Group is currently developing representations of FRAD and the entity-relationship model of FRBR in resource description framework (RDF) applications, using a combination of RDF, RDF Schema (RDFS), Simple Knowledge Organisation System (SKOS) and Web Ontology Language (OWL), cross-relating both models where appropriate. The ISBD/XML Task Group is investigating the representation of ISBD in RDF. The IFLA Namespaces project is developing an administrative and technical infrastructure to support such initiatives and encourage uptake of standards by other agencies. The paper describes similar initiatives with related external standards such as RDA - resource description and access, REICAT (the new Italian cataloguing rules) and CIDOC Conceptual Reference Model (CRM). The DCMI RDA Task Group is working with the Joint Steering Committee for RDA to develop Semantic Web representations of RDA structural elements, which are aligned with FRBR and FRAD, and controlled metadata content vocabularies. REICAT is also based on FRBR, and an object-oriented version of FRBR has been integrated with CRM, which itself has an RDF representation. CRM was initially based on the metadata needs of the museum community, and is now seeking extension to the archives community with the eventual aim of developing a model common to the main cultural information domains of archives, libraries and museums. The Vocabulary Mapping Framework (VMF) project has developed a Semantic Web tool to automatically generate mappings between metadata models from the information communities, including publishers. The tool is based on several standards, including CRM, FRAD, FRBR, MARC21 and RDA.
    The paper discusses the importance of these initiatives in releasing as linked data the very large quantities of rich, professionally-generated metadata stored in formats based on these standards, such as UNIMARC and MARC21, addressing such issues as critical mass for semantic and statistical inferencing, integration with user- and machine-generated metadata, and authenticity, veracity and trust. The paper also discusses related initiatives to release controlled vocabularies, including the Dewey Decimal Classification (DDC), ISBD, Library of Congress Name Authority File (LCNAF), Library of Congress Subject Headings (LCSH), Rameau (French subject headings), Universal Decimal Classification (UDC), and the Virtual International Authority File (VIAF) as linked data. Finally, the paper discusses the potential collective impact of these initiatives on metadata workflows and management systems.
  4. Leiva-Mederos, A.; Senso, J.A.; Hidalgo-Delgado, Y.; Hipola, P.: Working framework of semantic interoperability for CRIS with heterogeneous data sources (2017) 0.03
    
    Abstract
    Purpose - Information from Current Research Information Systems (CRIS) is stored in different formats, in platforms that are not compatible, or even in independent networks. It would be helpful to have a well-defined methodology to allow for management data processing from a single site, so as to take advantage of the capacity to link disperse data found in different systems, platforms, sources and/or formats. Based on functionalities and materials of the VLIR project, the purpose of this paper is to present a model that provides for interoperability by means of semantic alignment techniques and metadata crosswalks, and facilitates the fusion of information stored in diverse sources.
    Design/methodology/approach - After reviewing the state of the art regarding the diverse mechanisms for achieving semantic interoperability, the paper analyzes the following: the specific coverage of the data sets (type of data, thematic coverage and geographic coverage); the technical specifications needed to retrieve and analyze a distribution of the data set (format, protocol, etc.); the conditions of re-utilization (copyright and licenses); and the "dimensions" included in the data set as well as the semantics of these dimensions (the syntax and the taxonomies of reference). The semantic interoperability framework here presented implements semantic alignment and metadata crosswalk to convert information from three different systems (ABCD, Moodle and DSpace) to integrate all the databases in a single RDF file.
    Findings - The paper also includes an evaluation based on the comparison - by means of calculations of recall and precision - of the proposed model and identical consultations made on Open Archives Initiative and SQL, in order to estimate its efficiency. The results have been satisfactory enough, due to the fact that the semantic interoperability facilitates the exact retrieval of information.
    Originality/value - The proposed model enhances management of the syntactic and semantic interoperability of the CRIS system designed. In a real setting of use it achieves very positive results.
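The recall and precision calculations used in the evaluation reduce to standard set-based retrieval metrics; a minimal sketch, with document identifiers invented for illustration:

```python
def precision_recall(retrieved, relevant):
    """Standard set-based precision and recall for a single query."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical comparison: results returned by the integrated RDF query
# versus a known relevant (gold) set.
p, r = precision_recall(["d1", "d2", "d3", "d4"], ["d1", "d3", "d5"])
print(p, r)  # precision 0.5, recall 2/3
```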
  5. Levergood, B.; Farrenkopf, S.; Frasnelli, E.: ¬The specification of the language of the field and interoperability : cross-language access to catalogues and online libraries (CACAO) (2008) 0.03
    
    Abstract
    The CACAO Project (Cross-language Access to Catalogues and Online Libraries) has been designed to implement natural language processing and cross-language information retrieval techniques to provide cross-language access to information in libraries, a critical issue in the linguistically diverse European Union. This project report addresses two metadata-related challenges for the library community in this context: "false friends" (identical words having different meanings in different languages) and term ambiguity. The possible solutions involve enriching the metadata with attributes specifying language or the source authority file, or associating potential search terms to classes in a classification system. The European Library will evaluate an early implementation of this work in late 2008.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
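The proposed solution of enriching metadata with a language attribute can be illustrated with a minimal sketch; the index entries and the classic false friend ("gift" in English vs. German "Gift", poison) are illustrative, not CACAO data:

```python
# Sketch of disambiguating a "false friend" by qualifying the search term
# with a language attribute, as the abstract suggests (entries invented).
INDEX = {
    ("gift", "en"): ["present; donation"],
    ("gift", "de"): ["poison"],  # German "Gift" = poison
}

def lookup(term, lang):
    """Search language-enriched metadata: the same string resolves per language."""
    return INDEX.get((term.lower(), lang), [])

print(lookup("Gift", "de"))  # ['poison']
```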
  6. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.03
    
    Abstract
    Ontologies are viewed as the silver bullet for many applications, but in open or evolving systems, different parties can adopt different ontologies. This increases heterogeneity problems rather than reducing heterogeneity. This book proposes ontology matching as a solution to the problem of semantic heterogeneity, offering researchers and practitioners a uniform framework of reference to currently available work. The techniques presented apply to database schema matching, catalog integration, XML schema matching and more. Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
    Date
    20. 6.2012 19:08:22
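One of the simplest techniques in the terminological family of matchers the book surveys is label-string similarity. A minimal sketch (the labels and the 0.8 threshold are illustrative assumptions, not from the book):

```python
from difflib import SequenceMatcher

def match_ontologies(labels_a, labels_b, threshold=0.8):
    """Naive terminological matcher: propose a correspondence whenever the
    normalised string similarity of two entity labels exceeds a threshold."""
    pairs = []
    for a in labels_a:
        for b in labels_b:
            sim = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if sim >= threshold:
                pairs.append((a, b, round(sim, 2)))
    return pairs

print(match_ontologies(["Author", "Publication"], ["author", "publisher"]))
# [('Author', 'author', 1.0)]
```

Real matchers combine such string measures with structural and semantic evidence; this only shows the basic shape of producing candidate correspondences.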
  7. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.02
    
    Content
    Cf.: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5386707.
  8. Hubrich, J.: Intersystem relations : Characteristics and functionalities (2011) 0.02
    
    Abstract
    Within the frame of the methodological support of the CrissCross project and the research conducted in the Reseda project, a tiered model of semantic interoperability was developed. This correlates methods of establishing semantic interoperability and types of intersystem relations to search functionalities in retrieval scenarios. In this article the model is outlined and exemplified with reference to respective selective alignment projects.
  9. Jahns, Y.: 20 years SWD : German subject authority data prepared for the future (2011) 0.02
    
    Abstract
    The German subject headings authority file (SWD) provides a terminologically controlled vocabulary covering all fields of knowledge. The subject headings are determined by the German Rules for the Subject Catalogue. The authority file is produced and updated daily by participating libraries from Germany, Austria and Switzerland. Over the last twenty years, it has grown into an online-accessible database of about 550,000 headings. They are linked to other thesauri, including French and English equivalents, and to notations of the Dewey Decimal Classification. Thus, it allows multilingual access and searching in dispersed, heterogeneously indexed catalogues. The vocabulary is used not only for cataloguing library materials, but also for web resources and objects in archives and museums.
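A minimal sketch of how such a multilingual, DDC-linked heading record enables access across languages; the field names and the example heading are assumptions for illustration, not the actual SWD record format:

```python
# Illustrative shape of a subject-heading record with language equivalents
# and a DDC notation (structure and values invented for the sketch).
heading = {
    "preferred": "Erneuerbare Energien",
    "equivalents": {"en": "Renewable energy sources", "fr": "Énergies renouvelables"},
    "ddc": "333.794",
}

def search_term(records, term):
    """Multilingual access: match the preferred heading or any equivalent."""
    term = term.lower()
    return [r for r in records
            if r["preferred"].lower() == term
            or term in (v.lower() for v in r["equivalents"].values())]

print(len(search_term([heading], "renewable energy sources")))  # 1
```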
  10. Binding, C.; Tudhope, D.: Improving interoperability using vocabulary linked data (2015) 0.02
    
    Abstract
    The concept of Linked Data has been an emerging theme within the computing and digital heritage areas in recent years. The growth and scale of Linked Data has underlined the need for greater commonality in concept referencing, to avoid local redefinition and duplication of reference resources. Achieving domain-wide agreement on common vocabularies would be an unreasonable expectation; however, datasets often already have local vocabulary resources defined, and so the prospects for large-scale interoperability can be substantially improved by creating alignment links from these local vocabularies out to common external reference resources. The ARIADNE project is undertaking large-scale integration of archaeology dataset metadata records, to create a cross-searchable research repository resource. Key to enabling this cross search will be the 'subject' metadata originating from multiple data providers, containing terms from multiple multilingual controlled vocabularies. This paper discusses various aspects of vocabulary mapping. Experience from the previous SENESCHAL project in the publication of controlled vocabularies as Linked Open Data is discussed, emphasizing the importance of unique URI identifiers for vocabulary concepts. There is a need to align legacy indexing data to the uniquely defined concepts and examples are discussed of SENESCHAL data alignment work. A case study for the ARIADNE project presents work on mapping between vocabularies, based on the Getty Art and Architecture Thesaurus as a central hub and employing an interactive vocabulary mapping tool developed for the project, which generates SKOS mapping relationships in JSON and other formats. The potential use of such vocabulary mappings to assist cross search over archaeological datasets from different countries is illustrated in a pilot experiment. The results demonstrate the enhanced opportunities for interoperability and cross searching that the approach offers.
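A SKOS mapping relationship serialised as JSON, as the abstract mentions, might take a shape like the following; the property names and the target identifier are placeholders, not the ARIADNE tool's actual output format:

```python
import json

# Hypothetical JSON serialisation of one SKOS mapping from a local
# vocabulary concept to a Getty AAT hub concept. The AAT identifier below
# is a placeholder, not a real concept URI.
mapping = {
    "@type": "skos:exactMatch",
    "source": "http://example.org/vocab/hillfort",
    "target": "http://vocab.getty.edu/aat/300000000",  # placeholder id
}

serialised = json.dumps(mapping, indent=2)
restored = json.loads(serialised)
print(restored["@type"])  # skos:exactMatch
```

The value of such a record is that both ends are unique URIs, which is exactly the point the paper makes about publishing vocabularies as Linked Open Data.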
  11. Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010) 0.02
    
    Abstract
    iQvoc is a new open source SKOS-XL vocabulary management tool developed by the Federal Environment Agency, Germany, and innoQ Deutschland GmbH. Its immediate purpose is maintaining and publishing reference vocabularies in the upcoming Linked Data cloud of environmental information, but it may be easily adapted to host any SKOS-XL compliant vocabulary. iQvoc is implemented as a Ruby on Rails application running on top of JRuby, the Java implementation of the Ruby programming language. To improve the user experience when editing content, iQvoc makes heavy use of the JavaScript library jQuery.
  12. Wicaksana, I.W.S.; Wahyudi, B.: Comparison Latent Semantic and WordNet approach for semantic similarity calculation (2011) 0.02
    
    Abstract
    Information exchange among the many sources on the Internet is increasingly autonomous, dynamic and free. This situation drives divergent views of concepts among sources. For example, the word 'bank' means an economic institution in the economy domain, but in the ecology domain it is defined as the slope of a river or lake. In this paper, we evaluate latent semantic and WordNet approaches to calculating semantic similarity. The evaluation is run for concepts from different domains, with reference judgments provided by experts. The results of the evaluation can contribute to concept mapping, query rewriting, interoperability, etc.
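The latent-semantic side of such a comparison ultimately reduces to similarity between term vectors; a minimal cosine-similarity sketch, with co-occurrence counts invented for the paper's 'bank' example:

```python
import math

def cosine(u, v):
    """Cosine similarity between two term vectors, the core operation in
    latent-semantic similarity calculations."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Toy co-occurrence vectors for 'bank' in an economy context vs. an
# ecology context (values are illustrative, not from the paper).
economy = [3, 0, 1]
ecology = [0, 2, 1]
print(round(cosine(economy, ecology), 3))  # 0.141 -> low cross-domain similarity
```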
  13. Jacobs, J.-H.; Mengel, T.; Müller, K.: Benefits of the CrissCross project for conceptual interoperability and retrieval (2010) 0.01
    
    Abstract
    This paper discusses goals, methods and benefits of the conceptual mapping approach conducted within the CrissCross project, where topical headings of the German subject headings authority file Schlagwortnormdatei (SWD) are being mapped to notations of the Dewey Decimal Classification. Project-specific retrieval concepts for improving thematic access in heterogeneous information spaces are outlined and explained on the basis of significant examples.
  14. Jacobs, J.-H.; Mengel, T.; Müller, K.: Insights and Outlooks : a retrospective view on the CrissCross project (2011) 0.01
    
    Abstract
    This paper discusses the goals, methods and benefits of the conceptual mapping approach developed by the CrissCross project, in the framework of which the topical headings of the German subject headings authority file Schlagwortnormdatei (SWD) have been mapped to notations of the Dewey Decimal Classification (DDC). Project-specific retrieval concepts for improving thematic access in heterogeneous information spaces are outlined and explained on the basis of significant examples.
  15. Zeng, M.L.; Chan, L.M.: Trends and issues in establishing interoperability among knowledge organization systems (2004) 0.01
    
    Abstract
    This report analyzes the methodologies used in establishing interoperability among knowledge organization systems (KOS) such as controlled vocabularies and classification schemes that present the organized interpretation of knowledge structures. The development and trends of KOS are discussed with reference to the online era and the Internet era. Selected current projects and activities addressing KOS interoperability issues are reviewed in terms of the languages and structures involved. The methodological analysis encompasses both conventional and new methods that have proven to be widely accepted, including derivation/modeling, translation/adaptation, satellite and leaf node linking, direct mapping, co-occurrence mapping, switching, linking through a temporary union list, and linking through a thesaurus server protocol. Methods used in link storage and management, as well as common issues regarding mapping and methodological options, are also presented. It is concluded that interoperability of KOS is an unavoidable issue and process in today's networked environment. There have been and will be many multilingual products and services, with many involving various structured systems. Results from recent efforts are encouraging.
  16. Hollink, L.; Assem, M. van; Wang, S.; Isaac, A.; Schreiber, G.: Two variations on ontology alignment evaluation : methodological issues (2008) 0.01
    
    Abstract
    Evaluation of ontology alignments is in practice done in two ways: (1) assessing individual correspondences and (2) comparing the alignment to a reference alignment. However, this type of evaluation does not guarantee that an application which uses the alignment will perform well. In this paper, we contribute to the current ontology alignment evaluation practices by proposing two alternative evaluation methods that take into account some characteristics of a usage scenario without doing a full-fledged end-to-end evaluation. We compare different evaluation approaches in three case studies, focussing on methodological issues. Each case study considers an alignment between a different pair of ontologies, ranging from rich and well-structured to small and poorly structured. This enables us to conclude on the use of different evaluation approaches in different settings.
  17. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.01
    
    Date
    22. 6.2015 16:08:38
  18. Vizine-Goetz, D.; Houghton, A.; Childress, E.: Web services for controlled vocabularies (2006) 0.01
    
    Abstract
    Amid the debates about whether folksonomies will supplant controlled vocabularies and whether the Library of Congress Subject Headings (LCSH) and Dewey Decimal Classification (DDC) system have outlived their usefulness, libraries, museums and other organizations continue to require efficient, effective access to controlled vocabularies for creating consistent metadata for their collections. In this article, we present an approach for using Web services to interact with controlled vocabularies. Services are implemented within a service-oriented architecture (SOA) framework. SOA is an approach to distributed computing where services are loosely coupled and discoverable on the network. A set of experimental services for controlled vocabularies is provided through the Microsoft Office (MS) Research task pane (a small window or sidebar that opens up next to Internet Explorer (IE) and other Microsoft Office applications). The research task pane is a built-in feature of IE when MS Office 2003 is loaded. The research pane enables a user to take advantage of a number of research and reference services accessible over the Internet. Web browsers, such as Mozilla Firefox and Opera, also provide sidebars which could be used to deliver similar, loosely-coupled Web services.
  19. Lumsden, J.; Hall, H.; Cruickshank, P.: Ontology definition and construction, and epistemological adequacy for systems interoperability : a practitioner analysis (2011) 0.01
    
    Abstract
    Ontology development is considered to be a useful approach to the design and implementation of interoperable systems. This literature review and commentary examines the current state of knowledge in this field with particular reference to processes involved in assuring epistemological adequacy. It takes the perspective of the information systems practitioner keen to adopt a systematic approach to in-house ontology design, taking into consideration previously published work. The study arises from author involvement in an integration/interoperability project on systems that support Scottish Common Housing Registers in which, ultimately, ontological modelling was not deployed. Issues concerning the agreement of meaning, and the implications for the creation of interoperable systems, are discussed. The extent to which those theories, methods and frameworks provide practitioners with a usable set of tools is explored, and examples of practical applications of ontological modelling are noted. The findings from the review of the literature demonstrate a number of difficulties faced by information systems practitioners keen to develop and deploy domain ontologies. A major problem is deciding which broad approach to take: to rely on automatic ontology construction techniques, or to rely on key words and domain experts to develop ontologies.
  20. Wang, S.; Isaac, A.; Schlobach, S.; Meij, L. van der; Schopman, B.: Instance-based semantic interoperability in the cultural heritage (2012) 0.01
    
    Abstract
    This paper gives a comprehensive overview of the problem of Semantic Interoperability in the Cultural Heritage domain, with a particular focus on solutions centered around extensional, i.e., instance-based, ontology matching methods. It presents three typical scenarios requiring interoperability: one with homogeneous collections, one with heterogeneous collections, and one with multilingual collections. It discusses two different ways to evaluate potential alignments, one based on the application of re-indexing, one using a reference alignment. To these scenarios we apply extensional matching with different similarity measures, which gives interesting insights. Finally, we firmly position our work in the Cultural Heritage context through an extensive discussion of the relevance for, and issues related to, this specific field. The findings are as unspectacular as expected but nevertheless important: the provided methods can really improve interoperability in a number of important cases, but they are not universal solutions to all related problems. This paper will provide a solid foundation for any future work on Semantic Interoperability in the Cultural Heritage domain, in particular for anybody intending to apply extensional methods.

Languages

  • e 52
  • d 11

Types

  • a 44
  • el 20
  • m 3
  • x 3
  • s 2