Search (13 results, page 1 of 1)

  • Active filter: theme_ss:"Semantische Interoperabilität"
  • Active filter: type_ss:"a"
  • Active filter: type_ss:"el"
  1. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.05
    
    Abstract
    This paper reports on the second part of an initiative by the authors to research classification systems using the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine whether classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (of the same type or of different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) the Chinese Library Classification. Use cases of conceptual models in practice are also discussed.
    Source
    Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn
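
     As a rough illustration of the FRSAD terms used above (themas named by nomens drawn from different scheme instances), the following Python sketch models one classification concept carrying a notation and language-tagged captions from two instances of the same scheme. The Thema/Nomen classes, the sample class number, and the captions are this sketch's own illustrative assumptions, not data from the paper.

        from dataclasses import dataclass, field

        @dataclass(frozen=True)
        class Nomen:
            """A name or notation by which a thema is known (FRSAD 'nomen')."""
            value: str
            language: str          # BCP 47 tag; "zxx" marks a pure notation
            scheme_instance: str   # which vocabulary instance supplies this nomen

        @dataclass
        class Thema:
            """Any entity used as the subject of a work (FRSAD 'thema')."""
            identifier: str
            nomens: list = field(default_factory=list)

            def labels_in(self, language):
                return [n.value for n in self.nomens if n.language == language]

        # One concept, named by notations/captions from two instances of the same
        # scheme -- the values below are illustrative placeholders only.
        concept = Thema(identifier="example:ddc/004")
        concept.nomens += [
            Nomen("004", "zxx", "DDC 22 (English full edition)"),
            Nomen("Data processing & computer science", "en", "DDC 22 (English full edition)"),
            Nomen("Databehandling & datavetenskap", "sv", "DDC 22 (Swedish mixed translation)"),
        ]

        print(concept.labels_in("sv"))   # -> ['Databehandling & datavetenskap']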
  2. Szostak, R.; Smiraglia, R.P.: Comparative approaches to interdisciplinary KOSs : use cases of converting UDC to BCC (2017) 0.02
    
    Abstract
    We take a small sample of works and compare how these are classified within both the Universal Decimal Classification (UDC) and the Basic Concepts Classification (BCC). We examine notational length, expressivity, network effects, and the number of subject strings. One key finding is that BCC typically synthesizes many more terms than UDC in classifying a particular document, but the length of classificatory notations is roughly equivalent for the two KOSs. BCC captures documents with fewer subject strings (generally one), but these are more complex.
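
     A hedged sketch of the kind of comparison described: counting notation length and synthesized components for one work classified in two schemes. The notation strings and separator conventions below are invented placeholders, not the UDC/BCC notations analysed in the paper.

        import re

        def notation_stats(notation, separators):
            """Length in characters and number of synthesized components."""
            parts = [p for p in re.split("[" + re.escape(separators) + "]", notation) if p]
            return {"length": len(notation), "synthesized_terms": len(parts)}

        udc_example = "316.77:004.738.5"   # hypothetical UDC colon combination
        bcc_example = "AEF-GQD-(TJS)"      # hypothetical BCC synthesized string

        print("UDC:", notation_stats(udc_example, ":+"))
        print("BCC:", notation_stats(bcc_example, "-"))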
  3. Hook, P.A.; Gantchev, A.: Using combined metadata sources to visualize a small library (OBL's English Language Books) (2017) 0.02
    
    Abstract
    Data from multiple knowledge organization systems are combined to provide a global overview of the content holdings of a small personal library. Subject headings and classification data are used to effectively map the combined book and topic space of the library. While the data were harvested and manipulated by hand, the work reveals issues and potential solutions for using automated techniques to produce topic maps of much larger libraries. The small library visualized consists of the thirty-nine digital English-language books found in the Osama Bin Laden (OBL) compound in Abbottabad, Pakistan, upon his death. As this list of books has garnered considerable media attention, it is worth providing a visual overview of the subject content of these books - some of which is not readily apparent from the titles. Metadata from subject headings and classification numbers were combined to create book-subject maps. Tree maps of the classification data were also produced. The books contain 328 subject headings. In order to enhance the base map with a meaningful thematic overlay, library holdings count data were also harvested (and aggregated from duplicates). This additional data revealed the relative scarcity or popularity of individual books.
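
     The combination step can be pictured with a small sketch: subject headings yield book-subject edges for the map, while truncated class numbers yield the size values a treemap needs. The sample records and the three-digit truncation rule are illustrative assumptions, not the study's actual data or method.

        from collections import Counter, defaultdict

        # Invented records in the shape the study describes: each book carries
        # subject headings and a classification number (DDC-style, illustrative).
        books = [
            {"title": "Book A", "headings": ["Intelligence service", "International relations"], "ddc": "327.12"},
            {"title": "Book B", "headings": ["International relations"], "ddc": "327"},
            {"title": "Book C", "headings": ["Constitutional history"], "ddc": "342.73"},
        ]

        # Bipartite book-subject edges: the raw material for a book-subject map.
        edges = [(b["title"], h) for b in books for h in b["headings"]]

        # Aggregate by the first three digits of the class number -- one way to
        # obtain the size values a treemap of the classification data would need.
        treemap_sizes = Counter(b["ddc"][:3] for b in books)

        # Per-subject book lists, e.g. to scale nodes in the subject map.
        subject_index = defaultdict(list)
        for title, heading in edges:
            subject_index[heading].append(title)

        print(treemap_sizes)        # Counter({'327': 2, '342': 1})
        print(dict(subject_index))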
  4. Suchowolec, K.; Lang, C.; Schneider, R.: Re-designing online terminology resources for German grammar (2016) 0.01
    
    Abstract
    The compilation of terminological vocabularies plays a central role in the organization and retrieval of scientific texts. Both simple keyword lists and sophisticated models of the relationships between terminological concepts can make a most valuable contribution to the analysis, classification, and finding of appropriate digital documents, either on the Web or within local repositories. This seems especially true for long-established scientific fields with various theoretical and historical branches, such as linguistics, where the use of terminology within documents from different origins is sometimes far from consistent. In this short paper, we report on the early stages of a project that aims at the re-design of an existing domain-specific KOS for grammatical content, grammis. In particular, we deal with the terminological part of grammis and present the state of the art of this online resource as well as the key re-design principles. Further, we raise questions regarding the ramifications of the Linked Open Data and Semantic Web approaches for our re-design decisions.
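
     For the Linked Open Data angle, here is a minimal sketch (using the rdflib library) of how a terminological concept and one hierarchical relation could be expressed in SKOS; the namespace URI and the two grammar terms are hypothetical examples, not grammis' actual identifiers or modelling.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, SKOS

        # Hypothetical namespace and terms, used only to show the pattern.
        GRM = Namespace("https://example.org/grammis/terms/")

        g = Graph()
        g.bind("skos", SKOS)
        g.bind("grm", GRM)

        tempus = GRM["tempus"]
        praeteritum = GRM["praeteritum"]

        for concept in (tempus, praeteritum):
            g.add((concept, RDF.type, SKOS.Concept))

        g.add((tempus, SKOS.prefLabel, Literal("Tempus", lang="de")))
        g.add((tempus, SKOS.prefLabel, Literal("tense", lang="en")))
        g.add((praeteritum, SKOS.prefLabel, Literal("Präteritum", lang="de")))
        g.add((praeteritum, SKOS.broader, tempus))  # hierarchical relation

        print(g.serialize(format="turtle"))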
  5. Bastos Vieira, S.; DeBrito, M.; Mustafa El Hadi, W.; Zumer, M.: Developing imaged KOS with the FRSAD Model : a conceptual methodology (2016) 0.01
    
    Abstract
    This proposal presents the methodology of indexing with images suggested by De Brito and Caribé (2015). The imagetic model is used as a mechanism compatible with FRSAD for global sharing and use of subject data, both within the library sector and beyond. The conceptual model of imagetic indexing shows how images are related to topics, and 'key-images' are interpreted as nomens to implement the FRSAD model. Indexing with images consists of using images, instead of keywords or descriptors, to represent and organize information. Implementing imaged navigation in OPACs offers multiple advantages derived from rethinking the OPAC anew, since the aim is to share concepts within the subject authority data. Images, carrying linguistic objects, permeate inter-social and cultural concepts. In practice this includes translated metadata, symmetrical multilingual thesauri, or any traditional indexing tools. The iOPAC embodies efforts focused on conceptual levels, as expected from librarians. Imaged interfaces are more intuitive, since users do not need specific training for information retrieval; they offer easier comprehension of indexing codes, greater conceptual portability of descriptors (as images), and better interoperability between discourse codes and indexing competences, positively affecting social and cultural interoperability. The imagetic methodology opens R&D fields for more suitable interfaces that take into consideration users with specific needs, such as deafness and illiteracy. This methodology raises questions about the paradigm of the primacy of orality in information systems and paves the way to legitimizing multiple perspectives in document indexing by suggesting a more universal communication system based on images. Interdisciplinary competencies in neurosciences, linguistics and information sciences would be desirable for further investigations into the nature of cognitive processes in information organization and classification, while developing assistive KOS for individuals with communication problems such as autism and deafness.
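
     A minimal sketch of the core idea, assuming a simple lookup design: an image identifier acts as a nomen that resolves to a thema, which in turn retrieves the indexed records. All identifiers and file names below are invented for illustration.

        # Key-images acting as nomens for themas (all values hypothetical).
        image_nomens = {
            "img/dove.svg": "thema:peace",
            "img/scales.svg": "thema:justice",
        }

        # Records indexed by thema, as in a conventional subject index.
        catalogue = {
            "thema:peace": ["Record 101", "Record 205"],
            "thema:justice": ["Record 117"],
        }

        def search_by_image(image_id):
            """Resolve a key-image to its thema, then return the indexed records."""
            thema = image_nomens.get(image_id)
            return catalogue.get(thema, [])

        print(search_by_image("img/dove.svg"))   # -> ['Record 101', 'Record 205']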
  6. Nicholson, D.: Help us make HILT's terminology services useful in your information service (2008) 0.01
    
    Abstract
    The JISC-funded HILT project is looking to make contact with staff in information services or projects interested in helping it test and refine its developing terminology services. The project is currently working to create pilot web services that will deliver machine-readable terminology and cross-terminology mapping data likely to be useful to information services wishing to extend or enhance the efficacy of their subject search or browse services. Based on SRW/U, SOAP, and SKOS, the HILT facilities, when fully operational, will permit such services to improve their own subject search and browse mechanisms by using HILT data in a fashion transparent to their users. On request, HILT will serve up machine-processable data on individual subject schemes (broader terms, narrower terms, hierarchy information, preferred and non-preferred terms, and so on) and interoperability data (usually intellectual or automated mappings between schemes, though the architecture allows for other methods) - data that can be used to enhance user services. The project is also developing an associated toolkit that will help services' technical staff to embed HILT-related functionality into their services. The primary aim is to serve JISC-funded information services or services at JISC institutions, but information services outside the JISC domain may also find the proposed services useful and wish to participate in the test-and-refine process.
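
     An in-memory sketch of the kind of responses described (within-scheme term data plus cross-scheme mappings); it does not reproduce HILT's actual SRW/U or SOAP interface, and all scheme names, terms and mapping types are illustrative.

        # Within-scheme data, keyed by (scheme, term) -- illustrative values only.
        scheme_data = {
            ("DDC", "Ecology"): {"preferred": "Ecology", "broader": ["Life sciences"], "narrower": ["Population ecology"]},
            ("LCSH", "Ecology"): {"preferred": "Ecology", "broader": ["Biology"], "narrower": ["Ecosystem management"]},
        }

        # Cross-scheme interoperability data (intellectual or automated mappings).
        mappings = [
            {"from": ("LCSH", "Ecology"), "to": ("DDC", "Ecology"), "type": "exactMatch"},
        ]

        def terminology_lookup(scheme, term):
            """Return within-scheme details and any cross-scheme mappings for a term."""
            details = scheme_data.get((scheme, term), {})
            related = [m for m in mappings if (scheme, term) in (m["from"], m["to"])]
            return {"details": details, "mappings": related}

        print(terminology_lookup("LCSH", "Ecology"))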
  7. Naudet, Y.; Latour, T.; Chen, D.: ¬A Systemic approach to Interoperability formalization (2009) 0.01
    
    Abstract
    With a first version developed last year, the Ontology of Interoperability (OoI) aims at formally describing concepts relating to problems and solutions in the domain of interoperability. From the beginning, the OoI has had its foundations in systemic theory and addresses interoperability from the general point of view of a system, whether or not it is composed of other systems (systems-of-systems). In this paper, we present the latest version of the OoI, focusing on the systemic approach. We then integrate a classification of interoperability knowledge provided by the Framework for Enterprise Interoperability. In this way, we contextualize the OoI with a vocabulary specific to the enterprise domain, where solutions to interoperability problems are characterized according to the interoperability approaches defined in ISO 14258, and both solutions and problems can be localized to enterprise levels and characterized by interoperability levels, as defined in the European Interoperability Framework.
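
     One hedged way to picture the classification being integrated: problems placed on concern/barrier axes and solutions characterized by an ISO 14258-style approach. The enum values reflect a common reading of the Framework for Enterprise Interoperability, but the field names and the example are this sketch's own simplification, not the OoI's formal vocabulary.

        from dataclasses import dataclass
        from enum import Enum

        class Approach(Enum):        # approaches in the sense of ISO 14258
            INTEGRATED = "integrated"
            UNIFIED = "unified"
            FEDERATED = "federated"

        class Concern(Enum):         # where in the enterprise the issue sits
            BUSINESS = "business"
            PROCESS = "process"
            SERVICE = "service"
            DATA = "data"

        class Barrier(Enum):         # what kind of obstacle causes it
            CONCEPTUAL = "conceptual"
            TECHNOLOGICAL = "technological"
            ORGANISATIONAL = "organisational"

        @dataclass
        class InteroperabilityProblem:
            description: str
            concern: Concern
            barrier: Barrier

        @dataclass
        class InteroperabilitySolution:
            description: str
            approach: Approach
            addresses: InteroperabilityProblem

        problem = InteroperabilityProblem("Partner systems use incompatible product ontologies",
                                          Concern.DATA, Barrier.CONCEPTUAL)
        solution = InteroperabilitySolution("Map both ontologies to a shared reference model",
                                            Approach.UNIFIED, problem)
        print(solution.approach.value, "approach for a", problem.barrier.value, "barrier")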
  8. Wake, S.; Nicholson, D.: HILT: High-Level Thesaurus Project : building consensus for interoperable subject access across communities (2001) 0.00
    
    Abstract
    This article provides an overview of the work carried out by the HILT Project <http://hilt.cdlr.strath.ac.uk> in making recommendations towards interoperable subject access, or cross-searching and browsing distributed services amongst the archives, libraries, museums and electronic services sectors. The article details consensus achieved at the 19 June 2001 HILT Workshop and discusses the HILT Stakeholder Survey. In 1999 Péter Jascó wrote that "savvy searchers" are asking for direction. Three years later the scenario he describes, that of searchers cross-searching databases where the subject vocabulary used in each case is different, still rings true. Jascó states that, in many cases, databases do not offer the necessary aids required to use the "preferred terms of the subject-controlled vocabulary". The databases to which Jascó refers are Dialog and DataStar. However, the situation he describes applies as well to the area that HILT is researching: that of cross-searching and browsing by subject across databases and catalogues in archives, libraries, museums and online information services. So how does a user access information on a particular subject when it is indexed across a multitude of services under different, but quite often similar, subject terms? Also, if experienced searchers are having problems, what about novice searchers? As information professionals, it is our role to investigate such problems and recommend solutions. Although there is no hard empirical evidence one way or another, HILT participants agree that the problem for users attempting to search across databases is real. There is a strong likelihood that users are disadvantaged by the use of different subject terminology combined with a multitude of different practices taking place within the archive, library, museums and online communities. Arguably, failure to address this problem of interoperability undermines the value of cross-searching and browsing facilities, and wastes public money because relevant resources are 'hidden' from searchers. HILT is charged with analysing this broad problem through qualitative methods, with the main aim of presenting a set of recommendations on how to make it easier to cross-search and browse distributed services. Because this is a very large problem composed of many strands, HILT recognizes that any proposed solutions must address a host of issues. Recommended solutions must be affordable, sustainable, politically acceptable, useful, future-proof and international in scope. It also became clear to the HILT team that progress toward finding solutions to the interoperability problem could only be achieved through direct dialogue with other parties keen to solve this problem, and that the problem was as much about consensus building as it was about finding a solution. This article describes how HILT approached the cross-searching problem; how it investigated the nature of the problem, detailing results from the HILT Stakeholder Survey; and how it achieved consensus through the recent HILT Workshop.
  9. Takhirov, N.; Aalberg, T.; Duchateau, F.; Zumer, M.: FRBR-ML: a FRBR-based framework for semantic interoperability (2012) 0.00
    
    Abstract
    Metadata related to cultural items such as literature, music and movies is a valuable resource that is currently exploited in many applications and services based on semantic web technologies. A vast amount of such information has been created by memory institutions in recent decades using different standard or ad hoc schemas, and a main challenge is to make this legacy data accessible as reusable semantic data. On the one hand, this is a syntactic problem that can be solved by transforming the data into formats compatible with the tools and services used for semantic-aware services. On the other hand, this is a semantic problem. Simply transforming from one format to another does not automatically enable semantic interoperability, and legacy data often needs to be reinterpreted as well as transformed. The conceptual model in the Functional Requirements for Bibliographic Records (FRBR), initially developed as a conceptual framework for library standards and systems, is a major step towards a shared semantic model of the products of artistic and intellectual endeavor of mankind. The model is generally accepted as sufficiently generic to serve as a conceptual framework for a broad range of cultural heritage metadata. Unfortunately, the existing large body of legacy data makes a transition to this model difficult. For instance, most bibliographic data is still only available in various MARC-based formats, which are hard to render into reusable and meaningful semantic data. Making legacy bibliographic data accessible as semantic data is a complex problem that includes interpreting and transforming the information. In this article, we present our work on transforming and enhancing legacy bibliographic information into a representation where the structure and semantics of the FRBR model are explicit.
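
     A toy version of the 'interpret and transform' step, assuming a drastically simplified record: a flat, MARC-like dictionary is split into explicit FRBR Work, Expression and Manifestation entities. The field tags loosely follow MARC 21 conventions (100 author, 245 title, 041 language, 020 ISBN), but the record and the entity layout are invented for illustration and are not the FRBR-ML transformation itself.

        # One flat, MARC-like record (illustrative values; the ISBN is a placeholder).
        marc_like = {
            "100": "Ibsen, Henrik",
            "245": "Peer Gynt",
            "041": "eng",
            "020": "978-0-00-000000-0",
        }

        def to_frbr(record):
            """Split a flat record into linked FRBR entities."""
            work = {"type": "Work", "id": "w1", "title": record["245"], "creator": record["100"]}
            expression = {"type": "Expression", "id": "e1", "language": record.get("041"), "realizes": "w1"}
            manifestation = {"type": "Manifestation", "id": "m1", "isbn": record.get("020"), "embodies": "e1"}
            return [work, expression, manifestation]

        for entity in to_frbr(marc_like):
            print(entity)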
  10. Doerr, M.: Semantic problems of thesaurus mapping (2001) 0.00
    
    Abstract
    With networked information access to heterogeneous data sources, the problem of terminology provision and interoperability of controlled vocabulary schemes such as thesauri becomes increasingly urgent. Solutions are needed to improve the performance of full-text retrieval systems and to guide the design of controlled terminology schemes for use in structured data, including metadata. Thesauri are created in different languages, with different scope and points of view, and at different levels of abstraction and detail, to accommodate access to a specific group of collections. In any wider search accessing distributed collections, the user would like to start with familiar terminology and let the system find the correspondences to other terminologies in order to retrieve equivalent results from all addressed collections. This paper investigates possible semantic differences that may hinder the unambiguous mapping and transition from one thesaurus to another. It focuses on the differences in the meaning of terms and their relations as intended by their creators for indexing and querying a specific collection, in contrast to methods investigating the statistical relevance of terms for objects in a collection. It develops a notion of optimal mapping, paying particular attention to the intellectual quality of mappings between terms from different vocabularies and to problems of polysemy. Proposals are made to limit the vagueness introduced by the transition from one vocabulary to another. The paper shows ways in which thesaurus creators can improve their methodology to meet the challenges of networked access to distributed collections created under varying conditions. For system implementers, the discussion will lead to a better understanding of the complexity of the problem.
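
     A small sketch of typed mappings between two invented thesauri and a lookup that carries a query term into the target vocabulary; the mapping types echo the paper's distinction between exact and vaguer (broader/narrower) correspondences, and the polysemous example is invented.

        # Typed mappings keyed by (source vocabulary, source term) -- illustrative only.
        mappings = {
            ("ThesaurusA", "automobiles"): [("ThesaurusB", "motor cars", "exact")],
            ("ThesaurusA", "coach"): [                      # polysemous source term
                ("ThesaurusB", "buses", "narrower"),
                ("ThesaurusB", "trainers (sports)", "narrower"),
            ],
        }

        def translate(source_vocab, term, accept=("exact", "narrower", "broader")):
            """Return target-vocabulary terms whose mapping type is acceptable."""
            hits = mappings.get((source_vocab, term), [])
            return [(vocab, target, kind) for vocab, target, kind in hits if kind in accept]

        print(translate("ThesaurusA", "coach", accept=("exact",)))  # [] -- only vaguer mappings exist
        print(translate("ThesaurusA", "coach"))                     # both readings, flagged 'narrower'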
  11. Manguinhas, H.; Charles, V.; Isaac, A.; Miles, T.; Lima, A.; Neroulidis, A.; Ginouves, V.; Atsidis, D.; Hildebrand, M.; Brinkerink, M.; Gordea, S.: Linking subject labels in cultural heritage metadata to MIMO vocabulary using CultuurLink (2016) 0.00
    
  12. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.00
    
    Date
    11. 5.2013 19:22:18
  13. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.00
    
    Date
    3.12.2016 18:39:22