Search (46 results, page 1 of 3)

  • type_ss:"el"
  • theme_ss:"Semantische Interoperabilität"
  1. Mayr, P.; Petras, V.: Cross-concordances : terminology mapping and its effectiveness for information retrieval (2008) 0.02
    Abstract
    The German Federal Ministry for Education and Research funded a major terminology mapping initiative, which concluded in 2007. Its task was to organize, create and manage 'cross-concordances' between controlled vocabularies (thesauri, classification systems, subject heading lists), centred on the social sciences but quickly extending to other subject areas. 64 crosswalks with more than 500,000 relations were established. In the final phase of the project, a major evaluation effort was conducted to test and measure the effectiveness of the vocabulary mappings in an information system environment. The paper reports on the cross-concordance work and the evaluation results.
    Date
    26.12.2011 13:33:29
  2. Wenige, L.; Ruhland, J.: Similarity-based knowledge graph queries for recommendation retrieval (2019) 0.01
    Abstract
    Current retrieval and recommendation approaches rely on hard-wired data models. This hinders personalized customizations to meet information needs of users in a more flexible manner. Therefore, the paper investigates how similarity-based retrieval strategies can be combined with graph queries to enable users or system providers to explore repositories in the Linked Open Data (LOD) cloud more thoroughly. For this purpose, we developed novel content-based recommendation approaches. They rely on concept annotations of Simple Knowledge Organization System (SKOS) vocabularies and a SPARQL-based query language that facilitates advanced and personalized requests for openly available knowledge graphs. We have comprehensively evaluated the novel search strategies in several test cases and example application domains (i.e., travel search and multimedia retrieval). The results of the web-based online experiments showed that our approaches increase the recall and diversity of recommendations or at least provide a competitive alternative strategy of resource access when conventional methods do not provide helpful suggestions. The findings may be of use for Linked Data-enabled recommender systems (LDRS) as well as for semantic search engines that can consume LOD resources.
    Content
    Cf.: https://www.researchgate.net/publication/333358714_Similarity-based_knowledge_graph_queries_for_recommendation_retrieval. Cf. also: http://semantic-web-journal.net/content/similarity-based-knowledge-graph-queries-recommendation-retrieval-1.
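    The abstract above combines SKOS concept annotations with SPARQL queries over Linked Open Data. As a rough illustration of the idea (not the authors' actual query language or endpoint), the following Python sketch queries the public DBpedia SPARQL endpoint and ranks resources by the number of subject concepts they share with a seed resource; the seed URI and the use of dct:subject are assumptions made for this example.

      # Hedged sketch: similarity-style retrieval over a public LOD endpoint by
      # counting shared subject concepts. Requires the SPARQLWrapper package.
      from SPARQLWrapper import SPARQLWrapper, JSON

      SEED = "http://dbpedia.org/resource/Amsterdam"   # example seed resource

      query = f"""
      PREFIX dct: <http://purl.org/dc/terms/>
      SELECT ?other (COUNT(?concept) AS ?shared) WHERE {{
          <{SEED}> dct:subject ?concept .
          ?other dct:subject ?concept .
          FILTER (?other != <{SEED}>)
      }}
      GROUP BY ?other
      ORDER BY DESC(?shared)
      LIMIT 10
      """

      endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
      endpoint.setQuery(query)
      endpoint.setReturnFormat(JSON)

      # Print the ten resources sharing the most subject concepts with the seed.
      for row in endpoint.query().convert()["results"]["bindings"]:
          print(row["other"]["value"], row["shared"]["value"])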
  3. Mayr, P.; Petras, V.; Walter, A.-K.: Results from a German terminology mapping effort : intra- and interdisciplinary cross-concordances between controlled vocabularies (2007) 0.01
    Abstract
    In the final phase of the project, a major evaluation effort is under way to test and measure the effectiveness of the vocabulary mappings in an information system environment. Actual user queries are tested in a distributed search environment, where several bibliographic databases with different controlled vocabularies are searched at the same time. Three query variations are compared to each other: a free-text search without focusing on using the controlled vocabulary or terminology mapping; a controlled vocabulary search, where terms from one vocabulary (a 'home' vocabulary thought to be familiar to the user of a particular database) are used to search all databases; and finally, a search where controlled vocabulary terms are translated into the terms of the respective controlled vocabulary of the database. For evaluation purposes, types of cross-concordances are distinguished between intradisciplinary vocabularies (vocabularies within the social sciences) and interdisciplinary vocabularies (social sciences to other disciplines as well as other combinations). Simultaneously, an extensive quantitative analysis is conducted, aimed at finding patterns in terminology mappings that can explain trends in the effectiveness of terminology mappings, particularly looking at overlapping terms, types of determined relations (equivalence, hierarchy etc.), size of participating vocabularies, etc. This project is the largest terminology mapping effort in Germany. The number and variety of controlled vocabularies targeted provide an optimal basis for insights and further research opportunities. To our knowledge, terminology mapping efforts have rarely been evaluated with stringent qualitative and quantitative measures; this research should contribute to this area. For the NKOS workshop, we plan to present an overview of the project and participating vocabularies, an introduction to the heterogeneity service and its application, as well as some of the results and findings of the evaluation, which will be concluded in August.
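    To make the three query variations concrete, here is a small, purely illustrative Python sketch; the vocabulary names, terms and mapping relations are invented and are not the project's data.

      # Illustrative sketch of the three compared query variations:
      # free-text, "home" vocabulary, and mapped (translated) vocabulary.
      CROSS_CONCORDANCE = {
          ("TheSoz", "Informationssystem"): [
              {"target_vocab": "STW", "term": "Informationssystem", "relation": "exact"},
              {"target_vocab": "MeSH", "term": "Information Systems", "relation": "equivalence"},
          ],
      }

      def free_text_query(user_terms):
          """Variant 1: search all databases with the raw user terms."""
          return {"any_field": user_terms}

      def home_vocabulary_query(home_vocab, user_terms):
          """Variant 2: use the familiar 'home' vocabulary terms in every database."""
          return {"controlled_terms": [(home_vocab, t) for t in user_terms]}

      def mapped_vocabulary_query(home_vocab, user_terms, target_vocab):
          """Variant 3: translate home-vocabulary terms into the target database's vocabulary."""
          translated = []
          for term in user_terms:
              for m in CROSS_CONCORDANCE.get((home_vocab, term), []):
                  if m["target_vocab"] == target_vocab:
                      translated.append(m["term"])
          return {"controlled_terms": [(target_vocab, t) for t in translated]}

      terms = ["Informationssystem"]
      print(free_text_query(terms))
      print(home_vocabulary_query("TheSoz", terms))
      print(mapped_vocabulary_query("TheSoz", terms, "MeSH"))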
  4. Kollia, I.; Tzouvaras, V.; Drosopoulos, N.; Stamou, G.: A systemic approach for effective semantic access to cultural content (2012) 0.01
    Abstract
    A large ongoing activity for the digitization, dissemination and preservation of cultural heritage is taking place in Europe, the United States and worldwide, involving all types of cultural institutions (galleries, libraries, museums, archives) and all types of cultural content. The development of Europeana, as a single point of access to European cultural heritage, has probably been the most important result of the activities in the field so far. Semantic interoperability, linked open data, user involvement and user-generated content are key issues in these developments. This paper presents a system that provides content providers and users with the ability to map, in an effective way, their own metadata schemas to common domain standards and the Europeana (ESE, EDM) data models. The system is currently used by many European research projects and by Europeana. Based on these mappings, semantic query answering techniques are proposed as a means for effective access to digital cultural heritage, providing users with content enrichment and linking of data based on their involvement, and facilitating content search and retrieval. An experimental study is presented, involving content from national content aggregators as well as thematic content aggregators and Europeana, which illustrates the proposed system.
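    The core mapping step described above can be pictured as a simple field-level crosswalk. The sketch below is not the authors' system; the provider field names and the sample record are invented, and only the target property names (Dublin Core terms reused by ESE/EDM) come from the standards.

      # Hedged sketch of a provider-schema to Europeana-style crosswalk.
      PROVIDER_TO_EDM = {
          "titel": "dc:title",
          "urheber": "dc:creator",
          "schlagwort": "dc:subject",
          "datierung": "dc:date",
      }

      def map_record(provider_record):
          """Translate one provider record into a flat dict keyed by EDM/DC properties."""
          mapped = {}
          for field, value in provider_record.items():
              target = PROVIDER_TO_EDM.get(field)
              if target is None:
                  continue  # unmapped fields would need a schema extension or manual review
              mapped.setdefault(target, []).append(value)
          return mapped

      record = {"titel": "Stundenbuch", "urheber": "Unbekannt", "schlagwort": "Buchmalerei"}
      print(map_record(record))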
  5. Doerr, M.: Semantic problems of thesaurus mapping (2001) 0.01
    Abstract
    With networked information access to heterogeneous data sources, the problem of terminology provision and interoperability of controlled vocabulary schemes such as thesauri becomes increasingly urgent. Solutions are needed to improve the performance of full-text retrieval systems and to guide the design of controlled terminology schemes for use in structured data, including metadata. Thesauri are created in different languages, with different scope and points of view and at different levels of abstraction and detail, to accommodate access to a specific group of collections. In any wider search accessing distributed collections, the user would like to start with familiar terminology and let the system find out the correspondences to other terminologies in order to retrieve equivalent results from all addressed collections. This paper investigates possible semantic differences that may hinder the unambiguous mapping and transition from one thesaurus to another. It focuses on the differences in meaning of terms and their relations as intended by their creators for indexing and querying a specific collection, in contrast to methods investigating the statistical relevance of terms for objects in a collection. It develops a notion of optimal mapping, paying particular attention to the intellectual quality of mappings between terms from different vocabularies and to problems of polysemy. Proposals are made to limit the vagueness introduced by the transition from one vocabulary to another. The paper shows ways in which thesaurus creators can improve their methodology to meet the challenges of networked access to distributed collections created under varying conditions. For system implementers, the discussion will lead to a better understanding of the complexity of the problem.
  6. Si, L.: Encoding formats and consideration of requirements for mapping (2007) 0.01
    Abstract
    With the increasing requirement of establishing semantic mappings between different vocabularies, further development of these encoding formats is becoming more and more important. For this reason, four types of knowledge representation formats were assessed: MARC21 for Classification Data in XML, the Zthes XML Schema, XTM (XML Topic Map), and SKOS (Simple Knowledge Organisation System). This paper explores the potential of adapting these representation formats to support different semantic mapping methods, and discusses the implications of extending them to represent more complex KOS.
    Date
    26.12.2011 13:22:27
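    Of the four formats assessed in this entry, SKOS is the one aimed most directly at mappings. A minimal sketch of encoding a one-to-one mapping with the rdflib library follows; the concept URIs and labels are invented, while skos:prefLabel and skos:exactMatch are standard SKOS properties.

      # Minimal sketch: a SKOS mapping between two invented concepts, serialized as Turtle.
      from rdflib import Graph, URIRef, Literal
      from rdflib.namespace import RDF, SKOS

      g = Graph()
      g.bind("skos", SKOS)

      a = URIRef("http://example.org/vocabA/informationSystem")
      b = URIRef("http://example.org/vocabB/informationSystems")

      for concept, label in [(a, "Information system"), (b, "Information systems")]:
          g.add((concept, RDF.type, SKOS.Concept))
          g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))

      # An exact 1:1 mapping; broadMatch/narrowMatch would express hierarchical mappings instead.
      g.add((a, SKOS.exactMatch, b))

      print(g.serialize(format="turtle"))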
  7. Heckner, M.; Mühlbacher, S.; Wolff, C.: Tagging tagging : a classification model for user keywords in scientific bibliography management systems (2007) 0.01
    Abstract
    Recently, a growing number of systems that allow personal content annotation (tagging) are being created, ranging from personal sites for organising bookmarks (del.icio.us), photos (flickr.com) or videos (video.google.com, youtube.com) to systems for managing bibliographies for scientific research projects (citeulike.org, connotea.org). Simultaneously, a debate on the pros and cons of allowing users to add personal keywords to digital content has arisen. One recurrent point of discussion is whether tagging can solve the well-known vocabulary problem: in order to support successful retrieval in complex environments, it is necessary to index an object with a variety of aliases (cf. Furnas 1987). In this spirit, social tagging enhances the pool of rigid, traditional keywording by adding user-created retrieval vocabularies. Furthermore, tagging goes beyond simple personal content-based keywords by providing meta-keywords like funny or interesting that "identify qualities or characteristics" (Golder and Huberman 2006, Kipp and Campbell 2006, Kipp 2007, Feinberg 2006, Kroski 2005). Conversely, tagging systems are claimed to lead to semantic difficulties that may hinder the precision and recall of tagging systems (e.g. the polysemy problem, cf. Marlow 2006, Lakoff 2005, Golder and Huberman 2006). Empirical research on social tagging is still rare and mostly comes from a computational linguistics or librarian point of view (Voß 2007), focusing either on automatic statistical analyses of large data sets or on intellectual inspection of single cases of tag usage: some scientists have studied the evolution of tag vocabularies and tag distribution in specific systems (Golder and Huberman 2006, Hammond 2005), while others concentrate on tagging behaviour and tagger characteristics in collaborative systems (Hammond 2005, Kipp and Campbell 2007, Feinberg 2006, Sen 2006). However, little research has been conducted on the functional and linguistic characteristics of tags. An analysis of these patterns could show differences between user wording and conventional keywording. In order to provide a reasonable basis for comparison, a classification system for existing tags is needed.
  8. Naudet, Y.; Latour, T.; Chen, D.: A Systemic approach to Interoperability formalization (2009) 0.01
    Abstract
    With a first version developed last year, the Ontology of Interoperability (OoI) aims at formally describing concepts relating to problems and solutions in the domain of interoperability. From the beginning, the OoI has had its foundations in systems theory and addresses interoperability from the general point of view of a system, whether it is composed of other systems (systems-of-systems) or not. In this paper, we present the latest OoI, focusing on the systemic approach. We then integrate a classification of interoperability knowledge provided by the Framework for Enterprise Interoperability. In this way, we contextualize the OoI with a vocabulary specific to the enterprise domain, where solutions to interoperability problems are characterized according to the interoperability approaches defined in ISO 14258, and both solutions and problems can be located at enterprise levels and characterized by interoperability levels, as defined in the European Interoperability Framework.
    Date
    29. 1.2016 18:48:14
  9. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2009) 0.01
    Abstract
    Purpose: To develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems.
    Design/methodology/approach: Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies, UKAT and the ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework.
    Findings: The major findings showed that, given the large variety of terminology resources distributed on the web, the proposed middleware service is essential to integrate the different terminology resources technically and semantically in order to facilitate subject cross-browsing. A set of recommendations is also made, outlining the important approaches and features that support such a cross-browsing middleware service.
    Content
    This paper is a pre-print version presented at the ISKO UK 2009 conference, 22-23 June, prior to peer review and editing. For published proceedings see special issue of Aslib Proceedings journal.
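    The spine-based cross-browsing idea described above can be sketched in a few lines: each vocabulary term is mapped to a DDC class, and the shared DDC class acts as a pivot between vocabularies. The mappings below are illustrative only and do not reproduce the prototype's data.

      # Illustrative sketch of cross-browsing via a DDC "spine".
      UKAT_TO_DDC = {"Artificial intelligence": "006.3"}
      ACM_CCS_TO_DDC = {"Computing methodologies~Artificial intelligence": "006.3"}

      def cross_browse(term, source_to_ddc, target_to_ddc):
          """Return target-vocabulary terms sharing a DDC spine class with `term`."""
          ddc_class = source_to_ddc.get(term)
          if ddc_class is None:
              return []
          return [t for t, c in target_to_ddc.items() if c == ddc_class]

      print(cross_browse("Artificial intelligence", UKAT_TO_DDC, ACM_CCS_TO_DDC))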
  10. Balakrishnan, U.; Voß, J.: The Cocoda mapping tool (2015) 0.01
    Abstract
    Since the 1990s we have seen an explosion of information and, with it, an increasing need for data and information aggregation systems that store and manage information. However, most information sources apply different Knowledge Organization Systems (KOS) to describe the content of stored data. This heterogeneous mix of KOS in different systems complicates access and seamless sharing of information and knowledge. Concordances, also known as cross-concordances or terminology mappings, map different KOS to each other to improve information retrieval in such a heterogeneous mix of systems (Mayr 2010, Keil 2012). Also for coherent indexing with different terminologies, mappings are considered a valuable and essential working tool. However, despite efforts at standardization (e.g. SKOS, ISO 25964-2, Keil 2012, Soergel 2011), there is a significant scarcity of concordances, which has led to an inability to establish uniform exchange formats as well as methods and tools for maintaining mappings and making them easily accessible. This is particularly true in the field of library classification schemes. In essence, there is a lack of infrastructure for the provision and exchange of concordances, their management and quality assessment, as well as tools that would enable semi-automatic generation of mappings. The project "coli-conc" therefore aims to address this gap by creating the necessary infrastructure. This includes the specification of a data format for the exchange of concordances (JSKOS), the specification and implementation of web APIs to query concordance databases (JSKOS-API), and a modular web application to enable uniform access to knowledge organization systems, concordances and concordance assessments (Cocoda).
    The focus of the project "coli-conc" lies in the semi-automatic creation of mappings between different KOS in general, and between two important library classification schemes in particular: the Dewey Decimal Classification (DDC) and the Regensburg classification (RVK). In the year 2000, the national libraries of Germany, Austria and Switzerland adopted DDC in an endeavor to develop a nation-wide classification scheme. Historically, however, academic libraries in the German-speaking regions have been using their own home-grown systems, the most prominent and popular being the RVK. With the adoption of DDC, building concordances between DDC and RVK has become an imperative, although such concordances are still rare. The delay in building comprehensive concordances between these two systems is due to major challenges posed by their sheer size (38,000 classes in DDC and ca. 860,000 classes in RVK), the strong disparity in their respective structures, and the variation in the perception and representation of concepts. The challenge is compounded geometrically for any manual attempt in this direction. Although there have been efforts on automatic mappings (OAEI Library Track 2012-2014 and e.g. Pfeffer 2013) in recent years, such concordances carry the risk of inaccurate mappings, and the approaches are more suitable for mapping suggestions than for the automatic generation of concordances (Lauser 2008; Reiner 2010). The project "coli-conc" will facilitate the creation, evaluation, and reuse of mappings with a public collection of concordances and a web application for mapping management. The proposed presentation will give an introduction to the tools and standards created and planned in the project "coli-conc". This includes preliminary work on DDC concordances (Balakrishnan 2013), an overview of the software concept and technical architecture (Voß 2015), and a demonstration of the Cocoda web application.
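    To give a feel for the exchange format mentioned above, here is a hedged sketch of a single concordance mapping as a JSKOS-style JSON record. The field names approximate the JSKOS specification and the URIs and notations are illustrative; the published JSKOS spec is the normative reference.

      # Hedged sketch of a JSKOS-style DDC-to-RVK mapping record.
      import json

      mapping = {
          "from": {"memberSet": [{
              "uri": "http://dewey.info/class/004/",   # illustrative DDC concept URI
              "notation": ["004"],
          }]},
          "to": {"memberSet": [{
              "uri": "http://example.org/rvk/ST110",   # invented RVK concept URI
              "notation": ["ST 110"],
          }]},
          "type": ["http://www.w3.org/2004/02/skos/core#closeMatch"],
      }

      print(json.dumps(mapping, indent=2))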
  11. Bastos Vieira, S.; DeBrito, M.; Mustafa El Hadi, W.; Zumer, M.: Developing imaged KOS with the FRSAD Model : a conceptual methodology (2016) 0.00
    Abstract
    This proposal presents the methodology of indexing with images suggested by De Brito and Caribé (2015). The imagetic model is used as a mechanism compatible with FRSAD for global sharing and use of subject data, both within the library sector and beyond. The conceptual model of imagetic indexing shows how images are related to topics, and 'key-images' are interpreted as nomens to implement the FRSAD model. Indexing with images consists of using images instead of keywords or descriptors to represent and organize information. Implementing imaged navigation in OPACs brings multiple advantages derived from rethinking the OPAC anew, since we look forward to sharing concepts within the subject authority data. Images, carrying linguistic objects, permeate inter-social and cultural concepts. In practice this includes translated metadata, symmetrical multilingual thesauri, or any traditional indexing tools. iOPAC embodies efforts focused on conceptual levels as expected from librarians. Imaged interfaces are more intuitive since users do not need specific training for information retrieval, offering easier comprehension of indexing codes, larger conceptual portability of descriptors (as images), and better interoperability between discourse codes and indexing competences, positively affecting social and cultural interoperability. The imagetic methodology opens R&D fields for more suitable interfaces that take into consideration users with specific needs such as deafness and illiteracy. This methodology raises questions about the paradigm of the primacy of orality in information systems and paves the way to a legitimacy of multiple perspectives in document indexing by suggesting a more universal communication system based on images. Interdisciplinarity across neurosciences, linguistics and information sciences would provide desirable competencies for further investigations into the nature of cognitive processes in information organization and classification, while developing assistive KOS for individuals with communication problems, such as autism and deafness.
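    Under the authors' premise that key-images act as nomens, a FRSAD-style thema/nomen pairing might be modelled as follows. The classes, field names and example values are invented for illustration; FRSAD itself only defines the conceptual entities thema and nomen.

      # Hedged sketch: a thema with several nomens, one of which is a key-image URI.
      from dataclasses import dataclass, field

      @dataclass
      class Nomen:
          value: str            # a word, a notation, or an image URI
          kind: str = "label"   # e.g. "label", "notation", "key-image"

      @dataclass
      class Thema:
          identifier: str
          nomens: list = field(default_factory=list)

      bridge = Thema("subj:bridge", nomens=[
          Nomen("Bridge", kind="label"),
          Nomen("624.2", kind="notation"),
          Nomen("http://example.org/images/bridge-key.jpg", kind="key-image"),
      ])

      # Any nomen, including the key-image, can serve as an access point to the thema.
      print([n.value for n in bridge.nomens if n.kind == "key-image"])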
  12. Wicaksana, I.W.S.; Wahyudi, B.: Comparison Latent Semantic and WordNet approach for semantic similarity calculation (2011) 0.00
    Abstract
    Information exchange among many sources on the Internet is increasingly autonomous, dynamic and free. This situation drives differing views of concepts among sources. For example, the word 'bank' means an economic institution in the economy domain, but in the ecology domain it is defined as the slope of a river or lake. In this paper, we evaluate latent semantic and WordNet approaches to calculating semantic similarity. The evaluation is run for concepts from different domains, with expert or human judgments as reference. The results of the evaluation can contribute to concept mapping, query rewriting, interoperability, etc.
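    The WordNet side of such a comparison can be reproduced with NLTK's WordNet interface, using the paper's own 'bank' example. This is an illustrative sketch, not the authors' experimental setup; the reference concepts money.n.01 and river.n.01 stand in for the economy and ecology domains (run nltk.download('wordnet') once beforehand).

      # Score each noun sense of 'bank' against an economy and an ecology reference concept.
      from nltk.corpus import wordnet as wn

      money = wn.synset("money.n.01")
      river = wn.synset("river.n.01")

      for bank_sense in wn.synsets("bank", pos=wn.NOUN):
          sim_money = bank_sense.wup_similarity(money) or 0.0
          sim_river = bank_sense.wup_similarity(river) or 0.0
          domain = "economy" if sim_money > sim_river else "ecology"
          print(f"{bank_sense.name():<12} {bank_sense.definition()[:40]:<42} -> {domain}")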
  13. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.00
    Date
    22. 6.2015 16:08:38
  14. Dini, L.: CACAO : multilingual access to bibliographic records (2007) 0.00
    Content
    Presentation given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  15. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.00
    Abstract
    This PhD thesis describes how to effectively explore linked data on the Web. The main focus is on scenarios where users want to discover relationships between resources rather than finding out more about something specific. Searching for a specific document or piece of information fits in the theoretical framework of information retrieval and is associated with exploratory search. Exploratory search goes beyond 'looking up something' when users are seeking more detailed understanding, further investigation or navigation of the initial search results. The ideas behind exploratory search and querying linked data merge when it comes to the way knowledge is represented and indexed by machines - how data is structured and stored for optimal searchability. Queries and information should be aligned to facilitate that searches also reveal connections between results. This implies that they take into account the same semantic entities, relevant at that moment. To realize this, we research three techniques that are evaluated one by one in an experimental set-up to assess how well they succeed in their goals. In the end, the techniques are applied to a practical use case that focuses on forming a bridge between the Web and the use of digital libraries in scientific research. Our first technique focuses on the interactive visualization of search results. Linked data resources can be brought in relation with each other at will. This leads to complex and diverse graph structures. Our technique facilitates navigation and supports a workflow starting from a broad overview of the data, allowing the user to narrow down to the desired level of detail and then broaden again. To validate the flow, two visualizations were implemented and presented to test users. The users judged the usability of the visualizations, how the visualizations fit in the workflow and to what degree their features seemed useful for the exploration of linked data.
    When we speak about finding relationships between resources, it is necessary to dive deeper into the structure. The graph structure of linked data, where the semantics give meaning to the relationships between resources, enables the execution of pathfinding algorithms. The assigned weights and heuristics are base components of such algorithms and ultimately define which resources are included in a path, and in which order. These paths explain indirect connections between resources. Our third technique proposes an algorithm that optimizes the choice of resources in terms of serendipity. Some optimizations guard the consistency of candidate paths, where the coherence of consecutive connections is maximized to avoid trivial and overly arbitrary paths. The implementation uses the A* algorithm, the de facto reference when it comes to heuristically optimized minimal-cost paths. The effectiveness of paths was measured with common automatic metrics and with surveys in which users could indicate their preference for paths generated each time in a different way. Finally, all our techniques are applied to a use case about publications in digital libraries, where they are aligned with information about scientific conferences and researchers. The application to this use case is a practical example because the different aspects of exploratory search come together. In fact, the techniques also evolved from the experience of implementing the use case. Practical details about the semantic model are explained and the implementation of the search system is clarified module by module. The evaluation positions the result, a prototype of a tool to explore scientific publications, researchers and conferences, next to some important alternatives.
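    The pathfinding step can be illustrated with a compact A* search over a tiny, hand-made linked-data-like graph. The nodes, edge weights and the zero heuristic below are placeholders; the thesis tunes weights and heuristics for serendipity, which this sketch does not attempt to reproduce.

      # Compact A* path search over a small directed, weighted graph of resources.
      import heapq

      GRAPH = {
          "ex:PaperA":   [("ex:AuthorX", 1.0), ("ex:ConfISWC", 1.0)],
          "ex:AuthorX":  [("ex:PaperB", 1.0)],
          "ex:ConfISWC": [("ex:PaperC", 2.0)],
          "ex:PaperB":   [("ex:ConfISWC", 1.0)],
          "ex:PaperC":   [],
      }

      def a_star(start, goal, heuristic=lambda node: 0.0):
          """Return the cheapest path from start to goal, or None if unreachable."""
          frontier = [(heuristic(start), 0.0, start, [start])]
          visited = set()
          while frontier:
              _, cost, node, path = heapq.heappop(frontier)
              if node == goal:
                  return path
              if node in visited:
                  continue
              visited.add(node)
              for neighbour, weight in GRAPH.get(node, []):
                  if neighbour not in visited:
                      new_cost = cost + weight
                      heapq.heappush(frontier, (new_cost + heuristic(neighbour),
                                                new_cost, neighbour, path + [neighbour]))
          return None

      print(a_star("ex:PaperA", "ex:PaperC"))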
  16. Mayr, P.; Petras, V.: Crosskonkordanzen : Terminologie Mapping und deren Effektivität für das Information Retrieval 0.00
  17. Landry, P.: MACS: multilingual access to subject and link management : Extending the Multilingual Capacity of TEL in the EDL Project (2007) 0.00
    Content
    Presentation given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  18. Angjeli, A.; Isaac, A.: Semantic web and vocabularies interoperability : an experiment with illuminations collections (2008) 0.00
    Abstract
    During the years 2006 and 2007, the BnF collaborated with the National Library of the Netherlands within the framework of the Dutch project STITCH. This project, through concrete experiments, investigates semantic interoperability, especially in relation to searching. How can we conduct semantic searches across several digital heritage collections? The metadata related to content analysis are often heterogeneous. Beyond using manual mapping of semantically similar entities, STITCH explores the techniques of the semantic web, particularly ontology mapping. This paper is about an experiment made on two digital iconographic collections: Mandragore, the iconographic database of the Manuscript Department of the BnF, and the Medieval Illuminated Manuscripts collection of the KB. While the content of these two collections is similar, they have been processed differently and the vocabularies used to index their content are very different. The vocabularies in Mandragore and Iconclass are both controlled and hierarchical, but they do not have the same semantics and structure. This difference is of particular interest to the STITCH project, as it aims to study the automatic alignment of two vocabularies. The collaborative experiment started with a precise analysis of each of the vocabularies; this included concepts and their representation, lexical properties of the terms used, semantic relationships, etc. The team of Dutch researchers then studied and implemented mechanisms for the alignment of the two vocabularies. The initial models being different, a common standard was needed to enable alignment procedures; RDF and SKOS were selected for that purpose. The experiment led to building a prototype that allows querying both databases at the same time through a single interface. The descriptors of each vocabulary are used as search terms for all images regardless of the collection they belong to. This experiment is only one step in the search for solutions that aim at making navigation easier between heritage collections that have heterogeneous metadata.
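    A deliberately naive way to generate alignment candidates between two such vocabularies is label comparison, as in the sketch below; the STITCH experiment used far more sophisticated ontology-mapping techniques, and the terms, glosses and Iconclass notations shown here are invented for illustration.

      # Naive label-based alignment candidates between two invented vocabulary samples.
      from difflib import SequenceMatcher

      mandragore_terms = ["ange", "roi", "navire"]
      iconclass_labels = {"11G": "angels", "44B1": "king", "46C24": "sailing-ship"}
      # crude French->English gloss, standing in for real lexical resources
      gloss = {"ange": "angel", "roi": "king", "navire": "ship"}

      def candidates(term, threshold=0.5):
          """Return Iconclass notations whose label resembles the (glossed) term."""
          glossed = gloss.get(term, term)
          hits = []
          for notation, label in iconclass_labels.items():
              score = SequenceMatcher(None, glossed, label).ratio()
              if score >= threshold:
                  hits.append((notation, label, round(score, 2)))
          return sorted(hits, key=lambda h: -h[2])

      for t in mandragore_terms:
          print(t, "->", candidates(t))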
  19. Dextre Clarke, S.G.: Overview of ISO NP 25964 : structured vocabularies for information retrieval (2007) 0.00
  20. Chen, H.: Semantic research for digital libraries (1999) 0.00
    Abstract
    In this era of the Internet and distributed, multimedia computing, new and emerging classes of information systems applications have swept into the lives of office workers and people in general. From digital libraries, multimedia systems, geographic information systems, and collaborative computing to electronic commerce, virtual reality, and electronic video arts and games, these applications have created tremendous opportunities for information and computer science researchers and practitioners. As applications become more pervasive, pressing, and diverse, several well-known information retrieval (IR) problems have become even more urgent. Information overload, a result of the ease of information creation and transmission via the Internet and WWW, has become more troublesome (e.g., even stockbrokers and elementary school students, heavily exposed to various WWW search engines, are versed in such IR terminology as recall and precision). Significant variations in database formats and structures, the richness of information media (text, audio, and video), and an abundance of multilingual information content also have created severe information interoperability problems -- structural interoperability, media interoperability, and multilingual interoperability.