Search (59 results, page 1 of 3)

  • year_i:[2000 TO 2010}
  • theme_ss:"Semantische Interoperabilität"
  1. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.08
    0.08147512 = product of:
      0.16295023 = sum of:
        0.16295023 = sum of:
          0.11353976 = weight(_text_:web in 4184) [ClassicSimilarity], result of:
            0.11353976 = score(doc=4184,freq=14.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.6677857 = fieldWeight in 4184, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4184)
          0.049410466 = weight(_text_:22 in 4184) [ClassicSimilarity], result of:
            0.049410466 = score(doc=4184,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.2708308 = fieldWeight in 4184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4184)
      0.5 = coord(1/2)
    
    Abstract
    The Internet as a medium is changing, and with it its conditions of publication and reception. What opportunities do the two currently and concurrently discussed visions of its future, the Social Web and the Semantic Web, offer? To answer this question, the article examines the foundations of both models with regard to application and technology, but also highlights their shortcomings as well as the added value of a combination appropriate to the medium. Using the grammatical online information system grammis as an example, a strategy for integrating the respective strengths of both is outlined.
    Date
    22. 1.2011 10:38:28
    Source
    Kommunikation, Partizipation und Wirkungen im Social Web, Band 1. Hrsg.: A. Zerfaß u.a.
    Theme
    Semantic Web
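The number printed beside each hit is the output of Lucene's "explain" mode for its classic TF-IDF similarity; every component of the final value appears in the tree shown with the entry above. As a minimal sketch (taking queryNorm, fieldNorm and the idf values as given from the explain output, and assuming nothing beyond what is printed there), the score of the first hit can be recomputed as follows:

```python
import math

def classic_similarity_term_score(freq, idf, query_norm, field_norm):
    """Recompute one weight(...) node of a ClassicSimilarity explain tree."""
    query_weight = idf * query_norm       # queryWeight = idf * queryNorm
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq), as shown in the tree
    field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.052098576
FIELD_NORM = 0.0546875  # fieldNorm(doc=4184)

web_score = classic_similarity_term_score(14.0, 3.2635105, QUERY_NORM, FIELD_NORM)
term_22_score = classic_similarity_term_score(2.0, 3.5018296, QUERY_NORM, FIELD_NORM)

coord = 1 / 2  # coord(1/2): only one of the two query clauses matched
total = (web_score + term_22_score) * coord
print(web_score, term_22_score, total)
# approx. 0.11353976, 0.04941047 and 0.08147512, matching the explain tree above
```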
  2. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.06
    0.061869845 = product of:
      0.12373969 = sum of:
        0.12373969 = sum of:
          0.07432922 = weight(_text_:web in 759) [ClassicSimilarity], result of:
            0.07432922 = score(doc=759,freq=6.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.43716836 = fieldWeight in 759, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0546875 = fieldNorm(doc=759)
          0.049410466 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
            0.049410466 = score(doc=759,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.2708308 = fieldWeight in 759, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=759)
      0.5 = coord(1/2)
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
    Date
    11. 5.2013 19:22:18
    Theme
    Semantic Web
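Heflin and Hendler's argument that XML alone does not give semantic interoperability can be illustrated with a small, made-up example: two well-formed documents describe the same resource under different DTD conventions, and the equivalence of their element names has to be supplied from outside the documents, here as a hand-made mapping table.

```python
import xml.etree.ElementTree as ET

# Two hypothetical feeds describing the same book with different markup.
doc_a = ET.fromstring("<item><creator>J. Heflin</creator><name>SHOE primer</name></item>")
doc_b = ET.fromstring("<book><author>J. Heflin</author><title>SHOE primer</title></book>")

# XML parsing reveals the structure of each document, but not that
# <creator> and <author> (or <name> and <title>) mean the same thing.
# That equivalence lives in an external, hand-made mapping:
element_mapping = {"creator": "author", "name": "title", "item": "book"}

def normalise(elem, mapping):
    """Rewrite element names of one feed into the other feed's vocabulary."""
    out = ET.Element(mapping.get(elem.tag, elem.tag))
    out.text = elem.text
    for child in elem:
        out.append(normalise(child, mapping))
    return out

print(ET.tostring(normalise(doc_a, element_mapping), encoding="unicode"))
# <book><author>J. Heflin</author><title>SHOE primer</title></book>
```

Making such agreements explicit and shareable, rather than burying them in ad hoc conversion code, is what RDF and languages such as SHOE aim at.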
  3. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.05
    0.048268706 = product of:
      0.09653741 = sum of:
        0.09653741 = product of:
          0.28961223 = sum of:
            0.28961223 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.28961223 = score(doc=306,freq=2.0), product of:
                0.4416923 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.052098576 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    See: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  4. Godby, C.J.; Smith, D.; Childress, E.: Encoding application profiles in a computational model of the crosswalk (2008) 0.04
    0.044192746 = product of:
      0.08838549 = sum of:
        0.08838549 = sum of:
          0.053092297 = weight(_text_:web in 2649) [ClassicSimilarity], result of:
            0.053092297 = score(doc=2649,freq=6.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.3122631 = fieldWeight in 2649, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2649)
          0.03529319 = weight(_text_:22 in 2649) [ClassicSimilarity], result of:
            0.03529319 = score(doc=2649,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.19345059 = fieldWeight in 2649, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2649)
      0.5 = coord(1/2)
    
    Abstract
    OCLC's Crosswalk Web Service (Godby, Smith and Childress, 2008) formalizes the notion of crosswalk, as defined in Gill et al. (n.d.), by hiding technical details and permitting the semantic equivalences to emerge as the centerpiece. One outcome is that metadata experts, who are typically not programmers, can enter the translation logic into a spreadsheet that can be automatically converted into executable code. In this paper, we describe the implementation of the Dublin Core Terms application profile in the management of crosswalks involving MARC. A crosswalk that encodes an application profile extends the typical format with two columns: one that annotates the namespace to which an element belongs, and one that annotates a 'broader-narrower' relation between a pair of elements, such as Dublin Core coverage and Dublin Core Terms spatial. This information is sufficient to produce scripts written in OCLC's Semantic Equivalence Expression Language (or Seel), which are called from the Crosswalk Web Service to generate production-grade translations. With its focus on elements that can be mixed, matched, added, and redefined, the application profile (Heery and Patel, 2000) is a natural fit with the translation model of the Crosswalk Web Service, which attempts to achieve interoperability by mapping one pair of elements at a time.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
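The crosswalk-as-spreadsheet model described by Godby, Smith and Childress can be pictured as rows that pair a source element with a target element and carry the two extra annotation columns mentioned in the abstract (namespace and a broader-narrower link). The rows and the translate helper below are illustrative assumptions; OCLC's Seel language and the Crosswalk Web Service interfaces are not reproduced here.

```python
# Hypothetical crosswalk rows: each row maps a MARC field to a Dublin Core
# element and carries the two application-profile columns from the abstract:
# the namespace an element belongs to, and an optional broader-narrower link.
crosswalk = [
    {"source": "MARC 245$a", "target": "title",   "namespace": "dc",      "broader": None},
    {"source": "MARC 260$c", "target": "issued",  "namespace": "dcterms", "broader": "date"},
    {"source": "MARC 651$a", "target": "spatial", "namespace": "dcterms", "broader": "coverage"},
]

def translate(record, crosswalk):
    """Apply the crosswalk one element pair at a time (the model described above)."""
    out = {}
    for row in crosswalk:
        if row["source"] in record:
            key = f'{row["namespace"]}:{row["target"]}'
            out[key] = record[row["source"]]
    return out

marc_record = {"MARC 245$a": "Semantic interoperability", "MARC 260$c": "2008"}
print(translate(marc_record, crosswalk))
# {'dc:title': 'Semantic interoperability', 'dcterms:issued': '2008'}
```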
  5. Lauser, B.; Johannsen, G.; Caracciolo, C.; Hage, W.R. van; Keizer, J.; Mayr, P.: Comparing human and automatic thesaurus mapping approaches in the agricultural domain (2008) 0.03
    0.03297302 = product of:
      0.06594604 = sum of:
        0.06594604 = sum of:
          0.030652853 = weight(_text_:web in 2627) [ClassicSimilarity], result of:
            0.030652853 = score(doc=2627,freq=2.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.18028519 = fieldWeight in 2627, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2627)
          0.03529319 = weight(_text_:22 in 2627) [ClassicSimilarity], result of:
            0.03529319 = score(doc=2627,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.19345059 = fieldWeight in 2627, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2627)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge organization systems (KOS), like thesauri and other controlled vocabularies, are used to provide subject access to information systems across the web. Due to the heterogeneity of these systems, mapping between vocabularies becomes crucial for retrieving relevant information. However, mapping thesauri is a laborious task, and thus big efforts are being made to automate the mapping process. This paper examines two mapping approaches involving the agricultural thesaurus AGROVOC, one machine-created and one human-created. We are addressing the basic question "What are the pros and cons of human and automatic mapping and how can they complement each other?" By pointing out the difficulties in specific cases or groups of cases and grouping the sample into simple and difficult types of mappings, we show the limitations of current automatic methods and come up with some basic recommendations on what approach to use when.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
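A concrete way to compare the two kinds of mapping discussed above is to treat the human-created mappings as a gold standard and score the automatic mappings against them with precision and recall. The term pairs below are invented, not AGROVOC data:

```python
# Hypothetical mapping sets: pairs of (source term, target term).
human_mappings = {("maize", "corn"), ("cattle", "bovinae"), ("soil water", "soil moisture")}
auto_mappings  = {("maize", "corn"), ("cattle", "cows"), ("soil water", "soil moisture"),
                  ("rice", "oryza")}

true_positives = human_mappings & auto_mappings
precision = len(true_positives) / len(auto_mappings)   # share of automatic links that are right
recall    = len(true_positives) / len(human_mappings)  # share of human links that were found

print(f"precision={precision:.2f} recall={recall:.2f}")
# precision=0.50 recall=0.67
```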
  6. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2009) 0.03
    0.03297302 = product of:
      0.06594604 = sum of:
        0.06594604 = sum of:
          0.030652853 = weight(_text_:web in 3628) [ClassicSimilarity], result of:
            0.030652853 = score(doc=3628,freq=2.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.18028519 = fieldWeight in 3628, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3628)
          0.03529319 = weight(_text_:22 in 3628) [ClassicSimilarity], result of:
            0.03529319 = score(doc=3628,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.19345059 = fieldWeight in 3628, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3628)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: To develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems. Design/methodology/approach: Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies, UKAT and the ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework. Findings: The major findings showed that, given the large variety of terminology resources distributed on the web, the proposed middleware service is essential to integrate the different terminology resources technically and semantically in order to facilitate subject cross-browsing. A set of recommendations is also made outlining the important approaches and features that support such a cross-browsing middleware service.
    Content
    This paper is a pre-print version presented at the ISKO UK 2009 conference, 22-23 June, prior to peer review and editing. For published proceedings see special issue of Aslib Proceedings journal.
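The spine-based middleware sketched by Si, O'Brien and Probets can be thought of as a lookup structure in which a DDC class acts as the hub and terms from the other vocabularies hang off it. The captions and mapped terms below are illustrative placeholders, not the project's actual mapping data:

```python
# Hypothetical spine: DDC notation -> terms mapped to it from other vocabularies.
spine = {
    "005.74": {"ddc_caption": "Data files and databases",
               "UKAT": ["Databases"], "ACM_CCS": ["H.2 Database Management"]},
    "006.3":  {"ddc_caption": "Artificial intelligence",
               "UKAT": ["Artificial intelligence"], "ACM_CCS": ["I.2 Artificial Intelligence"]},
}

def cross_browse(term, spine):
    """Given a term from any vocabulary, return all terms attached to the same DDC class."""
    for notation, entry in spine.items():
        vocab_terms = [t for terms in (entry["UKAT"], entry["ACM_CCS"]) for t in terms]
        if term in vocab_terms or term == entry["ddc_caption"]:
            return notation, entry
    return None

print(cross_browse("Databases", spine))
# ('005.74', {'ddc_caption': 'Data files and databases', ...})
```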
  7. Krause, J.: Semantic heterogeneity : comparing new semantic web approaches with those of digital libraries (2008) 0.03
    0.027630107 = product of:
      0.055260215 = sum of:
        0.055260215 = product of:
          0.11052043 = sum of:
            0.11052043 = weight(_text_:web in 1908) [ClassicSimilarity], result of:
              0.11052043 = score(doc=1908,freq=26.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.65002745 = fieldWeight in 1908, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1908)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - To demonstrate that newer developments in the semantic web community, particularly those based on ontologies (simple knowledge organization system and others) mitigate common arguments from the digital library (DL) community against participation in the Semantic web. Design/methodology/approach - The approach is a semantic web discussion focusing on the weak structure of the Web and the lack of consideration given to the semantic content during indexing. Findings - The points criticised by the semantic web and ontology approaches are the same as those of the DL "Shell model approach" from the mid-1990s, with emphasis on the centrality of its heterogeneity components (used, for example, in vascoda). The Shell model argument began with the "invisible web", necessitating the restructuring of DL approaches. The conclusion is that both approaches fit well together and that the Shell model, with its semantic heterogeneity components, can be reformulated on the semantic web basis. Practical implications - A reinterpretation of the DL approaches of semantic heterogeneity and adapting to standards and tools supported by the W3C should be the best solution. It is therefore recommended that - although most of the semantic web standards are not technologically refined for commercial applications at present - all individual DL developments should be checked for their adaptability to the W3C standards of the semantic web. Originality/value - A unique conceptual analysis of the parallel developments emanating from the digital library and semantic web communities.
    Footnote
    Contribution to a special issue "Digital libraries and the semantic web: context, applications and research".
    Theme
    Semantic Web
  8. Krause, J.: Shell Model, Semantic Web and Web Information Retrieval (2006) 0.02
    0.022989638 = product of:
      0.045979276 = sum of:
        0.045979276 = product of:
          0.09195855 = sum of:
            0.09195855 = weight(_text_:web in 6061) [ClassicSimilarity], result of:
              0.09195855 = score(doc=6061,freq=18.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.5408555 = fieldWeight in 6061, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6061)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The mid-1990s were marked by increased enthusiasm for the possibilities of the WWW, which has only recently given way - at least in relation to scientific information - to a more differentiated weighing of its advantages and disadvantages. Web Information Retrieval originated as a specialized discipline with great commercial significance (for an overview see Lewandowski 2005). Besides the new technological structure that enables the indexing and searching (in seconds) of unimaginable amounts of data worldwide, new assessment processes for the ranking of search results are being developed, which use the link structures of the Web. They are the main innovation with respect to the traditional "mother discipline" of Information Retrieval. From the beginning, commercial search engines have applied the link structures of Web pages in a wide array of variations. From the perspective of scientific information, link topology based approaches were in essence trying to solve a self-created problem: on the one hand, it quickly became clear that the openness of the Web led to an up-to-now unknown increase in available information, but this also caused the quality of the Web pages searched to become a problem - and with it the relevance of the results. The gatekeeper function of traditional information providers, which narrows down every user query to focus on high-quality sources, was lacking. Therefore, the recognition of the "authoritativeness" of Web pages by general search engines such as Google was one of the most important factors for their success.
    Theme
    Semantic Web
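The "link topology based approaches" Krause refers to can be illustrated with a tiny PageRank-style iteration over a toy link graph; this is a generic sketch of the idea, not the ranking function of any particular search engine:

```python
# Toy web graph: page -> pages it links to.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}
damping = 0.85

for _ in range(50):  # simple power iteration
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += damping * share
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))
# "C" collects links from A, B and D and therefore ends up with the highest rank.
```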
  9. Dini, L.: CACAO : multilingual access to bibliographic records (2007) 0.02
    0.021175914 = product of:
      0.042351827 = sum of:
        0.042351827 = product of:
          0.084703654 = sum of:
            0.084703654 = weight(_text_:22 in 126) [ClassicSimilarity], result of:
              0.084703654 = score(doc=126,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.46428138 = fieldWeight in 126, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=126)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  10. Boteram, F.; Hubrich, J.: Towards a comprehensive international Knowledge Organization System (2008) 0.02
    0.021175914 = product of:
      0.042351827 = sum of:
        0.042351827 = product of:
          0.084703654 = sum of:
            0.084703654 = weight(_text_:22 in 4786) [ClassicSimilarity], result of:
              0.084703654 = score(doc=4786,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.46428138 = fieldWeight in 4786, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4786)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.2008 19:30:41
  11. Burstein, M.; McDermott, D.V.: Ontology translation for interoperability among Semantic Web services (2005) 0.02
    0.018770961 = product of:
      0.037541922 = sum of:
        0.037541922 = product of:
          0.075083844 = sum of:
            0.075083844 = weight(_text_:web in 2661) [ClassicSimilarity], result of:
              0.075083844 = score(doc=2661,freq=12.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.4416067 = fieldWeight in 2661, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2661)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Research on semantic web services promises greater interoperability among software agents and web services by enabling content-based automated service discovery and interaction. Although this is to be based on use of shared ontologies published on the semantic web, services produced and described by different developers may well use different, perhaps partly overlapping, sets of ontologies. Interoperability will depend on ontology mappings and architectures supporting the associated translation processes. The question we ask is, does the traditional approach of introducing mediator agents to translate messages between requestors and services work in such an open environment? This article reviews some of the processing assumptions that were made in the development of the semantic web service modeling ontology OWL-S and argues that, as a practical matter, the translation function cannot always be isolated in mediators. Ontology mappings need to be published on the semantic web just as ontologies themselves are. The translation for service discovery, service process model interpretation, task negotiation, service invocation, and response interpretation may then be distributed to various places in the architecture so that translation can be done in the specific goal-oriented informational contexts of the agents performing these processes. We present arguments for assigning translation responsibility to particular agents in the cases of service invocation, response translation, and matchmaking.
  12. Landry, P.: MACS: multilingual access to subject and link management : Extending the Multilingual Capacity of TEL in the EDL Project (2007) 0.02
    0.017646596 = product of:
      0.03529319 = sum of:
        0.03529319 = product of:
          0.07058638 = sum of:
            0.07058638 = weight(_text_:22 in 1287) [ClassicSimilarity], result of:
              0.07058638 = score(doc=1287,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.38690117 = fieldWeight in 1287, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1287)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  13. Veltman, K.H.: Syntactic and semantic interoperability : new approaches to knowledge and the Semantic Web (2001) 0.02
    0.017339872 = product of:
      0.034679744 = sum of:
        0.034679744 = product of:
          0.06935949 = sum of:
            0.06935949 = weight(_text_:web in 3883) [ClassicSimilarity], result of:
              0.06935949 = score(doc=3883,freq=16.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.4079388 = fieldWeight in 3883, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3883)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    At WWW7 (Brisbane, 1998), Tim Berners-Lee outlined his vision of a global reasoning web. At WWW8 (Toronto, May 1999), he developed this into a vision of a semantic web, where one could search not just for isolated words, but for meaning in the form of logically provable claims. In the past four years this vision has spread with amazing speed. The semantic web has been adopted by the European Commission as one of the important goals of the Sixth Framework Programme. In the United States it has become linked with the Defense Advanced Research Projects Agency (DARPA). While this quest to achieve a semantic web is new, the quest for meaning in language has a history that is almost as old as language itself. Accordingly this paper opens with a survey of the historical background. The contributions of the Dublin Core are reviewed briefly. To achieve a semantic web requires both syntactic and semantic interoperability. These challenges are outlined. A basic contention of this paper is that semantic interoperability requires much more than a simple agreement concerning the static meaning of a term. Different levels of agreement (local, regional, national and international) are involved and these levels have their own history. Hence, one of the larger challenges is to create new systems of knowledge organization, which identify and connect these different levels. With respect to meaning or semantics, early twentieth century pioneers such as Wüster were hopeful that it might be sufficient to limit oneself to isolated terms and words without reference to the larger grammatical context: to concept systems rather than to propositional logic. While a fascination with concept systems implicitly dominates many contemporary discussions, this paper suggests why this approach is not sufficient. The final section of this paper explores how an approach using propositional logic could lead to a new approach to universals and particulars. This points to a re-organization of knowledge, and opens the way for a vision of a semantic web with all the historical and cultural richness and complexity of language itself.
    Theme
    Semantic Web
  14. Budin, G.: Kommunikation in Netzwerken : Terminologiemanagement (2006) 0.02
    0.017339872 = product of:
      0.034679744 = sum of:
        0.034679744 = product of:
          0.06935949 = sum of:
            0.06935949 = weight(_text_:web in 5700) [ClassicSimilarity], result of:
              0.06935949 = score(doc=5700,freq=4.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.4079388 = fieldWeight in 5700, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5700)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This chapter gives an overview of the goals, methods, and application contexts of terminology management. A definition of "terminology" leads on to a terminological knowledge model with which the dynamics and complexity of conceptual knowledge structures and their corresponding lexical representations can be described. The aim of terminological knowledge modelling is to work out the linguistic and conceptual prerequisites for precise specialist communication and for semantic interoperability in the future "Semantic Web".
    Source
    Semantic Web: Wege zur vernetzten Wissensgesellschaft. Hrsg.: T. Pellegrini, u. A. Blumauer
  15. Isaac, A.; Schlobach, S.; Matthezing, H.; Zinn, C.: Integrated access to cultural heritage resources through representation and alignment of controlled vocabularies (2008) 0.02
    0.016219966 = product of:
      0.032439932 = sum of:
        0.032439932 = product of:
          0.064879864 = sum of:
            0.064879864 = weight(_text_:web in 3398) [ClassicSimilarity], result of:
              0.064879864 = score(doc=3398,freq=14.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.38159183 = fieldWeight in 3398, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3398)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - To show how semantic web techniques can help address semantic interoperability issues in the broad cultural heritage domain, allowing users integrated and seamless access to heterogeneous collections. Design/methodology/approach - This paper presents the heterogeneity problems to be solved. It introduces semantic web techniques that can help in solving them, focusing on the representation of controlled vocabularies and their semantic alignment. It gives pointers to some previous projects and experiments that have tried to address the problems discussed. Findings - Semantic web research provides practical technical and methodological approaches to tackle the different issues. Two contributions of interest are the simple knowledge organisation system model and automatic vocabulary alignment methods and tools. These contributions were demonstrated to be usable for enabling semantic search and navigation across collections. Research limitations/implications - The research aims at designing different representation and alignment methods for solving interoperability problems in the context of controlled subject vocabularies. Given the variety and technical richness of current research in the semantic web field, it is impossible to provide an in-depth account or an exhaustive list of references. Every aspect of the paper is, however, given one or several pointers for further reading. Originality/value - This article provides a general and practical introduction to relevant semantic web techniques. It is of specific value for the practitioners in the cultural heritage and digital library domains who are interested in applying these methods in practice.
    Content
    This paper is based on a talk given at "Information Access for the Global Community, An International Seminar on the Universal Decimal Classification" held on 4-5 June 2007 in The Hague, The Netherlands. An abstract of this talk will be published in Extensions and Corrections to the UDC, an annual publication of the UDC consortium. Contribution to a special issue "Digital libraries and the semantic web: context, applications and research".
    Theme
    Semantic Web
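A crude version of the automatic vocabulary alignment mentioned by Isaac et al. is lexical matching between the labels of two controlled vocabularies, with candidate links expressed as SKOS mapping properties. The vocabulary entries and the similarity threshold below are invented for illustration; real alignment tools are considerably more sophisticated:

```python
from difflib import SequenceMatcher

# Two tiny, invented vocabularies: concept identifier -> preferred label.
vocab_a = {"A:001": "Illuminated manuscripts", "A:002": "Woodcuts"}
vocab_b = {"B:101": "Manuscripts, illuminated", "B:102": "Wood engravings"}

def label_similarity(x, y):
    """Very rough, word-order-insensitive comparison of two labels."""
    norm = lambda s: " ".join(sorted(s.lower().split(", ")))
    return SequenceMatcher(None, norm(x), norm(y)).ratio()

alignment = []
for a_id, a_label in vocab_a.items():
    for b_id, b_label in vocab_b.items():
        score = label_similarity(a_label, b_label)
        if score > 0.8:  # arbitrary cut-off for candidate links
            alignment.append((a_id, "skos:closeMatch", b_id, round(score, 2)))

print(alignment)
# [('A:001', 'skos:closeMatch', 'B:101', 1.0)]
```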
  16. Liang, A.; Salokhe, G.; Sini, M.; Keizer, J.: Towards an infrastructure for semantic applications : methodologies for semantic integration of heterogeneous resources (2006) 0.02
    0.015927691 = product of:
      0.031855382 = sum of:
        0.031855382 = product of:
          0.063710764 = sum of:
            0.063710764 = weight(_text_:web in 241) [ClassicSimilarity], result of:
              0.063710764 = score(doc=241,freq=6.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.37471575 = fieldWeight in 241, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=241)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The semantic heterogeneity presented by Web information in the Agricultural domain presents tremendous information retrieval challenges. This article presents work taking place at the Food and Agriculture Organization (FAO) which addresses this challenge. Based on the analysis of resources in the domain of agriculture, this paper proposes (a) an application profile (AP) for dealing with the problem of heterogeneity originating from differences in terminologies, domain coverage, and domain modelling, and (b) a root application ontology (AAO) based on the application profile which can serve as a basis for extending knowledge of the domain. The paper explains how even a small investment in the enhancement of relations between vocabularies, both metadata and domain-specific, yields a relatively large return on investment.
    Footnote
    Simultaneously published as Knitting the Semantic Web
    Theme
    Semantic Web
  17. Koutsomitropoulos, D.A.; Solomou, G.D.; Alexopoulos, A.D.; Papatheodorou, T.S.: Semantic metadata interoperability and inference-based querying in digital repositories (2009) 0.02
    0.015927691 = product of:
      0.031855382 = sum of:
        0.031855382 = product of:
          0.063710764 = sum of:
            0.063710764 = weight(_text_:web in 3731) [ClassicSimilarity], result of:
              0.063710764 = score(doc=3731,freq=6.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.37471575 = fieldWeight in 3731, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3731)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Metadata applications have evolved in time into highly structured "islands of information" about digital resources, often bearing a strong semantic interpretation. Scarcely, however, are these semantics being communicated in machine readable and understandable ways. At the same time, the process for transforming the implied metadata knowledge into explicit Semantic Web descriptions can be problematic and is not always evident. In this article we take up the well-established Dublin Core metadata standard as well as other metadata schemata, which often appear in digital repository set-ups, and suggest a proper Semantic Web OWL ontology. In this process the authors cope with discrepancies and incompatibilities, indicative of such attempts, in novel ways. Moreover, we show the potential and necessity of this approach by demonstrating inferences on the resulting ontology, instantiated with actual metadata records. The authors conclude by presenting a working prototype that provides for inference-based querying on top of digital repositories.
    Theme
    Semantic Web
  18. Proceedings of the 2nd International Workshop on Evaluation of Ontology-based Tools (2004) 0.02
    0.015326426 = product of:
      0.030652853 = sum of:
        0.030652853 = product of:
          0.061305705 = sum of:
            0.061305705 = weight(_text_:web in 3152) [ClassicSimilarity], result of:
              0.061305705 = score(doc=3152,freq=8.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.36057037 = fieldWeight in 3152, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3152)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Table of Contents
    Part I: Accepted Papers
    • Christoph Tempich and Raphael Volz: Towards a benchmark for Semantic Web reasoners - an analysis of the DAML ontology library
    • M. Carmen Suarez-Figueroa and Asuncion Gomez-Perez: Results of Taxonomic Evaluation of RDF(S) and DAML+OIL ontologies using RDF(S) and DAML+OIL Validation Tools and Ontology Platforms import services
    • Volker Haarslev and Ralf Möller: Racer: A Core Inference Engine for the Semantic Web
    • Mikhail Kazakov and Habib Abdulrab: DL-workbench: a metamodeling approach to ontology manipulation
    • Thorsten Liebig and Olaf Noppens: OntoTrack: Fast Browsing and Easy Editing of Large Ontologies
    • Frederic Fürst, Michel Leclere, and Francky Trichet: TooCoM: a Tool to Operationalize an Ontology with the Conceptual Graph Model
    • Naoki Sugiura, Masaki Kurematsu, Naoki Fukuta, Noriaki Izumi, and Takahira Yamaguchi: A domain ontology engineering tool with general ontologies and text corpus
    • Howard Goldberg, Alfredo Morales, David MacMillan, and Matthew Quinlan: An Ontology-Driven Application to Improve the Prescription of Educational Resources to Parents of Premature Infants
    Part II: Experiment Contributions
    • Domain natural language description for the experiment
    • Raphael Troncy, Antoine Isaac, and Veronique Malaise: Using XSLT for Interoperability: DOE and The Travelling Domain Experiment
    • Christian Fillies: SemTalk EON2003 Semantic Web Export / Import Interface Test
    • Óscar Corcho, Asunción Gómez-Pérez, Danilo José Guerrero-Rodríguez, David Pérez-Rey, Alberto Ruiz-Cristina, Teresa Sastre-Toral, M. Carmen Suárez-Figueroa: Evaluation experiment of ontology tools' interoperability with the WebODE ontology engineering workbench
    • Holger Knublauch: Case Study: Using Protege to Convert the Travel Ontology to UML and OWL
    • Franz Calvo and John Gennari: Interoperability of Protege 2.0 beta and OilEd 3.5 in the Domain Knowledge of Osteoporosis
    Theme
    Semantic Web
  19. Vizine-Goetz, D.; Houghton, A.; Childress, E.: Web services for controlled vocabularies (2006) 0.02
    0.015326426 = product of:
      0.030652853 = sum of:
        0.030652853 = product of:
          0.061305705 = sum of:
            0.061305705 = weight(_text_:web in 1171) [ClassicSimilarity], result of:
              0.061305705 = score(doc=1171,freq=8.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.36057037 = fieldWeight in 1171, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1171)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Amid the debates about whether folksonomies will supplant controlled vocabularies and whether the Library of Congress Subject Headings (LCSH) and Dewey Decimal Classification (DDC) system have outlived their usefulness, libraries, museums and other organizations continue to require efficient, effective access to controlled vocabularies for creating consistent metadata for their collections. In this article, we present an approach for using Web services to interact with controlled vocabularies. Services are implemented within a service-oriented architecture (SOA) framework. SOA is an approach to distributed computing where services are loosely coupled and discoverable on the network. A set of experimental services for controlled vocabularies is provided through the Microsoft Office (MS) Research task pane (a small window or sidebar that opens up next to Internet Explorer (IE) and other Microsoft Office applications). The research task pane is a built-in feature of IE when MS Office 2003 is loaded. The research pane enables a user to take advantage of a number of research and reference services accessible over the Internet. Web browsers, such as Mozilla Firefox and Opera, also provide sidebars which could be used to deliver similar, loosely-coupled Web services.
  20. Tang, J.; Liang, B.-Y.; Li, J.-Z.: Toward detecting mapping strategies for ontology interoperability (2005) 0.02
    0.015326426 = product of:
      0.030652853 = sum of:
        0.030652853 = product of:
          0.061305705 = sum of:
            0.061305705 = weight(_text_:web in 3367) [ClassicSimilarity], result of:
              0.061305705 = score(doc=3367,freq=8.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.36057037 = fieldWeight in 3367, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3367)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Ontology mapping is one of the core tasks for ontology interoperability. It is aimed to find semantic relationships between entities (i.e. concept, attribute, and relation) of two ontologies. It benefits many applications, such as integration of ontology based web data sources, interoperability of agents or web services. To reduce the amount of users' effort as much as possible, (semi-) automatic ontology mapping is becoming more and more important to bring it to fruition. In the existing literature, many approaches have found considerable interest by combining several different similarity/mapping strategies (namely multi-strategy based mapping). However, experiments show that the multi-strategy based mapping does not always outperform its single-strategy counterpart. In this paper, we mainly aim to deal with two problems: (1) for a new, unseen mapping task, should we select a multi-strategy based algorithm or just one single-strategy based algorithm? (2) if the task is suitable for multi-strategy, then how to select the strategies into the final combined scenario? We propose an approach of multiple strategies detections for ontology mapping. The results obtained so far show that multi-strategy detection improves on precision and recall significantly.
    Content
    Paper presented at the Workshop on The Semantic Computing Initiative (SeC 2005) - From Semantic Web to Semantic World - held in conjunction with The 14th Int'l Conf. on World Wide Web (WWW2005); see: http://www.instsec.org/2005ws/.
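The multi-strategy question raised by Tang, Liang and Li comes down to computing several independent similarity estimates for a candidate entity pair and deciding whether combining them helps. The two strategies and the disagreement test below are simplifications invented for illustration, not the detection method of the paper:

```python
from difflib import SequenceMatcher
from statistics import pstdev

def name_similarity(a, b):
    """String-level strategy: compare entity names."""
    return SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()

def structure_similarity(a, b):
    """Structure-level strategy: overlap of neighbouring entity names."""
    na, nb = set(a["neighbours"]), set(b["neighbours"])
    return len(na & nb) / len(na | nb) if na | nb else 0.0

def combined_similarity(a, b, strategies=(name_similarity, structure_similarity)):
    scores = [s(a, b) for s in strategies]
    # Crude "strategy detection": if the strategies strongly disagree,
    # fall back to the single best score instead of averaging them.
    if pstdev(scores) > 0.3:
        return max(scores)
    return sum(scores) / len(scores)

e1 = {"name": "Author", "neighbours": ["Book", "Publisher"]}
e2 = {"name": "Writer", "neighbours": ["Book", "Press"]}
print(round(combined_similarity(e1, e2), 2))
```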

Languages

  • e 45
  • d 13

Types

  • a 38
  • el 23
  • x 2
  • r 1