Search (71 results, page 1 of 4)

  • theme_ss:"Semantische Interoperabilität"
  1. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.05
    0.046958487 = product of:
      0.14087546 = sum of:
        0.09510193 = weight(_text_:problem in 759) [ClassicSimilarity], result of:
          0.09510193 = score(doc=759,freq=4.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.46424055 = fieldWeight in 759, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0546875 = fieldNorm(doc=759)
        0.04577352 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
          0.04577352 = score(doc=759,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.2708308 = fieldWeight in 759, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=759)
      0.33333334 = coord(2/6)
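     The breakdown above is Lucene's ClassicSimilarity "explain" output: for each matching query term, queryWeight (idf x queryNorm) is multiplied by fieldWeight (tf x idf x fieldNorm), the per-term products are summed, and the sum is scaled by the coordination factor coord (here 2 of 6 query terms match). The sketch below simply recomputes the figures shown for this first result; the helper function is our own illustration, not Lucene code.

```python
from math import sqrt

# Recompute the ClassicSimilarity breakdown shown above for doc 759.
QUERY_NORM = 0.04826377

def term_score(freq, idf, field_norm):
    # queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
    query_weight = idf * QUERY_NORM
    field_weight = sqrt(freq) * idf * field_norm
    return query_weight * field_weight

w_problem = term_score(freq=4.0, idf=4.244485, field_norm=0.0546875)  # ~0.09510193
w_22 = term_score(freq=2.0, idf=3.5018296, field_norm=0.0546875)      # ~0.04577352
score = (w_problem + w_22) * (2 / 6)                                  # coord(2/6) -> ~0.046958487
print(w_problem, w_22, score)
```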
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
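     To illustrate the abstract's point that XML syntax alone carries no shared semantics, the sketch below parses two hypothetical fragments that state the same fact under different DTD-style vocabularies; the element names are invented for illustration, and nothing in XML itself tells a program that they are equivalent.

```python
import xml.etree.ElementTree as ET

# Two hypothetical fragments, each valid under its own domain-specific DTD.
doc_a = "<book><author>J. Heflin</author><year>2000</year></book>"
doc_b = "<publication><creator>J. Heflin</creator><published>2000</published></publication>"

for doc in (doc_a, doc_b):
    root = ET.fromstring(doc)
    # A generic parser sees only differently named elements, not their meaning.
    print(root.tag, {child.tag: child.text for child in root})
```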
    Date
    11. 5.2013 19:22:18
  2. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.04
    0.044715807 = product of:
      0.26829484 = sum of:
        0.26829484 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
          0.26829484 = score(doc=306,freq=2.0), product of:
            0.4091808 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04826377 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
      0.16666667 = coord(1/6)
    
    Content
     Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  3. Faro, S.; Francesconi, E.; Marinai, E.; Sandrucci, V.: Report on execution and results of the interoperability tests (2008) 0.04
    0.043055516 = product of:
      0.12916654 = sum of:
        0.07685395 = weight(_text_:problem in 7411) [ClassicSimilarity], result of:
          0.07685395 = score(doc=7411,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.375163 = fieldWeight in 7411, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0625 = fieldNorm(doc=7411)
        0.052312598 = weight(_text_:22 in 7411) [ClassicSimilarity], result of:
          0.052312598 = score(doc=7411,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.30952093 = fieldWeight in 7411, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=7411)
      0.33333334 = coord(2/6)
    
    Abstract
     - Formal characterization given to the thesaurus mapping problem
     - Interoperability workflow
       - Thesauri SKOS Core transformation
       - Thesaurus Mapping algorithms implementation
     - The "gold standard" data set and the THALEN application
     - Thesaurus interoperability assessment measures
     - Experimental results
    Date
    7.11.2008 10:40:22
  4. Faro, S.; Francesconi, E.; Sandrucci, V.: Thesauri KOS analysis and selected thesaurus mapping methodology on the project case-study (2007) 0.04
    0.043055516 = product of:
      0.12916654 = sum of:
        0.07685395 = weight(_text_:problem in 2227) [ClassicSimilarity], result of:
          0.07685395 = score(doc=2227,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.375163 = fieldWeight in 2227, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0625 = fieldNorm(doc=2227)
        0.052312598 = weight(_text_:22 in 2227) [ClassicSimilarity], result of:
          0.052312598 = score(doc=2227,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.30952093 = fieldWeight in 2227, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=2227)
      0.33333334 = coord(2/6)
    
    Abstract
     - Introduction to the Thesaurus Interoperability problem
     - Analysis of the thesauri for the project case study
     - Overview of Schema/Ontology Mapping methodologies
     - The proposed approach for thesaurus mapping
     - Standards for implementing the proposed methodology
    Date
    7.11.2008 10:40:22
  5. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.03
    0.03193986 = product of:
      0.19163916 = sum of:
        0.19163916 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
          0.19163916 = score(doc=1000,freq=2.0), product of:
            0.4091808 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04826377 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
      0.16666667 = coord(1/6)
    
    Content
     Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
  6. Garcia Marco, F.J.: Compatibility & heterogeneity in knowledge organization : some reflections around a case study in the field of consumer information (2008) 0.03
    0.0269097 = product of:
      0.0807291 = sum of:
        0.04803372 = weight(_text_:problem in 1678) [ClassicSimilarity], result of:
          0.04803372 = score(doc=1678,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.23447686 = fieldWeight in 1678, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1678)
        0.032695375 = weight(_text_:22 in 1678) [ClassicSimilarity], result of:
          0.032695375 = score(doc=1678,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.19345059 = fieldWeight in 1678, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1678)
      0.33333334 = coord(2/6)
    
    Abstract
     A case study in compatibility and heterogeneity of knowledge organization (KO) systems and processes is presented. It is based on the experience of the author in the field of information for consumer protection, a good example of the emerging transdisciplinary applied social sciences. The activities and knowledge organization problems and solutions of the Aragonian Consumers' Information and Documentation Centre are described and analyzed. Six assertions can be concluded: a) heterogeneity and compatibility are certainly an inherent problem in knowledge organization and also in practical domains; b) knowledge organization is also a social task, not only a logical one; c) knowledge organization is affected by economic and efficiency considerations; d) knowledge organization is at the heart of Knowledge Management; e) identifying and maintaining the focus in interdisciplinary fields is a must; f) the different knowledge organization tools of an institution must be considered as an integrated system, pursuing a unifying model.
    Date
    16. 3.2008 18:22:50
  7. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.03
    0.02683342 = product of:
      0.08050026 = sum of:
        0.054343957 = weight(_text_:problem in 168) [ClassicSimilarity], result of:
          0.054343957 = score(doc=168,freq=4.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.2652803 = fieldWeight in 168, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.03125 = fieldNorm(doc=168)
        0.026156299 = weight(_text_:22 in 168) [ClassicSimilarity], result of:
          0.026156299 = score(doc=168,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.15476047 = fieldWeight in 168, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=168)
      0.33333334 = coord(2/6)
    
    Abstract
    Ontologies are viewed as the silver bullet for many applications, but in open or evolving systems, different parties can adopt different ontologies. This increases heterogeneity problems rather than reducing heterogeneity. This book proposes ontology matching as a solution to the problem of semantic heterogeneity, offering researchers and practitioners a uniform framework of reference to currently available work. The techniques presented apply to database schema matching, catalog integration, XML schema matching and more. Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
    Date
    20. 6.2012 19:08:22
  8. Wake, S.; Nicholson, D.: HILT: High-Level Thesaurus Project : building consensus for interoperable subject access across communities (2001) 0.02
    0.01921349 = product of:
      0.11528094 = sum of:
        0.11528094 = weight(_text_:problem in 1224) [ClassicSimilarity], result of:
          0.11528094 = score(doc=1224,freq=18.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.5627445 = fieldWeight in 1224, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.03125 = fieldNorm(doc=1224)
      0.16666667 = coord(1/6)
    
    Abstract
    This article provides an overview of the work carried out by the HILT Project <http://hilt.cdlr.strath.ac.uk> in making recommendations towards interoperable subject access, or cross-searching and browsing distributed services amongst the archives, libraries, museums and electronic services sectors. The article details consensus achieved at the 19 June 2001 HILT Workshop and discusses the HILT Stakeholder Survey. In 1999 Péter Jascó wrote that "savvy searchers" are asking for direction. Three years later the scenario he describes, that of searchers cross-searching databases where the subject vocabulary used in each case is different, still rings true. Jascó states that, in many cases, databases do not offer the necessary aids required to use the "preferred terms of the subject-controlled vocabulary". The databases to which Jascó refers are Dialog and DataStar. However, the situation he describes applies as well to the area that HILT is researching: that of cross-searching and browsing by subject across databases and catalogues in archives, libraries, museums and online information services. So how does a user access information on a particular subject when it is indexed across a multitude of services under different, but quite often similar, subject terms? Also, if experienced searchers are having problems, what about novice searchers? As information professionals, it is our role to investigate such problems and recommend solutions. Although there is no hard empirical evidence one way or another, HILT participants agree that the problem for users attempting to search across databases is real. There is a strong likelihood that users are disadvantaged by the use of different subject terminology combined with a multitude of different practices taking place within the archive, library, museums and online communities. Arguably, failure to address this problem of interoperability undermines the value of cross-searching and browsing facilities, and wastes public money because relevant resources are 'hidden' from searchers. HILT is charged with analysing this broad problem through qualitative methods, with the main aim of presenting a set of recommendations on how to make it easier to cross-search and browse distributed services. Because this is a very large problem composed of many strands, HILT recognizes that any proposed solutions must address a host of issues. Recommended solutions must be affordable, sustainable, politically acceptable, useful, future-proof and international in scope. It also became clear to the HILT team that progress toward finding solutions to the interoperability problem could only be achieved through direct dialogue with other parties keen to solve this problem, and that the problem was as much about consensus building as it was about finding a solution. This article describes how HILT approached the cross-searching problem; how it investigated the nature of the problem, detailing results from the HILT Stakeholder Survey; and how it achieved consensus through the recent HILT Workshop.
  9. Giunchiglia, F.; Maltese, V.; Dutta, B.: Domains and context : first steps towards managing diversity in knowledge (2011) 0.02
    0.016011242 = product of:
      0.09606744 = sum of:
        0.09606744 = weight(_text_:problem in 603) [ClassicSimilarity], result of:
          0.09606744 = score(doc=603,freq=8.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.46895373 = fieldWeight in 603, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0390625 = fieldNorm(doc=603)
      0.16666667 = coord(1/6)
    
    Abstract
     Despite the progress made, one of the main barriers towards the use of semantics is the lack of background knowledge. Dealing with this problem has turned out to be a very difficult task because on the one hand the background knowledge should be very large and virtually unbounded and, on the other hand, it should be context-sensitive and able to capture the diversity of the world, for instance in terms of language and knowledge. Our proposed solution consists in addressing the problem in three steps: (1) create an extensible diversity-aware knowledge base providing a continuously growing quantity of properly organized knowledge; (2) given the problem, build at run-time the proper context within which to perform the reasoning; (3) solve the problem. Our work is based on two key ideas. The first is that of using domains, i.e. a general semantic-aware methodology and technique for structuring the background knowledge. The second is that of building the context of reasoning by a suitable combination of domains. Our goal in this paper is to introduce the overall approach, show how it can be applied to an important use case, i.e. the matching of classifications, and describe our first steps towards the construction of a large-scale diversity-aware knowledge base.
  10. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.02
    0.015257841 = product of:
      0.09154704 = sum of:
        0.09154704 = weight(_text_:22 in 8365) [ClassicSimilarity], result of:
          0.09154704 = score(doc=8365,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.5416616 = fieldWeight in 8365, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.109375 = fieldNorm(doc=8365)
      0.16666667 = coord(1/6)
    
    Date
    22. 6.2015 16:08:38
  11. Dini, L.: CACAO : multilingual access to bibliographic records (2007) 0.01
    0.013078149 = product of:
      0.0784689 = sum of:
        0.0784689 = weight(_text_:22 in 126) [ClassicSimilarity], result of:
          0.0784689 = score(doc=126,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.46428138 = fieldWeight in 126, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=126)
      0.16666667 = coord(1/6)
    
    Content
     Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  12. Boteram, F.; Hubrich, J.: Towards a comprehensive international Knowledge Organization System (2008) 0.01
    0.013078149 = product of:
      0.0784689 = sum of:
        0.0784689 = weight(_text_:22 in 4786) [ClassicSimilarity], result of:
          0.0784689 = score(doc=4786,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.46428138 = fieldWeight in 4786, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=4786)
      0.16666667 = coord(1/6)
    
    Date
    22. 9.2008 19:30:41
  13. Li, K.W.; Yang, C.C.: Automatic crosslingual thesaurus generated from the Hong Kong SAR Police Department Web Corpus for Crime Analysis (2005) 0.01
    0.0128089925 = product of:
      0.07685395 = sum of:
        0.07685395 = weight(_text_:problem in 3391) [ClassicSimilarity], result of:
          0.07685395 = score(doc=3391,freq=8.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.375163 = fieldWeight in 3391, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.03125 = fieldNorm(doc=3391)
      0.16666667 = coord(1/6)
    
    Abstract
     For the sake of national security, very large volumes of data and information are generated and gathered daily. Much of this data and information is written in different languages, stored in different locations, and may be seemingly unconnected. Crosslingual semantic interoperability is a major challenge to generate an overview of this disparate data and information so that it can be analyzed, shared, searched, and summarized. The recent terrorist attacks and the tragic events of September 11, 2001 have prompted increased attention on national security and criminal analysis. Many Asian countries and cities, such as Japan, Taiwan, and Singapore, have been advised that they may become the next targets of terrorist attacks. Semantic interoperability has been a focus in digital library research. Traditional information retrieval (IR) approaches normally require a document to share some common keywords with the query. Generating the associations for the related terms between the two term spaces of users and documents is an important issue. The problem can be viewed as the creation of a thesaurus. Apart from this, terrorists and criminals may communicate through letters, e-mails, and faxes in languages other than English. The translation ambiguity significantly exacerbates the retrieval problem. The problem is expanded to crosslingual semantic interoperability. In this paper, we focus on the English/Chinese crosslingual semantic interoperability problem. However, the developed techniques are not limited to English and Chinese languages but can be applied to many other languages. English and Chinese are popular languages in the Asian region, and much information about national security or crime is communicated in these languages. An efficient automatically generated thesaurus between these languages is important to crosslingual information retrieval between English and Chinese. To facilitate crosslingual information retrieval, a corpus-based approach uses the term co-occurrence statistics in parallel or comparable corpora to construct a statistical translation model to cross the language boundary. In this paper, the text-based approach to align English/Chinese Hong Kong Police press release documents from the Web is first presented. We also introduce an algorithmic approach to generate a robust knowledge base based on statistical correlation analysis of the semantics (knowledge) embedded in the bilingual press release corpus. The research output consisted of a thesaurus-like, semantic network knowledge base, which can aid in semantics-based crosslingual information management and retrieval.
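     A minimal sketch of the corpus-based idea described above: counting how often an English term and a Chinese term co-occur across aligned document pairs, so that frequently co-occurring pairs become candidate crosslingual associations. The two toy document pairs are invented; the actual corpus in the paper is the bilingual Hong Kong Police press-release collection, and the knowledge base is built from statistical correlation analysis rather than raw counts.

```python
from collections import Counter
from itertools import product

# Invented aligned English/Chinese document pairs (already tokenised).
aligned_pairs = [
    (["police", "arrest", "suspect"], ["警方", "拘捕", "疑犯"]),
    (["police", "report", "crime"], ["警方", "報案", "罪案"]),
]

cooccurrence = Counter()
for en_doc, zh_doc in aligned_pairs:
    for en_term, zh_term in product(set(en_doc), set(zh_doc)):
        cooccurrence[(en_term, zh_term)] += 1

# Term pairs co-occurring in many aligned documents become candidate
# crosslingual associations for the generated thesaurus.
print(cooccurrence[("police", "警方")])  # 2 in this toy corpus
```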
  14. Nicholson, D.: High-Level Thesaurus (HILT) project : interoperability and cross-searching distributed services (200?) 0.01
    0.0128089925 = product of:
      0.07685395 = sum of:
        0.07685395 = weight(_text_:problem in 5966) [ClassicSimilarity], result of:
          0.07685395 = score(doc=5966,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.375163 = fieldWeight in 5966, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0625 = fieldNorm(doc=5966)
      0.16666667 = coord(1/6)
    
    Abstract
    My presentation is about the HILT, High Level Thesaurus Project, which is looking, very roughly speaking, at how we might deal with interoperability problems relating to cross-searching distributed services by subject. The aims of HILT are to study and report on the problem of cross-searching and browsing by subject across a range of communities, services, and service or resource types in the UK given the wide range of subject schemes and associated practices in place
  15. Mao, M.: Ontology mapping : towards semantic interoperability in distributed and heterogeneous environments (2008) 0.01
    0.0128089925 = product of:
      0.07685395 = sum of:
        0.07685395 = weight(_text_:problem in 4659) [ClassicSimilarity], result of:
          0.07685395 = score(doc=4659,freq=8.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.375163 = fieldWeight in 4659, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.03125 = fieldNorm(doc=4659)
      0.16666667 = coord(1/6)
    
    Abstract
     This dissertation studies ontology mapping: the problem of finding semantic correspondences between similar elements of different ontologies. In the dissertation, elements denote classes or properties of ontologies. The goal of this research is to use ontology mapping to make heterogeneous information more accessible. The World Wide Web (WWW) is now widely used as a universal medium for information exchange. Semantic interoperability among different information systems in the WWW is limited due to information heterogeneity and the non-semantic nature of HTML and URLs. Ontologies have been suggested as a way to solve the problem of information heterogeneity by providing formal, explicit definitions of data and reasoning ability over related concepts. Given that no universal ontology exists for the WWW, work has focused on finding semantic correspondences between similar elements of different ontologies, i.e., ontology mapping. Ontology mapping can be done either by hand or using automated tools. Manual mapping becomes impractical as the size and complexity of ontologies increase. Full or semi-automated mapping approaches have been examined by several research studies. Previous full or semi-automated mapping approaches include analyzing linguistic information of elements in ontologies, treating ontologies as structural graphs, applying heuristic rules and machine learning techniques, and using probabilistic and reasoning methods, etc. In this paper, two generic ontology mapping approaches are proposed. One is the PRIOR+ approach, which utilizes both information retrieval and artificial intelligence techniques in the context of ontology mapping. The other is the non-instance learning based approach, which experimentally explores machine learning algorithms to solve the ontology mapping problem without requiring any instances. The results of the PRIOR+ on different tests at the OAEI ontology matching campaign 2007 are encouraging. The non-instance learning based approach has shown potential for solving the ontology mapping problem on the OAEI benchmark tests.
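     As a toy illustration of element-level matching (not the dissertation's PRIOR+ approach), the sketch below scores candidate correspondences between two small invented ontologies using a single lexical signal; real matchers combine such signals with structural, instance-based and learning-based evidence.

```python
from difflib import SequenceMatcher

# Invented class names from two small ontologies.
ontology_a = ["Person", "Publication", "Organisation"]
ontology_b = ["Agent", "Article", "Organization"]

def lexical_similarity(a, b):
    # One simple element-level signal: normalised string similarity of the names.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for a in ontology_a:
    best = max(ontology_b, key=lambda b: lexical_similarity(a, b))
    print(f"{a} -> {best} ({lexical_similarity(a, best):.2f})")
```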
  16. Doerr, M.: Semantic problems of thesaurus mapping (2001) 0.01
    0.011321658 = product of:
      0.067929946 = sum of:
        0.067929946 = weight(_text_:problem in 5902) [ClassicSimilarity], result of:
          0.067929946 = score(doc=5902,freq=4.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.33160037 = fieldWeight in 5902, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5902)
      0.16666667 = coord(1/6)
    
    Abstract
     With networked information access to heterogeneous data sources, the problem of terminology provision and interoperability of controlled vocabulary schemes such as thesauri becomes increasingly urgent. Solutions are needed to improve the performance of full-text retrieval systems and to guide the design of controlled terminology schemes for use in structured data, including metadata. Thesauri are created in different languages, with different scope and points of view and at different levels of abstraction and detail, to accommodate access to a specific group of collections. In any wider search accessing distributed collections, the user would like to start with familiar terminology and let the system find out the correspondences to other terminologies in order to retrieve equivalent results from all addressed collections. This paper investigates possible semantic differences that may hinder the unambiguous mapping and transition from one thesaurus to another. It focusses on the differences of meaning of terms and their relations as intended by their creators for indexing and querying a specific collection, in contrast to methods investigating the statistical relevance of terms for objects in a collection. It develops a notion of optimal mapping, paying particular attention to the intellectual quality of mappings between terms from different vocabularies and to problems of polysemy. Proposals are made to limit the vagueness introduced by the transition from one vocabulary to another. The paper shows ways in which thesaurus creators can improve their methodology to meet the challenges of networked access to distributed collections created under varying conditions. For system implementers, the discussion will lead to a better understanding of the complexity of the problem.
  17. Krause, J.: Shell Model, Semantic Web and Web Information Retrieval (2006) 0.01
    0.011321658 = product of:
      0.067929946 = sum of:
        0.067929946 = weight(_text_:problem in 6061) [ClassicSimilarity], result of:
          0.067929946 = score(doc=6061,freq=4.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.33160037 = fieldWeight in 6061, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6061)
      0.16666667 = coord(1/6)
    
    Abstract
     The mid-1990s were marked by growing enthusiasm for the possibilities of the WWW, which has only recently given way - at least in relation to scientific information - to a more differentiated assessment of its advantages and disadvantages. Web Information Retrieval originated as a specialized discipline with great commercial significance (for an overview see Lewandowski 2005). Besides the new technological structure that enables the indexing and searching (in seconds) of unimaginable amounts of data worldwide, new assessment processes for the ranking of search results are being developed, which use the link structures of the Web. They are the main innovation with respect to the traditional "mother discipline" of Information Retrieval. From the beginning, commercial search engines have applied the link structures of Web pages in a wide array of variations. From the perspective of scientific information, link-topology-based approaches were in essence trying to solve a self-created problem: on the one hand, it quickly became clear that the openness of the Web led to a previously unknown increase in available information, but this also caused the quality of the Web pages searched to become a problem - and with it the relevance of the results. The gatekeeper function of traditional information providers, which narrows down every user query to focus on high-quality sources, was lacking. Therefore, the recognition of the "authoritativeness" of Web pages by general search engines such as Google was one of the most important factors for their success.
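     The link-topology-based ranking referred to above can be illustrated with a minimal PageRank-style power iteration over an invented four-page link graph; this is a generic textbook sketch, not an algorithm taken from the article.

```python
# Minimal power-iteration PageRank over an invented four-page link graph.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["A", "C"]}
pages = list(links)
damping = 0.85
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):
    rank = {
        p: (1 - damping) / len(pages)
           + damping * sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        for p in pages
    }

# Pages that attract links from well-linked pages end up with higher scores.
print({p: round(r, 3) for p, r in rank.items()})
```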
  18. Krause, J.: Heterogenität und Integration : Zur Weiterentwicklung von Inhaltserschließung und Retrieval in sich veränderten Kontexten (2001) 0.01
    0.011321658 = product of:
      0.067929946 = sum of:
        0.067929946 = weight(_text_:problem in 6071) [ClassicSimilarity], result of:
          0.067929946 = score(doc=6071,freq=4.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.33160037 = fieldWeight in 6071, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6071)
      0.16666667 = coord(1/6)
    
    Abstract
     As an important support tool in science research, specialized information systems are rapidly changing their character. The potential for improvement compared with today's usual systems is enormous. This fact will be demonstrated by means of two problem complexes:
     - WWW search engines, which were developed without any government grants, are increasingly dominating the scene. Does the WWW displace information centers with their high-quality databases? What are the results we can get nowadays using general WWW search engines?
     - In addition to the WWW and specialized databases, scientists now use WWW library catalogues of digital libraries, which combine the catalogues from an entire region or a country. At the same time, however, they are faced with highly decentralized heterogeneous databases which contain the widest range of textual sources and data, e.g. from surveys. One consequence is the presence of serious inconsistencies in quality, relevance and content analysis.
     Thus, the main problem to be solved is as follows: users must be supplied with heterogeneous data from different sources, modalities and content development processes via a visual user interface, without inconsistencies in content development seriously impairing the quality of the search results, e.g. when users phrase their search inquiry in the terminology to which they are accustomed.
  19. Busch, D.: Organisation eines Thesaurus für die Unterstützung der mehrsprachigen Suche in einer bibliographischen Datenbank im Bereich Planen und Bauen (2016) 0.01
    0.011321658 = product of:
      0.067929946 = sum of:
        0.067929946 = weight(_text_:problem in 3308) [ClassicSimilarity], result of:
          0.067929946 = score(doc=3308,freq=4.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.33160037 = fieldWeight in 3308, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3308)
      0.16666667 = coord(1/6)
    
    Abstract
     The problem of multilingual search has recently been gaining importance, since much useful specialist information around the world is published in different languages. RSWBPlus is a bibliographic database documenting the specialist literature in the field of planning and building, containing German- and English-language metadata entries. Until recently it was difficult to find entries whose language differed from the query language: German-language queries, for example, returned only German-language entries, even though the database also contained potentially useful English-language entries. To solve this problem, after a review of existing approaches, RSWBPlus was extended to support multilingual (cross-lingual) search with the help of a bilingual, concept-based thesaurus. The thesaurus was built automatically from already existing thesauri: the entries of the source thesauri were converted into SKOS format (Simple Knowledge Organisation System), automatically merged, and finally loaded into a target thesaurus, which is likewise maintained in SKOS. Apache Jena and MS SQL Server are used to access the target thesaurus. During multilingual search, query terms are expanded with the corresponding translations and synonyms in German and English; this expansion of the search terms can take place both at runtime and semi-automatically. The improved retrieval system can help German-speaking users in particular to find relevant English-language entries. The use of SKOS increases the interoperability of the thesauri and simplifies both the construction of the target thesaurus and access to its entries.
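     A minimal sketch of the query-expansion step described above, using a tiny invented SKOS fragment and the Python rdflib library in place of the Apache Jena / MS SQL Server setup used in the project: a German query term is expanded with the German and English labels of the matching concept.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import SKOS

# Tiny invented fragment of a merged bilingual SKOS target thesaurus.
g = Graph()
concept = URIRef("http://example.org/thesaurus/C100")
g.add((concept, SKOS.prefLabel, Literal("Stahlbeton", lang="de")))
g.add((concept, SKOS.prefLabel, Literal("reinforced concrete", lang="en")))
g.add((concept, SKOS.altLabel, Literal("Eisenbeton", lang="de")))

def expand(term):
    """Expand a query term with all labels of concepts that carry a matching label."""
    expanded = {term}
    for subject, _, label in g:
        if str(label).lower() == term.lower():
            for _, _, other_label in g.triples((subject, None, None)):
                expanded.add(str(other_label))
    return expanded

print(expand("Stahlbeton"))  # adds 'reinforced concrete' and 'Eisenbeton'
```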
  20. Gödert, W.: Ontological spine, localization and multilingual access : some reflections and a proposal (2008) 0.01
    0.011207869 = product of:
      0.06724721 = sum of:
        0.06724721 = weight(_text_:problem in 4334) [ClassicSimilarity], result of:
          0.06724721 = score(doc=4334,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.3282676 = fieldWeight in 4334, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4334)
      0.16666667 = coord(1/6)
    
    Abstract
     In this paper the following problem is discussed: Which possibilities exist to integrate localized knowledge into knowledge structures like classification systems or other documentary languages for the design of OPACs and information systems? It is proposed to combine a de-localized classificatory structure - best described as an 'ontological spine' - with multilingual semantic networks. Each of these networks should represent the respective localized knowledge along an extended set of typed semantic relations, serving as an entry vocabulary as well as a semantic basis for navigational purposes within the localized knowledge context. The spine should enable a link between well-known and not well-known knowledge structures.

Languages

  • e 54
  • d 17

Types

  • a 45
  • el 24
  • x 6
  • m 4
  • s 2