Search (54 results, page 2 of 3)

  • language_ss:"e"
  • theme_ss:"Semantische Interoperabilität"
  1. Nicholson, D.M.; Dawson, A.; Shiri, A.: HILT: a pilot terminology mapping service with a DDC spine (2006) 0.01
    0.009606745 = product of:
      0.05764047 = sum of:
        0.05764047 = weight(_text_:problem in 2152) [ClassicSimilarity], result of:
          0.05764047 = score(doc=2152,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.28137225 = fieldWeight in 2152, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.046875 = fieldNorm(doc=2152)
      0.16666667 = coord(1/6)
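The relevance figure on each entry is a standard Lucene ClassicSimilarity (tf-idf) score explanation, and the reported value can be reproduced from its parts. A minimal sketch in Python, plugging in the numbers from the explanation for doc 2152 above:

```python
import math

# Values taken from the ClassicSimilarity explanation for doc 2152
freq = 2.0              # termFreq of "problem" in the field
idf = 4.244485          # idf(docFreq=1723, maxDocs=44218)
query_norm = 0.04826377
field_norm = 0.046875
coord = 1 / 6           # 1 of 6 query clauses matched

tf = math.sqrt(freq)                   # 1.4142135...
query_weight = idf * query_norm        # 0.20485485
field_weight = tf * idf * field_norm   # 0.28137225
score = coord * query_weight * field_weight

print(score)  # ≈ 0.009606745, the value reported for this entry
```

The same arithmetic reproduces every tree in this result list; only freq, idf, and fieldNorm vary per document and term.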
    
    Abstract
    The role of DDC in the ongoing HILT (High-level Thesaurus) project is discussed. A phased initiative, funded by JISC in the UK, HILT addresses an issue of likely interest to anyone serving users wishing to cross-search or cross-browse groups of networked information services, whether at regional, national or international level - the problem of subject-based retrieval from multiple sources using different subject schemes for resource description. Although all three phases of HILT to date are covered, the primary concern is with the subject interoperability solution piloted in phase II, and with the use of DDC as a spine in that approach.
  2. Liang, A.; Salokhe, G.; Sini, M.; Keizer, J.: Towards an infrastructure for semantic applications : methodologies for semantic integration of heterogeneous resources (2006) 0.01
    Abstract
The semantic heterogeneity presented by Web information in the agricultural domain poses tremendous information retrieval challenges. This article presents work taking place at the Food and Agriculture Organization (FAO) which addresses this challenge. Based on the analysis of resources in the domain of agriculture, this paper proposes (a) an application profile (AP) for dealing with the problem of heterogeneity originating from differences in terminologies, domain coverage, and domain modelling, and (b) a root application ontology (AAO) based on the application profile which can serve as a basis for extending knowledge of the domain. The paper explains how even a small investment in the enhancement of relations between vocabularies, both metadata and domain-specific, yields a relatively large return on investment.
  3. Wang, S.; Isaac, A.; Schopman, B.; Schlobach, S.; Meij, L. van der: Matching multilingual subject vocabularies (2009) 0.01
    Abstract
Most libraries and other cultural heritage institutions use controlled knowledge organisation systems, such as thesauri, to describe their collections. Unfortunately, as most of these institutions use different such systems, unified access to heterogeneous collections is difficult. Things are even worse in an international context, when concepts have labels in different languages. In order to overcome the multilingual interoperability problem between European libraries, extensive work has been done to manually map concepts from different knowledge organisation systems, which is a tedious and expensive process. Within the TELplus project, we developed and evaluated methods to automatically discover these mappings using different ontology matching techniques. In experiments on the major French, English and German subject heading lists Rameau, LCSH and SWD, we show that we can automatically produce mappings of surprisingly good quality, even when using relatively naive translation and matching methods.
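The "relatively naive translation and matching" baseline mentioned in this abstract can be sketched as translating source labels and keeping exact matches against target labels. All identifiers, labels, and the toy lexicon below are invented for illustration; they are not taken from Rameau, LCSH, or SWD:

```python
# Hypothetical mini-vocabularies: concept identifier -> preferred label
lcsh = {"sh1": "libraries", "sh2": "ontology", "sh3": "thesauri"}
swd = {"gnd1": "Bibliotheken", "gnd2": "Ontologie"}

# Toy German->English lexicon standing in for a real translation resource
de_en = {"bibliotheken": "libraries", "ontologie": "ontology"}

def normalize(label: str) -> str:
    return label.strip().lower()

def match(source: dict, target: dict, lexicon: dict) -> list:
    """Translate each source label and keep exact matches in the target."""
    inverted = {normalize(v): k for k, v in target.items()}
    mappings = []
    for cid, label in source.items():
        translated = lexicon.get(normalize(label))
        if translated and normalize(translated) in inverted:
            mappings.append((cid, inverted[normalize(translated)]))
    return mappings

print(match(swd, lcsh, de_en))  # [('gnd1', 'sh1'), ('gnd2', 'sh2')]
```

Real runs would add lemmatization, fuzzy matching, and disambiguation, but even this exact-match skeleton conveys why the method scales to large vocabularies.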
  4. Stempfhuber, M.; Zapilko, B.: Modelling text-fact-integration in digital libraries (2009) 0.01
    Abstract
Digital Libraries currently face the challenge of integrating many different types of research information (e.g. publications, primary data, experts' profiles, institutional profiles, project information etc.) according to their scientific users' needs. To date no general, integrated model for knowledge organization and retrieval in Digital Libraries exists. This causes the problem of structural and semantic heterogeneity due to the wide range of metadata standards, indexing vocabularies and indexing approaches used for different types of information. The research presented in this paper focuses on areas in which activities are being undertaken in the field of Digital Libraries in order to treat semantic interoperability problems. We present a model for the integrated retrieval of factual and textual data which combines multiple approaches to semantic interoperability and sets them into context. Embedded in the research cycle, traditional content indexing methods for publications meet the newer, but rarely used ontology-based approaches which seem to be better suited for representing complex information like that contained in survey data. The benefits of our model are (1) easy re-use of available knowledge organisation systems and (2) reduced efforts for domain modelling with ontologies.
  5. Gemberling, T.: Thema and FRBR's third group (2010) 0.01
    Abstract
The treatment of subjects by Functional Requirements for Bibliographic Records (FRBR) has attracted less attention than some of its other aspects, but there seems to be a general consensus that it needs work. While some have proposed elaborating its subject categories (concepts, objects, events, and places) to increase their semantic complexity, a working group of the International Federation of Library Associations and Institutions (IFLA) has recently made a promising proposal that essentially bypasses those categories in favor of one entity, thema. This article gives an overview of the proposal and discusses its relevance to another difficult problem, ambiguities in the establishment of headings for buildings.
  6. Khiat, A.; Benaissa, M.: Approach for instance-based ontology alignment : using argument and event structures of generative lexicon (2014) 0.01
    Abstract
Ontology alignment has become a very important problem for ensuring semantic interoperability between heterogeneous and distributed information sources. Instance-based ontology alignment is a very promising technique for finding semantic correspondences between entities of different ontologies when they contain many instances. In this paper, we describe a new approach to manage ontologies that do not share common instances. This approach extracts the argument and event structures from a set of instances of a concept of the source ontology and compares them with other semantic features extracted from a set of instances of a concept of the target ontology, using Generative Lexicon Theory. We show that it is theoretically powerful because it is based on linguistic semantics, and useful in practice. We present the experimental results obtained by running our approach on the Biblio test of the OAEI 2011 Benchmark series. The results show the good performance of our approach.
  7. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.01
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and /or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  8. Heckner, M.; Mühlbacher, S.; Wolff, C.: Tagging tagging : a classification model for user keywords in scientific bibliography management systems (2007) 0.01
    Abstract
Recently, a growing number of systems that allow personal content annotation (tagging) are being created, ranging from personal sites for organising bookmarks (del.icio.us), photos (flickr.com) or videos (video.google.com, youtube.com) to systems for managing bibliographies for scientific research projects (citeulike.org, connotea.org). Simultaneously, a debate on the pros and cons of allowing users to add personal keywords to digital content has arisen. One recurrent point of discussion is whether tagging can solve the well-known vocabulary problem: in order to support successful retrieval in complex environments, it is necessary to index an object with a variety of aliases (cf. Furnas 1987). In this spirit, social tagging enhances the pool of rigid, traditional keywording by adding user-created retrieval vocabularies. Furthermore, tagging goes beyond simple personal content-based keywords by providing meta-keywords like funny or interesting that "identify qualities or characteristics" (Golder and Huberman 2006, Kipp and Campbell 2006, Kipp 2007, Feinberg 2006, Kroski 2005). Conversely, tagging systems are claimed to lead to semantic difficulties that may hinder precision and recall (e.g. the polysemy problem, cf. Marlow 2006, Lakoff 2005, Golder and Huberman 2006). Empirical research on social tagging is still rare and mostly from a computer linguistics or librarian point of view (Voß 2007), focusing either on automatic statistical analyses of large data sets or on intellectual inspection of single cases of tag usage: some scientists have studied the evolution of tag vocabularies and tag distribution in specific systems (Golder and Huberman 2006, Hammond 2005); others concentrate on tagging behaviour and tagger characteristics in collaborative systems (Hammond 2005, Kipp and Campbell 2007, Feinberg 2006, Sen 2006).
However, little research has been conducted on the functional and linguistic characteristics of tags. An analysis of these patterns could show differences between user wording and conventional keywording. In order to provide a reasonable basis for comparison, a classification system for existing tags is needed.
  9. Haslhofer, B.: Uniform SPARQL access to interlinked (digital library) sources (2007) 0.01
    Date
    26.12.2011 13:22:46
  10. Kaczmarek, M.; Kruk, S.R.; Gzella, A.: Collaborative building of controlled vocabulary crosswalks (2007) 0.01
    Abstract
    One of the main features of classic libraries is metadata, which also is the key aspect of the Semantic Web. Librarians in the process of resources annotation use different kinds of Knowledge Organization Systems; KOS range from controlled vocabularies to classifications and categories (e.g., taxonomies) and to relationship lists (e.g., thesauri). The diversity of controlled vocabularies, used by various libraries and organizations, became a bottleneck for efficient information exchange between different entities. Even though a simple one-to-one mapping could be established, based on the similarities between names of concepts, we cannot derive information about the hierarchy between concepts from two different KOS. One of the solutions to this problem is to create an algorithm based on data delivered by large community of users using many classification schemata at once. The rationale behind it is that similar resources can be described by equivalent concepts taken from different taxonomies. The more annotations are collected, the more precise the result of this crosswalk will be.
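The crosswalk idea described in this abstract (equivalent concepts from different schemes tend to be attached to the same resources) can be sketched as counting co-annotations. The resource IDs, concept labels, and support threshold below are invented for illustration:

```python
from collections import Counter
from itertools import product

# Hypothetical annotations: resource -> (concepts from scheme A, scheme B)
annotations = {
    "res1": ({"A:cats"}, {"B:felines"}),
    "res2": ({"A:cats"}, {"B:felines"}),
    "res3": ({"A:cats", "A:pets"}, {"B:felines", "B:animals"}),
}

# Count how often each cross-scheme concept pair describes the same resource
pairs = Counter()
for concepts_a, concepts_b in annotations.values():
    pairs.update(product(concepts_a, concepts_b))

# Propose mappings supported by at least two co-annotated resources
crosswalk = [pair for pair, count in pairs.items() if count >= 2]
print(crosswalk)  # [('A:cats', 'B:felines')]
```

As the abstract notes, the more annotations are collected, the better such co-occurrence counts separate genuine equivalences from coincidental pairings.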
  11. Lumsden, J.; Hall, H.; Cruickshank, P.: Ontology definition and construction, and epistemological adequacy for systems interoperability : a practitioner analysis (2011) 0.01
    Abstract
    Ontology development is considered to be a useful approach to the design and implementation of interoperable systems. This literature review and commentary examines the current state of knowledge in this field with particular reference to processes involved in assuring epistemological adequacy. It takes the perspective of the information systems practitioner keen to adopt a systematic approach to in-house ontology design, taking into consideration previously published work. The study arises from author involvement in an integration/interoperability project on systems that support Scottish Common Housing Registers in which, ultimately, ontological modelling was not deployed. Issues concerning the agreement of meaning, and the implications for the creation of interoperable systems, are discussed. The extent to which those theories, methods and frameworks provide practitioners with a usable set of tools is explored, and examples of practical applications of ontological modelling are noted. The findings from the review of the literature demonstrate a number of difficulties faced by information systems practitioners keen to develop and deploy domain ontologies. A major problem is deciding which broad approach to take: to rely on automatic ontology construction techniques, or to rely on key words and domain experts to develop ontologies.
  12. Wang, S.; Isaac, A.; Schlobach, S.; Meij, L. van der; Schopman, B.: Instance-based semantic interoperability in the cultural heritage (2012) 0.01
    Abstract
This paper gives a comprehensive overview of the problem of Semantic Interoperability in the Cultural Heritage domain, with a particular focus on solutions centered around extensional, i.e., instance-based, ontology matching methods. It presents three typical scenarios requiring interoperability: one with homogeneous collections, one with heterogeneous collections, and one with multilingual collections. It discusses two different ways to evaluate potential alignments, one based on the application of re-indexing and one using a reference alignment. To these scenarios we apply extensional matching with different similarity measures, which yields interesting insights. Finally, we firmly position our work in the Cultural Heritage context through an extensive discussion of the relevance for, and issues related to, this specific field. The findings are as unspectacular as expected but nevertheless important: the provided methods can really improve interoperability in a number of important cases, but they are not universal solutions to all related problems. This paper will provide a solid foundation for any future work on Semantic Interoperability in the Cultural Heritage domain, in particular for anybody intending to apply extensional methods.
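One common similarity measure for extensional (instance-based) matching is the Jaccard overlap of the sets of objects annotated with each concept. A minimal sketch with invented concept and instance identifiers; the paper's actual measures and thresholds may differ:

```python
def jaccard(a: set, b: set) -> float:
    """Ratio of shared instances to all instances of two concepts."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical extensions: concept -> set of objects indexed with it
instances_a = {"conceptA": {"obj1", "obj2", "obj3"}}
instances_b = {"conceptB": {"obj2", "obj3", "obj4"}, "conceptC": {"obj9"}}

# Keep concept pairs whose instance sets overlap strongly enough
alignments = [
    (ca, cb, jaccard(ia, ib))
    for ca, ia in instances_a.items()
    for cb, ib in instances_b.items()
    if jaccard(ia, ib) > 0.4
]
print(alignments)  # [('conceptA', 'conceptB', 0.5)]
```

This is why the method needs collections with shared (or re-indexed) instances: with disjoint extensions every Jaccard score is zero.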
  13. Ioannou, E.; Nejdl, W.; Niederée, C.; Velegrakis, Y.: Embracing uncertainty in entity linking (2012) 0.01
    Abstract
    The modern Web has grown from a publishing place of well-structured data and HTML pages for companies and experienced users into a vivid publishing and data exchange community in which everyone can participate, both as a data consumer and as a data producer. Unavoidably, the data available on the Web became highly heterogeneous, ranging from highly structured and semistructured to highly unstructured user-generated content, reflecting different perspectives and structuring principles. The full potential of such data can only be realized by combining information from multiple sources. For instance, the knowledge that is typically embedded in monolithic applications can be outsourced and, thus, used also in other applications. Numerous systems nowadays are already actively utilizing existing content from various sources such as WordNet or Wikipedia. Some well-known examples of such systems include DBpedia, Freebase, Spock, and DBLife. A major challenge during combining and querying information from multiple heterogeneous sources is entity linkage, i.e., the ability to detect whether two pieces of information correspond to the same real-world object. This chapter introduces a novel approach for addressing the entity linkage problem for heterogeneous, uncertain, and volatile data.
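Entity linkage that embraces uncertainty can be sketched as scoring candidate pairs rather than making hard yes/no decisions, so downstream queries can threshold as needed. The records below and the choice of a character-level string similarity (difflib) are illustrative assumptions, not the chapter's actual method:

```python
from difflib import SequenceMatcher

# Hypothetical records from two heterogeneous sources: id -> person name
records_a = {"a1": "Tim Berners-Lee", "a2": "Douglas Engelbart"}
records_b = {"b1": "Berners-Lee, Tim", "b2": "Engelbart, D."}

def normalize(name: str) -> str:
    """Make name order irrelevant: lowercase, sort the name parts."""
    parts = name.replace(",", " ").split()
    return " ".join(sorted(p.lower() for p in parts))

def linkage_candidates(left: dict, right: dict):
    """Yield (id_left, id_right, confidence) for every cross-source pair."""
    for la, na in left.items():
        for lb, nb in right.items():
            score = SequenceMatcher(None, normalize(na), normalize(nb)).ratio()
            yield la, lb, round(score, 2)

# A consumer applies its own confidence threshold instead of a fixed verdict
links = [(a, b, s) for a, b, s in linkage_candidates(records_a, records_b)
         if s > 0.8]
print(links)  # [('a1', 'b1', 1.0)]
```

Keeping the score (rather than discarding it after a fixed cutoff) is what allows volatile or uncertain data to be re-evaluated as evidence accumulates.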
  14. Baker, T.; Sutton, S.A.: Linked data and the charm of weak semantics : Introduction: the strengths of weak semantics (2015) 0.01
    Abstract
Logic and precision are fundamental to ontologies underlying the semantic web and, by extension, to linked data. This special section focuses on the interaction of semantics, ontologies and linked data. The discussion presents the Simple Knowledge Organization System (SKOS) as a less formal strategy for expressing concept hierarchies and associations and questions the value of deep domain ontologies in favor of simpler vocabularies that are more open to reuse, albeit risking illogical outcomes. RDF ontologies harbor another unexpected drawback. While structurally sound, they leave validation gaps permitting illogical uses, a problem being addressed by a W3C Working Group. Data models based on RDF graphs and properties may replace traditional library catalog models geared to predefined entities, with relationships between RDF classes providing the semantic connections. The BIBFRAME Initiative takes a different and streamlined approach to linking data, building rich networks of information resources rather than relying on a strict underlying structure and vocabulary. Taken together, the articles illustrate the trend toward a pragmatic approach to a Semantic Web, sacrificing some specificity for greater flexibility and partial interoperability.
  15. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2014) 0.01
    Abstract
    This article reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The article discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the Dewey Decimal Classification [DDC] (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  16. Si, L.: Encoding formats and consideration of requirements for mapping (2007) 0.01
    Date
    26.12.2011 13:22:27
  17. Mayr, P.; Petras, V.: Building a Terminology Network for Search : the KoMoHe project (2008) 0.01
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  18. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.01
  19. Candela, G.: ¬An automatic data quality approach to assess semantic data from cultural heritage institutions (2023) 0.01
    Date
    22. 6.2023 18:23:31
  20. Levergood, B.; Farrenkopf, S.; Frasnelli, E.: ¬The specification of the language of the field and interoperability : cross-language access to catalogues and online libraries (CACAO) (2008) 0.01
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas

Types

  • a 35
  • el 20
  • m 4
  • s 2
  • x 2