Search (77 results, page 1 of 4)

  • theme_ss:"Semantische Interoperabilität"
  1. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.12
    Score 0.11635318 (Lucene ClassicSimilarity: weight 0.2181622 for each of the terms "3a" and "2f" - tf 1.4142135 at freq 2.0, idf 8.478011, queryNorm 0.03924537, fieldNorm 0.0546875 - combined with coord 1/3 on the first term and coord 2/5 overall)
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
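    A minimal Python sketch (not the search engine's own code) that reproduces the ClassicSimilarity arithmetic behind the score shown above - tf = sqrt(freq), queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm:

      import math

      def term_score(freq, idf, query_norm, field_norm):
          # Lucene ClassicSimilarity: per-term score = queryWeight * fieldWeight
          tf = math.sqrt(freq)                  # 1.4142135 for freq = 2.0
          query_weight = idf * query_norm       # 8.478011 * 0.03924537 = 0.3327227
          field_weight = tf * idf * field_norm  # 0.65568775 for fieldNorm 0.0546875
          return query_weight * field_weight

      w = term_score(freq=2.0, idf=8.478011, query_norm=0.03924537, field_norm=0.0546875)
      print(round(w, 7))                    # ~0.2181622, the per-term weight above
      print(round((w / 3 + w) * 2 / 5, 8))  # ~0.11635318, the entry score after coord factors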
  2. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.08
    Score 0.0831094 (ClassicSimilarity: matches on "3a" and "2f", coord 1/3 and coord 2/5)
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the accompanying presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
  3. Golub, K.; Tudhope, D.; Zeng, M.L.; Zumer, M.: Terminology registries for knowledge organization systems : functionality, use, and attributes (2014) 0.03
    Score 0.032743596 (ClassicSimilarity: matches on "relation" and "22", coord 2/5)
    Abstract
    Terminology registries (TRs) are a crucial element of the infrastructure required for resource discovery services, digital libraries, Linked Data, and semantic interoperability generally. They can make the content of knowledge organization systems (KOS) available both for human and machine access. The paper describes the attributes and functionality for a TR, based on a review of published literature, existing TRs, and a survey of experts. A domain model based on user tasks is constructed and a set of core metadata elements for use in TRs is proposed. Ideally, the TR should allow searching as well as browsing for a KOS, matching a user's search while also providing information about existing terminology services, accessible to both humans and machines. The issues surrounding metadata for KOS are also discussed, together with the rationale for different aspects and the importance of a core set of KOS metadata for future machine-based access; a possible core set of metadata elements is proposed. This is dealt with in terms of practical experience and in relation to the Dublin Core Application Profile.
    Date
    22. 8.2014 17:12:54
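    As a rough illustration of the registry functionality described above - KOS records searchable and browsable by humans and machines - a hypothetical Python sketch (the element names and URLs are invented, not the core set the authors propose):

      KOS_REGISTRY = [
          {
              "title": "UDC Summary",
              "kos_type": "classification scheme",
              "languages": ["en", "fr"],
              "access": {"human": "https://example.org/udc",
                         "machine": "https://example.org/udc/skos"},
          },
      ]

      def find_kos(query):
          # Support searching as well as browsing: match a user's query to KOS titles.
          return [r for r in KOS_REGISTRY if query.lower() in r["title"].lower()]

      print(find_kos("udc"))  # -> the "UDC Summary" record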
  4. Godby, C.J.; Smith, D.; Childress, E.: Encoding application profiles in a computational model of the crosswalk (2008) 0.03
    Score 0.027286327 (ClassicSimilarity: matches on "relation" and "22", coord 2/5)
    Abstract
    OCLC's Crosswalk Web Service (Godby, Smith and Childress, 2008) formalizes the notion of crosswalk, as defined in Gill et al. (n.d.), by hiding technical details and permitting the semantic equivalences to emerge as the centerpiece. One outcome is that metadata experts, who are typically not programmers, can enter the translation logic into a spreadsheet that can be automatically converted into executable code. In this paper, we describe the implementation of the Dublin Core Terms application profile in the management of crosswalks involving MARC. A crosswalk that encodes an application profile extends the typical format with two columns: one that annotates the namespace to which an element belongs, and one that annotates a 'broader-narrower' relation between a pair of elements, such as Dublin Core coverage and Dublin Core Terms spatial. This information is sufficient to produce scripts written in OCLC's Semantic Equivalence Expression Language (or Seel), which are called from the Crosswalk Web Service to generate production-grade translations. With its focus on elements that can be mixed, matched, added, and redefined, the application profile (Heery and Patel, 2000) is a natural fit with the translation model of the Crosswalk Web Service, which attempts to achieve interoperability by mapping one pair of elements at a time.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
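    The two extra columns the abstract describes can be pictured as fields of a crosswalk table row; a hedged sketch (the row layout is assumed for illustration and is not OCLC's Seel syntax), using the coverage/spatial pair named above:

      # (source element, source namespace, target element, target namespace, relation)
      CROSSWALK = [
          ("coverage", "dc", "spatial", "dcterms", "broader-narrower"),
      ]

      def translate(record):
          # Map one pair of elements at a time, as in the translation model described.
          out = {}
          for src, src_ns, dst, dst_ns, _rel in CROSSWALK:
              key = f"{src_ns}:{src}"
              if key in record:
                  out[f"{dst_ns}:{dst}"] = record[key]
          return out

      print(translate({"dc:coverage": "Berlin"}))  # -> {'dcterms:spatial': 'Berlin'}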
  5. Linked data and user interaction : the road ahead (2015) 0.03
    Score 0.026397347 (ClassicSimilarity: match on "aufsatzsammlung" at freq 4.0, coord 1/5)
    RSWK
    Bibliothek / Linked Data / Benutzer / Mensch-Maschine-Kommunikation / Recherche / Suchverfahren / Aufsatzsammlung
    Subject
    Bibliothek / Linked Data / Benutzer / Mensch-Maschine-Kommunikation / Recherche / Suchverfahren / Aufsatzsammlung
  6. Soergel, D.: Towards a relation ontology for the Semantic Web (2011) 0.02
    Score 0.024672914 (ClassicSimilarity: match on "relation" at freq 6.0, coord 1/5)
    Abstract
    The Semantic Web consists of data structured for use by computer programs, such as data sets made available under the Linked Open Data initiative. Much of this data is structured following the entity-relationship model encoded in RDF for syntactic interoperability. For semantic interoperability, the semantics of the relationships used in any given dataset need to be made explicit. Ultimately this requires an inventory of these relationships structured around a relation ontology. This talk will outline a blueprint for such an inventory, including a format for the description/definition of binary and n-ary relations, drawing on ideas put forth in the classification and thesaurus community over the last 60 years, upper-level ontologies, systems like FrameNet, the Buffalo Relation Ontology, and an analysis of linked data sets.
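    A sketch of what one inventory entry for a binary relation might carry (the fields below are assumptions drawn from the abstract, not Soergel's actual description format):

      relation_entry = {
          "name": "isPartOf",
          "arity": 2,                     # binary; n-ary relations would add role slots
          "inverse": "hasPart",
          "broader": "isAssociatedWith",  # relations themselves form a hierarchy
          "definition": "The subject is a component or member of the object.",
      }

      def is_well_formed(entry):
          # A usable inventory needs at least a name, arity and definition per relation.
          return all(k in entry for k in ("name", "arity", "definition"))

      print(is_well_formed(relation_entry))  # -> True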
  7. Vlachidis, A.; Tudhope, D.: A knowledge-based approach to information extraction for semantic interoperability in the archaeology domain (2016) 0.02
    Score 0.016787792 (ClassicSimilarity: match on "relation" at freq 4.0, coord 1/5)
    Abstract
    The article presents a method for automatic semantic indexing of archaeological grey-literature reports using empirical (rule-based) Information Extraction techniques in combination with domain-specific knowledge organization systems. The semantic annotation system (OPTIMA) performs the tasks of Named Entity Recognition, Relation Extraction, Negation Detection, and Word-Sense Disambiguation using hand-crafted rules and terminological resources for associating contextual abstractions with classes of the standard ontology CIDOC Conceptual Reference Model (CRM) for cultural heritage and its archaeological extension, CRM-EH. Relation Extraction (RE) performance benefits from a syntactic-based definition of RE patterns derived from domain-oriented corpus analysis. The evaluation also shows clear benefit in the use of assistive natural language processing (NLP) modules relating to Word-Sense Disambiguation, Negation Detection, and Noun Phrase Validation, together with controlled thesaurus expansion. The semantic indexing results demonstrate the capacity of rule-based Information Extraction techniques to deliver interoperable semantic abstractions (semantic annotations) with respect to the CIDOC CRM and archaeological thesauri. Major contributions include recognition of relevant entities using shallow parsing NLP techniques driven by a complementary use of ontological and terminological domain resources, and empirical derivation of context-driven RE rules for the recognition of semantic relationships in phrases of unstructured text.
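    To make the idea of syntactic, context-driven RE rules concrete, a toy Python sketch (the pattern and the relation name are invented for illustration; OPTIMA's hand-crafted rules and their CRM-EH alignment are far richer):

      import re

      # One rule: "<find> was/were found in <context>" yields a relation triple.
      RULE = re.compile(r"(?P<find>\w+) (?:was|were) found in (?P<context>\w+)")

      def extract_relations(sentence):
          return [(m.group("find"), "found_in", m.group("context"))
                  for m in RULE.finditer(sentence)]

      print(extract_relations("Several coins were found in ditch F12"))
      # -> [('coins', 'found_in', 'ditch')]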
  8. Dunsire, G.: Enhancing information services using machine-to-machine terminology services (2011) 0.02
    Score 0.016619066 (ClassicSimilarity: match on "relation", coord 1/5)
    Abstract
    This paper describes the basic concepts of terminology services and their role in information retrieval interfaces. Terminology services are consumed by other software applications using machine-to-machine protocols, rather than directly by end-users. An example of a terminology service is the pilot developed by the High Level Thesaurus (HILT) project which has successfully demonstrated its potential for enhancing subject retrieval in operational services. Examples of enhancements in three such services are given. The paper discusses the future development of terminology services in relation to the Semantic Web.
  9. McCulloch, E.; Shiri, A.; Nicholson, A.D.: Subject searching requirements : the HILT II experience (2004) 0.01
    Score 0.014244914 (ClassicSimilarity: match on "relation", coord 1/5)
    Abstract
    The HILT Phase II project aimed to develop a pilot terminologies server with a view to improving cross-sectoral information retrieval. In order to inform this process, it was first necessary to examine how a representative group of users approached a range of information-related tasks. This paper focuses on exploratory interviews conducted to investigate the proposed ideal and actual strategies of a group of 30 users in relation to eight separate information tasks. In addition, users were asked to give examples of search terms they may employ and to describe how they would formulate search queries in each scenario. The interview process undertaken and the results compiled are outlined, and associated implications for the development of a pilot terminologies server are discussed.
  10. Krause, J.: Shell Model, Semantic Web and Web Information Retrieval (2006) 0.01
    Score 0.011870761 (ClassicSimilarity: match on "relation", coord 1/5)
    Abstract
    The mid-1990s were marked by heightened enthusiasm for the possibilities of the WWW, which has only recently given way - at least in relation to scientific information - to a more differentiated weighing of its advantages and disadvantages. Web Information Retrieval originated as a specialized discipline with great commercial significance (for an overview see Lewandowski 2005). Besides the new technological structure that enables the indexing and searching (in seconds) of unimaginable amounts of data worldwide, new assessment processes for the ranking of search results are being developed which use the link structures of the Web. They are the main innovation with respect to the traditional "mother discipline" of Information Retrieval. From the beginning, commercial search engines have applied the link structures of Web pages in a wide array of variations. From the perspective of scientific information, link-topology-based approaches were in essence trying to solve a self-created problem: on the one hand, it quickly became clear that the openness of the Web led to a hitherto unknown increase in available information; on the other, the quality of the Web pages searched - and with it the relevance of the results - became a problem. The gatekeeper function of traditional information providers, which narrows every user query down to high-quality sources, was lacking. The recognition of the "authoritativeness" of Web pages by general search engines such as Google was therefore one of the most important factors in their success.
  11. Park, J.-r.: Semantic interoperability and metadata quality : an analysis of metadata item records of digital image collections (2006) 0.01
    Score 0.011870761 (ClassicSimilarity: match on "relation", coord 1/5)
    Abstract
    This paper is a current assessment of the status of metadata creation and mapping between cataloger-defined field names and Dublin Core (DC) metadata elements across three digital image collections. The metadata elements that evince the most frequently inaccurate, inconsistent and incomplete DC metadata application are identified. As well, the most frequently occurring locally added metadata elements and associated pattern development are examined. For this, a randomly collected sample of 659 metadata item records from three digital image collections is analyzed. Implications and issues drawn from the evaluation of the current status of metadata creation and mapping are also discussed in relation to the issue of semantic interoperability of concept representation across digital image collections. The findings of the study suggest that conceptual ambiguities and semantic overlaps inherent in some DC metadata elements hinder semantic interoperability. The DC metadata scheme needs to be refined in order to disambiguate the semantic relations of certain DC metadata elements that present semantic overlaps and conceptual ambiguities between element names and their corresponding definitions. The findings also suggest that the development of mediation mechanisms, such as concept networks that facilitate the metadata creation and mapping process, is critically needed for enhancing metadata quality.
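    The record-level analysis described above can be sketched as a small audit script (the local field names and their DC mapping below are invented for the example):

      # Cataloger-defined field -> DC element (None = locally added, no DC equivalent)
      DC_MAP = {"photographer": "creator", "place shown": "coverage", "notes": None}

      def audit(records):
          # Count locally added elements that cannot be mapped to a DC element.
          unmapped = {}
          for rec in records:
              for field in rec:
                  if DC_MAP.get(field) is None:
                      unmapped[field] = unmapped.get(field, 0) + 1
          return unmapped

      print(audit([{"photographer": "X", "notes": "n1"}, {"notes": "n2"}]))
      # -> {'notes': 2}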
  12. Kim, J.-M.; Shin, H.; Kim, H.-J.: Schema and constraints-based matching and merging of Topic Maps (2007) 0.01
    Score 0.011870761 (ClassicSimilarity: match on "relation", coord 1/5)
    Abstract
    In this paper, we propose a multi-strategic matching and merging approach to find correspondences between ontologies based on the syntactic or semantic characteristics and constraints of the Topic Maps. Our multi-strategic matching approach consists of a linguistic module and a Topic Map constraints-based module. The linguistic module computes similarities between concepts using morphological analysis, string normalization, tokenization, and language-dependent heuristics. The Topic Map constraints-based module takes advantage of several Topic Maps-dependent techniques such as topic property-based matching, hierarchy-based matching, and association-based matching. This is a composite matching procedure that need not generate a cross-pair of all topics from the ontologies, because unmatched pairs of topics can be removed using the characteristics and constraints of the Topic Maps. Merging between Topic Maps follows the matching operations. We set up the MERGE function to integrate two Topic Maps into a new Topic Map, which satisfies such merge requirements as entity preservation, property preservation, relation preservation, and conflict resolution. For our experiments, we used oriental philosophy ontologies, western philosophy ontologies, the Yahoo western philosophy dictionary, and the Wikipedia philosophy ontology as input ontologies. Our experiments show that the automatically generated matching results conform to the outputs generated manually by domain experts and can be of great benefit to the subsequent merging operations.
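    A minimal sketch of the linguistic module's normalization-plus-similarity step and a merge that keeps unmatched topics (entity preservation); the morphological analysis and the constraints-based module are not reproduced here:

      from difflib import SequenceMatcher

      def normalize(name):
          return " ".join(name.lower().replace("-", " ").split())

      def similarity(a, b):
          return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

      def merge_topics(tm1, tm2, threshold=0.85):
          # Keep every topic from tm1; add tm2 topics that have no close match in tm1.
          merged = list(tm1)
          for t in tm2:
              if not any(similarity(t, s) >= threshold for s in tm1):
                  merged.append(t)
          return merged

      print(merge_topics(["Laozi", "Zhuangzi"], ["Lao-zi", "Confucius"]))
      # -> ['Laozi', 'Zhuangzi', 'Confucius']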
  13. Tang, J.; Liang, B.-Y.; Li, J.-Z.: Toward detecting mapping strategies for ontology interoperability (2005) 0.01
    Score 0.011870761 (ClassicSimilarity: match on "relation", coord 1/5)
    Abstract
    Ontology mapping is one of the core tasks for ontology interoperability. It aims to find semantic relationships between entities (i.e. concepts, attributes, and relations) of two ontologies. It benefits many applications, such as the integration of ontology-based web data sources and the interoperability of agents or web services. To reduce users' effort as much as possible, (semi-)automatic ontology mapping is becoming more and more important. In the existing literature, many approaches have attracted considerable interest by combining several different similarity/mapping strategies (multi-strategy based mapping). However, experiments show that multi-strategy based mapping does not always outperform its single-strategy counterpart. In this paper, we mainly deal with two problems: (1) for a new, unseen mapping task, should we select a multi-strategy based algorithm or a single-strategy based algorithm? (2) if the task is suitable for multi-strategy, how should the strategies be selected for the final combined scenario? We propose an approach of multiple-strategy detection for ontology mapping. The results obtained so far show that multi-strategy detection improves precision and recall significantly.
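    One way to picture strategy detection - deciding which strategies enter the final combination for a new task - is to score each candidate on a small labelled sample first; a hedged sketch (the strategies, thresholds and sample are illustrative, not the authors' method):

      from difflib import SequenceMatcher

      STRATEGIES = {
          "string": lambda a, b: SequenceMatcher(None, a.lower(), b.lower()).ratio(),
          "token": lambda a, b: (len(set(a.lower().split()) & set(b.lower().split()))
                                 / max(len(set(a.lower().split()) | set(b.lower().split())), 1)),
      }

      def detect(sample, labels, cutoff=0.8, min_acc=0.7):
          # Keep only strategies that predict the labelled sample well enough.
          chosen = []
          for name, fn in STRATEGIES.items():
              preds = [fn(a, b) >= cutoff for a, b in sample]
              acc = sum(p == l for p, l in zip(preds, labels)) / len(labels)
              if acc >= min_acc:
                  chosen.append(name)
          return chosen  # one name -> single strategy; several -> combine them

      sample = [("Laozi", "Lao-zi"), ("apple", "orange")]
      print(detect(sample, labels=[True, False]))  # -> ['string']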
  14. Cheng, Y.-Y.; Xia, Y.: A systematic review of methods for aligning, mapping, merging taxonomies in information sciences (2023) 0.01
    Score 0.011870761 (ClassicSimilarity: match on "relation", coord 1/5)
    Abstract
    Purpose: The purpose of this study is to provide a systematic literature review on taxonomy alignment methods in information science to explore the common research pipeline and characteristics.
    Design/methodology/approach: The authors implement a five-step systematic literature review process relating to taxonomy alignment. They take a knowledge organization system (KOS) perspective, specifically examining the KOS level of "taxonomies."
    Findings: They synthesize the matching dimensions of 28 taxonomy alignment studies in terms of the taxonomy input, approach and output. In the input dimension, they develop three characteristics: tree shapes, variable names and symmetry; for approach: methodology, unit of matching, comparison type and relation type; for output: the number of merged solutions and whether original taxonomies are preserved in the solutions.
    Research limitations/implications: The main research implications of this study are threefold: (1) to enhance the understanding of the characteristics of a taxonomy alignment work; (2) to provide a novel categorization of taxonomy alignment approaches into natural language processing, logic-based and heuristic-based approaches; (3) to provide a methodological guideline on the must-include characteristics for future taxonomy alignment research.
    Originality/value: There is no existing comprehensive review on the alignment of "taxonomies", and no other mapping survey has discussed the comparison from a KOS perspective. Using a KOS lens is critical in understanding the broader picture of what other similar systems of organization are, and enables us to define taxonomies more precisely.
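    Read as a record schema, the matching dimensions from the findings might be encoded like this (a sketch; the field values are examples, not data from the review):

      from dataclasses import dataclass

      @dataclass
      class TaxonomyAlignmentStudy:
          tree_shape: str            # input: e.g. "balanced"
          variable_names: str        # input
          symmetric: bool            # input
          methodology: str           # approach: "NLP" | "logic-based" | "heuristic-based"
          unit_of_matching: str      # approach: e.g. "node"
          comparison_type: str       # approach
          relation_type: str         # approach: e.g. "exactMatch"
          merged_solutions: int      # output
          originals_preserved: bool  # output

      s = TaxonomyAlignmentStudy("balanced", "labels", True, "heuristic-based",
                                 "node", "pairwise", "exactMatch", 1, True)
      print(s.methodology)  # -> heuristic-based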
  15. Köbler, J.; Niederklapfer, T.: Kreuzkonkordanzen zwischen RVK-BK-MSC-PACS der Fachbereiche Mathematik und Physik (2010) 0.01
    Score 0.010324105 (ClassicSimilarity: matches on "29" at freq 4.0 and "22", coord 2/3 and coord 1/5)
    Date
    29. 3.2011 10:47:10
    29. 3.2011 10:57:42
    Pages
    22 S.
  16. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.01
    Score 0.01030811 (ClassicSimilarity: matches on "29" and "22" at freq 4.0, coord 2/3 and coord 1/5)
    Abstract
    On 29 and 30 October 2009 the second international UDC seminar on the topic "Classification at a Crossroad" took place at the Royal Library in The Hague. Like the first conference of this kind in 2007, it was organized by the UDC Consortium (UDCC). This year's event focused on indexing the World Wide Web while making better use of classifications (in particular, of course, the UDC), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search and multilingual access also played a role. 135 participants from 35 countries came to The Hague for the event. With 22 papers from 14 different countries the programme covered a broad range, Great Britain being most strongly represented with five contributions. On both conference days the thematic focus was set by the opening talks, which were then explored in greater depth in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
  17. Angjeli, A.; Isaac, A.: Semantic web and vocabularies interoperability : an experiment with illuminations collections (2008) 0.01
    Score 0.00949661 (ClassicSimilarity: match on "relation", coord 1/5)
    Abstract
    During 2006 and 2007, the BnF collaborated with the National Library of the Netherlands within the framework of the Dutch project STITCH. This project investigates, through concrete experiments, semantic interoperability, especially in relation to searching. How can we conduct semantic searches across several digital heritage collections? The metadata related to content analysis are often heterogeneous. Beyond manual mapping of semantically similar entities, STITCH explores the techniques of the semantic web, particularly ontology mapping. This paper is about an experiment made on two digital iconographic collections: Mandragore, the iconographic database of the Manuscript Department of the BnF, and the Medieval Illuminated Manuscripts collection of the KB. While the content of these two collections is similar, they have been processed differently, and the vocabularies used to index their content are very different. The vocabularies in Mandragore and Iconclass are both controlled and hierarchical, but they do not have the same semantics and structure. This difference is of particular interest to the STITCH project, as it aims to study the automatic alignment of two vocabularies. The collaborative experiment started with a precise analysis of each of the vocabularies, including concepts and their representation, lexical properties of the terms used, semantic relationships, etc. The team of Dutch researchers then studied and implemented mechanisms for aligning the two vocabularies. As the initial models differed, a common standard was needed to enable the alignment procedures; RDF and SKOS were selected for that. The experiment led to a prototype that allows querying both databases at the same time through a single interface: the descriptors of each vocabulary are used as search terms for all images regardless of the collection they belong to. This experiment is only one step in the search for solutions that aim at making navigation easier between heritage collections that have heterogeneous metadata.
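    Since RDF and SKOS were chosen as the common standard, the alignment output can be pictured as SKOS mapping triples; a small sketch (the URIs and the confidence threshold are placeholders, not STITCH's actual data):

      alignments = [
          # (Mandragore concept, Iconclass concept, confidence)
          ("http://example.org/mandragore/ange", "http://example.org/iconclass/11G", 0.95),
          ("http://example.org/mandragore/roi", "http://example.org/iconclass/44B1", 0.72),
      ]

      def to_skos(pairs, exact_at=0.9):
          # Strong alignments become skos:exactMatch, weaker ones skos:closeMatch.
          for src, dst, conf in pairs:
              prop = "skos:exactMatch" if conf >= exact_at else "skos:closeMatch"
              print(f"<{src}> {prop} <{dst}> .")

      to_skos(alignments)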
  18. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.01
    Score 0.008393896 (ClassicSimilarity: match on "relation" at freq 4.0, coord 1/5)
    Abstract
    This PhD thesis describes how to effectively explore linked data on the Web. The main focus is on scenarios where users want to discover relationships between resources rather than finding out more about something specific. Searching for a specific document or piece of information fits in the theoretical framework of information retrieval and is associated with exploratory search. Exploratory search goes beyond 'looking up something' when users are seeking more detailed understanding, further investigation or navigation of the initial search results. The ideas behind exploratory search and querying linked data merge when it comes to the way knowledge is represented and indexed by machines - how data is structured and stored for optimal searchability. Queries and information should be aligned to facilitate that searches also reveal connections between results. This implies that they take into account the same semantic entities, relevant at that moment. To realize this, we research three techniques that are evaluated one by one in an experimental set-up to assess how well they succeed in their goals. In the end, the techniques are applied to a practical use case that focuses on forming a bridge between the Web and the use of digital libraries in scientific research. Our first technique focuses on the interactive visualization of search results. Linked data resources can be brought in relation with each other at will. This leads to complex and diverse graph structures. Our technique facilitates navigation and supports a workflow starting from a broad overview of the data and allows narrowing down to the desired level of detail and then broadening out again. To validate the flow, two visualizations were implemented and presented to test users. The users judged the usability of the visualizations, how the visualizations fit in the workflow and to which degree their features seemed useful for the exploration of linked data.
    There is a difference between the way users interact with resources, visually or textually, and how resources are represented for machines to be processed by algorithms. This difference complicates bridging users' intents and machine-executable queries. It is important to implement this 'translation' mechanism so that it impacts the search as favorably as possible in terms of performance, complexity and accuracy. To do this, we explain a second technique that supports such a bridging component. Our second technique is developed around three features that support the search process: looking up, relating and ranking resources. The main goal is to ensure that the resources in the results are as precise and relevant as possible. During the evaluation of this technique, we did not only look at the precision of the search results but also investigated how the effectiveness of the search evolved while the user executed certain actions sequentially.
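    The second technique's three features - looking up, relating and ranking resources - can be caricatured over a toy graph (the data and the relatedness signal are illustrative only):

      GRAPH = {
          "dbr:Linked_data": ["dbr:Semantic_Web", "dbr:RDF"],
          "dbr:Semantic_Web": ["dbr:RDF", "dbr:Ontology"],
          "dbr:RDF": [],
      }

      def look_up(label):
          return [uri for uri in GRAPH if label.lower() in uri.lower()]

      def relate(a, b):
          # Shared neighbours as a crude relatedness signal.
          return set(GRAPH.get(a, [])) & set(GRAPH.get(b, []))

      def rank(candidates, anchor):
          return sorted(candidates, key=lambda c: len(relate(anchor, c)), reverse=True)

      print(rank(look_up("semantic") + ["dbr:RDF"], "dbr:Linked_data"))
      # -> ['dbr:Semantic_Web', 'dbr:RDF']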
  19. Slavic, A.: Mapping intricacies : UDC to DDC (2010) 0.01
    Score 0.0059353807 (ClassicSimilarity: match on "relation", coord 1/5)
    Content
    Precombined subjects, such as those shown above from Dewey, may be expressed in UDC Summary as examples of combination within various records. To express an exact match, UDC class 07 has to contain the example of combination 07(7) Journals. The Press - North America. In some cases we have therefore added examples to UDC Summary that represent an exact match to Dewey Summaries. It is unfortunate that DDC has so many top-level classes that deal with a selection of countries or languages given preferred status in the scheme; repeating these preferences in UDC examples of combination emulates an unwelcome cultural bias which we have to balance out somehow. This brings us to another challenge: UDC 913(7) Regional Geography - North America [contains 2 concepts, each of which has its own URI] is an exact match to Dewey 917 [represented as one concept, 1 URI]. It seems that, because they represent an exact match to Dewey numbers, these UDC examples of combination may also need separate URIs so that they can be published as SKOS data. Albeit challenging, mapping proves to be a very useful exercise, and I am looking forward to future work here, especially in relation to our plans to map UDC Summary to Colon Classification. We are discussing this project with colleagues from DRTC in Bangalore (India).
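    Published as SKOS, the exact match discussed above might be serialized like this (a sketch; the URIs are placeholders, since, as noted, the UDC combinations do not yet have URIs of their own):

      # UDC 913(7) "Regional Geography - North America" (a combination of two
      # concepts) maps as a whole to DDC 917 (one concept, one URI).
      triples = [
          ("http://example.org/udc/913(7)", "skos:exactMatch", "http://example.org/ddc/917"),
      ]
      for s, p, o in triples:
          print(f"<{s}> {p} <{o}> .")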
  20. Woldering, B.: Die Europäische Digitale Bibliothek nimmt Gestalt an (2007) 0.01
    Score 0.0056974287 (ClassicSimilarity: matches on "29" and "22", coord 2/3 and coord 1/5)
    Date
    22. 2.2009 19:10:56
    Source
    Dialog mit Bibliotheken. 20(2008) H.1, S.29-31

Languages

  • e 60
  • d 17

Types

  • a 51
  • el 22
  • m 5
  • s 4
  • x 3
  • p 1
  • r 1