Search (313 results, page 1 of 16)

  • theme_ss:"Semantic Web"
  1. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.05
    Score trace (Lucene ClassicSimilarity): 0.04530395 = coord-weighted sum of per-term tf-idf weights; e.g. weight(_text_:authors in doc 1634) = 0.07018149, from freq=6.0, docFreq=1258, maxDocs=44218, queryNorm=0.0441157, fieldNorm=0.03125, coord(1/6).
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of semantically diverse ontologies for controversial domains by matching their relations.
    Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques have been proposed over the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. However, matching individual entities cannot fully integrate ontologies' semantics without also matching their inter-relations with all other related classes (and instances), and semantic matching of ontological relations still constitutes a major research challenge. The authors therefore propose a new paradigm for assessing the maximal possible matching and unification of ontological relations. Several unification rules for ontological relations were devised, based on ontological reference rules and on lexical and textual entailment. These rules were semi-automatically applied to extend a given ontology with semantically matching relations from another ontology for a similar domain; the ontologies were then unified through these similar pairs of relations. The authors observe that the same rules can also be used to reveal contradictory relations in different ontologies.
    Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate the methodology's potential for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies.
    Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies through a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. The methodology still has to be fully automatically implemented and tested on a larger dataset in future research.
    Practical implications - The result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results.
    Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
    Type
    a
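    The relevance figure beside each hit (0.05 for this one) is a Lucene ClassicSimilarity score, and the trace kept above breaks its largest component into tf-idf factors. As a hedged illustration, the following Python sketch reproduces that component from the constants in the trace; the function name is ours, but the formula (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), score = queryWeight * fieldWeight) is standard ClassicSimilarity.

      import math

      def classic_weight(freq, doc_freq, max_docs, query_norm, field_norm):
          # One weight(...) node of a ClassicSimilarity explain trace.
          tf = math.sqrt(freq)                               # tf(freq)
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # idf(docFreq, maxDocs)
          query_weight = idf * query_norm                    # queryWeight
          field_weight = tf * idf * field_norm               # fieldWeight
          return query_weight * field_weight

      # Constants from the trace of hit 1 for the term "authors":
      print(classic_weight(6.0, 1258, 44218, 0.0441157, 0.03125))
      # -> 0.0701814..., matching the reported 0.07018149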
  2. Brunetti, J.M.; García, R.: User-centered design and evaluation of overview components for semantic data exploration (2014) 0.04
    Abstract
    Purpose - The growing volumes of semantic data available on the web make it necessary to handle the information overload phenomenon. The potential of this amount of data is enormous, but in most cases it is very difficult for users to visualize, explore and use this data, especially for lay-users without experience with Semantic Web technologies. The paper aims to discuss these issues.
    Design/methodology/approach - The Visual Information-Seeking Mantra "Overview first, zoom and filter, then details-on-demand" proposed by Shneiderman describes how data should be presented in different stages to achieve an effective exploration. The overview is the first user task when dealing with a data set; the objective is that the user becomes capable of forming an idea of the overall structure of the data set. Different information architecture (IA) components supporting the overview task have been developed such that they are generated automatically from semantic data, and they have been evaluated with end-users.
    Findings - The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with Treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative User-Centered Design methodology. Evaluations with end-users have shown that users quickly become accustomed to them even though they are generated automatically from structured data, without requiring knowledge about the underlying semantic technologies, and that the different overview components complement each other, as they address different information search needs.
    Originality/value - Overviews of semantic data sets cannot easily be obtained with current semantic web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which are typical in the Semantic Web, because traditional IA techniques do not easily scale to large data sets. There is little or no support for obtaining overview information quickly and easily at the beginning of the exploration of a new data set, which can be a serious limitation when exploring a data set for the first time, especially for lay-users. The proposal is to reuse and adapt existing IA components to provide this overview, and to show that they can be generated automatically from the thesauri and ontologies that structure semantic data while providing a user experience comparable to traditional web sites.
    Date
    20. 1.2015 18:30:22
    Type
    a
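    The abstract describes generating IA components (navigation bars, site maps, Treemaps) automatically from semantic data. A minimal sketch of the first step under stated assumptions (rdflib, an illustrative input file): counting instances per class yields the sizes that could drive a navigation bar or Treemap layout.

      from collections import Counter
      from rdflib import Graph
      from rdflib.namespace import RDF

      g = Graph()
      g.parse("dataset.ttl", format="turtle")   # hypothetical data set

      # The number of instances per class suggests how prominent the
      # class should be in an automatically generated overview component.
      class_sizes = Counter(g.objects(None, RDF.type))

      for cls, size in class_sizes.most_common(10):
          print(cls, size)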
  3. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.04
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Rez. in: JASIST 58(2007) no.3, S.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatically extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference that can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval, and will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web. The Moving Picture Experts Group standards MPEG-7 and MPEG-21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in a bias against ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on general multimedia description and extraction could have been provided.
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems, representing the concept of the semantic web and multimedia content analysis. A definition of the fuzzy knowledge representation that can be used in multimedia content applications is provided, with a comprehensive analysis. The second part of the book introduces multimedia content analysis approaches and applications, together with examples of methods applicable to multimedia content analysis. Multimedia content analysis is a very diverse field that touches many other research fields at the same time; this creates strong diversity issues, as everything from low-level features (e.g., colors, DCT coefficients, motion vectors) up to the very high, semantic level (e.g., objects, events, tracks) is involved. The second part includes topics on structure identification (e.g., shot detection for video sequences) and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are very logically organized. Because of the diversity of this research field, including several chapters of recent research results is not sufficient to cover the state of the art of multimedia; the editors should have written an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, surveying the state of the art of the field and thus introducing it to the reader.
    The final part of the book discusses research in multimedia content management systems and the semantic web, and presents examples and applications for semantic multimedia analysis in search and retrieval systems. These chapters describe example systems in which current projects have been implemented, and include extensive results and real demonstrations, covering real case scenarios such as e-commerce, medical applications and Web services. Topics in natural language, speech and image processing techniques and their application to multimedia indexing and content-based retrieval are elaborated upon with extensive examples and deployment methods. The editors themselves provide the readers with a chapter about their latest research results on knowledge-based multimedia content indexing and retrieval. Some interesting applications for multimedia content and the semantic web are introduced, including applications that take advantage of the metadata provided by MPEG-7 to realize advanced access services for multimedia content. The applications discussed in the third part of the book provide useful guidance to researchers and practitioners planning to implement semantic multimedia analysis techniques in new research and development projects in both academia and industry. A fourth part should be added to this book: performance measurements for integrated approaches of multimedia analysis and the semantic web. Performance of the semantic approach is a very sophisticated issue that requires extensive elaboration and effort; measuring semantic search is an ongoing research area, and several chapters concerning performance measurement and analysis would be required to adequately cover this area and introduce it to readers."
  4. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.03
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
    Type
    a
  5. Baker, T.; Sutton, S.A.: Linked data and the charm of weak semantics : Introduction: the strengths of weak semantics (2015) 0.03
    Abstract
    Logic and precision are fundamental to ontologies underlying the semantic web and, by extension, to linked data. This special section focuses on the interaction of semantics, ontologies and linked data. The discussion presents the Simple Knowledge Organization Scheme (SKOS) as a less formal strategy for expressing concept hierarchies and associations and questions the value of deep domain ontologies in favor of simpler vocabularies that are more open to reuse, albeit risking illogical outcomes. RDF ontologies harbor another unexpected drawback. While structurally sound, they leave validation gaps permitting illogical uses, a problem being addressed by a W3C Working Group. Data models based on RDF graphs and properties may replace traditional library catalog models geared to predefined entities, with relationships between RDF classes providing the semantic connections. The BIBFRAME Initiative takes a different and streamlined approach to linking data, building rich networks of information resources rather than relying on a strict underlying structure and vocabulary. Taken together, the articles illustrate the trend toward a pragmatic approach to a Semantic Web, sacrificing some specificity for greater flexibility and partial interoperability.
    Footnote
    Introduction to a special section "Linked data and the charm of weak semantics".
    Type
    a
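    As a hedged sketch of the "weak semantics" discussed above, assuming rdflib and an invented vocabulary: SKOS asserts only loose hierarchy and association between concepts, with none of the logical commitments of a deep OWL ontology.

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/vocab/")   # illustrative namespace
      g = Graph()
      g.bind("skos", SKOS)

      g.add((EX.SemanticWeb, RDF.type, SKOS.Concept))
      g.add((EX.SemanticWeb, SKOS.prefLabel, Literal("Semantic Web", lang="en")))
      g.add((EX.LinkedData, RDF.type, SKOS.Concept))
      g.add((EX.LinkedData, SKOS.broader, EX.SemanticWeb))  # hierarchy, not subclassing
      g.add((EX.LinkedData, SKOS.related, EX.OpenData))     # mere association

      print(g.serialize(format="turtle"))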
  6. Isaac, A.; Baker, T.: Linked data practice at different levels of semantic precision : the perspective of libraries, archives and museums (2015) 0.03
    Abstract
    Libraries, archives and museums rely on structured schemas and vocabularies to indicate classes to which a resource may belong. In the context of linked data, key organizational components are the RDF data model, element schemas and value vocabularies, with simple ontologies having minimally defined classes and properties in order to facilitate reuse and interoperability. Simplicity over formal semantics is a tenet of the open-world assumption underlying ontology languages central to the Semantic Web, but the result is a lack of constraints, data quality checks and validation capacity. Inconsistent use of vocabularies and ontologies that do not follow formal semantics rules and logical concept hierarchies further complicates the use of Semantic Web technologies. The Simple Knowledge Organization System (SKOS) helps make existing value vocabularies available in the linked data environment, but it exchanges precision for simplicity. Incompatibilities between simple organized vocabularies, Resource Description Framework Schemas and OWL ontologies, and even basic notions of subjects and concepts, prevent smooth translations and challenge the conversion of cultural institutions' unique legacy vocabularies for linked data. Adopting the linked data vision requires accepting loose semantic interpretations. To avoid semantic inconsistencies and illogical results, cultural organizations following the linked data path must be careful to choose the level of semantics that best suits their domain and needs.
    Footnote
    Contribution to a special section "Linked data and the charm of weak semantics".
    Type
    a
  7. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.03
    Abstract
    The explosion of possibilities for ubiquitous content production has pushed the information overload problem to a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches did not succeed in treating content itself (i.e. its meaning rather than its representation). This leads to very low usefulness of retrieval results for the user's task at hand. In the last ten years, ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the retrieval process is inherently ambiguous: a user, unfamiliar with the underlying repository and/or query syntax, only approximates his information need in a query. This implies the necessity of including the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from query evaluation into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerge from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need correctly, and to interpret the retrieval results accordingly, is a key issue in realizing much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
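    A minimal sketch of the cooperative query refinement the thesis describes, with a toy ontology standing in for the real domain ontology (the structure and threshold are assumptions, not the thesis's actual implementation): when a query is too broad, the system proposes narrower concepts instead of simply returning results.

      # Toy concept hierarchy: concept -> narrower concepts (illustrative only).
      ONTOLOGY = {
          "information retrieval": ["query refinement", "relevance feedback"],
          "query refinement": ["term suggestion", "query expansion"],
      }

      def refine(query, hits, too_many=100):
          # Librarian-agent style step: suggest conceptual refinements
          # whenever the result set is too large to be useful.
          if hits <= too_many:
              return []
          return ONTOLOGY.get(query, [])

      print(refine("information retrieval", hits=5000))
      # -> ['query refinement', 'relevance feedback']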
  8. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.03
    Date
    31. 7.2010 16:58:22
    Type
    a
  9. Faaborg, A.; Lagoze, C.: Semantic browsing (2003) 0.03
    Abstract
    We have created software applications that allow users to both author and use Semantic Web metadata. To create and use a layer of semantic content on top of the existing Web, we have (1) implemented a user interface that expedites the task of attributing metadata to resources on the Web, and (2) augmented a Web browser to leverage this semantic metadata to provide relevant information and tasks to the user. This project provides a framework for annotating and reorganizing existing files, pages, and sites on the Web that is similar to Vannevar Bush's original concepts of trail blazing and associative indexing.
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
    Type
    a
  10. Heflin, J.; Hendler, J.: ¬A portrait of the Semantic Web in action (2001) 0.03
    Abstract
    Without semantically enriched content, the Web cannot reach its full potential. The authors discuss tools and techniques for generating and processing such content, thus setting a foundation upon which to build the Semantic Web. In particular, they put a Semantic Web language through its paces and try to answer questions about how people can use it, such as, How do authors generate semantic descriptions? How do agents discover these descriptions? How can agents integrate information from different sites? How can users query the Semantic Web? The authors present a system that addresses these questions and describe tools that help users interact with the Semantic Web. They motivate the design of their system with a specific application: semantic markup for computer science.
    Type
    a
  11. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.03
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
    Date
    11. 5.2013 19:22:18
    Type
    a
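    A hedged sketch of the paper's core point (element names invented): two XML documents can encode the same fact under different DTDs with nothing telling a machine they coincide, whereas RDF names the relation with a shared URI.

      from rdflib import Graph, Literal, URIRef
      from rdflib.namespace import DC

      # The same fact under two domain-specific XML vocabularies -
      # no program can tell that <writer> and <author> mean the same thing:
      doc_a = "<book><writer>J. Heflin</writer></book>"
      doc_b = "<publication><author>J. Heflin</author></publication>"

      # In RDF both sources can map to one statement whose predicate
      # is a shared, resolvable URI (Dublin Core's dc:creator):
      g = Graph()
      g.add((URIRef("http://example.org/book/1"), DC.creator, Literal("J. Heflin")))
      print(g.serialize(format="turtle"))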
  12. Malmsten, M.: Making a library catalogue part of the Semantic Web (2008) 0.02
    Abstract
    Library catalogues contain an enormous amount of structured, high-quality data; however, this data is generally not made available to semantic web applications. In this paper we describe the tools and techniques used to make the Swedish Union Catalogue (LIBRIS) part of the Semantic Web and Linked Data. The focus is on links to and between resources and the mechanisms used to make data available, rather than perfect description of the individual resources. We also present a method of creating links between records of the same work.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
    Type
    a
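    A hedged sketch of the pattern the paper describes (the URI scheme and linking property here are illustrative stand-ins, not the actual LIBRIS ones): every record gets a stable URI, and records of the same work are linked explicitly so Linked Data clients can merge their descriptions.

      from rdflib import Graph, URIRef
      from rdflib.namespace import DCTERMS

      BASE = "http://example.org/bib/"   # stand-in for the catalogue's URI pattern
      g = Graph()

      rec_a = URIRef(BASE + "12345")     # two records describing the same work
      rec_b = URIRef(BASE + "67890")

      # dcterms:relation stands in here for whatever same-work property
      # the catalogue actually publishes.
      g.add((rec_a, DCTERMS.relation, rec_b))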
  13. Blumauer, A.; Pellegrini, T.: Semantic Web Revisited : Eine kurze Einführung in das Social Semantic Web (2009) 0.02
    Pages
    S.3-22
    Source
    Social Semantic Web: Web 2.0, was nun? Hrsg.: A. Blumauer u. T. Pellegrini
    Type
    a
  14. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.02
    Date
    22. 1.2011 10:38:28
    Source
    Kommunikation, Partizipation und Wirkungen im Social Web, Band 1. Hrsg.: A. Zerfaß u.a
    Type
    a
  15. Baker, T.; Bermès, E.; Coyle, K.; Dunsire, G.; Isaac, A.; Murray, P.; Panzer, M.; Schneider, J.; Singer, R.; Summers, E.; Waites, W.; Young, J.; Zeng, M.: Library Linked Data Incubator Group Final Report (2011) 0.02
    Abstract
    The mission of the W3C Library Linked Data Incubator Group, chartered from May 2010 through August 2011, has been "to help increase global interoperability of library data on the Web, by bringing together people involved in Semantic Web activities - focusing on Linked Data - in the library community and beyond, building on existing initiatives, and identifying collaboration tracks for the future." In Linked Data [LINKEDDATA], data is expressed using standards such as Resource Description Framework (RDF) [RDF], which specifies relationships between things, and Uniform Resource Identifiers (URIs, or "Web addresses") [URI]. This final report of the Incubator Group examines how Semantic Web standards and Linked Data principles can be used to make the valuable information assets that libraries create and curate - resources such as bibliographic data, authorities, and concept schemes - more visible and re-usable outside of their original library context on the wider Web. The Incubator Group began by eliciting reports on relevant activities from parties ranging from small, independent projects to national library initiatives (see the separate report, Library Linked Data Incubator Group: Use Cases) [USECASE]. These use cases provided the starting point for the work summarized in the report: an analysis of the benefits of library Linked Data, a discussion of current issues with regard to traditional library data, existing library Linked Data initiatives, and legal rights over library data; and recommendations for next steps. The report also summarizes the results of a survey of current Linked Data technologies and an inventory of library Linked Data resources available today (see also the more detailed report, Library Linked Data Incubator Group: Datasets, Value Vocabularies, and Metadata Element Sets) [VOCABDATASET].
    Key recommendations of the report are:
    - That library leaders identify sets of data as possible candidates for early exposure as Linked Data and foster a discussion about Open Data and rights;
    - That library standards bodies increase library participation in Semantic Web standardization, develop library data standards that are compatible with Linked Data, and disseminate best-practice design patterns tailored to library Linked Data;
    - That data and systems designers design enhanced user services based on Linked Data capabilities, create URIs for the items in library datasets, develop policies for managing RDF vocabularies and their URIs, and express library data by re-using or mapping to existing Linked Data vocabularies;
    - That librarians and archivists preserve Linked Data element sets and value vocabularies and apply library experience in curation and long-term preservation to Linked Data datasets.
  16. Franklin, R.A.: Re-inventing subject access for the semantic web (2003) 0.02
    Abstract
    First generation scholarly research on the Web lacked a firm system of authority control. Second generation Web research is beginning to model subject access with library science principles of bibliographic control and cataloguing. Harnessing the Web and organising the intellectual content with standards and controlled vocabulary provides precise search and retrieval capability, increasing relevance and efficient use of technology. Dublin Core metadata standards permit a full evaluation and cataloguing of Web resources appropriate to highly specific research needs and discovery. Current research points to a type of structure based on a system of faceted classification. This system allows the semantic and syntactic relationships to be defined. Controlled vocabulary, such as the Library of Congress Subject Headings, can be assigned, not in a hierarchical structure, but rather as descriptive facets of relating concepts. Web design features such as this are adding value to discovery and filtering out data that lack authority. The system design allows for scalability and extensibility, two technical features that are integral to future development of the digital library and resource discovery.
    Date
    30.12.2008 18:22:46
    Type
    a
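    A hedged sketch of faceted subject access as the abstract describes it: controlled terms are assigned as independent, combinable facets rather than as positions in a single hierarchy (the facet names and records are invented).

      records = [
          {"title": "Re-inventing subject access",
           "facets": {"topic": "Semantic Web", "form": "Article"}},
          {"title": "Semantic browsing",
           "facets": {"topic": "Semantic Web", "form": "Conference paper"}},
      ]

      def filter_by(records, **wanted):
          # Each facet narrows the set independently; facets combine freely.
          return [r for r in records
                  if all(r["facets"].get(k) == v for k, v in wanted.items())]

      print(filter_by(records, topic="Semantic Web", form="Article"))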
  17. Jacobs, I.: From chaos, order: W3C standard helps organize knowledge : SKOS Connects Diverse Knowledge Organization Systems to Linked Data (2009) 0.02
    Abstract
    18 August 2009 -- Today W3C announces a new standard that builds a bridge between the world of knowledge organization systems - including thesauri, classifications, subject headings, taxonomies, and folksonomies - and the linked data community, bringing benefits to both. Libraries, museums, newspapers, government portals, enterprises, social networking applications, and other communities that manage large collections of books, historical artifacts, news reports, business glossaries, blog entries, and other items can now use Simple Knowledge Organization System (SKOS) to leverage the power of linked data. As different communities with expertise and established vocabularies use SKOS to integrate them into the Semantic Web, they increase the value of the information for everyone.
    Content
    SKOS Adapts to the Diversity of Knowledge Organization Systems
    A useful starting point for understanding the role of SKOS is the set of subject headings published by the US Library of Congress (LOC) for categorizing books, videos, and other library resources. These headings can be used to broaden or narrow queries for discovering resources. For instance, one can narrow a query about books on "Chinese literature" to "Chinese drama," or further still to "Chinese children's plays." Library of Congress subject headings have evolved within a community of practice over a period of decades. By now publishing these subject headings in SKOS, the Library of Congress has made them available to the linked data community, which benefits from a time-tested set of concepts to re-use in their own data. This re-use adds value ("the network effect") to the collection. When people all over the Web re-use the same LOC concept for "Chinese drama," or a concept from some other vocabulary linked to it, this creates many new routes to the discovery of information, and increases the chances that relevant items will be found.
    As an example of mapping one vocabulary to another, a combined effort from the STITCH, TELplus and MACS Projects provides links between LOC concepts and RAMEAU, a collection of French subject headings used by the Bibliothèque Nationale de France and other institutions.
    SKOS can be used not only for subject headings but also for many other approaches to organizing knowledge. Because different communities are comfortable with different organization schemes, SKOS is designed to port diverse knowledge organization systems to the Web. "Active participation from the library and information science community in the development of SKOS over the past seven years has been key to ensuring that SKOS meets a variety of needs," said Thomas Baker, co-chair of the Semantic Web Deployment Working Group, which published SKOS. "One goal in creating SKOS was to provide new uses for well-established knowledge organization systems by providing a bridge to the linked data cloud." SKOS is part of the Semantic Web technology stack. Like the Web Ontology Language (OWL), SKOS can be used to define vocabularies, but the two technologies were designed to meet different needs. SKOS is a simple language with just a few features, tuned for sharing and linking knowledge organization systems such as thesauri and classification schemes. OWL offers a general and powerful framework for knowledge representation, where additional "rigor" can afford additional benefits (for instance, business rule processing). To get started with SKOS, see the SKOS Primer.
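    The broaden/narrow behaviour described above falls directly out of SKOS data. A minimal sketch, assuming rdflib and a toy graph that mirrors the LOC example in the text:

      from rdflib import Graph, URIRef
      from rdflib.namespace import SKOS

      g = Graph()
      g.parse(data="""
      @prefix skos: <http://www.w3.org/2004/02/skos/core#> .
      @prefix ex:   <http://example.org/sh/> .
      ex:ChineseLiterature skos:narrower ex:ChineseDrama .
      ex:ChineseDrama      skos:narrower ex:ChineseChildrensPlays .
      """, format="turtle")

      def narrow(concept):
          # Concepts one step narrower: candidates for narrowing a query.
          return list(g.objects(concept, SKOS.narrower))

      print(narrow(URIRef("http://example.org/sh/ChineseLiterature")))
      # -> the ex:ChineseDrama concept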
  18. Hooland, S. van; Verborgh, R.; Wilde, M. De; Hercher, J.; Mannens, E.; Walle, R. van de: Evaluating the success of vocabulary reconciliation for cultural heritage collections (2013) 0.02
    Abstract
    The concept of Linked Data has made its entrance in the cultural heritage sector due to its potential use for the integration of heterogeneous collections and deriving additional value out of existing metadata. However, practitioners and researchers alike need a better understanding of what outcome they can reasonably expect of the reconciliation process between their local metadata and established controlled vocabularies which are already a part of the Linked Data cloud. This paper offers an in-depth analysis of how a locally developed vocabulary can be successfully reconciled with the Library of Congress Subject Headings (LCSH) and the Arts and Architecture Thesaurus (AAT) through the help of a general-purpose tool for interactive data transformation (OpenRefine). Issues negatively affecting the reconciliation process are identified and solutions are proposed in order to derive maximum value from existing metadata and controlled vocabularies in an automated manner.
    Date
    22. 3.2013 19:29:20
    Type
    a
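    A hedged, much-simplified sketch of the reconciliation step the paper evaluates (OpenRefine's actual matching is more sophisticated; the terms here are invented): local vocabulary terms are matched against a controlled vocabulary by string similarity, with misses flagged for manual review.

      import difflib

      lcsh = ["Semantic Web", "Ontologies (Information retrieval)", "Metadata"]
      local_terms = ["semantic web technologies", "ontology", "meta-data"]

      for term in local_terms:
          # Best fuzzy candidate above a cutoff, else leave it to a human.
          match = difflib.get_close_matches(term.title(), lcsh, n=1, cutoff=0.6)
          print(term, "->", match[0] if match else "NO MATCH - review manually")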
  19. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.02
    Abstract
    Integrated digital access to multiple collections is a prominent issue for many Cultural Heritage institutions. The metadata describing diverse collections must be interoperable, which requires aligning the controlled vocabularies that are used to annotate objects from these collections. In this paper, we present an experiment where we match the vocabularies of two collections by applying the Knowledge Representation techniques established in recent Semantic Web research. We discuss the steps that are required for such matching, namely formalising the initial resources using Semantic Web languages, and running ontology mapping tools on the resulting representations. In addition, we present a prototype that enables the user to browse the two collections using the obtained alignment while still providing her with the original vocabulary structures.
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
    Type
    a
  20. Prud'hommeaux, E.; Gayo, E.: RDF ventures to boldly meet your most pedestrian needs (2015) 0.02
    Abstract
    Defined in 1999 and paired with XML, the Resource Description Framework (RDF) has been cast as an RDF Schema, producing data that is well-structured but not validated, permitting certain illogical relationships. When stakeholders convened in 2014 to consider solutions to the data validation challenge, a W3C working group proposed Resource Shapes and Shape Expressions to describe the properties expected for an RDF node. Resistance rose from concerns about data and schema reuse, key principles in RDF. Ideally data types and properties are designed for broad use, but they are increasingly adopted with local restrictions for specific purposes. Resource Shapes are commonly treated as record classes, standing in for data structures but losing flexibility for later reuse. Of various solutions to the resulting tensions, the concept of record classes may be the most reasonable basis for agreement, satisfying stakeholders' objectives while allowing for variations with constraints.
    Footnote
    Contribution to a special section "Linked data and the charm of weak semantics".
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.18-22
    Type
    a
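    A hedged sketch of what a "shape" amounts to in the debate above: a record class listing the properties an RDF node is expected to carry, checked mechanically (a toy validator, not actual Resource Shapes or ShEx syntax).

      from rdflib import Graph, URIRef
      from rdflib.namespace import FOAF

      g = Graph()
      g.parse(data="""
      @prefix foaf: <http://xmlns.com/foaf/0.1/> .
      <http://example.org/alice> foaf:name "Alice" .
      """, format="turtle")

      # The "shape": properties required of a person node.
      PERSON_SHAPE = [FOAF.name, FOAF.mbox]

      node = URIRef("http://example.org/alice")
      missing = [p for p in PERSON_SHAPE if (node, p, None) not in g]
      print("missing:", missing)   # node fails the shape: foaf:mbox is absent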

Languages

  • e 241
  • d 69
  • f 1

Types

  • a 213
  • el 80
  • m 43
  • s 17
  • n 10
  • x 6
  • r 2