Search (150 results, page 2 of 8)

  • theme_ss:"Semantic Web"
  • year_i:[2010 TO 2020}  (see the query sketch below)
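  The two filters above use Lucene/Solr query syntax: theme_ss is a string facet field, and year_i:[2010 TO 2020} is a half-open range (2010 included, 2020 excluded). A minimal sketch of how such a filtered, paged request could be sent to a Solr backend is given below; only the two filter expressions are taken from this page, while the endpoint URL, the free-text query, the page size and the returned fields are assumptions.

    import requests

    # Hypothetical Solr endpoint; only the two fq filters are taken from this results page.
    SOLR_SELECT = "http://localhost:8983/solr/literature/select"

    params = {
        "q": "semantic web",                # assumed free-text query behind this result list
        "fq": [
            'theme_ss:"Semantic Web"',      # facet filter on the theme field
            "year_i:[2010 TO 2020}",        # half-open range: 2010 <= year < 2020
        ],
        "rows": 20,                         # assumed page size
        "start": 20,                        # offset for "page 2"
        "fl": "author,title,year_i,score",  # assumed stored fields
        "wt": "json",
    }

    response = requests.get(SOLR_SELECT, params=params)
    for doc in response.json()["response"]["docs"]:
        print(doc.get("score"), doc.get("title"))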
  1. Wright, H.: Semantic Web and ontologies (2018) 0.05
    0.046343543 = product of:
      0.18537417 = sum of:
        0.06563474 = weight(_text_:web in 80) [ClassicSimilarity], result of:
          0.06563474 = score(doc=80,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5643819 = fieldWeight in 80, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=80)
        0.05410469 = weight(_text_:wide in 80) [ClassicSimilarity], result of:
          0.05410469 = score(doc=80,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.342674 = fieldWeight in 80, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=80)
        0.06563474 = weight(_text_:web in 80) [ClassicSimilarity], result of:
          0.06563474 = score(doc=80,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5643819 = fieldWeight in 80, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=80)
      0.25 = coord(3/12)
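    The indented block above is Lucene's "explain" output for ClassicSimilarity (TF-IDF). Each leaf weight is queryWeight * fieldWeight, with tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm; the per-term weights are summed and multiplied by the coordination factor (here 3 of 12 query clauses matched). The sketch below re-derives the numbers for this first hit. It is an illustration of the formula, not code from the search system, and the same arithmetic (including the nested coord(1/2) factors in later hits) applies to every score breakdown on this page.

      import math

      def idf(doc_freq: int, max_docs: int) -> float:
          """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_weight(freq: float, doc_freq: int, max_docs: int,
                      query_norm: float, field_norm: float) -> float:
          """Contribution of one query term: queryWeight * fieldWeight."""
          tf = math.sqrt(freq)                    # tf(freq) = sqrt(freq)
          term_idf = idf(doc_freq, max_docs)
          query_weight = term_idf * query_norm    # idf * queryNorm
          field_weight = tf * term_idf * field_norm
          return query_weight * field_weight

      QUERY_NORM, FIELD_NORM, MAX_DOCS = 0.035634913, 0.0546875, 44218

      web  = term_weight(10.0, 4597, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.06563474
      wide = term_weight(2.0, 1430, MAX_DOCS, QUERY_NORM, FIELD_NORM)   # ~0.05410469

      # "web" is scored by two query clauses, so it appears twice in the sum.
      score = (web + wide + web) * (3 / 12)       # coord(3/12) = 0.25
      print(round(score, 9))                      # ~0.046343543, shown rounded as 0.05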
    
    Abstract
    The Semantic Web and ontologies can help archaeologists combine and share data, making them more open and useful. Archaeologists create diverse types of data, using a wide variety of technologies and methodologies. As in all research domains, these data are increasingly digital. The creation of data that are now openly and persistently available from disparate sources has also inspired efforts to bring archaeological resources together and make them more interoperable. This allows functionality such as federated cross-search across different datasets, and the mapping of heterogeneous data to authoritative structures to build a single data source. Ontologies provide the structure and relationships for Semantic Web data, and have been developed for use in cultural heritage applications generally, and archaeology specifically. A variety of online resources for archaeology now incorporate Semantic Web principles and technologies.
    Theme
    Semantic Web
  2. Cahier, J.-P.; Zaher, L'H.; Isoard, G.: Document et modèle pour l'action, une méthode pour le web socio-sémantique : application à un web 2.0 en développement durable (2010) 0.04
    0.044405274 = product of:
      0.1776211 = sum of:
        0.06563474 = weight(_text_:web in 4836) [ClassicSimilarity], result of:
          0.06563474 = score(doc=4836,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5643819 = fieldWeight in 4836, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4836)
        0.06563474 = weight(_text_:web in 4836) [ClassicSimilarity], result of:
          0.06563474 = score(doc=4836,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5643819 = fieldWeight in 4836, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4836)
        0.046351604 = product of:
          0.09270321 = sum of:
            0.09270321 = weight(_text_:2.0 in 4836) [ClassicSimilarity], result of:
              0.09270321 = score(doc=4836,freq=2.0), product of:
                0.20667298 = queryWeight, product of:
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.035634913 = queryNorm
                0.4485502 = fieldWeight in 4836, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4836)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    We present the DOCMA method (DOCument and Model for Action), focused on socio-semantic Web applications in large communities of interest. DOCMA is dedicated to end users without any knowledge of information science. Community members can elicit, structure and index shared business items emerging from their inquiry (such as projects, actors, products, and geographically situated objects of interest). We apply DOCMA to an experiment in the field of sustainable development: the Cartodd-Map21 collaborative Web portal.
    Theme
    Semantic Web
  3. Bianchini, C.; Willer, M.: ISBD resource and Its description in the context of the Semantic Web (2014) 0.04
    0.04299651 = product of:
      0.17198604 = sum of:
        0.06563474 = weight(_text_:web in 1998) [ClassicSimilarity], result of:
          0.06563474 = score(doc=1998,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5643819 = fieldWeight in 1998, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1998)
        0.040716566 = weight(_text_:world in 1998) [ClassicSimilarity], result of:
          0.040716566 = score(doc=1998,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.29726875 = fieldWeight in 1998, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1998)
        0.06563474 = weight(_text_:web in 1998) [ClassicSimilarity], result of:
          0.06563474 = score(doc=1998,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5643819 = fieldWeight in 1998, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1998)
      0.25 = coord(3/12)
    
    Abstract
    This article explores the question "What is an International Standard for Bibliographic Description (ISBD) resource in the context of the Semantic Web, and what is the relationship of its description to the linked data?" This question is discussed against the background of the dichotomy between the description and access using the Semantic Web differentiation of the three logical layers: real-world objects, web of data, and special purpose (bibliographic) data. The representation of bibliographic data as linked data is discussed, distinguishing the description of a resource from the iconic/objective and the informational/subjective viewpoints. In the conclusion, the authors give views on possible directions of future development of the ISBD.
    Theme
    Semantic Web
  4. Hitzler, P.; Janowicz, K.: Ontologies in a data driven world : finding the middle ground (2013) 0.04
    0.042609457 = product of:
      0.17043783 = sum of:
        0.050318997 = weight(_text_:web in 803) [ClassicSimilarity], result of:
          0.050318997 = score(doc=803,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.43268442 = fieldWeight in 803, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=803)
        0.06979983 = weight(_text_:world in 803) [ClassicSimilarity], result of:
          0.06979983 = score(doc=803,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.50960356 = fieldWeight in 803, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.09375 = fieldNorm(doc=803)
        0.050318997 = weight(_text_:web in 803) [ClassicSimilarity], result of:
          0.050318997 = score(doc=803,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.43268442 = fieldWeight in 803, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=803)
      0.25 = coord(3/12)
    
    Theme
    Semantic Web
  5. Eckert, K.: SKOS: eine Sprache für die Übertragung von Thesauri ins Semantic Web (2011) 0.04
    0.04233361 = product of:
      0.16933444 = sum of:
        0.07501114 = weight(_text_:web in 4331) [ClassicSimilarity], result of:
          0.07501114 = score(doc=4331,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.6450079 = fieldWeight in 4331, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=4331)
        0.07501114 = weight(_text_:web in 4331) [ClassicSimilarity], result of:
          0.07501114 = score(doc=4331,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.6450079 = fieldWeight in 4331, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=4331)
        0.019312155 = product of:
          0.03862431 = sum of:
            0.03862431 = weight(_text_:22 in 4331) [ClassicSimilarity], result of:
              0.03862431 = score(doc=4331,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.30952093 = fieldWeight in 4331, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4331)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    The Semantic Web, or Linked Data, has the potential to revolutionize the availability of data and knowledge as well as access to them. Knowledge organization systems such as thesauri, which index and structure data by content, can make a major contribution here. Unfortunately, many of these systems are still available only in book form or inside specialized applications. So how can they be used for the Semantic Web? The Simple Knowledge Organization System (SKOS) offers a way to "translate" knowledge organization systems into a form that can be cited on the Web and linked with other resources.
    Date
    15. 3.2011 19:21:22
    Theme
    Semantic Web
  6. Smith, D.A.; Shadbolt, N.R.: FacetOntology : expressive descriptions of facets in the Semantic Web (2012) 0.04
    0.042185247 = product of:
      0.16874099 = sum of:
        0.096111774 = weight(_text_:filter in 2208) [ClassicSimilarity], result of:
          0.096111774 = score(doc=2208,freq=2.0), product of:
            0.24899386 = queryWeight, product of:
              6.987357 = idf(docFreq=110, maxDocs=44218)
              0.035634913 = queryNorm
            0.38600057 = fieldWeight in 2208, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.987357 = idf(docFreq=110, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2208)
        0.03631461 = weight(_text_:web in 2208) [ClassicSimilarity], result of:
          0.03631461 = score(doc=2208,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.3122631 = fieldWeight in 2208, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2208)
        0.03631461 = weight(_text_:web in 2208) [ClassicSimilarity], result of:
          0.03631461 = score(doc=2208,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.3122631 = fieldWeight in 2208, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2208)
      0.25 = coord(3/12)
    
    Abstract
    The formal structure of the information on the Semantic Web lends itself to faceted browsing, an information retrieval method where users can filter results based on the values of properties ("facets"). Numerous faceted browsers have been created to browse RDF and Linked Data, but these systems use their own ontologies for defining how data is queried to populate their facets. Since the source data is in the same format across these systems (specifically, RDF), we can unify the different methods of describing how to query the underlying data, to enable compatibility across systems and provide an extensible base ontology for future systems. To this end, we present FacetOntology, an ontology that defines how to query data to form a faceted browser, and a number of transformations and filters that can be applied to data before it is shown to users. FacetOntology overcomes limitations in the expressivity of existing work by enabling the full expressivity of SPARQL when selecting data for facets. By applying a FacetOntology definition to data, a set of facets is specified, each with queries and filters to source RDF data, which enables faceted browsing systems to be created using that RDF data.
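    To make the idea of facets as queries over RDF concrete, the sketch below uses rdflib to run two SPARQL queries against a tiny in-memory graph: one collects the distinct values of a property together with their counts (the facet), the other filters resources by a chosen value. It is a generic illustration of faceted filtering over RDF, not the FacetOntology vocabulary itself; the example namespace and data are invented.

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF

      EX = Namespace("http://example.org/")   # hypothetical namespace for the illustration

      g = Graph()
      for name, topic in [("doc1", "Semantic Web"), ("doc2", "Thesauri"), ("doc3", "Semantic Web")]:
          doc = EX[name]
          g.add((doc, RDF.type, EX.Document))
          g.add((doc, EX.theme, Literal(topic)))

      # Facet definition: the distinct values of ex:theme, with their counts.
      facet_values = g.query("""
          SELECT ?theme (COUNT(?doc) AS ?n)
          WHERE { ?doc <http://example.org/theme> ?theme }
          GROUP BY ?theme
      """)
      for row in facet_values:
          print(f"{row.theme}: {row.n}")

      # Selecting a facet value filters the result set, as in a faceted browser.
      filtered = g.query("""
          SELECT ?doc
          WHERE { ?doc <http://example.org/theme> "Semantic Web" }
      """)
      print([str(row.doc) for row in filtered])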
    Theme
    Semantic Web
  7. Vatant, B.: Porting library vocabularies to the Semantic Web, and back : a win-win round trip (2010) 0.04
    0.04200787 = product of:
      0.16803148 = sum of:
        0.06656578 = weight(_text_:web in 3968) [ClassicSimilarity], result of:
          0.06656578 = score(doc=3968,freq=14.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.57238775 = fieldWeight in 3968, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3968)
        0.034899916 = weight(_text_:world in 3968) [ClassicSimilarity], result of:
          0.034899916 = score(doc=3968,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.25480178 = fieldWeight in 3968, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=3968)
        0.06656578 = weight(_text_:web in 3968) [ClassicSimilarity], result of:
          0.06656578 = score(doc=3968,freq=14.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.57238775 = fieldWeight in 3968, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3968)
      0.25 = coord(3/12)
    
    Abstract
    The role of vocabularies is critical in the long overdue synergy between the Web and library heritage. The Semantic Web should leverage existing vocabularies instead of reinventing them, but the specific features of library vocabularies make them more or less portable to the Semantic Web. Based on preliminary results in the framework of the TELplus project, we suggest guidelines for needed evolutions in order to make vocabularies usable and efficient in the Semantic Web realm, assess choices made so far by large libraries to publish vocabularies conformant to standards and good practices, and review how Semantic Web tools can help manage those vocabularies.
    Content
    Paper presented in Session 93, Cataloguing, of the WORLD LIBRARY AND INFORMATION CONGRESS: 76TH IFLA GENERAL CONFERENCE AND ASSEMBLY, 10-15 August 2010, Gothenburg, Sweden - 149. Information Technology, Cataloguing, Classification and Indexing with Knowledge Management
    Theme
    Semantic Web
  8. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.04
    0.041360278 = product of:
      0.16544111 = sum of:
        0.075478494 = weight(_text_:web in 4649) [ClassicSimilarity], result of:
          0.075478494 = score(doc=4649,freq=18.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.64902663 = fieldWeight in 4649, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4649)
        0.075478494 = weight(_text_:web in 4649) [ClassicSimilarity], result of:
          0.075478494 = score(doc=4649,freq=18.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.64902663 = fieldWeight in 4649, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4649)
        0.014484116 = product of:
          0.028968232 = sum of:
            0.028968232 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
              0.028968232 = score(doc=4649,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.23214069 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
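    The two web-based measures named above have simple closed forms: pointwise mutual information compares the joint probability of two terms with the product of their marginals, and the normalized Google distance is computed from page-hit counts. The sketch below computes both from invented counts; the formulas are the standard ones, while the terms and numbers are assumptions for illustration only.

      import math

      def pmi(n_xy: int, n_x: int, n_y: int, n_total: int) -> float:
          """Pointwise mutual information: log( p(x,y) / (p(x) * p(y)) )."""
          p_xy = n_xy / n_total
          p_x, p_y = n_x / n_total, n_y / n_total
          return math.log(p_xy / (p_x * p_y))

      def ngd(f_x: int, f_y: int, f_xy: int, n_index: int) -> float:
          """Normalized Google distance from hit counts f(x), f(y), f(x,y) and index size N."""
          lx, ly, lxy = math.log(f_x), math.log(f_y), math.log(f_xy)
          return (max(lx, ly) - lxy) / (math.log(n_index) - min(lx, ly))

      # Invented counts: "museum" and "painting" in an index of 10 million pages.
      print(pmi(n_xy=40_000, n_x=500_000, n_y=300_000, n_total=10_000_000))  # > 0: terms co-occur
      print(ngd(f_x=500_000, f_y=300_000, f_xy=40_000, n_index=10_000_000))  # smaller = more related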
    Date
    26.12.2011 13:40:22
    Theme
    Semantic Web
  9. Li, Z.: ¬A domain specific search engine with explicit document relations (2013) 0.04
    0.041110925 = product of:
      0.1644437 = sum of:
        0.06289875 = weight(_text_:web in 1210) [ClassicSimilarity], result of:
          0.06289875 = score(doc=1210,freq=18.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5408555 = fieldWeight in 1210, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1210)
        0.038646206 = weight(_text_:wide in 1210) [ClassicSimilarity], result of:
          0.038646206 = score(doc=1210,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.24476713 = fieldWeight in 1210, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1210)
        0.06289875 = weight(_text_:web in 1210) [ClassicSimilarity], result of:
          0.06289875 = score(doc=1210,freq=18.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5408555 = fieldWeight in 1210, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1210)
      0.25 = coord(3/12)
    
    Abstract
    The current web consists of documents that are highly heterogeneous and hard for machines to understand. The Semantic Web is a progressive movement of the World Wide Web, aiming at converting the current web of unstructured documents into a web of data. In the Semantic Web, web documents are annotated with metadata using a standardized ontology language. These annotated documents are directly processable by machines, which greatly improves their usability and usefulness. At Ericsson, similar problems occur. Massive numbers of documents are being created with well-defined structures. Although these documents concern domain-specific knowledge and can have rich relations, they are currently managed by a traditional search engine, which ignores the rich domain-specific information and presents little of it to users. Motivated by the Semantic Web, we aim to find standard ways to process these documents, extract rich domain-specific information and annotate documents with these data using formal markup languages. We propose this project to develop a domain-specific search engine for processing different documents and building explicit relations between them. This research project consists of three main focuses: examining different domain-specific documents and finding ways to extract their metadata; integrating a text search engine with an ontology server; and exploring novel ways to build relations for documents. We implement this system and demonstrate its functions. As a prototype, the system provides the required features and will be extended in the future.
    Theme
    Semantic Web
  10. Martínez-González, M.M.; Alvite-Díez, M.L.: Thesauri and Semantic Web : discussion of the evolution of thesauri toward their integration with the Semantic Web (2019) 0.04
    0.041110925 = product of:
      0.1644437 = sum of:
        0.06289875 = weight(_text_:web in 5997) [ClassicSimilarity], result of:
          0.06289875 = score(doc=5997,freq=18.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5408555 = fieldWeight in 5997, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
        0.038646206 = weight(_text_:wide in 5997) [ClassicSimilarity], result of:
          0.038646206 = score(doc=5997,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.24476713 = fieldWeight in 5997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
        0.06289875 = weight(_text_:web in 5997) [ClassicSimilarity], result of:
          0.06289875 = score(doc=5997,freq=18.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5408555 = fieldWeight in 5997, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
      0.25 = coord(3/12)
    
    Abstract
    Thesauri are Knowledge Organization Systems (KOS) that arise from the consensus of wide communities. They have been in use for many years and are regularly updated. Whereas in the past thesauri were designed for information professionals for indexing and searching, today there is a demand for conceptual vocabularies that enable inferencing by machines. The development of the Semantic Web has brought a new opportunity for thesauri, but thesauri also face the challenge of proving that they add value to it. The evolution of thesauri toward their integration with the Semantic Web is examined. Elements and structures in the thesaurus standard, ISO 25964, and SKOS (Simple Knowledge Organization System), the Semantic Web standard for representing KOS, are reviewed and compared. Moreover, the integrity rules of thesauri are contrasted with the axioms of SKOS. How SKOS has been applied to represent some real thesauri is taken into account. Three thesauri are chosen for this aim: AGROVOC, EuroVoc and the UNESCO Thesaurus. Based on the results of this comparison and analysis, the benefits that Semantic Web technologies offer to thesauri, how thesauri can contribute to the Semantic Web, and the challenges that would help to improve their integration with the Semantic Web are discussed.
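    As a small illustration of the mapping the article examines, the sketch below expresses a two-concept fragment of a thesaurus (a broader/narrower pair with a non-preferred synonym) in SKOS using rdflib. The concept scheme URI and labels are invented; real conversions such as AGROVOC, EuroVoc or the UNESCO Thesaurus are far richer, but the basic pattern of skos:Concept, skos:prefLabel, skos:altLabel and skos:broader is the same.

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/thesaurus/")   # hypothetical concept scheme

      g = Graph()
      g.bind("skos", SKOS)

      g.add((EX.informationRetrieval, RDF.type, SKOS.Concept))
      g.add((EX.informationRetrieval, SKOS.prefLabel, Literal("Information retrieval", lang="en")))

      g.add((EX.queryExpansion, RDF.type, SKOS.Concept))
      g.add((EX.queryExpansion, SKOS.prefLabel, Literal("Query expansion", lang="en")))
      g.add((EX.queryExpansion, SKOS.altLabel, Literal("Search term expansion", lang="en")))

      # Thesaurus BT/NT relations map to skos:broader / skos:narrower.
      g.add((EX.queryExpansion, SKOS.broader, EX.informationRetrieval))
      g.add((EX.informationRetrieval, SKOS.narrower, EX.queryExpansion))

      print(g.serialize(format="turtle"))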
    Theme
    Semantic Web
  11. Sah, M.; Wade, V.: Personalized concept-based search on the Linked Open Data (2015) 0.04
    0.040425193 = product of:
      0.121275574 = sum of:
        0.033546 = weight(_text_:web in 2511) [ClassicSimilarity], result of:
          0.033546 = score(doc=2511,freq=8.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.2884563 = fieldWeight in 2511, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2511)
        0.02326661 = weight(_text_:world in 2511) [ClassicSimilarity], result of:
          0.02326661 = score(doc=2511,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.16986786 = fieldWeight in 2511, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=2511)
        0.030916965 = weight(_text_:wide in 2511) [ClassicSimilarity], result of:
          0.030916965 = score(doc=2511,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.1958137 = fieldWeight in 2511, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=2511)
        0.033546 = weight(_text_:web in 2511) [ClassicSimilarity], result of:
          0.033546 = score(doc=2511,freq=8.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.2884563 = fieldWeight in 2511, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2511)
      0.33333334 = coord(4/12)
    
    Abstract
    In this paper, we present a novel personalized concept-based search mechanism for the Web of Data based on results categorization. The innovation of the paper comes from combining novel categorization and personalization techniques, and using categorization for providing personalization. In our approach, search results (Linked Open Data resources) are dynamically categorized into Upper Mapping and Binding Exchange Layer (UMBEL) concepts using a novel fuzzy retrieval model. Then, results with the same concepts are grouped together to form categories, which we call concept lenses. Such categorization enables concept-based browsing of the retrieved results aligned to users' intent or interests. When the user selects a concept lens for exploration, results are immediately personalized. In particular, all concept lenses are personally re-organized according to their similarity to the selected lens. Within the selected concept lens, more relevant results are included using results re-ranking and query expansion, and relevant concept lenses are suggested to support results exploration. This allows dynamic adaptation of results to the user's local choices. We also support interactive personalization; when the user clicks on a result, within the interacted lens, relevant lenses and results are included using results re-ranking and query expansion. Extensive evaluations were performed to assess our approach: (i) Performance of our fuzzy-based categorization approach was evaluated on a particular benchmark (~10,000 mappings). The evaluations showed that we can achieve highly acceptable categorization accuracy and perform better than the vector space model. (ii) Personalized search efficacy was assessed using a user study with 32 participants in a tourist domain. The results revealed that our approach performed significantly better than a non-adaptive baseline search. (iii) Dynamic personalization performance was evaluated, which illustrated that our personalization approach is scalable. (iv) Finally, we compared our system with the existing LOD search engines, which showed that our approach is unique.
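    The central personalization step described above, re-ordering concept lenses by their similarity to the lens the user selects, can be sketched in a few lines. The code below groups results by concept and ranks the remaining lenses by cosine similarity between simple term-count vectors; it is a generic illustration of the mechanism, not the authors' fuzzy UMBEL-based model, and the sample results are invented.

      from collections import Counter, defaultdict
      from math import sqrt

      # Toy results: (title, concept) pairs standing in for categorized LOD resources.
      results = [
          ("Louvre opening hours", "Museum"),
          ("Rijksmuseum collection", "Museum"),
          ("Van Gogh biography", "Painter"),
          ("Hotel near the city centre", "Accommodation"),
      ]

      # Group results into concept lenses.
      lenses = defaultdict(list)
      for title, concept in results:
          lenses[concept].append(title)

      def vector(titles):
          """Bag-of-words term-count vector for a lens."""
          return Counter(word.lower() for t in titles for word in t.split())

      def cosine(a: Counter, b: Counter) -> float:
          dot = sum(a[w] * b[w] for w in a)
          norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      # The user selects the "Museum" lens; other lenses are re-ordered by similarity to it.
      selected = vector(lenses["Museum"])
      ranking = sorted((c for c in lenses if c != "Museum"),
                       key=lambda c: cosine(selected, vector(lenses[c])),
                       reverse=True)
      print(ranking)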
    Source
    Web Semantics: Science, Services and Agents on the World Wide Web. 35(2015) [in press]
    Theme
    Semantic Web
  12. Auer, S.; Lehmann, J.: Making the Web a data washing machine : creating knowledge out of interlinked data (2010) 0.04
    0.040421367 = product of:
      0.16168547 = sum of:
        0.0663011 = weight(_text_:web in 112) [ClassicSimilarity], result of:
          0.0663011 = score(doc=112,freq=20.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5701118 = fieldWeight in 112, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=112)
        0.029083263 = weight(_text_:world in 112) [ClassicSimilarity], result of:
          0.029083263 = score(doc=112,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.21233483 = fieldWeight in 112, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=112)
        0.0663011 = weight(_text_:web in 112) [ClassicSimilarity], result of:
          0.0663011 = score(doc=112,freq=20.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5701118 = fieldWeight in 112, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=112)
      0.25 = coord(3/12)
    
    Abstract
    Over the past 3 years, the semantic web activity has gained momentum with the widespread publishing of structured data as RDF. The Linked Data paradigm has therefore evolved from a practical research idea into a very promising candidate for addressing one of the biggest challenges in the area of the Semantic Web vision: the exploitation of the Web as a platform for data and information integration. To translate this initial success into a world-scale reality, a number of research challenges need to be addressed: the performance gap between relational and RDF data management has to be closed, coherence and quality of data published on the Web have to be improved, provenance and trust on the Linked Data Web must be established and generally the entrance barrier for data publishers and users has to be lowered. In this vision statement we discuss these challenges and argue that research approaches tackling these challenges should be integrated into a mutual refinement cycle. We also present two crucial use-cases for the widespread adoption of linked data.
    Content
    Cf.: http://www.semantic-web-journal.net/content/new-submission-making-web-data-washing-machine-creating-knowledge-out-interlinked-data http://www.semantic-web-journal.net/sites/default/files/swj24_0.pdf.
    Source
    Semantic Web journal. 0(2010), no.1
    Theme
    Semantic Web
  13. Glimm, B.; Hogan, A.; Krötzsch, M.; Polleres, A.: OWL: Yet to arrive on the Web of Data? (2012) 0.04
    0.03953895 = product of:
      0.1581558 = sum of:
        0.06162794 = weight(_text_:web in 4798) [ClassicSimilarity], result of:
          0.06162794 = score(doc=4798,freq=12.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5299281 = fieldWeight in 4798, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4798)
        0.034899916 = weight(_text_:world in 4798) [ClassicSimilarity], result of:
          0.034899916 = score(doc=4798,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.25480178 = fieldWeight in 4798, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=4798)
        0.06162794 = weight(_text_:web in 4798) [ClassicSimilarity], result of:
          0.06162794 = score(doc=4798,freq=12.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5299281 = fieldWeight in 4798, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4798)
      0.25 = coord(3/12)
    
    Abstract
    Seven years on from OWL becoming a W3C recommendation, and two years on from the more recent OWL 2 W3C recommendation, OWL has still experienced only patchy uptake on the Web. Although certain OWL features (like owl:sameAs) are very popular, other features of OWL are largely neglected by publishers in the Linked Data world. This may suggest that, despite the promise of easy implementations and the tractable profiles proposed in OWL's second version, there is still no "right" standard fragment for the Linked Data community. In this paper, we (1) analyse uptake of OWL on the Web of Data, (2) gain insights into the OWL fragment that is actually used/usable on the Web, where we arrive at the conclusion that this fragment is likely to be a simplified profile based on OWL RL, and (3) propose and discuss such a new fragment, which we call OWL LD (for Linked Data).
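    owl:sameAs, the OWL feature the authors find most widely deployed, can be exploited even without a full reasoner: its symmetric-transitive closure partitions resources into identity clusters. The sketch below materializes that closure over a small rdflib graph with a union-find; it is a hand-rolled illustration of one OWL RL-style rule, not the OWL LD fragment the paper proposes, and the linked resources are invented.

      from rdflib import Graph, Namespace
      from rdflib.namespace import OWL

      EX = Namespace("http://example.org/")   # invented resources for the illustration

      g = Graph()
      g.add((EX.dbpedia_Berlin, OWL.sameAs, EX.geonames_Berlin))
      g.add((EX.geonames_Berlin, OWL.sameAs, EX.wikidata_Q64))

      # Union-find over owl:sameAs links yields the symmetric-transitive identity clusters.
      parent = {}

      def find(x):
          parent.setdefault(x, x)
          while parent[x] != x:
              parent[x] = parent[parent[x]]   # path halving
              x = parent[x]
          return x

      def union(x, y):
          parent[find(x)] = find(y)

      for s, _, o in g.triples((None, OWL.sameAs, None)):
          union(s, o)

      clusters = {}
      for node in parent:
          clusters.setdefault(find(node), set()).add(node)
      print([sorted(str(n) for n in c) for c in clusters.values()])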
    Content
    Contribution to the workshop: Linked Data on the Web (LDOW2012), April 16, 2012, Lyon, France; cf.: http://events.linkeddata.org/ldow2012/.
    Theme
    Semantic Web
  14. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.04
    0.039243247 = product of:
      0.11772974 = sum of:
        0.0419325 = weight(_text_:web in 4232) [ClassicSimilarity], result of:
          0.0419325 = score(doc=4232,freq=32.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.36057037 = fieldWeight in 4232, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
        0.014541632 = weight(_text_:world in 4232) [ClassicSimilarity], result of:
          0.014541632 = score(doc=4232,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.10616741 = fieldWeight in 4232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
        0.019323103 = weight(_text_:wide in 4232) [ClassicSimilarity], result of:
          0.019323103 = score(doc=4232,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.122383565 = fieldWeight in 4232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
        0.0419325 = weight(_text_:web in 4232) [ClassicSimilarity], result of:
          0.0419325 = score(doc=4232,freq=32.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.36057037 = fieldWeight in 4232, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
      0.33333334 = coord(4/12)
    
    Abstract
    After the launch of the World Wide Web, it became clear that searching documents on the Web would not be trivial. Well-known engines for searching the Web, like Google, focus on keyword search in web documents. The documents are structured and indexed to ensure keywords match documents as accurately as possible. However, searching by keywords does not always suffice. It is often the case that users do not know exactly how to formulate the search query or which keywords guarantee retrieving the most relevant documents. Besides that, users often want to browse information rather than look up something specific. It turned out that there is a need for systems that enable more interactivity and facilitate the gradual refinement of search queries to explore the Web. Users expect more from the Web because the short keyword-based queries they pose during search do not suffice for all cases. On top of that, the Web is changing structurally. The Web comprises, apart from a collection of documents, more and more linked data: pieces of information structured so they can be processed by machines. The semantics applied in this way allow users to indicate their search intentions to machines exactly. This is made possible by describing data following controlled vocabularies, concept lists composed by experts and published on the Web with unique identifiers. Even so, it is still not trivial to explore data on the Web. There is a large variety of vocabularies, and various data sources use different terms to identify the same concepts.
    This PhD thesis describes how to effectively explore linked data on the Web. The main focus is on scenarios where users want to discover relationships between resources rather than find out more about something specific. Searching for a specific document or piece of information fits in the theoretical framework of information retrieval and is associated with exploratory search. Exploratory search goes beyond 'looking up something' when users are seeking more detailed understanding, further investigation or navigation of the initial search results. The ideas behind exploratory search and querying linked data merge when it comes to the way knowledge is represented and indexed by machines - how data is structured and stored for optimal searchability. Queries and information should be aligned so that searches also reveal connections between results. This implies that they take into account the same semantic entities, relevant at that moment. To realize this, we research three techniques that are evaluated one by one in an experimental set-up to assess how well they succeed in their goals. In the end, the techniques are applied to a practical use case that focuses on forming a bridge between the Web and the use of digital libraries in scientific research. Our first technique focuses on the interactive visualization of search results. Linked data resources can be brought in relation with each other at will, which leads to complex and diverse graph structures. Our technique facilitates navigation and supports a workflow that starts from a broad overview of the data, allows narrowing down to the desired level of detail, and then broadens out again. To validate the flow, two visualizations were implemented and presented to test users. The users judged the usability of the visualizations, how the visualizations fit in the workflow, and to what degree their features seemed useful for the exploration of linked data.
    Theme
    Semantic Web
  15. Borst, T.; Neubert, J.; Seiler, A.: Bibliotheken auf dem Weg in das Semantic Web : Bericht von der SWIB2010 in Köln - unterschiedliche Entwicklungsschwerpunkte (2011) 0.04
    0.038705394 = product of:
      0.11611618 = sum of:
        0.037739247 = weight(_text_:web in 4532) [ClassicSimilarity], result of:
          0.037739247 = score(doc=4532,freq=18.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.32451332 = fieldWeight in 4532, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4532)
        0.017449958 = weight(_text_:world in 4532) [ClassicSimilarity], result of:
          0.017449958 = score(doc=4532,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.12740089 = fieldWeight in 4532, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4532)
        0.023187723 = weight(_text_:wide in 4532) [ClassicSimilarity], result of:
          0.023187723 = score(doc=4532,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.14686027 = fieldWeight in 4532, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4532)
        0.037739247 = weight(_text_:web in 4532) [ClassicSimilarity], result of:
          0.037739247 = score(doc=4532,freq=18.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.32451332 = fieldWeight in 4532, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4532)
      0.33333334 = coord(4/12)
    
    Abstract
    For the second time after 2009, the conference "Semantic Web in Bibliotheken (SWIB)" took place in Cologne. Over two days, 120 participants from nine nations heard reports on the Semantic Web activities of German and international institutions as well as of the W3C and the research community. As in the previous year, the conference was organized by the Hochschulbibliothekszentrum des Landes Nordrhein-Westfalen (hbz) and the Deutsche Zentralbibliothek für Wirtschaftswissenschaften (ZBW) - Leibniz-Informationszentrum Wirtschaft, and was already fully booked in advance.
    Content
    "Gegenüber der vorjährigen Veranstaltung war ein deutlicher Fortschritt bei der Entwicklung hin zu einem »Web of Linked Open Data (LOD)« zu erkennen. VertreterInnen aus Einrichtungen wie der Deutschen, der Französischen und der Ungarischen Nationalbibliothek, der Prager Wirtschaftsuniversität, der UB Mannheim, des hbz sowie der ZBW berichteten über ihre Ansätze, Entwicklungen und bereits vorhandenen Angebote und Infrastrukturen. Stand der Entwicklung Im Vordergrund stand dabei zum einen die Transformation und Publikation von semantisch angereicherten Bibliotheksdaten aus den vorhandenen Datenquellen in ein maschinenlesbares RDF-Format zwecks Weiterverarbeitung und Verknüpfung mit anderen Datenbeständen. Auf diese Weise betreibt etwa die Deutsche Nationalbibliothek (DNB) die Veröffentlichung und dauerhafte Addressierbarkeit der Gemeinsamen Normdateien (GND) als in der Erprobungsphase befindlichen LOD-Service. Im Verbund mit anderen internationalen »Gedächtnisinstitutionen« soll dieser Service künftig eine verlässliche Infrastruktur zur Identifikation und Einbindung von Personen, Organisationen oder Konzepten in die eigenen Datenbestände bieten.
    Einen zweiten Entwicklungsschwerpunkt bildeten Ansätze, die vorhandenen Bibliotheksdaten auf der Basis von derzeit in der Standardisierung befindlichen Beschreibungssprachen wie »Resource Description and Access« (RDA) und Modellierungen wie »Functional Requirements for Bibliograhical Records« (FRBR) so aufzubereiten, dass die Daten durch externe Softwaresysteme allgemein interpretierbar und zum Beispiel in modernen Katalogsystemen navigierbar gemacht werden können. Aufbauend auf den zu Beginn des zweiten Tages vorgetragenen Überlegungen der US-amerikanischen Bibliotheksberaterin Karen Coyle schilderten Vertreter der DNB sowie Stefan Gradmann von der Europeana, wie grundlegende Unterscheidungen zwischen einem Werk (zum Beispiel einem Roman), seiner medialen Erscheinungsform (zum Beispiel als Hörbuch) sowie seiner Manifestation (zum Beispiel als CD-ROM) mithilfe von RDA-Elementen ausgedrückt werden können. Aus der Sicht des World Wide Web Konsortiums (W3C) berichtete Antoine Isaac über die Gründung und Arbeit einer »Library Linked Data Incubator Group«, die sich mit der Inventarisierung typischer Nutzungsfälle und »best practices« für LOD-basierte Anwendungen befasst. Sören Auer von der Universität Leipzig bot einen Überblick zu innovativen Forschungsansätzen, die den von ihm so genannten »Lebenszyklus« von LOD an verschiedenen Stellen unterstützen sollen. Angesprochen wurden dabei verschiedene Projekte unter anderem zur Datenhaltung sowie zur automatischen Verlinkung von LOD.
    Rechtliche Aspekte Dass das Semantische Web speziell in seiner Ausprägung als LOD nicht nur Entwickler und Projektleiter beschäftigt, zeigte sich in zwei weiteren Vorträgen. So erläuterte Stefanie Grunow von der ZBW die rechtlichen Rahmenbedingungen bei der Veröffentlichung und Nutzung von LOD, insbesondere wenn es sich um Datenbankinhalte aus verschiedenen Quellen handelt. Angesichts der durchaus gewünschten, teilweise aber nicht antizipierbaren, Nachnutzung von LOD durch Dritte sei im Einzelfall zu prüfen, ob und welche Lizenz für einen Herausgeber die geeignete ist. Aus der Sicht eines Hochschullehrers reflektierte Günther Neher von der FH Potsdam, wie das Semantische Web und LODTechnologien in der informationswissenschaftlichen Ausbildung an seiner Einrichtung zukünftig berücksichtigt werden könnten.
    Perspektiven Welche Potenziale das Semantic Web schließlich für Wissenschaft und Forschung bietet, zeigte sich in dem Vortrag von Klaus Tochtermann, Direktor der ZBW. Ausgehend von klassischen Wissensmanagement-Prozessen wie dem Recherchieren, der Bereitstellung, der Organisation und der Erschließung von Fachinformationen wurden hier punktuell die Ansätze für semantische Technologien angesprochen. Ein auch für Bibliotheken typischer Anwendungsfall sei etwa die Erweiterung der syntaktisch-basierten Suche um Konzepte aus Fachvokabularen, mit der die ursprüngliche Sucheingabe ergänzt wird. Auf diese Weise können Forschende, so Tochtermann, wesentlich mehr und gleichzeitig auch relevante Dokumente erschließen. Anette Seiler (hbz), die die Konferenz moderiert hatte, zog abschließend ein positives Fazit des - auch in den Konferenzpausen - sehr lebendigen Austauschs der hiesigen und internationalen Bibliotheksszene. Auch das von den Teilnehmern spontan erbetene Feedback fiel außerordentlich erfreulich aus. Per Handzeichen sprach man sich fast einhellig für eine Fortführung der SWIB aus - für Anette Seiler ein Anlass mehr anzukündigen, dass die SWIB 2011 erneut stattfinden wird, diesmal in Hamburg und voraussichtlich wieder Ende November."
    Theme
    Semantic Web
  16. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.04
    0.035685804 = product of:
      0.14274321 = sum of:
        0.05930151 = weight(_text_:web in 2090) [ClassicSimilarity], result of:
          0.05930151 = score(doc=2090,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5099235 = fieldWeight in 2090, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=2090)
        0.05930151 = weight(_text_:web in 2090) [ClassicSimilarity], result of:
          0.05930151 = score(doc=2090,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5099235 = fieldWeight in 2090, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=2090)
        0.024140194 = product of:
          0.048280388 = sum of:
            0.048280388 = weight(_text_:22 in 2090) [ClassicSimilarity], result of:
              0.048280388 = score(doc=2090,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.38690117 = fieldWeight in 2090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2090)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
    Theme
    Semantic Web
  17. Cahier, J.-P.; Ma, X.; Zaher, L'H.: Document and item-based modeling : a hybrid method for a socio-semantic web (2010) 0.04
    0.03559937 = product of:
      0.14239748 = sum of:
        0.050840456 = weight(_text_:web in 62) [ClassicSimilarity], result of:
          0.050840456 = score(doc=62,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.43716836 = fieldWeight in 62, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=62)
        0.040716566 = weight(_text_:world in 62) [ClassicSimilarity], result of:
          0.040716566 = score(doc=62,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.29726875 = fieldWeight in 62, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=62)
        0.050840456 = weight(_text_:web in 62) [ClassicSimilarity], result of:
          0.050840456 = score(doc=62,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.43716836 = fieldWeight in 62, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=62)
      0.25 = coord(3/12)
    
    Abstract
    The paper discusses the challenges of categorising documents and "items of the world" to promote knowledge sharing in large communities of interest. We present the DOCMA method (Document and Item-based Model for Action), dedicated to end-users who have minimal or no knowledge of information science. Community members can elicit, structure, and index business items stemming from their queries, including projects, actors, products, places of interest, and geo-situated objects. This hybrid method has been applied in a collaborative Web portal in the field of sustainability over the past two years.
    Theme
    Semantic Web
  18. Ilik, V.: Cataloger makeover : creating non-MARC name authorities (2015) 0.04
    0.035151012 = product of:
      0.14060405 = sum of:
        0.04151106 = weight(_text_:web in 1884) [ClassicSimilarity], result of:
          0.04151106 = score(doc=1884,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.35694647 = fieldWeight in 1884, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1884)
        0.057581924 = weight(_text_:world in 1884) [ClassicSimilarity], result of:
          0.057581924 = score(doc=1884,freq=4.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.4204015 = fieldWeight in 1884, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1884)
        0.04151106 = weight(_text_:web in 1884) [ClassicSimilarity], result of:
          0.04151106 = score(doc=1884,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.35694647 = fieldWeight in 1884, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1884)
      0.25 = coord(3/12)
    
    Abstract
    This article shares a vision of the enterprise of cataloging and the role of catalogers and metadata librarians in the twenty-first century. The revolutionary opportunities now presented by Semantic Web technologies liberate catalogers from their historically analog-based static world, re-conceptualize it, and transform it into a world of high dimensionality and fluidity. By presenting illustrative examples of innovative metadata creation and manipulation, such as non-MARC name authority records, we seek to contribute to the library mission with projects that enable discovery, development, communication, learning, and creativity, and that hold promise to exceed users' expectations.
    Theme
    Semantic Web
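    One way to picture a non-MARC name authority of the kind the abstract mentions is as a small linked-data description of a person. The sketch below is a hypothetical illustration using Python's rdflib, the FOAF vocabulary, and made-up example.org and VIAF identifiers; it is not an example taken from the article.

      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import FOAF, OWL, RDF

      AUTH = Namespace("http://example.org/authorities/")    # hypothetical authority namespace
      g = Graph()
      person = AUTH["person/0001"]
      g.add((person, RDF.type, FOAF.Person))
      g.add((person, FOAF.name, Literal("Doe, Jane")))        # preferred form of the name (placeholder)
      # Links to external identifiers take the place of MARC cross-references.
      g.add((person, OWL.sameAs, URIRef("http://viaf.org/viaf/0000000000")))  # placeholder URI
      print(g.serialize(format="turtle"))                     # the authority as Turtle, ready for the Web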
  19. Rousset, M.-C.; Atencia, M.; David, J.; Jouanot, F.; Ulliana, F.; Palombi, O.: Datalog revisited for reasoning in linked data (2017) 0.03
    0.032949124 = product of:
      0.1317965 = sum of:
        0.051356614 = weight(_text_:web in 3936) [ClassicSimilarity], result of:
          0.051356614 = score(doc=3936,freq=12.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.4416067 = fieldWeight in 3936, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3936)
        0.029083263 = weight(_text_:world in 3936) [ClassicSimilarity], result of:
          0.029083263 = score(doc=3936,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.21233483 = fieldWeight in 3936, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3936)
        0.051356614 = weight(_text_:web in 3936) [ClassicSimilarity], result of:
          0.051356614 = score(doc=3936,freq=12.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.4416067 = fieldWeight in 3936, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3936)
      0.25 = coord(3/12)
    
    Abstract
    Linked Data provides access to huge, continuously growing amounts of open data and ontologies in RDF format that describe entities, links, and properties of those entities. Equipping Linked Data with inference paves the way to making the Semantic Web a reality. In this survey, we describe a unifying framework for RDF ontologies and databases that we call deductive RDF triplestores. It consists of equipping RDF triplestores with Datalog inference rules. This rule language captures in a uniform manner OWL constraints that are useful in practice, such as property transitivity or symmetry, as well as domain-specific rules with practical relevance for users in many domains of interest. The expressivity and genericity of this framework are illustrated for modeling Linked Data applications and for developing inference algorithms. In particular, we show how it allows the problem of data linkage in Linked Data to be modeled as a reasoning problem over possibly decentralized data. We also explain how it makes it possible to efficiently extract expressive modules from Semantic Web ontologies and databases with formal guarantees, while effectively controlling their succinctness. Experiments conducted on real-world datasets have demonstrated the feasibility of this approach and its usefulness in practice for data integration and information extraction.
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: G. Ianni et al.
    Theme
    Semantic Web
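    The rule-based inference described in the abstract above can be pictured with a toy example: the triples of a store are repeatedly saturated with Datalog-style rules until nothing new can be derived. The sketch below only illustrates that general idea for two hypothetical properties (one transitive, one symmetric); it is not the framework or the algorithms of the paper.

      def saturate(triples, transitive=frozenset(), symmetric=frozenset()):
          """Naive forward chaining over a set of (subject, predicate, object) triples."""
          facts = set(triples)
          changed = True
          while changed:
              changed = False
              new = set()
              for s, p, o in facts:
                  if p in symmetric and (o, p, s) not in facts:
                      new.add((o, p, s))                       # symmetry rule
                  if p in transitive:
                      for s2, p2, o2 in facts:
                          if p2 == p and s2 == o and (s, p, o2) not in facts:
                              new.add((s, p, o2))              # transitivity rule
              if new:
                  facts |= new
                  changed = True
          return facts

      triples = {("a", "partOf", "b"), ("b", "partOf", "c"), ("x", "linkedTo", "y")}
      inferred = saturate(triples, transitive={"partOf"}, symmetric={"linkedTo"})
      # ("a", "partOf", "c") and ("y", "linkedTo", "x") are now entailed.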
  20. Aslam, S.; Sonkar, S.K.: Semantic Web : an overview (2019) 0.03
    0.031627472 = product of:
      0.18976483 = sum of:
        0.094882414 = weight(_text_:web in 54) [ClassicSimilarity], result of:
          0.094882414 = score(doc=54,freq=16.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.8158776 = fieldWeight in 54, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=54)
        0.094882414 = weight(_text_:web in 54) [ClassicSimilarity], result of:
          0.094882414 = score(doc=54,freq=16.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.8158776 = fieldWeight in 54, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=54)
      0.16666667 = coord(2/12)
    
    Abstract
    This paper presents the Semantic Web, the writing of web content, web technologies, the goals of the Semantic Web, and the requirements for the expansion of Web 3.0. It also describes the different components of the Semantic Web, such as HTTP, HTML, XML, XML Schema, URI, RDF, taxonomies, and OWL. Finally, it discusses how Semantic Web technologies can support library functions and make the best use of library collections in order to provide valuable information services.
    Theme
    Semantic Web
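    Of the components listed in the abstract, RDF and URIs are the easiest to show in a few lines: resources are identified by URIs and described with subject-predicate-object triples that can be serialized in Turtle or XML. The sketch below is a generic illustration using Python's rdflib and a hypothetical example.org namespace, not an example from the paper.

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, RDFS

      EX = Namespace("http://example.org/")                    # hypothetical namespace
      g = Graph()
      g.add((EX.item54, RDF.type, EX.Article))                 # one triple: subject, predicate, object
      g.add((EX.item54, RDFS.label, Literal("Semantic Web : an overview")))
      g.add((EX.item54, EX.theme, EX.SemanticWeb))             # URIs identify resources and properties alike
      print(g.serialize(format="turtle"))                      # the same triples as Turtle; format="xml" gives RDF/XML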

Languages

  • e 116
  • d 32
  • f 1

Types

  • a 93
  • m 38
  • el 28
  • s 14
  • x 7
  • r 2

Subjects