Search (4 results, page 1 of 1)

  • author_ss:"Auer, S."
  • language_ss:"e"
  • theme_ss:"Semantic Web"
  1. Auer, S.; Lehmann, J.: Making the Web a data washing machine : creating knowledge out of interlinked data (2010) 0.00
    Relevance score: 0.0026649784 (ClassicSimilarity weight of _text_:a)
    
    Abstract
    Over the past 3 years, the semantic web activity has gained momentum with the widespread publishing of structured data as RDF. The Linked Data paradigm has therefore evolved from a practical research idea into a very promising candidate for addressing one of the biggest challenges in the area of the Semantic Web vision: the exploitation of the Web as a platform for data and information integration. To translate this initial success into a world-scale reality, a number of research challenges need to be addressed: the performance gap between relational and RDF data management has to be closed, coherence and quality of data published on the Web have to be improved, provenance and trust on the Linked Data Web must be established, and generally the entrance barrier for data publishers and users has to be lowered. In this vision statement we discuss these challenges and argue that research approaches tackling these challenges should be integrated into a mutual refinement cycle. We also present two crucial use cases for the widespread adoption of Linked Data.
    Type
    a
  2. Auer, S.; Bizer, C.; Kobilarov, G.; Lehmann, J.; Cyganiak, R.; Ives, Z.: DBpedia: a nucleus for a Web of open data (2007) 0.00
    Relevance score: 0.0025282204 (ClassicSimilarity weight of _text_:a)
    
    Abstract
    DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human and machine consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.
    Type
    a
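
    The abstract above mentions asking sophisticated queries against datasets derived from Wikipedia. The sketch below shows what such a query could look like when run from Python against the public DBpedia SPARQL endpoint; the endpoint URL and the ontology terms used (dbo:City, dbo:country, dbo:populationTotal) follow current DBpedia conventions and are assumptions for illustration, not details taken from the paper.

      # Minimal sketch: ask the public DBpedia SPARQL endpoint for German cities
      # with more than one million inhabitants. Ontology terms are assumed (see above).
      import requests

      QUERY = """
      PREFIX dbo: <http://dbpedia.org/ontology/>
      PREFIX dbr: <http://dbpedia.org/resource/>
      SELECT ?city ?population WHERE {
        ?city a dbo:City ;
              dbo:country dbr:Germany ;
              dbo:populationTotal ?population .
        FILTER (?population > 1000000)
      } ORDER BY DESC(?population)
      """

      resp = requests.get(
          "https://dbpedia.org/sparql",
          params={"query": QUERY, "format": "application/sparql-results+json"},
          timeout=30,
      )
      resp.raise_for_status()
      # Each binding row holds the entity URI and its population value.
      for row in resp.json()["results"]["bindings"]:
          print(row["city"]["value"], row["population"]["value"])
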
  3. Bizer, C.; Lehmann, J.; Kobilarov, G.; Auer, S.; Becker, C.; Cyganiak, R.; Hellmann, S.: DBpedia: a crystallization point for the Web of Data (2009) 0.00
    Relevance score: 0.0023079389 (ClassicSimilarity weight of _text_:a)
    
    Abstract
    The DBpedia project is a community effort to extract structured information from Wikipedia and to make this information accessible on the Web. The resulting DBpedia knowledge base currently describes over 2.6 million entities. For each of these entities, DBpedia defines a globally unique identifier that can be dereferenced over the Web into a rich RDF description of the entity, including human-readable definitions in 30 languages, relationships to other resources, classifications in four concept hierarchies, various facts as well as data-level links to other Web data sources describing the entity. Over the last year, an increasing number of data publishers have begun to set data-level links to DBpedia resources, making DBpedia a central interlinking hub for the emerging Web of data. Currently, the Web of interlinked data sources around DBpedia provides approximately 4.7 billion pieces of information and covers domains such as geographic information, people, companies, films, music, genes, drugs, books, and scientific publications. This article describes the extraction of the DBpedia knowledge base, the current status of interlinking DBpedia with other data sources on the Web, and gives an overview of applications that facilitate the Web of Data around DBpedia.
    Type
    a
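
    The abstract above explains that each DBpedia entity has a globally unique identifier that can be dereferenced over the Web into a rich RDF description. A minimal sketch of such a lookup is shown below; the example resource URI and the use of HTTP content negotiation with an RDF Accept header reflect current DBpedia practice and are assumptions for illustration rather than details quoted from the article.

      # Minimal sketch: dereference a DBpedia entity identifier and fetch its RDF
      # description via HTTP content negotiation. 'Berlin' is a hypothetical example.
      import requests

      resource = "http://dbpedia.org/resource/Berlin"
      resp = requests.get(
          resource,
          headers={"Accept": "text/turtle"},  # ask for an RDF serialization
          allow_redirects=True,               # the endpoint redirects to a data document
          timeout=30,
      )
      resp.raise_for_status()
      # Print the beginning of the Turtle description of the entity.
      for line in resp.text.splitlines()[:10]:
          print(line)
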
  4. Auer, S.; Lehmann, J.: What have Innsbruck and Leipzig in common? : extracting semantics from Wiki content (2007) 0.00
    Relevance score: 0.0015989869 (ClassicSimilarity weight of _text_:a)
    
    Abstract
    Wikis are established means for the collaborative authoring, versioning and publishing of textual articles. The Wikipedia project, for example, succeeded in creating by far the largest encyclopedia just on the basis of a wiki. Recently, several approaches have been proposed on how to extend wikis to allow the creation of structured and semantically enriched content. However, the means for creating semantically enriched structured content are already available and are, although unconsciously, used by Wikipedia authors. In this article, we present a method for revealing this structured content by extracting information from template instances. We suggest ways to efficiently query the vast amount of extracted information (e.g. more than 8 million RDF statements for the English Wikipedia version alone), leading to astonishing query answering possibilities (such as answering the question in the title). We analyze the quality of the extracted content, and propose strategies for quality improvements with only minor modifications of the wiki systems currently in use.
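
    The abstract above argues that the key-value pairs in wiki template (infobox) instances already carry structured content that can be extracted as RDF statements. The toy parser below illustrates that idea in Python; the example namespace URIs, the sample wikitext and the regular expression are simplifications invented for illustration and do not reproduce the extraction method described in the article.

      # Toy illustration: turn '| key = value' lines of a wiki template instance
      # into (subject, predicate, object) triples. Namespaces here are hypothetical.
      import re

      WIKITEXT = """{{Infobox city
      | name       = Innsbruck
      | country    = Austria
      | population = 132493
      }}"""

      def template_to_triples(page_title, wikitext):
          """Yield RDF-style triples from the template's key-value pairs."""
          subject = "http://example.org/resource/" + page_title
          for key, value in re.findall(r"^\s*\|\s*(\w+)\s*=\s*(.+)$", wikitext, re.MULTILINE):
              yield subject, "http://example.org/property/" + key, value.strip()

      for triple in template_to_triples("Innsbruck", WIKITEXT):
          print(triple)
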
