Search (9 results, page 1 of 1)

  • × theme_ss:"Semantic Web"
  • × type_ss:"el"
  • × year_i:[2000 TO 2010}
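The three active facets above correspond to Solr filter queries. The mixed brackets in `year_i:[2000 TO 2010}` are deliberate Solr range syntax, not a typo: `[` makes the lower bound inclusive and `}` makes the upper bound exclusive, i.e. 2000 <= year < 2010. A minimal sketch of how the request parameters might be assembled (the parameter assembly is illustrative; only the three filter strings are taken from the page):

```python
# Reconstruct the fq (filter query) parameters behind the facets shown above.
# In Solr range syntax, [ / ] are inclusive bounds and { / } are exclusive,
# so year_i:[2000 TO 2010} selects 2000 <= year_i < 2010.
filter_queries = [
    'theme_ss:"Semantic Web"',
    'type_ss:"el"',
    'year_i:[2000 TO 2010}',
]

# A real request would URL-encode each value; kept plain here for readability.
params = "&".join("fq=" + fq for fq in filter_queries)
print(params)
```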
  1. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.02
    0.02361377 = product of:
      0.04722754 = sum of:
        0.04722754 = product of:
          0.09445508 = sum of:
            0.09445508 = weight(_text_:22 in 4643) [ClassicSimilarity], result of:
              0.09445508 = score(doc=4643,freq=2.0), product of:
                0.17438023 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04979689 = queryNorm
                0.5416616 = fieldWeight in 4643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4643)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.2007 15:41:14
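The explain tree above can be reproduced with the standard Lucene ClassicSimilarity (TF-IDF) formulas: idf = 1 + ln(maxDocs / (docFreq + 1)), tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and the final score is their product scaled by the coord factors. A minimal sketch, plugging in the numbers from result 1 (term "22" in doc 4643); small deviations in the last digits are expected because Lucene computes in 32-bit floats:

```python
import math

# Inputs copied from the explain tree for result 1 (doc 4643, term "22").
doc_freq, max_docs = 3622, 44218
query_norm = 0.04979689
field_norm = 0.109375   # per-field length norm * boost, stored at index time
freq = 2.0              # the term occurs twice in the field

# ClassicSimilarity formulas.
idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~3.5018296
tf = math.sqrt(freq)                              # ~1.4142135
query_weight = idf * query_norm                   # ~0.17438023
field_weight = tf * idf * field_norm              # ~0.5416616
raw_score = query_weight * field_weight           # ~0.09445508

# Two coord(1/2) factors: at each level only 1 of 2 optional clauses matched.
final_score = raw_score * 0.5 * 0.5               # ~0.02361377
print(final_score)
```

Each intermediate value lines up with a line of the explain output, which is a convenient way to sanity-check unfamiliar explain trees.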
  2. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.02
    0.020240374 = product of:
      0.040480748 = sum of:
        0.040480748 = product of:
          0.080961496 = sum of:
            0.080961496 = weight(_text_:22 in 6048) [ClassicSimilarity], result of:
              0.080961496 = score(doc=6048,freq=2.0), product of:
                0.17438023 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04979689 = queryNorm
                0.46428138 = fieldWeight in 6048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6048)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.2007 15:41:14
  3. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.02
    0.020240374 = product of:
      0.040480748 = sum of:
        0.040480748 = product of:
          0.080961496 = sum of:
            0.080961496 = weight(_text_:22 in 100) [ClassicSimilarity], result of:
              0.080961496 = score(doc=100,freq=2.0), product of:
                0.17438023 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04979689 = queryNorm
                0.46428138 = fieldWeight in 100, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=100)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.2007 15:41:14
  4. OWL Web Ontology Language Test Cases (2004) 0.01
    0.013493583 = product of:
      0.026987165 = sum of:
        0.026987165 = product of:
          0.05397433 = sum of:
            0.05397433 = weight(_text_:22 in 4685) [ClassicSimilarity], result of:
              0.05397433 = score(doc=4685,freq=2.0), product of:
                0.17438023 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04979689 = queryNorm
                0.30952093 = fieldWeight in 4685, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4685)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    14. 8.2011 13:33:22
  5. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.01
    0.011806885 = product of:
      0.02361377 = sum of:
        0.02361377 = product of:
          0.04722754 = sum of:
            0.04722754 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
              0.04722754 = score(doc=759,freq=2.0), product of:
                0.17438023 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04979689 = queryNorm
                0.2708308 = fieldWeight in 759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=759)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    11. 5.2013 19:22:18
  6. Scheir, P.; Pammer, V.; Lindstaedt, S.N.: Information retrieval on the Semantic Web : does it exist? (2007) 0.01
    0.010005045 = product of:
      0.02001009 = sum of:
        0.02001009 = product of:
          0.08004036 = sum of:
            0.08004036 = weight(_text_:authors in 4329) [ClassicSimilarity], result of:
              0.08004036 = score(doc=4329,freq=2.0), product of:
                0.22701477 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.04979689 = queryNorm
                0.35257778 = fieldWeight in 4329, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4329)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    Plenty of contemporary search approaches are associated with the area of the Semantic Web. But which of them qualify as information retrieval for the Semantic Web? Do such approaches exist? To answer these questions we take a look at the nature of the Semantic Web and the Semantic Desktop and at definitions of information and data retrieval. We survey current approaches referred to by their authors as information retrieval for the Semantic Web or that use Semantic Web technology for search.
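The explain tree for this hit differs from the earlier ones in two ways: the scored term is "authors" (matched against the abstract rather than a date field), and the inner coord factor is 1/4 instead of 1/2, because only one of four optional clauses at that level matched. A minimal sketch of the same arithmetic with this hit's numbers, again assuming the standard ClassicSimilarity formulas:

```python
import math

# Inputs copied from the explain tree for result 6 (doc 4329, term "authors").
doc_freq, max_docs = 1258, 44218
query_norm = 0.04979689
field_norm = 0.0546875
freq = 2.0

idf = 1.0 + math.log(max_docs / (doc_freq + 1))          # ~4.558814
tf = math.sqrt(freq)
raw = (idf * query_norm) * (tf * idf * field_norm)       # ~0.08004036

# coord(1/4): 1 of 4 inner clauses matched; outer coord(1/2) as before.
final = raw * 0.25 * 0.5                                 # ~0.010005045
print(final)
```

Note how the rarer term ("authors", docFreq=1258) gets a higher idf than "22" (docFreq=3622), yet the stricter coord penalty still pushes this hit below the date-matching results.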
  7. Auer, S.; Bizer, C.; Kobilarov, G.; Lehmann, J.; Cyganiak, R.; Ives, Z.: DBpedia: a nucleus for a Web of open data (2007) 0.01
    0.008575753 = product of:
      0.017151507 = sum of:
        0.017151507 = product of:
          0.06860603 = sum of:
            0.06860603 = weight(_text_:authors in 4260) [ClassicSimilarity], result of:
              0.06860603 = score(doc=4260,freq=2.0), product of:
                0.22701477 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.04979689 = queryNorm
                0.30220953 = fieldWeight in 4260, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4260)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human and machine consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.
  8. Schmitz-Esser, W.; Sigel, A.: Introducing terminology-based ontologies : Papers and Materials presented by the authors at the workshop "Introducing Terminology-based Ontologies" (Poli/Schmitz-Esser/Sigel) at the 9th International Conference of the International Society for Knowledge Organization (ISKO), Vienna, Austria, July 6th, 2006 (2006) 0.01
    0.008575753 = product of:
      0.017151507 = sum of:
        0.017151507 = product of:
          0.06860603 = sum of:
            0.06860603 = weight(_text_:authors in 1285) [ClassicSimilarity], result of:
              0.06860603 = score(doc=1285,freq=2.0), product of:
                0.22701477 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.04979689 = queryNorm
                0.30220953 = fieldWeight in 1285, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1285)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
  9. Auer, S.; Lehmann, J.: What have Innsbruck and Leipzig in common? : extracting semantics from Wiki content (2007) 0.01
    0.008575753 = product of:
      0.017151507 = sum of:
        0.017151507 = product of:
          0.06860603 = sum of:
            0.06860603 = weight(_text_:authors in 2481) [ClassicSimilarity], result of:
              0.06860603 = score(doc=2481,freq=2.0), product of:
                0.22701477 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.04979689 = queryNorm
                0.30220953 = fieldWeight in 2481, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2481)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    Wikis are established means for the collaborative authoring, versioning and publishing of textual articles. The Wikipedia project, for example, succeeded in creating by far the largest encyclopedia just on the basis of a wiki. Recently, several approaches have been proposed for extending wikis to allow the creation of structured and semantically enriched content. However, the means for creating semantically enriched structured content are already available and are, although unconsciously, even used by Wikipedia authors. In this article, we present a method for revealing this structured content by extracting information from template instances. We suggest ways to efficiently query the vast amount of extracted information (e.g. more than 8 million RDF statements for the English Wikipedia version alone), leading to astonishing query answering possibilities (such as for the title question). We analyze the quality of the extracted content, and propose strategies for quality improvements with just minor modifications of the wiki systems currently in use.