Search (151 results, page 1 of 8)

  • theme_ss:"Semantic Web"
  1. Heflin, J.; Hendler, J.: A portrait of the Semantic Web in action (2001) 0.06
    0.05529504 = sum of:
      0.036166955 = product of:
        0.14466782 = sum of:
          0.14466782 = weight(_text_:authors in 2547) [ClassicSimilarity], result of:
            0.14466782 = score(doc=2547,freq=6.0), product of:
              0.23689525 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.051964227 = queryNorm
              0.61068267 = fieldWeight in 2547, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2547)
        0.25 = coord(1/4)
      0.019128084 = product of:
        0.05738425 = sum of:
          0.05738425 = weight(_text_:j in 2547) [ClassicSimilarity], result of:
            0.05738425 = score(doc=2547,freq=4.0), product of:
              0.16511615 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.051964227 = queryNorm
              0.34753868 = fieldWeight in 2547, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2547)
        0.33333334 = coord(1/3)
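The indented block above is raw Lucene "explain" output for this entry's relevance score, computed with ClassicSimilarity (TF-IDF). As a hedged sketch, not part of the bibliographic record, the per-term weight it reports can be reproduced directly from the numbers in the tree:

```python
import math

def classic_term_score(freq, idf, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity (TF-IDF):
    queryWeight = idf * queryNorm; fieldWeight = sqrt(freq) * idf * fieldNorm;
    the term score is their product, as shown in the explain tree."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# Values copied from the tree above (term "authors" in doc 2547):
score = classic_term_score(freq=6.0, idf=4.558814,
                           query_norm=0.051964227, field_norm=0.0546875)
summand = 0.25 * score  # the coord(1/4) factor applied in the tree
```

Plugging in the values above reproduces the 0.14466782 term weight and the 0.036166955 summand shown in the tree.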
    
    Abstract
    Without semantically enriched content, the Web cannot reach its full potential. The authors discuss tools and techniques for generating and processing such content, thus setting a foundation upon which to build the Semantic Web. In particular, they put a Semantic Web language through its paces and try to answer questions about how people can use it, such as, How do authors generate semantic descriptions? How do agents discover these descriptions? How can agents integrate information from different sites? How can users query the Semantic Web? The authors present a system that addresses these questions and describe tools that help users interact with the Semantic Web. They motivate the design of their system with a specific application: semantic markup for computer science.
  2. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.05
    
    Abstract
     Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains through their relations.
     Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques have been proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. However, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances), and semantic matching of ontological relations still constitutes a major research challenge. Therefore, the authors propose a new paradigm for assessing the maximal possible matching and unification of ontological relations. Several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain, and the ontologies were then unified through these similar pairs of relations. The authors observe that these rules can also be applied to reveal contradictory relations in different ontologies.
     Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate the methodology's good potential for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies.
     Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. The methodology still has to be fully automatically implemented and tested on a larger dataset in future research.
     Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results.
     Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
  3. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.04
    
    Date
    11. 5.2013 19:22:18
  4. OWL Web Ontology Language Test Cases (2004) 0.03
    
    Date
    14. 8.2011 13:33:22
    Editor
     Carroll, J.J. and J. de Roo
  5. Mayfield, J.; Finin, T.: Information retrieval on the Semantic Web : integrating inference and retrieval 0.03
    
    Date
    12. 2.2011 17:35:22
  6. Auer, S.; Bizer, C.; Kobilarov, G.; Lehmann, J.; Cyganiak, R.; Ives, Z.: DBpedia: a nucleus for a Web of open data (2007) 0.03
    
    Abstract
     DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human and machine consumption. We describe some emerging applications from the DBpedia community and show how website authors can integrate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.
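The "sophisticated queries" mentioned in the abstract are typically expressed in SPARQL against the DBpedia endpoint. A purely illustrative sketch; the dbo: class and property names are assumptions drawn from the present-day DBpedia ontology, not from the paper:

```python
# Illustrative only: a SPARQL query of the kind the abstract describes,
# listing large cities from the structured data DBpedia extracts from
# Wikipedia. The dbo: names are assumed, not taken from the paper.
query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?city ?population WHERE {
  ?city a dbo:City ;
        dbo:populationTotal ?population .
  FILTER (?population > 1000000)
}
ORDER BY DESC(?population)
"""
```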
  7. Auer, S.; Lehmann, J.: What have Innsbruck and Leipzig in common? : extracting semantics from Wiki content (2007) 0.03
    
    Abstract
     Wikis are established means for the collaborative authoring, versioning and publishing of textual articles. The Wikipedia project, for example, succeeded in creating by far the largest encyclopedia on the basis of a wiki alone. Recently, several approaches have been proposed for extending wikis to allow the creation of structured and semantically enriched content. However, the means for creating semantically enriched structured content are already available and are, although unconsciously, even used by Wikipedia authors. In this article, we present a method for revealing this structured content by extracting information from template instances. We suggest ways to efficiently query the vast amount of extracted information (e.g. more than 8 million RDF statements for the English Wikipedia version alone), leading to astonishing query-answering possibilities (such as for the title question). We analyze the quality of the extracted content and propose strategies for quality improvements with only minor modifications of the currently used wiki systems.
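The template-instance extraction described above can be sketched in a few lines. A minimal illustration, assuming a toy infobox format; the function and regex are hypothetical, not the authors' implementation:

```python
import re

def infobox_to_triples(page_title, wikitext):
    """Turn '| key = value' lines of an infobox template instance into
    (subject, predicate, object) triples, as a toy version of the
    template-extraction idea from the article."""
    pattern = re.compile(r"\|\s*([\w ]+?)\s*=\s*([^|\n]+)")
    return [(page_title, key.strip(), value.strip())
            for key, value in pattern.findall(wikitext)]

sample = """{{Infobox city
| name = Leipzig
| country = Germany
| population = 587857
}}"""
triples = infobox_to_triples("Leipzig", sample)
```

For the sample above this yields triples such as ('Leipzig', 'population', '587857'), which map naturally onto RDF statements.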
  8. Hooland, S. van; Verborgh, R.; Wilde, M. De; Hercher, J.; Mannens, E.; Walle, R. Van de: Evaluating the success of vocabulary reconciliation for cultural heritage collections (2013) 0.03
    
    Date
    22. 3.2013 19:29:20
  9. Berners-Lee, T.; Hendler, J.; Lassila, O.: Mein Computer versteht mich (2001) 0.02
    
    Source
    Spektrum der Wissenschaft. 2001, H.8, S.42-49
  10. Fluit, C.; Horst, H. ter; Meer, J. van der; Sabou, M.; Mika, P.: Spectacle (2004) 0.02
    
    Source
     Towards the semantic Web: ontology-driven knowledge management. Eds.: J. Davies et al.
  11. Köstlbacher, A.; Maurus, J.: Semantische Wikis für das Wissensmanagement : Reif für den praktischen Einsatz? (2009) 0.02
    
    Source
    Information - Wissenschaft und Praxis. 60(2009) H.4, S.225-231
  12. Meyer, A.: wiki2rdf: Automatische Extraktion von RDF-Tripeln aus Artikelvolltexten der Wikipedia (2013) 0.02
    
    Content
     Cf.: http://www.degruyter.com/view/j/iwp.2013.64.issue-2-3/iwp-2013-0015/iwp-2013-0015.xml?format=INT.
    Source
    Information - Wissenschaft und Praxis. 64(2013) H.2/3, S.115-126
  13. Brunetti, J.M.; García, R.: User-centered design and evaluation of overview components for semantic data exploration (2014) 0.02
    
    Abstract
     Purpose - The growing volume of semantic data available on the web creates a need to handle information overload. The potential of this data is enormous, but in most cases it is very difficult for users to visualize, explore and use it, especially for lay-users without experience with Semantic Web technologies. The paper aims to discuss these issues.
     Design/methodology/approach - The Visual Information-Seeking Mantra "Overview first, zoom and filter, then details-on-demand" proposed by Shneiderman describes how data should be presented in different stages to achieve an effective exploration. The overview is the first user task when dealing with a data set; the objective is that the user is capable of getting an idea of the overall structure of the data set. Different information architecture (IA) components supporting the overview task have been developed so that they are automatically generated from semantic data, and evaluated with end-users.
     Findings - The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with Treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative User-Centered Design methodology. Evaluations with end-users have shown that users easily get used to them despite the fact that they are generated automatically from structured data, without requiring knowledge of the underlying semantic technologies, and that the different overview components complement each other, as they focus on different information search needs.
     Originality/value - Overviews of semantic data sets cannot easily be obtained with current semantic web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which are typical in the Semantic Web, because traditional IA techniques do not easily scale to large data sets. There is little or no support for obtaining overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay-users. The proposal is to reuse and adapt existing IA components to provide this overview to users, and to show that they can be generated automatically from the thesauri and ontologies that structure semantic data while providing a user experience comparable to traditional web sites.
    Date
    20. 1.2015 18:30:22
  14. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.02
    
    Content
     Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627
  15. Voss, J.: LibraryThing : Web 2.0 für Literaturfreunde und Bibliotheken (2007) 0.02
    
    Date
    22. 9.2007 10:36:23
    Source
    Mitteilungsblatt der Bibliotheken in Niedersachsen und Sachsen-Anhalt. 2007, H.137, S.12-13
  16. Shoffner, M.; Greenberg, J.; Kramer-Duffield, J.; Woodbury, D.: Web 2.0 semantic systems : collaborative learning in science (2008) 0.02
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  17. Iosif, V.; Mika, P.; Larsson, R.; Akkermans, H.: Field experimenting with Semantic Web tools in a virtual organization (2004) 0.02
    Source
    Towards the semantic Web: ontology-driven knowledge management. Eds.: J. Davies et al.
  18. Metadata and semantics research : 7th Research Conference, MTSR 2013 Thessaloniki, Greece, November 19-22, 2013. Proceedings (2013) 0.02
    Date
    17.12.2013 12:51:22
    Editor
    Greenberg, J.
  19. Antoniou, G.; Harmelen, F. van: ¬A semantic Web primer (2004) 0.02
    Footnote
    Rez. in: JASIST 57(2006) no.8, S.1132-1133 (H. Che): "The World Wide Web has been the main source of an important shift in the way people communicate with each other, get information, and conduct business. However, most of the current Web content is only suitable for human consumption. The main obstacle to providing better quality of service is that the meaning of Web content is not machine-accessible. The "Semantic Web" is envisioned by Tim Berners-Lee as a logical extension to the current Web that enables explicit representations of term meaning. It aims to bring the Web to its full potential via the exploitation of this machine-processable metadata. To fulfill this, it provides metalanguages such as RDF, OWL, DAML+OIL, and SHOE for expressing knowledge that has clear, unambiguous meanings. The first steps in weaving the Semantic Web into the current Web are successfully underway. In the forthcoming years, these efforts will remain highly focused in the research and development community. In the next phase, the Semantic Web will respond more intelligently to user queries. The first chapter opens with an excellent introduction to the Semantic Web vision. First, today's Web is introduced, and problems with some current applications like search engines are covered. Subsequently, knowledge management, business-to-consumer electronic commerce, business-to-business electronic commerce, and personal agents are used as examples to show the potential requirements for the Semantic Web. Next comes a brief description of the underpinning technologies, including metadata, ontologies, logic, and agents. The differences between the Semantic Web and Artificial Intelligence are also discussed in a later subsection. In section 1.4, the famous "layer-cake" diagram is given to show a layered view of the Semantic Web. From chapter 2 onward, the book addresses some of the most important technologies for constructing the Semantic Web. 
In chapter 2, the authors discuss XML and its related technologies such as namespaces, XPath, and XSLT. XML is a simple, very flexible text format which is often used for the exchange of a wide variety of data on the Web and elsewhere. The W3C has defined various languages on top of XML, such as RDF. Although this chapter is very well planned and written, many details are not included because of the extensiveness of the XML technologies. Many other books on XML provide more comprehensive coverage.
    The next chapter introduces the resource description framework (RDF) and RDF schema (RDFS). Unlike XML, RDF provides a foundation for expressing the semantics of data: it is a standard data model for machine-processable semantics. RDF schema offers a number of modeling primitives for organizing RDF vocabularies in typed hierarchies. In addition to RDF and RDFS, a query language for RDF, RQL, is introduced. This chapter and the next are two of the most important chapters in the book. Chapter 4 presents another language, the Web Ontology Language (OWL). Because RDFS is quite primitive as a modeling language for the Web, more powerful languages are needed. A richer language, DAML+OIL, was thus proposed as a joint endeavor of the United States and Europe. OWL takes DAML+OIL as its starting point and aims to be the standardized and broadly accepted ontology language. At the beginning of the chapter, the nontrivial relation with RDF/RDFS is discussed. Then the authors describe the various language elements of OWL in some detail. Moreover, Appendix A contains an abstract OWL syntax, which compresses OWL and makes it much easier to read. Chapter 5 covers both monotonic and nonmonotonic rules. Whereas the previous chapters mainly concentrate on specializations of knowledge representation, this chapter depicts the foundation of knowledge representation and inference. Two examples are also given to explain monotonic and nonmonotonic rules, respectively. To get the most out of the chapter, readers should first gain a thorough understanding of predicate logic. Chapter 6 presents several realistic application scenarios to which Semantic Web technology can be applied, including horizontal information products at Elsevier, data integration at Audi, skill finding at Swiss Life, a think tank portal at EnerSearch, e-learning, Web services, multimedia collection indexing, online procurement, and device interoperability. 
These case studies give readers a concrete feel for the Semantic Web in practice.
    The chapter on ontology engineering describes the development of ontology-based systems for the Web using manual and semiautomatic methods. Ontology is a concept similar to taxonomy. As stated in the introduction, ontology engineering deals with some of the methodological issues that arise when building ontologies, in particular, constructing ontologies manually, reusing existing ontologies, and using semiautomatic methods. A medium-scale project is included at the end of the chapter. Overall the book is a nice introduction to the key components of the Semantic Web. The reading is quite pleasant, in part due to the concise layout that allows just enough content per page to facilitate readers' comprehension. Furthermore, the book provides a large number of examples, code snippets, exercises, and annotated online materials. Thus, it is very suitable for use as a textbook for undergraduates and beginning graduate students, as the authors say in the preface. However, I believe that not only students but also professionals in both academia and industry will benefit from the book. The authors also built an accompanying Web site for the book at http://www.semanticwebprimer.org. On the main page, there are eight tabs, one for each of the eight chapters. For each tab, the following sections are included: overview, example, presentations, problems and quizzes, errata, and links. These contents will greatly facilitate readers; for example, they can open the listed links to further their reading. The empty errata section also speaks to the quality of the book."
  20. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.02
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Rez. in: JASIST 58(2007) no.3, S.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatically extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web. 
The Moving Picture Experts Group standards MPEG7 and MPEG21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in bias against the ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on the general multimedia description and extraction could be provided.

Languages

  • e 103
  • d 46
  • f 1

Types

  • a 103
  • el 40
  • m 19
  • s 9
  • n 4
  • r 3
  • x 2

Subjects