Search (45 results, page 3 of 3)

  • theme_ss:"Semantic Web"
  • type_ss:"a"
  1. Shaw, R.; Buckland, M.: Open identification and linking of the four Ws (2008) 0.01
    0.0061889226 = product of:
      0.012377845 = sum of:
        0.012377845 = product of:
          0.02475569 = sum of:
            0.02475569 = weight(_text_:22 in 2665) [ClassicSimilarity], result of:
              0.02475569 = score(doc=2665,freq=2.0), product of:
                0.18281296 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052204985 = queryNorm
                0.1354154 = fieldWeight in 2665, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2665)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
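The explain tree above can be recomputed directly; a minimal sketch of Lucene's ClassicSimilarity (tf-idf) arithmetic, taking the index-dependent constants (idf, queryNorm, fieldNorm) as given from the tree:

```python
import math

# Constants reported in the explain tree above (index-dependent, taken as given).
FREQ = 2.0               # termFreq of "22" in doc 2665
IDF = 3.5018296          # idf(docFreq=3622, maxDocs=44218)
QUERY_NORM = 0.052204985
FIELD_NORM = 0.02734375  # length normalization of the matched field
COORD = 0.5 * 0.5        # the two nested coord(1/2) factors

def classic_similarity_score(freq, idf, query_norm, field_norm, coord):
    """Recompute the ClassicSimilarity score shown in the explain output."""
    tf = math.sqrt(freq)                  # 1.4142135 = tf(freq=2.0)
    query_weight = idf * query_norm       # 0.18281296 = queryWeight
    field_weight = tf * idf * field_norm  # 0.1354154  = fieldWeight
    return query_weight * field_weight * coord

score = classic_similarity_score(FREQ, IDF, QUERY_NORM, FIELD_NORM, COORD)
print(f"{score:.8f}")  # 0.00618892, matching the reported 0.0061889226
```

The same formula reproduces every score tree in this result list; only the per-document constants change.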
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  2. Legg, C.: Ontologies on the Semantic Web (2007) 0.01
    0.0052776835 = 0.25 (coord) x weight(_text_:retrieval in 1979) [ClassicSimilarity]: tf=1.4142135 (freq=2.0), idf=3.024915 (docFreq=5836, maxDocs=44218), queryNorm=0.052204985, fieldNorm=0.03125
    
    Abstract
    As an informational technology, the World Wide Web has enjoyed spectacular success. In just ten years it has transformed the way information is produced, stored, and shared in arenas as diverse as shopping, family photo albums, and high-level academic research. The "Semantic Web" is touted by its developers as equally revolutionary, although it has not yet achieved anything like the Web's exponential uptake. It seeks to transcend a current limitation of the Web - that it largely requires indexing to be accomplished merely on specific character strings. Thus, a person searching for information about "turkey" (the bird) receives from current search engines many irrelevant pages about "Turkey" (the country) and nothing about the Spanish "pavo", even if he or she is a Spanish speaker able to understand such pages. The Semantic Web vision is to develop technology to facilitate retrieval of information via meanings, not just spellings. For this to be possible, most commentators believe, Semantic Web applications will have to draw on some kind of shared, structured, machine-readable conceptual scheme. Thus, there has been a convergence between the Semantic Web research community and an older tradition with roots in classical Artificial Intelligence (AI) research (sometimes referred to as "knowledge representation") whose goal is to develop a formal ontology. A formal ontology is a machine-readable theory of the most fundamental concepts or "categories" required in order to understand information pertaining to any knowledge domain. A review of the attempts that have been made to realize this goal provides an opportunity to reflect in interestingly concrete ways on various research questions such as the following:
      • How explicit a machine-understandable theory of meaning is it possible or practical to construct?
      • How universal a machine-understandable theory of meaning is it possible or practical to construct?
      • How much (and what kind of) inference support is required to realize a machine-understandable theory of meaning?
      • What is it for a theory of meaning to be machine-understandable anyway?
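The "turkey"/"pavo" problem described in this abstract reduces to matching on shared concept identifiers rather than character strings. A minimal illustrative sketch of that idea (all concept IDs, labels, and documents here are invented for illustration, not drawn from any real ontology):

```python
# Toy concept index: surface strings from several languages map to shared
# concept identifiers, so retrieval works on meanings, not spellings.
# (All concept IDs and labels below are invented for illustration.)
LEXICON = {
    "turkey (bird)": "concept:MeleagrisGallopavo",
    "pavo": "concept:MeleagrisGallopavo",     # Spanish word for the bird
    "turkey (country)": "concept:Turkey",
}

DOCUMENTS = {
    "doc1": ["concept:MeleagrisGallopavo"],   # a page about the bird
    "doc2": ["concept:Turkey"],               # a page about the country
}

def concept_search(query_term):
    """Return documents annotated with the query's concept, not its spelling."""
    concept = LEXICON.get(query_term)
    return [d for d, concepts in DOCUMENTS.items() if concept in concepts]

print(concept_search("pavo"))           # ['doc1']
print(concept_search("turkey (bird)"))  # ['doc1'] - same concept, new spelling
```

A string-matching engine would treat "pavo" and "turkey (bird)" as unrelated; the concept layer is what makes them retrieve the same document.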
  3. Engels, R.H.P.; Lech, T.Ch.: Generating ontologies for the Semantic Web : OntoBuilder (2004) 0.01
    0.0052776835 = 0.25 (coord) x weight(_text_:retrieval in 4404) [ClassicSimilarity]: tf=1.4142135 (freq=2.0), idf=3.024915 (docFreq=5836, maxDocs=44218), queryNorm=0.052204985, fieldNorm=0.03125
    
    Abstract
    Thus, there is a clear need for the web to become more semantic. The aim of introducing semantics into the web is not only to enhance the precision of search, but also to enable logical reasoning over web contents in order to answer queries. The CORPORUM OntoBuilder toolset was developed specifically for this task. It consists of a set of applications that can fulfil a variety of tasks, either as stand-alone tools or by augmenting each other. Important tasks dealt with by CORPORUM relate to document and information retrieval (finding relevant documents, or supporting the user in finding them), information extraction (building a knowledge base from web documents to answer queries), information dissemination (summarization strategies and information visualization), and automated document classification. First versions of the toolset are encouraging in that they show great potential as a supportive technology for building up the Semantic Web. In this chapter, methods for transforming the current web into a semantic web are discussed, as well as a technical solution that can perform this task: the CORPORUM toolset. First, the toolset is introduced; then some pragmatic issues relating to the approach are considered; a short overview of the theory in relation to CognIT's vision follows; and finally, some of the applications that arose from the project are discussed.
  4. Sah, M.; Wade, V.: Personalized concept-based search on the Linked Open Data (2015) 0.01
    0.0052776835 = 0.25 (coord) x weight(_text_:retrieval in 2511) [ClassicSimilarity]: tf=1.4142135 (freq=2.0), idf=3.024915 (docFreq=5836, maxDocs=44218), queryNorm=0.052204985, fieldNorm=0.03125
    
    Abstract
    In this paper, we present a novel personalized concept-based search mechanism for the Web of Data based on results categorization. The innovation of the paper comes from combining novel categorization and personalization techniques, and using categorization to provide personalization. In our approach, search results (Linked Open Data resources) are dynamically categorized into Upper Mapping and Binding Exchange Layer (UMBEL) concepts using a novel fuzzy retrieval model. Then, results with the same concepts are grouped together to form categories, which we call concept lenses. Such categorization enables concept-based browsing of the retrieved results aligned to the user's intent or interests. When the user selects a concept lens for exploration, results are immediately personalized: all concept lenses are re-organized according to their similarity to the selected lens. Within the selected concept lens, more relevant results are included using results re-ranking and query expansion, and relevant concept lenses are suggested to support results exploration. This allows dynamic adaptation of results to the user's local choices. We also support interactive personalization: when the user clicks on a result, relevant lenses and results are included within the interacted lens using results re-ranking and query expansion. Extensive evaluations were performed to assess our approach: (i) the performance of our fuzzy-based categorization approach was evaluated on a benchmark of ~10,000 mappings, showing that we can achieve highly acceptable categorization accuracy and perform better than the vector space model; (ii) personalized search efficacy was assessed in a user study with 32 participants in a tourist domain, which revealed that our approach performed significantly better than a non-adaptive baseline search; (iii) dynamic personalization performance was evaluated, illustrating that our personalization approach is scalable; and (iv) finally, we compared our system with existing LOD search engines, which showed that our approach is unique.
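The lens re-organization step described above can be sketched as a similarity ranking. A minimal sketch assuming lenses are represented as sparse term-weight vectors compared by cosine similarity (the paper's actual fuzzy retrieval model is more elaborate, and the lens names and weights below are invented):

```python
import math

def cosine(a, b):
    """Cosine similarity of two sparse term-weight vectors (dicts)."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy concept lenses: concept label -> term weights (invented data).
lenses = {
    "Hotel":      {"hotel": 1.0, "room": 0.8, "booking": 0.6},
    "Restaurant": {"food": 1.0, "menu": 0.7, "booking": 0.3},
    "Museum":     {"art": 1.0, "exhibit": 0.9},
}

def reorganize(selected, lenses):
    """Re-order all lenses by similarity to the lens the user selected."""
    target = lenses[selected]
    return sorted(lenses, key=lambda n: cosine(lenses[n], target), reverse=True)

print(reorganize("Hotel", lenses))  # 'Hotel' first, then the most similar lens
```

Selecting "Hotel" ranks "Restaurant" above "Museum" because the two share the term "booking"; this is the kind of local adaptation the abstract describes.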
  5. Voss, J.: LibraryThing : Web 2.0 for book lovers and libraries (2007) 0.00
    0.004420659 = 0.25 (coord) x weight(_text_:22 in 1847) [ClassicSimilarity]: tf=1.4142135 (freq=2.0), idf=3.5018296 (docFreq=3622, maxDocs=44218), queryNorm=0.052204985, fieldNorm=0.01953125
    
    Date
    22. 9.2007 10:36:23

Languages

  • e 40
  • d 5
