Search (5 results, page 1 of 1)

  • author_ss:"Nejdl, W."
  1. Henze, N.; Nejdl, W.: A logical characterization of adaptive educational hypermedia (2004) 0.00
    0.0023678814 = product of:
      0.0047357627 = sum of:
        0.0047357627 = product of:
          0.009471525 = sum of:
            0.009471525 = weight(_text_:a in 5711) [ClassicSimilarity], result of:
              0.009471525 = score(doc=5711,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17835285 = fieldWeight in 5711, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5711)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
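    This breakdown is Lucene's ClassicSimilarity explain output: fieldWeight = tf × idf × fieldNorm, queryWeight = idf × queryNorm, the per-term score is their product, and the two coord(1/2) factors halve it twice. A minimal Python sketch, using only the values printed above, reproduces the final score:

      import math

      # Values copied from the explain tree for doc 5711 above.
      idf = 1.153047            # idf(docFreq=37942, maxDocs=44218)
      query_norm = 0.046056706
      field_norm = 0.0546875
      freq = 8.0

      tf = math.sqrt(freq)                  # 2.828427 = tf(freq=8.0)
      field_weight = tf * idf * field_norm  # 0.17835285 = fieldWeight
      query_weight = idf * query_norm       # 0.053105544 = queryWeight
      score = field_weight * query_weight   # 0.009471525 = weight(_text_:a)

      final = score * 0.5 * 0.5             # two coord(1/2) factors
      print(final)                          # ~0.0023678814, as shown above

    The remaining entries follow the same formula; only freq, fieldNorm, and the resulting weights differ.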
    
    Abstract
    Currently, adaptive educational hypermedia systems (AEHSs) are described using nonuniform methods, depending on the specific view of the system, the application, or other parameters. There is no common language for expressing the functionality of AEHSs; hence, these systems are difficult to compare and analyze. In this paper, we investigate how a logical description can be employed to characterize adaptive educational hypermedia. We propose a definition of AEHSs based on first-order logic, characterize some existing AEHSs within this formalism, and discuss the applicability of the approach.
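    To make the flavor of such a first-order characterization concrete, a hypothetical adaptation rule (an illustration in the spirit of the paper, not a rule quoted from it) could state that a document is recommended to a user once all of its prerequisite documents have been learned:

      \forall d\, \forall u\; \bigl[\, \forall d'\, (\mathit{prereq}(d', d) \rightarrow \mathit{learned}(d', u)) \,\bigr] \rightarrow \mathit{recommended}(d, u)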
    Type
    a
  2. Zenz, G.; Zhou, X.; Minack, E.; Siberski, W.; Nejdl, W.: Interactive query construction for keyword search on the Semantic Web (2012) 0.00
    0.0022374375 = product of:
      0.004474875 = sum of:
        0.004474875 = product of:
          0.00894975 = sum of:
            0.00894975 = weight(_text_:a in 430) [ClassicSimilarity], result of:
              0.00894975 = score(doc=430,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1685276 = fieldWeight in 430, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=430)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    With the advance of the Semantic Web, increasing amounts of data are available in a structured and machine-understandable form. This opens opportunities for users to employ semantic queries instead of simple keyword-based ones to accurately express their information need. However, constructing semantic queries is a demanding task for human users [11]. To compose a valid semantic query, a user has to (1) master a query language (e.g., SPARQL) and (2) acquire sufficient knowledge about the ontology or the schema of the data source. While there are systems that support this task with visual tools [21, 26] or natural language interfaces [3, 13, 14, 18], the process of query construction can still be complex and time-consuming. According to [24], users prefer keyword search and struggle with the construction of semantic queries even when supported by a natural language interface. Several keyword search approaches have already been proposed to ease information seeking on semantic data [16, 32, 35] or databases [1, 31]. However, keyword queries lack the expressivity to precisely describe the user's intent. As a result, ranking can at best put the query intentions of the majority on top, making it impossible to take the intentions of all users into consideration.
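    The incremental construction idea can be illustrated with a small sketch (hypothetical; the keyword-to-schema mappings and names below are invented, and a real system would derive candidates from the data source's ontology):

      from itertools import product

      # Hypothetical mappings from keywords to schema elements.
      CANDIDATES = {
          "turing": ["person:AlanTuring", "award:TuringAward"],
          "award": ["property:receivedAward", "class:Award"],
      }

      def interpretations(keywords):
          """Enumerate candidate semantic queries as tuples of schema elements."""
          options = [CANDIDATES.get(k, [k]) for k in keywords]
          return list(product(*options))

      def refine(candidates, choice):
          """Keep only interpretations consistent with the user's selection."""
          return [c for c in candidates if choice in c]

      queries = interpretations(["turing", "award"])
      print(len(queries))   # 4 candidate interpretations
      queries = refine(queries, "person:AlanTuring")
      print(len(queries))   # 2 remain after one user choice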
  3. Ioannou, E.; Nejdl, W.; Niederée, C.; Velegrakis, Y.: Embracing uncertainty in entity linking (2012) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 433) [ClassicSimilarity], result of:
              0.008285859 = score(doc=433,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 433, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=433)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The modern Web has grown from a publishing place for well-structured data and HTML pages, serving companies and experienced users, into a vibrant publishing and data-exchange community in which everyone can participate, both as a data consumer and as a data producer. Unavoidably, the data available on the Web has become highly heterogeneous, ranging from highly structured and semistructured to highly unstructured user-generated content, reflecting different perspectives and structuring principles. The full potential of such data can only be realized by combining information from multiple sources. For instance, the knowledge that is typically embedded in monolithic applications can be outsourced and thus also used in other applications. Numerous systems already actively utilize existing content from various sources such as WordNet or Wikipedia. Some well-known examples of such systems include DBpedia, Freebase, Spock, and DBLife. A major challenge in combining and querying information from multiple heterogeneous sources is entity linkage, i.e., the ability to detect whether two pieces of information correspond to the same real-world object. This chapter introduces a novel approach to the entity linkage problem for heterogeneous, uncertain, and volatile data.
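    The abstract's central idea, keeping linkage decisions uncertain instead of merging records eagerly, can be sketched as follows (a toy example; the identifiers and probabilities are invented):

      # Possible links are stored with probabilities; queries decide at
      # run time which links to trust.
      possible_links = {
          ("dbpedia:Alan_Turing", "freebase:m.0n00"): 0.92,
          ("dbpedia:Alan_Turing", "dblife:a_turing"): 0.55,
      }

      def linked_entities(entity, threshold):
          """Return entities linked to `entity` with probability >= threshold."""
          out = []
          for (a, b), p in possible_links.items():
              if p >= threshold and entity in (a, b):
                  out.append((b if a == entity else a, p))
          return out

      # A precision-oriented query uses a high threshold ...
      print(linked_entities("dbpedia:Alan_Turing", 0.9))  # one confident link
      # ... a recall-oriented query accepts more uncertain links.
      print(linked_entities("dbpedia:Alan_Turing", 0.5))  # both candidates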
  4. Souza, J.; Carvalho, A.; Cristo, M.; Moura, E.; Calado, P.; Chirita, P.-A.; Nejdl, W.: Using site-level connections to estimate link confidence (2012) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 498) [ClassicSimilarity], result of:
              0.008285859 = score(doc=498,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 498, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=498)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Search engines are essential tools for web users today. They rely on a large number of features to compute the rank of search results for each given query. The estimated reputation of pages is among the most effective features available to search engine designers, and it is probably adopted by most current commercial search engines. Page reputation is estimated by analyzing the linkage relationships between pages. This information is used by link analysis algorithms as a query-independent feature to be taken into account when computing the rank of the results. Unfortunately, several types of links found on the web may damage the estimated page reputation and thus degrade the quality of search results. This work studies alternatives for reducing the negative impact of such noisy links. More specifically, the authors propose and evaluate new methods that deal with noisy links, considering scenarios where the reputation of pages is computed using the PageRank algorithm. They show, through experiments with real web content, that their methods achieve significant improvements over previous solutions proposed in the literature.
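    The abstract does not spell out the authors' weighting scheme, but the general idea of feeding per-link confidence estimates into PageRank can be sketched like this (a toy implementation; the graph and confidence values are invented):

      def weighted_pagerank(links, confidence, d=0.85, iters=50):
          """Power iteration in which each link contributes in proportion
          to its estimated confidence instead of uniformly."""
          pages = {p for p, q in links} | {q for p, q in links}
          rank = {p: 1.0 / len(pages) for p in pages}
          out = {p: 0.0 for p in pages}  # total outgoing confidence per page
          for p, q in links:
              out[p] += confidence[(p, q)]
          for _ in range(iters):
              nxt = {p: (1 - d) / len(pages) for p in pages}
              for p, q in links:
                  if out[p] > 0:
                      nxt[q] += d * rank[p] * confidence[(p, q)] / out[p]
              rank = nxt
          return rank

      links = [("a", "b"), ("b", "c"), ("c", "a"), ("spam", "a")]
      confidence = {("a", "b"): 1.0, ("b", "c"): 1.0,
                    ("c", "a"): 1.0, ("spam", "a"): 0.1}  # noisy link downweighted
      print(weighted_pagerank(links, confidence))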
    Type
    a
  5. Nejdl, W.; Risse, T.: Herausforderungen für die nationale, regionale und thematische Webarchivierung und deren Nutzung [Challenges for national, regional, and thematic web archiving and its use] (2015) 0.00
    0.0010148063 = product of:
      0.0020296127 = sum of:
        0.0020296127 = product of:
          0.0040592253 = sum of:
            0.0040592253 = weight(_text_:a in 2531) [ClassicSimilarity], result of:
              0.0040592253 = score(doc=2531,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.07643694 = fieldWeight in 2531, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2531)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a