Search (293 results, page 1 of 15)

  • language_ss:"e"
  • theme_ss:"Internet"
  • type_ss:"a"
  1. Keller, R.M.: ¬A bookmarking service for organizing and sharing URLs (1997) 0.04
    0.04063629 = product of:
      0.121908866 = sum of:
        0.0726894 = weight(_text_:ranking in 2721) [ClassicSimilarity], result of:
          0.0726894 = score(doc=2721,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 2721, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=2721)
        0.049219467 = product of:
          0.0738292 = sum of:
            0.030743055 = weight(_text_:29 in 2721) [ClassicSimilarity], result of:
              0.030743055 = score(doc=2721,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23319192 = fieldWeight in 2721, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2721)
            0.04308614 = weight(_text_:22 in 2721) [ClassicSimilarity], result of:
              0.04308614 = score(doc=2721,freq=4.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.32829654 = fieldWeight in 2721, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2721)
          0.6666667 = coord(2/3)
      0.33333334 = coord(2/6)
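The score blocks in these results are Lucene's ClassicSimilarity (TF-IDF) explain output. As a minimal sketch, the `_text_:ranking` contribution above can be recomputed directly from the printed constants (idf, queryNorm, and fieldNorm are copied from the explain tree, not derived here):

```python
import math

# Reconstruction of weight(_text_:ranking in doc 2721).
# All constants come from the explain output; Lucene derives them from
# index statistics (idf = 1 + ln(maxDocs / (docFreq + 1))).
idf = 5.4090285          # idf(docFreq=537, maxDocs=44218)
query_norm = 0.03747799  # queryNorm
field_norm = 0.046875    # fieldNorm(doc=2721)
freq = 2.0               # termFreq

tf = math.sqrt(freq)                  # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm       # 0.20271951 = queryWeight
field_weight = tf * idf * field_norm  # 0.35857132 = fieldWeight
score = query_weight * field_weight   # 0.0726894
```

The outer `coord(2/6)` factor then scales the summed term scores by the fraction of query terms the document matched.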
    
    Abstract
    Presents WebTagger, an implemented prototype of a personal bookmarking service that provides both individuals and groups with a customisable means of organizing and accessing Web-based information resources. The service enables users to supply feedback on the utility of these resources relative to their information needs, and provides dynamically updated ranking of resources based on incremental user feedback. Individuals may access the service from anywhere on the Internet and require no special software. The service simplifies the process of sharing URLs within groups, in comparison with manual methods involving email. The underlying bookmark organization scheme is more natural and flexible than the hierarchical schemes currently supported by the major Web browsers, and enables rapid access to stored bookmarks.
    Date
    1. 8.1996 22:08:06
    17. 1.1999 14:22:14
    Source
    Computer networks and ISDN systems. 29(1997) no.8, S.1103-1114
  2. Zhang, Y.; Jansen, B.J.; Spink, A.: Identification of factors predicting clickthrough in Web searching using neural network analysis (2009) 0.04
    0.037651278 = product of:
      0.112953834 = sum of:
        0.102798335 = weight(_text_:ranking in 2742) [ClassicSimilarity], result of:
          0.102798335 = score(doc=2742,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.5070964 = fieldWeight in 2742, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=2742)
        0.0101555 = product of:
          0.030466499 = sum of:
            0.030466499 = weight(_text_:22 in 2742) [ClassicSimilarity], result of:
              0.030466499 = score(doc=2742,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23214069 = fieldWeight in 2742, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2742)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    In this research, we aim to identify factors that significantly affect the clickthrough of Web searchers. Our underlying goal is to determine more efficient methods to optimize the clickthrough rate. We devise a clickthrough metric for measuring customer satisfaction with search engine results using the number of links visited, the number of queries a user submits, and the rank of clicked links. We use a neural network to detect the significant influence of searching characteristics on future user clickthrough. Our results show that high occurrences of query reformulation, lengthy searching duration, longer query length, and higher ranking of prior clicked links correlate positively with future clickthrough. We provide recommendations for leveraging these findings to improve the performance of search engine retrieval and result ranking, along with implications for search engine marketing.
    Date
    22. 3.2009 17:49:11
  3. Chang, C.-H.; Hsu, C.-C.: Integrating query expansion and conceptual relevance feedback for personalized Web information retrieval (1998) 0.03
    0.032217465 = product of:
      0.09665239 = sum of:
        0.084804304 = weight(_text_:ranking in 1319) [ClassicSimilarity], result of:
          0.084804304 = score(doc=1319,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.4183332 = fieldWeight in 1319, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1319)
        0.011848084 = product of:
          0.03554425 = sum of:
            0.03554425 = weight(_text_:22 in 1319) [ClassicSimilarity], result of:
              0.03554425 = score(doc=1319,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.2708308 = fieldWeight in 1319, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1319)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Keyword-based querying is an immediate and efficient way to specify and retrieve the information a user seeks. However, conventional document ranking based on an automatic assessment of document relevance to the query may not be the best approach when little information is given. Proposes integrating 2 existing techniques, query expansion and relevance feedback, to achieve a concept-based information search for the Web.
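The combination described in this abstract can be illustrated with the classic Rocchio feedback formula, a standard relevance-feedback technique (not necessarily the authors' exact method); the term space and weights below are made up for illustration:

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio update: move the query vector toward the centroid
    of documents judged relevant and away from non-relevant ones."""
    def centroid(docs):
        if not docs:
            return [0.0] * len(query)
        return [sum(d[i] for d in docs) / len(docs) for i in range(len(query))]
    rel, nonrel = centroid(relevant), centroid(nonrelevant)
    updated = [alpha * q + beta * r - gamma * n
               for q, r, n in zip(query, rel, nonrel)]
    return [max(w, 0.0) for w in updated]  # clip negative term weights

# Toy term space: [web, retrieval, cooking]
expanded = rocchio([1.0, 0.0, 0.0],
                   relevant=[[0.8, 0.6, 0.0]],
                   nonrelevant=[[0.0, 0.0, 0.9]])
# "retrieval" gains weight via feedback; "cooking" is pushed to zero
```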
    Date
    1. 8.1996 22:08:06
  4. Hsieh-Yee, I.: ¬The retrieval power of selected search engines : how well do they address general reference questions and subject questions? (1998) 0.03
    0.032217465 = product of:
      0.09665239 = sum of:
        0.084804304 = weight(_text_:ranking in 2186) [ClassicSimilarity], result of:
          0.084804304 = score(doc=2186,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.4183332 = fieldWeight in 2186, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2186)
        0.011848084 = product of:
          0.03554425 = sum of:
            0.03554425 = weight(_text_:22 in 2186) [ClassicSimilarity], result of:
              0.03554425 = score(doc=2186,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.2708308 = fieldWeight in 2186, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2186)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Evaluates the performance of 8 major Internet search engines in answering 21 real reference questions and 5 made-up subject questions. Reports on the retrieval and relevancy-ranking abilities of the search engines. Concludes that the search engines produced good results for the subject questions but not for the reference questions. The best engines are identified by type of question, with Infoseek best for the subject questions and OpenText best for reference questions.
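Engine comparisons like this one typically rest on a precision measure; a minimal sketch, with hypothetical result lists and relevance judgments:

```python
def precision_at_k(results, relevant, k=10):
    """Fraction of the top-k results judged relevant: the standard measure
    behind comparisons of retrieval and ranking ability."""
    top = results[:k]
    return sum(1 for r in top if r in relevant) / len(top)

# Hypothetical run: an engine returns 5 pages, 3 judged relevant.
p = precision_at_k(["p1", "p2", "p3", "p4", "p5"],
                   relevant={"p1", "p3", "p5"}, k=5)
```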
    Date
    25.12.1998 19:22:51
  5. Menczer, F.: Lexical and semantic clustering by Web links (2004) 0.03
    0.027645696 = product of:
      0.082937084 = sum of:
        0.0726894 = weight(_text_:ranking in 3090) [ClassicSimilarity], result of:
          0.0726894 = score(doc=3090,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 3090, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=3090)
        0.010247685 = product of:
          0.030743055 = sum of:
            0.030743055 = weight(_text_:29 in 3090) [ClassicSimilarity], result of:
              0.030743055 = score(doc=3090,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23319192 = fieldWeight in 3090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3090)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Recent Web-searching and -mining tools are combining text and link analysis to improve ranking and crawling algorithms. The central assumption behind such approaches is that there is a correlation between the graph structure of the Web and the text and meaning of pages. Here I formalize and empirically evaluate two general conjectures drawing connections from link information to lexical and semantic Web content. The link-content conjecture states that a page is similar to the pages that link to it, and the link-cluster conjecture that pages about the same topic are clustered together. These conjectures are often simply assumed to hold, and Web search tools are built on such assumptions. The present quantitative confirmation sheds light on the connection between the success of the latest Web-mining techniques and the small-world topology of the Web, with encouraging implications for the design of better crawling algorithms.
    Date
    9. 1.2005 19:20:29
  6. Davis, C.H.: From document retrieval to Web browsing : some universal concerns (1997) 0.01
    0.014134051 = product of:
      0.084804304 = sum of:
        0.084804304 = weight(_text_:ranking in 399) [ClassicSimilarity], result of:
          0.084804304 = score(doc=399,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.4183332 = fieldWeight in 399, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0546875 = fieldNorm(doc=399)
      0.16666667 = coord(1/6)
    
    Abstract
    Computer-based systems can produce enormous retrieval sets even when good search logic is used. Sometimes this is desirable; more often it is not. Appropriate filters can limit search results, but they represent only a partial solution. Simple ranking techniques are needed that are both effective and easily understood by the humans doing the searching. Optimal search output, whether from a traditional database or the Internet, will result when intuitive interfaces are designed that inspire confidence while making the necessary mathematics transparent. Weighted term searching using powers of 2, a technique proposed early in the history of information retrieval, can be simplified and used in combination with modern graphics and textual input to achieve these results.
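One possible reading of the powers-of-2 scheme the abstract mentions: give the i-th most important query term the weight 2^(n-1-i), so every subset of matched terms produces a distinct, interpretable score. This is an illustrative sketch, not the original proposal's exact formulation:

```python
def weighted_score(doc_terms, query_terms):
    # The i-th most important query term gets weight 2**(n-1-i); because
    # powers of 2 sum uniquely, the total score tells you exactly which
    # subset of query terms the document matched.
    n = len(query_terms)
    return sum(2 ** (n - 1 - i)
               for i, term in enumerate(query_terms)
               if term in doc_terms)

docs = {
    "d1": {"ranking", "web", "internet"},
    "d2": {"ranking", "internet"},
    "d3": {"web"},
}
query = ["ranking", "web", "internet"]  # most important term first
ranked = sorted(docs, key=lambda d: weighted_score(docs[d], query),
                reverse=True)
# d1 matches all three terms (score 7); d2 scores 5; d3 scores 2
```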
  7. Ardö, A.; Koch, T.: Wide-area information server (WAIS) as the hub of an electronic library service at Lund University (1993) 0.01
    0.0121149 = product of:
      0.0726894 = sum of:
        0.0726894 = weight(_text_:ranking in 8459) [ClassicSimilarity], result of:
          0.0726894 = score(doc=8459,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 8459, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=8459)
      0.16666667 = coord(1/6)
    
    Abstract
    Electronic information sources are being collected at the Lund University Library within the areas of computer science, Internet use and environmental studies. Within each area there are several different types of sources, e.g. the environment area has bibliographic information, journal content pages, a local directory database on environment-related research projects and an archive of articles from relevant electronic conferences. A seed-bank database is planned in collaboration with the Nordic Gene Bank. The popularity of the wide-area information server is growing and there are several hundred available sources today; however, improvements are needed in the facilities for selecting sources and in the search and relevance-ranking algorithms.
  8. MacDougall, S.: Rethinking indexing : the impact of the Internet (1996) 0.01
    0.0121149 = product of:
      0.0726894 = sum of:
        0.0726894 = weight(_text_:ranking in 704) [ClassicSimilarity], result of:
          0.0726894 = score(doc=704,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 704, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=704)
      0.16666667 = coord(1/6)
    
    Abstract
    Considers the challenge to professional indexers posed by the Internet. Indexing and searching on the Internet appear to have taken a retrograde step, as well-developed and efficient information retrieval techniques have been replaced by cruder ones involving automatic keyword indexing and frequency ranking, leading to large retrieval sets and low precision. This is made worse by the apparent acceptance of this poor performance by Internet users and the feeling, on the part of indexers, that they are being bypassed by the producers of these hyperlinked menus and search engines. Key issues are: how far 'human' indexing will still be required in the Internet environment; how indexing techniques will have to change to stay relevant; and the future role of indexers. The challenge facing indexers is to adapt their skills to suit the online environment and to convince publishers of the need for efficient indexes on the Internet.
  9. Rieh, S.Y.: Judgment of information quality and cognitive authority in the Web (2002) 0.01
    0.0121149 = product of:
      0.0726894 = sum of:
        0.0726894 = weight(_text_:ranking in 202) [ClassicSimilarity], result of:
          0.0726894 = score(doc=202,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=202)
      0.16666667 = coord(1/6)
    
    Abstract
    In the Web, making judgments of information quality and authority is a difficult task for most users because overall, there is no quality control mechanism. This study examines the problem of the judgment of information quality and cognitive authority by observing people's searching behavior in the Web. Its purpose is to understand the various factors that influence people's judgment of quality and authority in the Web, and the effects of those judgments on selection behaviors. Fifteen scholars from diverse disciplines participated, and data were collected combining verbal protocols during the searches, search logs, and postsearch interviews. It was found that the subjects made two distinct kinds of judgment: predictive judgment, and evaluative judgment. The factors influencing each judgment of quality and authority were identified in terms of characteristics of information objects, characteristics of sources, knowledge, situation, ranking in search output, and general assumption. Implications for Web design that will effectively support people's judgments of quality and authority are also discussed
  10. Garnsey, M.R.: What distance learners should know about information retrieval on the World Wide Web (2002) 0.01
    0.0121149 = product of:
      0.0726894 = sum of:
        0.0726894 = weight(_text_:ranking in 1626) [ClassicSimilarity], result of:
          0.0726894 = score(doc=1626,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 1626, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=1626)
      0.16666667 = coord(1/6)
    
    Abstract
    The Internet can be a valuable tool allowing distance learners to access information not available locally. Search engines are the most common means of locating relevant information on the Internet, but to use them efficiently students should be taught the basics of searching and how to evaluate the results. This article briefly reviews how search engines work, studies comparing search engines, and criteria useful in evaluating the quality of returned Web pages. Research indicates there are statistical differences in the precision of search engines, with AltaVista ranking high in several studies. When evaluating the quality of Web pages, the standard criteria used in evaluating print resources are appropriate, as well as additional criteria that relate to the Web site itself. Giving distance learners training in how to use search engines and how to evaluate the results will allow them to access relevant information efficiently while ensuring that it is of adequate quality.
  11. Pu, H.-T.; Chuang, S.-L.; Yang, C.: Subject categorization of query terms for exploring Web users' search interests (2002) 0.01
    0.010095751 = product of:
      0.0605745 = sum of:
        0.0605745 = weight(_text_:ranking in 587) [ClassicSimilarity], result of:
          0.0605745 = score(doc=587,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 587, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=587)
      0.16666667 = coord(1/6)
    
    Abstract
    Subject content analysis of Web query terms is essential to understand Web searching interests. Such analysis includes exploring search topics and observing changes in their frequency distributions with time. To provide a basis for in-depth analysis of users' search interests on a larger scale, this article presents a query categorization approach to automatically classifying Web query terms into broad subject categories. Because a query is short in length and simple in structure, its intended subject(s) of search is difficult to judge. Our approach, therefore, combines the search processes of real-world search engines to obtain highly ranked Web documents based on each unknown query term. These documents are used to extract cooccurring terms and to create a feature set. An effective ranking function has also been developed to find the most appropriate categories. Three search engine logs in Taiwan were collected and tested. They contained over 5 million queries from different periods of time. The achieved performance is quite encouraging compared with that of human categorization. The experimental results demonstrate that the approach is efficient in dealing with large numbers of queries and adaptable to the dynamic Web environment. Through good integration of human and machine efforts, the frequency distributions of subject categories in response to changes in users' search interests can be systematically observed in real time. The approach has also shown potential for use in various information retrieval applications, and provides a basis for further Web searching studies.
  12. Jepsen, E.T.; Seiden, P.; Ingwersen, P.; Björneborn, L.; Borlund, P.: Characteristics of scientific Web publications : preliminary data gathering and analysis (2004) 0.01
    0.010095751 = product of:
      0.0605745 = sum of:
        0.0605745 = weight(_text_:ranking in 3091) [ClassicSimilarity], result of:
          0.0605745 = score(doc=3091,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 3091, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3091)
      0.16666667 = coord(1/6)
    
    Abstract
    Because of the increasing presence of scientific publications on the Web, combined with the existing difficulties in easily verifying and retrieving these publications, research on techniques and methods for retrieval of scientific Web publications is called for. In this article, we report on the initial steps taken toward the construction of a test collection of scientific Web publications within the subject domain of plant biology. The steps reported are those of data gathering and data analysis aiming at identifying characteristics of scientific Web publications. The data used in this article were generated based on specifically selected domain topics that were searched for in three publicly accessible search engines (Google, AllTheWeb, and AltaVista). A sample of the retrieved hits was analyzed with regard to how various publication attributes correlated with the scientific quality of the content and whether this information could be employed to harvest, filter, and rank Web publications. The attributes analyzed were inlinks, outlinks, bibliographic references, file format, language, search engine overlap, structural position (according to site structure), and the occurrence of various types of metadata. As could be expected, the ranked output differs between the three search engines. Apparently, this is caused by differences in ranking algorithms rather than the databases themselves. In fact, because scientific Web content in this subject domain receives few inlinks, both AltaVista and AllTheWeb retrieved a higher degree of accessible scientific content than Google. Because of the search engine cutoffs of accessible URLs, the feasibility of using search engine output for Web content analysis is also discussed.
  13. Nikolov, D.; Lalmas, M.; Flammini, A.; Menczer, F.: Quantifying biases in online information exposure (2019) 0.01
    0.010095751 = product of:
      0.0605745 = sum of:
        0.0605745 = weight(_text_:ranking in 4986) [ClassicSimilarity], result of:
          0.0605745 = score(doc=4986,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 4986, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4986)
      0.16666667 = coord(1/6)
    
    Abstract
    Our consumption of online information is mediated by filtering, ranking, and recommendation algorithms that introduce unintentional biases as they attempt to deliver relevant and engaging content. It has been suggested that our reliance on online technologies such as search engines and social media may limit exposure to diverse points of view and make us vulnerable to manipulation by disinformation. In this article, we mine a massive data set of web traffic to quantify two kinds of bias: (i) homogeneity bias, which is the tendency to consume content from a narrow set of information sources, and (ii) popularity bias, which is the selective exposure to content from top sites. Our analysis reveals different bias levels across several widely used web platforms. Search exposes users to a diverse set of sources, while social media traffic tends to exhibit high popularity and homogeneity bias. When we focus our analysis on traffic to news sites, we find higher levels of popularity bias, with smaller differences across applications. Overall, our results quantify the extent to which our choices of online systems confine us inside "social bubbles."
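One plausible way to operationalise the two biases named in the abstract (the paper's exact definitions may differ): homogeneity as one minus the normalised entropy of a user's source distribution, and popularity as the share of visits going to a given set of top sites:

```python
from collections import Counter
from math import log2

def bias_metrics(visits, top_sites):
    # Homogeneity bias: 1 - normalised Shannon entropy of the source
    # distribution (1.0 = all traffic to one source, 0.0 = uniform spread).
    # Popularity bias: share of visits that go to the given top sites.
    counts = Counter(visits)
    total = len(visits)
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * log2(p) for p in probs)
    max_entropy = log2(len(counts)) if len(counts) > 1 else 1.0
    homogeneity = 1.0 - entropy / max_entropy
    popularity = sum(1 for v in visits if v in top_sites) / total
    return homogeneity, popularity

# Toy traffic log: three visits to site "a", one to site "b"
h, p = bias_metrics(["a", "a", "a", "b"], top_sites={"a"})
```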
  14. Oliveira Machado, L.M.; Souza, R.R.; Simões, M. da Graça: Semantic web or web of data? : a diachronic study (1999 to 2017) of the publications of Tim Berners-Lee and the World Wide Web Consortium (2019) 0.01
    0.010095751 = product of:
      0.0605745 = sum of:
        0.0605745 = weight(_text_:ranking in 5300) [ClassicSimilarity], result of:
          0.0605745 = score(doc=5300,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 5300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
      0.16666667 = coord(1/6)
    
    Abstract
    The web has been, in recent decades, the place where information retrieval achieved its maximum importance, given its ubiquity and the sheer volume of information. However, its exponential growth has made the retrieval task increasingly hard, relying for its effectiveness on idiosyncratic and somewhat biased ranking algorithms. To deal with this problem, a "new" web, called the Semantic Web (SW), was proposed, bringing along concepts like "Web of Data" and "Linked Data," although the definitions and connections among these concepts are often unclear. Based on a qualitative approach built over a literature review, a definition of SW is presented, discussing the related concepts sometimes used as synonyms. It concludes that the SW is a comprehensive and ambitious construct that includes the great purpose of making the web a global database. It also follows the specifications developed and/or associated with its operationalization and the necessary procedures for the connection of data in an open format on the web. The goals of this comprehensive SW are the union of two outcomes still tenuously connected: the virtually unlimited possibility of connections between data (the web domain) with the potentiality of the automated inference of "intelligent" systems (the semantic component).
  15. Hasanain, M.; Elsayed, T.: Studying effectiveness of Web search for fact checking (2022) 0.01
    0.010095751 = product of:
      0.0605745 = sum of:
        0.0605745 = weight(_text_:ranking in 558) [ClassicSimilarity], result of:
          0.0605745 = score(doc=558,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 558, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=558)
      0.16666667 = coord(1/6)
    
    Abstract
    Web search is commonly used by fact checking systems as a source of evidence for claim verification. In this work, we demonstrate that the task of retrieving pages useful for fact checking, called evidential pages, is indeed different from the task of retrieving topically relevant pages that are typically optimized by search engines; thus, it should be handled differently. We conduct a comprehensive study on the performance of retrieving evidential pages over a test collection we developed for the task of re-ranking Web pages by usefulness for fact-checking. Results show that pages (retrieved by a commercial search engine) that are topically relevant to a claim are not always useful for verifying it, and that the engine's performance in retrieving evidential pages is weakly correlated with retrieval of topically relevant pages. Additionally, we identify types of evidence in evidential pages and some linguistic cues that can help predict page usefulness. Moreover, preliminary experiments show that a retrieval model leveraging those cues has a higher performance compared to the search engine. Finally, we show that existing systems have a long way to go to support effective fact checking. To that end, our work provides insights to guide design of better future systems for the task.
  16. Bachiochi, D.: Usability studies and designing navigational aids for the World Wide Web (1997) 0.01
    
    Date
    1. 8.1996 22:08:06
    Source
    Computer networks and ISDN systems. 29(1997) no.8, S.1489-1496
  17. Wiley, D.L.: ¬The organizational politics of the World Wide Web (1998) 0.01
    
    Date
    22. 1.1999 18:41:46
    Source
    Internet reference services quarterly. 3(1998) no.2, S.23-29
  18. Moore, A.: As I sit studying : WWW-based reference services (1998) 0.01
    
    Date
    17. 7.1998 22:10:42
    Source
    Internet reference services quarterly. 3(1998) no.1, S.29-36
  19. Davis, E.; Stone, J.: ¬A painless route on to the Web : Web services 1: The Royal Postgraduate Medical School (1997) 0.01
    
    Date
    29. 7.1998 21:22:27
  20. Broder, A.Z.: Syntactic clustering of the Web (1997) 0.01
    
    Date
    1. 8.1996 22:08:06
    Source
    Computer networks and ISDN systems. 29(1997) no.8, S.1157-1166
