Search (955 results, page 48 of 48)

  • theme_ss:"Suchmaschinen"
  1. Chen, S.Y.; Magoulas, G.D.; Dimakopoulos, D.: A flexible interface design for Web directories to accommodate different cognitive styles (2005) 0.00
    8.9242304E-4 = product of:
      0.005354538 = sum of:
        0.005354538 = weight(_text_:in in 3269) [ClassicSimilarity], result of:
          0.005354538 = score(doc=3269,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.09017298 = fieldWeight in 3269, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3269)
      0.16666667 = coord(1/6)
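    
    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output; the final score can be recomputed directly from its leaf values. A minimal sketch, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), coord = matched/total query terms):
    
      import math
      
      # Leaf values copied from the explain tree above (term "in", doc 3269)
      freq = 2.0                # occurrences of "in" in the indexed field
      doc_freq = 30841          # documents containing "in"
      max_docs = 44218          # documents in the index
      query_norm = 0.043654136  # query normalization factor
      field_norm = 0.046875     # field-length normalization stored at index time
      coord = 1.0 / 6.0         # 1 of 6 query terms matched
      
      # Standard ClassicSimilarity formulas (assumed; they reproduce the tree)
      tf = math.sqrt(freq)                               # 1.4142135
      idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 1.3602545
      query_weight = idf * query_norm                    # 0.059380736
      field_weight = tf * idf * field_norm               # 0.09017298
      print(coord * query_weight * field_weight)         # ~8.9242304e-04
    
    Every result in this list is scored the same way; the three score levels on this page (8.9242304E-4, 8.413845E-4, and 7.4368593E-4) differ only in the term frequency (freq 2.0 vs. 4.0) and the fieldNorm (0.046875, 0.03125, or 0.0390625) of the matched document.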
    
    Abstract
    Search engines are very popular tools for collecting information from distributed resources. They provide not only search facilities, but also directories for users to browse content divided into groups. In this paper, we adopt an individual differences approach to explore users' attitudes towards various interface features provided by existing Web directories. Among a variety of individual differences, cognitive style is a particularly important characteristic that influences the effectiveness of information seeking. Empirical results indicate that users' cognitive styles influence their reactions to the organization of subject categories, the presentation of results, and screen layout. We developed a set of design guidelines on the basis of these results, and propose a flexible interface that adopts these guidelines to accommodate the preferences of different cognitive style groups.
  2. Radev, D.; Fan, W.; Qu, H.; Wu, H.; Grewal, A.: Probabilistic question answering on the Web (2005) 0.00
    
    Abstract
    Web-based search engines such as Google and NorthernLight return documents that are relevant to a user query, not answers to user questions. We have developed an architecture that augments existing search engines so that they support natural language question answering. The process entails five steps: query modulation, document retrieval, passage extraction, phrase extraction, and answer ranking. In this article, we describe some probabilistic approaches to the last three of these stages. We show how our techniques apply to a number of existing search engines, and we also present results contrasting three different methods for question answering. Our algorithm, probabilistic phrase reranking (PPR), uses proximity and question type features and achieves a total reciprocal document rank of .20 on the TREC8 corpus. Our techniques have been implemented as a Web-accessible system, called NSIR.
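    
    The evaluation metric reported here, total reciprocal document rank (TRDR), credits every answer-bearing document by the reciprocal of its rank. A minimal sketch, assuming TRDR is the per-question sum of reciprocal ranks averaged over questions, with hypothetical judgments:
    
      def trdr(relevance_lists):
          """Mean total reciprocal document rank: for each question, sum the
          reciprocal ranks of answer-bearing documents, then average."""
          totals = [sum(1.0 / (i + 1) for i, hit in enumerate(ranked) if hit)
                    for ranked in relevance_lists]
          return sum(totals) / len(totals)
      
      # Hypothetical judgments for three questions, top-5 documents each:
      # True means the document at that rank contains a correct answer.
      runs = [
          [True, False, False, True, False],   # ranks 1 and 4 -> 1.25
          [False, False, True, False, False],  # rank 3 -> 0.333...
          [False] * 5,                         # answer not retrieved -> 0.0
      ]
      print(round(trdr(runs), 3))  # 0.528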
  3. Warnick, W.L.; Leberman, A.; Scott, R.L.; Spence, K.J.; Johnson, L.A.; Allen, V.S.: Searching the deep Web : directed query engine applications at the Department of Energy (2001) 0.00
    
    Abstract
    Directed Query Engines, an emerging class of search engine specifically designed to access distributed resources on the deep web, offer the opportunity to create inexpensive digital libraries. Already, one such engine, Distributed Explorer, has been used to select and assemble high quality information resources and incorporate them into publicly available systems for the physical sciences. By nesting Directed Query Engines so that one query launches several other engines in a cascading fashion, enormous virtual collections may soon be assembled to form a comprehensive information infrastructure for the physical sciences. Once a Directed Query Engine has been configured for a set of information resources, distributed alerts tools can provide patrons with personalized, profile-based notices of recent additions to any of the selected resources. Due to the potentially enormous size and scope of Directed Query Engine applications, consideration must be given to issues surrounding the representation of large quantities of information from multiple, heterogeneous sources.
  4. Thelwall, M.: Extracting accurate and complete results from search engines : case study windows live (2008) 0.00
    
    Abstract
    Although designed for general Web searching, commercial search engines are also used in Webometrics and related research to produce estimated hit counts or lists of URLs matching a query. Unfortunately, however, they do not return all matching URLs for a search, and their hit count estimates are unreliable. In this article, we assess whether it is possible to obtain complete lists of matching URLs from Windows Live, and whether any of its hit count estimates are robust. As part of this, we introduce two new methods to extract extra URLs from search engines: automated query splitting and automated domain and TLD searching. Both methods successfully identify additional matching URLs, but the findings suggest that there is no way to get complete lists of matching URLs or accurate hit counts from Windows Live, although some suggestions for improving the estimates are provided.
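    
    A minimal sketch of automated query splitting as described above: a query q is split into (q AND t) and (q AND NOT t) for a pivot term t, giving disjoint subqueries whose result lists can be fetched separately under the engine's result cap. The pivot terms and recursion depth are illustrative assumptions:
    
      def split_query(query, pivots, depth=2):
          """Partition a query into disjoint subqueries: (q AND t) and
          (q AND NOT t) together match exactly the documents q matches,
          so each side can be fetched separately under a result cap."""
          if depth == 0 or not pivots:
              return [query]
          t, rest = pivots[0], pivots[1:]
          return (split_query(f"({query}) AND {t}", rest, depth - 1) +
                  split_query(f"({query}) AND NOT {t}", rest, depth - 1))
      
      # Illustrative pivot terms; a real splitter would pick frequent terms
      for q in split_query("search engine", ["web", "index"]):
          print(q)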
  5. Ding, L.; Finin, T.; Joshi, A.; Peng, Y.; Cost, R.S.; Sachs, J.; Pan, R.; Reddivari, P.; Doshi, V.: Swoogle : a Semantic Web search and metadata engine (2004) 0.00
    
    Abstract
    Swoogle is a crawler-based indexing and retrieval system for the Semantic Web, i.e., for Web documents in RDF or OWL. It extracts metadata for each discovered document, and computes relations between documents. Discovered documents are also indexed by an information retrieval system which can use either character N-Gram or URIrefs as keywords to find relevant documents and to compute the similarity among a set of documents. One of the interesting properties we compute is rank, a measure of the importance of a Semantic Web document.
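    
    A minimal sketch of the character N-gram tokenization mentioned above, which lets an IR system index URIrefs that have no natural word boundaries; the gram length and example URIref are assumptions, not Swoogle's settings:
    
      def char_ngrams(text, n=4):
          """Overlapping character N-grams: a tokenization that needs no word
          boundaries, so it also works for URIrefs and other identifiers."""
          return [text[i:i + n] for i in range(len(text) - n + 1)]
      
      # Illustrative URIref; the gram length n=4 is an assumption here
      uri = "http://xmlns.com/foaf/0.1/Person"
      print(char_ngrams(uri)[:5])  # ['http', 'ttp:', 'tp:/', 'p://', '://x']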
  6. Ortiz-Cordova, A.; Jansen, B.J.: Classifying web search queries to identify high revenue generating customers (2012) 0.00
    
    Abstract
    Traffic from search engines is important for most online businesses, with the majority of visitors to many websites being referred by search engines. Therefore, an understanding of this search engine traffic is critical to the success of these websites. Understanding search engine traffic means understanding the underlying intent of the query terms and the corresponding user behaviors of searchers submitting keywords. In this research, using 712,643 query keywords from a popular Spanish music website relying on contextual advertising as its business model, we use a k-means clustering algorithm to categorize referral keywords by similar characteristics of onsite customer behavior, including attributes such as clickthrough rate and revenue. We identified six clusters of consumer keywords, ranging from a large number of low-impact users to a small number of high-impact users. We demonstrate how online businesses can leverage this segmentation clustering approach to provide a more tailored consumer experience. The implications are that businesses can effectively segment customers to develop better business models and increase advertising conversion rates.
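    
    A minimal sketch of the clustering step described above, grouping keywords by onsite-behavior attributes; the feature values and the use of scikit-learn are illustrative assumptions, not the authors' implementation:
    
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler
      
      # Hypothetical per-keyword features: [clickthrough rate, revenue per visit]
      keywords = ["flamenco chords", "free mp3", "concert tickets", "guitar tabs"]
      features = np.array([
          [0.12, 0.40],
          [0.02, 0.00],
          [0.09, 1.10],
          [0.05, 0.05],
      ])
      
      # Standardize so CTR and revenue contribute comparably, then cluster;
      # the paper identified six clusters, this toy uses two for four keywords
      X = StandardScaler().fit_transform(features)
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
      for kw, label in zip(keywords, labels):
          print(label, kw)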
  7. Noruzi, A.: Google Scholar : the new generation of citation indexes (2005) 0.00
    
    Abstract
    Google Scholar (http://scholar.google.com) provides a new method of locating potentially relevant articles on a given subject by identifying subsequent articles that cite a previously published article. An important feature of Google Scholar is that researchers can use it to trace interconnections among authors citing articles on the same topic and to determine the frequency with which others cite a specific article, as it has a "cited by" feature. This study begins with an overview of how to use Google Scholar for citation analysis and identifies advanced search techniques not well documented by Google Scholar. This study also compares the citation counts provided by Web of Science and Google Scholar for articles in the field of "Webometrics." It makes several suggestions for improving Google Scholar. Finally, it concludes that Google Scholar provides a free alternative or complement to other citation indexes.
  8. Chen, H.; Houston, A.L.; Sewell, R.R.; Schatz, B.R.: Internet browsing and searching : user evaluations of category map and concept space techniques (1998) 0.00
    
    Abstract
    The Internet provides an exceptional testbed for developing algorithms that can improve browsing and searching of large information spaces. Browsing and searching tasks are susceptible to problems of information overload and vocabulary differences. Much of the current research is aimed at the development and refinement of algorithms to improve browsing and searching by addressing these problems. Our research focused on discovering whether two of the algorithms our research group has developed, a Kohonen-algorithm category map for browsing and an automatically generated concept space algorithm for searching, can help improve browsing and/or searching the Internet. Our results indicate that a Kohonen self-organizing map (SOM)-based algorithm can successfully categorize a large and eclectic Internet information space (the Entertainment subcategory of Yahoo!) into manageable sub-spaces that users can successfully navigate to locate a homepage of interest to them. The SOM algorithm worked best with browsing tasks that were very broad, and in which subjects skipped around between categories. Subjects especially liked the visual and graphical aspects of the map. Subjects who tried to do a directed search, and those who wanted to use the more familiar mental models (alphabetic or hierarchical organization) for browsing, found that the map did not work well. The results from the concept space experiment were especially encouraging. There were no significant differences among the precision measures for the sets of documents identified by subject-suggested terms, thesaurus-suggested terms, and the combination of subject- and thesaurus-suggested terms. The recall measures indicated that the combination of subject- and thesaurus-suggested terms exhibited significantly better recall than subject-suggested terms alone. Furthermore, analysis of the homepages indicated that there was limited overlap between the homepages retrieved by the subject-suggested and thesaurus-suggested terms. Since the retrieved homepages were for the most part different, this suggests that a user can enhance a keyword-based search by using an automatically generated concept space. Subjects especially liked the level of control that they could exert over the search, and the fact that the terms suggested by the thesaurus were 'real' (i.e., originating in the homepages) and therefore guaranteed to have retrieval success.
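    
    The concept space comparison above rests on standard set-based precision and recall; a minimal sketch of those measures, with hypothetical homepage sets:
    
      def precision_recall(retrieved, relevant):
          """Set-based precision and recall over retrieved homepages."""
          hits = retrieved & relevant
          p = len(hits) / len(retrieved) if retrieved else 0.0
          r = len(hits) / len(relevant) if relevant else 0.0
          return p, r
      
      # Hypothetical homepage sets for one search topic
      subject_only = {"h1", "h2", "h3"}           # subject-suggested terms
      combined = {"h1", "h2", "h3", "h5", "h7"}   # subject + thesaurus terms
      relevant = {"h1", "h3", "h5", "h7", "h9"}
      print(precision_recall(subject_only, relevant))  # ~ (0.67, 0.40)
      print(precision_recall(combined, relevant))      # (0.8, 0.8): better recall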
  9. Vidmar, D.; Anderson, C.: History of Internet search tools (2002) 0.00
    
    Abstract
    Finding information on the Internet and the World Wide Web (WWW) has always been somewhat like trying to find a needle in a haystack. An added dimension to the haystack metaphor is that the Internet environment is a dynamic collection of information. Changes occur almost every second. New pages are added. Old pages are deleted or altered. From the very beginning of the WWW, search tools were needed to create order and provide an interface that allowed users to retrieve current documents while at the same time deleting inactive sites. Search databases and indexes could not be static; neither could the interface that served as the public relations instrument for the product. The tools of Internet searching emerged from the simple and modest beginnings of research and graduate school projects to the highly competitive and highly secretive proprietary corporate environment. As search tools evolved, they changed not only how people find information, but also how they view the world of the twenty-first century. The Internet grew out of a need to connect computers at one location to computers at other locations, thus creating a globalization of shared resources. The early iterations of shared data were basic but grew rapidly as more and more computers became connected. Connectivity led to an information base that multiplied and evolved exponentially. This information base ultimately became unwieldy, and some of the early Internet pioneers began to see the necessity for both an organizational scheme and a method for accessing what was available. Each new tool provided more order and, in general, an improved searching mechanism. From the early beginnings of Telnet, File Transfer Protocol (FTP), Archie, Veronica, and Gopher to the current iterations of Web search engines and search directories that use graphical interfaces, spiders, worms, robots, complex algorithms, proprietary information, competing interfaces, and advertising, access to the vast store of materials that is the Internet has depended upon search tools.
  10. Spink, A.; Jansen, B.J.; Pedersen, J.: Searching for people on Web search engines (2004) 0.00
    
    Abstract
    The Web is a communication and information technology that is often used for the distribution and retrieval of personal information. Many people and organizations mount Web sites containing large amounts of information on individuals, particularly about celebrities. However, limited studies have examined how people search for information on other people, using personal names, via Web search engines. This study explores the nature of personal name searching on Web search engines. The specific research questions addressed are: "Do personal names form a major part of queries to Web search engines?"; "What are the characteristics of personal name Web searching?"; and "How effective is personal name Web searching?". Random samples of queries from two Web search engines were analyzed. The findings show that: personal name searching is a common but not a major part of Web searching, with few people seeking information on celebrities via Web search engines; few personal name queries include double quotation marks or additional identifying terms; and name searches on Alta Vista included more advanced search features relative to those on AlltheWeb.com. The study discusses the implications of the findings for Web searching and search engines, and for further research.
  11. Hupfer, M.E.; Detlor, B.: Gender and Web information seeking : a self-concept orientation model (2006) 0.00
    
    Abstract
    Adapting the consumer behavior selectivity model to the Web environment, this paper's key contribution is the introduction of a self-concept orientation model of Web information seeking. This model, which addresses gender, effort, and information content factors, questions the commonly assumed equivalence of sex and gender by specifying the measurement of gender-related self-concept traits known as self- and other-orientation. Regression analyses identified associations between self-orientation, other-orientation, and self-reported search frequencies for content with identical subject domain (e.g., medical information, government information) and differing relevance (i.e., important to the individual personally versus important to someone close to him or her). Self- and other-orientation interacted such that when individuals were highly self-oriented, their frequency of search for both self- and other-relevant information depended on their level of other-orientation. Specifically, high-self/high-other individuals, with a comprehensive processing strategy, searched most often, whereas high-self/low-other respondents, with an effort minimization strategy, reported the lowest search frequencies. This interaction pattern was even more pronounced for other-relevant information seeking. We found no sex differences in search frequency for either self-relevant or other-relevant information.
  12. Chen, Z.; Meng, X.; Fowler, R.H.; Zhu, B.: Real-time adaptive feature and document learning for Web search (2001) 0.00
    
    Abstract
    Chen et al. report on the design of FEATURES, a web search engine with adaptive features based on minimal relevance feedback. Rather than developing user profiles from previous searcher activity, either at the server or client location, or updating indexes after search completion, FEATURES allows index and user characterization files to be updated during query modification on retrieval from a general purpose search engine. Indexing terms relevant to a query are defined as the union of all terms assigned to documents retrieved by the initial search run and are used to build a vector space model on this retrieved set. The top ten weighted terms are presented to the user for a relevant/non-relevant choice, which is used to modify the term weights. Documents are chosen if their summed term weights are greater than some threshold. A user evaluation of the top ten ranked documents as non-relevant will decrease these term weights, and a positive judgement will increase them. A new ordering of the retrieved set will generate new display lists of terms and documents. Precision is improved in a test on Alta Vista searches.
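    
    A minimal sketch of the feedback loop just described: judged terms move up or down in weight, and documents are kept only if their summed term weights clear a threshold. The step size and threshold are assumptions, not FEATURES' actual values:
    
      from collections import defaultdict
      
      def feedback_round(doc_terms, weights, judgments, step=0.5, threshold=1.0):
          """One round of minimal relevance feedback: adjust the weights of
          the judged terms, then keep documents whose summed term weights
          exceed the threshold. Step size and threshold are assumed values."""
          for term, relevant in judgments.items():
              weights[term] += step if relevant else -step
          scores = {d: sum(weights[t] for t in terms)
                    for d, terms in doc_terms.items()}
          return {d: s for d, s in scores.items() if s > threshold}
      
      docs = {"d1": {"web", "search", "engine"}, "d2": {"web", "spider"}}
      weights = defaultdict(float,
                            {"web": 0.6, "search": 0.7, "engine": 0.4, "spider": 0.3})
      kept = feedback_round(docs, weights, {"search": True, "spider": False})
      print(kept)  # only d1 clears the threshold; d2 drops out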
  13. Kules, B.; Shneiderman, B.: Users can change their web search tactics : design guidelines for categorized overviews (2008) 0.00
    
    Abstract
    Categorized overviews of web search results are a promising way to support user exploration, understanding, and discovery. These search interfaces combine a metadata-based overview with the list of search results to enable a rich form of interaction. A study of 24 sophisticated users carrying out complex tasks suggests how searchers may adapt their search tactics when using categorized overviews. This mixed methods study evaluated categorized overviews of web search results organized into thematic, geographic, and government categories. Participants conducted four exploratory searches during a 2-hour session to generate ideas for newspaper articles about specified topics such as "human smuggling." Results showed that subjects explored deeper while feeling more organized, and that the categorized overview helped subjects better assess their results, although no significant differences were detected in the quality of the article ideas. A qualitative analysis of searcher comments identified seven tactics that participants reported adopting when using categorized overviews. This paper concludes by proposing a set of guidelines for the design of exploratory search interfaces. An understanding of the impact of categorized overviews on search tactics will be useful to web search researchers, search interface designers, information architects and web developers.
  14. Fu, T.; Abbasi, A.; Chen, H.: A focused crawler for Dark Web forums (2010) 0.00
    
    Abstract
    The unprecedented growth of the Internet has given rise to the Dark Web, the problematic facet of the Web associated with cybercrime, hate, and extremism. Despite the need for tools to collect and analyze Dark Web forums, the covert nature of this part of the Internet makes traditional Web crawling techniques insufficient for capturing such content. In this study, we propose a novel crawling system designed to collect Dark Web forum content. The system uses a human-assisted accessibility approach to gain access to Dark Web forums. Several URL ordering features and techniques enable efficient extraction of forum postings. The system also includes an incremental crawler coupled with a recall-improvement mechanism intended to facilitate enhanced retrieval and updating of collected content. Experiments conducted to evaluate the effectiveness of the human-assisted accessibility approach and the recall-improvement-based, incremental-update procedure yielded favorable results. The human-assisted approach significantly improved access to Dark Web forums while the incremental crawler with recall improvement also outperformed standard periodic- and incremental-update approaches. Using the system, we were able to collect over 100 Dark Web forums from three regions. A case study encompassing link and content analysis of collected forums was used to illustrate the value and importance of gathering and analyzing content from such online communities.
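    
    A minimal sketch of the URL-ordering idea such a focused crawler relies on: frontier URLs are ranked by simple relevance features so that likely posting pages are fetched first. The scoring rules below are illustrative assumptions, not the paper's feature set:
    
      import heapq
      
      def url_score(url):
          """Toy URL-ordering features; higher means more promising. The
          real system used richer features, these rules are illustrative."""
          score = 0.0
          if "viewtopic" in url or "showthread" in url:
              score += 2.0   # likely a forum posting page
          if "login" in url or "member" in url:
              score -= 1.0   # unlikely to contain postings
          return score
      
      def crawl_order(frontier):
          """Fetch order for a focused crawler: best-scored URLs first."""
          heap = [(-url_score(u), u) for u in frontier]
          heapq.heapify(heap)
          return [heapq.heappop(heap)[1] for _ in range(len(heap))]
      
      frontier = [
          "http://forum.example/login.php",
          "http://forum.example/viewtopic.php?t=42",
          "http://forum.example/index.php",
      ]
      print(crawl_order(frontier))  # posting page first, login page last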
  15. Souza, J.; Carvalho, A.; Cristo, M.; Moura, E.; Calado, P.; Chirita, P.-A.; Nejdl, W.: Using site-level connections to estimate link confidence (2012) 0.00
    
    Abstract
    Search engines are essential tools for web users today. They rely on a large number of features to compute the rank of search results for each given query. The estimated reputation of pages is among the effective features available for search engine designers, probably being adopted by most current commercial search engines. Page reputation is estimated by analyzing the linkage relationships between pages. This information is used by link analysis algorithms as a query-independent feature, to be taken into account when computing the rank of the results. Unfortunately, several types of links found on the web may damage the estimated page reputation and thus cause a negative effect on the quality of search results. This work studies alternatives to reduce the negative impact of such noisy links. More specifically, the authors propose and evaluate new methods that deal with noisy links, considering scenarios where the reputation of pages is computed using the PageRank algorithm. They show, through experiments with real web content, that their methods achieve significant improvements when compared to previous solutions proposed in the literature.
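    
    The noisy-link problem above can be illustrated with a small sketch: PageRank computed by power iteration, with each link damped by an estimated confidence so that links judged noisy contribute less reputation. The confidence values, damping factor, and graph are assumptions for illustration, not the authors' method:
    
      import numpy as np
      
      def pagerank(links, confidence, d=0.85, iters=50):
          """Power-iteration PageRank in which every link carries a
          confidence weight in [0, 1], so links judged noisy contribute
          proportionally less of their source's reputation."""
          n = max(max(s, t) for s, t in links) + 1
          W = np.zeros((n, n))
          for (s, t), c in zip(links, confidence):
              W[t, s] += c
          out = W.sum(axis=0)              # total outgoing confidence per node
          out[out == 0] = 1.0              # guard against dangling nodes
          M = W / out                      # normalize each source's links
          r = np.full(n, 1.0 / n)
          for _ in range(iters):
              r = (1 - d) / n + d * (M @ r)
          return r
      
      links = [(0, 1), (1, 2), (2, 0), (0, 2)]
      conf = [1.0, 1.0, 1.0, 0.2]          # illustrative: last link judged noisy
      print(pagerank(links, conf).round(3))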

Types

  • a 802
  • el 92
  • m 78
  • x 15
  • s 14
  • r 4
  • p 2