Search (205 results, page 2 of 11)

  • language_ss:"e"
  • theme_ss:"Suchmaschinen"
  • year_i:[2000 TO 2010}
  1. Jansen, B.J.; Spink, A.: How are we searching the World Wide Web? : A comparison of nine search engine transaction logs (2006) 0.03
    0.027630107 = product of:
      0.055260215 = sum of:
        0.055260215 = product of:
          0.11052043 = sum of:
            0.11052043 = weight(_text_:web in 968) [ClassicSimilarity], result of:
              0.11052043 = score(doc=968,freq=26.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.65002745 = fieldWeight in 968, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=968)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
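
     The indented figures above are Lucene's ClassicSimilarity explain output for the query term "web". A minimal Python sketch reproducing the same arithmetic from the numbers shown for this entry (tf and idf follow the standard ClassicSimilarity definitions; all other constants are copied from the explain tree):

       import math

       # Figures copied from the explain tree for doc 968 (entry 1)
       freq, doc_freq, max_docs = 26.0, 4597, 44218
       query_norm, field_norm = 0.052098576, 0.0390625

       tf = math.sqrt(freq)                             # 5.0990195
       idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.2635105
       query_weight = idf * query_norm                  # 0.17002425 (queryWeight)
       field_weight = tf * idf * field_norm             # 0.65002745 (fieldWeight)
       score = query_weight * field_weight              # 0.11052043
       final = score * 0.5 * 0.5                        # coord(1/2) applied twice
       print(final)                                     # ~0.027630107, the score shown above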
    
    Abstract
    The Web and especially major Web search engines are essential tools in the quest to locate online information for many people. This paper reports results from research that examines characteristics and changes in Web searching from nine studies of five Web search engines based in the US and Europe. We compare interactions occurring between users and Web search engines from the perspectives of session length, query length, query complexity, and content viewed among the Web search engines. The results of our research show that (1) users are viewing fewer result pages, (2) searchers on US-based Web search engines use more query operators than searchers on European-based search engines, (3) there are statistically significant differences in the use of Boolean operators and result pages viewed, and (4) one cannot necessarily apply results from studies of one particular Web search engine to another Web search engine. The widespread use of Web search engines, employment of simple queries, and decreased viewing of result pages may have resulted from algorithmic enhancements by Web search engine companies. We discuss the implications of the findings for the development of Web search engines and design of online content.
  2. Naing, M.-M.; Lim, E.-P.; Chiang, R.H.L.: Extracting link chains of relationship instances from a Web site (2006) 0.03
    0.027587567 = product of:
      0.055175133 = sum of:
        0.055175133 = product of:
          0.110350266 = sum of:
            0.110350266 = weight(_text_:web in 6111) [ClassicSimilarity], result of:
              0.110350266 = score(doc=6111,freq=18.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.64902663 = fieldWeight in 6111, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6111)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Web pages from a Web site can often be associated with concepts in an ontology, and pairs of Web pages also can be associated with relationships between concepts. With such associations, the Web site can be searched, browsed, or even reorganized based on the concept and relationship labels of its Web pages. In this article, we study the link chain extraction problem that is critical to the extraction of Web pages that are related. A link chain is an ordered list of anchor elements linking two Web pages related by some semantic relationship. We propose a link chain extraction method that derives extraction rules for identifying the anchor elements forming the link chains. We applied the proposed method to two well-structured Web sites and found that its performance in terms of precision and recall is good, even with a small number of training examples.
  3. Broder, A.; Kumar, R.; Maghoul, F.; Raghavan, P.; Rajagopalan, S.; Stata, R.; Tomkins, A.; Wiener, J.: Graph structure in the Web (2000) 0.03
    0.027416745 = product of:
      0.05483349 = sum of:
        0.05483349 = product of:
          0.10966698 = sum of:
            0.10966698 = weight(_text_:web in 5595) [ClassicSimilarity], result of:
              0.10966698 = score(doc=5595,freq=10.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.6450079 = fieldWeight in 5595, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5595)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The study of the web as a graph is not only fascinating in its own right, but also yields valuable insight into web algorithms for crawling, searching and community discovery, and the sociological phenomena which characterize its evolution. We report on experiments on local and global properties of the web graph using two Altavista crawls each with over 200M pages and 1.5 billion links. Our study indicates that the macroscopic structure of the web is considerably more intricate than suggested by earlier experiments on a smaller scale
  4. Nicholson, S.: A proposal for categorization and nomenclature for Web search tools (2000) 0.03
    0.027416745 = product of:
      0.05483349 = sum of:
        0.05483349 = product of:
          0.10966698 = sum of:
            0.10966698 = weight(_text_:web in 6103) [ClassicSimilarity], result of:
              0.10966698 = score(doc=6103,freq=10.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.6450079 = fieldWeight in 6103, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6103)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Ambiguities in Web search tool (more commonly known as "search engine") terminology are problematic when conducting precise, replicable research or when teaching others to use search tools. Standardized terminology would enable Web searchers to be aware of subtle differences between Web search tools and the implications of these for searching. A categorization and nomenclature for standardized classifications of different aspects of Web search tools is proposed, and advantages and disadvantages of using tools in each category are discussed
  5. Spink, A.; Jansen, B.J.; Pedersen, J.: Searching for people on Web search engines (2004) 0.03
    0.026546149 = product of:
      0.053092297 = sum of:
        0.053092297 = product of:
          0.106184594 = sum of:
            0.106184594 = weight(_text_:web in 4429) [ClassicSimilarity], result of:
              0.106184594 = score(doc=4429,freq=24.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.6245262 = fieldWeight in 4429, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4429)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Web is a communication and information technology that is often used for the distribution and retrieval of personal information. Many people and organizations mount Web sites containing large amounts of information on individuals, particularly about celebrities. However, limited studies have examined how people search for information on other people, using personal names, via Web search engines. Explores the nature of personal name searching on Web search engines. The specific research questions addressed in the study are: "Do personal names form a major part of queries to Web search engines?"; "What are the characteristics of personal name Web searching?"; and "How effective is personal name Web searching?". Random samples of queries from two Web search engines were analyzed. The findings show that: personal name searching is a common but not a major part of Web searching with few people seeking information on celebrities via Web search engines; few personal name queries include double quotations or additional identifying terms; and name searches on Alta Vista included more advanced search features relative to those on AlltheWeb.com. Discusses the implications of the findings for Web searching and search engines, and further research.
  6. Nicholson, S.: Raising reliability of Web search tool research through replication and chaos theory (2000) 0.03
    0.026279347 = product of:
      0.052558694 = sum of:
        0.052558694 = product of:
          0.10511739 = sum of:
            0.10511739 = weight(_text_:web in 4806) [ClassicSimilarity], result of:
              0.10511739 = score(doc=4806,freq=12.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.6182494 = fieldWeight in 4806, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4806)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Because the WWW is a dynamic collection of information, the Web search tools (or 'search engines') that index the Web are dynamic. Traditional information retrieval evaluation techniques may not provide reliable results when applied to Web search tools. This study is the result of 10 replications of the classic 1996 Ding and Marchionini Web search tool research. It explores the effects that replication can have on transforming unreliable results from one iteration into replicable, and therefore reliable, results following multiple iterations.
    Footnote
    Cf.: Ding, W. and G. Marchionini: A comparative study of Web search service performance
  7. Ozmutlu, S.; Spink, A.; Ozmutlu, H.C.: A day in the life of Web searching : an exploratory study (2004) 0.03
    0.026279347 = product of:
      0.052558694 = sum of:
        0.052558694 = product of:
          0.10511739 = sum of:
            0.10511739 = weight(_text_:web in 2530) [ClassicSimilarity], result of:
              0.10511739 = score(doc=2530,freq=12.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.6182494 = fieldWeight in 2530, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2530)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Understanding Web searching behavior is important in developing more successful and cost-efficient Web search engines. We provide results from a comparative time-based Web study of US-based Excite and Norwegian-based Fast Web search logs, exploring variations in user searching related to changes in time of the day. Findings suggest: (1) fluctuations in Web user behavior over the day, (2) user investigations of query results are much longer, and submission of queries and number of users are much higher in the mornings, and (3) some query characteristics, including terms per query and query reformulation, remain steady throughout the day. Implications and further research are discussed.
  8. Web work : Information seeking and knowledge work on the World Wide Web (2000) 0.03
    0.026009807 = product of:
      0.052019615 = sum of:
        0.052019615 = product of:
          0.10403923 = sum of:
            0.10403923 = weight(_text_:web in 1190) [ClassicSimilarity], result of:
              0.10403923 = score(doc=1190,freq=4.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.6119082 = fieldWeight in 1190, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1190)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Thelwall, M.; Vaughan, L.: New versions of PageRank employing alternative Web document models (2004) 0.03
    0.026009807 = product of:
      0.052019615 = sum of:
        0.052019615 = product of:
          0.10403923 = sum of:
            0.10403923 = weight(_text_:web in 674) [ClassicSimilarity], result of:
              0.10403923 = score(doc=674,freq=16.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.6119082 = fieldWeight in 674, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=674)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Introduces several new versions of PageRank (the link based Web page ranking algorithm), based on an information science perspective on the concept of the Web document. Although the Web page is the typical indivisible unit of information in search engine results and most Web information retrieval algorithms, other research has suggested that aggregating pages based on directories and domains gives promising alternatives, particularly when Web links are the object of study. The new algorithms introduced based on these alternatives were used to rank four sets of Web pages. The ranking results were compared with human subjects' rankings. The results of the tests were somewhat inconclusive: the new approach worked well for the set that includes pages from different Web sites; however, it does not work well in ranking pages that are from the same site. It seems that the new algorithms may be effective for some tasks but not for others, especially when only low numbers of links are involved or the pages to be ranked are from the same site or directory.
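
     The aggregation idea described above can be sketched as: collapse the page-level link graph into directory- or domain-level nodes, then run an ordinary PageRank iteration over the collapsed graph. The sketch below is illustrative only, not the algorithms introduced in the article; the group_of mapping, the damping factor, and the plain power-iteration PageRank (dangling pages ignored) are assumptions for the example:

       from collections import defaultdict
       from urllib.parse import urlsplit

       def aggregate(links, group_of):
           # Collapse a page-level graph {url: [urls]} into a group-level graph,
           # keeping only links that cross group boundaries.
           grouped = defaultdict(set)
           for src, targets in links.items():
               for t in targets:
                   if group_of(src) != group_of(t):
                       grouped[group_of(src)].add(group_of(t))
           return {g: sorted(ts) for g, ts in grouped.items()}

       def pagerank(links, d=0.85, iterations=50):
           # Plain power-iteration PageRank; dangling nodes are ignored for brevity.
           nodes = set(links) | {t for ts in links.values() for t in ts}
           rank = {n: 1.0 / len(nodes) for n in nodes}
           for _ in range(iterations):
               new = {n: (1.0 - d) / len(nodes) for n in nodes}
               for src, targets in links.items():
                   for t in targets:
                       new[t] += d * rank[src] / len(targets)
               rank = new
           return rank

       # Domain-level variant: group pages by host before ranking.
       by_domain = lambda url: urlsplit(url).netloc
       # domain_ranks = pagerank(aggregate(page_links, by_domain))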
  10. Spink, A.: Web search : emerging patterns (2004) 0.03
    0.026009807 = product of:
      0.052019615 = sum of:
        0.052019615 = product of:
          0.10403923 = sum of:
            0.10403923 = weight(_text_:web in 23) [ClassicSimilarity], result of:
              0.10403923 = score(doc=23,freq=16.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.6119082 = fieldWeight in 23, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=23)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article examines the public searching of the Web and provides an overview of recent research exploring what we know about how people search the Web. The article reports selected findings from studies conducted from 1997 to 2002 using large-scale Web user data provided by commercial Web companies, including Excite, Ask Jeeves, and AlltheWeb.com. We examined what topics people search for on the Web; how people search the Web using keywords in queries during search sessions; and the different types of searches conducted for multimedia, medical, e-commerce, sex, etc., information. Key findings include changes and differences in search topics over time, including a shift from entertainment to e-commerce searching by largely North American users. Findings show little change in current patterns of Web searching by many users from short queries and sessions. Alternatively, we see more complex searching behaviors by some users, including successive and multitasking searches.
  11. Sherman, C.; Price, G.: The invisible Web : uncovering information sources search engines can't see (2001) 0.03
    0.025416005 = product of:
      0.05083201 = sum of:
        0.05083201 = product of:
          0.10166402 = sum of:
            0.10166402 = weight(_text_:web in 62) [ClassicSimilarity], result of:
              0.10166402 = score(doc=62,freq=22.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.59793836 = fieldWeight in 62, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=62)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Enormous expanses of the Internet are unreachable with standard Web search engines. This book provides the key to finding these hidden resources by identifying how to uncover and use invisible Web resources. Mapping the invisible Web, when and how to use it, assessing the validity of the information, and the future of Web searching are topics covered in detail. Only 16 percent of Net-based information can be located using a general search engine. The other 84 percent is what is referred to as the invisible Web, made up of information stored in databases. Unlike pages on the visible Web, information in databases is generally inaccessible to the software spiders and crawlers that compile search engine indexes. As Web technology improves, more and more information is being stored in databases that feed into dynamically generated Web pages. The tips provided in this resource will ensure that those databases are exposed and Net-based research will be conducted in the most thorough and effective manner. Discusses the use of online information resources and problems caused by dynamically generated Web pages, paying special attention to information mapping, assessing the validity of information, and the future of Web searching.
  12. Spink, A.; Wolfram, D.; Jansen, B.J.; Saracevic, T.: Searching the Web : the public and their queries (2001) 0.03
    0.025276989 = product of:
      0.050553977 = sum of:
        0.050553977 = product of:
          0.101107955 = sum of:
            0.101107955 = weight(_text_:web in 6980) [ClassicSimilarity], result of:
              0.101107955 = score(doc=6980,freq=34.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.59466785 = fieldWeight in 6980, product of:
                  5.8309517 = tf(freq=34.0), with freq of:
                    34.0 = termFreq=34.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=6980)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In previous articles, we reported the state of Web searching in 1997 (Jansen, Spink, & Saracevic, 2000) and in 1999 (Spink, Wolfram, Jansen, & Saracevic, 2001). Such snapshot studies and statistics on Web use appear regularly (OCLC, 1999), but provide little information about Web searching trends. In this article, we compare and contrast results from our two previous studies of Excite query data sets, each containing over 1 million queries submitted by over 200,000 Excite users, collected on 16 September 1997 and 20 December 1999. We examine how public Web searching changed during that 2-year time period. As Table 1 shows, the overall structure of Web queries in some areas did not change, while in others we see change from 1997 to 1999. Our comparison shows how Web searching changed incrementally and also dramatically. We see some moves toward greater simplicity, including shorter queries (i.e., fewer terms) and shorter sessions (i.e., fewer queries per user), with little modification (addition or deletion) of terms in subsequent queries. The trend toward shorter queries suggests that Web information content should target specific terms in order to reach Web users. Another trend was to view fewer pages of results per query. Most Excite users examined only one page of results per query, since an Excite results page contains ten ranked Web sites. Were users satisfied with the results and therefore did not need to view more pages? It appears that the public continues to have a low tolerance for wading through retrieved sites. This decline in interactivity levels is a disturbing finding for the future of Web searching. Queries that included Boolean operators were in the minority, but the percentage increased between the two time periods. Most Boolean use involved the AND operator, with many mistakes. The use of relevance feedback almost doubled from 1997 to 1999, but overall use was still small. An unusually large number of terms were used with low frequency, such as personal names, spelling errors, non-English words, and Web-specific terms such as URLs. Web query vocabulary contains more words than are found in large English texts in general. The public language of Web queries has its own unique characteristics. How did Web searching topics change from 1997 to 1999? We classified a random sample of 2,414 queries from 1997 and 2,539 queries from 1999 into 11 categories (Table 2). From 1997 to 1999, Web searching shifted from entertainment, recreation, and sex and pornography preferences to e-commerce-related topics under commerce, travel, employment, and economy. This shift coincided with changes in information distribution on the publicly indexed Web.
  13. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.03
    0.025276989 = product of:
      0.050553977 = sum of:
        0.050553977 = product of:
          0.101107955 = sum of:
            0.101107955 = weight(_text_:web in 4709) [ClassicSimilarity], result of:
              0.101107955 = score(doc=4709,freq=34.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.59466785 = fieldWeight in 4709, product of:
                  5.8309517 = tf(freq=34.0), with freq of:
                    34.0 = termFreq=34.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4709)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
    Source
    http://www.searchenginejournal.com/swoogle-an-engine-for-the-semantic-web/5469/
    Theme
    Semantic Web
  14. MacLeod, R.: Promoting a subject gateway : a case study from EEVL (Edinburgh Engineering Virtual Library) (2000) 0.02
    0.024956053 = product of:
      0.049912106 = sum of:
        0.049912106 = product of:
          0.09982421 = sum of:
            0.09982421 = weight(_text_:22 in 4872) [ClassicSimilarity], result of:
              0.09982421 = score(doc=4872,freq=4.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.54716086 = fieldWeight in 4872, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4872)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:40:22
  15. Back, J.: ¬An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.02
    0.024705233 = product of:
      0.049410466 = sum of:
        0.049410466 = product of:
          0.09882093 = sum of:
            0.09882093 = weight(_text_:22 in 3445) [ClassicSimilarity], result of:
              0.09882093 = score(doc=3445,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.5416616 = fieldWeight in 3445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3445)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    25. 8.2005 17:42:22
  16. Bawden, D.: Google and the universe of knowledge (2008) 0.02
    0.024705233 = product of:
      0.049410466 = sum of:
        0.049410466 = product of:
          0.09882093 = sum of:
            0.09882093 = weight(_text_:22 in 844) [ClassicSimilarity], result of:
              0.09882093 = score(doc=844,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.5416616 = fieldWeight in 844, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=844)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    7. 6.2008 16:22:20
  17. Spink, A.; Gunar, O.: E-Commerce Web queries : Excite and AskJeeves study (2001) 0.02
    0.024522282 = product of:
      0.049044564 = sum of:
        0.049044564 = product of:
          0.09808913 = sum of:
            0.09808913 = weight(_text_:web in 910) [ClassicSimilarity], result of:
              0.09808913 = score(doc=910,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.5769126 = fieldWeight in 910, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.125 = fieldNorm(doc=910)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  18. Gerhart, S.L.: Do Web search engines suppress controversy? : Simulating the exchange process (2004) 0.02
    0.024522282 = product of:
      0.049044564 = sum of:
        0.049044564 = product of:
          0.09808913 = sum of:
            0.09808913 = weight(_text_:web in 8164) [ClassicSimilarity], result of:
              0.09808913 = score(doc=8164,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.5769126 = fieldWeight in 8164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.125 = fieldNorm(doc=8164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. Munson, K.I.: Internet search engines : understanding their design to improve information retrieval (2000) 0.02
    0.024522282 = product of:
      0.049044564 = sum of:
        0.049044564 = product of:
          0.09808913 = sum of:
            0.09808913 = weight(_text_:web in 6105) [ClassicSimilarity], result of:
              0.09808913 = score(doc=6105,freq=8.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.5769126 = fieldWeight in 6105, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6105)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The relationship between the methods currently used for indexing the World Wide Web and the programs, languages, and protocols on which the World Wide Web is based is examined. Two methods for indexing the Web are described, directories being briefly discussed while search engines are considered in detail. The automated approach used to create these tools is examined with special emphasis on the parts of a document used in indexing. Shortcomings of the approach are described. Suggestions for effective use of Web search engines are given
  20. Can, F.; Nuray, R.; Sevdik, A.B.: Automatic performance evaluation of Web search engines (2004) 0.02
    0.02432995 = product of:
      0.0486599 = sum of:
        0.0486599 = product of:
          0.0973198 = sum of:
            0.0973198 = weight(_text_:web in 2570) [ClassicSimilarity], result of:
              0.0973198 = score(doc=2570,freq=14.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.57238775 = fieldWeight in 2570, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2570)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Measuring the information retrieval effectiveness of World Wide Web search engines is costly because of the human relevance judgments involved. However, both for business enterprises and for individuals it is important to know the most effective Web search engines, since such search engines help their users find a higher number of relevant Web pages with less effort. Furthermore, this information can be used for several practical purposes. In this study we introduce an automatic Web search engine evaluation method as an efficient and effective assessment tool for such systems. The experiments, based on eight Web search engines, 25 queries, and binary user relevance judgments, show that our method provides results consistent with human-based evaluations. It is shown that the observed consistencies are statistically significant. This indicates that the new method can be successfully used in the evaluation of Web search engines.

Types

  • a 174
  • el 21
  • m 17
  • s 4
  • r 1
  • x 1