Search (17 results, page 1 of 1)

  • author_ss:"Bar-Ilan, J."
  • year_i:[2000 TO 2010}
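The two facet filters above are Lucene/Solr filter-query syntax (`[2000 TO 2010}` is a range that includes 2000 and excludes 2010). As a minimal sketch of how this result set could be reproduced programmatically — the endpoint URL is hypothetical; only the field names and the `q`/`fq`/`rows` parameter names are standard Solr:

```python
# Hypothetical sketch: the two facet filters above expressed as Solr
# filter queries. The endpoint is an assumption; author_ss and year_i
# are the field names shown in the filter chips.
from urllib.parse import urlencode

params = [
    ("q", "*:*"),
    ("fq", 'author_ss:"Bar-Ilan, J."'),
    # [2000 TO 2010} is Solr range syntax: 2000 inclusive, 2010 exclusive
    ("fq", "year_i:[2000 TO 2010}"),
    ("rows", "20"),
]
query_string = urlencode(params)
# e.g. requests.get("https://example.org/solr/select?" + query_string)
print(query_string)
```

Both `fq` clauses are sent as separate parameters, which Solr intersects, mirroring the two active filters of this result page.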
  1. Bar-Ilan, J.: Methods for measuring search engine performance over time (2002) 0.00
    
    Abstract
    This study introduces methods for evaluating search engine performance over a time period. Several measures are defined, which as a whole describe search engine functionality over time. The necessary setup for such studies is described, and the use of these measures is illustrated through a specific example. The set of measures introduced here may serve as a guideline for the search engines for testing and improving their functionality. We recommend setting up a standard suite of measures for evaluating search engine performance.
    Type
    a
  2. Bar-Ilan, J.: Information hub blogs (2005) 0.00
    
    Type
    a
  3. Bar-Ilan, J.: What do we know about links and linking? : a framework for studying links in academic environments (2005) 0.00
    
    Abstract
    The Web is an enormous set of documents connected through hypertext links created by authors of Web pages. These links have been studied quantitatively, but little has been done so far in order to understand why these links are created. As a first step towards a better understanding, we propose a classification of link types in academic environments on the Web. The classification is multi-faceted and involves different aspects of the source and the target page, the link area and the relationship between the source and the target. Such classification provides an insight into the diverse uses of hypertext links on the Web, and has implications for browsing and ranking in IR systems by differentiating between different types of links. As a case study we classified a sample of links between sites of Israeli academic institutions.
    Type
    a
  4. Bar-Ilan, J.; Peritz, B.C.: Evolution, continuity, and disappearance of documents on a specific topic on the Web : a longitudinal study of "informetrics" (2004) 0.00
    
    Abstract
    The present paper analyzes the changes that occurred to a set of Web pages related to "informetrics" over a period of 5 years between June 1998 and June 2003. Four times during this time span, in 1998, 1999, 2002, and 2003, we monitored previously located pages and searched for new ones related to the topic. Thus, we were able to study the growth of the topic, while analyzing the rates of change and disappearance. The results indicate that modification, disappearance, and resurfacing cannot be ignored when studying the structure and development of the Web.
    Type
    a
  5. Bar-Ilan, J.; Peritz, B.C.: A method for measuring the evolution of a topic on the Web : the case of "informetrics" (2009) 0.00
    
    Abstract
    The universe of information has been enriched by the creation of the World Wide Web, which has become an indispensable source for research. Since this source is growing at an enormous speed, an in-depth look at its performance to create a method for its evaluation has become necessary; however, growth is not the only process that influences the evolution of the Web. During their lifetime, Web pages may change their content and links to/from other Web pages, be duplicated or moved to a different URL, be removed from the Web either temporarily or permanently, and be temporarily inaccessible due to server and/or communication failures. To obtain a better understanding of these processes, we developed a method for tracking topics on the Web for long periods of time, without the need to employ a crawler and relying only on publicly available resources. The multiple data-collection methods used allow us to discover new pages related to the topic, to identify changes to existing pages, and to detect previously existing pages that have been removed or whose content is not relevant anymore to the specified topic. The method is demonstrated through monitoring Web pages that contain the term informetrics for a period of 8 years. The data-collection method also allowed us to analyze the dynamic changes in search engine coverage, illustrated here on Google - the search engine used for the longest period of time for data collection in this project.
    Type
    a
  6. Bar-Ilan, J.; Gutman, T.: How do search engines respond to some non-English queries? (2005) 0.00
    
    Type
    a
  7. Bar-Ilan, J.: The use of Web search engines in information science research (2003) 0.00
    
    Abstract
    The World Wide Web was created in 1989, but it has already become a major information channel and source, influencing our everyday lives, commercial transactions, and scientific communication, to mention just a few areas. The seventeenth-century philosopher Descartes proclaimed, "I think, therefore I am" (cogito, ergo sum). Today the Web is such an integral part of our lives that we could rephrase Descartes' statement as "I have a Web presence, therefore I am." Because many people, companies, and organizations take this notion seriously, in addition to more substantial reasons for publishing information on the Web, the number of Web pages is in the billions and growing constantly. However, it is not sufficient to have a Web presence; tools that enable users to locate Web pages are needed as well. The major tools for discovering and locating information on the Web are search engines. This review discusses the use of Web search engines in information science research. Before going into detail, we should define the terms "information science," "Web search engine," and "use" in the context of this review.
    Type
    a
  8. Bar-Ilan, J.: Evaluating the stability of the search tools Hotbot and Snap : a case study (2000) 0.00
    
    Abstract
    Discusses the results of a case study in which 20 random queries were presented for ten consecutive days to Hotbot and Snap, two search tools that draw their results from the database of Inktomi. The results show huge daily fluctuations in the number of hits retrieved by Hotbot, and high stability in the hits displayed by Snap. These findings should alert users of Hotbot to its instability as of October 1999, and they raise questions about the reliability of previous studies estimating the size of Hotbot based on its overlap with other search engines.
    Type
    a
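The stability comparison in entry 8 rests on how much a query's daily hit counts vary across the ten days. One plausible way to quantify that — a sketch only; the paper does not specify this exact statistic, and the numbers below are invented for illustration — is the coefficient of variation of the daily counts:

```python
# Sketch of a per-query stability measure: coefficient of variation
# (population std / mean) of daily hit counts. Data invented; not from
# the study.
from statistics import mean, pstdev

def fluctuation(daily_hits):
    """Coefficient of variation of a query's daily hit counts."""
    m = mean(daily_hits)
    return pstdev(daily_hits) / m if m else 0.0

stable   = [100, 101, 99, 100, 100]   # Snap-like behaviour: near-constant
unstable = [100, 10, 250, 40, 180]    # Hotbot-like behaviour: wild swings
print(fluctuation(stable), fluctuation(unstable))
```

A value near 0 indicates the stable behaviour reported for Snap; a large value indicates the daily fluctuations reported for Hotbot.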
  9. Bar-Ilan, J.: The Web as an information source on informetrics? : A content analysis (2000) 0.00
    
    Abstract
    This article addresses the question of whether the Web can serve as an information source for research. Specifically, it analyzes by way of content analysis the Web pages retrieved by the major search engines on a particular date (June 7, 1998), as a result of the query 'informetrics OR informetric'. In 807 out of the 942 retrieved pages, the search terms were mentioned in the context of information science. Over 70% of the pages contained only indirect information on the topic, in the form of hypertext links and bibliographical references without annotation. The bibliographical references extracted from the Web pages were analyzed, and lists of most productive authors, most cited authors, works, and sources were compiled. The list of references obtained from the Web was also compared to data retrieved from commercial databases. For most cases, the list of references extracted from the Web outperformed the commercial bibliographic databases. The results of these comparisons indicate that valuable, freely available data is hidden in the Web waiting to be extracted from the millions of Web pages.
    Type
    a
  10. Bar-Ilan, J.: Web links and search engine ranking : the case of Google and the query "Jew" (2006) 0.00
    
    Abstract
    The World Wide Web has become one of our more important information sources, and commercial search engines are the major tools for locating information; however, it is not enough for a Web page to be indexed by the search engines-it also must rank high on relevant queries. One of the parameters involved in ranking is the number and quality of links pointing to the page, based on the assumption that links convey appreciation for a page. This article presents the results of a content analysis of the links to two top pages retrieved by Google for the query "jew" as of July 2004: the "jew" entry on the free online encyclopedia Wikipedia, and the home page of "Jew Watch," a highly anti-Semitic site. The top results for the query "jew" gained public attention in April 2004, when it was noticed that the "Jew Watch" homepage ranked number 1. From this point on, both sides engaged in "Googlebombing" (i.e., increasing the number of links pointing to these pages). The results of the study show that most of the links to these pages come from blogs and discussion lists, and the number of links pointing to these pages in appreciation of their content is extremely small. These findings have implications for ranking algorithms based on link counts, and emphasize the huge difference between Web links and citations in the scientific community.
    Type
    a
  11. Bar-Ilan, J.; Keenoy, K.; Levene, M.; Yaari, E.: Presentation bias is significant in determining user preference for search results : a user study (2009) 0.00
    
    Abstract
    We describe the results of an experiment designed to study user preferences for different orderings of search results from three major search engines. In the experiment, 65 users were asked to choose the best ordering from two different orderings of the same set of search results: Each pair consisted of the search engine's original top-10 ordering and a synthetic ordering created from the same top-10 results retrieved by the search engine. This process was repeated for 12 queries and nine different synthetic orderings. The results show that there is a slight overall preference for the search engines' original orderings, but the preference is rarely significant. Users' choice of the best result from each of the different orderings indicates that placement on the page (i.e., whether the result appears near the top) is the most important factor used in determining the quality of the result, not the actual content displayed in the top-10 snippets. In addition to the placement bias, we detected a small bias due to the reputation of the sites appearing in the search results.
    Type
    a
  12. Bar-Ilan, J.; Peritz, B.C.: Informetric theories and methods for exploring the Internet : an analytical survey of recent research literature (2002) 0.00
    
    Abstract
    The Internet, and more specifically the World Wide Web, is quickly becoming one of our main information sources. Systematic evaluation and analysis can help us understand how this medium works, grows, and changes, and how it influences our lives and research. New approaches in informetrics can provide an appropriate means towards achieving the above goals, and towards establishing a sound theory. This paper presents a selective review of research based on the Internet, using bibliometric and informetric methods and tools. Some of these studies clearly show the applicability of bibliometric laws to the Internet, while others establish new definitions and methods based on the respective definitions for printed sources. Both informetrics and Internet research can gain from these additional methods.
    Type
    a
  13. Bar-Ilan, J.; Belous, Y.: Children as architects of Web directories : an exploratory study (2007) 0.00
    
    Abstract
    Children are increasingly using the Web. Cognitive theory tells us that directory structures are especially suited for information retrieval by children; however, empirical results show that they prefer keyword searching. One of the reasons for these findings could be that the directory structures and terminology are created by grown-ups. Using a card-sorting method and an enveloping system, we simulated the structure of a directory. Our goal was to try to understand what browsable, hierarchical subject categories children create when suggested terms are supplied and they are free to add or delete terms. Twelve groups of four children each (fourth and fifth graders) participated in our exploratory study. The initial terminology presented to the children was based on names of categories used in popular directories, in the sections on Arts, Television, Music, Cinema, and Celebrities. The children were allowed to introduce additional cards and change the terms appearing on the 61 cards. Findings show that the different groups reached reasonable consensus; the majority of the category names used by existing directories were acceptable to them, and only a small minority of the terms caused confusion. Our recommendation is to include children in the design process of directories, not only in designing the interface but also in designing the content structure.
    Type
    a
  14. Bar-Ilan, J.; Levene, M.; Mat-Hassan, M.: Methods for evaluating dynamic changes in search engine rankings : a case study (2006) 0.00
    
    Abstract
    Purpose - The objective of this paper is to characterize the changes in the rankings of the top ten results of major search engines over time and to compare the rankings between these engines. Design/methodology/approach - The paper compares rankings of the top-ten results of the search engines Google and AlltheWeb on ten identical queries over a period of three weeks. Only the top-ten results were considered, since users do not normally inspect more than the first results page returned by a search engine. The experiment was repeated twice, in October 2003 and in January 2004, in order to assess changes to the top-ten results of some of the queries during the three-month interval. In order to assess the changes in the rankings, three measures were computed for each data collection point and each search engine. Findings - The findings in this paper show that the rankings of AlltheWeb were highly stable over each period, while the rankings of Google underwent constant yet minor changes, with occasional major ones. Changes over time can be explained by the dynamic nature of the web or by fluctuations in the search engines' indexes. The top-ten results of the two search engines had surprisingly low overlap. With such small overlap, the task of comparing the rankings of the two engines becomes extremely challenging. Originality/value - The paper shows that because of the abundance of information on the web, ranking search results is of extreme importance. The paper compares several measures for computing the similarity between rankings of search tools, and shows that none of the measures is fully satisfactory as a standalone measure. It also demonstrates the apparent differences in the ranking algorithms of two widely used search engines.
    Type
    a
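The "surprisingly low overlap" finding in entry 14 rests on a simple computable quantity: the fraction of URLs shared by two engines' top-ten lists. As a hedged sketch — the URLs below are invented, and this is only the simplest of the paper's three measures:

```python
# Sketch of top-10 overlap between two engines' result lists for the
# same query. URLs are invented for illustration.
def top10_overlap(results_a, results_b):
    """Fraction of URLs shared by two top-10 result lists."""
    a, b = set(results_a[:10]), set(results_b[:10])
    return len(a & b) / 10

google = [f"http://example.org/g{i}" for i in range(10)]
alltheweb = google[:2] + [f"http://example.org/a{i}" for i in range(8)]
print(top10_overlap(google, alltheweb))  # → 0.2
```

A low value like 0.2 illustrates why comparing the two engines' rankings directly is hard: rank-correlation measures only apply to the shared URLs, and here there are only two of them.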
  15. Bar-Ilan, J.; Keenoy, K.; Yaari, E.; Levene, M.: User rankings of search engine results (2007) 0.00
    
    Abstract
    In this study, we investigate the similarities and differences between rankings of search results by users and search engines. Sixty-seven students took part in a 3-week-long experiment, during which they were asked to identify and rank the top 10 documents from the set of URLs that were retrieved by three major search engines (Google, MSN Search, and Yahoo!) for 12 selected queries. The URLs and accompanying snippets were displayed in random order, without disclosing which search engine(s) retrieved any specific URL for the query. We computed the similarity of the rankings of the users and search engines using four nonparametric correlation measures in [0,1] that complement each other. The findings show that the similarities between the users' choices and the rankings of the search engines are low. We examined the effects of the presentation order of the results, and of the thinking styles of the participants. Presentation order influences the rankings, but overall the results indicate that there is no "average user," and even if the users have the same basic knowledge of a topic, they evaluate information in their own context, which is influenced by cognitive, affective, and physical factors. This is the first large-scale experiment in which users were asked to rank the results of identical queries. The analysis of the experimental results demonstrates the potential for personalized search.
    Type
    a
  16. Bar-Ilan, J.: Informetrics (2009) 0.00
    
    Abstract
    Informetrics is a subfield of information science and it encompasses bibliometrics, scientometrics, cybermetrics, and webometrics. This encyclopedia entry provides an overview of informetrics and its subfields. In general, informetrics deals with quantitative aspects of information: its production, dissemination, evaluation, and use. Bibliometrics and scientometrics study scientific literature: papers, journals, patents, and citations; while in webometric studies the sources studied are Web pages and Web sites, and citations are replaced by hypertext links. The entry introduces major topics in informetrics: citation analysis and citation related studies, the journal impact factor, the recently defined h-index, citation databases, co-citation analysis, open access publications and its implications, informetric laws, techniques for mapping and visualization of informetric phenomena, the emerging subfields of webometrics, cybermetrics and link analysis, and research evaluation.
    Type
    a
  17. Bar-Ilan, J.: Comparing rankings of search results on the Web (2005) 0.00
    
    Abstract
    The Web has become an information source for professional data gathering. Because of the vast amounts of information on almost all topics, one cannot systematically go over the whole set of results, and therefore must rely on the ordering of the results by the search engine. It is well known that search engines on the Web have low overlap in terms of coverage. In this study we measure how similar the rankings of search engines are on the overlapping results. We compare rankings of results for identical queries retrieved from several search engines. The method is based only on the set of URLs that appear in the answer sets of the engines being compared. For comparing the similarity of rankings of two search engines, the Spearman correlation coefficient is computed. When comparing more than two sets, Kendall's W is used. These are well-known measures and the statistical significance of the results can be computed. The methods are demonstrated on a set of 15 queries that were submitted to four large Web search engines. The findings indicate that the large public search engines on the Web employ considerably different ranking algorithms.
    Type
    a
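The pairwise method in entry 17 can be sketched concretely: restrict two engines' result lists to their overlapping URLs, re-rank each engine's view of that common set, and compute Spearman's rho on the induced ranks. This is a stdlib-only sketch under stated assumptions (no tied ranks; the URLs are invented), not the paper's exact implementation:

```python
# Sketch: Spearman's rank correlation on the URLs common to two engines'
# rankings, using the textbook no-ties formula rho = 1 - 6*sum(d^2)/(n(n^2-1)).
def spearman_on_overlap(ranking_a, ranking_b):
    """Spearman's rho over the overlapping URLs of two ranked lists."""
    in_b = set(ranking_b)
    common = [u for u in ranking_a if u in in_b]
    n = len(common)
    if n < 2:
        return None  # too little overlap to correlate
    ra = {u: i for i, u in enumerate(common)}  # ranks induced by engine A
    rb = {u: i for i, u in enumerate(u for u in ranking_b if u in ra)}  # by engine B
    d2 = sum((ra[u] - rb[u]) ** 2 for u in common)
    return 1 - 6 * d2 / (n * (n * n - 1))

engine1 = ["u1", "u2", "u3", "u4", "u5"]
engine2 = ["u3", "u1", "u2", "u6", "u4"]
print(spearman_on_overlap(engine1, engine2))
```

Because only the overlapping URLs enter the computation, the statistic says nothing about coverage differences, which is exactly why the paper treats overlap and rank similarity as separate questions.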