Search (12 results, page 1 of 1)

  • Filter: author_ss:"Bar-Ilan, J."
  1. Bar-Ilan, J.: Web links and search engine ranking : the case of Google and the query "Jew" (2006) 0.12
    0.124703124 = product of:
      0.18705468 = sum of:
        0.10200114 = weight(_text_:query in 6104) [ClassicSimilarity], result of:
          0.10200114 = score(doc=6104,freq=6.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.44470036 = fieldWeight in 6104, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6104)
        0.08505354 = product of:
          0.17010708 = sum of:
            0.17010708 = weight(_text_:page in 6104) [ClassicSimilarity], result of:
              0.17010708 = score(doc=6104,freq=8.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.6171075 = fieldWeight in 6104, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6104)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The World Wide Web has become one of our most important information sources, and commercial search engines are the major tools for locating information; however, it is not enough for a Web page to be indexed by the search engines: it must also rank high on relevant queries. One of the parameters involved in ranking is the number and quality of links pointing to the page, based on the assumption that links convey appreciation for a page. This article presents the results of a content analysis of the links to two top pages retrieved by Google for the query "jew" as of July 2004: the "jew" entry on the free online encyclopedia Wikipedia, and the home page of "Jew Watch", a highly anti-Semitic site. The top results for the query "jew" gained public attention in April 2004, when it was noticed that the "Jew Watch" homepage ranked number 1. From this point on, both sides engaged in "Googlebombing", i.e., increasing the number of links pointing to these pages. The results of the study show that most of the links to these pages come from blogs and discussion lists, and that the number of links pointing to these pages in appreciation of their content is extremely small. These findings have implications for ranking algorithms based on link counts, and emphasize the huge difference between Web links and citations in the scientific community.
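The indented score breakdown above is a Lucene ClassicSimilarity "explain" tree. As a sanity check, the final score can be recomputed from the constants the tree itself reports: fieldWeight = sqrt(freq) * idf * fieldNorm, queryWeight = idf * queryNorm, and the clause weights are summed and scaled by the coord factors. A minimal sketch follows; the function name and structure are mine, not Lucene's API, and all constants are copied from the tree for result 1 (doc 6104):

```python
import math

def field_weight(freq: float, idf: float, field_norm: float) -> float:
    # fieldWeight = tf(freq) * idf * fieldNorm, with tf = sqrt(freq)
    return math.sqrt(freq) * idf * field_norm

query_norm = 0.049352113

# "query" clause: freq=6, idf=4.6476326, fieldNorm=0.0390625
idf_query = 4.6476326
qw_query = idf_query * query_norm                 # queryWeight ~ 0.22937049
w_query = qw_query * field_weight(6.0, idf_query, 0.0390625)  # ~ 0.10200114

# "page" clause: freq=8, idf=5.5854197, same fieldNorm; the extra factor
# 0.5 is the nested coord(1/2): one of two clauses in a sub-query matched
idf_page = 5.5854197
qw_page = idf_page * query_norm                   # queryWeight ~ 0.27565226
w_page = qw_page * field_weight(8.0, idf_page, 0.0390625) * 0.5  # ~ 0.08505354

# top level: sum of the matching clauses, scaled by coord(2/3),
# i.e., two of three query clauses matched this document
score = (w_query + w_page) * (2.0 / 3.0)
print(round(score, 6))  # -> 0.124703
```

The same recipe reproduces the explain trees of the other results; only freq, idf, fieldNorm, and the coord fractions change from entry to entry.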
  2. Bar-Ilan, J.: On the overlap, the precision and estimated recall of search engines : a case study of the query 'Erdös' (1998) 0.04
    0.03886567 = product of:
      0.116597004 = sum of:
        0.116597004 = weight(_text_:query in 3753) [ClassicSimilarity], result of:
          0.116597004 = score(doc=3753,freq=4.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.5083348 = fieldWeight in 3753, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3753)
      0.33333334 = coord(1/3)
    
    Abstract
    Investigates the retrieval capabilities of six Internet search engines on a simple query. Existing work on search engine evaluation considers only the first 10 or 20 results returned by the search engine. In this work, all documents that the search engines pointed at were retrieved and thoroughly examined, so that the precision of the whole retrieval process could be calculated, the overlap between the results of the engines studied, and the recall of the searches estimated. The precision of the engines is high, recall is very low, and the overlap is minimal.
  3. Barsky, E.; Bar-Ilan, J.: ¬The impact of task phrasing on the choice of search keywords and on the search process and success (2012) 0.03
    0.034722965 = product of:
      0.10416889 = sum of:
        0.10416889 = product of:
          0.20833778 = sum of:
            0.20833778 = weight(_text_:page in 455) [ClassicSimilarity], result of:
              0.20833778 = score(doc=455,freq=12.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.7557993 = fieldWeight in 455, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=455)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This experiment studied the impact of various task phrasings on the search process. Eighty-eight searchers performed four web search tasks prescribed by the researchers. Each task was linked to an existing target web page, containing a piece of text that served as the basis for the task. A matching phrasing was a task whose wording matched the text of the target page. A nonmatching phrasing was synonymous with the matching phrasing, but had no match with the target page. Searchers received tasks for both types in English and in Hebrew. The search process was logged. The findings confirm that task phrasing shapes the search process and outcome, and also user satisfaction. Each search stage-retrieval of the target page, visiting the target page, and finding the target answer-was associated with different phenomena; for example, target page retrieval was negatively affected by persistence in search patterns (e.g., use of phrases), user-originated keywords, shorter queries, and omitting key keywords from the queries. Searchers were easily driven away from the top-ranked target pages by lower-ranked pages with title tags matching the queries. Some searchers created consistently longer queries than other searchers, regardless of the task length. Several consistent behavior patterns that characterized the Hebrew language were uncovered, including the use of keyword modifications (replacing infinitive forms with nouns), omitting prefixes and articles, and preferences for the common language. The success self-assessment also depended on whether the wording of the answer matched the task phrasing.
  4. Bar-Ilan, J.: ¬The Web as an information source on informetrics? : A content analysis (2000) 0.02
    0.023556154 = product of:
      0.07066846 = sum of:
        0.07066846 = weight(_text_:query in 4587) [ClassicSimilarity], result of:
          0.07066846 = score(doc=4587,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.30809742 = fieldWeight in 4587, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.046875 = fieldNorm(doc=4587)
      0.33333334 = coord(1/3)
    
    Abstract
    This article addresses the question of whether the Web can serve as an information source for research. Specifically, it analyzes by way of content analysis the Web pages retrieved by the major search engines on a particular date (June 7, 1998), as a result of the query 'informetrics OR informetric'. In 807 out of the 942 retrieved pages, the search terms were mentioned in the context of information science. Over 70% of the pages contained only indirect information on the topic, in the form of hypertext links and bibliographical references without annotation. The bibliographical references extracted from the Web pages were analyzed, and lists of most productive authors, most cited authors, works, and sources were compiled. The list of reference obtained from the Web was also compared to data retrieved from commercial databases. For most cases, the list of references extracted from the Web outperformed the commercial, bibliographic databases. The results of these comparisons indicate that valuable, freely available data is hidden in the Web waiting to be extracted from the millions of Web pages
  5. Bar-Ilan, J.; Keenoy, K.; Yaari, E.; Levene, M.: User rankings of search engine results (2007) 0.02
    0.019630127 = product of:
      0.05889038 = sum of:
        0.05889038 = weight(_text_:query in 470) [ClassicSimilarity], result of:
          0.05889038 = score(doc=470,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.25674784 = fieldWeight in 470, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0390625 = fieldNorm(doc=470)
      0.33333334 = coord(1/3)
    
    Abstract
    In this study, we investigate the similarities and differences between rankings of search results by users and search engines. Sixty-seven students took part in a 3-week-long experiment, during which they were asked to identify and rank the top 10 documents from the set of URLs that were retrieved by three major search engines (Google, MSN Search, and Yahoo!) for 12 selected queries. The URLs and accompanying snippets were displayed in random order, without disclosing which search engine(s) retrieved any specific URL for the query. We computed the similarity of the rankings of the users and search engines using four nonparametric correlation measures in [0,1] that complement each other. The findings show that the similarities between the users' choices and the rankings of the search engines are low. We examined the effects of the presentation order of the results, and of the thinking styles of the participants. Presentation order influences the rankings, but overall the results indicate that there is no "average user," and even if the users have the same basic knowledge of a topic, they evaluate information in their own context, which is influenced by cognitive, affective, and physical factors. This is the first large-scale experiment in which users were asked to rank the results of identical queries. The analysis of the experimental results demonstrates the potential for personalized search.
  6. Bar-Ilan, J.: What do we know about links and linking? : a framework for studying links in academic environments (2005) 0.02
    0.01701071 = product of:
      0.051032126 = sum of:
        0.051032126 = product of:
          0.10206425 = sum of:
            0.10206425 = weight(_text_:page in 1058) [ClassicSimilarity], result of:
              0.10206425 = score(doc=1058,freq=2.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.37026453 = fieldWeight in 1058, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1058)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The Web is an enormous set of documents connected through hypertext links created by authors of Web pages. These links have been studied quantitatively, but little has been done so far in order to understand why these links are created. As a first step towards a better understanding, we propose a classification of link types in academic environments on the Web. The classification is multi-faceted and involves different aspects of the source and the target page, the link area and the relationship between the source and the target. Such classification provides an insight into the diverse uses of hypertext links on the Web, and has implications for browsing and ranking in IR systems by differentiating between different types of links. As a case study we classified a sample of links between sites of Israeli academic institutions.
  7. Bar-Ilan, J.; Keenoy, K.; Levene, M.; Yaari, E.: Presentation bias is significant in determining user preference for search results : a user study (2009) 0.01
    0.01417559 = product of:
      0.04252677 = sum of:
        0.04252677 = product of:
          0.08505354 = sum of:
            0.08505354 = weight(_text_:page in 2703) [ClassicSimilarity], result of:
              0.08505354 = score(doc=2703,freq=2.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.30855376 = fieldWeight in 2703, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2703)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    We describe the results of an experiment designed to study user preferences for different orderings of search results from three major search engines. In the experiment, 65 users were asked to choose the best ordering from two different orderings of the same set of search results: Each pair consisted of the search engine's original top-10 ordering and a synthetic ordering created from the same top-10 results retrieved by the search engine. This process was repeated for 12 queries and nine different synthetic orderings. The results show that there is a slight overall preference for the search engines' original orderings, but the preference is rarely significant. Users' choice of the best result from each of the different orderings indicates that placement on the page (i.e., whether the result appears near the top) is the most important factor used in determining the quality of the result, not the actual content displayed in the top-10 snippets. In addition to the placement bias, we detected a small bias due to the reputation of the sites appearing in the search results.
  8. Bar-Ilan, J.; Zhitomirsky-Geffet, M.; Miller, Y.; Shoham, S.: ¬The effects of background information and social interaction on image tagging (2010) 0.01
    0.01417559 = product of:
      0.04252677 = sum of:
        0.04252677 = product of:
          0.08505354 = sum of:
            0.08505354 = weight(_text_:page in 3453) [ClassicSimilarity], result of:
              0.08505354 = score(doc=3453,freq=2.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.30855376 = fieldWeight in 3453, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3453)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In this article, we describe the results of an experiment designed to understand the effects of background information and social interaction on image tagging. The participants in the experiment were asked to tag 12 preselected images of Jewish cultural heritage. The users were partitioned into three groups: the first group saw only the images with no additional information whatsoever, the second group saw the images plus a short, descriptive title, and the third group saw the images, the titles, and the URL of the page in which the image appeared. In the first stage of the experiment, each user tagged the images without seeing the tags provided by the other users. In the second stage, the users saw the tags assigned by others and were encouraged to interact. Results show that after the social interaction phase, the tag sets converged and the popular tags became even more popular. Although in all cases the total number of assigned tags increased after the social interaction phase, the number of distinct tags decreased in most cases. When viewing the image only, in some cases the users were not able to correctly identify what they saw in some of the pictures, but they overcame the initial difficulties after interaction. We conclude from this experiment that social interaction may lead to convergence in tagging and that the wisdom of the crowds helps overcome the difficulties due to the lack of information.
  9. Bar-Ilan, J.; Levene, M.; Mat-Hassan, M.: Methods for evaluating dynamic changes in search engine rankings : a case study (2006) 0.01
    0.011340473 = product of:
      0.03402142 = sum of:
        0.03402142 = product of:
          0.06804284 = sum of:
            0.06804284 = weight(_text_:page in 616) [ClassicSimilarity], result of:
              0.06804284 = score(doc=616,freq=2.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.24684301 = fieldWeight in 616, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.03125 = fieldNorm(doc=616)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - The objective of this paper is to characterize the changes in the rankings of the top ten results of major search engines over time and to compare the rankings between these engines. Design/methodology/approach - The papers compare rankings of the top-ten results of the search engines Google and AlltheWeb on ten identical queries over a period of three weeks. Only the top-ten results were considered, since users do not normally inspect more than the first results page returned by a search engine. The experiment was repeated twice, in October 2003 and in January 2004, in order to assess changes to the top-ten results of some of the queries during the three months interval. In order to assess the changes in the rankings, three measures were computed for each data collection point and each search engine. Findings - The findings in this paper show that the rankings of AlltheWeb were highly stable over each period, while the rankings of Google underwent constant yet minor changes, with occasional major ones. Changes over time can be explained by the dynamic nature of the web or by fluctuations in the search engines' indexes. The top-ten results of the two search engines had surprisingly low overlap. With such small overlap, the task of comparing the rankings of the two engines becomes extremely challenging. Originality/value - The paper shows that because of the abundance of information on the web, ranking search results is of extreme importance. The paper compares several measures for computing the similarity between rankings of search tools, and shows that none of the measures is fully satisfactory as a standalone measure. It also demonstrates the apparent differences in the ranking algorithms of two widely used search engines.
  10. Bronstein, J.; Gazit, T.; Perez, O.; Bar-Ilan, J.; Aharony, N.; Amichai-Hamburger, Y.: ¬An examination of the factors contributing to participation in online social platforms (2016) 0.01
    0.0055721086 = product of:
      0.016716326 = sum of:
        0.016716326 = product of:
          0.03343265 = sum of:
            0.03343265 = weight(_text_:22 in 3364) [ClassicSimilarity], result of:
              0.03343265 = score(doc=3364,freq=2.0), product of:
                0.1728227 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049352113 = queryNorm
                0.19345059 = fieldWeight in 3364, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3364)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    20. 1.2015 18:30:22
  11. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.00
    0.004457687 = product of:
      0.013373061 = sum of:
        0.013373061 = product of:
          0.026746122 = sum of:
            0.026746122 = weight(_text_:22 in 1634) [ClassicSimilarity], result of:
              0.026746122 = score(doc=1634,freq=2.0), product of:
                0.1728227 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049352113 = queryNorm
                0.15476047 = fieldWeight in 1634, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1634)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    20. 1.2015 18:30:22
  12. Zhitomirsky-Geffet, M.; Bar-Ilan, J.; Levene, M.: Testing the stability of "wisdom of crowds" judgments of search results over time and their similarity with the search engine rankings (2016) 0.00
    0.004457687 = product of:
      0.013373061 = sum of:
        0.013373061 = product of:
          0.026746122 = sum of:
            0.026746122 = weight(_text_:22 in 3071) [ClassicSimilarity], result of:
              0.026746122 = score(doc=3071,freq=2.0), product of:
                0.1728227 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049352113 = queryNorm
                0.15476047 = fieldWeight in 3071, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3071)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    20. 1.2015 18:30:22