Search (29 results, page 1 of 2)

  • author_ss:"Bar-Ilan, J."
  1. Bar-Ilan, J.; Levene, M.; Mat-Hassan, M.: Methods for evaluating dynamic changes in search engine rankings : a case study (2006) 0.04
    0.038344223 = product of:
      0.076688446 = sum of:
        0.012757615 = weight(_text_:for in 616) [ClassicSimilarity], result of:
          0.012757615 = score(doc=616,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.14371942 = fieldWeight in 616, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.03125 = fieldNorm(doc=616)
        0.06393083 = weight(_text_:computing in 616) [ClassicSimilarity], result of:
          0.06393083 = score(doc=616,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.24445872 = fieldWeight in 616, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.03125 = fieldNorm(doc=616)
      0.5 = coord(2/4)
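The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: each term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and coord(2/4) scales the sum because only two of the four query terms matched. A minimal sketch reproducing these numbers (the helper name is invented; all constants are taken from the breakdown above):

```python
import math

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's contribution under Lucene's ClassicSimilarity (TF-IDF)."""
    tf = math.sqrt(freq)                               # tf(freq) = sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # idf = 1 + ln(maxDocs / (docFreq + 1))
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

QUERY_NORM, FIELD_NORM = 0.047278564, 0.03125          # from the explain output

w_for = term_score(6.0, 18385, 44218, QUERY_NORM, FIELD_NORM)
w_computing = term_score(2.0, 475, 44218, QUERY_NORM, FIELD_NORM)

# coord(2/4): only 2 of the 4 query terms matched document 616
score = (w_for + w_computing) * (2 / 4)                # ~ 0.0383, listed as 0.04
```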
    
    Abstract
    Purpose - The objective of this paper is to characterize the changes in the rankings of the top ten results of major search engines over time and to compare the rankings between these engines. Design/methodology/approach - The paper compares the rankings of the top-ten results of the search engines Google and AlltheWeb on ten identical queries over a period of three weeks. Only the top-ten results were considered, since users do not normally inspect more than the first results page returned by a search engine. The experiment was repeated twice, in October 2003 and in January 2004, in order to assess changes to the top-ten results of some of the queries during the three-month interval. In order to assess the changes in the rankings, three measures were computed for each data collection point and each search engine. Findings - The findings in this paper show that the rankings of AlltheWeb were highly stable over each period, while the rankings of Google underwent constant yet minor changes, with occasional major ones. Changes over time can be explained by the dynamic nature of the web or by fluctuations in the search engines' indexes. The top-ten results of the two search engines had surprisingly low overlap. With such small overlap, the task of comparing the rankings of the two engines becomes extremely challenging. Originality/value - The paper shows that because of the abundance of information on the web, ranking search results is of extreme importance. The paper compares several measures for computing the similarity between rankings of search tools, and shows that none of the measures is fully satisfactory as a standalone measure. It also demonstrates the apparent differences in the ranking algorithms of two widely used search engines.
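The abstract does not name the three measures; as an illustration of the kind of rank-similarity computation involved, here is a sketch of two common choices for comparing top-10 lists (set overlap, and a normalized Spearman footrule restricted to the shared URLs). The result lists are invented:

```python
def overlap(a, b):
    """Fraction of results the two top-k lists share."""
    return len(set(a) & set(b)) / max(len(a), len(b))

def footrule(a, b):
    """Normalized Spearman footrule over the URLs both lists contain:
    0.0 = shared URLs appear in the same relative order, 1.0 = reversed."""
    in_a, in_b = set(a), set(b)
    pos_a = {u: i for i, u in enumerate(u for u in a if u in in_b)}
    pos_b = {u: i for i, u in enumerate(u for u in b if u in in_a)}
    if len(pos_a) < 2:
        return 0.0
    dist = sum(abs(pos_a[u] - pos_b[u]) for u in pos_a)
    return dist / (len(pos_a) ** 2 // 2)   # floor(s^2 / 2) = maximum distance

google    = ["a", "b", "c", "d", "e"]      # hypothetical top-5 URL lists
alltheweb = ["b", "a", "x", "c", "y"]
print(overlap(google, alltheweb))          # 0.6
print(footrule(google, alltheweb))         # 0.5
```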
  2. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.02
    
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Apparently, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be used to reveal contradictory relations in different ontologies. Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of our knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
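The unification rules themselves are not given in the abstract; as a toy sketch of the general idea (ontologies as subject-relation-object triples, relations matched through a hypothetical lexical-entailment table; all names and data invented):

```python
# Hypothetical entailment table standing in for the paper's lexical/textual
# entailment rules; matching is applied symmetrically below.
ENTAILS = {("causes", "leads to"), ("is a kind of", "is a type of")}

def relations_match(r1, r2):
    return r1 == r2 or (r1, r2) in ENTAILS or (r2, r1) in ENTAILS

def unify(ont1, ont2):
    """Union of two triple sets plus the pairs of semantically matching triples."""
    matched = [(t1, t2)
               for t1 in ont1 for t2 in ont2
               if t1[0] == t2[0] and t1[2] == t2[2]
               and relations_match(t1[1], t2[1])]
    return ont1 | ont2, matched

o1 = {("fat", "causes", "obesity"), ("olive oil", "is a kind of", "fat")}
o2 = {("fat", "leads to", "obesity"), ("sugar", "causes", "obesity")}
unified, pairs = unify(o1, o2)
print(len(pairs))    # 1: the two "fat -> obesity" relations unify
```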
  3. Zhitomirsky-Geffet, M.; Bar-Ilan, J.; Levene, M.: Testing the stability of "wisdom of crowds" judgments of search results over time and their similarity with the search engine rankings (2016) 0.01
    
    Abstract
    Purpose - One of the under-explored aspects of user information-seeking behaviour is the influence of time on relevance evaluation. It has been shown in previous studies that individual users might change their assessment of search results over time. It is also known that aggregated judgements of multiple individual users can lead to correct and reliable decisions; this phenomenon is known as the "wisdom of crowds". The purpose of this paper is to examine whether aggregated judgements will be more stable, and thus more reliable over time, than individual user judgements. Design/methodology/approach - In this study two simple measures are proposed to calculate the aggregated judgements of search results and compare their reliability and stability to individual user judgements. In addition, the aggregated "wisdom of crowds" judgements were used as a means to compare the differences between human assessments of search results and search engine rankings. A large-scale user study was conducted with 87 participants who evaluated two different queries and four diverse result sets twice, with an interval of two months. Two types of judgements were considered in this study: relevance on a four-point scale, and ranking on a ten-point scale without ties. Findings - It was found that aggregated judgements are much more stable than individual user judgements, yet they are quite different from search engine rankings. Practical implications - The proposed "wisdom of crowds"-based approach provides a reliable reference point for the evaluation of search engines. This is also important for exploring the need for personalisation and for adapting a search engine's ranking over time to changes in users' preferences. Originality/value - This is the first study to apply the notion of the "wisdom of crowds" to the phenomenon of change over time in user evaluation of relevance, which is under-explored in the literature.
    Date
    20. 1.2015 18:30:22
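The two aggregation measures are not spelled out in the abstract; one simple, commonly used scheme for turning individual rankings into a crowd ranking is mean assigned rank (Borda-style). A sketch with invented assessor data:

```python
def crowd_ranking(rankings):
    """Aggregate individual rank assignments (1 = best) into a single
    "wisdom of crowds" ordering by mean assigned rank."""
    results = rankings[0].keys()
    mean_rank = {r: sum(rk[r] for rk in rankings) / len(rankings)
                 for r in results}
    return sorted(results, key=lambda r: mean_rank[r])

# three hypothetical assessors ranking results a-c
rankings = [{"a": 1, "b": 2, "c": 3},
            {"a": 2, "b": 1, "c": 3},
            {"a": 1, "b": 3, "c": 2}]
print(crowd_ranking(rankings))   # ['a', 'b', 'c']
```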
  4. Bar-Ilan, J.: Methods for measuring search engine performance over time (2002) 0.01
    
    Abstract
    This study introduces methods for evaluating search engine performance over a time period. Several measures are defined, which together describe search engine functionality over time. The necessary setup for such studies is described, and the use of these measures is illustrated through a specific example. The set of measures introduced here may serve as a guideline for search engines in testing and improving their functionality. We recommend setting up a standard suite of measures for evaluating search engine performance.
    Source
    Journal of the American Society for Information Science and Technology. 53(2002) no.4, S.308-319
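The abstract does not list the measures themselves; one natural member of such a suite, sketched here with assumed data, is the persistence of results between consecutive snapshots of the same query:

```python
def persistence(snapshots):
    """For each consecutive pair of result snapshots of one query, the
    fraction of URLs from the earlier snapshot still retrieved later."""
    return [len(set(earlier) & set(later)) / len(earlier)
            for earlier, later in zip(snapshots, snapshots[1:])]

weekly = [["u1", "u2", "u3", "u4"],    # hypothetical weekly result lists
          ["u1", "u2", "u5", "u6"],
          ["u1", "u5", "u6", "u7"]]
print(persistence(weekly))             # [0.5, 0.75]
```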
  5. Bergman, O.; Gradovitch, N.; Bar-Ilan, J.; Beyth-Marom, R.: Folder versus tag preference in personal information management (2013) 0.01
    
    Abstract
    Users' preferences for folders versus tags were studied in 2 working environments where both options were available to them. In the Gmail study, we informed 75 participants about both folder-labeling and tag-labeling, observed their storage behavior after 1 month, and asked them to estimate the proportions of different retrieval options in their behavior. In the Windows 7 study, we informed 23 participants about tags and asked them to tag all their files for 2 weeks, followed by a period of 5 weeks of free choice between the 2 methods. Their storage and retrieval habits were tested prior to the learning session and, after 7 weeks, using special classification recording software and a retrieval-habits questionnaire. A controlled retrieval task and an in-depth interview were conducted. Results of both studies show a strong preference for folders over tags for both storage and retrieval. In the minority of cases where tags were used for storage, participants typically used a single tag per information item. Moreover, when multiple classification was used for storage, it was only marginally used for retrieval. The controlled retrieval task showed lower success rates and slower retrieval speeds for tag use. Possible reasons for participants' preferences are discussed.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.10, S.1995-2012
  6. Bar-Ilan, J.; Peritz, B.C.: ¬A method for measuring the evolution of a topic on the Web : the case of "informetrics" (2009) 0.01
    
    Abstract
    The universe of information has been enriched by the creation of the World Wide Web, which has become an indispensable source for research. Since this source is growing at an enormous speed, an in-depth look at its performance to create a method for its evaluation has become necessary; however, growth is not the only process that influences the evolution of the Web. During their lifetime, Web pages may change their content and links to/from other Web pages, be duplicated or moved to a different URL, be removed from the Web either temporarily or permanently, and be temporarily inaccessible due to server and/or communication failures. To obtain a better understanding of these processes, we developed a method for tracking topics on the Web for long periods of time, without the need to employ a crawler and relying only on publicly available resources. The multiple data-collection methods used allow us to discover new pages related to the topic, to identify changes to existing pages, and to detect previously existing pages that have been removed or whose content is no longer relevant to the specified topic. The method is demonstrated through monitoring Web pages that contain the term informetrics for a period of 8 years. The data-collection method also allowed us to analyze the dynamic changes in search engine coverage, illustrated here on Google - the search engine used for the longest period of time for data collection in this project.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.9, S.1730-1740
  7. Shema, H.; Bar-Ilan, J.; Thelwall, M.: Do blog citations correlate with a higher number of future citations? : Research blogs as a potential source for alternative metrics (2014) 0.01
    
    Abstract
    Journal-based citations are an important source of data for impact indices. However, the impact of journal articles extends beyond formal scholarly discourse. Measuring online scholarly impact calls for new indices, complementary to the older ones. This article examines a possible alternative metric source, blog posts aggregated at ResearchBlogging.org, which discuss peer-reviewed articles and provide full bibliographic references. Articles reviewed in these blogs therefore receive "blog citations." We hypothesized that articles receiving blog citations close to their publication time receive more journal citations later than the articles in the same journal published in the same year that did not receive such blog citations. Statistically significant evidence for articles published in 2009 and 2010 supports this hypothesis for seven of 12 journals (58%) in 2009 and 13 of 19 journals (68%) in 2010. We suggest, based on these results, that blog citations can be used as an alternative metric source.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.5, S.1018-1027
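The abstract reports statistical significance without naming the test used; as an illustrative stand-in, here is a one-sided permutation test on the difference in mean citation counts between blogged and non-blogged articles (all counts invented):

```python
import random

def permutation_test(blogged, control, trials=10_000, seed=0):
    """Approximate one-sided p-value for mean(blogged) > mean(control)
    under random relabeling of the pooled citation counts."""
    rng = random.Random(seed)
    observed = sum(blogged) / len(blogged) - sum(control) / len(control)
    pooled, k = blogged + control, len(blogged)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = sum(pooled[:k]) / k - sum(pooled[k:]) / (len(pooled) - k)
        hits += diff >= observed
    return hits / trials

blogged = [12, 9, 15, 20, 11]           # hypothetical later citation counts
control = [3, 5, 4, 8, 2, 6, 5, 4]
p = permutation_test(blogged, control)  # small p: the difference is unlikely by chance
```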
  8. Bar-Ilan, J.: Web links and search engine ranking : the case of Google and the query "Jew" (2006) 0.01
    
    Abstract
    The World Wide Web has become one of our more important information sources, and commercial search engines are the major tools for locating information; however, it is not enough for a Web page to be indexed by the search engines - it also must rank high on relevant queries. One of the parameters involved in ranking is the number and quality of links pointing to the page, based on the assumption that links convey appreciation for a page. This article presents the results of a content analysis of the links to two top pages retrieved by Google for the query "jew" as of July 2004: the "jew" entry on the free online encyclopedia Wikipedia, and the home page of "Jew Watch," a highly anti-Semitic site. The top results for the query "jew" gained public attention in April 2004, when it was noticed that the "Jew Watch" homepage ranked number 1. From this point on, both sides engaged in "Googlebombing" (i.e., increasing the number of links pointing to these pages). The results of the study show that most of the links to these pages come from blogs and discussion lists, and the number of links pointing to these pages in appreciation of their content is extremely small. These findings have implications for ranking algorithms based on link counts, and emphasize the huge difference between Web links and citations in the scientific community.
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.12, S.1581-1589
  9. Zhitomirsky-Geffet, M.; Erez, E.S.; Bar-Ilan, J.: Toward multiviewpoint ontology construction by collaboration of non-experts and crowdsourcing : the case of the effect of diet on health (2017) 0.01
    
    Abstract
    Domain experts are skilled in building a narrow ontology that reflects their subfield of expertise based on their work experience and personal beliefs. We call this type of ontology a single-viewpoint ontology. There can be a variety of such single-viewpoint ontologies that represent a wide spectrum of subfields and expert opinions on the domain. However, to have a complete formal vocabulary for the domain, they need to be linked and unified into a multiviewpoint model while having the subjective viewpoint statements marked and distinguished from the objectively true statements. In this study, we propose and implement a two-phase methodology for multiviewpoint ontology construction by nonexpert users. The proposed methodology was implemented for the domain of the effect of diet on health. A large-scale crowdsourcing experiment was conducted with about 750 ontological statements to determine whether each of these statements is objectively true, viewpoint, or erroneous. Typically, in crowdsourcing experiments the workers are asked for their personal opinions on the given subject. However, in our case their ability to objectively assess others' opinions was examined as well. Our results show substantially higher accuracy in classification for the objective assessment approach compared to the results based on personal opinions.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.3, S.681-694
  10. Lazinger, S.S.; Bar-Ilan, J.; Peritz, B.C.: Internet use by faculty members in various disciplines : a comparative case study (1997) 0.01
    
    Abstract
    Examines and compares the use of the Internet among various sectors of the faculty at the Hebrew University of Jerusalem, Israel, in order to verify the influence of a number of parameters on this use. Questionnaires were sent to faculty members in all departments and professional schools of the Hebrew University of Jerusalem, a total population of 918 for both the pilot project and the main study. Results indicated that Internet use is consistently higher among faculty members in the sciences and agriculture than among those in the humanities or social sciences. Makes suggestions for raising the level of Internet use among the various disciplines of the faculty.
    Source
    Journal of the American Society for Information Science. 48(1997) no.6, S.508-518
  11. Bar-Ilan, J.; Keenoy, K.; Levene, M.; Yaari, E.: Presentation bias is significant in determining user preference for search results : a user study (2009) 0.01
    
    Abstract
    We describe the results of an experiment designed to study user preferences for different orderings of search results from three major search engines. In the experiment, 65 users were asked to choose the best ordering from two different orderings of the same set of search results: Each pair consisted of the search engine's original top-10 ordering and a synthetic ordering created from the same top-10 results retrieved by the search engine. This process was repeated for 12 queries and nine different synthetic orderings. The results show that there is a slight overall preference for the search engines' original orderings, but the preference is rarely significant. Users' choice of the best result from each of the different orderings indicates that placement on the page (i.e., whether the result appears near the top) is the most important factor used in determining the quality of the result, not the actual content displayed in the top-10 snippets. In addition to the placement bias, we detected a small bias due to the reputation of the sites appearing in the search results.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.1, S.135-149
  12. Barsky, E.; Bar-Ilan, J.: ¬The impact of task phrasing on the choice of search keywords and on the search process and success (2012) 0.01
    
    Abstract
    This experiment studied the impact of various task phrasings on the search process. Eighty-eight searchers performed four web search tasks prescribed by the researchers. Each task was linked to an existing target web page, containing a piece of text that served as the basis for the task. A matching phrasing was a task whose wording matched the text of the target page. A nonmatching phrasing was synonymous with the matching phrasing, but had no match with the target page. Searchers received tasks of both types in English and in Hebrew. The search process was logged. The findings confirm that task phrasing shapes the search process and outcome, and also user satisfaction. Each search stage (retrieval of the target page, visiting the target page, and finding the target answer) was associated with different phenomena; for example, target page retrieval was negatively affected by persistence in search patterns (e.g., use of phrases), user-originated keywords, shorter queries, and omitting key keywords from the queries. Searchers were easily driven away from the top-ranked target pages by lower-ranked pages with title tags matching the queries. Some searchers created consistently longer queries than other searchers, regardless of the task length. Several consistent behavior patterns that characterized the Hebrew language were uncovered, including the use of keyword modifications (replacing infinitive forms with nouns), omitting prefixes and articles, and preferences for the common language. The success self-assessment also depended on whether the wording of the answer matched the task phrasing.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.10, S.1987-2005
  13. Zhitomirsky-Geffet, M.; Bar-Ilan, J.; Levene, M.: Analysis of change in users' assessment of search results over time (2017) 0.01
    
    Abstract
    We present the first systematic study of the influence of time on user judgements for rankings and relevance grades of web search engine results. The goal of this study is to evaluate the change in user assessment of search results and explore how users' judgements change. To this end, we conducted a large-scale user study with 86 participants who evaluated 2 different queries and 4 diverse result sets twice with an interval of 2 months. To analyze the results, we investigate whether 2 types of patterns of user behavior from the theory of categorical thinking hold for the case of evaluation of search results: (a) coarseness and (b) locality. To quantify these patterns, we devised 2 new measures of change in user judgements and distinguished between local changes (when users swap between close ranks and relevance values) and nonlocal changes. Two types of judgements were considered in this study: (a) relevance on a 4-point scale, and (b) ranking on a 10-point scale without ties. We found that users tend to change their judgements of the results over time in about 50% of cases for relevance and in 85% of cases for ranking. However, the majority of these changes were local.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.5, S.1137-1148
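The two measures of change are only described informally above; a minimal sketch of the locality idea (a change counts as "local" when a result moves by at most a small window of ranks; session data invented):

```python
def change_profile(first, second, window=1):
    """Classify each result's rank change between two sessions as
    unchanged, local (within +/- window ranks), or nonlocal."""
    profile = {"unchanged": 0, "local": 0, "nonlocal": 0}
    for result, rank in first.items():
        delta = abs(rank - second[result])
        if delta == 0:
            profile["unchanged"] += 1
        elif delta <= window:
            profile["local"] += 1
        else:
            profile["nonlocal"] += 1
    return profile

session1 = {"a": 1, "b": 2, "c": 3, "d": 4, "e": 5}   # hypothetical rankings,
session2 = {"a": 4, "b": 1, "c": 3, "d": 5, "e": 2}   # no ties
print(change_profile(session1, session2))
```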
  14. Bar-Ilan, J.: ¬The Web as an information source on informetrics? : A content analysis (2000) 0.00
    
    Abstract
    This article addresses the question of whether the Web can serve as an information source for research. Specifically, it analyzes by way of content analysis the Web pages retrieved by the major search engines on a particular date (June 7, 1998), as a result of the query 'informetrics OR informetric'. In 807 out of the 942 retrieved pages, the search terms were mentioned in the context of information science. Over 70% of the pages contained only indirect information on the topic, in the form of hypertext links and bibliographical references without annotation. The bibliographical references extracted from the Web pages were analyzed, and lists of most productive authors, most cited authors, works, and sources were compiled. The list of references obtained from the Web was also compared to data retrieved from commercial databases. In most cases, the list of references extracted from the Web outperformed the commercial bibliographic databases. The results of these comparisons indicate that valuable, freely available data is hidden in the Web, waiting to be extracted from the millions of Web pages.
    Source
    Journal of the American Society for Information Science. 51(2000) no.5, S.432-443
  15. Bar-Ilan, J.; Keenoy, K.; Yaari, E.; Levene, M.: User rankings of search engine results (2007) 0.00
    
    Abstract
    In this study, we investigate the similarities and differences between rankings of search results by users and search engines. Sixty-seven students took part in a 3-week-long experiment, during which they were asked to identify and rank the top 10 documents from the set of URLs that were retrieved by three major search engines (Google, MSN Search, and Yahoo!) for 12 selected queries. The URLs and accompanying snippets were displayed in random order, without disclosing which search engine(s) retrieved any specific URL for the query. We computed the similarity of the rankings of the users and search engines using four nonparametric correlation measures in [0,1] that complement each other. The findings show that the similarities between the users' choices and the rankings of the search engines are low. We examined the effects of the presentation order of the results, and of the thinking styles of the participants. Presentation order influences the rankings, but overall the results indicate that there is no "average user," and even if the users have the same basic knowledge of a topic, they evaluate information in their own context, which is influenced by cognitive, affective, and physical factors. This is the first large-scale experiment in which users were asked to rank the results of identical queries. The analysis of the experimental results demonstrates the potential for personalized search.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.9, S.1254-1266
  16. Bar-Ilan, J.; Levene, M.: ¬The hw-rank : an h-index variant for ranking web pages (2015) 0.00
    
  17. Bar-Ilan, J.; Peritz, B.C.: Evolution, continuity, and disappearance of documents on a specific topic on the Web : a longitudinal study of "informetrics" (2004) 0.00
    
    Abstract
    The present paper analyzes the changes that occurred to a set of Web pages related to "informetrics" over a period of 5 years between June 1998 and June 2003. Four times during this time span, in 1998, 1999, 2002, and 2003, we monitored previously located pages and searched for new ones related to the topic. Thus, we were able to study the growth of the topic, while analyzing the rates of change and disappearance. The results indicate that modification, disappearance, and resurfacing cannot be ignored when studying the structure and development of the Web.
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.11, S.980-990
  18. Bronstein, J.; Gazit, T.; Perez, O.; Bar-Ilan, J.; Aharony, N.; Amichai-Hamburger, Y.: ¬An examination of the factors contributing to participation in online social platforms (2016) 0.00
    
    Date
    20. 1.2015 18:30:22
  19. Bar-Ilan, J.; Belous, Y.: Children as architects of Web directories : an exploratory study (2007) 0.00
    
    Abstract
    Children are increasingly using the Web. Cognitive theory tells us that directory structures are especially suited for information retrieval by children; however, empirical results show that they prefer keyword searching. One of the reasons for these findings could be that the directory structures and terminology are created by grown-ups. Using a card-sorting method and an enveloping system, we simulated the structure of a directory. Our goal was to try to understand what browsable, hierarchical subject categories children create when suggested terms are supplied and they are free to add or delete terms. Twelve groups of four children each (fourth and fifth graders) participated in our exploratory study. The initial terminology presented to the children was based on names of categories used in popular directories, in the sections on Arts, Television, Music, Cinema, and Celebrities. The children were allowed to introduce additional cards and change the terms appearing on the 61 cards. Findings show that the different groups reached reasonable consensus; the majority of the category names used by existing directories were acceptable to them, and only a small minority of the terms caused confusion. Our recommendation is to include children in the design process of directories, not only in designing the interface but also in designing the content structure.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.6, S.895-907
  20. Bar-Ilan, J.: Comparing rankings of search results on the Web (2005) 0.00
    
    Abstract
    The Web has become an information source for professional data gathering. Because of the vast amounts of information on almost all topics, one cannot systematically go over the whole set of results, and therefore must rely on the ordering of the results by the search engine. It is well known that search engines on the Web have low overlap in terms of coverage. In this study we measure how similar the rankings of search engines are on the overlapping results. We compare rankings of results for identical queries retrieved from several search engines. The method is based only on the set of URLs that appear in the answer sets of the engines being compared. For comparing the similarity of rankings of two search engines, the Spearman correlation coefficient is computed. When comparing more than two sets, Kendall's W is used. These are well-known measures, and the statistical significance of the results can be computed. The methods are demonstrated on a set of 15 queries that were submitted to four large Web search engines. The findings indicate that the large public search engines on the Web employ considerably different ranking algorithms.
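The two measures named in the abstract are standard and easy to state; a minimal sketch of both, for tie-free rankings represented as dicts mapping a URL to its rank position (the representation and function names are illustrative, not taken from the paper):

```python
def spearman_rho(rank_a, rank_b):
    # Spearman rank correlation for two tie-free rankings of the same URLs:
    # rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), where d is the rank difference.
    n = len(rank_a)
    d2 = sum((rank_a[url] - rank_b[url]) ** 2 for url in rank_a)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def kendalls_w(rankings):
    # Kendall's coefficient of concordance for m tie-free rankings of n items:
    # W = 12 * S / (m^2 * (n^3 - n)), where S is the sum of squared deviations
    # of the per-item rank totals from their mean. W = 1 means full agreement.
    m = len(rankings)
    items = list(rankings[0])
    n = len(items)
    totals = [sum(r[url] for r in rankings) for url in items]
    mean_total = m * (n + 1) / 2
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))
```

For two identical rankings both measures give 1; for a ranking and its reversal, Spearman's rho gives -1 while W drops to 0, which matches their intended readings of agreement versus concordance.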