Search (15 results, page 1 of 1)

  • author_ss:"Bar-Ilan, J."
  1. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.05
    0.047668897 = product of:
      0.09533779 = sum of:
        0.0066587473 = product of:
          0.02663499 = sum of:
            0.02663499 = weight(_text_:based in 1634) [ClassicSimilarity], result of:
              0.02663499 = score(doc=1634,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.18831211 = fieldWeight in 1634, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1634)
          0.25 = coord(1/4)
        0.088679045 = sum of:
          0.063238226 = weight(_text_:assessment in 1634) [ClassicSimilarity], result of:
            0.063238226 = score(doc=1634,freq=2.0), product of:
              0.25917634 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.04694356 = queryNorm
              0.2439969 = fieldWeight in 1634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
          0.025440816 = weight(_text_:22 in 1634) [ClassicSimilarity], result of:
            0.025440816 = score(doc=1634,freq=2.0), product of:
              0.16438834 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04694356 = queryNorm
              0.15476047 = fieldWeight in 1634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
      0.5 = coord(2/4)
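The indented tree above is Lucene's ClassicSimilarity "explain" output for this result's score. As a minimal sketch of how its numbers combine (values copied from the first `weight(_text_:based ...)` clause above; `queryNorm` and `fieldNorm` are taken as given, since they depend on the whole query and on the indexed field length):

```python
import math

# Sketch of Lucene ClassicSimilarity scoring, reproducing the first
# "based" clause of the explain tree above.
def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

tf = math.sqrt(4.0)                  # tf = sqrt(termFreq), freq=4.0
i = idf(5906, 44218)                 # = 3.0129938 in the tree
query_norm = 0.04694356              # copied from the tree
field_norm = 0.03125                 # encoded document-length norm

query_weight = i * query_norm        # = 0.14144066
field_weight = tf * i * field_norm   # = 0.18831211
score = query_weight * field_weight  # = 0.02663499
```

The full document score is then a coord-weighted sum of such per-term clauses, which is what the `sum of:` and `coord(...)` lines express.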
    
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by means of their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques have been proposed over the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. However, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances), and semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of the maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be applied to reveal contradictory relations in different ontologies. Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate the methodology's good potential for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, the methodology still has to be fully automated and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
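The abstract names the rule types (ontological reference rules, lexical and textual entailment) but does not spell the rules out. As a purely illustrative sketch of the general idea, with an invented entailment lexicon and invented triples, not the paper's actual rules:

```python
# Hypothetical sketch: relations are (subject, predicate, object) triples;
# two relations from different ontologies are unified when their arguments
# match and one predicate entails the other per a small entailment lexicon.
# ENTAILS, the triples, and relations_match are all invented examples.
ENTAILS = {("causes", "contributes to"), ("prevents", "reduces risk of")}

def relations_match(r1, r2):
    same_args = r1[0] == r2[0] and r1[2] == r2[2]
    p1, p2 = r1[1], r2[1]
    return same_args and (p1 == p2 or (p1, p2) in ENTAILS or (p2, p1) in ENTAILS)

onto_a = [("sugar", "causes", "diabetes")]
onto_b = [("sugar", "contributes to", "diabetes")]
unified = [(a, b) for a in onto_a for b in onto_b if relations_match(a, b)]
```

The same check, applied to predicates known to contradict rather than entail each other, would flag the contradictory relations the authors mention.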
    Date
    20. 1.2015 18:30:22
  2. Zhitomirsky-Geffet, M.; Bar-Ilan, J.; Levene, M.: Testing the stability of "wisdom of crowds" judgments of search results over time and their similarity with the search engine rankings (2016) 0.05
    
    Abstract
    Purpose - One of the under-explored aspects of the user information-seeking process is the influence of time on relevance evaluation. Previous studies have shown that individual users may change their assessment of search results over time. It is also known that aggregated judgements of multiple individual users can lead to correct and reliable decisions; this phenomenon is known as the "wisdom of crowds". The purpose of this paper is to examine whether aggregated judgements are more stable, and thus more reliable, over time than individual user judgements. Design/methodology/approach - In this study two simple measures are proposed to calculate the aggregated judgements of search results and compare their reliability and stability to those of individual user judgements. In addition, the aggregated "wisdom of crowds" judgements were used as a means to compare the differences between human assessments of search results and search engine rankings. A large-scale user study was conducted with 87 participants who evaluated two different queries and four diverse result sets twice, with an interval of two months. Two types of judgements were considered in this study: relevance on a four-point scale, and ranking on a ten-point scale without ties. Findings - It was found that aggregated judgements are much more stable than individual user judgements, yet they are quite different from search engine rankings. Practical implications - The proposed "wisdom of crowds"-based approach provides a reliable reference point for the evaluation of search engines. This is also important for exploring the need for personalisation and for adapting a search engine's ranking over time to changes in users' preferences. Originality/value - This is the first study to apply the notion of the "wisdom of crowds" to a phenomenon under-explored in the literature: change over time in users' evaluation of relevance.
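The abstract does not define the two proposed aggregation measures. As a purely illustrative sketch of the "wisdom of crowds" idea it builds on, aggregating by mean relevance grade over invented data:

```python
from statistics import mean

# Hypothetical data: four users' relevance grades (1-4) for three results.
# Aggregating by mean grade is a common crowd baseline, not necessarily
# one of the paper's two measures.
judgements = {
    "result_a": [4, 3, 4, 4],
    "result_b": [2, 3, 2, 1],
    "result_c": [3, 3, 4, 2],
}

aggregated = {r: mean(g) for r, g in judgements.items()}
# Order results by the crowd's mean grade, best first.
crowd_ranking = sorted(aggregated, key=aggregated.get, reverse=True)
```

A crowd ranking produced this way can then be compared against the search engine's own ordering, as the study does.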
    Date
    20. 1.2015 18:30:22
  3. Zhitomirsky-Geffet, M.; Erez, E.S.; Bar-Ilan, J.: Toward multiviewpoint ontology construction by collaboration of non-experts and crowdsourcing : the case of the effect of diet on health (2017) 0.02
    
    Abstract
    Domain experts are skilled in building a narrow ontology that reflects their subfield of expertise, based on their work experience and personal beliefs. We call this type of ontology a single-viewpoint ontology. There can be a variety of such single-viewpoint ontologies that represent a wide spectrum of subfields and expert opinions on the domain. However, to obtain a complete formal vocabulary for the domain, they need to be linked and unified into a multiviewpoint model in which the subjective viewpoint statements are marked and distinguished from the objectively true statements. In this study, we propose and implement a two-phase methodology for multiviewpoint ontology construction by nonexpert users. The proposed methodology was implemented for the domain of the effect of diet on health. A large-scale crowdsourcing experiment was conducted with about 750 ontological statements to determine whether each of these statements is objectively true, a viewpoint, or erroneous. Typically, in crowdsourcing experiments the workers are asked for their personal opinions on the given subject. In our case, however, their ability to objectively assess others' opinions was examined as well. Our results show substantially higher classification accuracy for the objective assessment approach than for the results based on personal opinions.
  4. Bar-Ilan, J.; Peritz, B.C.: A method for measuring the evolution of a topic on the Web : the case of "informetrics" (2009) 0.01
    
    Abstract
    The universe of information has been enriched by the creation of the World Wide Web, which has become an indispensable source for research. Since this source is growing at an enormous speed, an in-depth look at its performance has become necessary in order to create a method for its evaluation; however, growth is not the only process that influences the evolution of the Web. During their lifetime, Web pages may change their content and links to/from other Web pages, be duplicated or moved to a different URL, be removed from the Web either temporarily or permanently, and be temporarily inaccessible due to server and/or communication failures. To obtain a better understanding of these processes, we developed a method for tracking topics on the Web for long periods of time, without the need to employ a crawler and relying only on publicly available resources. The multiple data-collection methods used allow us to discover new pages related to the topic, to identify changes to existing pages, and to detect previously existing pages that have been removed or whose content is no longer relevant to the specified topic. The method is demonstrated by monitoring Web pages that contain the term informetrics over a period of 8 years. The data-collection method also allowed us to analyze dynamic changes in search engine coverage, illustrated here on Google - the search engine used for the longest period of time for data collection in this project.
  5. Zhitomirsky-Geffet, M.; Bar-Ilan, J.; Levene, M.: Analysis of change in users' assessment of search results over time (2017) 0.01
    
    Abstract
    We present the first systematic study of the influence of time on user judgements for rankings and relevance grades of web search engine results. The goal of this study is to evaluate the change in user assessment of search results and explore how users' judgements change. To this end, we conducted a large-scale user study with 86 participants who evaluated 2 different queries and 4 diverse result sets twice with an interval of 2 months. To analyze the results we investigate whether 2 types of patterns of user behavior from the theory of categorical thinking hold for the case of evaluation of search results: (a) coarseness and (b) locality. To quantify these patterns we devised 2 new measures of change in user judgements and distinguish between local (when users swap between close ranks and relevance values) and nonlocal changes. Two types of judgements were considered in this study: (a) relevance on a 4-point scale, and (b) ranking on a 10-point scale without ties. We found that users tend to change their judgements of the results over time in about 50% of cases for relevance and in 85% of cases for ranking. However, the majority of these changes were local.
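The abstract distinguishes local from nonlocal changes but does not give the measures' exact definitions. As a hedged sketch of the distinction only (we assume a change is "local" when a judgement moves by at most one rank or grade; the data is invented):

```python
# Classify per-result changes between two evaluation sessions as
# unchanged, local (small move), or nonlocal (large move).
def classify_changes(first, second, threshold=1):
    out = {"unchanged": 0, "local": 0, "nonlocal": 0}
    for a, b in zip(first, second):
        d = abs(a - b)
        if d == 0:
            out["unchanged"] += 1
        elif d <= threshold:
            out["local"] += 1
        else:
            out["nonlocal"] += 1
    return out

# e.g. one user's rankings of the same 5 results, two months apart
changes = classify_changes([1, 2, 3, 4, 5], [4, 1, 3, 5, 2])
```

Counting judgement pairs this way over all users and results yields the proportions of local versus nonlocal changes that the findings report.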
  6. Barsky, E.; Bar-Ilan, J.: The impact of task phrasing on the choice of search keywords and on the search process and success (2012) 0.01
    
    Abstract
    This experiment studied the impact of various task phrasings on the search process. Eighty-eight searchers performed four web search tasks prescribed by the researchers. Each task was linked to an existing target web page, containing a piece of text that served as the basis for the task. A matching phrasing was a task whose wording matched the text of the target page. A nonmatching phrasing was synonymous with the matching phrasing, but had no match with the target page. Searchers received tasks for both types in English and in Hebrew. The search process was logged. The findings confirm that task phrasing shapes the search process and outcome, and also user satisfaction. Each search stage (retrieval of the target page, visiting the target page, and finding the target answer) was associated with different phenomena; for example, target page retrieval was negatively affected by persistence in search patterns (e.g., use of phrases), user-originated keywords, shorter queries, and omitting key keywords from the queries. Searchers were easily driven away from the top-ranked target pages by lower-ranked pages with title tags matching the queries. Some searchers created consistently longer queries than other searchers, regardless of the task length. Several consistent behavior patterns that characterized the Hebrew language were uncovered, including the use of keyword modifications (replacing infinitive forms with nouns), omitting prefixes and articles, and preferences for the common language. The success self-assessment also depended on whether the wording of the answer matched the task phrasing.
  7. Bronstein, J.; Gazit, T.; Perez, O.; Bar-Ilan, J.; Aharony, N.; Amichai-Hamburger, Y.: An examination of the factors contributing to participation in online social platforms (2016) 0.00
    
    Date
    20. 1.2015 18:30:22
  8. Bar-Ilan, J.; Peritz, B.C.: Informetric theories and methods for exploring the Internet : an analytical survey of recent research literature (2002) 0.00
    
    Abstract
    The Internet, and more specifically the World Wide Web, is quickly becoming one of our main information sources. Systematic evaluation and analysis can help us understand how this medium works, grows, and changes, and how it influences our lives and research. New approaches in informetrics can provide an appropriate means towards achieving the above goals, and towards establishing a sound theory. This paper presents a selective review of research based on the Internet, using bibliometric and informetric methods and tools. Some of these studies clearly show the applicability of bibliometric laws to the Internet, while others establish new definitions and methods based on the respective definitions for printed sources. Both informetrics and Internet research can gain from these additional methods.
  9. Shema, H.; Bar-Ilan, J.; Thelwall, M.: Do blog citations correlate with a higher number of future citations? : Research blogs as a potential source for alternative metrics (2014) 0.00
    
    Abstract
    Journal-based citations are an important source of data for impact indices. However, the impact of journal articles extends beyond formal scholarly discourse. Measuring online scholarly impact calls for new indices, complementary to the older ones. This article examines a possible alternative metric source, blog posts aggregated at ResearchBlogging.org, which discuss peer-reviewed articles and provide full bibliographic references. Articles reviewed in these blogs therefore receive "blog citations." We hypothesized that articles receiving blog citations close to their publication time receive more journal citations later than the articles in the same journal published in the same year that did not receive such blog citations. Statistically significant evidence for articles published in 2009 and 2010 support this hypothesis for seven of 12 journals (58%) in 2009 and 13 of 19 journals (68%) in 2010. We suggest, based on these results, that blog citations can be used as an alternative metric source.
  10. Bar-Ilan, J.: Web links and search engine ranking : the case of Google and the query "Jew" (2006) 0.00
    
    Abstract
    The World Wide Web has become one of our more important information sources, and commercial search engines are the major tools for locating information; however, it is not enough for a Web page to be indexed by the search engines - it must also rank high on relevant queries. One of the parameters involved in ranking is the number and quality of links pointing to the page, based on the assumption that links convey appreciation for a page. This article presents the results of a content analysis of the links to two top pages retrieved by Google for the query "jew" as of July 2004: the "jew" entry on the free online encyclopedia Wikipedia, and the home page of "Jew Watch," a highly anti-Semitic site. The top results for the query "jew" gained public attention in April 2004, when it was noticed that the "Jew Watch" homepage ranked number 1. From this point on, both sides engaged in "Googlebombing" (i.e., increasing the number of links pointing to these pages). The results of the study show that most of the links to these pages come from blogs and discussion links, and the number of links pointing to these pages in appreciation of their content is extremely small. These findings have implications for ranking algorithms based on link counts, and emphasize the huge difference between Web links and citations in the scientific community.
  11. Bar-Ilan, J.: Evaluating the stability of the search tools Hotbot and Snap : a case study (2000) 0.00
    
    Abstract
    Discusses the results of a case study in which 20 random queries were presented for ten consecutive days to Hotbot and Snap, two search tools that draw their results from the database of Inktomi. The results show huge daily fluctuations in the number of hits retrieved by Hotbot, and high stability in the hits displayed by Snap. These findings should alert users of Hotbot to its instability as of October 1999, and they raise questions about the reliability of previous studies that estimated the size of Hotbot based on its overlap with other search engines.
  12. Bar-Ilan, J.; Belous, Y.: Children as architects of Web directories : an exploratory study (2007) 0.00
    
    Abstract
    Children are increasingly using the Web. Cognitive theory tells us that directory structures are especially suited for information retrieval by children; however, empirical results show that they prefer keyword searching. One of the reasons for these findings could be that the directory structures and terminology are created by grown-ups. Using a card-sorting method and an enveloping system, we simulated the structure of a directory. Our goal was to try to understand what browsable, hierarchical subject categories children create when suggested terms are supplied and they are free to add or delete terms. Twelve groups of four children each (fourth and fifth graders) participated in our exploratory study. The initial terminology presented to the children was based on names of categories used in popular directories, in the sections on Arts, Television, Music, Cinema, and Celebrities. The children were allowed to introduce additional cards and change the terms appearing on the 61 cards. Findings show that the different groups reached reasonable consensus; the majority of the category names used by existing directories were acceptable to them and only a small minority of the terms caused confusion. Our recommendation is to include children in the design process of directories, not only in designing the interface but also in designing the content structure.
  13. Bar-Ilan, J.: Comparing rankings of search results on the Web (2005) 0.00
    
    Abstract
    The Web has become an information source for professional data gathering. Because of the vast amounts of information on almost all topics, one cannot systematically go over the whole set of results, and therefore must rely on the ordering of the results by the search engine. It is well known that search engines on the Web have low overlap in terms of coverage. In this study we measure how similar the rankings of search engines are on the overlapping results. We compare rankings of results for identical queries retrieved from several search engines. The method is based only on the set of URLs that appear in the answer sets of the engines being compared. For comparing the similarity of rankings of two search engines, the Spearman correlation coefficient is computed. When comparing more than two sets, Kendall's W is used. These are well-known measures and the statistical significance of the results can be computed. The methods are demonstrated on a set of 15 queries that were submitted to four large Web search engines. The findings indicate that the large public search engines on the Web employ considerably different ranking algorithms.
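The two measures named in the abstract are standard and can be sketched directly; the rankings below are invented for illustration, not data from the study:

```python
# Spearman's rank correlation (no ties) for two engines' rankings of the
# same overlapping URLs, and Kendall's W for agreement among k engines.
def spearman_rho(rank_a, rank_b):
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def kendalls_w(rankings):
    # rankings: k lists, each a permutation of ranks 1..n of the same URLs
    k, n = len(rankings), len(rankings[0])
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean_total = k * (n + 1) / 2
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (k ** 2 * (n ** 3 - n))

engines = [[1, 2, 3, 4, 5], [2, 1, 3, 5, 4], [1, 3, 2, 4, 5]]
rho = spearman_rho(engines[0], engines[1])
w = kendalls_w(engines)
```

Both statistics lie in well-understood ranges (rho in [-1, 1], W in [0, 1]), which is what makes the significance testing mentioned in the abstract possible.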
  14. Shema, H.; Bar-Ilan, J.; Thelwall, M.: How is research blogged? : A content analysis approach (2015) 0.00
    
    Abstract
    Blogs that cite academic articles have emerged as a potential source of alternative impact metrics for the visibility of the blogged articles. Nevertheless, to evaluate more fully the value of blog citations, it is necessary to investigate whether research blogs focus on particular types of articles or give new perspectives on scientific discourse. Therefore, we studied the characteristics of peer-reviewed references in blogs and the typical content of blog posts to gain insight into bloggers' motivations. The sample consisted of 391 blog posts from 2010 to 2012 in Researchblogging.org's health category. The bloggers mostly cited recent research articles or reviews from top multidisciplinary and general medical journals. Using content analysis methods, we created a general classification scheme for blog post content with 10 major topic categories, each with several subcategories. The results suggest that health research bloggers rarely self-cite and that the vast majority of their blog posts (90%) include a general discussion of the issue covered in the article, with more than one quarter providing health-related advice based on the article(s) covered. These factors suggest a genuine attempt to engage with a wider, nonacademic audience. Nevertheless, almost 30% of the posts included some criticism of the issues being discussed.
  15. Zhitomirsky-Geffet, M.; Bar-Ilan, J.; Levene, M.: Categorical relevance judgment (2018) 0.00
    
    Abstract
    In this study we aim to explore users' behavior when assessing the relevance of search results, based on the hypothesis of categorical thinking. To investigate how users categorize search engine results, we perform several experiments in which users are asked to group a list of 20 search results into several categories, attaching a relevance judgment to each formed category. Moreover, to determine how users change their minds over time, each experiment was repeated three times under the same conditions, with a gap of one month between rounds. The results show that on average users form 4-5 categories. Within each round, the size of a category decreases as its relevance increases. To measure the agreement between the search engine's ranking and the users' relevance judgments, we defined two novel similarity measures, the average concordance and the MinMax swap ratio. Similarity is shown to be highest in the third round, as the users' opinions stabilize. Qualitative analysis uncovered some interesting points: users tended to categorize results by the type and reliability of their source, and in particular found commercial sites less trustworthy and attached high relevance to Wikipedia when their prior domain knowledge was limited.