Search (9 results, page 1 of 1)

  • Filter: author_ss:"Bar-Ilan, J."
  1. Bar-Ilan, J.; Levene, M.: The hw-rank : an h-index variant for ranking web pages (2015) 0.04
    0.03574687 = product of:
      0.1072406 = sum of:
        0.1072406 = weight(_text_:index in 1694) [ClassicSimilarity], result of:
          0.1072406 = score(doc=1694,freq=2.0), product of:
            0.2221244 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.05083213 = queryNorm
            0.48279524 = fieldWeight in 1694, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.078125 = fieldNorm(doc=1694)
      0.33333334 = coord(1/3)
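The explain tree above can be reproduced by hand. A minimal sketch, assuming Lucene's ClassicSimilarity (TF-IDF) formulas: tf = sqrt(freq), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and the final score = queryWeight × fieldWeight × coord:

```python
import math

# Values copied from the explain tree for result 1 (doc 1694)
freq = 2.0              # termFreq of "index"
idf = 4.369764          # idf(docFreq=1520, maxDocs=44218)
query_norm = 0.05083213
field_norm = 0.078125
coord = 1 / 3           # coord(1/3): 1 of 3 query clauses matched

tf = math.sqrt(freq)                      # 1.4142135
query_weight = idf * query_norm           # 0.2221244
field_weight = tf * idf * field_norm      # 0.48279524
score = query_weight * field_weight * coord
print(round(score, 8))                    # ~0.03574687
```

The recomputed value matches the 0.03574687 shown for this result, confirming how the displayed factors combine.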
    
  2. Bar-Ilan, J.: Informetrics (2009) 0.02
    0.02144812 = product of:
      0.06434436 = sum of:
        0.06434436 = weight(_text_:index in 3822) [ClassicSimilarity], result of:
          0.06434436 = score(doc=3822,freq=2.0), product of:
            0.2221244 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.05083213 = queryNorm
            0.28967714 = fieldWeight in 3822, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=3822)
      0.33333334 = coord(1/3)
    
    Abstract
    Informetrics is a subfield of information science that encompasses bibliometrics, scientometrics, cybermetrics, and webometrics. This encyclopedia entry provides an overview of informetrics and its subfields. In general, informetrics deals with quantitative aspects of information: its production, dissemination, evaluation, and use. Bibliometrics and scientometrics study scientific literature: papers, journals, patents, and citations; in webometric studies, the sources studied are Web pages and Web sites, and citations are replaced by hypertext links. The entry introduces major topics in informetrics: citation analysis and citation-related studies, the journal impact factor, the recently defined h-index, citation databases, co-citation analysis, open access publications and their implications, informetric laws, techniques for mapping and visualization of informetric phenomena, the emerging subfields of webometrics, cybermetrics and link analysis, and research evaluation.
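The h-index mentioned in this abstract has a simple operational definition: a scholar has index h if h of their papers have at least h citations each. A minimal sketch of the computation, using hypothetical citation counts for illustration:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank      # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

# Hypothetical citation counts: 4 papers have at least 4 citations each
print(h_index([10, 8, 5, 4, 3]))  # 4
```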
  3. Bar-Ilan, J.: What do we know about links and linking? : a framework for studying links in academic environments (2005) 0.01
    0.00986604 = product of:
      0.029598119 = sum of:
        0.029598119 = product of:
          0.059196237 = sum of:
            0.059196237 = weight(_text_:classification in 1058) [ClassicSimilarity], result of:
              0.059196237 = score(doc=1058,freq=6.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.3656675 = fieldWeight in 1058, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1058)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The Web is an enormous set of documents connected through hypertext links created by authors of Web pages. These links have been studied quantitatively, but little has been done so far to understand why these links are created. As a first step towards a better understanding, we propose a classification of link types in academic environments on the Web. The classification is multi-faceted and involves different aspects of the source and the target page, the link area, and the relationship between the source and the target. Such a classification provides insight into the diverse uses of hypertext links on the Web, and has implications for browsing and ranking in IR systems by differentiating between different types of links. As a case study we classified a sample of links between sites of Israeli academic institutions.
  4. Bergman, O.; Gradovitch, N.; Bar-Ilan, J.; Beyth-Marom, R.: Folder versus tag preference in personal information management (2013) 0.01
    0.0067129894 = product of:
      0.020138968 = sum of:
        0.020138968 = product of:
          0.040277936 = sum of:
            0.040277936 = weight(_text_:classification in 1103) [ClassicSimilarity], result of:
              0.040277936 = score(doc=1103,freq=4.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.24880521 = fieldWeight in 1103, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1103)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Users' preferences for folders versus tags were studied in 2 working environments where both options were available to them. In the Gmail study, we informed 75 participants about both folder-labeling and tag-labeling, observed their storage behavior after 1 month, and asked them to estimate the proportions of different retrieval options in their behavior. In the Windows 7 study, we informed 23 participants about tags and asked them to tag all their files for 2 weeks, followed by a period of 5 weeks of free choice between the 2 methods. Their storage and retrieval habits were tested prior to the learning session and, after 7 weeks, using special classification recording software and a retrieval-habits questionnaire. A controlled retrieval task and an in-depth interview were conducted. Results of both studies show a strong preference for folders over tags for both storage and retrieval. In the minority of cases where tags were used for storage, participants typically used a single tag per information item. Moreover, when multiple classification was used for storage, it was only marginally used for retrieval. The controlled retrieval task showed lower success rates and slower retrieval speeds for tag use. Possible reasons for participants' preferences are discussed.
  5. Bronstein, J.; Gazit, T.; Perez, O.; Bar-Ilan, J.; Aharony, N.; Amichai-Hamburger, Y.: An examination of the factors contributing to participation in online social platforms (2016) 0.01
    0.00573921 = product of:
      0.01721763 = sum of:
        0.01721763 = product of:
          0.03443526 = sum of:
            0.03443526 = weight(_text_:22 in 3364) [ClassicSimilarity], result of:
              0.03443526 = score(doc=3364,freq=2.0), product of:
                0.17800546 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05083213 = queryNorm
                0.19345059 = fieldWeight in 3364, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3364)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    20. 1.2015 18:30:22
  6. Shema, H.; Bar-Ilan, J.; Thelwall, M.: How is research blogged? : A content analysis approach (2015) 0.00
    0.0047468003 = product of:
      0.014240401 = sum of:
        0.014240401 = product of:
          0.028480802 = sum of:
            0.028480802 = weight(_text_:classification in 1863) [ClassicSimilarity], result of:
              0.028480802 = score(doc=1863,freq=2.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.17593184 = fieldWeight in 1863, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1863)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Blogs that cite academic articles have emerged as a potential source of alternative impact metrics for the visibility of the blogged articles. Nevertheless, to evaluate more fully the value of blog citations, it is necessary to investigate whether research blogs focus on particular types of articles or give new perspectives on scientific discourse. Therefore, we studied the characteristics of peer-reviewed references in blogs and the typical content of blog posts to gain insight into bloggers' motivations. The sample consisted of 391 blog posts from 2010 to 2012 in Researchblogging.org's health category. The bloggers mostly cited recent research articles or reviews from top multidisciplinary and general medical journals. Using content analysis methods, we created a general classification scheme for blog post content with 10 major topic categories, each with several subcategories. The results suggest that health research bloggers rarely self-cite and that the vast majority of their blog posts (90%) include a general discussion of the issue covered in the article, with more than one quarter providing health-related advice based on the article(s) covered. These factors suggest a genuine attempt to engage with a wider, nonacademic audience. Nevertheless, almost 30% of the posts included some criticism of the issues being discussed.
  7. Zhitomirsky-Geffet, M.; Erez, E.S.; Bar-Ilan, J.: Toward multiviewpoint ontology construction by collaboration of non-experts and crowdsourcing : the case of the effect of diet on health (2017) 0.00
    0.0047468003 = product of:
      0.014240401 = sum of:
        0.014240401 = product of:
          0.028480802 = sum of:
            0.028480802 = weight(_text_:classification in 3439) [ClassicSimilarity], result of:
              0.028480802 = score(doc=3439,freq=2.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.17593184 = fieldWeight in 3439, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3439)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Domain experts are skilled in building a narrow ontology that reflects their subfield of expertise based on their work experience and personal beliefs. We call this type of ontology a single-viewpoint ontology. There can be a variety of such single-viewpoint ontologies that represent a wide spectrum of subfields and expert opinions on the domain. However, to have a complete formal vocabulary for the domain they need to be linked and unified into a multiviewpoint model while having the subjective viewpoint statements marked and distinguished from the objectively true statements. In this study, we propose and implement a two-phase methodology for multiviewpoint ontology construction by nonexpert users. The proposed methodology was implemented for the domain of the effect of diet on health. A large-scale crowdsourcing experiment was conducted with about 750 ontological statements to determine whether each of these statements is objectively true, viewpoint, or erroneous. Typically, in crowdsourcing experiments the workers are asked for their personal opinions on the given subject. However, in our case their ability to objectively assess others' opinions was examined as well. Our results show substantially higher accuracy in classification for the objective assessment approach compared to the results based on personal opinions.
  8. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.00
    0.004591368 = product of:
      0.0137741035 = sum of:
        0.0137741035 = product of:
          0.027548207 = sum of:
            0.027548207 = weight(_text_:22 in 1634) [ClassicSimilarity], result of:
              0.027548207 = score(doc=1634,freq=2.0), product of:
                0.17800546 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05083213 = queryNorm
                0.15476047 = fieldWeight in 1634, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1634)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    20. 1.2015 18:30:22
  9. Zhitomirsky-Geffet, M.; Bar-Ilan, J.; Levene, M.: Testing the stability of "wisdom of crowds" judgments of search results over time and their similarity with the search engine rankings (2016) 0.00
    0.004591368 = product of:
      0.0137741035 = sum of:
        0.0137741035 = product of:
          0.027548207 = sum of:
            0.027548207 = weight(_text_:22 in 3071) [ClassicSimilarity], result of:
              0.027548207 = score(doc=3071,freq=2.0), product of:
                0.17800546 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05083213 = queryNorm
                0.15476047 = fieldWeight in 3071, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3071)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    20. 1.2015 18:30:22