Search (195 results, page 1 of 10)

  • theme_ss:"Informetrie"
  1. H-Index auch im Web of Science (2008) 0.06
    0.057196997 = product of:
      0.114393994 = sum of:
        0.114393994 = sum of:
          0.072124116 = weight(_text_:search in 590) [ClassicSimilarity], result of:
            0.072124116 = score(doc=590,freq=6.0), product of:
              0.18072747 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.051997773 = queryNorm
              0.39907667 = fieldWeight in 590, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.046875 = fieldNorm(doc=590)
          0.04226988 = weight(_text_:22 in 590) [ClassicSimilarity], result of:
            0.04226988 = score(doc=590,freq=2.0), product of:
              0.18208735 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051997773 = queryNorm
              0.23214069 = fieldWeight in 590, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=590)
      0.5 = coord(1/2)
    
    Content
    "Zur Kurzmitteilung "Latest enhancements in Scopus: ... h-Index incorporated in Scopus" in den letzten Online-Mitteilungen (Online-Mitteilungen 92, S.31) ist zu korrigieren, dass der h-Index sehr wohl bereits im Web of Science enthalten ist. Allerdings findet man/frau diese Information nicht in der "cited ref search", sondern neben der Trefferliste einer Quick Search, General Search oder einer Suche über den Author Finder in der rechten Navigationsleiste unter dem Titel "Citation Report". Der "Citation Report" bietet für die in der jeweiligen Trefferliste angezeigten Arbeiten: - Die Gesamtzahl der Zitierungen aller Arbeiten in der Trefferliste - Die mittlere Zitationshäufigkeit dieser Arbeiten - Die Anzahl der Zitierungen der einzelnen Arbeiten, aufgeschlüsselt nach Publikationsjahr der zitierenden Arbeiten - Die mittlere Zitationshäufigkeit dieser Arbeiten pro Jahr - Den h-Index (ein h-Index von x sagt aus, dass x Arbeiten der Trefferliste mehr als x-mal zitiert wurden; er ist gegenüber sehr hohen Zitierungen einzelner Arbeiten unempfindlicher als die mittlere Zitationshäufigkeit)."
    Date
    6. 4.2008 19:04:22
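     Note: the indented score breakdown attached to each hit above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring. The sketch below is a minimal illustration, assuming the standard ClassicSimilarity definitions (tf = sqrt(termFreq), idf = 1 + ln(maxDocs / (docFreq + 1))) and taking queryNorm as given from the output; the helper names are illustrative, not part of any Lucene API. It reproduces the score of result no. 1 from the constants shown in its breakdown.

     import math

     MAX_DOCS = 44218          # maxDocs reported in the explain output
     QUERY_NORM = 0.051997773  # queryNorm, taken as given (it depends on the full query)

     def idf(doc_freq):
         # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
         return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

     def term_score(freq, doc_freq, field_norm):
         tf = math.sqrt(freq)                            # tf = sqrt(termFreq)
         query_weight = idf(doc_freq) * QUERY_NORM       # queryWeight
         field_weight = tf * idf(doc_freq) * field_norm  # fieldWeight
         return query_weight * field_weight

     # Result no. 1 (doc 590): "search" with freq=6, "22" with freq=2, fieldNorm=0.046875
     s_search = term_score(6.0, 3718, 0.046875)  # ~0.0721
     s_22 = term_score(2.0, 3622, 0.046875)      # ~0.0423
     coord = 0.5                                 # coord(1/2): 1 of 2 query clauses matched
     print((s_search + s_22) * coord)            # ~0.0572, i.e. the 0.06 shown for this hit

     The remaining breakdowns in this list differ only in term frequencies, fieldNorm, and the coord factor.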
  2. Zhang, Y.; Jansen, B.J.; Spink, A.: Identification of factors predicting clickthrough in Web searching using neural network analysis (2009) 0.06
    0.057196997 = product of:
      0.114393994 = sum of:
        0.114393994 = sum of:
          0.072124116 = weight(_text_:search in 2742) [ClassicSimilarity], result of:
            0.072124116 = score(doc=2742,freq=6.0), product of:
              0.18072747 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.051997773 = queryNorm
              0.39907667 = fieldWeight in 2742, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.046875 = fieldNorm(doc=2742)
          0.04226988 = weight(_text_:22 in 2742) [ClassicSimilarity], result of:
            0.04226988 = score(doc=2742,freq=2.0), product of:
              0.18208735 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051997773 = queryNorm
              0.23214069 = fieldWeight in 2742, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2742)
      0.5 = coord(1/2)
    
    Abstract
     In this research, we aim to identify factors that significantly affect the clickthrough of Web searchers. Our underlying goal is to determine more efficient methods to optimize the clickthrough rate. We devise a clickthrough metric for measuring customer satisfaction of search engine results using the number of links visited, number of queries a user submits, and rank of clicked links. We use a neural network to detect the significant influence of searching characteristics on future user clickthrough. Our results show that high occurrences of query reformulation, lengthy searching duration, longer query length, and the higher ranking of prior clicked links correlate positively with future clickthrough. We provide recommendations for leveraging these findings for improving the performance of search engine retrieval and result ranking, along with implications for search engine marketing.
    Date
    22. 3.2009 17:49:11
  3. Hayer, L.: Lazarsfeld zitiert : eine bibliometrische Analyse (2008) 0.03
    0.034962818 = product of:
      0.069925636 = sum of:
        0.069925636 = sum of:
          0.03470073 = weight(_text_:search in 1934) [ClassicSimilarity], result of:
            0.03470073 = score(doc=1934,freq=2.0), product of:
              0.18072747 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.051997773 = queryNorm
              0.19200584 = fieldWeight in 1934, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1934)
          0.035224903 = weight(_text_:22 in 1934) [ClassicSimilarity], result of:
            0.035224903 = score(doc=1934,freq=2.0), product of:
              0.18208735 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051997773 = queryNorm
              0.19345059 = fieldWeight in 1934, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1934)
      0.5 = coord(1/2)
    
    Abstract
     To approach an answer to the question of what significance the estate of a scholar such as Paul F. Lazarsfeld (with numerous still unpublished writings) might have for current research, one can examine how often this scholar is cited. If an author is cited, he is also being used. If he is used frequently over a long period of time, engaging with his estate is presumably also of use. In addition, the citations make it possible to determine which parts of a scholar's life's work appear relevant to current research. From this, the most pressing questions for working through the estate can be derived. The task for the following study was therefore: How often is Paul F. Lazarsfeld cited? Of further interest: Who cites him, and where? The study was carried out using the meta-database "ISI Web of Knowledge". Within it, the "Web of Science" was searched with the "Cited Reference Search" tool for the cited author (Cited Author) "Lazarsfeld P*". This search yielded 1535 references (References). Selecting all references leads to 4839 results (Results). The databases SCI-Expanded, SSCI, and A&HCI were used. The search covered the publication years 1941-2008. Before 1956, however, only very few citations were found: five in 1946, otherwise at most three, and none at all in 1942-1944 and 1949. Moreover, the year 2008 is far from over. (Even so, there were already 24 citations before the end of March!)
    Date
    22. 6.2008 12:54:12
  4. Walters, W.H.; Linvill, A.C.: Bibliographic index coverage of open-access journals in six subject areas (2011) 0.03
    0.034962818 = product of:
      0.069925636 = sum of:
        0.069925636 = sum of:
          0.03470073 = weight(_text_:search in 4635) [ClassicSimilarity], result of:
            0.03470073 = score(doc=4635,freq=2.0), product of:
              0.18072747 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.051997773 = queryNorm
              0.19200584 = fieldWeight in 4635, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4635)
          0.035224903 = weight(_text_:22 in 4635) [ClassicSimilarity], result of:
            0.035224903 = score(doc=4635,freq=2.0), product of:
              0.18208735 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051997773 = queryNorm
              0.19345059 = fieldWeight in 4635, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4635)
      0.5 = coord(1/2)
    
    Abstract
     We investigate the extent to which open-access (OA) journals and articles in biology, computer science, economics, history, medicine, and psychology are indexed in each of 11 bibliographic databases. We also look for variations in index coverage by journal subject, journal size, publisher type, publisher size, date of first OA issue, region of publication, language of publication, publication fee, and citation impact factor. Two databases, Biological Abstracts and PubMed, provide very good coverage of the OA journal literature, indexing 60 to 63% of all OA articles in their disciplines. Five databases provide moderately good coverage (22-41%), and four provide relatively poor coverage (0-12%). OA articles in biology journals, English-only journals, high-impact journals, and journals that charge publication fees of $1,000 or more are especially likely to be indexed. Conversely, articles from OA publishers in Africa, Asia, or Central/South America are especially unlikely to be indexed. Four of the 11 databases index commercially published articles at a substantially higher rate than articles published by universities, scholarly societies, nonprofit publishers, or governments. Finally, three databases (EBSCO Academic Search Complete, ProQuest Research Library, and Wilson OmniFile) provide less comprehensive coverage of OA articles than of articles in comparable subscription journals.
  5. Nicholls, P.T.: Empirical validation of Lotka's law (1986) 0.03
    0.028179923 = product of:
      0.056359846 = sum of:
        0.056359846 = product of:
          0.11271969 = sum of:
            0.11271969 = weight(_text_:22 in 5509) [ClassicSimilarity], result of:
              0.11271969 = score(doc=5509,freq=2.0), product of:
                0.18208735 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051997773 = queryNorm
                0.61904186 = fieldWeight in 5509, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5509)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information processing and management. 22(1986), S.417-419
  6. Nicolaisen, J.: Citation analysis (2007) 0.03
    0.028179923 = product of:
      0.056359846 = sum of:
        0.056359846 = product of:
          0.11271969 = sum of:
            0.11271969 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.11271969 = score(doc=6091,freq=2.0), product of:
                0.18208735 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051997773 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13. 7.2008 19:53:22
  7. Fiala, J.: Information flood : fiction and reality (1987) 0.03
    0.028179923 = product of:
      0.056359846 = sum of:
        0.056359846 = product of:
          0.11271969 = sum of:
            0.11271969 = weight(_text_:22 in 1080) [ClassicSimilarity], result of:
              0.11271969 = score(doc=1080,freq=2.0), product of:
                0.18208735 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051997773 = queryNorm
                0.61904186 = fieldWeight in 1080, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=1080)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Thermochimica acta. 110(1987), S.11-22
  8. Thelwall, M.: Web impact factors and search engine coverage (2000) 0.03
    0.027760584 = product of:
      0.055521168 = sum of:
        0.055521168 = product of:
          0.111042336 = sum of:
            0.111042336 = weight(_text_:search in 4539) [ClassicSimilarity], result of:
              0.111042336 = score(doc=4539,freq=8.0), product of:
                0.18072747 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.051997773 = queryNorm
                0.6144187 = fieldWeight in 4539, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4539)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Search engines index only a proportion of the web, and this proportion is not determined randomly but by following algorithms that take into account the properties that impact factors measure. A survey was conducted in order to test the coverage of search engines and to decide whether their partial coverage is indeed an obstacle to using them to calculate web impact factors. The results indicate that search engine coverage, even of large national domains, is extremely uneven and would be likely to lead to misleading calculations.
  9. Bar-Ilan, J.: On the overlap, the precision and estimated recall of search engines : a case study of the query 'Erdös' (1998) 0.03
    0.027157614 = product of:
      0.054315228 = sum of:
        0.054315228 = product of:
          0.108630456 = sum of:
            0.108630456 = weight(_text_:search in 3753) [ClassicSimilarity], result of:
              0.108630456 = score(doc=3753,freq=10.0), product of:
                0.18072747 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.051997773 = queryNorm
                0.6010733 = fieldWeight in 3753, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3753)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Investigates the retrieval capabilities of 6 Internet search engines on a simple query. Existing work on search engine evaluation considers only the first 10 or 20 results returned by the search engine. In this work, all documents that the search engine pointed at were retrieved and thoroughly examined. Thus the precision of the whole retrieval process could be calculated, the overlap between the results of the engines studied, and an estimate of the recall of the searches given. The precision of the engines is high, recall is very low, and the overlap is minimal.
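     Note: purely to illustrate the measures named in the abstract above (not Bar-Ilan's actual procedure), the sketch below computes precision, pooled relative recall, and pairwise overlap for two hypothetical engines; the URL sets and the Jaccard-style overlap definition are assumptions.

     # Hypothetical result sets for two engines plus a set of judged relevant URLs
     engine_a = {"u1", "u2", "u3", "u4", "u5"}
     engine_b = {"u3", "u4", "u6"}
     relevant = {"u1", "u3", "u4", "u6", "u9"}

     def precision(results, relevant):
         return len(results & relevant) / len(results)

     def relative_recall(results, relevant, pool):
         # recall estimated against the relevant documents found by any engine
         return len(results & relevant) / len(pool & relevant)

     def overlap(a, b):
         # Jaccard-style overlap of two result sets (an assumed definition)
         return len(a & b) / len(a | b)

     pool = engine_a | engine_b
     print(precision(engine_a, relevant))              # 0.6
     print(relative_recall(engine_a, relevant, pool))  # 0.75
     print(overlap(engine_a, engine_b))                # ~0.33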
  10. Haley, M.R.: Ranking top economics and finance journals using Microsoft academic search versus Google scholar : How does the new publish or perish option compare? (2014) 0.03
    0.025499722 = product of:
      0.050999444 = sum of:
        0.050999444 = product of:
          0.10199889 = sum of:
            0.10199889 = weight(_text_:search in 1255) [ClassicSimilarity], result of:
              0.10199889 = score(doc=1255,freq=12.0), product of:
                0.18072747 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.051997773 = queryNorm
                0.5643796 = fieldWeight in 1255, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1255)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Recently, Harzing's Publish or Perish software was updated to include Microsoft Academic Search as a second citation database search option for computing various citation-based metrics. This article explores the new search option by scoring 50 top economics and finance journals and comparing them with the results obtained using the original Google Scholar-based search option. The new database delivers significantly smaller scores for all metrics, but the rank correlations across the two databases for the h-index, g-index, AWCR, and e-index are significant, especially when the time frame is restricted to more recent years. Comparisons are also made to the Article Influence score from eigenfactor.org and to the RePEc h-index, both of which adjust for journal-level self-citations.
    Object
    Microsoft Academic Search
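     Note: result no. 1 describes the h-index informally, and the abstract above compares h-index values across databases. The sketch below follows the usual convention (an h-index of h means at least h works have at least h citations each); the "more than x times" wording in result no. 1 is the source's own phrasing.

     def h_index(citations):
         # largest h such that at least h works have at least h citations each
         cited = sorted(citations, reverse=True)
         h = 0
         for rank, count in enumerate(cited, start=1):
             if count >= rank:
                 h = rank
             else:
                 break
         return h

     print(h_index([10, 8, 5, 4, 3, 2]))  # 4: four works with at least 4 citations each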
  11. Su, Y.; Han, L.-F.: ¬A new literature growth model : variable exponential growth law of literature (1998) 0.02
    0.024907768 = product of:
      0.049815536 = sum of:
        0.049815536 = product of:
          0.09963107 = sum of:
            0.09963107 = weight(_text_:22 in 3690) [ClassicSimilarity], result of:
              0.09963107 = score(doc=3690,freq=4.0), product of:
                0.18208735 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051997773 = queryNorm
                0.54716086 = fieldWeight in 3690, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3690)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.1999 19:22:35
  12. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    0.024907768 = product of:
      0.049815536 = sum of:
        0.049815536 = product of:
          0.09963107 = sum of:
            0.09963107 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.09963107 = score(doc=3925,freq=4.0), product of:
                0.18208735 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051997773 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 15:22:28
  13. Diodato, V.: Dictionary of bibliometrics (1994) 0.02
    0.024657432 = product of:
      0.049314864 = sum of:
        0.049314864 = product of:
          0.09862973 = sum of:
            0.09862973 = weight(_text_:22 in 5666) [ClassicSimilarity], result of:
              0.09862973 = score(doc=5666,freq=2.0), product of:
                0.18208735 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051997773 = queryNorm
                0.5416616 = fieldWeight in 5666, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5666)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Rez. in: Journal of library and information science 22(1996) no.2, S.116-117 (L.C. Smith)
  14. Bookstein, A.: Informetric distributions : I. Unified overview (1990) 0.02
    0.024657432 = product of:
      0.049314864 = sum of:
        0.049314864 = product of:
          0.09862973 = sum of:
            0.09862973 = weight(_text_:22 in 6902) [ClassicSimilarity], result of:
              0.09862973 = score(doc=6902,freq=2.0), product of:
                0.18208735 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051997773 = queryNorm
                0.5416616 = fieldWeight in 6902, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6902)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 18:55:29
  15. Bookstein, A.: Informetric distributions : II. Resilience to ambiguity (1990) 0.02
    0.024657432 = product of:
      0.049314864 = sum of:
        0.049314864 = product of:
          0.09862973 = sum of:
            0.09862973 = weight(_text_:22 in 4689) [ClassicSimilarity], result of:
              0.09862973 = score(doc=4689,freq=2.0), product of:
                0.18208735 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051997773 = queryNorm
                0.5416616 = fieldWeight in 4689, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4689)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 18:55:55
  16. Bhavnani, S.K.: Why is it difficult to find comprehensive information? : implications of information scatter for search and design (2005) 0.02
    0.021249771 = product of:
      0.042499542 = sum of:
        0.042499542 = product of:
          0.084999084 = sum of:
            0.084999084 = weight(_text_:search in 3684) [ClassicSimilarity], result of:
              0.084999084 = score(doc=3684,freq=12.0), product of:
                0.18072747 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.051997773 = queryNorm
                0.47031635 = fieldWeight in 3684, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3684)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The rapid development of Web sites providing extensive coverage of a topic, coupled with the development of powerful search engines (designed to help users find such Web sites), suggests that users can easily find comprehensive information about a topic. In domains such as consumer healthcare, finding comprehensive information about a topic is critical as it can improve a patient's judgment in making healthcare decisions, and can encourage higher compliance with treatment. However, recent studies show that despite using powerful search engines, many healthcare information seekers have difficulty finding comprehensive information even for narrow healthcare topics because the relevant information is scattered across many Web sites. To date, no studies have analyzed how facts related to a search topic are distributed across relevant Web pages and Web sites. In this study, the distribution of facts related to five common healthcare topics across high-quality sites is analyzed, and the reasons underlying those distributions are explored. The analysis revealed the existence of few pages that had many facts, many pages that had few facts, and no single page or site that provided all the facts. While such a distribution conforms to other information-related phenomena, a deeper analysis revealed that the distributions were caused by a trade-off between depth and breadth, leading to the existence of general, specialized, and sparse pages. Furthermore, the results helped to make explicit the knowledge needed by searchers to find comprehensive healthcare information, and suggested the motivation to explore distribution-conscious approaches for the development of future search systems, search interfaces, Web page designs, and training.
  17. Thelwall, M.: Quantitative comparisons of search engine results (2008) 0.02
    0.021249771 = product of:
      0.042499542 = sum of:
        0.042499542 = product of:
          0.084999084 = sum of:
            0.084999084 = weight(_text_:search in 2350) [ClassicSimilarity], result of:
              0.084999084 = score(doc=2350,freq=12.0), product of:
                0.18072747 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.051997773 = queryNorm
                0.47031635 = fieldWeight in 2350, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2350)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Search engines are normally used to find information or Web sites, but Webometric investigations use them for quantitative data such as the number of pages matching a query and the international spread of those pages. For this type of application, the accuracy of the hit count estimates and the range of URLs in the full results are important. Here, we compare the application programming interfaces of Google, Yahoo!, and Live Search for 1,587 single-word searches. The hit count estimates were broadly consistent, but with Yahoo! and Google reporting 5-6 times more hits than Live Search. Yahoo! tended to return slightly more matching URLs than Google, with Live Search returning significantly fewer. Yahoo!'s result URLs included a significantly wider range of domains and sites than the other two, and there was little consistency between the three engines in the number of different domains. In contrast, the three engines were reasonably consistent in the number of different top-level domains represented in the result URLs, although Yahoo! tended to return the most. In conclusion, quantitative results from the three search engines are mostly consistent, but with unexpected types of inconsistency that users should be aware of. Google is recommended for hit count estimates, but Yahoo! is recommended for all other Webometric purposes.
  18. Amolochitis, E.; Christou, I.T.; Tan, Z.-H.; Prasad, R.: ¬A heuristic hierarchical scheme for academic search and retrieval (2013) 0.02
    0.021249771 = product of:
      0.042499542 = sum of:
        0.042499542 = product of:
          0.084999084 = sum of:
            0.084999084 = weight(_text_:search in 2711) [ClassicSimilarity], result of:
              0.084999084 = score(doc=2711,freq=12.0), product of:
                0.18072747 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.051997773 = queryNorm
                0.47031635 = fieldWeight in 2711, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2711)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     We present PubSearch, a hybrid heuristic scheme for re-ranking academic papers retrieved from standard digital libraries such as the ACM Portal. The scheme is based on the hierarchical combination of a custom implementation of the term frequency heuristic, a time-depreciated citation score, and a graph-theoretically computed score that relates the paper's index terms to each other. We designed and developed a meta-search engine that submits user queries to standard digital repositories of academic publications and re-ranks the repository results using the hierarchical heuristic scheme. We evaluate our proposed re-ranking scheme via user feedback against the results of ACM Portal on a total of 58 different user queries specified by 15 different users. The results show that our proposed scheme significantly outperforms ACM Portal in terms of retrieval precision as measured by the most common metrics in Information Retrieval, including Normalized Discounted Cumulative Gain (NDCG) and Expected Reciprocal Rank (ERR), as well as a newly introduced lexicographic rule (LEX) for ranking search results. In particular, PubSearch outperforms ACM Portal by more than 77% in terms of ERR, by more than 11% in terms of NDCG, and by more than 907.5% in terms of LEX. We also re-rank the top-10 results of a subset of the original 58 user queries produced by Google Scholar, Microsoft Academic Search, and ArnetMiner; the results show that PubSearch compares very well against these search engines as well. The proposed scheme can be easily plugged into any existing search engine for retrieval of academic publications.
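     Note: the abstract above reports gains in NDCG and ERR. As a reminder of one common NDCG formulation (linear gain; not the paper's exact evaluation code), a minimal sketch with hypothetical relevance grades:

     import math

     def dcg(relevances):
         # DCG = sum(rel_i / log2(i + 1)) over 1-based ranks i
         return sum(rel / math.log2(i + 1) for i, rel in enumerate(relevances, start=1))

     def ndcg(relevances):
         ideal = dcg(sorted(relevances, reverse=True))
         return dcg(relevances) / ideal if ideal > 0 else 0.0

     # Hypothetical graded relevance of a ranked result list (higher = more relevant)
     print(round(ndcg([3, 2, 3, 0, 1]), 3))  # ~0.972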
  19. Lewison, G.: ¬The work of the Bibliometrics Research Group (City University) and associates (2005) 0.02
    0.02113494 = product of:
      0.04226988 = sum of:
        0.04226988 = product of:
          0.08453976 = sum of:
            0.08453976 = weight(_text_:22 in 4890) [ClassicSimilarity], result of:
              0.08453976 = score(doc=4890,freq=2.0), product of:
                0.18208735 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051997773 = queryNorm
                0.46428138 = fieldWeight in 4890, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4890)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 1.2007 17:02:22
  20. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.02
    0.02113494 = product of:
      0.04226988 = sum of:
        0.04226988 = product of:
          0.08453976 = sum of:
            0.08453976 = weight(_text_:22 in 1239) [ClassicSimilarity], result of:
              0.08453976 = score(doc=1239,freq=2.0), product of:
                0.18208735 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051997773 = queryNorm
                0.46428138 = fieldWeight in 1239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1239)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    18. 3.2014 19:13:22

Languages

  • e 183
  • d 9
  • sp 2
  • ro 1

Types

  • a 191
  • el 4
  • m 3
  • s 1