Search (81 results, page 1 of 5)

  • language_ss:"e"
  • type_ss:"a"
  • type_ss:"el"
  1. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.12
    0.121650845 = sum of:
      0.07216977 = product of:
        0.2886791 = sum of:
          0.2886791 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
            0.2886791 = score(doc=230,freq=2.0), product of:
              0.38523552 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.045439374 = queryNorm
              0.7493574 = fieldWeight in 230, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.0625 = fieldNorm(doc=230)
        0.25 = coord(1/4)
      0.049481075 = product of:
        0.09896215 = sum of:
          0.09896215 = weight(_text_:i in 230) [ClassicSimilarity], result of:
            0.09896215 = score(doc=230,freq=6.0), product of:
              0.17138503 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.045439374 = queryNorm
              0.57742584 = fieldWeight in 230, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.0625 = fieldNorm(doc=230)
        0.5 = coord(1/2)
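    How these numbers compose (the indented sums here and below are Lucene "explain" output for the classic TF-IDF similarity, in which idf = 1 + ln(maxDocs / (docFreq + 1)) and tf = sqrt(freq); shown for the term "3a" in document 230):
    \[
    \begin{aligned}
    \mathrm{idf} &= 1 + \ln\tfrac{44218}{24+1} \approx 8.478011\\
    \text{queryWeight} &= \mathrm{idf}\cdot\text{queryNorm} = 8.478011 \times 0.045439374 \approx 0.38523552\\
    \text{fieldWeight} &= \sqrt{2}\cdot\mathrm{idf}\cdot\text{fieldNorm} = 1.4142135 \times 8.478011 \times 0.0625 \approx 0.7493574\\
    \text{contribution} &= \text{queryWeight}\cdot\text{fieldWeight}\cdot\mathrm{coord} = 0.38523552 \times 0.7493574 \times 0.25 \approx 0.07216977
    \end{aligned}
    \]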
    
    Abstract
    In this lecture I intend to challenge those who uphold a monist or even a dualist view of the universe; and I will propose, instead, a pluralist view. I will propose a view of the universe that recognizes at least three different but interacting sub-universes.
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  2. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.05
    0.04654435 = product of:
      0.0930887 = sum of:
        0.0930887 = sum of:
          0.049993843 = weight(_text_:i in 40) [ClassicSimilarity], result of:
            0.049993843 = score(doc=40,freq=2.0), product of:
              0.17138503 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.045439374 = queryNorm
              0.29170483 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
          0.043094855 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
            0.043094855 = score(doc=40,freq=2.0), product of:
              0.15912095 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045439374 = queryNorm
              0.2708308 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
      0.5 = coord(1/2)
    
    Abstract
    Conclusion: There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape will look like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
  3. Kaser, R.T.: If information wants to be free . . . then who's going to pay for it? (2000) 0.03
    0.029609075 = product of:
      0.05921815 = sum of:
        0.05921815 = product of:
          0.1184363 = sum of:
            0.1184363 = weight(_text_:i in 1234) [ClassicSimilarity], result of:
              0.1184363 = score(doc=1234,freq=22.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.6910539 = fieldWeight in 1234, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1234)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    I have become "brutally honest" of late, at least according to one listener who heard my remarks during a recent whistle stop speaking tour of publishing conventions. This comment caught me a little off guard. Not that I haven't always been frank, but I do try never to be brutal. The truth, I guess, can be painful, even if the intention of the teller is simply objectivity. This paper is based on a "brutally honest" talk I have been giving to publishers, first, in February, to the Association of American Publishers' Professional and Scholarly Publishing Division, at which point I was calling the piece, "The Illusion of Free Information." It was this initial rendition that led to the invitation to publish something here. Since then I've been working on the talk. I gave a second version of it in March to the assembly of the American Society of Information Dissemination Centers, where I called it, "When Sectors Clash: Public Access vs. Private Interest." And, most recently, I gave yet a third version of it to the governing board of the American Institute of Physics. This time I called it: "The Future of Society Publishing." The notion of free information, our government's proper role in distributing free information, and the future of scholarly publishing in a world of free information . . . these are the issues that are floating around in my head. My goal here is to tell you where my thinking is only at this moment, for I reserve the right to continue thinking and developing new permutations on this mentally challenging theme.
  4. Baker, T.: ¬A grammar of Dublin Core (2000) 0.03
    0.02659677 = product of:
      0.05319354 = sum of:
        0.05319354 = sum of:
          0.02856791 = weight(_text_:i in 1236) [ClassicSimilarity], result of:
            0.02856791 = score(doc=1236,freq=2.0), product of:
              0.17138503 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.045439374 = queryNorm
              0.16668847 = fieldWeight in 1236, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.03125 = fieldNorm(doc=1236)
          0.024625631 = weight(_text_:22 in 1236) [ClassicSimilarity], result of:
            0.024625631 = score(doc=1236,freq=2.0), product of:
              0.15912095 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045439374 = queryNorm
              0.15476047 = fieldWeight in 1236, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1236)
      0.5 = coord(1/2)
    
    Abstract
    Dublin Core is often presented as a modern form of catalog card -- a set of elements (and now qualifiers) that describe resources in a complete package. Sometimes it is proposed as an exchange format for sharing records among multiple collections. The founding principle that "every element is optional and repeatable" reinforces the notion that a Dublin Core description is to be taken as a whole. This paper, in contrast, is based on a much different premise: Dublin Core is a language. More precisely, it is a small language for making a particular class of statements about resources. Like natural languages, it has a vocabulary of word-like terms, the two classes of which -- elements and qualifiers -- function within statements like nouns and adjectives; and it has a syntax for arranging elements and qualifiers into statements according to a simple pattern. Whenever tourists order a meal or ask directions in an unfamiliar language, considerate native speakers will spontaneously limit themselves to basic words and simple sentence patterns along the lines of "I am so-and-so" or "This is such-and-such". Linguists call this pidginization. In such situations, a small phrase book or translated menu can be most helpful. By analogy, today's Web has been called an Internet Commons where users and information providers from a wide range of scientific, commercial, and social domains present their information in a variety of incompatible data models and description languages. In this context, Dublin Core presents itself as a metadata pidgin for digital tourists who must find their way in this linguistically diverse landscape. Its vocabulary is small enough to learn quickly, and its basic pattern is easily grasped. It is well-suited to serve as an auxiliary language for digital libraries. This grammar starts by defining terms. It then follows a 200-year-old tradition of English grammar teaching by focusing on the structure of single statements. It concludes by looking at the growing dictionary of Dublin Core vocabulary terms -- its registry, and at how statements can be used to build the metadata equivalent of paragraphs and compositions -- the application profile.
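    As a sketch of this statement grammar (the element and qualifier names are real Dublin Core terms; the record itself and its Python representation are illustrative, not from the paper):
        # A Dublin Core description as a set of statements: each statement
        # pairs an element (noun-like) and an optional qualifier
        # (adjective-like) with a value. Record contents are hypothetical.
        record = [
            ("Title",   None,     "A grammar of Dublin Core"),
            ("Creator", None,     "Thomas Baker"),
            ("Date",    "Issued", "2000"),
            ("Subject", None,     "metadata languages"),
        ]
        for element, qualifier, value in record:
            name = f"{element}.{qualifier}" if qualifier else element
            print(f"{name} = {value!r}")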
    Date
    26.12.2011 14:01:22
  5. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    0.02176619 = product of:
      0.04353238 = sum of:
        0.04353238 = product of:
          0.08706476 = sum of:
            0.08706476 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.08706476 = score(doc=3925,freq=4.0), product of:
                0.15912095 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045439374 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 15:22:28
  6. Dunning, A.: Do we still need search engines? (1999) 0.02
    0.021547427 = product of:
      0.043094855 = sum of:
        0.043094855 = product of:
          0.08618971 = sum of:
            0.08618971 = weight(_text_:22 in 6021) [ClassicSimilarity], result of:
              0.08618971 = score(doc=6021,freq=2.0), product of:
                0.15912095 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045439374 = queryNorm
                0.5416616 = fieldWeight in 6021, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6021)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Ariadne. 1999, no.22
  7. Chessum, K.; Haiming, L.; Frommholz, I.: ¬A study of search user interface design based on Hofstede's six cultural dimensions (2022) 0.02
    0.021425933 = product of:
      0.042851865 = sum of:
        0.042851865 = product of:
          0.08570373 = sum of:
            0.08570373 = weight(_text_:i in 856) [ClassicSimilarity], result of:
              0.08570373 = score(doc=856,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.50006545 = fieldWeight in 856, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.09375 = fieldNorm(doc=856)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  8. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.02
    0.018469224 = product of:
      0.036938448 = sum of:
        0.036938448 = product of:
          0.073876895 = sum of:
            0.073876895 = weight(_text_:22 in 3895) [ClassicSimilarity], result of:
              0.073876895 = score(doc=3895,freq=2.0), product of:
                0.15912095 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045439374 = queryNorm
                0.46428138 = fieldWeight in 3895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3895)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    24. 8.2005 19:20:22
  9. Weibel, S.L.: Border crossings : reflections on a decade of metadata consensus building (2005) 0.02
    0.017854942 = product of:
      0.035709884 = sum of:
        0.035709884 = product of:
          0.07141977 = sum of:
            0.07141977 = weight(_text_:i in 1187) [ClassicSimilarity], result of:
              0.07141977 = score(doc=1187,freq=8.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.41672117 = fieldWeight in 1187, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1187)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In June of this year, I performed my final official duties as part of the Dublin Core Metadata Initiative management team. It is a happy irony to affix a seal on that service in this journal, as both D-Lib Magazine and the Dublin Core celebrate their tenth anniversaries. This essay is a personal reflection on some of the achievements and lessons of that decade. The OCLC-NCSA Metadata Workshop took place in March of 1995, and as we tried to understand what it meant and who would care, D-Lib magazine came into being and offered a natural venue for sharing our work. I recall a certain skepticism when Bill Arms said "We want D-Lib to be the first place people look for the latest developments in digital library research." These were the early days in the evolution of electronic publishing, and the goal was ambitious. By any measure, a decade of high-quality electronic publishing is an auspicious accomplishment, and D-Lib (and its host, CNRI) deserve congratulations for having achieved their goal. I am grateful to have been a contributor. That first DC workshop led to further workshops, a community, a variety of standards in several countries, an ISO standard, a conference series, and an international consortium. Looking back on this evolution is both satisfying and wistful. While I am pleased that the achievements are substantial, the unmet challenges also provide a rich till in which to cultivate insights on the development of digital infrastructure.
  10. Rogers, I.: ¬The Google Pagerank algorithm and how it works (2002) 0.02
    0.017854942 = product of:
      0.035709884 = sum of:
        0.035709884 = product of:
          0.07141977 = sum of:
            0.07141977 = weight(_text_:i in 2548) [ClassicSimilarity], result of:
              0.07141977 = score(doc=2548,freq=8.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.41672117 = fieldWeight in 2548, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2548)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    PageRank is a topic much discussed by Search Engine Optimisation (SEO) experts. At the heart of PageRank is a mathematical formula that seems scary to look at but is actually fairly simple to understand. Despite this, many people seem to get it wrong! In particular, "Chris Ridings of www.searchenginesystems.net" has written a paper entitled "PageRank Explained: Everything you've always wanted to know about PageRank", pointed to by many people, that contains a fundamental mistake early on in the explanation! Unfortunately this means some of the recommendations in the paper are not quite accurate. By showing code to correctly calculate real PageRank I hope to achieve several things in this response: - Clearly explain how PageRank is calculated. - Go through every example in Chris' paper, and add some more of my own, showing the correct PageRank for each diagram. By showing the code used to calculate each diagram I've opened myself up to peer review - mostly in an effort to make sure the examples are correct, but also because the code can help explain the PageRank calculations. - Describe some principles and observations on website design based on these correctly calculated examples. Any good web designer should take the time to fully understand how PageRank really works - if you don't, then your site's layout could be seriously hurting your Google listings! [Note: I have nothing in particular against Chris. If I find any other papers on the subject I'll try to comment evenly.]
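    The formula in question is Brin and Page's PR(A) = (1 - d) + d * (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn)). A minimal sketch of that iterative calculation (not Rogers' own code; the damping factor d = 0.85 is the conventional choice and the link graph is hypothetical):
        # Iterative PageRank: PR(A) = (1 - d) + d * sum(PR(T) / C(T)) over
        # all pages T that link to A, where C(T) is T's outlink count.
        def pagerank(links, d=0.85, iterations=40):
            """links maps each page to the list of pages it links to."""
            pages = set(links) | {p for ts in links.values() for p in ts}
            pr = {page: 1.0 for page in pages}  # common starting guess
            for _ in range(iterations):
                pr = {page: (1 - d) + d * sum(pr[src] / len(targets)
                                              for src, targets in links.items()
                                              if targets and page in targets)
                      for page in pages}
            return pr

        # Hypothetical graph: A and B link to each other, both link to C.
        print(pagerank({"A": ["B", "C"], "B": ["A", "C"], "C": []}))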
  11. Bates, M.J.: ¬The nature of browsing (2019) 0.02
    0.017675493 = product of:
      0.035350986 = sum of:
        0.035350986 = product of:
          0.07070197 = sum of:
            0.07070197 = weight(_text_:i in 2265) [ClassicSimilarity], result of:
              0.07070197 = score(doc=2265,freq=4.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.41253293 = fieldWeight in 2265, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2265)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The recent article by McKay et al. on browsing (2019) provides a valuable addition to the empirical literature of information science on this topic, and I read the descriptions of the various browsing cases with interest. However, the authors refer to my article on browsing (Bates, 2007) in ways that do not make sense to me and which do not at all conform to what I actually said.
  12. Guidi, F.; Sacerdoti Coen, C.: ¬A survey on retrieval of mathematical knowledge (2015) 0.02
    0.01539102 = product of:
      0.03078204 = sum of:
        0.03078204 = product of:
          0.06156408 = sum of:
            0.06156408 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
              0.06156408 = score(doc=5865,freq=2.0), product of:
                0.15912095 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045439374 = queryNorm
                0.38690117 = fieldWeight in 5865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5865)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.2017 12:51:57
  13. Sojka, P.; Liska, M.: ¬The art of mathematics retrieval (2011) 0.02
    0.015236333 = product of:
      0.030472666 = sum of:
        0.030472666 = product of:
          0.060945332 = sum of:
            0.060945332 = weight(_text_:22 in 3450) [ClassicSimilarity], result of:
              0.060945332 = score(doc=3450,freq=4.0), product of:
                0.15912095 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045439374 = queryNorm
                0.38301262 = fieldWeight in 3450, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3450)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Cf.: DocEng2011, September 19-22, 2011, Mountain View, California, USA. Copyright 2011 ACM 978-1-4503-0863-2/11/09
    Date
    22. 2.2017 13:00:42
  14. Menzel, C.: Knowledge representation, the World Wide Web, and the evolution of logic (2011) 0.02
    0.015150423 = product of:
      0.030300846 = sum of:
        0.030300846 = product of:
          0.060601693 = sum of:
            0.060601693 = weight(_text_:i in 761) [ClassicSimilarity], result of:
              0.060601693 = score(doc=761,freq=4.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.35359967 = fieldWeight in 761, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.046875 = fieldNorm(doc=761)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this paper, I have traced a series of evolutionary adaptations of FOL motivated entirely by its use by knowledge engineers to represent and share information on the Web, culminating in the development of Common Logic. While the primary goal in this paper has been to document this evolution, it is arguable, I think, that CL's syntactic and semantic egalitarianism better realizes the goal of "topic neutrality" that a logic should ideally exemplify - understood, at least in part, as the idea that logic should as far as possible not itself embody any metaphysical presuppositions. Instead of retaining the traditional metaphysical divisions of FOL that reflect its Fregean origins, CL begins as it were with a single, metaphysically homogeneous domain in which, potentially, anything can play the traditional roles of object, property, relation, and function. Note that the effect of this is not to destroy traditional metaphysical divisions. Rather, it is simply to refrain from building those divisions explicitly into one's logic; instead, such divisions are left to the user to introduce and enforce axiomatically in an explicit metaphysical theory.
  15. Braun, S.: Manifold: a custom analytics platform to visualize research impact (2015) 0.02
    0.015150423 = product of:
      0.030300846 = sum of:
        0.030300846 = product of:
          0.060601693 = sum of:
            0.060601693 = weight(_text_:i in 2906) [ClassicSimilarity], result of:
              0.060601693 = score(doc=2906,freq=4.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.35359967 = fieldWeight in 2906, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2906)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The use of research impact metrics and analytics has become an integral component to many aspects of institutional assessment. Many platforms currently exist to provide such analytics, both proprietary and open source; however, the functionality of these systems may not always overlap to serve uniquely specific needs. In this paper, I describe a novel web-based platform, named Manifold, that I built to serve custom research impact assessment needs in the University of Minnesota Medical School. Built on a standard LAMP architecture, Manifold automatically pulls publication data for faculty from Scopus through APIs, calculates impact metrics through automated analytics, and dynamically generates report-like profiles that visualize those metrics. Work on this project has resulted in many lessons learned about challenges to sustainability and scalability in developing a system of such magnitude.
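    Not Manifold's actual code, but a minimal sketch of the "pull publication data from Scopus through APIs" step it describes; the endpoint, header, and response field names follow Elsevier's published Scopus Search API and should be treated here as assumptions:
        # Fetch a faculty member's publications from the Scopus Search API.
        # The endpoint, the X-ELS-APIKey header, and the
        # "search-results"/"entry" response fields are assumptions based on
        # Elsevier's public API documentation.
        import requests

        def fetch_publications(author_id: str, api_key: str) -> list:
            resp = requests.get(
                "https://api.elsevier.com/content/search/scopus",
                params={"query": f"AU-ID({author_id})"},
                headers={"X-ELS-APIKey": api_key, "Accept": "application/json"},
            )
            resp.raise_for_status()
            return resp.json()["search-results"]["entry"]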
  16. Barbaresi, A.: Toponyms as entry points into a digital edition : mapping Die Fackel (2018) 0.02
    0.015150423 = product of:
      0.030300846 = sum of:
        0.030300846 = product of:
          0.060601693 = sum of:
            0.060601693 = weight(_text_:i in 5058) [ClassicSimilarity], result of:
              0.060601693 = score(doc=5058,freq=4.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.35359967 = fieldWeight in 5058, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5058)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The emergence of Spatial Humanities has prompted interdisciplinary work on digitized texts, especially since the significance of place names exceeds the usually admitted frame of deictic and indexical functions. From this perspective, I present a visualization of toponym co-occurrences in the literary journal Die Fackel ("The Torch"), published by the satirist and language critic Karl Kraus in Vienna from 1899 until 1936. The distant reading experiments consist in drawing lines on maps in order to uncover patterns which are not easily retraceable during close reading. I discuss their status in the context of a digital humanities study. This is not an authoritative cartography of the work but rather an indirect depiction of the viewpoint of Kraus and his contemporaries. Drawing on Kraus' vitriolic recording of political life, toponyms in Die Fackel tell a story about the ongoing reconfiguration of Europe.
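    As an illustration of the "lines on maps" idea (not Barbaresi's pipeline; the coordinates and co-occurrence counts below are hypothetical, not data from Die Fackel):
        # Draw one line per co-occurring toponym pair, thicker for more
        # frequent pairs. Longitudes/latitudes and counts are made up.
        import matplotlib.pyplot as plt

        coords = {"Wien": (16.37, 48.21), "Berlin": (13.40, 52.52),
                  "Prag": (14.42, 50.09)}
        pairs = {("Wien", "Berlin"): 120, ("Wien", "Prag"): 45,
                 ("Berlin", "Prag"): 10}

        fig, ax = plt.subplots()
        for (a, b), count in pairs.items():
            (x1, y1), (x2, y2) = coords[a], coords[b]
            ax.plot([x1, x2], [y1, y2], linewidth=count / 40, color="tab:blue")
        for name, (x, y) in coords.items():
            ax.annotate(name, (x, y))
        ax.set_xlabel("longitude")
        ax.set_ylabel("latitude")
        plt.show()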
  17. Schreiber, M.: Restricting the h-index to a citation time window : a case study of a timed Hirsch index (2014) 0.01
    0.014283955 = product of:
      0.02856791 = sum of:
        0.02856791 = product of:
          0.05713582 = sum of:
            0.05713582 = weight(_text_:i in 1563) [ClassicSimilarity], result of:
              0.05713582 = score(doc=1563,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.33337694 = fieldWeight in 1563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The h-index has been shown to increase in many cases mostly because of citations to rather old publications. This inertia can be circumvented by restricting the evaluation to a citation time window. Here I report results of an empirical study analyzing the evolution of the thus defined timed h-index in dependence on the length of the citation time window.
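    A minimal sketch of the timed h-index the abstract defines: count, for each paper, only the citations whose year falls inside the window, then take the ordinary h-index of those restricted counts (the data layout is hypothetical):
        # Timed h-index: restrict each paper's citations to [start, end],
        # then compute the ordinary h-index on the restricted counts.
        def h_index(counts):
            ranked = sorted(counts, reverse=True)
            return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

        def timed_h_index(citation_years_per_paper, start, end):
            windowed = [sum(start <= year <= end for year in years)
                        for years in citation_years_per_paper]
            return h_index(windowed)

        papers = [[2008, 2009, 2012], [2010, 2011], [2013]]  # hypothetical
        print(timed_h_index(papers, start=2009, end=2013))   # prints 2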
  18. Celik, I.; Abel, F.; Siehndel, P.: Adaptive faceted search on Twitter (2011) 0.01
    0.014283955 = product of:
      0.02856791 = sum of:
        0.02856791 = product of:
          0.05713582 = sum of:
            0.05713582 = weight(_text_:i in 2221) [ClassicSimilarity], result of:
              0.05713582 = score(doc=2221,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.33337694 = fieldWeight in 2221, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2221)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. Wattenberg, M.; Viégas, F.; Johnson, I.: How to use t-SNE effectively (2016) 0.01
    0.014283955 = product of:
      0.02856791 = sum of:
        0.02856791 = product of:
          0.05713582 = sum of:
            0.05713582 = weight(_text_:i in 3887) [ClassicSimilarity], result of:
              0.05713582 = score(doc=3887,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.33337694 = fieldWeight in 3887, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3887)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  20. Hawking, S.: This is the most dangerous time for our planet (2016) 0.01
    0.013391207 = product of:
      0.026782414 = sum of:
        0.026782414 = product of:
          0.053564828 = sum of:
            0.053564828 = weight(_text_:i in 3273) [ClassicSimilarity], result of:
              0.053564828 = score(doc=3273,freq=18.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.31254086 = fieldWeight in 3273, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3273)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    "As a theoretical physicist based in Cambridge, I have lived my life in an extraordinarily privileged bubble. Cambridge is an unusual town, centered around one of the world's great universities. Within that town, the scientific community which I became part of in my twenties is even more rarefied. And within that scientific community, the small group of international theoretical physicists with whom I have spent my working life might sometimes be tempted to regard themselves as the pinnacle. Add to this, the celebrity that has come with my books, and the isolation imposed by my illness, I feel as though my ivory tower is getting taller. So the recent apparent rejection of the elite in both America and Britain is surely aimed at me, as much as anyone. Whatever we might think about the decision by the British electorate to reject membership of the European Union, and by the American public to embrace Donald Trump as their next President, there is no doubt in the minds of commentators that this was a cry of anger by people who felt that they had been abandoned by their leaders. It was, everyone seems to agree, the moment that the forgotten spoke, finding their voice to reject the advice and guidance of experts and the elite everywhere.
    I am no exception to this rule. I warned before the Brexit vote that it would damage scientific research in Britain, that a vote to leave would be a step backward, and the electorate, or at least a sufficiently significant proportion of it, took no more notice of me than any of the other political leaders, trade unionists, artists, scientists, businessmen and celebrities who all gave the same unheeded advice to the rest of the country. What matters now however, far more than the choices made by these two electorates, is how the elites react. Should we, in turn, reject these votes as outpourings of crude populism that fail to take account of the facts, and attempt to circumvent or circumscribe the choices that they represent? I would argue that this would be a terrible mistake. The concerns underlying these votes about the economic consequences of globalisation and accelerating technological change are absolutely understandable. The automation of factories has already decimated jobs in traditional manufacturing, the rise of AI is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.
    This in turn will accelerate the already widening economic inequality around the world. The internet and the platforms which it makes possible allow very small groups of individuals to make enormous profits while employing very few people. This is inevitable, it is progress, but it is also socially destructive. We need to put this alongside the financial crash, which brought home to people that a very few individuals working in the financial sector can accrue huge rewards and that the rest of us underwrite that success and pick up the bill when their greed leads us astray. So taken together we are living in a world of widening, not diminishing, financial inequality, in which many people can see not just their standard of living, but their ability to earn a living at all, disappearing. It is no wonder then that they are searching for a new deal, which Trump and Brexit might have appeared to represent. It is also the case that another unintended consequence of the global spread of the internet and social media is that the stark nature of these inequalities is far more apparent than it has been in the past. For me, the ability to use technology to communicate has been a liberating and positive experience. Without it, I would not have been able to continue working these many years past. But it also means that the lives of the richest people in the most prosperous parts of the world are agonisingly visible to anyone, however poor, who has access to a phone. And since there are now more people with a telephone than access to clean water in Sub-Saharan Africa, this will shortly mean nearly everyone on our increasingly crowded planet will not be able to escape the inequality.
    The consequences of this are plain to see; the rural poor flock to cities, to shanty towns, driven by hope. And then often, finding that the Instagram nirvana is not available there, they seek it overseas, joining the ever greater numbers of economic migrants in search of a better life. These migrants in turn place new demands on the infrastructures and economies of the countries in which they arrive, undermining tolerance and further fuelling political populism. For me, the really concerning aspect of this, is that now, more than at any time in our history, our species needs to work together. We face awesome environmental challenges. Climate change, food production, overpopulation, the decimation of other species, epidemic disease, acidification of the oceans. Together, they are a reminder that we are at the most dangerous moment in the development of humanity. We now have the technology to destroy the planet on which we live, but have not yet developed the ability to escape it. Perhaps in a few hundred years, we will have established human colonies amidst the stars, but right now we only have one planet, and we need to work together to protect it. To do that, we need to break down not build up barriers within and between nations. If we are to stand a chance of doing that, the world's leaders need to acknowledge that they have failed and are failing the many. With resources increasingly concentrated in the hands of a few, we are going to have to learn to share far more than at present. With not only jobs but entire industries disappearing, we must help people to re-train for a new world and support them financially while they do so. If communities and economies cannot cope with current levels of migration, we must do more to encourage global development, as that is the only way that the migratory millions will be persuaded to seek their future at home. We can do this, I am an enormous optimist for my species, but it will require the elites, from London to Harvard, from Cambridge to Hollywood, to learn the lessons of the past month. To learn above all a measure of humility."
