Search (51 results, page 1 of 3)

  • language_ss:"e"
  • type_ss:"el"
  • year_i:[2010 TO 2020}
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.05
    0.04510611 = product of:
      0.09021222 = sum of:
        0.09021222 = product of:
          0.36084887 = sum of:
            0.36084887 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.36084887 = score(doc=1826,freq=2.0), product of:
                0.38523552 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.045439374 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
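    The indented tree above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a rough sketch, its nested factors can be recomputed as follows (the function name and argument layout are illustrative, not Lucene's API):

    ```python
    import math

    def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm, coords):
        """Recompute a Lucene ClassicSimilarity explain tree.

        tf          = sqrt(term frequency in the field)
        idf         = 1 + ln(maxDocs / (docFreq + 1))
        queryWeight = idf * queryNorm
        fieldWeight = tf * idf * fieldNorm
        score       = queryWeight * fieldWeight, scaled by the coord factors.
        """
        tf = math.sqrt(freq)
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))
        query_weight = idf * query_norm
        field_weight = tf * idf * field_norm
        score = query_weight * field_weight
        for c in coords:          # e.g. coord(1/4) = 0.25, coord(1/2) = 0.5
            score *= c
        return score
    ```

    Plugging in the values from result 1 (freq=2.0, docFreq=24, maxDocs=44218, queryNorm=0.045439374, fieldNorm=0.078125, coords 0.25 and 0.5) reproduces the displayed score of about 0.0451.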
  2. Saabiyeh, N.: What is a good ontology semantic similarity measure that considers multiple inheritance cases of concepts? (2018) 0.02
    0.024996921 = product of:
      0.049993843 = sum of:
        0.049993843 = product of:
          0.099987686 = sum of:
            0.099987686 = weight(_text_:i in 4530) [ClassicSimilarity], result of:
              0.099987686 = score(doc=4530,freq=8.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.58340967 = fieldWeight in 4530, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4530)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    I need to measure semantic similarity between CSO ontology concepts, depending on ontology structure (concept path, depth, least common subsumer (LCS) ...). CSO (Computer Science Ontology) is a large-scale ontology of research areas. A concept in CSO may have multiple parents/super concepts (i.e. a concept may be a child of many other concepts), e.g.: (world wide web) is parent of (semantic web); (semantics) is parent of (semantic web). I found some measures that meet my needs, but the papers proposing these measures are not cited, so I became hesitant. I also found a measure that depends on weighted edges, but it does not consider multiple inheritance (super concepts).
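    One common family of structure-based measures that handles multiple parents is Wu-Palmer similarity generalized to a DAG: take the deepest common ancestor over *all* parent paths as the LCS. A minimal sketch (this is a standard textbook measure, not one of the uncited ones the poster found; the dict-of-parents encoding and helper names are illustrative):

    ```python
    from collections import deque
    from itertools import chain

    def depths(parents, root):
        """Shortest-path depth of every concept below the root."""
        children = {}
        for child, ps in parents.items():
            for p in ps:
                children.setdefault(p, []).append(child)
        depth, queue = {root: 0}, deque([root])
        while queue:
            node = queue.popleft()
            for ch in children.get(node, []):
                if ch not in depth:
                    depth[ch] = depth[node] + 1
                    queue.append(ch)
        return depth

    def ancestors(parents, concept):
        """All super-concepts, following every parent edge (multiple inheritance)."""
        seen, stack = set(), [concept]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(parents.get(node, []))
        return seen

    def wu_palmer(parents, root, c1, c2):
        """2 * depth(LCS) / (depth(c1) + depth(c2)), LCS = deepest common ancestor."""
        d = depths(parents, root)
        common = ancestors(parents, c1) & ancestors(parents, c2)
        lcs_depth = max(d[a] for a in common)
        return 2.0 * lcs_depth / (d[c1] + d[c2])
    ```

    With the example from the question (semantic web has both world wide web and semantics as parents under a common root), wu_palmer for (semantic web, semantics) comes out to 2/3.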
  3. Onofri, A.: Concepts in context (2013) 0.02
    0.02164797 = product of:
      0.04329594 = sum of:
        0.04329594 = product of:
          0.08659188 = sum of:
            0.08659188 = weight(_text_:i in 1077) [ClassicSimilarity], result of:
              0.08659188 = score(doc=1077,freq=24.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.5052476 = fieldWeight in 1077, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1077)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    My thesis discusses two related problems that have taken center stage in the recent literature on concepts: 1) What are the individuation conditions of concepts? Under what conditions is a concept C1 the same concept as a concept C2? 2) What are the possession conditions of concepts? What conditions must be satisfied for a thinker to have a concept C? The thesis defends a novel account of concepts, which I call "pluralist-contextualist": 1) Pluralism: Different concepts have different kinds of individuation and possession conditions: some concepts are individuated more "coarsely", have less demanding possession conditions and are widely shared, while other concepts are individuated more "finely" and not shared. 2) Contextualism: When a speaker ascribes a propositional attitude to a subject S, or uses his ascription to explain/predict S's behavior, the speaker's intentions in the relevant context determine the correct individuation conditions for the concepts involved in his report. In chapters 1-3 I defend a contextualist, non-Millian theory of propositional attitude ascriptions. Then, I show how contextualism can be used to offer a novel perspective on the problem of concept individuation/possession. More specifically, I employ contextualism to provide a new, more effective argument for Fodor's "publicity principle": if contextualism is true, then certain specific concepts must be shared in order for interpersonally applicable psychological generalizations to be possible. In chapters 4-5 I raise a tension between publicity and another widely endorsed principle, the "Fregean constraint" (FC): subjects who are unaware of certain identity facts and find themselves in so-called "Frege cases" must have distinct concepts for the relevant object x. For instance: the ancient astronomers had distinct concepts (HESPERUS/PHOSPHORUS) for the same object (the planet Venus). 
First, I examine some leading theories of concepts and argue that they cannot meet both of our constraints at the same time. Then, I offer principled reasons to think that no theory can satisfy (FC) while also respecting publicity. (FC) appears to require a form of holism, on which a concept is individuated by its global inferential role in a subject S and can thus only be shared by someone who has exactly the same inferential dispositions as S. This explains the tension between publicity and (FC), since holism is clearly incompatible with concept shareability. To solve the tension, I suggest adopting my pluralist-contextualist proposal: concepts involved in Frege cases are holistically individuated and not public, while other concepts are more coarsely individuated and widely shared; given this "plurality" of concepts, we will then need contextual factors (speakers' intentions) to "select" the specific concepts to be employed in our intentional generalizations in the relevant contexts. In chapter 6 I develop the view further by contrasting it with some rival accounts. First, I examine a very different kind of pluralism about concepts, which has been recently defended by Daniel Weiskopf, and argue that it is insufficiently radical. Then, I consider the inferentialist accounts defended by authors like Peacocke, Rey and Jackson. Such views, I argue, are committed to an implausible picture of reference determination, on which our inferential dispositions fix the reference of our concepts: this leads to wrong predictions in all those cases of scientific disagreement where two parties have very different inferential dispositions and yet seem to refer to the same natural kind.
  4. Bates, M.J.: ¬The nature of browsing (2019) 0.02
    0.017675493 = product of:
      0.035350986 = sum of:
        0.035350986 = product of:
          0.07070197 = sum of:
            0.07070197 = weight(_text_:i in 2265) [ClassicSimilarity], result of:
              0.07070197 = score(doc=2265,freq=4.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.41253293 = fieldWeight in 2265, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2265)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The recent article by McKay et al. on browsing (2019) provides a valuable addition to the empirical literature of information science on this topic, and I read the descriptions of the various browsing cases with interest. However, the authors refer to my article on browsing (Bates, 2007) in ways that do not make sense to me and which do not at all conform to what I actually said.
  5. Karpathy, A.: ¬The unreasonable effectiveness of recurrent neural networks (2015) 0.02
    0.015462836 = product of:
      0.030925673 = sum of:
        0.030925673 = product of:
          0.061851345 = sum of:
            0.061851345 = weight(_text_:i in 1865) [ClassicSimilarity], result of:
              0.061851345 = score(doc=1865,freq=6.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.36089116 = fieldWeight in 1865, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1865)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    There's something magical about Recurrent Neural Networks (RNNs). I still remember when I trained my first recurrent network for Image Captioning. Within a few dozen minutes of training my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense. Sometimes the ratio of how simple your model is to the quality of the results you get out of it blows past your expectations, and this was one of those times. What made this result so shocking at the time was that the common wisdom was that RNNs were supposed to be difficult to train (with more experience I've in fact reached the opposite conclusion). Fast forward about a year: I'm training RNNs all the time and I've witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me. This post is about sharing some of that magic with you. By the way, together with this post I am also releasing code on Github (https://github.com/karpathy/char-rnn) that allows you to train character-level language models based on multi-layer LSTMs. You give it a large chunk of text and it will learn to generate text like it, one character at a time. You can also use it to reproduce my experiments below. But we're getting ahead of ourselves; what are RNNs anyway?
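    The char-rnn repository itself is Lua/Torch, and the abstract's core idea is "learn a large chunk of text, then generate text like it one character at a time." As a deliberately crude toy stand-in for the multi-layer LSTM (not Karpathy's code), a character-bigram sampler shows the same generate-one-character-at-a-time loop:

    ```python
    import random
    from collections import Counter, defaultdict

    def train_char_bigram(text):
        """For every character, count the distribution of characters that follow it."""
        counts = defaultdict(Counter)
        for prev, nxt in zip(text, text[1:]):
            counts[prev][nxt] += 1
        return counts

    def generate(counts, seed, length, rng=random):
        """Sample new text one character at a time from the learned distributions."""
        out = [seed]
        for _ in range(length):
            follow = counts.get(out[-1])
            if not follow:        # dead end: seed char never seen mid-text
                break
            chars, weights = zip(*follow.items())
            out.append(rng.choices(chars, weights=weights)[0])
        return "".join(out)
    ```

    An LSTM replaces the one-character lookup table with a learned hidden state, but the sampling loop at generation time has exactly this shape.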
  6. Guidi, F.; Sacerdoti Coen, C.: ¬A survey on retrieval of mathematical knowledge (2015) 0.02
    0.01539102 = product of:
      0.03078204 = sum of:
        0.03078204 = product of:
          0.06156408 = sum of:
            0.06156408 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
              0.06156408 = score(doc=5865,freq=2.0), product of:
                0.15912095 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045439374 = queryNorm
                0.38690117 = fieldWeight in 5865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5865)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.2017 12:51:57
  7. Sojka, P.; Liska, M.: ¬The art of mathematics retrieval (2011) 0.02
    0.015236333 = product of:
      0.030472666 = sum of:
        0.030472666 = product of:
          0.060945332 = sum of:
            0.060945332 = weight(_text_:22 in 3450) [ClassicSimilarity], result of:
              0.060945332 = score(doc=3450,freq=4.0), product of:
                0.15912095 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045439374 = queryNorm
                0.38301262 = fieldWeight in 3450, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3450)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Cf.: DocEng2011, September 19-22, 2011, Mountain View, California, USA. Copyright 2011 ACM 978-1-4503-0863-2/11/09
    Date
    22. 2.2017 13:00:42
  8. Menzel, C.: Knowledge representation, the World Wide Web, and the evolution of logic (2011) 0.02
    0.015150423 = product of:
      0.030300846 = sum of:
        0.030300846 = product of:
          0.060601693 = sum of:
            0.060601693 = weight(_text_:i in 761) [ClassicSimilarity], result of:
              0.060601693 = score(doc=761,freq=4.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.35359967 = fieldWeight in 761, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.046875 = fieldNorm(doc=761)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this paper, I have traced a series of evolutionary adaptations of FOL motivated entirely by its use by knowledge engineers to represent and share information on the Web culminating in the development of Common Logic. While the primary goal in this paper has been to document this evolution, it is arguable, I think that CL's syntactic and semantic egalitarianism better realizes the goal "topic neutrality" that a logic should ideally exemplify - understood, at least in part, as the idea that logic should as far as possible not itself embody any metaphysical presuppositions. Instead of retaining the traditional metaphysical divisions of FOL that reflect its Fregean origins, CL begins as it were with a single, metaphysically homogeneous domain in which, potentially, anything can play the traditional roles of object, property, relation, and function. Note that the effect of this is not to destroy traditional metaphysical divisions. Rather, it simply to refrain from building those divisions explicitly into one's logic; instead, such divisions are left to the user to introduce and enforce axiomatically in an explicit metaphysical theory.
  9. Fiorelli, G.: Hummingbird unleashed (2013) 0.02
    0.015150423 = product of:
      0.030300846 = sum of:
        0.030300846 = product of:
          0.060601693 = sum of:
            0.060601693 = weight(_text_:i in 2546) [ClassicSimilarity], result of:
              0.060601693 = score(doc=2546,freq=4.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.35359967 = fieldWeight in 2546, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2546)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Sometimes I think that we SEOs could be wonderful characters for a Woody Allen movie: We are stressed, nervous, paranoid, we have a tendency for sudden changes of mood...okay, maybe I am exaggerating a little bit, but that's how we tend to (over)react whenever Google announces something. One thing that doesn't help is the lack of clarity coming from Google, which not only never mentions Hummingbird in any official document (for example, in the post of its 15th anniversary), but has also shied away from details of this epochal update in the "off-the-record" declarations of Amit Singhal. In fact, in some ways those statements partly contributed to the confusion. When Google announces an update-especially one like Hummingbird-the best thing to do is to avoid trying to immediately understand what it really is based on intuition alone. It is better to wait until the dust settles, recover the original documents, examine those related to them (and any variants), take the time to see the update in action, calmly investigate, and then after all that try to find the most plausible answers.
  10. Braun, S.: Manifold: a custom analytics platform to visualize research impact (2015) 0.02
    0.015150423 = product of:
      0.030300846 = sum of:
        0.030300846 = product of:
          0.060601693 = sum of:
            0.060601693 = weight(_text_:i in 2906) [ClassicSimilarity], result of:
              0.060601693 = score(doc=2906,freq=4.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.35359967 = fieldWeight in 2906, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2906)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The use of research impact metrics and analytics has become an integral component to many aspects of institutional assessment. Many platforms currently exist to provide such analytics, both proprietary and open source; however, the functionality of these systems may not always overlap to serve uniquely specific needs. In this paper, I describe a novel web-based platform, named Manifold, that I built to serve custom research impact assessment needs in the University of Minnesota Medical School. Built on a standard LAMP architecture, Manifold automatically pulls publication data for faculty from Scopus through APIs, calculates impact metrics through automated analytics, and dynamically generates report-like profiles that visualize those metrics. Work on this project has resulted in many lessons learned about challenges to sustainability and scalability in developing a system of such magnitude.
  11. Thornton, K.: Powerful structure : inspecting infrastructures of information organization in Wikimedia Foundation projects (2016) 0.02
    0.015150423 = product of:
      0.030300846 = sum of:
        0.030300846 = product of:
          0.060601693 = sum of:
            0.060601693 = weight(_text_:i in 3288) [ClassicSimilarity], result of:
              0.060601693 = score(doc=3288,freq=4.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.35359967 = fieldWeight in 3288, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3288)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This dissertation investigates the social and technological factors of collaboratively organizing information in commons-based peer production systems. To do so, it analyzes the diverse strategies that members of Wikimedia Foundation (WMF) project communities use to organize information. Key findings from this dissertation show that conceptual structures of information organization are encoded into the infrastructure of WMF projects. The fact that WMF projects are commons-based peer production systems means that we can inspect the code that enables these systems, but a specific type of technical literacy is required to do so. I use three methods in this dissertation. I conduct a qualitative content analysis of the discussions surrounding the design, implementation and evaluation of the category system; a quantitative analysis using descriptive statistics of patterns of editing among editors who contributed to the code of templates for information boxes; and a close reading of the infrastructure used to create the category system, the infobox templates, and the knowledge base of structured data.
  12. Barbaresi, A.: Toponyms as entry points into a digital edition : mapping Die Fackel (2018) 0.02
    0.015150423 = product of:
      0.030300846 = sum of:
        0.030300846 = product of:
          0.060601693 = sum of:
            0.060601693 = weight(_text_:i in 5058) [ClassicSimilarity], result of:
              0.060601693 = score(doc=5058,freq=4.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.35359967 = fieldWeight in 5058, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5058)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The emergence of Spatial Humanities has prompted for interdisciplinary work on digitized texts, especially since the significance of place names exceeds the usually admitted frame of deictic and indexical functions. In this perspective, I present a visualization of toponyms co-occurrences in the literary journal Die Fackel ("The Torch"), published by the satirist and language critic Karl Kraus in Vienna from 1899 until 1936. The distant reading experiments consist in drawing lines on maps in order to uncover patterns which are not easily retraceable during close reading. I discuss their status in the context of a digital humanities study. This is not an authoritative cartography of the work but rather an indirect depiction of the viewpoint of Kraus and his contemporaries. Drawing on Kraus' vitriolic recording of political life, toponyms in Die Fackel tell a story about the ongoing reconfiguration of Europe.
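    The co-occurrence counting behind such toponym maps can be sketched as follows, assuming a naive gazetteer substring lookup per text unit (purely illustrative; a real pipeline like the Die Fackel study needs proper toponym recognition and disambiguation):

    ```python
    from collections import Counter
    from itertools import combinations

    def toponym_cooccurrences(texts, gazetteer):
        """Count unordered pairs of known place names appearing in the same text unit."""
        pairs = Counter()
        for text in texts:
            found = sorted({name for name in gazetteer if name in text})
            for a, b in combinations(found, 2):   # sorted -> pair key is canonical
                pairs[(a, b)] += 1
        return pairs
    ```

    The resulting pair counts are what get drawn as weighted lines between places on the map.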
  13. Schreiber, M.: Restricting the h-index to a citation time window : a case study of a timed Hirsch index (2014) 0.01
    0.014283955 = product of:
      0.02856791 = sum of:
        0.02856791 = product of:
          0.05713582 = sum of:
            0.05713582 = weight(_text_:i in 1563) [ClassicSimilarity], result of:
              0.05713582 = score(doc=1563,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.33337694 = fieldWeight in 1563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The h-index has been shown to increase in many cases mostly because of citations to rather old publications. This inertia can be circumvented by restricting the evaluation to a citation time window. Here I report results of an empirical study analyzing the evolution of the thus defined timed h-index in dependence on the length of the citation time window.
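    The timed variant described here can be sketched directly: compute the ordinary h-index, but count only citations received inside the window. (The per-paper year-to-citations data layout below is an assumption for illustration, not Schreiber's implementation.)

    ```python
    def h_index(citation_counts):
        """Largest h such that at least h papers have >= h citations each."""
        h = 0
        for rank, c in enumerate(sorted(citation_counts, reverse=True), start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    def timed_h_index(citations_by_year, window_start, window_end):
        """h-index computed only from citations received in [window_start, window_end]."""
        windowed = [
            sum(n for year, n in paper.items() if window_start <= year <= window_end)
            for paper in citations_by_year
        ]
        return h_index(windowed)
    ```

    This makes the inertia effect concrete: papers whose citations all predate the window contribute nothing to the timed index, even though they keep propping up the ordinary h-index.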
  14. Celik, I.; Abel, F.; Siehndel, P.: Adaptive faceted search on Twitter (2011) 0.01
    0.014283955 = product of:
      0.02856791 = sum of:
        0.02856791 = product of:
          0.05713582 = sum of:
            0.05713582 = weight(_text_:i in 2221) [ClassicSimilarity], result of:
              0.05713582 = score(doc=2221,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.33337694 = fieldWeight in 2221, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2221)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  15. Wattenberg, M.; Viégas, F.; Johnson, I.: How to use t-SNE effectively (2016) 0.01
    0.014283955 = product of:
      0.02856791 = sum of:
        0.02856791 = product of:
          0.05713582 = sum of:
            0.05713582 = weight(_text_:i in 3887) [ClassicSimilarity], result of:
              0.05713582 = score(doc=3887,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.33337694 = fieldWeight in 3887, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3887)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  16. Hawking, S.: This is the most dangerous time for our planet (2016) 0.01
    0.013391207 = product of:
      0.026782414 = sum of:
        0.026782414 = product of:
          0.053564828 = sum of:
            0.053564828 = weight(_text_:i in 3273) [ClassicSimilarity], result of:
              0.053564828 = score(doc=3273,freq=18.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.31254086 = fieldWeight in 3273, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3273)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    "As a theoretical physicist based in Cambridge, I have lived my life in an extraordinarily privileged bubble. Cambridge is an unusual town, centered around one of the world's great universities. Within that town, the scientific community which I became part of in my twenties is even more rarefied. And within that scientific community, the small group of international theoretical physicists with whom I have spent my working life might sometimes be tempted to regard themselves as the pinnacle. Add to this, the celebrity that has come with my books, and the isolation imposed by my illness, I feel as though my ivory tower is getting taller. So the recent apparent rejection of the elite in both America and Britain is surely aimed at me, as much as anyone. Whatever we might think about the decision by the British electorate to reject membership of the European Union, and by the American public to embrace Donald Trump as their next President, there is no doubt in the minds of commentators that this was a cry of anger by people who felt that they had been abandoned by their leaders. It was, everyone seems to agree, the moment that the forgotten spoke, finding their voice to reject the advice and guidance of experts and the elite everywhere.
    I am no exception to this rule. I warned before the Brexit vote that it would damage scientific research in Britain, that a vote to leave would be a step backward, and the electorate, or at least a sufficiently significant proportion of it, took no more notice of me than any of the other political leaders, trade unionists, artists, scientists, businessmen and celebrities who all gave the same unheeded advice to the rest of the country. What matters now however, far more than the choices made by these two electorates, is how the elites react. Should we, in turn, reject these votes as outpourings of crude populism that fail to take account of the facts, and attempt to circumvent or circumscribe the choices that they represent? I would argue that this would be a terrible mistake. The concerns underlying these votes about the economic consequences of globalisation and accelerating technological change are absolutely understandable. The automation of factories has already decimated jobs in traditional manufacturing, the rise of AI is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.
    This in turn will accelerate the already widening economic inequality around the world. The internet and the platforms which it makes possible allow very small groups of individuals to make enormous profits while employing very few people. This is inevitable, it is progress, but it is also socially destructive. We need to put this alongside the financial crash, which brought home to people that a very few individuals working in the financial sector can accrue huge rewards and that the rest of us underwrite that success and pick up the bill when their greed leads us astray. So taken together we are living in a world of widening, not diminishing, financial inequality, in which many people can see not just their standard of living, but their ability to earn a living at all, disappearing. It is no wonder then that they are searching for a new deal, which Trump and Brexit might have appeared to represent. It is also the case that another unintended consequence of the global spread of the internet and social media is that the stark nature of these inequalities are far more apparent than they have been in the past. For me, the ability to use technology to communicate has been a liberating and positive experience. Without it, I would not have been able to continue working these many years past. But it also means that the lives of the richest people in the most prosperous parts of the world are agonisingly visible to anyone, however poor and who has access to a phone. And since there are now more people with a telephone than access to clean water in Sub-Saharan Africa, this will shortly mean nearly everyone on our increasingly crowded planet will not be able to escape the inequality.
    The consequences of this are plain to see; the rural poor flock to cities, to shanty towns, driven by hope. And then often, finding that the Instagram nirvana is not available there, they seek it overseas, joining the ever greater numbers of economic migrants in search of a better life. These migrants in turn place new demands on the infrastructures and economies of the countries in which they arrive, undermining tolerance and further fuelling political populism. For me, the really concerning aspect of this, is that now, more than at any time in our history, our species needs to work together. We face awesome environmental challenges. Climate change, food production, overpopulation, the decimation of other species, epidemic disease, acidification of the oceans. Together, they are a reminder that we are at the most dangerous moment in the development of humanity. We now have the technology to destroy the planet on which we live, but have not yet developed the ability to escape it. Perhaps in a few hundred years, we will have established human colonies amidst the stars, but right now we only have one planet, and we need to work together to protect it. To do that, we need to break down not build up barriers within and between nations. If we are to stand a chance of doing that, the world's leaders need to acknowledge that they have failed and are failing the many. With resources increasingly concentrated in the hands of a few, we are going to have to learn to share far more than at present. With not only jobs but entire industries disappearing, we must help people to re-train for a new world and support them financially while they do so. If communities and economies cannot cope with current levels of migration, we must do more to encourage global development, as that is the only way that the migratory millions will be persuaded to seek their future at home. 
We can do this, I am an enormous optimist for my species, but it will require the elites, from London to Harvard, from Cambridge to Hollywood, to learn the lessons of the past month. To learn above all a measure of humility."
  17. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.01
    0.013059714 = product of:
      0.026119428 = sum of:
        0.026119428 = product of:
          0.052238856 = sum of:
            0.052238856 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.052238856 = score(doc=1967,freq=4.0), product of:
                0.15912095 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045439374 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) the Chinese Library Classification. The use of conceptual models in practice is also discussed.
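    The score trees shown with each result are Lucene ClassicSimilarity explain output. As a cross-check, a minimal Python sketch (assuming Lucene's classic formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the function name is illustrative, not from any source) reproduces the numbers in the tree for result 17:

```python
import math

def classic_similarity_score(freq, doc_freq, max_docs, query_norm, field_norm, coord=1.0):
    """Reproduce one branch of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                             # tf(freq), e.g. 2.0 = tf(freq=4.0)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                  # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm             # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight * coord       # weight, scaled by the coord factors

# Result 17 above: freq=4, docFreq=3622, maxDocs=44218,
# queryNorm=0.045439374, fieldNorm=0.046875, and two coord(1/2) factors.
score = classic_similarity_score(4.0, 3622, 44218, 0.045439374, 0.046875, coord=0.5 * 0.5)
```

    The `coord=0.5 * 0.5` argument mirrors the two nested `coord(1/2)` lines in the tree; with it, the sketch yields result 17's final score of 0.013059714.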
  18. Gábor, K.; Zargayouna, H.; Tellier, I.; Buscaldi, D.; Charnois, T.: A typology of semantic relations dedicated to scientific literature analysis (2016) 0.01
    0.012498461 = product of:
      0.024996921 = sum of:
        0.024996921 = product of:
          0.049993843 = sum of:
            0.049993843 = weight(_text_:i in 2933) [ClassicSimilarity], result of:
              0.049993843 = score(doc=2933,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.29170483 = fieldWeight in 2933, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2933)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. ISKO Encyclopedia of Knowledge Organization (2016) 0.01
    0.012498461 = product of:
      0.024996921 = sum of:
        0.024996921 = product of:
          0.049993843 = sum of:
            0.049993843 = weight(_text_:i in 3181) [ClassicSimilarity], result of:
              0.049993843 = score(doc=3181,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.29170483 = fieldWeight in 3181, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3181)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    i
  20. Abdelkareem, M.A.A.: In terms of publication index, what indicator is the best for researchers indexing, Google Scholar, Scopus, Clarivate or others? (2018) 0.01
    0.012498461 = product of:
      0.024996921 = sum of:
        0.024996921 = product of:
          0.049993843 = sum of:
            0.049993843 = weight(_text_:i in 4548) [ClassicSimilarity], result of:
              0.049993843 = score(doc=4548,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.29170483 = fieldWeight in 4548, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4548)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    I believe that Google Scholar is the most popular academic indexing service for researchers and citations. However, some other indexing institutions may be more professional than Google Scholar, though not as popular. Other indexing services, such as Scopus and Clarivate, provide more statistical figures for scholars, institutions, and even journals. As for publication citations, Google Scholar usually shows higher citation counts for a paper than other indexing services, since it takes most publication platforms into account and can therefore count citations more comprehensively, while other databases only count citations from those journals that are already indexed in their own collections.

Types

  • a 32
  • x 3
  • i 1
  • n 1
  • s 1