Search (692 results, page 1 of 35)

  • Filter: year_i:[2010 TO 2020}
  1. Jahns, Y.: Take a Chance on Me : Aus den Veranstaltungen der Sektionen Bibliografie, Katalogisierung, ..., 76. IFLA-Generalkonferenz in Göteborg, Nachtrag zum im Bibliotheksdienst Nr. 10, Oktober 2010, erschienenen Beitrag (2010) 0.09
    0.0917965 = product of:
      0.2753895 = sum of:
        0.2753895 = weight(_text_:me in 3488) [ClassicSimilarity], result of:
          0.2753895 = score(doc=3488,freq=2.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.80279493 = fieldWeight in 3488, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.078125 = fieldNorm(doc=3488)
      0.33333334 = coord(1/3)
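    The tree above is Lucene's ClassicSimilarity (TF-IDF) explanation of the score. As a sanity check, here is a minimal Python sketch that reproduces the first score from the printed factors; the variable names are ours, not Lucene API calls.

        import math

        # Factors from the explain tree for doc 3488 and the query term "me".
        idf = 1 + math.log(44218 / (83 + 1))  # ~7.2660704 (docFreq=83, maxDocs=44218)
        query_norm = 0.047210995              # fixed per query, printed above
        tf = math.sqrt(2.0)                   # ClassicSimilarity uses sqrt(termFreq)
        field_norm = 0.078125                 # length normalization stored at index time
        coord = 1 / 3                         # 1 of 3 query clauses matched

        query_weight = idf * query_norm       # 0.3430384
        field_weight = tf * idf * field_norm  # 0.80279493
        print(query_weight * field_weight * coord)  # ~0.0917965, the score shown above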
    
  2. McGrath, K.: Thoughts on FRBR and moving images (2014) 0.09
    0.0917965 = product of:
      0.2753895 = sum of:
        0.2753895 = weight(_text_:me in 2431) [ClassicSimilarity], result of:
          0.2753895 = score(doc=2431,freq=2.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.80279493 = fieldWeight in 2431, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.078125 = fieldNorm(doc=2431)
      0.33333334 = coord(1/3)
    
    Abstract
    I'd like to talk about some things that have come up for me as I've thought about how FRBR might apply to moving images.
  3. Gorman, M.: The origins and making of the ISBD : a personal history, 1966-1978 (2014) 0.09
    0.0908739 = product of:
      0.2726217 = sum of:
        0.2726217 = weight(_text_:me in 1995) [ClassicSimilarity], result of:
          0.2726217 = score(doc=1995,freq=4.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.79472643 = fieldWeight in 1995, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1995)
      0.33333334 = coord(1/3)
    
    Abstract
    What follows are my memories of the events, starting almost five decades ago, that led to the International Standard Bibliographic Description (ISBD), still the most successful and widely used international cataloging standard in history. Many of the documents of the time were little more than ephemera (working papers and the like) and some are not now available to me. I have checked my recollections against all the documents to which I have access and apologize in advance for any errors of time or place. I also apologize for the many repetitions of the words "I" and "me," which are, alas, unavoidable given the nature of the essay.
  4. Tsai, R.T.-H.; Chiu, B.; Wu, C.-E.: Visual webpage block importance prediction using conditional random fields (2011) 0.08
    0.07949811 = product of:
      0.23849432 = sum of:
        0.23849432 = weight(_text_:me in 4924) [ClassicSimilarity], result of:
          0.23849432 = score(doc=4924,freq=6.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.69524086 = fieldWeight in 4924, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4924)
      0.33333334 = coord(1/3)
    
    Abstract
    We have developed a system that segments web pages into blocks and predicts those blocks' importance (block importance prediction, or BIP). First, we use VIPS to partition a page into a tree composed of blocks; we then extract features from each block and label all leaf nodes. This paper makes two main contributions. First, we pioneer the formulation of BIP as a sequence tagging task: we employ DFS, which outputs a single sequence for the whole tree in which related sub-blocks are adjacent. Our second contribution is using the conditional random fields (CRF) model for labeling these sequences. CRF transition features model correlations between neighboring labels well, and a CRF can label all blocks in a sequence simultaneously, finding the globally optimal solution for the whole sequence rather than merely the best solution for each block. In our experiments, our CRF-based system achieves an F1-measure of 97.41%, which significantly outperforms our maximum entropy (ME) baseline (95.64%). Lastly, we tested the CRF-based system on sites that were not covered in the training data. On completely novel sites, CRF performed slightly worse than ME; however, when given only two training pages from a given site, CRF improved almost three times as much as ME.
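    The pipeline in this abstract (block tree, DFS serialization, CRF labeling) is easy to sketch. Below is a minimal, hypothetical illustration using sklearn-crfsuite; the paper does not name its CRF toolkit, and the block tree, features, and labels are invented.

        import sklearn_crfsuite  # pip install sklearn-crfsuite

        # Hypothetical VIPS-style block tree: (tag, children); leaves are the blocks to label.
        tree = ("page", [
            ("header", []), ("nav", []),
            ("content", [("title", []), ("body", []), ("ad", [])]),
            ("footer", []),
        ])

        def dfs_leaves(node, depth=0):
            # DFS keeps related sub-blocks adjacent in the output sequence.
            tag, children = node
            if not children:
                yield {"tag": tag, "depth": depth}
            for child in children:
                yield from dfs_leaves(child, depth + 1)

        X = [list(dfs_leaves(tree))]                        # one sequence per page
        y = [["low", "low", "high", "high", "low", "low"]]  # invented importance labels

        crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
        crf.fit(X, y)          # transition features capture correlations between neighboring labels
        print(crf.predict(X))  # labels the whole sequence jointly, as the paper describes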
  5. Altenhöner, R.; Gömpel, R.; Jahns, Y.; Junger, U.; Mahnke, C.; Meyer, A.; Oehlschläger, S.: Take a Chance on Me : Aus den Veranstaltungen der Sektionen Bibliografie, Katalogisierung, Klassifikation und Indexierung, Knowledge Management und Informationstechnologie sowie den Core Activities ICADS und UNIMARC der IFLA Division III (Library Services) und der Arbeitsgruppe der IFLA-Präsidentin für die Informationsgesellschaft beim Weltkongress Bibliothek und Information, 76. IFLA-Generalkonferenz in Göteborg, Schweden (2010) 0.07
    0.0734372 = product of:
      0.2203116 = sum of:
        0.2203116 = weight(_text_:me in 4075) [ClassicSimilarity], result of:
          0.2203116 = score(doc=4075,freq=2.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.64223593 = fieldWeight in 4075, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.0625 = fieldNorm(doc=4075)
      0.33333334 = coord(1/3)
    
  6. Finke, M.; Risch, J.: "Match Me If You Can" : Sammeln und semantisches Aufbereiten von Fußballdaten (2017) 0.07
    0.0734372 = product of:
      0.2203116 = sum of:
        0.2203116 = weight(_text_:me in 3723) [ClassicSimilarity], result of:
          0.2203116 = score(doc=3723,freq=2.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.64223593 = fieldWeight in 3723, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.0625 = fieldNorm(doc=3723)
      0.33333334 = coord(1/3)
    
  7. Schnetker, M.F.J.: Transhumanistische Mythologie : Rechte Utopien einer technologischen Erlösung durch künstliche Intelligenz (2019) 0.06
    0.064909935 = product of:
      0.19472979 = sum of:
        0.19472979 = weight(_text_:me in 5332) [ClassicSimilarity], result of:
          0.19472979 = score(doc=5332,freq=4.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.56766176 = fieldWeight in 5332, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5332)
      0.33333334 = coord(1/3)
    
    Classification
    RVK ME 2500
  8. Bates, M.J.: The nature of browsing (2019) 0.06
    0.064257555 = product of:
      0.19277266 = sum of:
        0.19277266 = weight(_text_:me in 2265) [ClassicSimilarity], result of:
          0.19277266 = score(doc=2265,freq=2.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.56195647 = fieldWeight in 2265, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2265)
      0.33333334 = coord(1/3)
    
    Abstract
    The recent article by McKay et al. on browsing (2019) provides a valuable addition to the empirical literature of information science on this topic, and I read the descriptions of the various browsing cases with interest. However, the authors refer to my article on browsing (Bates, 2007) in ways that do not make sense to me and do not conform at all to what I actually said.
  9. Zhang, J.; Yu, Q.; Zheng, F.; Long, C.; Lu, Z.; Duan, Z.: Comparing keywords plus of WOS and author keywords : a case study of patient adherence research (2016) 0.06
    0.05619083 = product of:
      0.16857249 = sum of:
        0.16857249 = product of:
          0.33714497 = sum of:
            0.33714497 = weight(_text_:plus in 2857) [ClassicSimilarity], result of:
              0.33714497 = score(doc=2857,freq=16.0), product of:
                0.29135957 = queryWeight, product of:
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.047210995 = queryNorm
                1.157144 = fieldWeight in 2857, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2857)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Bibliometric analysis based on literature in the Web of Science (WOS) has become an increasingly popular method for visualizing the structure of scientific fields. Keywords Plus and Author Keywords are commonly selected as units of analysis, despite the limited research evidence demonstrating the effectiveness of Keywords Plus. This study was conceived to evaluate the efficacy of Keywords Plus as a parameter for capturing the content and scientific concepts presented in articles. Using scientific papers about patient adherence that were retrieved from WOS, a comparative assessment of Keywords Plus and Author Keywords was performed at the scientific field level and the document level, respectively. Our search yielded more Keywords Plus terms than Author Keywords, and the Keywords Plus terms were more broadly descriptive. Keywords Plus is as effective as Author Keywords for bibliometric analyses of the knowledge structure of scientific fields, but it is less comprehensive in representing an article's content.
  10. Iorio, A.D.; Peroni, S.; Poggi, F.; Vitali, F.: Dealing with structural patterns of XML documents (2014) 0.05
    0.05252579 = product of:
      0.15757737 = sum of:
        0.15757737 = sum of:
          0.11919874 = weight(_text_:plus in 1345) [ClassicSimilarity], result of:
            0.11919874 = score(doc=1345,freq=2.0), product of:
              0.29135957 = queryWeight, product of:
                6.1714344 = idf(docFreq=250, maxDocs=44218)
                0.047210995 = queryNorm
              0.40911216 = fieldWeight in 1345, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.1714344 = idf(docFreq=250, maxDocs=44218)
                0.046875 = fieldNorm(doc=1345)
          0.03837863 = weight(_text_:22 in 1345) [ClassicSimilarity], result of:
            0.03837863 = score(doc=1345,freq=2.0), product of:
              0.16532487 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047210995 = queryNorm
              0.23214069 = fieldWeight in 1345, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1345)
      0.33333334 = coord(1/3)
    
    Abstract
    Evaluating collections of XML documents without paying attention to the schema they were written in may give interesting insights into the expected characteristics of a markup language, as well as any regularities that span vocabularies and languages and are more fundamental and frequent than plain content models. In this paper we explore the idea of structural patterns in XML vocabularies by examining the characteristics of elements as they are used, rather than as they are defined. We introduce from the ground up a formal theory of 8 plus 3 structural patterns for XML elements, and verify their identifiability in a number of different XML vocabularies. The results allowed the creation of visualization and content-extraction tools that are completely independent of the schema and require no previous knowledge of the semantics and organization of the XML vocabulary of the documents.
    Date
    22. 8.2014 17:08:49
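    The core idea of the paper above, classifying elements by how they are used rather than how they are defined, can be suggested with a rough Python sketch. The four coarse classes below are a simplification of the paper's 8 plus 3 patterns, and the sample document is invented.

        import xml.etree.ElementTree as ET

        def pattern(el):
            # Classify an element from its observed use, not from a schema.
            has_text = bool((el.text or "").strip()) or any((c.tail or "").strip() for c in el)
            has_children = len(el) > 0
            if has_text and has_children:
                return "mixed"
            if has_children:
                return "container"
            if has_text:
                return "text-only"
            return "empty"

        doc = ET.fromstring("<article><title>Hi</title><p>See <em>this</em>.</p><hr/></article>")
        for el in doc.iter():
            print(el.tag, "->", pattern(el))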
  11. Lawrie, D.; Mayfield, J.; McNamee, P.; Oard, P.W.: Cross-language person-entity linking from 20 languages (2015) 0.05
    0.05252579 = product of:
      0.15757737 = sum of:
        0.15757737 = sum of:
          0.11919874 = weight(_text_:plus in 1848) [ClassicSimilarity], result of:
            0.11919874 = score(doc=1848,freq=2.0), product of:
              0.29135957 = queryWeight, product of:
                6.1714344 = idf(docFreq=250, maxDocs=44218)
                0.047210995 = queryNorm
              0.40911216 = fieldWeight in 1848, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.1714344 = idf(docFreq=250, maxDocs=44218)
                0.046875 = fieldNorm(doc=1848)
          0.03837863 = weight(_text_:22 in 1848) [ClassicSimilarity], result of:
            0.03837863 = score(doc=1848,freq=2.0), product of:
              0.16532487 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047210995 = queryNorm
              0.23214069 = fieldWeight in 1848, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1848)
      0.33333334 = coord(1/3)
    
    Abstract
    The goal of entity linking is to associate references to an entity that is found in unstructured natural language content to an authoritative inventory of known entities. This article describes the construction of 6 test collections for cross-language person-entity linking that together span 22 languages. Fully automated components were used together with 2 crowdsourced validation stages to affordably generate ground-truth annotations with an accuracy comparable to that of a completely manual process. The resulting test collections each contain between 642 (Arabic) and 2,361 (Romanian) person references in non-English texts for which the correct resolution in English Wikipedia is known, plus a similar number of references for which no correct resolution into English Wikipedia is believed to exist. Fully automated cross-language person-name linking experiments with 20 non-English languages yielded a resolution accuracy of between 0.84 (Serbian) and 0.98 (Romanian), which compares favorably with previously reported cross-language entity linking results for Spanish.
  12. Mirizzi, R.; Ragone, A.; Noia, T. Di; Sciascio, E. Di: A recommender system for linked data (2012) 0.05
    0.051927947 = product of:
      0.15578383 = sum of:
        0.15578383 = weight(_text_:me in 436) [ClassicSimilarity], result of:
          0.15578383 = score(doc=436,freq=4.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.4541294 = fieldWeight in 436, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.03125 = fieldNorm(doc=436)
      0.33333334 = coord(1/3)
    
    Abstract
    Peter and Alice are at home; it is a calm winter night, snow is falling, and it is too cold to go outside. "Why don't we just order a pizza and watch a movie?" says Alice, wrapped in her favorite blanket. "Why not?" Peter replies. "Which movie do you wanna watch?" "Well, what about some comedy, romance-like one? Com'on Pete, look on Facebook, there is that nice application Kara suggested to me some days ago!" answers Alice. "Oh yes, MORE, here we go, tell me a movie you like a lot," says Peter, excited. "Uhm, I wanna see something like Bridget Jones's Diary or Four Weddings and a Funeral: humour, romance, good actors..." replies his beloved, rubbing her hands. Peter is a bit concerned, as he is more into the fantasy genre, but he wants to please Alice, so he looks on MORE for movies similar to Bridget Jones's Diary and Four Weddings and a Funeral: "Here we are, my dear: MORE suggests the sequel or, if you prefer, Love Actually." "I would prefer the second." "Great! Let's rent it!" nods Peter in agreement. The scenario just presented highlights an interesting and useful feature of a modern Web application. There are tasks where users look for items similar to ones they already know; hence, we need systems that recommend items based on user preferences. In other words, systems should allow an easy and friendly exploration of the information and data related to a particular domain of interest. Such characteristics are well known in the literature and in common applications such as recommender systems. Nevertheless, new challenges in this field arise when the information used by these systems exploits the huge amount of interlinked data coming from the Semantic Web. In this chapter, we present MORE, a system for movie recommendation in the Web of Data.
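    The kind of similarity MORE computes over Linked Data can be suggested with a toy sketch: score candidate movies by the overlap of the DBpedia-style resources they share with the movies the user named. The annotation sets below are invented, and MORE's actual ranking is far more elaborate than plain Jaccard similarity.

        # Invented Linked Data annotations (DBpedia-style resource identifiers).
        movies = {
            "Bridget Jones's Diary":       {"dbc:Romantic_comedy", "dbr:London", "dbr:Hugh_Grant"},
            "Four Weddings and a Funeral": {"dbc:Romantic_comedy", "dbr:London", "dbr:Hugh_Grant"},
            "Love Actually":               {"dbc:Romantic_comedy", "dbr:London", "dbr:Hugh_Grant"},
            "The Fellowship of the Ring":  {"dbc:Fantasy_film", "dbr:New_Zealand"},
        }

        def jaccard(a, b):
            return len(a & b) / len(a | b)

        liked = ("Bridget Jones's Diary", "Four Weddings and a Funeral")
        profile = set().union(*(movies[m] for m in liked))
        ranked = sorted((m for m in movies if m not in liked),
                        key=lambda m: jaccard(profile, movies[m]), reverse=True)
        print(ranked[0])  # "Love Actually", matching the scenario above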
  13. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.05
    0.049989056 = product of:
      0.14996716 = sum of:
        0.14996716 = product of:
          0.44990146 = sum of:
            0.44990146 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.44990146 = score(doc=973,freq=2.0), product of:
                0.40025535 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.047210995 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
  14. Ullrich, H.; Ruppert, A.: Katalog plus, die Freiburger Lösung zur Kombination von lokalem Katalog und globalem RDS-Index (2012) 0.05
    0.04635507 = product of:
      0.1390652 = sum of:
        0.1390652 = product of:
          0.2781304 = sum of:
            0.2781304 = weight(_text_:plus in 806) [ClassicSimilarity], result of:
              0.2781304 = score(doc=806,freq=2.0), product of:
                0.29135957 = queryWeight, product of:
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.047210995 = queryNorm
                0.954595 = fieldWeight in 806, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.109375 = fieldNorm(doc=806)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  15. Cushing, A.L.: "It's stuff that speaks to me" : exploring the characteristics of digital possessions (2013) 0.05
    0.04589825 = product of:
      0.13769475 = sum of:
        0.13769475 = weight(_text_:me in 1013) [ClassicSimilarity], result of:
          0.13769475 = score(doc=1013,freq=2.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.40139747 = fieldWeight in 1013, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1013)
      0.33333334 = coord(1/3)
    
  16. Karpathy, A.: The unreasonable effectiveness of recurrent neural networks (2015) 0.05
    0.04589825 = product of:
      0.13769475 = sum of:
        0.13769475 = weight(_text_:me in 1865) [ClassicSimilarity], result of:
          0.13769475 = score(doc=1865,freq=2.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.40139747 = fieldWeight in 1865, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1865)
      0.33333334 = coord(1/3)
    
    Abstract
    There's something magical about Recurrent Neural Networks (RNNs). I still remember when I trained my first recurrent network for image captioning. Within a few dozen minutes of training, my first baby model (with rather arbitrarily chosen hyperparameters) started to generate very nice-looking descriptions of images that were on the edge of making sense. Sometimes the ratio of how simple your model is to the quality of the results you get out of it blows past your expectations, and this was one of those times. What made this result so shocking at the time was that the common wisdom held that RNNs were supposed to be difficult to train (with more experience I've in fact reached the opposite conclusion). Fast forward about a year: I'm training RNNs all the time and I've witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me. This post is about sharing some of that magic with you. By the way, together with this post I am also releasing code on GitHub (https://github.com/karpathy/char-rnn) that allows you to train character-level language models based on multi-layer LSTMs. You give it a large chunk of text and it will learn to generate text like it, one character at a time. You can also use it to reproduce my experiments below. But we're getting ahead of ourselves; what are RNNs anyway?
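    char-rnn itself is Torch (Lua) code; as a rough modern analogue, a character-level language model needs only an embedding, a multi-layer LSTM, and a linear head predicting the next character. The PyTorch sketch below uses an arbitrary toy corpus and hyperparameters, and omits the training loop.

        import torch
        import torch.nn as nn

        text = "hello world, hello rnn. "  # toy corpus; char-rnn wants a large chunk of text
        chars = sorted(set(text))
        stoi = {c: i for i, c in enumerate(chars)}

        class CharLM(nn.Module):
            def __init__(self, vocab, hidden=64, layers=2):
                super().__init__()
                self.emb = nn.Embedding(vocab, hidden)
                self.lstm = nn.LSTM(hidden, hidden, num_layers=layers, batch_first=True)
                self.head = nn.Linear(hidden, vocab)
            def forward(self, x, state=None):
                h, state = self.lstm(self.emb(x), state)
                return self.head(h), state  # logits over the next character

        model = CharLM(len(chars))
        ids = torch.tensor([[stoi[c] for c in text]])
        logits, _ = model(ids[:, :-1])      # predict character t+1 from characters up to t
        loss = nn.functional.cross_entropy(
            logits.reshape(-1, len(chars)), ids[:, 1:].reshape(-1))
        loss.backward()                     # gradients for one step; optimizer loop omitted
        print(float(loss))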
  17. Rubenstein, E.L.: "They are always there for me" : the convergence of social support and information in an online breast cancer community (2015) 0.05
    0.04589825 = product of:
      0.13769475 = sum of:
        0.13769475 = weight(_text_:me in 2041) [ClassicSimilarity], result of:
          0.13769475 = score(doc=2041,freq=2.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.40139747 = fieldWeight in 2041, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2041)
      0.33333334 = coord(1/3)
    
  18. Smiraglia, R.P.: Keywords redux : an editorial (2015) 0.05
    0.04589825 = product of:
      0.13769475 = sum of:
        0.13769475 = weight(_text_:me in 2099) [ClassicSimilarity], result of:
          0.13769475 = score(doc=2099,freq=2.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.40139747 = fieldWeight in 2099, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2099)
      0.33333334 = coord(1/3)
    
    Abstract
    In KO volume 40, number 3 (2013), I included an editorial about keywords: both about the absence, prior to that date, of designated keywords in articles in Knowledge Organization, and about the misuse of the idea by some other journal publications (Smiraglia 2013). At the time I was chagrined to discover how little correlation there was across the formal indexing of a small set of papers from our journal, and especially to see how little correspondence there was between the actual keywords appearing in the published texts and any of the indexing supplied by either Thomson Reuters' Web of Science (WoS) or EBSCOhost's Library and Information Science and Technology Abstracts with Full Text (LISTA). The idea of a keyword arose in the early days of automated indexing, when it was discovered that using terms that actually occurred in full texts (or, in the earliest days, in titles and abstracts) as search "keys," usually in Boolean combinations, provided fairly precise recall in small, contextually confined text corpora. A recent Wikipedia entry (Keywords 2015) imbues keywords with properties of structural reasoning, but notes that they are "key" among the most frequently occurring terms in a text corpus. The jury is still out on whether keyword retrieval is better than indexing with subject headings, but in general, keyword searches in large, unstructured text corpora (which is what we have today) are imprecise and result in large recall sets with many irrelevant hits (see the recent analysis by Gross, Taylor, and Joudrey (2014)). Thus it seems inadvisable to me, as editor, especially of a journal on knowledge organization, to facilitate imprecise indexing of our journal's content.
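    Keyword retrieval in the original sense described above is simple to sketch: index the terms that actually occur in each text, then intersect posting sets for Boolean AND queries. The three-document corpus below is invented.

        from collections import defaultdict

        docs = {
            1: "keywords arose in early automated indexing",
            2: "subject headings are assigned by human indexers",
            3: "boolean keyword search over automated indexes",
        }

        # Inverted index: term -> set of documents the term actually occurs in.
        index = defaultdict(set)
        for doc_id, text in docs.items():
            for term in text.split():
                index[term].add(doc_id)

        def boolean_and(*terms):
            result = set(docs)
            for t in terms:
                result &= index[t]  # precise in a small corpus; recall balloons as corpora grow
            return sorted(result)

        print(boolean_and("automated", "indexing"))  # -> [1]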
  19. Hawking, S.: This is the most dangerous time for our planet (2016) 0.05
    0.04589825 = product of:
      0.13769475 = sum of:
        0.13769475 = weight(_text_:me in 3273) [ClassicSimilarity], result of:
          0.13769475 = score(doc=3273,freq=8.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.40139747 = fieldWeight in 3273, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3273)
      0.33333334 = coord(1/3)
    
    Content
    "As a theoretical physicist based in Cambridge, I have lived my life in an extraordinarily privileged bubble. Cambridge is an unusual town, centered around one of the world's great universities. Within that town, the scientific community which I became part of in my twenties is even more rarefied. And within that scientific community, the small group of international theoretical physicists with whom I have spent my working life might sometimes be tempted to regard themselves as the pinnacle. Add to this, the celebrity that has come with my books, and the isolation imposed by my illness, I feel as though my ivory tower is getting taller. So the recent apparent rejection of the elite in both America and Britain is surely aimed at me, as much as anyone. Whatever we might think about the decision by the British electorate to reject membership of the European Union, and by the American public to embrace Donald Trump as their next President, there is no doubt in the minds of commentators that this was a cry of anger by people who felt that they had been abandoned by their leaders. It was, everyone seems to agree, the moment that the forgotten spoke, finding their voice to reject the advice and guidance of experts and the elite everywhere.
    I am no exception to this rule. I warned before the Brexit vote that it would damage scientific research in Britain, that a vote to leave would be a step backward, and the electorate, or at least a sufficiently significant proportion of it, took no more notice of me than any of the other political leaders, trade unionists, artists, scientists, businessmen and celebrities who all gave the same unheeded advice to the rest of the country. What matters now however, far more than the choices made by these two electorates, is how the elites react. Should we, in turn, reject these votes as outpourings of crude populism that fail to take account of the facts, and attempt to circumvent or circumscribe the choices that they represent? I would argue that this would be a terrible mistake. The concerns underlying these votes about the economic consequences of globalisation and accelerating technological change are absolutely understandable. The automation of factories has already decimated jobs in traditional manufacturing, the rise of AI is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.
    This in turn will accelerate the already widening economic inequality around the world. The internet and the platforms which it makes possible allow very small groups of individuals to make enormous profits while employing very few people. This is inevitable, it is progress, but it is also socially destructive. We need to put this alongside the financial crash, which brought home to people that a very few individuals working in the financial sector can accrue huge rewards and that the rest of us underwrite that success and pick up the bill when their greed leads us astray. So taken together we are living in a world of widening, not diminishing, financial inequality, in which many people can see not just their standard of living, but their ability to earn a living at all, disappearing. It is no wonder then that they are searching for a new deal, which Trump and Brexit might have appeared to represent. It is also the case that another unintended consequence of the global spread of the internet and social media is that the stark nature of these inequalities is far more apparent than it has been in the past. For me, the ability to use technology to communicate has been a liberating and positive experience. Without it, I would not have been able to continue working these many years past. But it also means that the lives of the richest people in the most prosperous parts of the world are agonisingly visible to anyone, however poor, who has access to a phone. And since there are now more people with a telephone than with access to clean water in Sub-Saharan Africa, this will shortly mean nearly everyone on our increasingly crowded planet will not be able to escape the inequality.
    The consequences of this are plain to see; the rural poor flock to cities, to shanty towns, driven by hope. And then often, finding that the Instagram nirvana is not available there, they seek it overseas, joining the ever greater numbers of economic migrants in search of a better life. These migrants in turn place new demands on the infrastructures and economies of the countries in which they arrive, undermining tolerance and further fuelling political populism. For me, the really concerning aspect of this, is that now, more than at any time in our history, our species needs to work together. We face awesome environmental challenges. Climate change, food production, overpopulation, the decimation of other species, epidemic disease, acidification of the oceans. Together, they are a reminder that we are at the most dangerous moment in the development of humanity. We now have the technology to destroy the planet on which we live, but have not yet developed the ability to escape it. Perhaps in a few hundred years, we will have established human colonies amidst the stars, but right now we only have one planet, and we need to work together to protect it. To do that, we need to break down not build up barriers within and between nations. If we are to stand a chance of doing that, the world's leaders need to acknowledge that they have failed and are failing the many. With resources increasingly concentrated in the hands of a few, we are going to have to learn to share far more than at present. With not only jobs but entire industries disappearing, we must help people to re-train for a new world and support them financially while they do so. If communities and economies cannot cope with current levels of migration, we must do more to encourage global development, as that is the only way that the migratory millions will be persuaded to seek their future at home. We can do this, I am an enormous optimist for my species, but it will require the elites, from London to Harvard, from Cambridge to Hollywood, to learn the lessons of the past month. To learn above all a measure of humility."
  20. Dane, F.C.: The importance of the sources of professional obligations (2014) 0.05
    0.04589825 = product of:
      0.13769475 = sum of:
        0.13769475 = weight(_text_:me in 3367) [ClassicSimilarity], result of:
          0.13769475 = score(doc=3367,freq=2.0), product of:
            0.3430384 = queryWeight, product of:
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.047210995 = queryNorm
            0.40139747 = fieldWeight in 3367, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2660704 = idf(docFreq=83, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3367)
      0.33333334 = coord(1/3)
    
    Abstract
    The study of philosophy provides many general benefits to members of any field or discipline, the easiest of which to defend are an appreciation of, and experience with, critical thinking, including the ability to apply principles thoughtfully and logically in a variety of contexts; it is the discipline that, according to Plato, Socrates believed made life worth living. Today, however, most disciplines can lay claim to critical thinking - information science certainly involves a great deal of logical analysis - but only philosophy, in the Western world, can lay claim to having developed logic and critical thinking and thereby may have furthered the process more than any other discipline. Historically, philosophy is also the discipline in which one learns how to think about the most complex and important questions including questions about what is right and proper; that is, philosophy arguably lays claim to the development of ethics. Before going further, I should note that I am neither a philosopher nor an information scientist. I am a social psychologist and statistician whose interests have brought me into the realm of practical ethics primarily through ethical issues relevant to empirical research. I should also note that I am firmly in the camp of those who consider there to be an important distinction between morals and ethics; as do others, I argue that moral judgements essentially involve questions about whether or not rules, defined broadly, are followed, whereas ethical judgements essentially involve questions about whether or not a particular rule is worthwhile and, when there are incompatible rules, which rule should be granted higher priority.

Languages

  • e 497
  • d 186
  • a 1
  • f 1
  • hu 1

Types

  • a 597
  • el 67
  • m 49
  • s 17
  • x 13
  • r 7
  • b 5
  • i 1
  • z 1