Search (1138 results, page 1 of 57)

  • type_ss:"el"
  1. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.24
    0.2369526 = product of:
      0.5212957 = sum of:
        0.054844867 = product of:
          0.21937947 = sum of:
            0.21937947 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.21937947 = score(doc=230,freq=2.0), product of:
                0.2927568 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.034531306 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.25 = coord(1/4)
        0.21937947 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.21937947 = score(doc=230,freq=2.0), product of:
            0.2927568 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.034531306 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.012927286 = weight(_text_:of in 230) [ClassicSimilarity], result of:
          0.012927286 = score(doc=230,freq=6.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.23940048 = fieldWeight in 230, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.014764623 = weight(_text_:on in 230) [ClassicSimilarity], result of:
          0.014764623 = score(doc=230,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.19440265 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.21937947 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.21937947 = score(doc=230,freq=2.0), product of:
            0.2927568 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.034531306 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.45454547 = coord(5/11)
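    The tree above is Lucene's ClassicSimilarity (TF-IDF) explain output: each leaf scores as queryWeight * fieldWeight, with tf = sqrt(termFreq), idf = 1 + ln(maxDocs/(docFreq+1)), a query-level queryNorm shared by every clause, and fieldNorm carrying the (lossily encoded) field-length normalization; coord(m/n) then scales a sum by the fraction of clauses that matched. The two-character terms "2f" and "3a" are apparently tokens from a percent-encoded URL in the query ("%2F", "%3A"), which is why they carry such a high idf. A minimal sketch that recomputes hit 1's score from the constants read off the tree (float32 vs. float64 rounding aside):

      import math

      # Recompute the explain tree for hit 1 (doc 230), using only the
      # docFreq / maxDocs / fieldNorm / queryNorm constants shown above.
      QUERY_NORM = 0.034531306
      MAX_DOCS = 44218

      def idf(doc_freq):
          # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

      def clause(freq, doc_freq, field_norm):
          # score = queryWeight * fieldWeight
          #       = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
          i = idf(doc_freq)
          return (i * QUERY_NORM) * (math.sqrt(freq) * i * field_norm)

      parts = [
          clause(2.0, 24, 0.0625) * 0.25,  # "3a": inner sub-query, coord(1/4)
          clause(2.0, 24, 0.0625),         # "2f"
          clause(6.0, 25162, 0.0625),      # "of"
          clause(2.0, 13325, 0.0625),      # "on"
          clause(2.0, 24, 0.0625),         # "2f", second field clause
      ]
      score = sum(parts) * (5 / 11)        # coord(5/11): 5 of 11 clauses matched
      print(f"{score:.7f}")                # ~0.2369526, matching the tree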
    
    Abstract
    In this lecture I intend to challenge those who uphold a monist or even a dualist view of the universe; and I will propose, instead, a pluralist view. I will propose a view of the universe that recognizes at least three different but interacting sub-universes.
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  2. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.23
    0.23107655 = product of:
      0.6354605 = sum of:
        0.068556085 = product of:
          0.27422434 = sum of:
            0.27422434 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.27422434 = score(doc=1826,freq=2.0), product of:
                0.2927568 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.034531306 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.25 = coord(1/4)
        0.27422434 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.27422434 = score(doc=1826,freq=2.0), product of:
            0.2927568 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.034531306 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
        0.018455777 = weight(_text_:on in 1826) [ClassicSimilarity], result of:
          0.018455777 = score(doc=1826,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.24300331 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
        0.27422434 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.27422434 = score(doc=1826,freq=2.0), product of:
            0.2927568 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.034531306 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.36363637 = coord(4/11)
    
    Content
    Presentation given at the European Conference on Data Analysis (ECDA 2014), Bremen, Germany, July 2-4, 2014, LIS workshop.
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
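    The Source above is a Google redirect; the actual target sits percent-encoded in its url= parameter and can be pulled out with the standard library. A small sketch (trailing tracking parameters trimmed from the URL shown above):

      from urllib.parse import urlparse, parse_qs

      # Extract the real target from the Google redirect "Source" above;
      # parse_qs percent-decodes the url= parameter for us.
      redirect = ("http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5"
                  "&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de"
                  "%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI"
                  "&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ")
      target = parse_qs(urlparse(redirect).query)["url"][0]
      print(target)  # http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107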
  3. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.08
    0.084137015 = product of:
      0.30850238 = sum of:
        0.034278043 = product of:
          0.13711217 = sum of:
            0.13711217 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.13711217 = score(doc=4388,freq=2.0), product of:
                0.2927568 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.034531306 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.25 = coord(1/4)
        0.13711217 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.13711217 = score(doc=4388,freq=2.0), product of:
            0.2927568 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.034531306 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.13711217 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.13711217 = score(doc=4388,freq=2.0), product of:
            0.2927568 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.034531306 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.27272728 = coord(3/11)
    
    Footnote
    See: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  4. Hawking, S.: This is the most dangerous time for our planet (2016) 0.05
    0.053312503 = product of:
      0.1172875 = sum of:
        0.0145656215 = weight(_text_:of in 3273) [ClassicSimilarity], result of:
          0.0145656215 = score(doc=3273,freq=78.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.26974082 = fieldWeight in 3273, product of:
              8.83176 = tf(freq=78.0), with freq of:
                78.0 = termFreq=78.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3273)
        0.026927741 = weight(_text_:technological in 3273) [ClassicSimilarity], result of:
          0.026927741 = score(doc=3273,freq=2.0), product of:
            0.18347798 = queryWeight, product of:
              5.3133807 = idf(docFreq=591, maxDocs=44218)
              0.034531306 = queryNorm
            0.14676279 = fieldWeight in 3273, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.3133807 = idf(docFreq=591, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3273)
        0.007991587 = weight(_text_:on in 3273) [ClassicSimilarity], result of:
          0.007991587 = score(doc=3273,freq=6.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.10522352 = fieldWeight in 3273, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3273)
        0.030240921 = weight(_text_:great in 3273) [ClassicSimilarity], result of:
          0.030240921 = score(doc=3273,freq=2.0), product of:
            0.19443816 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.034531306 = queryNorm
            0.15552977 = fieldWeight in 3273, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3273)
        0.03756163 = product of:
          0.07512326 = sum of:
            0.07512326 = weight(_text_:britain in 3273) [ClassicSimilarity], result of:
              0.07512326 = score(doc=3273,freq=4.0), product of:
                0.25769958 = queryWeight, product of:
                  7.462781 = idf(docFreq=68, maxDocs=44218)
                  0.034531306 = queryNorm
                0.29151487 = fieldWeight in 3273, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.462781 = idf(docFreq=68, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3273)
          0.5 = coord(1/2)
      0.45454547 = coord(5/11)
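    Two details distinguish this tree from hit 1's: tf grows only as the square root of termFreq (freq=78 for "of" still yields tf of just ~8.83), and the "britain" leaf sits inside its own two-clause sub-query of which one matched, hence the extra coord(1/2). A self-contained sketch of that nested product, constants read off the tree:

      import math

      # The nested "britain" product from the tree above: one of the inner
      # sub-query's two clauses matched, so its sum is scaled by coord(1/2).
      idf = 1.0 + math.log(44218 / (68 + 1))            # docFreq=68 -> ~7.462781
      query_weight = idf * 0.034531306                   # ~0.25769958
      field_weight = math.sqrt(4.0) * idf * 0.01953125   # tf(freq=4)=2 -> ~0.29151487
      print(query_weight * field_weight * 0.5)           # ~0.03756163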
    
    Content
    "As a theoretical physicist based in Cambridge, I have lived my life in an extraordinarily privileged bubble. Cambridge is an unusual town, centered around one of the world's great universities. Within that town, the scientific community which I became part of in my twenties is even more rarefied. And within that scientific community, the small group of international theoretical physicists with whom I have spent my working life might sometimes be tempted to regard themselves as the pinnacle. Add to this, the celebrity that has come with my books, and the isolation imposed by my illness, I feel as though my ivory tower is getting taller. So the recent apparent rejection of the elite in both America and Britain is surely aimed at me, as much as anyone. Whatever we might think about the decision by the British electorate to reject membership of the European Union, and by the American public to embrace Donald Trump as their next President, there is no doubt in the minds of commentators that this was a cry of anger by people who felt that they had been abandoned by their leaders. It was, everyone seems to agree, the moment that the forgotten spoke, finding their voice to reject the advice and guidance of experts and the elite everywhere.
    I am no exception to this rule. I warned before the Brexit vote that it would damage scientific research in Britain, that a vote to leave would be a step backward, and the electorate, or at least a sufficiently significant proportion of it, took no more notice of me than any of the other political leaders, trade unionists, artists, scientists, businessmen and celebrities who all gave the same unheeded advice to the rest of the country. What matters now however, far more than the choices made by these two electorates, is how the elites react. Should we, in turn, reject these votes as outpourings of crude populism that fail to take account of the facts, and attempt to circumvent or circumscribe the choices that they represent? I would argue that this would be a terrible mistake. The concerns underlying these votes about the economic consequences of globalisation and accelerating technological change are absolutely understandable. The automation of factories has already decimated jobs in traditional manufacturing, the rise of AI is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.
    This in turn will accelerate the already widening economic inequality around the world. The internet and the platforms which it makes possible allow very small groups of individuals to make enormous profits while employing very few people. This is inevitable, it is progress, but it is also socially destructive. We need to put this alongside the financial crash, which brought home to people that a very few individuals working in the financial sector can accrue huge rewards and that the rest of us underwrite that success and pick up the bill when their greed leads us astray. So taken together we are living in a world of widening, not diminishing, financial inequality, in which many people can see not just their standard of living, but their ability to earn a living at all, disappearing. It is no wonder then that they are searching for a new deal, which Trump and Brexit might have appeared to represent. It is also the case that another unintended consequence of the global spread of the internet and social media is that the stark nature of these inequalities is far more apparent than it has been in the past. For me, the ability to use technology to communicate has been a liberating and positive experience. Without it, I would not have been able to continue working these many years past. But it also means that the lives of the richest people in the most prosperous parts of the world are agonisingly visible to anyone, however poor, who has access to a phone. And since there are now more people with a telephone than access to clean water in Sub-Saharan Africa, this will shortly mean nearly everyone on our increasingly crowded planet will not be able to escape the inequality.
    The consequences of this are plain to see; the rural poor flock to cities, to shanty towns, driven by hope. And then often, finding that the Instagram nirvana is not available there, they seek it overseas, joining the ever greater numbers of economic migrants in search of a better life. These migrants in turn place new demands on the infrastructures and economies of the countries in which they arrive, undermining tolerance and further fuelling political populism. For me, the really concerning aspect of this, is that now, more than at any time in our history, our species needs to work together. We face awesome environmental challenges. Climate change, food production, overpopulation, the decimation of other species, epidemic disease, acidification of the oceans. Together, they are a reminder that we are at the most dangerous moment in the development of humanity. We now have the technology to destroy the planet on which we live, but have not yet developed the ability to escape it. Perhaps in a few hundred years, we will have established human colonies amidst the stars, but right now we only have one planet, and we need to work together to protect it. To do that, we need to break down not build up barriers within and between nations. If we are to stand a chance of doing that, the world's leaders need to acknowledge that they have failed and are failing the many. With resources increasingly concentrated in the hands of a few, we are going to have to learn to share far more than at present. With not only jobs but entire industries disappearing, we must help people to re-train for a new world and support them financially while they do so. If communities and economies cannot cope with current levels of migration, we must do more to encourage global development, as that is the only way that the migratory millions will be persuaded to seek their future at home. We can do this, I am an enormous optimist for my species, but it will require the elites, from London to Harvard, from Cambridge to Hollywood, to learn the lessons of the past month. To learn above all a measure of humility."
  5. Rusch-Feja, D.; Becker, H.J.: Global Info : the German digital libraries project (1999) 0.05
    0.051031753 = product of:
      0.14033732 = sum of:
        0.016266478 = weight(_text_:of in 1242) [ClassicSimilarity], result of:
          0.016266478 = score(doc=1242,freq=38.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.30123898 = fieldWeight in 1242, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=1242)
        0.043084387 = weight(_text_:technological in 1242) [ClassicSimilarity], result of:
          0.043084387 = score(doc=1242,freq=2.0), product of:
            0.18347798 = queryWeight, product of:
              5.3133807 = idf(docFreq=591, maxDocs=44218)
              0.034531306 = queryNorm
            0.23482047 = fieldWeight in 1242, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.3133807 = idf(docFreq=591, maxDocs=44218)
              0.03125 = fieldNorm(doc=1242)
        0.07054629 = weight(_text_:innovations in 1242) [ClassicSimilarity], result of:
          0.07054629 = score(doc=1242,freq=2.0), product of:
            0.23478 = queryWeight, product of:
              6.7990475 = idf(docFreq=133, maxDocs=44218)
              0.034531306 = queryNorm
            0.30047828 = fieldWeight in 1242, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7990475 = idf(docFreq=133, maxDocs=44218)
              0.03125 = fieldNorm(doc=1242)
        0.010440165 = weight(_text_:on in 1242) [ClassicSimilarity], result of:
          0.010440165 = score(doc=1242,freq=4.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.13746344 = fieldWeight in 1242, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=1242)
      0.36363637 = coord(4/11)
    
    Abstract
    The concept for the German Digital Libraries Program is embedded in the Information Infrastructure Program of the German Federal Government for the years 1996-2000, which has been explicated in the Program Paper entitled "Information as Raw Material for Innovation". The Program Paper was published in 1996 by the Federal Ministry for Education, Research, and Technology. The actual grants program "Global Info" was initiated by the Information and Communication Commission of the Joint Learned Societies to further technological advancement in enabling all researchers in Germany direct access to literature, research results, and other relevant information. This Commission was founded by four of the learned societies in 1995, and it has sponsored a series of workshops to increase awareness of leading-edge technology and innovations in accessing electronic information sources. Now, nine of the leading research-level learned societies -- often those with umbrella responsibilities for other learned societies in their field -- are members of the Information and Communication Commission and represent the mathematicians, physicists, computer scientists, chemists, educational researchers, sociologists, psychologists, biologists and information technologists in the German Association of Engineers. (The German professional librarian societies are not members, as such, of this Commission, but are represented through delegates from libraries in the learned societies and in the future, hopefully, also by the German Association of Documentalists or through the cooperation between the documentalist and librarian professional societies.) The Federal Ministry earmarked 60 million German marks for projects within the framework of the German Digital Libraries Program in two phases over the next six years. The scope for the German Digital Libraries Program was announced in a press release in April 1997, and the first call for preliminary projects and expressions of interest in participation ended in July 1997. The Consortium members were suggested by the Information and Communication Commission of the Learned Societies (IuK Kommission), by key scientific research funding agencies in the German government, and by the publishers themselves. The first official meeting of the participants took place on December 1, 1997, at the Deutsche Bibliothek, located in the renowned center of German book trade, Frankfurt, thus documenting the active role and participation of libraries and publishers. In contrast to the Digital Libraries Project of the National Science Foundation in the United States, the German Digital Libraries project is based on furthering cooperation with universities, scientific publishing houses (including various international publishers), book dealers, and special subject information centers, as well as academic and research libraries. The goals of the German Digital Libraries Project are to achieve: 1) efficient access to worldwide information; 2) directly from the scientist's desktop; 3) while providing the organization for and stimulating fundamental structural changes in the information and communication process of the scientific community.
  6. DeSilva, J.M.; Traniello, J.F.A.; Claxton, A.G.; Fannin, L.D.: When and why did human brains decrease in size? : a new change-point analysis and insights from brain evolution in ants (2021) 0.05
    0.05027428 = product of:
      0.11060341 = sum of:
        0.012825893 = weight(_text_:of in 405) [ClassicSimilarity], result of:
          0.012825893 = score(doc=405,freq=42.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.23752278 = fieldWeight in 405, product of:
              6.4807405 = tf(freq=42.0), with freq of:
                42.0 = termFreq=42.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0234375 = fieldNorm(doc=405)
        0.032313287 = weight(_text_:technological in 405) [ClassicSimilarity], result of:
          0.032313287 = score(doc=405,freq=2.0), product of:
            0.18347798 = queryWeight, product of:
              5.3133807 = idf(docFreq=591, maxDocs=44218)
              0.034531306 = queryNorm
            0.17611535 = fieldWeight in 405, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.3133807 = idf(docFreq=591, maxDocs=44218)
              0.0234375 = fieldNorm(doc=405)
        0.052909717 = weight(_text_:innovations in 405) [ClassicSimilarity], result of:
          0.052909717 = score(doc=405,freq=2.0), product of:
            0.23478 = queryWeight, product of:
              6.7990475 = idf(docFreq=133, maxDocs=44218)
              0.034531306 = queryNorm
            0.22535871 = fieldWeight in 405, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7990475 = idf(docFreq=133, maxDocs=44218)
              0.0234375 = fieldNorm(doc=405)
        0.0055367337 = weight(_text_:on in 405) [ClassicSimilarity], result of:
          0.0055367337 = score(doc=405,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.072900996 = fieldWeight in 405, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0234375 = fieldNorm(doc=405)
        0.0070177726 = product of:
          0.014035545 = sum of:
            0.014035545 = weight(_text_:22 in 405) [ClassicSimilarity], result of:
              0.014035545 = score(doc=405,freq=2.0), product of:
                0.12092275 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034531306 = queryNorm
                0.116070345 = fieldWeight in 405, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=405)
          0.5 = coord(1/2)
      0.45454547 = coord(5/11)
    
    Abstract
    Human brain size nearly quadrupled in the six million years since Homo last shared a common ancestor with chimpanzees, but human brains are thought to have decreased in volume since the end of the last Ice Age. The timing and reason for this decrease is enigmatic. Here we use change-point analysis to estimate the timing of changes in the rate of hominin brain evolution. We find that hominin brains experienced positive rate changes at 2.1 and 1.5 million years ago, coincident with the early evolution of Homo and technological innovations evident in the archeological record. But we also find that human brain size reduction was surprisingly recent, occurring in the last 3,000 years. Our dating does not support hypotheses concerning brain size reduction as a by-product of body size reduction, a result of a shift to an agricultural diet, or a consequence of self-domestication. We suggest our analysis supports the hypothesis that the recent decrease in brain size may instead result from the externalization of knowledge and advantages of group-level decision-making due in part to the advent of social systems of distributed cognition and the storage and sharing of information. Humans live in social groups in which multiple brains contribute to the emergence of collective intelligence. Although difficult to study in the deep history of Homo, the impacts of group size, social organization, collective intelligence and other potential selective forces on brain evolution can be elucidated using ants as models. The remarkable ecological diversity of ants and their species richness encompasses forms convergent in aspects of human sociality, including large group size, agrarian life histories, division of labor, and collective cognition. Ants provide a wide range of social systems to generate and test hypotheses concerning brain size enlargement or reduction and aid in interpreting patterns of brain evolution identified in humans. Although humans and ants represent very different routes in social and cognitive evolution, the insights ants offer can broadly inform us of the selective forces that influence brain size.
    Source
    Frontiers in ecology and evolution, 22 October 2021 [https://www.frontiersin.org/articles/10.3389/fevo.2021.742639/full]
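    The paper's dating rests on change-point analysis: choosing breakpoints that best split a rate series into distinct regimes. As a toy illustration only (synthetic data; not the authors' model or code), a single change point can be recovered by minimizing the two-segment squared error:

      import numpy as np

      # Toy change-point detection: scan all split positions and keep the one
      # minimizing the summed within-segment squared error.
      rng = np.random.default_rng(0)
      t = np.arange(100)
      rates = np.where(t < 60, 0.5, -0.3) + rng.normal(0.0, 0.2, t.size)

      def sse(seg):
          return float(((seg - seg.mean()) ** 2).sum())

      split = min(range(5, 95), key=lambda k: sse(rates[:k]) + sse(rates[k:]))
      print(split)  # close to the true break at t=60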
  7. Keen, A.; Weinberger, D.: Keen vs. Weinberger : July 18, 2007. (2007) 0.04
    0.04485955 = product of:
      0.12336376 = sum of:
        0.015386548 = weight(_text_:of in 1304) [ClassicSimilarity], result of:
          0.015386548 = score(doc=1304,freq=34.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.28494355 = fieldWeight in 1304, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=1304)
        0.043084387 = weight(_text_:technological in 1304) [ClassicSimilarity], result of:
          0.043084387 = score(doc=1304,freq=2.0), product of:
            0.18347798 = queryWeight, product of:
              5.3133807 = idf(docFreq=591, maxDocs=44218)
              0.034531306 = queryNorm
            0.23482047 = fieldWeight in 1304, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.3133807 = idf(docFreq=591, maxDocs=44218)
              0.03125 = fieldNorm(doc=1304)
        0.016507352 = weight(_text_:on in 1304) [ClassicSimilarity], result of:
          0.016507352 = score(doc=1304,freq=10.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.21734878 = fieldWeight in 1304, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=1304)
        0.048385475 = weight(_text_:great in 1304) [ClassicSimilarity], result of:
          0.048385475 = score(doc=1304,freq=2.0), product of:
            0.19443816 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.034531306 = queryNorm
            0.24884763 = fieldWeight in 1304, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1304)
      0.36363637 = coord(4/11)
    
    Abstract
    This is the full text of a "Reply All" debate on Web 2.0 between authors Andrew Keen and David Weinberger.
    Content
    "Mr. Keen begins: So what, exactly, is Web 2.0? It is the radical democratization of media which is enabling anyone to publish anything on the Internet. Mainstream media's traditional audience has become Web 2.0's empowered author. Web 2.0 transforms all of us -- from 90-year-old grandmothers to eight-year-old third graders -- into digital writers, music artists, movie makers and journalists. Web 2.0 is YouTube, the blogosphere, Wikipedia, MySpace or Facebook. Web 2.0 is YOU! (Time Magazine's Person of the Year for 2006). Is Web 2.0 a dream or a nightmare? Is it a remix of Disney's "Cinderella" or of Kafka's "Metamorphosis"? Have we -- as empowered conversationalists in the global citizen media community -- woken up with the golden slipper of our ugly sister (aka: mainstream media) on our dainty little foot? Or have we -- as authors-formerly-know-as-the-audience -- woken up as giant cockroaches doomed to eternally stare at our hideous selves in the mirror of Web 2.0? Silicon Valley, of course, interprets Web 2.0 as Disney rather than Kafka. After all, as the sales and marketing architects of this great democratization argue, what could be wrong with a radically flattened media? Isn't it dreamy that we can all now publish ourselves, that we each possess digital versions of Johannes Gutenberg's printing press, that we are now able to easily create, distribute and sell our content on the Internet? This is personal liberation with an early 21st Century twist -- a mash-up of the countercultural Sixties, the free market idealism of the Eighties, and the technological determinism and consumer-centricity of the Nineties. The people have finally spoken. The media has become their message and the people are self-broadcasting this message of emancipation on their 70 million blogs, their hundreds of millions of YouTube videos, their MySpace pages and their Wikipedia entries. ..."
  8. Genetasio, G.: ¬The International Cataloguing Principles and their future, in: JLIS.it 3/1 (2012) 0.04
    0.04222946 = product of:
      0.15484135 = sum of:
        0.017701415 = weight(_text_:of in 2625) [ClassicSimilarity], result of:
          0.017701415 = score(doc=2625,freq=20.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.32781258 = fieldWeight in 2625, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2625)
        0.105819434 = weight(_text_:innovations in 2625) [ClassicSimilarity], result of:
          0.105819434 = score(doc=2625,freq=2.0), product of:
            0.23478 = queryWeight, product of:
              6.7990475 = idf(docFreq=133, maxDocs=44218)
              0.034531306 = queryNorm
            0.45071742 = fieldWeight in 2625, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7990475 = idf(docFreq=133, maxDocs=44218)
              0.046875 = fieldNorm(doc=2625)
        0.031320494 = weight(_text_:on in 2625) [ClassicSimilarity], result of:
          0.031320494 = score(doc=2625,freq=16.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.4123903 = fieldWeight in 2625, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=2625)
      0.27272728 = coord(3/11)
    
    Abstract
    The article aims to provide an update on the 2009 Statement of International Cataloguing Principles (ICP) and on the status of work on the Statement by the IFLA Cataloguing Section. The article begins with a summary of the drafting process of the ICP by the IME ICC, International Meeting of Experts on an International Cataloguing Code, focusing in particular on the first meeting (IME ICC1) and on the earlier drafts of the 2009 Statement. It then analyzes both the major innovations and the unsatisfactory aspects of the ICP. Finally, it explains and comments on the recent documents by the IFLA Cataloguing Section relating to the ICP, which express their intention to revise the Statement and to assess the advisability of drawing up an international cataloguing code. The latter intention is considered in detail and criticized by the author in the light of the recent publication of the RDA, Resource Description and Access. The article is complemented by an updated bibliography on the ICP.
  9. Report on the future of bibliographic control : draft for public comment (2007) 0.04
    0.039127994 = product of:
      0.107601985 = sum of:
        0.044661034 = weight(_text_:higher in 1271) [ClassicSimilarity], result of:
          0.044661034 = score(doc=1271,freq=4.0), product of:
            0.18138453 = queryWeight, product of:
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.034531306 = queryNorm
            0.24622294 = fieldWeight in 1271, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1271)
        0.014271337 = weight(_text_:of in 1271) [ClassicSimilarity], result of:
          0.014271337 = score(doc=1271,freq=52.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.26429096 = fieldWeight in 1271, product of:
              7.2111025 = tf(freq=52.0), with freq of:
                52.0 = termFreq=52.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1271)
        0.012380512 = weight(_text_:on in 1271) [ClassicSimilarity], result of:
          0.012380512 = score(doc=1271,freq=10.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.16301158 = fieldWeight in 1271, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1271)
        0.036289107 = weight(_text_:great in 1271) [ClassicSimilarity], result of:
          0.036289107 = score(doc=1271,freq=2.0), product of:
            0.19443816 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.034531306 = queryNorm
            0.18663573 = fieldWeight in 1271, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1271)
      0.36363637 = coord(4/11)
    
    Abstract
    The future of bibliographic control will be collaborative, decentralized, international in scope, and Web-based. Its realization will occur in cooperation with the private sector, and with the active collaboration of library users. Data will be gathered from multiple sources; change will happen quickly; and bibliographic control will be dynamic, not static. The underlying technology that makes this future possible and necessary, the World Wide Web, is now almost two decades old. Libraries must continue the transition to this future without delay in order to retain their relevance as information providers. The Working Group on the Future of Bibliographic Control encourages the library community to take a thoughtful and coordinated approach to effecting significant changes in bibliographic control. Such an approach will call for leadership that is neither unitary nor centralized. Nor will the responsibility to provide such leadership fall solely to the Library of Congress (LC). That said, the Working Group recognizes that LC plays a unique role in the library community of the United States, and the directions that LC takes have great impact on all libraries. We also recognize that there are many other institutions and organizations that have the expertise and the capacity to play significant roles in the bibliographic future. Wherever possible, those institutions must step forward and take responsibility for assisting with navigating the transition and for playing appropriate ongoing roles after that transition is complete. To achieve the goals set out in this document, we must look beyond individual libraries to a system-wide deployment of resources. We must realize efficiencies in order to be able to reallocate resources from certain lower-value components of the bibliographic control ecosystem into other higher-value components of that same ecosystem. The recommendations in this report are directed at a number of parties, indicated either by their common initialism (e.g., "LC" for Library of Congress, "PCC" for Program for Cooperative Cataloging) or by their general category (e.g., "Publishers," "National Libraries"). When the recommendation is addressed to "All," it is intended for the library community as a whole and its close collaborators.
    The Library of Congress must begin by prioritizing the recommendations that are directed in whole or in part at LC. Some define tasks that can be achieved immediately and with moderate effort; others will require analysis and planning that will have to be coordinated broadly and carefully. The Working Group has consciously not associated time frames with any of its recommendations. The recommendations fall into five general areas: 1. Increase the efficiency of bibliographic production for all libraries through increased cooperation and increased sharing of bibliographic records, and by maximizing the use of data produced throughout the entire "supply chain" for information resources. 2. Transfer effort into higher-value activity. In particular, expand the possibilities for knowledge creation by "exposing" rare and unique materials held by libraries that are currently hidden from view and, thus, underused. 3. Position our technology for the future by recognizing that the World Wide Web is both our technology platform and the appropriate platform for the delivery of our standards. Recognize that people are not the only users of the data we produce in the name of bibliographic control, but so too are machine applications that interact with those data in a variety of ways. 4. Position our community for the future by facilitating the incorporation of evaluative and other user-supplied information into our resource descriptions. Work to realize the potential of the FRBR framework for revealing and capitalizing on the various relationships that exist among information resources. 5. Strengthen the library profession through education and the development of metrics that will inform decision-making now and in the future. The Working Group intends what follows to serve as a broad blueprint for the Library of Congress and its colleagues in the library and information technology communities for extending and promoting access to information resources.
    Editor
    Library of Congress / Working Group on the Future of Bibliographic Control
  10. Mohr, J.W.; Bogdanov, P.: Topic models : what they are and why they matter (2013) 0.04
    0.03795848 = product of:
      0.13918109 = sum of:
        0.017701415 = weight(_text_:of in 1142) [ClassicSimilarity], result of:
          0.017701415 = score(doc=1142,freq=20.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.32781258 = fieldWeight in 1142, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1142)
        0.105819434 = weight(_text_:innovations in 1142) [ClassicSimilarity], result of:
          0.105819434 = score(doc=1142,freq=2.0), product of:
            0.23478 = queryWeight, product of:
              6.7990475 = idf(docFreq=133, maxDocs=44218)
              0.034531306 = queryNorm
            0.45071742 = fieldWeight in 1142, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7990475 = idf(docFreq=133, maxDocs=44218)
              0.046875 = fieldNorm(doc=1142)
        0.015660247 = weight(_text_:on in 1142) [ClassicSimilarity], result of:
          0.015660247 = score(doc=1142,freq=4.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.20619515 = fieldWeight in 1142, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=1142)
      0.27272728 = coord(3/11)
    
    Abstract
    We provide a brief, non-technical introduction to the text mining methodology known as "topic modeling." We summarize the theory and background of the method and discuss what kinds of things are found by topic models. Using a text corpus comprised of the eight articles from the special issue of Poetics on the subject of topic models, we run a topic model on these articles, both as a way to introduce the methodology and also to help summarize some of the ways in which social and cultural scientists are using topic models. We review some of the critiques and debates over the use of the method and finally, we link these developments back to some of the original innovations in the field of content analysis that were pioneered by Harold D. Lasswell and colleagues during and just after World War II.
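    For readers who want to see the method rather than read about it, here is a minimal, self-contained sketch (scikit-learn LDA on placeholder texts; nothing here reproduces the authors' Poetics corpus or settings):

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      # Fit a tiny LDA topic model and print the top words of each topic.
      docs = [
          "topic models uncover latent themes in large text corpora",
          "content analysis codes documents for cultural meaning",
          "latent dirichlet allocation assigns words to topics probabilistically",
      ]
      vec = CountVectorizer(stop_words="english")
      X = vec.fit_transform(docs)
      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
      terms = vec.get_feature_names_out()
      for k, weights in enumerate(lda.components_):
          print(k, [terms[i] for i in weights.argsort()[-3:][::-1]])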
  11. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.03
    0.033494633 = product of:
      0.09211024 = sum of:
        0.026316766 = weight(_text_:higher in 1182) [ClassicSimilarity], result of:
          0.026316766 = score(doc=1182,freq=2.0), product of:
            0.18138453 = queryWeight, product of:
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.034531306 = queryNorm
            0.14508826 = fieldWeight in 1182, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1182)
        0.013798464 = weight(_text_:of in 1182) [ClassicSimilarity], result of:
          0.013798464 = score(doc=1182,freq=70.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.2555338 = fieldWeight in 1182, product of:
              8.3666 = tf(freq=70.0), with freq of:
                70.0 = termFreq=70.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1182)
        0.009227889 = weight(_text_:on in 1182) [ClassicSimilarity], result of:
          0.009227889 = score(doc=1182,freq=8.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.121501654 = fieldWeight in 1182, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1182)
        0.042767122 = weight(_text_:great in 1182) [ClassicSimilarity], result of:
          0.042767122 = score(doc=1182,freq=4.0), product of:
            0.19443816 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.034531306 = queryNorm
            0.21995232 = fieldWeight in 1182, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1182)
      0.36363637 = coord(4/11)
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher-order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian" or about "Cambridge, Maryland", without hearing about "Cambridge, Massachusetts", Cambridge in the UK or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature. The Natural Language Processing Research Group at the University of Sheffield has developed its open source General Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Analysis and Search (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available and more demanding users can draw upon published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding. The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN), but, with the crucial exception of geographic location, the TGN records do not provide any machine readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
    Although the Alexandria Digital Library provides far richer data than the TGN (5.9 vs. 1.3 million names), its added size lowers, rather than increases, the accuracy of most geographic name identification systems for historical documents: most of the extra 4.6 million names cover low frequency entities that rarely occur in any particular corpus. The TGN is sufficiently comprehensive to provide quite enough noise: we find place names that are used over and over (there are almost one hundred Washingtons) and semantically ambiguous (e.g., is Washington a person or a place?). Comprehensive knowledge sources emphasize recall but lower precision. We need data with which to determine which "Tribune" or "John Brown" a particular passage denotes. Secondly and paradoxically, our reference works may not be comprehensive enough. Human actors come and go over time. Organizations appear and vanish. Even places can change their names or vanish. The TGN does associate the obsolete name Siam with the nation of Thailand (tgn,1000142) - but also with towns named Siam in Iowa (tgn,2035651), Tennessee (tgn,2101519), and Ohio (tgn,2662003). Prussia appears but as a general region (tgn,7016786), with no indication when or if it was a sovereign nation. And if places do point to the same object over time, that object may have very different significance over time: in the foundational works of Western historiography, Herodotus reminds us that the great cities of the past may be small today, and the small cities of today great tomorrow (Hdt. 1.5), while Thucydides stresses that we cannot estimate the past significance of a place by its appearance today (Thuc. 1.10). In other words, we need to know the population figures for the various Washingtons in 1870 if we are analyzing documents from 1870. The foundations have been laid for reference works that provide machine actionable information about entities at particular times in history. The Alexandria Digital Library Gazetteer Content Standard represents a sophisticated framework with which to create such resources: places can be associated with temporal information about their foundation (e.g., Washington, DC, founded on 16 July 1790), changes in names for the same location (e.g., Saint Petersburg to Leningrad and back again), population figures at various times and similar historically contingent data. But if we have the software and the data structures, we do not yet have substantial amounts of historical content such as plentiful digital gazetteers, encyclopedias, lexica, grammars and other reference works to illustrate many periods and, even if we do, those resources may not be in a useful form: raw OCR output of a complex lexicon or gazetteer may have so many errors and have captured so little of the underlying structure that the digital resource is useless as a knowledge base. Put another way, human beings are still much better at reading and interpreting the contents of page images than machines. While people, places, and dates are probably the most important core entities, we will find a growing set of objects that we need to identify and track across collections, and each of these categories of objects will require its own knowledge sources. The following section enumerates and briefly describes some existing categories of documents that we need to mine for knowledge. This brief survey focuses on the format of print sources (e.g., highly structured textual "database" vs. unstructured text) to illustrate some of the challenges involved in converting our published knowledge into semantically annotated, machine actionable form.
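    The "predators at a water hole" idea above is plain pattern matching: wait for a stereotyped phrasing and lift a structured proposition out of it. A toy sketch of that step (the pattern and sentence are illustrative, not from the article):

      import re

      # Turn "NAME born at PLACE in DATE" sentences into (person, place, year)
      # triples -- the kind of simple rule the passage describes.
      PATTERN = re.compile(
          r"(?P<person>[A-Z][a-z]+(?: [A-Z][a-z]+)*) was born at "
          r"(?P<place>[A-Z][a-z]+) in (?P<year>\d{4})"
      )
      m = PATTERN.search("Johann Sebastian Bach was born at Eisenach in 1685.")
      if m:
          print((m["person"], m["place"], int(m["year"])))
          # ('Johann Sebastian Bach', 'Eisenach', 1685)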
  12. Zia, L.L.: Growing a national learning environments and resources network for science, mathematics, engineering, and technology education : current issues and opportunities for the NSDL program (2001) 0.03
    0.033179153 = product of:
      0.12165689 = sum of:
        0.017503629 = weight(_text_:of in 1217) [ClassicSimilarity], result of:
          0.017503629 = score(doc=1217,freq=44.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.3241498 = fieldWeight in 1217, product of:
              6.6332498 = tf(freq=44.0), with freq of:
                44.0 = termFreq=44.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=1217)
        0.0073823114 = weight(_text_:on in 1217) [ClassicSimilarity], result of:
          0.0073823114 = score(doc=1217,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.097201325 = fieldWeight in 1217, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=1217)
        0.09677095 = weight(_text_:great in 1217) [ClassicSimilarity], result of:
          0.09677095 = score(doc=1217,freq=8.0), product of:
            0.19443816 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.034531306 = queryNorm
            0.49769527 = fieldWeight in 1217, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1217)
      0.27272728 = coord(3/11)
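    The explain trees in this listing are Lucene ClassicSimilarity breakdowns; written out as a formula (a reading aid reconstructed from the numbers printed above, using Lucene's standard names):

      \[
        \mathrm{score}(q,d) \;=\; \mathrm{coord}(q,d)\,\sum_{t \in q}
          \underbrace{\mathrm{idf}(t)\cdot\mathrm{queryNorm}}_{\mathrm{queryWeight}(t)}
          \;\cdot\;
          \underbrace{\sqrt{\mathrm{tf}(t,d)}\cdot\mathrm{idf}(t)\cdot\mathrm{fieldNorm}(d)}_{\mathrm{fieldWeight}(t,d)}
      \]

    For the "of" term in entry 12: tf(44) = sqrt(44) ≈ 6.6332, so fieldWeight ≈ 6.6332 × 1.5638 × 0.03125 ≈ 0.3241 and queryWeight ≈ 1.5638 × 0.03453 ≈ 0.0540, and their product is the printed 0.0175. The three matching terms sum to 0.1217, which coord = 3/11 scales to the header score 0.0332.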
    
    Abstract
    The National Science Foundation's (NSF) National Science, Mathematics, Engineering, and Technology Education Digital Library (NSDL) program seeks to create, develop, and sustain a national digital library supporting science, mathematics, engineering, and technology (SMET) education at all levels -- preK-12, undergraduate, graduate, and lifelong learning. The resulting virtual institution is expected to catalyze and support continual improvements in the quality of SMET education in both formal and informal settings. The vision for this program has been explored through a series of workshops over the past several years and documented in accompanying reports and monographs (see [1-7, 10, 12, and 13]). These efforts have led to a characterization of the digital library as a learning environments and resources network for science, mathematics, engineering, and technology education, that is:
    * designed to meet the needs of learners, in both individual and collaborative settings;
    * constructed to enable dynamic use of a broad array of materials for learning primarily in digital format; and
    * managed actively to promote reliable anytime, anywhere access to quality collections and services, available both within and without the network.
    Underlying the NSDL program are several working assumptions. First, while there is currently no lack of "great piles of content" on the Web, there is an urgent need for "piles of great content". The difficulties in discovering and verifying the authority of appropriate Web-based material are certainly well known, yet there are many examples of learning resources of great promise available (particularly those exploiting the power of multiple media), with more added every day. The breadth and interconnectedness of the Web are simultaneously a great strength and a shortcoming. Second, the "unit" or granularity of educational content can and will shrink, affording the opportunity for users to become creators and vice versa, as learning objects are reused, repackaged, and repurposed. To be sure, this scenario cannot take place without serious attention to intellectual property and digital rights management concerns. But new models and technologies are being explored (see a number of recent articles in the January issue of D-Lib Magazine). Third, there is a need for an "organizational infrastructure" that facilitates connections between distributed users and distributed content, as alluded to in the third bullet above. Finally, while much of the ongoing use of the library is envisioned to be "free" in the sense of the public good, there is an opportunity and a need to consider multiple alternative models of sustainability, particularly in the area of services offered by the digital library. More details about the NSDL program, including information about proposal deadlines and current awards, may be found at <http://www.ehr.nsf.gov/ehr/due/programs/nsdl>.
  13. Parent, I.: The importance of national bibliographies in the digital age (2007) 0.03
    0.03159833 = product of:
      0.11586054 = sum of:
        0.014927144 = weight(_text_:of in 687) [ClassicSimilarity], result of:
          0.014927144 = score(doc=687,freq=8.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.27643585 = fieldWeight in 687, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=687)
        0.08616877 = weight(_text_:technological in 687) [ClassicSimilarity], result of:
          0.08616877 = score(doc=687,freq=2.0), product of:
            0.18347798 = queryWeight, product of:
              5.3133807 = idf(docFreq=591, maxDocs=44218)
              0.034531306 = queryNorm
            0.46964094 = fieldWeight in 687, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.3133807 = idf(docFreq=591, maxDocs=44218)
              0.0625 = fieldNorm(doc=687)
        0.014764623 = weight(_text_:on in 687) [ClassicSimilarity], result of:
          0.014764623 = score(doc=687,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.19440265 = fieldWeight in 687, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=687)
      0.27272728 = coord(3/11)
    
    Abstract
    Technological developments are introducing both challenges and opportunities for the future production of national bibliographies. There are complex new issues that must be addressed collectively by national bibliographic agencies. As an international community, we must consider new methods and models for the ongoing provision of authoritative data in national bibliographies, which continue to play an essential role in the control of, and access to, each country's published heritage.
  14. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.03
    0.031301707 = product of:
      0.08607969 = sum of:
        0.017897017 = weight(_text_:of in 3608) [ClassicSimilarity], result of:
          0.017897017 = score(doc=3608,freq=46.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.33143494 = fieldWeight in 3608, product of:
              6.78233 = tf(freq=46.0), with freq of:
                46.0 = termFreq=46.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=3608)
        0.010440165 = weight(_text_:on in 3608) [ClassicSimilarity], result of:
          0.010440165 = score(doc=3608,freq=4.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.13746344 = fieldWeight in 3608, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=3608)
        0.048385475 = weight(_text_:great in 3608) [ClassicSimilarity], result of:
          0.048385475 = score(doc=3608,freq=2.0), product of:
            0.19443816 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.034531306 = queryNorm
            0.24884763 = fieldWeight in 3608, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03125 = fieldNorm(doc=3608)
        0.0093570305 = product of:
          0.018714061 = sum of:
            0.018714061 = weight(_text_:22 in 3608) [ClassicSimilarity], result of:
              0.018714061 = score(doc=3608,freq=2.0), product of:
                0.12092275 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034531306 = queryNorm
                0.15476047 = fieldWeight in 3608, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3608)
          0.5 = coord(1/2)
      0.36363637 = coord(4/11)
    
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else - a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, or any of the great national libraries of Europe - would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, copy-pasteable - as alive in the digital world - as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned, it was said to be an "international catastrophe." When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
    Source
    https://www.theatlantic.com/technology/archive/2017/04/the-tragedy-of-google-books/523320/
  15. Quick Guide to Publishing a Classification Scheme on the Semantic Web (2008) 0.03
    0.031160796 = product of:
      0.11425625 = sum of:
        0.011311376 = weight(_text_:of in 3061) [ClassicSimilarity], result of:
          0.011311376 = score(doc=3061,freq=6.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.20947541 = fieldWeight in 3061, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3061)
        0.01827029 = weight(_text_:on in 3061) [ClassicSimilarity], result of:
          0.01827029 = score(doc=3061,freq=4.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.24056101 = fieldWeight in 3061, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3061)
        0.08467458 = weight(_text_:great in 3061) [ClassicSimilarity], result of:
          0.08467458 = score(doc=3061,freq=2.0), product of:
            0.19443816 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.034531306 = queryNorm
            0.43548337 = fieldWeight in 3061, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3061)
      0.27272728 = coord(3/11)
    
    Abstract
    This document describes in brief how to express the content and structure of a classification scheme, and metadata about a classification scheme, in RDF using the SKOS vocabulary. RDF allows data to be linked to and/or merged with other RDF data by semantic web applications. The Semantic Web, which is based on the Resource Description Framework (RDF), provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Publishing classification schemes in SKOS will unify the great many existing classification efforts in the framework of the Semantic Web.
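    As a concrete illustration of what the guide describes, this short sketch (Python with the rdflib library; the namespace URI, notations, and labels are invented for the example) expresses a tiny two-concept scheme in SKOS and serializes it as Turtle:

      # Express a miniature classification scheme in SKOS using rdflib.
      # All URIs and labels below are example data, not a real scheme.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/scheme/")
      g = Graph()
      g.bind("skos", SKOS)

      scheme = EX["exampleScheme"]
      g.add((scheme, RDF.type, SKOS.ConceptScheme))
      g.add((scheme, SKOS.prefLabel, Literal("Example Classification", lang="en")))

      top = EX["100"]                                  # a top-level class
      g.add((top, RDF.type, SKOS.Concept))
      g.add((top, SKOS.notation, Literal("100")))
      g.add((top, SKOS.prefLabel, Literal("Science", lang="en")))
      g.add((top, SKOS.topConceptOf, scheme))
      g.add((scheme, SKOS.hasTopConcept, top))

      sub = EX["110"]                                  # a subclass of 100
      g.add((sub, RDF.type, SKOS.Concept))
      g.add((sub, SKOS.notation, Literal("110")))
      g.add((sub, SKOS.prefLabel, Literal("Physics", lang="en")))
      g.add((sub, SKOS.broader, top))
      g.add((top, SKOS.narrower, sub))

      print(g.serialize(format="turtle"))

    The skos:notation literals carry the class numbers while skos:broader and skos:narrower express the hierarchy; once serialized as RDF, the scheme can be linked to or merged with other RDF data exactly as the abstract describes.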
  16. The Computer Science Ontology (CSO) (2018) 0.03
    0.029066548 = product of:
      0.10657734 = sum of:
        0.074435055 = weight(_text_:higher in 4429) [ClassicSimilarity], result of:
          0.074435055 = score(doc=4429,freq=4.0), product of:
            0.18138453 = queryWeight, product of:
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.034531306 = queryNorm
            0.41037157 = fieldWeight in 4429, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4429)
        0.016159108 = weight(_text_:of in 4429) [ClassicSimilarity], result of:
          0.016159108 = score(doc=4429,freq=24.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.2992506 = fieldWeight in 4429, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4429)
        0.015983174 = weight(_text_:on in 4429) [ClassicSimilarity], result of:
          0.015983174 = score(doc=4429,freq=6.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.21044704 = fieldWeight in 4429, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4429)
      0.27272728 = coord(3/11)
    
    Abstract
    The Computer Science Ontology (CSO) is a large-scale ontology of research areas that was automatically generated using the Klink-2 algorithm on the Rexplore dataset, which consists of about 16 million publications, mainly in the field of Computer Science. The Klink-2 algorithm combines semantic technologies, machine learning, and knowledge from external sources to automatically generate a fully populated ontology of research areas. Some relationships were also revised manually by experts during the preparation of two ontology-assisted surveys in the fields of Semantic Web and Software Architecture. The main root of CSO is Computer Science; however, the ontology also includes a few secondary roots, such as Linguistics, Geometry, Semantics, and so on. CSO presents two main advantages over manually crafted categorisations used in Computer Science (e.g., the 2012 ACM Classification, the Microsoft Academic Search Classification). First, it can characterise higher-level research areas by means of hundreds of sub-topics and related terms, which makes it possible to map very specific terms to higher-level research areas. Second, it can be easily updated by running Klink-2 on a set of new publications. A more comprehensive discussion of the advantages of adopting an automatically generated ontology in the scholarly domain can be found in.
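    The mapping of very specific terms to higher-level research areas amounts to taking the upward closure of CSO's super-topic relation; a minimal sketch, assuming that relation has been loaded into a plain dictionary (the topic names below are illustrative, not actual CSO data):

      # Walk super-topic links upward to find every higher-level area above
      # a given topic. `super_topics` stands in for CSO's super-topic data.
      super_topics: dict[str, set[str]] = {
          "neural machine translation": {"machine translation"},
          "machine translation": {"natural language processing"},
          "natural language processing": {"artificial intelligence"},
          "artificial intelligence": {"computer science"},
          "computer science": set(),        # a root: no super-topics
      }

      def higher_level_areas(topic: str) -> set[str]:
          """All ancestors of `topic`, via an iterative upward traversal."""
          seen: set[str] = set()
          frontier = [topic]
          while frontier:
              for parent in super_topics.get(frontier.pop(), set()):
                  if parent not in seen:
                      seen.add(parent)
                      frontier.append(parent)
          return seen

      # higher_level_areas("neural machine translation") ->
      # {"machine translation", "natural language processing",
      #  "artificial intelligence", "computer science"}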
  17. Valacchi, F.: Things in the World : the integration process of archival descriptions in intercultural systems (2016) 0.03
    0.028798673 = product of:
      0.105595134 = sum of:
        0.01727841 = weight(_text_:of in 2957) [ClassicSimilarity], result of:
          0.01727841 = score(doc=2957,freq=14.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.31997898 = fieldWeight in 2957, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2957)
        0.07539768 = weight(_text_:technological in 2957) [ClassicSimilarity], result of:
          0.07539768 = score(doc=2957,freq=2.0), product of:
            0.18347798 = queryWeight, product of:
              5.3133807 = idf(docFreq=591, maxDocs=44218)
              0.034531306 = queryNorm
            0.41093582 = fieldWeight in 2957, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.3133807 = idf(docFreq=591, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2957)
        0.012919044 = weight(_text_:on in 2957) [ClassicSimilarity], result of:
          0.012919044 = score(doc=2957,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.17010231 = fieldWeight in 2957, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2957)
      0.27272728 = coord(3/11)
    
    Abstract
    This paper argues that standard archival descriptions are no longer adequate to meet society's needs, especially from an intercultural perspective. After a brief evaluation of the peculiarities of the different domain languages of cultural heritage, the specific issues of archival description are discussed, and possible strategies - technological as well as cultural - that could open the way to an integration of descriptive languages are identified. Particular attention is paid to RDA, which appears to be the best candidate for harmonizing the separate descriptions typical of the archival domain and for activating potential informative integrations regardless of the limitations of particular information environments and of the quality of individual content items.
  18. Hjoerland, B.: Lifeboat for knowledge organization 0.03
    0.028397523 = product of:
      0.10412425 = sum of:
        0.0065306257 = weight(_text_:of in 2973) [ClassicSimilarity], result of:
          0.0065306257 = score(doc=2973,freq=2.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.120940685 = fieldWeight in 2973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2973)
        0.012919044 = weight(_text_:on in 2973) [ClassicSimilarity], result of:
          0.012919044 = score(doc=2973,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.17010231 = fieldWeight in 2973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2973)
        0.08467458 = weight(_text_:great in 2973) [ClassicSimilarity], result of:
          0.08467458 = score(doc=2973,freq=2.0), product of:
            0.19443816 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.034531306 = queryNorm
            0.43548337 = fieldWeight in 2973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2973)
      0.27272728 = coord(3/11)
    
    Abstract
    In spring 2002 I started teaching Knowledge Organization (KO) in the new master's programme at The Royal School of Library and Information Science in Copenhagen (MS RSLIS). I began collecting information about KO as support for my own teaching and research. In the beginning I made the information available to the students through a password-protected system, "SiteScape". This site was a great success, but I encountered problems in transferring the system to new classes in the following years. I have therefore decided to make it public on the Web and to protect only information that should not be made public. References freely available in electronic form are given a URL (if known).
  19. Alfaro, L.de: How (much) to trust Wikipedia (2008) 0.03
    0.02802544 = product of:
      0.10275995 = sum of:
        0.074435055 = weight(_text_:higher in 2138) [ClassicSimilarity], result of:
          0.074435055 = score(doc=2138,freq=4.0), product of:
            0.18138453 = queryWeight, product of:
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.034531306 = queryNorm
            0.41037157 = fieldWeight in 2138, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2138)
        0.012341722 = weight(_text_:of in 2138) [ClassicSimilarity], result of:
          0.012341722 = score(doc=2138,freq=14.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.22855641 = fieldWeight in 2138, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2138)
        0.015983174 = weight(_text_:on in 2138) [ClassicSimilarity], result of:
          0.015983174 = score(doc=2138,freq=6.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.21044704 = fieldWeight in 2138, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2138)
      0.27272728 = coord(3/11)
    
    Abstract
    The Wikipedia is a collaborative encyclopedia: anyone can contribute to its articles simply by clicking on an "edit" button. The open nature of the Wikipedia has been key to its success, but has a flip side: if anyone can edit, how can readers know whether to trust its content? To help answer this question, we have developed a reputation system for Wikipedia authors, and a trust system for Wikipedia text. Authors gain reputation when their contributions are long-lived, and they lose reputation when their contributions are undone in short order. Each word in the Wikipedia is assigned a value of trust that depends on the reputation of its author, as well as on the reputation of the authors that subsequently revised the text where the word appears. To validate our algorithms, we show that reputation and trust have good predictive value: higher-reputation authors are more likely to give lasting contributions, and higher-trust text is less likely to be edited. The trust can be visualized via an intuitive coloring of the text background. The coloring provides an effective way of spotting attempts to tamper with Wikipedia information. A trust-colored version of the entire English Wikipedia can be browsed at http://trust.cse.ucsc.edu/
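    The mechanism sketched in the abstract has two moving parts, which the following toy sketch caricatures (the update constants and function names are invented here, not the authors' actual model): author reputation that rises when edits survive and falls when they are reverted, and word trust that blends the reputations of the author and of later revisers.

      # Toy reputation/trust update in the spirit of the system described
      # above; the constants 1.1 and 0.8 are arbitrary illustration values.
      reputation: dict[str, float] = {}        # author -> reputation score

      def record_edit_outcome(author: str, survived: bool) -> None:
          """Reward long-lived contributions, penalize ones undone in short order."""
          r = reputation.get(author, 1.0)
          reputation[author] = r * 1.1 if survived else r * 0.8

      def word_trust(author: str, revisers: list[str]) -> float:
          """A word's trust depends on its author's reputation and on the
          reputations of authors who later revised the surrounding text."""
          scores = [reputation.get(author, 1.0)]
          scores += [reputation.get(rev, 1.0) for rev in revisers]
          return sum(scores) / len(scores)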
  20. Bawden, D.; Robinson, L.: Information and the gaining of understanding (2015) 0.03
    0.027982553 = product of:
      0.10260269 = sum of:
        0.07368694 = weight(_text_:higher in 893) [ClassicSimilarity], result of:
          0.07368694 = score(doc=893,freq=2.0), product of:
            0.18138453 = queryWeight, product of:
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.034531306 = queryNorm
            0.4062471 = fieldWeight in 893, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.0546875 = fieldNorm(doc=893)
        0.0159967 = weight(_text_:of in 893) [ClassicSimilarity], result of:
          0.0159967 = score(doc=893,freq=12.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.29624295 = fieldWeight in 893, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=893)
        0.012919044 = weight(_text_:on in 893) [ClassicSimilarity], result of:
          0.012919044 = score(doc=893,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.17010231 = fieldWeight in 893, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=893)
      0.27272728 = coord(3/11)
    
    Abstract
    It is suggested that, in addition to data, information, and knowledge, the information sciences should focus on understanding, understood as higher-order knowledge with coherent and explanatory potential. The limited ways in which understanding has been addressed in the design of information systems, in studies of information behaviour, in formulations of information literacy, and in impact studies are briefly reviewed, and future prospects are considered. The paper is an extended version of a keynote presentation given at the i3 conference in June 2015.
    Source
    Journal of information science. 41(2015) no.x, S.1-6
