Search (384 results, page 1 of 20)

  • type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.25
    0.2473204 = product of:
      0.4946408 = sum of:
        0.1236602 = product of:
          0.3709806 = sum of:
            0.3709806 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.3709806 = score(doc=1826,freq=2.0), product of:
                0.39605197 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0467152 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.3709806 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.3709806 = score(doc=1826,freq=2.0), product of:
            0.39605197 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0467152 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.5 = coord(2/4)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
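The indented breakdown under each hit is Lucene "explain" output for the classic TF-IDF similarity: a term's weight is queryWeight (idf × queryNorm) times fieldWeight (tf × idf × fieldNorm), clause weights are summed, and coord() factors scale for the fraction of query clauses that matched. As a rough illustration only (the variable names are mine, not part of the search application), the arithmetic of the first hit can be reproduced in a few lines of Python:

```python
import math

# Values copied from the explain tree of hit 1 (doc 1826).
idf = 8.478011          # consistent with log(maxDocs / (docFreq + 1)) + 1, docFreq=24, maxDocs=44218
query_norm = 0.0467152
tf = math.sqrt(2.0)     # ClassicSimilarity uses sqrt(termFreq); freq = 2.0
field_norm = 0.078125   # length norm stored for this field of doc 1826

assert abs((math.log(44218 / 25) + 1) - idf) < 1e-4  # check the idf formula

query_weight = idf * query_norm           # ~0.39605 ("queryWeight" in the tree)
field_weight = tf * idf * field_norm      # ~0.93670 ("fieldWeight" in the tree)
term_score = query_weight * field_weight  # ~0.37098 for each of "3a" and "2f"

# The "3a" clause sits under an extra coord(1/3); the "2f" clause does not.
clause_sum = term_score * (1 / 3) + term_score  # ~0.49464
final_score = clause_sum * (2 / 4)              # coord(2/4) -> ~0.24732

print(round(final_score, 4))  # 0.2473, matching the displayed score of hit 1
```

The same pattern accounts for every score tree on this page; only tf, idf, fieldNorm and the coord fractions change from hit to hit.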
  2. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.20
    0.19785634 = product of:
      0.39571267 = sum of:
        0.09892817 = product of:
          0.2967845 = sum of:
            0.2967845 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.2967845 = score(doc=230,freq=2.0), product of:
                0.39605197 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0467152 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.2967845 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.2967845 = score(doc=230,freq=2.0), product of:
            0.39605197 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0467152 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.5 = coord(2/4)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  3. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.12
    0.1236602 = product of:
      0.2473204 = sum of:
        0.0618301 = product of:
          0.1854903 = sum of:
            0.1854903 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.1854903 = score(doc=4388,freq=2.0), product of:
                0.39605197 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0467152 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
        0.1854903 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.1854903 = score(doc=4388,freq=2.0), product of:
            0.39605197 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0467152 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.5 = coord(2/4)
    
    Footnote
    See: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  4. Decimal Classification Editorial Policy Committee (2002) 0.06
    0.059254553 = product of:
      0.11850911 = sum of:
        0.017906228 = weight(_text_:science in 236) [ClassicSimilarity], result of:
          0.017906228 = score(doc=236,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.1455159 = fieldWeight in 236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0390625 = fieldNorm(doc=236)
        0.10060288 = sum of:
          0.055848222 = weight(_text_:history in 236) [ClassicSimilarity], result of:
            0.055848222 = score(doc=236,freq=2.0), product of:
              0.21731828 = queryWeight, product of:
                4.6519823 = idf(docFreq=1146, maxDocs=44218)
                0.0467152 = queryNorm
              0.25698814 = fieldWeight in 236, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.6519823 = idf(docFreq=1146, maxDocs=44218)
                0.0390625 = fieldNorm(doc=236)
          0.04475466 = weight(_text_:22 in 236) [ClassicSimilarity], result of:
            0.04475466 = score(doc=236,freq=4.0), product of:
              0.16358867 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0467152 = queryNorm
              0.27358043 = fieldWeight in 236, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=236)
      0.5 = coord(2/4)
    
    Abstract
    The Decimal Classification Editorial Policy Committee (EPC) held its Meeting 117 at the Library Dec. 3-5, 2001, with chair Andrea Stamm (Northwestern University) presiding. Through its actions at this meeting, significant progress was made toward publication of DDC unabridged Edition 22 in mid-2003 and Abridged Edition 14 in early 2004. For Edition 22, the committee approved the revisions to two major segments of the classification: Table 2 through 55 Iran (the first half of the geographic area table) and 900 History and geography. EPC approved updates to several parts of the classification it had already considered: 004-006 Data processing, Computer science; 340 Law; 370 Education; 510 Mathematics; 610 Medicine; Table 3 issues concerning treatment of scientific and technical themes, with folklore, arts, and printing ramifications at 398.2 - 398.3, 704.94, and 758; Table 5 and Table 6 Ethnic Groups and Languages (portions concerning American native peoples and languages); and tourism issues at 647.9 and 790. Reports on the results of testing the approved 200 Religion and 305-306 Social groups schedules were received, as was a progress report on revision work for the manual being done by Ross Trotter (British Library, retired). Revisions for Abridged Edition 14 that received committee approval included 010 Bibliography; 070 Journalism; 150 Psychology; 370 Education; 380 Commerce, communications, and transportation; 621 Applied physics; 624 Civil engineering; and 629.8 Automatic control engineering. At the meeting the committee received print versions of _DC&_ numbers 4 and 5. Primarily for the use of Dewey translators, these cumulations list changes, substantive and cosmetic, to DDC Edition 21 and Abridged Edition 13 for the period October 1999 - December 2001. EPC will hold its Meeting 118 at the Library May 15-17, 2002.
  5. Veltman, K.H.: Towards a Semantic Web for culture 0.03
    0.029475685 = product of:
      0.05895137 = sum of:
        0.020258585 = weight(_text_:science in 4040) [ClassicSimilarity], result of:
          0.020258585 = score(doc=4040,freq=4.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.16463245 = fieldWeight in 4040, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.03125 = fieldNorm(doc=4040)
        0.038692784 = product of:
          0.07738557 = sum of:
            0.07738557 = weight(_text_:history in 4040) [ClassicSimilarity], result of:
              0.07738557 = score(doc=4040,freq=6.0), product of:
                0.21731828 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.0467152 = queryNorm
                0.35609323 = fieldWeight in 4040, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4040)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Today's semantic web deals with meaning in a very restricted sense and offers static solutions. This is adequate for many scientific, technical purposes and for business transactions requiring machine-to-machine communication, but does not answer the needs of culture. Science, technology and business are concerned primarily with the latest findings, the state of the art, i.e. the paradigm or dominant world-view of the day. In this context, history is considered non-essential because it deals with things that are out of date. By contrast, culture faces a much larger challenge, namely, to re-present changes in ways of knowing; changing meanings in different places at a given time (synchronically) and over time (diachronically). Culture is about both objects and the commentaries on them; about a cumulative body of knowledge; about collective memory and heritage. Here, history plays a central role and older does not mean less important or less relevant. Hence, a Leonardo painting that is 400 years old, or a Greek statue that is 2500 years old, typically have richer commentaries and are often more valuable than their contemporary equivalents. In this context, the science of meaning (semantics) is necessarily much more complex than semantic primitives. A semantic web in the cultural domain must enable us to trace how meaning and knowledge organisation have evolved historically in different cultures. This paper examines five issues to address this challenge: 1) different world-views (i.e. a shift from substance to function and from ontology to multiple ontologies); 2) developments in definitions and meaning; 3) distinctions between words and concepts; 4) new classes of relations; and 5) dynamic models of knowledge organisation. These issues reveal that historical dimensions of cultural diversity in knowledge organisation are also central to classification of biological diversity. New ways are proposed of visualizing knowledge using a time/space horizon to distinguish between universals and particulars. It is suggested that new visualization methods make possible a history of questions as well as of answers, thus enabling dynamic access to cultural and historical dimensions of knowledge. Unlike earlier media, which were limited to recording factual dimensions of collective memory, digital media enable us to explore theories, ways of perceiving, ways of knowing; to enter into other mindsets and world-views and thus to attain novel insights and new levels of tolerance. Some practical consequences are outlined.
  6. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.03
    0.028802473 = product of:
      0.057604946 = sum of:
        0.035452522 = weight(_text_:science in 40) [ClassicSimilarity], result of:
          0.035452522 = score(doc=40,freq=4.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.2881068 = fieldWeight in 40, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0546875 = fieldNorm(doc=40)
        0.022152426 = product of:
          0.04430485 = sum of:
            0.04430485 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
              0.04430485 = score(doc=40,freq=2.0), product of:
                0.16358867 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0467152 = queryNorm
                0.2708308 = fieldWeight in 40, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=40)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Conclusion: There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their various arenas. They have strong brand recognition, a head start in development and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape will look like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
    Object
    Web of Science
  7. Hanken, J.: Organizing the world in the age of DNA (2007) 0.03
    0.027498204 = product of:
      0.05499641 = sum of:
        0.021487473 = weight(_text_:science in 1213) [ClassicSimilarity], result of:
          0.021487473 = score(doc=1213,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.17461908 = fieldWeight in 1213, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=1213)
        0.033508934 = product of:
          0.06701787 = sum of:
            0.06701787 = weight(_text_:history in 1213) [ClassicSimilarity], result of:
              0.06701787 = score(doc=1213,freq=2.0), product of:
                0.21731828 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.0467152 = queryNorm
                0.3083858 = fieldWeight in 1213, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1213)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    By meticulously categorizing life, maybe there's a chance we can save some species from extinction. That's perhaps the source of the excitement that erupted online when a team of leading biologists and software engineers announced the Encyclopedia of Life project in early May. The scientists plan to create an online catalog of the genome, geographic distribution, phylogenetic position, habitat, and ecological relationships of all 1.8 million known species on the planet. To do it, they'll use the latest technologies and scientific methods -- but they'll also use Carl Linnaeus' 272-year-old taxonomy system. James Hanken, director of the Museum of Comparative Zoology at Harvard University, where he teaches evolutionary biology, will lead Harvard's role in the project. The author of over 100 scientific publications, he's also an accomplished photographer, with his work appearing in Natural History, Audubon and Playboy. Hanken chatted with Wired News about Linnaeus' legacy in an age of genetic discovery that the father of taxonomy could not have imagined -- and the movement to uproot the Linnaean system.
    Source
    http://www.wired.com/print/science/planetearth/news/2007/05/hanken_qanda
  8. Ishikawa, S.: ¬A final solution to the mind-body problem by quantum language (2017) 0.03
    0.027498204 = product of:
      0.05499641 = sum of:
        0.021487473 = weight(_text_:science in 3666) [ClassicSimilarity], result of:
          0.021487473 = score(doc=3666,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.17461908 = fieldWeight in 3666, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=3666)
        0.033508934 = product of:
          0.06701787 = sum of:
            0.06701787 = weight(_text_:history in 3666) [ClassicSimilarity], result of:
              0.06701787 = score(doc=3666,freq=2.0), product of:
                0.21731828 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.0467152 = queryNorm
                0.3083858 = fieldWeight in 3666, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3666)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Recently we proposed "quantum language", which was not only characterized as the metaphysical and linguistic turn of quantum mechanics but also the linguistic turn of Descartes = Kant epistemology. And further we believe that quantum language is the only scientifically successful theory in dualistic idealism. If this turn is regarded as progress in the history of western philosophy (i.e., if "philosophical progress" is defined by "approaching to quantum language"), we should study the linguistic mind-body problem more than the epistemological mind-body problem. In this paper, we show that to solve the mind-body problem and to propose "measurement axiom" in quantum language are equivalent. Since our approach is always within dualistic idealism, we believe that our linguistic answer is the only true solution to the mind-body problem.
    Source
    Journal of quantum information science. 7(2017) no.2, p.48 [http://www.scirp.org/journal/PaperInformation.aspx?PaperID=76391]
  9. Kratochwil, F.; Peltonen, H.: Constructivism (2022) 0.03
    0.026508883 = product of:
      0.053017765 = sum of:
        0.014324983 = weight(_text_:science in 829) [ClassicSimilarity], result of:
          0.014324983 = score(doc=829,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.11641272 = fieldWeight in 829, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.03125 = fieldNorm(doc=829)
        0.038692784 = product of:
          0.07738557 = sum of:
            0.07738557 = weight(_text_:history in 829) [ClassicSimilarity], result of:
              0.07738557 = score(doc=829,freq=6.0), product of:
                0.21731828 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.0467152 = queryNorm
                0.35609323 = fieldWeight in 829, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.03125 = fieldNorm(doc=829)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Constructivism in the social sciences has known several ups and downs over the last decades. It was successful rather early in sociology but hotly contested in International Politics/Relations (IR). Oddly enough, just at the moment it made important inroads into the research agenda and became accepted by the mainstream, the enthusiasm for it waned. Many constructivists, as did mainstream scholars, moved from "grand theory" or even "meta-theory" toward "normal science," or experimented with other (eclectic) approaches, of which the turns to practices, to emotions, to new materialism, to the visual, and to the queer are some of the latest manifestations. In a way, constructivism was "successful," on the one hand, by introducing norms, norm-dynamics, and diffusion; the role of new actors in world politics; and the changing role of institutions into the debates, while losing, on the other hand, much of its critical potential. The latter survived only on the fringes, and in Europe more than in the United States. In IR, curiously, constructivism, which was rooted in various European traditions (philosophy, history, linguistics, social analysis), was originally introduced in Europe via the disciplinary discussions taking place in the United States. Yet, especially in its critical version, it has found a more conducive environment in Europe than in the United States.
    In the United States, soon after its emergence, constructivism became "mainstreamed" by having its analysis of norms reduced to "variable research." In such research, positive examples of, for instance, the spread of norms were included, but strangely, empirical evidence of counterexamples of norm "deaths" (preventive strikes, unlawful combatants, drone strikes, extrajudicial killings) was not. The elective affinity of constructivism and humanitarianism seemed to have transformed the former into the Enlightenment project of "progress." Even Kant was finally pressed into the service of "liberalism" in the U.S. discussion, and his notion of the "practical interest of reason" morphed into the political project of an "end of history." This "slant" has prevented a serious conceptual engagement with the "history" of law and (inter-)national politics and the epistemological problems that are raised thereby. This bowdlerization of constructivism is further buttressed by the fact that in the "knowledge industry" none of the "leading" U.S. departments has a constructivist on board, ensuring thereby the narrowness of conceptual and methodological choices to which the future "professionals" are exposed. This article contextualizes constructivism and its emergence within a changing world and within the evolution of the discipline. The aim is not to provide a definition or a typology of constructivism, since such efforts go against the critical dimension of constructivism. An application of this critique to constructivism itself leads to a reflection on truth, knowledge, and the need for (re-)orientation.
  10. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.03
    0.026248364 = product of:
      0.104993455 = sum of:
        0.104993455 = sum of:
          0.06701787 = weight(_text_:history in 1289) [ClassicSimilarity], result of:
            0.06701787 = score(doc=1289,freq=2.0), product of:
              0.21731828 = queryWeight, product of:
                4.6519823 = idf(docFreq=1146, maxDocs=44218)
                0.0467152 = queryNorm
              0.3083858 = fieldWeight in 1289, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.6519823 = idf(docFreq=1146, maxDocs=44218)
                0.046875 = fieldNorm(doc=1289)
          0.037975587 = weight(_text_:22 in 1289) [ClassicSimilarity], result of:
            0.037975587 = score(doc=1289,freq=2.0), product of:
              0.16358867 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0467152 = queryNorm
              0.23214069 = fieldWeight in 1289, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1289)
      0.25 = coord(1/4)
    
    Abstract
    QVIZ will research and create a framework for visualizing and querying archival resources by a time-space interface based on maps and emergent knowledge structures. The framework will also integrate social software, such as wikis, in order to utilize knowledge in existing and new communities of practice. QVIZ will lead to improved information sharing and knowledge creation, easier access to information in a user-adapted context and innovative ways of exploring and visualizing materials over time, between countries and other administrative units. The common European framework for sharing and accessing archival information provided by the QVIZ project will open a considerably larger commercial market based on archival materials as well as a richer understanding of European history.
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  11. Freyberg, L.: ¬Die Lesbarkeit der Welt : Rezension zu 'The Concept of Information in Library and Information Science. A Field in Search of Its Boundaries: 8 Short Comments Concerning Information'. In: Cybernetics and Human Knowing. Vol. 22 (2015), 1, 57-80. Kurzartikel von Luciano Floridi, Søren Brier, Torkild Thellefsen, Martin Thellefsen, Bent Sørensen, Birger Hjørland, Brenda Dervin, Ken Herold, Per Hasle und Michael Buckland (2016) 0.02
    0.022345083 = product of:
      0.044690166 = sum of:
        0.032031637 = weight(_text_:science in 3335) [ClassicSimilarity], result of:
          0.032031637 = score(doc=3335,freq=10.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.26030678 = fieldWeight in 3335, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.03125 = fieldNorm(doc=3335)
        0.01265853 = product of:
          0.02531706 = sum of:
            0.02531706 = weight(_text_:22 in 3335) [ClassicSimilarity], result of:
              0.02531706 = score(doc=3335,freq=2.0), product of:
                0.16358867 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0467152 = queryNorm
                0.15476047 = fieldWeight in 3335, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3335)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    It is time once again to update the concept of "information", or at least to deliver a report on its status quo. Information is the central object of information science and one of the most important research topics of library and information science. Surprisingly, however, a continuous discourse comparable to the critical engagement with, and associated updating of, concepts in the humanities does not take place consistently, at least in the German-speaking world1. In the interest of basic theoretical research and of working out a shared conceptual matrix, this would certainly be desirable. Just last year, the journal "Cybernetics and Human Knowing", edited by Søren Brier (see "The foundation of LIS in information science and semiotics"2 as well as "Semiotics in Information Science. An Interview with Søren Brier on the application of semiotic theories and the epistemological problem of a transdisciplinary Information Science"3), published eight readable statements on the concept of information by noted philosophers and library and information scientists. Unfortunately, the journal "Cybernetics & Human Knowing" is difficult to access in Germany, since it is not an open-access journal and is subscribed to by only eight German libraries.4 Given this poor availability, it seems sensible to offer a detailed review of these eight short articles here.
    The journal, which according to its subtitle deals with "second order cybernetics, autopoiesis and cyber-semiotics", has existed as a print edition since 1992/93. Since 1998 (volume 5, issue 1) it has also been offered electronically, for a fee and as part of a package, through the publisher Imprint Academic in Exeter. Because of this orientation, which could be regarded as a theoretical contribution to the digital humanities (avant la lettre), the concept of information is treated there regularly. In particular, the phenomenologically and mathematically grounded semiotics of Charles Sanders Peirce comes up again and again in this context. The connection to practice, above all in the field of library and information science (LIS), always plays a large role here, something that can also be observed in Brier himself, who in his major work "Cybersemiotics" applies the Peircean sign categories, among other things, to the librarian's activity of indexing.5 Issue 1/2015 of the journal now asks "What underlines Information?" and contains, among other things, articles on the Chinese philosopher Wu Kun's draft of a philosophy of information as well as on Peirce and Spencer Brown. The eight short articles on the concept of information in library and information science were put together by the Thellefsen brothers (Torkild and Martin) together with Bent Sørensen, who themselves also jointly wrote one of the commentaries.
  12. Metrics in research : for better or worse? (2016) 0.02
    0.021298937 = product of:
      0.042597875 = sum of:
        0.020258585 = weight(_text_:science in 3312) [ClassicSimilarity], result of:
          0.020258585 = score(doc=3312,freq=4.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.16463245 = fieldWeight in 3312, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.03125 = fieldNorm(doc=3312)
        0.022339288 = product of:
          0.044678576 = sum of:
            0.044678576 = weight(_text_:history in 3312) [ClassicSimilarity], result of:
              0.044678576 = score(doc=3312,freq=2.0), product of:
                0.21731828 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.0467152 = queryNorm
                0.20559052 = fieldWeight in 3312, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3312)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    Contents: Metrics in Research - For better or worse? / Jozica Dolenc, Philippe Hünenberger, Oliver Renn - A brief visual history of research metrics / Oliver Renn, Jozica Dolenc, Joachim Schnabl - Bibliometry: The wizard of O's / Philippe Hünenberger - The grip of bibliometrics - A student perspective / Matthias Tinzl - Honesty and transparency to taxpayers is the long-term fundament for stable university funding / Wendelin J. Stark - Beyond metrics: Managing the performance of your work / Charlie Rapple - Scientific profiling instead of bibliometrics: Key performance indicators of the future / Rafael Ball - More knowledge, less numbers / Carl Philipp Rosenau - Do we really need BIBLIO-metrics to evaluate individual researchers? / Rüdiger Mutz - Using research metrics responsibly and effectively as a researcher / Peter I. Darroch, Lisa H. Colledge - Metrics in research: More (valuable) questions than answers / Urs Hugentobler - Publication of research results: Use and abuse / Wilfred F. van Gunsteren - Wanted: Transparent algorithms, interpretation skills, common sense / Eva E. Wille - Impact factors, the h-index, and citation hype - Metrics in research from the point of view of a journal editor / Renato Zenobi - Rashomon or metrics in a publisher's world / Gabriella Karger - The impact factor and I: A love-hate relationship / Jean-Christophe Leroux - Personal experiences bringing altmetrics to the academic market / Ben McLeish - Fatally attracted by numbers? / Oliver Renn - On computable numbers / Gerd Folkers, Laura Folkers - ScienceMatters - Single observation science publishing and linking observations to create an internet of science / Lawrence Rajendran.
  13. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.02
    0.020573197 = product of:
      0.041146394 = sum of:
        0.02532323 = weight(_text_:science in 4550) [ClassicSimilarity], result of:
          0.02532323 = score(doc=4550,freq=4.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.20579056 = fieldWeight in 4550, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4550)
        0.015823163 = product of:
          0.031646326 = sum of:
            0.031646326 = weight(_text_:22 in 4550) [ClassicSimilarity], result of:
              0.031646326 = score(doc=4550,freq=2.0), product of:
                0.16358867 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0467152 = queryNorm
                0.19345059 = fieldWeight in 4550, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4550)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods have been proposed for harvesting publications metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library for populating CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to enrich our metadata as well. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically, and only requires the user's ability to access Web of Science and the Python program, while still remaining flexible enough for local customizations.
    Date
    10.11.2018 16:27:22
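The batch-import workflow described above (CrossRef metadata pulled via the Habanero Python library into a CSV for repository upload) can be sketched roughly as follows. This is not the authors' actual script: the DOI list, the Dublin-Core-style column names and the output file name are illustrative assumptions.

```python
# pip install habanero
import csv
from habanero import Crossref  # Python client for the CrossRef REST API

# Hypothetical input: DOIs gathered during mediated copyright review.
dois = ["10.1371/journal.pone.0033693"]

cr = Crossref()
rows = []
for doi in dois:
    work = cr.works(ids=doi)["message"]  # one CrossRef record per DOI
    issued = work.get("issued", {}).get("date-parts", [[None]])[0]
    rows.append({
        "dc.title": (work.get("title") or [""])[0],
        "dc.identifier.doi": work.get("DOI", doi),
        "dc.date.issued": "-".join(str(p) for p in issued if p is not None),
        "dc.contributor.author": "; ".join(
            f"{a.get('family', '')}, {a.get('given', '')}"
            for a in work.get("author", [])),
    })

# Write a CSV in the shape a repository batch importer might expect.
with open("batch_import.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.DictWriter(fh, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```

Enriching these rows from a Web of Science TSV export, as the abstract mentions, would then be a matter of joining on DOI before writing the CSV.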
  14. Place, E.: Internationale Zusammenarbeit bei Internet Subject Gateways (1999) 0.02
    0.020237632 = product of:
      0.040475264 = sum of:
        0.021487473 = weight(_text_:science in 4189) [ClassicSimilarity], result of:
          0.021487473 = score(doc=4189,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.17461908 = fieldWeight in 4189, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=4189)
        0.018987793 = product of:
          0.037975587 = sum of:
            0.037975587 = weight(_text_:22 in 4189) [ClassicSimilarity], result of:
              0.037975587 = score(doc=4189,freq=2.0), product of:
                0.16358867 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0467152 = queryNorm
                0.23214069 = fieldWeight in 4189, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4189)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    A considerable number of libraries in Europe are engaged in developing Internet subject gateways, a service intended to help users find high-quality Internet resources. Subject gateways such as SOSIG (The Social Science Information Gateway) have been available on the Internet for several years and offer an alternative to Internet search engines such as AltaVista and directories such as Yahoo. Characteristically, subject gateways draw on the skills, procedures and standards of the international library world and apply them to information from the Internet. This paper therefore emphasises that librarians should ideally play a leading role in building search services for Internet resources and that information gateways are one way of doing so. It outlines some of the subject gateway initiatives in Europe and describes the tools and technologies developed by the DESIRE project to support the development of new gateways in other countries. It also discusses how IMesh, a group for gateways from around the world, is pursuing an international strategy for gateways and attempting to develop standards for implementing this project.
    Date
    22. 6.2002 19:35:09
  15. Atran, S.; Medin, D.L.; Ross, N.: Evolution and devolution of knowledge : a tale of two biologies (2004) 0.02
    0.020237632 = product of:
      0.040475264 = sum of:
        0.021487473 = weight(_text_:science in 479) [ClassicSimilarity], result of:
          0.021487473 = score(doc=479,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.17461908 = fieldWeight in 479, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=479)
        0.018987793 = product of:
          0.037975587 = sum of:
            0.037975587 = weight(_text_:22 in 479) [ClassicSimilarity], result of:
              0.037975587 = score(doc=479,freq=2.0), product of:
                0.16358867 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0467152 = queryNorm
                0.23214069 = fieldWeight in 479, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=479)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Anthropological inquiry suggests that all societies classify animals and plants in similar ways. Paradoxically, in the same cultures that have seen large advances in biological science, citizenry's practical knowledge of nature has dramatically diminished. Here we describe historical, cross-cultural and developmental research on how people ordinarily conceptualize organic nature (folkbiology), concentrating on cognitive consequences associated with knowledge devolution. We show that results on psychological studies of categorization and reasoning from "standard populations" fail to generalize to humanity at large. Usual populations (Euro-American college students) have impoverished experience with nature, which yields misleading results about knowledge acquisition and the ontogenetic relationship between folkbiology and folkpsychology. We also show that groups living in the same habitat can manifest strikingly distinct behaviors, cognitions and social relations relative to it. This has novel implications for environmental decision making and management, including commons problems.
    Date
    23. 1.2022 10:22:18
  16. Cohen, D.J.: From Babel to knowledge : data mining large digital collections (2006) 0.02
    0.018332135 = product of:
      0.03666427 = sum of:
        0.014324983 = weight(_text_:science in 1178) [ClassicSimilarity], result of:
          0.014324983 = score(doc=1178,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.11641272 = fieldWeight in 1178, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.03125 = fieldNorm(doc=1178)
        0.022339288 = product of:
          0.044678576 = sum of:
            0.044678576 = weight(_text_:history in 1178) [ClassicSimilarity], result of:
              0.044678576 = score(doc=1178,freq=2.0), product of:
                0.21731828 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.0467152 = queryNorm
                0.20559052 = fieldWeight in 1178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In Jorge Luis Borges's curious short story The Library of Babel, the narrator describes an endless collection of books stored from floor to ceiling in a labyrinth of countless hexagonal rooms. The pages of the library's books seem to contain random sequences of letters and spaces; occasionally a few intelligible words emerge in the sea of paper and ink. Nevertheless, readers diligently, and exasperatingly, scan the shelves for coherent passages. The narrator himself has wandered numerous rooms in search of enlightenment, but with resignation he simply awaits his death and burial - which Borges explains (with signature dark humor) consists of being tossed unceremoniously over the library's banister. Borges's nightmare, of course, is a cursed vision of the research methods of disciplines such as literature, history, and philosophy, where the careful reading of books, one after the other, is supposed to lead inexorably to knowledge and understanding. Computer scientists would approach Borges's library far differently. Employing the information theory that forms the basis for search engines and other computerized techniques for assessing in one fell swoop large masses of documents, they would quickly realize the collection's incoherence through sampling and statistical methods - and wisely start looking for the library's exit. These computational methods, which allow us to find patterns, determine relationships, categorize documents, and extract information from massive corpuses, will form the basis for new tools for research in the humanities and other disciplines in the coming decade. For the past three years I have been experimenting with how to provide such end-user tools - that is, tools that harness the power of vast electronic collections while hiding much of their complicated technical plumbing. In particular, I have made extensive use of the application programming interfaces (APIs) the leading search engines provide for programmers to query their databases directly (from server to server without using their web interfaces). In addition, I have explored how one might extract information from large digital collections, from the well-curated lexicographic database WordNet to the democratic (and poorly curated) online reference work Wikipedia. While processing these digital corpuses is currently an imperfect science, even now useful tools can be created by combining various collections and methods for searching and analyzing them. And more importantly, these nascent services suggest a future in which information can be gleaned from, and sense can be made out of, even imperfect digital libraries of enormous scale. A brief examination of two approaches to data mining large digital collections hints at this future, while also providing some lessons about how to get there.
  17. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2009) 0.02
    0.016864695 = product of:
      0.03372939 = sum of:
        0.017906228 = weight(_text_:science in 3628) [ClassicSimilarity], result of:
          0.017906228 = score(doc=3628,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.1455159 = fieldWeight in 3628, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3628)
        0.015823163 = product of:
          0.031646326 = sum of:
            0.031646326 = weight(_text_:22 in 3628) [ClassicSimilarity], result of:
              0.031646326 = score(doc=3628,freq=2.0), product of:
                0.16358867 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0467152 = queryNorm
                0.19345059 = fieldWeight in 3628, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3628)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: To develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems. Design/methodology/approach: Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies, UKAT and the ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework. Findings: The major findings showed that, given the large variety of terminology resources distributed on the web, the proposed middleware service is essential to integrate the different terminology resources technically and semantically in order to facilitate subject cross-browsing. A set of recommendations is also made outlining the important approaches and features that support such a cross-browsing middleware service.
    Content
    This paper is a pre-print version presented at the ISKO UK 2009 conference, 22-23 June, prior to peer review and editing. For published proceedings see special issue of Aslib Proceedings journal.
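The "spine" arrangement described in this record, in which each local vocabulary is mapped onto DDC numbers and cross-browsing runs from a term in one vocabulary through the shared DDC class to terms in the others, can be illustrated with a minimal sketch. The mappings and function names below are invented for illustration and are not the project's actual data or code.

```python
from collections import defaultdict

# Illustrative local-vocabulary-to-DDC mappings (DDC acts as the spine).
ukat_to_ddc = {"Computer programming": "005.1", "Operating systems": "005.43"}
acm_to_ddc = {"D.1 Programming Techniques": "005.1", "D.4 Operating Systems": "005.43"}

def build_spine(*vocab_maps):
    """Invert each vocabulary-to-DDC map into a single DDC-keyed index."""
    spine = defaultdict(list)
    for name, mapping in vocab_maps:
        for term, ddc in mapping.items():
            spine[ddc].append((name, term))
    return spine

spine = build_spine(("UKAT", ukat_to_ddc), ("ACM CCS", acm_to_ddc))

def cross_browse(term, source_map):
    """Return terms from all vocabularies that share the term's DDC class."""
    ddc = source_map.get(term)
    return spine.get(ddc, []) if ddc else []

print(cross_browse("Computer programming", ukat_to_ddc))
# [('UKAT', 'Computer programming'), ('ACM CCS', 'D.1 Programming Techniques')]
```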
  18. Open MIND (2015) 0.02
    0.016864695 = product of:
      0.03372939 = sum of:
        0.017906228 = weight(_text_:science in 1648) [ClassicSimilarity], result of:
          0.017906228 = score(doc=1648,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.1455159 = fieldWeight in 1648, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1648)
        0.015823163 = product of:
          0.031646326 = sum of:
            0.031646326 = weight(_text_:22 in 1648) [ClassicSimilarity], result of:
              0.031646326 = score(doc=1648,freq=2.0), product of:
                0.16358867 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0467152 = queryNorm
                0.19345059 = fieldWeight in 1648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1648)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This is an edited collection of 39 original papers and as many commentaries and replies. The target papers and replies were written by senior members of the MIND Group, while all commentaries were written by junior group members. All papers and commentaries have undergone a rigorous process of anonymous peer review, during which the junior members of the MIND Group acted as reviewers. The final versions of all the target articles, commentaries and replies have undergone additional editorial review. Besides offering a cross-section of ongoing, cutting-edge research in philosophy and cognitive science, this collection is also intended to be a free electronic resource for teaching. It therefore also contains a selection of online supporting materials, pointers to video and audio files and to additional free material supplied by the 92 authors represented in this volume. We will add more multimedia material, a searchable literature database, and tools to work with the online version in the future. All contributions to this collection are strictly open access. They can be downloaded, printed, and reproduced by anyone.
    Date
    27. 1.2015 11:48:22
  19. Junger, U.; Schwens, U.: ¬Die inhaltliche Erschließung des schriftlichen kulturellen Erbes auf dem Weg in die Zukunft : Automatische Vergabe von Schlagwörtern in der Deutschen Nationalbibliothek (2017) 0.02
    0.016864695 = product of:
      0.03372939 = sum of:
        0.017906228 = weight(_text_:science in 3780) [ClassicSimilarity], result of:
          0.017906228 = score(doc=3780,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.1455159 = fieldWeight in 3780, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3780)
        0.015823163 = product of:
          0.031646326 = sum of:
            0.031646326 = weight(_text_:22 in 3780) [ClassicSimilarity], result of:
              0.031646326 = score(doc=3780,freq=2.0), product of:
                0.16358867 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0467152 = queryNorm
                0.19345059 = fieldWeight in 3780, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3780)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    We live in the 21st century, and much of what would have been dismissed as science fiction a hundred or even fifty years ago has become reality. Space probes fly to Mars, run experiments there and send data back to Earth. Robots are used for routine tasks, for example in industry or in medicine. Digitisation, artificial intelligence and automated processes have become almost indispensable parts of everyday life. Learning algorithms underlie many of these processes. The advancing digital transformation is global and encompasses all areas of life and work: the economy, society and politics. It opens up new opportunities from which libraries also benefit. The sharp rise in digital publications, which make up an important and proportionally ever larger part of the cultural heritage, should prompt libraries to take up and exploit these opportunities actively. The analysability of digital content, for instance through text and data mining (TDM), and the development of technical methods by which content can be interlinked and related semantically, create room to rethink library subject indexing as well. The German National Library (DNB) has therefore been working for several years on the question of how the processes for the subject cataloguing of media works can be improved and supported by machines. In doing so, it is in regular collegial exchange with other libraries that are also actively engaged with this question, as well as with European national libraries that are in turn interested in the topic and in the DNB's experience. As a national library with extensive holdings of digital publications, the DNB has also built up expertise in digital long-term preservation and is valued within its network of partners as a competent interlocutor.
    Date
    19. 8.2017 9:24:22
  20. Jörs, B.: ¬Ein kleines Fach zwischen "Daten" und "Wissen" II : Anmerkungen zum (virtuellen) "16th International Symposium of Information Science" (ISI 2021, Regensburg) (2021) 0.02
    0.016864695 = product of:
      0.03372939 = sum of:
        0.017906228 = weight(_text_:science in 330) [ClassicSimilarity], result of:
          0.017906228 = score(doc=330,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.1455159 = fieldWeight in 330, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0390625 = fieldNorm(doc=330)
        0.015823163 = product of:
          0.031646326 = sum of:
            0.031646326 = weight(_text_:22 in 330) [ClassicSimilarity], result of:
              0.031646326 = score(doc=330,freq=2.0), product of:
                0.16358867 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0467152 = queryNorm
                0.19345059 = fieldWeight in 330, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=330)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Nothing left but information ethics, information literacy and information assessment? Yet it is precisely this sealing-off from other disciplines that reinforces the isolation of the "small discipline" of information science within the scientific community. What remain to it as its last "independent" peripheral research areas are only those that Wolf Rauch, as keynote speaker, already named in his introductory, historical-genetic lecture on the state of information science at ISI 2021: "If academic information science (at least in Europe) hardly stands a chance of pushing its way back to the forefront of development in the area of building systems and applications, there nevertheless remain fields in which its contribution will be urgently needed in the coming phase of development: information ethics, information literacy, information assessment" (Wolf Rauch: Was aus der Informationswissenschaft geworden ist; in: Thomas Schmidt; Christian Wolff (Eds): Information between Data and Knowledge. Schriften zur Informationswissenschaft 74, Regensburg, 2021, pp. 20-22; see also the reception of Rauch's contribution by Johannes Elia Panskus, Was aus der Informationswissenschaft geworden ist. Sie ist in der Realität angekommen, in: Open Password, 17 March 2021). Is that all? Sobering.

Languages

  • e 250
  • d 122
  • el 2
  • i 2
  • a 1
  • f 1
  • nl 1
  • sp 1

Types

  • a 206
  • i 12
  • r 7
  • x 7
  • m 6
  • s 6
  • b 4
  • p 2
  • n 1
  • More… Less…