Search (193 results, page 1 of 10)

  • Active filter: theme_ss:"Information"
  1. Gödert, W.; Lepsky, K.: Informationelle Kompetenz : ein humanistischer Entwurf (2019) 0.39
    0.39483398 = product of:
      0.7107011 = sum of:
        0.068223014 = product of:
          0.20466904 = sum of:
            0.20466904 = weight(_text_:3a in 5955) [ClassicSimilarity], result of:
              0.20466904 = score(doc=5955,freq=2.0), product of:
                0.31214407 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.036818076 = queryNorm
                0.65568775 = fieldWeight in 5955, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5955)
          0.33333334 = coord(1/3)
        0.20466904 = weight(_text_:2f in 5955) [ClassicSimilarity], result of:
          0.20466904 = score(doc=5955,freq=2.0), product of:
            0.31214407 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.036818076 = queryNorm
            0.65568775 = fieldWeight in 5955, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5955)
        0.028470935 = weight(_text_:data in 5955) [ClassicSimilarity], result of:
          0.028470935 = score(doc=5955,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.24455236 = fieldWeight in 5955, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5955)
        0.20466904 = weight(_text_:2f in 5955) [ClassicSimilarity], result of:
          0.20466904 = score(doc=5955,freq=2.0), product of:
            0.31214407 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.036818076 = queryNorm
            0.65568775 = fieldWeight in 5955, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5955)
        0.20466904 = weight(_text_:2f in 5955) [ClassicSimilarity], result of:
          0.20466904 = score(doc=5955,freq=2.0), product of:
            0.31214407 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.036818076 = queryNorm
            0.65568775 = fieldWeight in 5955, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5955)
      0.5555556 = coord(5/9)
    
    Footnote
    Reviewed in: Philosophisch-ethische Rezensionen, 09.11.2019 (Jürgen Czogalla), at: https://philosophisch-ethische-rezensionen.de/rezension/Goedert1.html. In: B.I.T. online 23(2020) H.3, S.345-347 (W. Sühl-Strohmenger) [at: https://www.b-i-t-online.de/heft/2020-03-rezensionen.pdf]. In: Open Password Nr. 805, 14.08.2020 (H.-C. Hobohm) [at: https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzE0MywiOGI3NjZkZmNkZjQ1IiwwLDAsMTMxLDFd].
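The nested score breakdowns in this listing are Lucene "explain" output for the ClassicSimilarity (TF-IDF) ranking. As a minimal sketch of how the printed numbers combine (per-term score = queryWeight × fieldWeight, clause scores summed and multiplied by the coordination factor), the following Python snippet reproduces the figures from the first record; the function name is mine, not part of any library.

```python
import math

def classic_term_score(freq, idf, query_norm, field_norm):
    """One term clause of a Lucene ClassicSimilarity explain tree.

    queryWeight = idf * queryNorm
    fieldWeight = sqrt(freq) * idf * fieldNorm   (tf is the square root of the raw frequency)
    clause score = queryWeight * fieldWeight
    """
    query_weight = idf * query_norm                     # 0.31214407 above
    field_weight = math.sqrt(freq) * idf * field_norm   # 0.65568775 above
    return query_weight * field_weight                  # 0.20466904 above

# Values taken from the explain tree of doc 5955 (term "_text_:2f").
clause = classic_term_score(freq=2.0, idf=8.478011,
                            query_norm=0.036818076, field_norm=0.0546875)
print(clause)  # ~0.20466904

# The document score sums the matching clauses and applies coord(matching/total),
# here 5 of 9 query clauses:
doc_score = (0.068223014 + 0.20466904 + 0.028470935
             + 0.20466904 + 0.20466904) * (5.0 / 9.0)
print(doc_score)  # ~0.3948, matching the 0.39483398 shown above up to rounding
```

Newer Lucene versions default to BM25 rather than this classic similarity, so the sketch only mirrors the explain trees shown here.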
  2. Malsburg, C. von der: The correlation theory of brain function (1981) 0.31
    0.31091207 = product of:
      0.5596417 = sum of:
        0.048730727 = product of:
          0.14619218 = sum of:
            0.14619218 = weight(_text_:3a in 76) [ClassicSimilarity], result of:
              0.14619218 = score(doc=76,freq=2.0), product of:
                0.31214407 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.036818076 = queryNorm
                0.46834838 = fieldWeight in 76, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=76)
          0.33333334 = coord(1/3)
        0.14619218 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.14619218 = score(doc=76,freq=2.0), product of:
            0.31214407 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.036818076 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
        0.14619218 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.14619218 = score(doc=76,freq=2.0), product of:
            0.31214407 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.036818076 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
        0.07233446 = weight(_text_:germany in 76) [ClassicSimilarity], result of:
          0.07233446 = score(doc=76,freq=2.0), product of:
            0.21956629 = queryWeight, product of:
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.036818076 = queryNorm
            0.32944247 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
        0.14619218 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.14619218 = score(doc=76,freq=2.0), product of:
            0.31214407 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.036818076 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
      0.5555556 = coord(5/9)
    
    Content
    Originally published July 1981 as Internal Report 81-2, Dept. of Neurobiology, Max-Planck-Institute for Biophysical Chemistry, 3400 Göttingen, West Germany.
    Source
    http://cogprints.org/1380/1/vdM_correlation.pdf
  3. Donsbach, W.: Wahrheit in den Medien : über den Sinn eines methodischen Objektivitätsbegriffes (2001) 0.22
    0.216581 = product of:
      0.48730725 = sum of:
        0.048730727 = product of:
          0.14619218 = sum of:
            0.14619218 = weight(_text_:3a in 5895) [ClassicSimilarity], result of:
              0.14619218 = score(doc=5895,freq=2.0), product of:
                0.31214407 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.036818076 = queryNorm
                0.46834838 = fieldWeight in 5895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5895)
          0.33333334 = coord(1/3)
        0.14619218 = weight(_text_:2f in 5895) [ClassicSimilarity], result of:
          0.14619218 = score(doc=5895,freq=2.0), product of:
            0.31214407 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.036818076 = queryNorm
            0.46834838 = fieldWeight in 5895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5895)
        0.14619218 = weight(_text_:2f in 5895) [ClassicSimilarity], result of:
          0.14619218 = score(doc=5895,freq=2.0), product of:
            0.31214407 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.036818076 = queryNorm
            0.46834838 = fieldWeight in 5895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5895)
        0.14619218 = weight(_text_:2f in 5895) [ClassicSimilarity], result of:
          0.14619218 = score(doc=5895,freq=2.0), product of:
            0.31214407 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.036818076 = queryNorm
            0.46834838 = fieldWeight in 5895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5895)
      0.44444445 = coord(4/9)
    
    Source
    Politische Meinung. 381(2001) Nr.1, S.65-74 [https://www.dgfe.de/fileadmin/OrdnerRedakteure/Sektionen/Sek02_AEW/KWF/Publikationen_Reihe_1989-2003/Band_17/Bd_17_1994_355-406_A.pdf]
  4. Schnurr, E.-M.: Religionskonflikt und Öffentlichkeit : eine Mediengeschichte des Kölner Kriegs ; 1582 bis 1590 (2009) 0.03
    0.027279032 = product of:
      0.2455113 = sum of:
        0.2455113 = weight(_text_:germany in 1711) [ClassicSimilarity], result of:
          0.2455113 = score(doc=1711,freq=36.0), product of:
            0.21956629 = queryWeight, product of:
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.036818076 = queryNorm
            1.1181648 = fieldWeight in 1711, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.03125 = fieldNorm(doc=1711)
      0.11111111 = coord(1/9)
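Worth noting in the tree above: the raw term frequency of 36 for "germany" enters only as its square root, so the field weight can exceed 1. Restating the printed numbers as a worked equation (same ClassicSimilarity reading as in the sketch after record 1):

```latex
\[
\begin{aligned}
\mathrm{fieldWeight} &= \sqrt{36}\times 5.963546 \times 0.03125 = 1.1181648,\\
\mathrm{queryWeight} &= 5.963546 \times 0.036818076 = 0.21956629,\\
\mathrm{score} &= 0.21956629 \times 1.1181648 \times \tfrac{1}{9} \approx 0.0273 .
\end{aligned}
\]
```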
    
    LCSH
    Cologne, Germany / History / 1582-1590
    Counter-Reformation / Germany / Cologne
    Pamphlets / Germany / Cologne / History / 16th century
    Press / Germany / Cologne / History / 16th century
    Communication / Political aspects / Germany / Cologne / History / 16th century
    Publishers and publishing / Political aspects / Germany / Cologne / History / 16th century
    Public opinion / Political aspects / Germany / Cologne / History / 16th century
    War in mass media / Germany / Cologne (Electorate) / History / 16th century
    Mass media and war / Germany / Cologne (Electorate) / History / 16th century
    Subject
    Cologne, Germany / History / 1582-1590
    Counter-Reformation / Germany / Cologne
    Pamphlets / Germany / Cologne / History / 16th century
    Press / Germany / Cologne / History / 16th century
    Communication / Political aspects / Germany / Cologne / History / 16th century
    Publishers and publishing / Political aspects / Germany / Cologne / History / 16th century
    Public opinion / Political aspects / Germany / Cologne / History / 16th century
    War in mass media / Germany / Cologne (Electorate) / History / 16th century
    Mass media and war / Germany / Cologne (Electorate) / History / 16th century
  5. Yu, L.; Fan, Z.; Li, A.: A hierarchical typology of scholarly information units : based on a deduction-verification study (2020) 0.02
    0.01921511 = product of:
      0.057645332 = sum of:
        0.024660662 = weight(_text_:bibliographic in 5655) [ClassicSimilarity], result of:
          0.024660662 = score(doc=5655,freq=2.0), product of:
            0.14333439 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.036818076 = queryNorm
            0.17204987 = fieldWeight in 5655, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03125 = fieldNorm(doc=5655)
        0.02300799 = weight(_text_:data in 5655) [ClassicSimilarity], result of:
          0.02300799 = score(doc=5655,freq=4.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.19762816 = fieldWeight in 5655, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=5655)
        0.009976682 = product of:
          0.019953365 = sum of:
            0.019953365 = weight(_text_:22 in 5655) [ClassicSimilarity], result of:
              0.019953365 = score(doc=5655,freq=2.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.15476047 = fieldWeight in 5655, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5655)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    Purpose: The purpose of this paper is to lay a theoretical foundation for identifying operational information units for library and information professional activities in the context of scholarly communication.
    Design/methodology/approach: The study adopts a deduction-verification approach to formulate a typology of units for scholarly information. It first deduces possible units from an existing conceptualization of information, which defines information as the combined product of data and meaning, and then tests the usefulness of these units via two empirical investigations, one with a group of scholarly papers and the other with a sample of scholarly information users.
    Findings: The results show that, on defining an information unit as a piece of information that is complete in both data and meaning, to such an extent that it remains meaningful to its target audience when retrieved and displayed independently in a database, it is possible to formulate a hierarchical typology of units for scholarly information. The typology proposed in this study consists of three levels, which, in turn, consist of 1, 5 and 44 units, respectively.
    Research limitations/implications: The result of this study has theoretical implications on both the philosophical and conceptual levels: on the philosophical level, it hinges on, and reinforces, the objective view of information; on the conceptual level, it challenges the conceptualization of work by IFLA's Functional Requirements for Bibliographic Records and Library Reference Model but endorses that by the Library of Congress's BIBFRAME 2.0 model.
    Practical implications: It calls for reconsideration of existing operational units in a variety of library and information activities.
    Originality/value: The study strengthens the conceptual foundation of operational information units and brings to light the primacy of "one work" as an information unit and the possibility for it to be supplemented by smaller units.
    Date
    14. 1.2020 11:15:22
  6. Badia, A.: Data, information, knowledge : an information science analysis (2014) 0.02
    0.018027142 = product of:
      0.08112214 = sum of:
        0.063662946 = weight(_text_:data in 1296) [ClassicSimilarity], result of:
          0.063662946 = score(doc=1296,freq=10.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.5468357 = fieldWeight in 1296, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1296)
        0.017459193 = product of:
          0.034918386 = sum of:
            0.034918386 = weight(_text_:22 in 1296) [ClassicSimilarity], result of:
              0.034918386 = score(doc=1296,freq=2.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.2708308 = fieldWeight in 1296, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1296)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    I analyze the text of an article that appeared in this journal in 2007 that published the results of a questionnaire in which a number of experts were asked to define the concepts of data, information, and knowledge. I apply standard information retrieval techniques to build a list of the most frequent terms in each set of definitions. I then apply information extraction techniques to analyze how the top terms are used in the definitions. As a result, I draw data-driven conclusions about the aggregate opinion of the experts. I contrast this with the original analysis of the data to provide readers with an alternative viewpoint on what the data tell us.
    Date
    16. 6.2014 19:22:57
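Badia's frequency analysis of the expert definitions can be pictured with a short, hypothetical Python sketch; the tokenizer, stop-word list and toy texts below are my own illustrative assumptions, not the article's actual pipeline.

```python
import re
from collections import Counter

STOPWORDS = {"the", "of", "a", "an", "to", "is", "are", "and", "that", "or", "from", "in"}

def top_terms(definitions, k=5):
    """Most frequent non-stop-word terms across one set of definitions."""
    counts = Counter()
    for text in definitions:
        tokens = re.findall(r"[a-z]+", text.lower())
        counts.update(t for t in tokens if t not in STOPWORDS)
    return counts.most_common(k)

# Toy stand-ins for the experts' definitions of "data" analysed in the article.
data_definitions = [
    "Data are raw symbols recorded from observation.",
    "Data are uninterpreted signals without assigned meaning.",
]
print(top_terms(data_definitions))
# [('data', 2), ('raw', 1), ('symbols', 1), ('recorded', 1), ('observation', 1)]
```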
  7. Houston, R.D.; Harmon, E.G.: Re-envisioning the information concept : systematic definitions (2002) 0.01
    0.014910768 = product of:
      0.06709845 = sum of:
        0.032538213 = weight(_text_:data in 136) [ClassicSimilarity], result of:
          0.032538213 = score(doc=136,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.2794884 = fieldWeight in 136, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=136)
        0.03456024 = product of:
          0.06912048 = sum of:
            0.06912048 = weight(_text_:22 in 136) [ClassicSimilarity], result of:
              0.06912048 = score(doc=136,freq=6.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.536106 = fieldWeight in 136, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=136)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    This paper suggests a framework and systematic definitions for 6 words commonly used in the field of information science: data, information, knowledge, wisdom, inspiration, and intelligence. We intend these definitions to lead to a quantification of information science, a quantification that will enable their measurement, manipulation, and prediction.
    Date
    22. 2.2007 18:56:23
    22. 2.2007 19:22:13
  8. Information cultures in the digital age : a Festschrift in Honor of Rafael Capurro (2016) 0.01
    0.0143410815 = product of:
      0.064534865 = sum of:
        0.021134188 = weight(_text_:data in 4127) [ClassicSimilarity], result of:
          0.021134188 = score(doc=4127,freq=6.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.18153305 = fieldWeight in 4127, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4127)
        0.043400675 = weight(_text_:germany in 4127) [ClassicSimilarity], result of:
          0.043400675 = score(doc=4127,freq=2.0), product of:
            0.21956629 = queryWeight, product of:
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.036818076 = queryNorm
            0.19766548 = fieldWeight in 4127, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4127)
      0.22222222 = coord(2/9)
    
    Content
    Contents:
    Super-Science, Fundamental Dimension, Way of Being: Library and Information Science in an Age of Messages / Bawden, David (et al.) (S.31-43)
    The "Naturalization" of the Philosophy of Rafael Capurro: Logic, Information and Ethics / Brenner, Joseph E. (S.45-64)
    Turing's Cyberworld / Eldred, Michael (S.65-81)
    Hermeneutics and Information Science: The Ongoing Journey From Simple Objective Interpretation to Understanding Data as a Form of Disclosure / Kelly, Matthew (S.83-110)
    The Epistemological Maturity of Information Science and the Debate Around Paradigms / Ribeiro, Fernanda (et al.) (S.111-124)
    A Methodology for Studying Knowledge Creation in Organizational Settings: A Phenomenological Viewpoint / Suorsa, Anna (et al.) (S.125-142)
    The Significance of Digital Hermeneutics for the Philosophy of Technology / Tripathi, Arun Kumar (S.143-157)
    Reconciling Social Responsibility and Neutrality in LIS Professional Ethics: A Virtue Ethics Approach / Burgess, John T F (S.161-172)
    Information Ethics in the Age of Digital Labour and the Surveillance-Industrial Complex / Fuchs, Christian (S.173-190)
    Intercultural Information Ethics: A Pragmatic Consideration / Hongladarom, Soraj (S.191-206)
    Ethics of European Institutions as Normative Foundation of Responsible Research and Innovation in ICT / Stahl, Bernd Carsten (S.207-219)
    Raphael's / Holgate, John D. (S.223-245)
    Understanding the Pulse of Existence: An Examination of Capurro's Angeletics / Morador, Fernando Flores (S.247-252)
    The Demon in the Gap of Language: Capurro, Ethics and language in Divided Germany / Saldanha, Gustavo Silva (S.253-268)
    General Intellect, Communication and Contemporary Media Theory / Frohmann, Bernd (S.271-286)
    "Data": The data / Furner, Jonathan (S.287-306)
    On the Pre-History of Library Ethics: Documents and Legitimacy / Hansson, Joacim (S.307-319)
  9. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.01
    0.013049919 = product of:
      0.058724634 = sum of:
        0.038388252 = weight(_text_:readable in 1182) [ClassicSimilarity], result of:
          0.038388252 = score(doc=1182,freq=2.0), product of:
            0.2262076 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.036818076 = queryNorm
            0.16970363 = fieldWeight in 1182, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1182)
        0.020336384 = weight(_text_:data in 1182) [ClassicSimilarity], result of:
          0.020336384 = score(doc=1182,freq=8.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.17468026 = fieldWeight in 1182, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1182)
      0.22222222 = coord(2/9)
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place, and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API.
    Aside from identifying higher-order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian", or about "Cambridge, Maryland" without hearing about "Cambridge, Massachusetts", Cambridge in the UK, or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature. The Natural Language Processing Research Group at the University of Sheffield has developed its open source Generalized Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Analysis and Search (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available, and more demanding users can draw upon the published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding.
    The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN); with the crucial exception of geographic location, the TGN records do not provide any machine-readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
    Although the Alexandria Digital Library provides far richer data than the TGN (5.9 vs. 1.3 million names), its added size lowers, rather than increases, the accuracy of most geographic name identification systems for historical documents: most of the extra 4.6 million names cover low-frequency entities that rarely occur in any particular corpus. The TGN is sufficiently comprehensive to provide quite enough noise: we find place names that are used over and over (there are almost one hundred Washingtons) and semantically ambiguous (e.g., is Washington a person or a place?). Comprehensive knowledge sources emphasize recall but lower precision. We need data with which to determine which "Tribune" or "John Brown" a particular passage denotes. Secondly and paradoxically, our reference works may not be comprehensive enough. Human actors come and go over time. Organizations appear and vanish. Even places can change their names or vanish. The TGN does associate the obsolete name Siam with the nation of Thailand (tgn,1000142) - but also with towns named Siam in Iowa (tgn,2035651), Tennessee (tgn,2101519), and Ohio (tgn,2662003). Prussia appears, but as a general region (tgn,7016786), with no indication when or if it was a sovereign nation. And if places do point to the same object over time, that object may have very different significance over time: in the foundational works of Western historiography, Herodotus reminds us that the great cities of the past may be small today, and the small cities of today great tomorrow (Hdt. 1.5), while Thucydides stresses that we cannot estimate the past significance of a place by its appearance today (Thuc. 1.10). In other words, we need to know the population figures for the various Washingtons in 1870 if we are analyzing documents from 1870.
    The foundations have been laid for reference works that provide machine-actionable information about entities at particular times in history. The Alexandria Digital Library Gazetteer Content Standard represents a sophisticated framework with which to create such resources: places can be associated with temporal information about their foundation (e.g., Washington, DC, founded on 16 July 1790), changes in names for the same location (e.g., Saint Petersburg to Leningrad and back again), population figures at various times, and similar historically contingent data. But if we have the software and the data structures, we do not yet have substantial amounts of historical content such as plentiful digital gazetteers, encyclopedias, lexica, grammars and other reference works to illustrate many periods and, even if we do, those resources may not be in a useful form: raw OCR output of a complex lexicon or gazetteer may have so many errors and have captured so little of the underlying structure that the digital resource is useless as a knowledge base. Put another way, human beings are still much better at reading and interpreting the contents of page images than machines. While people, places, and dates are probably the most important core entities, we will find a growing set of objects that we need to identify and track across collections, and each of these categories of objects will require its own knowledge sources. The following section enumerates and briefly describes some existing categories of documents that we need to mine for knowledge. This brief survey focuses on the format of print sources (e.g., highly structured textual "database" vs. unstructured text) to illustrate some of the challenges involved in converting our published knowledge into semantically annotated, machine-actionable form.
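The "predators at a water hole" strategy described above, waiting for facts to recur in a standard linguistic pattern, can be sketched with a single regular expression. This is only an illustrative toy under my own assumptions, far cruder than real systems such as GATE or UIMA; the pattern and sample sentence are invented.

```python
import re

# Toy template for statements of the form "<Name> was born at <Place> in <year>".
BORN_AT = re.compile(
    r"(?P<person>[A-Z][\w.\s]+?) was born at (?P<place>[A-Z][\w\s,]+?) in (?P<year>\d{4})"
)

def extract_birth_facts(text):
    """Turn matching sentences into (person, place, year) propositions."""
    for m in BORN_AT.finditer(text):
        yield m.group("person").strip(), m.group("place").strip(), int(m.group("year"))

sample = "C. P. E. Bach was born at Weimar in 1714, unlike his father."
print(list(extract_birth_facts(sample)))
# [('C. P. E. Bach', 'Weimar', 1714)]
```

Disambiguating the extracted places against a gazetteer such as the TGN is the hard part the authors focus on, and it is not attempted here.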
  10. Infield, N.: Capitalising on knowledge : if knowledge is power, why don't librarians rule the world? (1997) 0.01
    0.011664795 = product of:
      0.052491575 = sum of:
        0.032538213 = weight(_text_:data in 668) [ClassicSimilarity], result of:
          0.032538213 = score(doc=668,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.2794884 = fieldWeight in 668, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=668)
        0.019953365 = product of:
          0.03990673 = sum of:
            0.03990673 = weight(_text_:22 in 668) [ClassicSimilarity], result of:
              0.03990673 = score(doc=668,freq=2.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.30952093 = fieldWeight in 668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=668)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    While knowledge management is seen as the biggest thing to hit the information profession since the Internet, the concept is surrounded by confusion. Traces the progress of knowledge on the information continuum which extends from data to informed decision. The reason knowledge management has suddenly become influential is that its principal proponents are now not information professionals but management consultants seeking to retain their intellectual capital. Explains the reasons for this, the practical meaning of knowledge management, and what information professionals should be doing to take advantage of the vogue.
    Source
    Information world review. 1997, no.130, S.22
  11. Meadows, J.: Understanding information (2001) 0.01
    0.010206696 = product of:
      0.04593013 = sum of:
        0.028470935 = weight(_text_:data in 3067) [ClassicSimilarity], result of:
          0.028470935 = score(doc=3067,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.24455236 = fieldWeight in 3067, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3067)
        0.017459193 = product of:
          0.034918386 = sum of:
            0.034918386 = weight(_text_:22 in 3067) [ClassicSimilarity], result of:
              0.034918386 = score(doc=3067,freq=2.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.2708308 = fieldWeight in 3067, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3067)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Modern society suffers from sensory overload through television, the Internet, and magazines of every kind. Jack Meadows, professor of library and information science, examines definitions of terms such as 'data', 'information', 'communication' and 'knowledge' that have become commonplace for us. How do we process the stream of important and unimportant information that pours in on us every day? Which 'data' are worth storing, and which do we forget after a short time? When does information become knowledge, or even wisdom? The book is a fundamental introduction to the broad topic of information and knowledge management.
    Date
    15. 6.2002 19:22:01
  12. Westbrook, L.: Information myths and intimate partner violence : sources, contexts, and consequences (2009) 0.01
    0.010206696 = product of:
      0.04593013 = sum of:
        0.028470935 = weight(_text_:data in 2790) [ClassicSimilarity], result of:
          0.028470935 = score(doc=2790,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.24455236 = fieldWeight in 2790, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2790)
        0.017459193 = product of:
          0.034918386 = sum of:
            0.034918386 = weight(_text_:22 in 2790) [ClassicSimilarity], result of:
              0.034918386 = score(doc=2790,freq=2.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.2708308 = fieldWeight in 2790, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2790)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Survivors of intimate partner violence face more than information gaps; many face powerful barriers in the form of information myths. Triangulating data from in-depth interviews and community bulletin board postings, this study incorporates insights from survivors, police, and shelter staff to begin mapping the information landscape through which survivors move. An unanticipated feature of that landscape is a set of 28 compelling information myths that prevent some survivors from making effective use of the social, legal, economic, and support resources available to them. This analysis of the sources, contexts, and consequences of these information myths is the first step in devising strategies to counter their ill effects.
    Date
    22. 3.2009 19:16:44
  13. Allen, B.L.: Visualization and cognitive abilities (1998) 0.01
    0.008748596 = product of:
      0.03936868 = sum of:
        0.024403658 = weight(_text_:data in 2340) [ClassicSimilarity], result of:
          0.024403658 = score(doc=2340,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.2096163 = fieldWeight in 2340, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2340)
        0.014965023 = product of:
          0.029930046 = sum of:
            0.029930046 = weight(_text_:22 in 2340) [ClassicSimilarity], result of:
              0.029930046 = score(doc=2340,freq=2.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.23214069 = fieldWeight in 2340, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2340)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Date
    22. 9.1997 19:16:05
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  14. Malsburg, C. von der: Concerning the neuronal code (2018) 0.01
    0.008748596 = product of:
      0.03936868 = sum of:
        0.024403658 = weight(_text_:data in 73) [ClassicSimilarity], result of:
          0.024403658 = score(doc=73,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.2096163 = fieldWeight in 73, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=73)
        0.014965023 = product of:
          0.029930046 = sum of:
            0.029930046 = weight(_text_:22 in 73) [ClassicSimilarity], result of:
              0.029930046 = score(doc=73,freq=2.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.23214069 = fieldWeight in 73, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=73)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    The central problem with understanding brain and mind is the neural code issue: understanding the matter of our brain as basis for the phenomena of our mind. The richness with which our mind represents our environment, the parsimony of genetic data, the tremendous efficiency with which the brain learns from scant sensory input and the creativity with which our mind constructs mental worlds all speak in favor of mind as an emergent phenomenon. This raises the further issue of how the neural code supports these processes of organization. The central point of this communication is that the neural code has the form of structured net fragments that are formed by network self-organization, activate and de-activate on the functional time scale, and spontaneously combine to form larger nets with the same basic structure.
    Date
    27.12.2020 16:56:22
  15. Taylor, A.G.: The information universe : will we have chaos or control? (1994) 0.01
    0.007750098 = product of:
      0.06975088 = sum of:
        0.06975088 = weight(_text_:bibliographic in 1644) [ClassicSimilarity], result of:
          0.06975088 = score(doc=1644,freq=4.0), product of:
            0.14333439 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.036818076 = queryNorm
            0.4866305 = fieldWeight in 1644, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0625 = fieldNorm(doc=1644)
      0.11111111 = coord(1/9)
    
    Abstract
    Presents evidence to suggest that the online world needs the bibliographic skills of librarians but that the term bibliographic control is likely to be associated specifically with libraries and liable to misinterpretation. Suggests that it may be time to start talking about information organization which may be described as having the following 4 aspects: making new information bearing entities known; acquiring such entities at certain points of accumulation; providing name, title and subject access to the entities; and providing for the physical location of copies. Urges librarians rapidly to adapt their skills to this increasing need for information organization
  16. Lovhoiden, H.: The myth of information : rediscovering data protocols design as the key to data management (1995) 0.01
    0.007073661 = product of:
      0.063662946 = sum of:
        0.063662946 = weight(_text_:data in 4666) [ClassicSimilarity], result of:
          0.063662946 = score(doc=4666,freq=10.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.5468357 = fieldWeight in 4666, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4666)
      0.11111111 = coord(1/9)
    
    Abstract
    Information researchers are more concerned with exploring myths than advancing the field. Rejects the concept of information, claiming it to be reminiscent of the pipeline metaphor of communication. This claim is based on a constructive world view, sometimes recognised as radical constructivism, sometimes as second order cybernetics, but regarded here as sensible realism. Hence redefines information resources management as data management, since the only thing that can be stored, transferred or received in information systems is data. Their design must be based on this fact. Object orientation must be recognised as a superior approach when developing systems. Common data protocol design is the single most important task for the systems designer and systems performance cannot be improved through computer-human interface design
  17. Cooke, N.J.: Varieties of knowledge elicitation techniques (1994) 0.01
    0.0068501844 = product of:
      0.06165166 = sum of:
        0.06165166 = weight(_text_:bibliographic in 2245) [ClassicSimilarity], result of:
          0.06165166 = score(doc=2245,freq=2.0), product of:
            0.14333439 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.036818076 = queryNorm
            0.43012467 = fieldWeight in 2245, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.078125 = fieldNorm(doc=2245)
      0.11111111 = coord(1/9)
    
    Abstract
    Information on knowledge elicitation methods is widely scattered across the fields of psychology, business management, education, counselling, cognitive science, linguistics, philosophy, knowledge engineering and anthropology. Identifies knowledge elicitation techniques and the associated bibliographic information. Organizes the techniques into categories on the basis of methodological similarity. Summarizes for each category of techniques strengths, weaknesses and recommends applications
  18. fwt: Wie das Gehirn Bilder 'liest' (1999) 0.01
    0.0062707374 = product of:
      0.056436636 = sum of:
        0.056436636 = product of:
          0.11287327 = sum of:
            0.11287327 = weight(_text_:22 in 4042) [ClassicSimilarity], result of:
              0.11287327 = score(doc=4042,freq=4.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.8754574 = fieldWeight in 4042, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4042)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Date
    22. 7.2000 19:01:22
  19. Maguire, P.; Maguire, R.: Consciousness is data compression (2010) 0.01
    0.006261982 = product of:
      0.05635784 = sum of:
        0.05635784 = weight(_text_:data in 4972) [ClassicSimilarity], result of:
          0.05635784 = score(doc=4972,freq=6.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.48408815 = fieldWeight in 4972, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=4972)
      0.11111111 = coord(1/9)
    
    Abstract
    In this article we advance the conjecture that conscious awareness is equivalent to data compression. Algorithmic information theory supports the assertion that all forms of understanding are contingent on compression (Chaitin, 2007). Here, we argue that the experience people refer to as consciousness is the particular form of understanding that the brain provides. We therefore propose that the degree of consciousness of a system can be measured in terms of the amount of data compression it carries out.
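A loose, purely illustrative analogy for "measuring the amount of data compression": compare how much a general-purpose compressor shrinks a regular input versus random noise. This is my own sketch using Python's zlib, not the authors' proposed measure.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size; lower means more regularity found."""
    return len(zlib.compress(data, 9)) / len(data)

regular = b"the cat sat on the mat. " * 200   # repetitive, highly compressible
noise = os.urandom(len(regular))              # random bytes, essentially incompressible

print(round(compression_ratio(regular), 3))   # small, e.g. roughly 0.01
print(round(compression_ratio(noise), 3))     # close to 1.0 (or slightly above)
```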
  20. San Segundo, R.: A new conception of representation of knowledge (2004) 0.01
    0.0058323974 = product of:
      0.026245788 = sum of:
        0.016269106 = weight(_text_:data in 3077) [ClassicSimilarity], result of:
          0.016269106 = score(doc=3077,freq=2.0), product of:
            0.11642061 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.036818076 = queryNorm
            0.1397442 = fieldWeight in 3077, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=3077)
        0.009976682 = product of:
          0.019953365 = sum of:
            0.019953365 = weight(_text_:22 in 3077) [ClassicSimilarity], result of:
              0.019953365 = score(doc=3077,freq=2.0), product of:
                0.12893063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036818076 = queryNorm
                0.15476047 = fieldWeight in 3077, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3077)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    The new term Representation of knowledge, applied to the framework of electronic segments of information, with comprehension of new material support for information, and a review and total conceptualisation of the terminology which is being applied, entails a review of all traditional documentary practices. Therefore, a definition of the concept of Representation of knowledge is indispensable. The term representation has been used in western cultural and intellectual tradition to refer to the diverse ways that a subject comprehends an object. Representation is a process which requires the structure of natural language and human memory, whereby it is interwoven in a subject and in consciousness. However, at the present time, the term Representation of knowledge is applied to the processing of electronic information, combined with the aim of emulating the human mind, in such a way that one has endeavoured to transfer, with great difficulty, the complex structure of the conceptual representation of human knowledge to new digital information technologies. Thus, nowadays, representation of knowledge has taken on diverse meanings and has focused, for the moment, on certain structures and conceptual hierarchies which carry and transfer information, having initially been based on the current representation of knowledge using artificial intelligence. The traditional languages of documentation, also referred to as languages of representation, offer a structured representation of conceptual fields, symbols and terms of natural and notational language, and they are the pillars for the necessary correspondence between the object or text and its representation. These correspondences, connections and symbolisations will be established within the electronic framework by means of different models and of the "goal" domain, which will give rise to organisations, structures, maps, networks and levels, as new electronic documents are not compact units but segments of information. Thus, the new representation of knowledge refers to data, images, figures and symbolised, treated, processed and structured ideas which replace or refer to documents within the framework of technical processing and the retrieval of electronic information.
    Date
    2. 1.2005 18:22:25

Languages

  • e 120
  • d 69
  • de 1
  • f 1
  • fi 1

Types

  • a 157
  • m 32
  • el 11
  • s 9
  • r 1
