Search (150 results, page 1 of 8)

  • language_ss:"e"
  • theme_ss:"Information"
  1. Malsburg, C. von der: ¬The correlation theory of brain function (1981) 0.16
    0.1630852 = product of:
      0.38053215 = sum of:
        0.05436174 = product of:
          0.1630852 = sum of:
            0.1630852 = weight(_text_:3a in 76) [ClassicSimilarity], result of:
              0.1630852 = score(doc=76,freq=2.0), product of:
                0.34821346 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04107254 = queryNorm
                0.46834838 = fieldWeight in 76, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=76)
          0.33333334 = coord(1/3)
        0.1630852 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.1630852 = score(doc=76,freq=2.0), product of:
            0.34821346 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04107254 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
        0.1630852 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.1630852 = score(doc=76,freq=2.0), product of:
            0.34821346 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04107254 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
      0.42857143 = coord(3/7)
    
    Source
    http://cogprints.org/1380/1/vdM_correlation.pdf
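
    The explain trees in this list are standard Lucene ClassicSimilarity (TF-IDF) breakdowns. As a worked check, the following minimal sketch (Python assumed; all constants are copied from the tree above for doc 76) re-derives the 0.1630852 score of this first result:

      from math import sqrt, isclose

      # Constants taken from the explain tree above.
      query_norm = 0.04107254
      idf        = 8.478011     # idf(docFreq=24, maxDocs=44218)
      field_norm = 0.0390625
      term_freq  = 2.0

      query_weight = idf * query_norm                    # 0.34821346 = queryWeight
      field_weight = sqrt(term_freq) * idf * field_norm  # 0.46834838 = fieldWeight
      term_score   = query_weight * field_weight         # 0.1630852  = weight(_text_:2f)

      # The "_text_:3a" clause sits one level deeper and is scaled by coord(1/3);
      # the two "_text_:2f" clauses each contribute the full term score.
      inner_sum = term_score * (1 / 3) + 2 * term_score  # 0.38053215 = sum of:
      doc_score = inner_sum * (3 / 7)                    # coord(3/7) -> 0.1630852

      assert isclose(doc_score, 0.1630852, rel_tol=1e-5)

    The same pattern (fieldWeight = tf × idf × fieldNorm, term weight = queryWeight × fieldWeight, and a final coord factor for the fraction of query clauses that matched) reproduces every score breakdown shown on this page.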
  2. Rubin, V.L.: Disinformation and misinformation triangle (2019) 0.06
    0.056203403 = product of:
      0.13114128 = sum of:
        0.03718255 = weight(_text_:processing in 5462) [ClassicSimilarity], result of:
          0.03718255 = score(doc=5462,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.22363065 = fieldWeight in 5462, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5462)
        0.04992717 = weight(_text_:digital in 5462) [ClassicSimilarity], result of:
          0.04992717 = score(doc=5462,freq=4.0), product of:
            0.16201277 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.04107254 = queryNorm
            0.3081681 = fieldWeight in 5462, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5462)
        0.044031553 = weight(_text_:techniques in 5462) [ClassicSimilarity], result of:
          0.044031553 = score(doc=5462,freq=2.0), product of:
            0.18093403 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.04107254 = queryNorm
            0.24335694 = fieldWeight in 5462, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5462)
      0.42857143 = coord(3/7)
    
    Abstract
    Purpose: The purpose of this paper is to treat disinformation and misinformation (intentionally deceptive and unintentionally inaccurate misleading information, respectively) as a socio-cultural, technology-enabled epidemic in digital news, propagated via social media.
    Design/methodology/approach: The proposed disinformation and misinformation triangle is a conceptual model that identifies the three minimal causal factors occurring simultaneously to facilitate the spread of the epidemic at the societal level.
    Findings: Following the epidemiological disease triangle model, the three interacting causal factors are translated into the digital news context: the virulent pathogens are falsifications, clickbait, satirical "fakes" and other deceptive or misleading news content; the susceptible hosts are information-overloaded, time-pressed news readers lacking media literacy skills; and the conducive environments are polluted, poorly regulated social media platforms that propagate and encourage the spread of various "fakes."
    Originality/value: The three types of interventions - automation, education and regulation - are proposed as a set of holistic measures to reveal, and potentially control, predict and prevent further proliferation of the epidemic. Partial automated solutions based on natural language processing, machine learning and various automated detection techniques are currently available, as exemplified here briefly. Automated solutions assist (but do not replace) human judgments about whether news is truthful and credible. Information literacy efforts require further in-depth understanding of the phenomenon and interdisciplinary collaboration beyond traditional library and information science, incorporating media studies, journalism, interpersonal psychology and communication perspectives.
  3. Repo, A.J.: ¬The dual approach to the value of information : an appraisal of use and exchange values (1989) 0.04
    0.040875565 = product of:
      0.14306447 = sum of:
        0.10411114 = weight(_text_:processing in 5772) [ClassicSimilarity], result of:
          0.10411114 = score(doc=5772,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.6261658 = fieldWeight in 5772, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.109375 = fieldNorm(doc=5772)
        0.038953334 = product of:
          0.07790667 = sum of:
            0.07790667 = weight(_text_:22 in 5772) [ClassicSimilarity], result of:
              0.07790667 = score(doc=5772,freq=2.0), product of:
                0.14382903 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04107254 = queryNorm
                0.5416616 = fieldWeight in 5772, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5772)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Source
    Information processing and management. 22(1986) no.5, S.373-383
  4. Dillon, A.: Spatial-semantics : how users derive shape from information space (2000) 0.04
    0.035146713 = product of:
      0.12301349 = sum of:
        0.06310088 = weight(_text_:processing in 4602) [ClassicSimilarity], result of:
          0.06310088 = score(doc=4602,freq=4.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.3795138 = fieldWeight in 4602, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=4602)
        0.059912607 = weight(_text_:digital in 4602) [ClassicSimilarity], result of:
          0.059912607 = score(doc=4602,freq=4.0), product of:
            0.16201277 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.04107254 = queryNorm
            0.36980176 = fieldWeight in 4602, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=4602)
      0.2857143 = coord(2/7)
    
    Abstract
    User problems with large information spaces multiply in complexity when we enter the digital domain. Virtual information environments can offer 3D representations, reconfigurations, and access to large databases that may overwhelm many users' abilities to filter and represent. As a result, users frequently experience disorientation when navigating large digital spaces to locate and use information. To date, the research response has been predominantly based on the analysis of visual navigational aids that might support users' bottom-up processing of the spatial display. In the present paper, an emerging alternative is considered that places greater emphasis on the top-down application of semantic knowledge gleaned by the user from their experiences within the sociocognitive context of information production and consumption. A distinction between spatial and semantic cues is introduced, and existing empirical data are reviewed that highlight the differential reliance on spatial or semantic information as the domain expertise of the user increases. The conclusion is reached that interfaces for shaping information should be built on an increasing analysis of users' semantic processing.
  5. San Segundo, R.: ¬A new conception of representation of knowledge (2004) 0.03
    0.034902792 = product of:
      0.081439845 = sum of:
        0.042067256 = weight(_text_:processing in 3077) [ClassicSimilarity], result of:
          0.042067256 = score(doc=3077,freq=4.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.2530092 = fieldWeight in 3077, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.03125 = fieldNorm(doc=3077)
        0.028243072 = weight(_text_:digital in 3077) [ClassicSimilarity], result of:
          0.028243072 = score(doc=3077,freq=2.0), product of:
            0.16201277 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.04107254 = queryNorm
            0.17432621 = fieldWeight in 3077, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.03125 = fieldNorm(doc=3077)
        0.011129524 = product of:
          0.022259047 = sum of:
            0.022259047 = weight(_text_:22 in 3077) [ClassicSimilarity], result of:
              0.022259047 = score(doc=3077,freq=2.0), product of:
                0.14382903 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04107254 = queryNorm
                0.15476047 = fieldWeight in 3077, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3077)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
    
    Abstract
    The new term Representation of knowledge, applied to the framework of electronic segments of information, with comprehension of new material support for information, and a review and total conceptualisation of the terminology which is being applied, entails a review of all traditional documentary practices. Therefore, a definition of the concept of Representation of knowledge is indispensable. The term representation has been used in western cultural and intellectual tradition to refer to the diverse ways that a subject comprehends an object. Representation is a process which requires the structure of natural language and human memory whereby it is interwoven in a subject and in consciousness. However, at the present time, the term Representation of knowledge is applied to the processing of electronic information, combined with the aim of emulating the human mind in such a way that one has endeavoured to transfer, with great difficulty, the complex structurality of the conceptual representation of human knowledge to new digital information technologies. Thus, nowadays, representation of knowledge has taken on diverse meanings and it has focussed, for the moment, on certain structures and conceptual hierarchies which carry and transfer information, and has initially been based on the current representation of knowledge using artificial intelligence. The traditional languages of documentation, also referred to as languages of representation, offer a structured representation of conceptual fields, symbols and terms of natural and notational language, and they are the pillars for the necessary correspondence between the object or text and its representation. These correspondences, connections and symbolisations will be established within the electronic framework by means of different models and of the "goal" domain, which will give rise to organisations, structures, maps, networks and levels, as new electronic documents are not compact units but segments of information. Thus, the new representation of knowledge refers to data, images, figures and symbolised, treated, processed and structured ideas which replace or refer to documents within the framework of technical processing and the retrieval of electronic information.
    Date
    2. 1.2005 18:22:25
  6. Hernon, P.: Disinformation and misinformation through the Internet : findings of an exploratory study (1995) 0.03
    0.031734157 = product of:
      0.111069545 = sum of:
        0.04942538 = weight(_text_:digital in 2206) [ClassicSimilarity], result of:
          0.04942538 = score(doc=2206,freq=2.0), product of:
            0.16201277 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.04107254 = queryNorm
            0.30507088 = fieldWeight in 2206, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2206)
        0.06164417 = weight(_text_:techniques in 2206) [ClassicSimilarity], result of:
          0.06164417 = score(doc=2206,freq=2.0), product of:
            0.18093403 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.04107254 = queryNorm
            0.3406997 = fieldWeight in 2206, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2206)
      0.2857143 = coord(2/7)
    
    Abstract
    There are increased opportunities for disinformation and misinformation to occur on the Internet and for students, faculty, and others to unknowingly reference them. The extent of inaccuracy over the Internet was investigated in 1994 by means of a questionnaire involving 16 participants which covered: individuals' views on the accuracy of information available through the Internet; their reactions to the creation of disinformation and misinformation; their awareness of instances of disinformation and misinformation on the Internet; and their views on the official or authentic version or source. Findings indicate a need to develop digital signatures and other authenticating techniques.
  7. Badia, A.: Data, information, knowledge : an information science analysis (2014) 0.03
    0.03047277 = product of:
      0.10665469 = sum of:
        0.08717802 = weight(_text_:techniques in 1296) [ClassicSimilarity], result of:
          0.08717802 = score(doc=1296,freq=4.0), product of:
            0.18093403 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.04107254 = queryNorm
            0.48182213 = fieldWeight in 1296, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1296)
        0.019476667 = product of:
          0.038953334 = sum of:
            0.038953334 = weight(_text_:22 in 1296) [ClassicSimilarity], result of:
              0.038953334 = score(doc=1296,freq=2.0), product of:
                0.14382903 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04107254 = queryNorm
                0.2708308 = fieldWeight in 1296, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1296)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    I analyze the text of an article that appeared in this journal in 2007 that published the results of a questionnaire in which a number of experts were asked to define the concepts of data, information, and knowledge. I apply standard information retrieval techniques to build a list of the most frequent terms in each set of definitions. I then apply information extraction techniques to analyze how the top terms are used in the definitions. As a result, I draw data-driven conclusions about the aggregate opinion of the experts. I contrast this with the original analysis of the data to provide readers with an alternative viewpoint on what the data tell us.
    Date
    16. 6.2014 19:22:57
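
    A minimal sketch (Python assumed; the sample definitions below are invented, not the questionnaire data from the 2007 article) of the kind of most-frequent-term listing the abstract above describes:

      import re
      from collections import Counter

      # One (invented) set of expert definitions per concept.
      definitions = {
          "data": ["data are raw symbols recorded from observation"],
          "information": ["information is data endowed with meaning and context"],
          "knowledge": ["knowledge is information internalized and usable for action"],
      }

      STOPWORDS = {"is", "are", "and", "with", "for", "from", "the", "a", "an"}

      for concept, texts in definitions.items():
          tokens = [t for text in texts
                    for t in re.findall(r"[a-z]+", text.lower())
                    if t not in STOPWORDS]
          # Rank the remaining terms by frequency, as in the term-list step.
          print(concept, Counter(tokens).most_common(3))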
  8. Allen, B.L.: Visualization and cognitive abilities (1998) 0.03
    0.026850509 = product of:
      0.09397677 = sum of:
        0.07728249 = weight(_text_:processing in 2340) [ClassicSimilarity], result of:
          0.07728249 = score(doc=2340,freq=6.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.4648076 = fieldWeight in 2340, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=2340)
        0.016694285 = product of:
          0.03338857 = sum of:
            0.03338857 = weight(_text_:22 in 2340) [ClassicSimilarity], result of:
              0.03338857 = score(doc=2340,freq=2.0), product of:
                0.14382903 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04107254 = queryNorm
                0.23214069 = fieldWeight in 2340, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2340)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    The idea of obtaining subject access to information by being able to visualize an information space, and to navigate through that space toward useful or interesting information, is attractive and plausible. However, this approach to subject access requires additional cognitive processing associated with the interaction of cognitive facilities that deal with concepts and those that deal with space. This additional cognitive processing may cause problems for users, particularly in dealing with the dimensions, the details, and the symbols of information space. Further, it seems likely that different cognitive abilities are associated with conceptual and spatial cognition. As a result, users who deal well with subject access using traditional conceptual approaches may experience difficulty in using visualization and navigation. An experiment designed to investigate the effects of different cognitive abilities on the use of both conceptual and spatial representations of information is outlined
    Date
    22. 9.1997 19:16:05
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  9. Dillon, A.; Vaughan, M.: "It's the journey and the destination" : shape and the emergent property of genre in evaluating digital documents (1997) 0.03
    0.025535632 = product of:
      0.089374706 = sum of:
        0.06989804 = weight(_text_:digital in 2889) [ClassicSimilarity], result of:
          0.06989804 = score(doc=2889,freq=4.0), product of:
            0.16201277 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.04107254 = queryNorm
            0.43143538 = fieldWeight in 2889, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2889)
        0.019476667 = product of:
          0.038953334 = sum of:
            0.038953334 = weight(_text_:22 in 2889) [ClassicSimilarity], result of:
              0.038953334 = score(doc=2889,freq=2.0), product of:
                0.14382903 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04107254 = queryNorm
                0.2708308 = fieldWeight in 2889, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2889)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Navigation is a limited metaphor for hypermedia and website use that potentially constrains our understanding of human-computer interaction. Traces the emergence of the navigation metaphor and the empirical analysis of navigation measures in usability evaluation before suggesting an alternative concept to consider: shape. The shape concept affords a richer analytic tool for considering humans' use of digital documents and invokes a social-level analysis of meanings that are shared among the discourse communities who both produce and consume the information resources.
    Date
    6. 2.1999 20:10:22
  10. Cooke, N.J.: Varieties of knowledge elicitation techniques (1994) 0.03
    0.025160888 = product of:
      0.17612621 = sum of:
        0.17612621 = weight(_text_:techniques in 2245) [ClassicSimilarity], result of:
          0.17612621 = score(doc=2245,freq=8.0), product of:
            0.18093403 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.04107254 = queryNorm
            0.9734278 = fieldWeight in 2245, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.078125 = fieldNorm(doc=2245)
      0.14285715 = coord(1/7)
    
    Abstract
    Information on knowledge elicitation methods is widely scattered across the fields of psychology, business management, education, counselling, cognitive science, linguistics, philosophy, knowledge engineering and anthropology. Identifies knowledge elicitation techniques and the associated bibliographic information. Organizes the techniques into categories on the basis of methodological similarity. Summarizes, for each category of techniques, their strengths and weaknesses, and recommends applications.
  11. Klir, G.J.; Folger, T.A.: Fuzzy sets, uncertainty and information (1988) 0.02
    0.021950537 = product of:
      0.15365376 = sum of:
        0.15365376 = product of:
          0.3073075 = sum of:
            0.3073075 = weight(_text_:mathematics in 6039) [ClassicSimilarity], result of:
              0.3073075 = score(doc=6039,freq=4.0), product of:
                0.25945482 = queryWeight, product of:
                  6.31699 = idf(docFreq=216, maxDocs=44218)
                  0.04107254 = queryNorm
                1.1844356 = fieldWeight in 6039, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.31699 = idf(docFreq=216, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6039)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    PRECIS
    Mathematics / Fuzzy sets
    Subject
    Mathematics / Fuzzy sets
  12. Weaver, W.: ¬The mathematics of communication (1949) 0.02
    0.020695165 = product of:
      0.14486615 = sum of:
        0.14486615 = product of:
          0.2897323 = sum of:
            0.2897323 = weight(_text_:mathematics in 2438) [ClassicSimilarity], result of:
              0.2897323 = score(doc=2438,freq=2.0), product of:
                0.25945482 = queryWeight, product of:
                  6.31699 = idf(docFreq=216, maxDocs=44218)
                  0.04107254 = queryNorm
                1.1166966 = fieldWeight in 2438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.31699 = idf(docFreq=216, maxDocs=44218)
                  0.125 = fieldNorm(doc=2438)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
  13. Verdi, M.P.; Kulhavy, R.W.; Stock, W.A.; Rittscho, K.A.; Savenye, W.: Why maps improve memory for text : the influence of structural information on working-memory operations (1993) 0.02
    0.0175181 = product of:
      0.061313346 = sum of:
        0.04461906 = weight(_text_:processing in 2090) [ClassicSimilarity], result of:
          0.04461906 = score(doc=2090,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.26835677 = fieldWeight in 2090, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=2090)
        0.016694285 = product of:
          0.03338857 = sum of:
            0.03338857 = weight(_text_:22 in 2090) [ClassicSimilarity], result of:
              0.03338857 = score(doc=2090,freq=2.0), product of:
                0.14382903 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04107254 = queryNorm
                0.23214069 = fieldWeight in 2090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2090)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    In order to test how associated verbal and spatial stimuli are processed in memory, undergraduates studied a reference map as either an intact unit or as a series of individual features, and read a text containing facts related to map features. In addition, the map was presented either before or after reading the text. Seeing the intact map prior to the text led to better recall of both map information and facts from the text. These results support a dual coding model, where stimuli such as maps possess a retrieval advantage because they allow simultaneous representation in working memory. This advantage occurs because information from the map can be used to cue retrieval of associated verbal facts, without exceeding the processing constraints of the memorial system.
    Date
    22. 7.2000 19:18:18
  14. Quillian, M.R.: Semantic memory (1968) 0.02
    0.016997738 = product of:
      0.11898416 = sum of:
        0.11898416 = weight(_text_:processing in 1478) [ClassicSimilarity], result of:
          0.11898416 = score(doc=1478,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.7156181 = fieldWeight in 1478, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.125 = fieldNorm(doc=1478)
      0.14285715 = coord(1/7)
    
    Source
    Semantic information processing. Ed.: M. Minsky
  15. Derr, R.L.: ¬The concept of information in ordinary discourse (1985) 0.02
    0.016997738 = product of:
      0.11898416 = sum of:
        0.11898416 = weight(_text_:processing in 3297) [ClassicSimilarity], result of:
          0.11898416 = score(doc=3297,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.7156181 = fieldWeight in 3297, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.125 = fieldNorm(doc=3297)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 21(1985) no.6, S.489-500
  16. Wersig, G.: Information science : the study of postmodern knowledge usage (1993) 0.02
    0.016997738 = product of:
      0.11898416 = sum of:
        0.11898416 = weight(_text_:processing in 4706) [ClassicSimilarity], result of:
          0.11898416 = score(doc=4706,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.7156181 = fieldWeight in 4706, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.125 = fieldNorm(doc=4706)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 29(1993) no.2, S.229-240
  17. Cremmins, E.T.: Value-added processing of representational and speculative information using cognitive skills (1992) 0.02
    0.016997738 = product of:
      0.11898416 = sum of:
        0.11898416 = weight(_text_:processing in 7515) [ClassicSimilarity], result of:
          0.11898416 = score(doc=7515,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.7156181 = fieldWeight in 7515, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.125 = fieldNorm(doc=7515)
      0.14285715 = coord(1/7)
    
  18. ¬The impact of information (1995) 0.02
    0.016997738 = product of:
      0.11898416 = sum of:
        0.11898416 = weight(_text_:processing in 3257) [ClassicSimilarity], result of:
          0.11898416 = score(doc=3257,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.7156181 = fieldWeight in 3257, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.125 = fieldNorm(doc=3257)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 31(1995) no.4, S.455-498
  19. Massaro, D.W.; Cowan, N.: Information processing models : microscopes of the mind (1993) 0.02
    0.016997738 = product of:
      0.11898416 = sum of:
        0.11898416 = weight(_text_:processing in 3293) [ClassicSimilarity], result of:
          0.11898416 = score(doc=3293,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.7156181 = fieldWeight in 3293, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.125 = fieldNorm(doc=3293)
      0.14285715 = coord(1/7)
    
  20. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.02
    0.016589193 = product of:
      0.058062173 = sum of:
        0.018591275 = weight(_text_:processing in 1182) [ClassicSimilarity], result of:
          0.018591275 = score(doc=1182,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.111815326 = fieldWeight in 1182, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1182)
        0.039470896 = weight(_text_:digital in 1182) [ClassicSimilarity], result of:
          0.039470896 = score(doc=1182,freq=10.0), product of:
            0.16201277 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.04107254 = queryNorm
            0.2436283 = fieldWeight in 1182, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1182)
      0.2857143 = coord(2/7)
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place, and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian", or about "Cambridge, Maryland", without hearing about "Cambridge, Massachusetts", Cambridge in the UK or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature. The Natural Language Processing Research Group at the University of Sheffield has developed its open source General Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Management Architecture (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available, and more demanding users can draw upon published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding. The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN), but, with the crucial exception of geographic location, the TGN records do not provide any machine-readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
    Although the Alexandria Digital Library provides far richer data than the TGN (5.9 vs. 1.3 million names), its added size lowers, rather than increases, the accuracy of most geographic name identification systems for historical documents: most of the extra 4.6 million names cover low frequency entities that rarely occur in any particular corpus. The TGN is sufficiently comprehensive to provide quite enough noise: we find place names that are used over and over (there are almost one hundred Washingtons) and semantically ambiguous (e.g., is Washington a person or a place?). Comprehensive knowledge sources emphasize recall but lower precision. We need data with which to determine which "Tribune" or "John Brown" a particular passage denotes. Secondly and paradoxically, our reference works may not be comprehensive enough. Human actors come and go over time. Organizations appear and vanish. Even places can change their names or vanish. The TGN does associate the obsolete name Siam with the nation of Thailand (tgn,1000142) - but also with towns named Siam in Iowa (tgn,2035651), Tennessee (tgn,2101519), and Ohio (tgn,2662003). Prussia appears but as a general region (tgn,7016786), with no indication when or if it was a sovereign nation. And if places do point to the same object over time, that object may have very different significance over time: in the foundational works of Western historiography, Herodotus reminds us that the great cities of the past may be small today, and the small cities of today great tomorrow (Hdt. 1.5), while Thucydides stresses that we cannot estimate the past significance of a place by its appearance today (Thuc. 1.10). In other words, we need to know the population figures for the various Washingtons in 1870 if we are analyzing documents from 1870. The foundations have been laid for reference works that provide machine actionable information about entities at particular times in history. The Alexandria Digital Library Gazetteer Content Standard represents a sophisticated framework with which to create such resources: places can be associated with temporal information about their foundation (e.g., Washington, DC, founded on 16 July 1790), changes in names for the same location (e.g., Saint Petersburg to Leningrad and back again), population figures at various times and similar historically contingent data. But if we have the software and the data structures, we do not yet have substantial amounts of historical content such as plentiful digital gazetteers, encyclopedias, lexica, grammars and other reference works to illustrate many periods and, even if we do, those resources may not be in a useful form: raw OCR output of a complex lexicon or gazetteer may have so many errors and have captured so little of the underlying structure that the digital resource is useless as a knowledge base. Put another way, human beings are still much better at reading and interpreting the contents of page images than machines. While people, places, and dates are probably the most important core entities, we will find a growing set of objects that we need to identify and track across collections, and each of these categories of objects will require its own knowledge sources. The following section enumerates and briefly describes some existing categories of documents that we need to mine for knowledge. This brief survey focuses on the format of print sources (e.g., highly structured textual "database" vs. unstructured text) to illustrate some of the challenges involved in converting our published knowledge into semantically annotated, machine actionable form.
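
    The template-matching idea sketched in the abstract above (waiting for statements of the form "NAME1 born at NAME2 in DATE" and converting them into propositions) can be illustrated as follows; a minimal sketch, Python assumed, with an invented pattern and sentence rather than the GATE or UIMA pipelines the authors mention:

      import re

      # One fixed linguistic pattern, the "predator at the water hole".
      PATTERN = re.compile(
          r"(?P<name1>[A-Z][\w. ]+?) was born at (?P<name2>[A-Z][\w ]+?) in (?P<date>\d{4})"
      )

      text = "Johann Sebastian Bach was born at Eisenach in 1685."
      m = PATTERN.search(text)
      if m:
          # Convert the surface form into a machine-actionable proposition:
          # NAME1 is (probably) a person, NAME2 a place, DATE a year.
          proposition = {
              "person": m.group("name1"),
              "bornAt": m.group("name2"),
              "year": int(m.group("date")),
          }
          print(proposition)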

Types

  • a 126
  • m 18
  • el 7
  • s 5
  • x 1

Subjects