Search (67 results, page 1 of 4)

  • × theme_ss:"Information"
  • × year_i:[2000 TO 2010}
  1. Donsbach, W.: Wahrheit in den Medien : über den Sinn eines methodischen Objektivitätsbegriffes (2001) 0.16
    
    Source
    Politische Meinung. 381(2001) Nr.1, S.65-74 [https://www.dgfe.de/fileadmin/OrdnerRedakteure/Sektionen/Sek02_AEW/KWF/Publikationen_Reihe_1989-2003/Band_17/Bd_17_1994_355-406_A.pdf]
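The relevance figure attached to each hit (0.16, 0.04, and so on) is a Lucene ClassicSimilarity score, i.e. a tf-idf weight. A minimal sketch of how a single term's contribution is computed under that model, using illustrative parameter values (a term occurring twice in the field, docFreq=24 in an index of 44,218 documents; queryNorm and fieldNorm are supplied by the engine at query and index time):

```python
import math

def classic_similarity_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Per-term score in Lucene's ClassicSimilarity:
    score = queryWeight * fieldWeight, where
    tf  = sqrt(freq)
    idf = 1 + ln(maxDocs / (docFreq + 1))."""
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    query_weight = idf * query_norm       # idf * queryNorm
    field_weight = tf * idf * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

# Illustrative values for one term of one matching document.
score = classic_similarity_term_score(
    freq=2.0, doc_freq=24, max_docs=44218,
    query_norm=0.04107254, field_norm=0.0390625)
print(round(score, 7))  # ≈ 0.1630852
```

Contributions of the individual query terms are then summed and scaled by a coordination factor (matching terms divided by total query terms) to give the hit's overall score.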
  2. Dillon, A.: Spatial-semantics : how users derive shape from information space (2000) 0.04
    
    Abstract
    User problems with large information spaces multiply in complexity when we enter the digital domain. Virtual information environments can offer 3D representations, reconfigurations, and access to large databases that may overwhelm many users' abilities to filter and represent. As a result, users frequently experience disorientation when navigating large digital spaces to locate and use information. To date, the research response has been predominantly based on the analysis of visual navigational aids that might support users' bottom-up processing of the spatial display. In the present paper, an emerging alternative is considered that places greater emphasis on the top-down application of semantic knowledge by the user, gleaned from their experiences within the sociocognitive context of information production and consumption. A distinction between spatial and semantic cues is introduced, and existing empirical data are reviewed that highlight the differential reliance on spatial or semantic information as the domain expertise of the user increases. The conclusion is reached that interfaces for shaping information should be built on an increasing analysis of users' semantic processing.
  3. San Segundo, R.: A new conception of representation of knowledge (2004) 0.03
    
    Abstract
    The new term Representation of knowledge, applied to the framework of electronic segments of information, together with an understanding of the new material support for information and a review and complete conceptualisation of the terminology being applied, entails a review of all traditional documentary practices. Therefore, a definition of the concept of Representation of knowledge is indispensable. The term representation has been used in the western cultural and intellectual tradition to refer to the diverse ways in which a subject comprehends an object. Representation is a process which requires the structure of natural language and human memory, whereby it is interwoven in the subject and in consciousness. However, at the present time, the term Representation of knowledge is applied to the processing of electronic information, combined with the aim of emulating the human mind, in such a way that attempts have been made to transfer, with great difficulty, the complex structure of the conceptual representation of human knowledge to the new digital information technologies. Thus, nowadays, representation of knowledge has taken on diverse meanings and has focussed, for the moment, on certain structures and conceptual hierarchies which carry and transfer information, initially based on the current representation of knowledge using artificial intelligence. The traditional languages of documentation, also referred to as languages of representation, offer a structured representation of conceptual fields, symbols and terms of natural and notational language, and they are the pillars for the necessary correspondence between the object or text and its representation.
These correspondences, connections and symbolisations will be established within the electronic framework by means of different models of the "goal" domain, which will give rise to organisations, structures, maps, networks and levels, as new electronic documents are not compact units but segments of information. Thus, the new representation of knowledge refers to data, images, figures and symbolised, treated, processed and structured ideas which replace or refer to documents within the framework of technical processing and the retrieval of electronic information.
    Date
    2. 1.2005 18:22:25
  4. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.02
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian" or about "Cambridge, Maryland", without hearing about "Cambridge, Massachusetts", Cambridge in the UK or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature. 
The Natural Language Processing Research Group at the University of Sheffield has developed its open source General Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Management Architecture (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available and more demanding users can draw upon published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding. The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN), but, with the crucial exception of geographic location, the TGN records do not provide any machine readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
    Although the Alexandria Digital Library provides far richer data than the TGN (5.9 vs. 1.3 million names), its added size lowers, rather than increases, the accuracy of most geographic name identification systems for historical documents: most of the extra 4.6 million names cover low frequency entities that rarely occur in any particular corpus. The TGN is sufficiently comprehensive to provide quite enough noise: we find place names that are used over and over (there are almost one hundred Washingtons) and semantically ambiguous (e.g., is Washington a person or a place?). Comprehensive knowledge sources emphasize recall but lower precision. We need data with which to determine which "Tribune" or "John Brown" a particular passage denotes. Secondly and paradoxically, our reference works may not be comprehensive enough. Human actors come and go over time. Organizations appear and vanish. Even places can change their names or vanish. The TGN does associate the obsolete name Siam with the nation of Thailand (tgn,1000142) - but also with towns named Siam in Iowa (tgn,2035651), Tennessee (tgn,2101519), and Ohio (tgn,2662003). Prussia appears but as a general region (tgn,7016786), with no indication when or if it was a sovereign nation. And if places do point to the same object over time, that object may have very different significance over time: in the foundational works of Western historiography, Herodotus reminds us that the great cities of the past may be small today, and the small cities of today great tomorrow (Hdt. 1.5), while Thucydides stresses that we cannot estimate the past significance of a place by its appearance today (Thuc. 1.10). In other words, we need to know the population figures for the various Washingtons in 1870 if we are analyzing documents from 1870. The foundations have been laid for reference works that provide machine actionable information about entities at particular times in history. 
The Alexandria Digital Library Gazetteer Content Standard represents a sophisticated framework with which to create such resources: places can be associated with temporal information about their foundation (e.g., Washington, DC, founded on 16 July 1790), changes in names for the same location (e.g., Saint Petersburg to Leningrad and back again), population figures at various times and similar historically contingent data. But if we have the software and the data structures, we do not yet have substantial amounts of historical content such as plentiful digital gazetteers, encyclopedias, lexica, grammars and other reference works to illustrate many periods and, even if we do, those resources may not be in a useful form: raw OCR output of a complex lexicon or gazetteer may have so many errors and have captured so little of the underlying structure that the digital resource is useless as a knowledge base. Put another way, human beings are still much better at reading and interpreting the contents of page images than machines. While people, places, and dates are probably the most important core entities, we will find a growing set of objects that we need to identify and track across collections, and each of these categories of objects will require its own knowledge sources. The following section enumerates and briefly describes some existing categories of documents that we need to mine for knowledge. This brief survey focuses on the format of print sources (e.g., highly structured textual "database" vs. unstructured text) to illustrate some of the challenges involved in converting our published knowledge into semantically annotated, machine actionable form.
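As a toy illustration of the pattern-based ("predators at a water hole") extraction the abstract describes, a single hand-written pattern can turn a stereotyped sentence into a machine-actionable proposition. The regular expression and example sentence below are illustrative sketches, not taken from any of the systems the article names:

```python
import re

# Wait for statements of the standard form "NAME1 born at NAME2 in DATE"
# and convert each match into a (person, place, date) proposition.
# The pattern is a deliberate simplification: capitalized word runs
# stand in for names, a 3-4 digit run stands in for a year.
BORN_PATTERN = re.compile(
    r"(?P<person>[A-Z][\w.]*(?:\s[A-Z][\w.]*)*)\s+born\s+at\s+"
    r"(?P<place>[A-Z][\w.]*(?:\s[A-Z][\w.]*)*)\s+in\s+(?P<date>\d{3,4})"
)

def extract_birth_facts(text):
    """Return (person, place, date) propositions found in free text."""
    return [(m.group("person"), m.group("place"), m.group("date"))
            for m in BORN_PATTERN.finditer(text)]

facts = extract_birth_facts("Johann Sebastian Bach born at Eisenach in 1685.")
print(facts)  # [('Johann Sebastian Bach', 'Eisenach', '1685')]
```

Real systems generalize this idea with many patterns and statistical models, and then face exactly the disambiguation problem the abstract turns to next: deciding which "Boston" or "Washington" a matched name denotes.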
  5. Bawden, D.: Information and digital literacies : a review of concepts (2001) 0.01
    
    Abstract
    The concepts of 'information literacy' and 'digital literacy' are described, and reviewed, by way of a literature survey and analysis. Related concepts, including computer literacy, library literacy, network literacy, Internet literacy and hyper-literacy are also discussed, and their relationships elucidated. After a general introduction, the paper begins with the basic concept of 'literacy', which is then expanded to include newer forms of literacy, more suitable for complex information environments. Some of these, for example library, media and computer literacies, are based largely on specific skills, but have some extension beyond them. They lead to general concepts, such as information literacy and digital literacy, which are based on knowledge, perceptions and attitudes, though reliant on the simpler skills-based literacies.
  6. Stoyan, H.: Information in der Informatik (2004) 0.01
    
    Abstract
    In 1957, Karl Steinbuch and his colleague Helmut Gröttrup coined the term "Informatik". He used it not to designate a scientific discipline but rather for his department at the SEL company in Stuttgart. At that time, three camps faced one another in this field: the mathematicians, who computed electronically on calculating machines; the electrical engineers, who practised message processing (Nachrichtenverarbeitung); and the business and punched-card people, who counted, booked and totalled with mechanical-electronic devices. While in the USA and England the mathematicians prevailed with "computer" as the name for the machine, and the science was pragmatically called "computer science", the discussion in Germany remained undecided into the 1960s: the abbreviation EDV still persists alongside "Rechner" and "Computer", and in 1962 Steinbuch himself titled his handbook not "Taschenbuch der Informatik" but "Taschenbuch der Nachrichtenverarbeitung". A 1955 informatics conference in Darmstadt was still called "Elektronische Rechenanlagen und Informationsverarbeitung", and the international society was named the "International Federation for Information Processing". In 1957, however, Steinbuch defined "Informatik" as "automatic information processing" and thereby moved towards the mathematicians. As a company designation the term appeared to be protected; as late as 1967 the federal government's advisory board was still named the board "für Datenverarbeitung" (for data processing). Only once the French had adopted the designation "Informatique" was the way clear for its adoption in Germany. Accordingly, the advisory board's committee for establishing university programmes was already dedicated to the "Einführung von Informatik-Studiengängen" (introduction of informatics degree programmes). The then research minister Stoltenberg was won over and made the term "Informatik" public in a speech.
At the end of the 1960s, F. L. Bauer and others took up the term, in 1969 named the professional association the "Gesellschaft für Informatik", and saw to the corresponding naming of the scientific discipline. The contested basic concepts of this process, information, messages (Nachrichten) and data, seem today to be separated only by nuances. At the time, of course, politics was also at stake: research directions, the spirit of the science, its orientation. More mathematics, more engineering science, or more business administration: thus one might simplify the main currents. Electrical engineers unreconciled with the orientation of informatics called themselves information engineers, while the data-processing people gathered in the camp of business informatics (Wirtschaftsinformatik). The basic concepts of informatics (message, information, datum) have since been the subject of extensive debate. Textbooks had to be written, lexica and reference works were compiled, working groups met. The work of C. Shannon on communication, which had introduced a statistical information theory, played only a minor role in all this.
    Date
    5. 4.2013 10:22:48
  7. Kelton, K.; Fleischmann, K.R.; Wallace, W.A.: Trust in digital information (2008) 0.01
    
    Abstract
    Trust in information is developing into a vitally important topic as the Internet becomes increasingly ubiquitous within society. Although many discussions of trust in this environment focus on issues like security, technical reliability, or e-commerce, few address the problem of trust in the information obtained from the Internet. The authors assert that there is a strong need for theoretical and empirical research on trust within the field of information science. As an initial step, the present study develops a model of trust in digital information by integrating the research on trust from the behavioral and social sciences with the research on information quality and human-computer interaction. The model positions trust as a key mediating variable between information quality and information usage, with important consequences for both the producers and consumers of digital information. The authors close by outlining important directions for future research on trust in information science and technology.
  8. Kantor, P.B.: Information theory (2009) 0.01
    
    Abstract
    Information theory "measures quantity of information" and is that branch of applied mathematics that deals with the efficient transmission of messages in an encoded language. It is fundamental to modern methods of telecommunication, image compression, and security. Its relation to library and information science is less direct. More relevant to the LIS conception of "quantity of information" are economic concepts related to the expected value of a decision, and the influence of imperfect information on that expected value.
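The "quantity of information" that information theory measures is, in Shannon's formulation, the entropy of a message source. A brief sketch of the standard definition:

```python
import math

def shannon_entropy(probs):
    """H(X) = -sum(p * log2(p)): the expected information content,
    in bits, of one symbol drawn from a source with these probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin yields 1 bit per toss; a biased coin yields less,
# which is why its outcomes are cheaper to encode on average.
print(shannon_entropy([0.5, 0.5]))  # 1.0
print(shannon_entropy([0.9, 0.1]))  # ≈ 0.469
```

Lower-entropy sources admit shorter average encodings, which is what makes entropy central to the compression and transmission applications the abstract mentions.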
  9. Floridi, L.: Open problems in the philosophy of information (2004) 0.01
    
    Abstract
    The philosophy of information (PI) is a new area of research with its own field of investigation and methodology. This article, based on the Herbert A. Simon Lecture of Computing and Philosophy I gave at Carnegie Mellon University in 2001, analyses the eighteen principal open problems in PI. Section 1 introduces the analysis by outlining Herbert Simon's approach to PI. Section 2 discusses some methodological considerations about what counts as a good philosophical problem. The discussion centers on Hilbert's famous analysis of the central problems in mathematics. The rest of the article is devoted to the eighteen problems. These are organized into five sections: problems in the analysis of the concept of information, in semantics, in the study of intelligence, in the relation between information and nature, and in the investigation of values.
  10. Joint, N.: Digital information and the "privatisation of knowledge" (2007) 0.01
    
    Abstract
    Purpose - The purpose of this paper is to point out that past models of information ownership may not carry over to the age of digital information. The fact that public ownership of information (for example, by means of national and public library collections) created social benefits in the past does not mean that a greater degree of private sector involvement in information provision in the knowledge society of today is synonymous with an abandonment of past ideals of social information provision. Design/methodology/approach - A brief review of recent issues in digital preservation and national electronic heritage management, with an examination of the public-private sector characteristics of each issue. Findings - Private companies and philanthropic endeavours focussing on the business of digital information provision have done some things - which in the past we have associated with the public domain - remarkably well. It is probably fair to say that this has occurred against the pattern of expectation of the library profession. Research limitations/implications - The premise of this paper is that LIS research aimed at predicting future patterns of problem solving in information work should avoid the narrow use of patterns of public-private relationships inherited from a previous, print-based information order. Practical implications - This paper suggests practical ways in which the library and information profession can improve digital library services by looking to form creative partnerships with private sector problem solvers. Originality/value - This paper argues that the LIS profession should not take a doctrinaire approach to commercial company involvement in "our" information world. Librarians should facilitate collaboration between all parties, both public and private, to create original solutions to contemporary information provision problems. 
In this way we can help create pragmatic, non-doctrinaire solutions that really do work for the citizens of our contemporary information society.
  11. Kuhlen, R.: Universal Access : Wem gehört Wissen? (2002) 0.01
    
    Abstract
    The question of who owns knowledge is reformulated as the question of access to knowledge and to information; the answers to it will decide the development of the information society. The preference for this designation over "knowledge society" is justified on the basis of the pragmatic concept of information. Of the six views of the "information society" presented, the currently dominant functional view of knowledge and information is examined in more detail. This view explains the present tendencies toward commercialisation and exploitation of knowledge, but also the transformation of behaviour toward knowledge (for example, the shift from buying knowledge to leasing it), with the consequences of "pricing for information" and corresponding control procedures of "Digital Rights Management". Starting from distinctions within the concept of "access", justifications for universal access are presented, above all from an information-ethics and a normative-principlist perspective. Several current threats to universal access are discussed in detail, using the examples of filtering and blocking, the manipulation of metainformation services, and leasing and "Digital Rights Management". "Digital Rights Management" without a trust-securing "User Rights Management" has every potential to become the torture instrument of the information society, but on the other hand also every potential, through socially governed rights and user administration, to become an instrument of balancing interests and thus of information peace. Finally, some proposals are made for how the principle of universal access, and with it free public access to knowledge and information, can be secured or at least promoted.
From the discussion, various possible scenarios are derived, together with the conclusion that every era, acknowledging the prevailing technological and media conditions, must determine anew its consensus between the public and the private interest in the exploitation and exchange of knowledge and in access to knowledge.
  12. Wallis, J.: Cyberspace, information literacy and the information society : same difference? (2005) 0.01
    Abstract
    Purpose - To establish that, in the opinion of the author, there is a need for an information literacy skill set for citizens of the modern information society, and that the role of library and information professionals may have to evolve, from intermediaries to facilitators and trainers. Design/methodology/approach - An opinion piece based on the author's experiences in digital library research, as a citizen of an information society and as a worker in the knowledge economy. Findings - That citizens of information societies have direct access to a bewildering range of digital information resources. Librarians and information professionals face less demand for their traditional role as intermediaries. Information literacy is defined and described as a vital skill set for citizens of information societies. It is suggested that librarians and information professionals are needed to pass on these skills to citizens at all levels of society for economic, social and personal empowerment. Research limitations/implications - The paper reflects the perspective of the author - it is not supported by quantitative data (notoriously difficult to collect on information literacy). Practical implications - Provides suggestions on how the library and information profession can retain its relevance to society in the networked age. Originality/value - This is the particular viewpoint of the author, with a diverse range of examples cited to back up the thrust of the paper. It describes how information literacy is required to interact effectively with the digital environment on an emotional as well as an intellectual level.
  13. Moser, P.K.; Nat, A. vander: Knowledge (2009) 0.01
    Content
    Available digitally at: http://dx.doi.org/10.1081/E-ELIS3-120043462. Cf.: http://www.tandfonline.com/doi/book/10.1081/E-ELIS3.
  14. Fallis, D.: On verifying the accuracy of information : philosophical perspectives (2004) 0.01
    Abstract
    How can one verify the accuracy of recorded information (e.g., information found in books, newspapers, and on Web sites)? In this paper, I argue that work in the epistemology of testimony (especially that of philosophers David Hume and Alvin Goldman) can help with this important practical problem in library and information science. This work suggests that there are four important areas to consider when verifying the accuracy of information: (i) authority, (ii) independent corroboration, (iii) plausibility and support, and (iv) presentation. I show how philosophical research in these areas can improve how information professionals go about teaching people how to evaluate information. Finally, I discuss several further techniques that information professionals can and should use to make it easier for people to verify the accuracy of information.
  15. Morris, J.: Individual differences in the interpretation of text : implications for information science (2009) 0.01
    Abstract
    Many tasks in library and information science (e.g., indexing, abstracting, classification, and text analysis techniques such as discourse and content analysis) require text meaning interpretation, and, therefore, any individual differences in interpretation are relevant and should be considered, especially for applications in which these tasks are done automatically. This article investigates individual differences in the interpretation of one aspect of text meaning that is commonly used in such automatic applications: lexical cohesion and lexical semantic relations. Experiments with 26 participants indicate an approximately 40% difference in interpretation. In total, 79, 83, and 89 lexical chains (groups of semantically related words) were analyzed in 3 texts, respectively. A major implication of this result is the possibility of modeling individual differences for individual users. Further research is suggested for different types of texts and readers than those used here, as well as similar research for different aspects of text meaning.
  16. Ford, N.: Modeling cognitive processes in information seeking : from Popper to Pask (2004) 0.01
    Abstract
    This report explores the intellectual processes entailed during information seeking, as information needs are generated and information is sought and evaluated for relevance. It focuses on the details of cognitive processing, reviewing a number of models. In particular, Popper's model of the communication process between an individual and new information is explored and elaborated from the perspective of Pask's Conversation Theory. The implications of this theory are discussed in relation to the development of what Cole has termed "enabling" information retrieval systems.
  17. Brier, S.: Cybersemiotics and the problems of the information-processing paradigm as a candidate for a unified science of information behind library information science (2004) 0.01
    Abstract
    As an answer to the humanistic, socially oriented critique of the information-processing paradigms used as a conceptual frame for library information science, this article formulates a broader and less objective concept of communication than that of the information-processing paradigm. Knowledge can be seen as the mental phenomenon that documents (combining signs into text, depending on the state of knowledge of the recipient) can cause through interpretation. The examination of these "correct circumstances" is an important part of information science. This article presents the following developments in the concept of information: Information is understood as potential until somebody interprets it. The objective carriers of potential knowledge are signs. Signs need interpretation to release knowledge in the form of interpretants. Interpretation is based on the total semantic network, horizons, worldviews, and experience of the person, including the emotional and social aspects. The realm of meaning is rooted in social-historical as well as embodied evolutionary processes that go beyond computational, algorithmic logic. The semantic network derives a decisive aspect of signification from a person's embodied cultural worldview, which, in turn, derives from, develops, and has its roots in undefined tacit knowledge. To theoretically encompass both the computational and the semantic aspects of document classification and retrieval, we need to combine the cybernetic functionalistic approach with the semiotic pragmatic understanding of meaning as social and embodied. For such a marriage, it is necessary to go into the constructivistic second-order cybernetics and autopoiesis theory of von Foerster, Maturana, and Luhmann, on the one hand, and the pragmatic triadic semiotics of Peirce in the form of the embodied Biosemiotics, on the other hand. This combination is what I call Cybersemiotics.
  18. dpa: Struktur des Denkorgans wird bald entschlüsselt sein (2000) 0.01
    Date
    17. 7.1996 9:33:22
    22. 7.2000 19:05:41
  19. Ernst, W.: Datum und Information : Begriffsverwirrungen (2002) 0.01
    Abstract
    The frequent attempt, diagnosed by Uwe Jochum, to project the modern mathematical, communications-engineering concept of information back into history, and thus to present all possible concepts of information as precursors and variants of it, is resisted by the (sit venia verbo) media-archaeological view, which attends to the discontinuities, ruptures, and incompatibilities in the genealogy of the concept of information between analog and digital, logical and mathematical, philosophical and non-discursive conceptions of knowledge, and which above all separates a metaphorical description of social processes from a medial concept of transmission. A close reading of the ancient understanding of knowledge does indeed find in Aristotle's treatise On the Soul the concept of the "medium", the to metaxy as the "in-between". The whole difference between Aristotelian and digital media, however, famously lies in the fact that in the latter in-between space something actually happens: a data processing that is no longer dependent solely on human cognition but possesses the capacity for feedback, the conceptual alternative to the concept of knowledge.
  20. Fallis, D.: Social epistemology and information science (2006) 0.01
    Date
    13. 7.2008 19:22:28

Languages

  • e 37
  • d 30

Types

  • a 57
  • m 7
  • el 3
  • s 2
  • x 1