Search (61 results, page 1 of 4)

  • theme_ss:"Information"
  • type_ss:"a"
  • year_i:[2000 TO 2010}
  1. Donsbach, W.: Wahrheit in den Medien : über den Sinn eines methodischen Objektivitätsbegriffes (2001) 0.08
    0.07838307 = product of:
      0.2351492 = sum of:
        0.0587873 = product of:
          0.1763619 = sum of:
            0.1763619 = weight(_text_:3a in 5895) [ClassicSimilarity], result of:
              0.1763619 = score(doc=5895,freq=2.0), product of:
                0.37656134 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.044416238 = queryNorm
                0.46834838 = fieldWeight in 5895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5895)
          0.33333334 = coord(1/3)
        0.1763619 = weight(_text_:2f in 5895) [ClassicSimilarity], result of:
          0.1763619 = score(doc=5895,freq=2.0), product of:
            0.37656134 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.044416238 = queryNorm
            0.46834838 = fieldWeight in 5895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5895)
      0.33333334 = coord(2/6)
    
    Source
    Politische Meinung. 381(2001) Nr.1, S.65-74 [https://www.dgfe.de/fileadmin/OrdnerRedakteure/Sektionen/Sek02_AEW/KWF/Publikationen_Reihe_1989-2003/Band_17/Bd_17_1994_355-406_A.pdf]
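
    A note on the score breakdowns: each tree is Lucene "explain" output for the classic TF-IDF similarity. A term's weight is the product of a query-side factor, queryWeight = idf × queryNorm, and a document-side factor, fieldWeight = tf × idf × fieldNorm; the document score is the sum of the matching term weights, scaled by a coordination factor coord(matched/total clauses). The sketch below reproduces the numbers of the first tree as standalone arithmetic; the class and method names are illustrative, not the Lucene API.

    // ClassicSimilarityDemo.java -- a minimal sketch of Lucene's classic
    // TF-IDF scoring, reconstructed from the explain tree above.
    public class ClassicSimilarityDemo {

        // idf = 1 + ln(maxDocs / (docFreq + 1)), as in the tree:
        // 8.478011 = idf(docFreq=24, maxDocs=44218)
        static double idf(long docFreq, long maxDocs) {
            return 1.0 + Math.log((double) maxDocs / (docFreq + 1));
        }

        // tf = sqrt(freq): 1.4142135 = tf(freq=2.0)
        static double tf(double freq) {
            return Math.sqrt(freq);
        }

        // weight = queryWeight * fieldWeight
        //        = (idf * queryNorm) * (tf * idf * fieldNorm)
        static double termWeight(double freq, long docFreq, long maxDocs,
                                 double queryNorm, double fieldNorm) {
            double idfValue = idf(docFreq, maxDocs);
            return (idfValue * queryNorm) * (tf(freq) * idfValue * fieldNorm);
        }

        public static void main(String[] args) {
            // Term _text_:2f in doc 5895 of result 1:
            double w = termWeight(2.0, 24, 44218, 0.044416238, 0.0390625);
            System.out.println(w); // ~0.1763619, matching the tree

            // Final score: sum of the two top-level summands, scaled by
            // coord(2/6), the fraction of query clauses that matched.
            System.out.println((0.0587873 + 0.1763619) * 2.0 / 6.0); // ~0.07838307
        }
    }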
  2. ap: Schlaganfall : Computer-Bild zeigt den Heilungsprozess im Gehirn (2000) 0.04
    0.038251486 = product of:
      0.11475445 = sum of:
        0.07864773 = weight(_text_:computer in 4231) [ClassicSimilarity], result of:
          0.07864773 = score(doc=4231,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.48452407 = fieldWeight in 4231, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.09375 = fieldNorm(doc=4231)
        0.03610672 = product of:
          0.07221344 = sum of:
            0.07221344 = weight(_text_:22 in 4231) [ClassicSimilarity], result of:
              0.07221344 = score(doc=4231,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.46428138 = fieldWeight in 4231, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4231)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    22. 7.2000 19:05:31
  3. Bell, G.; Gemmell, J.: Erinnerung total (2007) 0.04
    0.036193818 = product of:
      0.072387636 = sum of:
        0.028901752 = weight(_text_:wide in 300) [ClassicSimilarity], result of:
          0.028901752 = score(doc=300,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.14686027 = fieldWeight in 300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=300)
        0.015679711 = weight(_text_:web in 300) [ClassicSimilarity], result of:
          0.015679711 = score(doc=300,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.108171105 = fieldWeight in 300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=300)
        0.02780617 = weight(_text_:computer in 300) [ClassicSimilarity], result of:
          0.02780617 = score(doc=300,freq=4.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.17130512 = fieldWeight in 300, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0234375 = fieldNorm(doc=300)
      0.5 = coord(3/6)
    
    Content
    "Unser Gedächtnis ist oft nervtötend unzuverlässig. Wir stoßen jeden Tag an seine Grenzen, wenn uns die Telefonnummer eines Freundes, der Name eines Geschäftspartners oder der Titel eines Lieblingsbuchs nicht einfallen will. Wir alle haben Strategien gegen die Folgen unserer Vergesslichkeit entwickelt, vom guten alten Schmierzettel bis zum elektronischen Terminplaner, und trotzdem gehen uns immer wieder wichtige Informationen durch die Lappen. Seit einiger Zeit arbeiten wir in einer Gruppe bei Microsoft Research an einem Pilotprojekt, das der Unvollkommenheit unseres Gedächtnisses radikal abhelfen soll: der totalen digitalen Aufzeichnung eines Menschenlebens. Unsere erste Versuchsperson ist einer von uns: Gordon Bell. Seit sechs Jahren unternehmen wir es, seine Kommunikation mit anderen Menschen sowie all seine Interaktion mit Maschinen aufzuzeichnen, außerdem alles, was er sieht und hört, sowie alle Internetseiten, die er aufsucht, und dies alles in einem persönlichen digitalen Archiv abzuspeichern, das einerseits leicht zu durchsuchen und andererseits sicher ist. Die Aufzeichnung beschränkt sich nicht auf bewusst Erlebtes. Tragbare Sensoren messen Dinge, die der Mensch überhaupt nicht wahrnimmt, wie etwa den Sauerstoffgehalt im Blut oder die CO, -Konzentration in der Atemluft. Ein Computer kann dann diese Daten auf gewisse Muster hin durchsuchen; so wäre zum Beispiel festzustellen, unter welchen Umweltbedingungen sich Asthma bei einem Kind verschlimmert oder ob die Daten des Herzschlags zusammen mit anderen physiologischen Größen Vorboten eines Herzanfalls sind. In Gestalt dieser Sensoren läuft also ein permanentes medizinisches Früherkennungsprogramm. Ihr Arzt hätte Zugang zu Ihrer detaillierten und ständig aktuellen Krankenakte, und wenn Sie, wie üblich, auf die Frage »Wann ist dieses Symptom zum ersten Mal aufgetreten?« keine klare Antwort haben - im digitalen Archiv ist sie zu finden.
    In unserem Forschungsprojekt »MyLifeBits« haben wir einige Hilfsmittel für ein solches lebenslanges digitales Archiv ausgearbeitet. Es gelingt uns inzwischen, ein Ereignis so lebensecht in Ton und Bild wiederzugeben, dass dies der persönliche Erinnerung so aufhilft wie das Internet der wissenschaftlichen Recherche. Zu jedem Wort, das der Besitzer des Archivs irgendwann - in einer E-Mail, in einem elektronischen Dokument oder auf einer Internetseite - gelesen hat, findet er mit ein paar Tastendrücken den Kontext. Der Computer führt eine Statistik über die Beschäftigungen seines Besitzers und macht ihn beizeiten darauf aufmerksam, dass er sich für die wichtigen Dinge des Lebens nicht genügend Zeit nimmt. Er kann auch die räumliche Position seines Herrn in regelmäßigen Zeitabständen festhalten und damit ein komplettes Bewegungsbild erstellen. Aber vielleicht das Wichtigste: Das Leben eines Menschen wird der Nachwelt, insbesondere seinen Kindern und Enkeln, so genau, so lebhaft und mit allen Einzelheiten überliefert, wie es bisher den Reichen und Berühmten vorbehalten war.
    Ein Netz von Pfaden Ein früher Traum von einem maschinell erweiterten Gedächtnis wurde gegen Ende des Zweiten Weltkriegs von Vannevar Bush geäußert. Bush, damals Direktor des Office of Scientific Research and Development (OSRD), das die militärischen Forschungsprogramme der USA koordinierte, und besser bekannt als Erfinder des Analogrechners, stellte 1945 in seinem Aufsatz »As we may think« eine fiktive Maschine namens Memex (Memory Extender, »Gedächtnis-Erweiterer«) vor, die alle Bücher, alle Aufzeichnungen und die gesamte Kommunikation eines Menschen auf Mikrofilm speichern sollte. Das Memex sollte in einem Schreibtisch eingebaut sein und über eine Tastatur, ein Mikrofon und mehrere Bildschirme verfügen. Bush hatte vorgesehen, dass der Benutzer am Schreibtisch mit einer Kamera Fotografien und Dokumente auf Mikrofilm ablichtete oder neue Dokumente erstellte, indem er auf einen berührungsempfindlichen Bildschirm schrieb. Unterwegs sollte eine per Stirnband am Kopf befestigte Kamera das Aufzeichnen übernehmen. Vor allem aber sollte das Memex ähnlich dem menschlichen Gehirn zu assoziativem Denken fähig sein. Bush beschreibt das sehr plastisch: »Kaum hat es einen Begriff erfasst, schon springt es zum nächsten, geleitet von Gedankenassoziationen und entlang einem komplexen Netz von Pfaden, das sich durch die Gehirnzellen zieht.« Im Lauf des folgenden halben Jahrhunderts entwickelten unerschrockene Informatikpioniere, unter ihnen Ted Nelson und Douglas Engelbart, einige dieser Ideen, und die Erfinder des World Wide Web setzten Bushs »Netz von Pfaden« in die Netzstruktur ihrer verlinkten Seiten um. Das Memex selbst blieb jedoch technisch außer Reichweite. Erst in den letzten Jahren haben die rasanten Fortschritte in Speichertechnik, Sensorik und Rechentechnologie den Weg für neue Aufzeichnungs- und Suchtechniken geebnet, die im Endeffekt weit über Bushs Vision hinausgehen könnten."
  4. Darnton, R.: Im Besitz des Wissens : Von der Gelehrtenrepublik des 18. Jahrhunderts zum digitalen Google-Monopol (2009) 0.03
    0.029720977 = product of:
      0.08916293 = sum of:
        0.057803504 = weight(_text_:wide in 2335) [ClassicSimilarity], result of:
          0.057803504 = score(doc=2335,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 2335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=2335)
        0.031359423 = weight(_text_:web in 2335) [ClassicSimilarity], result of:
          0.031359423 = score(doc=2335,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 2335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2335)
      0.33333334 = coord(2/6)
    
    Abstract
    Like a gigantic landscape of information, the Internet opens up before our eyes. And since Google reached a settlement last fall with the authors and publishers who had sued the big search engine for copyright infringement, the question of orientation in the World Wide Web poses itself with new urgency. Over the past four years Google has digitized millions of books from the collections of large research libraries, among them countless copyrighted works, and put them online for searching. Authors and publishers contended that the digitization constituted a copyright violation. After protracted negotiations, a settlement was reached that will have far-reaching consequences for how books find their way to their readers. . . .
  5. Wathen, C.N.; Burkell, J.: Believe it or not : factors influencing credibility on the Web (2002) 0.02
    0.023561096 = product of:
      0.070683286 = sum of:
        0.031359423 = weight(_text_:web in 201) [ClassicSimilarity], result of:
          0.031359423 = score(doc=201,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 201, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=201)
        0.039323866 = weight(_text_:computer in 201) [ClassicSimilarity], result of:
          0.039323866 = score(doc=201,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 201, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=201)
      0.33333334 = coord(2/6)
    
    Abstract
    This article reviews selected literature related to the credibility of information, including (1) the general markers of credibility, and how different source, message and receiver characteristics affect people's perceptions of information; (2) the impact of information medium on the assessment of credibility; and (3) the assessment of credibility in the context of information presented on the Internet. The objective of the literature review is to synthesize the current state of knowledge in this area, develop new ways to think about how people interact with information presented via the Internet, and suggest next steps for research and practical applications. The review examines empirical evidence, key reviews, and descriptive material related to credibility in general, and in terms of on-line media. A general discussion of credibility and persuasion and a description of recent work on the credibility and persuasiveness of computer-based applications are presented. Finally, the article synthesizes what we have learned from various fields, and proposes a model as a framework for much-needed future research in this area.
  6. Stoyan, H.: Information in der Informatik (2004) 0.02
    0.019147621 = product of:
      0.057442863 = sum of:
        0.045407288 = weight(_text_:computer in 2959) [ClassicSimilarity], result of:
          0.045407288 = score(doc=2959,freq=6.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.2797401 = fieldWeight in 2959, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.03125 = fieldNorm(doc=2959)
        0.012035574 = product of:
          0.024071148 = sum of:
            0.024071148 = weight(_text_:22 in 2959) [ClassicSimilarity], result of:
              0.024071148 = score(doc=2959,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.15476047 = fieldWeight in 2959, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2959)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    In 1957 Karl Steinbuch, together with his staff member Helmut Gröttrup, coined the term "Informatik". He used the term not to designate a scientific discipline, but rather for his department at the SEL company in Stuttgart. At that time three camps faced one another in this field: the mathematicians, who computed electronically with calculating machines; the electrical engineers, who practiced message processing ("Nachrichtenverarbeitung"); and the business and punched-card people, who counted, booked, and totaled with electromechanical devices. While in the USA and England the mathematicians prevailed with the name "computer" for the device, and the science was pragmatically called "computer science", in Germany the debate remained undecided into the 1960s: the abbreviation EDV still holds its own against "Rechner" and "Computer"; Steinbuch himself in 1962 called his handbook not "Taschenbuch der Informatik" but "Taschenbuch der Nachrichtenverarbeitung". In 1955 an informatics conference in Darmstadt was still named "Elektronische Rechenanlagen und Informationsverarbeitung". The international society was called the "International Federation for Information Processing". In 1957, however, Steinbuch defined "Informatik" as "automatic information processing" and in this way moved toward the mathematicians. As a company designation the term appeared to be protected. As late as 1967 the federal government's advisory council was still named the Fachbeirat "für Datenverarbeitung" (for data processing). Only when the French used the designation "Informatique" was the way clear for its adoption. Thus the advisory council's committee for establishing university study was already devoted to the "Einführung von Informatik-Studiengängen" (introduction of informatics degree programs). The research minister of the time, Stoltenberg, was won over, and he made the term "Informatik" public in a speech. At the end of the 1960s F. L. Bauer and others adopted the term, in 1969 named the professional society the "Gesellschaft für Informatik", and saw to the corresponding naming of the scientific discipline. The contested basic concepts of this process - information, messages, and data - seem today to be separated by mere nuances. At the time, of course, it was also about politics, about research directions, about the spirit and orientation of the science. More mathematics, more engineering, or more business administration - thus one might simplify the underlying currents. Electrical engineers unreconciled with the orientation of Informatik called themselves information technologists; the data processors gathered in the camp of business informatics ("Wirtschaftsinformatik"). The basic concepts of informatics - message, information, datum - have since been the subject of extensive debate. Textbooks had to be written, encyclopedias and reference works were compiled, working groups met. The work of C. Shannon on communication, which had introduced a statistical theory of information, played only a minor role in all this.
    Date
    5. 4.2013 10:22:48
  7. Hjoerland, B.: The special competency of information specialists (2002) 0.02
    0.017514316 = product of:
      0.052542947 = sum of:
        0.02780617 = weight(_text_:computer in 1265) [ClassicSimilarity], result of:
          0.02780617 = score(doc=1265,freq=4.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.17130512 = fieldWeight in 1265, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1265)
        0.024736777 = product of:
          0.049473554 = sum of:
            0.049473554 = weight(_text_:programs in 1265) [ClassicSimilarity], result of:
              0.049473554 = score(doc=1265,freq=2.0), product of:
                0.25748047 = queryWeight, product of:
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.044416238 = queryNorm
                0.19214487 = fieldWeight in 1265, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1265)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Content
    "In a new article published in Journal of Documentation, 2002, I claim that the special competency of information specialists and information scientists are related to "domain analysis." Information science grew out of special librarianship and documentation (cf. Williams, 1997), and implicit in its tradition has in my opinion been a focus an subject knowledge. Although domain analysis has earlier been introduced in JASIST (Hjoerland & Albrechtsen, 1995), the new article introduces 11 Specific approaches to domain analysis, which I Claim together define the Specific competencies of information specialists. The approaches are (I) Producing and evaluating literature guides and subject gateways, (2) Producing and evaluating special classifications and thesauri, (3) Research an and competencies in indexing and retrieving information specialties, (4) Knowledge about empirical user studies in subject areas, (5) Producing and interpreting bibliometrical studies, (6) Historical studies of information structures and Services in domains, (7) Studies of documents and genres in knowledge domains, (8) Epistemological and critical studies of different paradigms, assumptions, and interests in domains, (9) Knowledge about terminological studies, LSP (Languages for Special Purposes), and discourse analysis in knowledge fields, (10) Knowledge about and studies of structures and institutions in scientific and professional communication in a domain, (11) Knowledge about methods and results from domain analytic studies about professional cognition, knowledge representation in computer science and artificial intelligence. By bringing these approaches together, the paper advocates a view which may have been implicit in previous literature but which has not before been Set out systematically. The approaches presented here are neither exhaustive nor mutually exhaustve, but an attempt is made to present the state of the art. Specific examples and selective reviews of literature are provided, and the strength and drawback of each of these approaches are being discussed. It is my Claim that the information specialist who has worked with these 1 1 approaches in a given domain (e.g., music, sociology, or chemistry) has a special expertise that should not be mixed up with the kind of expertise taught at universities in corresponding subjects. Some of these 11 approaches are today well-known in schools of LIS. Bibliometrics is an example, Other approaches are new and represent a view of what should be introduced in the training of information professionals. First and foremost does the article advocates the view that these 1 1 approaches should be seen as supplementary. That the Professional identity is best maintained if Chose methods are applied to the same examples (same domain). Somebody would perhaps feel that this would make the education of information professionals too narrow. The Counter argument is that you can only understand and use these methods properly in a new domain, if you already have a deep knowledge of the Specific information problems in at least orte domain. It is a dangerous illusion to believe that one becomes more competent to work in any field if orte does not know anything about any domain. The special challenge in our science is to provide general background for use in Specific fields. This is what domain analysis is developed for. 
Study programs that allow the students to specialize and to work independent in the selected field (such as, for example, the Curriculum at the Royal School of LIS in Denmark) should fit well with the intentions in domain analysis. In this connection it should be emphasized that the 11 approaches are presented as general approaches that may be used in about any domain whatsoever. They should, however, be seen in connection. If this is not the case, then their relative strengths and weaknesses cannot be evaluated. The approaches do not have the same status. Some (e.g., empirical user studies) are dependent an others (e.g., epistemological studies).
    It is my hope that domain analysis may contribute to the strengthening of the professional and scientific identity of our discipline and provide more coherence and depth in information studies. The paper is an argument about what should be core teachings in our field, It should be both broad enough to cover the important parts of IS and Specific enough to maintain a special focus and identity compared to, for example, computer science and the cognitive sciences. It is not a narrow view of information science and an the other hand it does not Set forth an unrealistic utopia."
  8. Raban, D.R.; Rafaeli, S.: The effect of source nature and status on the subjective value of information (2006) 0.02
    0.015938118 = product of:
      0.047814354 = sum of:
        0.03276989 = weight(_text_:computer in 5268) [ClassicSimilarity], result of:
          0.03276989 = score(doc=5268,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.20188503 = fieldWeight in 5268, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5268)
        0.0150444675 = product of:
          0.030088935 = sum of:
            0.030088935 = weight(_text_:22 in 5268) [ClassicSimilarity], result of:
              0.030088935 = score(doc=5268,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.19345059 = fieldWeight in 5268, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5268)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This is an empirical, experimental investigation of the value of information, as perceived through the willingness to purchase information (WTP) and the willingness to sell it (accept payment, WTA). We examined the effects on the WTA-WTP ratio of source nature (expertise versus content) and source status (exclusive original versus copy). In an animated computer simulation of a business game, players could maximize their profits by making choices regarding inventory and prices. Participants were offered the chance to bid for buying or selling information regarding the weather that may affect demand. We find, as hypothesized, that the subjective value of information does indeed follow the predictions of endowment effect theory. The ratio of willingness to accept to willingness to purchase (WTA-WTP) recorded for the 294 subjects resembles the ratio common for private goods, rather than the intuitively expected unity. The WTA-WTP ratios diverged from unity more often and in a more pronounced manner for information traded in the original form rather than as a copy of the original, although even for copies the WTA-WTP ratio is still double. The results yield a value of about three for the WTA-WTP ratio for original information, whether the source is content or expertise. Copy information received a subjective value that was significantly lower than that of original information. The implications for both online trading and online sharing of information are discussed.
    Date
    22. 7.2006 15:09:35
  9. Thissen, F.: Merkmale effektiven Lernens : Virtuelle Lehrveranstaltungen - neue Formen des Lehrens und Lernens (2001) 0.01
    0.013107955 = product of:
      0.07864773 = sum of:
        0.07864773 = weight(_text_:computer in 5847) [ClassicSimilarity], result of:
          0.07864773 = score(doc=5847,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.48452407 = fieldWeight in 5847, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.09375 = fieldNorm(doc=5847)
      0.16666667 = coord(1/6)
    
    Theme
    Computer Based Training
  10. Bussmann, I.: Die Bibliothek als Atelier des innovativen Lernens (2001) 0.01
    0.013107955 = product of:
      0.07864773 = sum of:
        0.07864773 = weight(_text_:computer in 5848) [ClassicSimilarity], result of:
          0.07864773 = score(doc=5848,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.48452407 = fieldWeight in 5848, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.09375 = fieldNorm(doc=5848)
      0.16666667 = coord(1/6)
    
    Theme
    Computer Based Training
  11. Rieh, S.Y.: Judgment of information quality and cognitive authority in the Web (2002) 0.01
    0.011686969 = product of:
      0.07012181 = sum of:
        0.07012181 = weight(_text_:web in 202) [ClassicSimilarity], result of:
          0.07012181 = score(doc=202,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.48375595 = fieldWeight in 202, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=202)
      0.16666667 = coord(1/6)
    
    Abstract
    In the Web, making judgments of information quality and authority is a difficult task for most users because overall, there is no quality control mechanism. This study examines the problem of the judgment of information quality and cognitive authority by observing people's searching behavior in the Web. Its purpose is to understand the various factors that influence people's judgment of quality and authority in the Web, and the effects of those judgments on selection behaviors. Fifteen scholars from diverse disciplines participated, and data were collected combining verbal protocols during the searches, search logs, and postsearch interviews. It was found that the subjects made two distinct kinds of judgment: predictive judgment, and evaluative judgment. The factors influencing each judgment of quality and authority were identified in terms of characteristics of information objects, characteristics of sources, knowledge, situation, ranking in search output, and general assumption. Implications for Web design that will effectively support people's judgments of quality and authority are also discussed
  12. Weizenbaum, J.: Die Interpretation macht aus Signalen Informationen : Kinder und Computer (2001) 0.01
    0.011560131 = product of:
      0.069360785 = sum of:
        0.069360785 = weight(_text_:computer in 6372) [ClassicSimilarity], result of:
          0.069360785 = score(doc=6372,freq=14.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.42731008 = fieldWeight in 6372, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.03125 = fieldNorm(doc=6372)
      0.16666667 = coord(1/6)
    
    Abstract
    Immediately before the 4th Media Talks in Buckow I was in Mainz at Südwestfunk for a talk show. It dealt with computers and the Internet in German schools. All the participants spoke of possible catastrophes in this connection. Unlike me, however, the others meant first and foremost dangers for the market. I see a quite different catastrophe, and I do believe that in five to ten years we will recognize it in the schools. By then we will have used generations of children as guinea pigs without having improved the school as such. There have already been so many other things that were supposed to save the school: programmed learning, language laboratories, and all sorts of things. And then we saw that it does not work. And the same will happen with the computer. In America the resistance to computers in schools has meanwhile become very strong, especially among computing professionals at the universities. Anyone who wants to can find this on the Internet and will see that the skepticism is growing. I am certainly not afraid of the computer, for I began to occupy myself with computers 50 years ago. Nevertheless I set different priorities for the school. If you take a group of teachers and ask them what is important for their work, they say the pupils need a clean school with clean toilets, they need smaller classes and more teachers, they need supervision. But when you listen to Tony Blair or Mr. Schröder, it is only about spending so-and-so many hundreds of millions to connect the schools to the net. I remember once visiting the ministry of education in Vienna, and someone greeted us and said first of all that in the year 2001 there would be 50,000 computers in Austrian schools. And I told him: one day you will be sorry! "Why?", he asked me, quite astonished. But the question has to be asked the other way around: why should computers come into the schools at all?
  13. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.01
    0.011226802 = product of:
      0.033680405 = sum of:
        0.013066427 = weight(_text_:web in 1182) [ClassicSimilarity], result of:
          0.013066427 = score(doc=1182,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.09014259 = fieldWeight in 1182, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1182)
        0.02061398 = product of:
          0.04122796 = sum of:
            0.04122796 = weight(_text_:programs in 1182) [ClassicSimilarity], result of:
              0.04122796 = score(doc=1182,freq=2.0), product of:
                0.25748047 = queryWeight, product of:
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.044416238 = queryNorm
                0.16012073 = fieldWeight in 1182, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1182)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place, and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian", or about "Cambridge, Maryland" without hearing about "Cambridge, Massachusetts", Cambridge in the UK, or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature. The Natural Language Processing Research Group at the University of Sheffield has developed its open source General Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Management Architecture (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available, and more demanding users can draw upon published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding. The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN); yet, with the crucial exception of geographic location, the TGN records do not provide any machine readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
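
    The "predators at a water hole" strategy described above - waiting for facts to recur in a fixed linguistic pattern - can be sketched with a simple pattern matcher. A toy illustration under invented assumptions: the regex, class name, and example sentence are made up for this sketch, and real systems such as GATE or UIMA are far more sophisticated.

    // PatternExtraction.java -- a toy rule that turns a "NAME1 born at
    // NAME2 in DATE" sentence into a structured proposition.
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class PatternExtraction {
        // Capitalized word runs stand in for named entities; four digits
        // stand in for a date. Crude, but enough to show the idea.
        private static final Pattern BORN = Pattern.compile(
            "([A-Z][a-z]+(?: [A-Z][a-z]+)*) was born at ([A-Z][a-z]+) in (\\d{4})");

        public static void main(String[] args) {
            String text = "Johann Sebastian Bach was born at Eisenach in 1685.";
            Matcher m = BORN.matcher(text);
            while (m.find()) {
                // NAME1 -> person, NAME2 -> place, DATE -> year
                System.out.printf("person=%s place=%s year=%s%n",
                        m.group(1), m.group(2), m.group(3));
            }
        }
    }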
  14. Hapke, T.: 'In-formation' - Informationskompetenz und Lernen im Zeitalter digitaler Bibliotheken (2005) 0.01
    0.010994123 = product of:
      0.065964736 = sum of:
        0.065964736 = product of:
          0.13192947 = sum of:
            0.13192947 = weight(_text_:programs in 3689) [ClassicSimilarity], result of:
              0.13192947 = score(doc=3689,freq=2.0), product of:
                0.25748047 = queryWeight, product of:
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.044416238 = queryNorm
                0.5123863 = fieldWeight in 3689, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3689)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Source
    Bibliothekswissenschaft - quo vadis? Eine Disziplin zwischen Traditionen und Visionen: Programme - Modelle - Forschungsaufgaben / Library Science - quo vadis? A Discipline between Challenges and Opportunities: Programs - Models - Research Assignments. Mit einem Geleitwort von / With a Preface by Guy St. Clair Consulting Specialist for Knowledge Management and Learning, New York, NY und einem Vorwort von / and a Foreword by Georg Ruppelt Sprecher von / Speaker of BID - Bibliothek & Information Deutschland Bundesvereinigung Deutscher Bibliotheksund Informationsverbände e.V. Hrsg. von P. Hauke
  15. Bawden, D.: Information and digital literacies : a review of concepts (2001) 0.01
    0.010813512 = product of:
      0.06488107 = sum of:
        0.06488107 = weight(_text_:computer in 4479) [ClassicSimilarity], result of:
          0.06488107 = score(doc=4479,freq=4.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.39971197 = fieldWeight in 4479, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4479)
      0.16666667 = coord(1/6)
    
    Abstract
    The concepts of 'information literacy' and 'digital literacy' are described and reviewed by way of a literature survey and analysis. Related concepts, including computer literacy, library literacy, network literacy, Internet literacy and hyper-literacy, are also discussed, and their relationships elucidated. After a general introduction, the paper begins with the basic concept of 'literacy', which is then expanded to include newer forms of literacy, more suitable for complex information environments. Some of these, for example library, media and computer literacies, are based largely on specific skills, but have some extension beyond them. They lead to general concepts, such as information literacy and digital literacy, which are based on knowledge, perceptions and attitudes, though reliant on the simpler skills-based literacies.
  16. Bawden, D.: Information as self-organized complexity : a unifying viewpoint (2007) 0.01
    0.009633917 = product of:
      0.057803504 = sum of:
        0.057803504 = weight(_text_:wide in 649) [ClassicSimilarity], result of:
          0.057803504 = score(doc=649,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=649)
      0.16666667 = coord(1/6)
    
    Abstract
    Introduction. This short paper proposes that a unified concept of information as a form of self-organized complexity may be equally applicable to the physical, biological and human/social domains. This is seen as the evolutionary emergence of organized complexity in the physical universe, meaning in context in the biological domain, and understanding through knowledge in the human domain. Method. This study is based on analysis of literature from a wide range of disciplines. Conclusions. This perspective allows for the possibility that not only may the library/information sciences be able to draw insights from the natural sciences, but that library and information science research and scholarship may in turn contribute insights to these disciplines, normally thought of as more 'fundamental'.
  17. Karamuftuoglu, M.: Situating logic and information in information science (2009) 0.01
    0.009633917 = product of:
      0.057803504 = sum of:
        0.057803504 = weight(_text_:wide in 3111) [ClassicSimilarity], result of:
          0.057803504 = score(doc=3111,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 3111, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=3111)
      0.16666667 = coord(1/6)
    
    Abstract
    Information Science (IS) is commonly said to study collection, classification, storage, retrieval, and use of information. However, there is no consensus on what information is. This article examines some of the formal models of information and informational processes, namely, Situation Theory and Shannon's Information Theory, in terms of their suitability for providing a useful framework for studying information in IS. It is argued that formal models of information are concerned with mainly ontological aspects of information, whereas IS, because of its evaluative role with respect to semantic content, needs an epistemological conception of information. It is argued from this perspective that concepts of epistemological/aesthetic/ethical information are plausible, and that information science needs to rise to the challenge of studying many different conceptions of information embedded in different contexts. This goal requires exploration of a wide variety of tools from philosophy and logic.
  18. dpa: Struktur des Denkorgans wird bald entschlüsselt sein (2000) 0.01
    0.008510437 = product of:
      0.05106262 = sum of:
        0.05106262 = product of:
          0.10212524 = sum of:
            0.10212524 = weight(_text_:22 in 3952) [ClassicSimilarity], result of:
              0.10212524 = score(doc=3952,freq=4.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.6565931 = fieldWeight in 3952, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3952)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    17. 7.1996 9:33:22
    22. 7.2000 19:05:41
  19. Fallis, D.: Social epistemology and information science (2006) 0.01
    0.0080237165 = product of:
      0.048142295 = sum of:
        0.048142295 = product of:
          0.09628459 = sum of:
            0.09628459 = weight(_text_:22 in 4368) [ClassicSimilarity], result of:
              0.09628459 = score(doc=4368,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.61904186 = fieldWeight in 4368, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4368)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    13. 7.2008 19:22:28
  20. afp: Gehirn von Taxifahrern passt sich an : Größerer Hippocampus (2000) 0.01
    0.007020752 = product of:
      0.04212451 = sum of:
        0.04212451 = product of:
          0.08424902 = sum of:
            0.08424902 = weight(_text_:22 in 4496) [ClassicSimilarity], result of:
              0.08424902 = score(doc=4496,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.5416616 = fieldWeight in 4496, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4496)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    22. 7.2000 19:05:18