Search (315 results, page 16 of 16)

  • type_ss:"el"
  1. Bundesregierung: Digitale Bildung voranbringen (2016) 0.01
    0.005399848 = product of:
      0.010799696 = sum of:
        0.010799696 = product of:
          0.021599391 = sum of:
            0.021599391 = weight(_text_:22 in 3451) [ClassicSimilarity], result of:
              0.021599391 = score(doc=3451,freq=2.0), product of:
                0.15950468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045548957 = queryNorm
                0.1354154 = fieldWeight in 3451, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3451)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.2017 17:14:47
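The indented trees under each hit are Lucene "ClassicSimilarity" (TF-IDF) score explanations. As a sketch of how those numbers combine, the following Python reproduces the score of result 1 from the values shown above; the helper names are mine, but the formulas are the standard ClassicSimilarity ones:

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    """ClassicSimilarity idf = 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq: float) -> float:
    """ClassicSimilarity tf = sqrt(termFreq)."""
    return math.sqrt(freq)

# Inputs copied from the explain tree for term "22" in doc 3451
query_norm = 0.045548957
field_norm = 0.02734375

idf_22 = idf(doc_freq=3622, max_docs=44218)   # ≈ 3.5018296
query_weight = idf_22 * query_norm            # ≈ 0.15950468
field_weight = tf(2.0) * idf_22 * field_norm  # ≈ 0.1354154
term_score = query_weight * field_weight      # ≈ 0.021599391

# The two nested coord(1/2) factors each halve the term score
final_score = term_score * 0.5 * 0.5          # ≈ 0.005399848
print(final_score)
```

Rounded to two decimals this gives the 0.01 displayed next to the first hit.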
  2. Oberhauser, O.: Card-Image Public Access Catalogues (CIPACs) : a critical consideration of a cost-effective alternative to full retrospective catalogue conversion (2002) 0.01
    0.00523938 = product of:
      0.01047876 = sum of:
        0.01047876 = product of:
          0.02095752 = sum of:
            0.02095752 = weight(_text_:c in 1703) [ClassicSimilarity], result of:
              0.02095752 = score(doc=1703,freq=2.0), product of:
                0.15711682 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.045548957 = queryNorm
                0.13338815 = fieldWeight in 1703, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1703)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Review in: ABI-Technik 21(2002) H.3, S.292 (E. Pietzsch): "With his diploma thesis, Otto C. Oberhauser has presented an impressive analysis of digitized card-image catalogues (CIPACs). The work offers a wealth of data and statistics that have not been available before. Librarians considering the digitization of their catalogues will find in it a unique basis for decision-making. After an introductory chapter, Oberhauser first surveys a selection of CIPACs available worldwide and their indexing methods (binary search, partial indexing, searching OCR data), and draws comparisons regarding geographical distribution, size, software, navigation and other properties. He then describes and analyzes implementation issues, starting with the reasons that may prompt digitization: costs, implementation time, improved access, savings in shelf space. He continues with technical aspects such as scanning and quality control, image standards, OCR, manual post-processing, and server technology. Here he also addresses the rather obstructive characteristics of older catalogues, as well as web presentation and integration with existing OPACs. To one important aspect, namely the assessment by the most important target group, the library users, Oberhauser devotes a field study of his own, whose results he analyzes in depth in the final chapter. Appendices on the data-collection method and individual descriptions of many catalogues round off the work. All in all, I can only call this the most impressive collection of data, statistics and analyses on the subject of CIPACs that I have encountered so far.
Let me single out one nicely elaborated point, namely the extensive fragmentation among the software systems in use: at present we can roughly distinguish between turnkey solutions (a contracted firm, acting as general contractor, carries out all tasks from digitization to delivery of the finished application) and split solutions (digitization is contracted separately from indexing and software development, or is carried out in-house). The latter require in-house project management. Yet in-house software development in particular can lead to solutions that are in no way inferior to commercial offerings. It is only a pity that the many in-house developments have not yet led to initiatives which, similar to public-domain software, aim at an "optimal", low-cost and widely accepted software solution. A few critical remarks should nevertheless not go unmentioned. For example, there is no differentiation between "guide card" systems, i.e. those that index only every 20th or 50th card, and systems with complete indexing of all card headings, even though this far-reaching design decision leads to considerable cost shifts between catalogue creation and later use. In the statistical evaluation of the field study, too, I would have liked a finer differentiation by CIPAC type or by library. For instance, more than half of the users surveyed stated that the operation of the CIPAC was initially hard to understand or that using it was time-consuming. It remains open, however, whether there are differences between the various implementation types.
  3. Thomas, C.; McDonald, R.H.; McDowell, C.S.: Overview - Repositories by the numbers (2007) 0.01
    0.00523938 = product of:
      0.01047876 = sum of:
        0.01047876 = product of:
          0.02095752 = sum of:
            0.02095752 = weight(_text_:c in 1169) [ClassicSimilarity], result of:
              0.02095752 = score(doc=1169,freq=2.0), product of:
                0.15711682 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.045548957 = queryNorm
                0.13338815 = fieldWeight in 1169, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1169)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Arms, W.Y.; Blanchi, C.; Overly, E.A.: An architecture for information in digital libraries (1997) 0.01
    0.00523938 = product of:
      0.01047876 = sum of:
        0.01047876 = product of:
          0.02095752 = sum of:
            0.02095752 = weight(_text_:c in 1260) [ClassicSimilarity], result of:
              0.02095752 = score(doc=1260,freq=2.0), product of:
                0.15711682 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.045548957 = queryNorm
                0.13338815 = fieldWeight in 1260, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1260)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Daniel Jr., R.; Lagoze, C.: Extending the Warwick framework : from metadata containers to active digital objects (1997) 0.01
    0.00523938 = product of:
      0.01047876 = sum of:
        0.01047876 = product of:
          0.02095752 = sum of:
            0.02095752 = weight(_text_:c in 1264) [ClassicSimilarity], result of:
              0.02095752 = score(doc=1264,freq=2.0), product of:
                0.15711682 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.045548957 = queryNorm
                0.13338815 = fieldWeight in 1264, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1264)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  6. ALA / Subcommittee on Subject Relationships/Reference Structures: Final Report to the ALCTS/CCS Subject Analysis Committee (1997) 0.01
    0.00523938 = product of:
      0.01047876 = sum of:
        0.01047876 = product of:
          0.02095752 = sum of:
            0.02095752 = weight(_text_:c in 1800) [ClassicSimilarity], result of:
              0.02095752 = score(doc=1800,freq=2.0), product of:
                0.15711682 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.045548957 = queryNorm
                0.13338815 = fieldWeight in 1800, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1800)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Contains:
    Appendix A: Subcommittee on Subject Relationships/Reference Structures - Report to the ALCTS/CCS Subject Analysis Committee, July 1996
    Appendix B (part 1): Taxonomy of Subject Relationships. Compiled by Dee Michel with the assistance of Pat Kuhr, June 1996 draft (alphabetical display) (separately at: http://web2.ala.org/ala/alctscontent/CCS/committees/subjectanalysis/subjectrelations/msrscu2.pdf)
    Appendix B (part 2): Taxonomy of Subject Relationships. Compiled by Dee Michel with the assistance of Pat Kuhr, June 1996 draft (hierarchical display)
    Appendix C: Checklist of Candidate Subject Relationships for Information Retrieval. Compiled by Dee Michel, Pat Kuhr, and Jane Greenberg; edited by Greg Wool, June 1997
    Appendix D: Review of Reference Displays in Selected CD-ROM Abstracts and Indexes, by Harriette Hemmasi and Steven Riel
    Appendix E: Analysis of Relationships in Six LC Subject Authority Records, by Harriette Hemmasi and Gary Strawn
    Appendix F: Report of a Preliminary Survey of Subject Referencing in OPACs, by Gregory Wool
    Appendix G: LC Subject Referencing in OPACs--Why Bother?, by Gregory Wool
    Appendix H: Research Needs on Subject Relationships and Reference Structures in Information Access, compiled by Jane Greenberg and Steven Riel with contributions from Dee Michel and others, edited by Gregory Wool
    Appendix I: Bibliography on Subject Relationships, compiled mostly by Dee Michel with additional contributions from Jane Greenberg, Steven Riel, and Gregory Wool
  7. Onofri, A.: Concepts in context (2013) 0.01
    0.00523938 = product of:
      0.01047876 = sum of:
        0.01047876 = product of:
          0.02095752 = sum of:
            0.02095752 = weight(_text_:c in 1077) [ClassicSimilarity], result of:
              0.02095752 = score(doc=1077,freq=2.0), product of:
                0.15711682 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.045548957 = queryNorm
                0.13338815 = fieldWeight in 1077, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1077)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    My thesis discusses two related problems that have taken center stage in the recent literature on concepts: 1) What are the individuation conditions of concepts? Under what conditions is a concept C1 the same concept as a concept C2? 2) What are the possession conditions of concepts? What conditions must be satisfied for a thinker to have a concept C? The thesis defends a novel account of concepts, which I call "pluralist-contextualist": 1) Pluralism: Different concepts have different kinds of individuation and possession conditions: some concepts are individuated more "coarsely", have less demanding possession conditions and are widely shared, while other concepts are individuated more "finely" and not shared. 2) Contextualism: When a speaker ascribes a propositional attitude to a subject S, or uses his ascription to explain/predict S's behavior, the speaker's intentions in the relevant context determine the correct individuation conditions for the concepts involved in his report. In chapters 1-3 I defend a contextualist, non-Millian theory of propositional attitude ascriptions. Then, I show how contextualism can be used to offer a novel perspective on the problem of concept individuation/possession. More specifically, I employ contextualism to provide a new, more effective argument for Fodor's "publicity principle": if contextualism is true, then certain specific concepts must be shared in order for interpersonally applicable psychological generalizations to be possible. In chapters 4-5 I raise a tension between publicity and another widely endorsed principle, the "Fregean constraint" (FC): subjects who are unaware of certain identity facts and find themselves in so-called "Frege cases" must have distinct concepts for the relevant object x. For instance: the ancient astronomers had distinct concepts (HESPERUS/PHOSPHORUS) for the same object (the planet Venus).
First, I examine some leading theories of concepts and argue that they cannot meet both of our constraints at the same time. Then, I offer principled reasons to think that no theory can satisfy (FC) while also respecting publicity. (FC) appears to require a form of holism, on which a concept is individuated by its global inferential role in a subject S and can thus only be shared by someone who has exactly the same inferential dispositions as S. This explains the tension between publicity and (FC), since holism is clearly incompatible with concept shareability. To solve the tension, I suggest adopting my pluralist-contextualist proposal: concepts involved in Frege cases are holistically individuated and not public, while other concepts are more coarsely individuated and widely shared; given this "plurality" of concepts, we will then need contextual factors (speakers' intentions) to "select" the specific concepts to be employed in our intentional generalizations in the relevant contexts. In chapter 6 I develop the view further by contrasting it with some rival accounts. First, I examine a very different kind of pluralism about concepts, which has been recently defended by Daniel Weiskopf, and argue that it is insufficiently radical. Then, I consider the inferentialist accounts defended by authors like Peacocke, Rey and Jackson. Such views, I argue, are committed to an implausible picture of reference determination, on which our inferential dispositions fix the reference of our concepts: this leads to wrong predictions in all those cases of scientific disagreement where two parties have very different inferential dispositions and yet seem to refer to the same natural kind.
  8. Foerster, H. von; Müller, A.; Müller, K.H.: Rück- und Vorschauen : Heinz von Foerster im Gespräch mit Albert Müller und Karl H. Müller (2001) 0.00
    0.004628441 = product of:
      0.009256882 = sum of:
        0.009256882 = product of:
          0.018513763 = sum of:
            0.018513763 = weight(_text_:22 in 5988) [ClassicSimilarity], result of:
              0.018513763 = score(doc=5988,freq=2.0), product of:
                0.15950468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045548957 = queryNorm
                0.116070345 = fieldWeight in 5988, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=5988)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10. 9.2006 17:22:54
  9. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google print for libraries (2005) 0.00
    0.004628441 = product of:
      0.009256882 = sum of:
        0.009256882 = product of:
          0.018513763 = sum of:
            0.018513763 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
              0.018513763 = score(doc=1184,freq=2.0), product of:
                0.15950468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045548957 = queryNorm
                0.116070345 = fieldWeight in 1184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1184)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    26.12.2011 14:08:22
  10. Graphic details : a scientific study of the importance of diagrams to science (2016) 0.00
    0.004628441 = product of:
      0.009256882 = sum of:
        0.009256882 = product of:
          0.018513763 = sum of:
            0.018513763 = weight(_text_:22 in 3035) [ClassicSimilarity], result of:
              0.018513763 = score(doc=3035,freq=2.0), product of:
                0.15950468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045548957 = queryNorm
                0.116070345 = fieldWeight in 3035, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3035)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    As the team describe in a paper posted (http://arxiv.org/abs/1605.04951) on arXiv, they found that figures did indeed matter, but not all in the same way. An average paper in PubMed Central has about one diagram for every three pages and gets 1.67 citations. Papers with more diagrams per page and, to a lesser extent, plots per page tended to be more influential (on average, a paper accrued two more citations for every extra diagram per page, and one more for every extra plot per page). By contrast, including photographs and equations seemed to decrease the chances of a paper being cited by others. That agrees with a study from 2012, whose authors counted (by hand) the number of mathematical expressions in over 600 biology papers and found that each additional equation per page reduced the number of citations a paper received by 22%. This does not mean that researchers should rush to include more diagrams in their next paper. Dr Howe has not shown what is behind the effect, which may merely be one of correlation, rather than causation. It could, for example, be that papers with lots of diagrams tend to be those that illustrate new concepts, and thus start a whole new field of inquiry. Such papers will certainly be cited a lot. On the other hand, the presence of equations really might reduce citations. Biologists (as are most of those who write and read the papers in PubMed Central) are notoriously maths-averse. If that is the case, looking in a physics archive would probably produce a different result.
  11. DeSilva, J.M.; Traniello, J.F.A.; Claxton, A.G.; Fannin, L.D.: When and why did human brains decrease in size? : a new change-point analysis and insights from brain evolution in ants (2021) 0.00
    0.004628441 = product of:
      0.009256882 = sum of:
        0.009256882 = product of:
          0.018513763 = sum of:
            0.018513763 = weight(_text_:22 in 405) [ClassicSimilarity], result of:
              0.018513763 = score(doc=405,freq=2.0), product of:
                0.15950468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045548957 = queryNorm
                0.116070345 = fieldWeight in 405, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=405)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Frontiers in ecology and evolution, 22 October 2021 [https://www.frontiersin.org/articles/10.3389/fevo.2021.742639/full]
  12. Rötzer, F.: Computerspiele verbessern die Aufmerksamkeit : Nach einer Untersuchung von Kognitionswissenschaftlern schulen Shooter-Spiele manche Leistungen der visuellen Aufmerksamkeit (2003) 0.00
    0.0044908975 = product of:
      0.008981795 = sum of:
        0.008981795 = product of:
          0.01796359 = sum of:
            0.01796359 = weight(_text_:c in 1643) [ClassicSimilarity], result of:
              0.01796359 = score(doc=1643,freq=2.0), product of:
                0.15711682 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.045548957 = queryNorm
                0.114332706 = fieldWeight in 1643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1643)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Anyone who sits in front of a computer playing for hours every day trains certain abilities (and neglects others, which atrophy, though that would be much harder to demonstrate). Computer games demand, for example, that their users orient themselves actively and visually, quickly and with sustained concentration. In addition, what is seen (or heard) must be converted quickly into reactions, which fosters sensorimotor abilities such as eye-hand coordination. That, however, was not the subject of the study. According to the experiments of the cognitive scientists at the Center for Visual Sciences at the University of Rochester, New York, computer gamers not only learn to master specific tasks but can also transfer what they have learned to other tasks, thereby strengthening visual attention in general. As C. Shawn Green and Daphne Bavelier report in Nature, the subjects studied were people between 18 and 23 years of age who had played action games such as Grand Theft Auto 3, Half-Life, Counter-Strike, 007 or Spider-Man at least four days a week and at least one hour a day during the previous six months. There were, however, no women among them: the researchers could not find any female students with the required shooter-game experience. Their test performance was compared with that of non-players. As a control, non-players, this time including women, had to train on shooter games for at least one hour a day on 10 consecutive days, which did in fact improve their visual attention performance. That may indeed be helpful for some tasks, but it improves neither attention in general nor other cognitive abilities unrelated to visual orientation and reaction.
Computer gamers with action-game experience have, for example, a higher attention capacity, one that is exhausted far less quickly than in non-players. Even after working through demanding tasks they retain the ability to process distractions alongside the task. They can, for instance, also remember longer sequences of digits shown to them briefly on the screen. Moreover, the players were far better than non-players at spreading their attention across a spatial field, even in unfamiliar situations: objects first had to be identified in a dense field, and then a wider surrounding area had to be explored by rapidly shifting focus. The pressure to react quickly to several visual stimuli fosters, according to the researchers, the ability to process stimuli over time and to avoid attentional "bottleneck" situations. Experienced players are also better able to jump from one task to the next. As the researchers themselves note, one could of course object that these abilities do not arise from playing computer games; rather, people with better visual attention and sensorimotor coordination may simply prefer this kind of game because it rewards them more than it does the less adept. For this reason, a group of non-players was asked to play "Medal of Honor" for at least one hour a day on ten consecutive days, while a control group was given "Tetris". Tetris demands quite different skills from a shooter game: the user must direct his attention to a single object at any given time, whereas the attention of shooter players must be distributed across the whole space and must constantly reckon with something unexpected appearing from any corner.
If attention were trained specifically by a game's demands, Tetris players would therefore have to perform differently in the visual-attention tests.
  13. Laaff, M.: Googles genialer Urahn (2011) 0.00
    0.0038570343 = product of:
      0.0077140685 = sum of:
        0.0077140685 = product of:
          0.015428137 = sum of:
            0.015428137 = weight(_text_:22 in 4610) [ClassicSimilarity], result of:
              0.015428137 = score(doc=4610,freq=2.0), product of:
                0.15950468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045548957 = queryNorm
                0.09672529 = fieldWeight in 4610, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4610)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    24.10.2008 14:19:22
  14. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.00
    0.0037424145 = product of:
      0.007484829 = sum of:
        0.007484829 = product of:
          0.014969658 = sum of:
            0.014969658 = weight(_text_:c in 1182) [ClassicSimilarity], result of:
              0.014969658 = score(doc=1182,freq=2.0), product of:
                0.15711682 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.045548957 = queryNorm
                0.09527725 = fieldWeight in 1182, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1182)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian" or about "Cambridge, Maryland", without hearing about "Cambridge, Massachusetts", Cambridge in the UK or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature.
The Natural Language Processing Research Group at the University of Sheffield has developed its open source Generalized Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Analysis and Search (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available and more demanding users can draw upon published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding. The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN), but, with the crucial exception of geographic location, the TGN records do not provide any machine readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
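The "predators at a water hole" approach the abstract describes, waiting for statements to reappear in a fixed linguistic pattern and converting them into propositions, can be sketched with a single regular expression. This is a hypothetical illustration, not the authors' code; the pattern, helper name, and example sentences are mine:

```python
import re

# Template: "NAME1 (was) born at/in NAME2 in DATE" -> structured proposition
PATTERN = re.compile(
    r"(?P<person>[A-Z][\w.\s]+?) was born (?:at|in) "
    r"(?P<place>[A-Z][\w\s,]+?) in (?P<year>\d{4})"
)

def extract_birth_facts(text: str) -> list[dict]:
    """Return one {person, place, year} proposition per template match."""
    return [m.groupdict() for m in PATTERN.finditer(text)]

facts = extract_birth_facts(
    "C. P. E. Bach was born in Weimar in 1714. "
    "Harriet Tubman was born in Dorchester County in 1822."
)
print(facts)
```

A real system would need many pattern variants plus named-entity and gazetteer checks of the kind the article goes on to discuss; this only shows the template-to-proposition step.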
  15. Lagoze, C.: Keeping Dublin Core simple : Cross-domain discovery or resource description? (2001) 0.00
    0.0037424145 = product of:
      0.007484829 = sum of:
        0.007484829 = product of:
          0.014969658 = sum of:
            0.014969658 = weight(_text_:c in 1216) [ClassicSimilarity], result of:
              0.014969658 = score(doc=1216,freq=2.0), product of:
                0.15711682 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.045548957 = queryNorm
                0.09527725 = fieldWeight in 1216, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1216)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    

Languages

  • e 160
  • d 144
  • a 2
  • el 2
  • i 1
  • nl 1

Types

  • a 164
  • i 10
  • m 7
  • s 6
  • r 5
  • x 5
  • b 3
  • n 3
  • p 3