Search (9165 results, page 459 of 459)

  1. Reinartz, B.: Zwei Augen der Erkenntnis : Gehirnforscher behaupten, das bewusste Ich als Zentrum der Persönlichkeit sei nur eine raffinierte Täuschung (2002) 0.00
    0.0023093028 = product of:
      0.0069279084 = sum of:
        0.0069279084 = product of:
          0.013855817 = sum of:
            0.013855817 = weight(_text_:22 in 3917) [ClassicSimilarity], result of:
              0.013855817 = score(doc=3917,freq=2.0), product of:
                0.17906146 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051133685 = queryNorm
                0.07738023 = fieldWeight in 3917, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3917)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    17. 7.1996 9:33:22
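The score breakdown above follows Lucene's ClassicSimilarity (TF-IDF) model. A minimal Python sketch recomputing the explain values for entry 1 — the formulas are assumed from Lucene's documented ClassicSimilarity, not stated on this page:

```python
import math

# Lucene ClassicSimilarity (TF-IDF) building blocks, as assumed from
# Lucene's documented formulas; input values are taken from the explain
# tree above.
def classic_idf(doc_freq, max_docs):
    # idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def classic_tf(freq):
    # tf = sqrt(term frequency)
    return math.sqrt(freq)

doc_freq, max_docs = 3622, 44218          # idf(docFreq=3622, maxDocs=44218)
freq, field_norm, query_norm = 2.0, 0.015625, 0.051133685

idf = classic_idf(doc_freq, max_docs)               # ≈ 3.5018296
query_weight = idf * query_norm                     # ≈ 0.17906146
field_weight = classic_tf(freq) * idf * field_norm  # ≈ 0.07738023
weight = query_weight * field_weight                # ≈ 0.013855817
score = weight * 0.5 * (1.0 / 3.0)                  # coord(1/2) * coord(1/3)
```

Multiplying the final weight by the two coord factors reproduces the listed score of 0.0023093028.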
  2. Booth, P.F.: Indexing : the manual of good practice (2001) 0.00
    0.0021394813 = product of:
      0.0064184438 = sum of:
        0.0064184438 = product of:
          0.0128368875 = sum of:
            0.0128368875 = weight(_text_:management in 1968) [ClassicSimilarity], result of:
              0.0128368875 = score(doc=1968,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.07448071 = fieldWeight in 1968, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1968)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
Admittedly, the index to this book is exemplary, and the publisher deserves thanks for granting the index nine percent of the book's extent. But even here one still misses headings such as "interpretation" or "Cutter's rule" (for the necessity of using the best-matching headings from the index-language vocabulary when indexing), all of them topics treated in the book. Pleasing, too, is the undogmatic manner in which various formal indexing variants are placed side by side as admissible alternatives. It is unconventional, for example, that in the index to this book a heading is broken down into subentries even when it has fewer than five or six locators. Equally pleasing is the impartiality with which the strengths of uninterpreted full-text processing are highlighted where they can come into play, for example in simple recall and name searches. One easily overlooked piece of advice deserves emphasis for anyone who conducts professional or private correspondence or reads specialist literature: it is advisable to begin at least a rudimentary indexing of one's papers early on, not only once one's private collection has grown out of control and failed searches accumulate and begin to fray the nerves. The memory of the wording of the documents sought, on which one can still rely at first, soon fades and is then no longer available as a search aid. For personal use, however, one will not need as thorough an introduction to the theory and practice of indexing as this book offers; good short treatments are available elsewhere.
Anyone who has read this book as a newcomer to indexing will henceforth see indexing work and a book index with different eyes, namely as an essential part of a book, especially in the case of a reference work. Even a brief look into the book could warn management, a publisher, or an author against believing anyone who claims to be able to replace the indexer with a fully automatic computer program. Indexing includes translation as one of its steps, namely translating the essence of a text or an image into an indexing language with its controlled vocabulary. What one knows in practice of fully automatic translation, even from the most advanced programs to date, should serve as a warning here.
  3. Burnett, R.: How images think (2004) 0.00
    0.0021394813 = product of:
      0.0064184438 = sum of:
        0.0064184438 = product of:
          0.0128368875 = sum of:
            0.0128368875 = weight(_text_:management in 3884) [ClassicSimilarity], result of:
              0.0128368875 = score(doc=3884,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.07448071 = fieldWeight in 3884, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3884)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
The sixth chapter looks at this interfacing of humans and machines and begins with a series of questions. The crucial one, to my mind, is this: "Does the distinction between humans and technology contribute to a lack of understanding of the continuous interrelationship and interdependence that exists between humans and all of their creations?" (p. 125) Burnett suggests that using biological or mechanical views of the computer/mind (the computer as an input/output device) limits our understanding of the ways in which we interact with machines. He thus points to the role of language, the conversations (including the ones we held with machines when we were children) that seem to suggest a wholly different kind of relationship. Peer-to-peer communication (P2P), which is arguably the most widely used exchange mode for images today, is the subject of chapter seven. The issue here is whether P2P fosters community building or community destruction. Burnett argues that the trope of community can be used to explore the flow of historical events that make up a continuum, from 17th-century letter writing to e-mail. In the new media (Burnett uses the example of popular music, which can be sampled and re-edited to create new compositions) the interpretive space is more flexible. Private networks can be set up, and the process of information retrieval (on which Burnett has already expended considerable space in the early chapters) involves much more visualization. P2P networks, as Burnett points out, are about information management. They are about the harmony between machines and humans, and constitute a new ecology of communications. Turning to computer games, Burnett looks at the processes of interaction, experience, and reconstruction in simulated artificial-life worlds, animations, and video images. For Burnett (like Andrew Darley, 2000, and Richard Doyle, 2003) the interactivity of the new media games suggests a greater degree of engagement with image-worlds.
Today many facets of looking, listening, and gazing can be turned into aesthetic forms with the new media. Digital technology literally reanimates the world, as Burnett demonstrates in his concluding chapter. He concludes that images no longer simply represent the world; they shape our very interaction with it and become the foundation for our understanding of the spaces, places, and historical moments that we inhabit. Burnett closes his book with the suggestion that intelligence is now a distributed phenomenon (here closely paralleling Katherine Hayles' argument that subjectivity is dispersed through the cybernetic circuit, 1999). There is no one center of information or knowledge. Intersections of human creativity, work, and connectivity "spread" (Burnett's term) "intelligence through the use of mediated devices and images, as well as sounds" (p. 221).
  4. Ratzan, L.: Understanding information systems : what they do and why we need them (2004) 0.00
    0.0021394813 = product of:
      0.0064184438 = sum of:
        0.0064184438 = product of:
          0.0128368875 = sum of:
            0.0128368875 = weight(_text_:management in 4581) [ClassicSimilarity], result of:
              0.0128368875 = score(doc=4581,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.07448071 = fieldWeight in 4581, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.015625 = fieldNorm(doc=4581)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
In "Organizing Information" various fundamental organizational schemes are compared. These include hierarchical, relational, hypertext, and random-access models. Each is described initially and then expanded on by listing advantages and disadvantages. This comparative format, not found elsewhere in the book, improves access to the subject and overall understanding. The author then affords considerable space to Boolean searching in the chapter "Retrieving Information." Throughout this chapter, the intricacies and problems of pattern matching and relevance are highlighted. The author elucidates the fact that document retrieval by simple pattern matching is not the same as problem solving. Therefore, "always know the nature of the problem you are trying to solve" (p. 56). This chapter is one of the more important ones in the book, covering a large topic swiftly and concisely. Chapters 5 through 11 then delve deeper into various specific issues of information systems. The chapters on securing and concealing information are exceptionally good. Without mentioning specific technologies, Mr. Ratzan is able to clearly present fundamental aspects of information security. Principles of backup security, password management, and encryption are also discussed in some detail. The latter is illustrated with some fascinating examples, from the Navajo Code Talkers to invisible ink and others. The chapters on measuring, counting, and numbering information complement each other well. Some of the more math-centric discussions and examples are found here. "Measuring Information" begins with a brief overview of bibliometrics and then moves quickly through Lotka's law, Zipf's law, and Bradford's law. For an LIS student, exposure to these topics is invaluable. Baseball statistics and web metrics are used for illustration purposes towards the end. In "Counting Information," counting devices and methods are first presented, followed by discussion of the Fibonacci sequence and the golden ratio.
This relatively long chapter ends with examples of the Tower of Hanoi, the chances of winning the lottery, and poker odds. The bulk of "Numbering Information" centers on prime numbers and pi. This chapter reads more like something out of an arithmetic book and seems somewhat extraneous here. Three specific types of information systems are presented in the second half of the book, each afforded its own chapter. These examples are universal enough not to become dated or irrelevant over time. "The Computer as an Information System" is relatively short and focuses on bits, bytes, and data compression. Considering the Internet as an information system (chapter 13) is an interesting illustration. It brings up issues of IP addressing and the "privilege-vs.-right" access issue. We are reminded that the distinction between information rights and privileges is often unclear. A highlight of this chapter is the discussion of metaphors people use to describe the Internet, derived from the author's own research. He has found that people have varying mental models of the Internet, potentially affecting its perception and subsequent use.
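The reviewer's point that retrieval by simple pattern matching is not problem solving is easy to demonstrate in code. A minimal Boolean-retrieval sketch over an inverted index; the documents and query terms are invented for illustration, not taken from the book:

```python
# Minimal Boolean retrieval over an inverted index (illustrative data).
docs = {
    1: "information systems store and retrieve information",
    2: "boolean searching matches patterns not problems",
    3: "retrieval by pattern matching",
}

# Build the inverted index: term -> set of doc ids containing it.
index = {}
for doc_id, text in docs.items():
    for term in text.split():
        index.setdefault(term, set()).add(doc_id)

def AND(a, b): return index.get(a, set()) & index.get(b, set())
def OR(a, b):  return index.get(a, set()) | index.get(b, set())
def NOT(a):    return set(docs) - index.get(a, set())

hits_and = AND("retrieval", "pattern")   # only doc 3: "patterns" != "pattern"
hits_or  = OR("boolean", "systems")      # docs 1 and 2
```

Note that the query term "pattern" misses document 2, which contains only the literal string "patterns" — exactly the gap between matching patterns and solving the user's actual problem.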
  5. Nuovo soggettario : guida al sistema italiano di indicizzazione per soggetto, prototipo del thesaurus (2007) 0.00
    0.0021394813 = product of:
      0.0064184438 = sum of:
        0.0064184438 = product of:
          0.0128368875 = sum of:
            0.0128368875 = weight(_text_:management in 664) [ClassicSimilarity], result of:
              0.0128368875 = score(doc=664,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.07448071 = fieldWeight in 664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.015625 = fieldNorm(doc=664)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
The guide Nuovo soggettario was presented on February 8, 2007 at a one-day seminar in the Palazzo Vecchio, Florence, in front of some 500 spellbound people. The Nuovo soggettario comes in two parts: the guide in book form and an accompanying CD-ROM, by way of which a prototype of the thesaurus may be accessed on the Internet. In the former, rules are stated; the latter contains a PDF version of the guide and the first installment of the controlled vocabulary, which is to be further enriched and refined. Syntactic instructions (general application guidelines, as well as special annotations of particular terms) and the compiled subject-strings file have yet to be added. The essentials of the new system are: 1) an analytic-synthetic approach; 2) use of terms (units of controlled vocabulary) and subject strings (which represent subjects by combining terms in linear order to form syntactic relationships), instead of main headings and subdivisions; 3) specificity of terms and strings, with a view to the co-extension of subject string and subject matter; and 4) a clear distinction between semantic and syntactic relationships, with full control of both. Basic features of the vocabulary include the uniformity and univocality of terms and thesaural management of a priori (semantic) relationships. Starting from its definition, each term can be categorially analyzed: four macro-categories are represented (agents, actions, things, time), for which there are subcategories called facets (e.g., for actions: activities, disciplines, processes), which in turn have sub-facets. Morphological instructions conform to national and international standards, including BS 8723, ANSI/NISO Z39.19 and the IFLA draft Guidelines for multilingual thesauri, even for syntactic factorization. Different kinds of semantic relationships are represented thoroughly, and particular attention is paid to poly-hierarchies, which are used only in moderation: both top terms must actually be relevant.
Node labels are used to specify the principle of division applied. Instance relationships are also used.
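The semantic apparatus described above (broader-term links, poly-hierarchies) can be sketched as a small data structure. The terms below are invented for illustration and are not from the Nuovo soggettario vocabulary:

```python
from collections import defaultdict

# term -> set of broader terms (BT). A poly-hierarchy simply means a
# term may carry more than one broader term.
broader = defaultdict(set)

def add_bt(term, bt):
    broader[term].add(bt)

# Invented example terms (not from the actual vocabulary):
add_bt("watercolour painting", "painting")               # first hierarchy
add_bt("watercolour painting", "water-based techniques") # second hierarchy
add_bt("painting", "arts")

def broader_closure(term):
    """All ancestors of a term reachable via BT links."""
    seen, stack = set(), [term]
    while stack:
        for bt in broader[stack.pop()]:
            if bt not in seen:
                seen.add(bt)
                stack.append(bt)
    return seen
```

Walking the closure of the poly-hierarchical term collects ancestors from both hierarchies, which is why the guide's caution that both top terms must actually be relevant matters in practice.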

Types

  • a 7572
  • m 998
  • s 417
  • el 401
  • r 75
  • x 68
  • b 60
  • i 51
  • n 17
  • ? 11
  • p 8
  • d 6
  • h 3
  • u 2
  • z 2
  • au 1
  • l 1