Search (318 results, page 16 of 16)

  • type_ss:"m"
  • year_i:[2000 TO 2010}
  1. Saxton, M.L.; Richardson, J.V. Jr.: Understanding reference transactions : transforming an art into a science (2002) 0.00
    0.0034032771 = product of:
      0.0068065543 = sum of:
        0.0068065543 = product of:
          0.013613109 = sum of:
            0.013613109 = weight(_text_:systems in 2214) [ClassicSimilarity], result of:
              0.013613109 = score(doc=2214,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.08488525 = fieldWeight in 2214, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2214)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
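    The explanation tree above (repeated for each hit below with its own document number) is Lucene's "explain" output for the ClassicSimilarity TF-IDF scorer; likewise, the mixed brackets in the year facet, [2000 TO 2010}, are Lucene range syntax for an inclusive lower and exclusive upper bound. As a minimal sketch of how the tree's numbers fit together, the following Python reproduces the block above, assuming Lucene's standard formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq + 1)); queryNorm and fieldNorm are copied from the tree, since they depend on the full query and on index-time field statistics not shown here.

      import math

      # Recompute the ClassicSimilarity tree above (doc 2214), assuming
      # Lucene's standard formulas: tf = sqrt(freq),
      # idf = 1 + ln(maxDocs / (docFreq + 1)).
      freq, doc_freq, max_docs = 2.0, 5561, 44218
      query_norm, field_norm = 0.052184064, 0.01953125  # copied from the tree

      tf = math.sqrt(freq)                             # 1.4142135
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.0731742
      query_weight = idf * query_norm                  # 0.16037072
      field_weight = tf * idf * field_norm             # 0.08488525
      term_score = query_weight * field_weight         # 0.013613109
      final = term_score * 0.5 * 0.5                   # two coord(1/2) factors
      print(f"{final:.10f}")                           # ~0.0034032771

    The two coord(1/2) factors record that only one of two query clauses matched this document, halving the score twice to give the listed 0.0034032771 (rounded to 0.00 in the result headings).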
    
    Footnote
    The authors also do a good job of explaining the process of complex model building, making the text a useful resource for dissertation writers. The next two chapters focus on the results of the study. Chapter 5 presents the study findings and introduces four different models of the reference process, derived from the study results. Chapter 6 adds analysis to the discussion of the results. Unfortunately, the "Implications for Practice," "Implications for Research," and "Implications for Education" sections are disappointingly brief (only a few paragraphs each), limiting the utility of the volume to practitioners. Finally, Chapter 7 considers the applicability of systems analysis in modeling the reference process. It also includes a series of data flow diagrams that depict the reference process as an alternative to flowchart depiction. Throughout the book, the authors claim that their study is more complete than any to come before it, since previous studies tended to focus on ready reference questions rather than full-blown reference queries and directional queries, and since previous studies generally excluded telephone reference. They also challenge the long-standing "55% Rule," asserting that "Library users indicate high satisfaction even when they do not find what they want or are not given accurate information" (Saxton & Richardson, 2002, p. 95). Overall, Saxton and Richardson found the major variables that had a statistically significant effect on the outcome measures to be: (1) the extent to which the librarian followed the RUSA Behavioral Guidelines; (2) the difficulty of the query; (3) the user's education level; (4) the user's familiarity with the library; and (5) the level of reference service provided. None of the other variables considered, most notably the librarian's experience, the librarian's education level, and the size of the collection, had a statistically significant effect on the outcome measures.
  2. Software for Indexing (2003) 0.00
    0.0034032771 = product of:
      0.0068065543 = sum of:
        0.0068065543 = product of:
          0.013613109 = sum of:
            0.013613109 = weight(_text_:systems in 2294) [ClassicSimilarity], result of:
              0.013613109 = score(doc=2294,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.08488525 = fieldWeight in 2294, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2294)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    A chapter on image indexing starts with a useful discussion of the elements of bibliographic description needed for visual materials and of the variations in the functioning and naming of functions in different software packages. Sample features are discussed in light of four different software systems: MAVIS, Convera Screening Room, CONTENTdm, and Virage speech and pattern recognition programs. The chapter concludes with an overview of what one has to consider when choosing a system. The last chapter in this section is an oddball one on creating a back-of-the-book index using Microsoft Excel. The author warns: "It is not pretty, and it is not recommended" (p.209). A curiosity, but it should have been included as a counterpoint in the first part, not as part of the database indexing section. The final section begins with an excellent article on voice recognition software (Dragon NaturallySpeaking Preferred), followed by a look at "automatic indexing" through a critique of Sonar Bookends Automatic Indexing Generator. The final two chapters deal with Data Harmony's Machine Aided Indexer; one of them refers specifically to a news content indexing system. In terms of scope, this reviewer would have liked to see thesaurus management software included, since thesaurus management and the integration of thesauri with database indexing software are common and time-consuming concerns. There are also a few editorial glitches, such as the placement of the oddball article and inconsistent use of fonts and caps (e.g., VIRAGE and Virage), but achieving consistency with this many authors is, indeed, a difficult task. More serious is the fact that the index is inconsistent. It reads as if authors submitted their own keywords which were then harmonized, so that the level of indexing varies by chapter. For example, there is an entry for "controlled vocabulary" (p.265) (singular) with one locator and no cross-references. There is an entry for "thesaurus software" (p.274) with two locators, plus a separate one for "Thesaurus Master" (p.274) with three locators. There are also references to thesauri/controlled vocabularies/taxonomies that are not mentioned in the index (e.g., the section on thesaurus management on p.204). This is sad. All too often indexing texts have poor indexes, I suppose because we are as prone to having to work under time pressures as the rest of the authors and editors in the world. But a good index that meets basic criteria should be a highlight in any book related to indexing. Overall this is a useful, if uneven, collection of articles written over the past few years. Because of the great variation between articles both in subject and in approach, there is something for everyone. The collection will be interesting to anyone who wants to be aware of how indexing software works and what it can do. I also definitely recommend it for information science teaching collections, since the explanations of the software carry implicit in them descriptions of how the indexing process itself is approached. However, the book's utility as a guide to purchasing choices is limited because of the unevenness; the vendor-written articles and testimonials are interesting and can certainly be helpful, but there are not nearly enough objective reviews. This is not a straight listing and comparison of software packages, but it deserves wide circulation since it presents an overall picture of the state of indexing software used by freelancers."
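    As an aside on the task the "oddball" Excel chapter performs, the core of back-of-the-book index generation (collecting term/locator pairs and printing an alphabetized index with merged, sorted locators) is a few lines in a scripting language. A hedged Python sketch follows; the terms and page numbers reuse those cited in the review above, but their pairing here is purely illustrative.

      from collections import defaultdict

      # Toy back-of-book index builder: gather (term, page) pairs, then
      # print each term alphabetically with merged, sorted locators.
      # The entries are illustrative, reusing terms cited in the review.
      entries = [
          ("controlled vocabulary", 265),
          ("thesaurus software", 274), ("thesaurus software", 204),
          ("Thesaurus Master", 274),
      ]

      index = defaultdict(set)
      for term, page in entries:
          index[term].add(page)

      for term in sorted(index, key=str.casefold):
          locators = ", ".join(str(p) for p in sorted(index[term]))
          print(f"{term}  {locators}")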
  3. Deegan, M.; Tanner, S.: Digital futures : strategies for the information age (2002) 0.00
    0.0034032771 = product of:
      0.0068065543 = sum of:
        0.0068065543 = product of:
          0.013613109 = sum of:
            0.013613109 = weight(_text_:systems in 13) [ClassicSimilarity], result of:
              0.013613109 = score(doc=13,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.08488525 = fieldWeight in 13, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=13)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    The most common definition of metadata is "data about data." Metadata provides schemes for describing, organizing, exchanging, and receiving information over networks. The authors explain how metadata is used to describe resources by tagging item attributes like author, title, creation date, keywords, file formats, compression, etc. The best-known scheme is MARC, but other schemes are developing for creating and managing digital collections, such as XML, TEI, EAD, and Dublin Core. The authors also do a good job of describing the difference between metadata and mark-up languages like HTML. The next two chapters discuss developing, designing, and providing access to a digital collection. In Chapter Six, "Developing and Designing Systems for Sharing Digital Resources," the authors examine a number of issues related to designing a shared collection. For instance, one issue the authors examine is interoperability. The authors stress that when designing a digital collection the creators should take care to ensure that their collection is "managed in such a way as to maximize opportunities for exchange and reuse of information, whether internally or externally" (p. 140). As a complement to Chapter Six, Chapter Seven, "Portals and Personalization: Mechanisms for End-user Access," focuses on the other end of the process: how the collection is used once it is made available. The majority of this chapter concentrates on the use of portals or gateways to digital collections. One example the authors use is MyLibrary@NCState, which provides the university community with a flexible, user-driven, customizable portal that allows users to access remote and local resources. The work logically concludes with a chapter on preservation and a chapter on the evolving role of librarians. Chapter Eight, "Preservation," is a thought-provoking discussion of preserving digital data and of digitization as a preservation technique. The authors do a good job of relaying the complexity of preservation issues in a digital world in a single chapter. While the authors do not answer their questions, they definitely provide the reader with some things to ponder. The final chapter, "Digital Librarians: New Roles for the Information Age," outlines where the authors believe librarianship is headed. Throughout the work they stress the role of the librarian in the digital world, but Chapter Nine really brings the point home. As the authors stress, librarians have always managed information, and as experienced leaders in the information field they are uniquely suited to take the digital bull by the horns. Also, the role of the librarian and what librarians can do is growing and evolving. The authors suggest that librarians are likely to move into roles such as knowledge mediator, information architect, hybrid librarian (who brings resources and technologies together), and knowledge preserver. While these librarians must have the technical skills to cope with new technologies, the authors also state that management skills and subject skills will prove equally important.
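    To make the tagging of item attributes concrete, a Dublin Core record of the kind the authors describe is just a handful of labelled elements. The Python sketch below assembles a minimal record with the standard library's xml.etree.ElementTree; the dc: element names come from the Dublin Core element set, while the enclosing <record> wrapper and the sample values are illustrative assumptions, not a prescribed container format.

      import xml.etree.ElementTree as ET

      # Build a minimal Dublin Core description: each item attribute
      # (creator, title, date, format) becomes one tagged element.
      DC = "http://purl.org/dc/elements/1.1/"
      ET.register_namespace("dc", DC)

      record = ET.Element("record")  # hypothetical wrapper element
      for element, value in [
          ("creator", "Deegan, Marilyn"),  # sample values only
          ("title", "Digital futures: strategies for the information age"),
          ("date", "2002"),
          ("format", "text/html"),
      ]:
          ET.SubElement(record, f"{{{DC}}}{element}").text = value

      print(ET.tostring(record, encoding="unicode"))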
  4. Context: nature, impact, and role : 5th International Conference on Conceptions of Library and Information Science, CoLIS 2005, Glasgow 2005; Proceedings (2005) 0.00
    0.0034032771 = product of:
      0.0068065543 = sum of:
        0.0068065543 = product of:
          0.013613109 = sum of:
            0.013613109 = weight(_text_:systems in 42) [ClassicSimilarity], result of:
              0.013613109 = score(doc=42,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.08488525 = fieldWeight in 42, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=42)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Most interesting and important, to my mind, is the programmatic article by Peter Ingwersen and Kalervo Järvelin (Copenhagen/Tampere), The sense of information: Understanding the cognitive conditional information concept in relation to information acquisition (pp. 7-19). Here the authors attempt, by means of an expanded model, to extend the concept of "conditional cognitive information", originally proposed by Ingwersen and at that time used exclusively in connection with interactive information retrieval, not only to the whole field of information seeking and retrieval (IS&R) but also to human information acquisition through sense perception, for example in everyday life or in the course of scientific inquiry. Alternative concepts of information and the relationship between information and meaning are also discussed. The contribution by Birger Larsen (Copenhagen) takes up another approach that goes back to Ingwersen, namely his Principle of Polyrepresentation, published more than ten years earlier. This rests on the hypothesis that the overlap between different cognitive representations (namely those of the information seeker's situation and those of the documents) can be exploited to reduce the uncertainty inherent in a retrieval situation and thus to improve the performance of the IR system. The principle places the documents, their authors and indexers, and also the IT solution that makes them accessible within a comprehensive and coherent theoretical frame of reference that seeks to integrate the user-oriented information-seeking line of research with system-oriented IR research. On the basis of theoretical considerations and the (few) empirical studies available on the subject, however, Larsen considers the model, which Ingwersen intended for both exact-match and best-match IR, to be "Boolean" (i.e. exact-match oriented) in its very foundations, and proposes a "polyrepresentation continuum" as a possible improvement.
  5. Semantic Web : Wege zur vernetzten Wissensgesellschaft (2006) 0.00
    0.0034032771 = product of:
      0.0068065543 = sum of:
        0.0068065543 = product of:
          0.013613109 = sum of:
            0.013613109 = weight(_text_:systems in 117) [ClassicSimilarity], result of:
              0.013613109 = score(doc=117,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.08488525 = fieldWeight in 117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=117)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    The third part of the volume addresses the organizational dimensions of the Semantic Web and, under the heading "knowledge management", demonstrates a series of concepts and applications for handling information in corporate and collaborative settings. The contribution by Andreas Blumauer and Thomas Fundneider offers an overview of the use of semantic technologies, taking an integrated knowledge management system as its example. Michael John and Jörg Drescher trace the historical development of the use of IT for managing information and knowledge processes in a corporate context. Against the background of organizational change driven by globalization and intensified competition, Heiko Beier shows which roles, processes, and instruments support the efficient use of knowledge in knowledge-based organizations. With the concept of collaborative knowledge management, the team of authors Schmitz et al. present an innovative peer-to-peer knowledge management approach aimed at the collaborative integration and maintenance of decentralized knowledge bases. York Sure and Christoph Tempich use the modelling methodology DILIGENT to demonstrate the contribution ontologies can make to networking knowledge within organizations. Hannes Werthner and Michael Borovicka address the significance of semantic technologies for e-commerce and, taking HARMONISE as their example, demonstrate their use in the field of e-tourism. This perspective is broadened by the contribution of Fill et al., which analyses the interplay between web services and business processes from the perspective of business informatics. Finally, the team of authors Angele et al. present a number of deployed applications based on semantic technologies and identify critical factors for their use.
  6. Willinsky, J.: ¬The access principle : the case for open access to research and scholarship (2006) 0.00
    0.0034032771 = product of:
      0.0068065543 = sum of:
        0.0068065543 = product of:
          0.013613109 = sum of:
            0.013613109 = weight(_text_:systems in 298) [ClassicSimilarity], result of:
              0.013613109 = score(doc=298,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.08488525 = fieldWeight in 298, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=298)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Rez. in: JASIST 58(2007) no.9, S.1386 (L.A. Ennis): "Written by John Willinsky, Pacific Press Professor of Literacy and Technology at the University of British Columbia and Open Journal Systems software developer, the eighth book in the Digital Libraries and Electronic Publishing series (edited by William Y. Arms) provides a compelling and convincing argument in favor of open access. At the core of this work is Willinsky's "access principle," a commitment that "research carries with it a responsibility to extend circulation of such work as far as possible and ideally to all who are interested in it and all who might profit from it" (p.xii). One by one Willinsky tackles the obstacles, both real and perceived, to open access, succeeding in his goal to "inform and inspire a larger debate over the political and moral economy of knowledge" (p.xiv). The author does note the irony of publishing a book while advocating for open access, but points out that he does so to reach a larger audience. Willinsky also points out that earlier versions of most of the chapters can be found in open-access journals and on his Web site (http://www.11ed.educubc.ca/faculty/willinsky.html). The Access Principle is organized topically into thirteen chapters covering a broad range of practical and theoretical issues. Taken together, these chapters provide the reader with an excellent introduction to the open-access debate as well as all the potential benefits and possible impacts of the open-access movement. The author also includes six appendices, with information on metadata and indexing, over twenty pages of references, and an index. ... All of Willinsky's arguments are convincing and heartfelt. It is apparent throughout the book that the author deeply believes in the principles behind open access, and his passion and conviction come through in the work, making the book a thought-provoking and very interesting read. While he offers numerous examples to illustrate his points throughout the work, he does not, however, offer solutions or state that he has all the answers. In that, he succeeds in his goal to craft a book that "informs and inspires." As a result, The Access Principle is an important read for information professionals, researchers, and academics of all kinds, whether or not the reader agrees with Willinsky."
  7. Shaping the network society : the new role of civil society in cyberspace (2004) 0.00
    0.0034032771 = product of:
      0.0068065543 = sum of:
        0.0068065543 = product of:
          0.013613109 = sum of:
            0.013613109 = weight(_text_:systems in 441) [ClassicSimilarity], result of:
              0.013613109 = score(doc=441,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.08488525 = fieldWeight in 441, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=441)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Peter Day and Douglas Schuler wind up the book by taking a close look at the sociotechnical context in the 1990s. They argue that utopian schemes for the development of civil society and/or the public sphere may entail a degree of risk. However, Day and Schuler argue that community networks should be "networks of awareness, advocacy and action" with a high degree of grassroots involvement. This can be done through more responsive policies. Local citizens, the first beneficiaries or victims of policy, should be brought into the decision-making process via civic dialogue. Public funding must be provided for projects that enable dissemination of information about a variety of cultures and belief systems. Shaping the Network Society is understandably more cautious than earlier accounts of cyberculture in its reception of new information and communications technology. Haunted by post-9/11 security measures, increasing surveillance, the faster erosion of liberal humanist ideals, and the internationalization/commercialization of the media, the essays prefer to be wary about the potential of cyberpower. However, the optimistic tone of every essay is unmistakable. While admitting that much more needs to be done to overcome the digital divide and the (mis)appropriation of cyberpower, the essays and case studies draw attention to the potential for public debate and alternative ideologies. The case studies demonstrate success stories, but invariably conclude with a moral: about the need for vigilance against appropriation and fascist control! What emerges clearly is that the new media have achieved considerable progress in opening up the space for greater citizen involvement, more locally responsive policy decisions, and socially relevant information dissemination. Shaping the Network Society, with a strangely messianic slant, is a useful step in the mapping of present and future cyberspace as the space of new democracies to come and of a justice to be worked and prepared for."
  8. Rösch, H.: Academic libraries und cyberinfrastructure in den USA : das System wissenschaftlicher Kommunikation zu Beginn des 21. Jahrhunderts (2008) 0.00
    0.0034032771 = product of:
      0.0068065543 = sum of:
        0.0068065543 = product of:
          0.013613109 = sum of:
            0.013613109 = weight(_text_:systems in 3074) [ClassicSimilarity], result of:
              0.013613109 = score(doc=3074,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.08488525 = fieldWeight in 3074, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3074)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    By the 19th century at the latest, the growing flood of publications led to the realization that this was not feasible; individual libraries now took on the function of a more or less guiding authority (a stratified library system), especially in France, England, and Prussia, where the Royal Library in Berlin began work on the union catalogue. At the turn of the 20th century this system, which in the relationship between university libraries and institute libraries was admittedly often introduced only with considerable delay, changed into a functionally differentiated library system characterized by networked and coordinated cooperation (for example interlibrary loan, coordinated acquisition, the special subject collections plan). In passing, Rösch thereby supplies an explanatory model that library historians have so far overlooked. Rösch applies this model to the library system and its framework conditions in the USA, which he analyses thoroughly, purposefully, and with particular interest in current developments (for instance the role of the Library of Congress, the associations, or OCLC, as well as individual projects, organizations, and initiatives such as the Coalition for Networked Information, Educause, and the Digital Library Federation). A functionally differentiated system: the author comes to the conclusion that the US library system, partly owing to the federal structure and the restraint of the national level, partly owing to the split (private versus public) sponsorship, and despite in many cases extraordinary budgets, is for the most part still in the phase of a stratified, in part even a segmentary, differentiated system.
  9. Grundlagen der praktischen Information und Dokumentation (2004) 0.00
    0.0034032771 = product of:
      0.0068065543 = sum of:
        0.0068065543 = product of:
          0.013613109 = sum of:
            0.013613109 = weight(_text_:systems in 693) [ClassicSimilarity], result of:
              0.013613109 = score(doc=693,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.08488525 = fieldWeight in 693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=693)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Jiri Panyr: Technical writing; Wolfgang F. Finke: E-learning; Harald H. Zimmermann: Machine and computer-assisted translation; Franziskus Geeb and Ulrike Spree: Dictionaries and encyclopedias; Angelika Menne-Haritz: Archives; Hans-Christoph Hobohm: Libraries; Günter Peters: Media, media industry; Ulrich Riehm: The book trade; Helmut Wittenzellner: Transformation processes for the printing industry on its way to becoming a media services provider; Dietmar Strauch: Publishing; Ulrich Riehm, Knud Böhle and Bernd Wingert: Electronic publishing; Heike Andermann: Initiatives for reforming the system of scholarly communication; Ute Schwens and Hans Liegmann: Long-term preservation of digital resources; Achim Oßwald: Document delivery; Willi Bredemeier and Patrick Müller: The information industry; Martin Michelson: Business information; Ulrich Kämper: Chemistry information; Wilhelm Gaus: Information and documentation in medicine; Gottfried Herzog and Hans Jörg Wiesner: Standardization; Jürgen Krause: Standardization and heterogeneity; Reinhard Schramm: Patent information; Wolfgang Semar: E-commerce; Wolfgang Semar: Cryptography; Knud Böhle: Electronic payment systems; Herbert Stoyan: Information in computer science; Gerhard Roth and Christian Eurich: The concept of information in neurobiology; Margarete Boos: Information in psychology; Harald H. Zimmermann: Information in linguistics; Ulrich Glowalla: Information and learning; Eric Schoop: Information in business administration: a new factor of production?; Gerhard Vowe: The concept of information in political science: a historical and systematic survey; Jürgen Krause: Information in the social sciences; Holger Lyre: Information in the natural sciences; Norbert Henrichs: Information in philosophy
  10. XML in libraries (2002) 0.00
    0.0027226217 = product of:
      0.0054452433 = sum of:
        0.0054452433 = product of:
          0.010890487 = sum of:
            0.010890487 = weight(_text_:systems in 3100) [ClassicSimilarity], result of:
              0.010890487 = score(doc=3100,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.0679082 = fieldWeight in 3100, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3100)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Tennant's collection covers a variety of well- and lesser-known XML-based pilot and prototype projects undertaken by libraries around the world. Some of the projects included are: Stanford's XMLMARC conversion, Oregon State's use of XML in interlibrary loaning, e-books (California Digital Library) and electronic scholarly publishing (University of Michigan), the Washington Research Library Consortium's XML-based Web Services, and using TEI Lite to support indexing (Halton Hills Public Library). Of the 13 projects presented, nine are sited in academe, three are state library endeavors, and one is an American public library initiative. The projects are gathered into sections grouped by seven library applications: the use of XML in library catalog records, interlibrary loan, cataloging and indexing, collection building, databases, data migration, and systems interoperability. Each project is introduced with a few paragraphs of background information. The project reports (averaging about 13 pages each) include project goals and justification, project description, challenges and lessons learned (successes and failures), future plans, implications of the work, contact information for individual(s) responsible for the project, and relevant Web links and resources. The clear strengths of this collection are in the details and the consistency of presentation. The concise project write-ups flow well and encourage interested readers to follow up via personal contacts and URLs. The sole weakness is the price. XML in Libraries will excite and inspire institutions and organizations with technically adept staff resources and visionary leaders. Erik Ray has written a how-to book. Unlike most, Learning XML is not aimed at the professional programming community. The intended audience is readers familiar with a structured markup (HTML, TEX, etc.) and Web concepts (hypertext links, data representation). In the first six chapters, Ray introduces XML's main concepts and tools for writing, viewing, testing, and transforming XML (chapter 1), describes basic syntax (chapter 2), discusses linking with XLink and XPointer (chapter 3), introduces Cascading Style Sheets for use with XML (chapter 4), explains document type definitions (DTDs) and schemas (chapter 5), and covers XSLT stylesheets and XPath (chapter 6). Chapter 7 introduces Unicode, internationalization and language support, including CSS and XSLT encoding. Chapter 8 is an overview of writing software for processing XML, and includes the Perl code for an XML syntax checker. This work is written very accessibly for nonprogrammers. Writers, designers, and students just starting to acquire Web technology skills will find Ray's style approachable. Concepts are introduced in a logical flow, and explained clearly. Code samples (130+), illustrations and screen shots (50+), and numerous tables are distributed throughout the text. Ray uses a modified DocBook DTD and a checkbook example throughout, introducing concepts in early chapters and adding new concepts to them. Readers become familiar with the code and its evolution through repeated exposure. The code for converting the "barebones DocBook" DTD (10 pages of code) to HTML via XSLT stylesheet occupies 19 pages. Both code examples allow the learner to see an accumulation of snippets incorporated into a sensible whole. While experienced programmers might not need this type of support, nonprogrammers certainly do.
    Using the checkbook example is an inspired choice: Most of us are familiar with personal checking, even if few of us would build an XML application for it. Learning XML is an excellent textbook. I've used it for several years as a recommended text for adult continuing education courses and workshops."
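    Ray's syntax checker is written in Perl; as a rough Python analogue of what such a checker does, the sketch below parses a file with the standard library's expat bindings and reports the first well-formedness error, if any. It checks well-formedness only (not validity against a DTD or schema) and is a stand-in for illustration, not Ray's code.

      import sys
      import xml.parsers.expat

      # Minimal XML well-formedness checker: parse the whole file with a
      # non-validating parser and report the first syntax error found.
      def check_well_formed(path):
          parser = xml.parsers.expat.ParserCreate()
          try:
              with open(path, "rb") as f:
                  parser.ParseFile(f)
          except xml.parsers.expat.ExpatError as err:
              print(f"{path}: {err}")
              return False
          print(f"{path}: well-formed")
          return True

      if __name__ == "__main__":
          check_well_formed(sys.argv[1])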
  11. Learning XML (2003) 0.00
    0.0027226217 = product of:
      0.0054452433 = sum of:
        0.0054452433 = product of:
          0.010890487 = sum of:
            0.010890487 = weight(_text_:systems in 3101) [ClassicSimilarity], result of:
              0.010890487 = score(doc=3101,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.0679082 = fieldWeight in 3101, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3101)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  12. ¬The ABCs of XML : the librarian's guide to the eXtensible Markup Language (2000) 0.00
    0.0027226217 = product of:
      0.0054452433 = sum of:
        0.0054452433 = product of:
          0.010890487 = sum of:
            0.010890487 = weight(_text_:systems in 3102) [ClassicSimilarity], result of:
              0.010890487 = score(doc=3102,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.0679082 = fieldWeight in 3102, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3102)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. Pace, A.K.: ¬The ultimate digital library : where the new information players meet (2003) 0.00
    0.0027226217 = product of:
      0.0054452433 = sum of:
        0.0054452433 = product of:
          0.010890487 = sum of:
            0.010890487 = weight(_text_:systems in 3198) [ClassicSimilarity], result of:
              0.010890487 = score(doc=3198,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.0679082 = fieldWeight in 3198, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3198)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Rez.: ZfBB 52(2005) H.1, S.52-53 (N. Lossau): "The service idea. Digital libraries are by now part of librarians' everyday vocabulary, and there is hardly a library Web presence that does not include a digital library. Almost as varied as the occurrences are the forms and definitions of digital libraries, which is why one picks up the present book with a mixture of interest and skepticism. "The ultimate digital library", an ambitious title chosen, no doubt not least for marketing reasons, by the author and the American Library Association, in whose series the publication appeared, suggests to the reader that what is described here is the completed, perfect form of a digital library as it has developed at breakneck speed since the 1990s. It takes quite a while before the reader comes across Pace's definition, which runs like a thread through his work: "The digital library - a comprehensive definition will not be attempted here - encompasses not only collections in digital form, but digital services that continue to define the library as a place." (p. 73) Pace concentrates on the service aspect of digital libraries and thereby targets a development that must indeed be regarded as forward-looking for libraries and digital libraries. For too long, libraries have concentrated primarily on digital collections and their production (through digitization) or on purchase and licensing, as Pace rightly laments in the same place. The future for libraries must lie in the development and provision of digital services that offer real added value to end users. This is the basis of his understanding of an ultimate digital library, although he does not discuss the definition in more detail. In this connection Pace also does away with a myth that regards digital libraries as mere "auxiliary services" of a traditional library. The following characterization of the relationship strikes the reader as far more sympathetic and realistic: "The digital-traditional relationship is symbiotic, not parasitic: digital tools, services, and expertise exist to enhance the services and collections of libraries, not necessarily to replace them." (p. 73) Cooperation with software vendors: the guiding theme of digital services is also an ideal basis for a further message Pace wants to convey with his book: librarians and vendors of library software must work closely together in developing these services. If the preface is to be believed, the relationship between libraries and vendors was the starting theme of the publication as commissioned from Pace by the American Library Association. He evidently has the right background of experience to deliver such an account. After completing his studies with an M.S.L.S., he began his career with more than three years at Innovative Interfaces, a vendor of library automation software, where he worked among other things as a specialist in product integration (e.g. WebPAC, Advanced Keyword Search). Today Pace is Head of Systems at the North Carolina State University Libraries (Raleigh, N.C.) and a regular columnist for the magazine Computers in Libraries.
  14. Mandl, T.: Tolerantes Information Retrieval : Neuronale Netze zur Erhöhung der Adaptivität und Flexibilität bei der Informationssuche (2001) 0.00
    0.0027226217 = product of:
      0.0054452433 = sum of:
        0.0054452433 = product of:
          0.010890487 = sum of:
            0.010890487 = weight(_text_:systems in 5965) [ClassicSimilarity], result of:
              0.010890487 = score(doc=5965,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.0679082 = fieldWeight in 5965, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.015625 = fieldNorm(doc=5965)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A basic need in human-computer interaction is the search for information. Models from soft computing lend themselves to designing information retrieval (IR) systems in a cognitively adequate way and to adapting them to the human user. A comprehensive state-of-the-art survey of neural networks in IR shows that most existing models do not exhaust the potential of neural networks. The COSIMIR model (Cognitive Similarity learning in Information Retrieval) presented here is based on neural networks and learns to compute the similarity between query and document. It thus carries cognitive modelling into the core of an IR system. The transformation network is a further neural network, which learns to handle heterogeneity on the basis of expert judgments. The COSIMIR model and the transformation network are discussed in detail and evaluated on real data sets.
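    To make the abstract's central idea concrete (a network that is given a query representation and a document representation and learns to output a similarity judgment), here is a deliberately small numpy sketch under stated assumptions: one hidden layer, sigmoid output, squared-error training. It illustrates the general technique only; it is not Mandl's actual COSIMIR architecture, feature set, or training data.

      import numpy as np

      # Toy similarity learner: query and document vectors are fed to a
      # one-hidden-layer network whose output is a similarity in (0, 1).
      rng = np.random.default_rng(0)
      dim = 8                                    # toy term-vector size
      W1 = rng.normal(0.0, 0.1, (16, 2 * dim))   # hidden-layer weights
      W2 = rng.normal(0.0, 0.1, 16)              # output weights

      def similarity(query, doc):
          h = np.tanh(W1 @ np.concatenate([query, doc]))
          return 1.0 / (1.0 + np.exp(-(W2 @ h)))

      def train_step(query, doc, target, lr=0.1):
          # One gradient step on squared error against a target judgment
          # (e.g. a relevance assessment).
          global W1, W2
          x = np.concatenate([query, doc])
          h = np.tanh(W1 @ x)
          y = 1.0 / (1.0 + np.exp(-(W2 @ h)))
          dz = (y - target) * y * (1.0 - y)      # error at the output unit
          grad_W2 = dz * h
          grad_W1 = np.outer(dz * W2 * (1.0 - h * h), x)
          W2 -= lr * grad_W2
          W1 -= lr * grad_W1
          return y

      q, d = rng.random(dim), rng.random(dim)
      for _ in range(200):
          train_step(q, d, target=1.0)           # teach: this pair is similar
      print(round(float(similarity(q, d)), 3))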
  15. Proceedings of the Second ACM/IEEE-CS Joint Conference on Digital Libraries : July 14 - 18, 2002, Portland, Oregon, USA. (2002) 0.00
    0.0027226217 = product of:
      0.0054452433 = sum of:
        0.0054452433 = product of:
          0.010890487 = sum of:
            0.010890487 = weight(_text_:systems in 172) [ClassicSimilarity], result of:
              0.010890487 = score(doc=172,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.0679082 = fieldWeight in 172, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.015625 = fieldNorm(doc=172)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    SESSION: NSDL
    Core services in the architecture of the national science digital library (NSDL) (Carl Lagoze, William Arms, Stoney Gan, Diane Hillmann, Christopher Ingram, Dean Krafft, Richard Marisa, Jon Phipps, John Saylor, Carol Terrizzi, Walter Hoehn, David Millman, James Allan, Sergio Guzman-Lara, Tom Kalt) - Creating virtual collections in digital libraries: benefits and implementation issues (Gary Geisler, Sarah Giersch, David McArthur, Marty McClelland) - Ontology services for curriculum development in NSDL (Amarnath Gupta, Bertram Ludäscher, Reagan W. Moore) - Interactive digital library resource information system: a web portal for digital library education (Ahmad Rafee Che Kassim, Thomas R. Kochtanek)
    SESSION: Digital library communities and change
    Cross-cultural usability of the library metaphor (Elke Duncker) - Trust and epistemic communities in biodiversity data sharing (Nancy A. Van House) - Evaluation of digital community information systems (K. T. Unruh, K. E. Pettigrew, J. C. Durrance) - Adapting digital libraries to continual evolution (Bruce R. Barkstrom, Melinda Finch, Michelle Ferebee, Calvin Mackey)
    SESSION: Models and tools for generating digital libraries
    Localizing experience of digital content via structural metadata (Naomi Dushay) - Collection synthesis (Donna Bergmark) - 5SL: a language for declarative specification and generation of digital libraries (Marcos André Gonçalves, Edward A. Fox)
    SESSION: Novel user interfaces
    A digital library of conversational expressions: helping profoundly disabled users communicate (Hayley Dunlop, Sally Jo Cunningham, Matt Jones) - Enhancing the ENVISION interface for digital libraries (Jun Wang, Abhishek Agrawal, Anil Bazaza, Supriya Angle, Edward A. Fox, Chris North) - A wearable digital library of personal conversations (Wei-hao Lin, Alexander G. Hauptmann) - Collaborative visual interfaces to digital libraries (Katy Börner, Ying Feng, Tamara McMahon) - Binding browsing and reading activities in a 3D digital library (Pierre Cubaud, Pascal Stokowski, Alexandre Topol)
  16. Net effects : how librarians can manage the unintended consequences of the Internet (2003) 0.00
    0.0027226217 = product of:
      0.0054452433 = sum of:
        0.0054452433 = product of:
          0.010890487 = sum of:
            0.010890487 = weight(_text_:systems in 1796) [ClassicSimilarity], result of:
              0.010890487 = score(doc=1796,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.0679082 = fieldWeight in 1796, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1796)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Rez. in: JASIST 55(2004) no.11, S.1025-1026 (D.E. Agosto): ""Did you ever feel as though the Internet has caused you to lose control of your library?" So begins the introduction to this volume of over 50 articles, essays, library policies, and other documents from a variety of sources, most of which are library journals aimed at practitioners. Volume editor Block has a long history of library service as well as an active career as an online journalist. From 1977 to 1999 she was the Associate Director of Public Services at the St. Ambrose University library in Davenport, Iowa. She was also a Fox News Online weekly columnist from 1998 to 2000. She currently writes for and publishes the weekly ezine ExLibris, which focuses on the use of computers, the Internet, and digital databases to improve library services. Despite the promising premise of this book, the final product is largely a disappointment because of the superficial coverage of its issues. A listing of the most frequently represented sources serves to express the general level and style of the entries: nine articles are reprinted from Computers in Libraries, five from Library Journal, four from Library Journal NetConnect, four from ExLibris, four from American Libraries, three from College & Research Libraries News, two from Online, and two from The Chronicle of Higher Education. Most of the authors included contributed only one item, although Roy Tennant (manager of the California Digital Library) authored three of the pieces, and Janet L. Balas (library information systems specialist at the Monroeville Public Library in Pennsylvania) and Karen G. Schneider (coordinator of lii.org, the Librarians' Index to the Internet) each wrote two. Volume editor Block herself wrote six of the entries, most of which have been reprinted from ExLibris. Reading the volume is much like reading an issue of one of these journals: a pleasant experience that discusses issues in the field without presenting much research. Net Effects doesn't offer much in the way of theory or research, but then again it doesn't claim to. Instead, it claims to be an "idea book" (p. 5) with practical solutions to Internet-generated library problems. While the idea is a good one, little of the material is revolutionary or surprising (or even very creative), and most of the solutions offered will already be familiar to most of the book's intended audience."
  17. Libraries and Google (2005) 0.00
    0.0027226217 = product of:
      0.0054452433 = sum of:
        0.0054452433 = product of:
          0.010890487 = sum of:
            0.010890487 = weight(_text_:systems in 1973) [ClassicSimilarity], result of:
              0.010890487 = score(doc=1973,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.0679082 = fieldWeight in 1973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1973)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Co-published simultaneously as Internet reference services quarterly, vol. 10(2005), nos. 3/4. Rez. in: ZfBB 54(2007) H.2, S.98-99 (D. Lewandowski): "Google and libraries? Unfortunately, one usually gets the impression that the relationship is thought of as either/or. The editors of the present volume see this too, and therefore include, alongside contributions to the debate about the role of libraries in the age of Google, others that give tips on the use of various Google services. The general debate about Google and libraries revolves above all around the role that libraries (with their information portals) can still play when their users search on Google anyway, even if the library offerings are felt (at least by librarians) to be superior. Even when users are trained, they usually still prefer the simple search option of Google or other search engines; perhaps the situation is best expressed by the sentence of a librarian quoted in the book: "Everyone starts with Google except librarians." (p. 95) Should libraries now leave simple searching entirely to Google and concentrate on the more complex search questions? Or would they thereby lose a user community that by now cannot even imagine that better results can be achieved with tools other than search engines? This question, which is surely decisive for the future of libraries, is discussed in several contributions, and it is striking that the respective authors can offer no clear answer as to how libraries can present their sources in such a way that users are so satisfied with their searches that they voluntarily search the library offerings instead of Google. The bulk of the book, however, is made up not of these rather theoretical essays but of those dealing with specific Google services. Because of their closeness to library offerings and to the tasks of libraries, these are above all Google Print and Google Scholar, but also the Google Search Appliance. The latter is an integrated hardware and software solution that enables the indexing of content from different data sources. The article by Mary Taylor describes the advantages and disadvantages of the system on the basis of its practical use at the University of Nevada.
  18. Broughton, V.: Essential thesaurus construction (2006) 0.00
    0.0027226217 = product of:
      0.0054452433 = sum of:
        0.0054452433 = product of:
          0.010890487 = sum of:
            0.010890487 = weight(_text_:systems in 2924) [ClassicSimilarity], result of:
              0.010890487 = score(doc=2924,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.0679082 = fieldWeight in 2924, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2924)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Many information professionals working in small units today fail to find published tools for subject-based organization that are appropriate to their local needs, whether they are archivists, special librarians, information officers, or knowledge or content managers. Large established standards for document description and organization are too unwieldy, unnecessarily detailed, or too expensive to install and maintain. In other cases the available systems are insufficient for a specialist environment, or don't bring things together in a helpful way. A purpose-built, in-house system would seem to be the answer, but too often the skills necessary to create one are lacking. This practical text examines the criteria relevant to the selection of a subject-management system, describes the characteristics of some common types of subject tool, and takes the novice step by step through the process of creating a system for a specialist environment. The methodology employed is a standard technique for building a thesaurus that incidentally creates a compatible classification or taxonomy, both of which may be used in a variety of ways for document or information management. Key areas covered are: what a thesaurus is; tools for subject access and retrieval; what a thesaurus is used for; why use a thesaurus; examples of thesauri; the structure of a thesaurus; thesaural relationships; practical thesaurus construction; the vocabulary of the thesaurus; building the systematic structure; conversion to alphabetic format; forms of entry in the thesaurus; maintaining the thesaurus; thesaurus software; and the wider environment. Essential for the practising information professional, this guide is also valuable for students of library and information science.
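    As a minimal illustration of the thesaural relationships and the conversion to alphabetic format that the book teaches, the Python sketch below models terms with BT (broader), NT (narrower), and RT (related) links and prints an alphabetical display; the three-term vocabulary is invented for illustration.

      from dataclasses import dataclass, field

      # Toy thesaurus: each term carries its BT/NT/RT relationships; the
      # alphabetic display lists every term with its references.
      @dataclass
      class Term:
          label: str
          broader: list = field(default_factory=list)    # BT
          narrower: list = field(default_factory=list)   # NT
          related: list = field(default_factory=list)    # RT

      thesaurus = {
          "indexing": Term("indexing", narrower=["subject indexing"]),
          "subject indexing": Term("subject indexing",
                                   broader=["indexing"],
                                   related=["classification"]),
          "classification": Term("classification",
                                 related=["subject indexing"]),
      }

      # Conversion to alphabetic format.
      for label in sorted(thesaurus):
          term = thesaurus[label]
          print(label.upper())
          for tag, refs in (("BT", term.broader), ("NT", term.narrower),
                            ("RT", term.related)):
              for ref in refs:
                  print(f"  {tag} {ref}")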

Languages

  • e 204
  • d 109
  • m 5
  • es 2

Types

  • s 114
  • i 4
  • el 1
  • n 1
  • r 1
  • x 1
