Search (14 results, page 1 of 1)

  • × classification_ss:"06.74 / Informationssysteme"
  • × language_ss:"e"
  1. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.02
    0.02149811 = sum of:
      0.0070624975 = product of:
        0.02824999 = sum of:
          0.02824999 = weight(_text_:authors in 150) [ClassicSimilarity], result of:
            0.02824999 = score(doc=150,freq=2.0), product of:
              0.22434758 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.04921183 = queryNorm
              0.12592064 = fieldWeight in 150, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
        0.25 = coord(1/4)
      0.014435613 = product of:
        0.028871225 = sum of:
          0.028871225 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
            0.028871225 = score(doc=150,freq=6.0), product of:
              0.17233144 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04921183 = queryNorm
              0.16753313 = fieldWeight in 150, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
        0.5 = coord(1/2)
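The explain tree above can be reproduced by hand. Below is a minimal Python sketch of Lucene's ClassicSimilarity per-term score (score = queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = sqrt(tf) × idf × fieldNorm); all constants are copied from the tree above, and the function name is illustrative:

```python
import math

def classic_term_score(freq, idf, query_norm, field_norm):
    # Lucene ClassicSimilarity (TF-IDF) per-term score for one document field
    query_weight = idf * query_norm          # idf(t) * queryNorm
    field_weight = math.sqrt(freq) * idf * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

# term "authors": freq=2.0, idf=4.558814, queryNorm=0.04921183, fieldNorm=0.01953125
authors = classic_term_score(2.0, 4.558814, 0.04921183, 0.01953125)
# term "22": freq=6.0, idf=3.5018296, same queryNorm and fieldNorm
term22 = classic_term_score(6.0, 3.5018296, 0.04921183, 0.01953125)

# coord factors from the tree: 1/4 for the first clause, 1/2 for the second
total = 0.25 * authors + 0.5 * term22
print(round(total, 8))  # ≈ 0.02149811, matching the document score above
```

The same arithmetic applies to every explain tree in this result list; only freq, idf, fieldNorm, and the coord fractions change per document.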
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Rez. in: JASIST 58(2007) no.3, S.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatically extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. It will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web.
The Moving Picture Experts Group standards MPEG-7 and MPEG-21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in a bias against ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. As a general book on multimedia content, it could have placed more emphasis on general multimedia description and extraction."
  2. Information visualization in data mining and knowledge discovery (2002) 0.01
    0.014657827 = sum of:
      0.007990303 = product of:
        0.031961214 = sum of:
          0.031961214 = weight(_text_:authors in 1789) [ClassicSimilarity], result of:
            0.031961214 = score(doc=1789,freq=4.0), product of:
              0.22434758 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.04921183 = queryNorm
              0.14246294 = fieldWeight in 1789, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
        0.25 = coord(1/4)
      0.006667523 = product of:
        0.013335046 = sum of:
          0.013335046 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
            0.013335046 = score(doc=1789,freq=2.0), product of:
              0.17233144 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04921183 = queryNorm
              0.07738023 = fieldWeight in 1789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
        0.5 = coord(1/2)
    
    Date
    23. 3.2008 19:10:22
    Footnote
    In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process is illustrated in chapters focused on using visualization for helping users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domains of business, imagery and text mining, and massive data sets are provided.
This text concludes with a thorough and useful 17-page index and a lengthy yet interesting 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provides a better representation of the visualizations discussed, and a URL provided suggests that readers may view all the book's figures in color on-line, although as of this submission date it only provides access to a summary of the book and its contents. The overall contribution of this work is its focus on bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our understanding of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the contributions of the sum of these contributed papers or give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw on a wide range of research experience" (p. 300). Although this work contributes to the cross-disciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
    With contributors almost exclusively from the computer science field, the intended audience of this work is heavily slanted towards a computer science perspective. However, it is highly readable and provides introductory material that would be useful to information scientists from a variety of domains. Yet much interesting work in information visualization from other fields could have been included, giving the work more of an interdisciplinary perspective to complement its goal of integrating work in this area. Unfortunately, many of the application chapters are terse, shallow, and lack complementary illustrations of the visualization techniques or user interfaces used. However, they do provide insight into the many applications being developed in this rapidly expanding field. The authors have successfully put together a highly useful reference text for the data mining and information visualization communities. Those interested in a good introduction and overview of complementary research areas in these fields will be satisfied with this collection of papers. The focus upon integrating data visualization with data mining complements texts in each of these fields, such as Advances in Knowledge Discovery and Data Mining (Fayyad et al., MIT Press) and Readings in Information Visualization: Using Vision to Think (Card et al., Morgan Kaufmann). This unique work is a good starting point for future interaction between researchers in the fields of data visualization and data mining and makes a good accompaniment for a course focused on integrating these areas or to the main reference texts in these fields."
  3. Hare, C.E.; McLeod, J.: How to manage records in the e-environment : 2nd ed. (2006) 0.01
    0.009887496 = product of:
      0.019774992 = sum of:
        0.019774992 = product of:
          0.07909997 = sum of:
            0.07909997 = weight(_text_:authors in 1749) [ClassicSimilarity], result of:
              0.07909997 = score(doc=1749,freq=2.0), product of:
                0.22434758 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.04921183 = queryNorm
                0.35257778 = fieldWeight in 1749, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1749)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    A practical approach to developing and operating an effective programme to manage hybrid records within an organization. This title positions records management as an integral business function linked to the organisation's business aims and objectives. The authors also address the records requirements of new and significant pieces of legislation, such as data protection and freedom of information, as well as exploring strategies for managing electronic records. Bullet points, checklists and examples assist the reader throughout, making this a one-stop resource for information in this area.
  4. Berry, M.W.; Browne, M.: Understanding search engines : mathematical modeling and text retrieval (1999) 0.01
    0.008474997 = product of:
      0.016949994 = sum of:
        0.016949994 = product of:
          0.06779998 = sum of:
            0.06779998 = weight(_text_:authors in 5777) [ClassicSimilarity], result of:
              0.06779998 = score(doc=5777,freq=2.0), product of:
                0.22434758 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.04921183 = queryNorm
                0.30220953 = fieldWeight in 5777, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5777)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    This book discusses many of the key design issues for building search engines and emphasizes the important role that applied mathematics can play in improving information retrieval. The authors discuss not only important data structures, algorithms, and software but also user-centered issues such as interfaces, manual indexing, and document preparation. They also present some of the current problems in information retrieval that may not be familiar to applied mathematicians and computer scientists, and some of the driving computational methods (SVD, SDD) for automated conceptual indexing.
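The mathematical modeling the abstract refers to starts from the vector space model: documents and queries become term vectors, and ranking is by cosine similarity. A minimal sketch (the toy vocabulary, documents, and query are illustrative, not taken from the book):

```python
import math

def cosine(a, b):
    # cosine similarity between two term-count vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

vocab = ["search", "engine", "index", "query"]
docs = {
    "d1": [2, 1, 0, 1],   # term counts over vocab
    "d2": [0, 0, 3, 1],
}
query = [1, 1, 0, 0]      # "search engine"
ranked = sorted(docs, key=lambda d: cosine(docs[d], query), reverse=True)
print(ranked)  # d1 shares "search"/"engine" with the query, so it ranks first
```

Methods such as SVD-based latent semantic indexing, as discussed in the book, then replace these raw count vectors with low-rank approximations before the same cosine ranking is applied.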
  5. Thissen, F.: Screen-Design-Manual : Communicating Effectively Through Multimedia (2003) 0.01
    0.008334405 = product of:
      0.01666881 = sum of:
        0.01666881 = product of:
          0.03333762 = sum of:
            0.03333762 = weight(_text_:22 in 1397) [ClassicSimilarity], result of:
              0.03333762 = score(doc=1397,freq=2.0), product of:
                0.17233144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04921183 = queryNorm
                0.19345059 = fieldWeight in 1397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1397)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2008 14:29:25
  6. Berry, M.W.; Browne, M.: Understanding search engines : mathematical modeling and text retrieval (2005) 0.01
    0.007990303 = product of:
      0.015980607 = sum of:
        0.015980607 = product of:
          0.06392243 = sum of:
            0.06392243 = weight(_text_:authors in 7) [ClassicSimilarity], result of:
              0.06392243 = score(doc=7,freq=4.0), product of:
                0.22434758 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.04921183 = queryNorm
                0.28492588 = fieldWeight in 7, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.03125 = fieldNorm(doc=7)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    The second edition of Understanding Search Engines: Mathematical Modeling and Text Retrieval follows the basic premise of the first edition by discussing many of the key design issues for building search engines and emphasizing the important role that applied mathematics can play in improving information retrieval. The authors discuss important data structures, algorithms, and software as well as user-centered issues such as interfaces, manual indexing, and document preparation. Significant changes bring the text up to date on current information retrieval methods: for example, the addition of a new chapter on link-structure algorithms used in search engines such as Google. The chapter on user interfaces has been rewritten to focus specifically on search engine usability. In addition, the authors have added new recommendations for further reading and expanded the bibliography, and have updated and streamlined the index to make it more reader-friendly.
  7. Grossman, D.A.; Frieder, O.: Information retrieval : algorithms and heuristics (2004) 0.01
    0.007990303 = product of:
      0.015980607 = sum of:
        0.015980607 = product of:
          0.06392243 = sum of:
            0.06392243 = weight(_text_:authors in 1486) [ClassicSimilarity], result of:
              0.06392243 = score(doc=1486,freq=4.0), product of:
                0.22434758 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.04921183 = queryNorm
                0.28492588 = fieldWeight in 1486, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1486)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    Interested in how an efficient search engine works? Want to know what algorithms are used to rank resulting documents in response to user requests? The authors answer these and other key information retrieval design and implementation questions. This book is not yet another high-level text. Instead, algorithms are thoroughly described, making this book ideally suited for both computer science students and practitioners who work on search-related applications. As stated in the foreword, this book provides a current, broad, and detailed overview of the field and is the only one that does so. Examples are used throughout to illustrate the algorithms. The authors explain how a query is ranked against a document collection using either a single retrieval strategy or a combination of them, and how an assortment of utilities is integrated into the query processing scheme to improve these rankings. Methods for building and compressing text indexes, querying and retrieving documents in multiple languages, and using parallel or distributed processing to expedite the search are likewise described. This edition is a major expansion of the one published in 1998. Besides updating the entire book with current techniques, the new 2005 edition includes new sections on language models, cross-language information retrieval, peer-to-peer processing, XML search, mediators, and duplicate document detection.
  8. Hars, A.: From publishing to knowledge networks : reinventing online knowledge infrastructures (2003) 0.01
    0.0070624975 = product of:
      0.014124995 = sum of:
        0.014124995 = product of:
          0.05649998 = sum of:
            0.05649998 = weight(_text_:authors in 1634) [ClassicSimilarity], result of:
              0.05649998 = score(doc=1634,freq=2.0), product of:
                0.22434758 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.04921183 = queryNorm
                0.25184128 = fieldWeight in 1634, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1634)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    Today's publishing infrastructure is rapidly changing. As electronic journals, digital libraries, collaboratories, logic servers, and other knowledge infrastructures emerge on the internet, the key aspects of this transformation need to be identified. Knowledge is becoming increasingly dynamic and integrated. Instead of writing self-contained articles, authors are turning to the new practice of embedding their findings into dynamic networks of knowledge. Here, the author details the implications that this transformation is having on the creation, dissemination, and organization of academic knowledge. The author shows that many established publishing principles need to be given up in order to facilitate this transformation. The text provides valuable insights for knowledge managers, designers of internet-based knowledge infrastructures, and professionals in the publishing industry. Researchers will find the scenarios and implications for research processes stimulating and thought-provoking.
  9. Research and advanced technology for digital libraries : 7th European conference, ECDL2003 Trondheim, Norway, August 17-22, 2003. Proceedings (2003) 0.01
    0.006667523 = product of:
      0.013335046 = sum of:
        0.013335046 = product of:
          0.026670093 = sum of:
            0.026670093 = weight(_text_:22 in 2426) [ClassicSimilarity], result of:
              0.026670093 = score(doc=2426,freq=2.0), product of:
                0.17233144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04921183 = queryNorm
                0.15476047 = fieldWeight in 2426, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2426)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. Research and advanced technology for digital libraries : 10th European conference ; proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006 ; proceedings (2006) 0.01
    0.006667523 = product of:
      0.013335046 = sum of:
        0.013335046 = product of:
          0.026670093 = sum of:
            0.026670093 = weight(_text_:22 in 2428) [ClassicSimilarity], result of:
              0.026670093 = score(doc=2428,freq=2.0), product of:
                0.17233144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04921183 = queryNorm
                0.15476047 = fieldWeight in 2428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2428)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Langville, A.N.; Meyer, C.D.: Google's PageRank and beyond : the science of search engine rankings (2006) 0.00
    0.0042374986 = product of:
      0.008474997 = sum of:
        0.008474997 = product of:
          0.03389999 = sum of:
            0.03389999 = weight(_text_:authors in 6) [ClassicSimilarity], result of:
              0.03389999 = score(doc=6,freq=2.0), product of:
                0.22434758 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.04921183 = queryNorm
                0.15110476 = fieldWeight in 6, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=6)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    Why doesn't your home page appear on the first page of search results, even when you query your own name? How do other Web pages always appear at the top? What creates these powerful rankings, and how? The first book ever about the science of Web page rankings, "Google's PageRank and Beyond" supplies the answers to these and other questions. The book serves two very different audiences: the curious science reader and the technical computational reader. The chapters build in mathematical sophistication, so that the first five are accessible to the general academic reader. While other chapters are much more mathematical in nature, each one contains something for both audiences. For example, the authors include entertaining asides such as how search engines make money and how the Great Firewall of China influences research. The book includes an extensive background chapter designed to help readers learn more about the mathematics of search engines, and it contains several MATLAB codes and links to sample Web data sets. The philosophy throughout is to encourage readers to experiment with the ideas and algorithms in the text. Any business seriously interested in improving its rankings in the major search engines can benefit from the clear examples, sample code, and list of resources provided. It includes: many illustrative examples and entertaining asides; MATLAB code; accessible and informal style; and a complete and self-contained section for mathematics review.
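The PageRank computation the book analyzes can be sketched in a few lines as power iteration on the link graph (this is a hedged illustration in Python rather than the book's MATLAB code; the tiny three-page web and the common damping factor 0.85 are illustrative assumptions, and dangling pages are not handled):

```python
def pagerank(links, n, damping=0.85, tol=1e-10):
    # links: list of (source, target) pairs; n: number of pages
    out = [0] * n
    for s, _ in links:
        out[s] += 1                     # out-degree of each page
    v = [1.0 / n] * n                   # start from the uniform distribution
    while True:
        # teleportation term plus rank spread along outlinks
        v_new = [(1 - damping) / n] * n
        for s, t in links:
            v_new[t] += damping * v[s] / out[s]
        if sum(abs(a - b) for a, b in zip(v_new, v)) < tol:
            return v_new
        v = v_new

# tiny 3-page web forming a cycle 0 -> 1 -> 2 -> 0, so by symmetry all ranks are equal
ranks = pagerank([(0, 1), (1, 2), (2, 0)], 3)
print(ranks)  # each rank ≈ 1/3
```

The ranks form a probability distribution (they sum to 1), which is the stationary distribution of the "random surfer" Markov chain the book builds its analysis on.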
  12. TREC: experiment and evaluation in information retrieval (2005) 0.00
    0.0035312488 = product of:
      0.0070624975 = sum of:
        0.0070624975 = product of:
          0.02824999 = sum of:
            0.02824999 = weight(_text_:authors in 636) [ClassicSimilarity], result of:
              0.02824999 = score(doc=636,freq=2.0), product of:
                0.22434758 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.04921183 = queryNorm
                0.12592064 = fieldWeight in 636, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Footnote
    ... TREC: Experiment and Evaluation in Information Retrieval is a reliable and comprehensive review of the TREC program and has been adopted by NIST as the official history of TREC (see http://trec.nist.gov). We were favorably surprised by the book. Well structured and written, chapters are self-contained and the existence of references to specialized and more detailed publications is continuous, which makes it easier to expand into the different aspects analyzed in the text. This book succeeds in compiling TREC evolution from its inception in 1992 to 2003 in an adequate and manageable volume. Thanks to the impressive effort performed by the authors and their experience in the field, it can satiate the interests of a great variety of readers. While expert researchers in the IR field and IR-related industrial companies can use it as a reference manual, it seems especially useful for students and non-expert readers willing to approach this research area. Like NIST, we would recommend this reading to anyone who may be interested in textual information retrieval."
  13. Theories of information behavior (2005) 0.00
    0.002824999 = product of:
      0.005649998 = sum of:
        0.005649998 = product of:
          0.022599991 = sum of:
            0.022599991 = weight(_text_:authors in 68) [ClassicSimilarity], result of:
              0.022599991 = score(doc=68,freq=2.0), product of:
                0.22434758 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.04921183 = queryNorm
                0.10073651 = fieldWeight in 68, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.015625 = fieldNorm(doc=68)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Footnote
    Weitere Rez. in: JASIST 58(2007) no.2, S.303 (D.E. Agosto): "Due to the brevity of the entries, they serve more as introductions to a wide array of theories than as deep explorations of a select few. The individual entries are not as deep as those in more traditional reference volumes, such as The Encyclopedia of Library and Information Science (Drake, 2003) or The Annual Review of Information Science and Technology (ARIST) (Cronin, 2005), but the overall coverage is much broader. This volume is probably most useful to doctoral students who are looking for theoretical frameworks for nascent research projects or to more veteran researchers interested in an introductory overview of information behavior research, as those already familiar with this subfield also will probably already be familiar with most of the theories presented here. Since different authors have penned each of the various entries, the writing styles vary somewhat, but on the whole, this is a readable, pithy volume that does an excellent job of encapsulating this important area of information research."
  14. Net effects : how librarians can manage the unintended consequences of the Internet (2003) 0.00
    0.002824999 = product of:
      0.005649998 = sum of:
        0.005649998 = product of:
          0.022599991 = sum of:
            0.022599991 = weight(_text_:authors in 1796) [ClassicSimilarity], result of:
              0.022599991 = score(doc=1796,freq=2.0), product of:
                0.22434758 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.04921183 = queryNorm
                0.10073651 = fieldWeight in 1796, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1796)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Footnote
    Rez. in: JASIST 55(2004) no.11, S.1025-1026 (D.E. Agosto): ""Did you ever feel as though the Internet has caused you to lose control of your library?" So begins the introduction to this volume of over 50 articles, essays, library policies, and other documents from a variety of sources, most of which are library journals aimed at practitioners. Volume editor Block has a long history of library service as well as an active career as an online journalist. From 1977 to 1999 she was the Associate Director of Public Services at the St. Ambrose University library in Davenport, Iowa. She was also a Fox News Online weekly columnist from 1998 to 2000. She currently writes for and publishes the weekly ezine Exlibris, which focuses on the use of computers, the Internet, and digital databases to improve library services. Despite the promising premise of this book, the final product is largely a disappointment because of the superficial coverage of its issues. A listing of the most frequently represented sources serves to express the general level and style of the entries: nine articles are reprinted from Computers in Libraries, five from Library Journal, four from Library Journal NetConnect, four from ExLibris, four from American Libraries, three from College & Research Libraries News, two from Online, and two from The Chronicle of Higher Education. Most of the authors included contributed only one item, although Roy Tennant (manager of the California Digital Library) authored three of the pieces, and Janet L. Balas (library information systems specialist at the Monroeville Public Library in Pennsylvania) and Karen G. Schneider (coordinator of lii.org, the Librarians' Index to the Internet) each wrote two. Volume editor Block herself wrote six of the entries, most of which have been reprinted from ExLibris. Reading the volume is much like reading an issue of one of these journals - a pleasant experience that discusses issues in the field without presenting much research.
Net Effects doesn't offer much in the way of theory or research, but then again it doesn't claim to. Instead, it claims to be an "idea book" (p. 5) with practical solutions to Internet-generated library problems. While the idea is a good one, little of the material is revolutionary or surprising (or even very creative), and most of the solutions offered will already be familiar to most of the book's intended audience.

Types

  • m 14
  • s 7
