Search (3015 results, page 151 of 151)

  • type_ss:"m"
  1. Ronan, J.S.: Chat reference : A guide to live virtual reference services (2003) 0.00
    1.3072761E-4 = product of:
      0.0031374625 = sum of:
        0.0031374625 = product of:
          0.009412387 = sum of:
            0.009412387 = weight(_text_:p in 2230) [ClassicSimilarity], result of:
              0.009412387 = score(doc=2230,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.11917553 = fieldWeight in 2230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2230)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
    
    Footnote
    Training techniques are the focus in Chapter 6, including ways to relax trainees and reduce cognitive load as well as to maximize training utility when the software limits the number of logins available. Ronan covers everyday administration and policy issues in Chapters 7 and 8. These include a list of daily routines such as checking that the software is functioning, plus monthly routines of updating statistics, policies, and procedures. Chapter 9 offers guidance on the chat reference interview, which Ronan likens to "information therapy" within an online environment of diminished contextual cues. Marketing and publicity are discussed in Chapter 10, with advice on advertising and publicity campaigns as well as a checklist of 20 promotional strategies for attracting users to a new chat service (p. 165). In the final section of the book, Chapters 11-15 provide individual case studies written by six contributors describing how five different academic libraries have been able to launch and operate chat reference services using a variety of different types of software including instant messaging, MOO, Internet Relay Chat, and call center software. Each case study begins with a statement of the software used, launch date, staffing, and hours of the service, and most include statistical information on chat reference traffic. These final five chapters provide "voices from the front lines" giving details of individual librarians' experiences in launching chat services.
  2. Shaviro, S.: Connected, or what it means to live in the network society (2003) 0.00
    1.3072761E-4 = product of:
      0.0031374625 = sum of:
        0.0031374625 = product of:
          0.009412387 = sum of:
            0.009412387 = weight(_text_:p in 3885) [ClassicSimilarity], result of:
              0.009412387 = score(doc=3885,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.11917553 = fieldWeight in 3885, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3885)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
    
    Footnote
    Connected not only poses the "problem of connection," but also seeks, through Shaviro's scattered mode of critical analysis, to pose new ways of intervening in and viewing connections of all sorts. It does not work. Most of the time, Shaviro's efforts to illuminate the "connections" never rise above a random, technophilic pastiche, lacking any powerful synthesis but often overflowing with the academic version of New Age babble. (It is ironic that Shaviro is concerned with the "problem of connection," because Connected is largely inaccessible to readers who have not read the novels of Samuel Delany, Philip K. Dick, William Gibson, and several other science fiction novelists, or seen the films in The Matrix series.) For example, Shaviro thinks that everything "flows." He writes: In postmodern society, everything flows. There are flows of commodities, flows of expressions, flows of embodiment, and flows of affect. The organizing material of each flow is a universal equivalent: money, information, DNA, or LSD. But how are these flows related among themselves? Strictly speaking, they should be interchangeable. All the equivalents should themselves be mutually equivalent. (p. 193) This is, of course, patent nonsense. More to the point, it is one of many passages in Connected that has the ring of authority, but little else. In the midst of rising concerns over computer security, personal privacy, and freedom of expression, how much significance should we assign to Shaviro's assertion that the user's relationship to the network is like the junkie's need for heroin, as described by William Burroughs in The Naked Lunch? Or the claim that Microsoft's design for the "Home of the Future" is part of a middle-class plot to repress itself sexually? Or the recommendation that a novel of the future in which copyright violators are put to death is a harbinger of things to come? In the end, a generous reading might conclude that Connected is an experimental meditation on the relevance of science fiction, whereas a less generous view would be that the author's misguided preoccupation with literary effect resulted in a book that never makes an effective case for the writers we are supposed to take so seriously."
  3. Liebowitz, J.: What they didn't tell you about knowledge management (2006) 0.00
    1.3072761E-4 = product of:
      0.0031374625 = sum of:
        0.0031374625 = product of:
          0.009412387 = sum of:
            0.009412387 = weight(_text_:p in 609) [ClassicSimilarity], result of:
              0.009412387 = score(doc=609,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.11917553 = fieldWeight in 609, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=609)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
    
    Footnote
    The concluding chapter addresses the future of KM. Liebowitz asserts that knowledge management will not become a discipline in its own right but that its practices will continue to integrate with other fields such as organizational learning and computer science. He envisions LIS professionals as brokers making connections between the people of an organization and the knowledge it creates, with the library or information center as the middle ground between codification and personalization. In that vision, he sees a role for LIS professionals in pushing information to employees rather than taking the more traditional role of reacting to information requests. He sees a future in which LIS professionals take leadership roles in KM programs through the integration of their technological, organizational, and human interaction skills. He is hopeful that in time libraries will take ownership of KM programs within organizations. His statement, "The library has always been a treasure house of information, and it needs to continue to expand into the knowledge chest as well" (p. 33) expresses Liebowitz's charge to corporate and government LIS professionals. The ideas presented in What They Didn't Tell You about Knowledge Management are certainly in support of that charge. This work provides a broad overview of the KM field and serves as an initial source for exploration for LIS professionals working in a corporate setting or considering doing so."
  4. Learning XML (2003) 0.00
    1.2325117E-4 = product of:
      0.002958028 = sum of:
        0.002958028 = product of:
          0.008874084 = sum of:
            0.008874084 = weight(_text_:p in 3101) [ClassicSimilarity], result of:
              0.008874084 = score(doc=3101,freq=4.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.11235977 = fieldWeight in 3101, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3101)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
    
    Footnote
    Review in: JASIST 55(2004) no.14, S.1304-1305 (Z. Holbrooks): "The eXtensible Markup Language (XML) and its family of enabling technologies (XPath, XPointer, XLink, XSLT, et al.) were the new "new thing" only a couple of years ago. Happily, XML is now a W3C standard, and its enabling technologies are rapidly proliferating and maturing. Together, they are changing the way data is handled on the Web and how legacy data is accessed and leveraged in corporate archives, and they are offering the Semantic Web community a powerful toolset. Library and information professionals need a basic understanding of what XML is, and what its impacts will be on the library community as content vendors and publishers convert to the new standards. Norman Desmarais aims to provide librarians with an overview of XML and some potential library applications. The ABCs of XML contains the useful basic information that most general XML works cover. It is addressed to librarians, as evidenced by the occasional reference to periodical vendors, MARC, and OPACs. However, librarians without SGML, HTML, database, or programming experience may find the work daunting. The snippets of code-most incomplete and unattended by screenshots to illustrate the result of the code's execution-obscure more often than they enlighten. A single code sample (p. 91, a book purchase order) is immediately recognizable and sensible. There are no figures, illustrations, or screenshots. Subsection headings are used conservatively. Readers are confronted with page after page of unbroken technical text, and occasionally oddly formatted text (in some of the code samples). The author concentrates on commercial products and projects. Library and agency initiatives-for example, the National Institutes of Health HL-7 and U.S. Department of Education's GEM project-are notable for their absence. The Library of Congress USMARC to SGML effort is discussed in chapter 1, which covers the relationship of XML to its parent SGML, the XML processor, and document type definitions, using MARC as its illustrative example. Chapter 3 addresses the stylesheet options for XML, including DSSSL, CSS, and XSL. The Document Style Semantics and Specification Language (DSSSL) was created for use with SGML, and pruned into DSSSL-Lite and further (DSSSL-online). Cascading Style Sheets (CSS) were created for use with HTML. Extensible Stylesheet Language (XSL) is a further revision (and extension) of DSSSL-online specifically for use with XML. Discussion of aural stylesheets and Synchronized Multimedia Integration Language (SMIL) rounds out the chapter.
    Chapter 4 introduces XML internal and external pointing and linking technologies. XML Link Language (XLL, now XLink) provides unidirectional, multi-ended, and typed linking. XPointer, used with XLink, provides addressing into the interior of XML documents. XPath operates on the logical structure of an XML document, creating a tree of nodes. Used with both XPointer and XSLT, it permits operations on strings, numbers, and Boolean expressions in the document. The final chapter, "Getting Started," argues for the adoption of a tool for XML production. The features and functionality of various tools for content development, application development, databases, and schema development provide an introduction to some of the available options. Roy Tennant is well known in the library community as an author (his column "Digital Libraries" has appeared in Library Journal since 1997 and he has published Current Cites each month for more than a decade), an electronic discussion list manager (Web4Lib and XML4Lib), and as the creator and manager of UC/Berkeley's Digital Library SunSITE. Librarians have wondered what use they might make of XML since its beginnings. Tennant suggests one answer: "The Extensible Markup Language (XML) has the potential to exceed the impact of MARC on librarianship. While MARC is limited to bibliographic description-and arguably a subset at that, as any archivist will tell you-XML provides a highly-effective framework for encoding anything from a bibliographic record for a book to the book itself." (Tennant, p. vii) This slim paperback volume offers librarians and library managers concerned with automation projects "show and tell" examples of XML technologies used as solutions to everyday tasks and challenges. What distinguishes this work is the editor and contributors' commitment to providing messy details. This book's target audience is technically savvy. While not a "cookbook" per se, the information provided on each project serves as a draft blueprint complete with acronyms and jargon. The inclusion of "lessons learned" (including failures as well as successes) is refreshing and commendable. Experienced IT and automation project veterans will appreciate the technical specifics more fully than the general reader.
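    The review above notes that XPath operates on the logical structure of an XML document as a tree of nodes and supports string, number, and Boolean operations. As a minimal, purely illustrative sketch of that idea (the sample document and its values are invented and are not taken from either book under review), Python's standard library exposes a limited XPath subset:

      import xml.etree.ElementTree as ET

      # A tiny stand-in document; structure and values are invented for illustration.
      xml_doc = """
      <order>
        <book id="b1"><title>Learning XML</title><price>34.95</price></book>
        <book id="b2"><title>The ABCs of XML</title><price>39.50</price></book>
      </order>
      """

      root = ET.fromstring(xml_doc)

      # Node selection: every <title> element anywhere below the root.
      titles = [t.text for t in root.findall(".//title")]

      # An attribute predicate, comparable to an XPath Boolean test.
      first = root.find(".//book[@id='b1']/title")

      # Numeric work on selected nodes (full XPath 1.0 has number functions
      # built in; here the arithmetic is done in Python after selection).
      total = sum(float(p.text) for p in root.findall(".//price"))

      print(titles)          # ['Learning XML', 'The ABCs of XML']
      print(first.text)      # Learning XML
      print(round(total, 2)) # 74.45

    A full XPath 1.0 processor (for example, one reached through XSLT) would let the last step be written directly as sum(//price); the standard-library subset shown here only handles the node selection.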
  5. The ABCs of XML : the librarian's guide to the eXtensible Markup Language (2000) 0.00
    1.2325117E-4 = product of:
      0.002958028 = sum of:
        0.002958028 = product of:
          0.008874084 = sum of:
            0.008874084 = weight(_text_:p in 3102) [ClassicSimilarity], result of:
              0.008874084 = score(doc=3102,freq=4.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.11235977 = fieldWeight in 3102, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3102)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
    
    Footnote
    Review in: JASIST 55(2004) no.14, S.1304-1305 (Z. Holbrooks): "The eXtensible Markup Language (XML) and its family of enabling technologies (XPath, XPointer, XLink, XSLT, et al.) were the new "new thing" only a couple of years ago. Happily, XML is now a W3C standard, and its enabling technologies are rapidly proliferating and maturing. Together, they are changing the way data is handled on the Web and how legacy data is accessed and leveraged in corporate archives, and they are offering the Semantic Web community a powerful toolset. Library and information professionals need a basic understanding of what XML is, and what its impacts will be on the library community as content vendors and publishers convert to the new standards. Norman Desmarais aims to provide librarians with an overview of XML and some potential library applications. The ABCs of XML contains the useful basic information that most general XML works cover. It is addressed to librarians, as evidenced by the occasional reference to periodical vendors, MARC, and OPACs. However, librarians without SGML, HTML, database, or programming experience may find the work daunting. The snippets of code-most incomplete and unattended by screenshots to illustrate the result of the code's execution-obscure more often than they enlighten. A single code sample (p. 91, a book purchase order) is immediately recognizable and sensible. There are no figures, illustrations, or screenshots. Subsection headings are used conservatively. Readers are confronted with page after page of unbroken technical text, and occasionally oddly formatted text (in some of the code samples). The author concentrates on commercial products and projects. Library and agency initiatives-for example, the National Institutes of Health HL-7 and U.S. Department of Education's GEM project-are notable for their absence. The Library of Congress USMARC to SGML effort is discussed in chapter 1, which covers the relationship of XML to its parent SGML, the XML processor, and document type definitions, using MARC as its illustrative example. Chapter 3 addresses the stylesheet options for XML, including DSSSL, CSS, and XSL. The Document Style Semantics and Specification Language (DSSSL) was created for use with SGML, and pruned into DSSSL-Lite and further (DSSSL-online). Cascading Style Sheets (CSS) were created for use with HTML. Extensible Stylesheet Language (XSL) is a further revision (and extension) of DSSSL-online specifically for use with XML. Discussion of aural stylesheets and Synchronized Multimedia Integration Language (SMIL) rounds out the chapter.
    Chapter 4 introduces XML internal and external pointing and linking technologies. XML Link Language (XLL, now XLink) provides unidirectional, multi-ended, and typed linking. XPointer, used with XLink, provides addressing into the interior of XML documents. XPath operates on the logical structure of an XML document, creating a tree of nodes. Used with both XPointer and XSLT, it permits operations on strings, numbers, and Boolean expressions in the document. The final chapter, "Getting Started," argues for the adoption of a tool for XML production. The features and functionality of various tools for content development, application development, databases, and schema development provide an introduction to some of the available options. Roy Tennant is well known in the library community as an author (his column "Digital Libraries" has appeared in Library Journal since 1997 and he has published Current Cites each month for more than a decade), an electronic discussion list manager (Web4Lib and XML4Lib), and as the creator and manager of UC/Berkeley's Digital Library SunSITE. Librarians have wondered what use they might make of XML since its beginnings. Tennant suggests one answer: "The Extensible Markup Language (XML) has the potential to exceed the impact of MARC on librarianship. While MARC is limited to bibliographic description-and arguably a subset at that, as any archivist will tell you-XML provides a highly-effective framework for encoding anything from a bibliographic record for a book to the book itself." (Tennant, p. vii) This slim paperback volume offers librarians and library managers concerned with automation projects "show and tell" examples of XML technologies used as solutions to everyday tasks and challenges. What distinguishes this work is the editor and contributors' commitment to providing messy details. This book's target audience is technically savvy. While not a "cookbook" per se, the information provided on each project serves as a draft blueprint complete with acronyms and jargon. The inclusion of "lessons learned" (including failures as well as successes) is refreshing and commendable. Experienced IT and automation project veterans will appreciate the technical specifics more fully than the general reader.
  6. Cross-language information retrieval (1998) 0.00
    1.08939676E-4 = product of:
      0.0026145522 = sum of:
        0.0026145522 = product of:
          0.007843656 = sum of:
            0.007843656 = weight(_text_:p in 6299) [ClassicSimilarity], result of:
              0.007843656 = score(doc=6299,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.099312946 = fieldWeight in 6299, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=6299)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
    
    Content
    Contains the contributions: GREFENSTETTE, G.: The Problem of Cross-Language Information Retrieval; DAVIS, M.W.: On the Effective Use of Large Parallel Corpora in Cross-Language Text Retrieval; BALLESTEROS, L. and W.B. CROFT: Statistical Methods for Cross-Language Information Retrieval; Distributed Cross-Lingual Information Retrieval; Automatic Cross-Language Information Retrieval Using Latent Semantic Indexing; EVANS, D.A. et al.: Mapping Vocabularies Using Latent Semantics; PICCHI, E. and C. PETERS: Cross-Language Information Retrieval: A System for Comparable Corpus Querying; YAMABANA, K. et al.: A Language Conversion Front-End for Cross-Language Information Retrieval; GACHOT, D.A. et al.: The Systran NLP Browser: An Application of Machine Translation Technology in Cross-Language Information Retrieval; HULL, D.: A Weighted Boolean Model for Cross-Language Text Retrieval; SHERIDAN, P. et al.: Building a Large Multilingual Test Collection from Comparable News Documents; OARD, D.W. and B.J. DORR: Evaluating Cross-Language Text Filtering Effectiveness
  7. Saxton, M.L.; Richardson, J.V. Jr.: Understanding reference transactions : transforming an art into a science (2002) 0.00
    1.08939676E-4 = product of:
      0.0026145522 = sum of:
        0.0026145522 = product of:
          0.007843656 = sum of:
            0.007843656 = weight(_text_:p in 2214) [ClassicSimilarity], result of:
              0.007843656 = score(doc=2214,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.099312946 = fieldWeight in 2214, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2214)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
    
    Footnote
    The authors also do a good job of explaining the process of complex model building, making the text a useful resource for dissertation writers. The next two chapters focus on the results of the study. Chapter 5 presents the study findings and introduces four different models of the reference process, derived from the study results. Chapter 6 adds analysis to the discussion of the results. Unfortunately, the "Implications for Practice," "Implications for Research," and "Implications for Education" sections are disappointingly brief-only a few paragraphs each-limiting the utility of the volume to practitioners. Finally, Chapter 7 considers the applicability of systems analysis in modeling the reference process. It also includes a series of data flow diagrams that depict the reference process as an alternative to flowchart depiction. Throughout the book, the authors claim that their study is more complete than any to come before it since previous studies tended to focus on ready reference questions, rather than full-blown reference queries and directional queries, and since previous studies generally excluded telephone reference. They also challenge the long-standing "55% Rule," asserting that "Library users indicate high satisfaction even when they do not find what they want or are not given accurate information" (Saxton & Richardson, 2002, p. 95). Overall, Saxton and Richardson found the major variables that had a statistically significant effect on the outcome measures to be: (1) the extent to which the librarian followed the RUSA Behavioral Guidelines; (2) the difficulty of the query; (3) the user's education level; (4) the user's familiarity with the library; and (5) the level of reference service provided. None of the other variables that were considered, most notably the librarian's experience, the librarian's education level, and the size of the collection, had a statistically significant effect on the outcome measures.
  8. Deegan, M.; Tanner, S.: Digital futures : strategies for the information age (2002) 0.00
    1.08939676E-4 = product of:
      0.0026145522 = sum of:
        0.0026145522 = product of:
          0.007843656 = sum of:
            0.007843656 = weight(_text_:p in 13) [ClassicSimilarity], result of:
              0.007843656 = score(doc=13,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.099312946 = fieldWeight in 13, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=13)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
    
    Footnote
    The most common definition for metadata is "data about data." What metadata does is provide schemes for describing, organizing, exchanging, and receiving information over networks. The authors explain how metadata is used to describe resources by tagging item attributes like author, title, creation date, key words, file formats, compression, etc. The most well known scheme is MARC, but other schemes are developing for creating and managing digital collections, such as XML, TEI, EAD, and Dublin Core. The authors also do a good job of describing the difference between metadata and mark-up languages like HTML. The next two chapters discuss developing, designing, and providing access to a digital collection. In Chapter Six, "Developing and Designing Systems for Sharing Digital Resources," the authors examine a number of issues related to designing a shared collection. For instance, one issue the authors examine is interoperability. The authors stress that when designing a digital collection the creators should take care to ensure that their collection is "managed in such a way as to maximize opportunities for exchange and reuse of information, whether internally or externally" (p. 140). As a complement to Chapter Six, Chapter Seven, "Portals and Personalization: Mechanisms for End-user Access," focuses on the other end of the process: how the collection is used once it is made available. The majority of this chapter concentrates on the use of portals or gateways to digital collections. One example the authors use is MyLibrary@NCState, which provides the university community with a flexible, user-driven, customizable portal that allows users to access remote and local resources. The work logically concludes with a chapter on preservation and a chapter on the evolving role of librarians. Chapter Eight, "Preservation," is a thought-provoking discussion on preserving digital data and digitization as a preservation technique. The authors do a good job of relaying the complexity of preservation issues in a digital world in a single chapter. While the authors do not answer their questions, they definitely provide the reader with some things to ponder. The final chapter, "Digital Librarians: New Roles for the Information Age," outlines where the authors believe librarianship is headed. Throughout the work they stress the role of the librarian in the digital world, but Chapter Nine really brings the point home. As the authors stress, librarians have always managed information, and as experienced leaders in the information field, librarians are uniquely suited to take the digital bull by the horns. Also, the role of the librarian and what librarians can do is growing and evolving. The authors suggest that librarians are likely to move into roles such as knowledge mediator, information architect, hybrid librarian-who brings resources and technologies together-and knowledge preserver. While these librarians must have the technical skills to cope with new technologies, the authors also state that management skills and subject skills will prove equally important.
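    The review above describes metadata as tagging item attributes such as author, title, and creation date, and names Dublin Core among the newer schemes. As a minimal, hedged sketch (element choices and values are illustrative, drawn loosely from the record under review; a real record would follow the DCMI element definitions and a local application profile), a simple Dublin Core description can be serialized with Python's standard library:

      import xml.etree.ElementTree as ET

      DC_NS = "http://purl.org/dc/elements/1.1/"  # Dublin Core element set namespace
      ET.register_namespace("dc", DC_NS)

      # Simple Dublin Core: one element per descriptive attribute of the resource.
      record = ET.Element("record")
      for element, value in [
          ("title", "Digital futures : strategies for the information age"),
          ("creator", "Deegan, M."),
          ("creator", "Tanner, S."),
          ("date", "2002"),
          ("format", "text"),               # illustrative format statement
          ("subject", "digital libraries"), # illustrative subject term
      ]:
          ET.SubElement(record, f"{{{DC_NS}}}{element}").text = value

      print(ET.tostring(record, encoding="unicode"))

    The record element is only a neutral wrapper here; the dc: elements carry exactly the kind of descriptive attributes the review lists, which is what lets such records be exchanged between systems.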
  9. Warner, J.: Humanizing information technology (2004) 0.00
    1.08939676E-4 = product of:
      0.0026145522 = sum of:
        0.0026145522 = product of:
          0.007843656 = sum of:
            0.007843656 = weight(_text_:p in 438) [ClassicSimilarity], result of:
              0.007843656 = score(doc=438,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.099312946 = fieldWeight in 438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=438)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
    
    Footnote
    Like Daniel Bell, the author of The Coming of Post-Industrial Society (1973), who used aspects of Marx's thinking as the basis for his social forecasting models, Warner uses Marxist thought as a tool for social and historical analysis. Unlike Bell, Warner's approach to Marx tends to be doctrinaire. As a result, "An Information View of History" and "Origins of the Human Brain," two of the essays in which Warner sets out to establish the connections between information science and information technology, are less successful. Warner argues, "the classic source for an understanding of technology as a human construction is Marx," and that "a Marxian perspective on information technology could be of high marginal utility," noting additionally that with the exception of Norbert Wiener and John Desmond Bernal, "there has only been a limited penetration of Marxism into information science" (p. 9). But Warner's efforts to persuade the reader that these views are cogent never go beyond academic protocol. Nor does his support for the assertion that the second half of the 19th century was the critical period for innovation and diffusion of modern information technologies. The closing essay, "Whither Information Science?" is particularly disappointing, in part, because the preface and opening chapters of the book promised more than was delivered at the end. Warner asserts that the theoretical framework supporting information science is negligible, and that the discipline is limited even further by the fact that many of its members do not recognize or understand the effects of such a limitation. However cogent the charges may be, none of this is news. But the essay fails most notably because Warner does not have any new directions to offer, save that information scientists should pay closer attention to what is going on in allied disciplines. Moreover, he does not seem to understand that at its heart the "information revolution" is not about the machines, but about the growing legions of men and women who can and do write programming code to exert control over and find new uses for these devices. Nor does he seem to understand that information science, in the grip of what he terms a "quasi-global crisis," suffers grievously because it is a community situated not at the center but rather on the periphery of this revolution."
  10. Kochtanek, T.R.; Matthews, J.R.: Library information systems : from library automation to distributed information systems (2002) 0.00
    1.08939676E-4 = product of:
      0.0026145522 = sum of:
        0.0026145522 = product of:
          0.007843656 = sum of:
            0.007843656 = weight(_text_:p in 1792) [ClassicSimilarity], result of:
              0.007843656 = score(doc=1792,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.099312946 = fieldWeight in 1792, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1792)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
    
    Footnote
    Review in: JASIST 54(2003) no.12, S.1166-1167 (Brenda Chawner): "Kochtanek and Matthews have written a welcome addition to the small set of introductory texts on applications of information technology to library and information services. The book has fourteen chapters grouped into four sections: "The Broader Context," "The Technologies," "Management Issues," and "Future Considerations." Two chapters provide the broader context, with the first giving a historical overview of the development and adoption of "library information systems." Kochtanek and Matthews define this as "a wide array of solutions that previously might have been considered separate industries with distinctly different marketplaces" (p. 3), referring specifically to integrated library systems (ILS, often called library management systems in this part of the world), and online databases, plus the more recent developments of Web-based resources, digital libraries, ebooks, and ejournals. They characterize technology adoption patterns in libraries as ranging from "bleeding edge" to "leading edge" to "in the wedge" to "trailing edge"-this is a catchy restatement of adopter categories from Rogers' diffusion of innovation theory, where they are more conventionally known as "early adopters," "early majority," "late majority," and "laggards." This chapter concludes with a look at more general technology trends that have affected library applications, including developments in hardware (moving from mainframes to minicomputers to personal computers), changes in software development (from in-house to packages), and developments in communications technology (from dedicated host computers to more open networks to the current distributed environment found with the Internet). This is followed by a chapter describing the ILS and online database industries in some detail. "The Technologies" begins with a chapter on the structure and functionality of integrated library systems, which also includes a brief discussion of precision versus recall, managing access to internal documents, indexing and searching, and catalogue maintenance. This is followed by a chapter on open systems, which concludes with a useful list of questions to consider to determine an organization's readiness to adopt open source solutions. As one would expect, this section also includes a detailed chapter on telecommunications and networking, which covers types of networks, transmission media, network topologies, and switching techniques (ranging from dial-up and leased lines to ISDN/DSL, frame relay, and ATM). It concludes with a chapter on the role and importance of standards, which covers the need for standards and standards organizations, and gives examples of different types of standards, such as MARC, Dublin Core, Z39.50, and markup standards such as SGML, HTML, and XML. Unicode is also covered, but only briefly. This section would be strengthened by a chapter on hardware concepts-the authors assume that their reader is already familiar with these, which may not be true in all cases (for example, the phrase "client-server" is first used on page 11, but only given a brief definition in the glossary). Burke's Library Technology Companion: A Basic Guide for Library Staff (New York: Neal-Schuman, 2001) might be useful to fill this gap at an introductory level, and Saffady's Introduction to Automation for Librarians, 4th ed. (Chicago: American Library Association, 1999) would be better for those interested in more detail.
 The final two sections, however, are the book's real strength, with a strong focus on management issues, and this content distinguishes it from other books on this topic such as Ferguson and Hebels' Computers for Librarians: An Introduction to Systems and Applications (Wagga Wagga, NSW: Centre for Information Studies, Charles Sturt University, 1998). ...
  11. Culture and identity in knowledge organization : Proceedings of the Tenth International ISKO Conference 5-8 August 2008, Montreal, Canada (2008) 0.00
    1.08939676E-4 = product of:
      0.0026145522 = sum of:
        0.0026145522 = product of:
          0.007843656 = sum of:
            0.007843656 = weight(_text_:p in 2494) [ClassicSimilarity], result of:
              0.007843656 = score(doc=2494,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.099312946 = fieldWeight in 2494, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2494)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
    
    Content
    EPISTEMOLOGICAL FOUNDATIONS OF KNOWLEDGE ORGANIZATION H. Peter Ohly. Knowledge Organization Pro and Retrospective. - Judith Simon. Knowledge and Trust in Epistemology and Social Software/Knowledge Technologies. - D. Grant Campbell. Derrida, Logocentrism, and the Concept of Warrant on the Semantic Web. - Jian Qin. Controlled Semantics Versus Social Semantics: An Epistemological Analysis. - Hope A. Olson. Wind and Rain and Dark of Night: Classification in Scientific Discourse Communities. - Thomas M. Dousa. Empirical Observation, Rational Structures, and Pragmatist Aims: Epistemology and Method in Julius Otto Kaiser's Theory of Systematic Indexing. - Richard P. Smiraglia. Noesis: Perception and Every Day Classification. - Birger Hjørland. Deliberate Bias in Knowledge Organization? - Joseph T. Tennis and Elin K. Jacob. Toward a Theory of Structure in Information Organization Frameworks. - Jack Andersen. Knowledge Organization as a Cultural Form: From Knowledge Organization to Knowledge Design. - Hur-Li Lee. Origins of the Main Classes in the First Chinese Bibliographic Classification. NON-TEXTUAL MATERIALS Abby Goodrum, Ellen Hibbard, Deborah Fels and Kathryn Woodcock. The Creation of Keysigns American Sign Language Metadata. - Ulrika Kjellman. Visual Knowledge Organization: Towards an International Standard or a Local Institutional Practice?
  12. Next generation search engines : advanced models for information retrieval (2012) 0.00
    1.08939676E-4 = product of:
      0.0026145522 = sum of:
        0.0026145522 = product of:
          0.007843656 = sum of:
            0.007843656 = weight(_text_:p in 357) [ClassicSimilarity], result of:
              0.007843656 = score(doc=357,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.099312946 = fieldWeight in 357, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=357)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
    
    Content
    Contains the contributions: Das, A., A. Jain: Indexing the World Wide Web: the journey so far. Ke, W.: Decentralized search and the clustering paradox in large scale information networks. Roux, M.: Metadata for search engines: what can be learned from e-Sciences? Fluhr, C.: Crosslingual access to photo databases. Djioua, B., J.-P. Desclés and M. Alrahabi: Searching and mining with semantic categories. Ghorbel, H., A. Bahri and R. Bouaziz: Fuzzy ontologies building platform for Semantic Web: FOB platform. Lassalle, E., E. Lassalle: Semantic models in information retrieval. Berry, M.W., R. Esau and B. Kiefer: The use of text mining techniques in electronic discovery for legal matters. Sleem-Amer, M., I. Bigorgne and S. Brizard et al.: Intelligent semantic search engines for opinion and sentiment mining. Hoeber, O.: Human-centred Web search.
  13. Exploring artificial intelligence in the new millennium (2003) 0.00
    8.715174E-5 = product of:
      0.0020916418 = sum of:
        0.0020916418 = product of:
          0.006274925 = sum of:
            0.006274925 = weight(_text_:p in 2099) [ClassicSimilarity], result of:
              0.006274925 = score(doc=2099,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.079450354 = fieldWeight in 2099, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2099)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
    
    Footnote
    The book does achieve its aim of being a starting point for someone interested in the state of some areas of AI research at the beginning of the new millennium. The book's most irritating feature is the different writing styles of the authors. The book is organized as a collection of papers similar to a typical graduate survey course packet, and as a result the book does not possess a narrative flow. Also, the book contains a number of other major weaknesses, such as a lack of an introductory or concluding chapter. The book could greatly benefit from an introductory chapter that would introduce readers to the areas of AI, explain why such a book is needed, and explain why each author's research is important. The manner in which the book currently handles these issues is a preface that talks about some of the above issues in a superficial manner. Also, such an introductory chapter could be used to expound on what level of AI mathematical and statistical knowledge is expected from readers in order to gain maximum benefit from this book. A concluding chapter would be useful to readers interested in the other areas of AI not covered by the book, as well as open issues common to all of the research presented. In addition, most of the contributors come exclusively from the computer science field, which heavily slants the work toward the computer science community. A great deal of the research presented is being used by a number of research communities outside of computer science, such as biotechnology and information technology. A wider audience for this book could have been achieved by including a more diverse range of authors showing the interdisciplinary nature of many of these fields. Also, the book's editors state, "The reader is expected to have basic knowledge of AI at the level of an introductory course to the field" (p. vii), which is not the case for this book. Readers need at least a strong familiarity with many of the core concepts within AI, because a number of the chapters are shallow and terse in their historical overviews. Overall, this book would be a useful tool for a professor putting together a survey course on AI research. Most importantly, the book would be useful for eager graduate students in need of a starting point for their research for their thesis. This book is best suited as a reference guide to be used by individuals with a strong familiarity with AI."
  14. Intner, S.S.; Lazinger, S.S.; Weihs, J.: Metadata and its impact on libraries (2005) 0.00
    8.715174E-5 = product of:
      0.0020916418 = sum of:
        0.0020916418 = product of:
          0.006274925 = sum of:
            0.006274925 = weight(_text_:p in 339) [ClassicSimilarity], result of:
              0.006274925 = score(doc=339,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.079450354 = fieldWeight in 339, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.015625 = fieldNorm(doc=339)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
    
    Footnote
    Chapter 8 discusses issues of archiving and preserving digital materials. The chapter reiterates, "What is the point of all of this if the resources identified and catalogued are not preserved?" (Gorman, 2003, p. 16). Discussion about preservation and related issues is organized in five sections that successively ask why, what, who, how, and how much of the plethora of digital materials should be archived and preserved. These are not easy questions because of media instability and technological obsolescence. Stakeholders in communities with diverse interests compete in terms of which community or representative of a community has an authoritative say in what and how much get archived and preserved. In discussing the above-mentioned questions, the authors once again provide valuable information and lessons from a number of initiatives in Europe, Australia, and from other global initiatives. The Draft Charter on the Preservation of the Digital Heritage and the Guidelines for the Preservation of Digital Heritage, both published by UNESCO, are discussed and some of the preservation principles from the Guidelines are listed. The existing diversity in administrative arrangements for these new projects and resources notwithstanding, the impact on content produced for online reserves through work done in digital projects and from the use of metadata, the impact on levels of reference services, and the ensuing need for different models to train users and staff are undeniable. In terms of education and training, formal coursework, continuing education, and informal and on-the-job training are just some of the available options. The intensity in resources required for cataloguing digital materials, the questions over the quality of digital resources, and the threat of the new digital environment to the survival of the traditional library are all issues quoted by critics and others, however, who are concerned about a balance for planning and resources allocated for traditional or print-based resources and newer digital resources. A number of questions are asked as part of the book's conclusions in Chapter 10. Of these questions, one that touches on all of the rest and upon much of the book's content is the question: What does the future hold for metadata in libraries? Metadata standards are alive and well in many communities of practice, as Chapters 2-6 have demonstrated. The usefulness of metadata continues to be high and innovation in various elements should keep information professionals engaged for decades to come. There is no doubt that metadata have had a tremendous impact in how we organize information for access and in terms of who, how, when, and where contact is made with library services and collections online. Planning and commitment to a diversity of metadata to serve the plethora of needs in communities of practice are paramount for the continued success of many digital projects and for online preservation of our digital heritage."
  15. Lambe, P.: Organising knowledge : taxonomies, knowledge and organisational effectiveness (2007) 0.00
    8.715174E-5 = product of:
      0.0020916418 = sum of:
        0.0020916418 = product of:
          0.006274925 = sum of:
            0.006274925 = weight(_text_:p in 1804) [ClassicSimilarity], result of:
              0.006274925 = score(doc=1804,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.079450354 = fieldWeight in 1804, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1804)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
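
A note on the relevance figures: the breakdown printed under each result is Lucene's explain() output for the ClassicSimilarity (TF-IDF) model. The sketch below recomputes the values shown for the first result (term "p" in doc 2230) under the standard ClassicSimilarity definitions - tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm - with the coord factors applied afterwards:

    import math

    # Values copied from the explain() block of result 1 (doc 2230, term "p").
    freq = 2.0
    doc_freq, max_docs = 3298, 44218
    query_norm = 0.021966046
    field_norm = 0.0234375            # encodes field length (and any index-time boost)
    coords = (1 / 3, 1 / 24)          # coord(1/3) and coord(1/24) from the explain tree

    tf = math.sqrt(freq)                            # 1.4142135
    idf = 1 + math.log(max_docs / (doc_freq + 1))   # 3.5955126
    query_weight = idf * query_norm                 # 0.078979194
    field_weight = tf * idf * field_norm            # 0.11917553

    score = query_weight * field_weight             # 0.009412387
    for c in coords:
        score *= c                    # 0.0031374625, then ~1.3072761e-4

    print(f"{score:.7e}")             # ~1.3072761e-04, matching the listing

The coord(1/3) and coord(1/24) factors record that only one of three clauses in the matching sub-query, and one of twenty-four clauses in the query overall, matched the document, which is why every displayed score rounds to 0.00.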
    

Languages

Types

  • s 417
  • i 131
  • b 31
  • el 30
  • x 22
  • n 11
  • d 9
  • fi 3
  • h 2
  • u 2
  • r 1

Themes

Subjects

Classifications