Search (27138 results, page 1357 of 1357)

  1. Liebowitz, J.: What they didn't tell you about knowledge management (2006) 0.00
    1.0247629E-4 = product of:
      0.0023569546 = sum of:
        0.0023569546 = product of:
          0.0047139092 = sum of:
            0.0047139092 = weight(_text_:1 in 609) [ClassicSimilarity], result of:
              0.0047139092 = score(doc=609,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.08142233 = fieldWeight in 609, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=609)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Isbn
    0-8108-5725-1
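    A note on the scoring trees: every hit in this list is followed by a Lucene "explain" breakdown of its relevance score. As a minimal sketch of how those numbers combine, assuming the engine uses Lucene's ClassicSimilarity (classic TF-IDF), which the labels in the trees suggest, the Python fragment below recomputes the score of hit 1; the constants are copied from the explanation for doc 609 above, and the variable names are our own.

        import math

        # Constants copied from the explanation tree for doc 609 above.
        freq       = 2.0          # occurrences of the query term "1" in the matched field
        doc_freq   = 10304        # documents containing the term
        max_docs   = 44218        # documents in the index
        query_norm = 0.023567878  # query normalisation factor reported by Lucene
        field_norm = 0.0234375    # stored length norm of the matched field

        # ClassicSimilarity building blocks (assumed formulas).
        tf  = math.sqrt(freq)                            # 1.4142135
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 2.4565027

        query_weight = idf * query_norm                  # 0.057894554 (queryWeight)
        field_weight = tf * idf * field_norm             # 0.08142233  (fieldWeight)
        term_score   = query_weight * field_weight       # 0.0047139092

        # coord factors: 1 of 2 clauses matched in the inner query,
        # 1 of 23 clauses matched in the overall query.
        score = term_score * (1 / 2) * (1 / 23)
        print(f"{score:.5E}")  # ~1.02476E-04, the value shown for hit 1

    The remaining trees differ only in freq, fieldNorm and the document number, so the same arithmetic applies throughout; hits with freq=4.0 simply get tf = sqrt(4) = 2.0.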
  2. Thaller, M.: From the digitized to the digital library (2001) 0.00
    1.0247629E-4 = product of:
      0.0023569546 = sum of:
        0.0023569546 = product of:
          0.0047139092 = sum of:
            0.0047139092 = weight(_text_:1 in 1159) [ClassicSimilarity], result of:
              0.0047139092 = score(doc=1159,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.08142233 = fieldWeight in 1159, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1159)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Content
    Theses:
    1. Who should be addressed by digital libraries? How shall we measure whether we have reached the desired audience? Thesis: The primary audience for a digital library is neither the leading specialist in the respective field, nor the freshman, but the advanced student or young researcher and the "almost specialist". The primary topic of digitization projects should not be the absolute top range of the "treasures" of a collection, but those materials that we have always wanted to promote if only they were marginally more important. Whether we effectively serve them to the appropriate community of serious users can only be measured according to criteria that have yet to be developed.
    2. The appropriate size of digital libraries and their access tools. Thesis: Digital collections need a critical, minimal size to make their access worthwhile. In the end, users want to access information, not metadata or gimmicks.
    3. The quality of digital objects. Thesis: If digital library resources are to be integrated into the daily work of the research community, they must appear on the screen of the researcher in a quality that is useful in actual work.
    4. The granularity / modularity of digital repositories. Thesis: While digital libraries are self-contained bodies of information, they are not the basic unit that most users want to access. Users are, as a rule, more interested in the individual objects in the library and need a straightforward way to access them.
    5. Digital collections as integrated reference systems. Thesis: Traditional libraries support their collections with reference material. Digital collections need to find appropriate models to replicate this functionality.
    6. Library and teaching. Thesis: The use of multimedia in teaching is as much of a current buzzword as the creation of digital collections. It is obvious that they should be connected. A clear-cut separation of the two approaches is nevertheless necessary.
  3. Lavoie, B.; Henry, G.; Dempsey, L.: ¬A service framework for libraries (2006) 0.00
    1.0247629E-4 = product of:
      0.0023569546 = sum of:
        0.0023569546 = product of:
          0.0047139092 = sum of:
            0.0047139092 = weight(_text_:1 in 1175) [ClassicSimilarity], result of:
              0.0047139092 = score(doc=1175,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.08142233 = fieldWeight in 1175, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1175)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Abstract
    Libraries have not been idle in the face of the changes re-shaping their environments: in fact, much work is underway and major advances have already been achieved. But these efforts lack a unifying framework, a means for libraries, as a community, to gather the strands of individual projects and weave them into a cohesive whole. A framework of this kind would help in articulating collective expectations, assessing progress, and identifying critical gaps. As the information landscape continually shifts and changes, a framework would promote the design and implementation of flexible, interoperable library systems that can respond more quickly to the needs of libraries in serving their constituents. It would provide a port of entry for organizations outside the library domain, and help them understand the critical points of contact between their services and those of libraries. Perhaps most importantly, a framework would assist libraries in strategic planning. It would provide a tool to help them establish priorities, guide investment, and anticipate future needs in uncertain environments. It was in this context, and in recognition of efforts already underway to align library services with emerging information environments, that the Digital Library Federation (DLF) in 2005 sponsored the formation of the Service Framework Group (SFG) [1] to consider a more systematic, community-based approach to aligning the functions of libraries with increasing automation in fulfilling the needs of information environments. The SFG seeks to understand and model the research library in today's environment by developing a framework within which the services offered by libraries, represented both as business logic and computer processes, can be understood in relation to other parts of the institutional and external information landscape. This framework will help research institutions plan wisely for providing the services needed to meet the current and emerging information needs of their constituents. A service framework is a tool for documenting a shared view of library services in changing environments, communicating it among libraries and others, and applying it to best advantage in meeting library goals. It is a means of focusing attention and organizing discussion. It is not, however, a substitute for innovation and creativity. It does not supply the answers, but facilitates the process by which answers are sought, found, and applied. This paper discusses the SFG's vision of a service framework for libraries, its approach to developing the framework, and the group's work agenda going forward.
  4. Zia, L.L.: ¬The NSF National Science, Technology, Engineering, and Mathematics Education Digital Library (NSDL) Program : new projects from fiscal year 2004 (2005) 0.00
    1.0247629E-4 = product of:
      0.0023569546 = sum of:
        0.0023569546 = product of:
          0.0047139092 = sum of:
            0.0047139092 = weight(_text_:1 in 1221) [ClassicSimilarity], result of:
              0.0047139092 = score(doc=1221,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.08142233 = fieldWeight in 1221, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1221)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Abstract
    In fall 2004, the National Science Foundation's (NSF) National Science, Technology, Engineering, and Mathematics Education Digital Library (NSDL) program made new grants in three tracks: Pathways, Services, and Targeted Research. Together with projects started in fiscal years (FY) 2000-03, these new grants continue the development of a national digital library of high-quality educational resources to support learning at all levels in science, technology, engineering, and mathematics (STEM). By enabling broad access to reliable and authoritative learning and teaching materials and associated services in a digital environment, the National Science Digital Library expects to promote continual improvements in the quality of formal STEM education, and also to serve as a resource for informal and lifelong learning. Proposals for the FY05 funding cycle are due April 11, 2005, and the full solicitation is available at <http://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf05545>. Two NSF directorates, the Directorate for Geosciences (GEO) and the Directorate for Mathematical and Physical Sciences (MPS), have both provided significant co-funding for over twenty projects in the first four years of the program, illustrating the NSDL program's facilitation of the integration of research and education, an important strategic objective of the NSF. In FY2004, the NSDL program introduced a new Pathways track, replacing the earlier Collections track. The Services track strongly encouraged two particular types of projects: (1) selection services and (2) usage development workshops.
    * Pathways projects provide stewardship for educational content and services needed by a broad community of learners;
    * Selection services projects identify and increase the high-quality STEM educational content known to NSDL; and
    * Usage development workshops engage new communities of learners in the use of NSDL and its resources.
  5. Dolin, R.; Agrawal, D.; El Abbadi, A.; Pearlman, J.: Using automated classification for summarizing and selecting heterogeneous information sources (1998) 0.00
    1.0247629E-4 = product of:
      0.0023569546 = sum of:
        0.0023569546 = product of:
          0.0047139092 = sum of:
            0.0047139092 = weight(_text_:1 in 1253) [ClassicSimilarity], result of:
              0.0047139092 = score(doc=1253,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.08142233 = fieldWeight in 1253, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1253)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Source
    D-Lib magazine. 4(1998) no.1, xx S
  6. Case, D.O.: Looking for information : a survey on research on information seeking, needs, and behavior (2002) 0.00
    1.0247629E-4 = product of:
      0.0023569546 = sum of:
        0.0023569546 = product of:
          0.0047139092 = sum of:
            0.0047139092 = weight(_text_:1 in 1270) [ClassicSimilarity], result of:
              0.0047139092 = score(doc=1270,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.08142233 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1270)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Date
    1. 5.2014 18:59:44
  7. Proceedings of the Second ACM/IEEE-CS Joint Conference on Digital Libraries : July 14 - 18, 2002, Portland, Oregon, USA. (2002) 0.00
    9.661558E-5 = product of:
      0.0022221582 = sum of:
        0.0022221582 = product of:
          0.0044443165 = sum of:
            0.0044443165 = weight(_text_:1 in 172) [ClassicSimilarity], result of:
              0.0044443165 = score(doc=172,freq=4.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.07676571 = fieldWeight in 172, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.015625 = fieldNorm(doc=172)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Content
    SESSION: Digital libraries for spatial data
    - The ADEPT digital library architecture (Greg Janée, James Frew)
    - G-Portal: a map-based digital library for distributed geospatial and georeferenced resources (Ee-Peng Lim, Dion Hoe-Lian Goh, Zehua Liu, Wee-Keong Ng, Christopher Soo-Guan Khoo, Susan Ellen Higgins)
    PANEL SESSION: Panels
    - You mean I have to do what with whom: statewide museum/library DIGI collaborative digitization projects - the experiences of California, Colorado & North Carolina (Nancy Allen, Liz Bishoff, Robin Chandler, Kevin Cherry)
    - Overcoming impediments to effective health and biomedical digital libraries (William Hersh, Jan Velterop, Alexa McCray, Gunther Eynsenbach, Mark Boguski)
    - The challenges of statistical digital libraries (Cathryn Dippo, Patricia Cruse, Ann Green, Carol Hert)
    - Biodiversity and biocomplexity informatics: policy and implementation science versus citizen science (P. Bryan Heidorn)
    - Panel on digital preservation (Joyce Ray, Robin Dale, Reagan Moore, Vicky Reich, William Underwood, Alexa T. McCray)
    - NSDL: from prototype to production to transformational national resource (William Y. Arms, Edward Fox, Jeanne Narum, Ellen Hoffman)
    - How important is metadata? (Hector Garcia-Molina, Diane Hillmann, Carl Lagoze, Elizabeth Liddy, Stuart Weibel)
    - Planning for future digital libraries programs (Stephen M. Griffin)
    DEMONSTRATION SESSION: Demonstrations, among others:
    - FACET: thesaurus retrieval with semantic term expansion (Douglas Tudhope, Ceri Binding, Dorothee Blocks, Daniel Cunliffe)
    - MedTextus: an intelligent web-based medical meta-search system (Bin Zhu, Gondy Leroy, Hsinchun Chen, Yongchi Chen)
    POSTER SESSION: Posters
    TUTORIAL SESSION: Tutorials, among others:
    - Thesauri and ontologies in digital libraries: 1. structure and use in knowledge-based assistance to users (Dagobert Soergel)
    - How to build a digital library using open-source software (Ian H. Witten)
    - Thesauri and ontologies in digital libraries: 2. design, evaluation, and development (Dagobert Soergel)
    WORKSHOP SESSION: Workshops
    - Document search interface design for large-scale collections and intelligent access (Javed Mostafa)
    - Visual interfaces to digital libraries (Katy Börner, Chaomei Chen)
    - Text retrieval conference (TREC) genomics pre-track workshop (William Hersh)
    Isbn
    1-581-13513-0
  8. Information systems and the economies of innovation (2003) 0.00
    8.53969E-5 = product of:
      0.0019641288 = sum of:
        0.0019641288 = product of:
          0.0039282576 = sum of:
            0.0039282576 = weight(_text_:1 in 3586) [ClassicSimilarity], result of:
              0.0039282576 = score(doc=3586,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.06785194 = fieldWeight in 3586, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3586)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Isbn
    1-84376-018-5
  9. Deegan, M.; Tanner, S.: Digital futures : strategies for the information age (2002) 0.00
    8.53969E-5 = product of:
      0.0019641288 = sum of:
        0.0019641288 = product of:
          0.0039282576 = sum of:
            0.0039282576 = weight(_text_:1 in 13) [ClassicSimilarity], result of:
              0.0039282576 = score(doc=13,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.06785194 = fieldWeight in 13, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=13)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Isbn
    1-555-70437-9
  10. Antoniou, G.; Harmelen, F. van: ¬A semantic Web primer (2004) 0.00
    8.53969E-5 = product of:
      0.0019641288 = sum of:
        0.0019641288 = product of:
          0.0039282576 = sum of:
            0.0039282576 = weight(_text_:1 in 468) [ClassicSimilarity], result of:
              0.0039282576 = score(doc=468,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.06785194 = fieldWeight in 468, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=468)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Date
    1. 2.1997 9:16:32
  11. Facets: a fruitful notion in many domains : special issue on facet analysis (2008) 0.00
    8.53969E-5 = product of:
      0.0019641288 = sum of:
        0.0019641288 = product of:
          0.0039282576 = sum of:
            0.0039282576 = weight(_text_:1 in 3262) [ClassicSimilarity], result of:
              0.0039282576 = score(doc=3262,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.06785194 = fieldWeight in 3262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3262)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Footnote
    Review in: KO 36(2009) no.1, p.62-63 (K. La Barre): "This special issue of Axiomathes presents an ambitious dual agenda. It attempts to highlight aspects of facet analysis (as used in LIS) that are shared by cognate approaches in philosophy, psychology, linguistics and computer science. Secondarily, the issue aims to attract others to the study and use of facet analysis. The authors represent a blend: those with lifetime involvement with facet analysis, such as Vickery, Broughton, Beghtol, and Dahlberg; those with well-developed research agendas, such as Tudhope and Priss; and relative newcomers such as Gnoli, Cheti and Paradisi, and Slavic. Omissions are inescapable, but a more balanced issue would have resulted from inclusion of at least one researcher from the Indian school of facet theory. Another valuable addition might have been a reaction to the issue by one of the chief critics of facet analysis. Potentially useful, but absent, is a comprehensive bibliography of the resources, now scattered throughout the issue, for those wishing to engage in further study. Several of the papers assume relative familiarity with facet analytical concepts and definitions, some of which are contested even within LIS. Gnoli's introduction (p. 127-130) traces the trajectory, extensions and new developments of this analytico-synthetic approach to subject access, while providing a laundry list of cognate approaches that are similar to facet analysis. This brief essay and the article by Priss (p. 243-255) directly address this first part of Gnoli's agenda. Priss provides detailed discussion of facet-like structures in computer science (p. 245-246), and outlines the similarity between Formal Concept Analysis and facets. This comparison is equally fruitful for researchers in computer science and library and information science. By bridging into a discussion of visualization challenges for facet display, it also invites further research. Many of the remaining papers comprehensively detail the intellectual heritage of facet analysis (Beghtol; Broughton, p. 195-198; Dahlberg; Tudhope and Binding, p. 213-215; Vickery). Beghtol's (p. 131-144) examination of the origins of facet theory through the lens of the textbooks written by Ranganathan's mentor W.C.B. Sayers (1881-1960), Manual of Classification (1926, 1944, 1955), and a textbook written by Mills, A Modern Outline of Classification (1964), serves to reveal the deep intellectual heritage of the changes in classification theory over time, as well as Ranganathan's own influence on and debt to Sayers.
  12. Slavic, A.: Mapping intricacies : UDC to DDC (2010) 0.00
    8.53969E-5 = product of:
      0.0019641288 = sum of:
        0.0019641288 = product of:
          0.0039282576 = sum of:
            0.0039282576 = weight(_text_:1 in 3370) [ClassicSimilarity], result of:
              0.0039282576 = score(doc=3370,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.06785194 = fieldWeight in 3370, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3370)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Content
    Precombined subjects, such as those shown above from Dewey, may be expressed in UDC Summary as examples of combination within various records. To express an exact match, UDC class 07 has to contain the example of combination 07(7) Journals. The Press - North America. In some cases we have, therefore, added examples to UDC Summary that represent exact matches to Dewey Summaries. It is unfortunate that DDC has so many classes on the top level that deal with a selection of countries or languages that are given a preferred status in the scheme, and repeating these preferences in examples of combinations of UDC emulates an unwelcome cultural bias which we have to balance out somehow. This brings us to another challenge. UDC 913(7) Regional Geography - North America [contains 2 concepts, each of which has its own URI] is an exact match to Dewey 917 [represented as one concept, 1 URI]. It seems that, because they represent an exact match to Dewey numbers, these UDC examples of combinations may also need separate URIs so that they can be published as SKOS data. Albeit challenging, mapping proves to be a very useful exercise and I am looking forward to future work here, especially in relation to our plans to map UDC Summary to Colon Classification. We are discussing this project with colleagues from DRTC in Bangalore (India)."
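    The exact-match cases described above lend themselves to publication as SKOS mapping data. As a minimal sketch, assuming hypothetical URI patterns for the UDC Summary and Dewey classes (the actually published URIs may be organised differently), the Python fragment below uses rdflib to mint a URI for the UDC example of combination 913(7) and to state its exact match to Dewey 917.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import SKOS

        # Hypothetical namespaces, for illustration only.
        UDC = Namespace("http://udcdata.info/class/")
        DDC = Namespace("http://dewey.info/class/")

        g = Graph()
        g.bind("skos", SKOS)

        # The UDC example of combination 913(7) "Regional geography - North America",
        # given its own URI so that its exact match to the single Dewey concept 917
        # can be published as SKOS data.
        udc_913_7 = UDC["913(7)"]
        g.add((udc_913_7, SKOS.prefLabel,
               Literal("Regional geography - North America", lang="en")))
        g.add((udc_913_7, SKOS.exactMatch, DDC["917"]))

        print(g.serialize(format="turtle"))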
  13. Hartel, J.: ¬The case against Information and the Body in Library and Information Science (2018) 0.00
    8.53969E-5 = product of:
      0.0019641288 = sum of:
        0.0019641288 = product of:
          0.0039282576 = sum of:
            0.0039282576 = weight(_text_:1 in 5523) [ClassicSimilarity], result of:
              0.0039282576 = score(doc=5523,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.06785194 = fieldWeight in 5523, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=5523)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Footnote
    Cf. DOI: 10.1353/lib.2018.0018. See also the comment in: Lueg, C.: To be or not to be (embodied): that is not the question. In: Journal of the Association for Information Science and Technology. 71(2020) no.1, p.114-117. (Opinion paper) Two articles in a recent special issue on Information and the Body published in the journal Library Trends stand out because of the way they are identifying, albeit indirectly, a formidable challenge to library and information science (LIS). In her contribution, Bates warns that understanding information behavior demands recognizing and studying "any one important element of the ecology [in which humans are embedded]." Hartel, on the other hand, suggests that LIS would not lose much but would have lots to gain by focusing on core LIS themes instead of embodied information, since the latter may be unproductive, as LIS scholars are "latecomer[s] to a mature research domain." I would argue that LIS as a discipline cannot avoid dealing with those pesky mammals aka patrons or users; like the cognate discipline and "community of communities" human-computer interaction (HCI), LIS needs the interdisciplinarity to succeed. LIS researchers are uniquely positioned to help bring together LIS's deep understanding of "information" and embodiment perspectives that may or may not have been developed in other disciplines. LIS researchers need to be more explicit about what their original contribution is, though, and what may have been appropriated from other disciplines.
  14. XML data management : native XML and XML-enabled database systems (2003) 0.00
    6.831753E-5 = product of:
      0.0015713031 = sum of:
        0.0015713031 = product of:
          0.0031426062 = sum of:
            0.0031426062 = weight(_text_:1 in 2073) [ClassicSimilarity], result of:
              0.0031426062 = score(doc=2073,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.05428155 = fieldWeight in 2073, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2073)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Footnote
    Review in: JASIST 55(2004) no.1, p.90-91 (N. Rhodes): "The recent near-exponential increase in XML-based technologies has exposed a gap between these technologies and those that are concerned with more fundamental data management issues. This very comprehensive and well-organized book has quite neatly filled the gap, thus achieving most of its stated intentions. The target audiences are database and XML professionals wishing to combine XML with modern database technologies, and such is the breadth of scope of this book that few would not find it useful in some way. The editors have assembled a collection of chapters from a wide selection of industry heavyweights and, as with most books of this type, it exhibits many disparate styles, but thanks to careful editing it reads well as a cohesive whole. Certain sections have already appeared in print elsewhere and there is a good deal of corporate flag-waving, but nowhere does it become over-intrusive. The preface provides only the very briefest of introductions to XML but instead sets the tone for the remainder of the book. The twin terms of data- and document-centric XML (Bourret, 2003) that have achieved so much recent currency are re-iterated before XML data management issues are considered. It is here that the book's aims are stated, mostly concerned with the approaches and features of the various available XML data management solutions. Not surprisingly, in a specialized book such as this one, an introduction to XML consists of a single chapter. For issues such as syntax, DTDs and XML Schemas the reader is referred elsewhere; here, Chris Brandin provides a practical guide to achieving good grammar and style and argues convincingly for the use of XML as an information-modeling tool. Using a well-chosen and simple example, a practical guide to modeling information is developed, replete with examples of the pitfalls. This brief but illuminating chapter (incidentally available as a "taster" from the publisher's web site) notes that one of the most promising aspects of XML is that applications can be built to use a single mutable information model, obviating the need to change the application code, but that good XML design is the basis of such mutability.
  15. Burnett, R.: How images think (2004) 0.00
    6.831753E-5 = product of:
      0.0015713031 = sum of:
        0.0015713031 = product of:
          0.0031426062 = sum of:
            0.0031426062 = weight(_text_:1 in 3884) [ClassicSimilarity], result of:
              0.0031426062 = score(doc=3884,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.05428155 = fieldWeight in 3884, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3884)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Date
    1. 2.1997 9:16:32
  16. Current theory in library and information science (2002) 0.00
    6.831753E-5 = product of:
      0.0015713031 = sum of:
        0.0015713031 = product of:
          0.0031426062 = sum of:
            0.0031426062 = weight(_text_:1 in 822) [ClassicSimilarity], result of:
              0.0031426062 = score(doc=822,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.05428155 = fieldWeight in 822, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.015625 = fieldNorm(doc=822)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Footnote
    However, for well over a century, major libraries in developed nations have been engaging in sophisticated measurement of their operations, and thoughtful scholars have been involved along the way; if no "unified theory" has emerged thus far, why would it happen in the near future? What if "libraries" are a historically determined conglomeration of distinct functions, some of which are much less important than others? It is telling that McGrath cites as many studies on brittle paper as he does investigations of reference services among his constellation of measurable services, even while acknowledging that the latter (as an aspect of "circulation") is more "essential." If one were to include in a unified theory similar phenomena outside of libraries (e.g., what happens in bookstores and WWW searches), it can be seen how difficult a coordinated explanation might become. Ultimately the value of McGrath's chapter is not in convincing the reader that a unified theory might emerge, but rather in highlighting the best in recent studies that examine library operations, identifying robust conclusions, and arguing for the necessity of clarifying and coordinating common variables and units of analysis. McGrath's article is one that would be useful for a general course in LIS methodology, and certainly for more specific lectures on the evaluation of libraries. I am going to focus most of my comments on the remaining articles about theory, rather than the others that offer empirical results about the growth or quality of literature. I'll describe the latter only briefly. The best way to approach this issue is by first reading McKechnie and Pettigrew's thorough survey of the "Use of Theory in LIS research." Earlier results of their extensive content analysis of 1,160 LIS articles have been published in other journals before, but are especially pertinent here. These authors find that only a third of LIS literature makes overt reference to theory, and that both usage and type of theory are correlated with the specific domain of the research (e.g., historical treatments versus user studies versus information retrieval). Lynne McKechnie and Karen Pettigrew identify four general sources of theory: LIS, the Humanities, Social Sciences and Sciences. This approach makes it obvious that the predominant source of theory is the social sciences (45%), followed by LIS (30%), the sciences (19%) and the humanities (5%), despite a predominance (almost 60%) of articles with science-related content. The authors discuss interdisciplinarity at some length, noting the great many non-LIS authors and theories which appear in the LIS literature, and the tendency for native LIS theories to go uncited outside of the discipline. Two other articles emphasize the ways in which theory has evolved. The more general of these two is Jack Glazier and Robert Grover's update of their classic 1986 Taxonomy of Theory in LIS. This article describes an elaborated version, called the "Circuits of Theory," offering definitions of a hierarchy of terms ranging from "world view" through "paradigm," "grand theory" and (ultimately) "symbols." Glazier & Grover's one-paragraph example of how theory was applied in their study of city managers is much too brief and is at odds with the emphasis on quantitative indicators of literature found in the rest of the volume. The second article about the evolution of theory, Richard Smiraglia's "The progress of theory in knowledge organization," restricts itself to the history of thinking about cataloging and indexing. Smiraglia traces the development of theory from a pragmatic concern with "what works," to a reliance on empirical tests, to an emerging flirtation with historicist approaches to knowledge.
  17. Lazar, J.: Web usability : a user-centered design approach (2006) 0.00
    6.831753E-5 = product of:
      0.0015713031 = sum of:
        0.0015713031 = product of:
          0.0031426062 = sum of:
            0.0031426062 = weight(_text_:1 in 340) [ClassicSimilarity], result of:
              0.0031426062 = score(doc=340,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.05428155 = fieldWeight in 340, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.015625 = fieldNorm(doc=340)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Footnote
    The many hands-on examples throughout the book and the four case studies at the end of the book are obvious strong points linking theory with practice. The four case studies are very useful, and it is hard to find such cases in the literature since few companies want to publicize such information. The four case studies are not just simple repeats; they are very different from each other and provide readers with specific examples to analyze and follow. Web Usability is an excellent textbook, with a wrap-up (including discussion questions, design exercises, and suggested reading) at the end of each chapter. Each wrap-up first outlines where the focus should be placed, corresponding to what was presented at the very beginning of each chapter. Discussion questions help recall in an active way the main points in each chapter. The design exercises make readers apply to a design project what they have just obtained from the chapter, leading to a deeper understanding of the material. Suggested reading provides additional information sources for people who want to further study the research topic, which bridges the educational community back to academia. The book is enhanced by two universal resource locators (URLs) linking to the Addison-Wesley instructor resource center (http://www.aw.com/irc) and the Web-Star survey and project deliverables (http://www.aw.com/cssupport), respectively. There are valuable resources at these two URLs, which can be used together with Web Usability. Like the Web, books are required to possess good information architecture to facilitate understanding. Fortunately, Web Usability has very clear information architecture. Chap. 1 introduces the user-centered Web-development life cycle, which is composed of seven stages. Chap. 2 discusses Stage 1, chaps. 3 and 4 detail Stage 2, chaps. 5 through 7 outline Stage 3, and chaps. 8 through 11 present Stages 4 through 7, respectively. In chaps. 2 through 11, details (called "methods" in this review) are given for every stage of the methodology. The main thread of the book is how to design a new Web site; however, this does not mean that Web redesign is trivial and ignored. The author mentions Web redesign issues from time to time, and a dedicated section is presented to discuss redesign in chaps. 2, 3, 10, and 11.
  18. Net effects : how librarians can manage the unintended consequenees of the Internet (2003) 0.00
    6.831753E-5 = product of:
      0.0015713031 = sum of:
        0.0015713031 = product of:
          0.0031426062 = sum of:
            0.0031426062 = weight(_text_:1 in 1796) [ClassicSimilarity], result of:
              0.0031426062 = score(doc=1796,freq=2.0), product of:
                0.057894554 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.023567878 = queryNorm
                0.05428155 = fieldWeight in 1796, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1796)
          0.5 = coord(1/2)
      0.04347826 = coord(1/23)
    
    Isbn
    1-57387-171-0
