Search (10692 results, page 535 of 535)

  1. XML data management : native XML and XML-enabled database systems (2003) 0.00
    0.0010485587 = product of:
      0.004194235 = sum of:
        0.004194235 = product of:
          0.00838847 = sum of:
            0.00838847 = weight(_text_:research in 2073) [ClassicSimilarity], result of:
              0.00838847 = score(doc=2073,freq=2.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.063042626 = fieldWeight in 2073, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2073)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
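    The relevance figures above (and on every entry below) are Lucene "explain" trees for the query term "research" under ClassicSimilarity TF-IDF scoring. A minimal sketch reproducing the arithmetic of the first tree, assuming Lucene's classic formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); queryNorm, fieldNorm, and the coord factors are read straight off the tree:

```python
import math

# Values taken from the explain tree for doc 2073 above.
freq, doc_freq, max_docs = 2.0, 6931, 44218
query_norm, field_norm = 0.046639, 0.015625   # fieldNorm = 1/64 for this field length

tf = math.sqrt(freq)                           # 1.4142135 = tf(freq=2.0)
idf = 1 + math.log(max_docs / (doc_freq + 1))  # 2.8529835 = idf(docFreq=6931)
query_weight = idf * query_norm                # 0.13306029 = queryWeight
field_weight = tf * idf * field_norm           # 0.063042626 = fieldWeight
weight = query_weight * field_weight           # 0.00838847 = weight(_text_:research)
score = weight * 0.5 * 0.25                    # coord(1/2) * coord(1/4)
print(f"{score:.10f}")                         # ~0.0010485587
```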
    
    Footnote
    After several detailed examples of XML, Direen and Jones discuss sequence comparisons. The ability to create scored comparisons by such techniques as sequence alignment is fundamental to bioinformatics. For example, the function of a gene product may be inferred from similarity with a gene of known function but originating from a different organism, and any information modeling method must facilitate such comparisons. One such comparison tool, BLAST, which utilizes a heuristic method, has been the tool of choice for many years and is integrated into the NeoCore XMS (XML Management System) described here. Any set of sequences that can be identified using an XPath query may thus become the targets of an embedded search. Again, examples are given, though a BLASTp (protein) search is mislabeled as BLASTn (nucleotide sequence) in one of them. Some variants of BLAST are computationally intensive, e.g., tBLASTx, where a nucleotide sequence is dynamically translated in all six reading frames and compared against similarly translated database sequences. Though these variants are implemented in NeoCore XMS, it would be interesting to see runtimes for such comparisons. Obviously the utility of this and the other four quite specific examples will depend on your interest in the application area, but they are followed by two chapters that are more research-oriented and general. These chapters (on using XML with inductive databases and on XML warehouses) are both readable critical reviews of their respective subject areas. For those involved in the implementation of performance-critical applications an examination of benchmark results is mandatory; however, very few would examine the benchmark tests themselves. The picture that emerges from this section is that no single set is comprehensive and that some functionalities are not addressed by any available benchmark. As always, there is no substitute for an intimate knowledge of your data and how it is used. In a direct comparison of an XML-enabled and a native XML database system (unfortunately neither is named), the authors conclude that though the native system has the edge in handling large documents, this comes at the expense of increasing index and data file size. The need to use legacy data and software will certainly favor the all-pervasive XML-enabled RDBMSs such as Oracle 9i and IBM's DB2. Of more general utility is the chapter by Schmauch and Fellhauer comparing the approaches used by database systems for storing XML documents. Many of the limitations of current XML-handling systems may be traced to problems caused by the semi-structured nature of the documents, and while the authors have no panacea, the chapter forms a useful discussion of the issues and even raises the ugly prospect that a return to the drawing board may be unavoidable. The book concludes with an appraisal of the current status of XML by the editors that perhaps focuses a little too little on the database side, but overall I believe this book to be very useful indeed. Some of the indexing is a little idiosyncratic: for example, some tags used in the examples are indexed (perhaps a separate examples index would be better), and Ron Bourret's excellent web site might be better placed under "Bourret" rather than under "Ron", but this doesn't really detract from the book's qualities. The broad spectrum and careful balance of theory and practice is a combination that both database and XML professionals will find valuable."
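    As background to the tBLASTx step just described, here is a minimal sketch of six-frame translation; it uses Biopython's Seq class as an assumed convenience and is not the NeoCore implementation (which the review does not show):

```python
from Bio.Seq import Seq  # Biopython, assumed available; not the book's code

def six_frame_translations(nucleotides: str):
    """Yield the six protein translations a tBLASTx-style comparison must
    generate: three frame offsets on the forward strand and three on the
    reverse complement."""
    seq = Seq(nucleotides)
    for strand in (seq, seq.reverse_complement()):
        for offset in (0, 1, 2):
            frame = strand[offset:]
            frame = frame[: len(frame) - len(frame) % 3]  # whole codons only
            yield str(frame.translate())

# Toy sequence, for illustration only.
for protein in six_frame_translations("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"):
    print(protein)
```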
  2. Challenges in knowledge representation and organization for the 21st century : integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference, 10-13 July 2002, Granada, Spain (2003) 0.00
    0.0010485587 = product of:
      0.004194235 = sum of:
        0.004194235 = product of:
          0.00838847 = sum of:
            0.00838847 = weight(_text_:research in 2679) [ClassicSimilarity], result of:
              0.00838847 = score(doc=2679,freq=2.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.063042626 = fieldWeight in 2679, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2679)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    10. Applications of Artificial Intelligence Techniques to Information Retrieval (Part I): Christopher S.G. KHOO, Karen NG and Shiyan OU: An Exploratory Study of Human Clustering of Web Pages; Stéphane CHAUDIRON, Majid IHADJADENE and François ROLE: Authorial Index Browsing in an XML Digital Library; Xavier POLANCO: Clusters, Graphs, and Networks for Analyzing Internet-Web-Supported Communication within a Virtual Community; E. HERRERA-VIEDMA, O. CORDÓN, J.C. HERRERA, M. LUQUE: An IRS Based on Multi-Granular Linguistic Information; Pedro CUESTA, Alma M. GÓMEZ and Francisco J. RODRÍGUEZ: Using Agents for Information Retrieval;
    11. Integration of Knowledge in Multicultural Domain-Oriented and General Systems (Part I): Antonio GARCÍA JIMÉNEZ, Alberto DÍAZ ESTEBAN and Pablo GERVÁS: Knowledge Organization in a Multilingual System for the Personalization of Digital News Services: How to Integrate Knowledge; María J. LÓPEZ-HUERTAS and Mario BARITÉ: Knowledge Representation and Organization of Gender Studies on the Internet: Towards Integration; Victoria FRANCU: Language-Independent Structures and Multilingual Information Access; Annelise Mark PEJTERSEN and Hanne ALBRECHTSEN: Models for Collaborative Integration of Knowledge;
    12. Applications of Artificial Intelligence Techniques to Information Retrieval (Part II): C. LÓPEZ-PUJALTE, V.P. GUERRERO, F. de MOYA-ANEGÓN: Evaluation of the Application of Genetic Algorithms to Relevance Feedback; O. CORDÓN, E. HERRERA-VIEDMA, M. LUQUE, F. de MOYA-ANEGÓN and C. ZARCO: An Inductive Query by Example Technique for Extended Boolean Queries Based on Simulated-Annealing Programming; Víctor HERRERO-SOLANA and F. de MOYA-ANEGÓN: Graphical Table of Contents (GTOC) for Library Collections: The Application of UDC Codes for the Subject Maps; Luis M. de CAMPOS, Juan M. FERNÁNDEZ-LUNA and Juan F. HUETE: Managing Documents with Bayesian Belief Networks: A Brief Survey of Applications and Models;
    13. Epistemological Approaches to Classification Principles, Design and Construction: Birger HJOERLAND: The Methodology of Constructing Classification Schemes: A Discussion of the State-of-the-Art; Hope OLSON, Juliet NIELSEN and Shona R. DIPPIE: Encyclopaedist Rivalry, Classificatory Commonality, Illusory Universality; Jian QIN: Evolving Paradigms of Knowledge Representation and Organization: A Comparative Study of Classification, XML/DTD and Ontology; Jens-Erik MAI: Is Classification Theory Possible? Rethinking Classification Research; I.C. McILWAINE: Where Have All the Flowers Gone? An Investigation into the Fate of Some Special Classification Schemes;
    14. Professional Ethics. Users and Information Structures. Evaluation of Systems: J. Carlos FERNÁNDEZ-MOLINA and J. Augusto C. GUIMARÃES: Ethical Aspects of Knowledge Organization and Representation in the Digital Environment: Their Articulation in Professional Codes of Ethics; Ali Asghar SHIRI, Crawford REVIE and Gobinda CHOWDHURY: Assessing the Impact of User Interaction with Thesaural Knowledge Structures: A Quantitative Analysis Framework; Carmen CARO CASTRO and Críspulo TRAVIESO RODRÍGUEZ: Ariadne's Thread: Knowledge Structures for Browsing in OPACs; Linda BANWELL: Developing an Evaluation Framework for a Supranational Digital Library; Antonio L. GARCÍA GUTIÉRREZ: Knowledge Organization from a "Culture of the Border": Towards a Transcultural Ethics of Mediation; Christopher KING, David H. MARWICK and M. Howard WILLIAMS: The Importance of Context in Resolving Conflicts when Sharing User Profiles;
  3. XML in libraries (2002) 0.00
    0.0010485587 = product of:
      0.004194235 = sum of:
        0.004194235 = product of:
          0.00838847 = sum of:
            0.00838847 = weight(_text_:research in 3100) [ClassicSimilarity], result of:
              0.00838847 = score(doc=3100,freq=2.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.063042626 = fieldWeight in 3100, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3100)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Footnote
    Tennant's collection covers a variety of well- and lesser-known XML-based pilot and prototype projects undertaken by libraries around the world. Some of the projects included are: Stanford's XMLMARC conversion, Oregon State's use of XML in interlibrary loaning, e-books (California Digital Library) and electronic scholarly publishing (University of Michigan), the Washington Research Library Consortium's XML-based Web Services, and using TEI Lite to support indexing (Halton Hills Public Library). Of the 13 projects presented, nine are sited in academe, three are state library endeavors, and one is an American public library initiative. The projects are gathered into sections grouped by seven library applications: the use of XML in library catalog records, interlibrary loan, cataloging and indexing, collection building, databases, data migration, and systems interoperability. Each project is introduced with a few paragraphs of background information. The project reports - averaging about 13 pages each - include project goals and justification, project description, challenges and lessons learned (successes and failures), future plans, implications of the work, contact information for the individual(s) responsible for the project, and relevant Web links and resources. The clear strengths of this collection are in the details and the consistency of presentation. The concise project write-ups flow well and encourage interested readers to follow up via personal contacts and URLs. The sole weakness is the price. XML in Libraries will excite and inspire institutions and organizations with technically adept staff resources and visionary leaders. Erik Ray has written a how-to book. Unlike most, Learning XML is not aimed at the professional programming community. The intended audience is readers familiar with structured markup (HTML, TeX, etc.) and Web concepts (hypertext links, data representation). In the first six chapters, Ray introduces XML's main concepts and tools for writing, viewing, testing, and transforming XML (chapter 1), describes basic syntax (chapter 2), discusses linking with XLink and XPointer (chapter 3), introduces Cascading Style Sheets for use with XML (chapter 4), explains document type definitions (DTDs) and schemas (chapter 5), and covers XSLT stylesheets and XPath (chapter 6). Chapter 7 introduces Unicode, internationalization and language support, including CSS and XSLT encoding. Chapter 8 is an overview of writing software for processing XML, and includes the Perl code for an XML syntax checker. This work is written very accessibly for nonprogrammers. Writers, designers, and students just starting to acquire Web technology skills will find Ray's style approachable. Concepts are introduced in a logical flow, and explained clearly. Code samples (130+), illustrations and screen shots (50+), and numerous tables are distributed throughout the text. Ray uses a modified DocBook DTD and a checkbook example throughout, introducing concepts in early chapters and adding new concepts to them. Readers become familiar with the code and its evolution through repeated exposure. The code for converting the "barebones DocBook" DTD (10 pages of code) to HTML via XSLT stylesheet occupies 19 pages. Both code examples allow the learner to see an accumulation of snippets incorporated into a sensible whole. While experienced programmers might not need this type of support, nonprogrammers certainly do. Using the checkbook example is an inspired choice: Most of us are familiar with personal checking, even if few of us would build an XML application for it. Learning XML is an excellent textbook. I've used it for several years as a recommended text for adult continuing education courses and workshops."
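    To make the XSLT step concrete for readers who have not seen one: a toy sketch of applying a stylesheet with Python's lxml. The stylesheet and document here are hypothetical stand-ins, not Ray's 19-page DocBook-to-HTML example:

```python
from lxml import etree

# Hypothetical one-element "DocBook-like" document and a matching stylesheet.
doc = etree.XML(b"<book><title>Learning XML</title></book>")
xslt = etree.XML(b"""
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/book">
    <html><body><h1><xsl:value-of select="title"/></h1></body></html>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(xslt)   # compile the stylesheet into a callable
print(str(transform(doc)))     # <html><body><h1>Learning XML</h1></body></html>
```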
  4. Learning XML (2003) 0.00
    0.0010485587 = product of:
      0.004194235 = sum of:
        0.004194235 = product of:
          0.00838847 = sum of:
            0.00838847 = weight(_text_:research in 3101) [ClassicSimilarity], result of:
              0.00838847 = score(doc=3101,freq=2.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.063042626 = fieldWeight in 3101, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3101)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  5. ¬The ABCs of XML : the librarian's guide to the eXtensible Markup Language (2000) 0.00
    0.0010485587 = product of:
      0.004194235 = sum of:
        0.004194235 = product of:
          0.00838847 = sum of:
            0.00838847 = weight(_text_:research in 3102) [ClassicSimilarity], result of:
              0.00838847 = score(doc=3102,freq=2.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.063042626 = fieldWeight in 3102, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3102)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  6. Ratzan, L.: Understanding information systems : what they do and why we need them (2004) 0.00
    0.0010485587 = product of:
      0.004194235 = sum of:
        0.004194235 = product of:
          0.00838847 = sum of:
            0.00838847 = weight(_text_:research in 4581) [ClassicSimilarity], result of:
              0.00838847 = score(doc=4581,freq=2.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.063042626 = fieldWeight in 4581, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.015625 = fieldNorm(doc=4581)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Footnote
    In "Organizing Information" various fundamental organizational schemes are compared. These include hierarchical, relational, hypertext, and random access models. Each is described initially and then expanded an by listing advantages and disadvantages. This comparative format-not found elsewhere in the book-improves access to the subject and overall understanding. The author then affords considerable space to Boolean searching in the chapter "Retrieving Information." Throughout this chapter, the intricacies and problems of pattern matching and relevance are highlighted. The author elucidates the fact that document retrieval by simple pattern matching is not the same as problem solving. Therefore, "always know the nature of the problem you are trying to solve" (p. 56). This chapter is one of the more important ones in the book, covering a large topic swiftly and concisely. Chapters 5 through 11 then delve deeper into various specific issues of information systems. The chapters an securing and concealing information are exceptionally good. Without mentioning specific technologies, Mr. Ratzan is able to clearly present fundamental aspects of information security. Principles of backup security, password management, and encryption are also discussed in some detail. The latter is illustrated with some fascinating examples, from the Navajo Code Talkers to invisible ink and others. The chapters an measuring, counting, and numbering information complement each other well. Some of the more math-centric discussions and examples are found here. "Measuring Information" begins with a brief overview of bibliometrics and then moves quickly through Lotka's law, Zipf's law, and Bradford's law. For an LIS student, exposure to these topics is invaluable. Baseball statistics and web metrics are used for illustration purposes towards the end. In "counting Information," counting devices and methods are first presented, followed by discussion of the Fibonacci sequence and golden ratio. This relatively long chapter ends with examples of the tower of Hanoi, the changes of winning the lottery, and poker odds. The bulk of "Numbering Information" centers an prime numbers and pi. This chapter reads more like something out of an arithmetic book and seems somewhat extraneous here. Three specific types of information systems are presented in the second half of the book, each afforded its own chapter. These examples are universal as not to become dated or irrelevant over time. "The Computer as an Information System" is relatively short and focuses an bits, bytes, and data compression. Considering the Internet as an information system-chapter 13-is an interesting illustration. It brings up issues of IP addressing and the "privilege-vs.-right" access issue. We are reminded that the distinction between information rights and privileges is often unclear. A highlight of this chapter is the discussion of metaphors people use to describe the Internet, derived from the author's own research. He has found that people have varying mental models of the Internet, potentially affecting its perception and subsequent use.
  7. Lazar, J.: Web usability : a user-centered design approach (2006) 0.00
    0.0010485587 = product of:
      0.004194235 = sum of:
        0.004194235 = product of:
          0.00838847 = sum of:
            0.00838847 = weight(_text_:research in 340) [ClassicSimilarity], result of:
              0.00838847 = score(doc=340,freq=2.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.063042626 = fieldWeight in 340, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.015625 = fieldNorm(doc=340)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Footnote
    The many hands-on examples throughout the book and the four case studies at the end of the book are obvious strong points linking theory with practice. The four case studies are very useful, and it is hard to find such cases in the literature since few companies want to publicize such information. The four case studies are not simple repeats; they are very different from each other and provide readers with specific examples to analyze and follow. Web Usability is an excellent textbook, with a wrap-up (including discussion questions, design exercises, and suggested reading) at the end of each chapter. Each wrap-up first outlines where the focus should be placed, corresponding to what was presented at the very beginning of the chapter. Discussion questions help recall in an active way the main points of each chapter. The design exercises make readers apply what they have just taken from the chapter to a design project, leading to a deeper understanding. Suggested reading provides additional information sources for people who want to study the topic further, which bridges the educational community back to academia. The book is enhanced by two uniform resource locators (URLs) linking to the Addison-Wesley instructor resource center (http://www.aw.com/irc) and the Web-Star survey and project deliverables (http://www.aw.com/cssupport), respectively. There are valuable resources at these two URLs, which can be used together with Web Usability. Like the Web, books are required to possess good information architecture to facilitate understanding. Fortunately, Web Usability has very clear information architecture. Chap. 1 introduces the user-centered Web-development life cycle, which is composed of seven stages. Chap. 2 discusses Stage 1, chaps. 3 and 4 detail Stage 2, chaps. 5 through 7 outline Stage 3, and chaps. 8 through 11 present Stages 4 through 7, respectively. In chaps. 2 through 11, details (called "methods" in this review) are given for every stage of the methodology. The main thread of the book is how to design a new Web site; however, this does not mean that Web redesign is trivial or ignored. The author mentions Web redesign issues from time to time, and a dedicated section discusses redesign in chaps. 2, 3, 10, and 11.
  8. Mossberger, K.; Tolbert, C.J.; Stansbury, M.: Virtual inequality : beyond the digital divide (2003) 0.00
    0.0010485587 = product of:
      0.004194235 = sum of:
        0.004194235 = product of:
          0.00838847 = sum of:
            0.00838847 = weight(_text_:research in 1795) [ClassicSimilarity], result of:
              0.00838847 = score(doc=1795,freq=2.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.063042626 = fieldWeight in 1795, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1795)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    That there is a "digital divide" - which falls between those who have and can afford the latest in technological tools and those who have neither in our society - is indisputable. "Virtual Inequality" redefines the issue as it explores the cascades of that divide, which involve access, skill, political participation, as well as the obvious economics. Computer and Internet access are insufficient without the skill to use the technology, and economic opportunity and political participation provide primary justification for realizing that this inequality is a public problem and not simply a matter of private misfortune. Defying those who say the divide is growing smaller, this volume, based on a national survey that includes data from over 1800 respondents in low-income communities, shows otherwise. In addition to demonstrating why disparities persist in such areas as technological abilities, the survey also shows that the digitally disadvantaged often share many of the same beliefs as their more privileged counterparts. African-Americans, for instance, are even more positive in their attitudes toward technology than whites are in many respects, contrary to conventional wisdom. The rigorous research on which the conclusions are based is presented accessibly and in an easy-to-follow manner. Not content with analysis alone, nor the untangling of the complexities of policymaking, "Virtual Inequality" views the digital divide compassionately in its human dimensions and recommends a set of practical and common-sense policy strategies. Inequality, even in a virtual form this book reminds us, is unacceptable and a situation that society is compelled to address.
  9. Lambe, P.: Organising knowledge : taxonomies, knowledge and organisational effectiveness (2007) 0.00
    0.0010485587 = product of:
      0.004194235 = sum of:
        0.004194235 = product of:
          0.00838847 = sum of:
            0.00838847 = weight(_text_:research in 1804) [ClassicSimilarity], result of:
              0.00838847 = score(doc=1804,freq=2.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.063042626 = fieldWeight in 1804, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1804)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Summary: Taxonomies are often thought to play a niche role within content-oriented knowledge management projects. They are thought to be 'nice to have' but not essential. In this groundbreaking book, Patrick Lambe shows how they play an integral role in helping organizations coordinate and communicate effectively. Through a series of case studies, he demonstrates the range of ways in which taxonomies can help organizations to leverage and articulate their knowledge. A step-by-step guide in the book to running a taxonomy project is full of practical advice for knowledge managers and business owners alike.
    Key Features: Written in a clear, accessible style, demystifying the jargon surrounding taxonomies; case studies give real-world examples of taxonomies in use; step-by-step guides take the reader through the key stages in a taxonomy project; decision-making frameworks and example questionnaires; clear description of how taxonomies relate to technology applications.
    The Author: Patrick Lambe is a widely respected knowledge management consultant based in Singapore. His Master's degree from University College London is in Information Studies and Librarianship, and he has worked as a professional librarian, as a trainer and instructional designer, and as a business manager in operational and strategic roles. He has been active in the field of knowledge management and e-learning since 1998, and in 2002 founded his own consulting and research firm, Straits Knowledge, with a partner. He is a former President of the Information and Knowledge Society, and is Adjunct Professor at Hong Kong Polytechnic University. Patrick speaks and writes internationally on knowledge management.
    Readership: This book is written primarily for knowledge managers and key stakeholders in knowledge management projects. However, it is also useful to all information professionals who wish to understand the role of taxonomies in a corporate setting. It may be used as a teaching text for postgraduate students in Information Studies, Library Science, and Knowledge Management, as well as at MBA level.
    Contents: Part One: Dealing with Babel - the problem of coordination; why taxonomies are important; definitions; taxonomy as a common language; taxonomies express what is important; socially constructed; the business case for taxonomies; taxonomies in KM, collaboration, expertise management and information management; taxonomies, typologies and sensemaking. Part Two: Fixing the foundations: planning your taxonomy project - understanding your context; identifying and engaging stakeholders; defining your purpose; planning your approach; communicating and setting expectations; managing myths; how NOT to do a taxonomy project; a taxonomy as a standard; digital information, hierarchies and facets. Part Three: Building the floors: implementing your taxonomy project - implicit taxonomies; evidence gathering; analysis or sensemaking; validation principles and techniques; change management and learning; taxonomy sustainability and governance; taxonomies and technology; measuring success. Part Four: Looking skywards: the future of taxonomies - complexity and sensemaking; taxonomies as sensemaking frameworks and patterns; taxonomies and serendipity; taxonomies and ambiguity; anti-taxonomy and folksonomies; taxonomies, ignorance and power; taxonomies and organisational renewal.
  10. Libraries and Google (2005) 0.00
    0.0010485587 = product of:
      0.004194235 = sum of:
        0.004194235 = product of:
          0.00838847 = sum of:
            0.00838847 = weight(_text_:research in 1973) [ClassicSimilarity], result of:
              0.00838847 = score(doc=1973,freq=2.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.063042626 = fieldWeight in 1973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1973)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    Introduction: Libraries and Their Interrelationships with Google - William Miller
    Disruptive Beneficence: The Google Print Program and the Future of Libraries - Mark Sandler
    The Google Library Project at Oxford - Ronald Milne
    The (Uncertain) Future of Libraries in a Google World: Sounding an Alarm - Rick Anderson
    A Gaggle of Googles: Limitations and Defects of Electronic Access as Panacea - Mark Y. Herring
    Using the Google Search Appliance for Federated Searching: A Case Study - Mary Taylor
    Google's Print and Scholar Initiatives: The Value of and Impact on Libraries and Information Services - Robert J. Lackie
    Google Scholar vs. Library Scholar: Testing the Performance of Schoogle - Burton Callicott; Debbie Vaughn
    Google, the Invisible Web, and Librarians: Slaying the Research Goliath - Francine Egger-Sider; Jane Devine
    Choices in the Paradigm Shift: Where Next for Libraries? - Shelley E. Phipps; Krisellen Maloney
    Calling the Scholars Home: Google Scholar as a Tool for Rediscovering the Academic Library - Maurice C. York
    Checking Under the Hood: Evaluating Google Scholar for Reference Use - Janice Adlington; Chris Benda
    Running with the Devil: Accessing Library-Licensed Full Text Holdings Through Google Scholar - Rebecca Donlan; Rachel Cooke
    Directing Students to New Information Types: A New Role for Google in Literature Searches? - Mike Thelwall
    Evaluating Google Scholar as a Tool for Information Literacy - Rachael Cathcart; Amanda Roberts
    Optimising Publications for Google Users - Alan Dawson
    Google and Privacy - Paul S. Piper
    Image: Google's Most Important Product - Ron Force
    Keeping Up with Google: Resources and Strategies for Staying Ahead of the Pack - Michael J. Krasulski; Steven J. Bell
  11. Broughton, V.: Essential thesaurus construction (2006) 0.00
    0.0010485587 = product of:
      0.004194235 = sum of:
        0.004194235 = product of:
          0.00838847 = sum of:
            0.00838847 = weight(_text_:research in 2924) [ClassicSimilarity], result of:
              0.00838847 = score(doc=2924,freq=2.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.063042626 = fieldWeight in 2924, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2924)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Footnote
    These sections are written intelligibly and, despite the at times far-from-simple subject matter, are suitable even for beginners. It is certainly an advantage that the author demonstrates thesaurus construction consistently with a single thematic example, having chosen the field of "animal welfare", no doubt partly because the facets and relationships arising there can be followed by most readers without deep subject expertise. The methodological framework of facet analysis is emphasized considerably more strongly here than, say, in the (sparse) German-language thesaurus literature. Besides building the ordering, this approach is also meant to help keep the number of descriptors manageable and to rely less on complex (precombined) descriptors than on postcoordinate indexing. For this purpose, the scheme of 13 "fundamental categories" of the UK Classification Research Group (CRG), regarded as a refinement of Ranganathan's well-known PMEST formula, is proposed and used in the example (Thing / Kind / Part / Property; Material / Process / Operation; Patient / Product / By-product / Agent; Space; Time). As a minor criticism, it may be noted that in her demonstration example Broughton uses as notation for the resulting ordering a sequence of letters that is, in my view, hard to read, although she concedes (p. 165) that a numeric code is often felt to be easier to handle.
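    As an illustration of the postcoordinate indexing that the review contrasts with precombined descriptors, a minimal sketch using the CRG-style categories listed above; the documents and terms are hypothetical, loosely in the spirit of the "animal welfare" example:

```python
# Each document is indexed with simple (category, term) descriptors drawn from
# the CRG fundamental categories (Thing, Kind, Part, Property, Material,
# Process, Operation, Patient, Product, By-product, Agent, Space, Time).
index = {
    "doc1": {("Thing", "hens"), ("Process", "battery farming"), ("Space", "UK")},
    "doc2": {("Thing", "hens"), ("Operation", "free-range rearing")},
}

# The compound subject is assembled at search time (postcoordination) rather
# than stored as one complex precombined descriptor.
query = {("Thing", "hens"), ("Process", "battery farming")}
print([doc for doc, facets in index.items() if query <= facets])  # ['doc1']
```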
  12. Morville, P.: Ambient findability : what we find changes who we become (2005) 0.00
    0.0010485587 = product of:
      0.004194235 = sum of:
        0.004194235 = product of:
          0.00838847 = sum of:
            0.00838847 = weight(_text_:research in 312) [ClassicSimilarity], result of:
              0.00838847 = score(doc=312,freq=2.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.063042626 = fieldWeight in 312, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.015625 = fieldNorm(doc=312)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The book's central thesis is that information literacy, information architecture, and usability are all critical components of this new world order. Hand in hand with that is the contention that only by planning and designing the best possible software, devices, and Internet will we be able to maintain this connectivity in the future. Morville's book is highlighted with full-color illustrations and rich examples that bring his prose to life. Ambient Findability doesn't preach or pretend to know all the answers. Instead, it presents research, stories, and examples in support of its novel ideas. Are we truly at a critical point in our evolution where the quality of our digital networks will dictate how we behave as a species? Is findability indeed the primary key to a successful global marketplace in the 21st century and beyond? Peter Morville takes you on a thought-provoking tour of these memes and more - ideas that will not only fascinate but will stir your creativity in practical ways that you can apply to your work immediately.
