Search (828 results, page 1 of 42)

  • theme_ss:"Internet"
  1. Veittes, M.: Electronic Book (1995) 0.19
    0.19124489 = product of:
      0.28686732 = sum of:
        0.21820296 = weight(_text_:book in 3204) [ClassicSimilarity], result of:
          0.21820296 = score(doc=3204,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.9753932 = fieldWeight in 3204, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.15625 = fieldNorm(doc=3204)
        0.06866435 = product of:
          0.1373287 = sum of:
            0.1373287 = weight(_text_:22 in 3204) [ClassicSimilarity], result of:
              0.1373287 = score(doc=3204,freq=2.0), product of:
                0.17747258 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679956 = queryNorm
                0.77380234 = fieldWeight in 3204, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.15625 = fieldNorm(doc=3204)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    RRZK-Kompass. 1995, Nr.65, S.21-22
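    
    The score breakdowns shown with each result follow Lucene's ClassicSimilarity (TF-IDF) explain format. As a worked check of the arithmetic for the top-ranked record (doc 3204), here is a minimal Python sketch; it is not code from this retrieval system, only the standard formula applied to the numbers printed above: fieldWeight = tf * idf * fieldNorm with tf = sqrt(freq), queryWeight = idf * queryNorm, each term contributes queryWeight * fieldWeight, and coord() scales for the fraction of query clauses that matched.
    
      import math
      
      def field_weight(freq, idf, field_norm):
          # fieldWeight = tf(freq) * idf * fieldNorm, with tf(freq) = sqrt(freq)
          return math.sqrt(freq) * idf * field_norm
      
      def term_score(freq, idf, field_norm, query_norm):
          # score(term) = queryWeight * fieldWeight, where queryWeight = idf * queryNorm
          return (idf * query_norm) * field_weight(freq, idf, field_norm)
      
      query_norm = 0.050679956  # value reported in the explain output above
      
      # term "_text_:book" in doc 3204
      book = term_score(freq=2.0, idf=4.414126, field_norm=0.15625, query_norm=query_norm)
      
      # term "_text_:22" in doc 3204; its sub-query matched 1 of 2 clauses -> coord(1/2)
      t22 = term_score(freq=2.0, idf=3.5018296, field_norm=0.15625, query_norm=query_norm) * 0.5
      
      # the top-level query matched 2 of 3 clauses -> coord(2/3)
      total = (book + t22) * (2.0 / 3.0)
      
      print(round(book, 6), round(t22, 6), round(total, 6))
      # prints approximately: 0.218203 0.068664 0.191245
    
    The same arithmetic reproduces the breakdowns of the other results; only freq, idf, fieldNorm and the coord() fractions change from record to record.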
  2. Friesel, U.: Das Buch wie Cola aus dem Automaten : Book-on-Demand: Gedruckt aus dem Internet, was gewünscht wird, und zwar sofort (1999) 0.11
    0.11474694 = product of:
      0.1721204 = sum of:
        0.1309218 = weight(_text_:book in 3902) [ClassicSimilarity], result of:
          0.1309218 = score(doc=3902,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.58523595 = fieldWeight in 3902, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.09375 = fieldNorm(doc=3902)
        0.041198608 = product of:
          0.082397215 = sum of:
            0.082397215 = weight(_text_:22 in 3902) [ClassicSimilarity], result of:
              0.082397215 = score(doc=3902,freq=2.0), product of:
                0.17747258 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679956 = queryNorm
                0.46428138 = fieldWeight in 3902, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3902)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    17. 7.1996 9:33:22
  3. Chakrabarti, S.: Mining the Web : discovering knowledge from hypertext data (2003) 0.10
    0.0963002 = product of:
      0.14445029 = sum of:
        0.13092178 = weight(_text_:book in 2222) [ClassicSimilarity], result of:
          0.13092178 = score(doc=2222,freq=18.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.5852359 = fieldWeight in 2222, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.03125 = fieldNorm(doc=2222)
        0.013528514 = product of:
          0.027057027 = sum of:
            0.027057027 = weight(_text_:search in 2222) [ClassicSimilarity], result of:
              0.027057027 = score(doc=2222,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.15360467 = fieldWeight in 2222, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2222)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Rez. in: JASIST 55(2004) no.3, S.275-276 (C. Chen): "This is a book about finding significant statistical patterns on the Web - in particular, patterns that are associated with hypertext documents, topics, hyperlinks, and queries. The term pattern in this book refers to dependencies among such items. On the one hand, the Web contains useful information on just about every topic under the sun. On the other hand, just like searching for a needle in a haystack, one would need powerful tools to locate useful information on the vast land of the Web. Soumen Chakrabarti's book focuses on a wide range of techniques for machine learning and data mining on the Web. The goal of the book is to provide both the technical background and the tools and tricks of the trade of Web content mining. Much of the technical content reflects the state of the art between 1995 and 2002. The targeted audience is researchers and innovative developers in this area, as well as newcomers who intend to enter the field. The book begins with an introduction chapter, which explains fundamental concepts such as crawling and indexing as well as clustering and classification. The remaining eight chapters are organized into three parts: i) infrastructure, ii) learning and iii) applications.
    Part I, Infrastructure, has two chapters: Chapter 2 on crawling the Web and Chapter 3 on Web search and information retrieval. The second part of the book, containing chapters 4, 5, and 6, is the centerpiece. This part specifically focuses on machine learning in the context of hypertext. Part III is a collection of applications that utilize the techniques described in earlier chapters. Chapter 7 is on social network analysis. Chapter 8 is on resource discovery. Chapter 9 is on the future of Web mining. Overall, this is a valuable reference book for researchers and developers in the field of Web mining. It should be particularly useful for those who would like to design and probably code their own computer programs out of the equations and pseudocode on most of the pages. For a student, the most valuable feature of the book is perhaps the formal and consistent treatment of concepts across the board. For what is behind and beyond the technical details, one has to either dig deeper into the bibliographic notes at the end of each chapter, or resort to more in-depth analysis of relevant subjects in the literature. If you are looking for success stories about Web mining or hard-won lessons from failures, this is not the book."
  4. Stacey, Alison; Stacey, Adrian: Effective information retrieval from the Internet : an advanced user's guide (2004) 0.09
    0.08930281 = product of:
      0.13395421 = sum of:
        0.10689719 = weight(_text_:book in 4497) [ClassicSimilarity], result of:
          0.10689719 = score(doc=4497,freq=12.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.47784314 = fieldWeight in 4497, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.03125 = fieldNorm(doc=4497)
        0.027057027 = product of:
          0.054114055 = sum of:
            0.054114055 = weight(_text_:search in 4497) [ClassicSimilarity], result of:
              0.054114055 = score(doc=4497,freq=8.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.30720934 = fieldWeight in 4497, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4497)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This book provides practical strategies which enable the advanced web user to locate information effectively and to form a precise evaluation of the accuracy of that information. Although the book provides a brief but thorough review of the technologies which are currently available for these purposes, most of the book concerns practical 'future-proof' techniques which are independent of changes in the tools available. For example, the book covers: how to retrieve salient information quickly; how to remove or compensate for bias; and tuition of novice Internet users.
    Content
    Key Features - Importantly, the book enables readers to develop strategies which will continue to be useful despite the rapidly-evolving state of the Internet and Internet technologies - it is not about technological 'tricks'. - Enables readers to be aware of and compensate for bias and errors which are ubiquitous on the Internet. - Provides contemporary information on the deficiencies in web skills of novice users as well as practical techniques for teaching such users. The Authors Dr Alison Stacey works at the Learning Resource Centre, Cambridge Regional College. Dr Adrian Stacey, formerly based at Cambridge University, is a software programmer. Readership The book is aimed at a wide range of librarians and other information professionals who need to retrieve information from the Internet efficiently, to evaluate their confidence in the information they retrieve and/or to train others to use the Internet. It is primarily aimed at intermediate to advanced users of the Internet. Contents Fundamentals of information retrieval from the Internet - why learn web searching technique; types of information requests; patterns for information retrieval; leveraging the technology: Search term choice: pinpointing information on the web - why choose queries carefully; making search terms work together; how to pick search terms; finding the 'unfindable': Bias on the Internet - importance of bias; sources of bias; user-generated bias: selecting information with which you already agree; assessing and compensating for bias; case studies: Query reformulation and longer term strategies - how to interact with your search engine; foraging for information; long term information retrieval: using the Internet to find trends; automating searches: how to make your machine do your work: Assessing the quality of results - how to assess and ensure quality: The novice user and teaching internet skills - novice users and their problems with the web; case study: research in a college library; interpreting 'second hand' web information.
  5. Müller, J.F.: A librarian's guide to the Internet : a guide to searching and evaluating information (2003) 0.08
    0.07893328 = product of:
      0.118399926 = sum of:
        0.094484664 = weight(_text_:book in 4502) [ClassicSimilarity], result of:
          0.094484664 = score(doc=4502,freq=6.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.42235768 = fieldWeight in 4502, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4502)
        0.023915261 = product of:
          0.047830522 = sum of:
            0.047830522 = weight(_text_:search in 4502) [ClassicSimilarity], result of:
              0.047830522 = score(doc=4502,freq=4.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.27153727 = fieldWeight in 4502, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4502)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    There is a great challenge for librarians to keep up-to-date with how best to use the Internet. This book helps them achieve that goal. It covers, for example, how to search in order to achieve the best results (strategies, what to ask, and examples) and how to interpret results (including examples). Not only does the book show how to use the Internet, but it also links this to perfect customer service - how to teach your customers what you know and how to properly interpret what your customers want.
    Content
    Key Features - Helps a librarian deliver perfect customer service with confidence - Provides practical tips and hints; is pragmatic rather than technical - Is written by a highly respected and experienced practitioner The Author Jeanne Froidevaux Müller is a frequent contributor to the respected magazine Managing Information; the author was, from 1992-2002, head of the library at the Swiss Cancer League. Jeanne is currently based at the public library of Thun, Switzerland. Readership The book is aimed at all librarians and information professionals: in the academic, public and private sectors. It will be of interest to both large and small libraries. Contents Introduction Basis of confidence - the Internet as a tool and not something to be afraid of Data - Information - Knowledge How to search - simple strategies; what to ask; examples Interpreting results - including examples Maintaining a link list in your browser How to teach your customers what you know and how to know what your customers want Perfect customer service
  6. Notess, G.R.: Using CD-ROMs with the Internet (1995) 0.08
    0.07622548 = product of:
      0.11433822 = sum of:
        0.08728119 = weight(_text_:book in 4096) [ClassicSimilarity], result of:
          0.08728119 = score(doc=4096,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.39015728 = fieldWeight in 4096, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0625 = fieldNorm(doc=4096)
        0.027057027 = product of:
          0.054114055 = sum of:
            0.054114055 = weight(_text_:search in 4096) [ClassicSimilarity], result of:
              0.054114055 = score(doc=4096,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.30720934 = fieldWeight in 4096, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4096)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    CD-ROMs are being used in conjunction with the Internet. The Internet on CD-ROM from Ventana Press accompanies a book with in-text URLs marked up in proper HTML. The Superhighway Access CyberSearch CD-ROM is used to store the Lycos Internet index and search engine. Reviews these products and outlines other uses for CD-ROMs
  7. Van Epps, A.S.: ¬The evolution of electronic reference sources (2005) 0.07
    0.07426354 = product of:
      0.11139531 = sum of:
        0.094484664 = weight(_text_:book in 2581) [ClassicSimilarity], result of:
          0.094484664 = score(doc=2581,freq=6.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.42235768 = fieldWeight in 2581, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2581)
        0.016910642 = product of:
          0.033821285 = sum of:
            0.033821285 = weight(_text_:search in 2581) [ClassicSimilarity], result of:
              0.033821285 = score(doc=2581,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.19200584 = fieldWeight in 2581, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2581)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose - To provide a historical look at the development of web versions of reference materials and discuss what makes an easy-to-use and useful electronic handbook. Design/methodology/approach - Electronic reference materials were limited to handbooks available on the web. Observations and assumptions about usability are tested with an information retrieval test for specific tasks in print and online editions of the same texts. Findings - The recommended adoption of those elements which create a well-designed book, in combination with robust search capabilities and online presentation, results in an easy-to-use and useful electronic reference source. Research limitations/implications - The small sample size that was used for testing limits the ability to draw conclusions, and is used only as an indication of the differences between models. A more thorough look at the differences between electronic book aggregates, such as ENGnetBASE, Knovel® and Referex, would highlight the best features for electronic reference materials. Practical implications - Advantages of particular models for electronic reference publishing are discussed, raising awareness for product evaluation. Areas of development for electronic reference book publishers or providers are identified. Work in these areas would help ensure maximum efficiency through cross-title searching via meta-searching and data manipulation. Originality/value - The paper presents results from some human-computer interaction studies about electronic books which have been implemented in a web interface, and the positive effects achieved.
  8. Stuart, D.: Web metrics for library and information professionals (2014) 0.07
    0.07059233 = product of:
      0.105888486 = sum of:
        0.08538542 = weight(_text_:book in 2274) [ClassicSimilarity], result of:
          0.08538542 = score(doc=2274,freq=10.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.38168296 = fieldWeight in 2274, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
        0.020503066 = product of:
          0.041006133 = sum of:
            0.041006133 = weight(_text_:search in 2274) [ClassicSimilarity], result of:
              0.041006133 = score(doc=2274,freq=6.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.23279473 = fieldWeight in 2274, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2274)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional. The book will provide a practical introduction to web metrics for a wide range of library and information professionals, from the bibliometrician wanting to demonstrate the wider impact of a researcher's work than can be demonstrated through traditional citations databases, to the reference librarian wanting to measure how successfully they are engaging with their users on Twitter. It will be a valuable tool for anyone who wants to not only understand the impact of content, but demonstrate this impact to others within the organization and beyond.
    Content
    1. Introduction. Metrics -- Indicators -- Web metrics and Ranganathan's laws of library science -- Web metrics for the library and information professional -- The aim of this book -- The structure of the rest of this book -- 2. Bibliometrics, webometrics and web metrics. Web metrics -- Information science metrics -- Web analytics -- Relational and evaluative metrics -- Evaluative web metrics -- Relational web metrics -- Validating the results -- 3. Data collection tools. The anatomy of a URL, web links and the structure of the web -- Search engines 1.0 -- Web crawlers -- Search engines 2.0 -- Post search engine 2.0: fragmentation -- 4. Evaluating impact on the web. Websites -- Blogs -- Wikis -- Internal metrics -- External metrics -- A systematic approach to content analysis -- 5. Evaluating social media impact. Aspects of social network sites -- Typology of social network sites -- Research and tools for specific sites and services -- Other social network sites -- URL shorteners: web analytic links on any site -- General social media impact -- Sentiment analysis -- 6. Investigating relationships between actors. Social network analysis methods -- Sources for relational network analysis -- 7. Exploring traditional publications in a new environment. More bibliographic items -- Full text analysis -- Greater context -- 8. Web metrics and the web of data. The web of data -- Building the semantic web -- Implications of the web of data for web metrics -- Investigating the web of data today -- SPARQL -- Sindice -- LDSpider: an RDF web crawler -- 9. The future of web metrics and the library and information professional. How far we have come -- The future of web metrics -- The future of the library and information professional and web metrics.
  9. Lazar, J.: Web usability : a user-centered design approach (2006) 0.07
    0.067917824 = product of:
      0.10187673 = sum of:
        0.09511247 = weight(_text_:book in 340) [ClassicSimilarity], result of:
          0.09511247 = score(doc=340,freq=38.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.42516404 = fieldWeight in 340, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.015625 = fieldNorm(doc=340)
        0.006764257 = product of:
          0.013528514 = sum of:
            0.013528514 = weight(_text_:search in 340) [ClassicSimilarity], result of:
              0.013528514 = score(doc=340,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.076802336 = fieldWeight in 340, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.015625 = fieldNorm(doc=340)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Rez. in: JASIST 58(2007) no.7, S.1066-1067 (X. Zhu and J. Liao): "The user, without whom any product or service would be nothing, plays a very important role during the whole life cycle of products or services. The user's involvement should be from the very beginning, not just after products or services are ready to work. According to ISO 9241-11:1998, Part 11, usability refers to "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use." As an academic topic of human-computer interaction, Web usability has been studied widely for a long time. This classroom-oriented book, bridging academia and the educational community, talks about Web usability in a student-friendly fashion. It outlines not only the methodology of user-centered Web site design but also details the methods to implement at every stage of the methodology. That is, the book presents the user-centered Web-design approach from both macrocosm and microcosm points of view, which makes it both recapitulative and practical. The most important key word in Web Usability is "user-centered," which means Web developers should not substitute their own personal preferences for the users' needs. The book classifies Web sites into five types: E-commerce, informational, entertainment, community, and intranet. Since the methods used during Web development differ somewhat depending on the type of Web site, it is necessary to have a classification in advance. With Figure 1.3 on p. 17, the book explains the whole user-centered Web-development life cycle (called "methodology" in this book review), which provides a clear path for Web development that is easy to understand, remember, and perform. Since all the following chapters are based on the methodology, a clear presentation of it is paramount. The table on p. 93 summarizes concisely all types of methods for requirements gathering and their advantages and disadvantages. According to this table, appropriate methods can be easily chosen for different Web site development projects. As the author remarked, "requirements gathering is central to the concept of user-centered design" (p. 98) and "one of the hallmarks of user-centered design is usability testing" (p. 205). Stage 2 (collect user requirements) and Stage 5 (perform usability testing) of the user-centered Web-development life cycle are the two stages with the most user involvement; however, this does not mean that all other stages are unrelated to the user. For example, in Stage 4 (create and modify physical design), frames are not suggested for use simply because most users are unfamiliar with the concept of a frame (p. 201). Note that frequently there are several rounds of usability testing to be performed in the four case studies, and some of them are performed before the physical-design stage or even the conceptual-design stage, which embodies the idea of an iterative design process.
    The many hands-on examples throughout the book and the four case studies at the end of the book are obvious strong points linking theory with practice. The four case studies are very useful, and it is hard to find such cases in the literature since few companies want to publicize such information. The four case studies are not just simple repeats; they are very different from each other and provide readers specific examples to analyze and follow. Web Usability is an excellent textbook, with a wrap-up (including discussion questions, design exercises, and suggested reading) at the end of each chapter. Each wrap-up first outlines where the focus should be placed, corresponding to what was presented at the very beginning of each chapter. Discussion questions help recall in an active way the main points in each chapter. The design exercises make readers apply to a design project what they have just obtained from the chapter, leading to a deeper understanding of knowledge. Suggested reading provides additional information sources for people who want to further study the research topic, which bridges the educational community back to academia. The book is enhanced by two uniform resource locators (URLs) linking to the Addison-Wesley instructor resource center (http://www.aw.com/irc) and the Web-Star survey and project deliverables (http://www.aw.com/cssupport), respectively. There are valuable resources at these two URLs, which can be used together with Web Usability. Like the Web, books are required to possess good information architecture to facilitate understanding. Fortunately, Web Usability has very clear information architecture. Chap. 1 introduces the user-centered Web-development life cycle, which is composed of seven stages. Chap. 2 discusses Stage 1, chaps. 3 and 4 detail Stage 2, chaps. 5 through 7 outline Stage 3, and chaps. 8 through 11 present Stages 4 through 7, respectively. In chaps. 2 through 11, details (called "methods" in this review) are given for every stage of the methodology. The main clue of the book is how to design a new Web site; however, this does not mean that Web redesign is trivial and ignored. The author mentions Web redesign issues from time to time, and a dedicated section is presented to discuss redesign in chaps. 2, 3, 10, and 11.
    Besides the major well-known software applications such as FrontPage and Dreamweaver (pp. 191-194), many useful software tools can be adopted to assist and accelerate the Web-development process, resulting in improvement of the productivity of the Web industry. Web Usability mentions such tools as the "code validator" (p. 189) to identify problematic areas of the handwritten code against spelling and usage, the tool available at a given URL address to convert portable document format (PDF) files into hypertext markup language (HTML) files (p. 201), WEBXACT, WebSAT, A-Prompt, Dottie, InFocus, and RAMP (pp. 226-227) to automate usability testing, and ClickTracks, NetTracker, WebTrends, and Spotfire (p. 263) to summarize Web-usage data and analyze the trends. Thus, Web developers are able to find these tools and benefit from them. Other strengths of the book include the layout of each page, which has a wide margin in which readers may easily place notes, and the fact that the book is easy to read and understand. Although there are many strengths in this book, a few weaknesses are evident. All chapter wrap-ups should have an identical layout. Without numbering for sections and subsections, it is very likely that readers will lose sense of where they are in the overall information architecture of the book. At present, the only solution is to frequently refer to the table of contents to confirm the location. The hands-on example on p. 39 would be better placed in chap. 4 because it focuses on a requirements gathering method, the interview. There are two similar phrases, namely "user population" and "user group," that are used widely in this book. User population is composed of user groups; however, they are not strictly used in this book. The section title "Using a Search Engine" (p. 244) should be on the same level as that of the section "Linking to a URL," and not as that of the section entitled "Marketing: Bringing Users to Your Web Site," according to what the author argued at the top of p. 236.
    Web Usability is undoubtedly a success. After reading this book, Web designers will pay attention to both the content and the usability; otherwise, the majority might overlook the usability. Although this book mainly focuses on students and instructors, it also is appropriate for those who want to develop a user-centered Web site but do not know how. We would suggest that an initial reading is necessary to know what is included under each section title; from then on, when the methodology and methods are applied to guide a real-world project, only the table of contents and the chapter wrap-ups need to be reread, and other parts only when important details are forgotten. With the help of so many examples and strongly viable methods, Web Usability explains almost everything necessary during user-centered Web development and provides tips to help avoid some common mistakes. All of these characteristics facilitate effective and efficient Web-development processes. Similarly, the book reaches its content goal and usability goal as well. In short, Web Usability is an excellent case for book usability: a user-centered edit approach!"
  10. Rowland, M.J.: <Meta> tags (2000) 0.07
    0.067072675 = product of:
      0.100609004 = sum of:
        0.0654609 = weight(_text_:book in 222) [ClassicSimilarity], result of:
          0.0654609 = score(doc=222,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.29261798 = fieldWeight in 222, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.046875 = fieldNorm(doc=222)
        0.03514811 = product of:
          0.07029622 = sum of:
            0.07029622 = weight(_text_:search in 222) [ClassicSimilarity], result of:
              0.07029622 = score(doc=222,freq=6.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.39907667 = fieldWeight in 222, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.046875 = fieldNorm(doc=222)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    <META> tags are used to create meta-information, or information about the information in a Web site. There are many types of <META> tags, but those most relevant to indexing are the description and keyword tags. Description tags provide a short summary of the site contents, which is often displayed by search engines when they list search results. Keyword tags are used to define words or phrases that someone using a search engine might use to look for relevant sites. <META> tags are of interest to indexers for two reasons. They provide a means of making your indexing business Web site more visible to those searching the Web for indexing services, and they offer indexers a potential new source of work: writing keyword and description tags for Web site developers and companies with Web sites. <META> tag writing makes good use of an indexer's ability to choose relevant key terms, and the closely related skill of abstracting: conveying the essence of a document in a sentence or two. A brief illustration of these two tag types follows this record.
    Issue
    Beyond book indexing: how to get started in Web indexing, embedded indexing and other computer-based media. Ed. by D. Brenner u. M. Rowland.
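    
    As a small, purely illustrative sketch of the two tag types described in the abstract above (all values here are invented, not taken from the article), the description and keyword tags for a fictional indexer's business site could be generated like this:
    
      # Illustrative only: hypothetical values for a fictional indexing business site.
      description = "Back-of-the-book, embedded and Web indexing services."
      keywords = ["indexing", "web indexing", "embedded indexing", "abstracting"]
      
      head_snippet = "\n".join([
          f'<meta name="description" content="{description}">',
          f'<meta name="keywords" content="{", ".join(keywords)}">',
      ])
      print(head_snippet)
      # <meta name="description" content="Back-of-the-book, embedded and Web indexing services.">
      # <meta name="keywords" content="indexing, web indexing, embedded indexing, abstracting">
    
    The description value plays the role of the one- or two-sentence abstract the author mentions, while the keywords list corresponds to the key terms an indexer would choose.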
  11. Schwartz, E.: Like a book on a wire (1993) 0.07
    0.06693571 = product of:
      0.10040356 = sum of:
        0.076371044 = weight(_text_:book in 582) [ClassicSimilarity], result of:
          0.076371044 = score(doc=582,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.34138763 = fieldWeight in 582, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0546875 = fieldNorm(doc=582)
        0.02403252 = product of:
          0.04806504 = sum of:
            0.04806504 = weight(_text_:22 in 582) [ClassicSimilarity], result of:
              0.04806504 = score(doc=582,freq=2.0), product of:
                0.17747258 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679956 = queryNorm
                0.2708308 = fieldWeight in 582, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=582)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Publishers weekly. 240(1993) no.47, 22 Nov., S.33-35,38
  12. Laverty, C.Y.C.: Library instruction on the Web : inventing options and opportunities (1997) 0.07
    0.0666973 = product of:
      0.10004594 = sum of:
        0.076371044 = weight(_text_:book in 522) [ClassicSimilarity], result of:
          0.076371044 = score(doc=522,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.34138763 = fieldWeight in 522, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0546875 = fieldNorm(doc=522)
        0.0236749 = product of:
          0.0473498 = sum of:
            0.0473498 = weight(_text_:search in 522) [ClassicSimilarity], result of:
              0.0473498 = score(doc=522,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.2688082 = fieldWeight in 522, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=522)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    With the establishment of the WWW as a standard information tool in academic libraries, there is a greater demand for research assistance than ever before. Reference questions involve more teaching time given the number of interfaces clients confront as they navigate the book catalogue, electronic databases, and the WWW. Librarians require expert knowledge of multiple search strategies as well as the ability to teach others how to apply them effectively. Outlines how the WWW can function as a desktop publishing system, revitalize subject pathfinders and 'how to' guides, and promote the invention of interactive library tutorials. A Web site presenting design ideas accompanies this article at: http://stauffer.queensu.ca/inforef/tutorials/cla/clahome.htm
  13. Notess, G.R.: DejaNews and other Usenet search tools (1998) 0.07
    0.06622919 = product of:
      0.19868755 = sum of:
        0.19868755 = sum of:
          0.121002704 = weight(_text_:search in 5229) [ClassicSimilarity], result of:
            0.121002704 = score(doc=5229,freq=10.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.68694097 = fieldWeight in 5229, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0625 = fieldNorm(doc=5229)
          0.07768484 = weight(_text_:22 in 5229) [ClassicSimilarity], result of:
            0.07768484 = score(doc=5229,freq=4.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.4377287 = fieldWeight in 5229, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=5229)
      0.33333334 = coord(1/3)
    
    Abstract
    Internet newsgroup archives on services such as DejaNews offer important sources of information that may not be found elsewhere online. Describes the content of the DejaNews database, which goes back to 1995 and covers more than 14,000 newsgroups. There are two search options: quick search and power search. Most Web search engines offer links to DejaNews, but AltaVista offers a smaller alternative and supplement to DejaNews. Reference.COM also offers a searchable archive, as well as a useful current awareness service which allows setting up multiple searches under the user profile tab
    Source
    Online. 22(1998) no.4, S.22-28
  14. Research and advanced technology for digital libraries : 10th European conference, ECDL 2006, Alicante, Spain, September 17-22, 2006 ; proceedings (2006) 0.07
    0.06544225 = product of:
      0.09816337 = sum of:
        0.043640595 = weight(_text_:book in 2428) [ClassicSimilarity], result of:
          0.043640595 = score(doc=2428,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.19507864 = fieldWeight in 2428, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.03125 = fieldNorm(doc=2428)
        0.054522768 = sum of:
          0.027057027 = weight(_text_:search in 2428) [ClassicSimilarity], result of:
            0.027057027 = score(doc=2428,freq=2.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.15360467 = fieldWeight in 2428, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.03125 = fieldNorm(doc=2428)
          0.027465738 = weight(_text_:22 in 2428) [ClassicSimilarity], result of:
            0.027465738 = score(doc=2428,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.15476047 = fieldWeight in 2428, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2428)
      0.6666667 = coord(2/3)
    
    Abstract
    This book constitutes the refereed proceedings of the 10th European Conference on Research and Advanced Technology for Digital Libraries, ECDL 2006, held in Alicante, Spain in September 2006. The 36 revised full papers presented together with the extended abstracts of 18 demo papers and 15 revised poster papers were carefully reviewed and selected from a total of 159 submissions. The papers are organized in topical sections on architectures, preservation, retrieval, applications, methodology, metadata, evaluation, user studies, modeling, audiovisual content, and language technologies.
    Content
    Contents include: Architectures I Preservation Retrieval - The Use of Summaries in XML Retrieval / Zoltán Szlávik, Anastasios Tombros, Mounia Lalmas - An Enhanced Search Interface for Information Discovery from Digital Libraries / Georgia Koutrika, Alkis Simitsis - The TIP/Greenstone Bridge: A Service for Mobile Location-Based Access to Digital Libraries / Annika Hinze, Xin Gao, David Bainbridge Architectures II Applications Methodology Metadata Evaluation User Studies Modeling Audiovisual Content Language Technologies - Incorporating Cross-Document Relationships Between Sentences for Single Document Summarizations / Xiaojun Wan, Jianwu Yang, Jianguo Xiao - Semantic Web Techniques for Multiple Views on Heterogeneous Collections: A Case Study / Marjolein van Gendt, Antoine Isaac, Lourens van der Meij, Stefan Schlobach Posters - A Tool for Converting from MARC to FRBR / Trond Aalberg, Frank Berg Haugen, Ole Husby
  15. Rogers, R.: Information politics on the Web (2004) 0.06
    0.0643805 = product of:
      0.096570745 = sum of:
        0.078674205 = weight(_text_:book in 442) [ClassicSimilarity], result of:
          0.078674205 = score(doc=442,freq=26.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.35168302 = fieldWeight in 442, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.015625 = fieldNorm(doc=442)
        0.017896542 = product of:
          0.035793085 = sum of:
            0.035793085 = weight(_text_:search in 442) [ClassicSimilarity], result of:
              0.035793085 = score(doc=442,freq=14.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.2031999 = fieldWeight in 442, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.015625 = fieldNorm(doc=442)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Rez. in: JASIST 58(2007) no.4, S.608-609 (K.D. Desouza): "Richard Rogers explores the distinctiveness of the World Wide Web as a politically contested space where information searchers may encounter multiple explanations of reality. Sources of information on the Web are in constant competition with each other for attention. The attention a source receives will determine its prominence, the ability to be a provider of leading information, and its inclusion in authoritative spaces. Rogers explores the politics behind evaluating sources that are collected and housed on authoritative spaces. Information politics on the Web can be looked at in terms of frontend or back-end politics. Front-end politics is concerned with whether sources on the Web pay attention to principles of inclusivity, fairness, and scope of representation in how information is presented, while back-end politics examines the logic behind how search engines or portals select and index information. Concerning front-end politics, Rogers questions the various versions of reality one can derive from examining information on the Web, especially when issues of information inclusivity and scope of representation are toiled with. In addition, Rogers is concerned with how back-end politics are being controlled by dominant forces of the market (i.e., the more an organization is willing to pay, the greater will be the site's visibility and prominence in authoritative spaces), regardless of whether the information presented on the site justifies such a placement. In the book, Rogers illustrates the issues involved in back-end and front-end politics (though heavily slanted on front-end politics) using vivid cases, all of which are derived from his own research. The main thrust is the exploration of how various "information instruments," defined as "a digital and analytical means of recording (capturing) and subsequently reading indications of states of defined information streams (p. 19)," help capture the politics of the Web. Rogers employs four specific instruments (Lay Decision Support System, Issue Barometer, Web Issue Index of Civil Society, and Election Issue Tracker), which are covered in detail in core chapters of the book (Chapter 2-Chapter 5). The book is comprised of six chapters, with Chapter 1 being the traditional introduction and Chapter 6 being a summary of the major concepts discussed.
    Chapter 2 examines the politics of information retrieval in the context of collaborative filtering techniques. Rogers begins by discussing the underpinnings of modern search engine design by examining medieval practices of knowledge seeking, following up with a critique of collaborative filtering techniques. Rogers's major contention is that collaborative filtering rids us of user idiosyncrasies, as search query strings, preferences, and recommendations are shared among users without much care for the differences among them, both in terms of their innate characteristics and their search goals. To illustrate his critique of collaborative filtering, he describes an information-searching experiment that he conducted with students at the University of Vienna and the University of Amsterdam. Students were asked to search for information on Viagra. As one can imagine, depending on a number of issues, not the least of which is which sources one extracted information from, a student would find different accounts of reality about Viagra, everything from a medical drug to a black-market drug ideal for underground trade. Rogers described how information on the Web differed from official accounts for certain events. The information on the Web served as an alternative reality. Chapter 3 describes the Web as a dynamic debate-mapping tool, a political instrument. Rogers introduces the "Issue Barometer," an information instrument that measures the social pressure on a topic being debated by analyzing data available from the Web. Measures used by the Issue Barometer include temperature of the issue (cold to hot), activity level of the debate (mild to intense), and territorialization (one country to many countries). The Issue Barometer is applied to an illustrative case of the public debate surrounding food safety in the Netherlands in 2001. Chapter 4 introduces "The Web Issue Index," which provides an indication of leading societal issues discussed on the Web. The empirical research on the Web Issue Index was conducted on the Genoa G8 Summit in 1999 and the anti-globalization movement. Rogers' focus here was to examine the changing nature of prominent issues over time, i.e., how issues gained and lost attention and traction.
    In Chapter 5, the "Election Issue Tracker" is introduced. The Election Issue Tracker calculates currency, defined as "frequency of mentions of the issue terms per newspaper and across newspapers," in the three major national newspapers. The Election Issue Tracker is used to study which issues resonate with the press and which do not. As one would expect, Rogers found that not all issues that are considered important or central to a political party resonate with the press. This book contains a wealth of information that can be accessed by both researcher and practitioner. Even more interesting is the fact that researchers from a wide assortment of disciplines, from political science to information science and even communication studies, will appreciate the research and insights put forth by Rogers. Concepts presented in each chapter are thoroughly described using a wide variety of cases. Although all the cases have a European, mainly Dutch, flavor, they are interesting and thought-provoking. I found the descriptions of Rogers' various information instruments to be very interesting. Researchers can gain from an examination of these instruments, as they point to an interesting method for studying activities and behaviors on the Internet. In addition, each chapter has adequate illustrations and the bibliography is comprehensive. This book will make for an ideal supplementary text for graduate courses in information science, communication and media studies, and even political science. Like all books, however, this book had its share of shortcomings. While I was able to appreciate the content of the book, and certainly commend Rogers for studying an issue of immense significance, I found the book to be very difficult to read and parse through. The book is laden with jargon and political statements, and even has several instances of deficient writing. The book also lacked a sense of structure, and this affected the presentation of Rogers' material. I would have also hoped to see some recommendations by Rogers in terms of how researchers should further the ideas he has put forth. Areas of future research, methods for studying future problems, and even insights on what the future might hold for information politics were not given enough attention in the book; in my opinion, this was a major shortcoming. Overall, I commend Rogers for putting forth a very informative book on the issues of information politics on the Web. Information politics, especially when delivered on communication technologies such as the Web, is going to play a vital role in our societies for a long time to come. Debates will range from the politics of how information is searched for and displayed on the Web to how the Web is used to manipulate or politicize information to meet the agendas of various entities. Richard Rogers' book will be one of the seminal and foundational readings on the topic for any curious minds that want to explore these issues."
    LCSH
    Web search engines / Political aspects
    Subject
    Web search engines / Political aspects
  16. Keller, R.M.: A bookmarking service for organizing and sharing URLs (1997) 0.06
    0.06306181 = product of:
      0.09459271 = sum of:
        0.0654609 = weight(_text_:book in 2721) [ClassicSimilarity], result of:
          0.0654609 = score(doc=2721,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.29261798 = fieldWeight in 2721, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.046875 = fieldNorm(doc=2721)
        0.029131817 = product of:
          0.058263633 = sum of:
            0.058263633 = weight(_text_:22 in 2721) [ClassicSimilarity], result of:
              0.058263633 = score(doc=2721,freq=4.0), product of:
                0.17747258 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679956 = queryNorm
                0.32829654 = fieldWeight in 2721, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2721)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Presents WebTagger, an implemented prototype of a personal bookmarking service that provides both individuals and groups with a customisable means of organizing and accessing Web-based information resources. The service enables users to supply feedback on the utility of these resources relative to their information needs, and provides dynamically updated ranking of resources based on incremental user feedback. Individuals may access the service from anywhere on the Internet and require no special software. The service simplifies the process of sharing URLs within groups, in comparison with manual methods involving email. The underlying bookmark organization scheme is more natural and flexible than current hierarchical schemes supported by the major Web browsers and enables rapid access to stored bookmarks
    Date
    1. 8.1996 22:08:06
    17. 1.1999 14:22:14
  17. Schwartz, C.: Sorting out the Web : approaches to subject access (2001) 0.06
    0.06191672 = product of:
      0.09287508 = sum of:
        0.07216386 = weight(_text_:book in 2050) [ClassicSimilarity], result of:
          0.07216386 = score(doc=2050,freq=14.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.322581 = fieldWeight in 2050, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2050)
        0.020711223 = product of:
          0.041422445 = sum of:
            0.041422445 = weight(_text_:search in 2050) [ClassicSimilarity], result of:
              0.041422445 = score(doc=2050,freq=12.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.23515818 = fieldWeight in 2050, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2050)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Rez. in: KO 50(2003) no.1, S.45-46 (L.M. Given): "In her own preface to this work, the author notes her lifelong fascination with classification and order, as well as her more recent captivation with the Internet - a place of "chaos in need of organization" (xi). Sorting out the Web examines current efforts to organize the Web and is well-informed by the author's academic and professional expertise in information organization, information retrieval, and Web development. Although the book's level and tone are particularly relevant to a student audience (or others interested in Web-based subject access at an introductory level), it will also appeal to information professionals developing subject access systems across a range of information contexts. There are six chapters in the book, each describing and analyzing one core concept related to the organization of Web content. All topics are presented in a manner ideal for newcomers to the area, with clear definitions, examples, and visuals that illustrate the principles under discussion. The first chapter provides a brief introduction to developments in information technology, including an historical overview of information services, users' needs, and libraries' responses to the Internet. Chapter two introduces metadata, including core concepts and metadata formats. Throughout this chapter the author presents a number of figures that aptly illustrate the application of metadata in HTML, SGML, and MARC record environments, and the use of metadata tools (e.g., XML, RDF). Chapter three begins with an overview of classification theory and specific schemes, but the author devotes most of the discussion to the application of classification systems in the Web environment (e.g., Dewey, LCC, UDC). Web screen captures illustrate the use of these schemes for information sources posted to sites around the world. The chapter closes with a discussion of the future of classification; this is a particularly useful section, as the author presents a listing of core journal and conference venues where new approaches to Web classification are explored. In chapter four, the author extends the discussion of classification to the use of controlled vocabularies. As in the first few chapters, the author first presents core background material, including reasons to use controlled vocabularies and the differences between pre- and post-coordinate indexing, and then discusses the application of specific vocabularies in the Web environment (e.g., Infomine's use of LCSH). The final section of the chapter explores failure in subject searching and the limitations of controlled vocabularies for the Web. Chapter five discusses one of the most common and fast-growing topics related to subject access on the Web: search engines. The author presents a clear definition of the term that encompasses classified search lists (e.g., Yahoo) and query-based engines (e.g., Alta Vista). In addition to historical background on the development of search engines, Schwartz also examines search service types, features, results, and system performance.
    The chapter concludes with an appendix of search tips that even seasoned searchers will appreciate; these tips cover the complete search process, from preparation to the examination of results. Chapter six is appropriately entitled "Around the Corner," as it provides the reader with a glimpse of the future of subject access for the Web. Text mining, visualization, machine-aided indexing, and other topics are raised here to whet the reader's appetite for what is yet to come. As the author herself notes in these final pages, librarians will likely increase the depth of their collaboration with software engineers, knowledge managers and others outside of the traditional library community, and thereby push the boundaries of subject access for the digital world. This final chapter leaves this reviewer wanting a second volume of the book, one that might explore these additional topics as they evolve over the coming years. One characteristic of any book that addresses trends related to the Internet is how quickly the text becomes dated. However, as the author herself asserts, there are core principles related to subject analysis that stand the test of time, leaving the reader with a text that may be generalized well beyond the publication date. In this, Schwartz's text is similar to other recent publications (e.g., Jakob Nielsen's Web Usability, also published in 2001) that acknowledge the mutability of the Web, and therefore discuss core principles and issues that may be applied as the medium itself evolves. This approach to the writing makes this a useful book for those teaching in the areas of subject analysis, information retrieval, and Web development to consider as a course text. Although the websites used here may need to be supplemented with more current examples in the classroom, the core content of the book will be relevant for many years to come. Although one might expect that any book taking subject access as its focus would, itself, be easy to navigate, this is not always the case. In this text, however, readers will be pleased to find that no small detail in content access has been spared. The subject index is thorough and well-crafted, and the inclusion of an exhaustive author index is particularly useful for quick reference. In addition, the table of contents includes sub-themes for each chapter, and a complete table of figures is provided. While the use of colour figures would greatly enhance the text, all black-and-white images are clear and sharp, a notable fact given that most of the figures are screen captures of websites or database entries. In addition, the inclusion of comprehensive reference lists at the close of each chapter makes this a highly readable text for students and instructors alike; each section of the book can stand as its own "expert review" of the topic at hand. In both content and structure this text is highly recommended. It certainly meets its intended goal of providing a timely introduction to the methods and problems of subject access in the Web environment, and does so in a way that is readable, interesting and engaging."
  18. Conner-Sax, K.; Krol, E.: ¬The whole Internet : the next generation (1999) 0.06
    0.059547067 = product of:
      0.0893206 = sum of:
        0.075587735 = weight(_text_:book in 1448) [ClassicSimilarity], result of:
          0.075587735 = score(doc=1448,freq=6.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.33788615 = fieldWeight in 1448, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.03125 = fieldNorm(doc=1448)
        0.013732869 = product of:
          0.027465738 = sum of:
            0.027465738 = weight(_text_:22 in 1448) [ClassicSimilarity], result of:
              0.027465738 = score(doc=1448,freq=2.0), product of:
                0.17747258 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679956 = queryNorm
                0.15476047 = fieldWeight in 1448, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1448)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    For a snapshot of something that is mutating as quickly as the Internet, The Whole Internet: The Next Generation exhibits remarkable comprehensiveness and accuracy. It's a good panoramic shot of Web sites, Usenet newsgroups, e-mail, mailing lists, chat software, electronic commerce, and the communities that have begun to emerge around all of these. This is the book to buy if you have a handle on certain aspects of the Internet experience--e-mail and Web surfing, for example--but want to learn what else the global network has to offer--say, Web banking or mailing-list management. The authors clearly have seen a thing or two online and are able to share their experiences entertainingly and with clarity. However, they commit the mistake of misidentifying an Amazon.com book review as a publisher's synopsis of a book. Aside from that transgression, The Whole Internet presents detailed information on much of the Internet. In most cases, coverage explains what something (online stock trading, free homepage sites, whatever) is all about and then provides you with enough how-to information to let you start exploring on your own. Coverage ranges from the super-basic (how to surf) to the fairly complex (sharing an Internet connection among several home computers on a network). Along the way, readers get insight into buying, selling, meeting, relating, and doing most everything else on the Internet. While other books explain the first steps into the Internet community with more graphics, this one will remain useful to the newcomer long after he or she has become comfortable using the Internet.
    Footnote
    Rez. in: Internet Professionell. 2000, H.2, S.22
  19. Sherman, C.; Price, G.: ¬The invisible Web : uncovering information sources search engines can't see (2001) 0.06
    0.058914684 = product of:
      0.08837202 = sum of:
        0.05455074 = weight(_text_:book in 62) [ClassicSimilarity], result of:
          0.05455074 = score(doc=62,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.2438483 = fieldWeight in 62, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=62)
        0.033821285 = product of:
          0.06764257 = sum of:
            0.06764257 = weight(_text_:search in 62) [ClassicSimilarity], result of:
              0.06764257 = score(doc=62,freq=8.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.3840117 = fieldWeight in 62, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=62)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Enormous expanses of the Internet are unreachable with standard Web search engines. This book provides the key to these hidden resources by explaining how to uncover and use them. Mapping the invisible Web, when and how to use it, assessing the validity of the information, and the future of Web searching are topics covered in detail. Only 16 percent of Net-based information can be located using a general search engine. The other 84 percent is what is referred to as the invisible Web: information stored in databases. Unlike pages on the visible Web, information in databases is generally inaccessible to the software spiders and crawlers that compile search engine indexes. As Web technology improves, more and more information is being stored in databases that feed dynamically generated Web pages. The tips provided in this resource will ensure that those databases are exposed and that Net-based research is conducted in the most thorough and effective manner. Discusses the use of online information resources and the problems caused by dynamically generated Web pages, paying special attention to information mapping, assessing the validity of information, and the future of Web searching.
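    The abstract's central point, that a crawler which only follows hyperlinks never reaches content that must be requested through a query form, is easy to illustrate. The toy "site" below is our own assumption (it is not from the book): static pages link to one another, while the database records are reachable only via a search function, so a link-following crawler's index misses them.

    # Toy illustration: a link-following crawler vs. database-backed ("invisible") content.

    STATIC_PAGES = {
        "/": ["/about", "/search-form"],
        "/about": [],
        "/search-form": [],          # the form page links nowhere; results require a query
    }

    DATABASE = {                     # records served only in response to a submitted query
        "q=metadata": "Dynamically generated page about metadata standards",
        "q=indexing": "Dynamically generated page about subject indexing",
    }

    def crawl(start="/"):
        """Index every page reachable by following links alone."""
        seen, queue = set(), [start]
        while queue:
            page = queue.pop()
            if page in seen:
                continue
            seen.add(page)
            queue.extend(STATIC_PAGES.get(page, []))
        return seen

    def search_database(query):
        """What a user (or a form-aware agent) gets by actually submitting a query."""
        return DATABASE.get("q=" + query, "no results")

    print(sorted(crawl()))              # only the three static pages are indexed
    print(search_database("metadata"))  # content the crawler never saw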
  20. Clyde, L.A.: Weblogs and libraries (2004) 0.06
    0.058805667 = product of:
      0.0882085 = sum of:
        0.076371044 = weight(_text_:book in 4496) [ClassicSimilarity], result of:
          0.076371044 = score(doc=4496,freq=8.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.34138763 = fieldWeight in 4496, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4496)
        0.01183745 = product of:
          0.0236749 = sum of:
            0.0236749 = weight(_text_:search in 4496) [ClassicSimilarity], result of:
              0.0236749 = score(doc=4496,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.1344041 = fieldWeight in 4496, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4496)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This book discusses the topic of 'weblogs and libraries' from two main perspectives: weblogs as sources of information for libraries and librarians; and weblogs as tools that libraries can use to promote their services and to provide a means of communication with their clients. It begins with an overview of the whole weblog and blogging phenomenon and traces its development over the last six years. The many different kinds of weblogs are outlined (including personal weblogs, community weblogs, multimedia weblogs). The problem of locating weblogs is addressed through a discussion of weblog directories, search engines and other finding tools. Chapters include using weblogs as sources of information in the library or information service, the options for creating a weblog, and managing the library's own weblog.
    Content
    Key Features:
    - No other book currently available specifically addresses this highly topical subject
    - Weblogs are becoming more important as sources of up-to-date information on many different topics, so librarians need to be aware of these resources, how they are created and by whom
    - Weblogs are already important as sources of news and current professional information in the field of library and information science; this book helps librarians to become familiar with the best weblogs in this field
    - While relatively few libraries have created their own weblogs, the use of weblogs has been recommended in the library/information press as a way of providing information for library patrons; this book helps library managers to make decisions about a weblog for their library

Years

Languages

Types

  • a 677
  • m 113
  • s 37
  • el 25
  • r 5
  • b 3
  • x 2

Subjects

Classifications