Search (102 results, page 1 of 6)

  • language_ss:"e"
  • theme_ss:"Internet"
  • type_ss:"m"
  1. Sherman, C.; Price, G.: The invisible Web : uncovering information sources search engines can't see (2001) 0.16
    0.16329612 = product of:
      0.21772815 = sum of:
        0.09647507 = weight(_text_:web in 62) [ClassicSimilarity], result of:
          0.09647507 = score(doc=62,freq=22.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.59793836 = fieldWeight in 62, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=62)
        0.06598687 = weight(_text_:search in 62) [ClassicSimilarity], result of:
          0.06598687 = score(doc=62,freq=8.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.3840117 = fieldWeight in 62, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=62)
        0.05526622 = product of:
          0.11053244 = sum of:
            0.11053244 = weight(_text_:engine in 62) [ClassicSimilarity], result of:
              0.11053244 = score(doc=62,freq=4.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.41792953 = fieldWeight in 62, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=62)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Enormous expanses of the Internet are unreachable with standard Web search engines. This book provides the key to finding these hidden resources by showing how to uncover and use them. Mapping the invisible Web, when and how to use it, assessing the validity of the information, and the future of Web searching are topics covered in detail. Only 16 percent of Net-based information can be located using a general search engine. The other 84 percent is what is referred to as the invisible Web: information stored in databases. Unlike pages on the visible Web, information in databases is generally inaccessible to the software spiders and crawlers that compile search engine indexes. As Web technology improves, more and more information is being stored in databases that feed into dynamically generated Web pages. The tips provided in this resource will ensure that those databases are exposed and that Net-based research is conducted in the most thorough and effective manner. Discusses the use of online information resources and problems caused by dynamically generated Web pages, paying special attention to information mapping, assessing the validity of information, and the future of Web searching.
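The indented score trees in these records are Lucene "explain" output for its ClassicSimilarity (TF-IDF) scoring. As a minimal sketch, assuming nothing beyond the constants shown in the tree for result 1 (the function names are ours, not Lucene's API), the numbers can be reproduced like this:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def field_weight(freq, doc_freq, max_docs, field_norm):
    # fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
    return math.sqrt(freq) * idf(doc_freq, max_docs) * field_norm

query_norm = 0.049439456  # queryNorm from the tree above

# term "web" in doc 62: freq=22, docFreq=4597, maxDocs=44218, fieldNorm=0.0390625
idf_web = idf(4597, 44218)                        # ~3.2635105
qw = idf_web * query_norm                         # queryWeight ~0.16134618
fw = field_weight(22.0, 4597, 44218, 0.0390625)   # fieldWeight ~0.59793836
score = qw * fw                                   # ~0.09647507

# document score: sum the three term scores, apply coord(1/2) on the
# "engine" sub-query and coord(3/4) for 3 of 4 matching clauses
total = (0.09647507 + 0.06598687 + 0.11053244 * 0.5) * 0.75  # ~0.16329612
```

The same arithmetic checks out against every tree on this page; only the per-term constants (freq, docFreq, fieldNorm) change.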
  2. Stuart, D.: Web metrics for library and information professionals (2014) 0.15
    0.14830285 = product of:
      0.19773714 = sum of:
        0.13037933 = weight(_text_:web in 2274) [ClassicSimilarity], result of:
          0.13037933 = score(doc=2274,freq=82.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.808072 = fieldWeight in 2274, product of:
              9.055386 = tf(freq=82.0), with freq of:
                82.0 = termFreq=82.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
        0.04000242 = weight(_text_:search in 2274) [ClassicSimilarity], result of:
          0.04000242 = score(doc=2274,freq=6.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.23279473 = fieldWeight in 2274, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
        0.027355384 = product of:
          0.05471077 = sum of:
            0.05471077 = weight(_text_:engine in 2274) [ClassicSimilarity], result of:
              0.05471077 = score(doc=2274,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.20686457 = fieldWeight in 2274, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2274)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional. The book will provide a practical introduction to web metrics for a wide range of library and information professionals, from the bibliometrician wanting to demonstrate the wider impact of a researcher's work than can be demonstrated through traditional citations databases, to the reference librarian wanting to measure how successfully they are engaging with their users on Twitter. It will be a valuable tool for anyone who wants to not only understand the impact of content, but demonstrate this impact to others within the organization and beyond.
    Content
    1. Introduction. Metrics -- Indicators -- Web metrics and Ranganathan's laws of library science -- Web metrics for the library and information professional -- The aim of this book -- The structure of the rest of this book -- 2. Bibliometrics, webometrics and web metrics. Web metrics -- Information science metrics -- Web analytics -- Relational and evaluative metrics -- Evaluative web metrics -- Relational web metrics -- Validating the results -- 3. Data collection tools. The anatomy of a URL, web links and the structure of the web -- Search engines 1.0 -- Web crawlers -- Search engines 2.0 -- Post search engine 2.0: fragmentation -- 4. Evaluating impact on the web. Websites -- Blogs -- Wikis -- Internal metrics -- External metrics -- A systematic approach to content analysis -- 5. Evaluating social media impact. Aspects of social network sites -- Typology of social network sites -- Research and tools for specific sites and services -- Other social network sites -- URL shorteners: web analytic links on any site -- General social media impact -- Sentiment analysis -- 6. Investigating relationships between actors. Social network analysis methods -- Sources for relational network analysis -- 7. Exploring traditional publications in a new environment. More bibliographic items -- Full text analysis -- Greater context -- 8. Web metrics and the web of data. The web of data -- Building the semantic web -- Implications of the web of data for web metrics -- Investigating the web of data today -- SPARQL -- Sindice -- LDSpider: an RDF web crawler -- 9. The future of web metrics and the library and information professional. How far we have come -- The future of web metrics -- The future of the library and information professional and web metrics.
    RSWK
    Bibliothek / World Wide Web / World Wide Web 2.0 / Analyse / Statistik
    Bibliometrie / Semantic Web / Soziale Software
    Subject
    Bibliothek / World Wide Web / World Wide Web 2.0 / Analyse / Statistik
    Bibliometrie / Semantic Web / Soziale Software
  3. Social information retrieval systems : emerging technologies and applications for searching the Web effectively (2008) 0.13
    0.13101415 = product of:
      0.17468554 = sum of:
        0.0735883 = weight(_text_:web in 4127) [ClassicSimilarity], result of:
          0.0735883 = score(doc=4127,freq=20.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.45608947 = fieldWeight in 4127, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4127)
        0.06983395 = weight(_text_:search in 4127) [ClassicSimilarity], result of:
          0.06983395 = score(doc=4127,freq=14.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.4063998 = fieldWeight in 4127, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=4127)
        0.031263296 = product of:
          0.06252659 = sum of:
            0.06252659 = weight(_text_:engine in 4127) [ClassicSimilarity], result of:
              0.06252659 = score(doc=4127,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.23641664 = fieldWeight in 4127, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4127)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Content
    Collaborating to search effectively in different searcher modes through cues and specialty search / Naresh Kumar Agarwal and Danny C.C. Poo -- Collaborative querying using a hybrid content and results-based approach / Chandrani Sinha Ray ... [et al.] -- Collaborative classification for group-oriented organization of search results / Keiichi Nakata and Amrish Singh -- A case study of use-centered descriptions : archival descriptions of what can be done with a collection / Richard Butterworth -- Metadata for social recommendations : storing, sharing, and reusing evaluations of learning resources / Riina Vuorikari, Nikos Manouselis, and Erik Duval -- Social network models for enhancing reference-based search engine rankings / Nikolaos Korfiatis ... [et al.] -- From PageRank to social rank : authority-based retrieval in social information spaces / Sebastian Marius Kirsch ... [et al.] -- Adaptive peer-to-peer social networks for distributed content-based Web search / Le-Shin Wu ... [et al.] -- The ethics of social information retrieval / Brendan Luyt and Chu Keong Lee -- The social context of knowledge / Daniel Memmi -- Social information seeking in digital libraries / George Buchanan and Annika Hinze -- Relevant intra-actions in networked environments / Theresa Dirndorfer Anderson -- Publication and citation analysis as a tool for information retrieval / Ronald Rousseau -- Personalized information retrieval in a semantic-based learning environment / Antonella Carbonaro and Rodolfo Ferrini -- Multi-agent tourism system (MATS) / Soe Yu Maw and Myo-Myo Naing -- Hybrid recommendation systems : a case study on the movies domain / Konstantinos Markellos ... [et al.].
    LCSH
    Web search engines
    World Wide Web / Subject access
    RSWK
    World Wide Web 2.0
    Information Retrieval / World Wide Web / Suchmaschine
    Subject
    Web search engines
    World Wide Web / Subject access
    World Wide Web 2.0
    Information Retrieval / World Wide Web / Suchmaschine
  4. Rogers, R.: Digital methods (2013) 0.13
    0.12834272 = product of:
      0.17112362 = sum of:
        0.08707084 = weight(_text_:web in 2354) [ClassicSimilarity], result of:
          0.08707084 = score(doc=2354,freq=28.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.5396523 = fieldWeight in 2354, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2354)
        0.052789498 = weight(_text_:search in 2354) [ClassicSimilarity], result of:
          0.052789498 = score(doc=2354,freq=8.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.30720934 = fieldWeight in 2354, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=2354)
        0.031263296 = product of:
          0.06252659 = sum of:
            0.06252659 = weight(_text_:engine in 2354) [ClassicSimilarity], result of:
              0.06252659 = score(doc=2354,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.23641664 = fieldWeight in 2354, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2354)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    In Digital Methods, Richard Rogers proposes a methodological outlook for social and cultural scholarly research on the Web that seeks to move Internet research beyond the study of online culture. It is not a toolkit for Internet research, or operating instructions for a software package; it deals with broader questions. How can we study social media to learn something about society rather than about social media use? How can hyperlinks reveal not just the value of a Web site but the politics of association? Rogers proposes repurposing Web-native techniques for research into cultural change and societal conditions. We can learn to reapply such "methods of the medium" as crawling and crowd sourcing, PageRank and similar algorithms, tag clouds and other visualizations; we can learn how they handle hits, likes, tags, date stamps, and other Web-native objects. By "thinking along" with devices and the objects they handle, digital research methods can follow the evolving methods of the medium. Rogers uses this new methodological outlook to examine the findings of inquiries into 9/11 search results, the recognition of climate change skeptics by climate-change-related Web sites, the events surrounding the Srebrenica massacre according to Dutch, Serbian, Bosnian, and Croatian Wikipedias, presidential candidates' social media "friends," and the censorship of the Iranian Web. With Digital Methods, Rogers introduces a new vision and method for Internet research and at the same time applies them to the Web's objects of study, from tiny particles (hyperlinks) to large masses (social media).
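Among the "methods of the medium" the abstract names is PageRank. As a hypothetical, minimal illustration of the underlying idea (a toy power iteration, not Rogers' tooling or Google's implementation), it can be sketched in a few lines of Python:

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy power-iteration PageRank over a dict: node -> list of outgoing links."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}          # start with uniform rank
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u, outs in links.items():
            if not outs:                        # dangling node: spread rank evenly
                for v in nodes:
                    new[v] += damping * rank[u] / n
            else:                               # pass rank along each outgoing link
                for v in outs:
                    new[v] += damping * rank[u] / len(outs)
        rank = new
    return rank

# tiny three-page web: "a" is linked from both "b" and "c", so it ranks highest
ranks = pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]})
```

The damping factor (0.85 in the original formulation) models a surfer who occasionally jumps to a random page rather than following links.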
    Content
    The end of the virtual : digital methods -- The link and the politics of Web space -- The website as archived object -- Googlization and the inculpable engine -- Search as research -- National Web studies -- Social media and post-demographics -- Wikipedia as cultural reference -- After cyberspace : big data, small data.
    LCSH
    Web search engines
    World Wide Web / Research
    RSWK
    Internet / Recherche / World Wide Web 2.0
    Subject
    Internet / Recherche / World Wide Web 2.0
    Web search engines
    World Wide Web / Research
  5. Stacey, Alison; Stacey, Adrian: Effective information retrieval from the Internet : an advanced user's guide (2004) 0.11
    0.105790526 = product of:
      0.14105403 = sum of:
        0.057001244 = weight(_text_:web in 4497) [ClassicSimilarity], result of:
          0.057001244 = score(doc=4497,freq=12.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.35328537 = fieldWeight in 4497, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4497)
        0.052789498 = weight(_text_:search in 4497) [ClassicSimilarity], result of:
          0.052789498 = score(doc=4497,freq=8.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.30720934 = fieldWeight in 4497, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=4497)
        0.031263296 = product of:
          0.06252659 = sum of:
            0.06252659 = weight(_text_:engine in 4497) [ClassicSimilarity], result of:
              0.06252659 = score(doc=4497,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.23641664 = fieldWeight in 4497, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4497)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This book provides practical strategies which enable the advanced web user to locate information effectively and to form a precise evaluation of the accuracy of that information. Although the book provides a brief but thorough review of the technologies which are currently available for these purposes, most of the book concerns practical `future-proof' techniques which are independent of changes in the tools available. For example, the book covers: how to retrieve salient information quickly; how to remove or compensate for bias; and tuition of novice Internet users.
    Content
    Key Features - Importantly, the book enables readers to develop strategies which will continue to be useful despite the rapidly-evolving state of the Internet and Internet technologies - it is not about technological `tricks'. - Enables readers to be aware of and compensate for bias and errors which are ubiquitous on the Internet. - Provides contemporary information on the deficiencies in web skills of novice users as well as practical techniques for teaching such users. The Authors Dr Alison Stacey works at the Learning Resource Centre, Cambridge Regional College. Dr Adrian Stacey, formerly based at Cambridge University, is a software programmer. Readership The book is aimed at a wide range of librarians and other information professionals who need to retrieve information from the Internet efficiently, to evaluate their confidence in the information they retrieve and/or to train others to use the Internet. It is primarily aimed at intermediate to advanced users of the Internet. Contents Fundamentals of information retrieval from the Internet - why learn web searching techniques; types of information requests; patterns for information retrieval; leveraging the technology: Search term choice: pinpointing information on the web - why choose queries carefully; making search terms work together; how to pick search terms; finding the 'unfindable': Bias on the Internet - importance of bias; sources of bias; user-generated bias: selecting information with which you already agree; assessing and compensating for bias; case studies: Query reformulation and longer term strategies - how to interact with your search engine; foraging for information; long term information retrieval: using the Internet to find trends; automating searches: how to make your machine do your work: Assessing the quality of results - how to assess and ensure quality: The novice user and teaching internet skills - novice users and their problems with the web; case study: research in a college library; interpreting 'second hand' web information.
  6. Bizer, C.; Mendes, P.N.; Jentzsch, A.: Topology of the Web of Data (2012) 0.09
    0.08756581 = product of:
      0.17513162 = sum of:
        0.095947385 = weight(_text_:web in 425) [ClassicSimilarity], result of:
          0.095947385 = score(doc=425,freq=34.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.59466785 = fieldWeight in 425, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=425)
        0.07918424 = weight(_text_:search in 425) [ClassicSimilarity], result of:
          0.07918424 = score(doc=425,freq=18.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.460814 = fieldWeight in 425, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=425)
      0.5 = coord(2/4)
    
    Abstract
    The degree of structure of Web content is the determining factor for the types of functionality that search engines can provide. The more well structured the Web content is, the easier it is for search engines to understand it and provide advanced functionality, such as faceted filtering or the aggregation of content from multiple Web sites, based on this understanding. Today, most Web sites are generated from structured data that is stored in relational databases. Thus, it does not require much extra effort for Web sites to publish this structured data directly on the Web in addition to HTML pages, and thus help search engines to understand Web content and provide improved functionality. An early approach to realize this idea and help search engines understand Web content is Microformats, a technique for marking up structured data about specific types of entities, such as tags, blog posts, people, or reviews, within HTML pages. As Microformats are focused on a few entity types, the World Wide Web Consortium (W3C) started in 2004 to standardize RDFa as an alternative, more generic language for embedding any type of data into HTML pages. Today, major search engines such as Google, Yahoo, and Bing extract Microformat and RDFa data describing products, reviews, persons, events, and recipes from Web pages and use the extracted data to improve the user's search experience. The search engines have started to aggregate structured data from different Web sites and augment their search results with these aggregated information units in the form of rich snippets which combine, for instance, data from several sites. This chapter gives an overview of the topology of the Web of Data that has been created by publishing data on the Web using the Microformats, RDFa, Microdata and Linked Data publishing techniques.
    Source
    Semantic search over the Web. Eds.: R. De Virgilio, et al
    Theme
    Semantic Web
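The abstract of this record describes search engines extracting Microformat, RDFa, and Microdata markup embedded in HTML pages. As an illustrative sketch using only Python's standard library (not the extraction pipeline of any actual engine; the sample markup and class name are ours), RDFa `property` / Microdata `itemprop` pairs can be pulled from a page like this:

```python
from html.parser import HTMLParser

class StructuredDataParser(HTMLParser):
    """Collect RDFa `property` and Microdata `itemprop` name/value pairs."""
    def __init__(self):
        super().__init__()
        self.pairs = []
        self._pending = None  # attribute name waiting for its element text

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        name = attrs.get("property") or attrs.get("itemprop")
        if name:
            if "content" in attrs:        # value supplied inline (e.g. <meta>)
                self.pairs.append((name, attrs["content"]))
            else:                         # value is the element's text content
                self._pending = name

    def handle_data(self, data):
        if self._pending and data.strip():
            self.pairs.append((self._pending, data.strip()))
            self._pending = None

html = '''<div vocab="https://schema.org/" typeof="Product">
  <span property="name">Acme Widget</span>
  <meta property="price" content="9.99">
</div>'''

p = StructuredDataParser()
p.feed(html)
# p.pairs -> [('name', 'Acme Widget'), ('price', '9.99')]
```

Real extractors also track the vocabulary (`vocab`/`itemtype`) and nesting to build full RDF graphs; this sketch only shows why embedded markup is cheap for a crawler to harvest.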
  7. Rogers, R.: Information politics on the Web (2004) 0.08
    0.07976228 = product of:
      0.10634971 = sum of:
        0.055801086 = weight(_text_:web in 442) [ClassicSimilarity], result of:
          0.055801086 = score(doc=442,freq=46.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.34584695 = fieldWeight in 442, product of:
              6.78233 = tf(freq=46.0), with freq of:
                46.0 = termFreq=46.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.015625 = fieldNorm(doc=442)
        0.034916975 = weight(_text_:search in 442) [ClassicSimilarity], result of:
          0.034916975 = score(doc=442,freq=14.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.2031999 = fieldWeight in 442, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.015625 = fieldNorm(doc=442)
        0.015631648 = product of:
          0.031263296 = sum of:
            0.031263296 = weight(_text_:engine in 442) [ClassicSimilarity], result of:
              0.031263296 = score(doc=442,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.11820832 = fieldWeight in 442, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.015625 = fieldNorm(doc=442)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Footnote
    Rez. in: JASIST 58(2007) no.4, S.608-609 (K.D. Desouza): "Richard Rogers explores the distinctiveness of the World Wide Web as a politically contested space where information searchers may encounter multiple explanations of reality. Sources of information on the Web are in constant competition with each other for attention. The attention a source receives will determine its prominence, its ability to be a provider of leading information, and its inclusion in authoritative spaces. Rogers explores the politics behind evaluating sources that are collected and housed in authoritative spaces. Information politics on the Web can be looked at in terms of front-end or back-end politics. Front-end politics is concerned with whether sources on the Web pay attention to principles of inclusivity, fairness, and scope of representation in how information is presented, while back-end politics examines the logic behind how search engines or portals select and index information. Concerning front-end politics, Rogers questions the various versions of reality one can derive from examining information on the Web, especially when issues of information inclusivity and scope of representation are toyed with. In addition, Rogers is concerned with how back-end politics are being controlled by dominant forces of the market (i.e., the more an organization is willing to pay, the greater will be the site's visibility and prominence in authoritative spaces), regardless of whether the information presented on the site justifies such a placement. In the book, Rogers illustrates the issues involved in back-end and front-end politics (though heavily slanted on front-end politics) using vivid cases, all of which are derived from his own research. The main thrust is the exploration of how various "information instruments," defined as "a digital and analytical means of recording (capturing) and subsequently reading indications of states of defined information streams (p. 19)," help capture the politics of the Web. Rogers employs four specific instruments (Lay Decision Support System, Issue Barometer, Web Issue Index of Civil Society, and Election Issue Tracker), which are covered in detail in the core chapters of the book (Chapter 2-Chapter 5). The book is comprised of six chapters, with Chapter 1 being the traditional introduction and Chapter 6 being a summary of the major concepts discussed.
    Chapter 2 examines the politics of information retrieval in the context of collaborative filtering techniques. Rogers begins by discussing the underpinnings of modern search engine design by examining medieval practices of knowledge seeking, following up with a critique of collaborative filtering techniques. Rogers's major contention is that collaborative filtering rids us of user idiosyncrasies, as search query strings, preferences, and recommendations are shared among users with little care for the differences among them, both in terms of their innate characteristics and their search goals. To illustrate his critique of collaborative filtering, Rogers describes an information searching experiment that he conducted with students at the University of Vienna and the University of Amsterdam. Students were asked to search for information on Viagra. As one can imagine, depending on a number of issues, not least of which is which sources one extracted information from, a student would find different accounts of reality about Viagra, everything from a medical drug to a black-market drug ideal for underground trade. Rogers described how information on the Web differed from official accounts of certain events; the information on the Web served as an alternative reality. Chapter 3 describes the Web as a dynamic debate-mapping tool, a political instrument. Rogers introduces the "Issue Barometer," an information instrument that measures the social pressure on a topic being debated by analyzing data available from the Web. Measures used by the Issue Barometer include the temperature of the issue (cold to hot), the activity level of the debate (mild to intense), and territorialization (one country to many countries). The Issue Barometer is applied to an illustrative case, the public debate surrounding food safety in the Netherlands in 2001. Chapter 4 introduces the "Web Issue Index," which provides an indication of leading societal issues discussed on the Web. The empirical research on the Web Issue Index was conducted on the Genoa G8 Summit in 1999 and the anti-globalization movement. Rogers' focus here was to examine the changing nature of prominent issues over time, i.e., how issues gained and lost attention and traction over time.
    In Chapter 5, the "Election Issue Tracker" is introduced. The Election Issue Tracker calculates currency that is defined as "frequency of mentions of the issue terms per newspaper and across newspapers" in the three major national newspapers. The Election Issue Tracker is used to study which issues resonate with the press and which do not. As one would expect, Rogers found that not all issues that are considered important or central to a political party resonate with the press. This book contains a wealth of information that can be accessed by both researcher and practitioner. Even more interesting is the fact that researchers from a wide assortment of disciplines, from political science to information science and even communication studies, will appreciate the research and insights put forth by Rogers. Concepts presented in each chapter are thoroughly described using a wide variety of cases. Albeit all the cases are of a European flavor, mainly Dutch, they are interesting and thought-provoking. I found the descriptions of Rogers various information instruments to be very interesting. Researchers can gain from an examination of these instruments as it points to an interesting method for studying activities and behaviors on the Internet. In addition, each chapter has adequate illustrations and the bibliography is comprehensive. This book will make for an ideal supplementary text for graduate courses in information science, communication and media studies, and even political science. Like all books, however, this book had its share of shortcomings. While I was able to appreciate the content of the book, and certainly commend Rogers for studying an issue of immense significance, I found the book to be very difficult to read and parse through. The book is laden with jargon, political statements, and even has several instances of deficient writing. The book also lacked a sense of structure, and this affected the presentation of Rogers' material. 
I would also have hoped to see some recommendations from Rogers on how researchers should further the ideas he has put forth. Areas of future research, methods for studying future problems, and even insights on what the future might hold for information politics were not given enough attention in the book; in my opinion, this was a major shortcoming. Overall, I commend Rogers for putting forth a very informative book on the issues of information politics on the Web. Information politics, especially when conducted via communication technologies such as the Web, is going to play a vital role in our societies for a long time to come. Debates will range from the politics of how information is searched for and displayed on the Web to how the Web is used to manipulate or politicize information to meet the agendas of various entities. Richard Rogers' book will be one of the seminal and foundational readings on the topic for any curious minds that want to explore these issues."
    LCSH
    Web search engines / Political aspects
    Web portals / Political aspects
    Subject
    Web search engines / Political aspects
    Web portals / Political aspects
  8. Lazar, J.: Web usability : a user-centered design approach (2006) 0.08
    0.077498615 = product of:
      0.10333149 = sum of:
        0.07450247 = weight(_text_:web in 340) [ClassicSimilarity], result of:
          0.07450247 = score(doc=340,freq=82.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.4617554 = fieldWeight in 340, product of:
              9.055386 = tf(freq=82.0), with freq of:
                82.0 = termFreq=82.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.015625 = fieldNorm(doc=340)
        0.0131973745 = weight(_text_:search in 340) [ClassicSimilarity], result of:
          0.0131973745 = score(doc=340,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.076802336 = fieldWeight in 340, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.015625 = fieldNorm(doc=340)
        0.015631648 = product of:
          0.031263296 = sum of:
            0.031263296 = weight(_text_:engine in 340) [ClassicSimilarity], result of:
              0.031263296 = score(doc=340,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.11820832 = fieldWeight in 340, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.015625 = fieldNorm(doc=340)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Content
    Introduction to Web usability -- Defining the mission and target user population -- Requirements gathering: what information is needed? -- Methods for requirements gathering -- Information architecture and site navigation -- Page design -- Designing for universal usability -- Physical design -- Usability testing -- Implementation and marketing -- Maintaining and evaluating Web sites
    Footnote
    Rez. in: JASIST 58(2007) no.7, S.1066-1067 (X. Zhu u. J. Liao): "The user, without whom any product or service would be nothing, plays a very important role during the whole life cycle of products or services. The user should be involved from the very beginning, not just after products or services are ready for use. According to ISO 9241-11: 1998, Part 11, usability refers to "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use." As an academic topic of human-computer interaction, Web usability has been studied widely for a long time. This classroom-oriented book, bridging academia and the educational community, discusses Web usability in a student-friendly fashion. It outlines not only the methodology of user-centered Web site design but also details the methods to implement at every stage of the methodology. That is, the book presents the user-centered Web-design approach from both macrocosm and microcosm points of view, which makes it both recapitulative and practical. The most important key word in Web Usability is "user-centered," which means Web developers should not substitute their own personal preferences for the users' needs. The book classifies Web sites into five types: e-commerce, informational, entertainment, community, and intranet. Since the methods used during Web development differ somewhat depending on the type of Web site, it is necessary to have a classification in advance. With Figure 1.3 on p. 17, the book explains the whole user-centered Web-development life cycle (called "methodology" in this book review), which provides a clear path for Web development that is easy to understand, remember, and perform. Since all the following chapters are based on the methodology, a clear presentation of it is paramount. The table on p. 93 summarizes concisely all types of methods for requirements gathering and their advantages and disadvantages. According to this table, appropriate methods can easily be chosen for different Web site development projects. As the author remarked, "requirements gathering is central to the concept of user-centered design" (p. 98) and "one of the hallmarks of user-centered design is usability testing" (p. 205). Stage 2 (collect user requirements) and Stage 5 (perform usability testing) of the user-centered Web-development life cycle are the two stages with the most user involvement; however, this does not mean that all other stages are unrelated to users. For example, in Stage 4 (create and modify physical design), frames are not recommended simply because most users are unfamiliar with the concept (p. 201). Note that several rounds of usability testing are frequently performed in the four case studies, and some of them are performed before the physical-design stage or even the conceptual-design stage, which embodies the idea of an iterative design process.
    The many hands-on examples throughout the book and the four case studies at the end of the book are obvious strong points linking theory with practice. The four case studies are very useful, and it is hard to find such cases in the literature, since few companies want to publicize such information. The four case studies are not simple repeats; they are very different from each other and provide readers specific examples to analyze and follow. Web Usability is an excellent textbook, with a wrap-up (including discussion questions, design exercises, and suggested reading) at the end of each chapter. Each wrap-up first outlines where the focus should be placed, corresponding to what was presented at the very beginning of the chapter. Discussion questions help readers actively recall the main points of each chapter. The design exercises have readers apply what they have just learned from the chapter to a design project, leading to a deeper understanding of the material. Suggested reading provides additional information sources for those who want to study the research topic further, which bridges the educational community back to academia. The book is enhanced by two uniform resource locators (URLs) linking to the Addison-Wesley instructor resource center (http://www.aw.com/irc) and the Web-Star survey and project deliverables (http://www.aw.com/cssupport), respectively. There are valuable resources at these two URLs, which can be used together with Web Usability. Like the Web, books need good information architecture to facilitate understanding. Fortunately, Web Usability has very clear information architecture. Chap. 1 introduces the user-centered Web-development life cycle, which is composed of seven stages. Chap. 2 discusses Stage 1, chaps. 3 and 4 detail Stage 2, chaps. 5 through 7 outline Stage 3, and chaps. 8 through 11 present Stages 4 through 7, respectively. In chaps. 2 through 11, details (called "methods" in this review) are given for every stage of the methodology. The main thread of the book is how to design a new Web site; however, this does not mean that Web redesign is trivial and ignored. The author mentions Web redesign issues from time to time, and dedicated sections discuss redesign in chaps. 2, 3, 10, and 11.
    Besides major well-known software applications such as FrontPage and Dreamweaver (pp. 191-194), many useful software tools can be adopted to assist and accelerate the Web-development process, improving the productivity of the Web industry. Web Usability mentions such tools as the "code validator" (p. 189) to identify problematic areas of handwritten code, such as spelling and usage errors; the tool available at a given URL to convert portable document format (PDF) files into hypertext markup language (HTML) files (p. 201); WEBXACT, WebSAT, A-Prompt, Dottie, InFocus, and RAMP (pp. 226-227) to automate usability testing; and ClickTracks, NetTracker, WebTrends, and Spotfire (p. 263) to summarize Web-usage data and analyze trends. Thus, Web developers are able to find these tools and benefit from them. Other strengths of the book include the layout of each page, which has a wide margin in which readers may easily place notes, and the fact that the book is easy to read and understand. Although there are many strengths in this book, a few weaknesses are evident. All chapter wrap-ups should have an identical layout. Without numbering for sections and subsections, it is very likely that readers will lose their place in the overall information architecture of the book; at present, the only solution is to refer frequently to the table of contents to confirm one's location. The hands-on example on p. 39 would be better placed in chap. 4 because it focuses on a requirements-gathering method, the interview. Two similar phrases, "user population" and "user group," are used widely in this book; a user population is composed of user groups, but the two terms are not used consistently. The section title "Using a Search Engine" (p. 244) should be on the same level as that of the section "Linking to a URL," and not that of the section entitled "Marketing: Bringing Users to Your Web Site," according to what the author argues at the top of p. 236.
    Web Usability is undoubtedly a success. After reading this book, Web designers will pay attention to both content and usability; otherwise, the majority might overlook usability. Although this book mainly targets students and instructors, it is also appropriate for those who want to develop a user-centered Web site but do not know how. We would suggest an initial reading to learn what is included under each section title; from then on, when the methodology and methods are applied to guide a real-world project, only the table of contents and the chapter wrap-ups need to be reread, and other parts only when important details are forgotten. With the help of so many examples and highly practicable methods, Web Usability explains almost everything necessary during user-centered Web development and provides tips to help avoid some common mistakes. All of these characteristics facilitate effective and efficient Web-development processes. In doing so, the book achieves its content goal and its usability goal as well. In short, Web Usability is itself an excellent case of book usability: a user-centered edit approach!"
    LCSH
    Web sites / Design
    RSWK
    Web-Seite / Gestaltung / Benutzerorientierung / Benutzerfreundlichkeit / Kundenorientierung
    Subject
    Web-Seite / Gestaltung / Benutzerorientierung / Benutzerfreundlichkeit / Kundenorientierung
    Web sites / Design
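The relevance figures attached to each entry are Lucene "explain" trees, and they can be re-computed by hand. The sketch below, a non-authoritative illustration, redoes the arithmetic for entry 8 (Lazar, doc=340) using the standard ClassicSimilarity formulas that the explain output itself names (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm); all constants are copied from the tree above.

```python
import math

# Constants copied from the explain tree for doc=340 (entry 8, Lazar).
N_DOCS = 44218            # maxDocs
QUERY_NORM = 0.049439456  # queryNorm
FIELD_NORM = 0.015625     # fieldNorm(doc=340)

def idf(doc_freq):
    """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(N_DOCS / (doc_freq + 1))

def term_score(term_freq, doc_freq, coord=1.0):
    """queryWeight * fieldWeight, times any per-clause coord factor."""
    tf = math.sqrt(term_freq)                       # tf = sqrt(freq)
    query_weight = idf(doc_freq) * QUERY_NORM       # queryWeight
    field_weight = tf * idf(doc_freq) * FIELD_NORM  # fieldWeight
    return query_weight * field_weight * coord

# The three clauses: web (freq=82), search (freq=2), engine (freq=2, coord 1/2).
web    = term_score(82.0, 4597)           # ~0.07450247
search = term_score(2.0, 3718)            # ~0.0131973745
engine = term_score(2.0, 570, coord=0.5)  # ~0.015631648

# Top-level coord(3/4): three of four query clauses matched.
score = 0.75 * (web + search + engine)    # ~0.077498615
print(f"{score:.9f}")
```

The recomputed clause weights and final score agree with the explain tree to floating-point rounding, which confirms how the 0.08 shown next to the entry is assembled from term frequency, document frequency, and the normalization factors.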
  9. Rosenfeld, L.; Morville, P.: Information architecture for the World Wide Web : designing large-scale Web sites (2007) 0.08
    0.07502268 = product of:
      0.15004537 = sum of:
        0.13329946 = weight(_text_:web in 5135) [ClassicSimilarity], result of:
          0.13329946 = score(doc=5135,freq=42.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.8261705 = fieldWeight in 5135, product of:
              6.4807405 = tf(freq=42.0), with freq of:
                42.0 = termFreq=42.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5135)
        0.01674591 = product of:
          0.03349182 = sum of:
            0.03349182 = weight(_text_:22 in 5135) [ClassicSimilarity], result of:
              0.03349182 = score(doc=5135,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.19345059 = fieldWeight in 5135, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5135)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The scale of web site design has grown so that what was once comparable to decorating a room is now comparable to designing buildings or even cities. Designing sites so that people can find their way around is an ever-growing challenge as sites contain more and more information. In the past, Information Architecture for the World Wide Web has helped developers and designers establish consistent and usable structures for their sites and their information. This edition of the classic primer on web site design and navigation is updated with recent examples, new scenarios, and new information on best practices. Readers will learn how to present large volumes of information to visitors who need to find what they're looking for quickly. With topics that range from aesthetics to mechanics, this valuable book explains how to create interfaces that users can understand easily.
    Classification
    ST 252 Informatik / Monographien / Software und -entwicklung / Web-Programmierung, allgemein
    Date
    22. 3.2008 16:18:27
    LCSH
    Web sites / Design
    RSWK
    World Wide Web / Web-Seite / Gestaltung
    World Wide Web / Server
    Softwarearchitektur / Gestaltung / Web-Seite / World Wide Web (GBV)
    Informationsmanagement / World Wide Web (GBV)
    RVK
    ST 252 Informatik / Monographien / Software und -entwicklung / Web-Programmierung, allgemein
    Subject
    World Wide Web / Web-Seite / Gestaltung
    World Wide Web / Server
    Softwarearchitektur / Gestaltung / Web-Seite / World Wide Web (GBV)
    Informationsmanagement / World Wide Web (GBV)
    Web sites / Design
  10. Lynch, P.J.; Horton, S.: Web style guide : basic design principles for creating Web sites (1999) 0.07
    0.06945962 = product of:
      0.13891923 = sum of:
        0.09872905 = weight(_text_:web in 1580) [ClassicSimilarity], result of:
          0.09872905 = score(doc=1580,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.6119082 = fieldWeight in 1580, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=1580)
        0.04019018 = product of:
          0.08038036 = sum of:
            0.08038036 = weight(_text_:22 in 1580) [ClassicSimilarity], result of:
              0.08038036 = score(doc=1580,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.46428138 = fieldWeight in 1580, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1580)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    27. 8.2000 14:46:22
  11. Creating Web-accessible databases : case studies for libraries, museums, and other nonprofits (2001) 0.07
    0.06712837 = product of:
      0.13425674 = sum of:
        0.100764915 = weight(_text_:web in 4806) [ClassicSimilarity], result of:
          0.100764915 = score(doc=4806,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.6245262 = fieldWeight in 4806, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=4806)
        0.03349182 = product of:
          0.06698364 = sum of:
            0.06698364 = weight(_text_:22 in 4806) [ClassicSimilarity], result of:
              0.06698364 = score(doc=4806,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.38690117 = fieldWeight in 4806, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4806)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 3.2008 12:21:28
    LCSH
    Web databases
    Subject
    Web databases
  12. Internet searching and indexing : the subject approach (2000) 0.06
    0.06059847 = product of:
      0.12119694 = sum of:
        0.046541322 = weight(_text_:web in 1468) [ClassicSimilarity], result of:
          0.046541322 = score(doc=1468,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.2884563 = fieldWeight in 1468, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=1468)
        0.07465562 = weight(_text_:search in 1468) [ClassicSimilarity], result of:
          0.07465562 = score(doc=1468,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.43445963 = fieldWeight in 1468, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=1468)
      0.5 = coord(2/4)
    
    Abstract
    This comprehensive volume offers usable information for people at all levels of Internet savvy. It can teach librarians, students, and patrons how to search the Internet more systematically. It also helps information professionals design more efficient, effective search engines and Web pages.
  13. Research and advanced technology for digital libraries : 10th European conference ; proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006 ; proceedings (2006) 0.06
    0.060073085 = product of:
      0.080097444 = sum of:
        0.04030597 = weight(_text_:web in 2428) [ClassicSimilarity], result of:
          0.04030597 = score(doc=2428,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.24981049 = fieldWeight in 2428, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2428)
        0.026394749 = weight(_text_:search in 2428) [ClassicSimilarity], result of:
          0.026394749 = score(doc=2428,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.15360467 = fieldWeight in 2428, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=2428)
        0.013396727 = product of:
          0.026793454 = sum of:
            0.026793454 = weight(_text_:22 in 2428) [ClassicSimilarity], result of:
              0.026793454 = score(doc=2428,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.15476047 = fieldWeight in 2428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2428)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Content
    Inhalt u.a.: Architectures I Preservation Retrieval - The Use of Summaries in XML Retrieval / Zoltán Szlávik, Anastasios Tombros, Mounia Lalmas - An Enhanced Search Interface for Information Discovery from Digital Libraries / Georgia Koutrika, Alkis Simitsis - The TIP/Greenstone Bridge: A Service for Mobile Location-Based Access to Digital Libraries / Annika Hinze, Xin Gao, David Bainbridge Architectures II Applications Methodology Metadata Evaluation User Studies Modeling Audiovisual Content Language Technologies - Incorporating Cross-Document Relationships Between Sentences for Single Document Summarizations / Xiaojun Wan, Jianwu Yang, Jianguo Xiao - Semantic Web Techniques for Multiple Views on Heterogeneous Collections: A Case Study / Marjolein van Gendt, Antoine Isaac, Lourens van der Meij, Stefan Schlobach Posters - A Tool for Converting from MARC to FRBR / Trond Aalberg, Frank Berg Haugen, Ole Husby
    RSWK
    World Wide Web / Elektronische Bibliothek / Information Retrieval / Kongress / Alicante <2006>
    Subject
    World Wide Web / Elektronische Bibliothek / Information Retrieval / Kongress / Alicante <2006>
  14. Theriault, L.F.; Jean, J.: Confused? A kid's guide to the Internet's World Wide Web (1995) 0.06
    0.05930443 = product of:
      0.11860886 = sum of:
        0.06581937 = weight(_text_:web in 1208) [ClassicSimilarity], result of:
          0.06581937 = score(doc=1208,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.4079388 = fieldWeight in 1208, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=1208)
        0.052789498 = weight(_text_:search in 1208) [ClassicSimilarity], result of:
          0.052789498 = score(doc=1208,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.30720934 = fieldWeight in 1208, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=1208)
      0.5 = coord(2/4)
    
    Abstract
    Children learn how to send online postcards, search for information, get help for school projects, play games and more! Adults find out how to access the World Wide Web, get computer software, and monitor kids' safety on the Internet
  15. Vocabulary as a central concept in digital libraries : interdisciplinary concepts, challenges, and opportunities : proceedings of the Third International Conference on Conceptions of Library and Information Science (COLIS3), Dubrovnik, Croatia, 23-26 May 1999 (1999) 0.05
    0.05189138 = product of:
      0.10378276 = sum of:
        0.05759195 = weight(_text_:web in 3850) [ClassicSimilarity], result of:
          0.05759195 = score(doc=3850,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.35694647 = fieldWeight in 3850, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3850)
        0.046190813 = weight(_text_:search in 3850) [ClassicSimilarity], result of:
          0.046190813 = score(doc=3850,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.2688082 = fieldWeight in 3850, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3850)
      0.5 = coord(2/4)
    
    Content
    Enthält u.a. die Beiträge: Pharo, N.: Web information search strategies: a model for classifying Web interaction; Wang, Z., L.L. Hill u. T.R. Smith: Alexandria Digital Library metadata creator based on extensible markup language; Reid, J.: A new, task-oriented paradigm for information retrieval: implications for evaluation of information retrieval systems; Ornager, S.: Image archives in newspaper editorial offices: a service activity; Ruthven, I., M. Lalmas: Selective relevance feedback using term characteristics
  16. Chakrabarti, S.: Mining the Web : discovering knowledge from hypertext data (2003) 0.05
    0.051787402 = product of:
      0.103574805 = sum of:
        0.07718006 = weight(_text_:web in 2222) [ClassicSimilarity], result of:
          0.07718006 = score(doc=2222,freq=22.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.47835067 = fieldWeight in 2222, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2222)
        0.026394749 = weight(_text_:search in 2222) [ClassicSimilarity], result of:
          0.026394749 = score(doc=2222,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.15360467 = fieldWeight in 2222, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=2222)
      0.5 = coord(2/4)
    
    Footnote
    Rez. in: JASIST 55(2004) no.3, S.275-276 (C. Chen): "This is a book about finding significant statistical patterns on the Web - in particular, patterns that are associated with hypertext documents, topics, hyperlinks, and queries. The term pattern in this book refers to dependencies among such items. On the one hand, the Web contains useful information on just about every topic under the sun. On the other hand, just like searching for a needle in a haystack, one would need powerful tools to locate useful information on the vast land of the Web. Soumen Chakrabarti's book focuses on a wide range of techniques for machine learning and data mining on the Web. The goal of the book is to provide both the technical background and the tools and tricks of the trade of Web content mining. Much of the technical content reflects the state of the art between 1995 and 2002. The targeted audience is researchers and innovative developers in this area, as well as newcomers who intend to enter this area. The book begins with an introduction chapter. The introduction chapter explains fundamental concepts such as crawling and indexing as well as clustering and classification. The remaining eight chapters are organized into three parts: i) infrastructure, ii) learning and iii) applications.
    Part I, Infrastructure, has two chapters: Chapter 2 on crawling the Web and Chapter 3 on Web search and information retrieval. The second part of the book, containing chapters 4, 5, and 6, is the centerpiece. This part specifically focuses on machine learning in the context of hypertext. Part III is a collection of applications that utilize the techniques described in earlier chapters. Chapter 7 is on social network analysis. Chapter 8 is on resource discovery. Chapter 9 is on the future of Web mining. Overall, this is a valuable reference book for researchers and developers in the field of Web mining. It should be particularly useful for those who would like to design and probably code their own computer programs out of the equations and pseudocode on most of the pages. For a student, the most valuable feature of the book is perhaps the formal and consistent treatment of concepts across the board. For what is behind and beyond the technical details, one has to either dig deeper into the bibliographic notes at the end of each chapter, or resort to more in-depth analysis of relevant subjects in the literature. If you are looking for successful stories about Web mining or hard-way-learned lessons of failures, this is not the book."
  17. Schwartz, C.: Sorting out the Web : approaches to subject access (2001) 0.05
    0.050187834 = product of:
      0.10037567 = sum of:
        0.05996712 = weight(_text_:web in 2050) [ClassicSimilarity], result of:
          0.05996712 = score(doc=2050,freq=34.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.37166741 = fieldWeight in 2050, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2050)
        0.040408544 = weight(_text_:search in 2050) [ClassicSimilarity], result of:
          0.040408544 = score(doc=2050,freq=12.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.23515818 = fieldWeight in 2050, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2050)
      0.5 = coord(2/4)
    
    Footnote
    Rez. in: KO 50(2003) no.1, S.45-46 (L.M. Given): "In her own preface to this work, the author notes her lifelong fascination with classification and order, as well as her more recent captivation with the Internet - a place of "chaos in need of organization" (xi). Sorting out the Web examines current efforts to organize the Web and is well-informed by the author's academic and professional expertise in information organization, information retrieval, and Web development. Although the book's level and tone are particularly relevant to a student audience (or others interested in Web-based subject access at an introductory level), it will also appeal to information professionals developing subject access systems across a range of information contexts. There are six chapters in the book, each describing and analyzing one core concept related to the organization of Web content. All topics are presented in a manner ideal for newcomers to the area, with clear definitions, examples, and visuals that illustrate the principles under discussion. The first chapter provides a brief introduction to developments in information technology, including an historical overview of information services, users' needs, and libraries' responses to the Internet. Chapter two introduces metadata, including core concepts and metadata formats. Throughout this chapter the author presents a number of figures that aptly illustrate the application of metadata in HTML, SGML, and MARC record environments, and the use of metadata tools (e.g., XML, RDF). Chapter three begins with an overview of classification theory and specific schemes, but the author devotes most of the discussion to the application of classification systems in the Web environment (e.g., Dewey, LCC, UDC). Web screen captures illustrate the use of these schemes for information sources posted to sites around the world. 
The chapter closes with a discussion of the future of classification; this is a particularly useful section as the author presents a listing of core journal and conference venues where new approaches to Web classification are explored. In chapter four, the author extends the discussion of classification to the use of controlled vocabularies. As in the first few chapters, the author first presents core background material, including reasons to use controlled vocabularies and the differences between pre- and post-coordinate indexing, and then discusses the application of specific vocabularies in the Web environment (e.g., Infomine's use of LCSH). The final section of the chapter explores failure in subject searching and the limitations of controlled vocabularies for the Web. Chapter five discusses one of the most common and fast-growing topics related to subject access on the Web: search engines. The author presents a clear definition of the term that encompasses classified search lists (e.g., Yahoo) and query-based engines (e.g., Alta Vista). In addition to historical background on the development of search engines, Schwartz also examines search service types, features, results, and system performance.
    The chapter concludes with an appendix of search tips that even seasoned searchers will appreciate; these tips cover the complete search process, from preparation to the examination of results. Chapter six is appropriately entitled "Around the Corner," as it provides the reader with a glimpse of the future of subject access for the Web. Text mining, visualization, machine-aided indexing, and other topics are raised here to whet the reader's appetite for what is yet to come. As the author herself notes in these final pages, librarians will likely increase the depth of their collaboration with software engineers, knowledge managers and others outside of the traditional library community, and thereby push the boundaries of subject access for the digital world. This final chapter leaves this reviewer wanting a second volume of the book, one that might explore these additional topics, as they evolve over the coming years. One characteristic of any book that addresses trends related to the Internet is how quickly the text becomes dated. However, as the author herself asserts, there are core principles related to subject analysis that stand the test of time, leaving the reader with a text that may be generalized well beyond the publication date. In this, Schwartz's text is similar to other recent publications (e.g., Jakob Nielsen's Web Usability, also published in 2001) that acknowledge the mutability of the Web, and therefore discuss core principles and issues that may be applied as the medium itself evolves. This approach to the writing makes this a useful book for those teaching in the areas of subject analysis, information retrieval and Web development for possible consideration as a course text. Although the websites used here may need to be supplemented with more current examples in the classroom, the core content of the book will be relevant for many years to come. 
Although one might expect that any book taking subject access as its focus would, itself, be easy to navigate, this is not always the case. In this text, however, readers will be pleased to find that no small detail in content access has been spared. The subject index is thorough and well-crafted, and the inclusion of an exhaustive author index is particularly useful for quick reference. In addition, the table of contents includes sub-themes for each chapter, and a complete table of figures is provided. While the use of colour figures would greatly enhance the text, all black-and-white images are clear and sharp, a notable fact given that most of the figures are screen captures of websites or database entries. In addition, the inclusion of comprehensive reference lists at the close of each chapter makes this a highly readable text for students and instructors alike; each section of the book can stand as its own "expert review" of the topic at hand. In both content and structure this text is highly recommended. It certainly meets its intended goal of providing a timely introduction to the methods and problems of subject access in the Web environment, and does so in a way that is readable, interesting and engaging."
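The review's passing mention of pre- versus post-coordinate indexing can be sketched in a few lines: a pre-coordinate index stores composed headings as single strings at indexing time, while a post-coordinate system indexes single terms and intersects their postings at search time. The headings, terms, and document identifiers below are invented for illustration and are not taken from the book.

```python
# Pre-coordinate: terms are combined into one heading at indexing time.
pre_index = {
    "Libraries--Automation": {"doc1"},
    "Libraries--History": {"doc2"},
}

# Post-coordinate: single terms are indexed; the searcher combines them.
post_index = {
    "Libraries": {"doc1", "doc2"},
    "Automation": {"doc1", "doc3"},
}

# A pre-coordinate lookup retrieves the composed heading directly...
direct = pre_index["Libraries--Automation"]

# ...while a post-coordinate search intersects postings at query time.
hits = post_index["Libraries"] & post_index["Automation"]
print(hits)
```

The post-coordinate set intersection is essentially what Boolean AND does in a query-based engine, which is why the distinction matters for the search-failure discussion the reviewer highlights.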
  18. XML data management : native XML and XML-enabled database systems (2003) 0.04
    0.038062796 = product of:
      0.050750397 = sum of:
        0.016454842 = weight(_text_:web in 2073) [ClassicSimilarity], result of:
          0.016454842 = score(doc=2073,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.1019847 = fieldWeight in 2073, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.015625 = fieldNorm(doc=2073)
        0.018663906 = weight(_text_:search in 2073) [ClassicSimilarity], result of:
          0.018663906 = score(doc=2073,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.10861491 = fieldWeight in 2073, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.015625 = fieldNorm(doc=2073)
        0.015631648 = product of:
          0.031263296 = sum of:
            0.031263296 = weight(_text_:engine in 2073) [ClassicSimilarity], result of:
              0.031263296 = score(doc=2073,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.11820832 = fieldWeight in 2073, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2073)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Footnote
Rez. in: JASIST 55(2004) no.1, S.90-91 (N. Rhodes): "The recent near-exponential increase in XML-based technologies has exposed a gap between these technologies and those that are concerned with more fundamental data management issues. This very comprehensive and well-organized book has quite neatly filled the gap, thus achieving most of its stated intentions. The target audiences are database and XML professionals wishing to combine XML with modern database technologies, and such is the breadth of scope of this book that few would not find it useful in some way. The editors have assembled a collection of chapters from a wide selection of industry heavyweights and, as with most books of this type, it exhibits many disparate styles, but thanks to careful editing it reads well as a cohesive whole. Certain sections have already appeared in print elsewhere and there is a deal of corporate flag-waving, but nowhere does it become over-intrusive. The preface provides only the very briefest of introductions to XML but instead sets the tone for the remainder of the book. The twin terms of data- and document-centric XML (Bourret, 2003) that have achieved so much recent currency are re-iterated before XML data management issues are considered. It is here that the book's aims are stated, mostly concerned with the approaches and features of the various available XML data management solutions. Not surprisingly, in a specialized book such as this one an introduction to XML consists of a single chapter. For issues such as syntax, DTDs and XML Schemas the reader is referred elsewhere; here, Chris Brandin provides a practical guide to achieving good grammar and style and argues convincingly for the use of XML as an information-modeling tool. Using a well-chosen and simple example, a practical guide to modeling information is developed, replete with examples of the pitfalls.
This brief but illuminating chapter (incidentally available as a "taster" from the publisher's web site) notes that one of the most promising aspects of XML is that applications can be built to use a single mutable information model, obviating the need to change the application code but that good XML design is the basis of such mutability.
There is some debate over what exactly constitutes a native XML database. Bourret (2003) favors the wider definition; other authors such as the Butler Group (2002) restrict the use of the term to database systems designed and built solely for storage and manipulation of XML. Two examples of the latter (Tamino and eXist) are covered in detailed chapters here, but also included in this section is the embedded XML database system, Berkeley DB XML, considered by makers Sleepycat Software to be "native" in that it is capable of storing XML natively but built on top of the Berkeley DB engine. To the uninitiated, the revelation that schemas and DTDs are not required by either Tamino or eXist might seem a little strange. Tamino implements "loose coupling" where the validation behavior can be set to "strict," "lax" (i.e., apply only to parts of a document) or "skip" (no checking); in eXist, schemas are simply optional. Many DTDs and schemas evolve as the XML documents are acquired, and so these may adhere to slightly different schemas; thus the database should support queries on similar documents that do not share the same structure. In fact, because of the difficulties in mappings between XML and database (especially relational) schemas, native XML databases are very useful for storage of semi-structured data, a point not made in either chapter. The chapter on embedded databases represents a "third way," being neither native nor of the XML-enabled relational type. These databases run inside purpose-written applications and are accessed via an API or similar, meaning that the application developer does not need to access database files at the operating system level but can rely on supplied routines to, for example, fetch and update database records. Thus, end-users do not use the databases directly; the applications do not usually include ad hoc end-user query tools.
This property renders embedded databases unsuitable for a large number of situations and they have become very much a niche market, but this market is growing rapidly. Embedded databases share an address space with the application, so the overhead of calls to the server is reduced; they also confer advantages in that they are easier to deploy, manage and administer compared to a conventional client-server solution. This chapter is a very good introduction to the subject; primers on generic embedded databases and embedded XML databases are helpfully provided before the author moves to an overview of the Open Source Berkeley system. Building an embedded database application makes far greater demands on the software developer, and the remainder of the chapter is devoted to consideration of these programming issues.
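The embedded-database pattern the chapter describes can be sketched briefly: the engine is linked into the application's own process and reached through an API, with no separate server and no end-user query tool. Python's stdlib sqlite3 (itself an embedded engine) stands in here for Berkeley DB XML; the table layout and the stored record are invented for illustration, not taken from the book.

```python
import sqlite3

def open_store():
    # The store lives in the application's own address space; there are
    # no client/server round-trips, only in-process API calls.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, xml TEXT)")
    return conn

def put_doc(conn, xml_text):
    # Supplied routine to store a record; the application never touches
    # database files at the operating-system level.
    cur = conn.execute("INSERT INTO docs (xml) VALUES (?)", (xml_text,))
    return cur.lastrowid

def get_doc(conn, doc_id):
    # Supplied routine to fetch a record back by its key.
    row = conn.execute("SELECT xml FROM docs WHERE id = ?",
                       (doc_id,)).fetchone()
    return row[0] if row else None

conn = open_store()
doc_id = put_doc(conn, "<record><title>XML Data Management</title></record>")
print(get_doc(conn, doc_id))
```

The trade-off the chapter notes is visible even in this toy: deployment is trivial (one file or in-memory handle), but every query path must be written into the application, since no ad hoc end-user tool exists.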
After several detailed examples of XML, Direen and Jones discuss sequence comparisons. The ability to create scored comparisons by such techniques as sequence alignment is fundamental to bioinformatics. For example, the function of a gene product may be inferred from similarity with a gene of known function but originating from a different organism, and any information modeling method must facilitate such comparisons. One such comparison tool, BLAST, which utilizes a heuristic method, has been the tool of choice for many years and is integrated into the NeoCore XMS (XML Management System) described herein. Any set of sequences that can be identified using an XPath query may thus become the targets of an embedded search. Again examples are given, though a BLASTp (protein) search is labeled as being BLASTn (nucleotide sequence) in one of them. Some variants of BLAST are computationally intensive, e.g., tBLASTx, where a nucleotide sequence is dynamically translated in all six reading frames and compared against similarly translated database sequences. Though these variants are implemented in NeoCore XMS, it would be interesting to see runtimes for such comparisons. Obviously the utility of this and the other four quite specific examples will depend on your interest in the application area, but two that are more research-oriented and general follow them. These chapters (on using XML with inductive databases and on XML warehouses) are both readable critical reviews of their respective subject areas. For those involved in the implementation of performance-critical applications an examination of benchmark results is mandatory; however, very few would examine the benchmark tests themselves. The picture that emerges from this section is that no single set is comprehensive and that some functionalities are not addressed by any available benchmark. As always, there is no substitute for an intimate knowledge of your data and how it is used.
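The idea of identifying a set of sequences with an XPath query, which then become the targets of an embedded search, can be sketched with Python's stdlib ElementTree (which supports only a subset of XPath). The element names, attribute names, and sequences below are invented for illustration and do not come from the NeoCore XMS examples in the book.

```python
import xml.etree.ElementTree as ET

# A toy document of gene records; a real native XML store would hold
# many such documents with possibly varying structure.
DOC = """
<genes>
  <gene organism="E. coli"><seq>ATGAAACGC</seq></gene>
  <gene organism="H. sapiens"><seq>ATGGTGCAC</seq></gene>
  <gene organism="E. coli"><seq>ATGTTTGGT</seq></gene>
</genes>
"""

root = ET.fromstring(DOC)

# XPath (ElementTree subset): select every <gene> with a given organism
# attribute, then pull out the sequence text. In the system the chapter
# describes, this node set would be handed to a BLAST-style comparison.
targets = [g.find("seq").text
           for g in root.findall(".//gene[@organism='E. coli']")]
print(targets)
```

A full XPath 1.0 engine (or an XQuery layer, in a native XML database) would allow far richer predicates, but the principle is the same: the query language selects the node set, and the sequence-comparison engine consumes it.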
In a direct comparison of an XML-enabled and a native XML database system (unfortunately neither is named), the authors conclude that though the native system has the edge in handling large documents, this comes at the expense of increasing index and data file size. The need to use legacy data and software will certainly favor the all-pervasive XML-enabled RDBMS such as Oracle 9i and IBM's DB2. Of more general utility is the chapter by Schmauch and Fellhauer comparing the approaches used by database systems for the storing of XML documents. Many of the limitations of current XML-handling systems may be traced to problems caused by the semi-structured nature of the documents, and while the authors have no panacea, the chapter forms a useful discussion of the issues and even raises the ugly prospect that a return to the drawing board may be unavoidable. The book concludes with an appraisal of the current status of XML by the editors that perhaps focuses a little too little on the database side, but overall I believe this book to be very useful indeed. Some of the indexing is a little idiosyncratic; for example, some tags used in the examples are indexed (perhaps a separate examples index would be better) and Ron Bourret's excellent web site might be better placed under "Bourret" rather than under "Ron", but this doesn't really detract from the book's qualities. The broad spectrum and careful balance of theory and practice is a combination that both database and XML professionals will find valuable."
  19. Dawson, H.: Using the Internet for political research : practical tips and hints (2003) 0.04
    0.037874047 = product of:
      0.07574809 = sum of:
        0.029088326 = weight(_text_:web in 4511) [ClassicSimilarity], result of:
          0.029088326 = score(doc=4511,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 4511, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4511)
        0.046659768 = weight(_text_:search in 4511) [ClassicSimilarity], result of:
          0.046659768 = score(doc=4511,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.27153727 = fieldWeight in 4511, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4511)
      0.5 = coord(2/4)
    
    Content
Key Features - Includes chapters on key topics such as elections, parliaments, prime ministers and presidents - Contains case studies of typical searches - Highlights useful political science Internet sites. The Author Heather Dawson is an Assistant Librarian at the British Library of Political and Economic Science and Politics and Government Editor of SOSIG (The Social Science Information Gateway). Readership This book is aimed at researchers, librarians/information workers handling reference enquiries and students. Contents Getting started on using the Internet - search tools available, information gateways, search terms, getting further information Political science research - getting started, key organisations, key web sites Elections - using the Internet to follow an election, information on electoral systems, tracing election results, future developments (e.g. digital archive) Political parties - what is online, constructing searches, key sites, where to find information Heads of state (Presidents and Prime Ministers) - tracing news stories, speeches, directories worldwide Parliaments - what is happening in Parliament, tracing MPs, Bills, devolution and regional parliaments in the UK; links to useful sites with directories of parliaments worldwide Government departments - tracing legislation, statistics and consultation papers Political science education - information on courses, grants, libraries, searching library catalogues, tracing academic staff members Keeping up-to-date - political news stories, political research and forthcoming events
  20. Research and advanced technology for digital libraries : 11th European conference, ECDL 2007 / Budapest, Hungary, September 16-21, 2007, proceedings (2007) 0.04
    0.036468036 = product of:
      0.07293607 = sum of:
        0.046541322 = weight(_text_:web in 2430) [ClassicSimilarity], result of:
          0.046541322 = score(doc=2430,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.2884563 = fieldWeight in 2430, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2430)
        0.026394749 = weight(_text_:search in 2430) [ClassicSimilarity], result of:
          0.026394749 = score(doc=2430,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.15360467 = fieldWeight in 2430, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=2430)
      0.5 = coord(2/4)
    
    Abstract
    This book constitutes the refereed proceedings of the 11th European Conference on Research and Advanced Technology for Digital Libraries, ECDL 2007, held in Budapest, Hungary, in September 2007. The 36 revised full papers presented together with the extended abstracts of 36 revised poster, demo papers and 2 panel descriptions were carefully reviewed and selected from a total of 153 submissions. The papers are organized in topical sections on ontologies, digital libraries and the web, models, multimedia and multilingual DLs, grid and peer-to-peer, preservation, user interfaces, document linking, information retrieval, personal information management, new DL applications, and user studies.
    Content
Contents include: Ontologies - Ontology-Based Question Answering for Digital Libraries / Stephan Bloehdorn, Philipp Cimiano, Alistair Duke, Peter Haase, Jörg Heizmann, Ian Thurlow and Johanna Völker Digital libraries and the Web Models Multimedia and multilingual DLs - Roadmap for MultiLingual Information Access in the European Library / Maristella Agosti, Martin Braschler, Nicola Ferro, Carol Peters and Sjoerd Siebinga Grid and peer-to-peer Preservation User interfaces Document linking Information retrieval - Thesaurus-Based Feedback to Support Mixed Search and Browsing Environments / Edgar Meij and Maarten de Rijke - Extending Semantic Matching Towards Digital Library Contexts / László Kovács and András Micsik Personal information management New DL applications User studies
    RSWK
    World Wide Web / Elektronische Bibliothek / Information Retrieval / Kongress / Budapest <2007> / Online-Publikation
    Subject
    World Wide Web / Elektronische Bibliothek / Information Retrieval / Kongress / Budapest <2007> / Online-Publikation
