Search (2852 results, page 1 of 143)

  • × year_i:[2000 TO 2010}
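The active facet filter uses Lucene range-query syntax, in which a square bracket makes the bound inclusive and a curly brace makes it exclusive, so `[2000 TO 2010}` matches years 2000 through 2009. A minimal sketch of that semantics:

```python
# Semantics of the Lucene range filter year_i:[2000 TO 2010} -
# "[" = inclusive lower bound, "}" = exclusive upper bound.
def in_range(year, lower=2000, upper=2010, incl_lower=True, incl_upper=False):
    low_ok = year >= lower if incl_lower else year > lower
    high_ok = year <= upper if incl_upper else year < upper
    return low_ok and high_ok

print([y for y in (1999, 2000, 2005, 2010) if in_range(y)])  # → [2000, 2005]
```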
  1. Buzinkay, M.: Neue Entwicklungen im Web : eSnips, meX, Google Book Search und World Digital Library Project (2005) 0.16
    0.16169867 = product of:
      0.242548 = sum of:
        0.18515138 = weight(_text_:book in 7601) [ClassicSimilarity], result of:
          0.18515138 = score(doc=7601,freq=4.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.82764864 = fieldWeight in 7601, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.09375 = fieldNorm(doc=7601)
        0.05739662 = product of:
          0.11479324 = sum of:
            0.11479324 = weight(_text_:search in 7601) [ClassicSimilarity], result of:
              0.11479324 = score(doc=7601,freq=4.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.6516894 = fieldWeight in 7601, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7601)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Object
    Google book search
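Each score above is a Lucene ClassicSimilarity (TF-IDF) explain tree. As a rough check, the arithmetic of the first tree can be reproduced directly; the queryNorm is taken from the tree itself, since it depends on the full query:

```python
import math

# Reproduces the explain tree of result 1 (doc 7601) above.
MAX_DOCS = 44218
QUERY_NORM = 0.050679956  # from the tree; depends on the whole query

def idf(doc_freq):
    # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

def term_score(freq, doc_freq, field_norm):
    tf = math.sqrt(freq)                           # tf(freq) = sqrt(freq)
    query_weight = idf(doc_freq) * QUERY_NORM      # idf * queryNorm
    field_weight = tf * idf(doc_freq) * field_norm # tf * idf * fieldNorm
    return query_weight * field_weight

book = term_score(freq=4.0, doc_freq=1454, field_norm=0.09375)
search = term_score(freq=4.0, doc_freq=3718, field_norm=0.09375)

# "search" sits one clause deeper and is scaled by coord(1/2) = 0.5;
# the document then matches 2 of 3 top-level clauses: coord(2/3).
total = (book + 0.5 * search) * (2.0 / 3.0)
print(f"{total:.8f}")  # close to the reported 0.16169867
```

Each leaf score is queryWeight × fieldWeight = (idf × queryNorm) × (tf × idf × fieldNorm); coord() scales a Boolean clause by the fraction of its subclauses that matched.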
  2. Sandler, M.: Disruptive beneficence : the Google Print program and the future of libraries (2005) 0.15
    0.15498653 = product of:
      0.23247978 = sum of:
        0.123434246 = weight(_text_:book in 208) [ClassicSimilarity], result of:
          0.123434246 = score(doc=208,freq=4.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.55176574 = fieldWeight in 208, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0625 = fieldNorm(doc=208)
        0.109045535 = sum of:
          0.054114055 = weight(_text_:search in 208) [ClassicSimilarity], result of:
            0.054114055 = score(doc=208,freq=2.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.30720934 = fieldWeight in 208, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0625 = fieldNorm(doc=208)
          0.054931477 = weight(_text_:22 in 208) [ClassicSimilarity], result of:
            0.054931477 = score(doc=208,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.30952093 = fieldWeight in 208, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=208)
      0.6666667 = coord(2/3)
    
    Abstract
    Libraries must learn to accommodate themselves to Google, and complement its mass digitization efforts with niche digitization of our own. We need to plan for what our activities and services will look like when our primary activity is no longer the storage and circulation of widely-available print materials, and once the printed book is no longer the only major vehicle for scholarly communication.
    Object
    Google book search
    Pages
    S.5-22
  3. Golderman, G.M.; Connolly, B.: Between the book covers : going beyond OPAC keyword searching with the deep linking capabilities of Google Scholar and Google Book Search (2004/05) 0.13
    0.1346759 = product of:
      0.20201385 = sum of:
        0.10910148 = weight(_text_:book in 731) [ClassicSimilarity], result of:
          0.10910148 = score(doc=731,freq=8.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.4876966 = fieldWeight in 731, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=731)
        0.09291236 = sum of:
          0.058580182 = weight(_text_:search in 731) [ClassicSimilarity], result of:
            0.058580182 = score(doc=731,freq=6.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.33256388 = fieldWeight in 731, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0390625 = fieldNorm(doc=731)
          0.034332175 = weight(_text_:22 in 731) [ClassicSimilarity], result of:
            0.034332175 = score(doc=731,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.19345059 = fieldWeight in 731, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=731)
      0.6666667 = coord(2/3)
    
    Abstract
    One finding of the 2006 OCLC study of College Students' Perceptions of Libraries and Information Resources was that students expressed equal levels of trust in libraries and search engines when it came to meeting their information needs in a way that they felt was authoritative. Seeking to incorporate this insight into our own instructional methodology, Schaffer Library at Union College has attempted to engineer a shift from Google to Google Scholar among our student users by representing Scholar as a viable adjunct to the catalog and to more traditional electronic resources. By attempting to engage student researchers on their own terms, we have discovered that most of them react enthusiastically to the revelation that the Google they think they know so well is, it turns out, a multifaceted resource that is capable of delivering the sort of scholarly information that will meet with their professors' approval. Specifically, this article focuses on the fact that many Google Scholar searches link back to our own Web catalog where they identify useful book titles that direct OPAC keyword searches have missed.
    Date
    2.12.2007 19:39:22
    Object
    Google Book Search
  4. Weinberg, B.H.: Book indexes in France : medieval specimens and modern practices (2000) 0.13
    0.13387142 = product of:
      0.20080712 = sum of:
        0.15274209 = weight(_text_:book in 486) [ClassicSimilarity], result of:
          0.15274209 = score(doc=486,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.68277526 = fieldWeight in 486, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.109375 = fieldNorm(doc=486)
        0.04806504 = product of:
          0.09613008 = sum of:
            0.09613008 = weight(_text_:22 in 486) [ClassicSimilarity], result of:
              0.09613008 = score(doc=486,freq=2.0), product of:
                0.17747258 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679956 = queryNorm
                0.5416616 = fieldWeight in 486, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=486)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Indexer. 22(2000) no.1, S.2-13
  5. OCLC und Google vereinbaren Datenaustausch (2008) 0.13
    0.13375923 = product of:
      0.20063883 = sum of:
        0.10689719 = weight(_text_:book in 2326) [ClassicSimilarity], result of:
          0.10689719 = score(doc=2326,freq=12.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.47784314 = fieldWeight in 2326, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.03125 = fieldNorm(doc=2326)
        0.09374165 = sum of:
          0.06627591 = weight(_text_:search in 2326) [ClassicSimilarity], result of:
            0.06627591 = score(doc=2326,freq=12.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.37625307 = fieldWeight in 2326, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.03125 = fieldNorm(doc=2326)
          0.027465738 = weight(_text_:22 in 2326) [ClassicSimilarity], result of:
            0.027465738 = score(doc=2326,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.15476047 = fieldWeight in 2326, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2326)
      0.6666667 = coord(2/3)
    
    Content
    "The agreement stipulates that all OCLC member libraries participating in the Google Book Search(TM) program - which enables full-text search across more than one million books - can now contribute their WorldCat-derived MARC catalog records to Google, making their holdings considerably easier to find via Google. Google will link from Google Book Search to WorldCat.org, which will drive traffic to library OPACs and other library services. Google and OCLC will share data and links to digitized books, enabling OCLC to present digitized holdings of its member libraries in WorldCat. "This agreement is in the interest of the participating OCLC libraries. Expanded access to library collections and services is fostered by their greater visibility on the Web," says Jay Jordan, OCLC President and CEO. "We are delighted to partner with Google. It serves our goal of making worldwide knowledge accessible to people through international library cooperation." WorldCat metadata will be supplied to Google either directly by OCLC or via the member libraries participating in the Google Book Search program. Google recently released an API (Application Programming Interface) that allows links into Google Book Search based on ISBNs (International Standard Book Numbers), LCCNs (Library of Congress Control Numbers), and OCLC numbers. When a user finds a book in Google Book Search, the link can be traced back via WorldCat.org to the local library.
    Date
    26.10.2008 11:22:04
    Object
    Google Book Search
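The identifier-based lookup described in the announcement can be sketched as follows. This is a hypothetical illustration: the endpoint shape and parameter names (`jscmd`, `bibkeys`) follow the Google Books Dynamic Links API as publicly documented around that time, and the example identifiers are purely illustrative.

```python
from urllib.parse import urlencode

def book_search_link(identifiers):
    """Build a Dynamic Links-style lookup URL from ISBN/LCCN/OCLC keys."""
    params = {
        "jscmd": "viewapi",                # request viewability metadata
        "bibkeys": ",".join(identifiers),  # comma-separated identifier list
        "callback": "handleResult",        # JSONP callback name (illustrative)
    }
    return "https://books.google.com/books?" + urlencode(params)

url = book_search_link(["ISBN:0451526538", "OCLC:1456022", "LCCN:70095952"])
print(url)
```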
  6. OCLC und Google vereinbaren Datenaustausch (2008) 0.12
    0.116695955 = product of:
      0.17504393 = sum of:
        0.13362148 = weight(_text_:book in 1701) [ClassicSimilarity], result of:
          0.13362148 = score(doc=1701,freq=12.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.5973039 = fieldWeight in 1701, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1701)
        0.041422445 = product of:
          0.08284489 = sum of:
            0.08284489 = weight(_text_:search in 1701) [ClassicSimilarity], result of:
              0.08284489 = score(doc=1701,freq=12.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.47031635 = fieldWeight in 1701, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1701)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    "The agreement stipulates that all OCLC member libraries participating in the Google Book Search program - which enables full-text search across more than one million books - can now contribute their WorldCat-derived MARC catalog records to Google, making their holdings considerably easier to find via Google. Google will link from Google Book Search to WorldCat.org, which will drive traffic to library OPACs and other library services. Google and OCLC will share data and links to digitized books, enabling OCLC to present digitized holdings of its member libraries in WorldCat. WorldCat metadata will be supplied to Google either directly by OCLC or via the member libraries participating in the Google Book Search program. Google recently released an API (Application Programming Interface) that allows links into Google Book Search based on ISBNs (International Standard Book Numbers), LCCNs (Library of Congress Control Numbers), and OCLC numbers. When a user finds a book in Google Book Search, the link can be traced back to the local library via WorldCat.org. The agreement also makes it possible for OCLC to present and link MARC records for member libraries' books digitized by Google. This linking arrangement is intended to increase both electronic and in-person use of libraries. The new agreement between OCLC and Google is the latest of several planned partnerships between the two aimed at strengthening the presence of libraries on the Web and giving users information where they need it. In the coming months OCLC will also work with other organizations to integrate digitized content into WorldCat."
    Object
    Google Book Search
  7. Kuhlthau, C.C.: Seeking meaning : a process approach to library and information services (2003) 0.11
    0.11474694 = product of:
      0.1721204 = sum of:
        0.1309218 = weight(_text_:book in 4585) [ClassicSimilarity], result of:
          0.1309218 = score(doc=4585,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.58523595 = fieldWeight in 4585, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.09375 = fieldNorm(doc=4585)
        0.041198608 = product of:
          0.082397215 = sum of:
            0.082397215 = weight(_text_:22 in 4585) [ClassicSimilarity], result of:
              0.082397215 = score(doc=4585,freq=2.0), product of:
                0.17747258 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679956 = queryNorm
                0.46428138 = fieldWeight in 4585, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4585)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    First published in 1993, this book presents a new process approach to library and information services.
    Date
    25.11.2005 18:58:22
  8. Baksik, C.: Google Book Search library project (2009) 0.11
    0.11071327 = product of:
      0.16606991 = sum of:
        0.1309218 = weight(_text_:book in 3790) [ClassicSimilarity], result of:
          0.1309218 = score(doc=3790,freq=8.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.58523595 = fieldWeight in 3790, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.046875 = fieldNorm(doc=3790)
        0.03514811 = product of:
          0.07029622 = sum of:
            0.07029622 = weight(_text_:search in 3790) [ClassicSimilarity], result of:
              0.07029622 = score(doc=3790,freq=6.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.39907667 = fieldWeight in 3790, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3790)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Google Book Search, initially released as Google Print, allows the full-text searching of millions of books supplied by both publishers and libraries. More than 10,000 publishers and dozens of research libraries contribute. The Library Project is significant because it is a partnership with a commercial entity, because Google is funding the digitization, because the project exists on such a massive scale, and because of the speed with which so many works have been and are being scanned. The aspect that has created the most controversy, and legal action, is that some libraries are contributing works that are protected by copyright. A fascinating and critical debate has arisen around copyright protection, the fair use privilege, and what these mean in the digital age.
    Footnote
    Vgl.: http://www.tandfonline.com/doi/book/10.1081/E-ELIS3.
    Object
    Google Book Search
  9. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google print for libraries (2005) 0.11
    0.10890546 = product of:
      0.16335818 = sum of:
        0.12246612 = weight(_text_:book in 1184) [ClassicSimilarity], result of:
          0.12246612 = score(doc=1184,freq=28.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.5474381 = fieldWeight in 1184, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1184)
        0.040892072 = sum of:
          0.02029277 = weight(_text_:search in 1184) [ClassicSimilarity], result of:
            0.02029277 = score(doc=1184,freq=2.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.1152035 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1184)
          0.020599304 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
            0.020599304 = score(doc=1184,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.116070345 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1184)
      0.6666667 = coord(2/3)
    
    Abstract
    Google's December 2004 announcement of its intention to collaborate with five major research libraries - Harvard University, the University of Michigan, Stanford University, the University of Oxford, and the New York Public Library - to digitize and surface their print book collections in the Google searching universe has, predictably, stirred conflicting opinion, with some viewing the project as a welcome opportunity to enhance the visibility of library collections in new environments, and others wary of Google's prospective role as gateway to these collections. The project has been vigorously debated on discussion lists and blogs, with the participating libraries commonly referred to as "the Google 5". One point most observers seem to concede is that the questions raised by this initiative are both timely and significant. The Google Print Library Project (GPLP) has galvanized a long overdue, multi-faceted discussion about library print book collections. The print book is core to library identity and practice, but in an era of zero-sum budgeting, it is almost inevitable that print book budgets will decline as budgets for serials, digital resources, and other materials expand. As libraries re-allocate resources to accommodate changing patterns of user needs, print book budgets may be adversely impacted. Of course, the degree of impact will depend on a library's perceived mission. A public library may expect books to justify their shelf-space, with de-accession the consequence of minimal use. A national library, on the other hand, has a responsibility to the scholarly and cultural record and may seek to collect comprehensively within particular areas, with the attendant obligation to secure the long-term retention of its print book collections. 
    The combination of limited budgets, changing user needs, and differences in library collection strategies underscores the need to think about a collective, or system-wide, print book collection - in particular, how can an inter-institutional system be organized to achieve goals that would be difficult, and/or prohibitively expensive, for any one library to undertake individually [4]? Mass digitization programs like GPLP cast new light on these and other issues surrounding the future of library print book collections, but at this early stage, it is light that illuminates only dimly. It will be some time before GPLP's implications for libraries and library print book collections can be fully appreciated and evaluated. But the strong interest and lively debate generated by this initiative suggest that some preliminary analysis - premature though it may be - would be useful, if only to undertake a rough mapping of the terrain over which GPLP potentially will extend. At the least, some early perspective helps shape interesting questions for the future, when the boundaries of GPLP become settled, workflows for producing and managing the digitized materials become systematized, and usage patterns within the GPLP framework begin to emerge.
    This article offers some perspectives on GPLP in light of what is known about library print book collections in general, and those of the Google 5 in particular, from information in OCLC's WorldCat bibliographic database and holdings file. Questions addressed include: * Coverage: What proportion of the system-wide print book collection will GPLP potentially cover? What is the degree of holdings overlap across the print book collections of the five participating libraries? * Language: What is the distribution of languages associated with the print books held by the GPLP libraries? Which languages are predominant? * Copyright: What proportion of the GPLP libraries' print book holdings are out of copyright? * Works: How many distinct works are represented in the holdings of the GPLP libraries? How does a focus on works impact coverage and holdings overlap? * Convergence: What are the effects on coverage of using a different set of five libraries? What are the effects of adding the holdings of additional libraries to those of the GPLP libraries, and how do these effects vary by library type? These questions certainly do not exhaust the analytical possibilities presented by GPLP. More in-depth analysis might look at Google 5 coverage in particular subject areas; it also would be interesting to see how many books covered by the GPLP have already been digitized in other contexts. However, these questions are left to future studies. The purpose here is to explore a few basic questions raised by GPLP, and in doing so, provide an empirical context for the debate that is sure to continue for some time to come. A secondary objective is to lay some groundwork for a general set of questions that could be used to explore the implications of any mass digitization initiative. A suggested list of questions is provided in the conclusion of the article.
    Date
    26.12.2011 14:08:22
    Object
    Google book search
  10. Anderson, J.D.; Perez-Carballo, J.: Information retrieval design : principles and options for information description, organization, display, and access in information retrieval databases, digital libraries, catalogs, and indexes (2005) 0.11
    0.1067259 = product of:
      0.16008885 = sum of:
        0.10910148 = weight(_text_:book in 1833) [ClassicSimilarity], result of:
          0.10910148 = score(doc=1833,freq=32.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.4876966 = fieldWeight in 1833, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1833)
        0.05098737 = sum of:
          0.033821285 = weight(_text_:search in 1833) [ClassicSimilarity], result of:
            0.033821285 = score(doc=1833,freq=8.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.19200584 = fieldWeight in 1833, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.01953125 = fieldNorm(doc=1833)
          0.017166087 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
            0.017166087 = score(doc=1833,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.09672529 = fieldWeight in 1833, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=1833)
      0.6666667 = coord(2/3)
    
    Content
    Contents: Chapters 2 to 5: Scopes, Domains, and Display Media (pp. 47-102) Chapters 6 to 8: Documents, Analysis, and Indexing (pp. 103-176) Chapters 9 to 10: Exhaustivity and Specificity (pp. 177-196) Chapters 11 to 13: Displayed/Nondisplayed Indexes, Syntax, and Vocabulary Management (pp. 197-364) Chapters 14 to 16: Surrogation, Locators, and Surrogate Displays (pp. 365-390) Chapters 17 and 18: Arrangement and Size of Displayed Indexes (pp. 391-446) Chapters 19 to 21: Search Interface, Record Format, and Full-Text Display (pp. 447-536) Chapter 22: Implementation and Evaluation (pp. 537-541)
    Footnote
    Rez. in JASIST 57(2006) no.10, S.1412-1413 (R. W. White): "Information Retrieval Design is a textbook that aims to foster the intelligent user-centered design of databases for Information Retrieval (IR). The book outlines a comprehensive set of 20 factors, chosen based on prior research and the authors' experiences, that need to be considered during the design process. The authors provide designers with information on those factors to help optimize decision making. The book does not cover user-needs assessment, implementation of IR databases, or retrieval system testing or evaluation. Most textbooks in IR do not offer a substantive walkthrough of the design factors that need to be considered when developing IR databases. Instead, they focus on issues such as the implementation of data structures, the explanation of search algorithms, and the role of human-machine interaction in the search process. The book touches on all three, but its focus is on designing databases that can be searched effectively, not the tools to search them. This is an important distinction: despite its title, this book does not describe how to build retrieval systems. Professor Anderson utilizes his wealth of experience in cataloging and classification to bring a unique perspective on IR database design that may be useful for novices, for developers seeking to make sense of the design process, and for students as a text to supplement classroom tuition. The foreword and preface, by Jessica Milstead and James Anderson, respectively, are engaging and worthwhile reading. It is astounding that it has taken some 20 years for anyone to continue the work of Milstead and write as extensively as Anderson does about such an important issue as IR database design. The remainder of the book is divided into two parts: Introduction and Background Issues, and Design Decisions. Part 1 is a reasonable introduction and includes a glossary of the terminology that the authors use in the book.
    It is very helpful to have these definitions early on, but the subject descriptors in the right margin are distracting and do not serve their purpose as access points to the text. The terminology is useful to have, as the authors' definitions of concepts do not fit exactly with what is traditionally accepted in IR. For example, they use the term "message" to refer to what would normally be called "document" or "information object," and do not do a good job of distinguishing between "messages" and "documentary units". Part 2 describes components and attributes of IR databases to help designers make design choices. The book provides them with information about the potential ramifications of their decisions and advocates a user-oriented approach to making them. Chapters are arranged in a seemingly sensible order based around these factors, and the authors remind us of the importance of integrating them. The authors are skilled at selecting the important factors in the development of seemingly complex entities such as IR databases; however, the integration of these factors, or the interaction between them, is not handled as well as perhaps it should be. Factors are presented in the order in which the authors feel they should be addressed, but there is no chapter describing how the factors interact. The authors miss an opportunity at the beginning of Part 2, where they could illustrate with a figure the interactions between the 20 factors they list in a way that is not possible with the linear structure of the book.
    . . . Those interested in using the book to design IR databases can work through the chapters in the order provided and end up with a set of requirements for database design. The steps outlined in this book can be rearranged in numerous orders depending on the particular circumstances. This book would benefit from a discussion of what orders are appropriate for different circumstances and how the requirements outlined interact. I come away from Information Retrieval Design with mixed, although mainly positive, feelings. Even though the aims of this book are made clear from the outset, it was still a disappointment to see issues such as implementation and evaluation covered in only a cursory manner. The book is very well structured, well written, and operates in a part of the space that has been neglected for too long. The authors whet my appetite with discussion of design, and I would have liked to have heard a bit more about what happens in requirements elicitation before the design issues have been identified, and about implementation after they have been addressed. Overall, the book is a comprehensive review of previous research supplemented by the authors' views on IR design. This book focuses on breadth of coverage rather than depth of coverage and is therefore potentially of more use to novices in the field. The writing style is clear, and the authors' knowledge of the subject area is undoubted. I would recommend this book to anyone who wants to learn about IR database design and take advantage of the experience and insights of Anderson, one of the visionaries in the field."
  11. Smith, L.: Subject access in interdisciplinary research (2000) 0.11
    0.10620606 = product of:
      0.15930909 = sum of:
        0.0771464 = weight(_text_:book in 1185) [ClassicSimilarity], result of:
          0.0771464 = score(doc=1185,freq=4.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.34485358 = fieldWeight in 1185, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1185)
        0.08216269 = sum of:
          0.047830522 = weight(_text_:search in 1185) [ClassicSimilarity], result of:
            0.047830522 = score(doc=1185,freq=4.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.27153727 = fieldWeight in 1185, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1185)
          0.034332175 = weight(_text_:22 in 1185) [ClassicSimilarity], result of:
            0.034332175 = score(doc=1185,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.19345059 = fieldWeight in 1185, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1185)
      0.6666667 = coord(2/3)
    
    Abstract
    In a series of lectures presented in 1970, Pauline Cochrane offered an American view of Ranganathan's five laws of library science (Atherton, 1973). According to Cochrane, Ranganathan first conceived of the five laws in 1924. They include: (1) books are for use; (2) every reader his book; (3) every book its reader; (4) save the time of the reader; and (5) a library is a growing organism. With respect to law 4, Cochrane cited the need for more research to understand the match between a user's information needs and the descriptions of information resources. In constructing the catalog and other search tools, do we save the time of the reader? Success in this effort requires knowing more about the reader's information needs and search behavior. Cochrane (1992) revisited the laws two decades later, recommending that they serve as guidelines and criteria for assessing the value of information technology in library and information services. In particular she suggested the need to determine whether information technology improves the timeliness, precision, and comprehensiveness of information provision to users. This article focuses on how information technology may enable us better to meet the needs of a particular category of information users - those undertaking interdisciplinary research. In a study completed twenty-five years ago, this author investigated the feasibility of developing a mapping of portions of controlled vocabularies as a tool for assisting in cross-database searching (Smith, 1974).
    Date
    22. 9.1997 19:16:05
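    The relevance figures in these listings follow Lucene's classic TF-IDF scoring, which the explanation trees spell out step by step: each term's weight is queryWeight (idf × queryNorm) times fieldWeight (√tf × idf × fieldNorm), the term weights are summed, and the sum is scaled by a coordination factor for the fraction of query clauses matched. A minimal sketch reproducing record 11's score of 0.10620606 from the values shown above (the helper names are ours, not Lucene's):

    ```python
    import math

    def term_weight(freq, idf, query_norm, field_norm):
        """ClassicSimilarity per-term score: queryWeight * fieldWeight."""
        query_weight = idf * query_norm                     # idf * queryNorm
        field_weight = math.sqrt(freq) * idf * field_norm   # sqrt(tf) * idf * fieldNorm
        return query_weight * field_weight

    # Constants taken from the explanation tree of record 11 (doc 1185)
    QUERY_NORM = 0.050679956
    FIELD_NORM = 0.0390625

    w_book   = term_weight(4.0, 4.414126,  QUERY_NORM, FIELD_NORM)  # ~0.0771464
    w_search = term_weight(4.0, 3.475677,  QUERY_NORM, FIELD_NORM)  # ~0.0478305
    w_22     = term_weight(2.0, 3.5018296, QUERY_NORM, FIELD_NORM)  # ~0.0343322

    # coord(2/3): two of the three query clauses matched this document
    score = (2.0 / 3.0) * (w_book + w_search + w_22)
    print(round(score, 8))  # ~0.10620606
    ```

    The same arithmetic reproduces every explanation tree in this listing; only the frequencies, idf values, and fieldNorm (which shrinks as the field gets longer) change from record to record.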
  12. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.11
    0.105359 = product of:
      0.1580385 = sum of:
        0.094484664 = weight(_text_:book in 150) [ClassicSimilarity], result of:
          0.094484664 = score(doc=150,freq=24.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.42235768 = fieldWeight in 150, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.01953125 = fieldNorm(doc=150)
        0.063553825 = sum of:
          0.033821285 = weight(_text_:search in 150) [ClassicSimilarity], result of:
            0.033821285 = score(doc=150,freq=8.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.19200584 = fieldWeight in 150, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
          0.029732537 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
            0.029732537 = score(doc=150,freq=6.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.16753313 = fieldWeight in 150, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
      0.6666667 = coord(2/3)
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Rez. in: JASIST 58(2007) no.3, S.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatically extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web. 
The Moving Picture Experts Group standards MPEG7 and MPEG21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in bias against the ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on the general multimedia description and extraction could be provided.
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems. This summary represents the concept of the semantic web and multimedia content analysis. A definition of the fuzzy knowledge representation that can be used for realization in multimedia content applications has been provided, with a comprehensive analysis. The second part of the book introduces the multimedia content analysis approaches and applications. In addition, some examples of methods applicable to multimedia content analysis are presented. Multimedia content analysis is a very diverse field and concerns many other research fields at the same time; this creates strong diversity issues, as everything from low-level features (e.g., colors, DCT coefficients, motion vectors, etc.) up to the very high and semantic level (e.g., Objects, Events, Tracks, etc.) is involved. The second part includes topics on structure identification (e.g., shot detection for video sequences) and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are very logically organized. Because of the diversity of this research field, including several chapters of recent research results is not sufficient to cover the state of the art of multimedia. The editors of the book should write an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, and try to survey the state of the art of the field and thus introduce the field to the reader.
    The final part of the book discusses research in multimedia content management systems and the semantic web, and presents examples and applications for semantic multimedia analysis in search and retrieval systems. These chapters describe example systems in which current projects have been implemented, and include extensive results and real demonstrations. For example, real case scenarios such as e-commerce, medical applications, and Web services have been introduced. Topics in natural language, speech and image processing techniques and their application for multimedia indexing, and content-based retrieval have been elaborated upon with extensive examples and deployment methods. The editors of the book themselves provide the readers with a chapter about their latest research results on knowledge-based multimedia content indexing and retrieval. Some interesting applications for multimedia content and the semantic web are introduced. Applications that have taken advantage of the metadata provided by MPEG7 in order to realize advance-access services for multimedia content have been provided. The applications discussed in the third part of the book provide useful guidance to researchers and practitioners properly planning to implement semantic multimedia analysis techniques in new research and development projects in both academia and industry. A fourth part should be added to this book: performance measurements for integrated approaches of multimedia analysis and the semantic web. Performance of the semantic approach is a very sophisticated issue and requires extensive elaboration and effort. Measuring the semantic search is an ongoing research area; several chapters concerning performance measurement and analysis would be required to adequately cover this area and introduce it to readers."
  13. Kousha, K.; Thelwall, M.: Google book search : citation analysis for social science and the humanities (2009) 0.11
    0.1050245 = product of:
      0.15753675 = sum of:
        0.13362148 = weight(_text_:book in 2946) [ClassicSimilarity], result of:
          0.13362148 = score(doc=2946,freq=12.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.5973039 = fieldWeight in 2946, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2946)
        0.023915261 = product of:
          0.047830522 = sum of:
            0.047830522 = weight(_text_:search in 2946) [ClassicSimilarity], result of:
              0.047830522 = score(doc=2946,freq=4.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.27153727 = fieldWeight in 2946, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2946)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In both the social sciences and the humanities, books and monographs play significant roles in research communication. The absence of citations from most books and monographs from the Thomson Reuters/Institute for Scientific Information databases (ISI) has been criticized, but attempts to include citations from or to books in the research evaluation of the social sciences and humanities have not led to widespread adoption. This article assesses whether Google Book Search (GBS) can partially fill this gap by comparing citations from books with citations from journal articles to journal articles in 10 science, social science, and humanities disciplines. Book citations were 31% to 212% of ISI citations and, hence, numerous enough to supplement ISI citations in the social sciences and humanities covered, but not in the sciences (3%-5%), except for computing (46%), due to numerous published conference proceedings. A case study was also made of all 1,923 articles in the 51 information science and library science ISI-indexed journals published in 2003. Within this set, highly book-cited articles tended to receive many ISI citations, indicating a significant relationship between the two types of citation data, but with important exceptions that point to the additional information provided by book citations. In summary, GBS is clearly a valuable new source of citation data for the social sciences and humanities. One practical implication is that book-oriented scholars should consult it for additional citations to their work when applying for promotion and tenure.
  14. Chakrabarti, S.: Mining the Web : discovering knowledge from hypertext data (2003) 0.10
    0.0963002 = product of:
      0.14445029 = sum of:
        0.13092178 = weight(_text_:book in 2222) [ClassicSimilarity], result of:
          0.13092178 = score(doc=2222,freq=18.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.5852359 = fieldWeight in 2222, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.03125 = fieldNorm(doc=2222)
        0.013528514 = product of:
          0.027057027 = sum of:
            0.027057027 = weight(_text_:search in 2222) [ClassicSimilarity], result of:
              0.027057027 = score(doc=2222,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.15360467 = fieldWeight in 2222, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2222)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Rez. in: JASIST 55(2004) no.3, S.275-276 (C. Chen): "This is a book about finding significant statistical patterns on the Web - in particular, patterns that are associated with hypertext documents, topics, hyperlinks, and queries. The term pattern in this book refers to dependencies among such items. On the one hand, the Web contains useful information on just about every topic under the sun. On the other hand, just like searching for a needle in a haystack, one would need powerful tools to locate useful information on the vast land of the Web. Soumen Chakrabarti's book focuses on a wide range of techniques for machine learning and data mining on the Web. The goal of the book is to provide both the technical background and tools and tricks of the trade of Web content mining. Much of the technical content reflects the state of the art between 1995 and 2002. The targeted audience is researchers and innovative developers in this area, as well as newcomers who intend to enter this area. The book begins with an introduction chapter. The introduction chapter explains fundamental concepts such as crawling and indexing as well as clustering and classification. The remaining eight chapters are organized into three parts: i) infrastructure, ii) learning, and iii) applications.
    Part I, Infrastructure, has two chapters: Chapter 2 on crawling the Web and Chapter 3 on Web search and information retrieval. The second part of the book, containing chapters 4, 5, and 6, is the centerpiece. This part specifically focuses on machine learning in the context of hypertext. Part III is a collection of applications that utilize the techniques described in earlier chapters. Chapter 7 is on social network analysis. Chapter 8 is on resource discovery. Chapter 9 is on the future of Web mining. Overall, this is a valuable reference book for researchers and developers in the field of Web mining. It should be particularly useful for those who would like to design and probably code their own computer programs out of the equations and pseudocode on most of the pages. For a student, the most valuable feature of the book is perhaps the formal and consistent treatment of concepts across the board. For what is behind and beyond the technical details, one has to either dig deeper into the bibliographic notes at the end of each chapter, or resort to more in-depth analysis of relevant subjects in the literature. If you are looking for successful stories about Web mining or hard-way-learned lessons of failures, this is not the book."
  15. Milne, R.: ¬The Google Library Project at Oxford (2005) 0.10
    0.09528184 = product of:
      0.14292276 = sum of:
        0.10910148 = weight(_text_:book in 7134) [ClassicSimilarity], result of:
          0.10910148 = score(doc=7134,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.4876966 = fieldWeight in 7134, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.078125 = fieldNorm(doc=7134)
        0.033821285 = product of:
          0.06764257 = sum of:
            0.06764257 = weight(_text_:search in 7134) [ClassicSimilarity], result of:
              0.06764257 = score(doc=7134,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.3840117 = fieldWeight in 7134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7134)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Object
    Google book search
  16. hel: Bayerische Staatsbibliothek paktiert mit Google (2007) 0.09
    0.09432422 = product of:
      0.14148633 = sum of:
        0.108004965 = weight(_text_:book in 586) [ClassicSimilarity], result of:
          0.108004965 = score(doc=586,freq=4.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.48279503 = fieldWeight in 586, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0546875 = fieldNorm(doc=586)
        0.033481363 = product of:
          0.06696273 = sum of:
            0.06696273 = weight(_text_:search in 586) [ClassicSimilarity], result of:
              0.06696273 = score(doc=586,freq=4.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.38015217 = fieldWeight in 586, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=586)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The Bayerische Staatsbibliothek has agreed on a cooperation with Google. The Internet giant is to digitize all of the library's book holdings that are not subject to copyright protection and integrate them into »Google Book Search«.
    Object
    Google book search
  17. Hock, R.: Search engines (2009) 0.09
    0.09267263 = product of:
      0.13900894 = sum of:
        0.076371044 = weight(_text_:book in 3876) [ClassicSimilarity], result of:
          0.076371044 = score(doc=3876,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.34138763 = fieldWeight in 3876, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3876)
        0.0626379 = product of:
          0.1252758 = sum of:
            0.1252758 = weight(_text_:search in 3876) [ClassicSimilarity], result of:
              0.1252758 = score(doc=3876,freq=14.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.71119964 = fieldWeight in 3876, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3876)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This entry provides an overview of Web search engines, looking at the definition, components, leading engines, searching capabilities, and types of engines. It examines the components that make up a search engine and briefly discusses the process involved in identifying content for the engines' databases and the indexing of that content. Typical search options are reviewed and the major Web search engines are identified and described. Also identified and described are various specialty search engines, such as those for special content such as video and images, and engines that take significantly different approaches to the search problem, such as visualization engines and metasearch engines.
    Footnote
    Vgl.: http://www.tandfonline.com/doi/book/10.1081/E-ELIS3.
  18. Olsen, K.A.: ¬The Internet, the Web, and eBusiness : formalizing applications for the real world (2005) 0.09
    0.08993193 = product of:
      0.13489789 = sum of:
        0.09758334 = weight(_text_:book in 149) [ClassicSimilarity], result of:
          0.09758334 = score(doc=149,freq=40.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.4362091 = fieldWeight in 149, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.015625 = fieldNorm(doc=149)
        0.03731454 = sum of:
          0.013528514 = weight(_text_:search in 149) [ClassicSimilarity], result of:
            0.013528514 = score(doc=149,freq=2.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.076802336 = fieldWeight in 149, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.015625 = fieldNorm(doc=149)
          0.023786027 = weight(_text_:22 in 149) [ClassicSimilarity], result of:
            0.023786027 = score(doc=149,freq=6.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.1340265 = fieldWeight in 149, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=149)
      0.6666667 = coord(2/3)
    
    Classification
    004.678 22
    DDC
    004.678 22
    Footnote
    Rez. in: JASIST 57(2006) no.14, S.1979-1980 (J.G. Williams): "The Introduction and Part I of this book present the world of computing with a historical and philosophical overview of computers, computer applications, networks, the World Wide Web, and eBusiness based on the notion that the real world places constraints on the application of these technologies and without a formalized approach, the benefits of these technologies cannot be realized. The concepts of real world constraints and the need for formalization are used as the cornerstones for a building-block approach for helping the reader understand computing, networking, the World Wide Web, and the applications that use these technologies as well as all the possibilities that these technologies hold for the future. The author's building-block approach to understanding computing, networking and application building makes the book useful for science, business, and engineering students taking an introductory computing course and for social science students who want to understand more about the social impact of computers, the Internet, and Web technology. It is useful as well for managers and designers of Web and ebusiness applications, and for the general public who are interested in understanding how these technologies may impact their lives, their jobs, and the social context in which they live and work. The book does assume some experience and terminology in using PCs and the Internet but is not intended for computer science students, although they could benefit from the philosophical basis and the diverse viewpoints presented. The author uses numerous analogies from domains outside the area of computing to illustrate concepts and points of view that make the content understandable as well as interesting to individuals without any in-depth knowledge of computing, networking, software engineering, system design, ebusiness, and Web design. 
These analogies include interesting real-world events ranging from the beginning of railroads, to Henry Ford's mass produced automobile, to the European Space Agency's loss of the 7 billion dollar Ariane rocket, to travel agency booking, to medical systems, to banking, to expanding democracy. The book gives the pros and cons of the possibilities offered by the Internet and the Web by presenting numerous examples and an analysis of the pros and cons of these technologies for the examples provided. The author shows, in an interesting manner, how the new economy based on the Internet and the Web affects society and business life on a worldwide basis now and how it will affect the future, and how society can take advantage of the opportunities that the Internet and the Web offer.
    The book is organized into six sections or parts with several chapters within each part. Part 1 does a good job of building an understanding of some of the historical aspects of computing and why formalization is important for building computer-based applications. A distinction is made between formalized and unformalized data, processes, and procedures, which the author cleverly uses to show how the level of formalization of data, processes, and procedures determines the functionality of computer applications. Part 1 also discusses the types of data that can be represented in symbolic form, which is crucial to using computer and networking technology in a virtual environment. This part also discusses the technical and cultural constraints upon computing, networking, and web technologies with many interesting examples. The cultural constraints discussed range from copyright to privacy issues. Part 1 is critical to understanding the author's point of view and discussions in other sections of the book. The discussion on machine intelligence and natural language processing is particularly well done. Part 2 discusses the fundamental concepts and standards of the Internet and Web. Part 3 introduces the need for formalization to construct ebusiness applications in the business-to-consumer category (B2C). There are many good and interesting examples of these B2C applications and the associated analyses of them using the concepts introduced in Parts 1 and 2 of the book. Part 4 examines the formalization of business-to-business (B2B) applications and discusses the standards that are needed to transmit data with a high level of formalization. Part 5 is a rather fascinating discussion of future possibilities and Part 6 presents a concise summary and conclusion. 
The book covers a wide array of subjects in the computing, networking, and Web areas and although all of them are presented in an interesting style, some subjects may be more relevant and useful to individuals depending on their background or academic discipline. Part 1 is relevant to all potential readers no matter what their background or academic discipline but Part 2 is a little more technical; although most people with an information technology or computer science background will not find much new here with the exception of the chapters on "Dynamic Web Pages" and "Embedded Scripts." Other readers will find this section informative and useful for understanding other parts of the book. Part 3 does not offer individuals with a background in computing, networking, or information science much in addition to what they should already know, but the chapters on "Searching" and "Web Presence" may be useful because they present some interesting notions about using the Web. Part 3 gives an overview of B2C applications and is where the author provides examples of the difference between services that are completely symbolic and services that have both a symbolic portion and a physical portion. Part 4 of the book discusses B2B technology once again with many good examples. The chapter on "XML" in Part 4 is not appropriate for readers without a technical background. Part 5 is a teacher's dream because it offers a number of situations that can be used for classroom discussions or case studies independent of background or academic discipline.
    Each chapter provides suggestions for exercises and discussions, which makes the book useful as a textbook. The suggestions in the exercise and discussion section at the end of each chapter are simply delightful to read and provide a basis for lively discussion and fun student exercises. These exercises appear to be well thought out and are intended to highlight the content of the chapter. The notes at the end of each chapter provide valuable information that helps the reader understand a topic or identify a reference the reader may not know. Chapter 1 on "formalism," chapter 2 on "symbolic data," chapter 3 on "constraints on technology," and chapter 4 on "cultural constraints" are extremely well presented, and every reader should read these chapters because they lay the foundation for most of the chapters that follow. The analogies, examples, and points of view presented make for some really interesting reading and lively debate and discussion. These chapters comprise Part 1 of the book; they not only provide a foundation for the rest of the book but could be used alone as the basis of a social science course on computing, networking, and the Web. Chapters 5 and 6, on Internet protocols and the development of Web protocols, may be more detailed and filled with more acronyms than the average person wants to deal with, but the content is presented with analogies and examples that make it easier to digest. Chapter 7 will capture most readers' attention because it discusses how e-mail works and many of the issues with e-mail, which a majority of people in developed countries have dealt with. Chapter 8 is also one that most people will be interested in reading because it shows how Internet browsers work and covers many of the issues, such as security, associated with these software entities.
Chapter 9 discusses the what, why, and how of the World Wide Web, which leads into chapter 10 on "Searching the Web" and chapter 11 on "Organizing the Web-Portals" - two chapters that even technically oriented people should read, since they provide information that most people outside of information and library science are not likely to know.
    Chapter 12 on "Web Presence" is a useful discussion of what it means to have a Web site that is indexed by a spider from a major Web search engine. Chapter 13 on "Mobile Computing" is very well done and gives the reader a solid grounding in what is involved in mobile computing without overwhelming them with technical details. Chapter 14 discusses the difference between pull technologies and push technologies on the Web in a way that is understandable to almost anyone who has ever used the Web. Chapters 15, 16, and 17 are for the technically stout of heart; they cover "Dynamic Web Pages," "Embedded Scripts," and "Peer-to-Peer Computing." These three chapters will tend to dampen the spirits of anyone who does not come from a technical background. Chapter 18 on "Symbolic Services-Information Providers" and chapter 19 on "OnLine Symbolic Services-Case Studies" are ideal for class discussion and student assignments, as is chapter 20, "Online Retail Shopping-Physical Items." Chapter 21 presents a number of case studies on the "Technical Constraints" discussed in chapter 3, and chapter 22 presents case studies on the "Cultural Constraints" discussed in chapter 4. These case studies are not only presented in an interesting manner; they also focus on situations that most Web users have encountered but never really given much thought to. Chapter 23, "A Better Model?," discusses a combined formalized/unformalized model that might make Web applications such as banking and booking travel work better than the current models. This chapter will cause readers to think about the role of formalization and the unformalized processes that are involved in any application. Chapters 24, 25, 26, and 27, which discuss the roles of "Data Exchange," "Formalized Data Exchange," "Electronic Data Interchange-EDI," and "XML" in business-to-business applications on the Web, may stress the limits of the nontechnically oriented reader, even though they are presented in a very understandable manner.
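The pull/push distinction the reviewer mentions can be sketched in a few lines: in the pull model the client asks the server for new content, while in the push model the server delivers content to subscribers as it appears. The class and method names below are hypothetical illustrations, not anything from the book under review.

```python
# A minimal sketch of pull vs. push delivery of updates.
# NewsServer, publish, poll, and subscribe are invented names for illustration.

class NewsServer:
    def __init__(self):
        self.items = []          # published items, in order
        self.subscribers = []    # callbacks for push delivery

    def publish(self, item):
        self.items.append(item)
        for notify in self.subscribers:
            notify(item)         # push: the server initiates delivery

    def poll(self, since):
        return self.items[since:]  # pull: the client asks for news

    def subscribe(self, callback):
        self.subscribers.append(callback)

server = NewsServer()

# Pull model: the client must ask, and may ask even when nothing is new.
server.publish("headline 1")
seen = server.poll(since=0)     # client-initiated request

# Push model: subscribers receive items as soon as they are published.
inbox = []
server.subscribe(inbox.append)
server.publish("headline 2")    # delivered to the subscriber immediately
```

The trade-off the sketch captures is that pulling wastes requests when nothing has changed, while pushing requires the server to keep track of its subscribers.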
Chapters 28, 29, 30, and 31 discuss Web services, the automated value chain, electronic marketplaces, and outsourcing, which are of high interest to business students, businesspeople, and designers of Web applications, and can be skimmed by others who want to understand ebusiness but are not interested in the details. In Part 5, the chapters 32, 33, and 34 on "Interfacing with the Web of the Future," "A Disruptive Technology," "Virtual Businesses," and "Semantic Web" were, for me, as someone who teaches courses in IT and develops ebusiness applications, the most interesting chapters in the book because they provided some useful insights about what is likely to happen in the future. The summary in Part 6 of the book is quite well done, and I wish I had read it before I started reading the other parts of the book.
    The book is quite large, at over 400 pages, and covers a myriad of topics, probably more than any one course could cover, but an instructor could pick and choose the chapters most appropriate to the course content; the book could be used for multiple courses by selecting the relevant topics. I enjoyed the first-person, rather down-to-earth writing style and the number of examples and analogies the author presents. I believe most people could relate to the examples and situations presented by the author. As a teacher of information technology, I find the discussion questions at the end of the chapters and the case studies a valuable resource, as are the end-of-chapter notes. I highly recommend this book for an introductory course that combines computing, networking, the Web, and ebusiness for business and social science students, as well as for an introductory course for students in information science, library science, and computer science. Likewise, I believe IT managers and Web page designers could benefit from selected chapters in the book."
  19. Vidmar, D.J.; Anderson-Cahoon, C.J.: Internet search tools : history to 2000 (2009) 0.09
    0.08957498 = product of:
      0.13436246 = sum of:
        0.076371044 = weight(_text_:book in 3824) [ClassicSimilarity], result of:
          0.076371044 = score(doc=3824,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.34138763 = fieldWeight in 3824, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3824)
        0.05799142 = product of:
          0.11598284 = sum of:
            0.11598284 = weight(_text_:search in 3824) [ClassicSimilarity], result of:
              0.11598284 = score(doc=3824,freq=12.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.65844285 = fieldWeight in 3824, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3824)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The early history of Internet search systems was quite diverse, and went through several stages before settling into the more mature recent environment of a few major search engines. The authors note: "From the early beginnings of Telnet, File Transfer Protocol (FTP), Archie, Veronica, and Gopher to the current iterations of Web search engines and search directories that use graphical interfaces, spiders, worms, robots, complex algorithms, proprietary information, competing interfaces, and advertising, access to the vast store of materials that is the Internet has depended upon search tools."
    Footnote
    Vgl.: http://www.tandfonline.com/doi/book/10.1081/E-ELIS3.
  20. Carroll, N.: Search engine optimization (2009) 0.09
    0.08943023 = product of:
      0.13414533 = sum of:
        0.08728119 = weight(_text_:book in 3874) [ClassicSimilarity], result of:
          0.08728119 = score(doc=3874,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.39015728 = fieldWeight in 3874, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0625 = fieldNorm(doc=3874)
        0.04686415 = product of:
          0.0937283 = sum of:
            0.0937283 = weight(_text_:search in 3874) [ClassicSimilarity], result of:
              0.0937283 = score(doc=3874,freq=6.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.5321022 = fieldWeight in 3874, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3874)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Search engine optimization (SEO) is the craft of elevating Web sites or individual Web site pages to higher rankings on search engines through programming, marketing, or content acumen. This section covers the origins of SEO, strategies and tactics, history and trends, and the evolution of user behavior in online searching.
    Footnote
    Vgl.: http://www.tandfonline.com/doi/book/10.1081/E-ELIS3.

Types

  • a 2289
  • m 416
  • el 187
  • s 125
  • b 29
  • x 20
  • i 15
  • r 8
  • n 4
  • p 1
