Search (4972 results, page 2 of 249)

  1. Gray, B.: Cataloging the special collections of Allegheny college (2005) 0.08
    0.07927456 = product of:
      0.15854912 = sum of:
        0.15854912 = sum of:
          0.10972058 = weight(_text_:book in 127) [ClassicSimilarity], result of:
            0.10972058 = score(doc=127,freq=4.0), product of:
              0.2272612 = queryWeight, product of:
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.051484983 = queryNorm
              0.48279503 = fieldWeight in 127, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.0546875 = fieldNorm(doc=127)
          0.04882853 = weight(_text_:22 in 127) [ClassicSimilarity], result of:
            0.04882853 = score(doc=127,freq=2.0), product of:
              0.18029164 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051484983 = queryNorm
              0.2708308 = fieldWeight in 127, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=127)
      0.5 = coord(1/2)
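The indented tree above is Lucene's "explain" output for its classic TF-IDF similarity. As a minimal sketch of how the numbers combine (assuming Lucene's documented ClassicSimilarity formulas, with docFreq, maxDocs, freq, and the norms taken from the tree), each term contributes queryWeight x fieldWeight, and the coord factor scales the sum by the fraction of query clauses matched:

```python
import math

def idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

query_norm = 0.051484983   # queryNorm from the tree
field_norm = 0.0546875     # fieldNorm(doc=127)

def term_score(freq, doc_freq, max_docs=44218):
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm                    # e.g. ~0.2272612 for "book"
    field_weight = math.sqrt(freq) * i * field_norm  # tf = sqrt(freq)
    return query_weight * field_weight

book = term_score(freq=4.0, doc_freq=1454)  # ~0.10972058
t22 = term_score(freq=2.0, doc_freq=3622)   # ~0.04882853
score = (book + t22) * 0.5                  # coord(1/2): 1 of 2 top-level clauses matched -> ~0.07927456
```

Running this reproduces the scores shown for entry 1 to within floating-point rounding; the same arithmetic applies to every explanation tree on this page.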
    
    Abstract
    Scholars have long noted the significance of Allegheny College's special collections to American cultural and educational history. Special collections have value to colleges and universities as publicity devices to draw scholars, students, and funding to the institution. Catalogers have an important role to play in marketing the library and the college through improved bibliographic access to these collections. Rare book and manuscript cataloging presents many challenges to catalogers, especially at smaller institutions. This report traces the evolution of Allegheny College's catalog, from book format in 1823, through card format, and finally to online. It also explores the bibliographic challenges created as the library moved from one format to another.
    Date
    10. 9.2000 17:38:22
  2. Teper, J.H.; Erekson, S.M.: ¬The condition of our "hidden" rare book collections : a conservation survey at the University of Illinois at Urbana-Champaign (2006) 0.08
    0.078517824 = product of:
      0.15703565 = sum of:
        0.15703565 = sum of:
          0.11518262 = weight(_text_:book in 770) [ClassicSimilarity], result of:
            0.11518262 = score(doc=770,freq=6.0), product of:
              0.2272612 = queryWeight, product of:
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.051484983 = queryNorm
              0.50682926 = fieldWeight in 770, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.046875 = fieldNorm(doc=770)
          0.041853026 = weight(_text_:22 in 770) [ClassicSimilarity], result of:
            0.041853026 = score(doc=770,freq=2.0), product of:
              0.18029164 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051484983 = queryNorm
              0.23214069 = fieldWeight in 770, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=770)
      0.5 = coord(1/2)
    
    Abstract
    In response to the Association of Research Libraries' Special Collections Task Force's interest in "hidden" special collection materials, the University of Illinois at Urbana-Champaign's Conservation Unit undertook a conservation needs survey of the Rare Book and Special Collections Library's backlog of uncataloged rare book materials. The survey evaluated the binding structure; physical, biological, and chemical damage; and unique features of more than 4,000 randomly sampled pieces from the collection. The information gathered would aid in planning for the integration of immediate preservation actions with future cataloging projects and in better directing future conservation efforts. This paper details the development of the survey, interprets the results, and suggests methodologies for assessing other rare collections as well as approaches to integrating the identified immediate preservation needs with cataloging and processing projects.
    Date
    10. 9.2000 17:38:22
  3. Golderman, G.M.; Connolly, B.: Between the book covers : going beyond OPAC keyword searching with the deep linking capabilities of Google Scholar and Google Book Search (2004/05) 0.07
    0.07285602 = product of:
      0.14571203 = sum of:
        0.14571203 = sum of:
          0.11083451 = weight(_text_:book in 731) [ClassicSimilarity], result of:
            0.11083451 = score(doc=731,freq=8.0), product of:
              0.2272612 = queryWeight, product of:
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.051484983 = queryNorm
              0.4876966 = fieldWeight in 731, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.0390625 = fieldNorm(doc=731)
          0.034877524 = weight(_text_:22 in 731) [ClassicSimilarity], result of:
            0.034877524 = score(doc=731,freq=2.0), product of:
              0.18029164 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051484983 = queryNorm
              0.19345059 = fieldWeight in 731, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=731)
      0.5 = coord(1/2)
    
    Abstract
    One finding of the 2006 OCLC study of College Students' Perceptions of Libraries and Information Resources was that students expressed equal levels of trust in libraries and search engines when it came to meeting their information needs in a way that they felt was authoritative. Seeking to incorporate this insight into our own instructional methodology, Schaffer Library at Union College has attempted to engineer a shift from Google to Google Scholar among our student users by representing Scholar as a viable adjunct to the catalog and to more traditional electronic resources. By attempting to engage student researchers on their own terms, we have discovered that most of them react enthusiastically to the revelation that the Google they think they know so well is, it turns out, a multifaceted resource that is capable of delivering the sort of scholarly information that will meet with their professors' approval. Specifically, this article focuses on the fact that many Google Scholar searches link back to our own Web catalog, where they identify useful book titles that direct OPAC keyword searches have missed.
    Date
    2.12.2007 19:39:22
    Object
    Google Book Search
  4. Dominich, S.: Mathematical foundations of information retrieval (2001) 0.07
    0.07285602 = product of:
      0.14571203 = sum of:
        0.14571203 = sum of:
          0.11083451 = weight(_text_:book in 1753) [ClassicSimilarity], result of:
            0.11083451 = score(doc=1753,freq=8.0), product of:
              0.2272612 = queryWeight, product of:
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.051484983 = queryNorm
              0.4876966 = fieldWeight in 1753, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1753)
          0.034877524 = weight(_text_:22 in 1753) [ClassicSimilarity], result of:
            0.034877524 = score(doc=1753,freq=2.0), product of:
              0.18029164 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051484983 = queryNorm
              0.19345059 = fieldWeight in 1753, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1753)
      0.5 = coord(1/2)
    
    Abstract
    This book offers a comprehensive and consistent mathematical approach to information retrieval (IR), without which no implementation is possible, and sheds entirely new light on the structure of IR models. It describes all IR models in a unified formal style and language, along with examples for each, thus offering a complete overview of them. The book also lays mathematical foundations and develops a consistent mathematical theory of IR (including all mathematical results achieved so far) as a stand-alone mathematical discipline, which can therefore be read and taught independently. In addition, the book contains all the mathematical background on which IR relies, sparing the reader the need to search different sources. The book will be of interest to computer or information scientists, librarians, mathematicians, undergraduate students, and researchers whose work involves information retrieval.
    Date
    22. 3.2008 12:26:32
  5. Kumbhar, R.: Library classification trends in the 21st century (2012) 0.07
    0.07285602 = product of:
      0.14571203 = sum of:
        0.14571203 = sum of:
          0.11083451 = weight(_text_:book in 736) [ClassicSimilarity], result of:
            0.11083451 = score(doc=736,freq=8.0), product of:
              0.2272612 = queryWeight, product of:
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.051484983 = queryNorm
              0.4876966 = fieldWeight in 736, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.0390625 = fieldNorm(doc=736)
          0.034877524 = weight(_text_:22 in 736) [ClassicSimilarity], result of:
            0.034877524 = score(doc=736,freq=2.0), product of:
              0.18029164 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051484983 = queryNorm
              0.19345059 = fieldWeight in 736, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=736)
      0.5 = coord(1/2)
    
    Abstract
    "This book would serve as a good introductory textbook for a library science student or as a reference work on the types of classification currently in use." (College and Research Libraries)
    - covers all aspects of library classification
    - the only book that reviews literature published over a decade's time span (1999-2009)
    - well-thought-out chapterization, in tune with the LIS and classification curriculum
    - a useful reference tool for researchers in classification
    - a valuable contribution to the bibliographic control of classification literature
    Library Classification Trends in the 21st Century traces developments in and around library classification as reported in literature published in the first decade of the 21st century. It reviews literature published on various aspects of library classification, including modern applications of classification such as internet resource discovery, automatic book classification, and text categorization; modern manifestations of classification such as taxonomies, folksonomies, and ontologies; and interoperable systems enabling crosswalks. The book also features classification education and an exploration of relevant topics.
    Date
    22. 2.2013 12:23:55
  6. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google print for libraries (2005) 0.07
    0.07266897 = product of:
      0.14533794 = sum of:
        0.14533794 = sum of:
          0.124411434 = weight(_text_:book in 1184) [ClassicSimilarity], result of:
            0.124411434 = score(doc=1184,freq=28.0), product of:
              0.2272612 = queryWeight, product of:
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.051484983 = queryNorm
              0.5474381 = fieldWeight in 1184, product of:
                5.2915025 = tf(freq=28.0), with freq of:
                  28.0 = termFreq=28.0
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1184)
          0.020926513 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
            0.020926513 = score(doc=1184,freq=2.0), product of:
              0.18029164 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051484983 = queryNorm
              0.116070345 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1184)
      0.5 = coord(1/2)
    
    Abstract
    Google's December 2004 announcement of its intention to collaborate with five major research libraries - Harvard University, the University of Michigan, Stanford University, the University of Oxford, and the New York Public Library - to digitize and surface their print book collections in the Google searching universe has, predictably, stirred conflicting opinion, with some viewing the project as a welcome opportunity to enhance the visibility of library collections in new environments, and others wary of Google's prospective role as gateway to these collections. The project has been vigorously debated on discussion lists and blogs, with the participating libraries commonly referred to as "the Google 5". One point most observers seem to concede is that the questions raised by this initiative are both timely and significant. The Google Print Library Project (GPLP) has galvanized a long overdue, multi-faceted discussion about library print book collections. The print book is core to library identity and practice, but in an era of zero-sum budgeting, it is almost inevitable that print book budgets will decline as budgets for serials, digital resources, and other materials expand. As libraries re-allocate resources to accommodate changing patterns of user needs, print book budgets may be adversely impacted. Of course, the degree of impact will depend on a library's perceived mission. A public library may expect books to justify their shelf-space, with de-accession the consequence of minimal use. A national library, on the other hand, has a responsibility to the scholarly and cultural record and may seek to collect comprehensively within particular areas, with the attendant obligation to secure the long-term retention of its print book collections. 
    The combination of limited budgets, changing user needs, and differences in library collection strategies underscores the need to think about a collective, or system-wide, print book collection - in particular, how can an inter-institutional system be organized to achieve goals that would be difficult, and/or prohibitively expensive, for any one library to undertake individually [4]? Mass digitization programs like GPLP cast new light on these and other issues surrounding the future of library print book collections, but at this early stage, it is light that illuminates only dimly. It will be some time before GPLP's implications for libraries and library print book collections can be fully appreciated and evaluated. But the strong interest and lively debate generated by this initiative suggest that some preliminary analysis - premature though it may be - would be useful, if only to undertake a rough mapping of the terrain over which GPLP potentially will extend. At the least, some early perspective helps shape interesting questions for the future, when the boundaries of GPLP become settled, workflows for producing and managing the digitized materials become systematized, and usage patterns within the GPLP framework begin to emerge.
    This article offers some perspectives on GPLP in light of what is known about library print book collections in general, and those of the Google 5 in particular, from information in OCLC's WorldCat bibliographic database and holdings file. Questions addressed include:
    * Coverage: What proportion of the system-wide print book collection will GPLP potentially cover? What is the degree of holdings overlap across the print book collections of the five participating libraries?
    * Language: What is the distribution of languages associated with the print books held by the GPLP libraries? Which languages are predominant?
    * Copyright: What proportion of the GPLP libraries' print book holdings are out of copyright?
    * Works: How many distinct works are represented in the holdings of the GPLP libraries? How does a focus on works impact coverage and holdings overlap?
    * Convergence: What are the effects on coverage of using a different set of five libraries? What are the effects of adding the holdings of additional libraries to those of the GPLP libraries, and how do these effects vary by library type?
    These questions certainly do not exhaust the analytical possibilities presented by GPLP. More in-depth analysis might look at Google 5 coverage in particular subject areas; it also would be interesting to see how many books covered by the GPLP have already been digitized in other contexts. However, these questions are left to future studies. The purpose here is to explore a few basic questions raised by GPLP, and in doing so, provide an empirical context for the debate that is sure to continue for some time to come. A secondary objective is to lay some groundwork for a general set of questions that could be used to explore the implications of any mass digitization initiative. A suggested list of questions is provided in the conclusion of the article.
    Date
    26.12.2011 14:08:22
    Object
    Google book search
  7. Schaefer, B.: Mathematics literature : history (2009) 0.07
    0.07223582 = product of:
      0.14447165 = sum of:
        0.14447165 = sum of:
          0.08866761 = weight(_text_:book in 3843) [ClassicSimilarity], result of:
            0.08866761 = score(doc=3843,freq=2.0), product of:
              0.2272612 = queryWeight, product of:
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.051484983 = queryNorm
              0.39015728 = fieldWeight in 3843, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.0625 = fieldNorm(doc=3843)
          0.055804037 = weight(_text_:22 in 3843) [ClassicSimilarity], result of:
            0.055804037 = score(doc=3843,freq=2.0), product of:
              0.18029164 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051484983 = queryNorm
              0.30952093 = fieldWeight in 3843, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3843)
      0.5 = coord(1/2)
    
    Date
    27. 8.2011 14:22:48
    Footnote
    Cf.: http://www.tandfonline.com/doi/book/10.1081/E-ELIS3.
  8. Nahl, D.: User-centered revolution: 1995-2008 (2009) 0.07
    0.07223582 = product of:
      0.14447165 = sum of:
        0.14447165 = sum of:
          0.08866761 = weight(_text_:book in 3902) [ClassicSimilarity], result of:
            0.08866761 = score(doc=3902,freq=2.0), product of:
              0.2272612 = queryWeight, product of:
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.051484983 = queryNorm
              0.39015728 = fieldWeight in 3902, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.0625 = fieldNorm(doc=3902)
          0.055804037 = weight(_text_:22 in 3902) [ClassicSimilarity], result of:
            0.055804037 = score(doc=3902,freq=2.0), product of:
              0.18029164 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051484983 = queryNorm
              0.30952093 = fieldWeight in 3902, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3902)
      0.5 = coord(1/2)
    
    Date
    27. 8.2011 14:32:22
    Footnote
    Cf.: http://www.tandfonline.com/doi/book/10.1081/E-ELIS3.
  9. #220 0.07
    0.06905397 = product of:
      0.13810794 = sum of:
        0.13810794 = product of:
          0.27621588 = sum of:
            0.27621588 = weight(_text_:22 in 219) [ClassicSimilarity], result of:
              0.27621588 = score(doc=219,freq=4.0), product of:
                0.18029164 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051484983 = queryNorm
                1.5320505 = fieldWeight in 219, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.21875 = fieldNorm(doc=219)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.1998 20:02:22
  10. #1387 0.07
    0.06905397 = product of:
      0.13810794 = sum of:
        0.13810794 = product of:
          0.27621588 = sum of:
            0.27621588 = weight(_text_:22 in 1386) [ClassicSimilarity], result of:
              0.27621588 = score(doc=1386,freq=4.0), product of:
                0.18029164 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051484983 = queryNorm
                1.5320505 = fieldWeight in 1386, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.21875 = fieldNorm(doc=1386)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.1998 20:02:22
  11. #2103 0.07
    0.06905397 = product of:
      0.13810794 = sum of:
        0.13810794 = product of:
          0.27621588 = sum of:
            0.27621588 = weight(_text_:22 in 2102) [ClassicSimilarity], result of:
              0.27621588 = score(doc=2102,freq=4.0), product of:
                0.18029164 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051484983 = queryNorm
                1.5320505 = fieldWeight in 2102, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.21875 = fieldNorm(doc=2102)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.1998 20:02:22
  12. Knowledge management in practice : connections and context. (2008) 0.07
    0.06849766 = product of:
      0.13699532 = sum of:
        0.13699532 = sum of:
          0.08866761 = weight(_text_:book in 2749) [ClassicSimilarity], result of:
            0.08866761 = score(doc=2749,freq=8.0), product of:
              0.2272612 = queryWeight, product of:
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.051484983 = queryNorm
              0.39015728 = fieldWeight in 2749, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.03125 = fieldNorm(doc=2749)
          0.048327714 = weight(_text_:22 in 2749) [ClassicSimilarity], result of:
            0.048327714 = score(doc=2749,freq=6.0), product of:
              0.18029164 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051484983 = queryNorm
              0.268053 = fieldWeight in 2749, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2749)
      0.5 = coord(1/2)
    
    Classification
    658.4/038 22
    Date
    22. 3.2009 18:43:51
    DDC
    658.4/038 22
    Footnote
    Review in: JASIST 60(2006) no.3, p.642 (A.E. Prentice): "What is knowledge management (KM)? How do we define it? How do we use it and what are the benefits? KM is still an operational discipline that has yet to have an academic foundation. Its core has yet to solidify and concepts and practices remain fluid, making it difficult to discuss or even to identify the range of relevant elements. Being aware of this lack of a well-structured retrievable disciplinary literature, the editors made a practice of attending trade shows and conferences attended by KM professionals to look for presentations that would in some way expand knowledge of the field. They asked presenters to turn their paper into a book chapter, which is the major source of the material in this book. Although this is a somewhat chancy method of identifying authors and topics, several of the papers are excellent and a number add to an understanding of KM. Because of the fluidity of the area of study, the editors devised a three-dimensional topic expansion approach to the content so that the reader can follow themes in the papers that would not have been easy to do if one relied solely on the table of contents. The table of contents organizes the presentations into eight subject sections, each section with a foreword that introduces the topic and indicates briefly the contribution of each chapter to the overall section title. Following this, the Roadmap lists 18 topics or themes that appear in the book and relevant chapters where information on the theme can be found. Readers have the choice of following themes using the roadmap or of reading the book section by section. ..."
  13. OCLC und Google vereinbaren Datenaustausch (2008) 0.07
    0.068248615 = product of:
      0.13649723 = sum of:
        0.13649723 = sum of:
          0.10859521 = weight(_text_:book in 2326) [ClassicSimilarity], result of:
            0.10859521 = score(doc=2326,freq=12.0), product of:
              0.2272612 = queryWeight, product of:
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.051484983 = queryNorm
              0.47784314 = fieldWeight in 2326, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.03125 = fieldNorm(doc=2326)
          0.027902018 = weight(_text_:22 in 2326) [ClassicSimilarity], result of:
            0.027902018 = score(doc=2326,freq=2.0), product of:
              0.18029164 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051484983 = queryNorm
              0.15476047 = fieldWeight in 2326, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2326)
      0.5 = coord(1/2)
    
    Content
    "The agreement stipulates that all OCLC member libraries participating in the Google Book Search(TM) program - which enables full-text searching of more than one million books - can now contribute their WorldCat-derived MARC catalog records to Google, making their holdings considerably easier to find via Google. Google will link from Google Book Search to WorldCat.org, which will drive queries to library OPACs and other library services. Google and OCLC will share data and links to digitized books, enabling OCLC to present the digitized holdings of its member libraries in WorldCat. "This agreement is in the interest of the participating OCLC libraries. Expanded access to library collections and services is furthered by their greater visibility on the Web," said Jay Jordan, OCLC President and CEO. "We are pleased about the partnership with Google. It serves our goal of making the world's knowledge accessible to people through international library cooperation." WorldCat metadata will be provided to Google either directly by OCLC or via the member libraries taking part in the Google Book Search program. Google recently released an API (Application Programming Interface) that allows linking into Google Book Search on the basis of ISBNs (International Standard Book Numbers), LCCNs (Library of Congress Control Numbers), and OCLC numbers. When a user finds a book in Google Book Search, the link can be traced back through WorldCat.org to the local library."
    Date
    26.10.2008 11:22:04
    Object
    Google Book Search
  14. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.07
    0.068143174 = product of:
      0.13628635 = sum of:
        0.13628635 = product of:
          0.408859 = sum of:
            0.408859 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.408859 = score(doc=1826,freq=2.0), product of:
                0.43649027 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051484983 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
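This entry's tree differs from the others in two ways: the matched token "3a" (likely an artifact of URL-encoded text, i.e. "%3A") is rare (docFreq=24), giving it an unusually high idf, and only one of three inner query clauses matched, hence the coord(1/3) inside the outer coord(1/2). A minimal sketch, again assuming ClassicSimilarity's documented formulas with the statistics shown in the tree:

```python
import math

# "3a" is rare: docFreq=24 in an index of 44218 docs, so its
# idf (1 + ln(maxDocs / (docFreq + 1))) is unusually high
idf_3a = 1.0 + math.log(44218 / (24 + 1))             # ~8.478011

query_norm = 0.051484983
field_norm = 0.078125                                  # fieldNorm(doc=1826)

query_weight = idf_3a * query_norm                     # ~0.43649027
field_weight = math.sqrt(2.0) * idf_3a * field_norm    # tf = sqrt(2) -> ~0.93669677

# one of three inner clauses matched -> coord(1/3); outer coord(1/2) as elsewhere
score = query_weight * field_weight * (1 / 3) * 0.5    # ~0.068143174
```

The rarity of the token, not any topical relevance, is what pushes this entry so high in the ranking.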
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
  15. Kellsey, C.: Cooperative cataloging, vendor records, and European language monographs (2002) 0.07
    0.06794962 = product of:
      0.13589925 = sum of:
        0.13589925 = sum of:
          0.09404621 = weight(_text_:book in 160) [ClassicSimilarity], result of:
            0.09404621 = score(doc=160,freq=4.0), product of:
              0.2272612 = queryWeight, product of:
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.051484983 = queryNorm
              0.41382432 = fieldWeight in 160, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.046875 = fieldNorm(doc=160)
          0.041853026 = weight(_text_:22 in 160) [ClassicSimilarity], result of:
            0.041853026 = score(doc=160,freq=2.0), product of:
              0.18029164 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051484983 = queryNorm
              0.23214069 = fieldWeight in 160, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=160)
      0.5 = coord(1/2)
    
    Abstract
    The appearance in OCLC and RLIN of minimal-level catalog records from European book vendors for European language monographs, and the effect of these records on cataloging department workflows and cooperative cataloging efforts, have been matters of concern expressed recently at ALA meetings and in the library literature. A study of 8,778 catalog records was undertaken to discover how many current European language monographs were being cataloged by the Library of Congress, by member libraries, and by vendors. It was found that vendor records accounted for 16.7% of Spanish books, 18% of French books, 33.6% of German books, and 52.5% of those in Italian. The number of libraries enhancing vendor records in OCLC was found to be only approximately one-third the number of libraries contributing original records for European language books. Ongoing increases in European book publishing and the increasing globalization of cataloging databases mean that the results of this study have implications not only for local cataloging practice but for cooperative cataloging as a whole.
    Date
    10. 9.2000 17:38:22
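    The ClassicSimilarity breakdowns attached to each entry follow Lucene's classic TF-IDF formula: fieldWeight = sqrt(freq) x idf x fieldNorm, queryWeight = idf x queryNorm, and the term's score is their product. A minimal sketch re-computing the figures for entry 15 (term "book", doc 160); the constants are copied from the listing above, while the function and variable names are our own illustration, not Lucene API calls:

    ```python
    import math

    def tf(freq):
        # Classic term-frequency factor: sqrt of the raw frequency
        return math.sqrt(freq)

    def field_weight(freq, idf, field_norm):
        # fieldWeight = tf(freq) * idf * fieldNorm
        return tf(freq) * idf * field_norm

    # Constants taken verbatim from the explain tree for doc 160
    idf_book   = 4.414126      # idf(docFreq=1454, maxDocs=44218)
    query_norm = 0.051484983
    field_norm = 0.046875

    query_weight = idf_book * query_norm             # ~ 0.2272612
    fw    = field_weight(4.0, idf_book, field_norm)  # ~ 0.41382432
    score = query_weight * fw                        # ~ 0.09404621

    print(query_weight, fw, score)
    ```

    The same arithmetic reproduces every tree in the listing; only freq, fieldNorm, and the per-term idf change from entry to entry.
    
    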
  16. Gödert, W.; Hubrich, J.; Nagelschmidt, M.: Semantic knowledge representation for information retrieval (2014) 0.07
    0.06794962 = product of:
      0.13589925 = sum of:
        0.13589925 = sum of:
          0.09404621 = weight(_text_:book in 987) [ClassicSimilarity], result of:
            0.09404621 = score(doc=987,freq=4.0), product of:
              0.2272612 = queryWeight, product of:
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.051484983 = queryNorm
              0.41382432 = fieldWeight in 987, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.046875 = fieldNorm(doc=987)
          0.041853026 = weight(_text_:22 in 987) [ClassicSimilarity], result of:
            0.041853026 = score(doc=987,freq=2.0), product of:
              0.18029164 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051484983 = queryNorm
              0.23214069 = fieldWeight in 987, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=987)
      0.5 = coord(1/2)
    
    Abstract
    This book covers the basics of semantic web technologies and indexing languages, and describes their contribution to improving indexing languages as tools for subject queries and knowledge exploration. The book is relevant to information scientists, knowledge workers, and indexers. It provides a suitable combination of theoretical foundations and practical applications.
    Date
    23. 7.2017 13:49:22
  17. Bayer, M.: ¬Die Gier der Bits und Bytes auf Gutenberg : Elektronisches Publizieren, Drucken und das papierlose E-Book melden sich in Frankfurt zu Wort (2000) 0.07
    0.06543152 = product of:
      0.13086304 = sum of:
        0.13086304 = sum of:
          0.09598551 = weight(_text_:book in 5391) [ClassicSimilarity], result of:
            0.09598551 = score(doc=5391,freq=6.0), product of:
              0.2272612 = queryWeight, product of:
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.051484983 = queryNorm
              0.42235768 = fieldWeight in 5391, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5391)
          0.034877524 = weight(_text_:22 in 5391) [ClassicSimilarity], result of:
            0.034877524 = score(doc=5391,freq=2.0), product of:
              0.18029164 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051484983 = queryNorm
              0.19345059 = fieldWeight in 5391, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5391)
      0.5 = coord(1/2)
    
    Abstract
    One thing has stayed the same on the new luxury variant of the e-book: the leather cover, which at least vaguely recalls the good old paper tome. In between hides plenty of technology. New above all is the high-resolution display: it offers 32,000 colors and responds to touch, so handwritten annotations can be added to the books. The device, designated REB 1200, is intended to replace the Softbook Reader. It weighs 940 grams and comes with 8 MB of memory as standard, enough for about 3,000 color pages. Anyone who needs more reading material can plug in an additional 128 MB memory card. The device includes a modem, so books can be downloaded even without an Internet connection, as well as an Ethernet network card for fast contact with a computer. Price in the USA: 700 US dollars. The smaller REB 1100 works without color, so 8,000 pages fit in the same amount of memory. The hardware weighs 500 grams, 130 grams less than its predecessor, the Rocket E-Book, and now also has a modem, plus a USB port and an infrared interface. Price: 300 dollars. Both devices are manufactured by Thomson Multimedia; Gemstar now concentrates exclusively on its business with publishers
    Date
    3. 5.1997 8:44:22
  18. Anderson, J.D.; Perez-Carballo, J.: Information retrieval design : principles and options for information description, organization, display, and access in information retrieval databases, digital libraries, catalogs, and indexes (2005) 0.06
    0.06413664 = product of:
      0.12827328 = sum of:
        0.12827328 = sum of:
          0.11083451 = weight(_text_:book in 1833) [ClassicSimilarity], result of:
            0.11083451 = score(doc=1833,freq=32.0), product of:
              0.2272612 = queryWeight, product of:
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.051484983 = queryNorm
              0.4876966 = fieldWeight in 1833, product of:
                5.656854 = tf(freq=32.0), with freq of:
                  32.0 = termFreq=32.0
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.01953125 = fieldNorm(doc=1833)
          0.017438762 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
            0.017438762 = score(doc=1833,freq=2.0), product of:
              0.18029164 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051484983 = queryNorm
              0.09672529 = fieldWeight in 1833, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=1833)
      0.5 = coord(1/2)
    
    Content
    Inhalt: Chapters 2 to 5: Scopes, Domains, and Display Media (pp. 47-102) Chapters 6 to 8: Documents, Analysis, and Indexing (pp. 103-176) Chapters 9 to 10: Exhaustivity and Specificity (pp. 177-196) Chapters 11 to 13: Displayed/Nondisplayed Indexes, Syntax, and Vocabulary Management (pp. 197-364) Chapters 14 to 16: Surrogation, Locators, and Surrogate Displays (pp. 365-390) Chapters 17 and 18: Arrangement and Size of Displayed Indexes (pp. 391-446) Chapters 19 to 21: Search Interface, Record Format, and Full-Text Display (pp. 447-536) Chapter 22: Implementation and Evaluation (pp. 537-541)
    Footnote
    Rez. in JASIST 57(2006) no.10, S.1412-1413 (R. W. White): "Information Retrieval Design is a textbook that aims to foster the intelligent user-centered design of databases for Information Retrieval (IR). The book outlines a comprehensive set of 20 factors, chosen based on prior research and the authors' experiences, that need to be considered during the design process. The authors provide designers with information on those factors to help optimize decision making. The book does not cover user-needs assessment, implementation of IR databases or retrieval systems, testing, or evaluation. Most textbooks in IR do not offer a substantive walkthrough of the design factors that need to be considered when developing IR databases. Instead, they focus on issues such as the implementation of data structures, the explanation of search algorithms, and the role of human-machine interaction in the search process. The book touches on all three, but its focus is on designing databases that can be searched effectively, not the tools to search them. This is an important distinction: despite its title, this book does not describe how to build retrieval systems. Professor Anderson utilizes his wealth of experience in cataloging and classification to bring a unique perspective on IR database design that may be useful for novices, for developers seeking to make sense of the design process, and for students as a text to supplement classroom tuition. The foreword and preface, by Jessica Milstead and James Anderson, respectively, are engaging and worthwhile reading. It is astounding that it has taken some 20 years for anyone to continue the work of Milstead and write as extensively as Anderson does about such an important issue as IR database design. The remainder of the book is divided into two parts: Introduction and Background Issues, and Design Decisions. Part 1 is a reasonable introduction and includes a glossary of the terminology that the authors use in the book.
    It is very helpful to have these definitions early on, but the subject descriptors in the right margin are distracting and do not serve their purpose as access points to the text. The terminology is useful to have, as the authors' definitions of concepts do not fit exactly with what is traditionally accepted in IR. For example, they use the term 'message' to refer to what would normally be called "document" or "information object," and do not do a good job of distinguishing between "messages" and "documentary units". Part 2 describes components and attributes of IR databases to help designers make design choices. The book provides them with information about the potential ramifications of their decisions and advocates a user-oriented approach to making them. Chapters are arranged in a seemingly sensible order based around these factors, and the authors remind us of the importance of integrating them. The authors are skilled at selecting the important factors in the development of seemingly complex entities, such as IR databases; however, the integration of these factors, or the interaction between them, is not handled as well as perhaps it should be. Factors are presented in the order in which the authors feel they should be addressed, but there is no chapter describing how the factors interact. The authors miss an opportunity at the beginning of Part 2, where they could have illustrated with a figure the interactions between the 20 factors they list in a way that is not possible with the linear structure of the book.
    . . . Those interested in using the book to design IR databases can work through the chapters in the order provided and end up with a set of requirements for database design. The steps outlined in this book can be rearranged in numerous orders depending on the particular circumstances. This book would benefit from a discussion of which orders are appropriate for different circumstances and how the requirements outlined interact. I come away from Information Retrieval Design with mixed, although mainly positive, feelings. Even though the aims of this book are made clear from the outset, it was still a disappointment to see issues such as implementation and evaluation covered in only a cursory manner. The book is very well structured, well written, and operates in a part of the space that has been neglected for too long. The authors whet my appetite with discussion of design, and I would have liked to hear a bit more about what happens in requirements elicitation before the design issues have been identified, and about implementation after they have been addressed. Overall, the book is a comprehensive review of previous research supplemented by the authors' views on IR design. This book focuses on breadth of coverage rather than depth of coverage and is therefore potentially of more use to novices in the field. The writing style is clear, and the authors' knowledge of the subject area is undoubted. I would recommend this book to anyone who wants to learn about IR database design and take advantage of the experience and insights of Anderson, one of the visionaries in the field."
  19. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.06
    0.06351771 = product of:
      0.12703542 = sum of:
        0.12703542 = sum of:
          0.09913341 = weight(_text_:book in 168) [ClassicSimilarity], result of:
            0.09913341 = score(doc=168,freq=10.0), product of:
              0.2272612 = queryWeight, product of:
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.051484983 = queryNorm
              0.4362091 = fieldWeight in 168, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.03125 = fieldNorm(doc=168)
          0.027902018 = weight(_text_:22 in 168) [ClassicSimilarity], result of:
            0.027902018 = score(doc=168,freq=2.0), product of:
              0.18029164 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051484983 = queryNorm
              0.15476047 = fieldWeight in 168, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=168)
      0.5 = coord(1/2)
    
    Abstract
    Ontologies are viewed as the silver bullet for many applications, but in open or evolving systems, different parties can adopt different ontologies. This increases heterogeneity problems rather than reducing heterogeneity. This book proposes ontology matching as a solution to the problem of semantic heterogeneity, offering researchers and practitioners a uniform framework of reference to currently available work. The techniques presented apply to database schema matching, catalog integration, XML schema matching and more. Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. 
The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
    Date
    20. 6.2012 19:08:22
  20. Benemann, W.E.: Reference implications of digital technology in a library photograph collection (1994) 0.06
    0.063206345 = product of:
      0.12641269 = sum of:
        0.12641269 = sum of:
          0.07758416 = weight(_text_:book in 210) [ClassicSimilarity], result of:
            0.07758416 = score(doc=210,freq=2.0), product of:
              0.2272612 = queryWeight, product of:
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.051484983 = queryNorm
              0.34138763 = fieldWeight in 210, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.414126 = idf(docFreq=1454, maxDocs=44218)
                0.0546875 = fieldNorm(doc=210)
          0.04882853 = weight(_text_:22 in 210) [ClassicSimilarity], result of:
            0.04882853 = score(doc=210,freq=2.0), product of:
              0.18029164 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051484983 = queryNorm
              0.2708308 = fieldWeight in 210, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=210)
      0.5 = coord(1/2)
    
    Abstract
    Research undertaken in photograph collections presents unique challenges, both for the researcher and for the librarian or archivist attempting to provide access. Unlike a book, which most frequently has a recognizable author and/or title, a photograph is not easily described or indexed. The identity of the photographer is sometimes unknown. The names of any people shown in the images are frequently not readily available, and even the location may be ambiguous. But now the challenges of providing reference access to photograph collections can be met by scanning images into a digitized format so that they can be searched using a computer terminal. In this article, Benemann describes a feasibility study focusing on the Japanese-American Evacuation and Relocation Photographs housed in the Bancroft Library of the University of California, Berkeley
    Source
    Reference services review. 22(1994) no.4, S.45-50
