Search (298 results, page 1 of 15)

  • Filter: type_ss:"el"
  1. Matylonek, J.C.; Ottow, C.; Reese, T.: Organizing ready reference and administrative information with the reference desk manager (2001) 0.12
    0.12375209 = product of:
      0.18562813 = sum of:
        0.14687328 = weight(_text_:reference in 1156) [ClassicSimilarity], result of:
          0.14687328 = score(doc=1156,freq=14.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.7135521 = fieldWeight in 1156, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=1156)
        0.03875485 = product of:
          0.0775097 = sum of:
            0.0775097 = weight(_text_:database in 1156) [ClassicSimilarity], result of:
              0.0775097 = score(doc=1156,freq=4.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.37897915 = fieldWeight in 1156, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1156)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
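
    The indented tree above is Lucene-style "explain" output for the first hit, produced by the classic TF-IDF similarity (ClassicSimilarity). Under that model tf = sqrt(termFreq), idf = 1 + ln(maxDocs / (docFreq + 1)), fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm, and coord(m/n) scales a clause sum by the fraction of query clauses that matched. The short Python sketch below re-derives the 0.12375209 shown above from those factors; it is a minimal illustration rather than the search engine's actual code, and it simply copies queryNorm and fieldNorm from the explain output instead of computing them.

    import math

    # Re-computation of the explain tree for hit 1 (doc 1156), assuming Lucene's
    # classic TF-IDF similarity. queryNorm and fieldNorm are taken verbatim from
    # the output above rather than derived.
    MAX_DOCS = 44218
    QUERY_NORM = 0.050593734  # constant across all hits in this result list

    def idf(doc_freq, max_docs=MAX_DOCS):
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def clause_score(freq, doc_freq, field_norm, query_norm=QUERY_NORM):
        term_idf = idf(doc_freq)
        field_weight = math.sqrt(freq) * term_idf * field_norm  # tf * idf * fieldNorm
        query_weight = term_idf * query_norm                    # idf * queryNorm
        return query_weight * field_weight

    reference = clause_score(freq=14, doc_freq=2055, field_norm=0.046875)
    database = clause_score(freq=4, doc_freq=2109, field_norm=0.046875)

    # "database" sits in a nested clause where only 1 of 2 sub-clauses matched,
    # hence the inner coord(1/2); the outer coord(2/3) reflects 2 of 3 top-level
    # query clauses matching this document.
    score = (reference + database * 0.5) * (2.0 / 3.0)
    print(round(reference, 8), round(score, 8))  # ~0.14687328 ~0.12375209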
    
    Abstract
    Non-academic questions regarding special services, phone numbers, websites, library policies, current procedures, technical notices, and other pertinent local institutional information are often asked at the academic library reference desk. These frequent and urgent information requests require tools and resources to answer efficiently. Although ready reference collections at the desk provide a tool for academic information, specialized local information resources are more difficult to create and maintain. As reference desk responsibilities become increasingly complex and communication becomes more problematic, a web database to collect and manage this non-academic, local information can be very useful. At Oregon State University, librarians in the Reference Services Management group created a custom-designed web-log bulletin board to deal with this non-academic, local information. The resulting database provides reference librarians with a one-stop location for the information and makes it easier for them to update the information, via email, as conditions, procedures, and information needs change in their busy, highly computerized information commons.
  2. EndNote Plus 2.3 : Enhanced reference database and bibliography maker. With EndLink 2.1, link to on-line and CD-ROM databases (1997) 0.09
    0.09212967 = product of:
      0.1381945 = sum of:
        0.09252147 = weight(_text_:reference in 1717) [ClassicSimilarity], result of:
          0.09252147 = score(doc=1717,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.44949555 = fieldWeight in 1717, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.078125 = fieldNorm(doc=1717)
        0.045673028 = product of:
          0.091346055 = sum of:
            0.091346055 = weight(_text_:database in 1717) [ClassicSimilarity], result of:
              0.091346055 = score(doc=1717,freq=2.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.4466312 = fieldWeight in 1717, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1717)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
  3. Atkins, H.; Lyons, C.; Ratner, H.; Risher, C.; Shillum, C.; Sidman, D.; Stevens, A.: Reference linking with DOIs : a case study (2000) 0.08
    0.07817462 = product of:
      0.117261924 = sum of:
        0.07850707 = weight(_text_:reference in 1229) [ClassicSimilarity], result of:
          0.07850707 = score(doc=1229,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.38140965 = fieldWeight in 1229, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=1229)
        0.03875485 = product of:
          0.0775097 = sum of:
            0.0775097 = weight(_text_:database in 1229) [ClassicSimilarity], result of:
              0.0775097 = score(doc=1229,freq=4.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.37897915 = fieldWeight in 1229, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1229)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    DOI-X is a prototype metadata database designed to support DOI lookups. The prototype is intended to address the integration of metadata registration and maintenance with basic DOI registration and maintenance, enabling publishers to use a single mechanism and a single quality-assurance process to register both DOIs and their associated metadata. It also contains the lookup mechanisms necessary to access the journal article metadata, both on a single-item lookup basis and on a batch basis, such as would facilitate reference linking. The prototype database was introduced and demonstrated to attendees at the STM International Meeting and the Frankfurt Book Fair in October 1999. This paper discusses the background for the creation of DOI-X and its salient features.
  4. Blosser, J.; Michaelson, R.; Routh, R.; Xia, P.: Defining the landscape of Web resources : Concluding Report of the BAER Web Resources Sub-Group (2000) 0.08
    0.07753032 = product of:
      0.11629547 = sum of:
        0.052338045 = weight(_text_:reference in 1447) [ClassicSimilarity], result of:
          0.052338045 = score(doc=1447,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.2542731 = fieldWeight in 1447, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.03125 = fieldNorm(doc=1447)
        0.06395743 = sum of:
          0.036538422 = weight(_text_:database in 1447) [ClassicSimilarity], result of:
            0.036538422 = score(doc=1447,freq=2.0), product of:
              0.20452234 = queryWeight, product of:
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.050593734 = queryNorm
              0.17865248 = fieldWeight in 1447, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.03125 = fieldNorm(doc=1447)
          0.02741901 = weight(_text_:22 in 1447) [ClassicSimilarity], result of:
            0.02741901 = score(doc=1447,freq=2.0), product of:
              0.17717063 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050593734 = queryNorm
              0.15476047 = fieldWeight in 1447, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1447)
      0.6666667 = coord(2/3)
    
    Abstract
    The BAER Web Resources Group was charged in October 1999 with defining and describing the parameters of electronic resources that do not clearly belong to the categories being defined by the BAER Digital Group or the BAER Electronic Journals Group. After some difficulty identifying precisely which resources fell under the Group's charge, we finally named the following types of resources for our consideration: web sites, electronic texts, indexes, databases and abstracts, online reference resources, and networked and non-networked CD-ROMs. Electronic resources are a vast and growing collection that touches nearly every department within the Library. It is unrealistic to think one department can effectively administer all aspects of the collection. The Group then began to focus on the concern of bibliographic access to these varied resources, and to define parameters for handling or processing them within the Library. Some key elements became evident as the work progressed: the selection process for resources to be acquired for the collection; duplication of effort; use of CORC; Resource Finder design; maintenance of the Resource Finder; CD-ROMs not networked; communications; and Voyager search limitations. An unexpected collaboration with the Web Development Committee on the Resource Finder helped to steer the Group to more detailed descriptions of bibliographic access. This collaboration included development of data elements for the Resource Finder database, and some discussions on Library staff processing of the resources. The Web Resources Group invited expert testimony to help the Group broaden its view to envision public use of the resources and discuss concerns related to technical services processing. The first testimony came from members of the Resource Finder Committee. Some background information on the Web Development Resource Finder Committee was shared. The second testimony was from librarians who select electronic texts. Three main themes were addressed: accessing CD-ROMs; the issue of including non-networked CD-ROMs in the Resource Finder; and some special concerns about electronic texts. The third testimony came from librarians who select indexes and abstracts and also provide Reference services. Appendices to this report include minutes of the meetings with the experts (Appendix A), a list of proposed data elements to be used in the Resource Finder (Appendix B), and recommendations made to the Resource Finder Committee (Appendix C). Below are summaries of the key elements.
    Date
    21. 4.2002 10:22:31
  5. Definition of the CIDOC Conceptual Reference Model (2003) 0.07
    0.06604756 = product of:
      0.09907133 = sum of:
        0.07850707 = weight(_text_:reference in 1652) [ClassicSimilarity], result of:
          0.07850707 = score(doc=1652,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.38140965 = fieldWeight in 1652, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=1652)
        0.020564256 = product of:
          0.041128512 = sum of:
            0.041128512 = weight(_text_:22 in 1652) [ClassicSimilarity], result of:
              0.041128512 = score(doc=1652,freq=2.0), product of:
                0.17717063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050593734 = queryNorm
                0.23214069 = fieldWeight in 1652, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1652)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This document is the formal definition of the CIDOC Conceptual Reference Model ("CRM"), a formal ontology intended to facilitate the integration, mediation and interchange of heterogeneous cultural heritage information. The CRM is the culmination of more than a decade of standards development work by the International Committee for Documentation (CIDOC) of the International Council of Museums (ICOM). Work on the CRM itself began in 1996 under the auspices of the ICOM-CIDOC Documentation Standards Working Group. Since 2000, development of the CRM has been officially delegated by ICOM-CIDOC to the CIDOC CRM Special Interest Group, which collaborates with the ISO working group ISO/TC46/SC4/WG9 to bring the CRM to the form and status of an International Standard.
    Date
    6. 8.2010 14:22:28
  6. ALA / Subcommittee on Subject Relationships/Reference Structures: Final Report to the ALCTS/CCS Subject Analysis Committee (1997) 0.06
    0.06353746 = product of:
      0.09530619 = sum of:
        0.07932063 = weight(_text_:reference in 1800) [ClassicSimilarity], result of:
          0.07932063 = score(doc=1800,freq=12.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.38536215 = fieldWeight in 1800, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1800)
        0.01598556 = product of:
          0.03197112 = sum of:
            0.03197112 = weight(_text_:database in 1800) [ClassicSimilarity], result of:
              0.03197112 = score(doc=1800,freq=2.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.15632091 = fieldWeight in 1800, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1800)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The SAC Subcommittee on Subject Relationships/Reference Structures was authorized at the 1995 Midwinter Meeting and appointed shortly before Annual Conference. Its creation was one result of a discussion of how (and why) to promote the display and use of broader-term subject heading references, and its charge reads as follows: To investigate: (1) the kinds of relationships that exist between subjects, the display of which are likely to be useful to catalog users; (2) how these relationships are or could be recorded in authorities and classification formats; (3) options for how these relationships should be presented to users of online and print catalogs, indexes, lists, etc. By the summer 1996 Annual Conference, make some recommendations to SAC about how to disseminate the information and/or implement changes. At that time assess the need for additional time to investigate these issues. The Subcommittee's work on each of the imperatives in the charge was summarized in a report issued at the 1996 Annual Conference (Appendix A). Highlights of this work included the development of a taxonomy of 165 subject relationships; a demonstration that, using existing MARC coding, catalog systems could be programmed to generate references they do not currently support; and an examination of reference displays in several CD-ROM database products. Since that time, work has continued on identifying term relationships and display options; on tracking research, discussion, and implementation of subject relationships in information systems; and on compiling a list of further research needs.
    Content
    Contains: Appendix A: Subcommittee on Subject Relationships/Reference Structures - REPORT TO THE ALCTS/CCS SUBJECT ANALYSIS COMMITTEE - July 1996 Appendix B (part 1): Taxonomy of Subject Relationships. Compiled by Dee Michel with the assistance of Pat Kuhr - June 1996 draft (alphabetical display) (Separately available at: http://web2.ala.org/ala/alctscontent/CCS/committees/subjectanalysis/subjectrelations/msrscu2.pdf) Appendix B (part 2): Taxonomy of Subject Relationships. Compiled by Dee Michel with the assistance of Pat Kuhr - June 1996 draft (hierarchical display) Appendix C: Checklist of Candidate Subject Relationships for Information Retrieval. Compiled by Dee Michel, Pat Kuhr, and Jane Greenberg; edited by Greg Wool - June 1997 Appendix D: Review of Reference Displays in Selected CD-ROM Abstracts and Indexes by Harriette Hemmasi and Steven Riel Appendix E: Analysis of Relationships in Six LC Subject Authority Records by Harriette Hemmasi and Gary Strawn Appendix F: Report of a Preliminary Survey of Subject Referencing in OPACs by Gregory Wool Appendix G: LC Subject Referencing in OPACs--Why Bother? by Gregory Wool Appendix H: Research Needs on Subject Relationships and Reference Structures in Information Access compiled by Jane Greenberg and Steven Riel with contributions from Dee Michel and others edited by Gregory Wool Appendix I: Bibliography on Subject Relationships compiled mostly by Dee Michel with additional contributions from Jane Greenberg, Steven Riel, and Gregory Wool
  7. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.04
    0.04464237 = product of:
      0.1339271 = sum of:
        0.1339271 = product of:
          0.40178132 = sum of:
            0.40178132 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.40178132 = score(doc=1826,freq=2.0), product of:
                0.42893425 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050593734 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  8. Encyclopædia Britannica 2003 : Ultimate Reference Suite (2002) 0.04
    0.044031702 = product of:
      0.06604755 = sum of:
        0.052338045 = weight(_text_:reference in 2182) [ClassicSimilarity], result of:
          0.052338045 = score(doc=2182,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.2542731 = fieldWeight in 2182, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.03125 = fieldNorm(doc=2182)
        0.013709505 = product of:
          0.02741901 = sum of:
            0.02741901 = weight(_text_:22 in 2182) [ClassicSimilarity], result of:
              0.02741901 = score(doc=2182,freq=2.0), product of:
                0.17717063 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050593734 = queryNorm
                0.15476047 = fieldWeight in 2182, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2182)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Review in: c't 2002, no.23, p.229 (T.J. Schult): "Mac users have so far not had much choice in multimedia encyclopedias: either a dreadful Kosmos Kompaktwissen, which is due to appear for the last time this year and disguises itself as the Systhema Universallexikon, or a Brockhaus in Text und Bild with excellent texts but a thin media component. The Britannica encyclopedias distributed in Germany by Acclaim are an excellent alternative for anyone who knows English. Whereas previously only the basic Britannica editions ran on the Mac, this now applies to all three versions: Student, Deluxe, and Ultimate Reference Suite. The Suite contains not only all 75,000 articles of the 32 Britannica volumes but also the 15,000 of the Student Encyclopaedia, a separate school encyclopedia whose simple English makes it a good entry point, especially for non-native speakers. Anyone who wants something even more elementary can click through to the Britannica Elementary Encyclopaedia, which is accessible under the same interface as the other works. Finally, the Suite includes a world atlas as well as monolingual Merriam-Webster dictionaries and thesauri at the Collegiate and Student levels, with no fewer than 555,000 definitions, synonyms, and antonyms. Anyone who does a lot of research or even writing in English will find this offer (EUR 99.95) hard to resist, especially since the print edition costs a good 1,600 euros. The texts are simply colossal: the table of contents of the article on Germany alone fills seven screen pages. The content from the Britannica volumes alone offers more than twice as much text as the roughly thousand-euro Brockhaus Enzyklopädie digital (c't 22/02, p.38). The 220,000 thematically sorted web links alone are worth the money. Anyone who chooses the full installation, which occupies 2.4 gigabytes, never has to insert the DVD (or, alternatively, four CD-ROMs) again. This year nobody has to wrestle with the Britannica-typical muddle of encyclopedia articles and many, many yearbooks: apart from the base text of the three encyclopedias, 'only' the two yearbooks 2001 and 2002 are listed separately. Anyone proficient in English may want to take this good opportunity to buy."
  9. Encyclopædia Britannica 2005 DVD : Ultimate reference suite (2005) 0.04
    0.04361504 = product of:
      0.13084511 = sum of:
        0.13084511 = weight(_text_:reference in 1700) [ClassicSimilarity], result of:
          0.13084511 = score(doc=1700,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.6356827 = fieldWeight in 1700, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.078125 = fieldNorm(doc=1700)
      0.33333334 = coord(1/3)
    
    Content
    4 in 1 - Encyclopedia, dictionary, thesaurus, atlas and more. Over 100,000 articles. 17,891 photos, illustrations and maps. 646 videos and audio clips. - 3 reference libraries: (1) Encyclopaedia Britannica library (2) Britannica student library (3) Britannica elementary library. - New: Britannica BrainStormer
  10. Cartopedia : the ultimate world reference atlas (1995) 0.04
    0.04317668 = product of:
      0.12953004 = sum of:
        0.12953004 = weight(_text_:reference in 5254) [ClassicSimilarity], result of:
          0.12953004 = score(doc=5254,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.62929374 = fieldWeight in 5254, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.109375 = fieldNorm(doc=5254)
      0.33333334 = coord(1/3)
    
  11. Dublin Core Metadata Element Set Reference Description (1999) 0.04
    0.04317668 = product of:
      0.12953004 = sum of:
        0.12953004 = weight(_text_:reference in 3468) [ClassicSimilarity], result of:
          0.12953004 = score(doc=3468,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.62929374 = fieldWeight in 3468, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.109375 = fieldNorm(doc=3468)
      0.33333334 = coord(1/3)
    
  12. Electronic Dewey (1993) 0.04
    0.042638287 = product of:
      0.12791486 = sum of:
        0.12791486 = sum of:
          0.073076844 = weight(_text_:database in 1088) [ClassicSimilarity], result of:
            0.073076844 = score(doc=1088,freq=2.0), product of:
              0.20452234 = queryWeight, product of:
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.050593734 = queryNorm
              0.35730496 = fieldWeight in 1088, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.042444 = idf(docFreq=2109, maxDocs=44218)
                0.0625 = fieldNorm(doc=1088)
          0.05483802 = weight(_text_:22 in 1088) [ClassicSimilarity], result of:
            0.05483802 = score(doc=1088,freq=2.0), product of:
              0.17717063 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050593734 = queryNorm
              0.30952093 = fieldWeight in 1088, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1088)
      0.33333334 = coord(1/3)
    
    Abstract
    The CD-ROM version of the 20th DDC ed., featuring advanced online search and windowing techniques, full-text indexing, personal notepad, LC subject headings linked to DDC numbers and a database of all DDC changes
    Footnote
    Review in: Cataloging and classification quarterly 19(1994) no.1, pp.134-137 (M. Carpenter). - A Windows version has since become available: 'Electronic Dewey for Windows'; cf. Knowledge organization 22(1995) no.1, p.17
  13. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.04
    0.04209289 = product of:
      0.063139334 = sum of:
        0.051721074 = weight(_text_:reference in 1182) [ClassicSimilarity], result of:
          0.051721074 = score(doc=1182,freq=10.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.25127566 = fieldWeight in 1182, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1182)
        0.011418257 = product of:
          0.022836514 = sum of:
            0.022836514 = weight(_text_:database in 1182) [ClassicSimilarity], result of:
              0.022836514 = score(doc=1182,freq=2.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.1116578 = fieldWeight in 1182, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1182)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian" or about "Cambridge, Maryland", without hearing about "Cambridge, Massachusetts", Cambridge in the UK or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature. The Natural Language Processing Research Group at the University of Sheffield has developed its open source General Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Management Architecture (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available and more demanding users can draw upon published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding. The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN), but, with the crucial exception of geographic location, the TGN records do not provide any machine readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
    Although the Alexandria Digital Library provides far richer data than the TGN (5.9 vs. 1.3 million names), its added size lowers, rather than increases, the accuracy of most geographic name identification systems for historical documents: most of the extra 4.6 million names cover low frequency entities that rarely occur in any particular corpus. The TGN is sufficiently comprehensive to provide quite enough noise: we find place names that are used over and over (there are almost one hundred Washingtons) and semantically ambiguous (e.g., is Washington a person or a place?). Comprehensive knowledge sources emphasize recall but lower precision. We need data with which to determine which "Tribune" or "John Brown" a particular passage denotes. Secondly and paradoxically, our reference works may not be comprehensive enough. Human actors come and go over time. Organizations appear and vanish. Even places can change their names or vanish. The TGN does associate the obsolete name Siam with the nation of Thailand (tgn,1000142) - but also with towns named Siam in Iowa (tgn,2035651), Tennessee (tgn,2101519), and Ohio (tgn,2662003). Prussia appears but as a general region (tgn,7016786), with no indication when or if it was a sovereign nation. And if places do point to the same object over time, that object may have very different significance over time: in the foundational works of Western historiography, Herodotus reminds us that the great cities of the past may be small today, and the small cities of today great tomorrow (Hdt. 1.5), while Thucydides stresses that we cannot estimate the past significance of a place by its appearance today (Thuc. 1.10). In other words, we need to know the population figures for the various Washingtons in 1870 if we are analyzing documents from 1870. The foundations have been laid for reference works that provide machine actionable information about entities at particular times in history. The Alexandria Digital Library Gazetteer Content Standard represents a sophisticated framework with which to create such resources: places can be associated with temporal information about their foundation (e.g., Washington, DC, founded on 16 July 1790), changes in names for the same location (e.g., Saint Petersburg to Leningrad and back again), population figures at various times and similar historically contingent data. But if we have the software and the data structures, we do not yet have substantial amounts of historical content such as plentiful digital gazetteers, encyclopedias, lexica, grammars and other reference works to illustrate many periods and, even if we do, those resources may not be in a useful form: raw OCR output of a complex lexicon or gazetteer may have so many errors and have captured so little of the underlying structure that the digital resource is useless as a knowledge base. Put another way, human beings are still much better at reading and interpreting the contents of page images than machines. While people, places, and dates are probably the most important core entities, we will find a growing set of objects that we need to identify and track across collections, and each of these categories of objects will require its own knowledge sources. The following section enumerates and briefly describes some existing categories of documents that we need to mine for knowledge. This brief survey focuses on the format of print sources (e.g., highly structured textual "database" vs. unstructured text) to illustrate some of the challenges involved in converting our published knowledge into semantically annotated, machine actionable form.
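
    The pattern-based extraction idea described above ("programs can wait ... for their informational prey to reappear in a standard linguistic pattern") can be illustrated in a few lines. The sketch below is hypothetical and is not the authors' system: the regular expression, the sample sentence, and the output structure are invented here purely to show how a fixed template such as "NAME1 born at NAME2 in DATE" might be turned into machine-readable propositions.

    import re

    # Hypothetical illustration of template-based extraction: wait for statements
    # of the fixed form "<person> born at <place> in <year>" and convert each
    # match into a simple proposition. Regex and output structure are invented.
    BORN_AT = re.compile(
        r"(?P<person>[A-Z][\w.]*(?: [A-Z][\w.]*)+) born at "
        r"(?P<place>[A-Z]\w*) in (?P<year>\d{3,4})"
    )

    def extract_birth_statements(text):
        """Yield one {person, place, year} proposition per template match."""
        for m in BORN_AT.finditer(text):
            yield {"person": m.group("person"),
                   "place": m.group("place"),
                   "year": int(m.group("year"))}

    sample = ("Charles Dickens born at Portsmouth in 1812; "
              "Jane Austen born at Steventon in 1775.")
    print(list(extract_birth_statements(sample)))
    # [{'person': 'Charles Dickens', 'place': 'Portsmouth', 'year': 1812},
    #  {'person': 'Jane Austen', 'place': 'Steventon', 'year': 1775}]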
  14. Kenney, A.R.; McGovern, N.Y.; Martinez, I.T.; Heidig, L.J.: Google meets eBay : what academic librarians can learn from alternative information providers (2003) 0.04
    0.03901048 = product of:
      0.11703143 = sum of:
        0.11703143 = weight(_text_:reference in 1200) [ClassicSimilarity], result of:
          0.11703143 = score(doc=1200,freq=20.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.5685719 = fieldWeight in 1200, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.03125 = fieldNorm(doc=1200)
      0.33333334 = coord(1/3)
    
    Abstract
    In April 2002, the dominant Internet search engine, Google, introduced a beta version of its expert service, Google Answers, with little fanfare. Almost immediately the buzz within the information community focused on implications for reference librarians. Google had already been lauded as the cheaper and faster alternative for finding information, and declining reference statistics and Online Public Access Catalog (OPAC) use in academic libraries had been attributed in part to its popularity. One estimate suggests that the Google search engine handles more questions in a day and a half than all the libraries in the country provide in a year. Indeed, Craig Silverstein, Google's Director of Technology, indicated that the raison d'être for the search engine was to "seem as smart as a reference librarian," even as he acknowledged that this goal was "hundreds of years away". Bill Arms had reached a similar conclusion regarding the more nuanced reference functions in a thought-provoking article in this journal on automating digital libraries. But with the launch of Google Answers, the power of "brute force computing" and simple algorithms could be combined with human intelligence to represent a market-driven alternative to library reference services. Google Answers is part of a much larger trend to provide networked reference assistance. Expert services have sprung up in both the commercial and non-profit sectors. Libraries too have responded to the Web, providing a suite of services through the virtual reference desk (VRD) movement, from email reference to chat reference to collaborative services that span the globe. As the Internet's content continues to grow and deepen - encompassing over 40 million web sites - it has been met by a groundswell of services to find and filter information. These services include an extensive range from free to fee-based, cost-recovery to for-profit, and library providers to other information providers - both new and traditional. As academic libraries look towards the future in a dynamic and competitive information landscape, what implications do these services have for their programs, and what can be learned from them to improve library offerings? This paper presents the results of a modest study conducted by Cornell University Library (CUL) to compare and contrast its digital reference services with those of Google Answers. The study provided an opportunity for librarians to shift their focus from fearing the impact of Google, as usurper of the library's role and diluter of the academic experience, to gaining insights into how Google's approach to service development and delivery has made it so attractive.
  15. Riva, P.; Boeuf, P. le; Zumer, M.: IFLA Library Reference Model : a conceptual model for bibliographic information (2017) 0.04
    0.03739211 = product of:
      0.11217632 = sum of:
        0.11217632 = weight(_text_:reference in 5179) [ClassicSimilarity], result of:
          0.11217632 = score(doc=5179,freq=6.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.5449844 = fieldWeight in 5179, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5179)
      0.33333334 = coord(1/3)
    
    Abstract
    Definition of a conceptual reference model to provide a framework for the analysis of non-administrative metadata relating to library resources. The resulting model definition was approved by the FRBR Review Group (November 2016), and then made available to the Standing Committees of the Sections on Cataloguing and Subject Analysis & Access, as well as to the ISBD Review Group, for comment in December 2016. The final document was approved by the IFLA Committee on Standards (August 2017).
    Object
    IFLA Library Reference Model
  16. Cohen, D.J.: From Babel to knowledge : data mining large digital collections (2006) 0.04
    0.036851868 = product of:
      0.0552778 = sum of:
        0.037008587 = weight(_text_:reference in 1178) [ClassicSimilarity], result of:
          0.037008587 = score(doc=1178,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.17979822 = fieldWeight in 1178, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.03125 = fieldNorm(doc=1178)
        0.018269211 = product of:
          0.036538422 = sum of:
            0.036538422 = weight(_text_:database in 1178) [ClassicSimilarity], result of:
              0.036538422 = score(doc=1178,freq=2.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.17865248 = fieldWeight in 1178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In Jorge Luis Borges's curious short story The Library of Babel, the narrator describes an endless collection of books stored from floor to ceiling in a labyrinth of countless hexagonal rooms. The pages of the library's books seem to contain random sequences of letters and spaces; occasionally a few intelligible words emerge in the sea of paper and ink. Nevertheless, readers diligently, and exasperatingly, scan the shelves for coherent passages. The narrator himself has wandered numerous rooms in search of enlightenment, but with resignation he simply awaits his death and burial - which Borges explains (with signature dark humor) consists of being tossed unceremoniously over the library's banister. Borges's nightmare, of course, is a cursed vision of the research methods of disciplines such as literature, history, and philosophy, where the careful reading of books, one after the other, is supposed to lead inexorably to knowledge and understanding. Computer scientists would approach Borges's library far differently. Employing the information theory that forms the basis for search engines and other computerized techniques for assessing in one fell swoop large masses of documents, they would quickly realize the collection's incoherence through sampling and statistical methods - and wisely start looking for the library's exit. These computational methods, which allow us to find patterns, determine relationships, categorize documents, and extract information from massive corpuses, will form the basis for new tools for research in the humanities and other disciplines in the coming decade. For the past three years I have been experimenting with how to provide such end-user tools - that is, tools that harness the power of vast electronic collections while hiding much of their complicated technical plumbing. In particular, I have made extensive use of the application programming interfaces (APIs) the leading search engines provide for programmers to query their databases directly (from server to server without using their web interfaces). In addition, I have explored how one might extract information from large digital collections, from the well-curated lexicographic database WordNet to the democratic (and poorly curated) online reference work Wikipedia. While processing these digital corpuses is currently an imperfect science, even now useful tools can be created by combining various collections and methods for searching and analyzing them. And more importantly, these nascent services suggest a future in which information can be gleaned from, and sense can be made out of, even imperfect digital libraries of enormous scale. A brief examination of two approaches to data mining large digital collections hints at this future, while also providing some lessons about how to get there.
  17. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.04
    0.035713896 = product of:
      0.10714169 = sum of:
        0.10714169 = product of:
          0.32142505 = sum of:
            0.32142505 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.32142505 = score(doc=230,freq=2.0), product of:
                0.42893425 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050593734 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  18. Bates, M.E.: Quick answers to odd questions (2004) 0.04
    0.03530363 = product of:
      0.052955445 = sum of:
        0.039253537 = weight(_text_:reference in 3071) [ClassicSimilarity], result of:
          0.039253537 = score(doc=3071,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.19070482 = fieldWeight in 3071, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3071)
        0.013701909 = product of:
          0.027403818 = sum of:
            0.027403818 = weight(_text_:database in 3071) [ClassicSimilarity], result of:
              0.027403818 = score(doc=3071,freq=2.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.13398936 = fieldWeight in 3071, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3071)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    "One of the things I enjoyed the most when I was a reference librarian was the wide range of questions my clients sent my way. What was the original title of the first Godzilla movie? (Gojira, released in 1954) Who said 'I'm as pure as the driven slush'? (Tallulah Bankhead) What percentage of adults have gone to a jazz performance in the last year? (11%) I have found that librarians, speech writers and journalists have one thing in common - we all need to find information on all kinds of topics, and we usually need the answers right now. The following are a few of my favorite sites for finding answers to those there-must-be-an-answer-out-there questions. - For the electronic equivalent to the "ready reference" shelf of resources that most librarians keep hidden behind their desks, check out RefDesk . It is particularly good for answering factual questions - Where do I get the new Windows XP Service Pack? Where is the 386 area code? How do I contact my member of Congress? - Another resource for lots of those quick-fact questions is InfoPlease, the publishers of the Information Please almanac .- Right now, it's full of Olympics data, but it also has links to facts and factoids that you would look up in an almanac, atlas, or encyclopedia. - If you want numbers, start with the Statistical Abstract of the US. This source, produced by the U.S. Census Bureau, gives you everything from the divorce rate by state to airline cost indexes going back to 1980. It is many librarians' secret weapon for pulling numbers together quickly. - My favorite question is "how does that work?" Haven't you ever wondered how they get that Olympic torch to continue to burn while it is being carried by runners from one city to the next? Or how solar sails manage to propel a spacecraft? For answers, check out the appropriately-named How Stuff Works. - For questions about movies, my first resource is the Internet Movie Database. It is easy to search, is such a popular site that mistakes are corrected quickly, and is a fun place to catch trailers of both upcoming movies and those dating back to the 30s. - When I need to figure out who said what, I still tend to rely on the print sources such as Bartlett's Familiar Quotations . No, the current edition is not available on the web, but - and this is the librarian in me - I really appreciate the fact that I not only get the attribution but I also see the source of the quote. There are far too many quotes being attributed to a celebrity, but with no indication of the publication in which the quote appeared. Take, for example, the much-cited quote of Margaret Meade, "Never doubt that a small group of thoughtful committed people can change the world; indeed, it's the only thing that ever has!" Then see the page on the Institute for Intercultural Studies site, founded by Meade, and read its statement that it has never been able to verify this alleged quote from Meade. While there are lots of web-based sources of quotes (see QuotationsPage.com and Bartleby, for example), unless the site provides the original source for the quotation, I wouldn't rely on the citation. Of course, if you have a hunch as to the source of a quote, and it was published prior to 1923, head over to Project Gutenberg , which includes the full text of over 12,000 books that are in the public domain. When I needed to confirm a quotation of the Red Queen in "Through the Looking Glass", this is where I started. 
- And if you are stumped as to where to go to find information, instead of Googling it, try the Librarians' Index to the Internet. While it is somewhat US-centric, it is a great directory of web resources."
  19. Sowards, S.W.: ¬A typology for ready reference Web sites in libraries (1996) 0.03
    0.03489203 = product of:
      0.10467609 = sum of:
        0.10467609 = weight(_text_:reference in 944) [ClassicSimilarity], result of:
          0.10467609 = score(doc=944,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.5085462 = fieldWeight in 944, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0625 = fieldNorm(doc=944)
      0.33333334 = coord(1/3)
    
    Abstract
    Many libraries manage Web sites intended to provide their users with online resources suitable for answering reference questions. Most of these sites can be analyzed in terms of their depth, and their organizing and searching features. Composing a typology based on these factors sheds light on the critical design decisions that influence whether users of these sites succeed or fail to find information easily, rapidly and accurately. The same analysis highlights some larger design issues, both for Web sites and for information management at large.
  20. Heller, L.: Ergebnisse der Benutzerumfrage "Literaturverwaltung - Was ich benutze und was ich brauche", TIB/UB Hannover 2011 (2011) 0.03
    0.03489203 = product of:
      0.10467609 = sum of:
        0.10467609 = weight(_text_:reference in 4884) [ClassicSimilarity], result of:
          0.10467609 = score(doc=4884,freq=4.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.5085462 = fieldWeight in 4884, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0625 = fieldNorm(doc=4884)
      0.33333334 = coord(1/3)
    
    Abstract
    Raw data set (in CSV format) of a user survey about usage and needs regarding reference management software (like EndNote, Zotero, Citavi) in Germany, 2011. Participants were mainly college students, librarians, and other users of reference management software.

Languages

  • e 193
  • d 94
  • el 3
  • a 2
  • i 1
  • m 1
  • nl 1

Types

  • a 133
  • i 14
  • m 7
  • r 6
  • n 5
  • s 4
  • x 3
  • b 2
  • p 1