Search (2950 results, page 1 of 148)

  • year_i:[2000 TO 2010}
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.22
    0.21907313 = product of:
      0.36512187 = sum of:
        0.07156433 = product of:
          0.214693 = sum of:
            0.214693 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.214693 = score(doc=562,freq=2.0), product of:
                0.38200375 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04505818 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.214693 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.214693 = score(doc=562,freq=2.0), product of:
            0.38200375 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04505818 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.078864545 = sum of:
          0.042235978 = weight(_text_:data in 562) [ClassicSimilarity], result of:
            0.042235978 = score(doc=562,freq=4.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.29644224 = fieldWeight in 562, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
          0.036628567 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.036628567 = score(doc=562,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
      0.6 = coord(3/5)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
    Source
    Proceedings of the 4th IEEE International Conference on Data Mining (ICDM 2004), 1-4 November 2004, Brighton, UK
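
    The indented tree under each hit is Lucene ClassicSimilarity "explain" output. As a reading aid, here is a minimal sketch of the arithmetic such a tree encodes, with the constants copied from hit 1 (an illustration of the formula only, not the search engine's actual code):

```python
import math

# TF-IDF arithmetic of Lucene's ClassicSimilarity, reconstructed from the
# explain tree of hit 1. All constants are copied from that tree.
def idf(doc_freq: int, max_docs: int) -> float:
    return math.log(max_docs / (doc_freq + 1)) + 1  # 8.478011 for docFreq=24, maxDocs=44218

def term_score(freq: float, doc_freq: int, max_docs: int,
               query_norm: float, field_norm: float) -> float:
    tf = math.sqrt(freq)                        # 1.4142135 = tf(freq=2.0)
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * query_norm        # 0.38200375 = queryWeight
    field_weight = tf * term_idf * field_norm   # 0.56201804 = fieldWeight
    return query_weight * field_weight          # weight(_text_:2f in 562)

score = term_score(freq=2.0, doc_freq=24, max_docs=44218,
                   query_norm=0.04505818, field_norm=0.046875)
print(round(score, 6))  # ~0.214693
```

    The coord(m/n) factor at the bottom of each tree (0.6 = coord(3/5) in hit 1) then scales the sum of the clause scores, down-weighting documents that match only m of the n query clauses.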
  2. Furrie, B.; Data Base Development Department of The Follett Software Company: Understanding MARC Bibliographic : Machine-readable cataloging (2000) 0.15
    0.15336421 = product of:
      0.255607 = sum of:
        0.15033525 = weight(_text_:readable in 6772) [ClassicSimilarity], result of:
          0.15033525 = score(doc=6772,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.5430516 = fieldWeight in 6772, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0625 = fieldNorm(doc=6772)
        0.08536155 = weight(_text_:bibliographic in 6772) [ClassicSimilarity], result of:
          0.08536155 = score(doc=6772,freq=4.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.4866305 = fieldWeight in 6772, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0625 = fieldNorm(doc=6772)
        0.01991023 = product of:
          0.03982046 = sum of:
            0.03982046 = weight(_text_:data in 6772) [ClassicSimilarity], result of:
              0.03982046 = score(doc=6772,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2794884 = fieldWeight in 6772, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6772)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Footnote
    See also: http://lcweb.loc.gov/marc/umb/. - Understanding MARC Bibliographic was a copyrighted work originally published by the Follett Software Co. in 1988 (second edition, 1989; third edition, 1990; fourth edition, 1994; fifth edition, 1998).
  3. Schrodt, R.: Tiefen und Untiefen im wissenschaftlichen Sprachgebrauch (2008) 0.15
    0.15267058 = product of:
      0.38167644 = sum of:
        0.09541911 = product of:
          0.28625733 = sum of:
            0.28625733 = weight(_text_:3a in 140) [ClassicSimilarity], result of:
              0.28625733 = score(doc=140,freq=2.0), product of:
                0.38200375 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04505818 = queryNorm
                0.7493574 = fieldWeight in 140, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=140)
          0.33333334 = coord(1/3)
        0.28625733 = weight(_text_:2f in 140) [ClassicSimilarity], result of:
          0.28625733 = score(doc=140,freq=2.0), product of:
            0.38200375 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04505818 = queryNorm
            0.7493574 = fieldWeight in 140, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=140)
      0.4 = coord(2/5)
    
    Content
    See also: https://studylibde.com/doc/13053640/richard-schrodt. See also: http://www.univie.ac.at/Germanistik/schrodt/vorlesung/wissenschaftssprache.doc.
  4. Maxwell, R.L.: Bibliographic control (2009) 0.15
    0.14847405 = product of:
      0.24745674 = sum of:
        0.11275144 = weight(_text_:readable in 3750) [ClassicSimilarity], result of:
          0.11275144 = score(doc=3750,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.4072887 = fieldWeight in 3750, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.046875 = fieldNorm(doc=3750)
        0.11977262 = weight(_text_:bibliographic in 3750) [ClassicSimilarity], result of:
          0.11977262 = score(doc=3750,freq=14.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.6828017 = fieldWeight in 3750, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=3750)
        0.014932672 = product of:
          0.029865343 = sum of:
            0.029865343 = weight(_text_:data in 3750) [ClassicSimilarity], result of:
              0.029865343 = score(doc=3750,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2096163 = fieldWeight in 3750, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3750)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Bibliographic control is the process of creation, exchange, preservation, and use of data about information resources. Formal bibliographic control has been practiced for millennia, but modern techniques began to be developed and implemented in the nineteenth and twentieth centuries. A series of cataloging codes characterized this period. These codes governed the creation of library catalogs, first in book form, then on cards, and finally in electronic formats, including MAchine-Readable Cataloging (MARC). The period was also characterized by the rise of shared cataloging programs, allowing the development of resource-saving copy cataloging procedures. Such programs were assisted by the development of cataloging networks such as OCLC and RLG. The twentieth century saw progress in the theory of bibliographic control, including the 1961 Paris Principles, culminating with the early twenty-first century Statement of International Cataloguing Principles and IFLA's Functional Requirements for Bibliographic Records (FRBR). Toward the end of the period bibliographic control began to be applied to newly invented electronic media, as "metadata." Trends point toward continued development of collaborative and international approaches to bibliographic control.
  5. El-Sherbini, M.A.: Cataloging and classification : review of the literature 2005-06 (2008) 0.14
    0.14106841 = product of:
      0.23511402 = sum of:
        0.15033525 = weight(_text_:readable in 249) [ClassicSimilarity], result of:
          0.15033525 = score(doc=249,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.5430516 = fieldWeight in 249, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0625 = fieldNorm(doc=249)
        0.060359728 = weight(_text_:bibliographic in 249) [ClassicSimilarity], result of:
          0.060359728 = score(doc=249,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.34409973 = fieldWeight in 249, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0625 = fieldNorm(doc=249)
        0.024419045 = product of:
          0.04883809 = sum of:
            0.04883809 = weight(_text_:22 in 249) [ClassicSimilarity], result of:
              0.04883809 = score(doc=249,freq=2.0), product of:
                0.15778607 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505818 = queryNorm
                0.30952093 = fieldWeight in 249, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=249)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    This paper reviews library literature on cataloging and classification published in 2005-06. It covers pertinent literature in the following areas: the future of cataloging; Functional Requirements for Bibliographic Records (FRBR); metadata and its applications and relation to Machine-Readable Cataloging (MARC); cataloging tools and standards; authority control; and recruitment, training, and the changing role of catalogers.
    Date
    10. 9.2000 17:38:22
  6. Kushwoh, S.S.; Gautam, J.N.; Singh, R.: Migration from CDS/ISIS to KOHA : a case study of data conversion from CCF to MARC 21 (2009) 0.13
    0.1347309 = product of:
      0.2245515 = sum of:
        0.11275144 = weight(_text_:readable in 2279) [ClassicSimilarity], result of:
          0.11275144 = score(doc=2279,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.4072887 = fieldWeight in 2279, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.046875 = fieldNorm(doc=2279)
        0.07840959 = weight(_text_:bibliographic in 2279) [ClassicSimilarity], result of:
          0.07840959 = score(doc=2279,freq=6.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.44699866 = fieldWeight in 2279, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=2279)
        0.03339047 = product of:
          0.06678094 = sum of:
            0.06678094 = weight(_text_:data in 2279) [ClassicSimilarity], result of:
              0.06678094 = score(doc=2279,freq=10.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.46871632 = fieldWeight in 2279, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2279)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Standards are important for quality and interoperability in any system. Bibliographic record creation standards such as MARC 21 (Machine-Readable Cataloging), CCF (Common Communication Format), UNIMARC (Universal MARC) and their local variations are in practice all across the library community. ILMS (Integrated Library Management Systems) use these standards for the design of databases and the creation of bibliographic records. Their use is important for uniformity of the system and of bibliographic data, but problems arise when a library wants to switch over from one system to another that uses a different standard. This paper discusses migration from one record standard to another, the mapping of data, and related issues. The export of data from CDS/ISIS CCF-based records to KOHA MARC 21-based records is discussed as a case study. This methodology, with a few modifications, can be applied to the migration of data in other bibliographic formats too. Freeware tools can be utilized for migration.
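
    The tag-level mapping such a migration needs can be pictured with a small crosswalk table. The sketch below is illustrative only; the CCF and MARC 21 tag pairs are assumed placeholders, not the authors' actual crosswalk:

```python
# Illustrative sketch of a CCF -> MARC 21 tag crosswalk of the kind the paper
# discusses. The tag pairs are hypothetical placeholders, not the authors'
# actual mapping table.
CCF_TO_MARC21 = {
    "200": "245",  # title (assumed CCF tag)
    "300": "100",  # personal author (assumed CCF tag)
    "400": "260",  # publication details (assumed CCF tag)
}

def convert_record(ccf_record: dict) -> dict:
    """Map a flat {tag: value} CCF record onto MARC 21 tags; unmapped tags
    are collected for manual review, since 100% mapping is not possible."""
    marc, unmapped = {}, {}
    for tag, value in ccf_record.items():
        if tag in CCF_TO_MARC21:
            marc[CCF_TO_MARC21[tag]] = value
        else:
            unmapped[tag] = value
    return {"marc": marc, "unmapped": unmapped}

print(convert_record({"200": "Data conversion in libraries", "999": "local note"}))
# {'marc': {'245': ...}, 'unmapped': {'999': ...}}
```

    Collecting unmapped tags separately mirrors the paper's point that full mapping is not achievable and that residual fields need manual handling.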
  7. McCallum, S.H.: Machine Readable Cataloging (MARC): 1975-2007 (2009) 0.13
    0.1347091 = product of:
      0.22451515 = sum of:
        0.11275144 = weight(_text_:readable in 3841) [ClassicSimilarity], result of:
          0.11275144 = score(doc=3841,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.4072887 = fieldWeight in 3841, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.046875 = fieldNorm(doc=3841)
        0.045269795 = weight(_text_:bibliographic in 3841) [ClassicSimilarity], result of:
          0.045269795 = score(doc=3841,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.2580748 = fieldWeight in 3841, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=3841)
        0.06649391 = sum of:
          0.029865343 = weight(_text_:data in 3841) [ClassicSimilarity], result of:
            0.029865343 = score(doc=3841,freq=2.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.2096163 = fieldWeight in 3841, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.046875 = fieldNorm(doc=3841)
          0.036628567 = weight(_text_:22 in 3841) [ClassicSimilarity], result of:
            0.036628567 = score(doc=3841,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.23214069 = fieldWeight in 3841, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3841)
      0.6 = coord(3/5)
    
    Abstract
    This entry describes the development of the MARC Communications format. After a brief overview of the initial 10 years, it describes the succeeding phases of development up to the present. This takes the reader through the expansion of the format for all types of bibliographic data and for multiple character scripts. At the same time a large business community developed that offered products based on the format to the library community. The introduction of the Internet in the 1990s and of Web technology brought new opportunities and challenges, and the format was adapted to this new environment. There has been a great deal of international adoption of the format that has continued into the 2000s. More recently, new syntaxes and models for MARC 21 are being explored.
    Date
    27. 8.2011 14:22:38
  8. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.13
    0.13358676 = product of:
      0.3339669 = sum of:
        0.08349173 = product of:
          0.25047517 = sum of:
            0.25047517 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.25047517 = score(doc=306,freq=2.0), product of:
                0.38200375 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04505818 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
        0.25047517 = weight(_text_:2f in 306) [ClassicSimilarity], result of:
          0.25047517 = score(doc=306,freq=2.0), product of:
            0.38200375 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04505818 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
      0.4 = coord(2/5)
    
    Content
    Cf.: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5386707.
  9. Chandrakar, R.: Mapping CCF to MARC21 : an experimental approach (2001) 0.12
    0.123656236 = product of:
      0.20609371 = sum of:
        0.11275144 = weight(_text_:readable in 5437) [ClassicSimilarity], result of:
          0.11275144 = score(doc=5437,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.4072887 = fieldWeight in 5437, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.046875 = fieldNorm(doc=5437)
        0.07840959 = weight(_text_:bibliographic in 5437) [ClassicSimilarity], result of:
          0.07840959 = score(doc=5437,freq=6.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.44699866 = fieldWeight in 5437, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=5437)
        0.014932672 = product of:
          0.029865343 = sum of:
            0.029865343 = weight(_text_:data in 5437) [ClassicSimilarity], result of:
              0.029865343 = score(doc=5437,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2096163 = fieldWeight in 5437, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5437)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    The purpose of this article is to raise and address a number of issues pertaining to the conversion of the Common Communication Format (CCF) into MARC21. In this era of global resource sharing, the exchange of bibliographic records from one system to another is imperative in today's library communities. Instead of a single standard for creating machine-readable catalogue records, more than 20 standards have emerged and are being used by different institutions. Because of these variations in standards, sharing resources and transferring data from one system to another, among institutions locally and globally, has become a significant problem. Addressing this problem requires keeping in mind that countries such as India and others in southeast Asia use the CCF as their standard for creating bibliographic cataloguing records. This paper describes a way to map bibliographic catalogue records from CCF to MARC21, although 100% mapping is not possible. In addition, the paper describes an experimental approach that enumerates the problems that may occur during the mapping or exchange of records and shows how these problems can be overcome.
  10. Condron, L.; Tittemore, C.P.: Functional Requirements for Bibliographic Records (2004) 0.12
    0.11992097 = product of:
      0.19986828 = sum of:
        0.09395953 = weight(_text_:readable in 5654) [ClassicSimilarity], result of:
          0.09395953 = score(doc=5654,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.33940727 = fieldWeight in 5654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5654)
        0.08435529 = weight(_text_:bibliographic in 5654) [ClassicSimilarity], result of:
          0.08435529 = score(doc=5654,freq=10.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.480894 = fieldWeight in 5654, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5654)
        0.021553457 = product of:
          0.043106914 = sum of:
            0.043106914 = weight(_text_:data in 5654) [ClassicSimilarity], result of:
              0.043106914 = score(doc=5654,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.30255508 = fieldWeight in 5654, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5654)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    This article provides information on World Wide Web resources that help catalogers understand the implications of the document Functional Requirements for Bibliographic Records (FRBR), a report of an International Federation of Library Associations study group completed in September 1997. The Online Computer Library Center Office of Research has carried out a number of experiments to assess FRBR methods for the WorldCat database. The reports help explain the implications of FRBR concepts for a database such as WorldCat or for one's library catalog. The Research Libraries Group (RLG) has also been experimenting with FRBR concepts as part of the RedLightGreen project. This document, Mining the Catalog, includes a section, Delivering the Goods, which describes the group's work with FRBR concepts in a test subset of the RLG Bibliographic Database. The FRBR Display Tool link leads to a download page for the tool. This tool transforms the bibliographic data found in machine-readable cataloguing record files into meaningful displays by grouping the bibliographic data into the Work, Expression, and Manifestation FRBR concepts. By experimenting with the FRBR Display Tool, librarians can see actual displays of library catalog data arranged in the manner described in the publication Displays for Multiple Versions from MARC 21 and FRBR.
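
    The grouping step behind such a display tool can be sketched as follows; the records and keys below are invented for illustration, whereas the real tool derives Work/Expression/Manifestation groups from MARC 21 fields:

```python
from collections import defaultdict

# Toy illustration of FRBR-style grouping: manifestations are clustered under
# a work key (author + title) and an expression key (language). The records
# and key choices are invented; they stand in for values taken from MARC 21.
records = [
    {"author": "Austen, J.", "title": "Pride and prejudice", "lang": "eng", "fmt": "print"},
    {"author": "Austen, J.", "title": "Pride and prejudice", "lang": "eng", "fmt": "ebook"},
    {"author": "Austen, J.", "title": "Pride and prejudice", "lang": "ger", "fmt": "print"},
]

works = defaultdict(lambda: defaultdict(list))
for rec in records:
    work_key = (rec["author"], rec["title"])      # Work
    expr_key = rec["lang"]                        # Expression
    works[work_key][expr_key].append(rec["fmt"])  # Manifestations

for work, expressions in works.items():
    print(work)
    for lang, formats in expressions.items():
        print(" ", lang, "->", formats)
```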
  11. German, L.: Bibliographic utilities (2009) 0.12
    0.11927431 = product of:
      0.29818577 = sum of:
        0.15033525 = weight(_text_:readable in 3858) [ClassicSimilarity], result of:
          0.15033525 = score(doc=3858,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.5430516 = fieldWeight in 3858, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0625 = fieldNorm(doc=3858)
        0.14785053 = weight(_text_:bibliographic in 3858) [ClassicSimilarity], result of:
          0.14785053 = score(doc=3858,freq=12.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.84286875 = fieldWeight in 3858, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0625 = fieldNorm(doc=3858)
      0.4 = coord(2/5)
    
    Abstract
    Bibliographic utilities have been in existence for more than 40 years. From the beginning, they were designed to promote resource sharing among their members. The core of a bibliographic utility is the database of bibliographic records. The structure of the bibliographic record is based upon Machine Readable Cataloging (MARC). Other services have evolved from the utilities' bibliographic database.
  12. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.11
    0.11450293 = product of:
      0.28625733 = sum of:
        0.07156433 = product of:
          0.214693 = sum of:
            0.214693 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
              0.214693 = score(doc=2918,freq=2.0), product of:
                0.38200375 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04505818 = queryNorm
                0.56201804 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
        0.214693 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.214693 = score(doc=2918,freq=2.0), product of:
            0.38200375 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04505818 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
      0.4 = coord(2/5)
    
    Footnote
    Cf.: http://ieeexplore.ieee.org/iel5/4755313/4755314/04755480.pdf?arnumber=4755480.
  13. Salgáné, M.M.: Our electronic era and bibliographic informations : computer-related bibliographic data formats, metadata formats and BDML (2005) 0.11
    0.114236705 = product of:
      0.1903945 = sum of:
        0.075167626 = weight(_text_:readable in 3005) [ClassicSimilarity], result of:
          0.075167626 = score(doc=3005,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.2715258 = fieldWeight in 3005, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.03125 = fieldNorm(doc=3005)
        0.08536155 = weight(_text_:bibliographic in 3005) [ClassicSimilarity], result of:
          0.08536155 = score(doc=3005,freq=16.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.4866305 = fieldWeight in 3005, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03125 = fieldNorm(doc=3005)
        0.029865343 = product of:
          0.059730686 = sum of:
            0.059730686 = weight(_text_:data in 3005) [ClassicSimilarity], result of:
              0.059730686 = score(doc=3005,freq=18.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.4192326 = fieldWeight in 3005, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3005)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Using new communication technologies, libraries must continuously face new questions, possibilities and expectations. This study discusses library-related aspects of our electronic era and how computer-related data formats affect bibliographic data processing, and summarizes the most important results. The first bibliographic formats for the machine-readable exchange of bibliographic and related information between different types of computer systems were created more than 30 years ago. The evolution of information technologies has led to the improvement of computer systems. In addition to the development of computers and media types, the Internet has a great influence on data structure as well. Since the introduction of the MARC bibliographic format, the technology of data exchange between computers and between different computer systems has reached a very sophisticated stage and has contributed to the creation of new standards in this field. Today libraries work with this new infrastructure, which brings many challenges. One of the most significant is moving from a relatively homogeneous bibliographic environment to a diverse one. Despite these challenges, such changes are achievable and necessary to exploit the possibilities of new metadata and technologies like the Internet and XML (Extensible Markup Language). XML is an open standard, a universal language for data on the Web. XML is a roughly six-year-old standard designed for the description and computer-based management of (semi-)structured data and structured texts. XML gives developers the power to deliver structured data from a wide variety of applications, and it is also an ideal format for server-to-server transfer of structured data. XML is not limited to Internet use and is an especially valuable tool in the library field. In fact, XML's main strength - organizing information - makes it perfect for exchanging data between different systems. Tools that work with XML can be used to process XML records without incurring the additional costs associated with one's own software development. In addition, XML is a suitable format for library web services. The Department of Computer-related Graphic Design and Library and Information Sciences of Debrecen University launched the BDML (Bibliographic Description Markup Language) development project in order to standardize bibliographic description with the help of XML.
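
    As a flavor of what an XML-encoded bibliographic description looks like when built programmatically, here is a minimal sketch; the element names are invented placeholders, and the BDML schema itself is not reproduced here:

```python
import xml.etree.ElementTree as ET

# Build a tiny XML bibliographic record. The element names are illustrative
# placeholders only; no real schema (BDML or otherwise) is implied.
record = ET.Element("record")
ET.SubElement(record, "title").text = "Our electronic era and bibliographic informations"
ET.SubElement(record, "creator").text = "Salgáné, M.M."
ET.SubElement(record, "date").text = "2005"

print(ET.tostring(record, encoding="unicode"))
# <record><title>...</title><creator>...</creator><date>2005</date></record>
```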
  14. Slavic, A.; Cordeiro, M.I.: Core requirements for automation of analytico-synthetic classifications (2004) 0.11
    0.112731956 = product of:
      0.18788658 = sum of:
        0.11275144 = weight(_text_:readable in 2651) [ClassicSimilarity], result of:
          0.11275144 = score(doc=2651,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.4072887 = fieldWeight in 2651, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.046875 = fieldNorm(doc=2651)
        0.045269795 = weight(_text_:bibliographic in 2651) [ClassicSimilarity], result of:
          0.045269795 = score(doc=2651,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.2580748 = fieldWeight in 2651, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=2651)
        0.029865343 = product of:
          0.059730686 = sum of:
            0.059730686 = weight(_text_:data in 2651) [ClassicSimilarity], result of:
              0.059730686 = score(doc=2651,freq=8.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.4192326 = fieldWeight in 2651, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2651)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    The paper analyses the importance of data presentation and modelling and its role in improving the management, use and exchange of analytico-synthetic classifications in automated systems. Inefficiencies in this respect hinder the automation of classification systems that offer the possibility of building compound index/search terms. The lack of machine-readable data expressing the semantics and structure of a classification vocabulary has negative effects on information management and retrieval, thus restricting the potential of both automated systems and the classifications themselves. The authors analysed the data representation structure of three general analytico-synthetic classification systems (BC2 - Bliss Bibliographic Classification; BSO - Broad System of Ordering; UDC - Universal Decimal Classification) and put forward some core requirements for classification data representation.
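
    One way to picture the machine-readable classification data the authors call for is a structured entry that keeps a compound notation together with its component facets. This is only a sketch; the field names and example notation are assumptions, not the exchange format of BC2, BSO, or UDC:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Sketch of a machine-readable entry for an analytico-synthetic classification.
# Field names are illustrative assumptions; no real exchange schema is implied.
@dataclass
class ClassEntry:
    notation: str                                    # e.g. a UDC-style compound number
    caption: str
    facets: List[str] = field(default_factory=list)  # component notations
    broader: Optional[str] = None                    # link to the broader class

entry = ClassEntry(
    notation="004.4:025.4",          # compound built from two component classes
    caption="Software for classification systems",
    facets=["004.4", "025.4"],
    broader="004",
)
print(entry)
```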
  15. Taniguchi, S.: ¬A system for supporting evidence recording in bibliographic records (2006) 0.11
    0.110513195 = product of:
      0.18418865 = sum of:
        0.09395953 = weight(_text_:readable in 282) [ClassicSimilarity], result of:
          0.09395953 = score(doc=282,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.33940727 = fieldWeight in 282, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=282)
        0.06534132 = weight(_text_:bibliographic in 282) [ClassicSimilarity], result of:
          0.06534132 = score(doc=282,freq=6.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.3724989 = fieldWeight in 282, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=282)
        0.024887787 = product of:
          0.049775574 = sum of:
            0.049775574 = weight(_text_:data in 282) [ClassicSimilarity], result of:
              0.049775574 = score(doc=282,freq=8.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.34936053 = fieldWeight in 282, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=282)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Recording evidence for data values, in addition to the values themselves, in bibliographic records and descriptive metadata has been proposed in a previous study. Recorded evidence indicates why and how data values are recorded for elements. As a continuation of that study, this article first proposes a scenario in which a cataloger and a system interact with each other in recording evidence in bibliographic records for books, with the aim of minimizing the cost and effort of recording evidence. Second, it reports on prototype system development in accordance with the scenario. The system (1) searches for a string, corresponding to the data value entered by a cataloger or extracted from the Machine-Readable Cataloging (MARC) record, within the scanned and optical character recognition (OCR)-converted title page and verso of the title page of an item being cataloged; (2) identifies the place where the string appears within the source of information; (3) identifies the procedure used to form the value entered or recorded; and finally (4) displays the place and procedure identified for the data value as its candidate evidence. Third, this study reports on an experiment conducted to examine the system's performance. The results of the experiment show the usefulness of the system and the validity of the proposed scenario.
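
    Steps (1) and (2), locating a cataloged value inside the OCR text and reporting where it appears, reduce to something like the following simplification; the OCR text and value are invented inputs, not the prototype's actual code:

```python
# Rough sketch of steps (1)-(2): find a cataloged data value inside the OCR
# text of a title page and report its position as candidate evidence.
ocr_title_page = "UNDERSTANDING MARC BIBLIOGRAPHIC\nMachine-Readable Cataloging\nFollett Software"

def find_evidence(value: str, source_text: str):
    pos = source_text.lower().find(value.lower())  # tolerate case differences
    if pos == -1:
        return None                                # no candidate evidence found
    line_no = source_text[:pos].count("\n") + 1
    return {"value": value, "line": line_no, "offset": pos}

print(find_evidence("Machine-Readable Cataloging", ocr_title_page))
# {'value': 'Machine-Readable Cataloging', 'line': 2, 'offset': 33}
```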
  16. Wang, J.: Automatic thesaurus development : term extraction from title metadata (2006) 0.10
    0.09585264 = product of:
      0.1597544 = sum of:
        0.09395953 = weight(_text_:readable in 5063) [ClassicSimilarity], result of:
          0.09395953 = score(doc=5063,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.33940727 = fieldWeight in 5063, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5063)
        0.053350966 = weight(_text_:bibliographic in 5063) [ClassicSimilarity], result of:
          0.053350966 = score(doc=5063,freq=4.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.30414405 = fieldWeight in 5063, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5063)
        0.012443894 = product of:
          0.024887787 = sum of:
            0.024887787 = weight(_text_:data in 5063) [ClassicSimilarity], result of:
              0.024887787 = score(doc=5063,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.17468026 = fieldWeight in 5063, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5063)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    The application of thesauri in networked environments is seriously hampered by the challenges of introducing new concepts and terminology into the formal controlled vocabulary, which is critical for enhancing its retrieval capability. The author describes an automated process of adding new terms to thesauri as entry vocabulary by analyzing the association between words/phrases extracted from bibliographic titles and subject descriptors in the metadata record (subject descriptors are terms assigned from controlled vocabularies of thesauri to describe the subjects of the objects [e.g., books, articles] represented by the metadata records). The investigated approach uses a corpus of metadata for scientific and technical (S&T) publications in which the titles contain substantive words for key topics. The three steps of the method are (a) extracting words and phrases from the title field of the metadata; (b) applying a method to identify and select the specific and meaningful keywords based on the associated controlled vocabulary terms from the thesaurus used to catalog the objects; and (c) inserting selected keywords into the thesaurus as new terms (most of them are in hierarchical relationships with the existing concepts), thereby updating the thesaurus with new terminology that is being used in the literature. The effectiveness of the method was demonstrated by an experiment with the Chinese Classification Thesaurus (CCT) and bibliographic data in China Machine-Readable Cataloging Record (MARC) format (CNMARC) provided by Peking University Library. This approach is equally effective in large-scale collections and in other languages.
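
    The three steps reduce to a small association-counting loop. The toy corpus and threshold below are invented for illustration; this is not the author's actual implementation:

```python
from collections import Counter

# Toy version of the three-step method: (a) extract title words, (b) associate
# them with the record's subject descriptors, (c) promote frequent associations
# to candidate entry terms. Corpus, stopword list, and threshold are invented.
corpus = [
    {"title": "deep learning for image retrieval", "descriptors": ["information retrieval"]},
    {"title": "deep learning in text retrieval systems", "descriptors": ["information retrieval"]},
]
STOPWORDS = {"for", "in", "the", "of", "a"}

assoc = Counter()
for rec in corpus:
    words = [w for w in rec["title"].split() if w not in STOPWORDS]
    for descriptor in rec["descriptors"]:
        for w in words:
            assoc[(w, descriptor)] += 1

# (c) keep pairs seen at least twice as candidate entry vocabulary
new_terms = [(w, d) for (w, d), n in assoc.items() if n >= 2]
print(new_terms)  # e.g. [('deep', 'information retrieval'), ('learning', ...), ...]
```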
  17. Donsbach, W.: Wahrheit in den Medien : über den Sinn eines methodischen Objektivitätsbegriffes (2001) 0.10
    0.09541912 = product of:
      0.23854779 = sum of:
        0.059636947 = product of:
          0.17891084 = sum of:
            0.17891084 = weight(_text_:3a in 5895) [ClassicSimilarity], result of:
              0.17891084 = score(doc=5895,freq=2.0), product of:
                0.38200375 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04505818 = queryNorm
                0.46834838 = fieldWeight in 5895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5895)
          0.33333334 = coord(1/3)
        0.17891084 = weight(_text_:2f in 5895) [ClassicSimilarity], result of:
          0.17891084 = score(doc=5895,freq=2.0), product of:
            0.38200375 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04505818 = queryNorm
            0.46834838 = fieldWeight in 5895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5895)
      0.4 = coord(2/5)
    
    Source
    Politische Meinung. 381(2001) no.1, pp.65-74 [https://www.dgfe.de/fileadmin/OrdnerRedakteure/Sektionen/Sek02_AEW/KWF/Publikationen_Reihe_1989-2003/Band_17/Bd_17_1994_355-406_A.pdf]
  18. Carvalho, J.: ¬An XML representation of the UNIMARC manual : a working prototype (2005) 0.09
    0.08647696 = product of:
      0.14412826 = sum of:
        0.09395953 = weight(_text_:readable in 4355) [ClassicSimilarity], result of:
          0.09395953 = score(doc=4355,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.33940727 = fieldWeight in 4355, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4355)
        0.03772483 = weight(_text_:bibliographic in 4355) [ClassicSimilarity], result of:
          0.03772483 = score(doc=4355,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.21506234 = fieldWeight in 4355, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4355)
        0.012443894 = product of:
          0.024887787 = sum of:
            0.024887787 = weight(_text_:data in 4355) [ClassicSimilarity], result of:
              0.024887787 = score(doc=4355,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.17468026 = fieldWeight in 4355, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4355)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    The UNIMARC manual defines a standard for the formal representation of bibliographic information. For that purpose the UNIMARC manual contains different types of information: structural rules defining that records are composed of a leader, a set of control fields and a set of data fields, with certain syntactic characteristics; content rules defining required fields and acceptable values for various components of the record; and, finally, examples, explanatory notes, and cross references to other parts of the manual. Much of this information must find its way into computer systems, where it will be used to validate records, produce indexes, adequately format records for display and, in some cases, provide human-readable help. Providing the UNIMARC manual in XML greatly simplifies the full implementation of the format in computer systems. Our goal was to produce a formal representation of the UNIMARC format, so that the standard can be incorporated into software systems in a transparent way. The outcome is an XML representation of the UNIMARC manual, which can be processed automatically by applications that need to enforce the format rules, provide help information, or supply vocabularies. We developed a schema for the UNIMARC manual and a set of software tools that demonstrate its usage.
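
    A minimal sketch of how an XML-encoded manual can drive record validation follows; the XML snippet and attribute names are invented placeholders, not the authors' actual UNIMARC schema:

```python
import xml.etree.ElementTree as ET
from typing import List

# Sketch: field definitions read from an XML-encoded format manual drive a
# simple record check. Element and attribute names are invented placeholders.
manual_xml = """<manual>
  <field tag="200" mandatory="true" name="Title and Statement of Responsibility"/>
  <field tag="700" mandatory="false" name="Personal Name - Primary Responsibility"/>
</manual>"""

rules = {f.get("tag"): f.attrib for f in ET.fromstring(manual_xml).iter("field")}

def validate(record_tags: List[str]) -> List[str]:
    errors = [f"missing mandatory field {tag}"
              for tag, rule in rules.items()
              if rule["mandatory"] == "true" and tag not in record_tags]
    errors += [f"unknown field {tag}" for tag in record_tags if tag not in rules]
    return errors

print(validate(["700"]))  # ['missing mandatory field 200']
```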
  19. Taniguchi, S.: Recording evidence in bibliographic records and descriptive metadata (2005) 0.08
    0.080129206 = product of:
      0.20032302 = sum of:
        0.09053959 = weight(_text_:bibliographic in 3565) [ClassicSimilarity], result of:
          0.09053959 = score(doc=3565,freq=8.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.5161496 = fieldWeight in 3565, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=3565)
        0.109783426 = sum of:
          0.07315486 = weight(_text_:data in 3565) [ClassicSimilarity], result of:
            0.07315486 = score(doc=3565,freq=12.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.513453 = fieldWeight in 3565, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.046875 = fieldNorm(doc=3565)
          0.036628567 = weight(_text_:22 in 3565) [ClassicSimilarity], result of:
            0.036628567 = score(doc=3565,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.23214069 = fieldWeight in 3565, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3565)
      0.4 = coord(2/5)
    
    Abstract
    In this article, recording evidence for data values, in addition to the values themselves, in bibliographic records and descriptive metadata is proposed, with the aim of improving the expressiveness and reliability of those records and metadata. Recorded evidence indicates why and how data values are recorded for elements. Recording the history of changes in data values is also proposed, with the aim of reinforcing recorded evidence. First, the evidence that can be recorded is categorized into classes: identifiers of rules or tasks, action descriptions of them, and their input and output data. Dates of recording values and evidence are an additional class. Then the relative usefulness of the evidence classes, and of the levels (i.e., the record, data element, or data value level) at which an individual evidence class is applied, is examined. Second, examples that can be viewed as recorded evidence in existing bibliographic records and current cataloging rules are shown. Third, some examples of bibliographic records and descriptive metadata with notes of evidence are demonstrated. Fourth, ways of using recorded evidence are addressed.
    Date
    18. 6.2005 13:16:22
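
    The evidence classes enumerated in this abstract suggest a record structure along the following lines; this is a sketch, and the field names are assumptions rather than the article's schema:

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of evidence attached to one data value, following the classes the
# article lists: rule/task identifiers, action descriptions, input/output
# data, plus recording dates. Field names are illustrative assumptions.
@dataclass
class Evidence:
    rule_id: Optional[str] = None      # identifier of the rule or task applied
    action: Optional[str] = None       # description of what was done
    input_data: Optional[str] = None   # e.g. string found on the title page
    output_data: Optional[str] = None  # the value as recorded
    recorded_on: Optional[str] = None  # date of recording

title_evidence = Evidence(
    rule_id="AACR2 1.1B1",             # example rule identifier
    action="transcribed from title page",
    input_data="UNDERSTANDING MARC BIBLIOGRAPHIC",
    output_data="Understanding MARC bibliographic",
    recorded_on="2005-06-18",
)
print(title_evidence)
```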
  20. Anderson, J.D.; Perez-Carballo, J.: Information retrieval design : principles and options for information description, organization, display, and access in information retrieval databases, digital libraries, catalogs, and indexes (2005) 0.08
    0.07795817 = product of:
      0.12993027 = sum of:
        0.06643943 = weight(_text_:readable in 1833) [ClassicSimilarity], result of:
          0.06643943 = score(doc=1833,freq=4.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.23999718 = fieldWeight in 1833, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1833)
        0.026675483 = weight(_text_:bibliographic in 1833) [ClassicSimilarity], result of:
          0.026675483 = score(doc=1833,freq=4.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.15207203 = fieldWeight in 1833, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1833)
        0.03681536 = sum of:
          0.021553457 = weight(_text_:data in 1833) [ClassicSimilarity], result of:
            0.021553457 = score(doc=1833,freq=6.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.15127754 = fieldWeight in 1833, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.01953125 = fieldNorm(doc=1833)
          0.015261904 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
            0.015261904 = score(doc=1833,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.09672529 = fieldWeight in 1833, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=1833)
      0.6 = coord(3/5)
    
    Content
    Contents: Chapters 2 to 5: Scopes, Domains, and Display Media (pp. 47-102); Chapters 6 to 8: Documents, Analysis, and Indexing (pp. 103-176); Chapters 9 to 10: Exhaustivity and Specificity (pp. 177-196); Chapters 11 to 13: Displayed/Nondisplayed Indexes, Syntax, and Vocabulary Management (pp. 197-364); Chapters 14 to 16: Surrogation, Locators, and Surrogate Displays (pp. 365-390); Chapters 17 and 18: Arrangement and Size of Displayed Indexes (pp. 391-446); Chapters 19 to 21: Search Interface, Record Format, and Full-Text Display (pp. 447-536); Chapter 22: Implementation and Evaluation (pp. 537-541)
    Footnote
    Rev. in: JASIST 57(2006) no.10, pp.1412-1413 (R.W. White): "Information Retrieval Design is a textbook that aims to foster the intelligent user-centered design of databases for Information Retrieval (IR). The book outlines a comprehensive set of 20 factors, chosen based on prior research and the authors' experiences, that need to be considered during the design process. The authors provide designers with information on those factors to help optimize decision making. The book does not cover user-needs assessment, implementation of IR databases, or retrieval system testing and evaluation. Most textbooks in IR do not offer a substantive walkthrough of the design factors that need to be considered when developing IR databases. Instead, they focus on issues such as the implementation of data structures, the explanation of search algorithms, and the role of human-machine interaction in the search process. The book touches on all three, but its focus is on designing databases that can be searched effectively, not the tools to search them. This is an important distinction: despite its title, this book does not describe how to build retrieval systems. Professor Anderson utilizes his wealth of experience in cataloging and classification to bring a unique perspective on IR database design that may be useful for novices, for developers seeking to make sense of the design process, and for students as a text to supplement classroom tuition. The foreword and preface, by Jessica Milstead and James Anderson, respectively, are engaging and worthwhile reading. It is astounding that it has taken some 20 years for anyone to continue the work of Milstead and write as extensively as Anderson does about such an important issue as IR database design. The remainder of the book is divided into two parts: Introduction and Background Issues, and Design Decisions. Part 1 is a reasonable introduction and includes a glossary of the terminology that the authors use in the book. It is very helpful to have these definitions early on, but the subject descriptors in the right margin are distracting and do not serve their purpose as access points to the text. The terminology is useful to have, as the authors' definitions of concepts do not fit exactly with what is traditionally accepted in IR. For example, they use the term "message" to refer to what would normally be called "document" or "information object," and do not do a good job of distinguishing between "messages" and "documentary units." Part 2 describes components and attributes of IR databases to help designers make design choices. The book provides them with information about the potential ramifications of their decisions and advocates a user-oriented approach to making them. Chapters are arranged in a seemingly sensible order based around these factors, and the authors remind us of the importance of integrating them. The authors are skilled at selecting the important factors in the development of seemingly complex entities such as IR databases; however, the integration of these factors, or the interaction between them, is not handled as well as it perhaps should be. Factors are presented in the order in which the authors feel they should be addressed, but there is no chapter describing how the factors interact. The authors miss an opportunity at the beginning of Part 2, where they could have used a figure to illustrate the interactions between the 20 factors they list in a way that is not possible with the linear structure of the book."
    LCSH
    Machine-readable bibliographic data
    Subject
    Machine-readable bibliographic data

Types

  • a 2437
  • m 312
  • el 226
  • s 122
  • b 26
  • p 22
  • x 22
  • i 12
  • n 8
  • r 8
