Search (1864 results, page 1 of 94)

  • theme_ss:"Formalerschließung"
  1. Busch, U.; Zillmann, H.: Kosten-Nutzen-Relation nationaler Katalogisierungsstandards (1995) 0.08
    0.076954 = product of:
      0.23086199 = sum of:
        0.010113529 = weight(_text_:a in 1380) [ClassicSimilarity], result of:
          0.010113529 = score(doc=1380,freq=2.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.20383182 = fieldWeight in 1380, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=1380)
        0.22074847 = weight(_text_:68 in 1380) [ClassicSimilarity], result of:
          0.22074847 = score(doc=1380,freq=2.0), product of:
            0.23180789 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.043031227 = queryNorm
            0.9522906 = fieldWeight in 1380, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.125 = fieldNorm(doc=1380)
      0.33333334 = coord(2/6)
    
    Source
    Mitteilungsblatt der Bibliotheken in Niedersachsen und Sachsen-Anhalt. 1995, H.97/98, S.65-68
    Type
    a
  2. Batten, W.E.: Document description and representation (1973) 0.08
    0.076954 = product of:
      0.23086199 = sum of:
        0.010113529 = weight(_text_:a in 256) [ClassicSimilarity], result of:
          0.010113529 = score(doc=256,freq=2.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.20383182 = fieldWeight in 256, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=256)
        0.22074847 = weight(_text_:68 in 256) [ClassicSimilarity], result of:
          0.22074847 = score(doc=256,freq=2.0), product of:
            0.23180789 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.043031227 = queryNorm
            0.9522906 = fieldWeight in 256, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.125 = fieldNorm(doc=256)
      0.33333334 = coord(2/6)
    
    Source
    Annual review of information science and technology. 8(1973), S.43-68
    Type
    a
  3. Lee, E.: Cataloguing (and reference) at the crossroads (1996) 0.07
    0.07042307 = product of:
      0.14084613 = sum of:
        0.0071513453 = weight(_text_:a in 499) [ClassicSimilarity], result of:
          0.0071513453 = score(doc=499,freq=4.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.14413087 = fieldWeight in 499, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=499)
        0.110374235 = weight(_text_:68 in 499) [ClassicSimilarity], result of:
          0.110374235 = score(doc=499,freq=2.0), product of:
            0.23180789 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.043031227 = queryNorm
            0.4761453 = fieldWeight in 499, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.0625 = fieldNorm(doc=499)
        0.023320548 = product of:
          0.046641096 = sum of:
            0.046641096 = weight(_text_:22 in 499) [ClassicSimilarity], result of:
              0.046641096 = score(doc=499,freq=2.0), product of:
                0.15068802 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043031227 = queryNorm
                0.30952093 = fieldWeight in 499, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=499)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Abstract
    Alerts librarians to directions in artificial intelligence research relevant to information retrieval, which will change current technology and user expectations and, consequently, the requirements for data provision and access at the base level. Predicts a reevaluation of priorities for using the expertise of cataloguers (and reference librarians) and of cataloguing methodologies. Debates the future of cataloguing, arguing for the need to monitor developments in adjacent research areas and to plan with these in mind.
    Source
    Cataloguing Australia. 22(1996) nos.3/4, S.68-75
    Type
    a
  4. Manning, R.W.: ¬The Anglo-American Cataloguing Rules and their future (1999) 0.07
    0.067334756 = product of:
      0.20200425 = sum of:
        0.008849338 = weight(_text_:a in 6809) [ClassicSimilarity], result of:
          0.008849338 = score(doc=6809,freq=2.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.17835285 = fieldWeight in 6809, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=6809)
        0.19315492 = weight(_text_:68 in 6809) [ClassicSimilarity], result of:
          0.19315492 = score(doc=6809,freq=2.0), product of:
            0.23180789 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.043031227 = queryNorm
            0.8332543 = fieldWeight in 6809, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.109375 = fieldNorm(doc=6809)
      0.33333334 = coord(2/6)
    
    Source
    International cataloguing and bibliographic control. 28(1999) no.3, S.68-71
    Type
    a
  5. Hsieh-Yee, I.: Cataloging and metadata education in North American LIS programs (2004) 0.05
    0.045313142 = product of:
      0.090626284 = sum of:
        0.007067044 = weight(_text_:a in 138) [ClassicSimilarity], result of:
          0.007067044 = score(doc=138,freq=10.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.14243183 = fieldWeight in 138, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=138)
        0.0689839 = weight(_text_:68 in 138) [ClassicSimilarity], result of:
          0.0689839 = score(doc=138,freq=2.0), product of:
            0.23180789 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.043031227 = queryNorm
            0.29759082 = fieldWeight in 138, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.0390625 = fieldNorm(doc=138)
        0.014575343 = product of:
          0.029150685 = sum of:
            0.029150685 = weight(_text_:22 in 138) [ClassicSimilarity], result of:
              0.029150685 = score(doc=138,freq=2.0), product of:
                0.15068802 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043031227 = queryNorm
                0.19345059 = fieldWeight in 138, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=138)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Abstract
    This paper presents findings of a survey on the state of cataloging and metadata education in ALA-accredited library and information science programs in North America. The survey was conducted in response to Action Item 5.1 of the "Bibliographic Control of Web Resources: A Library of Congress Action Plan," which focuses on providing metadata education to new LIS professionals. The study found LIS programs increased their reliance on introductory courses to cover cataloging and metadata, but fewer programs than before had a cataloging course requirement. The knowledge of cataloging delivered in introductory courses was basic, and the coverage of metadata was limited to an overview. Cataloging courses showed similarity in coverage and practice and focused on print materials. Few cataloging educators provided exercises in metadata record creation using non-AACR standards. Advanced cataloging courses provided in-depth coverage of subject cataloging and the cataloging of nonbook resources, but offered very limited coverage of metadata. Few programs offered full courses on metadata, and even fewer offered advanced metadata courses. Metadata topics were well integrated into LIS curricula, but coverage of metadata courses varied from program to program, depending on the interests of instructors. Educators were forward-looking and agreed on the inclusion of specific knowledge and skills in metadata instruction. A series of actions was proposed to assist educators in providing students with competencies in cataloging and metadata.
    Date
    10. 9.2000 17:38:22
    Source
    Library resources and technical services. 48(2004) no.1, S.59-68
    Type
    a
  6. VanAvery, A.R.: Recat vs. Recon of serials : a problem for shared cataloging (1990) 0.04
    0.04016259 = product of:
      0.120487764 = sum of:
        0.010113529 = weight(_text_:a in 473) [ClassicSimilarity], result of:
          0.010113529 = score(doc=473,freq=8.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.20383182 = fieldWeight in 473, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=473)
        0.110374235 = weight(_text_:68 in 473) [ClassicSimilarity], result of:
          0.110374235 = score(doc=473,freq=2.0), product of:
            0.23180789 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.043031227 = queryNorm
            0.4761453 = fieldWeight in 473, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.0625 = fieldNorm(doc=473)
      0.33333334 = coord(2/6)
    
    Abstract
    Retrospective conversion of serials cataloging is discussed and compared with professional recataloging. A study of 357 recataloged titles illustrates problems with serials records available on the national bibliographic utilities. During retrospective conversions, libraries choose those records which best suit their needs, but this process does not enhance the utility databases. A possible technological solution is proposed.
    Source
    Cataloging and classification quarterly. 10(1990) no.4, S.51-68
    Type
    a
  7. Hopkins, J.: USMARC as metadata shell (1999) 0.04
    0.039175194 = product of:
      0.11752558 = sum of:
        0.0071513453 = weight(_text_:a in 933) [ClassicSimilarity], result of:
          0.0071513453 = score(doc=933,freq=4.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.14413087 = fieldWeight in 933, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=933)
        0.110374235 = weight(_text_:68 in 933) [ClassicSimilarity], result of:
          0.110374235 = score(doc=933,freq=2.0), product of:
            0.23180789 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.043031227 = queryNorm
            0.4761453 = fieldWeight in 933, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.0625 = fieldNorm(doc=933)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper introduces the two concepts of Content and Coding, which together define Metadata. The encoding scheme used to hold the data content is referred to as a shell. One such shell is the MARC format. In this paper I describe the MARC format and its application to Internet resources, primarily through the OCLC-sponsored Intercat Project.
    Source
    Journal of Internet cataloging. 2(1999) no.1, S.55-68
    Type
    a
  8. Darling, K.; Allen, A.: Using the OCLC CJK350 at the University of Oregon Library (1988) 0.04
    0.039175194 = product of:
      0.11752558 = sum of:
        0.0071513453 = weight(_text_:a in 427) [ClassicSimilarity], result of:
          0.0071513453 = score(doc=427,freq=4.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.14413087 = fieldWeight in 427, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=427)
        0.110374235 = weight(_text_:68 in 427) [ClassicSimilarity], result of:
          0.110374235 = score(doc=427,freq=2.0), product of:
            0.23180789 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.043031227 = queryNorm
            0.4761453 = fieldWeight in 427, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.0625 = fieldNorm(doc=427)
      0.33333334 = coord(2/6)
    
    Source
    Cataloging and classification quarterly. 9(1988) no.1, S.59-68
    Type
    a
  9. Heller, J.R.: Original cataloging at the New York Chiropractic College Media Resources Library (1990) 0.04
    0.039175194 = product of:
      0.11752558 = sum of:
        0.0071513453 = weight(_text_:a in 459) [ClassicSimilarity], result of:
          0.0071513453 = score(doc=459,freq=4.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.14413087 = fieldWeight in 459, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=459)
        0.110374235 = weight(_text_:68 in 459) [ClassicSimilarity], result of:
          0.110374235 = score(doc=459,freq=2.0), product of:
            0.23180789 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.043031227 = queryNorm
            0.4761453 = fieldWeight in 459, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.0625 = fieldNorm(doc=459)
      0.33333334 = coord(2/6)
    
    Abstract
    This article is about the conceptual framework of the original cataloging process that is used to catalog the audiovisual materials at the New York Chiropractic College Media Resources Library. The cataloging norms serve as the conceptual underpinnings to the Cataloging Policy Manual of the New York Chiropractic College Media Resources Library. The paper describes the various cataloging standards and policies that are used, as well as the use and assignment of the National Library of Medicine classification to ensure access to the audiovisual collection.
    Source
    Cataloging and classification quarterly. 10(1990) no.3, S.59-68
    Type
    a
  10. Gömpel, R.; Hengel, C.; Kunz, M.; Münnich, M.; Solberg, S.; Werner, C.: 68. IFLA General Conference in Glasgow : Veranstaltungen der Division IV Bibliographic Control (2002) 0.04
    0.038477 = product of:
      0.115430996 = sum of:
        0.0050567645 = weight(_text_:a in 1198) [ClassicSimilarity], result of:
          0.0050567645 = score(doc=1198,freq=2.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.10191591 = fieldWeight in 1198, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=1198)
        0.110374235 = weight(_text_:68 in 1198) [ClassicSimilarity], result of:
          0.110374235 = score(doc=1198,freq=2.0), product of:
            0.23180789 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.043031227 = queryNorm
            0.4761453 = fieldWeight in 1198, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.0625 = fieldNorm(doc=1198)
      0.33333334 = coord(2/6)
    
    Type
    a
  11. Phillips, R.F.: ¬The Spanish Comedias Project at the University of Colorado : a case for collection-level cataloguing (1993) 0.04
    0.035805214 = product of:
      0.10741564 = sum of:
        0.010838181 = weight(_text_:a in 562) [ClassicSimilarity], result of:
          0.010838181 = score(doc=562,freq=12.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.21843673 = fieldWeight in 562, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=562)
        0.09657746 = weight(_text_:68 in 562) [ClassicSimilarity], result of:
          0.09657746 = score(doc=562,freq=2.0), product of:
            0.23180789 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.043031227 = queryNorm
            0.41662714 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.0546875 = fieldNorm(doc=562)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper reviews a project at the University of Colorado at Boulder Libraries utilizing the AMC format to upgrade the accessibility of a collection of Spanish plays cataloged in 1956 as an unanalyzed set. The project used the original finding aid as a tool to create records providing access at box level to a collection of 3700 plays kept in 145 boxes. Results show 58 percent of authors traced in the project are unique to Colorado's on-line holdings, while another 12 percent are represented by just one other entry. Therefore, an enrichment of the coverage of Spanish literature at CU has been accomplished.
    Source
    Cataloging and classification quarterly. 17(1993) nos.1/2, S.47-68
    Type
    a
  12. Neuböck, I.: 2015 - Das erste Jahr der RDA : ein erster Statusbericht (2015) 0.03
    0.034278296 = product of:
      0.10283489 = sum of:
        0.0062574274 = weight(_text_:a in 1830) [ClassicSimilarity], result of:
          0.0062574274 = score(doc=1830,freq=4.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.12611452 = fieldWeight in 1830, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1830)
        0.09657746 = weight(_text_:68 in 1830) [ClassicSimilarity], result of:
          0.09657746 = score(doc=1830,freq=2.0), product of:
            0.23180789 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.043031227 = queryNorm
            0.41662714 = fieldWeight in 1830, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1830)
      0.33333334 = coord(2/6)
    
    Location
    A
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 68(2015) H.1, S.131-134
    Type
    a
  13. Kocher, M.; Savoy, J.: ¬A simple and efficient algorithm for authorship verification (2017) 0.03
    0.030938296 = product of:
      0.092814885 = sum of:
        0.010034206 = weight(_text_:a in 3330) [ClassicSimilarity], result of:
          0.010034206 = score(doc=3330,freq=14.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.20223314 = fieldWeight in 3330, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3330)
        0.08278068 = weight(_text_:68 in 3330) [ClassicSimilarity], result of:
          0.08278068 = score(doc=3330,freq=2.0), product of:
            0.23180789 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.043031227 = queryNorm
            0.35710898 = fieldWeight in 3330, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.046875 = fieldNorm(doc=3330)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper describes and evaluates an unsupervised and effective authorship verification model called Spatium-L1. As features, we suggest using the 200 most frequent terms of the disputed text (isolated words and punctuation symbols). Applying a simple distance measure and a set of impostors, we can determine whether or not the disputed text was written by the proposed author. Moreover, based on a simple rule we can define when there is enough evidence to propose an answer or when the attribution scheme is unable to make a decision with a high degree of certainty. Evaluations based on 6 test collections (PAN CLEF 2014 evaluation campaign) indicate that Spatium-L1 usually appears in the top 3 best verification systems, and on an aggregate measure, presents the best performance. The suggested strategy can be adapted without any problem to different Indo-European languages (such as English, Dutch, Spanish, and Greek) or genres (essay, novel, review, and newspaper article).
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.1, S.259-269
    Type
    a
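The Spatium-L1 abstract above (result 13) describes the verification scheme only in outline: the 200 most frequent terms of the disputed text serve as features, a simple distance measure is applied, and a set of impostors helps decide whether the candidate author wrote the text. The following sketch illustrates that kind of scheme under stated assumptions: the distance is taken to be L1 over relative term frequencies (suggested by the method's name), and the decision rule (candidate closer than every impostor) and the margin parameter are illustrative, not the authors' exact formulation.

```python
from collections import Counter
import re

def top_term_profile(text, vocab=None, k=200):
    """Relative frequencies of the k most frequent tokens (words and punctuation)."""
    tokens = re.findall(r"\w+|[^\w\s]", text.lower())
    counts = Counter(tokens)
    if vocab is None:
        vocab = [t for t, _ in counts.most_common(k)]  # vocabulary from the disputed text
    total = len(tokens) or 1
    return vocab, [counts[t] / total for t in vocab]

def l1_distance(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def verify(disputed, candidate, impostors, margin=0.0):
    """Attribute the disputed text to the candidate if the candidate's profile is
    closer (by more than margin) than every impostor profile; the rule is an assumption."""
    vocab, d_prof = top_term_profile(disputed)
    _, c_prof = top_term_profile(candidate, vocab)
    d_cand = l1_distance(d_prof, c_prof)
    d_imps = [l1_distance(d_prof, top_term_profile(imp, vocab)[1]) for imp in impostors]
    return all(d_cand + margin < d for d in d_imps)
```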
  14. Halverson, J.A.; Gomez, J.; Marner, J.C.: Creation and implementation of an automated authority section at the Texas A&M University Library (1992) 0.03
    0.030690184 = product of:
      0.09207055 = sum of:
        0.009289869 = weight(_text_:a in 538) [ClassicSimilarity], result of:
          0.009289869 = score(doc=538,freq=12.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.18723148 = fieldWeight in 538, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=538)
        0.08278068 = weight(_text_:68 in 538) [ClassicSimilarity], result of:
          0.08278068 = score(doc=538,freq=2.0), product of:
            0.23180789 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.043031227 = queryNorm
            0.35710898 = fieldWeight in 538, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.046875 = fieldNorm(doc=538)
      0.33333334 = coord(2/6)
    
    Abstract
    Before the implementation of NOTIS in January of 1988, all authority work at the Evans Library was recorded in an authority card file. Planning began early for the creation of an automated authority section. This section included a cataloger, staff from copy cataloging, and the now obsolete Card Catalog Maintenance Section. This diverse group presented a challenge because of their varying degrees of expertise. Areas of training that needed to be addressed included use of the OCLC and NOTIS systems and basic cataloging rules, especially as they apply to establishing names, subject headings, and series. Issues addressed included: staffing, equipment, materials, training, and procedures and policy decisions. The Library contracted with Blackwell, North America to convert the authority card file to machine readable form, giving the authority section its starting point. The section began training in March 1989 and became functional in July of that year. Even though the section continues to evolve, the original goals were met in the creation of a cohesive group with the basic knowledge and skills needed to transfer authority control from a manual to an automated environment.
    Source
    Cataloging and classification quarterly. 15(1992) no.3, S.57-68
    Type
    a
  15. Santana, A.F.; Gonçalves, M.A.; Laender, A.H.F.; Ferreira, A.A.: Incremental author name disambiguation by exploiting domain-specific heuristics (2017) 0.03
    0.030121943 = product of:
      0.09036583 = sum of:
        0.007585147 = weight(_text_:a in 3587) [ClassicSimilarity], result of:
          0.007585147 = score(doc=3587,freq=8.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.15287387 = fieldWeight in 3587, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3587)
        0.08278068 = weight(_text_:68 in 3587) [ClassicSimilarity], result of:
          0.08278068 = score(doc=3587,freq=2.0), product of:
            0.23180789 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.043031227 = queryNorm
            0.35710898 = fieldWeight in 3587, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.046875 = fieldNorm(doc=3587)
      0.33333334 = coord(2/6)
    
    Abstract
    The vast majority of the current author name disambiguation solutions are designed to disambiguate a whole digital library (DL) at once, considering the entire repository. However, besides being very expensive and having scalability problems, these solutions may also not benefit from eventual manual corrections, as they may be lost whenever the process of disambiguating the entire repository is required. In the real world, in which repositories are updated on a daily basis, incremental solutions that disambiguate only the newly introduced citation records are likely to produce improved results in the long run. However, the problem of incremental author name disambiguation has been largely neglected in the literature. In this article we present a new author name disambiguation method, specially designed for the incremental scenario. In our experiments, our new method largely outperforms recent incremental proposals reported in the literature as well as the current state-of-the-art non-incremental method.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.4, S.931-945
    Type
    a
  16. Eversberg, B.: Zukunft der Regelwerksarbeit und des Katalogisierens (2003) 0.03
    0.029648859 = product of:
      0.08894657 = sum of:
        0.0025283822 = weight(_text_:a in 1552) [ClassicSimilarity], result of:
          0.0025283822 = score(doc=1552,freq=2.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.050957955 = fieldWeight in 1552, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=1552)
        0.08641819 = weight(_text_:kritische in 1552) [ClassicSimilarity], result of:
          0.08641819 = score(doc=1552,freq=2.0), product of:
            0.2900761 = queryWeight, product of:
              6.7410603 = idf(docFreq=141, maxDocs=44218)
              0.043031227 = queryNorm
            0.29791558 = fieldWeight in 1552, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7410603 = idf(docFreq=141, maxDocs=44218)
              0.03125 = fieldNorm(doc=1552)
      0.33333334 = coord(2/6)
    
    Content
    Cf. also the commentary by E. Plassmann in a review in ZfBB 52(2005) H.2, S.99: Much of this feels refreshing. To single out the hotly disputed topic of RAK and AACR, which stands at the very front of the conference volume: Ulrich Hohoff's concise yet well-informed and thoughtful introduction is followed by short, candidly critical statements of the kind one would wish to see more often in the professional debate, free of the widespread, not infrequently sanctimonious correctness. Bernhard Eversberg will have spoken from the heart of many colleagues when he says: »Cataloguing staff should be given early access to new texts and drafts. Committed staff members want opportunities to contribute their own experience, opinions and questions.« Or when he observes: »Technical improvements, however, are not an end in themselves. What matters first of all is to rethink, extend or modify the actual goals of cataloguing. A new technical basis can only support this, not replace it. The new challenges include networked, internationalised authority files.« (both quotations S.75) - Heidrun Wiesenmüller gives the topic a somewhat different slant: »The plan to switch to AACR2/MARC is, not least, an anachronistic attempt to unify a heterogeneous data world by decreeing a standard from above. A contemporary solution can only consist in developing tools and information systems at the meta level.« (S.79) Nothing needs to be added to that on this topic.
    Type
    a
  17. Hohoff, U.: Versuch einer Zusammenfassung der Diskussion (2003) 0.03
    0.029648859 = product of:
      0.08894657 = sum of:
        0.0025283822 = weight(_text_:a in 2120) [ClassicSimilarity], result of:
          0.0025283822 = score(doc=2120,freq=2.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.050957955 = fieldWeight in 2120, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=2120)
        0.08641819 = weight(_text_:kritische in 2120) [ClassicSimilarity], result of:
          0.08641819 = score(doc=2120,freq=2.0), product of:
            0.2900761 = queryWeight, product of:
              6.7410603 = idf(docFreq=141, maxDocs=44218)
              0.043031227 = queryNorm
            0.29791558 = fieldWeight in 2120, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7410603 = idf(docFreq=141, maxDocs=44218)
              0.03125 = fieldNorm(doc=2120)
      0.33333334 = coord(2/6)
    
    Content
    Cf. also the commentary by E. Plassmann in a review in ZfBB 52(2005) H.2, S.99: Much of this feels refreshing. To single out the hotly disputed topic of RAK and AACR, which stands at the very front of the conference volume: Ulrich Hohoff's concise yet well-informed and thoughtful introduction is followed by short, candidly critical statements of the kind one would wish to see more often in the professional debate, free of the widespread, not infrequently sanctimonious correctness. Bernhard Eversberg will have spoken from the heart of many colleagues when he says: »Cataloguing staff should be given early access to new texts and drafts. Committed staff members want opportunities to contribute their own experience, opinions and questions.« Or when he observes: »Technical improvements, however, are not an end in themselves. What matters first of all is to rethink, extend or modify the actual goals of cataloguing. A new technical basis can only support this, not replace it. The new challenges include networked, internationalised authority files.« (both quotations S.75) - Heidrun Wiesenmüller gives the topic a somewhat different slant: »The plan to switch to AACR2/MARC is, not least, an anachronistic attempt to unify a heterogeneous data world by decreeing a standard from above. A contemporary solution can only consist in developing tools and information systems at the meta level.« (S.79) Nothing needs to be added to that on this topic.
    Type
    a
  18. Hohoff, U.: ¬Die Zukunft der formalen und inhaltlichen Erschließung : Ein Blick über die Grenzen der RAK / AACR - Diskussion. Eine gemeinsame Veranstaltung der Bibliotheksverbände VDB, GBV und BIB (2003) 0.03
    0.029648859 = product of:
      0.08894657 = sum of:
        0.0025283822 = weight(_text_:a in 3361) [ClassicSimilarity], result of:
          0.0025283822 = score(doc=3361,freq=2.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.050957955 = fieldWeight in 3361, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=3361)
        0.08641819 = weight(_text_:kritische in 3361) [ClassicSimilarity], result of:
          0.08641819 = score(doc=3361,freq=2.0), product of:
            0.2900761 = queryWeight, product of:
              6.7410603 = idf(docFreq=141, maxDocs=44218)
              0.043031227 = queryNorm
            0.29791558 = fieldWeight in 3361, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7410603 = idf(docFreq=141, maxDocs=44218)
              0.03125 = fieldNorm(doc=3361)
      0.33333334 = coord(2/6)
    
    Content
    Cf. also the commentary by E. Plassmann in a review in ZfBB 52(2005) H.2, S.99: Much of this feels refreshing. To single out the hotly disputed topic of RAK and AACR, which stands at the very front of the conference volume: Ulrich Hohoff's concise yet well-informed and thoughtful introduction is followed by short, candidly critical statements of the kind one would wish to see more often in the professional debate, free of the widespread, not infrequently sanctimonious correctness. Bernhard Eversberg will have spoken from the heart of many colleagues when he says: »Cataloguing staff should be given early access to new texts and drafts. Committed staff members want opportunities to contribute their own experience, opinions and questions.« Or when he observes: »Technical improvements, however, are not an end in themselves. What matters first of all is to rethink, extend or modify the actual goals of cataloguing. A new technical basis can only support this, not replace it. The new challenges include networked, internationalised authority files.« (both quotations S.75) - Heidrun Wiesenmüller gives the topic a somewhat different slant: »The plan to switch to AACR2/MARC is, not least, an anachronistic attempt to unify a heterogeneous data world by decreeing a standard from above. A contemporary solution can only consist in developing tools and information systems at the meta level.« (S.79) Nothing needs to be added to that on this topic.
    Type
    a
  19. Wiesenmüller, H.: Vier Thesen (2003) 0.03
    0.029648859 = product of:
      0.08894657 = sum of:
        0.0025283822 = weight(_text_:a in 3362) [ClassicSimilarity], result of:
          0.0025283822 = score(doc=3362,freq=2.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.050957955 = fieldWeight in 3362, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=3362)
        0.08641819 = weight(_text_:kritische in 3362) [ClassicSimilarity], result of:
          0.08641819 = score(doc=3362,freq=2.0), product of:
            0.2900761 = queryWeight, product of:
              6.7410603 = idf(docFreq=141, maxDocs=44218)
              0.043031227 = queryNorm
            0.29791558 = fieldWeight in 3362, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7410603 = idf(docFreq=141, maxDocs=44218)
              0.03125 = fieldNorm(doc=3362)
      0.33333334 = coord(2/6)
    
    Footnote
    Cf. also the commentary by E. Plassmann in a review in ZfBB 52(2005) H.2, S.99: Much of this feels refreshing. To single out the hotly disputed topic of RAK and AACR, which stands at the very front of the conference volume: Ulrich Hohoff's concise yet well-informed and thoughtful introduction is followed by short, candidly critical statements of the kind one would wish to see more often in the professional debate, free of the widespread, not infrequently sanctimonious correctness. Bernhard Eversberg will have spoken from the heart of many colleagues when he says: »Cataloguing staff should be given early access to new texts and drafts. Committed staff members want opportunities to contribute their own experience, opinions and questions.« Or when he observes: »Technical improvements, however, are not an end in themselves. What matters first of all is to rethink, extend or modify the actual goals of cataloguing. A new technical basis can only support this, not replace it. The new challenges include networked, internationalised authority files.« (both quotations S.75) - Heidrun Wiesenmüller gives the topic a somewhat different slant: »The plan to switch to AACR2/MARC is, not least, an anachronistic attempt to unify a heterogeneous data world by decreeing a standard from above. A contemporary solution can only consist in developing tools and information systems at the meta level.« (S.79) Nothing needs to be added to that on this topic.
    Type
    a
  20. Wiesenmüller, H.: ¬Der RDA-Umstieg in Deutschland : Herausforderungen für das Metadatenmanagement (2015) 0.03
    0.029381398 = product of:
      0.08814419 = sum of:
        0.0053635086 = weight(_text_:a in 2149) [ClassicSimilarity], result of:
          0.0053635086 = score(doc=2149,freq=4.0), product of:
            0.049617026 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.043031227 = queryNorm
            0.10809815 = fieldWeight in 2149, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2149)
        0.08278068 = weight(_text_:68 in 2149) [ClassicSimilarity], result of:
          0.08278068 = score(doc=2149,freq=2.0), product of:
            0.23180789 = queryWeight, product of:
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.043031227 = queryNorm
            0.35710898 = fieldWeight in 2149, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.386969 = idf(docFreq=549, maxDocs=44218)
              0.046875 = fieldNorm(doc=2149)
      0.33333334 = coord(2/6)
    
    Abstract
    The switch of the German-speaking countries to the new cataloguing standard RDA raises the question of how legacy data catalogued under RAK should be handled. From a data-processing perspective, this paper examines whether, and to what extent, RAK records (bibliographic and authority records) could be upgraded to RDA by automated means. Examples considered include the elements to be transferred in the bibliographic description, the content type, and the splitting into separate bibliographic identities in the case of pseudonyms. It is shown that RDA upgrades are possible but highly complex.
    Content
    This paper is the extended and updated version of a talk given on 5 December 2014 at the symposium "Forschung für die Praxis - Perspektiven für Bibliotheks- und Informationsmanagement" at the Hochschule der Medien in Stuttgart. Slides at https://www.hdm-stuttgart.de/bi/symposium/skripte/Wiesenmueller_RDA-Umstieg_Forum1_14-12-05.pdf (10.05.2015). See also the conference report: Vonhof, Cornelia; Stang, Richard; Wiesenmüller, Heidrun: Forschung für die Praxis - Perspektiven für Bibliotheks- und Informationsmanagement. In: o-bib 2 (2015), H. 1, S. 68-74. http://dx.doi.org/10.5282/o-bib/2015H1S6. Cf.: http://dx.doi.org/10.5282/o-bib/2015H2.
    Type
    a
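The relevance figures listed with each result are Lucene ClassicSimilarity (TF-IDF) explanations: every matching query term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, with tf = sqrt(termFreq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the contributions are summed and scaled by the coordination factor coord(matching terms / query terms). The sketch below is a minimal re-derivation of the score of result 1 (doc 1380) from the factors shown in its explanation; the function names are illustrative, and queryNorm is taken from the listing rather than recomputed from the full query.

```python
import math

def idf(doc_freq, max_docs):
    """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Contribution of one matching query term: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                       # 1.4142135 for freq = 2.0
    term_idf = idf(doc_freq, max_docs)         # 1.153047 for "a", 5.386969 for "68"
    query_weight = term_idf * query_norm       # e.g. 0.23180789 for "68"
    field_weight = tf * term_idf * field_norm  # e.g. 0.9522906 for "68"
    return query_weight * field_weight

QUERY_NORM = 0.043031227   # taken from the explanation above, not recomputed
MAX_DOCS = 44218

# Result 1 (doc 1380): two of six query terms match, fieldNorm = 0.125
w_a  = term_score(2.0, 37942, MAX_DOCS, QUERY_NORM, 0.125)  # ~0.0101135
w_68 = term_score(2.0, 549,   MAX_DOCS, QUERY_NORM, 0.125)  # ~0.2207485
score = (w_a + w_68) * (2 / 6)                               # coord(2/6)
print(round(score, 6))                                       # 0.076954
```

Results that match a third query term (e.g. result 3, which also matches "22") add that term's contribution through a nested product with coord(1/2) before the outer coord(3/6) is applied, following the same pattern.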

Types

  • a 1731
  • el 94
  • m 80
  • b 17
  • s 17
  • n 9
  • r 8
  • ag 2
  • l 2
  • x 2
  • ? 1
  • p 1
