Search (274 results, page 1 of 14)

  • theme_ss:"Formalerschließung"
  1. Ballard, T.; Grimaldi, A.: Improve OPAC searching by reducing tagging errors in MARC records (1997) 0.05
    0.05366684 = product of:
      0.16100052 = sum of:
        0.108687915 = weight(_text_:problem in 695) [ClassicSimilarity], result of:
          0.108687915 = score(doc=695,freq=4.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.5305606 = fieldWeight in 695, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0625 = fieldNorm(doc=695)
        0.052312598 = weight(_text_:22 in 695) [ClassicSimilarity], result of:
          0.052312598 = score(doc=695,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.30952093 = fieldWeight in 695, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=695)
      0.33333334 = coord(2/6)
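The breakdown above follows Lucene's ClassicSimilarity TF-IDF scoring: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(freq) × idf × fieldNorm; the per-term scores are summed and multiplied by the coordination factor (here 2 of 6 query terms matched). A minimal sketch reproducing record 1's numbers, with every constant taken directly from the explanation tree above:

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity:
    queryWeight (idf * queryNorm) times fieldWeight (sqrt(freq) * idf * fieldNorm)."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm  # tf = sqrt(freq)
    return query_weight * field_weight

# Constants read off the explanation tree for doc 695
query_norm = 0.04826377
s_problem = term_score(freq=4.0, idf=4.244485, query_norm=query_norm, field_norm=0.0625)
s_22      = term_score(freq=2.0, idf=3.5018296, query_norm=query_norm, field_norm=0.0625)

# coord(2/6): only 2 of the 6 query terms occur in this document
total = (s_problem + s_22) * (2 / 6)
print(round(total, 6))  # ≈ 0.053667, matching the 0.05366684 shown above
```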
    
    Abstract
    One of the most common errors in cataloguing library materials is miscoding of the nonfiling indicator of title fields. Notes the extent of the problem and its negative effect on searching in the library's online catalogue, and surveys how librarians have approached solutions. Describes how the major library automation systems address this problem.
    Date
    6. 3.1997 16:22:15
  2. Byrd, J.: ¬A cooperative cataloguing proposal for Slavic and East European languages and the languages of the former Soviet Union (1993) 0.05
    0.046958487 = product of:
      0.14087546 = sum of:
        0.09510193 = weight(_text_:problem in 564) [ClassicSimilarity], result of:
          0.09510193 = score(doc=564,freq=4.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.46424055 = fieldWeight in 564, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0546875 = fieldNorm(doc=564)
        0.04577352 = weight(_text_:22 in 564) [ClassicSimilarity], result of:
          0.04577352 = score(doc=564,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.2708308 = fieldWeight in 564, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=564)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper proposes, as a backlog reduction strategy, a national cooperative cataloging program among libraries with major collections in the Slavic and East European languages and in the languages of the former Soviet Union. The long-standing problem of cataloging backlogs is discussed, along with some of the other approaches that have been used to address it. The proposal for a cooperative effort is outlined and some of the cataloging issues to be considered are discussed.
    Date
    12. 1.2007 13:22:35
  3. Brugger, J.M.: Cataloging for digital libraries (1996) 0.04
    0.043055516 = product of:
      0.12916654 = sum of:
        0.07685395 = weight(_text_:problem in 6732) [ClassicSimilarity], result of:
          0.07685395 = score(doc=6732,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.375163 = fieldWeight in 6732, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0625 = fieldNorm(doc=6732)
        0.052312598 = weight(_text_:22 in 6732) [ClassicSimilarity], result of:
          0.052312598 = score(doc=6732,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.30952093 = fieldWeight in 6732, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=6732)
      0.33333334 = coord(2/6)
    
    Abstract
    Considers the problem of applying standard concepts of cataloguing and bibliographic control to electronic media by studying the degree of fit between the Stanford Integrated Digital Library Project (SDLP) and both the USMARC format and the Text Encoding Initiative (TEI). Notes the lack of fit of both USMARC and TEI but stresses the advantages of the latter due to its lack of dependency on 3-digit tags and its use of SGML conventions.
    Series
    Cataloging and classification quarterly; vol.22, nos.3/4
  4. Moir, S.; Wells, A.: Descriptive cataloguing and the Internet : recent research (1996) 0.04
    0.043055516 = product of:
      0.12916654 = sum of:
        0.07685395 = weight(_text_:problem in 7230) [ClassicSimilarity], result of:
          0.07685395 = score(doc=7230,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.375163 = fieldWeight in 7230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0625 = fieldNorm(doc=7230)
        0.052312598 = weight(_text_:22 in 7230) [ClassicSimilarity], result of:
          0.052312598 = score(doc=7230,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.30952093 = fieldWeight in 7230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=7230)
      0.33333334 = coord(2/6)
    
    Abstract
    Outlines the Coalition for Networked Information's (CNI) analysis of the networked electronic environment, in which the chief problem for users appears to be finding resources on the Internet. Presents the CNI's arguments for surrogates and describes one approach to the identification and description of resources on the Internet: the OCLC Internet Cataloguing Project.
    Source
    Cataloguing Australia. 22(1996) nos.1/2, S.8-16
  5. Rankin, K.L.: Video cataloguing : reducing backlogs (1996) 0.04
    0.037673578 = product of:
      0.11302073 = sum of:
        0.06724721 = weight(_text_:problem in 6934) [ClassicSimilarity], result of:
          0.06724721 = score(doc=6934,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.3282676 = fieldWeight in 6934, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6934)
        0.04577352 = weight(_text_:22 in 6934) [ClassicSimilarity], result of:
          0.04577352 = score(doc=6934,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.2708308 = fieldWeight in 6934, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6934)
      0.33333334 = coord(2/6)
    
    Abstract
    At the University of Nevada, Las Vegas (UNLV), increases in student population and the creation of new academic programmes resulted in the recruitment of additional teaching faculty. This growth increased acquisitions budgets and collections. Library staff then faced a cataloguing backlog of hundreds of video titles. Describes how the library solved the backlog problem by implementing 'cataloguing shortcuts'. Outlines the previous cataloguing process and the elements which were streamlined by implementing shortcuts. The number of videos catalogued rose from 604 in the 1991/2 academic year (prior to the shortcuts) to 804 titles in 1992/3 and 883 titles in 1993/4. The Special Formats Cataloguer now has time to work on formats other than video.
    Date
    27.11.1995 17:07:22
  6. Aalberg, T.; Haugen, F.B.; Husby, O.: ¬A Tool for Converting from MARC to FRBR (2006) 0.04
    0.037673578 = product of:
      0.11302073 = sum of:
        0.06724721 = weight(_text_:problem in 2425) [ClassicSimilarity], result of:
          0.06724721 = score(doc=2425,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.3282676 = fieldWeight in 2425, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2425)
        0.04577352 = weight(_text_:22 in 2425) [ClassicSimilarity], result of:
          0.04577352 = score(doc=2425,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.2708308 = fieldWeight in 2425, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2425)
      0.33333334 = coord(2/6)
    
    Abstract
    The FRBR model is considered by many to be an important contribution to the next generation of bibliographic catalogues, but a major challenge for the library community is how to apply this model to already existing MARC-based bibliographic catalogues. This problem requires a solution for the interpretation and conversion of MARC records, and a tool for this kind of conversion has been developed as part of the Norwegian BIBSYS FRBR project. The tool is based on a systematic approach to the interpretation and conversion process and is designed to be adaptable to the rules applied in different catalogues.
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  7. D'Angelo, C.A.; Giuffrida, C.; Abramo, G.: ¬A heuristic approach to author name disambiguation in bibliometrics databases for large-scale research assessments (2011) 0.03
    0.032291643 = product of:
      0.09687492 = sum of:
        0.05764047 = weight(_text_:problem in 4190) [ClassicSimilarity], result of:
          0.05764047 = score(doc=4190,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.28137225 = fieldWeight in 4190, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.046875 = fieldNorm(doc=4190)
        0.03923445 = weight(_text_:22 in 4190) [ClassicSimilarity], result of:
          0.03923445 = score(doc=4190,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.23214069 = fieldWeight in 4190, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=4190)
      0.33333334 = coord(2/6)
    
    Abstract
    National exercises for the evaluation of research activity by universities are becoming regular practice in ever more countries. These exercises have mainly been conducted through the application of peer-review methods. Bibliometrics has not been able to offer a valid large-scale alternative because of almost overwhelming difficulties in identifying the true author of each publication. We will address this problem by presenting a heuristic approach to author name disambiguation in bibliometric datasets for large-scale research assessments. The application proposed concerns the Italian university system, comprising 80 universities and a research staff of over 60,000 scientists. The key advantage of the proposed approach is the ease of implementation. The algorithms are of practical application and have considerably better scalability and expandability properties than state-of-the-art unsupervised approaches. Moreover, the performance in terms of precision and recall, which can be further improved, seems thoroughly adequate for the typical needs of large-scale bibliometric research assessments.
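The authors' algorithms are not reproduced in this abstract, but the general shape of heuristic name disambiguation can be illustrated: group candidate authorships into blocks keyed on surname plus first initial, then merge records within a block when supporting metadata agrees. A toy sketch of that pattern, not the paper's method; the affiliation-based merge rule and the sample records are illustrative assumptions:

```python
from collections import defaultdict

def block_key(name):
    """Blocking key: normalized surname + first initial,
    e.g. 'Rossi, Maria' -> 'rossi_m'."""
    surname, _, given = name.partition(",")
    initial = given.strip()[:1].lower() if given.strip() else ""
    return f"{surname.strip().lower()}_{initial}"

def disambiguate(records):
    """Toy heuristic: within each block, records sharing an affiliation
    are assumed to be the same person. `records` is a list of
    (name, affiliation) pairs; returns clusters of record indices."""
    blocks = defaultdict(lambda: defaultdict(list))
    for i, (name, affiliation) in enumerate(records):
        blocks[block_key(name)][affiliation].append(i)
    return [idxs for affils in blocks.values() for idxs in affils.values()]

records = [
    ("Rossi, Maria", "Univ. Roma"),    # same block and affiliation as next
    ("Rossi, M.", "Univ. Roma"),
    ("Rossi, Marco", "Univ. Milano"),  # same block, different affiliation
]
clusters = disambiguate(records)
```

Real systems refine the merge step with co-authors, venues, and citation links; the blocking step is what keeps such approaches scalable to tens of thousands of researchers.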
    Date
    22. 1.2011 13:06:52
  8. Normore, L.F.: "Here be dragons" : a wayfinding approach to teaching cataloguing (2012) 0.03
    0.0269097 = product of:
      0.0807291 = sum of:
        0.04803372 = weight(_text_:problem in 1903) [ClassicSimilarity], result of:
          0.04803372 = score(doc=1903,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.23447686 = fieldWeight in 1903, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1903)
        0.032695375 = weight(_text_:22 in 1903) [ClassicSimilarity], result of:
          0.032695375 = score(doc=1903,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.19345059 = fieldWeight in 1903, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1903)
      0.33333334 = coord(2/6)
    
    Abstract
    Teaching cataloguing requires the instructor to make strategic decisions about how to approach the variety and complexity of the field and to provide an adequate theoretical foundation while preparing students for their entry into the world of practice. Accompanying these challenges are the tactical demands of providing this instruction in a distance education environment. Rather than focusing on ways to support learners in catalogue record production, instructors may use a problem-solving and decision-making approach to instruction. In this paper, a way to conceptualize a decision-making approach that builds on a foundation provided by theories of information navigation is described. This approach, called "wayfinding", teaches by having students learn to find their way in the sets of rules that are commonly used. The method focuses on instruction about the structural features of rule sets, providing basic definitions of what each of the "places" in the rule sets contains (e.g., "formatting personal names" in Chapter 22 of AACR2R) and about ways to navigate those structures, enabling students to learn not only about common rules but also about less well-known cataloguing practices ("dragons"). It provides both pragmatic and pedagogical benefits and helps develop links between cataloguing practices and their theoretical foundations.
  9. O'Neill, E.T.: FRBR: Functional requirements for bibliographic records application of the entity-relationship model to Humphry Clinker (2002) 0.03
    0.0269097 = product of:
      0.0807291 = sum of:
        0.04803372 = weight(_text_:problem in 2434) [ClassicSimilarity], result of:
          0.04803372 = score(doc=2434,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.23447686 = fieldWeight in 2434, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2434)
        0.032695375 = weight(_text_:22 in 2434) [ClassicSimilarity], result of:
          0.032695375 = score(doc=2434,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.19345059 = fieldWeight in 2434, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2434)
      0.33333334 = coord(2/6)
    
    Abstract
    The report from the IFLA (International Federation of Library Associations and Institutions) Study Group on the Functional Requirements for Bibliographic Records (FRBR) recommended a new approach to cataloging based on an entity-relationship model. This study examined a single work, The Expedition of Humphry Clinker, to determine benefits and drawbacks associated with creating such an entity-relationship model. Humphry Clinker was selected for several reasons - it has been previously studied, it is widely held, and it is a work of mid-level complexity. In addition to analyzing the bibliographic records, many books were examined to ensure the accuracy of the resulting FRBR model. While it was possible to identify works and manifestations, identifying expressions was problematic. Reliable identification of expressions frequently necessitated the examination of the books themselves. Enhanced manifestation records, where the roles of editors, illustrators, translators, and other contributors are explicitly identified, may be a viable alternative to expressions. For Humphry Clinker, the enhanced record approach avoids the problem of identifying expressions while providing similar functionality. With the enhanced manifestation record and the three remaining entity-relationship structures - works, manifestations, and items - the FRBR model provides a powerful means to improve bibliographic organization and navigation.
    Date
    10. 9.2000 17:38:22
  10. Zhang, L.; Lu, W.; Yang, J.: LAGOS-AND : a large gold standard dataset for scholarly author name disambiguation (2023) 0.03
    0.0269097 = product of:
      0.0807291 = sum of:
        0.04803372 = weight(_text_:problem in 883) [ClassicSimilarity], result of:
          0.04803372 = score(doc=883,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.23447686 = fieldWeight in 883, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.0390625 = fieldNorm(doc=883)
        0.032695375 = weight(_text_:22 in 883) [ClassicSimilarity], result of:
          0.032695375 = score(doc=883,freq=2.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.19345059 = fieldWeight in 883, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=883)
      0.33333334 = coord(2/6)
    
    Abstract
    In this article, we present a method to automatically build large labeled datasets for the author ambiguity problem in the academic world by leveraging two authoritative academic resources, ORCID and DOI. Using the method, we built LAGOS-AND, two large, gold-standard sub-datasets for author name disambiguation (AND), of which LAGOS-AND-BLOCK is created for clustering-based AND research and LAGOS-AND-PAIRWISE is created for classification-based AND research. Our LAGOS-AND datasets are substantially different from the existing ones. The initial versions of the datasets (v1.0, released in February 2021) include 7.5 M citations authored by 798 K unique authors (LAGOS-AND-BLOCK) and close to 1 M instances (LAGOS-AND-PAIRWISE). Both datasets show close similarities to the whole Microsoft Academic Graph (MAG) across validations of six facets. In building the datasets, we reveal the degrees of variation of last names in three literature databases, PubMed, MAG, and Semantic Scholar, by comparing the author names they host to the authors' official last names shown on their ORCID pages. Furthermore, we evaluate several baseline disambiguation methods as well as the MAG's author ID system on our datasets, and the evaluation helps identify several interesting findings. We hope the datasets and findings will bring new insights for future studies. The code and datasets are publicly available.
    Date
    22. 1.2023 18:40:36
  11. RAK-NBM : Interpretationshilfe zu NBM 3b,3 (2000) 0.02
    0.024660397 = product of:
      0.14796238 = sum of:
        0.14796238 = weight(_text_:22 in 4362) [ClassicSimilarity], result of:
          0.14796238 = score(doc=4362,freq=4.0), product of:
            0.1690115 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04826377 = queryNorm
            0.8754574 = fieldWeight in 4362, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=4362)
      0.16666667 = coord(1/6)
    
    Date
    22. 1.2000 19:22:27
  12. Hustand, S.: Problems of duplicate records (1986) 0.02
    0.01921349 = product of:
      0.11528094 = sum of:
        0.11528094 = weight(_text_:problem in 266) [ClassicSimilarity], result of:
          0.11528094 = score(doc=266,freq=8.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.5627445 = fieldWeight in 266, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.046875 = fieldNorm(doc=266)
      0.16666667 = coord(1/6)
    
    Abstract
    Duplicate records are a familiar problem in bibliographic databases. The problem is obvious when a union catalogue is established by automatically merging two or more separate and independent sources of catalogue information. However, even in systems with on-line cataloguing and access to previous records, duplication is a problem. An author/title search prior to cataloguing does not cut duplication to zero. A great deal of effort has been put into developing methods of duplicate detection. A major problem in this work has been efficiency, which is of particular importance in the on-line setting. Most studies have dealt with book and article material. The Research Libraries Group Inc. has described matching algorithms also for films, maps, recordings, scores and serials. Various methods of detecting duplicates will be discussed.
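The matching algorithms the abstract alludes to are not specified, but one common and cheap form of duplicate detection is a normalized author/title match key: records that produce the same key are flagged as candidate duplicates for closer comparison. A minimal sketch of that idea; the normalization rules (lowercasing, punctuation stripping, stopword removal) are illustrative assumptions, not taken from the paper:

```python
import re

STOPWORDS = {"the", "a", "an", "of"}

def match_key(author, title):
    """Candidate-duplicate key: normalized surname plus the first
    significant title words; identical keys flag possible duplicates."""
    def norm(s):
        # lowercase, drop punctuation, split into words
        return re.sub(r"[^a-z0-9 ]", "", s.lower()).split()
    author_words = norm(author)
    surname = author_words[0] if author_words else ""
    title_words = [w for w in norm(title) if w not in STOPWORDS]
    return surname + "|" + "".join(title_words[:3])

# Two variant records for the same book collapse to one key
a = match_key("Smith, John", "The History of Cataloguing")
b = match_key("SMITH, J.", "History of cataloguing, The")
```

Because the key is deliberately lossy, it trades precision for recall: it cheaply narrows the candidate set, and a more careful field-by-field comparison then decides whether two records really describe the same item.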
  13. Studwell, W.E.: How stable is the cataloguing process? : Pt.1: the problem (1996) 0.02
    0.01921349 = product of:
      0.11528094 = sum of:
        0.11528094 = weight(_text_:problem in 7680) [ClassicSimilarity], result of:
          0.11528094 = score(doc=7680,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.5627445 = fieldWeight in 7680, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.09375 = fieldNorm(doc=7680)
      0.16666667 = coord(1/6)
    
  14. Goossens, P.; Mazur-Rzesos, E.: Hierarchical relationships in bibliographic descriptions : problem analysis (1982) 0.02
    0.01921349 = product of:
      0.11528094 = sum of:
        0.11528094 = weight(_text_:problem in 4619) [ClassicSimilarity], result of:
          0.11528094 = score(doc=4619,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.5627445 = fieldWeight in 4619, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.09375 = fieldNorm(doc=4619)
      0.16666667 = coord(1/6)
    
  15. Haynes, E.; Fountain, J.F.: Unlocking the mysteries of cataloging : a workbook of examples (2005) 0.02
    0.01921349 = product of:
      0.11528094 = sum of:
        0.11528094 = weight(_text_:problem in 4595) [ClassicSimilarity], result of:
          0.11528094 = score(doc=4595,freq=2.0), product of:
            0.20485485 = queryWeight, product of:
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.04826377 = queryNorm
            0.5627445 = fieldWeight in 4595, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.244485 = idf(docFreq=1723, maxDocs=44218)
              0.09375 = fieldNorm(doc=4595)
      0.16666667 = coord(1/6)
    
    Abstract
    This workbook pinpoints problem areas that arise in the cataloguing of a wide variety of materials in public, school, special, and academic library settings.
  16. Eversberg, B.: Zukunft der Regelwerksarbeit und des Katalogisierens (2003) 0.02
    0.019021604 = product of:
      0.114129625 = sum of:
        0.114129625 = weight(_text_:seele in 1552) [ClassicSimilarity], result of:
          0.114129625 = score(doc=1552,freq=2.0), product of:
            0.35304275 = queryWeight, product of:
              7.314861 = idf(docFreq=79, maxDocs=44218)
              0.04826377 = queryNorm
            0.32327422 = fieldWeight in 1552, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.314861 = idf(docFreq=79, maxDocs=44218)
              0.03125 = fieldNorm(doc=1552)
      0.16666667 = coord(1/6)
    
    Content
    Cf. also the commentary by E. Plassmann in a review in ZfBB 52(2005) no.2, p.99: Much of it feels refreshing. To single out the hotly disputed topic of RAK and AACR, which stands right at the front of the conference volume: Ulrich Hohoff's concise yet well-informed and thoughtful introduction is followed by short, candidly critical statements of the kind one would wish to see more often in the professional debate - free of the widespread, not infrequently pharisaical correctness. Bernhard Eversberg will have spoken from the heart of many colleagues when he says: "The cataloguing staff should be able to inspect new texts and drafts at an early stage. Committed staff members want opportunities to contribute their own experiences, opinions and questions." Or when he states: "Technical improvements, however, are not a goal in themselves. What matters first of all is to rethink, extend or modify the actual goals of cataloguing. A new technical foundation can only support this, not replace it. The new challenges include networked, internationalized authority files." (both quotations p.75) - Heidrun Wiesenmüller gives the topic a somewhat different thrust: "The plan of a migration to AACR2/MARC is, not least, an anachronistic attempt to unify a heterogeneous data world by decreeing a standard from above. A contemporary solution can only consist in developing tools and information systems at the meta level." (p.79) There is nothing to add to that.
  17. Hohoff, U.: Versuch einer Zusammenfassung der Diskussion (2003) 0.02
    0.019021604 = product of:
      0.114129625 = sum of:
        0.114129625 = weight(_text_:seele in 2120) [ClassicSimilarity], result of:
          0.114129625 = score(doc=2120,freq=2.0), product of:
            0.35304275 = queryWeight, product of:
              7.314861 = idf(docFreq=79, maxDocs=44218)
              0.04826377 = queryNorm
            0.32327422 = fieldWeight in 2120, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.314861 = idf(docFreq=79, maxDocs=44218)
              0.03125 = fieldNorm(doc=2120)
      0.16666667 = coord(1/6)
    
  18. Hohoff, U.: ¬Die Zukunft der formalen und inhaltlichen Erschließung : Ein Blick über die Grenzen der RAK / AACR - Diskussion. Eine gemeinsame Veranstaltung der Bibliotheksverbände VDB, GBV und BIB (2003) 0.02
    Content
    Same Plassmann commentary (ZfBB 52(2005) no. 2, p. 99) as quoted above.
  19. Wiesenmüller, H.: Vier Thesen (2003) 0.02
    Footnote
    Same Plassmann commentary (ZfBB 52(2005) no. 2, p. 99) as quoted above.
  20. Olson, N.B.: Cataloging kits (2001) 0.02
    Abstract
    The major problem in cataloging kits is identifying what actually counts as a kit according to AACR2; this problem is discussed, with examples given. The rules themselves are discussed and examples of kits are included. Sections also cover processing these materials for circulation, weeding and preservation, and the future of kits.

Languages

  • e 218
  • d 50
  • i 3
  • f 1
  • nl 1
  • s 1

Types

  • a 258
  • b 15
  • m 12
  • s 6
  • el 4
  • ? 1
  • x 1