Search (54 results, page 1 of 3)

  • theme_ss:"Datenformate"
  • theme_ss:"Formalerschließung"
  1. Leazer, G.H.: A conceptual schema for the control of bibliographic works (1994) 0.01
    Abstract
    In this paper I describe a conceptual design of a bibliographic retrieval system that enables more thorough control of bibliographic entities. A bibliographic entity has 2 components: the intellectual work and the physical item. Users searching bibliographic retrieval systems generally do not search for a specific item, but are willing to retrieve one of several alternative manifestations of a work. However, contemporary bibliographic retrieval systems are based solely on the descriptions of items. Works are described only implicitly by collocating descriptions of items. This method has resulted in a tool that does not include important descriptive attributes of the work, e.g. information regarding its history, its genre, or its bibliographic relationships. A bibliographic relationship is an association between 2 bibliographic entities. A system evaluation methodology was used to create a conceptual schema for a bibliographic retrieval system. The model is based upon an analysis of data elements in the USMARC Formats for Bibliographic Data. The conceptual schema describes a database comprising 2 separate files of bibliographic descriptions, one of works and the other of items. Each file consists of individual descriptive surrogates of their respective entities. The specific data content of each file is defined by a data dictionary. Data elements used in the description of bibliographic works reflect the nature of works as intellectual and linguistic objects. The descriptive elements of bibliographic items describe the physical properties of bibliographic entities. Bibliographic relationships constitute the logical structure of the database.
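    A minimal sketch of the two-file arrangement the abstract describes, using hypothetical Python record types; the field names are illustrative and are not taken from the paper or from USMARC:

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class WorkRecord:                      # surrogate for the intellectual work
          work_id: str
          uniform_title: str
          history: str = ""                  # attributes of the work itself
          genre: str = ""
          related_work_ids: List[str] = field(default_factory=list)  # bibliographic relationships

      @dataclass
      class ItemRecord:                      # surrogate for the physical item
          item_id: str
          work_id: str                       # link into the work file
          edition: str = ""
          imprint: str = ""
          physical_description: str = ""

      # two separate "files" of descriptive surrogates
      works = {"w1": WorkRecord("w1", "Hamlet", genre="drama")}
      items = {"i1": ItemRecord("i1", "w1", edition="2nd ed.", imprint="Oxford : OUP, 1987")}

      def manifestations_of(work_id: str) -> List[ItemRecord]:
          """Return any alternative manifestation of a work, the retrieval behaviour the abstract ascribes to users."""
          return [rec for rec in items.values() if rec.work_id == work_id]

      print(manifestations_of("w1"))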
    Imprint
    Oxford : Learned Information
    Source
    Navigating the networks: Proceedings of the 1994 Mid-year Meeting of the American Society for Information Science, Portland, Oregon, May 21-25, 1994. Ed.: D.L. Andersen et al
  2. Coyle, K.: Future considerations : the functional library systems record (2004) 0.00
    Abstract
    The paper performs a thought experiment on the concept of a record based on the Functional Requirements for Bibliographic Records and on library system functions, and concludes that if we want to develop a functional bibliographic record we need to do it within the context of a flexible, functional library systems record structure. The article suggests a new way of looking at the library systems record that would allow libraries to move forward not only in terms of technology but also in terms of serving library users.
    Source
    Library hi tech. 22(2004) no.2, S.166-174
  3. Oehlschläger, S.: Arbeitsgemeinschaft der Verbundsysteme : Aus der 46. Sitzung am 21. und 22. April 2004 im Bibliotheksservice-Zentrum Baden-Württemberg in Konstanz (2004) 0.00
    Content
    "Cross-consortium interlibrary loan for monographs is about to be introduced nationwide. A prerequisite is a working online interlibrary loan system in each consortium; this has been implemented as a prototype. The Arbeitsgemeinschaft der Verbundsysteme assumes that live operation can begin on 1 January 2005 and that services can then be settled under the new Leihverkehrsordnung. To clarify questions of detail, the Arbeitsgruppe Verbundübergreifende Fernleihe will meet in June. It was already agreed at the previous meeting that the individual libraries should decide on the routing of requests, and that the consortium head offices will intervene only if problems arise. Individual routing control, both within a consortium and in determining the order in which other consortia are approached, has high priority in some consortia. Relationships that have grown up over time must be reproducible by the ordering systems. Local cooperation will also be possible across consortium boundaries. With regard to the settlement of cross-consortium interlibrary loans, the consortia have agreed on a uniform accounting period. A further prerequisite is that the funding bodies create the necessary framework for settlement and that the new Leihverkehrsordnung is brought into force in all federal states in good time."
    - Project "Umstieg auf internationale Formate und Regelwerke (MARC21, AACR2)": At the time of the meeting, the project on migrating to international formats and rules (MARC21, AACR2) was close to completion. The main project results were presented at the session of the Standardisierungsausschuss during the 2. Leipziger Kongress für Information und Bibliothek. On the basis of the information available, the members of the Arbeitsgemeinschaft der Verbundsysteme assume that the financial argument can no longer be the decisive factor in the forthcoming decision. Even though a clear migration decision by the Standardisierungsausschuss is currently regarded as politically unenforceable, the members of the Arbeitsgemeinschaft der Verbundsysteme view the development positively in light of the project results. The discussion revealed deficits in the German cataloguing rules and in consortium practice and prompted various innovations. To improve data exchange among themselves, the consortium head offices see the need, independently of any decision by the Standardisierungsausschuss, to homogenize their databases and to reduce hierarchies or simplify the linking structures. The development of the Functional Requirements for Bibliographic Records (FRBR) must also be taken into account. The formats must be developed so that all relevant information can be carried in the title record. A convergence of cataloguing rules and format is the aim.
  4. Ranta, J.A.: Queens Borough Public Library's Guidelines for cataloging community information (1996) 0.00
    Abstract
    Currently, few resources exist to guide libraries in the cataloguing of community information using the new USMARC Format for Community Information (1993). In developing a community information database, Queens Borough Public Library, New York City, formulated its own cataloguing procedures for applying AACR2, LoC File Interpretations, and the USMARC Format for Community Information to community information. Its practices include entering corporate names directly whenever possible and assigning LC subject headings for classes of persons and topics, adding neighbourhood-level geographic subdivisions. The guidelines were specially designed to help non-cataloguers catalogue community information and have enabled the library to maintain consistency in handling corporate names and in assigning subject headings, while creating a database that is highly accessible to library staff and users.
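    A hedged illustration of the kind of entry such guidelines produce; the organization, address, and heading strings below are invented for illustration and are not quoted from the Queens Borough guidelines or from the USMARC Community Information format:

      # hypothetical community-information entry, sketched as plain Python data
      record = {
          # corporate name entered directly, not under a parent body
          "corporate_name": "Sunnyside Community Services",
          # LC-style subject headings for classes of persons and topics,
          # each with a neighbourhood-level geographic subdivision added
          "subject_headings": [
              "Older people--Services for--New York (State)--New York--Sunnyside",
              "Home care services--New York (State)--New York--Sunnyside",
          ],
          "address": "43-31 39th Street, Sunnyside, NY",
      }

      def check_consistency(rec: dict) -> bool:
          """Toy check a non-cataloguer could run: every heading carries the neighbourhood subdivision."""
          return all(h.endswith("Sunnyside") for h in rec["subject_headings"])

      print(check_consistency(record))   # True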
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.51-69
  5. Riva, P.: Mapping MARC 21 linking entry fields to FRBR and Tillett's taxonomy of bibliographic relationships (2004) 0.00
    Abstract
    Bibliographic relationships have taken on even greater importance in the context of ongoing efforts to integrate concepts from the Functional Requirements for Bibliographic Records (FRBR) into cataloging codes and database structures. In MARC 21, the linking entry fields are a major mechanism for expressing relationships between bibliographic records. Taxonomies of bibliographic relationships have been proposed by Tillett, with an extension by Smiraglia, and in FRBR itself. The present exercise is to provide a detailed bidirectional mapping of the MARC 21 linking fields to these two schemes. The correspondence of the Tillett taxonomic divisions to the MARC categorization of the linking fields as chronological, horizontal, or vertical is examined as well. Application of the findings to MARC format development and system functionality is discussed.
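    The flavour of such a mapping can be sketched as a small lookup table; the assignments below are illustrative guesses at a few obvious cases, not Riva's published mapping:

      # hypothetical partial mapping: MARC 21 linking entry field -> Tillett relationship type
      LINKING_FIELD_TO_TILLETT = {
          "765": "derivative",    # original language entry (the record describes a translation)
          "767": "derivative",    # translation entry
          "775": "derivative",    # other edition entry
          "776": "equivalence",   # additional physical form entry
          "770": "accompanying",  # supplement/special issue entry
          "773": "whole-part",    # host item entry
          "780": "sequential",    # preceding entry
          "785": "sequential",    # succeeding entry
      }

      def tillett_type(tag: str) -> str:
          """Look up the relationship type for a linking entry field, if one has been assigned."""
          return LINKING_FIELD_TO_TILLETT.get(tag, "unmapped")

      print(tillett_type("773"))   # whole-part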
    Date
    10. 9.2000 17:38:22
  6. Boruah, B.B.; Ravikumar, S.; Gayang, F.L.: Consistency, extent, and validation of the utilization of the MARC 21 bibliographic standard in the college libraries of Assam in India (2023) 0.00
    Abstract
    This paper sheds light on the existing practice of cataloging in the college libraries of Assam in terms of utilizing the MARC 21 standard and its structure, i.e. the tags, subfield codes, and indicators. Catalog records from six college libraries were collected, and a survey was conducted to understand local users' information requirements for the catalog. Areas where the libraries have scope to improve, and which groups of tags could be most helpful for information retrieval, are identified and suggested. The study meets the need for a local-level assessment of these catalogs.
  7. Heaney, M.: Object-oriented cataloging (1995) 0.00
    Abstract
    Catalogues have evolved from lists of physical items present in particular libraries into computerized access and retrieval tools for works dispersed across local and national boundaries. Works themselves are no longer constrained by physical form, yet cataloguing rules have not evolved in parallel with these developments. Reanalyzes the nature of works and their publication in an approach based on object-oriented modelling and demonstrates the advantages to be gained thereby. Suggests a strategic plan to enable an organic transformation from current MARC-based cataloguing to object-oriented cataloguing. Proposes major revisions of MARC in order to allow records to maximize the benefits of both computerized databases and high-speed data networks. This will involve a fundamental shift away from the AACR philosophy of description of, plus access to, physical items.
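    One way to picture the object-oriented reanalysis is a small class model in which published forms refer to a work rather than restating its description; the classes below sketch the general idea and are not Heaney's own model:

      class Work:
          """The abstract intellectual creation, catalogued once."""
          def __init__(self, title: str, creator: str):
              self.title = title
              self.creator = creator
              self.publications = []          # concrete published forms of this work

      class Publication:
          """A concrete published form; it refers to its Work instead of repeating its description."""
          def __init__(self, work: Work, edition: str, publisher: str, year: int):
              self.work = work
              self.edition = edition
              self.publisher = publisher
              self.year = year
              work.publications.append(self)

          def citation(self) -> str:
              w = self.work
              return f"{w.creator}: {w.title}. {self.edition}, {self.publisher}, {self.year}"

      beowulf = Work("Beowulf", "Anonymous")
      print(Publication(beowulf, "Rev. ed.", "Penguin", 1973).citation())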
    Source
    Information technology and libraries. 14(1995) no.3, S.135-153
  8. Crook, M.: Barbara Tillett discusses cataloging rules and conceptual models (1996) 0.00
    Abstract
    The chief of the cataloguing policy and support office at the LoC presents her views on the usefulness of conceptual modelling in determining future directions for cataloguing and the MARC format. After describing the evolution of bibliographic processes, she suggests using the entity-relationship conceptual model to step back from how we record information today and to start thinking about what information really means and why we provide it. Argues that now is the time to reexamine the basic principles which underpin Anglo-American cataloguing codes, and that the MARC formats should be looked at to see how they can evolve towards a future, improved structure for communicating bibliographic and authority information.
    Source
    OCLC newsletter. 1996, no.220, S.20-22
  9. Tosaka, Y.; Park, J.-r.: RDA: Resource description & access : a survey of the current state of the art (2013) 0.00
    Abstract
    Resource Description & Access (RDA) is intended to provide a flexible and extensible framework that can accommodate all types of content and media within rapidly evolving digital environments while also maintaining compatibility with the Anglo-American Cataloguing Rules, 2nd edition (AACR2). The cataloging community is grappling with practical issues in navigating the transition from AACR2 to RDA; there is a definite need to evaluate major subject areas and broader themes in information organization under the new RDA paradigm. This article aims to accomplish this task through a thorough and critical review of the emerging RDA literature published from 2005 to 2011. The review mostly concerns key areas of difference between RDA and AACR2, the relationship of the new cataloging code to metadata standards, the impact on encoding standards such as Machine-Readable Cataloging (MARC), end user considerations, and practitioners' views on RDA implementation and training. Future research will require more in-depth studies of RDA's expected benefits and the manner in which the new cataloging code will improve resource retrieval and bibliographic control for users and catalogers alike over AACR2. The question as to how the cataloging community can best move forward to the post-AACR2/MARC environment must be addressed carefully so as to chart the future of bibliographic control in the evolving environment of information production, management, and use.
    Series
    Advances in information science
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.651-662
  10. Fattahi, R.: A uniform approach to the indexing of cataloguing data in online library systems (1997) 0.00
    Abstract
    Argues that in library cataloguing, and for optimal functionality of bibliographic records, the indexing of fields and subfields should follow a uniform approach. This would maintain effectiveness in searching, retrieval and display of bibliographic information both within and between systems. However, a review of different postings to the AUTOCAT and USMARC discussion lists indicates that the indexing and tagging of cataloguing data do not, at present, follow a consistent approach in online library systems. If the rationale of cataloguing principles is to bring uniformity to bibliographic description and effectiveness to access, they should also address the question of uniform approaches to the indexing of cataloguing data. In this context, and in terms of the identification and handling of data elements, cataloguing standards (codes, MARC formats and the Z39.50 standard) should be brought closer together, in that they should provide guidelines for the designation of data elements for machine-readable records.
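    The argument can be made concrete with a toy indexing policy that treats the same data element uniformly wherever it occurs; the tag/subfield choices below are illustrative only:

      # hypothetical uniform indexing policy: (tag, subfield) -> index name
      INDEX_POLICY = {
          ("100", "a"): "author",
          ("700", "a"): "author",       # added-entry personal name goes to the same index
          ("245", "a"): "title",
          ("246", "a"): "title",        # variant title likewise
      }

      def index_record(marc_fields):
          """Build index postings from (tag, subfield, value) triples using one uniform policy."""
          postings = {}
          for tag, sub, value in marc_fields:
              index = INDEX_POLICY.get((tag, sub))
              if index:
                  postings.setdefault(index, []).append(value)
          return postings

      rec = [("100", "a", "Fattahi, Rahmatollah"), ("245", "a", "A uniform approach ...")]
      print(index_record(rec))   # {'author': [...], 'title': [...]}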
  11. Oehlschläger, S.: Aus der 48. Sitzung der Arbeitsgemeinschaft der Verbundsysteme am 12. und 13. November 2004 in Göttingen (2005) 0.00
    Content
    Die Deutsche Bibliothek - retrieval of content: The project aims to develop and introduce procedures that automatically, and without intellectual processing, provide sufficient search entry points for content retrieval. This may involve searching the content of full texts, digital images, audio files, video files, etc. of digital resources archived at Die Deutsche Bibliothek, or of digital surrogates of archived analogue resources (e.g. OCR results). Content that exists in electronic form but has so far been unavailable, or available only to a limited extent, to internet users of Die Deutsche Bibliothek is to be made usable as comprehensively and as conveniently as possible. In addition, content that has a descriptive character for an object catalogued in ILTIS is to be used to point to the object described. The highest priority is given to indexing content in text formats. As a first step, the full text of all journals digitized in the project "Exilpresse digital" was used for an extended search. As a next step, the PSI software is to be evaluated for the full-text indexing of abstracts. MILOS: The use of MILOS opens up the possibility of automatically enriching holdings that have little or no subject indexing with additional subject information; the emphasis is on free-text indexing. The system, already in use in several libraries and meanwhile licensed for Germany by Die Deutsche Bibliothek, was ported to a UNIX version and adapted. By now almost the entire stock has been processed retrospectively, and the data will be available for searching in the complete OPAC. The index entries, stored in an XML structure, are fully indexed and made accessible. A further development step will be the use of MILOS in online mode.
  12. Chapman, L.: How to catalogue : a practical manual using AACR2 and Library of Congress (1990) 0.00
    LCSH
    MARC System / United States
    Subject
    MARC System / United States
  13. Kushwoh, S.S.; Gautam, J.N.; Singh, R.: Migration from CDS / ISIS to KOHA : a case study of data conversion from CCF to MARC 21 (2009) 0.00
    Abstract
    Standards are important for quality and interoperability in any system. Bibliographic record creation standards such as MARC 21 (Machine Readable Catalogue), CCF (Common Communication Format), UNIMARC (Universal MARC) and their local variations are in use all across the library community. ILMS (Integrated Library Management Systems) use these standards for the design of databases and the creation of bibliographic records. Their use is important for uniformity of the system and of the bibliographic data, but problems arise when a library wants to switch over from one system to another that uses different standards. This paper discusses migration from one record standard to another, the mapping of data, and related issues. Data exported from CDS/ISIS CCF-based records to KOHA MARC 21-based records are discussed as a case study. The methodology, with a few modifications, can also be applied to the migration of data in other bibliographic formats. Freeware tools can be utilized for the migration.
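    A conversion of this kind is essentially a tag-mapping pass over the exported records. The sketch below shows the shape of such a pass with an entirely hypothetical tag table; the real CCF-to-MARC 21 correspondences discussed in the paper differ and are incomplete, which is why unmapped fields are reported rather than dropped silently:

      # entirely hypothetical CCF tag -> MARC 21 tag table; real mappings must come from the standards
      CCF_TO_MARC21 = {
          "200": "245",   # title
          "300": "100",   # personal author
          "400": "250",   # edition
          "610": "650",   # subject term
      }

      def convert_record(ccf_record: dict):
          """Return a MARC 21-style record plus the list of CCF tags that could not be mapped."""
          marc, unmapped = {}, []
          for ccf_tag, value in ccf_record.items():
              target = CCF_TO_MARC21.get(ccf_tag)
              if target:
                  marc[target] = value
              else:
                  unmapped.append(ccf_tag)
          return marc, unmapped

      marc_record, leftovers = convert_record({"200": "Migration from CDS/ISIS to KOHA", "999": "local note"})
      print(marc_record, leftovers)   # {'245': ...} ['999']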
  14. Block, B.; Hengel, C.; Heuvelmann, R.; Katz, C.; Rusch, B.; Schmidgall, K.; Sigrist, B.: Maschinelles Austauschformat für Bibliotheken und die Functional Requirements for Bibliographic Records : Oder: Wieviel FRBR verträgt MAB? (2005) 0.00
    Abstract
    A consistent implementation of the FRBR model, OCLC writes, would mean the biggest change in cataloguing in a hundred years. But there are other voices. On the fringes of an FRBR workshop held at Die Deutsche Bibliothek in 2004, it was said that the relationship between FRBR and cataloguing practice is comparable to that between football commentators and the football team: the one theorizes, after the final whistle, about what the other has just done. So what are the Functional Requirements for Bibliographic Records really about? Are both voices perhaps right? How does the MAB format relate to the model? How can the entities, with their respective attributes, be represented in MAB? Does MAB offer the structural prerequisites to support FRBR applications? These are the questions that have occupied the MAB committee, which since the beginning of this year has been operating as the Expertengruppe Datenformate, and to which first answers are attempted in what follows. The Functional Requirements for Bibliographic Records, FRBR for short, are a 1998 recommendation of the International Federation of Library Associations and Institutions (IFLA) for restructuring library catalogues. The FRBR are conceived as a logical conceptual model for bibliographic descriptions. They are explicitly not a data model ready for implementation, let alone a practical set of cataloguing rules. The model remains at an abstract level: what it describes are abstract entities with their attributes and their relationships to one another.
  15. Mönch, C.; Aalberg, T.: Automatic conversion from MARC to FRBR (2003) 0.00
    Abstract
    Catalogs have for centuries been the main tool that enabled users to search for items in a library by author, title, or subject. A catalog can be interpreted as a set of bibliographic records, where each record acts as a surrogate for a publication. Every record describes a specific publication and contains the data that is used to create the indexes of search systems and the information that is presented to the user. Bibliographic records are often captured and exchanged by the use of the MARC format. Although there are numerous "dialects" of the MARC format in use, they are usually crafted on the same basis and are interoperable with each other - to a certain extent. The data model of a MARC-based catalog, however, is "[...] extremely non-normalized with excessive replication of data" [1]. For instance, a literary work that exists in numerous editions and translations is likely to yield a large result set, because each edition or translation is represented by an individual record that is unrelated to other records that describe the same work.
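    The core of such a conversion is collecting the item-level records that describe the same work; a common, and here purely illustrative, heuristic is to group records by a normalized author/title key (the real converter in the paper uses richer matching):

      import re

      def work_key(record: dict) -> str:
          """Crude normalization of author plus uniform title; field tags are standard MARC 100/240/245."""
          author = record.get("100", "")
          title = record.get("240") or record.get("245", "")
          return re.sub(r"[^a-z0-9 ]", "", f"{author} {title}".lower()).strip()

      def group_into_works(records):
          """Group bibliographic records that appear to describe the same work."""
          works = {}
          for rec in records:
              works.setdefault(work_key(rec), []).append(rec)
          return works

      records = [
          {"100": "Ibsen, Henrik", "245": "Peer Gynt"},
          {"100": "Ibsen, Henrik", "240": "Peer Gynt", "245": "Peer Gynt : a dramatic poem"},
      ]
      print(len(group_into_works(records)))   # 1 work, represented by 2 records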
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
  16. Weber, R.: "Functional requirements for bibliographic records" und Regelwerksentwicklung (2001) 0.00
    Source
    Dialog mit Bibliotheken. 13(2001) H.3, S.20-22
  17. Chandrakar, R.: Mapping CCF to MARC21 : an experimental approach (2001) 0.00
    Abstract
    The purpose of this article is to raise and address a number of issues pertaining to the conversion of the Common Communication Format (CCF) into MARC21. In an era of global resource sharing, the exchange of bibliographic records from one system to another is imperative for library communities. Instead of a single standard for creating machine-readable catalogue records, more than 20 standards have emerged and are being used by different institutions. Because of these variations in standards, sharing resources and transferring data between systems, both locally and globally, has become a significant problem. Addressing this problem requires keeping in mind that countries such as India and others in southeast Asia use the CCF as the standard for creating bibliographic cataloguing records. This paper describes a way to map bibliographic catalogue records from CCF to MARC21, although 100% mapping is not possible. In addition, the paper describes an experimental approach that enumerates problems that may occur during the mapping and exchange of records and shows how these problems can be overcome.
  18. Hoffmann, L.: Erste Projektergebnisse "Umstieg auf internationale Formate und Regelwerke (MARC21, AACR)" (2003) 0.00
    Abstract
    The consortia currently use two library systems, ALEPH 500 and PICA, both from internationally operating vendors. Both systems are MARC-based and had to be adapted to MAB2. The Südwestdeutsche Bibliotheksverbund (SWB) is the only consortium facing the decision for a new system. The Österreichische Bibliothekenverbund (ÖBV) was the first user of ALEPH 500 in the German-speaking area; from its point of view, the adaptations to the data-holding concept and the introduction of the hierarchical structures were the most laborious. The later adopters Bibliotheks-Verbund Bayern (BVB), the Hochschulbibliothekszentrum des Landes Nordrhein-Westfalen (HBZ) and the Kooperative Bibliotheksverbund Berlin-Brandenburg (KOBV) were able to purchase the MAB2 adaptations that had been made for the ÖBV. In all consortia, ALEPH 500 had to be adapted to the respective consortium models and functionalities. For the PICA system, adaptations to the MAB2 structure were needed by the first adopter, the Gemeinsamer Bibliotheksverbund of the states of Bremen, Hamburg, Mecklenburg-Vorpommern, Niedersachsen, Sachsen-Anhalt, Schleswig-Holstein and Thüringen (GBV); the Hessische Bibliotheks-Informationssystem (HeBIS), as a later adopter, was able to take over these adaptations. The SWB is currently replacing its system, and MARC/MAB compatibility is required in the call for tenders. In most cases the effort for adapting to the MAB2 format could not be quantified, because it cannot be separated from the adaptations to the respective consortium models. For assessing a format migration it matters greatly whether MAB is used as both exchange and cataloguing format, or only as an exchange format alongside a separate cataloguing interface. In the ALEPH consortia BVB, HBZ and ÖBV, MAB2 is also the internal and cataloguing format. In the KOBV, cataloguing takes place in the local systems with their own cataloguing interfaces. In the PICA consortia, MAB2 is exclusively an exchange format, and in the SWB's new system MAB2 is likewise to be an exchange format only. The PICA consortia therefore regard a format migration as rather unproblematic: since exchange format and internal format are independent of each other, a change of the exchange format has hardly any effect on the internal format and the cataloguing interface. When drafting migration scenarios, the possibility must be taken into account that not all libraries will be able to migrate at the same time.
  19. Giordano, R.: The documentation of electronic texts : using Text Encoding Initiative headers: an introduction (1994) 0.00
    Abstract
    Presents a general introduction to the form and functions of the Text Encoding Initiative (TEI) headers and explains their relationship to the MARC record. The TEI header's main strength is that it documents electronic texts in a standard exchange format that should be understandable both to librarian cataloguers and to text encoders outside of librarianship. TEI gives encoders the ability to document the electronic text itself, its source, its encoding principles, and revisions, as well as non-bibliographic characteristics of the text that can support both scholarly analysis and retrieval. Its bibliographic descriptions can be loaded into standard remote bibliographic databases, which should make electronic texts as easy to find for researchers as texts in other media. Presents a brief overview of the TEI header, the file description and the ways in which TEI headers have counterparts in MARC, the Encoding Description, the Profile Description, the Revision Description, the size and complexity of the TEI header, and the use of the TEI header to support document retrieval and analysis, with notes on some of the prospects and problems.
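    The structure described here can be illustrated by generating a skeletal TEI header; the element names follow the TEI scheme, while the content strings are placeholders:

      import xml.etree.ElementTree as ET

      header = ET.Element("teiHeader")

      fileDesc = ET.SubElement(header, "fileDesc")              # bibliographic description of the e-text itself
      ET.SubElement(ET.SubElement(fileDesc, "titleStmt"), "title").text = "Sample electronic text"
      ET.SubElement(ET.SubElement(fileDesc, "publicationStmt"), "publisher").text = "Some text archive"
      ET.SubElement(ET.SubElement(fileDesc, "sourceDesc"), "bibl").text = "Printed source of the transcription"

      encodingDesc = ET.SubElement(header, "encodingDesc")      # encoding principles
      ET.SubElement(ET.SubElement(encodingDesc, "editorialDecl"), "p").text = "Quotation marks retained"

      profileDesc = ET.SubElement(header, "profileDesc")        # non-bibliographic characteristics
      ET.SubElement(ET.SubElement(profileDesc, "langUsage"), "language").text = "English"

      revisionDesc = ET.SubElement(header, "revisionDesc")      # revision history
      ET.SubElement(revisionDesc, "change").text = "Header created"

      print(ET.tostring(header, encoding="unicode"))

    The fileDesc part is the piece with obvious MARC counterparts; encodingDesc, profileDesc, and revisionDesc carry the information a MARC record normally does not hold.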
  20. McBride, J.L.: Faceted subject access for music through USMARC : a case for linked fields (2000) 0.00
    Abstract
    The USMARC Format for Bibliographic Description contains three fields (045, 047, and 048) designed to facilitate subject access to music materials. The fields cover three of the main aspects of subject description for music: date of composition, form or genre, and number of instruments or voices, respectively. The codes are rarely used for subject access, because of the difficulty of coding them and because false drops would result in retrieval of bibliographic records where more than one musical work is present, a situation that occurs frequently with sound recordings. It is proposed that the values of the fields be converted to natural language and that subfield 8 be used to link all access fields in a bibliographic record for greater precision in retrieval. This proposal has implications beyond music cataloging, especially for metadata and any bibliographic records describing materials containing many works and subjects.
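    The proposal can be pictured as follows; the code values and their natural-language expansions are invented for illustration, and subfield 8 is used here simply as a shared link value between the access fields that describe the same musical work:

      # invented lookup tables standing in for the coded values of fields 047 and 048
      FORM_OF_COMPOSITION = {"xx1": "concerto"}          # hypothetical code -> natural language
      INSTRUMENTATION = {"yy1": "violin", "yy2": "orchestra"}

      # two works on one sound recording; each group of fields shares a link value (subfield 8)
      fields = [
          {"tag": "047", "link": "1", "code": "xx1"},
          {"tag": "048", "link": "1", "code": "yy1"},
          {"tag": "047", "link": "2", "code": "xx1"},
          {"tag": "048", "link": "2", "code": "yy2"},
      ]

      def facets_for_work(link: str):
          """Collect the natural-language facets belonging to one linked group only,
          so a search combining form and instrumentation cannot falsely match across works."""
          out = []
          for f in fields:
              if f["link"] != link:
                  continue
              table = FORM_OF_COMPOSITION if f["tag"] == "047" else INSTRUMENTATION
              out.append(table.get(f["code"], f["code"]))
          return out

      print(facets_for_work("1"))   # ['concerto', 'violin']
      print(facets_for_work("2"))   # ['concerto', 'orchestra']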

Languages

  • e 28
  • d 23
  • f 1

Types

  • a 50
  • m 2
  • b 1
  • el 1
  • s 1
  • x 1