Search (73 results, page 1 of 4)

  • theme_ss:"Datenformate"
  1. Avram, H.D.: Machine Readable Cataloging (MARC): 1961-1974 (2009) 0.08
    0.08223935 = product of:
      0.20559838 = sum of:
        0.092339694 = weight(_text_:computers in 3844) [ClassicSimilarity], result of:
          0.092339694 = score(doc=3844,freq=2.0), product of:
            0.22709264 = queryWeight, product of:
              5.257537 = idf(docFreq=625, maxDocs=44218)
              0.04319373 = queryNorm
            0.40661687 = fieldWeight in 3844, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.257537 = idf(docFreq=625, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3844)
        0.11325869 = sum of:
          0.07229361 = weight(_text_:history in 3844) [ClassicSimilarity], result of:
            0.07229361 = score(doc=3844,freq=2.0), product of:
              0.20093648 = queryWeight, product of:
                4.6519823 = idf(docFreq=1146, maxDocs=44218)
                0.04319373 = queryNorm
              0.3597834 = fieldWeight in 3844, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.6519823 = idf(docFreq=1146, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3844)
          0.040965077 = weight(_text_:22 in 3844) [ClassicSimilarity], result of:
            0.040965077 = score(doc=3844,freq=2.0), product of:
              0.15125708 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04319373 = queryNorm
              0.2708308 = fieldWeight in 3844, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3844)
      0.4 = coord(2/5)
    
    Abstract
    The MARC Program of the Library of Congress, led during its formative years by the author of this entry, was a landmark in the history of automation. Technical procedures, standards, and formatting for the catalog record were experimented with and developed in modern form in this project. The project began when computers were mainframe, slow, and limited in storage. So little was known then about many aspects of automation of library information resources that the MARC project can be seen as a pioneering effort with immeasurable impact.
    Date
    27. 8.2011 14:22:53
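    The nested score breakdown above (and in the entries that follow) is Lucene's ClassicSimilarity explanation: tf is the square root of the term frequency, queryWeight is idf × queryNorm, fieldWeight is tf × idf × fieldNorm, and coord scales the sum of the matching clauses by the fraction of query clauses that matched. As a minimal sketch (plain Python reproducing the arithmetic, not a call into Lucene), the figures reported for the "computers" term in this record work out as follows:

```python
import math

# Reproducing the "computers" contribution from the explain tree of record no. 1.
# ClassicSimilarity's idf is 1 + ln(maxDocs / (docFreq + 1)):
#   1 + ln(44218 / 626) ~ 5.2575
freq       = 2.0          # termFreq of "computers" in the field
idf        = 5.257537     # idf(docFreq=625, maxDocs=44218)
query_norm = 0.04319373   # queryNorm
field_norm = 0.0546875    # fieldNorm(doc=3844)

tf           = math.sqrt(freq)              # 1.4142135
query_weight = idf * query_norm             # 0.22709264
field_weight = tf * idf * field_norm        # 0.40661687
term_score   = query_weight * field_weight  # ~0.092339694

# The document score sums the matching clauses ("computers" plus the
# "history"/"22" sub-sum of 0.11325869) and applies coord(2/5) = 0.4.
doc_score = (term_score + 0.11325869) * (2 / 5)
print(term_score, doc_score)  # ~0.0923397  ~0.08223935
```

    The same formula accounts for the other explain trees in this result list; only the term frequencies, fieldNorm values and the coord fraction change from record to record.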
  2. Tennant, R.: ¬A bibliographic metadata infrastructure for the twenty-first century (2004) 0.06
    0.05545435 = product of:
      0.13863587 = sum of:
        0.10553108 = weight(_text_:computers in 2845) [ClassicSimilarity], result of:
          0.10553108 = score(doc=2845,freq=2.0), product of:
            0.22709264 = queryWeight, product of:
              5.257537 = idf(docFreq=625, maxDocs=44218)
              0.04319373 = queryNorm
            0.464705 = fieldWeight in 2845, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.257537 = idf(docFreq=625, maxDocs=44218)
              0.0625 = fieldNorm(doc=2845)
        0.033104785 = product of:
          0.06620957 = sum of:
            0.06620957 = weight(_text_:22 in 2845) [ClassicSimilarity], result of:
              0.06620957 = score(doc=2845,freq=4.0), product of:
                0.15125708 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04319373 = queryNorm
                0.4377287 = fieldWeight in 2845, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2845)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The current library bibliographic infrastructure was constructed in the early days of computers - before the Web, XML, and a variety of other technological advances that now offer new opportunities. General requirements of a modern metadata infrastructure for libraries are identified, including such qualities as versatility, extensibility, granularity, and openness. A new kind of metadata infrastructure is then proposed that exhibits at least some of those qualities. Some key challenges that must be overcome to implement a change of this magnitude are identified.
    Date
    9.12.2005 19:22:38
    Source
    Library hi tech. 22(2004) no.2, S.175-181
  3. Mishra, K.S.: Bibliographic databases and exchange formats (1997) 0.05
    0.05157588 = product of:
      0.1289397 = sum of:
        0.10553108 = weight(_text_:computers in 1757) [ClassicSimilarity], result of:
          0.10553108 = score(doc=1757,freq=2.0), product of:
            0.22709264 = queryWeight, product of:
              5.257537 = idf(docFreq=625, maxDocs=44218)
              0.04319373 = queryNorm
            0.464705 = fieldWeight in 1757, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.257537 = idf(docFreq=625, maxDocs=44218)
              0.0625 = fieldNorm(doc=1757)
        0.023408616 = product of:
          0.046817232 = sum of:
            0.046817232 = weight(_text_:22 in 1757) [ClassicSimilarity], result of:
              0.046817232 = score(doc=1757,freq=2.0), product of:
                0.15125708 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04319373 = queryNorm
                0.30952093 = fieldWeight in 1757, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1757)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Computers play an important role in the development of bibliographic databases. Exchange formats are needed for the generation and exchange of bibliographic data at different levels: international, national, regional and local. Discusses the formats available at the national and international levels, such as the International Standard Exchange Format (ISO 2709), the various MARC formats and the Common Communication Format (CCF). Work on Indian standards, involving the Bureau of Indian Standards, the National Information System for Science and Technology (NISSAT) and other institutions, proceeds only slowly
    Source
    DESIDOC bulletin of information technology. 17(1997) no.5, S.17-22
  4. IFLA Cataloguing Principles : steps towards an International Cataloguing Code. Report from the 1st Meeting of Experts on an International Cataloguing Code, Frankfurt 2003 (2004) 0.03
    0.025915245 = product of:
      0.06478811 = sum of:
        0.05446045 = weight(_text_:analog in 2312) [ClassicSimilarity], result of:
          0.05446045 = score(doc=2312,freq=2.0), product of:
            0.32627475 = queryWeight, product of:
              7.5537524 = idf(docFreq=62, maxDocs=44218)
              0.04319373 = queryNorm
            0.16691592 = fieldWeight in 2312, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.5537524 = idf(docFreq=62, maxDocs=44218)
              0.015625 = fieldNorm(doc=2312)
        0.010327659 = product of:
          0.020655317 = sum of:
            0.020655317 = weight(_text_:history in 2312) [ClassicSimilarity], result of:
              0.020655317 = score(doc=2312,freq=2.0), product of:
                0.20093648 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.04319373 = queryNorm
                0.10279526 = fieldWeight in 2312, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2312)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Footnote
    The next section collects three papers, all presented at the meeting by the people best placed to address the topics authoritatively and comprehensively. The first is by John D. Byrum, of the Library of Congress, and Chair of the ISBD Review Group, who clearly and concisely explains the history and role of the ISBDs in "IFLA's ISBD Programme. Purpose, process, and prospects." The next paper, "Brave new FRBR world," is by Patrick Le Boeuf, of the Bibliothèque nationale de France and Chair of the FRBR Review Group (a French version is available on the website). Drawing from his extensive expertise with FRBR, Le Boeuf explains what FRBR is and, equally importantly, is not, points to its impact in the present context of Code revision, and discusses insights relevant to the working group topics that can be drawn from FRBR. Closing this section is Barbara Tillett's contribution "A Virtual International Authority File," which signals an important change in thinking about international cooperation for bibliographic control. Earlier efforts focussed on getting agreement about the form and structure of headings; this view stresses linking authority files to share the intellectual effort yet present headings to the user in the form that is most appropriate culturally.
    Ton Heijligers reflects on the relation of the IME ICC effort to AACR and calls for an examination of the principles and function of the concept of main entry in his brief paper "Main entry into the future?" Ingrid Parent's article "From ISBD(S) to ISBD(CR): a voyage of discovery and alignment" is reprinted from Serials Librarian as it tells of the successful project not only to revise an ISBD, but also to harmonize three Codes for serials cataloguing: ISBD(CR), ISSN and AACR. Gunilla Jonsson's paper "The bibliographic unit in the digital context" is a perceptive discussion of the level-of-granularity issues which must be addressed in deciding what to catalogue. Practical issues and user expectations are important considerations, whether the material to be catalogued is digital or analog. Ann Huthwaite's paper "Class of materials concept and GMDs," as well as Tom Delsey's ensuing comments, originated as Joint Steering Committee restricted papers in 2002. It is a great service to have them made widely available in this form, as they raise fundamental issues and motivate work that has since taken place, leading to the current major round of revision to AACR. The GMD issue is about more than a list of terms and their placement in the cataloguing record; it is intertwined with consideration of whether the concept of classes of materials is helpful in organizing cataloguing rules, if so, which classes are needed, and how to allow for eventual integration of new types of materials. Useful in the Code comparison exercise is an extract of the section on access points from the draft of the revised RAK (German cataloguing rules). Four short papers compare aspects of the Russian Cataloguing Rules with RAK and AACR: Tatiana Maskhoulia covers corporate body headings; Elena Zagorskaya outlines current developments on serials and other continuing resources; Natalia N. Kasparova covers multilevel structures; Ljubov Ermakova and Tamara Bakhturina describe the uniform title and GMD provisions. The website includes one more item by Kasparova, "Bibliographic record language in multilingual electronic communication." The volume is rounded out by the appendix, which includes the conference agenda, the full list of participants, and the reports from the five working groups. Not for the casual reader, this volume is a must-read for anyone working on cataloguing code development at the national or international level, as well as those teaching cataloguing. Any practising cataloguer will benefit from reading the draft statement of principles and the three presentation papers, and dipping into the background papers."
  5. Cundiff, M.V.: ¬An introduction to the Metadata Encoding and Transmission Standard (METS) (2004) 0.03
    0.025887702 = product of:
      0.1294385 = sum of:
        0.1294385 = sum of:
          0.08262127 = weight(_text_:history in 2834) [ClassicSimilarity], result of:
            0.08262127 = score(doc=2834,freq=2.0), product of:
              0.20093648 = queryWeight, product of:
                4.6519823 = idf(docFreq=1146, maxDocs=44218)
                0.04319373 = queryNorm
              0.41118103 = fieldWeight in 2834, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.6519823 = idf(docFreq=1146, maxDocs=44218)
                0.0625 = fieldNorm(doc=2834)
          0.046817232 = weight(_text_:22 in 2834) [ClassicSimilarity], result of:
            0.046817232 = score(doc=2834,freq=2.0), product of:
              0.15125708 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04319373 = queryNorm
              0.30952093 = fieldWeight in 2834, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2834)
      0.2 = coord(1/5)
    
    Abstract
    This article provides an introductory overview of the Metadata Encoding and Transmission Standard, better known as METS. It will be of most use to librarians and technical staff who are encountering METS for the first time. The article contains a brief history of the development of METS, a primer covering the basic structure and content of METS documents, and a discussion of several issues relevant to the implementation and continuing development of METS including object models, extension schemata, and application profiles.
    Source
    Library hi tech. 22(2004) no.1, S.52-64
  6. Guenther, R.S.: Using the Metadata Object Description Schema (MODS) for resource description : guidelines and applications (2004) 0.02
    0.022651738 = product of:
      0.11325869 = sum of:
        0.11325869 = sum of:
          0.07229361 = weight(_text_:history in 2837) [ClassicSimilarity], result of:
            0.07229361 = score(doc=2837,freq=2.0), product of:
              0.20093648 = queryWeight, product of:
                4.6519823 = idf(docFreq=1146, maxDocs=44218)
                0.04319373 = queryNorm
              0.3597834 = fieldWeight in 2837, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.6519823 = idf(docFreq=1146, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2837)
          0.040965077 = weight(_text_:22 in 2837) [ClassicSimilarity], result of:
            0.040965077 = score(doc=2837,freq=2.0), product of:
              0.15125708 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04319373 = queryNorm
              0.2708308 = fieldWeight in 2837, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2837)
      0.2 = coord(1/5)
    
    Abstract
    This paper describes the Metadata Object Description Schema (MODS), its accompanying documentation and some of its applications. It reviews the MODS user guidelines provided by the Library of Congress and how they enable a user of the schema to consistently apply MODS as a metadata scheme. Because the schema itself could not fully document appropriate usage, the guidelines provide element definitions, history, relationships with other elements, usage conventions, and examples. Short descriptions of some MODS applications are given and a more detailed discussion of its use in the Library of Congress's Minerva project for Web archiving is given.
    Source
    Library hi tech. 22(2004) no.1, S.89-98
  7. ISO 25964 Thesauri and interoperability with other vocabularies (2008) 0.02
    0.02202626 = product of:
      0.055065647 = sum of:
        0.039574157 = weight(_text_:computers in 1169) [ClassicSimilarity], result of:
          0.039574157 = score(doc=1169,freq=2.0), product of:
            0.22709264 = queryWeight, product of:
              5.257537 = idf(docFreq=625, maxDocs=44218)
              0.04319373 = queryNorm
            0.17426437 = fieldWeight in 1169, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.257537 = idf(docFreq=625, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1169)
        0.015491488 = product of:
          0.030982977 = sum of:
            0.030982977 = weight(_text_:history in 1169) [ClassicSimilarity], result of:
              0.030982977 = score(doc=1169,freq=2.0), product of:
                0.20093648 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.04319373 = queryNorm
                0.1541929 = fieldWeight in 1169, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1169)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    T.1: Today's thesauri are mostly electronic tools, having moved on from the paper-based era when thesaurus standards were first developed. They are built and maintained with the support of software and need to integrate with other software, such as search engines and content management systems. Whereas in the past thesauri were designed for information professionals trained in indexing and searching, today there is a demand for vocabularies that untrained users will find to be intuitive. ISO 25964 makes the transition needed for the world of electronic information management. However, part 1 retains the assumption that human intellect is usually involved in the selection of indexing terms and in the selection of search terms. If both the indexer and the searcher are guided to choose the same term for the same concept, then relevant documents will be retrieved. This is the main principle underlying thesaurus design, even though a thesaurus built for human users may also be applied in situations where computers make the choices. Efficient exchange of data is a vital component of thesaurus management and exploitation. Hence the inclusion in this standard of recommendations for exchange formats and protocols. Adoption of these will facilitate interoperability between thesaurus management systems and the other computer applications, such as indexing and retrieval systems, that will utilize the data. Thesauri are typically used in post-coordinate retrieval systems, but may also be applied to hierarchical directories, pre-coordinate indexes and classification systems. Increasingly, thesaurus applications need to mesh with others, such as automatic categorization schemes, free-text search systems, etc. Part 2 of ISO 25964 describes additional types of structured vocabulary and gives recommendations to enable interoperation of the vocabularies at all stages of the information storage and retrieval process.
    T.2: The ability to identify and locate relevant information among vast collections and other resources is a major and pressing challenge today. Several different types of vocabulary are in use for this purpose. Some of the most widely used vocabularies were designed a hundred years ago and have been evolving steadily. A different generation of vocabularies is now emerging, designed to exploit the electronic media more effectively. A good understanding of the previous generation is still essential for effective access to collections indexed with them. An important object of ISO 25964 as a whole is to support data exchange and other forms of interoperability in circumstances in which more than one structured vocabulary is applied within one retrieval system or network. Sometimes one vocabulary has to be mapped to another, and it is important to understand both the potential and the limitations of such mappings. In other systems, a thesaurus is mapped to a classification scheme, or an ontology to a thesaurus. Comprehensive interoperability needs to cover the whole range of vocabulary types, whether young or old. Concepts in different vocabularies are related only in that they have the same or similar meaning. However, the meaning can be found in a number of different aspects within each particular type of structured vocabulary: - within terms or captions selected in different languages; - in the notation assigned indicating a place within a larger hierarchy; - in the definition, scope notes, history notes and other notes that explain the significance of that concept; and - in explicit relationships to other concepts or entities within the same vocabulary. In order to create mappings from one structured vocabulary to another it is first necessary to understand, within the context of each different type of structured vocabulary, the significance and relative importance of each of the different elements in defining the meaning of that particular concept. ISO 25964-1 describes the key characteristics of thesauri along with additional advice on best practice. ISO 25964-2 focuses on other types of vocabulary and does not attempt to cover all aspects of good practice. It concentrates on those aspects which need to be understood if one of the vocabularies is to work effectively alongside one or more of the others. Recognizing that a new standard cannot be applied to some existing vocabularies, this part of ISO 25964 provides informative description alongside the recommendations, the aim of which is to enable users and system developers to interpret and implement the existing vocabularies effectively. The remainder of ISO 25964-2 deals with the principles and practicalities of establishing mappings between vocabularies.
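    The central principle stated above - indexer and searcher are guided to the same preferred term for the same concept, so relevant documents are retrieved - can be illustrated with a minimal sketch (a generic toy example, not the ISO 25964 data model; the terms and document identifiers are invented):

```python
# Toy illustration of thesaurus-guided indexing and retrieval: both the
# indexer's and the searcher's terms are normalized to the same preferred
# term, so documents indexed under a concept are found whichever synonym
# the searcher uses. All terms and ids below are invented for the example.
use_reference = {                 # non-preferred term -> preferred term (USE)
    "record format": "data format",
    "exchange format": "data format",
}

index: dict[str, set[str]] = {}   # preferred term -> document ids

def prefer(term: str) -> str:
    return use_reference.get(term, term)

def index_document(doc_id: str, terms: list[str]) -> None:
    for term in terms:
        index.setdefault(prefer(term), set()).add(doc_id)

def search(term: str) -> set[str]:
    return index.get(prefer(term), set())

index_document("doc1", ["record format"])   # indexer chose one synonym
print(search("exchange format"))            # {'doc1'}: same concept matched
```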
  8. Kokabi, M.: ¬The Iranian adaptation of UNIMARC (1997) 0.02
    0.021106217 = product of:
      0.10553108 = sum of:
        0.10553108 = weight(_text_:computers in 537) [ClassicSimilarity], result of:
          0.10553108 = score(doc=537,freq=2.0), product of:
            0.22709264 = queryWeight, product of:
              5.257537 = idf(docFreq=625, maxDocs=44218)
              0.04319373 = queryNorm
            0.464705 = fieldWeight in 537, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.257537 = idf(docFreq=625, maxDocs=44218)
              0.0625 = fieldNorm(doc=537)
      0.2 = coord(1/5)
    
    Abstract
    Outline of a thesis produced at the University of New South Wales School of Information, Library and Archive Studies, the first serious study of MARC for Iran despite 12 years of the presence of computers in Iranian libraries. Considers the various MARC formats, reasons for choosing UNIMARC, and the use of the Farsi language in machines. Lists the modifications required to UNIMARC for use in the Iranian National Bibliography
  9. Sandberg-Fox, A.M.: ¬The microcomputer revolution (2001) 0.02
    0.018467939 = product of:
      0.092339694 = sum of:
        0.092339694 = weight(_text_:computers in 5409) [ClassicSimilarity], result of:
          0.092339694 = score(doc=5409,freq=2.0), product of:
            0.22709264 = queryWeight, product of:
              5.257537 = idf(docFreq=625, maxDocs=44218)
              0.04319373 = queryNorm
            0.40661687 = fieldWeight in 5409, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.257537 = idf(docFreq=625, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5409)
      0.2 = coord(1/5)
    
    Abstract
    With the introduction of the microcomputer in the 1980s, a revolution of sorts was initiated. In libraries this was evidenced by the acquisition of personal computers and the software to run on them. All that catalogers needed were cataloging rules and a MARC format to ensure their bibliographic control. However, little did catalogers realize they were dealing with an industry that introduced rapid technological changes, which effected continual revision of existing rules and the formulation of special guidelines to deal with the industry's innovative products. This article focuses on the attempts of libraries and organized cataloging groups to develop the Chapter 9 descriptive cataloging rules in AACR2; it highlights selected events and includes cataloging examples that illustrate the evolution of the chapter.
  10. Block, B.; Hengel, C.; Heuvelmann, R.; Katz, C.; Rusch, B.; Schmidgall, K.; Sigrist, B.: Maschinelles Austauschformat für Bibliotheken und die Functional Requirements for Bibliographic Records : Oder: Wieviel FRBR verträgt MAB? (2005) 0.02
    0.016338138 = product of:
      0.081690684 = sum of:
        0.081690684 = weight(_text_:analog in 467) [ClassicSimilarity], result of:
          0.081690684 = score(doc=467,freq=2.0), product of:
            0.32627475 = queryWeight, product of:
              7.5537524 = idf(docFreq=62, maxDocs=44218)
              0.04319373 = queryNorm
            0.2503739 = fieldWeight in 467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.5537524 = idf(docFreq=62, maxDocs=44218)
              0.0234375 = fieldNorm(doc=467)
      0.2 = coord(1/5)
    
    Abstract
    Starting from the entities described, interesting applications can be built; OCLC's FictionFinder, the Red-Light-Green OPAC of the Research Library Group and the AustLit project deserve mention in this context. What is interesting in all of these projects is the attempt to restructure large result sets and, in the AustLit project, the multidimensional navigation built on the relationships described in the FRBR model. It is no coincidence that the applications named come from the humanities. Using the WorldCat catalogue as an example, it can be shown that only about 20% of the existing catalogue records describe all four FRBR levels or allow them to be derived from the existing catalogue data. The Bible, the Koran and works of world literature such as Goethe's Faust are the prominent examples usually cited. For the bulk of the material described, however, the Work, Expression and Manifestation levels are only weakly developed; university theses are one example. The discussion in the German-speaking countries is still in its early stages, as is shown, among other things, by the fact that there are not yet any established German translations even for the terminology. Engagement with the FRBR model is further advanced in the USA: not only is the terminology to be explicitly anchored in the future cataloguing code, there is also a detailed study by Tom Delsey that applies the FRBR entities with all their attributes to the MARC21 format. As a first approach, the Expertengruppe Datenformate produced a corresponding table for MAB, analogous to the MARC mapping. This table, however, is explicitly a draft and will not be developed further for the time being. In addition, a small program was written that displays MAB data structured according to FRBR and thus makes the various entities visible.
  11. Salgáné, M.M.: Our electronic era and bibliographic informations computer-related bibliographic data formats, metadata formats and BDML (2005) 0.01
    0.014924349 = product of:
      0.074621744 = sum of:
        0.074621744 = weight(_text_:computers in 3005) [ClassicSimilarity], result of:
          0.074621744 = score(doc=3005,freq=4.0), product of:
            0.22709264 = queryWeight, product of:
              5.257537 = idf(docFreq=625, maxDocs=44218)
              0.04319373 = queryNorm
            0.32859606 = fieldWeight in 3005, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.257537 = idf(docFreq=625, maxDocs=44218)
              0.03125 = fieldNorm(doc=3005)
      0.2 = coord(1/5)
    
    Abstract
    Using new communication technologies, libraries continually face new questions, possibilities and expectations. This study discusses library-related aspects of our electronic era and how computer-related data formats affect bibliographic data processing, summarizing the most important results. The first bibliographic formats for the exchange of bibliographic and related information in machine-readable form between different types of computer systems were created more than 30 years ago. The evolution of information technologies leads to the improvement of computer systems. In addition to the development of computers and media types, the Internet has a great influence on data structures as well. Since the introduction of the MARC bibliographic format, the technology of data exchange between computers and between different computer systems has reached a very sophisticated stage and has contributed to the creation of new standards in this field. Today libraries work with this new infrastructure, which brings many challenges. One of the most significant challenges is moving from a relatively homogeneous bibliographic environment to a diverse one. Despite these challenges, such changes are achievable and necessary to exploit the possibilities of new metadata and technologies like the Internet and XML (Extensible Markup Language). XML is an open standard, a universal language for data on the Web: a nearly six-year-old standard designed for the description and computer-based management of (semi-)structured data and structured texts. XML gives developers the power to deliver structured data from a wide variety of applications, and it is also an ideal format for server-to-server transfer of structured data. XML is not limited to Internet use and is an especially valuable tool in the library field. In fact, XML's main strength - organizing information - makes it perfect for exchanging data between different systems. Tools that work with XML can be used to process XML records without incurring the additional costs associated with in-house software development. In addition, XML is also a suitable format for library web services. The Department of Computer-related Graphic Design and Library and Information Sciences of Debrecen University launched the BDML (Bibliographic Description Markup Language) development project in order to standardize bibliographic description with the help of XML.
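    As a small illustration of the point about XML as an exchange format for structured bibliographic data, the sketch below serializes and re-parses a minimal record (the element names are invented for the example; they are not taken from the BDML schema):

```python
# Minimal sketch: a bibliographic record serialized as self-describing XML
# that any XML-aware system can parse back. Element names are illustrative
# only and do not come from the BDML schema.
import xml.etree.ElementTree as ET

record = ET.Element("record")
ET.SubElement(record, "title").text = "A bibliographic metadata infrastructure for the twenty-first century"
ET.SubElement(record, "creator").text = "Tennant, R."
ET.SubElement(record, "date").text = "2004"
ET.SubElement(record, "source").text = "Library hi tech. 22(2004) no.2, S.175-181"

xml_bytes = ET.tostring(record, encoding="utf-8")  # bytes ready for exchange
parsed = ET.fromstring(xml_bytes)                  # the receiving system parses them back
print(parsed.findtext("title"))
```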
  12. Weiss, P.J.: Everything you always wanted to know about format integration, but were afraid to ask (1994) 0.01
    0.0123931905 = product of:
      0.061965954 = sum of:
        0.061965954 = product of:
          0.12393191 = sum of:
            0.12393191 = weight(_text_:history in 733) [ClassicSimilarity], result of:
              0.12393191 = score(doc=733,freq=2.0), product of:
                0.20093648 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.04319373 = queryNorm
                0.6167716 = fieldWeight in 733, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.09375 = fieldNorm(doc=733)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Examines the impetus, history, effects and implementation of format integration of the USMARC bibliographic formats
  13. Duchemin, P.-Y.: BN OPALINE : the map database in the Department des Cartes et Plans de la Bibliothèque Nationale: history (1993) 0.01
    0.011684412 = product of:
      0.05842206 = sum of:
        0.05842206 = product of:
          0.11684412 = sum of:
            0.11684412 = weight(_text_:history in 916) [ClassicSimilarity], result of:
              0.11684412 = score(doc=916,freq=4.0), product of:
                0.20093648 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.04319373 = queryNorm
                0.5814978 = fieldWeight in 916, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.0625 = fieldNorm(doc=916)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Provides a brief history of the Department des Cartes et Plans de la Bibliothèque Nationale in Paris before going on to discuss the computerization of its collections. Describes the computer format for cartographic documents which the Department has developed, INTERMARC(C), which is an extension of INTERMARC, the French version of MARC. Then discusses BN-OPALINE, the Department's computerized system for cartographic materials, and its functionality. Finally looks to the future and the Department's hope to create a national union catalogue of cartographic documents
  14. Wolters, C.: Wie muß man seine Daten formulieren bzw. strukturieren, damit ein Computer etwas Vernünftiges damit anfangen kann? : Mit einem Glossar von Carlos Saro (1991) 0.01
    0.0105531085 = product of:
      0.05276554 = sum of:
        0.05276554 = weight(_text_:computers in 4013) [ClassicSimilarity], result of:
          0.05276554 = score(doc=4013,freq=2.0), product of:
            0.22709264 = queryWeight, product of:
              5.257537 = idf(docFreq=625, maxDocs=44218)
              0.04319373 = queryNorm
            0.2323525 = fieldWeight in 4013, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.257537 = idf(docFreq=625, maxDocs=44218)
              0.03125 = fieldNorm(doc=4013)
      0.2 = coord(1/5)
    
    Abstract
    The documentation department of the Institut für Museumskunde of the Staatliche Museen Preußischer Kulturbesitz (IfM) has the task of supporting museums and museum-like institutions throughout Germany, with advice and practical help, in introducing information technology; in doing so it cooperates with the Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB). These activities belong to the field of professionally conducted rationalization: computerization is not an end in itself, but a way of carrying out work that has to be done anyway more efficiently in terms of time and cost, or of doing things for which neither the time nor the money would be available using conventional methods. Because the museums' demand for advice is rising rapidly, a small institute such as the IfM is still far from able to convey all the knowledge and skills this requires. We therefore concentrate our activities on cooperation with the institutions in the federal states that are responsible for supporting museums, and we hope that more and more states will develop their own services for this task. Until that is achieved, the IfM tries at least to offer interested museums "help towards self-help": even where a state is not yet able to advise its museums professionally in this area, individual museums should at least find it easier to inform themselves. As to the content: in the mainframe era, everyone involved still agreed that professional help was needed to use information technology; people were prepared to adapt to the demands of the machine and tried to behave in a "computer-appropriate" way. The introduction of easy-to-use and powerful computers in the office environment has fundamentally changed these working conditions. Even people who know nothing about computers can now learn within a few days to work quite naturally with word-processing programs, and they therefore expect a computer used for inventorying collections to be just as unproblematic. In German museums, however, we are still a long way from programs of that user-friendliness. The reason is simple: hundreds or even thousands of person-years have by now been invested in the easy-to-use programs just mentioned, and a considerable part of those resources went into adapting the computer to the specific needs of particular workplaces and into training the staff who work with it. It will probably be some time before the same is true for the museum; this market is too small for such investments to pay off quickly on a purely commercial basis. The Institut für Museumskunde tries to provide assistance here. The present issue 33 of the "Materialien" series grew out of consultations and courses; it attempts to convey, in the form of a textbook and reader, the basic computer knowledge that museum staff indispensably need for introducing information technology in the museum. It thus follows directly on from issue 30 (Jane Sunderland and Leonore Sarasan, Was muß man alles tun, um den Computer im Museum erfolgreich einzusetzen?) and is intended to be used together with it.
  15. Studwell, W.E.; Rast, E.K.: Format integration and spatial data : a preliminary view (1993) 0.01
    0.010327659 = product of:
      0.051638294 = sum of:
        0.051638294 = product of:
          0.10327659 = sum of:
            0.10327659 = weight(_text_:history in 6698) [ClassicSimilarity], result of:
              0.10327659 = score(doc=6698,freq=2.0), product of:
                0.20093648 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.04319373 = queryNorm
                0.5139763 = fieldWeight in 6698, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6698)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Provides a brief history and explains the principle of format integration. Gives 2 examples of monographic map records taken from the OCLC database. Explains significant or substantial differences between the present fixed fields and the projected fields for both these examples
  16. Bourne, R.: Common MARC, or vivent les differences? (1996) 0.01
    0.010327659 = product of:
      0.051638294 = sum of:
        0.051638294 = product of:
          0.10327659 = sum of:
            0.10327659 = weight(_text_:history in 4690) [ClassicSimilarity], result of:
              0.10327659 = score(doc=4690,freq=2.0), product of:
                0.20093648 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.04319373 = queryNorm
                0.5139763 = fieldWeight in 4690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4690)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Investigates the history of the machine readable catalogue (MARC). Compares US and UK attitudes to MARC and raises the issue that the US and UK standards are incompatible. Suggests that the two should be able to integrate and gives reasons for this
  17. Bourne, R.: Common MARC, or 'vivent les differences'? (1996) 0.01
    0.010327659 = product of:
      0.051638294 = sum of:
        0.051638294 = product of:
          0.10327659 = sum of:
            0.10327659 = weight(_text_:history in 6727) [ClassicSimilarity], result of:
              0.10327659 = score(doc=6727,freq=2.0), product of:
                0.20093648 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.04319373 = queryNorm
                0.5139763 = fieldWeight in 6727, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6727)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Investigates the history of the machine readable catalogue and the role of the MARC format. Compares US and UK attitudes to MARC and raises the issue of the incompatibility of the US and British standards. Suggests that they should be able to integrate, giving reasons for this
  18. Jimenez, V.O.R.: Nuevas perspectivas para la catalogacion : metadatos ver MARC (1999) 0.01
    0.009931436 = product of:
      0.049657177 = sum of:
        0.049657177 = product of:
          0.099314354 = sum of:
            0.099314354 = weight(_text_:22 in 5743) [ClassicSimilarity], result of:
              0.099314354 = score(doc=5743,freq=4.0), product of:
                0.15125708 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04319373 = queryNorm
                0.6565931 = fieldWeight in 5743, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5743)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    30. 3.2002 19:45:22
    Source
    Revista Española de Documentaçion Cientifica. 22(1999) no.2, S.198-219
  19. MARC and metadata : METS, MODS, and MARCXML: current and future implications (2004) 0.01
    0.009363446 = product of:
      0.046817232 = sum of:
        0.046817232 = product of:
          0.093634464 = sum of:
            0.093634464 = weight(_text_:22 in 2840) [ClassicSimilarity], result of:
              0.093634464 = score(doc=2840,freq=2.0), product of:
                0.15125708 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04319373 = queryNorm
                0.61904186 = fieldWeight in 2840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=2840)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Library hi tech. 22(2004) no.1
  20. Williams, R.D.: MARC: thirty years and still going ... (1995) 0.01
    0.008262127 = product of:
      0.041310634 = sum of:
        0.041310634 = product of:
          0.08262127 = sum of:
            0.08262127 = weight(_text_:history in 4020) [ClassicSimilarity], result of:
              0.08262127 = score(doc=4020,freq=2.0), product of:
                0.20093648 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.04319373 = queryNorm
                0.41118103 = fieldWeight in 4020, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4020)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Traces the history of the MARC formats, for computerized bibliographic records and computerized cataloguing, from the initial work of the Library of Congress in the early 60s through to the various stages of development of the MARC Pilot Project (1966 to 1968); MARC2 (1968 to 1974); Distribution Service (1968); Retrospective Conversion (1968 to 1970); the Committee on Representation in Machine-Readable Form of Bibliographic Information (MARBI) and the USMARC Advisory Group; and the expansion, linkage and integration stages (1980 to the present)

Languages

  • e 55
  • d 14
  • pl 1
  • sp 1

Types

  • a 65
  • s 5
  • m 3
  • b 2
  • n 1