Search (75 results, page 1 of 4)

  • × theme_ss:"Datenformate"
  • × type_ss:"a"
  • × year_i:[1990 TO 2000}
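  The three filters above are Solr field queries; the mixed brackets in year_i:[1990 TO 2000} make the range inclusive of 1990 but exclusive of 2000. A minimal sketch of how such a filtered request could be assembled is given below; the host, core name and row count are assumptions for illustration, and only the three filter values are taken from this page.

    # Hypothetical reconstruction of the active filters as Solr fq parameters;
    # host, core name ("literature") and rows are placeholders.
    from urllib.parse import urlencode

    params = [
        ("q", "*:*"),                          # match everything, then filter
        ("fq", 'theme_ss:"Datenformate"'),     # subject facet
        ("fq", 'type_ss:"a"'),                 # document type facet
        ("fq", "year_i:[1990 TO 2000}"),       # 1990 inclusive .. 2000 exclusive
        ("rows", "20"),                        # 20 hits per page -> 4 pages for 75 results
        ("debugQuery", "true"),                # request the score explanations shown below
    ]
    print("http://localhost:8983/solr/literature/select?" + urlencode(params))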
  1. Cantrall, D.: From MARC to Mosaic : progressing toward data interchangeability at the Oregon State Archives (1994) 0.03
    0.03118084 = product of:
      0.12472336 = sum of:
        0.04767549 = weight(_text_:wide in 8470) [ClassicSimilarity], result of:
          0.04767549 = score(doc=8470,freq=2.0), product of:
            0.13912784 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.031400457 = queryNorm
            0.342674 = fieldWeight in 8470, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=8470)
        0.025864797 = weight(_text_:web in 8470) [ClassicSimilarity], result of:
          0.025864797 = score(doc=8470,freq=2.0), product of:
            0.10247572 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031400457 = queryNorm
            0.25239927 = fieldWeight in 8470, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=8470)
        0.012962549 = weight(_text_:information in 8470) [ClassicSimilarity], result of:
          0.012962549 = score(doc=8470,freq=6.0), product of:
            0.055122808 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.031400457 = queryNorm
            0.23515764 = fieldWeight in 8470, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=8470)
        0.038220525 = weight(_text_:software in 8470) [ClassicSimilarity], result of:
          0.038220525 = score(doc=8470,freq=2.0), product of:
            0.124570385 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031400457 = queryNorm
            0.30681872 = fieldWeight in 8470, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0546875 = fieldNorm(doc=8470)
      0.25 = coord(4/16)
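    The tree above is standard Lucene ClassicSimilarity (TF-IDF) explain output: each matching term contributes queryWeight * fieldWeight, with queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm, and the sum over terms is scaled by coord (matching clauses / total clauses). The sketch below recomputes the 0.03118084 shown for this record from the frequencies in the tree; the helper function is illustrative and not part of the Lucene API.

      import math

      def term_score(term_freq, doc_freq, max_docs, query_norm, field_norm):
          """One term's contribution under Lucene ClassicSimilarity."""
          idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))    # idf(docFreq, maxDocs)
          tf = math.sqrt(term_freq)                            # tf(freq)
          return (idf * query_norm) * (tf * idf * field_norm)  # queryWeight * fieldWeight

      query_norm, field_norm = 0.031400457, 0.0546875          # values from the tree above
      weights = [
          term_score(2.0, 1430,  44218, query_norm, field_norm),   # _text_:wide
          term_score(2.0, 4597,  44218, query_norm, field_norm),   # _text_:web
          term_score(6.0, 20772, 44218, query_norm, field_norm),   # _text_:information
          term_score(2.0, 2274,  44218, query_norm, field_norm),   # _text_:software
      ]
      print(sum(weights) * (4 / 16))   # coord(4/16); prints ~0.03118, matching the score above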
    
    Abstract
    Explains the technology used by the Oregon State Archives to realize the goal of data interchangeability given the prescribed nature of the MARC format. Describes an emergent model of learning and information delivery focusing on the example of the World Wide Web, the fastest growing segment of the Internet information highway, accessed most often through the software client Mosaic. Also describes The Data Magician, a flexible program that allows for many combinations of input and output formats and will read unconventional formats such as the MARC communications format. Using Mosaic and The Data Magician, the Oregon State Archives is consequently able to present valuable electronic information to a variety of users.
  2. Guenther, R.S.: Automating the Library of Congress Classification Scheme : implementation of the USMARC format for classification data (1996) 0.01
    0.014298417 = product of:
      0.07625822 = sum of:
        0.015816603 = product of:
          0.031633206 = sum of:
            0.031633206 = weight(_text_:online in 5578) [ClassicSimilarity], result of:
              0.031633206 = score(doc=5578,freq=4.0), product of:
                0.09529729 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031400457 = queryNorm
                0.33194235 = fieldWeight in 5578, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5578)
          0.5 = coord(1/2)
        0.022221092 = weight(_text_:retrieval in 5578) [ClassicSimilarity], result of:
          0.022221092 = score(doc=5578,freq=2.0), product of:
            0.09498371 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.031400457 = queryNorm
            0.23394634 = fieldWeight in 5578, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5578)
        0.038220525 = weight(_text_:software in 5578) [ClassicSimilarity], result of:
          0.038220525 = score(doc=5578,freq=2.0), product of:
            0.124570385 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031400457 = queryNorm
            0.30681872 = fieldWeight in 5578, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5578)
      0.1875 = coord(3/16)
    
    Abstract
    Potential uses for classification data in machine-readable form and the reasons for developing a standard, the USMARC Format for Classification Data, which allows classification data to interact with other USMARC bibliographic and authority data, are discussed. The development, structure, content, and use of the standard are reviewed, with implementation decisions for the Library of Congress Classification scheme noted. The author examines the implementation of USMARC classification at LC, the conversion of the schedules, and the functionality of the software being used. Problems in the effort are explored, and enhancements desired for the online classification system are considered.
    Theme
    Klassifikationssysteme im Online-Retrieval
  3. Guenther, R.S.: ¬The USMARC Format for Classification Data : development and implementation (1992) 0.01
    0.0097546335 = product of:
      0.052024715 = sum of:
        0.018076118 = product of:
          0.036152236 = sum of:
            0.036152236 = weight(_text_:online in 2996) [ClassicSimilarity], result of:
              0.036152236 = score(doc=2996,freq=4.0), product of:
                0.09529729 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031400457 = queryNorm
                0.37936267 = fieldWeight in 2996, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2996)
          0.5 = coord(1/2)
        0.008553064 = weight(_text_:information in 2996) [ClassicSimilarity], result of:
          0.008553064 = score(doc=2996,freq=2.0), product of:
            0.055122808 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.031400457 = queryNorm
            0.1551638 = fieldWeight in 2996, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2996)
        0.025395533 = weight(_text_:retrieval in 2996) [ClassicSimilarity], result of:
          0.025395533 = score(doc=2996,freq=2.0), product of:
            0.09498371 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.031400457 = queryNorm
            0.26736724 = fieldWeight in 2996, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=2996)
      0.1875 = coord(3/16)
    
    Abstract
    This paper discusses the newly developed USMARC Format for Classification Data. It reviews its potential uses within an online system and its development as one of the USMARC standards for representing bibliographic and related information in machine-readable form. It provides a summary of the fields in the format, and considers the prospects for its implementation.
    Theme
    Klassifikationssysteme im Online-Retrieval
  4. Guenther, R.S.: ¬The development and implementation of the USMARC format for classification data (1992) 0.01
    0.0097546335 = product of:
      0.052024715 = sum of:
        0.018076118 = product of:
          0.036152236 = sum of:
            0.036152236 = weight(_text_:online in 8865) [ClassicSimilarity], result of:
              0.036152236 = score(doc=8865,freq=4.0), product of:
                0.09529729 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031400457 = queryNorm
                0.37936267 = fieldWeight in 8865, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8865)
          0.5 = coord(1/2)
        0.008553064 = weight(_text_:information in 8865) [ClassicSimilarity], result of:
          0.008553064 = score(doc=8865,freq=2.0), product of:
            0.055122808 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.031400457 = queryNorm
            0.1551638 = fieldWeight in 8865, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=8865)
        0.025395533 = weight(_text_:retrieval in 8865) [ClassicSimilarity], result of:
          0.025395533 = score(doc=8865,freq=2.0), product of:
            0.09498371 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.031400457 = queryNorm
            0.26736724 = fieldWeight in 8865, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=8865)
      0.1875 = coord(3/16)
    
    Abstract
    This paper discusses the newly developed USMARC Format for Classification Data. It reviews its potential uses within an online system and its development as one of the USMARC standards. It provides a summary of the fields in the format and considers the prospects for its implementation. The paper describes an experiment currently being conducted at the Library of Congress to create USMARC classification records and to use a classification database in classifying materials in the social sciences.
    Source
    Information technology and libraries. 11(1992) no.2, S.120-131
    Theme
    Klassifikationssysteme im Online-Retrieval
  5. Paulus, W.; Weishaupt, K.: Bibliotheksdaten werden mehr wert : LibLink wertet bibliothekarische Dienstleistung auf (1996) 0.01
    0.009484049 = product of:
      0.07587239 = sum of:
        0.054600753 = weight(_text_:software in 5228) [ClassicSimilarity], result of:
          0.054600753 = score(doc=5228,freq=2.0), product of:
            0.124570385 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031400457 = queryNorm
            0.43831247 = fieldWeight in 5228, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=5228)
        0.021271642 = product of:
          0.042543285 = sum of:
            0.042543285 = weight(_text_:22 in 5228) [ClassicSimilarity], result of:
              0.042543285 = score(doc=5228,freq=2.0), product of:
                0.10995905 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031400457 = queryNorm
                0.38690117 = fieldWeight in 5228, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5228)
          0.5 = coord(1/2)
      0.125 = coord(2/16)
    
    Abstract
    Libraries' machine-readable catalogue data have so far been used mainly internally. LibLink, a software package developed at the Institut Arbeit und Technik, makes it possible to use these data in a scholarly context as well.
    Date
    29. 9.1996 18:58:22
  6. Kernernman, V.Y.; Koenig, M.E.D.: USMARC as a standardized format for the Internet hypermedia document control/retrieval/delivery system design (1996) 0.01
    0.009392499 = product of:
      0.050093327 = sum of:
        0.0111840265 = product of:
          0.022368053 = sum of:
            0.022368053 = weight(_text_:online in 5565) [ClassicSimilarity], result of:
              0.022368053 = score(doc=5565,freq=2.0), product of:
                0.09529729 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031400457 = queryNorm
                0.23471867 = fieldWeight in 5565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5565)
          0.5 = coord(1/2)
        0.0074839313 = weight(_text_:information in 5565) [ClassicSimilarity], result of:
          0.0074839313 = score(doc=5565,freq=2.0), product of:
            0.055122808 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.031400457 = queryNorm
            0.13576832 = fieldWeight in 5565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5565)
        0.031425368 = weight(_text_:retrieval in 5565) [ClassicSimilarity], result of:
          0.031425368 = score(doc=5565,freq=4.0), product of:
            0.09498371 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.031400457 = queryNorm
            0.33085006 = fieldWeight in 5565, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5565)
      0.1875 = coord(3/16)
    
    Abstract
    Surveys how the USMARC integrated bibliographic format (UBIF) could be mapped onto a hypermedia document USMARC format (HDUF) to meet the requirements of a hypermedia document control/retrieval/delivery (HDRD) system for the Internet. Explores the characteristics of such a system using the example of the WWW directory and search engine Yahoo!. Discusses the additional standard specifications for the UBIF's structure, content designation, and data content required to map this format into an HDUF that can serve as a proxy for the Net HDRD system.
    Imprint
    Medford, NJ : Information Today
    Source
    Proceedings of the 17th National Online Meeting 1996, New York, 14-16 May 1996. Ed.: M.E. Williams
  7. Oeltjen, W.: Dokumentenstrukturen manipulieren und visualisieren : über das Arbeiten mit der logischen Struktur (1998) 0.01
    0.009192536 = product of:
      0.073540285 = sum of:
        0.04767549 = weight(_text_:wide in 6616) [ClassicSimilarity], result of:
          0.04767549 = score(doc=6616,freq=2.0), product of:
            0.13912784 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.031400457 = queryNorm
            0.342674 = fieldWeight in 6616, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6616)
        0.025864797 = weight(_text_:web in 6616) [ClassicSimilarity], result of:
          0.025864797 = score(doc=6616,freq=2.0), product of:
            0.10247572 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031400457 = queryNorm
            0.25239927 = fieldWeight in 6616, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6616)
      0.125 = coord(2/16)
    
    Abstract
    This contribution deals with document structures from two perspectives: that of authors, who create a document with computer support and manipulate its structure, and that of readers, who read a document and perceive its structure. A distinction is drawn between the logical structure and the graphical structure of a document. This separation makes it possible to manipulate and visualize the logical structure. What this means for the authors and for the users of a document is discussed, among other things, using the example of the markup language HTML, the document description language of the World Wide Web.
  8. Gaschignard, J.-P.: UNIMARC et UNIMARC : attention aux contrefaçons (1997) 0.01
    0.008790846 = product of:
      0.07032677 = sum of:
        0.008553064 = weight(_text_:information in 921) [ClassicSimilarity], result of:
          0.008553064 = score(doc=921,freq=2.0), product of:
            0.055122808 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.031400457 = queryNorm
            0.1551638 = fieldWeight in 921, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=921)
        0.061773706 = weight(_text_:software in 921) [ClassicSimilarity], result of:
          0.061773706 = score(doc=921,freq=4.0), product of:
            0.124570385 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031400457 = queryNorm
            0.49589399 = fieldWeight in 921, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0625 = fieldNorm(doc=921)
      0.125 = coord(2/16)
    
    Abstract
    UNIMARC is widely used in French libraries for internal cataloguing, but in versions that differ significantly from the official IFLA form, while the BNF uses its own version for exporting bibliographic information. This situation has in part been created by software suppliers, who produce modified versions for small libraries without precisely detailing the variations. Problems will inevitably arise when such libraries change software or join cataloguing networks.
  9. Fattahi, R.: ¬A uniform approach to the indexing of cataloguing data in online library systems (1997) 0.01
    0.007315975 = product of:
      0.039018534 = sum of:
        0.013557088 = product of:
          0.027114175 = sum of:
            0.027114175 = weight(_text_:online in 131) [ClassicSimilarity], result of:
              0.027114175 = score(doc=131,freq=4.0), product of:
                0.09529729 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031400457 = queryNorm
                0.284522 = fieldWeight in 131, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=131)
          0.5 = coord(1/2)
        0.006414798 = weight(_text_:information in 131) [ClassicSimilarity], result of:
          0.006414798 = score(doc=131,freq=2.0), product of:
            0.055122808 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.031400457 = queryNorm
            0.116372846 = fieldWeight in 131, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=131)
        0.01904665 = weight(_text_:retrieval in 131) [ClassicSimilarity], result of:
          0.01904665 = score(doc=131,freq=2.0), product of:
            0.09498371 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.031400457 = queryNorm
            0.20052543 = fieldWeight in 131, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=131)
      0.1875 = coord(3/16)
    
    Abstract
    Argues that, in library cataloguing and for the optimal functionality of bibliographic records, the indexing of fields and subfields should follow a uniform approach. This would maintain effectiveness in the searching, retrieval and display of bibliographic information both within and between systems. However, a review of postings to the AUTOCAT and USMARC discussion lists indicates that the indexing and tagging of cataloguing data do not, at present, follow a consistent approach in online library systems. If the rationale of cataloguing principles is to bring uniformity to bibliographic description and effectiveness to access, they should also address the question of uniform approaches to the indexing of cataloguing data. In this context, and in terms of the identification and handling of data elements, cataloguing standards (codes, MARC formats and the Z39.50 standard) should be brought closer together, in that they should provide guidelines for the designation of data elements in machine-readable records.
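    The uniform approach argued for above can be pictured as one tag-to-index mapping applied identically in every system. The sketch below is purely illustrative; the index names and the choice of tags (100 = personal name, 245 = title statement, 650 = topical subject) are assumptions for the example, not drawn from the article.

      # Hypothetical uniform tag-to-index mapping; real systems index far more
      # fields and subfields, but would apply the same table everywhere.
      UNIFORM_INDEX_MAP = {
          "100": "author",
          "245": "title",
          "650": "subject",
      }

      def index_terms(marc_fields):
          """marc_fields: iterable of (tag, value) pairs -> (index_name, value) pairs."""
          for tag, value in marc_fields:
              index = UNIFORM_INDEX_MAP.get(tag)
              if index is not None:          # tags outside the map are simply not indexed here
                  yield index, value

      record = [("245", "A uniform approach to the indexing of cataloguing data"),
                ("100", "Fattahi, R."),
                ("650", "Online library systems")]
      print(list(index_terms(record)))       # same output whichever system runs it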
  10. Schwarz, I.; Umstätter, W.: Zum Prinzip der Objektdarstellung in SGML (1998) 0.01
    0.006566097 = product of:
      0.052528776 = sum of:
        0.03405392 = weight(_text_:wide in 6617) [ClassicSimilarity], result of:
          0.03405392 = score(doc=6617,freq=2.0), product of:
            0.13912784 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.031400457 = queryNorm
            0.24476713 = fieldWeight in 6617, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6617)
        0.018474855 = weight(_text_:web in 6617) [ClassicSimilarity], result of:
          0.018474855 = score(doc=6617,freq=2.0), product of:
            0.10247572 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031400457 = queryNorm
            0.18028519 = fieldWeight in 6617, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6617)
      0.125 = coord(2/16)
    
    Abstract
    Semantic thesauri are well suited to structuring knowledge. This contribution aims to show, among other things, that SGML (Standard Generalized Markup Language) is a possible instrument for building semantic thesauri. SGML is a metalanguage suited to giving natural-language texts structures that make it easier to recognize the information content of a document. At the same time, this creates the precondition for carrying out full-text indexing in a way that has not been possible before. The rapidly growing importance of SGML is undoubtedly due to the best-known Document Type Definition (DTD) within SGML, the Hypertext Markup Language (HTML), as used in the WWW (World Wide Web) of the Internet. Moreover, depending on the DTD, SGML meets the conditions for meaningfully supporting the object orientation of our natural language, with its definable concepts, and for processing it, for example, with the object-oriented programming language JAVA. Particularly noteworthy is the resulting change in the form of publication for knowledge-based texts, in which SGML documents are no longer to be considered only on their own, like journal articles or books, but can additionally be organized and retrieved as knowledge elements in a data and knowledge base.
  11. Chowdhury, G.G.: Record formats for integrated databases : a review and comparison (1996) 0.01
    0.0064032488 = product of:
      0.05122599 = sum of:
        0.01980062 = weight(_text_:information in 7679) [ClassicSimilarity], result of:
          0.01980062 = score(doc=7679,freq=14.0), product of:
            0.055122808 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.031400457 = queryNorm
            0.3592092 = fieldWeight in 7679, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7679)
        0.031425368 = weight(_text_:retrieval in 7679) [ClassicSimilarity], result of:
          0.031425368 = score(doc=7679,freq=4.0), product of:
            0.09498371 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.031400457 = queryNorm
            0.33085006 = fieldWeight in 7679, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7679)
      0.125 = coord(2/16)
    
    Abstract
    Discusses the issues involved in the development of data formats for computerized information retrieval systems. Integrated databases, capable of holding both bibliographic and factual information in a single database structure, are more convenient for searching and retrieval by end users. Several bibliographic formats have been developed and are used for these bibliographic control purposes. Reviews the features of 6 major bibliographic formats: USMARC, UKMARC, UNIMARC, CCF, MIBIS and ABNCD. Only 2 formats, CCF and ABNCD, are capable of holding both bibliographic and factual information and of supporting the design of integrated databases. The comparison suggests that, while CCF makes more detailed provision for bibliographic information, ABNCD makes better provision for factual information such as profiles of institutions, information systems, projects and human experts. (A schematic sketch of such an integrated structure follows this record.)
    Source
    Information development. 12(1996) no.4, S.218-223
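    As a rough illustration of what record no. 11 above calls an integrated database, i.e. a single structure holding both bibliographic and factual records, a minimal sketch follows; the field names are invented for the example and are not taken from CCF or ABNCD.

      from dataclasses import dataclass
      from typing import List, Union

      @dataclass
      class BibliographicRecord:          # e.g. a book or journal article
          title: str
          authors: List[str]
          year: int

      @dataclass
      class FactualRecord:                # e.g. an institution, project or expert profile
          name: str
          kind: str                       # "institution", "project", "expert", ...
          description: str = ""

      # A single store that accepts both record types is what makes the database "integrated".
      database: List[Union[BibliographicRecord, FactualRecord]] = []
      database.append(BibliographicRecord("Record formats for integrated databases",
                                          ["Chowdhury, G.G."], 1996))
      database.append(FactualRecord("Example Research Institute", "institution"))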
  12. Riemer, J.J.: Adding 856 Fields to authority records : rationale and implications (1998) 0.01
    0.006175569 = product of:
      0.04940455 = sum of:
        0.0111840265 = product of:
          0.022368053 = sum of:
            0.022368053 = weight(_text_:online in 3715) [ClassicSimilarity], result of:
              0.022368053 = score(doc=3715,freq=2.0), product of:
                0.09529729 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031400457 = queryNorm
                0.23471867 = fieldWeight in 3715, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3715)
          0.5 = coord(1/2)
        0.038220525 = weight(_text_:software in 3715) [ClassicSimilarity], result of:
          0.038220525 = score(doc=3715,freq=2.0), product of:
            0.124570385 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031400457 = queryNorm
            0.30681872 = fieldWeight in 3715, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3715)
      0.125 = coord(2/16)
    
    Abstract
    Discusses ways of applying MARC Field 856 (Electronic Location and Access) to authority records in online union catalogues. In principle, each catalogue site location can be treated as the electronic record of the work concerned, and MARC Field 856 can then refer to this location as if it were referring to the location of a primary record. Although URLs may become outdated, the fact that they are located in specifically defined MARC fields makes the data they contain amenable to the same link maintenance software as is used for the electronic records themselves. Includes practical examples of typical union catalogue records incorporating MARC Field 856.
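    Purely to illustrate the field under discussion, the sketch below renders a hypothetical 856 field in the conventional $-delimited display form; the URL, indicator values and note are invented, not taken from the article.

      def format_856(url, public_note=None):
          """Render an 856 (Electronic Location and Access) field for display."""
          subfields = [("u", url)]                   # $u = Uniform Resource Identifier
          if public_note:
              subfields.append(("z", public_note))   # $z = public note
          # First indicator 4 = HTTP access method; second indicator 0 = the resource itself.
          return "856 40 " + " ".join(f"${code} {value}" for code, value in subfields)

      print(format_856("http://example.org/union-catalogue/record/123",
                       "Electronic version of the work"))
      # -> 856 40 $u http://example.org/union-catalogue/record/123 $z Electronic version of the work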
  13. Bourdon, F.: Qu'est-ce qu'un format d'autorité? (1997) 0.01
    0.0061005503 = product of:
      0.048804402 = sum of:
        0.010583877 = weight(_text_:information in 902) [ClassicSimilarity], result of:
          0.010583877 = score(doc=902,freq=4.0), product of:
            0.055122808 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.031400457 = queryNorm
            0.1920054 = fieldWeight in 902, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=902)
        0.038220525 = weight(_text_:software in 902) [ClassicSimilarity], result of:
          0.038220525 = score(doc=902,freq=2.0), product of:
            0.124570385 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031400457 = queryNorm
            0.30681872 = fieldWeight in 902, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0546875 = fieldNorm(doc=902)
      0.125 = coord(2/16)
    
    Abstract
    Authority records complement bibliographic records, providing cataloguers with essential subject heading and related information. At present there is no international format standard comparable to ISBD for bibliographic records, though IFLA and the International Archives Council have set up working groups. The essential data form comprises the subject heading, its structure and homonyms, with supplementary supporting information. In France MARC formats are most widely used, e.g. UNIMARC(A) for authority records and UNIMARC(B) for bibliographic records. The National Library (BNF) is introducing new cataloguing software based on the reorganisation of its authority files, using integrated INTERMARC. As an experiment, readers will for the first time have access to the authority files, thus enriching, completing and clarifying the bibliographic records.
  14. Guenther, R.S.: Bringing the Library of Congress into the computer age : converting LCC to machine-readable form (1996) 0.01
    0.0059651993 = product of:
      0.047721595 = sum of:
        0.01597718 = product of:
          0.03195436 = sum of:
            0.03195436 = weight(_text_:online in 4578) [ClassicSimilarity], result of:
              0.03195436 = score(doc=4578,freq=2.0), product of:
                0.09529729 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031400457 = queryNorm
                0.33531237 = fieldWeight in 4578, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4578)
          0.5 = coord(1/2)
        0.031744417 = weight(_text_:retrieval in 4578) [ClassicSimilarity], result of:
          0.031744417 = score(doc=4578,freq=2.0), product of:
            0.09498371 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.031400457 = queryNorm
            0.33420905 = fieldWeight in 4578, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=4578)
      0.125 = coord(2/16)
    
    Theme
    Klassifikationssysteme im Online-Retrieval
  15. Willner, E.: Preparing data for the Web with SGML/XML (1998) 0.01
    0.005206953 = product of:
      0.041655622 = sum of:
        0.029559765 = weight(_text_:web in 2894) [ClassicSimilarity], result of:
          0.029559765 = score(doc=2894,freq=2.0), product of:
            0.10247572 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031400457 = queryNorm
            0.2884563 = fieldWeight in 2894, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=2894)
        0.012095859 = weight(_text_:information in 2894) [ClassicSimilarity], result of:
          0.012095859 = score(doc=2894,freq=4.0), product of:
            0.055122808 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.031400457 = queryNorm
            0.21943474 = fieldWeight in 2894, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2894)
      0.125 = coord(2/16)
    
    Abstract
    Solving the problem of information loss caused by format changes requires one more conversion, i.e. to SGML or XML. Describes the 2 formats and discusses the conversion issues involved. The sooner conversion to SGML or XML is begun, the better for the organization; if necessary, outside facilities can be called upon to provide the expertise. (A minimal conversion sketch follows this record.)
    Source
    Information today. 15(1998) no.5, S.54
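    The conversion recommended in record no. 15 above can be sketched, very roughly, for the XML half of the story; the record fields and element names below are invented for the example, and a real project would work from a proper DTD or schema.

      import xml.etree.ElementTree as ET

      def record_to_xml(record: dict) -> str:
          """Wrap a flat field dictionary in explicit XML markup so the logical
          structure survives future format and software changes."""
          root = ET.Element("record")
          for name, value in record.items():
              ET.SubElement(root, name).text = value
          return ET.tostring(root, encoding="unicode")

      print(record_to_xml({"title": "Preparing data for the Web with SGML/XML",
                           "author": "Willner, E.",
                           "year": "1998"}))
      # -> <record><title>...</title><author>Willner, E.</author><year>1998</year></record>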
  16. Leazer, G.H.: ¬A conceptual schema for the control of bibliographic works (1994) 0.01
    0.0051254225 = product of:
      0.04100338 = sum of:
        0.009258964 = weight(_text_:information in 3033) [ClassicSimilarity], result of:
          0.009258964 = score(doc=3033,freq=6.0), product of:
            0.055122808 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.031400457 = queryNorm
            0.16796975 = fieldWeight in 3033, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3033)
        0.031744417 = weight(_text_:retrieval in 3033) [ClassicSimilarity], result of:
          0.031744417 = score(doc=3033,freq=8.0), product of:
            0.09498371 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.031400457 = queryNorm
            0.33420905 = fieldWeight in 3033, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3033)
      0.125 = coord(2/16)
    
    Abstract
    In this paper I describe a conceptual design of a bibliographic retrieval system that enables more thorough control of bibliographic entities. A bibliographic entity has 2 components: the intellectual work and the physical item. Users searching bibliographic retrieval systems generally do not search for a specific item, but are willing to retrieve one of several alternative manifestations of a work. However, contemporary bibliographic retrieval systems are based solely on the descriptions of items. Works are described only implicitly, by collocating descriptions of items. This method has resulted in a tool that does not include important descriptive attributes of the work, e.g. information regarding its history, its genre, or its bibliographic relationships. A bibliographic relationship is an association between 2 bibliographic entities. A system evaluation methodology was used to create a conceptual schema for a bibliographic retrieval system. The model is based upon an analysis of data elements in the USMARC Formats for Bibliographic Data. The conceptual schema describes a database comprising 2 separate files of bibliographic descriptions, one of works and the other of items. Each file consists of individual descriptive surrogates of their respective entities. The specific data content of each file is defined by a data dictionary. Data elements used in the description of bibliographic works reflect the nature of works as intellectual and linguistic objects. The descriptive elements of bibliographic items describe the physical properties of bibliographic entities. Bibliographic relationships constitute the logical structure of the database. (A minimal sketch of this two-file schema follows this record.)
    Imprint
    Oxford : Learned Information
    Source
    Navigating the networks: Proceedings of the 1994 Mid-year Meeting of the American Society for Information Science, Portland, Oregon, May 21-25, 1994. Ed.: D.L. Andersen et al
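    The two-file schema described in record no. 16 above, with separate descriptive surrogates for works and for items linked by bibliographic relationships, can be sketched as follows; every attribute name here is an illustrative stand-in for the data dictionary the paper defines.

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Work:                        # the intellectual / linguistic object
          work_id: str
          uniform_title: str
          genre: str = ""
          history_note: str = ""
          related_work_ids: List[str] = field(default_factory=list)   # bibliographic relationships

      @dataclass
      class Item:                        # a physical manifestation of a work
          item_id: str
          work_id: str                   # link from the item file back to the work file
          edition: str = ""
          publisher: str = ""
          year: int = 0

      work = Work("w1", "An example work", genre="novel")
      items = [Item("i1", work.work_id, edition="1st ed.", publisher="Example Press", year=1884),
               Item("i2", work.work_id, edition="Reprint", year=1985)]
      # A user searching for the work can be offered either item as an alternative manifestation.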
  17. Horah, J.L.: From cards to the Web : ¬The evolution of a library database (1998) 0.01
    0.005117397 = product of:
      0.040939175 = sum of:
        0.031352866 = weight(_text_:web in 4842) [ClassicSimilarity], result of:
          0.031352866 = score(doc=4842,freq=4.0), product of:
            0.10247572 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031400457 = queryNorm
            0.3059541 = fieldWeight in 4842, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4842)
        0.009586309 = product of:
          0.019172618 = sum of:
            0.019172618 = weight(_text_:online in 4842) [ClassicSimilarity], result of:
              0.019172618 = score(doc=4842,freq=2.0), product of:
                0.09529729 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031400457 = queryNorm
                0.20118743 = fieldWeight in 4842, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4842)
          0.5 = coord(1/2)
      0.125 = coord(2/16)
    
    Abstract
    The Jack Brause Library at New York University (NYU) is a special library supporting the curriculum of NYU's Real Estate Institute. The Jack Brause Library (JBL) Real Estate Periodical Index was established in 1990 and draws on the library's collection of over 140 real estate periodicals. Describes the conversion of the JBL Index from a 3x5 card index to an online resource. The database was originally created using Rbase for DOS, but this quickly became obsolete and was replaced with InMagic in 1993. In 1997 the JBL Index was made available on NYU's telnet catalogue, BobCat, and the Internet database catalogue, BobCatPlus. The transition of InMagic data to USMARC-formatted records involved a 3-step process: data normalization, adding value, and data recording. The Index has been operational through telnet since May 1997, and the Web version became functional in Oct 1997.
  18. Lupovici, C.: ¬L'¬information secondaire du document primaire : format MARC ou SGML? (1997) 0.00
    0.004853418 = product of:
      0.038827345 = sum of:
        0.025864797 = weight(_text_:web in 892) [ClassicSimilarity], result of:
          0.025864797 = score(doc=892,freq=2.0), product of:
            0.10247572 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031400457 = queryNorm
            0.25239927 = fieldWeight in 892, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=892)
        0.012962549 = weight(_text_:information in 892) [ClassicSimilarity], result of:
          0.012962549 = score(doc=892,freq=6.0), product of:
            0.055122808 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.031400457 = queryNorm
            0.23515764 = fieldWeight in 892, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=892)
      0.125 = coord(2/16)
    
    Abstract
    Secondary information, e.g. MARC-based bibliographic records, comprises structured data for identifying, tagging, retrieving and managing primary documents. SGML, the standard format for coding the content and structure of primary documents, was introduced in 1986 as a publishing tool but is now being applied to bibliographic records. SGML now comprises standard document type definitions (DTDs) for books, serials, articles and mathematical formulae. A simplified version (HTML) is used for Web pages. Pilot projects to develop SGML as a standard for bibliographic exchange include the Dublin Core, listing 13 descriptive elements for Internet documents; the French GRISELI programme, using SGML for exchanging grey literature; and US experiments on reformatting USMARC for use with SGML-based records.
    Footnote
    Translation of the title: Secondary information on primary documents: MARC or SGML format?
  19. Guenther, R.S.: ¬The Library of Congress Classification in the USMARC format (1994) 0.00
    0.004754712 = product of:
      0.038037695 = sum of:
        0.015816603 = product of:
          0.031633206 = sum of:
            0.031633206 = weight(_text_:online in 8864) [ClassicSimilarity], result of:
              0.031633206 = score(doc=8864,freq=4.0), product of:
                0.09529729 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031400457 = queryNorm
                0.33194235 = fieldWeight in 8864, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=8864)
          0.5 = coord(1/2)
        0.022221092 = weight(_text_:retrieval in 8864) [ClassicSimilarity], result of:
          0.022221092 = score(doc=8864,freq=2.0), product of:
            0.09498371 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.031400457 = queryNorm
            0.23394634 = fieldWeight in 8864, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=8864)
      0.125 = coord(2/16)
    
    Abstract
    The paper reviews the development of the USMARC Format for Classification Data, a standard for the communication of classification data in machine-readable form. It considers the uses of online classification schedules for both technical services and reference functions, and gives an overview of the format specification, the data elements used, and the structure of the records. The paper describes an experiment conducted at the Library of Congress to test the format, as well as the development of the classification database encompassing the LCC schedules. Features of the classification system are given. The LoC will complete its conversion of the LCC in mid-1995.
    Theme
    Klassifikationssysteme im Online-Retrieval
  20. Gopinath, M.A.: Standardization for resource sharing databases (1995) 0.00
    0.0047459947 = product of:
      0.037967958 = sum of:
        0.020950641 = weight(_text_:information in 4414) [ClassicSimilarity], result of:
          0.020950641 = score(doc=4414,freq=12.0), product of:
            0.055122808 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.031400457 = queryNorm
            0.38007212 = fieldWeight in 4414, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4414)
        0.017017314 = product of:
          0.03403463 = sum of:
            0.03403463 = weight(_text_:22 in 4414) [ClassicSimilarity], result of:
              0.03403463 = score(doc=4414,freq=2.0), product of:
                0.10995905 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031400457 = queryNorm
                0.30952093 = fieldWeight in 4414, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4414)
          0.5 = coord(1/2)
      0.125 = coord(2/16)
    
    Abstract
    It is helpful and essential to adopt shareable standards for bibliographic information, project descriptions and institutional information in order to provide access to information resources within a country. Describes a strategy for adopting international standards of bibliographic information exchange in developing a resource-sharing facilitation database in India. A list of 22 ISO standards for information processing is included.
    Source
    Library science with a slant to documentation and information studies. 32(1995) no.3, S.i-iv