Search (93 results, page 1 of 5)

  • × theme_ss:"Datenformate"
  • × type_ss:"a"
  1. Scholz, M.: Wie können Daten im Web mit JSON nachgenutzt werden? (2023) 0.02
    0.020549772 = product of:
      0.08219909 = sum of:
        0.08219909 = weight(_text_:digitale in 5345) [ClassicSimilarity], result of:
          0.08219909 = score(doc=5345,freq=2.0), product of:
            0.18027179 = queryWeight, product of:
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.034944877 = queryNorm
            0.45597312 = fieldWeight in 5345, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.0625 = fieldNorm(doc=5345)
      0.25 = coord(1/4)
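     The breakdown above is Lucene's ClassicSimilarity explain output. As a reading aid only (not output of the search engine itself), the following Python sketch reproduces the arithmetic of this breakdown from the listed factors.

       import math

       # Factors taken from the explain tree for document 5345 and the term "digitale"
       idf = 5.158747            # idf(docFreq=690, maxDocs=44218)
       query_norm = 0.034944877  # queryNorm
       tf = math.sqrt(2.0)       # tf(freq=2.0) = 1.4142135
       field_norm = 0.0625       # fieldNorm(doc=5345)
       coord = 0.25              # coord(1/4): one of four query clauses matched

       query_weight = idf * query_norm        # ~0.18027179
       field_weight = tf * idf * field_norm   # ~0.45597312
       score = coord * query_weight * field_weight
       print(score)                           # ~0.020549772, the 0.02 shown next to this hit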
    
    Abstract
     Martin Scholz is a computer scientist at the Universitätsbibliothek Erlangen-Nürnberg. As head of its Digitale Entwicklung und Datenmanagement (digital development and data management) group, he works extensively with web technologies and data transformation. Here he addresses the current ABI-Technik question: How can data on the web be reused with JSON?
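     As a minimal, generic illustration of that question (not an example from Scholz's article), the following Python sketch fetches a JSON resource over HTTP and reuses one of its fields; the URL and field names are placeholders.

       import json
       import urllib.request

       URL = "https://example.org/api/records.json"   # placeholder endpoint

       with urllib.request.urlopen(URL) as response:
           data = json.load(response)                 # parse the JSON payload into Python objects

       # Reuse the data, e.g. list all titles (assumes a top-level "records" array).
       for record in data.get("records", []):
           print(record.get("title"))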
  2. Boiger, W.: Entwicklung und Implementierung eines MARC21-MARCXML-Konverters in der Programmiersprache Perl (2015) 0.01
    0.012843607 = product of:
      0.051374428 = sum of:
        0.051374428 = weight(_text_:digitale in 2466) [ClassicSimilarity], result of:
          0.051374428 = score(doc=2466,freq=2.0), product of:
            0.18027179 = queryWeight, product of:
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.034944877 = queryNorm
            0.2849832 = fieldWeight in 2466, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2466)
      0.25 = coord(1/4)
    
    Abstract
     The union catalogue shared by the Bibliotheksverbund Bayern and the Kooperativer Bibliotheksverbund Berlin-Brandenburg (B3Kat) currently holds about 25.6 million title records. Since 2011 the Bayerische Verbundzentrale has published these data on its website as part of the Bavarian open-data initiative. Re-users of the data include the Deutsche Digitale Bibliothek and the DNB's Culturegraph project. The data are published in the widely used catalogue format MARCXML. Until 2014 the Verbundzentrale used the Windows software MarcEdit to produce the XML files. In early 2015, as part of the Bavarian library traineeship, the author developed a simple MARC 21-to-MARCXML converter in Perl that considerably simplifies the conversion and makes the use of MarcEdit at the Verbundzentrale unnecessary. The present paper, written alongside the converter, first motivates the need for a Perl implementation, then examines the bibliographic data formats MARC 21 and MARCXML and explains the properties that matter for the conversion, and finally describes the structure of the converter in detail. The Perl implementation itself is part of the work; use, distribution and modification of the software are permitted under the terms of the GNU Affero General Public License, either version 3 of the license or (at your option) any later version. [The file with the Perl implementation can be found in the right-hand column under 'Artikelwerkzeuge' (article tools), item 'Zusatzdateien' (additional files).]
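     The Perl converter itself belongs to Boiger's paper; purely as an illustration of the underlying ISO 2709-to-MARCXML mapping (and not the author's implementation), a compact Python sketch could look like this.

       import xml.etree.ElementTree as ET

       FT, SF, RT = "\x1e", "\x1f", "\x1d"   # ISO 2709 field, subfield and record terminators
       NS = "http://www.loc.gov/MARC21/slim"

       def record_to_marcxml(raw):
           """Convert one ISO 2709 (MARC 21) record into a MARCXML <record> element.
           Note: directory offsets are byte counts; for non-ASCII data a robust
           converter should operate on bytes rather than str."""
           leader = raw[:24]
           base = int(leader[12:17])                   # base address of data
           rec = ET.Element("{%s}record" % NS)
           ET.SubElement(rec, "{%s}leader" % NS).text = leader
           directory = raw[24:base - 1]                # 12-char entries: tag(3) length(4) start(5)
           for i in range(0, len(directory), 12):
               tag = directory[i:i + 3]
               length = int(directory[i + 3:i + 7])
               start = int(directory[i + 7:i + 12])
               data = raw[base + start:base + start + length].rstrip(FT)
               if tag < "010":                         # control field: plain data
                   ET.SubElement(rec, "{%s}controlfield" % NS, {"tag": tag}).text = data
               else:                                   # data field: two indicators, then subfields
                   attrs = {"tag": tag, "ind1": data[0], "ind2": data[1]}
                   df = ET.SubElement(rec, "{%s}datafield" % NS, attrs)
                   for sub in data[2:].split(SF)[1:]:
                       ET.SubElement(df, "{%s}subfield" % NS, {"code": sub[0]}).text = sub[1:]
           return rec

       def convert(path_in, path_out):
           coll = ET.Element("{%s}collection" % NS)
           with open(path_in, encoding="utf-8") as fh:
               for raw in fh.read().split(RT):
                   if raw.strip():
                       coll.append(record_to_marcxml(raw))
           ET.ElementTree(coll).write(path_out, encoding="utf-8", xml_declaration=True)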
  3. Mensing, P.: Planung und Durchführung von Digitalisierungsprojekten am Beispiel nicht-textueller Materialien (2010) 0.01
    0.008990525 = product of:
      0.0359621 = sum of:
        0.0359621 = weight(_text_:digitale in 3577) [ClassicSimilarity], result of:
          0.0359621 = score(doc=3577,freq=2.0), product of:
            0.18027179 = queryWeight, product of:
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.034944877 = queryNorm
            0.19948824 = fieldWeight in 3577, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3577)
      0.25 = coord(1/4)
    
    Series
    Themen: Digitale Bibliothek
  4. Oehlschläger, S.: Aus der 48. Sitzung der Arbeitsgemeinschaft der Verbundsysteme am 12. und 13. November 2004 in Göttingen (2005) 0.01
    0.0064218035 = product of:
      0.025687214 = sum of:
        0.025687214 = weight(_text_:digitale in 3556) [ClassicSimilarity], result of:
          0.025687214 = score(doc=3556,freq=2.0), product of:
            0.18027179 = queryWeight, product of:
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.034944877 = queryNorm
            0.1424916 = fieldWeight in 3556, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3556)
      0.25 = coord(1/4)
    
    Content
     Hessisches BibliotheksinformationsSystem (HEBIS) - Personennamendatei (PND): Against the background of the efforts to harmonise the authority files, the HeBIS network council, after renewed discussion, decided by majority that in future the PND will be used, alongside SWD and GKD, as an obligatory authority file integrated into HeBIS. With the growing interconnection of the regional union catalogue systems, the homogeneity of the records is becoming increasingly important. For HeBIS this becomes concrete with the production start of the HeBIS portal and its integrated cross-network interlibrary loan. Only if author searches in the individual union databases encounter largely uniform records, including reference forms, can users expect good result sets and thus improve their chances of ordering the desired literature via interlibrary loan. The overall concept is designed as a pragmatic, low-effort approach. Implementation has begun.
     Hochschulbibliothekszentrum des Landes Nordrhein-Westfalen (HBZ) - FAST search engine: The HBZ has licensed the search engine technology of the Norwegian vendor FAST. The aim is to present the HBZ's products in a new way with the help of innovative search engine technologies. The presentation is to provide fast search access to the NRW union catalogue data via FAST search engine technology, with the following properties:
     - a web interface offering lay users a fast literature search,
     - a web interface offering experts a fast literature search,
     - additional functions that are not available in common library catalogues,
     - an access path for the KVK to the union data with very short response times.
     Digitale Bibliothek: The majority of the libraries have by now migrated to release 5; a few are still being processed. Migration requests from the last three libraries have now been received. With the restructuring of the RLB Koblenz into the LBZ Rheinland-Pfalz, the separate views of the RLB Koblenz, the PLB Speyer and the Bipontina in Zweibrücken will be merged with the Büchereistellen Koblenz and Neustadt into a single view.
  5. Information transfer and exchange formats (1991) 0.01
    0.0059490725 = product of:
      0.02379629 = sum of:
        0.02379629 = weight(_text_:information in 7891) [ClassicSimilarity], result of:
          0.02379629 = score(doc=7891,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.38790947 = fieldWeight in 7891, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=7891)
      0.25 = coord(1/4)
    
    Abstract
     Describes international standard exchange formats for bibliographic information transfer and outlines their common and differing features. Covers UNIMARC, the UNISIST Reference Manual and the UNESCO Common Communication Format.
    Source
    Standards for the international exchange of bibliographic information: papers presented at a course held at the School of Library, Archive and Information Studies, University College, London, 3-18 August 1990. Ed.: I.C. McIlwaine
  6. Gopinath, M.A.: Standardization for resource sharing databases (1995) 0.01
    0.005828877 = product of:
      0.023315508 = sum of:
        0.023315508 = weight(_text_:information in 4414) [ClassicSimilarity], result of:
          0.023315508 = score(doc=4414,freq=12.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.38007212 = fieldWeight in 4414, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4414)
      0.25 = coord(1/4)
    
    Abstract
     It is helpful and essential to adopt standards for bibliographic information, project descriptions and institutional information which are shareable for access to information resources within a country. Describes a strategy for adopting international standards of bibliographic information exchange for developing a resource-sharing facilitation database in India. A list of 22 ISO standards for information processing is included.
    Source
    Library science with a slant to documentation and information studies. 32(1995) no.3, S.i-iv
  7. Ranta, J.A.: Queens Borough Public Library's Guidelines for cataloging community information (1996) 0.01
    0.0055089183 = product of:
      0.022035673 = sum of:
        0.022035673 = weight(_text_:information in 6523) [ClassicSimilarity], result of:
          0.022035673 = score(doc=6523,freq=14.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.3592092 = fieldWeight in 6523, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6523)
      0.25 = coord(1/4)
    
    Abstract
     Currently, few resources exist to guide libraries in the cataloguing of community information using the new USMARC Format for Community Information (1993). In developing a community information database, Queens Borough Public Library, New York City, formulated their own cataloguing procedures for applying AACR2, LoC File Interpretations, and the USMARC Format for Community Information to community information. Their practices include entering corporate names directly whenever possible and assigning LC subject headings for classes of persons and topics, adding neighbourhood-level geographic subdivisions. The guidelines were specially designed to aid non-cataloguers in cataloguing community information and have enabled the library to maintain consistency in handling corporate names and in assigning subject headings, while creating a database that is highly accessible to library staff and users.
  8. Chowdhury, G.G.: Record formats for integrated databases : a review and comparison (1996) 0.01
    0.0055089183 = product of:
      0.022035673 = sum of:
        0.022035673 = weight(_text_:information in 7679) [ClassicSimilarity], result of:
          0.022035673 = score(doc=7679,freq=14.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.3592092 = fieldWeight in 7679, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7679)
      0.25 = coord(1/4)
    
    Abstract
     Discusses the issues involved in the development of data formats for computerized information retrieval systems. Integrated databases capable of holding both bibliographic and factual information in a single database structure are more convenient for searching and retrieval by end users. Several bibliographic formats have been developed and are used for these bibliographic control purposes. Reviews the features of 6 major bibliographic formats: USMARC, UKMARC, UNIMARC, CCF, MIBIS and ABNCD. Only two formats, CCF and ABNCD, are capable of holding both bibliographic and factual information and of supporting the design of integrated databases. The comparison suggests that, while CCF makes more detailed provision for bibliographic information, ABNCD makes better provision for factual information such as profiles of institutions, information systems, projects and human experts.
    Source
    Information development. 12(1996) no.4, S.218-223
  9. Simmons, P.: Microcomputer software for ISO 2709 record conversion (1989) 0.00
    0.004759258 = product of:
      0.019037032 = sum of:
        0.019037032 = weight(_text_:information in 2) [ClassicSimilarity], result of:
          0.019037032 = score(doc=2,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.3103276 = fieldWeight in 2, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.125 = fieldNorm(doc=2)
      0.25 = coord(1/4)
    
    Source
    Microcomputers for information management. 6(1989), S.197-205
  10. Avram, H.D.: Machine-readable cataloging (MARC) (1988) 0.00
    0.004759258 = product of:
      0.019037032 = sum of:
        0.019037032 = weight(_text_:information in 1277) [ClassicSimilarity], result of:
          0.019037032 = score(doc=1277,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.3103276 = fieldWeight in 1277, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.125 = fieldNorm(doc=1277)
      0.25 = coord(1/4)
    
    Source
    Encyclopedia of library and information science. Vol.43, [=Suppl.8]
  11. Süle, G.: ¬Die Vereinheitlichung von Datenformaten im internationalen Bereich (1991) 0.00
    0.004164351 = product of:
      0.016657405 = sum of:
        0.016657405 = weight(_text_:information in 4418) [ClassicSimilarity], result of:
          0.016657405 = score(doc=4418,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27153665 = fieldWeight in 4418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=4418)
      0.25 = coord(1/4)
    
    Source
    Wissenschaftliche Information im europäischen Rahmen: 23. Arbeits- und Fortbildungstagung der ASpB / Sektion 5 im DBV, 13.-16.3.1991 in München
  12. Cantrall, D.: From MARC to Mosaic : progressing toward data interchangeability at the Oregon State Archives (1994) 0.00
    0.003606434 = product of:
      0.014425736 = sum of:
        0.014425736 = weight(_text_:information in 8470) [ClassicSimilarity], result of:
          0.014425736 = score(doc=8470,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23515764 = fieldWeight in 8470, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=8470)
      0.25 = coord(1/4)
    
    Abstract
     Explains the technology used by the Oregon State Archives to realize the goal of data interchangeability given the prescribed nature of the MARC format. Describes an emergent model of learning and information delivery focusing on the example of the World Wide Web, accessed most often by the software client Mosaic, which is the fastest growing segment of the Internet information highway. Also describes The Data Magician, a flexible program which allows for many combinations of input and output formats and will read unconventional formats such as the MARC communications format. The Oregon State Archives, using Mosaic and The Data Magician, are consequently able to present valuable electronic information to a variety of users.
  13. Crook, M.: Barbara Tillett discusses cataloging rules and conceptual models (1996) 0.00
    0.003606434 = product of:
      0.014425736 = sum of:
        0.014425736 = weight(_text_:information in 7683) [ClassicSimilarity], result of:
          0.014425736 = score(doc=7683,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23515764 = fieldWeight in 7683, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7683)
      0.25 = coord(1/4)
    
    Abstract
     The chief of the cataloguing policy and support office at the LoC presents her views on the usefulness of conceptual modelling in determining future directions for cataloguing and the MARC format. After describing the evolution of bibliographic processes, suggests using the entity-relationship conceptual model to step back from how we record information today and start thinking about what information really means and why we provide it. Argues that now is the time to reexamine the basic principles which underpin Anglo-American cataloguing codes and that MARC formats should be looked at to see how they can evolve towards a future, improved structure for communicating bibliographic and authority information.
  14. Lupovici, C.: ¬L'¬information secondaire du document primaire : format MARC ou SGML? (1997) 0.00
    0.003606434 = product of:
      0.014425736 = sum of:
        0.014425736 = weight(_text_:information in 892) [ClassicSimilarity], result of:
          0.014425736 = score(doc=892,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23515764 = fieldWeight in 892, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=892)
      0.25 = coord(1/4)
    
    Abstract
     Secondary information, e.g. MARC-based bibliographic records, comprises structured data for identifying, tagging, retrieving and managing primary documents. SGML, the standard format for coding the content and structure of primary documents, was introduced in 1986 as a publishing tool but is now being applied to bibliographic records. SGML now has standard document type definitions (DTDs) for books, serials, articles and mathematical formulae. A simplified version (HTML) is used for Web pages. Pilot projects to develop SGML as a standard for bibliographic exchange include the Dublin Core, listing 13 descriptive elements for Internet documents; the French GRISELI programme using SGML for exchanging grey literature; and US experiments on reformatting USMARC for use with SGML-based records.
    Footnote
     Translated title: Secondary information on primary documents: MARC or SGML format?
  15. Cranefield, S.: Networked knowledge representation and exchange using UML and RDF (2001) 0.00
    0.003606434 = product of:
      0.014425736 = sum of:
        0.014425736 = weight(_text_:information in 5896) [ClassicSimilarity], result of:
          0.014425736 = score(doc=5896,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23515764 = fieldWeight in 5896, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5896)
      0.25 = coord(1/4)
    
    Abstract
     This paper proposes the use of the Unified Modeling Language (UML) as a language for modelling ontologies for Web resources and the knowledge contained within them. To provide a mechanism for serialising and processing object diagrams representing knowledge, a pair of XSLT stylesheets has been developed to map from XML Metadata Interchange (XMI) encodings of class diagrams to corresponding RDF schemas and to Java classes representing the concepts in the ontologies. The Java code includes methods for marshalling and unmarshalling object-oriented information between in-memory data structures and RDF serialisations of that information. This provides a convenient mechanism for Java applications to share knowledge on the Web.
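     A rough Python analogue of the marshalling idea (the paper itself generates Java classes via XSLT) might use rdflib to move a small in-memory structure into an RDF serialisation and back; the ontology namespace and property names below are invented for illustration.

       from rdflib import Graph, Literal, Namespace, URIRef
       from rdflib.namespace import RDF

       EX = Namespace("http://example.org/ontology#")   # hypothetical ontology namespace

       # Marshal: in-memory data -> RDF serialisation
       g = Graph()
       doc = URIRef("http://example.org/doc/1")
       g.add((doc, RDF.type, EX.Document))
       g.add((doc, EX.title, Literal("Networked knowledge representation")))
       rdf_xml = g.serialize(format="xml")              # RDF/XML serialisation

       # Unmarshal: RDF serialisation -> in-memory data
       g2 = Graph()
       g2.parse(data=rdf_xml, format="xml")
       titles = [str(o) for _, _, o in g2.triples((None, EX.title, None))]
       print(titles)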
    Source
    Journal of digital information. 1(2001) no.8
  16. Skvortsov, V.; Zhlobinskaya, O.; Pashkova, A.: UNIMARC XML slim schema : living in new environment (2005) 0.00
    0.003606434 = product of:
      0.014425736 = sum of:
        0.014425736 = weight(_text_:information in 4335) [ClassicSimilarity], result of:
          0.014425736 = score(doc=4335,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23515764 = fieldWeight in 4335, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4335)
      0.25 = coord(1/4)
    
    Abstract
     The paper discusses the role of XML and its prospects in library information systems, particularly with regard to the basic functions of bibliographic formats: storage and transportation of the data. The Slim XML Schema for UNIMARC representation is presented, its main features being lossless conversion from MARC to XML, round-trippability from XML back to MARC, support for embedded fields and an extended range of indicator values, independence from any specific dialect of the MARC format, and stability against changes to the format.
    Footnote
    Vortrag, World Library and Information Congress: 71th IFLA General Conference and Council "Libraries - A voyage of discovery", August 14th - 18th 2005, Oslo, Norway.
    Series
    121 UNIMARC with Information Technology ; 064-E
  17. McCallum, S.H.: MARCXML sampler (2005) 0.00
    0.003606434 = product of:
      0.014425736 = sum of:
        0.014425736 = weight(_text_:information in 4361) [ClassicSimilarity], result of:
          0.014425736 = score(doc=4361,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23515764 = fieldWeight in 4361, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4361)
      0.25 = coord(1/4)
    
    Abstract
    At the IFLA conference in Glasgow, three years ago, the Information Technology Section organized a workshop on metadata. At that workshop MARCXML was presented, along with plans and expectations for its use. This paper is an update to that report. It reviews the development of an XML schema for MARC 21 and the MARCXML tool kit of transformations. The close relationship of MARCXML to the recent ISO standards work associated with MARC in XML is described. Sketches of interesting applications follow with uses that range from MARCXML as a switching format to a maintenance tool to a record communication format for new XML-based protocols.
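     As a hedged sketch of the "switching format" idea (not the LoC tool kit itself), a MARCXML file can be pushed through any MARCXML-to-target stylesheet with lxml; both file names below are placeholders.

       from lxml import etree

       xslt_doc = etree.parse("marcxml_to_target.xsl")   # placeholder stylesheet
       transform = etree.XSLT(xslt_doc)

       source = etree.parse("records.marcxml")           # placeholder MARCXML input
       result = transform(source)                        # apply the transformation
       print(str(result)[:500])                          # peek at the converted output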
    Footnote
    Vortrag, World Library and Information Congress: 71th IFLA General Conference and Council "Libraries - A voyage of discovery", August 14th - 18th 2005, Oslo, Norway.
    Series
     121 UNIMARC with Information Technology ; 175-E
  18. Eden, B.L.: Metadata and librarianship : will MARC survive? (2004) 0.00
    0.003606434 = product of:
      0.014425736 = sum of:
        0.014425736 = weight(_text_:information in 4750) [ClassicSimilarity], result of:
          0.014425736 = score(doc=4750,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23515764 = fieldWeight in 4750, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4750)
      0.25 = coord(1/4)
    
    Abstract
     Metadata schemas and standards are now part of the information landscape. Librarianship has slowly realized that MARC is only one of a proliferation of metadata standards, and that MARC has many pros and cons related to its age, original conception, and biases. Should librarianship continue to promote the MARC standard? Are there better metadata standards out there that are more robust, user-friendly, and dynamic in the organization and presentation of information? This special issue examines current initiatives that are actively incorporating MARC standards and concepts into new metadata schemata, while also predicting a future where MARC may not be the metadata schema of choice for the organization and description of information.
  19. Johnson, B.C.: XML and MARC : which is "right"? (2001) 0.00
    0.003606434 = product of:
      0.014425736 = sum of:
        0.014425736 = weight(_text_:information in 5423) [ClassicSimilarity], result of:
          0.014425736 = score(doc=5423,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23515764 = fieldWeight in 5423, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5423)
      0.25 = coord(1/4)
    
    Abstract
     This article explores recent discussions about appropriate mark-up conventions for library information intended for use on the World Wide Web. In particular, it explores whether the MARC 21 format will continue to be useful and whether the time is right for a full-fledged conversion effort to XML. The author concludes that the MARC format will be relevant well into the future, and that its use will not hamper access to bibliographic information via the web. Early exploratory XML efforts carried out at Stanford University's Lane Medical Library are reported on. Although these efforts are a promising start, much more consultation and investigation is needed to arrive at broadly acceptable standards for XML library information encoding and retrieval.
  20. Miller, K.; Matthews, B.: Having the right connections : the LIMBER project (2001) 0.00
    0.0035694437 = product of:
      0.014277775 = sum of:
        0.014277775 = weight(_text_:information in 5933) [ClassicSimilarity], result of:
          0.014277775 = score(doc=5933,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23274569 = fieldWeight in 5933, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5933)
      0.25 = coord(1/4)
    
    Abstract
     As with any journey, you have to make the right connections if you want to reach your desired destination. The goal of the LIMBER project is to facilitate cross-European data analysis independent of domain, resource, language and vocabulary. The paper describes the expertise, associations, standards and architecture underlying the project deliverables designed to achieve the project's ambitious aims. LIMBER (Language Independent Metadata Browsing of European Resources) is an EU (European Union) IST (Information Society Technologies) funded project that seeks to address the problems of linguistic and discipline boundaries which, within a more integrated European environment, are becoming increasingly important. Decision-makers, researchers and journalists need to be provided with a broader, comparative picture of society across the continent, with social science information often needing to be correlated with information from domains such as environmental science, geography and health. This cross-discipline interoperability will be provided via a uniform metadata description. In addition, the provision of multilingual user interfaces and the controlled vocabulary of a multilingual thesaurus will make these datasets globally accessible in a range of end-user natural languages.
    Source
    Journal of digital information. 1(2001) no.8

Languages

  • e 71
  • d 17
  • f 5