Search (47 results, page 1 of 3)

  • × language_ss:"e"
  • × theme_ss:"Datenformate"
  • × year_i:[2000 TO 2010}
  1. Johnson, B.C.: XML and MARC : which is "right"? (2001) 0.01
    0.012034168 = product of:
      0.056159448 = sum of:
        0.028625458 = weight(_text_:web in 5423) [ClassicSimilarity], result of:
          0.028625458 = score(doc=5423,freq=4.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.35694647 = fieldWeight in 5423, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5423)
        0.010144223 = weight(_text_:information in 5423) [ClassicSimilarity], result of:
          0.010144223 = score(doc=5423,freq=6.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.23515764 = fieldWeight in 5423, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5423)
        0.017389767 = weight(_text_:retrieval in 5423) [ClassicSimilarity], result of:
          0.017389767 = score(doc=5423,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.23394634 = fieldWeight in 5423, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5423)
      0.21428572 = coord(3/14)
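The scoring tree above is Lucene's ClassicSimilarity (TF-IDF) explanation. As a rough sketch of how its numbers combine (formulas follow Lucene's documented ClassicSimilarity; the constants are copied from the tree above):

```python
import math

# Lucene ClassicSimilarity (TF-IDF), as laid out in the explain tree above:
#   idf       = 1 + ln(maxDocs / (docFreq + 1))
#   tf        = sqrt(termFreq)
#   termScore = (idf * queryNorm) * (tf * idf * fieldNorm)
def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    query_weight = idf * query_norm                     # "queryWeight" in the tree
    field_weight = math.sqrt(freq) * idf * field_norm   # "fieldWeight" in the tree
    return query_weight * field_weight

# The "web" term of document 5423 (freq=4.0, docFreq=4597 out of 44218 docs):
score = term_score(4.0, 4597, 44218, query_norm=0.024573348, field_norm=0.0546875)
print(f"{score:.9f}")  # ≈ 0.0286255, matching the weight(_text_:web ...) line
```

The document score is then the sum of the per-term scores multiplied by the coordination factor (here coord(3/14), since 3 of 14 query clauses matched).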
    
    Abstract
    This article explores recent discussions about appropriate mark-up conventions for library information intended for use on the World Wide Web. In particular, it examines whether the MARC 21 format will continue to be useful and whether the time is right for a full-fledged conversion effort to XML. The author concludes that the MARC format will remain relevant well into the future and that its use will not hamper access to bibliographic information via the web. Early exploratory XML efforts carried out at Stanford University's Lane Medical Library are reported. Although these efforts are a promising start, much more consultation and investigation is needed to arrive at broadly acceptable standards for encoding and retrieving library information in XML.
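To make the MARC-to-XML question concrete, here is a minimal sketch (standard-library Python only) of how a single MARC 21 title field might be serialized in XML. The element style follows what the Library of Congress later standardized as MARCXML; the record content is invented for illustration:

```python
import xml.etree.ElementTree as ET

# One invented MARC 21 field 245 (title statement) rendered as XML,
# in the element style later standardized as MARCXML.
record = ET.Element("record")
title = ET.SubElement(record, "datafield", tag="245", ind1="1", ind2="0")
ET.SubElement(title, "subfield", code="a").text = "XML and MARC :"
ET.SubElement(title, "subfield", code="b").text = "which is right?"
print(ET.tostring(record, encoding="unicode"))
```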
  2. Kaiser, M.; Lieder, H.J.; Majcen, K.; Vallant, H.: New ways of sharing and using authority information : the LEAF project (2003) 0.01
    
    Abstract
    This article presents an overview of the LEAF project (Linking and Exploring Authority Files), which has set out to provide a framework for international, collaborative work in the sector of authority data with respect to authority control. Elaborating the virtues of authority control in today's Web environment is an almost futile exercise, since so much has been said and written about it in the last few years. The World Wide Web is generally understood to be poorly structured, both with regard to content and to locating required information. Highly structured databases might be viewed as small islands of precision within this chaotic environment. Though the Web in general, or any particular structured database, would greatly benefit from increased authority control, it should be noted that the following considerations refer only to authority control with regard to databases of "memory institutions" (i.e., libraries, archives, and museums). Moreover, when talking about authority records, we refer exclusively to personal name authority records that describe a specific person. Although different types of authority records could indeed be used in ways similar to those presented in this article, discussing those different types is outside the scope of both the LEAF project and this article. Personal name authority records, like all other "authorities", are maintained as separate records and linked to various kinds of descriptive records. Name authority records are usually kept either in independent databases or in separate tables in the database containing the descriptive records. This practice points to a crucial benefit: by linking any number of descriptive records to an authorized name record, the records related to this entity are collocated in the database. Variant forms of the authorized name are referenced in the authority records and thus ensure the consistency of the database while enabling search and retrieval operations that produce accurate results.
On the one hand, authority control may be viewed as a positive prerequisite of a consistent catalogue; on the other, the creation of new authority records is a very time-consuming and expensive undertaking. As a consequence, various models of providing access to existing authority records have emerged: the Library of Congress and the French National Library (Bibliothèque nationale de France), for example, make their authority records available to all via a web-based search service. In Germany, the Personal Name Authority File (PND, Personennamendatei) maintained by the German National Library (Die Deutsche Bibliothek, Frankfurt/Main) offers a different approach to shared access: within a closed network, participating institutions have online access to their pooled data. The number of recent projects and initiatives that have addressed the issue of authority control in one way or another is considerable. Two important current initiatives should be mentioned here: the Name Authority Cooperative (NACO) and the Virtual International Authority File (VIAF).
NACO was established in 1976 and is hosted by the Library of Congress. At the beginning of 2003, nearly 400 institutions were involved in this undertaking, including 43 institutions from outside the United States. Despite the enormous success of NACO and the impressive annual growth of the initiative, there are requirements for participation that form an obstacle for many institutions: they have to follow the Anglo-American Cataloguing Rules (AACR2) and employ the MARC 21 data format. Participating institutions also have to belong to either OCLC (Online Computer Library Center) or RLG (Research Libraries Group) in order to be able to contribute records, and they have to provide a specified minimum number of authority records per year. A recent proof-of-concept project of the Library of Congress, OCLC and the German National Library, the Virtual International Authority File (VIAF), will in its first phase test automatic linking of the records of the Library of Congress Name Authority File (LCNAF) and the German Personal Name Authority File, using matching algorithms and software developed by OCLC. The results are expected to form the basis of a "Virtual International Authority File". The project will then test the maintenance of the virtual authority file by employing the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to harvest the metadata for new, updated, and deleted records. When using the "Virtual International Authority File", a cataloguer will be able to check the system to see whether the authority record he wants to establish already exists. The final phase of the project will test possibilities for displaying records in the preferred language and script of the end user. Currently, there are still some clear limitations associated with the ways in which authority records are used by memory institutions.
One of the main problems has to do with limited access: generally, only large institutions or those that are part of a library network have unlimited online access to permanently updated authority records. Smaller institutions outside these networks usually have to fall back on less efficient ways of obtaining authority data, or have no access at all. Cross-domain sharing of authority data between libraries, archives, museums and other memory institutions simply does not happen at present. Public users are, by and large, not even aware that such things as name authority records exist, and they are excluded from access to these information resources.
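The VIAF maintenance step described above relies on OAI-PMH incremental harvesting. As a minimal sketch, an OAI-PMH ListRecords request for records changed since a given date is just a URL with standard protocol parameters (the base URL below is a placeholder, not a real endpoint):

```python
from urllib.parse import urlencode

def list_records_url(base_url, metadata_prefix, from_date=None):
    # "verb" and "metadataPrefix" are standard OAI-PMH request arguments;
    # "from" restricts the harvest to records created, changed, or deleted
    # since that date, which is what incremental maintenance needs.
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if from_date:
        params["from"] = from_date
    return base_url + "?" + urlencode(params)

print(list_records_url("https://example.org/oai", "marc21", from_date="2003-01-01"))
```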
  3. Qin, J.: Representation and organization of information in the Web space : from MARC to XML (2000) 0.01
  4. McCallum, S.H.: Machine Readable Cataloging (MARC): 1975-2007 (2009) 0.01
    
    Abstract
    This entry describes the development of the MARC Communications format. After a brief overview of the initial 10 years, it describes the succeeding phases of development up to the present. This takes the reader through the expansion of the format to cover all types of bibliographic data and multiple character scripts. At the same time, a large business community developed that offered products based on the format to the library community. The introduction of the Internet in the 1990s and of Web technology brought new opportunities and challenges, and the format was adapted to this new environment. International adoption of the format has been extensive and has continued into the 2000s. More recently, new syntaxes and models for MARC 21 are being explored.
    Date
    27. 8.2011 14:22:38
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  5. Miller, D.R.: XML: Libraries' strategic opportunity (2001) 0.01
    
    Abstract
    XML (eXtensible Markup Language) is fast gaining favor as the universal format for data and document exchange -- in effect becoming the lingua franca of the Information Age. Currently, "library information" is at a particular disadvantage on the rapidly evolving World Wide Web. Why? Despite libraries' explorations of web catalogs, scanning projects, digital data repositories, and creation of web pages galore, there remains a digital divide. The core of libraries' data troves is stored in proprietary formats of integrated library systems (ILS) and in the complex and arcane MARC formats -- both restricted chiefly to the province of technical services and systems librarians. Even they are hard-pressed to extract and integrate this wealth of data with resources from outside this rarefied environment. Segregation of library information underlies many difficulties: producing standard bibliographic citations from MARC data, automatically creating new materials lists (including new web resources) on a particular topic, exchanging data with our vendors, and even migrating from one ILS to another. Why do we continue to hobble our potential by embracing these self-imposed limitations? Most ILSs began in libraries, which soon recognized the pitfalls of do-it-yourself solutions. Thus, we wisely anticipated the necessity for standards. However, with the advent of the web, we soon found "our" collections and a flood of new resources appearing in digital format on opposite sides of the divide. If we do not act quickly to integrate library resources with mainstream web resources, we are in grave danger of becoming marginalized.
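One of the tasks the author lists, producing a standard bibliographic citation from MARC data, becomes trivial once the field data is out of its proprietary silo. A toy sketch (the record is invented; the tags follow MARC 21 conventions: 100 = main entry, 245 = title statement, 260 = publication data):

```python
# Invented record keyed by MARC 21 tag (100 main entry, 245 title, 260 year).
fields = {"100": "Miller, D.R.",
          "245": "XML: libraries' strategic opportunity",
          "260": "2001"}

def citation(f):
    # Assemble an author: title (year) citation from the extracted fields.
    return f'{f["100"]}: {f["245"]} ({f["260"]})'

print(citation(fields))  # Miller, D.R.: XML: libraries' strategic opportunity (2001)
```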
  6. Cranefield, S.: Networked knowledge representation and exchange using UML and RDF (2001) 0.01
    
    Abstract
    This paper proposes the use of the Unified Modeling Language (UML) as a language for modelling ontologies for Web resources and the knowledge contained within them. To provide a mechanism for serialising and processing object diagrams representing knowledge, a pair of XSLT stylesheets has been developed to map from XML Metadata Interchange (XMI) encodings of class diagrams to corresponding RDF schemas and to Java classes representing the concepts in the ontologies. The Java code includes methods for marshalling and unmarshalling object-oriented information between in-memory data structures and RDF serialisations of that information. This provides a convenient mechanism for Java applications to share knowledge on the Web.
    Source
    Journal of digital information. 1(2001) no.8
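The XMI-to-RDF-schema mapping described above can be pictured with a toy example (the class name is invented and these are not the paper's actual stylesheets): a UML class such as Lecture would come out the other end as an rdfs:Class. A minimal sketch of producing that output element:

```python
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
RDFS = "http://www.w3.org/2000/01/rdf-schema#"
ET.register_namespace("rdf", RDF)
ET.register_namespace("rdfs", RDFS)

# One invented UML class, "Lecture", emitted as an RDF Schema class -
# the kind of element such a stylesheet mapping might produce.
root = ET.Element(f"{{{RDF}}}RDF")
cls = ET.SubElement(root, f"{{{RDFS}}}Class", {f"{{{RDF}}}ID": "Lecture"})
ET.SubElement(cls, f"{{{RDFS}}}comment").text = "Generated from an XMI class diagram"
print(ET.tostring(root, encoding="unicode"))
```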
  7. Tennant, R.: ¬A bibliographic metadata infrastructure for the twenty-first century (2004) 0.01
    
    Abstract
    The current library bibliographic infrastructure was constructed in the early days of computers - before the Web, XML, and a variety of other technological advances that now offer new opportunities. General requirements of a modern metadata infrastructure for libraries are identified, including such qualities as versatility, extensibility, granularity, and openness. A new kind of metadata infrastructure is then proposed that exhibits at least some of those qualities. Some key challenges that must be overcome to implement a change of this magnitude are identified.
    Date
    9.12.2005 19:22:38
    Source
    Library hi tech. 22(2004) no.2, S.175-181
  8. Ansorge, K.: Das war 2007 (2007) 0.00
    
    Content
    "Standardisierung - Auch 2007 ist die Arbeitsstelle für Standardisierung (AfS) auf dem Weg zur Internationalisierung der deutschen Regelwerke, Formate und Normdateien entscheidende Schritte vorangekommen. Im Mittelpunkt der Vorbereitungen für den Format-umstieg standen eine Konkordanz von MAB2 nach MARC 21 und die Festlegung neuer Felder, die für den Umstieg auf nationaler Ebene notwendig sind. Neben einer Vielzahl anderer Aktivitäten hat die DNB zwei Veranstaltungen zum Format-umstieg durchgeführt. In Zusammenarbeit mit den Expertengruppen des Standardisierungsausschusses wurden drei Stellungnahmen zu Entwürfen des Regelwerkes »Resource Description and Access (RDA)« erarbeitet; es fand eine Beteiligung an der internationalen Diskussion zu wichtigen Grundlagen statt. Der Erfüllung des Wunsches nach Einführung der Onlinekommunikation mit Normdateien ist die DNB im vergangenen Jahr deutlich nähergekommen: Änderungen an Normdaten sollen gleichzeitig in die zentral bei der DNB gehaltenen Dateien und in der Verbunddatenbank vollzogen werden. Seit Anfang September ist die erste Stufe der Onlinekommunikation im produktiven Einsatz: Die PND-Redaktionen in den Aleph-Verbünden arbeiten online zusammen. Das neue Verfahren wird sich auf alle bei der DNB geführten Normdaten erstrecken und in einem gestuften Verfahren eingeführt werden. Die DNB war in zahlreichen Standardisierungsgremien zur Weiterentwicklung von Metadatenstandards wie z.B. Dublin Core und ONIX (Online Information eXchange) sowie bei den Entwicklungsarbeiten für The European Library beteiligt. Die Projektarbeiten im Projekt KIM - Kompetenzzentrum Interoperable Metadaten wurden maßgeblich unterstützt. Im Rahmen der Arbeiten zum Gesetz über die Deutsche Nationalbibliothek wurde ein Metadatenkernset für die Übermittlung von Metadaten an die DNB entwickelt und in einer ersten Stufe mit einem ONIX-Mapping versehen. 
Within the project "Virtual International Authority File - VIAF", the Library of Congress (LoC), the DNB and OCLC jointly developed - initially for personal names - a virtual international authority file in which the authority records of the national authority files are to be linked with one another and made freely accessible on the Web. The project results so far have impressively demonstrated the feasibility of an international authority file. The project partners therefore reaffirmed their commitment to VIAF in October 2007 in a new agreement, which also includes the Bibliothèque nationale de France, thereby initiating a consolidation and expansion phase."
    "DDC-vascoda - Das Projekt DDC-vascoda wurde 2007 abgeschlossen. Für den Sucheinstieg bei vascoda wurde bislang nur eine Freitextsuche über alle Felder oder eine Expertensuche, bei der die Freitextsuche mit den formalen Kriterien Autor, Titel und (Erscheinungs-)Jahr kombiniert werden kann, angeboten. Die Suche konnte zwar auf einzelne Fächer oder Fachzugänge beschränkt werden, ein sachlicher Zugang zu der Information fehlt jedoch. Vascoda verwendete die Dewey Decimal Classification (DDC) als einheitliches Klassifikationsinstrument. Ziel des Projektes DDC-vascoda war es, über diese Klassifikation einen komfortablen und einheitlichen sachlichen Zugang als Einstieg in das Gesamtangebot einzurichten. Weiterhin wurde ein HTML-Dienst entwickelt, der es Fachportalen und anderen Datenanbietern ermöglicht, ohne großen Programmieraufwand ein DDC-Browsing über die eigenen Daten bereitzustellen."
    Location
    Frankfurt
  9. Martin, P.: Conventions and notations for knowledge representation and retrieval (2000) 0.00
    
    Abstract
    Much research has focused on the problem of knowledge accessibility, sharing and reuse. Specific languages (e.g. KIF, CG, RDF) and ontologies have been proposed. Common characteristics, conventions or ontological distinctions are beginning to emerge. Since knowledge providers (humans and software agents) must follow common conventions for the knowledge to be widely accessed and re-used, we propose lexical, structural, semantic and ontological conventions based on various knowledge representation projects and our own research. These are minimal conventions that can be followed by most and cover the most common knowledge representation cases. However, agreement and refinements are still required. We also show that a notation can be both readable and expressive by quickly presenting two new notations -- Formalized English (FE) and Frame-CG (FCG) - derived from the CG linear form [9] and Frame-Logics [4]. These notations support the above conventions, and are implemented in our Web-based knowledge representation and document indexation tool, WebKB¹ [7]
  10. Guenther, R.S.: Using the Metadata Object Description Schema (MODS) for resource description : guidelines and applications (2004) 0.00
    
    Abstract
    This paper describes the Metadata Object Description Schema (MODS), its accompanying documentation and some of its applications. It reviews the MODS user guidelines provided by the Library of Congress and how they enable a user of the schema to apply MODS consistently as a metadata scheme. Because the schema itself could not fully document appropriate usage, the guidelines provide element definitions, history, relationships with other elements, usage conventions, and examples. Short descriptions of some MODS applications are given, along with a more detailed discussion of its use in the Library of Congress's Minerva project for Web archiving.
    Source
    Library hi tech. 22(2004) no.1, S.89-98
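For orientation, a minimal MODS record might look like the following sketch. The content is invented (it describes this very article); the element names are genuine MODS 3.x elements (titleInfo, name, typeOfResource, originInfo):

```xml
<mods xmlns="http://www.loc.gov/mods/v3">
  <titleInfo>
    <title>Using the Metadata Object Description Schema (MODS) for resource description</title>
  </titleInfo>
  <name type="personal">
    <namePart>Guenther, R.S.</namePart>
  </name>
  <typeOfResource>text</typeOfResource>
  <originInfo>
    <dateIssued>2004</dateIssued>
  </originInfo>
</mods>
```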
  11. Salgáné, M.M.: Our electronic era and bibliographic informations computer-related bibliographic data formats, metadata formats and BDML (2005) 0.00
    
    Abstract
Using new communication technologies, libraries must continuously face new questions, possibilities and expectations. This study discusses library-related aspects of our electronic era and how computer-related data formats affect bibliographic data processing, summarizing the most important results. The first bibliographic formats for exchanging bibliographic and related information in machine-readable form between different types of computer systems were created more than 30 years ago. The evolution of information technologies has led to the improvement of computer systems. In addition to the development of computers and media types, the Internet has had a great influence on data structures as well. Since the introduction of the MARC bibliographic format, the technology of data exchange between computers and between different computer systems has reached a very sophisticated stage and has contributed to the creation of new standards in this field. Today libraries work with this new infrastructure, which poses many challenges. One of the most significant challenges is moving from a relatively homogeneous bibliographic environment to a diverse one. Despite these challenges, such changes are achievable and necessary to exploit the possibilities of new metadata and technologies like the Internet and XML (Extensible Markup Language). XML is an open standard, a universal language for data on the Web. XML is a nearly six-year-old standard designed for the description and computer-based management of (semi-)structured data and structured texts. XML gives developers the power to deliver structured data from a wide variety of applications, and it is also an ideal format for server-to-server transfer of structured data. Nor is XML limited to Internet use; it is an especially valuable tool in the library field. In fact, XML's main strength - organizing information - makes it perfect for exchanging data between different systems.
Tools that work with XML can be used to process XML records without incurring the additional costs associated with in-house software development. In addition, XML is also a suitable format for library web services. The Department of Computer-related Graphic Design and Library and Information Sciences of Debrecen University launched the BDML (Bibliographic Description Markup Language) development project in order to standardize bibliographic description with the help of XML.
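The abstract's central claim - that XML's strength in organizing information lets records be exchanged and processed with off-the-shelf tools rather than custom software - can be illustrated with a minimal sketch. The element names below (`record`, `title`, `creator`, `year`) are illustrative assumptions, not the actual BDML vocabulary, which the Debrecen project defines itself.

```python
# Minimal sketch: a bibliographic record as XML, built and read back
# with Python's standard library alone -- no bespoke software needed,
# which is the cost argument the abstract makes for XML tooling.
import xml.etree.ElementTree as ET

def build_record(title, creator, year):
    # Element names here are illustrative, not the real BDML schema.
    record = ET.Element("record")
    ET.SubElement(record, "title").text = title
    ET.SubElement(record, "creator").text = creator
    ET.SubElement(record, "year").text = str(year)
    return record

def read_title(xml_text):
    # Any XML-aware tool can extract the structured data the same way.
    return ET.fromstring(xml_text).findtext("title")

record = build_record("Our electronic era", "Salgáné, M.M.", 2005)
xml_text = ET.tostring(record, encoding="unicode")
print(read_title(xml_text))
```

Because the serialized record is plain XML, the same round trip works in any language with an XML parser, which is precisely the interoperability point the abstract makes.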
    Source
Librarianship in the information age: Proceedings of the 13th BOBCATSSS Symposium, 31 January - 2 February 2005 in Budapest, Hungary. Eds.: Marte Langeland et al.
12. ISO 25964 Thesauri and interoperability with other vocabularies (2008)
    Abstract
    T.1: Today's thesauri are mostly electronic tools, having moved on from the paper-based era when thesaurus standards were first developed. They are built and maintained with the support of software and need to integrate with other software, such as search engines and content management systems. Whereas in the past thesauri were designed for information professionals trained in indexing and searching, today there is a demand for vocabularies that untrained users will find to be intuitive. ISO 25964 makes the transition needed for the world of electronic information management. However, part 1 retains the assumption that human intellect is usually involved in the selection of indexing terms and in the selection of search terms. If both the indexer and the searcher are guided to choose the same term for the same concept, then relevant documents will be retrieved. This is the main principle underlying thesaurus design, even though a thesaurus built for human users may also be applied in situations where computers make the choices. Efficient exchange of data is a vital component of thesaurus management and exploitation. Hence the inclusion in this standard of recommendations for exchange formats and protocols. Adoption of these will facilitate interoperability between thesaurus management systems and the other computer applications, such as indexing and retrieval systems, that will utilize the data. Thesauri are typically used in post-coordinate retrieval systems, but may also be applied to hierarchical directories, pre-coordinate indexes and classification systems. Increasingly, thesaurus applications need to mesh with others, such as automatic categorization schemes, free-text search systems, etc. Part 2 of ISO 25964 describes additional types of structured vocabulary and gives recommendations to enable interoperation of the vocabularies at all stages of the information storage and retrieval process.
    T.2: The ability to identify and locate relevant information among vast collections and other resources is a major and pressing challenge today. Several different types of vocabulary are in use for this purpose. Some of the most widely used vocabularies were designed a hundred years ago and have been evolving steadily. A different generation of vocabularies is now emerging, designed to exploit the electronic media more effectively. A good understanding of the previous generation is still essential for effective access to collections indexed with them. An important object of ISO 25964 as a whole is to support data exchange and other forms of interoperability in circumstances in which more than one structured vocabulary is applied within one retrieval system or network. Sometimes one vocabulary has to be mapped to another, and it is important to understand both the potential and the limitations of such mappings. In other systems, a thesaurus is mapped to a classification scheme, or an ontology to a thesaurus. Comprehensive interoperability needs to cover the whole range of vocabulary types, whether young or old. Concepts in different vocabularies are related only in that they have the same or similar meaning. However, the meaning can be found in a number of different aspects within each particular type of structured vocabulary: - within terms or captions selected in different languages; - in the notation assigned indicating a place within a larger hierarchy; - in the definition, scope notes, history notes and other notes that explain the significance of that concept; and - in explicit relationships to other concepts or entities within the same vocabulary. In order to create mappings from one structured vocabulary to another it is first necessary to understand, within the context of each different type of structured vocabulary, the significance and relative importance of each of the different elements in defining the meaning of that particular concept. 
ISO 25964-1 describes the key characteristics of thesauri along with additional advice on best practice. ISO 25964-2 focuses on other types of vocabulary and does not attempt to cover all aspects of good practice. It concentrates on those aspects which need to be understood if one of the vocabularies is to work effectively alongside one or more of the others. Recognizing that a new standard cannot be applied to some existing vocabularies, this part of ISO 25964 provides informative description alongside the recommendations, the aim of which is to enable users and system developers to interpret and implement the existing vocabularies effectively. The remainder of ISO 25964-2 deals with the principles and practicalities of establishing mappings between vocabularies.
    Issue
    Pt.1: Thesauri for information retrieval - Pt.2: Interoperability with other vocabularies.
13. Taylor, M.; Dickmeiss, A.: Delivering MARC/XML records from the Library of Congress catalogue using the open protocols SRW/U and Z39.50 (2005)
    Abstract
    The MARC standard for representing catalogue records and the Z39.50 standard for locating and retrieving them have facilitated interoperability in the library domain for more than a decade. With the increasing ubiquity of XML, these standards are being superseded by MARCXML and MarcXchange for record representation and SRW/U for searching and retrieval. Service providers moving from the older standards to the newer generally need to support both old and new forms during the transition period. YAZ Proxy uses a novel approach to provide SRW/MARCXML access to the Library of Congress catalogue, by translating requests into Z39.50 and querying the older system directly. As a fringe benefit, it also greatly accelerates Z39.50 access.
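The proxying approach described above works because an SRW/U (SRU) search is just an HTTP GET whose query lives in the URL parameters, so a gateway can parse it and replay it as a Z39.50 session. A minimal sketch of the client side, using the standard SRU parameter names; the endpoint URL is a placeholder, not the real Library of Congress service address.

```python
# Sketch: constructing an SRU searchRetrieve request URL.
# Parameter names (version, operation, query, maximumRecords,
# recordSchema) follow the SRU specification; the base URL is invented.
from urllib.parse import urlencode

def sru_search_url(base, cql_query, max_records=10):
    params = {
        "version": "1.1",               # SRU protocol version
        "operation": "searchRetrieve",  # standard SRU operation name
        "query": cql_query,             # the query, in CQL syntax
        "maximumRecords": max_records,
        "recordSchema": "marcxml",      # ask for MARCXML records back
    }
    return base + "?" + urlencode(params)

url = sru_search_url("http://example.org/sru", 'dc.title = "MARC"')
print(url)
```

A proxy such as the one described only has to decode these few parameters to know what to ask the legacy Z39.50 server, which is what makes the translation cheap enough to add acceleration as a side effect.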
    Footnote
    Vortrag, World Library and Information Congress: 71th IFLA General Conference and Council "Libraries - A voyage of discovery", August 14th - 18th 2005, Oslo, Norway.
    Series
    121 UNIMARC with Information Technology ; 065-E
14. Concise UNIMARC Classification Format : Draft 5 (20000125) (2000)
    Theme
    Klassifikationssysteme im Online-Retrieval
15. Eden, B.L.: Metadata and librarianship : will MARC survive? (2004)
    Abstract
Metadata schemas and standards are now a part of the information landscape. Librarianship has slowly realized that MARC is only one of a proliferation of metadata standards, and that MARC has many pros and cons related to its age, original conception, and biases. Should librarianship continue to promote the MARC standard? Are there better metadata standards out there that are more robust, user-friendly, and dynamic in the organization and presentation of information? This special issue examines current initiatives that are actively incorporating MARC standards and concepts into new metadata schemata, while also predicting a future where MARC may not be the metadata schema of choice for the organization and description of information.
    Source
    Library hi tech. 22(2004) no.1, S.6-7
16. Helmkamp, K.; Oehlschläger, S.: Firmenworkshop Umstieg auf MARC 21 : Workshop an der Deutschen Nationalbibliothek am 26. September 2007 (2007)
    Abstract
Following the international workshop »MARC 21 - Experiences, Challenges and Visions« in the early summer of this year, the Deutsche Nationalbibliothek (DNB) held a workshop on 26 September 2007 in Frankfurt am Main for manufacturers and vendors of library software, within the project »Internationalisierung der deutschen Standards: Umstieg auf MARC 21« funded by the Deutsche Forschungsgemeinschaft (DFG), with the participation of members of the Expertengruppe Datenformate, experts from the DNB, and further representatives of the library networks. Representatives of individual libraries, networks and vendors accepted an invitation from the Arbeitsstelle Datenformate and the Expertengruppe Formalerschließung and reported on the prerequisites, work plans and timeframes for the migration. In addition, selected aspects of the format migration were examined more closely and discussed in detail.
17. Carvalho, J.R. de; Cordeiro, M.I.; Lopes, A.; Vieira, M.: Meta-information about MARC : an XML framework for validation, explanation and help systems (2004)
    Abstract
    This article proposes a schema for meta-information about MARC that can express at a fairly comprehensive level the syntactic and semantic aspects of MARC formats in XML, including not only rules but also all texts and examples that are conveyed by MARC documentation. It can be thought of as an XML version of the MARC or UNIMARC manuals, for both machine and human usage. The article explains how such a schema can be the central piece of a more complete framework, to be used in conjunction with "slim" record formats, providing a rich environment for the automated processing of bibliographic data.
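The core idea of such a framework - keeping the format's rules as machine-readable data and driving validation from that data rather than from hard-coded logic - can be sketched in a few lines. The two rules below are simplified paraphrases for illustration, not full MARC 21 semantics, and the flat record model (one subfield per field occurrence) is an assumption made to keep the sketch short.

```python
# Toy data-driven validator: the rules live in a data structure
# (here a dict; in the article, an XML schema carrying texts and
# examples from the MARC/UNIMARC manuals as well).
RULES = {
    "245": {"repeatable": False, "subfields": {"a", "b", "c"}},
    "650": {"repeatable": True,  "subfields": {"a", "x", "z"}},
}

def validate(record):
    """record: list of (tag, subfield_code) pairs, one per field
    occurrence (simplified to a single subfield each). Returns errors."""
    errors = []
    seen = set()
    for tag, code in record:
        rule = RULES.get(tag)
        if rule is None:
            errors.append(f"unknown field {tag}")
            continue
        if not rule["repeatable"] and tag in seen:
            errors.append(f"field {tag} is not repeatable")
        seen.add(tag)
        if code not in rule["subfields"]:
            errors.append(f"subfield ${code} not allowed in {tag}")
    return errors

print(validate([("245", "a"), ("245", "a"), ("999", "a")]))
```

Because the rule set is data, the same structure can feed an explanation or help system - the "richer environment" the article describes - without duplicating the rules in code.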
    Source
    Library hi tech. 22(2004) no.2, S.131-137
18. Avram, H.D.: Machine Readable Cataloging (MARC): 1961-1974 (2009)
    Abstract
The MARC Program of the Library of Congress, led during its formative years by the author of this entry, was a landmark in the history of automation. Technical procedures, standards, and formatting for the catalog record were experimented with and developed in modern form in this project. The project began when computers were mainframes, slow, and limited in storage. So little was known then about many aspects of automating library information resources that the MARC project can be seen as a pioneering effort with immeasurable impact.
    Date
    27. 8.2011 14:22:53
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
19. Tell, B.: On MARC and natural text searching : a review of Pauline Cochrane's inspirational thinking grafted onto a Swedish spy on library matters (2000)
    Abstract
The following discussion is in appreciation of the invaluable inspiration that Pauline Cochrane, by her acumen and perspicacity, has implanted into my thinking regarding various applications of library and information science, especially those involving machine-readable records and subject categorization. It is indeed an honor for me at my age to be invited to contribute to Pauline's Festschrift when instead I should be concerned about my forthcoming obituary. In the following, I must give some background to what formed my thinking before my involvement in the field and thus before I encountered Pauline.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
20. Kurth, M.; Ruddy, D.; Rupp, N.: Repurposing MARC metadata : using digital project experience to develop a metadata management design (2004)
    Abstract
    Metadata and information technology staff in libraries that are building digital collections typically extract and manipulate MARC metadata sets to provide access to digital content via non-MARC schemes. Metadata processing in these libraries involves defining the relationships between metadata schemes, moving metadata between schemes, and coordinating the intellectual activity and physical resources required to create and manipulate metadata. Actively managing the non-MARC metadata resources used to build digital collections is something most of these libraries have only begun to do. This article proposes strategies for managing MARC metadata repurposing efforts as the first step in a coordinated approach to library metadata management. Guided by lessons learned from Cornell University library mapping and transformation activities, the authors apply the literature of data resource management to library metadata management and propose a model for managing MARC metadata repurposing processes through the implementation of a metadata management design.
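The repurposing step the abstract describes - defining relationships between schemes and moving metadata between them - is usually expressed as a crosswalk table. A minimal sketch follows; the mapping shown is a common textbook simplification (MARC tag/subfield pairs onto Dublin Core element names), not Cornell's actual transformation tables.

```python
# Sketch of metadata repurposing via a crosswalk expressed as data:
# MARC (tag, subfield) pairs are mapped onto non-MARC element names.
CROSSWALK = {
    ("245", "a"): "title",    # title proper       -> dc:title
    ("100", "a"): "creator",  # main entry, person -> dc:creator
    ("260", "c"): "date",     # date of publication -> dc:date
}

def repurpose(marc_fields):
    """marc_fields: iterable of (tag, subfield, value) triples.
    Returns a Dublin Core-style dict of element -> list of values."""
    dc = {}
    for tag, code, value in marc_fields:
        element = CROSSWALK.get((tag, code))
        if element is not None:   # unmapped fields are simply dropped
            dc.setdefault(element, []).append(value)
    return dc

print(repurpose([("245", "a", "Repurposing MARC metadata"),
                 ("100", "a", "Kurth, M."),
                 ("500", "a", "a general note")]))
```

Managing the crosswalk itself as a data resource - versioned, documented, and shared across projects - is essentially the metadata management design the authors argue for.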
    Source
    Library hi tech. 22(2004) no.2, S.144-152