Search (7555 results, page 378 of 378)

  • language_ss:"e"
  • year_i:[1990 TO 2000}
  1. Chowdhury, G.G.; Chowdhury, S.: Digital library research : major issues and trends (1999) 0.00
    0.0021457102 = product of:
      0.008582841 = sum of:
        0.008582841 = weight(_text_:information in 4610) [ClassicSimilarity], result of:
          0.008582841 = score(doc=4610,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 4610, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4610)
      0.25 = coord(1/4)
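
    The scoring trees on this page are Lucene "explain" output for the query term "information" under ClassicSimilarity (TF-IDF), where tf = sqrt(freq). As a worked check of this first entry's numbers, written here in LaTeX notation (the notation is added for readability; the values are from the output above):

      \begin{align*}
      \mathit{queryWeight} &= \mathit{idf} \cdot \mathit{queryNorm} = 1.7554779 \cdot 0.050415643 \approx 0.08850355\\
      \mathit{fieldWeight} &= \mathit{tf} \cdot \mathit{idf} \cdot \mathit{fieldNorm} = 1.4142135 \cdot 1.7554779 \cdot 0.0390625 \approx 0.09697737\\
      \mathit{weight} &= \mathit{queryWeight} \cdot \mathit{fieldWeight} \approx 0.008582841\\
      \mathit{score} &= \mathit{coord} \cdot \mathit{weight} = 0.25 \cdot 0.008582841 \approx 0.0021457102
      \end{align*}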
    
    Abstract
    Digital library research has attracted much attention in the most developed countries and in a number of developing ones. While many digital library research projects are funded by government agencies and national and international bodies, some are run by individual academic and research institutions and libraries, either alone or collaboratively. While some digital library projects, such as the ELINOR project in the UK, the first two phases of the eLib (Electronic Libraries) Programme in the UK, and the first phase of the DLI (Digital Library Initiative) in the US, are now over, a number of other projects are currently under way in different parts of the world. Beginning with the definitions and characteristics of digital libraries, as proposed by various researchers, this paper provides brief accounts of some major digital library projects that are currently in progress, or have just been completed, in different parts of the world. There follows a review of digital library research under sixteen major headings. Literature for this review was identified through a search of the LISA CD-ROM database and a Dialog search of library and information science databases, and the resulting output was supplemented by a scan of various issues of D-Lib Magazine and Ariadne and of the websites of organisations and institutions engaged in digital library research. The review indicates that we have learned a great deal through digital library research within a short span of time. However, a number of issues are yet to be resolved. The paper ends with an indication of the research issues that need to be addressed and resolved in the near future in order to bring the digital library from the researcher's laboratory to the real-life environment.
  2. Andrew, P.G.: A survey technique for map collection retrospective conversion projects (1999) 0.00
    0.0021457102 = product of:
      0.008582841 = sum of:
        0.008582841 = weight(_text_:information in 5339) [ClassicSimilarity], result of:
          0.008582841 = score(doc=5339,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 5339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5339)
      0.25 = coord(1/4)
    
    Abstract
    Although much has been written about the need for retrospective conversion, its methodologies, its costs, and other aspects, little exists in the literature on the retrospective conversion of cartographic materials, and of map collections specifically. Reference is usually made to the need to survey the collection before conversion, but the author was unable to locate a description of a random sampling technique that explains how it was applied and what the outcome was. This article introduces the use of a random sampling technique with a major university map collection. The University of Georgia's Maps Collection was surveyed to ascertain how much of the existing maps card catalog needed to be converted to electronic form for use in the local online public access catalog. In addition, the samples pulled from the survey were searched against the OCLC union catalog to determine the proportion of records that could be found in OCLC and loaded into the Georgia Libraries Information Network (GALIN), the online catalog, with no cataloging intervention, versus the degree to which the maps cataloger would have to either adjust existing records or create original records for the online catalog.
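
    A minimal sketch of the survey idea described above, assuming a simple random sample and a hit-rate estimate; the function name and procedure are illustrative, not the article's actual protocol:

      import random

      def estimate_hit_rate(card_ids, sample_size, found_in_union_catalog):
          # Draw a simple random sample of catalog cards and estimate
          # the share that already has a matching record in a union
          # catalog such as OCLC.
          sample = random.sample(card_ids, sample_size)
          hits = sum(1 for card in sample if found_in_union_catalog(card))
          return hits / sample_size

      # Toy run: 100,000 cards, a 400-card sample, pretend 4 of 5 match.
      print(estimate_hit_rate(range(100_000), 400, lambda c: c % 5 != 0))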
  3. Dahlberg, I.: The future of classification in libraries and networks : a theoretical point of view (1995) 0.00
    0.0021457102 = product of:
      0.008582841 = sum of:
        0.008582841 = weight(_text_:information in 5563) [ClassicSimilarity], result of:
          0.008582841 = score(doc=5563,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 5563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5563)
      0.25 = coord(1/4)
    
    Footnote
    Paper presented at the 36th Allerton Institute, 23-25 Oct 94, Allerton Park, Monticello, IL: "New Roles for Classification in Libraries and Information Networks: Presentation and Reports"
  4. McIlwaine, I.: Preparing traditional classifications for the future : Universal Decimal Classification (1995) 0.00
    0.0021457102 = product of:
      0.008582841 = sum of:
        0.008582841 = weight(_text_:information in 5565) [ClassicSimilarity], result of:
          0.008582841 = score(doc=5565,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 5565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5565)
      0.25 = coord(1/4)
    
    Footnote
    Paper presented at the 36th Allerton Institute, 23-25 Oct 94, Allerton Park, Monticello, IL: "New Roles for Classification in Libraries and Information Networks: Presentation and Reports"
  5. Coates, E.J.: BC2 and BSO : presentation at the 36th Allerton Institute, 1994 session on preparing traditional classifications for the future (1995) 0.00
    0.0021457102 = product of:
      0.008582841 = sum of:
        0.008582841 = weight(_text_:information in 5566) [ClassicSimilarity], result of:
          0.008582841 = score(doc=5566,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 5566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5566)
      0.25 = coord(1/4)
    
    Footnote
    Paper presented at the 36th Allerton Institute, 23-25 Oct 94, Allerton Park, Monticello, IL: "New Roles for Classification in Libraries and Information Networks: Presentation and Reports"
  6. Ziadie, A.M.: Classification in libraries and networks abroad : a report of a panel discussion (1995) 0.00
    0.0021457102 = product of:
      0.008582841 = sum of:
        0.008582841 = weight(_text_:information in 5569) [ClassicSimilarity], result of:
          0.008582841 = score(doc=5569,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 5569, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5569)
      0.25 = coord(1/4)
    
    Footnote
    Paper presented at the 36th Allerton Institute, 23-25 Oct 94, Allerton Park, Monticello, IL: "New Roles for Classification in Libraries and Information Networks: Presentation and Reports"
  7. Guerrini, M.: ACOLIT : un progetto in corso (1997) 0.00
    0.0021457102 = product of:
      0.008582841 = sum of:
        0.008582841 = weight(_text_:information in 5616) [ClassicSimilarity], result of:
          0.008582841 = score(doc=5616,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 5616, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5616)
      0.25 = coord(1/4)
    
    Abstract
    In June 1995 the ABEI (Italian Catholic Librarians Association) established a working group to create an authority list of Catholic authors (persons and corporate bodies) and of liturgical and religious anonymous works, titled ACOLIT, Autori Cattolici e Opere Liturgiche in Italiano (Catholic Authors and Liturgical Works in Italian). ACOLIT contains: (1) personal authors (particularly of the apostolic period and the Middle Ages); (2) popes and antipopes; (3) religious congregations, orders and societies; (4) the Catholic Church and the Roman Curia; (5) Catholic associations; (6) the Bible; (7) liturgical works; (8) religious anonymous works. Headings are established according to the RICA (Regole Italiane di Catalogazione per Autori), but also to the Norme per il catalogo degli stampati of the Vatican Library, AACR2R, the RAK, the Reglas de catalogación (ed. refundida y rev.), and the guidelines and decisions of IFLA. The working group has developed original treatments, particularly for the Bible. The group discusses the RICA choice and form of the names of popes, the Catholic Church and the Roman Curia, and suggests that classical and medieval writers should be entered in Italian, not in Latin, and that the indirect form, surname-name, should be used for saints who have a surname. ACOLIT has adopted the GARE punctuation (Guidelines for authority and reference entries / recommended by the Working Group on an International Authority System; approved by the Standing Committees of the IFLA Section on Cataloguing and the IFLA Section on Information Technology). The print edition is planned for June 1997. ACOLIT will present headings in three sections: (1) personal writers; (2) corporate bodies; (3) Bible, liturgical and religious anonymous works. ABEI will also publish an electronic edition (CD-ROM), periodically revised. The research will extend to Christian writers and, in the future, to writers of all religions.
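
    A minimal sketch of the kind of record such an authority list manages, assuming a simple accepted-heading-plus-variants structure; the class, field names, and example forms are illustrative only:

      from dataclasses import dataclass, field

      @dataclass
      class AuthorityEntry:
          # One authority record: the accepted heading plus variant
          # forms that become "see" references.
          heading: str
          variants: list[str] = field(default_factory=list)

      # Illustrates the rule argued above: the indirect form
      # (surname, name) for a saint who has a surname.
      entry = AuthorityEntry(
          heading="Borromeo, Carlo, santo",
          variants=["Carlo Borromeo, santo", "Charles Borromeo, saint"],
      )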
  8. Heidorn, P.B.: Image retrieval as linguistic and nonlinguistic visual model matching (1999) 0.00
    0.0021457102 = product of:
      0.008582841 = sum of:
        0.008582841 = weight(_text_:information in 841) [ClassicSimilarity], result of:
          0.008582841 = score(doc=841,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 841, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=841)
      0.25 = coord(1/4)
    
    Abstract
    This article reviews research on how people use mental models of images in an information retrieval environment. An understanding of these cognitive processes can aid a researcher in designing new systems and help librarians select systems that best serve their patrons. There are traditionally two main approaches to image indexing: concept-based and content-based (Rasmussen, 1997). The concept-based approach is used in many production library systems, while the content-based approach is dominant in research and in some newer systems. In the past, content-based indexing supported the identification of "low-level" features in an image. These features frequently do not require verbal labels. In many cases, current computer technology can create these indexes. Concept-based indexing, on the other hand, is a primarily verbal and abstract identification of "high-level" concepts in an image. This type of indexing requires the recognition of meaning and is primarily performed by humans. Most production-level library systems rely on concept-based indexing using keywords. Manual keyword indexing is, however, expensive and introduces problems with consistency. Recent advances have made some content-based indexing practical. In addition, some researchers are working on machine vision and pattern recognition techniques that blur the line between concept-based and content-based indexing. It is now possible to produce computer systems that allow users to search simultaneously on aspects of both concept-based and content-based indexes. The intelligent application of this technology requires an understanding of the user's visual mental models of images and cognitive behavior.
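
    A minimal sketch of the closing idea above, searching concept-based and content-based indexes simultaneously, assuming keyword overlap for the former and a dot product over feature vectors for the latter; the weights and representations are illustrative, not from the article:

      def combined_score(query_terms, query_features, image,
                         w_concept=0.5, w_content=0.5):
          # Blend a keyword (concept-based) match with a feature-vector
          # (content-based) similarity into a single ranking score.
          overlap = len(set(query_terms) & set(image["keywords"]))
          concept = overlap / max(len(query_terms), 1)
          content = sum(a * b for a, b in zip(query_features, image["features"]))
          return w_concept * concept + w_content * content

      img = {"keywords": ["sunset", "beach"], "features": [0.2, 0.9, 0.1]}
      print(combined_score(["beach", "ocean"], [0.1, 0.8, 0.3], img))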
  9. Bearman, D.; Miller, E.; Rust, G.; Trant, J.; Weibel, S.: A common model to support interoperable metadata : progress report on reconciling metadata requirements from the Dublin Core and INDECS/DOI communities (1999) 0.00
    0.0021457102 = product of:
      0.008582841 = sum of:
        0.008582841 = weight(_text_:information in 1249) [ClassicSimilarity], result of:
          0.008582841 = score(doc=1249,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 1249, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1249)
      0.25 = coord(1/4)
    
    Abstract
    The Dublin Core metadata community and the INDECS/DOI community of authors, rights holders, and publishers are seeking common ground in the expression of metadata for information resources. Recent meetings at the 6th Dublin Core Workshop in Washington DC sketched out common models for semantics (informed by the requirements articulated in the IFLA Functional Requirements for Bibliographic Records) and conventions for knowledge representation (based on the Resource Description Framework under development by the W3C). Further development of detailed requirements is planned by both communities in the coming months with the aim of fully representing the metadata needs of each. An open "Schema Harmonization" working group has been established to identify a common framework to support interoperability among these communities. The present document represents a starting point, identifying historical developments and common requirements of these perspectives on metadata, and charts a path for harmonizing their respective conceptual models. It is hoped that collaboration over the coming year will result in agreed semantic and syntactic conventions that will support a high degree of interoperability among these communities, ideally expressed in a single data model and using common, standard tools.
  10. Miller, E.; Schloss, B.; Lassila, O.; Swick, R.R.: Resource Description Framework (RDF) : model and syntax (1997) 0.00
    0.002124145 = product of:
      0.00849658 = sum of:
        0.00849658 = weight(_text_:information in 5903) [ClassicSimilarity], result of:
          0.00849658 = score(doc=5903,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.0960027 = fieldWeight in 5903, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5903)
      0.25 = coord(1/4)
    
    Abstract
    RDF - the Resource Description Framework - is a foundation for processing metadata; it provides interoperability between applications that exchange machine-understandable information on the Web. RDF emphasizes facilities to enable automated processing of Web resources. RDF metadata can be used in a variety of application areas; for example: in resource discovery to provide better search engine capabilities; in cataloging for describing the content and content relationships available at a particular Web site, page, or digital library; by intelligent software agents to facilitate knowledge sharing and exchange; in content rating; in describing collections of pages that represent a single logical "document"; for describing intellectual property rights of Web pages, and in many others. RDF with digital signatures will be key to building the "Web of Trust" for electronic commerce, collaboration, and other applications.
    Metadata is "data about data" or specifically in the context of RDF "data describing web resources." The distinction between "data" and "metadata" is not an absolute one; it is a distinction created primarily by a particular application. Many times the same resource will be interpreted in both ways simultaneously. RDF encourages this view by using XML as the encoding syntax for the metadata. The resources being described by RDF are, in general, anything that can be named via a URI. The broad goal of RDF is to define a mechanism for describing resources that makes no assumptions about a particular application domain, nor defines the semantics of any application domain. The definition of the mechanism should be domain neutral, yet the mechanism should be suitable for describing information about any domain.
    This document introduces a model for representing RDF metadata and one syntax for expressing and transporting this metadata in a manner that maximizes the interoperability of independently developed web servers and clients. The syntax described in this document is best considered as a "serialization syntax" for the underlying RDF representation model. The serialization syntax is XML, XML being the W3C's work-in-progress to define a richer Web syntax for a variety of applications. RDF and XML are complementary; there will be alternate ways to represent the same RDF data model, some more suitable for direct human authoring. Future work may lead to including such alternatives in this document.
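
    A minimal sketch of the model in present-day terms, using the Python rdflib library (which postdates this document) to state two properties of a resource as triples and emit one possible RDF/XML serialization; the property values are illustrative:

      from rdflib import Graph, Literal, URIRef
      from rdflib.namespace import DC

      g = Graph()
      doc = URIRef("http://www.w3.org/TR/WD-rdf-syntax/")
      # Each RDF statement is a (resource, property, value) triple.
      g.add((doc, DC.title, Literal("Resource Description Framework (RDF): Model and Syntax")))
      g.add((doc, DC.creator, Literal("Eric Miller")))
      print(g.serialize(format="pretty-xml"))  # one possible XML serialization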
  11. Scott, M.L.: Dewey Decimal Classification, 21st edition : a study manual and number building guide (1998) 0.00
    0.002124145 = product of:
      0.00849658 = sum of:
        0.00849658 = weight(_text_:information in 1454) [ClassicSimilarity], result of:
          0.00849658 = score(doc=1454,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.0960027 = fieldWeight in 1454, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1454)
      0.25 = coord(1/4)
    
    Content
    This work is a comprehensive guide to Edition 21 of the Dewey Decimal Classification (DDC 21). The previous edition was edited by John Phillip Comaromi, who was also the editor of DDC 20 and thus was able to impart in its pages information about the inner workings of the Decimal Classification Editorial Policy Committee, which guides the Classification's development.
    The manual begins with a brief history of the development of the Dewey Decimal Classification (DDC) up to this edition and its impact internationally. It continues with a review of the general structure of DDC and the 21st edition in particular, with emphasis on the framework ("Hierarchical Order," "Centered Entries") that aids the classifier in its use.
    An extensive part of this manual is an in-depth review of how DDC is updated with each edition, such as reductions and expansions, together with detailed lists of such changes in each table and class. Each citation of a change indicates the previous location of the topic, usually in parentheses but also in textual explanations ("moved from 248.463"). A brief discussion of each topic moved or added provides substance to what otherwise would be lists of numbers. Where the changes are so dramatic that a new class or division structure has been developed, Comparative and Equivalence Tables are provided in volume 1 of DDC 21 (such as Life sciences in 560-590); any such list in this manual would only be redundant. In these cases, the only references to changes in this work are those topics that were moved from other classes. Besides these citations of changes, each class is introduced with a brief background discussion about its development or structure or both to familiarize the user with it.
    A new aspect of this edition of the DDC study manual is that it is combined with Marty Bloomberg and Hans Weber's An Introduction to Classification and Number Building in Dewey (Libraries Unlimited, 1976) to provide a complete reference for the application of DDC. Detailed examples of number building for each class guide the classifier through the process that results in classifications for particular works within that class. In addition, at the end of each chapter, lists of book summaries are given as exercises in number analysis, with Library of Congress-assigned classifications to provide benchmarks. The last chapter covers book, or author, numbers, which, combined with the classification and often the date, provide unique call numbers for circulation and shelf arrangement. Guidelines on the application of Cutter tables and Library of Congress author numbers complete this comprehensive reference to the use of DDC 21.
    As with all such works, this was a tremendous undertaking, which coincided with the author completing a new edition of Conversion Tables: LC-Dewey, Dewey-LC (Libraries Unlimited, forthcoming). Helping hands are always welcome in our human existence, and this book is no exception. Grateful thanks are extended to Jane Riddle, at the NASA Goddard Space Flight Center Library, and to Darryl Hines, at SANAD Support Technologies, Inc., for their kind assistance in the completion of this study manual.
    Footnote
    Review in: Managing information 6(1999) no.2, p.49 (J. Bowman)
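
    The number building covered in the content note above lends itself to a small illustration. A minimal sketch of the mechanical idea behind adding a Table 1 standard subdivision to a base class number, assuming the common drop-the-filler-zeros pattern; real DDC schedules contain many exceptions, and this is not taken from the manual itself:

      def add_standard_subdivision(base: str, subdivision: str) -> str:
          # Append a Table 1 standard subdivision (e.g. "09", history)
          # to a base class number, dropping the base's filler zeros.
          digits = base.rstrip("0") + subdivision
          digits = digits.ljust(3, "0")  # pad very short results
          return digits[:3] + ("." + digits[3:] if len(digits) > 3 else "")

      print(add_standard_subdivision("510", "09"))  # 510.9, history of mathematics
      print(add_standard_subdivision("600", "09"))  # 609, history of technology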
  12. Moed, H.F.; Leeuwen, T.N. van; Reedijk, J.: ¬A new classification system to describe the ageing of scientific journals and their impact factors (1998) 0.00
    0.0017165683 = product of:
      0.006866273 = sum of:
        0.006866273 = weight(_text_:information in 4719) [ClassicSimilarity], result of:
          0.006866273 = score(doc=4719,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.0775819 = fieldWeight in 4719, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=4719)
      0.25 = coord(1/4)
    
    Abstract
    During the past decades, journal impact data obtained from the Journal Citation Reports (JCR) have gained relevance in library management, research management and research evaluation. Hence, both information scientists and bibliometricians share a responsibility towards the users of the JCR to analyse the reliability and validity of its measures thoroughly, to indicate pitfalls and to suggest possible improvements. In this article, ageing patterns are examined in the 'formal' use or impact of all scientific journals processed for the Science Citation Index (SCI) during 1981-1995. A new classification system of journals in terms of their ageing characteristics is introduced. This system has been applied to as many as 3,098 journals covered by the Science Citation Index. Following an earlier suggestion by Glänzel and Schoepflin, a maturing and a decline phase are distinguished. From an analysis across all subfields it is concluded that ageing characteristics are primarily specific to the individual journal rather than to the subfield, while the distribution of journals in terms of slowly or rapidly maturing or declining types is specific to the subfield. It is shown that the cited half-life (CHL), printed in the JCR, is an inappropriate measure of the decline of journal impact. Following earlier work by Line and others, a more adequate parameter of decline is calculated, taking into account the size of annual volumes during a range of fifteen years. For 76 per cent of SCI journals the relative difference between this new parameter and the ISI CHL exceeds 5 per cent. The current JCR journal impact factor is shown to be biased towards journals revealing a rapid maturing and decline in impact. Therefore, a longer-term impact factor is proposed, as well as a normalised impact statistic that takes into account the citation characteristics of the research subfield covered by a journal and the type of documents published in it. When these new measures are combined with the proposed ageing classification system, they provide a significantly improved picture of a journal's impact compared with that obtained from the JCR.
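
    A minimal sketch of one plausible reading of a size-corrected decline parameter (each year's citation count normalised by the size of that year's volume before locating the median citation age); this is a reconstruction for illustration, not the authors' actual formula:

      def size_adjusted_half_life(citations_by_age, items_by_age):
          # Median citation age after normalising each year's citation
          # count by the number of citable items published that year.
          rates = [c / n for c, n in zip(citations_by_age, items_by_age)]
          half = sum(rates) / 2
          cumulative = 0.0
          for age, rate in enumerate(rates):
              cumulative += rate
              if cumulative >= half:
                  return age
          return len(rates) - 1

      # Fifteen years of data, matching the range used in the article:
      print(size_adjusted_half_life([90, 80, 60, 40, 20] + [10] * 10,
                                    [100] * 15))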
  13. Wiegand, W.A.: Irrepressible reformer : a biography of Melvil Dewey (1996) 0.00
    0.0017165683 = product of:
      0.006866273 = sum of:
        0.006866273 = weight(_text_:information in 1646) [ClassicSimilarity], result of:
          0.006866273 = score(doc=1646,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.0775819 = fieldWeight in 1646, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1646)
      0.25 = coord(1/4)
    
    Footnote
    Review in: Journal of librarianship and information science 29(1997) no.3, pp.164-165 (J.H. Bowman)
  14. Clavel, G.; Dale, P.; Heiner-Freiling, M.; Kunz, M.; Landry, P.; MacEwan, A.; Naudi, M.; Oddy, P.; Saget, A.: CoBRA+ working group on multilingual subject access : final report (1999) 0.00
    0.0015019972 = product of:
      0.006007989 = sum of:
        0.006007989 = weight(_text_:information in 6067) [ClassicSimilarity], result of:
          0.006007989 = score(doc=6067,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.06788416 = fieldWeight in 6067, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=6067)
      0.25 = coord(1/4)
    
    Footnote
    See also: http://www.bl.uk/information/finrap3.html
  15. Baker, T.: Languages for Dublin Core (1998) 0.00
    0.0015019972 = product of:
      0.006007989 = sum of:
        0.006007989 = weight(_text_:information in 1257) [ClassicSimilarity], result of:
          0.006007989 = score(doc=1257,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.06788416 = fieldWeight in 1257, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1257)
      0.25 = coord(1/4)
    
    Abstract
    Over the past three years, the Dublin Core Metadata Initiative has achieved a broad international consensus on the semantics of a simple element set for describing electronic resources. Since the first workshop in March 1995, which was reported in the very first issue of D-Lib Magazine, Dublin Core has been the topic of perhaps a dozen articles here. Originally intended to be simple and intuitive enough for authors to tag Web pages without special training, Dublin Core is now being adapted for more specialized uses, from government information and legal deposit to museum informatics and electronic commerce. To meet such specialized requirements, Dublin Core can be customized with additional elements or qualifiers. However, these refinements can compromise interoperability across applications. There are tradeoffs between using specific terms that precisely meet local needs and general terms that are understood more widely. We can better understand this inevitable tension between simplicity and complexity if we recognize that metadata is a form of human language. With Dublin Core, as with a natural language, people are inclined to stretch definitions, make general terms more specific, make specific terms more general, misunderstand intended meanings, and coin new terms. One goal of this paper, therefore, is to examine the experience of some related ways to seek semantic interoperability through simplicity: planned languages, interlingua constructs, and pidgins.
    The problem of semantic interoperability is compounded when we consider Dublin Core in translation. All of the workshops, documents, mailing lists, user guides, and working group outputs of the Dublin Core Initiative have been in English. But in many countries and for many applications, people need a metadata standard in their own language. In principle, the broad elements of Dublin Core can be defined equally well in Bulgarian or Hindi. Since Dublin Core is a controlled standard, however, any parallel definitions need to be kept in sync as the standard evolves. Another goal of the paper, then, is to define the conceptual and organizational problem of maintaining a metadata standard in multiple languages.
    In addition to a name and definition, which are meant for human consumption, each Dublin Core element has a label, or indexing token, meant for harvesting by search engines. For practical reasons, these machine-readable tokens are English-looking strings such as Creator and Subject (just as HTML tags are called HEAD, BODY, or TITLE). These tokens, which are shared by Dublin Cores in every language, ensure that metadata fields created in any particular language are indexed together across repositories. As symbols of underlying universal semantics, these tokens form the basis of semantic interoperability among the multiple Dublin Cores. As long as we limit ourselves to sharing these indexing tokens among exact translations of a simple set of fifteen broad elements, the definitions of which fit easily onto two pages, the problem of Dublin Core in multiple languages is straightforward. But nothing having to do with human language is ever so simple. Just as speakers of various languages must learn the language of Dublin Core in their own tongues, we must find the right words to talk about a metadata language that is expressible in many discipline-specific jargons and natural languages and that inevitably will evolve and change over time.
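
    A minimal sketch of the shared-token idea described above: localized element labels resolve to the same machine-readable token, so fields created in different languages are indexed together. The non-English labels here are illustrative, not official Dublin Core localisations:

      # Localized label -> shared indexing token (labels illustrative).
      SHARED_TOKENS = {
          ("en", "Creator"): "Creator",
          ("de", "Urheber"): "Creator",
          ("it", "Autore"): "Creator",
          ("en", "Subject"): "Subject",
          ("de", "Thema"): "Subject",
      }

      def to_token(language: str, label: str) -> str:
          # Resolve a localized Dublin Core label to its shared token.
          return SHARED_TOKENS[(language, label)]

      # German and Italian records land in the same index field:
      assert to_token("de", "Urheber") == to_token("it", "Autore") == "Creator"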

Types

  • a 6519
  • m 511
  • s 405
  • r 85
  • el 78
  • i 33
  • b 23
  • n 21
  • ? 8
  • p 7
  • d 4
  • h 2
  • pat 1
  • x 1
  • z 1
