Search (7 results, page 1 of 1)

  • × author_ss:"Lupovici, C."
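The number shown beside each hit (0.01, 0.00, …) is a Lucene relevance score. Under ClassicSimilarity a term's contribution is built from tf × idf × norm factors; the minimal sketch below reproduces the fieldWeight of the term "in" in the first result, using the docFreq/maxDocs figures reported by the engine:

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # Lucene ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq: float) -> float:
    # ClassicSimilarity tf: square root of the term frequency
    return math.sqrt(freq)

def field_weight(freq: float, doc_freq: int, max_docs: int,
                 field_norm: float) -> float:
    # fieldWeight = tf(freq) * idf * fieldNorm, as in the explain output
    return tf(freq) * idf(doc_freq, max_docs) * field_norm

# Term "in", freq=6, docFreq=30841, maxDocs=44218, fieldNorm=0.0546875
fw = field_weight(6.0, 30841, 44218, 0.0546875)  # ≈ 0.1822149
```

The final document score further multiplies each weight by queryNorm and a coord() factor penalizing documents that match only part of the query.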
  1. Lupovici, C.: Le digital object identifier : le système du DOI (1998) 0.01
    Abstract
    The Digital Object Identifier (DOI) was developed by the academic, technical and medical publishing sectors to enable the management of access rights to information published electronically. The DOI system evolved from the identifiers for physical documentary units developed in the 1970s and for physical and logical document units developed in the 1980s, recently modified to meet the needs of electronic distribution. This experience feeds into the standardization work currently in progress on the Internet concerning the identification of resources and their localization. The DOI system could become an international standard, as the ISBN and the ISSN have.
    Date
    22. 1.1999 19:29:22
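The DOI described above is a two-part identifier: a registrant prefix beginning with "10." and a registrant-chosen suffix, separated by the first slash. A minimal parsing sketch (10.1000/182 is the DOI of the DOI Handbook itself):

```python
def split_doi(doi: str) -> tuple[str, str]:
    """Split a DOI into its registrant prefix and item suffix.

    A DOI such as "10.1000/182" has a prefix ("10.1000": the directory
    indicator "10." plus a registrant code) and a suffix ("182") chosen
    by the registrant; the first "/" separates the two.
    """
    if not doi.startswith("10."):
        raise ValueError("a DOI prefix always begins with '10.'")
    prefix, _, suffix = doi.partition("/")
    if not suffix:
        raise ValueError("missing suffix after '/'")
    return prefix, suffix
```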
  2. Masanès, J.; Lupovici, C.: Preservation metadata : the NEDLIB's proposal (Bibliothèque Nationale de France, 2001) 0.01
    Abstract
    Long-term preservation of digital documents requires above all solving the problem of technological obsolescence. Accessing digital documents in 20 or 100 years will be impossible if we, or our successors, cannot process the bit stream underlying them. We can be sure that the modality of data processing will be different in 20 or 100 years. It is therefore our task to collect key information about today's data processing to ensure future access to these documents. In this paper we present the NEDLIB proposal for a preservation metadata set. This set gathers the core metadata that are mandatory for preservation management purposes. We propose 8 metadata elements and 38 sub-elements following the OAIS taxonomy of information objects. A layered information analysis of the digital document is proposed in order to list all information involved in the data processing of the bit stream. These metadata elements are intended to be populated, as far as possible, automatically, to make it practical to handle large numbers of documents.
    Source
    Zeitschrift für Bibliothekswesen und Bibliographie. 48(2001) H.3/4, S.194-199
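The abstract describes elements and sub-elements grouped by the OAIS taxonomy of information objects. The sketch below shows what such a record might look like; the element and sub-element names are illustrative assumptions, not NEDLIB's actual 8/38 set:

```python
# Hypothetical preservation-metadata record, grouped by OAIS-style
# information categories. Names and values are illustrative only.
preservation_record = {
    "representation_information": {
        "file_format": "PDF 1.3",
        "rendering_software": "Acrobat Reader 4.0",
    },
    "fixity_information": {
        "checksum_algorithm": "MD5",
        "checksum_value": "d41d8cd98f00b204e9800998ecf8427e",
    },
    "provenance_information": {
        "ingest_date": "2001-03-15",
        "source_institution": "Bibliothèque Nationale de France",
    },
    "reference_information": {
        "identifier": "doi:10.1000/182",  # illustrative identifier
    },
}

def flatten(record):
    """Yield (element, sub_element, value) triples, e.g. to populate a
    metadata store automatically, as the abstract advocates."""
    for element, subs in record.items():
        for sub, value in subs.items():
            yield element, sub, value
```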
  3. Lupovici, C.: La conversion rétrospective des catalogues (1991) 0.00
    Abstract
    Defines retrospective conversion, discusses the origins of the term, and describes developments during the 1980s. Discusses the objectives of retrospective conversion and techniques of data capture, comparing in-house conversion with conversion by an outside specialist. Stresses the need for planning, possible problems, and the choice of standards and of bibliographic storage.
  4. Lupovici, C.: L'information bibliographique des documents électroniques (1998) 0.00
    Abstract
    Bibliographic information adds value to primary documents by facilitating access to them. Classic library catalogues can be extended to give access to electronic documents through a single link, as with the recent addition of a specific field to the MARC format. Other approaches being tested in other user communities introduce added-value information into the electronic document itself, using the document's own format. These efforts are contributing to the construction of the libraries and archives of tomorrow.
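The "specific field" added to MARC for linking to electronic documents is presumably field 856 (Electronic Location and Access). A minimal sketch rendering such a field in the conventional display form ('#' standing for a blank indicator, '$' introducing subfield codes):

```python
def format_856(url: str, link_text: str = "") -> str:
    """Render a MARC 856 (Electronic Location and Access) field.

    First indicator 4 means access via HTTP; subfield $u holds the URI,
    and optional subfield $y holds the link text shown to the reader.
    """
    field = "856 4# $u" + url
    if link_text:
        field += " $y" + link_text
    return field
```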
  5. Lupovici, C.: Web crawling : the Bibliothèque Nationale de France experience (2005) 0.00
    Abstract
    The Bibliothèque Nationale de France, in the framework of its legal deposit mission, is currently experimenting with web harvesting procedures and organising the long-term preservation of digital documents. The work carried out to achieve this goal includes fundamental reflection on the essence of legal deposit and on the bibliographic treatment of Internet resources. Working at real scale, the huge volume of digital resources is an important factor in any decision to be taken by the National Library.
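Web harvesting of the kind described rests on fetching pages and extracting their outgoing links to drive the crawl. A minimal stdlib sketch of the link-extraction step, run here on an inline page rather than a live fetch (the URLs are illustrative):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect absolute URLs from the <a href> attributes of one page."""

    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's base URL
                    self.links.append(urljoin(self.base_url, value))

page = ('<html><body><a href="/depot-legal">Legal deposit</a>'
        '<a href="http://example.org/x">external</a></body></html>')
extractor = LinkExtractor("http://www.bnf.fr/")
extractor.feed(page)
```

A real harvester would add politeness delays, robots.txt handling, and deduplication of already-seen URLs before enqueueing `extractor.links`.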
  6. Lupovici, C.: Standards and electronic publishing (1996) 0.00
    Abstract
    Standardization of multimedia over the past five years has addressed content, in terms of text, images, graphics, sound and video, as well as information structure and presentation. Describes the coding of content and structure, the SGML family of standards, and the Portable Document Format (PDF). Categorizes the current formats of electronic publications.
  7. Lupovici, C.: L'information secondaire du document primaire : format MARC ou SGML? (1997) 0.00
    Abstract
    Secondary information, e.g. MARC-based bibliographic records, comprises structured data for identifying, tagging, retrieving and managing primary documents. SGML, the standard format for coding the content and structure of primary documents, was introduced in 1986 as a publishing tool but is now being applied to bibliographic records. SGML now has standard document type definitions (DTDs) for books, serials, articles and mathematical formulae; a simplified application (HTML) is used for Web pages. Pilot projects to develop SGML as a standard for bibliographic exchange include the Dublin Core, listing 13 descriptive elements for Internet documents; the French GRISELI programme, using SGML for exchanging grey literature; and US experiments on reformatting USMARC for use with SGML-based records.
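The MARC-versus-SGML contrast in the abstract can be made concrete by rendering one minimal record both ways. The MARC tags used (100 main entry, 245 title, 260 publication) are real, but the mapping below is a simplified illustration, with Dublin Core-style element names standing in for a full SGML DTD:

```python
record = {"author": "Lupovici, C.",
          "title": "L'information secondaire du document primaire",
          "year": "1997"}

# MARC-style: numeric tags, positional indicators, '$' subfield codes.
marc_tags = {"author": "100 1  $a", "title": "245 10 $a", "year": "260    $c"}

def as_marc(rec: dict) -> str:
    return "\n".join(marc_tags[k] + v for k, v in rec.items())

# SGML-style: named elements; Dublin Core names used for illustration.
sgml_tags = {"author": "creator", "title": "title", "year": "date"}

def as_sgml(rec: dict) -> str:
    return "\n".join(f"<{sgml_tags[k]}>{v}</{sgml_tags[k]}>"
                     for k, v in rec.items())
```

The same data, two encodings: MARC conveys meaning through tag numbers and fixed positions, while SGML names each element explicitly, which is what makes it attractive for exchange beyond the library community.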