Search (5 results, page 1 of 1)

  Filter: author_ss:"Andresen, L."
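A filter like the one above corresponds to a Solr field query against the `author_ss` field. As a sketch, the request that produces a listing like this one could be built as follows — the host, port, and core name (`kodispatch`) are assumptions, not taken from this page; `q`, `rows`, and `debugQuery` are standard Solr parameters, with `debugQuery=true` being what yields the per-hit score explanations shown below:

```python
from urllib.parse import urlencode

# Hypothetical Solr endpoint; only the field name author_ss comes from
# the filter shown above. Host, port, and core name are placeholders.
params = urlencode({
    "q": 'author_ss:"Andresen, L."',
    "rows": 5,                # five hits, as in this result page
    "debugQuery": "true",     # emits the ClassicSimilarity explain trees
})
url = "http://localhost:8983/solr/kodispatch/select?" + params
print(url)
```

The query value is URL-encoded by `urlencode`, so the quoted phrase survives the round trip to the server intact.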
  1. Andresen, L.: Metadata in Denmark (2000) 0.06
    0.05533268 = product of:
      0.11066536 = sum of:
        0.11066536 = sum of:
          0.0108246 = weight(_text_:a in 4899) [ClassicSimilarity], result of:
            0.0108246 = score(doc=4899,freq=2.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.20383182 = fieldWeight in 4899, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.125 = fieldNorm(doc=4899)
          0.09984076 = weight(_text_:22 in 4899) [ClassicSimilarity], result of:
            0.09984076 = score(doc=4899,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.61904186 = fieldWeight in 4899, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=4899)
      0.5 = coord(1/2)
    
    Date
    16.07.2000 20:58:22
    Type
    a
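The score breakdown above is standard Lucene ClassicSimilarity (TF-IDF) explain output. As a cross-check, the figures for the term "a" in doc 4899 can be reproduced with a minimal sketch; all constants are copied directly from the explain tree, and the formulas are ClassicSimilarity's documented tf/idf:

```python
import math

# Lucene ClassicSimilarity components, reproduced from the explain tree
# for result 1 (doc 4899):
#   tf(freq)        = sqrt(freq)
#   idf(docFreq, N) = 1 + ln(N / (docFreq + 1))
#   queryWeight     = idf * queryNorm
#   fieldWeight     = tf * idf * fieldNorm
#   term score      = queryWeight * fieldWeight

def tf(freq):
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# Constants taken from the explain output above.
query_norm = 0.046056706
field_norm = 0.125

idf_a = idf(37942, 44218)                 # ~1.153047
query_weight = idf_a * query_norm         # ~0.053105544
field_weight = tf(2.0) * idf_a * field_norm  # ~0.20383182
score = query_weight * field_weight       # ~0.0108246

print(idf_a, score)
```

The two term scores (for "a" and "22") are then summed, and the final 0.05533268 follows from the coord(1/2) factor: only one of two query clauses matched, so the sum 0.11066536 is halved.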
  2. Andresen, L.: After MARC - what then? (2004) 0.02
    
    Abstract
    The article discusses the future of the MARC formats and outlines how future cataloguing practice and bibliographic records might look. The background and basic functionality of the MARC formats are outlined, and it is pointed out that MARC exists in several different versions; this is illustrated through a comparison between the MARC21 format and the Danish MARC format "danMARC2". It is argued that present cataloguing codes and MARC formats are based primarily on the Paris Principles, and that the Functional Requirements for Bibliographic Records (FRBR) would provide a more solid and user-oriented platform for the future development of cataloguing codes and formats. Furthermore, it is argued that MARC is a library-specific format, which facilitates neither exchange with sectors outside the library domain nor the inclusion of other kinds of texts. XML could serve as the technical platform for a model for future records, consisting of a set of core data supplemented by the additional data needed by different sectors and for different purposes.
    Source
    Library hi tech 22(2004) no.1, p.40-51
    Type
    a
  3. Andresen, L.: Z39.50 update (2000) 0.00
    
    Type
    a
  4. Andresen, L.: Standardisation of Dublin Core in Europe (2000) 0.00
    
    Abstract
    Dublin Core was developed in an open, consensus-building environment and has succeeded in many countries and in many domains. The idea of Dublin Core was to establish a de facto standard for discovery metadata. There is a natural progression from de facto standards to formal standards, and a very important property of a formal standard is the stability and credibility that official status brings. That was the background for the metadata policy of the Danish National Library Authority, agreed upon in December 1997: we decided to work for a formal standardisation of Dublin Core. There was - and is - a need for a standardised method of describing Internet resources, and at that time Dublin Core seemed to be the best candidate for a metadata schema. It was not yet widely used in practice, but Dublin Core was what people were talking about.
    Type
    a
  5. Andresen, L.: 7th Dublin Core Workshop (2000) 0.00
    
    Type
    a
