Search (10 results, page 1 of 1)

  • classification_ss:"020"
  1. Gartner, R.: Metadata in the digital library : building an integrated strategy with XML (2021) 0.02
    0.018424764 = product of:
      0.09826541 = sum of:
        0.057796773 = weight(_text_:descriptive in 732) [ClassicSimilarity], result of:
          0.057796773 = score(doc=732,freq=6.0), product of:
            0.17974061 = queryWeight, product of:
              5.601063 = idf(docFreq=443, maxDocs=44218)
              0.032090448 = queryNorm
            0.32155657 = fieldWeight in 732, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.601063 = idf(docFreq=443, maxDocs=44218)
              0.0234375 = fieldNorm(doc=732)
        0.013489546 = product of:
          0.026979093 = sum of:
            0.026979093 = weight(_text_:rules in 732) [ClassicSimilarity], result of:
              0.026979093 = score(doc=732,freq=2.0), product of:
                0.16161752 = queryWeight, product of:
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.032090448 = queryNorm
                0.16693173 = fieldWeight in 732, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.036312 = idf(docFreq=780, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=732)
          0.5 = coord(1/2)
        0.026979093 = weight(_text_:rules in 732) [ClassicSimilarity], result of:
          0.026979093 = score(doc=732,freq=2.0), product of:
            0.16161752 = queryWeight, product of:
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.032090448 = queryNorm
            0.16693173 = fieldWeight in 732, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.036312 = idf(docFreq=780, maxDocs=44218)
              0.0234375 = fieldNorm(doc=732)
      0.1875 = coord(3/16)
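
    The nested breakdown above is Lucene 'explain' output for TF-IDF scoring (ClassicSimilarity). Read bottom-up, the figures for the 'descriptive' clause recombine as follows; this is a worked restatement of the values already shown, not additional data:

      \begin{aligned}
      \mathrm{fieldWeight} &= \mathrm{tf}\cdot\mathrm{idf}\cdot\mathrm{fieldNorm} = \sqrt{6}\times 5.601063\times 0.0234375 \approx 0.3215566 \\
      \mathrm{queryWeight} &= \mathrm{idf}\cdot\mathrm{queryNorm} = 5.601063\times 0.032090448 \approx 0.1797406 \\
      \mathrm{score}_{\text{descriptive}} &= \mathrm{queryWeight}\times\mathrm{fieldWeight} \approx 0.0577968 \\
      \mathrm{score}_{\text{doc }732} &= (0.0577968 + 0.0134895 + 0.0269791)\times\tfrac{3}{16} \approx 0.0184248
      \end{aligned}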
    
    Abstract
    The range of metadata needed to run a digital library and preserve its collections in the long term is far more extensive and complicated than anything in its traditional counterpart. It includes the same 'descriptive' information that guides users to the resources they require, but must supplement this with comprehensive 'administrative' metadata: technical details of the files that make up its collections, documentation of complex intellectual property rights, and the extensive set of metadata needed to support long-term preservation. Accommodating all of this requires multiple metadata standards, all of which have to be brought together into a single integrated whole.
    Metadata in the Digital Library is a complete guide to building a digital library metadata strategy from scratch, using established metadata standards bound together by the markup language XML. The book introduces the reader to the theory of metadata and shows how it can be applied in practice. It lays out the basic principles that should underlie any metadata strategy, including its relation to such fundamentals as the digital curation lifecycle, and demonstrates how they should be put into effect. It introduces the XML language and the key standards for each type of metadata, including Dublin Core and MODS for descriptive metadata and PREMIS for its administrative and preservation counterpart. Finally, the book shows how these can all be integrated using the packaging standard METS. Two case studies from the Warburg Institute in London show how the strategy can be implemented in a working environment. The strategy laid out in this book will ensure that a digital library's metadata will support all of its operations, be fully interoperable with others and enable its long-term preservation. It assumes no prior knowledge of metadata, XML or any of the standards that it covers. It provides both an introduction to best practices in digital library metadata and a manual for their practical implementation.
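
    The METS-based integration described here can be illustrated with a minimal sketch in Python. This is a hypothetical example, not taken from the book or the Warburg Institute case studies: it wraps an invented MODS title inside a METS dmdSec using standard METS/MODS element names (mets, dmdSec, mdWrap, xmlData, mods, titleInfo) and adds an almost empty structMap; a real package would also carry an amdSec (e.g. PREMIS), a fileSec and a populated structural map.

      import xml.etree.ElementTree as ET

      METS = "http://www.loc.gov/METS/"
      MODS = "http://www.loc.gov/mods/v3"
      ET.register_namespace("mets", METS)
      ET.register_namespace("mods", MODS)

      # METS root with one descriptive metadata section wrapping a MODS record
      root = ET.Element(f"{{{METS}}}mets")
      dmd = ET.SubElement(root, f"{{{METS}}}dmdSec", ID="DMD1")
      wrap = ET.SubElement(dmd, f"{{{METS}}}mdWrap", MDTYPE="MODS")
      xmldata = ET.SubElement(wrap, f"{{{METS}}}xmlData")
      mods = ET.SubElement(xmldata, f"{{{MODS}}}mods")
      title_info = ET.SubElement(mods, f"{{{MODS}}}titleInfo")
      ET.SubElement(title_info, f"{{{MODS}}}title").text = "Metadata in the digital library"  # illustrative value only

      # METS expects at least one structural map; link it to the dmdSec above
      struct_map = ET.SubElement(root, f"{{{METS}}}structMap")
      ET.SubElement(struct_map, f"{{{METS}}}div", DMDID="DMD1")

      print(ET.tostring(root, encoding="unicode"))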
    Content
    Contents: 1 Introduction, Aims and Definitions -- 1.1 Origins -- 1.2 From information science to libraries -- 1.3 The central place of metadata -- 1.4 The book in outline -- 2 Metadata Basics -- 2.1 Introduction -- 2.2 Three types of metadata -- 2.2.1 Descriptive metadata -- 2.2.2 Administrative metadata -- 2.2.3 Structural metadata -- 2.3 The core components of metadata -- 2.3.1 Syntax -- 2.3.2 Semantics -- 2.3.3 Content rules -- 2.4 Metadata standards -- 2.5 Conclusion -- 3 Planning a Metadata Strategy: Basic Principles -- 3.1 Introduction -- 3.2 Principle 1: Support all stages of the digital curation lifecycle -- 3.3 Principle 2: Support the long-term preservation of the digital object -- 3.4 Principle 3: Ensure interoperability -- 3.5 Principle 4: Control metadata content wherever possible -- 3.6 Principle 5: Ensure software independence -- 3.7 Principle 6: Impose a logical system of identifiers -- 3.8 Principle 7: Use standards whenever possible -- 3.9 Principle 8: Ensure the integrity of the metadata itself -- 3.10 Summary: the basic principles of a metadata strategy -- 4 Planning a Metadata Strategy: Applying the Basic Principles -- 4.1 Introduction -- 4.2 Initial steps: standards as a foundation -- 4.2.1 'Off-the-shelf' standards -- 4.2.2 Mapping out an architecture and serialising it into a standard -- 4.2.3 Devising a local metadata scheme -- 4.2.4 How standards support the basic principles -- 4.3 Identifiers: everything in its place -- 5 XML: The Syntactical Foundation of Metadata -- 5.1 Introduction -- 5.2 What XML looks like -- 5.3 XML schemas -- 5.4 Namespaces -- 5.5 Creating and editing XML -- 5.6 Transforming XML -- 5.7 Why use XML? -- 6 METS: The Metadata Package -- 6.1 Introduction -- 6.2 Why use METS?
  2. Ranganathan, S.R.: The five laws of library science (1989) 0.02
    0.01676211 = product of:
      0.13409688 = sum of:
        0.11168019 = weight(_text_:2nd in 6227) [ClassicSimilarity], result of:
          0.11168019 = score(doc=6227,freq=2.0), product of:
            0.18010403 = queryWeight, product of:
              5.6123877 = idf(docFreq=438, maxDocs=44218)
              0.032090448 = queryNorm
            0.6200871 = fieldWeight in 6227, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6123877 = idf(docFreq=438, maxDocs=44218)
              0.078125 = fieldNorm(doc=6227)
        0.022416692 = product of:
          0.044833384 = sum of:
            0.044833384 = weight(_text_:ed in 6227) [ClassicSimilarity], result of:
              0.044833384 = score(doc=6227,freq=2.0), product of:
                0.11411327 = queryWeight, product of:
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.032090448 = queryNorm
                0.39288494 = fieldWeight in 6227, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6227)
          0.5 = coord(1/2)
      0.125 = coord(2/16)
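
    The same arithmetic can be reproduced in a few lines of Python as a cross-check. This is a minimal sketch, not the catalogue's actual scoring code: it mirrors ClassicSimilarity's tf = sqrt(freq) and score = queryWeight x fieldWeight, and simply copies the idf, queryNorm and fieldNorm values from the explain output above instead of recomputing them.

      def term_score(freq, idf, query_norm, field_norm):
          """One term's contribution under Lucene's ClassicSimilarity (TF-IDF)."""
          tf = freq ** 0.5                                 # tf(freq) = sqrt(freq)
          query_weight = idf * query_norm                  # boost assumed to be 1
          field_weight = tf * idf * field_norm
          return query_weight * field_weight

      QUERY_NORM = 0.032090448   # queryNorm from the explain output

      # term "2nd" in doc 6227: freq=2, idf=5.6123877, fieldNorm=0.078125
      w_2nd = term_score(2.0, 5.6123877, QUERY_NORM, 0.078125)        # ~0.11168019
      # term "ed": only 1 of 2 clauses of its sub-query matched -> coord(1/2)
      w_ed = term_score(2.0, 3.5559888, QUERY_NORM, 0.078125) * 0.5   # ~0.022416692

      # 2 of 16 top-level clauses matched -> coord(2/16) = 0.125
      print((w_2nd + w_ed) * (2 / 16))   # ~0.01676211, the score shown for result 2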
    
    Issue
    2nd ed., repr.
  3. The common market for information : proceedings of the annual conference of the Institute of Information Scientists (1992) 0.00
    0.0033163952 = product of:
      0.053062323 = sum of:
        0.053062323 = weight(_text_:26 in 6096) [ClassicSimilarity], result of:
          0.053062323 = score(doc=6096,freq=2.0), product of:
            0.113328174 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.032090448 = queryNorm
            0.4682183 = fieldWeight in 6096, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.09375 = fieldNorm(doc=6096)
      0.0625 = coord(1/16)
    
    Date
    26. 7.2002 12:20:23
  4. Gurnsey, J.; White, M.: Information consultancy (1988) 0.00
    0.0022109302 = product of:
      0.035374884 = sum of:
        0.035374884 = weight(_text_:26 in 593) [ClassicSimilarity], result of:
          0.035374884 = score(doc=593,freq=2.0), product of:
            0.113328174 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.032090448 = queryNorm
            0.31214553 = fieldWeight in 593, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0625 = fieldNorm(doc=593)
      0.0625 = coord(1/16)
    
    Date
    26. 7.2002 12:02:02
  5. Tüür-Fröhlich, T.: ¬The non-trivial effects of trivial errors in scientific communication and evaluation (2016) 0.00
    0.002063346 = product of:
      0.033013538 = sum of:
        0.033013538 = weight(_text_:author in 3137) [ClassicSimilarity], result of:
          0.033013538 = score(doc=3137,freq=2.0), product of:
            0.15482868 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.032090448 = queryNorm
            0.21322623 = fieldWeight in 3137, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.03125 = fieldNorm(doc=3137)
      0.0625 = coord(1/16)
    
    Abstract
    "Thomson Reuters' citation indexes i.e. SCI, SSCI and AHCI are said to be "authoritative". Due to the huge influence of these databases on global academic evaluation of productivity and impact, Terje Tüür-Fröhlich decided to conduct case studies on the data quality of Social Sciences Citation Index (SSCI) records. Tüür-Fröhlich investigated articles from social science and law. The main findings: SSCI records contain tremendous amounts of "trivial errors", not only misspellings and typos as previously mentioned in bibliometrics and scientometrics literature. But Tüür-Fröhlich's research documented fatal errors which have not been mentioned in the scientometrics literature yet at all. Tüür-Fröhlich found more than 80 fatal mutations and mutilations of Pierre Bourdieu (e.g. "Atkinson" or "Pierre, B. and "Pierri, B."). SSCI even generated zombie references (phantom authors and works) by data fields' confusion - a deadly sin for a database producer - as fragments of Patent Laws were indexed as fictional author surnames/initials. Additionally, horrific OCR-errors (e.g. "nuxure" instead of "Nature" as journal title) were identified. Tüür-Fröhlich´s extensive quantitative case study of an article of the Harvard Law Review resulted in a devastating finding: only 1% of all correct references from the original article were indexed by SSCI without any mistake or error. Many scientific communication experts and database providers' believe that errors in databanks are of less importance: There are many errors, yes - but they would counterbalance each other, errors would not result in citation losses and would not bear any effect on retrieval and evaluation outcomes. Terje Tüür-Fröhlich claims the contrary: errors and inconsistencies are not evenly distributed but linked with languages biases and publication cultures."
  6. Vickery, B.C.; Vickery, A.: Information science in theory and practice (1993) 0.00
    0.001934564 = product of:
      0.030953024 = sum of:
        0.030953024 = weight(_text_:26 in 3033) [ClassicSimilarity], result of:
          0.030953024 = score(doc=3033,freq=2.0), product of:
            0.113328174 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.032090448 = queryNorm
            0.27312735 = fieldWeight in 3033, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3033)
      0.0625 = coord(1/16)
    
    Date
    26. 7.2002 13:23:28
  7. Bertram, J.: Einführung in die inhaltliche Erschließung : Grundlagen - Methoden - Instrumente (2005) 0.00
    0.0018060643 = product of:
      0.028897028 = sum of:
        0.028897028 = weight(_text_:anglo in 210) [ClassicSimilarity], result of:
          0.028897028 = score(doc=210,freq=2.0), product of:
            0.20485519 = queryWeight, product of:
              6.3836813 = idf(docFreq=202, maxDocs=44218)
              0.032090448 = queryNorm
            0.14106075 = fieldWeight in 210, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.3836813 = idf(docFreq=202, maxDocs=44218)
              0.015625 = fieldNorm(doc=210)
      0.0625 = coord(1/16)
    
    Abstract
    Subject indexing starts from the question of how one can gain quick, targeted access to the contents of documents and an overview of them. A document can be anything that may serve as a carrier of content: a patent specification or a poster, a television report or a piece of music, a museum object or an internet resource, a book, a newspaper article and much more. As a lecturer in subject indexing at the Institut für Information und Dokumentation (IID) in Potsdam, which trains scientific documentalists, I spent four years working on this question. The present book is an essence of the teaching materials I developed during that time. In preparing those materials I wanted, on the one hand, to address the lack of current German-language literature on subject indexing and, on the other, to document the teaching as completely as possible. Condensing my lecture notes into a book is a response to the frequently expressed wish of course participants to be able to buy a 'real book' alongside all the loose-leaf collections. The publication is equally addressed to my current audience, the students of the degree programme in information professions at the Fachhochschule in Eisenstadt (Austria), and beyond that to participants in any degree programme in the library, information and documentation field. I also hope that it may serve one or another teacher as a working aid. In compiling the teaching materials my aim was not to reinvent the wheel; rather, my intention was to gather the wheels that already exist and to unite them in a single coherent framework. The pool of literature on which the publication draws consists first of national and international standards that touch on subject indexing. To these are added the not exactly numerous monographs and articles in the German-speaking world, together with a few English-language book publications on the topic. Finally, I have included journal articles from the library and documentation field of the last four years, again mostly from the German-speaking world and occasionally from the Anglo-American one. A central concern of this book is to shed light on the terminological darkness that confronts the interested reader during any closer study of the literature: actual usage frequently deviates from the standardised one, librarians use different terms than documentalists and information scientists, different authors speak different languages, and even the standards themselves are by no means always unambiguous. I have aimed for terminological consistency by listing alternative German expressions for a term in footnotes; where applicable, the corresponding English designations are also given there, and in this respect I have largely followed the standardised English vocabulary.
  8. Greifeneder, E.: Online-Hilfen in OPACs : Analyse deutscher Universitäts-Onlinekataloge (2007) 0.00
    6.7934574E-4 = product of:
      0.010869532 = sum of:
        0.010869532 = product of:
          0.021739064 = sum of:
            0.021739064 = weight(_text_:22 in 1935) [ClassicSimilarity], result of:
              0.021739064 = score(doc=1935,freq=2.0), product of:
                0.11237528 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032090448 = queryNorm
                0.19345059 = fieldWeight in 1935, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1935)
          0.5 = coord(1/2)
      0.0625 = coord(1/16)
    
    Date
    22. 6.2008 13:03:30
  9. Bedford, D.: Knowledge architectures : structures and semantics (2021) 0.00
    5.434766E-4 = product of:
      0.008695626 = sum of:
        0.008695626 = product of:
          0.017391251 = sum of:
            0.017391251 = weight(_text_:22 in 566) [ClassicSimilarity], result of:
              0.017391251 = score(doc=566,freq=2.0), product of:
                0.11237528 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032090448 = queryNorm
                0.15476047 = fieldWeight in 566, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=566)
          0.5 = coord(1/2)
      0.0625 = coord(1/16)
    
    Content
    Section 1 Context and purpose of knowledge architecture -- 1 Making the case for knowledge architecture -- 2 The landscape of knowledge assets -- 3 Knowledge architecture and design -- 4 Knowledge architecture reference model -- 5 Knowledge architecture segments -- Section 2 Designing for availability -- 6 Knowledge object modeling -- 7 Knowledge structures for encoding, formatting, and packaging -- 8 Functional architecture for identification and distinction -- 9 Functional architectures for knowledge asset disposition and destruction -- 10 Functional architecture designs for knowledge preservation and conservation -- Section 3 Designing for accessibility -- 11 Functional architectures for knowledge seeking and discovery -- 12 Functional architecture for knowledge search -- 13 Functional architecture for knowledge categorization -- 14 Functional architectures for indexing and keywording -- 15 Functional architecture for knowledge semantics -- 16 Functional architecture for knowledge abstraction and surrogation -- Section 4 Functional architectures to support knowledge consumption -- 17 Functional architecture for knowledge augmentation, derivation, and synthesis -- 18 Functional architecture to manage risk and harm -- 19 Functional architectures for knowledge authentication and provenance -- 20 Functional architectures for securing knowledge assets -- 21 Functional architectures for authorization and asset management -- Section 5 Pulling it all together - the big picture knowledge architecture -- 22 Functional architecture for knowledge metadata and metainformation -- 23 The whole knowledge architecture - pulling it all together
  10. Vickery, B.C.; Vickery, A.: Information science in theory and practice (2004) 0.00
    3.5026082E-4 = product of:
      0.005604173 = sum of:
        0.005604173 = product of:
          0.011208346 = sum of:
            0.011208346 = weight(_text_:ed in 4320) [ClassicSimilarity], result of:
              0.011208346 = score(doc=4320,freq=2.0), product of:
                0.11411327 = queryWeight, product of:
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.032090448 = queryNorm
                0.098221235 = fieldWeight in 4320, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5559888 = idf(docFreq=3431, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4320)
          0.5 = coord(1/2)
      0.0625 = coord(1/16)
    
    Issue
    3rd rev. and enlarged ed.

Types

  • m 10
  • s 1