Search (48 results, page 2 of 3)

  • × theme_ss:"Metadaten"
  • × year_i:[2010 TO 2020}
  1. Margaritopoulos, M.; Margaritopoulos, T.; Mavridis, I.; Manitsaris, A.: Quantifying and measuring metadata completeness (2012) 0.00
    0.0011787476 = product of:
      0.008251233 = sum of:
        0.008251233 = product of:
          0.041256163 = sum of:
            0.041256163 = weight(_text_:system in 43) [ClassicSimilarity], result of:
              0.041256163 = score(doc=43,freq=6.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.36163113 = fieldWeight in 43, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=43)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
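    A note on the scores: the nested breakdowns attached to each hit are Lucene "explain" output for the classic TF-IDF similarity. As a sanity check, the arithmetic of this first entry can be reproduced in a few lines of Python; the formulas below (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf * idf * fieldNorm) are the standard ClassicSimilarity definitions, and all constants are read directly from the tree above. The same arithmetic applies to every explain tree in this list.

      import math

      # Constants taken from the explain tree for entry 1 (term "system", doc 43)
      freq, doc_freq, max_docs = 6.0, 5152, 44218
      query_norm, field_norm = 0.03622214, 0.046875

      tf = math.sqrt(freq)                             # 2.4494898
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.1495528
      query_weight = idf * query_norm                  # 0.11408355
      field_weight = tf * idf * field_norm             # 0.36163113
      raw_score = query_weight * field_weight          # 0.041256163

      # coord(1/5) and coord(1/7): only 1 of 5 (resp. 1 of 7) query clauses
      # matched this document, so the score is scaled down accordingly
      final_score = raw_score * (1 / 5) * (1 / 7)      # 0.0011787476
      print(final_score)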
    
    Abstract
    Completeness is one of the most essential characteristics of metadata quality; an incomplete metadata record is a record of degraded quality. Existing approaches to measuring metadata completeness limit their scope to counting the existence of values in fields, regardless of the metadata hierarchy defined in international standards. Such a traditional approach overlooks several issues that need to be taken into account. This paper presents a fine-grained metrics system for measuring metadata completeness, based on field completeness. A metadata field is considered to be a container of multiple pieces of information. In this regard, the proposed system is capable of following the hierarchy of metadata as set by the metadata schema and of measuring the effect of multiple values in multivalued fields. An application of the proposed metrics system, configured according to specific user requirements, to measure the completeness of a real-world set of metadata is demonstrated. The results prove its ability to assess the sufficiency of metadata to describe a resource and to provide targeted measures of completeness throughout the metadata hierarchy.
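    The paper publishes no code. As a rough sketch of the idea (hierarchy-aware completeness in which a composite field aggregates its subfields, and a multivalued field is credited per expected value), the following is a minimal illustration; the record structure, the expected_values parameter, and the plain averaging rule are illustrative assumptions, not the authors' metric.

      def completeness(field):
          """Completeness of one metadata field: a composite field averages
          over its subfields (following the schema hierarchy); a leaf field
          scores by how many of its expected values are actually filled."""
          if field.get("subfields"):
              scores = [completeness(f) for f in field["subfields"]]
              return sum(scores) / len(scores)
          filled = len(field.get("values", []))
          expected = field.get("expected_values", 1)
          return min(filled / expected, 1.0)

      record = {"subfields": [
          {"name": "title", "values": ["Quantifying metadata completeness"]},
          {"name": "creator", "values": ["Margaritopoulos, M."], "expected_values": 4},
          {"name": "date", "values": []},
      ]}
      print(round(completeness(record), 2))  # 0.42 = (1.0 + 0.25 + 0.0) / 3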
  2. Kopácsi, S.; Hudak, R.; Ganguly, R.: Implementation of a classification server to support metadata organization for long term preservation systems (2017) 0.00
    0.0011228506 = product of:
      0.007859954 = sum of:
        0.007859954 = product of:
          0.039299767 = sum of:
            0.039299767 = weight(_text_:system in 3915) [ClassicSimilarity], result of:
              0.039299767 = score(doc=3915,freq=4.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.34448233 = fieldWeight in 3915, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3915)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    In this article we describe the implementation of a classification server for metadata organization in a long-term preservation system for digital objects. After a brief introduction to classifications and knowledge organization, we present the requirements for the system to be implemented. We describe all of the Simple Knowledge Organization System (SKOS) management tools we examined, including Skosmos, the solution we chose for the implementation. Skosmos is an open-source, web-based SKOS browser built on the Jena Fuseki SPARQL server. We discuss some crucial steps in the installation of the selected tools and present both the problems that can potentially arise with the classifications used and possible solutions.
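    Since Skosmos reads its vocabularies from a Jena Fuseki SPARQL store, the same data can be queried directly over the standard SPARQL 1.1 HTTP protocol. A minimal sketch, assuming a hypothetical local Fuseki endpoint (the URL and dataset name are placeholders):

      import requests

      ENDPOINT = "http://localhost:3030/skos/sparql"  # placeholder Fuseki dataset

      QUERY = """
      PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
      SELECT ?concept ?label WHERE {
        ?concept a skos:Concept ;
                 skos:prefLabel ?label .
        FILTER (lang(?label) = "de")
      } LIMIT 10
      """

      resp = requests.get(ENDPOINT, params={"query": QUERY},
                          headers={"Accept": "application/sparql-results+json"})
      resp.raise_for_status()
      for row in resp.json()["results"]["bindings"]:
          print(row["concept"]["value"], "->", row["label"]["value"])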
  3. Metadata and semantics research : 9th Research Conference, MTSR 2015, Manchester, UK, September 9-11, 2015, Proceedings (2015) 0.00
    0.0010873 = product of:
      0.0076110996 = sum of:
        0.0076110996 = product of:
          0.0380555 = sum of:
            0.0380555 = weight(_text_:retrieval in 3274) [ClassicSimilarity], result of:
              0.0380555 = score(doc=3274,freq=6.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.34732026 = fieldWeight in 3274, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3274)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Content
    The papers are organized in several sessions and tracks: a general track on ontology evolution, engineering, and frameworks; semantic Web and metadata extraction; modelling, interoperability, and exploratory search; and data analysis, reuse, and visualization; plus tracks on digital libraries, information retrieval, linked and social data; on metadata and semantics for open repositories, research information systems, and data infrastructure; on metadata and semantics for agriculture, food, and environment; on metadata and semantics for cultural collections and applications; and on European and national projects.
    LCSH
    Information storage and retrieval systems
    Subject
    Information storage and retrieval systems
  4. Jeffery, K.G.; Bailo, D.: EPOS: using metadata in geoscience (2014) 0.00
    9.624433E-4 = product of:
      0.0067371028 = sum of:
        0.0067371028 = product of:
          0.033685513 = sum of:
            0.033685513 = weight(_text_:system in 1581) [ClassicSimilarity], result of:
              0.033685513 = score(doc=1581,freq=4.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.29527056 = fieldWeight in 1581, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1581)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    One of the key aspects of the approaching data-intensive science era is the integration of data through the interoperability of systems providing data products or visualisation and processing services. Far from being simple, interoperability requires robust and scalable e-infrastructures capable of supporting it. In this work we present the case of EPOS, a project for data integration in the field of Earth sciences. We describe the design of its e-infrastructure and show its main characteristics. One of the main elements enabling the system to integrate data, data products and services is the metadata catalog based on the CERIF metadata model. This model, modified to fit the general e-infrastructure design, is part of a three-layer metadata architecture. CERIF guarantees robust handling of metadata, which is in this case the key to interoperability and to one of the features of the EPOS system: the possibility of carrying out data-intensive science by orchestrating the distributed resources made available by EPOS data providers and stakeholders.
  5. Hardesty, J.L.; Young, J.B.: ¬The semantics of metadata : Avalon Media System and the move to RDF (2017) 0.00
    9.624433E-4 = product of:
      0.0067371028 = sum of:
        0.0067371028 = product of:
          0.033685513 = sum of:
            0.033685513 = weight(_text_:system in 3896) [ClassicSimilarity], result of:
              0.033685513 = score(doc=3896,freq=4.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.29527056 = fieldWeight in 3896, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3896)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    The Avalon Media System (Avalon) provides access and management for digital audio and video collections in libraries and archives. The open source project is led by the libraries of Indiana University Bloomington and Northwestern University and is funded in part by grants from The Andrew W. Mellon Foundation and Institute of Museum and Library Services. Avalon is based on the Samvera Community (formerly Hydra Project) software stack and uses Fedora as the digital repository back end. The Avalon project team is in the process of migrating digital repositories from Fedora 3 to Fedora 4 and incorporating metadata statements using the Resource Description Framework (RDF) instead of XML files accompanying the digital objects in the repository. The Avalon team has worked on the migration path for technical metadata and is now working on the migration paths for structural metadata (PCDM) and descriptive metadata (from MODS XML to RDF). This paper covers the decisions made to begin using RDF for software development and offers a window into how Semantic Web technology functions in the real world.
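    The migration details are not spelled out in the abstract; purely to illustrate the shift from XML files stored alongside objects to RDF statements about the objects themselves, here is a minimal rdflib sketch. The object URI is hypothetical, and Dublin Core terms stand in for whatever target vocabulary the project actually chose.

      from rdflib import Graph, Literal, Namespace, URIRef

      DCTERMS = Namespace("http://purl.org/dc/terms/")

      def mods_title_to_rdf(object_uri: str, title: str) -> Graph:
          """Express a descriptive property as a triple about the repository
          object itself, rather than as markup in an accompanying XML file."""
          g = Graph()
          g.bind("dcterms", DCTERMS)
          g.add((URIRef(object_uri), DCTERMS.title, Literal(title)))
          return g

      g = mods_title_to_rdf("http://example.org/avalon/obj/123",
                            "Oral history interview")
      print(g.serialize(format="turtle"))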
  6. Roux, M.: Metadata for search engines : what can be learned from e-Sciences? (2012) 0.00
    8.877766E-4 = product of:
      0.006214436 = sum of:
        0.006214436 = product of:
          0.03107218 = sum of:
            0.03107218 = weight(_text_:retrieval in 96) [ClassicSimilarity], result of:
              0.03107218 = score(doc=96,freq=4.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.2835858 = fieldWeight in 96, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=96)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    E-sciences are data-intensive sciences that make extensive use of the Web to share, collect, and process data. In this context, primary scientific data is becoming a challenging new issue, as data must be extensively described (1) to account for the empirical conditions and results that allow interpretation and/or analysis and (2) to be understandable by the computers used for data storage and information retrieval. In this respect, metadata is a focal point, whether considered from the point of view of the user, who visualizes and exploits the data, or from that of the search tools that find and retrieve information. Numerous disciplines are concerned with the issues of describing complex observations and addressing pertinent knowledge. In this paper, similarities and differences in data description and exploration strategies among disciplines in e-sciences are examined.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis, et al.
  7. Sturmane, A.; Eglite, E.; Jankevica-Balode, M.: Subject metadata development for digital resources in Latvia (2014) 0.00
    7.9397525E-4 = product of:
      0.0055578267 = sum of:
        0.0055578267 = product of:
          0.027789133 = sum of:
            0.027789133 = weight(_text_:system in 1963) [ClassicSimilarity], result of:
              0.027789133 = score(doc=1963,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.2435858 = fieldWeight in 1963, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1963)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    The National Library of Latvia (NLL) decided to use the Library of Congress Subject Headings (LCSH) in 2000. At present the NLL Subject Headings Database in Latvian holds approximately 34,000 subject headings and is used for subject cataloging of textual resources, including articles from serials. For digital objects NLL uses a system like Faceted Application of Subject Terminology (FAST). We use it successfully in the project "In Search of Lost Latvia," one of the milestones in the development of the subject cataloging of digital resources in Latvia.
  8. Moulaison Sandy, H.L.; Dykas, F.: High-quality metadata and repository staffing : perceptions of United States-based OpenDOAR participants (2016) 0.00
    7.9397525E-4 = product of:
      0.0055578267 = sum of:
        0.0055578267 = product of:
          0.027789133 = sum of:
            0.027789133 = weight(_text_:system in 2806) [ClassicSimilarity], result of:
              0.027789133 = score(doc=2806,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.2435858 = fieldWeight in 2806, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2806)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    Digital repositories require good metadata, created according to community-based principles that include provisions for interoperability. When metadata is of high quality, digital objects become sharable and metadata can be harvested and reused outside of the local system. A sample of U.S.-based repository administrators from the OpenDOAR initiative were surveyed to understand aspects of the quality and creation of their metadata, and how their metadata could improve. Most respondents (65%) thought their metadata was of average quality; none thought their metadata was high quality or poor quality. The discussion argues that increased strategic staffing will alleviate many perceived issues with metadata quality.
  9. Holzhause, R.; Krömker, H.; Schnöll, M.: Vernetzung von audiovisuellen Inhalten und Metadaten : Metadatengestütztes System zur Generierung und Erschließung von Medienfragmenten (Teil 1) (2016) 0.00
    7.9397525E-4 = product of:
      0.0055578267 = sum of:
        0.0055578267 = product of:
          0.027789133 = sum of:
            0.027789133 = weight(_text_:system in 5636) [ClassicSimilarity], result of:
              0.027789133 = score(doc=5636,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.2435858 = fieldWeight in 5636, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5636)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
  10. Holzhause, R.; Krömker, H.; Schnöll, M.: Vernetzung von audiovisuellen Inhalten und Metadaten : Metadatengestütztes System zur Generierung und Erschließung von Medienfragmenten (Teil 2) (2016) 0.00
    7.9397525E-4 = product of:
      0.0055578267 = sum of:
        0.0055578267 = product of:
          0.027789133 = sum of:
            0.027789133 = weight(_text_:system in 5861) [ClassicSimilarity], result of:
              0.027789133 = score(doc=5861,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.2435858 = fieldWeight in 5861, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5861)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
  11. Metadata and semantics research : 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings (2014) 0.00
    7.398139E-4 = product of:
      0.005178697 = sum of:
        0.005178697 = product of:
          0.025893483 = sum of:
            0.025893483 = weight(_text_:retrieval in 2192) [ClassicSimilarity], result of:
              0.025893483 = score(doc=2192,freq=4.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.23632148 = fieldWeight in 2192, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2192)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    LCSH
    Information storage and retrieval systems
    Subject
    Information storage and retrieval systems
  12. Strobel, S.; Marín-Arraiza, P.: Metadata for scientific audiovisual media : current practices and perspectives of the TIB / AV-portal (2015) 0.00
    7.398139E-4 = product of:
      0.005178697 = sum of:
        0.005178697 = product of:
          0.025893483 = sum of:
            0.025893483 = weight(_text_:retrieval in 3667) [ClassicSimilarity], result of:
              0.025893483 = score(doc=3667,freq=4.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.23632148 = fieldWeight in 3667, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3667)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    Descriptive metadata play a key role in finding relevant search results in large amounts of unstructured data. However, current scientific audiovisual media are provided with little metadata, which makes them hard to find, let alone individual sequences within them. In this paper, the TIB / AV-Portal is presented as a use case where methods for the automatic generation of metadata, a semantic search, and cross-lingual retrieval (German/English) have already been applied. These methods result in better discoverability of the scientific audiovisual media hosted in the portal. Text, speech, and image content of the videos are automatically indexed with specialised GND (Gemeinsame Normdatei) subject headings. A semantic search is established based on properties of the GND ontology. The cross-lingual retrieval uses English 'translations' derived from an ontology mapping (DBpedia, among others). Further ways of increasing the discoverability and reuse of the metadata are publishing it as Linked Open Data and interlinking it with other data sets.
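    As a toy illustration of the cross-lingual retrieval idea, with an invented two-entry mapping table standing in for the real GND/DBpedia ontology alignment:

      # Invented mapping table; in the portal, English labels come from an
      # ontology mapping of GND subject headings (to DBpedia, among others).
      gnd_to_english = {
          "Metadaten": "metadata",
          "Informationsrückgewinnung": "information retrieval",
      }

      def expand_query(term: str) -> list[str]:
          """Search both the German heading and its English counterpart, so a
          query in either language reaches the same indexed video segments."""
          translation = gnd_to_english.get(term)
          return [term] + ([translation] if translation else [])

      print(expand_query("Metadaten"))  # ['Metadaten', 'metadata']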
  13. Gartner, R.: Metadata : shaping knowledge from antiquity to the semantic web (2016) 0.00
    7.398139E-4 = product of:
      0.005178697 = sum of:
        0.005178697 = product of:
          0.025893483 = sum of:
            0.025893483 = weight(_text_:retrieval in 731) [ClassicSimilarity], result of:
              0.025893483 = score(doc=731,freq=4.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.23632148 = fieldWeight in 731, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=731)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    LCSH
    Information storage and retrieval
    Subject
    Information storage and retrieval
  14. Ashton, J.; Kent, C.: New approaches to subject indexing at the British Library (2017) 0.00
    7.323784E-4 = product of:
      0.0051266486 = sum of:
        0.0051266486 = product of:
          0.025633242 = sum of:
            0.025633242 = weight(_text_:retrieval in 5158) [ClassicSimilarity], result of:
              0.025633242 = score(doc=5158,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.23394634 = fieldWeight in 5158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5158)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Theme
    Verbale Doksprachen im Online-Retrieval
  15. Chou, C.: Purpose-driven assessment of cataloging and metadata services : transforming broken links into linked data (2019) 0.00
    7.323784E-4 = product of:
      0.0051266486 = sum of:
        0.0051266486 = product of:
          0.025633242 = sum of:
            0.025633242 = weight(_text_:retrieval in 5280) [ClassicSimilarity], result of:
              0.025633242 = score(doc=5280,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.23394634 = fieldWeight in 5280, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5280)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    Many primary school classrooms have book collections. Most teachers organize and maintain these collections by themselves, although some involve students in the processes. This qualitative study considers a third approach, parent-involved categorization, to understand how people without library or education training categorize books. We observed and interviewed parents and a teacher who worked together to categorize books in a kindergarten classroom. They employed multiple orthogonal organizing principles, felt that working collaboratively made the task less overwhelming, solved difficult problems pragmatically, organized books primarily to facilitate retrieval by the teacher, and left lumping and splitting decisions to the teacher.
  16. Birkner, M.; Gonter, G.; Lackner, K.; Kann, B.; Kranewitter, M.; Mayer, A.; Parschalk, A.: Guideline zur Langzeitarchivierung (2016) 0.00
    6.8055023E-4 = product of:
      0.0047638514 = sum of:
        0.0047638514 = product of:
          0.023819257 = sum of:
            0.023819257 = weight(_text_:system in 3139) [ClassicSimilarity], result of:
              0.023819257 = score(doc=3139,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.20878783 = fieldWeight in 3139, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3139)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    This guideline is intended to assist with the long-term preservation of data and objects in the context of publishing and research. It is expressly not applicable to compliance contexts. It is meant to enable institutions to ask the right questions in selecting the long-term preservation solution suited to their own institution, and to help in deciding on a system solution. Long-term preservation systems are understood here as systems that sit in the workflow behind a repository in which digital objects and their metadata are stored, displayed, and made searchable. Definitions of terms can be found in the glossary of Cluster C (Aufbau eines Wissensnetzwerks: Erarbeitung eines Referenzmodells für den Aufbau von Repositorien / Themenbereich Terminologie und Standards).
  17. Grün, S.; Poley, C.: Statistische Analysen von Semantic Entities aus Metadaten- und Volltextbeständen von German Medical Science (2017) 0.00
    6.2775286E-4 = product of:
      0.00439427 = sum of:
        0.00439427 = product of:
          0.02197135 = sum of:
            0.02197135 = weight(_text_:retrieval in 5032) [ClassicSimilarity], result of:
              0.02197135 = score(doc=5032,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.20052543 = fieldWeight in 5032, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5032)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    This paper analyzes the information content of metadata and full texts in English-language German Medical Science (GMS) articles. The object of the study is to compare the semantic entities used to enrich GMS metadata (titles and abstracts) with those in GMS full texts. The aim is to test whether using full texts adds informational value. The comparison and evaluation of semantic entities was done statistically: measures of descriptive statistics were gathered, and in addition to the ratio of central tendencies and scatterings, we computed the overlaps and complements of the values. The results show a distinct increase of information when full texts are added. On average, metadata contain 25 different entities and full texts 215. 89% of the concepts in the metadata are also represented in the full texts; hence, 11% of the metadata concepts are found in the metadata only. In summary, the results show that the addition of full texts increases the informational value, e.g. for information retrieval processes.
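    The overlap/complement comparison is plain set arithmetic; a minimal sketch with invented toy entity sets (the study itself works on entities extracted from GMS titles, abstracts, and full texts):

      metadata_entities = {"aspirin", "headache", "dose", "placebo"}
      fulltext_entities = {"aspirin", "headache", "dose", "trial", "cohort"}

      overlap = metadata_entities & fulltext_entities        # in both
      metadata_only = metadata_entities - fulltext_entities  # complement
      fulltext_only = fulltext_entities - metadata_entities

      # Share of metadata concepts also present in the full text
      # (89% in the study; 0.75 for these toy sets)
      print(len(overlap) / len(metadata_entities))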
  18. Farney, T.: Designing shareable tags : using Google Tag Manager to share code (2019) 0.00
    5.6712516E-4 = product of:
      0.003969876 = sum of:
        0.003969876 = product of:
          0.01984938 = sum of:
            0.01984938 = weight(_text_:system in 5443) [ClassicSimilarity], result of:
              0.01984938 = score(doc=5443,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.17398985 = fieldWeight in 5443, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5443)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    Sharing code between libraries is not a new phenomenon, and neither is Google Tag Manager (GTM). GTM launched in 2012 as a JavaScript and HTML manager intended to ease the implementation of different analytics trackers and marketing scripts on a website. However, it can also be used to load other code onto a website through its tag system. Exporting and importing tags is a simple process, which facilitates code sharing without requiring a high degree of coding experience. The entire process involves creating the script tag in GTM, exporting the GTM content into a shareable export file for someone else to import into their library's GTM container, and finally publishing that imported file to push the code to the website it was designed for. This case study provides an example of designing and sharing a GTM container loaded with advanced Google Analytics configurations, such as event tracking and custom dimensions, for other libraries using the Summon discovery service. It also discusses processes for designing GTM tags for export and best practices for importing and testing GTM content created by other libraries, and concludes by evaluating the pros and cons of encouraging GTM use.
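    A small sketch of the sharing step, filtering a container export down to selected tags before passing it on. It assumes the general shape of a GTM export file (a JSON object whose containerVersion holds a list of tags); the field names are assumptions based on that general shape, not a documented schema.

      import json

      def extract_tags(export_path: str, wanted: set) -> list:
          """Pull selected tags out of a GTM container export so another
          library can import just those (assumed export layout)."""
          with open(export_path) as f:
              container = json.load(f)
          tags = container.get("containerVersion", {}).get("tag", [])
          return [t for t in tags if t.get("name") in wanted]

      shared = extract_tags("gtm_export.json", {"Summon event tracking"})
      print(json.dumps(shared, indent=2))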
  19. Hodges, D.W.; Schlottmann, K.: Reporting from the archives : better archival migration outcomes with Python and the Google Sheets API (2019) 0.00
    5.6712516E-4 = product of:
      0.003969876 = sum of:
        0.003969876 = product of:
          0.01984938 = sum of:
            0.01984938 = weight(_text_:system in 5444) [ClassicSimilarity], result of:
              0.01984938 = score(doc=5444,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.17398985 = fieldWeight in 5444, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5444)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    Columbia University Libraries recently embarked on a multi-phase project to migrate nearly 4,000 records describing over 70,000 linear feet of archival material from disparate sources and formats into ArchivesSpace. This paper discusses tools and methods brought to bear in Phase 2 of this project, which required us to look closely at how to integrate a large number of legacy finding aids into the new system and merge descriptive data that had diverged in myriad ways. Using Python, XSLT, and a widely available if underappreciated resource, the Google Sheets API, archival and technical library staff devised ways to efficiently report data from different sources and present it in an accessible, user-friendly way. Responses were then fed back into automated data remediation processes to keep the migration project on track and minimize manual intervention. The scripts and processes developed proved very effective and, moreover, show promise well beyond the ArchivesSpace migration. This paper describes the Python/XSLT/Sheets API processes developed and how they opened a path to move beyond CSV-based reporting with flexible, ad hoc data interfaces easily adaptable to a variety of purposes.
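    The article's own scripts are not reproduced here, but the basic reporting pattern with the google-api-python-client looks roughly like the sketch below; the spreadsheet ID, sheet name, credentials file, and example rows are all placeholders.

      from google.oauth2.service_account import Credentials
      from googleapiclient.discovery import build

      # Placeholder credentials and spreadsheet; the pattern, not the project's code
      creds = Credentials.from_service_account_file(
          "service_account.json",
          scopes=["https://www.googleapis.com/auth/spreadsheets"])
      service = build("sheets", "v4", credentials=creds)

      rows = [["bibid", "title", "extent"],
              ["4078384", "Example papers", "65 linear ft."]]

      # Write report rows to the sheet so staff can review and annotate them
      service.spreadsheets().values().update(
          spreadsheetId="YOUR_SPREADSHEET_ID",
          range="Report!A1",
          valueInputOption="RAW",
          body={"values": rows},
      ).execute()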
  20. Tosaka, Y.; Park, J.-r.: RDA: Resource description & access : a survey of the current state of the art (2013) 0.00
    5.2312744E-4 = product of:
      0.003661892 = sum of:
        0.003661892 = product of:
          0.01830946 = sum of:
            0.01830946 = weight(_text_:retrieval in 677) [ClassicSimilarity], result of:
              0.01830946 = score(doc=677,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.16710453 = fieldWeight in 677, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=677)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    Resource Description & Access (RDA) is intended to provide a flexible and extensible framework that can accommodate all types of content and media within rapidly evolving digital environments while also maintaining compatibility with the Anglo-American Cataloguing Rules, 2nd edition (AACR2). The cataloging community is grappling with practical issues in navigating the transition from AACR2 to RDA; there is a definite need to evaluate major subject areas and broader themes in information organization under the new RDA paradigm. This article aims to accomplish this task through a thorough and critical review of the emerging RDA literature published from 2005 to 2011. The review mostly concerns key areas of difference between RDA and AACR2, the relationship of the new cataloging code to metadata standards, the impact on encoding standards such as Machine-Readable Cataloging (MARC), end user considerations, and practitioners' views on RDA implementation and training. Future research will require more in-depth studies of RDA's expected benefits and the manner in which the new cataloging code will improve resource retrieval and bibliographic control for users and catalogers alike over AACR2. The question as to how the cataloging community can best move forward to the post-AACR2/MARC environment must be addressed carefully so as to chart the future of bibliographic control in the evolving environment of information production, management, and use.

Languages

  • e 39
  • d 8
  • pt 1

Types

  • a 37
  • m 8
  • el 5
  • s 4
  • r 1
  • x 1