Search (89 results, page 4 of 5)

  • theme_ss:"Metadaten"
  • year_i:[2010 TO 2020}
  1. Maurer, M.B.; McCutcheon, S.; Schwing, T.: Who's doing what? : findability and author-supplied ETD metadata in the library catalog (2011) 0.01
    0.009475192 = product of:
      0.03790077 = sum of:
        0.03790077 = weight(_text_:library in 1891) [ClassicSimilarity], result of:
          0.03790077 = score(doc=1891,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.28758827 = fieldWeight in 1891, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1891)
      0.25 = coord(1/4)
    
    Abstract
    Kent State University Libraries' ETD cataloging process features contributions by authors, by the ETDcat application, and by catalogers. Who is doing what, and how much of it is findable in the library catalog? An empirical analysis is performed featuring simple frequencies within the KentLINK catalog, articulated by the use of a newly devised rubric. The researchers sought the degree to which the ETD authors, the applications, and the catalogers can supply accurate, findable metadata. Further development of combinatory cataloging processes is suggested. The method of examining the data and the rubric are provided as a framework for other metadata analysis.
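The score breakdowns attached to each result are Lucene ClassicSimilarity "explain" trees. As a sanity check, the following Python sketch (function name and rounding are ours) recomputes a single-term score from the factors shown in the tree above:

```python
import math

def classic_similarity(freq, doc_freq, max_docs, field_norm, query_norm, coord=1.0):
    """Recompute a single-term Lucene ClassicSimilarity score.

    tf          = sqrt(freq)                       # term-frequency factor
    idf         = 1 + ln(maxDocs / (docFreq + 1))  # inverse document frequency
    fieldWeight = tf * idf * fieldNorm
    queryWeight = idf * queryNorm
    score       = coord * queryWeight * fieldWeight
    """
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    return coord * (idf * query_norm) * (tf * idf * field_norm)

# Factors copied from the first explain tree above (coord = 1/4):
score = classic_similarity(freq=4.0, doc_freq=8668, max_docs=44218,
                           field_norm=0.0546875, query_norm=0.050121464,
                           coord=0.25)
print(score)  # ≈ 0.009475192, matching the explain output to float32 precision
```

The same function reproduces the other entries' scores once their freq, docFreq, fieldNorm, and coord factors are substituted.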
  2. Pope, J.T.; Holley, R.P.: Google Book Search and metadata (2011) 0.01
    0.008633038 = product of:
      0.034532152 = sum of:
        0.034532152 = product of:
          0.069064304 = sum of:
            0.069064304 = weight(_text_:project in 1887) [ClassicSimilarity], result of:
              0.069064304 = score(doc=1887,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.32644984 = fieldWeight in 1887, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1887)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This article summarizes published documents on metadata provided by Google for books scanned as part of the Google Book Search (GBS) project and provides suggestions for improvement. The faulty, misleading, and confusing metadata in current Google records can pose potentially serious problems for users of GBS. Google admits that it took data, which proved to be inaccurate, from many sources and is attempting to correct errors. Some argue that metadata is not needed with keyword searching; but optical character recognition (OCR) errors, synonym control, and materials in foreign languages make reliable metadata a requirement for academic researchers. The authors recommend that users should be able to submit error reports to Google to correct faulty metadata.
  3. Wolfe, E.W.: A case study in automated metadata enhancement : Natural Language Processing in the humanities (2019) 0.01
    0.008633038 = product of:
      0.034532152 = sum of:
        0.034532152 = product of:
          0.069064304 = sum of:
            0.069064304 = weight(_text_:project in 5236) [ClassicSimilarity], result of:
              0.069064304 = score(doc=5236,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.32644984 = fieldWeight in 5236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5236)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The Black Book Interactive Project at the University of Kansas (KU) is developing an expanded corpus of novels by African American authors, with an emphasis on lesser known writers and a goal of expanding research in this field. Using a custom metadata schema with an emphasis on race-related elements, each novel is analyzed for a variety of elements such as literary style, targeted content analysis, historical context, and other areas. Librarians at KU have worked to develop a variety of computational text analysis processes designed to assist with specific aspects of this metadata collection, including text mining and natural language processing, automated subject extraction based on word sense disambiguation, harvesting data from Wikidata, and other actions.
  4. Gracy, K.F.: Enriching and enhancing moving images with Linked Data : an exploration in the alignment of metadata models (2018) 0.01
    0.0086163655 = product of:
      0.034465462 = sum of:
        0.034465462 = weight(_text_:digital in 4200) [ClassicSimilarity], result of:
          0.034465462 = score(doc=4200,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.17432621 = fieldWeight in 4200, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.03125 = fieldNorm(doc=4200)
      0.25 = coord(1/4)
    
    Abstract
    The purpose of this paper is to examine the current state of Linked Data (LD) in archival moving image description, and propose ways in which current metadata records can be enriched and enhanced by interlinking such metadata with relevant information found in other data sets. Design/methodology/approach Several possible metadata models for moving image production and archiving are considered, including models from records management, digital curation, and the recent BIBFRAME AV Modeling Study. This research also explores how mappings between archival moving image records and relevant external data sources might be drawn, and what gaps exist between current vocabularies and what is needed to record and make accessible the full lifecycle of archiving through production, use, and reuse. Findings The author notes several major impediments to implementation of LD for archival moving images. The various pieces of information about creators, places, and events found in moving image records are not easily connected to relevant information in other sources because they are often not semantically defined within the record and can be hidden in unstructured fields. Libraries, archives, and museums must work on aligning the various vocabularies and schemas of potential value for archival moving image description to enable interlinking between vocabularies currently in use and those which are used by external data sets. Alignment of vocabularies is often complicated by mismatches in granularity between vocabularies. Research limitations/implications The focus is on how these models inform functional requirements for access and other archival activities, and how the field might benefit from having a common metadata model for critical archival descriptive activities. Practical implications By having a shared model, archivists may more easily align current vocabularies and develop new vocabularies and schemas to address the needs of moving image data creators and scholars. Originality/value Moving image archives, like other cultural institutions with significant heritage holdings, can benefit tremendously from investing in the semantic definition of information found in their information databases. While commercial entities such as search engines and data providers have already embraced the opportunities that semantic search provides for resource discovery, most non-commercial entities are just beginning to do so. Thus, this research addresses the benefits and challenges of enriching and enhancing archival moving image records with semantically defined information via LD.
  5. Kopácsi, S. et al.: Development of a classification server to support metadata harmonization in a long term preservation system (2016) 0.01
    0.008488459 = product of:
      0.033953834 = sum of:
        0.033953834 = product of:
          0.06790767 = sum of:
            0.06790767 = weight(_text_:22 in 3280) [ClassicSimilarity], result of:
              0.06790767 = score(doc=3280,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.38690117 = fieldWeight in 3280, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3280)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  6. Hajra, A. et al.: Enriching scientific publications from LOD repositories through word embeddings approach (2016) 0.01
    0.008488459 = product of:
      0.033953834 = sum of:
        0.033953834 = product of:
          0.06790767 = sum of:
            0.06790767 = weight(_text_:22 in 3281) [ClassicSimilarity], result of:
              0.06790767 = score(doc=3281,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.38690117 = fieldWeight in 3281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3281)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  7. Mora-Mcginity, M. et al.: MusicWeb: music discovery with open linked semantic metadata (2016) 0.01
    0.008488459 = product of:
      0.033953834 = sum of:
        0.033953834 = product of:
          0.06790767 = sum of:
            0.06790767 = weight(_text_:22 in 3282) [ClassicSimilarity], result of:
              0.06790767 = score(doc=3282,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.38690117 = fieldWeight in 3282, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3282)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  8. Peters, I.; Stock, W.G.: Power tags in information retrieval (2010) 0.01
    0.008289068 = product of:
      0.033156272 = sum of:
        0.033156272 = weight(_text_:library in 865) [ClassicSimilarity], result of:
          0.033156272 = score(doc=865,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.25158736 = fieldWeight in 865, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=865)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - Many Web 2.0 services (including Library 2.0 catalogs) make use of folksonomies. The purpose of this paper is to cut off all tags in the long tail of a document-specific tag distribution. The remaining tags at the beginning of a tag distribution are considered power tags and form a new, additional search option in information retrieval systems. Design/methodology/approach - In a theoretical approach the paper discusses document-specific tag distributions (power law and inverse-logistic shape), the development of such distributions (Yule-Simon process and shuffling theory) and introduces search tags (besides the well-known index tags) as a possibility for generating tag distributions. Findings - Search tags are compatible with broad and narrow folksonomies and with all knowledge organization systems (e.g. classification systems and thesauri), while index tags are only applicable in broad folksonomies. Based on these findings, the paper presents a sketch of an algorithm for mining and processing power tags in information retrieval systems. Research limitations/implications - This conceptual approach is in need of empirical evaluation in a concrete retrieval system. Practical implications - Power tags are a new search option for retrieval systems to limit the number of hits. Originality/value - The paper introduces power tags as a means for enhancing the precision of search results in information retrieval systems that apply folksonomies, e.g. catalogs in Library 2.0 environments.
    Source
    Library hi tech. 28(2010) no.1, S.81-93
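The long-tail cutoff described in this abstract can be sketched in a few lines of Python. The threshold rule below (keep tags holding at least a fixed share of all tag assignments on a document) is an illustrative assumption, not the authors' published algorithm:

```python
from collections import Counter

def power_tags(tag_counts, share=0.1):
    """Return the head of a document-specific tag distribution.

    Hypothetical cutoff rule: keep tags whose relative frequency is at
    least `share` of all tag assignments; the long tail is discarded.
    """
    total = sum(tag_counts.values())
    return [tag for tag, n in sorted(tag_counts.items(),
                                     key=lambda kv: kv[1], reverse=True)
            if n / total >= share]

# Made-up tag distribution for one document:
doc_tags = Counter({"folksonomy": 40, "tagging": 25, "web2.0": 20,
                    "library": 3, "misc": 2})
print(power_tags(doc_tags))  # → ['folksonomy', 'tagging', 'web2.0']
```

Real implementations would first classify the distribution's shape (power law vs. inverse-logistic), as the paper discusses, before choosing a cutoff.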
  9. Carlson, S.; Seely, A.: Using OpenRefine's reconciliation to validate local authority headings (2017) 0.01
    0.0076571116 = product of:
      0.030628446 = sum of:
        0.030628446 = weight(_text_:library in 5142) [ClassicSimilarity], result of:
          0.030628446 = score(doc=5142,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.23240642 = fieldWeight in 5142, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0625 = fieldNorm(doc=5142)
      0.25 = coord(1/4)
    
    Abstract
    In 2015, the Cataloging and Metadata Services department of Rice University's Fondren Library developed a process to reconcile four years of authority headings against an internally developed thesaurus. With a goal of immediate cleanup as well as an ongoing maintenance procedure, staff developed a "hack" of OpenRefine's normal Reconciliation function that ultimately yielded 99.6% authority reconciliation and a stable process for monthly data verification.
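OpenRefine's Reconcile function speaks the generic Reconciliation Service API, which batches lookups as a JSON object of numbered queries. A minimal sketch of such a batch payload (the headings and result limit are made up; the article's specific "hack" is not reproduced here):

```python
import json
from urllib.parse import urlencode

def recon_queries(headings, limit=3):
    """Batch payload for a reconciliation service: one numbered query
    (q0..qN) per authority heading, as the Reconciliation Service API
    used by OpenRefine expects."""
    return {f"q{i}": {"query": h, "limit": limit}
            for i, h in enumerate(headings)}

# Hypothetical headings; the payload travels as the `queries`
# form parameter of the request to the reconciliation endpoint.
payload = recon_queries(["Smith, John, 1950-", "Rice University"])
query_string = urlencode({"queries": json.dumps(payload)})
```

Each response candidate carries a match score, which is what makes a bulk validation workflow like the one described above possible.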
  10. Jeffery, K.G.; Bailo, D.: EPOS: using metadata in geoscience (2014) 0.01
    0.0073997467 = product of:
      0.029598987 = sum of:
        0.029598987 = product of:
          0.059197973 = sum of:
            0.059197973 = weight(_text_:project in 1581) [ClassicSimilarity], result of:
              0.059197973 = score(doc=1581,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.27981415 = fieldWeight in 1581, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1581)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    One of the key aspects of the approaching data-intensive science era is the integration of data through interoperability of systems providing data products or visualisation and processing services. Far from being simple, interoperability requires robust and scalable e-infrastructures capable of supporting it. In this work we present the case of EPOS, a project for data integration in the field of Earth Sciences. We describe the design of its e-infrastructure and show its main characteristics. One of the main elements enabling the system to integrate data, data products and services is the metadata catalog based on the CERIF metadata model. This model, modified to fit into the general e-infrastructure design, is part of a three-layer metadata architecture. CERIF guarantees robust handling of metadata, which is in this case the key to interoperability and to one of the features of the EPOS system: the possibility of carrying out data-intensive science by orchestrating the distributed resources made available by EPOS data providers and stakeholders.
  11. White, H.: Examining scientific vocabulary : mapping controlled vocabularies with free text keywords (2013) 0.01
    0.0067907665 = product of:
      0.027163066 = sum of:
        0.027163066 = product of:
          0.054326132 = sum of:
            0.054326132 = weight(_text_:22 in 1953) [ClassicSimilarity], result of:
              0.054326132 = score(doc=1953,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.30952093 = fieldWeight in 1953, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1953)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    29. 5.2015 19:09:22
  12. Bohne-Lang, A.: Semantische Metadaten für den Webauftritt einer Bibliothek (2016) 0.01
    0.0067679947 = product of:
      0.027071979 = sum of:
        0.027071979 = weight(_text_:library in 3337) [ClassicSimilarity], result of:
          0.027071979 = score(doc=3337,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.2054202 = fieldWeight in 3337, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3337)
      0.25 = coord(1/4)
    
    Abstract
    The Semantic Web has attracted wide attention for more than ten years and, with the availability of the Resource Description Framework (RDF) and the corresponding ontologies, has made a great leap into practice. In everyday work, however, staff of small libraries and librarians with little affinity for technology face high hurdles, e.g. the question of how to embed this technology concretely into their own website: one feels like Don Quixote tilting at windmills. RDF with its ontologies is almost incomprehensibly complex for non-computer-scientists and thus not directly usable in practice on library websites at large. With Schema.org, the world's three largest search engines, Google, Bing and Yahoo, originally developed a simple and effective semantic description of entities. Schema.org is currently sponsored by Google, Microsoft, Yahoo and Yandex and is understood by many other search engines. Against this background, the library of the Medical Faculty Mannheim has embedded various machine-readable semantic metadata in its homepage (http://www.umm.uni-heidelberg.de/bibl/). Very interesting and forward-looking is the latest development of Schema.org, which allows a 'Library' (https://schema.org/Library) to be modelled with opening hours and much more. In addition, we have embedded semantic metadata in Open Graph and Dublin Core format in order to make legacy standards and Facebook-compliant information available in machine-readable form.
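The schema.org Library markup described in this abstract is typically embedded as JSON-LD. A minimal sketch: only the vocabulary type (https://schema.org/Library) and the homepage URL come from the text; the name wording and opening hours are placeholders.

```python
import json

# Hypothetical values except the schema.org type and the library URL.
library_jsonld = {
    "@context": "https://schema.org",
    "@type": "Library",
    "name": "Bibliothek der Medizinischen Fakultät Mannheim",
    "url": "http://www.umm.uni-heidelberg.de/bibl/",
    "openingHours": "Mo-Fr 08:00-20:00",  # placeholder hours
}

# Embedded in a page inside <script type="application/ld+json"> ... </script>
snippet = json.dumps(library_jsonld, ensure_ascii=False, indent=2)
print(snippet)
```

Search engines that understand schema.org can pick this block up directly from the page source, which is the effect the abstract describes.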
  13. Gartner, R.: Metadata : shaping knowledge from antiquity to the semantic web (2016) 0.01
    0.0067679947 = product of:
      0.027071979 = sum of:
        0.027071979 = weight(_text_:library in 731) [ClassicSimilarity], result of:
          0.027071979 = score(doc=731,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.2054202 = fieldWeight in 731, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=731)
      0.25 = coord(1/4)
    
    LCSH
    Library science
    Subject
    Library science
  14. Kleeck, D. Van; Langford, G.; Lundgren, J.; Nakano, H.; O'Dell, A.J.; Shelton, T.: Managing bibliographic data quality in a consortial academic library : a case study (2016) 0.01
    0.006699973 = product of:
      0.026799891 = sum of:
        0.026799891 = weight(_text_:library in 5133) [ClassicSimilarity], result of:
          0.026799891 = score(doc=5133,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.20335563 = fieldWeight in 5133, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5133)
      0.25 = coord(1/4)
    
  15. Kleeck, D. Van; Nakano, H.; Langford, G.; Shelton, T.; Lundgren, J.; O'Dell, A.J.: Managing bibliographic data quality for electronic resources (2017) 0.01
    0.006699973 = product of:
      0.026799891 = sum of:
        0.026799891 = weight(_text_:library in 5160) [ClassicSimilarity], result of:
          0.026799891 = score(doc=5160,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.20335563 = fieldWeight in 5160, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5160)
      0.25 = coord(1/4)
    
    Abstract
    This article presents a case study of quality management issues for electronic resource metadata to assess the support of user tasks (find, select, and obtain library resources) and potential for increased efficiencies in acquisitions and cataloging workflows. The authors evaluated the quality of existing bibliographic records (mostly vendor supplied) for e-resource collections as compared with records for the same collections in OCLC's WorldShare Collection Manager (WCM). Findings are that WCM records better support user tasks by containing more summaries and tables of contents; other checkpoints are largely comparable between the two source record groups. The transition to WCM records is discussed.
  16. Häusner, E.-M.; Sommerland, Y.: Assessment of metadata quality of the Swedish National Bibliography through mapping user awareness (2018) 0.01
    0.006699973 = product of:
      0.026799891 = sum of:
        0.026799891 = weight(_text_:library in 5169) [ClassicSimilarity], result of:
          0.026799891 = score(doc=5169,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.20335563 = fieldWeight in 5169, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5169)
      0.25 = coord(1/4)
    
    Abstract
    This article examines whether the metadata quality of the Swedish National Bibliography can be measured by mapping the level of user awareness regarding the characteristics of the data. A qualitative meta-synthesis was carried out, and results from two previous studies conducted at the National Library of Sweden were interpreted and conceptualized through an integrated analysis. The results of the meta-synthesis showed a need for an action plan for increasing user awareness in order to efficiently reach target groups of national bibliographic data at its fullest potential, i.e. user awareness of the usability and the quality of the metadata.
  17. Chou, C.: Purpose-driven assessment of cataloging and metadata services : transforming broken links into linked data (2019) 0.01
    0.006699973 = product of:
      0.026799891 = sum of:
        0.026799891 = weight(_text_:library in 5280) [ClassicSimilarity], result of:
          0.026799891 = score(doc=5280,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.20335563 = fieldWeight in 5280, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5280)
      0.25 = coord(1/4)
    
    Abstract
    Many primary school classrooms have book collections. Most teachers organize and maintain these collections by themselves, although some involve students in the processes. This qualitative study considers a third approach, parent-involved categorization, to understand how people without library or education training categorize books. We observed and interviewed parents and a teacher who worked together to categorize books in a kindergarten classroom. They employed multiple orthogonal organizing principles, felt that working collaboratively made the task less overwhelming, solved difficult problems pragmatically, organized books primarily to facilitate retrieval by the teacher, and left lumping and splitting decisions to the teacher.
  18. Pfister, E.; Wittwer, B.; Wolff, M.: Metadaten - Manuelle Datenpflege vs. Automatisieren : ein Praxisbericht zu Metadatenmanagement an der ETH-Bibliothek (2017) 0.01
    0.0059419204 = product of:
      0.023767682 = sum of:
        0.023767682 = product of:
          0.047535364 = sum of:
            0.047535364 = weight(_text_:22 in 5630) [ClassicSimilarity], result of:
              0.047535364 = score(doc=5630,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.2708308 = fieldWeight in 5630, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5630)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    B.I.T.online. 20(2017) H.1, S.22-25
  19. Park, H.; Smiraglia, R.P.: Enhancing data curation of cultural heritage for information sharing : a case study using open Government data (2014) 0.01
    0.0057428335 = product of:
      0.022971334 = sum of:
        0.022971334 = weight(_text_:library in 1575) [ClassicSimilarity], result of:
          0.022971334 = score(doc=1575,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.17430481 = fieldWeight in 1575, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=1575)
      0.25 = coord(1/4)
    
    Abstract
    The purpose of this paper is to enhance cultural heritage data curation. A core research question of this study is how to share cultural heritage data by using ontologies. A case study was conducted using open government data mapped with the CIDOC-CRM (Conceptual Reference Model). Twelve library-related files in unstructured data format were collected from an open government website, the Seoul Metropolitan Government of Korea (http://data.seoul.go.kr). Using the ontologies of CIDOC-CRM 5.1.2, we conducted a mapping process as a way of enhancing cultural heritage information so that it can be shared as a data component. We graphed each file, then mapped each file in tables. Implications of this study are both the enhanced discoverability of unstructured data and the reusability of mapped information. Issues emerging from this study involve verification of detail for complete compatibility without further input from domain experts.
  20. Welhouse, Z.; Lee, J.H.; Bancroft, J.: "What am I fighting for?" : creating a controlled vocabulary for video game plot metadata (2015) 0.01
    0.0057428335 = product of:
      0.022971334 = sum of:
        0.022971334 = weight(_text_:library in 2015) [ClassicSimilarity], result of:
          0.022971334 = score(doc=2015,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.17430481 = fieldWeight in 2015, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=2015)
      0.25 = coord(1/4)
    
    Abstract
    A video game's plot is one of its defining features, and prior research confirms the importance of plot metadata to users through persona analysis, interviews, and surveys. However, existing organizational systems, including library catalogs, game-related websites, and traditional plot classification systems, do not adequately describe the plot information of video games, in other words, what the game is really about. We attempt to address the issue by creating a controlled vocabulary based on a domain analysis involving a review of relevant literature and existing data structures. The controlled vocabulary is constructed in a pair structure for maximizing flexibility and extensibility. Adopting this controlled vocabulary for describing plot information of games will allow for useful search and collocation of video games.

Languages

  • e 85
  • d 4