Search (27 results, page 1 of 2)

  • theme_ss:"Metadaten"
  • year_i:[2010 TO 2020}
  1. Kopácsi, S. et al.: Development of a classification server to support metadata harmonization in a long term preservation system (2016) 0.01
    0.010645477 = product of:
      0.03193643 = sum of:
        0.03193643 = product of:
          0.06387286 = sum of:
            0.06387286 = weight(_text_:22 in 3280) [ClassicSimilarity], result of:
              0.06387286 = score(doc=3280,freq=2.0), product of:
                0.16508831 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047143444 = queryNorm
                0.38690117 = fieldWeight in 3280, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3280)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
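    The explain tree above is Lucene's classic TF-IDF ("ClassicSimilarity") score breakdown, and the same structure repeats for every result below. As a cross-check, its numbers can be reproduced from Lucene's documented formulas; a minimal sketch, taking only the input values (freq, docFreq, maxDocs, queryNorm, fieldNorm, and the two coord factors) from the tree above:

```python
import math

# Input values as reported in the explain tree for result 1 (term "22", doc 3280).
freq = 2.0              # termFreq of "22" in the matched field
doc_freq = 3622         # docFreq=3622
max_docs = 44218        # maxDocs=44218
query_norm = 0.047143444
field_norm = 0.078125   # fieldNorm(doc=3280)

tf = math.sqrt(freq)                           # 1.4142135 = tf(freq=2.0)
idf = 1 + math.log(max_docs / (doc_freq + 1))  # 3.5018296 = idf(docFreq, maxDocs)
query_weight = idf * query_norm                # 0.16508831 = queryWeight
field_weight = tf * idf * field_norm           # 0.38690117 = fieldWeight
weight = query_weight * field_weight           # 0.06387286 = weight(_text_:22 in 3280)

# The two coord factors, 0.5 = coord(1/2) and 0.33333334 = coord(1/3),
# scale the term weight down to the final score.
score = weight * 0.5 * (1 / 3)                 # ~0.010645477, displayed rounded as 0.01
```

    Tiny discrepancies in the last digits are expected, since Lucene computes these factors in 32-bit floats.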
  2. Hajra, A. et al.: Enriching scientific publications from LOD repositories through word embeddings approach (2016) 0.01
    0.010645477 = score from term "22" in doc 3281 (freq=2.0, fieldNorm=0.078125; same weighting breakdown as result 1)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  3. Mora-Mcginity, M. et al.: MusicWeb: music discovery with open linked semantic metadata (2016) 0.01
    0.010645477 = score from term "22" in doc 3282 (freq=2.0, fieldNorm=0.078125; same weighting breakdown as result 1)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  4. Peters, I.; Stock, W.G.: Power tags in information retrieval (2010) 0.01
    0.00880801 = score from term "methodology" in doc 865 (freq=2.0, idf=4.504705, fieldNorm=0.0390625; same breakdown structure as result 1)
    
    Abstract
    Purpose - Many Web 2.0 services (including Library 2.0 catalogs) make use of folksonomies. The purpose of this paper is to cut off all tags in the long tail of a document-specific tag distribution. The remaining tags at the beginning of a tag distribution are considered power tags and form a new, additional search option in information retrieval systems.
    Design/methodology/approach - In a theoretical approach the paper discusses document-specific tag distributions (power law and inverse-logistic shape), the development of such distributions (Yule-Simon process and shuffling theory) and introduces search tags (besides the well-known index tags) as a possibility for generating tag distributions.
    Findings - Search tags are compatible with broad and narrow folksonomies and with all knowledge organization systems (e.g. classification systems and thesauri), while index tags are only applicable in broad folksonomies. Based on these findings, the paper presents a sketch of an algorithm for mining and processing power tags in information retrieval systems.
    Research limitations/implications - This conceptual approach is in need of empirical evaluation in a concrete retrieval system.
    Practical implications - Power tags are a new search option for retrieval systems to limit the amount of hits.
    Originality/value - The paper introduces power tags as a means for enhancing the precision of search results in information retrieval systems that apply folksonomies, e.g. catalogs in Library 2.0 environments.
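    The core idea (treat the head of a document-specific tag distribution as power tags and cut off the long tail) is easy to illustrate. A toy sketch; the cumulative-share cutoff and the tag counts below are invented for illustration and are not the authors' algorithm:

```python
from collections import Counter

def power_tags(tag_counts, head_share=0.5):
    """Return the head of a document-specific tag distribution,
    cutting off the long tail.  The cumulative-share cutoff is an
    illustrative assumption, not the cutoff proposed in the paper."""
    total = sum(tag_counts.values())
    head, cumulated = [], 0
    for tag, count in sorted(tag_counts.items(), key=lambda kv: -kv[1]):
        head.append(tag)
        cumulated += count
        if cumulated >= head_share * total:
            break
    return head

# Hypothetical tag distribution for one document (power-law-like shape).
tags = Counter({"metadata": 40, "folksonomy": 25, "web2.0": 10,
                "tagging": 3, "misc": 1, "stuff": 1})
print(power_tags(tags))       # -> ['metadata']
print(power_tags(tags, 0.8))  # -> ['metadata', 'folksonomy']
```

    A retrieval system could then offer these head tags as an additional, precision-oriented search option, as the abstract proposes.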
  5. Ya-Ning, C.; Hao-Ren, K.: FRBRoo-based approach to heterogeneous metadata integration (2013) 0.01
    0.00880801 = score from term "methodology" in doc 1765 (freq=2.0, idf=4.504705, fieldNorm=0.0390625; same breakdown structure as result 1)
    
    Abstract
    Purpose - This paper seeks to adopt FRBRoo as an ontological approach to integrate heterogeneous metadata, and to transform a human-understandable format into a machine-understandable format for semantic query.
    Design/methodology/approach - Two use cases, museum artefacts and literary works, illustrate how FRBRoo can be used to re-contextualize the semantics of elements and the semantic relationships embedded in those elements. The shared ontology was then RDFized and examples were explored to examine the feasibility of the proposed approach.
    Findings - FRBRoo can serve as an interlingua aligning museum and library metadata to achieve heterogeneous metadata integration and semantic query without changing either of the original approaches to fit the other.
    Research limitations/implications - Exploration of more diverse use cases is required to further align the different approaches of museums and libraries using FRBRoo and make revisions.
    Practical implications - Solid evidence is provided for the use of FRBRoo in heterogeneous metadata integration and semantic query.
    Originality/value - This is the first study to elaborate how FRBRoo can play a role as a shared ontology to integrate the heterogeneous metadata generated by museums and libraries. This paper also shows how the proposed approach is distinct from the Dublin Core format crosswalk in re-contextualizing semantic meanings and their relationships, and further provides four new sub-types for mapping description language.
  6. Niininen, S.; Nykyri, S.; Suominen, O.: The future of metadata : open, linked, and multilingual - the YSO case (2017) 0.01
    0.00880801 = score from term "methodology" in doc 3707 (freq=2.0, idf=4.504705, fieldNorm=0.0390625; same breakdown structure as result 1)
    
    Abstract
    Purpose - The purpose of this paper is threefold: to focus on the process of multilingual concept scheme construction and the challenges involved; to address concrete challenges faced in the construction process, especially those related to equivalence between terms and concepts; and to briefly outline the translation strategies developed during the process of concept scheme construction.
    Design/methodology/approach - The analysis is based on experience acquired during the establishment of the Finnish thesaurus and ontology service Finto as well as the trilingual General Finnish Ontology YSO, both of which are being maintained and further developed at the National Library of Finland.
    Findings - Although uniform resource identifiers can be considered language-independent, they do not render concept schemes and their construction free of language-related challenges. The fundamental issue with all the challenges faced is how to maintain consistency and predictability when the nature of language requires each concept to be treated individually. The key to such challenges is to recognise the function of the vocabulary and the needs of its intended users.
    Social implications - Open science increases the transparency of not only research products, but also metadata tools. Gaining a deeper understanding of the challenges involved in their construction is important for a great variety of users - e.g. indexers, vocabulary builders and information seekers. Today, multilingualism is an essential aspect at both the national and international information society level.
    Originality/value - This paper draws on the practical challenges faced in concept scheme construction in a trilingual environment, with a focus on "concept scheme" as a translation and mapping unit.
  7. Gursoy, A.; Wickett, K.; Feinberg, M.: Understanding tag functions in a moderated, user-generated metadata ecosystem (2018) 0.01
    0.00880801 = score from term "methodology" in doc 3946 (freq=2.0, idf=4.504705, fieldNorm=0.0390625; same breakdown structure as result 1)
    
    Abstract
    Purpose - The purpose of this paper is to investigate tag use in a metadata ecosystem that supports a fan work repository to identify functions of tags and explore the system as a co-constructed communicative context.
    Design/methodology/approach - Using modified techniques from grounded theory (Charmaz, 2007), this paper integrates humanistic and social science methods to identify kinds of tag use in a rich setting.
    Findings - Three primary roles of tags emerge out of detailed study of the metadata ecosystem: tags can identify elements in the fan work, tags can reflect on how those elements are used or adapted in the fan work, and finally, tags can express the fan author's sense of her role in the discursive context of the fan work repository. Attending to each of the tag roles shifts focus away from just what tags say to include how they say it.
    Practical implications - Instead of building metadata systems designed solely for retrieval or description, this research suggests that it may be fruitful to build systems that recognize various metadata functions and allow for expressivity. This research also suggests that attending to metadata previously considered unusable in systems may reflect the participants' sense of the system and their role within it.
    Originality/value - In addition to accommodating a wider range of tag functions, this research implies consideration of metadata ecosystems, where different kinds of tags do different things and work together to create a multifaceted artifact.
  8. Li, C.; Sugimoto, S.: Provenance description of metadata application profiles for long-term maintenance of metadata schemas (2018) 0.01
    0.00880801 = score from term "methodology" in doc 4048 (freq=2.0, idf=4.504705, fieldNorm=0.0390625; same breakdown structure as result 1)
    
    Abstract
    Purpose - Provenance information is crucial for consistent maintenance of metadata schemas over time. The purpose of this paper is to propose a provenance model named DSP-PROV to keep track of structural changes of metadata schemas.
    Design/methodology/approach - The DSP-PROV model is developed by applying the general provenance description standard PROV of the World Wide Web Consortium to the Dublin Core Application Profile. The Metadata Application Profile of the Digital Public Library of America is selected as a case study to apply the DSP-PROV model. Finally, this paper evaluates the proposed model by comparing formal provenance description in DSP-PROV with semi-formal change log description in English.
    Findings - Formal provenance description in the DSP-PROV model has advantages over semi-formal provenance description in English in keeping metadata schemas consistent over time.
    Research limitations/implications - The DSP-PROV model is applicable to keeping track of the structural changes of a metadata schema over time. Provenance description of other features of a metadata schema, such as vocabulary and encoding syntax, is not covered.
    Originality/value - This study proposes a simple model for provenance description of structural features of metadata schemas based on a few standards widely accepted on the Web and shows the advantage of the proposed model over conventional semi-formal provenance description.
  9. Tallerås, C.; Dahl, J.H.B.; Pharo, N.: User conceptualizations of derivative relationships in the bibliographic universe (2018) 0.01
    0.00880801 = score from term "methodology" in doc 4247 (freq=2.0, idf=4.504705, fieldNorm=0.0390625; same breakdown structure as result 1)
    
    Abstract
    Purpose - Considerable effort is devoted to developing new models for organizing bibliographic metadata. However, such models have been repeatedly criticized for their lack of proper user testing. The purpose of this paper is to present a study on how non-experts in bibliographic systems map the bibliographic universe and, in particular, how they conceptualize relationships between independent but strongly related entities.
    Design/methodology/approach - The study is based on an open concept-mapping task performed to externalize the conceptualizations of 98 novice students. The conceptualizations of the resulting concept maps are identified and analyzed statistically.
    Findings - The study shows that the participants' conceptualizations have great variety, differing in detail and granularity. These conceptualizations can be categorized into two main groups according to derivative relationships: those that apply a single-entity model directly relating document entities and those (the majority) that apply a multi-entity model relating documents through a high-level collocating node. These high-level nodes seem to be most adequately interpreted either as superwork devices collocating documents belonging to the same bibliographic family or as devices collocating documents belonging to a shared fictional world.
    Originality/value - The findings can guide the work to develop bibliographic standards. Based on the diversity of the conceptualizations, the findings also emphasize the need for more user testing of both conceptual models and the bibliographic end-user systems implementing those models.
  10. Bogaard, T.; Hollink, L.; Wielemaker, J.; Ossenbruggen, J. van; Hardman, L.: Metadata categorization for identifying search patterns in a digital library (2019) 0.01
    0.00880801 = score from term "methodology" in doc 5281 (freq=2.0, idf=4.504705, fieldNorm=0.0390625; same breakdown structure as result 1)
    
    Abstract
    Purpose - For digital libraries, it is useful to understand how users search in a collection. Investigating search patterns can help them to improve the user interface, collection management and search algorithms. However, search patterns may vary widely in different parts of a collection. The purpose of this paper is to demonstrate how to identify these search patterns within a well-curated historical newspaper collection using the existing metadata.
    Design/methodology/approach - The authors analyzed search logs combined with metadata records describing the content of the collection, using this metadata to create subsets in the logs corresponding to different parts of the collection.
    Findings - The study shows that faceted search is more prevalent than non-faceted search in terms of number of unique queries, time spent, clicks and downloads. Distinct search patterns are observed in different parts of the collection, corresponding to historical periods, geographical regions or subject matter.
    Originality/value - First, this study provides deeper insights into search behavior at a fine granularity in a historical newspaper collection, by the inclusion of the metadata in the analysis. Second, it demonstrates how to use metadata categorization as a way to analyze distinct search patterns in a collection.
  11. Montenegro, M.: Subverting the universality of metadata standards (2019) 0.01
    0.00880801 = score from term "methodology" in doc 5340 (freq=2.0, idf=4.504705, fieldNorm=0.0390625; same breakdown structure as result 1)
    
    Abstract
    Purpose - The purpose of this paper is to investigate the underlying meanings, effects and cultural patterns of metadata standards, focusing on Dublin Core (DC), and explore the ways in which anticolonial metadata tools can be applied to exercise and promote Indigenous data sovereignty.
    Design/methodology/approach - Applying an anticolonial approach, this paper examines the assumptions underpinning the stated roles of two of DC's metadata elements, rights and creator. Based on that examination, the paper considers the limitations of DC for appropriately documenting Indigenous traditional knowledge (TK). The TK labels and their implementation are put forward as an alternative that addresses such limitations in metadata standards.
    Findings - The analysis of the rights and creator elements revealed that DC's universality and supposed neutrality threaten the rightful attribution, specificity and dynamism of TK, undermining Indigenous data sovereignty. The paper advocates for alternative descriptive methods grounded within tribal sovereignty values while recognizing the difficulties of dealing with issues of interoperability by means of metadata standards given potentially innate tendencies to customization within communities.
    Originality/value - This is the first paper to directly examine the implications of DC's rights and creator elements for documenting TK. The paper identifies ethical practices and culturally appropriate tools that unsettle the universality claims of metadata standards. By introducing the TK labels, the paper contributes to the efforts of Indigenous communities to regain control and ownership of their cultural and intellectual property.
  12. White, H.: Examining scientific vocabulary : mapping controlled vocabularies with free text keywords (2013) 0.01
    0.0085163815 = score from term "22" in doc 1953 (freq=2.0, fieldNorm=0.0625; same weighting breakdown as result 1)
    
    Date
    29. 5.2015 19:09:22
  13. Alves dos Santos, E.; Mucheroni, M.L.: VIAF and OpenCitations : cooperative work as a strategy for information organization in the linked data era (2018) 0.01
    0.0085163815 = score from term "22" in doc 4826 (freq=2.0, fieldNorm=0.0625; same weighting breakdown as result 1)
    
    Date
    18. 1.2019 19:13:22
  14. Ilik, V.; Storlien, J.; Olivarez, J.: Metadata makeover (2014) 0.01
    0.0074518337 = score from term "22" in doc 2606 (freq=2.0, fieldNorm=0.0546875; same weighting breakdown as result 1)
    
    Date
    10. 9.2000 17:38:22
  15. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.01
    0.0074518337 = score from term "22" in doc 3283 (freq=2.0, fieldNorm=0.0546875; same weighting breakdown as result 1)
    
  16. Pfister, E.; Wittwer, B.; Wolff, M.: Metadaten - Manuelle Datenpflege vs. Automatisieren : ein Praxisbericht zu Metadatenmanagement an der ETH-Bibliothek (2017) 0.01
    0.0074518337 = score from term "22" in doc 5630 (freq=2.0, fieldNorm=0.0546875; same weighting breakdown as result 1)
    
    Source
    B.I.T.online. 20(2017) H.1, S.22-25
  17. Khoo, M.J.; Ahn, J.-w.; Binding, C.; Jones, H.J.; Lin, X.; Massam, D.; Tudhope, D.: Augmenting Dublin Core digital library metadata with Dewey Decimal Classification (2015) 0.01
    0.007046408 = score from term "methodology" in doc 2320 (freq=2.0, idf=4.504705, fieldNorm=0.03125; same breakdown structure as result 1)
    
    Abstract
    Purpose - The purpose of this paper is to describe a new approach to a well-known problem for digital libraries: how to search across multiple unrelated libraries with a single query.
    Design/methodology/approach - The approach involves creating new Dewey Decimal Classification terms and numbers from existing Dublin Core records. In total, 263,550 records were harvested from three digital libraries. Weighted key terms were extracted from the title, description and subject fields of each record. Ranked DDC classes were automatically generated from these key terms by considering DDC hierarchies via a series of filtering and aggregation stages. A mean reciprocal ranking evaluation compared a sample of 49 generated classes against DDC classes created by a trained librarian for the same records.
    Findings - The best results combined weighted key terms from the title, description and subject fields. Performance declines with increased specificity of DDC level. The results compare favorably with similar studies.
    Research limitations/implications - The metadata harvest required manual intervention and the evaluation was resource intensive. Future research will look at evaluation methodologies that take account of issues of consistency and ecological validity.
    Practical implications - The method does not require training data and is easily scalable. The pipeline can be customized for individual use cases, for example, recall or precision enhancing.
    Social implications - The approach can provide centralized access to information from multiple domains currently provided by individual digital libraries.
    Originality/value - The approach addresses metadata normalization in the context of web resources. The automatic classification approach accounts for matches within hierarchies, aggregating lower level matches to broader parents and thus approximates the practices of a human cataloger.
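    The hierarchy-aware aggregation step can be illustrated with a toy sketch; the term-to-DDC mapping table and the weights below are invented for illustration, and the actual pipeline's filtering stages are more involved:

```python
def rank_ddc_classes(term_weights, term_to_ddc):
    """Aggregate weighted key terms into ranked DDC classes, crediting
    each matched class and its broader 3-digit parent, so that
    lower-level matches also support broader classes."""
    scores = {}
    for term, weight in term_weights.items():
        for ddc in term_to_ddc.get(term, []):
            for cls in {ddc, ddc.split(".")[0]}:  # the class and its 3-digit parent
                scores[cls] = scores.get(cls, 0.0) + weight
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical weighted key terms extracted from one Dublin Core record.
ranked = rank_ddc_classes(
    {"metadata": 0.9, "cataloging": 0.4},
    {"metadata": ["025.3"], "cataloging": ["025.3", "025.4"]})
print(ranked)  # -> ['025', '025.3', '025.4']
```

    Rolling weight up to broader parents is what lets the generated ranking approximate a cataloger's choice of a broader class when the specific evidence is split across subdivisions.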
  18. Gracy, K.F.: Enriching and enhancing moving images with Linked Data : an exploration in the alignment of metadata models (2018) 0.01
    0.007046408 = score from term "methodology" in doc 4200 (freq=2.0, idf=4.504705, fieldNorm=0.03125; same breakdown structure as result 1)
    
    Abstract
    Purpose
    The purpose of this paper is to examine the current state of Linked Data (LD) in archival moving image description, and propose ways in which current metadata records can be enriched and enhanced by interlinking such metadata with relevant information found in other data sets.
    Design/methodology/approach
    Several possible metadata models for moving image production and archiving are considered, including models from records management, digital curation, and the recent BIBFRAME AV Modeling Study. This research also explores how mappings between archival moving image records and relevant external data sources might be drawn, and what gaps exist between current vocabularies and what is needed to record and make accessible the full lifecycle of archiving through production, use, and reuse.
    Findings
    The author notes several major impediments to implementation of LD for archival moving images. The various pieces of information about creators, places, and events found in moving image records are not easily connected to relevant information in other sources because they are often not semantically defined within the record and can be hidden in unstructured fields. Libraries, archives, and museums must work on aligning the various vocabularies and schemas of potential value for archival moving image description to enable interlinking between vocabularies currently in use and those which are used by external data sets. Alignment of vocabularies is often complicated by mismatches in granularity between vocabularies.
    Research limitations/implications
    The focus is on how these models inform functional requirements for access and other archival activities, and how the field might benefit from having a common metadata model for critical archival descriptive activities.
    Practical implications
    By having a shared model, archivists may more easily align current vocabularies and develop new vocabularies and schemas to address the needs of moving image data creators and scholars.
    Originality/value
    Moving image archives, like other cultural institutions with significant heritage holdings, can benefit tremendously from investing in the semantic definition of information found in their information databases. While commercial entities such as search engines and data providers have already embraced the opportunities that semantic search provides for resource discovery, most non-commercial entities are just beginning to do so. Thus, this research addresses the benefits and challenges of enriching and enhancing archival moving image records with semantically defined information via LD.
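    The enrichment idea in the findings above can be illustrated with a minimal sketch: names hidden in an unstructured credits field are matched against a hand-maintained alignment table and emitted as triples linking the record to an external data set. All URIs, the record, and the pattern-based extraction are hypothetical placeholders, not the paper's implementation.

    ```python
    import re

    # A record as it might appear today: entities hidden in free text
    # rather than semantically defined fields (hypothetical example).
    record = {
        "id": "http://archive.example/film/42",
        "credits": "Directed by Agnes Varda. Filmed in Paris.",
    }

    # A hand-maintained alignment to an external authority file
    # (hypothetical URIs standing in for, e.g., a LOD name authority).
    AUTHORITY = {
        "Agnes Varda": "http://authority.example/person/varda",
        "Paris": "http://authority.example/place/paris",
    }

    def extract_triples(record):
        """Emit (subject, predicate, object) triples for every aligned
        name found inside the unstructured credits field."""
        triples = []
        for name, uri in AUTHORITY.items():
            if re.search(re.escape(name), record["credits"]):
                predicate = ("http://purl.org/dc/terms/contributor"
                             if "/person/" in uri
                             else "http://purl.org/dc/terms/spatial")
                triples.append((record["id"], predicate, uri))
        return triples
    ```

    The sketch also shows the impediment the author describes: the extraction only works for names the alignment table already knows, which is exactly why vocabulary alignment between institutions matters.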
  19. Maron, D.; Feinberg, M.: What does it mean to adopt a metadata standard? : a case study of Omeka and the Dublin Core (2018) 0.01
    Abstract
    Purpose
    The purpose of this paper is to employ a case study of the Omeka content management system to demonstrate how the adoption and implementation of a metadata standard (in this case, Dublin Core) can result in contrasting rhetorical arguments regarding metadata utility, quality, and reliability. In the Omeka example, the author illustrates a conceptual disconnect in how two metadata stakeholders - standards creators and standards users - operationalize metadata quality. For standards creators such as the Dublin Core community, metadata quality involves implementing a standard properly, according to established usage principles; in contrast, for standards users like Omeka, metadata quality involves mere adoption of the standard, with little consideration of proper usage and accompanying principles.
    Design/methodology/approach
    The paper uses an approach based on rhetorical criticism. The paper aims to establish whether Omeka's given ends (the position that Omeka claims to take regarding Dublin Core) align with Omeka's guiding ends (Omeka's actual argument regarding Dublin Core). To make this assessment, the paper examines both textual evidence (what Omeka says) and material-discursive evidence (what Omeka does).
    Findings
    The evidence shows that, while Omeka appears to argue that adopting the Dublin Core is an integral part of Omeka's mission, the platform's lack of support for Dublin Core implementation makes an opposing argument. Ultimately, Omeka argues that the appearance of adopting a standard is more important than its careful implementation.
    Originality/value
    This study contributes to our understanding of how metadata standards are understood and used in practice. The misalignment between Omeka's position and the goals of the Dublin Core community suggests that Omeka, and some portion of its users, do not value metadata interoperability and aggregation in the same way that the Dublin Core community does.
This indicates that, although certain values regarding standards adoption may be pervasive in the metadata community, these values are not equally shared amongst all stakeholders in a digital library ecosystem. The way that standards creators (Dublin Core) understand what it means to "adopt a standard" is different from the way that standards users (Omeka) understand what it means to "adopt a standard."
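    The paper's central distinction between merely adopting a standard and carefully implementing it can be sketched as two checks on a metadata record. The two rules below are deliberately simplified stand-ins for the Dublin Core community's actual usage principles, assumed here only for illustration.

    ```python
    import re

    # A subset of the 15 Dublin Core elements, for the sketch.
    DC_ELEMENTS = {"title", "creator", "subject", "description", "date",
                   "type", "format", "identifier", "language", "rights"}

    def adopts_dublin_core(record):
        """Adoption (the Omeka stance): the record merely uses
        Dublin Core element names, whatever their values."""
        return any(field in DC_ELEMENTS for field in record)

    def implements_dublin_core(record):
        """Implementation (the standards-creator stance), simplified:
        every DC element that appears carries a non-empty value, and
        'date' is written as an ISO 8601 date."""
        for field, value in record.items():
            if field not in DC_ELEMENTS:
                continue
            if not str(value).strip():
                return False
            if field == "date" and not re.fullmatch(
                    r"\d{4}(-\d{2}(-\d{2})?)?", str(value)):
                return False
        return True
    ```

    A record such as `{"title": "", "date": "last year"}` passes the first check but fails the second, which is the disconnect the case study describes.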
  20. Martins, S. de Castro: Modelo conceitual de ecossistema semântico de informações corporativas para aplicação em objetos multimídia [Conceptual model of a semantic corporate information ecosystem for application to multimedia objects] (2019) 0.01
    Abstract
    Information management in corporate environments is a growing challenge as companies' information assets expand, along with the need to use them in operations. Several management models have been applied on the most diverse fronts, practices grouped under the umbrella of Enterprise Content Management. This study proposes a conceptual model of a semantic corporate information ecosystem, based on the Universal Document Model proposed by Dagobert Soergel. It focuses on unstructured information objects, especially multimedia, which are increasingly used in corporate environments, adding semantics and expanding their retrieval potential for the composition and reuse of dynamic documents on demand. The proposed model considers stable elements of the organizational environment, such as actors, processes, business metadata, and information objects, as well as basic infrastructures of the corporate information environment. The main objective is to establish a conceptual model that adds semantic intelligence to information assets, leveraging pre-existing infrastructure in organizations and integrating and relating objects to other objects, actors, and business processes. The methodology considered the state of the art in information organization, representation and retrieval, organizational content management, and Semantic Web technologies, as reported in the scientific literature, as the basis for an integrative conceptual model; the research is therefore qualitative and exploratory. The predicted steps of the model are: Environment, Data Type and Source Definition, Data Distillation, Metadata Enrichment, and Storage. As a result, in theoretical terms, the extended model makes it possible to process heterogeneous and unstructured data according to the established scope and through the processes listed above, allowing value creation in the composition of dynamic information objects with semantic aggregations to the metadata.
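    The model's predicted steps can be read as a processing pipeline. The sketch below uses the stage names from the abstract, but the stage bodies are hypothetical placeholders, not the author's implementation.

    ```python
    # Stage names follow the abstract: Environment -> Data Type and
    # Source Definition -> Data Distillation -> Metadata Enrichment
    # -> Storage. All data and logic here are illustrative.

    def define_sources(environment):
        """Data Type and Source Definition: select the unstructured
        multimedia objects from the corporate environment."""
        return [obj for obj in environment if obj.get("type") == "multimedia"]

    def distill(objects):
        """Data Distillation: extract raw descriptive text from each
        selected object."""
        return [{"id": o["id"], "text": o["payload"].lower()} for o in objects]

    def enrich(distilled, business_metadata):
        """Metadata Enrichment: relate each object to the actors
        (and, by extension, processes) named in its text."""
        for item in distilled:
            item["actors"] = [a for a in business_metadata["actors"]
                              if a.lower() in item["text"]]
        return distilled

    def store(enriched):
        """Storage: index enriched objects by identifier so they can
        be composed and reused in dynamic documents on demand."""
        return {item["id"]: item for item in enriched}
    ```

    Chaining the stages over a toy environment shows how semantics accumulate on the way to storage, which is the value-creation step the abstract describes.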