Search (18 results, page 1 of 1)

  • author_ss:"Zeng, M.L."
  1. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.07
    0.07186526 = product of:
      0.14373052 = sum of:
        0.02834915 = weight(_text_:libraries in 1967) [ClassicSimilarity], result of:
          0.02834915 = score(doc=1967,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.2177704 = fieldWeight in 1967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.05077526 = weight(_text_:case in 1967) [ClassicSimilarity], result of:
          0.05077526 = score(doc=1967,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 1967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.04182736 = weight(_text_:studies in 1967) [ClassicSimilarity], result of:
          0.04182736 = score(doc=1967,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 1967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.022778753 = product of:
          0.045557506 = sum of:
            0.045557506 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.045557506 = score(doc=1967,freq=4.0), product of:
                0.13876937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962768 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Source
    Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn
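The scoring breakdown retained under the first result is Lucene's ClassicSimilarity (TF-IDF) explain output: each leaf term score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and coord() scales the sum by the fraction of query terms matched. A minimal sketch reproducing the `libraries` leaf from the numbers shown above (the variable names are ours, not Lucene's):

```python
import math

# Inputs copied verbatim from the first result's explain tree,
# leaf: weight(_text_:libraries in 1967).
tf = math.sqrt(2.0)      # tf(freq=2.0) = sqrt(freq) = 1.4142135
idf = 3.2850544          # idf(docFreq=4499, maxDocs=44218)
query_norm = 0.03962768  # queryNorm
field_norm = 0.046875    # fieldNorm(doc=1967)

query_weight = idf * query_norm        # 0.13017908 in the tree
field_weight = tf * idf * field_norm   # 0.2177704 in the tree
score = query_weight * field_weight    # 0.02834915 in the tree
print(round(score, 8))
```

Summing the four leaf scores (0.02834915 + 0.05077526 + 0.04182736 + 0.022778753 = 0.14373052) and applying coord(4/8) = 0.5 reproduces the displayed document score of 0.07186526.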
  2. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2014) 0.06
    
    Abstract
    This article reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The article discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the Dewey Decimal Classification [DDC] (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Footnote
    Contribution in a special issue "Beyond libraries: Subject metadata in the digital environment and Semantic Web" - Contains contributions from the IFLA Satellite Post-Conference of the same name, 17-18 August 2012, Tallinn.
  3. Salaba, A.; Zeng, M.L.: Extending the "Explore" user task beyond subject authority data into the linked data sphere (2014) 0.02
    
    Abstract
    "Explore" is a user task introduced in the Functional Requirements for Subject Authority Data (FRSAD) final report. Through various case scenarios, the authors discuss how structured data, presented based on Linked Data principles and using knowledge organisation systems (KOS) as the backbone, extend the explore task within and beyond subject authority data.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  4. Golub, K.; Tudhope, D.; Zeng, M.L.; Zumer, M.: Terminology registries for knowledge organization systems : functionality, use, and attributes (2014) 0.01
    
    Abstract
    Terminology registries (TRs) are a crucial element of the infrastructure required for resource discovery services, digital libraries, Linked Data, and semantic interoperability generally. They can make the content of knowledge organization systems (KOS) available both for human and machine access. The paper describes the attributes and functionality for a TR, based on a review of published literature, existing TRs, and a survey of experts. A domain model based on user tasks is constructed and a set of core metadata elements for use in TRs is proposed. Ideally, the TR should allow searching as well as browsing for a KOS, matching a user's search while also providing information about existing terminology services, accessible to both humans and machines. The issues surrounding metadata for KOS are also discussed, together with the rationale for different aspects and the importance of a core set of KOS metadata for future machine-based access; a possible core set of metadata elements is proposed. This is dealt with in terms of practical experience and in relation to the Dublin Core Application Profile.
    Date
    22. 8.2014 17:12:54
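The registry functionality described in record 4 (searching and browsing KOS descriptions via their metadata) can be pictured as a minimal in-memory sketch. The field names below (title, type, subject) are illustrative placeholders, not the core element set the article proposes:

```python
# A toy terminology-registry sketch: store KOS descriptions and
# filter them by metadata. Field names are hypothetical, not the
# article's proposed core metadata elements.
registry = [
    {"title": "Chinese Classified Thesaurus", "type": "classification+thesaurus", "subject": "general"},
    {"title": "Dewey Decimal Classification", "type": "classification", "subject": "general"},
    {"title": "Music Ontology", "type": "ontology", "subject": "music"},
]

def search(registry, **criteria):
    """Return KOS entries whose metadata matches every given criterion (substring match)."""
    return [
        kos for kos in registry
        if all(value.lower() in kos.get(field, "").lower()
               for field, value in criteria.items())
    ]

print([kos["title"] for kos in search(registry, type="classification")])
```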
  5. Zeng, M.L.: Developing control mechanisms for discipline-based virtual libraries : a study of the process (1995) 0.01
    
  6. Gracy, K.F.; Zeng, M.L.; Skirvin, L.: Exploring methods to improve access to music resources by aligning library data with Linked Data : a report of methodologies and preliminary findings (2013) 0.01
    
    Abstract
    As a part of a research project aiming to connect library data to the unfamiliar data sets available in the Linked Data (LD) community's CKAN Data Hub (thedatahub.org), this project collected, analyzed, and mapped properties used in describing and accessing music recordings, scores, and music-related information used by selected music LD data sets, library catalogs, and various digital collections created by libraries and other cultural institutions. This article reviews current efforts to connect music data through the Semantic Web, with an emphasis on the Music Ontology (MO) and ontology alignment approaches; it also presents a framework for understanding the life cycle of a musical work, focusing on the central activities of composition, performance, and use. The project studied metadata structures and properties of 11 music-related LD data sets and mapped them to the descriptions commonly used in the library cataloging records for sound recordings and musical scores (including MARC records and their extended schema.org markup), and records from 20 collections of digitized music recordings and scores (featuring a variety of metadata structures). The analysis resulted in a set of crosswalks and a unified crosswalk that aligns these properties. The paper reports on detailed methodologies used and discusses research findings and issues. Topics of particular concern include (a) the challenges of mapping between the overgeneralized descriptions found in library data and the specialized, music-oriented properties present in the LD data sets; (b) the hidden information and access points in library data; and (c) the potential benefits of enriching library data through the mapping of properties found in library catalogs to similar properties used by LD data sets.
    Date
    28.10.2013 17:22:17
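Record 6 above describes building per-scheme crosswalks and a unified crosswalk that aligns equivalent properties across music LD data sets and library records. A minimal sketch of that data structure, with hypothetical property names (mo:performer, marc:511, schema:byArtist are illustrative placeholders, not the project's actual mapping tables):

```python
# Per-scheme crosswalks: each maps a scheme-specific property to a
# shared concept. Property names are hypothetical illustrations.
CROSSWALKS = {
    "music_ontology": {"mo:performer": "performer", "mo:composer": "composer"},
    "marc": {"marc:511": "performer", "marc:100": "composer"},
    "schema_org": {"schema:byArtist": "performer", "schema:composer": "composer"},
}

def unified_crosswalk(crosswalks):
    """Invert the per-scheme maps into one table keyed by the shared concept."""
    unified = {}
    for scheme, mapping in crosswalks.items():
        for prop, concept in mapping.items():
            unified.setdefault(concept, {})[scheme] = prop
    return unified

print(unified_crosswalk(CROSSWALKS)["performer"])
```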
  7. Zeng, M.L.: Metadata elements for object description and representation : a case report from a digitized historical fashion collection project (1999) 0.01
    
  8. Zumer, M.; Zeng, M.L.; Mitchell, J.S.: FRBRizing KOS relationships : applying the FRBR model to versions of the DDC (2012) 0.01
    
    Abstract
    The paper presents the approach of using the Functional Requirements for Bibliographic Records (FRBR) model to investigate the complicated sets of relationships among different versions of a classification system for the purposes of specifying provenance of classification data and facilitating collaborative efforts for using and reusing classification data, particularly in a linked data setting. The long-term goal of this research goes beyond the Dewey Decimal Classification that is used as a case. It addresses the questions of whether and how the modelling approach and the FRBR-based model itself can be generalized and applied to other classification systems, multilingual and multicultural vocabularies, and even non-KOS resources that share similar characteristics.
  9. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Extending models for controlled vocabularies to classification systems : modeling DDC with FRSAD (2011) 0.01
    
    Abstract
    The Functional Requirements for Subject Authority Data (FRSAD) conceptual model identifies entities, attributes and relationships as they relate to subject authority data. FRSAD includes two main entities, thema (any entity used as a subject of a work) and nomen (any sign or sequence of signs that a thema is known by, referred to, or addressed as). In a given controlled vocabulary and within a domain, a nomen is the appellation of only one thema. The authors consider the question, can the FRSAD conceptual model be extended beyond controlled vocabularies (its original focus) to model classification data? Models that are developed based on the structures and functions of controlled vocabularies (such as thesauri and subject heading systems) often need to be adjusted or extended to accommodate classification systems that have been developed with different focused functions, structures and fundamental theories. The Dewey Decimal Classification (DDC) system is used as a case study to test applicability of the FRSAD model for classification data, and as a springboard for a general discussion of issues related to the use of FRSAD for the representation of classification data.
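The FRSAD constraint stated above (within one vocabulary and domain, a nomen is the appellation of only one thema) can be sketched as a small invariant-enforcing class. The class and method names here are our own illustration, not an API from the FRSAD report:

```python
# Sketch of the FRSAD thema/nomen constraint: within one vocabulary,
# a nomen (appellation) may denote only one thema (subject entity).
class Vocabulary:
    def __init__(self):
        self._nomen_to_thema = {}

    def add_appellation(self, nomen: str, thema: str) -> None:
        """Register a nomen for a thema, rejecting reuse for a different thema."""
        existing = self._nomen_to_thema.get(nomen)
        if existing is not None and existing != thema:
            raise ValueError(f"nomen {nomen!r} already names thema {existing!r}")
        self._nomen_to_thema[nomen] = thema

    def thema_of(self, nomen: str) -> str:
        return self._nomen_to_thema[nomen]

ddc = Vocabulary()
ddc.add_appellation("780", "Music")    # a class number is one nomen
ddc.add_appellation("Music", "Music")  # a caption is another nomen
print(ddc.thema_of("780"))
```

This mirrors why classification data strains the model: class numbers, captions, and index terms are all nomens for the same thema, while the same string may be a valid nomen in a different vocabulary.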
  10. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Extending models for controlled vocabularies to classification systems : modelling DDC with FRSAD (2011) 0.01
    
    Abstract
    The Functional Requirements for Subject Authority Data (FRSAD) conceptual model identifies entities, attributes and relationships as they relate to subject authority data. FRSAD includes two main entities, thema (any entity used as a subject of a work) and nomen (any sign or sequence of signs that a thema is known by, referred to, or addressed as). In a given controlled vocabulary and within a domain, a nomen is the appellation of only one thema. The authors consider the question, can the FRSAD conceptual model be extended beyond controlled vocabularies (its original focus) to model classification data? Models that are developed based on the structures and functions of controlled vocabularies (such as thesauri and subject heading systems) often need to be adjusted or extended to accommodate classification systems that have been developed with different focused functions, structures and fundamental theories. The Dewey Decimal Classification (DDC) system is used as a case study to test applicability of the FRSAD model for classification data, and as a springboard for a general discussion of issues related to the use of FRSAD for the representation of classification data.
  11. Zumer, M.; Zeng, M.L.: Application of FRBR and FRSAD to classification systems (2015) 0.01
    
    Abstract
    The Functional Requirements for Subject Authority Data (FRSAD) conceptual model defines entities, attributes and relationships as they relate to subject authority data. FRSAD includes two main entities, thema (any entity used as the subject of a work) and nomen (any sign or arrangement of signs that a thema is known by, referred to, or addressed as). In a given controlled vocabulary and within a domain, a nomen is the appellation of only one thema. The authors consider the question: can the FRSAD conceptual model be extended beyond controlled vocabularies (its original focus) to model classification data? Models that are developed based on the structures and functions of controlled vocabularies (such as thesauri and subject heading systems) often need to be adjusted or extended to accommodate classification systems that have been developed with different focused functions, structures and fundamental theories. The Dewey Decimal Classification (DDC) system and Universal Decimal Classification (UDC) are used as a case study to test applicability of the FRSAD model for classification data and the applicability of the Functional Requirements for Bibliographic Records (FRBR) for modelling versions, such as different adaptations and different language editions.
  12. Zumer, M.; Zeng, M.L.; Salaba, A.: FRSAD: conceptual modeling of aboutness (2012) 0.00
    
    Imprint
    Santa Barbara, CA : Libraries Unlimited
  13. Zeng, M.L.; Fan, W.: SKOS and its application in transferring traditional thesauri into networked knowledge organization systems (2008) 0.00
    
    Abstract
    In remembrance of Magda Heiner-Freiling, who dedicated her professional efforts to promoting the sharing of subject access among world libraries, we sincerely wish to add our contribution to the endeavor she started and dreamed of finishing by writing this paper in Chinese, introducing SKOS and discussing its applications in transferring the largest controlled vocabulary in China, the Chinese Classified Thesaurus (CCT), into a SKOS-based knowledge organization system (KOS). The paper discusses the conceptual models of concept-based and term-based systems, the converting solutions of CCT, and the potential usage of a KOS registry built on SKOS and other Web-based protocols and technologies.
  14. Chan, L.M.; Zeng, M.L.: Metadata interoperability and standardization - a study of methodology, part I : achieving interoperability at the schema level (2006) 0.00
    
    Abstract
    The rapid growth of Internet resources and digital collections has been accompanied by a proliferation of metadata schemas, each of which has been designed based on the requirements of particular user communities, intended users, types of materials, subject domains, project needs, etc. Problems arise when building large digital libraries or repositories with metadata records that were prepared according to diverse schemas. This article (published in two parts) contains an analysis of the methods that have been used to achieve or improve interoperability among metadata schemas and applications, for the purposes of facilitating conversion and exchange of metadata and enabling cross-domain metadata harvesting and federated searches. From a methodological point of view, implementing interoperability may be considered at different levels of operation: schema level, record level, and repository level. Part I of the article intends to explain possible situations in which metadata schemas may be created or implemented, whether in individual projects or in integrated repositories. It also discusses approaches used at the schema level. Part II of the article will discuss metadata interoperability efforts at the record and repository levels.
  15. Zeng, M.L.; Sula, C.A.; Gracy, K.F.; Hyvönen, E.; Alves Lima, V.M.: JASIST special issue on digital humanities (DH) : guest editorial (2022) 0.00
    
    Abstract
    More than 15 years ago, A Companion to Digital Humanities marked out the area of digital humanities (DH) "as a discipline in its own right" (Schreibman et al., 2004, p. xxiii). In the years that have followed, there is ample evidence that the DH domain, formed by the intersection of humanities disciplines and digital information technology, has undergone remarkable expansion. This growth is reflected in A New Companion to Digital Humanities (Schreibman et al., 2016). The extensively revised contents of the second edition were contributed by a global team of authors who are pioneers of innovative research in the field. Over this formative period, DH has become a widely recognized, impactful mode of scholarship and an institutional unit for collaborative, transdisciplinary, and computationally engaged research, teaching, and publication (Burdick et al., 2012; Svensson, 2010; Van Ruyskensvelde, 2014). The field of DH has advanced tremendously over the last decade and continues to expand. Meanwhile, competing definitions and approaches of DH scholars continue to spark debate. "Complexity" was a theme of the DH2019 international conference, as it demonstrates the multifaceted connections within DH scholarship today (Alliance of Digital Humanities Organizations, 2019). Yet, while it is often assumed that DH is in flux and not particularly fixed as an institutional or intellectual construct, there are also obviously touchstones within the DH field, most visibly in the relationship between traditional humanities disciplines and technological infrastructures. Thus, it is still meaningful to "bring together the humanistic and the digital through embracing a non-territorial and liminal zone" (Svensson, 2016, p. 477). This is the focus of this JASIST special issue, which mirrors the increasing attention on DH worldwide.
  16. Zeng, M.L.; Gracy, K.F.; Zumer, M.: Using a semantic analysis tool to generate subject access points : a study using Panofsky's theory and two research samples (2014) 0.00
    
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  17. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008) 0.00
    Abstract
    In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collective effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC), 4th edition, and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). The major challenge in encoding this large vocabulary stems from its integrated structure. CCT is the result of combining two structures (illustrated in Figure 1): a thesaurus that follows the standardized ISO 2788 structure, and a classification scheme that is basically enumerative but provides some flexibility through several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by the differing granularities of the two original schemes and their representation with various levels of SKOS elements, as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports the progress, shares sample SKOS entries, and summarizes problems identified during the SKOS encoding process. Although OWL Lite and OWL Full provide richer expressiveness, the cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
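    The encoding approach described in the abstract can be sketched with a short, dependency-free Python fragment that builds the SKOS triples for one classified-thesaurus entry. The base URI, class number TP391, and labels below are illustrative assumptions, not actual CCT data:

```python
# Sketch: one classification class with its mapped thesaurus terms
# expressed as SKOS triples (subject, predicate, object). The URIs,
# notation, and labels are hypothetical, not taken from CCT itself.
SKOS = "http://www.w3.org/2004/02/skos/core#"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
CCT = "http://example.org/cct/"  # hypothetical base URI for CCT concepts

def skos_concept(notation, pref_label, alt_labels=(), broader=None):
    """Build the SKOS triples for one classified-thesaurus entry:
    the class number goes into skos:notation, the mapped preferred
    thesaurus term into skos:prefLabel, entry terms into skos:altLabel,
    and the parent class into skos:broader."""
    subject = CCT + notation
    triples = [
        (subject, RDF + "type", SKOS + "Concept"),
        (subject, SKOS + "notation", notation),
        (subject, SKOS + "prefLabel", pref_label),
    ]
    triples += [(subject, SKOS + "altLabel", alt) for alt in alt_labels]
    if broader is not None:
        triples.append((subject, SKOS + "broader", CCT + broader))
    return triples

# Hypothetical entry: class TP391 with one entry term and a parent class.
triples = skos_concept("TP391", "Information processing",
                       alt_labels=["Data processing"], broader="TP39")
for t in triples:
    print(t)
```

    In a production encoding, an RDF library (e.g. rdflib) would serialize such triples to Turtle or RDF/XML; the pre-coordinated headings mentioned above are harder, since they have no existing unique identifiers to serve as subjects.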
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22-26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  18. Zhang, J.; Zeng, M.L.: A new similarity measure for subject hierarchical structures (2014) 0.00
    Date
    8.4.2015 16:22:13