Search (11 results, page 1 of 1)

  • author_ss:"Zeng, M.L."
  1. Golub, K.; Tudhope, D.; Zeng, M.L.; Zumer, M.: Terminology registries for knowledge organization systems : functionality, use, and attributes (2014)
    Abstract
    Terminology registries (TRs) are a crucial element of the infrastructure required for resource discovery services, digital libraries, Linked Data, and semantic interoperability generally. They can make the content of knowledge organization systems (KOS) available for both human and machine access. The paper describes the attributes and functionality of a TR, based on a review of the published literature, existing TRs, and a survey of experts. A domain model based on user tasks is constructed and a set of core metadata elements for use in TRs is proposed. Ideally, a TR should allow both searching and browsing of KOS, matching a user's search while also providing information about existing terminology services, accessible to both humans and machines. The issues surrounding metadata for KOS are also discussed, together with the rationale for different aspects and the importance of a core set of KOS metadata for future machine-based access. This is dealt with in terms of practical experience and in relation to the Dublin Core Application Profile.
    Date
    22. 8.2014 17:12:54
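The registry functionality described in this abstract — records describing each KOS, searchable and browsable by humans and machines — can be illustrated with a minimal Python sketch. The element names, URIs, and the `browse` helper below are illustrative assumptions, not the core element set the paper actually proposes.

```python
# A hedged sketch of what a terminology-registry record for one KOS might
# contain. Element names and URIs are hypothetical, for illustration only.
kos_record = {
    "title": "Dewey Decimal Classification",
    "identifier": "http://example.org/tr/ddc",           # hypothetical registry URI
    "kosType": "classification scheme",
    "language": ["en", "sv"],
    "creator": "OCLC",
    "serviceEndpoint": "http://example.org/ddc/sparql",  # machine access point
}

def browse(registry: list[dict], **criteria) -> list[dict]:
    """Filter registry records on exact-match criteria (search + browse)."""
    def matches(rec: dict) -> bool:
        return all(rec.get(k) == v for k, v in criteria.items())
    return [rec for rec in registry if matches(rec)]

print(browse([kos_record], kosType="classification scheme"))
```

A real registry would of course expose this over an API for machine clients as well; the dictionary here only stands in for one record's metadata.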
  2. Zumer, M.; Zeng, M.L.; Mitchell, J.S.: FRBRizing KOS relationships : applying the FRBR model to versions of the DDC (2012)
    Abstract
    The paper presents the approach of using the Functional Requirements for Bibliographic Records (FRBR) model to investigate the complicated sets of relationships among different versions of a classification system, with the aims of specifying the provenance of classification data and facilitating collaborative efforts to use and reuse classification data, particularly in a linked data setting. The long-term goal of this research goes beyond the Dewey Decimal Classification, which is used as a case study. It addresses the questions of whether and how the modelling approach and the FRBR-based model itself can be generalized and applied to other classification systems, multilingual and multicultural vocabularies, and even non-KOS resources that share similar characteristics.
  3. Zumer, M.; Zeng, M.L.: Application of FRBR and FRSAD to classification systems (2015)
    Abstract
    The Functional Requirements for Subject Authority Data (FRSAD) conceptual model defines entities, attributes, and relationships as they relate to subject authority data. FRSAD includes two main entities: thema (any entity used as the subject of a work) and nomen (any sign or arrangement of signs by which a thema is known, referred to, or addressed). In a given controlled vocabulary and within a domain, a nomen is the appellation of only one thema. The authors consider the question: can the FRSAD conceptual model be extended beyond controlled vocabularies (its original focus) to model classification data? Models developed from the structures and functions of controlled vocabularies (such as thesauri and subject heading systems) often need to be adjusted or extended to accommodate classification systems, which have been developed with different functions, structures, and fundamental theories. The Dewey Decimal Classification (DDC) and Universal Decimal Classification (UDC) are used as a case study to test the applicability of the FRSAD model for classification data and the applicability of the Functional Requirements for Bibliographic Records (FRBR) model for representing versions, such as different adaptations and different language editions.
  4. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012)
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  5. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2014)
    Abstract
    This article reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The article discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the Dewey Decimal Classification [DDC] (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  6. Salaba, A.; Zeng, M.L.: Extending the "Explore" user task beyond subject authority data into the linked data sphere (2014)
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  7. Zeng, M.L.; Gracy, K.F.; Zumer, M.: Using a semantic analysis tool to generate subject access points : a study using Panofsky's theory and two research samples (2014)
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  8. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008)
    Abstract
    In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collective effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC) 4th edition and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). Major challenges of encoding this large vocabulary come from its integrated structure. CCT is the result of the combination of two structures (illustrated in Figure 1): a thesaurus that uses the ISO 2788 standardized structure, and a classification scheme that is basically enumerative but provides some flexibility through several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by differences in granularity between the two original schemes and their representation with various levels of SKOS elements, as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports the progress, shares sample SKOS entries, and summarizes problems identified during the SKOS encoding process. Although OWL Lite and OWL Full provide richer expressiveness, the cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
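The class-to-thesaurus-term mapping described in the CCT abstract can be sketched as SKOS triples. The sketch below is library-free Python with abbreviated (CURIE-style) predicates; the namespace, notation, labels, and the choice of skos:relatedMatch for the mapping are invented for illustration and are not the pilot study's actual identifiers or encoding decisions.

```python
# Hypothetical namespace for CCT resources; URIs below are illustrative only.
CCT = "http://example.org/cct/"

clc_class = CCT + "G254"        # a hypothetical classification class
thesaurus_term = CCT + "t0001"  # a hypothetical mapped thesaurus term

# Each resource is modeled as a skos:Concept; CCT's manually created
# class-to-term mapping is expressed here as a SKOS mapping relation.
triples = [
    (clc_class, "rdf:type", "skos:Concept"),
    (clc_class, "skos:notation", '"G254"'),
    (clc_class, "skos:prefLabel", '"图书分类"@zh'),
    (thesaurus_term, "rdf:type", "skos:Concept"),
    (thesaurus_term, "skos:prefLabel", '"图书分类"@zh'),
    (clc_class, "skos:relatedMatch", thesaurus_term),
]

def term(t: str) -> str:
    """Wrap full URIs in angle brackets; leave CURIEs and literals as-is."""
    return f"<{t}>" if t.startswith("http") else t

def to_turtle(triples: list[tuple[str, str, str]]) -> str:
    """Render the triples as simple Turtle-like statements."""
    return "\n".join(f"{term(s)} {p} {term(o)} ." for s, p, o in triples)

print(to_turtle(triples))
```

In practice an RDF library (e.g. rdflib) would handle serialization and namespaces; the point here is only the shape of the data: both the enumerative class and the thesaurus term become concepts, and the CCT mapping becomes an explicit triple between them.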
  9. Chan, L.M.; Zeng, M.L.: Metadata interoperability and standardization - a study of methodology, part II : achieving interoperability at the record and repository levels (2006)
    Abstract
    This is the second part of an analysis of the methods that have been used to achieve or improve interoperability among metadata schemas and their applications, in order to facilitate the conversion and exchange of metadata and to enable cross-domain metadata harvesting and federated searches. From a methodological point of view, implementing interoperability may be considered at different levels of operation: schema level (discussed in Part I of the article), record level (discussed in Part II), and repository level (also discussed in Part II). The results of efforts to improve interoperability may be observed from different perspectives as well, including element-based and value-based approaches. As discussed in Part I of this study, the results can be observed at different levels:
    1. Schema level - Efforts are focused on the elements of the schemas, independent of any applications. The results usually appear as derived element sets or encoded schemas, crosswalks, application profiles, and element registries.
    2. Record level - Efforts are intended to integrate metadata records through the mapping of elements according to their semantic meanings. Common results include converted records and new records resulting from combining values of existing records.
    3. Repository level - With harvested or integrated records from varying sources, efforts at this level focus on mapping the value strings associated with particular elements (e.g., terms associated with subject or format elements). The results enable cross-collection searching.
    In the following sections, we continue to analyze interoperability efforts and methodologies, focusing on the record level and the repository level. It should be noted that the models discussed in this article are not always mutually exclusive; sometimes, within a particular project, more than one method may be used.
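Record-level mapping, as described above, converts element values from one schema into another according to semantic equivalence. A minimal sketch, assuming a hypothetical MARC-to-Dublin-Core element mapping (the field tags and their targets are illustrative, not a published crosswalk):

```python
# Hypothetical element-to-element mapping; a real crosswalk is defined at
# the schema level first, then applied to records like this.
MARC_TO_DC = {
    "245a": "title",
    "100a": "creator",
    "260b": "publisher",
    "650a": "subject",   # repeatable: several source values merge here
}

def convert_record(source: dict[str, list[str]]) -> dict[str, list[str]]:
    """Map element values into the target schema by semantic equivalence."""
    target: dict[str, list[str]] = {}
    for src_element, values in source.items():
        dc_element = MARC_TO_DC.get(src_element)
        if dc_element is None:
            continue                 # unmapped elements are dropped
        target.setdefault(dc_element, []).extend(values)
    return target

record = {"245a": ["Metadata interoperability"],
          "100a": ["Chan, L.M."],
          "650a": ["Metadata", "Interoperability"]}
print(convert_record(record))
```

Repository-level work would then operate on the *values* (e.g., reconciling the subject terms against a common vocabulary), which this element-level sketch deliberately leaves untouched.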
  10. Zhang, J.; Zeng, M.L.: A new similarity measure for subject hierarchical structures (2014)
    Date
    8. 4.2015 16:22:13
  11. Gracy, K.F.; Zeng, M.L.; Skirvin, L.: Exploring methods to improve access to music resources by aligning library data with Linked Data : a report of methodologies and preliminary findings (2013)
    Date
    28.10.2013 17:22:17