Search (26 results, page 1 of 2)

  • Active filter: author_ss:"Zeng, M.L."
  1. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.03
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
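    As a rough illustration of the kind of multilingual modeling the study examines (not the authors' actual data or model), the sketch below treats one DDC class as a single language-independent concept and attaches English and Swedish captions as language-tagged SKOS labels; the namespace, class number, and captions are invented for this example.
```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

DDC = Namespace("http://example.org/ddc22/")  # hypothetical namespace, for illustration only

g = Graph()
g.bind("skos", SKOS)

cls = DDC["641.5"]  # one class shared by both language editions
g.add((cls, RDF.type, SKOS.Concept))
g.add((cls, SKOS.notation, Literal("641.5")))
g.add((cls, SKOS.prefLabel, Literal("Cooking", lang="en")))     # English caption
g.add((cls, SKOS.prefLabel, Literal("Matlagning", lang="sv")))  # Swedish caption (illustrative)

print(g.serialize(format="turtle"))
```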
  2. Golub, K.; Tudhope, D.; Zeng, M.L.; Zumer, M.: Terminology registries for knowledge organization systems : functionality, use, and attributes (2014) 0.03
    Abstract
    Terminology registries (TRs) are a crucial element of the infrastructure required for resource discovery services, digital libraries, Linked Data, and semantic interoperability generally. They can make the content of knowledge organization systems (KOS) available both for human and machine access. The paper describes the attributes and functionality for a TR, based on a review of published literature, existing TRs, and a survey of experts. A domain model based on user tasks is constructed and a set of core metadata elements for use in TRs is proposed. Ideally, the TR should allow searching as well as browsing for a KOS, matching a user's search while also providing information about existing terminology services, accessible to both humans and machines. The issues surrounding metadata for KOS are also discussed, together with the rationale for different aspects and the importance of a core set of KOS metadata for future machine-based access; a possible core set of metadata elements is proposed. This is dealt with in terms of practical experience and in relation to the Dublin Core Application Profile.
    Date
    22. 8.2014 17:12:54
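    The following sketch is purely illustrative of what a terminology-registry record might hold; the element names are guesses made for the sake of example and are not the core metadata element set proposed in the paper.
```python
from dataclasses import dataclass, field

@dataclass
class KOSRegistryEntry:
    """One registry record describing a knowledge organization system (KOS)."""
    identifier: str                      # stable URI of the KOS
    title: str
    kos_type: str                        # e.g. "thesaurus", "classification scheme", "ontology"
    languages: list = field(default_factory=list)
    subject_domains: list = field(default_factory=list)
    access_endpoint: str = ""            # e.g. a SPARQL endpoint or download URL

entry = KOSRegistryEntry(
    identifier="http://example.org/registry/agrovoc",  # hypothetical registry URI
    title="AGROVOC Multilingual Thesaurus",
    kos_type="thesaurus",
    languages=["en", "fr", "es"],
    subject_domains=["agriculture"],
    access_endpoint="https://example.org/sparql",       # placeholder endpoint
)
print(f"{entry.title} ({entry.kos_type}), languages: {', '.join(entry.languages)}")
```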
  3. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2014) 0.03
    Abstract
    This article reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The article discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the Dewey Decimal Classification [DDC] (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  4. Salaba, A.; Zeng, M.L.: Extending the "Explore" user task beyond subject authority data into the linked data sphere (2014) 0.03
    Abstract
    "Explore" is a user task introduced in the Functional Requirements for Subject Authority Data (FRSAD) final report. Through various case scenarios, the authors discuss how structured data, presented based on Linked Data principles and using knowledge organisation systems (KOS) as the backbone, extend the explore task within and beyond subject authority data.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
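    A toy sketch of the "explore" idea discussed above, assuming a tiny in-memory graph with invented concept names and relation labels: starting from one subject concept, typed links (broader, related, and a made-up cross-dataset link) are followed to surface neighbouring entry points for the user.
```python
from collections import deque

# hypothetical mini knowledge graph: concept -> {relation: [targets]}
GRAPH = {
    "Classification": {"broader": ["Knowledge organization"],
                       "related": ["Thesauri"],
                       "sameTopicAs": ["dbpedia:Library_classification"]},
    "Thesauri":       {"broader": ["Knowledge organization"]},
    "Knowledge organization": {"narrower": ["Classification", "Thesauri"]},
}

def explore(start: str, max_hops: int = 2):
    """Breadth-first walk returning (concept, relation, neighbour) triples."""
    seen, queue, found = {start}, deque([(start, 0)]), []
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue
        for relation, targets in GRAPH.get(node, {}).items():
            for target in targets:
                found.append((node, relation, target))
                if target not in seen:
                    seen.add(target)
                    queue.append((target, hops + 1))
    return found

for triple in explore("Classification"):
    print(triple)
```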
  5. Zhang, J.; Zeng, M.L.: A new similarity measure for subject hierarchical structures (2014) 0.03
    Abstract
    Purpose - The purpose of this paper is to introduce a new similarity method to gauge the differences between two subject hierarchical structures. Design/methodology/approach - In the proposed similarity measure, nodes on two hierarchical structures are projected onto a two-dimensional space, respectively, and both structural similarity and subject similarity of nodes are considered in the similarity between the two hierarchical structures. The extent to which structural similarity contributes to the overall similarity can be controlled by adjusting a parameter. An experiment was conducted to evaluate the soundness of the measure. Eight experts whose research interests were information retrieval and information organization participated in the study. Results from the new measure were compared with results from the experts. Findings - The evaluation shows strong correlations between the results from the new method and the results from the experts. It suggests that the similarity method achieved satisfactory results. Practical implications - Hierarchical structures that are found in subject directories, taxonomies, classification systems, and other classificatory structures play an extremely important role in information organization and information representation. Measuring the similarity between two subject hierarchical structures allows an accurate overarching understanding of the degree to which the two hierarchical structures are similar. Originality/value - Both structural similarity and subject similarity of nodes were considered in the proposed similarity method, and the extent to which structural similarity contributes to the overall similarity can be adjusted. In addition, a new evaluation method for hierarchical structure similarity was presented.
    Date
    8. 4.2015 16:22:13
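    The sketch below is a loose approximation of the approach described in the abstract, not the authors' actual formula: the node coordinates (depth and sibling index), the structural-similarity function, and the example trees are all assumptions made for illustration, with a parameter alpha weighting structural against subject similarity.
```python
import math

def positions(children, root):
    """Project nodes of a hierarchy onto 2-D points: (depth, sibling index)."""
    pos, stack = {root: (0, 0)}, [(root, 0)]
    while stack:
        node, depth = stack.pop()
        for i, child in enumerate(children.get(node, [])):
            pos[child] = (depth + 1, i)
            stack.append((child, depth + 1))
    return pos

def similarity(tree_a, tree_b, root_a, root_b, alpha=0.5):
    pos_a, pos_b = positions(tree_a, root_a), positions(tree_b, root_b)
    shared = set(pos_a) & set(pos_b)
    # subject similarity: Jaccard overlap of node labels
    subject = len(shared) / len(set(pos_a) | set(pos_b))
    if not shared:
        return (1 - alpha) * subject
    # structural similarity: average closeness of shared nodes' 2-D positions
    struct = sum(1 / (1 + math.dist(pos_a[n], pos_b[n])) for n in shared) / len(shared)
    return alpha * struct + (1 - alpha) * subject

a = {"root": ["Arts", "Science"], "Science": ["Physics", "Chemistry"]}
b = {"root": ["Science", "Arts"], "Science": ["Physics"]}
print(round(similarity(a, b, "root", "root", alpha=0.4), 3))
```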
  6. Gracy, K.F.; Zeng, M.L.; Skirvin, L.: Exploring methods to improve access to music resources by aligning library data with Linked Data : a report of methodologies and preliminary findings (2013) 0.02
    Abstract
    As a part of a research project aiming to connect library data to the unfamiliar data sets available in the Linked Data (LD) community's CKAN Data Hub (thedatahub.org), this project collected, analyzed, and mapped properties used in describing and accessing music recordings, scores, and music-related information used by selected music LD data sets, library catalogs, and various digital collections created by libraries and other cultural institutions. This article reviews current efforts to connect music data through the Semantic Web, with an emphasis on the Music Ontology (MO) and ontology alignment approaches; it also presents a framework for understanding the life cycle of a musical work, focusing on the central activities of composition, performance, and use. The project studied metadata structures and properties of 11 music-related LD data sets and mapped them to the descriptions commonly used in the library cataloging records for sound recordings and musical scores (including MARC records and their extended schema.org markup), and records from 20 collections of digitized music recordings and scores (featuring a variety of metadata structures). The analysis resulted in a set of crosswalks and a unified crosswalk that aligns these properties. The paper reports on detailed methodologies used and discusses research findings and issues. Topics of particular concern include (a) the challenges of mapping between the overgeneralized descriptions found in library data and the specialized, music-oriented properties present in the LD data sets; (b) the hidden information and access points in library data; and (c) the potential benefits of enriching library data through the mapping of properties found in library catalogs to similar properties used by LD data sets.
    Date
    28.10.2013 17:22:17
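    An illustrative fragment of such a property crosswalk; the vocabulary terms (Dublin Core, Music Ontology, schema.org, MARC) are real, but these particular alignments are examples for the sake of the sketch, not the crosswalk published by the project.
```python
# linked-data property -> library-side equivalents (illustrative alignments only)
CROSSWALK = {
    "dc:title":     {"marc": "245 $a",             "schema_org": "schema:name"},
    "dc:creator":   {"marc": "100 $a",             "schema_org": "schema:creator"},
    "dc:date":      {"marc": "264 $c",             "schema_org": "schema:datePublished"},
    "mo:performer": {"marc": "700 (added entry)",  "schema_org": "schema:byArtist"},
}

def align(ld_property: str) -> dict:
    """Look up the library-side equivalents of a linked-data property."""
    return CROSSWALK.get(ld_property, {})

print(align("mo:performer"))
```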
  7. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008) 0.02
    Abstract
    In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collective effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC) 4th edition and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main-classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). Major challenges of encoding this large vocabulary come from its integrated structure. CCT is a result of the combination of two structures (illustrated in Figure 1): a thesaurus that uses the ISO 2788 standardized structure and a classification scheme that is basically enumerative, but provides some flexibility for several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by differences in granularity between the two original schemes and their presentation with various levels of SKOS elements, as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports on the progress, shares sample SKOS entries, and summarizes problems identified during the SKOS encoding process. Although OWL Lite and OWL Full provide richer expressiveness, the cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
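    A minimal, invented sample of the kind of SKOS encoding the poster describes: one classification class and one thesaurus term, each modeled as a skos:Concept in its own scheme and linked by a SKOS mapping property to represent the class-term mapping. The URIs, notation, and labels below are placeholders, not actual CCT data.
```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

CCT = Namespace("http://example.org/cct/")  # hypothetical namespace

g = Graph()
g.bind("skos", SKOS)

clc_class = CCT["class/TP391"]                  # classification side
term      = CCT["term/pattern-recognition"]     # thesaurus side

g.add((clc_class, RDF.type, SKOS.Concept))
g.add((clc_class, SKOS.notation, Literal("TP391")))
g.add((clc_class, SKOS.prefLabel, Literal("信息处理", lang="zh")))   # illustrative caption
g.add((clc_class, SKOS.inScheme, CCT["classificationScheme"]))

g.add((term, RDF.type, SKOS.Concept))
g.add((term, SKOS.prefLabel, Literal("模式识别", lang="zh")))        # illustrative term
g.add((term, SKOS.inScheme, CCT["thesaurusScheme"]))

# mapping between the two component structures of CCT
g.add((clc_class, SKOS.relatedMatch, term))

print(g.serialize(format="turtle"))
```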
  8. Zeng, M.L.; Panzer, M.; Salaba, A.: Expressing classification schemes with OWL 2 Web Ontology Language : exploring issues and opportunities based on experiments using OWL 2 for three classification schemes 0.01
    Abstract
    Based on research on three general classification schemes, this paper discusses issues encountered when expressing classification schemes in SKOS and explores opportunities for resolving major issues using the OWL 2 Web Ontology Language.
  9. Panzer, M.; Zeng, M.L.: Modeling classification systems in SKOS : Some challenges and best-practice (2009) 0.01
    Abstract
    Representing classification systems on the web for publication and exchange continues to be a challenge within the SKOS framework. This paper focuses on the differences between classification schemes and other families of KOS (knowledge organization systems) that make it difficult to express classifications without sacrificing a large amount of their semantic richness. Issues resulting from the specific set of relationships between classes and topics that defines the basic nature of any classification system are discussed. Where possible, different solutions within the frameworks of SKOS and OWL are proposed and examined.
    Source
    Semantic Interoperability for Linked Data, proc. DC2009: International Conference on Dublin Core and Metadata Applications, Seoul, Korea, October 12-17, 2009
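    One concrete instance of the class-versus-topic issue raised in the paper, with invented captions and notations: a caption such as "Dictionaries" identifies a topic only together with its hierarchical context, so a SKOS-style preferred label alone can be ambiguous. The sketch derives a context-qualified display label by walking the broader chain.
```python
# invented example data: notation -> caption, and notation -> broader notation
CAPTIONS = {
    "510": "Mathematics",
    "510.3": "Dictionaries",   # means "dictionaries of mathematics"
    "540": "Chemistry",
    "540.3": "Dictionaries",   # means "dictionaries of chemistry"
}
BROADER = {"510.3": "510", "540.3": "540"}

def qualified_caption(notation: str) -> str:
    """Build a display label that carries the hierarchical context of a class."""
    parts, current = [], notation
    while current is not None:
        parts.append(CAPTIONS[current])
        current = BROADER.get(current)
    return " -- ".join(reversed(parts))   # e.g. "Mathematics -- Dictionaries"

for n in ("510.3", "540.3"):
    print(n, qualified_caption(n))
```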
  10. Chan, L.M.; Lin, X.; Zeng, M.L.: Structural and multilingual approaches to subject access on the Web (2000) 0.01
  11. Salaba, A.; Zeng, M.L.; Zumer, M.: Functional Requirements for Subject Authority Records (2006) 0.01
    Abstract
    Continuing the tradition set by the FRBR model, a new IFLA working group was formed to examine the functional requirements for subject authority records (FRSAR). The focus of the FRSAR Working Group is on the user tasks and functional requirements of authority records for the Group 3 entities as defined by FRBR. This paper presents the Working Group's terms of reference and reports on initial activities and subject authority issues discussed.
  12. Zeng, M.L.; Fan, W.: SKOS and its application in transferring traditional thesauri into networked knowledge organization systems (2008) 0.01
    Abstract
    In remembrance of Magda Heiner-Freiling, who dedicated her professional efforts to promoting the sharing of subject access among the world's libraries, we sincerely wish to add our contribution to the endeavor she started and dreamed of finishing by writing this paper in Chinese, introducing SKOS and discussing its applications in transferring the largest controlled vocabulary in China, the Chinese Classified Thesaurus (CCT), into a SKOS-based knowledge organization system (KOS). The paper discusses the conceptual models of concept-based and term-based systems, the conversion solutions for CCT, and the potential use of a KOS registry built on SKOS and other Web-based protocols and technologies.
    Source
    New perspectives on subject indexing and classification: essays in honour of Magda Heiner-Freiling. Ed.: K. Knull-Schlomann et al.
  13. Chen, S.-j.; Zeng, M.L.; Chen, H.-h.: Alignment of conceptual structures in controlled vocabularies in the domain of Chinese art : a discussion of issues and patterns (2012) 0.01
    Abstract
    Based on our recent sub-project of the Chinese AAT-Taiwan Project, this paper reports issues regarding the alignment of controlled vocabularies in the domain of Chinese art. The conceptual structures of the concepts for Chinese art in the National Palace Museum (NPM) Vocabularies and the Art & Architecture Thesaurus (AAT) are studied, and patterns are identified in the effort to achieve semantic interoperability. The findings presented in the paper are meaningful to research on the semantic interoperability of multilingual KOS, especially when dealing with culture-related concepts that cannot be exactly aligned across vocabularies due to discrepancies in their conceptual structures.
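    Purely illustrative of the alignment patterns discussed: the concept labels and the choice of SKOS mapping property in each row are invented examples, not alignments taken from the NPM-AAT project.
```python
ALIGNMENTS = [
    # (NPM vocabulary concept, AAT concept, SKOS mapping property) -- all illustrative
    ("blue-and-white porcelain", "porcelain (material)", "skos:broadMatch"),  # AAT concept is broader
    ("oracle bone script",       "oracle bone script",   "skos:exactMatch"),  # structures agree
    ("literati painting",        "scholar painting",     "skos:closeMatch"),  # near, not identical
]

for npm_concept, aat_concept, mapping in ALIGNMENTS:
    print(f"NPM:{npm_concept!r:32} --{mapping}--> AAT:{aat_concept!r}")
```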
  14. Zeng, M.L.; Chan, L.M.: Semantic interoperability (2009) 0.01
    Abstract
    This entry discusses the importance of semantic interoperability in the networked environment, introduces various approaches contributing to semantic interoperability, and summarizes different methodologies used in current projects that are focused on achieving semantic interoperability. It is intended to inform readers about the fundamentals and mechanisms that have been experimented with, or implemented, that strive to ensure and achieve semantic interoperability in the current networked environment.
  15. Zeng, M.L.: Interoperability (2019) 0.01
    Abstract
    Interoperability refers to the ability of two or more systems or components to exchange information and to use the information that has been exchanged. This article presents the major viewpoints of interoperability, with the focus on semantic interoperability. It discusses the approaches to achieving interoperability as demonstrated in standards and best practices, projects, and products in the broad domain of knowledge organization.
  16. Zeng, M.L.; Gracy, K.F.; Zumer, M.: Using a semantic analysis tool to generate subject access points : a study using Panofsky's theory and two research samples (2014) 0.01
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  17. Chan, L.M.; Zeng, M.L.: Metadata interoperability and standardization - a study of methodology, part II : achieving interoperability at the record and repository levels (2006) 0.01
    Abstract
    This is the second part of an analysis of the methods that have been used to achieve or improve interoperability among metadata schemas and their applications in order to facilitate the conversion and exchange of metadata and to enable cross-domain metadata harvesting and federated searches. From a methodological point of view, implementing interoperability may be considered at different levels of operation: schema level (discussed in Part I of the article), record level (discussed in Part II of the article), and repository level (also discussed in Part II). The results of efforts to improve interoperability may be observed from different perspectives as well, including element-based and value-based approaches. As discussed in Part I of this study, the results of efforts to improve interoperability can be observed at different levels: 1. Schema level - Efforts are focused on the elements of the schemas, being independent of any applications. The results usually appear as derived element sets or encoded schemas, crosswalks, application profiles, and element registries. 2. Record level - Efforts are intended to integrate the metadata records through the mapping of the elements according to the semantic meanings of these elements. Common results include converted records and new records resulting from combining values of existing records. 3. Repository level - With harvested or integrated records from varying sources, efforts at this level focus on mapping value strings associated with particular elements (e.g., terms associated with subject or format elements). The results enable cross-collection searching. In the following sections, we will continue to analyze interoperability efforts and methodologies, focusing on the record level and the repository level. It should be noted that the models to be discussed in this article are not always mutually exclusive. Sometimes, within a particular project, more than one method may be used.
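    A toy record-level conversion in the spirit described above: an element crosswalk maps Dublin Core-like source elements to target fields, and values are carried over accordingly. The target field names are invented for illustration.
```python
CROSSWALK = {          # source element -> target element (illustrative)
    "dc:title":   "title",
    "dc:creator": "author",
    "dc:date":    "year",
    "dc:subject": "topics",
}

def convert(record: dict) -> dict:
    """Map a source metadata record into the target schema via the crosswalk."""
    target: dict = {}
    for src_element, values in record.items():
        tgt_element = CROSSWALK.get(src_element)
        if tgt_element is None:
            continue                      # element has no equivalent; drop or log it
        target.setdefault(tgt_element, []).extend(values)
    return target

source_record = {
    "dc:title":   ["Metadata interoperability and standardization"],
    "dc:creator": ["Chan, L.M.", "Zeng, M.L."],
    "dc:subject": ["Metadata", "Interoperability"],
}
print(convert(source_record))
```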
  18. Zeng, M.L.; Sula, C.A.; Gracy, K.F.; Hyvönen, E.; Alves Lima, V.M.: JASIST special issue on digital humanities (DH) : guest editorial (2022) 0.01
    Abstract
    More than 15 years ago, A Companion to Digital Humanities marked out the area of digital humanities (DH) "as a discipline in its own right" (Schreibman et al., 2004, p. xxiii). In the years that followed, ample evidence has accumulated that the DH domain, formed by the intersection of humanities disciplines and digital information technology, has undergone remarkable expansion. This growth is reflected in A New Companion to Digital Humanities (Schreibman et al., 2016). The extensively revised contents of the second edition were contributed by a global team of authors who are pioneers of innovative research in the field. Over this formative period, DH has become a widely recognized, impactful mode of scholarship and an institutional unit for collaborative, transdisciplinary, and computationally engaged research, teaching, and publication (Burdick et al., 2012; Svensson, 2010; Van Ruyskensvelde, 2014). The field of DH has advanced tremendously over the last decade and continues to expand. Meanwhile, competing definitions and approaches of DH scholars continue to spark debate. "Complexity" was a theme of the DH2019 international conference, as it demonstrates the multifaceted connections within DH scholarship today (Alliance of Digital Humanities Organizations, 2019). Yet, while it is often assumed that DH is in flux and not particularly fixed as an institutional or intellectual construct, there are also obvious touchstones within the DH field, most visibly in the relationship between traditional humanities disciplines and technological infrastructures. Thus, it is still meaningful to "bring together the humanistic and the digital through embracing a non-territorial and liminal zone" (Svensson, 2016, p. 477). This is the focus of this JASIST special issue, which mirrors the increasing attention on DH worldwide.
    Series
    JASIST special issue on digital humanities (DH)
  19. Zeng, M.L.: Knowledge Organization Systems (KOS) (2008) 0.01
    Abstract
    Knowledge organization systems (KOS) can be described based on their structures (from flat to multidimensional) and main functions. The latter include eliminating ambiguity, controlling synonyms or equivalents, establishing explicit semantic relationships such as hierarchical and associative relationships, and presenting both relationships and properties of concepts in the knowledge models. Examples of KOS include lists, authority files, gazetteers, synonym rings, taxonomies and classification schemes, thesauri, and ontologies. The term knowledge organization systems (KOS) is intended to encompass all types of schemes for organizing information and promoting knowledge management, such as classification schemes, gazetteers, lexical databases, taxonomies, thesauri, and ontologies (Hodge 2000). These systems model the underlying semantic structure of a domain and provide semantics, navigation, and translation through labels, definitions, typing, relationships, and properties for concepts (Hill et al. 2002, Koch and Tudhope 2004). Embodied as (Web) services, they facilitate resource discovery and retrieval by acting as semantic road maps, thereby making possible a common orientation for indexers and future users, either human or machine (Koch and Tudhope 2003, 2004).
  20. Zeng, M.L.; Zumer, M.: Introducing FRSAD and mapping it with SKOS and other models (2009) 0.01
    Abstract
    The Functional Requirements for Subject Authority Records (FRSAR) Working Group was formed in 2005 as the third IFLA working group of the FRBR family to address subject authority data issues and to investigate the direct and indirect uses of subject authority data by a wide range of users. This paper introduces the Functional Requirements for Subject Authority Data (FRSAD), the model developed by the FRSAR Working Group, and discusses it in the context of other related conceptual models defined in the specifications during recent years, including the British Standard BS8723-5: Structured vocabularies for information retrieval - Guide Part 5: Exchange formats and protocols for interoperability, W3C's SKOS Simple Knowledge Organization System Reference, and OWL Web Ontology Language Reference. These models enable the consideration of the functions of subject authority data and concept schemes at a higher level that is independent of any implementation, system, or specific context, while allowing us to focus on the semantics, structures, and interoperability of subject authority data.
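    A rough sketch of one way the thema/nomen distinction can be approximated with SKOS and SKOS-XL, in the spirit of the mappings the paper discusses (the URIs and the particular modeling choice are illustrative, not the paper's definitive mapping): the thema becomes a skos:Concept and each nomen an skosxl:Label carrying its own literal form.
```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

SKOSXL = Namespace("http://www.w3.org/2008/05/skos-xl#")
EX = Namespace("http://example.org/frsad/")          # hypothetical URIs

g = Graph()
g.bind("skos", SKOS)
g.bind("skosxl", SKOSXL)

thema = EX["thema/ontology"]                         # thema: the thing a work is about
nomens = (
    (EX["nomen/ontology-en"], "Ontology", "en"),
    (EX["nomen/ontology-zh"], "本体", "zh"),
)

g.add((thema, RDF.type, SKOS.Concept))
for nomen, form, lang in nomens:
    g.add((nomen, RDF.type, SKOSXL.Label))           # nomen: a name of that thema
    g.add((nomen, SKOSXL.literalForm, Literal(form, lang=lang)))
    g.add((thema, SKOSXL.prefLabel, nomen))

print(g.serialize(format="turtle"))
```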