Search (17 results, page 1 of 1)

  • Active filter: author_ss:"Chan, L.M."
  1. Chan, L.M.; Hodges, T.: Entering the millennium : a new century for LCSH (2000) 0.03
    0.033387464 = product of:
      0.08346866 = sum of:
        0.04098487 = weight(_text_:it in 5920) [ClassicSimilarity], result of:
          0.04098487 = score(doc=5920,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.27114958 = fieldWeight in 5920, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=5920)
        0.042483795 = weight(_text_:22 in 5920) [ClassicSimilarity], result of:
          0.042483795 = score(doc=5920,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.23214069 = fieldWeight in 5920, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=5920)
      0.4 = coord(2/5)
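    The indented breakdowns attached to each result are relevance-score explanations in the style of Lucene's ClassicSimilarity (TF-IDF). As a rough cross-check, the following Python sketch recomputes the score of result 1 from the constants printed above; it is an illustrative reconstruction of the arithmetic, not the search engine's own code.

      import math

      def term_weight(freq, idf, query_norm, field_norm):
          # One weight(_text_:...) node: score = queryWeight * fieldWeight,
          # where queryWeight = idf * queryNorm
          # and fieldWeight = sqrt(freq) * idf * fieldNorm.
          query_weight = idf * query_norm
          field_weight = math.sqrt(freq) * idf * field_norm
          return query_weight * field_weight

      # Constants copied from the explanation of result 1 (doc 5920).
      # The idf values are consistent with 1 + ln(maxDocs / (docFreq + 1)).
      QUERY_NORM = 0.052260913
      w_it = term_weight(freq=4.0, idf=2.892262, query_norm=QUERY_NORM, field_norm=0.046875)
      w_22 = term_weight(freq=2.0, idf=3.5018296, query_norm=QUERY_NORM, field_norm=0.046875)

      # coord(2/5): only two of the five query terms occur in this document.
      score = (w_it + w_22) * 2 / 5
      print(round(score, 9))  # ~0.033387464, i.e. the displayed 0.03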
    
    Abstract
    Library of Congress Subject Headings (LCSH), a system originally designed in the late nineteenth century as a tool for subject access to the Library's own collection, has become, in the course of the last century, the main subject retrieval tool in library catalogs throughout the United States and in many other countries. It is one of the largest non-specialized controlled vocabularies in the world. As LCSH enters a new century, it faces an information environment that has undergone vast changes from what had prevailed when LCSH began, or, indeed, from its state in the early days of the online age. In order to continue its mission, and to be useful in spheres outside library catalogs as well, LCSH must adapt to this multifarious environment. One possible approach is to adopt a series of scalable and flexible syntax and application rules to meet the needs of different user communities.
    Date
    27. 5.2001 16:22:21
  2. Chan, L.M.; Hodges, T.L.: Library of Congress Classification (LCC) (2009) 0.03
    0.033387464 = product of:
      0.08346866 = sum of:
        0.04098487 = weight(_text_:it in 3842) [ClassicSimilarity], result of:
          0.04098487 = score(doc=3842,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.27114958 = fieldWeight in 3842, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=3842)
        0.042483795 = weight(_text_:22 in 3842) [ClassicSimilarity], result of:
          0.042483795 = score(doc=3842,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.23214069 = fieldWeight in 3842, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=3842)
      0.4 = coord(2/5)
    
    Abstract
    The Library of Congress Classification (LCC), originally designed for classifying the Library's own collection, is now used in a wide range of libraries, both in the United States and abroad. This entry recounts its history and development from its genesis to the present time, leading up to an explanation of LCC structure, tables, and notation. It then considers the system's potential for wider application in the online age, through speculation on using LCC as a tool for (a) partitioning large files; (b) generating domain-specific taxonomies; and (c) integrating classification and controlled subject terms for improved retrieval in the online public access catalog (OPAC) and on the Internet. Finally, analyzing both its strong and relatively weak features, it addresses the question of whether, in its current state, LCC is in all respects ready to play such roles.
    Date
    27. 8.2011 14:22:42
  3. Chan, L.M.; Mitchell, J.S.: Dewey Decimal Classification : principles and applications (2003) 0.02
    0.019825771 = product of:
      0.09912886 = sum of:
        0.09912886 = weight(_text_:22 in 3247) [ClassicSimilarity], result of:
          0.09912886 = score(doc=3247,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.5416616 = fieldWeight in 3247, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.109375 = fieldNorm(doc=3247)
      0.2 = coord(1/5)
    
    Object
    DDC-22
  4. Chan, L.M.: Library of Congress Subject Headings : principles and application (1995) 0.02
    0.016993519 = product of:
      0.08496759 = sum of:
        0.08496759 = weight(_text_:22 in 3985) [ClassicSimilarity], result of:
          0.08496759 = score(doc=3985,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.46428138 = fieldWeight in 3985, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=3985)
      0.2 = coord(1/5)
    
    Date
    25.11.2005 18:37:22
  5. Hodges, T.L.; Chan, L.M.: Subject cataloging principles and systems (2009) 0.01
    0.009563136 = product of:
      0.04781568 = sum of:
        0.04781568 = weight(_text_:it in 4698) [ClassicSimilarity], result of:
          0.04781568 = score(doc=4698,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.31634116 = fieldWeight in 4698, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4698)
      0.2 = coord(1/5)
    
    Abstract
    After an introduction that addresses the means people use to search for information, this entry articulates the principles underlying various subject access options, including both controlled vocabulary systems and classification. It begins with a brief history of subject access provisions, including an account of the impact of automation, and goes on to discuss in some detail the principles underlying American library practice with respect to subject access. It then briefly describes selected subject-access schemes (including both subject heading lists and classification systems) in terms of how they reflect the principles presented and how well they fulfill their stated functions.
  6. O'Neill, E.T.; Chan, L.M.; Childress, E.; Dean, R.; El-Hoshy, L.M.; Vizine-Goetz, D.: Form subdivisions : their identification and use in LCSH (2001) 0.01
    0.008496759 = product of:
      0.042483795 = sum of:
        0.042483795 = weight(_text_:22 in 2205) [ClassicSimilarity], result of:
          0.042483795 = score(doc=2205,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.23214069 = fieldWeight in 2205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=2205)
      0.2 = coord(1/5)
    
    Date
    10. 9.2000 17:38:22
  7. O'Neill, E.T.; Chan, L.M.: FAST - a new approach to controlled subject access (2008) 0.01
    0.008196974 = product of:
      0.04098487 = sum of:
        0.04098487 = weight(_text_:it in 2181) [ClassicSimilarity], result of:
          0.04098487 = score(doc=2181,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.27114958 = fieldWeight in 2181, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=2181)
      0.2 = coord(1/5)
    
    Abstract
    Recent trends, driven to a large extent by the rapid proliferation of digital resources, are forcing changes in bibliographic control to make it easier to use, understand, and apply subject data. Subject headings are no exception. The enormous volume and rapid growth of digital libraries and repositories and the emergence of numerous metadata schemes have spurred a reexamination of the way subject data are provided for such resources efficiently and effectively. To address this need, OCLC, in cooperation with the Library of Congress, has taken a new approach, called FAST (Faceted Application of Subject Terminology). FAST headings are based on the existing vocabulary in Library of Congress Subject Headings (LCSH), but are applied with a simpler syntax than required by Library of Congress application policies. Adapting the LCSH vocabulary in a simplified faceted syntax retains the rich vocabulary of LCSH while making it easier to understand, control, apply, and use.
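    To make the idea of a "simpler, faceted syntax" concrete, here is a minimal sketch of how a pre-coordinated LCSH-style string might be broken into FAST-like facets. The example heading and its facet assignments are invented for illustration and do not reproduce OCLC's actual conversion rules.

      # Hypothetical pre-coordinated heading string and a faceted breakdown of it.
      heading = "Japan--Politics and government--1945-1989--Periodicals"

      facets = {
          "topical":       ["Politics and government"],
          "geographic":    ["Japan"],
          "chronological": ["1945-1989"],
          "form":          ["Periodicals"],
      }

      for facet, values in facets.items():
          print(f"{facet}: {'; '.join(values)}")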
  8. Zeng, M.L.; Chan, L.M.: Semantic interoperability (2009) 0.01
    0.0077281813 = product of:
      0.038640905 = sum of:
        0.038640905 = weight(_text_:it in 3738) [ClassicSimilarity], result of:
          0.038640905 = score(doc=3738,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.25564227 = fieldWeight in 3738, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0625 = fieldNorm(doc=3738)
      0.2 = coord(1/5)
    
    Abstract
    This entry discusses the importance of semantic interoperability in the networked environment, introduces various approaches contributing to semantic interoperability, and summarizes different methodologies used in current projects that are focused on achieving semantic interoperability. It is intended to inform readers about the fundamentals and the mechanisms that have been experimented with, or implemented, in the effort to ensure and achieve semantic interoperability in the current networked environment.
  9. Chan, L.M.; Comaroni, J.P.; Satija, M.P.: Dewey Decimal Classification : a practical guide (1994) 0.01
    0.006762158 = product of:
      0.03381079 = sum of:
        0.03381079 = weight(_text_:it in 336) [ClassicSimilarity], result of:
          0.03381079 = score(doc=336,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.22368698 = fieldWeight in 336, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=336)
      0.2 = coord(1/5)
    
    Abstract
    An introduction to the methods of classifying and arranging library collections according to the DDC. It begins with a brief history of the DDC, followed by discussions of the methods of analyzing the subject content of documents to be classed and the proper procedures for assigning class numbers. Its essential aim is to explain the proper methods of applying the DDC schedules, of locating and assigning the appropriate class number, and of synthesizing a class number if need be. Examples and exercises are based on ed. 20.
  10. Chan, L.M.; Childress, E.; Dean, R.; O'Neill, E.T.; Vizine-Goetz, D.: ¬A faceted approach to subject data in the Dublin Core metadata record (2001) 0.01
    0.006762158 = product of:
      0.03381079 = sum of:
        0.03381079 = weight(_text_:it in 6109) [ClassicSimilarity], result of:
          0.03381079 = score(doc=6109,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.22368698 = fieldWeight in 6109, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6109)
      0.2 = coord(1/5)
    
    Abstract
    This article describes FAST, the Faceted Application of Subject Terminology, a project at OCLC to make Library of Congress Subject Headings easier to use in Dublin Core metadata by breaking out facets of space, time, and form. Work on FAST can be watched at its web site, http://www.miskatonic.org/library/, which has recent presentations and reports. It is interesting to see facets and Dublin Core combined, though both LCSH and FAST subject headings are beyond what most people making a small faceted classification would want or need.
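    As a sketch of how such faceted subject data might be carried in a Dublin Core record, the snippet below places the facets in separate elements; the element choices (dcterms:spatial and dcterms:temporal for space and time, dc:type for form) and the record values are assumptions made for this illustration, not taken from the article.

      # Hypothetical Dublin Core record with subject data broken out by facet.
      dc_record = {
          "dc:title":         "Example resource",
          "dc:subject":       ["Politics and government"],  # topical facet
          "dcterms:spatial":  ["Japan"],                     # space facet
          "dcterms:temporal": ["1945-1989"],                 # time facet
          "dc:type":          ["Periodicals"],               # form facet (assumed element choice)
      }

      for element, values in dc_record.items():
          print(element, "=", values)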
  11. Chan, L.M.: Classification present and future (1995) 0.01
    0.006762158 = product of:
      0.03381079 = sum of:
        0.03381079 = weight(_text_:it in 5560) [ClassicSimilarity], result of:
          0.03381079 = score(doc=5560,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.22368698 = fieldWeight in 5560, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5560)
      0.2 = coord(1/5)
    
    Abstract
    Suggests that recent developments in the way information is generated, packaged and accessed have broadened and changed the nature and application of classification in library and information networks. Examines the role of classification by posing the following questions: what, how and why do we classify? Within this context the expanding role of classification is examined with regard to how classification affects accessing, browsing, identifying, navigating, mapping and evaluating information, and how it is and may be used in collection and database management, controlled vocabulary construction and development, and research.
  12. Zeng, M.L.; Chan, L.M.: Trends and issues in establishing interoperability among knowledge organization systems (2004) 0.01
    0.005796136 = product of:
      0.028980678 = sum of:
        0.028980678 = weight(_text_:it in 2224) [ClassicSimilarity], result of:
          0.028980678 = score(doc=2224,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.19173169 = fieldWeight in 2224, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=2224)
      0.2 = coord(1/5)
    
    Abstract
    This report analyzes the methodologies used in establishing interoperability among knowledge organization systems (KOS) such as controlled vocabularies and classification schemes that present the organized interpretation of knowledge structures. The development and trends of KOS are discussed with reference to the online era and the Internet era. Selected current projects and activities addressing KOS interoperability issues are reviewed in terms of the languages and structures involved. The methodological analysis encompasses both conventional and new methods that have proven to be widely accepted, including derivation/modeling, translation/adaptation, satellite and leaf node linking, direct mapping, co-occurrence mapping, switching, linking through a temporary union list, and linking through a thesaurus server protocol. Methods used in link storage and management, as well as common issues regarding mapping and methodological options, are also presented. It is concluded that interoperability of KOS is an unavoidable issue and process in today's networked environment. There have been and will be many multilingual products and services, with many involving various structured systems. Results from recent efforts are encouraging.
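    As a minimal illustration of the "direct mapping" method listed above, the sketch below links terms of one vocabulary to their closest equivalents in another through a lookup table; both vocabularies and the mappings are invented for this example.

      # Invented one-to-one lookup table from vocabulary A to vocabulary B.
      a_to_b = {
          "Motion pictures": "Films",
          "Cookery":         "Cooking",
          "Automobiles":     "Cars",
      }

      def map_term(term_a):
          # Return the equivalent term in vocabulary B, or None if no mapping exists.
          return a_to_b.get(term_a)

      print(map_term("Cookery"))       # -> "Cooking"
      print(map_term("Folksonomies"))  # -> None: no established equivalence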
  13. Chan, L.M.; Vizine-Goetz, D.: Towards a computer-generated subject validation file : feasibility and usefulness (1998) 0.00
    0.004830113 = product of:
      0.024150565 = sum of:
        0.024150565 = weight(_text_:it in 1781) [ClassicSimilarity], result of:
          0.024150565 = score(doc=1781,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.15977642 = fieldWeight in 1781, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1781)
      0.2 = coord(1/5)
    
    Abstract
    Recognition by libraries of the need for improved efficiency and reliability in subject authority control in catalogues led to a study of the feasibility of automatically creating a subject heading validation file by scanning the OLUC. The premises were: that although the file would not be exhaustive, it would contain the majority of frequently used headings; and that the predicted level of accuracy in the file would be high. A sample file of Library of Congress assigned subject headings, from the OCLC Subject Headings Corrections database, was analyzed. Results showed that: the frequency of use varies inversely with the number of headings at a given rate of use; a small number of headings with high frequencies of use accounts for the majority of total use, while a large proportion shows very low frequency of use; topical headings account for two-thirds of assigned headings; and error and obsolescence rates are both low and in inverse relationship to the frequency of heading use. Concludes that an automatically generated subject heading validation file is feasible and could serve various purposes, including: verification of subject heading strings constructed by cataloguers; updating of subject headings in catalogue maintenance; and validation of subject headings during retrospective catalogue conversion.
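    The frequency analysis described above can be pictured with a small sketch: count how often each heading is assigned across a sample of records and see what share of total use the most frequent headings account for. The sample records below are invented; the study itself analyzed OCLC data.

      from collections import Counter

      # Invented sample: each inner list holds the subject headings assigned to one record.
      sample_records = [
          ["United States--History", "Education"],
          ["Education", "Libraries"],
          ["Education"],
          ["Libraries", "Rare heading"],
      ]

      counts = Counter(h for record in sample_records for h in record)
      total_use = sum(counts.values())

      top_share = sum(n for _, n in counts.most_common(2)) / total_use
      print(f"Top 2 headings cover {top_share:.0%} of total use")

      singletons = [h for h, n in counts.items() if n == 1]
      print(f"{len(singletons)} of {len(counts)} distinct headings were used only once")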
  14. O'Neill, E.T.; Childress, E.; Dean, R.; Kammerer, K.; Vizine-Goetz, D.; Chan, L.M.; El-Hoshy, L.: FAST: faceted application of subject terminology (2003) 0.00
    0.004830113 = product of:
      0.024150565 = sum of:
        0.024150565 = weight(_text_:it in 3816) [ClassicSimilarity], result of:
          0.024150565 = score(doc=3816,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.15977642 = fieldWeight in 3816, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3816)
      0.2 = coord(1/5)
    
    Abstract
    The Library of Congress Subject Headings schema (LCSH) is by far the most commonly used and widely accepted subject vocabulary for general application. It is the de facto universal controlled vocabulary and has been a model for developing subject heading systems by many countries. However, LCSH's complex syntax and rules for constructing headings restrict its application by requiring highly skilled personnel and limit the effectiveness of automated authority control. Recent trends, driven to a large extent by the rapid growth of the Web, are forcing changes in bibliographic control systems to make them easier to use, understand, and apply, and subject headings are no exception. The purpose of adapting the LCSH with a simplified syntax to create FAST is to retain the very rich vocabulary of LCSH while making the schema easier to understand, control, apply, and use. The schema maintains upward compatibility with LCSH, and any valid set of LC subject headings can be converted to FAST headings.
  15. Chan, L.M.; Zeng, M.L.: Metadata interoperability and standardization - a study of methodology, part I : achieving interoperability at the schema level (2006) 0.00
    0.004830113 = product of:
      0.024150565 = sum of:
        0.024150565 = weight(_text_:it in 1176) [ClassicSimilarity], result of:
          0.024150565 = score(doc=1176,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.15977642 = fieldWeight in 1176, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1176)
      0.2 = coord(1/5)
    
    Abstract
    The rapid growth of Internet resources and digital collections has been accompanied by a proliferation of metadata schemas, each of which has been designed based on the requirements of particular user communities, intended users, types of materials, subject domains, project needs, etc. Problems arise when building large digital libraries or repositories with metadata records that were prepared according to diverse schemas. This article (published in two parts) contains an analysis of the methods that have been used to achieve or improve interoperability among metadata schemas and applications, for the purposes of facilitating conversion and exchange of metadata and enabling cross-domain metadata harvesting and federated searches. From a methodological point of view, implementing interoperability may be considered at different levels of operation: schema level, record level, and repository level. Part I of the article intends to explain possible situations in which metadata schemas may be created or implemented, whether in individual projects or in integrated repositories. It also discusses approaches used at the schema level. Part II of the article will discuss metadata interoperability efforts at the record and repository levels.
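    A schema-level crosswalk of the kind mentioned above can be pictured as a simple element mapping. The Dublin Core-to-MARC pairs below are a simplified subset chosen for this sketch and should not be read as an authoritative crosswalk.

      # Illustrative (simplified) element mapping from Dublin Core to MARC.
      dc_to_marc = {
          "title":   "245 $a",
          "creator": "100 / 700",
          "subject": "650",
          "date":    "260 $c",
      }

      def convert(dc_record):
          # Re-key a simple Dublin Core record using the crosswalk above,
          # dropping elements the crosswalk does not cover.
          return {dc_to_marc[k]: v for k, v in dc_record.items() if k in dc_to_marc}

      print(convert({"title": "Entering the millennium", "creator": "Chan, L.M."}))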
  16. Chan, L.M.; Zeng, M.L.: Metadata interoperability and standardization - a study of methodology, part II : achieving interoperability at the record and repository levels (2006) 0.00
    0.0038640907 = product of:
      0.019320453 = sum of:
        0.019320453 = weight(_text_:it in 1177) [ClassicSimilarity], result of:
          0.019320453 = score(doc=1177,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.12782113 = fieldWeight in 1177, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.03125 = fieldNorm(doc=1177)
      0.2 = coord(1/5)
    
    Abstract
    This is the second part of an analysis of the methods that have been used to achieve or improve interoperability among metadata schemas and their applications, in order to facilitate the conversion and exchange of metadata and to enable cross-domain metadata harvesting and federated searches. From a methodological point of view, implementing interoperability may be considered at different levels of operation: schema level (discussed in Part I of the article), record level (discussed in Part II), and repository level (also discussed in Part II). The results of efforts to improve interoperability may be observed from different perspectives as well, including element-based and value-based approaches. As discussed in Part I of this study, the results of such efforts can be observed at different levels:
    1. Schema level - Efforts are focused on the elements of the schemas and are independent of any applications. The results usually appear as derived element sets or encoded schemas, crosswalks, application profiles, and element registries.
    2. Record level - Efforts are intended to integrate metadata records through the mapping of elements according to their semantic meanings. Common results include converted records and new records resulting from combining values of existing records.
    3. Repository level - With harvested or integrated records from varying sources, efforts at this level focus on mapping value strings associated with particular elements (e.g., terms associated with subject or format elements). The results enable cross-collection searching.
    In the following sections, we will continue to analyze interoperability efforts and methodologies, focusing on the record level and the repository level. It should be noted that the models discussed in this article are not always mutually exclusive; sometimes, within a particular project, more than one method may be used.
  17. Yi, K.; Chan, L.M.: Linking folksonomy to Library of Congress subject headings : an exploratory study (2009) 0.00
    0.0038640907 = product of:
      0.019320453 = sum of:
        0.019320453 = weight(_text_:it in 3616) [ClassicSimilarity], result of:
          0.019320453 = score(doc=3616,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.12782113 = fieldWeight in 3616, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.03125 = fieldNorm(doc=3616)
      0.2 = coord(1/5)
    
    Abstract
    Purpose - The purpose of this paper is to investigate the linking of a folksonomy (user vocabulary) and LCSH (controlled vocabulary) on the basis of word matching, for the potential use of LCSH in bringing order to folksonomies.
    Design/methodology/approach - A selected sample of a folksonomy from a popular collaborative tagging system, Delicious, was word-matched with LCSH. LCSH was transformed into a tree structure, called an LCSH tree, for the matching. A close examination was conducted of the characteristics of folksonomies, the overlap of folksonomies with LCSH, and the distribution of folksonomies over the LCSH tree.
    Findings - The experimental results showed that the total proportion of tags matched with LC subject headings constituted approximately two-thirds of all tags involved, with an additional 10 percent of the remaining tags having potential matches. A number of barriers to the linking, as well as two areas in which the matching could be improved, are identified and described. Three important tag distribution patterns over the LCSH tree were identified and supported: skewedness, multifacet, and Zipfian-pattern.
    Research limitations/implications - The results of the study can be adopted for the development of innovative methods of mapping between folksonomy and LCSH, which directly contributes to effective access and retrieval of tagged web resources and to the integration of multiple information repositories based on the two vocabularies.
    Practical implications - The linking of controlled vocabularies can be applied to enhance information retrieval capability within collaborative tagging systems as well as across various tagging-system repositories and bibliographic databases.
    Originality/value - This is among the frontier works that examine the potential of linking a folksonomy, extracted from a collaborative tagging system, to an authority-maintained subject heading system. It provides exploratory data to support further advanced mapping methods for linking the two vocabularies.
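    To give a flavor of the word matching used in the study design, here is a minimal sketch that normalizes user tags and checks them against a flat set of heading terms. Both the tags and the heading list are invented, and the study's actual matching against an LCSH tree structure was considerably more elaborate.

      import re

      # Invented heading terms (lowercased) and user tags.
      headings = {"web services", "programming", "photography", "folksonomies"}
      tags = ["Programming", "web-services", "photos", "folksonomy"]

      def normalize(tag):
          # Lowercase and replace punctuation with spaces before matching.
          return re.sub(r"[^a-z0-9 ]", " ", tag.lower()).strip()

      matched = [t for t in tags if normalize(t) in headings]
      unmatched = [t for t in tags if normalize(t) not in headings]

      print("matched:", matched)      # exact word matches
      print("unmatched:", unmatched)  # candidates for stemming or variant matching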