Search (9 results, page 1 of 1)

  • author_ss:"Binding, C."
  1. Khoo, M.J.; Ahn, J.-w.; Binding, C.; Jones, H.J.; Lin, X.; Massam, D.; Tudhope, D.: Augmenting Dublin Core digital library metadata with Dewey Decimal Classification (2015) 0.03
    0.027913915 = product of:
      0.05582783 = sum of:
        0.037581123 = weight(_text_:subject in 2320) [ClassicSimilarity], result of:
          0.037581123 = score(doc=2320,freq=4.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.22353725 = fieldWeight in 2320, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03125 = fieldNorm(doc=2320)
        0.018246707 = product of:
          0.036493413 = sum of:
            0.036493413 = weight(_text_:classification in 2320) [ClassicSimilarity], result of:
              0.036493413 = score(doc=2320,freq=6.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.24377833 = fieldWeight in 2320, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2320)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
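    The breakdown above is Lucene's explain output for ClassicSimilarity (TF-IDF) ranking: each term contributes queryWeight (idf x queryNorm) multiplied by fieldWeight (sqrt(termFreq) x idf x fieldNorm), and the sums are scaled by the coord factors. A minimal Python sketch reproducing the figures for this record, with all constants copied from the explanation (coord and boost handling simplified to this one query shape):

      import math

      def term_weight(freq, idf, query_norm, field_norm):
          # ClassicSimilarity: queryWeight * fieldWeight, with boost assumed 1.0
          query_weight = idf * query_norm
          field_weight = math.sqrt(freq) * idf * field_norm
          return query_weight * field_weight

      QUERY_NORM = 0.04700564  # copied from the explain output
      subject = term_weight(freq=4.0, idf=3.576596, query_norm=QUERY_NORM, field_norm=0.03125)
      classification = term_weight(freq=6.0, idf=3.1847067, query_norm=QUERY_NORM, field_norm=0.03125)

      # coord(1/2) on the inner clause, coord(2/4) on the outer query, as reported above.
      score = (subject + classification * (1 / 2)) * (2 / 4)
      print(subject, classification, score)  # ~0.0375811, ~0.0364934, ~0.0279139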
    
    Abstract
    Purpose - The purpose of this paper is to describe a new approach to a well-known problem for digital libraries: how to search across multiple unrelated libraries with a single query.
    Design/methodology/approach - The approach involves creating new Dewey Decimal Classification terms and numbers from existing Dublin Core records. In total, 263,550 records were harvested from three digital libraries. Weighted key terms were extracted from the title, description and subject fields of each record. Ranked DDC classes were automatically generated from these key terms by considering DDC hierarchies via a series of filtering and aggregation stages. A mean reciprocal ranking evaluation compared a sample of 49 generated classes against DDC classes created by a trained librarian for the same records.
    Findings - The best results combined weighted key terms from the title, description and subject fields. Performance declines with increased specificity of DDC level. The results compare favorably with similar studies.
    Research limitations/implications - The metadata harvest required manual intervention and the evaluation was resource intensive. Future research will look at evaluation methodologies that take account of issues of consistency and ecological validity.
    Practical implications - The method does not require training data and is easily scalable. The pipeline can be customized for individual use cases, for example, to enhance recall or precision.
    Social implications - The approach can provide centralized access to information from multiple domains currently provided by individual digital libraries.
    Originality/value - The approach addresses metadata normalization in the context of web resources. The automatic classification approach accounts for matches within hierarchies, aggregating lower-level matches to broader parents, and thus approximates the practices of a human cataloger.
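    Two quantitative ideas in the abstract - aggregating lower-level DDC matches up to broader parent classes and scoring the output with mean reciprocal rank - can be illustrated with a short sketch. This is not the paper's pipeline; the hierarchy, weights and class numbers below are invented for illustration:

      from collections import defaultdict

      def aggregate_to_parents(class_weights, parent_of):
          # Roll evidence for specific DDC classes up to their broader parents,
          # so matches scattered across siblings also support the parent class.
          totals = defaultdict(float, class_weights)
          for cls, weight in class_weights.items():
              node = parent_of.get(cls)
              while node is not None:
                  totals[node] += weight
                  node = parent_of.get(node)
          return dict(totals)

      def mean_reciprocal_rank(ranked_lists, gold_classes):
          # 1/rank of the librarian-assigned class per record, 0 if it never appears.
          reciprocal = [1.0 / (ranking.index(gold) + 1) if gold in ranking else 0.0
                        for ranking, gold in zip(ranked_lists, gold_classes)]
          return sum(reciprocal) / len(reciprocal)

      parent_of = {"025.4": "025", "025.3": "025", "025": "020"}   # toy DDC fragment
      weights = aggregate_to_parents({"025.4": 0.6, "025.3": 0.3}, parent_of)
      ranking = sorted(weights, key=weights.get, reverse=True)
      print(ranking, mean_reciprocal_rank([ranking], ["025.4"]))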
  2. Tudhope, D.; Binding, C.; Blocks, D.; Cunliffe, D.: Compound descriptors in context : a matching function for classifications and thesauri (2002) 0.01
    0.011744101 = product of:
      0.046976402 = sum of:
        0.046976402 = weight(_text_:subject in 3179) [ClassicSimilarity], result of:
          0.046976402 = score(doc=3179,freq=4.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.27942157 = fieldWeight in 3179, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3179)
      0.25 = coord(1/4)
    
    Abstract
    There are many advantages for Digital Libraries in indexing with classifications or thesauri, but a current disincentive is the lack of flexible retrieval tools that deal with compound descriptors. This paper discusses a matching function for compound descriptors, or multi-concept subject headings, that does not rely on exact matching but incorporates term expansion via thesaurus semantic relationships to produce ranked results that take account of missing and partially matching terms. The matching function is based on a measure of semantic closeness between terms, which has the potential to help with recall problems. The work reported is part of the ongoing FACET project in collaboration with the National Museum of Science and Industry and its collections database. The architecture of the prototype system and its interface are outlined. The matching problem for compound descriptors is reviewed and the FACET implementation described. Results are discussed from scenarios using the faceted Getty Art and Architecture Thesaurus. We argue that automatic traversal of thesaurus relationships can augment the user's browsing possibilities. The techniques can be applied both to unstructured multi-concept subject headings and potentially to more syntactically structured strings. The notion of a focus term is used by the matching function to model AAT modified descriptors (noun phrases). The relevance of the approach to precoordinated indexing and matching faceted strings is discussed.
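    The abstract's central idea - ranking compound descriptors by semantic closeness rather than exact matching - can be sketched briefly. This is not the FACET matching function itself; the traversal decay, the averaging and the thesaurus fragment below are assumptions made for illustration:

      from collections import deque

      def closeness(thesaurus, a, b, decay=0.5):
          # Semantic closeness via breadth-first traversal of thesaurus links:
          # 1.0 for identical terms, decaying with each relationship step.
          if a == b:
              return 1.0
          seen, queue = {a}, deque([(a, 1.0)])
          while queue:
              term, weight = queue.popleft()
              for neighbour in thesaurus.get(term, ()):
                  if neighbour == b:
                      return weight * decay
                  if neighbour not in seen:
                      seen.add(neighbour)
                      queue.append((neighbour, weight * decay))
          return 0.0

      def match_compound(query_terms, record_terms, thesaurus):
          # Average of each query term's best closeness to any record term,
          # so missing terms lower the score without zeroing it.
          best = [max(closeness(thesaurus, q, r) for r in record_terms) for q in query_terms]
          return sum(best) / len(best)

      thesaurus = {  # hypothetical fragment of broader/related links
          "casting": ["metal forming"],
          "metal forming": ["casting", "forging"],
          "forging": ["metal forming"],
      }
      print(match_compound(["casting", "steel"], ["forging"], thesaurus))  # partial match, not zero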
  3. Binding, C.; Tudhope, D.: Improving interoperability using vocabulary linked data (2015) 0.01
    0.008304333 = product of:
      0.033217333 = sum of:
        0.033217333 = weight(_text_:subject in 2205) [ClassicSimilarity], result of:
          0.033217333 = score(doc=2205,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.19758089 = fieldWeight in 2205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2205)
      0.25 = coord(1/4)
    
    Abstract
    The concept of Linked Data has been an emerging theme within the computing and digital heritage areas in recent years. The growth and scale of Linked Data has underlined the need for greater commonality in concept referencing, to avoid local redefinition and duplication of reference resources. Achieving domain-wide agreement on common vocabularies would be an unreasonable expectation; however, datasets often already have local vocabulary resources defined, and so the prospects for large-scale interoperability can be substantially improved by creating alignment links from these local vocabularies out to common external reference resources. The ARIADNE project is undertaking large-scale integration of archaeology dataset metadata records, to create a cross-searchable research repository resource. Key to enabling this cross-search will be the 'subject' metadata originating from multiple data providers, containing terms from multiple multilingual controlled vocabularies. This paper discusses various aspects of vocabulary mapping. Experience from the previous SENESCHAL project in the publication of controlled vocabularies as Linked Open Data is discussed, emphasizing the importance of unique URI identifiers for vocabulary concepts. Legacy indexing data needs to be aligned to these uniquely defined concepts, and examples of SENESCHAL data alignment work are discussed. A case study for the ARIADNE project presents work on mapping between vocabularies, based on the Getty Art and Architecture Thesaurus as a central hub and employing an interactive vocabulary mapping tool developed for the project, which generates SKOS mapping relationships in JSON and other formats. The potential use of such vocabulary mappings to assist cross-search over archaeological datasets from different countries is illustrated in a pilot experiment. The results demonstrate the enhanced opportunities for interoperability and cross-searching that the approach offers.
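    The mapping output the abstract mentions - SKOS mapping relationships between local vocabulary concepts and Getty AAT hub concepts - can be sketched as follows. This assumes the rdflib library; the JSON field names and all concept URIs are placeholders, not SENESCHAL or ARIADNE identifiers:

      from rdflib import Graph, URIRef
      from rdflib.namespace import SKOS

      # Hypothetical mapping records, as an alignment tool might emit them in JSON:
      # each links a local vocabulary concept URI to an AAT hub concept URI.
      mappings = [
          {"local": "http://example.org/vocab/monuments/123",
           "hub": "http://vocab.getty.edu/aat/300000000",
           "type": "closeMatch"},
      ]

      g = Graph()
      for m in mappings:
          predicate = SKOS.exactMatch if m["type"] == "exactMatch" else SKOS.closeMatch
          g.add((URIRef(m["local"]), predicate, URIRef(m["hub"])))

      print(g.serialize(format="turtle"))  # the alignment links as Linked Data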
  4. Binding, C.; Gnoli, C.; Tudhope, D.: Migrating a complex classification scheme to the semantic web : expressing the Integrative Levels Classification using SKOS RDF (2021) 0.01
    0.00806398 = product of:
      0.03225592 = sum of:
        0.03225592 = product of:
          0.06451184 = sum of:
            0.06451184 = weight(_text_:classification in 600) [ClassicSimilarity], result of:
              0.06451184 = score(doc=600,freq=12.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.43094325 = fieldWeight in 600, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=600)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The Integrative Levels Classification (ILC) is a comprehensive "freely faceted" knowledge organization system not previously expressed as SKOS (Simple Knowledge Organization System). This paper reports and reflects on work converting the ILC to SKOS representation.
    Design/methodology/approach - The design of the ILC representation and the various steps in the conversion to SKOS are described and located within the context of previous work considering the representation of complex classification schemes in SKOS. Various issues and trade-offs emerging from the conversion are discussed. The conversion implementation employed the STELETO transformation tool.
    Findings - The ILC conversion captures some of the ILC facet structure by a limited extension beyond the SKOS standard. SPARQL examples illustrate how this extension could be used to create faceted, compound descriptors when indexing or cataloguing. Basic query patterns are provided that might underpin search systems. Possible routes for reducing complexity are discussed.
    Originality/value - Complex classification schemes, such as the ILC, have features which are not straightforward to represent in SKOS and which extend beyond the functionality of the SKOS standard. The ILC's facet indicators are modelled as rdf:Property sub-hierarchies that accompany the SKOS RDF statements. The ILC's top-level fundamental facet relationships are modelled by extensions of the associative relationship - specialised sub-properties of skos:related. An approach for representing faceted compound descriptions in ILC and other faceted classification schemes is proposed.
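    The modelling approach the abstract describes - facet relationships declared as specialised sub-properties of skos:related, plus a basic query pattern over them - can be sketched with rdflib. The namespace, property name and concept identifiers below are placeholders, not the published ILC RDF:

      from rdflib import Graph, Namespace
      from rdflib.namespace import RDF, RDFS, SKOS

      ILC = Namespace("http://example.org/ilc/")  # placeholder namespace
      g = Graph()

      # A facet relationship modelled as an rdf:Property that specialises skos:related.
      g.add((ILC.facetOfAgent, RDF.type, RDF.Property))
      g.add((ILC.facetOfAgent, RDFS.subPropertyOf, SKOS.related))

      # Two concepts linked by the facet property (identifiers are illustrative).
      g.add((ILC.conceptA, RDF.type, SKOS.Concept))
      g.add((ILC.conceptB, RDF.type, SKOS.Concept))
      g.add((ILC.conceptA, ILC.facetOfAgent, ILC.conceptB))

      # Basic query pattern: concept pairs linked by any sub-property of skos:related.
      query = """
      SELECT ?s ?p ?o WHERE {
        ?p rdfs:subPropertyOf skos:related .
        ?s ?p ?o .
      }
      """
      for row in g.query(query, initNs={"rdfs": RDFS, "skos": SKOS}):
          print(row.s, row.p, row.o)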
  5. Binding, C.; Tudhope, D.: Terminology Web services (2010) 0.01
    0.006842515 = product of:
      0.02737006 = sum of:
        0.02737006 = product of:
          0.05474012 = sum of:
            0.05474012 = weight(_text_:classification in 4067) [ClassicSimilarity], result of:
              0.05474012 = score(doc=4067,freq=6.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.3656675 = fieldWeight in 4067, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4067)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Controlled terminologies such as classification schemes, name authorities, and thesauri have long been the domain of the library and information science community. Although historically there have been initiatives towards library-style classification of web resources, there remain significant problems with searching and quality judgement of online content. Terminology services can play a key role in opening up access to these valuable resources. By exposing controlled terminologies via a web service, organisations maintain data integrity and version control, whilst motivating external users to design innovative ways to present and utilise their data. We introduce terminology web services and review work in the area. We describe the approaches taken in establishing application programming interfaces (APIs) and discuss the comparative benefits of a dedicated terminology web service versus general-purpose programming languages. We discuss experiences at Glamorgan in creating terminology web services and associated client interface components, in particular for the archaeology domain in the STAR (Semantic Technologies for Archaeological Resources) Project.
    Content
    Part of: Papers from Classification at a Crossroads: Multiple Directions to Usability: International UDC Seminar 2009-Part 2
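    The kind of programmatic access the abstract of this record argues for can be sketched as a tiny HTTP terminology service. This is not the STAR project's API; it assumes the Flask library, and the vocabulary, routes and field names are invented for illustration:

      from flask import Flask, jsonify, request

      app = Flask(__name__)

      # In-memory stand-in for a controlled vocabulary.
      VOCAB = {
          "axes": {"prefLabel": "axes", "broader": ["cutting tools"]},
          "cutting tools": {"prefLabel": "cutting tools", "broader": ["tools"]},
      }

      @app.route("/concepts/<term>")
      def get_concept(term):
          # Return a single concept record, or 404 if the term is not in the vocabulary.
          concept = VOCAB.get(term)
          return (jsonify(concept), 200) if concept else (jsonify({"error": "not found"}), 404)

      @app.route("/search")
      def search():
          # Simple label search: /search?q=tool returns all matching concepts.
          q = request.args.get("q", "").lower()
          return jsonify([c for c in VOCAB.values() if q in c["prefLabel"].lower()])

      if __name__ == "__main__":
          app.run(port=8080)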
  6. Tudhope, D.; Binding, C.; Blocks, D.; Cunliffe, D.: FACET: thesaurus retrieval with semantic term expansion (2002) 0.01
    0.0066434667 = product of:
      0.026573867 = sum of:
        0.026573867 = weight(_text_:subject in 175) [ClassicSimilarity], result of:
          0.026573867 = score(doc=175,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.15806471 = fieldWeight in 175, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03125 = fieldNorm(doc=175)
      0.25 = coord(1/4)
    
    Abstract
    There are many advantages for Digital Libraries in indexing with classifications or thesauri, but a current disincentive is the lack of flexible retrieval tools that deal with compound descriptors. This demonstration of a research prototype illustrates a matching function for compound descriptors, or multi-concept subject headings, that does not rely on exact matching but incorporates term expansion via thesaurus semantic relationships to produce ranked results that take account of missing and partially matching terms. The matching function is based on a measure of semantic closeness between terms. The work is part of the EPSRC-funded FACET project in collaboration with the UK National Museum of Science and Industry (NMSI), which includes the National Railway Museum. An export of NMSI's Collections Database is used as the dataset for the research. The J. Paul Getty Trust's Art and Architecture Thesaurus (AAT) is the main thesaurus in the project. The AAT is a widely used thesaurus (over 120,000 terms). Descriptors are organised in 7 facets representing separate conceptual classes of terms. The FACET application is a multi-tiered architecture accessing a SQL Server database, with an OLE DB connection. The thesauri are stored as relational tables in the Server's database. However, a key component of the system is a parallel representation of the underlying semantic network as an in-memory structure of thesaurus concepts (corresponding to preferred terms). The structure models the hierarchical and associative interrelationships of thesaurus concepts via weighted poly-hierarchical links. Its primary purpose is real-time semantic expansion of query terms, achieved by a spreading activation semantic closeness algorithm. Queries with associated results are stored persistently in XML format. A Visual Basic interface combines a thesaurus browser and an initial term search facility that takes into account equivalence relationships. Terms are dragged to a direct manipulation Query Builder which maintains the facet structure.
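    The real-time expansion step the abstract describes - spreading activation over weighted thesaurus links - can be sketched briefly. This is not the FACET algorithm itself; the weights, threshold and link data below are assumptions for illustration:

      def expand(concept, links, threshold=0.2):
          # Propagate activation from a query concept over weighted thesaurus links,
          # keeping every concept whose activation stays above the threshold.
          activation = {concept: 1.0}
          frontier = [concept]
          while frontier:
              next_frontier = []
              for node in frontier:
                  for neighbour, weight in links.get(node, []):
                      value = activation[node] * weight
                      if value > threshold and value > activation.get(neighbour, 0.0):
                          activation[neighbour] = value
                          next_frontier.append(neighbour)
              frontier = next_frontier
          return activation

      links = {  # hypothetical weighted links: hierarchical stronger than associative
          "hand tools": [("tools", 0.9), ("axes", 0.9), ("woodworking", 0.4)],
          "axes": [("hand tools", 0.9), ("felling axes", 0.9)],
      }
      print(expand("hand tools", links))
      # e.g. {'hand tools': 1.0, 'tools': 0.9, 'axes': 0.9, 'woodworking': 0.4, 'felling axes': 0.81}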
  7. Tudhope, D.; Binding, C.: Toward terminology services : experiences with a pilot Web service thesaurus browser (2006) 0.01
    0.0066434667 = product of:
      0.026573867 = sum of:
        0.026573867 = weight(_text_:subject in 1955) [ClassicSimilarity], result of:
          0.026573867 = score(doc=1955,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.15806471 = fieldWeight in 1955, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03125 = fieldNorm(doc=1955)
      0.25 = coord(1/4)
    
    Abstract
    Dublin Core recommends controlled terminology for the subject of a resource. Knowledge organization systems (KOS), such as classifications, gazetteers, taxonomies and thesauri, provide controlled vocabularies that organize and structure concepts for indexing, classifying, browsing and search. For example, a thesaurus employs a set of standard semantic relationships (ISO 2788, ISO 5964), and major thesauri have a large entry vocabulary of terms considered equivalent for retrieval purposes. Many KOS have been made available for Web-based access. However, they are often not fully integrated into indexing and search systems and the full potential for networked and programmatic access remains untapped. The lack of standardized access and interchange formats impedes wider use of KOS resources. We developed a Web demonstrator (www.comp.glam.ac.uk/~FACET/webdemo/) for the FACET project (www.comp.glam.ac.uk/~facet/facetproject.html) that explored thesaurus-based query expansion with the Getty Art and Architecture Thesaurus. The Web demonstrator was implemented via Active Server Pages (ASP) with server-side scripting and compiled server-side components for database access, and cascading style sheets for presentation. The browser-based interactive interface permits dynamic control of query term expansion. However, being based on a custom thesaurus representation and API, the techniques cannot be applied directly to thesauri in other formats on the Web. General programmatic access requires commonly agreed protocols, for example, building on Web and Grid services. The development of common KOS representation formats and the development of service protocols are closely linked. Linda Hill and colleagues argued in 2002 for a general KOS service protocol from which protocols for specific types of KOS can be derived. Thus, in the future, a combination of thesaurus and query protocols might permit a thesaurus to be used with a choice of search tools on various kinds of databases. Service-oriented architectures bring an opportunity for moving toward a clearer separation of interface components from the underlying data sources. In our view, basing distributed protocol services on the atomic elements of thesaurus data structures and relationships is not necessarily the best approach, because client operations that require multiple client-server calls would carry too much overhead. This would limit the interfaces that could be offered by applications following such a protocol. Advanced interactive interfaces require protocols that group primitive thesaurus data elements (via their relationships) into composites to achieve reasonable response times.
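    The composite-versus-atomic argument at the end of the abstract can be made concrete with a small sketch: one composite call returns a concept together with its immediate neighbourhood, instead of forcing the client to fetch each relationship separately. The data and field names are invented:

      import json

      THESAURUS = {
          "axes": {"broader": ["hand tools"], "narrower": ["felling axes"], "related": ["axe heads"]},
          "hand tools": {"broader": ["tools"], "narrower": ["axes"], "related": []},
      }

      def get_concept(term):
          # Atomic primitive: one concept record per call.
          return THESAURUS.get(term, {})

      def get_concept_composite(term, depth=1):
          # Composite operation: the concept plus its immediate neighbourhood,
          # assembled server-side so the client needs only a single round trip.
          record = {"term": term, **get_concept(term)}
          if depth > 0:
              record["neighbourhood"] = {
                  t: get_concept_composite(t, depth - 1)
                  for rel in ("broader", "narrower", "related")
                  for t in record.get(rel, [])
              }
          return record

      print(json.dumps(get_concept_composite("axes"), indent=2))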
  8. Tudhope, D.; Binding, C.: Faceted thesauri (2008) 0.01
    0.00526737 = product of:
      0.02106948 = sum of:
        0.02106948 = product of:
          0.04213896 = sum of:
            0.04213896 = weight(_text_:classification in 1855) [ClassicSimilarity], result of:
              0.04213896 = score(doc=1855,freq=2.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.28149095 = fieldWeight in 1855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1855)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The basic elements of faceted thesauri are described, together with a review of their origins and some prominent examples. Their use in browsing and searching applications is discussed. Faceted thesauri are distinguished from faceted classification schemes, while acknowledging the close similarities. The paper concludes by comparing faceted thesauri and related knowledge organization systems to ontologies and discussing appropriate areas of use.
  9. Tudhope, D.; Binding, C.; Blocks, D.; Cunliffe, D.: Representation and retrieval in faceted systems (2003) 0.00
    0.0032921063 = product of:
      0.013168425 = sum of:
        0.013168425 = product of:
          0.02633685 = sum of:
            0.02633685 = weight(_text_:classification in 2703) [ClassicSimilarity], result of:
              0.02633685 = score(doc=2703,freq=2.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.17593184 = fieldWeight in 2703, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2703)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This paper discusses two inter-related themes: the retrieval potential of faceted thesauri and XML representations of fundamental facets. Initial findings are discussed from the ongoing 'FACET' project, in collaboration with the National Museum of Science and Industry. The work discussed seeks to take advantage of the structure afforded by faceted systems for multi-term queries and flexible matching, focusing in this paper on the Art and Architecture Thesaurus. A multi-term matching function yields ranked results with partial matches via semantic term expansion, based on a measure of distance over the semantic index space formed by thesaurus relationships. Our intention is to drive the system from general representations and a common query structure and interface. To this end, we are developing an XML representation based on work by the Classification Research Group on fundamental facets or categories. The XML representation maps categories to particular thesauri and hierarchies. The system interface, which is configured by the mapping, incorporates a thesaurus browser with navigation history together with a term search facility and drag-and-drop query builder.
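    The XML mapping from fundamental facets to particular thesauri and hierarchies, which the abstract says configures the interface, can be sketched as follows. The element names and the AAT hierarchy labels are illustrative; the FACET project's actual schema is not given in the abstract:

      import xml.etree.ElementTree as ET

      MAPPING_XML = """
      <facetMapping thesaurus="AAT">
        <category name="Agents">
          <hierarchy>People</hierarchy>
          <hierarchy>Organizations</hierarchy>
        </category>
        <category name="Activities">
          <hierarchy>Processes and Techniques</hierarchy>
        </category>
      </facetMapping>
      """

      def load_mapping(xml_text):
          # Parse the mapping into {fundamental category -> [thesaurus hierarchies]}.
          root = ET.fromstring(xml_text)
          return {cat.get("name"): [h.text for h in cat.findall("hierarchy")]
                  for cat in root.findall("category")}

      mapping = load_mapping(MAPPING_XML)
      print(mapping["Agents"])  # which AAT hierarchies feed the Agents facet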