Search (35 results, page 2 of 2)

  • author_ss:"Tudhope, D."
  1. Tudhope, D.; Binding, C.; Blocks, D.; Cunliffe, D.: Compound descriptors in context : a matching function for classifications and thesauri (2002) 0.00
    0.004624805 = product of:
      0.011562012 = sum of:
        0.0076151006 = weight(_text_:a in 3179) [ClassicSimilarity], result of:
          0.0076151006 = score(doc=3179,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14243183 = fieldWeight in 3179, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3179)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 3179) [ClassicSimilarity], result of:
              0.007893822 = score(doc=3179,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 3179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3179)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
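The score breakdown above follows Lucene's ClassicSimilarity formula (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), score = queryWeight * fieldWeight, scaled by the coord factors). A short sketch reproducing the numbers in the explain tree, with the constants copied from that output:

```python
# Reproducing the ClassicSimilarity explain tree above.
import math

def idf(doc_freq, max_docs):
    """Lucene ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1 + math.log(max_docs / (doc_freq + 1))

query_norm = 0.046368346            # copied from the explain output
idf_a = idf(37942, 44218)           # -> 1.153047 for the term "a"
query_weight = idf_a * query_norm   # -> 0.053464882

# fieldWeight = tf * idf * fieldNorm, with tf = sqrt(termFreq)
field_weight = math.sqrt(10) * idf_a * 0.0390625   # -> 0.14243183

weight_a = query_weight * field_weight              # -> 0.0076151006
weight_information = 0.007893822 * 0.5              # inner coord(1/2)
score = (weight_a + weight_information) * 0.4       # outer coord(2/5)
```

Multiplying out gives 0.004624805, the document score shown at the head of the tree. The equivalent breakdowns for the remaining results follow the same formula and are omitted below.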
    Abstract
    There are many advantages for Digital Libraries in indexing with classifications or thesauri, but a current disincentive is the lack of flexible retrieval tools that deal with compound descriptors. This paper discusses a matching function for compound descriptors, or multi-concept subject headings, that does not rely on exact matching but incorporates term expansion via thesaurus semantic relationships to produce ranked results that take account of missing and partially matching terms. The matching function is based on a measure of semantic closeness between terms, which has the potential to help with recall problems. The work reported is part of the ongoing FACET project in collaboration with the National Museum of Science and Industry and its collections database. The architecture of the prototype system and its interface are outlined. The matching problem for compound descriptors is reviewed and the FACET implementation described. Results are discussed from scenarios using the faceted Getty Art and Architecture Thesaurus. We argue that automatic traversal of thesaurus relationships can augment the user's browsing possibilities. The techniques can be applied both to unstructured multi-concept subject headings and potentially to more syntactically structured strings. The notion of a focus term is used by the matching function to model AAT modified descriptors (noun phrases). The relevance of the approach to precoordinated indexing and matching faceted strings is discussed.
    Theme
    Information Gateway
    Type
    a
  2. Khoo, M.J.; Ahn, J.-w.; Binding, C.; Jones, H.J.; Lin, X.; Massam, D.; Tudhope, D.: Augmenting Dublin Core digital library metadata with Dewey Decimal Classification (2015) 0.00
    Abstract
    Purpose - The purpose of this paper is to describe a new approach to a well-known problem for digital libraries, how to search across multiple unrelated libraries with a single query. Design/methodology/approach - The approach involves creating new Dewey Decimal Classification terms and numbers from existing Dublin Core records. In total, 263,550 records were harvested from three digital libraries. Weighted key terms were extracted from the title, description and subject fields of each record. Ranked DDC classes were automatically generated from these key terms by considering DDC hierarchies via a series of filtering and aggregation stages. A mean reciprocal ranking evaluation compared a sample of 49 generated classes against DDC classes created by a trained librarian for the same records. Findings - The best results combined weighted key terms from the title, description and subject fields. Performance declines with increased specificity of DDC level. The results compare favorably with similar studies. Research limitations/implications - The metadata harvest required manual intervention and the evaluation was resource intensive. Future research will look at evaluation methodologies that take account of issues of consistency and ecological validity. Practical implications - The method does not require training data and is easily scalable. The pipeline can be customized for individual use cases, for example, to enhance recall or precision. Social implications - The approach can provide centralized access to information from multiple domains currently provided by individual digital libraries. Originality/value - The approach addresses metadata normalization in the context of web resources. The automatic classification approach accounts for matches within hierarchies, aggregating lower level matches to broader parents and thus approximates the practices of a human cataloger.
    Type
    a
  3. Tudhope, D.: Virtual architecture based on a binary relational model : a museum hypermedia application (1994) 0.00
    Abstract
    Reviews claims made for virtual architectures and proposes a semantic data model for hypermedia architecture. Semantic modelling and an extended binary relational model in particular, are outlined in the context of hypermedia. The binary relational store is a simple, uniform data structure, capable of representing abstraction in the application model. Pilot implementations of museum hypermedia systems demonstrate that the architecture is capable of supporting a variety of navigation techniques and authoring tools. Outlines the SHIC (Social History and Industrial Classification) museum classification schema, and discusses its implementation in a hypermedia system based on a binary relational store. Considers experiences with the prototypes and discusses feedback from the museum profession and general public. An extended binary relational model is particularly suited to certain forms of reasoning based on generalization.
    Type
    a
  4. Souza, R.R.; Tudhope, D.; Almeida, M.B.: Towards a taxonomy of KOS (2012) 0.00
    Abstract
    This paper analyzes previous work on the classification of Knowledge Organization Systems (KOS), discusses strengths and weaknesses, and proposes a new and integrative framework. It argues that current analyses of KOS tend to be idiosyncratic and incomplete, relying on a limited number of dimensions of analysis. The paper discusses why and how KOS should be classified on a new basis. Based on the available literature and previous work, the authors propose a wider set of dimensions for the analysis of KOS. These are represented in a taxonomy of KOS. Issues arising are discussed.
    Type
    a
  5. Souza, R.R.; Tudhope, D.; Almeida, M.B.: The KOS spectra : a tentative typology of knowledge organization systems (2010) 0.00
    Abstract
    This work proposes a set of evaluation dimensions for the analysis of knowledge organization systems (KOS), building on previous research and the available literature on the subject. It presents a compiled taxonomy of KOSs, a set of tentative characteristics proposed in the literature, and the authors' spectra proposal. The full details of the typology are not covered in the scope of the article but will be available as an ontology in the near future.
    Type
    a
  6. Golub, K.; Hansson, J.; Soergel, D.; Tudhope, D.: Managing classification in libraries : a methodological outline for evaluating automatic subject indexing and classification in Swedish library catalogues (2015) 0.00
    Abstract
    Subject terms play a crucial role in resource discovery but require substantial effort to produce. Automatic subject classification and indexing address problems of scale and sustainability and can be used to enrich existing bibliographic records, establish more connections across and between resources and enhance consistency of bibliographic data. The paper aims to put forward a complex methodological framework to evaluate automatic classification tools of Swedish textual documents based on the Dewey Decimal Classification (DDC) recently introduced to Swedish libraries. Three major complementary approaches are suggested: a quality-built gold standard, retrieval effects, domain analysis. The gold standard is built based on input from at least two catalogue librarians, end-users expert in the subject, end users inexperienced in the subject and automated tools. Retrieval effects are studied through a combination of assigned and free tasks, including factual and comprehensive types. The study also takes into consideration the different role and character of subject terms in various knowledge domains, such as scientific disciplines. As a theoretical framework, domain analysis is used and applied in relation to the implementation of DDC in Swedish libraries and chosen domains of knowledge within the DDC itself.
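One way the gold standard described above could be assembled, sketched with invented data: candidate classes proposed by catalogue librarians, end users and automated tools are pooled, and a class is accepted when a majority of sources agree on it. The majority threshold is an assumption for illustration, not the paper's stated rule.

```python
# Sketch: pooling indexing proposals from multiple sources into a gold standard.
from collections import Counter

def build_gold_standard(proposals, min_sources=2):
    """proposals: list of sets of classes, one set per source (librarian,
    end user, or automated tool); keep classes backed by >= min_sources."""
    counts = Counter(c for source in proposals for c in set(source))
    return {c for c, n in counts.items() if n >= min_sources}

gold = build_gold_standard([
    {"025.4", "020"},   # catalogue librarian 1
    {"025.4"},          # catalogue librarian 2
    {"025.4", "004"},   # automated tool
])
```

Only the class all or most sources agree on survives; singleton suggestions are dropped as candidates for further review.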
    Source
    Classification and authority control: expanding resource discovery: proceedings of the International UDC Seminar 2015, 29-30 October 2015, Lisbon, Portugal. Eds.: Slavic, A. and M.I. Cordeiro
    Type
    a
  7. Tudhope, D.; Taylor, C.; Beynon-Davies, P.: Classification and hypermedia (1995) 0.00
    Type
    a
  8. Binding, C.; Tudhope, D.: KOS at your service : Programmatic access to knowledge organisation systems (2004) 0.00
    Footnote
    Part of a special issue of: Journal of digital information. 4(2004) no.4.
  9. Tudhope, D.; Binding, C.: Faceted thesauri (2008) 0.00
    Abstract
    The basic elements of faceted thesauri are described, together with a review of their origins and some prominent examples. Their use in browsing and searching applications is discussed. Faceted thesauri are distinguished from faceted classification schemes, while acknowledging the close similarities. The paper concludes by comparing faceted thesauri and related knowledge organization systems to ontologies and discussing appropriate areas of use.
    Content
    Contribution to a special issue "Facets: a fruitful notion in many domains".
    Type
    a
  10. Tudhope, D.; Binding, C.; Blocks, D.; Cunliffe, D.: Representation and retrieval in faceted systems (2003) 0.00
    Abstract
    This paper discusses two inter-related themes: the retrieval potential of faceted thesauri and XML representations of fundamental facets. Initial findings are discussed from the ongoing 'FACET' project, in collaboration with the National Museum of Science and Industry. The work discussed seeks to take advantage of the structure afforded by faceted systems for multi-term queries and flexible matching, focusing in this paper on the Art and Architecture Thesaurus. A multi-term matching function yields ranked results with partial matches via semantic term expansion, based on a measure of distance over the semantic index space formed by thesaurus relationships. Our intention is to drive the system from general representations and a common query structure and interface. To this end, we are developing an XML representation based on work by the Classification Research Group on fundamental facets or categories. The XML representation maps categories to particular thesauri and hierarchies. The system interface, which is configured by the mapping, incorporates a thesaurus browser with navigation history together with a term search facility and drag-and-drop query builder.
    Type
    a
  11. Tudhope, D.; Blocks, D.; Cunliffe, D.; Binding, C.: Query expansion via conceptual distance in thesaurus indexed collections (2006) 0.00
    Abstract
    Purpose - The purpose of this paper is to explore query expansion via conceptual distance in thesaurus indexed collections Design/methodology/approach - An extract of the National Museum of Science and Industry's collections database, indexed with the Getty Art and Architecture Thesaurus (AAT), was the dataset for the research. The system architecture and algorithms for semantic closeness and the matching function are outlined. Standalone and web interfaces are described and formative qualitative user studies are discussed. One user session is discussed in detail, together with a scenario based on a related public inquiry. Findings are set in context of the literature on thesaurus-based query expansion. This paper discusses the potential of query expansion techniques using the semantic relationships in a faceted thesaurus. Findings - Thesaurus-assisted retrieval systems have potential for multi-concept descriptors, permitting very precise queries and indexing. However, indexer and searcher may differ in terminology judgments and there may not be any exactly matching results. The integration of semantic closeness in the matching function permits ranked results for multi-concept queries in thesaurus-indexed applications. An in-memory representation of the thesaurus semantic network allows a combination of automatic and interactive control of expansion and control of expansion on individual query terms. Originality/value - The application of semantic expansion to browsing may be useful in interface options where thesaurus structure is hidden.
    Type
    a
  12. Binding, C.; Tudhope, D.: Improving interoperability using vocabulary linked data (2015) 0.00
    Abstract
    The concept of Linked Data has been an emerging theme within the computing and digital heritage areas in recent years. The growth and scale of Linked Data has underlined the need for greater commonality in concept referencing, to avoid local redefinition and duplication of reference resources. Achieving domain-wide agreement on common vocabularies would be an unreasonable expectation; however, datasets often already have local vocabulary resources defined, and so the prospects for large-scale interoperability can be substantially improved by creating alignment links from these local vocabularies out to common external reference resources. The ARIADNE project is undertaking large-scale integration of archaeology dataset metadata records, to create a cross-searchable research repository resource. Key to enabling this cross search will be the 'subject' metadata originating from multiple data providers, containing terms from multiple multilingual controlled vocabularies. This paper discusses various aspects of vocabulary mapping. Experience from the previous SENESCHAL project in the publication of controlled vocabularies as Linked Open Data is discussed, emphasizing the importance of unique URI identifiers for vocabulary concepts. There is a need to align legacy indexing data to the uniquely defined concepts and examples are discussed of SENESCHAL data alignment work. A case study for the ARIADNE project presents work on mapping between vocabularies, based on the Getty Art and Architecture Thesaurus as a central hub and employing an interactive vocabulary mapping tool developed for the project, which generates SKOS mapping relationships in JSON and other formats. The potential use of such vocabulary mappings to assist cross search over archaeological datasets from different countries is illustrated in a pilot experiment. The results demonstrate the enhanced opportunities for interoperability and cross searching that the approach offers.
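A minimal sketch of emitting SKOS mapping relationships as JSON, in the spirit of the mapping tool described above. The mapping property names are the real SKOS mapping vocabulary; the concept URIs and the JSON shape are invented examples, not the project's actual output format.

```python
# Sketch: serialising vocabulary alignment links as SKOS mappings in JSON.
import json

SKOS_MAPPING_PROPS = {"skos:exactMatch", "skos:closeMatch",
                      "skos:broadMatch", "skos:narrowMatch", "skos:relatedMatch"}

def skos_mapping(source_uri, target_uri, relation="skos:closeMatch"):
    """One alignment link from a local concept out to a hub concept."""
    assert relation in SKOS_MAPPING_PROPS
    return {"@id": source_uri, relation: {"@id": target_uri}}

mappings = [
    skos_mapping("http://example.org/vocab/123",
                 "http://vocab.getty.edu/aat/300000000",  # illustrative URI form
                 "skos:exactMatch"),
]
doc = json.dumps(mappings, indent=2)
```

Unique URIs on both sides of each link are what make the alignments reusable: any dataset indexed with the local concept can be cross-searched via the hub concept.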
  13. Tudhope, D.; Binding, C.: Mapping between linked data vocabularies in ARIADNE (2015) 0.00
    Abstract
    Semantic Enrichment Enabling Sustainability of Archaeological Links (SENESCHAL) was a project coordinated by the Hypermedia Research Unit at the University of South Wales. The project's aims included widening access to key vocabulary resources. National cultural heritage thesauri and vocabularies are used by both national organizations and local authority Historic Environment Records and could potentially act as vocabulary hubs for the Web of Data. Following completion, a set of prominent UK archaeological thesauri and vocabularies is now freely available as Linked Open Data (LOD) via http://www.heritagedata.org - together with open source web services and user interface controls. This presentation will reflect on work done to date for the ARIADNE FP7 infrastructure project (http://www.ariadne-infrastructure.eu) mapping between archaeological vocabularies in different languages and the utility of a hub architecture. The poly-hierarchical structure of the Getty Art & Architecture Thesaurus (AAT) was extracted for use as an example mediating structure to interconnect various multilingual vocabularies originating from ARIADNE data providers. Vocabulary resources were first converted to a common concept-based format (SKOS) and the concepts were then manually mapped to nodes of the extracted AAT structure using some judgement on the meaning of terms and scope notes. Results are presented along with reflections on the wider application to existing European archaeological vocabularies and associated online datasets.
  14. Binding, C.; Gnoli, C.; Tudhope, D.: Migrating a complex classification scheme to the semantic web : expressing the Integrative Levels Classification using SKOS RDF (2021) 0.00
    Abstract
    Purpose - The Integrative Levels Classification (ILC) is a comprehensive "freely faceted" knowledge organization system not previously expressed as SKOS (Simple Knowledge Organization System). This paper reports and reflects on work converting the ILC to SKOS representation. Design/methodology/approach - The design of the ILC representation and the various steps in the conversion to SKOS are described and located within the context of previous work considering the representation of complex classification schemes in SKOS. Various issues and trade-offs emerging from the conversion are discussed. The conversion implementation employed the STELETO transformation tool. Findings - The ILC conversion captures some of the ILC facet structure by a limited extension beyond the SKOS standard. SPARQL examples illustrate how this extension could be used to create faceted, compound descriptors when indexing or cataloguing. Basic query patterns are provided that might underpin search systems. Possible routes for reducing complexity are discussed. Originality/value - Complex classification schemes, such as the ILC, have features which are not straightforward to represent in SKOS and which extend beyond the functionality of the SKOS standard. The ILC's facet indicators are modelled as rdf:Property sub-hierarchies that accompany the SKOS RDF statements. The ILC's top-level fundamental facet relationships are modelled by extensions of the associative relationship - specialised sub-properties of skos:related. An approach for representing faceted compound descriptions in ILC and other faceted classification schemes is proposed.
    Type
    a
  15. Golub, K.; Moon, J.; Nielsen, M.L.; Tudhope, D.: EnTag: Enhanced Tagging for Discovery (2008) 0.00
    Abstract
    Purpose: Investigate the combination of controlled and folksonomy approaches to support resource discovery in repositories and digital collections. Aim: Investigate whether use of an established controlled vocabulary can help improve social tagging for better resource discovery. Objectives: (1) Investigate indexing aspects when using only social tagging versus when using social tagging with suggestions from a controlled vocabulary; (2) Investigate above in two different contexts: tagging by readers and tagging by authors; (3) Investigate influence of only social tagging versus social tagging with a controlled vocabulary on retrieval. - Cf.: http://www.ukoln.ac.uk/projects/enhanced-tagging/.