Search (29 results, page 2 of 2)

  • Filter: author_ss:"Tudhope, D."
  1. Binding, C.; Tudhope, D.: KOS at your service : Programmatic access to knowledge organisation systems (2004) 0.01
    0.005149705 = product of:
      0.02059882 = sum of:
        0.02059882 = weight(_text_:information in 1342) [ClassicSimilarity], result of:
          0.02059882 = score(doc=1342,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.23274569 = fieldWeight in 1342, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=1342)
      0.25 = coord(1/4)
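    The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) formula, as do all the score breakdowns on this page. A minimal Python sketch reproducing the arithmetic of this first entry from the printed values (illustrative only; the variable names are ours, not Lucene API calls):
    
      import math
      
      # Values copied from the explain tree for doc 1342
      freq = 2.0             # termFreq of "information"
      doc_freq = 20772       # docFreq from the idf line
      max_docs = 44218       # maxDocs from the idf line
      query_norm = 0.050415643
      field_norm = 0.09375
      coord = 0.25           # coord(1/4): 1 of 4 query clauses matched
      
      tf = math.sqrt(freq)                             # 1.4142135
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 1.7554779
      query_weight = idf * query_norm                  # 0.08850355 = queryWeight
      field_weight = tf * idf * field_norm             # 0.23274569 = fieldWeight
      score = query_weight * field_weight * coord      # 0.005149705 = final score
      print(f"score = {score:.9f}")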
    
    Footnote
    Part of a special issue of: Journal of digital information. 4(2004) no.4.
  2. Tudhope, D.; Binding, C.: Mapping between linked data vocabularies in ARIADNE (2015) 0.00
    0.004425438 = product of:
      0.017701752 = sum of:
        0.017701752 = product of:
          0.035403505 = sum of:
            0.035403505 = weight(_text_:organization in 2250) [ClassicSimilarity], result of:
              0.035403505 = score(doc=2250,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.19695997 = fieldWeight in 2250, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2250)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    Paper presented at: 14th European Networked Knowledge Organization Systems (NKOS) Workshop, TPDL 2015 Conference in Poznan, Poland, Friday 18th September 2015.
  3. Golub, K.; Soergel, D.; Buchanan, G.; Tudhope, D.; Lykke, M.; Hiom, D.: A framework for evaluating automatic indexing or classification in the context of retrieval (2016) 0.00
    0.0037164795 = product of:
      0.014865918 = sum of:
        0.014865918 = weight(_text_:information in 3311) [ClassicSimilarity], result of:
          0.014865918 = score(doc=3311,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.16796975 = fieldWeight in 3311, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3311)
      0.25 = coord(1/4)
    
    Abstract
    Tools for automatic subject assignment help deal with scale and sustainability in creating and enriching metadata, establishing more connections across and between resources and enhancing consistency. Although some software vendors and experimental researchers claim the tools can replace manual subject indexing, hard scientific evidence of their performance in operating information environments is scarce. A major reason for this is that research is usually conducted in laboratory conditions, excluding the complexities of real-life systems and situations. The article reviews and discusses issues with existing evaluation approaches such as problems of aboutness and relevance assessments, implying the need to use more than a single "gold standard" method when evaluating indexing and retrieval, and proposes a comprehensive evaluation framework. The framework is informed by a systematic review of the literature on evaluation approaches: evaluating indexing quality directly through assessment by an evaluator or through comparison with a gold standard, evaluating the quality of computer-assisted indexing directly in the context of an indexing workflow, and evaluating indexing quality indirectly through analyzing retrieval performance.
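    The first evaluation approach named above, comparison with a gold standard, typically reduces to set overlaps between assigned and gold terms per document. A hypothetical Python sketch of that comparison (term data invented; the article's framework covers far more than this single measure):
    
      # Hypothetical sketch: gold-standard comparison of automatic subject indexing
      def indexing_quality(assigned: set, gold: set):
          hits = len(assigned & gold)
          precision = hits / len(assigned) if assigned else 0.0
          recall = hits / len(gold) if gold else 0.0
          return precision, recall
      
      # Invented example data
      p, r = indexing_quality({"thesauri", "automatic indexing", "metadata"},
                              {"automatic indexing", "metadata", "evaluation"})
      print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.67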
    Series
    Advances in information science
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.1, S.3-16
  4. Tudhope, D.; Taylor, C.: Navigation via similarity (1997) 0.00
    0.0036413912 = product of:
      0.014565565 = sum of:
        0.014565565 = weight(_text_:information in 155) [ClassicSimilarity], result of:
          0.014565565 = score(doc=155,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.16457605 = fieldWeight in 155, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=155)
      0.25 = coord(1/4)
    
    Abstract
    Describes a research project in which similarity measures have been extended to include imprecise matching over different dimensions of structured classification schemes (subject, space, time). The semantic similarity of information units forms the basis for the automatic construction of links and is integrated into hypermedia navigation. Outlines a semantic hypermedia architecture and a prototype museum social history application. Presents illustrative navigation scenarios which make use of a navigation via similarity tool. Measures of semantic closeness underpin the similarity tool; the temporal measure takes account of periods as well as time points. The most general measure is based on a traversal of a semantic net, taking into account relationship type and level of specialisation. It is based on a notion of closeness rather than absolute distance, and returns a set of semantically close terms. Discusses a method of calculating semantic similarity between sets of index terms, based on the maximal closeness values achieved by each term.
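    The closing sentence describes set-to-set matching via maximal closeness values. A hypothetical sketch of that idea, assuming some pairwise closeness function is available (the paper's traversal-based measure over the semantic net is not reproduced here):
    
      # Hypothetical sketch: similarity between two sets of index terms as the
      # mean of each term's best (maximal) closeness to any term in the other set
      def set_similarity(a, b, closeness):
          if not a or not b:
              return 0.0
          best_a = [max(closeness(x, y) for y in b) for x in a]
          best_b = [max(closeness(x, y) for x in a) for y in b]
          return sum(best_a + best_b) / (len(a) + len(b))
      
      # Toy closeness table, illustration only (symmetric lookup, default 0)
      table = {("teapot", "kettle"): 0.8, ("teapot", "stove"): 0.3,
               ("cup", "kettle"): 0.4, ("cup", "stove"): 0.2}
      pair = lambda x, y: table.get((x, y), table.get((y, x), 0.0))
      print(set_similarity(["teapot", "cup"], ["kettle", "stove"], pair))  # 0.575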
    Source
    Information processing and management. 33(1997) no.2, S.233-242
  5. Blocks, D.; Cunliffe, D.; Tudhope, D.: ¬A reference model for user-system interaction in thesaurus-based searching (2006) 0.00
    0.0036413912 = product of:
      0.014565565 = sum of:
        0.014565565 = weight(_text_:information in 202) [ClassicSimilarity], result of:
          0.014565565 = score(doc=202,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.16457605 = fieldWeight in 202, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=202)
      0.25 = coord(1/4)
    
    Abstract
    The authors present a model of information searching in thesaurus-enhanced search systems, intended as a reference model for system developers. The model focuses on user-system interaction and charts the specific stages of searching an indexed collection with a thesaurus. It was developed based on literature, findings from empirical studies, and analysis of existing systems. The model describes in detail the entities, processes, and decisions when interacting with a search system augmented with a thesaurus. A basic search scenario illustrates this process through the model. Graphical and textual depictions of the model are complemented by a concise matrix representation for evaluation purposes. Potential problems at different stages of the search process are discussed, together with possibilities for system developers. The aim is to set out a framework of processes, decisions, and risks involved in thesaurus-based search, within which system developers can consider potential avenues for support.
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.12, S.1655-1665
  6. Tudhope, D.; Binding, C.; Blocks, D.; Cunliffe, D.: Compound descriptors in context : a matching function for classifications and thesauri (2002) 0.00
    0.0021457102 = product of:
      0.008582841 = sum of:
        0.008582841 = weight(_text_:information in 3179) [ClassicSimilarity], result of:
          0.008582841 = score(doc=3179,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 3179, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3179)
      0.25 = coord(1/4)
    
    Theme
    Information Gateway
  7. Tudhope, D.; Alani, H.; Jones, C.: Augmenting thesaurus relationships : possibilities for retrieval (2001) 0.00
    0.0021457102 = product of:
      0.008582841 = sum of:
        0.008582841 = weight(_text_:information in 1520) [ClassicSimilarity], result of:
          0.008582841 = score(doc=1520,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 1520, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1520)
      0.25 = coord(1/4)
    
    Source
    Journal of digital information. 1(2001) no.8
  8. Tudhope, D.; Binding, C.; Blocks, D.; Cunliffe, D.: FACET: thesaurus retrieval with semantic term expansion (2002) 0.00
    0.0017165683 = product of:
      0.006866273 = sum of:
        0.006866273 = weight(_text_:information in 175) [ClassicSimilarity], result of:
          0.006866273 = score(doc=175,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.0775819 = fieldWeight in 175, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=175)
      0.25 = coord(1/4)
    
    Theme
    Information Gateway
  9. Khoo, M.J.; Ahn, J.-w.; Binding, C.; Jones, H.J.; Lin, X.; Massam, D.; Tudhope, D.: Augmenting Dublin Core digital library metadata with Dewey Decimal Classification (2015) 0.00
    0.0017165683 = product of:
      0.006866273 = sum of:
        0.006866273 = weight(_text_:information in 2320) [ClassicSimilarity], result of:
          0.006866273 = score(doc=2320,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.0775819 = fieldWeight in 2320, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=2320)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The purpose of this paper is to describe a new approach to a well-known problem for digital libraries: how to search across multiple unrelated libraries with a single query.
    Design/methodology/approach - The approach involves creating new Dewey Decimal Classification (DDC) terms and numbers from existing Dublin Core records. In total, 263,550 records were harvested from three digital libraries. Weighted key terms were extracted from the title, description and subject fields of each record. Ranked DDC classes were automatically generated from these key terms by considering DDC hierarchies via a series of filtering and aggregation stages. A mean reciprocal ranking evaluation compared a sample of 49 generated classes against DDC classes created by a trained librarian for the same records.
    Findings - The best results combined weighted key terms from the title, description and subject fields. Performance declines with increased specificity of DDC level. The results compare favorably with similar studies.
    Research limitations/implications - The metadata harvest required manual intervention and the evaluation was resource intensive. Future research will look at evaluation methodologies that take account of issues of consistency and ecological validity.
    Practical implications - The method does not require training data and is easily scalable. The pipeline can be customized for individual use cases, for example, recall- or precision-enhancing.
    Social implications - The approach can provide centralized access to information from multiple domains currently provided by individual digital libraries.
    Originality/value - The approach addresses metadata normalization in the context of web resources. The automatic classification approach accounts for matches within hierarchies, aggregating lower-level matches to broader parents and thus approximates the practices of a human cataloger.
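    The hierarchy aggregation described under Design/methodology/approach, with lower-level DDC matches rolling up to broader parent classes, can be sketched as follows. The notations, weights, and truncation rule are invented for illustration; the paper's full pipeline (key-term weighting, filtering stages, MRR evaluation) is not reproduced:
    
      from collections import defaultdict
      
      def ddc_ancestors(notation):
          # Illustrative broadening by truncating the notation:
          # "025.43" -> "025.4" -> "025" -> "020" -> "000"
          n = notation
          while "." in n:
              n = n[:-1].rstrip(".")
              yield n
          if n[2] != "0":
              n = n[:2] + "0"
              yield n
          if n[1] != "0":
              n = n[0] + "00"
              yield n
      
      def aggregate(matches):
          # Each matched class passes its weight to every broader ancestor,
          # so parents of matching children accumulate score and rank higher.
          scores = defaultdict(float, matches)
          for notation, weight in matches.items():
              for parent in ddc_ancestors(notation):
                  scores[parent] += weight
          return sorted(scores.items(), key=lambda kv: -kv[1])
      
      # Invented key-term matches: DDC class -> extracted-term weight
      print(aggregate({"025.43": 0.9, "025.04": 0.6, "020": 0.2})[:3])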