Search (10 results, page 1 of 1)

  • × author_ss:"Binding, C."
  • × author_ss:"Tudhope, D."
  1. Tudhope, D.; Binding, C.: Still quite popular after all those years : the continued relevance of the information retrieval thesaurus (2016) 0.00
    0.0019417685 = product of:
      0.014563262 = sum of:
        0.009436456 = weight(_text_:und in 2908) [ClassicSimilarity], result of:
          0.009436456 = score(doc=2908,freq=2.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.14692576 = fieldWeight in 2908, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=2908)
        0.0051268064 = product of:
          0.010253613 = sum of:
            0.010253613 = weight(_text_:information in 2908) [ClassicSimilarity], result of:
              0.010253613 = score(doc=2908,freq=6.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.20156369 = fieldWeight in 2908, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2908)
          0.5 = coord(1/2)
      0.13333334 = coord(2/15)
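    The indented breakdown above is Lucene's "explain" output for the listing's relevance score, computed with ClassicSimilarity (TF-IDF): each matching term contributes queryWeight (idf x queryNorm) times fieldWeight (tf x idf x fieldNorm), and the per-term sum is scaled by the coord factors. A minimal Python sketch, using only the values printed above, reproduces the first score:

      import math

      query_norm = 0.028978055
      idf_und, idf_info = 2.216367, 1.7554779

      # weight(_text_:und): tf = sqrt(2), fieldNorm = 0.046875
      w_und = (idf_und * query_norm) * (math.sqrt(2.0) * idf_und * 0.046875)
      # weight(_text_:information): tf = sqrt(6), fieldNorm = 0.046875, coord(1/2)
      w_info = (idf_info * query_norm) * (math.sqrt(6.0) * idf_info * 0.046875) * 0.5

      print((w_und + w_info) * (2.0 / 15.0))   # ~0.0019417685, the score shown above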
    
    Abstract
    The recent ISKO-UK conference considered the question of whether the traditional thesaurus has any place in modern information retrieval. This note is intended to continue in the spirit of that good-natured debate, arguing that there is indeed a role today and highlighting some recent work showing the continued relevance of the thesaurus, particularly in the linked data area. Key functionality that a thesaurus makes possible is discussed. A brief outline is provided of prominent work that employs thesauri in three key areas of infrastructure underpinning advanced retrieval functionality today: metadata enrichment, vocabulary mapping and web services.
    Content
    Contribution to a special issue: The Great Debate: "This House Believes that the Traditional Thesaurus has no Place in Modern Information Retrieval." [19 February 2015, 14:00-17:30, preceded by the ISKO UK AGM and followed by networking, wine and nibbles; cf.: http://www.iskouk.org/content/great-debate].
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  2. Tudhope, D.; Binding, C.: Faceted thesauri (2008) 0.00
    8.387961E-4 = product of:
      0.012581941 = sum of:
        0.012581941 = weight(_text_:und in 1855) [ClassicSimilarity], result of:
          0.012581941 = score(doc=1855,freq=2.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.19590102 = fieldWeight in 1855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=1855)
      0.06666667 = coord(1/15)
    
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  3. Binding, C.; Tudhope, D.: KOS at your service : Programmatic access to knowledge organisation systems (2004) 0.00
    3.9466174E-4 = product of:
      0.005919926 = sum of:
        0.005919926 = product of:
          0.011839852 = sum of:
            0.011839852 = weight(_text_:information in 1342) [ClassicSimilarity], result of:
              0.011839852 = score(doc=1342,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.23274569 = fieldWeight in 1342, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1342)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Footnote
    Part of a special issue of: Journal of digital information. 4(2004) no.4.
  4. Vlachidis, A.; Binding, C.; Tudhope, D.; May, K.: Excavating grey literature : a case study on the rich indexing of archaeological documents via natural language-processing techniques and knowledge-based resources (2010) 0.00
    2.941635E-4 = product of:
      0.004412452 = sum of:
        0.004412452 = product of:
          0.008824904 = sum of:
            0.008824904 = weight(_text_:information in 3948) [ClassicSimilarity], result of:
              0.008824904 = score(doc=3948,freq=10.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.1734784 = fieldWeight in 3948, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3948)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Purpose - This paper sets out to discuss the use of information extraction (IE), a natural language-processing (NLP) technique, to assist "rich" semantic indexing of diverse archaeological text resources. The focus of the research is to direct a semantic-aware "rich" indexing of diverse natural language resources with properties capable of satisfying information retrieval from online publications and datasets associated with the Semantic Technologies for Archaeological Resources (STAR) project. Design/methodology/approach - The paper proposes use of the English Heritage extension (CRM-EH) of the standard core ontology in cultural heritage, CIDOC CRM, and exploitation of domain thesauri resources for driving and enhancing an Ontology-Oriented Information Extraction process. The process of semantic indexing is based on a rule-based Information Extraction technique, which is facilitated by the General Architecture for Text Engineering (GATE) toolkit and expressed by Java Annotation Pattern Engine (JAPE) rules. Findings - Initial results suggest that the combination of information extraction with knowledge resources and standard conceptual models is capable of supporting semantic-aware term indexing. Additional efforts are required for further exploitation of the technique and adoption of formal evaluation methods for assessing the performance of the method in measurable terms. Originality/value - The value of the paper lies in the semantic indexing of 535 unpublished online documents, often referred to as "Grey Literature", from the Archaeological Data Service OASIS corpus (Online AccesS to the Index of archaeological investigationS), with respect to the CRM ontological concepts E49.Time Appellation and P19.Physical Object.
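    As a rough illustration of the rule-based indexing idea described above (not the project's actual GATE/JAPE implementation), the sketch below matches thesaurus terms in free text and tags each hit with an ontology concept label. The gazetteer entries, concept labels and function names are hypothetical examples.

      import re

      # Hypothetical mini-gazetteer: thesaurus terms -> CRM-style concept labels.
      GAZETTEER = {
          "roman": "E49.Time Appellation",
          "medieval": "E49.Time Appellation",
          "ditch": "Physical Object",
          "coin": "Physical Object",
      }

      def annotate(text):
          """Return (matched term, concept, start, end) for each gazetteer hit."""
          hits = []
          for term, concept in GAZETTEER.items():
              for m in re.finditer(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
                  hits.append((m.group(0), concept, m.start(), m.end()))
          return sorted(hits, key=lambda h: h[2])

      print(annotate("A medieval ditch contained a single Roman coin."))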
  5. Binding, C.; Tudhope, D.: Integrating faceted structure into the search process (2004) 0.00
    1.9733087E-4 = product of:
      0.002959963 = sum of:
        0.002959963 = product of:
          0.005919926 = sum of:
            0.005919926 = weight(_text_:information in 2627) [ClassicSimilarity], result of:
              0.005919926 = score(doc=2627,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.116372846 = fieldWeight in 2627, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2627)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Source
    Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine
  6. Binding, C.; Tudhope, D.: Terminology Web services (2010) 0.00
    1.9733087E-4 = product of:
      0.002959963 = sum of:
        0.002959963 = product of:
          0.005919926 = sum of:
            0.005919926 = weight(_text_:information in 4067) [ClassicSimilarity], result of:
              0.005919926 = score(doc=4067,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.116372846 = fieldWeight in 4067, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4067)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Controlled terminologies such as classification schemes, name authorities, and thesauri have long been the domain of the library and information science community. Although historically there have been initiatives towards library-style classification of web resources, there remain significant problems with searching and quality judgement of online content. Terminology services can play a key role in opening up access to these valuable resources. By exposing controlled terminologies via a web service, organisations maintain data integrity and version control, whilst motivating external users to design innovative ways to present and utilise their data. We introduce terminology web services and review work in the area. We describe the approaches taken in establishing application programming interfaces (APIs) and discuss the comparative benefits of a dedicated terminology web service versus general purpose programming languages. We discuss experiences at Glamorgan in creating terminology web services and associated client interface components, in particular for the archaeology domain in the STAR (Semantic Technologies for Archaeological Resources) Project.
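    As a purely illustrative sketch of the kind of terminology web service client the abstract describes (the endpoint URL, parameters and JSON shape are hypothetical, not the STAR project's actual API), a small Python client for a concept-search operation might look like this:

      import json
      import urllib.parse
      import urllib.request

      # Hypothetical service endpoint; a real terminology service documents its own
      # operations (e.g. concept search, broader/narrower expansion) and formats.
      BASE_URL = "https://terminology.example.org/search"

      def search_concepts(label, limit=5):
          """Ask the (hypothetical) service for concepts whose labels match a string."""
          query = urllib.parse.urlencode({"q": label, "limit": limit})
          with urllib.request.urlopen(BASE_URL + "?" + query) as resp:
              return json.load(resp)   # assumed shape: [{"uri": ..., "prefLabel": ...}, ...]

      # Example (would require a live service):
      # for concept in search_concepts("thesaurus"):
      #     print(concept["uri"], concept["prefLabel"])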
  7. Tudhope, D.; Binding, C.; Blocks, D.; Cunliffe, D.: Compound descriptors in context : a matching function for classifications and thesauri (2002) 0.00
    1.6444239E-4 = product of:
      0.0024666358 = sum of:
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 3179) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=3179,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 3179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3179)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Theme
    Information Gateway
  8. Tudhope, D.; Binding, C.; Blocks, D.; Cunliffe, D.: FACET: thesaurus retrieval with semantic term expansion (2002) 0.00
    1.3155391E-4 = product of:
      0.0019733086 = sum of:
        0.0019733086 = product of:
          0.0039466172 = sum of:
            0.0039466172 = weight(_text_:information in 175) [ClassicSimilarity], result of:
              0.0039466172 = score(doc=175,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.0775819 = fieldWeight in 175, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=175)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Theme
    Information Gateway
  9. Tudhope, D.; Binding, C.: Toward terminology services : experiences with a pilot Web service thesaurus browser (2006) 0.00
    1.3155391E-4 = product of:
      0.0019733086 = sum of:
        0.0019733086 = product of:
          0.0039466172 = sum of:
            0.0039466172 = weight(_text_:information in 1955) [ClassicSimilarity], result of:
              0.0039466172 = score(doc=1955,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.0775819 = fieldWeight in 1955, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1955)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Source
    Bulletin of the American Society for Information Science and Technology. 33(2006) no.5, S.xx-xx
  10. Khoo, M.J.; Ahn, J.-w.; Binding, C.; Jones, H.J.; Lin, X.; Massam, D.; Tudhope, D.: Augmenting Dublin Core digital library metadata with Dewey Decimal Classification (2015) 0.00
    1.3155391E-4 = product of:
      0.0019733086 = sum of:
        0.0019733086 = product of:
          0.0039466172 = sum of:
            0.0039466172 = weight(_text_:information in 2320) [ClassicSimilarity], result of:
              0.0039466172 = score(doc=2320,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.0775819 = fieldWeight in 2320, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2320)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Purpose - The purpose of this paper is to describe a new approach to a well-known problem for digital libraries, how to search across multiple unrelated libraries with a single query. Design/methodology/approach - The approach involves creating new Dewey Decimal Classification terms and numbers from existing Dublin Core records. In total, 263,550 records were harvested from three digital libraries. Weighted key terms were extracted from the title, description and subject fields of each record. Ranked DDC classes were automatically generated from these key terms by considering DDC hierarchies via a series of filtering and aggregation stages. A mean reciprocal ranking evaluation compared a sample of 49 generated classes against DDC classes created by a trained librarian for the same records. Findings - The best results combined weighted key terms from the title, description and subject fields. Performance declines with increased specificity of DDC level. The results compare favorably with similar studies. Research limitations/implications - The metadata harvest required manual intervention and the evaluation was resource intensive. Future research will look at evaluation methodologies that take account of issues of consistency and ecological validity. Practical implications - The method does not require training data and is easily scalable. The pipeline can be customized for individual use cases, for example, recall or precision enhancing. Social implications - The approach can provide centralized access to information from multiple domains currently provided by individual digital libraries. Originality/value - The approach addresses metadata normalization in the context of web resources. The automatic classification approach accounts for matches within hierarchies, aggregating lower level matches to broader parents and thus approximates the practices of a human cataloger.
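    As a schematic sketch of the aggregation step described above (not the paper's pipeline; the DDC notations, weights and prefix-based "broader" helper are illustrative stand-ins for the real hierarchy), the idea of rolling weighted matches up to broader parents before ranking can be shown in a few lines of Python:

      from collections import defaultdict

      # Hypothetical weighted matches: DDC notation -> weight from key-term matching.
      matches = {"025.49": 0.8, "025.43": 0.5, "006.35": 0.3}

      def broader(notation):
          """Yield successively broader notations by trimming digits (a crude stand-in)."""
          while len(notation) > 3:
              notation = notation[:-1].rstrip(".")
              yield notation

      scores = defaultdict(float)
      for ddc, weight in matches.items():
          scores[ddc] += weight
          for parent in broader(ddc):
              scores[parent] += weight      # aggregate lower-level matches to broader parents

      ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
      print(ranked[:3])                     # broader classes such as 025.4 rise to the top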