Search (7 results, page 1 of 1)

  • author_ss:"French, J.C."
  1. French, J.C.; Brown, D.E.; Kim, N.-H.: A classification approach to Boolean query reformulation (1997) 0.00
    0.001450082 = product of:
      0.008700492 = sum of:
        0.008700492 = weight(_text_:a in 197) [ClassicSimilarity], result of:
          0.008700492 = score(doc=197,freq=22.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.25351265 = fieldWeight in 197, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=197)
      0.16666667 = coord(1/6)
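The breakdown above is Lucene's ClassicSimilarity "explain" output. As a minimal sketch (using the standard ClassicSimilarity formulas: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), score = queryWeight × fieldWeight × coord), the arithmetic can be reproduced from the numbers shown:

```python
import math

def classic_similarity(freq, doc_freq, max_docs, field_norm, query_norm, coord):
    """Recompute a Lucene ClassicSimilarity (TF-IDF) explain score."""
    tf = math.sqrt(freq)                              # 4.690416 for freq=22
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 1.153047 here
    query_weight = idf * query_norm                   # 0.034319755
    field_weight = tf * idf * field_norm              # 0.25351265 (fieldWeight)
    return coord * query_weight * field_weight        # coord(1/6) applied last

score = classic_similarity(22.0, 37942, 44218,
                           field_norm=0.046875,
                           query_norm=0.029764405,
                           coord=1/6)
```

Plugging in the values from the explain tree reproduces the displayed score of 0.001450082 for document 197.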
    
    Abstract
    One of the difficulties in using current Boolean-based information retrieval systems is that it is hard for a user, especially a novice, to formulate an effective Boolean query. Query reformulation can be even more difficult and complex than formulation since users often have difficulty incorporating the new information gained from the previous search into the next query. In this article, query reformulation is viewed as a classification problem, that is, classifying documents as either relevant or non-relevant. A new reformulation algorithm is proposed which builds a tree-structure classifier, called a query tree, at each reformulation from a set of feedback documents retrieved from the previous search. The query tree can easily be transformed into a Boolean query. The query tree is compared to two query reformulation algorithms on benchmark test sets (CACM, CISI, and Medlars). In most experiments, the query tree showed significant improvements in precision over the two algorithms compared in this study. We attribute this improved performance to the ability of the query tree algorithm to select good search terms and to represent the relationships among search terms in a tree structure.
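The core idea of the abstract, turning a tree classifier over feedback documents into a Boolean query, can be sketched as follows. This is an illustrative toy, not the authors' algorithm: the tree shape and the terms "retrieval", "boolean", and "query" are invented, and each path to a relevant leaf becomes one conjunction, OR-ed together.

```python
# Hypothetical query tree: internal nodes test for a term's presence,
# leaves classify documents as 'rel' (relevant) or 'nonrel'.
tree = {
    "term": "retrieval",
    "present": {"term": "boolean", "present": "rel", "absent": "nonrel"},
    "absent": {"term": "query", "present": "rel", "absent": "nonrel"},
}

def tree_to_boolean(node, path=()):
    """Collect one conjunction per root-to-'rel'-leaf path;
    OR-ing the conjunctions yields the reformulated Boolean query."""
    if node == "rel":
        return [" AND ".join(path)] if path else []
    if node == "nonrel":
        return []
    term = node["term"]
    clauses = tree_to_boolean(node["present"], path + (term,))
    clauses += tree_to_boolean(node["absent"], path + (f"NOT {term}",))
    return clauses

query = " OR ".join(f"({c})" for c in tree_to_boolean(tree))
# → "(retrieval AND boolean) OR (NOT retrieval AND query)"
```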
    Type
    a
  2. French, J.C.; Knight, J.C.; Powell, A.L.: Applying hypertext structures to software documentation (1997) 0.00
    0.0012366341 = product of:
      0.007419804 = sum of:
        0.007419804 = weight(_text_:a in 3257) [ClassicSimilarity], result of:
          0.007419804 = score(doc=3257,freq=4.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.2161963 = fieldWeight in 3257, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=3257)
      0.16666667 = coord(1/6)
    
    Footnote
    Contribution to a special issue on methods and tools for the automatic construction of hypertext
    Type
    a
  3. French, J.C.; Chapin, A.C.; Martin, W.N.: Multiple viewpoints as an approach to digital library interfaces (2004) 0.00
    0.0011567653 = product of:
      0.0069405916 = sum of:
        0.0069405916 = weight(_text_:a in 2881) [ClassicSimilarity], result of:
          0.0069405916 = score(doc=2881,freq=14.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.20223314 = fieldWeight in 2881, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2881)
      0.16666667 = coord(1/6)
    
    Abstract
    We introduce a framework of multiple viewpoint systems for describing and designing systems that use more than one representation or set of relevance judgments on the same collection. A viewpoint is any representational scheme on some collection of data objects together with a mechanism for accessing this content. A multiple viewpoint system allows a searcher to pose queries to one viewpoint and then change to another viewpoint while retaining a sense of context. Multiple viewpoint systems are well suited to alleviate vocabulary mismatches and to take advantage of the possibility of combining evidence. We discuss some of the issues that arise in designing and using such systems and illustrate the concepts with several examples.
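One way to picture a multiple viewpoint system, as a rough sketch only (the documents, the two viewpoints "text" and "subjects", and the idea of carrying context as a document-id set are all invented for illustration), is two representations over the same collection where a query in one viewpoint can be refined by the result set from the other:

```python
# Invented toy collection: each document has a free-text representation
# and a controlled-vocabulary (subject heading) representation.
DOCS = {
    1: {"text": {"query", "tree", "boolean"}, "subjects": {"Information retrieval"}},
    2: {"text": {"authority", "clustering"}, "subjects": {"Cataloging"}},
}

def search(viewpoint, term, context=None):
    """Query one viewpoint; 'context' (doc ids kept from a previous
    viewpoint) preserves the searcher's sense of where they were."""
    candidates = context if context is not None else set(DOCS)
    return {d for d in candidates if term in DOCS[d][viewpoint]}

hits = search("text", "query")                               # first viewpoint
refined = search("subjects", "Information retrieval", hits)  # switch viewpoint
```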
    Type
    a
  4. Viles, C.L.; French, J.C.: TREC-4 experiments using DRIFT (1996) 0.00
    0.001020171 = product of:
      0.006121026 = sum of:
        0.006121026 = weight(_text_:a in 7575) [ClassicSimilarity], result of:
          0.006121026 = score(doc=7575,freq=2.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.17835285 = fieldWeight in 7575, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=7575)
      0.16666667 = coord(1/6)
    
    Type
    a
  5. French, J.C.; Powell, A.L.; Gey, F.; Perelman, N.: Exploiting manual indexing to improve collection selection and retrieval effectiveness (2002) 0.00
    0.001020171 = product of:
      0.006121026 = sum of:
        0.006121026 = weight(_text_:a in 3896) [ClassicSimilarity], result of:
          0.006121026 = score(doc=3896,freq=2.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.17835285 = fieldWeight in 3896, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=3896)
      0.16666667 = coord(1/6)
    
    Type
    a
  6. French, J.C.; Powell, A.L.; Schulman, E.: Using clustering strategies for creating authority files (2000) 0.00
    6.1831705E-4 = product of:
      0.003709902 = sum of:
        0.003709902 = weight(_text_:a in 4811) [ClassicSimilarity], result of:
          0.003709902 = score(doc=4811,freq=4.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.10809815 = fieldWeight in 4811, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4811)
      0.16666667 = coord(1/6)
    
    Abstract
    As more online databases are integrated into digital libraries, the issue of quality control of the data becomes increasingly important, especially as it relates to the effective retrieval of information. Authority work, the need to discover and reconcile variant forms of strings in bibliographical entries, will become more critical in the future. Spelling variants, misspellings, and transliteration differences will all increase the difficulty of retrieving information. We investigate a number of approximate string matching techniques that have traditionally been used to help with this problem. We then introduce the notion of approximate word matching and show how it can be used to improve detection and categorization of variant forms. We demonstrate the utility of these approaches using data from the Astrophysics Data System and show how we can reduce the human effort involved in the creation of authority files.
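The variant-detection problem described here can be sketched with a simple greedy clustering over a string-similarity measure. This is a stand-in illustration, not the paper's approximate word matching method: the threshold of 0.85 and the sample name strings are invented, and the similarity used is Python's stdlib `difflib.SequenceMatcher` ratio.

```python
import difflib

def group_variants(names, threshold=0.85):
    """Greedily cluster name strings whose similarity to a cluster's
    first member exceeds the threshold; each cluster approximates one
    authority record with its variant forms."""
    clusters = []
    for name in names:
        for cluster in clusters:
            ratio = difflib.SequenceMatcher(
                None, name.lower(), cluster[0].lower()).ratio()
            if ratio >= threshold:
                cluster.append(name)
                break
        else:
            clusters.append([name])  # no close match: start a new cluster
    return clusters

clusters = group_variants(
    ["French, J.C.", "French, J. C.", "Frensh, J.C.", "Powell, A.L."])
# The three "French" variants fall into one cluster; "Powell, A.L." stands alone.
```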
    Type
    a
  7. Sharretts, C.W.; Shieh, J.; French, J.C.: Electronic theses and dissertations at the University of Virginia (1999) 0.00
    5.152642E-4 = product of:
      0.0030915851 = sum of:
        0.0030915851 = weight(_text_:a in 6702) [ClassicSimilarity], result of:
          0.0030915851 = score(doc=6702,freq=4.0), product of:
            0.034319755 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.029764405 = queryNorm
            0.090081796 = fieldWeight in 6702, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6702)
      0.16666667 = coord(1/6)
    
    Abstract
    Although technology has made life easier in many ways, one constant complaint has been the time it takes to learn it. This is why simplicity was the main concern of the University of Virginia (UVa) when implementing the Electronic Theses and Dissertations (ETD). ETD are not a new concept. The uniqueness of the Virginia ETD lies in the fact that the whole process was assimilated through the technical skills and intellectual efforts of faculty and students. The ETD creates no extra network load and is fully automatic from the submission of data, to the conversion into MARC and subsequent loading into the Library's online catalog, VIRGO. This paper describes the trajectory of an ETD upon submission. The system is designed to be easy and self-explanatory. Submission instructions guide the student step by step. Screen messages, such as errors, are generated automatically when appropriate, while e-mail messages, regarding the status of the process, are automatically posted to students, advisors, catalogers, and school officials. The paradigms and methodologies will help to push forward the ETD project at the University. Planned enhancements are: indexing the data for searching and retrieval using Dienst for the Web interface, to synchronize the searching experience in both VIRGO and the Web; securing the authorship of the data; automating the upload and indexing of bibliographic data in VIRGO; employing Uniform Resource Names (URN) using the Corporation for National Research Initiatives (CNRI) Handle architecture scheme; adding Standard Generalized Markup Language (SGML) to the list of formats acceptable for archiving ETD.
    Type
    a