Search (10 results, page 1 of 1)

  • × author_ss:"Eastman, C.M."
  1. Kim, S.H.; Eastman, C.M.: An experiment on node size in a hypermedia system (1999) 0.03
    0.027640268 = product of:
      0.055280536 = sum of:
        0.055280536 = sum of:
          0.011600202 = weight(_text_:a in 3673) [ClassicSimilarity], result of:
            0.011600202 = score(doc=3673,freq=12.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.21843673 = fieldWeight in 3673, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3673)
          0.043680333 = weight(_text_:22 in 3673) [ClassicSimilarity], result of:
            0.043680333 = score(doc=3673,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.2708308 = fieldWeight in 3673, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3673)
      0.5 = coord(1/2)
    
    Abstract
    The node size that should be used in a hypermedia system is an important design issue. 3 interpretations of node size are identified: storage (physical size), window size (presentation size), and length (logical size). An experiment in which presentation size and text length are varied in a HyperCard application is described. The experiment involves student subjects performing a fact retrieval task from a reference handbook. No interaction is found between these 2 independent variables. Performance is significantly better for the longer texts, but no significant difference is found for the 2 different window sizes.
    Date
    22. 5.1999 9:35:20
    Type
    a
  2. Eastman, C.M.: Overlaps in postings to thesaurus terms : a preliminary study (1988) 0.03
    0.02713491 = product of:
      0.05426982 = sum of:
        0.05426982 = sum of:
          0.010589487 = weight(_text_:a in 3555) [ClassicSimilarity], result of:
            0.010589487 = score(doc=3555,freq=10.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.19940455 = fieldWeight in 3555, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3555)
          0.043680333 = weight(_text_:22 in 3555) [ClassicSimilarity], result of:
            0.043680333 = score(doc=3555,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.2708308 = fieldWeight in 3555, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3555)
      0.5 = coord(1/2)
    
    Abstract
    The patterns of overlap between terms which are closely related in a thesaurus are considered. The relationships considered are parent/child, in which one term is a broader term of the other, and sibling, in which 2 terms share the same broader term. The patterns of overlap observed in the MeSH thesaurus with respect to selected MEDLINE postings are examined. The implications of the overlap patterns are discussed; in particular, the impact of the overlap patterns on the potential effectiveness of a proposed algorithm for handling negation is considered.
    Date
    25.12.1995 22:52:34
    Type
    a
  3. Eastman, C.M.; Rose, J.R.: Hierarchical support for browsing (1996) 0.00
    0.0030255679 = product of:
      0.0060511357 = sum of:
        0.0060511357 = product of:
          0.012102271 = sum of:
            0.012102271 = weight(_text_:a in 5604) [ClassicSimilarity], result of:
              0.012102271 = score(doc=5604,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.22789092 = fieldWeight in 5604, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5604)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Presents an approach to support browsing and retrieval in chemical reaction databases. The method involves the use of a classification hierarchy based upon a variety of chemically relevant features. This hierarchy is determined dynamically in response to the search requests of a particular user. Describes a prototype system and gives an example.
    Type
    a
  4. McQuire, A.R.; Eastman, C.M.: The ambiguity of negation in natural language queries to information retrieval systems (1998) 0.00
    0.0028047764 = product of:
      0.005609553 = sum of:
        0.005609553 = product of:
          0.011219106 = sum of:
            0.011219106 = weight(_text_:a in 1147) [ClassicSimilarity], result of:
              0.011219106 = score(doc=1147,freq=22.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.21126054 = fieldWeight in 1147, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1147)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A prototype system to handle negation in natural language queries to information retrieval systems is presented. Whenever a query that has negation is entered, the system will determine whether or not it is necessary for the user to clarify exactly what constituents in the query are being negated. If clarification is needed, the user is presented with a list of choices and asked to select the appropriate one. The algorithm used is based on the results of a survey administered to 64 subjects. The subjects were given a number of queries using negation. For each query, several possible choices for the negated constituent(s) were given. Whenever a lexical unit composed of nouns connected by the conjunction 'and' was negated, there was general agreement on the response. But whenever there were multiple lexical units involved, such as complex lexical units connected by 'and' or prepositional phrases, the subjects were divided on the choices. The results of this survey indicate that it is not possible for a system to automatically disambiguate all uses of negation. However, it is possible for the user interface to handle disambiguation through a clarification dialog during which a user is asked to select from a list of possible interpretations.
    Type
    a
  5. Chang, Y.F.; Eastman, C.M.: An information retrieval system for reusable software (1993) 0.00
    0.00270615 = product of:
      0.0054123 = sum of:
        0.0054123 = product of:
          0.0108246 = sum of:
            0.0108246 = weight(_text_:a in 6348) [ClassicSimilarity], result of:
              0.0108246 = score(doc=6348,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20383182 = fieldWeight in 6348, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=6348)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  6. Eastman, C.M.; Carter, R.M.: Anthropological perspectives on classification schemes (1994) 0.00
    0.0023678814 = product of:
      0.0047357627 = sum of:
        0.0047357627 = product of:
          0.009471525 = sum of:
            0.009471525 = weight(_text_:a in 8888) [ClassicSimilarity], result of:
              0.009471525 = score(doc=8888,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17835285 = fieldWeight in 8888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=8888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  7. Rose, J.R.; Eastman, C.M.: Hierarchical classification as an aid to browsing (1994) 0.00
    0.0023678814 = product of:
      0.0047357627 = sum of:
        0.0047357627 = product of:
          0.009471525 = sum of:
            0.009471525 = weight(_text_:a in 8894) [ClassicSimilarity], result of:
              0.009471525 = score(doc=8894,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17835285 = fieldWeight in 8894, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=8894)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  8. Nakkouzi, Z.S.; Eastman, C.M.: Query formulation for handling negation in information retrieval systems (1990) 0.00
    0.0023435948 = product of:
      0.0046871896 = sum of:
        0.0046871896 = product of:
          0.009374379 = sum of:
            0.009374379 = weight(_text_:a in 3531) [ClassicSimilarity], result of:
              0.009374379 = score(doc=3531,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17652355 = fieldWeight in 3531, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3531)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Queries containing negation are widely recognised as presenting problems for both users and systems. In information retrieval systems such problems usually manifest themselves in the use of the NOT operator. Describes an algorithm to transform Boolean queries with negated terms into queries without negation; the transformation process is based on the use of a hierarchical thesaurus. Examines a set of user requests submitted to the Thomas Cooper Library at the University of South Carolina to determine the pattern and frequency of use of negation.
    Type
    a
  9. Eastman, C.M.: 30,000 hits may be better than 300 : precision anomalies in Internet searches (2002) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 5231) [ClassicSimilarity], result of:
              0.008285859 = score(doc=5231,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 5231, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5231)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this issue we begin with a paper where Eastman points out that conventional narrower queries (the use of conjunctions and phrases) in a web engine search will reduce the number of hits returned but not necessarily increase precision in the top-ranked documents. Thus by precision anomalies Eastman means that search-narrowing activity results in no change or a decrease in precision. Multiple queries with multiple engines were run by students over a three-year period, and the formulation/engine combination was recorded, as was the number of hits. Relevance was also recorded for the top ten and top twenty ranked retrievals. While narrower searches reduced total hits, they did not usually improve precision. Initial high precision and poor query reformulation account for some of the results, as did Alta Vista's failure to use the ranking algorithm incorporated in its regular search in its advanced search feature. However, since the top-listed returns often reoccurred in all formulations, it would seem that the ranking algorithms are doing a consistent job of practical precision ranking that is not improved by reformulation.
    Type
    a
  10. Young, C.W.; Eastman, C.M.; Oakman, R.L.: An analysis of ill-formed input in natural language queries to document retrieval systems (1991) 0.00
    0.001757696 = product of:
      0.003515392 = sum of:
        0.003515392 = product of:
          0.007030784 = sum of:
            0.007030784 = weight(_text_:a in 5263) [ClassicSimilarity], result of:
              0.007030784 = score(doc=5263,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.13239266 = fieldWeight in 5263, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5263)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Natural language document retrieval queries from the Thomas Cooper Library, South Carolina Univ. were analysed in order to investigate the frequency of various types of ill-formed input, such as spelling errors, cooccurrence violations, conjunctions, ellipsis, and missing or incorrect punctuation. Users were requested to write out their requests for information in complete sentences on the form normally used by the library. The primary reason for analysing ill-formed inputs was to determine whether there is a significant need to study ill-formed inputs in detail. Results indicated that most of the queries were sentence fragments and that many of them contained some type of ill-formed input. Conjunctions caused the most problems. The next most serious problem was caused by punctuation errors. Spelling errors occurred in a small number of queries. The remaining types of ill-formed input considered, ellipsis and cooccurrence violations, were not found in the queries.
    Type
    a
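The score trees above are Lucene ClassicSimilarity "explain" output: each leaf term score is the product of a queryWeight (idf × queryNorm) and a fieldWeight (tf × idf × fieldNorm), with tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)). As a minimal sketch (function name and parameter names are illustrative, not part of the listing), one leaf can be recomputed like this:

```python
import math

def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute one leaf of a Lucene ClassicSimilarity explain tree.

    score = queryWeight * fieldWeight
          = (idf * queryNorm) * (tf * idf * fieldNorm)
    """
    tf = math.sqrt(freq)                               # 3.4641016 for freq=12.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # 1.153047 for docFreq=37942, maxDocs=44218
    query_weight = idf * query_norm                    # 0.053105544
    field_weight = tf * idf * field_norm               # 0.21843673
    return query_weight * field_weight

# Values taken from result 1, term "a" in doc 3673 above.
s = classic_term_score(freq=12.0, doc_freq=37942, max_docs=44218,
                       query_norm=0.046056706, field_norm=0.0546875)
# s matches the leaf score 0.011600202 shown in the explain tree
```

The outer `coord(1/2)` factor then halves the sum of the leaf scores, since only one of the two query clauses matched.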