Search (88 results, page 1 of 5)

  • theme_ss:"Retrievalstudien"
  1. Spink, A.; Goodrum, A.; Robins, D.: Search intermediary elicitations during mediated online searching (1995) 0.06
    0.064490765 = product of:
      0.09673614 = sum of:
        0.06476502 = weight(_text_:reference in 3872) [ClassicSimilarity], result of:
          0.06476502 = score(doc=3872,freq=2.0), product of:
            0.205834 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.050593734 = queryNorm
            0.31464687 = fieldWeight in 3872, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3872)
        0.03197112 = product of:
          0.06394224 = sum of:
            0.06394224 = weight(_text_:database in 3872) [ClassicSimilarity], result of:
              0.06394224 = score(doc=3872,freq=2.0), product of:
                0.20452234 = queryWeight, product of:
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.050593734 = queryNorm
                0.31264183 = fieldWeight in 3872, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.042444 = idf(docFreq=2109, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3872)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     Investigates search intermediary elicitations during mediated online searching. A study of 40 online reference interviews involving 1,557 search intermediary elicitations found 15 different types of elicitation directed to users. Elicitation purposes included search terms and strategies, database selection, relevance of retrieved items, and users' knowledge and previous information seeking. Analysis of the patterns in the types and sequencing of elicitations showed significant strings of multiple elicitations regarding search terms and strategies, and relevance judgements. Discusses the implications of the findings for training search intermediaries and for the design of interfaces that elicit information from end users.
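The score explanations shown with these entries follow Lucene's ClassicSimilarity. A minimal Python sketch, using the values copied from the first entry's explanation (term "reference", doc 3872), reproduces the arithmetic; the only added assumption is the standard ClassicSimilarity idf formula, 1 + ln(maxDocs / (docFreq + 1)):

```python
import math

# Values copied from the score explanation for entry 1
# (term "reference", doc 3872); nothing is recomputed from
# the underlying collection.
freq = 2.0                # termFreq
idf = 4.0683694           # idf(docFreq=2055, maxDocs=44218)
query_norm = 0.050593734  # queryNorm
field_norm = 0.0546875    # fieldNorm(doc=3872)

tf = math.sqrt(freq)                  # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm       # 0.205834   = queryWeight
field_weight = tf * idf * field_norm  # 0.31464687 = fieldWeight
score = query_weight * field_weight   # 0.06476502 = weight(_text_:reference)

# ClassicSimilarity's idf matches 1 + ln(maxDocs / (docFreq + 1)):
assert abs(idf - (1 + math.log(44218 / (2055 + 1)))) < 1e-4
```

The surrounding sum/product/coord lines in the explanation simply combine such per-term weights, with coord scaling the score by the fraction of query terms that matched.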
  2. ¬The Fifth Text Retrieval Conference (TREC-5) (1997) 0.04
    
    Abstract
     Proceedings of the 5th TREC conference held in Gaithersburg, Maryland, Nov 20-22, 1996. The aim of the conference was to discuss retrieval techniques for large test collections. Different research groups used different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information.
  3. ¬The Eleventh Text Retrieval Conference, TREC 2002 (2003) 0.04
    
    Abstract
     Proceedings of the 11th TREC conference held in Gaithersburg, Maryland (USA), November 19-22, 2002. The aim of the conference was to discuss retrieval and related information-seeking tasks for large test collections. 93 research groups used different techniques for information retrieval from the same large database, making it possible to compare the results. The tasks were: cross-language searching, filtering, interactive searching, searching for novelty, question answering, searching for video shots, and Web searching.
  4. Ravana, S.D.; Taheri, M.S.; Rajagopal, P.: Document-based approach to improve the accuracy of pairwise comparison in evaluating information retrieval systems (2015) 0.04
    
    Abstract
     Purpose: To propose a method for obtaining more accurate results when comparing the performance of paired information retrieval (IR) systems, with reference to the current method, which is based on the systems' mean effectiveness scores across a set of identified topics/queries.
     Design/methodology/approach: In the proposed approach, instead of the classic method of using a set of topic scores, document-level scores are used as the evaluation unit. These document scores are the defined document weights, which play the role of the systems' mean average precision (MAP) scores as the significance test's statistic. The experiments were conducted using the TREC 9 Web track collection.
     Findings: The p-values generated through two types of significance tests, namely Student's t-test and the Mann-Whitney test, show that using document-level scores as the evaluation unit makes the difference between IR systems more significant than utilizing topic scores does.
     Originality/value: Utilizing a suitable test collection is a primary prerequisite for the comparative evaluation of IR systems. However, in addition to reusable test collections, accurate statistical testing is a necessity for these evaluations. The findings of this study will assist IR researchers in evaluating their retrieval systems and algorithms more accurately.
    Date
    20. 1.2015 18:30:22
  5. Leininger, K.: Interindexer consistency in PsycINFO (2000) 0.04
    
    Abstract
    Reports results of a study to examine interindexer consistency (the degree to which indexers, when assigning terms to a chosen record, will choose the same terms to reflect that record) in the PsycINFO database using 60 records that were inadvertently processed twice between 1996 and 1998. Five aspects of interindexer consistency were analysed. Two methods were used to calculate interindexer consistency: one posited by Hooper (1965) and the other by Rollin (1981). Aspects analysed were: checktag consistency (66.24% using Hooper's calculation and 77.17% using Rollin's); major-to-all term consistency (49.31% and 62.59% respectively); overall indexing consistency (49.02% and 63.32%); classification code consistency (44.17% and 45.00%); and major-to-major term consistency (43.24% and 56.09%). The average consistency across all categories was 50.4% using Hooper's method and 60.83% using Rollin's. Although comparison with previous studies is difficult due to methodological variations in the overall study of indexing consistency and the specific characteristics of the database, results generally support previous findings when trends and similar studies are analysed.
    Date
    9. 2.1997 18:44:22
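Hooper's (1965) measure used in the study above computes pairwise consistency as the number of terms two indexers agree on, divided by the total number of distinct terms either indexer assigned. A minimal sketch; the term sets are hypothetical, not taken from the study:

```python
# Hooper's (1965) interindexer consistency: C = A / (A + M + N),
# where A is the number of terms both indexers assigned, and M and N
# are the terms unique to each indexer.
def hooper_consistency(terms_a: set, terms_b: set) -> float:
    agreement = len(terms_a & terms_b)
    unique_a = len(terms_a - terms_b)
    unique_b = len(terms_b - terms_a)
    return agreement / (agreement + unique_a + unique_b)

# Hypothetical term assignments by two indexers for the same record:
indexer1 = {"memory", "cognition", "recall", "learning"}
indexer2 = {"memory", "cognition", "attention"}
print(hooper_consistency(indexer1, indexer2))  # 2 shared, 3 unique -> 0.4
```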
  6. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.04
    
    Abstract
     The performance of an information retrieval or text and media filtering system may be determined through analytic methods as well as by traditional simulation or experimental methods. These analytic methods can provide precise statements about expected performance, and can thus determine which of 2 similarly performing systems is superior. For both single query term and multiple query term retrieval models, a model for comparing the performance of different probabilistic retrieval methods is developed. This method may be used to compute the average search length for a query, given only knowledge of database parameter values. Describes predictive models for inverse document frequency, binary independence, and relevance feedback based retrieval and filtering. Simulations illustrate how the single term model performs, and sample performance predictions are given for single term and multiple term problems.
    Date
    22. 2.1996 13:14:10
  7. Aldous, K.J.: ¬A system for the automatic retrieval of information from a specialist database (1996) 0.02
    
    Abstract
     Accessing useful information from a complex database requires knowledge of the structure of the database and an understanding of the methods of information retrieval. A means of overcoming this knowledge barrier to the use of narrow domain databases is proposed in which the user is required to enter only a series of terms which identify the required material. Describes a method which classifies terms according to their meaning in the context of the database and which uses this classification to access and execute modules of code stored in the database to effect retrieval. Presents an implementation of the method using a database of technical information on the nature and use of fungicides. Initial results of trials with potential users indicate that the system can produce relevant responses to queries expressed in this style. Since the code modules are part of the database, extensions may easily be implemented to handle most queries which users are likely to pose.
  8. Schultz Jr., W.N.; Braddy, L.: ¬A librarian-centered study of perceptions of subject terms and controlled vocabulary (2017) 0.02
    
    Abstract
    Controlled vocabulary and subject headings in OPAC records have proven to be useful in improving search results. The authors used a survey to gather information about librarian opinions and professional use of controlled vocabulary. Data from a range of backgrounds and expertise were examined, including academic and public libraries, and technical services as well as public services professionals. Responses overall demonstrated positive opinions of the value of controlled vocabulary, including in reference interactions as well as during bibliographic instruction sessions. Results are also examined based upon factors such as age and type of librarian.
  9. Spink, A.: Term relevance feedback and mediated database searching : implications for information retrieval practice and systems design (1995) 0.02
    
    Abstract
     Research into both the algorithmic and human approaches to information retrieval is required to improve information retrieval system design and database searching effectiveness. Uses the human approach to examine the sources and effectiveness of search terms selected during mediated interactive information retrieval. Focuses on determining the retrieval effectiveness of search terms identified by users and intermediaries from retrieved items during term relevance feedback. Results show that terms selected from particular database fields of retrieved items during term relevance feedback (TRF) were more effective than search terms from the intermediary, database thesauri or users' domain knowledge during the interaction, but not as effective as terms from the users' written question statements. Implications for the design and testing of automatic relevance feedback techniques that place greater emphasis on these sources, and for the practice of database searching, are also discussed.
  10. Bhattacharyya, K.: ¬The effectiveness of natural language in science indexing and retrieval (1974) 0.02
    
    Abstract
     This paper examines the implications of the findings of evaluative tests regarding the retrieval performance of natural language in various subject fields. It suggests parallel investigations into the structure of natural language, with particular reference to terminology, as used in the different branches of basic science. The criteria for defining the terminological consistency of a subject are formulated and a measure is suggested for determining the degree of terminological consistency. The terminological and information structures of specific disciplines such as chemistry, physics, botany, zoology, and geology; the circumstances in which terms originate; and the efforts made by the international scientific community to standardize the terminology of their respective disciplines are examined in detail. This investigation shows why and how an artificially created scientific language finds it impossible to keep pace with current developments, and thus points to the source of strength of natural language.
  11. Hallet, K.S.: Separate but equal? : A system comparison study of MEDLINE's controlled vocabulary MeSH (1998) 0.02
    
    Abstract
     Reports results of a study to test the effect of controlled vocabulary search feature implementation on 2 online systems. Specifically, the study examined retrieval rates using 4 unique controlled vocabulary search features (Explode, major descriptor, descriptor, subheadings). 2 questions were addressed: what, if any, are the general differences between the controlled vocabulary implementations in DIALOG and Ovid; and what, if any, is the impact of each of the differing controlled vocabulary search features upon retrieval rates? Each search feature was applied to 9 search queries obtained from a medical reference librarian. The same queries were searched in the complete MEDLINE file on the DIALOG and Ovid online host systems. The unique records (those retrieved in only 1 of the 2 systems) were identified and analyzed. DIALOG produced equal or more records than Ovid in nearly 20% of the queries. Concludes that users need to be aware of system-specific designs that may require differing input strategies across different systems for the same controlled vocabulary search features. Makes recommendations and suggestions for future research.
  12. Bar-Ilan, J.: ¬The Web as an information source on informetrics? : A content analysis (2000) 0.02
    
    Abstract
     This article addresses the question of whether the Web can serve as an information source for research. Specifically, it analyzes, by way of content analysis, the Web pages retrieved by the major search engines on a particular date (June 7, 1998) as a result of the query 'informetrics OR informetric'. In 807 out of the 942 retrieved pages, the search terms were mentioned in the context of information science. Over 70% of the pages contained only indirect information on the topic, in the form of hypertext links and bibliographical references without annotation. The bibliographical references extracted from the Web pages were analyzed, and lists of the most productive authors, most cited authors, works, and sources were compiled. The list of references obtained from the Web was also compared to data retrieved from commercial databases. In most cases, the list of references extracted from the Web outperformed the commercial bibliographic databases. The results of these comparisons indicate that valuable, freely available data is hidden in the Web, waiting to be extracted from the millions of Web pages.
  13. Wolfram, D.; Dimitroff, A.: Hypertext vs. Boolean-based searching in a bibliographic database environment : a direct comparison of searcher performance (1998) 0.02
    
  14. Kristensen, J.: Expanding end-users' query statements for free text searching with a search-aid thesaurus (1993) 0.02
    
    Abstract
    Tests the effectiveness of a thesaurus as a search-aid in free text searching of a full text database. A set of queries was searched against a large full text database of newspaper articles. The thesaurus contained equivalence, hierarchical and associative relationships. Each query was searched in five modes: basic search, synonym search, narrower term search, related term search, and union of all previous searches. The searches were analyzed in terms of relative recall and precision
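In such comparisons, relative recall is typically measured against the pool of relevant documents retrieved by any of the search modes, since absolute recall is unknowable in a large full-text database. A minimal sketch with hypothetical document IDs (not data from the study):

```python
# Precision and relative recall for one search mode. The recall base
# is the union of relevant documents retrieved across all modes, a
# common pooling approach when the full relevant set is unknown.
def precision(retrieved: set, relevant: set) -> float:
    return len(retrieved & relevant) / len(retrieved)

def relative_recall(retrieved: set, relevant: set, pooled_relevant: set) -> float:
    return len(retrieved & relevant) / len(pooled_relevant)

relevant = {1, 2, 3, 5, 8}       # documents judged relevant (hypothetical)
basic = {1, 2, 4}                # retrieved by the basic search mode
union_mode = {1, 2, 3, 4, 6, 8}  # retrieved by the union of all modes
pooled = union_mode & relevant   # recall base: relevant docs found by any mode

print(precision(basic, relevant))                # 2/3
print(relative_recall(basic, relevant, pooled))  # 2/4 = 0.5
```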
  15. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.02
    
    Date
    20.10.2000 12:22:23
  16. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.02
    
    Source
    Online. 22(1998) no.6, S.57-58
  17. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.02
    
    Date
    11. 8.2001 16:22:19
  18. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.02
    
    Date
    22. 7.2006 18:43:54
  19. Ribeiro, F.: Subject indexing and authority control in archives : the need for subject indexing in archives and for an indexing policy using controlled language (1996) 0.02
    
    Abstract
     Describes an experiment carried out in the City Archives of Oporto, Portugal, to test the relative value for information retrieval of controlling or not controlling the vocabulary used in subject indexing. A comparison was made of the results obtained by searching 2 databases covering the same archival documents, one indexed without any control of the indexing language and the other with authority control. Results indicate that the database with authority control in subject indexing showed better performance and efficiency in information retrieval than the database which used an uncontrolled subject indexing language. A significant complementarity between the databases was found: adding the retrievals of one database to those of the other yielded considerable advantage. Posits the possibility of creating an archival authority list suitable for use in groups with identical characteristics, such as local archives of judicial groups. Such a list should include broader terms, representing subject classes, subdivided into narrower terms according to the particular needs of each archives or archival group.
  20. Keyes, J.G.: Using conceptual categories of questions to measure differences in retrieval performance (1996) 0.02
    
    Abstract
     The form of a question denotes the relationship between the current state of knowledge of the questioner and the propositional content of the question. To assess whether these semantic differences have implications for information retrieval, uses the CF database, a 1,239-document test database containing titles and abstracts of documents pertaining to cystic fibrosis. The database has an accompanying list of 100 questions, which were divided into 5 conceptual categories based on their semantic representation. 2 retrieval methods were used to investigate potential differences in outcomes across conceptual categories: the cosine measurement and the similarity measurement. The ranked results produced by the different algorithms vary for individual conceptual categories as well as for overall performance.

Languages

  • e 79
  • d 5
  • chi 1
  • f 1
  • fi 1

Types

  • a 80
  • s 7
  • m 4
  • el 1