Search (130 results, page 7 of 7)

  • theme_ss:"Retrievalstudien"
  1. Sünkler, S.: Prototypische Entwicklung einer Software für die Erfassung und Analyse explorativer Suchen in Verbindung mit Tests zur Retrievaleffektivität (2012) 0.00
    0.002290387 = product of:
      0.004580774 = sum of:
        0.004580774 = product of:
          0.009161548 = sum of:
            0.009161548 = weight(_text_:d in 479) [ClassicSimilarity], result of:
              0.009161548 = score(doc=479,freq=2.0), product of:
                0.08729101 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045945734 = queryNorm
                0.104954086 = fieldWeight in 479, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=479)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Language
    d
  2. Kelly, D.; Sugimoto, C.R.: A systematic review of interactive information retrieval evaluation studies, 1967-2006 (2013) 0.00
    0.002290387 = product of:
      0.004580774 = sum of:
        0.004580774 = product of:
          0.009161548 = sum of:
            0.009161548 = weight(_text_:d in 684) [ClassicSimilarity], result of:
              0.009161548 = score(doc=684,freq=2.0), product of:
                0.08729101 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045945734 = queryNorm
                0.104954086 = fieldWeight in 684, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=684)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  3. Schaer, P.; Mayr, P.; Sünkler, S.; Lewandowski, D.: How relevant is the long tail? : a relevance assessment study on Million Short (2016) 0.00
    0.002290387 = product of:
      0.004580774 = sum of:
        0.004580774 = product of:
          0.009161548 = sum of:
            0.009161548 = weight(_text_:d in 3144) [ClassicSimilarity], result of:
              0.009161548 = score(doc=3144,freq=2.0), product of:
                0.08729101 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045945734 = queryNorm
                0.104954086 = fieldWeight in 3144, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3144)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Behnert, C.; Lewandowski, D.: A framework for designing retrieval effectiveness studies of library information systems using human relevance assessments (2017) 0.00
    0.002290387 = product of:
      0.004580774 = sum of:
        0.004580774 = product of:
          0.009161548 = sum of:
            0.009161548 = weight(_text_:d in 3700) [ClassicSimilarity], result of:
              0.009161548 = score(doc=3700,freq=2.0), product of:
                0.08729101 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045945734 = queryNorm
                0.104954086 = fieldWeight in 3700, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3700)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Wartena, C.; Golub, K.: Evaluierung von Verschlagwortung im Kontext des Information Retrievals (2021) 0.00
    0.002290387 = product of:
      0.004580774 = sum of:
        0.004580774 = product of:
          0.009161548 = sum of:
            0.009161548 = weight(_text_:d in 376) [ClassicSimilarity], result of:
              0.009161548 = score(doc=376,freq=2.0), product of:
                0.08729101 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045945734 = queryNorm
                0.104954086 = fieldWeight in 376, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=376)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Language
    d
  6. Dzeyk, W.: Effektiv und nutzerfreundlich : Einsatz von semantischen Technologien und Usability-Methoden zur Verbesserung der medizinischen Literatursuche (2010) 0.00
    0.0022673677 = product of:
      0.0045347353 = sum of:
        0.0045347353 = product of:
          0.009069471 = sum of:
            0.009069471 = weight(_text_:d in 4416) [ClassicSimilarity], result of:
              0.009069471 = score(doc=4416,freq=4.0), product of:
                0.08729101 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045945734 = queryNorm
                0.10389925 = fieldWeight in 4416, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4416)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Language
    d
    Location
    D
  7. Lioma, C.; Ounis, I.: A syntactically-based query reformulation technique for information retrieval (2008) 0.00
    0.0018323096 = product of:
      0.0036646193 = sum of:
        0.0036646193 = product of:
          0.0073292386 = sum of:
            0.0073292386 = weight(_text_:d in 2031) [ClassicSimilarity], result of:
              0.0073292386 = score(doc=2031,freq=2.0), product of:
                0.08729101 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045945734 = queryNorm
                0.08396327 = fieldWeight in 2031, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2031)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Whereas in language words of high frequency are generally associated with low content [Bookstein, A., & Swanson, D. (1974). Probabilistic models for automatic indexing. Journal of the American Society for Information Science, 25(5), 312-318; Damerau, F. J. (1965). An experiment in automatic indexing. American Documentation, 16, 283-289; Harter, S. P. (1974). A probabilistic approach to automatic keyword indexing. PhD thesis, University of Chicago; Sparck-Jones, K. (1972). A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28, 11-21; Yu, C., & Salton, G. (1976). Precision weighting - an effective automatic indexing method. Journal of the Association for Computing Machinery (ACM), 23(1), 76-88], shallow syntactic fragments of high frequency generally correspond to lexical fragments of high content [Lioma, C., & Ounis, I. (2006). Examining the content load of part of speech blocks for information retrieval. In Proceedings of the international committee on computational linguistics and the association for computational linguistics (COLING/ACL 2006), Sydney, Australia]. We apply this finding to Information Retrieval, as follows. We present a novel automatic query reformulation technique, which is based on shallow syntactic evidence induced from various language samples, and used to enhance the performance of an Information Retrieval system. Firstly, we draw shallow syntactic evidence from language samples of varying size, and compare the effect of language sample size upon retrieval performance, when using our syntactically-based query reformulation (SQR) technique. Secondly, we compare SQR to a state-of-the-art probabilistic pseudo-relevance feedback technique. Additionally, we combine both techniques and evaluate their compatibility. We evaluate our proposed technique across two standard Text REtrieval Conference (TREC) English test collections, and three statistically different weighting models. Experimental results suggest that SQR markedly enhances retrieval performance, and is at least comparable to pseudo-relevance feedback. Notably, the combination of SQR and pseudo-relevance feedback further enhances retrieval performance considerably. These collective experimental results confirm the tenet that high frequency shallow syntactic fragments correspond to content-bearing lexical fragments.
  8. Cross-language information retrieval (1998) 0.00
    0.0016195483 = product of:
      0.0032390966 = sum of:
        0.0032390966 = product of:
          0.006478193 = sum of:
            0.006478193 = weight(_text_:d in 6299) [ClassicSimilarity], result of:
              0.006478193 = score(doc=6299,freq=4.0), product of:
                0.08729101 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045945734 = queryNorm
                0.07421375 = fieldWeight in 6299, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=6299)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Contains the following contributions: GREFENSTETTE, G.: The Problem of Cross-Language Information Retrieval; DAVIS, M.W.: On the Effective Use of Large Parallel Corpora in Cross-Language Text Retrieval; BALLESTEROS, L. and W.B. CROFT: Statistical Methods for Cross-Language Information Retrieval; Distributed Cross-Lingual Information Retrieval; Automatic Cross-Language Information Retrieval Using Latent Semantic Indexing; EVANS, D.A. et al.: Mapping Vocabularies Using Latent Semantics; PICCHI, E. and C. PETERS: Cross-Language Information Retrieval: A System for Comparable Corpus Querying; YAMABANA, K. et al.: A Language Conversion Front-End for Cross-Language Information Retrieval; GACHOT, D.A. et al.: The Systran NLP Browser: An Application of Machine Translation Technology in Cross-Language Information Retrieval; HULL, D.: A Weighted Boolean Model for Cross-Language Text Retrieval; SHERIDAN, P. et al.: Building a Large Multilingual Test Collection from Comparable News Documents; OARD, D.W. and B.J. DORR: Evaluating Cross-Language Text Filtering Effectiveness
    Footnote
    Review in: Machine translation review: 1999, no.10, pp.26-27 (D. Lewis): "Cross Language Information Retrieval (CLIR) addresses the growing need to access large volumes of data across language boundaries. The typical requirement is for the user to input a free form query, usually a brief description of a topic, into a search or retrieval engine which returns a list, in ranked order, of documents or web pages that are relevant to the topic. The search engine matches the terms in the query to indexed terms, usually keywords previously derived from the target documents. Unlike monolingual information retrieval, CLIR requires query terms in one language to be matched to indexed terms in another. Matching can be done by bilingual dictionary lookup, full machine translation, or by applying statistical methods. A query's success is measured in terms of recall (how many potentially relevant target documents are found) and precision (what proportion of documents found are relevant). Issues in CLIR are how to translate query terms into index terms, how to eliminate alternative translations (e.g. to decide that French 'traitement' in a query means 'treatment' and not 'salary'), and how to rank or weight translation alternatives that are retained (e.g. how to order the French terms 'aventure', 'business', 'affaire', and 'liaison' as relevant translations of English 'affair'). Grefenstette provides a lucid and useful overview of the field and the problems. The volume brings together a number of experiments and projects in CLIR. Mark Davies (New Mexico State University) describes Recuerdo, a Spanish retrieval engine which reduces translation ambiguities by scanning indexes for parallel texts; it also uses either a bilingual dictionary or direct equivalents from a parallel corpus in order to compare results for queries on parallel texts. Lisa Ballesteros and Bruce Croft (University of Massachusetts) use a 'local feedback' technique which automatically enhances a query by adding extra terms to it both before and after translation; such terms can be derived from documents known to be relevant to the query.
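    The review above measures a query's success by recall and precision. Below is a minimal, self-contained sketch of those two measures over a retrieved result set; the document identifiers are invented for illustration and are not taken from the collections discussed:

        def precision_recall(retrieved, relevant):
            # Precision: share of retrieved documents that are relevant.
            # Recall: share of relevant documents that were retrieved.
            retrieved, relevant = set(retrieved), set(relevant)
            hits = retrieved & relevant
            precision = len(hits) / len(retrieved) if retrieved else 0.0
            recall = len(hits) / len(relevant) if relevant else 0.0
            return precision, recall

        # Toy example: 3 of the 5 retrieved documents are relevant,
        # out of 6 relevant documents in total.
        print(precision_recall(["d1", "d2", "d3", "d4", "d5"],
                               ["d1", "d3", "d5", "d7", "d8", "d9"]))   # -> (0.6, 0.5)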
  9. Lohmann, H.: KASCADE: Dokumentanreicherung und automatische Inhaltserschließung : Projektbericht und Ergebnisse des Retrievaltests (2000) 0.00
    0.0016032709 = product of:
      0.0032065418 = sum of:
        0.0032065418 = product of:
          0.0064130835 = sum of:
            0.0064130835 = weight(_text_:d in 494) [ClassicSimilarity], result of:
              0.0064130835 = score(doc=494,freq=2.0), product of:
                0.08729101 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045945734 = queryNorm
                0.07346786 = fieldWeight in 494, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=494)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Language
    d
  10. TREC: experiment and evaluation in information retrieval (2005) 0.00
    0.0011451935 = product of:
      0.002290387 = sum of:
        0.002290387 = product of:
          0.004580774 = sum of:
            0.004580774 = weight(_text_:d in 636) [ClassicSimilarity], result of:
              0.004580774 = score(doc=636,freq=2.0), product of:
                0.08729101 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045945734 = queryNorm
                0.052477043 = fieldWeight in 636, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Contains the following contributions: 1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman 2. The TREC Test Collections - Donna K. Harman 3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees 4. The TREC Ad Hoc Experiments - Donna K. Harman 5. Routing and Filtering - Stephen Robertson and Jamie Callan 6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin 7. Beyond English - Donna K. Harman 8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo 9. The Very Large Collection and Web Tracks - David Hawking and Nick Craswell 10. Question Answering in TREC - Ellen M. Voorhees 11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan 12. How Okapi Came to TREC - Stephen Robertson 13. The SMART Project at TREC - Chris Buckley 14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok 15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam 16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij 17. IBM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick Epilogue: Metareflections on TREC - Karen Sparck Jones
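The score breakdown shown under each result above is Lucene ClassicSimilarity (TF-IDF) explain output: a term's weight is the product of queryWeight (idf x queryNorm) and fieldWeight (tf x idf x fieldNorm), and each coord(1/2) factor halves the score because only one of two query clauses matched. The following is a minimal sketch reproducing the 0.002290387 score of result 1 from the factors displayed in its explain tree, assuming ClassicSimilarity's tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the variable names are mine, not part of the catalogue output:

    import math

    # Factors as displayed in the explain tree of result 1 (term "_text_:d", doc 479)
    max_docs, doc_freq, term_freq = 44218, 17979, 2.0
    field_norm = 0.0390625                             # fieldNorm(doc=479)
    query_norm = 0.045945734                           # queryNorm

    tf = math.sqrt(term_freq)                          # 1.4142135
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # 1.899872

    query_weight = idf * query_norm                    # 0.08729101
    field_weight = tf * idf * field_norm               # 0.104954086
    term_score = query_weight * field_weight           # 0.009161548

    # Two nested coord(1/2) factors: only one of two query clauses matched at each level.
    final_score = term_score * 0.5 * 0.5
    print(final_score)                                 # ~0.002290387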

Languages

  • e 69
  • d 55
  • f 2
  • nl 2
  • fi 1

Types

  • a 103
  • el 7
  • r 7
  • s 7
  • m 6
  • x 6
  • p 2
  • d 1