Search (100 results, page 1 of 5)

  • theme_ss:"Retrievalalgorithmen"
  1. Information retrieval : data structures and algorithms (1992) 0.04
    0.04403838 = product of:
      0.06605757 = sum of:
        0.0155384 = weight(_text_:d in 3495) [ClassicSimilarity], result of:
          0.0155384 = score(doc=3495,freq=6.0), product of:
            0.08547641 = queryWeight, product of:
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.044990618 = queryNorm
            0.18178582 = fieldWeight in 3495, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3495)
        0.05051917 = product of:
          0.10103834 = sum of:
            0.10103834 = weight(_text_:u.a in 3495) [ClassicSimilarity], result of:
              0.10103834 = score(doc=3495,freq=8.0), product of:
                0.20283905 = queryWeight, product of:
                  4.5084743 = idf(docFreq=1323, maxDocs=44218)
                  0.044990618 = queryNorm
                0.49812075 = fieldWeight in 3495, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.5084743 = idf(docFreq=1323, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3495)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    An edited volume on data structures and algorithms for information retrieval, including a disk with examples written in C. Aimed at programmers and students interested in parsing text and automated indexing, it is the first collection in book form of the basic data structures and algorithms that are critical to the storage and retrieval of documents. Contains the chapters: FRAKES, W.B.: Introduction to information storage and retrieval systems; BAEZA-YATES, R.S.: Introduction to data structures and algorithms related to information retrieval; HARMAN, D. u.a.: Inverted files; FALOUTSOS, C.: Signature files; GONNET, G.H. u.a.: New indices for text: PAT trees and PAT arrays; FORD, D.A. u. S. CHRISTODOULAKIS: File organizations for optical disks; FOX, C.: Lexical analysis and stoplists; FRAKES, W.B.: Stemming algorithms; SRINIVASAN, P.: Thesaurus construction; BAEZA-YATES, R.A.: String searching algorithms; HARMAN, D.: Relevance feedback and other query modification techniques; WARTIK, S.: Boolean operators; WARTIK, S. u.a.: Hashing algorithms; HARMAN, D.: Ranking algorithms; FOX, E. u.a.: Extended Boolean models; RASMUSSEN, E.: Clustering algorithms; HOLLAAR, L.: Special-purpose hardware for information retrieval; STANFILL, C.: Parallel information retrieval algorithms
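The numeric trees attached to each hit are Lucene explain() output for ClassicSimilarity (TF-IDF). A minimal sketch, recomputing the first entry's `_text_:d` weight and its final coord-scaled score from the formulas named in the output (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf * idf * fieldNorm, termScore = queryWeight * fieldWeight); the function names are ours, the constants come from the explanation above:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)                        # tf = sqrt(term frequency)
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * query_norm        # idf * queryNorm
    field_weight = tf * term_idf * field_norm   # tf * idf * fieldNorm
    return query_weight * field_weight

# Entry 1, term `_text_:d`: freq=6, docFreq=17979, fieldNorm=0.0390625
d_w = term_score(6.0, 17979, 44218, 0.044990618, 0.0390625)
# Entry 1, term `_text_:u.a`: freq=8, docFreq=1323; its subtree is scaled by coord(1/2) = 0.5
ua_w = 0.5 * term_score(8.0, 1323, 44218, 0.044990618, 0.0390625)
# Final score: sum of matching clauses times coord(2/3), since 2 of 3 top-level clauses matched
score = (d_w + ua_w) * (2.0 / 3.0)
print(d_w, score)  # ≈ 0.0155384 and ≈ 0.04403838, matching the explain output
```

The coord factors explain why hits 7-9, which match only one of the three query clauses, are multiplied by 0.33333334 rather than 0.6666667.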
  2. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.04
    0.038736187 = product of:
      0.05810428 = sum of:
        0.021530636 = weight(_text_:d in 58) [ClassicSimilarity], result of:
          0.021530636 = score(doc=58,freq=2.0), product of:
            0.08547641 = queryWeight, product of:
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.044990618 = queryNorm
            0.2518898 = fieldWeight in 58, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.09375 = fieldNorm(doc=58)
        0.036573645 = product of:
          0.07314729 = sum of:
            0.07314729 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.07314729 = score(doc=58,freq=2.0), product of:
                0.15754949 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044990618 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    14. 6.2015 22:12:44
    Language
    d
  3. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.04
    0.038736187 = product of:
      0.05810428 = sum of:
        0.021530636 = weight(_text_:d in 2051) [ClassicSimilarity], result of:
          0.021530636 = score(doc=2051,freq=2.0), product of:
            0.08547641 = queryWeight, product of:
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.044990618 = queryNorm
            0.2518898 = fieldWeight in 2051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.09375 = fieldNorm(doc=2051)
        0.036573645 = product of:
          0.07314729 = sum of:
            0.07314729 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.07314729 = score(doc=2051,freq=2.0), product of:
                0.15754949 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044990618 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    14. 6.2015 22:12:56
    Language
    d
  4. Tober, M.; Hennig, L.; Furch, D.: SEO Ranking-Faktoren und Rang-Korrelationen 2014 : Google Deutschland (2014) 0.03
    0.029787809 = product of:
      0.044681713 = sum of:
        0.02029928 = weight(_text_:d in 1484) [ClassicSimilarity], result of:
          0.02029928 = score(doc=1484,freq=4.0), product of:
            0.08547641 = queryWeight, product of:
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.044990618 = queryNorm
            0.237484 = fieldWeight in 1484, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.0625 = fieldNorm(doc=1484)
        0.024382431 = product of:
          0.048764862 = sum of:
            0.048764862 = weight(_text_:22 in 1484) [ClassicSimilarity], result of:
              0.048764862 = score(doc=1484,freq=2.0), product of:
                0.15754949 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044990618 = queryNorm
                0.30952093 = fieldWeight in 1484, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1484)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    13. 9.2014 14:45:22
    Language
    d
  5. Kanaeva, Z.: Ranking: Google und CiteSeer (2005) 0.02
    0.022596112 = product of:
      0.033894166 = sum of:
        0.012559538 = weight(_text_:d in 3276) [ClassicSimilarity], result of:
          0.012559538 = score(doc=3276,freq=2.0), product of:
            0.08547641 = queryWeight, product of:
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.044990618 = queryNorm
            0.14693572 = fieldWeight in 3276, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3276)
        0.021334628 = product of:
          0.042669255 = sum of:
            0.042669255 = weight(_text_:22 in 3276) [ClassicSimilarity], result of:
              0.042669255 = score(doc=3276,freq=2.0), product of:
                0.15754949 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044990618 = queryNorm
                0.2708308 = fieldWeight in 3276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3276)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    20. 3.2005 16:23:22
    Language
    d
  6. Cross-language information retrieval (1998) 0.02
    0.02106874 = product of:
      0.03160311 = sum of:
        0.0063435254 = weight(_text_:d in 6299) [ClassicSimilarity], result of:
          0.0063435254 = score(doc=6299,freq=4.0), product of:
            0.08547641 = queryWeight, product of:
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.044990618 = queryNorm
            0.07421375 = fieldWeight in 6299, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.01953125 = fieldNorm(doc=6299)
        0.025259584 = product of:
          0.05051917 = sum of:
            0.05051917 = weight(_text_:u.a in 6299) [ClassicSimilarity], result of:
              0.05051917 = score(doc=6299,freq=8.0), product of:
                0.20283905 = queryWeight, product of:
                  4.5084743 = idf(docFreq=1323, maxDocs=44218)
                  0.044990618 = queryNorm
                0.24906038 = fieldWeight in 6299, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.5084743 = idf(docFreq=1323, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=6299)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    Contains the contributions: GREFENSTETTE, G.: The Problem of Cross-Language Information Retrieval; DAVIS, M.W.: On the Effective Use of Large Parallel Corpora in Cross-Language Text Retrieval; BALLESTEROS, L. u. W.B. CROFT: Statistical Methods for Cross-Language Information Retrieval; Distributed Cross-Lingual Information Retrieval; Automatic Cross-Language Information Retrieval Using Latent Semantic Indexing; EVANS, D.A. u.a.: Mapping Vocabularies Using Latent Semantics; PICCHI, E. u. C. PETERS: Cross-Language Information Retrieval: A System for Comparable Corpus Querying; YAMABANA, K. u.a.: A Language Conversion Front-End for Cross-Language Information Retrieval; GACHOT, D.A. u.a.: The Systran NLP Browser: An Application of Machine Translation Technology in Cross-Language Information Retrieval; HULL, D.: A Weighted Boolean Model for Cross-Language Text Retrieval; SHERIDAN, P. u.a.: Building a Large Multilingual Test Collection from Comparable News Documents; OARD, D.W. u. B.J. DORR: Evaluating Cross-Language Text Filtering Effectiveness
    Footnote
    Rez. in: Machine translation review: 1999, no.10, S.26-27 (D. Lewis): "Cross Language Information Retrieval (CLIR) addresses the growing need to access large volumes of data across language boundaries. The typical requirement is for the user to input a free form query, usually a brief description of a topic, into a search or retrieval engine which returns a list, in ranked order, of documents or web pages that are relevant to the topic. The search engine matches the terms in the query to indexed terms, usually keywords previously derived from the target documents. Unlike monolingual information retrieval, CLIR requires query terms in one language to be matched to indexed terms in another. Matching can be done by bilingual dictionary lookup, full machine translation, or by applying statistical methods. A query's success is measured in terms of recall (how many potentially relevant target documents are found) and precision (what proportion of documents found are relevant). Issues in CLIR are how to translate query terms into index terms, how to eliminate alternative translations (e.g. to decide that French 'traitement' in a query means 'treatment' and not 'salary'), and how to rank or weight translation alternatives that are retained (e.g. how to order the French terms 'aventure', 'business', 'affaire', and 'liaison' as relevant translations of English 'affair'). Grefenstette provides a lucid and useful overview of the field and the problems. The volume brings together a number of experiments and projects in CLIR. Mark Davies (New Mexico State University) describes Recuerdo, a Spanish retrieval engine which reduces translation ambiguities by scanning indexes for parallel texts; it also uses either a bilingual dictionary or direct equivalents from a parallel corpus in order to compare results for queries on parallel texts. 
Lisa Ballesteros and Bruce Croft (University of Massachusetts) use a 'local feedback' technique which automatically enhances a query by adding extra terms to it both before and after translation; such terms can be derived from documents known to be relevant to the query.
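The review above defines a query's success in terms of recall and precision. A minimal sketch of the two measures over a retrieved set (the document IDs are illustrative, not from any collection discussed here):

```python
def precision_recall(retrieved, relevant):
    """Precision: fraction of retrieved docs that are relevant.
    Recall: fraction of relevant docs that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical run: 4 documents retrieved, 3 of them among the 6 relevant ones
p, r = precision_recall(["d1", "d2", "d3", "d7"], ["d1", "d2", "d3", "d4", "d5", "d6"])
print(p, r)  # 0.75 0.5
```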
  7. Liu, A.; Zou, Q.; Chu, W.W.: Configurable indexing and ranking for XML information retrieval (2004) 0.02
    0.016839724 = product of:
      0.05051917 = sum of:
        0.05051917 = product of:
          0.10103834 = sum of:
            0.10103834 = weight(_text_:u.a in 4114) [ClassicSimilarity], result of:
              0.10103834 = score(doc=4114,freq=2.0), product of:
                0.20283905 = queryWeight, product of:
                  4.5084743 = idf(docFreq=1323, maxDocs=44218)
                  0.044990618 = queryNorm
                0.49812075 = fieldWeight in 4114, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5084743 = idf(docFreq=1323, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4114)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    SIGIR'04: Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Ed.: K. Järvelin, u.a.
  8. Yu, K.; Tresp, V.; Yu, S.: A nonparametric hierarchical Bayesian framework for information filtering (2004) 0.02
    0.016839724 = product of:
      0.05051917 = sum of:
        0.05051917 = product of:
          0.10103834 = sum of:
            0.10103834 = weight(_text_:u.a in 4117) [ClassicSimilarity], result of:
              0.10103834 = score(doc=4117,freq=2.0), product of:
                0.20283905 = queryWeight, product of:
                  4.5084743 = idf(docFreq=1323, maxDocs=44218)
                  0.044990618 = queryNorm
                0.49812075 = fieldWeight in 4117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5084743 = idf(docFreq=1323, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4117)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    SIGIR'04: Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Ed.: K. Järvelin, u.a.
  9. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.02
    0.016254954 = product of:
      0.048764862 = sum of:
        0.048764862 = product of:
          0.097529724 = sum of:
            0.097529724 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.097529724 = score(doc=402,freq=2.0), product of:
                0.15754949 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044990618 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  10. Song, D.; Bruza, P.D.: Towards context sensitive information inference (2003) 0.02
    0.01614008 = product of:
      0.024210118 = sum of:
        0.008971099 = weight(_text_:d in 1428) [ClassicSimilarity], result of:
          0.008971099 = score(doc=1428,freq=2.0), product of:
            0.08547641 = queryWeight, product of:
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.044990618 = queryNorm
            0.104954086 = fieldWeight in 1428, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1428)
        0.01523902 = product of:
          0.03047804 = sum of:
            0.03047804 = weight(_text_:22 in 1428) [ClassicSimilarity], result of:
              0.03047804 = score(doc=1428,freq=2.0), product of:
                0.15754949 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044990618 = queryNorm
                0.19345059 = fieldWeight in 1428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1428)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    22. 3.2003 19:35:46
  11. Shiri, A.A.; Revie, C.: Query expansion behavior within a thesaurus-enhanced search environment : a user-centered evaluation (2006) 0.02
    0.01614008 = product of:
      0.024210118 = sum of:
        0.008971099 = weight(_text_:d in 56) [ClassicSimilarity], result of:
          0.008971099 = score(doc=56,freq=2.0), product of:
            0.08547641 = queryWeight, product of:
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.044990618 = queryNorm
            0.104954086 = fieldWeight in 56, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.0390625 = fieldNorm(doc=56)
        0.01523902 = product of:
          0.03047804 = sum of:
            0.03047804 = weight(_text_:22 in 56) [ClassicSimilarity], result of:
              0.03047804 = score(doc=56,freq=2.0), product of:
                0.15754949 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044990618 = queryNorm
                0.19345059 = fieldWeight in 56, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=56)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The study reported here investigated the query expansion behavior of end-users interacting with a thesaurus-enhanced search system on the Web. Two groups, namely academic staff and postgraduate students, were recruited into this study. Data were collected from 90 searches performed by 30 users using the OVID interface to the CAB abstracts database. Data-gathering techniques included questionnaires, screen capturing software, and interviews. The results presented here relate to issues of search-topic and search-term characteristics, number and types of expanded queries, usefulness of thesaurus terms, and behavioral differences between academic staff and postgraduate students in their interaction. The key conclusions drawn were that (a) academic staff chose more narrow and synonymous terms than did postgraduate students, who generally selected broader and related terms; (b) topic complexity affected users' interaction with the thesaurus in that complex topics required more query expansion and search term selection; (c) users' prior topic-search experience appeared to have a significant effect on their selection and evaluation of thesaurus terms; (d) in 50% of the searches where additional terms were suggested from the thesaurus, users stated that they had not been aware of the terms at the beginning of the search; this observation was particularly noticeable in the case of postgraduate students.
    Date
    22. 7.2006 16:32:43
  12. Smeaton, A.F.; Rijsbergen, C.J. van: The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.01
    0.014223086 = product of:
      0.042669255 = sum of:
        0.042669255 = product of:
          0.08533851 = sum of:
            0.08533851 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.08533851 = score(doc=2134,freq=2.0), product of:
                0.15754949 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044990618 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    30. 3.2001 13:32:22
  13. Back, J.: ¬An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.01
    0.014223086 = product of:
      0.042669255 = sum of:
        0.042669255 = product of:
          0.08533851 = sum of:
            0.08533851 = weight(_text_:22 in 3445) [ClassicSimilarity], result of:
              0.08533851 = score(doc=3445,freq=2.0), product of:
                0.15754949 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044990618 = queryNorm
                0.5416616 = fieldWeight in 3445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3445)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    25. 8.2005 17:42:22
  14. Pfeifer, U.; Pennekamp, S.: Incremental processing of vague queries in interactive retrieval systems (1997) 0.01
    0.0134717785 = product of:
      0.040415335 = sum of:
        0.040415335 = product of:
          0.08083067 = sum of:
            0.08083067 = weight(_text_:u.a in 735) [ClassicSimilarity], result of:
              0.08083067 = score(doc=735,freq=2.0), product of:
                0.20283905 = queryWeight, product of:
                  4.5084743 = idf(docFreq=1323, maxDocs=44218)
                  0.044990618 = queryNorm
                0.3984966 = fieldWeight in 735, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5084743 = idf(docFreq=1323, maxDocs=44218)
                  0.0625 = fieldNorm(doc=735)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Hypertext - Information Retrieval - Multimedia '97: Theorien, Modelle und Implementierungen integrierter elektronischer Informationssysteme. Proceedings HIM '97. Ed.: N. Fuhr u.a.
  15. Harper, D.J.: Relevance feedback in document retrieval (1980) 0.01
    0.011961466 = product of:
      0.035884395 = sum of:
        0.035884395 = weight(_text_:d in 5867) [ClassicSimilarity], result of:
          0.035884395 = score(doc=5867,freq=2.0), product of:
            0.08547641 = queryWeight, product of:
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.044990618 = queryNorm
            0.41981634 = fieldWeight in 5867, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.15625 = fieldNorm(doc=5867)
      0.33333334 = coord(1/3)
    
    Type
    d
  16. Biskri, I.; Rompré, L.: Using association rules for query reformulation (2012) 0.01
    0.010103834 = product of:
      0.0303115 = sum of:
        0.0303115 = product of:
          0.060623 = sum of:
            0.060623 = weight(_text_:u.a in 92) [ClassicSimilarity], result of:
              0.060623 = score(doc=92,freq=2.0), product of:
                0.20283905 = queryWeight, product of:
                  4.5084743 = idf(docFreq=1323, maxDocs=44218)
                  0.044990618 = queryNorm
                0.29887244 = fieldWeight in 92, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5084743 = idf(docFreq=1323, maxDocs=44218)
                  0.046875 = fieldNorm(doc=92)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis, u.a
  17. Habernal, I.; Konopík, M.; Rohlík, O.: Question answering (2012) 0.01
    0.010103834 = product of:
      0.0303115 = sum of:
        0.0303115 = product of:
          0.060623 = sum of:
            0.060623 = weight(_text_:u.a in 101) [ClassicSimilarity], result of:
              0.060623 = score(doc=101,freq=2.0), product of:
                0.20283905 = queryWeight, product of:
                  4.5084743 = idf(docFreq=1323, maxDocs=44218)
                  0.044990618 = queryNorm
                0.29887244 = fieldWeight in 101, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5084743 = idf(docFreq=1323, maxDocs=44218)
                  0.046875 = fieldNorm(doc=101)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis, u.a
  18. Mandl, T.: Tolerantes Information Retrieval : Neuronale Netze zur Erhöhung der Adaptivität und Flexibilität bei der Informationssuche (2001) 0.01
    0.009128182 = product of:
      0.013692273 = sum of:
        0.0035884394 = weight(_text_:d in 5965) [ClassicSimilarity], result of:
          0.0035884394 = score(doc=5965,freq=2.0), product of:
            0.08547641 = queryWeight, product of:
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.044990618 = queryNorm
            0.041981634 = fieldWeight in 5965, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.015625 = fieldNorm(doc=5965)
        0.010103834 = product of:
          0.020207668 = sum of:
            0.020207668 = weight(_text_:u.a in 5965) [ClassicSimilarity], result of:
              0.020207668 = score(doc=5965,freq=2.0), product of:
                0.20283905 = queryWeight, product of:
                  4.5084743 = idf(docFreq=1323, maxDocs=44218)
                  0.044990618 = queryNorm
                0.09962415 = fieldWeight in 5965, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5084743 = idf(docFreq=1323, maxDocs=44218)
                  0.015625 = fieldNorm(doc=5965)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Rez. in: nfd - Information 54(2003) H.6, S.379-380 (U. Thiel): "Did G. Salton, when developing the vector space model, know of the cybernetically oriented experiments with associative memory structures? The subject of the present book reminded me of this and similar conjectures, which I discussed some years ago with Reginald Ferber and other colleagues. At any rate, it can be noted that the vector representation is an ingeniously simple rendering both of the 'inverted files' used as the fundamental data structure in information retrieval (IR) and of the associative memory matrices that over time evolved, via perceptrons, into neural networks (NN). This formal connection subsequently stimulated a number of approaches to using networks in retrieval, in which hybrid approaches combining methods from both disciplines prove very suitable, as in the present volume. But first things first... The book was submitted by the author as a dissertation to Department IV 'Languages and Technology' of the University of Hildesheim and results from a series of research contributions to several projects in which the author was involved at various sites between 1995 and 2000. This explains the unusual breadth of applications, scenarios, and domains in which the results were obtained. Thus the COSIMIR model (COgnitive SIMilarity learning in Information Retrieval) developed in this work is not only evaluated on the classic Cranfield collection but is also deployed for fact retrieval from a materials database in the WING project of the University of Regensburg. Further experiments with the component called the 'transformation network', whose task is the mapping of weighting functions between two term spaces, round off the spectrum of experiments.
But not only are the presented results diverse; the "state of the art" overview offered to the reader also summarizes, with highly informative breadth, the essentials of the fields of IR and NN and illuminates the points of intersection between the two areas. Alongside the foundations of text and fact retrieval, the approaches to improving adaptivity and to mastering heterogeneity are presented, while as foundations of neural networks, besides a general introduction to the basic concepts, the backpropagation model, Kohonen networks, and Adaptive Resonance Theory (ART), among others, are described. A further chapter presents previous NN-oriented approaches in IR and rounds off the outline of the relevant research landscape. In preparation for the presentation of the COSIMIR model, the author inserts at this point a discursive chapter on heterogeneity in IR, in which the goals and basic assumptions of the work are reflected upon once more. The object type, the quality of the objects and their indexing, and multilinguality are named as dimensions of heterogeneity. Even if this systematization essentially places the emphasis on problems from the projects touched upon here, and aims less at a comprehensive treatment of, say, the literature on the problem of relevance, it is nevertheless helpful for understanding the design decisions, often only implicitly addressed in the following chapters, made in conceiving the developed prototypes. The approach of handling heterogeneity through transformations is made concrete in the specific context of NN, while other possibilities, e.g. employing instruments of logic and probability theory, are only briefly discussed. A more far-reaching analysis would probably also have stretched the scope of the work too far,
    Language
    d
  19. Chen, Z.; Fu, B.: On the complexity of Rocchio's similarity-based relevance feedback algorithm (2007) 0.01
    0.008458034 = product of:
      0.025374101 = sum of:
        0.025374101 = weight(_text_:d in 578) [ClassicSimilarity], result of:
          0.025374101 = score(doc=578,freq=16.0), product of:
            0.08547641 = queryWeight, product of:
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.044990618 = queryNorm
            0.296855 = fieldWeight in 578, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.0390625 = fieldNorm(doc=578)
      0.33333334 = coord(1/3)
    
    Abstract
    Rocchio's similarity-based relevance feedback algorithm, one of the most important query reformation methods in information retrieval, is essentially an adaptive algorithm for learning from examples in searching for documents represented by a linear classifier. Despite its popularity in various applications, there is little rigorous analysis of its learning complexity in the literature. In this article, the authors prove for the first time that the learning complexity of Rocchio's algorithm is O(d + d**2(log d + log n)) over the discretized vector space {0, ... , n - 1}**d when the inner product similarity measure is used. The upper bound on the learning complexity for searching for documents represented by a monotone linear classifier (q, 0) over {0, ... , n - 1}**d can be improved to, at most, 1 + 2k (n - 1) (log d + log(n - 1)), where k is the number of nonzero components in q. Several lower bounds on the learning complexity are also obtained for Rocchio's algorithm. For example, the authors prove that Rocchio's algorithm has a lower bound Omega((d choose 2) log n) on its learning complexity over the Boolean vector space {0,1}**d.
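The abstract analyzes the learning complexity of Rocchio's algorithm; the query-update rule itself is not restated there, so the following is a sketch of the standard textbook formulation q' = alpha*q + beta*centroid(relevant) - gamma*centroid(nonrelevant), with the usual illustrative weights (not taken from the article):

```python
def rocchio_update(q, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """One Rocchio relevance-feedback step over d-dimensional term vectors
    (plain lists of floats; weights are the common textbook defaults)."""
    d = len(q)

    def centroid(docs):
        if not docs:
            return [0.0] * d
        return [sum(doc[i] for doc in docs) / len(docs) for i in range(d)]

    cr, cn = centroid(relevant), centroid(nonrelevant)
    # Move the query toward relevant documents and away from nonrelevant ones
    return [alpha * q[i] + beta * cr[i] - gamma * cn[i] for i in range(d)]

q2 = rocchio_update([1.0, 0.0, 0.0],
                    relevant=[[0.0, 1.0, 0.0], [0.0, 1.0, 1.0]],
                    nonrelevant=[[0.0, 0.0, 1.0]])
print(q2)  # ≈ [1.0, 0.75, 0.225]
```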
  20. Dreßler, H.: Fuzzy Information Retrieval (2008) 0.01
    0.008458034 = product of:
      0.025374101 = sum of:
        0.025374101 = weight(_text_:d in 2300) [ClassicSimilarity], result of:
          0.025374101 = score(doc=2300,freq=4.0), product of:
            0.08547641 = queryWeight, product of:
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.044990618 = queryNorm
            0.296855 = fieldWeight in 2300, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.899872 = idf(docFreq=17979, maxDocs=44218)
              0.078125 = fieldNorm(doc=2300)
      0.33333334 = coord(1/3)
    
    Abstract
    After an explanation of the foundations of fuzzy logic, the principle of fuzzy search is presented and the differences from conventional information retrieval are described. Using the example of a search for stones for a piece of masonry, it is shown how a fuzzy search can be carried out successfully in the D&W fuzzy database and leads to unambiguous results.
    Language
    d
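The fuzzy-search principle in the last abstract (degrees of membership instead of exact matches) can be sketched roughly as follows; the attribute, query range, and membership function are illustrative assumptions, not taken from the D&W database:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Hypothetical fuzzy query: stone length "around 20-24 cm". Each candidate gets a
# degree of match in [0, 1] rather than a hard yes/no, and results are ranked by it.
stones = {"A": 18.0, "B": 21.5, "C": 26.0, "D": 23.9}
matches = sorted(((trapezoid(length, 17, 20, 24, 27), name)
                  for name, length in stones.items()), reverse=True)
print(matches)
```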

Languages

  • e 59
  • d 40
  • pt 1

Types

  • a 84
  • x 7
  • el 3
  • m 3
  • r 2
  • s 2
  • d 1