Search (43 results, page 1 of 3)

  • Active filter: theme_ss:"Retrievalalgorithmen"
  1. Faloutsos, C.: Signature files (1992) 0.05
    0.051622465 = product of:
      0.10324493 = sum of:
        0.10324493 = sum of:
          0.04673685 = weight(_text_:classification in 3499) [ClassicSimilarity], result of:
            0.04673685 = score(doc=3499,freq=2.0), product of:
              0.16603322 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05213454 = queryNorm
              0.28149095 = fieldWeight in 3499, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.0625 = fieldNorm(doc=3499)
          0.056508083 = weight(_text_:22 in 3499) [ClassicSimilarity], result of:
            0.056508083 = score(doc=3499,freq=2.0), product of:
              0.18256627 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05213454 = queryNorm
              0.30952093 = fieldWeight in 3499, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3499)
      0.5 = coord(1/2)
    
    Abstract
    Presents a survey and discussion of signature-based text retrieval methods. It describes the main idea behind the signature approach and its advantages over other text retrieval methods; provides a classification of the signature methods that have appeared in the literature; describes the main representatives of each class, together with their relative advantages and drawbacks; and gives a list of applications as well as commercial and university prototypes that use the signature approach.
    Date
    7. 5.1999 15:22:48
  2. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.03
    0.028254041 = product of:
      0.056508083 = sum of:
        0.056508083 = product of:
          0.113016166 = sum of:
            0.113016166 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.113016166 = score(doc=402,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  3. Smeaton, A.F.; Rijsbergen, C.J. van: ¬The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.02
    0.024722286 = product of:
      0.04944457 = sum of:
        0.04944457 = product of:
          0.09888914 = sum of:
            0.09888914 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.09888914 = score(doc=2134,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    30. 3.2001 13:32:22
  4. Back, J.: ¬An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.02
    0.024722286 = product of:
      0.04944457 = sum of:
        0.04944457 = product of:
          0.09888914 = sum of:
            0.09888914 = weight(_text_:22 in 3445) [ClassicSimilarity], result of:
              0.09888914 = score(doc=3445,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.5416616 = fieldWeight in 3445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3445)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    25. 8.2005 17:42:22
  5. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.02
    0.02119053 = product of:
      0.04238106 = sum of:
        0.04238106 = product of:
          0.08476212 = sum of:
            0.08476212 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.08476212 = score(doc=58,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    14. 6.2015 22:12:44
  6. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.02
    0.02119053 = product of:
      0.04238106 = sum of:
        0.04238106 = product of:
          0.08476212 = sum of:
            0.08476212 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.08476212 = score(doc=2051,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    14. 6.2015 22:12:56
  7. Schiminovich, S.: Automatic classification and retrieval of documents by means of a bibliographic pattern discovery algorithm (1971) 0.02
    0.020447372 = product of:
      0.040894743 = sum of:
        0.040894743 = product of:
          0.081789486 = sum of:
            0.081789486 = weight(_text_:classification in 4846) [ClassicSimilarity], result of:
              0.081789486 = score(doc=4846,freq=2.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.49260917 = fieldWeight in 4846, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4846)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  8. Srinivasan, P.: Intelligent information retrieval using rough set approximations (1989) 0.02
    0.017707944 = product of:
      0.035415888 = sum of:
        0.035415888 = product of:
          0.070831776 = sum of:
            0.070831776 = weight(_text_:classification in 2526) [ClassicSimilarity], result of:
              0.070831776 = score(doc=2526,freq=6.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.42661208 = fieldWeight in 2526, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2526)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The theory of rough sets was introduced in 1982. It allows the classification of objects into sets of equivalent members based on their attributes. Any combination of the same objects (or even of their attributes) may be examined using the resulting classification. The theory has direct applications in the design and evaluation of classification schemes and in the selection of discriminating attributes. Introductory papers discuss its application in the domain of medical diagnostic systems and in the design of information retrieval systems accessing collections of documents. Advantages offered by the theory are: the implicit inclusion of Boolean logic; term weighting; and the ability to rank retrieved documents.
  9. Bauckhage, C.: Marginalizing over the PageRank damping factor (2014) 0.01
    0.014605265 = product of:
      0.02921053 = sum of:
        0.02921053 = product of:
          0.05842106 = sum of:
            0.05842106 = weight(_text_:classification in 928) [ClassicSimilarity], result of:
              0.05842106 = score(doc=928,freq=2.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.35186368 = fieldWeight in 928, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.078125 = fieldNorm(doc=928)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this note, we show how to marginalize over the damping parameter of the PageRank equation so as to obtain a parameter-free version known as TotalRank. Our discussion is meant as a reference and intended to provide a guided tour towards an interesting result that has applications in information retrieval and classification.
  10. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing for passage retrieval (2004) 0.01
    0.014127021 = product of:
      0.028254041 = sum of:
        0.028254041 = product of:
          0.056508083 = sum of:
            0.056508083 = weight(_text_:22 in 5108) [ClassicSimilarity], result of:
              0.056508083 = score(doc=5108,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.30952093 = fieldWeight in 5108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5108)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 1.2007 18:30:22
  11. Losada, D.E.; Barreiro, A.: Embedding term similarity and inverse document frequency into a logical model of information retrieval (2003) 0.01
    0.014127021 = product of:
      0.028254041 = sum of:
        0.028254041 = product of:
          0.056508083 = sum of:
            0.056508083 = weight(_text_:22 in 1422) [ClassicSimilarity], result of:
              0.056508083 = score(doc=1422,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.30952093 = fieldWeight in 1422, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1422)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2003 19:27:23
  12. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.01
    0.014127021 = product of:
      0.028254041 = sum of:
        0.028254041 = product of:
          0.056508083 = sum of:
            0.056508083 = weight(_text_:22 in 1431) [ClassicSimilarity], result of:
              0.056508083 = score(doc=1431,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.30952093 = fieldWeight in 1431, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1431)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2014 17:05:18
  13. Tober, M.; Hennig, L.; Furch, D.: SEO Ranking-Faktoren und Rang-Korrelationen 2014 : Google Deutschland (2014) 0.01
    0.014127021 = product of:
      0.028254041 = sum of:
        0.028254041 = product of:
          0.056508083 = sum of:
            0.056508083 = weight(_text_:22 in 1484) [ClassicSimilarity], result of:
              0.056508083 = score(doc=1484,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.30952093 = fieldWeight in 1484, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1484)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13. 9.2014 14:45:22
  14. Kang, I.-H.; Kim, G.C.: Integration of multiple evidences based on a query type for web search (2004) 0.01
    0.012648531 = product of:
      0.025297062 = sum of:
        0.025297062 = product of:
          0.050594125 = sum of:
            0.050594125 = weight(_text_:classification in 2568) [ClassicSimilarity], result of:
              0.050594125 = score(doc=2568,freq=6.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.3047229 = fieldWeight in 2568, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2568)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The massive and heterogeneous Web exacerbates IR problems, and short user queries make them worse. The contents of web pages alone are not enough to find answer pages. PageRank compensates for the insufficiency of content information, and the two are combined to get better results. However, a static combination of multiple evidences may lower retrieval performance, so different strategies are needed to meet the needs of a user. User queries can be classified into three categories according to the user's intent: the topic relevance task, the homepage finding task, and the service finding task. In this paper, we present a user query classification method. Differences in distribution, mutual information, the usage rate as anchor text, and part-of-speech (POS) information are used for the classification. After classifying a user query, we apply different algorithms and information sources for better results. For the topic relevance task we emphasize the content information; for the homepage finding task we emphasize the link information and the URL information. We obtained the best performance when our proposed classification method was used with the Okapi scoring algorithm.
  15. González-Ibáñez, R.; Esparza-Villamán, A.; Vargas-Godoy, J.C.; Shah, C.: ¬A comparison of unimodal and multimodal models for implicit detection of relevance in interactive IR (2019) 0.01
    0.012648531 = product of:
      0.025297062 = sum of:
        0.025297062 = product of:
          0.050594125 = sum of:
            0.050594125 = weight(_text_:classification in 5417) [ClassicSimilarity], result of:
              0.050594125 = score(doc=5417,freq=6.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.3047229 = fieldWeight in 5417, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5417)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Implicit detection of relevance has been approached by many during the last decade. From the use of individual measures to the use of multiple features from different sources (multimodality), studies have shown the feasibility of automatically detecting whether a document is relevant. Despite promising results, it is not yet clear to what extent multimodality constitutes an effective approach compared to unimodality. In this article, we hypothesize that it is possible to build unimodal models capable of outperforming multimodal models in the detection of perceived relevance. To test this hypothesis, we conducted three experiments to compare unimodal and multimodal classification models built using a combination of 24 features. Our classification experiments showed that a univariate unimodal model based on the left-click feature supports our hypothesis. On the other hand, our prediction experiment suggests that multimodality slightly improves early classification compared to the best unimodal models. Based on our results, we argue that the feasibility of practical applications of state-of-the-art multimodal approaches may be strongly constrained by technological, cultural, ethical, and legal aspects, in which case unimodality may offer a better alternative today for supporting relevance detection in interactive information retrieval systems.
  16. Ravana, S.D.; Rajagopal, P.; Balakrishnan, V.: Ranking retrieval systems using pseudo relevance judgments (2015) 0.01
    0.012486639 = product of:
      0.024973279 = sum of:
        0.024973279 = product of:
          0.049946558 = sum of:
            0.049946558 = weight(_text_:22 in 2591) [ClassicSimilarity], result of:
              0.049946558 = score(doc=2591,freq=4.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.27358043 = fieldWeight in 2591, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2591)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 1.2015 18:30:22
    18. 9.2018 18:22:56
  17. Biskri, I.; Rompré, L.: Using association rules for query reformulation (2012) 0.01
    0.012392979 = product of:
      0.024785958 = sum of:
        0.024785958 = product of:
          0.049571916 = sum of:
            0.049571916 = weight(_text_:classification in 92) [ClassicSimilarity], result of:
              0.049571916 = score(doc=92,freq=4.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.29856625 = fieldWeight in 92, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.046875 = fieldNorm(doc=92)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this paper the authors present research on the combination of two methods of data mining: text classification and maximal association rules. Text classification has long been the focus of interest of many researchers. However, the results take the form of lists of words (classes) that people often do not know what to do with. The use of maximal association rules yields a number of advantages: (1) the detection of dependencies and correlations between the relevant units of information (words) of different classes, and (2) the extraction of hidden, often relevant, knowledge from a large volume of data. The authors show how this combination can improve the process of information retrieval.
  18. Chang, C.-H.; Hsu, C.-C.: Integrating query expansion and conceptual relevance feedback for personalized Web information retrieval (1998) 0.01
    0.012361143 = product of:
      0.024722286 = sum of:
        0.024722286 = product of:
          0.04944457 = sum of:
            0.04944457 = weight(_text_:22 in 1319) [ClassicSimilarity], result of:
              0.04944457 = score(doc=1319,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.2708308 = fieldWeight in 1319, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1319)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:08:06
  19. Kanaeva, Z.: Ranking: Google und CiteSeer (2005) 0.01
    0.012361143 = product of:
      0.024722286 = sum of:
        0.024722286 = product of:
          0.04944457 = sum of:
            0.04944457 = weight(_text_:22 in 3276) [ClassicSimilarity], result of:
              0.04944457 = score(doc=3276,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.2708308 = fieldWeight in 3276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3276)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 3.2005 16:23:22
  20. Hofferer, M.: Heuristic search in information retrieval (1994) 0.01
    0.011684213 = product of:
      0.023368426 = sum of:
        0.023368426 = product of:
          0.04673685 = sum of:
            0.04673685 = weight(_text_:classification in 1070) [ClassicSimilarity], result of:
              0.04673685 = score(doc=1070,freq=2.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.28149095 = fieldWeight in 1070, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1070)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes an adaptive information retrieval system, the Information Retrieval Algorithm System (IRAS), which uses heuristic searching to sample a document space and retrieve relevant documents according to users' requests. IRAS also includes a learning module, based on a knowledge representation system and an approximate probabilistic characterization of relevant documents, that reproduces a user's classification of relevant documents and provides rule-controlled ranking.
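
Note on the scores: the tree shown under each result is Lucene's ClassicSimilarity (TF-IDF) explanation of how that result's relevance score was computed. As a worked check, the short Python sketch below reproduces the score of result 1 (doc 3499) from the numbers in its tree. The helper name term_score is ours, not a Lucene API; all constants are read directly off the explain output above.

    import math

    def term_score(freq, idf, query_norm, field_norm):
        """Per-term score as shown in the explain tree:
        queryWeight * fieldWeight, where
        queryWeight = idf * queryNorm and
        fieldWeight = sqrt(freq) * idf * fieldNorm."""
        query_weight = idf * query_norm
        field_weight = math.sqrt(freq) * idf * field_norm
        return query_weight * field_weight

    QUERY_NORM = 0.05213454   # queryNorm, shared by both scored terms
    FIELD_NORM = 0.0625       # fieldNorm(doc=3499)

    w_classification = term_score(2.0, 3.1847067, QUERY_NORM, FIELD_NORM)
    w_22 = term_score(2.0, 3.5018296, QUERY_NORM, FIELD_NORM)

    # coord(1/2): only one of the two top-level query clauses matched
    score = (w_classification + w_22) * 0.5
    print(round(score, 9))    # ~0.051622465, matching the reported score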
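
Result 9's abstract refers to marginalizing PageRank over its damping factor, the idea behind the parameter-free TotalRank. A minimal numerical sketch of that idea, assuming a toy 3-node graph and a crude grid quadrature (both ours, not from the paper):

    import numpy as np

    def pagerank(P, alpha, n_iter=200):
        """Power iteration for PageRank with damping factor alpha.
        P is a column-stochastic transition matrix."""
        n = P.shape[0]
        r = np.full(n, 1.0 / n)
        for _ in range(n_iter):
            r = alpha * P @ r + (1.0 - alpha) / n
        return r

    # Toy graph; column j holds the out-link distribution of node j.
    P = np.array([[0.0, 0.5, 0.5],
                  [0.5, 0.0, 0.5],
                  [0.5, 0.5, 0.0]])

    # Marginalize over alpha ~ Uniform(0, 1) by averaging the PageRank
    # vectors over a grid of damping values (crude quadrature).
    alphas = np.linspace(0.0, 0.99, 100)
    total_rank = np.mean([pagerank(P, a) for a in alphas], axis=0)
    print(total_rank)   # uniform here, by symmetry of the toy graph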

Languages

  • e 38
  • d 5

Types

  • a 40
  • m 2
  • el 1
  • r 1