Search (49 results, page 1 of 3)

  • Active filter: theme_ss:"Retrievalalgorithmen"
  1. Shah, B.; Raghavan, V.; Dhatric, P.; Zhao, X.: A cluster-based approach for efficient content-based image retrieval using a similarity-preserving space transformation method (2006) 0.02
    0.018693518 = product of:
      0.046733793 = sum of:
        0.017168067 = product of:
          0.034336135 = sum of:
            0.034336135 = weight(_text_:problems in 6118) [ClassicSimilarity], result of:
              0.034336135 = score(doc=6118,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.22801295 = fieldWeight in 6118, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6118)
          0.5 = coord(1/2)
        0.029565725 = product of:
          0.05913145 = sum of:
            0.05913145 = weight(_text_:etc in 6118) [ClassicSimilarity], result of:
              0.05913145 = score(doc=6118,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.2992217 = fieldWeight in 6118, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6118)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
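    The tree above is Lucene's ClassicSimilarity explain output, and its nested products can be checked by hand: each term's contribution is queryWeight (idf x queryNorm) times fieldWeight (tf x idf x fieldNorm), halved by coord(1/2), summed, and scaled by coord(2/5). A minimal sketch in Python, using only the constants printed in the tree (the helper name term_contribution is ours, not Lucene's):

```python
# Re-derive the ClassicSimilarity score for result 1 (doc 6118) from the
# constants shown in the explain tree above.

def term_contribution(tf, idf, field_norm, query_norm):
    """weight(_text_:term) = queryWeight * fieldWeight, where
    queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm."""
    return (idf * query_norm) * (tf * idf * field_norm)

QUERY_NORM = 0.036484417
FIELD_NORM = 0.0390625
TF = 2.0 ** 0.5                       # tf(freq=2.0) = sqrt(2) = 1.4142135

problems = term_contribution(TF, 4.1274753, FIELD_NORM, QUERY_NORM)
etc = term_contribution(TF, 5.4164915, FIELD_NORM, QUERY_NORM)

# Each clause sits under its own coord(1/2); the query matched 2 of 5 clauses.
score = (problems * 0.5 + etc * 0.5) * (2 / 5)
print(f"{score:.9f}")                 # ~0.018693518, matching the tree
```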
    
    Abstract
    The techniques of clustering and space transformation have been successfully used in the past to solve a number of pattern recognition problems. In this article, the authors propose a new approach to content-based image retrieval (CBIR) that uses (a) a newly proposed similarity-preserving space transformation method to transform the original low-level image space into a high-level vector space that enables efficient query processing, and (b) a clustering scheme that further improves the efficiency of the retrieval system. This combination is unique, and the resulting system provides the synergistic advantages of both clustering and space transformation. The proposed space transformation method is shown to preserve the order of the distances in the transformed feature space. This strategy makes the retrieval approach generic, as it can be applied to object types other than images and to feature spaces more general than metric spaces. The CBIR approach uses the inexpensive "estimated" distance in the transformed space, as opposed to the computationally expensive "real" distance in the original space, to retrieve the desired results for a given query image. The authors also provide a theoretical analysis of the complexity of their CBIR approach when used for color-based retrieval, which shows that it is computationally more efficient than other comparable approaches. An extensive set of experiments to test the efficiency and effectiveness of the proposed approach has been performed. The results show that the approach offers superior response time (an improvement of 1-2 orders of magnitude over retrieval approaches that use either pruning techniques such as indexing, clustering, etc., or space transformation, but not both) with sufficiently high retrieval accuracy.
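    The paper's "similarity-preserving space transformation" is not reproduced here, but a classical way to get the same effect, mapping objects into a vector space whose cheap distances approximate expensive originals, is a pivot-based embedding. The sketch below is a hypothetical illustration of that general idea, not the authors' method; the names and toy data are assumptions.

```python
# Hypothetical pivot-based embedding: map each object to its vector of
# distances to a few fixed reference objects ("pivots"). Distances in the
# embedded space then cheaply approximate (lower-bound) the real metric.
import numpy as np

def embed(objects, pivots, dist):
    return np.array([[dist(o, p) for p in pivots] for o in objects])

def estimated_dist(u, v):
    # Chebyshev distance in pivot space lower-bounds the true metric distance.
    return float(np.max(np.abs(u - v)))

rng = np.random.default_rng(0)
images = rng.random((1000, 64))               # toy colour histograms
l1 = lambda a, b: float(np.abs(a - b).sum())  # the expensive "real" distance

pivots = images[:8]                           # a few reference images
E = embed(images, pivots, l1)                 # transform once, offline

query = rng.random(64)
q = embed([query], pivots, l1)[0]
order = np.argsort([estimated_dist(q, e) for e in E])
print(order[:5])                              # candidates by estimated distance
```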
  2. Ravana, S.D.; Rajagopal, P.; Balakrishnan, V.: Ranking retrieval systems using pseudo relevance judgments (2015) 0.01
    0.013857874 = product of:
      0.034644686 = sum of:
        0.017168067 = product of:
          0.034336135 = sum of:
            0.034336135 = weight(_text_:problems in 2591) [ClassicSimilarity], result of:
              0.034336135 = score(doc=2591,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.22801295 = fieldWeight in 2591, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2591)
          0.5 = coord(1/2)
        0.01747662 = product of:
          0.03495324 = sum of:
            0.03495324 = weight(_text_:22 in 2591) [ClassicSimilarity], result of:
              0.03495324 = score(doc=2591,freq=4.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.27358043 = fieldWeight in 2591, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2591)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Purpose: In a system-based approach, replicating the web would require large test collections, and having human assessors judge the relevance of every document per topic to create relevance judgments is infeasible. Because of the large number of documents requiring judgment, human assessors may also introduce errors through disagreement. The paper aims to discuss these issues. Design/methodology/approach: This study explores exponential variation and document ranking methods that generate a reliable set of relevance judgments (pseudo relevance judgments) to reduce human effort. These methods overcome the problem of judging large numbers of documents while avoiding human disagreement errors during the judgment process. The study utilizes two key factors: the number of occurrences of each document per topic across all the system runs, and document rankings, to generate the alternate methods. Findings: The effectiveness of the proposed method is evaluated using the correlation coefficient of systems ranked by mean average precision scores under the original Text REtrieval Conference (TREC) relevance judgments and under the pseudo relevance judgments. The results suggest that the proposed document ranking method with a pool depth of 100 could be a reliable alternative that reduces the human effort and disagreement errors involved in generating TREC-like relevance judgments. Originality/value: The simple methods proposed in this study improve the correlation coefficient when generating alternate relevance judgments without human assessors, contributing to information retrieval evaluation.
    Date
    20. 1.2015 18:30:22
    18. 9.2018 18:22:56
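    The occurrence-counting factor in the abstract above lends itself to a short sketch: documents returned by many runs within the pool depth are taken as (pseudo) relevant. The majority-vote threshold below is our assumption, not the paper's exact formulation.

```python
# Hedged sketch of pseudo relevance judgments from run agreement: count how
# many system runs retrieve each document within the pool depth and keep the
# documents passing a vote threshold (majority here, by assumption).
from collections import Counter

def pseudo_qrels(runs, pool_depth=100, min_votes=None):
    """runs: one ranked list of doc ids per system, for a single topic."""
    votes = Counter(doc for run in runs for doc in run[:pool_depth])
    if min_votes is None:
        min_votes = len(runs) // 2 + 1        # simple majority, assumed
    return {doc for doc, v in votes.items() if v >= min_votes}

runs = [["d1", "d2", "d3"], ["d2", "d3", "d4"], ["d2", "d5", "d1"]]
print(pseudo_qrels(runs))                     # {'d1', 'd2', 'd3'}
```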
  3. Liu, J.; Liu, C.: Personalization in text information retrieval : a survey (2020) 0.01
    0.01003494 = product of:
      0.050174702 = sum of:
        0.050174702 = product of:
          0.100349404 = sum of:
            0.100349404 = weight(_text_:etc in 5761) [ClassicSimilarity], result of:
              0.100349404 = score(doc=5761,freq=4.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.50779605 = fieldWeight in 5761, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5761)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Personalization of information retrieval (PIR) is aimed at tailoring a search toward individual users and user groups by taking account of additional information about users besides their queries. In the past two decades or so, PIR has received extensive attention in both academia and industry. This article surveys the literature of personalization in text retrieval, following a framework for aspects or factors that can be used for personalization. The framework consists of additional information about users that can be explicitly obtained by asking users for their preferences, or implicitly inferred from users' search behaviors. Users' characteristics and contextual factors such as tasks, time, location, etc., can be helpful for personalization. This article also addresses various issues including when to personalize, the evaluation of PIR, privacy, usability, etc. Based on the extensive review, challenges are discussed and directions for future effort are suggested.
  4. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.01
    0.007909016 = product of:
      0.039545078 = sum of:
        0.039545078 = product of:
          0.079090156 = sum of:
            0.079090156 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.079090156 = score(doc=402,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  5. Jones, G.; Robertson, A.M.; Willett, P.: An introduction to genetic algorithms and to their use in information retrieval (1994) 0.01
    0.0077693807 = product of:
      0.038846903 = sum of:
        0.038846903 = product of:
          0.077693805 = sum of:
            0.077693805 = weight(_text_:problems in 7415) [ClassicSimilarity], result of:
              0.077693805 = score(doc=7415,freq=4.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.5159344 = fieldWeight in 7415, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7415)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    This paper provides an introduction to genetic algorithms, a new approach to the investigation of computationally intensive problems that may be insoluble using conventional, deterministic approaches. A genetic algorithm takes an initial set of possible starting solutions and then iteratively improves these solutions using operators that are analogous to those involved in Darwinian evolution. The approach is illustrated by reference to several problems in information retrieval.
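    A minimal genetic algorithm showing the loop the abstract describes (selection, crossover, mutation over a population of candidate solutions); the bit-string encoding and toy fitness are illustrative assumptions, not taken from the paper.

```python
# Minimal genetic algorithm: tournament selection, one-point crossover,
# point mutation. Toy fitness = number of 1-bits (an assumption).
import random

random.seed(1)
GENES, POP, GENERATIONS = 20, 30, 40
fitness = lambda ind: sum(ind)

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    nxt = []
    while len(nxt) < POP:
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, GENES)       # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.1:              # occasional mutation
            i = random.randrange(GENES)
            child[i] ^= 1
        nxt.append(child)
    pop = nxt
print(max(map(fitness, pop)))                  # best solution found
```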
  6. Nakkouzi, Z.S.; Eastman, C.M.: Query formulation for handling negation in information retrieval systems (1990) 0.01
    0.0077693807 = product of:
      0.038846903 = sum of:
        0.038846903 = product of:
          0.077693805 = sum of:
            0.077693805 = weight(_text_:problems in 3531) [ClassicSimilarity], result of:
              0.077693805 = score(doc=3531,freq=4.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.5159344 = fieldWeight in 3531, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3531)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Queries containing negation are widely recognised as presenting problems for both users and systems. In information retrieval systems such problems usually manifest themselves in the use of the NOT operator. Describes an algorithm to transform Boolean queries with negated terms into queries without negation; the transformation process is based on the use of a hierarchical thesaurus. Examines a set of user requests submitted to the Thomas Cooper Library at the University of South Carolina to determine the pattern and frequency of use of negation.
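    The transformation the abstract outlines can be sketched in a few lines: a negated term is replaced by the disjunction of its siblings under the same parent in the thesaurus, so the query no longer needs NOT. The toy thesaurus and query shape below are assumptions for illustration.

```python
# Hedged sketch: rewrite "A AND NOT B" as "A AND (sibling1 OR sibling2 ...)"
# using a hierarchical thesaurus; B's siblings stand in for "everything
# under B's parent except B".
thesaurus = {"animals": ["cats", "dogs", "birds"]}   # parent -> narrower terms
parent_of = {c: p for p, kids in thesaurus.items() for c in kids}

def remove_negation(positive, negated):
    siblings = [t for t in thesaurus[parent_of[negated]] if t != negated]
    return f"{positive} AND ({' OR '.join(siblings)})"

print(remove_negation("pets", "cats"))   # pets AND (dogs OR birds)
```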
  7. Smeaton, A.F.; Rijsbergen, C.J. van: The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.01
    0.0069203894 = product of:
      0.034601945 = sum of:
        0.034601945 = product of:
          0.06920389 = sum of:
            0.06920389 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.06920389 = score(doc=2134,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    30. 3.2001 13:32:22
  8. Back, J.: ¬An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.01
    0.0069203894 = product of:
      0.034601945 = sum of:
        0.034601945 = product of:
          0.06920389 = sum of:
            0.06920389 = weight(_text_:22 in 3445) [ClassicSimilarity], result of:
              0.06920389 = score(doc=3445,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.5416616 = fieldWeight in 3445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3445)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    25. 8.2005 17:42:22
  9. Willett, P.: Best-match text retrieval (1993) 0.01
    0.006867227 = product of:
      0.034336135 = sum of:
        0.034336135 = product of:
          0.06867227 = sum of:
            0.06867227 = weight(_text_:problems in 7818) [ClassicSimilarity], result of:
              0.06867227 = score(doc=7818,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.4560259 = fieldWeight in 7818, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7818)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Provides an introduction to the computational techniques that underlie best-match searching in retrieval systems. Discusses: the problems of traditional Boolean systems; the characteristics of best-match searching; automatic indexing; term conflation; and the matching of documents and queries (dealing with similarity measures, initial weights, relevance weights, and the matching algorithm); and describes operational best-match systems.
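    The matching machinery listed in the abstract (term weights, similarity measure, matching algorithm) fits a compact sketch: an inverted file plus tf-idf accumulators. The weighting variant below is one common choice, not the chapter's only one.

```python
# Sketch of accumulator-based best-match scoring over an inverted file with
# tf-idf weights; documents sharing more, rarer query terms rank higher.
import math
from collections import defaultdict

docs = {
    "d1": "boolean systems exact match",
    "d2": "best match ranking of documents",
    "d3": "ranking documents by term weights",
}
index = defaultdict(dict)                     # term -> {doc: term frequency}
for d, text in docs.items():
    for t in text.split():
        index[t][d] = index[t].get(d, 0) + 1

N = len(docs)

def best_match(query, k=3):
    scores = defaultdict(float)
    for t in query.split():
        if t not in index:
            continue
        idf = math.log(N / len(index[t]))
        for d, tf in index[t].items():
            scores[d] += (1 + math.log(tf)) * idf   # tf-idf accumulation
    return sorted(scores.items(), key=lambda x: -x[1])[:k]

print(best_match("ranking match documents"))
```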
  10. Zhang, W.; Korf, R.E.: Performance of linear-space search algorithms (1995) 0.01
    0.006867227 = product of:
      0.034336135 = sum of:
        0.034336135 = product of:
          0.06867227 = sum of:
            0.06867227 = weight(_text_:problems in 4744) [ClassicSimilarity], result of:
              0.06867227 = score(doc=4744,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.4560259 = fieldWeight in 4744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4744)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Search algorithms in artificial intelligence systems that use space linear in the search depth are employed in practice to solve difficult problems optimally, such as planning and scheduling. Studies the average-case performance of linear-space search algorithms, including depth-first branch-and-bound, iterative-deepening, and recursive best-first search
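    Depth-first branch-and-bound, one of the algorithms studied, needs memory only along the current path. The sketch below runs it on a random tree with unit-interval edge costs, a toy stand-in for the random-tree model such average-case studies use; the tree parameters are assumptions.

```python
# Depth-first branch-and-bound on a random tree: prune any branch whose
# accumulated cost already meets or exceeds the best complete solution.
# Memory use is linear in the search depth.
import random

random.seed(0)
DEPTH, BRANCH = 10, 2
best = float("inf")

def dfbnb(depth, g):
    global best
    if depth == DEPTH:
        best = min(best, g)                  # complete solution found
        return
    for _ in range(BRANCH):
        cost = random.random()               # toy random edge cost
        if g + cost < best:                  # bound: skip hopeless branches
            dfbnb(depth + 1, g + cost)

dfbnb(0, 0.0)
print(f"cheapest leaf found: {best:.4f}")
```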
  11. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.01
    0.005931762 = product of:
      0.02965881 = sum of:
        0.02965881 = product of:
          0.05931762 = sum of:
            0.05931762 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.05931762 = score(doc=58,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    14. 6.2015 22:12:44
  12. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.01
    0.005931762 = product of:
      0.02965881 = sum of:
        0.02965881 = product of:
          0.05931762 = sum of:
            0.05931762 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.05931762 = score(doc=2051,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    14. 6.2015 22:12:56
  13. Lewandowski, D.: How can library materials be ranked in the OPAC? (2009) 0.01
    0.0059131454 = product of:
      0.029565725 = sum of:
        0.029565725 = product of:
          0.05913145 = sum of:
            0.05913145 = weight(_text_:etc in 2810) [ClassicSimilarity], result of:
              0.05913145 = score(doc=2810,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.2992217 = fieldWeight in 2810, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2810)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Some Online Public Access Catalogues offer a ranking component. However, ranking there is merely text-based and is bound to fail because bibliographic data contain only limited text. The main assumption of the talk is that we are in a situation where the appropriate ranking factors for OPACs should be defined, while the implementation is no major problem. We must define what we want, and not focus so much on the technical work. Some deep thinking is necessary on the "perfect result set" and how we can achieve it through ranking. The talk presents a set of potential ranking factors and clustering possibilities for further discussion. A look at commercial Web search engines could provide us with ideas on how ranking can be improved with additional factors. Search engines are way beyond pure text-based ranking and apply ranking factors in groups such as popularity, freshness, personalisation, etc. The talk describes the main factors used in search engines and how derivatives of these could be used for libraries' purposes. The goal of ranking is to provide the user with the best-suited results at the top of the results list. How can this goal be achieved with the library catalogue, and across the library's different collections and databases? The assumption is that ranking such materials is a complex problem that is as yet nowhere near solved. Libraries should focus on ranking to improve the user experience.
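    The factor groups the talk borrows from Web search engines suggest a simple combination scheme: a weighted sum of normalised factor scores per record. The sketch below is purely hypothetical; the factor names and weights are assumptions for discussion, exactly the kind of choice the talk says libraries still need to make.

```python
# Hypothetical multi-factor OPAC ranking: weighted sum of text relevance,
# popularity (e.g. circulation counts), and freshness. Weights are assumed.
WEIGHTS = {"text": 0.5, "popularity": 0.3, "freshness": 0.2}

def opac_rank(records):
    score = lambda r: sum(w * r.get(f, 0.0) for f, w in WEIGHTS.items())
    return sorted(records, key=score, reverse=True)

records = [
    {"title": "older, much-borrowed handbook",
     "text": 0.6, "popularity": 0.9, "freshness": 0.2},
    {"title": "new but rarely used report",
     "text": 0.7, "popularity": 0.1, "freshness": 0.9},
]
for rec in opac_rank(records):
    print(rec["title"])
```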
  14. Ding, Y.; Chowdhury, G.; Foo, S.: Organising keywords in a Web search environment : a methodology based on co-word analysis (2000) 0.01
    0.0058270353 = product of:
      0.029135175 = sum of:
        0.029135175 = product of:
          0.05827035 = sum of:
            0.05827035 = weight(_text_:problems in 105) [ClassicSimilarity], result of:
              0.05827035 = score(doc=105,freq=4.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.3869508 = fieldWeight in 105, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.046875 = fieldNorm(doc=105)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The rapid development of the Internet and World Wide Web has caused critical problems for information retrieval. Researchers have made several attempts to solve these problems. Thesauri and subject heading lists, as traditional information retrieval tools, have been criticised for their limited ability to tackle these newly emerging problems. This paper proposes an information retrieval tool generated by co-word analysis, comprising keyword clusters with relationships based on the co-occurrences of keywords in the literature. Such a tool can play the role of an associative thesaurus that can provide information about the keywords in a domain that might be useful for information searching and query expansion.
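    The co-word construction the abstract describes reduces to counting keyword pairs across documents and keeping strongly associated pairs as links of the associative thesaurus. A minimal sketch (the threshold and toy data are assumptions):

```python
# Co-word analysis sketch: count keyword co-occurrences per document and
# keep pairs above a threshold as edges between related keywords.
from collections import Counter
from itertools import combinations

doc_keywords = [
    {"retrieval", "ranking", "web"},
    {"retrieval", "ranking"},
    {"web", "crawling"},
    {"retrieval", "web"},
]
cooc = Counter()
for kws in doc_keywords:
    for a, b in combinations(sorted(kws), 2):
        cooc[(a, b)] += 1

edges = [(pair, n) for pair, n in cooc.items() if n >= 2]   # assumed threshold
print(edges)   # [(('ranking', 'retrieval'), 2), (('retrieval', 'web'), 2)]
```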
  15. Robertson, A.M.; Willett, P.: Use of genetic algorithms in information retrieval (1995) 0.01
    0.0054937815 = product of:
      0.027468907 = sum of:
        0.027468907 = product of:
          0.054937813 = sum of:
            0.054937813 = weight(_text_:problems in 2418) [ClassicSimilarity], result of:
              0.054937813 = score(doc=2418,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.36482072 = fieldWeight in 2418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2418)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Reviews the basic techniques involving genetic algorithms and their application to two problems in information retrieval: the generation of equifrequent groups of index terms, and the identification of optimal query and term weights. The algorithm developed for the generation of equifrequent groupings proved to be effective in operation, achieving results comparable with those obtained using a good deterministic algorithm. The algorithm developed for the identification of optimal query and term weights involves a fitness function that is based on full relevance information.
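    The paper's exact fitness functions are not reproduced here, but for the equifrequent-grouping task one plausible fitness is the (negated) spread of total term frequencies across groups, so that flatter groupings score higher. This is an assumption for illustration only.

```python
# Hedged sketch: fitness for equifrequent term grouping as the negated
# difference between the heaviest and lightest group; 0 = perfectly even.
def equifrequency_fitness(groups, term_freq):
    totals = [sum(term_freq[t] for t in g) for g in groups]
    return -(max(totals) - min(totals))

term_freq = {"a": 10, "b": 7, "c": 3, "d": 6, "e": 4}
print(equifrequency_fitness([{"a"}, {"b", "c"}, {"d", "e"}], term_freq))  # 0
```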
  16. Gauch, S.; Smith, J.B.: ¬An expert system for automatic query reformation (1993) 0.00
    0.0041203364 = product of:
      0.02060168 = sum of:
        0.02060168 = product of:
          0.04120336 = sum of:
            0.04120336 = weight(_text_:problems in 3693) [ClassicSimilarity], result of:
              0.04120336 = score(doc=3693,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.27361554 = fieldWeight in 3693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3693)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Unfamiliarity with search tactics creates difficulties for many users of online retrieval systems. User observations indicate that even experienced searchers use vocabulary incorrectly and rarely reformulate their queries. To address these problems, an expert system for online search assistance was developed. This prototype automatically reformulates queries to improve the search results, and ranks the retrieved passages to speed the identification of relevant information. Users' search performance with the expert system was compared with their performance using an online thesaurus. The following conclusions were reached: (1) the expert system significantly reduced the number of queries necessary to find relevant passages compared with users searching alone or with the thesaurus; (2) the expert system produced marginally significant improvements in precision compared with users searching on their own, while there was no significant difference in the recall achieved by the three system configurations; (3) overall, the expert system ranked relevant passages above irrelevant passages.
  17. Nie, J.-Y.: Query expansion and query translation as logical inference (2003) 0.00
    0.0041203364 = product of:
      0.02060168 = sum of:
        0.02060168 = product of:
          0.04120336 = sum of:
            0.04120336 = weight(_text_:problems in 1425) [ClassicSimilarity], result of:
              0.04120336 = score(doc=1425,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.27361554 = fieldWeight in 1425, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1425)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    A number of studies have examined the problems of query expansion in monolingual Information Retrieval (IR) and of query translation for cross-language IR. However, no link has been made between them. This article first shows that query translation is a special case of query expansion. There is also another set of studies on inferential IR; again, no relationship has been established with query translation or query expansion. The second claim of this article is that logical inference is a general form that covers both query expansion and query translation. This analysis provides a unified view of different subareas of IR. We further develop the inferential IR approach in two particular contexts: using fuzzy logic and using probability theory. The evaluation formulas obtained are shown to correspond strongly to those used in other IR models. This indicates that inference is indeed the core of advanced IR.
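    One common probabilistic reading of the inferential view, sketched here with the caveat that the article's own formulas may differ, evaluates the degree to which a document implies the query by routing the inference through intermediate terms; query expansion and query translation then differ only in what the term-to-query relation encodes.

```latex
% Degree to which document d implies query q, via intermediate terms t'.
% For expansion, P(q \mid t') encodes monolingual term relations; for
% translation, t' ranges over terms of the other language.
P(d \to q) \;=\; \sum_{t'} P(q \mid t')\, P(t' \mid d)
```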
  18. Berry, M.W.; Browne, M.: Understanding search engines : mathematical modeling and text retrieval (1999) 0.00
    0.0041203364 = product of:
      0.02060168 = sum of:
        0.02060168 = product of:
          0.04120336 = sum of:
            0.04120336 = weight(_text_:problems in 5777) [ClassicSimilarity], result of:
              0.04120336 = score(doc=5777,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.27361554 = fieldWeight in 5777, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5777)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    This book discusses many of the key design issues for building search engines and emphasizes the important role that applied mathematics can play in improving information retrieval. The authors discuss not only important data structures, algorithms, and software but also user-centered issues such as interfaces, manual indexing, and document preparation. They also present some of the current problems in information retrieval that may not be familiar to applied mathematicians and computer scientists, and some of the driving computational methods (SVD, SDD) for automated conceptual indexing.
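    Of the computational methods the book names, the SVD is easy to show in miniature: factor the term-document matrix and keep the leading dimensions, which is the core of conceptual (latent semantic style) indexing. The tiny matrix is an illustrative assumption.

```python
# Rank-k SVD of a toy term-document matrix: the truncated factorisation
# smooths raw counts into a low-dimensional "conceptual" representation.
import numpy as np

A = np.array([        # rows = terms, columns = documents (toy counts)
    [2, 0, 1],
    [0, 1, 1],
    [1, 1, 0],
], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-2 approximation
print(np.round(A_k, 2))
```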
  19. White, R.W.; Jose, J.M.; Ruthven, I.: An implicit feedback approach for interactive information retrieval (2006) 0.00
    0.0041203364 = product of:
      0.02060168 = sum of:
        0.02060168 = product of:
          0.04120336 = sum of:
            0.04120336 = weight(_text_:problems in 964) [ClassicSimilarity], result of:
              0.04120336 = score(doc=964,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.27361554 = fieldWeight in 964, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.046875 = fieldNorm(doc=964)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Searchers can face problems finding the information they seek. One reason for this is that they may have difficulty devising queries to express their information needs. In this article, we describe an approach that uses unobtrusive monitoring of interaction to proactively support searchers. The approach chooses terms to better represent information needs by monitoring searcher interaction with different representations of top-ranked documents. Information needs are dynamic and can change as a searcher views information. The approach we propose gathers evidence on potential changes in these needs and uses this evidence to choose new retrieval strategies. We present an evaluation of how well our technique estimates information needs, how well it estimates changes in these needs and the appropriateness of the interface support it offers. The results are presented and the avenues for future research identified.
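    The monitoring idea in the abstract, choosing terms from the representations a searcher actually viewed, can be sketched simply: score candidate terms by how concentrated they are in viewed text relative to all retrieved text. The ratio used below is an assumption, not the article's term-weighting model.

```python
# Simplified implicit-feedback sketch: terms frequent in what the searcher
# viewed, relative to the whole result set, become expansion candidates.
from collections import Counter

viewed = ["query expansion helps searchers",
          "expansion terms from viewed text"]
all_results = viewed + ["unrelated document about storage",
                        "another storage text"]

def term_scores(viewed_texts, corpus_texts):
    seen = Counter(w for t in viewed_texts for w in t.split())
    overall = Counter(w for t in corpus_texts for w in t.split())
    return {w: seen[w] / overall[w] for w in seen}

ranked = sorted(term_scores(viewed, all_results).items(), key=lambda x: -x[1])
print(ranked[:3])        # strongest expansion candidates first
```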
  20. Li, H.; Wu, H.; Li, D.; Lin, S.; Su, Z.; Luo, X.: PSI: A probabilistic semantic interpretable framework for fine-grained image ranking (2018) 0.00
    0.0041203364 = product of:
      0.02060168 = sum of:
        0.02060168 = product of:
          0.04120336 = sum of:
            0.04120336 = weight(_text_:problems in 4577) [ClassicSimilarity], result of:
              0.04120336 = score(doc=4577,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.27361554 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4577)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Image ranking is one of the key problems in the information science research area. However, most current methods focus on improving performance, leaving the semantic gap problem intact: the learned ranking models are hard to understand. Therefore, in this article we aim at learning an interpretable ranking model to tackle the semantic gap in fine-grained image ranking. We propose to combine attribute-based representation with ranking models based on online passive-aggressive (PA) learning to achieve this goal. In addition, considering the highly localized instances in fine-grained image ranking, we introduce a supervised constrained clustering method to gather class-balanced training instances for local PA-based models, and incorporate the learned local models into a unified probabilistic framework. Extensive experiments on the benchmark demonstrate that the proposed framework outperforms state-of-the-art methods in terms of accuracy and speed.
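    The online passive-aggressive machinery the abstract builds on has a standard update for a preference pair (a preferred image x+ over a less-preferred x-): stay passive when the ranking margin is respected, otherwise take the smallest weight change that restores it. Sketched in the usual PA form below; the paper's local, constrained variant may add further terms.

```latex
% Standard passive-aggressive update for a preference pair x^+ \succ x^-:
\ell_t = \max\!\bigl(0,\; 1 - \mathbf{w}_t \cdot (\mathbf{x}^+ - \mathbf{x}^-)\bigr),
\qquad
\mathbf{w}_{t+1} = \mathbf{w}_t + \tau_t (\mathbf{x}^+ - \mathbf{x}^-),
\qquad
\tau_t = \frac{\ell_t}{\lVert \mathbf{x}^+ - \mathbf{x}^- \rVert^2}
```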

Languages

  • e 44
  • d 5

Types

  • a 41
  • m 5
  • r 2
  • el 1
  • s 1