Search (19 results, page 1 of 1)

  • × language_ss:"e"
  • × theme_ss:"Retrievalalgorithmen"
  • × year_i:[2010 TO 2020}
  1. Lee, J.; Min, J.-K.; Oh, A.; Chung, C.-W.: Effective ranking and search techniques for Web resources considering semantic relationships (2014) 0.03
    0.02812336 = product of:
      0.05624672 = sum of:
        0.05624672 = product of:
          0.08437008 = sum of:
            0.05481886 = weight(_text_:k in 2670) [ClassicSimilarity], result of:
              0.05481886 = score(doc=2670,freq=6.0), product of:
                0.16049191 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.044958513 = queryNorm
                0.34156775 = fieldWeight in 2670, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2670)
            0.029551215 = weight(_text_:c in 2670) [ClassicSimilarity], result of:
              0.029551215 = score(doc=2670,freq=2.0), product of:
                0.15508012 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044958513 = queryNorm
                0.1905545 = fieldWeight in 2670, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2670)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
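
    The explanation tree above is Lucene's ClassicSimilarity (TF-IDF). A minimal re-derivation in Python (the function name is ours; the formulas are Lucene's ClassicSimilarity defaults: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), score = queryWeight * fieldWeight):

```python
import math

def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm):
    """Per-term score as shown in Lucene's ClassicSimilarity explain output."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
    tf = math.sqrt(freq)                             # tf(freq)
    query_weight = idf * query_norm                  # queryWeight
    field_weight = tf * idf * field_norm             # fieldWeight in doc
    return query_weight * field_weight

# Term "k" in doc 2670 above: freq=6, docFreq=3384, maxDocs=44218
score_k = classic_similarity(6.0, 3384, 44218, 0.044958513, 0.0390625)
# ≈ 0.05481886, matching the weight(_text_:k in 2670) line
```

    The per-term scores are then summed and scaled by the coord factors, e.g. (0.05481886 + 0.029551215) * 2/3 * 1/2 = 0.02812336 for this first hit.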
    
    Abstract
    On the Semantic Web, the types of resources and the semantic relationships between resources are defined in an ontology. Using that information, the accuracy of information retrieval can be improved. In this paper, we present effective ranking and search techniques that consider the semantic relationships in an ontology. Our technique retrieves the top-k resources most relevant to the query keywords through their semantic relationships. To do this, we propose a weighting measure for semantic relationships. Based on this measure, we propose a novel ranking method that considers the number of meaningful semantic relationships between a resource and the keywords, as well as the coverage and discriminating power of the keywords. To improve the efficiency of the search, we prune unnecessary search space using length and weight thresholds on semantic relationship paths. In addition, we exploit the Threshold Algorithm over an extended inverted index to answer top-k queries efficiently. Experimental results on real data sets demonstrate that our retrieval method, using semantic information, generates accurate results more efficiently than traditional methods.
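
    The Threshold Algorithm the abstract builds on (Fagin's TA) can be sketched over score-sorted posting lists. This is a generic sketch, not the authors' extended-inverted-index variant; all names are ours:

```python
import heapq

def threshold_algorithm(posting_lists, k):
    """Fagin's Threshold Algorithm.

    posting_lists: dict term -> list of (doc_id, partial_score),
    each list sorted by partial_score descending.
    Returns the top-k (doc_id, total_score) pairs, best first.
    """
    # Random-access index: term -> {doc_id: partial_score}
    lookup = {t: dict(pl) for t, pl in posting_lists.items()}
    seen, top = set(), []  # top: min-heap of (total_score, doc_id)
    max_depth = max(len(pl) for pl in posting_lists.values())
    for depth in range(max_depth):
        last = {}  # last score seen under sorted access, per list
        for term, pl in posting_lists.items():
            if depth < len(pl):
                doc, s = pl[depth]
                last[term] = s
                if doc not in seen:
                    seen.add(doc)
                    # Random access: fetch this doc's score in every list.
                    total = sum(lookup[t].get(doc, 0.0) for t in posting_lists)
                    heapq.heappush(top, (total, doc))
                    if len(top) > k:
                        heapq.heappop(top)
        # Threshold: best total any still-unseen doc could reach.
        threshold = sum(last.values())
        if len(top) == k and top[0][0] >= threshold:
            break  # safe to stop early
    return sorted(((doc, score) for score, doc in top), key=lambda x: -x[1])
```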
  2. Tsai, C.-F.; Hu, Y.-H.; Chen, Z.-Y.: Factors affecting rocchio-based pseudorelevance feedback in image retrieval (2015) 0.02
    0.0204003 = product of:
      0.0408006 = sum of:
        0.0408006 = product of:
          0.061200902 = sum of:
            0.031649686 = weight(_text_:k in 1607) [ClassicSimilarity], result of:
              0.031649686 = score(doc=1607,freq=2.0), product of:
                0.16049191 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.044958513 = queryNorm
                0.19720423 = fieldWeight in 1607, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1607)
            0.029551215 = weight(_text_:c in 1607) [ClassicSimilarity], result of:
              0.029551215 = score(doc=1607,freq=2.0), product of:
                0.15508012 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044958513 = queryNorm
                0.1905545 = fieldWeight in 1607, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1607)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Pseudorelevance feedback (PRF) was proposed to overcome the limitation of relevance feedback (RF), which depends on a user-in-the-loop process. In PRF, the top-k retrieved images are treated as relevance feedback. Although the PRF set contains noise, it has proven effective for automatically improving the overall retrieval result. To implement PRF, the Rocchio algorithm is considered a reasonable and well-established baseline. However, the performance of Rocchio-based PRF depends on various representation choices (or factors). In this article, we examine the factors that affect the performance of Rocchio-based PRF: image-feature representation, the number of top-ranked images, the weighting parameters of Rocchio, and the similarity measure. We offer practical insights on optimizing Rocchio-based PRF by choosing appropriate representations. Our extensive experiments on the NUS-WIDE-LITE and Caltech 101 + Corel 5000 data sets show that the optimal feature representation, in terms of retrieval efficiency and effectiveness, is color moment + wavelet texture. In addition, using the top-20 ranked images as the pseudopositive and pseudonegative feedback sets with equal weight (i.e., 0.5), together with the correlation and cosine distance functions, produces the optimal retrieval result.
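
    A hedged sketch of the Rocchio update being tuned here: q' = alpha*q + beta*mean(positives) - gamma*mean(negatives). The defaults beta = gamma = 0.5 follow the equal-weight finding reported in the abstract; clamping negative components to zero is a common convention, not necessarily the authors' choice:

```python
def rocchio_update(query, positives, negatives, alpha=1.0, beta=0.5, gamma=0.5):
    """One Rocchio step over equal-length feature vectors (lists of floats)."""
    dim = len(query)

    def centroid(vecs):
        if not vecs:
            return [0.0] * dim
        return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

    pos, neg = centroid(positives), centroid(negatives)
    # Clamp at zero so the negative centroid cannot flip a feature's sign.
    return [max(0.0, alpha * query[i] + beta * pos[i] - gamma * neg[i])
            for i in range(dim)]
```

    In a PRF setting the positives/negatives would be the feature vectors of the top-ranked (and bottom-ranked) images rather than user-judged ones.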
  3. Efron, M.; Winget, M.: Query polyrepresentation for ranking retrieval systems without relevance judgments (2010) 0.01
    0.008951883 = product of:
      0.017903766 = sum of:
        0.017903766 = product of:
          0.053711295 = sum of:
            0.053711295 = weight(_text_:k in 3469) [ClassicSimilarity], result of:
              0.053711295 = score(doc=3469,freq=4.0), product of:
                0.16049191 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.044958513 = queryNorm
                0.33466667 = fieldWeight in 3469, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3469)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Ranking information retrieval (IR) systems with respect to their effectiveness is a crucial operation during IR evaluation, as well as during data fusion. This article offers a novel method of approaching the system-ranking problem, based on the widely studied idea of polyrepresentation. The principle of polyrepresentation suggests that a single information need can be represented by many query articulations, which we call query aspects. By skimming the top k (where k is small) documents retrieved by a single system for multiple query aspects, we collect a set of documents that are likely to be relevant to a given test topic. Labeling these skimmed documents as putatively relevant lets us build pseudorelevance judgments without undue human intervention. We report experiments in which these pseudorelevance judgments deliver a rank ordering of IR systems that correlates highly with rankings based on human relevance judgments.
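
    The pseudo-qrel construction described above, the union of the top-k documents over all query aspects, is simple enough to sketch (function name and data layout are ours):

```python
def pseudo_qrels(aspect_rankings, k=5):
    """Union of the top-k documents retrieved for each query aspect,
    all treated as putatively relevant (no human judgments needed).

    aspect_rankings: list of ranked doc-id lists, one per query aspect.
    """
    relevant = set()
    for ranking in aspect_rankings:
        relevant.update(ranking[:k])
    return relevant
```

    Systems can then be ranked by scoring their runs against this set as if it were a real qrel file.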
  4. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.01
    0.008121678 = product of:
      0.016243355 = sum of:
        0.016243355 = product of:
          0.048730064 = sum of:
            0.048730064 = weight(_text_:22 in 1431) [ClassicSimilarity], result of:
              0.048730064 = score(doc=1431,freq=2.0), product of:
                0.15743706 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044958513 = queryNorm
                0.30952093 = fieldWeight in 1431, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1431)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    22. 8.2014 17:05:18
  5. Ravana, S.D.; Rajagopal, P.; Balakrishnan, V.: Ranking retrieval systems using pseudo relevance judgments (2015) 0.01
    0.0071786167 = product of:
      0.014357233 = sum of:
        0.014357233 = product of:
          0.0430717 = sum of:
            0.0430717 = weight(_text_:22 in 2591) [ClassicSimilarity], result of:
              0.0430717 = score(doc=2591,freq=4.0), product of:
                0.15743706 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044958513 = queryNorm
                0.27358043 = fieldWeight in 2591, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2591)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    20. 1.2015 18:30:22
    18. 9.2018 18:22:56
  6. Cecchini, R.L.; Lorenzetti, C.M.; Maguitman, A.G.; Brignole, N.B.: Multiobjective evolutionary algorithms for context-based search (2010) 0.01
    0.0059102434 = product of:
      0.011820487 = sum of:
        0.011820487 = product of:
          0.03546146 = sum of:
            0.03546146 = weight(_text_:c in 3482) [ClassicSimilarity], result of:
              0.03546146 = score(doc=3482,freq=2.0), product of:
                0.15508012 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044958513 = queryNorm
                0.22866541 = fieldWeight in 3482, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3482)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Formulating high-quality queries is a key aspect of context-based search. However, determining the effectiveness of a query is challenging because multiple objectives, such as high precision and high recall, are usually involved. In this work, we study techniques that can be applied to evolve contextualized queries when the criteria for determining query quality are based on multiple objectives. We report on the results of three different strategies for evolving queries: (a) single-objective, (b) multiobjective with Pareto-based ranking, and (c) multiobjective with aggregative ranking. After a comprehensive evaluation with a large set of topics, we discuss the limitations of the single-objective approach and observe that both the Pareto-based and aggregative strategies are highly effective for evolving topical queries. In particular, our experiments lead us to conclude that the multiobjective techniques are superior to a baseline as well as to well-known and ad hoc query reformulation techniques.
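
    Strategy (b), Pareto-based ranking, rests on the dominance relation over a query's objective vector, e.g. (precision, recall). A minimal sketch (ours, not the authors' implementation):

```python
def dominates(a, b):
    """a dominates b if a is >= b on every objective (here maximized)
    and strictly > on at least one."""
    return (all(x >= y for x, y in zip(a, b)) and
            any(x > y for x, y in zip(a, b)))

def pareto_front(candidates):
    """Candidate queries whose objective vectors no other candidate dominates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]
```

    In the evolutionary loop, front membership (or front rank) would drive selection instead of a single aggregated fitness value.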
  7. Biskri, I.; Rompré, L.: Using association rules for query reformulation (2012) 0.01
    0.0059102434 = product of:
      0.011820487 = sum of:
        0.011820487 = product of:
          0.03546146 = sum of:
            0.03546146 = weight(_text_:c in 92) [ClassicSimilarity], result of:
              0.03546146 = score(doc=92,freq=2.0), product of:
                0.15508012 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044958513 = queryNorm
                0.22866541 = fieldWeight in 92, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.046875 = fieldNorm(doc=92)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
  8. Habernal, I.; Konopík, M.; Rohlík, O.: Question answering (2012) 0.01
    0.0059102434 = product of:
      0.011820487 = sum of:
        0.011820487 = product of:
          0.03546146 = sum of:
            0.03546146 = weight(_text_:c in 101) [ClassicSimilarity], result of:
              0.03546146 = score(doc=101,freq=2.0), product of:
                0.15508012 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044958513 = queryNorm
                0.22866541 = fieldWeight in 101, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.046875 = fieldNorm(doc=101)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
  9. Van der Veer Martens, B.; Fleet, C. van: Opening the black box of "relevance work" : a domain analysis (2012) 0.01
    0.0059102434 = product of:
      0.011820487 = sum of:
        0.011820487 = product of:
          0.03546146 = sum of:
            0.03546146 = weight(_text_:c in 247) [ClassicSimilarity], result of:
              0.03546146 = score(doc=247,freq=2.0), product of:
                0.15508012 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044958513 = queryNorm
                0.22866541 = fieldWeight in 247, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.046875 = fieldNorm(doc=247)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  10. Bhansali, D.; Desai, H.; Deulkar, K.: A study of different ranking approaches for semantic search (2015) 0.01
    0.0052749477 = product of:
      0.010549895 = sum of:
        0.010549895 = product of:
          0.031649686 = sum of:
            0.031649686 = weight(_text_:k in 2696) [ClassicSimilarity], result of:
              0.031649686 = score(doc=2696,freq=2.0), product of:
                0.16049191 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.044958513 = queryNorm
                0.19720423 = fieldWeight in 2696, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2696)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  11. Hubert, G.; Pitarch, Y.; Pinel-Sauvagnat, K.; Tournier, R.; Laporte, L.: TournaRank : when retrieval becomes document competition (2018) 0.01
    0.0052749477 = product of:
      0.010549895 = sum of:
        0.010549895 = product of:
          0.031649686 = sum of:
            0.031649686 = weight(_text_:k in 5087) [ClassicSimilarity], result of:
              0.031649686 = score(doc=5087,freq=2.0), product of:
                0.16049191 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.044958513 = queryNorm
                0.19720423 = fieldWeight in 5087, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5087)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  12. Baloh, P.; Desouza, K.C.; Hackney, R.: Contextualizing organizational interventions of knowledge management systems : a design science perspective (2012) 0.01
    0.005076049 = product of:
      0.010152098 = sum of:
        0.010152098 = product of:
          0.030456292 = sum of:
            0.030456292 = weight(_text_:22 in 241) [ClassicSimilarity], result of:
              0.030456292 = score(doc=241,freq=2.0), product of:
                0.15743706 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044958513 = queryNorm
                0.19345059 = fieldWeight in 241, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=241)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    11. 6.2012 14:22:34
  13. Soulier, L.; Jabeur, L.B.; Tamine, L.; Bahsoun, W.: On ranking relevant entities in heterogeneous networks using a language-based model (2013) 0.01
    0.005076049 = product of:
      0.010152098 = sum of:
        0.010152098 = product of:
          0.030456292 = sum of:
            0.030456292 = weight(_text_:22 in 664) [ClassicSimilarity], result of:
              0.030456292 = score(doc=664,freq=2.0), product of:
                0.15743706 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044958513 = queryNorm
                0.19345059 = fieldWeight in 664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=664)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    22. 3.2013 19:34:49
  14. Lee, J.-T.; Seo, J.; Jeon, J.; Rim, H.-C.: Sentence-based relevance flow analysis for high accuracy retrieval (2011) 0.00
    0.0049252026 = product of:
      0.009850405 = sum of:
        0.009850405 = product of:
          0.029551215 = sum of:
            0.029551215 = weight(_text_:c in 4746) [ClassicSimilarity], result of:
              0.029551215 = score(doc=4746,freq=2.0), product of:
                0.15508012 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044958513 = queryNorm
                0.1905545 = fieldWeight in 4746, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4746)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  15. Nunes, S.; Ribeiro, C.; David, G.: Term weighting based on document revision history (2011) 0.00
    0.0049252026 = product of:
      0.009850405 = sum of:
        0.009850405 = product of:
          0.029551215 = sum of:
            0.029551215 = weight(_text_:c in 4946) [ClassicSimilarity], result of:
              0.029551215 = score(doc=4946,freq=2.0), product of:
                0.15508012 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044958513 = queryNorm
                0.1905545 = fieldWeight in 4946, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4946)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  16. Liu, R.-L.; Huang, Y.-C.: Ranker enhancement for proximity-based ranking of biomedical texts (2011) 0.00
    0.0049252026 = product of:
      0.009850405 = sum of:
        0.009850405 = product of:
          0.029551215 = sum of:
            0.029551215 = weight(_text_:c in 4947) [ClassicSimilarity], result of:
              0.029551215 = score(doc=4947,freq=2.0), product of:
                0.15508012 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044958513 = queryNorm
                0.1905545 = fieldWeight in 4947, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4947)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  17. Costa Carvalho, A. da; Rossi, C.; Moura, E.S. de; Silva, A.S. da; Fernandes, D.: LePrEF: Learn to precompute evidence fusion for efficient query evaluation (2012) 0.00
    0.0049252026 = product of:
      0.009850405 = sum of:
        0.009850405 = product of:
          0.029551215 = sum of:
            0.029551215 = weight(_text_:c in 278) [ClassicSimilarity], result of:
              0.029551215 = score(doc=278,freq=2.0), product of:
                0.15508012 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044958513 = queryNorm
                0.1905545 = fieldWeight in 278, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=278)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  18. Zhu, J.; Han, L.; Gou, Z.; Yuan, X.: A fuzzy clustering-based denoising model for evaluating uncertainty in collaborative filtering recommender systems (2018) 0.00
    0.0049252026 = product of:
      0.009850405 = sum of:
        0.009850405 = product of:
          0.029551215 = sum of:
            0.029551215 = weight(_text_:c in 4460) [ClassicSimilarity], result of:
              0.029551215 = score(doc=4460,freq=2.0), product of:
                0.15508012 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044958513 = queryNorm
                0.1905545 = fieldWeight in 4460, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4460)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Recommender systems are effective in predicting the most suitable products for users, such as movies and books. To facilitate personalized recommendations, the quality of item ratings should be guaranteed. However, a few ratings might not be accurate enough due to the uncertainty of user behavior and are referred to as natural noise. In this article, we present a novel fuzzy clustering-based method for detecting noisy ratings. The entropy of a subset of the original ratings dataset is used to indicate the data-driven uncertainty, and evaluation metrics are adopted to represent the prediction-driven uncertainty. After the repetition of resampling and the execution of a recommendation algorithm, the entropy and evaluation metrics vectors are obtained and are empirically categorized to identify the proportion of the potential noise. Then, the fuzzy C-means-based denoising (FCMD) algorithm is performed to verify the natural noise under the assumption that natural noise is primarily the result of the exceptional behavior of users. Finally, a case study is performed using two real-world datasets. The experimental results show that our proposal outperforms previous proposals and has an advantage in dealing with natural noise.
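
    The data-driven uncertainty measure described above, the entropy of a resampled ratings subset, can be sketched as follows (the function is our illustration; the paper's exact formulation may differ):

```python
from collections import Counter
import math

def rating_entropy(ratings):
    """Shannon entropy (bits) of a sample of item ratings; higher entropy
    indicates more data-driven uncertainty in the resampled subset."""
    counts = Counter(ratings)
    n = len(ratings)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

    A subset whose ratings all agree has entropy 0; a maximally mixed subset scores highest, flagging it for the fuzzy-clustering noise check.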
  19. González-Ibáñez, R.; Esparza-Villamán, A.; Vargas-Godoy, J.C.; Shah, C.: A comparison of unimodal and multimodal models for implicit detection of relevance in interactive IR (2019) 0.00
    0.0049252026 = product of:
      0.009850405 = sum of:
        0.009850405 = product of:
          0.029551215 = sum of:
            0.029551215 = weight(_text_:c in 5417) [ClassicSimilarity], result of:
              0.029551215 = score(doc=5417,freq=2.0), product of:
                0.15508012 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044958513 = queryNorm
                0.1905545 = fieldWeight in 5417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5417)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)