Search (8 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Retrievalalgorithmen"
  • year_i:[2010 TO 2020}
  1. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.01
    0.014180663 = product of:
      0.028361326 = sum of:
        0.028361326 = product of:
          0.056722652 = sum of:
            0.056722652 = weight(_text_:22 in 1431) [ClassicSimilarity], result of:
              0.056722652 = score(doc=1431,freq=2.0), product of:
                0.1832595 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0523325 = queryNorm
                0.30952093 = fieldWeight in 1431, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1431)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2014 17:05:18
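    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output, and its factors can be recombined as a sanity check. A minimal sketch, using only the numbers listed in the first result's tree (the idf formula `1 + ln(maxDocs / (docFreq + 1))` is ClassicSimilarity's documented definition):

```python
import math

# Factors copied from the explain tree for term "22" in doc 1431
tf = math.sqrt(2.0)       # tf(freq=2.0) = sqrt(freq) = 1.4142135
idf = 1.0 + math.log(44218 / (3622 + 1))  # idf(docFreq=3622, maxDocs=44218) = 3.5018296
query_norm = 0.0523325    # queryNorm
field_norm = 0.0625       # fieldNorm(doc=1431)

query_weight = idf * query_norm        # 0.1832595 = queryWeight
field_weight = tf * idf * field_norm   # 0.30952093 = fieldWeight
score = query_weight * field_weight    # 0.056722652 = weight(_text_:22)

# The two coord(1/2) factors each halve the score,
# giving the displayed document score of 0.014180663.
doc_score = score * 0.5 * 0.5
```

    The same recombination applies to every explain tree below; only the per-term factors (freq, idf, fieldNorm) change.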
  2. White, H. D.: Co-cited author retrieval and relevance theory : examples from the humanities (2015) 0.01
    0.012773823 = product of:
      0.025547646 = sum of:
        0.025547646 = product of:
          0.051095292 = sum of:
            0.051095292 = weight(_text_:4 in 1687) [ClassicSimilarity], result of:
              0.051095292 = score(doc=1687,freq=2.0), product of:
                0.14201462 = queryWeight, product of:
                  2.7136984 = idf(docFreq=7967, maxDocs=44218)
                  0.0523325 = queryNorm
                0.35978895 = fieldWeight in 1687, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.7136984 = idf(docFreq=7967, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1687)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Cf.: doi:10.1007/s11192-014-1483-4
  3. Ravana, S.D.; Rajagopal, P.; Balakrishnan, V.: Ranking retrieval systems using pseudo relevance judgments (2015) 0.01
    0.012534053 = product of:
      0.025068106 = sum of:
        0.025068106 = product of:
          0.050136212 = sum of:
            0.050136212 = weight(_text_:22 in 2591) [ClassicSimilarity], result of:
              0.050136212 = score(doc=2591,freq=4.0), product of:
                0.1832595 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0523325 = queryNorm
                0.27358043 = fieldWeight in 2591, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2591)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 1.2015 18:30:22
    18. 9.2018 18:22:56
  4. Baloh, P.; Desouza, K.C.; Hackney, R.: Contextualizing organizational interventions of knowledge management systems : a design science perspective (2012) 0.01
    0.0088629145 = product of:
      0.017725829 = sum of:
        0.017725829 = product of:
          0.035451658 = sum of:
            0.035451658 = weight(_text_:22 in 241) [ClassicSimilarity], result of:
              0.035451658 = score(doc=241,freq=2.0), product of:
                0.1832595 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0523325 = queryNorm
                0.19345059 = fieldWeight in 241, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=241)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    11. 6.2012 14:22:34
  5. Soulier, L.; Jabeur, L.B.; Tamine, L.; Bahsoun, W.: On ranking relevant entities in heterogeneous networks using a language-based model (2013) 0.01
    0.0088629145 = product of:
      0.017725829 = sum of:
        0.017725829 = product of:
          0.035451658 = sum of:
            0.035451658 = weight(_text_:22 in 664) [ClassicSimilarity], result of:
              0.035451658 = score(doc=664,freq=2.0), product of:
                0.1832595 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0523325 = queryNorm
                0.19345059 = fieldWeight in 664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=664)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2013 19:34:49
  6. Nunes, S.; Ribeiro, C.; David, G.: Term weighting based on document revision history (2011) 0.01
    0.0053224266 = product of:
      0.010644853 = sum of:
        0.010644853 = product of:
          0.021289706 = sum of:
            0.021289706 = weight(_text_:4 in 4946) [ClassicSimilarity], result of:
              0.021289706 = score(doc=4946,freq=2.0), product of:
                0.14201462 = queryWeight, product of:
                  2.7136984 = idf(docFreq=7967, maxDocs=44218)
                  0.0523325 = queryNorm
                0.14991207 = fieldWeight in 4946, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.7136984 = idf(docFreq=7967, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4946)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In real-world information retrieval systems, the underlying document collection is rarely stable or definitive. This work studies signals extracted from the content of documents at different points in time for the purpose of weighting individual terms in a document. The basic idea behind our proposals is that terms that have existed in a document for a longer time should receive a greater weight. We propose four term-weighting functions that use each document's history to estimate a current term score. To evaluate this thesis, we conduct three independent experiments using a collection of documents sampled from Wikipedia. In the first experiment, we use data from Wikipedia to judge each set of terms. In the second, we use an external collection of tags from a popular social bookmarking service as a gold standard. In the third, we crowdsource user judgments to collect feedback on term preference. Across all experiments, the results consistently support our thesis. We show that temporally aware measures, specifically the proposed revision term frequency and revision term frequency span, outperform a term-weighting measure based on raw term frequency alone.
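    The abstract's core idea can be sketched in a few lines. This is an illustrative reading, not the paper's exact formula: weight each term by the fraction of a document's revisions in which it appears, so long-lived terms score higher. The function name and the presence-based normalization are assumptions.

```python
from collections import Counter

def revision_term_weight(revisions):
    """Weight terms by how persistently they appear across a document's
    revision history: terms present in more revisions get higher weight.

    revisions: list of token lists, oldest to newest.
    Returns a dict mapping term -> weight in [0, 1].
    """
    presence = Counter()
    for rev in revisions:
        for term in set(rev):   # count presence per revision, not raw frequency
            presence[term] += 1
    n = len(revisions)
    return {t: c / n for t, c in presence.items()}

# Toy history: "ranking" survives all three revisions, "feedback" only the latest
revs = [
    ["ranking", "systems"],
    ["ranking", "systems", "relevance"],
    ["ranking", "relevance", "feedback"],
]
weights = revision_term_weight(revs)
```

    Under this sketch, a term introduced early and never removed reaches weight 1.0, while a recent addition is discounted, which is the behavior the abstract argues for.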
  7. Bhansali, D.; Desai, H.; Deulkar, K.: A study of different ranking approaches for semantic search (2015) 0.01
    0.0053224266 = product of:
      0.010644853 = sum of:
        0.010644853 = product of:
          0.021289706 = sum of:
            0.021289706 = weight(_text_:4 in 2696) [ClassicSimilarity], result of:
              0.021289706 = score(doc=2696,freq=2.0), product of:
                0.14201462 = queryWeight, product of:
                  2.7136984 = idf(docFreq=7967, maxDocs=44218)
                  0.0523325 = queryNorm
                0.14991207 = fieldWeight in 2696, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.7136984 = idf(docFreq=7967, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2696)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Search engines have become an integral part of our day-to-day life, and our reliance on them increases with every passing day. With the amount of data available on the Internet growing exponentially, it becomes important to develop new methods and tools that return results relevant to the queries and reduce the time spent searching. The results should be diverse, yet at the same time remain focused on the queries asked. Relation-based PageRank [4] algorithms are considered the next frontier in the improvement of Semantic Web search. Relevance is measured as the probability, as posited by the user while entering the query, of finding relevant results; however, the approach's application is limited by the complexity of determining relations between terms and assigning an explicit meaning to each term. TrustRank is one of the most widely used ranking algorithms for semantic web search; a few others, such as the HITS and PageRank algorithms, are also used. In this paper, we provide a comparison of several ranking approaches.
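    Of the algorithms the abstract names, PageRank is the simplest to sketch. A minimal power-iteration version over an adjacency dict, with the usual 0.85 damping factor; the graph and function name are illustrative, not from the paper:

```python
def pagerank(links, damping=0.85, iters=50):
    """Basic PageRank by power iteration.

    links: dict mapping page -> list of outgoing links.
    Dangling pages (no outlinks) spread their rank uniformly.
    Returns a dict mapping page -> rank; ranks sum to 1.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling node: distribute rank uniformly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Toy web: a links to b and c, b links to c, c links back to a
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
```

    In this toy graph "c" accumulates the most rank, since both "a" and "b" link to it; HITS and TrustRank refine the same link-analysis idea with hub/authority scores and a trusted seed set, respectively.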
  8. Ye, Z.; Huang, J.X.: A learning to rank approach for quality-aware pseudo-relevance feedback (2016) 0.01
    0.0053224266 = product of:
      0.010644853 = sum of:
        0.010644853 = product of:
          0.021289706 = sum of:
            0.021289706 = weight(_text_:4 in 2855) [ClassicSimilarity], result of:
              0.021289706 = score(doc=2855,freq=2.0), product of:
                0.14201462 = queryWeight, product of:
                  2.7136984 = idf(docFreq=7967, maxDocs=44218)
                  0.0523325 = queryNorm
                0.14991207 = fieldWeight in 2855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.7136984 = idf(docFreq=7967, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2855)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.4, pp.942-959