Search (50 results, page 1 of 3)

  • language_ss:"e"
  • theme_ss:"Suchmaschinen"
  • type_ss:"el"
  1. Khopkar, Y.; Spink, A.; Giles, C.L.; Shah, P.; Debnath, S.: Search engine personalization : An exploratory study (2003) 0.01
  2. Baeza-Yates, R.; Boldi, P.; Castillo, C.: Generalizing PageRank : damping functions for link-based ranking algorithms (2006) 0.01
    Abstract
     This paper introduces a family of link-based ranking algorithms that propagate page importance through links. In these algorithms there is a damping function that decreases with distance, so a direct link implies more endorsement than a link through a long path. PageRank is the most widely known ranking function of this family. The main objective of this paper is to determine whether this family of ranking techniques has some interest per se, and how different choices for the damping function affect rank quality and convergence speed. Even though our results suggest that PageRank can be approximated with other simpler forms of rankings that may be computed more efficiently, our focus is of a more speculative nature, in that it aims at separating the kernel of PageRank, that is, link-based importance propagation, from the way propagation decays over paths. We focus on three damping functions, having linear, exponential, and hyperbolic decay in the lengths of the paths. The exponential decay corresponds to PageRank, and the other functions are new. Our presentation includes algorithms, analysis, comparisons and experiments that study their behavior under different parameters in real Web graph data. Among other results, we show how to calculate a linear approximation that induces a page ordering that is almost identical to PageRank's using a fixed small number of iterations; comparisons were performed using Kendall's tau on large domain datasets.
    Date
    16. 1.2016 10:22:28
    Type
    a
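The family of path-damped rankings described in the abstract above can be sketched in a few lines of plain Python. This is an illustrative reconstruction, not the authors' code: the names `damped_rank`, `exp_damp`, and `lin_damp` are invented, and normalization constants are omitted. Importance flowing over paths of length t is weighted by a decreasing function damp(t); exponential decay corresponds to PageRank.

```python
def damped_rank(adj, damp, t_max=20):
    """Rank nodes by propagating importance over paths, weighting the
    contribution of paths of length t by a decreasing function damp(t).
    adj is an adjacency list: adj[u] = list of nodes u links to."""
    n = len(adj)
    v = [1.0 / n] * n          # walk distribution, starts uniform
    rank = [0.0] * n
    for t in range(1, t_max + 1):
        nxt = [0.0] * n
        for u, outs in enumerate(adj):
            if outs:
                share = v[u] / len(outs)
                for w in outs:
                    nxt[w] += share
        v = nxt
        for i in range(n):
            rank[i] += damp(t) * v[i]
    return rank

# Exponential decay corresponds to PageRank's damping; linear decay is
# one of the paper's alternatives (normalization omitted for brevity).
def exp_damp(t, alpha=0.85):
    return (1 - alpha) * alpha ** t

def lin_damp(t, t_max=20):
    return max(0.0, 1.0 - t / t_max)
```

With `exp_damp` and enough terms, the accumulated sum approximates PageRank up to normalization; swapping in `lin_damp` gives the truncated linear variant the abstract says yields an almost identical page ordering after a fixed small number of iterations.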
  3. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.01
    Abstract
     PageRank is defined as the stationary state of a Markov chain. The chain is obtained by perturbing the transition matrix induced by a web graph with a damping factor alpha that spreads uniformly part of the rank. The choice of alpha is eminently empirical, and in most cases the original suggestion alpha=0.85 by Brin and Page is still used. Recently, however, the behaviour of PageRank with respect to changes in alpha was discovered to be useful in link-spam detection. Moreover, an analytical justification of the value chosen for alpha is still missing. In this paper, we give the first mathematical analysis of PageRank when alpha changes. In particular, we show that, contrary to popular belief, for real-world graphs values of alpha close to 1 do not give a more meaningful ranking. Then, we give closed-form formulae for PageRank derivatives of any order, and an extension of the Power Method that approximates them with convergence O(t**k*alpha**t) for the k-th derivative. Finally, we show a tight connection between iterated computation and analytical behaviour by proving that the k-th iteration of the Power Method gives exactly the PageRank value obtained using a Maclaurin polynomial of degree k. The latter result paves the way towards the application of analytical methods to the study of PageRank.
    Date
    16. 1.2016 10:22:28
    Type
    a
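The Power Method computation the abstract above refers to can be sketched as follows. This is a minimal illustration, not the authors' implementation; it assumes uniform teleportation and spreads dangling-node mass uniformly.

```python
def pagerank(adj, alpha=0.85, tol=1e-10, max_iter=1000):
    """Power Method for PageRank: iterate the alpha-damped Markov chain
    until the rank vector stops changing. adj[u] lists the pages u
    links to; nodes with no out-links are treated as dangling."""
    n = len(adj)
    x = [1.0 / n] * n
    for _ in range(max_iter):
        # teleportation term (1 - alpha) spread uniformly
        nxt = [(1.0 - alpha) / n] * n
        dangling = 0.0
        for u, outs in enumerate(adj):
            if outs:
                share = alpha * x[u] / len(outs)
                for w in outs:
                    nxt[w] += share
            else:
                dangling += alpha * x[u]
        # redistribute dangling-node mass uniformly
        for i in range(n):
            nxt[i] += dangling / n
        if sum(abs(a - b) for a, b in zip(nxt, x)) < tol:
            return nxt
        x = nxt
    return x
```

As the abstract notes, the choice of alpha matters: the iteration error decays roughly as alpha**t, so convergence slows sharply as alpha approaches 1.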
  4. Dunning, A.: Do we still need search engines? (1999) 0.01
    Source
    Ariadne. 1999, no.22
    Type
    a
  5. Bradley, P.: The relevance of underpants to searching the Web (2000) 0.01
    Type
    a
  6. Fiorelli, G.: Hummingbird unleashed (2013) 0.01
    Abstract
     Sometimes I think that we SEOs could be wonderful characters for a Woody Allen movie: we are stressed, nervous, paranoid, and prone to sudden changes of mood... okay, maybe I am exaggerating a little bit, but that's how we tend to (over)react whenever Google announces something. One thing that doesn't help is the lack of clarity coming from Google, which not only never mentions Hummingbird in any official document (for example, in the post marking its 15th anniversary), but has also shied away from details of this epochal update in the "off-the-record" declarations of Amit Singhal. In fact, in some ways those statements partly contributed to the confusion. When Google announces an update - especially one like Hummingbird - the best thing to do is to avoid trying to understand immediately, on intuition alone, what it really is. It is better to wait until the dust settles, recover the original documents, examine those related to them (and any variants), take the time to see the update in action, calmly investigate, and only then try to find the most plausible answers.
  7. El-Ramly, N.; Peterson, R.E.; Volonino, L.: Top ten Web sites using search engines : the case of the desalination industry (1996) 0.01
    Abstract
     The desalination industry involves the desalting of sea or brackish water, thereby increasing the world's effective water supply. There are approximately 4,000 desalination Web sites. The six major Internet search engines were used to determine, according to each of the six, the top twenty sites for desalination. Each site was visited, and the 120 gross returns were pared down to the final ten - the 'Top Ten'. The Top Ten were then analyzed to determine what made the sites useful and informative. The major attributes were: a) currency (up-to-date); b) search site capability; c) access to articles on desalination; d) newsletters; e) databases; f) product information; g) online conferencing; h) valuable links to other sites; i) communication links; j) site maps; and k) case studies. Reasons for having a Web site and the current status and prospects for Internet commerce are discussed.
  8. Birmingham, J.: Internet search engines (1996) 0.01
    Abstract
    Basically a good listing in table format of features from the major search engines
    Date
    10.11.1996 16:36:22
  9. Broder, A.; Kumar, R.; Maghoul, F.; Raghavan, P.; Rajagopalan, S.; Stata, R.; Tomkins, A.; Wiener, J.: Graph structure in the Web (2000) 0.01
    Abstract
    The study of the web as a graph is not only fascinating in its own right, but also yields valuable insight into web algorithms for crawling, searching and community discovery, and the sociological phenomena which characterize its evolution. We report on experiments on local and global properties of the web graph using two Altavista crawls each with over 200M pages and 1.5 billion links. Our study indicates that the macroscopic structure of the web is considerably more intricate than suggested by earlier experiments on a smaller scale
  10. Search Engines and Beyond : Developing efficient knowledge management systems, April 19-20, 1999, Boston, Mass (1999) 0.01
    Content
     Ramana Rao (Inxight, Palo Alto, CA): 7 ± 2 Insights on achieving Effective Information Access
     Session One: Updates and a twelve month perspective
     Danny Sullivan (Search Engine Watch, US / England): Portalization and other search trends
     Carol Tenopir (University of Tennessee): Search realities faced by end users and professional searchers
     Session Two: Today's search engines and beyond
     Daniel Hoogterp (Retrieval Technologies, McLean, VA): Effective presentation and utilization of search techniques
     Rick Kenny (Fulcrum Technologies, Ontario, Canada): Beyond document clustering: the knowledge impact statement
     Gary Stock (Ingenius, Kalamazoo, MI): Automated change monitoring
     Gary Culliss (Direct Hit, Wellesley Hills, MA): User popularity ranked search engines
     Byron Dom (IBM, CA): Automatically finding the best pages on the World Wide Web (CLEVER)
     Peter Tomassi (LookSmart, San Francisco, CA): Adding human intellect to search technology
     Session Three: Panel discussion: Human v automated categorization and editing
     Ev Brenner (New York, NY) - Chairman
     James Callan (University of Massachusetts, MA)
     Marc Krellenstein (Northern Light Technology, Cambridge, MA)
     Dan Miller (Ask Jeeves, Berkeley, CA)
     Session Four: Updates and a twelve month perspective
     Steve Arnold (AIT, Harrods Creek, KY): Review: the leading edge in search and retrieval software
     Ellen Voorhees (NIST, Gaithersburg, MD): TREC update
     Session Five: Search engines now and beyond
     Intelligent agents - John Snyder (Muscat, Cambridge, England): Practical issues behind intelligent agents
     Text summarization - Therese Firmin (Dept of Defense, Ft George G. Meade, MD): The TIPSTER/SUMMAC evaluation of automatic text summarization systems
     Cross-language searching - Elizabeth Liddy (TextWise, Syracuse, NY): A conceptual interlingua approach to cross-language retrieval
     Video search and retrieval - Armon Amir (IBM, Almaden, CA): CueVideo: modular system for automatic indexing and browsing of video/audio
     Speech recognition - Michael Witbrock (Lycos, Waltham, MA): Retrieval of spoken documents
     Visualization - James A. Wise (Integral Visuals, Richland, WA): Information visualization in the new millennium: emerging science or passing fashion?
     Text mining - David Evans (Claritech, Pittsburgh, PA): Text mining - towards decision support
  11. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.01
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
    Type
    a
  12. Ding, L.; Finin, T.; Joshi, A.; Peng, Y.; Cost, R.S.; Sachs, J.; Pan, R.; Reddivari, P.; Doshi, V.: Swoogle : a Semantic Web search and metadata engine (2004) 0.01
    Abstract
    Swoogle is a crawler-based indexing and retrieval system for the Semantic Web, i.e., for Web documents in RDF or OWL. It extracts metadata for each discovered document, and computes relations between documents. Discovered documents are also indexed by an information retrieval system which can use either character N-Gram or URIrefs as keywords to find relevant documents and to compute the similarity among a set of documents. One of the interesting properties we compute is rank, a measure of the importance of a Semantic Web document.
    Content
     See: http://www.dblab.ntua.gr/~bikakis/LD/5.pdf. See also: http://swoogle.umbc.edu/. See also: http://ebiquity.umbc.edu/paper/html/id/183/. See also: Radhakrishnan, A.: Swoogle : An Engine for the Semantic Web, at: http://www.searchenginejournal.com/swoogle-an-engine-for-the-semantic-web/5469/.
    Type
    a
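A toy sketch of the character-N-gram keying the abstract above mentions (illustrative only: the function names are invented, and Jaccard overlap is used here as a stand-in for whatever similarity measure Swoogle actually computes over its index):

```python
def char_ngrams(text, n=3):
    """Character N-grams of a string, usable as index keys when
    whole-token keywords (e.g. long URIrefs) match poorly."""
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard overlap of character N-gram sets - a simple way to
    score the similarity of two documents or identifiers."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0
```

N-gram keys let near-identical URIrefs (differing in a namespace prefix or fragment) still share most index entries, which plain keyword matching would miss.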
  13. Schaer, P.; Mayr, P.; Sünkler, S.; Lewandowski, D.: How relevant is the long tail? : a relevance assessment study on Million Short (2016) 0.01
    Abstract
     Users of web search engines are known to focus mostly on the top-ranked results of the search engine result page. While many studies support this well-known information-seeking pattern, only a few concentrate on the question of what users miss by neglecting lower-ranked results. To learn more about the relevance distributions in the so-called long tail, we conducted a relevance assessment study with the Million Short long-tail web search engine. While we see a clear difference in content between the head and the tail of the search engine result list, we see no statistically significant differences in the binary relevance judgments and only weakly significant differences when using graded relevance. The tail contains different but still valuable results. We argue that the long tail can be a rich source for the diversification of web search engine result lists, but more evaluation is needed to clearly describe the differences.
    Type
    a
  14. Bladow, N.; Dorey, C.; Frederickson, L.; Grover, P.; Knudtson, Y.; Krishnamurthy, S.; Lazarou, V.: What's the Buzz about? : An empirical examination of Search on Yahoo! (2005) 0.01
    Abstract
    We present an analysis of the Yahoo Buzz Index over a period of 45 weeks. Our key findings are that: (1) It is most common for a search term to show up on the index for one week, followed by two weeks, three weeks, etc. Only two terms persist for all 45 weeks studied - Britney Spears and Jennifer Lopez. Search term longevity follows a power-law distribution or a winner-take-all structure; (2) Most search terms focus on entertainment. Search terms related to serious topics are found less often. The Buzz Index does not necessarily follow the "news cycle"; and, (3) We provide two ways to determine "star power" of various search terms - one that emphasizes staying power on the Index and another that emphasizes rank. In general, the methods lead to dramatically different results. Britney Spears performs well in both methods. We conclude that the data available on the Index is symptomatic of a celebrity-crazed, entertainment-centered culture.
  15. Page, A.: The search is over : the search-engine secrets of the pros (1996)
    Abstract
    Covers eight of the most popular search engines. Gives a summary of each and a table of features that briefly lists the pros and cons. Also includes a short explanation of Boolean operators.
  16. Powell, J.; Fox, E.A.: Multilingual federated searching across heterogeneous collections (1998)
    Abstract
    This article describes a scalable system for searching heterogeneous multilingual collections on the World Wide Web. It details a markup language for describing the characteristics of a search engine and its interface, and a protocol for requesting word translations between languages.
  17. Spink, A.; Gunar, O.: E-Commerce Web queries : Excite and AskJeeves study (2001)
  18. Shneiderman, B.; Byrd, D.; Croft, W.B.: Clarifying search : a user-interface framework for text searches (1997)
    Abstract
    Current user interfaces for textual database searching leave much to be desired: individually, they are often confusing, and as a group, they are seriously inconsistent. We propose a four-phase framework for user-interface design: the framework provides common structure and terminology for searching while preserving the distinct features of individual collections and search mechanisms. Users will benefit from faster learning, increased comprehension, and better control, leading to more effective searches and higher satisfaction.
  19. Brin, S.; Page, L.: The anatomy of a large-scale hypertextual Web search engine (1998)
    Abstract
    In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/. To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advances in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses the question of how to build a practical large-scale system which can exploit the additional information present in hypertext. We also look at the problem of how to deal effectively with uncontrolled hypertext collections where anyone can publish anything they want.
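    The indexing side of the pipeline described above can be illustrated with a toy inverted index. Everything here (the three documents, whitespace tokenization, scoring by raw term frequency) is a deliberate simplification for illustration, not Google's actual design:

```python
from collections import defaultdict

# Toy corpus standing in for crawled pages (illustrative only).
pages = {
    "d1": "hypertext search engine for the web",
    "d2": "large scale web crawling and indexing",
    "d3": "link analysis improves web search quality",
}

# Build an inverted index: term -> {doc_id: term frequency in that doc}.
index = defaultdict(dict)
for doc_id, text in pages.items():
    for term in text.split():
        index[term][doc_id] = index[term].get(doc_id, 0) + 1

def search(query):
    """Rank documents by summed term frequency over the query terms (OR query)."""
    scores = defaultdict(int)
    for term in query.split():
        for doc_id, tf in index.get(term, {}).items():
            scores[doc_id] += tf
    return sorted(scores, key=scores.get, reverse=True)

print(search("web search"))  # d1 and d3 match both terms; d2 matches only "web"
```

    A real engine layers idf weighting, field norms, and, as the paper argues, link-based evidence on top of this structure; the inverted index itself is what makes query evaluation feasible at scale.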
  20. Bryan, K.; Leise, T.: The $25,000,000,000 eigenvector : the linear algebra behind Google
    Abstract
    Google's success derives in large part from its PageRank algorithm, which ranks the importance of webpages according to an eigenvector of a weighted link matrix. Analysis of the PageRank formula provides a wonderful applied topic for a linear algebra course. Instructors may assign this article as a project to more advanced students, or spend one or two lectures presenting the material with assigned homework from the exercises. This material also complements the discussion of Markov chains in matrix algebra. Maple and Mathematica files supporting this material can be found at www.rose-hulman.edu/~bryan.
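    The eigenvector computation the article builds its course material around can be sketched with power iteration on a tiny link graph. The four-page graph and the damping factor 0.85 below are illustrative assumptions, not taken from the article:

```python
# Tiny directed link graph: page -> pages it links to (illustrative).
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = sorted(links)
n = len(pages)
d = 0.85  # damping factor; 0.85 is the commonly cited choice

# Power iteration on r <- (1-d)/n + d * M r, where M is the column-stochastic
# link matrix: the rank vector converges to the dominant eigenvector.
rank = {p: 1.0 / n for p in pages}
for _ in range(100):
    new = {p: (1 - d) / n for p in pages}
    for src, outs in links.items():
        share = d * rank[src] / len(outs)
        for dst in outs:
            new[dst] += share
    rank = new

# C receives links from three pages, so it should end up ranked highest.
best = max(rank, key=rank.get)
print(best, round(rank[best], 3))
```

    Because every page here has at least one out-link, the iteration conserves total rank mass (the vector stays a probability distribution); real graphs need extra handling for dangling pages, which the linear-algebra treatment makes precise.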
