Search (170 results, page 1 of 9)

  • Filter: type_ss:"a"
  • Filter: theme_ss:"Suchmaschinen"
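
The relevance figure printed after each hit is a Lucene tf-idf score (ClassicSimilarity). As a minimal sketch of how a single query term contributes to such a score — assuming Lucene's standard ClassicSimilarity definitions of tf and idf, and reusing the docFreq/norm figures reported for hit 1 — the per-term weight is queryWeight × fieldWeight:

    import math

    def classic_tf(freq):
        # ClassicSimilarity term frequency: square root of the raw count
        return math.sqrt(freq)

    def classic_idf(doc_freq, max_docs):
        # ClassicSimilarity inverse document frequency
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
        idf = classic_idf(doc_freq, max_docs)
        query_weight = idf * query_norm                      # idf * queryNorm
        field_weight = classic_tf(freq) * idf * field_norm   # tf * idf * fieldNorm
        return query_weight * field_weight

    # Figures reported for hit 1: freq=2, docFreq=24, maxDocs=44218
    print(term_weight(2.0, 24, 44218, 0.045191016, 0.046875))  # ~0.2153

A hit's total score then sums such weights over all matching query clauses and applies coord() factors for the fraction of clauses matched.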
  1. Li, L.; Shang, Y.; Zhang, W.: Improvement of HITS-based algorithms on Web documents (2002) 0.32
    Abstract
    In this paper, we present two ways to improve the precision of HITS-based algorithms on Web documents. First, by analyzing the limitations of current HITS-based algorithms, we propose a new weighted HITS-based method that assigns appropriate weights to in-links of root documents. Then, we combine content analysis with HITS-based algorithms and study the effects of four representative relevance scoring methods, VSM, Okapi, TLS, and CDR, using a set of broad topic queries. Our experimental results show that our weighted HITS-based method performs significantly better than Bharat's improved HITS algorithm. When we combine our weighted HITS-based method or Bharat's improved HITS algorithm with any of the four relevance scoring methods, the combined methods are only marginally better than our weighted HITS-based method. Among the four relevance scoring methods, there is no significant quality difference when they are combined with a HITS-based algorithm.
    Content
    Cf.: http://delab.csd.auth.gr/~dimitris/courses/ir_spring06/page_rank_computing/p527-li.pdf. Cf. also: http://www2002.org/CDROM/refereed/643/.
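    The weighted-HITS idea can be sketched as below; this is an illustrative reimplementation, not the authors' code, and the weight matrix (an arbitrary per-link scaling here) stands in for their scheme of weighting the in-links of root documents:

      import numpy as np

      def weighted_hits(adj, weights, iters=50):
          # adj[i, j] = 1 if page i links to page j; weights scales each link
          w = adj * weights
          hubs = np.ones(w.shape[0])
          auth = np.ones(w.shape[0])
          for _ in range(iters):
              auth = w.T @ hubs                # authority grows with weighted in-links
              hubs = w @ auth                  # hub grows with weighted out-links
              auth /= np.linalg.norm(auth)     # normalize each round
              hubs /= np.linalg.norm(hubs)
          return auth, hubs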
  2. Furner, J.: A unifying model of document relatedness for hybrid search engines (2003) 0.03
    Abstract
    Previous work on search-engine design has indicated that information-seekers may benefit from being given the opportunity to exploit multiple sources of evidence of document relatedness. Few existing systems, however, give users more than minimal control over the selections that may be made among methods of exploitation. By applying the methods of "document network analysis" (DNA), a unifying, graph-theoretic model of content-, collaboration-, and context-based systems (CCC) may be developed, in which the nature of the similarities between types of document relatedness and document ranking is clarified. The usefulness of the approach to system design suggested by this model may be tested by constructing and evaluating a prototype system (UCXtra) that allows searchers to maintain control over the multiple ways in which document collections may be ranked and re-ranked.
    Date
    11. 9.2004 17:32:22
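    As a loose illustration of the model's premise — that content-, collaboration-, and context-based evidence can feed one relatedness graph under searcher control — one might blend per-source similarity matrices with user-chosen weights. The paper's actual graph-theoretic formulation is richer than this linear stand-in:

      import numpy as np

      def combined_relatedness(content, collaboration, context, mix=(0.5, 0.3, 0.2)):
          # Each argument is an n x n document-similarity matrix; mix is user-set
          a, b, c = mix
          return a * content + b * collaboration + c * context

      def rank_related(related, doc_id):
          # Documents ordered by decreasing relatedness to doc_id
          return np.argsort(-related[doc_id])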
  3. Lakshminarayana, S.: Quality search content : a reality with next generation browsers (2007) 0.02
    Content
    "Sir, The Internet has become the means to obtain information or to transact business. Most of the research, including recent articles by Declan Butler (2006) and Kevin Yager (2006), note a demand for quality search content from the Web. User interactions with the Internet are performed through a Web browser. An average user usually browses through, at the most, thirty to forty links out of the total delivery set, which can be quite large, from a search engine. In the early 1990s, browsers were textual, with no graphical presentations available. Subsequently, browsers could display graphic content. Some browsers can view content in the style and font desired by the user, but with limited operability. However, these browsers do not inherit any intelligence from technological research to employ as an expert system for the user. In addition, search engines have been left on their own to grow with technology. On the other hand, using the same technology has complicated Web content. This may lead to the best search engines rarely doing poorly, and the worst ones rarely doing well, but any result is possible (Salganik, Dodds, & Watts, 2006). Active research is being done with search engines to address the abundance of technological development for quality delivery of content. Some search engines look at the country, language, browser technical details (e.g., version, compatibility), and other factors before delivery. However the missing factor is user characteristics.
  4. Lewandowski, D.; Spree, U.: Ranking of Wikipedia articles in search engines revisited : fair ranking for reasonable quality? (2011) 0.02
    Abstract
    This paper aims to review the fiercely discussed question of whether the ranking of Wikipedia articles in search engines is justified by the quality of the articles. After an overview of current research on information quality in Wikipedia, a summary of the extended discussion on the quality of encyclopedic entries in general is given. On this basis, a heuristic method for evaluating Wikipedia entries is developed and applied to Wikipedia articles that scored highly in a search engine retrieval effectiveness test, and the results are compared with the relevance judgments of jurors. In all search engines tested, Wikipedia results are unanimously judged better by the jurors than other results at the corresponding results position. Relevance judgments often roughly correspond with the results from the heuristic evaluation. Cases in which high relevance judgments are not in accordance with the comparatively low score from the heuristic evaluation are interpreted as an indicator of a high degree of trust in Wikipedia. One of the systemic shortcomings of Wikipedia lies in its necessarily incoherent user model. A further tuning of the suggested criteria catalog, for instance a different weighting of the supplied criteria, could serve as a starting point for a user-model-differentiated evaluation of Wikipedia articles. Approved methods of quality evaluation of reference works are applied to Wikipedia articles and integrated with the question of search engine evaluation.
    Date
    30. 9.2012 19:27:22
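    The heuristic evaluation with a tunable criteria catalog can be pictured as a weighted criterion sum; the criterion names and weights below are hypothetical, since the paper leaves the weighting itself open as future work:

      def heuristic_score(criteria, weights):
          # Weighted mean over a criteria catalog (criterion scores in [0, 1])
          total = sum(weights.values())
          return sum(weights[c] * criteria[c] for c in weights) / total

      article = {"accuracy": 0.8, "references": 0.6, "currency": 0.9}
      print(heuristic_score(article, {"accuracy": 2, "references": 1, "currency": 1}))  # 0.775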
  5. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.02
    Abstract
    Numerous studies have explored the possibility of uncovering information from web search queries, but few have examined the factors that affect web query data sources. We conducted a study that investigated this issue by comparing Google Trends and Baidu Index. Data from these two services are based on queries entered by users into Google and Baidu, two of the largest search engines in the world. We first compared the features and functions of the two services based on documentation and extensive testing. We then carried out an empirical study that collected query volume data from the two sources. We found that data from both sources could be used to predict the quality of Chinese universities and companies. Despite the differences between the two services in terms of technology, such as differing methods of language processing, the search volume data from the two were highly correlated, and combining the two data sources did not improve the predictive power of the data. However, there was a major difference between the two in terms of data availability: Baidu Index was able to provide more search volume data than Google Trends did. Our analysis showed that the disadvantage of Google Trends in this regard was due to Google's smaller user base in China. The implication of this finding goes beyond China: Google's user bases in many countries are smaller than its user base in China, so the search volume data related to those countries could suffer from the same issue as the data related to China.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
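    The central finding — that the two query-volume series are highly correlated — rests on a standard correlation measure; a minimal stand-in (toy data, not the study's series) is:

      import numpy as np

      def series_correlation(google_trends, baidu_index):
          # Pearson correlation between two aligned query-volume series
          return np.corrcoef(google_trends, baidu_index)[0, 1]

      print(series_correlation([10, 40, 35, 80], [12, 38, 30, 85]))  # ~0.99 on toy data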
  6. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.02
    Abstract
    PageRank is defined as the stationary state of a Markov chain. The chain is obtained by perturbing the transition matrix induced by a web graph with a damping factor alpha that spreads part of the rank uniformly. The choice of alpha is eminently empirical, and in most cases the original suggestion alpha=0.85 by Brin and Page is still used. Recently, however, the behaviour of PageRank with respect to changes in alpha was discovered to be useful in link-spam detection. Moreover, an analytical justification of the value chosen for alpha is still missing. In this paper, we give the first mathematical analysis of PageRank when alpha changes. In particular, we show that, contrary to popular belief, for real-world graphs values of alpha close to 1 do not give a more meaningful ranking. Then, we give closed-form formulae for PageRank derivatives of any order, and an extension of the Power Method that approximates the k-th derivative with convergence O(t^k alpha^t). Finally, we show a tight connection between iterated computation and analytical behaviour by proving that the k-th iteration of the Power Method gives exactly the PageRank value obtained using a Maclaurin polynomial of degree k. The latter result paves the way towards the application of analytical methods to the study of PageRank.
    Date
    16. 1.2016 10:22:28
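    Since the paper studies PageRank as a function of alpha via the Power Method, a bare-bones version of that iteration is sketched below (assuming a row-stochastic transition matrix with dangling nodes already handled); per the abstract, the k-th iterate matches the degree-k Maclaurin polynomial of PageRank in alpha:

      import numpy as np

      def pagerank(P, alpha=0.85, k=100):
          # Power Method: r <- alpha * P^T r + (1 - alpha) * v
          n = P.shape[0]
          v = np.full(n, 1.0 / n)   # uniform teleport vector
          r = v.copy()
          for _ in range(k):
              r = alpha * (P.T @ r) + (1 - alpha) * v
          return r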
  7. Shapira, B.; Zabar, B.: Personalized search : integrating collaboration and social networks (2011) 0.01
    Abstract
    Despite improvements in their capabilities, search engines still fail to provide users with only relevant results. One reason is that most search engines implement a "one size fits all" approach that ignores personal preferences when retrieving the results of a user's query. Recent studies (Smyth, 2010) have elaborated the importance of personalizing search results and have proposed integrating recommender system methods for enhancing results using contextual and extrinsic information that might indicate the user's actual needs. In this article, we review recommender system methods used for personalizing and improving search results and examine the effect of two such methods that are merged for this purpose. One method is based on collaborative users' knowledge; the second integrates information from the user's social network. We propose new methods for collaborative- and social-based search and demonstrate that each of these methods, when separately applied, produces more accurate search results than does a purely keyword-based search engine (referred to as a "standard search engine"), with the social search engine being more accurate than the collaborative one. However, separately applied, these methods do not produce a sufficient number of results (low coverage). Nevertheless, merging these methods with those implemented by standard search engines overcomes the low-coverage problem and produces personalized results that are significantly more accurate than those of standard search engines while also providing sufficient coverage. The improvement, however, is significant only for topics for which the diversity of terms used for queries among users is low.
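    One way to picture the merging step — the abstract does not spell out the fusion rule, so this reciprocal-rank fusion is purely illustrative — is to score each document across the standard, collaborative, and social rankings, falling back on whichever lists cover it:

      def merged_rank(standard, collaborative, social, weights=(1.0, 0.5, 0.5)):
          # Weighted reciprocal-rank fusion over three ranked lists
          scores = {}
          for w, ranking in zip(weights, (standard, collaborative, social)):
              for rank, doc in enumerate(ranking, start=1):
                  scores[doc] = scores.get(doc, 0.0) + w / rank
          return sorted(scores, key=scores.get, reverse=True)

      print(merged_rank(["d1", "d2", "d3"], ["d2"], ["d3", "d4"]))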
  8. Höfer, W.: Detektive im Web (1999) 0.01
    Date
    22. 8.1999 20:22:06
  9. Rensman, J.: Blick ins Getriebe (1999) 0.01
    Date
    22. 8.1999 21:22:59
  10. Munson, K.I.: Internet search engines : understanding their design to improve information retrieval (2000) 0.01
    Abstract
    The relationship between the methods currently used for indexing the World Wide Web and the programs, languages, and protocols on which the World Wide Web is based is examined. Two methods for indexing the Web are described, directories being briefly discussed while search engines are considered in detail. The automated approach used to create these tools is examined with special emphasis on the parts of a document used in indexing. Shortcomings of the approach are described. Suggestions for effective use of Web search engines are given
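    The design point — that engines index only selected parts of a document — can be made concrete with a toy inverted index over named fields (the tokenization here is deliberately naive):

      from collections import defaultdict

      def build_index(docs):
          # docs: {doc_id: {"title": ..., "body": ...}}; index only these fields
          index = defaultdict(set)
          for doc_id, fields in docs.items():
              for field in ("title", "body"):
                  for term in fields.get(field, "").lower().split():
                      index[term].add(doc_id)
          return index

      idx = build_index({1: {"title": "Internet search engines", "body": "design"}})
      print(sorted(idx["search"]))  # [1]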
  11. Clarke, S.J.: Search engines for the World Wide Web : an evaluation of recent developments (2000) 0.01
    Abstract
    Search engines are defined, and recent developments described, exemplified, and evaluated, especially those concerned with traditional search and retrieval capabilities. Discussion concentrates on two broad issues: (1) collection and indexing methods and (2) retrieval and ranking methods. It is concluded that a wider adoption of field searching, proximity searching, and relevance feedback would improve quality of search results
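    Of the capabilities whose wider adoption is urged here, proximity searching is the easiest to sketch: given a positional index, a NEAR/n operator reduces to a window test over term positions (a minimal version, not any particular engine's implementation):

      def within_proximity(positions_a, positions_b, max_gap=3):
          # True if some occurrence of term A lies within max_gap tokens of term B
          return any(abs(a - b) <= max_gap for a in positions_a for b in positions_b)

      print(within_proximity([4, 20], [6, 50], max_gap=3))  # True: positions 4 and 6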
  12. Bar-Ilan, J.: Methods for measuring search engine performance over time (2002) 0.01
    Abstract
    This study introduces methods for evaluating search engine performance over a time period. Several measures are defined, which as a whole describe search engine functionality over time. The necessary setup for such studies is described, and the use of these measures is illustrated through a specific example. The set of measures introduced here may serve as a guideline for search engines in testing and improving their functionality. We recommend setting up a standard suite of measures for evaluating search engine performance.
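    A plausible instance of such a measure — the paper defines a whole suite, and this Jaccard overlap of a query's result sets at two crawl dates is only one hedged example — is:

      def jaccard_overlap(results_t1, results_t2):
          # Stability of a query's result set between two points in time
          s1, s2 = set(results_t1), set(results_t2)
          return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 1.0

      print(jaccard_overlap(["u1", "u2", "u3"], ["u2", "u3", "u4"]))  # 0.5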
  13. MacLeod, R.: Promoting a subject gateway : a case study from EEVL (Edinburgh Engineering Virtual Library) (2000) 0.01
    Date
    22. 6.2002 19:40:22
  14. Vidmar, D.J.: Darwin on the Web : the evolution of search tools (1999) 0.01
    Source
    Computers in libraries. 19(1999) no.5, S.22-28
  15. Back, J.: An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.01
    Date
    25. 8.2005 17:42:22
  16. ap: Suchmaschinen in neuem Gewand : Metaspinner kennt 600 Millionen Seiten (1999) 0.01
    Date
    3. 5.1997 8:44:22
  17. Dunning, A.: Do we still need search engines? (1999) 0.01
    Source
    Ariadne. 1999, no.22
  18. Bawden, D.: Google and the universe of knowledge (2008) 0.01
    Date
    7. 6.2008 16:22:20
  19. Auf der Suche nach Suchmaschinen (1996) 0.01
    Source
    Cogito. 12(1996) H.5, S.19-22
  20. Bager, J.: Weniger ist mehr : Internet-Suchmaschinen richtig einsetzen (1998) 0.01
    Date
    29.12.1998 11:22:00

Languages

  • e 99
  • d 69
  • f 1
  • nl 1