Search (80 results, page 1 of 4)

  • theme_ss:"Suchmaschinen"
  • type_ss:"el"
  1. Dunning, A.: Do we still need search engines? (1999) 0.03
    0.03336741 = product of:
      0.06673482 = sum of:
        0.06673482 = product of:
          0.10010222 = sum of:
            0.013307921 = weight(_text_:a in 6021) [ClassicSimilarity], result of:
              0.013307921 = score(doc=6021,freq=4.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.25222903 = fieldWeight in 6021, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6021)
            0.0867943 = weight(_text_:22 in 6021) [ClassicSimilarity], result of:
              0.0867943 = score(doc=6021,freq=2.0), product of:
                0.16023713 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045758117 = queryNorm
                0.5416616 = fieldWeight in 6021, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6021)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    Ariadne. 1999, no.22
    Type
    a
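The indented breakdowns under each entry are Lucene ClassicSimilarity "explain" trees. A minimal sketch, assuming Lucene's documented ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, leaf score = queryWeight * fieldWeight), reproduces the numbers shown for entry 1:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def leaf_score(freq, doc_freq, max_docs, query_norm, field_norm):
    # Each leaf of the explain tree is queryWeight * fieldWeight.
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm                      # e.g. ~0.052761257 for _text_:a
    field_weight = math.sqrt(freq) * i * field_norm    # tf * idf * fieldNorm
    return query_weight * field_weight

# The two leaves of entry 1 (doc 6021), then the coord factors:
leaf_a  = leaf_score(4.0, 37942, 44218, 0.045758117, 0.109375)  # ~0.013307921
leaf_22 = leaf_score(2.0, 3622, 44218, 0.045758117, 0.109375)   # ~0.0867943
print(round((leaf_a + leaf_22) * (2 / 3) * (1 / 2), 8))         # ~0.03336741
```

The coord(2/3) and coord(1/2) factors scale each sum by the fraction of query clauses that matched at that level, which yields the 0.03 displayed next to the title.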
  2. Birmingham, J.: Internet search engines (1996) 0.03
    0.02748698 = product of:
      0.05497396 = sum of:
        0.05497396 = product of:
          0.08246094 = sum of:
            0.008065818 = weight(_text_:a in 5664) [ClassicSimilarity], result of:
              0.008065818 = score(doc=5664,freq=2.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.15287387 = fieldWeight in 5664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5664)
            0.07439512 = weight(_text_:22 in 5664) [ClassicSimilarity], result of:
              0.07439512 = score(doc=5664,freq=2.0), product of:
                0.16023713 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045758117 = queryNorm
                0.46428138 = fieldWeight in 5664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5664)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Basically a good listing, in table format, of features from the major search engines.
    Date
    10.11.1996 16:36:22
  3. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.02
    0.01906709 = product of:
      0.03813418 = sum of:
        0.03813418 = product of:
          0.05720127 = sum of:
            0.007604526 = weight(_text_:a in 1149) [ClassicSimilarity], result of:
              0.007604526 = score(doc=1149,freq=4.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.14413087 = fieldWeight in 1149, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1149)
            0.049596746 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
              0.049596746 = score(doc=1149,freq=2.0), product of:
                0.16023713 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045758117 = queryNorm
                0.30952093 = fieldWeight in 1149, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1149)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
    Type
    a
  4. Schaat, S.: Von der automatisierten Manipulation zur Manipulation der Automatisierung (2019) 0.02
    0.018324653 = product of:
      0.036649305 = sum of:
        0.036649305 = product of:
          0.054973956 = sum of:
            0.0053772116 = weight(_text_:a in 4996) [ClassicSimilarity], result of:
              0.0053772116 = score(doc=4996,freq=2.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.10191591 = fieldWeight in 4996, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4996)
            0.049596746 = weight(_text_:22 in 4996) [ClassicSimilarity], result of:
              0.049596746 = score(doc=4996,freq=2.0), product of:
                0.16023713 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045758117 = queryNorm
                0.30952093 = fieldWeight in 4996, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4996)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    19. 2.2019 17:22:00
    Type
    a
  5. Baeza-Yates, R.; Boldi, P.; Castillo, C.: Generalizing PageRank : damping functions for linkbased ranking algorithms (2006) 0.01
    0.013693414 = product of:
      0.027386827 = sum of:
        0.027386827 = product of:
          0.04108024 = sum of:
            0.010082272 = weight(_text_:a in 2565) [ClassicSimilarity], result of:
              0.010082272 = score(doc=2565,freq=18.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.19109234 = fieldWeight in 2565, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2565)
            0.030997967 = weight(_text_:22 in 2565) [ClassicSimilarity], result of:
              0.030997967 = score(doc=2565,freq=2.0), product of:
                0.16023713 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045758117 = queryNorm
                0.19345059 = fieldWeight in 2565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2565)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    This paper introduces a family of link-based ranking algorithms that propagate page importance through links. In these algorithms there is a damping function that decreases with distance, so a direct link implies more endorsement than a link through a long path. PageRank is the most widely known ranking function of this family. The main objective of this paper is to determine whether this family of ranking techniques has some interest per se, and how different choices for the damping function impact rank quality and convergence speed. Even though our results suggest that PageRank can be approximated with other simpler forms of rankings that may be computed more efficiently, our focus is of a more speculative nature, in that it aims at separating the kernel of PageRank, that is, link-based importance propagation, from the way propagation decays over paths. We focus on three damping functions, having linear, exponential, and hyperbolic decay on the lengths of the paths. The exponential decay corresponds to PageRank, and the other functions are new. Our presentation includes algorithms, analysis, comparisons and experiments that study their behavior under different parameters in real Web graph data. Among other results, we show how to calculate a linear approximation that induces a page ordering that is almost identical to PageRank's using a fixed small number of iterations; comparisons were performed using Kendall's tau on large domain datasets.
    Date
    16. 1.2016 10:22:28
    Type
    a
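Entry 5's abstract describes a family of rankings that differ only in how endorsement decays with path length. A minimal sketch of that general scheme, assuming a row-stochastic transition matrix and a damping sequence that sums to 1 (the toy graph and the particular normalized linear decay below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def functional_rank(M, damping, n_steps):
    # Generalized link-based ranking: rank = sum_t damping(t) * (v0 @ M^t),
    # where M is the row-stochastic transition matrix of the web graph and
    # damping(t) is a non-negative sequence summing (approximately) to 1.
    n = M.shape[0]
    v = np.full(n, 1.0 / n)   # uniform start vector v0
    rank = np.zeros(n)
    for t in range(n_steps):
        rank += damping(t) * v
        v = v @ M
    return rank

# Toy 3-page web: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1.
M = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0]])

alpha = 0.85
exponential = lambda t: (1 - alpha) * alpha ** t   # PageRank's damping
L = 10
linear = lambda t: 2 * (L - t) / (L * (L + 1))     # one normalized linear decay, cut off at L

print(functional_rank(M, exponential, 200).round(4))
print(functional_rank(M, linear, L).round(4))
```

With exponential decay this is exactly the power-series form of PageRank; swapping in the linear sequence gives the kind of fixed-iteration approximation the abstract mentions.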
  6. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.01
    0.013501208 = product of:
      0.027002417 = sum of:
        0.027002417 = product of:
          0.040503625 = sum of:
            0.0095056575 = weight(_text_:a in 2564) [ClassicSimilarity], result of:
              0.0095056575 = score(doc=2564,freq=16.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.18016359 = fieldWeight in 2564, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2564)
            0.030997967 = weight(_text_:22 in 2564) [ClassicSimilarity], result of:
              0.030997967 = score(doc=2564,freq=2.0), product of:
                0.16023713 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045758117 = queryNorm
                0.19345059 = fieldWeight in 2564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2564)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    PageRank is defined as the stationary state of a Markov chain. The chain is obtained by perturbing the transition matrix induced by a web graph with a damping factor alpha that spreads uniformly part of the rank. The choice of alpha is eminently empirical, and in most cases the original suggestion alpha=0.85 by Brin and Page is still used. Recently, however, the behaviour of PageRank with respect to changes in alpha was discovered to be useful in link-spam detection. Moreover, an analytical justification of the value chosen for alpha is still missing. In this paper, we give the first mathematical analysis of PageRank when alpha changes. In particular, we show that, contrary to popular belief, for real-world graphs values of alpha close to 1 do not give a more meaningful ranking. Then, we give closed-form formulae for PageRank derivatives of any order, and an extension of the Power Method that approximates them with convergence O(t**k*alpha**t) for the k-th derivative. Finally, we show a tight connection between iterated computation and analytical behaviour by proving that the k-th iteration of the Power Method gives exactly the PageRank value obtained using a Maclaurin polynomial of degree k. The latter result paves the way towards the application of analytical methods to the study of PageRank.
    Date
    16. 1.2016 10:22:28
    Type
    a
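Entry 6 studies how PageRank varies with the damping factor alpha. A minimal Power Method sketch (the standard iteration, run on an assumed toy graph) makes that dependence concrete; the paper's closed-form derivatives and Maclaurin-polynomial result are not reproduced here:

```python
import numpy as np

def pagerank(P, alpha=0.85, n_iter=100):
    # Power Method: x <- alpha * (x @ P) + (1 - alpha) * v,
    # with v the uniform teleportation vector; P must be row-stochastic.
    n = P.shape[0]
    v = np.full(n, 1.0 / n)
    x = v.copy()
    for _ in range(n_iter):
        x = alpha * (x @ P) + (1 - alpha) * v
    return x

# Same kind of toy graph as above.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0]])
for alpha in (0.5, 0.85, 0.99):
    print(alpha, pagerank(P, alpha).round(4))
```

Comparing the orderings induced by different alpha values is what motivates the paper's claim that alpha close to 1 does not automatically give a more meaningful ranking.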
  7. Hodson, H.: Google's fact-checking bots build vast knowledge bank (2014) 0.01
    0.011426046 = product of:
      0.022852091 = sum of:
        0.022852091 = product of:
          0.034278136 = sum of:
            0.009313605 = weight(_text_:a in 1700) [ClassicSimilarity], result of:
              0.009313605 = score(doc=1700,freq=6.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.17652355 = fieldWeight in 1700, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1700)
            0.02496453 = weight(_text_:h in 1700) [ClassicSimilarity], result of:
              0.02496453 = score(doc=1700,freq=2.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.21959636 = fieldWeight in 1700, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1700)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    The search giant is automatically building Knowledge Vault, a massive database that could give us unprecedented access to the world's facts. Google is building the largest store of knowledge in human history - and it's doing so without any human help. Instead, Knowledge Vault autonomously gathers and merges information from across the web into a single base of facts about the world, and the people and objects in it.
    Type
    a
  8. El-Ramly, N.; Peterson, R.E.; Volonino, L.: Top ten Web sites using search engines : the case of the desalination industry (1996) 0.01
    0.008142265 = product of:
      0.01628453 = sum of:
        0.01628453 = product of:
          0.024426792 = sum of:
            0.0057033943 = weight(_text_:a in 945) [ClassicSimilarity], result of:
              0.0057033943 = score(doc=945,freq=4.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.10809815 = fieldWeight in 945, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=945)
            0.018723397 = weight(_text_:h in 945) [ClassicSimilarity], result of:
              0.018723397 = score(doc=945,freq=2.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.16469726 = fieldWeight in 945, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.046875 = fieldNorm(doc=945)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    The desalination industry involves the desalting of sea or brackish water and achieves the purpose of increasing the world's effective water supply. There are approximately 4,000 desalination Web sites. The six major Internet search engines were used to determine, according to each of the six, the top twenty sites for desalination. Each site was visited and the 120 gross returns were pared down to the final ten - the 'Top Ten'. The Top Ten were then analyzed to determine what it was that made the sites useful and informative. The major attributes were: a) currency (up-to-date); b) search site capability; c) access to articles on desalination; d) newsletters; e) databases; f) product information; g) online conferencing; h) valuable links to other sites; i) communication links; j) site maps; and k) case studies. Reasons for having a Web site and the current status and prospects for Internet commerce are discussed.
  9. Place, E.: Internationale Zusammenarbeit bei Internet Subject Gateways (1999) 0.01
    0.0061995937 = product of:
      0.012399187 = sum of:
        0.012399187 = product of:
          0.03719756 = sum of:
            0.03719756 = weight(_text_:22 in 4189) [ClassicSimilarity], result of:
              0.03719756 = score(doc=4189,freq=2.0), product of:
                0.16023713 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045758117 = queryNorm
                0.23214069 = fieldWeight in 4189, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4189)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:35:09
  10. Maurer, H.; Balke, T.; Kappe, F.; Kulathuramaiyer, N.; Weber, S.; Zaka, B.: Report on dangers and opportunities posed by large search engines, particularly Google (2007) 0.01
    0.005448967 = product of:
      0.010897934 = sum of:
        0.010897934 = product of:
          0.016346902 = sum of:
            0.006985203 = weight(_text_:a in 754) [ClassicSimilarity], result of:
              0.006985203 = score(doc=754,freq=24.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.13239266 = fieldWeight in 754, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=754)
            0.009361698 = weight(_text_:h in 754) [ClassicSimilarity], result of:
              0.009361698 = score(doc=754,freq=2.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.08234863 = fieldWeight in 754, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=754)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    The aim of our investigation was to discuss exactly what is formulated in the title. This will of course constitute a main part of this write-up. However, in the process of investigation it also became clear that the focus has to be extended, not to just cover Google and search engines in an isolated fashion, but to also cover other Web 2.0 related phenomena, particularly Wikipedia, Blogs, and other related community efforts. It was the purpose of our investigation to demonstrate: - Plagiarism and IPR violation are serious concerns in academia and in the commercial world - Current techniques to fight both are rudimentary, yet could be improved by a concentrated initiative - One reason why the fight is difficult is the dominance of Google as THE major search engine and that Google is unwilling to cooperate - The monopolistic behaviour of Google is also threatening how we see the world, how we as individuals are seen (complete loss of privacy) and is even threatening the world economy (!) In our proposal we presented a list of typical sections that would be covered at varying depth, with the possible replacement of one or the other by items that would emerge as still more important.
    The preliminary intended and approved list was: Section 1: To concentrate on Google as virtual monopoly, and Google's reported support of Wikipedia. To find experimental evidence of this support or show that the reports are not more than rumours. Section 2: To address the copy-paste syndrome with the socio-cultural consequences associated with it. Section 3: To deal with plagiarism and IPR violations as two intertwined topics: how they affect various players (teachers and pupils in school; academia; corporations; governmental studies, etc.). To establish that not enough is done concerning these issues, partially due to just plain ignorance. We will propose some ways to alleviate the problem. Section 4: To discuss the usual tools to fight plagiarism and their shortcomings. Section 5: To propose ways to overcome most of the above problems according to proposals by Maurer/Zaka. To give examples, but to make it clear that to do this more seriously a pilot project is necessary beyond this particular study. Section 6: To briefly analyze various views of plagiarism as it is quite different in different fields (journalism, engineering, architecture, painting, ...) and to present a concept that avoids plagiarism from the very beginning. Section 7: To point out the many other dangers of Google or Google-like undertakings: opportunistic ranking, analysis of data as a window into the commercial future. Section 8: To outline the need for new international laws. Section 9: To mention the feeble European attempts to fight Google, despite Google's growing power. Section 10: To argue that there is no way to catch up with Google in a frontal attack.
    Section 11: To argue that fighting large search engines and plagiarism slice-by-slice by using dedicated servers combined through one hub could eventually decrease the importance of other global search engines. Section 12: To argue that global search engines are an area that cannot be left to the free market, but require some government control or at least non-profit institutions. We will mention other areas where similar, if not as glaring, phenomena are visible. Section 13: We will mention in passing the potential role of virtual worlds, such as the currently overhyped system "Second Life". Section 14: To elaborate and try out a model for knowledge workers that does not require special search engines, with a description of a simple demonstrator. Section 15 (not originally part of the proposal): To propose concrete actions and to describe an Austrian effort that could, with moderate support, minimize the role of Google for Austria. Section 16: References (not originally part of the proposal). In what follows, we will stick to Sections 1-14 plus the new Sections 15 and 16 as listed, plus a few Appendices.
    We believe that the importance has shifted considerably since the approval of the project. We thus will emphasize some aspects much more than originally planned, and treat others in a shorter fashion. We believe and hope that this is also seen as an unexpected benefit by BMVIT. This report is structured as follows: After an Executive Summary that will highlight why the topic is of such paramount importance, we explain in an introduction the best ways to study the report and its appendices. We can report with some pride that many of the ideas have been accepted by the international scene at conferences and by journals as of such crucial importance that a number of papers (constituting the appendices and elaborating the various sections) have been considered high-quality material for publication. We want to thank the Austrian Federal Ministry of Transport, Innovation and Technology (BMVIT) for making this study possible. We would be delighted if the study could be distributed widely to European decision makers, as some of the issues involved do indeed involve all of Europe, if not the world.
  11. Dambeck, H.: Wie Google mit Milliarden Unbekannten rechnet : Teil 2: Ausgerechnet: Der Page Rank für ein Mini-Web aus drei Seiten (2009) 0.01
    0.005200944 = product of:
      0.010401888 = sum of:
        0.010401888 = product of:
          0.031205663 = sum of:
            0.031205663 = weight(_text_:h in 3080) [ClassicSimilarity], result of:
              0.031205663 = score(doc=3080,freq=2.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.27449545 = fieldWeight in 3080, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3080)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  12. Dambeck, H.: Wie Google mit Milliarden Unbekannten rechnet : Teil 1 (2009) 0.00
    0.0041607553 = product of:
      0.008321511 = sum of:
        0.008321511 = product of:
          0.02496453 = sum of:
            0.02496453 = weight(_text_:h in 3081) [ClassicSimilarity], result of:
              0.02496453 = score(doc=3081,freq=2.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.21959636 = fieldWeight in 3081, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3081)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  13. Gillitzer, B.: Yewno (2017) 0.00
    0.0041330624 = product of:
      0.008266125 = sum of:
        0.008266125 = product of:
          0.024798373 = sum of:
            0.024798373 = weight(_text_:22 in 3447) [ClassicSimilarity], result of:
              0.024798373 = score(doc=3447,freq=2.0), product of:
                0.16023713 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045758117 = queryNorm
                0.15476047 = fieldWeight in 3447, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3447)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    22. 2.2017 10:16:49
  14. Krempl, S.: Google muss zerschlagen werden (2007) 0.00
    0.0036406606 = product of:
      0.007281321 = sum of:
        0.007281321 = product of:
          0.021843962 = sum of:
            0.021843962 = weight(_text_:h in 753) [ClassicSimilarity], result of:
              0.021843962 = score(doc=753,freq=2.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.19214681 = fieldWeight in 753, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=753)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Cf. the study "Maurer, H. et al: Report on dangers and opportunities posed by large search engines, particularly Google" at: http://www.iicm.tugraz.at/iicm_papers/dangers_google.pdf.
  15. Palm, G.: Der Zeitgeist in der Suchmaschine : Unser alltäglicher "Google-Hupf" und seine Spuren (2002) 0.00
    0.002600472 = product of:
      0.005200944 = sum of:
        0.005200944 = product of:
          0.015602832 = sum of:
            0.015602832 = weight(_text_:h in 1226) [ClassicSimilarity], result of:
              0.015602832 = score(doc=1226,freq=2.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.13724773 = fieldWeight in 1226, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1226)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Time machines have existed for quite a while, at least since H. G. Wells, but a zeitgeist machine has existed only since 1998: Google. Trend gurus and their trend agencies will soon belong to the past. One more irony of the net, which today already retires what was meant for tomorrow. Google tends to put the already questionable guild of seers out of work, because the search engine of search engines relies not on Nostradamus or horoscopes, Cassandra or coffee grounds, but on search queries. What is on the world's mind, what is mega-in or mega-out, shows up in Google's zeitgeist feature. The complex wonder of Google is the vowel "o", which is famously remarkably stretchable when the world is searching for itself. Google traces its name back to a play on the mathematical term "googol", a 1 followed by 100 zeros. Counting in Google's partnerships with Yahoo and others, about 150 million Google searches are run per day, with the trend naturally rising. According to Google co-founder Larry Page, the ambition of the perfect search engine is that it understands exactly what the searcher wants and serves them precisely. But that is only the sunny side, far from being reached, of the lightning-fast search answers of the digital knowledge society. The many questions of the curious are themselves answers: answers to the question of the interests, wishes, and desires of the networked society.
  16. Page, A.: The search is over : the search-engine secrets of the pros (1996) 0.00
    0.0025049606 = product of:
      0.0050099213 = sum of:
        0.0050099213 = product of:
          0.015029764 = sum of:
            0.015029764 = weight(_text_:a in 5670) [ClassicSimilarity], result of:
              0.015029764 = score(doc=5670,freq=10.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.28486365 = fieldWeight in 5670, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5670)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Covers 8 of the most popular search engines. Gives a summary of each and has a nice table of features that also briefly lists the pros and cons. Includes a short explanation of Boolean operators too
    Type
    a
  17. Bauckhage, C.: Marginalizing over the PageRank damping factor (2014) 0.00
    0.0022405048 = product of:
      0.0044810097 = sum of:
        0.0044810097 = product of:
          0.013443029 = sum of:
            0.013443029 = weight(_text_:a in 928) [ClassicSimilarity], result of:
              0.013443029 = score(doc=928,freq=8.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.25478977 = fieldWeight in 928, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=928)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    In this note, we show how to marginalize over the damping parameter of the PageRank equation so as to obtain a parameter-free version known as TotalRank. Our discussion is meant as a reference and intended to provide a guided tour towards an interesting result that has applications in information retrieval and classification.
    Type
    a
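The marginalization in entry 17 can be carried out with the geometric-series form of PageRank: PageRank(alpha) = (1 - alpha) * sum_k alpha^k * (v @ P^k), and integrating alpha over [0, 1] gives the weights integral_0^1 (1 - alpha) * alpha^k d alpha = 1 / ((k + 1)(k + 2)). A sketch under that derivation (the truncation length is an assumption; the 1/((k+1)(k+2)) series converges slowly):

```python
import numpy as np

def total_rank(P, n_terms=2000):
    # Parameter-free rank: sum_k (v @ P^k) / ((k + 1) * (k + 2)),
    # i.e. PageRank integrated over the damping factor alpha in [0, 1].
    n = P.shape[0]
    v = np.full(n, 1.0 / n)
    rank = np.zeros(n)
    for k in range(n_terms):
        rank += v / ((k + 1) * (k + 2))
        v = v @ P
    return rank

# Toy row-stochastic graph, as in the sketches above.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0]])
print(total_rank(P).round(4))
```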
  18. Powell, J.; Fox, E.A.: Multilingual federated searching across heterogeneous collections (1998) 0.00
    0.0020039687 = product of:
      0.0040079374 = sum of:
        0.0040079374 = product of:
          0.012023811 = sum of:
            0.012023811 = weight(_text_:a in 1250) [ClassicSimilarity], result of:
              0.012023811 = score(doc=1250,freq=10.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.22789092 = fieldWeight in 1250, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1250)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    This article describes a scalable system for searching heterogeneous multilingual collections on the World Wide Web. It details a markup language for describing the characteristics of a search engine and its interface, and a protocol for requesting word translations between languages.
    Type
    a
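Entry 18's abstract names a markup language for describing a search engine's characteristics and interface but does not reproduce it. Purely as a hypothetical illustration of what such a descriptor has to capture (every field name below is invented for this sketch, not taken from the paper):

```python
# Hypothetical descriptor for one engine in a multilingual federation.
engine = {
    "name": "ExampleEngine",                              # invented example
    "query_url": "http://example.org/search?q={terms}",   # invented example
    "charset": "utf-8",
    "languages": ["en", "de", "fr"],
    "boolean_syntax": {"and": "+", "not": "-"},
    "result_link_pattern": r'<a href="(?P<url>[^"]+)">(?P<title>[^<]+)</a>',
}
```

A federated searcher would read such records to translate one user query into each engine's native syntax, with a word-translation step (the abstract's second artifact) mapping query terms between the declared languages.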
  19. Ding, L.; Finin, T.; Joshi, A.; Peng, Y.; Cost, R.S.; Sachs, J.; Pan, R.; Reddivari, P.; Doshi, V.: Swoogle : a Semantic Web search and metadata engine (2004) 0.00
    0.0019011315 = product of:
      0.003802263 = sum of:
        0.003802263 = product of:
          0.011406789 = sum of:
            0.011406789 = weight(_text_:a in 4704) [ClassicSimilarity], result of:
              0.011406789 = score(doc=4704,freq=16.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.2161963 = fieldWeight in 4704, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4704)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Swoogle is a crawler-based indexing and retrieval system for the Semantic Web, i.e., for Web documents in RDF or OWL. It extracts metadata for each discovered document, and computes relations between documents. Discovered documents are also indexed by an information retrieval system which can use either character N-Gram or URIrefs as keywords to find relevant documents and to compute the similarity among a set of documents. One of the interesting properties we compute is rank, a measure of the importance of a Semantic Web document.
    Content
    Cf. http://www.dblab.ntua.gr/~bikakis/LD/5.pdf. See also: http://swoogle.umbc.edu/. See also: http://ebiquity.umbc.edu/paper/html/id/183/. See also: Radhakrishnan, A.: Swoogle : An Engine for the Semantic Web at: http://www.searchenginejournal.com/swoogle-an-engine-for-the-semantic-web/5469/.
    Type
    a
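Of the two keyword schemes Swoogle's abstract mentions, character N-grams are easy to illustrate; a minimal sketch (n = 4 is an arbitrary choice here):

```python
def char_ngrams(text, n=4):
    # Character N-grams used as indexing keywords.
    return {text[i:i + n] for i in range(len(text) - n + 1)}

# URIrefs are the other keyword scheme the abstract mentions.
print(sorted(char_ngrams("http://xmlns.com/foaf/0.1/Person"))[:8])
```

N-grams let the index match tokens inside URIrefs and CamelCase identifiers that a whitespace tokenizer would miss, which is presumably why both schemes are offered.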
  20. Spink, A.; Gunar, O.: E-Commerce Web queries : Excite and AskJeeves study (2001) 0.00
    0.0017924039 = product of:
      0.0035848077 = sum of:
        0.0035848077 = product of:
          0.010754423 = sum of:
            0.010754423 = weight(_text_:a in 910) [ClassicSimilarity], result of:
              0.010754423 = score(doc=910,freq=2.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.20383182 = fieldWeight in 910, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=910)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    

Languages

  • e 50
  • d 28

Types

  • a 44
  • x 1