Search (66 results, page 1 of 4)

  • theme_ss:"Suchmaschinen"
  • type_ss:"el"
  1. Birmingham, J.: Internet search engines (1996) 0.04
    0.037244894 = product of:
      0.055867337 = sum of:
        0.01592848 = weight(_text_:of in 5664) [ClassicSimilarity], result of:
          0.01592848 = score(doc=5664,freq=2.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.20732689 = fieldWeight in 5664, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=5664)
        0.039938856 = product of:
          0.07987771 = sum of:
            0.07987771 = weight(_text_:22 in 5664) [ClassicSimilarity], result of:
              0.07987771 = score(doc=5664,freq=2.0), product of:
                0.17204592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049130294 = queryNorm
                0.46428138 = fieldWeight in 5664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5664)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     Basically a good listing, in table format, of features from the major search engines.
    Date
    10.11.1996 16:36:22
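The score breakdowns shown with each result follow Lucene's ClassicSimilarity (TF-IDF) explain format: per term, score = queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, with coord() factors for the fraction of query clauses matched. As a check, the sketch below (plain Python, no dependencies) recomputes the figures listed for result 1 from the components in its explain tree; all constants are taken from the listing above.

```python
import math

def classic_similarity_term_score(freq, idf, field_norm, query_norm):
    """Per-term score as in Lucene ClassicSimilarity explain output:
    score = queryWeight * fieldWeight
          = (idf * queryNorm) * (tf * idf * fieldNorm)."""
    tf = math.sqrt(freq)                  # tf(freq=2.0) = 1.4142135
    query_weight = idf * query_norm       # e.g. 1.5637573 * 0.049130294 = 0.076827854
    field_weight = tf * idf * field_norm  # e.g. 1.4142135 * 1.5637573 * 0.09375 = 0.20732689
    return query_weight * field_weight

QUERY_NORM = 0.049130294  # from the explain tree for doc 5664

# Term "of": idf = 1.5637573, freq = 2, fieldNorm = 0.09375
score_of = classic_similarity_term_score(2.0, 1.5637573, 0.09375, QUERY_NORM)

# Term "22": idf = 3.5018296, freq = 2, fieldNorm = 0.09375, then coord(1/2) in its sub-query
score_22 = classic_similarity_term_score(2.0, 3.5018296, 0.09375, QUERY_NORM) * 0.5

# Two of three query clauses matched -> coord(2/3)
total = (score_of + score_22) * (2.0 / 3.0)

print(round(score_of, 9))  # ~0.015928480
print(round(score_22, 9))  # ~0.039938856
print(round(total, 9))     # ~0.037244894, matching the listed document score
```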
  2. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.04
    0.036480736 = product of:
      0.054721102 = sum of:
        0.028095199 = weight(_text_:of in 1149) [ClassicSimilarity], result of:
          0.028095199 = score(doc=1149,freq=14.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.36569026 = fieldWeight in 1149, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=1149)
        0.026625905 = product of:
          0.05325181 = sum of:
            0.05325181 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
              0.05325181 = score(doc=1149,freq=2.0), product of:
                0.17204592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049130294 = queryNorm
                0.30952093 = fieldWeight in 1149, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1149)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
  3. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.03
    0.028230444 = product of:
      0.042345665 = sum of:
        0.025704475 = weight(_text_:of in 2564) [ClassicSimilarity], result of:
          0.025704475 = score(doc=2564,freq=30.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.33457235 = fieldWeight in 2564, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2564)
        0.016641192 = product of:
          0.033282384 = sum of:
            0.033282384 = weight(_text_:22 in 2564) [ClassicSimilarity], result of:
              0.033282384 = score(doc=2564,freq=2.0), product of:
                0.17204592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049130294 = queryNorm
                0.19345059 = fieldWeight in 2564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2564)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     PageRank is defined as the stationary state of a Markov chain. The chain is obtained by perturbing the transition matrix induced by a web graph with a damping factor alpha that spreads uniformly part of the rank. The choice of alpha is eminently empirical, and in most cases the original suggestion alpha=0.85 by Brin and Page is still used. Recently, however, the behaviour of PageRank with respect to changes in alpha was discovered to be useful in link-spam detection. Moreover, an analytical justification of the value chosen for alpha is still missing. In this paper, we give the first mathematical analysis of PageRank when alpha changes. In particular, we show that, contrary to popular belief, for real-world graphs values of alpha close to 1 do not give a more meaningful ranking. Then, we give closed-form formulae for PageRank derivatives of any order, and an extension of the Power Method that approximates them with convergence O(t^k alpha^t) for the k-th derivative. Finally, we show a tight connection between iterated computation and analytical behaviour by proving that the k-th iteration of the Power Method gives exactly the PageRank value obtained using a Maclaurin polynomial of degree k. The latter result paves the way towards the application of analytical methods to the study of PageRank.
    Date
    16. 1.2016 10:22:28
    Source
    http://vigna.di.unimi.it/ftp/papers/PageRankAsFunction.pdf [Proceedings of the ACM World Wide Web Conference (WWW), 2005]
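Several entries here (Boldi et al. above, Baeza-Yates et al. and Brin/Page below) revolve around PageRank and its damping factor, so a minimal power-iteration sketch may help. The link graph is a hypothetical 4-page example, not data from any of the papers; printing the ranking for several values of alpha illustrates the dependence on the damping factor that Boldi et al. analyse.

```python
import numpy as np

def pagerank(adj, alpha=0.85, tol=1e-10, max_iter=1000):
    """Power iteration for PageRank.
    adj[i, j] = 1 if page i links to page j; dangling pages are
    treated as linking uniformly to every page."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Row-stochastic transition matrix; dangling rows become uniform rows.
    P = np.where(out_deg[:, None] > 0, adj / np.maximum(out_deg, 1)[:, None], 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = alpha * r @ P + (1.0 - alpha) / n
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next
    return r

# Hypothetical 4-page web graph: 0->1, 0->2, 1->2, 2->0, 3->2
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)

for alpha in (0.5, 0.85, 0.99):
    print(alpha, np.round(pagerank(A, alpha), 4))
```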
  4. Baeza-Yates, R.; Boldi, P.; Castillo, C.: Generalizing PageRank : damping functions for link-based ranking algorithms (2006) 0.03
    0.025085872 = product of:
      0.037628807 = sum of:
        0.020987613 = weight(_text_:of in 2565) [ClassicSimilarity], result of:
          0.020987613 = score(doc=2565,freq=20.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.27317715 = fieldWeight in 2565, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2565)
        0.016641192 = product of:
          0.033282384 = sum of:
            0.033282384 = weight(_text_:22 in 2565) [ClassicSimilarity], result of:
              0.033282384 = score(doc=2565,freq=2.0), product of:
                0.17204592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049130294 = queryNorm
                0.19345059 = fieldWeight in 2565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2565)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     This paper introduces a family of link-based ranking algorithms that propagate page importance through links. In these algorithms there is a damping function that decreases with distance, so a direct link implies more endorsement than a link through a long path. PageRank is the most widely known ranking function of this family. The main objective of this paper is to determine whether this family of ranking techniques has some interest per se, and how different choices for the damping function affect rank quality and convergence speed. Even though our results suggest that PageRank can be approximated with other, simpler forms of ranking that may be computed more efficiently, our focus is of a more speculative nature, in that it aims at separating the kernel of PageRank, that is, link-based importance propagation, from the way propagation decays over paths. We focus on three damping functions, having linear, exponential, and hyperbolic decay on the lengths of the paths. The exponential decay corresponds to PageRank, and the other functions are new. Our presentation includes algorithms, analysis, comparisons and experiments that study their behavior under different parameters in real Web graph data. Among other results, we show how to calculate a linear approximation that induces a page ordering that is almost identical to PageRank's using a fixed small number of iterations; comparisons were performed using Kendall's tau on large domain datasets.
    Date
    16. 1.2016 10:22:28
    Source
    http://chato.cl/papers/baeza06_general_pagerank_damping_functions_link_ranking.pdf [Proceedings of the ACM Special Interest Group on Information Retrieval (SIGIR) Conference, SIGIR'06, August 6-10, 2006, Seattle, Washington, USA]
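Following the abstract above, a "functional ranking" assigns each page a score of the form sum over path lengths t of damping(t) times the mass of length-t paths reaching it; exponential decay damping(t) = (1-alpha) * alpha^t recovers PageRank, while linear and hyperbolic decays are the alternatives studied. The sketch below compares the three on the same hypothetical 4-page graph as before; the truncation depth and the constants (L, theta) are illustrative choices, not the paper's exact parameterization.

```python
import numpy as np

def functional_rank(P, damping, t_max=50):
    """Generic link-based ranking: r = sum_{t=0..t_max} damping(t) * v P^t,
    with v the uniform starting vector and P row-stochastic."""
    n = P.shape[0]
    v = np.full(n, 1.0 / n)
    r = np.zeros(n)
    walk = v.copy()
    for t in range(t_max + 1):
        r += damping(t) * walk
        walk = walk @ P
    return r / r.sum()  # normalize so the three variants are comparable

# Three damping choices (illustrative constants):
alpha, L, theta = 0.85, 10, 2.0
exponential = lambda t: (1 - alpha) * alpha ** t  # equivalent to PageRank
linear      = lambda t: max(L - t, 0)             # truncated linear decay
hyperbolic  = lambda t: 1.0 / (t + 1) ** theta    # power-law decay

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)

for name, d in [("exponential", exponential), ("linear", linear), ("hyperbolic", hyperbolic)]:
    print(name, np.round(functional_rank(P, d), 4))
```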
  5. Dunning, A.: Do we still need search engines? (1999) 0.02
    0.015531778 = product of:
      0.046595335 = sum of:
        0.046595335 = product of:
          0.09319067 = sum of:
            0.09319067 = weight(_text_:22 in 6021) [ClassicSimilarity], result of:
              0.09319067 = score(doc=6021,freq=2.0), product of:
                0.17204592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049130294 = queryNorm
                0.5416616 = fieldWeight in 6021, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6021)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Ariadne. 1999, no.22
  6. Smith, A.G.: Search features of digital libraries (2000) 0.01
    0.009933152 = product of:
      0.029799456 = sum of:
        0.029799456 = weight(_text_:of in 940) [ClassicSimilarity], result of:
          0.029799456 = score(doc=940,freq=28.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.38787308 = fieldWeight in 940, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=940)
      0.33333334 = coord(1/3)
    
    Abstract
     Traditional on-line search services such as Dialog, DataStar and Lexis provide a wide range of search features (Boolean and proximity operators, truncation, etc.). This paper discusses the use of these features for effective searching, and argues that these features are required, regardless of advances in search engine technology. The literature on on-line searching is reviewed, identifying features that searchers find desirable for effective searching. A selective survey of current digital libraries available on the Web was undertaken, identifying which search features are present. The survey indicates that current digital libraries do not implement a wide range of search features. For instance: under half of the examples included controlled vocabulary, under half had proximity searching, only one enabled browsing of term indexes, and none of the digital libraries enabled searchers to refine an initial search. Suggestions are made for enhancing the search effectiveness of digital libraries; for instance, by providing a full range of search operators, enabling browsing of search terms, enhancement of records with controlled vocabulary, enabling the refining of initial searches, etc.
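As a small illustration of the Boolean and proximity operators discussed in the abstract above, here is a toy positional inverted index in plain Python; the documents and the proximity window are made up for the example.

```python
from collections import defaultdict

docs = {  # hypothetical mini-collection
    1: "digital libraries need boolean search operators",
    2: "search engines rank web pages",
    3: "proximity search in digital collections",
}

# Positional inverted index: term -> {doc_id: [positions]}
index = defaultdict(lambda: defaultdict(list))
for doc_id, text in docs.items():
    for pos, term in enumerate(text.split()):
        index[term][doc_id].append(pos)

def boolean_and(*terms):
    """Documents containing all of the given terms."""
    sets = [set(index[t]) for t in terms]
    return set.intersection(*sets) if sets else set()

def near(term_a, term_b, window=3):
    """Documents where the two terms occur within `window` positions of each other."""
    hits = set()
    for doc_id in boolean_and(term_a, term_b):
        if any(abs(pa - pb) <= window
               for pa in index[term_a][doc_id]
               for pb in index[term_b][doc_id]):
            hits.add(doc_id)
    return hits

print(boolean_and("digital", "search"))     # {1, 3}
print(near("digital", "search", window=2))  # {3}: only doc 3 has the terms close together
```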
  7. Page, A.: ¬The search is over : the search-engine secrets of the pros (1996) 0.01
    0.009893657 = product of:
      0.02968097 = sum of:
        0.02968097 = weight(_text_:of in 5670) [ClassicSimilarity], result of:
          0.02968097 = score(doc=5670,freq=10.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.38633084 = fieldWeight in 5670, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=5670)
      0.33333334 = coord(1/3)
    
    Abstract
     Covers 8 of the most popular search engines. Gives a summary of each and has a nice table of features that also briefly lists the pros and cons. Includes a short explanation of Boolean operators too.
  8. Sirapyan, N.: In Search of... (2001) 0.01
    0.009571825 = product of:
      0.028715475 = sum of:
        0.028715475 = weight(_text_:of in 5661) [ClassicSimilarity], result of:
          0.028715475 = score(doc=5661,freq=26.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.37376386 = fieldWeight in 5661, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5661)
      0.33333334 = coord(1/3)
    
    Abstract
     In a series of capsule reviews of 20 search engines, Sirapyan gives a good overview of the state of Internet search tools. She starts out with a clear discussion of the types of search tools available, the availability of advanced features such as Boolean queries and differences between directories, regular search engines and metasearch engines. It is unclear from the article whether the author and other testers used the same searches across all of the 20 tools, but each review clearly outlines perceived strengths and weaknesses, gives tips on the advanced features, if any, of the search tool in question and suggests the types of searches that are most successful. The tools which receive top honors are Google, Northern Light, HotBot and Oingo. Finally, there is an extra sidebar that discusses meta and specialized search tools such as Infozoid and FirstGov. I can't help thinking that the usefulness of this article is related to the fact that Sirapyan is PC Magazine's librarian and goes into greater depth on those features that are of interest to information professionals.
  9. Ogden, J.; Summers, E.; Walker, S.: Know(ing) Infrastructure : the wayback machine as object and instrument of digital research (2023) 0.01
    0.009121501 = product of:
      0.027364502 = sum of:
        0.027364502 = weight(_text_:of in 1084) [ClassicSimilarity], result of:
          0.027364502 = score(doc=1084,freq=34.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.35617945 = fieldWeight in 1084, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1084)
      0.33333334 = coord(1/3)
    
    Abstract
    From documenting human rights abuses to studying online advertising, web archives are increasingly positioned as critical resources for a broad range of scholarly Internet research agendas. In this article, we reflect on the motivations and methodological challenges of investigating the world's largest web archive, the Internet Archive's Wayback Machine (IAWM). Using a mixed methods approach, we report on a pilot project centred around documenting the inner workings of 'Save Page Now' (SPN) - an Internet Archive tool that allows users to initiate the creation and storage of 'snapshots' of web resources. By improving our understanding of SPN and its role in shaping the IAWM, this work examines how the public tool is being used to 'save the Web' and highlights the challenges of operationalising a study of the dynamic sociotechnical processes supporting this knowledge infrastructure. Inspired by existing Science and Technology Studies (STS) approaches, the paper charts our development of methodological interventions to support an interdisciplinary investigation of SPN, including: ethnographic methods, 'experimental blackbox tactics', data tracing, modelling and documentary research. We discuss the opportunities and limitations of our methodology when interfacing with issues associated with temporality, scale and visibility, as well as critically engage with our own positionality in the research process (in terms of expertise and access). We conclude with reflections on the implications of digital STS approaches for 'knowing infrastructure', where the use of these infrastructures is unavoidably intertwined with our ability to study the situated and material arrangements of their creation.
    Source
    Convergence: The International Journal of Research into New Media Technologies [https://www.researchgate.net/publication/369660337_Knowing_Infrastructure_The_Wayback_Machine_as_object_and_instrument_of_digital_research]
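The abstract above is built around two public Wayback Machine interfaces: 'Save Page Now' (SPN), which triggers a capture, and the availability API, which returns the closest existing snapshot for a URL. Below is a minimal, simplified sketch of calling both from Python (unauthenticated GET requests, no rate-limit handling, response fields as publicly documented); it is an illustration only, not the instrumented workflow used in the study.

```python
import requests

def save_page_now(url):
    """Ask the Wayback Machine to capture `url` via Save Page Now (SPN).
    A plain GET of https://web.archive.org/save/<url> triggers a capture;
    the authenticated SPN2 API offers more control but is omitted here."""
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=60)
    resp.raise_for_status()
    return resp.url  # normally redirects to the freshly created snapshot

def closest_snapshot(url):
    """Look up the closest existing snapshot via the public availability API."""
    resp = requests.get("https://archive.org/wayback/available",
                        params={"url": url}, timeout=30)
    resp.raise_for_status()
    snap = resp.json().get("archived_snapshots", {}).get("closest")
    return (snap["url"], snap["timestamp"]) if snap else None

if __name__ == "__main__":
    print(closest_snapshot("example.com"))
```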
  10. Schaat, S.: Von der automatisierten Manipulation zur Manipulation der Automatisierung [From automated manipulation to the manipulation of automation] (2019) 0.01
    0.008875302 = product of:
      0.026625905 = sum of:
        0.026625905 = product of:
          0.05325181 = sum of:
            0.05325181 = weight(_text_:22 in 4996) [ClassicSimilarity], result of:
              0.05325181 = score(doc=4996,freq=2.0), product of:
                0.17204592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049130294 = queryNorm
                0.30952093 = fieldWeight in 4996, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4996)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    19. 2.2019 17:22:00
  11. Matrix of WWW indices : a comparison of Internet indexing tools (1995) 0.01
    0.008849156 = product of:
      0.026547467 = sum of:
        0.026547467 = weight(_text_:of in 3165) [ClassicSimilarity], result of:
          0.026547467 = score(doc=3165,freq=8.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.34554482 = fieldWeight in 3165, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=3165)
      0.33333334 = coord(1/3)
    
    Imprint
    Ann Arbor : University of Michigan School of Information and Library Studies
  12. Brin, S.; Page, L.: ¬The anatomy of a large-scale hypertextual Web search engine (1998) 0.01
    0.008849156 = product of:
      0.026547467 = sum of:
        0.026547467 = weight(_text_:of in 947) [ClassicSimilarity], result of:
          0.026547467 = score(doc=947,freq=32.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.34554482 = fieldWeight in 947, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=947)
      0.33333334 = coord(1/3)
    
    Abstract
    In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/. To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want
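The abstract above describes the basic pipeline of a web search engine: fetch pages, index the words in each document, and combine text matching with link-based importance. The toy in-memory sketch below mirrors that pipeline; the page contents and links are hypothetical, and a simple in-link count stands in for PageRank (the full power iteration is sketched earlier in this list).

```python
from collections import defaultdict

# Hypothetical fetched pages: id -> (text, outgoing links)
pages = {
    "a": ("google search engine prototype", ["b", "c"]),
    "b": ("hypertext web search", ["c"]),
    "c": ("large scale web crawling and indexing", ["a"]),
}

# 1) Index the words of each document (inverted index with term frequencies).
inverted = defaultdict(dict)
for page_id, (text, _) in pages.items():
    for term in text.split():
        inverted[term][page_id] = inverted[term].get(page_id, 0) + 1

# 2) Crude link-based importance: in-link count (a stand-in for PageRank).
in_links = defaultdict(int)
for _, (_, links) in pages.items():
    for target in links:
        in_links[target] += 1

# 3) Query: require all terms, score = term-frequency sum weighted by importance.
def search(query):
    terms = query.split()
    candidates = set(pages)
    for t in terms:
        candidates &= set(inverted.get(t, {}))
    scored = [(sum(inverted[t][p] for t in terms) * (1 + in_links[p]), p)
              for p in candidates]
    return [p for _, p in sorted(scored, reverse=True)]

print(search("web search"))  # ['b'] -- only page b contains both terms
```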
  13. Schomburg, S.; Prante, J.: Search Engine Federation in Libraries - Suchmaschinenföderation in Bibliotheken (2009) 0.01
    0.008804799 = product of:
      0.026414396 = sum of:
        0.026414396 = weight(_text_:of in 2809) [ClassicSimilarity], result of:
          0.026414396 = score(doc=2809,freq=22.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.34381276 = fieldWeight in 2809, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2809)
      0.33333334 = coord(1/3)
    
    Abstract
     The hbz (Academic Library Center, Cologne) has a strong focus on search engine applications: Beyond the projected integration of respective technologies into the new release of the Digital Library portal solution (DigiBib6), vascoda background services also apply and take advantage of search engine technology. Experience since 2003 has shown that building and updating search engine indexes involves a vast amount of resources. The use of search engine federations, however, promises major improvements: The total number of data records held in linked indexes can be almost unlimited, while still allowing a joint output of all hits retrieved. A federation also comes with excellent response times; hits retrieved can also refer to or link into the original system's layout. Nonetheless, the major challenge these days lies in the different search engine technologies, e.g. Lucene and FAST, their variations in terms of ranking, and the implementation or non-implementation of so-called drill-downs. The lecture is designed to give a brief insight into the hbz search engine workshop, with an introduction to the current state of the project.
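As a rough illustration of the federation idea described above (querying several independent indexes in parallel and merging the hits into one result list), here is a sketch with two hypothetical backends. The score normalization is deliberately naive, since the abstract itself notes that reconciling different ranking models (e.g. Lucene vs. FAST) is the hard part.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical backends: each returns (title, raw_score) pairs on its own score scale.
def backend_lucene(query):
    return [("Record A", 7.2), ("Record B", 3.1)]

def backend_fast(query):
    return [("Record C", 0.91), ("Record D", 0.40)]

BACKENDS = [backend_lucene, backend_fast]

def federated_search(query):
    # Query all backends in parallel; a federation keeps no combined index of its own.
    with ThreadPoolExecutor(max_workers=len(BACKENDS)) as pool:
        result_lists = list(pool.map(lambda backend: backend(query), BACKENDS))
    merged = []
    for hits in result_lists:
        top = max((score for _, score in hits), default=0) or 1.0
        # Naive per-backend normalization to [0, 1] before merging.
        merged.extend((title, score / top) for title, score in hits)
    return sorted(merged, key=lambda item: item[1], reverse=True)

print(federated_search("suchmaschine"))
```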
  14. Austin, D.: How Google finds your needle in the Web's haystack : as we'll see, the trick is to ask the web itself to rank the importance of pages... (2006) 0.01
    0.007896353 = product of:
      0.023689058 = sum of:
        0.023689058 = weight(_text_:of in 93) [ClassicSimilarity], result of:
          0.023689058 = score(doc=93,freq=52.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.30833945 = fieldWeight in 93, product of:
              7.2111025 = tf(freq=52.0), with freq of:
                52.0 = termFreq=52.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.02734375 = fieldNorm(doc=93)
      0.33333334 = coord(1/3)
    
    Abstract
     Imagine a library containing 25 billion documents but with no centralized organization and no librarians. In addition, anyone may add a document at any time without telling anyone. You may feel sure that one of the documents contained in the collection has a piece of information that is vitally important to you, and, being impatient like most of us, you'd like to find it in a matter of seconds. How would you go about doing it? Posed in this way, the problem seems impossible. Yet this description is not too different from the World Wide Web, a huge, highly-disorganized collection of documents in many different formats. Of course, we're all familiar with search engines (perhaps you found this article using one), so we know that there is a solution. This article will describe Google's PageRank algorithm and how it returns pages from the web's collection of 25 billion documents that match search criteria so well that "google" has become a widely used verb. Most search engines, including Google, continually run an army of computer programs that retrieve pages from the web, index the words in each document, and store this information in an efficient format. Each time a user asks for a web search using a search phrase, such as "search engine," the search engine determines all the pages on the web that contain the words in the search phrase. (Perhaps additional information such as the distance between the words "search" and "engine" will be noted as well.) Here is the problem: Google now claims to index 25 billion pages. Roughly 95% of the text in web pages is composed from a mere 10,000 words. This means that, for most searches, there will be a huge number of pages containing the words in the search phrase. What is needed is a means of ranking the importance of the pages that fit the search criteria so that the pages can be sorted with the most important pages at the top of the list. One way to determine the importance of pages is to use a human-generated ranking. For instance, you may have seen pages that consist mainly of a large number of links to other resources in a particular area of interest. Assuming the person maintaining this page is reliable, the pages referenced are likely to be useful. Of course, the list may quickly fall out of date, and the person maintaining the list may miss some important pages, either unintentionally or as a result of an unstated bias. Google's PageRank algorithm assesses the importance of web pages without human evaluation of the content. In fact, Google feels that the value of its service is largely in its ability to provide unbiased results to search queries; Google claims, "the heart of our software is PageRank." As we'll see, the trick is to ask the web itself to rank the importance of pages.
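The ranking idea Austin describes informally can be written as one equation. In the standard damped formulation (a textbook form, not quoted from the article), the PageRank of a page p, with N pages in total and damping factor d (conventionally about 0.85), is:

```latex
% Standard damped PageRank:
% B(p) = set of pages linking to p, L(q) = number of out-links of page q
PR(p) = \frac{1 - d}{N} + d \sum_{q \in B(p)} \frac{PR(q)}{L(q)}
```

Iterating this system to its fixed point (the power method sketched earlier in this list) yields the importance scores used to sort the pages that match a query.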
  15. Tomaiuolo, N.G.; Packer, J.G.: Quantitative analysis of five WWW 'search engines' (1996) 0.01
    0.007663594 = product of:
      0.022990782 = sum of:
        0.022990782 = weight(_text_:of in 5675) [ClassicSimilarity], result of:
          0.022990782 = score(doc=5675,freq=6.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2992506 = fieldWeight in 5675, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=5675)
      0.33333334 = coord(1/3)
    
    Abstract
     Provides a table of the results from over 100 questions actually asked at a library reference desk. The summary notes that the average numbers of relevant 'hits' for the investigated search engines are: AltaVista: 9.3; InfoSeek: 8.3; Lycos: 8.1; Magellan: 7.8; Point: 2.1.
  16. TASI: ¬A review of image search engines (2003) 0.01
    0.007663594 = product of:
      0.022990782 = sum of:
        0.022990782 = weight(_text_:of in 6757) [ClassicSimilarity], result of:
          0.022990782 = score(doc=6757,freq=6.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2992506 = fieldWeight in 6757, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=6757)
      0.33333334 = coord(1/3)
    
    Abstract
    Replacing an earlier review, TASI's report outlines the different types of image search engines available and suggests the things to look out for when using one to find images. It includes TASI's own critical evaluation of the most popular engines.
  17. Zhao, Y.; Ma, F.; Xia, X.: Evaluating the coverage of entities in knowledge graphs behind general web search engines : Poster (2017) 0.01
    0.007663594 = product of:
      0.022990782 = sum of:
        0.022990782 = weight(_text_:of in 3854) [ClassicSimilarity], result of:
          0.022990782 = score(doc=3854,freq=24.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2992506 = fieldWeight in 3854, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3854)
      0.33333334 = coord(1/3)
    
    Abstract
     Web search engines, such as Google and Bing, are constantly employing results from knowledge organization and various visualization features to improve their search services. A knowledge graph, a large repository of structured knowledge represented by formal languages such as RDF (Resource Description Framework), is used to support the entity search feature of Google and Bing (Demartini, 2016). When a user searches for an entity, such as a person, an organization, or a place in Google or Bing, it is likely that a knowledge card will be presented on the right side bar of the search engine result pages (SERPs). For example, when a user searches for the entity Benedict Cumberbatch on Google, the knowledge card will show the basic structured information about this person, including his date of birth, height, spouse, parents, his movies, etc. The knowledge card, which is used to present the result of entity search, is generated from knowledge graphs. Therefore, the quality of knowledge graphs is essential to the performance of entity search. However, studies on the quality of knowledge graphs from the angle of entity coverage are scant in the literature. This study aims to investigate the coverage of entities in the knowledge graphs behind Google and Bing.
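To make the "knowledge card generated from a knowledge graph" idea above concrete, here is a toy sketch: a handful of hypothetical RDF-style triples, a function that assembles an entity card from them, and a coverage measure over a sample of entities. The property names and values are invented for illustration and are not taken from Google's or Bing's graphs.

```python
from collections import defaultdict

# Hypothetical subject-predicate-object triples (an RDF-like toy graph).
triples = [
    ("Benedict_Cumberbatch", "dateOfBirth", "1976-07-19"),
    ("Benedict_Cumberbatch", "spouse", "Sophie_Hunter"),
    ("Benedict_Cumberbatch", "occupation", "Actor"),
    ("Ada_Lovelace", "dateOfBirth", "1815-12-10"),
]

graph = defaultdict(list)
for s, p, o in triples:
    graph[s].append((p, o))

def knowledge_card(entity):
    """Group an entity's properties, as a knowledge card on a SERP would."""
    return dict(graph.get(entity, []))

def entity_coverage(entities):
    """Fraction of queried entities for which the graph can produce a card."""
    covered = sum(1 for e in entities if graph.get(e))
    return covered / len(entities)

print(knowledge_card("Benedict_Cumberbatch"))
print(entity_coverage(["Benedict_Cumberbatch", "Ada_Lovelace", "Some_Unknown_Entity"]))
# -> 2/3 of the sampled entities are covered by this toy graph
```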
  18. Warnick, W.L.; Leberman, A.; Scott, R.L.; Spence, K.J.; Johnson, L.A.; Allen, V.S.: Searching the deep Web : directed query engine applications at the Department of Energy (2001) 0.01
    0.0075087575 = product of:
      0.022526272 = sum of:
        0.022526272 = weight(_text_:of in 1215) [ClassicSimilarity], result of:
          0.022526272 = score(doc=1215,freq=16.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2932045 = fieldWeight in 1215, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1215)
      0.33333334 = coord(1/3)
    
    Abstract
    Directed Query Engines, an emerging class of search engine specifically designed to access distributed resources on the deep web, offer the opportunity to create inexpensive digital libraries. Already, one such engine, Distributed Explorer, has been used to select and assemble high quality information resources and incorporate them into publicly available systems for the physical sciences. By nesting Directed Query Engines so that one query launches several other engines in a cascading fashion, enormous virtual collections may soon be assembled to form a comprehensive information infrastructure for the physical sciences. Once a Directed Query Engine has been configured for a set of information resources, distributed alerts tools can provide patrons with personalized, profile-based notices of recent additions to any of the selected resources. Due to the potentially enormous size and scope of Directed Query Engine applications, consideration must be given to issues surrounding the representation of large quantities of information from multiple, heterogeneous sources.
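The "nesting" idea above, where one query launches several other engines in a cascade, can be sketched as a small recursive dispatcher; the engine tree, resource names and records below are entirely hypothetical.

```python
# A Directed Query Engine node either wraps a leaf resource or fans the query
# out to child engines; nesting engines yields the cascading federation described above.

def leaf_engine(name, records):
    def run(query):
        return [f"{name}: {r}" for r in records if query.lower() in r.lower()]
    return run

def directed_query_engine(children):
    def run(query):
        results = []
        for child in children:  # cascade: each child may itself fan out further
            results.extend(child(query))
        return results
    return run

physics_a = leaf_engine("PreprintServer", ["Deep web physics preprints", "Plasma data"])
physics_b = leaf_engine("LabReports", ["Accelerator physics reports"])
hub = directed_query_engine([
    directed_query_engine([physics_a, physics_b]),  # a nested engine
    leaf_engine("Catalog", ["Physics e-prints catalog"]),
])

print(hub("physics"))
```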
  19. Maurer, H.; Balke, T.; Kappe, F.; Kulathuramaiyer, N.; Weber, S.; Zaka, B.: Report on dangers and opportunities posed by large search engines, particularly Google (2007) 0.01
    0.007390502 = product of:
      0.022171505 = sum of:
        0.022171505 = weight(_text_:of in 754) [ClassicSimilarity], result of:
          0.022171505 = score(doc=754,freq=62.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2885868 = fieldWeight in 754, product of:
              7.8740077 = tf(freq=62.0), with freq of:
                62.0 = termFreq=62.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0234375 = fieldNorm(doc=754)
      0.33333334 = coord(1/3)
    
    Abstract
     The aim of our investigation was to discuss exactly what is formulated in the title. This will of course constitute a main part of this write-up. However, in the process of investigation it also became clear that the focus has to be extended, not to just cover Google and search engines in an isolated fashion, but to also cover other Web 2.0-related phenomena, particularly Wikipedia, Blogs, and other related community efforts. It was the purpose of our investigation to demonstrate: - Plagiarism and IPR violation are serious concerns in academia and in the commercial world - Current techniques to fight both are rudimentary, yet could be improved by a concentrated initiative - One reason why the fight is difficult is the dominance of Google as THE major search engine, and the fact that Google is unwilling to cooperate - The monopolistic behaviour of Google is also threatening how we see the world, how we as individuals are seen (complete loss of privacy) and is even threatening the world economy (!) In our proposal we did present a list of typical sections that would be covered at varying depth, with the possible replacement of one or the other by items that would emerge as still more important.
     The preliminary intended and approved list was: Section 1: To concentrate on Google as a virtual monopoly, and Google's reported support of Wikipedia. To find experimental evidence of this support or show that the reports are not more than rumours. Section 2: To address the copy-paste syndrome with the socio-cultural consequences associated with it. Section 3: To deal with plagiarism and IPR violations as two intertwined topics: how they affect various players (teachers and pupils in school; academia; corporations; governmental studies, etc.). To establish that not enough is done concerning these issues, partially due to just plain ignorance. We will propose some ways to alleviate the problem. Section 4: To discuss the usual tools to fight plagiarism and their shortcomings. Section 5: To propose ways to overcome most of the above problems according to proposals by Maurer/Zaka. To give examples, but to make it clear that to do this more seriously a pilot project beyond this particular study is necessary. Section 6: To briefly analyze various views of plagiarism as it is quite different in different fields (journalism, engineering, architecture, painting, etc.) and to present a concept that avoids plagiarism from the very beginning. Section 7: To point out the many other dangers of Google or Google-like undertakings: opportunistic ranking, analysis of data as a window into the commercial future. Section 8: To outline the need for new international laws. Section 9: To mention the feeble European attempts to fight Google, despite Google's growing power. Section 10: To argue that there is no way to catch up with Google in a frontal attack.
    Section 11: To argue that fighting large search engines and plagiarism slice-by-slice by using dedicated servers combined by one hub could eventually decrease the importance of other global search engines. Section 12: To argue that global search engines are an area that cannot be left to the free market, but require some government control or at least non-profit institutions. We will mention other areas where similar if not as glaring phenomena are visible. Section 13: We will mention in passing the potential role of virtual worlds, such as the currently overhyped system "second life". Section 14: To elaborate and try out a model for knowledge workers that does not require special search engines, with a description of a simple demonstrator. Section 15 (Not originally part of the proposal): To propose concrete actions and to describe an Austrian effort that could, with moderate support, minimize the role of Google for Austria. Section 16: References (Not originally part of the proposal) In what follows, we will stick to Sections 1 -14 plus the new Sections 15 and 16 as listed, plus a few Appendices.
    We believe that the importance has shifted considerably since the approval of the project. We thus will emphasize some aspects much more than ever planned, and treat others in a shorter fashion. We believe and hope that this is also seen as unexpected benefit by BMVIT. This report is structured as follows: After an Executive Summary that will highlight why the topic is of such paramount importance we explain in an introduction possible optimal ways how to study the report and its appendices. We can report with some pride that many of the ideas have been accepted by the international scene at conferences and by journals as of such crucial importance that a number of papers (constituting the appendices and elaborating the various sections) have been considered high quality material for publication. We want to thank the Austrian Federal Ministry of Transport, Innovation and Technology (BMVIT) for making this study possible. We would be delighted if the study can be distributed widely to European decision makers, as some of the issues involved do indeed involve all of Europe, if not the world.
  20. Dodge, M.: ¬A map of Yahoo! (2000) 0.01
    0.0070793247 = product of:
      0.021237973 = sum of:
        0.021237973 = weight(_text_:of in 1555) [ClassicSimilarity], result of:
          0.021237973 = score(doc=1555,freq=128.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.27643585 = fieldWeight in 1555, product of:
              11.313708 = tf(freq=128.0), with freq of:
                128.0 = termFreq=128.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.015625 = fieldNorm(doc=1555)
      0.33333334 = coord(1/3)
    
    Content
    "Introduction Yahoo! is the undisputed king of the Web directories, providing one of the key information navigation tools on the Internet. It has maintained its popularity over many Internet-years as the most visited Web site, against intense competition. This is because it does a good job of shifting, cataloguing and organising the Web [1] . But what would a map of Yahoo!'s hierarchical classification of the Web look like? Would an interactive map of Yahoo!, rather than the conventional listing of sites, be more useful as navigational tool? We can get some idea what a map of Yahoo! might be like by taking a look at ET-Map, a prototype developed by Hsinchun Chen and colleagues in the Artificial Intelligence Lab [2] at the University of Arizona. ET-Map was developed in 1995 as part of innovative research in automatic Internet homepage categorization and it charts a large chunk of Yahoo!, from the entertainment section representing some 110,000 different Web links. The map is a two-dimensional, multi-layered category map; its aim is to provide an intuitive visual information browsing tool. ET-Map can be browsed interactively, explored and queried, using the familiar point-and-click navigation style of the Web to find information of interest.
     The View From Above
     Browsing for a particular piece of information on the Web can often feel like being stuck in an unfamiliar part of town walking around at street level looking for a particular store. You know the store is around there somewhere, but your viewpoint at ground level is constrained. What you really want is to get above the streets, hovering half a mile or so up in the air, to see the whole neighbourhood. This kind of bird's-eye view function has been memorably described by David D. Clark, Senior Research Scientist at MIT's Laboratory for Computer Science and the Chairman of the Invisible Worlds Protocol Advisory Board, as the missing "up button" on the browser [3]. ET-Map is a nice example of a prototype for Clark's "up-button" view of an information space. The goal of information maps, like ET-Map, is to provide the browser with a sense of the lie of the information landscape, what is where, the location of clusters and hotspots, what is related to what. Ideally, this 'big-picture' all-in-one visual summary needs to fit on a single standard computer screen. ET-Map is one of my favourite examples, but there are many other interesting information maps being developed by other researchers and companies (see inset at the bottom of this page). How does ET-Map work? Here is a sequence of screenshots of a typical browsing session with ET-Map, which ends with access to Web pages on jazz musician Miles Davis. You can also try out ET-Map for yourself, using a fully working demo on the AI Lab's website [4]. We begin with the top-level map showing forty-odd broad entertainment 'subject regions' represented by regularly shaped tiles. Each tile is a visual summary of a group of Web pages with similar content. These tiles are shaded different colours to differentiate them, while labels identify the subject of each tile and a number in brackets tells you how many individual Web page links it contains. ET-Map uses two important, but common-sense, spatial concepts in its organisation and representation of the Web. Firstly, the size of a 'subject region' is directly related to the number of Web pages in that category. For example, the 'MUSIC' subject area contains over 11,000 pages and so has a much larger area than the neighbouring area of 'LIVE', which only has 4,300-odd pages. This is intuitively meaningful, as the largest tiles are visually more prominent on the map and are likely to be more significant as they contain the most links. In addition, a second spatial concept, that of neighbourhood proximity, is applied so that 'subject regions' closely related in terms of content are plotted close to each other on the map. For example, 'FILM' and 'YEAR'S OSCARS', at the bottom left, are neighbours in both semantic and spatial space. This makes sense, as many things in the real world are ordered in this way, with things that are alike being spatially close together (e.g. layout of goods in a store, or books in a library). Importantly, ET-Map is also a multi-layer map, with sub-maps showing greater informational resolution through a finer degree of categorization. So for any subject region that contains more than two hundred Web pages, a second-level map, with more detailed categories, is generated. This subdivision of information space is repeated down the hierarchy as far as necessary. In the example, the user selected the 'MUSIC' subject region which, not surprisingly, contained many thousands of pages. A second-level map with numerous different music categories is then presented to the user.
     Delving deeper, the user wants to learn more about jazz music, so clicking on the 'JAZZ' tile leads to a third-level map, a fine-grained map of jazz-related Web pages. Finally, selecting the 'MILES DAVIS' subject region leads to a more conventional-looking ranking of pages, from which the user selects one to download.
     ET-Map was created using a sophisticated AI technique called the Kohonen self-organizing map, a neural network approach that has been used for automatic analysis and classification of the semantic content of text documents like Web pages. I do not pretend to fully understand how this technique works; I tend to think of it as a clever 'black box' that groups together things that are alike [5]. It is a real challenge to automatically classify pages from a very heterogeneous information collection like the Web into categories that will match the conceptions of a typical user. Directories like Yahoo! tend to rely on the skill of human editors to achieve this. ET-Map is an interesting prototype that I think highlights well the potential for a map-based approach to Web browsing. I am surprised none of the major search engines or directories have introduced the option of mapping results, although I am sure many are working on such ideas. People certainly need all the help they can get, as Web growth shows no sign of slowing. Just last month it was reported that the Web had surpassed one billion indexable pages [6].
     Information Maps
     There are many other fascinating examples that employ two-dimensional interactive maps to provide a 'birds-eye' view of information. They use various underlying techniques of textual analysis and clustering to turn the mass of information into a useful summary map (see "Mining in Textual Mountains" in Mappa.Mundi Magazine). In terms of visual representations they can be divided into two groups: those that generate smooth surfaces and those that produce regular, tiled maps. Unfortunately, we don't have space to examine them in detail, but they are well worth spending some time exploring. I will be covering some of them in future columns.
     Research Prototypes
     • Visual SiteMap - Developed by Xia Lin, based at the College of Library and Information Science, Drexel University.
     • CVG - Cyberspace geography visualization, developed by Luc Girardin at The Graduate Institute of International Studies, Switzerland.
     • WEBSOM - Maps the thousands of articles posted on Usenet newsgroups. It is being developed by researchers at the Neural Networks Research Centre, Helsinki University of Technology in Finland.
     • TreeMaps - Developed by Brian Johnson, Ben Shneiderman and colleagues in the Human-Computer Interaction Lab at the University of Maryland.
     Commercial Information Maps
     • NewsMaps - Provides interactive information landscapes summarizing daily news stories, developed by Cartia, Inc.
     • Web Squirrel - Creates maps known as information farms. It is developed by Eastgate Systems, Inc.
     • Umap - Produces interactive maps of Web searches.
     • Map of the Market - An interactive map of the market performance of the stocks of major US corporations, developed by SmartMoney.com."
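The ET-Map prototype described in this record rests on a Kohonen self-organizing map: documents are represented as vectors and competitively mapped onto a 2-D grid so that similar documents land on nearby cells, which is what produces the tiled, neighbourhood-preserving layout. A minimal NumPy sketch of the training loop follows; the document vectors are random stand-ins, and the grid size and learning schedule are arbitrary choices, not those of the Arizona AI Lab prototype.

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Train a Kohonen self-organizing map.
    data: (n_samples, dim) document vectors; returns (grid_h, grid_w, dim) weights."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    # Grid coordinates of every cell, used for the neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = lr0 * np.exp(-epoch / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-epoch / epochs)  # shrinking neighbourhood radius
        for x in data[rng.permutation(len(data))]:
            # Best-matching unit: the cell whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (h, w))
            # Pull the BMU and its grid neighbours towards x.
            grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            influence = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
            weights += lr * influence * (x - weights)
    # After training, each document is assigned to its BMU cell, giving a
    # 2-D map where neighbouring cells hold similar documents.
    return weights

docs = np.random.default_rng(1).random((40, 10))  # stand-in document vectors
som = train_som(docs)
print(som.shape)  # (6, 6, 10)
```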

Languages

  • e 53
  • d 12

Types

  • a 29
  • r 1
  • x 1