Search (7 results, page 1 of 1)

  • theme_ss:"Suchmaschinen"
  • type_ss:"el"
  • year_i:[2000 TO 2010}
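  The three filter facets above behave like Solr filter queries; the year range uses Solr's mixed-bracket syntax, in which [2000 TO 2010} includes 2000 but excludes 2010. Below is a minimal sketch of a request that could produce such a result list: the endpoint URL and core name are assumptions, the field names and filter values are taken from the facets, and debugQuery=true is what yields the per-document score explanations shown under each hit.

      import requests

      # Hypothetical endpoint and core name; field names and filter values
      # are copied from the facet list above.
      params = {
          "q": "*:*",
          "fq": [
              'theme_ss:"Suchmaschinen"',   # subject facet
              'type_ss:"el"',               # document-type facet
              "year_i:[2000 TO 2010}",      # 2000 inclusive, 2010 exclusive
          ],
          "rows": 10,
          "debugQuery": "true",             # emits the score explanations shown below
          "wt": "json",
      }
      response = requests.get("http://localhost:8983/solr/mycore/select", params=params)
      print(response.json()["response"]["numFound"])   # number of hits (7 in this listing)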
  1. Bates, M.E.: Quick answers to odd questions (2004) 0.02
    0.018167997 = product of:
      0.036335994 = sum of:
        0.036335994 = product of:
          0.07267199 = sum of:
            0.07267199 = weight(_text_:i in 3071) [ClassicSimilarity], result of:
              0.07267199 = score(doc=3071,freq=26.0), product of:
                0.16122356 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04274526 = queryNorm
                0.4507529 = fieldWeight in 3071, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3071)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
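     The breakdown above is Lucene's ClassicSimilarity explain output for the query term "i" in document 3071. A minimal sketch of the same arithmetic (the constants are copied from the listing; only the way the factors combine is reconstructed here):

       import math

       freq       = 26.0                              # termFreq of "i" in doc 3071
       idf        = 1 + math.log(44218 / (2765 + 1))  # idf(docFreq=2765, maxDocs=44218) = 3.7717297
       query_norm = 0.04274526
       field_norm = 0.0234375

       tf           = math.sqrt(freq)                 # 5.0990195
       query_weight = idf * query_norm                # 0.16122356
       field_weight = tf * idf * field_norm           # 0.4507529
       term_score   = query_weight * field_weight     # 0.07267199

       # two coord(1/2) factors halve the term score on the way up the tree
       final_score = term_score * 0.5 * 0.5           # 0.018167997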
    
    Content
    "One of the things I enjoyed the most when I was a reference librarian was the wide range of questions my clients sent my way. What was the original title of the first Godzilla movie? (Gojira, released in 1954) Who said 'I'm as pure as the driven slush'? (Tallulah Bankhead) What percentage of adults have gone to a jazz performance in the last year? (11%) I have found that librarians, speech writers and journalists have one thing in common - we all need to find information on all kinds of topics, and we usually need the answers right now. The following are a few of my favorite sites for finding answers to those there-must-be-an-answer-out-there questions. - For the electronic equivalent to the "ready reference" shelf of resources that most librarians keep hidden behind their desks, check out RefDesk . It is particularly good for answering factual questions - Where do I get the new Windows XP Service Pack? Where is the 386 area code? How do I contact my member of Congress? - Another resource for lots of those quick-fact questions is InfoPlease, the publishers of the Information Please almanac .- Right now, it's full of Olympics data, but it also has links to facts and factoids that you would look up in an almanac, atlas, or encyclopedia. - If you want numbers, start with the Statistical Abstract of the US. This source, produced by the U.S. Census Bureau, gives you everything from the divorce rate by state to airline cost indexes going back to 1980. It is many librarians' secret weapon for pulling numbers together quickly. - My favorite question is "how does that work?" Haven't you ever wondered how they get that Olympic torch to continue to burn while it is being carried by runners from one city to the next? Or how solar sails manage to propel a spacecraft? For answers, check out the appropriately-named How Stuff Works. - For questions about movies, my first resource is the Internet Movie Database. It is easy to search, is such a popular site that mistakes are corrected quickly, and is a fun place to catch trailers of both upcoming movies and those dating back to the 30s. - When I need to figure out who said what, I still tend to rely on the print sources such as Bartlett's Familiar Quotations . No, the current edition is not available on the web, but - and this is the librarian in me - I really appreciate the fact that I not only get the attribution but I also see the source of the quote. There are far too many quotes being attributed to a celebrity, but with no indication of the publication in which the quote appeared. Take, for example, the much-cited quote of Margaret Meade, "Never doubt that a small group of thoughtful committed people can change the world; indeed, it's the only thing that ever has!" Then see the page on the Institute for Intercultural Studies site, founded by Meade, and read its statement that it has never been able to verify this alleged quote from Meade. While there are lots of web-based sources of quotes (see QuotationsPage.com and Bartleby, for example), unless the site provides the original source for the quotation, I wouldn't rely on the citation. Of course, if you have a hunch as to the source of a quote, and it was published prior to 1923, head over to Project Gutenberg , which includes the full text of over 12,000 books that are in the public domain. When I needed to confirm a quotation of the Red Queen in "Through the Looking Glass", this is where I started. 
- And if you are stumped as to where to go to find information, instead of Googling it, try the Librarians' Index to the Internet. While it is somewhat US-centric, it is a great directory of web resources."
  2. Rogers, I.: The Google Pagerank algorithm and how it works (2002) 0.02
    0.016796317 = product of:
      0.033592634 = sum of:
        0.033592634 = product of:
          0.06718527 = sum of:
            0.06718527 = weight(_text_:i in 2548) [ClassicSimilarity], result of:
              0.06718527 = score(doc=2548,freq=8.0), product of:
                0.16122356 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04274526 = queryNorm
                0.41672117 = fieldWeight in 2548, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2548)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     PageRank is a topic much discussed by Search Engine Optimisation (SEO) experts. At the heart of PageRank is a mathematical formula that seems scary to look at but is actually fairly simple to understand. Despite this, many people seem to get it wrong! In particular, "Chris Ridings of www.searchenginesystems.net" has written a paper entitled "PageRank Explained: Everything you've always wanted to know about PageRank", pointed to by many people, which contains a fundamental mistake early on in the explanation! Unfortunately this means some of the recommendations in the paper are not quite accurate. By showing code to correctly calculate real PageRank, I hope to achieve several things in this response: - Clearly explain how PageRank is calculated. - Go through every example in Chris' paper, and add some more of my own, showing the correct PageRank for each diagram. By showing the code used to calculate each diagram, I've opened myself up to peer review - mostly in an effort to make sure the examples are correct, but also because the code can help explain the PageRank calculations. - Describe some principles and observations on website design based on these correctly calculated examples. Any good web designer should take the time to fully understand how PageRank really works - if you don't, then your site's layout could be seriously hurting your Google listings! [Note: I have nothing in particular against Chris. If I find any other papers on the subject, I'll try to comment evenly.]
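     The abstract refers to code for correctly calculating "real PageRank"; that code is not reproduced in this record, but a minimal sketch of the iterative calculation it describes (using the usual PR(A) = (1 - d) + d * sum(PR(T)/C(T)) form with damping factor d = 0.85; not Rogers' own implementation) looks roughly like this:

       # links maps each page to the list of pages it links to.
       def pagerank(links, d=0.85, iterations=40):
           pages = list(links)
           pr = {page: 1.0 for page in pages}          # common starting value
           for _ in range(iterations):
               new_pr = {}
               for page in pages:
                   incoming = (pr[src] / len(links[src])
                               for src in pages if page in links[src])
                   new_pr[page] = (1 - d) + d * sum(incoming)
               pr = new_pr
           return pr

       # Example: A links to B and C, while B and C link back to A.
       print(pagerank({"A": ["B", "C"], "B": ["A"], "C": ["A"]}))

     With this form of the formula the values average to 1 across all pages (rather than summing to 1), which is why a small example like the one above converges to values around 1.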
  3. Sirapyan, N.: In Search of... (2001) 0.01
    0.010077791 = product of:
      0.020155583 = sum of:
        0.020155583 = product of:
          0.040311165 = sum of:
            0.040311165 = weight(_text_:i in 5661) [ClassicSimilarity], result of:
              0.040311165 = score(doc=5661,freq=2.0), product of:
                0.16122356 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04274526 = queryNorm
                0.25003272 = fieldWeight in 5661, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5661)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     In a series of capsule reviews of 20 search engines, Sirapyan gives a good overview of the state of Internet search tools. She starts out with a clear discussion of the types of search tools available, the availability of advanced features such as Boolean queries, and the differences between directories, regular search engines and metasearch engines. It is unclear from the article whether the author and other testers used the same searches across all of the 20 tools, but each review clearly outlines perceived strengths and weaknesses, gives tips on the advanced features, if any, of the search tool in question, and suggests the types of searches that are most successful. The tools which receive top honors are Google, Northern Light, HotBot and Oingo. Finally, there is an extra sidebar that discusses meta and specialized search tools such as Infozoid and FirstGov. I can't help thinking that the usefulness of this article is related to the fact that Sirapyan is PC Magazine's librarian and goes into greater depth on those features that are of interest to information professionals.
  4. Dodge, M.: A map of Yahoo! (2000) 0.01
    0.008228482 = product of:
      0.016456963 = sum of:
        0.016456963 = product of:
          0.032913927 = sum of:
            0.032913927 = weight(_text_:i in 1555) [ClassicSimilarity], result of:
              0.032913927 = score(doc=1555,freq=12.0), product of:
                0.16122356 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04274526 = queryNorm
                0.20415086 = fieldWeight in 1555, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1555)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
     ET-Map was created using a sophisticated AI technique called the Kohonen self-organizing map, a neural network approach that has been used for automatic analysis and classification of the semantic content of text documents such as Web pages. I do not pretend to fully understand how this technique works; I tend to think of it as a clever 'black box' that groups together things that are alike [5]. It is a real challenge to automatically classify pages from a very heterogeneous information collection like the Web into categories that will match the conceptions of a typical user. Directories like Yahoo! tend to rely on the skill of human editors to achieve this. ET-Map is an interesting prototype that I think highlights well the potential of a map-based approach to Web browsing. I am surprised none of the major search engines or directories have introduced the option of mapping results, although I am sure many are working on such ideas. People certainly need all the help they can get, as Web growth shows no sign of slowing. Just last month it was reported that the Web had surpassed one billion indexable pages [6].
     Information Maps
     There are many other fascinating examples that employ two-dimensional interactive maps to provide a 'bird's-eye' view of information. They use various underlying techniques of textual analysis and clustering to turn the mass of information into a useful summary map (see "Mining in Textual Mountains" in Mappa.Mundi Magazine). In terms of visual representation, they can be divided into two groups: those that generate smooth surfaces and those that produce regular, tiled maps. Unfortunately, we don't have space to examine them in detail, but they are well worth spending some time exploring. I will be covering some of them in future columns.
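     For readers curious about the 'black box', here is a toy sketch of the Kohonen self-organizing map idea described above (not the ET-Map system itself): random vectors stand in for the term vectors of Web pages, and training pulls the grid's weight vectors towards the documents so that similar documents end up in neighbouring grid cells.

       import numpy as np

       rng = np.random.default_rng(0)
       docs = rng.random((200, 10))            # stand-in for term vectors of 200 pages
       grid_h, grid_w, dim = 8, 8, docs.shape[1]
       weights = rng.random((grid_h, grid_w, dim))
       coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

       for step in range(2000):
           lr = 0.5 * (1 - step / 2000)               # decaying learning rate
           radius = 4.0 * (1 - step / 2000) + 1.0     # shrinking neighbourhood
           doc = docs[rng.integers(len(docs))]
           # best-matching unit: the grid node whose weight vector is closest to the document
           dists = np.linalg.norm(weights - doc, axis=-1)
           bmu = np.unravel_index(np.argmin(dists), dists.shape)
           # pull the BMU and its grid neighbours towards the document
           grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
           influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
           weights += lr * influence[..., None] * (doc - weights)

       # After training, each document is assigned to the cell of its best-matching unit,
       # which is what places similar pages near each other on the 2-D map.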
  5. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.01
    0.007239241 = product of:
      0.014478482 = sum of:
        0.014478482 = product of:
          0.028956965 = sum of:
            0.028956965 = weight(_text_:22 in 2564) [ClassicSimilarity], result of:
              0.028956965 = score(doc=2564,freq=2.0), product of:
                0.14968662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04274526 = queryNorm
                0.19345059 = fieldWeight in 2564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2564)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    16. 1.2016 10:22:28
  6. Baeza-Yates, R.; Boldi, P.; Castillo, C.: Generalizing PageRank : damping functions for link-based ranking algorithms (2006) 0.01
    0.007239241 = product of:
      0.014478482 = sum of:
        0.014478482 = product of:
          0.028956965 = sum of:
            0.028956965 = weight(_text_:22 in 2565) [ClassicSimilarity], result of:
              0.028956965 = score(doc=2565,freq=2.0), product of:
                0.14968662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04274526 = queryNorm
                0.19345059 = fieldWeight in 2565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2565)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    16. 1.2016 10:22:28
  7. Patalong, F.: Life after Google : I. Besser suchen, wirklich finden [Search better, really find] (2002) 0.00
    0.0041990792 = product of:
      0.0083981585 = sum of:
        0.0083981585 = product of:
          0.016796317 = sum of:
            0.016796317 = weight(_text_:i in 1165) [ClassicSimilarity], result of:
              0.016796317 = score(doc=1165,freq=2.0), product of:
                0.16122356 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04274526 = queryNorm
                0.10418029 = fieldWeight in 1165, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1165)
          0.5 = coord(1/2)
      0.5 = coord(1/2)