Search (8 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Suchmaschinen"
  • type_ss:"el"
  • year_i:[2010 TO 2020}
  1. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013)
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which PageRank, the ranking algorithm of the Google search engine, is based to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
    Type
    a
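    To make the shared premise concrete: both Garfield's citation indexing and PageRank treat a link or citation as an endorsement whose weight depends on the standing of the citing source, which leads to a recursive definition of importance. Below is a minimal power-iteration sketch of PageRank; the toy citation graph and the damping factor of 0.85 are illustrative assumptions, not taken from the paper.

    ```python
    # Minimal PageRank by power iteration (illustrative sketch).
    # Graph: node -> list of nodes it links to (cites).
    graph = {
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }

    def pagerank(graph, damping=0.85, iterations=50):
        nodes = list(graph)
        n = len(nodes)
        rank = {node: 1.0 / n for node in nodes}
        for _ in range(iterations):
            new_rank = {node: (1.0 - damping) / n for node in nodes}
            for node, outlinks in graph.items():
                if outlinks:
                    # A node passes its rank, in equal shares, to what it cites.
                    share = rank[node] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += damping * share
                else:
                    # Dangling node: spread its rank uniformly.
                    for target in nodes:
                        new_rank[target] += damping * rank[node] / n
            rank = new_rank
        return rank

    print(pagerank(graph))  # "C" ends up highest: it is cited most, and by well-cited nodes
    ```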
  2. Haynes, M.: Your Google algorithm cheat sheet : Panda, Penguin, and Hummingbird (2013)
    Abstract
    If you're reading the Moz blog, then you probably have a decent understanding of Google and its algorithm changes. However, a good percentage of the Moz audience is probably still confused about the effects that Panda, Penguin, and Hummingbird can have on your site. I did write a post last year about the main differences between Penguin and a Manual Unnatural Links Penalty, and if you haven't read that, it'll give you a good primer. The point of this article is to explain very simply what each of these algorithms is meant to do. It is hopefully a good reference that you can point your clients to if you want to explain an algorithm change and not overwhelm them with technical details about 301s, canonicals, crawl errors, and other confusing SEO terminology.
  3. Hodson, H.: Google's fact-checking bots build vast knowledge bank (2014)
    Abstract
    The search giant is automatically building Knowledge Vault, a massive database that could give us unprecedented access to the world's facts. Google is building the largest store of knowledge in human history - and it's doing so without any human help. Instead, Knowledge Vault autonomously gathers and merges information from across the web into a single base of facts about the world, and the people and objects in it.
    Type
    a
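    Hodson's article gives no implementation details. As a loose illustration of what "merging information from across the web into a single base of facts" can mean, the sketch below fuses per-source extraction confidences for competing (subject, predicate, object) triples under a naive independence assumption; the data and the voting scheme are invented for illustration and are not Knowledge Vault's actual model.

    ```python
    # Toy "fact fusion": combine independent per-source confidences for the
    # same triple into one score. Illustration only, not Knowledge Vault's model.
    from collections import defaultdict

    # Each item: ((subject, predicate, object), confidence of one source's extraction)
    extractions = [
        (("Barack Obama", "born_in", "Honolulu"), 0.9),
        (("Barack Obama", "born_in", "Honolulu"), 0.7),
        (("Barack Obama", "born_in", "Kenya"), 0.2),
    ]

    by_triple = defaultdict(list)
    for triple, conf in extractions:
        by_triple[triple].append(conf)

    def fused_confidence(confs):
        # P(at least one extraction is correct), assuming sources are independent.
        p_all_wrong = 1.0
        for c in confs:
            p_all_wrong *= (1.0 - c)
        return 1.0 - p_all_wrong

    for triple, confs in by_triple.items():
        print(triple, round(fused_confidence(confs), 3))
    # The repeatedly and confidently extracted fact dominates its contradiction.
    ```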
  4. Zhao, Y.; Ma, F.; Xia, X.: Evaluating the coverage of entities in knowledge graphs behind general web search engines : Poster (2017)
    Abstract
    Web search engines, such as Google and Bing, are constantly employing results from knowledge organization and various visualization features to improve their search services. A knowledge graph, a large repository of structured knowledge represented by formal languages such as RDF (Resource Description Framework), is used to support the entity search feature of Google and Bing (Demartini, 2016). When a user searches for an entity, such as a person, an organization, or a place in Google or Bing, it is likely that a knowledge card will be presented on the right sidebar of the search engine result pages (SERPs). For example, when a user searches for the entity Benedict Cumberbatch on Google, the knowledge card will show the basic structured information about this person, including his date of birth, height, spouse, parents, and his movies, etc. The knowledge card, which is used to present the result of entity search, is generated from knowledge graphs. Therefore, the quality of knowledge graphs is essential to the performance of entity search. However, studies on the quality of knowledge graphs from the angle of entity coverage are scant in the literature. This study aims to investigate the entity coverage of the knowledge graphs behind Google and Bing.
    Type
    a
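    As a minimal illustration of the relationship the poster describes, a knowledge card is essentially a selection of triples about one entity, rendered for display; the study's coverage question then amounts to asking for how many sampled entities such a card can be produced at all. In the sketch below the triples are hand-entered for illustration, not fetched from Google's or Bing's actual knowledge graphs.

    ```python
    # Sketch: render a "knowledge card" from entity-centric triples.
    # Hand-entered illustrative data, not real knowledge-graph content.
    triples = [
        ("Benedict Cumberbatch", "type", "Person"),
        ("Benedict Cumberbatch", "dateOfBirth", "1976-07-19"),
        ("Benedict Cumberbatch", "spouse", "Sophie Hunter"),
    ]

    def knowledge_card(entity, triples):
        # Select the facts about this entity and format them as a card.
        facts = [(p, o) for s, p, o in triples if s == entity]
        if not facts:
            return None  # entity not covered: no card can be shown
        return "\n".join([entity] + [f"  {p}: {o}" for p, o in facts])

    # Coverage over a sample = fraction of entities for which a card exists.
    sample = ["Benedict Cumberbatch", "An Obscure Local Poet"]
    cards = {e: knowledge_card(e, triples) for e in sample}
    coverage = sum(c is not None for c in cards.values()) / len(sample)
    print(cards["Benedict Cumberbatch"])
    print(f"coverage: {coverage:.0%}")  # 50% in this toy sample
    ```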
  5. Hogan, A.; Harth, A.; Umbrich, J.; Kinsella, S.; Polleres, A.; Decker, S.: Searching and browsing Linked Data with SWSE : the Semantic Web Search Engine (2011)
    Abstract
    In this paper, we discuss the architecture and implementation of the Semantic Web Search Engine (SWSE). Following traditional search engine architecture, SWSE consists of crawling, data enhancing, indexing and a user interface for search, browsing and retrieval of information; unlike traditional search engines, SWSE operates over RDF Web data - loosely also known as Linked Data - which implies unique challenges for the system design, architecture, algorithms, implementation and user interface. In particular, many challenges exist in adopting Semantic Web technologies for Web data: the unique challenges of the Web - in terms of scale, unreliability, inconsistency and noise - are largely overlooked by the current Semantic Web standards. Herein, we describe the current SWSE system, initially detailing the architecture and later elaborating upon the function, design, implementation and performance of each individual component. In so doing, we also give an insight into how current Semantic Web standards can be tailored, in a best-effort manner, for use on Web data. Throughout, we offer evaluation and complementary argumentation to support our design choices, and also offer discussion on future directions and open research questions. Later, we also provide candid discussion relating to the difficulties currently faced in bringing such a search engine into the mainstream, and lessons learnt from roughly six years working on the Semantic Web Search Engine project.
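    To give a flavor of the entity-centric indexing such a system performs, the sketch below builds a simple inverted index from keywords to the RDF entities whose attached literals mention them. This is a deliberately reduced illustration, not SWSE's implementation; the actual pipeline adds crawling, data enhancement, ranking, and distributed index structures, as the abstract describes.

    ```python
    # Sketch of entity-centric keyword search over RDF-style triples:
    # each entity is "described" by its attached literals, and an
    # inverted index maps tokens to the entities they describe.
    from collections import defaultdict

    triples = [
        ("ex:TimBL", "foaf:name", "Tim Berners-Lee"),
        ("ex:TimBL", "ex:invented", "World Wide Web"),
        ("ex:W3C", "foaf:name", "World Wide Web Consortium"),
    ]

    index = defaultdict(set)
    for subject, _predicate, obj in triples:
        for token in obj.lower().split():
            index[token].add(subject)

    def search(query):
        # Return entities whose descriptions contain every query token.
        tokens = query.lower().split()
        results = [index.get(t, set()) for t in tokens]
        return set.intersection(*results) if results else set()

    print(search("web"))           # {'ex:TimBL', 'ex:W3C'}
    print(search("berners-lee"))   # {'ex:TimBL'}
    ```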
  6. What is Schema.org? (2011)
    Abstract
    This site provides a collection of schemas, i.e., HTML tags, that webmasters can use to mark up their pages in ways recognized by major search providers. Search engines including Bing, Google and Yahoo! rely on this markup to improve the display of search results, making it easier for people to find the right web pages. Many sites are generated from structured data, which is often stored in databases. When this data is formatted into HTML, it becomes very difficult to recover the original structured data. Many applications, especially search engines, can benefit greatly from direct access to this structured data. On-page markup enables search engines to understand the information on web pages and provide richer search results, making it easier for users to find relevant information on the web. Markup can also enable new tools and applications that make use of the structure. A shared markup vocabulary makes it easier for webmasters to decide on a markup schema and get the maximum benefit for their efforts. So, in the spirit of sitemaps.org, Bing, Google and Yahoo! have come together to provide a shared collection of schemas that webmasters can use.
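    As a concrete example of such markup, the sketch below emits a schema.org description of a movie as a JSON-LD script block, one of the serializations search engines accept (Schema.org's original 2011 examples used microdata attributes inside HTML instead). The Movie type and its properties come from the schema.org vocabulary; the specific values are illustrative.

    ```python
    # Sketch: generate schema.org markup for a web page as a JSON-LD
    # <script> block. Values are illustrative.
    import json

    movie = {
        "@context": "https://schema.org",
        "@type": "Movie",
        "name": "Avatar",
        "director": {"@type": "Person", "name": "James Cameron"},
        "datePublished": "2009-12-18",
    }

    script_block = (
        '<script type="application/ld+json">\n'
        + json.dumps(movie, indent=2)
        + "\n</script>"
    )
    print(script_block)  # paste into the page's <head> or <body>
    ```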
  7. Schaer, P.; Mayr, P.; Sünkler, S.; Lewandowski, D.: How relevant is the long tail? : a relevance assessment study on Million Short (2016)
    Abstract
    Users of web search engines are known to mostly focus on the top-ranked results of the search engine result page. While many studies support this well-known information-seeking pattern, only a few studies concentrate on the question of what users are missing by neglecting lower-ranked results. To learn more about the relevance distributions in the so-called long tail, we conducted a relevance assessment study with the Million Short long-tail web search engine. While we see a clear difference in content between the head and the tail of the search engine result list, we see no statistically significant differences in the binary relevance judgments and only weakly significant differences when using graded relevance. The tail contains different but still valuable results. We argue that the long tail can be a rich source for the diversification of web search engine result lists, but it needs more evaluation to clearly describe the differences.
    Type
    a
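    The study's comparison of binary relevance between the head and the tail of result lists can be illustrated with a simple two-proportion z-test, sketched below using only the standard library. The counts are invented for illustration, and the choice of test is an assumption; the abstract does not specify the exact procedure used.

    ```python
    # Toy comparison of binary relevance rates between "head" and "tail"
    # results via a two-proportion z-test. Counts are invented.
    import math

    def two_proportion_z(rel1, n1, rel2, n2):
        p1, p2 = rel1 / n1, rel2 / n2
        pooled = (rel1 + rel2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        # Two-sided p-value from the standard normal distribution.
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # e.g. 70/100 results judged relevant in the head vs 62/100 in the tail
    z, p = two_proportion_z(70, 100, 62, 100)
    print(f"z = {z:.2f}, p = {p:.3f}")  # p > 0.05: no significant difference
    ```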
  8. Fiorelli, G.: Hummingbird unleashed (2013)
    Abstract
    Sometimes I think that we SEOs could be wonderful characters for a Woody Allen movie: we are stressed, nervous, paranoid, and prone to sudden changes of mood... okay, maybe I am exaggerating a little bit, but that's how we tend to (over)react whenever Google announces something. One thing that doesn't help is the lack of clarity coming from Google, which not only never mentions Hummingbird in any official document (for example, in the post for its 15th anniversary), but has also shied away from details of this epochal update in the "off-the-record" declarations of Amit Singhal. In fact, in some ways those statements partly contributed to the confusion. When Google announces an update, especially one like Hummingbird, the best thing to do is to avoid trying to immediately understand what it really is based on intuition alone. It is better to wait until the dust settles, recover the original documents, examine those related to them (and any variants), take the time to see the update in action, calmly investigate, and only then try to find the most plausible answers.