Search (75 results, page 4 of 4)

  • theme_ss:"Suchmaschinen"
  • type_ss:"el"
  1. Rogers, I.: The Google Pagerank algorithm and how it works (2002) 0.00
    0.001245368 = product of:
      0.017435152 = sum of:
        0.017435152 = weight(_text_:web in 2548) [ClassicSimilarity], result of:
          0.017435152 = score(doc=2548,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 2548, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2548)
      0.071428575 = coord(1/14)
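    The breakdown above is Lucene's ClassicSimilarity explain output: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and the final score is queryWeight * fieldWeight scaled by coord(1/14). As a check, a minimal Python sketch (values copied from the tree above; the variable names are illustrative, not Lucene API) reproduces the arithmetic:

        import math

        # Re-computation of the explain tree for result 1 (field _text_:web, doc 2548);
        # this only checks the arithmetic, it is not Lucene code.
        freq, doc_freq, max_docs = 2.0, 4597, 44218
        query_norm, field_norm, coord = 0.029633347, 0.0390625, 1.0 / 14

        tf = math.sqrt(freq)                             # 1.4142135
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~3.2635105
        query_weight = idf * query_norm                  # ~0.09670874
        field_weight = tf * idf * field_norm             # ~0.18028519
        score = query_weight * field_weight              # ~0.017435152
        print(score * coord)                             # ~0.001245368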
    
    Abstract
    Page Rank is a topic much discussed by Search Engine Optimisation (SEO) experts. At the heart of PageRank is a mathematical formula that seems scary to look at but is actually fairly simple to understand. Despite this many people seem to get it wrong! In particular "Chris Ridings of www.searchenginesystems.net" has written a paper entitled "PageRank Explained: Everything you've always wanted to know about PageRank", pointed to by many people, that contains a fundamental mistake early on in the explanation! Unfortunately this means some of the recommendations in the paper are not quite accurate. By showing code to correctly calculate real PageRank I hope to achieve several things in this response: - Clearly explain how PageRank is calculated. - Go through every example in Chris' paper, and add some more of my own, showing the correct PageRank for each diagram. By showing the code used to calculate each diagram I've opened myself up to peer review - mostly in an effort to make sure the examples are correct, but also because the code can help explain the PageRank calculations. - Describe some principles and observations on website design based on these correctly calculated examples. Any good web designer should take the time to fully understand how PageRank really works - if you don't then your site's layout could be seriously hurting your Google listings! [Note: I have nothing in particular against Chris. If I find any other papers on the subject I'll try to comment evenly]
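    Rogers works with the original, non-normalised formula PR(A) = (1 - d) + d * (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn)), computed iteratively. A minimal Python sketch of that iteration (the link graph, damping factor and iteration count are illustrative, not taken from the paper):

        # Iterative PageRank with the original formula PR(A) = (1-d) + d * sum(PR(T)/C(T));
        # 'links' maps each page to the pages it links out to (graph is made up).
        links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
        d = 0.85
        pr = {page: 1.0 for page in links}      # common starting guess

        for _ in range(40):                     # iterate until the values settle
            new_pr = {}
            for page in links:
                inbound = sum(pr[src] / len(links[src])
                              for src in links if page in links[src])
                new_pr[page] = (1 - d) + d * inbound
            pr = new_pr

        print({page: round(value, 4) for page, value in pr.items()})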
  2. Birmingham, J.: Internet search engines (1996) 0.00
    0.001147117 = product of:
      0.016059637 = sum of:
        0.016059637 = product of:
          0.04817891 = sum of:
            0.04817891 = weight(_text_:22 in 5664) [ClassicSimilarity], result of:
              0.04817891 = score(doc=5664,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.46428138 = fieldWeight in 5664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5664)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Date
    10.11.1996 16:36:22
  3. Weigert, M.: Horizobu: Webrecherche statt Websuche (2011) 0.00
    0.0010567298 = product of:
      0.014794217 = sum of:
        0.014794217 = weight(_text_:web in 4443) [ClassicSimilarity], result of:
          0.014794217 = score(doc=4443,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.15297705 = fieldWeight in 4443, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4443)
      0.071428575 = coord(1/14)
    
    Content
    "Das Problem mit der Suchmaschinen-Optimierung Suchmaschinen sind unser Instrument, um mit der Informationsflut im Internet klar zu kommen. Wie ich in meinem Artikel Die kürzeste Anleitung zur Suchmaschinenoptimierung aller Zeiten ausgeführt habe, gibt es dabei leider das Problem, dass der Platzhirsch Google nicht wirklich die besten Suchresultate liefert: Habt ihr schon mal nach einem Hotel, einem Restaurant oder einer anderen Location gesucht - und die ersten vier Ergebnis-Seiten sind voller Location-Aggregatoren? Wenn ich ganz spezifisch nach einem Hotel soundso in der Soundso-Strasse suche, dann finde ich, das relevanteste Ergebnis ist die Webseite dieses Hotels. Das gehört auf Seite 1 an Platz 1. Dort aber finden sich nur die Webseiten, die ganz besonders dolle suchmaschinenoptimiert sind. Wobei Google Webseiten als am suchmaschinenoptimiertesten einstuft, wenn möglichst viele Links darauf zeigen und der Inhalt relevant sein soll. Die Industrie der Suchmaschinen-Optimierer erreicht dies dadurch, dass sie folgende Dinge machen: - sie lassen Programme und Praktikanten im Web rumschwirren, die sich überall mit hirnlosen Kommentaren verewigen (Hauptsache, die sind verlinkt und zeigen auf ihre zu pushende Webseite) - sie erschaffen geistlose Blogs, in denen hirnlose Texte stehen (Hauptsache, die Keyword-Dichte stimmt) - diese Texte lassen sie durch Schüler und Praktikanten oder gleich durch Software schreiben - Dann kommt es anscheinend noch auf Keywords im Titel, in der URL etc. an.
    All this leads to the following negative side effects: - most comments nowadays are left only for the sake of the link: their actual substance is practically zero - there are by now heaps of content and entire blogs on the web whose sole purpose is to have their keyword density checked by Google bots - in my view, SEO companies work like pyramid schemes: at the top the managing directors rake in the money, at the bottom the interns toil pointlessly for little pay. In my view Google contributes a great deal to these negative consequences. Google does not disclose how its search algorithm works - and it thereby encourages this flooding of the web with pointless comments and content. As you will slowly but surely notice, I am not the greatest fan of Google (I hope they are not reading this - in Germany more than 95% of all searches are done with Google, and I do want Denkpass to remain easy to find). horizobu - researching instead of searching: now, horizobu is not really different, at least in this respect. But it is different in how it handles search results. When you search for something, six results that are as relevant as possible appear in a large frame. If these results do not suit you, you can have them swapped out individually (by clicking the cross on each result), several at a time or all of them (by clicking More), and replaced by the next results. Each of the six results also has a pin for fixing it in place - then you can swap the others and that result stays.
  4. Griesbaum, J.; Rittberger, M.; Bekavac, B.: Deutsche Suchmaschinen im Vergleich : AltaVista.de, Fireball.de, Google.de und Lycos.de (2002) 0.00
    0.0010192095 = product of:
      0.014268933 = sum of:
        0.014268933 = weight(_text_:information in 1159) [ClassicSimilarity], result of:
          0.014268933 = score(doc=1159,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.27429342 = fieldWeight in 1159, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=1159)
      0.071428575 = coord(1/14)
    
    Source
    Information und Mobilität: Optimierung und Vermeidung von Mobilität durch Information. Proceedings des 8. Internationalen Symposiums für Informationswissenschaft (ISI 2002), 7.-10.10.2002, Regensburg. Hrsg.: Rainer Hammwöhner, Christian Wolff, Christa Womser-Hacker
  5. Kriewel, S.; Klas, C.P.; Schaefer, A.; Fuhr, N.: DAFFODIL : strategic support for user-oriented access to heterogeneous digital libraries (2004) 0.00
    8.737902E-4 = product of:
      0.012233062 = sum of:
        0.012233062 = weight(_text_:information in 4838) [ClassicSimilarity], result of:
          0.012233062 = score(doc=4838,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23515764 = fieldWeight in 4838, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4838)
      0.071428575 = coord(1/14)
    
    Abstract
    DAFFODIL is a search system for digital libraries aiming at strategic support during the information search process. From a user point of view this strategic support is mainly implemented by high-level search functions, so-called stratagems, which provide functionality beyond today's digital libraries. Through the tight integration of stratagems and with the federation of heterogeneous digital libraries, DAFFODIL reaches high effects of synergy for information and services. These effects provide high-quality metadata for the searcher through an intuitively controllable user interface. The implementation of stratagems follows a tool-based model.
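    The tool-based model lends itself to composition: a stratagem is a higher-level routine that orchestrates several basic search tools across the federated libraries. A purely hypothetical Python sketch of that idea (this is not DAFFODIL's actual API; the library names and the "journal run" stratagem are illustrative):

        # Hypothetical sketch of a tool-based stratagem over federated digital libraries.
        def search_library(library, query):
            """Stand-in for a single digital-library search tool."""
            return [f"{library}: {query} (hit {i})" for i in range(2)]

        def journal_run(libraries, journal, topic):
            """Stratagem: scan one journal for a topic across all federated libraries."""
            hits = []
            for lib in libraries:
                hits += search_library(lib, f'journal:"{journal}" AND {topic}')
            return hits

        print(journal_run(["Library A", "Library B"], "JASIST", "digital libraries"))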
    Theme
    Information Gateway
  6. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.00
    7.6474476E-4 = product of:
      0.010706427 = sum of:
        0.010706427 = product of:
          0.032119278 = sum of:
            0.032119278 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
              0.032119278 = score(doc=1149,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.30952093 = fieldWeight in 1149, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1149)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Date
    17.12.2013 11:02:22
  7. Schaat, S.: Von der automatisierten Manipulation zur Manipulation der Automatisierung (2019) 0.00
    7.6474476E-4 = product of:
      0.010706427 = sum of:
        0.010706427 = product of:
          0.032119278 = sum of:
            0.032119278 = weight(_text_:22 in 4996) [ClassicSimilarity], result of:
              0.032119278 = score(doc=4996,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.30952093 = fieldWeight in 4996, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4996)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Date
    19. 2.2019 17:22:00
  8. Maurer, H.; Balke, T.; Kappe, F.; Kulathuramaiyer, N.; Weber, S.; Zaka, B.: Report on dangers and opportunities posed by large search engines, particularly Google (2007) 0.00
    7.472208E-4 = product of:
      0.010461091 = sum of:
        0.010461091 = weight(_text_:web in 754) [ClassicSimilarity], result of:
          0.010461091 = score(doc=754,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.108171105 = fieldWeight in 754, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=754)
      0.071428575 = coord(1/14)
    
    Abstract
    The aim of our investigation was to discuss exactly what is formulated in the title. This will of course constitute a main part of this write-up. However, in the process of investigations it also became clear that the focus has to be extended, not to just cover Google and search engines in an isolated fashion, but to also cover other Web 2.0 related phenomena, particularly Wikipedia, Blogs, and other related community efforts. It was the purpose of our investigation to demonstrate: - Plagiarism and IPR violation are serious concerns in academia and in the commercial world - Current techniques to fight both are rudimentary, yet could be improved by a concentrated initiative - One reason why the fight is difficult is the dominance of Google as THE major search engine and that Google is unwilling to cooperate - The monopolistic behaviour of Google is also threatening how we see the world, how we as individuals are seen (complete loss of privacy) and is threatening even world economy (!) In our proposal we did present a list of typical sections that would be covered at varying depth, with the possible replacement of one or the other by items that would emerge as still more important.
  9. Matrix of WWW indices : a comparison of Internet indexing tools (1995) 0.00
    7.2068995E-4 = product of:
      0.010089659 = sum of:
        0.010089659 = weight(_text_:information in 3165) [ClassicSimilarity], result of:
          0.010089659 = score(doc=3165,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.19395474 = fieldWeight in 3165, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=3165)
      0.071428575 = coord(1/14)
    
    Imprint
    Ann Arbor : University of Michigan School of Information and Library Studies
  10. Mandalka, M.: Open semantic search zum unabhängigen und datenschutzfreundlichen Erschliessen von Dokumenten (2015) 0.00
    6.419561E-4 = product of:
      0.008987385 = sum of:
        0.008987385 = weight(_text_:retrieval in 2133) [ClassicSimilarity], result of:
          0.008987385 = score(doc=2133,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.10026272 = fieldWeight in 2133, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2133)
      0.071428575 = coord(1/14)
    
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  11. Weiß, E.-M.: ChatGPT soll es richten : Microsoft baut KI in Suchmaschine Bing ein (2023) 0.00
    5.04483E-4 = product of:
      0.0070627616 = sum of:
        0.0070627616 = weight(_text_:information in 866) [ClassicSimilarity], result of:
          0.0070627616 = score(doc=866,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13576832 = fieldWeight in 866, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=866)
      0.071428575 = coord(1/14)
    
    Abstract
    ChatGPT, the artificial intelligence of the moment, was developed by OpenAI. And OpenAI has received not insignificant support from Microsoft in the past. Now it is time to profit: the AI is to be built into the Bing search engine, which means direct competition for Google's search algorithms and intelligences. Bing has not been particularly successful there so far. As "The Information" reports, citing two insiders, Microsoft plans to build ChatGPT into its Bing search engine. The new, intelligent search could be available as early as March. Microsoft had previously announced, at its in-house Ignite conference, the integration of the image generator DALL·E 2 into its search engine - though without a concrete launch date. If you ask ChatGPT itself, the chatbot does not yet confirm its future role, but it does know about the potential advantages.
  12. Tetzchner, J. von: As a monopoly in search and advertising Google is not able to resist the misuse of power : is the Internet turning into a battlefield of propaganda? How Google should be regulated (2017) 0.00
    4.368951E-4 = product of:
      0.006116531 = sum of:
        0.006116531 = weight(_text_:information in 3891) [ClassicSimilarity], result of:
          0.006116531 = score(doc=3891,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.11757882 = fieldWeight in 3891, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3891)
      0.071428575 = coord(1/14)
    
    Content
    How should Google be regulated? We should limit the amount of information that is being collected. In particular we should look at information that is being collected across sites. It should not be legal to combine data from multiple sites and services. The fact that these sites and services are using the same underlying technology does not change the fact that the user's dealings are with one site at a time, and each site should not have the right to share the data with others. I believe this is the cornerstone of laws in many countries today, but these laws need to be enforced. Data about us is ours alone and it should not be possible to sell it. We should also limit the ability to target users individually. In the past, ads on sites were ads on sites. You might know what kind of users visited a site and you would place tech ads on tech sites and fashion ads on fashion sites. Now the ads follow you individually. That should be made illegal as it uses data collected from multiple sources and invades our privacy. I also believe there should be regulation as to how location data is used and any information related to our mobile devices. In addition, regulators need to be vigilant as to how companies that have monopoly power use their power. That kind of goes without saying. Companies with monopoly powers should not be able to use those powers when competing in an open market or using their monopoly services to limit competition."
  13. Sirapyan, N.: In Search of... (2001) 0.00
    4.32414E-4 = product of:
      0.0060537956 = sum of:
        0.0060537956 = weight(_text_:information in 5661) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=5661,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 5661, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5661)
      0.071428575 = coord(1/14)
    
    Abstract
    In a series of capsule reviews of 20 search engines, Sirapyan gives a good overview of the state of Internet search tools. She starts out with a clear discussion of the types of search tools available, the availability of advanced features such as Boolean queries and differences between directories, regular search engines and metasearch engines. It is unclear from the article whether the author and other testers used the same searches across all of the 20 tools, but each review clearly outlines perceived strengths and weaknesses, gives tips on the advanced features, if any, of the search tool in question and suggests the types of searches that are most successful. The tools which receive top honors are Google, Northern Light, HotBot and Oingo. Finally, there is an extra sidebar that discusses meta and specialized search tools such as Infozoid and FirstGov. I can't help thinking that the usefulness of this article is related to the fact that Sirapyan is PC Magazine's librarian and goes into greater depth on those features that are of interest to information professionals
  14. Günther, M.: Vermitteln Suchmaschinen vollständige Bilder aktueller Themen? : Untersuchung der Gewichtung inhaltlicher Aspekte von Suchmaschinenergebnissen in Deutschland und den USA (2016) 0.00
    3.6034497E-4 = product of:
      0.0050448296 = sum of:
        0.0050448296 = weight(_text_:information in 3068) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=3068,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 3068, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3068)
      0.071428575 = coord(1/14)
    
    Source
    Young information scientists. 1(2016), S.13-29
  15. Teutsch, K.: Die Welt ist doch eine Scheibe : Google-Herausforderer eyePlorer (2009) 0.00
    1.8017249E-4 = product of:
      0.0025224148 = sum of:
        0.0025224148 = weight(_text_:information in 2678) [ClassicSimilarity], result of:
          0.0025224148 = score(doc=2678,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.048488684 = fieldWeight in 2678, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2678)
      0.071428575 = coord(1/14)
    
    Content
    A new visual order: Martin Hirsch is the grandson of the Nobel laureate Werner Heisenberg. He is also a brain researcher and has been occupied for years with the question: what is my head actually doing while I do brain research? Ralf von Grafenstein is a marketing expert specialising in Internet services. Together, on 1 December 2008, they founded a company in Berlin whose holy grail is the aforementioned disc, on which - that is the idea - the whole world, or at least the Internet world, is soon to find a place. The disc is called eyePlorer, which is meant as an invitation to its users: on a novel, disc-shaped platform they are to bring the immeasurable data sets of the Internet into a new visual order. The key to this, Hirsch and von Grafenstein were certain, lies in brain research - for why not transfer the associative abilities of the human mind to search engines? Providers such as Google have so far kept their hands off such approaches. They rely instead on full-text programs, that is, language-capable systems which in the end, just like keyword search, lead only to opaquely ranked collections of links. The sluggish user rarely ventures beyond page two of the results. Because it is never seen, a great deal of possibly valuable information falls by the wayside.

Languages

  • e 46
  • d 28

Types

  • a 30
  • x 2