Search (1053 results, page 1 of 53)

  • Filter: theme_ss:"Suchmaschinen"
  1. MacLeod, R.: Promoting a subject gateway : a case study from EEVL (Edinburgh Engineering Virtual Library) (2000) 0.10
    0.09715269 = product of:
      0.16192114 = sum of:
        0.012190466 = weight(_text_:a in 4872) [ClassicSimilarity], result of:
          0.012190466 = score(doc=4872,freq=8.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.25478977 = fieldWeight in 4872, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=4872)
        0.109977536 = weight(_text_:63 in 4872) [ClassicSimilarity], result of:
          0.109977536 = score(doc=4872,freq=2.0), product of:
            0.20323344 = queryWeight, product of:
              4.8978314 = idf(docFreq=896, maxDocs=44218)
              0.041494574 = queryNorm
            0.541139 = fieldWeight in 4872, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8978314 = idf(docFreq=896, maxDocs=44218)
              0.078125 = fieldNorm(doc=4872)
        0.03975313 = product of:
          0.07950626 = sum of:
            0.07950626 = weight(_text_:22 in 4872) [ClassicSimilarity], result of:
              0.07950626 = score(doc=4872,freq=4.0), product of:
                0.14530693 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041494574 = queryNorm
                0.54716086 = fieldWeight in 4872, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4872)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Describes the development of EEVL and outlines the services offered. The potential market for EEVL is discussed, and a case study of promotional activities is presented.
    Date
    22. 6.2002 19:40:22
    Source
    Online information review. 24(2000) no.1, S.59-63
    Type
    a
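The nested score breakdown under entry 1 is Lucene's ClassicSimilarity "explain" output. As a minimal sketch (assuming Lucene's documented tf-idf formula, with the constants copied from the breakdown above), the first hit's score can be reproduced as:

```python
import math

def term_weight(freq, idf, query_norm, field_norm):
    # one clause in ClassicSimilarity: queryWeight * fieldWeight, where
    # queryWeight = idf * queryNorm and fieldWeight = sqrt(tf) * idf * fieldNorm
    return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

QUERY_NORM = 0.041494574   # queryNorm from the explain tree
FIELD_NORM = 0.078125      # fieldNorm for doc 4872

w_a  = term_weight(freq=8.0, idf=1.153047,  query_norm=QUERY_NORM, field_norm=FIELD_NORM)
w_63 = term_weight(freq=2.0, idf=4.8978314, query_norm=QUERY_NORM, field_norm=FIELD_NORM)
w_22 = term_weight(freq=4.0, idf=3.5018296, query_norm=QUERY_NORM, field_norm=FIELD_NORM) * 0.5  # coord(1/2)

score = (w_a + w_63 + w_22) * 3 / 5   # coord(3/5): 3 of 5 query clauses matched
print(f"{score:.7f}")  # ~0.0971527, matching the 0.10 shown for entry 1
```

Rare terms ("63", "22") dominate the score because idf enters the product twice, once through queryWeight and once through fieldWeight.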
  2. Stalder, F.; Mayer, C.: Der zweite Index : Suchmaschinen, Personalisierung und Überwachung (2009) 0.06
    Abstract
    Google's stated ambition is, famously, "to organize the world's information." But it is impossible to organize the world's information without an operative model of the world. At the height of Western colonial power, Melvil(le) Dewey (1851-1931) could simply take the Victorian world view as the basis of a universal classification system which, for example, lumped all non-Christian religions into a single category (no. 290, "Other religions"). Such a one-sided classification system, however useful in libraries, cannot work in the multicultural world of global communication. In fact, no uniform classification system can work in principle, since it is impossible to agree on a single cultural frame of reference from which the categories could be defined. This, besides the problem of scale, is why Internet directories such as those introduced by Yahoo! and the Open Directory Project (dmoz) collapsed after a brief period of success. Search engines sidestep this problem by reorganizing the order of the output anew for every query and by using the self-referential method of link analysis to construct the hierarchy of results (cf. Katja Mayer's contribution in this volume). This ranking claims to be objective and to mirror the real topology of the network, which emerges, unplanned, from the links set by the individual producers of information. Drawing on their knowledge of this topology, search engines favor heavily linked nodes over sparsely linked peripheral pages. This specific kind of objectivity is one of the core elements of search engines, for it scales without difficulty and enjoys the trust of the users.
    Type
    a
  3. Moukdad, H.; Large, A.: Information retrieval from full-text arabic databases : can search engines designed for English do the job? (2001) 0.06
    Source
    Libri. 51(2001) no.2, S.63-74
    Type
    a
  4. Olvera Lobo, M.D.: Rendimiento de los sistemas de recuperacion de informacion en al World Wide Web : revision metodologica (2000) 0.05
    Source
    Revista Española de Documentación Científica. 23(2000) no.1, S.63-77
    Type
    a
  5. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.04
    Abstract
    The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK based information. The experimental version developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia; see also: http://www7.scu.edu.au/programme/posters/1846/com1846.htm.
    Type
    a
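The classifier described in entry 5 was written in Java and is not reproduced here; the following is a hypothetical Python miniature of the underlying idea only — scoring a document against per-class vocabularies by term overlap and assigning the best-scoring DDC class. The vocabularies are invented for illustration:

```python
# hypothetical miniature DDC vocabularies, one term set per class
DDC_VOCAB = {
    "004 Computer science": {"computer", "software", "web", "internet", "data"},
    "620 Engineering": {"engineering", "mechanical", "materials", "design"},
    "780 Music": {"music", "composer", "orchestra", "melody"},
}

def classify(text: str) -> str:
    # pick the DDC class whose vocabulary overlaps most with the document's terms
    words = set(text.lower().split())
    return max(DDC_VOCAB, key=lambda ddc: len(words & DDC_VOCAB[ddc]))

print(classify("a web search engine indexing computer software resources"))
# prints "004 Computer science"
```

A production classifier would of course use weighted terms and far larger class descriptions, but the match-and-rank structure is the same.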
  6. Notess, G.R.: Internet search techniques and strategies (1997) 0.04
    Abstract
    Offers advice on Internet search techniques and strategies. These include going straight to the information source, guessing URLs, developing strategies for when to use subject directories (product searches, broad topics, and current events) and when to use search engines (unique keywords, phrase searching, field searching, and limits), using a multiple search engine strategy, and chopping off part of the URL when sites cannot be found.
    Source
    Online. 21(1997) no.4, S.63-66
    Type
    a
  7. Koehler, W.C.: Internet search note : specialized retrieval and Web search engines (1997) 0.04
    Abstract
    Tested 3 WWW search engines (HotBot Expert, AltaVista Advanced and Open Text Power) with a search strategy that required extensive searching using URL fragments, specifically top-level and second-level domain name tags. Tests examined which engine provided the greatest coverage of the target population and which had the most comprehensive index for the area under study. Concludes that HotBot Expert's URL fragment searching is superior to the others when searching descriptive geographic second-level domains.
    Source
    Searcher. 5(1997) no.5, S.63-65
    Type
    a
  8. Liu, H.-M.C.: Selection and comparison of WWW search tools (1996) 0.04
    Source
    Journal of information, communication, and library science. 2(1996) no.4, S.41-63
    Type
    a
  9. Siebenlist, T.: MEMOSE. Spezialsuchmaschine für emotional geladene Dokumente (2012) 0.03
    Source
    Information - Wissenschaft und Praxis. 63(2012) H.4, S.252-260
    Type
    a
  10. Dominich, S.; Skrop, A.: PageRank and interaction information retrieval (2005) 0.03
    Abstract
    The PageRank method is used by the Google Web search engine to compute the importance of Web pages. Two different views have been developed for the interpretation of the PageRank method and values: (a) stochastic (random surfer): the PageRank values can be conceived as the steady-state distribution of a Markov chain, and (b) algebraic: the PageRank values form the eigenvector corresponding to eigenvalue 1 of the Web link matrix. The Interaction Information Retrieval (I²R) method is a nonclassical information retrieval paradigm, which represents a connectionist approach based on dynamic systems. In the present paper, a different interpretation of PageRank is proposed, namely, a dynamic systems viewpoint, by showing that the PageRank method can be formally interpreted as a particular case of the Interaction Information Retrieval method; and thus, the PageRank values may be interpreted as neutral equilibrium points of the Web.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.1, S.63-69
    Type
    a
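The two views named in entry 10's abstract (random-surfer steady state and eigenvector for eigenvalue 1) are fixed points of the same iteration. A minimal power-iteration sketch on a hypothetical three-page web — not the paper's I²R formulation — looks like this:

```python
import numpy as np

def pagerank(link_matrix, d=0.85, tol=1e-10):
    """Power iteration on the Google matrix; the fixed point is both the
    steady-state distribution of the random-surfer Markov chain and the
    dominant eigenvector (eigenvalue 1) of the damped link matrix."""
    n = link_matrix.shape[0]
    # column-stochastic transition matrix: each page splits its vote over its out-links
    out = link_matrix.sum(axis=0)
    M = link_matrix / np.where(out == 0, 1, out)
    r = np.full(n, 1.0 / n)
    while True:
        r_next = (1 - d) / n + d * M @ r
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

# hypothetical 3-page web; A[i, j] = 1 if page j links to page i
A = np.array([[0, 0, 1],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
print(pagerank(A))  # page 1, linked by both other pages, ranks highest
```

Because the transition matrix is column-stochastic, the ranks stay a probability distribution (they sum to 1) at every step.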
  11. Plath, J.: Allianz gegen Google : Streit um die Verwertungsrechte von Büchern (2008) 0.03
    Content
    Outraged, US authors and publishers went to court and in the fall negotiated a settlement that still has to be approved by the court. It provides that Google bears the costs of the proceedings, pays every author of a book digitized without permission 60 US dollars (45 million US dollars in total), and provides 34.5 million US dollars for the creation of a digital book registry independent of Google. The registry is to distribute the revenues (37 percent to Google, 63 percent to the rights holders) that Google earns from paid access to books or their download (as e-book or book on demand), from advertising, and from online subscriptions for institutions and libraries. And of course the scanning machines may keep running. The consequences of the settlement call for a deep breath. Google, Robert Darnton warns in "The New York Review of Books", in practice receives a digitization monopoly for the USA. What is more: the million-fold violation of copyright is pragmatically waved through. The publishers are delighted, after all, that Google has opened up a new line of business: all the out-of-print books whose reprinting they consider unprofitable. The search engine company, for its part, departs from the principle of financing through advertising: at a stroke, Google becomes the world's largest publisher and the world's largest bookseller. Book search turns into online book (content) selling.
    Date
    5. 1.1997 9:39:22
    Type
    a
  12. Ortiz-Cordova, A.; Jansen, B.J.: Classifying web search queries to identify high revenue generating customers (2012) 0.03
    Abstract
    Traffic from search engines is important for most online businesses, with the majority of visitors to many websites being referred by search engines. Therefore, an understanding of this search engine traffic is critical to the success of these websites. Understanding search engine traffic means understanding the underlying intent of the query terms and the corresponding user behaviors of searchers submitting keywords. In this research, using 712,643 query keywords from a popular Spanish music website relying on contextual advertising as its business model, we use a k-means clustering algorithm to categorize the referral keywords with similar characteristics of onsite customer behavior, including attributes such as clickthrough rate and revenue. We identified 6 clusters of consumer keywords. Clusters range from a large number of users who are low impact to a small number of high impact users. We demonstrate how online businesses can leverage this segmentation clustering approach to provide a more tailored consumer experience. Implications are that businesses can effectively segment customers to develop better business models to increase advertising conversion rates.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.7, S.1426-1441
    Type
    a
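The pipeline in entry 12 (cluster referral keywords by on-site behavior attributes such as clickthrough rate and revenue, k = 6) can be sketched roughly as follows. The feature data here is invented, and scikit-learn's KMeans stands in for whatever implementation the authors used:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# hypothetical stand-in for the study's keyword features:
# one row per referral keyword, columns = clickthrough rate and revenue
features = np.column_stack([
    rng.uniform(0.0, 0.3, 600),   # clickthrough rate
    rng.gamma(2.0, 5.0, 600),     # revenue attributed to the keyword
])
# standardize so neither attribute dominates the Euclidean distance
scaled = (features - features.mean(axis=0)) / features.std(axis=0)

km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(scaled)
labels = km.labels_               # segment id per keyword, 6 segments as in the study
print(np.bincount(labels))        # keywords per segment
```

Inspecting each segment's mean clickthrough rate and revenue then yields the "low impact" versus "high impact" customer segments the abstract describes.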
  13. Li, L.; Shang, Y.; Zhang, W.: Improvement of HITS-based algorithms on Web documents 0.03
    Abstract
    In this paper, we present two ways to improve the precision of HITS-based algorithms on Web documents. First, by analyzing the limitations of current HITS-based algorithms, we propose a new weighted HITS-based method that assigns appropriate weights to in-links of root documents. Then, we combine content analysis with HITS-based algorithms and study the effects of four representative relevance scoring methods, VSM, Okapi, TLS, and CDR, using a set of broad topic queries. Our experimental results show that our weighted HITS-based method performs significantly better than Bharat's improved HITS algorithm. When we combine our weighted HITS-based method or Bharat's HITS algorithm with any of the four relevance scoring methods, the combined methods are only marginally better than our weighted HITS-based method. Between the four relevance scoring methods, there is no significant quality difference when they are combined with a HITS-based algorithm.
    Content
    Cf.: http://delab.csd.auth.gr/~dimitris/courses/ir_spring06/page_rank_computing/p527-li.pdf. See also: http://www2002.org/CDROM/refereed/643/.
    Type
    a
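Entry 13's weighted variant modifies the update step of baseline HITS by weighting in-links of root documents; that weighting scheme is not reproduced here, but the baseline mutual-reinforcement iteration it starts from can be sketched on a hypothetical four-page graph:

```python
import numpy as np

def hits(adj, iters=50):
    """Plain HITS: a good hub points to good authorities, and a good
    authority is pointed to by good hubs; adj[i, j] = 1 if page i links to j."""
    n = adj.shape[0]
    hubs = np.ones(n)
    auths = np.ones(n)
    for _ in range(iters):
        auths = adj.T @ hubs              # authority = sum of in-linking hub scores
        auths /= np.linalg.norm(auths)
        hubs = adj @ auths                # hub = sum of out-linked authority scores
        hubs /= np.linalg.norm(hubs)
    return hubs, auths

# hypothetical root set: pages 0 and 1 both link to page 2; page 2 links to page 3
adj = np.array([[0, 0, 1, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 0]], dtype=float)
hubs, auths = hits(adj)  # page 2 emerges as the top authority, pages 0/1 as hubs
```

The weighted method of the paper replaces the implicit weight 1 on each in-link with a per-link weight before the same normalization steps.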
  14. Makris, C.; Plegas, Y.; Stamou, S.: Web query disambiguation using PageRank (2012) 0.03
    Abstract
    In this article, we propose new word sense disambiguation strategies for resolving the senses of polysemous query terms issued to Web search engines, and we explore the application of those strategies when used in a query expansion framework. The novelty of our approach lies in the exploitation of the Web page PageRank values as indicators of the significance the different senses of a term carry when employed in search queries. We also aim at scalable query sense resolution techniques that can be applied without loss of efficiency to large data sets such as those on the Web. Our experimental findings validate that the proposed techniques perform more accurately than do the traditional disambiguation strategies and improve the quality of the search results, when involved in query expansion.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.8, S.1581-1592
    Type
    a
  15. Croft, W.B.; Metzler, D.; Strohman, T.: Search engines : information retrieval in practice (2010) 0.03
    Abstract
    For introductory information retrieval courses at the undergraduate and graduate level in computer science, information science, and computer engineering departments. Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice is designed to give undergraduate students the understanding and tools they need to evaluate, compare, and modify search engines. Coverage of the underlying IR and mathematical models reinforces key concepts. The book's numerous programming exercises make extensive use of Galago, a Java-based open source search engine. SUPPLEMENTS: extensive lecture slides (in PDF and PPT format); solutions to selected end-of-chapter problems (instructors only); test collections for exercises; the Galago search engine.
    Signature
    63 TWX 2510
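The per-hit score breakdowns in this listing follow Lucene's ClassicSimilarity (TF-IDF) decomposition: score = queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = √tf × idf × fieldNorm. A minimal check, recomputing the `weight(_text_:63 in 2605)` figures shown above from their stated components:

```python
import math

# Values taken directly from the explain output for weight(_text_:63 in 2605)
idf        = 4.8978314    # idf(docFreq=896, maxDocs=44218)
query_norm = 0.041494574  # queryNorm
tf_raw     = 2.0          # termFreq
field_norm = 0.046875     # fieldNorm(doc=2605)

query_weight = idf * query_norm                      # ~ 0.20323344
field_weight = math.sqrt(tf_raw) * idf * field_norm  # ~ 0.32468343
score = query_weight * field_weight                  # ~ 0.06598653
```

Multiplying the recovered weights reproduces the listed term score to floating-point precision, confirming the listing's figures are a standard ClassicSimilarity explain tree.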
  16. Lewandowski, D.; Drechsler, J.; Mach, S. von: Deriving query intents from web search engine queries (2012) 0.03
    0.025220802 = product of:
      0.063052006 = sum of:
        0.008063235 = weight(_text_:a in 385) [ClassicSimilarity], result of:
          0.008063235 = score(doc=385,freq=14.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.1685276 = fieldWeight in 385, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=385)
        0.054988768 = weight(_text_:63 in 385) [ClassicSimilarity], result of:
          0.054988768 = score(doc=385,freq=2.0), product of:
            0.20323344 = queryWeight, product of:
              4.8978314 = idf(docFreq=896, maxDocs=44218)
              0.041494574 = queryNorm
            0.2705695 = fieldWeight in 385, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8978314 = idf(docFreq=896, maxDocs=44218)
              0.0390625 = fieldNorm(doc=385)
      0.4 = coord(2/5)
    
    Abstract
    The purpose of this article is to test the reliability of query intents derived from queries, either by the user who entered the query or by another juror. We report the findings of three studies. First, we conducted a large-scale classification study (~50,000 queries) using a crowdsourcing approach. Next, we used clickthrough data from a search engine log and validated the judgments given by the jurors from the crowdsourcing study. Finally, we conducted an online survey on a commercial search engine's portal. Because we used the same queries for all three studies, we also were able to compare the results and the effectiveness of the different approaches. We found that neither the crowdsourcing approach, using jurors who classified queries originating from other users, nor the questionnaire approach, using searchers who were asked about their own query that they just entered into a Web search engine, led to satisfying results. This leads us to conclude that there was little understanding of the classification tasks, even though both groups of jurors were given detailed instructions. Although we used manual classification, our research also has important implications for automatic classification. We must question the success of approaches using automatic classification and comparing its performance to a baseline from human jurors.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.9, S.1773-1788
    Type
    a
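The reliability question this study raises, whether independent jurors assign the same intent to a query, reduces to simple agreement statistics over the collected judgments. A hypothetical sketch (queries, labels, and juror counts invented) computing the majority intent and the share of jurors who agreed with it:

```python
from collections import Counter

def majority_intent(labels):
    """Return the most frequent intent label and the fraction of jurors
    who chose it (a crude per-query agreement measure)."""
    (label, count), = Counter(labels).most_common(1)
    return label, count / len(labels)

# Hypothetical juror judgments for two queries, using the common
# navigational / informational / transactional intent classes
judgments = {
    "facebook login":     ["navigational", "navigational", "transactional"],
    "history of the web": ["informational", "informational", "informational"],
}

results = {q: majority_intent(ls) for q, ls in judgments.items()}
```

Low agreement fractions across many queries would signal exactly the problem the authors report: jurors understand the classification task inconsistently.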
  17. Souza, J.; Carvalho, A.; Cristo, M.; Moura, E.; Calado, P.; Chirita, P.-A.; Nejdl, W.: Using site-level connections to estimate link confidence (2012) 0.02
    0.024981549 = product of:
      0.062453873 = sum of:
        0.0074651055 = weight(_text_:a in 498) [ClassicSimilarity], result of:
          0.0074651055 = score(doc=498,freq=12.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.15602624 = fieldWeight in 498, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=498)
        0.054988768 = weight(_text_:63 in 498) [ClassicSimilarity], result of:
          0.054988768 = score(doc=498,freq=2.0), product of:
            0.20323344 = queryWeight, product of:
              4.8978314 = idf(docFreq=896, maxDocs=44218)
              0.041494574 = queryNorm
            0.2705695 = fieldWeight in 498, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8978314 = idf(docFreq=896, maxDocs=44218)
              0.0390625 = fieldNorm(doc=498)
      0.4 = coord(2/5)
    
    Abstract
    Search engines are essential tools for web users today. They rely on a large number of features to compute the rank of search results for each given query. The estimated reputation of pages is among the effective features available for search engine designers, probably being adopted by most current commercial search engines. Page reputation is estimated by analyzing the linkage relationships between pages. This information is used by link analysis algorithms as a query-independent feature, to be taken into account when computing the rank of the results. Unfortunately, several types of links found on the web may damage the estimated page reputation and thus cause a negative effect on the quality of search results. This work studies alternatives to reduce the negative impact of such noisy links. More specifically, the authors propose and evaluate new methods that deal with noisy links, considering scenarios where the reputation of pages is computed using the PageRank algorithm. They show, through experiments with real web content, that their methods achieve significant improvements when compared to previous solutions proposed in the literature.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.11, S.2294-2312
    Type
    a
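The setting of this paper, page reputation computed by PageRank but with some links deserving less trust, can be sketched by running PageRank over confidence-weighted edges so that a low-confidence link passes along less reputation mass. The weighting scheme below is an illustration of the general idea under invented confidence values, not the authors' method.

```python
def weighted_pagerank(edges, n, damping=0.85, iters=50):
    """edges: {src: [(dst, confidence), ...]}; nodes are 0..n-1.
    Each link's confidence scales how much of the source's rank
    flows along it, relative to the source's other out-links."""
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - damping) / n] * n
        for src, outs in edges.items():
            total = sum(conf for _, conf in outs)
            if total == 0:
                continue
            for dst, conf in outs:
                new[dst] += damping * rank[src] * conf / total
        rank = new
    return rank

# 3 pages: page 0 links to page 1 with full confidence and to page 2
# via a hypothetical noisy link (confidence 0.2); both link back to 0.
edges = {0: [(1, 1.0), (2, 0.2)], 1: [(0, 1.0)], 2: [(0, 1.0)]}
rank = weighted_pagerank(edges, 3)
```

Because the noisy link is down-weighted, page 1 ends up with more reputation than page 2, while the total rank mass is preserved.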
  18. Lewandowski, D.: ¬The retrieval effectiveness of search engines on navigational queries (2011) 0.02
    0.0237195 = product of:
      0.05929875 = sum of:
        0.0043099807 = weight(_text_:a in 4537) [ClassicSimilarity], result of:
          0.0043099807 = score(doc=4537,freq=4.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.090081796 = fieldWeight in 4537, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4537)
        0.054988768 = weight(_text_:63 in 4537) [ClassicSimilarity], result of:
          0.054988768 = score(doc=4537,freq=2.0), product of:
            0.20323344 = queryWeight, product of:
              4.8978314 = idf(docFreq=896, maxDocs=44218)
              0.041494574 = queryNorm
            0.2705695 = fieldWeight in 4537, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8978314 = idf(docFreq=896, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4537)
      0.4 = coord(2/5)
    
    Abstract
    Purpose - The purpose of this paper is to test major web search engines on their performance on navigational queries, i.e. searches for homepages. Design/methodology/approach - In total, 100 user queries are posed to six search engines (Google, Yahoo!, MSN, Ask, Seekport, and Exalead). Users described the desired pages, and the position of these pages in the results was recorded. Success rates and mean reciprocal rank are calculated. Findings - The performance of the major search engines Google, Yahoo!, and MSN was found to be the best, with around 90 per cent of queries answered correctly. Ask and Exalead performed worse but received good scores as well. Research limitations/implications - All queries were in German, and the German-language interfaces of the search engines were used. Therefore, the results are only valid for German queries. Practical implications - When designing a search engine to compete with the major search engines, care should be taken over performance on navigational queries: users can easily be influenced in their quality ratings of search engines by this performance. Originality/value - This study systematically compares the major search engines on navigational queries and relates the findings to studies on the retrieval effectiveness of the engines on informational queries.
    Source
    Aslib proceedings. 63(2011) no.4, S.354-363
    Type
    a
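The two measures this study reports, success rate and mean reciprocal rank, are straightforward to compute from the recorded result positions. A minimal sketch (the ranks are invented for illustration; `None` marks a query whose homepage did not appear):

```python
def mean_reciprocal_rank(ranks):
    """ranks: position of the desired homepage per query, or None if absent.
    Queries with no hit contribute 0 to the mean."""
    return sum(1.0 / r for r in ranks if r) / len(ranks)

def success_at(ranks, k):
    """Share of queries whose homepage appeared within the top k results."""
    return sum(1 for r in ranks if r and r <= k) / len(ranks)

ranks = [1, 1, 3, None]           # hypothetical positions for four navigational queries
mrr = mean_reciprocal_rank(ranks)  # (1 + 1 + 1/3 + 0) / 4
s1 = success_at(ranks, 1)          # 2 of 4 homepages ranked first
```

For navigational queries, success at rank 1 is the most telling figure, since the user wants exactly one page at the top of the list.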
  19. Linten, M.: "Was ist eine Wanderdüne?" (2012) 0.02
    0.023214554 = product of:
      0.058036383 = sum of:
        0.0030476165 = weight(_text_:a in 177) [ClassicSimilarity], result of:
          0.0030476165 = score(doc=177,freq=2.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.06369744 = fieldWeight in 177, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=177)
        0.054988768 = weight(_text_:63 in 177) [ClassicSimilarity], result of:
          0.054988768 = score(doc=177,freq=2.0), product of:
            0.20323344 = queryWeight, product of:
              4.8978314 = idf(docFreq=896, maxDocs=44218)
              0.041494574 = queryNorm
            0.2705695 = fieldWeight in 177, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8978314 = idf(docFreq=896, maxDocs=44218)
              0.0390625 = fieldNorm(doc=177)
      0.4 = coord(2/5)
    
    Source
    Information - Wissenschaft und Praxis. 63(2012) H.2, S.95-98
    Type
    a
  20. Höfer, W.: Detektive im Web (1999) 0.02
    0.022007218 = product of:
      0.05501804 = sum of:
        0.0073142797 = weight(_text_:a in 4007) [ClassicSimilarity], result of:
          0.0073142797 = score(doc=4007,freq=2.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.15287387 = fieldWeight in 4007, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=4007)
        0.04770376 = product of:
          0.09540752 = sum of:
            0.09540752 = weight(_text_:22 in 4007) [ClassicSimilarity], result of:
              0.09540752 = score(doc=4007,freq=4.0), product of:
                0.14530693 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041494574 = queryNorm
                0.6565931 = fieldWeight in 4007, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4007)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    22. 8.1999 20:22:06
    Type
    a

Types

  • a 971
  • el 76
  • m 38
  • s 8
  • x 6
  • r 4
  • p 2