Search (17 results, page 1 of 1)

  • author_ss:"Lewandowski, D."
  1. Lewandowski, D.; Spree, U.: Ranking of Wikipedia articles in search engines revisited : fair ranking for reasonable quality? (2011) 0.06
    0.06290409 = product of:
      0.12580818 = sum of:
        0.110636614 = weight(_text_:engines in 444) [ClassicSimilarity], result of:
          0.110636614 = score(doc=444,freq=6.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.4861493 = fieldWeight in 444, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0390625 = fieldNorm(doc=444)
        0.015171562 = product of:
          0.030343125 = sum of:
            0.030343125 = weight(_text_:22 in 444) [ClassicSimilarity], result of:
              0.030343125 = score(doc=444,freq=2.0), product of:
                0.15685207 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04479146 = queryNorm
                0.19345059 = fieldWeight in 444, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=444)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
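
     The score breakdown above is Lucene "explain" output for ClassicSimilarity, i.e. classic TF-IDF with query and field normalization. As a minimal sketch, the following Python snippet reproduces the 0.06290409 score for this record from the quantities shown in the tree. The function names are illustrative; queryNorm is taken as given, since it depends on all four query clauses (coord(2/4) means two of four clauses matched), only two of which appear here.

       import math

       # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
       def idf(doc_freq, max_docs):
           return 1.0 + math.log(max_docs / (doc_freq + 1))

       # tf = sqrt(termFreq)
       def tf(freq):
           return math.sqrt(freq)

       query_norm = 0.04479146  # taken as given; depends on all query clauses
       field_norm = 0.0390625   # stored per-field length normalization

       idf_engines = idf(746, 44218)                      # ~5.080822
       query_weight = idf_engines * query_norm            # ~0.22757743
       field_weight = tf(6.0) * idf_engines * field_norm  # ~0.4861493
       engines_clause = query_weight * field_weight       # ~0.110636614

       idf_22 = idf(3622, 44218)                          # ~3.5018296
       clause_22 = (idf_22 * query_norm) * (tf(2.0) * idf_22 * field_norm) * 0.5  # coord(1/2)

       print((engines_clause + clause_22) * 0.5)          # coord(2/4) -> ~0.06290409

     The same formulas reproduce every other score breakdown in this result list; only freq, docFreq, and fieldNorm vary per record.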
    
    Abstract
     This paper aims to review the fiercely discussed question of whether the ranking of Wikipedia articles in search engines is justified by the quality of the articles. After an overview of current research on information quality in Wikipedia, a summary of the extended discussion on the quality of encyclopedic entries in general is given. On this basis, a heuristic method for evaluating Wikipedia entries is developed, applied to Wikipedia articles that scored highly in a search engine retrieval effectiveness test, and compared with the relevance judgments of jurors. In all search engines tested, Wikipedia results are unanimously judged better by the jurors than other results at the corresponding results position. Relevance judgments often roughly correspond with the results from the heuristic evaluation. Cases in which high relevance judgments are not in accordance with the comparatively low score from the heuristic evaluation are interpreted as an indicator of a high degree of trust in Wikipedia. One of the systemic shortcomings of Wikipedia lies in its necessarily incoherent user model. Further tuning of the suggested criteria catalog, for instance different weightings of the supplied criteria, could serve as a starting point for a user-model-differentiated evaluation of Wikipedia articles. Established methods of quality evaluation of reference works are applied to Wikipedia articles and integrated with the question of search engine evaluation.
    Date
    30. 9.2012 19:27:22
  2. Lewandowski, D.; Sünkler, S.: What does Google recommend when you want to compare insurance offerings? (2019) 0.05
    0.052752987 = product of:
      0.10550597 = sum of:
        0.09033441 = weight(_text_:engines in 5288) [ClassicSimilarity], result of:
          0.09033441 = score(doc=5288,freq=4.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.39693922 = fieldWeight in 5288, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5288)
        0.015171562 = product of:
          0.030343125 = sum of:
            0.030343125 = weight(_text_:22 in 5288) [ClassicSimilarity], result of:
              0.030343125 = score(doc=5288,freq=2.0), product of:
                0.15685207 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04479146 = queryNorm
                0.19345059 = fieldWeight in 5288, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5288)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     Purpose - The purpose of this paper is to describe a new method to improve the analysis of search engine results by considering the provider level as well as the domain level. This approach is tested by conducting a study using queries on the topic of insurance comparisons. Design/methodology/approach - The authors conducted an empirical study that analyses the results of search queries aimed at comparing insurance companies. The authors used a self-developed software system that automatically queries commercial search engines and automatically extracts the content of the returned result pages for further data analysis. The data analysis was carried out using the KNIME Analytics Platform. Findings - Google's top search results are served by only a few providers that frequently appear in these results. The authors show that some providers operate several domains on the same topic and that these domains appear for the same queries in the result lists. Research limitations/implications - The authors demonstrate the feasibility of this approach and draw conclusions for further investigations from the empirical study. However, the study is a limited use case based on a limited number of search queries. Originality/value - The proposed method allows large-scale analysis of the composition of the top results from commercial search engines. It allows using valid empirical data to determine what users actually see on the search engine result pages.
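
     The provider-level analysis described above can be illustrated with a short sketch. This is not the authors' system (they used self-developed query software and the KNIME Analytics Platform); the domain-to-provider mapping is a hypothetical example of how several domains collapse onto one provider.

       from urllib.parse import urlparse
       from collections import Counter

       # Hypothetical mapping from registered domains to operating providers.
       PROVIDERS = {
           "check24.de": "CHECK24",
           "check24.net": "CHECK24",
           "verivox.de": "Verivox",
       }

       def registered_domain(url):
           # Naive: last two labels of the hostname (no public-suffix handling).
           host = urlparse(url).hostname or ""
           return ".".join(host.split(".")[-2:])

       def provider_counts(result_urls):
           # Aggregate results per provider rather than per domain.
           return Counter(PROVIDERS.get(registered_domain(u), registered_domain(u))
                          for u in result_urls)

       print(provider_counts([
           "https://www.check24.de/kfz-versicherung/",
           "https://www.check24.net/versicherungen/",
           "https://www.verivox.de/kfz-versicherung/",
       ]))  # CHECK24 counts twice even though two different domains were returned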
    Date
    20. 1.2015 18:30:22
  3. Lewandowski, D.: ¬The retrieval effectiveness of search engines on navigational queries (2011) 0.05
    0.04790705 = product of:
      0.1916282 = sum of:
        0.1916282 = weight(_text_:engines in 4537) [ClassicSimilarity], result of:
          0.1916282 = score(doc=4537,freq=18.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.8420352 = fieldWeight in 4537, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4537)
      0.25 = coord(1/4)
    
    Abstract
     Purpose - The purpose of this paper is to test major web search engines on their performance on navigational queries, i.e. searches for homepages. Design/methodology/approach - In total, 100 user queries are posed to six search engines (Google, Yahoo!, MSN, Ask, Seekport, and Exalead). Users described the desired pages, and the results position of these pages was recorded. Measured success and mean reciprocal rank are calculated. Findings - The performance of the major search engines Google, Yahoo!, and MSN was found to be the best, with around 90 per cent of queries answered correctly. Ask and Exalead performed worse but received good scores as well. Research limitations/implications - All queries were in German, and the German-language interfaces of the search engines were used. Therefore, the results are only valid for German queries. Practical implications - When designing a search engine to compete with the major search engines, care should be taken over performance on navigational queries, as users are easily influenced in their quality ratings of search engines by this performance. Originality/value - This study systematically compares the major search engines on navigational queries and compares the findings with studies on the retrieval effectiveness of the engines on informational queries.
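
     Mean reciprocal rank, one of the two measures used here, is simple to state; a minimal sketch (the data are invented, not the study's):

       def mean_reciprocal_rank(positions):
           # positions: 1-based rank of the correct homepage per query,
           # or None when it was not found (contributes 0)
           return sum(1.0 / p for p in positions if p) / len(positions)

       # Homepage found at rank 1, at rank 3, and not at all:
       print(mean_reciprocal_rank([1, 3, None]))  # (1 + 1/3 + 0) / 3 ~ 0.444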
  4. Lewandowski, D.: ¬A framework for evaluating the retrieval effectiveness of search engines (2012) 0.05
    0.046939135 = product of:
      0.18775654 = sum of:
        0.18775654 = weight(_text_:engines in 106) [ClassicSimilarity], result of:
          0.18775654 = score(doc=106,freq=12.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.82502264 = fieldWeight in 106, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.046875 = fieldNorm(doc=106)
      0.25 = coord(1/4)
    
    Abstract
    This chapter presents a theoretical framework for evaluating next generation search engines. The author focuses on search engines whose results presentation is enriched with additional information and does not merely present the usual list of "10 blue links," that is, of ten links to results, accompanied by a short description. While Web search is used as an example here, the framework can easily be applied to search engines in any other area. The framework not only addresses the results presentation, but also takes into account an extension of the general design of retrieval effectiveness tests. The chapter examines the ways in which this design might influence the results of such studies and how a reliable test is best designed.
    Footnote
     Cf.: http://www.igi-global.com/book/next-generation-search-engines/64437.
    Source
     Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
  5. Sundin, O.; Lewandowski, D.; Haider, J.: Whose relevance? : Web search engines as multisided relevance machines (2022) 0.04
    0.038722813 = product of:
      0.15489125 = sum of:
        0.15489125 = weight(_text_:engines in 542) [ClassicSimilarity], result of:
          0.15489125 = score(doc=542,freq=6.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.68060905 = fieldWeight in 542, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0546875 = fieldNorm(doc=542)
      0.25 = coord(1/4)
    
    Abstract
     This opinion piece takes Google's response to the so-called COVID-19 infodemic as a starting point to argue for the need to consider societal relevance as a complement to other types of relevance. The authors maintain that if information science wants to be a discipline at the forefront of research on relevance, search engines, and their use, then the information science research community needs to address itself to the challenges and conditions that commercial search engines create. The article concludes with a tentative list of related research topics.
  6. Lewandowski, D.: ¬The retrieval effectiveness of web search engines : considering results descriptions (2008) 0.04
    0.035707813 = product of:
      0.14283125 = sum of:
        0.14283125 = weight(_text_:engines in 2345) [ClassicSimilarity], result of:
          0.14283125 = score(doc=2345,freq=10.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.62761605 = fieldWeight in 2345, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2345)
      0.25 = coord(1/4)
    
    Abstract
     Purpose - The purpose of this paper is to compare five major web search engines (Google, Yahoo, MSN, Ask.com, and Seekport) for their retrieval effectiveness, taking into account not only the results, but also the results descriptions. Design/methodology/approach - The study uses real-life queries. Results are made anonymous and are randomized. Results are judged by the persons posing the original queries. Findings - The two major search engines, Google and Yahoo, perform best, and there are no significant differences between them. Google delivers significantly more relevant result descriptions than any other search engine. This could be one reason for users perceiving this engine as superior. Research limitations/implications - The study is based on a user model where the user takes into account a certain number of results rather systematically. This may not be the case in real life. Practical implications - The paper implies that search engines should focus on relevant descriptions. Searchers are advised to use other search engines in addition to Google. Originality/value - This is the first major study to compare results and descriptions systematically, and it proposes new retrieval measures that take results descriptions into account.
  7. Lewandowski, D.; Mayr, P.: Exploring the academic invisible Web (2006) 0.03
    0.027659154 = product of:
      0.110636614 = sum of:
        0.110636614 = weight(_text_:engines in 3752) [ClassicSimilarity], result of:
          0.110636614 = score(doc=3752,freq=6.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.4861493 = fieldWeight in 3752, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3752)
      0.25 = coord(1/4)
    
    Abstract
     Purpose: To provide a critical review of Bergman's 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws. Literature review on approaches for uncovering information from the Invisible Web. Findings: Bergman's size estimate of the Invisible Web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimate is given. Research limitations/implications: The precision of our estimate is limited due to a small sample size and lack of reliable data. Practical implications: We can show that no single library alone will be able to index the Academic Invisible Web. We suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the Academic Invisible Web.
  8. Lewandowski, D.; Mayr, P.: Exploring the academic invisible Web (2006) 0.03
    0.027659154 = product of:
      0.110636614 = sum of:
        0.110636614 = weight(_text_:engines in 2580) [ClassicSimilarity], result of:
          0.110636614 = score(doc=2580,freq=6.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.4861493 = fieldWeight in 2580, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2580)
      0.25 = coord(1/4)
    
    Abstract
    Purpose: To provide a critical review of Bergman's 2001 study on the deep web. In addition, we bring a new concept into the discussion, the academic invisible web (AIW). We define the academic invisible web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the invisible web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws. Literature review on approaches for uncovering information from the invisible web. Findings: Bergman's size estimate of the invisible web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimate is given. Research limitations/implications: The precision of our estimate is limited due to a small sample size and lack of reliable data. Practical implications: We can show that no single library alone will be able to index the academic invisible web. We suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the academic invisible web.
  9. Lewandowski, D.: How can library materials be ranked in the OPAC? (2009) 0.03
    0.027659154 = product of:
      0.110636614 = sum of:
        0.110636614 = weight(_text_:engines in 2810) [ClassicSimilarity], result of:
          0.110636614 = score(doc=2810,freq=6.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.4861493 = fieldWeight in 2810, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2810)
      0.25 = coord(1/4)
    
    Abstract
     Some Online Public Access Catalogues offer a ranking component. However, ranking there is merely text-based and is doomed to fail due to the limited text in bibliographic data. The main assumption of the talk is that we are in a situation where the appropriate ranking factors for OPACs should be defined, while the implementation is no major problem. We must define what we want, and not focus so much on the technical work. Some deep thinking is necessary on the "perfect results set" and how we can achieve it through ranking. The talk presents a set of potential ranking factors and clustering possibilities for further discussion. A look at commercial Web search engines could provide us with ideas on how ranking can be improved with additional factors. Search engines are way beyond pure text-based ranking and apply ranking factors in groups such as popularity, freshness, personalisation, etc. The talk describes the main factors used in search engines and how derivatives of these could be used for libraries' purposes. The goal of ranking is to provide the user with the best-suited results at the top of the results list. How can this goal be achieved with the library catalogue, and also with regard to the library's different collections and databases? The assumption is that ranking of such materials is a complex problem that is as yet nowhere near solved. Libraries should focus on ranking to improve the user experience.
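
     A combined ranking function of the kind suggested could look like the sketch below. The factor groups follow the talk (text match, popularity, freshness); the concrete signals and weights are hypothetical, chosen only to illustrate the idea.

       from math import exp

       # Hypothetical weights for the factor groups.
       WEIGHTS = {"text": 0.5, "popularity": 0.3, "freshness": 0.2}

       def rank_score(text_score, loan_count, max_loans, age_years):
           popularity = loan_count / max_loans if max_loans else 0.0  # e.g. circulation data
           freshness = exp(-0.1 * age_years)                          # newer items score higher
           return (WEIGHTS["text"] * text_score
                   + WEIGHTS["popularity"] * popularity
                   + WEIGHTS["freshness"] * freshness)

       # A recent, heavily borrowed title can outrank a slightly better text match:
       print(rank_score(0.8, 5, 200, 30))   # ~0.42: old, rarely loaned
       print(rank_score(0.7, 150, 200, 1))  # ~0.76: recent, popular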
  10. Lewandowski, D.: Evaluating the retrieval effectiveness of web search engines using a representative query sample (2015) 0.03
    0.027100323 = product of:
      0.10840129 = sum of:
        0.10840129 = weight(_text_:engines in 2157) [ClassicSimilarity], result of:
          0.10840129 = score(doc=2157,freq=4.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.47632706 = fieldWeight in 2157, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.046875 = fieldNorm(doc=2157)
      0.25 = coord(1/4)
    
    Abstract
    Search engine retrieval effectiveness studies are usually small scale, using only limited query samples. Furthermore, queries are selected by the researchers. We address these issues by taking a random representative sample of 1,000 informational and 1,000 navigational queries from a major German search engine and comparing Google's and Bing's results based on this sample. Jurors were found through crowdsourcing, and data were collected using specialized software, the Relevance Assessment Tool (RAT). We found that although Google outperforms Bing in both query types, the difference in the performance for informational queries was rather low. However, for navigational queries, Google found the correct answer in 95.3% of cases, whereas Bing only found the correct answer 76.6% of the time. We conclude that search engine performance on navigational queries is of great importance, because users in this case can clearly identify queries that have returned correct results. So, performance on this query type may contribute to explaining user satisfaction with search engines.
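
     As a back-of-the-envelope check on the reported gap for navigational queries (95.3% vs. 76.6%), a two-proportion z-test with the stated sample of 1,000 navigational queries per engine:

       from math import sqrt

       n = 1000
       p1, p2 = 0.953, 0.766  # Google vs. Bing success rates
       p = (p1 + p2) / 2      # pooled proportion (equal sample sizes)
       z = (p1 - p2) / sqrt(p * (1 - p) * (2 / n))
       print(z)  # ~12: far larger than sampling noise could explain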
  11. Lewandowski, D.: Mit welchen Kennzahlen lässt sich die Qualität von Suchmaschinen messen? [What metrics can be used to measure the quality of search engines?] (2007) 0.02
    0.01916282 = product of:
      0.07665128 = sum of:
        0.07665128 = weight(_text_:engines in 378) [ClassicSimilarity], result of:
          0.07665128 = score(doc=378,freq=2.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.33681408 = fieldWeight in 378, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.046875 = fieldNorm(doc=378)
      0.25 = coord(1/4)
    
    Source
     Macht der Suchmaschinen: The Power of Search Engines. Eds.: Machill, M. and M. Beiler
  12. Schaer, P.; Mayr, P.; Sünkler, S.; Lewandowski, D.: How relevant is the long tail? : a relevance assessment study on million short (2016) 0.02
    0.015969018 = product of:
      0.06387607 = sum of:
        0.06387607 = weight(_text_:engines in 3144) [ClassicSimilarity], result of:
          0.06387607 = score(doc=3144,freq=2.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.2806784 = fieldWeight in 3144, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3144)
      0.25 = coord(1/4)
    
    Abstract
     Users of web search engines are known to mostly focus on the top-ranked results of the search engine result page. While many studies support this well-known information-seeking pattern, only a few studies concentrate on the question of what users are missing by neglecting lower-ranked results. To learn more about the relevance distributions in the so-called long tail, we conducted a relevance assessment study with the Million Short long-tail web search engine. While we see a clear difference in content between the head and the tail of the search engine result list, we see no statistically significant differences in the binary relevance judgments and only weakly significant differences when using graded relevance. The tail contains different but still valuable results. We argue that the long tail can be a rich source for the diversification of web search engine result lists, but more evaluation is needed to clearly describe the differences.
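
     The head-versus-tail comparison boils down to testing two samples of relevance judgments against each other. A minimal sketch with invented graded judgments (the study's data are not reproduced here), using a Mann-Whitney U test as one common choice for ordinal ratings:

       from scipy.stats import mannwhitneyu

       # Hypothetical graded relevance judgments (0-4) for top-ranked ("head")
       # and long-tail ("tail") results.
       head = [4, 4, 3, 3, 3, 2, 4, 3, 2, 3]
       tail = [3, 2, 3, 2, 2, 3, 1, 2, 3, 2]

       stat, p_value = mannwhitneyu(head, tail, alternative="two-sided")
       print(f"U={stat}, p={p_value:.3f}")  # a small p indicates a rank difference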
  13. Lewandowski, D.: Suchmaschinen - ein Thema für die Informationswissenschaft [Search engines - a topic for information science] (2005) 0.01
    0.007974556 = product of:
      0.031898223 = sum of:
        0.031898223 = product of:
          0.063796446 = sum of:
            0.063796446 = weight(_text_:programming in 3183) [ClassicSimilarity], result of:
              0.063796446 = score(doc=3183,freq=2.0), product of:
                0.29361802 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.04479146 = queryNorm
                0.21727702 = fieldWeight in 3183, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3183)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    Neben den "harten Faktoren" der Oualität der Suchergebnisse spielt auch die Gestaltung von Suchinterfaces eine wichtige Rolle für die Akzeptanz bzw. Nicht-Akzeptanz von Suchwerkzeugen. Die Untersuchung von Jens Fauldrath und Arne Kunisch vergleicht die Interfaces der wichtigsten in Deutschland vertretenen Suchmaschinen und Portale und gibt Empfehlungen für deren Gestaltung und Funktionsumfang. Neue Wege in der Gestaltung von Ergebnismengen beschreibt der Beitrag von Fridolin Wild. Anhand des Vergleichs von bestehenden Visualisierungslösungen werden best practices für die Ergebnispräsentation herausgearbeitet. Für die Zukunft rechnet Wild mit einem zunehmenden Einsatz solcher Systeme, da er in ihnen die Möglichkeit sieht, nicht nur die Benutzeroberflächen zu verändern, sondern auch das Retrivalverfahren an sich zu verbessern. Die Internationalität des Web hat es mit sich gebracht, dass Suchmaschinen in der Regel für den weltweiten Markt entwickelt werden. Wie sie mit einzelnen Sprachen umgehen, ist bisher weitgehend un geklärt. Eine Untersuchung über den Umgang von Suchmaschinen mit den Eigenheiten der deutschen Sprache legen Esther Guggenheim und Judith Bar-Ilan vor. Sie kommen zu dem Schluss, dass die populären Suchmaschinen zunehmend besser mit deutschsprachigen Anfragen umgehen können, sehen allerdings weitere Verbesserungsmöglichkeiten. Dem noch relativ neuen Forschungsgebiet der Webometrie ist der Beitrag von Philipp Mayr und Fabio Tosques zuzuordnen. Webometrie wendet die aus der Bibliometrie bzw. Informetrie bekannten Verfahren auf den Web-Korpus an. Im vorliegenden Beitrag wird das Application Programming Interface (API) von Google auf seine Tauglichkeit für webometrische Untersuchungen getestet. Die Autoren kommen zu dem Schluss, dass kleinere Einschränkungen und Probleme nicht die zahlreichen Möglichkeiten, die das API bietet, mindern. Ein Beispiel für den Einsatz von Suchmaschinen-Technologie in der Praxis beschreibt schließlich der letzte Beitrag des Hefts. Friedrich Summann und Sebastian Wolf stellen eine Suchmaschine für wissenschaftliche Inhalte vor, die die Oualität von Fachdatenbanken mit der Benutzerfreundlichkeit von Web-Suchmaschinen verbinden soll. Im Aufsatz werden die eingesetzten Technologien und die möglichen Einsatzgebiete beschrieben. Der Gastherausgeber wünscht sich von diesem Themenheft, dass es Anregungen für weitere Forschungs- und Anwendungsprojekte geben möge, sei dies an Hochschulen oder in Unternehmen."
  14. Lewandowski, D.: Alles nur noch Google? : Entwicklungen im Bereich der WWW-Suchmaschinen [Nothing but Google? : developments in the field of WWW search engines] (2002) 0.01
    0.0060686246 = product of:
      0.024274498 = sum of:
        0.024274498 = product of:
          0.048548996 = sum of:
            0.048548996 = weight(_text_:22 in 997) [ClassicSimilarity], result of:
              0.048548996 = score(doc=997,freq=2.0), product of:
                0.15685207 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04479146 = queryNorm
                0.30952093 = fieldWeight in 997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=997)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    29. 9.2002 18:49:22
  15. Lewandowski, D.: Abfragesprachen und erweiterte Funktionen von WWW-Suchmaschinen [Query languages and extended functions of WWW search engines] (2004) 0.01
    0.0060686246 = product of:
      0.024274498 = sum of:
        0.024274498 = product of:
          0.048548996 = sum of:
            0.048548996 = weight(_text_:22 in 2314) [ClassicSimilarity], result of:
              0.048548996 = score(doc=2314,freq=2.0), product of:
                0.15685207 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04479146 = queryNorm
                0.30952093 = fieldWeight in 2314, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2314)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    28.11.2004 13:11:22
  16. Lewandowski, D.: Query understanding (2011) 0.01
    0.0060686246 = product of:
      0.024274498 = sum of:
        0.024274498 = product of:
          0.048548996 = sum of:
            0.048548996 = weight(_text_:22 in 344) [ClassicSimilarity], result of:
              0.048548996 = score(doc=344,freq=2.0), product of:
                0.15685207 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04479146 = queryNorm
                0.30952093 = fieldWeight in 344, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=344)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    18. 9.2018 18:22:18
  17. Lewandowski, D.: ¬Die Macht der Suchmaschinen und ihr Einfluss auf unsere Entscheidungen [The power of search engines and their influence on our decisions] (2014) 0.00
    0.0045514684 = product of:
      0.018205874 = sum of:
        0.018205874 = product of:
          0.036411747 = sum of:
            0.036411747 = weight(_text_:22 in 1491) [ClassicSimilarity], result of:
              0.036411747 = score(doc=1491,freq=2.0), product of:
                0.15685207 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04479146 = queryNorm
                0.23214069 = fieldWeight in 1491, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1491)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 9.2014 18:54:11