Search (56 results, page 1 of 3)

  • author_ss:"Lewandowski, D."
  1. Lewandowski, D.: Abfragesprachen und erweiterte Funktionen von WWW-Suchmaschinen (2004) 0.03
    0.027334882 = product of:
      0.0683372 = sum of:
        0.005448922 = weight(_text_:a in 2314) [ClassicSimilarity], result of:
          0.005448922 = score(doc=2314,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10191591 = fieldWeight in 2314, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=2314)
        0.06288828 = sum of:
          0.012630116 = weight(_text_:information in 2314) [ClassicSimilarity], result of:
            0.012630116 = score(doc=2314,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.1551638 = fieldWeight in 2314, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0625 = fieldNorm(doc=2314)
          0.050258167 = weight(_text_:22 in 2314) [ClassicSimilarity], result of:
            0.050258167 = score(doc=2314,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.30952093 = fieldWeight in 2314, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2314)
      0.4 = coord(2/5)
    
    Date
    28.11.2004 13:11:22
    Source
    Information - Wissenschaft und Praxis. 55(2004) H.2, S.97-102
    Type
    a
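  A minimal sketch (not part of the original result page) that recomputes the score breakdown shown for result 1, assuming the formulas of Lucene's classic TF-IDF similarity suggested by the [ClassicSimilarity] labels: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq + 1)), fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm, and a coord factor for the fraction of matching query clauses. All names in the snippet are illustrative.
    import math

    QUERY_NORM = 0.046368346   # queryNorm shown in the explain tree above
    MAX_DOCS = 44218           # maxDocs shown in the explain tree above

    def idf(doc_freq):
        # Classic TF-IDF idf, assumed from the figures above: 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

    def term_score(freq, doc_freq, field_norm):
        # fieldWeight = sqrt(freq) * idf * fieldNorm; queryWeight = idf * queryNorm
        field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm
        query_weight = idf(doc_freq) * QUERY_NORM
        return query_weight * field_weight

    s_a = term_score(2.0, 37942, 0.0625)     # ~0.005449 for "a"
    s_info = term_score(2.0, 20772, 0.0625)  # ~0.012630 for "information"
    s_22 = term_score(2.0, 3622, 0.0625)     # ~0.050258 for "22"
    total = (s_a + s_info + s_22) * 0.4      # coord(2/5) -> ~0.027335
    print(round(total, 9))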
  2. Lewandowski, D.; Spree, U.: Ranking of Wikipedia articles in search engines revisited : fair ranking for reasonable quality? (2011) 0.02
    0.020882932 = product of:
      0.05220733 = sum of:
        0.009632425 = weight(_text_:a in 444) [ClassicSimilarity], result of:
          0.009632425 = score(doc=444,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18016359 = fieldWeight in 444, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=444)
        0.042574905 = sum of:
          0.011163551 = weight(_text_:information in 444) [ClassicSimilarity], result of:
            0.011163551 = score(doc=444,freq=4.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.13714671 = fieldWeight in 444, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0390625 = fieldNorm(doc=444)
          0.031411353 = weight(_text_:22 in 444) [ClassicSimilarity], result of:
            0.031411353 = score(doc=444,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.19345059 = fieldWeight in 444, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=444)
      0.4 = coord(2/5)
    
    Abstract
    This paper aims to review the fiercely discussed question of whether the ranking of Wikipedia articles in search engines is justified by the quality of the articles. After an overview of current research on information quality in Wikipedia, a summary of the extended discussion on the quality of encyclopedic entries in general is given. On this basis, a heuristic method for evaluating Wikipedia entries is developed and applied to Wikipedia articles that scored highly in a search engine retrieval effectiveness test and compared with the relevance judgment of jurors. In all search engines tested, Wikipedia results are unanimously judged better by the jurors than other results on the corresponding results position. Relevance judgments often roughly correspond with the results from the heuristic evaluation. Cases in which high relevance judgments are not in accordance with the comparatively low score from the heuristic evaluation are interpreted as an indicator of a high degree of trust in Wikipedia. One of the systemic shortcomings of Wikipedia lies in its necessarily incoherent user model. A further tuning of the suggested criteria catalog, for instance, the different weighing of the supplied criteria, could serve as a starting point for a user model differentiated evaluation of Wikipedia articles. Approved methods of quality evaluation of reference works are applied to Wikipedia articles and integrated with the question of search engine evaluation.
    Date
    30. 9.2012 19:27:22
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.1, S.117-132
    Type
    a
  3. Lewandowski, D.; Sünkler, S.: What does Google recommend when you want to compare insurance offerings? (2019) 0.02
    0.020634085 = product of:
      0.051585212 = sum of:
        0.009010308 = weight(_text_:a in 5288) [ClassicSimilarity], result of:
          0.009010308 = score(doc=5288,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1685276 = fieldWeight in 5288, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5288)
        0.042574905 = sum of:
          0.011163551 = weight(_text_:information in 5288) [ClassicSimilarity], result of:
            0.011163551 = score(doc=5288,freq=4.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.13714671 = fieldWeight in 5288, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5288)
          0.031411353 = weight(_text_:22 in 5288) [ClassicSimilarity], result of:
            0.031411353 = score(doc=5288,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.19345059 = fieldWeight in 5288, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5288)
      0.4 = coord(2/5)
    
    Abstract
    Purpose: The purpose of this paper is to describe a new method to improve the analysis of search engine results by considering the provider level as well as the domain level. This approach is tested by conducting a study using queries on the topic of insurance comparisons.
    Design/methodology/approach: The authors conducted an empirical study that analyses the results of search queries aimed at comparing insurance companies. The authors used a self-developed software system that automatically queries commercial search engines and automatically extracts the content of the returned result pages for further data analysis. The data analysis was carried out using the KNIME Analytics Platform.
    Findings: Google's top search results are served by only a few providers that frequently appear in these results. The authors show that some providers operate several domains on the same topic and that these domains appear for the same queries in the result lists.
    Research limitations/implications: The authors demonstrate the feasibility of this approach and draw conclusions for further investigations from the empirical study. However, the study is a limited use case based on a limited number of search queries.
    Originality/value: The proposed method allows large-scale analysis of the composition of the top results from commercial search engines. It allows using valid empirical data to determine what users actually see on the search engine result pages.
    Date
    20. 1.2015 18:30:22
    Footnote
    Contribution to a special issue: Information Science in the German-speaking Countries
    Source
    Aslib journal of information management. 71(2019) no.3, S.310-324
    Type
    a
  4. Lewandowski, D.: Die Macht der Suchmaschinen und ihr Einfluss auf unsere Entscheidungen (2014) 0.02
    0.020501161 = product of:
      0.0512529 = sum of:
        0.004086692 = weight(_text_:a in 1491) [ClassicSimilarity], result of:
          0.004086692 = score(doc=1491,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.07643694 = fieldWeight in 1491, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1491)
        0.04716621 = sum of:
          0.009472587 = weight(_text_:information in 1491) [ClassicSimilarity], result of:
            0.009472587 = score(doc=1491,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.116372846 = fieldWeight in 1491, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046875 = fieldNorm(doc=1491)
          0.037693623 = weight(_text_:22 in 1491) [ClassicSimilarity], result of:
            0.037693623 = score(doc=1491,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.23214069 = fieldWeight in 1491, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1491)
      0.4 = coord(2/5)
    
    Date
    22. 9.2014 18:54:11
    Source
    Information - Wissenschaft und Praxis. 65(2014) H.4/5, S.231-238
    Type
    a
  5. Lewandowski, D.: Alles nur noch Google? : Entwicklungen im Bereich der WWW-Suchmaschinen (2002) 0.01
    0.012231203 = product of:
      0.030578006 = sum of:
        0.005448922 = weight(_text_:a in 997) [ClassicSimilarity], result of:
          0.005448922 = score(doc=997,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10191591 = fieldWeight in 997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=997)
        0.025129084 = product of:
          0.050258167 = sum of:
            0.050258167 = weight(_text_:22 in 997) [ClassicSimilarity], result of:
              0.050258167 = score(doc=997,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.30952093 = fieldWeight in 997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=997)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    29. 9.2002 18:49:22
    Type
    a
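  A short companion sketch (again illustrative, not page content) for the nested breakdown in result 5: only the "22" clause of an inner two-clause group matched, so coord(1/2) halves that subgroup before the outer coord(2/5) scales the document score.
    w_a, w_22 = 0.005448922, 0.050258167  # term weights from the explain tree above
    inner = w_22 * 0.5                    # coord(1/2): one of two inner clauses matched
    total = (w_a + inner) * 0.4           # coord(2/5) -> ~0.012231203
    print(round(total, 9))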
  6. Lewandowski, D.: Query understanding (2011) 0.01
    0.012231203 = product of:
      0.030578006 = sum of:
        0.005448922 = weight(_text_:a in 344) [ClassicSimilarity], result of:
          0.005448922 = score(doc=344,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10191591 = fieldWeight in 344, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=344)
        0.025129084 = product of:
          0.050258167 = sum of:
            0.050258167 = weight(_text_:22 in 344) [ClassicSimilarity], result of:
              0.050258167 = score(doc=344,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.30952093 = fieldWeight in 344, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=344)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    18. 9.2018 18:22:18
    Type
    a
  7. Lewandowski, D.; Haustein, S.: What does the German-language information science community cite? (2015) 0.01
    0.008193462 = product of:
      0.020483656 = sum of:
        0.0068111527 = weight(_text_:a in 2987) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=2987,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 2987, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=2987)
        0.013672504 = product of:
          0.027345007 = sum of:
            0.027345007 = weight(_text_:information in 2987) [ClassicSimilarity], result of:
              0.027345007 = score(doc=2987,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.3359395 = fieldWeight in 2987, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2987)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Re:inventing information science in the networked society: Proceedings of the 14th International Symposium on Information Science, Zadar/Croatia, 19th-21st May 2015. Eds.: F. Pehar, C. Schloegl u. C. Wolff
    Type
    a
  8. Sundin, O.; Lewandowski, D.; Haider, J.: Whose relevance? : Web search engines as multisided relevance machines (2022) 0.01
    0.008092757 = product of:
      0.020231893 = sum of:
        0.010661141 = weight(_text_:a in 542) [ClassicSimilarity], result of:
          0.010661141 = score(doc=542,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.19940455 = fieldWeight in 542, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=542)
        0.009570752 = product of:
          0.019141505 = sum of:
            0.019141505 = weight(_text_:information in 542) [ClassicSimilarity], result of:
              0.019141505 = score(doc=542,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23515764 = fieldWeight in 542, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=542)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This opinion piece takes Google's response to the so-called COVID-19 infodemic as a starting point to argue for the need to consider societal relevance as a complement to other types of relevance. The authors maintain that if information science wants to be a discipline at the forefront of research on relevance, search engines, and their use, then the information science research community needs to address itself to the challenges and conditions that commercial search engines create. The article concludes with a tentative list of related research topics.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.5, S.637-642
    Type
    a
  9. Behnert, C.; Lewandowski, D.: A framework for designing retrieval effectiveness studies of library information systems using human relevance assessments (2017) 0.01
    0.008069544 = product of:
      0.020173859 = sum of:
        0.009010308 = weight(_text_:a in 3700) [ClassicSimilarity], result of:
          0.009010308 = score(doc=3700,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1685276 = fieldWeight in 3700, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3700)
        0.011163551 = product of:
          0.022327103 = sum of:
            0.022327103 = weight(_text_:information in 3700) [ClassicSimilarity], result of:
              0.022327103 = score(doc=3700,freq=16.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.27429342 = fieldWeight in 3700, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3700)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Purpose: This paper demonstrates how to apply traditional information retrieval evaluation methods based on standards from the Text REtrieval Conference (TREC) and web search evaluation to all types of modern library information systems including online public access catalogs, discovery systems, and digital libraries that provide web search features to gather information from heterogeneous sources.
    Design/methodology/approach: We apply conventional procedures from information retrieval evaluation to the library information system context considering the specific characteristics of modern library materials.
    Findings: We introduce a framework consisting of five parts: (1) search queries, (2) search results, (3) assessors, (4) testing, and (5) data analysis. We show how to deal with comparability problems resulting from diverse document types, e.g., electronic articles vs. printed monographs, and what issues need to be considered for retrieval tests in the library context.
    Practical implications: The framework can be used as a guideline for conducting retrieval effectiveness studies in the library context.
    Originality/value: Although a considerable amount of research has been done on information retrieval evaluation, and standards for conducting retrieval effectiveness studies do exist, to our knowledge this is the first attempt to provide a systematic framework for evaluating the retrieval effectiveness of twenty-first-century library information systems. We demonstrate which issues must be considered and what decisions must be made by researchers prior to a retrieval test.
    Type
    a
  10. Lewandowski, D.; Womser-Hacker, C.: Information seeking behaviour (2023) 0.01
    0.0073211575 = product of:
      0.018302893 = sum of:
        0.004767807 = weight(_text_:a in 816) [ClassicSimilarity], result of:
          0.004767807 = score(doc=816,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.089176424 = fieldWeight in 816, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=816)
        0.013535086 = product of:
          0.027070172 = sum of:
            0.027070172 = weight(_text_:information in 816) [ClassicSimilarity], result of:
              0.027070172 = score(doc=816,freq=12.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.3325631 = fieldWeight in 816, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=816)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The large number of publications shows that information seeking behaviour (ISB) is regarded as a relevant topic in information science research. ISB is understood as a subcategory of information behaviour (IB), which encompasses all human behaviour relating to knowledge and information, including, for example, information avoidance or passive information behaviour. ISB, by contrast, was initially mostly understood as a conscious process of obtaining information in response to an identified knowledge gap. Information seeking is regarded as an everyday activity that usually occurs when an informationally underdetermined action is to be carried out.
    Type
    a
  11. Lewandowski, D.; Spree, U.: Die Forschungsgruppe Search Studies an der HAW Hamburg (2019) 0.01
    0.007058388 = product of:
      0.01764597 = sum of:
        0.008173384 = weight(_text_:a in 5021) [ClassicSimilarity], result of:
          0.008173384 = score(doc=5021,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 5021, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=5021)
        0.009472587 = product of:
          0.018945174 = sum of:
            0.018945174 = weight(_text_:information in 5021) [ClassicSimilarity], result of:
              0.018945174 = score(doc=5021,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23274569 = fieldWeight in 5021, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5021)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Information - Wissenschaft und Praxis. 70(2019) H.1, S.1-2
    Type
    a
  12. Lewandowski, D.: A framework for evaluating the retrieval effectiveness of search engines (2012) 0.01
    0.006334501 = product of:
      0.015836252 = sum of:
        0.009138121 = weight(_text_:a in 106) [ClassicSimilarity], result of:
          0.009138121 = score(doc=106,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1709182 = fieldWeight in 106, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=106)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 106) [ClassicSimilarity], result of:
              0.013396261 = score(doc=106,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 106, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=106)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This chapter presents a theoretical framework for evaluating next generation search engines. The author focuses on search engines whose results presentation is enriched with additional information and does not merely present the usual list of "10 blue links," that is, of ten links to results, accompanied by a short description. While Web search is used as an example here, the framework can easily be applied to search engines in any other area. The framework not only addresses the results presentation, but also takes into account an extension of the general design of retrieval effectiveness tests. The chapter examines the ways in which this design might influence the results of such studies and how a reliable test is best designed.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis, u.a
    Type
    a
  13. Lewandowski, D.: Suchmaschinen (2013) 0.01
    0.00588199 = product of:
      0.014704974 = sum of:
        0.0068111527 = weight(_text_:a in 731) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=731,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 731, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=731)
        0.007893822 = product of:
          0.015787644 = sum of:
            0.015787644 = weight(_text_:information in 731) [ClassicSimilarity], result of:
              0.015787644 = score(doc=731,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.19395474 = fieldWeight in 731, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.078125 = fieldNorm(doc=731)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Grundlagen der praktischen Information und Dokumentation. Handbuch zur Einführung in die Informationswissenschaft und -praxis. 6., völlig neu gefaßte Ausgabe. Hrsg. von R. Kuhlen, W. Semar u. D. Strauch. Begründet von Klaus Laisiepen, Ernst Lutterbeck, Karl-Heinrich Meyer-Uhlenried
    Type
    a
  14. Lewandowski, D.; Drechsler, J.; Mach, S. von: Deriving query intents from web search engine queries (2012) 0.01
    0.005182888 = product of:
      0.012957219 = sum of:
        0.009010308 = weight(_text_:a in 385) [ClassicSimilarity], result of:
          0.009010308 = score(doc=385,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1685276 = fieldWeight in 385, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=385)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 385) [ClassicSimilarity], result of:
              0.007893822 = score(doc=385,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 385, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=385)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The purpose of this article is to test the reliability of query intents derived from queries, either by the user who entered the query or by another juror. We report the findings of three studies. First, we conducted a large-scale classification study (~50,000 queries) using a crowdsourcing approach. Next, we used clickthrough data from a search engine log and validated the judgments given by the jurors from the crowdsourcing study. Finally, we conducted an online survey on a commercial search engine's portal. Because we used the same queries for all three studies, we also were able to compare the results and the effectiveness of the different approaches. We found that neither the crowdsourcing approach, using jurors who classified queries originating from other users, nor the questionnaire approach, using searchers who were asked about their own query that they just entered into a Web search engine, led to satisfying results. This leads us to conclude that there was little understanding of the classification tasks, even though both groups of jurors were given detailed instructions. Although we used manual classification, our research also has important implications for automatic classification. We must question the success of approaches using automatic classification and comparing its performance to a baseline from human jurors.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.9, S.1773-1788
    Type
    a
  15. Lewandowski, D.: Evaluating the retrieval effectiveness of web search engines using a representative query sample (2015) 0.01
    0.0051638708 = product of:
      0.012909677 = sum of:
        0.008173384 = weight(_text_:a in 2157) [ClassicSimilarity], result of:
          0.008173384 = score(doc=2157,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 2157, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2157)
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 2157) [ClassicSimilarity], result of:
              0.009472587 = score(doc=2157,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 2157, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2157)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Search engine retrieval effectiveness studies are usually small scale, using only limited query samples. Furthermore, queries are selected by the researchers. We address these issues by taking a random representative sample of 1,000 informational and 1,000 navigational queries from a major German search engine and comparing Google's and Bing's results based on this sample. Jurors were found through crowdsourcing, and data were collected using specialized software, the Relevance Assessment Tool (RAT). We found that although Google outperforms Bing in both query types, the difference in the performance for informational queries was rather low. However, for navigational queries, Google found the correct answer in 95.3% of cases, whereas Bing only found the correct answer 76.6% of the time. We conclude that search engine performance on navigational queries is of great importance, because users in this case can clearly identify queries that have returned correct results. So, performance on this query type may contribute to explaining user satisfaction with search engines.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.9, S.1763-1775
    Type
    a
  16. Lewandowski, D.; Sünkler, S.; Kerkmann, F.: Are ads on Google search engine results pages labeled clearly enough? : the influence of knowledge on search ads on users' selection behaviour (2017) 0.01
    0.005093954 = product of:
      0.012734884 = sum of:
        0.005898632 = weight(_text_:a in 3567) [ClassicSimilarity], result of:
          0.005898632 = score(doc=3567,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.11032722 = fieldWeight in 3567, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3567)
        0.006836252 = product of:
          0.013672504 = sum of:
            0.013672504 = weight(_text_:information in 3567) [ClassicSimilarity], result of:
              0.013672504 = score(doc=3567,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16796975 = fieldWeight in 3567, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3567)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    In an online experiment using a representative sample of the German online population (n = 1,000), we compare users' selection behaviour on two versions of the same Google search engine results page (SERP), one showing advertisements and organic results, the other showing organic results only. Selection behaviour is analyzed in relation to users' knowledge on Google's business model, on SERP design, and on these users' actual performance in marking advertisements on SERPs correctly. We find that users who were not able to mark ads correctly selected ads significantly more often. This leads to the conclusion that ads need to be labeled more clearly, and that there is a need for more information literacy in search engine users.
    Source
    Everything changes, everything stays the same? - Understanding information spaces : Proceedings of the 15th International Symposium of Information Science (ISI 2017), Berlin/Germany, 13th - 15th March 2017. Eds.: M. Gäde, V. Trkulja u. V. Petras
    Type
    a
  17. Lewandowski, D.; Krewinkel, A.; Gleissner, M.; Osterode, D.; Tolg, B.; Holle, M.; Sünkler, S.: Entwicklung und Anwendung einer Software zur automatisierten Kontrolle des Lebensmittelmarktes im Internet mit informationswissenschaftlichen Methoden (2019) 0.00
    0.0049910345 = product of:
      0.012477586 = sum of:
        0.005779455 = weight(_text_:a in 5025) [ClassicSimilarity], result of:
          0.005779455 = score(doc=5025,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10809815 = fieldWeight in 5025, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5025)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 5025) [ClassicSimilarity], result of:
              0.013396261 = score(doc=5025,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 5025, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5025)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    In this article we present the implementation and results of an interdisciplinary research project on the automated monitoring of the food market on the internet. Expertise from the disciplines of food science, law, information science, and computer science was used to develop a detailed concept and a software prototype for searching the internet for product offers that violate food law. The project shows how such a use case benefits from the methods of information retrieval evaluation, and how flexible software that can also be used for a wide range of other problems can be programmed with relatively little effort. The results of the project show how the complex workflows of a public authority can be supported effectively and efficiently with the help of retrieval-test methods and common machine-learning techniques.
    Source
    Information - Wissenschaft und Praxis. 70(2019) H.1, S.33-45
    Type
    a
  18. Lewandowski, D.; Kerkmann, F.; Rümmele, S.; Sünkler, S.: An empirical investigation on search engine ad disclosure (2018) 0.00
    0.0049073496 = product of:
      0.012268374 = sum of:
        0.0067426977 = weight(_text_:a in 4115) [ClassicSimilarity], result of:
          0.0067426977 = score(doc=4115,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12611452 = fieldWeight in 4115, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4115)
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 4115) [ClassicSimilarity], result of:
              0.011051352 = score(doc=4115,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 4115, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4115)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This representative study of German search engine users (N = 1,000) focuses on the ability of users to distinguish between organic results and advertisements on Google results pages. We combine questions about Google's business with task-based studies in which users were asked to distinguish between ads and organic results in screenshots of results pages. We find that only a small percentage of users can reliably distinguish between ads and organic results, and that user knowledge of Google's business model is very limited. We conclude that ads are insufficiently labelled as such, and that many users may click on ads assuming that they are selecting organic results.
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.3, S.420-437
    Type
    a
  19. Lewandowski, D.: Web Information Retrieval (2005) 0.00
    0.0048788195 = product of:
      0.012197048 = sum of:
        0.002724461 = weight(_text_:a in 4028) [ClassicSimilarity], result of:
          0.002724461 = score(doc=4028,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.050957955 = fieldWeight in 4028, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=4028)
        0.009472587 = product of:
          0.018945174 = sum of:
            0.018945174 = weight(_text_:information in 4028) [ClassicSimilarity], result of:
              0.018945174 = score(doc=4028,freq=18.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23274568 = fieldWeight in 4028, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4028)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Web information retrieval has emerged as a research area of its own. Beyond the questions addressed in classical information retrieval, the specific characteristics of the web raise new and additional research questions. The differences between information retrieval and web information retrieval are discussed. The second part of the article gives an overview of the research literature of the past two years. This article surveys the state of research in web information retrieval. The first part explains the particular problems that arise in this area by contrasting it with "classical" information retrieval. The remainder of the text discusses the most important literature on the topic published in recent years, with an emphasis on German-language literature where it exists. The focus is on literature from 2003 and 2004. On the one hand, the field is developing quickly, so that many older studies now have mainly historical or methodological value; on the other hand, comprehensive older review articles exist (see in particular Rasmussen 2003). A review of the literature already makes clear, however, that for some topics little or no German-language literature is available. Unfortunately, this is not only because authors from the German-speaking countries publish their results in English; rather, it becomes apparent that little search engine research takes place in these countries. Studies on language-specific problems of web search engines are particularly lacking. A further problem of research in the search engine field is that a large part of it takes place within companies, which are reluctant to publish their results extensively for fear that competitors could profit from such publications. Comparative figures on individual search engines, for instance, are therefore often found only in talks or presentations by company representatives (e.g. Singhal 2004; Dean 2004). The main focus of this article is on the question of the extent to which search engines are able to index the content available on the web, which methods they use to do so, and whether and how they achieve their goals. Questions of efficiency in covering the web and of the scalability of search engines are explicitly excluded. Put differently, this overview is oriented towards classical information science questions and largely leaves aside the questions discussed primarily in computer science.
    A regular overview of new US patents and US patent applications in the field of information retrieval is provided by the news site Resourceshelf (www.resourceshelf.com).
    Content
    Includes a table contrasting web retrieval with "classical" information retrieval.
    Source
    Information - Wissenschaft und Praxis. 56(2005) H.1, S.5-12
    Type
    a
  20. Lewandowski, D.: Wie können sich Bibliotheken gegenüber Wissenschaftssuchmaschinen positionieren? (2007) 0.00
    0.0047055925 = product of:
      0.011763981 = sum of:
        0.005448922 = weight(_text_:a in 5112) [ClassicSimilarity], result of:
          0.005448922 = score(doc=5112,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10191591 = fieldWeight in 5112, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=5112)
        0.006315058 = product of:
          0.012630116 = sum of:
            0.012630116 = weight(_text_:information in 5112) [ClassicSimilarity], result of:
              0.012630116 = score(doc=5112,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1551638 = fieldWeight in 5112, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5112)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Wa(h)re Information: 29. Österreichischer Bibliothekartag Bregenz, 19.-23.9.2006. Hrsg.: Harald Weigel
    Type
    a