Search (14 results, page 1 of 1)

  • language_ss:"e"
  • author_ss:"Lewandowski, D."
  1. Lewandowski, D.; Haustein, S.: What does the German-language information science community cite? (2015) 0.01
    0.0051520485 = product of:
      0.020608194 = sum of:
        0.020608194 = weight(_text_:information in 2987) [ClassicSimilarity], result of:
          0.020608194 = score(doc=2987,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.3359395 = fieldWeight in 2987, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=2987)
      0.25 = coord(1/4)
    
    Source
    Re:inventing information science in the networked society: Proceedings of the 14th International Symposium on Information Science, Zadar/Croatia, 19th-21st May 2015. Eds.: F. Pehar, C. Schloegl and C. Wolff
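    The relevance figure under each result is a Lucene "explain" tree for the ClassicSimilarity (TF-IDF) model: the per-term score is queryWeight × fieldWeight, scaled by the coord factor. The following is a minimal sketch, assuming a standard Lucene ClassicSimilarity setup; the variable names are ours, and the constants are copied from the explain output for result 1 (doc=2987):

```python
import math

# Reproduces the explain tree for result 1 (doc=2987, term "information").
# Constants are taken from the explain output above; names are illustrative.
freq = 6.0                 # termFreq of "information" in the field
doc_freq = 20772           # docFreq from the idf line
max_docs = 44218           # maxDocs from the idf line
query_norm = 0.034944877   # queryNorm
field_norm = 0.078125      # fieldNorm(doc=2987)
coord = 1 / 4              # coord(1/4): one of four query clauses matched

tf = math.sqrt(freq)                             # 2.4494898 = tf(freq=6.0)
idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 1.7554779 = idf(docFreq=20772, maxDocs=44218)

query_weight = idf * query_norm                  # 0.06134496 = queryWeight
field_weight = tf * idf * field_norm             # 0.3359395  = fieldWeight in 2987
score = coord * query_weight * field_weight      # 0.0051520485 = the listed score

print(f"{score:.10f}")  # ~0.0051520, matching the listed value up to float rounding
```

    The same calculation applies to every entry below; across this result list only freq, fieldNorm, and the document id change, while queryWeight (0.06134496) and coord (0.25) stay constant.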
  2. Behnert, C.; Lewandowski, D.: A framework for designing retrieval effectiveness studies of library information systems using human relevance assessments (2017) 0.00
    0.0042066295 = product of:
      0.016826518 = sum of:
        0.016826518 = weight(_text_:information in 3700) [ClassicSimilarity], result of:
          0.016826518 = score(doc=3700,freq=16.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27429342 = fieldWeight in 3700, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3700)
      0.25 = coord(1/4)
    
    Abstract
    Purpose: This paper demonstrates how to apply traditional information retrieval evaluation methods based on standards from the Text REtrieval Conference (TREC) and web search evaluation to all types of modern library information systems, including online public access catalogs, discovery systems, and digital libraries that provide web search features to gather information from heterogeneous sources. Design/methodology/approach: We apply conventional procedures from information retrieval evaluation to the library information system context, considering the specific characteristics of modern library materials. Findings: We introduce a framework consisting of five parts: (1) search queries, (2) search results, (3) assessors, (4) testing, and (5) data analysis. We show how to deal with comparability problems resulting from diverse document types, e.g., electronic articles vs. printed monographs, and what issues need to be considered for retrieval tests in the library context. Practical implications: The framework can be used as a guideline for conducting retrieval effectiveness studies in the library context. Originality/value: Although a considerable amount of research has been done on information retrieval evaluation, and standards for conducting retrieval effectiveness studies do exist, to our knowledge this is the first attempt to provide a systematic framework for evaluating the retrieval effectiveness of twenty-first-century library information systems. We demonstrate which issues must be considered and what decisions must be made by researchers prior to a retrieval test.
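    As a purely illustrative aside (not taken from the paper), the five framework parts named in the abstract could be organized along the following lines; the class, field names, and the precision@k measure are our own assumptions:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class RetrievalTest:
    """Hypothetical container for the five framework parts named in the abstract."""
    queries: List[str]             # (1) search queries
    results: Dict[str, List[str]]  # (2) query -> ranked list of document ids
    assessors: List[str]           # (3) human assessors providing judgments
    judgments: Dict[Tuple[str, str], int] = field(default_factory=dict)  # (4) (query, doc id) -> graded relevance

    def precision_at_k(self, query: str, k: int = 10, threshold: int = 1) -> float:
        # (5) data analysis: share of the top-k results judged relevant
        top_k = self.results.get(query, [])[:k]
        hits = sum(1 for doc in top_k if self.judgments.get((query, doc), 0) >= threshold)
        return hits / k if k else 0.0
```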
  3. Sundin, O.; Lewandowski, D.; Haider, J.: Whose relevance? : Web search engines as multisided relevance machines (2022) 0.00
    0.003606434 = product of:
      0.014425736 = sum of:
        0.014425736 = weight(_text_:information in 542) [ClassicSimilarity], result of:
          0.014425736 = score(doc=542,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23515764 = fieldWeight in 542, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=542)
      0.25 = coord(1/4)
    
    Abstract
    This opinion piece takes Google's response to the so-called COVID-19 infodemic as a starting point to argue for the need to consider societal relevance as a complement to other types of relevance. The authors maintain that if information science wants to be a discipline at the forefront of research on relevance, search engines, and their use, then the information science research community needs to address itself to the challenges and conditions that commercial search engines create. The article concludes with a tentative list of related research topics.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.5, S.637-642
  4. Lewandowski, D.; Sünkler, S.; Kerkmann, F.: Are ads on Google search engine results pages labeled clearly enough? : the influence of knowledge on search ads on users' selection behaviour (2017) 0.00
    0.0025760243 = product of:
      0.010304097 = sum of:
        0.010304097 = weight(_text_:information in 3567) [ClassicSimilarity], result of:
          0.010304097 = score(doc=3567,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16796975 = fieldWeight in 3567, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3567)
      0.25 = coord(1/4)
    
    Abstract
    In an online experiment using a representative sample of the German online population (n = 1,000), we compare users' selection behaviour on two versions of the same Google search engine results page (SERP), one showing advertisements and organic results, the other showing organic results only. Selection behaviour is analyzed in relation to users' knowledge of Google's business model, their knowledge of SERP design, and their actual performance in marking advertisements on SERPs correctly. We find that users who were not able to mark ads correctly selected ads significantly more often. This leads to the conclusion that ads need to be labeled more clearly, and that there is a need for more information literacy among search engine users.
    Source
    Everything changes, everything stays the same? - Understanding information spaces : Proceedings of the 15th International Symposium of Information Science (ISI 2017), Berlin/Germany, 13th-15th March 2017. Eds.: M. Gäde, V. Trkulja and V. Petras
  5. Lewandowski, D.: A framework for evaluating the retrieval effectiveness of search engines (2012) 0.00
    0.0025239778 = product of:
      0.010095911 = sum of:
        0.010095911 = weight(_text_:information in 106) [ClassicSimilarity], result of:
          0.010095911 = score(doc=106,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16457605 = fieldWeight in 106, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=106)
      0.25 = coord(1/4)
    
    Abstract
    This chapter presents a theoretical framework for evaluating next-generation search engines. The author focuses on search engines whose results presentation is enriched with additional information and that do not merely present the usual list of "10 blue links," that is, ten links to results, each accompanied by a short description. While Web search is used as an example here, the framework can easily be applied to search engines in any other area. The framework not only addresses the results presentation, but also takes into account an extension of the general design of retrieval effectiveness tests. The chapter examines the ways in which this design might influence the results of such studies and how a reliable test is best designed.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
  6. Lewandowski, D.; Spree, U.: Ranking of Wikipedia articles in search engines revisited : fair ranking for reasonable quality? (2011) 0.00
    0.0021033147 = product of:
      0.008413259 = sum of:
        0.008413259 = weight(_text_:information in 444) [ClassicSimilarity], result of:
          0.008413259 = score(doc=444,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13714671 = fieldWeight in 444, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=444)
      0.25 = coord(1/4)
    
    Abstract
    This paper aims to review the fiercely discussed question of whether the ranking of Wikipedia articles in search engines is justified by the quality of the articles. After an overview of current research on information quality in Wikipedia, a summary of the extended discussion on the quality of encyclopedic entries in general is given. On this basis, a heuristic method for evaluating Wikipedia entries is developed, applied to Wikipedia articles that scored highly in a search engine retrieval effectiveness test, and compared with the relevance judgments of jurors. In all search engines tested, Wikipedia results are unanimously judged better by the jurors than other results at the corresponding results position. Relevance judgments often roughly correspond with the results from the heuristic evaluation. Cases in which high relevance judgments are not in accordance with the comparatively low score from the heuristic evaluation are interpreted as an indicator of a high degree of trust in Wikipedia. One of the systemic shortcomings of Wikipedia lies in its necessarily incoherent user model. A further tuning of the suggested criteria catalog, for instance the different weighting of the supplied criteria, could serve as a starting point for a user-model-differentiated evaluation of Wikipedia articles. Approved methods of quality evaluation of reference works are applied to Wikipedia articles and integrated with the question of search engine evaluation.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.1, S.117-132
  7. Lewandowski, D.; Sünkler, S.: What does Google recommend when you want to compare insurance offerings? (2019) 0.00
    0.0021033147 = product of:
      0.008413259 = sum of:
        0.008413259 = weight(_text_:information in 5288) [ClassicSimilarity], result of:
          0.008413259 = score(doc=5288,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13714671 = fieldWeight in 5288, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5288)
      0.25 = coord(1/4)
    
    Footnote
    Contribution to a special issue: Information Science in the German-speaking Countries
    Source
    Aslib journal of information management. 71(2019) no.3, S.310-324
  8. Lewandowski, D.: Nachweis deutschsprachiger bibliotheks- und informationswissenschaftlicher Aufsätze in Google Scholar [Coverage of German-language library and information science articles in Google Scholar] (2007) 0.00
    0.0020821756 = product of:
      0.008328702 = sum of:
        0.008328702 = weight(_text_:information in 284) [ClassicSimilarity], result of:
          0.008328702 = score(doc=284,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13576832 = fieldWeight in 284, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=284)
      0.25 = coord(1/4)
    
    Source
    Information - Wissenschaft und Praxis. 58(2007) H.3, S.165-168
  9. Lewandowski, D.; Kerkmann, F.; Rümmele, S.; Sünkler, S.: An empirical investigation on search engine ad disclosure (2018) 0.00
    0.0020821756 = product of:
      0.008328702 = sum of:
        0.008328702 = weight(_text_:information in 4115) [ClassicSimilarity], result of:
          0.008328702 = score(doc=4115,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13576832 = fieldWeight in 4115, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4115)
      0.25 = coord(1/4)
    
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.3, S.420-437
  10. Lewandowski, D.: Evaluating the retrieval effectiveness of web search engines using a representative query sample (2015) 0.00
    0.0017847219 = product of:
      0.0071388874 = sum of:
        0.0071388874 = weight(_text_:information in 2157) [ClassicSimilarity], result of:
          0.0071388874 = score(doc=2157,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.116372846 = fieldWeight in 2157, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2157)
      0.25 = coord(1/4)
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.9, S.1763-1775
  11. Lewandowski, D.; Mayr, P.: Exploring the academic invisible Web (2006) 0.00
    0.0014872681 = product of:
      0.0059490725 = sum of:
        0.0059490725 = weight(_text_:information in 3752) [ClassicSimilarity], result of:
          0.0059490725 = score(doc=3752,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.09697737 = fieldWeight in 3752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3752)
      0.25 = coord(1/4)
    
    Abstract
    Purpose: To provide a critical review of Bergman's 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws. Literature review on approaches for uncovering information from the Invisible Web. Findings: Bergman's size estimate of the Invisible Web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimate is given. Research limitations/implications: The precision of our estimate is limited due to a small sample size and lack of reliable data. Practical implications: We can show that no single library alone will be able to index the Academic Invisible Web. We suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the Academic Invisible Web.
  12. Lewandowski, D.; Mayr, P.: Exploring the academic invisible Web (2006) 0.00
    0.0014872681 = product of:
      0.0059490725 = sum of:
        0.0059490725 = weight(_text_:information in 2580) [ClassicSimilarity], result of:
          0.0059490725 = score(doc=2580,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.09697737 = fieldWeight in 2580, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2580)
      0.25 = coord(1/4)
    
    Abstract
    Purpose: To provide a critical review of Bergman's 2001 study on the deep web. In addition, we bring a new concept into the discussion, the academic invisible web (AIW). We define the academic invisible web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the invisible web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws. Literature review on approaches for uncovering information from the invisible web. Findings: Bergman's size estimate of the invisible web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimate is given. Research limitations/implications: The precision of our estimate is limited due to a small sample size and lack of reliable data. Practical implications: We can show that no single library alone will be able to index the academic invisible web. We suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the academic invisible web.
  13. Lewandowski, D.; Drechsler, J.; Mach, S. von: Deriving query intents from web search engine queries (2012) 0.00
    0.0014872681 = product of:
      0.0059490725 = sum of:
        0.0059490725 = weight(_text_:information in 385) [ClassicSimilarity], result of:
          0.0059490725 = score(doc=385,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.09697737 = fieldWeight in 385, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=385)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.9, S.1773-1788
  14. Schaer, P.; Mayr, P.; Sünkler, S.; Lewandowski, D.: How relevant is the long tail? : a relevance assessment study on million short (2016) 0.00
    0.0014872681 = product of:
      0.0059490725 = sum of:
        0.0059490725 = weight(_text_:information in 3144) [ClassicSimilarity], result of:
          0.0059490725 = score(doc=3144,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.09697737 = fieldWeight in 3144, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3144)
      0.25 = coord(1/4)
    
    Abstract
    Users of web search engines are known to mostly focus on the top-ranked results of the search engine result page. While many studies support this well-known information-seeking pattern, only a few studies concentrate on the question of what users are missing by neglecting lower-ranked results. To learn more about the relevance distributions in the so-called long tail, we conducted a relevance assessment study with the Million Short long-tail web search engine. While we see a clear difference in the content between the head and the tail of the search engine result list, we see no statistically significant differences in the binary relevance judgments and only weak significant differences when using graded relevance. The tail contains different but still valuable results. We argue that the long tail can be a rich source for the diversification of web search engine result lists, but it needs more evaluation to clearly describe the differences.