Search (8 results, page 1 of 1)

  • theme_ss:"Suchmaschinen"
  • year_i:[2020 TO 2030}
  1. Sundin, O.; Lewandowski, D.; Haider, J.: Whose relevance? : Web search engines as multisided relevance machines (2022) 0.11
    0.11218452 = product of:
      0.16827677 = sum of:
        0.08135357 = weight(_text_:search in 542) [ClassicSimilarity], result of:
          0.08135357 = score(doc=542,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.46558946 = fieldWeight in 542, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=542)
        0.086923204 = product of:
          0.17384641 = sum of:
            0.17384641 = weight(_text_:engines in 542) [ClassicSimilarity], result of:
              0.17384641 = score(doc=542,freq=6.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.68060905 = fieldWeight in 542, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=542)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This opinion piece takes Google's response to the so-called COVID-19 infodemic as a starting point to argue for the need to consider societal relevance as a complement to other types of relevance. The authors maintain that if information science wants to be a discipline at the forefront of research on relevance, search engines, and their use, then the information science research community needs to address the challenges and conditions that commercial search engines create. The article concludes with a tentative list of related research topics.
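The indented breakdown shown under each hit is Lucene's ClassicSimilarity (TF-IDF) explain output: for every matching term, queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm with tf = sqrt(termFreq), and coord factors discount clauses that matched only partially. A minimal sketch in Python (not part of the catalogue itself, just a re-computation from the numbers printed above) reproduces the score of hit 1 (doc 542):

    import math

    def term_weight(term_freq, idf, query_norm, field_norm):
        # ClassicSimilarity: score(term) = queryWeight * fieldWeight
        tf = math.sqrt(term_freq)             # 2.4494898 for termFreq=6
        query_weight = idf * query_norm       # 0.1747324 for "search"
        field_weight = tf * idf * field_norm  # 0.46558946 for "search"
        return query_weight * field_weight

    QUERY_NORM = 0.05027291                   # shared queryNorm for this query

    search  = term_weight(6.0, 3.475677, QUERY_NORM, 0.0546875)  # ~0.08135357
    engines = term_weight(6.0, 5.080822, QUERY_NORM, 0.0546875)  # ~0.17384641

    engines *= 0.5                            # coord(1/2): nested clause matched 1 of 2 sub-queries
    total = (search + engines) * (2.0 / 3.0)  # coord(2/3): 2 of 3 top-level clauses matched
    print(total)                              # ~0.11218452, the displayed relevance score

The single-term hits further down (nos. 4 to 8) follow the same pattern, with only the coord(1/3) factor applied because one of three query clauses matched.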
  2. Vegt, A. van der; Zuccon, G.; Koopman, B.: Do better search engines really equate to better clinical decisions? : If not, why not? (2021) 0.10
    0.10452528 = product of:
      0.15678792 = sum of:
        0.10609328 = weight(_text_:search in 150) [ClassicSimilarity], result of:
          0.10609328 = score(doc=150,freq=20.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.60717577 = fieldWeight in 150, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=150)
        0.05069464 = product of:
          0.10138928 = sum of:
            0.10138928 = weight(_text_:engines in 150) [ClassicSimilarity], result of:
              0.10138928 = score(doc=150,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.39693922 = fieldWeight in 150, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=150)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Previous research has found that improved search engine effectiveness (evaluated using a batch-style approach) does not always translate to significant improvements in user task performance; however, these prior studies focused on simple recall- and precision-based search tasks. We investigated the same relationship, but for realistic, complex search tasks required in clinical decision making. One hundred and nine clinicians and final-year medical students answered 16 clinical questions. Although the search engine did improve answer accuracy by 20 percentage points, there was no significant difference when participants used a more effective, state-of-the-art search engine. We also found that the search engine effectiveness difference identified in the lab was diminished by around 70% when the search engines were used with real users. Despite the aid of the search engine, half of the clinical questions were answered incorrectly. We further identified the relative contribution of search engine effectiveness to overall end-task success. We found that the ability to interpret documents correctly was a much more important factor affecting task success. If these findings are representative, information retrieval research may need to reorient its emphasis towards helping users to better understand information, rather than just finding it for them.
  3. Christensen, A.: Wissenschaftliche Literatur entdecken : was bibliothekarische Discovery-Systeme von der Konkurrenz lernen und was sie ihr zeigen können (2022) 0.09
    0.091598265 = product of:
      0.1373974 = sum of:
        0.0664249 = weight(_text_:search in 833) [ClassicSimilarity], result of:
          0.0664249 = score(doc=833,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.38015217 = fieldWeight in 833, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=833)
        0.070972495 = product of:
          0.14194499 = sum of:
            0.14194499 = weight(_text_:engines in 833) [ClassicSimilarity], result of:
              0.14194499 = score(doc=833,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.5557149 = fieldWeight in 833, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=833)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In recent years, the range of academic search engines for finding scholarly literature across all fields of research has grown considerably, complementing popular commercial offerings such as Web of Science or Scopus. The article outlines the key differences between library discovery systems and academic search engines such as Base, Dimensions, or Open Alex, and discusses ways in which the two can benefit from each other. These development perspectives concern aspects such as the contextualization of knowledge, data modelling, automatic data enrichment, and the tailoring of search spaces.
  4. Sa, N.; Yuan, X.(J.): Improving the effectiveness of voice search systems through partial query modification (2022) 0.03
    0.026839714 = product of:
      0.08051914 = sum of:
        0.08051914 = weight(_text_:search in 635) [ClassicSimilarity], result of:
          0.08051914 = score(doc=635,freq=8.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.460814 = fieldWeight in 635, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=635)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper addresses the importance of improving the effectiveness of voice search systems through partial query modification. A user-centered experiment was designed to compare the effectiveness of an experimental system offering a partial query modification feature to a baseline system in which users could issue complete queries only, with 32 participants each searching on eight different tasks. The results indicate that the participants spent significantly more time preparing the modification but significantly less time speaking the modification when using the experimental system than when using the baseline system. The participants found that, compared with the baseline system, the experimental system (a) was more effective, (b) gave them more control, (c) was easier for the search tasks, and (d) saved them time. The results contribute to improving future voice search system design and benefiting the research community in general. System implications and future work are discussed.
  5. Haring, M.; Rudaev, A.; Lewandowski, D.: Google & Co. : wie die "Search Studies" an der HAW Hamburg unserem Nutzungsverhalten auf den Zahn fühlen: Blickpunkt angewandte Forschung (2022) 0.03
    0.025304725 = product of:
      0.075914174 = sum of:
        0.075914174 = weight(_text_:search in 630) [ClassicSimilarity], result of:
          0.075914174 = score(doc=630,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.43445963 = fieldWeight in 630, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=630)
      0.33333334 = coord(1/3)
    
    Abstract
    The Search Studies research group at HAW Hamburg investigates the use of commercial search engines, search engine optimization, and the relevance assessment of search engines. The head of the research group, Prof. Dr. Dirk Lewandowski, was available for an interview about his own work, that of his team, and his teaching at HAW Hamburg. Should we trust information from the internet, or is caution advised?
  6. Kang, X.; Wu, Y.; Ren, W.: Toward action comprehension for searching : mining actionable intents in query entities (2020) 0.02
    0.022366427 = product of:
      0.06709928 = sum of:
        0.06709928 = weight(_text_:search in 5613) [ClassicSimilarity], result of:
          0.06709928 = score(doc=5613,freq=8.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.3840117 = fieldWeight in 5613, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5613)
      0.33333334 = coord(1/3)
    
    Abstract
    Understanding search engine users' intents has been a popular topic of study in information retrieval, as it directly affects the quality of retrieved information. One of the fundamental problems in this field is to find a connection between the entity in a query and the potential intents of the users, the latter of which would further reveal important information for facilitating the users' future actions. In this article, we present a novel research method for mining actionable intents for search users by generating a ranked list of the potentially most informative actions from a massive pool of action samples. We compare different search strategies and their combinations for retrieving the action pool and develop three criteria for measuring the informativeness of the selected action samples: the significance of an action sample within the pool, the representativeness of an action sample for the other candidate samples, and the diverseness of an action sample with respect to the already selected actions. Our experiment, based on the Action Mining (AM) query entity data set from the Actionable Knowledge Graph (AKG) task at NTCIR-13, suggests that the proposed approach is effective in generating an informative and early-satisfying ranking of potential actions for search users.
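The abstract above names three informativeness criteria (significance, representativeness, diverseness) but does not give their formal definitions. The following toy sketch therefore only illustrates the general shape of such a greedy, criteria-balancing ranking; the cosine-similarity stand-ins for the three measures are assumptions, not the authors' definitions from the NTCIR-13 AKG work.

    import numpy as np

    def rank_actions(pool, k=5, weights=(1.0, 1.0, 1.0)):
        """Greedily pick k action samples from an (n, d) matrix of action embeddings,
        trading off significance, representativeness, and diverseness."""
        norms = np.linalg.norm(pool, axis=1, keepdims=True)
        sim = (pool @ pool.T) / (norms @ norms.T)   # pairwise cosine similarity
        significance = sim.mean(axis=1)             # stand-in: centrality within the pool
        w_sig, w_rep, w_div = weights
        selected = []
        while len(selected) < min(k, len(pool)):
            best_i, best_score = None, -np.inf
            for i in range(len(pool)):
                if i in selected:
                    continue
                rest = [j for j in range(len(pool)) if j != i and j not in selected]
                representativeness = sim[i, rest].mean() if rest else 0.0
                diverseness = 1.0 - (sim[i, selected].max() if selected else 0.0)
                score = w_sig * significance[i] + w_rep * representativeness + w_div * diverseness
                if score > best_score:
                    best_i, best_score = i, score
            selected.append(best_i)
        return selected

    # Indices of the 5 selected actions; the order depends on the random embeddings.
    print(rank_actions(np.random.rand(20, 16)))

In a real implementation, the three terms would be replaced by the measures defined in the paper and the weights tuned against the AM query entity data set.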
  7. Advanced online media use (2023) 0.02
    0.017893143 = product of:
      0.053679425 = sum of:
        0.053679425 = weight(_text_:search in 954) [ClassicSimilarity], result of:
          0.053679425 = score(doc=954,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.30720934 = fieldWeight in 954, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=954)
      0.33333334 = coord(1/3)
    
    Content
    "1. Use a range of different media 2. Access paywalled media content 3. Use an advertising and tracking blocker 4. Use alternatives to Google Search 5. Use alternatives to YouTube 6. Use alternatives to Facebook and Twitter 7. Caution with Wikipedia 8. Web browser, email, and internet access 9. Access books and scientific papers 10. Access deleted web content"
  8. Zeynali-Tazehkandi, M.; Nowkarizi, M.: A dialectical approach to search engine evaluation (2020) 0.01
    0.013419857 = product of:
      0.04025957 = sum of:
        0.04025957 = weight(_text_:search in 185) [ClassicSimilarity], result of:
          0.04025957 = score(doc=185,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.230407 = fieldWeight in 185, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=185)
      0.33333334 = coord(1/3)