Search (191 results, page 1 of 10)

  • year_i:[2020 TO 2030}
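  The facet above is a Lucene/Solr range filter: the square bracket makes the lower bound inclusive and the curly brace makes the upper bound exclusive, so year_i:[2020 TO 2030} matches years from 2020 up to, but not including, 2030. As a minimal sketch of how such a filter could be passed to the index, assuming a Solr backend and the pysolr client (the core URL and field handling are illustrative assumptions, not details confirmed by this page):

      import pysolr

      # Hypothetical core URL; adjust for the actual installation.
      solr = pysolr.Solr("http://localhost:8983/solr/literature", timeout=10)

      # fq restricts the result set without affecting relevance scoring;
      # [2020 TO 2030} means year >= 2020 and year < 2030.
      results = solr.search("search engines", fq="year_i:[2020 TO 2030}", rows=20)
      for doc in results:
          print(doc.get("title"), doc.get("year_i"))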
  1. Sundin, O.; Lewandowski, D.; Haider, J.: Whose relevance? : Web search engines as multisided relevance machines (2022) 0.11
    0.11218452 = product of:
      0.16827677 = sum of:
        0.08135357 = weight(_text_:search in 542) [ClassicSimilarity], result of:
          0.08135357 = score(doc=542,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.46558946 = fieldWeight in 542, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=542)
        0.086923204 = product of:
          0.17384641 = sum of:
            0.17384641 = weight(_text_:engines in 542) [ClassicSimilarity], result of:
              0.17384641 = score(doc=542,freq=6.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.68060905 = fieldWeight in 542, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=542)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
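    The breakdown above is Lucene's ClassicSimilarity explain output: each matching clause contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = tf(freq) x idf x fieldNorm, and the final score is scaled by coord factors for the fraction of query clauses that matched. A minimal sketch that reproduces the 0.1122 score of this record from the constants shown above (the helper names are ours, not Lucene API calls):

        import math

        def classic_tf(freq):
            # ClassicSimilarity: tf = sqrt(raw term frequency)
            return math.sqrt(freq)

        def classic_idf(doc_freq, max_docs):
            # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
            return 1.0 + math.log(max_docs / (doc_freq + 1))

        def clause_score(freq, doc_freq, max_docs, field_norm, query_norm):
            # queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm
            idf = classic_idf(doc_freq, max_docs)
            return (idf * query_norm) * (classic_tf(freq) * idf * field_norm)

        # Constants copied from the explain tree of result 1 (doc 542).
        query_norm = 0.05027291
        search_part = clause_score(6, 3718, 44218, 0.0546875, query_norm)    # ~0.0814
        engines_part = clause_score(6, 746, 44218, 0.0546875, query_norm)    # ~0.1738
        # "engines" sits in a nested clause that matched 1 of 2 sub-clauses: coord(1/2);
        # the document matched 2 of 3 top-level clauses: coord(2/3).
        total = (search_part + engines_part * 0.5) * (2.0 / 3.0)
        print(round(total, 4))                                               # ~0.1122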
    
    Abstract
    This opinion piece takes Google's response to the so-called COVID-19 infodemic as a starting point to argue for the need to consider societal relevance as a complement to other types of relevance. The authors maintain that if information science wants to be a discipline at the forefront of research on relevance, search engines, and their use, then the information science research community needs to address itself to the challenges and conditions that commercial search engines create. The article concludes with a tentative list of related research topics.
  2. Vegt, A. van der; Zuccon, G.; Koopman, B.: Do better search engines really equate to better clinical decisions? : If not, why not? (2021) 0.10
    0.10452528 = product of:
      0.15678792 = sum of:
        0.10609328 = weight(_text_:search in 150) [ClassicSimilarity], result of:
          0.10609328 = score(doc=150,freq=20.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.60717577 = fieldWeight in 150, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=150)
        0.05069464 = product of:
          0.10138928 = sum of:
            0.10138928 = weight(_text_:engines in 150) [ClassicSimilarity], result of:
              0.10138928 = score(doc=150,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.39693922 = fieldWeight in 150, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=150)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Previous research has found that improved search engine effectiveness, evaluated using a batch-style approach, does not always translate to significant improvements in user task performance; however, these prior studies focused on simple recall- and precision-based search tasks. We investigated the same relationship, but for the realistic, complex search tasks required in clinical decision making. One hundred and nine clinicians and final-year medical students answered 16 clinical questions. Although the search engine did improve answer accuracy by 20 percentage points, there was no significant difference when participants used a more effective, state-of-the-art search engine. We also found that the search engine effectiveness difference identified in the lab was diminished by around 70% when the search engines were used with real users. Despite the aid of the search engine, half of the clinical questions were answered incorrectly. We further identified the relative contribution of search engine effectiveness to overall end-task success, and found that the ability to interpret documents correctly was a much more important factor impacting task success. If these findings are representative, information retrieval research may need to reorient its emphasis towards helping users to better understand information, rather than just finding it for them.
  3. Chi, Y.; He, D.; Jeng, W.: Laypeople's source selection in online health information-seeking process (2020) 0.10
    0.102130555 = product of:
      0.15319583 = sum of:
        0.04744636 = weight(_text_:search in 34) [ClassicSimilarity], result of:
          0.04744636 = score(doc=34,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.27153727 = fieldWeight in 34, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=34)
        0.10574947 = sum of:
          0.07169304 = weight(_text_:engines in 34) [ClassicSimilarity], result of:
            0.07169304 = score(doc=34,freq=2.0), product of:
              0.25542772 = queryWeight, product of:
                5.080822 = idf(docFreq=746, maxDocs=44218)
                0.05027291 = queryNorm
              0.2806784 = fieldWeight in 34, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.080822 = idf(docFreq=746, maxDocs=44218)
                0.0390625 = fieldNorm(doc=34)
          0.03405643 = weight(_text_:22 in 34) [ClassicSimilarity], result of:
            0.03405643 = score(doc=34,freq=2.0), product of:
              0.17604718 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05027291 = queryNorm
              0.19345059 = fieldWeight in 34, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=34)
      0.6666667 = coord(2/3)
    
    Abstract
    For laypeople, searching online health information resources can be challenging due to topic complexity and the large number of online sources of differing quality. The goal of this article is to examine, among all the available online sources, which ones laypeople select to address their health-related information needs, and whether or how much the severity of a health condition influences their selection. Twenty-four participants were recruited individually, and each was asked (using a retrieval system called HIS) to search for information regarding a severe health condition and a mild health condition, respectively. The selected online health information sources were automatically captured by the HIS system and classified at both the website and webpage levels. Participants' selection behavior patterns were then plotted across the whole information-seeking process. Our results demonstrate that laypeople's source selection fluctuates during the health information-seeking process and also varies with the severity of the health condition. This study reveals laypeople's real usage of different types of online health information sources and has implications for the design of search engines, as well as for the development of health literacy programs.
    Date
    12.11.2020 13:22:09
  4. Huurdeman, H.C.; Kamps, J.: Designing multistage search systems to support the information seeking process (2020) 0.09
    0.09297244 = product of:
      0.13945866 = sum of:
        0.08876401 = weight(_text_:search in 5882) [ClassicSimilarity], result of:
          0.08876401 = score(doc=5882,freq=14.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.5079997 = fieldWeight in 5882, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5882)
        0.05069464 = product of:
          0.10138928 = sum of:
            0.10138928 = weight(_text_:engines in 5882) [ClassicSimilarity], result of:
              0.10138928 = score(doc=5882,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.39693922 = fieldWeight in 5882, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5882)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Due to the advances in information retrieval in the past decades, search engines have become extremely efficient at acquiring useful sources in response to a user's query. However, for more prolonged and complex information seeking tasks, these search engines are not as well suited. During complex information seeking tasks, various stages may occur, which imply varying support needs for users. However, the implications of theoretical information seeking models for concrete search user interfaces (SUI) design are unclear, both at the level of the individual features and of the whole interface. Guidelines and design patterns for concrete SUIs, on the other hand, provide recommendations for feature design, but these are separated from their role in the information seeking process. This chapter addresses the question of how to design SUIs with enhanced support for the macro-level process, first by reviewing previous research. Subsequently, we outline a framework for complex task support, which explicitly connects the temporal development of complex tasks with different levels of support by SUI features. This is followed by a discussion of concrete system examples which include elements of the three dimensions of our framework in an exploratory search and sensemaking context. Moreover, we discuss the connection of navigation with the search-oriented framework. In our final discussion and conclusion, we provide recommendations for designing more holistic SUIs which potentially evolve along with a user's information seeking process.
    Source
    Understanding and improving information search [cf. https://www.researchgate.net/publication/341747751_Designing_Multistage_Search_Systems_to_Support_the_Information_Seeking_Process]
  5. Christensen, A.: Wissenschaftliche Literatur entdecken : was bibliothekarische Discovery-Systeme von der Konkurrenz lernen und was sie ihr zeigen können (2022) 0.09
    0.091598265 = product of:
      0.1373974 = sum of:
        0.0664249 = weight(_text_:search in 833) [ClassicSimilarity], result of:
          0.0664249 = score(doc=833,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.38015217 = fieldWeight in 833, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=833)
        0.070972495 = product of:
          0.14194499 = sum of:
            0.14194499 = weight(_text_:engines in 833) [ClassicSimilarity], result of:
              0.14194499 = score(doc=833,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.5557149 = fieldWeight in 833, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=833)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In recent years, the range of academic search engines for finding scholarly literature across all fields of research has grown considerably, complementing popular commercial offerings such as Web of Science or Scopus. The article outlines the key differences between library discovery systems and academic search engines such as Base, Dimensions, or Open Alex, and discusses how the two can benefit from each other. These development perspectives concern aspects such as the contextualization of knowledge, data modeling, automatic data enrichment, and the delineation of search spaces.
  6. Vakkari, P.; Völske, M.; Potthast, M.; Hagen, M.; Stein, B.: Predicting essay quality from search and writing behavior (2021) 0.09
    0.0871595 = product of:
      0.13073924 = sum of:
        0.09489272 = weight(_text_:search in 260) [ClassicSimilarity], result of:
          0.09489272 = score(doc=260,freq=16.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.54307455 = fieldWeight in 260, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=260)
        0.03584652 = product of:
          0.07169304 = sum of:
            0.07169304 = weight(_text_:engines in 260) [ClassicSimilarity], result of:
              0.07169304 = score(doc=260,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.2806784 = fieldWeight in 260, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=260)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Few studies have investigated how search behavior affects complex writing tasks. We analyze a dataset of 150 long essays whose authors searched the ClueWeb09 corpus for source material, while all querying, clicking, and writing activity was meticulously recorded. We model the effect of search and writing behavior on essay quality using path analysis. Since the boil-down and build-up writing strategies identified in previous research have been found to affect search behavior, we model each writing strategy separately. Our analysis shows that the search process contributes significantly to essay quality through both direct and mediated effects, while the author's writing strategy moderates this relationship. Our models explain 25-35% of the variation in essay quality through rather simple search and writing process characteristics alone, a fact that has implications for how search engines could personalize result pages for writing tasks. Authors' writing strategies and associated searching patterns differ, producing differences in essay quality. In a nutshell: essay quality improves if search and writing strategies harmonize; build-up writers benefit from focused, in-depth querying, while boil-down writers fare better with a broader and shallower querying strategy.
  7. Li, Y.; Crescenzi, A.; Ward, A.R.; Capra, R.: Thinking inside the box : an evaluation of a novel search-assisting tool for supporting (meta)cognition during exploratory search (2023) 0.09
    0.0871595 = product of:
      0.13073924 = sum of:
        0.09489272 = weight(_text_:search in 1040) [ClassicSimilarity], result of:
          0.09489272 = score(doc=1040,freq=16.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.54307455 = fieldWeight in 1040, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1040)
        0.03584652 = product of:
          0.07169304 = sum of:
            0.07169304 = weight(_text_:engines in 1040) [ClassicSimilarity], result of:
              0.07169304 = score(doc=1040,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.2806784 = fieldWeight in 1040, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1040)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Exploratory searches involve significant cognitively demanding activities aimed at learning and investigation. However, users gain little support from search engines for their cognitive and metacognitive activities (e.g., discovery, synthesis, planning, transformation, monitoring, and reflection) during exploratory searches. To better support the exploratory search process, we designed a new search assistance tool called OrgBox. OrgBox allows users to drag and drop information they find during searches into "boxes" and "items" that can be created, labeled, and rearranged on a canvas. We conducted a controlled, within-subjects user study with 24 participants to evaluate OrgBox against a baseline tool called OrgDoc that supported rich-text features. Our findings show that participants perceived the OrgBox tool to provide more support for grouping and reorganizing information, tracking thought processes, planning and monitoring search and task processes, and gaining a visual overview of the collected information. The usability test revealed users' preferences for the simplicity, familiarity, and flexibility of the OrgBox design, along with technical problems such as response delays and restrictions on use. Our results have implications for the design of search-assisting systems that encourage cognitive and metacognitive activities during exploratory search processes.
  8. Sbaffi, L.; Zhao, C.: Modeling the online health information seeking process : information channel selection among university students (2020) 0.09
    0.087043464 = product of:
      0.1305652 = sum of:
        0.06973162 = weight(_text_:search in 5618) [ClassicSimilarity], result of:
          0.06973162 = score(doc=5618,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.39907667 = fieldWeight in 5618, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=5618)
        0.060833566 = product of:
          0.12166713 = sum of:
            0.12166713 = weight(_text_:engines in 5618) [ClassicSimilarity], result of:
              0.12166713 = score(doc=5618,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.47632706 = fieldWeight in 5618, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5618)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This study investigates the influence of individual and information characteristics on university students' information channel selection (that is, search engines, social question & answer sites, online health websites, and social networking sites) of online health information (OHI) for three different types of search tasks (factual, exploratory, and personal experience). Quantitative data were collected via an online questionnaire distributed to students on various postgraduate programs at a large UK university. In total, 291 responses were processed for descriptive statistics, Principal Component Analysis, and Poisson regression. Search engines are the most frequently used among the four channels of information discussed in this study. Credibility, ease of use, style, usefulness, and recommendation are the key factors influencing users' judgments of information characteristics (explaining over 62% of the variance). Poisson regression indicated that individuals' channel experience, age, student status, health status, and triangulation (comparing sources) as well as style, credibility, usefulness, and recommendation are substantive predictors for channel selection of OHI.
  9. Hasanain, M.; Elsayed, T.: Studying effectiveness of Web search for fact checking (2022) 0.07
    0.073910534 = product of:
      0.1108658 = sum of:
        0.07501928 = weight(_text_:search in 558) [ClassicSimilarity], result of:
          0.07501928 = score(doc=558,freq=10.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.4293381 = fieldWeight in 558, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=558)
        0.03584652 = product of:
          0.07169304 = sum of:
            0.07169304 = weight(_text_:engines in 558) [ClassicSimilarity], result of:
              0.07169304 = score(doc=558,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.2806784 = fieldWeight in 558, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=558)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Web search is commonly used by fact checking systems as a source of evidence for claim verification. In this work, we demonstrate that the task of retrieving pages useful for fact checking, called evidential pages, is indeed different from the task of retrieving topically relevant pages that are typically optimized by search engines; thus, it should be handled differently. We conduct a comprehensive study on the performance of retrieving evidential pages over a test collection we developed for the task of re-ranking Web pages by usefulness for fact-checking. Results show that pages (retrieved by a commercial search engine) that are topically relevant to a claim are not always useful for verifying it, and that the engine's performance in retrieving evidential pages is weakly correlated with retrieval of topically relevant pages. Additionally, we identify types of evidence in evidential pages and some linguistic cues that can help predict page usefulness. Moreover, preliminary experiments show that a retrieval model leveraging those cues has a higher performance compared to the search engine. Finally, we show that existing systems have a long way to go to support effective fact checking. To that end, our work provides insights to guide design of better future systems for the task.
  10. Kim, L.; Portenoy, J.H.; West, J.D.; Stovel, K.W.: Scientific journals still matter in the era of academic search engines and preprint archives (2020) 0.07
    0.07253622 = product of:
      0.10880433 = sum of:
        0.058109686 = weight(_text_:search in 5961) [ClassicSimilarity], result of:
          0.058109686 = score(doc=5961,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.33256388 = fieldWeight in 5961, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5961)
        0.05069464 = product of:
          0.10138928 = sum of:
            0.10138928 = weight(_text_:engines in 5961) [ClassicSimilarity], result of:
              0.10138928 = score(doc=5961,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.39693922 = fieldWeight in 5961, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5961)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Journals play a critical role in the scientific process because they evaluate the quality of incoming papers and offer an organizing filter for search. However, the role of journals has been called into question because new preprint archives and academic search engines make it easier to find articles independent of the journals that publish them. Research on this issue is complicated by the deeply confounded relationship between article quality and journal reputation. We present an innovative proxy for individual article quality that is divorced from the journal's reputation or impact factor: the number of citations to preprints posted on arXiv.org. Using this measure to study three subfields of physics that were early adopters of arXiv, we show that prior estimates of the effect of journal reputation on an individual article's impact (measured by citations) are likely inflated. While we find that higher-quality preprints in these subfields are now less likely to be published in journals compared to prior years, we find little systematic evidence that the role of journal reputation on article performance has declined.
  11. Hoeber, O.; Harvey, M.; Dewan Sagar, S.A.; Pointon, M.: ¬The effects of simulated interruptions on mobile search tasks (2022) 0.07
    0.07052815 = product of:
      0.105792224 = sum of:
        0.08876401 = weight(_text_:search in 563) [ClassicSimilarity], result of:
          0.08876401 = score(doc=563,freq=14.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.5079997 = fieldWeight in 563, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=563)
        0.017028214 = product of:
          0.03405643 = sum of:
            0.03405643 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.03405643 = score(doc=563,freq=2.0), product of:
                0.17604718 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05027291 = queryNorm
                0.19345059 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    While it is clear that using a mobile device can interrupt real-world activities such as walking or driving, the effects of interruptions on mobile device use have been under-studied. We are particularly interested in how the ambient distraction of walking while using a mobile device, combined with the occurrence of simulated interruptions of different levels of cognitive complexity, affects web search activities. We have established an experimental design to study how the degree of cognitive complexity of simulated interruptions influences both objective and subjective search task performance. In a controlled laboratory study (n = 27), quantitative and qualitative data were collected on mobile search performance, perceptions of the interruptions, and how participants reacted to the interruptions, using a custom mobile eye-tracking app, a questionnaire, and observations. As expected, more cognitively complex interruptions resulted in increased overall task completion times and higher perceived impacts. Interestingly, the effect on the resumption lag or the actual search performance was not significant, showing the resiliency of people to resume their tasks after an interruption. Implications from this study enhance our understanding of how interruptions objectively and subjectively affect search task performance, motivating the need for providing explicit mobile search support to enable recovery from interruptions.
    Date
    3. 5.2022 13:22:33
  12. Wu, Z.; Li, R.; Zhou, Z.; Guo, J.; Jiang, J.; Su, X.: ¬A user sensitive subject protection approach for book search service (2020) 0.07
    0.06613848 = product of:
      0.09920772 = sum of:
        0.08217951 = weight(_text_:search in 5617) [ClassicSimilarity], result of:
          0.08217951 = score(doc=5617,freq=12.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.47031635 = fieldWeight in 5617, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5617)
        0.017028214 = product of:
          0.03405643 = sum of:
            0.03405643 = weight(_text_:22 in 5617) [ClassicSimilarity], result of:
              0.03405643 = score(doc=5617,freq=2.0), product of:
                0.17604718 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05027291 = queryNorm
                0.19345059 = fieldWeight in 5617, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5617)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In a digital library, book search is one of the most important information services. However, with the rapid development of network technologies such as cloud computing, the server side of a digital library is becoming more and more untrusted; thus, how to prevent the disclosure of users' book-query privacy is a matter of increasingly widespread concern. In this article, we propose to construct a group of plausible fake queries for each user book query to cover up the sensitive subjects behind users' queries. First, we propose a basic framework for privacy protection in book search, which requires no change to the book search algorithm running on the server side and no compromise to the accuracy of book search. Second, we present a privacy protection model for book search to formulate the constraints that ideal fake queries should satisfy, that is, (i) feature similarity, which measures the confusion effect of fake queries on users' queries, and (ii) privacy exposure, which measures the cover-up effect of fake queries on users' sensitive subjects. Third, we discuss the algorithm implementation for the privacy model. Finally, the effectiveness of our approach is demonstrated by theoretical analysis and experimental evaluation.
    Date
    6. 1.2020 17:22:25
  13. Ostani, M.M.; Sohrabi, M.C.; Taheri, S.M.; Asemi, A.: Localization of Schema.org for manuscript description in the Iranian-Islamic information context (2021) 0.07
    0.06542733 = product of:
      0.098141 = sum of:
        0.04744636 = weight(_text_:search in 585) [ClassicSimilarity], result of:
          0.04744636 = score(doc=585,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.27153727 = fieldWeight in 585, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=585)
        0.05069464 = product of:
          0.10138928 = sum of:
            0.10138928 = weight(_text_:engines in 585) [ClassicSimilarity], result of:
              0.10138928 = score(doc=585,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.39693922 = fieldWeight in 585, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=585)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This study aims to assess the localization of Schema.org for manuscript description in the Iranian-Islamic information context using documentary and qualitative content analysis. Schema.org introduces schemas for different Web content objects so as to generate structured data. Given that the structure of Schema.org is ontological, the inheritance of the manuscript types from the properties of their parent types, as well as the localization and description of the specific properties of manuscripts in the Iranian-Islamic information context, were investigated in order to improve their indexability and semantic visibility in Web search engines. The proposed properties specific to the manuscript type and the six proposed properties to be added to the "CreativeWork" type are found to be consistent with other schema properties. In turn, these properties lead to the localization of the existing schema for the manuscript type, compatible with the Iranian-Islamic information context. This schema is also applicable to centers with published records on the Web, and if these are marked up with the proposed properties, their indexability and semantic visibility in Web search engines increase accordingly. The generation of structured data in the Web environment through this schema is deemed to promote the concept of the Semantic Web and to make data and knowledge retrieval easier.
  14. Cho, H.; Pham, M.T.N.; Leonard, K.N.; Urban, A.C.: ¬A systematic literature review on image information needs and behaviors (2022) 0.05
    0.05490443 = product of:
      0.08235665 = sum of:
        0.053679425 = weight(_text_:search in 606) [ClassicSimilarity], result of:
          0.053679425 = score(doc=606,freq=8.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.30720934 = fieldWeight in 606, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=606)
        0.028677218 = product of:
          0.057354435 = sum of:
            0.057354435 = weight(_text_:engines in 606) [ClassicSimilarity], result of:
              0.057354435 = score(doc=606,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.22454272 = fieldWeight in 606, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.03125 = fieldNorm(doc=606)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose: With ready access to search engines and social media platforms, the way people find image information has evolved and diversified in the past two decades. The purpose of this paper is to provide an overview of the literature on image information needs and behaviors.
    Design/methodology/approach: Following an eight-step procedure for conducting systematic literature reviews, the paper presents an analysis of peer-reviewed work on image information needs and behaviors, with publications ranging from the years 1997 to 2019.
    Findings: Application of the inclusion criteria led to 69 peer-reviewed works. These works were synthesized according to the following categories: research methods, users targeted, image types, identified needs, search behaviors and search obstacles. The reviewed studies show that people seek and use images for multiple reasons, including entertainment, illustration, aesthetic appreciation, knowledge construction, engagement, inspiration and social interactions. The reviewed studies also report that common strategies for image searches include keyword searches with short queries, browsing, specialization and reformulation. Observed trends suggest common deployment of query analysis, survey questionnaires and undergraduate participant pools to research image information needs and behavior.
    Originality/value: At this point, after more than two decades of image information needs research, a holistic systematic review of the literature was long overdue. The way users find image information has evolved and diversified due to technological developments in image retrieval. By synthesizing this burgeoning field into specific foci, this systematic literature review provides a foundation for future empirical investigation. With this foundation set, the paper then pinpoints key research gaps to investigate, particularly the influence of user expertise, a need for more diverse population samples, a dearth of qualitative data, new search features and information and visual literacies instruction.
  15. Silva, S.E.; Reis, L.P.; Fernandes, J.M.; Sester Pereira, A.D.: ¬A multi-layer framework for semantic modeling (2020) 0.05
    0.05010998 = product of:
      0.07516497 = sum of:
        0.04648775 = weight(_text_:search in 5712) [ClassicSimilarity], result of:
          0.04648775 = score(doc=5712,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.2660511 = fieldWeight in 5712, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=5712)
        0.028677218 = product of:
          0.057354435 = sum of:
            0.057354435 = weight(_text_:engines in 5712) [ClassicSimilarity], result of:
              0.057354435 = score(doc=5712,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.22454272 = fieldWeight in 5712, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5712)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose: The purpose of this paper is to introduce a multi-level framework for semantic modeling (MFSM) based on four signification levels: objects, classes of entities, instances, and domains. In addition, four fundamental propositions of the signification process underpin these levels, namely classification, decomposition, instantiation, and contextualization.
    Design/methodology/approach: The deductive approach guided the design of this modeling framework. The authors empirically validated the MFSM in two ways. First, the authors identified the signification processes used in articles that deal with semantic modeling. The authors then applied the MFSM to model the semantic context of the literature about lean manufacturing, a field of management science.
    Findings: The MFSM presents a highly consistent approach to the signification process, integrates the semantic modeling literature into a new and comprehensive view, and permits the modeling of any semantic context, thus facilitating the development of knowledge organization systems based on semantic search.
    Research limitations/implications: The use of the MFSM is manual and thus requires considerable effort from the team that decides to model a semantic context. In this paper, the modeling was generated by specialists; in the future it should be applied to lay users.
    Practical implications: The MFSM opens up avenues for a new form of document classification, for the development of tools based on semantic search, and for investigating how users conduct their searches.
    Social implications: The MFSM can be used to model archives semantically in public or private settings. In the future, it can be incorporated into search engines for more efficient user searches.
    Originality/value: The MFSM provides a new and comprehensive approach to the elementary levels and activities in the process of signification. In addition, this new framework presents a new way to model any context semantically, classifying its objects.
  16. Singh, V.K.; Chayko, M.; Inamdar, R.; Floegel, D.: Female librarians and male computer programmers? : gender bias in occupational images on digital media platforms (2020) 0.05
    0.04626411 = product of:
      0.06939616 = sum of:
        0.03354964 = weight(_text_:search in 6) [ClassicSimilarity], result of:
          0.03354964 = score(doc=6,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.19200584 = fieldWeight in 6, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6)
        0.03584652 = product of:
          0.07169304 = sum of:
            0.07169304 = weight(_text_:engines in 6) [ClassicSimilarity], result of:
              0.07169304 = score(doc=6,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.2806784 = fieldWeight in 6, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Media platforms, technological systems, and search engines act as conduits and gatekeepers for all kinds of information. They often influence, reflect, and reinforce gender stereotypes, including those that represent occupations. This study examines the prevalence of gender stereotypes on digital media platforms and considers how human efforts to create and curate messages directly may impact these stereotypes. While gender stereotyping in social media and algorithms has received some examination in the recent literature, its prevalence in different types of platforms (for example, wiki vs. news vs. social network) and under differing conditions (for example, degrees of human- and machine-led content creation and curation) has yet to be studied. This research explores the extent to which stereotypes of certain strongly gendered professions (librarian, nurse, computer programmer, civil engineer) persist and may vary across digital platforms (Twitter, the New York Times online, Wikipedia, and Shutterstock). The results suggest that gender stereotypes are most likely to be challenged when human beings act directly to create and curate content in digital platforms, and that highly algorithmic approaches for curation showed little inclination towards breaking stereotypes. Implications for the more inclusive design and use of digital media platforms, particularly with regard to mediated occupational messaging, are discussed.
  17. Hammache, A.; Boughanem, M.: Term position-based language model for information retrieval (2021) 0.05
    0.04626411 = product of:
      0.06939616 = sum of:
        0.03354964 = weight(_text_:search in 216) [ClassicSimilarity], result of:
          0.03354964 = score(doc=216,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.19200584 = fieldWeight in 216, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=216)
        0.03584652 = product of:
          0.07169304 = sum of:
            0.07169304 = weight(_text_:engines in 216) [ClassicSimilarity], result of:
              0.07169304 = score(doc=216,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.2806784 = fieldWeight in 216, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=216)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The term position feature is widely and successfully used in IR and Web search engines to enhance retrieval effectiveness. This feature is essentially used for two purposes: to capture query term proximity or to boost the weight of terms appearing in some parts of a document. In this paper, we are interested in the second category. We propose two novel query-independent techniques based on absolute term positions in a document, whose goal is to boost the weight of terms appearing at the beginning of a document. The first one considers only the earliest occurrence of a term in a document. The second one takes into account all term positions in a document. We formalize each of these two techniques as a document model based on term position, and then we incorporate it into a basic language model (LM). Two smoothing techniques, Dirichlet and Jelinek-Mercer, are considered in the basic LM. Experiments conducted on three TREC test collections show that our model, especially the version based on all term positions, achieves significant improvements over the baseline LMs, and it also often performs better than two state-of-the-art baseline models, the chronological term rank model and the Markov random field model.
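    As background to the language-modeling baseline mentioned in this abstract: Dirichlet smoothing estimates p(w|d) = (c(w,d) + mu * p(w|C)) / (|d| + mu), interpolating document term counts with the collection model. The sketch below is a generic Dirichlet-smoothed query-likelihood scorer, not the authors' position-based variant; the toy corpus and mu = 2000 are illustrative assumptions:

        import math
        from collections import Counter

        def dirichlet_score(query_terms, doc_terms, coll_counts, coll_len, mu=2000.0):
            # Generic Dirichlet-smoothed query likelihood, computed in log space.
            doc_counts = Counter(doc_terms)
            doc_len = len(doc_terms)
            score = 0.0
            for term in query_terms:
                p_coll = coll_counts.get(term, 0) / coll_len
                if p_coll == 0.0:
                    continue  # term unseen in the collection: skip it here
                p_smooth = (doc_counts.get(term, 0) + mu * p_coll) / (doc_len + mu)
                score += math.log(p_smooth)
            return score

        # Toy collection of two "documents".
        docs = [["term", "position", "matters", "in", "retrieval"],
                ["language", "models", "smooth", "term", "estimates"]]
        coll = Counter(w for d in docs for w in d)
        coll_len = sum(len(d) for d in docs)
        for d in docs:
            print(dirichlet_score(["term", "position"], d, coll, coll_len))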
  18. Delgado-Quirós, L.; Aguillo, I.F.; Martín-Martín, A.; López-Cózar, E.D.; Orduña-Malea, E.; Ortega, J.L.: Why are these publications missing? : uncovering the reasons behind the exclusion of documents in free-access scholarly databases (2024) 0.05
    0.04626411 = product of:
      0.06939616 = sum of:
        0.03354964 = weight(_text_:search in 1201) [ClassicSimilarity], result of:
          0.03354964 = score(doc=1201,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.19200584 = fieldWeight in 1201, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1201)
        0.03584652 = product of:
          0.07169304 = sum of:
            0.07169304 = weight(_text_:engines in 1201) [ClassicSimilarity], result of:
              0.07169304 = score(doc=1201,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.2806784 = fieldWeight in 1201, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1201)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This study analyses the coverage of seven free-access bibliographic databases (Crossref, Dimensions (non-subscription version), Google Scholar, Lens, Microsoft Academic, Scilit, and Semantic Scholar) to identify the potential reasons that might cause the exclusion of scholarly documents and how they could influence coverage. To do this, 116 k randomly selected bibliographic records from Crossref were used as a baseline. API endpoints and web scraping were used to query each database. The results show that coverage differences are mainly caused by the way each service builds its database. While classic bibliographic databases ingest almost exactly the same content from Crossref (Lens and Scilit miss 0.1% and 0.2% of the records, respectively), academic search engines present lower coverage (Google Scholar does not find 9.8% of the records, Semantic Scholar 10%, and Microsoft Academic 12%). Coverage differences are mainly attributed to external factors, such as web accessibility and robot exclusion policies (39.2%-46%), and to internal requirements that exclude secondary content (6.5%-11.6%). In the case of Dimensions, the classic bibliographic database with the lowest coverage (7.6%), internal selection criteria such as the indexation of full books instead of book chapters (65%) and the exclusion of secondary content (15%) are the main reasons for missing publications.
  19. Singh, A.; Sinha, U.; Sharma, D.k.: Semantic Web and data visualization (2020) 0.04
    0.044422872 = product of:
      0.066634305 = sum of:
        0.037957087 = weight(_text_:search in 79) [ClassicSimilarity], result of:
          0.037957087 = score(doc=79,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.21722981 = fieldWeight in 79, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
        0.028677218 = product of:
          0.057354435 = sum of:
            0.057354435 = weight(_text_:engines in 79) [ClassicSimilarity], result of:
              0.057354435 = score(doc=79,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.22454272 = fieldWeight in 79, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.03125 = fieldNorm(doc=79)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    With the tremendous growth in data volume, and data being produced every second on millions of devices across the globe, there is a desperate need to manage the unstructured data available on web pages efficiently. The Semantic Web, also known as the Web of Trust, structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) which focuses on manipulating web data on behalf of humans. Because of its ability to integrate data from disparate sources, which makes it more user-friendly, the Semantic Web is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web, and since then it has come a long way to become a more intelligent and intuitive web. Data visualization plays an essential role in explaining complex concepts in a universal manner through pictorial representation, and the Semantic Web helps in broadening the potential of data visualization, making the two an appropriate combination. The objective of this chapter is to provide fundamental insights concerning semantic web technologies and, in addition, to elucidate the issues as well as the solutions regarding the semantic web. The purpose of this chapter is to highlight the semantic web architecture in detail while also comparing it with the traditional search system. It classifies the semantic web architecture into three major pillars, i.e., RDF, Ontology, and XML. Moreover, it describes different semantic web tools used in the framework and technology. It attempts to illustrate different approaches of semantic web search engines. Besides stating numerous challenges faced by the semantic web, it also illustrates the solutions.
  20. Ekstrand, M.D.; Wright, K.L.; Pera, M.S.: Enhancing classroom instruction with online news (2020) 0.04
    0.04298305 = product of:
      0.064474575 = sum of:
        0.04744636 = weight(_text_:search in 5844) [ClassicSimilarity], result of:
          0.04744636 = score(doc=5844,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.27153727 = fieldWeight in 5844, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5844)
        0.017028214 = product of:
          0.03405643 = sum of:
            0.03405643 = weight(_text_:22 in 5844) [ClassicSimilarity], result of:
              0.03405643 = score(doc=5844,freq=2.0), product of:
                0.17604718 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05027291 = queryNorm
                0.19345059 = fieldWeight in 5844, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5844)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose: This paper investigates how school teachers look for informational texts for their classrooms. Access to current, varied and authentic informational texts improves learning outcomes for K-12 students, but many teachers lack resources to expand and update readings. The Web offers freely available resources, but finding suitable ones is time-consuming. This research lays the groundwork for building tools to ease that burden.
    Design/methodology/approach: This paper reports qualitative findings from a study in two stages: (1) a set of semistructured interviews, based on the critical incident technique, eliciting teachers' information-seeking practices and challenges; and (2) observations of teachers using a prototype teaching-oriented news search tool under a think-aloud protocol.
    Findings: Teachers articulated different objectives and ways of using readings in their classrooms; goals and self-reported practices varied by experience level. Teachers struggled to formulate queries that are likely to return readings on specific course topics, instead searching directly for abstract topics. Experience differences did not translate into observable differences in search skill or success in the lab study.
    Originality/value: There is limited work on teachers' information-seeking practices, particularly on how teachers look for texts for classroom use. This paper describes how teachers look for information in this context, setting the stage for future development and research on how to support this use case. Understanding and supporting teachers looking for information is a rich area for future research, due to the complexity of the information need and the fact that teachers are not looking for information for themselves.
    Date
    20. 1.2015 18:30:22

Languages

  • e 157
  • d 33

Types

  • a 175
  • el 36
  • m 5
  • p 5
  • s 1
  • x 1