Search (12 results, page 1 of 1)

  • author_ss:"Toms, E.G."
  1. O'Brien, H.L.; Toms, E.G.: What is user engagement? : a conceptual framework for defining user engagement with technology (2008) 0.05
    0.04787399 = product of:
      0.09574798 = sum of:
        0.07984746 = weight(_text_:term in 1721) [ClassicSimilarity], result of:
          0.07984746 = score(doc=1721,freq=4.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.3645336 = fieldWeight in 1721, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1721)
        0.015900511 = product of:
          0.031801023 = sum of:
            0.031801023 = weight(_text_:22 in 1721) [ClassicSimilarity], result of:
              0.031801023 = score(doc=1721,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.19345059 = fieldWeight in 1721, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1721)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
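    The explain tree above follows Lucene's ClassicSimilarity formulas: tf = sqrt(termFreq), idf = ln(maxDocs/(docFreq+1)) + 1, queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, and a clause score of queryWeight x fieldWeight, scaled by the coord factors. A minimal Python sketch that reproduces the trace (taking the reported queryNorm and fieldNorm values as given, since both are derived from index-time state not shown here):

    ```python
    import math

    def idf(doc_freq, max_docs):
        # ClassicSimilarity: idf = ln(maxDocs / (docFreq + 1)) + 1
        return math.log(max_docs / (doc_freq + 1)) + 1

    def clause_weight(freq, doc_freq, max_docs, query_norm, field_norm):
        tf = math.sqrt(freq)                  # tf = sqrt(termFreq)
        i = idf(doc_freq, max_docs)
        query_weight = i * query_norm         # queryWeight = idf * queryNorm
        field_weight = tf * i * field_norm    # fieldWeight = tf * idf * fieldNorm
        return query_weight * field_weight

    QUERY_NORM = 0.04694356                   # taken from the trace

    # "term" clause: freq=4, docFreq=1130, fieldNorm=0.0390625
    w_term = clause_weight(4.0, 1130, 44218, QUERY_NORM, 0.0390625)

    # "22" clause: freq=2, docFreq=3622, fieldNorm=0.0390625,
    # scaled by its inner coord(1/2)
    w_22 = clause_weight(2.0, 3622, 44218, QUERY_NORM, 0.0390625) * 0.5

    # final score: sum of clause scores times the outer coord(2/4)
    score = (w_term + w_22) * 0.5
    print(w_term, w_22, score)   # agrees with the trace to ~1e-5
    ```

    The small residual differences from the printed trace come from Lucene computing in 32-bit floats while Python uses doubles.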
    
    Abstract
    The purpose of this article is to critically deconstruct the term engagement as it applies to people's experiences with technology. Through an extensive, critical multidisciplinary literature review and exploratory study of users of Web searching, online shopping, Webcasting, and gaming applications, we conceptually and operationally defined engagement. Building on past research, we conducted semistructured interviews with the users of four applications to explore their perception of being engaged with the technology. Results indicate that engagement is a process comprising four distinct stages: point of engagement, period of sustained engagement, disengagement, and reengagement. Furthermore, the process is characterized by attributes of engagement that pertain to the user, the system, and user-system interaction. We also found evidence of the factors that contribute to nonengagement. Emerging from this research is a definition of engagement - a term not defined consistently in past work - as a quality of user experience characterized by attributes of challenge, positive affect, endurability, aesthetic and sensory appeal, attention, feedback, variety/novelty, interactivity, and perceived user control. This exploratory work provides the foundation for future work to test the conceptual model in various application areas, and to develop methods to measure engaging user experiences.
    Date
    21. 3.2008 13:39:22
  2. Freund, L.; Toms, E.G.: Interacting with archival finding aids (2016) 0.03
    0.027245669 = product of:
      0.054491337 = sum of:
        0.0070626684 = product of:
          0.028250674 = sum of:
            0.028250674 = weight(_text_:based in 2851) [ClassicSimilarity], result of:
              0.028250674 = score(doc=2851,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.19973516 = fieldWeight in 2851, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2851)
          0.25 = coord(1/4)
        0.047428668 = product of:
          0.094857335 = sum of:
            0.094857335 = weight(_text_:assessment in 2851) [ClassicSimilarity], result of:
              0.094857335 = score(doc=2851,freq=2.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.36599535 = fieldWeight in 2851, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2851)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This research aimed to gain a detailed understanding of how genealogists and historians interact with, and make use of, finding aids in print and digital form. The study uses the lens of human information interaction to investigate finding aid use. Data were collected through a lab-based study of 32 experienced archives users who completed two tasks with each of two finding aids. Participants were able to carry out the tasks, but they were somewhat challenged by the structure of the finding aid and employed various techniques to cope. Their patterns of interaction differed by task type and they reported higher rates of satisfaction, ease of use, and clarity for the assessment task than the known-item task. Four common patterns of interaction were identified: top-down, bottom-up, interrogative, and opportunistic. Results show how users interact with finding aids and identify features that support and hinder use. This research examines process and performance in addition to outcomes. Results contribute to the archival science literature and also suggest ways to extend models of human information interaction.
  3. Wildemuth, B.; Freund, L.; Toms, E.G.: Untangling search task complexity and difficulty in the context of interactive information retrieval studies (2014) 0.01
    0.010893034 = product of:
      0.021786068 = sum of:
        0.005885557 = product of:
          0.023542227 = sum of:
            0.023542227 = weight(_text_:based in 1786) [ClassicSimilarity], result of:
              0.023542227 = score(doc=1786,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.16644597 = fieldWeight in 1786, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1786)
          0.25 = coord(1/4)
        0.015900511 = product of:
          0.031801023 = sum of:
            0.031801023 = weight(_text_:22 in 1786) [ClassicSimilarity], result of:
              0.031801023 = score(doc=1786,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.19345059 = fieldWeight in 1786, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1786)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - One core element of interactive information retrieval (IIR) experiments is the assignment of search tasks. The purpose of this paper is to provide an analytical review of current practice in developing those search tasks to test, observe or control task complexity and difficulty. Design/methodology/approach - Over 100 prior studies of IIR were examined in terms of how each defined task complexity and/or difficulty (or related concepts) and subsequently interpreted those concepts in the development of the assigned search tasks. Findings - Search task complexity is found to include three dimensions: multiplicity of subtasks or steps, multiplicity of facets, and indeterminability. Search task difficulty is based on an interaction between the search task and the attributes of the searcher or the attributes of the search situation. The paper highlights the anomalies in our use of these two concepts, concluding with suggestions for future methodological research related to search task complexity and difficulty. Originality/value - By analyzing and synthesizing current practices, this paper provides guidance for future experiments in IIR that involve these two constructs.
    Date
    6. 4.2015 19:31:22
  4. Toms, E.G.; Taves, A.R.: Measuring user perceptions of Web site reputation (2004) 0.01
    0.009880973 = product of:
      0.039523892 = sum of:
        0.039523892 = product of:
          0.079047784 = sum of:
            0.079047784 = weight(_text_:assessment in 2565) [ClassicSimilarity], result of:
              0.079047784 = score(doc=2565,freq=2.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.30499613 = fieldWeight in 2565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2565)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In this study, we compare a search tool, TOPIC, with three other widely used tools that retrieve information from the Web: AltaVista, Google, and Lycos. These tools use different techniques for outputting and ranking Web sites: external link structure (TOPIC and Google) and semantic content analysis (AltaVista and Lycos). TOPIC purports to output, and highly rank within its hit list, reputable Web sites for searched topics. In this study, 80 participants reviewed the output (i.e., highly ranked sites) from each tool and assessed the quality of retrieved sites. The 4800 individual assessments of 240 sites that represent 12 topics indicated that Google tends to identify and highly rank significantly more reputable Web sites than TOPIC, which, in turn, outputs more than AltaVista and Lycos, but this was not consistent from topic to topic. Metrics derived from reputation research were used in the assessment and a factor analysis was employed to identify a key factor, which we call 'repute'. The results of this research include insight into the factors that Web users consider in formulating perceptions of Web site reputation, and insight into which search tools are outputting reputable sites for Web users. Our findings, we believe, have implications for Web users and suggest the need for future research to assess the relationship between Web page characteristics and their perceived reputation.
  5. Toms, E.G.: Task-based information searching and retrieval (2011) 0.00
    0.00411989 = product of:
      0.01647956 = sum of:
        0.01647956 = product of:
          0.06591824 = sum of:
            0.06591824 = weight(_text_:based in 544) [ClassicSimilarity], result of:
              0.06591824 = score(doc=544,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.46604872 = fieldWeight in 544, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.109375 = fieldNorm(doc=544)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
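    This trace shows the coord penalty at its sharpest: the single matching clause sits in one of four slots at each of two levels of the query tree, so coord(1/4) is applied twice. A short Python check of the arithmetic (queryNorm and fieldNorm again taken as given from the trace):

    ```python
    import math

    MAX_DOCS = 44218
    QUERY_NORM = 0.04694356        # taken from the trace

    # "based" clause in doc 544: freq=2, docFreq=5906, fieldNorm=0.109375
    idf = math.log(MAX_DOCS / (5906 + 1)) + 1   # ~3.0129938
    tf = math.sqrt(2.0)                         # ~1.4142135
    query_weight = idf * QUERY_NORM             # ~0.14144066
    field_weight = tf * idf * 0.109375          # ~0.46604872
    weight = query_weight * field_weight        # ~0.06591824

    # coord(1/4) applied at both the inner and the outer level
    score = weight * 0.25 * 0.25                # ~0.00411989
    ```

    A document matching only one low-idf clause of a four-clause query thus keeps just 1/16 of its raw term weight, which is why this hit ranks near the bottom of the list.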
    
  6. Bartlett, J.C.; Toms, E.G.: Developing a protocol for bioinformatics analysis : an integrated information behavior and task analysis approach (2005) 0.00
    0.003975128 = product of:
      0.015900511 = sum of:
        0.015900511 = product of:
          0.031801023 = sum of:
            0.031801023 = weight(_text_:22 in 5256) [ClassicSimilarity], result of:
              0.031801023 = score(doc=5256,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.19345059 = fieldWeight in 5256, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5256)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 14:28:55
  7. Dufour, C.; Bartlett, J.C.; Toms, E.G.: Understanding how webcasts are used as sources of information (2011) 0.00
    0.003975128 = product of:
      0.015900511 = sum of:
        0.015900511 = product of:
          0.031801023 = sum of:
            0.031801023 = weight(_text_:22 in 4195) [ClassicSimilarity], result of:
              0.031801023 = score(doc=4195,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.19345059 = fieldWeight in 4195, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4195)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 1.2011 14:16:14
  8. Toms, E.G.: What motivates the browser? (1999) 0.00
    0.003180102 = product of:
      0.012720408 = sum of:
        0.012720408 = product of:
          0.025440816 = sum of:
            0.025440816 = weight(_text_:22 in 292) [ClassicSimilarity], result of:
              0.025440816 = score(doc=292,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.15476047 = fieldWeight in 292, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=292)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 3.2002 9:44:47
  9. Toms, E.G.; O'Brien, H.L.: Understanding the information and communication technology needs of the e-humanist (2008) 0.00
    0.002548521 = product of:
      0.010194084 = sum of:
        0.010194084 = product of:
          0.040776335 = sum of:
            0.040776335 = weight(_text_:based in 1731) [ClassicSimilarity], result of:
              0.040776335 = score(doc=1731,freq=6.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.28829288 = fieldWeight in 1731, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1731)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The purpose of this paper is to understand the needs of humanists with respect to information and communication technology (ICT) in order to prescribe the design of an e-humanist's workbench. Design/methodology/approach - A web-based survey comprising over 60 questions gathered the following data from 169 humanists: profile of the humanist, use of ICT in teaching, e-texts, text analysis tools, access to and use of primary and secondary sources, and use of collaboration and communication tools. Findings - Humanists conduct varied forms of research and use multiple techniques. They rely on the availability of inexpensive, quality-controlled e-texts for their research. The existence of primary sources in digital form influences the type of research conducted. They are unaware of existing tools for conducting text analyses, but expressed a need for better tools. Search engines have replaced the library catalogue as the key access tool for sources. Research continues to be solitary with little collaboration among scholars. Research limitations/implications - The results are based on a self-selected sample of humanists who responded to a web-based survey. Future research needs to examine the work of the scholar at a more detailed level, preferably through observation and/or interviewing. Practical implications - The findings support a five-part framework that could serve as the basis for the design of an e-humanist's workbench. Originality/value - The paper examines the needs of the humanist, founded on an integration of information science research and humanities computing for a more comprehensive understanding of the humanist at work.
  10. Toms, E.G.: Free-Nets : delivering information to the community (1994) 0.00
    0.0023542228 = product of:
      0.009416891 = sum of:
        0.009416891 = product of:
          0.037667565 = sum of:
            0.037667565 = weight(_text_:based in 579) [ClassicSimilarity], result of:
              0.037667565 = score(doc=579,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.26631355 = fieldWeight in 579, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0625 = fieldNorm(doc=579)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Computer-based systems are increasingly used by society for everyday activities. Yet these systems are rarely exploited to meet personal information needs. One development that may change this imbalance is the community online system. This paper examines one type of community online system, the Free-Net, and discusses its usefulness in delivering the information and services typically provided by community information centers
  11. Toms, E.G.; Freund, L.; Li, C.: WiIRE: the Web Interactive information retrieval experimentation system prototype (2004) 0.00
    0.0017656671 = product of:
      0.0070626684 = sum of:
        0.0070626684 = product of:
          0.028250674 = sum of:
            0.028250674 = weight(_text_:based in 2534) [ClassicSimilarity], result of:
              0.028250674 = score(doc=2534,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.19973516 = fieldWeight in 2534, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2534)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    We introduce WiIRE, a prototype system for conducting interactive information retrieval (IIR) experiments via the Internet. We conceived WiIRE to increase validity while streamlining procedures and adding efficiencies to the conduct of IIR experiments. The system incorporates password-controlled access, online questionnaires, study instructions and tutorials, conditional interface assignment, and conditional query assignment as well as provision for data collection. As an initial evaluation, we used WiIRE in-house to conduct a Web-based IIR experiment using an external search engine with customized search interfaces and the TREC 11 Interactive Track search queries. Our evaluation of the prototype indicated significant cost efficiencies in the conduct of IIR studies, and additionally had some novel findings about the human perspective: about half of the participants would have preferred some personal contact with the researcher, and participants spent a significantly decreasing amount of time on tasks over the course of a session.
  12. Toms, E.G.; Campbell, D.G.; Blades, R.: Does genre define the shape of information? : the role of form and function in user interaction with digital documents (1999) 0.00
    0.0016646868 = product of:
      0.0066587473 = sum of:
        0.0066587473 = product of:
          0.02663499 = sum of:
            0.02663499 = weight(_text_:based in 6699) [ClassicSimilarity], result of:
              0.02663499 = score(doc=6699,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.18831211 = fieldWeight in 6699, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=6699)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Documents belonging to a genre have a definite structure which has evolved within specific discourse communities to the point where its use is fixed and standardized. We speculate that such a structure exhibits a strong visual cue, facilitating document recognition and defining a shape of information. To test the concept of shape, 72 participants from two groups (half currently working in an academic setting and half from the general public) examined 24 documents typically used in the academic environment. The documents were in three versions: one based on form, in which the text was masked, leaving only the layout; a second based on content, in which the document was reduced to its semantic information only; and the full version, the original unaltered document. On examining each of the 24 documents (e.g., journal article, call for papers, annotated bibliography) in one of the three versions, participants identified the type of document and its recognizable and/or unfamiliar features. In addition, they assessed 8 print versions of the form document for suggestive features of shape. Two variables were tested: the genre element (form or content) and the participant's membership in the academic community. Not unexpectedly, participants identified more documents in the Full and Content versions than the Form versions. But Form versions were recognized twice as quickly as the other two versions. Thus when document shape was evident, the document was immediately discernible to participants; when participants were required to read the semantic content for a gist of the document and an extrapolation of its contents, it took more time. Surprisingly, discourse community had no effect.