Search (9 results, page 1 of 1)

  • author_ss:"Blandford, A."
  1. Attfield, S.; Blandford, A.; Dowell, J.: Information seeking in the context of writing : a design psychology interpretation of the "problematic situation" (2003) 0.01
    0.012067945 = product of:
      0.04827178 = sum of:
        0.04827178 = product of:
          0.09654356 = sum of:
            0.09654356 = weight(_text_:design in 4451) [ClassicSimilarity], result of:
              0.09654356 = score(doc=4451,freq=10.0), product of:
                0.17322445 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046071928 = queryNorm
                0.55733216 = fieldWeight in 4451, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4451)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Information seeking does not occur in a vacuum but is invariably motivated by some wider task. It is well accepted that to understand information seeking we must understand the task context within which it takes place. Writing is among the most common tasks within which information seeking is embedded. This paper considers how writing can be understood in order to account for embedded information seeking. Following Sharples, the paper treats writing as a design activity and explores parallels between the psychology of design and information seeking. Significant parallels can be found, and ideas from the psychology of design offer explanations for a number of information-seeking phenomena. The paper then develops a design-oriented representation of writing tasks as a means of accounting for phenomena such as information-seeking uncertainty and focus refinement, and illustrates the representation with scenarios describing the work of newspaper journalists.
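    The ClassicSimilarity breakdown above can be reproduced arithmetically. A minimal Python sketch, assuming the standard Lucene TF-IDF formulation shown in the explain tree (sqrt term frequency, log-based idf, and the coord factors) rather than any actual Lucene API:

    ```python
    import math

    # Values taken from the explain tree for result 1 (doc 4451, term "design")
    freq = 10.0            # term frequency of "design" in the field
    doc_freq = 2798        # documents containing "design"
    max_docs = 44218       # total documents in the index
    field_norm = 0.046875
    query_norm = 0.046071928

    tf = math.sqrt(freq)                           # 3.1622777 = tf(freq=10.0)
    idf = 1 + math.log(max_docs / (doc_freq + 1))  # 3.7598698 = idf(docFreq=2798)
    query_weight = idf * query_norm                # 0.17322445 = queryWeight
    field_weight = tf * idf * field_norm           # 0.55733216 = fieldWeight
    term_score = query_weight * field_weight       # 0.09654356 = weight(_text_:design)

    # coord factors from the tree: 0.5 = coord(1/2), then 0.25 = coord(1/4)
    score = term_score * 0.5 * 0.25                # 0.012067945, the displayed score
    print(score)
    ```

    Multiplying out the tree bottom-up in this way recovers each intermediate value in the explanation, which is a convenient sanity check when comparing the relative ranking of the nine results.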
  2. Makri, S.; Blandford, A.; Cox, A.L.: Investigating the information-seeking behaviour of academic lawyers : from Ellis's model to design (2008) 0.01
    0.0100566195 = product of:
      0.040226478 = sum of:
        0.040226478 = product of:
          0.080452956 = sum of:
            0.080452956 = weight(_text_:design in 2052) [ClassicSimilarity], result of:
              0.080452956 = score(doc=2052,freq=10.0), product of:
                0.17322445 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046071928 = queryNorm
                0.46444345 = fieldWeight in 2052, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2052)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Information-seeking is important for lawyers, who have access to many dedicated electronic resources. However, there is considerable scope for improving the design of these resources to better support information-seeking. One way of informing design is to use information-seeking models as theoretical lenses to analyse users' behaviour with existing systems. However, many models, including those informed by studying lawyers, analyse information-seeking at a high level of abstraction and are likely to lead only to broad-scoped design insights. We illustrate that one potentially useful (and lower-level) model is Ellis's, by using it as a lens to analyse, and make design suggestions based on, the information-seeking behaviour of 27 academic lawyers, who were asked to think aloud whilst using electronic legal resources to find information for their work. We identify information-seeking behaviours similar to those originally found by Ellis and his colleagues in scientific domains, along with several not identified in previous studies, such as 'updating' (which we believe is particularly pertinent to legal information-seeking). We also present a refinement of Ellis's model based on the identification of several levels at which the behaviours were found to operate, and of sets of mutually exclusive subtypes of behaviours.
  3. Makri, S.; Blandford, A.; Cox, A.L.: Using information behaviors to evaluate the functionality and usability of electronic resources : from Ellis's model to evaluation (2008) 0.01
    0.006360365 = product of:
      0.02544146 = sum of:
        0.02544146 = product of:
          0.05088292 = sum of:
            0.05088292 = weight(_text_:design in 2687) [ClassicSimilarity], result of:
              0.05088292 = score(doc=2687,freq=4.0), product of:
                0.17322445 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046071928 = queryNorm
                0.29373983 = fieldWeight in 2687, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2687)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Information behavior (IB) research involves examining how people look for and use information, often with the sole purpose of gaining insights into the behavior displayed. However, it is also possible to examine IB with the purpose of using the insights gained to design new tools, or to improve the design of existing tools, to support information seeking and use. This approach is advocated by David Ellis who, over two decades ago, presented a model of information-seeking behaviors and made suggestions for how electronic tools might be designed to support these behaviors. Ellis also recognized that IBs might be used as the basis for evaluating, as well as designing, electronic resources. In this article, we present two novel IB evaluation methods which, based on an extension of Ellis's model, use the empirically observed IBs of lawyers as a framework for structuring user-centered evaluations of the functionality and usability of electronic resources. We illustrate the use of the methods through the discussion of two examples, and we discuss their benefits and limitations, grounded in specific features of the methods.
  4. Attfield, S.; Blandford, A.: Conceptual misfits in Email-based current-awareness interaction (2011) 0.01
    0.006360365 = product of:
      0.02544146 = sum of:
        0.02544146 = product of:
          0.05088292 = sum of:
            0.05088292 = weight(_text_:design in 4490) [ClassicSimilarity], result of:
              0.05088292 = score(doc=4490,freq=4.0), product of:
                0.17322445 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046071928 = queryNorm
                0.29373983 = fieldWeight in 4490, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4490)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - This research aims to identify some requirements for supporting user interactions with electronic current-awareness alert systems based on data from a professional work environment. Design/methodology/approach - Qualitative data were gathered using contextual inquiry observations with 21 workers at the London office of an international law firm. The analysis uses CASSM ("Concept-based Analysis of Surface and Structural Misfits"), a usability evaluation method structured around identifying mismatches, or "misfits", between user-concepts and concepts represented within a system. Findings - Participants were frequently overwhelmed by e-mail alerts, and a key requirement is to support efficient interaction. Several misfits, which act as barriers to efficient reviewing and follow-on activities, are demonstrated. These relate to a lack of representation of key user-concepts at the interface and/or within the system, including alert items and their properties, source documents, "back-story", primary sources, content categorisations and user collections. Research limitations/implications - Given these misfits, a set of requirements is derived to improve the efficiency with which users can achieve key outcomes with current-awareness information as these occur within a professional work environment. Originality/value - The findings will be of interest to current-awareness providers. The approach is relevant to information interaction researchers interested in deriving design requirements from naturalistic studies.
  5. Blandford, A.; Adams, A.; Attfield, S.; Buchanan, G.; Gow, J.; Makri, S.; Rimmer, J.; Warwick, C.: The PRET A Rapporter framework : evaluating digital libraries from the perspective of information work (2008) 0.01
    0.0053969487 = product of:
      0.021587795 = sum of:
        0.021587795 = product of:
          0.04317559 = sum of:
            0.04317559 = weight(_text_:design in 2021) [ClassicSimilarity], result of:
              0.04317559 = score(doc=2021,freq=2.0), product of:
                0.17322445 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046071928 = queryNorm
                0.24924651 = fieldWeight in 2021, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2021)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The strongest tradition of IR systems evaluation has focused on system effectiveness; more recently, there has been a growing interest in evaluation of Interactive IR systems, balancing system and user-oriented evaluation criteria. In this paper we shift the focus to considering how IR systems, and particularly digital libraries, can be evaluated to assess (and improve) their fit with users' broader work activities. Taking this focus, we answer a different set of evaluation questions that reveal more about the design of interfaces, user-system interactions and how systems may be deployed in the information working context. The planning and conduct of such evaluation studies share some features with the established methods for conducting IR evaluation studies, but come with a shift in emphasis; for example, a greater range of ethical considerations may be pertinent. We present the PRET A Rapporter framework for structuring user-centred evaluation studies and illustrate its application to three evaluation studies of digital library systems.
  6. Makri, S.; Blandford, A.: Coming across information serendipitously : Part 1: A process model (2012) 0.01
    0.0053969487 = product of:
      0.021587795 = sum of:
        0.021587795 = product of:
          0.04317559 = sum of:
            0.04317559 = weight(_text_:design in 644) [ClassicSimilarity], result of:
              0.04317559 = score(doc=644,freq=2.0), product of:
                0.17322445 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046071928 = queryNorm
                0.24924651 = fieldWeight in 644, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046875 = fieldNorm(doc=644)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - This research seeks to gain a detailed understanding, grounded in real-world examples, of how researchers come across information serendipitously, with the aim of enriching theoretical understanding of this slippery phenomenon. Design/methodology/approach - Semi-structured critical incident interviews were conducted with 28 interdisciplinary researchers, who were asked to discuss memorable examples of coming across information serendipitously from their research or everyday life. The data collection and analysis process followed many of the core principles of grounded theory methodology. Findings - The examples provided were varied but shared common elements: they involved a mix of unexpectedness and insight and led to a valuable, unanticipated outcome. These elements form part of an empirically grounded process model of serendipity, in which a new connection is made that involves a mix of unexpectedness and insight and has the potential to lead to a valuable outcome. Projections are made on the potential value of the outcome, and actions are taken to exploit the connection, leading to an (unanticipated) valuable outcome. Originality/value - The model provides researchers across disciplines with a structured means of understanding and describing serendipitous experiences.
  7. Makri, S.; Blandford, A.: Coming across information serendipitously : Part 2: A classification framework (2012) 0.00
    0.0044974573 = product of:
      0.01798983 = sum of:
        0.01798983 = product of:
          0.03597966 = sum of:
            0.03597966 = weight(_text_:design in 396) [ClassicSimilarity], result of:
              0.03597966 = score(doc=396,freq=2.0), product of:
                0.17322445 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046071928 = queryNorm
                0.20770542 = fieldWeight in 396, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=396)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - In "Coming across information serendipitously - Part 1: a process model" the authors identified common elements of researchers' experiences of "coming across information serendipitously". These experiences involve a mix of unexpectedness and insight and lead to a valuable, unanticipated outcome. In this article, the authors aim to show how the elements of unexpectedness, insight and value form a framework for subjectively classifying whether a particular experience might be considered serendipitous and, if so, just how serendipitous. Design/methodology/approach - The classification framework was constructed by analysing 46 experiences of coming across information serendipitously provided by 28 interdisciplinary researchers during critical incident interviews. "Serendipity stories" were written to summarise each experience and to facilitate their comparison. The common elements of unexpectedness, insight and value were identified in almost all the experiences. Findings - The presence of different mixes of unexpectedness, insight and value in the interviewees' experiences define a multi-dimensional conceptual space (which the authors call the "serendipity space"). In this space, different "strengths" of serendipity exist. The classification framework can be used to reason about whether an experience falls within the serendipity space and, if so, how "pure" or "dilute" it is. Originality/value - The framework provides researchers from various disciplines with a structured means of reasoning about and classifying potentially serendipitous experiences.
  8. Makri, S.; Blandford, A.; Woods, M.; Sharples, S.; Maxwell, D.: "Making my own luck" : serendipity strategies and how to support them in digital information environments (2014) 0.00
    0.0044974573 = product of:
      0.01798983 = sum of:
        0.01798983 = product of:
          0.03597966 = sum of:
            0.03597966 = weight(_text_:design in 1525) [ClassicSimilarity], result of:
              0.03597966 = score(doc=1525,freq=2.0), product of:
                0.17322445 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046071928 = queryNorm
                0.20770542 = fieldWeight in 1525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1525)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Serendipity occurs when unexpected circumstances and an "aha" moment of insight result in a valuable, unanticipated outcome. Designing digital information environments to support serendipity can not only provide users with new knowledge but also propel them in directions they might not otherwise have traveled, surprising and delighting them along the way. As serendipity involves unexpected circumstances, it cannot be directly controlled, but it can potentially be influenced. However, to the best of our knowledge, no previous work has focused on providing a rich empirical understanding of how it might be influenced. We interviewed 14 creative professionals to identify their self-reported strategies aimed at increasing the likelihood of serendipity. These strategies form a framework for examining the ways existing digital environments support serendipity and for considering how future environments can create opportunities for it. This is a new way of thinking about how to design for serendipity: by supporting the strategies found to increase its likelihood, rather than attempting to support serendipity as a discrete phenomenon, digital environments not only have the potential to help users experience serendipity but can also encourage them to adopt the strategies necessary to experience it more often.
  9. Pontis, S.; Blandford, A.; Greifeneder, E.; Attalla, H.; Neal, D.: Keeping up to date : an academic researcher's information journey (2017) 0.00
    0.003901319 = product of:
      0.015605276 = sum of:
        0.015605276 = product of:
          0.031210553 = sum of:
            0.031210553 = weight(_text_:22 in 3340) [ClassicSimilarity], result of:
              0.031210553 = score(doc=3340,freq=2.0), product of:
                0.16133605 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046071928 = queryNorm
                0.19345059 = fieldWeight in 3340, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3340)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.1, S.22-35