Search (6 results, page 1 of 1)

  • author_ss:"Blandford, A."
  1. Makri, S.; Blandford, A.: Coming across information serendipitously : Part 2: A classification framework (2012) 0.08
    0.08133544 = product of:
      0.16267088 = sum of:
        0.14322878 = weight(_text_:space in 396) [ClassicSimilarity], result of:
          0.14322878 = score(doc=396,freq=8.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.5765547 = fieldWeight in 396, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.0390625 = fieldNorm(doc=396)
        0.019442094 = product of:
          0.03888419 = sum of:
            0.03888419 = weight(_text_:model in 396) [ClassicSimilarity], result of:
              0.03888419 = score(doc=396,freq=2.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.21242073 = fieldWeight in 396, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=396)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - In "Coming across information serendipitously - Part 1: a process model" the authors identified common elements of researchers' experiences of "coming across information serendipitously". These experiences involve a mix of unexpectedness and insight and lead to a valuable, unanticipated outcome. In this article, the authors aim to show how the elements of unexpectedness, insight and value form a framework for subjectively classifying whether a particular experience might be considered serendipitous and, if so, just how serendipitous.
    Design/methodology/approach - The classification framework was constructed by analysing 46 experiences of coming across information serendipitously provided by 28 interdisciplinary researchers during critical incident interviews. "Serendipity stories" were written to summarise each experience and to facilitate their comparison. The common elements of unexpectedness, insight and value were identified in almost all the experiences.
    Findings - The presence of different mixes of unexpectedness, insight and value in the interviewees' experiences defines a multi-dimensional conceptual space (which the authors call the "serendipity space"). In this space, different "strengths" of serendipity exist. The classification framework can be used to reason about whether an experience falls within the serendipity space and, if so, how "pure" or "dilute" it is.
    Originality/value - The framework provides researchers from various disciplines with a structured means of reasoning about and classifying potentially serendipitous experiences.
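    The score breakdown shown with each result is Lucene's ClassicSimilarity explain output: for every matching query term, tf(freq) = sqrt(freq) is multiplied by idf, queryNorm and fieldNorm, and the coord() factors scale the sum by the fraction of query clauses that matched. The following minimal Python sketch (variable names are ours, not part of the search system) reproduces the 0.08 ranking score of result 1 from the factors listed in the tree above:

    import math

    def clause_score(freq, idf, query_norm, field_norm):
        # queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm,
        # with tf(freq) = sqrt(freq); clause score = queryWeight * fieldWeight
        query_weight = idf * query_norm
        field_weight = math.sqrt(freq) * idf * field_norm
        return query_weight * field_weight

    query_norm = 0.047605187   # same queryNorm for every clause of this query
    field_norm = 0.0390625     # fieldNorm(doc=396)

    # "space" clause: freq=8.0, idf=5.2183776  -> ~0.14322878
    space = clause_score(8.0, 5.2183776, query_norm, field_norm)

    # "model" clause: freq=2.0, idf=3.845226, scaled by the inner coord(1/2)
    model = 0.5 * clause_score(2.0, 3.845226, query_norm, field_norm)  # ~0.019442094

    # outer coord(2/4): only 2 of the 4 query clauses matched this document
    total = 0.5 * (space + model)
    print(round(total, 8))  # ~0.08133544, the ranking score displayed above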
  2. Pontis, S.; Blandford, A.; Greifeneder, E.; Attalla, H.; Neal, D.: Keeping up to date : an academic researcher's information journey (2017) 0.02
    0.017783355 = product of:
      0.07113342 = sum of:
        0.07113342 = sum of:
          0.03888419 = weight(_text_:model in 3340) [ClassicSimilarity], result of:
            0.03888419 = score(doc=3340,freq=2.0), product of:
              0.1830527 = queryWeight, product of:
                3.845226 = idf(docFreq=2569, maxDocs=44218)
                0.047605187 = queryNorm
              0.21242073 = fieldWeight in 3340, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.845226 = idf(docFreq=2569, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3340)
          0.032249227 = weight(_text_:22 in 3340) [ClassicSimilarity], result of:
            0.032249227 = score(doc=3340,freq=2.0), product of:
              0.16670525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047605187 = queryNorm
              0.19345059 = fieldWeight in 3340, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3340)
      0.25 = coord(1/4)
    
    Abstract
    Keeping up to date with research developments is a central activity of academic researchers, but researchers face difficulties in managing the rapid growth of available scientific information. This study examined how researchers stay up to date, using the information journey model as a framework for analysis and investigating which dimensions influence information behaviors. We designed a 2-round study involving semistructured interviews and prototype testing with 61 researchers with 3 levels of seniority (PhD student to professor). Data were analyzed following a semistructured qualitative approach. Five key dimensions that influence information behaviors were identified: level of seniority, information sources, state of the project, level of familiarity, and how well defined the relevant community is. These dimensions are interrelated and their values determine the flow of the information journey. Across all levels of professional expertise, researchers used similar hard (formal) sources to access content, while soft (interpersonal) sources were used to filter information. An important "pain point" that future information tools should address is helping researchers filter information at the point of need.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.1, pp. 22-35
  3. Makri, S.; Blandford, A.: Coming across information serendipitously : Part 1: A process model (2012) 0.01
    0.011665257 = product of:
      0.046661027 = sum of:
        0.046661027 = product of:
          0.09332205 = sum of:
            0.09332205 = weight(_text_:model in 644) [ClassicSimilarity], result of:
              0.09332205 = score(doc=644,freq=8.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.50980973 = fieldWeight in 644, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.046875 = fieldNorm(doc=644)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - This research seeks to gain a detailed understanding of how researchers come across information serendipitously, grounded in real-world examples. This research was undertaken to enrich the theoretical understanding of this slippery phenomenon.
    Design/methodology/approach - Semi-structured critical incident interviews were conducted with 28 interdisciplinary researchers. Interviewees were asked to discuss memorable examples of coming across information serendipitously from their research or everyday life. The data collection and analysis process followed many of the core principles of grounded theory methodology.
    Findings - The examples provided were varied, but shared common elements (they involved a mix of unexpectedness and insight and led to a valuable, unanticipated outcome). These elements form part of an empirically grounded process model of serendipity. In this model, a new connection is made that involves a mix of unexpectedness and insight and has the potential to lead to a valuable outcome. Projections are made on the potential value of the outcome and actions are taken to exploit the connection, leading to an (unanticipated) valuable outcome.
    Originality/value - The model provides researchers across disciplines with a structured means of understanding and describing serendipitous experiences.
  4. Makri, S.; Blandford, A.; Cox, A.L.: Investigating the information-seeking behaviour of academic lawyers : from Ellis's model to design (2008) 0.01
    0.008418675 = product of:
      0.0336747 = sum of:
        0.0336747 = product of:
          0.0673494 = sum of:
            0.0673494 = weight(_text_:model in 2052) [ClassicSimilarity], result of:
              0.0673494 = score(doc=2052,freq=6.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.36792353 = fieldWeight in 2052, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2052)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Information-seeking is important for lawyers, who have access to many dedicated electronic resources. However, there is considerable scope for improving the design of these resources to better support information-seeking. One way of informing design is to use information-seeking models as theoretical lenses to analyse users' behaviour with existing systems. However, many models, including those informed by studying lawyers, analyse information-seeking at a high level of abstraction and are only likely to lead to broad-scoped design insights. We illustrate that one potentially useful (and lower-level) model is Ellis's, by using it as a lens to analyse and make design suggestions based on the information-seeking behaviour of 27 academic lawyers, who were asked to think aloud whilst using electronic legal resources to find information for their work. We identify similar information-seeking behaviours to those originally found by Ellis and his colleagues in scientific domains, along with several that were not identified in previous studies, such as 'updating' (which we believe is particularly pertinent to legal information-seeking). We also present a refinement of Ellis's model based on the identification of several levels at which the behaviours were found to operate, and of sets of mutually exclusive subtypes of behaviours.
  5. Makri, S.; Blandford, A.; Cox, A.L.: Using information behaviors to evaluate the functionality and usability of electronic resources : from Ellis's model to evaluation (2008) 0.01
    0.008418675 = product of:
      0.0336747 = sum of:
        0.0336747 = product of:
          0.0673494 = sum of:
            0.0673494 = weight(_text_:model in 2687) [ClassicSimilarity], result of:
              0.0673494 = score(doc=2687,freq=6.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.36792353 = fieldWeight in 2687, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2687)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Information behavior (IB) research involves examining how people look for and use information, often with the sole purpose of gaining insights into the behavior displayed. However, it is also possible to examine IB with the purpose of using the insights gained to design new tools or improve the design of existing tools to support information seeking and use. This approach is advocated by David Ellis who, over two decades ago, presented a model of information seeking behaviors and made suggestions for how electronic tools might be designed to support these behaviors. Ellis also recognized that IBs might be used as the basis for evaluating as well as designing electronic resources. In this article, we present the IB evaluation methods: two novel methods that, based on an extension of Ellis's model, use the empirically observed IBs of lawyers as a framework for structuring user-centered evaluations of the functionality and usability of electronic resources. We illustrate their use through the discussion of two examples, and we discuss benefits and limitations, grounded in specific features of the methods.
  6. Pontis, S.; Blandford, A.: Understanding "influence" : an empirical test of the Data-Frame Theory of Sensemaking (2016) 0.01
    0.008418675 = product of:
      0.0336747 = sum of:
        0.0336747 = product of:
          0.0673494 = sum of:
            0.0673494 = weight(_text_:model in 2847) [ClassicSimilarity], result of:
              0.0673494 = score(doc=2847,freq=6.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.36792353 = fieldWeight in 2847, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2847)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This paper reports findings from a study designed to gain a broader understanding of sensemaking activities, using the Data/Frame Theory as the analytical framework. Although this theory is one of the dominant models of sensemaking, it has not been extensively tested with a range of sensemaking tasks. The tasks discussed here focused on making sense of structures rather than processes or narratives. Eleven researchers were asked to construct understanding of how a scientific community in a particular domain is organized (e.g., people, relationships, contributions, factors) by exploring the concept of "influence" in academia. This topic was chosen because, although researchers frequently handle this type of task, it is unlikely that they have explicitly sought this type of information. We conducted a think-aloud study and semistructured interviews with junior and senior researchers from the human-computer interaction (HCI) domain, asking them to identify current leaders and rising stars in both HCI and chemistry. Data were coded and analyzed using the Data/Frame Model to both test and extend the model. Three themes emerged from the analysis: novices' and experts' sensemaking activity chains, constructing frames through indicators, and characteristics of structure tasks. We propose extensions to the Data/Frame Model to accommodate structure sensemaking.