Search (7 results, page 1 of 1)

  • author_ss:"Borlund, P."
  1. Jepsen, E.T.; Seiden, P.; Ingwersen, P.; Björneborn, L.; Borlund, P.: Characteristics of scientific Web publications : preliminary data gathering and analysis (2004) 0.08
    0.08380928 = product of:
      0.12571391 = sum of:
        0.07501928 = weight(_text_:search in 3091) [ClassicSimilarity], result of:
          0.07501928 = score(doc=3091,freq=10.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.4293381 = fieldWeight in 3091, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3091)
        0.05069464 = product of:
          0.10138928 = sum of:
            0.10138928 = weight(_text_:engines in 3091) [ClassicSimilarity], result of:
              0.10138928 = score(doc=3091,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.39693922 = fieldWeight in 3091, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3091)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Because of the increasing presence of scientific publications on the Web, combined with the existing difficulties in easily verifying and retrieving these publications, research on techniques and methods for retrieval of scientific Web publications is called for. In this article, we report on the initial steps taken toward the construction of a test collection of scientific Web publications within the subject domain of plant biology. The steps reported are those of data gathering and data analysis aiming at identifying characteristics of scientific Web publications. The data used in this article were generated based on specifically selected domain topics that are searched for in three publicly accessible search engines (Google, AllTheWeb, and AltaVista). A sample of the retrieved hits was analyzed with regard to how various publication attributes correlated with the scientific quality of the content and whether this information could be employed to harvest, filter, and rank Web publications. The attributes analyzed were inlinks, outlinks, bibliographic references, file format, language, search engine overlap, structural position (according to site structure), and the occurrence of various types of metadata. As could be expected, the ranked output differs between the three search engines. Apparently, this is caused by differences in ranking algorithms rather than the databases themselves. In fact, because scientific Web content in this subject domain receives few inlinks, both AltaVista and AllTheWeb retrieved a higher degree of accessible scientific content than Google. Because of the search engine cutoffs of accessible URLs, the feasibility of using search engine output for Web content analysis is also discussed.
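
    A note on the relevance scores: the breakdown shown under each hit is Lucene's ClassicSimilarity (TF-IDF) explain output. As a minimal sketch, assuming the standard ClassicSimilarity definitions tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)) (these reproduce the idf and queryWeight constants printed above), the top hit's score of 0.08380928 can be recomputed in Python:

      import math

      QUERY_NORM = 0.05027291   # queryNorm taken from the explain output above
      FIELD_NORM = 0.0390625    # fieldNorm (length norm) for doc 3091

      def idf(doc_freq, max_docs=44218):
          # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def clause_score(freq, doc_freq):
          # weight = queryWeight * fieldWeight, with tf = sqrt(freq)
          query_weight = idf(doc_freq) * QUERY_NORM
          field_weight = math.sqrt(freq) * idf(doc_freq) * FIELD_NORM
          return query_weight * field_weight

      search = clause_score(freq=10.0, doc_freq=3718)        # ~0.07501928 for "search"
      engines = clause_score(freq=4.0, doc_freq=746) * 0.5   # coord(1/2) -> ~0.05069464 for "engines"
      total = (search + engines) * (2.0 / 3.0)               # coord(2/3): 2 of 3 query clauses matched
      print(f"{total:.8f}")                                  # ~0.08380928

    The same arithmetic, fed with each document's freq, fieldNorm, and coord values, reproduces the scores of the remaining hits.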
  2. Borlund, P.; Ruthven, I.: Introduction to the special issue on evaluating interactive information retrieval systems (2008) 0.06
    0.059128426 = product of:
      0.088692635 = sum of:
        0.06001542 = weight(_text_:search in 2019) [ClassicSimilarity], result of:
          0.06001542 = score(doc=2019,freq=10.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.34347048 = fieldWeight in 2019, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=2019)
        0.028677218 = product of:
          0.057354435 = sum of:
            0.057354435 = weight(_text_:engines in 2019) [ClassicSimilarity], result of:
              0.057354435 = score(doc=2019,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.22454272 = fieldWeight in 2019, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2019)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Evaluation has always been a strong element of Information Retrieval (IR) research, much of our focus being on how we evaluate IR algorithms. As a research field we have benefited greatly from initiatives such as Cranfield, TREC, CLEF and INEX that have added to our knowledge of how to create test collections, the reliability of system-based evaluation criteria and our understanding of how to interpret the results of an algorithmic evaluation. In contrast, evaluations whose main focus is the user experience of searching have not yet reached the same level of maturity. Such evaluations are complex to create and assess due to the increased number of variables to incorporate within the study, the lack of standard tools available (for example, test collections) and the difficulty of selecting appropriate evaluation criteria for study. In spite of the complicated nature of user-centred evaluations, this form of evaluation is necessary to understand the effectiveness of individual IR systems and user search interactions. The growing incorporation of users into the evaluation process reflects the changing nature of IR within society; for example, more and more people have access to IR systems through Internet search engines but have little training or guidance in how to use these systems effectively. Similarly, new types of search system and new interactive IR facilities are becoming available to wide groups of end-users. In this special topic issue we present papers that tackle the methodological issues of evaluating interactive search systems. Methodologies can be presented at different levels; the papers by Blandford et al. and Petrelli present whole methodological approaches for evaluating interactive systems, whereas those by Göker and Myrhaug and by López Ostenero et al. consider what makes an appropriate evaluation methodological approach for specific retrieval situations. Any methodology must consider the nature of the methodological components, the instruments and processes by which we evaluate our systems. A number of papers have examined these issues in detail: Käki and Aula focus on specific methodological issues for the evaluation of Web search interfaces, Lopatovska and Mokros present alternate measures of retrieval success, Tenopir et al. examine the affective and cognitive verbalisations that occur within user studies, and Kelly et al. analyse questionnaires, one of the basic tools for evaluations. The range of topics in this special issue as a whole nicely illustrates the variety and complexity with which user-centred evaluation of IR systems is undertaken.
  3. Landvad Clemmensen, M.; Borlund, P.: Order effect in interactive information retrieval evaluation : an empirical study (2016) 0.03
    0.02739317 = product of:
      0.08217951 = sum of:
        0.08217951 = weight(_text_:search in 2865) [ClassicSimilarity], result of:
          0.08217951 = score(doc=2865,freq=12.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.47031635 = fieldWeight in 2865, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2865)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - This paper reports a study of order effect in interactive information retrieval (IIR) studies. The phenomenon of order effect is well known, and it is the main reason why searches are permuted (counter-balanced) between test participants in IIR studies. However, the phenomenon is not yet fully understood or investigated in relation to IIR; hence the objective is to increase our knowledge of this phenomenon in the context of IIR, as it has implications for the test design of IIR studies. Design/methodology/approach - Order effect is studied partly via a literature review and partly via an empirical IIR study. The empirical IIR study is designed as a classic between-groups design. The IIR search behaviour was logged and complementary post-search interviews were conducted. Findings - The order effect between groups and within search tasks was measured against nine classic IIR performance parameters of search interaction behaviour. Order effect is seen with respect to three performance parameters (website changes, visits to webpages, and formulation of queries), shown by an increase in activity on the last performed search. Further, the theories with respect to motivation, fatigue, and the good-subject effect shed light on how and why order effect may affect test participants' IR system interaction and search behaviour. Research limitations/implications - Insight about order effect has implications for the test design of IIR studies and hence the knowledge base generated on the basis of such studies. Due to the limited sample of 20 test participants (Library and Information Science (LIS) students), inferential statistics are not applicable; hence conclusions can be drawn from this sample of test participants only. Originality/value - Only a few studies in LIS focus on order effect, and none from the perspective of IIR.
  4. Borlund, P.; Dreier, S.: An investigation of the search behaviour associated with Ingwersen's three types of information needs (2014) 0.03
    0.026839714 = product of:
      0.08051914 = sum of:
        0.08051914 = weight(_text_:search in 2691) [ClassicSimilarity], result of:
          0.08051914 = score(doc=2691,freq=8.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.460814 = fieldWeight in 2691, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=2691)
      0.33333334 = coord(1/3)
    
    Abstract
    We report a naturalistic interactive information retrieval (IIR) study of 18 ordinary users aged 20-25 who carry out everyday-life information seeking (ELIS) on the Internet with respect to the three types of information needs identified by Ingwersen (1986): the verificative information need (VIN), the conscious topical information need (CIN), and the muddled topical information need (MIN). The searches took place in the private homes of the users in order to ensure as realistic searching as possible. Ingwersen (1996) associates a given search behaviour with each of the three types of information needs; these behaviours are analytically deduced, but not yet empirically tested. Thus, the objective of the study is to investigate whether empirical data does, or does not, conform to the predictions derived from the three types of information needs. The main conclusion is that Ingwersen's analytically deduced information search behaviour characteristics are positively corroborated for this group of test participants, who search the Internet as part of ELIS.
  5. Borlund, P.: Experimental components for the evaluation of interactive information retrieval systems (2000) 0.02
    0.015815454 = product of:
      0.04744636 = sum of:
        0.04744636 = weight(_text_:search in 4549) [ClassicSimilarity], result of:
          0.04744636 = score(doc=4549,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.27153727 = fieldWeight in 4549, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4549)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents a set of basic components which constitute the experimental setting intended for the evaluation of interactive information retrieval (IIR) systems, the aim of which is to facilitate evaluation of IIR systems in a way which is as close as possible to realistic IR processes. The experimental setting consists of three components: (1) the involvement of potential users as test persons; (2) the application of dynamic and individual information needs; and (3) the use of multidimensional and dynamic relevance judgements. Hidden under the information need component is the essential central sub-component, the simulated work task situation, the tool that triggers the (simulated) dynamic information need. This paper also reports on the empirical findings of the meta-evaluation of the application of this sub-component, the purpose of which is to discover whether the application of simulated work task situations to future evaluation of IIR systems can be recommended. Investigations are carried out to determine whether any search behavioural differences exist between test persons' treatment of their own real information needs versus simulated information needs. The hypothesis is that if no differences exist, one can correctly substitute real information needs with simulated information needs through the application of simulated work task situations. The empirical results of the meta-evaluation provide positive evidence for the application of simulated work task situations to the evaluation of IIR systems. The results also indicate that tailoring work task situations to the group of test persons is important in motivating them. Furthermore, the results of the evaluation show that different versions of semantic openness of the simulated situations make no difference to the test persons' search treatment.
  6. Borlund, P.: A study of the use of simulated work task situations in interactive information retrieval evaluations : a meta-evaluation (2016) 0.01
    0.0089465715 = product of:
      0.026839713 = sum of:
        0.026839713 = weight(_text_:search in 2880) [ClassicSimilarity], result of:
          0.026839713 = score(doc=2880,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.15360467 = fieldWeight in 2880, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=2880)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - The purpose of this paper is to report a study of how the test instrument of a simulated work task situation is used in empirical evaluations of interactive information retrieval (IIR) and reported in the research literature. In particular, the author is interested to learn whether the requirements of how to employ simulated work task situations are followed, and whether these requirements call for further highlighting and refinement. Design/methodology/approach - In order to study how simulated work task situations are used, the research literature in question is identified, partly via citation analysis by use of Web of Science®, and partly by systematic search of online repositories. On this basis, 67 individual publications were identified, and they constitute the sample of analysis. Findings - The analysis reveals a need for clarifications of how to use simulated work task situations in IIR evaluations, in particular with respect to the design and creation of realistic simulated work task situations. There is a lack of tailoring of the simulated work task situations to the test participants. Likewise, the requirement to include the test participants' personal information needs is neglected. Further, there is a need to add and emphasise a requirement to depict the simulated work task situations used when reporting the IIR studies. Research limitations/implications - Insight about the use of simulated work task situations has implications for the test design of IIR studies and hence the knowledge base generated on the basis of such studies. Originality/value - Simulated work task situations are widely used in IIR studies, and the present study is the first comprehensive study of the intended and unintended use of this test instrument since its introduction in the late 1990s. The paper addresses the need to carefully design and tailor simulated work task situations to suit the test participants in order to obtain the intended authentic and realistic IIR under study.
  7. Schneider, J.W.; Borlund, P.: A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.01
    0.007946501 = product of:
      0.0238395 = sum of:
        0.0238395 = product of:
          0.047679 = sum of:
            0.047679 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
              0.047679 = score(doc=156,freq=2.0), product of:
                0.17604718 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05027291 = queryNorm
                0.2708308 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    8.3.2007 19:55:22