Search (8 results, page 1 of 1)

  • author_ss:"Borlund, P."
  • type_ss:"a"
  1. Schneider, J.W.; Borlund, P.: Matrix comparison, part 1 : motivation and important issues for measuring the resemblance between proximity measures or ordination results (2007) 0.02
    0.022404997 = product of:
      0.08961999 = sum of:
        0.033850174 = weight(_text_:case in 584) [ClassicSimilarity], result of:
          0.033850174 = score(doc=584,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.1942959 = fieldWeight in 584, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03125 = fieldNorm(doc=584)
        0.055769812 = weight(_text_:studies in 584) [ClassicSimilarity], result of:
          0.055769812 = score(doc=584,freq=8.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.35269377 = fieldWeight in 584, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03125 = fieldNorm(doc=584)
      0.25 = coord(2/8)
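    The indented breakdown above is Lucene's explain output for its classic TF-IDF scoring (the [ClassicSimilarity] formula). As a sanity check, the following Python sketch reproduces the arithmetic of the first clause (the "case" term in document 584) and the final document score; the constants are read directly from the breakdown, and tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)) are ClassicSimilarity's definitions.

    ```python
    import math

    # Reproduce the ClassicSimilarity arithmetic for the "case" clause in
    # doc 584 above (all constants taken directly from the explain output).
    freq, doc_freq, max_docs = 2.0, 1480, 44218
    query_norm, field_norm = 0.03962768, 0.03125

    tf = math.sqrt(freq)                                 # 1.4142135
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))      # 4.3964143
    query_weight = idf * query_norm                      # 0.1742197
    field_weight = tf * idf * field_norm                 # 0.1942959
    clause_score = query_weight * field_weight           # 0.033850174

    # Document score: sum of the matching clauses, scaled by the coordination
    # factor (2 of 8 query clauses matched); compare with 0.022404997 above.
    studies_clause = 0.055769812
    doc_score = (clause_score + studies_clause) * (2 / 8)
    print(round(clause_score, 9), round(doc_score, 9))
    ```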
    
    Abstract
    The present two-part article introduces matrix comparison as a formal means of evaluation in informetric studies such as cocitation analysis. In this first part, the motivation behind introducing matrix comparison to informetric studies, as well as two important issues influencing such comparisons, are introduced and discussed. The motivation is spurred by the recent debate on the choice of proximity measures and their potential influence upon clustering and ordination results. The two important issues discussed here are matrix generation and the composition of proximity measures. As demonstrated for the same data set, the approach to matrix generation, i.e., how data are represented and transformed in a matrix, evidently determines the behavior of proximity measures. Two different matrix generation approaches will, in all probability, lead to different proximity rankings of objects, which in turn lead to different ordination and clustering results for the same set of objects. Further, a resemblance in the composition of formulas indicates whether two proximity measures may produce similar ordination and clustering results. However, as shown in the case of the angular correlation and cosine measures, a small deviation in otherwise similar formulas can lead to different rankings depending on the contour of the data matrix transformed. Ultimately, the behavior of proximity measures, that is, whether they produce similar rankings of objects, is more or less data-specific. Consequently, the authors recommend the use of empirical matrix comparison techniques for individual studies to investigate the degree of resemblance between proximity measures or their ordination results. In part two of the article, the authors introduce and demonstrate two related statistical matrix comparison techniques, the Mantel test and Procrustes analysis. These techniques can compare and evaluate the degree of monotonicity between different proximity measures or their ordination results. As such, the Mantel test and Procrustes analysis can be used as statistical validation tools in informetric studies and thus help in choosing suitable proximity measures.
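    To illustrate the abstract's point that a small deviation in otherwise similar formulas can change rankings, the following minimal sketch contrasts the cosine measure with Pearson's r (used here as a stand-in correlation measure; it is simply the cosine of mean-centred vectors) on a small made-up set of profiles. The data and variable names are hypothetical.

    ```python
    import numpy as np

    def cosine(x, y):
        return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

    def pearson(x, y):
        # Pearson's r is the cosine of the mean-centred vectors, i.e. a
        # "small deviation" in an otherwise similar formula.
        return cosine(x - x.mean(), y - y.mean())

    # Hypothetical co-occurrence profiles for three objects a, b, c.
    a = np.array([1.0, 2.0, 3.0, 4.0])
    b = np.array([10.0, 11.0, 12.0, 13.0])
    c = np.array([2.0, 3.0, 5.0, 6.0])

    # Cosine ranks (a, c) above (a, b); Pearson's r reverses that order.
    for name, sim in (("cosine ", cosine), ("pearson", pearson)):
        print(name, round(sim(a, b), 4), round(sim(a, c), 4))
    ```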
  2. Schneider, J.W.; Borlund, P.: Matrix comparison, part 2 : measuring the resemblance between proximity measures or ordination results by use of the Mantel and Procrustes statistics (2007) 0.02
    0.020537063 = product of:
      0.082148254 = sum of:
        0.033850174 = weight(_text_:case in 582) [ClassicSimilarity], result of:
          0.033850174 = score(doc=582,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.1942959 = fieldWeight in 582, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03125 = fieldNorm(doc=582)
        0.048298076 = weight(_text_:studies in 582) [ClassicSimilarity], result of:
          0.048298076 = score(doc=582,freq=6.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.30544177 = fieldWeight in 582, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03125 = fieldNorm(doc=582)
      0.25 = coord(2/8)
    
    Abstract
    The present two-part article introduces matrix comparison as a formal means for evaluation purposes in informetric studies such as cocitation analysis. In the first part, the motivation behind introducing matrix comparison to informetric studies, as well as two important issues influencing such comparisons, matrix generation and the composition of proximity measures, are introduced and discussed. In this second part, the authors introduce and thoroughly demonstrate two related matrix comparison techniques, the Mantel test and Procrustes analysis. These techniques can compare and evaluate the degree of monotonicity between different proximity measures or their ordination results. Common to these techniques is the application of permutation procedures to test hypotheses about matrix resemblances. The choice of technique is related to the validation at hand. In the case of the Mantel test, the degree of resemblance between two measures forecasts their potentially different effect upon ordination and clustering results. In principle, two proximity measures with a very strong resemblance most likely produce identical results; thus, the choice between the two measures becomes less important. Alternatively, or as a supplement, Procrustes analysis compares the actual ordination results without investigating the underlying proximity measures, by matching two configurations of the same objects in a multidimensional space. An advantage of Procrustes analysis, though, is the graphical solution provided by the superimposition plot and the resulting decomposition of variance components. Accordingly, Procrustes analysis provides not only a measure of general fit between configurations, but also values for individual objects, enabling more elaborate validations. As such, the Mantel test and Procrustes analysis can be used as statistical validation tools in informetric studies and thus help in choosing suitable proximity measures.
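    A minimal sketch of the permutation idea behind the Mantel test mentioned above (an illustration only, not the authors' implementation): correlate the off-diagonal entries of two proximity matrices over the same objects, then repeatedly relabel the objects of one matrix and count how often a random relabelling reaches the observed correlation. The example matrices are made up.

    ```python
    import numpy as np

    def mantel(A, B, n_perm=999, seed=0):
        """Simple permutation Mantel test for two symmetric proximity matrices."""
        rng = np.random.default_rng(seed)
        iu = np.triu_indices_from(A, k=1)          # off-diagonal entries only
        r_obs = np.corrcoef(A[iu], B[iu])[0, 1]    # observed matrix correlation
        count = 0
        n = A.shape[0]
        for _ in range(n_perm):
            p = rng.permutation(n)                 # relabel the objects of B
            r = np.corrcoef(A[iu], B[p][:, p][iu])[0, 1]
            if r >= r_obs:
                count += 1
        p_value = (count + 1) / (n_perm + 1)
        return r_obs, p_value

    # Tiny made-up example: two proximity matrices over the same five objects.
    rng = np.random.default_rng(1)
    X = rng.random((5, 8))
    A = np.corrcoef(X)                # e.g. correlation-based proximities
    B = X @ X.T                       # e.g. raw dot-product proximities
    print(mantel(A, B))
    ```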
  3. Schneider, J.W.; Borlund, P.: A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.02
    0.017851189 = product of:
      0.14280951 = sum of:
        0.14280951 = sum of:
          0.10522648 = weight(_text_:area in 156) [ClassicSimilarity], result of:
            0.10522648 = score(doc=156,freq=4.0), product of:
              0.1952553 = queryWeight, product of:
                4.927245 = idf(docFreq=870, maxDocs=44218)
                0.03962768 = queryNorm
              0.5389174 = fieldWeight in 156, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.927245 = idf(docFreq=870, maxDocs=44218)
                0.0546875 = fieldNorm(doc=156)
          0.037583023 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
            0.037583023 = score(doc=156,freq=2.0), product of:
              0.13876937 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03962768 = queryNorm
              0.2708308 = fieldWeight in 156, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=156)
      0.125 = coord(1/8)
    
    Abstract
    The present study investigates the ability of a bibliometric-based semi-automatic method to select candidate thesaurus terms from citation contexts. The method consists of document co-citation analysis, citation context analysis, and noun phrase parsing. The investigation is carried out within the specialty area of periodontology. The results clearly demonstrate that the method is able to select important candidate thesaurus terms within the chosen specialty area.
    Date
    8. 3.2007 19:55:22
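    The filtering step of the method described above can be illustrated with a minimal sketch; the citation-context phrases, document names, and threshold below are hypothetical, and the noun phrase parsing itself (which would require a POS tagger or NP chunker) is assumed to have been done already.

    ```python
    from collections import Counter

    # Hypothetical input: noun phrases already parsed from the citation
    # contexts of each citing document.
    contexts = {
        "doc1": ["chronic periodontitis", "attachment loss", "plaque index"],
        "doc2": ["chronic periodontitis", "plaque index", "study design"],
        "doc3": ["plaque index", "attachment loss", "chronic periodontitis"],
    }

    def candidate_terms(contexts, min_doc_freq=2):
        """Keep phrases occurring in the contexts of at least min_doc_freq documents."""
        df = Counter(p for phrases in contexts.values() for p in set(phrases))
        return sorted(p for p, n in df.items() if n >= min_doc_freq)

    print(candidate_terms(contexts))
    ```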
  4. Landvad Clemmensen, M.; Borlund, P.: Order effect in interactive information retrieval evaluation : an empirical study (2016) 0.01
    0.010672467 = product of:
      0.085379735 = sum of:
        0.085379735 = weight(_text_:studies in 2865) [ClassicSimilarity], result of:
          0.085379735 = score(doc=2865,freq=12.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.53994983 = fieldWeight in 2865, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2865)
      0.125 = coord(1/8)
    
    Abstract
    Purpose - This paper reports a study of order effect in interactive information retrieval (IIR) studies. The phenomenon of order effect is well known, and it is the main reason why searches are permuted (counter-balanced) between test participants in IIR studies. However, the phenomenon is not yet fully understood or investigated in relation to IIR; hence the objective is to increase our knowledge of this phenomenon in the context of IIR, as it has implications for the test design of IIR studies.
    Design/methodology/approach - Order effect is studied partly via a literature review and partly via an empirical IIR study. The empirical IIR study is designed as a classic between-groups design. The IIR search behaviour was logged, and complementary post-search interviews were conducted.
    Findings - The order effect between groups and within search tasks was measured against nine classic IIR performance parameters of search interaction behaviour. An order effect is seen with respect to three performance parameters (website changes, visits to webpages, and formulation of queries), shown by an increase in activity in the last performed search. Further, theories with respect to motivation, fatigue, and the good-subject effect shed light on how and why order effect may affect test participants' IR system interaction and search behaviour.
    Research implications/limitations - Insight about order effect has implications for the test design of IIR studies and hence the knowledge base generated on the basis of such studies. Due to the limited sample of 20 test participants (Library and Information Science (LIS) students), inferential statistics are not applicable; hence conclusions can be drawn from this sample of test participants only.
    Originality/Value - Only a few studies in LIS focus on order effect, and none from the perspective of IIR.
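    The abstract notes that searches are permuted (counter-balanced) between test participants; a common way to do this is a rotated Latin square, sketched below as a generic illustration (task labels and participant count are hypothetical, and this is not necessarily the design used in the study).

    ```python
    def latin_square_orders(tasks):
        """Cyclically rotated Latin square: each task occurs once in each position."""
        n = len(tasks)
        return [[tasks[(row + col) % n] for col in range(n)] for row in range(n)]

    # Hypothetical task labels; participants are assigned to rows round-robin.
    tasks = ["T1", "T2", "T3", "T4"]
    orders = latin_square_orders(tasks)
    for participant in range(8):
        print(f"participant {participant + 1}:", orders[participant % len(orders)])
    ```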
  5. Borlund, P.: A study of the use of simulated work task situations in interactive information retrieval evaluations : a meta-evaluation (2016) 0.01
    0.0069712265 = product of:
      0.055769812 = sum of:
        0.055769812 = weight(_text_:studies in 2880) [ClassicSimilarity], result of:
          0.055769812 = score(doc=2880,freq=8.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.35269377 = fieldWeight in 2880, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03125 = fieldNorm(doc=2880)
      0.125 = coord(1/8)
    
    Abstract
    Purpose - The purpose of this paper is to report a study of how the test instrument of a simulated work task situation is used in empirical evaluations of interactive information retrieval (IIR) and reported in the research literature. In particular, the author is interested in learning whether the requirements for how to employ simulated work task situations are followed, and whether these requirements call for further highlighting and refinement.
    Design/methodology/approach - In order to study how simulated work task situations are used, the research literature in question is identified. This is done partly via citation analysis by use of Web of Science®, and partly by systematic search of online repositories. On this basis, 67 individual publications were identified, and they constitute the sample of analysis.
    Findings - The analysis reveals a need for clarification of how to use simulated work task situations in IIR evaluations, in particular with respect to the design and creation of realistic simulated work task situations. There is a lack of tailoring of the simulated work task situations to the test participants. Likewise, the requirement to include the test participants' personal information needs is neglected. Further, there is a need to add and emphasise a requirement to present the simulated work task situations used when reporting IIR studies.
    Research limitations/implications - Insight about the use of simulated work task situations has implications for the test design of IIR studies and hence the knowledge base generated on the basis of such studies.
    Originality/value - Simulated work task situations are widely used in IIR studies, and the present study is the first comprehensive study of the intended and unintended use of this test instrument since its introduction in the late 1990s. The paper addresses the need to carefully design and tailor simulated work task situations to suit the test participants in order to obtain the intended authentic and realistic IIR under study.
  6. Soedring, T.; Borlund, P.; Helfert, M.: The migration and preservation of six Norwegian municipality record-keeping systems : lessons learned (2021) 0.01
    0.0063469075 = product of:
      0.05077526 = sum of:
        0.05077526 = weight(_text_:case in 241) [ClassicSimilarity], result of:
          0.05077526 = score(doc=241,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 241, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=241)
      0.125 = coord(1/8)
    
    Abstract
    This article presents a rare insight into the migration of municipality record-keeping databases. The migration of a database for preservation purposes poses several challenges. In particular, our findings show that the relevant issues are file-format heterogeneity, collection volume, time and database structure evolution, and deviation from the governing standard. This article presents and discusses how such issues interfere with an organization's ability to undertake a migration, for preservation purposes, of records from a relational database. The case study at hand concerns six Norwegian municipality record-keeping databases covering the period from 1999 to 2012. The findings are presented with a discussion of how these issues manifest themselves as a problem for long-term preservation. The results discussed here may help an organization and its Information Systems (IS) manager to establish a best practice when undertaking a migration project and enable them to avoid some of the pitfalls that were discovered during this project.
  7. Schneider, J.W.; Borlund, P.: Introduction to bibliometrics for construction and maintenance of thesauri : methodical considerations (2004) 0.00
    0.0039860546 = product of:
      0.031888437 = sum of:
        0.031888437 = product of:
          0.06377687 = sum of:
            0.06377687 = weight(_text_:area in 4423) [ClassicSimilarity], result of:
              0.06377687 = score(doc=4423,freq=2.0), product of:
                0.1952553 = queryWeight, product of:
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.03962768 = queryNorm
                0.32663327 = fieldWeight in 4423, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4423)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    The paper introduces bibliometrics to the research area of knowledge organization - more precisely, in relation to the construction and maintenance of thesauri. As such, the paper reviews related work that has been an inspiration for the assembly of a semi-automatic, bibliometric-based approach to construction and maintenance. Similarly, the paper discusses the methodical considerations behind the approach. Finally, the semi-automatic approach is used to verify the applicability of bibliometric methods as a supplement to the construction and maintenance of thesauri. In the context of knowledge organization, the paper outlines two fundamental approaches to knowledge organization, that is, the manual intellectual approach and the automatic algorithmic approach. Bibliometric methods belong to the automatic algorithmic approach, though bibliometrics do have special characteristics that are substantially different from those of other methods within this approach.
  8. Borlund, P.; Ruthven, I.: Introduction to the special issue on evaluating interactive information retrieval systems (2008) 0.00
    0.0034856133 = product of:
      0.027884906 = sum of:
        0.027884906 = weight(_text_:studies in 2019) [ClassicSimilarity], result of:
          0.027884906 = score(doc=2019,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.17634688 = fieldWeight in 2019, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03125 = fieldNorm(doc=2019)
      0.125 = coord(1/8)
    
    Abstract
    Evaluation has always been a strong element of Information Retrieval (IR) research, much of our focus being on how we evaluate IR algorithms. As a research field we have benefited greatly from initiatives such as Cranfield, TREC, CLEF and INEX that have added to our knowledge of how to create test collections, the reliability of system-based evaluation criteria and our understanding of how to interpret the results of an algorithmic evaluation. In contrast, evaluations whose main focus is the user experience of searching have not yet reached the same level of maturity. Such evaluations are complex to create and assess due to the increased number of variables to incorporate within the study, the lack of standard tools available (for example, test collections) and the difficulty of selecting appropriate evaluation criteria for study.
    In spite of the complicated nature of user-centred evaluations, this form of evaluation is necessary to understand the effectiveness of individual IR systems and user search interactions. The growing incorporation of users into the evaluation process reflects the changing nature of IR within society; for example, more and more people have access to IR systems through Internet search engines but have little training or guidance in how to use these systems effectively. Similarly, new types of search system and new interactive IR facilities are becoming available to wide groups of end-users.
    In this special topic issue we present papers that tackle the methodological issues of evaluating interactive search systems. Methodologies can be presented at different levels; the papers by Blandford et al. and Petrelli present whole methodological approaches for evaluating interactive systems, whereas those by Göker and Myrhaug and by López Ostenero et al. consider what makes an appropriate evaluation methodological approach for specific retrieval situations. Any methodology must consider the nature of the methodological components, the instruments and processes by which we evaluate our systems. A number of papers have examined these issues in detail: Käki and Aula focus on specific methodological issues for the evaluation of Web search interfaces, Lopatovska and Mokros present alternate measures of retrieval success, Tenopir et al. examine the affective and cognitive verbalisations that occur within user studies, and Kelly et al. analyse questionnaires, one of the basic tools for evaluations. The range of topics in this special issue as a whole nicely illustrates the variety and complexity by which user-centred evaluation of IR systems is undertaken.