Search (11 results, page 1 of 1)

  • author_ss:"Ingwersen, P."
  1. Ingwersen, P.: Cognitive perspectives of information retrieval interaction : elements of a cognitive IR theory (1996) 0.01
    0.014599875 = product of:
      0.043799624 = sum of:
        0.02637006 = product of:
          0.05274012 = sum of:
            0.05274012 = weight(_text_:theory in 3616) [ClassicSimilarity], result of:
              0.05274012 = score(doc=3616,freq=4.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.3248744 = fieldWeight in 3616, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3616)
          0.5 = coord(1/2)
        0.017429566 = product of:
          0.034859132 = sum of:
            0.034859132 = weight(_text_:methods in 3616) [ClassicSimilarity], result of:
              0.034859132 = score(doc=3616,freq=2.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.22209854 = fieldWeight in 3616, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3616)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    The objective of this paper is to amalgamate theories of text retrieval from various research traditions into a cognitive theory for information retrieval interaction. Set in a cognitive framework, the paper outlines the concept of polyrepresentation applied to both the user's cognitive space and the information space of IR systems. The concept seeks to represent the current user's information need, problem state, and domain work task or interest in a structure of causality. Further, it implies that we should apply different methods of representation and a variety of IR techniques of different cognitive and functional origin simultaneously to each semantic full-text entity in the information space. The cognitive differences imply that by applying cognitive overlaps of information objects, originating from different interpretations of such objects through time and by type, the degree of uncertainty inherent in IR is decreased. ... The lack of consistency among authors, indexers, evaluators or users is of an identical cognitive nature. It is unavoidable, and indeed favourable to IR. In particular, for full-text retrieval, alternative semantic entities, including Salton et al.'s 'passage retrieval', are proposed to replace the traditional document record as the basic retrieval entity. These empirically observed phenomena of inconsistency and of semantic entities and values associated with data interpretation strongly support a cognitive approach to IR and the logical use of polyrepresentation, cognitive overlaps, and both data fusion and data diffusion.
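    Note
    The score breakdown above is Lucene ClassicSimilarity explain output: tf = sqrt(freq), queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, combined with coord() factors for query coverage. A minimal Python sketch reproducing the 'theory' term weight for this record from the values shown in the tree (nothing here beyond what the tree itself states):

      import math

      # Values copied from the explain tree for doc 3616 above.
      freq       = 4.0         # termFreq of "theory" in the field
      idf        = 4.1583924   # idf(docFreq=1878, maxDocs=44218)
      query_norm = 0.03903913
      field_norm = 0.0390625   # length normalization for the field

      tf = math.sqrt(freq)                  # 2.0 = tf(freq=4.0)
      query_weight = idf * query_norm       # 0.16234003 = queryWeight
      field_weight = tf * idf * field_norm  # 0.3248744  = fieldWeight
      print(query_weight * field_weight)    # ~0.05274012 = weight(_text_:theory)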
  2. Larsen, B.; Ingwersen, P.; Lund, B.: Data fusion according to the principle of polyrepresentation (2009) 0.01
    0.010099277 = product of:
      0.06059566 = sum of:
        0.06059566 = sum of:
          0.039438605 = weight(_text_:methods in 2752) [ClassicSimilarity], result of:
            0.039438605 = score(doc=2752,freq=4.0), product of:
              0.15695344 = queryWeight, product of:
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.03903913 = queryNorm
              0.25127584 = fieldWeight in 2752, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.03125 = fieldNorm(doc=2752)
          0.021157054 = weight(_text_:22 in 2752) [ClassicSimilarity], result of:
            0.021157054 = score(doc=2752,freq=2.0), product of:
              0.1367084 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03903913 = queryNorm
              0.15476047 = fieldWeight in 2752, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2752)
      0.16666667 = coord(1/6)
    
    Abstract
    We report data fusion experiments carried out on the four best-performing retrieval models from TREC 5. Three were conceptually/algorithmically very different from one another; one was algorithmically similar to one of the former. The objective of the test was to observe the performance of the 11 logical data fusion combinations compared to the performance of the four individual models and their intermediate fusions when following the principle of polyrepresentation. This principle is based on the cognitive IR perspective (Ingwersen & Järvelin, 2005) and implies that each retrieval model is regarded as a representation of a unique interpretation of information retrieval (IR). It predicts that only fusions of very different, but equally good, IR models may outperform each constituent as well as their intermediate fusions. Two kinds of experiments were carried out. One tested restricted fusions, which entails that only the inner disjoint overlap documents between fused models are ranked. The second set of experiments was based on traditional data fusion methods. The experiments involved the 30 TREC 5 topics that contain more than 44 relevant documents. In all tests, the Borda and CombSUM scoring methods were used. Performance was measured by precision and recall, with document cutoff values (DCVs) at 100 and 15 documents, respectively. Results show that restricted fusions made of two, three, or four cognitively/algorithmically very different retrieval models perform significantly better than do the individual models at DCV100. At DCV15, however, the results of polyrepresentative fusion were less predictable. The traditional fusion method based on polyrepresentation principles demonstrates a clear picture of performance at both DCV levels and verifies the polyrepresentation predictions for data fusion in IR. Data fusion improves retrieval performance over its constituent IR models only if the models are all quite conceptually/algorithmically dissimilar and perform equally well, in that order of importance.
    Date
    22. 3.2009 18:48:28
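    Note
    For the two fusion schemes named in the abstract, a minimal Python sketch: CombSUM sums a document's (normalized) scores across runs, Borda sums rank-based points. The min-max normalization and the toy runs are illustrative assumptions, not the paper's exact setup:

      def comb_sum(runs):
          # CombSUM: add up each document's min-max-normalized scores.
          fused = {}
          for run in runs:
              lo, hi = min(run.values()), max(run.values())
              for doc, score in run.items():
                  norm = (score - lo) / (hi - lo) if hi > lo else 0.0
                  fused[doc] = fused.get(doc, 0.0) + norm
          return sorted(fused.items(), key=lambda kv: -kv[1])

      def borda(runs):
          # Borda: in each run the top document of n gets n points, the next n-1, ...
          fused = {}
          for run in runs:
              ranking = sorted(run, key=lambda d: -run[d])
              n = len(ranking)
              for rank, doc in enumerate(ranking):
                  fused[doc] = fused.get(doc, 0.0) + (n - rank)
          return sorted(fused.items(), key=lambda kv: -kv[1])

      run_a = {"d1": 2.1, "d2": 1.4, "d3": 0.3}   # hypothetical run
      run_b = {"d2": 9.0, "d3": 5.5, "d4": 1.0}   # hypothetical run
      print(comb_sum([run_a, run_b]))
      print(borda([run_a, run_b]))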
  3. Ingwersen, P.; Johansen, T.; Timmermann, P.: User-librarian negotiations and search procedures : a progress report (1980) 0.01
    0.0074585797 = product of:
      0.044751476 = sum of:
        0.044751476 = product of:
          0.08950295 = sum of:
            0.08950295 = weight(_text_:theory in 8923) [ClassicSimilarity], result of:
              0.08950295 = score(doc=8923,freq=2.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.55133015 = fieldWeight in 8923, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.09375 = fieldNorm(doc=8923)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Source
    Theory and application of information research. Proc. of the 2nd Int. Research Forum on Information Science, 3.-6.8.1977, Copenhagen. Ed.: O. Harbo and L. Kajberg
  4. Ingwersen, P.; Wormell, I.: Modern indexing and retrieval techniques matching different types of information needs (1989) 0.01
    0.0061708074 = product of:
      0.037024844 = sum of:
        0.037024844 = product of:
          0.07404969 = sum of:
            0.07404969 = weight(_text_:22 in 7322) [ClassicSimilarity], result of:
              0.07404969 = score(doc=7322,freq=2.0), product of:
                0.1367084 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03903913 = queryNorm
                0.5416616 = fieldWeight in 7322, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=7322)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Source
    International forum on information and documentation. 14(1989), pp.17-22
  5. Almind, T.C.; Ingwersen, P.: Informetric analyses on the World Wide Web : methodological approaches to 'Webometrics' (1997) 0.01
    0.005751464 = product of:
      0.034508783 = sum of:
        0.034508783 = product of:
          0.06901757 = sum of:
            0.06901757 = weight(_text_:methods in 4711) [ClassicSimilarity], result of:
              0.06901757 = score(doc=4711,freq=4.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.43973273 = fieldWeight in 4711, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4711)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    Introduces the application of informetric methods to the WWW, called Webometrics. A case study, in which the Danish proportion of the WWW is compared to those of other Nordic countries, presents a workable method for general informetric analyses of the WWW. The methodological approach is comparable with common bibliometric analyses of the ISI databases. Among other results the analyses demonstrate that Denmark would seem to fall seriously behind the other Nordic countries with respect to visibility on the Net and compared to its position in scientific databases.
  6. Ingwersen, P.: The cognitive perspective in information retrieval (1994) 0.00
    0.0049723866 = product of:
      0.029834319 = sum of:
        0.029834319 = product of:
          0.059668638 = sum of:
            0.059668638 = weight(_text_:theory in 2127) [ClassicSimilarity], result of:
              0.059668638 = score(doc=2127,freq=2.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.36755344 = fieldWeight in 2127, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2127)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    Outlines the principles underlying the theory of polyrepresentation applied to the user's cognitive space and the information space of information retrieval systems, set in a cognitive framework. Uses polyrepresentation to represent the current user's information needs, problem states, and domain work tasks or interests in a structure of causality, as well as to embody semantic full-text entities by means of the principle of 'intentional redundancy'.
  7. Ingwersen, P.; Wormell, I.: Ranganathan in the perspective of advanced information retrieval (1992) 0.00
    0.0049723866 = product of:
      0.029834319 = sum of:
        0.029834319 = product of:
          0.059668638 = sum of:
            0.059668638 = weight(_text_:theory in 7695) [ClassicSimilarity], result of:
              0.059668638 = score(doc=7695,freq=2.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.36755344 = fieldWeight in 7695, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7695)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    Examines Ranganathan's approach to knowledge organisation and its relevance to intellectual accessibility in libraries. Discusses the current and future developments of his methodology and theories in knowledge-based systems. Topics covered include: semi-automatic classification and structure of thesauri; user-intermediary interactions in information retrieval (IR); semantic value-theory and uncertainty principles in IR; and case grammar.
  8. Borlund, P.; Ingwersen, P.: ¬The development of a method for the evaluation of interactive information retrieval systems (1997) 0.00
    0.004066899 = product of:
      0.024401393 = sum of:
        0.024401393 = product of:
          0.048802786 = sum of:
            0.048802786 = weight(_text_:methods in 7469) [ClassicSimilarity], result of:
              0.048802786 = score(doc=7469,freq=2.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.31093797 = fieldWeight in 7469, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7469)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    Describes the development of a method for the evaluation and comparison of interactive information retrieval systems. The method is based on the introduction of the concept of a 'simulated work task situation' or scenario and the involvement of real end users as test persons. The relevance assessments are made with reference to the concepts of situational as well as topic relevance, assessed in a non-binary way and calculated as precision. The method is further based on a mixture of simulated and real information needs, and also involves assessments made by individual panel members.
  9. Ingwersen, P.: The calculation of Web impact factors (1998) 0.00
    0.004066899 = product of:
      0.024401393 = sum of:
        0.024401393 = product of:
          0.048802786 = sum of:
            0.048802786 = weight(_text_:methods in 1071) [ClassicSimilarity], result of:
              0.048802786 = score(doc=1071,freq=2.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.31093797 = fieldWeight in 1071, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1071)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    Reports investigations into the feasibility and reliability of calculating impact factors for web sites, called Web Impact Factors (Web-IF). Analyzes a selection of 7 small and medium scale national and 4 large web domains as well as 6 institutional web sites over a series of snapshots taken of the web during a month. Describes the data isolation and calculation methods and discusses the tests. The results thus far demonstrate that Web-IFs are calculable with high confidence for national and sector domains whilst institutional Web-IFs should be approached with caution.
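    Note
    A Web-IF of this kind is, in essence, the number of pages linking to a site or domain divided by the number of pages found within it. A hedged Python sketch of that ratio; the split into external inlinks and self-links, and all counts, are illustrative placeholders (the paper derived its counts from search engine snapshots):

      def web_if(inlink_pages: int, self_link_pages: int, domain_pages: int) -> float:
          # Link pages pointing at the domain, over pages within the domain.
          return (inlink_pages + self_link_pages) / domain_pages

      print(web_if(inlink_pages=12_000, self_link_pages=3_000, domain_pages=60_000))  # 0.25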
  10. Ingwersen, P.: The cognitive framework for information retrieval : a paradigmatic perspective (1996) 0.00
    0.0034859132 = product of:
      0.020915478 = sum of:
        0.020915478 = product of:
          0.041830957 = sum of:
            0.041830957 = weight(_text_:methods in 6114) [ClassicSimilarity], result of:
              0.041830957 = score(doc=6114,freq=2.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.26651827 = fieldWeight in 6114, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6114)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    The paper presents the principles underlying the cognitive framework for Information Retrieval (IR). It introduces the concept of polyrepresentation applied simultaneously to the user's cognitive space and the information space of IR systems. The concept seeks to represent the current user's information need, problem state, and domain work task or interest in a structure of causality. Further, it suggests applying different methods of representation and a variety of IR techniques of 'different cognitive and functional origin' simultaneously to each information object in the information space. The cognitive differences between such representations imply that by applying 'cognitive retrieval overlaps' of information objects, originating from different interpretations of such objects over time and by type, the degree of uncertainty inherent in IR is decreased and the intellectual access possibilities are increased. One consequence of the framework is its capability to elucidate the seemingly dubious assumptions underlying the predominant algorithmic retrieval models, such as the vector space and probabilistic models.
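    Note
    A minimal Python sketch of the 'cognitive retrieval overlaps' idea above: intersect the result sets produced by functionally different representations of the same collection; documents retrieved by more of them are taken to carry less uncertainty. The representation names and document ids are illustrative:

      from collections import Counter

      title_hits    = {"d1", "d2", "d5", "d9"}   # e.g. title/descriptor search
      citation_hits = {"d2", "d5", "d7"}         # e.g. citations pointing at docs
      fulltext_hits = {"d2", "d4", "d5", "d9"}   # e.g. full-text passage match
      runs = [title_hits, citation_hits, fulltext_hits]

      # Total overlap: retrieved by every representation (least uncertainty).
      print(set.intersection(*runs))             # {'d2', 'd5'}

      # Graded overlap: rank documents by how many representations agree.
      agreement = Counter(doc for run in runs for doc in run)
      print(sorted(agreement.items(), key=lambda kv: -kv[1]))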
  11. Jepsen, E.T.; Seiden, P.; Ingwersen, P.; Björneborn, L.; Borlund, P.: Characteristics of scientific Web publications : preliminary data gathering and analysis (2004) 0.00
    0.0029049278 = product of:
      0.017429566 = sum of:
        0.017429566 = product of:
          0.034859132 = sum of:
            0.034859132 = weight(_text_:methods in 3091) [ClassicSimilarity], result of:
              0.034859132 = score(doc=3091,freq=2.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.22209854 = fieldWeight in 3091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3091)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    Because of the increasing presence of scientific publications on the Web, combined with the existing difficulties in easily verifying and retrieving these publications, research on techniques and methods for retrieval of scientific Web publications is called for. In this article, we report on the initial steps taken toward the construction of a test collection of scientific Web publications within the subject domain of plant biology. The steps reported are those of data gathering and data analysis aiming at identifying characteristics of scientific Web publications. The data used in this article were generated based on specifically selected domain topics that are searched for in three publicly accessible search engines (Google, AllTheWeb, and AltaVista). A sample of the retrieved hits was analyzed with regard to how various publication attributes correlated with the scientific quality of the content and whether this information could be employed to harvest, filter, and rank Web publications. The attributes analyzed were inlinks, outlinks, bibliographic references, file format, language, search engine overlap, structural position (according to site structure), and the occurrence of various types of metadata. As could be expected, the ranked output differs between the three search engines. Apparently, this is caused by differences in ranking algorithms rather than the databases themselves. In fact, because scientific Web content in this subject domain receives few inlinks, both AltaVista and AllTheWeb retrieved a higher degree of accessible scientific content than Google. Because of the search engine cutoffs of accessible URLs, the feasibility of using search engine output for Web content analysis is also discussed.