Search (10 results, page 1 of 1)

  • Filter: author_ss:"Ingwersen, P."
  1. Ingwersen, P.: ¬The calculation of Web impact factors (1998) 0.02
    0.02406976 = product of:
      0.09627904 = sum of:
        0.07333147 = weight(_text_:web in 1071) [ClassicSimilarity], result of:
          0.07333147 = score(doc=1071,freq=18.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.75719774 = fieldWeight in 1071, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1071)
        0.022947572 = weight(_text_:data in 1071) [ClassicSimilarity], result of:
          0.022947572 = score(doc=1071,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.24455236 = fieldWeight in 1071, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1071)
      0.25 = coord(2/8)
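The explanation tree above is standard Lucene ClassicSimilarity (TF-IDF) debug output. As a sanity check, the top-level score can be reproduced directly from the constants shown (a sketch using only ClassicSimilarity's documented formulas, not the search engine's actual code):

```python
import math

# Reproduce the Lucene ClassicSimilarity arithmetic from the explain
# tree above; every constant is copied from the explanation itself.
def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * query_norm                     # idf * queryNorm
    field_weight = math.sqrt(freq) * term_idf * field_norm   # tf * idf * fieldNorm
    return query_weight * field_weight

# "web" in doc 1071: freq=18, docFreq=4597; "data": freq=2, docFreq=5088
web = term_score(18.0, 4597, 44218, 0.029675366, 0.0546875)
data = term_score(2.0, 5088, 44218, 0.029675366, 0.0546875)
# coord(2/8): only 2 of the 8 query clauses matched this document
total = (web + data) * (2.0 / 8.0)
# total ≈ 0.0240698, matching the document score shown above
```

The same three-factor product (queryWeight × fieldWeight, scaled by coord) reproduces every per-term weight in the result list.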
    
    Abstract
    Reports investigations into the feasibility and reliability of calculating impact factors for Web sites, called Web Impact Factors (Web-IF). Analyzes a selection of 7 small and medium-scale national and 4 large Web domains as well as 6 institutional Web sites over a series of snapshots of the Web taken during a month. Describes the data isolation and calculation methods and discusses the tests. The results thus far demonstrate that Web-IFs are calculable with high confidence for national and sector domains, whilst institutional Web-IFs should be approached with caution.
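In its simplest form, the Web Impact Factor calculation described in this abstract reduces to a ratio of inlinking pages to the size of the target Web space (a hedged sketch; the counts below are invented, and the paper's actual link-isolation rules are not reproduced here):

```python
# Hedged sketch of the Web-IF idea: pages linking to a Web space
# divided by the number of pages within that space.
def web_impact_factor(inlinking_pages: int, pages_in_space: int) -> float:
    return inlinking_pages / pages_in_space

# Invented example counts for an institutional site:
wif = web_impact_factor(1500, 12000)
print(wif)  # 0.125
```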
  2. Jepsen, E.T.; Seiden, P.; Ingwersen, P.; Björneborn, L.; Borlund, P.: Characteristics of scientific Web publications : preliminary data gathering and analysis (2004) 0.02
    0.020541556 = product of:
      0.082166225 = sum of:
        0.049383983 = weight(_text_:web in 3091) [ClassicSimilarity], result of:
          0.049383983 = score(doc=3091,freq=16.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.5099235 = fieldWeight in 3091, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3091)
        0.032782245 = weight(_text_:data in 3091) [ClassicSimilarity], result of:
          0.032782245 = score(doc=3091,freq=8.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.34936053 = fieldWeight in 3091, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3091)
      0.25 = coord(2/8)
    
    Abstract
    Because of the increasing presence of scientific publications on the Web, combined with the existing difficulties in easily verifying and retrieving these publications, research on techniques and methods for retrieval of scientific Web publications is called for. In this article, we report on the initial steps taken toward the construction of a test collection of scientific Web publications within the subject domain of plant biology. The steps reported are those of data gathering and data analysis aiming at identifying characteristics of scientific Web publications. The data used in this article were generated based on specifically selected domain topics that were searched for in three publicly accessible search engines (Google, AllTheWeb, and AltaVista). A sample of the retrieved hits was analyzed with regard to how various publication attributes correlated with the scientific quality of the content and whether this information could be employed to harvest, filter, and rank Web publications. The attributes analyzed were inlinks, outlinks, bibliographic references, file format, language, search engine overlap, structural position (according to site structure), and the occurrence of various types of metadata. As could be expected, the ranked output differs between the three search engines. Apparently, this is caused by differences in ranking algorithms rather than in the databases themselves. In fact, because scientific Web content in this subject domain receives few inlinks, both AltaVista and AllTheWeb retrieved a higher degree of accessible scientific content than Google. Because of the search engines' cutoffs of accessible URLs, the feasibility of using search engine output for Web content analysis is also discussed.
  3. Almind, T.C.; Ingwersen, P.: Informetric analyses on the World Wide Web : methodological approaches to 'Webometrics' (1997) 0.02
    0.017375026 = product of:
      0.0695001 = sum of:
        0.045056276 = weight(_text_:wide in 4711) [ClassicSimilarity], result of:
          0.045056276 = score(doc=4711,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.342674 = fieldWeight in 4711, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4711)
        0.024443826 = weight(_text_:web in 4711) [ClassicSimilarity], result of:
          0.024443826 = score(doc=4711,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.25239927 = fieldWeight in 4711, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4711)
      0.25 = coord(2/8)
    
  4. Larsen, B.; Ingwersen, P.; Lund, B.: Data fusion according to the principle of polyrepresentation (2009) 0.01
    0.010040278 = product of:
      0.04016111 = sum of:
        0.032119907 = weight(_text_:data in 2752) [ClassicSimilarity], result of:
          0.032119907 = score(doc=2752,freq=12.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.342302 = fieldWeight in 2752, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=2752)
        0.008041205 = product of:
          0.01608241 = sum of:
            0.01608241 = weight(_text_:22 in 2752) [ClassicSimilarity], result of:
              0.01608241 = score(doc=2752,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.15476047 = fieldWeight in 2752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2752)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    We report data fusion experiments carried out on the four best-performing retrieval models from TREC 5. Three were conceptually/algorithmically very different from one another; one was algorithmically similar to one of the former. The objective of the test was to observe the performance of the 11 logical data fusion combinations compared to the performance of the four individual models and their intermediate fusions when following the principle of polyrepresentation. This principle is based on the cognitive IR perspective (Ingwersen & Järvelin, 2005) and implies that each retrieval model is regarded as a representation of a unique interpretation of information retrieval (IR). It predicts that only fusions of very different, but equally good, IR models may outperform each constituent as well as their intermediate fusions. Two kinds of experiments were carried out. One tested restricted fusions, which entails that only the inner disjoint overlap documents between fused models are ranked. The second set of experiments was based on traditional data fusion methods. The experiments involved the 30 TREC 5 topics that contain more than 44 relevant documents. In all tests, the Borda and CombSUM scoring methods were used. Performance was measured by precision and recall, with document cutoff values (DCVs) at 100 and 15 documents, respectively. Results show that restricted fusions made of two, three, or four cognitively/algorithmically very different retrieval models perform significantly better than do the individual models at DCV100. At DCV15, however, the results of polyrepresentative fusion were less predictable. The traditional fusion method based on polyrepresentation principles demonstrates a clear picture of performance at both DCV levels and verifies the polyrepresentation predictions for data fusion in IR. Data fusion improves retrieval performance over that of the constituent IR models only if the models are all conceptually/algorithmically quite dissimilar and perform equally well, in that order of importance.
    Date
    22. 3.2009 18:48:28
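The two fusion scoring methods named in the abstract, CombSUM and Borda counting, can be sketched as follows (a minimal illustration, not the authors' implementation; the document identifiers and scores are invented):

```python
# CombSUM: a document's fused score is the sum of its retrieval scores
# across all fused runs (scores are assumed to be comparable).
def comb_sum(runs):
    fused = {}
    for scores in runs:
        for doc, s in scores.items():
            fused[doc] = fused.get(doc, 0.0) + s
    return sorted(fused, key=fused.get, reverse=True)

# Borda: each run awards a document (run length - rank) points,
# so documents ranked high by several runs accumulate the most points.
def borda(ranked_lists):
    points = {}
    for docs in ranked_lists:
        n = len(docs)
        for rank, doc in enumerate(docs):
            points[doc] = points.get(doc, 0) + (n - rank)
    return sorted(points, key=points.get, reverse=True)

# Two invented runs over the same collection:
print(comb_sum([{"a": 1.0, "b": 0.5}, {"a": 0.2, "b": 0.9}]))  # ['b', 'a']
print(borda([["a", "b"], ["a", "b"]]))                          # ['a', 'b']
```

A "restricted" fusion in the paper's sense would first intersect the runs and rank only their inner overlap documents before applying either method.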
  5. Ingwersen, P.; Järvelin, K.: ¬The turn : integration of information seeking and retrieval in context (2005) 0.01
    0.006205366 = product of:
      0.024821464 = sum of:
        0.016091526 = weight(_text_:wide in 1323) [ClassicSimilarity], result of:
          0.016091526 = score(doc=1323,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.122383565 = fieldWeight in 1323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1323)
        0.0087299375 = weight(_text_:web in 1323) [ClassicSimilarity], result of:
          0.0087299375 = score(doc=1323,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.09014259 = fieldWeight in 1323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1323)
      0.25 = coord(2/8)
    
    Abstract
    The Turn analyzes research in information seeking and retrieval (IS&R) and proposes a new direction of integrating research in these two areas: the fields should turn off their separate and narrow paths and construct a new avenue of research. An essential direction for this avenue is context, as given in the subtitle Integration of Information Seeking and Retrieval in Context. Other essential themes in the book include: IS&R research models, frameworks and theories; search and work tasks and situations in context; interaction between humans and machines; information acquisition, relevance and information use; research design and methodology based on a structured set of explicit variables - all set into the holistic cognitive approach. The present monograph invites the reader into a construction project - there is much research to do for a contextual understanding of IS&R. The Turn represents a wide-ranging perspective of IS&R by providing a novel, unique research framework, covering both individual and social aspects of information behavior, including the generation, searching, retrieval and use of information. Regarding traditional laboratory information retrieval research, the monograph proposes the extension of research toward actors, search and work tasks, IR interaction and utility of information. Regarding traditional information seeking research, it proposes the extension toward information access technology and work task contexts. The Turn is the first synthesis of research in the broad area of IS&R, ranging from systems-oriented laboratory IR research to social-science-oriented information seeking studies.
    TOC: Introduction.- The Cognitive Framework for Information.- The Development of Information Seeking Research.- Systems-Oriented Information Retrieval.- Cognitive and User-Oriented Information Retrieval.- The Integrated IS&R Research Framework.- Implications of the Cognitive Framework for IS&R.- Towards a Research Program.- Conclusion.- Definitions.- References.- Index.
    Footnote
    - Chapter five provides a corresponding overview of the cognitive and user-oriented IR tradition. It shows which IR studies other than the laboratory-oriented ones can be carried out, ranging from early models (e.g. Taylor) via Belkin's ASK concept to Ingwersen's model of polyrepresentation, and from Bates' berrypicking approach to Vakkari's task-based IR model. Web IR, OKAPI and discussions of the relevance concept are also addressed here. - In the following chapter the authors propose an integrated IS&R research model that takes into account the manifold relationships between information seekers, system developers, interfaces and the other aspects involved. Their approach unites traditional laboratory research with various user-oriented traditions from IS&R, in particular with the empirical approaches to IS and to interactive IR, in a holistic cognitive model. - Chapter seven examines the implications of this model for IS&R; what is particularly striking is how complex the requests of information seekers are compared with the relative simplicity of the algorithms for finding relevant documents. Mapping the widely varying cognitive states of requesters within system development is certainly no trivial task. How the problem of incorporating the central aspect of meaning can be solved remains an open question. - The eighth chapter attempts to translate the previously discussed points into an IS&R research program (processes - behavior - system functionality - performance), also making some critical remarks on research practice to date. - The concluding ninth chapter briefly summarizes the book and can thus also be read as an entry point to the topic.
    This is followed by a very useful glossary of all the important terms used in the book, a bibliography and a subject index. Ingwersen and Järvelin have presented a very demanding and yet readable book. The overview chapters and discussions offered are not an introduction to information science, but they cover a large part of the subfields that are current in this discipline today and touched upon by ongoing research activities and publications. One could also put it, perhaps somewhat pointedly, like this: what is treated here is in effect modern information science. The attempt to unite the two research traditions will certainly secure this work a place in the history of the discipline. The title of the book seems less than fortunate. "The Turn" is meant to signify a turn, namely toward an integrated view of IS and IR. This is probably conveyed better by the subtitle, but that presumably seemed too dry to the authors. A pity, for "The Turn" already exists in our union catalog, for instance, albeit with the addition "from the Cold War to a new era; the United States and the Soviet Union 1983-1990". The publisher, which has otherwise delivered a polished (if not exactly inexpensive) product, would have done better to prevent such fuzzy duplication. Regardless of this, I recommend this important book for acquisition without reservation; it should not be missing from any larger library."
  6. Järvelin, K.; Ingwersen, P.; Niemi, T.: ¬A user-oriented interface for generalised informetric analysis based on applying advanced data modelling techniques (2000) 0.01
    0.005018736 = product of:
      0.040149886 = sum of:
        0.040149886 = weight(_text_:data in 4545) [ClassicSimilarity], result of:
          0.040149886 = score(doc=4545,freq=12.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.4278775 = fieldWeight in 4545, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4545)
      0.125 = coord(1/8)
    
    Abstract
    This article presents a novel user-oriented interface for generalised informetric analysis and demonstrates how informetric calculations can easily and declaratively be specified through advanced data modelling techniques. The interface is declarative and at a high level. Therefore it is easy to use, flexible and extensible. It enables end users to perform basic informetric ad hoc calculations easily and often with much less effort than in contemporary online retrieval systems. It also provides several fruitful generalisations of typical informetric measurements like impact factors. These are based on substituting traditional foci of analysis, for instance journals, by other object types, such as authors, organisations or countries. In the interface, bibliographic data are modelled as complex objects (non-first normal form relations) and terminological and citation networks involving transitive relationships are modelled as binary relations for deductive processing. The interface is flexible, because it makes it easy to switch focus between various object types for informetric calculations, e.g. from authors to institutions. Moreover, it is demonstrated that all informetric data can easily be broken down by criteria that foster advanced analysis, e.g. by years or content-bearing attributes. Such modelling allows flexible data aggregation along many dimensions. These salient features emerge from the query interface's general data restructuring and aggregation capabilities combined with transitive processing capabilities. The features are illustrated by means of sample queries and results in the article.
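The "switch focus" idea in this abstract, the same impact-factor-style calculation parameterised by the object type under analysis, can be sketched as follows (a hypothetical illustration; the records, field names, and citation counts are invented, and the authors' actual interface is declarative rather than procedural):

```python
from collections import defaultdict

# The same aggregate (citations per publication) computed for any unit
# of analysis: the key function selects the object type in focus.
def impact(records, key):
    cites = defaultdict(int)
    pubs = defaultdict(int)
    for r in records:
        unit = key(r)
        cites[unit] += r["citations"]
        pubs[unit] += 1
    return {u: cites[u] / pubs[u] for u in pubs}

# Invented bibliographic records:
records = [
    {"journal": "JASIST", "author": "Ingwersen", "citations": 10},
    {"journal": "JASIST", "author": "Larsen", "citations": 4},
    {"journal": "JDoc", "author": "Ingwersen", "citations": 6},
]
by_journal = impact(records, lambda r: r["journal"])  # {'JASIST': 7.0, 'JDoc': 6.0}
by_author = impact(records, lambda r: r["author"])    # {'Ingwersen': 8.0, 'Larsen': 4.0}
```

Breaking the aggregation down further, e.g. by year, would amount to extending the key function, which is the flexibility the article attributes to its data-modelling approach.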
  7. Björneborn, L.; Ingwersen, P.: Toward a basic framework for Webometrics (2004) 0.00
    0.004321099 = product of:
      0.03456879 = sum of:
        0.03456879 = weight(_text_:web in 3088) [ClassicSimilarity], result of:
          0.03456879 = score(doc=3088,freq=4.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.35694647 = fieldWeight in 3088, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3088)
      0.125 = coord(1/8)
    
    Abstract
    In this article, we define webometrics within the framework of informetric studies and bibliometrics, as belonging to library and information science, and as associated with cybermetrics as a generic subfield. We develop a consistent and detailed link typology and terminology and make explicit the distinction among different Web node levels when using the proposed conceptual framework. As a consequence, we propose a novel diagram notation to fully appreciate and investigate link structures between Web nodes in webometric analyses. We warn against taking the analogy between citation analyses and link analyses too far.
  8. Ingwersen, P.: Cognitive perspectives of information retrieval interaction : elements of a cognitive IR theory (1996) 0.00
    0.003548782 = product of:
      0.028390257 = sum of:
        0.028390257 = weight(_text_:data in 3616) [ClassicSimilarity], result of:
          0.028390257 = score(doc=3616,freq=6.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.30255508 = fieldWeight in 3616, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3616)
      0.125 = coord(1/8)
    
    Abstract
    The objective of this paper is to amalgamate theories of text retrieval from various research traditions into a cognitive theory for information retrieval interaction. Set in a cognitive framework, the paper outlines the concept of polyrepresentation applied to both the user's cognitive space and the information space of IR systems. The concept seeks to represent the current user's information need, problem state, and domain work task or interest in a structure of causality. Further, it implies that we should apply different methods of representation and a variety of IR techniques of different cognitive and functional origin simultaneously to each semantic full-text entity in the information space. The cognitive differences imply that by applying cognitive overlaps of information objects, originating from different interpretations of such objects through time and by type, the degree of uncertainty inherent in IR is decreased. ... The lack of consistency among authors, indexers, evaluators or users is of an identical cognitive nature. It is unavoidable, and indeed favourable to IR. In particular, for full-text retrieval, alternative semantic entities, including Salton et al.'s 'passage retrieval', are proposed to replace the traditional document record as the basic retrieval entity. These empirically observed phenomena of inconsistency and of semantic entities and values associated with data interpretation strongly support a cognitive approach to IR and the logical use of polyrepresentation, cognitive overlaps, and both data fusion and data diffusion.
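The cognitive-overlap idea in this abstract can be illustrated as simple set intersection (a toy sketch with invented document identifiers; the actual theory concerns functionally and cognitively different representations, not just different fields):

```python
# Documents retrieved via three cognitively different representations
# of the same information need (invented ids):
title_hits = {"d1", "d2", "d3"}
abstract_hits = {"d2", "d3", "d4"}
citation_hits = {"d3", "d5"}

# The inner overlap shrinks as more representations must agree,
# decreasing the uncertainty about the surviving candidates.
two_way = title_hits & abstract_hits   # {'d2', 'd3'}
three_way = two_way & citation_hits    # {'d3'}
```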
  9. Ingwersen, P.; Wormell, I.: Modern indexing and retrieval techniques matching different types of information needs (1989) 0.00
    0.003518027 = product of:
      0.028144216 = sum of:
        0.028144216 = product of:
          0.056288432 = sum of:
            0.056288432 = weight(_text_:22 in 7322) [ClassicSimilarity], result of:
              0.056288432 = score(doc=7322,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.5416616 = fieldWeight in 7322, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=7322)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Source
    International forum on information and documentation. 14(1989), S.17-22
  10. Ingwersen, P.; Willett, P.: ¬An introduction to algorithmic and cognitive approaches for information retrieval (1995) 0.00
    0.0032782245 = product of:
      0.026225796 = sum of:
        0.026225796 = weight(_text_:data in 4344) [ClassicSimilarity], result of:
          0.026225796 = score(doc=4344,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.2794884 = fieldWeight in 4344, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=4344)
      0.125 = coord(1/8)
    
    Abstract
    This paper provides an overview of two complementary approaches to the design and implementation of information retrieval systems. The first approach focuses on the algorithms and data structures that are needed to maximise the effectiveness and the efficiency of the searches that can be carried out on text databases, while the second adopts a cognitive approach that focuses on the role of the user and of the knowledge sources involved in information retrieval. The paper argues for a holistic view of information retrieval that is capable of encompassing both of these approaches.