Search (39 results, page 1 of 2)

  • author_ss:"Järvelin, K."
  1. Järvelin, K.; Kristensen, J.; Niemi, T.; Sormunen, E.; Keskustalo, H.: ¬A deductive data model for query expansion (1996) 0.02
    0.02053662 = product of:
      0.04791878 = sum of:
        0.026331145 = weight(_text_:u in 2230) [ClassicSimilarity], result of:
          0.026331145 = score(doc=2230,freq=2.0), product of:
            0.121304214 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03704574 = queryNorm
            0.21706703 = fieldWeight in 2230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.046875 = fieldNorm(doc=2230)
        0.006530081 = weight(_text_:a in 2230) [ClassicSimilarity], result of:
          0.006530081 = score(doc=2230,freq=8.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.15287387 = fieldWeight in 2230, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2230)
        0.015057558 = product of:
          0.030115116 = sum of:
            0.030115116 = weight(_text_:22 in 2230) [ClassicSimilarity], result of:
              0.030115116 = score(doc=2230,freq=2.0), product of:
                0.12972787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03704574 = queryNorm
                0.23214069 = fieldWeight in 2230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2230)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
    
    Abstract
    We present a deductive data model for concept-based query expansion. It is based on three abstraction levels: the conceptual, linguistic and occurrence levels. Concepts and relationships among them are represented at the conceptual level. The expression level represents natural language expressions for concepts. Each expression has one or more matching models at the occurrence level. Each model specifies the matching of the expression in database indices built in varying ways. The data model supports a concept-based query expansion and formulation tool, the ExpansionTool, for environments providing heterogeneous IR systems. Expansion is controlled by adjustable matching reliability.
    Source
    Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (ACM SIGIR '96), Zürich, Switzerland, August 18-22, 1996. Eds.: H.P. Frei et al
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
    Type
    a
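Note on the score breakdown: the Lucene ClassicSimilarity explanation for entry 1 above decomposes each matching term's weight into queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm, sums the term weights, and scales the sum by the coordination factor coord(3/7). The sketch below merely recomputes those products from the constants printed in the listing; the helper name term_weight is ours, not part of the Lucene API.

```python
# Recomputes the ClassicSimilarity arithmetic from the explain tree of entry 1 (doc 2230).
# All constants are copied from the listing above; the decomposition
#   score(term) = queryWeight * fieldWeight
#   queryWeight = idf * queryNorm,  fieldWeight = tf * idf * fieldNorm
# is standard Lucene TF-IDF. The helper name term_weight is ours.

QUERY_NORM = 0.03704574
COORD = 3 / 7                      # coord(3/7): three of seven query clauses matched

def term_weight(tf, idf, field_norm, query_norm=QUERY_NORM):
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

w_u  = term_weight(tf=1.4142135, idf=3.2744443, field_norm=0.046875)        # _text_:u
w_a  = term_weight(tf=2.828427,  idf=1.153047,  field_norm=0.046875)        # _text_:a
w_22 = term_weight(tf=1.4142135, idf=3.5018296, field_norm=0.046875) * 0.5  # coord(1/2)

print(w_u, w_a, w_22)               # ~0.0263311, ~0.0065301, ~0.0150576 as in the listing
print((w_u + w_a + w_22) * COORD)   # ~0.0205366, the 0.02 shown next to entry 1
```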
  2. Järvelin, K.: Evaluation (2011) 0.02
    0.019730791 = product of:
      0.06905777 = sum of:
        0.061439343 = weight(_text_:u in 548) [ClassicSimilarity], result of:
          0.061439343 = score(doc=548,freq=2.0), product of:
            0.121304214 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03704574 = queryNorm
            0.50648975 = fieldWeight in 548, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.109375 = fieldNorm(doc=548)
        0.0076184273 = weight(_text_:a in 548) [ClassicSimilarity], result of:
          0.0076184273 = score(doc=548,freq=2.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.17835285 = fieldWeight in 548, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=548)
      0.2857143 = coord(2/7)
    
    Source
    Interactive information seeking, behaviour and retrieval. Eds.: Ruthven, I. u. D. Kelly
    Type
    a
  3. Vakkari, P.; Järvelin, K.; Chang, Y.-W.: ¬The association of disciplinary background with the evolution of topics and methods in Library and Information Science research 1995-2015 (2023) 0.01
    0.0139032975 = product of:
      0.032441027 = sum of:
        0.013228328 = product of:
          0.026456656 = sum of:
            0.026456656 = weight(_text_:p in 998) [ClassicSimilarity], result of:
              0.026456656 = score(doc=998,freq=2.0), product of:
                0.13319843 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03704574 = queryNorm
                0.19862589 = fieldWeight in 998, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=998)
          0.5 = coord(1/2)
        0.0066647357 = weight(_text_:a in 998) [ClassicSimilarity], result of:
          0.0066647357 = score(doc=998,freq=12.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.15602624 = fieldWeight in 998, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=998)
        0.012547966 = product of:
          0.025095932 = sum of:
            0.025095932 = weight(_text_:22 in 998) [ClassicSimilarity], result of:
              0.025095932 = score(doc=998,freq=2.0), product of:
                0.12972787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03704574 = queryNorm
                0.19345059 = fieldWeight in 998, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=998)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
    
    Abstract
    The paper reports a longitudinal analysis of the topical and methodological development of Library and Information Science (LIS). Its focus is on the effects of researchers' disciplines on these developments. The study extends an earlier cross-sectional study (Vakkari et al., Journal of the Association for Information Science and Technology, 2022a, 73, 1706-1722) by a coordinated dataset representing a content analysis of articles published in 31 scholarly LIS journals in 1995, 2005, and 2015. It is novel in its coverage of authors' disciplines, topical and methodological aspects in a coordinated dataset spanning two decades thus allowing trend analysis. The findings include a shrinking trend in the share of LIS from 67 to 36% while Computer Science, and Business and Economics increase their share from 9 and 6% to 21 and 16%, respectively. The earlier cross-sectional study (Vakkari et al., Journal of the Association for Information Science and Technology, 2022a, 73, 1706-1722) for the year 2015 identified three topical clusters of LIS research, focusing on topical subfields, methodologies, and contributing disciplines. Correspondence analysis confirms their existence already in 1995 and traces their development through the decades. The contributing disciplines infuse their concepts, research questions, and approaches to LIS and may also subsume vital parts of LIS in their own structures of knowledge production.
    Date
    22. 6.2023 18:15:06
    Type
    a
  4. Järvelin, K.; Vakkari, P.: ¬The evolution of library and information science 1965-1985 : a content analysis of journal titles (1993) 0.01
    0.013660972 = product of:
      0.0478134 = sum of:
        0.037039317 = product of:
          0.074078634 = sum of:
            0.074078634 = weight(_text_:p in 4649) [ClassicSimilarity], result of:
              0.074078634 = score(doc=4649,freq=2.0), product of:
                0.13319843 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03704574 = queryNorm
                0.55615246 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4649)
          0.5 = coord(1/2)
        0.010774084 = weight(_text_:a in 4649) [ClassicSimilarity], result of:
          0.010774084 = score(doc=4649,freq=4.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.25222903 = fieldWeight in 4649, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=4649)
      0.2857143 = coord(2/7)
    
    Type
    a
  5. Ferro, N.; Silvello, G.; Keskustalo, H.; Pirkola, A.; Järvelin, K.: ¬The twist measure for IR evaluation : taking user's effort into account (2016) 0.01
    0.010707001 = product of:
      0.037474502 = sum of:
        0.028870367 = weight(_text_:g in 2771) [ClassicSimilarity], result of:
          0.028870367 = score(doc=2771,freq=2.0), product of:
            0.13914184 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.03704574 = queryNorm
            0.20748875 = fieldWeight in 2771, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2771)
        0.008604136 = weight(_text_:a in 2771) [ClassicSimilarity], result of:
          0.008604136 = score(doc=2771,freq=20.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.20142901 = fieldWeight in 2771, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2771)
      0.2857143 = coord(2/7)
    
    Abstract
    We present a novel measure for ranking evaluation, called Twist (t). It is a measure for informational intents, which handles both binary and graded relevance. t stems from the observation that searching is nowadays taken for granted: it is natural for users to assume that search engines are available and work well. As a consequence, users may take the utility they gain from finding relevant documents, which is the focus of traditional measures, for granted. By contrast, they may feel uneasy when the system returns nonrelevant documents, because they are then forced to do additional work to get the desired information, and this causes avoidable effort. The latter is the focus of t, which evaluates the effectiveness of a system from the point of view of the effort required of users to retrieve the desired information. We provide a formal definition of t and a demonstration of its properties, and introduce the notion of effort/gain plots, which complement traditional utility-based measures. By means of an extensive experimental evaluation, t is shown to grasp different aspects of system performance, to not require extensive and costly assessments, and to be a robust tool for detecting differences between systems.
    Type
    a
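The abstract of entry 5 introduces effort/gain plots alongside the twist measure t, but does not reproduce the formal definition of t. The sketch below is therefore only an illustration of the underlying idea, assuming binary relevance, unit gain per relevant document and unit effort per nonrelevant one; it is not the authors' measure.

```python
# Illustrative sketch only: cumulative gain vs. cumulative effort along a ranked list,
# in the spirit of the effort/gain plots mentioned above. Binary relevance, unit gain
# per relevant document and unit effort per nonrelevant document are assumptions of
# this sketch, not the published definition of the twist measure t.

def effort_gain_curve(relevance):
    """relevance: list of 0/1 judgements in ranked order."""
    gain, effort, curve = 0, 0, []
    for rel in relevance:
        if rel:
            gain += 1          # user gains by finding a relevant document
        else:
            effort += 1        # user spends avoidable effort on a nonrelevant one
        curve.append((effort, gain))
    return curve

# Two hypothetical systems judged over the same 8 ranked documents:
system_a = [1, 1, 0, 1, 0, 0, 1, 0]
system_b = [0, 1, 0, 0, 1, 1, 0, 1]
print(effort_gain_curve(system_a))   # reaches 4 relevant documents with less wasted effort
print(effort_gain_curve(system_b))
```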
  6. Ingwersen, P.; Järvelin, K.: ¬The turn : integration of information seeking and retrieval in context (2005) 0.01
    0.009285761 = product of:
      0.021666776 = sum of:
        0.006614164 = product of:
          0.013228328 = sum of:
            0.013228328 = weight(_text_:p in 1323) [ClassicSimilarity], result of:
              0.013228328 = score(doc=1323,freq=2.0), product of:
                0.13319843 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03704574 = queryNorm
                0.099312946 = fieldWeight in 1323, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1323)
          0.5 = coord(1/2)
        0.0109713115 = weight(_text_:u in 1323) [ClassicSimilarity], result of:
          0.0109713115 = score(doc=1323,freq=2.0), product of:
            0.121304214 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03704574 = queryNorm
            0.0904446 = fieldWeight in 1323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1323)
        0.0040813005 = weight(_text_:a in 1323) [ClassicSimilarity], result of:
          0.0040813005 = score(doc=1323,freq=18.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.09554617 = fieldWeight in 1323, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1323)
      0.42857143 = coord(3/7)
    
    Abstract
    The Turn analyzes the research of information seeking and retrieval (IS&R) and proposes a new direction of integrating research in these two areas: the fields should turn off their separate and narrow paths and construct a new avenue of research. An essential direction for this avenue is context as given in the subtitle Integration of Information Seeking and Retrieval in Context. Other essential themes in the book include: IS&R research models, frameworks and theories; search and work tasks and situations in context; interaction between humans and machines; information acquisition, relevance and information use; research design and methodology based on a structured set of explicit variables - all set into the holistic cognitive approach. The present monograph invites the reader into a construction project - there is much research to do for a contextual understanding of IS&R. The Turn represents a wide-ranging perspective of IS&R by providing a novel, unique research framework, covering both individual and social aspects of information behavior, including the generation, searching, retrieval and use of information. Regarding traditional laboratory information retrieval research, the monograph proposes the extension of research toward actors, search and work tasks, IR interaction and utility of information. Regarding traditional information seeking research, it proposes the extension toward information access technology and work task contexts. The Turn is the first synthesis of research in the broad area of IS&R ranging from systems oriented laboratory IR research to social science oriented information seeking studies. TOC: Introduction.- The Cognitive Framework for Information.- The Development of Information Seeking Research.- Systems-Oriented Information Retrieval.- Cognitive and User-Oriented Information Retrieval.- The Integrated IS&R Research Framework.- Implications of the Cognitive Framework for IS&R.- Towards a Research Program.- Conclusion.- Definitions.- References.- Index.
    Footnote
    - Chapter five provides a corresponding overview of the cognitive and user-oriented IR tradition. It shows which IR studies other than purely laboratory-oriented ones can be carried out, ranging from early models (e.g. Taylor) via Belkin's ASK concept to Ingwersen's model of polyrepresentation, and from Bates' berrypicking approach to Vakkari's "task-based" IR model. Web IR, OKAPI and discussions of the concept of relevance are also addressed here. - In the following chapter the authors propose an integrated IS&R research model that takes into account the manifold relationships between information seekers, system developers, interfaces and the other aspects involved. Their approach unites traditional laboratory research with various user-oriented traditions from IS&R, in particular with the empirical approaches to IS and to interactive IR, in a holistic cognitive model. - Chapter seven examines the implications of this model for IS&R; what is particularly striking is how complex the requests of information seekers are compared with the relative simplicity of the algorithms for finding relevant documents. Mapping the widely varying cognitive states of requesters within system development is certainly no trivial task, and whether the problem of incorporating the central aspect of meaning can be solved in the process remains an open question. - The eighth chapter attempts to turn the points discussed so far into an IS&R research programme (processes - behaviour - system functionality - performance), and also offers some critical remarks on research practice to date. - The concluding ninth chapter briefly summarizes the book and can therefore also be read as an entry point into the topic. It is followed by a very useful glossary of all the important terms used in the book, a bibliography and a subject index. Ingwersen and Järvelin have presented a very demanding yet readable book here. The survey chapters and discussions offered are not an introduction to information science, but they cover a large part of the subfields that are current in this discipline today and are touched upon by ongoing research activities and publications. One could also put it, perhaps a little pointedly, like this: what is addressed here is in fact modern information science. The attempt to unite the two research traditions will certainly secure this work a place in the history of the discipline. The title of the book seems not entirely fortunate. "The Turn" is meant to signify a turn, namely the one towards an integrated view of IS and IR. This is probably conveyed better by the subtitle, but the authors apparently found it too dry. A pity, because "The Turn" already exists, for example, in our union catalogue, albeit with the addition "from the Cold War to a new era; the United States and the Soviet Union 1983-1990". The publisher, who apart from that has delivered a solid (if not exactly inexpensive) product, could have better prevented such blurred duplication. That notwithstanding, I recommend this important book for acquisition without reservation; it should not be missing from any larger library."
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  7. Järvelin, K.; Niemi, T.: Deductive information retrieval based on classifications (1993) 0.01
    0.009138961 = product of:
      0.031986363 = sum of:
        0.026331145 = weight(_text_:u in 2229) [ClassicSimilarity], result of:
          0.026331145 = score(doc=2229,freq=2.0), product of:
            0.121304214 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03704574 = queryNorm
            0.21706703 = fieldWeight in 2229, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.046875 = fieldNorm(doc=2229)
        0.005655216 = weight(_text_:a in 2229) [ClassicSimilarity], result of:
          0.005655216 = score(doc=2229,freq=6.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.13239266 = fieldWeight in 2229, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2229)
      0.2857143 = coord(2/7)
    
    Abstract
    Modern fact databases contain abundant data classified through several classifications. Typically, users must consult these classifications in separate manuals or files, thus making their effective use difficult. Contemporary database systems offer little support for the deductive use of classifications. In this study we show how deductive data management techniques can be applied to the utilization of data value classifications. Computation of transitive class relationships is of primary importance here. We define a representation of classifications which supports transitive computation and present an operation-oriented deductive query language tailored for classification-based deductive information retrieval. The operations of this language are on the same abstraction level as relational algebra operations and can be integrated with these to form a powerful and flexible query language for deductive information retrieval. We define the integration of these operations and demonstrate the usefulness of the language in terms of several sample queries.
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
    Type
    a
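Entry 7 stresses that computing transitive class relationships is central to classification-based deductive retrieval. A minimal sketch of that idea follows, using an invented classification and document index; the breadth-first closure below illustrates the principle only and is not the paper's deductive query language.

```python
# Minimal sketch: transitive closure over a classification hierarchy, so that a query
# on a broad class also retrieves documents indexed under its (indirect) subclasses.
# The classification and document index below are hypothetical examples.

from collections import defaultdict, deque

subclass_of = {                      # direct child -> parent relationships
    "databases": "computer science",
    "deductive databases": "databases",
    "information retrieval": "computer science",
    "query languages": "databases",
}

def descendants(root, edges):
    """All classes transitively subordinate to `root` (breadth-first search)."""
    children = defaultdict(set)
    for child, parent in edges.items():
        children[parent].add(child)
    seen, queue = set(), deque([root])
    while queue:
        node = queue.popleft()
        for child in children[node] - seen:
            seen.add(child)
            queue.append(child)
    return seen

index = {                            # document -> classes it is indexed under
    "doc1": {"deductive databases"},
    "doc2": {"information retrieval"},
    "doc3": {"query languages"},
}

wanted = {"databases"} | descendants("databases", subclass_of)
print([d for d, classes in index.items() if classes & wanted])   # ['doc1', 'doc3']
```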
  8. Lehtokangas, R.; Järvelin, K.: Consistency of textual expression in newspaper articles : an argument for semantically based query expansion (2001) 0.01
    0.0076158014 = product of:
      0.026655303 = sum of:
        0.021942623 = weight(_text_:u in 4485) [ClassicSimilarity], result of:
          0.021942623 = score(doc=4485,freq=2.0), product of:
            0.121304214 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03704574 = queryNorm
            0.1808892 = fieldWeight in 4485, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4485)
        0.00471268 = weight(_text_:a in 4485) [ClassicSimilarity], result of:
          0.00471268 = score(doc=4485,freq=6.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.11032722 = fieldWeight in 4485, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4485)
      0.2857143 = coord(2/7)
    
    Abstract
    This article investigates how consistent different newspapers are in their choice of words when writing about the same news events. News articles on the same news events were taken from three Finnish newspapers and compared in regard to their central concepts and words representing the concepts in the news texts. Consistency figures were calculated for each set of three articles (the total number of sets was sixty). Inconsistency in words and concepts was found between news articles from different newspapers. The mean value of consistency calculated on the basis of words was 65 per cent; this however depended on the article length. For short news wires consistency was 83 per cent while for long articles it was only 47 per cent. At the concept level, consistency was considerably higher, ranging from 92 per cent to 97 per cent between short and long articles. The articles also represented three categories of topic (event, process and opinion). Statistically significant differences in consistency were found in regard to length but not in regard to the categories of topic. We argue that the expression inconsistency is a clear sign of a retrieval problem and that query expansion based on semantic relationships can significantly improve retrieval performance on free-text sources.
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
    Type
    a
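Entry 8 reports consistency percentages between newspaper articles at the word and concept levels, but the formula is not given in this listing. The sketch below assumes a simple overlap ratio (the share of one article's term set found in another's) purely for illustration; it is not the study's exact calculation.

```python
# Rough sketch of word-level vs. concept-level consistency between two texts reporting
# the same event. The overlap ratio used here is an assumption for illustration; the
# study's exact consistency formula is not reproduced in this listing.

def consistency(terms_a, terms_b):
    """Share of terms_a also found in terms_b, as a percentage."""
    a, b = set(terms_a), set(terms_b)
    return 100 * len(a & b) / len(a) if a else 0.0

article1_words = "prime minister announced new budget cuts on friday".split()
article2_words = "government announced budget cuts friday said prime minister".split()
article1_concepts = {"PrimeMinister", "BudgetCut", "Announcement"}
article2_concepts = {"PrimeMinister", "BudgetCut", "Announcement", "Government"}

print(round(consistency(article1_words, article2_words)))        # word level: lower (75)
print(round(consistency(article1_concepts, article2_concepts)))  # concept level: 100
```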
  9. Kettunen, K.; Kunttu, T.; Järvelin, K.: To stem or lemmatize a highly inflectional language in a probabilistic IR environment? (2005) 0.01
    0.006889085 = product of:
      0.024111796 = sum of:
        0.013228328 = product of:
          0.026456656 = sum of:
            0.026456656 = weight(_text_:p in 4395) [ClassicSimilarity], result of:
              0.026456656 = score(doc=4395,freq=2.0), product of:
                0.13319843 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03704574 = queryNorm
                0.19862589 = fieldWeight in 4395, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4395)
          0.5 = coord(1/2)
        0.010883467 = weight(_text_:a in 4395) [ClassicSimilarity], result of:
          0.010883467 = score(doc=4395,freq=32.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.25478977 = fieldWeight in 4395, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4395)
      0.2857143 = coord(2/7)
    
    Abstract
    Purpose - To show that stem generation compares well with lemmatization as a morphological tool for a highly inflectional language for IR purposes in a best-match retrieval system. Design/methodology/approach - Effects of three different morphological methods - lemmatization, stemming and stem production - for Finnish are compared in a probabilistic IR environment (INQUERY). Evaluation is done using a four-point relevance scale which is partitioned differently in different test settings. Findings - Results show that stem production, a lighter method than morphological lemmatization, compares well with lemmatization in a best-match IR environment. Differences in performance between stem production and lemmatization are small and they are not statistically significant in most of the tested settings. It is also shown that hitherto a rather neglected method of morphological processing for Finnish, stemming, performs reasonably well although the stemmer used - a Porter stemmer implementation - is far from optimal for a morphologically complex language like Finnish. In another series of tests, the effects of compound splitting and derivational expansion of queries are tested. Practical implications - Usefulness of morphological lemmatization and stem generation for IR purposes can be estimated with many factors. On the average P-R level they seem to behave very close to each other in a probabilistic IR system. Thus, the choice of the used method with highly inflectional languages needs to be estimated along other dimensions too. Originality/value - Results are achieved using Finnish as an example of a highly inflectional language. The results are of interest for anyone who is interested in processing of morphological variation of a highly inflected language for IR purposes.
    Type
    a
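Entry 9 compares lemmatization, stemming and stem production for Finnish. The toy sketch below only contrasts suffix-stripping with dictionary lookup; the suffix list and lemma table are invented and far cruder than the Porter-style stemmer and the morphological tools evaluated in the article.

```python
# Toy illustration of the difference between suffix-stripping stemming and
# dictionary-based lemmatization for inflected Finnish word forms. The suffix list and
# the lemma lookup table are invented for this sketch.

SUFFIXES = ("issa", "ssa", "jen", "en", "n", "t")    # a few Finnish case/number endings

def toy_stem(word):
    for suffix in SUFFIXES:                          # strip the first matching suffix
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

LEMMAS = {"kissojen": "kissa", "taloissa": "talo", "kirjat": "kirja"}  # hand-made table

def toy_lemmatize(word):
    return LEMMAS.get(word, word)

for w in ["kissojen", "taloissa", "kirjat"]:         # 'of the cats', 'in the houses', 'books'
    print(w, "->", toy_stem(w), "/", toy_lemmatize(w))
```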
  10. Järvelin, K.; Ingwersen, P.: User-oriented and cognitive models of information retrieval (2009) 0.01
    0.006830486 = product of:
      0.0239067 = sum of:
        0.018519659 = product of:
          0.037039317 = sum of:
            0.037039317 = weight(_text_:p in 3901) [ClassicSimilarity], result of:
              0.037039317 = score(doc=3901,freq=2.0), product of:
                0.13319843 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03704574 = queryNorm
                0.27807623 = fieldWeight in 3901, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3901)
          0.5 = coord(1/2)
        0.005387042 = weight(_text_:a in 3901) [ClassicSimilarity], result of:
          0.005387042 = score(doc=3901,freq=4.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.12611452 = fieldWeight in 3901, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3901)
      0.2857143 = coord(2/7)
    
    Abstract
    The domain of user-oriented and cognitive information retrieval (IR) is first discussed, followed by a discussion on the dimensions and types of models one may build for the domain. The focus of the present entry is on the models of user-oriented and cognitive IR, not on their empirical applications. Several models with different emphases on user-oriented and cognitive IR are presented-ranging from overall approaches and relevance models to procedural models, cognitive models, and task-based models. The present entry does not discuss empirical findings based on the models.
    Type
    a
  11. Hansen, P.; Järvelin, K.: Collaborative Information Retrieval in an information-intensive domain (2005) 0.01
    0.0061512026 = product of:
      0.021529209 = sum of:
        0.015873993 = product of:
          0.031747986 = sum of:
            0.031747986 = weight(_text_:p in 1040) [ClassicSimilarity], result of:
              0.031747986 = score(doc=1040,freq=2.0), product of:
                0.13319843 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03704574 = queryNorm
                0.23835106 = fieldWeight in 1040, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1040)
          0.5 = coord(1/2)
        0.005655216 = weight(_text_:a in 1040) [ClassicSimilarity], result of:
          0.005655216 = score(doc=1040,freq=6.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.13239266 = fieldWeight in 1040, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1040)
      0.2857143 = coord(2/7)
    
    Abstract
    In this article we investigate the expressions of collaborative activities within information seeking and retrieval processes (IS&R). Generally, information seeking and retrieval is regarded as an individual and isolated process in IR research. We assume that an IS&R situation is not merely an individual effort, but inherently involves various collaborative activities. We present empirical results from a real-life and information-intensive setting within the patent domain, showing that the patent task performance process involves highly collaborative aspects throughout the stages of the information seeking and retrieval process. Furthermore, we show that these activities may be categorised and related to different stages in an information seeking and retrieval process. Therefore, the assumption that information retrieval performance is purely individual needs to be reconsidered. Finally, we also propose a refined IR framework involving collaborative aspects.
    Type
    a
  12. Järvelin, K.: ¬An analysis of two approaches in information retrieval : from frameworks to study designs (2007) 0.01
    0.005854702 = product of:
      0.020491457 = sum of:
        0.015873993 = product of:
          0.031747986 = sum of:
            0.031747986 = weight(_text_:p in 326) [ClassicSimilarity], result of:
              0.031747986 = score(doc=326,freq=2.0), product of:
                0.13319843 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03704574 = queryNorm
                0.23835106 = fieldWeight in 326, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.046875 = fieldNorm(doc=326)
          0.5 = coord(1/2)
        0.0046174643 = weight(_text_:a in 326) [ClassicSimilarity], result of:
          0.0046174643 = score(doc=326,freq=4.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.10809815 = fieldWeight in 326, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=326)
      0.2857143 = coord(2/7)
    
    Abstract
    There is a well-known gap between systems-oriented information retrieval (IR) and user-oriented IR, which cognitive IR seeks to bridge. It is therefore interesting to analyze approaches at the level of frameworks, models, and study designs. This article is an exercise in such an analysis, focusing on two significant approaches to IR: the lab IR approach and P. Ingwersen's (1996) cognitive IR approach. The article focuses on their research frameworks, models, hypotheses, laws and theories, study designs, and possible contributions. The two approaches are quite different, which becomes apparent in the use of independent, controlled, and dependent variables in the study designs of each approach. Thus, each approach is capable of contributing very differently to understanding and developing information access. The article also discusses integrating the approaches at the study-design level.
    Type
    a
  13. Ahlgren, P.; Järvelin, K.: Measuring impact of twelve information scientists using the DCI index (2010) 0.01
    0.005854702 = product of:
      0.020491457 = sum of:
        0.015873993 = product of:
          0.031747986 = sum of:
            0.031747986 = weight(_text_:p in 3593) [ClassicSimilarity], result of:
              0.031747986 = score(doc=3593,freq=2.0), product of:
                0.13319843 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03704574 = queryNorm
                0.23835106 = fieldWeight in 3593, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3593)
          0.5 = coord(1/2)
        0.0046174643 = weight(_text_:a in 3593) [ClassicSimilarity], result of:
          0.0046174643 = score(doc=3593,freq=4.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.10809815 = fieldWeight in 3593, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3593)
      0.2857143 = coord(2/7)
    
    Abstract
    The Discounted Cumulated Impact (DCI) index has recently been proposed for research evaluation. In the present work an earlier dataset by Cronin and Meho (2007) is reanalyzed, with the aim of exemplifying the salient features of the DCI index. We apply the index on, and compare our results to, the outcomes of the Cronin-Meho (2007) study. Both authors and their top publications are used as units of analysis, which suggests that, by adjusting the parameters of evaluation according to the needs of research evaluation, the DCI index delivers data on an author's (or publication's) lifetime impact or current impact at the time of evaluation, on an author's (or publication's) capability of inviting citations from highly cited later publications as an indication of impact, and on the relative impact across a set of authors (or publications) over their lifetime or currently.
    Type
    a
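Entry 13 evaluates the Discounted Cumulated Impact index, whose published formula is not reproduced in this listing. The sketch below shows a generic rank-discounted cumulation of citation counts with a logarithmic discount, as a loose illustration of the idea only, not the DCI definition; the parameter b and the discount function are assumptions of this sketch.

```python
# Loose sketch of discounting and cumulating citation counts over an author's
# publications, ranked by citations. The logarithmic discount and parameter b are
# illustrative choices; this is not the published Discounted Cumulated Impact formula.

from math import log

def discounted_cumulated(citations, b=2.0):
    """Cumulate citation counts, discounting publications further down the ranking."""
    total, curve = 0.0, []
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        total += c if rank < b else c / (log(rank) / log(b))   # no discount at the very top
        curve.append(round(total, 2))
    return curve

author_a = [120, 40, 25, 10, 5]     # hypothetical citation counts per publication
author_b = [60, 55, 50, 45, 40]
print(discounted_cumulated(author_a))
print(discounted_cumulated(author_b))
```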
  14. Näppilä, T.; Järvelin, K.; Niemi, T.: ¬A tool for data cube construction from structurally heterogeneous XML documents (2008) 0.01
    0.005641916 = product of:
      0.019746704 = sum of:
        0.0071987375 = weight(_text_:a in 1369) [ClassicSimilarity], result of:
          0.0071987375 = score(doc=1369,freq=14.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.1685276 = fieldWeight in 1369, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1369)
        0.012547966 = product of:
          0.025095932 = sum of:
            0.025095932 = weight(_text_:22 in 1369) [ClassicSimilarity], result of:
              0.025095932 = score(doc=1369,freq=2.0), product of:
                0.12972787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03704574 = queryNorm
                0.19345059 = fieldWeight in 1369, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1369)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Data cubes for OLAP (On-Line Analytical Processing) often need to be constructed from data located in several distributed and autonomous information sources. Such a data integration process is challenging due to semantic, syntactic, and structural heterogeneity among the data. While XML (extensible markup language) is the de facto standard for data exchange, the three types of heterogeneity remain. Moreover, popular path-oriented XML query languages, such as XQuery, require the user to know in much detail the structure of the documents to be processed and are, thus, effectively impractical in many real-world data integration tasks. Several Lowest Common Ancestor (LCA)-based XML query evaluation strategies have recently been introduced to provide a more structure-independent way to access XML documents. We shall, however, show that this approach leads in the context of certain - not uncommon - types of XML documents to undesirable results. This article introduces a novel high-level data extraction primitive that utilizes the purpose-built Smallest Possible Context (SPC) query evaluation strategy. We demonstrate, through a system prototype for OLAP data cube construction and a sample application in informetrics, that our approach has real advantages in data integration.
    Date
    9. 2.2008 17:22:42
    Type
    a
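Entry 14 contrasts path-oriented XML querying with Lowest Common Ancestor (LCA)-based, structure-independent access before introducing its Smallest Possible Context primitive. The sketch below illustrates only the LCA-style idea of finding the smallest elements containing all query keywords; the XML snippet and helper names are invented, and the SPC strategy itself is not shown.

```python
# Minimal sketch of LCA-style matching over an XML document: find the smallest elements
# that contain all query keywords, without knowing the document's paths in advance.
# The XML snippet and helper names are hypothetical; the article's Smallest Possible
# Context strategy is not implemented here.

import xml.etree.ElementTree as ET

DOC = """<articles>
  <article><title>XML data cubes</title><year>2008</year></article>
  <article><title>OLAP basics</title><year>1999</year></article>
</articles>"""

def iter_ancestors(node, parent):
    while node in parent:
        node = parent[node]
        yield node

def smallest_containing(xml_text, keywords):
    root = ET.fromstring(xml_text)
    parent = {child: node for node in root.iter() for child in node}
    hits = []
    for node in root.iter():
        text = " ".join(node.itertext()).lower()
        if all(k.lower() in text for k in keywords):
            hits.append(node)
    # keep only the deepest containing elements (drop ancestors of other hits)
    ancestors = {a for h in hits for a in iter_ancestors(h, parent)}
    return [h.tag for h in hits if h not in ancestors]

print(smallest_containing(DOC, ["XML", "2008"]))   # ['article'] - the first article element
```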
  15. Saastamoinen, M.; Järvelin, K.: Search task features in work tasks of varying types and complexity (2017) 0.01
    0.005621435 = product of:
      0.019675022 = sum of:
        0.0046174643 = weight(_text_:a in 3589) [ClassicSimilarity], result of:
          0.0046174643 = score(doc=3589,freq=4.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.10809815 = fieldWeight in 3589, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3589)
        0.015057558 = product of:
          0.030115116 = sum of:
            0.030115116 = weight(_text_:22 in 3589) [ClassicSimilarity], result of:
              0.030115116 = score(doc=3589,freq=2.0), product of:
                0.12972787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03704574 = queryNorm
                0.23214069 = fieldWeight in 3589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3589)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Information searching in practice is seldom an end in itself. In work, work task (WT) performance forms the context, which information searching should serve. Therefore, information retrieval (IR) systems development/evaluation should take the WT context into account. The present paper analyzes how WT features (task complexity and task types) affect information searching in authentic work: the types of information needs, search processes, and search media. We collected data on 22 information professionals in authentic work situations in three organization types: city administration, universities, and companies. The data comprise 286 WTs and 420 search tasks (STs). The data include transaction logs, video recordings, daily questionnaires, interviews, and observation. The data were analyzed quantitatively. Even if the participants used a range of search media, most STs were simple throughout the data, and up to 42% of WTs did not include searching. WT's effects on STs are not straightforward: different WT types react differently to WT complexity. Due to the simplicity of authentic searching, the WT/ST types in interactive IR experiments should be reconsidered.
    Type
    a
  16. Järvelin, K.; Ingwersen, P.; Niemi, T.: ¬A user-oriented interface for generalised informetric analysis based on applying advanced data modelling techniques (2000) 0.01
    0.0053343037 = product of:
      0.018670062 = sum of:
        0.013228328 = product of:
          0.026456656 = sum of:
            0.026456656 = weight(_text_:p in 4545) [ClassicSimilarity], result of:
              0.026456656 = score(doc=4545,freq=2.0), product of:
                0.13319843 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03704574 = queryNorm
                0.19862589 = fieldWeight in 4545, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4545)
          0.5 = coord(1/2)
        0.0054417336 = weight(_text_:a in 4545) [ClassicSimilarity], result of:
          0.0054417336 = score(doc=4545,freq=8.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.12739488 = fieldWeight in 4545, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4545)
      0.2857143 = coord(2/7)
    
    Abstract
    This article presents a novel user-oriented interface for generalised informetric analysis and demonstrates how informetric calculations can easily and declaratively be specified through advanced data modelling techniques. The interface is declarative and at a high level. Therefore it is easy to use, flexible and extensible. It enables end users to perform basic informetric ad hoc calculations easily and often with much less effort than in contemporary online retrieval systems. It also provides several fruitful generalisations of typical informetric measurements like impact factors. These are based on substituting traditional foci of analysis, for instance journals, by other object types, such as authors, organisations or countries. In the interface, bibliographic data are modelled as complex objects (non-first normal form relations) and terminological and citation networks involving transitive relationships are modelled as binary relations for deductive processing. The interface is flexible, because it makes it easy to switch focus between various object types for informetric calculations, e.g. from authors to institutions. Moreover, it is demonstrated that all informetric data can easily be broken down by criteria that foster advanced analysis, e.g. by years or content-bearing attributes. Such modelling allows flexible data aggregation along many dimensions. These salient features emerge from the query interface's general data restructuring and aggregation capabilities combined with transitive processing capabilities. The features are illustrated by means of sample queries and results in the article.
    Type
    a
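Entry 16 generalises impact-factor-style calculations by switching the object type in focus, for instance from journals to authors or countries. The sketch below shows that regrouping idea over flat, invented bibliographic records; it is not the article's declarative query interface.

```python
# Minimal sketch of switching the focus of an informetric calculation (here, mean
# citations per paper) from journals to authors or countries, as described above.
# The bibliographic records and field names are invented for illustration.

from collections import defaultdict
from statistics import mean

RECORDS = [
    {"journal": "JASIST", "author": "Smith",    "country": "FI", "citations": 12},
    {"journal": "JASIST", "author": "Virtanen", "country": "FI", "citations": 30},
    {"journal": "IP&M",   "author": "Smith",    "country": "UK", "citations": 4},
    {"journal": "IP&M",   "author": "Chen",     "country": "UK", "citations": 20},
]

def mean_citations_by(records, focus):
    """Group records by the chosen object type and average their citation counts."""
    groups = defaultdict(list)
    for r in records:
        groups[r[focus]].append(r["citations"])
    return {key: mean(vals) for key, vals in groups.items()}

print(mean_citations_by(RECORDS, "journal"))   # {'JASIST': 21, 'IP&M': 12}
print(mean_citations_by(RECORDS, "author"))    # {'Smith': 8, 'Virtanen': 30, 'Chen': 20}
```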
  17. Vakkari, P.; Chang, Y.-W.; Järvelin, K.: Disciplinary contributions to research topics and methodology in Library and Information Science : leading to fragmentation? (2022) 0.01
    0.0053343037 = product of:
      0.018670062 = sum of:
        0.013228328 = product of:
          0.026456656 = sum of:
            0.026456656 = weight(_text_:p in 767) [ClassicSimilarity], result of:
              0.026456656 = score(doc=767,freq=2.0), product of:
                0.13319843 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03704574 = queryNorm
                0.19862589 = fieldWeight in 767, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=767)
          0.5 = coord(1/2)
        0.0054417336 = weight(_text_:a in 767) [ClassicSimilarity], result of:
          0.0054417336 = score(doc=767,freq=8.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.12739488 = fieldWeight in 767, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=767)
      0.2857143 = coord(2/7)
    
    Abstract
    The study analyses contributions to Library and Information Science (LIS) by researchers representing various disciplines. How are such contributions associated with the choice of research topics and methodology? The study employs a quantitative content analysis of articles published in 31 scholarly LIS journals in 2015. Each article is seen as a contribution to LIS by the authors' disciplines, which are inferred from their affiliations. The unit of analysis is the article-discipline pair. Of the contribution instances, the share of LIS is one third. Computer Science contributes one fifth and Business and Economics one sixth. The latter disciplines dominate the contributions in information retrieval, information seeking, and scientific communication indicating strong influences in LIS. Correspondence analysis reveals three clusters of research, one focusing on traditional LIS with contributions from LIS and Humanities and survey-type research; another on information retrieval with contributions from Computer Science and experimental research; and the third on scientific communication with contributions from Natural Sciences and Medicine and citation analytic research. The strong differentiation of scholarly contributions in LIS hints to the fragmentation of LIS as a discipline.
    Type
    a
  18. Vakkari, P.; Järvelin, K.: Explanation in information seeking and retrieval (2005) 0.00
    0.004990278 = product of:
      0.017465971 = sum of:
        0.010582662 = product of:
          0.021165324 = sum of:
            0.021165324 = weight(_text_:p in 643) [ClassicSimilarity], result of:
              0.021165324 = score(doc=643,freq=2.0), product of:
                0.13319843 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03704574 = queryNorm
                0.15890071 = fieldWeight in 643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03125 = fieldNorm(doc=643)
          0.5 = coord(1/2)
        0.0068833097 = weight(_text_:a in 643) [ClassicSimilarity], result of:
          0.0068833097 = score(doc=643,freq=20.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.16114321 = fieldWeight in 643, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=643)
      0.2857143 = coord(2/7)
    
    Abstract
    Information Retrieval (IR) is a research area both within Computer Science and Information Science. It has by and large two communities: a Computer Science oriented experimental approach and a user-oriented Information Science approach with a Social Science background. The communities hold a critical stance towards each other (e.g., Ingwersen, 1996), the latter suspecting the realism of the former, and the former suspecting the usefulness of the latter. Within Information Science the study of information seeking (IS) also has a Social Science background. There is a lot of research in each of these particular areas of information seeking and retrieval (IS&R). However, the three communities do not really communicate with each other. Why is this, and could the relationships be otherwise? Do the communities in fact belong together? Or perhaps each community is better off forgetting about the existence of the other two? We feel that the relationships between the research areas have not been properly analyzed. One way to analyze the relationships is to examine what each research area is trying to find out: which phenomena are being explained and how. We believe that IS&R research would benefit from being analytic about its frameworks, models and theories, not just at the level of meta-theories, but also much more concretely at the level of study designs. Over the years there have been calls for more context in the study of IS&R. Work tasks as well as cultural activities/interests have been proposed as the proper context for information access. For example, Wersig (1973) conceptualized information needs from the tasks perspective. He argued that in order to learn about information needs and seeking, one needs to take into account the whole active professional role of the individuals being investigated. Byström and Järvelin (1995) analysed IS processes in the light of tasks of varying complexity. Ingwersen (1996) discussed the role of tasks and their descriptions and problematic situations from a cognitive perspective on IR. Most recently, Vakkari (2003) reviewed task-based IR and Järvelin and Ingwersen (2004) proposed the extension of IS&R research toward the task context. Therefore there is much support to the task context, but how should it be applied in IS&R?
    Source
    New directions in cognitive information retrieval. Eds.: A. Spink, C. Cole
    Type
    a
  19. Tuomaala, O.; Järvelin, K.; Vakkari, P.: Evolution of library and information science, 1965-2005 : content analysis of journal articles (2014) 0.00
    0.0048789186 = product of:
      0.017076215 = sum of:
        0.013228328 = product of:
          0.026456656 = sum of:
            0.026456656 = weight(_text_:p in 1309) [ClassicSimilarity], result of:
              0.026456656 = score(doc=1309,freq=2.0), product of:
                0.13319843 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03704574 = queryNorm
                0.19862589 = fieldWeight in 1309, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1309)
          0.5 = coord(1/2)
        0.003847887 = weight(_text_:a in 1309) [ClassicSimilarity], result of:
          0.003847887 = score(doc=1309,freq=4.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.090081796 = fieldWeight in 1309, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1309)
      0.2857143 = coord(2/7)
    
    Abstract
    This article first analyzes library and information science (LIS) research articles published in core LIS journals in 2005. It also examines the development of LIS from 1965 to 2005 in light of comparable data sets for 1965, 1985, and 2005. In both cases, the authors report (a) how the research articles are distributed by topic and (b) what approaches, research strategies, and methods were applied in the articles. In 2005, the largest research areas in LIS by this measure were information storage and retrieval, scientific communication, library and information-service activities, and information seeking. The same research areas constituted the quantitative core of LIS in the previous years since 1965. Information retrieval has been the most popular area of research over the years. The proportion of research on library and information-service activities decreased after 1985, but the popularity of information seeking and of scientific communication grew during the period studied. The viewpoint of research has shifted from library and information organizations to end users and development of systems for the latter. The proportion of empirical research strategies was high and rose over time, with the survey method being the single most important method. However, attention to evaluation and experiments increased considerably after 1985. Conceptual research strategies and system analysis, description, and design were quite popular, but declining. The most significant changes from 1965 to 2005 are the decreasing interest in library and information-service activities and the growth of research into information seeking and scientific communication.
    Type
    a
  20. Järvelin, K.; Vakkari, P.: LIS research across 50 years: content analysis of journal articles (2022) 0.00
    0.0048789186 = product of:
      0.017076215 = sum of:
        0.013228328 = product of:
          0.026456656 = sum of:
            0.026456656 = weight(_text_:p in 949) [ClassicSimilarity], result of:
              0.026456656 = score(doc=949,freq=2.0), product of:
                0.13319843 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03704574 = queryNorm
                0.19862589 = fieldWeight in 949, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=949)
          0.5 = coord(1/2)
        0.003847887 = weight(_text_:a in 949) [ClassicSimilarity], result of:
          0.003847887 = score(doc=949,freq=4.0), product of:
            0.04271548 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03704574 = queryNorm
            0.090081796 = fieldWeight in 949, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=949)
      0.2857143 = coord(2/7)
    
    Abstract
    Purpose - This paper analyses the research in Library and Information Science (LIS) and reports on (1) the status of LIS research in 2015 and (2) the longitudinal evolution of LIS research from 1965 to 2015. Design/methodology/approach - The study employs a quantitative intellectual content analysis of articles published in 30+ scholarly LIS journals, following the design by Tuomaala et al. (2014). In the content analysis, we classify articles along eight dimensions covering topical content and methodology. Findings - The topical findings indicate that the earlier strong LIS emphasis on L&I services has declined notably, while scientific and professional communication has become the most popular topic. Information storage and retrieval has given up its earlier strong position towards the end of the years analyzed. Individuals are increasingly the units of observation. End-users' and developers' viewpoints have strengthened at the cost of the intermediaries' viewpoint. LIS research is methodologically increasingly scattered, since surveys, scientometric methods, experiments, case studies and qualitative studies have all gained in popularity. Consequently, LIS may have become more versatile in the analysis of its research objects during the years analyzed. Originality/value - Among quantitative intellectual content analyses of LIS research, the study is unique in its scope: length of analysis period (50 years), width (8 dimensions covering topical content and methodology) and depth (the annual batch of 30+ scholarly journals).
    Type
    a