Search (23 results, page 1 of 2)

  • author_ss:"Järvelin, K."
  • language_ss:"e"
  • year_i:[2000 TO 2010}
  1. Näppilä, T.; Järvelin, K.; Niemi, T.: ¬A tool for data cube construction from structurally heterogeneous XML documents (2008) 0.03
    0.033506516 = product of:
      0.06701303 = sum of:
        0.011456838 = weight(_text_:information in 1369) [ClassicSimilarity], result of:
          0.011456838 = score(doc=1369,freq=4.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.13714671 = fieldWeight in 1369, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1369)
        0.055556197 = sum of:
          0.02331961 = weight(_text_:technology in 1369) [ClassicSimilarity], result of:
            0.02331961 = score(doc=1369,freq=2.0), product of:
              0.1417311 = queryWeight, product of:
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.047586527 = queryNorm
              0.16453418 = fieldWeight in 1369, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1369)
          0.032236587 = weight(_text_:22 in 1369) [ClassicSimilarity], result of:
            0.032236587 = score(doc=1369,freq=2.0), product of:
              0.16663991 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047586527 = queryNorm
              0.19345059 = fieldWeight in 1369, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1369)
      0.5 = coord(2/4)
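    The breakdown above follows Lucene's ClassicSimilarity (TF-IDF) explanation format. As a minimal sketch, assuming the classic formula tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)), the following Python lines reproduce the first leaf weight for the term "information" in document 1369 from the factors displayed; the variable names are illustrative, not Lucene's.

    import math

    # Factors exactly as shown in the explanation tree above.
    freq = 4.0                 # occurrences of "information" in the field
    doc_freq = 20772           # documents containing the term
    max_docs = 44218           # documents in the index
    query_norm = 0.047586527
    field_norm = 0.0390625     # length normalisation stored for this field

    tf = math.sqrt(freq)                              # 2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~1.7554779
    query_weight = idf * query_norm                   # ~0.083537094
    field_weight = tf * idf * field_norm              # ~0.13714671

    print(query_weight * field_weight)                # ~0.011456838, the leaf value above

    # The document score sums such leaf weights and applies the coordination
    # factor, e.g. coord(2/4) = 0.5 when two of four query clauses match.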
    
    Abstract
    Data cubes for OLAP (On-Line Analytical Processing) often need to be constructed from data located in several distributed and autonomous information sources. Such a data integration process is challenging due to semantic, syntactic, and structural heterogeneity among the data. While XML (Extensible Markup Language) is the de facto standard for data exchange, the three types of heterogeneity remain. Moreover, popular path-oriented XML query languages, such as XQuery, require the user to know in much detail the structure of the documents to be processed and are, thus, effectively impractical in many real-world data integration tasks. Several Lowest Common Ancestor (LCA)-based XML query evaluation strategies have recently been introduced to provide a more structure-independent way to access XML documents. We shall, however, show that for certain - not uncommon - types of XML documents this approach leads to undesirable results. This article introduces a novel high-level data extraction primitive that utilizes the purpose-built Smallest Possible Context (SPC) query evaluation strategy. We demonstrate, through a system prototype for OLAP data cube construction and a sample application in informetrics, that our approach has real advantages in data integration.
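    As a loose illustration of the data cube target structure described above (not of the paper's SPC evaluation strategy), the sketch below aggregates flat fact records into a small cube keyed by dimension values; the record fields and values are invented for the example.

    from collections import defaultdict

    # Hypothetical fact records extracted from heterogeneous XML sources.
    facts = [
        {"year": 2006, "journal": "JASIST", "author": "Järvelin", "papers": 2},
        {"year": 2006, "journal": "IP&M",   "author": "Järvelin", "papers": 1},
        {"year": 2007, "journal": "JASIST", "author": "Järvelin", "papers": 3},
    ]

    def build_cube(records, dimensions, measure):
        """Aggregate a measure over every combination of the given dimension values."""
        cube = defaultdict(int)
        for rec in records:
            key = tuple(rec[d] for d in dimensions)
            cube[key] += rec[measure]
        return dict(cube)

    cube = build_cube(facts, dimensions=("year", "journal"), measure="papers")
    print(cube)   # {(2006, 'JASIST'): 2, (2006, 'IP&M'): 1, (2007, 'JASIST'): 3}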
    Date
    9. 2.2008 17:22:42
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.3, S.435-449
  2. Järvelin, K.: ¬An analysis of two approaches in information retrieval : from frameworks to study designs (2007) 0.02
    0.016717333 = product of:
      0.033434667 = sum of:
        0.0194429 = weight(_text_:information in 326) [ClassicSimilarity], result of:
          0.0194429 = score(doc=326,freq=8.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.23274569 = fieldWeight in 326, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=326)
        0.013991767 = product of:
          0.027983533 = sum of:
            0.027983533 = weight(_text_:technology in 326) [ClassicSimilarity], result of:
              0.027983533 = score(doc=326,freq=2.0), product of:
                0.1417311 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.047586527 = queryNorm
                0.19744103 = fieldWeight in 326, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.046875 = fieldNorm(doc=326)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    There is a well-known gap between systems-oriented information retrieval (IR) and user-oriented IR, which cognitive IR seeks to bridge. It is therefore interesting to analyze approaches at the level of frameworks, models, and study designs. This article is an exercise in such an analysis, focusing on two significant approaches to IR: the lab IR approach and P. Ingwersen's (1996) cognitive IR approach. The article focuses on their research frameworks, models, hypotheses, laws and theories, study designs, and possible contributions. The two approaches are quite different, which becomes apparent in the use of independent, controlled, and dependent variables in the study designs of each approach. Thus, each approach is capable of contributing very differently to understanding and developing information access. The article also discusses integrating the approaches at the study-design level.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.7, S.971-986
  3. Järvelin, K.; Persson, O.: ¬The DCI-index : discounted cumulated impact-based research evaluation (2008) 0.02
    0.015808811 = product of:
      0.031617623 = sum of:
        0.012961932 = weight(_text_:information in 2332) [ClassicSimilarity], result of:
          0.012961932 = score(doc=2332,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.1551638 = fieldWeight in 2332, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2332)
        0.01865569 = product of:
          0.03731138 = sum of:
            0.03731138 = weight(_text_:technology in 2332) [ClassicSimilarity], result of:
              0.03731138 = score(doc=2332,freq=2.0), product of:
                0.1417311 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.047586527 = queryNorm
                0.2632547 = fieldWeight in 2332, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2332)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.14, S.2350-2352
  4. Lehtokangas, R.; Keskustalo, H.; Järvelin, K.: Experiments with transitive dictionary translation and pseudo-relevance feedback using graded relevance assessments (2008) 0.02
    0.015414905 = product of:
      0.03082981 = sum of:
        0.016838044 = weight(_text_:information in 1349) [ClassicSimilarity], result of:
          0.016838044 = score(doc=1349,freq=6.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.20156369 = fieldWeight in 1349, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1349)
        0.013991767 = product of:
          0.027983533 = sum of:
            0.027983533 = weight(_text_:technology in 1349) [ClassicSimilarity], result of:
              0.027983533 = score(doc=1349,freq=2.0), product of:
                0.1417311 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.047586527 = queryNorm
                0.19744103 = fieldWeight in 1349, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1349)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In this article, the authors present evaluation results for transitive dictionary-based cross-language information retrieval (CLIR) using graded relevance assessments in a best match retrieval environment. A text database containing newspaper articles and a related set of 35 search topics were used in the tests. Source language topics (in English, German, and Swedish) were automatically translated into the target language (Finnish) via an intermediate (or pivot) language. Effectiveness of the transitively translated queries was compared to that of the directly translated and monolingual Finnish queries. Pseudo-relevance feedback (PRF) was also used to expand the original transitive target queries. Cross-language information retrieval performance was evaluated on three relevance thresholds: stringent, regular, and liberal. The transitive translations performed well, achieving, on average, 85-93% of the direct translation performance and 66-72% of monolingual performance. Moreover, PRF was successful in raising the performance of the transitive translation routes, both in absolute terms and in relation to monolingual and direct translation performance with PRF applied.
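    A minimal sketch of the transitive (pivot-language) dictionary lookup described above might look as follows; the tiny English-Swedish-Finnish word lists are invented for illustration, and real CLIR systems add morphological normalisation, structuring of translation alternatives, and PRF on top.

    # Toy bilingual dictionaries (invented entries, for illustration only).
    en_to_sv = {"information": ["information"], "retrieval": ["återvinning", "sökning"]}
    sv_to_fi = {"information": ["informaatio", "tieto"], "återvinning": ["haku"],
                "sökning": ["haku", "etsintä"]}

    def transitive_translate(query_terms, first_dict, second_dict):
        """Translate source terms to the target language via a pivot language,
        keeping every alternative produced along both hops."""
        target_terms = set()
        for term in query_terms:
            for pivot in first_dict.get(term, []):
                target_terms.update(second_dict.get(pivot, []))
        return sorted(target_terms)

    print(transitive_translate(["information", "retrieval"], en_to_sv, sv_to_fi))
    # ['etsintä', 'haku', 'informaatio', 'tieto']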
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.3, S.476-488
  5. Järvelin, K.; Persson, O.: ¬The DCI index : discounted cumulated impact-based research evaluation (2008) 0.01
    0.013973147 = product of:
      0.027946293 = sum of:
        0.011456838 = weight(_text_:information in 2694) [ClassicSimilarity], result of:
          0.011456838 = score(doc=2694,freq=4.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.13714671 = fieldWeight in 2694, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2694)
        0.016489455 = product of:
          0.03297891 = sum of:
            0.03297891 = weight(_text_:technology in 2694) [ClassicSimilarity], result of:
              0.03297891 = score(doc=2694,freq=4.0), product of:
                0.1417311 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.047586527 = queryNorm
                0.23268649 = fieldWeight in 2694, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2694)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    Erratum in: Järvelin, K., O. Persson: The DCI-index: discounted cumulated impact-based research evaluation. In: Journal of the American Society for Information Science and Technology. 59(2008) no.14, S.2350-2352.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.9, S.1433-1440
  6. Talvensaari, T.; Juhola, M.; Laurikkala, J.; Järvelin, K.: Corpus-based cross-language information retrieval in retrieval of highly relevant documents (2007) 0.01
    0.01393111 = product of:
      0.02786222 = sum of:
        0.016202414 = weight(_text_:information in 139) [ClassicSimilarity], result of:
          0.016202414 = score(doc=139,freq=8.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.19395474 = fieldWeight in 139, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=139)
        0.011659805 = product of:
          0.02331961 = sum of:
            0.02331961 = weight(_text_:technology in 139) [ClassicSimilarity], result of:
              0.02331961 = score(doc=139,freq=2.0), product of:
                0.1417311 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.047586527 = queryNorm
                0.16453418 = fieldWeight in 139, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=139)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Information retrieval systems' ability to retrieve highly relevant documents has become more and more important in the age of extremely large collections, such as the World Wide Web (WWW). The authors' aim was to find out how corpus-based cross-language information retrieval (CLIR) manages in retrieving highly relevant documents. They created a Finnish-Swedish comparable corpus from two loosely related document collections and used it as a source of knowledge for query translation. Finnish test queries were translated into Swedish and run against a Swedish test collection. Graded relevance assessments were used in evaluating the results, and three relevance criterion levels - liberal, regular, and stringent - were applied. The runs were also evaluated with generalized recall and precision, which weight the retrieved documents according to their relevance level. The performance of the Comparable Corpus Translation system (COCOT) was compared to that of a dictionary-based query translation program; the two translation methods were also combined. The results indicate that corpus-based CLIR performs particularly well with highly relevant documents. In average precision, COCOT even matched the monolingual baseline on the highest relevance level. The performance of the different query translation methods was further analyzed by finding out reasons for poor rankings of highly relevant documents.
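    Generalized recall and precision, used in the evaluation above, credit retrieved documents in proportion to their graded relevance rather than counting them as simply relevant or not. A hedged sketch of the idea, assuming assessments rescaled to relevance scores in [0, 1], is given below; the document identifiers and scores are invented.

    def generalized_precision(retrieved, relevance):
        """Mean relevance score of the retrieved documents (0 if nothing was retrieved)."""
        if not retrieved:
            return 0.0
        return sum(relevance.get(d, 0.0) for d in retrieved) / len(retrieved)

    def generalized_recall(retrieved, relevance):
        """Retrieved relevance mass as a share of the total relevance mass for the topic."""
        total = sum(relevance.values())
        if total == 0:
            return 0.0
        return sum(relevance.get(d, 0.0) for d in retrieved) / total

    # Graded assessments for one topic (document id -> relevance score, invented).
    relevance = {"d1": 1.0, "d2": 0.33, "d3": 0.0, "d4": 0.67}
    retrieved = ["d1", "d3", "d4"]
    print(generalized_precision(retrieved, relevance))  # (1.0 + 0.0 + 0.67) / 3 ≈ 0.557
    print(generalized_recall(retrieved, relevance))     # (1.0 + 0.0 + 0.67) / 2.0 ≈ 0.835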
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.3, S.322-334
  7. Pharo, N.; Järvelin, K.: "Irrational" searchers and IR-rational researchers (2006) 0.01
    0.013869986 = product of:
      0.027739972 = sum of:
        0.013748205 = weight(_text_:information in 4922) [ClassicSimilarity], result of:
          0.013748205 = score(doc=4922,freq=4.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.16457605 = fieldWeight in 4922, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4922)
        0.013991767 = product of:
          0.027983533 = sum of:
            0.027983533 = weight(_text_:technology in 4922) [ClassicSimilarity], result of:
              0.027983533 = score(doc=4922,freq=2.0), product of:
                0.1417311 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.047586527 = queryNorm
                0.19744103 = fieldWeight in 4922, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4922)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In this article the authors look at the prescriptions advocated by Web search textbooks in the light of a selection of empirical data of real Web information search processes. They use the strategy of disjointed incrementalism, which is a theoretical foundation from decision making, to focus on how people face complex problems, and claim that such problem solving can be compared to the tasks searchers perform when interacting with the Web. The findings suggest that textbooks on Web searching should take into account that searchers only tend to take a certain number of sources into consideration, that the searchers adjust their goals and objectives during searching, and that searchers reconsider the usefulness of sources at different stages of their work tasks as well as their search tasks.
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.2, S.222-232
  8. Pirkola, A.; Järvelin, K.: Employing the resolution power of search keys (2001) 0.01
    0.01383271 = product of:
      0.02766542 = sum of:
        0.011341691 = weight(_text_:information in 5907) [ClassicSimilarity], result of:
          0.011341691 = score(doc=5907,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.13576832 = fieldWeight in 5907, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5907)
        0.016323728 = product of:
          0.032647457 = sum of:
            0.032647457 = weight(_text_:technology in 5907) [ClassicSimilarity], result of:
              0.032647457 = score(doc=5907,freq=2.0), product of:
                0.1417311 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.047586527 = queryNorm
                0.23034787 = fieldWeight in 5907, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5907)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.7, S.575-583
  9. Ingwersen, P.; Järvelin, K.: ¬The turn : integration of information seeking and retrieval in context (2005) 0.01
    0.01219605 = product of:
      0.0243921 = sum of:
        0.018562198 = weight(_text_:information in 1323) [ClassicSimilarity], result of:
          0.018562198 = score(doc=1323,freq=42.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.22220306 = fieldWeight in 1323, product of:
              6.4807405 = tf(freq=42.0), with freq of:
                42.0 = termFreq=42.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1323)
        0.0058299024 = product of:
          0.011659805 = sum of:
            0.011659805 = weight(_text_:technology in 1323) [ClassicSimilarity], result of:
              0.011659805 = score(doc=1323,freq=2.0), product of:
                0.1417311 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.047586527 = queryNorm
                0.08226709 = fieldWeight in 1323, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1323)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The Turn analyzes the research of information seeking and retrieval (IS&R) and proposes a new direction of integrating research in these two areas: the fields should turn off their separate and narrow paths and construct a new avenue of research. An essential direction for this avenue is context as given in the subtitle Integration of Information Seeking and Retrieval in Context. Other essential themes in the book include: IS&R research models, frameworks and theories; search and work tasks and situations in context; interaction between humans and machines; information acquisition, relevance and information use; research design and methodology based on a structured set of explicit variables - all set into the holistic cognitive approach. The present monograph invites the reader into a construction project - there is much research to do for a contextual understanding of IS&R. The Turn represents a wide-ranging perspective of IS&R by providing a novel unique research framework, covering both individual and social aspects of information behavior, including the generation, searching, retrieval and use of information. Regarding traditional laboratory information retrieval research, the monograph proposes the extension of research toward actors, search and work tasks, IR interaction and utility of information. Regarding traditional information seeking research, it proposes the extension toward information access technology and work task contexts. The Turn is the first synthesis of research in the broad area of IS&R ranging from systems oriented laboratory IR research to social science oriented information seeking studies. TOC: Introduction.- The Cognitive Framework for Information.- The Development of Information Seeking Research.- Systems-Oriented Information Retrieval.- Cognitive and User-Oriented Information Retrieval.- The Integrated IS&R Research Framework.- Implications of the Cognitive Framework for IS&R.- Towards a Research Program.- Conclusion.- Definitions.- References.- Index.
    Footnote
    Rez. in: Mitt. VÖB 59(2006) H.2, S.81-83 (O. Oberhauser): "With this volume, two outstanding representatives of European information science, professors Peter Ingwersen (Copenhagen) and Kalervo Järvelin (Tampere), have presented a work that may one day be called their opus magnum. This would not surprise me, for the authors undertake the ambitious attempt to unite, under a holistic cognitive approach, two research traditions of information science that have so far had rather little contact with each other: the field of "Information Seeking and Retrieval" (IS&R), anchored primarily in the social sciences, and "Information Retrieval" (IR), located mainly in computer science. In doing so, they also aim to extend the cognitive approach - dominant for many years, yet criticised as too individualistic - so that technological, behavioural and collaborative aspects are taken into account in a coherent way. This is done in nine chapters, as follows: - First, the two "camps" - the IR tradition oriented towards systems and laboratory experiments, and the IS&R faction oriented towards user issues - are contrasted with each other and some central concepts are clarified. - The second chapter gives a detailed account of the cognitive strand of information science, particularly with regard to the concept of information. - This is followed by an overview of research to date on "Information Seeking" (IS) - an extremely useful introduction to the research questions and models, the research methodology, and the open questions in this area, e.g. the neglect of user-system interaction caused by the one-sided focus on the user. - In an analogous manner, the fourth chapter presents system-oriented IR research in a concentrated overview, covering both the "laboratory model" and approaches such as natural language processing and expert systems. Aspects such as relevance, query modification and performance measurement are addressed, as is the methodology - from the first laboratory experiments to TREC and beyond.
    Series
    The Kluwer international series on information retrieval ; 18
    Theme
    Information
  10. Niemi, T.; Hirvonen, L.; Järvelin, K.: Multidimensional data model and query language for informetrics (2003) 0.01
    0.011856608 = product of:
      0.023713216 = sum of:
        0.00972145 = weight(_text_:information in 1753) [ClassicSimilarity], result of:
          0.00972145 = score(doc=1753,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.116372846 = fieldWeight in 1753, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1753)
        0.013991767 = product of:
          0.027983533 = sum of:
            0.027983533 = weight(_text_:technology in 1753) [ClassicSimilarity], result of:
              0.027983533 = score(doc=1753,freq=2.0), product of:
                0.1417311 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.047586527 = queryNorm
                0.19744103 = fieldWeight in 1753, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1753)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.10, S.939-951
  11. Kekäläinen, J.; Järvelin, K.: Using graded relevance assessments in IR evaluation (2002) 0.01
    0.0098805055 = product of:
      0.019761011 = sum of:
        0.008101207 = weight(_text_:information in 5225) [ClassicSimilarity], result of:
          0.008101207 = score(doc=5225,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.09697737 = fieldWeight in 5225, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5225)
        0.011659805 = product of:
          0.02331961 = sum of:
            0.02331961 = weight(_text_:technology in 5225) [ClassicSimilarity], result of:
              0.02331961 = score(doc=5225,freq=2.0), product of:
                0.1417311 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.047586527 = queryNorm
                0.16453418 = fieldWeight in 5225, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5225)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 53(2002) no.13, S.1120-xxxx
  12. Pirkola, A.; Hedlund, T.; Keskustalo, H.; Järvelin, K.: Dictionary-based cross-language information retrieval : problems, methods, and research findings (2001) 0.01
    0.008019786 = product of:
      0.032079145 = sum of:
        0.032079145 = weight(_text_:information in 3908) [ClassicSimilarity], result of:
          0.032079145 = score(doc=3908,freq=4.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.3840108 = fieldWeight in 3908, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=3908)
      0.25 = coord(1/4)
    
    Source
    Information retrieval. 4(2001), S.209-230
  13. Hansen, P.; Järvelin, K.: Collaborative Information Retrieval in an information-intensive domain (2005) 0.01
    0.007291087 = product of:
      0.029164348 = sum of:
        0.029164348 = weight(_text_:information in 1040) [ClassicSimilarity], result of:
          0.029164348 = score(doc=1040,freq=18.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.34911853 = fieldWeight in 1040, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1040)
      0.25 = coord(1/4)
    
    Abstract
    In this article we investigate the expressions of collaborative activities within information seeking and retrieval processes (IS&R). Generally, information seeking and retrieval is regarded as an individual and isolated process in IR research. We assume that an IS&R situation is not merely an individual effort, but inherently involves various collaborative activities. We present empirical results from a real-life and information-intensive setting within the patent domain, showing that the patent task performance process involves highly collaborative aspects throughout the stages of the information seeking and retrieval process. Furthermore, we show that these activities may be categorised and related to different stages in an information seeking and retrieval process. Therefore, the assumption that information retrieval performance is purely individual needs to be reconsidered. Finally, we also propose a refined IR framework involving collaborative aspects.
    Source
    Information processing and management. 41(2005) no.5, S.1101-1120
  14. Niemi, T.; Junkkari, M.; Järvelin, K.; Viita, S.: Advanced query language for manipulating complex entities (2004) 0.01
    0.0056708455 = product of:
      0.022683382 = sum of:
        0.022683382 = weight(_text_:information in 4218) [ClassicSimilarity], result of:
          0.022683382 = score(doc=4218,freq=2.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.27153665 = fieldWeight in 4218, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=4218)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 40(2004) no.6, S.869-
  15. Vakkari, P.; Järvelin, K.: Explanation in information seeking and retrieval (2005) 0.01
    0.0056126816 = product of:
      0.022450726 = sum of:
        0.022450726 = weight(_text_:information in 643) [ClassicSimilarity], result of:
          0.022450726 = score(doc=643,freq=24.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.2687516 = fieldWeight in 643, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=643)
      0.25 = coord(1/4)
    
    Abstract
    Information Retrieval (IR) is a research area both within Computer Science and Information Science. It has by and large two communities: a Computer Science oriented experimental approach and a user-oriented Information Science approach with a Social Science background. The communities hold a critical stance towards each other (e.g., Ingwersen, 1996), the latter suspecting the realism of the former, and the former suspecting the usefulness of the latter. Within Information Science the study of information seeking (IS) also has a Social Science background. There is a lot of research in each of these particular areas of information seeking and retrieval (IS&R). However, the three communities do not really communicate with each other. Why is this, and could the relationships be otherwise? Do the communities in fact belong together? Or perhaps each community is better off forgetting about the existence of the other two? We feel that the relationships between the research areas have not been properly analyzed. One way to analyze the relationships is to examine what each research area is trying to find out: which phenomena are being explained and how. We believe that IS&R research would benefit from being analytic about its frameworks, models and theories, not just at the level of meta-theories, but also much more concretely at the level of study designs. Over the years there have been calls for more context in the study of IS&R. Work tasks as well as cultural activities/interests have been proposed as the proper context for information access. For example, Wersig (1973) conceptualized information needs from the tasks perspective. He argued that in order to learn about information needs and seeking, one needs to take into account the whole active professional role of the individuals being investigated. Byström and Järvelin (1995) analysed IS processes in the light of tasks of varying complexity. Ingwersen (1996) discussed the role of tasks and their descriptions and problematic situations from a cognitive perspective on IR. Most recently, Vakkari (2003) reviewed task-based IR and Järvelin and Ingwersen (2004) proposed the extension of IS&R research toward the task context. Therefore there is much support to the task context, but how should it be applied in IS&R?
    Series
    The information retrieval series, vol. 19
    Source
    New directions in cognitive information retrieval. Eds.: A. Spink, C. Cole
  16. Järvelin, K.; Ingwersen, P.: User-oriented and cognitive models of information retrieval (2009) 0.00
    0.0049110963 = product of:
      0.019644385 = sum of:
        0.019644385 = weight(_text_:information in 3901) [ClassicSimilarity], result of:
          0.019644385 = score(doc=3901,freq=6.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.23515764 = fieldWeight in 3901, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3901)
      0.25 = coord(1/4)
    
    Abstract
    The domain of user-oriented and cognitive information retrieval (IR) is first discussed, followed by a discussion on the dimensions and types of models one may build for the domain. The focus of the present entry is on the models of user-oriented and cognitive IR, not on their empirical applications. Several models with different emphases on user-oriented and cognitive IR are presented - ranging from overall approaches and relevance models to procedural models, cognitive models, and task-based models. The present entry does not discuss empirical findings based on the models.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  17. Pharo, N.; Järvelin, K.: ¬The SST method : a tool for analysing Web information search processes (2004) 0.00
    0.0045287125 = product of:
      0.01811485 = sum of:
        0.01811485 = weight(_text_:information in 2533) [ClassicSimilarity], result of:
          0.01811485 = score(doc=2533,freq=10.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.21684799 = fieldWeight in 2533, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2533)
      0.25 = coord(1/4)
    
    Abstract
    The article presents the search situation transition (SST) method for analysing Web information search (WIS) processes. The idea of the method is to analyse searching behaviour, the process, in detail and to connect the searcher's actions (captured in a log) with his/her intentions and goals, which log analysis never captures. On the other hand, ex post facto surveys, while popular in WIS research, cannot capture the actual search processes. The method is presented through three facets: its domain, its procedure, and its justification. The method's domain is presented in the form of a conceptual framework which maps five central categories that influence WIS processes: the searcher, the social/organisational environment, the work task, the search task, and the process itself. The method's procedure includes various techniques for data collection and analysis. The article presents examples from real WIS processes and shows how the method can be used to identify the interplay of the categories during the processes. It is shown that the method presents a new approach in information seeking and retrieval by focusing on the search process as a phenomenon and by explicating how different information seeking factors directly affect the search process.
    Source
    Information processing and management. 40(2004) no.4, S.633-654
  18. Halttunen, K.; Järvelin, K.: Assessing learning outcomes in two information retrieval learning environments (2005) 0.00
    0.004209511 = product of:
      0.016838044 = sum of:
        0.016838044 = weight(_text_:information in 996) [ClassicSimilarity], result of:
          0.016838044 = score(doc=996,freq=6.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.20156369 = fieldWeight in 996, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=996)
      0.25 = coord(1/4)
    
    Abstract
    In order to design information retrieval (IR) learning environments and instruction, it is important to explore learning outcomes of different pedagogical solutions. Learning outcomes have seldom been evaluated in IR instruction. The particular focus of this study is the assessment of learning outcomes in an experimental, but naturalistic, learning environment compared to more traditional instruction. The 57 participants of an introductory course on IR were selected for this study, and the analysis illustrates their learning outcomes regarding both conceptual change and development of IR skill. Concept mapping of student essays was used to analyze conceptual change, and log files of search exercises provided data for performance assessment. Students in the experimental learning environment changed their conceptions more regarding linguistic aspects of IR and placed more emphasis on planning and management of the search process. Performance assessment indicates that anchored instruction and scaffolding with an instructional tool, the IR Game, with performance feedback enables students to construct queries with fewer semantic knowledge errors also in operational IR systems.
    Source
    Information processing and management. 41(2005) no.4, S.949-972
  19. Sormunen, E.; Kekäläinen, J.; Koivisto, J.; Järvelin, K.: Document text characteristics affect the ranking of the most relevant documents by expanded structured queries (2001) 0.00
    0.0040506036 = product of:
      0.016202414 = sum of:
        0.016202414 = weight(_text_:information in 4487) [ClassicSimilarity], result of:
          0.016202414 = score(doc=4487,freq=8.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.19395474 = fieldWeight in 4487, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4487)
      0.25 = coord(1/4)
    
    Abstract
    The increasing flood of documentary information through the Internet and other information sources challenges the developers of information retrieval systems. It is not enough that an IR system is able to make a distinction between relevant and non-relevant documents. The reduction of information overload requires that IR systems provide the capability of screening the most valuable documents out of the mass of potentially or marginally relevant documents. This paper introduces a new concept-based method to analyse the text characteristics of documents at varying relevance levels. The results of the document analysis were applied in an experiment on query expansion (QE) in a probabilistic IR system. Statistical differences in textual characteristics of highly relevant and less relevant documents were investigated by applying a facet analysis technique. In highly relevant documents a larger number of aspects of the request were discussed, searchable expressions for the aspects were distributed over a larger set of text paragraphs, and a larger set of unique expressions were used per aspect than in marginally relevant documents. A query expansion experiment verified that the findings of the text analysis can be exploited in formulating more effective queries for best match retrieval in the search for highly relevant documents. The results revealed that expanded queries with concept-based structures performed better than unexpanded queries or "natural language" queries. Further, it was shown that highly relevant documents benefit substantially more from the concept-based QE in ranking than marginally relevant documents.
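    A hedged sketch of what a concept-based structured query of the kind studied above might look like is given below; the #and/#syn operator syntax is modelled loosely on InQuery-style structured queries, and the facets and expansion terms are invented.

    def structured_query(facets):
        """Combine per-facet synonym groups into one structured best-match query string."""
        groups = []
        for terms in facets.values():
            groups.append("#syn(" + " ".join(terms) + ")")
        return "#and(" + " ".join(groups) + ")"

    # Two facets of a request, each expanded with related expressions (invented).
    facets = {
        "vehicle":  ["car", "automobile", "motorcar"],
        "emission": ["emission", "exhaust", "pollution"],
    }
    print(structured_query(facets))
    # #and(#syn(car automobile motorcar) #syn(emission exhaust pollution))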
  20. Saarikoski, J.; Laurikkala, J.; Järvelin, K.; Juhola, M.: ¬A study of the use of self-organising maps in information retrieval (2009) 0.00
    0.0040506036 = product of:
      0.016202414 = sum of:
        0.016202414 = weight(_text_:information in 2836) [ClassicSimilarity], result of:
          0.016202414 = score(doc=2836,freq=8.0), product of:
            0.083537094 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047586527 = queryNorm
            0.19395474 = fieldWeight in 2836, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2836)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The aim of this paper is to explore the possibility of retrieving information with Kohonen self-organising maps, which are known to be effective in grouping objects according to their similarity or dissimilarity. Design/methodology/approach - After conventional preprocessing, such as transforming into vector space, documents from a German document collection were used to train a neural network of the Kohonen self-organising map type. Such an unsupervised network forms a document map from which relevant objects can be found according to queries. Findings - Self-organising maps ordered documents into groups from which it was possible to find relevant targets. Research limitations/implications - The number of documents used was moderate due to the limited number of documents associated with the test topics. The training of self-organising maps entails rather long running times, which is their practical limitation. In future, the aim will be to build larger networks by compressing document matrices, and to develop document searching in them. Practical implications - With self-organising maps the distribution of documents can be visualised and relevant documents found in document collections of limited size. Originality/value - The paper reports on an approach that can be used especially to group documents and also for information search. So far self-organising maps have rarely been studied for information retrieval. Instead, they have been applied to document grouping tasks.
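    As a rough illustration of the self-organising map algorithm referred to above (not of the authors' experimental setup), the sketch below trains a tiny Kohonen map on random document vectors with NumPy and reports each document's best-matching map unit; the grid size, learning schedule, and data are all invented.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_som(docs, grid=(4, 4), epochs=50, lr0=0.5, sigma0=1.5):
        """Train a small Kohonen self-organising map on row-vector documents."""
        n_units = grid[0] * grid[1]
        dim = docs.shape[1]
        weights = rng.random((n_units, dim))
        # Grid coordinates of each map unit, used by the neighbourhood function.
        coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], dtype=float)
        for epoch in range(epochs):
            lr = lr0 * (1.0 - epoch / epochs)                 # learning rate decays towards 0
            sigma = sigma0 * (1.0 - epoch / epochs) + 0.01    # neighbourhood radius shrinks
            for x in docs[rng.permutation(len(docs))]:
                bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))  # best-matching unit
                dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)        # grid distance to BMU
                h = np.exp(-dist2 / (2.0 * sigma ** 2))                    # Gaussian neighbourhood
                weights += lr * h[:, None] * (x - weights)                 # pull units towards x
        return weights, coords

    # Hypothetical document vectors (e.g. tf-idf vectors reduced to 20 dimensions).
    docs = rng.random((30, 20))
    weights, coords = train_som(docs)
    for i, x in enumerate(docs[:5]):
        bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
        row, col = (int(c) for c in coords[bmu])
        print(f"doc {i} -> map unit ({row}, {col})")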