Search (5 results, page 1 of 1)

  • × author_ss:"Leide, J.E."
  • × language_ss:"e"
  • × year_i:[2000 TO 2010}
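
  The three active facets above map directly onto Solr filter queries (the mixed brackets in year_i:[2000 TO 2010} denote an inclusive lower bound and an exclusive upper bound). Below is a minimal sketch of the kind of request behind this page, assuming a standard Solr /select endpoint; the host, core name, and free-text query terms are placeholders, the terms being inferred only from the weighted terms that appear in the score explanations further down.

    # Minimal sketch: the active facet filters expressed as Solr fq parameters.
    # The endpoint URL and the q terms are assumptions, not taken from this page.
    import requests

    SOLR_SELECT = "http://localhost:8983/solr/literature/select"  # hypothetical host and core

    params = {
        "q": "online retrieval",  # assumed terms; the explanations below weight _text_:online and _text_:retrieval
        "fq": [
            'author_ss:"Leide, J.E."',
            'language_ss:"e"',
            "year_i:[2000 TO 2010}",  # 2000 inclusive, 2010 exclusive
        ],
        "rows": 10,
        "wt": "json",
    }

    response = requests.get(SOLR_SELECT, params=params)
    print(response.json()["response"]["numFound"])  # this page reports 5 matches
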
  1. Cole, C.; Leide, J.E.; Large, A.; Beheshti, J.; Brooks, M.: Putting it together online : information need identification for the domain novice user (2005) 0.01
    0.013874464 = product of:
      0.04162339 = sum of:
        0.04162339 = product of:
          0.062435087 = sum of:
            0.025961377 = weight(_text_:online in 3469) [ClassicSimilarity], result of:
              0.025961377 = score(doc=3469,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16765618 = fieldWeight in 3469, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3469)
            0.03647371 = weight(_text_:retrieval in 3469) [ClassicSimilarity], result of:
              0.03647371 = score(doc=3469,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23632148 = fieldWeight in 3469, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3469)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Domain novice users in the beginning stages of researching a topic find themselves searching for information via information retrieval (IR) systems before they have identified their information need. Pre-Internet access technologies adapted by current IR systems poorly serve these domain novice users, whose behavior might be characterized as rudderless and without a compass. In this article we describe a conceptual design for an information retrieval system that incorporates standard information need identification classification and subject cataloging schemes, called the INIIReye System, and a study that tests the efficacy of the innovative part of the INIIReye System, called the Associative Index. The Associative Index helps the user put together his or her associative thoughts: Vannevar Bush's idea of associative indexing for his Memex machine that he never actually described. For the first time, data from the study reported here quantitatively supports the theoretical notion that the information seeker's information need is identified through transformation of his/her knowledge structure (i.e., the seeker's cognitive map or perspective on the task for which information is being sought).
  2. Julien, C.-A.; Leide, J.E.; Bouthillier, F.: Controlled user evaluations of information visualization interfaces for text retrieval : literature review and meta-analysis (2008) 0.00
    0.0048631616 = product of:
      0.014589485 = sum of:
        0.014589485 = product of:
          0.043768454 = sum of:
            0.043768454 = weight(_text_:retrieval in 1718) [ClassicSimilarity], result of:
              0.043768454 = score(doc=1718,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.2835858 = fieldWeight in 1718, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1718)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This review describes experimental designs (users, search tasks, measures, etc.) used by 31 controlled user studies of information visualization (IV) tools for textual information retrieval (IR) and a meta-analysis of the reported statistical effects. Comparable experimental designs allow research designers to compare their results with other reports, and support the development of experimentally verified design guidelines concerning which IV techniques are better suited to which types of IR tasks. The studies generally use a within-subject design with 15 or more undergraduate students performing browsing to known-item tasks on sets of at least 1,000 full-text articles or Web pages on topics of general interest/news. Results of the meta-analysis (N = 8) showed no significant effects of the IV tool as compared with a text-only equivalent, but the set shows great variability suggesting an inadequate basis of comparison. Experimental design recommendations are provided which would support comparison of existing IV tools for IR usability testing.
  3. Leide, J.E.; Cole, C.; Beheshti, J.; Large, A.; Lin, Y.: Task-based information retrieval : structuring undergraduate history essays for better course evaluation using essay-type visualizations (2007) 0.00
    0.004052635 = product of:
      0.012157904 = sum of:
        0.012157904 = product of:
          0.03647371 = sum of:
            0.03647371 = weight(_text_:retrieval in 460) [ClassicSimilarity], result of:
              0.03647371 = score(doc=460,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23632148 = fieldWeight in 460, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=460)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    When domain novices are in C.C. Kuhlthau's (1993) Stage 3, the exploration stage of researching an assignment, they often do not know their information need; this causes them to go back to Stage 2, the topic-selection stage, when they are selecting keywords to formulate their query to an Information Retrieval (IR) system. Our hypothesis is that instead of going backward, they should be going forward toward a goal state: the performance of the task for which they are seeking the information. If they can somehow construct their goal state into a query, this forward-looking query better operationalizes their information need than does a topic-based query. For domain novice undergraduates seeking information for a course essay, we define their task as selecting a high-impact essay structure which will put the students' learning on display for the course instructor who will evaluate the essay. We report a study of first-year history undergraduate students which tested the use and effectiveness of "essay type" as a task-focused query-formulation device. We randomly assigned 78 history undergraduates to an intervention group and a control group. The dependent variable was essay quality, based on (a) an evaluation of the student's essay by a research team member, and (b) the marks given to the student's essay by the course instructor. We found that conscious or formal consideration of essay type is inconclusive as a basis for a task-focused query-formulation device for IR.
  4. Cole, C.; Leide, J.E.: Using the user's mental model to guide the integration of information space into information need (2003) 0.00
    0.0028656456 = product of:
      0.008596936 = sum of:
        0.008596936 = product of:
          0.025790809 = sum of:
            0.025790809 = weight(_text_:retrieval in 1237) [ClassicSimilarity], result of:
              0.025790809 = score(doc=1237,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16710453 = fieldWeight in 1237, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1237)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The study reported here tested the efficacy of an information retrieval system output summary and visualization scheme for undergraduates taking a Vietnam War history course who were in Kuhlthau's Stage 3 of researching a history essay. The visualization scheme consisted of (a) the undergraduate's own visualization of his or her essay topic, drawn by the student on the bottom half of a sheet of paper, and (b) a visualization of the information space (determined by index term counting) on the top half of the same page. To test the visualization scheme, students enrolled in a Vietnam War history course were randomly assigned to either the visualization scheme group, who received a high recall search output, or the nonvisualization group, who received a high precision search output. The dependent variable was the mark awarded to the essay by the course instructor. There was no significant difference between the mean marks for the two groups. We were pleasantly surprised with this result given the bad reputation of high recall as a practical search strategy. We hypothesize that a more proactive visualization system is needed that takes the student through the process of using the visualization scheme, including steps that induce student cognition about task-subject objectives.
  5. Yi, K.; Beheshti, J.; Cole, C.; Leide, J.E.; Large, A.: User search behavior of domain-specific information retrieval systems : an analysis of the query logs from PsycINFO and ABC-Clio's Historical Abstracts/America: History and Life (2006) 0.00
    0.0028656456 = product of:
      0.008596936 = sum of:
        0.008596936 = product of:
          0.025790809 = sum of:
            0.025790809 = weight(_text_:retrieval in 197) [ClassicSimilarity], result of:
              0.025790809 = score(doc=197,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16710453 = fieldWeight in 197, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=197)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
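
  The relevance figures above follow Lucene's ClassicSimilarity (TF-IDF) explain format: for each matching term, score = queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(termFreq) * idf * fieldNorm; the per-term scores are then summed and scaled by the coord factors of the enclosing boolean queries. As a check on how the printed factors combine, here is a minimal sketch that recomputes the 0.013874464 score of result 1 purely from the values listed in its explanation; nothing beyond those printed values is assumed.

    # Minimal sketch: recomputing the ClassicSimilarity score of result 1 (doc 3469)
    # from the factors printed in its explanation above.
    import math

    QUERY_NORM = 0.051022716
    FIELD_NORM = 0.0390625

    def term_score(freq, idf):
        # queryWeight = idf * queryNorm; fieldWeight = sqrt(freq) * idf * fieldNorm
        query_weight = idf * QUERY_NORM
        field_weight = math.sqrt(freq) * idf * FIELD_NORM
        return query_weight * field_weight

    online    = term_score(freq=2.0, idf=3.0349014)  # ~0.025961377
    retrieval = term_score(freq=4.0, idf=3.024915)   # ~0.03647371

    # coord(2/3) and coord(1/3) are the coordination factors of the nested
    # boolean queries shown in the explanation tree.
    score = (online + retrieval) * (2.0 / 3.0) * (1.0 / 3.0)
    print(score)  # ~0.013874464, matching the value reported for result 1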