Search (5 results, page 1 of 1)

  • author_ss:"Koshman, S."
  1. Koshman, S.: Comparing usability between a visualization and text-based system for information retrieval (2004) 0.02
    0.0188353 = product of:
      0.0565059 = sum of:
        0.0565059 = product of:
          0.08475885 = sum of:
            0.031153653 = weight(_text_:online in 4424) [ClassicSimilarity], result of:
              0.031153653 = score(doc=4424,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.20118743 = fieldWeight in 4424, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4424)
            0.05360519 = weight(_text_:retrieval in 4424) [ClassicSimilarity], result of:
              0.05360519 = score(doc=4424,freq=6.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.34732026 = fieldWeight in 4424, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4424)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
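
The explain tree above is standard Lucene ClassicSimilarity output: each term's score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, tf = sqrt(termFreq), and idf = 1 + ln(maxDocs / (docFreq + 1)); coord factors then down-weight queries with unmatched clauses. A minimal sketch reproducing the top-level score of result 1, with the constants copied from the tree above:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    # ClassicSimilarity: tf = sqrt(term frequency in the field)
    return math.sqrt(freq)

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    query_weight = idf(doc_freq, max_docs) * query_norm
    field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm
    return query_weight * field_weight

# Constants taken directly from the explain tree for doc 4424.
QUERY_NORM = 0.051022716
MAX_DOCS = 44218
FIELD_NORM = 0.046875

online = term_score(2.0, 5778, MAX_DOCS, QUERY_NORM, FIELD_NORM)
retrieval = term_score(6.0, 5836, MAX_DOCS, QUERY_NORM, FIELD_NORM)

# Apply coord(2/3) and coord(1/3), matching the nested products above.
score = (online + retrieval) * (2 / 3) * (1 / 3)
print(round(score, 7))  # matches the displayed 0.0188353
```

The same constants (queryNorm, the two idf values) recur in every result below because they depend only on the query and the index, not on the individual document.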
    
    Abstract
    This investigation tested the designer assumption that VIBE is a tool for an expert user and asked: What are the effects of user expertise on usability when VIBE's non-traditional interface is compared with a more traditional text-based interface? Three user groups - novices, online searching experts, and VIBE system experts - totaling 31 participants were asked to use and compare VIBE to a more traditional text-based system, askSam. No significant usability differences were found among the user groups; however, significant performance differences were found for some tasks on the two systems. Participants understood the basic principles underlying VIBE, although they generally favored the askSam system. The findings suggest that VIBE is a learnable system and that its components have pragmatic application to the development of visualized information retrieval systems. Further research is recommended to maximize the retrieval potential of IR visualization systems.
  2. Koshman, S.: Testing user interaction with a prototype visualization-based information retrieval system (2005) 0.02
    0.018085763 = product of:
      0.054257285 = sum of:
        0.054257285 = product of:
          0.081385925 = sum of:
            0.036714934 = weight(_text_:online in 3562) [ClassicSimilarity], result of:
              0.036714934 = score(doc=3562,freq=4.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23710167 = fieldWeight in 3562, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3562)
            0.04467099 = weight(_text_:retrieval in 3562) [ClassicSimilarity], result of:
              0.04467099 = score(doc=3562,freq=6.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.28943354 = fieldWeight in 3562, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3562)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The VIBE (Visual Information Browsing Environment) prototype system, which was developed at Molde College in Norway in conjunction with researchers at the University of Pittsburgh, allows users to evaluate documents from a retrieved set that is graphically represented as geometric icons within one screen display. While the formal modeling behind VIBE and other information visualization retrieval systems is well known, user interaction with the system is not. This investigation tested the designer assumption that VIBE is a tool for a smart (expert) user and asked: What are the effects of different levels of user expertise upon VIBE usability? Three user groups - novices, online searching experts, and VIBE system experts - totaling 31 participants were tested over two sessions with VIBE. Participants selected appropriate features to complete tasks, but did not always solve the tasks correctly. Task timings improved over repeated use of VIBE, and the non-typical, visually oriented tasks were resolved more successfully than others. No statistically significant differences were found between novices and online experts across the parameters examined. The VIBE system experts provided the predicted baseline for this study, and the VIBE designer assumption was shown to be correct. The study's results point toward further exploration of cognitive preattentive processing, which may help to better understand the novice/expert paradigm when testing a visualized interface design for information retrieval.
  3. Koshman, S.; Heidorn, B.; Kim, H.: ACM SIGIR '93 provides information retrieval roundup (1993) 0.01
    0.00810527 = product of:
      0.024315808 = sum of:
        0.024315808 = product of:
          0.07294742 = sum of:
            0.07294742 = weight(_text_:retrieval in 5692) [ClassicSimilarity], result of:
              0.07294742 = score(doc=5692,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.47264296 = fieldWeight in 5692, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5692)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports from the 16th Annual International ACM SIGIR Conference (Association for Computing Machinery, Special Interest Group on Information Retrieval), Pittsburgh, 27 Jun - 1 Jul 1993. Discusses natural language processing; query operations; full-text analysis; data compression and file structure; document operations; demonstrations; post-conference workshops; and query expansion and association methods
  4. Spink, A.; Park, M.; Koshman, S.: Factors affecting assigned information problem ordering during Web search : an exploratory study (2006) 0.00
    0.0034387745 = product of:
      0.0103163235 = sum of:
        0.0103163235 = product of:
          0.03094897 = sum of:
            0.03094897 = weight(_text_:retrieval in 991) [ClassicSimilarity], result of:
              0.03094897 = score(doc=991,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.20052543 = fieldWeight in 991, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=991)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Multitasking is the human ability to handle the demands of multiple tasks. Multitasking behavior involves the ordering of multiple tasks and switching between tasks. People often multitask when using information retrieval (IR) technologies as they seek information on more than one information problem over single or multiple search episodes. However, few studies have examined how people order their information problems, especially during Web search engine interaction. The aim of our exploratory study was to investigate assigned information problem ordering by forty (40) study participants engaged in Web search. Findings suggest that assigned information problem ordering was influenced by several factors: personal interest, problem knowledge, perceived level of information available on the Web, ease of finding information, level of importance, and a preference for seeking information on information problems in order from general to specific. Personal interest and problem knowledge were the major factors in assigned information problem ordering. Implications of the findings and further research are discussed. The relationship between information problem ordering and gratification theory is an important area for further exploration.
  5. Spink, A.; Jansen, B.J.; Blakely, C.; Koshman, S.: ¬A study of results overlap and uniqueness among major Web search engines (2006) 0.00
    0.0022925164 = product of:
      0.006877549 = sum of:
        0.006877549 = product of:
          0.020632647 = sum of:
            0.020632647 = weight(_text_:retrieval in 993) [ClassicSimilarity], result of:
              0.020632647 = score(doc=993,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.13368362 = fieldWeight in 993, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=993)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The performance and capabilities of Web search engines are an important and significant area of research. Millions of people worldwide use Web search engines every day. This paper reports the results of a major study examining the overlap among results retrieved by multiple Web search engines for a large set of more than 10,000 queries. Previous smaller studies have reported a lack of overlap in results returned by Web search engines for the same queries. The goal of the current study was to conduct a large-scale study measuring the overlap of search results on the first result page (both non-sponsored and sponsored) across the four most popular Web search engines, at specific points in time, using a large number of queries. The Web search engines included in the study were MSN Search, Google, Yahoo! and Ask Jeeves. The study then compares these results with the first-page results retrieved for the same queries by the metasearch engine Dogpile.com. Two sets of randomly selected user-entered queries - one of 10,316 queries and the other of 12,570 queries - were submitted to the four single Web search engines; the first set was drawn from Infospace's Dogpile.com search engine, the second from across the Infospace Network of search properties. Findings show that the percentage of total results unique to only one of the four Web search engines was 84.9%, shared by two of the four Web search engines was 11.4%, shared by three of the Web search engines was 2.6%, and shared by all four Web search engines was 1.1%. This small degree of overlap shows the significant difference in the way major Web search engines retrieve and rank results in response to given queries. The results point to the value of metasearch engines in Web retrieval for overcoming the biases of individual search engines.
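
The overlap measurement described above reduces to counting, for each retrieved URL, how many of the four engines returned it on their first page, then bucketing URLs by that count. A minimal sketch of the tally, using invented toy result sets rather than the study's data:

```python
from collections import Counter

# Hypothetical first-page result sets per engine (the study used MSN
# Search, Google, Yahoo! and Ask Jeeves with >10,000 real queries).
results = {
    "msn":    {"a.com", "b.com", "c.com"},
    "google": {"a.com", "d.com", "e.com"},
    "yahoo":  {"a.com", "b.com", "f.com"},
    "ask":    {"a.com", "g.com", "h.com"},
}

# For each URL, count how many engines' result sets contain it.
appearances = Counter(url for urls in results.values() for url in urls)

# Bucket URLs by degree of overlap (1 = unique to one engine, 4 = all four).
overlap = Counter(appearances.values())
total = len(appearances)
for k in range(1, 5):
    share = 100 * overlap.get(k, 0) / total
    print(f"returned by {k} engine(s): {share:.1f}%")
```

The study's headline figures (84.9% unique, 1.1% shared by all four) are exactly the `k = 1` and `k = 4` buckets of this tally, computed over the pooled results of all queries.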