Search (7 results, page 1 of 1)

  • author_ss:"Koshman, S."
  1. Koshman, S.: Testing user interaction with a prototype visualization-based information retrieval system (2005) 0.04
    0.036453407 = product of:
      0.10936022 = sum of:
        0.10936022 = product of:
          0.21872044 = sum of:
            0.21872044 = weight(_text_:designer in 3562) [ClassicSimilarity], result of:
              0.21872044 = score(doc=3562,freq=4.0), product of:
                0.36824805 = queryWeight, product of:
                  7.602543 = idf(docFreq=59, maxDocs=44218)
                  0.048437484 = queryNorm
                0.59394866 = fieldWeight in 3562, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.602543 = idf(docFreq=59, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3562)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
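The explain tree above multiplies out Lucene's ClassicSimilarity (TF-IDF) factors. As a minimal sketch (illustrative function names, not the Lucene API), the same arithmetic can be reproduced from the printed factors:

```python
import math

def classic_idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    # e.g. idf(docFreq=59, maxDocs=44218) = 7.602543
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def explain_score(freq, doc_freq, max_docs, query_norm, field_norm, coords):
    idf = classic_idf(doc_freq, max_docs)
    tf = math.sqrt(freq)                  # tf(freq=4.0) = 2.0
    query_weight = idf * query_norm       # 0.36824805
    field_weight = tf * idf * field_norm  # 0.59394866
    score = query_weight * field_weight   # 0.21872044
    for c in coords:                      # coord(1/2), coord(1/3)
        score *= c
    return score

# Factors from result 1 ("designer" in doc 3562)
score = explain_score(freq=4.0, doc_freq=59, max_docs=44218,
                      query_norm=0.048437484, field_norm=0.0390625,
                      coords=[0.5, 1.0 / 3.0])
# score ≈ 0.036453407, the value at the top of the tree (0.04 in the result line)
```

The same factors appear in every tree below: tf = sqrt(termFreq), idf = 1 + ln(maxDocs/(docFreq + 1)), and the coord multipliers for matched query clauses.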
    
    Abstract
    The VIBE (Visual Information Browsing Environment) prototype system, which was developed at Molde College in Norway in conjunction with researchers at the University of Pittsburgh, allows users to evaluate documents from a retrieved set that is graphically represented as geometric icons within one screen display. While the formal modeling behind VIBE and other information visualization retrieval systems is well known, user interaction with the system is not. This investigation tested the designer assumption that VIBE is a tool for a smart (expert) user and asked: what are the effects of different levels of user expertise upon VIBE usability? Three user groups - novices, online searching experts, and VIBE system experts - totaling 31 participants were tested over two sessions with VIBE. Participants selected appropriate features to complete tasks but did not always solve the tasks correctly. Task timings improved over repeated use of VIBE, and the nontypical, visually oriented tasks were resolved more successfully than others. No statistically significant differences were found between novices and online experts across the parameters examined. The VIBE system experts provided the predicted baseline for this study, and the VIBE designer assumption was shown to be correct. The study's results point toward further exploration of cognitive preattentive processing, which may help to better understand the novice/expert paradigm when testing a visualized interface design for information retrieval.
  2. Koshman, S.: Comparing usability between a visualization and text-based system for information retrieval (2004) 0.03
    0.030931745 = product of:
      0.09279523 = sum of:
        0.09279523 = product of:
          0.18559046 = sum of:
            0.18559046 = weight(_text_:designer in 4424) [ClassicSimilarity], result of:
              0.18559046 = score(doc=4424,freq=2.0), product of:
                0.36824805 = queryWeight, product of:
                  7.602543 = idf(docFreq=59, maxDocs=44218)
                  0.048437484 = queryNorm
                0.5039822 = fieldWeight in 4424, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.602543 = idf(docFreq=59, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4424)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This investigation tested the designer assumption that VIBE is a tool for an expert user and asked: what are the effects of user expertise on usability when VIBE's non-traditional interface is compared with a more traditional text-based interface? Three user groups - novices, online searching experts, and VIBE system experts - totaling 31 participants, were asked to use and compare VIBE with a more traditional text-based system, askSam. No significant differences were found among the user groups; however, significant performance differences were found for some tasks on the two systems. Participants understood the basic principles underlying VIBE, although they generally favored the askSam system. The findings suggest that VIBE is a learnable system and that its components have pragmatic application to the development of visualized information retrieval systems. Further research is recommended to maximize the retrieval potential of IR visualization systems.
  3. Spink, A.; Jansen, B.J.; Blakely, C.; Koshman, S.: A study of results overlap and uniqueness among major Web search engines (2006) 0.03
    0.028435402 = product of:
      0.085306205 = sum of:
        0.085306205 = weight(_text_:web in 993) [ClassicSimilarity], result of:
          0.085306205 = score(doc=993,freq=28.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.5396523 = fieldWeight in 993, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=993)
      0.33333334 = coord(1/3)
    
    Abstract
    The performance and capabilities of Web search engines are an important and significant area of research. Millions of people worldwide use Web search engines every day. This paper reports the results of a major study examining the overlap among results retrieved by multiple Web search engines for a large set of more than 10,000 queries. Previous, smaller studies have discussed a lack of overlap in results returned by Web search engines for the same queries. The goal of the current study was to conduct a large-scale study measuring, at specific points in time and using a large number of queries, the overlap of search results on the first result page (both non-sponsored and sponsored) across the four most popular Web search engines: MSN Search, Google, Yahoo! and Ask Jeeves. Our study then compares these results with the first-page results retrieved for the same queries by the metasearch engine Dogpile.com. Two sets of randomly selected user-entered queries, one of 10,316 queries and the other of 12,570 queries, were submitted to the four single Web search engines; the first set came from Infospace's Dogpile.com search engine, the second from across the Infospace Network of search properties. Findings show that the percent of total results unique to only one of the four Web search engines was 84.9%, shared by two of the four Web search engines was 11.4%, shared by three of the Web search engines was 2.6%, and shared by all four Web search engines was 1.1%. This small degree of overlap shows the significant difference in the way major Web search engines retrieve and rank results in response to given queries. The results point to the value of metasearch engines in Web retrieval for overcoming the biases of individual search engines.
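The four overlap categories reported above should partition the result set; a quick arithmetic check of the figures from the abstract:

```python
# Overlap percentages reported in the study: each result on the first page
# falls into exactly one category, so the figures should sum to ~100%.
overlap_pct = {
    "unique to one engine": 84.9,
    "shared by two engines": 11.4,
    "shared by three engines": 2.6,
    "shared by all four engines": 1.1,
}
total = sum(overlap_pct.values())  # ≈ 100.0
```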
  4. Jansen, B.J.; Spink, A.; Koshman, S.: Web searcher interaction with the Dogpile.com metasearch engine (2007) 0.03
    0.026868932 = product of:
      0.080606796 = sum of:
        0.080606796 = weight(_text_:web in 270) [ClassicSimilarity], result of:
          0.080606796 = score(doc=270,freq=16.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.5099235 = fieldWeight in 270, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=270)
      0.33333334 = coord(1/3)
    
    Abstract
    Metasearch engines are an intuitive method for improving the performance of Web search by increasing coverage, returning large numbers of results with a focus on relevance, and presenting alternative views of information needs. However, the use of metasearch engines in an operational environment is not well understood. In this study, we investigate the usage of Dogpile.com, a major Web metasearch engine, with the aim of discovering how Web searchers interact with metasearch engines. We report results examining 2,465,145 interactions from 534,507 users of Dogpile.com on May 6, 2005, and compare these results with findings from other Web searching studies. We collect data on geographical location of searchers, use of system feedback, content selection, sessions, queries, and term usage. Findings show that Dogpile.com searchers are mainly from the USA (84% of searchers), use about 3 terms per query (mean = 2.85), use system feedback moderately (8.4% of users), and generally (56% of users) spend less than one minute interacting with the Web search engine. Overall, metasearchers seem to have higher degrees of interaction than searchers on non-metasearch engines, but their sessions are shorter. These aspects of metasearching may be what define the differences from other forms of Web searching. We discuss the implications of our findings in relation to metasearch for Web searchers, search engines, and content providers.
  5. Koshman, S.; Spink, A.; Jansen, B.J.: Web searching on the Vivisimo search engine (2006) 0.03
    0.025133584 = product of:
      0.07540075 = sum of:
        0.07540075 = weight(_text_:web in 216) [ClassicSimilarity], result of:
          0.07540075 = score(doc=216,freq=14.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.47698978 = fieldWeight in 216, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=216)
      0.33333334 = coord(1/3)
    
    Abstract
    The application of clustering to Web search engine technology is a novel approach that offers structure to the information deluge often faced by Web searchers. Clustering methods have been well studied in research labs; however, real user searching with clustering systems in operational Web environments is not well understood. This article reports results from a transaction log analysis of Vivisimo.com, a Web metasearch engine that dynamically clusters users' search results. The analysis covered two weeks' worth of data collected from March 28 to April 4 and April 25 to May 2, 2004, representing 100% of site traffic during these periods and 2,029,734 queries overall. The results show that the highest percentage of queries contained two terms. The highest percentage of search sessions contained one query and lasted less than 1 minute. Almost half of user interactions with clusters consisted of displaying a cluster's result set, and a small percentage of interactions showed cluster tree expansion. Findings show that 11.1% of search sessions were multitasking searches, and there is a broad variety of search topics in multitasking search sessions. Other searching interactions and statistics on repeat users of the search engine are reported. These results provide insights into search characteristics with a cluster-based Web search engine and extend research into Web searching trends.
  6. Spink, A.; Park, M.; Koshman, S.: Factors affecting assigned information problem ordering during Web search : an exploratory study (2006) 0.02
    0.022799043 = product of:
      0.06839713 = sum of:
        0.06839713 = weight(_text_:web in 991) [ClassicSimilarity], result of:
          0.06839713 = score(doc=991,freq=8.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.43268442 = fieldWeight in 991, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=991)
      0.33333334 = coord(1/3)
    
    Abstract
    Multitasking is the human ability to handle the demands of multiple tasks. Multitasking behavior involves the ordering of multiple tasks and switching between tasks. People often multitask when using information retrieval (IR) technologies, as they seek information on more than one information problem over single or multiple search episodes. However, few studies have examined how people order their information problems, especially during Web search engine interaction. The aim of our exploratory study was to investigate assigned information problem ordering by forty (40) study participants engaged in Web search. Findings suggest that assigned information problem ordering was influenced by the following factors: personal interest, problem knowledge, perceived level of information available on the Web, ease of finding information, level of importance, and a preference for seeking information on information problems in order from general to specific. Personal interest and problem knowledge were the major factors in assigned information problem ordering. Implications of the findings and further research are discussed. The relationship between information problem ordering and gratification theory is an important area for further exploration.
  7. Jansen, B.J.; Spink, A.; Blakely, C.; Koshman, S.: Defining a session on Web search engines (2007) 0.02
    0.016453793 = product of:
      0.049361378 = sum of:
        0.049361378 = weight(_text_:web in 285) [ClassicSimilarity], result of:
          0.049361378 = score(doc=285,freq=6.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.3122631 = fieldWeight in 285, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=285)
      0.33333334 = coord(1/3)
    
    Abstract
    Detecting query reformulations within a session by a Web searcher is an important area of research for designing more helpful searching systems and targeting content to particular users. Methods explored by other researchers include both qualitative (i.e., the use of human judges to manually analyze query patterns on usually small samples) and nondeterministic algorithms, typically using large amounts of training data to predict query modification during sessions. In this article, we explore three alternative methods for detection of session boundaries. All three methods are computationally straightforward and therefore easily implemented for detection of session changes. We examine 2,465,145 interactions from 534,507 users of Dogpile.com on May 6, 2005. We compare session analysis using (a) Internet Protocol address and cookie; (b) Internet Protocol address, cookie, and a temporal limit on intrasession interactions; and (c) Internet Protocol address, cookie, and query reformulation patterns. Overall, our analysis shows that defining sessions by query reformulation along with Internet Protocol address and cookie provides the best measure, resulting in an 82% increase in the count of sessions. Regardless of the method used, the mean session length was fewer than three queries, and the mean session duration was less than 30 min. Searchers most often modified their query by changing query terms (nearly 23% of all query modifications) rather than adding or deleting terms. Implications are that for measuring searching traffic, unique sessions may be a better indicator than the common metric of unique visitors. This research also sheds light on the more complex aspects of Web searching involving query modifications and may lead to advances in searching tools.
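The session-definition methods compared above are computationally straightforward. Below is a minimal, hypothetical sketch of method (b), grouping interactions by IP address and cookie and splitting a user's interactions on a temporal gap. The 30-minute cutoff, field layout, and function name are illustrative assumptions, not the study's actual data format:

```python
from datetime import datetime, timedelta

def split_sessions(interactions, gap=timedelta(minutes=30)):
    """Split interactions into sessions by (IP, cookie) plus a temporal limit.

    interactions: iterable of (ip, cookie, timestamp) tuples, in any order.
    Returns a list of ((ip, cookie), [timestamps]) sessions.
    """
    # Group each user's interaction timestamps by the (IP, cookie) pair.
    by_user = {}
    for ip, cookie, ts in interactions:
        by_user.setdefault((ip, cookie), []).append(ts)

    sessions = []
    for key, times in by_user.items():
        times.sort()
        current = [times[0]]
        for ts in times[1:]:
            # A gap longer than the temporal limit starts a new session.
            if ts - current[-1] > gap:
                sessions.append((key, current))
                current = []
            current.append(ts)
        sessions.append((key, current))
    return sessions
```

Method (c) would additionally split a session when the query pattern indicates a new information need (e.g., no terms shared with the previous query), which is what yields the reported 82% increase in session counts.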