Search (3 results, page 1 of 1)

  • author_ss:"Koshman, S."
  • theme_ss:"Benutzerstudien"
  1. Spink, A.; Jansen, B.J.; Blakely, C.; Koshman, S.: A study of results overlap and uniqueness among major Web search engines (2006) 0.03
    0.02683375 = product of:
      0.09391812 = sum of:
        0.025709987 = weight(_text_:wide in 993) [ClassicSimilarity], result of:
          0.025709987 = score(doc=993,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.1958137 = fieldWeight in 993, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=993)
        0.052189093 = weight(_text_:web in 993) [ClassicSimilarity], result of:
          0.052189093 = score(doc=993,freq=28.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.5396523 = fieldWeight in 993, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=993)
        0.0040358636 = weight(_text_:information in 993) [ClassicSimilarity], result of:
          0.0040358636 = score(doc=993,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.0775819 = fieldWeight in 993, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=993)
        0.0119831795 = weight(_text_:retrieval in 993) [ClassicSimilarity], result of:
          0.0119831795 = score(doc=993,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.13368362 = fieldWeight in 993, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=993)
      0.2857143 = coord(4/14)
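    The breakdown above is the Lucene ClassicSimilarity "explain" tree for this record: each matching term contributes queryWeight (idf x queryNorm) times fieldWeight (sqrt(freq) x idf x fieldNorm), and the sum is multiplied by the coordination factor coord(4/14). A minimal Python sketch re-deriving the displayed 0.02683375 from the numbers listed above (variable names are illustrative, not part of the catalog output):

      from math import sqrt

      # Numbers copied from the explanation for record 1 (doc 993).
      query_norm = 0.029633347               # queryNorm, shared by every query term
      field_norm = 0.03125                   # fieldNorm(doc=993)
      coord = 4 / 14                         # coord(4/14): 4 of 14 query terms matched
      terms = {                              # term: (idf, raw term frequency)
          "wide":        (4.4307585, 2.0),
          "web":         (3.2635105, 28.0),
          "information": (1.7554779, 2.0),
          "retrieval":   (3.024915, 2.0),
      }
      score = coord * sum(
          (idf * query_norm)                 # queryWeight
          * (sqrt(freq) * idf * field_norm)  # fieldWeight, with tf(freq) = sqrt(freq)
          for idf, freq in terms.values()
      )
      print(round(score, 8))                 # ~0.02683375, matching the displayed score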
    
    Abstract
    The performance and capabilities of Web search engines are an important and significant area of research. Millions of people worldwide use Web search engines every day. This paper reports the results of a major study examining the overlap among results retrieved by multiple Web search engines for a large set of more than 10,000 queries. Previous smaller studies have discussed a lack of overlap in results returned by Web search engines for the same queries. The goal of the current study was to measure, at large scale, the overlap of search results on the first result page (both non-sponsored and sponsored) across the four most popular Web search engines at specific points in time, using a large number of queries. The Web search engines included in the study were MSN Search, Google, Yahoo! and Ask Jeeves. Our study then compares these results with the first-page results retrieved for the same queries by the metasearch engine Dogpile.com. Two sets of randomly selected user-entered queries from Infospace's Dogpile.com search engine, one of 10,316 queries and the other of 12,570 queries (the first set drawn from Dogpile itself, the second from across the Infospace Network of search properties), were submitted to the four single Web search engines. Findings show that the percentage of total results unique to only one of the four Web search engines was 84.9%, shared by two of the four Web search engines was 11.4%, shared by three of the Web search engines was 2.6%, and shared by all four Web search engines was 1.1%. This small degree of overlap shows the significant differences in the way major Web search engines retrieve and rank results in response to given queries. Results point to the value of metasearch engines in Web retrieval for overcoming the biases of individual search engines.
    Source
    Information processing and management. 42(2006) no.5, S.1379-1391
  2. Spink, A.; Park, M.; Koshman, S.: Factors affecting assigned information problem ordering during Web search : an exploratory study (2006) 0.02
    0.017672222 = product of:
      0.082470365 = sum of:
        0.041844364 = weight(_text_:web in 991) [ClassicSimilarity], result of:
          0.041844364 = score(doc=991,freq=8.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.43268442 = fieldWeight in 991, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=991)
        0.022651227 = weight(_text_:information in 991) [ClassicSimilarity], result of:
          0.022651227 = score(doc=991,freq=28.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.4354273 = fieldWeight in 991, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=991)
        0.01797477 = weight(_text_:retrieval in 991) [ClassicSimilarity], result of:
          0.01797477 = score(doc=991,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.20052543 = fieldWeight in 991, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=991)
      0.21428572 = coord(3/14)
    
    Abstract
    Multitasking is the human ability to handle the demands of multiple tasks. Multitasking behavior involves the ordering of multiple tasks and switching between tasks. People often multitask when using information retrieval (IR) technologies as they seek information on more than one information problem over single or multiple search episodes. However, few studies have examined how people order their information problems, especially during their interaction with Web search engines. The aim of our exploratory study was to investigate assigned information problem ordering by forty (40) study participants engaged in Web search. Findings suggest that assigned information problem ordering was influenced by the following factors: personal interest, problem knowledge, perceived level of information available on the Web, ease of finding information, level of importance, and a preference for seeking information on information problems in order from general to specific. Personal interest and problem knowledge were the major factors during assigned information problem ordering. Implications of the findings and further research are discussed. The relationship between information problem ordering and gratification theory is an important area for further exploration.
    Source
    Information processing and management. 42(2006) no.5, S.1366-1378
  3. Koshman, S.: Testing user interaction with a prototype visualization-based information retrieval system (2005) 0.01
    0.005317847 = product of:
      0.037224926 = sum of:
        0.011280581 = weight(_text_:information in 3562) [ClassicSimilarity], result of:
          0.011280581 = score(doc=3562,freq=10.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.21684799 = fieldWeight in 3562, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3562)
        0.025944345 = weight(_text_:retrieval in 3562) [ClassicSimilarity], result of:
          0.025944345 = score(doc=3562,freq=6.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.28943354 = fieldWeight in 3562, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3562)
      0.14285715 = coord(2/14)
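    The idf values that recur in all three explanations follow from the docFreq/maxDocs pairs shown alongside them, via ClassicSimilarity's idf formula, idf = 1 + ln(maxDocs / (docFreq + 1)). A quick Python check against the values above (a sketch; only the docFreq and maxDocs figures come from the catalog output):

      from math import log

      # Recompute the idf values from the docFreq/maxDocs pairs in the explanations.
      max_docs = 44218
      doc_freqs = {"wide": 1430, "web": 4597, "information": 20772, "retrieval": 5836}
      for term, df in doc_freqs.items():
          idf = 1 + log(max_docs / (df + 1))
          print(term, round(idf, 7))   # ~4.4307585, ~3.2635105, ~1.7554779, ~3.024915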
    
    Abstract
    The VIBE (Visual Information Browsing Environment) prototype system, which was developed at Molde College in Norway in conjunction with researchers at the University of Pittsburgh, allows users to evaluate documents from a retrieved set that is graphically represented as geometric icons within one screen display. While the formal modeling behind VIBE and other information visualization retrieval systems is well known, user interaction with the system is not. This investigation tested the designer assumption that VIBE is a tool for a smart (expert) user and asked: what are the effects of different levels of user expertise upon VIBE usability? Three user groups (novices, online searching experts, and VIBE system experts), totaling 31 participants, were tested over two sessions with VIBE. Participants selected appropriate features to complete tasks, but did not always solve the tasks correctly. Task timings improved over repeated use of VIBE, and the nontypical, visually oriented tasks were resolved more successfully than others. Statistically significant differences were not found between novices and online experts across all parameters examined. The VIBE system experts provided the predicted baseline for this study, and the VIBE designer assumption was shown to be correct. The study's results point toward further exploration of cognitive preattentive processing, which may help to better understand the novice/expert paradigm when testing a visualized interface design for information retrieval.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.8, S.824-833