Search (5 results, page 1 of 1)

  • theme_ss:"Suchtaktik"
  • author_ss:"Jansen, B.J."
  1. Jansen, B.J.; Booth, D.L.; Smith, B.K.: Using the taxonomy of cognitive learning to model online searching (2009) 0.01
    0.006068985 = product of:
      0.02427594 = sum of:
        0.02427594 = weight(_text_:information in 4223) [ClassicSimilarity], result of:
          0.02427594 = score(doc=4223,freq=16.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.27429342 = fieldWeight in 4223, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4223)
      0.25 = coord(1/4)
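
    The breakdown above is standard Lucene "ClassicSimilarity" (TF-IDF) explain output. As a reading aid, here is a minimal Python sketch that recombines the displayed figures into the final score for result 1; every constant is copied from the explanation above, and the variable names are ours, not part of the search engine's output.

      # Recompute the ClassicSimilarity score shown for result 1 (doc 4223).
      # All constants are taken verbatim from the explain output; only the
      # variable names are invented for readability.
      from math import sqrt, isclose

      freq = 16.0               # termFreq of "information" in doc 4223
      idf = 1.7554779           # idf(docFreq=20772, maxDocs=44218)
      query_norm = 0.050415643  # queryNorm
      field_norm = 0.0390625    # fieldNorm(doc=4223)
      coord = 0.25              # coord(1/4): one of four query clauses matched

      tf = sqrt(freq)                       # 4.0 = tf(freq=16.0)
      query_weight = idf * query_norm       # 0.08850355 = queryWeight
      field_weight = tf * idf * field_norm  # 0.27429342 = fieldWeight in 4223
      weight = query_weight * field_weight  # 0.02427594 = weight(_text_:information ...)
      score = coord * weight                # 0.006068985, shown rounded as 0.01

      assert isclose(score, 0.006068985, rel_tol=1e-5)
      print(f"score = {score:.9f}")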
    
    Abstract
    In this research, we investigated whether a learning process has unique information searching characteristics. The results show that information searching is a learning process with unique searching characteristics specific to particular learning levels. In a laboratory experiment, we studied the searching characteristics of 72 participants engaged in 426 searching tasks. We classified the searching tasks according to Anderson and Krathwohl's taxonomy of the cognitive learning domain. Research results indicate that applying and analyzing, the middle two of the six categories, generally take the most searching effort in terms of queries per session, topics searched per session, and total time searching. Interestingly, the lowest two learning categories, remembering and understanding, exhibit searching characteristics similar to the highest-order learning categories of evaluating and creating. Our results suggest that the view of Web searchers as having simple information needs may be incorrect. Instead, we discovered that users applied simple searching expressions to support their higher-level information needs. It appears that searchers rely primarily on their internal knowledge for evaluating and creating information needs, using search mainly for fact checking and verification. Overall, the results indicate that learning theory may describe the information searching process better than the more commonly used paradigms of decision making or problem solving. The learning style of the searcher does have some moderating effect on the exhibited searching characteristics. The implication of this research is that rather than solely addressing a searcher's expressed information need, searching systems can also address the underlying learning need of the user. (A sketch of aggregating these effort measures by learning level follows this entry.)
    Source
    Information processing and management. 45(2009) no.6, pp.643-663
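    A hypothetical sketch of the per-level effort aggregation described in the abstract above: group searching sessions by Anderson and Krathwohl learning level and average the three effort measures (queries per session, topics per session, time). The record layout and sample values are invented for illustration; they are not the study's data.

      # Illustrative aggregation of searching-effort measures by cognitive learning level.
      # Sample records are invented; each is (level, queries, topics, minutes).
      from collections import defaultdict
      from statistics import mean

      sessions = [
          ("remembering", 2, 1, 4.0),
          ("understanding", 3, 1, 5.5),
          ("applying", 6, 3, 12.0),
          ("analyzing", 7, 3, 14.5),
          ("evaluating", 3, 2, 6.0),
          ("creating", 2, 1, 5.0),
      ]

      by_level = defaultdict(list)
      for level, queries, topics, minutes in sessions:
          by_level[level].append((queries, topics, minutes))

      for level, rows in by_level.items():
          q, t, m = zip(*rows)
          print(f"{level:13s} queries={mean(q):.1f} topics={mean(t):.1f} time={mean(m):.1f} min")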
  2. Jansen, B.J.; Resnick, M.: An examination of searcher's perceptions of nonsponsored and sponsored links during ecommerce Web searching (2006) 0.00
    0.0042914203 = product of:
      0.017165681 = sum of:
        0.017165681 = weight(_text_:information in 221) [ClassicSimilarity], result of:
          0.017165681 = score(doc=221,freq=8.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.19395474 = fieldWeight in 221, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=221)
      0.25 = coord(1/4)
    
    Abstract
    In this article, we report the results of an investigation into the effect of sponsored links on ecommerce information seeking on the Web. In this research, 56 participants each engaged in six ecommerce Web searching tasks. We extracted these tasks from the transaction log of a Web search engine, so they represent actual ecommerce information needs. Using 60 organic and 30 sponsored Web links, we controlled the quality of the Web search engine results by switching nonsponsored and sponsored links on half of the tasks for each participant. This allowed us to investigate the bias toward sponsored links while controlling for quality of content. The study also investigated the effect of searching self-efficacy, searching experience, the type of ecommerce information need, and the order of links in the result listing on the viewing of sponsored links. The data included 2,453 interactions with links from result pages and 961 utterances evaluating these links. The results indicate a strong preference for nonsponsored links, with searchers viewing these results first more than 82% of the time. Searching self-efficacy and experience do not increase the likelihood of viewing sponsored links, and the order of the result listing does not appear to affect searcher evaluation of sponsored links. The implications for sponsored links as a long-term business model are discussed. (A sketch of the first-viewed calculation follows this entry.)
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.14, pp.1949-1961
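    A hypothetical sketch of the first-viewed analysis reported above: from a log of link interactions, compute the share of tasks whose first viewed result was a nonsponsored (organic) link. The log format and sample rows are invented for illustration; they are not the study's data.

      # Share of tasks in which the first viewed link was nonsponsored (organic).
      # Each interaction: (participant_id, task_id, view_order, link_type); sample data is invented.
      from itertools import groupby
      from operator import itemgetter

      interactions = [
          (1, "t1", 1, "organic"), (1, "t1", 2, "sponsored"),
          (1, "t2", 1, "organic"),
          (2, "t1", 1, "sponsored"),
          (2, "t2", 1, "organic"), (2, "t2", 2, "organic"),
      ]

      interactions.sort(key=itemgetter(0, 1, 2))  # order views within each task
      first_views = [
          next(iter(group))[3]                    # link type of the earliest view in the task
          for _, group in groupby(interactions, key=itemgetter(0, 1))
      ]
      share_organic = first_views.count("organic") / len(first_views)
      print(f"nonsponsored link viewed first in {share_organic:.0%} of tasks")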
  3. Liu, Z.; Jansen, B.J.: ASK: A taxonomy of accuracy, social, and knowledge information seeking posts in social question and answering (2017) 0.00
    0.0042914203 = product of:
      0.017165681 = sum of:
        0.017165681 = weight(_text_:information in 3345) [ClassicSimilarity], result of:
          0.017165681 = score(doc=3345,freq=8.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.19395474 = fieldWeight in 3345, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3345)
      0.25 = coord(1/4)
    
    Abstract
    Many people turn to their social networks to find information through the practice of question and answering. We believe it is necessary to use different answering strategies for different types of questions in order to accommodate their different information needs. In this research, we propose the ASK taxonomy, which categorizes questions posted on social networking sites into three types according to the nature of the questioner's inquiry: accuracy, social, or knowledge. To automatically decide which answering strategy to use, we develop a predictive model of ASK question types based on lexical, topical, contextual, and syntactic question features as well as answer features. Applying the classifier to an annotated data set, we present a comprehensive analysis comparing the questions in terms of word usage, topical interests, temporal and spatial restrictions, syntactic structure, and response characteristics. Our results show that the three types of questions exhibit different characteristics in the way they are asked. Our automatic classification algorithm achieves 83% labeling accuracy, demonstrating the value of the ASK taxonomy for the design of social question and answering systems. (A simplified sketch of such a classifier follows this entry.)
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.2, pp.333-347
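    A simplified sketch of a feature-based classifier for the three ASK question types (accuracy, social, knowledge). This is not the authors' model: it uses only lexical TF-IDF features and a linear classifier, whereas the paper also draws on topical, contextual, syntactic, and answer features; the training examples and labels here are invented.

      # Toy ASK-style question classifier: lexical features only, invented examples.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      posts = [
          "What year was the Eiffel Tower finished?",         # accuracy: verify a fact
          "Is it true the library closes at 8 tonight?",      # accuracy
          "Anyone up for coffee downtown this evening?",      # social: coordinate/connect
          "Who wants to join our pickup game this weekend?",  # social
          "How do I back up my phone before switching OS?",   # knowledge: seek advice
          "What is the best way to learn basic statistics?",  # knowledge
      ]
      labels = ["accuracy", "accuracy", "social", "social", "knowledge", "knowledge"]

      clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
      clf.fit(posts, labels)
      print(clf.predict(["Does anyone know if the 5pm train is actually running?"]))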
  4. Jansen, B.J.; Booth, D.L.; Spink, A.: Determining the informational, navigational, and transactional intent of Web queries (2008) 0.00
    0.0025748524 = product of:
      0.01029941 = sum of:
        0.01029941 = weight(_text_:information in 2091) [ClassicSimilarity], result of:
          0.01029941 = score(doc=2091,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.116372846 = fieldWeight in 2091, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2091)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 44(2008) no.3, pp.1251-1266
  5. Jansen, B.J.; Booth, D.L.; Spink, A.: Patterns of query reformulation during Web searching (2009) 0.00
    0.0025748524 = product of:
      0.01029941 = sum of:
        0.01029941 = weight(_text_:information in 2936) [ClassicSimilarity], result of:
          0.01029941 = score(doc=2936,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.116372846 = fieldWeight in 2936, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2936)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.7, pp.1358-1371