Search (34 results, page 2 of 2)

  • author_ss:"Jansen, B.J."
  1. Jansen, B.J.; Booth, D.L.; Spink, A.: Patterns of query reformulation during Web searching (2009) 0.01
    0.0068425946 = product of:
      0.04789816 = sum of:
        0.041844364 = weight(_text_:web in 2936) [ClassicSimilarity], result of:
          0.041844364 = score(doc=2936,freq=8.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.43268442 = fieldWeight in 2936, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2936)
        0.0060537956 = weight(_text_:information in 2936) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=2936,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 2936, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2936)
      0.14285715 = coord(2/14)
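    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: tf = sqrt(termFreq), idf = 1 + ln(maxDocs / (docFreq + 1)), fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm, and the final score is the sum of the per-term products scaled by coord (here 2 of 14 query clauses match). A minimal Python sketch reproducing the score of result 1 from the values listed above (the helper names are illustrative, not part of Lucene's API):

      import math

      MAX_DOCS   = 44218
      QUERY_NORM = 0.029633347
      FIELD_NORM = 0.046875        # fieldNorm(doc=2936)
      COORD      = 2 / 14          # coord(2/14): 2 matching clauses out of 14

      def idf(doc_freq):
          # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

      def term_score(freq, doc_freq):
          tf = math.sqrt(freq)                          # tf = sqrt(termFreq)
          query_weight = idf(doc_freq) * QUERY_NORM     # queryWeight
          field_weight = tf * idf(doc_freq) * FIELD_NORM  # fieldWeight
          return query_weight * field_weight

      score = COORD * (term_score(8.0, 4597)      # _text_:web
                       + term_score(2.0, 20772))  # _text_:information
      print(score)   # ~0.0068425946, matching the value shown for result 1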
    
    Abstract
    Query reformulation is a key user behavior during Web search. Our research goal is to develop predictive models of query reformulation during Web searching. This article reports results from a study in which we automatically classified the query-reformulation patterns for 964,780 Web searching sessions, composed of 1,523,072 queries, to predict the next query reformulation. We employed an n-gram modeling approach to describe the probability of users transitioning from one query-reformulation state to another to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each model for accuracy of prediction, coverage of the dataset, and complexity of the possible pattern set. The results show that Reformulation and Assistance account for approximately 45% of all query reformulations; furthermore, the results demonstrate that the first- and second-order models provide the best predictability, between 28 and 40% overall and higher than 70% for some patterns. Implications are that the n-gram approach can be used for improving searching systems and searching assistance.
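    The n-gram approach described above can be read as a transition-probability model over query-reformulation states. A minimal first-order (bigram) sketch, using illustrative state labels and toy sessions rather than the paper's data:

      from collections import Counter, defaultdict

      # Toy sessions of reformulation states (labels are illustrative only).
      sessions = [
          ["New", "Reformulation", "Assistance", "Reformulation", "Assistance"],
          ["New", "Specialization", "Reformulation", "Assistance"],
      ]

      # Count state-to-state transitions (first-order model).
      transitions = defaultdict(Counter)
      for states in sessions:
          for current, nxt in zip(states, states[1:]):
              transitions[current][nxt] += 1

      def predict_next(state):
          # Most probable next reformulation state given the current one.
          counts = transitions[state]
          return counts.most_common(1)[0][0] if counts else None

      print(predict_next("Reformulation"))   # -> "Assistance" for this toy data

    Second- and higher-order variants condition on the previous two or more states instead of one, trading coverage for a larger pattern set.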
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.7, S.1358-1371
  2. Jansen, B.J.; Pooch, U.: A review of Web searching studies and a framework for future research (2001) 0.01
    0.0063583 = product of:
      0.044508096 = sum of:
        0.034519844 = weight(_text_:web in 5186) [ClassicSimilarity], result of:
          0.034519844 = score(doc=5186,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.35694647 = fieldWeight in 5186, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5186)
        0.009988253 = weight(_text_:information in 5186) [ClassicSimilarity], result of:
          0.009988253 = score(doc=5186,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1920054 = fieldWeight in 5186, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5186)
      0.14285715 = coord(2/14)
    
    Abstract
    Jansen and Pooch review three major search engine studies and compare them to three traditional search system studies and three OPAC search studies to determine whether user search characteristics differ. The Web search engine studies indicate that most searchers issue about two queries of two search terms per session, use no Boolean operators, and look only at the top ten items returned, while reporting the location of relevant information. The traditional search system studies show seven to 16 queries of six to nine terms per session, with about ten documents viewed per session. The OPAC studies indicate two to five queries of two or fewer terms per session, with Boolean searching at about 1% and fewer than 50 documents viewed.
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.3, S.235-246
  3. Jansen, B.J.: Seeking and implementing automated assistance during the search process (2005) 0.01
    0.005129378 = product of:
      0.035905644 = sum of:
        0.0104854815 = weight(_text_:information in 1055) [ClassicSimilarity], result of:
          0.0104854815 = score(doc=1055,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.20156369 = fieldWeight in 1055, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1055)
        0.025420163 = weight(_text_:retrieval in 1055) [ClassicSimilarity], result of:
          0.025420163 = score(doc=1055,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.2835858 = fieldWeight in 1055, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1055)
      0.14285715 = coord(2/14)
    
    Abstract
    Searchers seldom make use of the advanced searching features that could improve the quality of the search process because they do not know these features exist, do not understand how to use them, or do not believe they are effective or efficient. Information retrieval systems offering automated assistance could greatly improve search effectiveness by suggesting or implementing assistance automatically. A critical issue in designing such systems is determining when the system should intervene in the search process. In this paper, we report the results of an empirical study analyzing when during the search process users seek automated searching assistance from the system and when they implement the assistance. We designed a fully functional, automated assistance application and conducted a study with 30 subjects interacting with the system. The study used a 2-GB TREC document collection and TREC topics. Approximately 50% of the subjects sought assistance, and over 80% of those implemented that assistance. Results from the evaluation indicate that users are willing to accept automated assistance during the search process, especially after viewing results and locating relevant documents. We discuss implications for interactive information retrieval system design and directions for future research.
    Source
    Information processing and management. 41(2005) no.4, S.909-928
  4. Jansen, B.J.; Spink, A.; Blakely, C.; Koshman, S.: Defining a session on Web search engines (2007) 0.01
    0.0050347717 = product of:
      0.0352434 = sum of:
        0.03019857 = weight(_text_:web in 285) [ClassicSimilarity], result of:
          0.03019857 = score(doc=285,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.3122631 = fieldWeight in 285, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=285)
        0.0050448296 = weight(_text_:information in 285) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=285,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 285, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=285)
      0.14285715 = coord(2/14)
    
    Abstract
    Detecting query reformulations within a session by a Web searcher is an important area of research for designing more helpful searching systems and targeting content to particular users. Methods explored by other researchers include both qualitative (i.e., the use of human judges to manually analyze query patterns on usually small samples) and nondeterministic algorithms, typically using large amounts of training data to predict query modification during sessions. In this article, we explore three alternative methods for detection of session boundaries. All three methods are computationally straightforward and therefore easily implemented for detection of session changes. We examine 2,465,145 interactions from 534,507 users of Dogpile.com on May 6, 2005. We compare session analysis using (a) Internet Protocol address and cookie; (b) Internet Protocol address, cookie, and a temporal limit on intrasession interactions; and (c) Internet Protocol address, cookie, and query reformulation patterns. Overall, our analysis shows that defining sessions by query reformulation along with Internet Protocol address and cookie provides the best measure, resulting in an 82% increase in the count of sessions. Regardless of the method used, the mean session length was fewer than three queries, and the mean session duration was less than 30 min. Searchers most often modified their query by changing query terms (nearly 23% of all query modifications) rather than adding or deleting terms. Implications are that for measuring searching traffic, unique sessions may be a better indicator than the common metric of unique visitors. This research also sheds light on the more complex aspects of Web searching involving query modifications and may lead to advances in searching tools.
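    Method (b) above, sessions keyed by IP address and cookie with a temporal limit on intrasession interactions, can be sketched as a simple gap-based split of an interaction log. The 30-minute cutoff and field layout below are illustrative assumptions, not Dogpile's actual log format:

      from collections import defaultdict
      from datetime import datetime, timedelta

      CUTOFF = timedelta(minutes=30)   # illustrative temporal limit

      def split_sessions(interactions):
          # interactions: iterable of (ip, cookie, timestamp) tuples.
          by_user = defaultdict(list)
          for ip, cookie, ts in interactions:
              by_user[(ip, cookie)].append(ts)

          sessions = []
          for timestamps in by_user.values():
              timestamps.sort()
              current = [timestamps[0]]
              for ts in timestamps[1:]:
                  if ts - current[-1] > CUTOFF:   # gap too long: new session
                      sessions.append(current)
                      current = [ts]
                  else:
                      current.append(ts)
              sessions.append(current)
          return sessions

      log = [
          ("1.2.3.4", "abc", datetime(2005, 5, 6, 10, 0)),
          ("1.2.3.4", "abc", datetime(2005, 5, 6, 10, 5)),
          ("1.2.3.4", "abc", datetime(2005, 5, 6, 11, 0)),  # >30 min gap
      ]
      print(len(split_sessions(log)))   # 2 sessions

    Method (c) would additionally start a new session when the query-reformulation pattern indicates a topic change rather than relying on the time gap alone.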
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.6, S.862-871
  5. Coughlin, D.M.; Campbell, M.C.; Jansen, B.J.: A web analytics approach for appraising electronic resources in academic libraries (2016) 0.01
    0.0050347717 = product of:
      0.0352434 = sum of:
        0.03019857 = weight(_text_:web in 2770) [ClassicSimilarity], result of:
          0.03019857 = score(doc=2770,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.3122631 = fieldWeight in 2770, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2770)
        0.0050448296 = weight(_text_:information in 2770) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=2770,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 2770, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2770)
      0.14285715 = coord(2/14)
    
    Abstract
    University libraries provide access to thousands of journals and spend millions of dollars annually on electronic resources. With several commercial entities providing these electronic resources, the result can be siloed systems and processes for evaluating the cost and usage of these resources, making it difficult to provide meaningful analytics. In this research, we examine a subset of journals from a large research library using a web analytics approach with the goal of developing a framework for the analysis of library subscriptions. This foundational approach is implemented by comparing the impact to the cost, titles, and usage for the subset of journals and by assessing the funding area. Overall, the results highlight the benefit of a web analytics evaluation framework for university libraries and the impact of classifying titles based on the funding area. Furthermore, they show the statistical difference in both use and cost among the various funding areas when ranked by cost, eliminating the outliers of heavily used and highly expensive journals. Future work includes refining this model for a larger-scale analysis tying metrics to library organizational objectives and for the creation of an online application to automate this analysis.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.3, S.518-534
  6. Jansen, B.J.; McNeese, M.D.: Evaluating the effectiveness of and patterns of interactions with automated searching assistance (2005) 0.00
    0.004954607 = product of:
      0.034682248 = sum of:
        0.008737902 = weight(_text_:information in 4815) [ClassicSimilarity], result of:
          0.008737902 = score(doc=4815,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16796975 = fieldWeight in 4815, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4815)
        0.025944345 = weight(_text_:retrieval in 4815) [ClassicSimilarity], result of:
          0.025944345 = score(doc=4815,freq=6.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.28943354 = fieldWeight in 4815, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4815)
      0.14285715 = coord(2/14)
    
    Abstract
    We report quantitative and qualitative results of an empirical evaluation to determine whether automated assistance improves searching performance and when searchers desire system intervention in the search process. Forty participants interacted with two fully functional information retrieval systems in a counterbalanced, within-participant study. The systems were identical in all respects except that one offered automated assistance and the other did not. The study used a client-side automated assistance application, an approximately 500,000-document Text REtrieval Conference content collection, and six topics. Results indicate that automated assistance can improve searching performance. However, the improvement is less dramatic than one might expect, with an approximately 20% performance increase, as measured by the number of user-selected relevant documents. Concerning patterns of interaction, we identified 1,879 occurrences of searcher-system interactions and classified them into 9 major categories and 27 subcategories or states. Results indicate that there are predictable patterns of times when searchers desire and implement searching assistance. The most common three-state pattern is Execute Query - View Results: With Scrolling - View Assistance. Searchers appear receptive to automated assistance; there is a 71% implementation rate. There does not seem to be a correlation between the use of assistance and previous searching performance. We discuss the implications for the design of information retrieval systems and future research directions.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.14, S.1480-1503
  7. Jansen, B.J.; Booth, D.L.; Smith, B.K.: Using the taxonomy of cognitive learning to model online searching (2009) 0.00
    0.004529155 = product of:
      0.031704083 = sum of:
        0.017435152 = weight(_text_:web in 4223) [ClassicSimilarity], result of:
          0.017435152 = score(doc=4223,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 4223, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4223)
        0.014268933 = weight(_text_:information in 4223) [ClassicSimilarity], result of:
          0.014268933 = score(doc=4223,freq=16.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.27429342 = fieldWeight in 4223, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4223)
      0.14285715 = coord(2/14)
    
    Abstract
    In this research, we investigated whether a learning process has unique information searching characteristics. The results of this research show that information searching is a learning process with unique searching characteristics specific to particular learning levels. In a laboratory experiment, we studied the searching characteristics of 72 participants engaged in 426 searching tasks. We classified the searching tasks according to Anderson and Krathwohl's taxonomy of the cognitive learning domain. Research results indicate that applying and analyzing, the middle two of the six categories, generally take the most searching effort in terms of queries per session, topics searched per session, and total time searching. Interestingly, the lowest two learning categories, remembering and understanding, exhibit searching characteristics similar to the highest order learning categories of evaluating and creating. Our results suggest that the view of Web searchers as having simple information needs may be incorrect. Instead, we discovered that users applied simple searching expressions to support their higher-level information needs. It appears that searchers rely primarily on their internal knowledge for evaluating and creating information needs, using search mainly for fact checking and verification. Overall, results indicate that a learning theory may better describe the information searching process than more commonly used paradigms of decision making or problem solving. The learning style of the searcher does have some moderating effect on exhibited searching characteristics. The implication of this research is that rather than solely addressing a searcher's expressed information need, searching systems can also address the underlying learning need of the user.
    Source
    Information processing and management. 45(2009) no.6, S.643-663
  8. Jansen, B.J.; Zhang, M.; Schultz, C.D.: Brand and its effect on user perception of search engine performance (2009) 0.00
    0.004243123 = product of:
      0.029701859 = sum of:
        0.02465703 = weight(_text_:web in 2948) [ClassicSimilarity], result of:
          0.02465703 = score(doc=2948,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25496176 = fieldWeight in 2948, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2948)
        0.0050448296 = weight(_text_:information in 2948) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=2948,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 2948, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2948)
      0.14285715 = coord(2/14)
    
    Abstract
    In this research we investigate the effect of search engine brand on the evaluation of searching performance. Our research is motivated by the large amount of search traffic directed to a handful of Web search engines, even though many have similar interfaces and performance. We conducted a laboratory experiment with 32 participants using a 4² factorial design confounded in four blocks to measure the effect of four search engine brands (Google, MSN, Yahoo!, and a locally developed search engine) while controlling for the quality and presentation of search engine results. We found that brand indeed played a role in the searching process, and the brand effect varied across domains. Users seemed to place a high degree of trust in major search engine brands; however, they were more engaged in the searching process when using lesser-known search engines. It appears that branding affects overall Web search at four stages: (a) search engine selection, (b) search engine results page evaluation, (c) individual link evaluation, and (d) evaluation of the landing page. We discuss the implications for search engine marketing and the design of empirical studies measuring search engine performance.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.8, S.1572-1595
  9. Ortiz-Cordova, A.; Yang, Y.; Jansen, B.J.: External to internal search : associating searching on search engines with searching on sites (2015) 0.00
    0.004243123 = product of:
      0.029701859 = sum of:
        0.02465703 = weight(_text_:web in 2675) [ClassicSimilarity], result of:
          0.02465703 = score(doc=2675,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25496176 = fieldWeight in 2675, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2675)
        0.0050448296 = weight(_text_:information in 2675) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=2675,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 2675, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2675)
      0.14285715 = coord(2/14)
    
    Abstract
    We analyze the transitions from external search, searching on web search engines, to internal search, searching on websites. We categorize 295,571 search episodes composed of a query submitted to web search engines and the subsequent queries submitted to a single website search by the same users. There are a total of 1,136,390 queries from all searches, of which 295,571 are external search queries and 840,819 are internal search queries. We algorithmically classify queries into states and then use n-grams to categorize search patterns. We cluster the searching episodes into major patterns and identify the most commonly occurring, which are: (1) Explorers (43% of all patterns) with a broad external search query and then broad internal search queries, (2) Navigators (15%) with an external search query containing a URL component and then specific internal search queries, and (3) Shifters (15%) with different, seemingly unrelated query types when transitioning from external to internal search. The implications of this research are that external search and internal search sessions are part of a single search episode and that online businesses can leverage these search episodes to more effectively target potential customers.
    Source
    Information processing and management. 51(2015) no.5, S.718-736
  10. Ortiz-Cordova, A.; Jansen, B.J.: Classifying web search queries to identify high revenue generating customers (2012) 0.00
    0.0038537113 = product of:
      0.026975978 = sum of:
        0.020922182 = weight(_text_:web in 279) [ClassicSimilarity], result of:
          0.020922182 = score(doc=279,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.21634221 = fieldWeight in 279, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=279)
        0.0060537956 = weight(_text_:information in 279) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=279,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 279, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=279)
      0.14285715 = coord(2/14)
    
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.7, S.1426-1441
  11. Coughlin, D.M.; Jansen, B.J.: Modeling journal bibliometrics to predict downloads and inform purchase decisions at university research libraries (2016) 0.00
    0.003211426 = product of:
      0.022479981 = sum of:
        0.017435152 = weight(_text_:web in 3094) [ClassicSimilarity], result of:
          0.017435152 = score(doc=3094,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 3094, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3094)
        0.0050448296 = weight(_text_:information in 3094) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=3094,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 3094, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3094)
      0.14285715 = coord(2/14)
    
    Abstract
    University libraries provide access to thousands of online journals and other content, spending millions of dollars annually on these electronic resources. Providing access to these online resources is costly, and it is difficult both to analyze the value of this content to the institution and to discern those journals that comparatively provide more value. In this research, we examine 1,510 journals from a large research university library, representing more than 40% of the university's annual subscription cost for electronic resources at the time of the study. We utilize a web analytics approach for the creation of a linear regression model to predict usage among these journals. We categorize metrics into two classes: global (journal focused) and local (institution dependent). Using 275 journals for our training set, our analysis shows that a combination of global and local metrics creates the strongest model for predicting full-text downloads. Our linear regression model has an accuracy of more than 80% in predicting downloads for the 1,235 journals in our test set. The implications of the findings are that university libraries that use local metrics have better insight into the value of a journal and can therefore manage content costs more efficiently.
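    A sketch of the kind of regression described above, predicting full-text downloads from a mix of global (journal-level) and local (institution-level) metrics; the feature choices and numbers are invented for illustration and are not the paper's data:

      import numpy as np

      # Columns: impact factor (global), articles published (global),
      # clicks from the library's discovery layer (local).
      X = np.array([
          [2.1, 120,  340.0],
          [0.8,  45,   60.0],
          [5.4, 300, 1200.0],
          [1.3,  80,  150.0],
          [3.0, 150,  500.0],
      ])
      y = np.array([900.0, 150.0, 4100.0, 420.0, 1300.0])  # observed downloads

      X1 = np.column_stack([np.ones(len(X)), X])     # add intercept column
      coef, *_ = np.linalg.lstsq(X1, y, rcond=None)  # least-squares fit
      predicted = X1 @ coef
      print(np.round(predicted))                     # in-sample download estimates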
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.9, S.2263-2273
  12. Liu, Z.; Jansen, B.J.: ASK: A taxonomy of accuracy, social, and knowledge information seeking posts in social question and answering (2017) 0.00
    7.2068995E-4 = product of:
      0.010089659 = sum of:
        0.010089659 = weight(_text_:information in 3345) [ClassicSimilarity], result of:
          0.010089659 = score(doc=3345,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.19395474 = fieldWeight in 3345, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3345)
      0.071428575 = coord(1/14)
    
    Abstract
    Many people turn to their social networks to find information through the practice of question and answering. We believe it is necessary to use different answering strategies based on the type of question to accommodate different information needs. In this research, we propose the ASK taxonomy, which categorizes questions posted on social networking sites into three types according to the nature of the questioner's inquiry: accuracy, social, or knowledge. To automatically decide which answering strategy to use, we develop a predictive model based on ASK question types using lexical, topical, contextual, and syntactic question features as well as answer features. By applying the classifier to an annotated data set, we present a comprehensive analysis comparing questions in terms of their word usage, topical interests, temporal and spatial restrictions, syntactic structure, and response characteristics. Our results show that the three types of questions exhibit different characteristics in the way they are asked. Our automatic classification algorithm achieves an 83% correct labeling result, showing the value of the ASK taxonomy for the design of social question and answering systems.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.2, S.333-347
  13. Jansen, B.J.; Zhang, M.; Sobel, K.; Chowdury, A.: Twitter power : tweets as electronic word of mouth (2009) 0.00
    3.6034497E-4 = product of:
      0.0050448296 = sum of:
        0.0050448296 = weight(_text_:information in 3157) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=3157,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 3157, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3157)
      0.071428575 = coord(1/14)
    
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.11, S.2169-2188
  14. Jansen, B.J.; Liu, Z.; Simon, Z.: The effect of ad rank on the performance of keyword advertising campaigns (2013) 0.00
    3.6034497E-4 = product of:
      0.0050448296 = sum of:
        0.0050448296 = weight(_text_:information in 1095) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=1095,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 1095, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1095)
      0.071428575 = coord(1/14)
    
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.10, S.2115-2132