Search (7 results, page 1 of 1)

  • author_ss:"Li, Y."
  1. Li, Y.; Kobsa, A.: Context and privacy concerns in friend request decisions (2020) 0.02
    0.020866206 = product of:
      0.08346482 = sum of:
        0.08346482 = weight(_text_:sites in 5873) [ClassicSimilarity], result of:
          0.08346482 = score(doc=5873,freq=2.0), product of:
            0.2408473 = queryWeight, product of:
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.046071928 = queryNorm
            0.34654665 = fieldWeight in 5873, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.046875 = fieldNorm(doc=5873)
      0.25 = coord(1/4)
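The explain tree above is Lucene's ClassicSimilarity (TF-IDF) scoring breakdown. As a minimal sketch, the printed score for result 1 can be reproduced from its own inputs; queryNorm and coord are copied from the output, since they depend on the full query rather than this document alone:

```python
import math

# Inputs printed in the explain tree for result 1 (doc 5873, term "sites").
max_docs, doc_freq = 44218, 644
freq = 2.0
field_norm = 0.046875          # length normalization stored at index time
query_norm = 0.046071928       # taken from the explain output
coord = 0.25                   # coord(1/4): 1 of 4 query clauses matched

idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~ 5.227637
tf = math.sqrt(freq)                               # ~ 1.4142135
query_weight = idf * query_norm                    # ~ 0.2408473
field_weight = tf * idf * field_norm               # ~ 0.34654665
score = coord * query_weight * field_weight        # ~ 0.020866206
```

Each line matches one node of the tree: tf is the square root of the term frequency, idf is 1 + ln(maxDocs/(docFreq+1)), and the final score is coord x queryWeight x fieldWeight.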
    
    Abstract
    Friend request acceptance and information disclosure constitute 2 important privacy decisions for users to control the flow of their personal information in social network sites (SNSs). These decisions are greatly influenced by contextual characteristics of the request. However, the contextual influence may not be uniform among users with different levels of privacy concerns. In this study, we hypothesize that users with higher privacy concerns may consider contextual factors differently from those with lower privacy concerns. By conducting a scenario-based survey study and structural equation modeling, we verify the interaction effects between privacy concerns and contextual factors. We additionally find that users' perceived risk towards the requester mediates the effect of context and privacy concerns. These results extend our understanding about the cognitive process behind privacy decision making in SNSs. The interaction effects suggest strategies for SNS providers to predict user's friend request acceptance and to customize context-aware privacy decision support based on users' different privacy attitudes.
  2. Zhang, X.; Li, Y.; Liu, J.; Zhang, Y.: Effects of interaction design in digital libraries on user interactions (2008) 0.01
    0.0100566195 = product of:
      0.040226478 = sum of:
        0.040226478 = product of:
          0.080452956 = sum of:
            0.080452956 = weight(_text_:design in 1898) [ClassicSimilarity], result of:
              0.080452956 = score(doc=1898,freq=10.0), product of:
                0.17322445 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046071928 = queryNorm
                0.46444345 = fieldWeight in 1898, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1898)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - This study aims to investigate the effects of different search and browse features in digital libraries (DLs) on task interactions, and which features lead to poor user experience.
    Design/methodology/approach - Three operational DLs (ACM, IEEE CS, and IEEE Xplore) are used in this study. These three DLs present different features in their search and browsing designs. Two information-seeking tasks are constructed: one search task and one browsing task. An experiment was conducted in a usability laboratory, and data from 35 participants are collected on a set of measures of user interactions.
    Findings - The results demonstrate significant differences in many aspects of the user interactions among the three DLs. For both search and browse designs, the features that lead to poor user interactions are identified.
    Research limitations/implications - User interactions are affected by specific design features in DLs. Some of the design features may lead to poor user performance and should be improved. The study was limited mainly in the variety and number of tasks used.
    Originality/value - The study provides empirical evidence of the effects of interaction design features in DLs on user interactions and performance. The results contribute to our knowledge about DL designs in general and about the three operational DLs in particular.
  3. Li, Y.; Crescenzi, A.; Ward, A.R.; Capra, R.: Thinking inside the box : an evaluation of a novel search-assisting tool for supporting (meta)cognition during exploratory search (2023) 0.01
    0.006360365 = product of:
      0.02544146 = sum of:
        0.02544146 = product of:
          0.05088292 = sum of:
            0.05088292 = weight(_text_:design in 1040) [ClassicSimilarity], result of:
              0.05088292 = score(doc=1040,freq=4.0), product of:
                0.17322445 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046071928 = queryNorm
                0.29373983 = fieldWeight in 1040, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1040)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Exploratory searches involve significant, cognitively demanding activities aimed at learning and investigation. However, users receive little support from search engines for their cognitive and metacognitive activities (e.g., discovery, synthesis, planning, transformation, monitoring, and reflection) during exploratory searches. To better support the exploratory search process, we designed a new search assistance tool called OrgBox. OrgBox allows users to drag and drop information they find during searches into "boxes" and "items" that can be created, labeled, and rearranged on a canvas. We conducted a controlled, within-subjects user study with 24 participants to evaluate OrgBox against a baseline tool called OrgDoc that supports rich-text features. Our findings show that participants perceived the OrgBox tool to provide more support for grouping and reorganizing information, tracking thought processes, planning and monitoring search and task processes, and gaining a visual overview of the collected information. The usability test reveals users' preferences for the simplicity, familiarity, and flexibility of the OrgBox design, along with technical problems such as delayed responses and restrictions on use. Our results have implications for the design of search-assisting systems that encourage cognitive and metacognitive activities during exploratory search processes.
  4. Crespo, J.A.; Herranz, N.; Li, Y.; Ruiz-Castillo, J.: The effect on citation inequality of differences in citation practices at the web of science subject category level (2014) 0.01
    0.0055172984 = product of:
      0.022069193 = sum of:
        0.022069193 = product of:
          0.044138387 = sum of:
            0.044138387 = weight(_text_:22 in 1291) [ClassicSimilarity], result of:
              0.044138387 = score(doc=1291,freq=4.0), product of:
                0.16133605 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046071928 = queryNorm
                0.27358043 = fieldWeight in 1291, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1291)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This article studies the impact of differences in citation practices at the subfield, or Web of Science subject category, level, using the model introduced in Crespo, Li, and Ruiz-Castillo (2013a), according to which the number of citations received by an article depends on its underlying scientific influence and the field to which it belongs. We use the same Thomson Reuters data set of about 4.4 million articles used in Crespo et al. (2013a) to analyze 22 broad fields. The main results are the following: First, when the classification system goes from 22 fields to 219 subfields, the effect on citation inequality of differences in citation practices increases from ≈14% at the field level to 18% at the subfield level. Second, we estimate a set of exchange rates (ERs) over a wide [660, 978] citation quantile interval to express the citation counts of articles as the equivalent counts in the all-sciences case. In the fractional case, for example, we find that in 187 of 219 subfields the ERs are reliable in the sense that the coefficient of variation is smaller than or equal to 0.10. Third, in the fractional case the normalization of the raw data using the ERs (or subfield mean citations) as normalization factors reduces the importance of the differences in citation practices from 18% to 3.8% (3.4%) of overall citation inequality. Fourth, the results in the fractional case are essentially replicated when we adopt a multiplicative approach.
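The mean-based normalization the abstract describes can be illustrated with a small, made-up example (the numbers below are hypothetical, not from the paper): dividing each article's raw citation count by its subfield's mean citation rate removes the between-subfield component of citation inequality.

```python
# Made-up raw citation counts for two subfields with very different
# citation practices but the same relative citation profile.
citations = {
    "biochemistry": [40, 10, 70],
    "mathematics": [4, 1, 7],
}

# Use each subfield's mean citation count as the normalization factor.
means = {f: sum(c) / len(c) for f, c in citations.items()}
normalized = {f: [x / means[f] for x in c] for f, c in citations.items()}
# Both subfields now show the identical profile [1.0, 0.25, 1.75], so
# differences in citation practices no longer drive citation inequality.
```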
  5. Liu, J.; Li, Y.; Hastings, S.K.: Simplified scheme of search task difficulty reasons (2019) 0.01
    0.0053969487 = product of:
      0.021587795 = sum of:
        0.021587795 = product of:
          0.04317559 = sum of:
            0.04317559 = weight(_text_:design in 5224) [ClassicSimilarity], result of:
              0.04317559 = score(doc=5224,freq=2.0), product of:
                0.17322445 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046071928 = queryNorm
                0.24924651 = fieldWeight in 5224, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5224)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This article reports on a study that aimed at simplifying a search task difficulty reason scheme. Liu, Kim, and Creel (2015) (denoted LKC15) developed a 21-item search task difficulty reason scheme using a controlled laboratory experiment. The current study simplified the scheme through another experiment that followed the same design as LKC15 and involved 32 university students. The study had one added questionnaire item that provided a list of the 21 difficulty reasons in the multiple-choice format. By comparing the current study with LKC15, a concept of primary top difficulty reasons was proposed, which reasonably simplified the 21-item scheme to an 8-item top reason list. This limited number of reasons is more manageable and makes it feasible for search systems to predict task difficulty reasons from observable user behaviors, which builds the basis for systems to improve user satisfaction based on predicted search difficulty reasons.
  6. Li, Y.; Belkin, N.J.: A faceted approach to conceptualizing tasks in information seeking (2008) 0.00
    0.0044974573 = product of:
      0.01798983 = sum of:
        0.01798983 = product of:
          0.03597966 = sum of:
            0.03597966 = weight(_text_:design in 2442) [ClassicSimilarity], result of:
              0.03597966 = score(doc=2442,freq=2.0), product of:
                0.17322445 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046071928 = queryNorm
                0.20770542 = fieldWeight in 2442, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2442)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The nature of the task that leads a person to engage in information interaction, as well as the nature of information-seeking and searching tasks, has been shown to influence individuals' information behavior. Classifying tasks in a domain has been viewed as a departure point for studies on the relationship between tasks and human information behavior. However, previous task classification schemes either classify tasks with respect to the requirements of specific studies or merely classify a certain category of task. Such approaches do not lead to a holistic picture of task, since a task involves different aspects. Therefore, the present study aims to develop a faceted classification of task, which can incorporate work tasks and information search tasks into the same classification scheme and characterize tasks in such a way as to help people make predictions of information behavior. For this purpose, previous task classification schemes and their underlying facets are reviewed and discussed. Analysis identifies essential facets and categorizes them into Generic facets of task and Common attributes of task. Generic facets of task include Source of task, Task doer, Time, Action, Product, and Goal. Common attributes of task include Task characteristics and User's perception of task. Corresponding sub-facets and values are identified as well. In this fashion, a faceted classification of task is established that can be used to describe users' work tasks and information search tasks. This faceted classification provides a framework to further explore the relationships among work tasks, search tasks, and interactive information retrieval, and to advance adaptive IR systems design.
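The faceted scheme listed in the abstract can be sketched as a simple data structure (this is an illustrative encoding, not code from the paper; the helper function is hypothetical):

```python
# Facet names taken from the abstract; a task description assigns a
# value to some or all of them.
GENERIC_FACETS = ["Source of task", "Task doer", "Time",
                  "Action", "Product", "Goal"]
COMMON_ATTRIBUTES = ["Task characteristics", "User's perception of task"]

def describe_task(values: dict) -> dict:
    """Keep only entries for recognized facets (hypothetical helper)."""
    known = set(GENERIC_FACETS) | set(COMMON_ATTRIBUTES)
    return {k: v for k, v in values.items() if k in known}

# A work task and a search task can be described in the same scheme:
search_task = describe_task({
    "Source of task": "self-generated",
    "Goal": "find prior work on task classification",
    "Irrelevant key": "dropped",   # not a facet, filtered out
})
```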
  7. Arora, S.K.; Li, Y.; Youtie, J.; Shapira, P.: Using the wayback machine to mine websites in the social sciences : a methodological resource (2016) 0.00
    0.0044974573 = product of:
      0.01798983 = sum of:
        0.01798983 = product of:
          0.03597966 = sum of:
            0.03597966 = weight(_text_:design in 3050) [ClassicSimilarity], result of:
              0.03597966 = score(doc=3050,freq=2.0), product of:
                0.17322445 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046071928 = queryNorm
                0.20770542 = fieldWeight in 3050, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3050)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Websites offer an unobtrusive data source for developing and analyzing information about various types of social science phenomena. In this paper, we provide a methodological resource for social scientists looking to expand their toolkit using unstructured web-based text, and in particular, with the Wayback Machine, to access historical website data. After providing a literature review of existing research that uses the Wayback Machine, we put forward a step-by-step description of how the analyst can design a research project using archived websites. We draw on the example of a project that analyzes indicators of innovation activities and strategies in 300 U.S. small- and medium-sized enterprises in green goods industries. We present six steps to access historical Wayback website data: (a) sampling, (b) organizing and defining the boundaries of the web crawl, (c) crawling, (d) website variable operationalization, (e) integration with other data sources, and (f) analysis. Although our examples draw on specific types of firms in green goods industries, the method can be generalized to other areas of research. In discussing the limitations and benefits of using the Wayback Machine, we note that both machine and human effort are essential to developing a high-quality data set from archived web information.
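Steps (b) and (c) of the workflow above can be sketched against the Internet Archive's public Wayback availability API, which locates the archived snapshot closest to a target date. The endpoint is real; the firm website and date below are made-up placeholders, and a production crawl would add rate limiting and error handling:

```python
from urllib.parse import urlencode

# Base URL of the Wayback Machine availability API.
BASE = "https://archive.org/wayback/available"

def availability_query(site: str, yyyymmdd: str) -> str:
    """Build the API URL asking for the snapshot nearest to yyyymmdd."""
    return f"{BASE}?{urlencode({'url': site, 'timestamp': yyyymmdd})}"

# Hypothetical sampled firm website and target capture date.
url = availability_query("example-green-goods-firm.com", "20100101")
# Fetching this URL returns JSON; its archived_snapshots.closest entry
# (when present) holds the archived page's URL and actual capture time.
```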