Search (3 results, page 1 of 1)

  • author_ss:"Kim, Y.-M."
  1. Kim, Y.-M.: The adoption of university library Web site resources : a multigroup analysis (2010) 0.00
    0.0024857575 = product of:
      0.004971515 = sum of:
        0.004971515 = product of:
          0.00994303 = sum of:
            0.00994303 = weight(_text_:a in 3451) [ClassicSimilarity], result of:
              0.00994303 = score(doc=3451,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18723148 = fieldWeight in 3451, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3451)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
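The numeric tree above is Lucene's ClassicSimilarity (TF-IDF) explain output for this hit. As a minimal sketch (not part of the catalog record), the final score can be reproduced from the values shown in the tree:

```python
import math

# Values taken from the explain tree above (doc 3451, term "a").
freq = 12.0              # termFreq: occurrences of the term in the field
idf = 1.153047           # idf(docFreq=37942, maxDocs=44218)
query_norm = 0.046056706 # queryNorm
field_norm = 0.046875    # fieldNorm(doc=3451)

# ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1)); this reproduces
# the value in the tree to ~1e-5.
idf_check = 1 + math.log(44218 / (37942 + 1))

tf = math.sqrt(freq)                  # tf(freq=12.0) = 3.4641016
query_weight = idf * query_norm       # queryWeight  = 0.053105544
field_weight = tf * idf * field_norm  # fieldWeight  = 0.18723148
raw_score = query_weight * field_weight  # weight(_text_:a) = 0.00994303

# Two nested coord(1/2) factors: at each level only one of two
# query clauses matched, so the score is halved twice.
final_score = raw_score * 0.5 * 0.5   # = 0.0024857575
```

The second and third hits follow the same computation, differing only in fieldNorm (0.0390625 instead of 0.046875), which yields their slightly lower scores.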
    
    Abstract
    University libraries invest a massive amount of resources in digitizing information for the Web, yet there is growing concern that much of this information is being underutilized. The present study uses the technology acceptance model (TAM) to investigate university library website resources (ULWR) usage. We categorize users based on academic roles and then analyze them as subgroups in order to observe different adoption patterns across groups. A total of 299 usable responses were collected from four different universities and across three populations: undergraduate, master's, and doctoral student/faculty groups. The findings show that different library users indeed access ULWR for different reasons, resulting in a need for tailored managerial efforts. Overall, the extended TAM explains undergraduate students' usage best; the explanatory power of the model is significantly lower for the doctoral student/faculty group. Some of the findings challenge results reported in TAM research in other fields. These unexpected findings may result from the application of the model to a different context. Detailed theoretical implications and managerial guidance are offered.
    Type
    a
  2. Kim, Y.-M.: Validation of psychometric research instruments : the case of information science (2009) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 2853) [ClassicSimilarity], result of:
              0.008285859 = score(doc=2853,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 2853, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2853)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Instrument validation is a critical step that researchers should employ in order to ensure the generation of scientifically valid knowledge. Without it, the basis of research findings and the generalizability of those findings are threatened. This is especially true in the social sciences, disciplines in which the majority of published articles rely on subjective instruments for data collection. Consequently, instrument validation has increasingly become common practice in the social sciences, yet implementation of this practice differs greatly among the social-science disciplines. The assessment of instrument validation undertaken in this study attempts to provide guidance for reviewers, editors, authors, and readers by presenting various validation techniques and analyzing the quality of a set of psychometric journal articles. In this research, six attributes of instrument validation are identified as a validation protocol. The Journal of the American Society for Information Science and Technology (JASIST), widely recognized as a leading journal in the field of information science, was selected for examination to determine how well a set of its research articles met the instrument-validation protocol. Findings show that while researchers are becoming increasingly attentive to certain validation issues, standards on validation processes and reporting might prove helpful. This paper identifies areas for improvement in the reporting of validity measures and offers ways for researchers to implement them.
    Type
    a
  3. Rieh, S.Y.; Kim, Y.-M.; Markey, K.: Amount of invested mental effort (AIME) in online searching (2012) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 2726) [ClassicSimilarity], result of:
              0.008285859 = score(doc=2726,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 2726, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2726)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This research investigates how people's perceptions of information retrieval (IR) systems, their perceptions of search tasks, and their perceptions of self-efficacy influence the amount of invested mental effort (AIME) they put into using two different IR systems: a Web search engine and a library system. It also explores the impact of mental effort on an end user's search experience. To assess AIME in online searching, two experiments were conducted: Experiment 1 relied on self-reports, and Experiment 2 employed the dual-task technique. In both experiments, data were collected through search transaction logs, a pre-search background questionnaire, a post-search questionnaire, and an interview. The important findings are these: (1) subjects invested greater mental effort searching the library system than searching the Web; (2) subjects put little effort into Web searching because of their high sense of self-efficacy in their searching ability and their perception of the easiness of the Web; (3) subjects did not recognize that putting mental effort into searching was needed to improve the search results; and (4) data collected from multiple sources proved effective for assessing mental effort in online searching.
    Type
    a