Search (13 results, page 1 of 1)

  • author_ss:"Stvilia, B."
  • language_ss:"e"
  1. Stvilia, B.; Mon, L.; Yi, Y.J.: ¬A model for online consumer health information quality (2009) 0.01
    0.009418173 = product of:
      0.06592721 = sum of:
        0.046783425 = weight(_text_:web in 3092) [ClassicSimilarity], result of:
          0.046783425 = score(doc=3092,freq=10.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.48375595 = fieldWeight in 3092, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3092)
        0.019143783 = weight(_text_:information in 3092) [ClassicSimilarity], result of:
          0.019143783 = score(doc=3092,freq=20.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.36800325 = fieldWeight in 3092, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3092)
      0.14285715 = coord(2/14)
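    The explain tree above is Lucene ClassicSimilarity (TF-IDF) scoring: for each matching term, queryWeight = idf * queryNorm and fieldWeight = sqrt(termFreq) * idf * fieldNorm; the clause scores are summed and then scaled by the coordination factor coord(matching clauses / total query clauses). The following minimal Python sketch reproduces the numbers for this record; the helper name is illustrative, not part of the catalog software.

      import math

      def classic_clause_score(freq, idf, field_norm, query_norm):
          """One term clause under Lucene's ClassicSimilarity: queryWeight * fieldWeight."""
          query_weight = idf * query_norm                      # queryWeight = idf * queryNorm
          field_weight = math.sqrt(freq) * idf * field_norm    # fieldWeight = tf * idf * fieldNorm
          return query_weight * field_weight

      # Record 1 (doc 3092): two of the 14 query clauses matched, hence coord(2/14).
      QUERY_NORM = 0.029633347
      web  = classic_clause_score(10.0, 3.2635105, 0.046875, QUERY_NORM)  # ~0.046783425
      info = classic_clause_score(20.0, 1.7554779, 0.046875, QUERY_NORM)  # ~0.019143783
      print((web + info) * 2 / 14)                                        # ~0.009418173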
    
    Abstract
    This article describes a model for online consumer health information quality, consisting of five quality criteria constructs. These constructs are grounded in empirical data from the perspectives of the three main sources in the communication process: health information providers, consumers, and intermediaries, such as Web directory creators and librarians, who assist consumers in finding healthcare information. The article also defines five constructs of Web page structural markers that could be used in information quality evaluation and maps these markers to the quality criteria. Findings from correlation analysis and multinomial logistic tests indicate that use of the structural markers depended significantly on the type of Web page and type of information provider. The findings suggest the need to define genre-specific templates for quality evaluation and the need to develop models for an automatic genre-based classification of health information Web pages. In addition, the study showed that consumers may lack the motivation or literacy skills to evaluate the information quality of health Web pages, which suggests the need to develop accessible automatic information quality evaluation tools and ontologies.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.9, S.1781-1791
  2. Choi, W.; Stvilia, B.: Web credibility assessment : conceptualization, operationalization, variability, and models (2015) 0.01
    0.008991993 = product of:
      0.06294395 = sum of:
        0.048818428 = weight(_text_:web in 2469) [ClassicSimilarity], result of:
          0.048818428 = score(doc=2469,freq=8.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.50479853 = fieldWeight in 2469, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2469)
        0.014125523 = weight(_text_:information in 2469) [ClassicSimilarity], result of:
          0.014125523 = score(doc=2469,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.27153665 = fieldWeight in 2469, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2469)
      0.14285715 = coord(2/14)
    
    Abstract
    This article reviews theoretical and empirical studies on information credibility, asking in particular how scholars have conceptualized credibility, which is known as a multifaceted concept with underlying dimensions; how credibility has been operationalized and measured in empirical studies, especially in the web context; which user characteristics contribute to the variability of web credibility assessment; and how the process of web credibility assessment has been theorized. An agenda for future research on information credibility is also discussed.
    Series
    Advances in information science
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.12, S.2399-2414
  3. Jörgensen, C.; Stvilia, B.; Wu, S.: Assessing the relationships among tag syntax, semantics, and perceived usefulness (2014) 0.00
    0.003790876 = product of:
      0.02653613 = sum of:
        0.00856136 = weight(_text_:information in 1244) [ClassicSimilarity], result of:
          0.00856136 = score(doc=1244,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 1244, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1244)
        0.01797477 = weight(_text_:retrieval in 1244) [ClassicSimilarity], result of:
          0.01797477 = score(doc=1244,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.20052543 = fieldWeight in 1244, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1244)
      0.14285715 = coord(2/14)
    
    Abstract
    With the recent interest in socially created metadata as a potentially complementary resource for image description in relation to established tools such as thesauri and other forms of controlled vocabulary, questions remain about the quality and reuse value of these metadata. This study describes and examines a set of tags using quantitative and qualitative methods and assesses relationships among categories of image tags, tag assignment order, and users' perceptions of usefulness of index terms and user-contributed tags. The study found that tags provide much descriptive information about an image but that users also value and trust controlled vocabulary terms. The study found no correlation between tag length and assignment order, nor between tag length and perceived usefulness. The findings of this study can contribute to the design of controlled vocabularies, indexing processes, and retrieval systems for images. In particular, the findings can advance the understanding of image tagging practices, tag facet/category distributions, relative usefulness and importance of these categories to the user, and potential mechanisms for identifying useful terms.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.4, S.836-849
  4. Lee, D.J.; Stvilia, B.; Ha, S.; Hahn, D.: ¬The structure and priorities of researchers' scholarly profile maintenance activities : a case of institutional research information management system (2023) 0.00
    0.0025674426 = product of:
      0.017972097 = sum of:
        0.011280581 = weight(_text_:information in 884) [ClassicSimilarity], result of:
          0.011280581 = score(doc=884,freq=10.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.21684799 = fieldWeight in 884, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=884)
        0.0066915164 = product of:
          0.020074548 = sum of:
            0.020074548 = weight(_text_:22 in 884) [ClassicSimilarity], result of:
              0.020074548 = score(doc=884,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.19345059 = fieldWeight in 884, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=884)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
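    Record 4 shows one extra level of nesting: the clause for "22" sits inside a sub-query whose sum is first scaled by coord(1/3) and only then added to the outer sum, which is scaled by coord(2/14) as before. Continuing the sketch given under record 1 (same assumed helper):

      # Record 4 (doc 884): nested sub-query scaled by coord(1/3), outer sum by coord(2/14).
      info_884 = classic_clause_score(10.0, 1.7554779, 0.0390625, QUERY_NORM)           # ~0.011280581
      sub_22   = classic_clause_score(2.0, 3.5018296, 0.0390625, QUERY_NORM) * (1 / 3)  # ~0.0066915164
      print((info_884 + sub_22) * 2 / 14)                                               # ~0.0025674426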
    
    Abstract
    Research information management systems (RIMS) have become critical components of information technology infrastructure on university campuses. They are used not just for sharing and promoting faculty research, but also for conducting faculty evaluation and development, facilitating research collaborations, and identifying mentors for student projects and expert consultants for local businesses. This study is one of the first empirical investigations of the structure of researchers' scholarly profile maintenance activities in a nonmandatory institutional RIMS. By analyzing the RIMS's log data, we identified 11 tasks researchers performed when updating their profiles. These tasks were further grouped into three activities: (a) adding publications, (b) enhancing researcher identity, and (c) improving research discoverability. In addition, we found that junior researchers and female researchers were more engaged in maintaining their RIMS profiles than senior researchers and male researchers. The results provide insights for designing profile maintenance action templates for institutional RIMS that are tailored to researchers' characteristics and help enhance researchers' engagement in the curation of their research information. This also suggests that female and junior researchers can serve as early adopters of institutional RIMS.
    Date
    22. 1.2023 18:43:02
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.2, S.186-204
  5. Stvilia, B.; Hinnant, C.C.; Schindler, K.; Worrall, A.; Burnett, G.; Burnett, K.; Kazmer, M.M.; Marty, P.F.: Composition of scientific teams and publication productivity at a national science lab (2011) 0.00
    0.001676621 = product of:
      0.011736346 = sum of:
        0.0050448296 = weight(_text_:information in 4191) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=4191,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 4191, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4191)
        0.0066915164 = product of:
          0.020074548 = sum of:
            0.020074548 = weight(_text_:22 in 4191) [ClassicSimilarity], result of:
              0.020074548 = score(doc=4191,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.19345059 = fieldWeight in 4191, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4191)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Date
    22. 1.2011 13:19:42
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.2, S.270-283
  6. Stvilia, B.; Gasser, L.; Twidale, M.B.; Smith, L.C.: ¬A framework for information quality assessment (2007) 0.00
    8.64828E-4 = product of:
      0.012107591 = sum of:
        0.012107591 = weight(_text_:information in 610) [ClassicSimilarity], result of:
          0.012107591 = score(doc=610,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 610, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=610)
      0.071428575 = coord(1/14)
    
    Abstract
    One cannot manage information quality (IQ) without first being able to measure it meaningfully and establishing a causal connection between the source of IQ change, the IQ problem types, the types of activities affected, and their implications. In this article we propose a general IQ assessment framework. In contrast to context-specific IQ assessment models, which usually focus on a few variables determined by local needs, our framework consists of comprehensive typologies of IQ problems, related activities, and a taxonomy of IQ dimensions organized in a systematic way based on sound theories and practices. The framework can be used as a knowledge resource and as a guide for developing IQ measurement models for many different settings. The framework was validated and refined by developing specific IQ measurement models for two large-scale collections of two large classes of information objects: Simple Dublin Core records and online encyclopedia articles.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, S.1720-1733
  7. Stvilia, B.; Wu, S.; Lee, D.J.: Researchers' uses of and disincentives for sharing their research identity information in research information management systems (2018) 0.00
    7.2068995E-4 = product of:
      0.010089659 = sum of:
        0.010089659 = weight(_text_:information in 4373) [ClassicSimilarity], result of:
          0.010089659 = score(doc=4373,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.19395474 = fieldWeight in 4373, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4373)
      0.071428575 = coord(1/14)
    
    Abstract
    This study examined how researchers used research information management systems (RIMSs) and the relationships among researchers' seniority, discipline, and types and extent of RIMS use. Most researchers used RIMSs to discover research content. Fewer used RIMSs for sharing and promoting their research. Early career researchers were more frequent users of RIMSs than were associate and full professors. Likewise, assistant professors and postdocs exhibited a higher probability of using RIMSs to promote their research than did students and full professors. Humanities researchers were the least frequent users of RIMSs. Moreover, humanities scholars used RIMSs to evaluate research less than did scholars in other disciplines. The tasks of discovering papers, monitoring the literature, identifying potential collaborators, and promoting research were predictors of higher RIMS use. Researchers who engaged in promoting their research, evaluating research, or monitoring the literature showed a greater propensity to have a public RIMS profile. Furthermore, researchers mostly agreed that not being required, having no effect on their status, not being useful, or not being a norm were reasons for not having a public RIMS profile. Humanities scholars were also more likely than social scientists to agree that having a RIMS profile was not a norm in their fields.
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.8, S.1035-1045
  8. Stvilia, B.; Lee, D.J.; Han, N.-e.: "Striking out on your own" : a study of research information management problems on university campuses (2021) 0.00
    7.2068995E-4 = product of:
      0.010089659 = sum of:
        0.010089659 = weight(_text_:information in 309) [ClassicSimilarity], result of:
          0.010089659 = score(doc=309,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.19395474 = fieldWeight in 309, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=309)
      0.071428575 = coord(1/14)
    
    Abstract
    Here, we report on a qualitative study that examined research information management (RIM) ecosystems on research university campuses from the perspectives of research information (RI) managers and librarians. In the study, we identified 21 RIM services offered to researchers, ranging from discovering, storing, and sharing authored content to identifying expertise, recruiting faculty, and ensuring the diversity of committee assignments. In addition, we identified 15 types of RIM service provision and adoption problems, analyzed their activity structures, and connected them to strategies for their resolution. Finally, we describe the skills that study participants reported needing in their work. These findings can inform the development of best practice guides for RIM on university campuses. The study also advances the state of the art of RIM research by applying the typology of contradictions from activity theory to categorize the problems of RIM service provision and connect their resolution to theories and findings of prior studies in the literature. In this way, the research expands the theoretical base used to study RIM in general and RIM at research universities in particular.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.8, S.963-978
  9. Stvilia, B.; Twidale, M.B.; Smith, L.C.; Gasser, L.: Information quality work organization in wikipedia (2008) 0.00
    6.241359E-4 = product of:
      0.008737902 = sum of:
        0.008737902 = weight(_text_:information in 1859) [ClassicSimilarity], result of:
          0.008737902 = score(doc=1859,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16796975 = fieldWeight in 1859, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1859)
      0.071428575 = coord(1/14)
    
    Abstract
    The classic problem within the information quality (IQ) research and practice community has been the problem of defining IQ. It has been found repeatedly that IQ is context sensitive and cannot be described, measured, and assured with a single model. There is a need for empirical case studies of IQ work in different systems to develop a systematic knowledge that can then inform and guide the construction of context-specific IQ models. This article analyzes the organization of IQ assurance work in a large-scale, open, collaborative encyclopedia: Wikipedia. What is special about Wikipedia as a resource is that the quality discussions and processes are strongly connected to the data itself and are accessible to the general public. This openness makes it particularly easy for researchers to study a particular kind of collaborative work that is highly distributed and that has a particularly substantial focus, not just on error detection but also on error correction. We believe that the study of those evolving debates and processes and of the IQ assurance model as a whole has useful implications for the improvement of quality in other more conventional databases.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.6, S.983-1001
  10. Stvilia, B.; Gasser, L.: Value-based metadata quality assessment (2008) 0.00
    5.7655195E-4 = product of:
      0.008071727 = sum of:
        0.008071727 = weight(_text_:information in 252) [ClassicSimilarity], result of:
          0.008071727 = score(doc=252,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1551638 = fieldWeight in 252, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=252)
      0.071428575 = coord(1/14)
    
    Source
    Library and information science research. 30(2008) no.1, S.67-74
  11. Huang, H.; Stvilia, B.; Jörgensen, C.; Bass, H.W.: Prioritization of data quality dimensions and skills requirements in genome annotation work (2012) 0.00
    5.0960475E-4 = product of:
      0.0071344664 = sum of:
        0.0071344664 = weight(_text_:information in 4971) [ClassicSimilarity], result of:
          0.0071344664 = score(doc=4971,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13714671 = fieldWeight in 4971, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4971)
      0.071428575 = coord(1/14)
    
    Abstract
    The rapid accumulation of genome annotations, as well as their widespread reuse in clinical and scientific practice, poses new challenges to management of the quality of scientific data. This study contributes toward a better understanding of scientists' perceptions of and priorities for data quality and data quality assurance skills needed in genome annotation. This study was guided by a previously developed general framework for assessment of data quality and by a taxonomy of data-quality (DQ) skills, and was intended to define context-sensitive models of criteria for data quality and skills for genome annotation. Analysis of the results revealed that genomics scientists recognize specific sets of criteria for quality in the genome-annotation context. Seventeen data quality dimensions were reduced to 5-factor constructs, and 17 relevant skills were grouped into 4-factor constructs. The constructs defined by this study advance the understanding of data quality relationships and are an important contribution to data and information quality research. In addition, the resulting models can serve as valuable resources to genome data curators and administrators for developing data-curation policies and designing DQ-assurance strategies, processes, procedures, and infrastructure. The study's findings may also inform educators in developing data quality assurance curricula and training courses.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.1, S.195-207
  12. Stvilia, B.; Hinnant, C.C.; Wu, S.; Worrall, A.; Lee, D.J.; Burnett, K.; Burnett, G.; Kazmer, M.M.; Marty, P.F.: Research project tasks, data, and perceptions of data quality in a condensed matter physics community (2015) 0.00
    4.32414E-4 = product of:
      0.0060537956 = sum of:
        0.0060537956 = weight(_text_:information in 1631) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=1631,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 1631, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1631)
      0.071428575 = coord(1/14)
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.2, S.246-263
  13. Stvilia, B.; Jörgensen, C.: Member activities and quality of tags in a collection of historical photographs in Flickr (2010) 0.00
    3.6034497E-4 = product of:
      0.0050448296 = sum of:
        0.0050448296 = weight(_text_:information in 4117) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=4117,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 4117, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4117)
      0.071428575 = coord(1/14)
    
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.12, S.2477-2489