Search (2 results, page 1 of 1)

  • author_ss:"Arazy, O."
  • year_i:[2010 TO 2020}
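The two active filters above use Solr field-query syntax: author_ss is a multi-valued string field and year_i an integer field, with [2010 TO 2020} denoting a range that includes 2010 but excludes 2020. As a rough sketch of how such a search could be reproduced (the host, port, and core name are placeholders, and the query term "assessment" is inferred from the explain trees below, not shown on this page):

```python
import requests  # assumes the `requests` package; any HTTP client works

# Minimal sketch of a Solr request matching this results page. The endpoint
# URL and core name ("lit") are assumptions; the filter queries are copied
# verbatim from the active filters above.
params = {
    "q": "assessment",                # inferred from weight(_text_:assessment ...) below
    "fq": [
        'author_ss:"Arazy, O."',      # multi-valued string field: exact author match
        "year_i:[2010 TO 2020}",      # int range: 2010 inclusive, 2020 exclusive
    ],
    "fl": "author_ss,title,year_i,score",
    "debugQuery": "true",             # requests per-document explain output like that shown below
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/lit/select", params=params)
print(resp.json()["response"]["numFound"])  # expect 2, as in the header above
```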
  1. Arazy, O.; Kopak, R.: On the measurability of information quality (2011) 0.03
    0.025637524 = product of:
      0.05127505 = sum of:
        0.05127505 = product of:
          0.1025501 = sum of:
            0.1025501 = weight(_text_:assessment in 4135) [ClassicSimilarity], result of:
              0.1025501 = score(doc=4135,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.36599535 = fieldWeight in 4135, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4135)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
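The tree above is standard Lucene ClassicSimilarity explain output: a term's weight is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the two coord(1/2) factors each halve the result. A minimal Python sketch (the function wrapper is mine; every constant is copied from the explain trees) reproduces both documents' scores:

```python
import math

def classic_score(field_norm: float, freq: float = 2.0) -> float:
    """Score of the single matching term "assessment", per the explain trees."""
    idf = 1.0 + math.log(44218 / (480 + 1))            # idf(docFreq=480, maxDocs=44218) = 5.52102
    query_weight = idf * 0.050750602                   # idf * queryNorm = 0.2801951
    field_weight = math.sqrt(freq) * idf * field_norm  # tf * idf * fieldNorm
    raw = query_weight * field_weight                  # weight(_text_:assessment ...)
    return raw * 0.5 * 0.5                             # two coord(1/2) factors

print(classic_score(0.046875))   # ~0.025637524 (doc 4135, result 1)
print(classic_score(0.0390625))  # ~0.021364605 (doc 1006, result 2)
```

The only input that differs between the two results is fieldNorm, which in ClassicSimilarity encodes field-length normalization; the smaller value for doc 1006 (0.0390625 vs. 0.046875) is what lowers its score from roughly 0.026 to 0.021.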
    
    Abstract
    The notion of information quality (IQ) has been investigated extensively in recent years. Much of this research has been aimed at conceptualizing IQ and its underlying dimensions (e.g., accuracy, completeness) and at developing instruments for measuring these quality dimensions. However, less attention has been given to the measurability of IQ. The objective of this study is to explore the extent to which a set of IQ dimensions (accuracy, completeness, objectivity, and representation) lends itself to reliable measurement. By reliable measurement, we refer to the degree to which independent assessors are able to agree when rating objects on these various dimensions. Our study reveals that multiple assessors tend to agree more on certain dimensions (e.g., accuracy) while finding it more difficult to agree on others (e.g., completeness). We argue that differences in measurability stem from properties inherent to the quality dimension (i.e., the availability of heuristics that make the assessment more tangible) as well as from assessors' reliance on these cues. Implications for theory and practice are discussed.
  2. Arazy, O.; Yeo, L.; Nov, O.: Stay on the Wikipedia task : when task-related disagreements slip into personal and procedural conflicts (2013) 0.02
    0.021364605 = product of:
      0.04272921 = sum of:
        0.04272921 = product of:
          0.08545842 = sum of:
            0.08545842 = weight(_text_:assessment in 1006) [ClassicSimilarity], result of:
              0.08545842 = score(doc=1006,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.30499613 = fieldWeight in 1006, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1006)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In Wikipedia, volunteers collaboratively author encyclopedic entries, and therefore managing conflict is a key factor in group success. Behavioral research describes three conflict types: task-related, affective, and process. Affective and process conflicts have been consistently found to impede group performance; however, the effect of task conflict is inconsistent. We propose that these inconclusive results are due to underspecification of the task conflict construct, and focus on the transition phase where task-related disagreements escalate into affective and process conflict. We define these transitional phases as distinct constructs (task-affective and task-process conflict) and develop a theoretical model that explains how the various task-related conflict constructs, together with the composition of the wiki editor group, determine the quality of the collaboratively authored wiki article. Our empirical study of 96 Wikipedia articles involved multiple data-collection methods, including analysis of Wikipedia system logs, manual content analysis of articles' discussion pages, and a comprehensive assessment of articles' quality using the Delphi method. Our results show that when group members' disagreements, originally task-related, escalate into personal attacks or hinge on procedure, these disagreements impede group performance. Implications for research and practice are discussed.
