Search (1 result, page 1 of 1)

  • author_ss:"Belitz, C."
  • year_i:[2020 TO 2030}
  1. Belitz, C.; Ocumpaugh, J.; Ritter, S.; Baker, R.S.; Fancsali, S.E.; Bosch, N.: Constructing categories : moving beyond protected classes in algorithmic fairness (2023)
    
    Abstract
    Automated, data-driven decision making is increasingly common in a variety of application domains. In educational software, for example, machine learning has been applied to tasks like selecting the next exercise for students to complete. Machine learning methods, however, are not always equally effective for all groups of students. Current approaches to designing fair algorithms tend to focus on statistical measures concerning a small subset of legally protected categories like race or gender. Focusing solely on legally protected categories, however, can limit our understanding of bias and unfairness by ignoring the complexities of identity. We propose an alternative approach to categorization, grounded in sociological techniques of measuring identity. By soliciting survey data and interviews from the population being studied, we can build context-specific categories from the bottom up. The emergent categories can then be combined with extant algorithmic fairness strategies to discover which identity groups are not well-served, and thus where algorithms should be improved or avoided altogether. We focus on educational applications but present arguments that this approach should be adopted more broadly for issues of algorithmic fairness across a variety of applications.
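
    A minimal sketch (not from the paper) of the final step the abstract describes: once bottom-up, survey-derived identity categories exist, a standard group-fairness check can compare a model's per-group accuracy against its overall accuracy and flag groups that fall behind. The category labels, margin, and data below are hypothetical illustrations, not the study's.

    from collections import defaultdict

    def group_accuracy(records, margin=0.05):
        """records: iterable of (emergent_category, y_true, y_pred) tuples."""
        hits = defaultdict(int)
        totals = defaultdict(int)
        for category, y_true, y_pred in records:
            totals[category] += 1
            hits[category] += int(y_true == y_pred)

        # Overall accuracy across all students, then accuracy per emergent category.
        overall = sum(hits.values()) / sum(totals.values())
        per_group = {c: hits[c] / totals[c] for c in totals}
        # Groups whose accuracy trails the overall figure by more than the margin.
        underserved = [c for c, acc in per_group.items() if acc < overall - margin]
        return overall, per_group, underserved

    # Hypothetical survey-derived categories (illustrative only):
    data = [
        ("first-generation student", 1, 1),
        ("first-generation student", 0, 1),
        ("rural / low-bandwidth", 1, 0),
        ("rural / low-bandwidth", 0, 0),
        ("multilingual learner", 1, 1),
        ("multilingual learner", 1, 1),
    ]

    overall, per_group, underserved = group_accuracy(data)
    print(f"overall accuracy: {overall:.2f}")
    for category, acc in per_group.items():
        print(f"  {category}: {acc:.2f}")
    print("potentially under-served:", underserved)

    Any other group metric (false-positive rate, calibration, and so on) could stand in for accuracy here; the point is only that the grouping variable comes from the emergent, context-specific categories rather than from a fixed list of legally protected classes.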