Search (1 result, page 1 of 1)

  • author_ss:"Berg, A."
  • year_i:[2020 TO 2030}
  1. Berg, A.; Nelimarkka, M.: Do you see what I see? : measuring the semantic differences in image-recognition services' outputs (2023)
    
    Abstract
    As scholars increasingly undertake large-scale analysis of visual materials, advanced computational tools show promise for informing that process. One technique in the toolbox is image recognition, made readily accessible via Google Vision AI, Microsoft Azure Computer Vision, and Amazon's Rekognition service. However, concerns about issues such as bias and low reliability have led to warnings against research employing it. A systematic study of cross-service label agreement concretized these issues: using eight datasets, spanning professionally produced and user-generated images, the work showed that image-recognition services disagree on the most suitable labels for images. Beyond supporting caveats expressed in prior literature, the report articulates two mitigation strategies, both involving the use of multiple image-recognition services. Highly explorative research could include all the labels, accepting noisier but less restrictive analysis output. Alternatively, scholars may employ word-embedding-based approaches to identify concepts that are similar enough for their purposes, then restrict analysis to the labels that pass that filter.
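The second mitigation strategy described in the abstract could be sketched roughly as follows. This is a minimal illustration, not the paper's method: the toy three-dimensional embeddings and the similarity threshold are invented for demonstration, whereas in practice the vectors would come from a pretrained word-embedding model and the labels from the image-recognition services themselves.

```python
import math

# Hypothetical toy embeddings for illustration only; real use would load
# vectors from a pretrained model (e.g. word2vec or GloVe).
EMBEDDINGS = {
    "dog":     [0.90, 0.10, 0.00],
    "canine":  [0.85, 0.15, 0.05],
    "car":     [0.00, 0.20, 0.95],
    "vehicle": [0.05, 0.25, 0.90],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def agreeing_labels(labels_a, labels_b, threshold=0.95):
    """Keep each label from service A for which some label from
    service B is similar enough in embedding space."""
    kept = []
    for a in labels_a:
        for b in labels_b:
            if a in EMBEDDINGS and b in EMBEDDINGS:
                if cosine(EMBEDDINGS[a], EMBEDDINGS[b]) >= threshold:
                    kept.append(a)
                    break
    return kept
```

With the toy vectors above, `agreeing_labels(["dog", "car"], ["canine", "vehicle"])` keeps both labels, since "dog"/"canine" and "car"/"vehicle" sit close together in the embedding space, while a pair like "dog"/"vehicle" falls below the threshold and would be filtered out.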