ppyadv48 Feb 18

The AI reproduction of biased imagery in global health visuals

The Health Policy paper by Esmita Charani and colleagues1 showed how stereotypical global health tropes (such as the so-called suffering subject and the white saviour) can be perpetuated through the images chosen to illustrate publications on global health. We propose that generative artificial intelligence (AI), which uses real images as a basis for learning, might further reveal how deeply embedded these tropes and prejudices are within global health imagery. This, in turn, can perpetuate oversimplified social categorisations and power imbalances.
Using the Midjourney Bot Version 5.1 (released in May 2023), we attempted to invert these tropes and stereotypes by entering various image-generating prompts to create visuals of Black African doctors or traditional healers providing medicine, vaccines, or care to sick and suffering White children. However, despite the supposedly enormous generative power of AI, it proved incapable of avoiding the perpetuation of existing inequality and prejudice.
Although it could readily generate an image of a group of suffering White children or an image of Black African doctors (figures 1 and 2), when we tried to merge the two prompts and asked the AI to render Black African doctors providing care for suffering White children, the recipients of care in the more than 300 images generated were, shockingly, always rendered Black (figure 3).

Occasionally, the renderings for the Black African doctors prompt included White people, effectively reproducing the saviour trope that we were trying to challenge (figure 4). The same was true of prompts for traditional African healers, which often showed White men in exotic clothing (figure 5), raising the further question of gendered biases in such AI-generated global health images.
Eventually, we were able to invert only one stereotypical global health image, by asking the AI to generate an image of a traditional African healer healing a White child; however, the rendered White child wears clothing that could be read as a caricature of broadly defined African clothing and bodily practices (figure 6).
When asked to produce images of doctors helping children in Africa, the AI generated images of doctors and patients surrounded by exaggerated and culturally offensive African elements, such as wildlife (figure 7). Probing the rendering bias further, we found that the AI couples HIV status with Blackness: for a prompt requesting an HIV patient receiving care, nearly all rendered patients (150 of 152) were Black (figure 8).