• AutoTL;DR@lemmings.world
    1 year ago

    This is the best summary I could come up with:


    The 51 moderators in Nairobi working on Sama’s OpenAI account were tasked with reviewing texts, and some images, many depicting graphic scenes of violence, self-harm, murder, rape, necrophilia, child abuse, bestiality and incest, the petitioners say.

    “We are in agreement with those who call for fair and just employment, as it aligns with our mission – that providing meaningful, dignified, living wage work is the best way to permanently lift people out of poverty – and believe that we would already be compliant with any legislation or requirements that may be enacted in this space,” the Sama spokesperson said.

    To teach Bard, Bing or ChatGPT to recognize prompts that would generate harmful materials, algorithms must be fed examples of hate speech, violence and sexual abuse.

    The announcement coincided with an investigation by Time, detailing how nearly 200 young Africans in Sama’s Nairobi datacenter had been confronted with videos of murders, rapes, suicides and child sexual abuse as part of their work, earning as little as $1.50 an hour while doing so.

    She wants to see an investigation into the pay, mental health support and working conditions of all content moderation and data labeling offices in Kenya, plus greater protections for what she considers to be an “essential workforce”.


    I’m a bot and I’m open source!

  • comfortablyglum@sh.itjust.works
    1 year ago

    “some images, many depicting graphic scenes of violence, self-harm, murder, rape, necrophilia, child abuse, bestiality and incest.”

    Yes, people should absolutely be paid more, but no amount of money can shield a person from the trauma of seeing such images.

    I think there should also be some hardcore investigation into how these companies got such material in the first place.