• Yawnder@lemmy.zip · 1 year ago

    I’d be curious to see the process by which they decided which AI faces and which real faces to present. It might be in the paper, but I’m too lazy to read it when I’m 95% convinced I already know the answer.

    • paris@lemmy.blahaj.zone · 1 year ago

      We used the 100 AI and 100 human White faces (half male, half female) from Nightingale and Farid. The AI faces were generated using StyleGAN2. The human faces were selected from the Flickr-Faces-HQ Dataset to match each of the AI faces as closely as possible (e.g., same gender, posture, and expression). All stimuli had blurred or mostly plain backgrounds, and AI faces were screened to ensure they had no obvious rendering artifacts (e.g., no extra faces in background). Screening for artifacts mimics how real-world users screen AI faces, either as scientists or for public use, and therefore captures the type and range of stimuli that appear online. Participants were asked to resize their screen so that stimuli had a visual angle of 12° wide × 12° high at ~50 cm viewing distance.
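
      (As a quick aside, that resizing instruction works out to roughly a 10.5 cm square on screen. Below is a minimal sketch, not from the paper, that just applies the standard visual-angle formula to the 12° / ~50 cm numbers quoted above.)

      ```python
      import math

      def stimulus_size_cm(visual_angle_deg: float, viewing_distance_cm: float) -> float:
          """On-screen size (cm) that subtends a given visual angle at a given viewing distance."""
          return 2 * viewing_distance_cm * math.tan(math.radians(visual_angle_deg) / 2)

      print(stimulus_size_cm(12.0, 50.0))  # ~10.5 cm wide (and high) at ~50 cm
      ```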

      • bitsplease@lemmy.ml · 1 year ago

        I don’t know why people (not saying you, more directed at the top commenter) keep acting like cherry-picking AI images in these studies invalidates the results. Cherry-picking is how you use AI image generation tools; that’s why most will (or can) generate several images at once so you can pick the best one. If a malicious actor were trying to fool people, of course they’d use the most “real”-looking ones rather than just the first ones generated.

        Frankly, the studies would be useless if they didn’t cherry-pick, because the results wouldn’t line up with real-world usage.

        • Yawnder@lemmy.zip · 1 year ago

          I understand why you’re cautious about the “accusation” (don’t put too much weight on that word; it’s just the idea I want to convey, not any claim of malicious intent), but in this case I am saying that cherry-picking invalidates the findings as they are stated.

          If the findings were framed as “it’s easier to fool people using white AI-generated faces”, or something similar, I’d be on board with them. The way it sounds right now is “AI-generated faces don’t have all these artifacts 99% of the time” (I’m paraphrasing A LOT, but you get what I mean).

          • bitsplease@lemmy.ml · 1 year ago

            The way it sounds right now is “AI-generated faces don’t have all these artifacts 99% of the time” (I’m paraphrasing A LOT, but you get what I mean).

            The only way it sounds like that is if you don’t read the article at all and draw all your conclusions from just reading the title.

            Don’t get me wrong, I’m sure many do just that, but that’s not the fault of the study. They clearly state their method for selecting (or “cherry-picking”) images.

  • FaceDeer@kbin.social · 1 year ago

    I feel like that “corporate wants you to find the differences between these two photos” meme. Isn’t everyone in those photos, in both the top and bottom rows, white?

    Edit: Ah, I see, OP has given this a highly misleading title. The “whiteness” of the faces is not actually particularly relevant. In another thread someone summarized what the article is actually about:

    For anyone who doesn’t want to read the paper: they basically took 60 white men and 60 white women and showed them a whole bunch of white faces, half of which were generated by AI. It turns out that the AI faces were rated as more human-like than the actual humans, and the authors had some hypotheses as to why. Principally, that AI by its nature generates images close to “average”, while real people tend to have features that are not “average”. The reason the study focused on white people is that most AI models have been trained mostly on white faces, so AI tends to do better with white faces.
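
    As a loose illustration of that “average” idea (my own sketch, not anything from the paper): if you pixel-average a stack of roughly aligned face photos, the individual quirks wash out and you get a smooth, very “typical”-looking face, which is the kind of look generated faces tend to sit near. The folder name below is hypothetical.

    ```python
    # Illustrative only: pixel-wise averaging of aligned face photos produces a
    # smooth "average" face with few distinctive features.
    from pathlib import Path

    import numpy as np
    from PIL import Image

    paths = sorted(Path("aligned_faces").glob("*.png"))  # hypothetical folder of same-size, aligned faces
    stack = np.stack([np.asarray(Image.open(p).convert("RGB"), dtype=np.float64) for p in paths])

    mean_face = stack.mean(axis=0)  # averaging washes out asymmetries, blemishes, etc.
    Image.fromarray(mean_face.astype(np.uint8)).save("mean_face.png")
    ```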

  • tygerprints@kbin.social · 1 year ago

    It’s very odd, because “white” is not the color of any actual human being’s face. Look at the white background against this text. Have you ever actually seen anybody that color? Unless they’re coated in whitewash, you have not. Human skin is a complex blend of many different colors, from pink to orange to brown to beige, among many others. Every person is a composition. Nobody is actually white, black, red, or yellow. We’re all colors, blended together. Some are lucky enough to have dark complexions that shine like the finest of earth’s woods and minerals.

    • LWD@lemm.ee · 1 year ago

      Some are lucky enough to have dark complexions that shine like the finest of earth’s woods and minerals.

      Lol what

    • db2@sopuli.xyz · 1 year ago

      I don’t remember where I first saw it, but it’s become a favorite saying: Isn’t it funny how it only takes a pretty face to make you want to put someone’s genitals in your mouth?

  • Lojcs@lemm.ee · 1 year ago

    It doesn’t help that all the human photos have beauty filters on.

    • FishFace@lemmy.world · 1 year ago

      Not only that, but several of them are a bit weird-looking (sorry to those people…): 37 and 47 have obvious asymmetries, 31 is a bit bug-eyed, and 18 seems to have been taken with a super-telephoto lens, or the person just has a really flat face.

    • gullible@kbin.social · 1 year ago

      This reminds me of an argument I saw here last week about AI and its use as a grammar checker. You can definitely do it, but you’re going to have all the markers of using AI to cheat.

      • wischi@programming.dev · 1 year ago

        Not really. If you write the text first and only apply minor changes to fix the grammar (rather than rewriting entire sentences), no AI detector will flag it, because the sentence structure and patterns won’t match typical AI output.

        • gullible@kbin.social · 1 year ago

          Be sure to remember that, at best, AI treats prompts as interpretable guidelines, and a request for “grammar checking” can involve some additional, unwarranted restructuring. Points to whoever notices both of the AI-isms that I noticed ChatGPT added to my grammar-checked critique of grammar checking.
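
          One way to keep that restructuring in check (a rough sketch only; the prompt wording and the call_llm helper are placeholders, not any particular API) is to ask explicitly for grammar-only fixes and nothing else:

          ```python
          # Sketch of a grammar-only correction request; call_llm stands in for
          # whatever chat-completion client you actually use.
          GRAMMAR_ONLY_PROMPT = (
              "Fix only spelling, punctuation, and grammatical errors in the text below. "
              "Do not rewrite, reorder, or restructure sentences, and do not change word "
              "choice unless a word is misspelled. Return the corrected text only.\n\n{text}"
          )

          def grammar_check(text: str, call_llm) -> str:
              """Ask for minimal, grammar-only edits so the sentence structure stays the author's."""
              return call_llm(GRAMMAR_ONLY_PROMPT.format(text=text))
          ```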

  • kakes@sh.itjust.works · 1 year ago (edited)

    Kinda makes sense, right?

    The AI images are a representation of what an AI thinks a human “should” look like, so when another AI (likely trained on a similar dataset) tries to classify them, the AI images will more closely fit what it expects a human to look like.