The Pentagon has its eye on the leading AI company, which this week softened its ban on military use.

  • kromem@lemmy.world
    10 months ago

    Literally no one is reading the article.

    The terms still prohibit use to cause harm.

    The change is that a general ban on military use has been removed in favor of a generalized ban on harm.

    So for example, the Army could use it to do their accounting, but not to generate a disinformation campaign against a hostile nation.

    If anyone actually really read the article, we could have a productive conversation around whether any military usage is truly harmless, the nuances of the usefulness of a military ban in a world where so much military labor is outsourced to private corporations which could ‘launder’ terms compliance, or the general inability of terms to preemptively prevent harmful use at all.

    Instead, we have people taking the headline only and discussing AI being put in charge of nukes.

Lemmy seems to care a lot more about debating straw-man arguments about how terrible AI is than engaging with reality.

  • funkforager@sh.itjust.works
    10 months ago

Remember when OpenAI was a nonprofit first and foremost, and we were supposed to trust they would make AI for good and not evil? Feels like it was only Thanksgiving…

    • Moira_Mayhem@lemmy.blahaj.zone
      10 months ago

      It seems to be a trend that any service that claims not to be evil is just waiting for the right moment to drop that pretense.

    • Dave@lemmy.nz
      10 months ago

I mean, there was all that drama where the board, formed to prevent exactly this from happening, kicked out the CEO for trying to do this stuff; then the board itself got booted, replaced with a new board, and that CEO guy was brought back. So this was pretty much going to happen.

      • hoshikarakitaridia@sh.itjust.works
        10 months ago

And some people pointed this out even back then. There were signs that the employees were very loyal to Altman, but Altman didn't address the board's safety concerns. So stuff like this was just a matter of time.

      • Sasha@lemmy.blahaj.zone
        10 months ago

Effective altruism is just camouflage for capitalism, and it's really bad camouflage at that.

    • wooki@lemmynsfw.com
      10 months ago

I wouldn't be too worried; they've just made an overglorified word predictor and a blender for people's art.

  • Alto@kbin.social
    10 months ago

    So while this is obviously bad, did any of you actually think for a moment that this was stopping anything? If the military wants to use ChatGPT, they’re going to find a way whether or not OpenAI likes it. In their minds they may as well get paid for it.

  • ArmokGoB@lemmy.dbzer0.com
    10 months ago

    Finally, I can have it generate a picture of a flamethrower without it lecturing me like I’m a child making finger guns at school.

  • Fedizen@lemmy.world
    10 months ago

    I can’t wait until we find out AI trained on military secrets is leaking military secrets.

    • Jknaraa@lemmy.ml
      10 months ago

I can’t wait until people find out that you don’t even need to train it on secrets for it to “leak” secrets.