• Pohl@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    ·
    1 year ago

    If you ever needed a lesson in the difference between power and authority, this is a good one.

    The leaders of this coup read the rules and saw that they could use the board to remove Altman, they had the authority to make the move and “win” the game.

    It seems that they, like many fools, mistook authority for power. The “rules” said they could do it! Alas, they did not have the power to execute the coup. All the rules in the world cannot make the organization follow you.

    Power comes from people who grant it to you. Authority comes from paper. Authority is the guidelines for the use of power; without power, it is pointless.

  • edric@lemm.ee

    I’m honestly not up-to-date with the news on this fiasco. Can someone help reconcile the news about employees saying Altman prioritized profit over safety with this one where employees actually want him back? Are these different groups?

  • Even_Adder@lemmy.dbzer0.com

    You’re not going to develop AI for the benefit of humanity at Microsoft. If they go there, we’ll know "Open"AI’s mission was all a lie.

    • Gork@lemm.ee

      Yeah, Microsoft is definitely not going to be benevolent. But I saw this as a foregone conclusion, since AI is so disruptive that heavy commercialization is inevitable.

      We likely won’t have free access like we do now; it will be enshittified like everything else, and we’ll need to pay yet another subscription to even access it.

      • MeatsOfRage@lemmy.world

        “Hey Bing AI can I get a recipe that includes cinnamon”

        “Sure! Before we begin did you hear about the great Black Friday deals at Sephora”

        “Not interested”

        “No problem. You’re using query 9 of 20 this month. Do you want to proceed?”

        “Yes”

        “Before we begin, Bing Max+ has a one month trial starting at just $1 for your first month*. Want to give that a try?”

        “Not now”

        “No problem. With cinnamon you can make Cinnamon Rolls”

        “What else?”

        “Sure! Before I continue did you hear the McRib is back for a limited time at McDonald’s. (ba, da, ba, ba, ba) I’m lovin’ it.”

      • banneryear1868@lemmy.world

        it will be enshittified like everything else now and we’ll need to pay yet another subscription to even access it.

        Yeah, this is why I’m so skeptical about the way it will presumably change the world. It will change things, but the economic relations that determine its ability to do so will overrule the technological capabilities, since it will be infeasible or not economically viable to deliver on a lot of the hype.

    • sab@kbin.social

      And if they don’t, we’re supposed to keep on believing all of this is somehow benefiting us?

      • Even_Adder@lemmy.dbzer0.com

        The way I understand it, Microsoft gave OpenAI $10 billion, but they didn’t get any votes. They had no say in OpenAI’s affairs.

        • Alto@kbin.social

          On paper, sure. They gave them $10B; they absolutely have some sort of voice here.

  • redcalcium@lemmy.institute

    We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman.

    Let’s have all OpenAI employees move to Microsoft. What could possibly go wrong?

  • PatFusty@lemm.ee

    Wow, this is the biggest show of dick-riding I have probably ever seen. Why do they want this CEO at the helm so badly?

    • Aatube@kbin.social

      …it’s not just about Altman. They fired him without proof and then fired the interim CEO, on top of the reasons given in the document.

      • PatFusty@lemm.ee

        Except now they’ve given the ultimatum: vacate the entire board and reinstate Altman, or they leave and join Altman at Microsoft anyway. In any case, the main point is that they want to be led by this guy.

        • Aatube@kbin.social

          Not necessarily. Microsoft has guaranteed spots and they feel like OpenAI is a sinking ship.

      • PatFusty@lemm.ee

        Yes, I edited it within the first three minutes of posting… I had remembered reading that and, after checking, deleted it when I didn’t see anything on it… I must have gotten Sam Altman and Sam Bankman-Fried mixed up in my brain.

        • 0ops@lemm.ee

          Well, good on you for checking yourself. I’ve been hearing rumors the last few days, but nothing concrete.

          • FaceDeer@kbin.social

            Indeed. I’m so tired of coming to “technology” forums and instead of seeing discussion of technology it’s just a bunch of “person involved in technology is a vegetable molester” and so forth.

            Though I suppose this particular topic is inextricably tied up in personality issues right now, so I shouldn’t complain too hard on this one.

      • V0lD@lemmy.world

        Did he edit his comment? The current version doesn’t accuse Altman of any felony

  • Jakdracula@lemmy.world

    We just taught AI that humans are mercurial, unpredictable, emotional, irrational, and willing to terminate anyone unexpectedly. Gee, I wonder how it will react with its army of robots when it comes to humans.

      • LWD@lemm.ee

        Why do people treat Sam Altman like a villain? He only wanted to use his proprietary mechanical orb to scan the iris of every Kenyan in exchange for proprietary cryptocurrency!

            • SnipingNinja@slrpnk.net

              Aah. In this case the board is the non-profit, so that’s the context, given OpenAI’s origins as open-source AI for the global good (though the open-source part is gone, partly due to his actions). For actual details on what he’s done specifically, others have shared them, but again, most of that is in the context of OpenAI’s for-profit arm being a subsidiary of the non-profit one, which is what makes it weird.

    • SnipingNinja@slrpnk.net

      I assume it’s to resign from the board, which doesn’t mean he’ll leave the company entirely. They had Greg stay on despite relieving him of the duties of president.

  • Eager Eagle@lemmy.world

    Wasn’t Ilya the one who gave Altman the news he was fired? I read it as him siding with the board at first.

    Edit:

    Ilya posted this on Twitter:

    “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”

  • Marxism-Fennekinism@lemmy.ml

    https://time.com/6247678/openai-chatgpt-kenya-workers/

    To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.

    OpenAI’s outsourcing partner in Kenya was Sama, a San Francisco-based firm that employs workers in Kenya, Uganda and India to label data for Silicon Valley clients like Google, Meta and Microsoft. Sama markets itself as an “ethical AI” company and claims to have helped lift more than 50,000 people out of poverty.

    The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour depending on seniority and performance. For this story, TIME reviewed hundreds of pages of internal Sama and OpenAI documents, including workers’ payslips, and interviewed four Sama employees who worked on the project. All the employees spoke on condition of anonymity out of concern for their livelihoods.

    […]

    Documents reviewed by TIME show that OpenAI signed three contracts worth about $200,000 in total with Sama in late 2021 to label textual descriptions of sexual abuse, hate speech, and violence. Around three dozen workers were split into three teams, one focusing on each subject. Three employees told TIME they were expected to read and label between 150 and 250 passages of text per nine-hour shift. Those snippets could range from around 100 words to well over 1,000. All of the four employees interviewed by TIME described being mentally scarred by the work. Although they were entitled to attend sessions with “wellness” counselors, all four said these sessions were unhelpful and rare due to high demands to be more productive at work. Two said they were only given the option to attend group sessions, and one said their requests to see counselors on a one-to-one basis instead were repeatedly denied by Sama management.

    […]

    One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.

    […]

    That month, Sama began pilot work for a separate project for OpenAI: collecting sexual and violent images—some of them illegal under U.S. law—to deliver to OpenAI. The work of labeling images appears to be unrelated to ChatGPT.

    Gonna leave this here.
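
    A quick sanity check on the workload figures in that excerpt (150–250 passages per nine-hour shift, $1.32–$2.00 per hour; all numbers come straight from the TIME article):

    ```python
    # Back-of-the-envelope on the labeling workload TIME describes:
    # 150-250 passages per nine-hour shift, paid $1.32-$2.00 per hour.

    SHIFT_HOURS = 9

    for passages in (150, 250):
        per_hour = passages / SHIFT_HOURS
        minutes_each = 60 / per_hour
        print(f"{passages} passages/shift -> {per_hour:.1f}/hour, "
              f"~{minutes_each:.1f} min per passage")

    for wage in (1.32, 2.00):
        print(f"${wage:.2f}/hr -> ${wage * SHIFT_HOURS:.2f} for a full shift")
    ```

    So each worker had two to four minutes per passage of graphic material, for roughly $12–$18 a day.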

      • JonEFive@midwest.social

        No, you’re right, you should be. We don’t want to normalize this shit, it should continue to shock and offend.

        These are the dark sides of modern technology. The kids working cobalt mines. The workers being paid pennies to categorize data so bad that it is traumatic to even read it. I can’t imagine how the people who have to look at pictures can do it.

        I feel like I could handle some dark text here or there, but if I had to do it for 40-50 hours a week? Hundreds of passages every day. That would warp me pretty quickly.

        • smooth_tea@lemmy.world

          I really find this a bit alarmist and exaggerated. Consider the motive and the alternative. Do you really think companies like that have any other option than to deal with those things?

          • barsoap@lemm.ee

            Very much yes: police authorities have CSAM databases. If what you want to do with it really is above board and sensible, they’ll let you access that stuff.

            I don’t doubt that anything OpenAI could do with that stuff can be above board, but sensible is another question: any model that can detect something can be used to train a model which can generate it. As such, those models are under lock and key just like their training sets, on (social) media platforms which have a use for these things and the resources to run them, under the watchful eye of the authorities. Think faceboogle. OpenAI could, in principle, try to get into the business of selling companies at that scale models they can, and have, trained themselves, but I don’t really see that making sense from a business POV, either.

          • Floshie@lemmy.blahaj.zone

            Consider the impact on human psychology. Not everyone has the guts to read or even look through these, and even those who appear to are still scarred inside.

            Maybe there is no alternative for now, but don’t do that to people on such a low paycheck. Consider the background of the people who may take these tasks, not even to live, but to survive. I would have preferred to wait 10 years rather than inflict these horrifying tasks on those people.

            I’m sure there are lots of people in jail for creating, sharing, or even profiting off this content. Could they do that work? But then again, even though it bothers me less than doing it to people who have no choice, that is still an idea I find ethically very questionable.

          • Marxism-Fennekinism@lemmy.ml

            If absolutely nothing else, and even assuming for the sake of argument that work of this nature is completely justified, they still have to answer for the fact that they severely underpaid foreign workers in click farms to do this and traumatize themselves on their behalf, presumably so no one in the West had to.

            Personally, my opinion is very strongly that if you can’t develop a technology without committing such serious ethical breaches, for example seeking out and accumulating CSAM, then it’s either too early to develop that technology or it’s not worth developing at all. One may counter this with something like “well it’s basically inevitable that unscrupulous people will harm others to develop technology” but I would also argue that while that is true, the inevitability of something doesn’t make the act itself any less unethical.

            As a bit of context: the reason why even accessing and possessing CSAM is illegal almost everywhere in the world is that the generally accepted philosophy around this kind of material is that every time someone views it, for any reason, it victimizes that child all over again. This is also very consistent with the opinions of actual CSAM survivors, so I don’t feel it’s right to question that at all. I obviously cannot speak on their behalf in any way, but my guess would be that the vast majority of CSAM victims do not want photos and videos of the most terrifying and traumatic moments of their lives being used in this way, especially not by a for-profit company so they can develop a product with the goal of making themselves richer.

        • reksas@sopuli.xyz

          This is actually extremely critical work if the results are going to be used by AIs that will be deployed widely. It essentially determines the “moral compass” of the AI.

          Imagine if some big corporation did the labeling, trained some huge AI with that data, and it became widely used. Then years pass, and eventually AI develops to such an extent that it can reliably replace entire upper management. Suddenly, becoming a slave to an “evil” AI overlord moves from beyond-crazy idea to plausible (years and years in the future, not now, obviously).

          • ColdFenix@discuss.tchncs.de

            Extremely critical, but mostly done by underpaid workers in poor countries who have to look at the most horrific stuff imaginable and develop lifelong trauma, because it’s the only job available and otherwise they and their families might starve. Source

            This is one of the main reasons I have little hope that, if OpenAI actually manages to create an AGI, it will operate in an ethical way. How could it, if the people trying to instill morality into it are so lacking in it themselves?

    • Clbull@lemmy.world

      So they paid Kenyan workers $2 an hour to sift through some of the darkest shit on the internet.

      Ugh.

  • conditional_soup@lemm.ee

    I’d like to know why exactly the board fired Altman before I pass judgment one way or the other, especially given the mad rush by the investor class to re-instate him. It makes me especially curious that the employees are sticking up for him. My initial intuition was that MSFT convinced Altman to cross bridges that he shouldn’t have (for $$$$), but I doubt that a little more now that the employees are sticking up for him. Something fucking weird is going on, and I’m dying to know what it is.

    • los_chill@programming.dev

      Altman wanted profit. The board prioritized (rightfully, per their mission) responsible, non-profit stewardship of AI. Employees now side with Altman out of greed and view the board as denying them their mega payday. Microsoft is dangling jobs for employees wanting to jump ship and make as much money as possible. This whole thing seems pretty simple: greed (Altman, Microsoft, employees) vs. the original non-profit mission (the board).

      Edit: spelling

      • CoderKat@lemm.ee

        That’s what I thought it was at first too. But regular employees aren’t usually all that interested in their company being profit driven. Especially AI researchers. Most of those that I know are extremely passionate about ethics in AI.

        But do they know things we don’t know? They certainly might. Or it might just be bandwagoning or the like.

        • los_chill@programming.dev

          But regular employees aren’t usually all that interested in their company being profit driven. Especially AI researchers. Most of those that I know are extremely passionate about ethics in AI.

          I would have thought so too of the employees, but threatening a move to Microsoft kinda says the opposite. That or they are just all-in on Altman as a person.

    • morrowind@lemmy.ml

      I don’t think MSFT convinced him with money, but rather opportunity. He clearly still wants to work with AI, and the second-best place for that after OpenAI is Microsoft.

      • SnipingNinja@slrpnk.net

        Second best would be Google, but for him it’s Microsoft, because he’s probably getting a sweetheart deal that leaves him in control of his destiny (not really, but at least for a short while).

        • morrowind@lemmy.ml

          Microsoft has access to a lot of OpenAI’s code, weights, etc., and he’s already been working with them. It would be much better for him than joining some other company he has no experience with.

          • SnipingNinja@slrpnk.net

            He’s not the guy who writes code; he’s a VC/management guy. You might say he has good ideas, as the ChatGPT interface is attributed to him, but he didn’t build it.

      • Melt@lemm.ee

        The tone of the blog post is so amateurish I feel like I’m reading a reddit post on r/Cryptocurrency

      • Bal@lemm.ee

        I don’t know a lot about the background but this article feels super biased against one side.

      • conditional_soup@lemm.ee

        Thanks for sharing. That is… weird in ways I didn’t anticipate. “Weird cult of pseudointellectuals upending the biggest name in Silicon Valley” wasn’t on my bingo board.

        • FaceDeer@kbin.social

          IMO there are some good reasons to be concerned about AI, but those reasons are along the lines of “it’s going to be massively disruptive to the economy and we need to prepare for that to ensure it’s a net positive”, not “it’s going to take over our minds and turn us into paperclips.”

          • diablexical@lemm.ee

            The author did a poor job of explaining that. He’s referencing the thought experiment of a businessman instructing a super-effective AI to make paperclips. Given a terse enough objective and an effective enough AI, one can imagine a scenario in which the businessman, and in fact the whole world, are turned into paperclips. This is obviously not the businessman’s goal, but it was the instruction he gave the AI. The implication of the thought experiment is that AI needs guardrails, perhaps even ethics, or else it can unintentionally cause a doomsday scenario.
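
            The thought experiment boils down to a greedy optimizer with no stopping condition other than its stated objective. A toy sketch (all numbers and names are made up, purely illustrative):

            ```python
            # Toy "paperclip maximizer": a greedy agent converts every available
            # resource into paperclips unless an explicit limit is imposed.
            # Hypothetical illustration of the thought experiment, not a real system.

            def run_agent(resources, guardrail=None):
                """Greedily turn resources into paperclips, one unit each.

                Returns (paperclips_made, resources_left). Without a guardrail,
                nothing in the objective ever tells the agent to stop.
                """
                paperclips = 0
                while resources > 0:
                    if guardrail is not None and paperclips >= guardrail:
                        break  # the explicit limit is the only thing that stops it
                    resources -= 1
                    paperclips += 1
                return paperclips, resources

            unbounded = run_agent(resources=1_000_000)
            bounded = run_agent(resources=1_000_000, guardrail=10_000)

            print(unbounded)  # (1000000, 0) -- the "world" is fully consumed
            print(bounded)    # (10000, 990000)
            ```

            The point isn’t the loop itself but that the constraint has to come from outside the objective; the maximizer, by construction, never supplies it.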

    • Ullallulloo@civilloquy.com

      The only explanation I can come up with is that the workers and Altman both agreed on monetizing AI as much as possible. They’re worried that if the board doesn’t resign, the company will remain a non-profit, more conservative in selling its products, and they won’t get their share of the money that could be made.

    • Blackmist@feddit.uk

      Yeah, the speed at which MS snapped him up makes me think of Zampella and West from Infinity Ward.

    • scarabic@lemmy.world

      Wanting to know why is reasonable, but it’s sus that we don’t already know. Why haven’t they made that clear? How did they think they could do this without a solid explanation? Why hasn’t one been delivered to put the rumors to rest?

      It stinks of incompetence or petty personal drama. Otherwise we’d know by now the very good reason they had.