For those not aware, Google is rolling out their new AI-based “Generative AI” search, which seems to mesh Bard with the standard experience.
I asked it today why Google no longer follows their “don’t be evil” motto… The results are pretty hilarious.
I asked Bard the same “don’t be evil” prompt and this is the reply (they have an option to shorten the response but I didn’t use it):
-Google’s “Don’t be evil” motto was removed from the company’s code of conduct in April 2018. There are a few reasons why this may have happened.
-The meaning of “evil” became too ambiguous. The phrase “Don’t be evil” is open to interpretation, and what one person might consider evil, another person might not. This ambiguity could have led to confusion and disagreement within the company about what the motto meant and how it should be applied.
-Google’s business grew and its operations became more complex. As Google grew, its business became more complex and its operations had a wider impact on the world. This made it more difficult for the company to adhere to a simple motto like “Don’t be evil.”
-Google faced criticism for its business practices. In the years leading up to 2018, Google faced criticism for a number of its business practices, including its handling of user data, its involvement in the Chinese government’s censorship program, and its work on developing autonomous weapons. This criticism may have led the company to believe that the “Don’t be evil” motto was no longer an accurate reflection of its values.
It is important to note that Google has not completely abandoned the “Don’t be evil” motto. The phrase is still included in the company’s code of conduct, and it is still used by some Google employees and executives to describe the company’s values. However, the removal of the motto from the preface of the code of conduct suggests that Google no longer sees it as the defining principle of its business.
In its place, Google has adopted a new motto: “Do the right thing.” This motto is more specific and less ambiguous than “Don’t be evil,” and it is intended to provide guidance to Google employees as they make decisions about the company’s business practices.
Funnily enough, I find “Do the right thing” to be even more ambiguous!
Do the right thing by whom? Their shareholders? Well now we’re just throwing the problem over the proverbial fence, now aren’t we?
“We’re not evil! We’re doing the right things according to our shareholders!.. who just so happen to have evil intentions”
As long as we are interpreting “evil” as an adjective describing one’s actions, these seem pretty similar.
The convenient ambiguity of the word “right” to refer to both morality and cold pragmatism is definitely part of the decision. Something can be “right” as in strategically optimal but evil.
Interesting, so the new feature is additional ambiguity!
It’s a wrong answer, actually, based on user comments it scraped, since it was trending news when they changed the code of conduct. The news was actually fake, and about 99 percent of internet users ate it up. In reality they didn’t remove “don’t be evil”; they moved it from the top of the code of conduct to the end.
It wasn’t moved; the opening and closing sentences both had “don’t be evil.” They removed the preface paragraph at the top but left the closing sentence.
Preface: “Don’t be evil. Googlers generally apply those words to how we serve our users. But “Don’t be evil” is much more than that. Yes, it’s about providing our users unbiased access to information, focusing on their needs and giving them the best products and services that we can. But it’s also about doing the right thing more generally – following the law, acting honorably, and treating co-workers with courtesy and respect. The Google Code of Conduct is one of the ways we put “Don’t be evil” into practice.”
Closing sentence:
And remember… don’t be evil, and if you see something that you think isn’t right – speak up!
The closing sentence that remains doesn’t carry much weight without the preface.
I’m wondering if there’s going to be a real problem when content gets dominated by AI and AI starts scraping their own hallucinations.
There’s really no substitute for expert knowledge for content. The LLMs are simply going to speed up the negative feedback loop, exactly as you suspect. GIGO at its finest.
In a way it might turn out to be a good thing for the internet, because it will force us all to reevaluate the way we produce and consume online content.
iirc AI scraping AI has already started to become a problem as it tends to compound pre-existing flaws.
I saved this because there’s no way it will continue to be a result once Google is aware of it.
Clearly, the corporate malfeasance layer of its neural network needs improvement.
The world of politics and business is a cesspit. We are surprised when they give us honest and candid answers.
Scientists and engineers don’t have this option. None of their shit works on lies. Physics just is, whether you like it or not.
I sometimes switch between Bard and ChatGPT.
Bard is good for summaries, I’ve found; overall, the web-based results give it an edge in some regards.
I pretty much only use Bard. Bing Chat is just too fucking slow. I have used Bard for my SQL stuff and other small coding stuff, and it works well & really fast. Bing Chat was rushed, and I doubt Microsoft is ever going to fix the shitty code that their employees wrote to meet the deadline.
Honestly, I’m excited to see them weave this into Google Assistant. That assistant has been as dumb as a rock for years, and I swear it actually got worse over time since the original launch.
With this tied in, we’re going to actually be able to ask questions and get good answers, albeit long-winded ones at the moment. I have no idea when or if they plan on doing this, though… I just figure it can’t be that hard to connect the dots.
How do you use it? I’d like to try it out as well.
“Mom said it’s my turn on the world altering maybe not evil artificial intelligence”
This looks like it’s actually from their normal search with the Labs feature turned on for AI. Bard is separate but uses the same tech.
Despite what the other guy said, this is not Bard. This is available in Google’s new search labs (just search it up) if you’re using chrome and in the U.S. I got it working in Safari in Australia using a VPN and changing the user agent when on the correct site though.
I was able to immediately check it out on my Android phone by simply going to the Google App, and joining the beta program.
My company’s SEO expert made me aware of this program a week or two back, and he has it running on his browser, but I believe that requires some sort of a waiting list (or it did, last I checked).
Edit: Oh, actually I can use it in a browser now as well! I just had to use Chrome to make that happen, and I didn’t notice that because I always use Firefox.
Every villain is the hero of their own story.
That’s some dystopian humor! I like it!
#1 What about “don’t be evil” led workers to organize? Or did they just do their jobs at random previously?
#2 honesty of a rarified level
#3 worded hilariously
#4 explain.
#5 is a self-defeating assertion.
WRT #1 It’s sad that pro-evil workers have to form a union just to be recognized in the tech industry these days
Do bean counters have unions?
#4 you could also point out that a dozen out of how many thousands of employees is close enough to zero that it’s not really worth mentioning.
Really depends on who those dozen people are, but my very vague recollection of the incident is that it wasn’t anyone important.
Now that is pretty interesting.
Google AI admits that Google is evil. HMMM.
Let’s not forget that it’s also sentient, supposedly.
Pretty ironic, really: a company slogan that’s exactly the opposite of how they turned out.
Google’s dating profile is “No drama” and “If you can’t handle me at my worst you don’t deserve me at my best.”
It’s a standard propaganda technique. Beware any group that has a phrase about what they claim to be. If you are something, you don’t need to announce it. E.g., Jehovah’s Witnesses call their dogma “the truth.”
But this phrase was intentionally about what they claim to not be.
It’s a good approach when it’s well intentioned, because instead of encouraging an action, it discourages one.
“Be a good person” dogma can be used to justify a lot of things. “Don’t be an asshole” is much more limited in how it can be abused.
I asked Bing AI “is windows better than linux” and it gave the generic, roundabout AI answer explaining that every system has its uses.
Surely you’d need more parameters if you’d like a better answer. Better for whom and at what, for starters.
And that is actually a fairly good point.
We are coming up on the intersection, am I turning LEFT to Burger King or am I turning RIGHT to McDonald’s??
Go straight through to Habit Burger
The Habit near me makes steamed patties… it’s not good.
I prefer them over BK fries tbh. McD’s are so soggy and trashy and that’s just perfect, it fits my expectation. BK fries are ok, I guess.
Software is a tool. You should always use the right tool for the job - that’s the objectively correct answer
It has about the same tone as a typical autistic tech worker with an overdeveloped sense of justice and a loose sense for when it’s impolitic to drop truth bombs
(for context, I am an autistic dev that’s worked for some big corporations in my career)
Google reached a point where “not being evil” was incompatible with its business goals.
You can’t fault it for a lack of honesty. Google is evil because it’s good business.
We can herp derp about capitalism all we want, but this wouldn’t change in a government-run program. An organization is only as ethical as the people that make it up. The military question was an inflection point where the organization was really forced to deal with the question of how to define evil.
Suddenly every person in that organization was forced to answer some questions. Is the existence of a military evil? Is it evil if I don’t directly support those solutions? What if something I build is used to develop it indirectly? Even if it is not, am I now complicit?
Now, I’m a Soldier so I have a massive bias here. I personally cannot see why anyone would intentionally want to contribute to us getting killed or losing a war. Tech products are already used in the process. Toyota is not complicit in terrorism just because their trucks are the trucks of choice for insurgent logistics. That being said, if they started accepting contracts with them, there would be an issue.
A lot of it comes down to the thoughts on the war on terror at the time. The funny thing is that the solutions they built are focused on Eastern Europe right now, in a conflict that most people support, and were not completed in time to do counterinsurgency.
The funny thing about the COIN fight is that information products simply made things more accurate with better intelligence. It meant less terrorism, due to fewer insurgents, and fewer civilian casualties resulting in blowback. If poorer information resulted in higher civilian casualties, are the pacifists complicit in that?
Again, I’m biased so my perspective is one of this issue being a detractor to doing my job better. In the end, defining evil is not black and white, even if you could theoretically come to a specific answer for a specific circumstance with the magical power of all the knowledge in the world. It broke the culture of the company.
Love the new slogan.
“Google. We’re evil now.”
Should hang a black goatee on their logo