• 4 Posts
  • 100 Comments
Joined 1 year ago
Cake day: July 8th, 2023


  • Once you come to terms with how bad-faith arguments work and why they are used, a bunch of problems in life unravel elegantly but frustratingly. It’s kinda simple: you want to prove yourself right, so you sabotage anything that could prove you wrong, seek out things that seem to prove you right, and intentionally avoid scrutinizing that validation. OP has no honest intentions and just wants to use the tricks he thinks will influence his partner. He has demonstrated elsewhere that he doesn’t care about her, only about his possession of her.



  • OP clearly has a lot of maturing to do and it’s possible he has reached his maximum maturity. The best thing for all parties is that OP continues to mark himself as an outcast to be shunned by other people. He doesn’t deserve his partner and I hope she moves on sooner rather than later.

    I hope you grow and have a happy life OP. I doubt you can though, so keep marking yourself with your stench so we can avoid you.

    /toughLove


  • The way it helps you get through the day is mostly relief from the withdrawal symptoms. Only addicts get that.

    How do you feel smoking helps you through the day?

    Here’s a challenge: quit for 24 full hours and then continue. If the way it helps you through the day seems to be intensifying, then it’s the addiction withdrawal. Don’t do the 24-hour quit quietly either; tell your partner that you are going to prove yourself. Stay honest, please.


  • At some point it all stops mattering. You treat bots like humans and humans like bots. It’s all about logic and good/bad faith.

    I’ve made an embarrassing attempt to identify a bot and learned a fair bit from it.

    There is significant overlap between the smartest bots and the dumbest humans.

    A human can:

    • Get angry that they are being tested
    • Fail an AI-test
    • Intentionally fail an AI-test
    • Pass a test that an AI can also pass, expecting an AI to fail.

    It’s too unethical to test, so I feel the best course of action is to rely on good/bad-faith tests and the logic of the argument.

    Turing tests are very obsolete. The real question to ask: do you really believe that the average person’s sapience is that noteworthy?

    A well-made LLM can exceed a dumb person pretty easily. It can also be more enjoyable to talk with, and more loving and supportive.

    Of course, there are things current LLMs can’t do well that we could design tests around, and longer conversations have a higher chance of exposing a failure of the AI. Secret AIs and future AIs might be harder to catch, of course.

    I believe in the spirit of the dead internet theory. Strap in, meat-peoples, the ride’s gonna get bumpy.