• 2 Posts
  • 96 Comments
Joined 2 years ago
Cake day: January 20th, 2023

    • They attended a ceremony in a part of the cemetery that is off-limits to photography and political events, apparently thanks to some sort of arrangement instigated by House Speaker Mike Johnson.
    • Trump was told he could only attend in a personal capacity and that no hangers-on would be allowed in.
    • When a cemetery official tried to enforce these limitations, a large staffer physically pushed her aside while claiming that photography was allowed, so at least one or two staffers were there in addition to Trump himself.
    • Trump has (of course) posted this crap all over his social media and used it for political purposes, in spite of the conditions under which his visit was supposed to take place (honoring a fallen soldier in a civilian capacity).
    • And now that it is all over and starting to blow up in his face, his response, unsurprisingly, is to lie about it all, despite photographic evidence (of the politicking and the extra staffers) and eyewitness confirmation (by Army personnel) that it happened.




  • These are a bit different from the lists everyone else has posted, I think:

    • Lemmy Keyboard Navigation (keyboard shortcuts like the kbd shortcuts from RES)
    • Google Popup Blocker (stops the annoying “Log in with Google” popups everywhere on the web)
    • OneTab (lets you collapse a whole window of tabs down into a list on the OneTab page, which you can later re-expand into a window again when you re-attack whatever subject all the tabs were about)

    These are the more standard ones that everyone seems to run:

    • uBlock Origin
    • Reddit Enhancement Suite
    • 2FAS Extension
    • Bitwarden



  • I totally agree that both terms seem to imply intent, but IMHO “hallucinating” implies not only more agency than an LLM has, but also less culpability. Like, “Aw, it’s sick and hallucinating, otherwise it would tell us the truth.”

    Whereas calling it a bullshit machine still implies more intentionality than an LLM is capable of, but at least it skews the perception of that intention more in the direction of “it’s making stuff up,” which seems closer to the actual mechanisms behind an LLM to me.

    I also love that the researchers took the time not only to provide the technical definition of bullshit, but also to sub-categorize it, lol.



  • I would absolutely send him an email to the effect of

    “Per our multiple verbal conversations, this is just to serve as notice that, in my professional opinion, your refusal to allow me to upgrade a system exposed to multiple security vulnerabilities on a platform that is no longer supported is a risk you are choosing to accept against my advice.”

    with a list of known major vulnerabilities attached if possible.

    That way, if this ever comes back to bite the company on the ass, at least he can’t say, “Well, he never told me this was a problem!”