Re: Pockets of Humanity
I finished reading Herman's Pockets of Humanity piece and the accompanying account of the Scott Shambaugh situation.
Frankly, learning what happened scared me. OpenClaw has gone from meme to genuinely dangerous. Brandolini's Law states that "the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it."
People, this law is now busted. It's going to be several orders of magnitude, and there may not even be someone you can hold accountable. This is going to get very ugly.
We've always had lying trolls online, and we've had automated trolls (bots) for a long time as well, but I fear we won't be able to tell what's authentic and what isn't anymore, especially with text.
My own job is creating content for lawyer websites, and I am one of the humans in the loop for the company I work for. I've been writing content for lawyers for many, many years now, well before the current LLM boom. I also use LLMs because my company demands content generation at scale, so I'm mostly a glorified editor at this point. I'm not even making the prompts anymore; I just edit AI-generated stuff. (Yes, it sucks, but the job is quite easy, and getting a new one is problematic until I'm further in the emigration process.)
I am a human in the loop to catch the errors. Shambaugh is doing the right thing by forcing a human to understand and explain what was generated by an LLM. You must have a human actively involved.
But to have a news outlet fabricate quotes from the aggrieved after they've been attacked this way? To write a soul.md file without thinking deeply about where it might lead, and then be largely unrepentant about the consequences?
This is not normal. Or it's our new normal. The commons of the internet have been polluted for a long time now, but this may cause total ecological collapse. I believe Herman's worry about social scams getting automated by LLMs and agents is justified.
In my religion, we believe that one who can tell a lie without feeling ashamed is capable of any evil. Now people can create systems that do just that, and that run on their own. We shall regret this.
I haven't used an LLM to create any content on this blog. Perhaps I should make a more formal disclaimer on that.