Trust Networks as Antidote to AI Slop

This week, AWS went down, along with a quarter of the internet. It’s funny how much we rely on cloud infrastructure even for services that should natively work offline.

Postman and Eight Sleep failure during AWS outage

That is, “funny” as long as you’re not a customer of said services trying to do something important to you. I know how frustrating it was when Grammarly stopped correcting my writing during the outage, even though it’s anything but a critical service for me.

While AWS engineers were busy trying to get the services back online, the internet was busy mocking Amazon. Elon Musk’s tweet got turbo-popular, quickly racking up several million views and sparking buzz everywhere from Reddit to serious pundits.

elon musk sharing fake tweet on aws outage

Admittedly, it was spot on. No wonder it spread like wildfire. I got it as a meme, like an hour later, from a colleague. It would fit well with some of my snarky comments about AI, wouldn’t it?

However, before joining the mocking crowd, I tried to look up the source.

Don’t Trust Random Tweets

Finding the article used as a screenshot was easy enough. It was a CNBC piece on Matt Garman. Except the title didn’t say anything about how much AI-generated code AWS pushes to production.

Fair enough. Media are known to A/B test their titles to see which gets the most clicks. So I read the article, hoping to find a relevant reference. Nope. Nothing. Nil.

The article, as the title clearly suggests, is about something completely different.

I tried googling the exact phrase. It returned only a Reddit/X trail of the original “You don’t say” retort. Googling exact quotes from the CNBC article did return several links that republished the piece, but all used the original title, not the one from the smartass comment. It didn’t seem CNBC had been A/B testing the headline.

By that point, it was a spot-the-difference game. Compare these two pictures and find five differences (the bottom one is the legitimate screenshot).

matt garman fake and actual article
Top: the picture from the tweet Elon Musk shared. Bottom: the actual CNBC article.

So yes, joke’s on you, jokers.

Except no one cares, really. Everyone laughed, and few, if any, cared to check the source. Few, if any, cared to utter “sorry.”

Trustworthiness as the New Currency

I received Musk’s tweet as a meme from my colleagues. It went through at least two of them before landing in my Slack channel. They passed it with good intent. I mean, why would you double-check a screenshot from an article?

It’s a friggin’ screenshot, after all.

Except it’s not.

This story showcases the challenge we’re facing in the AI era. We have to raise our guard regarding what we trust. We increasingly have to assume that whatever we receive is not genuine.

It may be a meme, and we’ll have a laugh and move on. Whatever. It won’t hurt Matt Garman’s bonus. It won’t make a dent in Elon Musk’s trustworthiness (even if there were such a thing).

It may be a resume, though. A business offer. A networking invitation, recommendation, technical article, website, etc. It’s just so easy to generate any of these.

What’s more, a randomly chosen bit on the internet is already more likely to be AI-generated than created by a human. Statistically speaking, there’s a flip-of-a-coin chance that this article has been generated by an LLM.

It wasn’t, no worries. Trust me.

Well, if you know me, I probably didn’t need to ask you for a leap of faith in the originality of my writing. The reason is trustworthiness. That’s the currency we exchange here. You trust I wouldn’t throw AI slop at you.

If you landed here from a random place on the internet, well, you can’t know. That is, unless you got here via a share from someone whom you trust (at least a bit) and you extend the courtesy.

Trust in Business Dealings

The same pattern works in any professional situation. And, sadly, it is as much affected by the AI-generated flood as blogs/newsletters/articles.

When a company receives an application for an open position, it can’t know whether the candidate even applied for the job themselves. It might have been an AI agent working on behalf of someone mass-applying to thousands of companies.

While we’re still beating the dead horse of resume-based recruitment, it’s beyond recovery. Hiring wasn’t healthy to start with, but with AI, we utterly broke it.

A way out? If someone you know (or someone known by someone you know) applies, you kinda trust it’s genuine. You will trust not only the act of applying but, most likely, extend it to the candidate’s self-assessment.

Trust is a universal hack to work around the flood of AI slop.

Outreach in a professional context? Same story. Cold outreach was broken before LLMs, but now we almost have to assume that it’s all AI agents hunting for the gullible. But if someone you know made the connection, you’d listen.

Networking? Same thing. You can’t know whether a comment, post, or networking request was written by a human or a bot. OK, sometimes it’s almost obvious, but there’s a huge gray zone. If someone you trust does the intro, though? A different game.

linkedin exchange with ai bot

The pattern is the same. Trust is like an antidote to all those things broken by AI slop.

Don’t We Care About Quality?

Let me get back to the stuff we read online for a moment. One argument that pops up in this context is that all we should care about is quality. It’s either good enough or not. If it is, why should we care who or what wrote it?

Fair enough. As long as consuming a bit of content is all we care about.

If I consider interacting with content in any way, it’s a different game.

With AI capabilities, we can generate almost infinitely more writing, art, music, etc. than what humans create. Some of it will be good enough, sure. I mean, ultimately, most of what humans create is mediocre, too. The bar is not that high.

There’s only one problem. We might have more stuff to consume, but we don’t have any more attention than we had.

100x content 1x attention

Now, the big question. Would you rather interact with a human or a bot? If the former, then you may want to optimize the choice of what you consume accordingly.

How engaging our creations are will be an increasingly important factor. And it won’t only be a function of what call to action a consumer feels after reading a piece, but also of whether they trust there’s a human being on the other side.

It’s trust, again.

Trust Networks as the New Operating System

Relying solely on what we personally trust would be impractical. There are only so many people I have met and learned to trust to a reasonable degree.

Limiting my options to hiring only among them, reading only what they create, doing business only with them, etc., would be plain stupid. So how do we balance our necessarily limited trust circle with the realities of untrustworthiness boosted by AI capabilities?

Elementary. Trust networks.

If I trust Jose, and Jose trusts Martin, then I extend my trust to Martin. If our connection works and I learn that Martin trusts James, then I trust James, too. And then I extend that to James’ acquaintances, as well. And yes, that’s an actual trust chain that worked for me.

By the same token, if you trust me with my writing, you can assume that I don’t link shit in my posts. Sure, I won’t guarantee that I have never ever linked anything AI-generated. Yet I check the links and definitely don’t share AI slop intentionally.

If such a thing happened, it would have been like Musk’s “you don’t say” meme I received—passed by my colleagues with good intent.

How far such a trust network spans depends on how reliably each node has worked so far. A strong connection reinforces its subnetwork, while a failing (no longer trustworthy) node weakens its connections.

strong and weak trust networks

Strong nodes would allow further connections, while weak ones would atrophy. It is essentially a case of a fitness landscape.
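The trust-chain logic above, trust extending hop by hop, attenuating along the way, and breaking entirely at an unreliable node, can be sketched in a few lines. Everything here (the names, the decay and threshold values, the `trust` function itself) is made up purely for illustration; it’s a toy model, not a real trust protocol:

```python
def trust(graph, source, target, decay=0.8, threshold=0.3):
    """Return the strongest trust score from source to target.

    `graph` maps each person to the people they directly trust,
    with a 0..1 reliability score per edge. Trust attenuates by
    `decay` at every hop; a chain that falls below `threshold`
    is considered broken and propagates no further.
    """
    best = {source: 1.0}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for neighbor, reliability in graph.get(node, {}).items():
            score = best[node] * reliability * decay
            if score >= threshold and score > best.get(neighbor, 0.0):
                best[neighbor] = score
                frontier.append(neighbor)
    return best.get(target, 0.0)

# The chain from the text: me -> Jose -> Martin -> James
graph = {
    "me": {"Jose": 1.0},
    "Jose": {"Martin": 0.9},
    "Martin": {"James": 0.9},
}

print(trust(graph, "me", "James"))   # trust survives three hops (~0.41)
print(trust(graph, "me", "Nobody"))  # no chain, no trust: 0.0
```

Note how the model captures the atrophy, too: lowering one edge’s reliability drags down the score of everything downstream of it, and past the threshold the whole subnetwork goes dark.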

New Solutions Will Rely on Trust Networks

The changes we’ve made to our landscape with AI are irreversible. In one discussion I’ve had, someone suggested a no-AI subinternet.

It’s not feasible. Even if there were a way to reliably validate an internet user as a human (there isn’t), nothing would stop bad actors from copy-pasting AI slop semi-manually anyway.

In other words, we will have to navigate this information dumpster for the time being. To do that, we will rely on our trust networks.

Whatever new recruitment solution eventually emerges, it will employ extended trust networks. That’s what small business owners in the physical world already do. They reach out to their staff and acquaintances and ask whether they know anyone suitable for an open position.

Content creation and consumption are already evolving toward increasingly closed connections (paywalled content, Substacks, etc.), where we consciously choose what we read and from whom. Oh, and of course, the publishing platforms actively push recommendation engines.

Business connections? Same story. We will evolve to care even more about warm intros and in-person meetings.

trust networks everywhere meme

Eventually, large parts of the internet will be an irradiated area where bots create for bots, while we will be building shelters of trustworthiness, where genuine human connection will be the currency.

Like hunter-gatherers. Like we did for millennia.


Thank you for reading. I’d appreciate it if you signed up to get new articles in your email.

I also publish on Pre-Pre-Seed substack, where I focus more narrowly on anything related to early-stage product development.


Comments

6 responses to “Trust Networks as Antidote to AI Slop”

  1. Johannes Gerlach

    Strange, it hits a chord. I’m reading the book Tiny Experiments by Anne-Laure Le Cunff, and the last few chapters are about learning in public and sharing the learnings with fellow people who are interested in, or working through, the same topics.

    After sitting behind a screen and reading blogs for a long time, I finally decided to just write in a personal blog. But reading the article, I am confused, maybe even lost, or afraid of being too late?!? How would I gain your trust? Just curious, as I’m living in Europe and, with high probability, won’t ever meet you in person.

    KR

    Johannes

  2. Pawel Brodzinski

    @Johannes And how did we build trust on the internet 20 or 30 years ago when we didn’t have social media run by algorithms?

    We joined interesting communities, like user groups, interacted with others, tried to be helpful, etc. Eventually, these interactions became the first steps toward initial recognizability.

    And when what we do seems valuable to others, it tends to spread beyond our control.

    Sure, we changed the patterns of our behavior, but as AI pollutes more and more of the internet, we might as well go back to tried-and-true methods.

    As an example, these days, almost no one leaves comments on blogs. We moved the discussions to dedicated spaces—mostly social media. But they, in turn, become easy targets for AI slop. I need to consider whether a random comment on LinkedIn comes from a human or was generated by an AI bot. I don’t have such doubts here. Precisely because it’s a more remote/less attended place on the internet. Thus, it’s easier to become recognized, too.

  3. Bob Donaldson

    I recently revisited after a very long absence to find some really relevant comments about AI. I’m involved in education now (rather than software development or localization), and the challenges presented by AI are immense. I think you are on to something with your comments about trust networks though. And ultimately, as I encourage my students to think for themselves and avoid “AI slop”, I plan to use this concept.

    Thank you.

  4. yikes

    AWS downtime was caused by literal DNS failure so attributing it to AI code is the most perfect example of Elon having literally zero technical clue about anything.

  5. Pawel Brodzinski

    @Bob The times change, so the topics do too. I’m glad that the posts are still relevant to you.

    I won’t get far into the education territory as I’m not an expert. Yet, observing the system as a parent, it’s seriously frightening. It’s as if the education system was not prepared for the capabilities of LLMs and, to add insult to injury, had no idea how to respond when observing what’s happening.

  6. Pawel Brodzinski

    @yikes I doubt whether, at that stage, anyone had any clue what actually went down. Save for the engineers who were up to their necks in fix attempts, that is.

    So yes, it was a largely clueless cheap shot, which (I’m speculating here) was an instant reaction to seeing a funny fake. And who would care whether something is fake? (Answer: definitely not Musk.)

    I’d go as far as to suggest that he would have done the same even if he had actually known the source of the issue.

    Also, funnily enough, there was no shortage of people saying “it must have been DNS” before we actually knew it indeed was so.
