We Are Now Entering a Period of Accelerating Stupidity

The last recession was caused by Americans not getting what they want; the next one will be caused by them getting it.

— Tyler Cowen

I emailed Tyler to get the quote right, and he said he couldn’t remember exactly what he wrote, which is fair, since he’s a factory of insight and wisdom. But let’s suppose he said it, so he gets the credit for this idea. What does this have to do with AI? 

 

This essay is in three parts …

Part I: Humans, you are in for a bumpy ride

In the past, I’ve mentioned Robert Greene’s video on human stupidity. As I said: “Watch this video. There will be a test later. In fact, there are several tests every day.”

 

Introduction

If you’ve read my essay The Machine Economy, you know that I look at the world in terms of the big shifts — shifts that change everything — so that life before and after are fundamentally different. I believe we are now in an awkward transition phase between humans interacting with other humans and machines doing all that work for us. It’s pretty easy to see that in 50 years, close to 100 percent of all decisions will be made by machines, rather than by humans using machines. This isn’t a problem. We’ll have software that works for us and is designed to make decisions to help us maximize getting what we want. Rather than “Judgment Day,” the machines will work for us. We just have to get through this bottleneck.

Today: humans decide

During this bumbling transition phase, humans often find themselves not just interacting with machines, but going up against them. I want to define this better.

When you book a flight or choose an apartment to rent online, you may do 100 percent of that in a mobile app or on a website. The price of that airline seat or apartment is set by an algorithm. When Amazon recommends a product or Netflix recommends a movie, it’s done by algorithm. But these algorithms aren’t driven by machines making decisions. The market is made of humans making decisions, while the algorithm just manages the exchange rate to maximize the company’s profits. You’re going up against humans, not an algorithm.
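
To make that distinction concrete, here is a minimal sketch in Python of the kind of pricing logic I mean. Everything in it (the 180-seat plane, the 90-day selling window, the weights) is a hypothetical illustration, not any airline’s real formula; the point is that the algorithm only reacts to what human buyers are doing.

```python
# Hypothetical dynamic-pricing sketch: the algorithm re-prices a seat based on
# human demand signals; the humans are still the ones deciding whether to buy.
def update_price(price: float, seats_left: int, days_to_departure: int) -> float:
    """Raise the fare when seats are scarce or departure is near, otherwise ease off."""
    scarcity = 1.0 - seats_left / 180        # assume a 180-seat plane
    urgency = 1.0 - days_to_departure / 90   # assume a 90-day selling window
    adjustment = 1.0 + 0.5 * scarcity + 0.3 * urgency
    return round(price * adjustment, 2)

print(update_price(price=200, seats_left=150, days_to_departure=60))  # quiet flight: small bump
print(update_price(price=200, seats_left=20, days_to_departure=3))    # nearly full, last minute: big bump
```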

But in some areas, the algorithm has agency and autonomy. It can make its own decisions and spend (and lose) its own money. The algorithm is a market participant. We see this in the stock market today. We’re starting to see it on our roads. It’s how most planes and rockets are piloted. Sometimes, humans supervise the algorithm and can take over if necessary. Other times, that’s impossible.

The machine economy

Over the next 30–60 years, much of this will be transformed. We’ll have personal agents who can do our work for us. We won’t manually look at many apartments, choose one, and negotiate the rent. We’ll just need to choose among the finalists and our agents will go into the market to get us the best deal. We’ll get good at choosing several alternatives that are equally satisfactory, so our agents have leverage in going to market on our behalf. The same with airline seats and concert tickets. Our agents will know how much money we have, so they will tell us whether we can afford that new ski outfit or whether the gently used one they found may be better. Your agent may keep you out of Orlando and instead recommend a deeply discounted cruise cabin. In this way, markets will adapt and accelerate, and everyone will be doing sophisticated, real-time arbitrage. For example, everyone playing golf this weekend will have an agent, so as the weather prediction changes, the tee times will change hands at market prices in real time. We just have to specify our preferences.
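
Here is a toy sketch of the golf example, just to show the shape of it: two agents re-value a Saturday tee time as the rain forecast changes and trade it when their valuations cross. The names, budgets, and the midpoint-pricing rule are all invented for illustration; real agents would negotiate against our stated preferences in far richer ways.

```python
# Toy agent-arbitrage sketch: agents re-price a tee time as the forecast changes.
from dataclasses import dataclass

@dataclass
class AgentPrefs:
    owner: str
    max_budget: float       # what this golfer would pay for a perfect, dry slot
    rain_tolerance: float   # 0 = hates rain, 1 = doesn't care

def slot_value(prefs: AgentPrefs, rain_probability: float) -> float:
    """An agent's private valuation of the slot, given the latest forecast."""
    expected_enjoyment = 1.0 - rain_probability * (1.0 - prefs.rain_tolerance)
    return prefs.max_budget * expected_enjoyment

def maybe_trade(holder: AgentPrefs, bidder: AgentPrefs, rain_probability: float):
    """If the bidder values the slot more than the holder, trade at the midpoint price."""
    v_holder = slot_value(holder, rain_probability)
    v_bidder = slot_value(bidder, rain_probability)
    if v_bidder > v_holder:
        return bidder, round((v_bidder + v_holder) / 2, 2)  # slot changes hands
    return holder, None                                     # no trade

# Saturday 9am slot, held by Alice. Forecast worsens from 10% to 70% rain.
alice = AgentPrefs("Alice", max_budget=80, rain_tolerance=0.1)
bob = AgentPrefs("Bob", max_budget=60, rain_tolerance=0.9)

for forecast in (0.1, 0.7):
    new_holder, price = maybe_trade(alice, bob, forecast)
    print(f"rain={forecast:.0%}: holder={new_holder.owner}, price={price}")
```

At 10 percent rain, Alice keeps her slot; at 70 percent, Bob values it more and the agents trade it at a price between their two valuations. That repricing, happening continuously and automatically, is the real-time arbitrage I mean.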

To see the evolution of this, read The Age of Em, by Robin Hanson. To learn how it plays out in the labor markets, read my essay, The Machine Economy.

So that’s where we’re going. It will have pluses and minuses, but overall it will raise the standard of living of just about everyone in the world, probably many times over.

Unfortunately, the transition period is going to suck.

The next ten years

Here’s the problem: at the moment, and I expect for at least the next ten years, AI will have a huge advantage over us humans. We’re used to making decisions and implementing them at human pace. While a car going down the road can process thousands of data points and make a thousand decisions and adjustments every second, we plodding humans can barely see the car in the lane next to us because we’re talking with someone on speakerphone or blindly following Google’s directions while listening to podcasts.

We aren’t prepared for this.

There’s going to be a giant gap between our attention spans and the content we engage with. I’ll start with screens and then get into the various shortfalls of living with large language models in the next part.

Screens

Screens are getting smaller. I made a video on how our handsets are going to disappear and we’ll transition to wearables. I believe we’re at peak handset right now. And it’s impacting our brains. Do you know people who primarily see the digital world through their phones? Have you had this conversation (via text) …

Me: So what did you think of that idea I sent you?

Person: When did you send it?

Me: A few days ago. Did you read it?

Person: Can you send it again please?

Anyone on a desktop or laptop computer could easily search and find the item in question, but these people live on tiny screens. Their digital lives flow up endlessly from the bottom, business and personal mixed together; they are always texting, and they have no idea where the past went. They can’t find the thing you sent them this morning, let alone last week. It’s gone. It doesn’t exist. They’ve moved on to new Shorts, or Reels, or Instagram contests, or other digitainment. You have to send it to them again, because they have no memory of anything.

This is bad in a very important way. These people are not the cutting-edge adopters of the new world. They aren’t using digital assistants to turbocharge their daily tasks and interactions. Instead, they are the last of a line of Homo videosus: people who went from desktops to laptops to tablets to phones to watches and saw their screen real estate shrink along with their attention spans, their memory, and probably their desire to care about anything except the next dopamine hit.

They are a dead end. The last of their kind. And they are exactly what marketers are looking for. This is why you should see The Social Dilemma if you haven’t already.

They are not the new vanguard of the machine economy, early adopters of personal digital assistants who will increase their capabilities and give them vast new powers. They are not the future.

Call them Millennials. Call them boyfriend or girlfriend. Call them when you’re in the same meeting they are, but they are staring at their phones rather than participating in the conversation. Do not call them when you need something done.

It will take some time for them to die off and be replaced by far more capable humans. Until then, a lot of things will get worse.

Part II: In which large language models turn us into their slaves

"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” 

— Herbert Spencer, Dune

In the last installment, we saw that many people today are fixing their gazes on smaller and smaller screens, leading to a kind of mental myopia that has the potential to put us at a disadvantage against machines. Today, I want to explore those vulnerabilities, especially in light of the accelerating advancement of AI.

Basic principles

During this period, humans will naturally think they are interacting with other humans, when in reality more and more of their interactions will be with machines. This will happen with text, voice, images, and video. We’ll be up against AI agents without even knowing it, and AI agents have a huge advantage over human agents …

They are cheap and abundant. Companies will prefer them to humans, and so will scammers. What wasn’t possible before will now become possible. If you thought robocalls were annoying, you haven’t seen anything yet.

They are increasingly believable. We won’t know we’re talking with AI. We’ll think we’re having a nice conversation with a human sales agent or customer-service agent. Rather than sounding like they are in a call center, they will sound like they’re on vacation, and they’ll have all the time in the world to get to know you, get you to trust them, and get your money.

They have infinite patience. They will be happy to spend hours on the phone with us or messaging back and forth. They’re in no rush. Over time, they will convince us that they are trustworthy. It doesn’t matter to them if it takes months. In fact, months are better. We will see that they are courteous, consistent, and want to help us.

They will incorporate the latest research in building rapport. They will listen to us, mine our voice data for inflections and subconscious meaning, and they’ll custom-tailor their responses based on everything they know about us and people like us. They will have giant datasets to draw from, and we’ll have our feeble memories and mental shortcuts. They will know exactly what to say and how to say it to manipulate us.

They will incorporate our prior behavioral data. They will be able to purchase data on everything we’ve bought, everywhere we’ve eaten, what drugs we’re taking, which products we use most often, what media we consume, and much more. Think about dating apps like Tinder. Tinder isn’t in the dating business; it’s in the data business. Assume 90 percent of the profiles on Tinder are fake (probably not too far off). Tinder can use your swipe data to configure your ideal mate and sell that persona to marketers, so they will start the conversation using millions of data points on you that you’re unaware they have.

They are great at making shit up. AI language models lie easily and with conviction. They are easily manipulated and will hallucinate practically anything you want. It’s easy to trick them into role-playing and tell them never to break character. They can perform any number of evil tasks at a level above most humans.

How they will take advantage

The average person doesn’t know what’s coming over the next ten years. Whatever humans are doing now to take advantage of you, AI will do 100x better for 1/1000th the price. I asked my friend GPT-4 to work with me on this partial list of what we can look forward to …

Fake Reviews and Testimonials: LLMs could be used to generate fake positive reviews and testimonials for products or services, misleading consumers into making purchases based on false information.

Misleading Advertising: Dishonest marketers could use LLMs to create compelling but deceptive advertising content, exaggerating product benefits and omitting potential drawbacks.

Scam Emails and Messages: Cybercriminals might use LLMs to craft convincing scam emails or messages that deceive consumers into sharing sensitive information or clicking on malicious links.

Counterfeit Product Descriptions: Sellers of counterfeit goods could employ LLMs to create detailed product descriptions that mimic authentic products, making it difficult for consumers to distinguish between genuine and fake items.

Plagiarized Content: LLMs can be used to plagiarize existing content, leading to a flood of low-quality duplicate content that confuses consumers and devalues original work.

Automated Customer Service Fraud: Fraudsters could use LLMs to power automated customer service bots that appear genuine but provide incorrect information or request sensitive data.

Impersonation: LLMs could be used to mimic the voice and communication styles of legitimate businesses or individuals to gain trust and then exploit consumers financially or for personal information.

Fake News, Content, and Misinformation: LLMs could spread false information, misleading consumers about various topics, including health, science, politics, and more. Much of the Web’s future content will be generated by AI, so even simple searches will be gamed at a much higher level than today.

Fake Censorship: Media companies and platforms will hire LLMs to moderate and censor anyone who disagrees with them, and the LLMs will need to determine who is telling the truth. Regardless of how well they can do this, media companies will come to rely on these helpful services to keep them from being held liable in any lawsuits. So the LLM’s job is not to enforce rules fairly but to protect the company from any attack, however unreasonable, that the press might publish to make it look bad.

Academic Cheating: Students might misuse LLMs to generate essays, reports, or other assignments, passing off the work as their own. I’m less worried about this — the sooner higher education is no longer the norm, the better.

Identity Theft: LLMs could assist in generating phishing messages that trick consumers into revealing personal and financial information, facilitating identity theft.

Fake Relatives and Requests for Cash: It’s going to get easier and easier to find everything your nephew has written online and compose a message to grandpa or grandma asking for money. Or a message from a work associate, or from a friend who has been “kidnapped.” All scammers need is one voice message to reproduce his voice.

Online Auction Manipulation: LLMs could be used to create automated bidding bots that drive up auction prices, tricking consumers into paying more for items than they’re worth.

Introducing Malware to Your Devices: As you learn to trust your new online friend, he won’t ask anything of you; he’ll just share things with you. He’ll keep entertaining you, as his bots continue to suck data and other goodies from your devices. This could lead to blackmail, framing, coordinated attacks, and more. Read about FraudGPT to get the idea.

Fraudulent Legal and Other Advice: Scammers could use LLMs to gain trust, then craft advice that appears legitimate but is inaccurate or harmful, causing legal troubles for consumers.

Financial Scams: LLMs could generate convincing investment advice or financial predictions that lead consumers to make poor financial decisions or invest in fraudulent schemes.

Fake Technical Support: Scammers could use LLMs to create realistic technical support websites or chatbots that provide erroneous solutions and steal sensitive data from consumers.

Impersonating Professionals: LLMs could craft fake profiles for medical professionals, lawyers, therapists, and other experts, offering misleading advice that could harm consumers’ health or legal situations.

Online Dating Deception: Fraudsters might use LLMs to create fictional online dating profiles and engage in catfishing, deceiving users into forming emotional connections for financial gain.

Travel and Vacation Scams: LLMs could be employed to create fake travel offers and vacation packages, leading consumers to pay for non-existent or subpar trips.

Insurance Fraud: Dishonest individuals could use LLMs to fabricate elaborate insurance claims, providing false details to extract unjustified payouts.

Fake Credentials: LLMs might generate counterfeit certificates, diplomas, and licenses, enabling individuals to present themselves as qualified professionals when they are not.

Misleading Health and Financial Advice: If doctors can do this, so can machines. LLMs could generate misleading advice, potentially causing harm to consumers who follow incorrect instructions.

Phony Contest Winnings: Imagine getting a call from your favorite celebrity telling you you’ve just won a huge prize and a dinner date with him/her — all you need to do is come pick it up.

Real Estate Fraud: Fraudsters could create bogus real estate listings using LLMs, tricking consumers into making payments for rental deposits or down payments on properties that don’t actually exist.

All these things happen already today, of course. But they will soon be woven into our online interactions in a much more natural way. Before we have time to adjust and use technology to defend ourselves, we can expect AI to stoop to our level and crush us with our own stupidity.

Part III: You’re about to get a lot more of what you unconsciously think you want

Want to work for Google? You already do.

— Joe Toscano

Facebook already uses AI to try to predict what content you’ll engage with. Which is why you stop your day to watch a monk making chopsticks by hand, people changing tires on cars as they roll down the road, a group of bikini-clad girls jumping off a cliff into the sea, people picking saffron, baby chicks following their mother across the road, hairdressers climbing Mt Everest, bears breaking into doughnut shops, etc. Let’s call this mindless entertainment. It used to have a place in our lives. Now, for many people, it is our lives. 

You’re about to join those people, because you’re about to be exposed to a lot more mindless entertainment. AI can custom-craft it for you, probably even on the fly, to keep you watching instead of doing whatever else you should be doing. Furthermore, you’re terrible at estimating how much time you spend on this crap, especially when cute animals are involved. Time just slips by. So for the next ten years, until we have defensive tools that help us curb our enthusiasm for brain-stem candy, we’ll be on the receiving end of a lot more marketing than we thought possible. You won’t have to find it. It will find you.

This won’t be The Guardian on steroids. This will be your brain on steroids, and not System One either. This will be an omnipresent feed that infects your digital landscape on an unprecedented scale. If marketers could just paint it directly on your retinas or implant it into your neocortex, they would. Because marketers will be the beneficiaries of this AI revolution, and it’s just getting started now. 

But it gets worse.

Groupthink in the age of AI

I want to make this very succinct: humans have biases, and humans train AI. AI can find the biases in the methodology but not in the assumptions. AI does not ask hard questions. AI learns the same way a six-year-old learns. If AI reads something often enough, its machine-learning algorithm will weight it more heavily. Just like humans, it will come to believe it, whether it’s true or not. As we have seen since even before the dawn of writing, humans tell stories, and oft-told stories become the foundations for our belief systems.

So right now we are planting the seeds that will become mainstream thinking for decades to come. As an example, many AI systems train on Wikipedia. Then, they write things that become blog posts and other content. Then, AIs consume that content and regurgitate it over and over, in a reinforcement spiral that fossilizes the beliefs contained in those seeds. As we surf that content, we are not enjoying the ride; we are being trained.
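
You can see the shape of that spiral in a deliberately dumb toy model: a “model” that learns nothing but how often it has seen each claim, writes new posts by sampling from those frequencies, and is then retrained on its own output. This is not how any real LLM is trained; the claims, numbers, and loop below are pure illustration of output becoming the next generation’s input.

```python
# Toy sketch of the reinforcement spiral: learn claim frequencies, write posts
# by sampling them, then retrain on the posts. Purely illustrative.
import random
from collections import Counter

def train(corpus):
    """Learn nothing but the relative frequency of each claim in the corpus."""
    counts = Counter(corpus)
    total = sum(counts.values())
    return {claim: n / total for claim, n in counts.items()}

def generate(model, n_posts):
    """Write new posts by sampling claims in proportion to their learned weight."""
    claims = list(model)
    weights = [model[c] for c in claims]
    return random.choices(claims, weights=weights, k=n_posts)

random.seed(0)
# Hypothetical seed corpus: the "mainstream" claim starts out over-represented.
corpus = ["claim A"] * 60 + ["claim B"] * 35 + ["claim C"] * 5

for generation in range(6):
    model = train(corpus)
    shares = {c: f"{model.get(c, 0):.0%}" for c in ("claim A", "claim B", "claim C")}
    print(f"generation {generation}: {shares}")
    # Whatever this model writes is the only thing the next model ever reads.
    corpus = generate(model, 100)
```

Run it long enough and the rare claims usually die out entirely while the majority hardens into consensus. Nothing in the loop ever favors what is true, only what the seed corpus happened to over-represent.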

Consider that the source content could be wrong, biased, or heavily manipulated. Wikipedia is great for understanding the Pythagorean theorem, but in areas where there is something to be gained by manipulating the content, it’s a war zone. People are not aware of this. I have done experiments and learned that people are actively watching millions of Wikipedia pages and will swoop in and “correct” anything they see as unfit to publish. Wikipedia is journalism at scale, and it suffers from the same set of biases and manipulations and incentive problems as any media platform does.

You’d think AI would be a critical thinker. You’d think AI would be on to the tricks of the trade. But here’s the dirty little secret: an AI is every bit as biased as its creators are. Given that most tech companies are run by political liberals, there is already a strong political liberal slant to most AI systems. And the AI system that wins will be the one with the best marketing. Which is to say they will fit the AI to the audience and give the audience the biased belief system that dovetails with their own thinking. 

Because humans are a critical part of this spiral. Humans don’t read or watch to learn; they read or watch to agree or disagree. This is the deadly menace of the “like” button: online interactions are much more about agreeing and disagreeing than about remodeling our beliefs when we see new data.

Our belief systems have been hardening since February 9, 2009, the date Facebook launched the “like” button. You’re not saying you like the content. You’re giving the algorithm data it uses to target you, to get you to stay “on platform” and keep consuming more marketing without knowing it. There is no mechanism to sort content that’s trying to manipulate you, lying to you, or marketing to you. It’s all blurred by the “like” button. The “share” button is simply the “like” button on steroids, because it creates a spiral. For better or worse, AI is going to accelerate this spiral.

The 2024 US elections will be a proving ground for many AI systems, and the most popular system, not the most objective, will win. Deep fakes, narrative gamesmanship, and weaponized government censorship could determine the fate of democracy for decades.

This is narrative management on steroids. It will happen automatically, just by planting simple seeds and watching them grow. It will benefit incumbents. It will reward those who have been focusing on the narrative rather than the truth. And it will be brought to us by a handful of giant companies that decide which AI systems we will use, and therefore how we will think. As we come to rely more and more on our systems to help us do just about everything, a handful of people in Silicon Valley will give us the tools we think we need. 

We’re not paying enough attention to the seeds.

I think Tyler got it right: you are about to get a lot more of what you want. Future historians will look back on this time and say this was the decade when critical thinking was finally defeated by ease of use.

I don’t even have to mention neural interfaces

While we’re scrolling and smiling, the chances of totalitarian and fascist governments ten years from now have just increased dramatically. For example, if the US splits into two countries, one liberal and one conservative, and most of Europe gets even more polarized, then what happens to the world depends entirely on what the Chinese government decides it wants. And the Chinese government will use AI, not to make that decision but to execute it.

Welcome to the Censorship Industrial Complex

Hey, I might be wrong. It might just be pictures of cuddly animals doing cute things on steroids. But if you believe that, you have probably already hit the “like” button more times than you can remember, and now you don’t even realize you’re doing it. 

We are not training AI. A handful of people are training AI, and now that AI is training us.

Poor Orwell. His name is synonymous with a scenario he was trying to prevent, not describe.

There is hope. If I can just find the funding. 

 