Deep Dive: Before You Bet on AI, Check the Odds
FOMO and the race for competitive advantage are making us blind to the risks of adopting a still-developing technology.
TL;DR
In this article, I explore the pros and cons of AI adoption, starting with the excitement and rapid investment in AI-driven solutions. While businesses are eager to leverage AI for efficiency and innovation, there are growing concerns about data privacy, regulation gaps, and AI's unpredictability. Using real-world examples like the HSBC deepfake incident, I’ll highlight the risks posed by bad actors and customers' resistance to AI in customer service. And I’ll talk about how FOMO and competitive advantage are prompting professionals to ignore the risks and dive headfirst into full AI adoption.
"There’s a lot of venture capital money in here right now. At some point the venture capital runs out. [Players need to make profits]. [And that’s when] you get into what Cory Doctorow calls the 'enshittification' cycle, where things that were once adding a lot of value to the user begin extracting a lot of value from the user."
During an episode of The Ezra Klein Show, host Ezra Klein and Professor Ethan Mollick engaged in a lively, insightful discussion on a popular topic: How should I be using AI right now?
Their seventy-five-minute conversation ranged from giving AI a personality to why spending 10 hours experimenting with ChatGPT can radically boost your output. The conversation is packed with practical tips, offering a glimpse into how AI can multiply efficiency.
But beneath the enthusiasm, there’s a clear undercurrent of caution throughout the podcast. And it’s this cautionary approach professionals should be listening out for.
Most professionals know they need AI but don’t know why. And that is dangerous.
AI: The big short?
It’s easy to get swept up in the excitement of AI—automated processes, chatbots, decision-making algorithms, all the new shiny things Silicon Valley throws at us once a decade. But according to Klein and Mollick, the real value of this technology might still be hidden for a lot of us.
Sam Altman (the man behind OpenAI) is among those who admit we don’t know enough about how the algorithms behind large language models and generative AI actually work. Building a better understanding of these tools may well have influenced Mark Zuckerberg’s decision to make Meta’s Llama 3 open source.
Workers no longer feel threatened by convoluted strategies or complex roadmaps required to build a digital product. AI’s making that whole process simpler and more accessible so they can concentrate on creating things. It’s an attractive prospect for both innovator and investor.
Ethan Mollick’s experience confirms this. He’s a professor at Wharton, and when he used AI while writing his book, Co-Intelligence: Living and Working with AI, it didn’t take over—it amplified his work.
Help with writer’s block, citations, and critiques of his drafts all gave him more headspace to focus on the big-picture ideas. It wasn’t doing the work for him—it was like having a really smart assistant, pushing him to think more deeply and do more.
That could explain why AI has attracted huge financial backing. In 2023, AI startups hit a peak, raising $15.9 billion in Q1 alone. And in the U.S., over $31 billion poured into AI companies: $1 billion from Microsoft into OpenAI, Anthropic securing nearly $7 billion in total funding, and Inflection AI locking down $1.3 billion from the likes of Microsoft, Nvidia, and Bill Gates.
And according to a recent Bloomberg article, Google said on Monday, citing a Deloitte study, that its new AI facilities will contribute $4 billion to Thailand’s economy by 2029 and support 14,000 jobs annually over the next five years.
So, from a commercial standpoint, it’s got to be a no-brainer for anyone chasing bigger profits…right?
Humans are emotional apes.
I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so I can do my laundry and dishes. - Joanna Maciejewska
AI isn’t very intuitive. If you don’t give it all the information it needs, it won’t make a decision out of best judgement (even if it tells you it will). During my FinTech days, I heard a similar tone from accountants…
“Do you remember when Excel gave us more time to do our work? Yeah, me neither,” said an old accounting subscriber of ours.
The truth is that AI doesn’t always slide perfectly into every business or workflow, no matter how much we push. There’s still a mighty learning curve and testing process to understand how it will fit with more old-school, manual processes.
Employees are still getting used to AI’s quirks, and let’s face it, many are still more comfortable with their tried-and-true methods. We’re still in the experimental phase of AI adoption, and as Homo sapiens, our emotions are running high as we figure out this exciting new toy.
People are still slacking off, according to Slack
Slack’s annual Workforce Index confirms this. It found that while many managers and executives want to onboard AI, workers feel differently.
Like accountants back in the ’90s, most knowledge workers don’t know what to do with the spare time AI supposedly gives them, so they fill it with shallow work.
Slack’s Senior Vice President of Research and Analytics, Christina Janzer, said a “lack of trust, and a lack of training” meant that only 36% of desk workers use AI. Further research found:
93% of workers don’t fully trust AI for work-related tasks, with lack of trust and training cited as the main barriers to adoption.
Only 15% feel adequately trained to use AI effectively.
Managers should shift focus from activity metrics to output and create a supportive environment for experimenting with AI.
“There’s all this hype around it, but people aren’t being enabled by their leaders and employers to actually be able to use it,” Janzer said.
There’s a mismatch between AI’s capabilities and the tasks that truly matter to these businesses. And finding a way to have this tool easily fit into daily workflows is harder than we’re told it should be.
The dot-com bubble
Experts have pointed out eerie similarities between the current AI frenzy and the dot-com bubble that burst in 2001. Just as venture capitalists threw billions at unproven internet companies in the late 1990s, we’re seeing a similar gold rush today with AI startups.
Many of these companies have yet to prove their profitability, and investors are increasingly wary that the inflated valuations might be masking fundamental weaknesses. While the technology is undoubtedly impressive, the long-term value and profitability of many AI companies remain uncertain.
Core AI solutions will still be around (as they were before this hype), but startups and smaller businesses that went all in on AI operational tech will have a lot of rebuilding to do. Third- and fourth-party suppliers are a critical risk (as we saw with CrowdStrike).
“In my little group chat with my tech CEO friends there’s this betting pool for the first year that there is a one-person billion-dollar company,” says Sam Altman of OpenAI.
So where are they?
According to documents reviewed by The New York Times (I recommend reading the comments on this article), OpenAI’s monthly revenue hit $300 million in August, up 1,700% since the beginning of 2023, and the company expects about $3.7 billion in annual sales this year.
OpenAI estimates that its revenue will balloon to $11.6 billion next year. But it will lose roughly $5 billion this year on costs related to running the service (costs it hasn’t fully disclosed to investors).
Proceed with caution: Risks and ethical concerns
“These systems are growing in power and capabilities at an astonishing rate,” says Klein during the podcast.
And with that rapid growth comes uncertainty, particularly when it comes to data privacy. If your AI solution isn’t directly generating revenue, it might be using your customers’ data to do so.
This raises major ethical concerns—especially as companies scramble to outpace one another and potentially ignore the digital holes they’re creating in the protection of their frameworks and confidential data.
There’s also the issue of regulation gaps. As Mollick points out, AI development has been moving faster than the frameworks designed to govern it. This means businesses adopting AI today are often doing so without the safety net of established regulations.
The risks are that you might find yourself navigating legally murky waters, facing future regulatory consequences as oversight catches up, and potentially having to heavily regulate or even remove AI solutions you’ve embedded in your organisation.
“We have no idea”
The risk of unintended consequences from AI’s unpredictability is high. Even the developers behind these systems aren’t always sure how changes to one part of the algorithm will affect another.
In the podcast, they referenced how fine-tuning an AI model to perform better at maths may have inadvertently made it worse at language processing, as reported by AI developers at OpenAI. “We have no idea why it’s doing that,” said one senior developer in a Slack message.
This unpredictability means businesses are betting on a tool that’s still evolving, and that can be risky when used in high-stakes environments.
A $25m deepfake: cybercrime
In a show of genius, the bad actors who had convinced a lone employee to join an urgent virtual conference call ignored him for its twenty-odd minutes, just as his real colleagues would have.
This created an illusion familiar enough that when the “CFO” told him to transfer $25 million to a European bank account, he complied.
The ability to do this rested on enhanced deepfake AI, and the underlying technology is accessible to anyone. Microsoft’s VALL-E, Google’s Tacotron, and DeepFaceLab’s open-source video creator are affordable and ready for sign-up today.
When it comes to AI, it’s not just businesses that see the potential—bad actors do too. And we’re only just scratching the surface of what’s possible. Where smaller businesses were once too insignificant to be worth a large-scale, profitable breach, bad actors are now likely to build and release millions of automated deepfake attacks targeted at any business with data.
OK, so now what?
AI is powerful, but it’s not without its risks. I encourage you to make decisions with your eyes open to the trade-offs, especially around privacy, ethics, and the long-term impact on your business model if it were to fail.
Ask yourself: if your competitor wasn’t using it, would you? Is the FOMO around this shiny new tool blinding us to the substantial risks it brings with it?
If it does work out—which I hope it does—then doing your due diligence now is nothing short of good business. Only use it for tasks that aren’t business-critical. Never give it access to private data. And don’t rely on it as a major component of your business operations. In other words—for now—it’s best kept as a tool for writing dad jokes.