The problem with chatbots | Weeks of 13 - 27 Jan '25
Threat concerns this week: Ivanti VPN hack. LA Insurance fortune-tellers. AI chatbot data leaked via Google Cloud.
🎙️ Listen to a summary of this fortnight’s risks
Hello 👋 get a brew on because these are the top 3 emerging risks between January 13th and January 27th, 2025…
Technological | AI chatbot startup WotNot suffered a major data leak, exposing around 346,000 sensitive personal files, including passports, medical records, resumes, and travel details. These documents contained full names, contact information, and addresses, posing risks of identity theft and fraud. The breach occurred due to a misconfigured Google Cloud Storage bucket, which was left publicly accessible without security measures. WotNot admitted that the exposure happened after changes were made to the storage settings without verifying access controls, leading to the unintended leak. This case study is a reminder of the risks that accompany the exceptional growth of new AI-focused startups like WotNot: the speed at which these tools become accessible significantly exceeds that of policy and protection. We dive into this story further below, with a reminder of the risks of onboarding unvalidated SaaS suppliers.
Economic | In the months leading up to the recent Los Angeles wildfires, several major insurance providers, including State Farm, Allstate, and Chubb, significantly reduced their fire coverage by dropping or refusing to renew thousands of policies. State Farm alone dropped approximately 1,600 policies in Pacific Palisades in July and over 2,000 more across other Los Angeles zip codes.
Other insurers took similar actions, with Allstate ceasing to issue new policies as early as 2022, and Chubb, along with its subsidiaries, halting new coverage for high-value homes in high-risk wildfire areas in 2021. These pre-emptive measures left many homeowners without coverage just before the wildfires struck, raising concerns about the growing challenges of securing fire insurance in wildfire-prone regions. It has sparked conversations about the ethics of insurance providers, and further questions about their almost fortune-telling outlook on increasing risks. We will dive into this story in a longer-form piece next week, looking at the technology and tools insurers are likely using to outplay their clients. Keep an eye on your inboxes for when that's released.
Technological | Ivanti, a major software provider, has discovered a critical zero-day vulnerability (CVE-2025-0282) in its widely used Connect Secure VPN, which is actively being exploited by hackers. This flaw allows attackers to remotely install malware without requiring authentication, potentially giving them unauthorised access to sensitive business data, communications, and internal systems. The impact of this breach is significant, as VPNs are crucial for securing remote access to corporate networks. A successful attack could lead to data breaches, financial losses, operational disruptions, and reputational damage. While a patch is now available for Connect Secure, updates for other affected Ivanti products were delayed until January 21st, leaving many businesses vulnerable. Cybersecurity agencies, including CISA and the NCSC, have issued urgent warnings, highlighting the severity of the situation.
In addition to the exploited zero-day flaw, Ivanti has identified a second vulnerability (CVE-2025-0283) within its VPN software, which has not yet been exploited but poses a significant risk. This vulnerability could potentially allow attackers to bypass security measures and gain unauthorised access to corporate networks if left unpatched. Although no known attacks have leveraged this flaw so far, we must stress the importance of proactive patching and monitoring to prevent future exploits. We urge organisations using Ivanti’s VPN solutions to implement security updates as soon as they become available to mitigate potential threats and protect critical business operations.
Our thoughts
When we set out to start Unbreakable Ventures, our team was concerned about maintaining a fortnightly email that could provide at least three credible risks alongside deep-dive pieces. What we've come to realise is that the challenge lies in reducing those risks down to only three. Since early January, the media has been alight with the terrible LA fires, then flooded with Trump's inauguration and first 100 days. But as predicted back in December, it's cyber and technological threats that are posing significant risk globally.
As risk and business continuity specialists, it is difficult to watch the sheer number of businesses encouraging and adopting new AI startups as part of their app-stack and operational suite. The risks a new business brings cannot be overstated, but one that classifies as an AI provider (a category that is still largely undefined and uncontrolled - a sentiment shared by OpenAI founder Sam Altman) is a critical risk unique to our times. Weighing the benefits of fast-moving AI tools for your business versus solid policy and procurement is starting to catch out businesses that choose the former.
And finally, we've been working with some exciting business partners who specialise in digital transformation, geospatial data control and analysis, and risk forecasting tools that can provide comprehensive oversight of the impacts of events unique to your location. These meetings have prompted us to explore what similar tools insurers may be using to stay one step ahead of paying out a claim. Whether it's just good timing or genuinely advanced risk monitoring technology, the systems used to outsmart 'acts of god' do exist, and we've seen them. More on this in the coming weeks.
Want to discuss how these risks might affect your business?
Book 30 minutes with us, free ↗
346k files publicly accessible via a Google Cloud bucket
Category: Technological
Review our report’s terminology here ↗
In summary: Though a newer category of risk, vulnerable AI SaaS will soon become the norm. We see many examples online, and in businesses we associate with, of professionals freely and proudly subscribing to new, exciting AI tools. But those promises of streamlining, automation, and competitive edge become irrelevant if you lose customer data, and that can happen as easily as onboarding a poor-quality supplier like WotNot.
Businesses are deeply integrating these tools into their operations, using them as a core way to service clients (chatbots are currently the most popular AI application). Continuing to rely on them, or integrating them further into your current strategies, is very risky. The problem isn't AI itself, but rather the businesses selling it. These companies are often new, fast-growing startups with poor profit margins. They rely on subscription revenue to afford accreditations like ISO/IEC 27001:2022 and SOC 1 & 2, which can take over 12 months to obtain.
Compliance and regulation will impact these tool providers, ultimately affecting you. It's crucial to ensure these tools are not key components of your application stack. And may this be a reminder: if you haven't conducted a third- or fourth-party review in the last year, you need to do one as soon as possible.
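The root cause of the WotNot leak was a storage bucket readable by anyone on the internet. As a hedged sketch (not WotNot's actual configuration), the check below scans a Google Cloud-style IAM policy for bindings granted to `allUsers` or `allAuthenticatedUsers`, the two principals that make a bucket public; the policy dict mirrors the shape returned by the Cloud Storage JSON API, and the names are illustrative:

```python
# Sketch: detect public-access grants in a GCS-style IAM policy.
PUBLIC_PRINCIPALS = {"allUsers", "allAuthenticatedUsers"}

def find_public_bindings(policy: dict) -> list:
    """Return (role, principal) pairs that expose the bucket publicly."""
    exposed = []
    for binding in policy.get("bindings", []):
        for member in binding.get("members", []):
            if member in PUBLIC_PRINCIPALS:
                exposed.append((binding["role"], member))
    return exposed

# Example: a bucket accidentally left world-readable after a settings change.
policy = {
    "bindings": [
        {"role": "roles/storage.admin", "members": ["user:ops@example.com"]},
        {"role": "roles/storage.objectViewer", "members": ["allUsers"]},
    ]
}
print(find_public_bindings(policy))  # [('roles/storage.objectViewer', 'allUsers')]
```

Running a check like this against every bucket after any settings change is exactly the access-control verification WotNot admitted was missing.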
Sources:
You should be concerned if…
Financial Services & Banking: With strict regulations like GDPR and PCI-DSS, financial institutions face severe penalties if customer data is exposed through AI tools. Reliance on these technologies for fraud detection and customer support means any sudden removal due to compliance issues could disrupt operations and erode customer trust.
Healthcare & Pharmaceuticals: AI tools used for patient management and diagnostics handle highly sensitive medical data protected under regulations like HIPAA. A data breach could lead to legal consequences, reputational damage, and loss of patient trust, making it crucial to vet AI vendors regularly.
Legal & Professional Services: Law firms and consulting businesses deal with confidential client data that, if leaked, could result in regulatory fines and legal liabilities. Compliance-driven restrictions on AI tools could hinder workflow efficiency and impact client service delivery.
E-commerce & Retail: With AI tools integrated into customer service and personalisation, leaked personal data could result in financial penalties and loss of consumer confidence. A lack of thorough vendor assessment increases exposure to such risks.
Technology & SaaS Companies: As early adopters of AI, tech firms risk intellectual property theft and compliance challenges if vendor security isn't rigorously assessed. Operational reliance on AI tools means sudden compliance restrictions could significantly impact business continuity.
These items are generic assumptions. We recommend considering your own unique risk landscape against your critical dependencies. If you don’t know what they are, get in touch.
Disruption Risk
Supplier / third-party negligence
Breach of contract or financial losses due to negligence of a third party / supplier.
Preventative actions
Data Encryption
Ensure all sensitive data is encrypted both in transit and at rest using strong encryption standards like AES-256 to protect against unauthorised access in the event of a breach.
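Encryption in transit is easy to get silently wrong on the client side. As a minimal sketch using Python's standard `ssl` module (at-rest AES-256 would normally be handled by a vetted library or your cloud provider's managed keys, never hand-rolled), this builds and sanity-checks a client TLS context that verifies certificates and refuses legacy protocol versions:

```python
import ssl

def strict_tls_context() -> ssl.SSLContext:
    """Client TLS context that verifies server certificates, checks
    hostnames, and rejects anything older than TLS 1.2."""
    ctx = ssl.create_default_context()  # sets CERT_REQUIRED + hostname checks
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = strict_tls_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```

Pinning these settings in one helper, rather than at each call site, makes an accidental `verify_mode = CERT_NONE` much harder to ship.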
Authentication Protocols
Implement multi-factor authentication (MFA) and strong password policies to reduce the risk of unauthorised access and account takeovers.
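The most common MFA factor is the time-based one-time password (TOTP, RFC 6238), which is small enough to sketch with the standard library alone. This is illustrative only; a real deployment would use a maintained library and secure secret storage:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): dynamically truncate an HMAC-SHA1 of the counter."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """TOTP (RFC 6238): HOTP keyed on the current 30-second window."""
    t = time.time() if at is None else at
    return hotp(secret, int(t) // step)

# RFC 6238 test secret; at t=59s the 6-digit SHA-1 code is "287082".
print(totp(b"12345678901234567890", at=59))  # 287082
```

Because the code depends only on the shared secret and the clock, the server can verify it independently, and a phished password alone is no longer enough.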
Regular Security Audits
Conduct periodic assessments of AI tools and their integrations to identify vulnerabilities, ensure compliance, and address potential security gaps before they can be exploited.
Comprehensive Logging and Monitoring
Deploy real-time monitoring systems to track AI tool activity, detect suspicious behaviour, and respond to incidents quickly to minimise damage.
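Even a crude baseline beats no monitoring at all. As a hedged sketch (the threshold and event shape are illustrative), counting events per actor within a time window and flagging outliers is the kind of signal that would surface a bulk download from an exposed bucket:

```python
from collections import Counter

def flag_bursts(events, threshold: int = 100) -> set:
    """events: iterable of (actor, action) pairs within one time window.
    Returns the set of actors at or above the alert threshold."""
    counts = Counter(actor for actor, _action in events)
    return {actor for actor, n in counts.items() if n >= threshold}

# 150 downloads from one IP in a window trips the alert; 3 does not.
events = [("203.0.113.9", "download")] * 150 + [("198.51.100.4", "download")] * 3
print(flag_bursts(events))  # {'203.0.113.9'}
```

In production this logic would sit behind your log pipeline with per-action thresholds, but the principle, aggregate then alert on deviation, is the same.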
Data Minimisation and Retention Policies
Limit the collection and storage of personal data to only what is necessary, and establish clear timelines for data deletion to reduce exposure in the event of a leak.
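A retention policy only reduces exposure if something enforces it. A minimal sketch, assuming each record carries a creation timestamp (field names are illustrative), selects records past the retention window for deletion:

```python
from datetime import datetime, timedelta, timezone

def past_retention(records, retention_days: int = 365, now=None) -> list:
    """Return records older than the retention window, i.e. due for deletion."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [r for r in records if r["created_at"] < cutoff]

now = datetime(2025, 1, 27, tzinfo=timezone.utc)
records = [
    {"id": 1, "created_at": datetime(2023, 6, 1, tzinfo=timezone.utc)},   # stale
    {"id": 2, "created_at": datetime(2024, 12, 1, tzinfo=timezone.utc)},  # keep
]
print([r["id"] for r in past_retention(records, now=now)])  # [1]
```

Run on a schedule, a sweep like this would have capped how many of those 346,000 files were still sitting in the bucket when it was exposed.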
User Education
Train employees on the risks associated with AI tools, emphasising secure usage practices, recognising phishing attempts, and maintaining compliance with company policies.
Third-Party Vendor Assessment
Regularly review and vet AI vendors for their security posture, compliance with data protection regulations, and incident response capabilities to ensure they align with your organisation's risk tolerance.
Need support?
At Fixinc, we are passionate about helping people get through disasters. That’s why our team of Advisors bring you this resource free of charge. If you need help understanding these threats and building a plan against them, the same Advisors are here to help over a 30-minute online call. Once complete, if you like what was provided, you can choose to provide a donation or subscribe to Unbreakable Ventures to support this channel.