
This founder canned his therapist chatbot to build better AI protections

What went wrong with AI therapist Yara, and how its co-founder reacted

In partnership with Attio

Let’s round off the year with something festive… a post-mortem.

Okay, that’s not very festive but it’s certainly an interesting read.

Earlier this year we profiled an AI startup called Yara. But now they’ve called it a day. Why? And what does it say about AI’s capabilities for startups?

Read on to find out, and to learn how the experience has inspired the co-founder's next venture.

– Martin

Introducing the first AI-native CRM

Connect your email, and you’ll instantly get a CRM with enriched customer insights and a platform that grows with your business.

With AI at the core, Attio lets you:

  • Prospect and route leads with research agents

  • Get real-time insights during customer calls

  • Build powerful automations for your complex workflows

Join industry leaders like Granola, Taskrabbit, Flatfile and more.

Chatbots make bad therapists. Now this founder is building better AI protections

Joe Braidwood with his GLACIS co-founder, Dr Jennifer Shannon

When we covered Yara back in March, I was struck by co-founder Joe Braidwood's conviction that AI could be a great therapist, if given the right guardrails.

And yet, fast forward to November, and he had shut the startup down, citing a lack of confidence in generative AI's ability to safely support vulnerable people.

Braidwood’s announcement post on LinkedIn about the closure has to date garnered over 3,200 reactions and almost 400 comments.

So what went wrong? To find out, and to discover what he's building instead, I hopped on a late-evening call with Braidwood. Although he's well known in the UK startup world from his time at Microsoft-acquired SwiftKey, he resides in Seattle these days.

We also talked about his new startup, GLACIS, co-founded with Dr Jennifer Shannon, which aims to increase confidence in AI safety.

This conversation has been edited for clarity.

MB = Martin SFP Bryant, JB = Joe Braidwood

MB: So Yara was in a positive place in March. What happened since then that led to it shutting down?

JB: The biggest thing was in August, when Illinois made it very clear that they were going to come after anyone who was not extremely diligent in this space.

We had always been as diligent as we could be, but one of the topics that came up more and more was the gap between determinism—thou shalt, thou shalt not, etc—and what transformer architectures actually make possible, which is not deterministic, to put it directly.

The specific rule is that calling yourself a ‘wellness product’ is not enough to give yourself a shield. You have to actually make sure that the product that you're shipping, if it's a chatbot and if it gives people emotional support, is extraordinarily diligent, almost perhaps to a fault, around detecting and off-boarding people who might be seeking clinical care.

While that is extremely reasonable on paper, it's actually extremely difficult to pull off. I think the main reason for that difficulty is that there's not a very good transparency layer around the guardrails and other safety features that companies run when they're deploying AI.

For Yara, it was just a headwind that I thought was insurmountable, and I think a number of other things have transpired since that make me double down in that conviction.

In parallel with that, a number of other things happened. My dad was diagnosed with cancer, and I used AI to try to steer his treatment. I remain vehemently an AI optimist. I think the access to clinical wisdom you can get, if you know what you're doing, is tremendous.

But when you have a personality disorder or are in a very critical place with your mental health, chatbots are not a great place to spend time. They're used to rational instructions. They're not used to the psyche of someone who is very sick, and so it just becomes a dangerous place.

We spent months trying to very diligently guardrail every corner of the product. And we were close to launching when this new legislation came out, and that threw me for a loop. I just couldn't get conviction.

After I posted about it on LinkedIn, probably 10x the volume of people that you see in the comments wrote to me privately, and many of them shared very personal anecdotes about AI psychosis they'd witnessed first-hand. They really subscribed to my warning that this is something we need to talk about very publicly and try to rein in.

MB: Do you think that large language models are fundamentally unsuitable for this kind of product, or could they be modified to be more suitable?

JB: It's important to disclose that I'm not an expert in computer science, but from everything that I know, I wouldn't say that it was a fundamental block.

I would say that there are a number of things that I believe need to change. Building transparency in a responsible way is probably the starting point for anyone deploying AI in a high-risk environment. And that doesn't just mean hand-waving about SOC 2 or ISO compliance. It means actually showing your work, proving that you've done your diligence and that you've built the requisite controls to make sure that the system is operating safely.

I think that there's also just a massive amount of ignorance, unfortunately, and that's on us as technologists to educate people. People don't realise that these models are essentially stochastic parrots. They don't behave reliably, predictably, deterministically.

That's part of their design. It's what makes them pass the Turing Test, because you can't pin them down. And with that novelty comes a tremendous amount of regulatory difficulty.
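[Editor's note: to make "non-deterministic" concrete, here is a toy Python sketch, using made-up token scores rather than any real model, of how sampling the next token from a probability distribution means the same prompt can produce different outputs on different runs.]

```python
import math
import random

# Made-up scores for possible next words; a real model produces
# thousands of these at every step.
logits = {"fine": 2.0, "okay": 1.5, "worried": 1.2, "unsafe": 0.3}

def sample_next_token(logits: dict, temperature: float = 0.8) -> str:
    # Softmax with temperature, then sample: higher-scoring tokens are
    # more likely, but lower-scoring ones still come up sometimes.
    weights = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    r = random.uniform(0, sum(weights.values()))
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # floating-point fallback

# "Running the same prompt" five times rarely gives the same sequence.
print([sample_next_token(logits) for _ in range(5)])
```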

An open conversation about that is something that has been surprisingly lacking. Over and over again, we see frontier labs getting into a bit of a mess with things, and it takes weeks, if not months, if not years, for it to come out.

So like any catalysing step-change in technology, you've got to be very mindful of safety. And I was just quite cynical about those warnings going into this. How bad could it really be?

But there’s a Wikipedia page that acts as a ledger for all deaths attributed to chatbots, and it's pretty sobering stuff. So I'm both optimistic about the catalysing nature of the frontier LLMs and somewhat cautious. People who are operating in particularly high-risk environments need to be very, very careful with how they deploy them.

MB: It was legislation by a US state that led to the end of Yara. Now David Sacks and the Trump administration want to put an end to individual state regulations around AI and have one federal rule set for tech companies to follow (even if the approach has been questioned). What are your thoughts on this?

JB: It's objectively true that 50 legal systems, all with different rules, is a nightmare for startups. It's also objectively true that no regulation is a nightmare for consumers. So I think that we have to strike a balance.

I actually don't have anything negative to say about the Illinois rules. It was a wake-up call for me. It was a difficult wake-up call, but it was one that I needed to have. I think we need a federal rule set that is not zero, and not every industry having its own set of federated rules that come from within a society that manages that industry or something. It needs to be relatively coherent, relatively transparent and ideally relatively uniform.

But if you look at the National Institute of Standards and Technology (NIST) rules, they do a pretty good job of talking about transparency and responsible deployment. ISO 42001 does a pretty good job.

And so I think really, it's going to come down to how the insurers, auditors and legal systems react, and that inherently is going to take a couple of years to play out.

But if you look at first principles, the idea of having a duty to your customer is as old as business itself. And the laws of tort and negligence are well trodden. They set pretty reasonable, pretty robust standards for fair dealing and for what organisations should do.

So I believe that when we think about AI systems, especially agentic systems with autonomy, that needs to be front and centre of how they're set up and how they're established. Otherwise, OpenAI can maybe settle the lawsuits that are thrown its way, but many smaller businesses can't. So I think the whole system of liability and risk management needs to mature quite quickly to rein this in.

How GLACIS presents itself on its website

MB: So tell me about your new startup, GLACIS.

JB: Rather than just sit in a comfy armchair, flinging LinkedIn posts around, it's been my intention to put my money where my mouth is. And so I'm very fortunate to be talking to you from an incubator in Seattle that has invested in GLACIS.

GLACIS uses cryptography and other modern methods for deep proof, mathematical proof, to show, rather than tell, that safety features execute in production.

It's early days. We're about a month and a half in, but we already have a couple of customers who have leaned in on the healthcare AI side, and we've been having a lot of really interesting conversations with regulators and professionals from across industry.

It does make me relatively bullish that there are a lot of very smart people who care about this problem space. There are grumblings from certain political foghorns about trying to get rid of all AI regulation in the USA. And I think it's going to get a lot messier before it gets cleaned up, which is why we're trying to create a standard that doesn't require any blessing from any legislative authority.

MB: Do you have any examples of how this new product can be used?

JB: It’s a bit like a black box recorder on a plane, recording the telemetry and the decisions in a tamper-evident way, so that if anything then needs to be audited later, it can be.

But unlike just a recording system, what GLACIS is actually doing is it's enforcing. Think of it as a sort of watchdog. When data is sent to an AI system, it checks if the data has been properly established and set up, if there is any prompt injection, or anything nefarious in the outbound data.

And then on the way back, we look for hallucination and drift; we look for bias; we look for really clear signs within our policy enforcement stack that the AI is acting within or outside of the scope of its role.

That's what most people in the media would call a guardrail. The critical difference is that we are not just saying that the guardrails ran, but we're proving that they ran by putting an anonymised hash of the outcome of the guardrail into a transparency log.

That helps organisations that are going through security and AI safety reviews at health systems or banks. It helps them sell into those larger systems by proving that they've got compliant and responsible AI systems in their products.

So it's very similar in that respect to compliance and governance tools that help you manage evidence collection for things like SOC 2 and ISO 27001. We're trying to re-envisage that for the AI era.
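[Editor's note: Braidwood didn't share implementation details, but the "prove the guardrails ran" idea can be illustrated with a small, hypothetical sketch: hash each guardrail verdict rather than storing raw user data, and append it to a hash-chained, append-only log so that any later tampering with the record is detectable. The names and structure below are illustrative, not GLACIS's actual code.]

```python
import hashlib
import json
import time

def _sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class TransparencyLog:
    """Append-only, hash-chained log of guardrail outcomes (illustrative only)."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, check_name: str, passed: bool, details: dict) -> dict:
        # Store a hash of the verdict payload, not the payload itself,
        # so the log contains no raw user data.
        payload = json.dumps(
            {"check": check_name, "passed": passed, "details": details},
            sort_keys=True,
        ).encode()
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "timestamp": time.time(),
            "check": check_name,
            "payload_hash": _sha256(payload),
            "prev_hash": prev_hash,
        }
        # Chain the entries: each entry's hash covers the previous one.
        entry["entry_hash"] = _sha256(json.dumps(entry, sort_keys=True).encode())
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute the chain; any edited or deleted entry breaks it.
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("timestamp", "check", "payload_hash", "prev_hash")}
            if e["prev_hash"] != prev or _sha256(
                json.dumps(body, sort_keys=True).encode()
            ) != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True

# Example: log that a prompt-injection check and a hallucination check ran.
log = TransparencyLog()
log.record("prompt_injection", passed=True, details={"score": 0.02})
log.record("hallucination", passed=True, details={"grounding": 0.91})
assert log.verify()  # an auditor can re-check this without seeing user data
```

In a scheme like this, an auditor can re-verify the chain without ever seeing the underlying conversations, which is the "show, rather than tell" property Braidwood describes.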

MB: Looking back home to the UK, what are your thoughts on AI regulation here?

JB: I think that the UK has a really interesting role to play right now in AI leadership. I'm really pleasantly surprised by some of the things we've accomplished. The US and UK partnership on AI safety, for example, is a tremendous accomplishment, and it makes me proud to see just how seriously the current government is taking AI and the energy crisis it potentially creates.

In some ways it's ironic, actually, because I was quite opposed to the idea of leaving the EU, but the latitude we've been given, away from EU regulation, to do things slightly differently has, I think, become quite valuable.

If you're an entrepreneur operating in the United Kingdom, thinking about the future of AI, there's a lot of opportunity there, and it's a great place to be a founder.

Obviously it's easy for me to say from over on the US West Coast, but I do think there's a really interesting opportunity space, especially as some of the bluster around the recent AI hype cycle wanes.

I think 2026 will be a transformative year for startups, especially in the UK, where there's a good combination of appetite and regulatory flexibility around how to think about using these models in applied settings.
