- PreSeed Now
- Posts
- Helping A.I. understand the world like a human
Helping A.I. understand the world like a human
Aligned AI is building a 'safer' alternative to GPT-4
PreSeed Now brings you an in-depth profile of a different B2B or deep tech startup every Tuesday and Thursday. Subscribe to get it straight to your inbox.
Hello there,
“Artificial intelligence could lead to extinction, experts warn,” a BBC News headline explained earlier this week.
This is a topic I first wrote about nine years ago, and aside from the image of Terminators it used (sorry!) my article has only become more relevant with time.
Today’s startup profile is a kind of sequel to that old article, as the startup co-founded by the person I interviewed back then. Scroll down to read all about Aligned AI.
But first:
I notice (thanks, Claire) that the UK Government is conducting a survey to help it better understand the university spinout landscape. I know a lot of spinout founders read PreSeed Now, so if that’s you, take a look: See the survey
“It’s like Crunchbase, but better for pre-seed” is one of my favourite bits of feedback so far about our PreSeed Now Startup Tracker. Check it out
– Martin
💡 Investors: let’s talk due diligence
When you invest in a startup, you need to be sure their tech–the true value behind the deal–is everything they claim it is.
PreSeed Now is sponsored by thestartupfactory.tech’s Technical Due Diligence offering
This service goes much deeper than the technology itself, taking in leadership, technology strategy, product and roadmap, people, engineering practices, and commercial analysis
Aligned AI is building a ‘safer’ alternative to GPT-4 that understands the world more like a human
10 years ago, at a conference in London, I had a fascinating conversation that I instantly knew I needed to follow up with an interview.
That conversation was with Stuart Armstrong, then with the Future of Humanity Institute at the University of Oxford. The resulting interview, titled non-hyperbolically ‘Artificial Intelligence could kill us all. Meet the man who takes that risk seriously’, was eventually published in March 2014.
I still think about that piece a lot, especially as the existential threat A.I. could eventually pose has become a mainstream topic of conversation (even on radio phone-ins!) in 2023.
Armstrong, meanwhile, has shifted from exploring the theory of an A.I. apocalypse to trying to do something about stopping it as the co-founder of a new startup.
Aligned AI is developing what it’s pitching as a safer alternative to the likes of GPT-4. “Unleash the power of artificial intelligence safely and without disasters,” its website reads.
“Fundamentally, we're making safer and more robust artificial intelligences,” explains Armstrong’s co-founder at the startup, Rebecca Gorman. “We've developed fundamental methods for improving machine learning.”
What is ‘concept extrapolation’?
Gorman says Aligned AI’s tech has applications in fields such as computer vision systems and robotics, but the startup will be applying it first to the large language model market, with the goal of providing a safer alternative to models currently available, such as the much-hyped GPT-4.
At the core of this is what Gorman and Armstrong describe as ‘concept extrapolation’, which is designed to do a better job of figuring out what a human wants from the A.I.
“Artificial intelligences built today are very fragile. They perform very well on their training data, but they perform poorly outside of the training data,” Gorman says.
She gives the example of an A.I. vision system trained to identify huskies and lions. Given that you never see a husky on an African plain and you never see a lion in the snow, the A.I. in effect becomes good at telling yellow things from white things. A husky in a desert may well confuse it.
Another example is how self-driving cars can perform well on the streets of Arizona or California, but take them to a less predictable environment with, say, more variable weather conditions, and they can struggle.
While A.I. developers can counter this problem by drawing on more diverse training data, Gorman believes Aligned AI has achieved a notable increase in performance over comparable technology.
She draws on Asimov’s first Law of Robotics to paint a clearer picture:
“If you want to tell a robot not to harm a human being, you have to find a way to be able to communicate to it what a human is and what harm is, in pretty much the same way that a human understands those concepts.
“With traditional machine learning, we're nowhere near communicating that. With the concept extrapolation that we've been developing at Aligned AI, we're getting closer, and we'll continue to get closer with our research.”
In the lab
Aligned AI’s website offers a description of how its tech can gain a better understanding of the differences between celebrities Owen Wilson and Beyoncé than other A.I.s might, resulting in a more ‘human’ understanding of the world.
As a demo, the startup has built a ‘Safer Prompt Evaluator’ designed to help keep A.I. chatbots from acting in ways their operators might not like.
On GitHub, Aligned A.I. demonstrates how its code can be used to screen users’ ChatGPT prompts to avoid ‘jailbreaks’. A ChatGPT jailbreak is where a user persuades the A.I. to roleplay as a character and then perform tasks it would not otherwise be allowed to.
Upgrade to Premium to read the rest of this article 👇