The Conversation asked Dr Dan Nicolau, who researches the interface of biology and computing, medical science and artificial intelligence (AI), for his thoughts.
AI has been the focus of this deal, but what other technologies could it benefit?
It's a large amount of money, even for a wealthy medium-sized country like the UK. The big unknown is whether the various technical challenges associated with these technologies can be overcome. It's a big bet, and overcoming them matters if you want to see big economic benefits over the next ten years, say.
Those challenges include hallucinations by AI systems, where they invent information. Another relates to whether we have enough electricity. The government could build nuclear power plants, but that's a big commitment over many years.
It's also not totally clear that some of these tech advances translate directly into economic growth. There's a quantum computing component to this deal, and it's directed towards developing lots of new drugs.
It's not completely clear that, if you had 100 new drugs for cancer, you would have the capacity to run the clinical trials to show that they worked. Maybe AI can accelerate clinical trials, but all of this is a big question mark. It's a big bet that we can solve these problems in three to four years so we can get some economic growth over the next ten years.
Occasionally, there is some big breakthrough in quantum computing. Last year, Google announced that it had built a processor that drastically reduces the errors that quantum computers are prone to. But it's not clear that running quantum computers at very cold temperatures, scaling them up or reducing their error rates translates into concrete breakthroughs. It might do, but it's a little bit of a gamble. We don't necessarily know if we'll have answers in the next five years, but I think we'll know a lot more in the next two.
Of the potential technologies, such as self-driving cars, drones and advanced chips, which could yield real benefits to people?
The question is about timescale. Say I knew I had a drug that could cure breast cancer or lung cancer. Even if I could prove that scientifically, it would be ten years before anyone could receive that drug, because I'd have to publish the paper, it would have to be tested in mice, and then there would be clinical trials in humans.
In other areas - accounting, planning, contracts in law, all of that stuff - things can be done now. Some things can be done so much faster that it's hard to imagine there wouldn't be a major economic impact within two years. Because AI tools are able to accelerate the writing of computer code, I think that's an area where we'd see maybe half of code being automated in the next two years. Whether that's good for people in coding jobs, let's see.
With drones, there are a couple of challenges. They can't operate alone, so they need constant human supervision. We're working on a project where we're trying to upload a fly brain into a drone. The fly already knows how to fly, so if you put its brain into a drone, the drone will be able to fly on its own. AI tools can really help with that, and with making cheap, genuinely autonomous drones and other things, like mini submarines, for example.
Also, drones can only operate for short periods of time. So if AI tools were able to help them recharge on their own or share tasks, the technology could go from something used by hobbyists and the military to self-driving cars and drug delivery to people's homes - all of that becomes much, much easier. It also opens up infrastructure development for drones, like highways in the sky built specifically for them. The potential economic openings are huge, but each of them has a question mark over it.
How likely is it that some of these problems can be overcome, do you think?
That's the billion-pound question. With the hallucination problem in AI, there is no viable solution on the horizon. There are lots of patches, and some of them are quite clever.
We don't understand why hallucinations happen, because it's a science problem and there are hardly any scientists working on it. OpenAI has 100 scientists on that team and 1,000 engineers; Google is the same. The engineers can't fix it because it's not an engineering problem.
But it's wider than that. Large language models (LLMs) will provide answers based on the data they've been fed, but they're missing common sense. Until these problems are solved, they're not going to be able to operate without loads of human oversight.