
Blog Innovate UK


What will artificial intelligence (AI), done well, look like?

Categories: ISCF, Support

This week, I’m taking part in the Financial Times’s Innovation Dialogue Number 1, which is debating the impact and implications of Artificial Intelligence (AI) working alongside human beings.

It is a hot topic, receiving significant attention from academics, civil society, businesses and Government. Many people are excited about the opportunities, but concerns have also been raised at the highest levels of the science community. With this emerging debate in mind, what should we in the UK actually do about AI?

One thing we should certainly not be doing is turning our backs on AI: preventing its use entirely would squash its huge potential to benefit the UK. But neither should we ignore its potential risks.

Would you want electricity in the bath?

[Image: cartoon superhero character in a bubble bath with a yellow toy duck]

All very powerful technologies have, simultaneously, the potential to be forces for good and for bad. Take electricity. It can kill you. Easily. And yet we have it in every home in the country, and the global community is working to make it more accessible everywhere on the planet.

I would not want to live in a home without electricity. But neither do I want it in my bath.

What we have to achieve with new technologies is their responsible and safe use; reaping the benefits, and avoiding the problems.

Artificial intelligence has such huge potential that we cannot afford not to look at how it can be usefully exploited. It offers one overarching capability that we should not leave unused.

Probably the best decision-making in the world?

At its heart, what AI offers is better decision making. In all of the application examples I’ve seen, this capability is at its root. It has the potential to:

  • help manufacturing robots make better decisions based on what they sense
  • recognise faces to prevent fraud
  • determine how a manufacturing facility could be optimised
  • help interpret medical X-rays
  • and much more

At no point in human history has this capability been so important. At no point has the world been more complicated. At no point have we had so much data. And at no point has it been more important to make good decisions.

These decisions might concern the treatment of disease, the growing of food, the movement of goods and people, the mitigation of climate change or many other things. As we understand more about how these things are interrelated, finding the best possible solution becomes increasingly difficult. We need all the help we can get.

The next reason that the UK should proceed, with caution, to explore the opportunities presented by AI is more pragmatic. We live in an increasingly connected world. Nations trade with each other (well, actually, firms trade across national boundaries) and people can buy the products they most like, wherever they come from.

The success, or failure, of companies is largely driven by the quality of the decisions they make, and their ability to implement them. These can be strategic decisions at Board level (whether to launch a major investment programme) or right in the heart of operations (for example whether the calibration of a temperature controller on the factory floor has drifted).

The best decisions are driven by insight, which is usually derived from data. I’m old enough to remember, before the current success of the UK automotive industry, when the UK-owned motorcycle and car manufacturers were substantially put out of business by stronger overseas companies.

This decline happened because foreign cars and bikes were more reliable and better value for money: vehicles could be produced for less and they broke down less frequently. A major reason for the success, particularly of the Japanese firms, was their better use of data. They used Statistical Process Control (SPC) to control and improve their factories and products. W. Edwards Deming taught these principles in Japan during the 1950s and the impact was felt across the world in the subsequent decades.

SPC is just a kind of decision-making tool. It helps you decide whether to make an operating change to your process or not, and how big that change should be. Good use of data is essential to making the best decision. SPC has since evolved into Six Sigma, but the principle is the same…
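The SPC decision described above can be sketched in a few lines. This is a minimal, illustrative Shewhart control chart check; the baseline readings and the classic three-sigma rule are textbook defaults invented for the example, not any real factory's settings:

```python
# A minimal sketch of the SPC idea: a Shewhart control chart check.
# The baseline data and the three-sigma rule are illustrative defaults.
from statistics import mean, stdev

def control_limits(baseline):
    """Centre line and 3-sigma control limits from in-control baseline data."""
    centre = mean(baseline)
    sigma = stdev(baseline)
    return centre - 3 * sigma, centre, centre + 3 * sigma

def needs_adjustment(measurement, baseline):
    """The SPC decision: act only when a point falls outside the limits."""
    lower, _, upper = control_limits(baseline)
    return measurement < lower or measurement > upper

# Hypothetical temperature readings from an in-control process
baseline_temps = [199.8, 200.1, 200.3, 199.9, 200.0, 200.2, 199.7, 200.1]
print(needs_adjustment(200.2, baseline_temps))  # False: within limits, leave alone
print(needs_adjustment(203.0, baseline_temps))  # True: outside limits, investigate
```

The point is the decision discipline: the data, not a hunch, determines whether the operator intervenes.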

Use the data – get better decisions

[Image: human head containing icons for people, targets, charts and cloud, against a backdrop of DNA strands]

We are now entering a new phase in data analytics. One in which increasingly large volumes of data can be analysed, increasingly quickly, in ever growing types of applications. Done well, the output will be even better decisions. AI and machine learning (ML) are key tools to enable this improvement, and they are starting to be developed and exploited across the globe.

The question is not whether the UK should use AI. Instead we should ask: Why would we ever want our companies and organisations to make poorer decisions than their global competitors?

So, in fact, the UK has no choice. We have to do it. Or UK businesses that are currently world beating risk being overtaken by companies from elsewhere that make better decisions.

Might there be job losses as a result?

Undoubtedly some. No-one really knows how many. According to a recent report by Deloitte, the introduction of ICT resulted in 800,000 job losses in the UK, but it created nearly 3.5 million new jobs, paying salaries on average £10,000 a year higher.

There are fears that AI/ML might not create more new jobs than the old ones it disrupts. These are valid concerns. But, based on the experience of the car and motorbike industry, it is probably the case that NOT to invest in AI/ML makes organisations and everyone in them more vulnerable.

Whatever the risks of investing in AI/ML, they are probably less than the risks of not doing so

And so to autonomy. As I said, AI/ML provide a better way of analysing data and making decisions – at least, making recommendations.

The question for those implementing AI/ML is: who acts upon these recommendations? Does the AI propose a course of action that the user (a human-in-the-loop) weighs up and then decides to implement or not; or does the AI implement the decision itself (autonomy)? Which applies should depend on the case and on the risk. And in some cases, it is specified in law.
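That case-and-risk-dependent choice can be sketched as a simple gate. The risk categories, threshold and function names below are all invented for illustration; in practice they would be set by policy, and in some cases by law:

```python
# A hedged sketch of the human-in-the-loop question: autonomy is only
# granted below a risk threshold.  All names and levels are illustrative.
LOW_RISK = 1   # e.g. switching a kettle off
HIGH_RISK = 3  # e.g. declining an insurance policy

def act_on_recommendation(recommendation, risk, autonomy_threshold=LOW_RISK):
    """Execute automatically only for low-risk cases; otherwise refer to a human."""
    if risk <= autonomy_threshold:
        return f"executed automatically: {recommendation}"
    return f"referred to human reviewer: {recommendation}"

print(act_on_recommendation("switch off", LOW_RISK))
print(act_on_recommendation("decline policy", HIGH_RISK))
```

The AI makes the same recommendation either way; what varies is who is allowed to act on it.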

My kettle is autonomous, which is fine with me, but I don’t want to cede control of everything

[Image: boiling see-through glass kettle]

Human intelligence is generally held to be a good thing. We tend to admire it. But what really matters is what people choose to do with their intellect. There are, sadly, highly intelligent people in the world who put their ability to less than desirable ends.

So it is with AI. There is nothing inherently bad about an intelligent system. It is what then happens that matters. Often when people raise concerns about AI, they are raising concerns not about the AI itself, but about the autonomy they fear it may be granted.

I do think that we need to be very careful what level of freedom to act we confer on physical or cyber systems linked to AI decision making. Autonomy isn’t necessarily a bad thing. My kettle could be said to be autonomous, in that it ‘knows’ when it is boiling and turns itself off. (OK, it’s a hardware controller, a thermo-mechanical switch, not a software one, but the outcome is the same). I have no objection to my kettle switching itself off.

It is really important, as AI is implemented, that we harness its potential to do good, and find ways to prevent the less desirable (or risky) aspects of letting it take control of our lives in an unfettered way.

AI devices will need to give reasons for their decisions

Whilst I’m happy that my kettle switches itself off, I know why it has done so. Would I feel the same way if an AI system informed me that I couldn’t receive treatment for a disease I was suffering from, or denied me an insurance policy or a job interview? I very much doubt it. I’d want to know why, and I’d want to form my own view on the fairness of the decision. And ML systems aren’t very good at explaining their reasoning, because in a human sense they have none: they recognise patterns, but can’t necessarily explain why a pattern led to a particular decision.

In order for society to accept AI, it will have to have checks and balances built into it, including risk mitigation and the ability to explain why certain recommendations or actions are proposed.
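One simple way to build in that ability to explain is to make every decision carry its reasons with it. The rules, thresholds and field names below are invented purely to illustrate the shape of such an output:

```python
# A sketch of 'decisions with reasons': the outcome is never returned
# without the reason codes that produced it.  Rules are illustrative only.
def assess_application(applicant):
    """Return a decision plus the human-readable reasons behind it."""
    reasons = []
    if applicant.get("income", 0) < 20000:
        reasons.append("income below minimum threshold")
    if applicant.get("missed_payments", 0) > 2:
        reasons.append("more than two missed payments")
    decision = "declined" if reasons else "approved"
    return {"decision": decision, "reasons": reasons}

print(assess_application({"income": 35000, "missed_payments": 0}))
print(assess_application({"income": 15000, "missed_payments": 4}))
```

A person who is declined can then see exactly which criteria were failed, and challenge them, which is much harder when a pattern-matching model returns only a score.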

So, what will the impact of AI/ML (done well) look like?

It will give the UK economy an advantage, enabling more success for UK firms, and increasing UK global market share. It will probably keep some UK firms in business.

It will improve the way services are delivered: more safely, and potentially at lower cost. People will directly benefit as a result.

It will achieve the appropriate balance between ‘controlling’ things and ‘informing’ decision-making. Getting the degree of autonomy right is the real challenge, not the AI itself.

It will be integrated into our lives in a responsible way, with due respect for the views of civil societies, the privacy of individuals and the assignment of liability and redress if things go wrong.

It will help the global community to optimise solutions to an increasingly complex set of problems, such as those articulated in the UN’s Sustainable Development Goals.

Structures will be put in place to help those whose jobs are affected: through retraining, up-skilling, or help in finding new work opportunities.

And AI will do all this by helping us to make better decisions, and on controlled occasions by implementing them for us. So let’s keep an open mind on AI to ensure we prepare to realise its benefits for everyone in the UK.


Sharing and comments



  1. Comment by Bristol Builders Network posted on

    Electricity in my bath? I'm glad I only have a shower and replaced my tub with a walk-in wet room!

  2. Comment by Andrea Maria posted on

    I agree with this article. But some are reluctant because they feel AI will, at some point, completely replace humans, which would mean a lack of employment opportunities for the generations that follow.

  3. Comment by Paul Errington posted on

    We are already in this augmented age. I believe AI will help us reduce stress in work for way longer than we think before a fantasy sentient AI takes over.