01/19/2026 | Press release
UK SMEs offered a clear approach to responsible design and use of AI
A NEW online course to educate UK business leaders on the need for the responsible design and use of artificial intelligence (AI) is to be launched by the University of Edinburgh.
Part of the UKRI-funded BRAID (Bridging Responsible AI Divides) research programme, 'Responsible AI for SMEs' is designed specifically to help small and medium-sized businesses make informed, grounded decisions about AI and to narrow the so-called 'AI trust gap', the growing divide between the capabilities of AI systems and the level of trust people are willing to place in them.
Spanning five self-paced modules, the course prioritises responsible judgement, equipping entrepreneurs with the insight and tools to assess whether, how, and when to engage with AI, without promising shortcuts.
'Responsible AI for SMEs' includes insights and contributions from UK SME business owners and responsible-AI leaders across industry and civil society, bringing in real-world perspectives on both the promise and pitfalls of AI. Rather than promoting AI use at all costs, the course supports businesses in making their own informed, grounded choices about whether and how to adopt it.
The course will be freely available via the edX platform and is also accessible through Coursera.
Professor Shannon Vallor, Co-Director of BRAID, is an AI ethicist with more than 15 years' experience working at the intersection of technology, philosophy and industry, including two years at Google, where she helped build practical and ethical approaches to AI. She said:
"This course is designed to help SMEs filter through the noise around AI, offering grounded, practical guidance that helps them make good decisions, including the decision not to adopt AI at all.
"Our goal is to help businesses move forward with confidence, not just because they feel they should adopt AI, but because they understand it well enough to make the right call for their business and their customers. It's perfect for thoughtful leaders who value depth, careful judgement, customer trust and risk management."
Professor Vallor co-designed the course alongside Professor Ewa Luger, Co-Director of BRAID and Chair of Human-Data Interaction at the University of Edinburgh, who added:
"AI is already in the workplace, whether businesses know it or not. What we're offering is not a fast-track to AI success, but a way to understand the implications, manage the risks and protect the trust that businesses work so hard to build."
Ana Betancourt is co-founder of Edinburgh-based creative technology company Black Goblin. The company uses AI to support professional sound designers by reducing time spent on repetitive tasks.
"For many SMEs and start-ups, the biggest challenge with AI is not technical, it's judgement," she says. "Speed is often presented as the answer, but going slower allowed us to avoid building something our users would reject or worse, something that undermined their trust in us."
"In some cases, the responsible decision may be not to use AI at all, or to limit its role very carefully. Listening to concerns led us to pivot away from features that crossed the line between assistance and replacement, even if they were technically impressive."
Daniele Quercia is a Research Director at Nokia Bell Labs in Cambridge, where he has spent a decade developing tools that help organisations build fair, transparent and accountable AI systems.
Daniele said: "Because we are building systems and AI is going to be everywhere, we need to make sure we're not generating risks for people. A developer's small decision might have a massive impact. The big issues are misinformation, deepfakes, the unfair distribution of opportunities, and a significant reconfiguration of the job market.
"At the moment, 60% of AI development cost doesn't go into developing AI; it goes into redeveloping AI, and that cost exponentially increases with the stage of development. To fix something at design stage is very cheap; to fix it after deployment is disastrous, especially for a startup."
Each module of 'Responsible AI for SMEs' tackles a core topic.
Learners will build a Responsible AI roadmap to help them turn these insights into practical next steps tailored to their business. In addition, reflective activities and peer dialogues offer further value, helping leaders to shape their own ethical framework around the technology.
The BRAID programme is funded by the Arts and Humanities Research Council (AHRC), part of UK Research and Innovation (UKRI).
To find out more about the course or to register interest, visit: www.ed.ac.uk/online-learning/register-your-interest-responsible-ai-for-smes-short-course