Dell Technologies Inc.


Building an AI-Skilled Workforce

This blog is authored by David Nicholson, Chief Technology Advisor of The Futurum Group.

tl;dr: Many organizations struggle with a lack of in-house AI expertise. Experts from Dell and MIT discuss how education, upskilling and collaboration can close the skills gap and ensure responsible AI adoption.

In January, I had the opportunity to speak with Vivek Mohindra, Dell Technologies Senior Vice President of Corporate Strategy, and Cynthia Breazeal, MIT Dean for Digital Learning.

We touched on a recent study conducted by ESG, which found that a lack of in-house employee expertise or skill is the No. 1 challenge organizations are facing today when implementing GenAI applications.

Our conversation covered some of the most important considerations around AI adoption, including how to address technical complexity, interdisciplinary skills and talent scarcity.

The following has been edited for length and readability.

There was a recent study by ESG that basically asked, "We've got this amazing generative AI technology. What's keeping you from realizing the pot of gold at the end of the rainbow?" The number one thing that came back was this idea that there's a skills gap. Employees and members of organizations don't have the skills necessary to take full advantage of AI.

Breazeal: At MIT, I'm the Dean for Digital Learning. I'm also the Director of the RAISE initiative. RAISE stands for Responsible AI for Social Empowerment and Education. Through that initiative, we believe AI is for everyone. If you're using a digital technology, it's affecting you, whether it's your beliefs, your opinions, how you learn, how you find information, you name it. This is beyond being digitally literate and a digital citizen. We need to create an AI-literate world. At the Media Lab, we want to prepare people for a future where AI and people together can create more value than either AI alone or people alone.

What do we do about it? The Media Lab is the home of constructionist learning: learning by making and creating. We have been creating a host of curriculum materials starting in K-12. That is one way you can reach generations, teaching them about these technologies and how they work to demystify them, but also the responsible design and social implications of the technology, through open, free curricula and tools. We talk about a low floor, a high ceiling of creativity and wide walls. We are trying to create curriculum based on learning through making and doing, but at MIT we also talk a lot about computational action. You can make working mobile apps using AI with tools like MIT App Inventor. Kids can make things that make a difference to themselves and their community. That is super empowering and gives them a broad perspective. How do you create? How do you innovate? How do you create new value by harnessing AI in a responsible way? Enabling a much broader segment of the population, a much more diverse and inclusive set of people, to do that is important for our future with AI.

I know Dell is involved in education, but what about from the enterprise perspective, where you have adults? This is the classic case of teaching old dogs new tricks. Have we been through anything like this before? How is your perspective different when you have companies, and everyone in the industry, demanding ROI from AI?

Mohindra: We're first making sure that our employees are well-versed in AI. We created a simple four-module AI fundamentals training, which we did not make mandatory; pretty much the entire company took it anyway, to make sure that people understand AI basics. The second thing we've done is create AI-skilling courses specific to particular jobs. I would call them 201- and 301-level courses, for example if you want to use AI for content creation or for coding. We have created tracks that people can train themselves on.

The third thing: We have created, in partnership with NVIDIA, a skills and certification program on AI, which we have taken to our customers, partners and communities so they can leverage our learnings. Companies are asking about ROI. It starts with use cases and data, making sure the models and the way they're implementing AI are the best and most economical possible, and then that the infrastructure is the most economical and responsible. All of that requires fundamentally new skills.

The industry has seen this before. When PCs came about, people had similar sentiments, but look what's happened to desktop publishing and productivity since then. When spreadsheets came along, companies had human calculators adding up numbers, and people were worried about those jobs. Those jobs migrated to financial planning jobs. Prompt engineering didn't exist about 18 months ago. Now it's one of the hottest fields, combining classic computer science training with more classic humanities-oriented training. This is an exciting era. There's a lot that will be different. Companies that embrace it will see massive ROI, but they'll have to go about it in a thoughtful way.

Cynthia, Vivek mentioned spreadsheets as an example. I'll admit that I am far from a power user of Excel. I probably know how to leverage 5% of its capabilities, yet it's an extremely powerful tool for me. There are others who are power users who probably can leverage 30% of what's available. When we talk about skills from an AI perspective, the skills required to fine-tune a large language model are very different from the skills necessary to just use the tool as an end point worker or as a consumer. Are you focusing on both of those things at MIT? What are your thoughts on that spectrum of requirements for skills?

Breazeal: When you learn through making, you get a more visceral understanding of what it takes to make these systems work than just watching videos and doing problems. There's a lot of intuition that you build when you're trying to create something with these tools and technologies.

Along with that, we're trying to lower the floor and give people the creative power to create things that are interesting and meaningful to them. In industry, teams of people create solutions, so it's important that everyone on the team has a shared understanding of AI vocabulary. I ask folks in industry, "How much are your designers or these other kinds of people talking to your core technology people?" There's still not enough conversation and collaboration around AI across these kinds of tasks and skills.

Part of it is trying to build a common enough understanding of vocabulary, so you can have more effective collaboration when you're trying to create innovative solutions with these technologies. There's a lot of opportunity in the broad upskilling aspect of this.

Mohindra: Common vocabulary is an important point. That is why the four simple 15-minute Dell AI fundamentals modules we put in place established it across the whole company; it's a practical way for companies to go about establishing that. The lowering of the floor is important for lots of companies and enterprises as well. You could lower the floor for using these tools and empower a whole range of other people, in lots of economies, to take advantage of coding with a lower threshold. Similarly for content creation and a slew of other areas. Both points are important. I'm glad that institutions like MIT are attacking it from one vantage point while we as companies are attacking it from the other. They are all consistent and meet in the middle in some way.

Is that a taboo subject at MIT, the idea that maybe you'll be graduating really capable AI folks who don't necessarily have some of the computer science skills?

Breazeal: MIT anticipated this. We created the Schwarzman College of Computing around that idea. We were getting overwhelmed by requests from students wanting to take computer science classes, so we found ways to bring these tools and technologies across the schools and disciplines. You're seeing a lot of innovation and use of computation and AI in all these subjects: science, technology, humanities, arts, you name it. Let's push it into all those disciplines, so they can advance those tools, technologies and practices within those disciplines.

That was responsive, appreciating that AI is transforming so many different industries and aspects of society, and that it's a really powerful tool that we want many more people to be able to use to their advantage, to create value, achieve higher ROI and enable opportunity.

We need to figure out a way to be more inclusive of who can master these skills to get access to jobs and opportunities, beyond the four-year institutions. Community colleges are a terrific place to consider. What are those practical skills that we can create and credential or certify against that are meaningful to industry, so they can get into those jobs much faster without incurring as much debt? Bringing the middle class into this wave is important. It's going to take a focused effort to say we want to innovate in these other segments of our population to make sure that this AI-powered future is inclusive.

Mohindra: Dave, you mentioned I went to MIT for grad school. I've always been impressed with how MIT has consistently, over decades, demonstrated this ability to lean forward into these types of things. When open learning platforms came about, MIT was already thinking through how a four-year degree experience should change with them. Now with AI, I'm not surprised to hear what Cynthia is describing: MIT, and I bet other institutions, are continuing to lead the way.

As companies, we are thinking and rethinking: What do we need in these different roles? We have always had traditional definitions of these job specifications, but what do we need now, recognizing that these new types of tools are available? Either somebody has learned them before they enter our workforce, or we can ramp them up quickly and then allow them to do something very different. This whole notion of a very different workforce and a lowering of the barriers will start emerging. What do you do? How do you use these tools to get much better outcomes?

How do you balance the requirements for speed of innovation in AI with making sure that we are being responsible? From a society perspective, there are all sorts of different angles. Privacy is just one.

Breazeal: A lot of it begins with having the right education, training, practices and tools to help ensure, as much as possible, responsible design that unlocks opportunity and minimizes potential harm. Thinking about K-12, starting as young as possible: MIT was, I think, the first educational institution to say that the way we need to build AI literacy is not just to teach about AI and how it works, but, dovetailed with that, the societal implications, both potentially positive and negative, and the responsible design of these technologies. So no matter who you are growing up, and whatever profession you take, we all have that foundation of understanding how AI works in an appropriate way, having an informed voice in how we want it used in society, and preparing young people to feel they have the mindsets and the skill sets to shape the future with AI.

Starting in middle school, we created the first curriculum, "AI and Ethics." When you weave those two together, young people's eyes light up, because the first assumption is: math? Code? It's all neutral, right? We're like, not so fast. Once you start to optimize for something, once you train an algorithm to maximize a certain outcome, you have now encoded a value into that code. Values are not neutral. Whose values are those? What things are you trying to maximize? Who does that potentially preferentially benefit or harm by making that decision? These are everyday design decisions. Anybody who's making an AI-powered solution has to contend with making them. You want to be transparent and understand who your stakeholders are and what their values are. You may choose to design something a certain way, but you need to have a full understanding of why you're doing it and how.

I want to hear what you think industry's role is. Industry at large, or a company like Dell Technologies: do you need to be agnostic purveyors of the foundational gear for this and then let others manage the rest?

Mohindra: Think about it from a fundamental, first-principles perspective. First, companies are grappling with which use cases they should point this toward; responsibility lies in there. You've got to make sure that you, as a company, are not enabling anything that may not be responsible, which obviously takes the form of training your employees and having governance in place. We were one of the first in the industry to appoint a chief AI officer. We got a head start on this, and we put very good governance in place, so that's where it starts.

Number two, data is the fuel. You've got to make sure, the same principle as with my own data, that you are using data you are entitled to use, and in the way it was meant to be used, and that you're providing the ability for people to opt out of that data usage.

Then there are processes, tools and technology that feed off of each other, all of which have an element of responsibility in them. In the tools and technology domain, responsibility for us takes the form of making sure that whatever we are doing is responsible: our product development is responsible, our software development principles are responsible. They can be validated. There's openness around how it's all developed. We've been working with customers to make sure they understand the governance principles we put in place, so they can apply the same. This really is a broad shared responsibility. The trickiest thing here is balancing speed, innovation and responsibility. That really does require very strong governance and a moral compass for a company. It also depends on how governments and regulations come into play. I know governments are grappling with this; it is the number one topic when I speak with different government ministers all over the world. That is something that really needs to be balanced well.

Breazeal: Right now, AI is not a particularly diverse or inclusive discipline, and a lot of responsible AI is making sure we have people from very different lived experiences as the designers and makers of this technology, because they're going to bring the viewpoints of the users we're trying to support through these technologies. Diversity and inclusion are important in how we reach and train those learners. This is why K-12 and community college are so important. We've got to meet those learners where they are and help create a path for them into these positions of opportunity.
