

Ben Pring is a world-recognized IT futurist, author, and thought leader with a storied track record of analyzing the cutting edge of business and technology. He co-authored three best-selling and award-winning books. He is a frequent speaker at the World Economic Forum, and his work is covered in the Wall Street Journal, the Financial Times, the New York Times, NPR, PBS, Axios, and many other leading media outlets.
Ben shared his thoughts with RGP on AI’s impact, why AI causes so much anxiety, and a model for upskilling workers for the AI-driven future.
You have observed and written about the intersection of business and technology for much of your career. How would you characterize this current moment in time?
AI is the great story of our time. Every day, there’s a new moment where something that you thought was 10 or 20 years away is actually happening in real-time in front of us.
I’ve been in technology for 40 years, and I was one of the first people to pick up on the emergence of cloud computing 25 years ago. But that cloud wave is going to be nothing in comparison to the AI wave of the next 25 years. It’s going to see new empires rise and old empires crumble.
Preparing for the Coming AI Wave
What will the AI wave mean for workers?
A lot of knowledge workers are going to find the wind against them rather than behind them in the next few years, and that’s going to be very disruptive. Even so, you can’t not be excited about the possibilities that this incredible new technology is going to bring us, particularly if you’re a technologist.
What technology of the last 30 or 50 years would you consider the best analogy to AI in terms of its impact?
People talk about the Fourth Industrial Revolution, a phrase that came out of the World Economic Forum a few years ago. And that’s a good analogy. Having lived through the cloud journey — how technology is created, how it’s deployed, how it’s optimized — there’s much to be learned that will be useful with regard to AI going forward.
When the Apple computer was launched 40 years ago, Steve Jobs called it a bicycle for the mind — it would help you go further, faster than walking. The computers we have now aren’t bicycles for the mind, they’re motorcycles.
But let’s look at the automobile industry and automation. Its advent created huge amounts of work for people and lifted hundreds of thousands—maybe millions—out of poverty. Detroit became a fabulously wealthy city on the back of that. Then the work slowly became automated, and jobs were lost, and we saw what happened to Detroit.
We’re on the cusp now of the automation of knowledge work. What’s going to be the consequence of that in 50 years? It might be very troubling.

Technology itself is neutral. How it is deployed determines if the consequences are positive or negative. With AI, how do we maximize the positive and, if not avoid negative consequences, at least minimize them?
First, we must answer, “Who is we?” If it’s society at large, the answer is education and training. It’s about allowing people to go on this journey with us.
Government has a role to play, business has a role to play, and education, particularly, has a huge role to play.
I often quote Kevin Kelly, founding editor of Wired magazine, who said, “The formula for the future of your work is X plus AI.” X is whatever you do—whether you’re a writer, a coder, a doctor, a lawyer. If you add AI to that, that’s how you get to the next threshold of productivity and competitiveness.
That’s a very simple formula, and I think individually and collectively within corporate structures or societal structures, people are just beginning to understand what that means.
Embracing a Shared Model for Workforce Training
What should CEOs be doing in the face of this coming disruption, and how do they evolve their corporations?
There are a lot of things that CEOs can and should be doing, and many are. There are a few models for corporations to consider.
A few years ago, one organization basically put out an APB to their entire workforce, sketching out the scenario that we’re talking about now, saying, “The world is changing. The opportunity for us as a company is changing. We need collectively to change, and we, the corporation, want you as an individual contributor to succeed within this changing environment.”
The deal that company made with their workforce was to pay for the training that employees needed to continue to be relevant, but the quid pro quo was that employees had to do it on their own time—evenings, weekends, vacation time, etc. And I believe that’s been quite a successful model, because both parties have skin in the game.
Some people find this very odd, but I think that’s a very good, balanced approach that shows that while we all have individual agency, the corporation does have a role in helping people. I’ve seen quite a few clients use this model.
What about this is odd, as you term it, to some people?
There are those who think that the training and the time to do it should be the company’s burden. And, you know, I can see that logic. But my example speaks to a balanced view. Employers can’t afford to be as paternalistic as they were 50, 60, or 70 years ago.
Will the kind of training people need be related to understanding and keeping up with technology, or will it be centered on human skills—the ones AI cannot replace?
Certainly, an element of it will be technological. In the early days of software as a service, many providers believed that as software moved more to self-service, it would be easy for employees to transition from the old software to the new, and they would find the new software intuitive.
For many employees, the transition was hard, and the software wasn’t as intuitive as the vendors claimed. The same is going to be true of these new generative AI tools. We’ve all played around with them, but integrating these tools into the job that you do is going to be very different from how most people have used them thus far.
There’s going to be a huge amount of process re-engineering that the big consultancies are going to help their clients with—rewriting the way we structure workflow and making it more efficient, rather than just automating it and doing bad things faster.
Training to Be a Better Human
What about soft skills?
We may see a renaissance of the Swiss finishing school model. The way we conduct business has degraded in the last 40 or 50 years. It has become faster, more transactional, coarser. I think we may see a renaissance of trying to teach young people the manners of business, how to be a good businessperson, a good human being, frankly.
If I’m going to hire a $1,000-an-hour lawyer, and I know that the work is really being done by AI through a large language model, that lawyer better give me a bloody good meeting. It better be a bloody good lunch.
If you’re a leader at a company, how long do you have to get your arms and head around this?
That “X plus AI” model should be the messaging from top to bottom—from the CEO on down. Go and figure out in your line of business, in your group and your division, in your role, how that formula makes sense for you. But I would caution leaders that it’s later than they think.
In general, people tend to be somewhat reluctant to embrace the new. But AI seems to make people downright uncomfortable. Why?
Even in our modern, sophisticated world, we are afraid of the future. It’s a human characteristic. And for most people, their exposure to AI has been through science fiction—novels and movies. Within that context, we anthropomorphize the technology.
And then, of course, within the dramatic context of art, literature, movies, etc., there’s always a protagonist and an antagonist—a good guy and a bad guy. Often technology has been the bad guy. But the next generation won’t see it that way because their initial exposure to AI will have been firsthand, in an actual use case.