Sunday, December 22, 2024

Work futurist on whether AI will take your job: Humans have 'innate sense of laziness'

The meteoric rise of AI has catapulted the world into uncharted territory. That’s good news for futurists like Amy Webb, who studies emerging technologies and uses quantitative and qualitative modeling to forecast how they’ll impact business and society.

As founder and CEO of the Future Today Institute, Webb has been grappling with AI and all its concomitant fears, from end-of-humanity doomsday scenarios to market flash crashes and job destruction.

We caught up with Webb earlier this week at the SXSW conference in Sydney. (Questions and responses have been edited and condensed.)

How is AI affecting worker productivity?

AI can greatly improve productivity for cognitive jobs where there’s a lot of reading, sorting and tagging, which you often find at professional services firms, law firms and investment banks. It requires fewer person-hours to do those tasks, and you can ask an AI system to find patterns that you may have missed. That said, it’s still humans using the tech. There are plenty of cases where there is ample technology around us and people are somehow less productive. Humans are sort of biologically wired to expend as little energy as possible — it’s literally within our cellular structure — so I am curious as to whether this indulges our innate sense of laziness going forward and what that might mean.

What will AI do to the job market?

People ask, “Is AI taking my job? Or taking a bunch of jobs?” But nobody’s asking what it would take for that to be true. We don’t have enough plumbers anywhere, right? In medicine, we’ve come a long way. You can use AI systems with computer vision to spot anomalies. But for AI to truly replace a knowledge worker, the workers themselves have to train these AI systems.

For instance, medical students in certain parts of the world are being offered money to sit for eight hours a day and click “Yes” or “No” as part of a training process called reinforcement learning from human feedback. But that’s ultimately a drop in the ocean. It’s much more productive to ask, “How is the business model changing going forward?” For example, the billable hourly-rate structure is going to have to change for some industries.
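To make the labelling step Webb describes more concrete, here is a minimal, self-contained sketch of collecting yes/no human feedback and fitting a toy reward model on it. Everything in it (the prompts, the simulated rater, the bag-of-words features) is a hypothetical placeholder; real RLHF pipelines label outputs from the large model itself and use far richer reward models.

```python
# Toy sketch of the human-feedback labelling step: a rater reviews model
# outputs and records a yes/no judgement, and the accumulated labels are
# used to fit a tiny "reward model". All prompts, responses and features
# below are hypothetical placeholders, not anything from a real pipeline.
import json
import math
import random

# Hypothetical (prompt, model response) pairs queued for review.
QUEUE = [
    ("Describe the X-ray finding.", "No acute abnormality seen."),
    ("Describe the X-ray finding.", "The image shows a fractured wrist bone."),
    ("Is this ECG normal?", "Yes, sinus rhythm with no anomalies."),
]

def collect_labels(queue):
    """Simulate a rater clicking Yes (1) or No (0) on each response."""
    labels = []
    for prompt, response in queue:
        # In a real pipeline this would be an interactive UI; here a random
        # click stands in for the human rater.
        verdict = random.choice([0, 1])
        labels.append({"prompt": prompt, "response": response, "ok": verdict})
    return labels

def featurize(text, vocab):
    """Crude bag-of-words vector; real reward models reuse the LLM itself."""
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def train_reward_model(labels, vocab, lr=0.1, epochs=200):
    """Fit a tiny logistic-regression reward model on the yes/no labels."""
    w = [0.0] * len(vocab)
    b = 0.0
    for _ in range(epochs):
        for item in labels:
            x = featurize(item["response"], vocab)
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - item["ok"]  # gradient of the cross-entropy loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

if __name__ == "__main__":
    labels = collect_labels(QUEUE)
    print(json.dumps(labels, indent=2))
    vocab = sorted({w for item in labels for w in item["response"].lower().split()})
    weights, bias = train_reward_model(labels, vocab)
    print("learned weights:", dict(zip(vocab, (round(v, 3) for v in weights))))
```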

What are C-suite executives talking to you about around AI and new technologies?

They’re interested in having very basic conversations. To have the more advanced, nuanced conversations requires leaning into uncertainty. A bank CEO recently asked me how AI can reduce headcount. But if you start adopting AI only as a way of improving your bottom line, through reduction of salaries, you’re going to wind up with a problem in a couple of years, if not sooner.

What should they be thinking about?

AI is a series of different technologies, and there are tools that can be created with it. In a handful of years, there are going to be different types of jobs needed, so CEOs need to be very careful about making short-term decisions just because they improve the bottom line. The real opportunity is in top-line growth: figuring out where they can create new revenue streams, improve relationships and use AI as a force multiplier. That’s where most leaders should be placing their energy at the moment, but that’s not what I see happening in any country around the world.

What good questions are CEOs asking you related to AI?

They’re asking what it takes for them to be resilient, rather than how soon they can lay people off.

What are people confusing?

AI at the moment has become sort of a shorthand for ChatGPT. The non-technical side of organizations is just talking about it as a text-based system that gives answers.

What’s concerning to you?

Once your data has been used to train a system, how do you know who owns it, and how do you monetize it going forward? The real questions should be, “Where is it getting the information? Am I okay with that? Do I trust it?” Once you’ve given away your archive and allowed it to be used to train an AI system, you can’t get it back out.

What’s next?

Multimodal AI. This is an AI system that can do multiple things at once, engaging different forms of logic, reasoning and analysis. It can be used for more complex challenges, or for decision points that a leader might encounter during the day, provided there is enough context.

Do you have any interesting examples where you’ve tested AI?

I was keynoting a large financial services conference, and the panel before mine was something about loan syndication, far outside my domain expertise. I copied and pasted the description of the program and the panelists into a couple of AI chat tools and asked each system to simulate the panel and tell me what the insights would be. Aside from the charts, graphs and linguistic flair that each person brought, there was really no discernible difference from the actual panel.
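As a rough illustration of the kind of experiment Webb describes, the sketch below sends a panel description to a chat model through the OpenAI Python client and asks it to simulate the discussion. Webb used consumer chat interfaces rather than an API, and the model name, prompt wording and panel text here are placeholders, not details from her test.

```python
# Rough sketch of "simulate the panel" with an LLM chat API.
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set;
# the model name and all prompt/panel text are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

panel_description = """
Panel: Trends in loan syndication (placeholder description).
Panelists: three hypothetical speakers from banking and law.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You simulate conference panels and summarize their likely insights.",
        },
        {
            "role": "user",
            "content": (
                "Here is a panel description and its panelists:\n"
                f"{panel_description}\n"
                "Simulate the discussion and list the key insights it would produce."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```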

