Good old-fashioned AI remains viable in spite of the rise of LLMs

Remember a year ago, all the way back to last November, before we knew about ChatGPT, when machine learning was all about building models to solve a single task, like loan approvals or fraud protection? That approach seemed to go out the window with the rise of generalized LLMs, but the fact is that generalized models aren’t well suited to every problem, and task-based models are still alive and well in the enterprise.

These task-based models have, up until the rise of LLMs, been the basis for most AI in the enterprise, and they aren’t going away. It’s what Amazon CTO Werner Vogels referred to as “good old-fashioned AI” in his keynote this week, and the kind of AI that, in his view, is still solving a lot of real-world problems.

Atul Deo, general manager of Amazon Bedrock, the product introduced earlier this year as a way to plug into a variety of large language models via APIs, also believes that task models aren’t going to simply disappear. Instead, they have become another AI tool in the arsenal.

“Before the advent of large language models, we were mostly in a task-specific world. And the idea there was you would train a model from scratch for a particular task,” Deo told TechCrunch. He says the main difference between a task model and an LLM is that the former is trained for one specific task, while the latter can handle things outside those narrow boundaries.
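
To make that contrast concrete, here is a minimal sketch of the task-specific approach Deo describes: a small model trained from scratch for exactly one job, in this case a toy fraud-style classifier built with scikit-learn on synthetic data. The features and figures are illustrative stand-ins, not anything from a real production system.

```python
# A toy "task-specific" model: trained for one narrow job and nothing else.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for transaction features; roughly 3% of rows are "fraud."
X, y = make_classification(n_samples=5_000, n_features=10,
                           weights=[0.97], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42)

model = GradientBoostingClassifier()        # small, cheap, single-purpose
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]  # fraud probability per row
print(f"AUC: {roc_auc_score(y_test, scores):.3f}")
```

A model like this knows exactly one trick, which is also why, as Turow notes below, it can be smaller, faster and cheaper than a general-purpose LLM.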

Jon Turow, a partner at investment firm Madrona who previously spent almost a decade at AWS, says the industry has been talking about emerging capabilities in large language models like reasoning and out-of-domain robustness. “These allow you to be able to stretch beyond a narrow definition of what the model was initially expected to do,” he said. But, he added, it’s still very much up for debate how far these capabilities can go.

Like Deo, Turow says task models aren’t simply going to suddenly go away. “There is clearly still a role for task-specific models because they can be smaller, they can be faster, they can be cheaper and they can in some cases even be more performant because they’re designed for a specific task,” he said.

But the lure of an all-purpose model is hard to ignore. “When you’re looking at an aggregate level in a company, when there are hundreds of machine learning models being trained separately, that doesn’t make any sense,” Deo said. “Whereas if you went with a more capable large language model, you get the reusability benefit right away, while allowing you to use a single model to tackle a bunch of different use cases.”
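
As a rough sketch of that reuse, here is what pointing one hosted model at two unrelated tasks through the Bedrock runtime API might look like. The model ID and request format below follow the Claude text-completion convention and are assumptions for illustration; the models available, and their payload schemas, depend on what’s enabled in your AWS account.

```python
# One general-purpose model, two different jobs -- no per-task training.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(prompt: str) -> str:
    # Assumed model ID and body schema (Claude v2 text completion on Bedrock);
    # check the Bedrock docs for the models enabled in your account.
    body = json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": 300,
    })
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-v2",
        body=body,
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(resp["body"].read())["completion"]

print(ask("Summarize this support ticket: 'My card was charged twice...'"))
print(ask("Label this transaction FRAUD or LEGIT: $9,800 wire at 3 a.m."))
```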

For Amazon, SageMaker, the company’s machine learning operations platform, remains a key product, one aimed at data scientists, where Bedrock is aimed at developers. Amazon reports tens of thousands of customers building millions of models on it. It would be foolhardy to give that up, and, frankly, just because LLMs are the flavor of the moment doesn’t mean the technology that came before won’t remain relevant for some time to come.

Enterprise software in particular doesn’t work that way. Nobody is simply tossing their significant investment because a new thing came along, even one as powerful as the current crop of large language models. It’s worth noting that Amazon did announce upgrades to SageMaker this week, aimed squarely at managing large language models.

Prior to these more capable large language models, the task model was really the only option, and companies approached it accordingly, building teams of data scientists to develop these models. So what is the role of the data scientist in the age of large language models, where tools are being aimed at developers? Turow thinks they still have a key job to do, even in companies concentrating on LLMs.

“They’re going to think critically about data, and that is actually a role that is growing, not shrinking,” he said. Regardless of the model, Turow believes data scientists will help people understand the relationship between AI and data inside large companies.

“I think every one of us needs to really think critically about what AI is and is not capable of and what data does and does not mean,” he said. And that’s true regardless of whether you’re building a more generalized large language model or a task model.

That’s why these two approaches will continue to work side by side for some time to come, because sometimes bigger is better, and sometimes it’s not.

Read more about AWS re:Invent 2023 on TechCrunch
