Monday, November 25, 2024

10 investors talk about the future of AI and what lies beyond the ChatGPT hype

When I mentioned “the rise of AI” in a recent email to investors, one of them sent me an interesting reply: “The ‘rise of AI’ is a bit of a misnomer.”

What that investor, Rudina Seseri, a managing partner at Glasswing Ventures, meant is that technologies like AI and deep learning have been in development for decades, and the current hype glosses over that history. “We saw the earliest enterprise adoption in 2010,” she pointed out.

Still, we can’t deny that AI is enjoying unprecedented levels of attention, and companies across sectors around the world are busy pondering the impact it could have on their industry and beyond.

Dr. Andre Retterath, a partner at Earlybird Venture Capital, feels several factors are working in tandem to generate this momentum. “We are witnessing the perfect AI storm, where three major ingredients that evolved throughout the past 70 years have finally come together: Advanced algorithms, large-scale datasets, and access to powerful compute,” he said.

Still, we couldn’t help but be skeptical about the number of teams that pitched a version of “ChatGPT for X” at Y Combinator’s winter Demo Day earlier this year. How likely is it that they will still be around in a few years?

Karin Klein, a founding partner at Bloomberg Beta, thinks it’s better to run the race and risk failing than sit it out, since this is not a trend companies can afford to ignore. “While we’ve seen a bunch of ‘copilots for [insert industry]’ that may not be here in a few years, the bigger risk is to ignore the opportunity. If your company isn’t experimenting with using AI, now is the time or your business will fall behind.”

And what’s true for the average company is even more true for startups: Failing to give at least some thought to AI would be a mistake. But a startup also needs to be ahead of the game more than the average company does, and in some areas of AI, “now” may already be “too late.”

To better understand where startups still stand a chance, and where oligopoly dynamics and first-mover advantages are shaping up, we polled a select group of investors about the future of AI, which areas they see the most potential in, how multilingual LLMs and audio generation could develop, and the value of proprietary data.

This is the first of a three-part survey that aims to dive deep into AI and how the industry is shaping up. In the next two installments, to be published soon, you will hear from other investors about the various pieces of the AI puzzle, where startups have the highest chance of winning, and where open source might overtake closed source.

We spoke with:


Manish Singhal, founding partner, pi Ventures

Will today’s leading gen AI models and the companies behind them retain their leadership in the coming years?

The landscape of LLM applications is changing rapidly. Many companies will form in the application domain, and only a few will succeed in scaling. In terms of foundation models, we do expect OpenAI to get competition from other players in the future. However, they have a strong head start, and it will not be easy to dislodge them.

Which AI-related companies do you feel aren’t innovative enough to still be around in 5 years?

I think we will see significant consolidation in the applied AI space. AI is becoming more and more horizontal, so it will be challenging for applied AI companies built on off-the-shelf models to retain their moats.

However, there is quite a bit of fundamental innovation happening on the applied front as well as on the infrastructure side (tools and platforms). Companies doing that kind of work are likely to fare better than the rest.

Is open source the most obvious go-to-market route for AI startups?

It depends on what you are solving for. For infrastructure-layer companies, it is a valid path, but it may not be that effective across the board. Whether open source is the right route depends on the problem a startup is solving.

Do you wish there were more LLMs trained in languages other than English? Besides linguistic differentiation, what other types of differentiation do you expect to see?

We are seeing LLMs in other languages as well, but English is of course the most widely used. For local use cases, LLMs in different languages definitely make sense.

Besides linguistic differentiation, we expect to see LLM variants that are specialized in certain domains (e.g., medicine, law and finance) to provide more accurate and relevant information within those areas. There is already some work happening in this area, such as BioGPT and BloombergGPT.

LLMs suffer from hallucination and relevance issues when you want to use them in real production-grade applications. I think there will be considerable work done on that front to make them more usable out of the box.

What are the chances of the current LLM method of building neural networks being disrupted in the upcoming quarters or months?

It can surely happen, although it may take longer than a few months. Once quantum computing goes mainstream, the AI landscape will change significantly again.

Given the hype around ChatGPT, are other media types like generative audio and image generation comparatively underrated?

Multimodal generative AI is picking up pace. Most serious applications will need to combine modalities, especially images and text. Audio is a special case: There is significant work happening in auto-generation of music and in speech cloning, which has wide commercial potential.

Besides these, auto-generation of code is becoming more and more popular, and generating videos is an interesting dimension — we will soon see movies completely generated by AI!

Are startups with proprietary data more valuable in your eyes these days than they were before the rise of AI?

Contrary to what the world may think, proprietary data gives a good head start, but eventually, it is very difficult to keep your data proprietary.

Hence, the tech moat comes from combining the data with intelligently designed algorithms that are productized and fine-tuned for a specific application.

When could AGI become a reality, if ever?

We are getting close to human levels in certain applications, but we are still far from true AGI. I also believe progress becomes an asymptotic curve after a while, so it may take a very long time to get there across the board.

For true AGI, advances in several fields, like neuroscience and behavioral science, may also have to converge.

Is it important to you that the companies you invest in get involved in lobbying and/or discussion groups around the future of AI?

Not really. Our companies are more targeted toward solving specific problems, and for most applications, lobbying does not help. It’s useful to participate in discussion groups, though, as one can keep tabs on how things are developing.

Rudina Seseri, founder and managing partner, Glasswing Ventures

Will today’s leading gen AI models and the companies behind them retain their leadership in the coming years?

The foundation layer model providers such as Alphabet, Microsoft/OpenAI and Meta will likely maintain their market leadership and function as an oligopoly over the long term. However, there are opportunities for competition from models that offer significant differentiation, such as Cohere and other well-funded players at the foundation layer that place a strong emphasis on trust and privacy.

We have not invested and likely will not invest in the foundation layer of generative AI. This layer will probably end in one of two states: In one scenario, the foundation layer will have oligopoly dynamics akin to what we saw with the cloud market, where a select few players will capture most of the value.

The other possibility is that foundation models are largely supplied by the open source ecosystem. We see the application layer holding the biggest opportunity for founders and venture investors. Companies that deliver tangible, measurable value to their customers can displace large incumbents in existing categories and dominate new ones.

Our investment strategy is explicitly focused on companies offering value-added technology that augments foundation models.

Just as value creation in the cloud did not end with the cloud computing infrastructure providers, significant value creation has yet to arrive across the gen AI stack. The gen AI race is far from over.

Which AI-related companies do you feel aren’t innovative enough to still be around in 5 years?

A few market segments in AI might not be sustainable as long-term businesses. One such example is the “GPT wrapper” category — solutions or products built around OpenAI’s GPT technology. These solutions lack differentiation and can be easily disrupted by features launched by existing dominant players in their market. As such, they will struggle to maintain a competitive edge in the long run.

Similarly, companies that do not provide significant business value or do not solve a problem in a high-value, expensive space will not be sustainable businesses. Consider this: A solution streamlining a straightforward task for an intern will not scale into a significant business, unlike a platform that resolves complex challenges for a chief architect, offering distinct and high-value benefits.

Finally, companies with products that do not seamlessly integrate within current enterprise workflows and architectures, or that require extensive upfront investments, will face challenges in implementation and adoption. This will be a significant obstacle to generating meaningful ROI, as the bar is far higher when behavioral changes and costly architecture changes are required.

