It's LLMs All The Way Down - Part 2

It's LLMs All The Way Down - Part 2

MetaTech: Tech talking about Tech

Posted by Pablo Lorenzatto

on August 12, 2024 · 5 mins read

👉 Some future predictions

👉 Consumer facing cases

👉 Business facing use cases

Disclaimer: Some of the statements in this post are forecasts based on currently available information. They could be rendered obsolete in the future, or overtaken by major changes in how these technologies work and are used.

Some Future Predictions on LLMs

In Part I we covered the current landscape of large language models, the work being done on models that are smaller but more complex, and how open source is getting up to speed. In this post, we will make predictions about the future using all the information available today, plus current tests being conducted by some of the major players in the industry. I believe in making falsifiable statements, so here goes. The post is loosely split into consumer and business use cases.

Consumer-Facing Cases

LLMs are going to get merged into the OS, gain access to what you do, and provide different functionality based on that. The question is whether you’ll be able to run all of this locally. My guess is yes: inference will happen mostly on device. Smaller models will handle most simple tasks locally, while a few heavy workloads might still be offloaded elsewhere. The previously mentioned Apple keynote talks explicitly about this too: some things locally, some in the cloud. The balance between the two is a mystery, but I’m betting heavily on locality. The cloud also has the drawback of putting incredibly precise personal data outside users' control, which is problematic to say the least. I see three ways out:

  1. Encrypted models,
  2. Locality, with little (hopefully non-identifiable) information getting out, or
  3. We just accept the new normal for our data.
(Image: Businesses and consumers are embracing AI)

The first option depends on the success of ongoing research, so we can ignore it for now. That leaves the other two. As stated above, my prediction is that locality will dominate for consumers.

Another issue is size. Are we going to get larger and larger models with ever more capabilities? I think that, in the short term, it does not matter. For current applications, the capabilities of LLMs are enough; you probably don’t need a much more knowledgeable one. General models are better, and I expect things to move there eventually, but not in the short term. Collections of tiny LLMs will dominate for a while. The reasoning for this seemingly heretical thought is simple: if your application is narrow, the task is easier, and if the task is easier you don’t truly need a model that is many times more knowledgeable. Narrow applications look like the main driver in the short term. There’s a difference between capabilities and knowledge. For instance, take google-deplot: a small multimodal model for extracting data from a plot image, with a very precise use case. I envision model integration more as a collection of these sorts of small models (perhaps customized from a reduced set of base models so that you can “hot swap” the relevant weights) than as one large model that can do everything. Models like the phi variants from Microsoft already go in this direction of bringing things down to user compute.
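The "collection of small models with hot-swappable weights" idea can be sketched in a few lines. Everything below is a hypothetical illustration, not a real library API: the `TinyModelRouter` class and its stand-in adapters (in practice these would be small fine-tuned weight deltas, e.g. LoRA-style, applied to a shared base model) are made up for this post.

```python
# Hypothetical sketch: a router keeps one small base model resident and
# hot-swaps task-specific adapter "weights" instead of running a single
# giant model. All names here are illustrative stand-ins.

class TinyModelRouter:
    def __init__(self):
        self.adapters = {}      # task name -> adapter for that narrow task
        self.active_task = None

    def register(self, task, adapter):
        """Register the adapter for one narrow task."""
        self.adapters[task] = adapter

    def run(self, task, prompt):
        if task not in self.adapters:
            raise KeyError(f"no adapter for task {task!r}")
        # Hot swap: switch only the small task-specific weights,
        # keeping the shared base model loaded.
        if task != self.active_task:
            self.active_task = task
        return self.adapters[task](prompt)

# Stand-in "adapters" implemented as plain callables for illustration.
router = TinyModelRouter()
router.register("chart-to-data", lambda p: f"table extracted from: {p}")
router.register("summarize", lambda p: f"summary of: {p}")

print(router.run("summarize", "a long document"))
```

The point of the sketch is the shape of the system: many cheap, narrow specialists behind one dispatcher, rather than one model that must know everything.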

That is not to say that local models won’t be better than the current generation; it’s just that the focus will shift from the best model possible to the best model that can run on a reasonable desktop. We will start using LLMs more and more: for UX/UI, obviously, but also for learning, web searching, automating tasks without programming, and document writing.

Business-Facing Use Cases

Uses for business are harder to predict. Use cases are much more business dependent, so it's difficult to identify a general trend; one can only talk about the kind of business one is familiar with! In the short term, the current wave of offerings like NIM indicates a “deploy your own model” interest, so to speak. Looking at the current state, extracting and summarizing information from large corpora seems to be a popular application, and the same goes for semantic search. These may not be the most exciting use cases, but I do see an immediate need for them. On the tooling side, the likes of SQL query constructors and multimodal models for front-end development look like natural directions to reduce costs. Going even further, I think software development in general will see a major bump from LLMs, both in the more technical aspects such as coding and testing, and in the more human aspects such as automatically generating tasks from meetings and writing progress summaries for interested parties.
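The corpus-search use case has a simple shape worth making concrete: rank documents by similarity to a query, then hand the top hits to an LLM for summarization. Real deployments use learned embeddings; the bag-of-words cosine similarity below is only a stdlib-only stand-in to illustrate that shape, and the example documents are invented.

```python
# Minimal retrieve-then-summarize sketch. A real system would replace
# vectorize() with an embedding model; cosine ranking stays the same.
import math
from collections import Counter

def vectorize(text):
    # Crude bag-of-words vector; stand-in for a learned embedding.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(corpus, query, k=2):
    qv = vectorize(query)
    ranked = sorted(corpus, key=lambda d: cosine(vectorize(d), qv),
                    reverse=True)
    return ranked[:k]  # these top documents would feed the summarizer

docs = [
    "quarterly revenue grew due to new contracts",
    "the office coffee machine is broken again",
    "revenue projections for next quarter look strong",
]
top = search(docs, "revenue growth next quarter", k=2)
```

Here `top` contains the two revenue-related documents and drops the irrelevant one; the summarization step would then run only over those.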

Large-scale interactions that require world knowledge but not much reasoning seem like prime candidates for automation too. This includes things like moderation and customer service (to a degree). Currently, LLMs hallucinate too much to be left to their own devices in complex scenarios, which makes them much less viable for more delicate applications. Yes, there are hacks and tricks to reduce these problems, but they don't eliminate them entirely or reliably. Hopefully the future will iron these issues out in a more natural fashion. Then there are things like video editing or writing, which can benefit from an AI assistant, or marketing research, which can take advantage of LLMs doing thorough web searches, freeing people to spend their time on other tasks.
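One of the "hacks and tricks" hinted at above can be sketched in its simplest form: before a model's draft reply reaches a customer, check that its content words are grounded in the retrieved context, and escalate to a human otherwise. Production guardrails are far stronger (entailment models, citation checks); the word-overlap gate below, with its invented threshold and example strings, only illustrates the gating pattern.

```python
# Toy grounding gate: pass a draft reply through only if enough of its
# content words appear in the retrieved context; otherwise escalate.
STOPWORDS = {"the", "a", "an", "is", "are", "your", "was", "to", "of"}

def content_words(text):
    return {w for w in text.lower().split() if w not in STOPWORDS}

def gate_reply(draft, context, min_support=0.7):
    """Return the draft if enough of it is supported, else escalate."""
    words = content_words(draft)
    if not words:
        return "ESCALATE_TO_HUMAN"
    supported = words & content_words(context)
    if len(supported) / len(words) >= min_support:
        return draft
    return "ESCALATE_TO_HUMAN"

context = "order 1234 shipped on monday via standard delivery"
print(gate_reply("your order 1234 shipped on monday", context))   # grounded: passes
print(gate_reply("your refund was approved yesterday", context))  # ungrounded: escalates
```

The design choice worth noting is the asymmetry: a false escalation costs a little human time, while a hallucinated reply sent to a customer costs trust, so the gate errs toward escalating.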

Conclusion

As we stand on the brink of an AI-driven future, the evolution of Large Language Models (LLMs) is accelerating at a pace that promises to reshape the very fabric of how we interact with the digital world. From smaller, more efficient models to the growing ubiquity of LLMs in UI/UX, and the increasing standardization of these technologies, it's clear that we're witnessing the early stages of a significant paradigm shift.

Businesses and consumers alike are beginning to embrace the potential of LLMs. The open-source community is catching up quickly, pushing the boundaries of what these models can do and making them more accessible to a broader audience.

The road ahead is filled with exciting possibilities. The continued miniaturization and optimization of LLMs will likely bring more powerful AI experiences directly to our devices, enabling more seamless and private interactions. Meanwhile, businesses will continue to explore and deploy LLMs in innovative ways, transforming industries and creating new growth opportunities.

There are challenges to navigate: ethical considerations, data privacy concerns, and the ever-present risk of obsolescence as new advancements emerge. Nonetheless, the potential benefits far outweigh the risks, making this an incredibly exciting time to be involved in the AI space.

As we look toward the future, one thing is certain: LLMs are here to stay, and they will continue to play an increasingly central role in shaping the way we live, work, and interact with the world around us.

To sum up, with the growing corpus of fine-tunable models, this is where most creative applications will be. Only time will separate the gimmicks from the disruptors. Honestly, it's pretty exciting!

To learn more about AI, Machine Learning, Data Governance, and fueling your company with all these tools...

Follow us on LinkedIn to keep up with what is coming in the future!