Tapping into the potential of LLMs
The rise of AI Engineers and production-ready LLMs
Posted by Pablo Lorenzatto
on September 27, 2023 · 3 mins read
Introduction
If you’ve been at least a little bit online during these past few months, then you’re probably aware of the new “AI boom”. We’ve all played with ChatGPT or generative models, and some have even deployed these models in real environments.
Since the initial shock, the field has matured, allowing not only dedicated AI enterprises but also other companies to build AI products, paving the way for more diverse next-generation AI creations. But who will construct these products and how?
Enter the AI Engineer
At first, cutting-edge technology belongs only to the most advanced research centres and laboratories, where highly specialised researchers move the field forward. But once it reaches a sufficient level of maturity, it’s time for a role that can tame this complexity and bring out its potential for day-to-day use cases. Just as Machine Learning research was tamed by Machine Learning Engineers, it’s now time for AI research to be tamed by AI Engineers.
AI Engineering has emerged as a subfield of Software Engineering in its own right, with tools and challenges distinct from those of Machine Learning Engineering. Both, however, share the same goal: making things work in the real world.
Among the challenges this new subfield brings:
- New technologies: While models like GPT-4 hold renown, the emergence of open-source alternatives like Llama 2 keeps the tech scene vibrant. Navigating this landscape requires not just model selection but also fluency in tools like LlamaIndex, LangChain, and vector databases in order to build complete solutions.
- Infrastructure: AI models can either be built and trained from scratch using GPUs or leveraged through pre-existing APIs. AI Engineers are tasked with making this crucial decision based on their deployment needs.
- Quality of Data & Results: These systems rely on data for both training and addressing queries. Post-deployment, a combination of human oversight and automated monitoring is essential to ensure the integrity of their data sources and the accuracy of their results, enhancing their knowledge base over time.
- Compliance: When implementing real-world solutions, adherence to compliance is imperative. It’s crucial to not only treat sensitive data with utmost care but also to understand and manage where this data traverses and resides.
- Costs: Architectural trade-offs play a crucial role in devising cost-effective and viable solutions.
As you can see, it's more than just “Prompt Engineering”: it’s about giving shape to AI products that bring value to the real world.
Why should you get started with AI Engineering
In a very short time, the way we interact with our data has changed completely. LLMs in particular have given us access to a “second brain” that can supplement our day-to-day tasks. But taking this to the next level requires adding data not only from the whole internet, but also from your specific organisation.
That’s where AI Engineers come in: they architect and build the right solution for your needs.
If you wanted to get started right away, you could build something quite simple yet effective like this:
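As a concrete starting point, here’s a minimal sketch of a question-answering helper built directly on a hosted LLM. It assumes the official `openai` Python SDK (v1+) and an `OPENAI_API_KEY` environment variable; the model name is illustrative, not a recommendation:

```python
import os

def build_messages(question: str) -> list:
    """Wrap a user question in the chat format the API expects."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": question},
    ]

def ask_llm(question: str) -> str:
    # Imported here so build_messages stays usable without the SDK installed.
    from openai import OpenAI

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; pick a model your account can access
        messages=build_messages(question),
    )
    return response.choices[0].message.content
```

A few lines like these are enough for a demo, but, as we’ll see next, not for a product.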
But using OpenAI directly has some very clear shortcomings: its cost, concerns around data privacy, and uptime. The most important of all, though, is its lack of domain-specific knowledge: OpenAI offers excellent general-purpose LLMs, but they know nothing about your organisation’s knowledge.
This is where AI Engineering comes to the rescue: building the infrastructure and systems that turn an AI idea into a real AI product. Here’s a simplified view of what the architecture of these solutions looks like nowadays:
We really recommend reading Emerging Architectures for LLM applications for a deeper understanding of how this works, but here’s a simplified explanation of a standard use case: documents are converted into numerical vectors called “embeddings”, which capture the essence and context of each document and can be stored in vector databases. Similarly, prompts get processed and enriched with data from external APIs and the vector database for added context. The enriched prompt is then submitted to the LLM, which builds and returns a response to the query.
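The retrieval flow above can be sketched in a few lines. This toy version uses normalised bag-of-words counts over a tiny fixed vocabulary in place of a real embedding model, and a plain Python list in place of a vector database; the vocabulary and class names are illustrative, not part of any real library:

```python
import math

# Stand-in vocabulary for a real embedding model (illustrative only).
VOCAB = ["refund", "billing", "policy", "vacation", "office", "holiday"]

def embed(text: str) -> list:
    """Toy embedding: normalised bag-of-words counts over VOCAB."""
    words = text.lower().split()
    vec = [float(words.count(w)) for w in VOCAB]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list, b: list) -> float:
    """Similarity between two unit vectors (dot product)."""
    return sum(x * y for x, y in zip(a, b))

class ToyVectorStore:
    """A list of (embedding, text) pairs standing in for a vector database."""
    def __init__(self):
        self._docs = []

    def add(self, text: str) -> None:
        self._docs.append((embed(text), text))

    def search(self, query: str, k: int = 1) -> list:
        q = embed(query)
        ranked = sorted(self._docs, key=lambda d: cosine(q, d[0]), reverse=True)
        return [text for _, text in ranked[:k]]

def build_prompt(question: str, store: ToyVectorStore) -> str:
    """Enrich the prompt with retrieved context before it goes to the LLM."""
    context = "\n".join(store.search(question, k=1))
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"
```

A real pipeline would swap `embed` for a model-generated embedding and `ToyVectorStore` for an actual vector database, but the flow stays the same: embed the documents, embed the query, retrieve the nearest matches, and splice them into the prompt.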
What Mutt can do for you
Just like our values say: “We are Data Nerds”. We have ample experience building data solutions, always tailor-made to the needs of our clients, with a focus on making these systems production-ready rather than mere experiments.
We think AI Engineering has reached the maturity needed to start reaping its benefits. We’ve drawn on our experience to deploy LLMs for our clients using both self-hosted solutions and APIs, always making sure the solution is tailored to their needs, such as speed, cost, or compliance.
If you’re interested in tapping into the potential of AI, contact our team to help you get started in your AI Engineering journey.