Since the introduction of generative AI, organizations across industries have been exploring how to harness its potential. For many, the goal is clear: transform core processes and gain a competitive edge while mitigating the risks associated with this rapidly evolving technology. The task, led by small teams of experts and strategists, is one of balancing innovation with responsibility.
As we enter 2025, one term is set to dominate AI discussions—agentic AI. Whether it’s NVIDIA’s CEO Jensen Huang calling 2025 “the year of agents” or leaders from OpenAI and Anthropic predicting the rise of agents in the workforce, this technology is being heralded as the next big step for enterprise AI and automation. However, as excitement builds, businesses must approach this innovation with clarity, strategy, and a commitment to responsible implementation.
At Pryon, we view AI agents not as a fleeting trend but as a pivotal milestone in the evolution of generative AI. These advanced systems hold immense potential to deliver tangible business outcomes, but realizing this potential requires a thoughtful approach—anchored in robust governance, precise strategy, and deep expertise.
As a pioneer in the AI space, we’ve partnered with some of the world’s most innovative organizations to develop cutting-edge solutions for complex, high-stakes environments. By remaining deeply attuned to advancements in AI and machine learning, we aim to keep our audience informed about emerging technologies while championing safeguards that promote responsible AI deployment.
In this article, we explore the transformative promise of agentic AI—its potential to reshape industries, enhance productivity, and redefine automation. More importantly, we provide actionable insights on how your organization can safely and effectively harness this technology to stay ahead in a rapidly shifting landscape.
An AI agent is a system that uses language models to autonomously execute tasks by interacting with tools and data. Unlike simple chatbots or single-function tools, AI agents are complex systems designed to operate across multiple workflows. They combine several components into a cohesive system, driven by central reasoning capabilities powered by a large language model (LLM).
Agentic AI systems combine four core layers, including a memory layer that supplies knowledge, a cortex layer that reasons and plans, and an action layer that executes. Together, these layers allow AI agents to replicate human workflows, providing a level of autonomy that is reshaping how businesses operate.
An AI agent uses language models to direct tools and data within an operating system, autonomously completing tasks with precision and speed.
As you begin to explore integrating agentic systems into your organization, interrogating each of these components will be critical to setting your deployment up for success.
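To make these components concrete, here is a minimal sketch of the core agent loop, assuming a hypothetical call_llm() cortex (stubbed with a canned two-step plan), a toy knowledge-base tool standing in for the memory layer, and a toy expense tool standing in for the action layer. A real deployment would swap in a production LLM client and hardened integrations.

```python
import json

def search_knowledge_base(query: str) -> str:
    """Toy memory-layer tool (stand-in for an enterprise RAG lookup)."""
    return f"Top passage for '{query}'"

def file_expense(amount: float, memo: str) -> str:
    """Toy action-layer tool (stand-in for an RPA or API integration)."""
    return f"Expense of ${amount:.2f} filed: {memo}"

TOOLS = {"search_knowledge_base": search_knowledge_base,
         "file_expense": file_expense}

def call_llm(prompt: str) -> str:
    """Hypothetical cortex layer, stubbed with a canned two-step plan.
    A real system would call your LLM provider here."""
    if "Observation" not in prompt:
        return json.dumps({"action": "search_knowledge_base",
                           "arguments": {"query": "expense policy"}})
    return json.dumps({"action": "finish",
                       "result": "Policy retrieved; ready to file expense."})

def run_agent(task: str, max_steps: int = 5) -> str:
    """The core loop: the LLM plans the next step; the agent executes it."""
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        decision = json.loads(call_llm("\n".join(history)))
        if decision["action"] == "finish":
            return decision["result"]
        observation = TOOLS[decision["action"]](**decision["arguments"])
        history.append(f"Observation: {observation}")
    return "Stopped: step budget exhausted."

print(run_agent("File my conference travel expense"))
```

The loop is the important part: the LLM chooses the next action, the agent executes it through a tool, and the observation feeds back into the next planning step.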
AI agents represent the bridge between generative AI’s promise and actionable results. They do more than supercharge workers—they can completely automate tedious, low-value tasks, allowing employees to focus on high-impact initiatives. The result? Organizations can unlock faster ROI and operational efficiencies.
And this is just the beginning of what AI agents can offer. An individual agentic system becomes far more powerful when multiple systems are coordinated to work together. This approach, known as the "multi-agent system" paradigm, relies on a central "Project Manager" agent to orchestrate a network of specialized agents. Working collaboratively, these agents apply their domain-specific expertise to tackle complex challenges and deliver transformative business outcomes.
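A minimal sketch of that orchestration pattern might look like the following; the plan() helper and the specialist agents are illustrative placeholders rather than any particular vendor's API, and in practice the decomposition step would itself be produced by an LLM.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SpecialistAgent:
    name: str
    handle: Callable[[str], str]  # domain-specific worker function

def research_agent(task: str) -> str:
    return f"[research] findings for: {task}"

def writing_agent(task: str) -> str:
    return f"[writing] draft based on: {task}"

SPECIALISTS = {
    "research": SpecialistAgent("research", research_agent),
    "writing": SpecialistAgent("writing", writing_agent),
}

def plan(goal: str) -> list[tuple[str, str]]:
    """Hypothetical decomposition step; an LLM would produce this in practice."""
    return [("research", f"gather sources on {goal}"),
            ("writing", f"draft a brief on {goal}")]

def project_manager(goal: str) -> list[str]:
    """Orchestrate: decompose the goal, delegate each subtask, collect results."""
    return [SPECIALISTS[role].handle(subtask) for role, subtask in plan(goal)]

print(project_manager("Q3 market trends"))
```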
AI agents represent the bridge between generative AI’s promise and actionable results, resulting in faster ROI and greater operational efficiencies.
Wondering where AI agents could deliver immediate value in your organization? Start here: think of the most tedious, time-consuming parts of your job, then imagine handing those over to AI. Agents can take on tasks like booking travel or filing expense reports. By delegating such activities, your team gains back valuable time and energy for higher-value endeavors.
Deploying AI agents isn’t without risks. In fact, underperforming or hallucinating agents can lead to serious consequences for enterprises that put too much trust in a faulty system. Much like hiring a new employee, building trust in an AI agent relies on two key factors: competence and character.
The memory layer is the single most important component to get right. It is the foundation upon which the entire agentic AI system is built. It ensures your agent has the right knowledge to perform its tasks reliably and accurately.
Without a scalable, flexible, and secure memory layer, even the most advanced AI will fail to deliver trustworthy results, and the resulting disruption can be serious. These failures take two main forms: the agent may fundamentally misunderstand the steps needed to complete a task because it relies solely on parametric reasoning (a particular risk for smaller or legacy models), or it may weave hallucinated facts and responses into the work it executes.
EXAMPLE
Imagine deploying an AI travel agent for booking flights and filing expenses automatically. If the agent hallucinates a corporate discount or fabricates nonexistent ticket availability, your team could arrive at the airport with invalid tickets. The result? Missed opportunities, reputational damage, and a financial mess to untangle.
The memory layer acts as the foundation for any agentic AI system – making it the most critical component to build correctly.
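One practical mitigation for the travel example above is to verify the agent's claims against an authoritative source before any booking executes. The sketch below assumes a hypothetical fetch_live_fares() inventory API as that source of truth.

```python
def fetch_live_fares(route: str) -> dict[str, float]:
    """Hypothetical authoritative source for ticket availability and price."""
    return {"NYC->SFO 08:00": 420.00}

def verify_before_booking(route: str, flight: str, quoted_price: float) -> bool:
    """Reject hallucinated availability or fabricated discounts pre-purchase."""
    fares = fetch_live_fares(route)
    if flight not in fares:
        return False  # the flight the agent proposed does not exist
    return abs(fares[flight] - quoted_price) < 0.01  # price must match

assert verify_before_booking("NYC->SFO", "NYC->SFO 08:00", 420.00)
assert not verify_before_booking("NYC->SFO", "NYC->SFO 23:00", 99.00)
```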
Agentic AI reasoning occurs at the cortex layer, where the LLM plans a course of action to accomplish its task. Advanced reasoning models, such as OpenAI’s o series or DeepSeek’s R1, exhibit strong performance on complex, multistep problems. These models are specifically trained to optimize the chain of thought—the sequence of steps the model brainstorms in the planning phase—to increase performance. However, these sophisticated models can be very costly, especially when deployed at enterprise scale.
For budget-conscious teams or builders of early-stage agentic systems, traditional, non-reasoning-centric LLMs may seem like an attractive alternative due to cost. However, these models often struggle with tasks that require nuanced, multi-step reasoning and execution. When applied to flexible or generalizable agents, these models may fall short, leading to poor task planning and eventual failure, regardless of how well the rest of the agentic system is configured.
To strike a balance between cost and functionality, teams should consider connecting their LLM to a knowledge base of best practices and standard operating procedures (SOPs). This strategy combines the cost-efficiency of smaller or legacy models with a minimized risk of failure.
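As a sketch of that strategy, the snippet below retrieves the relevant SOP and prepends it to the planning prompt so a smaller model does not have to reason from scratch. The SOP_LIBRARY dictionary is an illustrative stand-in for a real knowledge base with semantic search.

```python
SOP_LIBRARY = {
    "expense_filing": "1. Collect receipt. 2. Categorize. 3. Submit in ERP.",
    "travel_booking": "1. Check policy. 2. Compare fares. 3. Book and log.",
}

def retrieve_sop(task_type: str) -> str:
    """Memory-layer lookup; a production system would use semantic search."""
    return SOP_LIBRARY.get(task_type, "No SOP found; escalate to a human.")

def build_planning_prompt(task: str, task_type: str) -> str:
    """Anchor the model's plan to the approved procedure, not its own guess."""
    return (f"Follow this procedure exactly:\n{retrieve_sop(task_type)}\n\n"
            f"Task: {task}\nProduce a step-by-step plan.")

print(build_planning_prompt("File the Q2 client dinner receipt",
                            "expense_filing"))
```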
Failure at the action layer is the most straightforward of the system component failures. Even with a perfectly planned approach and flawless recall, a task can fail if the computer use or RPA component shifts the mouse click by just a few pixels. Such small errors can prevent the task from being executed correctly.
Traditional RPA solutions tried to address this issue with rigid, hard-coded rules, which made them increasingly brittle. However, advancements from large AI labs like Anthropic and OpenAI have introduced more flexible multimodal models. These models significantly enhance visual understanding of user interfaces, offering a promising solution to such action-layer failures.
As this technology continues to evolve, these failures will become increasingly rare. Until then, agent builders need to strike a balance between automation and human oversight. For example, shifting a task from requiring full manual effort to a process where the employee only needs to provide final verification still delivers significant value. This approach preserves efficiency while mitigating potential failures.
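A simple way to implement that final-verification step is an approval gate: the agent prepares the action, but nothing irreversible executes until a human signs off. The submit_payment() tool below is a hypothetical example, and the gate reads approval interactively for illustration.

```python
def submit_payment(vendor: str, amount: float) -> str:
    """Hypothetical irreversible action-layer tool."""
    return f"Paid {vendor} ${amount:.2f}"

def execute_with_approval(description: str, action, *args) -> str:
    """Gate execution on explicit human sign-off before anything runs."""
    print(f"Agent proposes: {description}")
    if input("Approve? [y/N] ").strip().lower() != "y":
        return "Action rejected by reviewer; nothing was executed."
    return action(*args)

result = execute_with_approval("Pay Acme Corp invoice #1042 for $1,250.00",
                               submit_payment, "Acme Corp", 1250.00)
print(result)
```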
LLMs are optimized for language fluency rather than factual accuracy (i.e., they know how to speak, not what to say). This limitation means AI models may generate outputs that sound convincing but are detached from reality.
Compounding the issue, an LLM’s representation of the real world is encoded entirely in its parametric memory, distilled from patterns in human-written text. As a result, LLMs tend to produce responses that align with human perceptions of truth rather than with an objective, unbiased reality. In enterprise or government contexts, this poses serious risks.
Grounding has become an essential prerequisite for deploying these models effectively, particularly in agentic systems. In conversational applications, answers are delivered directly to a human user, providing a natural "gut check". However, in agentic systems, hallucinations and inaccuracies can flow directly into automated workflows, leading the agent to not only say the wrong thing, but do the wrong thing, amplifying the potential for errors.
To improve accuracy, AI agents are often grounded in proprietary and personal data sources. While this provides better performance, it also introduces risks of exposure or breaches of sensitive data, requiring those sources to be properly handled and protected.
Ensure your system has appropriate guardrails to comply with local and national data privacy standards, respect role-based access control lists (ACLs), and mask personally identifiable information (PII).
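As an illustration of those guardrails, the sketch below filters documents by the requester's role and masks common PII patterns before anything reaches the agent's context. The regexes and role table are simplified examples, not a complete privacy solution.

```python
import re

# Which roles may read which documents (toy ACL table).
DOC_ACL = {"salary_report.pdf": {"hr", "finance"},
           "product_roadmap.md": {"hr", "finance", "engineering"}}

# Simplified PII patterns; real systems use dedicated detection services.
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN REDACTED]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL REDACTED]"),
]

def mask_pii(text: str) -> str:
    """Redact PII before the text enters the agent's context window."""
    for pattern, replacement in PII_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

def fetch_for_agent(doc_name: str, text: str, requester_role: str) -> str:
    """Enforce the ACL first, then mask whatever is allowed through."""
    if requester_role not in DOC_ACL.get(doc_name, set()):
        raise PermissionError(f"{requester_role} may not read {doc_name}")
    return mask_pii(text)

print(fetch_for_agent("salary_report.pdf",
                      "Contact jane@corp.com, SSN 123-45-6789", "hr"))
```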
Keep in mind that AI agents function as integrated systems. Whether you’re building the system yourself or purchasing components from a vendor, every component must adhere to your data privacy standards and be architected to safeguard your information.
AI agents are on their way to an office near you. These systems are automating tedious tasks, enabling smarter workflows, and unlocking productivity like never before. But with great potential comes the need for responsible implementation.
To truly harness the power of agentic AI, it’s critical to build these systems on a strong foundation and establish sound governance. At Pryon, we work with some of the most highly regulated organizations in the world, so we’ve perfected our approach to guiding trusted and secure AI adoption. Here are four essential best practices we've established for governing agentic AI.
LLMs build a model of the real world from statistical probabilities in massive datasets of language. While powerful, a wide gap remains between these models’ internal representations and the realities we need them to interact with. Grounding these models with your organization’s own data ensures outputs are accurate, up to date, and tailored to your needs.
One of the most effective ways to achieve this is retrieval-augmented generation (RAG). With RAG, AI agents retrieve relevant data and integrate it directly into their reasoning and decision-making, grounding everything they generate.
Building your AI agents on top of a strong RAG architecture, or buying an agentic system that has this built in, is a crucial first step to ensure reliable and contextually relevant interactions.
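At its core, the RAG flow looks like the sketch below: retrieve the most relevant passages from your own corpus, then ground the model's prompt in them. The bag-of-words scorer is a deliberately crude stand-in for the embedding-based search a production system would use.

```python
def score(query: str, passage: str) -> int:
    """Crude relevance: count shared words (real systems use embeddings)."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k passages most relevant to the query."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def grounded_prompt(query: str, corpus: list[str]) -> str:
    """Instruct the model to answer only from the retrieved context."""
    context = "\n".join(retrieve(query, corpus))
    return (f"Answer using ONLY the context below. If the answer is not "
            f"there, say so.\n\nContext:\n{context}\n\nQuestion: {query}")

corpus = ["Our refund window is 30 days from delivery.",
          "Support hours are 9am-6pm ET, Monday through Friday.",
          "Enterprise plans include a dedicated success manager."]
print(grounded_prompt("What is the refund window?", corpus))
```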
If you’re piloting AI agents for smaller, low-stakes use cases, a basic, in-house RAG setup may suffice. But as you scale to enterprise-wide deployment, your agents must operate at a larger scope. They need to access data across multiple formats, locations, and systems—all while maintaining robust security measures compliant with enterprise standards.
Your agents are only as strong as the data layer you build beneath them. A strong, scalable foundation is critical for supporting their performance and ensuring they can grow with your organization’s evolving needs.
Model companies seek to increase the generalizable performance of agents through larger and more computationally expensive models. While these breakthrough models can perform highly complex tasks out of the box, they often come at extreme cost. OpenAI’s yet-to-be-released o3 model reportedly used $350,000 in compute on a single benchmark run.
Enterprises shouldn’t have to sacrifice ROI for functionality. By using a RAG-based architecture, you can achieve robust personalization, planning, and decision-making without relying on the most resource-intensive models. This approach not only reduces costs but also delivers domain-specific accuracy tailored to your unique operations.
AI agents can deliver remarkable productivity gains, but human oversight remains essential for safety and reliability. By having a human validate the outputs of autonomous systems before final execution, organizations can significantly mitigate risks while still reaping the benefits.
This balanced approach ensures quality control and safeguards against costly errors—especially in high-stakes environments where the margin for error is slim to none.
Agentic AI isn’t just a passing trend – it’s a game-changing innovation here to stay. However, success depends on laying the groundwork with a robust memory layer and enterprise-grade RAG architecture. Without this pivotal foundation, organizations risk consequences ranging from workflow inefficiencies to critical failures.
We've created a comprehensive guide to help you master the process of building a robust memory layer with an enterprise-grade RAG system. Download it today to equip your organization with the foundation for a secure, effective, and impactful agentic AI system.