Databricks has introduced its new Mosaic Agent Bricks platform, aimed at making it easier to deploy AI agents in enterprise settings. Unveiled at the Data + AI Summit, the platform addresses a persistent problem: many AI agent projects never reach production because evaluation remains manual, inconsistent, and hard to scale. The hard part, in other words, is not building agents but getting them to work reliably in the real world.
The Mosaic Agent Bricks platform automates the optimization of AI agents through several features. It incorporates Test-time Adaptive Optimization (TAO), which tunes models without relying on labeled data. It can also generate domain-specific synthetic data, build task-aware benchmarks, and balance quality against cost, all without manual involvement. According to Hanlin Tang, Databricks’ Chief Technology Officer, these capabilities address challenges that enterprise customers have repeatedly raised.
Previously, companies struggled to evaluate AI agents efficiently, creating an expensive trial-and-error process. Mosaic Agent Bricks streamlines evaluation by automatically generating relevant evaluations and synthetic data based on high-level task descriptions provided by the users.
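The announcement does not detail the underlying interface, but the workflow can be sketched in plain Python. The sketch below is purely illustrative: `EvalCase`, `generate_synthetic_evals`, and `score_agent` are hypothetical names, not Databricks APIs, and the synthetic-data generation step is stubbed out.

```python
from dataclasses import dataclass
from typing import Callable

# Purely illustrative sketch -- these names are not Databricks APIs.

@dataclass
class EvalCase:
    input_text: str
    expected: str

def generate_synthetic_evals(task_description: str) -> list[EvalCase]:
    """Stand-in for the platform's synthetic evaluation-case generation."""
    # In the real product this step would be driven by an LLM working from
    # the task description plus sampled enterprise documents.
    return [
        EvalCase("Invoice #1042, total due $1,980.00", "1980.00"),
        EvalCase("Invoice #2210, total due $310.50", "310.50"),
    ]

def score_agent(agent: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Task-aware benchmark: exact-match accuracy over the generated cases."""
    hits = sum(agent(c.input_text) == c.expected for c in cases)
    return hits / len(cases)

task = "Extract the total amount due from each invoice as a plain decimal string."
cases = generate_synthetic_evals(task)
naive_agent = lambda text: text.split("$")[-1].replace(",", "")
print(f"accuracy: {score_agent(naive_agent, cases):.2f}")
```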
The platform provides four customizable agent configurations that cater to different enterprise needs:
1. Information Extraction, which transforms documents into structured data (illustrated in the sketch after this list).
2. Knowledge Assistant, which offers precise and cited information from enterprise databases.
3. Custom Large Language Model (LLM) for text summarization and classification.
4. Multi-Agent Supervisor, which manages multiple agents for complex tasks.
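As a concrete illustration of the first configuration, the toy example below shows the document-to-structured-data step in miniature. The target fields, sample invoice, and regex-based extractor are hypothetical stand-ins, not Databricks code.

```python
import json
import re

# Toy stand-in for the Information Extraction configuration: it only
# demonstrates turning a free-text document into a structured record.

TARGET_FIELDS = {"vendor", "invoice_number", "total_due"}

def extract(document: str) -> dict:
    """Turn a free-text invoice into a structured record."""
    return {
        "vendor": re.search(r"From:\s*(.+)", document).group(1),
        "invoice_number": re.search(r"Invoice\s+#(\d+)", document).group(1),
        "total_due": float(
            re.search(r"\$([\d,.]+)", document).group(1).replace(",", "")
        ),
    }

doc = "From: Acme Corp\nInvoice #1042\nTotal due: $1,980.00"
record = extract(doc)
assert set(record) == TARGET_FIELDS  # schema check
print(json.dumps(record, indent=2))
```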
In conjunction with the release of Mosaic Agent Bricks, Databricks announced the global availability of its Lakeflow platform. Lakeflow simplifies data preparation by integrating ingestion, transformation, and orchestration processes, ensuring that high-quality data is readily available for optimizing AI agents.
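For a sense of what such a pipeline looks like, here is a minimal ingestion-and-transformation sketch in the Delta Live Tables (DLT) Python style that Lakeflow's declarative pipelines build on; the landing path and table names are assumptions made for illustration, not details from the announcement.

```python
import dlt  # provided on Databricks; Lakeflow's declarative pipelines build on this API
from pyspark.sql import functions as F

# `spark` is supplied by the pipeline runtime.

@dlt.table(comment="Raw support documents ingested with Auto Loader.")
def bronze_docs():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/support/raw_docs")  # hypothetical landing path
    )

@dlt.table(comment="Cleaned text ready to serve as an agent's knowledge source.")
def silver_docs():
    return (
        dlt.read_stream("bronze_docs")
        .filter(F.col("body").isNotNull())
        .select("doc_id", F.lower(F.col("body")).alias("text"))
    )
```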
Another notable feature of the platform is Agent Learning from Human Feedback, which lets the system automatically adjust its components based on natural-language guidance, avoiding common issues such as “prompt stuffing” that often complicate AI interactions.
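The announcement does not describe how that guidance is represented internally, but the conceptual sketch below contrasts the two approaches: feedback stored as a discrete signal the platform can act on, rather than text appended to an ever-growing prompt. The `Agent` class and its methods are invented for illustration and are not a Databricks API.

```python
# Conceptual sketch only -- this Agent class is invented for illustration.

class Agent:
    def __init__(self, system_prompt: str):
        self.system_prompt = system_prompt
        self.guidelines: list[str] = []

    def add_feedback(self, guidance: str) -> None:
        """Record natural-language guidance as a signal rather than prompt text."""
        self.guidelines.append(guidance)
        # In the platform the article describes, such a signal would drive
        # automatic re-optimization of the agent's components.

agent = Agent("Answer questions using only cited enterprise documents.")
agent.add_feedback("Prefer the most recent pricing sheet when documents conflict.")
print(agent.guidelines)
```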
With competitors including Google and Microsoft in the AI development space, Databricks argues that the automated optimization features of Mosaic Agent Bricks set it apart. Support for multiple ways for agents to communicate adds to the platform's versatility and lays a foundation for broader enterprise AI solutions.
For organizations aiming to put AI agents into production, Databricks' new tools offer a way past long-standing barriers in agent evaluation and data preparation, letting teams concentrate on identifying practical use cases. The shift means evaluation infrastructure no longer stands between a working prototype and deployment, which should make for more effective AI solutions in real-world applications.