Agent.ai Ships Major Platform Updates: Sharing, Evals & Builder Tools

Agent.ai ships major platform updates including Google Docs-style sharing, custom instructions, OpenClaw integrations, and enhanced evaluation tools for AI agent builders.

Tags: ai-agents, agent-frameworks, agent-builder, openclaw, prompt-engineering, api-integration

Agent.ai has rolled out a comprehensive set of platform updates targeting agent builders and deployment workflows. The updates span collaboration features, evaluation tooling, and developer experience improvements across the platform's core functionality.

For teams building and deploying AI agents at scale, these changes address key pain points around access control, agent training, and workflow automation. Here's what shipped and why it matters for production deployments.

Google Docs-Style Agent Sharing System

The platform rebuilt its sharing infrastructure from scratch, replacing the legacy dropdown system with a dedicated Share tab. The new system introduces role-based permissions that mirror familiar collaboration patterns.

Builders now have granular control over agent access through three permission levels:

  • Viewer — execution access only
  • Commenter — execution plus configuration visibility
  • Editor — full run, view, and edit permissions

The system supports both restricted sharing and public link sharing. Changes save instantly without requiring a publish step, eliminating the previous workflow friction around sharing updates.

For agent users, the access request flow now includes contextual permission requests with optional notes. Owners receive email notifications and can approve or deny access directly from the Share tab.
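The three permission levels form a simple ordering, which can be modeled as an ordered enum. The sketch below is a hypothetical illustration of that model (the `Role` and `can` names are assumptions, not Agent.ai's actual API):

```python
from enum import IntEnum

class Role(IntEnum):
    """Permission levels ordered from least to most access."""
    VIEWER = 1     # execution access only
    COMMENTER = 2  # execution plus configuration visibility
    EDITOR = 3     # full run, view, and edit permissions

# Minimum role required for each action.
REQUIRED = {"run": Role.VIEWER, "view_config": Role.COMMENTER, "edit": Role.EDITOR}

def can(role: Role, action: str) -> bool:
    """A role permits an action if it meets or exceeds the required level."""
    return role >= REQUIRED[action]
```

Because each level strictly contains the one below it, a single integer comparison is enough to decide access, with no per-action permission tables per user.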

Custom Instructions for Consistent Agent Behavior

Custom instructions tackle the repetitive-context problem that plagues most AI workflows. Builders can now define agent response preferences once rather than repeating instructions across interactions.

The feature adds a dedicated Email Agent tab in Settings where builders specify:

  • Response tone and formatting requirements
  • Language and structural preferences
  • Domain-specific context and constraints
  • Output format specifications

If no custom instructions are configured, agent behavior remains unchanged. The feature operates as an overlay on existing prompt engineering workflows rather than replacing them.

OpenClaw Integration Actions

Three new actions connect workflows directly to users' personal OpenClaw instances without manual configuration. The integration handles credential resolution automatically, eliminating the typical setup friction for self-hosted AI infrastructure.

The new action set includes:

  • Get Details — automatic instance credential resolution
  • Chat Completion — OpenAI-compatible API calls with session persistence
  • Tools Invoke — direct tool calling through OpenClaw Gateway

All three actions appear in a dedicated OpenClaw category within the Action Library. The integration supports multi-turn conversations and streaming responses for real-time applications.
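Since the Chat Completion action is described as OpenAI-compatible, a client-side request body would follow the familiar `/v1/chat/completions` shape, with multi-turn sessions persisted by resending prior messages. The sketch below only builds the payload; the endpoint path, model name, and the idea that Get Details supplies the base URL and credentials are assumptions:

```python
import json

def chat_payload(messages: list[dict], model: str = "default", stream: bool = False) -> str:
    """Build an OpenAI-compatible chat completion request body.

    `messages` is the full conversation history; resending it each call is
    how multi-turn session state is carried in this API style.
    """
    return json.dumps({"model": model, "messages": messages, "stream": stream})

# Multi-turn usage: append each exchange to the history before the next call.
history = [{"role": "user", "content": "Summarize my inbox."}]
body = chat_payload(history, stream=True)  # stream=True for real-time output
```

In practice the body would be POSTed to the user's own OpenClaw instance with the resolved credentials in an authorization header.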

Knowledge Agent Evaluation Workflows

Knowledge agents can now convert user conversations into training examples through a new Evaluations tab. The feature addresses the gap between initial agent configuration and real-world performance optimization.

Builders can review actual user interactions and curate few-shot examples directly from conversation history. The system supports up to 100 examples per agent with a 5,000-character budget for prompt injection.

Key capabilities include:

  • Toggle messages as positive or negative examples
  • Edit responses to create ideal answer templates
  • Automatic injection of selected examples into system prompts
  • Real-time polling for new conversations every 10 seconds

The evaluation system has zero performance impact when no examples are configured, making it safe to deploy across existing agent infrastructures.
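The documented limits (up to 100 examples, 5,000 characters of prompt budget) imply a selection step before injection. A plausible greedy sketch, assuming curated examples are already ordered by priority (the function name and greedy strategy are illustrative, not the platform's algorithm):

```python
def select_examples(examples: list[str], max_count: int = 100,
                    char_budget: int = 5000) -> list[str]:
    """Greedily pick curated examples under the count and character limits.

    Stops at the first example that would exceed either limit, so the
    selected set always fits in the prompt-injection budget.
    """
    chosen: list[str] = []
    used = 0
    for example in examples:
        if len(chosen) >= max_count or used + len(example) > char_budget:
            break
        chosen.append(example)
        used += len(example)
    return chosen
```

With no curated examples the function returns an empty list, consistent with the zero-impact behavior described above.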

Redesigned Event Triggers Interface

Event triggers now configure directly within the Edit Trigger sidebar instead of requiring Action Library navigation. The change streamlines webhook and API integration setup for workflow automation.

The updated interface shows third-party connection status upfront and handles OAuth flows inline. Required field validation runs before trigger creation with human-readable error messages.

Active triggers display in the builder with clear labeling (for example, "Trigger Agent On: Email Sent") alongside corresponding icons. The change reduces the cognitive overhead of managing complex agent workflows.
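Running required-field validation before trigger creation, with human-readable messages, can be sketched roughly as below. The field names and error strings are hypothetical placeholders, not Agent.ai's actual schema:

```python
# Required trigger fields mapped to human-readable error messages.
REQUIRED_FIELDS = {
    "event": "Choose an event for this trigger to fire on.",
    "connection": "Connect the third-party account before creating the trigger.",
}

def validate_trigger(config: dict) -> list[str]:
    """Return all error messages for missing fields; empty list means valid.

    Collecting every error at once lets the sidebar surface all problems
    in a single pass instead of one per save attempt.
    """
    return [msg for field, msg in REQUIRED_FIELDS.items() if not config.get(field)]
```

Returning the full error list, rather than failing on the first missing field, matches the inline-validation experience the redesigned sidebar aims for.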

Enhanced Agent Builder Assistant

The Agent Builder Assistant received significant reliability improvements, particularly around action generation and template validation. The system now uses Claude Opus 4.6 with 32k token output capacity for handling complex workflows.

Key improvements include:

  • Guaranteed valid structure for generated actions
  • Self-correcting generation with built-in validation
  • Live progress updates during generation cycles
  • Persistent action saving across browser sessions
  • Full editability of generated components

The platform added 71 automated tests covering the generation-to-validation pipeline, addressing edge cases around action editing and draft persistence.

Bottom Line

These updates position Agent.ai as a more enterprise-ready platform for AI agent development and deployment. The sharing system addresses collaboration requirements for team environments, while the evaluation tools provide pathways for continuous agent improvement based on real usage data.

For developers building production agent workflows, the OpenClaw integration and improved trigger interface reduce integration overhead. The enhanced builder assistant should accelerate initial agent development cycles, particularly for complex multi-step workflows.