AI Agentic Assistance
Use grounded AI context, execution sessions, workspace memory, and policy-aware tools to plan and evolve branches safely.
Nodeable includes an AI agentic assistance layer that blends branch context with policy-gated tool execution.
What this gives you
- Grounded context responses with source attribution
- Live branch graph snapshots for AI workflows
- Execution sessions that can be resumed, canceled, or rolled back
- Workspace AI memory for reusable constraints and standards
- Policy checks before tool execution
- Branch drift warnings before risky changes
Core workflows
1) Ask for grounded context
Use the context query endpoint when you want evidence-backed answers from your current branch:
POST /api/workspaces/:workspaceId/ai/context/query
- Optional controls include graph/wiki/branch metadata and result limits
You receive a grounded envelope with scored results and source handles.
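As a minimal sketch, a client-side call to the context query endpoint might look like the following. Only the path comes from the docs; the payload field names (`question`, `limit`) and the helper names are illustrative assumptions, not the documented request schema:

```typescript
// Build the documented context-query path for a workspace.
function contextQueryUrl(workspaceId: string): string {
  return `/api/workspaces/${workspaceId}/ai/context/query`;
}

// Illustrative sketch of issuing a grounded context query.
// The body fields here are assumptions, not the documented schema.
async function queryContext(workspaceId: string, question: string): Promise<unknown> {
  const res = await fetch(contextQueryUrl(workspaceId), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question, limit: 10 }), // hypothetical fields
  });
  // The response is a grounded envelope with scored results and source handles.
  return res.json();
}
```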
2) Inspect live graph context
Use the live graph endpoint when a workflow needs direct node+edge snapshots:
POST /api/workspaces/:workspaceId/ai/context/live-graph
3) Check branch drift risk
Before large write operations, check drift:
GET /api/workspaces/:workspaceId/ai/branches/:branchId/drift
The response includes baseline branch info, lag signal, and risk level.
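One way to use this as a gate before a broad write is sketched below. The path is documented; the response field name `riskLevel` and the specific level values are assumptions based on the "risk level" signal mentioned above:

```typescript
// Build the documented drift-check path for a branch.
function driftUrl(workspaceId: string, branchId: string): string {
  return `/api/workspaces/${workspaceId}/ai/branches/${branchId}/drift`;
}

// Illustrative policy: only proceed automatically when drift risk is low;
// any other level should trigger a rebase or human review first.
// The "low" value is an assumption, not a documented enum.
function shouldProceed(riskLevel: string): boolean {
  return riskLevel === "low";
}
```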
4) Run AI execution sessions
Create and manage execution sessions:
POST /api/workspaces/:workspaceId/ai/executions
GET /api/workspaces/:workspaceId/ai/executions/:executionId
POST /api/workspaces/:workspaceId/ai/executions/:executionId/resume
POST /api/workspaces/:workspaceId/ai/executions/:executionId/cancel
POST /api/workspaces/:workspaceId/ai/executions/:executionId/rollback
Use these to persist multi-step AI work with explicit lifecycle state.
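The lifecycle endpoints above can be sketched as a small URL builder; the paths are documented, while the helper shape is illustrative:

```typescript
// The three documented lifecycle transitions beyond create/read.
type ExecutionAction = "resume" | "cancel" | "rollback";

// Build the documented execution-session paths. Pass no action to get the
// base path used for GET (status) on an existing session.
function executionUrl(workspaceId: string, executionId: string, action?: ExecutionAction): string {
  const base = `/api/workspaces/${workspaceId}/ai/executions/${executionId}`;
  return action ? `${base}/${action}` : base;
}
```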
5) Manage workspace AI memory
Store durable conventions and constraints for your workspace:
GET /api/workspaces/:workspaceId/ai/memory
POST /api/workspaces/:workspaceId/ai/memory
PATCH /api/workspaces/:workspaceId/ai/memory/:memoryId
DELETE /api/workspaces/:workspaceId/ai/memory/:memoryId
Memory entries support draft/approved/archived workflows.
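A sketch of a memory entry shape under the documented draft/approved/archived workflow. The status values come from the docs; the other field names (`title`, `content`) and the helper are assumptions:

```typescript
// The documented memory-entry workflow states.
type MemoryStatus = "draft" | "approved" | "archived";

// Hypothetical entry shape; field names other than status are assumptions.
interface MemoryEntry {
  title: string;
  content: string;
  status: MemoryStatus;
}

// New conventions start as drafts, to be promoted to "approved" after review.
function draftMemory(title: string, content: string): MemoryEntry {
  return { title, content, status: "draft" };
}
```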
6) Keep AI chat on the same conversation thread
AI chat continuity depends on sending the same conversationId on each follow-up turn.
Chat endpoints:
POST /api/workspaces/:workspaceId/ai/chat
GET /api/workspaces/:workspaceId/ai/conversations
GET /api/workspaces/:workspaceId/ai/conversations/:conversationId
How continuation works:
- On chat start, the API returns the conversation ID in the X-AI-Conversation-Id response header
- The first chat status SSE event also includes conversationId (operation: "chat")
- The web client stores this ID and sends it back on the next POST /ai/chat
Why this matters:
- Chat history (including assistant tool calls/results) is persisted per conversation
- Follow-up turns replay persisted history to the model
- If a turn is sent without the prior conversationId, a new conversation starts and prior tool usage context is not replayed
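The continuation rules above can be sketched as a payload builder: echo the stored conversationId on follow-up turns, and omitting it starts a new conversation. The `message` field name is an assumption:

```typescript
// Build the chat request body for a turn. Including conversationId continues
// the existing thread; omitting it starts a NEW conversation, so the prior
// tool-usage context is not replayed. Field names here are assumptions.
function nextTurnBody(message: string, conversationId?: string): string {
  return JSON.stringify(
    conversationId ? { message, conversationId } : { message }
  );
}
```

In practice the client would take `conversationId` from the X-AI-Conversation-Id header (or the first status SSE event) of the previous turn and pass it here.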
Policy and safety model
Tool calls are evaluated against capability policy before execution. Denied calls are blocked and logged.
Admin capability endpoints:
GET /api/admin/ai/tool-capabilities
PATCH /api/admin/ai/tool-capabilities/:capabilityId
POST /api/admin/ai/tool-capabilities/:capabilityId/test-policy
Practical team usage
- Keep memory entries for naming conventions and architecture rules
- Use drift checks before broad branch updates
- Prefer execution sessions for multi-step changes instead of one-off prompts
- Keep higher-risk tools approval-gated for production workspaces
BYOK configuration (TEAM workspaces)
If your workspace uses BYOK mode, you can configure how many tool-calling steps AI chat may run before it must stop.
Where to set it:
- Open workspace settings and go to AI Service Fleet
- Switch provider mode to Bring your own key (BYOK)
- Create or edit a BYOK provider
- Set Max Agent Steps
Behavior:
- Valid range is 1 to 20
- Default is 8
- This affects the AI chat multi-step tool loop for that provider
If your workspace is in Managed mode, this value is controlled by your admin-managed provider configuration.
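As a client-side sketch of the documented constraints (valid range 1 to 20, default 8), one way to normalize a Max Agent Steps value before submitting it is shown below. Clamping rather than rejecting is an illustrative choice, not documented server behavior:

```typescript
// Normalize a Max Agent Steps value to the documented constraints:
// valid range 1-20, default 8. Clamping is an assumption of this sketch;
// the server may instead reject out-of-range values.
function normalizeMaxAgentSteps(value?: number): number {
  if (value === undefined || !Number.isInteger(value)) return 8; // documented default
  return Math.min(20, Math.max(1, value)); // clamp into the documented 1-20 range
}
```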