Where Smartflow Sits
Smartflow is deployed between the orchestration layer and LLM providers. It enhances existing frameworks without replacing them — your orchestrators stay intact.
Apps Inside Orchestrators
Your apps (web, mobile APIs, microservices) live inside orchestration frameworks. Smartflow transparently intercepts all LLM calls from this layer.
Symbiotic Integration
A bidirectional relationship: Smartflow enhances orchestrators with caching and routing, and reports metrics back to framework dashboards for full observability.
Non-Invasive Enhancement
Keep using Glean, LangChain, LlamaIndex as-is. Smartflow plugs in transparently via network redirect — no code changes to your orchestrators.
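The redirect pattern can be sketched in a few lines. This is an illustrative example only: the gateway URL and the use of the `OPENAI_BASE_URL` environment variable are assumptions for the sketch, not Smartflow's documented configuration — many OpenAI-compatible clients read that variable, which is what makes a no-code-change redirect possible.

```python
import os

# Hypothetical gateway endpoint -- illustrative, not Smartflow's real URL.
SMARTFLOW_URL = "https://smartflow.internal:8443/v1"

def provider_base_url() -> str:
    # Orchestrators that read the provider URL from the environment need
    # no code changes: redirecting is just repointing this variable.
    return os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")

# Redirect every OpenAI-compatible call through the gateway.
os.environ["OPENAI_BASE_URL"] = SMARTFLOW_URL
```

Because the override lives in the deployment environment rather than in application code, LangChain or LlamaIndex pipelines keep running unmodified.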
Multi-Provider Support
Route to AWS Bedrock, Google Gemini, OpenAI, Anthropic, and self-hosted models (Llama, Mistral) from any orchestrator or app in the platform.
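Conceptually, multi-provider routing is a lookup from model name to backend. The table and fallback below are hypothetical — Smartflow's actual routing policy is not public — but they show the shape of the dispatch:

```python
# Hypothetical routing table; prefixes and provider names are illustrative.
ROUTES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "gemini-": "google",
    "llama-": "self-hosted",
    "mistral-": "self-hosted",
}

def route(model: str) -> str:
    """Pick a backend provider from the requested model name."""
    for prefix, provider in ROUTES.items():
        if model.startswith(prefix):
            return provider
    # Assumed fallback: send unknown models to a Bedrock endpoint.
    return "aws-bedrock"
```

A real gateway would also weigh cost, latency, and availability, but the per-request dispatch point stays the same.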
60–80% Cost Savings
4-phase semantic cache reduces provider calls across all apps in the orchestration layer simultaneously. One hit benefits every user.
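Smartflow's 4-phase pipeline is proprietary, but the core idea of one phase — reusing a cached response when a new prompt's embedding is close enough to a stored one — can be sketched with a cosine-similarity lookup. All names here are illustrative:

```python
import math

class SemanticCache:
    """Minimal sketch of a semantic cache phase: serve a stored response
    when a new prompt embedding is within a similarity threshold."""

    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.entries: list[tuple[list[float], str]] = []  # (embedding, response)

    @staticmethod
    def _cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def lookup(self, embedding: list[float]):
        best = max(self.entries,
                   key=lambda e: self._cosine(embedding, e[0]),
                   default=None)
        if best and self._cosine(embedding, best[0]) >= self.threshold:
            return best[1]  # cache hit: no provider call is made
        return None

    def store(self, embedding: list[float], response: str) -> None:
        self.entries.append((embedding, response))
```

Because the cache sits in the shared gateway rather than in any one app, a hit produced by one application's traffic serves every other application's near-duplicate prompts.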
Framework-Native Reporting
Smartflow pushes telemetry back to LangSmith, Glean Analytics, and other framework dashboards — see Smartflow data in your existing tools.
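LangSmith and Glean Analytics each have their own ingestion APIs, which a gateway must adapt to internally; the record shape below is purely illustrative and does not reflect Smartflow's actual wire format. It only shows the kind of per-call telemetry that gets pushed back:

```python
import json
import time

def build_telemetry(event: str, **fields) -> dict:
    # Illustrative record only -- field names are assumptions, not
    # Smartflow's or any dashboard's documented schema.
    return {"source": "smartflow", "event": event, "ts": time.time(), **fields}

def encode(record: dict) -> str:
    # JSON-encode for an HTTP POST to a dashboard's collector endpoint.
    return json.dumps(record, sort_keys=True)
```

Usage would look like `encode(build_telemetry("cache_hit", model="gpt-4o", latency_ms=12))`, with the gateway translating that record into each dashboard's native format.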
Patent-Pending Standardisation
Proprietary normalisation layer preserves temperature, top-p, and model-specific tunings when routing between providers. Switch models without breaking behaviour.
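The proprietary layer itself is not public, but the mechanics of parameter normalisation can be sketched: map each request into a canonical form, then emit the field names the target provider expects. The `bedrock-titan` field names below follow Amazon Titan's published request schema; the mapping table as a whole is an illustration, not Smartflow's implementation.

```python
def normalise_params(params: dict, target: str) -> dict:
    """Sketch: carry temperature / top-p / max-tokens across providers
    whose request schemas name them differently."""
    # Canonical form (OpenAI-style names, with assumed defaults).
    canonical = {
        "temperature": params.get("temperature", 1.0),
        "top_p": params.get("top_p", 1.0),
        "max_tokens": params.get("max_tokens", 1024),
    }
    if target == "openai":
        return dict(canonical)
    if target == "bedrock-titan":
        # Titan's schema uses camelCase names for the same knobs.
        return {
            "temperature": canonical["temperature"],
            "topP": canonical["top_p"],
            "maxTokenCount": canonical["max_tokens"],
        }
    raise ValueError(f"unknown target: {target}")
```

Because tunings are preserved through the canonical form, rerouting a request from one provider to another does not silently reset its sampling behaviour.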