Enterprise

Built for the enterprise.

ZenSearch is delivered as an enterprise platform — every deployment is shaped to your data, identity stack, and compliance program.

Talk to our team and we'll scope the right deployment together.

Three ways to deploy.

The same product, shaped to where your data and your team live.

Dedicated Cloud

Single-tenant ZenSearch on infrastructure we operate. Region-pinned, SSO-integrated, with a custom SLA.

Self-hosted

Run ZenSearch in your VPC, on-premises, or fully air-gapped. Bring-your-own-LLM and storage from day one.

Compliance-led

SOC 2, HIPAA, and GDPR review paths. Custom guardrails, data residency, and audit retention shaped to your program.

Included with every enterprise deployment

The whole platform — sized to your environment.

  • Unlimited users and documents, with capacity sized to your deployment
  • All connectors, including premium (SAP, Salesforce, MS365 service-account)
  • Hybrid search, agents, governance, and Control Tower admin
  • Bring-your-own-LLM (OpenAI, Anthropic, Cohere, Groq, Azure, Bedrock, Ollama)
  • SSO (SAML / OIDC), SCIM, and document-level RBAC
  • Audit logging with retention sized to your compliance program
  • Dedicated CSM, deployment assistance, and migration support
  • 24×7 priority support paths available
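Document-level RBAC, as listed above, means search results are filtered by what the caller is entitled to see. A minimal sketch of the idea, assuming a group-based ACL on each document (the function and field names here are hypothetical, not ZenSearch's actual API):

```python
# Illustrative only: document-level RBAC at query time.
# Each document carries an ACL (a set of allowed groups); results
# whose ACL does not intersect the caller's groups are dropped
# before ranking. Names ("allowed_groups", "filter_by_acl") are
# hypothetical, not taken from the ZenSearch product.

def filter_by_acl(results, user_groups):
    """Keep only documents whose ACL intersects the user's groups."""
    groups = set(user_groups)
    return [doc for doc in results if doc["allowed_groups"] & groups]

docs = [
    {"id": "d1", "allowed_groups": {"eng"}},
    {"id": "d2", "allowed_groups": {"finance"}},
]
print([d["id"] for d in filter_by_acl(docs, ["eng"])])  # → ['d1']
```

In practice this filtering happens inside the search engine (as an index-level filter) rather than post-hoc, so unauthorized documents never influence ranking.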

Frequently asked questions

Why is every deal custom?

Enterprise deployments differ along axes that off-the-shelf pricing can't model — data residency, deployment topology, identity stack, compliance posture, connector mix, and target user counts. We'd rather size the work to your environment than ship a packaged tier you'll have to bend to.

Is there a way to evaluate ZenSearch before talking to sales?

Yes. The Lite self-host edition is a free closed-binary Docker distribution intended for evaluation and small internal use. You can install it in under 10 minutes on a single machine and run the full RAG and agent pipeline against your own data.

Can ZenSearch run fully on our infrastructure?

Yes. ZenSearch supports single-machine Docker, production Kubernetes, and fully air-gapped deployments where no data leaves your network. Bring-your-own-LLM through a Model Gateway makes local inference (Ollama, LM Studio, or any OpenAI-compatible endpoint) a first-class option.
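The "OpenAI-compatible endpoint" pattern is what makes local inference interchangeable with cloud providers: the same chat-completions request works against Ollama's local server (which listens on port 11434 and exposes an OpenAI-compatible API under `/v1`) or any hosted provider, with only the base URL changing. A small sketch using only the standard library; ZenSearch's actual Model Gateway configuration is not shown here, and the model name is illustrative:

```python
import json
from urllib import request

# Sketch of the OpenAI-compatible request shape. Swapping cloud
# inference for local inference is just a base-URL change; the
# payload format stays identical. The model name "llama3" is an
# example, not a recommendation.

def build_chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build (but do not send) an OpenAI-compatible chat-completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Local inference via Ollama's default endpoint:
req = build_chat_request("http://localhost:11434/v1", "llama3", "ping")
```

In an air-gapped deployment, `request.urlopen(req)` would complete entirely on the local network; pointing the same builder at a cloud provider's base URL (with an auth header added) is the hybrid-mode path.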

Do you support cloud, self-host, or hybrid?

All three. Dedicated cloud and self-host share the same data model and APIs, so customers commonly start in one and migrate to the other. Hybrid mode keeps data on-premises while routing inference through cloud LLM providers.

Do you offer discounts for nonprofits or research?

Yes — we offer significant reductions for accredited nonprofits, academic institutions, and qualifying open-source projects. Reach out with details about your use case.

Is my data used to train AI models?

No. Customer data is never used to train models. Cloud customers route through major LLM providers under enterprise data-protection agreements; self-host customers retain full control — data never leaves your network.

Let's scope it together.

Tell us about your data, identity stack, and deployment constraints. We'll come back with a concrete proposal.

Or email us directly at [email protected].