Tier 01 — Foundations

Foundations.

Get up and running with a private, local AI model on your own hardware. The entry point for individual practitioners, small teams, or organizations exploring private AI for the first time — without sending a single token to the cloud.

$8,000
One-time setup fee
Hardware: Client-provided or quoted
Models Deployed: 1 open-source model
User Accounts: Up to 5
MCP Integrations: 1 included
Post-Deploy Support: 2 hours included
Compliance: HIPAA-ready baseline
Deployment Time: ~6 weeks
Warranty Period: 30 days

Everything you need to go from zero
to private AI in six weeks.

All tiers include a final handoff meeting with documentation review and a 30-day warranty period during which minor configuration issues are resolved at no charge.

01
LLM Server Installation & Configuration
Full setup of Ollama or LM Studio on your existing hardware. Configured for local-network-only access — nothing listening on the open internet.
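For illustration, here is what local-network-only binding can look like on an Ollama install. OLLAMA_HOST is Ollama's standard environment variable; the address shown is a placeholder for your server's internal LAN IP, and your actual configuration is set during deployment.

```shell
# Bind Ollama to a single internal interface. The default is loopback only;
# 0.0.0.0 (all interfaces, including any internet-facing one) is deliberately avoided.
# 192.168.1.50 is a placeholder for your server's LAN address.
export OLLAMA_HOST=192.168.1.50:11434
ollama serve
```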
Open-Source Model Deployment
One model selected and optimized for your hardware and use case — Llama 3.2, Mistral 7B, Phi-4, or equivalent. Quantized as needed for your VRAM.
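On Ollama, for example, a quantized build can be selected directly through the model tag. The tag below is illustrative; available quantizations vary by model and release, and the right one is chosen for your VRAM during setup.

```shell
# Pull a 4-bit quantized Llama 3.2 build suited to modest VRAM.
# Tag shown is illustrative; consult the Ollama model library for current tags.
ollama pull llama3.2:3b-instruct-q4_K_M
```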
Local-Network Access Configuration
Network binding, port configuration, and basic firewall and port security review. Your LLM is accessible inside your network and nowhere else.
User Accounts & API Authentication
Up to 5 user accounts with API key authentication. Internal REST API endpoint compatible with the OpenAI SDK — a drop-in replacement for existing integrations.
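As a sketch of what "drop-in replacement" means in practice: the request body your tools send is the standard OpenAI chat-completions shape, posted to your internal endpoint instead of OpenAI's. The address, API key, and model name below are placeholders; any OpenAI SDK client works by pointing its base_url at the local endpoint.

```python
import json

# Illustrative values only: substitute your server's LAN address, the API key
# issued during setup, and your deployed model's name.
BASE_URL = "http://192.168.1.50:11434/v1"  # Ollama's OpenAI-compatible path
API_KEY = "key-issued-at-handoff"          # hypothetical placeholder key

def chat_request(model, user_message, system=None):
    """Build an OpenAI-style chat-completions request body."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": user_message})
    return {"model": model, "messages": messages}

body = chat_request("llama3.2", "Summarize our leave policy.",
                    system="Answer only from company documents.")
# POST this body to BASE_URL + "/chat/completions" with the API key as a
# Bearer token; existing OpenAI integrations need no other changes.
print(json.dumps(body, indent=2))
```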
MCP Server Integration
One MCP server connected to your LLM — filesystem, web search, or an equivalent tool of your choice. Enables document access, search, and tool use within the model.
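As an illustration, wiring a filesystem MCP server into an MCP-capable client is typically a few lines of configuration. The directory path is a placeholder, and the exact file name and schema depend on the client in use; this sketch follows the common mcpServers convention.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/srv/shared-docs"]
    }
  }
}
```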
Firewall & Port Security Review
Review of your existing firewall rules, open ports, and network exposure. Recommendations and configuration changes to harden the deployment environment.
Post-Setup Documentation Package
Full technical documentation covering your configuration, model parameters, API endpoints, user management, and troubleshooting procedures.
Post-Deployment Support
2 hours of included post-deployment support for configuration questions, minor adjustments, and issue resolution after go-live.

Who Foundations
is built for.

02
01
Individual Practitioners

Lawyers, consultants, researchers, and analysts who need a private AI assistant for sensitive client work — without submitting confidential documents to third-party servers.

02
Small Teams Replacing Cloud AI

Teams currently using ChatGPT, Copilot, or Claude for internal workflows who need to bring that capability fully on-premises due to data sensitivity or compliance requirements.

03
Proof-of-Concept Deployments

Organizations that want to validate private AI before committing to a larger rollout. Foundations gives you a working system to evaluate without over-investing in infrastructure.

04
HIPAA & Financial Compliance Pilots

Healthcare teams, financial advisors, and regulated service providers who need AI capabilities but cannot route patient, client, or transaction data through cloud providers.

05
Internal Document Q&A

Teams that need to query internal policies, contracts, SOPs, or institutional knowledge quickly — grounded in your own documents, not the public internet.

06
Department-Level AI Assistants

A first-generation AI assistant for a specific department — legal, HR, finance, or operations — before expanding to an organization-wide deployment.

Live in six weeks,
not six months.

03
01 Weeks 1–2
Discovery & Procurement

Discovery call, compliance review, hardware specification sign-off. Hardware procurement order placed if needed. Network requirements documented.

02 Weeks 3–4
Installation & Configuration

LLM server software installed and configured on your hardware. Network access rules applied. Security review completed. Base system tested.

03 Week 5
Model & Integration Setup

Model deployed and optimized for your hardware. User accounts provisioned. API keys issued. MCP server integration connected and tested.

04 Week 6
Handoff & Go-Live

Onboarding session with your team. Documentation package delivered. System confirmed live. 30-day warranty period begins. 2 included support hours activated.

How Foundations
compares.

04
| Capability | Tier 1 — Foundations | Tier 2 — Professional | Tier 3 — Business | Tier 4 — Enterprise |
|---|---|---|---|---|
| Setup Fee | $8,000 | $18,000 | $38,000 | $75,000+ |
| Deployment Option | Local only | Local or Cloud | Local + Hybrid | HA Multi-Server |
| Models Included | 1 | Up to 3 | Up to 5 | Unlimited |
| User Accounts | Up to 5 | Up to 20 | Unlimited | Unlimited |
| MCP Integrations | 1 | Up to 3 | Up to 5 | Up to 10 |
| Custom Agents | – | 1 | Up to 3 | Up to 8 |
| RAG Pipeline | – | – | Included | Advanced + Hybrid |
| SSO / LDAP | – | – | SSO / SAML | SSO + LDAP / AD |
| RBAC & Audit Logging | – | Basic | Full RBAC + Logs | Full + Compliance |
| Staff Training | – | Admin walkthrough | Half-day (10 people) | Full-day (unlimited) |
| Post-Deploy Support | 2 hours | 4 hours | 8 hours | 30-day hypercare |

The highest-value additions
for Foundations clients.

Foundations is intentionally lean. These three add-ons deliver the most ROI for teams just getting started with private AI. Bundling three or more add-ons qualifies for a 10% discount.

Prompt Engineering & Use Case Library
Curated library of tested system prompts for common business workflows — document Q&A, meeting summarization, policy lookup, email triage. Delivered as a living document with optional quarterly refresh. Gives your team a practical toolkit to get value from the model immediately, without requiring any technical expertise.
$2,500 – $5,000 one-time
Model Benchmarking & Selection
Structured evaluation of 3–5 candidate models against your specific use case, with a written recommendation report. Ensures you've deployed the right model for your workload before investing time in prompts and integrations.
$2,500 – $4,500 one-time
Annual Architecture Review
Yearly deep-dive benchmarking your stack against new models and tooling. Includes a written findings report and a prioritized recommendations roadmap. The LLM landscape moves fast enough that a yearly checkup pays for itself — add this from day one.
$4,500 per year
What's not
in the fee.

The $8,000 setup fee covers professional services only — labor, configuration, documentation, and post-deploy support. It does not include hardware procurement costs for the on-premises deployment or any cloud GPU instance fees.

Hardware can be provided by your organization (existing hardware is fine for Tier 1), sourced by Creeksea at cost, or financed over 24–36 months. See our Hardware page for current configuration and cost guidance.

All prices are starting rates in USD. Final pricing is determined after a scoping session and provided via a formal Statement of Work. Rates quoted in a signed SOW are locked for the duration of the engagement.

Ready to Begin?

Start your private
AI deployment.

All engagements begin with a complimentary 30-minute discovery call — no pressure, no commitment. We'll discuss your use case, existing hardware, and whether Foundations is the right fit.

Schedule a Call → View Tier 2