Show, Don't Tell
"This is a live assistant connected to [data source]. Watch."
The AI assistant market is projected to grow from $3.35B (2025) to $21.1B by 2030, a 44.5% CAGR (MarketsAndMarkets). Glean raised $765M total at a $7.2B valuation, surpassing $100M ARR (Crunchbase, June 2025). By 2028, organizations using multi-agent AI for 80% of customer-facing processes will dominate their markets (Gartner, Oct 2025). The category is being defined right now.
Three consequences of inaction: (1) The product advantage erodes — Azure DI is iterating, Glean is expanding, every quarter the moat shallows. (2) Existing clients commit elsewhere — once locked into Glean or an internal build, switching cost is enormous. (3) The narrative solidifies — two years without traction becomes a self-fulfilling prophecy that overshadows the genuine engineering breakthrough.
One internal deployment → credibility. One paid pilot → revenue and a case study. One production contract → a reference customer. One reference → three more conversations. The product is ready. The data proves it.
Each ring funds and de-risks the next. Don't skip rings.
WEEKS 1-4
Deploy Nexus inside Fulcrum sales. Connect SharePoint — SOWs, proposals, pricing, case studies. Mandatory daily use. Measure time saved, accuracy, onboarding speed. This creates your demo, your first case study, your credibility.
WEEKS 5-12
3-5 existing clients. $10-15K paid pilot. One department, one data source, one assistant. Deploy within 48 hours. 4-week usage. ROI report. Expand or exit.
MONTHS 4-12
Convert pilots to annual subscriptions. Expand departments. 5-10 production customers. $500K-1M ARR by Month 12.
Per-assistant/department pricing — NOT per-user. Per-user punishes adoption.
1 assistant, 1 source, 25 users, ROI report
3 assistants, 100 users, RBAC, analytics
Unlimited. On-prem. 24×7. SSO/SAML.
Deploy for Fulcrum sales. Connect SOWs, proposals, pricing from SharePoint.
Mandatory 30-min session. Daily targets. Track queries, time saved, quality.
First ROI data. Fix top issues. Create internal case study with hard numbers.
20 existing BFSI/Insurance clients. Warm outreach begins.
Top 10 prospects. Live demo. Propose $10-15K pilot.
2-3 pilots. Deploy within 48 hours of data access.
Weekly check-ins. ROI reports. Convert 2 of 3 to annual contracts.
When you have benchmarked data, you use it. Technical buyers want data. Non-technical buyers respect that you have it.
| Dimension | FD RYZE® Nexus | Glean | MS Copilot | Internal Build |
|---|---|---|---|---|
| Accuracy | 89-92% (benchmarked) | Not published | Not published | 79-80% |
| Text Fidelity | 93-100% | Not published | Not published | 59-82% |
| Hallucination Control | Critic Agent (active) | Enterprise Graph | UI slider | Must build |
| Deploy Time | 48 hours | 3+ weeks | Days (limited) | 6-12 months |
| Annual Cost | $36-60K/dept | $200K+ ACV | $30/user/mo | $150K+ eng |
| POC Cost | $10-15K pilot | $70K+ POC | Included | 3+ mo eng |
| Deployment | Cloud/On-Prem/Hybrid | Cloud only | Cloud only | Azure only |
Sources: Internal benchmarks (Cord V2, Synthdog). Cost model: 50K pages/mo, 6K queries/mo, GPT-4o-mini, Azure East US, Apr 2026. Glean: Vendr/market reports, Crunchbase (June 2025). Gartner predictions: Oct 2025 IT Symposium.
This is the data that earns the right to every conversation. Don't hide it. Don't apologize for it. Own it.
| Metric | FD RYZE® Nexus | Azure DI |
|---|---|---|
| Overall Accuracy (Cord V2) | 89% | 80% |
| Overall Accuracy (Synthdog) | 92% | 79% |
| Text Fidelity (Cord V2) | 93% | 82% |
| Text Fidelity (Synthdog) | 100% | 59% |
| Semantic Accuracy (Synthdog) | 100% | 62% |
| Numeric Reliability | 100% | 100% |
| Completeness (Cord V2) | 94% | 87% |
| Hallucination Control (Cord V2) | 94% | 75% |
| Hallucination Control (Synthdog) | 100% | 93% |
| Cost Area | Nexus | Azure Foundry |
|---|---|---|
| Document Extraction | ~$4-30/mo | $625/mo |
| Integration & Onboarding | $1-2K | $2.5-3.5K |
| System Config/Tuning | $500 | $500-1.5K |
| LLM Output Tokens | ~$0.90/mo | ~$1.51/mo |
| Compute (AKS) | $357/mo | $357/mo |
| Year 1 TCO | ~$8,580 | ~$14,766 |
| Year 2+ Annual | ~$5,880 | ~$9,816 |
Basis: 50K pages/mo · 6K queries/mo · GPT-4o-mini both sides · Azure East US · April 2026 pricing
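The table's Year 1 and Year 2+ rows reconcile as monthly run-rate × 12 plus one-time setup. A minimal sketch, assuming the $490 and $818 monthly run-rates from the cost model and one-time setup totals (~$2,700 Nexus, ~$4,950 Azure) back-solved from the Year 1 minus Year 2+ rows:

```python
def year1_tco(monthly_run_rate: float, one_time_setup: float) -> float:
    """Year 1 total cost of ownership: 12 months of run-rate plus setup."""
    return monthly_run_rate * 12 + one_time_setup

def year2_annual(monthly_run_rate: float) -> float:
    """Steady-state annual cost once setup is paid off."""
    return monthly_run_rate * 12

# One-time setup (integration + config/tuning) inferred from the table rows.
nexus_y1 = year1_tco(490, 2_700)    # -> 8,580
azure_y1 = year1_tco(818, 4_950)    # -> 14,766
saving = 1 - nexus_y1 / azure_y1    # -> ~0.42
```

Handing a prospect the formula, not just the totals, lets them swap in their own volumes and verify the gap themselves.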
Dynamic VLM + OCR routing — intelligently routes each document to the optimal extraction method instead of one-size-fits-all. 400-token precision chunking — preserves context boundaries where Azure's 1024-token chunks fragment meaning. Critic Agent — actively verifies every answer against source material, not a passive confidence slider.
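The 400-token chunking claim is easy to make concrete. A minimal illustrative sketch of boundary-aware chunking; whitespace words stand in for real tokens, and the boundary logic here is an assumption, not Nexus's actual implementation:

```python
def chunk_text(text: str, max_tokens: int = 400):
    """Greedy boundary-aware chunking: pack whole sentences into chunks of
    at most max_tokens, so no sentence is split mid-thought.
    Whitespace words approximate tokens (illustrative only)."""
    sentences = [s.strip() for s in text.replace("\n", " ").split(". ") if s.strip()]
    chunks, current, current_len = [], [], 0
    for sent in sentences:
        n = len(sent.split())
        if current and current_len + n > max_tokens:
            chunks.append(". ".join(current) + ".")
            current, current_len = [], 0
        current.append(sent.rstrip("."))
        current_len += n
    if current:
        chunks.append(". ".join(current) + ".")
    return chunks

# Sentences 1-2 fit in one 6-token chunk; sentence 3 starts a new one.
demo = chunk_text("First fact here. Second fact here. Third fact here.", max_tokens=6)
```

The point of the smaller window is retrieval precision: a chunk that ends at a sentence boundary carries one coherent idea, while a blunt fixed-size split can hand the retriever half a clause.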
The largest gap is document extraction: ~$4-30/mo vs $625/mo. Azure Document Intelligence charges per-page processing fees that compound with volume. Nexus uses a VLM-first approach that bypasses per-page extraction fees for most document types. At scale, this gap widens — not narrows.
Print both tables. Bring them to every technical meeting. When someone says "why wouldn't we build on Azure?" — slide this across. Say: "We benchmarked against Azure's own Document Intelligence. Here are the numbers. Here's the methodology. We're happy to walk through it." Confidence backed by data is the most disarming competitive move there is.
Benchmarks win meetings. Customer-specific proof wins deals.
Build a vertical-specific demo assistant before every meeting. 48-hour deployment makes this feasible for every serious prospect.
Publish the methodology. Let prospects reproduce on their own data. "Don't trust our numbers — here's the test. Run it yourself."
Every pilot generates: quantified ROI report, customer quote, benchmark data, expansion recommendation. By pilot #5, the library sells itself.
Tactics to compress the sales cycle from months to weeks.
Map regulatory deadlines (RBI circulars, IRDAI compliance dates), budget cycles (Q4 use-it-or-lose-it), competitor evaluations ("Glean POC costs $70K — we're $10K"). Create time-bounded offers tied to real events.
Give your internal champion a pre-built slide deck they can present to their leadership. Include the ROI numbers from the calculator, the competitive table, and a 1-page executive summary. Make it easy for them to sell internally.
Pre-build: SOC2 compliance summary, data security FAQ, GDPR/IRDAI alignment doc, standard MSA, SLA terms. When procurement asks questions, answer in hours not weeks. Speed through procurement = deal velocity.
Objections are spoken — you can prepare for them. Silent killers are unspoken reasons deals die that nobody sees coming. Each one has a detection signal and a counter-move.
Signal: Response times slow. Meetings get rescheduled. New names appear on emails. Counter: Multi-thread from Day 1. Never single-thread a deal. Get a second sponsor above or adjacent to your champion by Week 2.
Signal: "We're managing fine with what we have." No urgency in tone. Questions about "nice to have vs need to have." Counter: Quantify the cost of status quo with the ROI Calculator. Make the invisible cost visible: "Your team spends X hours/year searching. That's $Y in salary burned."
Signal: "Finance is reviewing all discretionary spend." Pilot approval pushed to next quarter. Counter: Position the pilot as $10K — below most procurement thresholds. "This costs less than one consultant-week. It can come from an existing line item."
Signal: "Another team is also looking at this." "We need to align with [other department] first." Counter: Offer to present to both teams together. Position Nexus as enterprise-wide, not department-specific. "We start with one team, but this scales across the organization."
Signal: Enthusiasm from business side, silence from IT/security. "We need to run it by infosec." Counter: Deliver the Procurement Fast-Track pack proactively. Don't wait for questions — send the SOC2 summary, data security FAQ, and GDPR alignment doc before they ask.
Signal: "We're also looking at Glean / Copilot / building internally." Multiple vendors in play. Decision timeline keeps extending. Counter: Compress the decision with a paid pilot. "The fastest way to decide is to see it work with your data. 4 weeks, $10K, and you'll know."
A governed enterprise knowledge assistant that connects to your organization's unstructured data and delivers grounded, traceable answers in seconds. RBAC. Active hallucination prevention. Cloud, on-prem, or hybrid. Live in 48 hours.
48 hours. No engineering.
89-92%. Critic Agent. Self-correcting.
RBAC, audit, on-prem. Day 1.
Buyers think in "chatbot" because that's the mental model they have. Don't fight it — use it as a bridge. Start with what they understand, then elevate.
When a buyer says "so it's a chatbot?" — say: "Yes, in the same way Salesforce is a spreadsheet. The category is technically correct, but it misses the governance, traceability, and enterprise architecture underneath. It's a chatbot the way a Bloomberg Terminal is a calculator."
With business buyers (VP Ops, Head of Dept): Say "AI-powered chatbot for your [compliance/underwriting/HR] data." They get it instantly. Then show the governance layer.
With technical buyers (CIO, CTO, IT): Say "Governed enterprise knowledge assistant with RAG, RBAC, and active hallucination control." They respect the precision.
In proposals and docs: Use both. "Enterprise Knowledge Assistant (AI Chatbot)" — the formal term first, the accessible term in parentheses.
Size: 500-10,000 employees
Industries: BFSI, Insurance, Manufacturing
Data: Large SharePoint / document repos
Constraint: Can't use public ChatGPT
Pipeline: Existing Fulcrum relationship
CIO/CTO: "Governed AI in 48h, not 12 months."
VP Ops: "80% reduction in query time."
CCO: "Every answer traceable. Full audit."
Head IT: "No engineering. No vendor lock."
| Dimension | Nexus | Glean | Copilot | Build |
|---|---|---|---|---|
| Accuracy | 89-92% | Not published | Not published | 79-80% |
| Text Fidelity | 93-100% | Not published | Not published | 59-82% |
| Deploy | 48 hours | 3+ weeks | Days | 6-12 months |
| Annual Cost | $36-60K/dept | $200K+ | $30/u/mo | $150K+ |
| POC | $10-15K | $70K+ | Included | 3+ mo |
✓ Lead with a live demo
✓ Cite benchmarks: "89-92% vs 79-80%"
✓ Say "we have better RAG — here's the data"
✓ Offer $10-15K paid pilot
✓ Let the prospect type their query
✓ Be honest about limits
✓ Print and bring competitive table
✕ Hide from comparisons
✕ Define yourself by what you're NOT
✕ Offer free trials
✕ Promise roadmap features
✕ Skip governance story
✕ Let it become abstract philosophy
✕ Underprice the first deal
Map regulatory deadlines and budget cycles. "Glean POC costs $70K — ours is $10K." Time-bounded offers tied to real events.
Pre-built slides your champion can present internally. ROI numbers, competitive table, exec summary. Make them a hero.
SOC2 summary, security FAQ, GDPR doc, standard MSA. Answer procurement in hours, not weeks.
The unspoken reasons deals die. Know the signals. Have the counter-moves ready.
Multi-agent PDLC/SDLC pipeline. Inference orchestration, model routing, governance. Not what you sell. What makes everything possible.
No-code builder. Dynamic VLM+OCR, 400-token chunking, Critic Agent, auto-retry, RBAC. The engine. Reference but don't lead with it.
Purpose-built per department. Connected to customer data. Governed, traceable. One Nexus → many assistants → many revenue streams.
Sell the assistant, not the platform. Like Salesforce sells CRM — not Force.com. Buyers don't buy engines. They buy what engines produce.
Infinity: Inference routing, client Azure deployment.
Nexus: Underwriting manuals, risk assessments, claims.
Product: "Ask about guidelines, loss ratios, precedents."
Infinity: On-prem for data sovereignty.
Nexus: RBI circulars, policies, audit reports.
Product: "Digital lending obligations?"
Infinity: Hybrid — edge + cloud.
Nexus: Maintenance logs, quality reports, drawings.
Product: "CNC-400 recalibration after tool change?"
Infinity: Cloud, multi-tenant.
Nexus: Past proposals, SOWs, pricing, case studies.
Product: "Similar APAC financial services proposal?"
Beyond SharePoint: Salesforce, Jira, Confluence, ServiceNow, SAP. Each connector = upsell. Each integration = stickiness.
SIs and consultancies white-label Nexus for their clients. Revenue share. Each partner multiplies reach.
Pre-built kits: Insurance Underwriting, Banking Compliance, Manufacturing SOP. Deployment from 48h → 4h for standard use cases.
Each layer of the stack addresses a specific unspoken buyer concern.
Infinity is the full enterprise AI platform underneath. Nexus is just the starting point. When the buyer's needs evolve — multi-agent, workflow automation, custom models — the platform is already there. No re-platforming, no migration. This neutralizes the "we'll outgrow your tool" objection before it's spoken.
Multi-cloud, on-prem, hybrid. No proprietary model dependency. Standard data connectors. The buyer's data stays in their infrastructure. This neutralizes the silent fear that choosing Nexus means being locked in — the fear that killed their last vendor evaluation.
Purpose-built assistants feel like a tool built FOR that team, not a generic AI imposed ON them. The compliance team gets a compliance assistant. The underwriting team gets an underwriting assistant. This per-department customization is the adoption strategy — it neutralizes the "my team won't use another tool" silent killer.
"Every enterprise has a sea of documents employees can't find fast enough. We deploy a governed AI assistant that connects to that knowledge and answers in seconds — with traceability, RBAC, and active hallucination prevention. We deploy in 48 hours. We benchmark 10-13 points higher than Azure Document Intelligence. We cost 40% less. We can prove it right now."
Three yeses needed: (1) Large unstructured doc volumes? (2) Governance constraint on public AI? (3) Clear pain — fragmentation, slow onboarding, compliance burden?
10 min listening, 10 min demo, 10 min discussion. Open with live demo. Build vs Buy frame. Close with pilot proposal.
Week 1: Deploy + onboard. Week 2-3: Monitor + iterate. Week 4: ROI report + expansion proposal.
Pilot → Dept License ($36-60K ARR) → Multi-dept → Enterprise ($180-300K ARR). Each = new revenue event.
"This is a live assistant connected to [data source]. Watch."
Complex query. Let it think, retrieve, answer. Point out source citations.
"48 hours to deploy. No model training. No engineering. Just configuration."
"What question would YOUR team want to ask?" Let them type.
"When a new employee joins, how long before they can independently find information?"
"How long for your compliance team to get a sourced, verified regulatory answer?"
"How much time searching vs. actually using documents?"
"Where does institutional knowledge live — SharePoint, shared drives, people's heads?"
"Evaluated other AI solutions? What worked/didn't?"
"What governance requirements for any AI?"
"Who else evaluates this?"
"If we prove value in 4 weeks with one department, worth $10-15K?"
"Cost of NOT solving this in 6 months?"
At end of every discovery meeting, offer a specific date: "We can have this live for your compliance team by [date]. $10K pilot. Want to proceed?" Specificity compresses deliberation.
Engage the CIO (governance), the business owner (ROI), and IT (deployment) simultaneously. Single-threaded deals stall. Multi-threaded deals close 2x faster.
Answer the 20 most common security questions before they're asked. SOC2 posture, encryption, data residency, RBAC architecture. Speed through infosec = 3 weeks off the cycle.
After each successful pilot, get a 2-sentence quote and permission to share within industry. By pilot #3, you have enough references to neutralize any "who else uses this?" objection.
Deals don't die from objections — those are spoken. Deals die from unspoken reasons nobody sees coming.
Ask: "Who else would need to weigh in?" If the answer keeps expanding, you have a political problem. Map the full decision tree in Week 1.
Ask: "If we started Monday, what happens between now and then?" If 5+ steps, you have procurement complexity. Deliver the fast-track pack preemptively.
When deals stall, go above. "Would a 15-min executive briefing for your CIO help?" A top-down nudge often breaks middle-management logjams.
When inertia kills, calculate the cost of nothing. "50 queries/day × 15 min/query × 250 working days = 3,125 hours/year, roughly $140K in salary spent searching. That's not a line item anyone sees, but it's real."
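The cost-of-nothing math above generalizes into a one-liner a champion can rerun with their own numbers. The $45/hour loaded rate and 250 working days are assumed inputs; swap in the prospect's actual figures:

```python
def annual_search_cost(queries_per_day: int, minutes_per_query: float,
                       working_days: int = 250, loaded_rate_per_hour: float = 45.0):
    """Hours per year spent searching, and the salary cost of those hours."""
    hours = queries_per_day * minutes_per_query * working_days / 60
    return hours, hours * loaded_rate_per_hour

# 50 queries/day x 15 min x 250 days = 3,125 hours/year; ~$140K at $45/hr
hours, cost = annual_search_cost(50, 15)
```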
Every question the sales team has asked — answered with the depth and specificity needed to sell with confidence.
The instinct is to say "knowledge search for Sales teams" because it's horizontal and easy to demo. That's the right first deployment — but it's the wrong first pitch.
Lead with the prospect's highest-cost knowledge problem, not yours. For a bank, that's compliance — a single regulatory misinterpretation can cost millions in fines. For an insurer, it's underwriting accuracy — a 5% improvement in risk assessment drops straight to the loss ratio. For a manufacturer, it's SOP adherence — one incorrect procedure can shut down a production line.
The pitch should be: "We solve your most expensive knowledge gap. Let me show you what that looks like for [their industry]." Then demo with industry-relevant data. After they're sold on the concept, deploy the pilot on something simpler (sales enablement, HR policies) to prove value fast. The expensive problem gets their attention. The quick win earns their trust. Then you expand back to the expensive problem with credibility.
Three independent proof points, all defensible under scrutiny:
Accuracy (benchmarked): Tested on two industry-standard datasets — Cord V2 and Synthdog. Six evaluation parameters: Text Fidelity, Semantic Accuracy, Numeric Reliability, Completeness, Hallucination Control, and Overall Accuracy. Nexus scored 89% (Cord V2) and 92% (Synthdog) overall. Azure Document Intelligence scored 80% and 79% respectively. On Synthdog, Nexus achieved 100% Text Fidelity vs Azure's 59%, and 100% Semantic Accuracy vs Azure's 62%. These are repeatable tests — the datasets are public, the scoring formulas are parameter-specific, and we can share the full methodology with any prospect who asks.
Cost (modeled): Internal cost comparison using identical assumptions: 50,000 pages/month ingestion, 6,000 queries/month, GPT-4o-mini on both sides, Azure East US pricing as of April 2026. Result: $490/month (Nexus) vs $818/month (Azure Foundry). Year 1 TCO: $8,580 vs $14,766 — a 42% saving. The largest cost delta comes from document extraction ($4-30 vs $625/month) and integration/onboarding ($1-2K vs $2.5-3.5K).
Architecture (structural): Critic Agent performs active answer verification (vs Azure's passive UI confidence slider). Auto-retry mechanism attempts up to 3 correction cycles on failure (vs single-pass). 400-token precision chunking preserves context boundaries (vs 1024-token blunt chunks that fragment meaning). Multi-deployment flexibility: cloud, on-prem, hybrid, Docker (vs Azure-only).
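The Critic Agent plus auto-retry behavior can be sketched as a simple verification loop. Everything here is illustrative: `generate`, `verify`, and the grounding check are hypothetical stand-ins for the point of the architecture, not Nexus APIs:

```python
def answer_with_critic(question, sources, generate, verify, max_retries=3):
    """Generate an answer, have a critic verify it against source material,
    and retry with the critic's feedback up to max_retries times."""
    feedback = None
    for attempt in range(1 + max_retries):
        answer = generate(question, sources, feedback)
        ok, feedback = verify(answer, sources)
        if ok:
            return answer, attempt
    return None, max_retries  # refuse rather than hallucinate

# Toy stand-ins: the "critic" accepts only answers quoting a source verbatim.
sources = ["Loss ratio for Tier-2 commercial property in FY24 was 61%."]
def generate(q, srcs, fb):
    return srcs[0] if fb else "It was roughly 60%."  # first draft is ungrounded
def verify(ans, srcs):
    return (ans in srcs, "quote the source exactly")

answer, attempts = answer_with_critic("FY24 loss ratio?", sources, generate, verify)
```

The structural contrast with a passive confidence slider is the loop itself: verification happens before the answer reaches the user, and failure ends in a refusal, not a low-confidence guess.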
Nexus today is a complete, self-contained product for enterprise knowledge retrieval and Q&A. It handles: connecting to data sources (SharePoint, document repositories), creating purpose-built assistants with defined scope and personality, delivering grounded answers with source citations, RBAC governance, and multi-deployment. No engineering required. This covers approximately 80% of what enterprises need from an AI knowledge assistant.
AI Studio (on the roadmap) is the capability expansion layer for the other 20% — the use cases that require deep customization: model orchestration across multiple LLMs, enterprise pipeline automation, multi-step workflow agents, custom embedding models, advanced analytics, and programmatic API access for developers. Think of it as: Nexus is the no-code product. AI Studio is the pro-code platform.
What to tell sellers: "Nexus is everything we sell and deploy today. It's production-ready and battle-tested. AI Studio is where customers go when they outgrow no-code — and most don't need to. Never promise AI Studio features in a Nexus deal. Never position Nexus as incomplete without AI Studio."
Ideal fit (highest win probability): Mid-to-large enterprise, 500-10,000 employees, in BFSI, Insurance, Manufacturing, or Professional Services. They have large volumes of unstructured documents living in SharePoint or file repositories. They cannot use public AI tools (ChatGPT, Copilot in its default mode) due to data sovereignty, regulatory, or governance constraints. They have not yet deployed an enterprise AI assistant. And critically — they are an existing Fulcrum client with a warm relationship.
Why this profile: Each of these characteristics removes a friction point. Large doc volumes = clear pain. Governance constraint = can't use free alternatives. No existing assistant = no incumbent to displace. Warm relationship = shorter sales cycle, higher trust, easier pilot scoping.
Avoid for now: Startups and tech-first companies (they'll build internally — let them). Organizations with deep Microsoft Copilot investments (displacement fight you can't win easily). Enterprises requiring complex workflow automation as the primary use case (Nexus isn't there yet). Government contracts with extended procurement cycles (timeline kills deal momentum).
Technical effort: 2-5 business days from data access to first working assistant. Breakdown: SharePoint connector configuration (2-4 hours), document ingestion and indexing (1-2 days depending on volume — 50K pages takes ~1 day), assistant configuration including personality, scope, and response parameters (2-3 hours), RBAC setup and user provisioning (2-4 hours), testing and validation (1 day).
Calendar time: add 1-3 weeks for client-side processes that are outside your control: security review and approval, procurement paperwork, data access permissions, VPN or network access provisioning. These add calendar time but not effort time. The key insight: technical deployment is measured in days, but organizational readiness is measured in weeks. Start the security/procurement conversation in parallel with the pilot scoping — don't sequence them.
Three tiers, all per-assistant/department (never per-user):
Tier 1 — Discovery Pilot ($10-15K one-time): 1 assistant, 1 data source, 25 users, 4 weeks, ROI report, dedicated implementation support. This is the "land" motion. Position it as: "Less than one consultant-week. If it doesn't deliver, you've learned where you stand on AI for the cost of a dinner."
Tier 2 — Department License ($3-5K/month, annual): Up to 3 assistants, 3 data sources, 100 users, RBAC governance, analytics dashboard, 8×5 support. This is the "expand" motion. Each new department requesting an assistant is an incremental $3-5K/month. Annual value: $36-60K per department.
Tier 3 — Enterprise ($15-25K/month, enterprise agreement): Unlimited assistants, unlimited data sources, unlimited users, SSO/SAML, on-prem/hybrid deployment, 24×7 support, dedicated CSM, custom connectors. Annual value: $180-300K.
Why not per-user: Per-user pricing punishes adoption. You want MORE people using each assistant because usage = stickiness = retention = expansion. Charge for the capability (assistants, departments), not the consumption (users).
Margin note: Platform infrastructure cost is ~$490/month. Even the Tier 1 pilot at $10K has healthy margin. Department licenses at $3-5K/month against $490 infrastructure gives 80%+ gross margin.
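The margin note is worth making concrete. A quick sanity check using the document's own figures (~$490/month infrastructure against the $3-5K/month Tier 2 band):

```python
def gross_margin(monthly_price: float, monthly_infra_cost: float = 490) -> float:
    """Gross margin on a department license at a given monthly price."""
    return (monthly_price - monthly_infra_cost) / monthly_price

low = gross_margin(3_000)   # ~0.84 at the bottom of the Tier 2 band
high = gross_margin(5_000)  # ~0.90 at the top
```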
Primary (must-hit): Time saved per query: target an 80%+ reduction (from a 15-min average manual search to under 2 min). Daily active users as a percentage of the total team: target 60%+ by Week 4. Query answer accuracy as rated by users: target 4.0+ out of 5.0. New-hire time to productive: target a 50% reduction.
Secondary (nice-to-track): Proposal/document reuse rate increase. Reduction in "who knows where this is?" questions on Slack/Teams. Speed of deal qualification improvement. Number of "I would never have found this without Nexus" stories (aim for 3+ documented).
Critical success factor: Usage must be mandatory, not optional. If the internal pilot is positioned as "try it if you want," it will be ignored. Replace the old process of manually searching SharePoint — don't add a parallel path. When finding a past proposal means opening Nexus (not SharePoint), adoption becomes automatic.
Dual ownership: Head of Sales or VP Sales owns business outcomes (adoption, usage metrics, case study production). CAIO / Product team owns technical delivery (deployment, data quality, performance tuning).
Timeline: Version one live within 7 business days. First usage metrics within 14 days. First internal case study with quantified ROI within 30 days. First 3 external discovery meetings scheduled using the internal proof point within 30 days.
The make-or-break decision: The business owner must make Nexus the default way to find past proposals and pricing — not an optional addition. This single decision determines whether the pilot succeeds or becomes shelf-ware.
Risk 1 — Credibility damage: If the first 3 external deployments have quality issues, word travels fast among CIOs in the same industry. Mitigant: internal pilot first, then only approach warm relationships where you have trust capital to spend.
Risk 2 — Support overload: Without documentation and self-service troubleshooting, every customer requires hands-on support for basic issues. Mitigant: build the FAQ, troubleshooting guide, and admin documentation during the internal pilot — use your own team's questions as the content source.
Risk 3 — Scope creep: Customers will ask for features Nexus doesn't have — workflow automation, multi-modal input, real-time database connectors, Slack integration. Mitigant: create a clear "what Nexus does and doesn't do" one-pager. Share it during scoping, not after signing.
Risk 4 — Price anchoring: If your first deal is heavily discounted, every subsequent deal is anchored to that price. Mitigant: start with paid pilots at list price. Establish value before discussing production pricing. Never discount the pilot.
A governed enterprise knowledge assistant — purpose-built for each department, connected to their existing data in SharePoint and document repositories, delivering grounded and traceable answers with role-based access control — deployed in 48 hours, benchmarked at 89-92% accuracy (10-13 points above Azure Document Intelligence), at 40% lower cost, with cloud, on-prem, or hybrid deployment flexibility.
That's the pitch. Every word earns its place. Don't add to it. Don't subtract from it. Memorize it.
Share it. This is your strongest competitive move. No other enterprise AI assistant vendor publishes reproducible accuracy benchmarks. When a prospect asks to see the methodology, that's a buying signal — they're doing due diligence, which means they're serious.
Walk them through: (1) the two datasets (Cord V2 for real-world documents, Synthdog for synthetic test data), (2) the six evaluation parameters and their scoring formulas, (3) the comparison conditions (identical LLM, identical pricing tier, identical deployment region). Offer to let them reproduce the test on their own sample documents. "Don't trust our numbers. Here's the test. Run it yourself." Nobody who's confident in their product is afraid of transparency.
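The shape of the methodology is simple enough to show in a few lines. The real scoring formulas are parameter-specific and not reproduced in this playbook, so this sketch assumes an unweighted mean of the five component parameters (Overall Accuracy being the sixth, derived parameter), with illustrative inputs rather than the published scores:

```python
PARAMETERS = ["text_fidelity", "semantic_accuracy", "numeric_reliability",
              "completeness", "hallucination_control"]

def overall_accuracy(scores: dict, weights: dict = None) -> float:
    """Composite over the component parameters. The actual formula is
    parameter-specific; an unweighted mean is assumed here for illustration."""
    weights = weights or {p: 1 / len(PARAMETERS) for p in PARAMETERS}
    return sum(scores[p] * weights[p] for p in PARAMETERS)

# Illustrative inputs only -- not the published benchmark numbers.
sample = {"text_fidelity": 0.93, "semantic_accuracy": 0.95,
          "numeric_reliability": 1.00, "completeness": 0.94,
          "hallucination_control": 0.94}
composite = overall_accuracy(sample)
```

Publishing the component scores alongside the composite is what makes the test reproducible: a prospect can rescore their own sample documents parameter by parameter.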
If a pilot fails and you handle it honestly, you've earned something more valuable than a deal: trust. Say: "The numbers didn't hit where we needed them. Here's what we learned, here's what we'd do differently, and here's whether a second attempt makes sense." Walk away if the fit isn't right. The honesty creates a relationship that leads to a future opportunity, or a referral to someone who is a better fit.
Practically: pilot failure is almost always a data quality issue, not a product issue. If the ingested documents are poorly structured, OCR quality is low, or the knowledge base has contradictory information, the assistant's answers will reflect that. The mitigation is to assess data quality during pilot scoping — before committing. If their SharePoint is a mess, say so upfront and help them clean it, or adjust expectations.
Today: SharePoint and document repositories (file uploads, folder structures, PDFs, Word docs, spreadsheets) are the primary connectors. This covers the vast majority of enterprise knowledge retrieval use cases because that's where most unstructured enterprise data lives.
On the roadmap: Salesforce, Jira, Confluence, ServiceNow, SAP, and custom API connectors. Each new connector expands the addressable market and creates upsell opportunities within existing accounts.
For sellers: lead with what exists today (SharePoint + documents). If a prospect asks about Jira or Confluence integration, say: "That's on our roadmap. For the pilot, we start with your SharePoint — that's where the highest-value unstructured data typically lives. Once we prove value there, we expand connectors based on your priority."
Don't fight the Microsoft investment. Complement it. The positioning is: "Copilot makes your Microsoft apps smarter. Nexus makes your institutional knowledge accessible. They solve different problems and they work together."
Specifically: Copilot is excellent at helping users draft emails, summarize meetings, and search within M365. But it doesn't build purpose-built, governed knowledge assistants for specific departments with custom scope, RBAC, and traceable answers grounded in proprietary data. That's Nexus.
The killer question for a Microsoft-invested CIO: "Can Copilot answer 'What was our underwriting loss ratio for Tier-2 commercial property in FY24?' from your actuarial documents with source citation? Because Nexus can." That question reframes the conversation from "competing with Microsoft" to "solving a problem Microsoft doesn't address."
Nexus (per department): Year 1: $10-15K pilot + $36-60K annual license = $46-75K. Year 2-3: $36-60K/year. 3-year TCO: $118-195K per department.
Glean (comparable scope): ~$50/user/month × 100 users = $60K/year minimum. Plus $70K POC. Plus 7-12% annual renewal increases. 3-year TCO: $250-350K+ for a single department.
Internal build (Azure Foundry): Year 1: $150K+ engineering + $14.7K infrastructure. Year 2-3: $50K+ maintenance + $8K infrastructure. 3-year TCO: $230K+ — and you're locked to Azure.
Nexus is the lowest 3-year TCO by a significant margin, with the added advantage of no vendor lock-in and multi-deployment flexibility.
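The Nexus side of the comparison reduces to a small model a seller can rerun with deal-specific numbers; the range reproduces directly from the playbook's own price bands (pilot $10-15K, license $36-60K/year):

```python
def nexus_3yr_tco(pilot: int, annual_license: int, years: int = 3) -> int:
    """One-time pilot in Year 1 plus the annual license every year."""
    return pilot + annual_license * years

low = nexus_3yr_tco(10_000, 36_000)    # -> 118,000
high = nexus_3yr_tco(15_000, 60_000)   # -> 195,000
```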
Nexus is not the right fit when: The primary need is complex workflow automation (multi-step processes across systems). Deep model-level control is required (custom fine-tuning, model selection per query). Real-time streaming data integration is needed (stock feeds, IoT sensors, live transaction monitoring). The buyer has already committed engineering resources to a custom build and is more than 6 months in.
Why honesty matters: Acknowledging limits isn't a weakness — it's a credibility accelerator. When you say "Nexus isn't right for your workflow automation use case, but it's perfect for your knowledge retrieval problem," the buyer trusts everything else you say. The sellers who try to stretch Nexus into use cases it can't serve are the ones who damage deals and create churn.
Input your prospect's numbers. All calculations are live formulae — change any input and results update instantly.
Never pre-fill the calculator and present your projections. Instead, hand the keyboard to the prospect and say "plug in your numbers." When they input their own data and see the result, it's their conclusion — not your claim. Self-generated insights convert at 3x the rate of presented ones.
If you sense the prospect will push back on pilot pricing, show the ROI Calculator BEFORE discussing price. Let them see "$140K/year in search costs" first. Then when you say "$10K pilot," the frame has shifted from "how much does this cost" to "how much am I losing by not starting."
Screenshot the results page. Add one line: "We can validate these projections with a 4-week pilot for $10-15K." This becomes the single most important artifact your champion takes to their CFO. Finance people think in numbers. Give them the only number that matters: cost of doing nothing.
Generate a branded, printable 1-page brief to send after every prospect meeting.
A proprietary diagnostic methodology that quantifies the rate at which an organization hemorrhages the value of its institutional knowledge — through five compounding forces that no one is measuring today.
Why this exists: Every enterprise AI vendor sells "faster search" or "better answers." That's a feature conversation. The KEI™ reframes the conversation entirely: from "do you want a chatbot?" to "do you know how much knowledge your organization is losing every day, and what that costs?" The output is a single number a CEO can act on. The methodology behind it requires a meeting to explain. That meeting is your sale.
The play: Before a first meeting, run the prospect's KEI Score using publicly available data and reasonable industry estimates. Open the meeting with: "We ran a Knowledge Entropy analysis on your organization. Your score is 67 out of 100. That means your institution is capturing only 33% of the usable value of its own knowledge. Here's what that costs you annually — and here's what changes it." No competitor has this. No consulting firm offers it for free. It turns every cold meeting into a diagnostic consultation.
Knowledge doesn't just sit there. It decays, hides, duplicates, delays, and concentrates. Each dimension compounds the others. Together they produce a single score — the KEI™ — that quantifies what organizations have never been able to measure.
What it measures: The rate at which institutional knowledge evaporates when people leave. Not just turnover rate — but turnover rate × average tenure × inverse of documentation maturity. A 15% annual turnover with 3-year avg tenure and no documentation system means ~45% of accumulated knowledge is at risk every cycle.
Formula weight: 25% of composite KEI. Highest weight because it's irreversible — once a person leaves, undocumented knowledge is permanently lost.
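The attrition-decay arithmetic described above can be written out directly. A minimal sketch, assuming documentation maturity is a 0-1 score where 0 means no documentation system; the function name and scale are illustrative, not part of the official KEI model.

```python
def attrition_decay(turnover_rate, avg_tenure_years, doc_maturity):
    """Share of accumulated knowledge at risk per cycle.

    turnover_rate: annual turnover as a fraction (0.15 = 15%)
    doc_maturity:  0.0 (no documentation) .. 1.0 (fully documented),
                   so (1 - doc_maturity) is the "inverse of documentation
                   maturity" from the formula.
    """
    return turnover_rate * avg_tenure_years * (1 - doc_maturity)

# Worked example from the text: 15% turnover, 3-year avg tenure, no docs
risk = attrition_decay(0.15, 3, 0.0)
print(f"{risk:.0%} of accumulated knowledge at risk")  # -> 45%
```

Note how a mature documentation system (doc_maturity near 1.0) drives the score toward zero regardless of turnover, which is why the dimension rewards capture, not just retention.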
What it measures: The percentage of existing knowledge assets that are never found when needed. Function of: repository sprawl (number of disconnected storage locations), search tool maturity (basic keyword search vs. semantic search), and average time-to-answer. An organization with 15 SharePoint sites, basic keyword search, and a 20-minute avg lookup time has a discovery failure rate above 60%.
Formula weight: 25%. This is the dimension Nexus directly solves — connecting all repositories into a single queryable knowledge layer.
What it measures: Work performed that has already been done elsewhere in the organization but wasn't found. Estimated as: cross-functional team count × average project volume × (1 - cross-team visibility score). A 500-person org with 40 project teams and no cross-team knowledge sharing system has a duplication tax of ~30% of project effort.
Formula weight: 20%. The most financially expensive dimension in absolute dollars, but partially addressable by non-AI solutions (better project management, wikis).
What it measures: The compounding cost of delayed decisions due to slow information retrieval. Not just "time spent searching" — but search time × decision criticality × frequency. A compliance officer who takes 25 minutes to verify a regulatory position, does this 8 times per day, on decisions with regulatory consequence, has a decision latency score an order of magnitude higher than a marketing team searching for a brand asset.
Formula weight: 20%. Industry-dependent — BFSI and Insurance score highest because decision criticality is extreme.
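The latency product above (search time × decision criticality × frequency) can be sketched to show why a compliance officer's score dwarfs a marketing team's. The criticality weights here are invented for illustration; the real model's weighting scheme is not specified in this document.

```python
def decision_latency_score(minutes_per_lookup, lookups_per_day, criticality):
    """Relative decision-latency burden: time x frequency x criticality weight."""
    return minutes_per_lookup * lookups_per_day * criticality

# Hypothetical criticality weights: regulatory consequence = 10, routine = 1
compliance = decision_latency_score(25, 8, 10)  # 25 min, 8x/day, regulatory
marketing  = decision_latency_score(5, 4, 1)    # 5 min, 4x/day, brand asset
print(compliance / marketing)  # -> 100.0
```

Even with identical raw search time, the criticality multiplier is what separates a BFSI compliance desk from a marketing team by orders of magnitude.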
What it measures: The percentage of mission-critical knowledge that lives in fewer than 5% of employees. Derived from: key-person dependency count × knowledge sharing culture index × documentation completeness. An organization where 3 people hold all underwriting institutional memory, with no documentation system and a "ask Rajiv" culture, has a concentration risk above 80%. If those 3 people leave in the same quarter, the organization functionally loses decades of institutional knowledge overnight.
Formula weight: 10%. Lowest weight because it's a tail risk — low probability, catastrophic impact. But when it hits, it's existential. Nexus converts concentration risk into distributed, queryable knowledge.
KEI = 0.25(λ₁) + 0.25(λ₂) + 0.20(λ₃) + 0.20(λ₄) + 0.10(λ₅)
Each λ is normalized to 0-100 where 0 = no entropy (perfect knowledge capture) and 100 = total entropy (complete knowledge loss). The composite KEI represents the percentage of institutional knowledge value that the organization fails to capture. A KEI of 67 means the org extracts only 33% of the potential value of its own knowledge.
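The composite formula is a straightforward weighted sum once the five dimensions are normalized. A minimal sketch: the dictionary keys and the example dimension scores are illustrative assumptions, chosen so the composite lands on the KEI of 67 used as the running example above.

```python
# Weights from the composite formula: KEI = 0.25λ1 + 0.25λ2 + 0.20λ3 + 0.20λ4 + 0.10λ5
WEIGHTS = {"attrition": 0.25, "discovery": 0.25,
           "duplication": 0.20, "latency": 0.20, "concentration": 0.10}

def kei(scores):
    """Composite KEI: weighted sum of five dimension scores, each on a 0-100 scale."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

# Illustrative dimension scores for a hypothetical prospect
example = {"attrition": 70, "discovery": 78, "duplication": 60,
           "latency": 65, "concentration": 50}
score = kei(example)
print(f"KEI = {score:.0f}; knowledge value captured = {100 - score:.0f}%")
# -> KEI = 67; knowledge value captured = 33%
```

Because every dimension is normalized to the same 0-100 scale before weighting, the composite inherits that scale, which is what lets "67" be read directly as "33% of knowledge value captured."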
Input what you know or can estimate. The model uses industry benchmarks to fill gaps. Every field has a tooltip explaining the data source.
λ₁ ATTRITION DECAY
λ₂ DISCOVERY FAILURE
λ₃ DUPLICATION · λ₄ DECISION LATENCY · λ₅ CONCENTRATION
Before a first meeting, estimate the prospect's KEI using LinkedIn (employee count, turnover signals), Glassdoor (tenure data), and industry benchmarks. Run the diagnostic. Print the report. Open the meeting with: "We ran a proprietary knowledge health analysis on your organization. Your score is [X]. Can I walk you through what that means?" You've just turned a product demo into a management consulting engagement — for free. No competitor can replicate this because they don't have the model.
The KEI Score is designed to travel upward in an organization. A VP understands "our KEI is 67." A CFO understands "$2.4M annual knowledge waste." A CEO understands "we capture only 33% of our institutional knowledge value." This is how deals go from department-level pilots to enterprise-level commitments — the diagnostic creates urgency at every level of the hierarchy, in language each level speaks.
If a competitor enters the conversation, the buyer now has a framework they didn't have before — one that only maps to Nexus capabilities. When they ask Glean or Copilot "what's your answer to my attrition decay score?" — there is no answer. The KEI™ creates a proprietary evaluation framework that structurally advantages FD RYZE® Nexus in every comparison. This is category creation, not category competition.
After Nexus deployment, re-run the KEI diagnostic quarterly. Show the score dropping: "Your KEI was 67 before Nexus. It's now 41. Here's the dimension-by-dimension improvement." This is the single most powerful retention and expansion tool possible — quantifiable proof that the product is working, in a framework the buyer already understands and has presented to their board. Churn becomes nearly impossible when the metric is visible and improving.