Methodology

Frameworks taught at MIT. Applied by an operator.

Why methodology matters here

The frameworks are recognizable. The application is what differs.

AI consulting is full of strong opinions and weak evidence. Most engagements fail not because the technology is wrong but because the operating discipline behind the work is missing. We chose to build on frameworks that have been tested at scale, taught at MIT, and used by Accenture, McKinsey, Boston Consulting Group, and other major firms. We deliver them as an operator who has run a P&L, not as a strategist who has not.

This page exists for buyers who want to verify rigor before they book a call. The homepage and Services page sell the outcome. This page shows the spine.

The method

Nine decisions, in order.

Generative versus agentic

The first decision.

Before any engagement scopes a build, we draw the line between generative AI and agentic AI. Generative AI produces content. Agentic AI takes action, makes decisions, and operates across systems with a degree of autonomy. The distinction matters because most failed AI projects are scoped against the wrong category. We help you decide which one your problem actually needs, and we build the deployment around that decision.

Staging the work

The AI Maturity Cycle.

We use the AI Maturity Cycle, a five-stage framework developed by Dr. Abel Sanchez at MIT, to stage every engagement. The stages move an organization from identifying high-value opportunities, to teaching AI fundamentals, to mapping workflows, to providing enablement, to scaling success across the enterprise. No leaps. No skipped steps. Each stage produces evidence that justifies the next investment. This is what protects a CEO from spending another six months on pilots that do not pay back.

Scaling the deployment

Crawl, Walk, Run.

Every deployment scales through three phases. Crawl, where small pilots run with strict human verification of every output. Walk, where supervised AI capabilities expand and human-in-the-loop oversight covers automated workflows. Run, where fully agentic systems operate with continuous monitoring, model drift detection, and a Center of Excellence for governance. You move forward only when performance meets reliability thresholds you set in advance. The phases are not optional sequencing. They are how the work survives an audit.
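The gating logic above can be made concrete in a few lines. This is a minimal sketch, not engagement code; the metric names and threshold values are illustrative assumptions, since real thresholds are set with the client in advance.

```python
# Illustrative phase order and example reliability gates. In practice
# these numbers are agreed with the client before the pilot starts.
PHASES = ["crawl", "walk", "run"]
THRESHOLDS = {
    "crawl": {"accuracy": 0.95, "max_override_rate": 0.10},
    "walk":  {"accuracy": 0.98, "max_override_rate": 0.05},
}

def next_phase(current: str, metrics: dict) -> str:
    """Advance one phase only when every gate for the current phase is met."""
    gates = THRESHOLDS.get(current)
    if gates is None:
        return current  # already at "run"; nothing left to unlock
    meets = (metrics["accuracy"] >= gates["accuracy"]
             and metrics["override_rate"] <= gates["max_override_rate"])
    return PHASES[PHASES.index(current) + 1] if meets else current
```

The point of the sketch is the shape, not the numbers: failing any single gate holds the deployment in place, which is what makes the sequencing auditable.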

Architecture

Centralized, embedded, or hybrid.

For each function in scope, we help you choose between a centralized agent architecture, where one orchestration layer routes queries through familiar platforms like Slack or Teams, and an embedded architecture, where specialized agents live inside departmental systems and develop deep functional expertise. Most mature organizations land on a hybrid. We will tell you when to make the shift, and we will not move you there before the work is ready.

Integration

Working with what you already have.

Legacy systems are the starting condition of almost every engagement, not a barrier we ignore. We use the Model Context Protocol as a secure bridge between AI agents and your existing data, APIs, and tools. MCP keeps sensitive data on-premises while still giving agents the structured access they need to act on real information. Where MCP is not the right fit, we deploy middleware, microservices, and data governance layers that respect the architecture you already paid for.
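The bridge pattern is easier to see in miniature. This sketch uses plain Python, not the actual MCP SDK; the data store, field names, and tool are hypothetical. What it shows is the principle: the agent sees a narrow, structured tool surface, and raw records never cross the boundary.

```python
# Stand-in for an on-premises system of record. The agent never
# queries this directly; it only calls the tool below.
LOCAL_CUSTOMER_DB = {
    "C-1001": {"name": "Acme Corp", "arr": 120_000, "ssn_on_file": True},
}

# Sensitive fields stay behind the bridge by allow-listing what leaves.
ALLOWED_FIELDS = {"name", "arr"}

def lookup_customer(customer_id: str) -> dict:
    """Tool exposed to the agent: returns only approved fields."""
    record = LOCAL_CUSTOMER_DB.get(customer_id)
    if record is None:
        return {"error": "not found"}
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
```

The same allow-list discipline applies whether the bridge is MCP, middleware, or a microservice layer.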

Security

The NIST spine.

Agentic AI introduces a new threat surface. Prompt injection. Data poisoning. Deepfakes and synthetic identities. Autonomy misuse. Model and supply chain compromise. Every deployment is designed using the NIST Cybersecurity Framework structure of Identify, Protect, Detect, Respond, and Recover. We apply least-privilege access for every agent, sandboxing for every test phase, adversarial testing for prompt-injection defense, and multi-layered monitoring for anomaly detection. Your board gets validated sources, validated models, and validated actions. Not just validated endpoints.
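Least-privilege access for every agent reduces to one enforcement point. A minimal sketch, with hypothetical agent names and actions; the real control sits in the platform, but the logic is this simple: explicit allow-lists, default deny, and every decision logged for the monitoring layer.

```python
# Illustrative least-privilege check: each agent carries an explicit
# allow-list of actions. Anything not listed is denied and logged.
AGENT_PERMISSIONS = {
    "invoice-agent": {"read_invoice", "draft_reminder"},
    "support-agent": {"read_ticket", "draft_reply"},
}

audit_log = []  # feeds the anomaly-detection monitoring described above

def authorize(agent: str, action: str) -> bool:
    """Default-deny authorization with an audit trail for every decision."""
    allowed = action in AGENT_PERMISSIONS.get(agent, set())
    audit_log.append((agent, action, "allow" if allowed else "deny"))
    return allowed
```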

Governance

Compliance that runs alongside the build.

Compliance is not paperwork you finish at the end. It runs alongside development. We align deployments with GDPR, CCPA, and HIPAA requirements based on jurisdiction and data type. We use risk-speed quadrants to decide where human-in-the-loop oversight is required and where automated guardrails are sufficient. We document model purpose, training data sources, and decision rationale so audits become a source of trust rather than a scramble. Four governance principles anchor the work. Transparency. Accountability. Privacy. Fairness.
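The risk-speed quadrant decision can be written down as a rule, which is part of what makes it documentable for an audit. A sketch under stated assumptions: the quadrant labels and cutoffs here are illustrative, not a prescribed rubric.

```python
# Illustrative risk-speed quadrant: high-risk decisions always get a
# human in the loop, regardless of how fast the workflow runs.
def oversight_mode(risk: str, speed: str) -> str:
    """Map a (risk, speed) quadrant to an oversight requirement."""
    if risk == "high":
        return "human-in-the-loop"
    # Low-risk work: fast workflows run under automated guardrails,
    # slow ones can be covered by periodic review.
    return "automated-guardrails" if speed == "high" else "periodic-review"
```

Writing the rule once, in advance, is what turns oversight from a debate into a record.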

Change management

ADKAR and Kotter, applied.

Most AI projects fail at the last mile of adoption, not in the technology. Employees resist what they do not trust. We use the ADKAR model and an adapted Kotter eight-step framework to address that resistance directly. Communication plans that explain what AI will do and what it will not. Training programs with hands-on practice, prompt-writing workshops, and supervisor certification pathways. Pilots with small groups, structured feedback, and expansion only when adoption metrics support it. You get an AI-ready culture, not just an AI tool.

Ethics

OECD principles plus operational discipline.

Eighty percent of business leaders cite AI ethics concerns as a major barrier to adoption. We treat that as an opportunity. Deployments align with the OECD AI Principles of inclusive growth, human-centered values, transparency, robustness, and accountability. We borrow the operational discipline of corporate frameworks from Google and Microsoft on fairness, reliability, privacy, inclusiveness, and human oversight. Ethics done well becomes a differentiator with customers, regulators, and the workforce.

The deliverable. Eight sections.

The structure that makes it defensible.

Every paid diagnostic produces a written deliverable structured in the same eight sections. Defensible to a board, an auditor, or a customer.

01

Context.

What the company is actually trying to solve, who it serves, and what is in scope.

02

Technology Choice.

Generative or agentic, centralized or embedded, build or buy. Decided with evidence.

03

Cost Considerations.

Token economics, license consolidation, and total cost of ownership over 12 to 24 months.

04

Security Plan.

NIST-aligned controls, threat model, and the access architecture for every agent.

05

Change Management and Training.

ADKAR-mapped communication and training plan with adoption metrics.

06

Scaling Strategy.

Crawl, Walk, Run with explicit reliability thresholds at each phase.

07

Governance and Compliance.

Jurisdiction-specific alignment plus the four-principle governance model.

08

Key Performance Indicators.

Outcome metrics tied to the P&L, with feedback loops and clear thresholds.

What you should expect

The method, applied to your organization.

A defined use case with realistic success metrics. An integration plan that respects your legacy systems. A user-centric design that earns employee trust. A change management plan that overcomes resistance. KPIs that track success with feedback loops and clear thresholds. Human-in-the-loop checkpoints where the stakes demand them. Documentation ready for any audit.

That is the method, applied to your organization, by a partner trained in it and seasoned by twenty years of P&L work.

Apply the method

Book a working session.

90 minutes with Nick to apply the method to your company.

Or start with a 30-minute discovery call.