Normis AI is the framework law firms use to meet their duties under ABA Model Rules 1.6, 5.1, and 5.3 — an enforceable AI policy, behaviour-changing training, and a lightweight tool that surfaces unapproved AI use across the firm.
When a lawyer pastes privileged material into a consumer AI tool, the firm's exposure is not theoretical. It spans six risk areas, each anchored in rules every US-practising firm is already bound by.
ABA MODEL RULE 1.6 · FORMAL OP. 512 (2024)
Attorney-client privilege.
Consumer AI platforms offer no confidentiality guarantees. Client PII, case strategy, and settlement figures fed into unauthorized tools can be ingested for training — and privilege can be deemed waived. One careless prompt can trigger malpractice claims, bar complaints, and client termination.
ABA MODEL RULES 5.1 & 5.3
Supervisory liability.
Partners face direct liability for AI misuse by associates and non-lawyer staff — even if they never touched the tool. No AI policy means no protection. Disciplinary authorities will hold the institution accountable, not just the individual.
ABA MODEL RULE 1.1 · MATA v. AVIANCA, S.D.N.Y. 2023
Professional competence and malpractice.
AI hallucinates — and courts are sanctioning lawyers who file what it invents. Competence under Rule 1.1 now includes understanding AI's risks. When an unauthorized tool produces a fabricated brief, the supervising partner and the firm are liable. Not the vendor.
FRCP RULE 11 · ABA MODEL RULE 3.3
Court rules and sanctions.
Federal courts already require AI disclosure certifications. Where formal rules don't exist, Rule 11 and Rule 3.3 fill the gap. Shadow AI use leaves no audit trail — making sanctions nearly impossible to defend against.
ABA MODEL RULES 8.4 & 1.4
Bar disciplinary action.
Submitting an AI-fabricated citation may constitute misrepresentation to a tribunal. Failing to disclose material AI use to a client implicates Rule 1.4. California, Florida, and New York are actively developing mandatory disclosure rules. Being caught behind them is a disciplinary risk.
INSURANCE · ENGAGEMENT TERMS
Commercial and insurance consequences.
Most malpractice policies predate AI — and insurers are beginning to exclude AI-related claims. Client engagement letters often contain AI restrictions that shadow AI quietly violates, triggering indemnification obligations and potential breach of contract.
Enforcement is live
Mata v. Avianca was not the outlier. It was the template.
In June 2023, the Southern District of New York sanctioned attorneys for submitting a brief containing six AI-fabricated case citations. Since then, federal and state courts in at least a dozen jurisdictions have issued comparable sanctions. Firms that cannot evidence human review of AI-generated work product are finding themselves without a defence.
MATA v. AVIANCA · S.D.N.Y. · 22-cv-1461 · JUNE 22, 2023
The market
The data on firms like yours.
9%
of firms have a written, actively enforced AI policy.
43% have no policy at all and no plans to create one.
SOURCE · LLAMALAB
59%
of employees use unapproved AI tools.
75% of them share sensitive data with those tools.
SOURCE · LLAMALAB
68%
of legal professionals have used unapproved AI tools at least once.
Fewer than 20% of firms have a formal policy to manage the exposure.
SOURCE · NIDISH
The framework
Three layers. Each one useless alone.
Policy without training is ignored. Training without visibility is forgotten. Tooling without policy is an IT project that never lands. Normis AI is designed to ship all three together.
Layer 01 · Policy
Rules lawyers can actually follow.
Defines allowed, restricted, and prohibited tools by practice area and matter sensitivity.
Sets clear rules for client data, confidentiality, and privilege handling.
Aligns with GDPR, the NY SHIELD Act, and state-specific bar confidentiality obligations.
Provides decision frameworks lawyers can apply in seconds — not twenty-page PDFs they will never open.
Outcome: a defensible position and lawyers who know where the lines are.
Layer 02 · Training
Behaviour change, not awareness theatre.
Short, scenario-based sessions built on real legal workflows — drafting, due diligence, client comms, discovery review.
Shows exactly how data leaks through consumer tools, with redacted real-world examples.
Teaches safe prompting and client data handling as muscle memory, not compliance slideware.
Separate leadership briefings for partners, risk committees, and ethics counsel.
Outcome: lawyers who understand the exposure in context — and faster adoption of approved tools.
Layer 03 · Tool
Visibility into what is actually happening.
Identifies which AI tools are being used across the firm, by which lawyer cohorts, on which matters.
Flags likely sharing of client-identifying data, privileged material, or matter-specific strings.
Risk dashboards for the COO, GC, CISO, and ethics counsel — each with the data views they need.
Outcome: you know what the firm is doing — not just what the policy says.
Visibility
What the tool surfaces.
The Normis AI tool runs at the network and endpoint edge. It does not read the contents of files or communications. It detects interaction with known AI services, fingerprints likely data categories by pattern, and produces an evidenced view the firm can act on.
Signal categories
Surfaced detail
Unapproved service · consumer chatbot.
A user initiated an outbound request to a consumer chatbot domain not present on the firm’s approved services list. The source practice group and destination are recorded. The request body is not captured.
The record
event.record.json · unapproved AI service detected
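A minimal illustrative record follows. The field names and values are invented for this page to show the shape of the output — the production schema is configured during the engagement and will differ.

{
  "_note": "illustrative example only — not the production schema",
  "event": "ai.service.unapproved",
  "timestamp": "2025-03-14T10:22:07Z",
  "source": {
    "practice_group": "litigation",
    "device_class": "managed-endpoint"
  },
  "destination": {
    "domain": "chat.example-chatbot.com",
    "category": "consumer_chatbot"
  },
  "approved_services_match": false,
  "data_category_fingerprints": ["client_identifier_pattern"],
  "request_body_captured": false,
  "mapped_rules": ["ABA Model Rule 1.6", "ABA Model Rule 5.3"]
}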
Normis AI does not capture message contents, does not surveil individual lawyers, and does not replace the firm’s existing DMS, MDM, or DLP stack. It operates alongside them and produces the one artefact none of them does: a firm-level AI usage record mapped to the ABA Model Rules.
The argument
Most solutions fail on one layer.
Policy-only engagements produce documents lawyers never read. Training-only programmes decay inside a quarter. Tool-only deployments become IT projects that never map to bar rules. Normis AI ships the three together because the three fail apart.
Policy without enforcement is ignored.
Training without visibility is forgotten.
Tools without legal framing never land.
Fit
Built for firms with real confidentiality exposure.
For you if
You are an AmLaw 200, UK Top 100, or mid-market firm with 100+ fee-earners.
Your practice includes regulated industries, litigation, M&A, IP, or any work under NDAs or protective orders.
You hold responsive data under US federal court protective orders or GDPR processing agreements.
You face outside counsel guidelines or client audit rights that now include AI use.
You have a risk committee, general counsel function, or CISO who owns the answer when a client asks.
Not for you if
You are a sole practitioner or boutique under 20 fee-earners — a policy and CLE course are sufficient.
Your firm has already deployed a full external AI governance engagement and the tooling to match.
You want a DLP product — Normis AI is complementary, not a substitute.
You are looking to approve or procure a client-facing AI tool (Harvey, CoCounsel) — that is a different evaluation.
Methodology
Built by the people who wrote the ethics opinions and the people who built the monitoring.
Lenka Molins
Co-founder · Risk and ethics
Designed Deloitte's audit and assurance framework for the Digital Services Act, now running against Very Large Online Platforms across Europe. Advises the continent's largest platforms on the AI Act, NYC Local Law 144, and Colorado SB 21-169. Chairs the NYC Bar Association's Subcommittee on International Regulation of AI. Qualified New York attorney. MSc, Oxford Internet Institute.
Kyle Bossonney
Co-founder · Infrastructure
Ships an autonomous agentic system at Google that tracks cryptographic key propagation across an 86-terabyte monolithic codebase. Published at ACM SIGMOD/PODS on regex engine internals. First place in Programmable Cryptography at ETHOxford 2025. MSc, Advanced Computer Science, University of Oxford.
CITATIONS · ABA MODEL RULES 1.6, 5.1, 5.3, 1.1 · ABA FORMAL OP. 512 (2024) · MATA v. AVIANCA, S.D.N.Y. 2023
Engagement
Priced against the real alternative.
A single malpractice claim costs hundreds of thousands of dollars to defend — before settlements, lost clients, or bar proceedings. Normis AI provides a complete protection layer for a fraction of that exposure.
Does the Normis AI tool monitor individual lawyers?
No. The tool detects interactions with AI services and data-category patterns. It does not capture message contents, does not read email or DMS files, and does not produce individual surveillance records. Dashboards default to practice-group cohort views; individual-level detail requires an explicit privileged-access workflow.
Is the tool itself compliant with our bar confidentiality obligations?
Yes — that requirement drove the architecture. Normis AI operates on pattern fingerprints rather than content. Retention and access controls are configured to match the firm's obligations under its most stringent client engagement terms.
How does it interact with our DMS (iManage, NetDocuments) and existing DLP?
Normis AI is complementary and integrates at the event level. It consumes the firm's existing access logs where available and produces AI-specific signal the DMS and DLP stack do not generate natively. No rip-and-replace is ever required or recommended.
What about approved firm AI tools like Harvey, CoCounsel, or in-house RAG?
Covered, and easy. Approved tools are added to the policy allow-list and generate routine usage records rather than flags. The tool's job is specifically to surface the gap between policy and practice.
Can Normis AI produce a document I can send to a client responding to an AI audit clause?
Yes. The engagement output includes a client-facing attestation template that maps Normis AI's evidence to common outside counsel guidelines and AI use provisions.
Does Normis AI provide legal advice to our firm?
No. Normis AI is a framework and a software product operated under the direction of the firm's general counsel, ethics counsel, or risk committee. Nothing on this site and nothing in the product constitutes legal advice.
Which jurisdictions does the framework cover?
The framework is grounded in the ABA Model Rules and maps to the state bar adoptions that diverge (California, New York, Florida explicitly). It extends to GDPR and NY SHIELD Act processing obligations and to the UK SRA Code of Conduct for firms with UK practice. Additional jurisdictions are added as state bars finalise their AI disclosure rules.
Request a risk assessment.
Thirty-minute scoping call, followed by a fixed-fee AI exposure assessment. The assessment output is the firm's alone — we do not keep it, and proceeding to the full framework is not a condition.