A general counsel gets a Slack message: “Quick question — is it OK if the marketing team uses ChatGPT for blog drafts?” Then another: “Sales wants Claude to summarise customer calls — fine?” Then: “Engineering is putting Gemini in our customer-support chat — any concerns?” Each question deserves a different answer. AI is now embedded across product, ops, marketing, support, and engineering at varying depths and with varying risk profiles. “Is AI risky?” is too broad to answer. “Is this specific AI deployment risky for this specific use case in this specific jurisdiction?” is the actual question.
This piece is the framework that answers it. Five risk axes that matter (data privacy, regulatory exposure, IP and copyright, bias, contractual obligations), an evaluation pattern that produces operational decisions, and the regulatory landscape as it stands in 2026 with honest signposting on where it’s evolving.
The risk axes that matter
For each AI deployment, evaluate against five axes:
- Data privacy. What data goes to the AI vendor? Is it PII, PHI, regulated, customer-confidential, or trade-secret? Where does it physically reside? What’s the vendor’s data-handling commitment? Privacy law (GDPR, CCPA, HIPAA, state-level US laws) varies by jurisdiction; the AI vendor’s compliance posture must match the data being sent.
- Regulatory exposure. Does the AI’s output drive decisions that fall under specific regulation? Hiring (EEOC, NYC LL144), lending (ECOA, fair-credit laws), healthcare (FDA for medical decisions), financial advice (SEC, FINRA), legal advice (state bar regulations). The output’s downstream use determines the regulatory bar.
- IP and copyright. Does the AI ingest copyrighted material as input? Does it produce output that may be substantially derivative? Does the vendor offer indemnification? Active litigation against AI vendors means this risk is evolving; the contractual posture matters.
- Bias and discrimination. Does the AI make or influence decisions affecting individuals’ rights or opportunities? Hiring, lending, housing, education, criminal justice. Bias-audit requirements are appearing in regulation (NYC LL144 already; EU AI Act high-risk classifications; pending state-level US laws).
- Contractual obligations. Do your customer agreements or vendor agreements restrict how you can use AI? Some enterprise contracts explicitly prohibit processing the customer’s data with AI; some require disclosure. Audit your existing contracts before broad AI deployment.
Each axis produces its own decision; the deployment’s risk is the maximum across axes, not the average.
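In code, the rule is a one-liner. A minimal sketch, assuming an ordinal low/medium/high scale; the axis names and levels are illustrative, not a standard taxonomy:

```python
# Minimal sketch of the max-not-average rule. The scale and axis
# names are illustrative assumptions, not a standard taxonomy.
RISK_LEVELS = {"low": 1, "medium": 2, "high": 3}

def overall_risk(axis_scores: dict[str, str]) -> str:
    """Deployment risk is the worst single axis, never the average."""
    return max(axis_scores.values(), key=RISK_LEVELS.__getitem__)

scores = {
    "data_privacy": "low",
    "regulatory_exposure": "high",  # e.g. output feeds a hiring decision
    "ip_copyright": "medium",
    "bias": "high",
    "contractual": "low",
}
print(overall_risk(scores))  # -> "high"
```

Averaging the same scores would report “medium” and hide the hiring exposure entirely; that is exactly the failure the max rule prevents.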
How to actually run the assessment
For each AI deployment, document:
- Use case description. What does the AI do? Be specific — “drafts customer support responses for routine billing questions” beats “AI in customer support.”
- Data flowing in. What kind, what volume, what classification (public, internal, confidential, regulated).
- Output and its use. What does the AI produce? How is it used? Does a human review it before action is taken?
- Vendor and contract. Which AI vendor? Under what plan tier? What does the contract say about data, IP, indemnification?
- Regulatory mapping. Which regulations apply to the use case and jurisdiction? What does the regulation require?
- Risk score. Per axis, score the residual risk (low / medium / high) after the existing controls.
- Mitigations. What controls reduce the risk? Vendor-tier choice, redaction, human-in-the-loop, audit trail, periodic review.
Approve the deployment when all axes are at acceptable residual risk; reject or require additional mitigations otherwise. Document the assessment; future legal questions will reference it.
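If assessments are kept as structured records rather than prose memos, the approval gate falls out naturally. A hypothetical sketch: the field names mirror the checklist above, and the two-level acceptability threshold is an assumption your policy would set:

```python
from dataclasses import dataclass, field

ACCEPTABLE = {"low", "medium"}  # assumption: "high" residual risk blocks approval

@dataclass
class DeploymentAssessment:
    use_case: str            # specific: "drafts responses for routine billing questions"
    data_in: str             # kind, volume, classification
    output_use: str          # what the output drives; whether a human reviews it
    vendor_contract: str     # vendor, plan tier, data/IP/indemnification terms
    regulations: list[str]   # mapped regulations for this use case and jurisdiction
    residual_risk: dict[str, str] = field(default_factory=dict)  # per axis, after controls
    mitigations: list[str] = field(default_factory=list)

    def approved(self) -> bool:
        """Approve only when every axis sits at acceptable residual risk."""
        return all(level in ACCEPTABLE for level in self.residual_risk.values())
```

The record doubles as the documentation artifact: when the future legal question arrives, the assessment is already written down.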
What's in force and what's coming
As of 2026, the regulatory landscape includes:
- EU AI Act. In force in stages; high-risk applications face significant compliance requirements; general-purpose AI providers have transparency obligations.
- US state-level AI laws. Colorado, California, Illinois, and New York have AI-specific regulation; more states follow every quarter.
- NYC Local Law 144. Bias-audit requirements for automated employment-decision tools used on NYC residents.
- Existing privacy laws applied to AI. GDPR, CCPA, HIPAA, etc. all apply to AI when data fits the regulated category; the AI angle is a layer on existing compliance.
- Sector-specific AI guidance. FDA on medical AI, SEC on AI in financial advice, EEOC on hiring tools — each agency is issuing guidance.
- Pending federal US legislation. Movement on AI-specific federal law continues; expect changes within 12–24 months.
The landscape evolves quickly; the framework must accommodate updates without rebuilding. The pattern is to anchor on the risk-axis evaluation (stable) and update the regulatory mappings as new laws come into force (dynamic).
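One way to make that split concrete is to keep the regulatory mappings as data with review dates, so a quarterly update touches a table rather than the framework itself. A sketch under that assumption; the entries and dates are illustrative:

```python
RISK_AXES = (  # stable: the evaluation framework rarely changes
    "data_privacy", "regulatory_exposure", "ip_copyright", "bias", "contractual",
)

REGULATORY_MAP = {  # dynamic: updated as new laws come into force
    "hiring": {"regs": ["NYC LL144", "EEOC guidance"], "last_reviewed": "2026-01"},
    "lending": {"regs": ["ECOA", "fair-credit laws"], "last_reviewed": "2026-01"},
    "eu_high_risk": {"regs": ["EU AI Act"], "last_reviewed": "2026-01"},
}

def stale_mappings(as_of: str = "2026-04") -> list[str]:
    """Flag use cases whose regulatory mapping missed the quarterly review."""
    return [uc for uc, m in REGULATORY_MAP.items() if m["last_reviewed"] < as_of]
```

The staleness check ties directly into the quarterly check-in recommended below: anything the function flags gets legal review before the next deployment approval.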
What this risk actually costs when it bites
The magnitude is meaningful: GDPR fines reach 4% of global annual revenue, the EU AI Act authorizes penalties up to 7% of worldwide turnover for prohibited practices, and bias claims add litigation and reputational cost on top. Against that exposure, the compliance investment (a documented assessment per deployment plus a quarterly regulatory review) is manageable.
Where AI risk assessment typically goes wrong
Treating “AI” as a single risk category. Different AI deployments have wildly different risk profiles. A chatbot answering FAQs carries a very different risk profile than an AI making lending decisions. Stratify the assessment per deployment.
Skipping the existing-law analysis. Existing privacy, employment, and consumer-protection laws apply to AI deployments even when no AI-specific law does. The temptation to wait for “AI law” before acting misses the existing legal exposure.
Over-relying on vendor compliance claims. Vendor compliance certifications (SOC 2, ISO 27001) cover the vendor’s operations; they don’t necessarily cover your specific use case. The compliance question is “does my deployment meet my obligations,” not “does the vendor have a checkbox.”
Under-documenting decisions. When a regulator asks why you deployed AI for X without specific review, “we discussed it informally” is a weak answer. Documented risk assessment is the artifact that supports defensible deployment.
Updating the framework only reactively. Regulation is evolving fast; the framework needs proactive updates. Build a quarterly check-in for new laws and case law; don’t wait for an enforcement action to reveal a gap.
Related work
For the broader privacy framework, see AI privacy — what to watch for. For the broader security risks, see AI security risks for businesses. For the data-leakage specifics that often trigger compliance concern, see Data leakage in AI tools. For the specific hiring-AI compliance pattern, see Resume screening with anti-bias guardrails.
FAQ
Do we need an AI policy?
Yes, if you're deploying AI operationally. The policy doesn't need to be elaborate: a few pages covering data classification, approved vendors, approval workflow, and prohibited uses. Even a basic policy is enabling; without one, every deployment becomes an ad-hoc legal question, and that scales badly.
What about indemnification — do AI vendors cover us for their model's outputs?
Some do, partially. OpenAI, Microsoft, Google, and Anthropic have offered limited indemnification for IP issues on certain plans. The coverage typically excludes deliberate misuse, requires use of specific features, and has caps. Read the contract carefully; the indemnification is meaningful but not unlimited.
How do we handle AI use by employees outside formal deployments (shadow AI)?
Employees using ChatGPT or Claude in their browsers for work is widespread. Three patterns help: (1) an acceptable-use policy that names what data can and can't go into consumer AI; (2) approved enterprise-tier alternatives, so employees have a sanctioned path; (3) periodic awareness training. Pure prohibition rarely works; reasonable channels reduce shadow use.
What about IP rights in AI output — can we own and protect AI-generated content?
Mostly no in the US — works lacking meaningful human authorship aren't copyrightable. Adding substantial human editing on top creates a stronger copyright claim. For trade secrets, internal AI outputs can still be confidential if treated as such. The IP picture is evolving; rely on contracts (between you and customers) rather than purely on default IP law.