AI Disclosure & Compliance

How YourKendra discloses AI.

A practical guide for customers on how we stay aligned with the EU AI Act, California SB 243, Colorado AI Act, and equivalent state laws — and what each customer needs to configure on their side.

Our position

Every AI employee on YourKendra (Kendra, Marcus, Aria, Devon, Jordan, Riley) is AI. Not a human. Not a human-in-the-loop for routine interactions. This page documents how we disclose that fact and how our customers stay compliant.

Default behavior: If a caller or recipient asks directly — "Are you a real person?", "Am I talking to AI?" — every AI employee is configured to answer honestly and say yes. Customers can adjust the exact phrasing in their Configure modal, but the underlying disclosure cannot be disabled.
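The rule above (phrasing is configurable, the disclosure itself is not) can be sketched as follows. This is a hypothetical illustration, not YourKendra's actual implementation; all names and patterns here are invented for the example.

```python
# Hypothetical sketch of the disclosure rule described above (not
# YourKendra's actual implementation): customers can adjust the
# phrasing, but an honest "yes, I'm AI" answer cannot be disabled.
import re
from typing import Optional

AI_QUESTION_PATTERNS = [
    r"\bare you (a |an )?(real person|human|ai|bot|robot)\b",
    r"\bam i (talking|speaking) to (a |an )?(ai|bot|human|real person)\b",
]

DEFAULT_DISCLOSURE = "Yes - I'm an AI assistant, not a human."

def disclosure_reply(message: str, custom_phrasing: Optional[str] = None) -> Optional[str]:
    """Return a disclosure if the message asks whether the speaker is AI."""
    text = message.lower()
    if not any(re.search(p, text) for p in AI_QUESTION_PATTERNS):
        return None  # not an "are you AI?" question; no disclosure triggered
    phrasing = (custom_phrasing or "").strip()
    # Accept a custom override only if it still plainly mentions AI
    # (a naive substring check; a sketch, not production validation).
    if phrasing and "ai" in phrasing.lower():
        return phrasing
    return DEFAULT_DISCLOSURE
```

The key design point is that the customer override is validated rather than trusted: an empty or evasive phrasing falls back to the default disclosure.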

California SB 243 (effective 2026)

California's SB 243 requires AI chatbots to disclose that they are AI when a consumer asks whether they are interacting with one. We comply by answering honestly by default: every AI employee confirms its AI status when asked, and that disclosure cannot be disabled (only the phrasing is configurable).

Colorado AI Act (SB 205, effective Feb 2026)

Colorado regulates "high-risk" AI systems making consequential decisions (employment, housing, credit, insurance, healthcare). Most YourKendra use cases — appointment booking, prospect outreach, invoice reminders — are not consequential decisions. However, customers whose workflows do touch those domains may fall within the Act's high-risk provisions and should review their obligations before deploying.
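A customer-side intake check against the consequential-decision domains listed above might look like this. The function name and shape are illustrative assumptions, not part of YourKendra's product API.

```python
# Hypothetical customer-side check (names are illustrative, not a
# YourKendra API): flag workflows that touch the consequential-decision
# domains the Colorado AI Act treats as high-risk.
HIGH_RISK_DOMAINS = {"employment", "housing", "credit", "insurance", "healthcare"}

def needs_colorado_review(use_case_domains) -> bool:
    """True if any declared domain is on Colorado's high-risk list."""
    return bool(HIGH_RISK_DOMAINS & {d.lower() for d in use_case_domains})
```

A routine deployment declaring only scheduling or billing domains would pass this check; anything tagged with a regulated domain gets routed to legal review.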

EU AI Act (phased enforcement 2024–2027)

The EU AI Act classifies systems into Unacceptable Risk (banned), High Risk (strict controls), Limited Risk (transparency obligations), and Minimal Risk. Conversational use cases such as appointment booking, prospect outreach, and invoice reminders generally fall under Limited Risk, which requires disclosing AI involvement rather than the full High Risk control regime.
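The four tiers named above can be encoded as a simple lookup. The example mapping is an assumption for this sketch (conversational systems that interact with people generally carry transparency obligations under Article 50 of the Act); it is illustrative, not legal advice.

```python
from enum import Enum

# The four EU AI Act risk tiers named above.
class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict controls"
    LIMITED = "transparency obligations (disclose AI involvement)"
    MINIMAL = "no additional obligations"

# Illustrative mapping (an assumption for this sketch, not legal
# advice): conversational AI employees that interact with people
# generally land in the transparency tier under Article 50.
EXAMPLE_TIERS = {
    "appointment booking": RiskTier.LIMITED,
    "prospect outreach": RiskTier.LIMITED,
    "invoice reminders": RiskTier.LIMITED,
}
```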

Other state laws we track

What YourKendra commits to

Questions, audits, or legal requests

Legal and compliance counsel — reach out to hello@yourkendra.com for our latest compliance pack (sub-processor list, DPA, BAA template, AI impact assessment template, conformity documentation). We respond within 2 business days.