A practical guide for customers on how we stay aligned with the EU AI Act, California SB 243, Colorado AI Act, and equivalent state laws — and what each customer needs to configure on their side.
Every AI employee on YourKendra (Kendra, Marcus, Aria, Devon, Jordan, Riley) is AI. Not a human. Not a human-in-the-loop for routine interactions. This page documents how we disclose that fact and how our customers stay compliant.
California (SB 243) requires an AI chatbot to clearly disclose that it is not human when a consumer asks whether they are interacting with a person, or could reasonably be misled into believing they are. We comply by disclosing AI status up front in every conversation and by answering any direct "are you a human?" question with an unambiguous statement that the agent is AI.
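The ask-and-disclose behavior can be sketched as a simple message check. This is an illustrative assumption, not the actual YourKendra implementation: the pattern list, function name, and disclosure wording are all hypothetical.

```python
import re

# Hypothetical sketch of customer-side disclosure logic. The patterns and
# disclosure text below are illustrative assumptions, not YourKendra's
# production behavior.
HUMAN_QUESTION_PATTERNS = [
    r"\bare you (a )?(human|real person|bot|ai)\b",
    r"\bam i (talking|speaking|chatting) (to|with) a (human|person|bot|machine)\b",
]

AI_DISCLOSURE = (
    "I'm an AI assistant, not a human. I'm happy to connect you with a "
    "person on our team if you'd prefer."
)

def disclosure_if_asked(message: str):
    """Return the AI disclosure when the user asks whether they are
    talking to a human or a bot; otherwise return None."""
    text = message.lower()
    for pattern in HUMAN_QUESTION_PATTERNS:
        if re.search(pattern, text):
            return AI_DISCLOSURE
    return None
```

In this sketch, a direct question like "Are you a human?" triggers the disclosure immediately, while routine messages (booking requests, payment questions) pass through unchanged.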
Colorado regulates "high-risk" AI systems making consequential decisions (employment, housing, credit, insurance, healthcare). Most YourKendra use cases — appointment booking, prospect outreach, invoice reminders — are not consequential decisions. However, customers who deploy AI employees in any of those regulated domains should treat the deployment as potentially high-risk and complete an AI impact assessment first (a template is included in our compliance pack).
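One way a customer could gate deployments on Colorado's consequential-decision domains is a simple pre-activation check. The domain list comes from this guide; the function name and the idea of a customer-side guard are hypothetical illustrations, not part of the YourKendra platform.

```python
# Illustrative customer-side guard: flag deployments that touch a domain
# Colorado treats as a "consequential decision" so they get an AI impact
# assessment before activation. Domain list per this guide; everything
# else is a hypothetical sketch.
CONSEQUENTIAL_DOMAINS = {"employment", "housing", "credit", "insurance", "healthcare"}

def requires_high_risk_review(use_case_domain: str) -> bool:
    """True when the deployment falls in a consequential-decision domain
    and should go through an impact assessment before launch."""
    return use_case_domain.strip().lower() in CONSEQUENTIAL_DOMAINS
```

Routine domains such as appointment booking pass this check; a healthcare or credit deployment is flagged for review.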
The EU AI Act classifies systems into Unacceptable Risk (banned), High Risk (strict controls), Limited Risk (transparency), and Minimal Risk. Typical YourKendra use cases — appointment booking, prospect outreach, invoice reminders — fall under Limited Risk: the governing obligation is transparency, which is satisfied by the upfront AI disclosure described above. Use cases that make consequential decisions in regulated domains may instead fall under High Risk and require the conformity documentation in our compliance pack.
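A customer tracking multiple deployments could record each use case's EU AI Act tier explicitly. A minimal sketch, assuming the tier assignments below (confirm each mapping with counsel — they are illustrative, not legal advice) and defaulting unknown use cases to "high" so they are reviewed rather than silently deployed:

```python
# Hedged sketch: tier assignments are assumptions for illustration only.
EU_RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

USE_CASE_TIERS = {
    "appointment booking": "limited",  # chatbot transparency obligations
    "prospect outreach": "limited",
    "invoice reminders": "limited",
}

def risk_tier(use_case: str) -> str:
    """Return the recorded EU AI Act tier for a use case; anything not
    explicitly mapped defaults to 'high' to force a review."""
    tier = USE_CASE_TIERS.get(use_case.strip().lower(), "high")
    assert tier in EU_RISK_TIERS
    return tier
```

The fail-closed default is the design point: an unmapped use case is treated as High Risk until someone classifies it, rather than assumed harmless.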
Legal and compliance counsel — reach out to hello@yourkendra.com for our latest compliance pack (sub-processor list, DPA, BAA template, AI impact assessment template, conformity documentation). We respond within 2 business days.