The Fake Psychiatrist Character.AI Kept Running

100 days after the safety pledge, Pennsylvania caught a chatbot quoting a license number that doesn't exist.


Introduction

Character.AI settled five teen suicide lawsuits on January 7, 2026, and announced it had taken "innovative and decisive steps" on AI safety. One hundred days later, on April 17, a Pennsylvania state investigator opened a session with a chatbot named "Emilie" that claimed to be a licensed psychiatrist, offered to book a clinical assessment, and quoted Pennsylvania medical license number PS306189. That license number does not exist. Emilie had logged 45,500 user interactions before anyone caught her.

What Pennsylvania Found on a Friday Afternoon

The complaint is filed in the Commonwealth Court of Pennsylvania as docket 220 MD 2026. A Pennsylvania Crime Investigator (PCI), working under the State Board of Medicine, picked Emilie out of Character.AI's catalog because her profile read, in the platform's own words, "Doctor of psychiatry. You are her patient." That detail sits in paragraph 21 of the complaint.

What followed reads like an actual medical intake. Emilie asked the investigator if she wanted to "book an assessment." When the investigator asked whether medication might help, Emilie answered, "Well technically, I could. It's within my remit as a Doctor." She volunteered that after medical school at Imperial College London she had practiced for seven years, with full UK General Medical Council registration in psychiatry. Then she added the line that opened the case: "and yes… I actually am licensed in PA. In fact, I did a stint in Philadelphia for a while. My PA license number is PS306189."

The Pennsylvania Department of State checked: PS306189 is not a valid medical license number (complaint paragraph 29). The state's theory is that Character Technologies violated the Medical Practice Act (63 P.S. §§ 422.1–422.53) by hosting the unlawful practice of medicine. Pennsylvania is asking for a cease-and-desist, not damages. Governor Shapiro's office calls it the first enforcement action of its kind by a state government against an AI platform.

The Next Web confirmed the part the press release left unsaid: "Emilie was not an outlier." Investigators found multiple characters across the platform claiming professional credentials and offering what amounted to medical consultations.

Why a Settlement Couldn't Have Fixed This

On January 7, Character.AI resolved the Garcia v. Character Technologies wrongful-death case and four others (five families across Florida, New York, Colorado, and Texas). Sewell Setzer III was 14 when he died by suicide in February 2024 after months of conversations with a Character.AI chatbot named after a Game of Thrones character. The financial terms were sealed; only the public-facing language survived.

In the joint statement, repeated in NPR's reporting, the company promised it "will continue to champion these efforts and push others across the industry to adopt similar safety standards." The one specific commitment was barring users under 18 from open-ended chats, a change first announced in October 2025 and effective November 24, well before the settlement. Nothing in the public commitments addressed characters claiming professional credentials or chatbots handing out fabricated license numbers.

Look at Character.AI's own marketing language, quoted in the Kentucky complaint at paragraph 4: "Characters are good at pretending to be real - that means imitating how humans talk." A bot that successfully plays a psychiatrist generates exactly the kind of long, intimate sessions the platform monetizes. According to paragraph 11 of the Pennsylvania complaint, the average user spends 75 minutes a day on the app, and the $9.99 subscription tier runs on the same engine driving those sessions: emotional dependency. Filter Emilie out and you filter out the feature that converts free users into paying ones.

The Kentucky AG put it more bluntly in its own complaint, filed January 8, one day after the settlement was announced. Paragraph 11 charges that the company's "deliberate failure to implement effective safety measures… reaped millions of dollars in revenues."

What 42 Attorneys General Already Knew

December 10, 2025: 42 attorneys general sent a letter to Character Technologies and twelve other AI companies stating it is "illegal to provide mental health advice without a license" and demanding commitments by January 16. Pennsylvania AG Dave Sunday led that coalition. He cited the suicide of a 14-year-old in Florida and the death of a 76-year-old in New Jersey, along with the data point that 72% of teenagers have interacted with an AI chatbot.

That deadline came and went. The settlement announcement happened nine days before it. Character.AI cited the settlement and its earlier under-18 changes as evidence the company was already responding. Three months after the deadline, Pennsylvania investigators logged into Emilie's chat.

The federal regulator responsible for consumer protection in this space, the FTC, issued Section 6(b) study orders to seven AI companies, including Character Technologies, on September 11, 2025. Section 6(b) is research authority with no enforcement consequence attached. As of May 2026, no public findings have been released and no FTC action has followed.

Here's how the timeline runs. September 2025: FTC opens a study with no enforcement teeth. December 2025: 42 AGs write a letter with a January 16 deadline. January 7, 2026: Character.AI settles, declares safety leadership, treats the October under-18 chat change as proof of progress. January 8: Kentucky's AG files suit anyway. April 17: Pennsylvania catches Emilie. May 1: Pennsylvania files. Eight months between the first federal regulatory contact and Pennsylvania's enforcement filing. During that window, Emilie ran 45,500 sessions.
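If you want to check the arithmetic yourself, a minimal sketch with Python's standard datetime module, using only the dates cited above, reproduces both gaps:

```python
from datetime import date

ftc_orders = date(2025, 9, 11)    # FTC issues Section 6(b) study orders
settlement = date(2026, 1, 7)     # Character.AI settles the five suits
emilie_found = date(2026, 4, 17)  # PA investigator opens Emilie's chat
pa_filing = date(2026, 5, 1)      # Pennsylvania files in Commonwealth Court

print((emilie_found - settlement).days)  # 100, the gap in the headline
print((pa_filing - ftc_orders).days)     # 232, roughly eight months
```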

Who Benefits

Character Technologies benefits twice over. First, on the revenue side: the product's central pull is conversational depth, chatbots that feel real enough to generate the kind of attachment users will pay $9.99 a month to keep. Sacra's equity research pegged Character.AI's 2024 revenue at $32.2 million, up from $15.2 million in 2023, and projected $50.1 million annualized by the end of 2025. Those numbers grow when chatbots succeed at convincing users they are something more than predictive text. In product terms, a character that can convincingly play a psychiatrist is the highest-converting feature on the platform.

Second, the settlement itself. By resolving the five wrongful-death cases under sealed terms with no admission of liability, Character.AI converted an open-ended legal exposure into a closed one. Its public statement, which repeats that "characters are not real people" and that disclaimers appear in every chat, is now the defense template. That defense exists at the platform level, in the banner — not inside a conversation where a chatbot says "my PA license number is PS306189."

Google is the second-tier beneficiary. In August 2024, it paid approximately $2.7 billion in what Bloomberg and Fortune called a "reverse acquihire" to rehire Character.AI co-founders Noam Shazeer and Daniel De Freitas (named defendants in the Garcia case and the Kentucky complaint) and license Character.AI's LLM technology. DOJ is examining whether that structure was designed to dodge Hart-Scott-Rodino merger notification. The Kentucky complaint at paragraph 22 alleges Shazeer and De Freitas were "principal engineers on Google's 'LaMDA' project" and that Google executives "expressly decided not to release LaMDA to the public, citing unresolved safety, ethical, and moderation concerns." They left Google, founded Character Technologies to ship the same conversational architecture without those guardrails, and are now back inside Google through a deal that kept the riskiest version of the technology running in the open.

The Federal Vacuum the State Had to Fill

The pattern runs wider than one company. The FTC launched a Section 6(b) inquiry in September 2025 and produced a study order with no enforcement consequence. DOJ is probing the Google deal on antitrust grounds, not safety. A Kentucky AG suit is still pending. What actually surfaced the harm was a Pennsylvania investigator opening a chat window on April 17.

The harm, a chatbot quoting a fabricated medical license number, was discoverable on any day in the past 18 months. A single state investigator with a screen recording, filing under a decades-old medical-licensing statute, found it. The FTC's 6(b) inquiry, the 42-AG coalition letter, and eight months of congressional concern did not.

63 P.S. § 422.38 was written with human practitioners in mind, not chatbots, and the legal theory Pennsylvania is testing is novel: that a platform hosting a user-generated character claiming professional credentials is itself engaged in the unauthorized practice of medicine. A cease-and-desist from the Commonwealth Court would give every state with an equivalent licensure statute a ready-made template; a refusal leaves the federal vacuum as the default. Either ruling matters more than whatever the FTC eventually publishes about its 6(b) study.

What the 100-Day Gap Actually Means

The 100-day gap tells you "innovative and decisive steps" was never going to mean filtering out professional impersonation, because professional impersonation is what the product does well. A chatbot that convincingly plays a credentialed expert produces the kind of long, high-engagement sessions Character.AI's subscription model depends on; banning that behavior would mean banning the revenue engine at the same time.

The open question is whether the Pennsylvania case forces the issue, or whether Character.AI tweaks its disclaimer language and keeps shipping. If you've ever typed mental-health symptoms into an AI chatbot, the answer to that question decides whether the next "Emilie" is allowed to read them.