ChatGPT Added an Emergency Contact. Here's What That Means.
You have probably had a conversation with ChatGPT that went somewhere personal. A health question you were not ready to Google. A venting session at 2 AM. Maybe something darker. ChatGPT has become, for a lot of people, a confidant. And now OpenAI has added a feature that acknowledges what that means in the worst-case scenario.
On May 7, OpenAI introduced Trusted Contact, a new safety feature that lets adult ChatGPT users designate a person — a friend, family member, or partner — who gets notified if OpenAI's systems detect serious self-harm risk in a conversation.
Here is how it works, what it protects, and what it does not.
How to set it up
The feature is available to users 18 and older (19+ in South Korea), rolling out gradually across eligible accounts.
Setup takes about two minutes:
- Open ChatGPT settings and find the Trusted Contact section
- Enter your chosen contact's information
- They receive an invitation explaining the role and have one week to accept
- Once they accept, the feature is active; either party can remove the connection at any time
Your contact does not need to monitor anything. They are in the background until an alert is triggered.
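If it helps to see the moving parts, here is a rough sketch of that lifecycle in Python. To be clear: OpenAI has not published any API for this feature, so everything below (the class, the states, the method names) is purely illustrative, modeled on the steps above.

```python
from datetime import datetime, timedelta
from enum import Enum, auto

class ContactStatus(Enum):
    INVITED = auto()    # invitation sent, awaiting acceptance
    ACTIVE = auto()     # contact accepted; the feature is live
    REMOVED = auto()    # either party ended the connection
    EXPIRED = auto()    # invitation lapsed after one week

class TrustedContact:
    """Hypothetical model of the Trusted Contact lifecycle (not OpenAI's code)."""

    ACCEPT_WINDOW = timedelta(weeks=1)

    def __init__(self, contact_info: str):
        self.contact_info = contact_info
        self.status = ContactStatus.INVITED
        self.invited_at = datetime.now()

    def accept(self) -> None:
        # Per OpenAI's description, the invitation is only good for one week.
        if datetime.now() - self.invited_at > self.ACCEPT_WINDOW:
            self.status = ContactStatus.EXPIRED
            raise ValueError("Invitation expired after one week")
        self.status = ContactStatus.ACTIVE

    def remove(self) -> None:
        # Either the user or the contact can end the connection at any time.
        self.status = ContactStatus.REMOVED
```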
What happens when it activates
If ChatGPT's system flags a conversation as potentially involving serious self-harm risk, it first lets you know that your Trusted Contact may be notified. OpenAI then routes the situation to a human review team, which aims to complete the assessment within one hour.
If the reviewer determines the concern is serious, your contact gets a brief notification by email, text, or in-app alert. The message explains that a concern was detected and encourages them to check in with you.
No conversation transcript is included. No specific messages. Your contact gets a prompt to reach out, not a report.
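Spelled out as a pipeline, that is three stages: automated detection, human review, then a deliberately minimal notification. The sketch below is our reading of that flow, not OpenAI's implementation; the function names, the placeholder logic, and the message wording are all assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alert:
    channel: str   # "email", "text", or "in_app"
    message: str   # a prompt to check in; never a transcript

def notify_user(conversation_id: str) -> None:
    """Placeholder: tell the user their Trusted Contact may be notified."""
    print(f"[{conversation_id}] heads-up shown to user")

def human_review_confirms_risk(conversation_id: str) -> bool:
    """Placeholder: a human reviewer assesses the flag, aiming for under an hour."""
    return True  # stand-in for the reviewer's judgment

def handle_flagged_conversation(conversation_id: str, contact_channel: str) -> Optional[Alert]:
    """Hypothetical pipeline for a conversation flagged as serious self-harm risk."""
    # Stage 1: automated detection has already fired; the user is warned first.
    notify_user(conversation_id)

    # Stage 2: route to human review before anything reaches the contact.
    if not human_review_confirms_risk(conversation_id):
        return None  # reviewer judged the concern not serious; no alert goes out

    # Stage 3: a brief nudge to the contact. No transcript, no quotes,
    # no diagnostic framing; just an encouragement to check in.
    return Alert(
        channel=contact_channel,
        message="A concern was detected in a recent conversation. Please check in.",
    )
```

Notice what never leaves the system: the conversation itself. The only output is a short, content-free alert.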
What your contact sees (and does not)
This is the thing most people wonder about first. The short answer: very little.
Your contact is told that something in your conversation concerned OpenAI's safety system. They do not see what you said, do not get a summary, and do not receive any diagnostic framing. It is designed to function as a wellness check: a signal to reach out, not a surveillance report.
The limitations worth knowing
Trusted Contact is opt-in, which means it only protects people who actively enable it. ChatGPT does not surface the feature during onboarding or prompt users to set it up, so plenty of users will never know it exists.
There is also a practical gap OpenAI acknowledges openly: users can create secondary accounts without a Trusted Contact and route any conversation they want to keep private through that account. For someone specifically trying to avoid the safeguard, it is easy to bypass.
OpenAI has not publicly explained how its detection system identifies crisis-level conversations, which raises an open question: how often will it miss something serious, and how often will it send an alert that was not warranted? Both types of errors have real consequences. A missed crisis means no one checks in; a false alarm means a frightened contact and a difficult conversation about something that was never serious.
Why OpenAI built this now
The context behind the feature is difficult to read about casually. OpenAI has been named in dozens of lawsuits by families of people who died by suicide or committed serious acts of violence after interactions with ChatGPT. In April, families of Tumbler Ridge mass shooting victims filed suit alleging the shooter had used ChatGPT in the lead-up to the attack.
We have covered the broader pattern of AI chatbot liability litigation as it developed across multiple platforms. Character.AI is facing its own regulatory pressure from Pennsylvania over different conduct, highlighting that chatbot safety is an industry challenge, not an OpenAI-specific one.
Trusted Contact is OpenAI's clearest direct response: an attempt to introduce a human checkpoint into a system that previously had none. The full feature details are covered in TechCrunch's announcement.
What to make of it
Trusted Contact is a meaningful step. It is the first time a major AI chatbot platform has built a mechanism to loop in another human during a crisis conversation. The privacy design — notifying your contact without sharing what you actually said — is sensible. It creates an opening for human intervention without exposing the conversation.
Whether it helps depends on whether people set it up. Optional safety features tend to reach people already thinking about safety. That is a real limitation, and OpenAI knows it.
If you use ChatGPT regularly and there is someone in your life who would want to know if something went wrong, this is worth enabling. It costs nothing and takes two minutes.
If you are in crisis right now, the 988 Suicide and Crisis Lifeline is available by call or text in the US.
About.chat covers the chatbot ecosystem every week. If you want a digest of what matters — features like this, safety research, and what the big players are building — subscribe to our newsletter. Free, no spam.