- OpenAI lets users disable model training from ChatGPT's settings page, but doing so also turns off Chat History, which seems counter-intuitive.
- There is a privacy portal page where you can file a request to opt out of model training while keeping your Chat History intact, but OpenAI doesn't promote it, and it isn't linked anywhere inside ChatGPT's interface.
ChatGPT is by far the most popular AI chatbot out there. For many, AI and ChatGPT have become almost synonymous. And if you are not paying attention, OpenAI uses your private chats to improve, i.e. train, its AI model. OpenAI does tell users not to share sensitive information because chats can be reviewed to train its model, but only once, when a new user first joins ChatGPT.
Not just that, OpenAI uses human reviewers from “trusted service providers” to process your private de-identified chats, but fails to mention it on ChatGPT’s homepage. Google also employs human reviewers for Gemini chats to train and improve its model, but Google clearly informs the user on the homepage.
While OpenAI offers a Data Controls option on ChatGPT's settings page to disable model training on all of your chats, the same toggle also turns off Chat History. Tying the training opt-out to losing access to past conversations feels punitive and discourages users from prioritizing their privacy.
Having access to your past conversations is a basic feature, and in no way should it be tied to your privacy. Both can co-exist. And indeed, they do.
OpenAI has a privacy portal page where you can opt out of model training and keep your Chat History intact. The hot AI startup doesn't promote this page anywhere in ChatGPT's interface. You have to open the Data Controls FAQ page, which mentions that you can indeed disable model training and still keep your chats. The relevant entry reads:
> What if I want to keep my history on but disable model training?
>
> We don't use content from our business offerings such as ChatGPT Team, ChatGPT Enterprise, and our API Platform to train our models. Please see our Enterprise Privacy page for information on how we handle business data. If you are on a ChatGPT Plus or ChatGPT Free plan on a personal workspace, you can opt out of training through our privacy portal by clicking on "do not train on my content." You can also turn off training for your ChatGPT conversations by switching Chat history & training off in Settings. Once you opt out, new conversations will not be used to train our models.
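As that FAQ answer notes, content sent through OpenAI's business offerings, including the API Platform, isn't used for training in the first place. For what it's worth, here is a minimal sketch of what that separate path looks like using the official openai Python library (the model name is just a placeholder); it's an illustration, not part of the opt-out process described below:

```python
# Minimal sketch: a call through OpenAI's API Platform, which, per the
# FAQ quoted above, is not used to train OpenAI's models, unlike
# consumer ChatGPT chats (unless you opt out there).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize my notes."}],
)
print(response.choices[0].message.content)
```

For regular ChatGPT use on the free or Plus plan, though, the privacy portal is the route that matters.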
So I kept digging and opened OpenAI's privacy portal page, where I filed a request to stop training on my content. If you are already signed in with your OpenAI account, it will log you in automatically. Otherwise, you'll have to enter the email address associated with your ChatGPT account, and OpenAI will send you an email. Click the link in that email and file the request. Both free and ChatGPT Plus users can file it. Here's what the process looks like:
Of course, the request applies to all future chats, not your past conversations. An active request will be created; reload the page after a minute or two and it should be processed, at least it was for me. You will also receive an email stating the following:
“We successfully processed your request to not train on content provided to our consumer services. We will no longer use your content to train our models. As a reminder, this request is forward looking and does not apply to content that had previously been disassociated from your account.”
Now, you can use ChatGPT with your Chat History turned on (including Chat sharing) while staying out of model training. I would have truly appreciated it if OpenAI had added a link to the privacy portal inside the Data Controls settings page so users could make informed decisions about their privacy.
This kind of dark pattern raises questions about OpenAI's commitment to transparency, particularly when it comes to user chat history. It would bode well for the company to walk the talk on transparency; otherwise, it risks eroding user trust.