- OpenAI is now rolling out the Memory feature to all paid ChatGPT Plus users.
- ChatGPT can now remember your preferences and key personal details to make conversations more personal.
- The feature is enabled by default, but you can disable it from the Settings page.
Back in February, OpenAI introduced a ‘Memory’ feature that remembers key details and preferences across all your chats. That feature is now being rolled out to all paid ChatGPT Plus users. It’s an effort by OpenAI to make your chat experience more personal and let ChatGPT act like a personalized assistant.
For example, ChatGPT can now remember preferences such as the fact that you love traveling, prefer bullet points for summaries, or want concise answers. From broad-level preferences to personal details, the Memory feature can retain this information and use it to respond more personally during chats.
The feature is enabled by default, and ChatGPT automatically updates its memory when it comes across relevant information during chats. You will see a “Memory updated” notice whenever it remembers something. You can also write “remember this” in your prompt to explicitly save information for future chats.
That said, you can turn off the Memory feature entirely. Some users may not want ChatGPT to remember personal details because of privacy concerns. You can simply say “forget that” when it updates its memory, or review and delete individual memories from the settings menu. Memories are managed under Settings -> Personalization.
To make ChatGPT more personal, OpenAI earlier rolled out Custom Instructions, which let you add details about yourself so that ChatGPT knows you better. Now, with the Memory feature, ChatGPT is likely to become even more personal.
While the feature can be helpful, ChatGPT users should know that their chats are used for model training by default, and memories can be part of that. However, there is a way to opt out of ChatGPT model training. Follow our linked tutorial for detailed instructions.
Privacy-conscious users should take these proactive steps so that their personal data doesn’t become part of a training dataset. Lately, companies have been seeking out all kinds of data, and even generating synthetic data, to train their models.