ChatGPT Introduces "Incognito Mode" and Will Launch ChatGPT Business Features
But it doesn't really delete everything, or do it immediately
OpenAI today announced a new set of data sharing and privacy controls for ChatGPT that are already available to users. You may be aware that ChatGPT added a prompt history that can be accessed in the left sidebar. It remembers your prompts and ChatGPT's responses so you can access them later.
The new Data Controls enable a user to turn off prompt history, essentially turning subsequent sessions into something similar to the Incognito mode you may know from the Chrome browser. The FAQs for the new feature include important details:
What are the Data Controls settings?
Data controls offer you the ability to turn off chat history and easily choose whether your conversations will be used to train our models. They also give you the option to export your ChatGPT data and permanently delete your account.
How do I turn off chat history and model training?
To disable chat history and model training, navigate to ChatGPT > Data Controls. While history is disabled, new conversations won’t be used to train and improve our models, and won’t appear in the history sidebar. To monitor for abuse, we will retain all conversations for 30 days before permanently deleting.
How ChatGPT Uses Your Data
The user controls let you either have your conversation history saved and used to further train OpenAI's models, or have your data neither saved nor used for training. There is no option for "save my conversation history but don't use it for training." It appears to be the type of all-or-nothing privacy option we used to see from certain social media platforms.
OpenAI does address the obvious omission by saying that it is working on a way to enable that option. In the interim, you can follow the existing process for opting out of having your data used for training by filling out this form. That should let you automatically save each prompt and its response without having that content used for training.
This is particularly important for some users. Three Samsung employees entered proprietary information into ChatGPT, including source code and a meeting transcript. According to Engadget:
One employee reportedly asked the chatbot to check sensitive database source code for errors, another solicited code optimization and a third fed a recorded meeting into ChatGPT and asked it to generate minutes.
TechRadar shed some additional light on this situation:
In one of the aforementioned cases, an employee asked ChatGPT to optimize test sequences for identifying faults in chips, which is confidential - however, making this process as efficient as possible has the potential to save chip firms considerable time in testing and verifying processors, leading to reductions in cost too.
The issue here is twofold. First, there is no way for Samsung to recall this information or efficiently request its removal from the model. Second, if ChatGPT is trained on this data, which you should presume it will be, the information entered may show up in a generated response for another user outside of Samsung.
ChatGPT Data Controls mean the second risk will not be an issue if Chat History & Training is turned off. However, the first risk remains for 30 days before the information is deleted from OpenAI’s servers.
ChatGPT Business
OpenAI recognizes this situation could significantly inhibit adoption, and ChatGPT Business is expected to address the issue. According to the announcement:
We are also working on a new ChatGPT Business subscription for professionals who need more control over their data as well as enterprises seeking to manage their end users. ChatGPT Business will follow our API’s data usage policies, which means that end users’ data won’t be used to train our models by default. We plan to make ChatGPT Business available in the coming months.
Note that the statement includes interesting information beyond "more control over their data." ChatGPT Business will also include features for "enterprises seeking to manage their end users." Administrative features like these will interest companies that want to use a tool like ChatGPT while ensuring it complies with their security, governance, and oversight practices.
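For context, the data handling that ChatGPT Business will inherit is already visible to developers using the API today. Here is a minimal sketch of a request to OpenAI's Chat Completions endpoint, which falls under those API data usage policies (inputs are not used for training by default). The model name and prompt are illustrative, and the script assumes an OPENAI_API_KEY environment variable is set:

```python
# Minimal sketch: a direct call to OpenAI's Chat Completions API, which
# falls under the API data usage policies referenced above (inputs are
# not used for training by default). Model choice and prompt content
# are illustrative; assumes OPENAI_API_KEY is set in the environment.
import os
import requests

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "user", "content": "Draft meeting minutes from these notes: ..."}
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

A Samsung-style scenario run through this route, rather than the consumer ChatGPT interface, would have defaulted to staying out of the training set, which is exactly the behavior ChatGPT Business is promising to bring to the chat product itself.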
A Yammer Growth Model?
Yammer quickly became a popular social-network-style business application after launching in 2008. It was a combination of Facebook and Slack before the latter existed. The company was acquired in 2012 by Microsoft for $1.2 billion.
The company was legendary for its go-to-market model. Yammer let anyone sign up for free with their company email address. Notably, they could do this without the approval or involvement of the IT department.
This created a viral effect, as users invited colleagues to join and formed user networks within their companies. Yammer then approached the IT leaders in these companies and showed them how many of their colleagues were using Yammer already. The sales representative then offered to provide paid premium features that enabled more control and visibility to the enterprise. Slack replicated this go-to-market model shortly after it launched in 2013.
You could imagine some businesses simply wanting a generative AI solution like ChatGPT that offers some of these controls and administrative features upfront. Others may find that hundreds or thousands of their employees are already using ChatGPT, and they can better control the behavior by moving them over to a corporate account.
There is a straightforward path for ChatGPT to expand from a consumer app to a business app employing a similar model. And, there is the argument that your employees are going to be using this anyway; if they do it through ChatGPT Business, then there is a lower risk of private company data inadvertently winding up in an OpenAI data training set. It is always great when you can create a problem and get someone to pay you to reduce the risk.
The Regulatory Picture
The other key factor behind the Data Controls feature is looming regulation in Italy, Canada, and other countries. Users can now export their data from ChatGPT and more easily delete their accounts. They will not be able to undo the effects of their data having already been used to retrain OpenAI's models. Still, they will be able to remove existing information, and it will not be used for future training.
This is a step in the right direction. However, it will likely fall short of what regulators want, which is the ability to delete all representations of user data, including the conversations that were generated.
(Image: the new Data Controls settings, including the option for a user's chats not to be saved by ChatGPT and an opt-out of OpenAI's model training.)