How do AI companies view your privacy?
How will OpenAI's new option to turn off chat history help protect your data privacy?
On April 25, 2023, OpenAI introduced its most significant feature yet for privacy-conscious users: the option to turn off chat history in ChatGPT. With history disabled, your prompts and the model's responses are not used to train OpenAI's models. This option is new to ChatGPT, but the API (developer) version already excluded user data from training by default.
What if I opt in to data collection? OpenAI has been vague about its retraining process, but in its own words the system works toward "better understanding user needs and preferences." The process begins when a user likes or dislikes a response, effectively labeling that data; personally identifying information is then removed before the data is used.
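OpenAI has not published how this anonymization works, but a toy sketch of the general idea, regex-based scrubbing paired with a thumbs-up/down label, might look like this (the patterns and field names are illustrative, not OpenAI's actual pipeline):

```python
import re

# Hypothetical PII patterns -- a real anonymization pipeline would be far more thorough.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace recognizable PII with placeholder tokens before the data is stored."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

# The user's like/dislike acts as the label on the scrubbed example.
feedback = {
    "response": scrub_pii("Email me at jane.doe@example.com or call 555-123-4567."),
    "rating": "like",
}
print(feedback["response"])  # -> Email me at [EMAIL] or call [PHONE].
```

Even a sketch like this shows why the process is imperfect: PII comes in many more forms than a handful of regexes can catch.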
While this opt-out option seems viable, OpenAI still retains your data for a 30-day window, whether you use the API or ChatGPT. This raises the question: how comfortable are you sharing your information with OpenAI? As always, be cautious when sharing information with any third-party service.
"A thing that I do worry about is… we're not going to be the only creator of this technology." Sam Altman has been an outspoken critic of companies that do not uphold safety standards in their AI models. This sounds reassuring, but can we be sure his private company is aligned with consumers' interests, or is this simply a centralization of power?
Instead of having these mega-corporations operate the LLMs (large language models), we could leave them in the hands of the people. Could companies host their own on-premise chatbots, so users know exactly whose hands their information falls into? This could address our privacy and security concerns, but does this decentralization of AI pose regulatory difficulties or even existential risks?
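As a sketch of what that on-premise idea could look like in practice: a chatbot served at a company-internal address, so prompts never traverse the public internet. The endpoint URL and payload shape below are entirely hypothetical.

```python
import json
from urllib.request import Request

# Hypothetical company-internal endpoint -- prompts stay on the local network.
LOCAL_CHAT_URL = "http://llm.internal.example:8080/v1/chat"

def build_chat_request(prompt: str) -> Request:
    """Package a prompt for a self-hosted model; nothing is actually sent here."""
    payload = json.dumps({"messages": [{"role": "user", "content": prompt}]})
    return Request(
        LOCAL_CHAT_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Summarize our Q3 incident report.")
```

The privacy win is structural: sensitive prompts like the one above would be unacceptable to send to a third party, but are routine against a model the company itself operates.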