After Apple announced the integration of OpenAI’s ChatGPT into its devices at WWDC 2024, Elon Musk voiced dissatisfaction and concerns about user data security. Musk called the collaboration with OpenAI an “unacceptable security violation” and threatened to ban iPhones and MacBooks at his companies. He questioned OpenAI’s ability to protect user data and accused Apple of “selling out” its users. Musk even floated security measures such as requiring visitors to his companies to leave their Apple devices in a Faraday cage to mitigate the perceived risk.
Musk’s reaction sparked controversy, with some attributing his stance to personal interests following his failed attempt to acquire OpenAI. Regardless, his concerns about data security when third-party AI tools are integrated at the operating-system level are echoed by many experts. Critics have also raised questions about how OpenAI collects and uses user data, and about how much control Apple retains over that process.
In response to these concerns, Apple has emphasized its safeguards for user data, such as obscuring IP addresses and not storing requests. Whether these measures can guarantee the absolute safety of user data, however, remains a matter of debate.