LinkedIn faces criticism for using user data to train AI

LinkedIn, the business and employment networking platform owned by Microsoft, has come under scrutiny for using member data to train generative AI models without notifying users in advance, raising concerns about transparency and privacy. The company confirmed the practice in a blog post by Blake Lawit, LinkedIn's Senior Vice President and General Counsel, but the training had already begun before users were asked for explicit permission.

In response to the criticism, LinkedIn has updated its user agreement and privacy policy to describe in more detail how user data is used to recommend content, manage information, and develop AI features. The updated privacy policy explains how user data is collected, processed, and used for product development, including AI-generated content. Although LinkedIn says it applies "privacy-enhancing" technologies to protect personal information during AI training, many users feel the company should be more transparent and seek permission before using their data.

The quiet collection of user data without prior notice has sparked privacy concerns, leading to debates about the legality and ethics of such practices. Experts believe that LinkedIn’s lack of transparency in data processing could erode user trust, especially given the increasing public and regulatory pressure to safeguard online privacy.
