LinkedIn is under fire after it started using user-generated content to train its AI systems without first asking users for permission.

In its updated privacy policy, LinkedIn revealed that posts, articles, and other types of content shared on the platform may be used to improve the performance of its AI models. While the platform has added an opt-out option, it appears that users were automatically included in this process without any initial consent.


How Does LinkedIn’s AI Work With Personal Data?


LinkedIn’s AI is designed as a tool to assist users by suggesting content ideas, drafting messages, and generating other automated responses. The system relies on personal data, such as user posts and articles, to train its models and make those suggestions more accurate.

This user-generated content is now being fed into the system to sharpen these AI features.

In other words, LinkedIn is actively collecting and processing users’ content without their direct consent. For example, if you ask the AI for help writing an article, it might draft text based on data gathered from other users, and personal details could slip through into that output.



What Options Do Users Have To Protect Their Data?


For those who aren’t comfortable with LinkedIn using their content in this way, there is a simple solution, but it requires action. Users can go into their settings and opt out of having their data used to train LinkedIn’s AI. Under “Data for Generative AI Improvement,” there’s a toggle that, once switched off, will stop LinkedIn from using your data going forward.

It’s also important to know that opting out does not prevent data that has already been gathered from being used. And while opting out stops LinkedIn from using your content to train the AI, it doesn’t stop the system from processing your data when you use AI features like content drafting tools. So, the control isn’t as complete as many might hope.


Are There Any Regional Protections For Users?


LinkedIn has drawn a line when it comes to users based in the European Union, UK, Switzerland, and a few other regions. Due to stricter privacy laws in these areas, the platform has stated that it will not use content from users in these locations to train AI systems. This gives users in these regions added protection, but for those in the rest of the world, the responsibility to protect their data falls on them.

LinkedIn has said it uses privacy-enhancing technology to remove or reduce personal details when training its AI models, but the system still leaves room for error. There have been concerns that, depending on how an AI request is phrased, the output could unintentionally include personal information shared by another user.


What’s The Reaction From LinkedIn Users?


The reaction has been predictably negative. Many users feel blindsided by the lack of transparency around the changes, and the fact that they were automatically opted in has only added to their frustrations. Across the platform, posts advising others on how to switch off the data collection setting have been circulating, with many seeing this as a serious violation of privacy.

This isn’t the first time a large tech company has quietly started using personal data for AI development without explicit permission, and it’s unlikely to be the last. LinkedIn’s decision to opt users in without a clear upfront explanation has created a sense of mistrust, and many feel the company is putting innovation ahead of user confidence.






© 2024 The News Times UK. Designed and Owned by The News Times UK.