
Malicious actors can force machine learning models to leak sensitive information by poisoning the datasets used to train them, researchers have found.

A team of experts from Google, the National University of Singapore, Yale-NUS College, and Oregon State University published a paper titled “Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets”, which details how the attack works.
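The core idea can be illustrated with a toy sketch (this is not the authors' code, and every detail here, the scikit-learn model, the synthetic dataset, and the poison count, is an illustrative assumption): an attacker plants mislabeled copies of a target record in the training set, which pushes the model's loss on that record much higher when the record is absent than when it is present, so a simple loss threshold reveals membership far more reliably.

```python
# Toy sketch of a poisoning-amplified membership inference test.
# Assumptions (not from the paper's code): scikit-learn logistic
# regression, a synthetic dataset, and label-flipped copies of the
# target record as the poison.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
target_x, target_y = X[0], y[0]   # the record whose membership we test
pool_X, pool_y = X[1:], y[1:]     # everyone else's data

def target_loss(with_target: bool, n_poison: int) -> float:
    """Train one model and return its cross-entropy loss on the target."""
    idx = rng.choice(len(pool_X), size=500, replace=False)
    train_X, train_y = pool_X[idx], pool_y[idx]
    if with_target:  # target record is a member of the training set
        train_X = np.vstack([train_X, target_x[None]])
        train_y = np.append(train_y, target_y)
    if n_poison:     # poison: copies of the target with a flipped label
        train_X = np.vstack([train_X, np.repeat(target_x[None], n_poison, axis=0)])
        train_y = np.append(train_y, [1 - target_y] * n_poison)
    model = LogisticRegression(max_iter=1000).fit(train_X, train_y)
    p = model.predict_proba(target_x[None])[0, target_y]
    return -np.log(max(p, 1e-12))

for n_poison in (0, 8):
    members = [target_loss(True, n_poison) for _ in range(30)]
    non_members = [target_loss(False, n_poison) for _ in range(30)]
    # With poison, the gap between the two averages widens sharply,
    # so a loss threshold separates members from non-members.
    print(f"poison={n_poison}: member loss {np.mean(members):.3f}, "
          f"non-member loss {np.mean(non_members):.3f}")
```

Run as written, the script trains many small models on random subsets with and without the target; the printed averages show how a handful of poisoned copies turns a marginal loss gap into a decisive one.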
