Access and Feeds

Data Poisoning: Corrupting Data to Thwart AI

By Dick Weisinger

It’s a fact of life that big tech and governments are collecting your personal data: the pieces of your life’s history, your interactions with digital technology, and even readings of your biological functions.

Some of the reasons why companies and governments value your data include:

  1. Data can be monetized. This is probably the main reason your data matters to companies. Your data can be sold to advertisers and used to drive sales: it lets companies build targeted, personalized interactions that make it easier to convince you to buy more of their products and services.
  2. Your data can serve as feedback that helps businesses improve their products and create experiences that better meet the needs of not only you but all users.
  3. Data can be analyzed to identify risk factors or usage patterns that may indicate hacking or fraud, helping organizations minimize their risks.

But increasingly, users aren’t comfortable with being tracked without a clear understanding of how their personal information might be used or abused. What can you do? Some users are adopting ‘data poisoning’ techniques: feeding inaccurate information about themselves into the system, or causing their digital profile to be interpreted in a way that differs from their true self.

Sarah Erfani, Lecturer at the University of Melbourne in Australia, said that data poisoning “can be used as a key by an individual to lock their data. It’s a new frontline defense for protecting people’s digital rights in the age of AI.”

What are users doing? A common data poisoning technique is to create a pseudonymous user persona, complete with a fake name and email address, combined with frequent purging of the cookies that accumulate in the browser. But more recently, techniques that can foil AI tools, like facial recognition, are being adopted too. Researchers like Erfani, for example, have created apps that alter images, like selfies, in ways that make the images appear unchanged to humans but cause facial recognition AI software to hiccup and fail.
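How might such an image tweak work? Erfani’s actual tools aren’t described here, so the sketch below is a rough illustration only: a generic adversarial-perturbation step (a fast-gradient-sign update) in PyTorch. The `model`, `true_embedding`, and `cloak_image` names are hypothetical stand-ins, and real cloaking tools use considerably more sophisticated optimization.

```python
# Minimal sketch of image "cloaking": add a small, human-imperceptible
# perturbation that pushes a face-recognition model's embedding away
# from the person's true identity. Assumes `model` maps an image tensor
# to an identity embedding; this is NOT any specific research tool.

import torch
import torch.nn.functional as F

def cloak_image(model, image, true_embedding, epsilon=0.03):
    """Perturb `image` so its embedding drifts away from `true_embedding`.

    image: float tensor in [0, 1], shape (1, 3, H, W).
    epsilon: max per-pixel change -- small enough to be invisible.
    """
    image = image.clone().requires_grad_(True)

    # Measure how closely the model still ties the image to the
    # person's real identity embedding.
    similarity = F.cosine_similarity(model(image), true_embedding)
    similarity.sum().backward()

    # Fast-gradient-sign step: nudge each pixel by at most epsilon in
    # the direction that lowers the similarity score.
    cloaked = image - epsilon * image.grad.sign()
    return cloaked.clamp(0.0, 1.0).detach()
```

The invisibility comes from the epsilon cap: each pixel shifts by only a few intensity levels, too little for a person to notice, but accumulated across the whole image it can be enough to throw the model’s embedding off target.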

