Garbage in – Garbage out (GIGO). It’s the idea that if you base decisions on flawed information, your solution will likely be equally flawed. The concept is especially true in areas like data analytics and artificial intelligence. And a corollary to this idea is that outcomes in analytics and AI can be manipulated by tailoring the data fed into the algorithms.
It’s the reason Microsoft’s initial attempt at creating a chatbot failed. Users unscrupulously fed the chatbot racist propaganda, which the algorithm picked up on and learned from. Soon the chatbot was echoing the same racist comments.
Roman Yampolskiy, head of the CyberSecurity lab at the University of Louisville, said that “any AI system learning from bad examples can end up being socially inappropriate, like a human raised by wolves.”
A group of researchers from the University of Maryland’s Robert H. Smith School of Business wanted to investigate the extent to which AI could be manipulated or fooled. For their research, they looked at applications to the US patent office, which extensively uses AI algorithms to search existing patents for claims similar to those in submitted applications.
To convince the AI algorithm of the novelty of a submitted patent idea, applicants now often introduce unusual language and phrasing, coin new hyphenated words, and omit references to similar patents in the application. This makes it much more difficult for the algorithm to find related patents.
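The mechanism is easy to demonstrate. Below is a minimal sketch (not the patent office’s actual system, whose algorithms are not public) using a simple bag-of-words cosine similarity: the plainly worded application shares vocabulary with the prior patent and scores high, while the version rewritten with hyphenated neologisms shares almost no tokens and scores much lower.

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Bag-of-words cosine similarity between two token lists."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in set(ca) & set(cb))
    norm_a = math.sqrt(sum(v * v for v in ca.values()))
    norm_b = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical prior patent text (illustrative only)
prior_patent = "a wireless sensor network for remote patient monitoring".split()

# Plainly worded application: reuses the field's standard vocabulary
plain = "a wireless sensor network for monitoring a remote patient".split()

# Obfuscated application: hyphenated neologisms replace the common terms
obfuscated = "a cable-free sensing-mesh for off-site patient-state observation".split()

print(cosine_similarity(prior_patent, plain))       # scores high: heavy token overlap
print(cosine_similarity(prior_patent, obfuscated))  # scores much lower: overlap nearly gone
```

Real prior-art search uses far richer models than this, but the underlying vulnerability is the same: a text-similarity measure can only match what the vocabulary exposes, so invented terms pull a document out of the algorithm’s reach.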
The researchers found that “although it is theoretically feasible for ML algorithms to continually learn and correct for ways that patent applicants attempt to manipulate the algorithm, the potential for patent applicants to dynamically update their writing strategies makes it practically impossible to train an ML algorithm to correct for this behavior.”