AI Regulations: Attempts to Tame the Risks
Regulators around the world are now writing laws that will attempt to manage how companies and governments use artificial intelligence (AI). Regulation of AI is inevitable, and it can be expected to be enacted soon.
For example, the EU recently released a proposal for regulating AI with noncompliance fines of up to 6 percent of a company's annual revenues. In the US, bank regulators and the FTC are developing guidance on the use of AI.
Sundar Pichai, Alphabet CEO, said that "there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to." Stephen Hawking worried that AI has the potential to destroy humanity. Bill Gates is concerned about the development of a "super intelligence." Elon Musk has called for AI regulation as well.
In a Forbes article, Rob Toews, principal at Highland Capital Partners, advocates for the creation of a US federal agency that could develop public policy and police the use of AI.
Mark MacCarthy wrote in a Brookings Institution report that "calls for modest regulation that lets industry take the lead are part of a failed regulatory philosophy, one that saw its natural experiment over the past several decades come up lacking. AI is too important and too promising to be governed in a hands-off fashion, waiting for problems to develop and then trying to fix them after the fact."
Pichai wrote in the Financial Times that "principles that remain on paper are meaningless… We believe that any company developing new AI tools should also adopt guiding principles and rigorous review processes."