Access and Feeds

Artificial Intelligence: Seven Rules for Ethical Development of AI

By Dick Weisinger

Google recently announced seven principles that it pledges to follow for all of its AI projects.

  • AI should only be undertaken when the benefits outweigh the foreseeable risks.
  • AI needs to be designed so that it does not create or reinforce unfair bias.
  • AI needs to be built and tested for safety.
  • AI needs to be accountable, allowing people to comment and appeal.
  • AI needs to be transparent and build in safeguards for privacy.
  • AI needs to uphold high standards of scientific excellence. Google promised to share its AI knowledge.
  • AI should only be made available for uses that accord with these principles.

Sundar Pichai, CEO of Google, wrote on his blog that “we recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.”

Google pledges to steer clear of “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people and systems that gather or use information for surveillance violating internationally accepted norms.”

Google announced these principles after employees complained about work being done for the Pentagon on Project Maven. Google subsequently dropped the contract.

