Artificial Intelligence (AI): What Could Go Wrong?
Elon Musk has warned about the long-term possibility that artificial intelligence could be seriously harmful to humans. How can Artificial Intelligence be used responsibly?
Can things like decency, fairness and morals be programmed into AI algorithms?
I think the answer to this is clearly ‘Yes’, but the real question is how we can guarantee that the algorithms we create will be decent, fair and moral. What is the incentive to build responsibility into an algorithm? How can the responsibility of an algorithm be measured or certified?
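To make that ‘Yes’ concrete: a simple fairness rule, such as checking that approval rates do not diverge too far across groups, can in principle be written down and tested automatically. The sketch below is purely illustrative, not any standard method; the data, the 0.05 tolerance and the function names are assumptions. It also hints at the harder problem raised above: the tolerance is a human judgment the algorithm cannot justify on its own.

```python
# Hypothetical sketch: a demographic-parity check a team might run before
# deploying a decision-making model. All names and numbers are illustrative.

def approval_rates(decisions, groups):
    """Return the approval rate for each group."""
    rates = {}
    for group in set(groups):
        in_group = [d for d, g in zip(decisions, groups) if g == group]
        rates[group] = sum(in_group) / len(in_group)
    return rates

def passes_parity(decisions, groups, tolerance=0.05):
    """True if approval rates across groups differ by at most `tolerance`."""
    rates = approval_rates(decisions, groups)
    return max(rates.values()) - min(rates.values()) <= tolerance

# Example: decisions (1 = approved, 0 = denied) for applicants from two groups
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(passes_parity(decisions, groups))  # False: 0.75 vs 0.25 exceeds the tolerance
```

Even this toy check shows where the guarantees run out: someone still has to decide which groups to compare, what counts as an acceptable gap, and what to do when the check fails.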
We already see on a daily basis how government policy can be subverted by geopolitical, economic and partisan priorities. If a technology can be used for political or other advantage, there will likely always be groups or nations willing to approve it.
But what if we assumed that all nations could come to a global agreement that a moral code should be built into all artificial intelligence?
Even then, specifying how to do that will be difficult. David Hagenbuch, a marketing ethicist, for example, lists four sticking points that might hold up or subvert any agreement on building responsible AI algorithms:
- Different cultures have different moral standards. A mutual agreement is needed but may be difficult to reach.
- Standards and laws cannot cover every conceivable situation. If there is a grey area, how would machines decide? Humans debate such cases and often defer to panels of judges. Will humans always be able to be the ultimate arbiter?
- Would algorithms be able to apply standards based on context? While some laws apply in nearly all cases, there are often exceptions. Would machines and algorithms be able to handle exceptions?
- What happens when the algorithm is wrong or results in harm? Who is accountable and who should be responsible?