Making sure AI is helpful, not a threat

When it comes to artificial intelligence and its application in business, sooner or later we have to discuss the ethical rules that should govern it.

Even though AI is new and hard to compare to anything else in the history of mankind, there must be ground rules and norms for its application: where limits must be set, what is allowed and what is not. Otherwise the risk of abuse may become too great, and in the worst case AI could become harmful and dangerous instead of advantageous, helpful and constructive. As Stephen Hawking put it at the Web Summit technology conference:

“Success in creating effective Artificial Intelligence could be the biggest event in the history of our civilization. Or the worst. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.”

– Stephen Hawking

That is why many institutes, authorities and organizations are drafting guidelines and ethical frameworks for the use of AI.


The basics of ethics in AI

As mentioned, many groups and organizations are heavily involved in this subject. The European Commission, for example, set up the independent “High-Level Expert Group on Artificial Intelligence” to develop ethics guidelines for trustworthy AI. The group’s document lays out a framework for the trustworthy development of AI; its key point is that AI has to be lawful, robust and ethical. Everything is based on four ethical principles:

  • Respect for human autonomy
  • Prevention of harm
  • Fairness
  • Explicability

These principles are rooted in fundamental human rights and have been adapted to AI. That sounds promising, but how can one make sure the guidelines are actually followed? This is what the document calls the realization of “Trustworthy AI”: the expert group defines seven key requirements that AI systems should meet in everyday business and life (a small illustrative sketch follows the list):

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination and fairness
  • Societal and environmental well-being
  • Accountability
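
To make one of these requirements more tangible: “diversity, non-discrimination and fairness” is often audited with simple statistical checks such as demographic parity, which compares the rate of positive decisions across groups. The following Python sketch illustrates the idea; the decision data, group labels and threshold are purely hypothetical and are not taken from the EU guidelines.

```python
# A minimal sketch of how the requirement "diversity, non-discrimination
# and fairness" could be checked in practice. All data below is
# hypothetical and for illustration only.

def demographic_parity_gap(decisions, groups):
    """Return the largest difference in positive-decision rates
    between any two groups (0.0 = perfectly equal rates)."""
    rates = {}
    for decision, group in zip(decisions, groups):
        approved, total = rates.get(group, (0, 0))
        rates[group] = (approved + decision, total + 1)
    positive_rates = [approved / total for approved, total in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Hypothetical loan decisions (1 = approved) and applicant groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.2:  # illustrative threshold, chosen arbitrarily here
    print("Warning: decision rates differ strongly between groups.")
```

In practice such a check would run on real model output and be combined with further metrics, but it shows how an abstract requirement can be turned into a verifiable test.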

Since AI is still a young discipline and we are only at the beginning of its evolution (in the phase of “Artificial Narrow Intelligence”), defining clear standards and guidelines for AI ethics is not just sensible but essential.

It takes the involvement of society, government and business

As previously mentioned, artificial intelligence is already present in many areas of everyday life, and given the sheer scale of its deployment, clear guidance on how to implement it is crucial. But this automatically raises the question: Who sets the ethics, and based on what standards? Leaving these decisions solely to economists and the governmental institutions involved would be too narrow. It takes the combined expertise of politics and business as well as academia, the sciences and, unquestionably, sociology.

Only by understanding both the possibilities and the dangers of AI deeply and broadly can clear standards and frameworks be set up. That is why it takes experts from many different fields working together on ethics in AI, and that is what makes this topic so fundamentally important. A sociologist, for example, might have a good grasp of established ethical norms but probably lacks a comprehensive understanding of what AI can do and what it may bring.