Numerous studies examine different aspects of artificial intelligence, and many renowned institutes and companies investigate its technologies and their effects on businesses. This article presents one of KPMG’s highly relevant and topical studies on the subject, for which KPMG surveyed CEOs of various renowned companies in the USA.
According to the respondents, the most critical trust factors are algorithm integrity, explainability, fairness in terms of ethics and accountability, and resilience. Integrity is assessed by whether the algorithm follows the original goals of the project and the company. Explainability refers to the understandability and simplicity of the AI model and the traceability of its results. Fairness is expressed by equal treatment and the absence of proxy variables that lead to unfair treatment of participants. A resilient AI should, in turn, cover all aspects of safe operation and take into account any security risks that might occur.
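The fairness factor can be made concrete with a simple statistical check. The following sketch is illustrative only; the demographic parity metric, the toy data, and the 0.1 tolerance are assumptions for this example, not prescriptions from the KPMG study:

```python
# Illustrative sketch: demographic parity difference as one possible
# fairness check. The groups, decisions, and the 0.1 threshold below
# are hypothetical examples, not part of the KPMG study.

def demographic_parity_difference(decisions_a, decisions_b):
    """Absolute difference in positive-decision rates between two groups."""
    rate_a = sum(decisions_a) / len(decisions_a)
    rate_b = sum(decisions_b) / len(decisions_b)
    return abs(rate_a - rate_b)

# Toy example: 1 = favourable decision, 0 = unfavourable decision.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # 5/8 = 0.625 positive rate
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 = 0.375 positive rate

gap = demographic_parity_difference(group_a, group_b)
print(f"demographic parity gap: {gap:.3f}")  # 0.250
print("within tolerance" if gap <= 0.1 else "review for proxy bias")
```

A large gap does not prove unfairness on its own, but it is a signal to look for proxy variables in the underlying data.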
In addition to these essential cornerstones, the KPMG study also looked at how a company can build greater trust in artificial intelligence and how this trust can be governed. The study illustrates this with a five-step process.
1. Strategy: prepare a strategy for the AI project. The vision, aspirations, and desired results must be defined first to provide a clear path.
2. Design: the desired AI model must be matched with unbiased data, appropriate engineering, and the pre-defined goals. This phase also lays the foundation for values and ethics, security and privacy, and the subsequent quality of the system.
3. Implementation and training: implement and train the model on the chosen data, in compliance with organizational principles, guidelines, and legal regulations.
4. Evaluation: after completion of the AI, critically question and evaluate the results. It is relevant to assess not only the qualitative results but also the ethics, values, fairness, and explainability of the outcomes. In this way, both the added benefit of an AI and the trust in it can be verified and validated.
5. Adaptation: finally, it is immensely important for an AI to develop further and, if necessary, to be adapted based on the evaluations. Here, too, key indicators such as ethics, values, fairness, and integrity play a role.

By following these steps, an AI can not only achieve excellent results but also create trust among employees and customers and become a reliable tool.
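The five steps above can be sketched as a lifecycle loop in which evaluation feeds back into adaptation. All function names and the acceptance criterion here are hypothetical placeholders, not an implementation from the study:

```python
# Illustrative sketch of the five-step process as a lifecycle loop in
# which evaluation feeds back into adaptation. All callables and the
# acceptance criterion are hypothetical placeholders.

def run_ai_lifecycle(define_strategy, design_model, train, evaluate, adapt,
                     max_iterations=3):
    goals = define_strategy()            # 1. strategy: vision and goals
    model = design_model(goals)          # 2. design: model and data choices
    report = {"acceptable": False}
    for _ in range(max_iterations):
        model = train(model)             # 3. implement and train
        report = evaluate(model)         # 4. evaluate quality AND ethics
        if report["acceptable"]:
            break
        model = adapt(model, report)     # 5. adapt based on the evaluation
    return model, report

# Toy run with placeholder callables.
model, report = run_ai_lifecycle(
    define_strategy=lambda: {"goal": "fair, explainable classifier"},
    design_model=lambda goals: {"goals": goals, "trained": False},
    train=lambda m: {**m, "trained": True},
    evaluate=lambda m: {"acceptable": m["trained"], "fairness_checked": True},
    adapt=lambda m, r: m,
)
print(report["acceptable"])  # True
```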
In its study, KPMG defined six governance aspects for an AI, which must be taken into account in the development process presented.
- Develop AI design criteria and establish controls in an environment that encourages innovation and flexibility.
- Design and implement an end-to-end AI governance and operating model over the entire life cycle: AI strategy, design, training, evaluation, deployment, operation, and monitoring.
- Assess the current governance framework and carry out a gap analysis to identify opportunities and areas that need to be updated.
- Design a governance framework that enables AI solutions and innovation through policies, templates, tools, and accelerators to deliver AI solutions quickly, yet responsibly.
- Integrate a risk management framework to identify and prioritize business-critical algorithms and an agile risk mitigation strategy to address cybersecurity, integrity, fairness, and resilience considerations during development and operation.
- Design and establish criteria to maintain continuous control over algorithms without stifling innovation and flexibility.
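The last aspect, continuous control over algorithms without stifling innovation, can be sketched as a lightweight monitoring gate: a deployed model is periodically re-evaluated and flagged only when a metric drifts beyond its tolerance, so routine changes are not blocked. The metric names, baselines, and tolerances below are hypothetical:

```python
# Illustrative sketch of a periodic control gate: re-evaluate a deployed
# model and flag it only when a monitored metric drifts beyond its
# tolerance. Metric names, baselines, and tolerances are hypothetical.

def control_gate(baseline, current, tolerances):
    """Return the list of metrics whose drift exceeds the tolerance."""
    flagged = []
    for metric, base_value in baseline.items():
        drift = abs(current[metric] - base_value)
        if drift > tolerances[metric]:
            flagged.append(metric)
    return flagged

baseline   = {"accuracy": 0.91, "fairness_gap": 0.04}
current    = {"accuracy": 0.89, "fairness_gap": 0.12}  # fairness drifted
tolerances = {"accuracy": 0.05, "fairness_gap": 0.05}

flags = control_gate(baseline, current, tolerances)
print(flags)  # ['fairness_gap']
```

Only the flagged metrics trigger a review, which keeps the control continuous yet unobtrusive.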
In summary, when developing artificial intelligence, companies should not only pay attention to operational and business factors but also consider intangible values such as trust, ethics, and the fairness of an AI. This all-round view can create a higher level of trust and acceptance, both internally and externally, and enable the added value to be realized more quickly.