Responsible AI and Companies

Written by Prajna Naiker

What is Responsible AI?

Artificial Intelligence (AI) has become a major topic of discussion as it has developed rapidly from systems that could answer simple questions into tools used by everyone, from students to industry specialists to social media platforms. AI models use Machine Learning (ML) to learn tasks by training on specific sets of examples. By learning how to respond to these examples, a model can also answer questions it has never been exposed to (Heaven, 2024). In this way, AI models infer rules by observing patterns in their training data and apply those rules to new problems. This means we can use AI to work through large amounts of data, increase efficiency, and understand the world faster.

However, AI also brings risks. Because AI algorithms are adapting quickly and their possible uses are expanding, there are considerable risks associated with using AI and the data it has access to, including biased outcomes, privacy concerns and unethical practices. To combat these risks, institutions use Responsible AI practices to ensure that the AI models they deploy are legal, safe and ethical. Although there is no firm consensus on which practices fall under Responsible AI, the Harvard Business Review offers 13 principles (Spisak et al., 2023) that align with the five values-based principles offered by the OECD (Russo & Oder, 2023).
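The idea that a model infers rules from examples rather than being given them explicitly can be made concrete with a minimal sketch. The snippet below is a toy nearest-neighbour classifier with made-up illustrative data (the features, labels and values are assumptions for demonstration, not from any real system): no rule is written by hand, yet the model labels an input it was never trained on by matching the pattern in its examples.

```python
import math

# Toy training set: each example is ([height_cm, weight_kg], label).
# The data and labels here are purely illustrative assumptions.
training_examples = [
    ([150, 45], "small"),
    ([155, 50], "small"),
    ([180, 85], "large"),
    ([185, 90], "large"),
]

def predict(features):
    """Label an unseen input by its nearest training example (1-NN).

    No rule is coded explicitly: the decision emerges from the
    pattern in the training data, which is the core idea of ML.
    """
    nearest = min(training_examples,
                  key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

# An input the model was never trained on:
print(predict([152, 47]))  # → small
```

Real AI models use far richer representations and far more data, but the principle is the same: exposure to labelled examples lets the system generalize to inputs it has not seen before.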

What Responsible AI Means for Companies

The OECD (Russo & Oder, 2023) publishes general principles that guide Responsible AI practices. These principles protect AI users’ data and promote unbiased outputs, security and transparency. They also help users and creators adhere to evolving regulations, so that workflows are not interrupted and a trusted relationship between users and creators is maintained.

Responsible AI is a key signal to companies that AI models, and the creators of those models, have their best interests at heart. Companies that follow Responsible AI practices hold themselves accountable to AI-specific regulations and to transparency, data protection and consumer protection legislation. AI biases should be reduced, and both human rights and human-centred values promoted. Sustainable development should also be part of the picture, with creators pursuing initiatives that improve both human skills and creativity.

There are different ways of implementing Responsible AI. The Harvard Business Review suggests 13 Principles of Responsible AI (Spisak et al., 2023), which include informed consent, transparency with employees and clients, proactive steps to mitigate AI bias, explainable AI systems, and disclosure of any data collection and sharing. Reducing the possibility of biased outcomes makes AI outputs more trustworthy and accurate, and the promotion of data privacy further bolsters trust in the AI model and its developers.

Benefits of Responsible AI for Companies

By adhering to the principles above, companies mitigate the risks and liabilities associated with using AI. One of these liabilities is the evolving regulatory landscape surrounding AI around the world. The EU Commission has proposed the AI Liability Directive, the United States has proposed the Blueprint for an AI Bill of Rights, and China has also proposed legislation to regulate AI (Gulley, 2023). These regulations will soon take effect, governing how AI is used and protecting users from harm. They will be informed by Responsible AI practices, as regulatory bodies are already looking to international institutions such as the OECD for guidance.

Companies will need to adhere to these regulations while remaining technologically competitive. By working with creators that already follow Responsible AI, companies can be assured that safeguards are already in place to mitigate the impact of regulations on their efficiency and outputs.

Eunoic follows Responsible AI principles and applies them to both its tools and the information used to create outputs. Client data is confidential and is never used to train the AI algorithms. Instead, Eunoic’s AI algorithm uses ESG-specific open-source or publicly available data to provide clients with accurate tools and information that drive company value. These tools help clients understand their ESG priorities and improve their ESG performance and perception. Eunoic is transparent about the objectives of its applications and tools and about where data is gathered from, and it complies with regulations in the spaces in which it operates. Responsible AI principles matter to Eunoic because they uphold the ethical and regulatory standards we abide by and the values of integrity, safety and privacy that Eunoic promotes.

References:
Gulley, A. (2023). *Why we need to care about responsible AI in the age of the algorithm.* World Economic Forum. https://www.weforum.org/agenda/2023/03/why-businesses-should-commit-to-responsible-ai/
Heaven, W. D. (2024). *Large language models can do jaw-dropping things. But nobody knows exactly why.* MIT Technology Review. https://www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/
Russo, L., & Oder, N. (2023). *How countries are implementing the OECD Principles for Trustworthy AI.* OECD.AI Policy Observatory. https://oecd.ai/en/wonk/national-policies-2
Spisak, B., Rosenberg, L. B., & Beilby, M. (2023). *13 Principles for Using AI Responsibly.* Harvard Business Review. https://hbr.org/2023/06/13-principles-for-using-ai-responsibly