appendix B Responsible AI tools


As generative AI models have become increasingly prevalent in enterprises, ensuring that they are developed and deployed responsibly is essential. Responsible AI (RAI) practices can help organizations build stakeholder trust, meet regulatory requirements, and avoid unintended consequences. Fortunately, many tools are available to support developers and architects in integrating RAI principles into their AI systems.

The following sections outline some of these tools and frameworks, which can help ensure transparency, fairness, interpretability, and security in AI.

B.1 Model card

A model card is a special type of documentation accompanying an AI model. It provides a standardized set of information about the model’s purpose, performance, training data, ethical considerations, and more. It’s akin to a product data sheet, offering transparency and facilitating responsible AI practices.

While it might seem odd to think of documentation as an RAI tool, model cards play an essential role: they help stakeholders understand the capabilities and limitations of GenAI models, such as those based on GPT architectures, ensuring that these powerful tools are used ethically and effectively.
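There is no single mandated format for a model card, but the categories above (purpose, training data, performance, limitations) map naturally onto a structured record. The sketch below shows one way a team might capture model-card fields programmatically and export them as JSON for publication; the field names and the example model are illustrative assumptions, not a formal schema.

```python
import json


def build_model_card(name, purpose, training_data, metrics, limitations):
    """Assemble a minimal model card as a dictionary.

    Field names here are illustrative, not part of any official
    model-card schema.
    """
    return {
        "model_name": name,
        "purpose": purpose,
        "training_data": training_data,
        "performance_metrics": metrics,
        "limitations": limitations,
    }


# Hypothetical model used purely for illustration
card = build_model_card(
    name="support-ticket-summarizer",
    purpose="Summarize customer support tickets for internal triage",
    training_data="Anonymized internal support tickets (2020-2023)",
    metrics={"rouge_l": 0.41},
    limitations=["English only", "Not intended for customer-facing output"],
)

# Serialize for publication alongside the model artifacts
print(json.dumps(card, indent=2))
```

Keeping the card in a machine-readable form like this makes it easy to version it with the model and render it into human-readable documentation.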

B.2 Transparency notes

B.3 HAX Toolkit

B.4 Responsible AI Toolbox

B.5 Learning Interpretability Tool (LIT)

B.6 AI Fairness 360

B.7 C2PA