Just as Santa needs his naughty-or-nice list at Christmas, organizations have recently recognized the need for a similar naughty-or-nice registry for AI models and use cases.
In today's rapidly evolving technological landscape, the reliance on models, particularly those powered by artificial intelligence (AI), has grown exponentially. These models drive decision-making processes across industries. However, as their influence expands, so does the risk they pose. Misaligned assumptions, data quality issues, or unintended biases in models can lead to catastrophic outcomes: financial losses, reputational damage, and legal consequences. Model Risk Management (MRM) has thus emerged as an essential discipline to ensure that models operate as intended, align with regulatory requirements, and serve their purposes without causing harm.
The necessity for robust MRM practices stems from the inherent uncertainties and complexities of modeling. Models are approximations of reality, built on assumptions and trained on datasets that might not fully represent the environment in which they are applied. Capturing all vital information in a centralized location therefore improves efficiency, transparency, and the pace of ongoing development, ensuring that the assumptions and risks of AI models and use cases are understood by everyone who wants to use them.
What has made MRM even more relevant today is the European Union's Artificial Intelligence Act (EU AI Act), one of the most comprehensive regulatory frameworks aimed at governing AI systems. Introduced to establish trust in AI while ensuring compliance with ethical and legal standards, the Act emphasizes transparency, accountability, and risk management.
The EU AI Act classifies AI systems into four categories of risk: unacceptable, high, limited, and minimal. High-risk AI systems, such as those used in critical infrastructure, healthcare, law enforcement, and finance, are subject to strict new requirements.
Robust MRM practices can help organizations comply with the Act. Organizations must not only validate models during development but also monitor their performance and impact post-deployment. This ensures that risks are mitigated proactively and that systems remain compliant as real-world conditions evolve.
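To make the post-deployment monitoring idea concrete, here is a minimal, hypothetical Python sketch (not part of the SAS solution, and not a prescribed method from the Act) that flags when a deployed model's live data drifts away from the data it was validated on, using the population stability index (PSI) as a simple drift signal. The feature values, random seed, and alert thresholds are illustrative assumptions only.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's live (actual) distribution against the
    validation-time (expected) distribution. Higher PSI = more drift."""
    # Bin edges come from the reference (validation) data
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the percentages to avoid division by zero and log(0)
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Illustrative data: scores seen at validation time vs. in production
rng = np.random.default_rng(42)
validation_scores = rng.normal(0.0, 1.0, 5_000)
production_scores = rng.normal(0.4, 1.2, 5_000)   # shifted: simulates drift

psi = population_stability_index(validation_scores, production_scores)
# 0.10 and 0.25 are commonly cited rule-of-thumb thresholds, not regulatory limits
if psi > 0.25:
    print(f"PSI={psi:.3f}: significant drift - trigger a model review")
elif psi > 0.10:
    print(f"PSI={psi:.3f}: moderate drift - monitor closely")
else:
    print(f"PSI={psi:.3f}: stable")
```

In practice such checks would run on a schedule against the model's real input features and outputs, and the resulting evidence would be logged in the central model registry so reviewers can see how the model behaves over time.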
MRM is available as a solution on the SAS Viya platform. To learn more, visit www.sas.com/fi_fi/software/model-risk-management.html or contact me at antti.heino@sas.com.
Merry Christmas,
Antti Heino
Principal AI Advisor, SAS Institute