CatTruxillo
SAS Employee

This is a discussion forum for the activities in the Human Centricity module of the Free SAS e-learning course, Responsible Innovation and Trustworthy AI.

Palliative Risk Score activity

Consider This: 

How can you combine an automated modeling recommendation with human-centric objectives?  

 

Please share your ideas in the discussion. 

3 REPLIES
DavidGould
SAS Employee

Encourage citizens to create an Advance Directive, or 'living will', for medical care, and integrate these documented preferences into the model's scoring inputs.

Examples include a 'Do Not Resuscitate' order, declining tube feeding or extended ventilation, and, in some jurisdictions, MAID (medical assistance in dying). A rough sketch of how such a directive could be layered onto a scoring pipeline follows the links below.

 

https://medlineplus.gov/advancedirectives.html

https://www.nia.nih.gov/health/advance-care-planning/advance-care-planning-advance-directives-health... 

https://www.canada.ca/en/health-canada/services/health-services-benefits/medical-assistance-dying.ht... 
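Purely as an illustration, here is a minimal sketch of that idea; the AdvanceDirective and Recommendation classes, field names, and rule layer are hypothetical assumptions, not any real SAS or clinical interface. The point is that documented directives sit as a rule layer after the automated score, and conflicts are flagged for the care team rather than silently resolved by the model.

```python
# Hypothetical sketch: layering documented advance-directive preferences on top of
# an automated palliative risk score. All names and fields are illustrative only.

from dataclasses import dataclass, field
from typing import List


@dataclass
class AdvanceDirective:
    """Preferences a patient has documented in a living will."""
    do_not_resuscitate: bool = False
    decline_tube_feeding: bool = False
    decline_extended_ventilation: bool = False


@dataclass
class Recommendation:
    intervention: str          # e.g. "extended_ventilation"
    risk_score: float          # output of the automated model
    notes: List[str] = field(default_factory=list)


def apply_directive(rec: Recommendation, directive: AdvanceDirective) -> Recommendation:
    """Flag (not silently drop) interventions that conflict with the directive,
    so a clinician reviews the conflict instead of the model deciding alone."""
    conflicts = {
        "resuscitation": directive.do_not_resuscitate,
        "tube_feeding": directive.decline_tube_feeding,
        "extended_ventilation": directive.decline_extended_ventilation,
    }
    if conflicts.get(rec.intervention, False):
        rec.notes.append(
            f"Conflicts with documented advance directive: {rec.intervention}. "
            "Route to care team for review."
        )
    return rec


# Example: the model recommends extended ventilation, but the directive declines it.
rec = apply_directive(
    Recommendation(intervention="extended_ventilation", risk_score=0.82),
    AdvanceDirective(decline_extended_ventilation=True),
)
print(rec.notes)
```

Keeping the directive check outside the model itself also makes it auditable: the reason an intervention was flagged is visible to the clinician rather than buried in model weights.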

makster
SAS Employee

This is a very interesting approach. It highlights how patient agency can be explicitly built into medical decision systems. However, it also raises important questions when applied to other sensitive contexts. For instance, consider patients with severe mental health challenges who may be at risk of self-harm. In such cases, prioritizing privacy and consent might conflict directly with the ethical duty to preserve life. The dilemma then becomes: should the system’s primary objective always be to save life, even if it means overriding an individual’s autonomy and privacy? Or should respect for patient agency and consent remain paramount, even when it may lead to preventable harm? This tension lies at the heart of building truly human-centric AI in healthcare.

jomana-khatib
Obsidian | Level 7

In today’s data-driven world, it’s not enough for models to be accurate—they must also be aligned with human values. When building automated systems, especially in sensitive areas like healthcare, we must ensure that recommendations go beyond technical optimization. A truly human-centric approach considers not only cost or efficiency, but also a person’s wishes, beliefs, and life circumstances.

For example, a model might suggest an in-home care plan because it’s cost-effective. But if the patient has no family support or the recommendation conflicts with their values, that outcome could do more harm than good. That’s why we must design systems to support, not replace, human decision-making.

By integrating ethical principles, individual preferences, and transparent logic into our models, we can create solutions that are not only smart but also compassionate. Ultimately, the goal is to ensure that technology serves people, not the other way around.
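To make the in-home care example concrete, here is a minimal sketch of that "support, not replace" idea; the names (recommend_care_plan, PatientContext) and fields are purely illustrative assumptions, not an existing API. The model's cost-optimal plan is never applied automatically: it is returned together with the patient's stated preferences and a plain-language reason the care team can read.

```python
# Hypothetical sketch of "support, not replace": the model's cost-optimal plan is
# paired with patient context and a human-review flag. Names are illustrative only.

from dataclasses import dataclass


@dataclass
class PatientContext:
    prefers_facility_care: bool   # stated preference from a care conversation
    has_home_support: bool        # family/caregiver availability


def recommend_care_plan(model_plan: str, ctx: PatientContext) -> dict:
    """Return the model's suggestion plus a human-review flag and the reasons."""
    concerns = []
    if model_plan == "in_home_care" and not ctx.has_home_support:
        concerns.append("In-home plan suggested but no caregiver support documented.")
    if model_plan == "in_home_care" and ctx.prefers_facility_care:
        concerns.append("Patient has stated a preference for facility-based care.")

    return {
        "model_plan": model_plan,
        "requires_human_review": bool(concerns),
        "concerns": concerns,     # transparent reasoning the care team can see
    }


# Example: the cost-effective plan conflicts with the patient's circumstances.
print(recommend_care_plan("in_home_care",
                          PatientContext(prefers_facility_care=True,
                                         has_home_support=False)))
```

Because the concerns are returned as readable text rather than a hidden score adjustment, the transparency described above comes along naturally: the clinician sees exactly why the suggestion needs a second look.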

