Two distinct design philosophies currently define how autonomous intelligent systems engage with the world: AI agents and agentic AI. The terms sound similar, but they describe technologies with distinct attributes. This introductory post distinguishes AI agents from agentic AI, introduces use cases from business and academia, and explores a few of the ethical and practical consequences of implementing agentic systems. At the end I'll leave you with resources that I am finding helpful in navigating this topic.
This post assumes that you have at least some background in statistics or data, and have wandered into the realm of AI either out of curiosity or out of business need. But this post makes no reference to math or statistics. Anyone should be able to join this conversation, and I hope you will, in the comments below.
An AI Agent executes specific tasks in response to inputs or instructions. AI agents tend to be reactive, predictable, and task-bound.
Agentic AI operates with a degree of autonomy and self-direction in ways that are proactive, adaptive, and goal-forming. Agentic AI can operate without human supervision to accomplish multi-step tasks, adapting its approach based on changes in conditions.
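The contrast between the two definitions can be sketched in a few lines of code. This is a toy illustration, not a real framework: every class, method, and command name below is invented for the example. The AI agent maps known commands to fixed actions; the agentic system holds a goal, perceives its environment, and re-plans when conditions change.

```python
class AIAgent:
    """Reactive and task-bound: maps a known command to a predefined action."""
    def __init__(self):
        self.handlers = {
            "take_note": lambda text: f"Noted: {text}",
            "book_meeting": lambda when: f"Meeting booked for {when}",
        }

    def handle(self, command, payload):
        handler = self.handlers.get(command)
        if handler is None:
            # No adaptation beyond the predefined logic.
            return "Sorry, I can't do that."
        return handler(payload)


class AgenticAI:
    """Proactive and goal-directed: plans multi-step work toward a goal
    and adjusts the plan when conditions change."""
    def __init__(self, goal):
        self.goal = goal

    def perceive(self, environment):
        return environment.get("conditions", {})

    def plan(self, conditions):
        # Re-derive the task list from current conditions (toy logic).
        tasks = ["gather_data", "act"]
        if conditions.get("disruption"):
            tasks.insert(1, "adjust_strategy")
        return tasks

    def run(self, environment):
        conditions = self.perceive(environment)
        return [f"{task} -> {self.goal}" for task in self.plan(conditions)]
```

Calling `AIAgent().handle("take_note", "budget review")` returns a canned response; calling `AgenticAI(goal="on-time delivery").run(...)` produces a different task sequence depending on whether a disruption is present, which is the essence of the adaptive, goal-forming behavior described above.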
Let’s look at three examples:
Julia is a marketing coordinator who uses a voice assistant to take notes and book meetings. The assistant responds to her input, but it does not initiate tasks or adjust behavior beyond the predefined logic.
Julia’s voice assistant is typical of an AI agent. It is useful, but its scope is limited. If her system were to begin suggesting rest breaks based on her calendar and the tone of her emails, it would be entering agentic territory. Agentic AI systems may proactively rearrange tasks, recognize behavioral patterns, and provide context-sensitive nudges.
This increased autonomy would require clearer disclosures, especially when systems derive insights from personal signals. When a system infers meaning from sensitive data, privacy becomes a first-order concern, and providers of agentic systems have an obligation to develop and evolve safeguards in parallel.
Raj is a director of logistics, supervising a fleet of delivery drivers. He relies on an AI system that monitors fuel prices, weather patterns, and forecast delivery volume over time. The system can autonomously reroute shipments, reallocate warehouse inventory, and update stakeholders before Raj even flags an issue.
Raj’s AI system does not wait for instructions. It acts on its own insights, reshaping operational strategy in real time. This agentic AI system illustrates a shift from automation tools to intelligent partners. Agentic AI can optimize operations by continuously reassessing business conditions and refining processes without explicit prompting. These systems might synthesize multi-source data to recommend interventions at scale.
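The pattern behind Raj's system can be sketched as a decision function over multiple signals. This is a hypothetical simplification: the thresholds, field names, and action labels below are all invented for illustration. The key idea is that the system synthesizes signals and decides to act, including notifying stakeholders, before a human flags the issue.

```python
# Invented thresholds for the sketch.
FUEL_SPIKE = 1.15   # act if fuel price rises more than 15% over baseline
STORM_RISK = 0.7    # act if severe-weather probability exceeds 70%

def assess(signals, baseline_fuel_price):
    """Derive interventions from multi-source signals without a human prompt."""
    actions = []
    if signals["fuel_price"] > baseline_fuel_price * FUEL_SPIKE:
        actions.append("reroute_to_cheaper_corridor")
    if signals["storm_probability"] > STORM_RISK:
        actions.append("reallocate_inventory_inland")
    if actions:
        # Update stakeholders proactively, before anyone asks.
        actions.append("notify_stakeholders")
    return actions
```

Run inside a monitoring loop, a function like this turns raw conditions into autonomous interventions; a production system would add human-override thresholds and audit logging, which connects directly to the governance questions later in this post.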
Note the difference in engagement between an AI agent (Julia's assistant) and agentic AI (Raj's system). AI agents offer support on request, while agentic AI adjusts tone, prioritizes escalations, and adapts workflows dynamically based on the sentiment data the system can access.
Blanca is a professor who uses a suite of AI tools to identify contradictions across research studies and to propose new experiments for testing theory. She also uses these tools to update her classroom lectures based on student performance in prior semesters.
Blanca's AI assistant navigates through databases and devises research ideas. It flags contradictions in literature and suggests new hypotheses. The system functions as an intellectual collaborator by identifying unexplored variables and recommending sources. This is an example of agentic AI.
By adapting instructional content to cohort performance trends, Blanca’s system also improves instructional support, diagnosing learning gaps and tailoring the content delivered to students. This application of agentic AI could be useful, for example, in countering the educational losses that a cohort of students experienced during the COVID-19 pandemic.
Unique ethical issues arise from the use of agentic AI in different settings. For example, in Blanca's academic use case, what are the implications of agentic AI on authorship transparency and academic integrity? A few years ago, universities scrambled to publish generative AI policies and those policies continue to evolve in response to how students and faculty use AI, and as the tools available to them become more sophisticated. The time for agentic AI ethical guardrails is here, but there is still a great deal to learn about the downstream impact.
Here are some questions I’ve been thinking about regarding ethics and governance for agentic AI.
Agentic systems challenge traditional supervision of models. What level of oversight is appropriate? What thresholds should trigger human intervention?
Continuous learning of AI systems carries a risk of carrying forward historical biases into future decisions. Ongoing auditing and explainability protocols are needed to assess potential bias. Who is accountable for implementing these protocols?
Agentic AI can make impactful decisions. How do we attribute responsibility, for better or for worse?
Public policy worldwide is evolving. How do we determine liability, especially when outcomes involve public trust or personal data?
As AI becomes more a part of everyday life, the consequences extend beyond technical issues into the personal and societal. We want to design AI systems that reflect our values and goals, and that serve to improve the world. I'd like to ponder these questions over the next few months and share those thoughts with you here. Let me know what you think about these issues, too.
I would like to hear from you. What kinds of AI applications have you been dreaming of, or even designing? What ethics-related topic in analytics should I take on next? Please leave suggestions in the comments!
There's a strong appetite for information about these topics right now. If you’re interested in reading more about agentic AI, please check out these posts:
The hard part of agentic AI: Designing for trust - SAS Voices
Building trust: A principled approach to ethical AI agent development - SAS Voices
From Chat to Decision: A Blueprint for Intelligent AI Assistants
SAS Agentic AI Accelerator – Register and Publish Models
Agentic AI: Powering the Next Evolution in Data and AI Lifecycle
Want to learn to do this in SAS? Check out this course: Agentic AI - How to with SAS® Viya®, which is part of the SAS Decisioning Learning Subscription.
Want to chew on ethical dilemmas and learn about responsible and trustworthy AI? Sign up for this free e-learning course: Responsible Innovation and Trustworthy AI, which is part of the AI Literacy for Everyone learning path.
Additional references that I found valuable in writing this post:
Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.
Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review, 1(1). https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/8
OpenAI. (2023). GPT-4 Technical Report. https://openai.com/research/gpt-4
Bubeck, S. et al. (2023). Sparks of Artificial General Intelligence: Early Experiments with GPT-4. arXiv preprint. https://arxiv.org/abs/2303.12712
Whittlestone, J., et al. (2019). The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
Find more articles from SAS Global Enablement and Learning here.