AI is a tool, just like a hammer is a tool. A hammer can be used for many things. It can build a beautiful house or a fun tree fort for kids to play in. Or it can destroy property or break a bone.
AI can be used for many positive things, like identifying cancer cells and improving vehicle safety. Or it can be used for blackmail, revenge, and spreading misinformation, among other disturbing things.
This blog will talk specifically about bots. On the one hand, AI is used to create and use more and more bots. On the other hand, AI is being used to identify and stop bots. Hmmm… does this mean that more and more AI must be combated with more and more AI?
AI and Ethics
Ray Greenwood of SAS contends that there is no responsible or irresponsible AI any more than there is a responsible or irresponsible calculator. You can use a completely accurate calculator to cheat on your taxes, for example. It’s not the calculator’s fault and, to be honest, the calculator couldn’t care less. See Ray’s excellent webinar (starts at minute 10).
However, AI can be used responsibly or irresponsibly. And any mistakes in an algorithm, such as bias, can be multiplied and magnified by AI.
One of the most challenging aspects of AI ethics is the ethics part. What one person believes is good, another may believe is evil. Is making weapons of war more accurate good or evil? Is improving the marketing of unnecessary, expensive products to low-income purchasers good or evil? Is replacing humans in menial jobs good or evil? Does facial recognition and location software benefit society by helping to find criminals, or is it an invasion of privacy?
With the caveat that the opinions expressed here are solely my own, here are some of the positive things that AI can accomplish:
But AI can also be associated with negative actions:
Although I am categorizing these tasks as positive or negative, there are many gray areas! Things are not always black and white. Would you move some of these items from positive to negative or vice versa?
Because there are many aspects of AI, I will focus in this blog specifically on bots.
Robots in Science Fiction
In science fiction, robots have often been anthropomorphized. Not only are they often given human emotions and agency, but also physical features like faces, arms, and legs.
Often, through the course of the story, bots are revealed to be either benevolent or nefarious. Rarely do you encounter a robot that simply makes mathematical mistakes resulting in the doom of humankind. Instead, they intentionally turn on humans, their creators, and willfully wreak havoc. It makes for a more exciting movie than, say, a computer that denies a loan through a small bias that is magnified through its algorithm.
But in reality, bots do not need faces or fingers (nor do they feel emotions, guilt, or shame). Let’s talk about real present-day bots and how they fit into our modern world, where so many transactions are done online.
Benevolent Bots
Some bots are designed to improve a user's experience online. For example, they may be programmed to provide the user with near real-time updates on topics of interest, like how many times the Dallas Cowboys kicker has missed the extra point in this one playoff game alone. Others are designed to answer user questions without the expense of a human staff member. However, even some of these “helpful” bots can be mighty annoying when they pop up without a request or can’t seem to figure out what you’re asking for.
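To see how simple some of these “helpful” bots can be under the hood, here is a minimal sketch of a keyword-matching FAQ bot. The questions, answers, and matching rule are all hypothetical illustrations, not any vendor’s actual implementation:

```python
# A toy FAQ bot: match keywords in the user's question to canned answers.
# The FAQ entries and the matching rule are hypothetical illustrations only.
FAQ = {
    ("hours", "open"): "We are open 9am-5pm, Monday through Friday.",
    ("return", "refund"): "Returns are accepted within 30 days with a receipt.",
    ("ship", "delivery"): "Standard shipping takes 3-5 business days.",
}

def answer(question: str) -> str:
    """Return the canned answer whose keywords best match the question."""
    words = question.lower().split()
    best_reply, best_hits = None, 0
    for keywords, reply in FAQ.items():
        hits = sum(1 for key in keywords if any(key in word for word in words))
        if hits > best_hits:
            best_reply, best_hits = reply, hits
    # No keyword match: fall back to the all-too-familiar canned reply.
    return best_reply or "Sorry, I don't understand. Can you rephrase?"

print(answer("What are your store hours?"))  # -> hours answer
print(answer("How do I get a refund?"))      # -> returns answer
print(answer("Do you sell halogen bulbs?"))  # -> fallback
```

The brittleness is visible in the last line: any question that misses the keyword list gets the canned fallback, which is exactly why these bots so often can’t seem to figure out what you’re asking for.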
Bots: Fake Accounts
Fake accounts that function automatically are also referred to as bots. Some of these may be harmless, but others are willfully used for illegal or unethical tasks.
These bots can be used to:
Even though Facebook removes billions of fake accounts every year, they still estimate that about 5 percent of their accounts (around 90 million) are fake. Spam bots attempt to mimic live user activity to spread content. They are common on all social media platforms and most of these spam bots require little or no human involvement and operate extremely efficiently.
Bots: Fake Reviews
Companies like FakeSpot use algorithms to help spot fake reviews on major websites like Amazon and Walmart. Tips for spotting fake reviews include:
The following looks like a fake review to me. Who reviews halogen light bulbs by saying, "They where [sic] easy to install"? Ummm… wouldn't that be true of all light bulbs?
If you have time, check out a suspicious reviewer’s profile and their other reviews. If they repeat the same phrases in other reviews, or all their reviews are five stars or one star, that is a red flag. But really… who has time for this!?
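Those two red flags are simple enough to automate. Below is a minimal sketch, assuming a made-up set of reviewer histories; it illustrates the heuristic, not how FakeSpot’s algorithms actually work:

```python
from collections import Counter

# Each reviewer's history is a list of (star_rating, text) pairs.
# The sample data is invented for illustration.
reviews_by_user = {
    "suspicious_sam": [
        (5, "They where easy to install."),
        (5, "They where easy to install."),
        (5, "Great bulbs, they where easy to install."),
    ],
    "ordinary_olive": [
        (4, "Bright, but one bulb failed after a month."),
        (2, "Packaging arrived damaged."),
        (5, "Fits my fixture perfectly."),
    ],
}

def red_flags(reviews):
    """Flag reviewers with uniform ratings or repeated phrasing."""
    flags = []
    ratings = {stars for stars, _ in reviews}
    if len(reviews) >= 3 and ratings in ({5}, {1}):
        flags.append("all five-star or all one-star ratings")
    texts = Counter(text.lower() for _, text in reviews)
    if any(count > 1 for count in texts.values()):
        flags.append("identical text reused across reviews")
    return flags

for user, reviews in reviews_by_user.items():
    print(user, "->", red_flags(reviews) or "no flags")
```

This automates the drudgery, which is the whole point: a person has no time for this, but an algorithm does.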
Service Bots: Stealing Jobs?
Many restaurants now let you place your food order from your phone. A McDonald’s near Ft. Worth, Texas unveiled a new automated dining concept, targeted at carry-out customers, that takes this one step further and pairs automated ordering with contactless order pickup. Perhaps this helps reduce the spread of disease? Or is it stealing menial jobs and, as such, a bad thing?
Other examples of bot misuse include skewing survey results, laundering money, and abusing free services. An example of the latter would be using a free cloud computing account offer to mine cryptocurrency. How are fake accounts created?
One solution to address the “stealing” of human jobs by AI and automation is to shorten the work week. The five-day, 40-hour work week has been the default standard in the United States since President Franklin Roosevelt signed the Fair Labor Standards Act of 1938. In the mid-1900s, a full work week for Maryland state employees was 35.5 hours; the work week had been lowered over the years in lieu of pay raises. Perhaps forward-thinking companies that value employee satisfaction will break from the current tradition and adopt a 32-hour, four-day work week as their new full-time standard?
Bots: Stealing Tickets
In the now-infamous Taylor Swift Ticketmaster fiasco, the whole system crashed shortly after her concert tickets went on sale. Joe Berchtold of Ticketmaster’s parent company blamed “an onslaught of bots that crowded out real fans and attacked Ticketmaster’s servers. While the bots failed to penetrate our systems or acquire any tickets, the attack required us to slow down and even pause our sales. This is what led to a terrible consumer experience.” One Ticketmaster detractor responded, “For the leading ticket company not to be able to handle bots is, for me, an unbelievable statement. You can’t blame bots for what happened.” The lesson: any business that doesn’t want its name dragged through the mud and its CEO hauled before the US Congress had better be bot-ready! Source: https://www.nytimes.com/2023/01/24/arts/music/ticketmaster-taylor-swift-bot-attack.html
Spotting and Protection From Bots
• Use AI to combat AI! Protection against bots uses AI-based real-time analytics to make near-instant authorization decisions. These analytics commonly look for anomalous activity (see the sketch after this list).
• Make it difficult for bots to create or access accounts. We’ve all seen the grid of nine pictures asking, “Are you a real person? Which of these contain fire hydrants?”
The trouble is, the trickier the protection gets, the trickier the bots get, leading to something of a vicious spiral.
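To make the first bullet above concrete, here is a minimal sketch of a rate-based anomaly check. The per-account request history and the threshold are made up for illustration; real bot-mitigation systems combine many more signals (timing, mouse movement, IP reputation) and score them in real time:

```python
from statistics import mean, stdev

# Requests per minute for one account over the past hour (made-up numbers).
history = [3, 2, 4, 3, 5, 2, 3, 4, 3, 2, 4, 3]
current = 120  # this minute's request count

def is_anomalous(history, current, z_threshold=4.0):
    """Flag activity far outside the account's own baseline (simple z-score)."""
    mu, sigma = mean(history), stdev(history)
    z = (current - mu) / sigma if sigma else float("inf")
    return z > z_threshold

if is_anomalous(history, current):
    print("Challenge or block: this request rate looks automated.")
```

A human browsing at 120 requests per minute is implausible; a script is not. That asymmetry is what rate-based checks exploit.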
Spam Bots
Even when they spread truths rather than disinformation, spam bots can be annoying and suck up your time and energy. Bots have been used extremely effectively to manipulate public opinion, distort financial markets, interfere in elections, and inflate the popularity of entertainers and politicians. In 2016, Twitter identified over 50,000 Russian-linked spam accounts that were cleverly designed to microtarget Twitter users and sow seeds of division related to the US election. Spam bots have also been used to spread misinformation about COVID-19, other diseases, and vaccines.
Phishing Attacks
Hopefully you have all taken the SAS security trainings and mentally armed yourselves against phishing attacks. As we better arm ourselves, however, the phishers become more and more sophisticated. Dastardly bots can cleverly masquerade as friends, acquaintances, or legitimate businesses and lure unwary people into providing confidential information or passwords.
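One common red flag taught in such trainings is a brand name dropped into an unfamiliar domain. Here is a minimal sketch of that single heuristic, with invented trusted-domain and brand lists; it is an illustration, not a real phishing filter (substring matching like this is crude and prone to false positives):

```python
from urllib.parse import urlparse

# Known-good domains and brand keywords; both lists are invented examples.
TRUSTED = {"paypal.com", "sas.com", "amazon.com"}
BRANDS = ("paypal", "sas", "amazon")

def looks_phishy(url: str) -> bool:
    """Flag URLs that drop a brand name into an untrusted host."""
    host = urlparse(url).hostname or ""
    if host in TRUSTED or any(host.endswith("." + d) for d in TRUSTED):
        return False  # the real brand domain, or a subdomain of it
    # Crude heuristic: a brand name appearing in some other host is suspicious.
    return any(brand in host for brand in BRANDS)

print(looks_phishy("https://www.paypal.com/login"))             # False
print(looks_phishy("https://paypal-secure-login.example.com"))  # True
```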
Bots in Warfare
"Future wars may depend as much on algorithms as on ammunition,” according to Robert Work, the former US deputy secretary of defense. The US Defense Department and other defense departments continue to invest more and more in artificial intelligence. Work (who currently serves on the board of Govini, a data and analytics company) maintained in 2017 that “rapid advances in artificial intelligence — and the vastly improved autonomous systems and operations they will enable — are pointing toward new and more novel warfighting applications involving human-machine collaboration and combat teaming. These new applications will be the primary drivers of an emerging military-technical revolution.” Source: Washington Post.
Folks like Work maintain that AI is transforming war just as earlier inventions like the rifle, telegraph, railroad, and airplane have done in the past. Big players in this arena include the usual “beltway bandits”, i.e., large US federal technology contractors like CACI, Leidos, Lockheed Martin, Northrop Grumman, Raytheon, and SAIC. In today’s military, AI is even used in soldier training, where soldiers experience simulated battlefields using virtual reality.
There has been much debate and discussion about the use of lethal autonomous weapons systems, sometimes referred to in the media as “slaughterbots,” “killer bots,” or “predator drones.” These are weapons systems that use artificial intelligence (AI) to identify, select, and kill human targets without human intervention. Former US President Obama received much criticism, as well as some applause, for his use of lethal drones. Military drones are currently being used by both Ukraine and Russia in the Ukraine-Russia war.
Flying drones can be used for audio-visual reconnaissance, or they can be outfitted with small bombs. Electromagnetic pulses and other communication jamming/intercepting means can be used to disorient drones, interrupt or intercept communications between drones and their operators, or even co-opt a drone’s operation.
So the question becomes, if society agrees that having human soldiers kill other humans is a necessary and acceptable approach to international conflict, what is it about using AI to accomplish this same task that makes it no longer acceptable? Is it the fear of error? Is it because the AI lacks a conscience? Now that a human being is no longer pulling the trigger or dropping the bomb from a plane, is it somehow less ethical? Is it the fear that if there is an error, there will be no one to blame? The current international legal framework is woefully inadequate to address this topic or, frankly, most topics surrounding AI ethics.
While killing machines get most of the media attention, there are also ongoing cyberwars on many fronts with bots trying (and succeeding) to invade critical infrastructure and communications systems.
Note in the screenshot above that even while I searched online for information for this blog, I was microtargeted by AI with Cole Haan ads.
Who is to Blame?
One of the key ethical questions is who is to blame when something goes awry. Anyone who has ever driven a vehicle knows that in some situations an accident cannot be avoided. Even the best autonomous vehicles can unfortunately have some accidents, and lives or limbs will be lost. In that case, who is to blame? The owner? The manufacturer? The software? Yet another question that is unsettled in the established legal and ethical realm.
Conclusions
One concern about AI is the fantasy of infallibility. Nothing is 100% accurate. Even if results are 99% accurate, 1 out of 100 times they are wrong. Some falsely believe that if an answer is given by AI, it must be unbiased and accurate. This is not true! Remember: garbage in, garbage out. Biased data leads to biased results. Inadequate data leads to inadequate results. Small errors can be magnified by AI. See my YouTube video on bias and SAS’s Fair AI Tools for more information on that topic.
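To put “99% accurate” in perspective, here is a back-of-the-envelope calculation; the daily decision volume is invented for illustration:

```python
# What a 1% error rate means at scale. The volume is a made-up example.
accuracy = 0.99
decisions_per_day = 1_000_000  # e.g., automated loan, claim, or content decisions

errors_per_day = decisions_per_day * (1 - accuracy)
errors_per_year = errors_per_day * 365
print(f"{errors_per_day:,.0f} wrong decisions per day")    # 10,000
print(f"{errors_per_year:,.0f} wrong decisions per year")  # 3,650,000
```

And if the underlying data are biased, those millions of wrong decisions will not fall evenly; they will concentrate on whichever group the bias touches.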
Do you agree that there is no responsible or irresponsible AI any more than there is a responsible or irresponsible watch? Your watch may not keep time accurately, but it is still the watch wearer’s responsibility to arrive at appointments on time. If your friend arrives half an hour late and blames their watch, or the red lights on the road, will you be annoyed with the watch or the red lights? Or with your friend?
Because ethics vary from person to person, the codification of these ethics into laws is really the only way to govern AI. Currently, autonomous vehicles must obey traffic laws, online information systems must obey data protection laws, and autonomous weapons must comply with the laws of war. Unfortunately, the world of AI is rapidly outpacing the associated laws, and the legal world has much catching up to do.
Teaser
Are you wondering what SAS is doing in the fight for Responsible AI? Stay tuned for my next blog, where I will cover this topic.
Real or Bot?
In a recent Microsoft Teams conversation with SAS employee @michaelerickson, Teams “decided” to make Michael look a bit bot-like.
For More Information
Interesting read: Klara and the Sun by Kazuo Ishiguro, a thought-provoking story about AI “friends.”
Read Part 2 - AI Ethics Part 2: Trustworthy AI at SAS
Find more articles from SAS Global Enablement and Learning here.