There is an enormous amount of excitement around artificial intelligence (AI) and machine learning, and has been for some time. Generative AI tools like ChatGPT have become part of everyday working life, giving rise to a widespread democratisation of the AI experience.
In the Asia-Pacific region, there are clear signs that organisations are rushing to jump on the AI bandwagon. Nearly half are planning a large increase in their investments in AI over the next 12 months. What’s more, over 40% expect a three-fold return on their investment in AI initiatives. Is this justified confidence, over-confidence, or possibly a sign of a lack of AI maturity in the region? New research from IDC, commissioned by SAS, provides insights on the state of data and AI in the Asia-Pacific region.
The real question is not so much ‘whether’ to use AI as ‘how’ to use it, and in particular how to get value from investments in this technology. The experiences of early adopters will not just affect their own future investments and implementations, but those of other organisations in their industries and beyond. It is therefore essential to understand more about the concerns of company leaders about using AI, and how these concerns can be addressed.
Loss of control over data: perception management lessons from the frontline
Jeremy Hebblethwaite is the Chief Technology Officer for Thakral One, a technology consulting and services company and SAS partner. Thakral One is headquartered in Singapore and has a presence across Asia and emerging markets. The company has clients from multiple sectors, including financial services, telco, healthcare, and consumer goods. Its work generally centres on helping clients adopt value-added bespoke solutions, make better decisions through analytics and data, and get the full benefits of cloud computing. Jeremy is clear that both AI and cloud are important for ongoing success with data and analytics, and that executives have different but related concerns about the two.
“There are different concerns across the two, with some overlap. For example, loss of control is crucial for both. With cloud, you are entrusting a third party with critical business operations. With AI, you are effectively relying on a ‘black box’. We have to be able to show executives that they won’t lose control of either their data or decision-making logic.”
He adds that the ethical concerns around AI are probably the most important for many executives.
“Executives are understandably worried about the reputational damage of getting it wrong. If an AI-based algorithm produces an outcome that is discriminatory, who is responsible and accountable? What controls can be applied to safeguard against the risk? Alongside that, we’ve heard a lot about potential bias in AI decisions, so people are concerned about that.”
Jeremy observes that part of the work of Thakral One, and of other similar organisations, is to help companies overcome these concerns. It is essential to invest in managing data in order to improve confidence in data quality. Companies also need to focus on improving transparency and on robust governance of decision-making and other processes. Jeremy notes,
“You always have to be able to offer an explanation for any AI-driven outcome. AI explainability ensures that businesses not only trust the outcomes but can also trace back how specific decisions were made, which is crucial for regulatory compliance and maintaining stakeholder confidence. With good quality data, this can be supported by more traditional data science methods.”
Understanding technology without the jargon
Jeremy suggests that there may also be another way in which software companies and technology consultancies can help their clients: by using alternative descriptions that focus on outcomes.
“Instead of talking about ‘AI-based algorithms’, it may be easier to describe them as ‘software solutions that improve decisions’, or ‘intelligent decision-making tools’. Executives relate to those descriptions more readily—and they can also see how to manage the risks.”
This approach has several benefits. First of all, it speaks to what the business is interested in: the value and the outcomes that will be achieved. This makes it easier to understand the potential benefits, and also to see how they might be measured. Second, it is clearer for non-technical audiences, who are likely to be the majority of people within client companies. Third, and by no means least, Jeremy makes another very good point.
“When you focus on what software can do, rather than what it is, you help clients position it more strategically. It shifts from ‘I want an AI project’ to ‘I have this strategic need, and this investment can fill it’. This avoids any new technology being seen as the answer to everything whilst reducing the hype. AI is not new though. Automated decisions based on logic have always been AI. The difference now is the lack of explainability compared to the relatively simple algorithm or code of the past.”
Balancing data and AI potential with responsibility
While the excitement around AI and its potential is undeniable, it is crucial to balance this enthusiasm with responsible, thoughtful investment. The concerns raised about transparency, bias, and ethical implications are valid and must be addressed to gain trust in AI-powered decisions.
By focusing on robust data management, transparency, and clear communication of AI's role, businesses can navigate these challenges. Investing in AI with these principles in mind will allow companies to unlock innovation while safeguarding their integrity and accountability, ensuring long-term success and confidence in AI-driven decisions.
Unlock the full potential of data and AI in Asia Pacific—download the Data and AI Pulse: Asia Pacific, 2024 eBook and discover how these technologies are reshaping the future of the region.