Is a lack of trust inhibiting the adoption of AI in South Africa?


A recent study by World Wide Worx shows slow adoption of artificial intelligence solutions in Southern Africa. A lack of skills is certainly one barrier, but studies increasingly show that poor-quality data and a lack of local (South African) data sets also drive up the risk of AI failure and inhibit adoption.

Syncsort’s 2019 State of Enterprise Data Quality report, released last week, shows that this is a global trend. The survey explores the challenges and opportunities facing organizations looking to improve data quality across the enterprise.

Whilst some 70% of the survey respondents felt that their business leaders had enough insight to inform business decisions, other recent industry statistics suggest that only 35% of senior executives have a high level of trust in the accuracy of their big data analytics.

Consider that nearly 50 percent of respondents indicated both that 1) their organizations lack a standard data profiling or data cataloging tool, and 2) they had personally experienced untrustworthy or inaccurate insights from analytics due to poor data quality. There appears to be either a disconnect or a difference in perspective around organizational data quality.
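To make the idea of data profiling concrete, the sketch below shows the kind of basic checks a profiling pass runs over a table: completeness, uniqueness, and duplicate detection. It is a minimal illustration in Python using pandas; the customers.csv file and its columns are assumptions made for the example, not anything from the Syncsort survey or a specific profiling tool.

```python
import pandas as pd

# Illustrative only: "customers.csv" and its columns are assumed for this sketch.
df = pd.read_csv("customers.csv")

profile = {}
for col in df.columns:
    series = df[col]
    profile[col] = {
        "dtype": str(series.dtype),
        # Completeness: share of non-null values in the column
        "completeness": round(series.notna().mean(), 3),
        # Uniqueness: share of distinct values (helps spot candidate keys and repeats)
        "distinct_ratio": round(series.nunique(dropna=True) / max(len(series), 1), 3),
        # A few example values to eyeball formats and obvious junk
        "sample_values": series.dropna().unique()[:5].tolist(),
    }

# Row-level duplicates, a common source of inflated or double-counted metrics
duplicate_rows = int(df.duplicated().sum())

print(f"Rows: {len(df)}, duplicate rows: {duplicate_rows}")
for col, stats in profile.items():
    print(col, stats)
```

Even a simple pass like this surfaces the gaps, duplicates and inconsistent formats that later undermine trust in analytics; dedicated profiling and cataloging tools automate and scale these same checks.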

More tellingly, 75% of the respondents cite data quality as a high or growing priority in their organizations. This aligns with other industry reports in which 84% of CEOs say they are concerned about the quality of the data they are basing decisions on. With greater emphasis placed on the ability to respond quickly to customers, to innovate rapidly, and to gain new competitive insights, simply having good-quality operational data is no longer good enough.

The top challenges are neither new nor surprising: the top three are the many varied sources of data (70%), applying governance processes to measure and monitor data quality (50%), and the sheer volume of data (48%).

Industry expert Michael Stonebraker called the first of these the “800-pound gorilla in the room” at this year’s Enterprise Data World conference.

75% of respondents noted large data volumes as a barrier to data profiling, which is needed to gain insight into data quality issues and, ultimately, to ensure the quality of the data being used. Whether that data is stored in a data lake or in the cloud, roughly 20% of participants rated its quality as “Fair” or “Poor”. Without the ability to gain an effective understanding of data quality, or to address it, it is no wonder that a recent study by Dimensional Research reports that nearly 80% of AI initiatives have stalled due to data quality issues.

I have been invited to speak at next week’s BI and Analytics conference, where I will look at some of the challenges inhibiting successful AI and bring in some of this research. If you can’t join us, please download the report for highlights from the survey as well as a deeper look at the full results.
