AI Accessibility: The Next Spreadsheet Revolution for Modern Business?
The Key to Better Business Outcomes from Data Science
In a Harvard Business Review article, Alessandro Di Fiore, founder and CEO of the European Centre for Strategic Innovation (ECSI), countered the assumption “that companies with more data scientists have a better chance of generating business impact.” Based on both his consulting work and his research, he has concluded that hiring a larger number of data scientists does not necessarily produce better results for a business.
The same observation was made to me in a recent interview with Henry James, founder and deputy CEO of Fincross International, who said that what he’s seen at businesses with vast resources to invest in data science is that they can, in fact, do better with a team of five than with a team of 50.
Extending AI to Those with Domain Expertise
What really makes the difference for a company, Di Fiore pointed out, “is the democratization of access to AI tools and decision-making power among managers and employees which creates more tangible value.” He went on to observe, “Best practices show how democratization can bring about quicker and better distributed decisions, making companies more agile and responsive to market changes and opportunities.” (To learn about how some businesses are already using AI, check out AI Today: Who Is Using It Right Now, and How.)
While he doesn’t care for the term “democratization” and prefers “team sport,” Todd Hay, Ople’s COO, agrees with that view. As he explained in an interview with Techopedia, he sees the shift of AI from a rarefied, centralized specialty to the masses as analogous to the adoption of spreadsheets: a useful tool that every businessperson should be able to use.
“Subject and domain experts are in the best position to assess a prediction that can impact the business,” Hay said. But with a setup that puts data scientists in charge of those predictive models, “they’re excluded from the process.” That’s not to the benefit of the business.
Though he concedes that data scientists have the mathematical and statistical expertise to judge whether a model performs well, they are not in a position to determine which questions the AI should be asked to solve. That gap between model expertise and stakeholder expertise, he said, is why “70%-80% of case models are never used.”
Understanding What Goes Into the Decisions
There are further ramifications to not understanding how a model works. In regulated industries like healthcare, insurance or finance, Hay said, the concern is being asked to explain the decision-making process to auditors and not being able to do so.
Rick Saletta, Ople’s senior sales marketing executive of AI, machine learning & data science, agreed in the interview, saying this is why businesses are now looking to develop “transparent AI,” also known as explainable AI. As we saw in AI’s Got Some Explaining to Do, without a clear explanation of how AI reaches its conclusions, you cannot be sure it is “bias-free.” He added that it is no longer acceptable to shrug off the business’s responsibility to operate fairly by saying “the AI did it.”