Is Recent Criticism of IBM’s Watson Unfair?

On May 8th, Chamath Palihapitiya, Founder and CEO of Social Capital, declared on CNBC that “Watson is a joke, just to be completely honest”, adding: “I think what IBM is excellent at is using their sales and marketing infrastructure to convince people who have asymmetrically less knowledge to pay for something”. IBM responded that “Watson is not a consumer gadget but the A.I. platform for real business. Watson is in clinical use in the U.S. and 5 other countries. It has been trained on 6 types of cancers with plans to add 8 more this year.”

Watson’s contributions in specific fields of application are impressive, but Palihapitiya’s argument that IBM’s claims are too broad deserves examination, not least because the parallels between success stories such as oncology and other industrial applications seem tenuous. Just as an expert oncologist may offer little insight outside their area of expertise, so too can specialized software systems offer little value in other domains. IBM’s counter-argument, which points to specific success stories, is therefore weak: it side-steps the main charge, namely that IBM over-hypes Watson by implying that successes in certain niches are relevant to broader applications. IBM needs to explain why high-profile successes in one domain carry over to broader industrial challenges if its claims are to have weight.

Across the world there are research teams working diligently on grand AI challenges in areas such as Procurement, Transport, Sport, Law, Health and Agriculture. The primary obstacles for software companies developing AI solutions typically surround the transformation of noisy data from multiple real-world sources into cleansed, validated streams with desired outputs that constitute training data for classification, image recognition or other challenges. There is no shortage of AI software libraries to use when deploying solutions on any cloud infrastructure provider of your choice, so the argument for ‘AI Infrastructure’ is less compelling than for Hardware Infrastructure as a Service. The IaaS business case is obvious: software providers don’t want to manage IT hardware, and it’s more cost-effective to outsource that activity. Hardware management is not a core aspect of a software business, whereas developing and managing AI is very much central to most businesses, and any Enterprise should retain control of and responsibility for it. So AI as a Service, whilst offering some value for most companies, is easier to forgo, and for strategic reasons many may prefer to keep this capability in-house.
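To make the data-preparation obstacle concrete, here is a minimal, purely illustrative sketch of the kind of cleansing and validation step described above. All field names and rules are hypothetical, not drawn from any actual Watson or procurement system:

```python
# Hypothetical sketch: turning noisy records from multiple real-world
# sources into validated rows suitable for use as training data.
# Field names ("supplier", "amount", "category") are illustrative only.

def clean_record(raw):
    """Normalise one raw record; return None if it fails validation."""
    supplier = str(raw.get("supplier", "")).strip().title()
    try:
        # Amounts often arrive as strings with thousands separators.
        amount = float(str(raw.get("amount", "")).replace(",", ""))
    except ValueError:
        return None  # unparseable amount -> drop the record
    if not supplier or amount <= 0:
        return None  # missing supplier or nonsense value -> drop
    return {"supplier": supplier,
            "amount": amount,
            "category": str(raw.get("category", "unknown")).lower()}

def build_training_set(sources):
    """Merge records from several noisy feeds, keeping only valid rows."""
    rows = []
    for source in sources:
        for raw in source:
            cleaned = clean_record(raw)
            if cleaned is not None:
                rows.append(cleaned)
    return rows

# Two imagined feeds: a structured ERP export and a noisier one.
erp_feed = [{"supplier": " acme corp ", "amount": "1,200.50",
             "category": "Steel"}]
email_feed = [{"supplier": "", "amount": "n/a"}]  # fails validation
training = build_training_set([erp_feed, email_feed])
```

Even in this toy form, most of the code is devoted to cleansing and validation rather than to any learning step, which is the point the paragraph above makes about where the real effort lies.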

Another concern is that addressing a relevant industry challenge often requires techniques from multiple subfields of AI. For example, Watson has demonstrated strength in Natural Language Processing, but that doesn’t necessarily translate well to Planning or Optimization challenges. Intelligence is such a broad concept that progress in specific functional areas is being driven by teams whose sole focus is that niche. The components of AI systems that exceed human expert level rely upon the software development team truly understanding the details of that domain. Nobody would dispute that IBM has some very strong capabilities in various subfields of AI, but the real industry challenges lie in capturing comprehensive and well-structured data, big-data management and the digitalization of legacy, un-optimized business processes.

In the case of Procurement, we recently saw IBM decide to mothball its Emptoris product and partner with SAP Ariba. The May 17th press release stated that “Leveraging SAP Leonardo, IBM Watson technologies and SAP Ariba, the solutions will bring intelligence from procurement data together with predictive insights from unstructured information to enable improved decision making across supplier management, contracts and sourcing activities.” Again, this seems to miss the point, because predictive insights drawn only from unstructured information are a weak foundation for effective AI. The promises in the press release that Watson could help with “defining the correct Request for Proposal type, identifying appropriate suppliers to participate based on commodity category” are underwhelming. AI-enriched software can deliver much more than simple classification, but if the only structured data it can parse are the historical categories of spend and the type of RfP that was issued, then it is limited to just that. Real value comes from understanding how the sourcing event was designed, the parameters used in configuring the bid process and the operational details of its success or otherwise, so the system can learn how events should be improved in future.
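The contrast between shallow and rich structured data can be sketched as follows. This is an illustrative toy, not any vendor's schema; every field name and value here is hypothetical:

```python
# Illustrative sketch: the structured features a model can parse bound
# what it can learn about a sourcing event. All names are hypothetical.

# With only the historical spend category and RfP type, a model can do
# little more than classify the next event the same way as the last.
shallow_features = {"spend_category": "logistics",
                    "rfp_type": "sealed_bid"}

# Richer structured data about how the event was designed and how it
# performed gives a model signals to learn which configurations work.
rich_features = {
    **shallow_features,
    "num_rounds": 2,              # bid-process configuration
    "lot_structure": "bundled",   # how items were grouped for bidding
    "suppliers_invited": 14,      # event design detail
    "suppliers_bidding": 9,       # operational outcome
    "savings_vs_baseline": 0.12,  # outcome to learn from
}

def learnable_signals(features):
    """Count the distinct signals a model could condition on."""
    return len(features)
```

The difference is not the learning algorithm but the feature set: without the event-design and outcome fields there is simply nothing for a model to improve upon, which is the limitation the paragraph above describes.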

Powerful and groundbreaking AI is possible, and that is what Keelvar is aiming towards. The hard challenge lies in preparing the foundations for the collection of rich, well-structured data that is reliable and clean. There are no quick shortcuts around this, because the significant upstream challenges in Procurement that offer most value when supported by AI are complex and require all details to be known and accounted for systematically. The first steps on the path to truly intelligent procurement have already been taken, and some significant successes have already been enjoyed. A white paper on Sourcing Robotics described some important stepping stones and a realistic plan towards the Holy Grail of Autonomous Sourcing: https://keelvar.com/white-paper-download/. From Keelvar’s perspective, at least, really valuable AI for Procurement requires that no shortcuts be taken in the preparatory stages on that path.

Dr. Alan Holland has a PhD in Computer Science and lectured in Artificial Intelligence at University College Cork. He is the Founder and CEO of Keelvar Systems.

The image is from Clockwork, Creative Commons licence: https://commons.wikimedia.org/wiki/File:IBM_Watson.PNG