The full episode is available here and worth a listen.
I’ve heard many CEOs talk about AI but this conversation was different.
Here are some excerpts I thought were especially interesting:
The Six Stages of the Enterprise Journey with AI
HB’s Notes: A very succinct and representative story of what enterprises, especially in BFSI / healthcare, have seen with AI / automation.
1. Imagining a new era with IBM Watson
(after watching it in action vs Ken Jennings in Jeopardy)
In 2013, we were one of the first guinea pigs, pilots with IBM for the use of Watson, when they came up with Watson for, you know, natural language processing and so on. My vision at that time was that I’d plug Watson into Bloomberg; Watson would read Bloomberg in real time, and be able to read all the graphs, charts, everything.
Therefore, instantly I would know whenever the chilli (Chilean Peso?) had gone up, or gold had gone down, or what had happened to the Apple stock.
On the other side, it would read the half a million wealth portfolios I have, and constantly keep figuring out, hey, you know, this one’s got too much gold or futures or too little silver, then match it and make recommendations every morning to rebalance your portfolio.
That was my vision.
2. The Buyer Test Fail
(when the Enterprise buyer decides to play around with the product and finds a failure that then gets repeated over and over)
I still remember vividly, there was a sentence, “Greece is not yet too big to fail”. It could not get the context of that: it read “not yet too big to fail” and basically decided that “Greece was too big to fail”, right.
HB’s Notes: You’ve not sold SaaS to Enterprise if your buyer doesn’t have one adverse example that they remember and quote for years to come.
3. AI fails to deliver
Watson didn’t work for us, because I realized that natural language processing at that time turned out to be quite primitive; it was clunky. First of all, it couldn’t read graphs, it couldn’t read pie charts, it couldn’t read pictures. But also, it was only doing sentence parsing.
4. The Verticalized Solution
(that has trained on your particular domain and actually understands the user journey)
Fast forward a few years later, I invested in a New York based company called Consisto (??). We actually plugged it into call centers to be able to do chat: people would talk, and Consisto would do the response.
5. The High Quality Bar
(replace Consisto with RPA or anything else and the story stays the same)
Even though it was in the domain, financial services, and we trained the models like heck, despite that, the accuracy rate of being able to hold a good conversation was about 85-87%. What that means is that one in every six or seven callers would get completely absurd information. That’s not good enough to put into the market; one in six or seven times, you’re talking nonsense.
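The arithmetic behind that “one in six or seven” figure checks out: an 85-87% accuracy rate leaves a 13-15% error rate, i.e. roughly every sixth or seventh call goes wrong.

```python
# 85-87% accuracy leaves a 13-15% error rate, i.e. roughly one
# absurd answer every 1/0.15 ≈ 6.7 to 1/0.13 ≈ 7.7 calls.
for accuracy in (0.85, 0.87):
    error_rate = 1 - accuracy
    calls_per_failure = 1 / error_rate
    print(f"{accuracy:.0%} accurate -> one bad call in ~{calls_per_failure:.1f}")
```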
6. The Lobotomy & The Human in the Loop
(simplify what you ask of automation and add a Human to deal with edge cases)
So we sort of crafted what we call guided conversations. We did not let it loose on open conversations; we would craft it into a yes/no situation. The AI tool would handle the understanding, but would go through a series of guided conversations to make sure we got the output right.
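A minimal sketch of what such a guided, yes/no conversation flow might look like. All names, prompts, and the flow itself are hypothetical illustrations; the episode does not describe the actual DBS implementation:

```python
# Hypothetical "guided conversation": instead of open-ended chat, the
# caller walks a fixed decision tree, and the AI component is only asked
# to classify each free-form reply into a constrained yes/no decision.

def classify_yes_no(utterance: str) -> bool:
    """Stand-in for the NLU component: reduce a free-form reply to
    yes/no (a real system would use a trained model here)."""
    return any(word in utterance.lower() for word in ("yes", "yeah", "sure", "correct"))

# Each node: (prompt, next node if yes, next node if no); strings are outcomes.
FLOW = {
    "start":    ("Is this about an existing account?", "existing", "new"),
    "existing": ("Is it about a recent transaction?", "dispute", "balance"),
    "new":      "Routing you to account opening.",
    "dispute":  "Routing you to the disputes team.",
    "balance":  "Here are your balance options.",
}

def run(node: str, answers: list[str]) -> str:
    step = FLOW[node]
    if isinstance(step, str):   # reached a leaf: final outcome
        return step
    prompt, yes_next, no_next = step
    reply = answers.pop(0)      # caller's answer to `prompt`
    return run(yes_next if classify_yes_no(reply) else no_next, answers)

print(run("start", ["yes", "yeah it is"]))  # -> Routing you to the disputes team.
```

The key design point from the quote: the model’s job shrinks from generating an answer (where 85-87% accuracy fails) to classifying a constrained reply, with the scripted tree guaranteeing the output.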
1. Imagining a New Era with ~~IBM Watson~~ ChatGPT
However, with what ChatGPT showed in November, I had not kept pace; I did not know that we had evolved so far, and that GPT-3 and GPT-4 are complete game changers. To me, I’ve often felt that ChatGPT has rekindled my dystopian view of how this is going to be really impactful on the future of everybody. And not just banking, of everybody. I mean, today, from your MOOC on language models, the capacity to do what a person can do is just extraordinary.
The Road to Dystopia
HB’s Notes: Over the past 6 months I’ve spoken to a lot of business folks about AI. In just under 3 minutes, Mr Gupta summarizes nearly all the things I’ve seen Enterprise leaders do & say about ChatGPT.
Experiments & Productivity Enhancements
We’ve got 10 different projects running with ChatGPT, or generative AI models, to essentially do minute-taking in meetings, to write our annual report, to write up our research papers, to send marketing material to a customer. So everything which I have a team of 5-8 people doing today, I can do with one person, or maybe even zero very soon. It’s quite clear to me that the productivity you get from some of these things is massive.
Immediate Impact means this time is different
You get to see the impact of that within 12 months; it’s not going to be 5, 7, 8 or 10 years.
Job Losses & Re-skilling
Will these lost jobs be replaced by something else? Multi-billion dollar question. At DBS we’ve been quite fortunate that so far we’ve been able to re-skill people to do different jobs. As I look forward, it’s not entirely clear to me that I can find the jobs for the kinds of skill sets and people that ChatGPT could take out.
How do you start making the distinction between a robot and a human? This has both passed and failed the Turing test. It has passed the Turing test, because it’s obviously so perfect. But you can make out that it wasn’t a human being, which is how it’s failed the Turing test: I couldn’t have written it that well.
My industry is an industry which is built on discrimination. We’re paid to discriminate between good borrowers and bad borrowers. That’s what my shareholders expect me to do: give money to people who are going to pay it back.
But when I build an AI model which does the same thing, and does it better than I can, to discriminate between good and bad borrowers, that creates a lot of consternation in society, because it could lead to redlining and blacklining, where you do or don’t serve some geographies, and don’t serve some ethnicities. But that’s what the AI is telling you: those are the better payers, and those are the worse payers.
The insurance industry is based on a fundamental premise that none of us know where the risk is. So we neutralize the risk. We all share the risk. If all of us are sitting in a room, we don’t know who’s gonna get cancer, so we all put money in a kitty, which is the premium, and the person who gets cancer basically withdraws the money. Now, what happens when AI tells you with 100% surety that this person is going to get cancer and that one is not? End of the insurance industry.
The people who know they’re not getting cancer don’t want to pay the premium. As for the people who know they’re getting cancer, nobody wants to insure them.
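This adverse-selection point can be made concrete with a toy model (all the numbers below are invented for illustration): when nobody knows who is at risk, everyone rationally pays the pooled premium; once risk is known with certainty, the pool collapses.

```python
# Toy adverse-selection model (numbers invented for illustration).
# 100 people, 5 of whom will incur a 100,000 loss.
population = 100
sick = 5
loss = 100_000

# Risk unknown: the fair pooled premium equals each person's expected
# loss, so everyone is willing to pay it and the pool works.
pooled_premium = sick * loss / population  # 5,000

# Risk known with certainty: the 95 known-healthy face a 5,000 premium
# against a 0 expected loss and opt out; covering only the 5 known-sick
# requires charging each of them the full loss, which is no longer
# insurance at all.
premium_if_only_sick_remain = loss  # 100,000 just to break even

print(pooled_premium, premium_if_only_sick_remain)  # -> 5000.0 100000
```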
AI is racing down the path, and nobody has started thinking about what can go wrong. How do we put up guardrails? How do we decide what is not appropriate to do? None of it is clear at all.
Privacy vs Social Good
When COVID happened, we had a case early in February 2020. My BI team used a bunch of data: door-tap data, outflow data, meetings data. Within an hour they had the first contact tracing output for the bank, and published it by the evening, so everybody knew whether they were at risk, whether they needed to quarantine, etc. I did an OpEd in the Financial Times saying this is the power of data and how well you can use it. I got flamed by people in the West, saying you had no right to use the employee data to do this stuff. Whereas all my employees in Asia loved it. They were saying this is the best thing that happened because it kept us all safe. So this notion of collective use of data for good outcomes, social outcomes, relative to individual rights over data, it’s a very East vs West concept. So there’s no absolute.
You can’t regulate away the availability of data. You can’t regulate away the fact that the cameras in airports and cameras everywhere are capturing your movement. We can’t regulate away the fact that you have a large digital footprint that everybody is able to track all the time.
The best analogy to it for me is how do you control a gun versus a knife?
You control a gun through a licensing regime. So you need to go get a license before you can go buy a gun. But you control the knife through appropriateness, suitability and use.
Anybody can buy a knife at any hardware store. If you use it for cutting an apple, it’s an appropriate use. If you use it to stab somebody, you get thrown into jail. I think the use of data and AI will eventually have to go that way.
DBS uses a rubric called PURE to evaluate the use of Data & AI:
- Purposeful (there should be a good reason to tap into data)
- Unsurprising (no one should be spooked when they hear that we’re using this data)
- Respectful (do not invade privacy without a good reason)
- Explainable (we can explain model behaviour)
This is the most detailed explanation I could find: Capgemini Research Institute - Interview with Paul Cobban, Chief Data & Transformation Officer at DBS