S4 - Episode 29
Data Masters Podcast
Released April 15, 2026
Runtime: 44m12s

Applying the Scientific Mindset to Machine Learning, Data Science and AI with Jonathan Burley of Bloomberg Industry Group

Jonathan Burley
Director of AI at Bloomberg Industry Group

Building effective AI products isn't just about using the latest large language model; it's about asking the right questions and solving real problems. In this episode, we're joined by Jonathan Burley, Director of AI at Bloomberg Industry Group, to explore his journey from modeling climate systems to leading AI strategy. Jonathan discusses why the scientific mindset is critical in machine learning, the value of the minimum viable experiment, and how to avoid the pitfalls of generative AI demos.


Key Takeaways:

00:00 Introduction.

03:14 The evergreen skills of handling data nuances help scientists transition into industry.

08:57 The scientific method provides a foundational mindset for reasoning under uncertainty.

16:39 Frame conversations around concrete business problems instead of leading with new technology.

19:46 Focus on minimum viable experiments to test core assumptions before committing to a minimum viable product.

24:46 Find unexciting areas of the economy where AI tools can deliver rapid and measurable ROI.

38:20 Approach generative AI demos with caution because they easily disguise incomplete products.

41:51 Solve the most boring, thankless and repetitive tasks to build tools experts actually want to use.

Resources Mentioned:

Bloomberg Industry Group website

Actifai website

Continuous Delivery — David Farley

[00:00:00] Jonathan:

Be really careful of demoing Gen AI. It breaks the rules and will often look a lot closer to being a real product than it actually is.

[00:00:34] Anthony:

Welcome back to Data Masters. Joining us today is Jonathan Burley, the Director of AI at Bloomberg Industry Group, where he leads AI and machine learning initiatives across their legal, tax, and government divisions.

[00:00:51] Anthony:

Jonathan brings over 15 years of experience to the table, ranging from his early days as a strategy consultant in London to co-founding Actifai, where he served as the Head of Data Science. With a PhD in computational geophysics from the University of Oxford,

[00:01:08] Anthony:

Jonathan has a unique academic pedigree that includes research fellowships at both Oxford and Harvard University. In today's episode, we'll explore Jonathan's journey from modeling volcanic systems to leading AI strategy at

[00:01:24] Anthony:

one of the world's most data-rich organizations. And we'll dive into his philosophy of practical AI, the idea that a simple solution is often the most effective, and discuss why he believes

[00:01:37] Anthony:

that the scientific mindset is often more valuable in machine learning than pure coding proficiency. And finally, we'll spend some time taking a look at how he's leveraging Gen AI to transform professional data products. Jonathan, welcome to Data Masters.

[00:01:59] Jonathan:

Pleasure to be here.

[00:02:00] Anthony:

So, before we dig into what you're doing today, your

[00:02:05] Anthony:

background is fascinating. Geophysics is, I suppose for listeners, maybe somewhat unexpected. I will say for the record that we've talked about this, but my experience has been that physicists in general, and scientists more generally, are often drawn to data work—into data science and data engineering.

[00:02:30] Anthony:

But again, I'm getting ahead of myself. You know, you've spent a lot of your time and energy thinking about modeling climate systems, volcanic systems. You've used, presumably, large and expensive supercomputing environments, but

[00:02:45] Anthony:

now you work in the corporate world at Bloomberg Industry Group, looking at Gen AI.

[00:02:50] Anthony:

Maybe talk a little bit about this transition from the academic work into the industry work.

[00:02:56] Jonathan:

Yeah, great question. I'd agree with the overall observation. From my time at university, I've got a wide circle of PhD friends in the physics and science departments. It is a large fraction of them, possibly more than half, who now do jobs in the AI and data science sector.

[00:03:14] Jonathan:

The evergreen skills of shouting at computers and really caring about the nuances of data turned out to be useful in the 2010s and 2020s as people were looking for what to do next after their doctorates. But me specifically, that PhD that I did around climate-systems-scale computational models—there are things that are really relevant to the work that I then went on to do.

[00:03:37] Jonathan:

There's how do you handle large-scale computational loads on distributed clusters? There was the nuance of the work I was doing with simulations, where instead of a really complex climate simulation—where you can run all of the climate simulations you like and it'll say something like, "Hey, the monsoon belt moves three degrees further south in all of these simulations; we are really certain that's a real effect," but why does that happen? You know, it's impossible to trace through all of the physics in the box—I moved to reduced-complexity systems: let's get the system as simple as possible while still

[00:04:10] Jonathan:

showing real effects in the world. And when you do that, you have a much, much stronger

[00:04:17] Jonathan:

ability to trace causative effects through the system. And I'm sure our data scientist listeners will be like, "Wow, that sounds a lot like data science." Yeah, a lot of the same mindset. The open question I always have is: is it your experiences that make your mindset, or is it the mindset that means you choose the right experiences?

[00:04:33] Jonathan:

Is it that all of those PhD scientists were always the right kind of people for data science, and they would've been great without the PhD? I don't know.

[00:04:41] Anthony:

You hit on something there I wanted to key in on, 'cause it's super interesting and I see this in a number of different places in my own life.

[00:04:48] Anthony:

You're pulling on the thread of explainability. And one way to think about this is, in a professional environment, if we say the propensity to churn is 22%, it doesn't really matter why, presuming it's right. People, you know, at some level,

[00:05:07] Anthony:

don't care. They're just like, "Great. If that's the answer, fantastic." But in academia, the focus is on understanding the causality in the chain of events that cause something to happen, almost—I want to be careful—not to the detriment of being

[00:05:24] Anthony:

right, but as equally important. And I'm also being a little loose with my language 'cause of course people do care why the propensity to churn is a particular number.

[00:05:33] Anthony:

But give me your thoughts on this distinction, you know, and it comes down sometimes to almost like academic papers, 'cause you can't write a paper if you can't explain why something happened.

[00:05:45] Jonathan:

Yeah, I think that's a fascinating question. I think some of the best scientist-industry converts are the people who really do care about the causative effect.

[00:05:56] Jonathan:

I think it's because in business, putting out an accurate number is fun. But what's really valuable is understanding the why. Like, you're 22% likely to churn, and we can reduce that by five percentage points if we can remediate this effect. That's valuable. And if you have someone who comes into the business who's obsessed with that kind of detail, who finds that joyful to work on and is very good at it, they are a person who drives a lot of value for the business.

[00:06:25] Jonathan:

I think it gets into the idea of data products as any way that you create value from data. That's an area where the scientist who wants to understand why and then do something with that why is very useful. I think—I don't know, this is likely to come up again—I find it a fascinating topic. It's that the "just a scientist" mindset isn't enough.

[00:06:44] Jonathan:

There are other skills needed to be an effective operator in industry, and the "I understand the why, and then what do I do with that information? Why is it meaningful to act on that information?" is the second keystone in building an effective industry operator from a scientist.

[00:07:00] Anthony:

So in that spirit, I mean you've spent a lot of—I mean you yourself, obviously, were an academic or worked in that area and have transitioned into data science.

[00:07:11] Anthony:

You've also hired many people that come from a traditional STEM background. And I don't want to put words in your mouth, but—and you're welcome to correct me if you think I'm misstating the case or overstating the case—but I think it'd be fair to say you prefer hiring people with a STEM background over traditionally trained classical computer scientists.

[00:07:32] Anthony:

So, talk about that. Or first of all, am I properly putting words in your mouth? And if you agree, like why? What is it about the STEM background? And I hope that for listeners, maybe this is a little bit of a joyful moment because presumably there's a bunch of STEM-background people who are like, "Oh, finally someone who gets me."

[00:07:55] Anthony:

But you tell me.

[00:07:58] Jonathan:

Yeah.

[00:07:58] Jonathan:

Well, I guess, let's see. I'll start with the nuance. Yeah, I've done a lot of hiring. Last time I ran my numbers, I think it's—I've interviewed a little bit under 1,500 people across data science, software engineering, and machine learning roles of various secondary titles: machine learning engineer, machine learning scientist.

[00:08:15] Jonathan:

And the successful candidates have come from a wide array of backgrounds. Some of the ones you expect—computer science, traditional sciences, maths—and then some less stereotypical ones like historians, classicists, and journalists. So I want to be careful to avoid giving the impression that only scientists can succeed, or if future colleagues of mine are listening, that I would only hire scientists.

[00:08:36] Jonathan:

But I would say it is a fact that scientists are better prepared to excel in these roles, particularly the data science end of the spectrum, than you might naively think. Which I don't think is an original observation from me; I'm not the first person to notice this. The first person I saw give an explanation that really resonated with me was David Farley of Continuous Delivery; if listeners haven't heard of him, I advise following David.

[00:08:57] Jonathan:

I think he's great. So that's my background piece. And then to answer the question, why do I think this? What's the balance between why you might like a traditional coding background versus a sciences background in someone that you hire? I'd say that you can absolutely train someone to code on the job.

[00:09:14] Jonathan:

Coding has clear rules. There's fast feedback. There is a well-defined notion of what correctness is when you're writing code. So if you're motivated and reasonably sharp, you can get productive surprisingly quickly. But the scientific method is different. It's a sort of foundational mindset about how you think and how you reason under uncertainty, and it's a lot harder to change how you think and how you approach a problem than some bolt-on of how you execute and perform work after you've done your fundamental thinking.

[00:09:46] Jonathan:

Why does that matter for us in our world? Well, machine learning in the real world isn't about implementing a model—or at least very rarely outside of a toy problem. It's about answering questions where the answer isn't known ahead of time, or optimizing under an unclear outcome and unclear constraints. And your really good scientists know how to transform a vague business need into a testable hypothesis. So they recognize the assumptions that have gone into how the problem is framed, and maybe question those assumptions and change the problem framing, which is the most useful thing you can do before you get to using a keyboard to code any of this into a solution.

[00:10:23] Jonathan:

Then, how to design experiments and interpret noisy results and go, "But this isn't working. We should try something different." I think the best version of a scientist—which I'm doing sort of air quotes if people are listening to the audio—the best version of a scientist is kind of the good version of the Silicon Valley fail-fast mentality: experimenting well against your original question and assumptions, and then going, "Okay, what do we do next?"

[00:10:50] Jonathan:

So coming back around, doing the short summary version of that: it's that CS fundamentals are teachable on a reasonably quick timescale. But there is something really valuable to how you operate in a business in the very uncertain space that data science and machine learning tends to be, and that can align really well with someone who comes in with the right science mindset.

[00:11:11] Jonathan:

I'd say there are two nuances about how I'd think about that. But I'll pause now 'cause I've been talking for quite a while.

[00:11:17] Anthony:

Sure. So, just to play that back, what I hear you saying—and this has been my experience as well, so I'm echoing my own perspective—is that scientists by their nature start with questions.

[00:11:29] Anthony:

They're naturally, almost drawn to questions, whereas I think a lot of computer scientists or coders, and even some business people, shy away from questions 'cause questions are uncertainty. And like if you're trying to write software and someone's like, "Well, here's a bunch of questions I have," they're like, "Well, that's not helpful to me. What I need are answers because I need to write code and I need the requirements. Give me the requirements and then I'll write you the code." Whereas a scientist is naturally drawn to: what are the pieces of this flow that you are least certain about?

[00:12:03] Anthony:

And so I can sort of develop a set of questions, hypotheses, theories about what the answers are. And they view the writing of the code as almost the drudgery—the, you know, if it was an archaeologist, that's the digging, not the

[00:12:17] Jonathan:

Yeah.

[00:12:18] Anthony:

you know, not the place to dig.

[00:12:19] Jonathan:

Yeah, I'd echo that.

[00:12:20] Jonathan:

I think that's spot on. I think what you're talking about there, of inside a business someone comes along and says, "Hey, let's do this," and there's a version of the stereotype of the science person who goes, "No, I'm not doing that. Let's sit down and talk about it for a week"—is a pivot of when scientists are great for a business and when they can be a really bad fit. That was going to be one of my nuanced thoughts: scientific thinking is not the only thing that you need to be effective in business. What you need varies a little bit by business, but for nearly all businesses, that negative stereotype of a scientist who wants to think and talk and not do anything for an extended period

[00:12:58] Jonathan:

is the nuance of why not all scientists are good at this. You need the other things, like that bias for action, the social knowledge of when to push back on a thing. What are the assumptions we can question here? And don't just question 'cause you find the questioning fun. Say it's really important if this assumption is true or not,

[00:13:17] Jonathan:

so I'm going to dig in on this one. This other assumption could be true or false, but it doesn't really change what we're going to do, so don't spend time talking on that one. There's that balance of don't investigate because curiosity is fun; pick where you apply your curiosity because there's real value in having a better certainty on this assumption or this question.

[00:13:39] Anthony:

Yeah, that's, I think, an excellent point. And maybe the other side of the coin is that if all you care about are questions, you know, the job is not identifying questions, but actually answering them. So you said two nuances. What was the second?

[00:13:55] Jonathan:

Yeah. The second nuance, which links to me doing kind of "scientist" in air quotes during this answer, is that I do get nervous about that language and what people might take from it, because it's not about being a scientist.

[00:14:05] Jonathan:

It's not this magical stamp of "you're a scientist, so you're excellent and everything else is terrible." There's this mindset of the appropriate level of curiosity and questioning the assumptions and understanding what value is, and you don't need to be a scientist to have that. Like, you can have that mindset without ever doing a science degree.

[00:14:22] Jonathan:

There are other degrees that teach a lot of that. I think if you went and spoke to a historian who really gets into talking about provenance on sources—like, what do you think you know, and how do you think you know it?—they have a lot of similarities. So I don't

[00:14:40] Jonathan:

want to imply that you need to be a scientist to be good at this work. Conversely, there are plenty of scientists, people who have been successful in science, who for various reasons in their specialty do not have this mindset, or who have some of this mindset but then don't have those more business-friendly skills for how to apply the scientific curiosity.

[00:14:54] Jonathan:

So really what I'm trying to do is be fair to the audience at large and not have the thing where, "Oh, this guy's got a PhD in science and therefore he loves PhD scientists and everyone else is a second-class citizen."

[00:15:04] Jonathan:

That really isn't it. It's a—there is a certain way of thinking about the world that some scientists have. It's an easy marker to find people who are more likely to have it, but it's certainly not the only place that you find people who are great at machine learning and data science roles.

[00:15:18] Anthony:

Sure. So the key takeaway here is, to your point, don't give up and go back and get a PhD, but rather start any data problem with a set of questions, assumptions.

[00:15:29] Anthony:

Lean into the questioning dimension because it helps you frame out the problem. And in a way, to that earlier point, don't start with writing code, because that's likely to drive you down to answering maybe the wrong question. In that spirit, and maybe shifting gears slightly away from the philosophical to a bit more of the hands-on, you've spent a lot of energy advocating for discovering the real root problems and using

[00:16:00] Anthony:

simpler tools to answer those questions rather than starting with the tools, the latest, greatest, coolest newfangled algorithm or thing, and then figuring out what problems you can go solve with that.

[00:16:15] Anthony:

And we'll get to this in a second, but just to foreshadow, in a way, Large Language Models and Gen AI have that feel about them, which is many organizations are saying, "Well, I need to have a Gen AI, you know, Large Language Model strategy" without then having an answer to, "Well, what problem can I go solve with this?" I don't know. So, but let's wait for a second before we get there.

[00:16:39] Anthony:

Start with the general, this idea of starting with problems versus starting with solutions.

[00:16:44] Jonathan:

Yeah. You know, I find this fascinating. We could talk for hours on this and the nuances, but that framing is spot on. If you're trying to start a conversation, don't start with the technology. Start with: what is the problem that we're trying to solve? I have seen this go wrong where hype can survive

[00:17:03] Jonathan:

when people lead with that "we need to use the latest model," or there's secondhand fears of what the rest of the market is doing and we should do it as well.

[00:17:12] Jonathan:

Instead of doing the "here is a concrete problem that we have and how to solve it." Like, "Here is a bottleneck that limits us for these reasons. Here is a customer problem that matters for these reasons." And once you force clarity on the problem—what outcome needs to change, for whom, by how much, what are the success criteria, what's the value at stake?—then you're in a much better place to talk about the technology solution. And to give a trite example:

[00:17:36] Jonathan:

You can have something that sounds really important and exciting, and then you work out, "Oh, you know, what value does this represent to our customers?" and the answer is an extra, I don't know, $500,000 across our customer base. Then implementing a Large Language Model from scratch might not be worth it. Like, you might decide that the cost to serve means it's actually lower revenue than your product currently makes, and you need to find a better value case than using Gen AI. That's a small example. And if I was going further, we'd be talking about the ways in which you should frame this—that I just did a kind of version of arguing from the pain: going to leadership and saying, "You know, here is a risk management scenario, and Large Language Models have these costs." By and large, a better way to have the conversations,

[00:18:29] Jonathan:

I think, is like, "Well, what can we do?" and we build value from the ground up. You're like, "Okay, we want to understand the system. The minimum viable experiment is to go look at our data, let's improve data pipelines," and then that's our minimum viable experiment: does the right kind of data exist for us?

[00:18:43] Jonathan:

Great. If it does, we can explore further. Don't build your final-stage machine learning product as version one. You have, you know, an MCP template that wraps around your LLM gateway; you put an API into the MCP server and just quickly chat back and forth with it with a Large Language Model. It takes you like 15 minutes, and you can then see if your API produces answers that are easily understood by the model and whether it feels good to chat with.

[00:19:06] Jonathan:

Then maybe you are ready to build a product around that. If that fails, that 20 minutes' experimentation shows you that this is not a short project; this is a much larger refactor to get to a product, and you should price that in before you commit to it.
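(To make the shape of that quick test concrete, here is a minimal sketch—illustrative only, not from the episode—of the same idea with the MCP plumbing stripped out: dump one raw response from the API you are considering exposing into a prompt, ask an LLM a few questions against it, and eyeball whether the answers are grounded. The gateway URL, model name, endpoint, and key are placeholders, and it assumes an OpenAI-compatible gateway.)

```python
# Illustrative sketch only: a ~15-minute "minimum viable experiment" that checks
# whether an existing API's output is something an LLM can actually work with.
# All URLs, the model name, and the API key below are placeholders.
import json
import urllib.request

from openai import OpenAI  # assumes an OpenAI-compatible LLM gateway

client = OpenAI(base_url="https://llm-gateway.internal.example/v1", api_key="placeholder")

def fetch_api_payload(url: str) -> str:
    """Grab one raw JSON response from the API you are thinking of exposing."""
    with urllib.request.urlopen(url) as resp:
        return json.dumps(json.load(resp), indent=2)[:6000]  # keep the prompt small

def ask(context: str, question: str) -> str:
    """Ask the model a question, restricted to the API response we pasted in."""
    resp = client.chat.completions.create(
        model="placeholder-model",
        messages=[
            {"role": "system", "content": "Answer using only the API response provided."},
            {"role": "user", "content": f"API response:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    payload = fetch_api_payload("https://api.internal.example/accounts/123")
    for q in ["What plan is this account on?", "When does the contract renew?"]:
        # The judgment call is qualitative: are the answers grounded and usable?
        print(q, "->", ask(payload, q))
```

(If the answers come back garbled or ungrounded, that is the cheap, early signal that the real project is a data or API refactor rather than a quick wrapper.)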

[00:19:19] Anthony:

Yeah. There's a couple points there, but I think the idea of starting small and scaling, trying something quickly.

[00:19:26] Anthony:

And I think the challenge we've always struggled with in conversations on this topic is how to know: is the experiment a failure because it was too MVP-like, too "minimum"? You know, the important word, the word that does the work in MVP, is the "V"—the "viable" part.

[00:19:46] Jonathan:

Yeah.

[00:19:46] Anthony:

So, but how do you think about that? Or do you have tests for it, or how do you think about that?

[00:19:52] Jonathan:

It's a really tough one. In the ideal world, I love the principle of writing it down—AWS calls it like a "one-pager."

[00:20:00] Jonathan:

The "here's a product we're thinking about building, here's why it matters, here are the assumptions that go into it."

[00:20:04] Jonathan:

What you can then do is, you don't even need a minimum viable product. It's not that you need to get to a product that's good enough to go to customers. You can get to a minimum viable experiment: drill down on those hypotheses and say, "Is this thing true or not?" and that lets you do some much smaller, faster, and cheaper units of iteration on: are all the things that I need to be true about the world for this to be a success actually true or not? And going back to earlier in our conversation, sometimes you have assumptions that really matter and you 100% need to determine them. Some of 'em don't matter so much. Test them and build from there. Without a concrete example, it gets hard to go into how you would pivot around those decisions, like the—

[00:20:46] Anthony:

Sure.

[00:20:47] Jonathan:

You know, "we went and looked at the data; the data isn't good enough." And sometimes "the data isn't good enough" turns into a, "Oh no, that's a multi-year project to fix," and sometimes "the data isn't good enough" is a relatively cheap problem to fix, and you move from there accordingly. So I think if I

[00:21:03] Jonathan:

was boiling it

[00:21:04] Jonathan:

down into one thing, it's the

[00:21:06] Jonathan:

I try really, really hard to write down assumptions. And if you haven't thought about it this way before, minimum viable product is a great principle; minimum viable experiment, even better. And start testing those assumptions. And you can do that in weird, careful ways that feel almost orthogonal to building a product but are actually very good ways of understanding if the things you need to be true are actually true.


[00:22:07] Anthony:

I like that. And again, it goes back to this scientific mindset of thinking about: what are the questions we're trying to answer? How would we break those things down? And then to your point, a minimum viable experiment. And I like that you almost could imagine a tree that: here's your MVP—the minimum viable product—here's a set of minimum viable experiments.

[00:22:29] Anthony:

And there may even be some staggering there. Like, "I'll try this experiment; if that works, then I'll trust this, then this," and build up. What I appreciate about that way of thinking is—and again, I think this is the problem with the "minimum viable product" framing: so much hangs on the "viable"—there's a tendency to try something, have it fail, and give up and be like, "Well, we tried it, we did the MVP, the MVP didn't work, therefore it was wrong," when it just wasn't viable. You didn't have enough to get it over the line.

[00:23:04] Jonathan:

Yeah, I think that's spot on, and it gets to some of the real complexities about getting stuff done in the actual world. We're dealing with humans. They're invested in things; they have their own biases coming into the problem. When it's done at the product level, it becomes really easy to get into these swells of "someone is really in love with this," and they can see all the imperfections in the minimum viable product. And you'll really get into arguing about: was it viable?

[00:23:29] Jonathan:

Was it big enough? And you can take some of the temperature out of that discussion, lower the heat. Instead of "Is this big product idea that some people really bet their personal capital on—their personal political capital—correct or not?" it's that there are some assumptions that need to be true for this great idea to be valid.

[00:23:49] Jonathan:

And we're just talking about the assumption; this isn't a thing anyone's in love with. It's just a test.

[00:23:55] Anthony:

Exactly. And so, you know, everyone can probably get their head around running an experiment, whereas the bigger leap to the product is harder. I'm curious if this feels relevant to your experience co-founding Actifai, and you know, maybe it's worth sharing a little bit about what it was or is, and then how you thought about building up

[00:24:18] Anthony:

from the perspective of a startup building up that minimum viable product.

[00:24:23] Jonathan:

Yeah. So Actifai came out of a startup foundry in DC called Foundry.ai, which is run by Jim Manzi and Ned Brody,

[00:24:34] Jonathan:

whose names people may know. And what's interesting about how we worked at Foundry was we had a pool of capital to have data scientists and software engineers around who were really good at building new products.

[00:24:46] Jonathan:

But what's always hard for a startup is solving product-market fit—knowing that if you build a thing, is there genuinely a market for it? We had this very interesting idea, which I thought was a fantastic thing the founders put together, of working in partnership with large companies. They have lots of CEO mates at Fortune 500 firms and smaller companies—go and do pilots with them.

[00:25:06] Jonathan:

Go talk to a CEO. Let's ask, "What are some problems that you think AI and data science could solve?" We'd work in the company, pick the one that looked really promising—a combination of it being valuable and the tools existing to really address that problem—and build it and do a pilot and try and get it to show positive ROI (Return on Investment) within six or 12 months.

[00:25:27] Jonathan:

And when you did it and it worked and you could show that rapid return on investment, that was a product that had a product-market fit. And Actifai came out of working with a call center for a regional telco provider in the United States and solving this problem for their call center agents: they actually had a really hard job to do.

[00:25:48] Jonathan:

This is where we move from kind of the background of where I came from, and this weird way of approaching a startup, into what problem we actually solved. Buying internet—going to an ISP and buying an internet package—is not like buying a car, where people understand what a car is and what they like.

[00:26:06] Jonathan:

You know how many cup holders you like; you know lots of things about cars and understand what you're buying. But like, what's the difference between 50-meg, 100-meg, and 1-gig broadband? What does that mean for you? Like, how much should you value it? It's a tough thing for people to understand who are not living in the internet and tech-savvy.

[00:26:21] Jonathan:

Then you're asking your customer sales rep to manage understanding the customer's technical sophistication, building a rapport with them, getting the vital details over the call like address handling, what's in the service area, what packages are available. Then going through all of the potential offerings alongside the pure internet speed that you are selling. There are bundled cable TV packages. What hardware do you need? Do you need Wi-Fi extenders in your house to get the internet everywhere in the home so the provision is pleasing? This is getting long, rambly, and confusing to listen to, and that's the point.

[00:26:51] Jonathan:

It's a really hard thing to do in a single call.

[00:26:54] Jonathan:

And what we were offering was simplifying the agent's life, and we did two important things. One was, a lot of them were using systems that looked straight out of Windows 95, that were just painful interfaces to manage calls on. So we just had a faster, more pleasing UI that would display key information on screen so they didn't need to remember it in their heads.

[00:27:11] Jonathan:

So write it down. Straight software, no AI, but actually one of the most important things we did for people wanting to buy the software. And two: it turned out to be a really AI-amenable problem in the classic machine-learning sense that we'd have a lot of information about people on the call according to where they lived, and some key questions that were asked, and a series of other things.

[00:27:30] Jonathan:

I'm not revealing company IP secrets; these are all on the Actifai website if people would like to look for the explainer of how the product works. And it turned out you put all that information into a machine learning model, and we could do some really smart, sophisticated things about what is the appropriate package for the customer and what is the appropriate set of talking points for this person, and put that all together.

[00:27:52] Jonathan:

And we've just solved a couple of different angles for the customer sales rep to make their job easier and more pleasing, and as it turns out, have a really large uplift in how often you make a sale, uplift in the value of the sale, and, by matching it well to customers, see that they were better retained over an extended period of time.
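(As a rough illustration of the kind of model being described, here is a toy sketch—not the actual Actifai system; every feature name, label, and number is invented. The core loop is: learn from historical calls which package ended up sticking, then at call time give the agent a short ranked list for the current caller.)

```python
# Toy sketch of "use what you know about the caller to rank packages",
# using scikit-learn. All features, labels, and numbers are invented.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical calls: caller attributes plus the package the
# customer bought and kept (so retention is baked into the label).
history = pd.DataFrame({
    "household_size":   [1, 4, 2, 5, 3, 2, 4, 1],
    "devices_reported": [2, 9, 4, 11, 6, 3, 8, 1],
    "works_from_home":  [0, 1, 0, 1, 1, 0, 1, 0],
    "gig_in_area":      [1, 1, 0, 1, 0, 1, 1, 0],
    "package_kept":     ["100M", "1G", "50M", "1G", "100M", "100M", "1G", "50M"],
})

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(history.drop(columns="package_kept"), history["package_kept"])

# At call time: score the current caller and hand the agent a short ranked
# list of packages (and hence talking points) instead of the whole catalog.
caller = pd.DataFrame([{"household_size": 3, "devices_reported": 7,
                        "works_from_home": 1, "gig_in_area": 1}])
ranked = sorted(zip(model.classes_, model.predict_proba(caller)[0]),
                key=lambda pair: pair[1], reverse=True)
for package, score in ranked:
    print(f"{package}: {score:.2f}")
```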

[00:28:16] Jonathan:

If I was boiling that down, what were we doing? We were finding an unexciting area of the economy where there hadn't been a lot of investment in solving the problem. And it turned out it was really well suited to AI tools and we were just talking to people who knew what they were doing and just being a little bit obsessive about

[00:28:33] Jonathan:

the right details to make it a pleasing experience and ensuring we weren't just shipping good AI.

[00:28:38] Jonathan:

We were shipping AI that people wanted to use by being very careful about our front end, our customer experience, and thinking about the funnel as you move through the product. It's this all-in-one: you can't just build a model; you need to have a model that's delivered in a way that people will want to use it.

[00:28:54] Anthony:

Yeah, no, I think that's a really critical point. It was obviously something we think about here a lot. Like, the model is interesting, but the value is in the output of the model—the better, more organized, clean data in our case, or the call center script, the set of words that you want someone to say, in the Actifai case. Actually, I want to ask: with Actifai, did you find that the sales pitch to the agent, the person who's going to use the software, was different than the sales pitch to the senior executive or even the call center manager? Did you feel a tension there?

[00:29:37] Jonathan:

They're definitely different. I think we were lucky that there was not a tension—that the kind of things that someone like the CEO running the company, or someone between Chief Marketing and Chief Revenue who's running the call center, wants to see are the same things that the sales reps want to see.

[00:29:54] Jonathan:

But just how you approach it is different for those two audiences. Yeah.

[00:29:58] Jonathan:

Our pitch was very clear to senior leadership because we were a product that wasn't arguing for some nebulous efficiency gain. We were very deliberate: "We make revenue. You pay us an amount of money, and as a result of paying us an amount of money, you get 10x that amount of extra revenue,"

[00:30:20] Jonathan:

which is a much more pleasing

[00:30:22] Anthony:

Okay.

[00:30:23] Jonathan:

pitch than all the people who come along asking for money without directly tying it to increased profits immediately.

[00:30:30] Jonathan:

And we offered an A/B test, so it was risk-free if you trust our experiment process. And then when we're going out to the agents,

[00:30:37] Jonathan:

it was really important to have 'em on side. If you try and pitch a product in the company and the end users don't like it, that's a silly place to be. And that was a lot of why we chose to invest a lot in the user experience and be careful.

[00:30:51] Jonathan:

So what the tool feels like to use—things like: how easy is a password reset? How quick is it to navigate back, get back to the start of the call? What happens if you sort of walk away from your workstation for a bit? Those kind of questions. And we would run that as an interactive training.

[00:31:06] Jonathan:

And we're trying to get people to love it on first sight and be like, "Please let me use this."

[00:31:11] Anthony:

Interesting. You want it to be almost like a pull versus a push.

[00:31:15] Jonathan:

Exactly. It's a great frame. Yeah, it's a pull. People are attracted to the software. We knew we were in a great place when we'd do A/B tests and they were kind of difficult to maintain, because there were complaints internally that people who weren't using the software really wanted to use the software. Partly for "it's nicer to use," and sometimes internally people were like, "Hey, people who are using this tool are making more money. Like, we know they're making more sales. I'm paid on commission. I really want this, like, now."
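(For readers who want to see what "trust our experiment process" can look like in its simplest form, here is a minimal sketch—with made-up numbers, not Actifai's actual analysis—of checking whether an A/B uplift in conversion rate is more than noise, via a two-proportion z-test.)

```python
# Back-of-the-envelope A/B check: is the treatment group's conversion rate
# convincingly higher than the control group's? Standard library only.
# The counts are made up for the example.
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for H0: the two conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Control agents (no tool) vs. treatment agents (with the tool).
z, p = two_proportion_ztest(conv_a=410, n_a=5000, conv_b=520, n_b=5000)
print(f"control 8.2% vs. treatment 10.4%: z={z:.2f}, p={p:.4f}")
```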

[00:31:42] Anthony:

Yeah.

[00:31:43] Anthony:

That's wonderful. I love that. So, what have you taken from that experience to Bloomberg Industry Group?

[00:31:50] Anthony:

You know, clearly, I think it would be unfair to characterize Bloomberg Industry Group as a startup, although maybe, I hope, it shares some of that DNA at its core. But walk through like what that transition's been like. What's worked in that transition and what have you had to rethink?


[00:32:08] Jonathan:

Ooh, great question. And a lot of the fundamentals are the same. If you're going to make a good problem—sorry, a good product—you need to understand why it's a meaningful problem to your customers. You need to understand why it's valuable to do this. And I think that gets to the greater philosophical questions about when you're running an engineering team: you want everyone to understand why what you're doing

[00:32:27] Jonathan:

matters. You want people to have a clear idea of "what's the next most important thing I can be doing?" and if they can't answer that question, you've got a problem. Relatively easy to achieve in a startup; the manner in which you have to do that when you're distributing information across a larger organization is different.

[00:32:40] Jonathan:

But the principle behind why you want to is the same. Yeah. When you're in a larger company, your risk-reward changes for releasing new features because you already have a large customer base and anything you release can instantly be used by a lot of people. You are more careful in some senses.

[00:32:58] Jonathan:

You can do things like canary releases, or have people who volunteer for beta programs to see new features early to help understand that process, but you do adjust some of your risk tolerances in releasing new features. And for Bloomberg Industry Group specifically, where we are in the law and tax space, we are very careful.

[00:33:16] Jonathan:

The process by which we release new Gen AI in the law and tax space involves people inside the company who are lawyers doing sort of statistically valid checks of, "Yes, this is right," down to some pretty nuanced views of how you should describe, interpret, and report things back in the legal field to other—

[00:33:37] Anthony:

So what you're saying is that hallucinating new laws is not a good thing.

[00:33:41] Jonathan:

No.

[00:33:42] Jonathan:

You'd be shocked to hear that lawyers have a really low tolerance for computers spitting out things that could have them lose their bar license.

[00:33:50] Anthony:

Shocker.

[00:33:51] Jonathan:

Yeah. And triply so since those stories hit the news.

[00:33:55] Anthony:

No, I think that makes a ton of sense, and it feels like, in a way, the challenge in a place like Bloomberg Industry Group is not that the problems are different, but that the immediacy of the scale is different. And so to your point about being careful: you're pushing something out, you immediately have a customer base, you immediately have a user base. And so you want to be careful. Which is a tremendous asset, I mean in a way, like it's a huge opportunity. And also it comes with some—you've got to be careful—some risk associated with that.

[00:34:32] Jonathan:

It was part of what really excited me about the role at Bloomberg Industry Group: having a large existing customer base that are really sophisticated consumers in the field. And if you're interested in Gen AI and the idea that what really differentiates your ability to do good Gen AI is data—

[00:34:49] Jonathan:

boy, do we have some of the best data in the world for really large, you know, databases of case law and relevant tax provisions, and then also things we call portfolios, which are basically expert analysis of tax written by leaders in the field. It was a great place to build and know that when you built something, it was immediately used by a large number of people.

[00:35:08] Jonathan:

That was pretty exciting. And yeah, the leadership question of: sometimes people think leadership and how to run things is about doing the right thing, whereas really the nuance is it's doing the thing according to how the corporate mechanisms around you need it to be done.

[00:35:25] Jonathan:

You start to consider, "Okay, where are we right now?" You can hypothesize, "Here's an ideal place you'd like to get to," but we're over here. You don't get to just say, "Oh, wouldn't it be nicer over here?" You're like, "What's the next step we take here?" There's the "we have a large existing compute system and set of databases," and how do you go, "Okay, here's the feature that you can build that's worthwhile, that gets you further and further toward the ideal state that you envision being the end place."

[00:35:48] Jonathan:

How do you go through and be like, "Here's the existing procedures for how we get things done; how do we do things like free up lawyers that we had inside the company and move them onto this group that carefully reviews legal-facing LLMs before we put them out into the world at large?" At which point I should celebrate some of my colleagues in New York.

[00:36:08] Jonathan:

I did not invent that process when I arrived; this was something that was well-built before I arrived in Industry Group. Unsurprisingly, given that Bloomberg Industry Group has been doing some pretty sophisticated NLP and Gen AI since before ChatGPT came onto the scene. We were using LLMs before then to do some non-trivial amounts of the work behind the scenes.

[00:36:29] Anthony:

So I was actually going to pull on that thread, and I'm glad you brought it up. To your point, Bloomberg Industry Group is at the forefront of making capabilities—technologies, for lack of a better term—useful, which I think really connects where we started this conversation to here, which is: start with questions, understand the challenges and problems, then build something quickly, you know, get it out in front of people, get feedback. And

[00:36:56] Anthony:

to your point, Bloomberg Industry Group's been doing that, but specifically in the context of AI. And so there seems to be a lot of angst and gnashing of teeth in the industry around what it's going to take. You know, clearly, there's a lot of investment in AI, and then there's this corresponding angst and grief around whether those investments are worth it, or are going to turn into applications that people really use beyond just chatbots. And it strikes me that you in particular, but Bloomberg Industry Group also, are really probably some of the world experts on, at a practical level, how do you do that? Like, how do you get these applications built and out in front of users? Are there any tips and tricks or lessons learned or things you would take out of that for others that are thinking about this—whether they're building a startup or frankly thinking about how a bigger company could take advantage of these technologies?

[00:37:52] Jonathan:

Ooh, that is a really great question. I have two things that leap to mind. The first thing, for companies in general thinking about building and shipping Gen AI products, is being incredibly careful of the demo and who sees the demo. Which I'm saying because leadership at any company—if they are good leaders—have over the years developed a pretty good sense of when something is shown to them with a view of, "Hey, is this the thing we want to do?

[00:38:20] Jonathan:

Can I get an investment boost? You know, put staff and money behind this?" Yes, no. Gen AI really breaks your antenna for that kind of decision, because it's incredibly easy to build a Gen AI demo that looks fantastic,

[00:38:33] Jonathan:

that feels like it's 98% done, and leaves you thinking, "Wow, this feels a lot better than what I'm used to seeing. I know this is a project that's very likely to be successful." But behind the scenes, that's just not true at all.

[00:38:46] Jonathan:

The Gen AI demo that demos well is not a 98%-done product; it's like a 40%-done product, if that. And the work to get it up to the kind of 98% quality bar that's needed to release can be incredibly hard and incredibly non-obvious

[00:39:00] Jonathan:

in the initial demo setting.

[00:39:02] Jonathan:

And you perhaps want to be really cautious about who makes the decisions to invest, or how you decide your breakpoints of "we want to stop doing this initiative because it's expensive and is not inherently on the path to success."

[00:39:19] Anthony:

Interesting.

[00:39:20] Jonathan:

That's my first one. Be really careful of demoing Gen AI.

[00:39:24] Jonathan:

It breaks the rules and will often look a lot closer to being a real product than it actually is. And the second thing that leaps to mind, for experimenting with AI writ large and how you incorporate it into your product in a way that is genuinely valuable and not some kind of "the market—everyone's doing this" sort of concern,

[00:39:48] Anthony:

Yeah,

[00:39:48] Jonathan:

is you really want subject matter experts. You want subject matter experts in the field that you're working in, people who understand the users. It is no coincidence that a lot of the world's best software products, SaaS that people love using, are for technical purposes, because the engineers who are writing the software really understand other engineers.

[00:40:11] Jonathan:

And I think Gen AI is in a similar place: you either want your engineers to partner with and be really interested in understanding subject matter experts, so they can absorb a lot of that knowledge, and you want a really rapid feedback cycle between what the engineer builds and someone who can genuinely offer feedback about what's good or bad or what else you should think about doing.

[00:40:30] Jonathan:

Or can you find those special unicorns, who are engineers who are also subject matter experts in your end users' field? If you do find those people, hang on to them. Another way companies can think about this is: I think retention of engineers is way more important than we often appreciate.

[00:40:48] Jonathan:

If you're in a weird technical field, that engineer you've got who is a great engineer but has also developed a deep understanding of your field—be that you're in pharmaceuticals and, you know, turns out you hired the person whose whole family are doctors and they actually have a pretty good understanding of clinical trials and how you release new pharma anti-infectives to the market—

[00:41:12] Jonathan:

that person is very valuable. Do everything you can to keep them. Or the person who just acquired it over 15, 20 years of working in the company. Do not underestimate the need to understand the market that you're building Gen AI solutions for, and do everything you can to maintain that knowledge inside your business and keep it as close to your engineering staff as possible.

[00:41:31] Anthony:

No, I love that. Those are very valuable insights. I would add one, which is much more pedestrian, but I'll get your reaction: when I think about places where Gen AI technologies and AI technologies can be helpful, what I often think about is, what is the most boring, thankless, repetitive,

[00:41:51] Anthony:

frustrating, difficult, and annoying work that your experts do? And it's almost like if you just ask people to rank the 25 things they did in a day and you just start at the bottom and work your way up, you know? Like, your point about lawyers: lawyers love nuance and intricacy and deciding, you know, case law against—why would you take that away from them? Like, take the stuff that's like, "Oh my God, if I have to do one of these again, I'm going to strangle someone." They wouldn't strangle anyone, of course that's against the law, but you get my point. Like, go find the stuff they really don't enjoy.

[00:42:25] Jonathan:

I love that. I think that speaks to the idea of: understand your user and try and make a product that has pull—something they want to use.

[00:42:32] Jonathan:

It's a really wonderful place to be. Like what you're saying there, the idea of finding something boring: it's a lot harder to succeed in a market where there are tons of people trying to do the exact same thing. And perhaps coming full circle back to talking about science, I got some really interesting advice when I was an undergrad from a Nobel Prize winner,

[00:42:55] Jonathan:

Dr. Tim Hunt, and he said there are two places that are really worth working. Either you are at the forefront of the field, where no one knows it's exciting yet—there's no one else there 'cause no one else is looking there—and you can do brilliant things. Or you find something that's old that no one is interested in working on anymore;

[00:43:13] Jonathan:

it's boring and it's dull and no one wants to do it. And that's the other wonderful place to do work. Which I think is still kind of true for AI and software engineering.

[00:43:22] Jonathan:

It's the—it was the success with Actifai: lots of things contributed, but one of them was that that kind of telecoms call center space had not been looked at.

[00:43:35] Jonathan:

No one else was playing there. The tools that they had were pretty shaky and showed signs of being really non-competitive, and by going in there and doing something that was not immediately exciting, we actually found a pretty exciting core problem space.

[00:43:50] Anthony:

Why would we expect anything other than very good advice from a Nobel Prize winner?

[00:43:55] Anthony:

But I appreciate you sharing it. That's great. Well, Jonathan, thank you so much for the time. This was a great conversation—a lot of great insights—and really appreciate you making the time.

[00:44:05] Jonathan:

Thank you, Anthony. It was a pleasure joining. Longtime listener, first-time caller.

[00:44:10] Anthony:

All right. Thank you.

