S4 - EPISODE 19
Data Masters Podcast
Released August 6, 2025
Runtime: 35m05s

Moving Beyond Chatbots: Rethinking AI Interfaces with Ozair Ali of ekai

Ozair Ali
Co-Founder of ekai

AI can mimic human language, but does that make it intelligent? We’re joined by Ozair Ali, Co-Founder of ekai, to unpack what it truly takes to turn cutting-edge AI into real-world solutions. Drawing on his global experience in startups, government and academia, Ozair brings unique insight into the limitations and possibilities of LLMs in the enterprise. He walks us through the practical UX challenges of working with LLMs, the statistical roots of modern AI and what makes an interface more than just a chat window.


In this episode, we’re joined by Ozair Ali, Co-Founder of ekai, who explores how AI is transforming data workflows. He explains the limitations of LLMs, why enterprise context is essential and what the future holds for data analysts in an AI-driven world.

Key Takeaways:

(02:56) AI terms often come from engineering, causing common confusion.

(05:57) Like the brain, LLMs show emergent behavior that’s hard to explain.

(09:00) LLMs mimic human speech but lack calibration.

(16:11) RAGs aren’t the first step — cheaper, simpler methods often get the same results.

(20:07) It’s mind-blowing that chat is still the default AI interface, when something better must exist.

(25:18) Non-technical users can build fast, blurring the line between data and software engineers.

(31:00) LLMs favor data-rich giants, but there’s hope for new disruptors to emerge.

(32:49) AI can unlock opportunities globally despite local infrastructure challenges.

Resources Mentioned:

ekai website

Conway’s Game of Life

Ozair: [00:00:00] LLMs don't know what they don't know. Therefore, they don't know what to ask you to get to the point of knowing everything they need for the thing that you want them to do.

Anthony: Hello and welcome back to Data Masters. In today's rapidly evolving landscape, it feels like every conversation about data eventually becomes a conversation about AI. Beyond the buzzwords and the hype, what does this revolution actually mean for people on the ground?

The builders, the analysts, and the leaders shaping our data-driven world? How do you [00:01:00] separate the science fiction from the practical reality? To help us navigate these questions, we have a truly special guest. Today I'm speaking with Ozair Ali, the co-founder of ekai, a new venture building an AI-based analytics and data operations tool right here in Kendall Square.

But what makes Ozair's perspective so unique is his incredible background, with degrees from Wharton, the Harvard Kennedy School of Government and the Stanford Graduate School of Business. His career has spanned from advising the government of Albania on restructuring its energy sector to co-founding Alter Global, a firm dedicated to scaling tech ventures in emerging markets.

He's seen how technology can create systemic change on a global scale, and now he's at the forefront of the AI wave. Today we're gonna tap into that deep well of experience. We'll get his take on what it [00:02:00] takes to turn cool science into a viable product, explore his views on the practical limits of today's LLMs and discuss how AI will reshape the future of data careers.

I hope we can also zoom out to look at the bigger picture: who will win the race for enterprise AI, and where will this technology have the biggest impact, in established Western markets, or perhaps leapfrogging development around the globe? Ozair, thanks a ton for joining us on Data Masters.

Ozair: It's a pleasure to be here, Anthony.

Anthony: So let's start with a topic that you actually posted on some time ago on LinkedIn, and I sort of grabbed the quote. You posted: "If you understand college statistics, you can understand the foundation of large language models." And I know you have a background in statistics.

Do you think the industry kind of overcomplicates the discussion around AI [00:03:00] and makes it more than it is? What's your view?

Ozair: I don't believe that there's anything going on deliberately. It's informative for me to think about my first ever data course, which I took at Stanford. It was taught by a faculty member who has done pretty well for himself since, but his PhD was in electrical engineering. And that gives you a sense of where this field developed. It didn't develop in the statistics and mathematics departments at universities. It developed either in the corporate world or, within universities, in CS and engineering. And so what that led to was that the vocabulary around AI or ML, which are essentially some version of a statistical model, is different from the vocabulary you would be familiar with if you took a basic statistics course. So if you have learned regressions, what you think of when I say inference may be different from what inference means today for a [00:04:00] CS major learning about LLMs. In my view, perhaps it was just the fact that in certain departments and universities you had, for lack of a better word, more snootiness. So if you didn't have a chain of reasoning or a good way to explain why whatever you saw was happening, then it wasn't good research. Whereas perhaps in more engineering-focused disciplines, it was like, this thing works.

We're just gonna have it work. We don't really care about getting into the why. But I do fundamentally believe that AI, LLMs, GPTs are in the business of predicting what should happen given what they have seen. And that's fundamentally a statistical process. And so if you have a knowledge of regressions, of ordinary least squares or OLS, it's possible to build up from there to how an LLM works. And I think the thing that blows my mind about this is not the actual method [00:05:00] itself, it's the fact that it works at all, given the complexity and nuance of language. The fact that we were able to squeeze this into a few billion parameters and do a pretty good mimic of human language is actually truly what blows my mind.
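Ozair's "predicting what should happen given what they have seen" framing can be illustrated with the simplest possible language model, a bigram counter. This is a sketch of the statistical idea only, not how LLMs are actually implemented; the corpus and function names are invented for the example:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """For each word, count what follows it: prediction as conditional frequency."""
    following = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequently observed continuation of `word`."""
    return model[word].most_common(1)[0][0]

corpus = "the model predicts the next word given the words it has seen"
model = train_bigram(corpus)
print(predict_next(model, "has"))  # "seen" is the only observed continuation
```

An LLM replaces the count table with billions of learned parameters and a much longer context, but the contract is the same: a distribution over what comes next, fit to what came before.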

Anthony: So I love this distinction between the theoretical and the practical, and also, in a way, between the academic departments: the statistics department versus the CS department. It has often been said that people don't actually understand how these LLMs work, and I'm not sure that's entirely true. But I think it probably is true that they can't answer the question of why it said the things it said, if that distinction makes sense. They know it works, and they know at least at a high level how it works, but they can't answer the question of how it specifically works.

Is that a fair way to say it or am I misstating the case?

Ozair: It's the same. I think the analogy I'd use is we understand why neurons work; we don't understand why the brain [00:06:00] does what it does in any specific instance. Even in large language models, you could in theory look at all the weights and parameters and see what changes when you pass something through it.

But why those particular things change, what they represent, is a question that's not easily answerable. And it's a case that happens with most complex systems: you get what are considered emergent properties or behaviors that aren't easy to explain. And in fact, I kind of link this to my first ever research job, in macroeconomics.

When you're studying large economies, you're studying emergent properties that come from the firm, the individual, the consumer, all of whom you can easily model. But going from there to predicting what happens at a macroeconomic level is hard. It's not that if you build the building blocks, you can suddenly put a macro economy together.

And that's also why forecasting macroeconomic stuff is hard: you don't know how these things interact ex ante, and you're left with testing everything empirically, [00:07:00] which is the place we're at with a lot of LLMs. It's a lot of just empirical research on exactly what they do well and what they don't do well. And the why is a really hard question.

Anthony: I think this idea of emergent properties is a really important one. One of the other things that I did as an undergrad is we had a class on programming, and, for lack of a better idea of what to do, I decided to program Conway's Game of Life. And I think it's a great example of a set of very simple rules that then produce really unexpected outcomes.

And my favorite example of this is somebody proved that Conway's Game of Life is Turing complete: you can actually build essentially a microprocessor that can do anything using the bits and pieces. So this very simple, binary, on-off thing from which emerges effectively an entire computer.

Ozair: Exactly.

Anthony: A good example. So you've hosted a number of [00:08:00] discussions on what it takes to go from cool science or academic research to actual products that people use in the real world. And I'm curious, from your perspective, as you've seen a lot of different companies think about commercializing AI technology, what are some of the

big mistakes people are making in the AI space today? There are plenty of examples of people doing cool stuff with AI or new stuff with AI. But what's the difference between something that's cool and something that's actually a useful product?

Ozair: I'll focus on, I'd say, a learning rather than a mistake that we've had with ekai when we rolled out our first version of the product, which is underestimating the UX side of the interaction between LLMs and human beings. I see this in a lot of other products that incorporate LLMs specifically (AI is broader than that, so I'm not gonna touch on that).

But LLMs specifically, into their workflows. And I think the challenge there is that LLMs do [00:09:00] such a good job of mimicking human speech, you sort of expect to interact with them as you would with a human being. But there are certain very key differences: things that humans can do that LLMs cannot do at all right now. They may get there, but at this point, given the technology we have today, they just don't do that. So one example is a classification task. You tell an analyst, hey, there are these pieces of text, I need you to label them with X, Y, Z labels and so on. If a good analyst is ever confused about something, they can come back and say, hey, I don't know about this.

Could you review that? And so they can point to the instances where someone needs to step in. And this is what we would call calibration: they have a good sense of what they are unsure about. Unfortunately, LLMs, somewhat by design and somewhat by accident, just don't have a good sense of that. But when you as a user are interacting with something that sounds human, you feel like it [00:10:00] should be able to do that. So that's one thing. And the other thing is, as human beings interacting with each other, we have a lot of context. I know what you do for a living, for instance, Anthony.

So if you talk to me about a certain subject, let's say you talk about sales, I sort of have knowledge about what you think about when you say sales, given what I know Tamr does. An LLM doesn't necessarily have that, so it's a different way of interacting with an LLM, and we haven't quite figured out what that is. The final thing I'll add in terms of those interactions is that LLMs don't know what they don't know. Therefore they don't know what to ask you to get to the point of knowing everything they need for the thing that you want them to do. That's, again, a very human thing. And so they have these amazing, insanely good capabilities and sort of sound like a human being.

But then they have these completely different gaps in ability compared to a human being. And that I think [00:11:00] will really trip up users, 'cause they're interacting with something that in their head, subconsciously, they think is human-like.
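The calibration gap Ozair describes is easy to make concrete. The sketch below uses invented toy numbers: it groups an imagined labeler's answers by stated confidence and checks how often each group was actually right:

```python
from collections import defaultdict

def calibration_report(preds):
    """Group (confidence, correct) pairs by stated confidence and report
    the observed accuracy in each group. A well-calibrated labeler's
    0.9-confidence answers should be right about 90% of the time."""
    buckets = defaultdict(list)
    for confidence, correct in preds:
        buckets[round(confidence, 1)].append(correct)
    return {conf: sum(hits) / len(hits) for conf, hits in sorted(buckets.items())}

# Toy, invented numbers: a labeler that claims 90% confidence but is
# right only half the time -- exactly the gap a human analyst closes
# by saying "I'm not sure about this one, could you review it?"
preds = [(0.9, True), (0.9, False), (0.9, False), (0.9, True),
         (0.6, True), (0.6, True), (0.6, False), (0.6, False)]
report = calibration_report(preds)
```

When the reported accuracy in a bucket is far below its stated confidence, the model's "I'm sure" cannot be used as a signal for when a human needs to step in.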

Anthony: So I think that's probably very true, this anthropomorphizing of the LLM. I mean, as humans, we're always trying to find shapes in the clouds. We're trying to understand patterns, but we also want things to have intent and emotion and specificity of goals and this sort of thing.

The framework that I've written about and I've used is that LLMs are a lot like interns, in that they work very, very hard, they don't know very much, and they're a reasonable intelligence, but not great intelligence. And then, further, I've sometimes mused that the LLM is like an MBA intern: not only do they not know a lot and they're willing to work hard, but they are quite eloquent, and they're very reticent to raise their hand and ask for help, because like a typical [00:12:00] MBA intern, they just wanna go and do the thing.

And, reality be damned, they'll just plow ahead without sort of pausing and, to your point, asking a question. So, with that as context: you've done a lot of work with LLMs, specifically around this idea of how you can write a data model with accuracy and completeness.

It would be wonderful if you could share, from your experience and vantage point, where you see the real-world limits of LLMs in the enterprise data world. Where do you get surprised by how good they are? And where do you get surprised by just how awful they are?

Ozair: I think in general, within the enterprise world (and you know this 'cause you are in this line of business, Anthony), having the right context is critical in terms of having the LLM do the thing that you would like it to do. And each enterprise's context is unique. Each enterprise's internal vocabulary is unique: how someone defines customer acquisition cost, for instance.

I mean, how someone [00:13:00] defines a customer may be different within a single enterprise, right? And so LLMs have this great understanding of just general knowledge, but in order for them to do the thing you want 'em to do within an enterprise, they have to understand what is unique about the enterprise and the knowledge that essentially sits within the enterprise.

And that's often diffused, often among people's heads, sometimes among PDFs, sometimes among Excel models, version 24.0-something. And they need to somehow sift through it, access it, figure things out, do stuff with it. The thing that blows my mind sometimes is when we're doing data modeling, for instance, and we wanna figure out whether fields that are badly labeled, and may not necessarily have metadata, represent the same entity, they do pretty well. And I'm talking about the reasoning models here. They do better than I would expect human beings to do. Having said that, the baseline for a human being on that specific [00:14:00] problem is still a bad baseline.

You wouldn't want to sell that as a product. Just the ability that they have to infer stuff from information that they have somewhere (I don't even know where in their weights or parameters they have that, but they somehow have it) is pretty good. But even pretty good sometimes is not productizable, if that's the word.

It's not the level you want to have it at to be able to sell it. So they do a lot of things better than human beings do in terms of exactly what we do, which is inferring data models, inferring what's called the semantic layer. They do it perhaps better than an intern would, but it's still not at the position where, if you were to present it back to management or some executive, they'd probably be like,

What are you talking about? So you still have to build guardrails in, and unfortunately, the capabilities of LLMs are still jagged. I believe it was researchers at HBS (I may be wrong about this) who coined the term "jagged frontier," or at least that's where I've seen it. It's unclear where [00:15:00] they'll trip up. So you need to do a lot of testing. You need to set up a lot of testing and guardrails in order for them to actually do the thing you want 'em to do reliably, since they are fundamentally a stochastic process.

Anthony: Again, it feels like the intern analogy is spot on. Just as a way of summarizing what you just said: you wouldn't put the intern in charge of the board deck. You might ask the intern to build a slide or provide some data, but you would not just be like, yeah, here, you do the board deck, and while you're at it, why don't you just present it? Like, you'd be like, no, that's a terrible idea.

And then we seem to turn over these tasks to LLMs as though they can do it. Also, this piece about context feels both very relevant and incredibly important. The large language models are obviously trained on internet content. They don't know anything about your business. And so techniques like RAG, where we can surface context into the model, aren't just a sort of [00:16:00] nice-to-have, but arguably are kind of critical, wouldn't you say?

Ozair: Yes. Although I will say something slightly different: RAGs for us are a last resort.

Because LLMs have enough knowledge that if I were to say that Tamr does master data management, it knows enough about what that is and can figure out, again, if I'm looking at your data structures, what things probably are, and then also figure out from maybe column names, other metadata, et cetera, what things should be. As I said, with all of that, it actually gets to a pretty good level. And as I said, we use RAGs in our product, but I personally try to stay away from them because I think there are basically cheaper and easier ways to do it. I would say multi-agent methods get to the same outcome that you would otherwise have with RAGs.

Obviously, there are certain instances where RAGs are unavoidable. If it's [00:17:00] chunking things into tasks rather than pieces of text that you store in a library and then retrieve and feed back into an LLM, yeah, maybe. But I think there are other ways. Like, say, something we were talking about earlier before we started: knowledge graphs. Building knowledge graphs, retrieving them, using them is actually much more interesting and helpful.
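As a rough sketch of the knowledge-graph alternative Ozair mentions (every entity, relation and table name below is invented for illustration), context can come from walking a graph of facts rather than retrieving chunks of raw text:

```python
# Toy "enterprise knowledge graph" as adjacency lists of
# (relation, target) pairs; all names here are hypothetical.
graph = {
    "customer": [("defined_by", "finance_glossary_v3"),
                 ("stored_in", "crm.accounts")],
    "crm.accounts": [("joined_to", "billing.invoices")],
    "billing.invoices": [("owned_by", "finance_team")],
}

def context_for(entity, depth=2):
    """Walk the graph out to `depth` hops and collect (subject, relation,
    object) facts that could be handed to an LLM as context, instead of
    retrieving stored chunks of text."""
    facts, frontier = [], [entity]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for relation, target in graph.get(node, []):
                facts.append((node, relation, target))
                next_frontier.append(target)
        frontier = next_frontier
    return facts

facts = context_for("customer")
```

The appeal is that the retrieved material is structured and composable: two hops from "customer" surfaces which tables join to which, rather than whichever paragraphs happen to be embedded nearby.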


Anthony: Yeah. So to say it differently: giving them a tool that they can use to navigate a tree of knowledge is better than just handing them a piece of data. Is that a reasonable way of saying it?

Ozair: My standard answer with anything with LLMs is: it depends. Build a framework, test it, be empirical about it. I think the beauty of modeling something as a statistical process is you can often test it, and as long as you set your tests in advance, and you know what you want, and you know what good output is and bad output is, and you have the capacity to test stuff, you can do a pretty standard process of just experimenting and figuring out what it's good at, what you care about, and going from there. I also find leaderboards not very helpful. They're just LLM leaderboards, and it's almost the cross of the LLM and the task you put in that I care about right now. All of these are empirical questions for me.
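Ozair's "set your tests in advance, then be empirical" loop can be sketched as a tiny eval harness. The golden set and the keyword stand-in for an LLM call below are invented for illustration:

```python
def evaluate(label_fn, golden):
    """Score any labeling function against a golden set fixed in advance --
    the 'decide what good output is, then test' loop."""
    hits = sum(label_fn(text) == expected for text, expected in golden)
    return hits / len(golden)

# Hypothetical golden set: inputs paired with the output you decided
# is correct before running anything.
golden = [("invoice overdue 30 days", "billing"),
          ("password reset not working", "support"),
          ("charged twice this month", "billing")]

def keyword_labeler(text):
    # Stand-in for a call to whatever model is being evaluated.
    return "billing" if "invoice" in text else "support"

accuracy = evaluate(keyword_labeler, golden)
# 2 of 3 right: the harness shows exactly where this labeler trips up.
```

Because the harness only depends on the function's inputs and outputs, you can swap in a different model, prompt or multi-agent setup and compare scores on the task you actually care about, rather than on a generic leaderboard.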

Anthony: So I wanna go back to something you sort of mentioned, but dig into it in a little bit more detail: this idea that humans and AI [00:19:00] systems will work together. And this goes to something that I've complained about in the past, which is my feeling that the industry is over-indexed on this chat interface. It's maybe the curse of ChatGPT

having released the interface to the LLM as a chat interface. And it's probably worth noting that OpenAI's first interface was an API, and only almost as a fun experiment did they throw a chat interface on it to see what would happen. And now that's taken off. But anyway, now that we've anchored on this chat interface, it feels like everything has become a chat interface.

Like whatever the problem is, we're gonna put a chat interface on it. And that feels very limiting. So I guess the first part of the question is, would you agree that it's limiting? And then what have you seen as more effective human-AI interfaces?

Ozair: So I agree that it's not just limiting. I think it's an awful experience for the user. I [00:20:00] just think it's an awful experience. I think it's insane that in the last, I don't know, 10 years of UI innovation, this is the best we could land on. We had chatbots 10 years ago, more than 10 years ago maybe. The fact that this is the best we could have come up with is just mind-blowing.

I'm dead sure that something exists that's a better interface than what we have today. I don't know why we're still stuck here, and I don't have a good intuition on what a better interface may be. But I'll make two points here. One is that I think human beings will also adapt to AI, in the sense that when you speak with someone, when you work with someone, you sort of learn their habits. You learn what works well, and you learn that through interaction. You learn how to say stuff. That's sort of the manager role: knowing how to frame things so that you get what you need, or get to motivate the person. And I think it's the same thing [00:21:00] with LLMs. It's gonna be a different way of communicating, and I think humans will figure that out. And I think people who are interacting with it more often will figure it out faster. That's probably where the innovation's gonna come from: whoever is younger than us, who spends perhaps more time actually interfacing with an LLM, will figure out that this is not actually the best way to interface with them. I see a pretty big role for voice here as well. The second point I was gonna get at is that it's also unclear to me what the best interface for LLM-to-LLM conversation is, 'cause right now they're sort of talking to each other in human language.

But they don't need to do that in any way. It could just be, I don't know, exchanging some version of embeddings, some token sequence that they've come up with that doesn't make sense to us necessarily. So I think those are things that both the models will learn for themselves and also people will [00:22:00] learn.

So, I know I haven't answered your question on the UI piece, but frankly, I just know that what we have right now sucks. I don't have a better one for you.

Anthony: No, it is a hard problem, and in a weird way, I suspect people have settled on the chat interface because they haven't invented better interfaces yet. That being said, I would say two things. One, there are lots of modes of interaction that we have with computers that LLMs would fit very nicely into. Not to sound very old school,

but email, I mean, sending someone an email, or an asynchronous conversation that occurs over time, is the way many of us interact. And it's not chat, for sure. Yet we don't do that with LLMs. The other thing you said there that I think is really important is that we change our behavior to adapt to the technology.

And this is, I think, something that people spend far too little time thinking about. And it's not just about [00:23:00] computers; there's so much about the way we interact with the world which is as much about us modifying our behavior to adapt to the technology as the other way around.

When you think this way, you realize there are all kinds of things we do not because they're optimal for us, but because they fit into the system that we've designed.

Also, maybe just to bring it back to data and data analysis for a second: in the realm of data and the way people work with data, there's this role of the data analyst, and this is something you've written about and spoken about

in the past. How do you think the role of the data analyst looks in five years? And how are they using AI? Or are they completely obviated by AI? Is there even a role for an analyst anymore?

Ozair: I mean, this is gonna be hard to predict. I feel like with AI, anyone who is [00:24:00] data-inclined can do data analysis themselves. If you have some sense of how to slice and dice data, or, if you are in some function that requires A/B testing, some sense of how that works,

or if you're in some function that requires ML stuff, I think you're gonna have AI do enough stuff for you that for basic data analysis, descriptive statistics, et cetera, anyone who knows what they want can get it done. I think the role of a data analyst is probably gonna shift from that to a kind of knowledge analyst, for lack of a better word: creating the infrastructure that is needed for anyone who wants to do data analysis to do accurate data analysis. And so they'll perhaps be almost like the [00:25:00] librarians of a company: they're in charge of understanding what knowledge sits where within a company, and who should have access to what, and so on.

So that, I think, will be a more specialized function within a company. The other thing I'll add is that the role of a data engineer is, I think, gonna be merged with that of a classical software engineer. I don't think you'll necessarily have a big difference between the two, because fundamentally, if you do any kind of data analysis or any type of data prototyping on, let's say, a data mart or something, that prototyping part is already much easier now than it used to be. But now you'll have instances where someone who's not, quote unquote, technical will be able to build those pretty quickly. And if it's helpful for them, they will probably take it to someone in engineering, and then their job is basically the scale-up part, much more software engineering aligned than classical data engineering, so to say.

I also think that a lot of the data engineering frameworks that we have, in terms of [00:26:00] how the data stack is structured, will shift radically. I think there's gonna be a lot of automatic decentralization of data use. And so I don't think you can have a central org, a big data team at least, that centralizes everything and then makes it available.

I think if I am a product manager for an app, currently my transaction-level data gets sent and processed and sits in some analytics, like OLAP, place, and then I can access some layer that someone has exposed to me.

Those are like 10,000 different steps. But the data actually sits with me. It's right there: I get the transaction-level data in my app. So I should be able to just query that myself. It shouldn't have to go through all of these loops and then come back to me. But then if I wanna build something, say a pricing model or something, if I wanna put something into production, I still need to take it to an engineering team.

So. 

Anthony: Let me test this against you. So, I admit, at the risk of dating myself: [00:27:00] if you go back to the nineties and you wanted to build a system for managing your interactions with customers, you could get an Oracle database and a 4GL programming language and build a CRM system.

And along came Tom Siebel, and he said, well, this is stupid, we know how to do this, and we're just gonna build the system, and you can just buy it from us and it's ready and it works. Maybe you customize it, but largely it works. And I thought of that as the app. And, you know, whether it's CRM, or SAP with ERP systems, and PeopleSoft, they

built an entire industry of apps that are essentially amplifying or building around these common business processes and business use cases. Another way to think about what you said is that the engineering talent associated with doing data engineering may not actually sit inside the organization that's using the data, but instead at a software provider that's providing an app that just does the thing. Like, the number of custom-built [00:28:00] pipelines inside little companies around the world doing exactly the same thing as the 7,000 other

exact replicas of that pipeline in little companies, that feels like 4GL programming languages building CRM systems in the nineties. But you're welcome to disagree.

Ozair: I mean, I don't think I disagree much. There's one thing that is similar to that and one thing that I think is different. The thing that is similar is that, fundamentally, anything that can get automated does get automated in software engineering.

So this is not just data engineering: if you have enough people building the same kind of pipelines over and over again, someone's gonna build a product for that, right? That's the part where I do agree with you. The thing that I think is different about AI is that once an enterprise has an AI system that understands its data, understands its context, understands its knowledge, there's stuff that it can do that the actual employees may not have thought about. It'll understand, let's say, your business context. If I wanna build a pricing recommendation system, it can [00:29:00] probably suggest what those features would be, and I don't need to be a machine learning expert for that, right? And so, obviously, there are people who will say, oh, there's risk associated with someone who's not an expert in machine learning just running a pricing system and all.

But I sort of compare this to the flip side, where I've seen people who have just a very rudimentary understanding of statistics run regression models and base very large decisions, at both enterprises and in the public sector, just on a correlation coefficient. That'll happen; that's what happens when something gets more widely available. I don't think it's necessarily a bad thing, 'cause otherwise they would've done it based on, I mean, you can't see what I'm doing right now, but just kind of plucking something out of the air.

Anthony: Yeah, no, that's a great point, and I think that analogy is actually a good one. As we sort of wrap here, I want to maybe step way back. You've had quite a varied experience thinking about startups and large companies, established Western markets, emerging markets.

So from your [00:30:00] perspective, do you think the innovation in AI, which is clearly very disruptive, ends up being dominated by a few big, large incumbents? Or is this a case where we see a set of new players emerge that are small startups today but become the new disruptors?

And similarly, and maybe very differently: is this yet another technology that the Western world sort of dominates and becomes the leader in? Or is there an opportunity for emerging markets to leapfrog here? Those are big questions, but given your background, it seemed like I have to ask.

Ozair: Those are great questions for the end. On the first question, I wanna believe that this time won't be different and we will have new companies emerging, taking on the bigger ones and disrupting the industry as it stands today.

And I think that's a healthy thing. It's a good thing. It allows new cultures, new corporate cultures, new ways of doing things to emerge and so on. Right. [00:31:00] The thing that concerns me this time around is that the power of LLMs is directly correlated with how much knowledge and information you have, which a lot of larger companies have spent, I don't know, a couple of decades at least, just gathering and hoovering up.

I don't think large companies fail because they don't have an economic competitive advantage. I think they fail because they're just too far removed from the actual user or the actual market trends.

They don't notice what's going on, and by the time they get around to it, they're kind of lost. But, you know, this time may actually be different. I don't know. If there were to be a time where something is different, it may be AI, just because of the nature of the technology. On the emerging versus Western markets thing: the piece I like to talk about, which is a subset of that question, is labor markets. I think about it because, over the last 50 years, we as a world have liberalized what I would consider the market for goods and the market for money. You have pretty easy transfer of money, pretty easy transfer [00:32:00] of goods across the world, for now at least. But we never really got there with people, with skills, basically. Those things sort of stayed frozen. And so you have seen, over the last 50 years.

I would say that those differences diverged: the wage gap, the salary gap for the same skills in different markets, is quite high. I think AI could be, and generally speaking software is, a great equalizer for that, to an extent. There are other things, though. Let's say you're hiring someone in a third-world country; they sometimes have to worry about whether they'll get electricity at certain hours. I grew up in one such country where I also often had to think about whether I was actually gonna have electricity between these couple of hours so I could get my work done.

Right? So productivity still gets impacted by the environment you're in. But the fact that you can still do stuff as a skilled person without [00:33:00] having to move from point A to point B will mean that a lot of this wealth transfer, which otherwise was not possible, will still happen. So that's kind of what I'm optimistic about.

And the thing that is amazing for me, that just blows my mind, is the amount of knowledge that you can now package into a little model that is available to anyone. And so anyone can look up how to do certain things, how to edit their emails, how to communicate properly.

I think we underestimate how much of an impact that can have in terms of economic growth, especially for skilled labor. I have more concerns around how AI can be used for consolidation of state power in certain places, especially in emerging markets or the third world.

I think that's actually where most of my concerns are. As a pure economic force, I do believe it'll be a net positive. Even if the majority, the bulk, of technological innovation is gonna happen here, there will be applications, in terms of [00:34:00] business model innovation, in terms of operational innovation, that will still come out of these countries. And I think they'll come out faster than we've seen before. They'll figure out ways to use AI within those markets that we can't anticipate. And I am pretty excited about that.

I am not excited about the potential for abuse, basically.

Anthony: Yes. But again, hopefully, like with any technology, where of course there's always room for abuse, the positive uses and the leveling of the playing field dominate as the effects. I mean, there are always some bad actors.

Ozair: Yeah. Yeah. I think there are always some bad actors, but I'm hopeful that people find a way, as a civil society, to figure out amongst themselves how we manage that.

Anthony: Well, I think that's a great place to end. Appreciate you making the time and joining us on the show.

Ozair: Thank you, Anthony. Thank you for having me.

Subscribe to the Data Masters podcast series

Apple Podcasts
Spotify
Amazon