
Balancing Innovation and Security in Analytics with Oren Falkowitz of Gigasheet
Oren Falkowitz
Security isn’t just a checkbox — it’s the foundation of trustworthy data analytics. Oren Falkowitz, Advisor at Gigasheet, joins us to explore what it really takes to secure data in the digital era. With a background spanning NSA, US Cyber Command, Amazon and Cloudflare, Oren brings a deep understanding of how to strike a balance between access and protection, especially as analytics demands grow. He explains why most companies fail to define what they need to protect, why phishing remains a top vulnerability and how treating security as a technology — not just a human — problem changes the game.
In this episode, Oren Falkowitz, Advisor at Gigasheet, shares lessons from decades at the frontlines of data security — from NSA to Cloudflare. He explains why identifying what to protect is step one, why phishing is still the biggest risk and how cell-level security can unlock smarter analytics.
Key Takeaways:
(03:24) Government and private sector data security face similar foundational challenges.
(06:08) User training alone won’t stop phishing — it’s like expecting drivers not to text.
(10:29) Apache Accumulo shows cell-level controls matter more than having all the data.
(18:50) Intelligence turns technical data into real decisions.
(24:42) LLMs need rich, licensed data — scraping alone isn’t enough.
(27:59) Cyberattacks work because they look real, visually or organizationally authentic.
(31:03) Security improves by mastering the basics, not chasing the newest trend.
(32:43) Strong analytics matter more than the breach itself when it comes to security impact.
Oren: [00:00:00] You've probably, over the last couple of years, gotten duped a bunch of times seeing something on X or on social media, wondering: is that real?
Or you go and tell someone, and they're like, that's not real. It didn't really happen that way. And it's easy to fall into that. And I think over time people are gonna really want a lot more veracity, which I think is a cousin to the security problem.
Anthony: Welcome back to Data Masters. Today we're diving into a topic that's absolutely fundamental to everything we do with data: security. In an age where data is more valuable than ever, protecting it isn't just a good [00:01:00] idea. It's a critical imperative. When data isn't secure, the consequences can be far-reaching and severe, impacting people, businesses, and, as we'll talk about, even national security.
Our guest today is someone who has lived and breathed data security from multiple extraordinary vantage points. Oren Falkowitz has journeyed from the front lines of US cyber operations, serving in senior roles at the NSA and at US Cyber Command, where he focused on computer network operations and big data, to the cutting edge of private sector innovation, where he co-founded Sqrrl,
a big data analytics firm built on technology developed at NSA, which was later acquired by Amazon. He then went on to co-found and lead Area 1 Security, a company dedicated to preemptively stopping phishing attacks, which was subsequently acquired by Cloudflare, where he continues to contribute his expertise.[00:02:00]
Today we're gonna tap into Oren's unique experience to explore some really crucial themes. We'll look at the inherent tension that often exists between the drive to move fast, analyze lots and lots of data, and uncover insights, and the necessity of maintaining robust security. And we'll touch on recent events, which might teach us a bit about this balance, or perhaps the imbalance.
More importantly, Oren has a wealth of practical, hard-won insights into how to actually implement effective security in the real world. So we're not gonna talk just theory. We really want to get to how to get things done in a practical way, so that we can get value out of our digital assets. So, Oren, welcome to Data Masters.
It's a pleasure to have you here.
Oren: Thank you so much. It's my pleasure.
Anthony: So I wanted to start a little bit, and I know a lot of your experience is in the public sector and the government. I think if we've learned one thing over the [00:03:00] last 120 days or so, it's that the government might operate a bit differently than the private sector, that the techniques and lessons we learned in the private sector are different.
But maybe start by sharing a little bit about what you've seen in what it takes to secure large government agencies, and perhaps contrast that a bit with what it takes in the typical enterprise.
Oren: Sure. Well, I think one of the things that I learned early on is that folks who live and work in Washington, DC, where I am currently, or work for the federal government in particular, have a lot of feelings and thoughts about what it must be like to live and work in Silicon Valley.
Right? Or Boston. And I've lived in both of those places, in addition to starting and running companies. And folks in Silicon Valley in particular, just to use the coasts as the extremes, have a lot of thoughts about what happens inside the government.
The reality is that they're more similar than they are disparate. There are so many people at the largest companies, starting companies, with backgrounds like mine. You might have [00:04:00] started with a desire to serve and protect our country. But then the innovation bug and the opportunity to build something in a private company sense catches you, and you kind of go on for that.
So when I left NSA, it felt like I might've been the only person to have ever left to start a company. But then over the subsequent decade or so, you just meet so many other people doing it. I'd say the core difference is the understanding of what it is that we're trying to protect, right?
And that, in the government context, is very clear. In companies, typically where they run into the most trouble is that they don't really know what it is that they're trying to protect. So, to give a specific example: within the national security apparatus, we have data classification levels of top secret, confidential, for official use only, unclassified. And it's very clear, right? There are standards for what those data are. They come marked. Generally people know: this will cause grave harm to the national security of the United States; this could cause other levels of [00:05:00] harm; and so forth. But when you go into a company, or you're a new company in particular and you're just starting to create something, you might have a sense that something is personally identifiable information, or that it might fall under a specific guideline like HIPAA. But otherwise, most companies don't really do the work, or a lot of the thinking, that's required to understand what they want to protect, right? Is it a trade secret? Is it proprietary, patent-like information? Is it business practices? And so forth. So to me, that is really the crux of where the challenges and the differences lie, if that makes sense, Anthony.
Anthony: It does. And so you're coming at it from the perspective of the data. So what is the information, maybe not data, but information, that you're interested in securing? You in particular have been quite critical of government spending yielding sort of zero impact because of this reliance on user training as a primary defense mechanism.
Maybe [00:06:00] talk a little bit about why just subjecting people to online training isn't enough to create security.
Oren: This is in the context of protecting people by teaching them which emails, which phishing attacks, not to click on, whether that's clicking a link or downloading a file. And today, 90% of the time when there's a security incident, it's the result of a phishing attack. I like to liken it to something like texting while driving, or sex education, right? Humans go through these trainings all the time, and yet you can look around every day, and probably even at our own lives: we don't follow those practices. So to take something that's a little more esoteric, like cybersecurity, and expect any one of hundreds or thousands or tens of thousands of employees at a company to really grasp that and to abide by it consistently, to the point where it's gonna protect your organization, is generally idiotic, right? I mean, just be honest about it. 'Cause I don't know about you, Anthony. Do you ever text and drive? Do you wanna admit it?
Anthony: [00:07:00] Not on a podcast that's recorded and put on the internet.
Oren: I'll admit it. I'll be honest: I have texted and driven. And if you just think about that, we all know it's bad. We get training about it. The consequences to us as individuals are enormous, much greater than what could possibly happen if I click on this.
And so I think it's just a matter of trying to find solutions that really have impact. My criticism of that type of training is that it's good for building culture. It's good for getting people in your organization, if you're a large multinational corporation, or you just have people in different cities or states or countries,
and some do HR and some do finance and some do engineering and some do marketing, to know that they're all team players in the cybersecurity sport. But to use it as a solution is slightly silly.
Anthony: So to say it a different way, it's like a necessary but not sufficient condition for success. Like, yes, it's good to train people. Yes, you should be aware, as you say, that texting and driving is [00:08:00] bad, that you should be careful clicking on links. But if that's the only line of defense, you're not likely to be successful.
Oren: I think that's being generous, necessary but not sufficient. Again, I think taking driver's ed is probably a reasonable thing for folks to do, but at the end of the day, the things that save lives are airbags and seat belts and technology, right? You and I are both in the business of building technologies, and there was a time when people would ascribe traffic fatalities to the nut behind the wheel. But we've learned over 50, 70 years that it's seat belts, engines not getting pushed into people, engines dropping to the ground, a hundred different things. I can't do anything in my car without a hundred beepers and sensors and lights going off. So it shouldn't be any different, especially for a technology space like cybersecurity.
Anthony: I think that's actually a good analogy. There are many cases where the best-intentioned person still [00:09:00] ends up in a bad situation, whether it be cybersecurity or driving. And so arming them with preventative tools that they don't have to explicitly turn on is a good way of thinking about it.
So, shifting a little bit, and I alluded to this in the introduction, but I wanted to dig into it a little. Especially given that this podcast focuses so much on data and analysis and improving business outcomes by having organizations make better use of data, there's this tension between wanting to do that, wanting to get at the data, dig into the data, analyze the data,
and then what feels like things that slow you down: needing the rights to the correct data, getting approvals, getting access, logins. And I think this came to a head in our shared world with the DOGE experiments, where I think [00:10:00] there was a sense that if we could take Silicon Valley thinking and move-fast-and-break-things type behavior and apply it to the federal government, this
could create a good outcome. And we don't have to get into the politics of it necessarily, but the question of that ethos bumping up against the necessary security rights and authentication and who has access to what data, I think that's a really interesting tension. I'm curious on your perspective on it.
Oren: Well, I mean, if I had a nickel for every person who was like, if you just gave me all the data, I would solve all your problems. That's a common refrain from folks in the analytics space, right? It's: if I just had it all, I could figure something out. But that's just not the reality of how anything works. And it goes back a little bit to what I was saying at the beginning: if you really understand what data is sensitive and how the data needs to be controlled, then you open up opportunities. And this was the basis for the [00:11:00] Apache Accumulo project that we built at NSA, which is to say: look, we have an enormous amount of data. And if you think of it like an average person, in rows and columns, right? Instead of having controls at the row level or the column level, which is typical in most data structures, why don't we try to control them at the individual intersections, the cells?
So the date-time group in this row could be one thing, but, to continue with the email data set, the subject line, or the To line, or the From line, or the forwarding IP, or the attachment, right? Each one of these is really different. And in most contexts, the birthday might be controlled one way, but the fact that it came from Sloan Kettering, or it came from this hospital or that hospital, maybe is irrelevant. And so the purpose of that tool was to be able to unlock more data. I think what we found over the years, especially in the [00:12:00] multi-petabyte systems we were dealing with, the kind the Accumulo project was built for, is that the more controls you have, especially when they're more granular and fine-grained, what we called in those days cell-level security, the more opportunity it unlocked. It wasn't about giving unfettered access; it was about giving the right access. Because I think, Anthony, when you give just unlimited access, it's hard to know what to do with it, right? It's hard to know which questions to ask. The cold-start problem is pretty significant, and so I would argue that better controls actually unlock better opportunities.
They're not the hindrance. The hindrance is not understanding, not labeling, how the data should be handled.
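To make the cell-level idea concrete, here is a minimal write-side sketch using Apache Accumulo's visibility labels, the mechanism Oren describes. The table name, column names, and label vocabulary (emails, INTERNAL, PII) are illustrative assumptions, not details from the episode; the ColumnVisibility API itself is Accumulo's.

```java
import static java.nio.charset.StandardCharsets.UTF_8;

import org.apache.accumulo.core.client.Accumulo;
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.ColumnVisibility;

public class CellLevelWrite {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for a hypothetical Accumulo instance.
        try (AccumuloClient client = Accumulo.newClient()
                .to("myInstance", "zk1:2181")
                .as("ingest-user", "secret")
                .build()) {
            try (BatchWriter writer = client.createBatchWriter("emails")) {
                Mutation m = new Mutation("msg-0001"); // row = one email message

                // Every cell (row x column) carries its own visibility
                // expression. The subject is merely internal; the sender
                // address is also PII, so reading it requires BOTH labels.
                m.put("header", "subject", new ColumnVisibility("INTERNAL"),
                        new Value("Quarterly results".getBytes(UTF_8)));
                m.put("header", "from", new ColumnVisibility("INTERNAL&PII"),
                        new Value("ceo@example.com".getBytes(UTF_8)));

                writer.addMutation(m);
            }
        }
    }
}
```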
Anthony: So that's an interesting way of coming at the question. Your argument would be that we could actually do a better job of getting to answers and, at the same time, keep a handle on security. This idea of operating at the cell level, I think, is a really interesting one.
It's probably worth noting that the most common analytical tool in the enterprise is Excel, which is by [00:13:00] nature a cell-based metaphor, albeit with no security. So maybe talk a little bit more about what you mean by cell-level security. I think I understand it conceptually, but at a practical level, how does this operate and work, and how should listeners be thinking about it?
Oren: Yeah. So if you're just imagining or visualizing your dataset as an Excel sheet, which I think is how people tend to picture data, in rows and columns, then each one of those cells should be able to have its own security description and control on it. It's the intersection of the row, the specific piece of data, which usually is broken up into metadata, into lots of features and facets. And those features and facets create the columns, right? So: what is the timestamp of this particular data? What is the source of it? Who is the author? Who is the recipient? If you were to take a Word document, how many words are in it? [00:14:00] You can go on ad infinitum, right,
with however many features you want to extract for something. But each one of those really should have its own control. The typical way people do it is they say, well, the row is controlled, right, or the column is controlled. Or, what's probably more familiar to folks, you create a database of a certain type of data: you take, say, all patient billing records, and they go into a patient billing record database.
And so now access and control is given to people for that. And then maybe all the outcomes of an operation are put into an operations outcome database, and people get access to that. But the intersection between those two might be interesting, might be useful in some way, and yet it's locked up.
Right? And that's unnecessary if you're able to consolidate and flatten the data together but control the access based on rules, [00:15:00] regulations, compliance procedures, and so forth. Does that make sense?
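On the read side, the same table answers differently depending on the authorizations the reader holds; cells whose visibility expression isn't satisfied are filtered server-side, so no per-audience copies of the data are needed. A sketch continuing the hypothetical labels above:

```java
import java.util.Map;

import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;

public class CellLevelRead {
    // An analyst holding only the INTERNAL label scans the whole table:
    // the subject cell comes back, but the from-address cell, which was
    // written with INTERNAL&PII, is silently withheld. (In a real
    // deployment the server also caps which labels a user may present.)
    static void scanAsAnalyst(AccumuloClient client) throws Exception {
        Authorizations auths = new Authorizations("INTERNAL");
        try (Scanner scan = client.createScanner("emails", auths)) {
            for (Map.Entry<Key, Value> cell : scan) {
                System.out.println(cell.getKey() + " -> " + cell.getValue());
            }
        }
    }
}
```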
Anthony: It does. Maybe to push on that a little bit: would you agree with the thinking that, in an effort to create a secure environment, many enterprises, and I'm not sure as much about the government, but many enterprises, move to a model where they're essentially creating copies of the data, new data silos with, in your example, different security?
I might add that they're potentially also creating different aggregations of the data. They might say the details of the patient records are proprietary, but aggregations are acceptable. So we'll group things by geography, and then that's okay, we'll let that out to the researchers, but we'll retain the individual procedures. So aggregation might be a mechanism.
Oren: That's adding another analogy to go down, but yeah, I think that's right.

Anthony: Yeah.

Oren: Sometimes those aggregations are created, in my experience, and you tell me if it's been your [00:16:00] experience as well, for control. People create databases and applications because they want to control the outcome, right?
It becomes sort of like a fiefdom within an organization: well, I have this thing, and I wanna do this thing. And that was very common in what I've seen over the last 20 years.
Anthony: So again, your idea there is that derivative data sets are created partly as a mechanism for focusing the answer on a pet project, or on an outcome that you want. But more generally, and this is your opportunity to agree or disagree if I'm putting words in your mouth, the enemy here is new data sets, new data silos. And so the thinking is that if we could
simply have fewer copies of the data, fewer variations on aggregation level, on which columns are available, which rows are available, then, to your point, that not only allows us better access to the truth, per your fiefdom point, but also better security.
Oren: [00:17:00] A hundred percent.
Anthony: Maybe to draw back to the phishing piece for a second. I think it's also the case that the more email accounts you have, the more likely you are to be phished. The more interaction points you have...
Oren: The data's pretty clear that if you send a message, just if you're curious, to 10 people, there's a 90% success rate that one of those people will click. So if you extrapolate that out, it gets pretty significant, right? And it only takes one to cause damage.
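As a back-of-the-envelope check on that extrapolation, assuming independent recipients and taking the 90%-per-10-recipients figure at face value, the implied per-recipient click rate is about 21%, and the odds of at least one click rise quickly with list size:

```java
public class PhishOdds {
    public static void main(String[] args) {
        // Solve 1 - (1 - p)^10 = 0.9 for the per-recipient click rate p.
        double p = 1 - Math.pow(0.1, 1.0 / 10); // ~0.206

        System.out.printf("implied per-recipient click rate: %.1f%%%n", p * 100);
        for (int n : new int[] {1, 10, 100, 1000}) {
            double atLeastOne = 1 - Math.pow(1 - p, n);
            System.out.printf("recipients=%4d  P(at least one click) = %.4f%n",
                    n, atLeastOne);
        }
    }
}
```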
Anthony: Right, and even a singular data breach of a singular table, or even a few rows, can be quite problematic. [00:18:00] Shifting a little bit: you mentioned this sort of under your breath, but maybe just to pull it out, you originally developed a set of technology, Accumulo, at NSA.
Maybe just share for folks a little bit on how that became Sqrrl. I think that'd be interesting for folks.
Oren: Well, when I started my work for the National Security Agency on February 6th, 2006, I was assigned to work on ballistic missiles, right? To forecast and predict the launch of missiles around the world. And it sounds a little quaint today, I think, to use terms like big data, or to go back that far.
But just to give people some reference, there was a time when most of the data that folks in the intelligence community were collecting was what we think of as technical data. It was either signals, like telemetry [00:19:00] signals or radio signals, or faxes or phone calls. And then the internet really exploded. Even folks like myself have grown up with everything from dial-up BBSes to what the internet is today, it being the primary medium of business and communications, right? It's just gotten bigger and bigger.
And so part of the work of an intelligence organization is to collect data, but really the core of it is to be able to analyze it and turn it into decisions for policymakers, the president, secretaries of defense and state, generals, and so forth. And I think the challenge we were facing at the time was that we really had an incredible amount of information coming in.
And it was very disparate in terms of its language, its types, and so forth. And the traditional way it was handled was that it was just put into different silos, different databases, different structures, for problem A, problem B, problem C. Now, a smart whippersnapper [00:20:00] like I was could go get access to these things if your mission required it and you met the requirements.
But to do anything across them really made no sense. And this creates a lot of cost: now you're paying Oracle a ton of money to do this over and over. At the time, for folks who don't remember, Google had written a bunch of papers which eventually became the basis for companies like Cloudera: the Google File System paper, the Bigtable paper. And the Bigtable paper is what we based the Accumulo project off of, which is essentially a key-value store, right? It allows you to store massive amounts of data, and it was very efficient to use with this sort of query style called MapReduce, which is probably so out of fashion today, right?
It's funny to say now, but that was really the rage back in those days, 10 years ago. And what that allowed you to do is bring large amounts of data together and reduce it down to find what you were looking for and get better answers.
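For anyone who missed the MapReduce era, the pattern is simple: turn each raw record into a key-value pair (map), group the pairs by key (shuffle), and fold each group down to an answer (reduce). Here's a toy, framework-free illustration of the idea; the email-sender counting is an invented example, not the Hadoop or Accumulo API:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ToyMapReduce {
    public static void main(String[] args) {
        // Raw "records": sender addresses pulled from a pile of emails.
        List<String> senders = List.of(
                "alice@example.org", "bob@example.org", "alice@example.org");

        // Map: each record becomes the pair (sender, 1).
        // Shuffle + reduce: group the pairs by sender and sum the 1s.
        Map<String, Long> counts = senders.stream()
                .collect(Collectors.groupingBy(s -> s, Collectors.counting()));

        counts.forEach((sender, n) -> System.out.println(sender + " -> " + n));
    }
}
```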
Our real innovation on top of this was to be able to add the security control, right? Because it [00:21:00] wasn't possible in our context to just commingle anything willy-nilly. Some of that data could only be used by this team or by that team, and so forth. There were different classifications, really sensitive considerations around the data.
And so the problem we had was a data access control problem, and the focus was to be able to create this very fine-grained, granular security. Now, we found a couple of challenges with this. The first was what I mentioned at the beginning, which is that not all the data was really understood: who could see it, how sensitive it was, right?
You would find all sorts of quirky things that people would try to describe their data with, and so there was a lot of work to be able to do that. But ultimately, that project I think was very successful for us.
We decided to make it an Apache open source project, which was not the first time that NSA had done that. They'd done it with SELinux, and they've done it with some [00:22:00] other projects since. But it's a slightly unusual thing to do. We thought it was really cool and we wanted to share the work, especially because at the time, the thing that we really admired about Google was that they were publishing the papers, right? There are so many companies existing today, even the AI companies, that are really based off Google papers and all that great research. They're unique in the way that they do this really cutting-edge computer science that's proprietary for their purpose, but then they publish it.
They just put it out there, not with all the details, right, but with enough to get you going a little bit in the right direction. And so that was how it came to be. We thought it would be useful to people in highly regulated industries like financial services, healthcare, and so forth.
And that was the reason the company came to be. I met a gentleman named John Frank. He was doing some advisory work for us. He's a Boston guy, and he said, you should really start a company around this thing. And I looked at him and said, well, what is a company? I don't even know what you're talking about. Because my [00:23:00] dream had just been to work for the government and to do all these cool things for our country.
And I was having a great time, and I was getting promoted and doing all sorts of fun, exciting things. And he walked me through it, and that's how it sort of came to be.
Anthony: Nice. Again, I love the theme there: fundamentally, the goal is to link data sets, right? So much value comes when you see two sets of data linked together. And the idea that the blocking factor in that effort was security is, I think, something that's very relevant for anyone listening.
It's the sort of problem we see over and over again. I also think it's very telling. It'd probably be fair to say that Google today is the Bell Labs of 30 years ago. It's the origin of many really interesting foundational technologies, including generative AI.
I think Google probably doesn't get enough credit.
Oren: Hundred percent. Hundred percent. Yeah, exactly. It's actually really fascinating. They're very unique in that way.
Anthony: So, [00:24:00] I wonder if you would cast your eye forward a little bit. Obviously you've had a tremendous amount of experience, and the MapReduce comment notwithstanding, I mean that in a good way: it suggests you have some history and longevity with regard to which technologies have evolved and been valuable.
But I'm more interested now in what you think will be valuable going forward. The obvious elephant in the room is AI, generative AI specifically, or AI more generally. What do you see coming down the pike that, if you were yourself 20 years ago, you would tell people to start looking at, related to security and data?
Oren: To be honest, I think it's really almost the exact same thing we've been talking about. What seems interesting to me is that these large language models rely on having data, right? You see people like Google making business deals with folks like Reddit,
'cause they have such a rich corpus of information. And I think [00:25:00] people are understanding that crawling and scraping only gets you so far. Because really, the thing that separates excellent responses, excellent results, the details that people really value, is a lot of context, a lot of depth.
Right? And that depth, you know, it's not general. Lots of people can write blogs about which restaurant to go to, or how to make a patty melt or something. I'm sure it's got that covered. But really hard materials science questions, or protein folding questions, or if you really want local context, like where's the best place to get my haircut in, name your city, right? You have to have access to that data. These models don't create it; they sort of synthesize, and they rely on having something behind it. And it would seem to me that what we were just talking about is probably the key thing. Either that data's held behind a walled garden like a Reddit, or, pick your New York Times, which has a lot of rich data and is sensitive to people [00:26:00] stealing it, or things that are copyrighted and so forth. So it's having access to that, knowing how to control it. And then the counter to security and controlling things is also being able to identify and provide provenance and veracity, right? So when a response comes back: well, what is it based on? The great thing about the blue links is they show you where it's coming from, and I've heard a lot of people argue, oh, the blue links are dead and so forth.
But you know, I think they're really valuable to people, if you wanna know: this is where the answer comes from, how do I trust it, and so forth. And I'm sure, like myself, you've probably over the last couple of years gotten duped a bunch of times seeing something on X or on social media, wondering: is that real?
Or you go and tell someone, and they're like, that's not real. It didn't really happen that way. And it's easy to fall into that. And I think over time people are gonna really want a lot more veracity, which I think is a cousin to the security problem.
Anthony: No, I think it's interesting: the GPTs, because they've gotten so good at language, they're [00:27:00] also, in a way, really good at sounding credible. And this is true of phishing as well, just to bring this back to where we started. What makes phishing work is that you believe the ruse. You really think it's the email from your boss, or you think it's a legitimate order.
I mean, sending you an order confirmation to click on is way more effective if you've just placed an order. You'd be like, oh yeah, it seems very logical that I would get an order confirmation, I should trust this. And in that same sense, the generative AI technologies have gotten very good at writing a blog post about patty melts which is complete garbage.
It describes melting plastic as the mechanism. Or, I think the example from many months ago was Google arguing, what was it, that glue would be an effective way to keep the cheese on pizza? Which, again, sounds reasonable. It's sticky, I sort of get it, but it just isn't right.
Anyway, the point is that it's gotten really good at sounding credible.
Oren: Yeah, I've said for a
[00:28:00] long time that the reason the attacks work is because they're authentic, and this maps perfectly to what's happening with the generative AI stuff: there are really only two forms of authenticity, right? The first is visual. You'll see that in a lot of cyberattacks: they look like brands that we trust. It looks like it's Bank of America, it looks like it's Nike or something along those lines, right? Amazon. We all get emails from Amazon, so it's not unusual. And the second is organizationally authentic: it looks like it's from the CEO, right? Who's gonna say, I got your email and I thought it looked weird, so I deleted it? Nobody's gonna do that, right?
Or vendors, right? Like you said: I'm trying to pay this bill, but I didn't realize this one was off. And you see that problem just sort of playing out. It's very easy to repeat information that sounds authoritative, looks authentic, seems reasonable. We've seen in the news, I'm sure you've read it, that some lawyers have decided to ChatGPT their way through court and gotten totally burned by it. And so I think that [00:29:00] just remains part of it. But I think ultimately it comes down to: are you providing the right data?
And maybe if it's not because it's top secret, in the way that I was describing at the beginning, a version of that would be that it's not licensed properly, right? You don't have the license to it, you don't have access to it. Or you're getting the first third, before the paywall shows up, or some version of that. Those have big consequences. And you're seeing companies go out and try to buy and make those licenses with content providers.
Anthony: Yeah, and I think that's also relevant inside the enterprise. It's like your analogy of the first third before the paywall. If you're only looking at a sample of the data, or the first third of the orders database, then your probability of making good predictions about the next best action is very low.
So ultimately, context matters, and context comes through linking data, and through doing so in a secure way. I wanted to end with maybe a slightly unfair question. Given your experience in security and in government, I'm curious on [00:30:00] your hot take on the future of security, and I'll let you focus on the government, just because I suspect your knowledge base is deeper there. There's one view that we should just operate as though the Chinese know everything, or, I should be more general and say, our adversaries know everything about us.
But what's your view on the future of security in the government? If there's something you could do to make us more secure, what would it be? I'm just curious on the hot take.
Oren: I would say two things. You know, for some part of my life I was the attacker, right? Which is cool, right? I got to break into computers. So I've seen it from that vantage point as well as on the defense side. The threats are real, and I think government takes it very seriously. The entire building I worked in was air-gapped, right? You don't bring things into it. It's a whole different kind of world. And companies over the years have also learned how to do [00:31:00] more, and so forth. So look, people have spent a lot of money on the security problem, both inside and outside the government. And to be honest, the results have gotten worse, right? So there's still a long way to go, and a lot of that has to do with being focused. Some people listening to our conversation might say that talking about very practical, basic things like stopping phishing attacks or labeling your data
seems lame, right? But at the end of the day, these foundational things are the ones that make the most difference. And we don't spend enough time or resources on them, because it's so much more fun to try to do the latest MPC or something like that; everyone just wants to do the next thing, right? The other side of it, I will say, is that I don't think all is lost, especially in the government. They're very sophisticated and competent. The other side of it, too, is that while there has been a lot of loss to foreign governments, China, Russia, and others to name a few, it's really hard to go through the [00:32:00] information and to make sense of it for the purposes that state actors typically want. Because the thing that's often missing from the types of data we're talking about is leadership intentions, deep context of decision-making, and so forth.
I think one of the things people struggle with about the current president is: what is he thinking, right? And if you don't know what he's thinking, you're not gonna get a bunch of emails and documents to sort of suss it out. And this is true for a lot of world leaders, right?
It's true for Putin, it's true for Xi, it's true for maybe even Netanyahu, for Macron. There are lots of people where it's really hard. And so at that level, there's a lot you can do with the data, but there's still a lot of: how is this gonna play out? And if anyone could see the future, things would be a lot different. So I think on that front, the reason security and analytics go together so much is because in order to take advantage of great break-ins and great attacks, you need incredibly powerful analytics. This [00:33:00] is work that we did at the computer science research team at NSA, and that's the space you're in. That's an evolving space. That's the space I'm mostly in. I mean, mostly I tell people I'm not really a security person. I'm really more of a data analytics person. Our whole mission was to really just make sense of it, right? It was easy to smash and grab stuff out of computers; it was really hard to figure out what we were gonna do with it.
What did it mean? Was it gonna be significant, to have an impact? Right? And I think that's still probably the case, as volume and variety continue to accelerate. And on the authenticity level, I mean, some of these pictures and videos, who knows.
Anthony: Right. Just to pull out two big insights there: the basics matter. So to your point, airbags are boring, but airbags work.
Oren: I saw Musk giving an interview saying that you probably don't even need seat belts anymore, that the airbags have gotten so good now. I don't know if I would do that, but I mean...
Anthony: Why not? Why not have both? Let's go.
Oren: Yeah. Why [00:34:00] not? Well, but it's interesting. It's an interesting sort of thought, right? Yeah, they can get really good.
Anthony: Yeah. But more generally, if I were a listener here, I think the message is: do the basics. If you're not doing the basics, do the basics. They may be boring, but they really work. And then the second key idea there, which I think is really insightful, is that yes, the data's valuable,
and so, yes, obviously secure it, do the basics. But what's really valuable are the insights you gather on top of that: the human insight that comes from being able to analyze the data, and then, having analyzed that data, putting it in the context of conversations and changing events. If it's government, thinking about the people; but also in business, thinking about your customers and thinking about the business you're in. So I think those are two things people can absolutely take away and think about for tomorrow. So, Oren, thank you so much for joining us on Data Masters.
Oren: My pleasure. Yeah. Thank you so much.