Data Masters Summit 2020

Data Secures the World

Christopher Ahlberg

Co-Founder and CEO, Recorded Future

Information security is increasingly a data problem. To assess the security posture of an organization, one needs to process and assess massive amounts of structured and unstructured data, sourced from external resources, such as threat intelligence, and from internal IT systems. In this open conversation, Christopher Ahlberg, CEO and Co-founder of Recorded Future, and Andy Palmer, CEO and Co-founder of Tamr, discuss the challenges of handling security data at scale and how to overcome them.

Transcript

Speaker 1:
Data Masters Summit 2020 presented by Tamr.

Andy Palmer:
Thrilled to be here today with Christopher Ahlberg, the CEO and co-founder at Recorded Future. And we’re excited to talk about all things data, and especially security data, based on his real-world experience at Recorded Future. And also maybe a few references back to his previous company Spotfire, one of the leading analytic platforms, which he started out of his PhD program back in the mid ’90s. But Christopher, to start off, congratulations on the success at Recorded Future, what an amazing company. I’d love to hear you describe the mission and vision of Recorded Future and where you are in terms of the development of the company.

Christopher Ahlberg:
Thank you, Andy. And thank you for those kind words. Yeah, it’s been quite the journey, as you mentioned. [inaudible 00:01:10] originally we were trying to visualize the world’s data. That was sort of the mission back then, in the ’90s and early 2000s, when people were not really thinking about this. And then analysis became analytics, became big data, became machine learning, became AI, and these sorts of things. Having sold that company, we started Recorded Future, and we started thinking about how we could use the web not just as a place to go search for information, but actually use it for analysis in itself. So we started Recorded Future back in 2010, pulled in Google and In-Q-Tel, and were able to start building a pretty cool platform.

Christopher Ahlberg:
We then realized that this place, the web, the internet, was not just the place where information ended up, be it open or secret, but it was also the place where cyber attacks were happening in a pretty amazing fashion. So we’re like, “Oh, if we get really good at indexing not just this general data, but all the trails that cyber bad guys might have left behind, we could build a platform that intel analysts and security professionals and all kinds of people in this world could use to hunt down threats, bad guys, vulnerabilities, all of those sorts of things.” And we sort of turned this into a data business, much like a Bloomberg, a Bloomberg for cybersecurity. And as you can imagine, given what we’re going to talk about today, it’s a challenge of collecting data, organizing data, extracting data, extracting signal out of that data, doing analysis. It’s pretty cool.

Andy Palmer:
It’s amazing. I’d like to dig in a little bit. You talk about organizing all the information on the web and preparing it for analytic purposes. In a minute we’ll talk about internal and external data, but maybe talk about structured and unstructured data and how you view the web. Is it an unstructured data set that you’re then organizing and preparing, or is there structure in there that just needs to be worked out? How do you think about that at Recorded Future, as you’ve built your mechanisms to crawl and organize?

Christopher Ahlberg:
Yeah, and the operative word is probably what you said, this easy button [inaudible 00:03:37]. So we started in what I’d like to think of as the ultimately unstructured world: spoken language or written language. We ingest boatloads of text, and the text is coming from news sources. Interestingly enough, that’s what most entity extractors and those sorts of toolsets are built for: news text. We do that in English and Arabic and Farsi and Russian and Chinese and all these wonderful languages, probably in 15 languages in depth and 30 in total. So that’s sort of one endeavor. And it ranges from news text to short blurbs or forum text, where all kinds of slang and all wonderful stuff happens.

Christopher Ahlberg:
That’s one side of the coin, and that’s where we started: being able to take things there and organize it, and understand that when somebody talks about Mr. Xi or Xi Jinping or the President of China, or the head of the military commission of China, those four things are actually the same thing. And not just looking at one piece of text, but being able to do that across a whole corpus that is just flooding at you. [crosstalk 00:04:57]

Andy Palmer:
So you’re sort of pulling the meaning out of just the raw text.

Christopher Ahlberg:
Absolutely. So we see our mission here as building what we think of as the security intelligence graph. We think about this as the equivalent of the Google knowledge graph. It’s not just about doing text extraction; that’s really nothing. The value comes when you turn it into knowledge. And it’s one thing to generate knowledge about what the ancient Egyptians were doing, or big facts that are well known, but in our world, many of the facts are tricky. There could be facts about a disinformation campaign that are not hard and fast. The view of an American compared to that of a Russian can potentially be quite different around a fact, or whether a cyber attack involves X or Y can be quite contested. And then there’s this whole other side of data.

Christopher Ahlberg:
So this is sort of unstructured data that you turn into structure, where things start getting a little bit more organized, typically graph-oriented data. Then there’s all this data from the internet: simple stuff, domain name registrations, internet certificates, malware of various sorts, code of various sorts, data dumps. The internet just contains … it’s sort of very weird, but there’s a lot of juice in that. And I can drill and drill, and eventually you get to IP addresses and domains, machines, machine traffic, netflow, things communicating with each other. And we collect all of this. Then we try to turn that into this knowledge graph that we call the security intelligence graph. So it’s a majestic data unification problem.

Andy Palmer:
Yeah. This is so similar to a lot of the things we work on with our customers at Tamr. As you go through the process of mastering all that data, how much of that requires the use of the machine, and how much is human expertise important and critical in that activity as you line the data up?

Christopher Ahlberg:
So, there are elements that are easy and there are elements that are hard. Software vulnerabilities turn out to be very important in our world. There are probably 130,000 of them that are carefully annotated, written out as, say, CVE-2020-1955. The good news is that with that annotation, it’s very distinctive. You can write a regex or whatever to catch it. Now, if you have a piece of malware that might be called CryptoLocker, that’s still pretty good. But then somebody else writes a piece of malware, comes along, and puts the name Locky on it. And suddenly it starts getting ambiguous.
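A CVE identifier like CVE-2020-1955 really is distinctive enough to catch with a regex, as he says. A minimal sketch, using the common `CVE-YYYY-NNNN+` shape (four-digit year, then four or more digits):

```python
import re

# CVE IDs: the literal "CVE-", a 4-digit year, then 4 or more digits.
CVE_RE = re.compile(r"\bCVE-\d{4}-\d{4,}\b", re.IGNORECASE)

text = "Exploit chatter observed for CVE-2020-1955; unrelated posts mention Locky."
print(CVE_RE.findall(text))  # ['CVE-2020-1955']
```

The malware-name case has no such anchor, which is exactly why it needs the entity resolution and human curation he goes on to describe.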

Christopher Ahlberg:
And it’s the same thing in biology, when somebody names proteins and there are six different naming standards. In this case, it’s what the US government might call something, what the Russian government might call something, what the bad guy himself calls something. So yes, to some degree you can trust machines to do that stuff, but it needs humans as well. So we’ll run processes where we apply the entity extraction I talked about before and try to name stuff in real time, but then we’ll insert humans in that loop. If we see that something is gaining momentum, getting enough content around it, we’ll insert a human curator who will come in and retrofit things. They might insert [inaudible 00:08:48] on top of that. They might actually kill the whole thing. A lot of different actions might come from that. So absolutely you need some humans in this loop. Obviously you try to avoid this as much as you can, but you just have to-

Andy Palmer:
Yeah. And what you described is using the machine really aggressively to make sure that when a human is involved, they’re focused on the most important things that need to be adjusted and require human judgment. And I know that when you were doing your PhD with Ben Shneiderman way back in the day, the human was always important and always at the front. And I had the same thing when I was working with Marvin Minsky back in the ’80s. There are these principles where it’s always about the human and the machine working together. It’s true today, maybe truer than ever before.

Christopher Ahlberg:
Yeah, I really think so. And whenever people are like, “Ah, you can take the human out of the loop,” you just get suspicious. We think about it as this centaur, or [inaudible 00:09:49] whatever you want to say. There’s a threat-intel centaur: the human is the front, and then there is this big horse body that is the machinery, the two things together. And when we provide our technology to the customers, when they get the equivalent of the Bloomberg screen or portal in front of them, it’s still the same thing. That data could flow to them where they use it in their work, or our data can flow into Splunk or some sort of machine-consumption tool where it gets used for correlation and enrichment. And then, again, it pops out from the machine to a human every now and then. So [crosstalk 00:10:28]

Andy Palmer:
So the idea is that machines and humans are both useful in producing and aligning and creating the asset that Recorded Future delivers, and machines and humans both consume it as well. [crosstalk 00:10:44]

Christopher Ahlberg:
And you do all kinds of work here too. Again, I mentioned those CVE annotations. If it’s a company name, the good news is that there are these PermIDs and Bloomberg IDs and the stuff that you’re more familiar with than I am. So there are certain places where you can get to pretty distinct identifiers, but then there’s other stuff where that just doesn’t exist. So yeah, it’s going to be a long path to solve all of these problems.

Andy Palmer:
Organizing the world’s information is a big challenge for all of us. So one other thing before we move on and talk about internal and external data: I know that in using Recorded Future, there’s this amazing and very intuitive interface that’s driven by these temporal indexes. Time is kind of a primary organizer. Can you talk a little bit more about the importance of temporal interfaces and indexes in next-gen systems?

Christopher Ahlberg:
Yeah. I think the reason that worked out so well for us is that when you deal with information and you’re trying to find patterns and clusters and anomalies and those sorts of things, time is always very important. Last time I checked, there were three dimensions, plus time. So time is obviously important. But I think there’s something special when I’m analyzing data along dimensions that are not hard dimensions, they’re tricky dimensions. It’s one thing to measure meters versus time, these hard dimensions that are irrefutable, the one versus the other. But if I’m looking at the number of cyber attacks against China compared to the number of cyber attacks against the US, a lot of different things come into play, like, whose definition am I working with, and so on.

Christopher Ahlberg:
So one of the ways I have to normalize these fuzzy things is actually looking at them over time. Comparing 2020 to 2019, to 2018. I’ll say it a little bit softly: I may be sort of bad about how I measure it, but at least I’m doing it consistently. And that sounds terrible, but it’s actually not a bad approach. So with this idea of organizing information temporally, we obviously look at it in many different ways. Ultimately we want to be able to give a glimpse into what’s around the corner, but time just turns out to be very important in this. So we’ve spent a lot of time organizing data temporally, in lots of different ways.

Andy Palmer:
It’s amazing. So as we switch now: you’ve spent a lot of your time, traditionally, organizing threat intelligence data around these external sources. But I’d imagine that increasingly you’re starting to integrate or plug that external data into internal sources that come out of people’s internal systems, their security operations centers. You mentioned Splunk, which a lot of people use. Can you talk a little bit about how that data is going to get mashed up and the challenges you see there? At Tamr, we spend a lot of time on people’s internal data, so it seems like there’s a natural dovetail here. [crosstalk 00:14:02] Maybe also how this temporal index might be a key that helps facilitate that.

Christopher Ahlberg:
Yeah, exactly. Think about what operators do, whether they work in a nuclear reactor or they’re on the big deck or in the operations center of an aircraft carrier. People have tried many times to take this sort of data and boil it down to something that fits on a little screen, but it turns out that humans are pretty good at seeing lots of observations with their perceptual system. We can see: this is weird, this part of the process, that readout should not be like this. We just do magic in our heads with that stuff. And it’s wonderful. Maybe one day we’ll be able to decode what’s going on there. In our world, sometimes that can be encoded as two data fields meeting, and if they meet in certain [inaudible 00:14:56] but in many cases, it’s subtle.

Christopher Ahlberg:
When you chase bad guys, when you chase spies or criminals in your systems, they have this magical thing that they do: they’re actually trying to make you not find it. The bits and bytes don’t want to be found. So that means you’ve got to figure out how to match things in clever ways. Sometimes you can find a hard mapping. If there’s traffic in your network on a particular IP address, and we also have data on that address, now we can say: badness on the inside plus badness on the outside is probably worse than only one of them. If you see badness related to an IP address on the outside, but not on your network, you’re going to be like, “It doesn’t matter to me.” But if I see traffic, whether it’s to my IP address or there’s inbound traffic from that address on my network, and Recorded Future has a bad score on it, now my alarm bells are going to go off. That’s a hard and fast sort of thing to measure.

Christopher Ahlberg:
Now, on the other hand, there are cases that are softer, whether it’s a company name or a threat actor name, and I don’t want to get too much into the weeds, where there’s more fuzzy mapping going on. And now I’m back to where I might see things happening in a particular time period in Recorded Future, and I see something in my own environment, in Splunk or whatever, in the same time period. I have zero hard mapping between the two, except that they happen in the same time period. Now my brain can go to work and try to make that mapping, or at least tell me that it’s worthwhile spending time drilling into why these two things are happening. So yeah, sometimes there are hard mappings, sometimes soft mappings, but when it’s softer, we’ve got to help by putting the human brain to work.

Andy Palmer:
Yeah, it’s so incredible. And I’ve got to imagine that you guys have amazingly disciplined methods for taking all the data out there on the modern web and turning it into highly clean, organized things that people can consume, especially if they’re making decisions based on that in their security surface area. Do a lot of your customers have their internal data organized in a similar way, and do they have it under control, or do you find that they still have work to do?

Christopher Ahlberg:
Yep. There’s decades of work in front of us, I think.

Andy Palmer:
Really.

Christopher Ahlberg:
And the tricky part is that you’re going to have some people who just want the very smallest piece of data with the highest level of quality: this is extremely tight data, it’s a reference data set, it’s bulletproof. In financial trading, that tick-level data, you want that to be bulletproof. But there’s other data that is not going to be as bulletproof. And frankly, squeezing more analytical juice out of what’s already bulletproof is probably not going to be that easy, because somebody else has already squeezed that juice. So where’s the stuff? Where are you going to go chase, whether it’s financial trading or chasing the bad guy? He’s not going to sit there in the easy data; he’s going to sit in the hard data.

Christopher Ahlberg:
And that’s why both you and I have been at this now for 20 years. The juice is in the hard places, and that’s where [inaudible 00:18:14] gets excited. Sometimes some of our own guys can be like, “Oh, this is terrible. Why isn’t all the data hyper-bulletproof, juicy?” And I’m like, nope, because if it were, the problem would have been solved. Literally, in our case, the bad guys are introducing new things that are hard to deal with, you know? It’s just a very natural part of what we have to do, and you just have to stay at it. It’s part of the game.

Andy Palmer:
It’s sort of inherent. As you know, our mutual friend Mike Stonebraker and I always talk about data entropy and database decay, and in your case, you’ve got people very deliberately creating ambiguity in the data to hide and [crosstalk 00:18:59]

Christopher Ahlberg:
It’s very unusual. You find that also in a few other areas. You find it in algorithmic trading, where traders will post phony orders, 10,000 of them, just to do price discovery or, frankly, just to mess with other people in the marketplace. So it’s very unusual, it’s sort of an extreme thing, but extreme situations are fun.

Andy Palmer:
At Tamr, our core principle is that data is very messy and ugly and constantly needs to be organized and aligned and reversioned and remastered.

Christopher Ahlberg:
Which is a very powerful notion, I think, because there are so many people sitting around out there thinking, we’ve all been involved in those data cleaning projects: “By March of ’22, all the data’s going to be clean.” [inaudible 00:19:47] What are you talking about? By ’29 it’s still not going to be clean. You could say it’s a process, and I don’t necessarily love that word, but it’s sort of true that it’s a forever endeavor.

Andy Palmer:
Yeah. It’s a core part of the muscles that you have to build.

Christopher Ahlberg:
Absolutely. You’re onto a good point here: this never goes away. And you’re always going to find new opportunities, maybe at the meta level, but there are always going to be new opportunities, because the way I think about it, the analytical juice I want to squeeze will be found where there are high degrees of entropy, to your point.

Andy Palmer:
Yeah. And the most valuable signals oftentimes come from these highly ambiguous, very dirty sources.

Christopher Ahlberg:
That is where they are. Show me great cases of … I’d be shocked if somebody showed me easy-to-deal-with data sets that are highly cleaned up and organized, where somebody then goes and finds analytical juice in them. I don’t know. That makes me skeptical.

Andy Palmer:
Well, let’s go back to your Spotfire days and talk about that a little bit. You must have run across this on a regular basis. At Spotfire, you were kind of close to the last mile of analytic benefit, and I know you evangelized for years for companies using their internal data more aggressively to deliver analytic outcomes. What do you think is the state of analytics and data in the enterprise in general, outside of security? Based on all your experience from the past and your recent efforts with Recorded Future, how do you view the enterprise and the state of it all?

Christopher Ahlberg:
So, I’d like to think that maybe there’s some interesting progress being made. If you think about CRM, where Salesforce now has more or less a monopoly, the good news is that a lot of Salesforce data, a lot of CRM data, is similarly organized now, because it means that instead of every customer having their own version of CRM data, it’s more standardized. And so that’s good. You can still mess up Salesforce pretty badly. In fact, some of my salespeople yelled at me this morning about all these fields that we had in Salesforce. And I’m like, you know what, I’m not in there every day. I thought that was pretty standard, but apparently not. So I think there are plenty of Tamr sorts of opportunities there.

Andy Palmer:
Great.

Christopher Ahlberg:
And so that’s one. You’re very right. If I think back to the summer of 1991, many of the people listening to this were probably not even born then, depressingly enough. I’m sitting in Shneiderman’s lab, and we’re working on the first versions of what became Spotfire, and we’re going to create demo data sets. And what happens? Immediately, half of the time goes to cleaning up these demo datasets by hand. It’s true. Whenever somebody says, “Oh, let me send this data to you,” you know you’re going to spend 80% of the time on that. That has not been solved in a quarter of a … I was going to use a bad word. A quarter of a century, and it has not been solved.

Christopher Ahlberg:
So it’s just mind-boggling. That was a huge part of the problem with Recorded, or I’m sorry, with Spotfire: visualizing somebody’s data. Goodness. When you visualize somebody else’s data, they have a lot of vested interest in it, so they’ll work with you on it. And many times when we came in and we would visualize data, I remember, I won’t mention the company name, but we were showing somebody high-throughput screening data, we visualized it, and somebody basically goes up and just slams the computer down and says, “This cannot be watched. What’s going on here?” And that’s basically making somebody look very stupid. We had another situation, with adverse events systems, where we’re looking at some clinical trial adverse events data at a big company, won’t mention them, and again pretty much got the thing slammed down: “We cannot see this pattern, because if we see this pattern, we have to go back and stop this trial,” basically. And it’s that sort of behavior: people are afraid of dirty data.

Andy Palmer:
It’s remarkable. You’ve talked to me about this a lot: the human element in how people consume data and use it, and either their reluctance to admit what the data’s telling them, or their tendency to hoard the data and prevent it from being shared. And I’ve got to imagine a huge part of what you do at Recorded Future is about bringing data together almost in spite of people trying to stop you from doing that.

Christopher Ahlberg:
That’s sort of our job. We know that there are certain core elements, whether it’s what we call a threat actor, one of these hacking groups, or the world leaders and the policymakers around it, or pieces of malware, which obviously there are gobs of, or elements like IP addresses and domains and all the data around them, software vulnerabilities, file [inaudible 00:24:56] And we organize them around what we call intel cards, intelligence cards, that have all the data associated with them. I think we have about two billion of those in total, which are supposed to be highly curated elements that have all the pivot points to be able to jump between different data structures and to pivot out into the customer’s world, into the internal datasets. And so it’s our job to maintain that, to make it good. Now, are all the pivot points off those two billion records perfectly clean and nice? No, they’re not, to be honest. But the job is to maintain that and make it, eventually, a really strong reference data set. That’s sort of the mindset we have around it.

Andy Palmer:
That’s incredible. And the way you’ve described this, it sounds very similar to what Bloomberg did. You referred to that earlier. Is that an inspiration for you, kind of a role model?

Christopher Ahlberg:
Oh, 100%. The Bloomberg by Bloomberg book that I keep referencing to everybody. I’m not going to remember the story exactly, but he was able to figure out, what do you call them, bond yield curves, I guess, from bond data that was published by a whole bunch of different places. He started amalgamating that into one place, then built a little bit of analysis around it, and scenario planners, and then just kept adding equity data and bond data and commodity data, and putting the news together with the data and analytics, and so on and so forth. Absolutely, that’s been our approach to it. We just launched The Record by Recorded Future, which is our media site, the media property that goes with it. Somebody told me earlier today that the smart guys imitate and the brilliant steal, or something like that. We’re happily imitating and stealing away here, getting inspired. I guess [inaudible 00:27:03] political correctness here. Just learn from the best. Absolutely.

Andy Palmer:
But then this process you described is a very agile process. We like to advocate this term DataOps, kind of an agile data management approach, and what you described is a very agile, integrated, kind of holistic view of how you build these data assets and deliver real value from them.

Christopher Ahlberg:
100%. Now, to make it not totally ad hoc, there are reference data sets that you can start with. For example, geography plays a big role in what we do. So we work with geonames.org, which is one fantastic data set, and there is a set of others that we use. That’s sort of our bulletproof start for geography. Now, our clients worry a lot about Syria, for instance. They worry about Yemen. And it’s quite complicated; it sounds like I’m talking about social relations, sort of. It’s quite complicated. We just refreshed our geography database and we added, I think, seven million places to it. I’m like, where the heck did seven million places come from? But it just works. The world is getting better at describing itself.

Christopher Ahlberg:
And a small village in Syria might have seven different ways to be described in [inaudible 00:28:23] and in Arabic and English and Russian, because all the different people who have sold goods there will talk about it in different ways, and then you’re connecting all that. So for geography, we can use that as a starting point. These software vulnerabilities I mentioned before are another area where we have a starting point. So we love it when we have these fundamental starting points for data, and then we just have to build from there.

Andy Palmer:
It’s amazing. It’s incredible what a huge challenge this is; as you mentioned earlier, it’s a multi-decade kind of problem. At Tamr, when we started working with the NGA, we thought geolocation was kind of a solved thing, but curating geolocation data is just a massive thing in and of itself.

Christopher Ahlberg:
It is. My favorite example of that one is that I grew up in a town in Sweden whose name translates to Outer Village. That sounds terrible, but it’s true. Outer Village.

Andy Palmer:
How do you say it in Swedish?

Christopher Ahlberg:
[foreign language 00:00:29:19] I knew that there was a second [foreign language 00:29:23] I knew that there was another one, because that’s the famous one, the one that has, I think, at least four or five elements in the periodic table named after it. Terbium, and that whole set of elements towards the end of the periodic table, comes from this other one. Then I remember being 25 or something like that, looking into the NGA open database, where you find data very nicely, it’s super well-organized as you can imagine, that’s one of the datasets that we worked with also. And you look at [foreign language 00:29:55] in Sweden, Outer Village, and it turns out that there are damn 15 of them. Then you take Stockholm, and you realize there are like nine Stockholms in the US. Those are fairly simple, because you may not care so much if you screw up. But St. Petersburg, Russia versus St. Petersburg, Florida. Tripoli in Libya versus Tripoli in Lebanon. There’s a lot of stuff that happens in both. If somebody is going to bomb somebody based on this data, it’d better be pretty good. [crosstalk 00:30:31]

Andy Palmer:
Absolutely. Well, you just reinforced this idea that data is hard and ambiguous. It’s not a binary thing. This has been fantastic. Just before we close out, can you give us a little hint of what’s going on out there in the world of cyber threats? Anything we should be keeping our eye on or worried about?

Christopher Ahlberg:
Should we be worried about everything? Everybody’s out to get you. No, obviously it’s the end of September and we’re coming up to the election in the US. But you’ve got to look first at the global level: China, US, you’ve got that. We’ve got a global pandemic, the first time in a long time there’s been a truly global pandemic, and it’s not necessarily being handled the best way in very many places. From that we have a global financial crisis. So now you have global tensions, a global pandemic, and a global financial crisis. And then the bad guys, it’s not like they were lazy before, but they’re like, “Whoa, this is a good opportunity.”

Andy Palmer:
Opportunity.

Christopher Ahlberg:
Pharmaceutical companies have been hit like this.

Andy Palmer:
Really?

Christopher Ahlberg:
Yeah, from people who want to steal the … It’s not surprising. If you were running an intelligence agency, it doesn’t matter if it’s in the West or in more adversarial sorts of places. If you work in an intelligence agency in Russia, you’re expecting to get a letter from the ministry of whatever, from the policymakers, saying: you shall have the best information around. You shall have the best information regarding what drugs are going to be developed, what and where and when. So you go, okay, that’s my tasking, I’m going to go make that happen. So no, it’s not an easy world here. We see it all the way over to the Russian guys who are figuring out that they can apply ransomware in all kinds of interesting ways. And there are real threats to elections, upcoming elections here. We should be nervous. We’ve got to be vigilant and careful.

Christopher Ahlberg:
But at the same time, we can’t be fearful either. We’ve got to keep doing what we’re doing and be willing to fight back against these guys. That’s what we call it: disrupting, disrupting the adversary. And we’re working hard at that.

Andy Palmer:
It helps me sleep at night, knowing that you’re out there helping not only our government here in the US but also governments around the world, and many corporations, fend off the bad guys. So thank you for everything you do.

Christopher Ahlberg:
Thank you, Andy. Thank you.

Andy Palmer:
It’s been great to have you, Christopher, truly an honor, and always a pleasure to catch up. Thanks for joining us for Data Masters. We really appreciate it.

Christopher Ahlberg:
Thank you very much. This was fun.