DataMasters Summit 2020

Leveraging Federal Data as a Strategic Asset

 

Speakers Include

Joe Grace—President and CEO at Grace and Associates / Former Chief Information Officer at Navy Medicine
Nick Sinai—Senior Advisor / Venture Partner at Insight Partners / Former U.S. Deputy Chief Technology Officer, Obama Administration
Nate Ashton—Managing Director at Dcode
Lauren Strayhorn—Tech Engagement Manager, Dcode
Todd Broadhurst—Solution Director, Tamr

The U.S. government developed the 2020 Action Plan, which establishes a solid foundation to support agencies in implementing the government’s data strategy over the next decade. Panelists will start a dialog on the tools and cultural changes that need to be in place to help agencies better leverage their data.

Join this panel to hear from the former Deputy CTO of the Obama administration, the former CIO of Navy Medicine, and tech and government innovation panelists from Dcode as they gather to debate how agencies should find and use the best tools that ultimately enable them to meet mission success.

Transcript

Announcer:
Data Masters Summit 2020 presented by Tamr.

Lauren Strayhorn:
… And thank you for joining us at the Data Masters Summit. We are thrilled to kick off this panel, Leveraging Federal Data as an Asset, in which we’ll cover and debate the foundational elements outlined in the 2020 Action Plan. A little bit about the plan. So the US government has developed a 2020 Action Plan, which establishes a solid foundation to support agencies in the implementation of the government’s data strategy over the next decade.

Lauren Strayhorn:
Throughout this panel, we’ll provide thoughts on the tools and cultural changes that need to be in place in order to support agencies to better leverage their data. To begin, I’ll go around and introduce the speakers and have them share a little bit about themselves. So I’ll go first. I’ll be the moderator for today’s session. My name is Lauren Strayhorn and I’m a Tech Engagement Manager at Dcode. So Dcode is the leader in bringing commercial technology and innovation into the federal market.

Lauren Strayhorn:
And we’ve worked with hundreds of tech companies like Tamr that we’ve vetted for federal as well as government organizations and partners to improve mission outcomes fast. In my role, I work with dozens of tech companies like ones involved in big data and advanced analytics to help them form their federal go-to-market strategy. And for today’s panel, I’ll serve as a moderator. And then now I hand it over to Nate Ashton to talk a little bit about himself.

Nate Ashton:
Hi everyone, Nate Ashton here, Managing Director at Dcode, where I have run the accelerator programs for a couple of years, really working closely with a broad range of tech companies and folks in the venture capital community to bring tech into government alongside Lauren. Prior to that, worked in government mostly at the White House across a pretty broad range of agencies. Excited to chat with you all today.

Lauren Strayhorn:
Thanks Nate. And the next I’ll hand it over to Nick Sinai.

Nick Sinai:
Hi everyone. Nick Sinai here, I’m a Senior advisor at Insight Partners. Insight is a large venture capital and growth equity firm. I’m also a Senior Fellow at the Harvard Kennedy School and I’ve taught a tech and innovation and government class there for the last five years. I served four years in the White House in the Office of Science and Technology Policy.

Nick Sinai:
A couple of years as the U.S. Deputy Chief Technology Officer, where I helped with President Obama’s Open Data Initiatives, which included the executive order making open and machine-readable data the new default for government information, helped draft M-13-13, which is about data as a strategic asset, and helped with a whole range of data jams and datapaloozas to make better use of the data with external folks.

Lauren Strayhorn:
Thanks, Nick. Joe Grace, over to you.

Joe Grace:
Thanks very much. This is Joe Grace. I’m the CEO of Grace and Associates, a recovering nuclear submarine officer, and the former CIO of Navy Medicine. Our firm works with companies that are trying to sell and market their services into the federal government. We hold no federal contracts, and we bring people to market.

Joe Grace:
For the government, they see us as trusted advisors and for the folks that we work with, our goal is to make the right decisions, bring the right technologies forward. It’s all about the mission. It’s all about the people and it’s all about making a difference. Glad to be here.

Lauren Strayhorn:
Thanks Joe. And finally Todd Broadhurst.

Todd Broadhurst:
Hello everybody, Todd Broadhurst here. I’m a Solutions Director with Tamr Government Solutions. Glad to be here.

Lauren Strayhorn:
Thanks, Todd. All right. So now that we have all of our panelists introduced, we’re going to dive right into our questions here. So action number one in the federal data strategy is to identify data needs to answer priority agency questions. Having trained 500+ government leaders on scoping problem sets and tech needs, Dcode has learned that approaching problems from multiple angles and putting yourself in the stakeholder’s shoes gets you to the core need for things like data.

Lauren Strayhorn:
So with that in mind first, what are some key questions agencies should be asking about what they need and what would you recommend as some of your top needs? Nate, I’d like to toss it over to you first.

Nate Ashton:
Sure. So I think at a sort of primary level, one of the key things when we’re going down from the strategic level to actually implementing something like the data strategy is you actually have to identify what the business problems are. And this can be a little bit complicated because especially in the data realm, there’s so many different problems that can be solved by better leveraging data, right? There’s everything from supply chain and logistics to OSD comptrollers trying to do an audit of financial data across dozens of different programs. And they can’t get that done without automating the way that they harness their data.

Nate Ashton:
And so getting down to the business users and giving them the tools they need to actually implement, this is one of the key things, right? They need to understand what the best practices are, lessons learned from other agencies. So leveraging organizations like the Joint AI Center in DoD, which I know Tamr is doing some work with or the GSA Centers of Excellence, which are doing a lot of good work with sort of bringing best practices in terms of tools and implementation to different civilian agencies.

Nate Ashton:
I think that’s key in sort of getting people the tools they need to understand, “Here’s the actual business problem that somebody else has solved externally that I can bring to my organization and implement.”

Lauren Strayhorn:
Thanks Nate. Nick, I’d like to toss it over to you next to talk a little bit about your experience with the Open Data Initiative and some of the needs and questions that you were trying to address there and what this action plan is trying to accomplish as well.

Nick Sinai:
Yeah. So this question of what questions are people trying to answer? What are people trying to do? What outcomes are they trying to have? There’s a series of folks that you can ask inside of government, right? So you can go ask mission owners what problem they’re trying to solve. There’s always going to be leadership, a secretary or assistant secretary, who’s going to be generating questions. And so those are going to drive some of the analytical questions. But we found with the Open Data Initiatives, which was really about data as a strategic asset, not just for the agency or the department, but also for the public.

Nick Sinai:
Thinking about GPS and weather data as these canonical examples, where multi-billion dollar ecosystems are built on top of federal public data. We found that agencies weren’t that experienced at asking external stakeholders, especially ones that they didn’t have a lot of experience with, that is, software developers and the API community. So one of the things that we did was organize a series of data jams, datapaloozas and data hackathons, where we brought data subject experts and folks who have great, interesting data that could be made public wholly or in part.

Nick Sinai:
And then we brought in startups and larger companies and found ways to have those constructive collisions to help generate those kinds of questions and understand the kinds of outcomes. It may be that they were trying to develop a better app to help with symptoms so that you could decide whether you needed to go to the emergency room, as one entrepreneur we worked with did. Or it could be something that’s helping farmers make better decisions about crop insurance and how to manage their fields, based upon USDA and NOAA and other data.

Nick Sinai:
But it was really finding ways to stimulate the generation of those options, and not just in the abstract, but actually, could we get prototypes built in 90 days, of ideas, of features, of full apps? Because talk can be really, really cheap. The last thing I’ll say on this is agencies have a lot of feedback mechanisms, but they’re not usually designed to think about data, right? There’s all kinds of customer engagement and stakeholder engagement, field offices and personnel. So there’s lots of channels. It was in some part just kind of trying to reposition them to also think about the data needs of traditional and non-traditional stakeholders and users.

Lauren Strayhorn:
Thanks Nick. So Joe, I’d like to toss it over to you and hear about your experience as the CIO of Navy medicine and how you all went about figuring out what your data needs are and how to utilize that data for outcomes.

Joe Grace:
Yeah. Thank you very much. I think that data is an interesting discussion. As we talked as a panel before we actually came together, we were talking about our different views of data. And data for data’s sake is kind of irrelevant, particularly in medicine, particularly in supply chain; just gathering data for the sake of it is kind of a waste of everybody’s time. And right now, there’s a big focus on just gathering the data, getting the data together, what is the data, let’s pass the data, let’s share the data. But very few people are saying, “And once we do that, we can do what?”

Joe Grace:
So when we talk about data, a lot of times the folks that own the data aren’t the people that need to do something with it. The folks that want to do something with it don’t have control of it. And so this has become a common issue where there’s a policy written that prevents the sharing of the data or there’s a rule written that doesn’t allow people to do something with it, but the organization itself has a need to do something with it.

Joe Grace:
HIPAA, for example, was originally designed as a way to prevent pre-existing conditions from keeping people from getting healthcare coverage from an insurance company. They could look it up and say, “That person’s got a very serious medical condition. My underwriters aren’t going to allow that.” What emerged was an entire industry based around preventing the sharing of medical data under the guise of HIPAA. That wasn’t what it was intended for, but it actually became a cottage industry that grew into billions of dollars. And people spent their entire lives preventing the sharing of data under something that was originally designed just to protect patients from insurance discrimination.

Joe Grace:
So oftentimes we have rules and policies that were put in for a good reason that prevent you from actually achieving anything you were trying to do with the data, whether it’s managing a supply chain, whether it’s managing the number of syringes or PPE, there are people that don’t want to let you know that because if you knew how many we had, you wouldn’t buy more and I want you to buy more.

Joe Grace:
So even the people that are needing the data put in policies that prevent you from using it. So my experience in medicine is if you really have a need, the data ought to be there for the need, but don’t create an opportunity to share just for the sake of playing with the data. There’s got to be an outcome in mind.

Lauren Strayhorn:
Thanks, Joe. And I think that drives back to a lot of the balance between data privacy and data functionality and we’ll talk about that a little bit later here, but Todd-

Nate Ashton:
I think the one thing … If I can jump in and just take it back off of Joe for a second. I mean, the one thing I’d add is, even assuming you have the policies in place and you are able to access the data, one of the things that often ends up being a problem is actually having the tools and best practices in place to take advantage of it. I mean, data is at such a scope and scale now that nobody can just put some data scientists on it and say, “Well, we’ll wrangle these petabytes of data tomorrow and have it done.” You need to have the tools in place, the processes and systems in place, to actually get there.

Nate Ashton:
I think a really interesting example was when the CDC all of a sudden needed passenger manifests from the airlines in order to make sure that they’re actually notifying folks if they’re exposed to somebody traveling on airlines who has COVID, and it was a mess. You’ve got passenger manifests that are still on paper. Some were incomplete; they weren’t tracking it consistently. And so for those types of use cases, where it’s critically important to get data from one place to another, you need software in place.

Nate Ashton:
You need to be thinking about security and ATOs and privacy and all these things. But you have all these critical use cases where the policies are there to share the data, it’s possible, but you need to get it organized. You need to use a whole suite of software tools. And there’s a whole bunch of innovation in the private sector, Tamr and other companies that are doing really interesting things to enable that, and that’s if they set it up.

Joe Grace:
Well, along the same line that we discussed, we work in the medical field with the Department of Veterans Affairs and the Department of Defense’s Defense Health Agency. And the example that we used is that your entire benefits package in the VA is based on a form called the DD 214. This thing has a life of its own. It is your record of where you served and who you served with, what assignments you had, what your pay grade was, what medals you have. It is your lifeline.

Joe Grace:
Well, the DD 214 used to be something that you kept in a lockbox, and it was a piece of paper that you actually protected with your life. Now it’s available online. Well, all those data fields, which actually determine whether you’re eligible for something or not, it seems like you ought to be able to just say, “This is Captain Joseph Grace. I served from these times. I’m a Naval Academy graduate. Here’s my DD 214. What am I eligible for?”

Joe Grace:
But instead we make the veteran go through and look up what each piece of data might mean, and each piece of data, which might give you a benefit, is not available to tell you what that benefit was unless you know the double-secret handshake. And so this is where data companies could actually take the format of the DD 214, take all the rules, regulations, and policies, put them all in one thing, shake it up, and with a single click that veteran ought to have all of those things made available to them.

Joe Grace:
There’s an outcome taking data mapping, data privileging, data security, and making it available for someone that doesn’t have to do a whole lot of legwork for someone that’s already earned the right to not have to do that. But those are the types of things that we should be driving in our data management, not just collecting the data and protecting that data for just the sake of it.
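[Editor’s note] Joe’s single-click idea boils down to evaluating a set of benefit rules against the structured fields of a DD 214. Here is a minimal sketch of that pattern in Python; the field names, rules, and thresholds are hypothetical illustrations, not the actual VA schema or eligibility criteria.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DD214:
    """Hypothetical subset of DD 214 fields; not the real schema."""
    name: str
    service_start: date
    service_end: date
    discharge: str  # e.g. "honorable"

def years_of_service(record):
    return (record.service_end - record.service_start).days / 365.25

# Illustrative rules only: each benefit maps to a predicate over the record.
# Real eligibility criteria are far more involved.
RULES = {
    "GI Bill": lambda r: r.discharge == "honorable" and years_of_service(r) >= 2,
    "VA Home Loan": lambda r: r.discharge == "honorable",
}

def eligible_benefits(record):
    """The 'single click': evaluate every rule against the record."""
    return [name for name, rule in RULES.items() if rule(record)]

record = DD214("Joseph Grace", date(1980, 6, 1), date(2005, 6, 1), "honorable")
print(eligible_benefits(record))  # → ['GI Bill', 'VA Home Loan']
```

The point of the sketch is the shape of the solution: once the form’s fields are structured data and the policies are encoded as rules, eligibility falls out mechanically instead of requiring the veteran to do the legwork.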

Lauren Strayhorn:
And so we talked a lot about the need for understanding what the questions and the purpose and the intent of that data will be. But there is a lot of overemphasis on wanting that analytics and that end outcome, and not a lot of conversation, or as much as there maybe should be, around the quality of that data and the data preparation prior to that analytics piece.

Lauren Strayhorn:
So would love to hear what you guys think about once you have that need for that data and we talked about it a little bit. What’s the first step to utilizing that data? And Nate, I’d love to hear from you first and then probably bring it over to Todd to hear about Tamr’s experience doing that and how you guys have seen successes and results in the federal government using that.

Nate Ashton:
So I think there’s sort of a set of hurdles you have to go through once you’ve identified what problem you want to solve. I think Joe hit exactly on the first one, which is the policies, the silos. “Okay, great. I want to solve this business problem. Who owns the data? Am I allowed to move it? Am I allowed to share it with somebody else, et cetera, et cetera?” That tends to be the first hurdle that I see agencies getting over once they have a business problem they want to solve or identify data that they need to pull for some use case.

Nate Ashton:
And then from there, it’s always cleaning the data, especially if you’re doing it at scale or trying to actually implement AI or machine learning at some real level. It’s always cleaning the data because it’s going to be a mess in terms of how it’s categorized and different tags and different databases that are varying in their structure and their sort of completeness. Generally, I mean, the statistic that people always throw around is that 80% of a data scientist’s time is spent just cleaning and organizing and wrangling data.

Nate Ashton:
And so that’s where a lot of time is spent currently, it’s just getting ready to be able to use it. The analytics, once you have the data in a good place, you can hit go and run analytics all day. And so that’s where again tools like Tamr are super useful and sort of getting that speed to impact down.
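[Editor’s note] Much of the cleaning Nate describes is mundane normalization: the same value arrives from different systems under different conventions and has to be reconciled before any analytics can run. A toy sketch of that step; the values and the abbreviation map are illustrative, not from any real agency system.

```python
import re

# Hypothetical abbreviation expansions; a real pipeline would carry many more.
ABBREVIATIONS = {"dept": "department"}

def normalize(value):
    """Lowercase, strip punctuation, collapse whitespace, expand abbreviations."""
    value = re.sub(r"[^\w\s]", "", value.strip().lower())
    return " ".join(ABBREVIATIONS.get(word, word) for word in value.split())

# The same agency tagged three different ways across source systems.
raw = ["Dept. of Defense", "department of  defense", "DEPARTMENT OF DEFENSE "]

cleaned = {normalize(v) for v in raw}
print(cleaned)  # → {'department of defense'}
```

Three inconsistent spellings collapse to one canonical value; only after that does it make sense to “hit go and run analytics all day.”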

Joe Grace:
Yeah. And Nate, before you leave that, I want to put it in practical terms. Because I always think you want to make this something that people could identify with. I get a new phone and I hit that button that says, “Do you want to merge your contacts?” Well, of course I do. I’d like to bring in those contacts from all my sources and you get your contacts from 2003 from AOL, as well as your thing from that one time you signed up for something and you just blew 10,000 contacts into your phone that you have no idea which one has data lineage, which one does what, how it’s there.

Joe Grace:
And when you try to remove those with any kind of data deduplication tool, you lose the five important contacts at the White House that you really needed to keep. My point is there’s a practical application of data filtering, data mapping, and bad data destroying a new platform. And so quality becomes so important, having a tool or a methodology that truly tracks the data lineage so you can go back and reconstruct what you messed up with your movement, or you can go back and find out where that piece of data came from. In healthcare, critical for life and death.

Joe Grace:
In intel, critical for staying left of boom. So that ability to track the data and undo your mistakes of mergers, synergies, collections. And what do we all do? We take that phone data and put it on 17 different WD Passport drives because we’re afraid we might lose it. And then we don’t know which one was the one we wanted to go back to in the first place, and a week later we can’t remember the password for it.

Joe Grace:
So we each have this at a very fundamental level, and it translates completely to how does that work at OSD, or how does it work in the finances of the Army? It’s the same practical applications of data cleansing, data lineage, being able to map it, and then coming to a clean set of data to do something with. I don’t know, Nick, if you’ve found that, but hopefully that was in your plan that you had at the White House.

Nick Sinai:
I think we’re going to toss it over to Todd to talk a little bit more about how this works in practice, but I would just say I’d +1 both of your fantastic comments.

Lauren Strayhorn:
Thanks Nick. Yep.

Todd Broadhurst:
I couldn’t echo the group, the panel, even more. 80% of all analytic projects fail. That’s according to Gartner; that’s not according to Tamr or this panel. They fail because the data is dirty. It does not flow correctly. It’s not being pulled in from those internal and external sources. And so how do you conflate and catalog and connect all that data together? That’s easily done if you’re talking about two or three data sources; you can do an if-then statement or a Python script. But when you’re talking about millions of different records, when you’re talking about hundreds of different ways that the schema is categorized, with different naming conventions, different acronyms and abbreviations, especially in the government, where each department and agency uses different terminology, you need to be able to curate that and connect it all together.
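[Editor’s note] To make Todd’s point concrete: exact if-then matching only finds identical strings, so once naming conventions diverge you need similarity-based matching. A toy illustration using only the Python standard library; this is not Tamr’s algorithm, and the supplier names and threshold are made up.

```python
from difflib import SequenceMatcher

def similar(a, b, threshold=0.6):
    """Case-insensitive string similarity; threshold chosen for this toy example."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

# Three supplier records; the first two refer to the same entity under
# different naming conventions, which an exact if-then match would miss.
suppliers = ["Acme Corp.", "ACME Corporation", "Beta Industries"]

# Greedy clustering: put each record into the first cluster it resembles.
clusters = []
for name in suppliers:
    for cluster in clusters:
        if similar(name, cluster[0]):
            cluster.append(name)
            break
    else:
        clusters.append([name])

print(clusters)  # → [['Acme Corp.', 'ACME Corporation'], ['Beta Industries']]
```

At three records this is trivial; at millions of records with hundreds of schemas, pairwise comparison blows up, which is where machine learning plus human expert feedback replaces hand-written rules.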

Todd Broadhurst:
And that’s essentially the most important thing that Tamr does: we catalog, connect and curate those internal and external data sources, whether that’s hundreds of data sources or even thousands. And then we go through a combination of machine learning and human expertise. And not human expertise from data scientists, but human expertise from the analysts and the people that know what that data is about. And so we can now apply that to everything from clinical trials and Customer 360, know-your-customer, master data management, reference data management, and mastering of supplier records, as well as geospatial and patient mastering.

Todd Broadhurst:
All of the things the panel has got great experience in. How do we bring all of those records, not only that 214, but if he’s got private health care, if he was in certain environments while he was in military or post-military, how do we conflate and bring all that together? That’s how we can make analytic projects and visualization of those analytics more impactful and more successful overall.

Lauren Strayhorn:
Thanks Todd. And Nick, I want to toss it over to you for a final comment here for this question, and then we’ll move on to the data privacy, security and functionality piece to wrap up the panel. You talked a little bit before, and Todd mentioned it as well, about having that AI and ML tool in place, but also having that human in the loop, like Todd was saying, that data analyst who knows exactly what that data needs to be and what story it needs to be telling. So I’d love to hear a little bit about what you’ve seen be successful in the past when that human in the loop is considered and has an important role.

Nick Sinai:
Yeah. So thank you, Lauren. Back when I was in government, in the White House writing data policy, there are a few things that we learned. One is that executive orders are not self-executing. You really have to work with the people who want to take it forward. And that actually was one of the things that led to the creation of the Presidential Innovation Fellows Program, where folks were embedded inside of agencies to take these data projects forward.

Nick Sinai:
Instead of calling it AI and machine learning, a lot of what we were talking about was big data, but it was the same thing. I think that term went out of fashion just as I was leaving government. And so now we call everything AI, but of course, AI and ML have been around for many, many decades. And so there’s a big focus in government on using AI and ML to get better outcomes. And that’s absolutely fantastic, but we want to use machines where machines are good, in predicting certain things, but then also bring in the human expertise, the people who tend to know the ins and outs of a particular dataset, with what purpose it was collected, and maybe with what bias it was collected. All of these things that a machine may not know and may be harder for it to learn.

Nick Sinai:
And so bringing both of those together is what Tamr does, to give you a much better picture, whether it’s a part or a veteran or a traveler or whoever or whatever it is. And I think that’s important as we get really excited about all of the AI use cases. I too am excited about the use cases, but if we don’t focus on getting the data clean for very specific outcomes, then we’re going to have a lot of investments in AI for AI’s sake, and we’re actually not going to be able to get out of the starting gate in government.

Lauren Strayhorn:
Thanks Nick. All right. So moving on to our final question here. I know we’ve talked a lot about first addressing and figuring out what those needs are, and then how you can start to address those needs and clean that data. So I wanted to close here with: once we have that data and we want to start sharing it, how do we address the privacy aspect as well as the functionality of that data? One of the practices in this 2020 action plan, under governing, managing and protecting data, speaks to governing data to protect confidentiality and privacy.

Lauren Strayhorn:
So that’s around ensuring there are sufficient authorities, roles, organizational structures, policies, and resources in place to provide appropriate access to that confidential data and to maintain public trust and safeguard privacy. So I’d love to hear first from Joe, what are some emerging applications where that data can be shared between and among agencies to fit that operational need and that outcome, and just hear what your thoughts are.

Joe Grace:
I think there’s a great example that’s recently taken place during COVID, so let’s bring it current. During COVID, telehealth became a real crisis because people were no longer willing to go into the hospitals for anything from an optional scan to just seeing their doctor about something. CMS has always had a great deal of lockdown on virtual health. So to see your doctor and protect your patient information, you had to have a special device, and it was all coded and it was very shut down. Well, people didn’t have the tools. So the president opened up CMS, and you could do Zoom calls for telehealth. You could do Teams calls for telehealth. You could do FaceTime for telehealth.

Joe Grace:
Well, that opened up a whole world of privacy: personal health information is now flying out there. What do we do? And so I think now, how do I get reimbursed for that? So going to Todd’s discussion of finding out what people pay for this, and do people pay the same thing across all things, for various telehealth experiences, what is the data that shows how I should be paid? How long was the encounter? What did they do? Who was the patient? Was the patient there? Did the doctor provide care?

Joe Grace:
That whole explosion during COVID has created a world of data management discussions around health information. Everything from identifying a patient, because Blue Cross identifies a patient differently than the VA does, which is different than the DoD does, which is different than your local concierge medicine doc does. How do we correlate all those different pieces of information, charge for it, track the result and see how that comes out, so the patient actually is taken care of and their privacy is protected, using a platform that has no security?

Joe Grace:
So those are the kinds of things right now that there’s an operational need, there’s a technology that can do it, and now the data propeller heads are going to have to go back afterwards and figure out what in the world does this mean? So Nick and Nate and Todd are going to be in business for the rest of their natural lives with this one. And there’s so many things about this from patient tracking to contact tracking to who was in a restaurant and what are the results of … I mean, COVID has set up a data explosion and trying to figure that out is a great example of practicality. So I’d be open to their comments on this one.

Lauren Strayhorn:
Yeah. And I’ll kind of open up the floor to whoever has a comment for us. I see Nate you’re off mute first. So it sounds like you’re ready to comment.

Nate Ashton:
Fair enough. And I think Nick’s done a lot of work on open data, so I’ll let him talk about that side probably. Because certainly when it comes to data sharing, that’s a big piece of it. I think generally government has more problems with getting data out there than it does with locking it down. Plenty of silos still to be broken down within and across government on the sharing side. When it comes to privacy, I often look at it mostly as a security software question.

Nate Ashton:
I mean, you’ve got all these breaches happening every year within government: the more sensitive data is being stored across agencies, the higher value the targets are. And if we’re not simultaneously improving our security posture, that’s going to keep happening. I mean, I’m one of the many whose OPM records are all in China after the big breach in 2014, and we’ve all been subject to many breaches. I look at it less as what are the policies that we need in place, and more as are we also staying mature and ahead of the curve from a cybersecurity best practices standpoint?

Nick Sinai:
Just building on both of those comments a little bit. Privacy is fundamentally about protecting data, but it’s also about the use of data and the expected use of data. And so I may have a reasonable expectation that the government is going to share data internally to help protect me and help protect the country. But I may object to uses that are outside of that kind of reasonable expectation. And that’s why certain use cases are extremely locked down when it comes to census data and IRS data and so forth. And to a certain degree medical data and so forth.

Nick Sinai:
We spent a lot of time in the past administration talking about open data and the current administration has also talked about open data. It’s part of the current federal data strategy, which is promising to see that this is really bipartisan. There’s a couple of ways to think about this though. There’s data that can be completely open and machine-readable and hopefully this is something you’re thinking about through the life cycle of the data, not just the time of dissemination.

Nick Sinai:
But then there are data assets which can be made available to certain populations. And so maybe it’s researchers, maybe it’s other folks. And so you can put certain legal precautions in place, and you can have virtual or real data centers, places where they can go work on the data, and they’re only able to take the aggregate out. We’ve learned a lot about social mobility and some of the big structural problems from having Harvard and other researchers use IRS data and Social Security data in very protected ways. And I think that’s something that we all benefit from.

Nick Sinai:
The final thing I’d say is that fundamentally this is your data and you should be able to use it. And so one of the things that we focused on was how do we promote standardized use so that people could easily understand and easily use their data, whether it’s health data or energy data. So we had a Blue Button initiative and a Green Button initiative. Some of those have continued in this administration.

Nick Sinai:
Blue Button at the point of care, for example, is an initiative that CMS has continued: how can you get that patient’s data to the doctor and to the patient when they’re actually in that clinical moment? There’s a number of different ways to think about the privacy challenges, but at the end of the day, it really is about responsible use that helps everyday Americans.

Joe Grace:
Nick, one of the things that strikes me is that in our civilian lives, our everyday lives, the use of data is ubiquitous. I log in with Facebook or Google or LinkedIn and I connect to my Roku TV, which connects back to my phone. And in a matter of seconds, it’s linked my AT&T bill to my Roku account, which signs me up and puts me on Facebook to let me know that I’ve done something. And then I walk into the federal government and all I want to do is find out if I’m eligible for a benefit, and it takes an act of Congress for me to find that out.

Joe Grace:
So I think in our everyday experience of data manipulation and data use, we’ve become very free with it, and also, I would say, very irresponsible with our use of data. We will sign up to get a T-shirt and give them every single thing about our life, including our bank account, but we won’t let the government tell us whether or not our child has a disease. So it’s a very strange use of data. And what we found a lot of times is we were always afraid that a senior executive had read something on the plane or in the bathroom and walked in and said, “Hey Captain, why can’t I do this? I can do it on my phone. Why can’t I do that here?”

Joe Grace:
Well, sir, the data’s not available. We lock it down. There’s a rule, there’s a policy, there’s a thing. But their experience in their lives doesn’t correlate to their experience with the government. And I think that’s where technologies such as the ones we’re discussing today are going to bridge that gap and start making some of those things more seamless. I’d be curious to hear your reaction, but I think that’s one of the frustrations: you can figure out what the latest Yelp thing is, but I can’t figure out what an MRE is.

Lauren Strayhorn:
Thanks Joe. And I think if there’s any additional comments, feel free to jump in and then we’ll toss it to Todd for a final comment here to close us out.

Todd Broadhurst:
[crosstalk 00:32:10] Sorry, go ahead, Nate.

Nate Ashton:
Yeah. And I’ll just add to what Joe said around the lived experience today versus what you’re dealing with internally on government systems. It certainly was a change leaving government, leaving my BlackBerry behind and having to use Slack for the first time and all that. But one of the really interesting things, and why I think it’s a super exciting time both in data and in tech and government, is that a lot of the new up-and-coming IT leaders and program managers in government aren’t taking the status quo for granted. And you’re seeing a lot of innovation now. It’s not just the PIFs and the 18Fs anymore. There are all sorts of programs across government doing really cutting-edge stuff. And so I think it’s an extremely exciting time for the years ahead.

Todd Broadhurst:
Exactly, Nate. And I mean, we’ve come phenomenally far in our access controls to the data, whether that’s biometrics, CAC cards, or two-factor authentication. But as Joe mentioned, I ordered a fishing pole last night, and on my Yahoo page this morning there were a thousand different lures to go with that fishing pole. Everybody’s sharing-

Joe Grace:
Blame Salesforce for that.

Todd Broadhurst:
Yeah. Everybody’s sharing everything. And there’s the introduction, or the adaptation and adoption, of these cloud service providers, not only at an open level, but at sensitive, top secret, and uber-secret type levels. It’s great now that the intelligence community or the DoD as a whole can start cross-pollinating their information. They can start to share that with each other, where the clearances and the access controls allow.

Todd Broadhurst:
And we can cross-pollinate it with external data sources: data from companies, through satellite, through LIDAR, through everything new in the last decade or so that we have access to. We are on the precipice of being able to combine all of that information to give us phenomenal insights, but we need to figure it out and keep control of it, because I don’t want to log into a DISA system and then have it come up on my NSA system with a hundred things that I did on my DISA system.

Joe Grace:
And I think from a CIO’s perspective, Todd, what you have said, and I’ve got to tell you, Nick and Nate, having worked in the White House, is just incredible. The CIO’s job is to keep his boss off the front page of the paper. It’s to make sure that the data systems you’re using give him mission capability and operational outcomes while making sure that you’re complying with the law, making sure that you’re doing all the things right. And having a company that can actually do that for you, that you can trust, is an absolute godsend in this market.

Joe Grace:
Because there’s no one smart enough to understand all the different rules and policies and procedures and what the data does and doesn’t do. And so you need someone who can come in and be that trusted advisor. And I think that’s the role of companies like Tamr: the folks that are coming in need that voice of reason to explain how it all works, and that trusted partnership becomes critical. As a CIO, having trusted vendors that actually come in and let me know what I can and can’t do is more important than almost anything else I can do.

Lauren Strayhorn:
Thanks Joe. Thanks Todd. Nate or Nick, any final closing comments before we wrap up here?

Nick Sinai:
I guess the last thing I would say is that one of the things the Tamr CEO, Andy, talks a lot about is this idea of DataOps, which is data and operations coming together. For those of you who are familiar with software development, we’ve moved from big waterfall projects, where we take years and years to build software and then test software, to DevOps, development and operations coming together, so we have faster cycle times and get software to the end user. And the same thing is happening in data.

Nick Sinai:
And so I think Tamr and other companies are really on the leading edge here of how we reduce that cycle time so that we can clean up, use, and analyze the data for a particular outcome, whether that’s helping a veteran understand the benefits she’s eligible for without her filling out forms in triplicate. It’s that faster feedback time. It also lets us fail faster, learn faster, and ultimately innovate and scale faster across government.

Nick Sinai:
So companies that take the best of AI and the best of humans and reduce that cycle time, that’s so important, because it means that innovation can really scale. We have lots of pilots, but we don’t often get to operational scale in emerging tech. So that’s what I’m really excited about.
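The “best of AI plus best of humans” loop Nick describes can be sketched as a tiny triage pipeline: an automated matcher decides the confident cases and routes only the ambiguous middle band to a human reviewer, which is what keeps each cycle short. The similarity rule, thresholds, and example records below are hypothetical illustrations, not Tamr’s actual method.

```python
# Toy DataOps-style matching loop: auto-decide confident record pairs and
# queue only the ambiguous band for human review.
from difflib import SequenceMatcher

def similarity(a, b):
    """Crude string similarity in [0, 1], case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def triage(pairs, lo=0.6, hi=0.9):
    """Split candidate pairs into auto-match, auto-reject, and human review."""
    match, reject, review = [], [], []
    for a, b in pairs:
        s = similarity(a, b)
        if s >= hi:
            match.append((a, b))      # confident: machine decides
        elif s < lo:
            reject.append((a, b))     # confident: machine decides
        else:
            review.append((a, b))     # only this band costs human time
    return match, reject, review

pairs = [("Dept. of the Navy", "dept. of the navy"),
         ("Navy Medicine", "Department of Energy")]
match, reject, review = triage(pairs)
```

Human decisions on the review queue would then feed back into the matcher, which is the faster-feedback loop the panel is pointing at.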

Lauren Strayhorn:
Thanks, Nick. All right. Well, that’s a wrap for us. Thank you to all of our speakers and to all of you joining the session today. All of us at Dcode love working with Tamr and are glad to be part of the Data Masters Summit. Dcode, as I’ll say again, is the leader in bringing commercial tech and innovation, like we’ve talked about, into the federal government to improve outcomes.

Lauren Strayhorn:
And data and analytics are obviously a big part of that mission. We’re excited to continue working together with Tamr and to see what else you all can do for the government. All right. Well, that’s all for us here. Enjoy the rest of the summit.