Meeting Mission Requirements with Mastered Geospatial Data
- DATAMASTERS on Demand
As the proliferation of imagery data and mapping information grows across the federal and
commercial markets, there is a growing need to consolidate, catalog and curate this data from a variety of sources. Hundreds of data sources across GPS, social media, satellite imagery and
other integration points can measure into hundreds of millions of records, perhaps billions.
Conflation of geospatial data is necessary to ensure that all data sources (both internal and
external) are grouped to provide the most complete information about points and entities of interest.
Maxar’s EVP of Global Field Operations, Tony Frazier, and Tamr’s CEO, Andy Palmer, will address:
Best practices for geospatial analysts to utilize machine learning coupled with human expertise to save time and improve mission effectiveness
Why cloud native capabilities are crucial for extending information sharing and ingestion, and the importance of grouping and cleaning data prior to visualization
How analysts can get started
Data Masters Summit 2020, presented by Tamr.
Hi, welcome to Meeting Mission Requirements with mastered geospatial data. I am Carrie Drake, Director of Government Marketing at Maxar Technologies, and I’ll be your moderator for this session.
Today, we’re speaking with Tony Frazier, the EVP of Global Field Operations at Maxar Technologies and Andy Palmer, Co-Founder and CEO at Tamr.
Today’s discussion is going to focus on meeting mission requirements with mastered geospatial data. Tony and Andy, you guys go way back, so tell us a little bit about the work you’ve done together and how we’ve come to this conversation today.
Yeah, well Tony and I started working together at a company called pcOrder.com back during the height of the dot-com boom. And Tony was running product, and I was reluctantly running sales and marketing. And it was an amazing place, because we were doing a lot of what people today would call SaaS, or even data as a service. And a lot of the stuff that Tony built back then was things that people consider cutting edge today. We were way ahead of our time, don’t you think, Tony?
Yeah, absolutely, Andy. It was really a fun period, but when you think about the issues we were dealing with, trying to get multiple sources of PC industry data normalized and into a format that could be usable for commerce across multiple channels, that was essentially the problem we were trying to solve then. And I think if we play forward our careers, yeah, we’ve seen different instances of that same problem, first for the enterprise and now in what I’ve been doing for government.
It’s really amazing how, maybe we’re just getting old, that we have the same problems over and over again.
Yeah, no doubt. I mean, I think one of the things which has been interesting for me: I’ve been in this industry for about 10 years now, and after pcOrder, I did an NLP start-up in Boston, so I followed you to the Boston market and spent time in that world. That company’s now part of IBM, and then I did some more things at Cisco Systems.
When I left commercial tech and came into the geospatial industry in 2010, I saw a massive opportunity because, at the time, the commercial satellite imagery industry was still pretty nascent. It was very focused on data, but had an aspiration to move into data analytics.
And to do that, we had to create different ways to extract information from all that imagery and turn it into information that could feed analytic engines and drive various insights. That was really what drew me to the industry, and we’re starting to see it play out. It’s hard to believe it’s been about a decade, and now the types of questions we can answer by bringing together sources of geospatial data are pretty phenomenal.
Let’s talk a little bit about that, what does that current landscape of geospatial data look like, from data variety to accessibility? Tell us a little bit about that.
Yeah, I’ll tackle that first, if it’s all right. Geospatial data has really exploded over the past decade, in terms of the classic three Vs of data: volume, velocity and variety. And there are great tools to help deal with the volume of data, all the big data solutions available on the cloud, as well as on-prem options like Vertica, the system my partner Mike Stonebraker and I built, one of the big column stores of the last 20 years.
So there are lots of options to deal with volume, and similarly with velocity. The design pattern represented by Kafka, which has been popularized by Amazon, has enabled companies to deliver streams of data in addition to basic batch workloads. And so the velocity of data that people can consume, and the tools they’ve got to deal with that increasing velocity, are really incredible.
The one that is the most frustrating is variety, and this is what we’ve been focused on at Tamr, and I think Tony and the team at Maxar have also experienced it. We’re really only at the beginning of people dealing with large-scale production systems for the variety of data, and the work Tamr is doing at the NGA, under the direction of Admiral Sharp and team, is one of the most cutting-edge implementations of large-scale data curation in the world.
Maxar is one of the few other places where this is happening, where the Maxar team is curating large quantities of geospatial data at scale. And then maybe if there’s one other place where this is happening, it’s at Google, where the Maps team, as well as Earth and the Google Knowledge Graph, are practicing large-scale data curation.
And one of the reasons why Google invested in Tamr as one of our first investors was because they saw us generalizing this machine-driven, human-guided design pattern for data curation. But most geospatial data conflation solutions are limited in terms of the number of sources they can support, primarily because of the rules-based design pattern that underlies the tools. The new set of tools that are becoming available are model-driven, using cutting-edge machine learning and integrating the human and the machine together to curate data at very large scale.
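The machine-driven, human-guided pattern Andy describes can be sketched roughly as follows. This is an illustrative toy, not Tamr’s actual system: a simple string similarity stands in for a learned matching model, and the thresholds are invented, but it shows the division of labor, with the machine deciding confident pairs and routing only the ambiguous ones to a human expert.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Crude string similarity in [0, 1], standing in for a learned matching model."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def classify_pair(rec_a, rec_b, auto_match=0.9, auto_nonmatch=0.5):
    """Let the machine decide confident pairs; route ambiguous ones to a human."""
    score = similarity(rec_a["name"], rec_b["name"])
    if score >= auto_match:
        return "match"
    if score <= auto_nonmatch:
        return "non-match"
    return "needs-human-review"

# Two source records that may describe the same point of interest.
a = {"name": "St. Mary's Hospital"}
b = {"name": "Saint Marys Hospital"}
decision = classify_pair(a, b)  # similar but not identical: falls in the human-review band
```

The point of the pattern is that human effort scales with the ambiguous middle band only, so adding more sources does not require proportionally more analysts.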
And I think this is one of the things Tony and I connected on, as we were starting Tamr and as he was digging into Maxar: this fundamental design pattern of using model-based approaches to do large-scale curation. And then, eventually, with both of us working with the NGA, we saw this pattern emerge, that there are regular opportunities and challenges to integrate hundreds of thousands of data sources at some of our commercial customers. And the scale at the NGA and others is definitely at that level.
I think we’re both seeing that, right, Tony?
Yeah, absolutely. What’s really interesting is that the evolution of technology is facilitating its application to the mission, and that’s becoming more and more important. Just to hit the same topic from a different angle: the NGA is a combat support agency, and supporting the war fighter ultimately is their goal. That’s both in core missions like safety of navigation, but also in providing different types of intelligence support to war fighters: how to understand patterns of life, how to provide indications and warning, how to support broader research-type missions.
And so, what’s been happening over the past decade is, as the mission has changed from a heavy focus on counter-terrorism to great power competition, the scale of the mission has expanded to the globe; it’s the Blue Marble in terms of coverage. There are over 100 million square kilometers of land mass, and an equal amount of sea, that the agency needs to understand. Compound that with the new sources of data, as Andy mentioned with the variety: there are hundreds of satellites on orbit collecting data, there are drones, there’s ILT, there are a hundred other sources.
All those sources, whether they’re from government assets or commercial assets, are creating opportunities to deliver insight at scale. And so there’s a need to actually harness all that and then derive signal from it. I think a lot of what Andy described is creating the tooling to do that in a way that’s scalable. And it’s important because budgets are tight for many of our customers. With all this data influx, you can’t just hire more analysts. I mean, there was a report that said that, with the number of commercial assets coming online, the NGA would need to hire six million analysts just to be able to put eyes on pixels.
I mean, that’s not feasible, so we have to come up with scalable ways to combine the human and the machine, to be able to extract information and insight at scale.
That’s a really good segue there, Tony. We’re talking about scalability, we talked a little bit about budget constraints, and we’ve talked about whether there are enough humans in the loop for this, but what are some of the other issues that geospatial analysts are facing? And how are we helping to solve those issues, and what are our customers doing to solve them?
Yeah, I’m happy to give a couple of examples to start, Andy, does that work for you?
One of the things that has been really fulfilling is, our imagery has traditionally been used to support different core missions: foundational mapping, humanitarian assistance and disaster response. Those are areas that are very prominent in terms of how Maxar imagery has supported both our government customers and different commercial applications.
One of the areas that’s been really exciting for us is with COVID-19, which obviously disrupted how we all work, live and play, quite frankly. One of the things that we saw, in partnership with the US government, was that as many analysts who typically work in a secure environment needed to transition to telework, we saw a tremendous spike in usage of systems we’d deployed in the unclassified domain that could support relevant missions, to the point I talked about earlier with the NGA.
One of the real success stories that our team’s super proud of is that we were able to see thousands of analysts begin to use the systems that were deployed on the internet, and they weren’t just viewing imagery; they were actually combining imagery with automation techniques to extract features, things like roads, buildings, points of interest, et cetera, and then doing mapping on top of that to map rapidly. I think we’re over 12 million square kilometers of land mass where we’ve been able to create feature data, across over 200 mapping campaigns.
And so, once you create that foundation data, it actually creates a mechanism to conflate other sources to it, much of which Tamr is facilitating, integrating those sources into a common baseline.
It’s amazing because there’s this philosophy behind it, which is: if you can use the machine really aggressively, the more data sources you add, the higher the quality of all the data. You end up in a state where you’re adding a lot more data sources, but the overall quality of all the data gets much better, much faster, because you’re leveraging the machine. It’s totally a win-win scenario.
It’s really amazing the work that you guys are doing and kind of an unanticipated consequence of this pandemic, to enable all these people to do this remote work and engage in the data in this way. Very cool.
Yeah. I think that’s exactly right; we touched on the quick pivot and the need to provide access to data for analysts in new environments. As we’ve made it through this pandemic, and as we grow and move forward, what are we looking at in terms of the importance of providing those different levels of access, whether unclassified or various levels of classification, for viewing these geospatial data visualizations? Can you talk a little bit about the importance of those different levels of classification?
Tony, you okay if I dig into this one?
We see this because we have so many customers across commercial and government agencies. We see this pattern that’s emerged, where it’s absolutely essential to manage access and security when you’re conflating this many sources together. And it’s very complicated, but the hardest part is managing the graceful permissioning of curated data sets. It starts with a curated data set that is not discoverable at all, that no one can see, the most confidential and clandestine form of curated data set. Then you very gracefully make it discoverable, so that you know it exists and may request permission to access it in some way, shape or form. The next state is making it possible to discover the metadata associated with that curated data set: maybe the attributes that are in the data set, when the data set was created, and what the original sources of the data are, some of the provenance and lineage of the data.
Then eventually comes the graceful discovery of sample data from the curated data set. And finally, the last level of permissioning is actual access to the entire curated data set itself. But it’s that graceful progression, from not discoverable at all to "here, you can have the whole data set," that I think is the biggest challenge in terms of accessibility, and doing that at large scale is something I see a lot of companies struggle with, because many of those mechanisms aren’t necessarily built into their infrastructure.
But it’s a big part of what we do. One of the things we try to do at Tamr is build that graceful discovery, from zero access and zero knowledge all the way through to full access and knowledge of curated data sets, into our system. Tony, I’m sure you see this all the time.
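A minimal sketch of that graceful-permissioning progression, with hypothetical names and levels (this is not Tamr’s or any agency’s actual access model, just an illustration of the tiers Andy walks through):

```python
from enum import IntEnum

class Access(IntEnum):
    """Graceful progression from invisible to fully available."""
    HIDDEN = 0        # data set not discoverable at all
    DISCOVERABLE = 1  # existence known; access can be requested
    METADATA = 2      # attributes, provenance, lineage visible
    SAMPLE = 3        # a small sample of records visible
    FULL = 4          # entire curated data set available

def visible_view(dataset, level):
    """Return only what the caller's access level permits."""
    if level == Access.HIDDEN:
        return None
    view = {"name": dataset["name"]}
    if level >= Access.METADATA:
        view["metadata"] = dataset["metadata"]
    if level >= Access.SAMPLE:
        view["sample"] = dataset["records"][:2]
    if level >= Access.FULL:
        view["records"] = dataset["records"]
    return view
```

Each level strictly widens the previous one, which is what makes the progression "graceful": granting the next tier never has to revoke or reshape what an earlier tier exposed.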
Yeah. It’s definitely interesting because a lot of what you described, which I know is tried and true in industries like financial services and healthcare, applies in this environment for government missions, because obviously intelligence analysts work in different security domains, going from unclassified to secret, top secret and above.
And so, having the ability to do a broad-area search, where you may be looking for, say, coarse objects like airplanes or rail cars or certain types of vehicles, that’s something you can do at scale leveraging commercial sources in the unclassified domain. Then you can move those detections up into a more secure domain, where you can start to do better attribution, where you’ll know the specific type of vehicle, which might then be an indicator for some intelligence question you’re trying to answer.
I think there’s definitely a parallel there. Having the infrastructure to not only do the conflation and normalization of data, but also identify what different users should be able to access and when, is super important as we build an architecture to support these missions at scale.
I really like that phrase “graceful permissioning,” I feel like I want to use that again, that’s wonderful. We talked a lot about the various sources of data and we’ve got a lot of availability of satellite imagery but what has the impact of the combination of lidar and drone and satellite imagery and all of these various data sources been on the geospatial world? Tony, you want to start?
Maybe I’ll start? Yeah, I’ll start and then Andy, we’d love to hear your thoughts. I think what we’re seeing is that there’s a growing appetite for persistence. Ultimately, if the mission’s global then you want to be able to have an unblinking eye, that sees everything that occurs everywhere and then derive insight from that, on very rapid timelines.
I think the opportunity is vast when you think about all these sensors, from space to air down to data being collected on the ground. One of the huge problems in deriving signal from all that is how you co-register it all together and then use a spatial index to do spatial-temporal analytics. I think that’s a lot of the problem we’re trying to solve. We’ve been trying to solve it through different means, and one of the ways Maxar has been attempting to solve it is by creating derivatives from our imagery that can serve as a reference layer for spatial data, but then there’s the whole issue of how you tie other things down to it.
We had a joint venture we started in 2015 called [inaudible 00:18:53], an option we’ve since exercised; it’s now part of the Maxar family, wholly owned. What we’ve done is take all the imagery we collect from different angles and create 3D products at scale. We’ve been able to use that to tie down other satellite data and other drone feeds, using it as a way to look at a stack of pixels and get the location at a given point in time. But then the next question is, okay, well, what about social media?
What about telemetry data? What about other sources? I think that’s a lot of where we’ve been having great discussions between Maxar and Tamr, about how to fuse that together to answer more complex geospatial questions.
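One simple way to think about the co-registration problem Tony describes is bucketing observations from different sensors into shared space-time cells, so that detections of the same thing land on the same key. The sketch below is a deliberately crude illustration; the cell sizes and record shapes are invented, and real systems use proper geodetic indexes and the 3D reference layers Tony mentions rather than rounded degrees:

```python
from collections import defaultdict

def spatiotemporal_key(lat, lon, t_hours, cell_deg=0.01, t_bucket_hours=1):
    """Bucket an observation into a coarse space-time cell."""
    return (round(lat / cell_deg), round(lon / cell_deg), int(t_hours // t_bucket_hours))

def conflate(observations):
    """Group multi-sensor observations that share a space-time cell."""
    groups = defaultdict(list)
    for obs in observations:
        groups[spatiotemporal_key(obs["lat"], obs["lon"], obs["t"])].append(obs)
    return groups

obs = [
    {"sensor": "satellite", "lat": 38.8895, "lon": -77.0353, "t": 12.2},
    {"sensor": "drone",     "lat": 38.8893, "lon": -77.0351, "t": 12.7},
    {"sensor": "telemetry", "lat": 40.7128, "lon": -74.0060, "t": 12.5},
]
groups = conflate(obs)  # satellite + drone fall in one cell; the third stands alone
```

Once observations share a key, the harder attribution work (deciding whether co-located detections are really the same entity) can run per cell instead of across the whole globe.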
It’s so complicated, what you guys are doing. It sounds simple, but there are these two primary indexes, geospatial and temporal, and when you consider the diversity and the complexity of data required to resolve all that and deliver it at the tactical point of consumption in any given mission, it’s a massively complicated problem, really remarkable.
It’s truly … I don’t know how you guys are dealing with this flood of data that’s coming in, especially since you’ve got this mission to make it analytically relevant all the time. That’s a really challenging problem.
Yeah, it’s a journey, for sure. I think, from what we’ve seen, there are early wins we can deliver based on the current state, but then also future missions to go after. One of the areas, that I saw with your work program at NGA and with what we’re doing, that’s really exciting, is: how do we conflate mapping data that we’re generating from our imagery with other sources that would further attribute it?
I know it’s a building, but what type of building is it? What materials are associated with it? What was delivered to it, and what left that building over time? I think those are areas where we can deliver the mission today, but also build infrastructure for the mission of tomorrow.
Yeah, and it’s incredible; we see opportunities all the time on the commercial side where Maxar data would be incredibly valuable. One of our favorite applications is wells mastering in oil and gas. We’ve built these great wells mastering systems, but the richness of geospatial data in those systems is relatively rudimentary, and bringing really state-of-the-art geospatial and analytic capabilities into these commercial applications would be incredibly powerful, and a huge opportunity for Maxar.
Now, Tony, talk to us a little bit more about how Maxar’s data helps geospatial analysts solve this variety of data challenges. I think Andy touched on it a little: when we’re going into NGA, we’re talking to folks who really understand geospatial data, but when we’re moving into the commercial space, like you touched on, Andy, with oil and gas and some of the other commercial markets, you’re not always going to run up against a company or a customer that has a bunch of geospatial analysts on staff. So, how are we helping these commercial organizations with what we’re learning from geospatial analysts and our government customers, and how are we helping those government geospatial analysts move along?
Yeah, great question, Carrie. I’d approach it from a couple of different angles. There’s the work we do for the US government and the other large governments in the world that have expertise in geospatial. That certainly is an area of ongoing commitment from Maxar, and we’re partnering with them on how we go after bigger missions.
How do we go from providing the imagery for mapping to full mapping solutions to support safety of navigation, the core requirement? Or how do we enable autonomous navigation of a drone, where it can get from point A to point B without having GPS active, in an access-denied environment?
We’re working to [inaudible 00:23:34] up there, but to your point around the [inaudible 00:23:37], I’d say this is really where partnerships like what we’re incubating with Tamr are really important, because what we’ve found is that many consumers, and to a certain extent developers, have gotten access to our imagery through what’s happened with commercial mapping. Andy mentioned earlier Google, with their Maps API and some of their other analytic capabilities, like Earth Engine.
Those have been ways to get exposed to our content, through a number of our embedded partnerships with those mapping providers. For me, the next phase is: how do we make geospatial capabilities, not just imagery but all the other data, accessible to developers? Because there’s a whole ecosystem of developers and data scientists who don’t think geo-first; they think about answering spatial-temporal questions.
And so, having a way to make it easy for them, if they’re familiar with this database, this data integration technology, this cloud environment, how do we make it easy for them to integrate into their workflow? That’s a lot of where we’re going as we think about the next phase of filling out our ecosystem, to support that broad-based access to the community, as opposed to just what they see in the satellite view of a mapping application.
This is kind of a broader question for both of you: where do you see geospatial intelligence going in the next 20 years?
I think it’s almost like, as Tony was alluding to, geospatial becomes a feature of overall intelligence. I know that when we started a company up here in Cambridge called Recorded Future, doing cyber threat intel, you wouldn’t think that geospatial matters in cyber threat intel, but it actually matters a lot.
I see, from my point of view, both in the intelligence community as well as on the commercial side, that geospatial is being integrated into the intelligence initiatives and the analytics and AI initiatives across the board. It’s almost like it’s not a nice-to-have anymore; it’s a must-have. And in order to be a must-have, you need the core infrastructure Tony was describing, where you’ve got these really high-quality, very granular assets available for the average developer or data scientist or analyst to consume, without requiring unnatural acts of programming or a specialty in geospatial.
That’s where the real trick is now: how accessible can you make these tools, so that they’re integrated into all the intelligence work being done by this huge quantity of people? Very challenging in a whole bunch of ways, and often it just comes down to a people challenge. I think both Tony and I would say there’s a lot of tech available and the tools are at our disposal, but making those tools accessible to lots of people, oh, that’s where the challenge is. And also changing human behavior a bit, so that people are prone to think about geospatial when they’re doing any analysis.
Does that make sense?
Totally, yeah. What’s really interesting about this is that, to an extent, if geospatial stays in its own silo and isn’t integrated into the broader architecture, it’s not going to achieve its full potential; that’s the bottom line. And I think that’s what Andy described in a really eloquent way.
The thing I would add to that is, we’re seeing the capabilities become more robust, both in terms of the quality of data you can collect from space. We’re finalizing the build of our next-generation imaging constellation, WorldView Legion, and one of the things that enables is, because we’ve been able to miniaturize the electronics associated with a satellite of that class, we can build six satellites for the cost of one.
And so, as you get that capital efficiency, it allows you to start to deploy those assets in new ways, to get more relevant information for our customers. Things that typically would take weeks or days to collect, we’re going to be seeing certain parts of the world up to 15 times a day.
That’s like Elon Musk scale improvement.
Definitely a big, just a 10x improvement in the value proposition. One of the benefits is, you’re able to answer different questions if you have that type of persistence. But then the bottleneck moves from collection through to: how do you actually get the data to the user, and how do you get information out on rapid timelines? In terms of the future of geospatial, I completely agree with Andy’s comment that it has to be pervasive. It has to be out of its silo and integrated across other forms of analytics and business intelligence.
It also needs to be more temporally relevant. It needs to support more real-time scenarios, versus being a forensic-type capability. I think, as we get there, it’ll drive what we’re all after, which is: how do you drive decisions, more informed decisions, on more rapid timelines? Which I think is a really exciting future.
It is incredible. The availability of that data, a huge quantity of data so much faster, is incredibly powerful, and also the quantity and the pace of it can almost defeat the user. They get overwhelmed quickly, and we need methods, analytic and user-interface methods, to deal with that temporal scale. Being able to see everything every 15 minutes or every five minutes is great, but it’s a lot of data really fast for any one person to digest.
Yeah, to that point, I would just say, on the people side, the skills are going to need to evolve, and we have a lot of conversations about this in our industry today. Analysts who may have been used to interpreting imagery feel like, is the computer a replacement for my job? And what we found, ultimately, is again, there’s just too much volume; this really needs to be a force multiplier for how people do their work, and we also need to define different roles.
There’s a difference between a data steward who’s wrangling data, a data scientist, and a software developer. They all need to work together as an integrated team in order to support these missions. I think that will happen as well; skill specialization will definitely evolve over the next decade as we think about how to better support the mission in future.
It’s amazing. We see that redefinition of skills you’re describing, and you guys are leading the way at Maxar. Those same skills are like templates for lots of other companies that are figuring out how to manage their data at scale but are maybe five to 10 years behind. Around these patterns, in terms of the personal behaviors and the roles and such, there’s almost insatiable demand on the commercial side to figure out how to sort out all those roles and skillsets and what people need to do.
Maxar’s, in my mind, a role model for how people are going to build out their organizations and their teams.
We talked a little bit about what we can do for commercial industries as much as what we’re doing for the government, and we’ve talked about what we’re doing inside individual agencies. But what are the opportunities for companies like ours to bring together, or assist in bringing together, the different agencies and different IC and DOD customers, to collaborate using geospatial data and many other kinds of data for mission success across the various agencies and organizations?
Yeah, I can start, Andy, and you can add on. One of the things that’s interesting is, we talked a lot about NGA, but NGA is a functional manager, and they really serve the community, the NSG, the National System for Geospatial-Intelligence, which includes participants across all the combatant commands, different services, et cetera. And they all consume products from NGA, which includes commercial data as part of that.
And then they do their own analysis or value-add on top of it. One pattern that we’ve seen historically is that there are different shoebox data sets, where people do their own interpretation or enrichment on top, but it never gets back to the collective. I do think one of the areas, Carrie, where we can see this, whether it’s a DHS mission, SOCOM, the Army, the Navy: as this data goes out, if we can create the right type of mechanisms for people to do enrichment, but then also for that enrichment to get conflated back to a more central source, it’ll better serve the community.
I think that’s an area where we’re seeing some patterns with the mapping project I referenced earlier, NOME, because of this whole concept of volunteered geographic information as a way to encourage that. But I think it can be done at a much broader scale, particularly with some of the tools that, Andy, I know you guys are working on at Tamr.
Yeah, I totally agree with what Tony said. The NSG is a natural place for this to come together, where the mission inherently depends on bringing together data across lots of silos and enhancing and enriching that data. We see a similar kind of thing at DHS all the time, in the work we do there.
And then there’s this idea Tony presented, of a bi-directional flow of information in the data ecosystem, where as data is being consumed, and after it’s been consumed, feedback flows back into the data ecosystem about how the data was enriched, how it was changed, how it was used, what was good and what was bad about it. We find that, in most data ecosystems, that bi-directional flow is almost non-existent.
We have a product at Tamr that we launched, Data Steward, which is like JIRA for data. It’s a mechanism to capture what people like or don’t like about the data they’re consuming in any tool, whether it’s a web browser or some proprietary tool, and to feed that back into a queue that can be viewed by the folks in the data ecosystem: stewards, curators, data engineers.
But this bi-directional flow is mostly missing in modern data ecosystems, and exactly as Tony said, there’s a huge upside to it: not only can other people re-purpose enhanced data, but you can also shape the data upstream so that it’s more aligned with how it’s being consumed and enriched downstream.
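The "JIRA for data" mechanism Andy describes can be sketched in a few lines. The names here are hypothetical and this is not the Data Steward product’s API, just an illustration of consumers filing observations against a data set for stewards to triage:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Feedback:
    """One consumer observation about a data set, like a lightweight ticket."""
    dataset: str
    attribute: str
    comment: str

@dataclass
class FeedbackQueue:
    """Consumers file observations; stewards drain them to fix data upstream."""
    items: List[Feedback] = field(default_factory=list)

    def file(self, dataset, attribute, comment):
        self.items.append(Feedback(dataset, attribute, comment))

    def for_dataset(self, dataset):
        """What stewards review when deciding how to reshape a source."""
        return [f for f in self.items if f.dataset == dataset]
```

The value is less in the queue itself than in closing the loop: the per-data-set view tells whoever curates that source how it is actually being consumed downstream.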
Excellent. Everybody loves a good feedback tool, right?
We’re almost out of time, but I do want to give you both a chance to provide any closing thoughts as we look forward to other intelligence and logistical initiatives. How might you want to wrap up the conversation today? We’ll start with you, Tony.
Okay, thanks Carrie. It’s been a really good discussion; I appreciate the opportunity. I guess I would just wrap up and say the future’s bright. If you think about how all these capabilities can be applied, there are really important missions that require us to solve these problems in order to keep people safe and also help our nation compete against various adversaries.
Some of the areas we’re really excited about are: how do we support a safety-of-navigation mission when there’s a natural disaster, like the wildfires occurring in California? If people are reliant on maps that are out of date, because the fire line has changed the environment, where roads aren’t passable or bridges are out, how do we update those maps based on what’s changed on the ground and get that out to the users who need it? That would be one example.
Another area is, we do a lot in supporting drone navigation, and so, as opposed to sending people out to scout an area, how can we provide confidence about what’s on the ground by navigating the terrain, using 3D data to help inform that? There are many other missions I could reference, but I think the bottom line is: the mission matters, we have a massive opportunity, and it’s going to take partnership between companies like Maxar and Tamr to go after it, and then deploy it in a way where it’s not just a one-off pilot but something we scale to the continually emerging requirements we’re seeing around the world.
Following on, our inspiration in working with Maxar and the NGA and other agencies in the government is to really empower the people on the front lines to protect our country and deliver on their missions, using as much data as possible to make that happen. A lot of the techniques we have at Tamr are the same ones companies like Google use every day to make sure that, when you search for the coolest director on the planet, J.J. Abrams, you get that nice little info box in the upper right-hand corner. These techniques are fundamental and very powerful, and bringing them to bear to help protect our country and stay one step ahead of our adversaries is what we’re all about at Tamr.
And we know that we can’t do it without partnership with established companies like Maxar, that are leading the way and so we’re really thrilled to be in this area of geospatial data curation and working with such talented people in the federal government, as well as at Maxar.
That’s great. I agree, this is a fantastic mission that we are all supporting, and over my career in the geo community it has been inspiring to watch how this industry and this mission have grown over the years. It’s been great to talk to both of you today. I’ve learned a lot about the capabilities and the support we’re providing, and I’m excited about the future, where the data and the technology are going, and what we are going to do in support of this mission. Thank you both for your time, and thank you for letting me be a part of the conversation.
Thank you, Carrie.
Thank you, Carrie.