DataMasters Summit 2020

Using data for digital transformation in financial services: a conversation with Santander UK and Accenture

 

Jonathan Holman and Mark Wilson

Jonathan Holman, Head of Digital Transformation: Corporate & Commercial Banking, Santander UK
Mark Wilson, Senior Manager, Accenture

Jonathan Holman, Santander UK head of digital transformation, and Mark Wilson, senior manager at Accenture, will discuss the role data management plays in digital transformation projects at financial services organizations. You’ll learn why mastering customer data was a key part of Santander UK’s digital transformation initiative and what Accenture sees as the stumbling blocks that prevent financial services institutions from leveraging data for digital transformation.

Transcript

Speaker 1 (00:00):
Welcome everyone. Thank you for joining us today for the webinar, Santander UK accelerates digital initiatives by mastering customer data. I’m Fred O’Connor, customer marketing and partner manager at Tamer. Today you’ll hear from Jonathan Holman from Santander UK; he’s the bank’s head of digital transformation, corporate and commercial banking. He’ll talk about why data mastering is a key part of Santander UK’s digital transformation journey, the role machine learning plays in data mastering, and the business outcomes that Santander UK is achieving by mastering its customer data. Also joining the discussion today is Ravi from Tamer, who works in strategic sales and will be the moderator. And with that, I’ll hand things over to Jonathan.

Speaker 2 (00:51):
Hi everybody, glad to be with you today to share our experiences of using technology to help achieve and accelerate our digital transformation. For those of you who don’t know, Santander Group is a fairly large global institution, centered on North America, South America and Europe, and we are number 55 on the Forbes Global 2000 list of leading companies. We have around 145 million customers, and more than 50% of those are now digital customers in some form or another. In the UK, we’re a standalone bank with 14 million customers. We have about a hundred thousand SME or SMB customers, and about 10,000 corporate customers. We have a loan book of 22 billion pounds of assets for UK companies, and last year we turned a profit of 981 million pounds. So I’m going to talk you through corporate and commercial banking and my responsibility within that part of the bank, running digital and our transformation accelerated by digital, and how we’ve split out the core activities of the bank into various systems that we’ve brought together in one ecosystem.

Speaker 2 (02:04):
So there are six main activities that we have that dictate how we interact with our customers, and I’m going to display those to you now. We’re prospecting with customers to understand who we want to do business with. We then proceed to onboard them, taking us through credit decisioning and KYC and due diligence procedures. We then have some sort of fulfillment with the products and services they’ve chosen to purchase from us and contracts that we put in place. And then we have to monitor the ongoing risk and interaction with those customers, as well as potentially deciding whether or not to exit the relationship if something changes materially in our dynamic. Those activities each have themes underneath them, and the grid that’s now on screen shows you those. Basically we’re considering, at each of these steps of the life cycle of managing a customer, how our strategy and whether or not our proposition fits with that customer, how our product and pricing will work, how our financial crime and money laundering risk profile might exist with those products and that customer, and how our credit risk profile might exist with that customer and the products and services they choose.

Speaker 2 (03:13):
We’ve taken these activities across those six lifecycle stages, and we’ve divided them up into where they best fit in terms of operational systems. The blue layer underneath prospecting is managed on Salesforce, and within that we use the Salesforce Sales and Service Cloud and some marketing technologies in that space. It’s the first piece of cloud-based software that we installed to be able to help manage our customers. We’ve built on that, adding to it with Encino, which is built on the Salesforce architecture. That’s a bank operating system which covers the vast majority of other tasks, and you can see the red on screen now that dictates where Encino covers the majority of activities that we have, as a workflow system and as an operational and decisioning platform, in our bank. The green layer underneath, application forms and our product forms, is where we’ve got customer-facing digital application processes that are quite complicated and dynamic.

Speaker 2 (04:12):
And I’ll talk about those more later on in the journey; we’ve spent some significant time there trying to digitize and optimize those processes that face the customer. And lastly, we’ve got one more activity in the fulfillment space where we’re putting contracts or security and collateral documents in place with customers, depending on the products that they’ve chosen. We’ve chosen DocuSign and DocuSign CLM in those spaces to help produce contracts using the data from Salesforce and Encino, and then electronically signing those via DocuSign’s e-signature portal. Together, all those systems make up what we call the core: those four systems that manage our forms that face the customer, that manage the sales and servicing processes, that manage the operational credit risk and financial crime processes, and that then help fulfill those with customers. Those are the day-to-day systems that our relationship banking teams operate in, that face our customer, and they are horizontal capabilities in that they exist across all of the products and services that we sell. So now I’m going to discuss in more detail the four components of that core system that underpin the six lifecycle stages of our interactions with our customers.

Speaker 2 (05:27):
So first up are the digital services. These are the services that face our customers and help them interact with our products and services, either upfront when they’re applying for them or on an ongoing basis. These are things like our onboarding process, which is digitized and online now, the e-signature processes that I talked about through DocuSign, and then particular product portals or channels: for online banking, for doing transactional banking, for placing deposits, for maybe conducting foreign exchange business with the bank, or for asset-based products such as trade finance or invoice finance, where the customer might be required to upload particular documentation or might want to draw down on a loan at any one time.

Speaker 3 (06:08):
Great,

Speaker 2 (06:13):
Our products, our income and our customer relationships are managed on our CRM system, like at many institutions. As I said, we’ve chosen Salesforce to be the center of our ecosystem, more than just for its Service Cloud and Sales Cloud capabilities that we employ, but for its ecosystem, its AppExchange, and its role as a hub that we can use for all of the activities and all of the data that we hold about a customer, linking that together.

Speaker 2 (06:42):
DocuSign and DocuSign CLM, formerly SpringCM: these are the tools that we use for document production, for contract creation and e-signing. They sit both alongside Salesforce and alongside that last box, Encino, which are our two main operating platforms. Encino builds on the Salesforce ecosystem, using data and the same customer record from the CRM, but gives you KYC smart checklist functionality and document storage, as well as, crucially, credit risk decisioning workflow. That helps us execute the processes that are around our main lending products. We can then take that data into documentation that we negotiate online with customers using DocuSign, and all of that brings together a single customer view that underpins it from all of our core banking platforms. And this is where things begin to get tricky for us: in materializing those four core systems that display data to our relationship teams or to our customers, we have to have a consolidated single customer view feeding into those platforms from our core banking and core product systems in order to make it coherent and able to be used in automated workflows in those platforms.

Speaker 2 (07:59):
So on this slide I’m going to give you a view of our architecture overall, and how that core that we talked about on the previous slide, of those four main systems, is able to take external data and internal data in an aggregated view to be able to achieve the consolidated outcomes that we want, where our colleagues and customers can work in one layout that comes across not as four systems, or more than four systems, but as a continuous layer of joined-up systems that operate in an ecosystem together. The external layer of data in our architecture is plugged into that middle layer, and it’s made up of different types of data that we use to complete different processes. We’ve got entity data, which helps us with our due diligence and our credit risk. We’ve got accounting data and open banking data, which has recently come online in the UK, which we can use to help automate our credit risk processes.

Speaker 2 (08:51):
We can also automate our monitoring of customers by feeding that data into the core layer in the middle. We’ve got financial crime and legal checks that we need to do, based on screening of politically exposed individuals as well as sanctioned individuals or companies. And we’ve also got global versions of entity data, and identity data for individuals, that are also used in our KYC processes if we’re dealing with people outside the UK. All of those data services we used to access via a portal, if they existed at all, and we used to download PDFs and check them manually by searching; we can now access these services via API. So generally we connect all of those external data services via API, and sometimes we even aggregate them together using regtech aggregation services to help deduplicate and to help synchronize the searches that we do between different platforms for the same customers, or the same individuals associated with our corporate customers. We can then feed that data into the Salesforce and Encino operational layer, and use it to surface to the customer in our onboarding processes, getting them to validate publicly available information rather than enter it. We can also use it to pre-populate the facility letters and contracts that we have in our document production system.
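
To make that pattern a little more concrete, here is a minimal sketch of pulling registered entity data from an external source over an API and using it to pre-populate an onboarding form, so the customer only validates it rather than typing it in. The endpoint, field names and the fetch_entity helper are hypothetical illustrations, not the actual services or vendors referenced in the talk.

```python
from typing import Optional
import requests  # assumes an HTTP client is available; any would do

REGISTRY_API = "https://api.example-registry.test/companies"  # hypothetical endpoint


def fetch_entity(company_number: str) -> dict:
    """Fetch registered entity data for a company by its registration number."""
    response = requests.get(f"{REGISTRY_API}/{company_number}", timeout=10)
    response.raise_for_status()
    return response.json()


def prepopulate_onboarding_form(company_number: str) -> dict:
    """Build a form payload the customer validates instead of typing it in."""
    entity = fetch_entity(company_number)
    return {
        "registered_name": entity.get("name"),
        "registered_address": entity.get("registered_office_address"),
        "incorporation_date": entity.get("date_of_creation"),
        "directors": [d.get("name") for d in entity.get("officers", [])],
        # Fields the customer still has to supply themselves:
        "contact_email": None,
        "products_requested": [],
    }
```

In a real journey the returned payload would be rendered into the secure online form, and the validated answers passed downstream into the workflow system.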

Speaker 2 (10:08):
And then to our internal layer, which sits beneath our architecture. We’ve got our systems of core customer record, and unfortunately, as a bank that’s got different divisions and has had different acquisitions over time, we’ve got four of those now. Then we’ve got all of our product systems, which you can see on the bottom layer, which each exist in their own specialist product booking system. We aggregate all that data now into the data lake, which is indicated by the slightly darker gray box all around them, and we’ve got different tools on top of the data lake, such as machine learning, such as visualization tools, but also Tamer, helping us to auto-reconcile and create that single customer view from that data lake data, which we can surface in the core layer of Salesforce and Encino in the middle. So let’s talk a bit more about that.

Speaker 2 (10:53):
For our customer record systems, we’ve got one in retail, we’ve got one in corporate banking, we’ve got Salesforce, which has its own unique customer identifier, and we’ve also got one in our investment banking division, and we use some of the investment banking products to sell to our SME and corporate customers. Therefore they have to be onboarded into that division of the bank as well as the primary division through which they’ve joined via their relationship. So reconciling the individual records and the company records across those four areas of the bank is the main task for which we employ Tamer. We do that by depositing customer records from all of those systems into the data lake and letting Tamer look for common customers across those four systems, as well as duplicates within those systems. We can then surface the results of those aggregated aliases, which exist across different unique identifiers in those different systems, into the CRM, meaning that we don’t have any duplicate customer records and can be confident, when we’re undertaking due diligence or credit risk or pricing activity, that we understand holistically the services and products the customer buys from us, but also in surfacing the data about the products to the customers.

Speaker 2 (12:00):
We can do that with confidence too, knowing the online portals will show the correct data associated with any one customer record.
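
As a rough illustration of the kind of matching a mastering tool automates at scale, the sketch below clusters customer records from several source systems by comparing normalized names and postcodes, then keeps each cluster's source-system IDs as aliases of one master. The records, thresholds and the naive pairwise comparison are invented for illustration; a production tool uses trained models and blocking rather than a hard-coded similarity cut-off.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Toy records from different source systems (all values invented)
records = [
    {"id": "RETAIL-0001", "name": "Acme Widgets Ltd", "postcode": "MK9 1AA"},
    {"id": "CORP-77821", "name": "ACME WIDGETS LIMITED", "postcode": "MK9 1AA"},
    {"id": "SF-003abc", "name": "Acme Widgets", "postcode": "MK9 1AA"},
    {"id": "IB-5567", "name": "Bravo Logistics plc", "postcode": "LS1 4AP"},
]


def normalize(name: str) -> str:
    """Crude normalization: lower-case and strip common legal suffixes."""
    name = name.lower()
    for suffix in (" limited", " ltd", " plc"):
        name = name.replace(suffix, "")
    return name.strip()


def similar(a: dict, b: dict) -> bool:
    """Match if postcodes agree and normalized names are close enough."""
    if a["postcode"] != b["postcode"]:
        return False
    ratio = SequenceMatcher(None, normalize(a["name"]), normalize(b["name"])).ratio()
    return ratio > 0.85


# Union-find to build clusters of matching records
parent = {r["id"]: r["id"] for r in records}


def find(x: str) -> str:
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x


def union(x: str, y: str) -> None:
    parent[find(x)] = find(y)


for a, b in combinations(records, 2):
    if similar(a, b):
        union(a["id"], b["id"])

# Each cluster becomes one golden record, with source IDs kept as aliases
clusters = {}
for r in records:
    clusters.setdefault(find(r["id"]), []).append(r["id"])

for master, aliases in clusters.items():
    print(f"master={master} aliases={aliases}")
```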

Speaker 2 (12:09):
We then use that consolidated data from various product systems, which may be linked to one or more of those core customer record unique identifiers, and we can use that data in financial reporting, in credit risk and regulatory reporting, but also in building machine learning models which help us to understand maybe a credit risk profile or a customer propensity for a particular product, as well as in visualization software, where the deduplication that Tamer helps us achieve is also important. And those things we’re beginning to surface back into the ecosystem of Salesforce and Encino now, keeping the front office colleagues and our credit risk colleagues, our product colleagues and our middle office colleagues able to work in one environment, no matter which lens or which activities they’re completing for the customer. On the left and right hand side, you can see more detail about the digital services that we have facing the customer.

Speaker 2 (13:03):
And some of the other decision engines are plugged into Encino, which help us understand financial crime risk ratings and credit risk ratings, and also help automate our credit risk decisioning. If we didn’t have the single customer view being fed into Encino from the data lake, facilitated by Tamer, we wouldn’t be able to feed that information downstream into those decision or risk rating engines, to be able to understand maybe the capital we need to hold, or indeed the financial crime risk profile and the implications of that for the amount of due diligence we have to do for our customers. So it’s crucial, to be able to underpin this architecture, that you’re able to reconcile your core data together.

Speaker 4 (13:43):
Um, Jonathan, there’s quite a variety of external reference data being pulled in to support the core layer, not to mention the challenges of bringing in open banking data. Can you talk a bit about some of the operational challenges you faced in onboarding this data?

Speaker 2 (14:00):
Yeah, that’s a good point. So UK data tends to operate via Companies House and the Companies House registration number, and so we use that as a unique identifier for external data, and we can reconcile and pull that in via Tamer from UK data sources where that has been entered. With accounting data, we were able to pull accounting data from customers, with permission, and reconcile that against the same records. But it becomes harder with global data, for instance, where maybe an LEI is used on a global basis, or a unique customer reference or a unique company reference in a particular jurisdiction. And so we have to try and have multiple aliases that can exist against the company, and obviously Tamer helps us do that. With open banking, we tend to understand the customer because they’re permissioning the data, but again, between the different banks there isn’t one unique reference, apart from those external references like a company registration number, that can be used to identify a particular business and the transactions, or the product information, that one might be able to pull from open banking.

Speaker 2 (15:06):
And so we have to use many unique identifiers to reconcile those together, and the task that Tamer is performing isn’t just across those internal systems and the internal data, and maybe the complexity that we’ve created for ourselves within the organization, but across the complexity that exists within a global, as well as multi-platform, digital ecosystem that exists outside of the bank that we’re now integrating entirely with.
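
One simple way to picture the "many unique identifiers" problem is a crosswalk table keyed by identifier scheme and value, so a Companies House number, an LEI and an internal reference can all resolve to the same master entity. The sketch below is illustrative only; the scheme names, values and lookup logic are assumptions, not the bank's or Tamer's actual data model.

```python
from typing import Optional

# Crosswalk of (identifier scheme, value) -> master entity ID.
# All schemes and values are invented for illustration.
crosswalk = {
    ("companies_house", "01234567"): "ENTITY-0042",
    ("lei", "5493001KJTIIGC8Y1R17"): "ENTITY-0042",
    ("internal_corp", "88812345"): "ENTITY-0042",
    ("companies_house", "07654321"): "ENTITY-0101",
}


def resolve(scheme: str, value: str) -> Optional[str]:
    """Resolve any known identifier to the master entity ID, if we have it."""
    return crosswalk.get((scheme, value))


def register_alias(scheme: str, value: str, master_id: str) -> None:
    """Record a newly reconciled identifier against an existing master entity."""
    crosswalk[(scheme, value)] = master_id


# Open banking or registry data arrives keyed by an external reference rather
# than our own IDs, so we resolve it before attaching it to the right customer.
print(resolve("companies_house", "01234567"))  # ENTITY-0042

# A new external reference for the same business is simply added as another alias.
register_alias("duns", "150483782", "ENTITY-0042")
print(resolve("duns", "150483782"))  # ENTITY-0042
```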

Speaker 4 (15:32):
Yeah, that’s been our experience as well. There’s certainly no one unique identifier to rule them all in a particular jurisdiction, let alone globally.

Speaker 2 (15:41):
Indeed, it’s a very real task, and something you can take advantage of if you can reconcile the data sets together. So now I want to drill down into a bit more detail about some of the digital journeys that we’ve created across that ecosystem and let you see how each of the component parts plays its role in joining together to create a digital journey. If we take onboarding of customers, there’s a sequence of our systems that starts with Salesforce and the prospecting activity, but very quickly leads us into inviting the customer into a process of completing due diligence and understanding that business. We do this by checking them using DueDil and understanding their registered data and their profile, and the risk that we might encounter from a financial crime perspective as a result of doing business with this customer. We pre-populate that data into forms that exist on our internet site with a secure login, and when the customer logs in they’re able to validate external data.

Speaker 2 (16:40):
And once they’ve done that and chosen the products that they’d like to onboard to, they can then complete the mandate and authorize the application, with lots of people signing in parallel on DocuSign. That’s where we think our application process for digital onboarding is quite clever: while the customer is in DocuSign, we take the data from the forms, and the various regtech-based API checks that we do on the data entered into the forms, and pull it into Encino to begin our downstream workflow to open the accounts and complete the due diligence. Encino takes all that data and is able to complete a risk assessment against the customer and help any subjective assessment of whether or not that application makes sense for that business. We can pass the data into our robotic process automation, which figures out the level of back office due diligence that needs to be undertaken as well as the products that need to be opened, and passes that data into our core banking platform, removing the need for anyone to manually retype it and allowing data continuity from the external data sourced from DueDil all the way through into Encino, the RPA and our core banking platform.

Speaker 2 (17:46):
The data from our core banking platform is surfaced back in Salesforce from our data lake, and helps reconcile what was an in-progress case being opened at onboarding into one that’s now completed, with the opportunity closed, by having that feedback loop into Salesforce from our core banking platform. So the customer’s got a nice digital journey, in that they’ve been able to use DocuSign and work in parallel, where there are lots of people signing around the world, and we’ve been able to work in parallel with them, completing our downstream workflow. Again, the single customer view allows the external data, the internal record that’s being created during this process, and the Salesforce prospect record all to be joined together on the fly, making sure that by the end of the process Salesforce has an awareness of, and is able to feed back into the process, which customer is represented by which IDs, linking those all together.

Speaker 2 (18:39):
And that is a crucial element which Tamer has helped us be able to achieve. We’ve now also digitized our credit risk proposals, and they start in much the same way, in that we’re prospecting customers and products that we’d like to offer to them in Salesforce and creating a pipeline of those activities. Then we can potentially create a credit risk prospect, with an entity profile, perhaps a group profile, and a security structure, from the external data from DueDil, into Encino, where we can then complete a credit risk application, complete a credit risk rating of the entity, and understand the capital we’d have to hold against the lending, as well as the covenants and security structure that we might put in place and the pricing of any assets that we would offer to the customer.

Speaker 2 (19:29):
We can then supplement that with data from external sources and registered data like Companies House, available via DueDil, and with permissioned data that we might gather from customers’ accounting packages. We use Validis, which is an accounting software aggregation engine, to pull data into Encino from customers’ accounting packages. And we also use a piece of software plugged into Encino called BizAnalyzer, which is able to do a complete electronic reading of PDF financial statements that might be supplied to us, indicating a customer’s performance over time, which we can then analyze in Encino having imported that data via BizAnalyzer.

Speaker 2 (20:09):
Now we’re able to add, via Fractal Labs integrated into Encino, open banking data, and that helps us understand the customer’s transactions and make sure that we can reconcile those against the accounting entries we’ve seen in their accounting software data, and also look for any other behavioral anomalies or positive points in their cash flow that help us make sure we’re assigning the right products and the right risk profile to this business. All of that leads us to be able to make a credit risk decision in Encino, either via credit committee for large exposures or via an automated decision for smaller exposures, where all of the data that has been gathered during the process to date is taken into account. After we’ve made a credit decision, we then need to fulfill those products and services, and that involves putting a contract in place and taking the security or collateral on file, maybe registering that with the relevant regulatory body. So Encino composes data from the credit risk decision, about the products, about the entities, about the jurisdiction in which those entities live and the relevant legal framework, as well as things like covenants and conditions of those facilities, and passes it through to DocuSign CLM, where the data can be used to produce relevant documentation and contracts that can even be negotiated online with customers.
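
To illustrate that fulfillment step, here is a minimal sketch of merging structured deal data into a facility letter template, in the spirit of what a contract-lifecycle tool does when it receives data from the credit decision. The template text, field names and the Facility structure are invented for illustration and are not DocuSign CLM's actual API.

```python
from dataclasses import dataclass
from string import Template


# Illustrative deal data as it might arrive from the credit decision
@dataclass
class Facility:
    borrower: str
    jurisdiction: str
    amount_gbp: float
    margin_pct: float
    covenants: list


FACILITY_LETTER = Template(
    "FACILITY LETTER\n"
    "Borrower: $borrower (incorporated in $jurisdiction)\n"
    "Facility amount: GBP $amount\n"
    "Margin: $margin% per annum\n"
    "Covenants:\n$covenants\n"
)


def render_letter(f: Facility) -> str:
    """Merge facility data into the letter template, ready for e-signature."""
    return FACILITY_LETTER.substitute(
        borrower=f.borrower,
        jurisdiction=f.jurisdiction,
        amount=f"{f.amount_gbp:,.2f}",
        margin=f"{f.margin_pct:.2f}",
        covenants="\n".join(f"  - {c}" for c in f.covenants),
    )


print(render_letter(Facility(
    borrower="Acme Widgets Ltd",
    jurisdiction="England and Wales",
    amount_gbp=2_500_000,
    margin_pct=2.35,
    covenants=["Minimum interest cover 4.0x", "Maximum leverage 3.0x"],
)))
```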

Speaker 2 (21:28):
Once the documents have been agreed, they move through CLM into the DocuSign e-signature portal, where they can be executed by one or two directors, whatever’s relevant to the particular entity type, and we receive those documents back into the document manager functionality in Encino. We’re then able to draw down, onto the particular loan system for the product or service being sold, and mark the appropriate limits. And then we feed back into Encino, from a product booking system like Alfa, the loan record that now exists, against the Salesforce and Encino customer record and also against the KYC record that takes place in our core banking systems. So you can see that we’ve got three or four unique identifiers that all need to be reconciled by Tamer back into Encino, to understand that that facility now exists in the core banking platform, that it has been fulfilled from the credit risk sanction that was granted, and that everyone understands the income and the profile of that customer all in one place for future financial crime and credit risk reviews.

Speaker 2 (22:32):
So the diagram that I showed you earlier, I’ve color coded here to show what’s live in our ecosystem now. Not all of it is as interconnected as I’d like via API, but we continue to work month by month, year by year, to extend that out. We’ve made huge progress by using SaaS and cloud-based solutions to deliver lots of functionality and integration here, live, very quickly. Tamer has been live now for about a year, and we’ve connected together all of the platforms that you see underneath it, as well as the core customer records and our BDP and GBO systems on the left. That’s our internal data layer, some of which, the platforms in yellow, we’re upgrading as well into new cloud-based versions, which should make the iteration of those pieces of software as easy as we find it with Tamer and the other pieces of SaaS that are developing over time, while still surfacing the exact data into that live middle core layer of Salesforce, of DocuSign and of Encino.

Speaker 2 (23:30):
And to that we’ve got connected, either on-prem or as SaaS-based services, decision engines and risk rating platforms. The external data that we’ve gathered is all live and tends to be cloud and SaaS-based services, but we’re in the process of connecting most of those via API to that middle layer. The vast majority are either connected directly, and DueDil, Validis and Fractal Labs are examples of data services that are connected directly into Encino, whereas on the right hand side, the regtechs tend to be aggregated via Encompass, which is our chosen regtech aggregation partner, which pulls together lots of data from disparate sources in the KYC and due diligence space into Salesforce and Encino, helping us understand, in one fell swoop, the customer from a lot of different financial crime angles.

Speaker 4 (24:21):
Jonathan, you mentioned some systems were upgraded. How much of the work from past legacy system integrations was carried forward into the new platforms?

Speaker 5 (24:31):
Hey,

Speaker 2 (24:32):
Yeah, so most of our SaaS providers are updating software multiple times a year, and we have to stay in sync with those upgrade cycles. Generally it’s been the case that on-prem installations of our legacy kit have not had upgrades, because it’s become too architecturally cumbersome: perhaps application logic is tightly linked with data, as well as tightly linked with a user interface, and so regression testing after changing any one part of the system often leads to a vast expense of testing across the whole platform, which is crucial as it sort of underpins our day-to-day banking operations. So we’ve tried to create a cloud architecture that operates in parallel to that, taking the data from those core banking systems via Tamer into it. That way we don’t necessarily need too many ongoing upgrades to the existing kit, instead using middleware or modern technology on top of it to be able to deliver intuitive functionality and improvements.

Speaker 4 (25:33):
It’s a common challenge that we see: legacy applications, especially those that have been deployed on-prem, become stale, and this has a knock-on effect on the data. So I think that’s a good example of where using more SaaS platforms allows for both the applications and the data to stay up to date.

Speaker 2 (25:58):
Yeah, it’s crucial. And also we’re finding that there’s this sort of interconnected relationship between lots of our chosen partners. There’s a sort of winner that emerges in each space; obviously Salesforce and Encino are inherently linked, having been built on the same platform, but DocuSign has got apps that run on the AppExchange and Salesforce, and DueDil, Validis and Fractal Labs will build either Encino or Salesforce apps. There are often tightly coupled pairings between lots of these providers and their propositions, where they can work together in harmony, and that means that the integrations one might use are also prebuilt, and you’re able to leverage those to integrate the platforms and surface that data far faster. The continuity of data is one massive step, and then Tamer helps us in that final step of piecing it all together to make sure we’ve got a holistic understanding of a single customer view.

Speaker 2 (26:50):
So what I’ve tried to emphasize, in this architecture and in the digital journeys that we’ve been creating, is that the business case for mastering customer data and having that single customer view is really the underpinning of the whole of the rest of the piece. If we didn’t have a single customer view, and we weren’t able to surface, from a myriad of product systems and a myriad of legacy, inherited and acquired systems that we’ve got in the bank, all the data that was relevant to a customer and bring it all together, then we wouldn’t have the ability to operate in a digital format where we’re putting that data in front of customers, or indeed trying to automate as many processes and decisions for our customers as possible. All of that requires reliable, reconciled data that one can employ.

Speaker 2 (27:36):
Our Encino project was the catalyst to help deliver our single customer view project using Tamer, and the two were very much intertwined, but it was a problem that the bank had grappled with for a long time. Although there had been remediation attempts to remove duplicates, or to control more tightly the operational processes that created duplicate records between the four customer record systems that we had, without automation at scale, without understanding those records and linking them together, you’re not really going to be able to manage it in the longer term. Eventually things will drift and duplicates will begin to exist again, and you find yourself back in another remediation project. So having something that was sustainable was very important, and building the business case involved looking back at remediation attempts and efforts gone into in the past, whether or not they had been successful, and how often they might need to be repeated in future.

Speaker 2 (28:32):
So just to summarize what goes into our mastering, working backwards into the golden records that we create: we’ve got lots of those product booking and core customer record systems in the bank. We’re trying to consolidate some of those at the moment, but while some of them carry unique references from one of the divisions’ core customer record systems, which helps to reconcile against them, some of them don’t and only carry the customer’s name and address instead, at which point the product booking systems become much like another core customer record system that we have to reconcile against, and Tamer is helping us do that in many instances. We’ve then got Salesforce and our cloud stack with its own unique customer reference number, which is good, but creates an additional one to the ones that we already had, and therefore requires aliases to exist against it for the other customer records.

Speaker 2 (29:23):
And we do indeed surface in Salesforce those other customer unique reference numbers, so that any of the underlying systems can be referred to, or jumped to quickly, if anything in particular needs to be understood, or perhaps because people like using the UI from a legacy piece of software. Otherwise, you can see that we’ve got government and registered data, and potentially external pricing or trading-related information that’s taking place in the investment bank, all of that coming from external sources into our own systems, which we also need to reconcile in. Those might be on different unique customer reference numbers, and potentially have different fields against which a Tamer model of reconciliation of entity or individual data might be built. But they’re all important to create something holistic, particularly when you’re considering an AI model that’s going to have lots of external data in there, which will help you predict maybe the macro environment a company might find itself in, how it is doing, and how it might perform in future, anticipated via AI. And here’s a summary of the challenges that one might face.

Speaker 2 (30:29):
I think everyone will know that manual solutions require a huge amount of effort and don’t really scale. I think everybody’s probably dealt with thousands, even hundreds of thousands, of duplicate records across many different divisions of the bank, and the lack of joined-up vision and service that you can offer to the customer as a result of those two issues. We can find ourselves very easily, with our retail and our corporate banking core systems, with the same number, an eight-digit customer reference number, representing different entities and different individuals in each part of the bank; they could coincidentally have the same reference number. Tricky things like that can cause confusion and systems that need to be worked around, and we’re using Tamer to help us do that. All of this leads to inefficiency, slow reporting, cost and manual reconciliation, and those are all things we’re trying to avoid as we try to automate many of the mundane reporting, data science and machine learning type tasks, as well as deliver the single customer view that we want to surface to our colleagues and our customers in our core systems.
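
The collision described here, where the same eight-digit number means different customers in the retail and corporate systems, is usually avoided by treating the source system as part of the key. A minimal sketch, with invented references:

```python
# The same 8-digit number can refer to different customers in different divisions,
# so records are keyed by (source_system, reference) rather than by the bare number.
records = {
    ("retail_core", "12345678"): {"name": "J Smith", "type": "individual"},
    ("corporate_core", "12345678"): {"name": "Smith Haulage Ltd", "type": "company"},
}


def lookup(source_system: str, reference: str) -> dict:
    """Unambiguous lookup: the bare reference alone would be ambiguous."""
    return records[(source_system, reference)]


print(lookup("retail_core", "12345678"))     # the individual
print(lookup("corporate_core", "12345678"))  # the company

# Mastering then links only the namespaced keys that genuinely are the same customer,
# rather than assuming identical numbers mean identical entities.
same_customer = {
    ("corporate_core", "12345678"): "MASTER-0007",
    ("salesforce", "001xx000003DGb2"): "MASTER-0007",
}
```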

Speaker 4 (31:40):
Just to add to that, we often see that the needs of a golden record vary across different groups, for example retail versus corporate, but there’s often a requirement for that core master data to be used as the source of truth to derive those golden records, especially when taking into account challenges such as data sovereignty. When you’ve been developing these golden records, have you had similar challenges?

Speaker 2 (32:05):
Yeah. So good examples are individuals that are associated as directors with limited companies. They might bank in the corporate bank, and those same individuals have got retail customer accounts, with a retail customer record, in another division. We might be doing financial crime screening, due diligence and ID&V across both retail and corporate, and before a single customer view we weren’t able to share that data back and forth between divisions, so we might be duplicating the screening effort, we might be coming up with different financial crime verdicts, we might be completing ID&V in an inconsistent way, perhaps duplicating it for the customer. All those problems can be avoided by reconciling the single customer view across the systems, across the different divisions, and therefore even if the bank’s different divisions do have different needs, they can all refer back to the same record and not duplicate wherever it’s unnecessary to do so.

Speaker 2 (33:00):
Great. It’s certainly an excellent and scalable process. So, some of our outcomes of having achieved the master customer record that we need for Salesforce and Encino: we’re able to make better informed, more automated loan decisions. We’re obviously able to meet our regulatory requirements where we’re reporting against any individuals or entities, particularly from a financial crime perspective. And we also don’t have to clog up our chief data office or IT staff and their resources with requests to manually handle this, to get exception reports about duplicates, to run a process to find what’s duplicative and remove it, or indeed to accommodate duplicates that exist in various systems integrations. Instead, from the data lake into the core layer, we can assume that there is deduplication taking place via Tamer, and we can deal with any exceptions and merges that need to be created via a report.

Speaker 2 (33:53):
But those diminish very quickly after the initial load when you first install the systems, and after that, operational controls and the new systems using a single customer view prevent a recreation of those duplicates in future. I thought it might be good to give you a sense of the strategy that I’ve employed in digitizing and transforming Santander UK in the corporate and commercial bank overall. We’ve had different phases in which we’ve been executing our plan, and we’re not through all of those yet, and then we’ve had an approach for how we plan to execute those phases, so I’ll deal with each of those in turn. My function is aligned to and integrated within the business, and we’re a digital function that supports the business activities. We think that is crucial: to be aligned to the strategy of the business and make sure we’re delivering on behalf of the business, not on behalf of an IT team with different objectives.

Speaker 2 (34:45):
So, as I’ve said throughout, SaaS software that’s iterating, where the roadmap is aligned to what you want to achieve in future as well as the functionality fitting what you want to achieve now, we think is an excellent way of accelerating our digital transformation without having to build software to achieve each task, when there’s specialist software out there that exists for those particular functions. Even if it hasn’t been employed in your industry yet, you’ll often find that the same problem has been solved in another industry by a piece of SaaS software. I think we’re fortunate to have come along at a time when SaaS exists and one can make big jumps in digital and technology capability by employing the software that other people have created. Obviously cloud plays a big part in that, hosting the software and making it accessible.

Speaker 2 (35:33):
That’s been particularly useful during the COVID situation, where we’ve had to operate remotely, and the fact that most of our stack is cloud hosted and accessible on lots of different devices from any location has given us flexibility that we wouldn’t have otherwise enjoyed. It allows for that continual update to take place easily for the SaaS software, which is a feature of many of our providers, and so allows us to buy into a roadmap rather than just static functionality at any one time. And then there’s data continuity, and the data-driven approach that we want to take for our decision making and for our relationships with customers; these things are achieved by APIs, which give that continuity, and Tamer, which helps reconcile those things together. We can then add to the data and the profile of the entities by using things like AI and machine learning to help do that reconciliation, but also to take advantage of the insights that are apparent in that data, which maybe we haven’t been able to take advantage of before through lack of computing power or the scale of the data.

Speaker 2 (36:34):
And that’s enabling us to work in three phases of digital transformation. The first phase I call faster horses, like the Henry Ford quote, where we’re delivering basically the same goods and services that we do today, but we’re doing so in a more standardized, more streamlined, more efficient, more automated way, and digitalization helps us achieve that. But the byproduct of digitalization is that we then have the data in our core systems like Salesforce and Encino, because we’ve digitized the processes, to be able to understand each parameter of them. We can then add AI with that data, trained using decisions that humans have made in the past, or in the systems, looking at trends of outcomes for particular customer profiles and the products and the credit risks that they carry. We can then predict and anticipate and automate some of the decisions, and some of the product needs that our customers have, and deliver those via the sort of more certain execution modes that you have from phase one, where you’re just trying to speed up the existing processes that you had.
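
As a hedged illustration of that phase two idea, the sketch below trains a simple classifier on a handful of made-up historical credit decisions and uses it to suggest an outcome for a new, small exposure. The features, data and thresholds are invented; it only shows the shape of "train on past human decisions, automate the routine ones", not the bank's actual models.

```python
# Toy illustration of learning from past human credit decisions.
# Requires scikit-learn; all figures are invented.
from sklearn.linear_model import LogisticRegression

# Features per past case: [debt_service_cover, leverage, years_trading]
X_past = [
    [2.5, 1.8, 12],
    [0.9, 4.5, 2],
    [3.1, 1.2, 20],
    [1.1, 3.9, 3],
    [2.0, 2.4, 8],
    [0.8, 5.0, 1],
]
# 1 = approved by a human credit officer in the past, 0 = declined
y_past = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X_past, y_past)

new_case = [[1.9, 2.6, 6]]
prob_approve = model.predict_proba(new_case)[0][1]

# Automate only the confident cases; refer everything else to credit committee.
if prob_approve > 0.8:
    decision = "auto-approve"
elif prob_approve < 0.2:
    decision = "auto-decline"
else:
    decision = "refer to credit committee"

print(f"probability of approval: {prob_approve:.2f} -> {decision}")
```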

Speaker 2 (37:37):
And finally, we’re thinking about putting that intelligent distribution, that AI, and that certainty of execution from phase one together in a phase three, where we might think about how our products themselves could become more dynamic and suit our customers. If you think about this in parallel with the music industry: MP3 is faster horses, the same products that you might buy today as you might previously have on CD, but available faster and remotely in a digital format. Intelligent distribution, well, that’s Spotify, adding algorithms that help determine your music taste and anticipate what new music you might like to listen to, what playlist, and what mood you might be in. And then a dynamic service is something like what Netflix is doing now, where they’re watching viewer habits and curating content based on the storylines that people like and the outcomes they want to have.

Speaker 2 (38:26):
And they’re also beginning to experiment, as I’d imagine music will in the future as well, with dynamic content. So they’ve had TV series and feature productions where you can choose your storyline, sort of allowing the product itself to respond to viewer preference. In banking, that would be having dynamic products, maybe a deposit product, maybe a borrowing or working capital product, maybe a foreign exchange product, that flex to suit the needs of a customer, as well as having anticipated those needs using the AI from phase two. So we’re beginning to think about what that could look like. And again, none of that would be possible without reconciling lots of data together and having that single customer view underpinning all the models that will drive these future innovations.

Speaker 4 (39:14):
Interesting, Jonathan, because I can see that the overall strategy, especially around data and the dynamic products that you mentioned, relies on machine learning and artificial intelligence. We still see a lot of skepticism around using these approaches compared to traditional methods, especially concerning explainability, or rules versus models. What has your experience been, and have you built confidence internally within the bank to use them going forward?

Speaker 2 (39:41):
Yeah, so I think you can get into territory that’s quite dangerous quite quickly with claiming that these are black boxes that no one understands. There are some very subtle relationships in a neural net which might be difficult to explain, but they’re really statistical relationships. So both in using Tamer, and in the machine learning that uses statistical techniques to match records and cluster them together, we try and help people understand that really it’s iterating through many different experiments with normal statistical techniques that you might be familiar with, such as clustering; iteration through those at speed is what machine learning is really helping us to do. And even when it’s more complicated decision making, really we’re looking for statistical relationships, the same sort of relationships that we might have built a gut feel for over years of maybe doing financial crime or credit risk assessment.

Speaker 2 (40:35):
And then we try and make the models as explainable as possible, so that the key factors that influence any one decision against any one customer are able to be exported from the model and understood, and their weighting in the overall decision understood. Generally there are three, four, five dominant features of any one model, or any one decision, that are exported, and that helps people get comfortable with why a decision was made, making it explainable to the customer as well as explainable to our colleagues. And we can demonstrate how the model has been trained from previous decisions, which may have been made by humans, or where precedent was set by the book and how it exists at large, as well as the advantages that we can have by having a sort of neutral party, in a machine learning technique, run a slide rule over the data, where it might be able to find duplicates or indeed find relationships in the data that haven’t been spotted previously.
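
The "three, four, five dominant features" idea can be pictured as exporting, for each decision, the factors whose weighted contribution moved the score most. The sketch below does this for a simple linear scoring model; the feature names and weights are invented, and it is only a schematic of explainability, not the bank's actual approach.

```python
# Schematic explainability for a linear scoring model: rank each feature's
# contribution (weight * value) to one customer's decision. All numbers invented.
weights = {
    "debt_service_cover": 1.4,
    "leverage": -0.9,
    "years_trading": 0.05,
    "sector_risk": -0.6,
    "account_conduct": 0.8,
}

customer = {
    "debt_service_cover": 2.1,
    "leverage": 3.2,
    "years_trading": 7,
    "sector_risk": 1.0,
    "account_conduct": 1.5,
}


def dominant_factors(weights: dict, features: dict, top_n: int = 3):
    """Return the top_n features by absolute contribution to the score."""
    contributions = {k: weights[k] * features[k] for k in weights}
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:top_n]


for name, contribution in dominant_factors(weights, customer):
    direction = "supports approval" if contribution > 0 else "counts against"
    print(f"{name}: {contribution:+.2f} ({direction})")
```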

Speaker 2 (41:31):
And that can help in fraud, in money laundering, in transactional monitoring, lots of different areas where manual techniques had established ways of spotting problems or financial crime, but wouldn’t flex quickly enough to be able to cope with an ever-changing landscape. That’s also true in credit risk, where obviously the macroeconomic situation can move very quickly, as it has done with COVID. So being able to feed in data that represents these situations, and allow the models to adapt over time, means people can see an upside to using technology, in the pace and the scale at which it can move, and also have explainability and a sort of simplicity to it, by having the models give outputs that make possible human comprehension and a simple, conversational relay of the decisions, which is crucial in the relationship-led approach that we have in a corporate bank like ours.

Speaker 4 (42:26):
I agree, especially around the ability to iterate quickly, as that’s so important to achieving the results, as well as providing transparency to those results by allowing for data conditions to be adjusted and course corrected if necessary.

Speaker 2 (42:39):
Yeah. And it’s an interesting process, right? We don’t train humans once and then say to them, okay, we’re never training you ever again. We have continuous learning and continuous professional development in most careers and in most areas of work, and we expect people to continue to read and learn and understand, and it’s no different with machine learning. We don’t just train it once; we continue to monitor the performance, continue to inform it and to add to that training, move quickly on it, and understand what it’s doing each time.

Speaker 4 (43:08):
Yeah, absolutely. Many people often fall into the trap of simply having humans in the loop at the beginning, but not throughout the full data life cycle, and that’s really where lots of additional value can be found. As Jonathan has just shown, there is a need for clean, mastered data throughout the customer life cycle. I’d now like to take a moment to look at how Tamer provides the modern customer mastering experience. The advent of machine learning in the enterprise presented new opportunities for companies to make use of their data, with lots of initial focus in the analytics space. Many companies have established data science practices whose role is to take this data and use it to produce those analytic insights. What often happens, though, is that the data scientists spend the majority of their time locating data, unifying it from multiple sources and cleaning it before they can even get started on building the desired analytics.

Speaker 4 (44:06):
Simply put, the data is not in a state to be used, placing the analytic impact at risk. Such efforts are often carried out as a one-off task for a particular deliverable, meaning that new analytic needs will often require the same process to be run again or, even worse, recreated from scratch. Tamer’s human-guided machine learning solution offers an opportunity to dramatically improve this: users can now set up repeatable pipelines which use the capabilities of machine learning to overcome common data quality issues and data integration challenges. This workflow also opens up the power of machine learning to additional personas, such as the data curator, who’s responsible for providing data which is fit for purpose, as well as the subject matter expert, whose knowledge can be captured and institutionalized within the machine learning model without having to write code. Such a feedback cycle can continue from model development through to production.

Speaker 4 (45:05):
As Jonathan mentioned, the need to iterate is there, and users can be alerted to data conditions which may require further input and course correction. The result of this is that the data science team is freed up from the task of cleaning data and can focus their efforts on consuming the outputs rather than wrestling with the data itself. I’d now like to talk about a recent customer mastering engagement. A leading US-based insurance company recently embarked on an initiative to improve their customer marketing efforts by making better use of enrichment data. This required constructing views of their customers at individual, household and organizational levels and matching them up with referential data sourced from a wide variety of external data vendors.

Speaker 4 (45:54):
The existing system was using a legacy rules-based approach, which was very static and was proving very difficult to extend to accommodate new sources. Problems were occurring both in terms of data variety, because the rules to match the data were embedded into the code itself, and in terms of data volume, where performance had run into reliability issues due to the scale of data exceeding 300 million records. This led to a need to carefully monitor the production deployments, as failures occurred frequently. Internal prototypes had been constructed by the data science team, which gained buy-in for a machine learning based approach, but there was also the goal to empower other teams to be able to provide feedback to the models being developed. A key part of the design of the new deployment was to make full use of the customer’s cloud infrastructure in order to provide additional performance gains in a budget-friendly manner.

Speaker 4 (46:57):
Tamer’s solution was an ideal fit for this use case. The team was able to take advantage of Tamer’s platform to establish matching models at multiple levels of granularity. The ability to assign subject matter experts items to review offered the opportunity to improve the performance of a model and gain trust in its output without having to write code to apply the matching logic, and rather than considering the integration of new sources as an onerous task, they can now be onboarded in a more routine manner, with previous learnings applied to reduce the integration time and cost. The Tamer solution is deployed natively in the customer’s AWS cloud environment, which brings cost savings by making use of ephemeral resources for compute tasks that are performed when data is refreshed. In addition, the deployment was able to be scaled from a small instance used for initial development and prototyping through to a more substantial footprint as data volumes increased. The net result is that the customer now has a platform to reliably provide data to support their own needs today, which can grow to accommodate new data sources while maintaining a robust matching process that is trusted by those consuming the data. Thank you, Jonathan, for walking us through the customer journey at Santander. And with that, I’d now like to pass back to Fred to take some questions from the audience.

Speaker 2 (48:27):
How important is culture in implementing digital transformation projects? Has there been a long-term culture of innovation at Santander UK? So I think culture is very important in digital transformation, not least because you’re going to be changing materially the way people work, and therefore people have to be bought into it. There also has to be a drive from the top to inspire the motivation to engage with the digital transformation. People are inherently nervous about change and what that might mean, and so it has to have clear direction and business-outcome-focused goals that aren’t just about efficiency, automation or cost saving; you have to have a customer-centric proposition that you want to be able to create, and you have to be able to use technology in a positive way to create it, in a way that genuinely helps with your colleagues’ and your customers’ problems, enabling them to operate the business in a better manner. If you can achieve those things and articulate them clearly, that’s when people’s buy-in will be easier to achieve,

Speaker 2 (49:31):
and the digital transformation that you’re undertaking, which ultimately becomes a people, culture and business operating model transformation, will be more successful.

Speaker 2 (49:42):
Thank you. What advice do you have to offer your digital transformation peers who want to achieve quick wins but also build a foundation to solve long-term challenges? So I’ve got two main pieces of advice. I would pick a cloud architecture that helps you move away from legacy architecture, so you can move in quicker, more iterative sprints and potentially build upon it with partners that are integrated into that ecosystem. I’ve chosen Salesforce with Encino as the center of the new architecture that I’ve created, but there are many alternatives to that; you should just make a conscious choice to have something modern and flexible that you can work with as a new foundation, adjacent to what perhaps you’ve had in place in the past, rather than trying to replace or migrate legacy technology, which can be expensive, slow and potentially high risk.

Speaker 2 (50:38):
I would also then advise that integrating with legacy technology is best achieved either via RPA, potentially utilizing the user interfaces and front ends that the legacy technology might have, which is a good way of operationalizing and automating processes that exist in and around those legacy systems, or otherwise by extracting the data into a data lake. The data tables that underpin a legacy system might be the source of the link that you need into that system; the data, really, might be the most valuable thing. Simply by extracting that into one place that can have modern connectors, such as APIs, into your cloud-based architecture to view, visualize and analyze that data, you have what you normally need to underpin a single customer view and a digital transformation that can use the data and the infrastructure that already exists in the business.
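
As a small sketch of that "extract the data tables that underpin a legacy system" advice, the snippet below reads a table out of a local SQLite database standing in for a legacy core system and lands it as a flat file a data lake could ingest. The schema, values and paths are invented; a real pipeline would use the legacy database's own driver and an orchestrated loader.

```python
import csv
import sqlite3

# Stand-in for a legacy core system's customer table (schema and data invented).
legacy = sqlite3.connect(":memory:")
legacy.execute("CREATE TABLE customers (cust_ref TEXT, name TEXT, postcode TEXT)")
legacy.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [("12345678", "Acme Widgets Ltd", "MK9 1AA"),
     ("87654321", "Bravo Logistics plc", "LS1 4AP")],
)

# Extract the table into a flat file that a data lake or mastering tool can pick up.
rows = legacy.execute("SELECT cust_ref, name, postcode FROM customers").fetchall()
with open("customers_extract.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["cust_ref", "name", "postcode"])
    writer.writerows(rows)

print(f"extracted {len(rows)} rows to customers_extract.csv")
```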

Speaker 1 (51:33):
What’s your proudest achievement in leading Santander UK’s digital transformation project?

Speaker 2 (51:41):
So we’ve been fortunate enough to win a number of global banking technology awards for the projects that delivered the digital onboarding and the digital credit journeys that I showed you in the presentation. The Celent Model Bank award we’ve won for both corporate lending, in 2020, and corporate digital onboarding, in 2019, and those are probably among the most prestigious prizes in global banking technology. So to win in two different categories two years in a row is fortunate, and I feel it to be a great achievement by the team who’ve helped deliver these projects over the last couple of years.

Speaker 1 (52:18):
And when implementing digital transformation projects, processes are usually more important than technology. How would you recommend addressing process change, and how do you get internal buy-in?

Speaker 2 (52:31):
Yeah, this is a key question, and one I’ve addressed most importantly structurally in my team. We’re a business-led transformation unit, a sort of hybrid technology team with business input and a business reporting line, and I make sure that each of the team leaders in my department has domain expertise, knowledge and experience, which means they understand inherently the risks and the nature of work that needs to be undertaken within that particular domain. They understand the stakeholders and their incentives and their traditional processes, so that they can speak to the needs of those areas and the problems they might have with the existing technology and the way they conduct the processes today, and they can help create better solutions by understanding the first principles, which often need to be resorted to when digitizing something. If you just put in place, with digital technology, the same process that you have today, you would never come up with things like the parallel e-signature and robotic process automation process that underpins our onboarding process now.

Speaker 2 (53:39):
And that buys us time in the process, which helps deliver fantastic customer outcomes. If we’d stuck with the linear way, where the form was sent out to the customer, passed around the office to be signed by lots of people, and then sent back in before the clock started ticking, we would have had a maybe faster, but similar, performance and structure to the process that we had. But by going back to first principles and looking at what was possible using the technology, we found a way to run two stages of the process in parallel and deliver a better customer outcome.

Speaker 2 (54:13):
The final question is: what are the biggest challenges your team has faced when forced to adapt to working remotely? Yeah, so my team is spread over the country, with the geographical diversity that we have in the UK as a business at Santander, and that’s because I’ve tried to make sure that my team is present in the offices of our main stakeholders and the many different departments across the country with which we have to interact, in that they’re involved in the end-to-end processes that we’re trying to digitize and improve. So we’ve been used to working digitally and remotely, and communicating that way, for a long period of time. The nature of the technology that we’re employing, as well as the communication methods that we use, are inherently online, digital and remote, and allow access from anywhere, so we haven’t had to flex too dramatically during this crisis. What’s had to happen, though, is that the rest of the business has had to catch up with our way of working and the structure of our team very quickly.

Speaker 2 (55:16):
And that’s made rolling out new technologies, training, and engaging with the teams that were based around the country but are now based at home, a slower and more delicate task. So we’ve had to really focus on how to make training sessions, WebEx demonstrations and Zoom calls engaging, how to reinforce learning, how to make videos available for people as reference guides to go back to, and how to make sure that people feel able to ask questions like they would organically in a classroom-based training environment as we’re rolling out new technologies. I think we’ve done that reasonably well so far, and it’s something that we continue to challenge ourselves on, but as with all human communication, you can’t ever be too good at it, and it’s something you need to emphasize and focus on in the team as digital transformation manifests, because, as I say, in the end everything’s a people transformation. Perfect. Well, thank you Jonathan for your time, and Ravi for serving as moderator, and thank you everyone for joining us. Thank you very much.