A couple of weeks ago, I had the honor of hosting a fireside chat-themed session at Thomson Reuters’ Digital Financial Services Summit. Talking about digital identity and security in front of an audience of bankers can be a challenge, especially when the topic is a cross between geeky and scary: “We don’t know what it is, but we know that we don’t like it.”
Fortunately, my guests were Ed Amoroso, CEO of Tag-Cyber and the former CISO of AT&T, and Hemen Vimadalal, the CEO of Simeio. They were up to the challenge.
Why was it a challenge? Those of us who grew up in technology tend to believe that we should let facts speak for themselves. However, the complexity of the topic, coupled with the need for precision, results in our wrapping those facts in jargon. In the end, the facts can only speak to other technologists who possess secret decoder rings to unpack the meaning. Outsiders don’t know exactly what we’re talking about, but they know it sounds scary.
As the event took place in the shadow of the Facebook/Cambridge Analytica drama, misuse of digital identities was top of mind for the audience. It was also 48 hours before Europe’s GDPR went into effect, with a regulatory mandate that gives consumers a choice as to how their data is captured and used. This heightened the sense of “scary.”
Changing Client Expectations
And scary it is. Global cybercrime is projected to cost $6T in damages in 2021, and that’s after we spend an estimated $232B trying to prevent it. How’s that for ROI? In the week prior to the event, I counted eight significant data breach disclosures. How effective do our defenses feel against motivated attackers and abusers?
Ed, Hemen, and I began by discussing changing client expectations. Clients used to want security to be invisible and frictionless. However, as they have become more concerned, institutions are increasingly asked to prove that accounts are secured by techniques such as step-up authentication (which adds extra requirements to risky transactions).
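To make the idea concrete, here is a minimal sketch of step-up authentication. The threshold, factor names, and risk signals are all illustrative assumptions, not any institution’s actual policy: routine requests pass with a session alone, while risky ones trigger an extra verification factor.

```python
# Hypothetical step-up authentication policy. The dollar threshold and the
# risk signals (new device, new payee) are illustrative assumptions.

HIGH_VALUE_THRESHOLD = 5_000  # assumed policy limit, in dollars

def required_factors(amount: float, new_device: bool, new_payee: bool) -> list:
    """Return the authentication factors a transaction must satisfy."""
    factors = ["session_token"]          # baseline: the user is logged in
    risky = amount >= HIGH_VALUE_THRESHOLD or new_device or new_payee
    if risky:
        factors.append("one_time_code")  # step up: e.g. an authenticator app
    return factors

# Routine transfer: no extra friction.
assert required_factors(200, new_device=False, new_payee=False) == ["session_token"]

# Large transfer from an unrecognized device: extra factor required.
assert "one_time_code" in required_factors(9_000, new_device=True, new_payee=False)
```

The design choice is the one the discussion highlighted: friction is added only where risk warrants it, keeping everyday transactions invisible while making security visible when it matters.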
Today, clients are asking to see how data is being protected – even when they aren’t transacting. People who feel like they lack control over their data are willing to accept limited friction while asking institutions to be more visible in data protection. As my college stats professor used to say, “Show your work.”
Client expectations have also changed with regard to who owns data about their actions. While most people would agree that a consumer owns the use of data that directly identifies them, it’s less clear who owns data about behavior. The web sites you visit and pages you click through are “metadata” that a service provider generates in providing you a service. Who owns that?
You may own the content of a phone call, but the phone company creates a Call Detail Record (CDR) that describes who you called and how long you spoke (it’s needed for billing). Do CDRs belong to you or the phone company? Do you own you, or do you own the use of data about you?
“We have to decide how data may be used, before we protect it from being misused.”
Who Gets to Decide?
The answer matters. When data is used in unexpected ways, such as happened at Facebook, consumers feel harmed. We have to know who gets to decide how data may be used before we can protect it from being misused.
If regulations like GDPR let consumers decide how their data is used, do we have mechanisms to track authorization? What if clients deny authorization to perform segmentation and analytics? Financial services firms are investing heavily in analytics and AI. Does denying use hurt our ability to offer meaningful advice, or even to look for fraud?
What if consumers simply say “no”?
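One mechanism for tracking authorization is a consent ledger: every purpose for which data might be used is checked against the client’s most recent explicit decision. The sketch below is a simplified assumption of how such a record might look; the purpose names and class design are hypothetical, and a production system would also keep an audit trail of every decision.

```python
# A hypothetical consent ledger. Purpose names ("fraud_detection", etc.)
# are illustrative; the key property is "default deny": no recorded
# consent means no use.

class ConsentLedger:
    def __init__(self):
        self._grants = {}  # (client_id, purpose) -> bool

    def record(self, client_id: str, purpose: str, granted: bool) -> None:
        # Latest decision wins; a real system would log every change.
        self._grants[(client_id, purpose)] = granted

    def may_use(self, client_id: str, purpose: str) -> bool:
        # Default deny: if we never asked, we may not use the data.
        return self._grants.get((client_id, purpose), False)

ledger = ConsentLedger()
ledger.record("client-42", "fraud_detection", True)
ledger.record("client-42", "marketing_analytics", False)

assert ledger.may_use("client-42", "fraud_detection")
assert not ledger.may_use("client-42", "marketing_analytics")
assert not ledger.may_use("client-42", "profiling")  # never asked: denied
```

The last assertion is the answer to “what if consumers simply say no”: under a default-deny design, silence and refusal look the same, and the analytics pipeline must be built to tolerate both.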
If we know what misuse we are protecting against, how are attacks evolving? It used to be that consumer data had to be protected from being copied and misused for fraud (i.e., confidentiality). Then a new category of attack emerged that made data less available, often by encrypting files (ransomware) or overwhelming systems with traffic, with attackers demanding payment to stop.
It is expected that future attacks may focus on data integrity: malware that changes the values in databases at random. This would affect everything from financial statements to transaction reconciliation. Imagine if every time you logged into your bank account it showed the wrong balance or misidentified you. You would definitely lose confidence in your bank.
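One standard defense against silent tampering is to store an authentication tag alongside each record, computed with a key the attacker doesn’t hold. The sketch below uses Python’s built-in HMAC support; the record fields and the key handling are illustrative assumptions (real keys would live in a hardware security module or vault, not in source code).

```python
# Detecting integrity tampering with an HMAC. Malware that changes a
# balance cannot recompute a valid tag without the secret key.

import hashlib
import hmac
import json

SECRET_KEY = b"demo-only-key"  # illustrative; never hard-code real keys

def tag(record: dict) -> str:
    # Canonical serialization so the same record always yields the same tag.
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

record = {"account": "12345", "balance": 1000.00}
stored_tag = tag(record)

# Untampered record verifies against its stored tag.
assert hmac.compare_digest(tag(record), stored_tag)

# An attacker flips the balance; the tag no longer matches.
record["balance"] = 999999.00
assert not hmac.compare_digest(tag(record), stored_tag)
```

This doesn’t prevent the change, but it makes the change detectable, which is the first step toward restoring a correct balance and preserving confidence.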
One way to protect against data integrity attacks is to control access to data. The growing trend to use biometrics such as your fingerprint or facial recognition manages access at the endpoint. While biometrics offers convenience and visible security, what happens if someone lifts your fingerprint or takes a video of your face? What if that digital data becomes compromised?
You can change your password if needed; changing your face is hard.
The Emergence of Identity Services
We ended with a discussion of cloud-based services and their expanded use in “open banking.” In this model, financial services depend on someone else’s software running in someone else’s data center (“the cloud”). Identity management, including login and behavioral analytics, is a perfect example. Google seems to be an early leader in authentication; I can use my Google password to log into many web sites.
Ed suggested that a handful of wireless carriers are best positioned to provide identity services in a mobile-dominated world. However, the downside of having only a few large providers is that it concentrates the risk in the hands of a small number of firms.
From a security perspective, if a widely used firm has a vulnerability (or suffers a breach) everyone using their software inherits the resulting problems. As an industry, would we be better off with more fragmentation of cloud service providers so everyone can’t fail all at the same time (and in the same way)? Hemen predicted that a “massive attack against identity will occur in the next three years.”
In the end, we concluded that securing data and identities from misuse is scary, and that we don’t have all of the answers. The best thing about the Thomson Reuters summit is that it brought together technologists and business people, disrupters and incumbents.
By removing jargon and focusing on the interests of the client, we were at least able to ask the right questions.
Lou Steinberg is a Managing Partner at Authoriti (https://authoriti.net).