
Podcast April 22, 2021

Tell Me How: Sharpening Policies for Better Cybersecurity Outcomes

View all episodes on our Tell Me How: The Infrastructure Podcast Series homepage

Many of us are no strangers to cyberattacks. Places we shop, bank, eat, and conduct online transactions are routinely the target of hackers. Consumers, governments, and businesses are all vulnerable as we are increasingly dependent on the internet.

What can we do to make our economies more cybersecure? Find out in this new podcast as Roumeen Islam catches up with Professor Tyler Moore, Tandy Professor of Cyber Security and Information Assurance in the Tandy School of Computer Science at the University of Tulsa and Editor of the Journal of Cybersecurity.

This podcast series is produced by Fernando Di Laudo and Jonathan Davidar. 
 

 

Transcript

Roumeen Islam: This is Tell Me How, the World Bank's infrastructure podcast. In today's special episode, we discuss cybersecurity for more secure outcomes in an increasingly digitized economy. 

Over the last few years, several articles have been written about whether governments, firms, and others invest enough in cybersecurity. A recent report on the topic explains how, for decades, business executives didn't quite understand the specific issues or the potential costs to their business from cybercrime. As a result, they've under-invested in prevention.

Over the years, millions of people have had their credit card, social security, or other data hacked at companies where we shop, or bank, or even at hospitals. By understanding the incentives and constraints of the various stakeholders in the cybersecurity space, policies can be better designed to protect society from cybercrime. Let's find out how!

Good morning and welcome. I am Roumeen Islam, the host of Tell Me How.

And today, I have as my guest Professor Tyler Moore from the University of Tulsa's Department of Computer Science, Editor-in-Chief of the Journal of Cybersecurity. He has written extensively on security and cybersecurity economics, among other things. Welcome, Tyler. It's very nice to have you with us.

Tyler Moore: Great to be with you, Roumeen. 

Roumeen Islam: So, Tyler, with the digital economy taking off and people, businesses, and governments all going online, concerns about cybersecurity are also on the rise. Do you think this is justified? Would you say that cyberattacks are more frequent now than, say, five to ten years ago?

Tyler Moore: Well, people are definitely paying more attention. Our economy is becoming a lot more dependent on ICT across all sectors, and cybercriminals are still out there, continuing to be a threat. Now, on the question of frequency: it's not exactly clear that attacks are becoming more frequent, but certainly their impacts seem to be increasing.

Roumeen Islam: I see. So, why are the impacts increasing?

Tyler Moore: Well, it goes back to how tied our economy is to ICT, right? We are far more dependent on the internet functioning than we were five, ten, fifteen years ago. And meanwhile, the outages that may have been a mere nuisance ten or fifteen years ago can now be mission critical.

Roumeen Islam: Do you see an increasing effort to protect against attacks? I am assuming that there must be some, and if so, what types of initiatives have been the most popular?

Tyler Moore: Leaders are paying a lot more attention to cybersecurity than they used to. So, what we see is a lot more attention and investment being made by organizations to improve their overall cybersecurity. Let's face it, no company or government wants to be in the news because sensitive data has been hacked. So, there's definitely an awareness of it and a desire to make investments. And how do they do it? For the most part, organizations are focusing on cybersecurity investment frameworks. There's a whole range to choose from. Perhaps the most influential is the NIST cybersecurity framework, which sets out a whole series of controls that an organization can adopt in order to mitigate its cybersecurity risk.

Really, it's a structured way of deciding how to set out a cybersecurity plan, execute it, and improve it over time.

Roumeen Islam: I see. And what does NIST stand for?

Tyler Moore: NIST is the US National Institute of Standards and Technology. So, they are the organization responsible for all kinds of standards in the United States. But this standard, which was released several years ago, is actually free to use and has seen global adoption.

Roumeen Islam: I see, so the guidelines are voluntary, right?

Tyler Moore: It's a voluntary mechanism. Actually, the NIST cybersecurity framework rose out of the ashes of a failed attempt by the U.S. Congress to pass legislation that would have required critical infrastructure sectors to adopt security controls and investments. After that failed, the NIST cybersecurity framework was developed as a set of voluntary guidelines to encourage organizations to make appropriate investments.
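To make the idea of a structured framework concrete, here is a minimal sketch, in Python, of an organization reviewing itself against the five core functions of the NIST Cybersecurity Framework (version 1.1). The five function names are real; the example controls and their statuses are illustrative assumptions, not the framework's actual control catalog.

```python
# A toy gap review against the NIST CSF v1.1 core functions. The
# controls listed under each function are invented for illustration.

framework_status = {
    "Identify": {"asset inventory": True, "risk assessment": False},
    "Protect":  {"access control": True, "staff training": True},
    "Detect":   {"log monitoring": False, "anomaly alerts": False},
    "Respond":  {"incident playbook": False},
    "Recover":  {"tested backups": True},
}

# The framework's value is the structured review: find the gaps,
# prioritize investment, and re-assess over time.
for function, controls in framework_status.items():
    gaps = [name for name, in_place in controls.items() if not in_place]
    print(f"{function}: {'OK' if not gaps else 'gaps -> ' + ', '.join(gaps)}")
```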

Roumeen Islam: Are cyberattacks actually materially different from other types of security issues?

Tyler Moore: Yes and no. I mean, a security threat is a security threat. It is platform agnostic. Whether, you know, an attacker wants to blow up a building or disrupt and destroy your digital assets, the effect is still a security threat, right?

What's really different here is the nature of digitized information. As soon as we take our information and digitize it, the game has changed. We're no longer storing our corporate secrets or government records in paper files, in offices protected under lock and key.

Now, they're stored as bits. They're costlessly stored, available online, and easily copied. And so, this increases the risk of many of these threats actually succeeding. The other especially challenging part is that if someone breaks into your physical enterprise and steals records, the records are gone and there's evidence of it occurring. With digital information, it doesn't work that way anymore. Someone can access those bits, make a copy of that information, and you would never necessarily know it occurred. So, organizations are being hacked and not finding out for months or years, or not at all.

Roumeen Islam: That's really kind of frightening. So, first of all, it's costless to keep copying the data once you've broken in. And secondly, no one may ever know that you've gotten in and attacked them. That makes it really frightening. Now, you've also done some stakeholder analysis, Tyler. So how do you think about these various stakeholders: the consumers, government, industry, or the infrastructure sector, electricity companies, for example? Are there different issues they consider? Do they have different incentives when they think about cybersecurity?

Tyler Moore: Absolutely. Every stakeholder has its own interests and responds to them. And what we see in cybersecurity is that those interests often come into conflict, and critical infrastructure sectors provide a good example. Electricity is one; water is another that's recently been in the news. There was a water treatment plant in Florida that was recently hacked. And you might wonder: why is it that our critical infrastructure sectors, like electricity grids and water treatment plants, are so vulnerable to attack? Well, some stakeholder analysis can explain why this occurs.

If we go back in time, our infrastructure networks were often completely separated from the internet and IP networks that we use for computers and for checking email. But over time, we've seen this increasing convergence so that the networks that were separated become connected.

And the reason they do is because it's cheaper, more efficient, and more convenient to bring those networks together. So, if you are the operator of one of those networks, say you're the utility company, you see lots of benefits to bringing those networks together, even though it introduces a significant security risk.

And if you listen to any security expert, they will tell you that you need to keep those networks physically separated and never allow, say, remote access to these most sensitive systems. And yet, we know that utilities across the world, across many infrastructure sectors, do connect their systems to the internet, and they are extremely vulnerable to these attacks.

And so, why does that happen? Well, the private sector operator of this infrastructure often values the convenience. Compare that to the other stakeholders. The consumer, in this case, would very much like not to have their water supply poisoned or their electrical grid taken down.

Obviously, consumers value the reliability of those systems, but they can't reliably determine whether their utility is actually taking adequate steps. They can't observe it very well, so they can't reward good behavior over bad. And finally, governments. Governments clearly have a strong interest in maintaining the security and reliability of these critical infrastructure sectors, but they often don't control those systems. And because they don't control them, they often lack the expertise to mandate the steps that need to be taken. So, as a result, you have an environment with these different stakeholders in which the stakeholder who controls the system is the one who makes the decision. In this case, it's the operators. They choose to connect, and unfortunately we're at heightened risk of attack.

Roumeen Islam: That's interesting, but governments also come under attack. So, they also make choices about their own systems over which they have control. 

Tyler Moore: Exactly. It is the case that governments have some control over their own systems, and they can, and often do, make much more significant security investments to try to protect those systems. So, on the one hand, I would say that most governments do try to invest heavily in cybersecurity, but they have their own set of challenges. Number one is that they do not have the same budgets that exist in the private sector. And there's all kinds of evidence that budget cycles and procurement make it difficult to make, on public sector systems, the kinds of investments that need to change rapidly in cybersecurity. So what we end up seeing is greater exposure. And then finally, it goes back to the value of the target. Government systems are often highly valuable to attackers, particularly attackers tied to other nation states with very significant attack budgets. So even if you make those systems more secure, an attacker willing to invest more resources to compromise them may still succeed.

Roumeen Islam: Okay, that's really quite interesting. And this whole idea that incentives matter so much, that an economics lens helps us understand how much cybersecurity there is in our economies, is very important. So, could we go through this a bit more? What types of economic analysis, what kind of lens, help us understand how to design cybersecurity policies?

Tyler Moore: Excellent question. I think the economics starts with what we were just talking about: you have to recognize that the attackers and the defenders both operate strategically. You know, I sit in a computer science department; I teach a computer and information security class. And you might not think you'd need to instruct those students to think about the incentives and strategies of the different players involved; we like to focus on the technology. Well, it's about so much more than the technology. What matters is the interests and incentives of the defenders.

What incentive do defenders have to make adequate investments in security, combined with the incentives and interests of the attackers? Do attackers want to go after one particular target versus another? Across the entirety of the internet and ICT infrastructure, there is a practically infinite number of targets, but not all of them are equally valuable.

And so, when deciding how to spend your cybersecurity budget, you need to consider whether or not you are likely to be a target. That's the very first economic step, but it goes much deeper than that, leading into thinking about the market failures that are often at play.

So, there are two market failures that significantly affect cybersecurity: information asymmetries and externalities. And if you'd like I can go through each of those. 

Roumeen Islam: Yes, please. I was just about to ask you to do that. 

Tyler Moore: Okay. So, information asymmetries. The classic example of a market with information asymmetries is the used car market.

If you are trying to buy a car, you go to a used car lot and you see five cars, all the same make, model, and mileage. On the surface, they look equivalent, yet you know that there is in fact variable quality there. Some of those cars have been very well maintained; they're cherries, and they would run great and drive for a very long time.

Other cars are lemons: you drive them off the lot and they're going to, you know, belch smoke and leak oil everywhere.

Roumeen Islam: And so, Tyler, you have cherries and lemons, and what you actually mean by information asymmetries is that the people on the two sides of the transaction, the buyers and the sellers, have different information. I just wanted to clarify that. So, they make suboptimal decisions.

Tyler Moore: Absolutely, exactly. Yes, so in this case, the seller of the car, the used car dealer, has a pretty good idea of the quality of the cars they're selling. The buyers don't; their information is at a deficit.

And the interesting thing about a market with asymmetric information is that, with the market-clearing price being low, the market ends up flooded with the lower-quality goods. In this case, the lemons are all that you can buy on the lot. And this plays out considerably when we think about cybersecurity.

Roumeen Islam: Why does that happen? It happens because the buyer doesn't know whether the car is good or not. So, he's going to assume it's bad and try to pay the lower price. And then the seller: what are the seller's incentives?

Tyler Moore: Okay, yeah. So, the dynamic at play is that, just as you say, the buyer of the car has no information that can distinguish good cars from bad.

And so, they're not going to pay a premium for the good car, because they don't know whether they're getting the good one or the bad one. They're only willing to pay the lower price. And a seller in that market will look at that and say: okay, people are only willing to pay the lower price, so I'm only willing to supply the lower-quality cars.

So, in the car market, that means the people who have the nice cars don't sell them, and the only people who do sell are the people with the lemons. In the software market, the analogue is trying to understand the quality of software. How secure is it? Does it have vulnerabilities that could be exploited?

Did the software developers go through a rigorous process of identifying and eliminating bugs beforehand? This is very hard for a buyer of the software to determine. It is also very expensive for the developer to do. And so, you end up with a world in which the buyer of software doesn't want to pay a premium for more secure software, because they can't reliably tell that's what they're getting.

And the sellers, in turn, don't devote resources to making their products as secure as they could be. So, what you instead see is a market in which the things that can be observed are prioritized: how easy the software is to use, how pretty the interface is, what it costs. That's what gets prioritized, and security is deemphasized. The net result is that even though we would all like to have more secure software, the market won't deliver it.
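Here is a minimal sketch, in Python, of the lemons dynamic just described. All valuations and the starting mix are illustrative assumptions; the point is only that a pooled price below what owners of good cars will accept drives quality out of the market.

```python
# Akerlof-style "market for lemons": buyers can't observe quality, so
# they offer the expected value; good cars then exit. Numbers assumed.

HIGH_VALUE, LOW_VALUE = 10_000, 4_000  # buyer's value if quality were visible
HIGH_RESERVE = 8_000                   # least a cherry's owner will accept

def pooled_price(share_high: float) -> float:
    """Price buyers offer when cherries and lemons look identical."""
    return share_high * HIGH_VALUE + (1 - share_high) * LOW_VALUE

share_high = 0.5  # start: half the lot is well maintained
for round_no in range(3):
    price = pooled_price(share_high)
    print(f"round {round_no}: pooled price {price:,.0f}, cherries {share_high:.0%}")
    if price < HIGH_RESERVE:  # cherry owners withdraw their cars
        share_high = 0.0
```

In software terms, the unobservable "quality" is security effort, and the pooled price is what buyers will pay when they cannot verify it.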

Roumeen Islam: Okay, that's very interesting. What about the problem of externalities you mentioned? I guess that means we're going to be talking about the impact of a single firm's decision on the overall security environment: individual firms make their own decisions, and those decisions affect the overall environment.

Tyler Moore: The classic case of an externality is environmental pollution. In security, a good example is something called botnets. A botnet is a collection of computers, typically many thousands, which have all been compromised.

And they're under the control of a single criminal actor, often called a botnet herder, as in herding lots of little infected computers together. What the botnet herder can do is issue instructions to these compromised computers to do whatever they want. Now, one possibility would be for the botnet herder to harm just that computer itself: steal its owner's passwords and do bad things only to that host, in which case there's no externality. But what actually happens in practice is that the botnet herder uses those computers to harm other people. They can use them to send email spam, to launch denial-of-service (DoS) attacks on other computers, or to try to compromise other computers. In this case, the harm is directed elsewhere.

And so, the computers that are compromised into the botnet often have no idea that they're even in it. They're not experiencing any adverse consequences; they're just a host used to carry out harm elsewhere. In fact, I can state pretty confidently that at least some of your listeners have a computer at home that is, or has recently been, in a botnet, because, again, it's not observable, and the criminals try to keep you from seeing it happen.

Roumeen Islam: Is it that prevalent? 

Tyler Moore: Yes, absolutely. And this is why it's an economics problem: there is no strong incentive for the owner of the vulnerable computer to fix the problem, because the harm isn't being directed at them, it's being directed elsewhere. We see this time and again, and it leads to underinvestment in security.
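The incentive gap can be stated as simple arithmetic. The numbers below are invented for illustration; the structure, private harm far below the cleanup cost while total harm far exceeds it, is the externality being described.

```python
# Why an infected host rationally ignores a botnet infection: it bears
# almost none of the harm. All figures are illustrative assumptions.

cleanup_cost = 50    # host's cost (time, money) to disinfect the machine
private_harm = 5     # harm the bot does to its own host
external_harm = 500  # spam/DoS harm the bot inflicts on everyone else

host_cleans_voluntarily = private_harm > cleanup_cost                       # False
cleanup_socially_worthwhile = private_harm + external_harm > cleanup_cost   # True

print(f"host cleans up voluntarily: {host_cleans_voluntarily}")
print(f"cleanup socially worthwhile: {cleanup_socially_worthwhile}")
```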

Roumeen Islam: So all these incentive problems lead to an under-provision of security in the market, less security than would be optimal if we were to consider everyone's welfare. But then what sort of policy fixes should one be thinking of?

Tyler Moore: The good news is that, you know, we've identified some market failures: information asymmetries and externalities, and this motivates the need for a policy intervention to try to correct for them. And, you know, there are a few standard approaches you might try. And many of them can be put into two buckets.

You have ex-ante safety regulation and ex-post liability. Ex-ante safety regulation is used whenever the harm is potentially so great that you want to prevent it from happening in the first place. You don't want your electrical grid to be taken down by an attacker, so you can impose rules that mandate some level of security investment to address the problem.

And so, this is certainly a possibility, but one of the unfortunate realities has been that it's been very hard to regulate internet industries in order to improve security. So, while it is available, it's not widespread and only affects certain limited sectors.

On the ex-post side, what you say is: okay, we're not going to require you to adopt certain standards or make these security investments up front, though we think you should. Instead, the way you try to fix the problem is to say that if something goes wrong, if there is an attack that causes harm, you can assign liability for that harm to the party that caused the problem.

So, this is essentially a way of providing a punishment when things go wrong, which should encourage the actors to make better investments. And so, this is possible. But again, we have a bit of an issue: in the software space, software liability is essentially a non-starter and has been for decades.

Roumeen Islam: Why is that?

Tyler Moore: Well, there are a couple of reasons. You could say it's effective lobbying, but there is also a genuine challenge in how you design secure software. You know, this pains me to say as someone in a computer science department, but we don't do a very good job of explaining how to build secure software that is completely free of vulnerabilities.

And so, we train people to go out into the world and build software that powers our entire global economy, but the software we train them to build is riddled with bugs. And it's always going to have bugs: even if you do the right things and try to test for them and remove them in advance, it's an imperfect and incomplete process.

So, there's always a chance that there will be remaining vulnerabilities that could be exploited. One consequence of this is that organizations aren't assigned liability for fault when these bugs are discovered.

Roumeen Islam: Yes, I hear coders talk about bugs endlessly, so I understand that part. Now, on ex-ante mandated rules: I didn't quite understand why they did not work. Is it because they couldn't be implemented for some reason?

Tyler Moore: So, on the one hand, there have been some examples of sector-specific ex-ante regulation being deployed, but it's tended to be on a fairly narrow basis. In the United States, after the Enron scandal, the Sarbanes-Oxley financial regulations were passed. A key part of that was trying to ensure the integrity of financial reporting by public companies. So, what does that have to do with security? Quite a lot, it turns out, because in order to ensure that you are accurately reporting financial statements, you need to ensure that the data on which they are based has not been compromised.

And so, this has led to a whole suite of security requirements being placed on publicly traded companies in order to comply with these regulations. So, in that sense, we don't have a blanket requirement to invest in cybersecurity, but publicly traded companies do need to invest in the cybersecurity controls needed to comply with something like Sarbanes-Oxley.

And if you were around in the early 2000s, when Sarbanes-Oxley was first passed, there were many years when the business community objected, saying it was far too onerous to comply with these rules. So, one piece of this is the cost of security compliance.

In general, the main argument against ex-ante safety regulation is that it's too onerous and costly. So, it really comes down to what you prioritize: to what extent do you accept that you need to impose these costly interventions in order to get adequate protection? That's always going to be a trade-off, and people will come out of it differently.

Roumeen Islam: All right. So, if you're saying that neither ex-ante nor ex-post regulation really works, what then?

Tyler Moore: I think we should look at targeted interventions that go directly after these two key market failures, the first being information asymmetries. The best way to combat information asymmetries is to increase transparency, and we've already seen this done through information disclosure regulations and laws. Beginning with the State of California in 2002, we had data breach notification laws. What that means is that any time there's a breach of personal information, there's now an obligation to tell the consumer that it has occurred.

And this was actually modeled after earlier regulation at the Environmental Protection Agency in the United States: something called the Toxic Release Inventory. Any time certain toxic chemicals are released into the environment, the law requires that the release be disclosed so that people are aware. No penalties beyond the disclosure itself are required, but it turns out that's a very powerful tool: it makes people learn how prevalent something is, number one, and it also pushes firms to try to prevent it from happening at all.

Roumeen Islam: That was going to be my next question. Sorry, I didn't mean to interrupt. Go ahead.

Tyler Moore: In security, the challenge often is: how can we measure things better? How can we understand the true magnitude of risk, how prevalent these risks are, and how effective our responses can be? Take data breaches: before these data breach notification laws were in place, we didn't know how frequently companies were losing our personal information. Unfortunately, we now know that they do it a lot, and we know because of these requirements to disclose. Now, there are many other cybersecurity threats beyond data breaches that carry no such disclosure requirement. We know they're possible, but we don't have the same level of knowledge of how prevalent they are, and there is no comparable cost attached to having to share them.

And so, I think I would start there in terms of a policy intervention: improve our knowledge in order to mitigate this information asymmetry.

Roumeen Islam: Yes, you know, in many fields, and in economics, in many markets, these are very important tools: information disclosure requirements. I mean, think of capital markets and listed firms. They need to disclose how they're doing, their profits and losses.

Tyler Moore: Another area that I think is underappreciated is the role that a government can play in coordinating. There's a coordination effect. I talked about the NIST cybersecurity framework already, which is a voluntary arrangement.

Companies are not compelled to use it, but the government developed it: they brought together private stakeholders and made this framework. And now it's a common framework that organizations can use, and that has actually been quite powerful. Governments can bring different parties together in a way that no one else can.

And I think this is actually a very effective policy instrument that should be used more. I'll give you one more example: something called a bill of materials. We know about bills of materials for physical goods. Any time you purchase a physical product, there's often a bill of materials listing the ingredients and the different components of the product.

We don't have a bill of materials requirement for software. You buy a piece of software, and you basically don't know what's in it. Well, there has been a push to create something called a Software Bill of Materials, where you could identify the software libraries and the different types of code that are present.

And there's been an effort led by the U.S. Department of Commerce to adopt a Software Bill of Materials standard, so that companies could at least know what's in the software they're buying, in terms of which libraries it depends upon. Then, if a vulnerability is later discovered, it can be fixed.
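To illustrate the idea, here is a toy Software Bill of Materials in Python. The product, versions, and field names are hypothetical, invented for this sketch; real SBOM standards such as SPDX and CycloneDX are far richer. The core idea is the same: a machine-readable dependency list you can query when a vulnerability is disclosed.

```python
import json

# A toy SBOM: product name, components, and field names are all hypothetical.
sbom = {
    "product": "example-app",
    "version": "2.1.0",
    "components": [
        {"name": "openssl", "version": "3.0.13"},
        {"name": "zlib", "version": "1.3.1"},
    ],
}

def is_affected(bom: dict, lib: str, vulnerable_version: str) -> bool:
    """Does a newly disclosed vulnerability in lib@vulnerable_version hit this product?"""
    return any(c["name"] == lib and c["version"] == vulnerable_version
               for c in bom["components"])

print(json.dumps(sbom, indent=2))
print(is_affected(sbom, "openssl", "3.0.13"))  # True: this product needs a patch
```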

Roumeen Islam: And this is under consideration now?

Tyler Moore: It's happening now, right? It's not something that will be mandated per se; it's not saying that every piece of software produced will have to do this. But the government has played a key role in setting up this framework and making it become a reality. And it naturally addresses one of the key information asymmetry problems, right? It helps you know something about the software itself. So, these are the kinds of innovative policy interventions that I think could make a real dent in this very big problem, which is going to be with us for a long time.

Roumeen Islam: All right. So, you've spoken about voluntary guidelines and some mandatory requirements on information disclosure, and you think all of these will affect incentives enough to create some movement in this field. But is information disclosure enough? Are there other things that could be done?

Tyler Moore: Certainly. The best thing you can do is assign responsibility, a.k.a. liability, for problems and for an attack when it occurs. But you have to be smart about how you assign that liability.

And so, I think it's important to try to focus on what we call the least-cost avoider. We see examples where people say we should assign liability to consumers: if you were careless with your password and you get attacked, then maybe it's the consumer who should be held responsible.

I think that's misguided, because there's a significant limit to what we as consumers can do. However, there are often other stakeholders within the system who are in a much better position to fix things. The best example I'll point to is internet service providers: the company you get your internet connection from.

They have security teams, and they are often in a very good position to identify which of their customers have been hacked and placed in these botnets I was speaking about. The reason they can is that they sit as a mediator between their customers and all the different spots on the internet that we try to communicate with.

And they can quite readily detect the patterns of anomalous traffic and identify whenever a customer has been infected. So, they're in a really good position to do something about it. Right now, they don't. Why not? Because of the externality: the harm is not affecting them. And so, I think you can assign liability to a key intermediary like an ISP to correct and internalize that externality when it's present.
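As a sketch of the kind of signal an ISP could act on, here is a toy anomaly check in Python. The traffic numbers and the z-score cutoff are illustrative assumptions; real ISP detection is far more sophisticated. The asymmetry, though, is the point: the ISP can see this pattern, and the customer usually can't.

```python
from statistics import mean, stdev

def looks_infected(history: list[int], today: int, z_cutoff: float = 4.0) -> bool:
    """Flag a customer whose outbound traffic today is far outside its own norm."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (today - mu) / sigma > z_cutoff

# A week of a customer's outbound SMTP (email) connection counts, assumed.
normal_week = [12, 9, 15, 11, 8, 13, 10]

print(looks_infected(normal_week, today=14))      # False: an ordinary day
print(looks_infected(normal_week, today=40_000))  # True: classic spam-bot pattern
```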

Roumeen Islam: Has that been successful, assigning this liability to a third party?

Tyler Moore: In contexts other than cybersecurity, sure, all the time. One example: any time you're out driving, you'll often see a commercial vehicle with a bumper sticker saying, "How's my driving? Call this number to report." That is an example of indirect intermediary liability, and it works quite well.

So, say you have an employee who is driving a company car. If they drive recklessly and cause an accident, then on the one hand you could hold them responsible, but there are downsides. One is that they don't have as deep pockets as their employer might, yet they can cause real harm. So, these liability regimes assign the responsibility to the employer, because the employer is in a good position to monitor the employee's driving performance and safety record, and can influence it.

We do see this work in the offline space. Could it work in the online space? I think it could, and I think ISPs are the best example. We have seen some examples of the government playing a coordinating role to encourage this voluntarily. Several years ago, the Federal Communications Commission brought together large internet service providers to try to work on this problem and come up with responses to deal with these botnets.

And it worked, but only to a point. The ISPs listened and played very nicely while the meetings were happening, and they said they were going to do great things. But then the pressure came off, and there was no mandatory requirement to do anything, and so a few years later we're kind of back where we were. So, I think eventually you have got to put some teeth into these requirements.

Roumeen Islam: Okay. You just talked about third-party liability, but could we go back a bit to assigning liability to the parties directly involved? You said consumers can't pay for the extra security that might be needed. What about other firms? What about financial firms?

Tyler Moore: Yeah, absolutely. To an extent, you could say the financial sector is a case where this actually works, at least in part. Take online banking fraud: in most countries, the regulations stipulate that whenever there is online banking fraud in some form, the customer cannot be held liable unless it can be shown that it was their fault.

And so, this has encouraged financial institutions to make very significant investments to try to minimize and manage that particular risk. So, we do see particular instances of this working. The real challenge is that once you get beyond the financial sector and these individual sectors, cybersecurity and cyber insecurity affect all sectors of the economy, and we don't, at this point, have a robust response across the economy to fix these challenges. But if we're going to start somewhere, you want to start at the enterprise level. You first empower enterprises to improve their cybersecurity: you devise these security frameworks, and you provide resources to make it easier for them to adopt these changes. Then, perhaps, eventually you do need to look at more of the stick side of the equation and consider assigning liability where, for instance, insufficient resources have been devoted to cybersecurity and an organization gets hacked.

Roumeen Islam: I think many regulations may indeed have to be set at the specific sectoral level. And I'm going back to my favorite example of the financial sector, because I don't have to pay when there's credit card fraud. I'm just wondering whether that's the case in all countries.

Tyler Moore: So, it varies. It varies a lot, because every country has its own financial regulators. And, you know, it goes back to one of the first classic examples of economics applying to security: in the 1990s, the British banks operated under very different regulatory environments and rules than the U.S. banks around this very problem.

At that time, there were significant rates of ATM card fraud: people copying ATM cards and emptying other people's bank accounts. In the US, the regulations have long been clear: when this happens, the banks have to pay. Whereas in the UK, banks could blame the customer. They could essentially assert that the customer was careless with protecting their information, and the onus was then on the customer to prove otherwise, which is often impossible. And what's interesting is that we ended up seeing much higher rates of fraud in the United Kingdom than in the United States, because the incentive to invest in eradicating this fraud was much weaker in the United Kingdom, where banks could share the losses with consumers. It's instructive to see examples like that.
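A back-of-the-envelope sketch of that incentive difference, with invented numbers: the same prevention investment pays for itself when the bank bears the losses, and doesn't when losses can be shifted onto customers.

```python
# Illustrative only: how liability assignment changes a bank's incentive
# to invest in fraud prevention. All figures are assumptions.

annual_fraud = 1_000_000   # expected fraud losses with no extra investment
prevention_cost = 400_000  # an investment that would halve those losses

for regime, bank_share in [("US-style (bank pays)", 1.0),
                           ("UK-style (losses shared with customers)", 0.3)]:
    savings_to_bank = bank_share * annual_fraud * 0.5  # bank's share of avoided fraud
    print(f"{regime}: bank saves {savings_to_bank:,.0f}; "
          f"invests: {savings_to_bank > prevention_cost}")
```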

Roumeen Islam: Are we moving in the right direction to get better outcomes?

Tyler Moore: I think we can. On the current path, I'm not sure that we are. I think there is probably far too great an acceptance of the status quo, because it certainly suits the cybersecurity industry: they want to increase their revenues and sell more products. And frankly, I think they've gotten away without the pressure of having their products evaluated for effectiveness for a bit too long. If we're going to change that, we need to demand more accountability for the effectiveness of these products, and to get there, I think we need government to play a coordinating role.

And one way to do that is for governments to take a stronger role in collecting relevant data, not only on how prevalent breaches and the different security threats are, but also on the effectiveness of security interventions. There are proposals in different countries to develop things like bureaus of cyber statistics, and I think that could be a reasonable way forward. I don't think we're going to get out of this until we've applied a healthy dose of empiricism to the problem.

Roumeen Islam: Let's talk about data privacy versus cybersecurity. Are they related? Some may say they are, and in what way? And are there differences across countries?

Tyler Moore: In many ways, privacy and security are very closely related, and they come closest together when you think about protecting against breaches of private information. So, we've talked about data breach notification laws, which require disclosure any time personal information is accidentally or deliberately shared. Maintaining the privacy of your sensitive personal information is absolutely part of what you would want a cybersecurity effort to do.

And when you think about this, the approaches to data privacy in places like the United States versus the EU are wildly different. In the United States, we have a very sectoral approach, where rules govern particular types of information. So, we have HIPAA, which regulates health data.

We have FERPA, which in my case covers educational personal information. And if you're not covered by one of those particular sectors, then basically anything goes. The EU, through the GDPR, takes a much more comprehensive approach: all personal information needs to be protected.

They're much more comprehensive in that respect, but they're also much more attuned to an ex-ante approach: there are strong requirements up front. So, the GDPR talks about data minimization. Essentially, companies must limit the amount of personal data they maintain, because that will hopefully limit the scope of future harm. So, there are a lot of steps that have to be taken up front to make a breach less likely to occur. And interestingly, the GDPR does have a data breach notification component, which works quite differently from how it's done in the United States.

In the United States, it's all state-based, so there are 48 or so different laws, but in nearly all cases the consumer has to be notified. In the EU's case, any time there's a data breach, the regulator has to be notified, and only if there is significant harm does the consumer also have to be notified. And so, what we see in many cases is that breaches aren't actually publicly disclosed, which certainly satisfies companies that don't want their dirty laundry aired. It also might encourage them to disclose more to the regulator, because they won't necessarily be punished through public shaming.
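As a toy encoding of the two disclosure flows just described (a simplification of the episode's description, not legal guidance):

```python
# Who gets notified after a breach, under the two regimes as described
# in this episode. Real law has many more conditions and deadlines.

def who_is_notified(regime: str, significant_harm: bool) -> set[str]:
    if regime == "US-state-law":
        return {"consumer"}  # and, in effect, the public
    if regime == "GDPR":
        notified = {"regulator"}
        if significant_harm:
            notified.add("consumer")
        return notified
    raise ValueError(f"unknown regime: {regime}")

print(who_is_notified("US-state-law", significant_harm=False))  # {'consumer'}
print(who_is_notified("GDPR", significant_harm=False))          # {'regulator'}
print(who_is_notified("GDPR", significant_harm=True))           # regulator + consumer
```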

Roumeen Islam: Okay, so there's a good side. At least they're disclosing more.

Tyler Moore: Yeah, potentially. It's an empirical question whether they will disclose more, though I think you would expect that to happen. And as we think about where else we might use breach notification or data disclosure requirements, I think it makes sense not to always require public disclosure. In the case of personal consumer information, you need to notify the consumer, so I think public reporting is largely unavoidable there.

But in other areas, say intrusions into the electrical grid or other critical utilities, maybe you don't. Maybe you disclose to your regulator, and unless there is a direct impact, an outage or something like that, you don't have to disclose it publicly.

And so, I think you could craft it in a way that would encourage broader disclosure of cybersecurity risks without scaring organizations into saying nothing at all.

Roumeen Islam: That's a very good point, and some very good insights there, Tyler. Thank you very much for sharing your views with us and teaching us so much. Thank you.

Tyler Moore: Thank you!

Roumeen Islam: Well, listeners, we learned that designing a cybersecurity system that works well is rather challenging. Importantly, incentives matter, and there's a role for both the private sector and government in building a solid system. Private initiatives alone, with each firm or each individual making their own decisions on cybersecurity investments based on the information available to them, do not deliver the best outcomes. This is because the market for cybersecurity has two important market failures: asymmetric information and externalities. Information on specific products and risks is not equally well known by everyone, and a single firm's or a single individual's decisions or actions can affect many of us.

So, what do you do? You need a mix of ex-ante regulation, often composed of standards or guidelines, and ex-post regulation, which usually determines liability: that is, who should pay once there has been an attack. And we discussed a number of these today. Thank you for listening, and bye for now.

This episode was recorded in April 2021. 

If you have questions or comments, we'd love to hear from you. You can reach us at tellmehow@worldbank.org. Don't forget to subscribe, and thanks for listening!

View all episodes on our Tell Me How: The Infrastructure Podcast Series homepage