Most honest security researchers operate under the ethic of responsible disclosure. Responsible disclosure works like this: when a researcher — a “white hat hacker” — discovers a vulnerability in a company’s code, they tell the company and give the company’s developers time to release a patch to users. If the company doesn’t release a patch, the researcher releases full details of the vulnerability, often with a proof of concept, to the public. When the company does release a patch, the details are published after some delay — partly so the researcher gets credit, but also so that the problem becomes widely known and can serve as a lesson.
The release of vulnerability details is called full disclosure, and it’s controversial, especially among the companies that make software. Consider the recently reported case of the car manufacturer that was informed its vehicle security was vulnerable to being hacked. Instead of fixing the vulnerability, the company took the researchers to court to try to stop them from publishing the details.
Companies and developers in this position often argue that by widely publicizing a vulnerability, researchers are putting users at risk. The more people who know about a vulnerability and how to exploit it, the greater the risk to users. This is inarguably true, and although it may seem like a good argument against putting vulnerability details into the public eye, it’s not a sufficient reason to abandon the principle of full disclosure.
Bruce Schneier, a leading security writer, makes the case for why in a classic article. It boils down to incentives.
“To a software company, vulnerabilities are largely an externality. That is, they affect you — the user — much more than they affect it. A smart vendor treats vulnerabilities less as a software problem, and more as a PR problem.”
No company likes to have vulnerabilities in its software, and most will try to avoid them as far as is practically and economically feasible. But vulnerabilities can be expensive to fix, and patching them often pulls developers away from work that makes the company money — and these companies exist to make money.
Not fixing a vulnerability is bad for the user, but unless the vulnerability is widely publicized, it isn’t especially bad for the company. From a certain perspective — one that’s not unusual in corporate environments — refusing to fix vulnerabilities that aren’t widely known about is the rational choice.
So we have a situation where fixing the vulnerability isn’t necessarily the company’s first choice, but it’s definitely best for the user. And it’s entirely likely that if a researcher knows about the problem, her “black hat” counterpart knows about it as well.
How do researchers assert the interests of users? By making the vulnerability public. First, the knowledge that users will learn of the problem if the company declines to act puts pressure on companies to fix their software — without that pressure, researchers don’t have much influence. Second, by releasing the information, researchers give users the ability to make their own choice about whether to continue using the vulnerable software.
If users know about the vulnerability, instead of just being the users’ problem, it becomes the company’s problem too. The rational calculus is changed and the “right choice” from the company’s perspective aligns with the beneficial outcome for the user.
Earlier we mentioned that releasing vulnerability information could put users and businesses at risk. It might in a specific case — although it’s by no means a certainty. But if researchers don’t have full disclosure up their sleeves, they have no influence at all over the developers of vulnerable software.