When a software vulnerability is found — whether by a security researcher, a sysadmin, or an end user — there are four ways the discoverer might proceed. Firstly, the discoverer might do nothing at all. Secondly, they might immediately release details of the vulnerability to the public (full disclosure). Thirdly, they might contact the developer of the software directly, report the vulnerability, and then do nothing. Finally, they might contact the developer with the details, give them time to fix the vulnerability, and then release the details publicly after a patch is released (responsible disclosure).
(We’re not considering the fifth potential option: selling vulnerability information. Individuals who take that path aren’t interested in protecting users.)
Security researchers dismiss the first option because it helps no one. Unless they are working privately for the company responsible for the software, they also dismiss the third option — it gives developers no incentive to fix the vulnerability.
That leaves us with two sensible options — full disclosure and responsible disclosure. Neither option is perfect: full disclosure makes users aware of vulnerabilities but it also puts them at risk, and responsible disclosure relies on the developer being willing to implement a fix in a reasonable timeframe. It’s worth noting that both of these definitions are contentious, and many would describe responsible disclosure as the process I’m about to recommend.
The approach recommended by companies like Google and security experts like Bruce Schneier is a hybrid: report the vulnerability to the software’s developers, give them time to implement a fix, and release the details publicly once a patch is available. However, if the developer appears unwilling or unable to fix the vulnerability in a reasonable timeframe (Google recommends 60 days), the details are released publicly regardless of whether a patch has been released.
This approach is superior in several ways: it incentivizes companies to fix their software, and it ensures that the public is not kept in the dark about vulnerabilities that may be being actively exploited.
This process works well across most types of software, from open source (assuming you can’t just push a patch yourself) to proprietary, and from operating system components to end-user applications like WordPress.
The process differs depending on the software, but would typically look like this:
- You discover a vulnerability. Gather as many details as you can — the more the developer knows, the faster they can implement a fix.
- Find any guidelines the software’s developers have published about vulnerability disclosure. For example, WordPress.org has this guide with contact details.
- If there are no published guidelines, find contact details for the company or developers working on the project.
- Inform the developers of the vulnerability by email.
- Wait for a response. If you don’t receive one, try again, and keep trying until a reasonable amount of time has elapsed.
- Keep in touch with the developer, and consider their timeframe for fixing the vulnerability.
- If the vulnerability is serious and the developer does not respond for several weeks, or makes no effort to implement a fix, you may be justified in releasing the details publicly.
- Once the developer has released a patch, and users have had sufficient time to update, publish the details publicly if you so desire.
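As a rough illustration, the waiting periods in the process above can be tracked with a small script. The 60-day disclosure window follows Google’s recommendation mentioned earlier; the seven-day follow-up interval is a hypothetical assumption for the sketch, not a fixed rule:

```python
from datetime import date, timedelta

# The 60-day disclosure deadline follows Google's recommendation;
# the 7-day follow-up interval is an illustrative assumption.
DISCLOSURE_WINDOW_DAYS = 60
FOLLOW_UP_INTERVAL_DAYS = 7

def disclosure_timeline(reported_on: date) -> dict:
    """Return key dates in a hypothetical disclosure timeline."""
    return {
        "reported": reported_on,
        "first_follow_up": reported_on + timedelta(days=FOLLOW_UP_INTERVAL_DAYS),
        "disclosure_deadline": reported_on + timedelta(days=DISCLOSURE_WINDOW_DAYS),
    }

timeline = disclosure_timeline(date(2024, 1, 1))
print(timeline["disclosure_deadline"])  # 2024-03-01
```

Dates like these are a guide, not a trigger: as the next point notes, whether to disclose before a patch still depends on the severity of the vulnerability and the developer’s engagement.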
Keep in mind that full disclosure before a patch is released will put end users at risk. You have to balance that risk with the existing risk of leaving the vulnerability unfixed and unknown to users.
InterWorx supports responsible disclosure, and we will always make every effort to respond to vulnerability reports quickly and implement fixes as soon as possible. Our vulnerability reporting policy can be found here. InterWorx’s responsiveness to vulnerability reports is significantly faster than that of other web control panels, averaging 1 day to respond and 7 days to resolution.