This seems like a good move. I do hope that slow-moving consumers of the software in question can start anticipating an upcoming release and constructing remediation plans, instead of only doing that after the release.
I love it; it's a big-company reformulation of the classic vulnerability researcher's "reporting transparency" process: post "Found a nasty vuln in XYZ: 6f0c848159d46104fba17e02906f52aef460ee17d1962f5ea05d2478600fce8a" (the SHA-256 hash of a report artifact confirming the vuln).
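For anyone unfamiliar with the trick: the hash acts as a commitment. You post the digest now, and when the embargo lifts you publish the report itself; anyone can re-hash the file and verify it existed, unchanged, at announcement time. A minimal sketch in Python (the report file name is made up):

    import hashlib

    def commit(path: str) -> str:
        """Return the SHA-256 hex digest of a report artifact."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # At announcement time: post only the digest.
    print(commit("xyz-vuln-report.pdf"))

    # At disclosure time: publish the report; anyone can rerun
    # commit() and confirm the digest matches the earlier post.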
Related links:
- Vulnerability Disclosure FAQ ( https://googleprojectzero.blogspot.com/p/vulnerability-discl... )
- Reporting Transparency ( https://googleprojectzero.blogspot.com/p/reporting-transpare... )
> This data will make it easier for researchers and the public to track how long it takes for a fix to travel from the initial report, all the way to a user's device (which is especially important if the fix never arrives!)
This paragraph is very confusing: what data is meant by "this data"? If they mean the announcement that "there's something", isn't the timeline of disclosure already made public under the current reporting policy once everything has been opened up?
In other words, the date of the initial report is not new data. Sure, the delay is reduced, but the data itself is not new at all, in contrast to what the paragraph suggests.
I find the stated goal of alerting downstream a bit odd. Most downstreams scan upstream web pages for releases and automatically open an issue after a new release.
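A minimal sketch of that kind of automation, assuming the upstream publishes its releases as an Atom feed (the feed URL and the issue-filing step are placeholders):

    import urllib.request
    import xml.etree.ElementTree as ET

    # Placeholder URL; many forges expose releases as an Atom feed.
    FEED = "https://example.org/project/releases.atom"
    ATOM = "{http://www.w3.org/2005/Atom}"
    STATE = "last_seen_release.txt"

    def latest_release(feed_url: str) -> str:
        with urllib.request.urlopen(feed_url) as resp:
            root = ET.fromstring(resp.read())
        # The first <entry> in an Atom feed is the newest release.
        return root.find(f"{ATOM}entry/{ATOM}title").text

    def check() -> None:
        current = latest_release(FEED)
        try:
            seen = open(STATE).read().strip()
        except FileNotFoundError:
            seen = ""
        if current != seen:
            open(STATE, "w").write(current)
            # A real downstream would file a tracker issue here,
            # e.g. via its bug tracker's API.
            print(f"New upstream release: {current} -- opening issue")

    check()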
Project Zero could also open a mailing list for trusted downstreams and publish announcements of newly found issues there.
The real goal seems to be to increase pressure on upstream, which in our modern times ranks lowest on the open source ladder: below distributors, corporations, security pundits (some of whom do not write software themselves and have never been upstream for anything), and demanding users.
Not sure what the measurable metric is here, or what will be considered a success in this trial period.
Propagating the fix downstream depends on the release cycles of all the downstream vendors. Giving them a heads-up will help with planning, but I doubt it will significantly impact the patching timeline.
It is far more likely that companies will get stressed that the public knows they have a vulnerability while they are still working to fix it. The pressure from these companies will probably shut this policy change down.
Also, will this policy apply to Google's own products?
It also seems to disclose interesting internal products: what is Google Bigwave?
It is indeed a complex problem. But is Google now slowly killing FOSS? IMHO there is far too much emphasis on FOSS security and far too little on closed-source hardware, firmware, and software. Too much blame and pressure will not solve the complex problems stated in the blog.
If Google is adopting this, maybe rachelbythebay's vagueposting was ahead of the curve?
I jest; the vagueposting led to uninformed speculation, panic, Reddit levels of baseless accusation, and harassment of the developers: https://news.ycombinator.com/item?id=43477057
I hope Google's experiment doesn't turn out the same.
This policy change makes sense to me; I'm also sympathetic to the P0 team's struggle in getting vendors to take patching seriously.
At the same time, I think publicly sharing that some vulnerability was discovered can be valuable information to attackers, particularly in the context of disclosure on open source projects. It's been my experience that maintaining a completely hermetic embargo on an OSS component is extremely difficult, both because of the number of people involved and because fixing the vulnerability sometimes requires advance changes to other public components.
I'm not sure there's a great solution to this.