I wouldn't put too much stock in WotC's methodology unless they actually reveal it. My suspicion is that there is a lot more sampling error than any of us think. Any quantitative analysis is about minimizing sources of error, and focusing solely on win% won't do that.
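To put a number on how noisy win% is, here's a toy sketch using a normal-approximation confidence interval for an observed win rate. The deck, game counts, and function name are all invented for illustration; this isn't anyone's actual methodology.

```python
import math

def win_rate_ci(wins, games, z=1.96):
    """Rough 95% confidence interval for an observed win rate.

    Normal approximation only -- crude for small samples or
    extreme rates, but enough to show the width of the noise.
    """
    p = wins / games
    half = z * math.sqrt(p * (1 - p) / games)
    return p - half, p + half

# A deck that wins 55 of 100 tracked games:
lo, hi = win_rate_ci(55, 100)
# The interval spans roughly 45% to 65% -- on this sample alone you
# can't separate a "55% deck" from a coin flip.
```

Even a few hundred games per matchup leaves intervals wide enough that small win% differences between decks are indistinguishable from noise.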
For one thing, you would need identical lists. Comparing archetypes injects error and complexity that is difficult to correct for. This is the same reason tracking surveys use identical questions year after year: changing even the phrasing can shift the results and make the data incomparable.
It's fine provided the results acknowledge their limitations and sources of error, but people like making bold claims off data that won't generalize to where they want it applied. Honestly, that's where subjectivity comes into play. No one should want a truly data-driven format. Subjective experience matters, and it is measured quite differently from win rate.
WotC should be spending time measuring things like satisfaction with a given format, whether we're talking about Vintage or Standard. They could develop a validated tracking metric, collect data weekly, bi-monthly, whatever, and measure self-reported satisfaction with a format over time. It would help them a great deal in delivering sets optimized for what people actually want to play. Their current market research does not impress me.
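To make the tracking idea concrete, here's a minimal sketch of aggregating recurring survey responses into a trend line. The scores, the 1-10 scale, and the weekly cadence are all assumptions; a real tracking instrument would need validation work well beyond this.

```python
from statistics import mean

# Hypothetical weekly format-satisfaction responses (1-10 scale)
# from a recurring survey with identical questions each wave.
weekly_scores = [
    [7, 8, 6, 9, 7],  # week 1
    [6, 7, 5, 8, 6],  # week 2
    [5, 6, 5, 7, 5],  # week 3
]

# Per-wave average: a steady decline here would flag a format
# problem that win% tables alone would never surface.
trend = [round(mean(week), 2) for week in weekly_scores]
```

The point is only that a consistent, repeated self-report measure gives you a time series you can act on, which win rates can't provide.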
This is a long-winded way of saying win% is a heuristic, not a complete picture.