No, if the Director had overridden the majority of the membership, the browser vendors would've shipped something anyway, and the YouTubes and Netflixes of the world would be using it anyway.
Essentially. The money gated behind a closed DRM solution is so large that the W3C ran the risk of becoming an irrelevant standards body for this space if they didn't comply with what their members wanted to do.
It's sub-optimal, but I don't think an optimal solution actually existed. A standards board divorced from reality is no better than no standard at all.
> No, if the Director had overridden the majority of the membership, the browser vendors would've shipped something anyway
That's fine. It's better that the burden for maintaining non-standard plugins be put on the sites and browsers that choose to do that, rather than be placed on everyone else.
It's funny how people try to make "standard" mean something magical when it's not. An Internet standard is just a document written by a committee of people who intend to do what it says. They then publicize it and try to get people to go along with it. You can't keep people from getting together to write a document or from doing what the document says. You can just choose whether to participate.
If W3C chose not to help write the DRM standard, the browser vendors could easily create a new organization and write a standard anyway (as happened with WHATWG).
Browser vendors and website authors could then read that document just as easily as anything published on the W3C website, so there is no "burden" for them. There would be no difference to the end user. The only burden we're talking about is the inconvenience of setting up an organization to do the writing. It's a minor speedbump at best.
The upshot is that there is no way to prevent browser vendors from standardizing anything they want. It only gets blocked if they disagree.
No one is implying that not infecting W3C with DRM is going to kill DRM. Of course anyone can agree to things in whatever organized way they want to.
The reason to keep it out of W3C is because it violates their core mission: https://www.w3.org/Consortium/mission#principles . Other organizations with a different mission are free to do as they wish, obviously.
How would that have improved the current situation? The videos that Metastream wants to play would still have been DRM'd and would still be playable in the mainstream browsers. What would the benefit have been? What burden is being placed on people now that wouldn't be placed on people in that scenario?
Making the user experience of DRM worse is better because then fewer people will use it. If the platforms all made it so that you have to solder a new chip into your phone before you can play DRM content, there would be a lot less DRM.
The argument that platforms have to do this for competitive reasons is doublethink. If the experience is worse and that will cause customers to flee, how is it that they would only flee from the platforms that don't have DRM but not the content providers that require it? Wouldn't that create a huge market opportunity for new DRM-free studios, who would then out-compete the traditional ones by being available on all platforms instead of only on Insecure Expensive Proprietary Slow Cableco Platform Nobody Likes?
> If the platforms all made it so that you have to solder a new chip into your phone before you can play DRM content, there would be a lot less DRM.
I mean, yes, but why would they do that?
> Wouldn't that create a huge market opportunity for new DRM-free studios, who would then out-compete the traditional ones by being available on all platforms instead of only on Insecure Expensive Proprietary Slow Cableco Platform Nobody Likes?
You're assuming that content is fungible. If I want to watch Game of Thrones, I want to watch Game of Thrones, not "Winter Dragon," and "Winter Dragon" being DRM-free won't incentivize me to watch it.
Furthermore, development of media content is expensive and requires a bunch of up-front capital / investment. So while there is a market opportunity, it isn't obvious that taking advantage of it without connections to the existing industry is a profitable strategy.
So that they're not beholden to adversarial corporations.
> You're assuming that content is fungible. If I want to watch Game of Thrones, I want to watch Game of Thrones, not "Winter Dragon," and "Winter Dragon" being DRM-free won't incentivize me to watch it.
Except that it is fungible, it's just not universally fungible.
The reason Winter Dragon isn't fungible with Game of Thrones is that you don't like it as much. You'd rather watch Game of Thrones. But there are thousands of shows, and out of those there are hundreds you might want to watch, yet there is only time to watch dozens or fewer.
Nobody can actually watch all of the shows they might want to watch. Letting "lack of DRM" be the thing that chooses between the ones of equal desirability to you is as good a way of pruning the list as any.
> Furthermore, development of media content is expensive and requires a bunch of up-front capital / investment. So while there is a market opportunity, it isn't obvious that taking advantage of it without connections to the existing industry is a profitable strategy.
Who says it has to be someone without connections to the existing industry? New independent studios form all the time as existing talent strikes out on their own. All it takes is for one of them to prove the market before everybody is doing it.
> So that they're not beholden to adversarial corporations.
What is so adversarial about these corporations to the browser makers? What benefit, concretely, do Microsoft or Google or Apple get from being free of the shackles of Disney or CBS?
One concrete benefit I see is less risk of the third-party code destabilizing your code because it has bugs and is running within your address space, but there's an easy solution there: sandbox the EME blob like Firefox (and other browsers too, I assume) does. Then its crashes and buffer overflows don't become your crashes and memory corruptions.
Only in the case of Firefox is it really third-party code; Chrome, Edge, and Safari all ship with EME modules developed by the respective companies, but they still sandbox it.
Plugins like Flash, which are the historic answer for DRM on the web, have a huge attack surface and can interact with the browser in all kinds of odd ways. EME modules are much smaller and much less powerful (AFAIK they either return a frame to the browser to composite or hand it directly to the OS compositor, so you don't have to worry about them touching layout or triggering reflows), and as a result they can be put in stricter sandboxes. That's a clear win from a browser security and stability point of view, and a concrete benefit for browser vendors in making it viable to drop Flash (and dropping Flash without providing a replacement for encumbered video isn't an option: breaking websites like Netflix would push users toward other or older browsers that still support Flash).
> Only in the case of Firefox is it really third-party code; both Chrome, Edge, and Safari ship with the EME modules developed by the respective companies, but they still sandbox it.
They still sandbox it because from the user's perspective it's still an unauditable black box, so at least the user can verify the sandbox. But that doesn't actually solve the problem, because the black box code is interacting with black box hardware. If there is a bug, you've done the opposite of sandboxing it -- you've prevented it from being traced and given it direct access to hardware.
> and dropping Flash without providing a replacement for encumbered video isn't an option: breaking websites like Netflix will cause users to use other/older browsers that do support Flash
The solution to Flash should have been to have someone reverse engineer it and publish a 100% open source implementation, including the DRM. Then let them keep publishing using Flash format as long as they like, but no more black box.
> What is so adversarial about these corporations to the browser makers? What benefit, concretely, do Microsoft or Google or Apple get from being free of the shackles of Disney or CBS?
These companies make Xbox, Chromecast/Stadia, Apple TV, etc. Things that could plausibly be a media center, given some latitude and open standards. You could upload your movie collection onto it, give it your streaming account credentials and it gives you a single interface to all your media.
DRM kills that. You can't make an interface that allows the user to watch a Disney movie they've paid for and then have it show the YouTube commentary on it. You can't have something that recommends Orange Is The New Black after you watch The Wire because one is Netflix and the other is HBO.
Because DRM allows the studios to assert rights that copyright doesn't give them. That's all it does -- that's why they want it. It clearly doesn't prevent piracy.
> One concrete benefit I see is less risk of the third-party code destabilizing your code because it has bugs and is running within your address space, but there's an easy solution there: sandbox the EME blob like Firefox (and other browsers too, I assume) does. Then its crashes and buffer overflows don't become your crashes and memory corruptions.
The problem with this is that it can't simultaneously have privileges so low that it can't do anything harmful even if totally compromised by malicious actors, and privileges so high that it's immune to interference even from the owner of the system with physical access to it. They're diametrically opposed objectives. And the second one systematically fails regardless, but having to pretend otherwise compromises the ability to do the first.
Would it have made the user experience of DRM any worse than it currently is, though?
The DRM module would still ship with Chrome and Edge (and likely Safari too, given Apple became involved pretty quickly), you'd still need multiple different streaming formats (in the form of different DRM formats) as you do today, and maybe you'd need slightly different JS codepath per-browser too (but that's not a big difference to today with the different DRM formats).
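That per-browser divergence is visible in how EME pages pick a DRM system today: each browser ships a different CDM behind a different key-system identifier, and a site probes them in order. A minimal sketch of that selection logic (the key-system strings are the real Widevine/PlayReady/FairPlay identifiers; the helper itself and its injected `probe` function are illustrative, standing in for `navigator.requestMediaKeySystemAccess` so the logic can run outside a browser):

```javascript
// Key systems shipped by the major browsers' CDMs.
const KEY_SYSTEMS = [
  "com.widevine.alpha",      // Chrome, Firefox
  "com.microsoft.playready", // Edge
  "com.apple.fps.1_0",       // Safari
];

// Try each candidate key system in preference order and return the first
// one the probe accepts. In a real page, `probe` would wrap
// navigator.requestMediaKeySystemAccess; it's injected here so the
// selection logic is testable on its own.
async function pickKeySystem(probe, candidates = KEY_SYSTEMS) {
  for (const ks of candidates) {
    try {
      await probe(ks); // resolves if this browser's CDM supports it
      return ks;
    } catch (e) {
      // unsupported key system: fall through to the next candidate
    }
  }
  return null; // no supported DRM: fall back or show an error
}
```

In a browser the probe would be something like `ks => navigator.requestMediaKeySystemAccess(ks, [config])`, with `config` listing the codecs and robustness levels the site needs; the corresponding per-key-system differences (license server protocol, packaging format) are where the "slightly different codepath per browser" comes from.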
It's very unclear to me that the W3C refusing to be involved from day one would've led to an outcome more than subtly different from the one we ended up at. By the time the specification went to Recommendation, there were already multiple interoperable implementations, so objecting at that point was purely a matter of principle; it literally wouldn't have affected the outcome in any way.
If the W3C making the right decision would make them irrelevant then what has actually happened is that they're already irrelevant, and becoming a rubber stamp on bad ideas only serves to prove that and erode their credibility.
Moreover, such organizations are made up of their members, and it's up to the members to do the right thing as well. Nobody had to volunteer to be the first to add this gunk to their browser. It can't be a competitive disadvantage if nobody else has it either, and it can't be a competitive advantage if everybody else has it, and those are the two options so why not choose the first?
This is just the age-old discussion of whether it's better to capitulate in small ways so you can steer a group away from bad behavior/decisions later, or to make a stand on principle to draw attention to the current bad decisions.
As much as some people like to say one is better than the other, I think the answer is always "it depends". Unfortunately, it depends not only on the relative power and momentum behind the current problem when deciding, but also on unknowns such as what will happen in the future.
It's hard for me to find too much fault in them deciding that they would rather stay somewhat relevant to the process than become obviously irrelevant (if that was indeed the thought process), as there's still a lot they can affect in the future. Armchair quarterbacking about what they should have done isn't too useful in my eyes.
Except that there was no such trade-off here. If they refuse to approve DRM and then some browsers unwisely implement it anyway, having their approval makes it worse, not better. The browsers doing the wrong thing can claim to be following a standard, even though the standard is useless garbage, because the entire point of having a standard is so that anyone can implement it, which in this case they still can't.
The trade-off is in relevancy. If the standards body doesn't force a confrontation it knows it can't win, then it retains some power that it can throw behind or against future proposals. If the major browsers have already decided to completely ignore them and create their own consensus, there's that much less reason to listen to them next time. Not only has a precedent been set, but coordination on features outside their control may have already been somewhat standardized behind the scenes (beyond what they already do), making it easier next time.
The downside is, as you say, that the browsers can point to the standard as the reason they implemented it, but that's why it's a trade-off and not cut and dried (IMO).
You seem to be missing the fact that, for Web standards, the W3C just is the browsers. The last time it wasn't, the browsers formed WHATWG and the W3C became irrelevant.
The existence of features in any piece of software is a burden on further development of said software. Every time we go to add some other new feature to the spec we have to take into account how it will affect EME. That's just how software works.