I agree with this, but feel like there might be issues I'm not thinking of. Can anyone shed some light on why browsers are so tolerant today and why that might be a good thing?
The Robustness Principle states "Be conservative in what you do, be liberal in what you accept from others."
Following that maxim, browser developers assumed that even if the HTML wasn't strictly correct, figuring out what the author most likely meant and rendering that was better than not rendering the page at all.
In short, people wrote garbage HTML, and it proved easier to fix browsers than people. At first it wasn't too problematic, but as HTML grew more complex, more problems surfaced, and now everything is a mess.
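You can see that error recovery in action with a small sketch you can paste into any browser console: parsing HTML with DOMParser never fails, it just applies the recovery rules and hands you a repaired DOM.

```js
// Sketch: browsers repair broken HTML instead of rejecting it.
const broken = "<p>first paragraph <p>second <b>unclosed bold";
const doc = new DOMParser().parseFromString(broken, "text/html");

console.log(doc.body.innerHTML);
// -> "<p>first paragraph </p><p>second <b>unclosed bold</b></p>"
// The unclosed <p> and <b> tags are closed automatically; nothing errors out.
```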
This was the goal of XHTML: HTML that had to validate as XML or it wouldn't render at all, and some browsers were indeed strict about this. The idea was that you'd only serve XHTML if you were generating it with an XML toolchain or some other template generator that could only produce valid markup. In practice, that just meant that browsers that didn't understand XHTML treated it as HTML and rendered it anyway, while browsers that did understand XHTML validated it and showed errors. So users saw that browser X (doing the right thing) couldn't display a site while browser Y (doing the wrong thing) could.
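For contrast, here is a rough sketch of the strict model: feed similarly broken markup to the XML parser (which is what serving application/xhtml+xml gets you) and you get an error document instead of a repaired page. The exact shape of the error output varies by browser.

```js
// Sketch: the XML/XHTML model rejects the whole document on the first error.
const broken = "<p>first paragraph <p>second"; // missing close tags
const doc = new DOMParser().parseFromString(broken, "application/xhtml+xml");

// Instead of a usable page you get a <parsererror> document.
console.log(doc.getElementsByTagName("parsererror").length > 0); // true
```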
Probably legacy reasons and the kinds of errors you can get.
Since JS used to be just "sugar on top", it wouldn't make sense to refuse to render the whole page just because the piece of code that makes a logo flash doesn't work right.
Also, JS errors can come from loads of places. What if an extension you use has a bug that only triggers on certain sites because of some stupid Unicode issue? What if some ad has an issue like that?
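Here's a rough sketch of that isolation, using the standard window "error" event: an exception in one script or callback kills only that task, while unrelated code keeps running, and the page can even observe the failure without dying.

```js
// Sketch: one broken script/callback doesn't take the page down with it.
window.addEventListener("error", (e) => {
  console.log("something broke:", e.message); // e.g. a buggy ad or extension
});

setTimeout(() => { throw new Error("ad script blew up"); }, 0);
setTimeout(() => { console.log("the rest of the page still works"); }, 10);
// -> something broke: ... ad script blew up   (message text varies by browser)
// -> the rest of the page still works
```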
And basically, it boils down to this: we all ship buggy fucking software. Everything has some kind of threshold for errors (or errors that only blow up under certain conditions). It's good to have some built-in fault tolerance that prevents an all-out disaster.