Even in ML, it's common knowledge that the long tail of papers demonstrates brittle effects that don't really replicate or generalize: incomparable evaluations, hyperparameters tuned to fit the test data, metrics gamed through various evaluation tricks (Goodhart's Law), better prior work sometimes going uncited, and so on. Industry people definitely know not to take a random ML paper and assume it has any use for applications.
This isn't to say there are no good works, but in a field that produces >10,000 papers per year, the bulk of them can't be all that great. Still, academics have to keep their jobs, PhD students have to graduate, etc., so everyone keeps pretending.
What I wrote also applies to most papers at top-tier conferences like NeurIPS and CVPR. Thousands of papers are published per year even just at those venues. What gets picked up by the media, or even just reaches the HN crowd, is the tip of the iceberg.