The trouble with stock market systems is that any successful one is defeated by its own success, as its logic gets factored into everyone else's strategy.
This self-defeating problem is not present in the other AI applications you mentioned.
Again, I'm not trying to provide an exhaustive survey of AI or ML applications, merely to point out the fallacy of using past failures of AI and ML to conclude that they will continue to fail at tasks against which they have thus far made limited headway.
Bear in mind that in the '90s only a handful of supercomputers had teraflops of computing power, whereas now you can get an 11 TFLOPS Titan X Ultimate for $1200. Compute power continues to grow exponentially, yet it has only recently reached a level where certain kinds of approaches are truly practical. As Heinlein said, "When it's time to go railroading, people go railroading."
It's interesting that you should talk about antagonistic systems, since actor-critic models, dueling architectures, and Generative Adversarial Networks (GANs) are an extremely hot area of AI/ML research at the moment.
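To make the "antagonistic" point concrete, here's a minimal sketch (assuming PyTorch and a toy 1-D dataset, not any particular published architecture) of the adversarial setup behind a GAN: a generator and a discriminator are trained against each other, and the arms race between them is exactly what drives learning.

```python
import torch
import torch.nn as nn

latent_dim = 8
# Generator maps random noise to a 1-D sample; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, latent_dim))          # generated samples

    # Discriminator learns to tell real samples from generated ones.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator learns to fool the (just-updated) discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```

The interesting part is that neither network has a fixed target: each is optimizing against an opponent that keeps adapting, which is the same dynamic you're describing in markets, just harnessed on purpose.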