Imagine if the immediate outcome of AI is not that we replace taxi drivers, dishwashers, and factory workers, but instead that we displace most knowledge-worker white-collar jobs, like quants and software engineers?
There's an old (and sometimes forgotten) idea in AI that perhaps things we think are simple, like vision and control (robotics), are actually incredibly complicated and took millions of years to evolve.
Whereas things we think are complicated, like playing Go or picking stocks or computer programming, are actually quite simple to learn.
This would be counter-intuitive but---as you observed, and taking my argument recursively---common sense might be much more difficult to get right than obscene pathological thinking.
Anyway, I've always thought a good startup would be to automate away Silicon Valley using AI. It's so punk rock that a lot of disillusioned smart techies would join under this banner. A collaborator of mine has already used AI to do high-level bug finding in blockchain code.
I'm not sure that people appreciate how even the highly technical white collar jobs have large social elements in them. You might be able to get the AI to write the code, but can you get it to attend the meetings?
And it's understanding what the right thing is to build that's the critical challenge in programming.
What if you no longer need meetings? Take accounting software, for instance. That function will probably go from an entire team of accountants to one of the C-levels just triggering the right software at the right time as part of their normal duties.
The software just isn't there yet, but we have some inkling of what might be possible in just a few years. Perhaps a closer analogy would be human computers. Back in the day, you would have meetings with them to set out calculation tasks, but the role has since been so thoroughly absorbed that most people have forgotten it ever existed as an independent function. Employees now just perform the duties of the human computer throughout the course of their day without even thinking about it.
True, but that won't save men. Women are already better suited for jobs that involve communication and empathy.
I was in a hospital last week: the staff was almost entirely female.
> There's an old (and sometimes forgotten) idea in AI that perhaps things we think are simple, like vision and control (robotics), are actually incredibly complicated and took millions of years to evolve.
I think there's a lot of truth to this. I'm new to ML, still learning the ropes via some online courses, but already I can see that, once I get a bit of muscle memory in setting up models, there's a whole lot of power and efficiency to be unlocked by using simple models - specifically in CS/X and Marketing. Obviously model quality matters, so you have to have proper monitoring etc., but this stuff is low-hanging fruit and should enable teams to be much more efficient.
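To make the "simple models" point concrete, here's a minimal sketch of the kind of thing I mean: a one-feature logistic regression trained with plain gradient descent, stdlib only. The churn dataset (support tickets per month vs. whether the customer churned) is entirely made up for illustration; in practice you'd use a library like scikit-learn and real data.

```python
import math

def train_logreg(xs, ys, lr=0.1, epochs=500):
    """Train a one-feature logistic regression with per-sample gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid of the logit
            grad = p - y                              # dLoss/dlogit for log-loss
            w -= lr * grad * x
            b -= lr * grad
    return w, b

def predict(w, b, x):
    """Return the predicted churn probability for x."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Hypothetical toy data: support tickets per month -> churned (1) or stayed (0)
xs = [0, 1, 2, 8, 9, 10]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logreg(xs, ys)
print(predict(w, b, 1))  # low-ticket customer: low churn probability
print(predict(w, b, 9))  # high-ticket customer: high churn probability
```

Even something this crude, wired up to a dashboard with monitoring on the predictions, is the sort of low-hanging fruit I'm talking about.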
I don't think it's intuition. There's a whole field of junk economics dedicated to telling us that your position in the economic class hierarchy determines the automatability of your job. In general it goes unquestioned. A vast amount of capital is also deployed based upon this assumption.
Here's an example: a paper that mathematically blurred the distinction between offshoring and automation:
There was another paper (that I can't find right now) that basically surveyed people about how creative they thought their jobs were and simply assumed that creativity was inversely proportional to automatability.
Ironically I think a widespread belief in this myth helped, among other things, lead to the trucker shortage. Who wants to join a profession with a high barrier to entry that they believe will be automated soon?
If software engineers end up automated away before truck drivers are (not a completely harebrained concept, given that one type of AI is doing better than expected and the other worse), it will put a hilarious spin on the "truckers should just learn to code" concept.
Those "complicated" tasks all take place in artificially constrained systems with limited degrees of variability, which is perfect for an algorithm to learn.
Those "simple" tasks have so much variation in them that it took a billion-plus years of evolution, plus the genetic pretraining, to be able to perform them.