This level of hype reminds me of the AI winter. I'm concerned that public interest will hit a peak and then, a few months later, disillusionment will set in and AI will become a discredited failure in the public's eye, since even rapid progress moves slower than an election or typical news cycle.
The AI Winter was not the result of changing public interest.
It was the result of investors and governments losing interest.
All that money poured into AI research produced little reward. There were expert systems that worked well and became profitable businesses, but otherwise there was little to show. In retrospect I think it was a good idea to adjust the funding to match the results and wait until computer scientists came up with new ideas.
The current AI boom is the result of the 'Canadian mafia' diligently working and actually producing results, plus faster computing, especially GPGPU.
Unless we get a constant stream of new ideas that build on the current ones, interest and investment should drop off once most of the benefits have materialized.
>There were expert systems that worked well and became profitable businesses, but otherwise there was little to show.
One could say something similar about our time: "there were neural network applications that worked well, but otherwise there was little to show". What is the fundamental difference between what is going on now and what was happening before the previous AI winter?
I feel that people constantly misrepresent how impressive expert systems seemed back in their heyday. They had a lot of practical applications and they could do some very cool tricks.
Interestingly, the highly impressive accomplishments of SVMs, random forests and boosting went mostly unnoticed, precisely because of the AI Winter.
Well, except for those pesky NSA and DARPA agencies, to name just two of many, that have access to technology you haven't even dreamed of yet.
Congress might be full of idiots or smart people trying to make you believe they're idiots, but don't for a moment think the federal government as a whole is technologically stunted.
Certainly not, but consider the level of dysfunction and complete lack of interdepartmental cooperation. We have the NSA actively hacking other nation-states and our own private sector, and then we have an FBI that resets an iCloud password, preventing them from getting a backup of data they desperately needed.
The NSA hacks anyone who seems interesting. But that, arguably, is their job. As they see it, anyway. The FBI isn't so high-tech, for sure. But they get help, eventually.
I used to work with these agencies, and the cool stuff almost never sees the light of day. Even within the well-funded agencies, truly breakthrough stuff almost never makes it to the people in the building, let alone to the public. So it doesn't really matter what they are doing.
A 7' tall simpleton with enormous strength is dangerous, and powerful. It would be unwise to underestimate them, but it would be equally unwise to misinterpret the source and nature of that power.
Yes they do: shoot, jail, or coerce everyone capable of producing advanced AI. Problem solved, if you're assuming that advanced AI is sufficiently dangerous that not making any at all is a better idea than taking a risk.
And governments will always want to avoid risks, especially risks that knock them out of their monopolies on force and economic power.
That won't happen because the industry controls our governments too much. There is a lot of value to be produced before the algorithms become really dangerous.
Moreover, controlling AI research is even harder than controlling nuclear research. Creating a technological disadvantage relative to rogue states without such regulations does not seem like a good idea.
I haven't read that but I assume it goes along the lines of...
Gambler: "I already got five 6's. I can't possibly get another one. That's a statistical improbability!"
Wrong assumption: that past dice rolls affect future rolls, whereas dice rolls are actually independent. It's improbable to get six 6s in a row, but GIVEN that you already have five 6s, getting a sixth is just 1/6.
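A quick simulation makes the independence point concrete (just an illustrative sketch; the trial count and seed are arbitrary):

    # Estimate P(six 6s in a row) vs. P(the next roll is a 6).
    # Rolls are independent, so the second stays ~1/6 no matter what
    # came before, while the first is (1/6)**6 ~= 0.0000214.
    import random

    random.seed(0)
    TRIALS = 1_000_000

    six_in_a_row = sum(
        all(random.randint(1, 6) == 6 for _ in range(6)) for _ in range(TRIALS)
    ) / TRIALS

    next_is_six = sum(random.randint(1, 6) == 6 for _ in range(TRIALS)) / TRIALS

    print(six_in_a_row)  # ~0.00002
    print(next_is_six)   # ~0.167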
I am optimistic; I don't think an AI winter is coming this time.
We have nearly solved image and speech recognition in the past 5 years. Once that work moves out of academia into real applications, the amount of disruption to society is pretty hard to imagine.
We've made impressive progress, but even in computer vision there is still a lot to do. For example, it's great that we can recognize that certain objects are in a picture, but many real-world applications depend on their exact location, e.g. image segmentation. Current state-of-the-art models generate hundreds of similar object proposals which cannot realistically be narrowed down to a single one to present to a user in an application.
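For what it's worth, the standard pruning step those pipelines lean on is greedy non-maximum suppression; here's a minimal numpy sketch of the idea (the [x1, y1, x2, y2] box format and the 0.5 IoU threshold are just illustrative assumptions, not any particular model's code):

    import numpy as np

    def nms(boxes, scores, iou_threshold=0.5):
        """Greedy NMS: boxes is an (N, 4) array of [x1, y1, x2, y2],
        scores is an (N,) array of confidences; returns kept indices."""
        order = scores.argsort()[::-1]  # highest-scoring proposals first
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(int(i))
            # Overlap (IoU) of the kept box with the remaining candidates
            x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
            y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
            x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
            y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
            inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
            area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
            areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                    (boxes[order[1:], 3] - boxes[order[1:], 1])
            iou = inter / (area_i + areas - inter)
            # Drop candidates that overlap the kept box too much
            order = order[1:][iou <= iou_threshold]
        return keep

    # Two near-duplicate proposals collapse to one; the distant box survives.
    boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
    scores = np.array([0.9, 0.8, 0.7])
    print(nms(boxes, scores))  # [0, 2]

Even then, you only get down to one box per high-confidence cluster, not the single, precisely localized answer an end user actually needs.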
It's a reference to Steve Jobs's infamous "You're holding it wrong" response[1] to complaints about the iPhone 4's signal failing when held in a certain manner.
I don't feel like "getting the overall meaning of what you said" and "100% accurate voice transcription" are the same problem, and comparing the two isn't fair. When I speak to you in a thick accent, it's OK if you only understand 1 out of 3 words, because human-to-human communication is lossy and able to deal with misunderstood, misheard, or completely unintelligible data points. Transcription requires 100% accuracy because you want the written word to be exactly the same as the words that come out of your mouth. This is a much higher bar and one that human-to-human speech rarely achieves.
It's hard to tell these days. Many people today fully accept the idea that humans should adapt themselves to existing machines and technologies, rather than design or adapt those machines and technologies to human needs.