
This level of hype reminds me of the AI winter. Concerned that public interest hits a peak and then a few months later disillusion sets in and AI becomes a discredited failure in the public's eye, since even rapid progress moves slower than an election or typical news cycle.


AI Winter was not the result of changing public interest.

It was lost interest from investors and government.

All that money poured into AI research produced few rewards. There were expert systems that worked well and became profitable businesses, but otherwise there was little to show. In retrospect I think it was a good idea to adjust the money to match the results and wait until computer scientists came up with new ideas.

The current AI boom is the result of the 'Canadian mafia' diligently working and actually producing results, combined with faster computing, especially GPGPU.

Unless we get a constant stream of new ideas that build on the current ones, interest and investment should shrink once most of the benefits have materialized.


>There were expert systems that worked well and they became profitable businesses, but otherwise there was little to show.

One could similarly say about our time: "there were neural network applications that worked well, but otherwise there was little to show". What is the fundamental difference between what is going on now and what was happening before the previous AI winter?

I feel that people constantly misrepresent how impressive expert systems seemed back in their heyday. They had a lot of practical applications and they could do some very cool tricks.

Interestingly, the highly impressive accomplishments of SVMs, random forests and boosting went mostly unnoticed, precisely because of the AI Winter.


At this point, I really don't think our government has any idea how to deal with what is coming.


Our government doesn't know how to deal with what happened a couple of decades ago, technologically speaking; a couple of decades at least.


Well, except for those pesky NSA and DARPA agencies, to name just two of many, that have access to technology you haven't even dreamed of yet.

Congress might be full of idiots or smart people trying to make you believe they're idiots, but don't for a moment think the federal government as a whole is technologically stunted.


Certainly not, but consider the level of dysfunction and complete lack of interdepartmental cooperation. We have the NSA actively hacking other nation-states and our own private sector, and then we have an FBI that resets an iCloud password, preventing them from getting a backup of data they desperately needed.


The NSA hacks anyone who seems interesting. But that, arguably, is their job. As they see it, anyway. The FBI isn't so high-tech, for sure. But they get help, eventually.


It's much harder to write laws when the state of technology keeps changing day by day, and harder still to stay technologically literate about those changes.


I used to work with these agencies and the cool stuff almost never sees the light of day. Even within the well-funded agencies, really breakthrough stuff almost never makes it to the people in the building, let alone to the public. So it doesn't really matter what they are doing.


You're right, and I should have been clear that I meant the legislative branch in particular.


It is foolhardy to underestimate the abilities of the most powerful organization on the planet.


A 7' tall simpleton with enormous strength is dangerous, and powerful. It would be unwise to underestimate them, but it is always unwise to misinterpret the source and nature of that power.


Frankly, I don't think our government really has any idea how to deal with what is actually happening right now.


Yes they do: shoot, jail, or coerce everyone capable of producing advanced AI. Problem solved, if you're assuming that advanced AI is sufficiently dangerous that not making any at all is a better idea than taking a risk.

And governments will always want to avoid risks, especially risks that knock them out of their monopolies on force and economic power.


That won't happen because the industry controls our governments too much. There is a lot of value to be produced before the algorithms become really dangerous.

Moreover, controlling AI research is even harder than controlling nuclear research. Creating a technological disadvantage relative to rogue states without such regulations does not seem like a good idea.


It's a self-fulfilling prophecy - we're now at the 6th season of hearing "winter is coming", it's bound to happen any time now.


Gambler's fallacy.


I haven't read that but I assume it goes along the lines of...

Gambler: "I already got five 6's. I can't possibly get another one. That's a statistical improbability!"

Wrong assumption: that past dice rolls affect future rolls, whereas dice rolls are independent. It's improbable to get six 6's in a row, but GIVEN that you already have five 6's, the chance of a sixth is just 1/6.

Am I right?


There's also a reverse version with the same mistaken assumption.

Gambler: "I already got five 6's. I must be on a roll! I'll surely get another one."


The correct assumption to make in this case would be that the dice are loaded. Same decision, different reasoning.


Well, getting five 6's in a row is 1/7776, so I suppose it is debatable whether one could establish that the dice are loaded from such a small sample.
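The arithmetic above is easy to check: the exact probability follows from independence, and a quick simulation confirms that the next roll is still 1/6 regardless of any streak (the trial count and seed here are arbitrary, just a sketch):

```python
import random

# Exact probability of rolling five 6's in a row with a fair die
p_five = (1 / 6) ** 5
print(f"P(five 6's) = 1/{round(1 / p_five)}")  # 1/7776

# Because rolls are independent, "the next roll after five 6's"
# is just an ordinary roll: P(6) = 1/6, streak or no streak.
random.seed(0)
trials = 200_000
hits = sum(random.randint(1, 6) == 6 for _ in range(trials))
print(f"P(6 on next roll) ~ {hits / trials:.3f}")  # close to 1/6 ~ 0.167
```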


Hot hand fallacy


woosh


Yes. Where was the White House tech group (OSTP) during the Apple & FBI encryption debate? Silent!

I won't hold my breath for them to produce anything of value here.


I am optimistic; I don't think an AI winter is coming this time.

We have come close to solving image and speech recognition in the past five years. Once that work moves out of academia into real applications, the amount of disruption to society will be hard to imagine.


We've made impressive progress, but even in computer vision there is still a lot to do. For example, it's great that we can recognize that certain objects are in a picture, but a lot of real-world applications depend on their exact location, e.g. image segmentation. Current state-of-the-art models generate hundreds of similar object proposals, which cannot realistically be narrowed down to a single one to present to a user in an application.
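For what it's worth, the standard heuristic for collapsing those hundreds of overlapping proposals is non-maximum suppression (NMS); it reduces duplicates but still leaves a set, not the single answer an application wants. A minimal sketch (boxes, scores, and threshold here are illustrative, not from any particular model):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box, drop any remaining box that
    overlaps it by more than `thresh`, then repeat on the rest."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep

# Three near-duplicate proposals around one object, plus one distinct box
boxes = [(10, 10, 50, 50), (12, 11, 52, 49), (11, 9, 49, 51), (100, 100, 140, 140)]
scores = [0.9, 0.75, 0.6, 0.8]
print(nms(boxes, scores))  # -> [0, 3]
```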


Siri mis-hears about every other sentence I send her, so I'd say speech recognition is far from solved.

But we are getting closer.


You're saying it wrong.


Maybe, but good chance that a human would still understand what he means. So there's still a long way to go for AI.


It's a reference to Steve Jobs's infamous "You're holding it wrong" response[1] to complaints about the iPhone 4's signal dropping when the phone was held in a certain manner.

[1]: http://www.engadget.com/2010/06/24/apple-responds-over-iphon...


Except, even in that link you've attached, it was never said.

The actual quote is, "Just avoid holding it in that way" which is different.


I don't feel like "getting the overall meaning of what you said" and "100% accurate voice transcription" are the same problem, and comparing the two isn't fair. When I speak to you in a thick accent, it's OK if you only understand 1 out of 3 words, because human-to-human communication is lossy and able to deal with misunderstood, misheard, or completely unintelligible data points. Transcription requires 100% accuracy, because you want the written word to be exactly the same as the words that come out of your mouth. This is a much higher bar, and one that human-to-human speech rarely achieves.


I think he was being sarcastic.


It's hard to tell these days. Many people today fully accept the idea that humans should adapt themselves to existing machines and technologies, rather than designing and adapting those machines and technologies to human needs.


I should have put it in quotes, but it's too late to edit it.


Try Google Docs voice typing... it works like magic.

And Apple is not really good at machine learning either.


Google docs voice typing, which I've tried, has similar success rate for me as Siri.


Don't talk with your mouth full ( which was actually suggested to me by some MS speech software around '00 ).



