This level of hype reminds me of the AI winter. I'm concerned that public interest will hit a peak and then, a few months later, disillusionment will set in and AI will become a discredited failure in the public's eye, since even rapid progress moves more slowly than an election or typical news cycle.
The AI winter was not the result of changing public interest.
It was the result of investors and government losing interest.
All that money poured into AI research produced little reward. There were expert systems that worked well and they became profitable businesses, but otherwise there was little to show. In retrospect I think it was a good idea to adjust the money to match the results and wait until computer scientists came up with new ideas.
The current AI boom is the result of the 'Canadian mafia' diligently working and actually producing results, plus faster computing, especially GPGPU.
Unless we get a constant stream of new ideas that build on the current ones, we should expect reduced interest and investment once most of the benefits have materialized.
>There were expert systems that worked well and they became profitable businesses, but otherwise there was little to show.
One could similarly say about our time: "there were neural network applications that worked well, but otherwise there was little to show". What is the fundamental difference between what is going on now and what was happening before the previous AI winter?
I feel that people constantly misrepresent how impressive expert systems seemed back in their heyday. They had a lot of practical applications and they could do some very cool tricks.
Interestingly, the highly impressive accomplishments of SVMs, random forests and boosting went mostly unnoticed, precisely because of the AI Winter.
Well, except for those pesky NSA and DARPA agencies, to name just two of many, that have access to technology you haven't even dreamed of yet.
Congress might be full of idiots or smart people trying to make you believe they're idiots, but don't for a moment think the federal government as a whole is technologically stunted.
Certainly not, but consider the level of dysfunction and complete lack of interdepartmental cooperation. We have the NSA actively hacking other nation-states and our own private sector, and then we have an FBI that resets an iCloud password, preventing them from getting a backup of data they desperately needed.
The NSA hacks anyone who seems interesting. But that, arguably, is their job. As they see it, anyway. The FBI isn't so high-tech, for sure. But they get help, eventually.
I used to work with these agencies, and cool stuff almost never sees the light of day. Even within the well-funded agencies, truly breakthrough stuff almost never makes it to the people in the building, let alone to the public. So it doesn't really matter what they are doing.
A 7' tall simpleton with enormous strength is dangerous, and powerful. It would be unwise to underestimate them, but it would always be unwise to misinterpret the source and nature of that power.
Yes they do: shoot, jail, or coerce everyone capable of producing advanced AI. Problem solved, if you're assuming that advanced AI is sufficiently dangerous that not making any at all is a better idea than taking a risk.
And governments will always want to avoid risks, especially risks that knock them out of their monopolies on force and economic power.
That won't happen because the industry controls our governments too much. There is a lot of value to be produced before the algorithms become really dangerous.
Moreover, controlling AI research is even harder than controlling nuclear research. Creating a technological disadvantage relative to rogue states that lack such regulations does not seem like a good idea.
I haven't read that but I assume it goes along the lines of...
Gambler: "I already got five 6's. I can't possibly get another one. That's a statistical improbability!"
Wrong assumption: that past dice rolls affect future rolls, whereas dice rolls are independent. It's improbable to get six 6's in a row, but GIVEN that you already have five 6's, the probability of getting a sixth is just 1/6.
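A quick back-of-the-envelope check of the arithmetic (plain Python, purely illustrative):

    # Probability of six 6's in a row, computed before any rolls are made
    p_six_in_a_row = (1 / 6) ** 6      # ~0.0000214

    # Probability that the next roll is a 6, given five 6's already rolled;
    # the rolls are independent, so the history doesn't change anything
    p_next_given_five = 1 / 6          # ~0.167

    print(p_six_in_a_row, p_next_given_five)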
I am optimistic that an AI winter isn't coming this time.
We have almost solved image and speech recognition in the past five years. Once that work moves out of academia into real applications, the amount of disruption to society is pretty hard to imagine.
We've made impressive progress, but even in computer vision there is still a lot to do. For example, it's great that we can recognize that certain objects are in a picture, but a lot of real-world applications depend on the exact location, e.g. image segmentation. Current state-of-the-art models generate hundreds of similar object proposals which cannot realistically be narrowed down to a single one to present to a user in an application.
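To make concrete what "narrowing down proposals" usually involves: the standard pruning step is greedy non-maximum suppression, roughly like the sketch below (boxes are (x1, y1, x2, y2) with a score; the 0.5 threshold is just a typical illustrative value, not taken from any particular model).

    def iou(a, b):
        # Intersection-over-union of two boxes given as (x1, y1, x2, y2)
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    def nms(proposals, iou_threshold=0.5):
        # Greedily keep the highest-scoring box, drop boxes that overlap it too much
        kept = []
        for box, score in sorted(proposals, key=lambda p: -p[1]):
            if all(iou(box, k) < iou_threshold for k, _ in kept):
                kept.append((box, score))
        return kept

Even after suppression, though, you typically still end up with several plausible boxes per object rather than the single clean answer an end-user application wants.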
It's a reference to Steve Jobs' infamous "You're holding it wrong" response[1] to complaints about the iPhone 4's signal failing when held in a certain manner.
I don't feel like "getting the overall meaning of what you said" and "100% accurate voice transcription" are the same problem, and comparing the two isn't fair. When I speak to you in a thick accent, it's OK if you only understand 1 out of 3 words, because human-to-human communication is lossy and able to deal with misunderstood, misheard, or completely unintelligible data points. Transcription requires 100% accuracy because you want the written word to be exactly the same as the words that come out of your mouth. This is a much higher bar and one that human-to-human speech rarely achieves.
It's hard to tell these days. Many people today fully accept the idea that humans should adapt themselves to existing machines and technologies, rather than design and adapt those machines and technologies to human needs.
From the press release: "In education, AI has the potential to help teachers customize instruction for each student’s needs."
Does anybody actually do that? Most "online education" still seems to be canned lectures. There was work on this from the 1960s to the 1990s, but efforts seem to have stalled.[1] There are drill-and-practice systems, but they're really just workbooks with automatic scoring.
There's a hip startup downtown working on something just like that.[0]
Will these systems be any better than an automatic scoring system with spaced repetition? Well. More fundamentally, what are a student's needs? Sesame Street and other organizations[1] have known for a long time that the most "engaged" learners also happen to be having fun.
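For what it's worth, "automatic scoring with spaced repetition" is a pretty simple mechanism; a toy Leitner-box scheduler (intervals and field names invented for illustration) is roughly:

    from datetime import date, timedelta

    INTERVALS = [1, 2, 4, 8, 16]   # days until the next review, per box

    def review(card, correct, today=None):
        # Correct answers move the card up a box (longer interval),
        # incorrect answers send it back to box 0 (review tomorrow).
        today = today or date.today()
        card["box"] = min(card["box"] + 1, len(INTERVALS) - 1) if correct else 0
        card["due"] = today + timedelta(days=INTERVALS[card["box"]])
        return card

    card = {"question": "3 x 7?", "box": 0, "due": date.today()}
    review(card, correct=True)     # now in box 1, due again in 2 days

Whether an "intelligent tutor" actually beats that baseline is exactly the question.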
The only particularly controversial thing in that quote is the word teachers. Left relatively unsupervised, the kids today will voraciously seek YouTube videos to teach themselves how to make a Turing machine in Minecraft. What an intelligent tutor really needs to be able to do is pay attention to what a student is curious about. Ubiquitous sensors will probably play into some of the new efforts. But the biggest leaps will come from systems that help kids learn from each other, together.
> Left relatively unsupervised, the kids today will voraciously seek YouTube videos to teach themselves how to make a Turing machine in Minecraft
I am currently teaching middle school and high school, and I think nothing is further from the truth. A few kids do, maybe most children of HN parents, ... but most kids don't care at all about programming, science, or building anything.
It's a lot harder to do differentiation than it sounds. After 10 minutes of Hour of Code, many are bored out of their minds, say they don't like video games, and want to do something else (act, paint _without code_, do sports, take selfies, gossip, read a novel, etc.).
I don't mean to say that every child shows a natural interest in programming. But I didn't have access to YouTube 30 years ago when I was first learning to program. And even with the resources available, we were the privileged few who were able to pursue our curiosity.
Teenagers congregated in the same place, away from their parents? That's the age when humans tend to get much more curious about other humans. You could have them code on paper, and reason with each other while looking at each other.
Yes, it's happening, but inexpensive tools aren't widely available or integrated into a lot of content, and content isn't yet being widely developed with dynamic delivery in mind. There needs to be work on how 'teachers customize instruction' that challenges widely used instructional design models.
There's a lot of work being done in places where the pockets are deeper (military, simulations, medicine). Recently, standards (xAPI, Caliper) have started to emerge to enable the decoupling of content, content delivery, and interaction (think MVC pattern) and to enable pervasive, multi-modal learning activity tracking.
With Khan Academy, I recall they tried exposing some measure of competency on different aspects of a subject based on their testing, so that as the interacting human / 'teacher' you could help the student more specifically and not waste time drilling what they already know, and especially not waste time on canned lectures done better elsewhere, which everyone can watch as homework.
A fun if unrealistic alternative, have the professor move fast enough to give the illusion of individual clones for each student: https://youtu.be/ZJy8qH8Fw5s
Siyavula [0] is making big advances in this area for maths and science high school education in South Africa. Their intelligent practice platform uses machine learning to pick the 'best' exercise for individuals when they practice to ensure that everyone gets the optimal difficulty practice question.
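I have no idea what Siyavula actually does under the hood, but a toy version of "pick the optimal-difficulty question" can be surprisingly small: an Elo-style skill estimate plus a nearest-difficulty lookup (every name and number below is invented for illustration).

    def update_skill(skill, difficulty, correct, k=0.1):
        # Elo-style update: nudge the skill estimate toward the observed outcome
        expected = 1 / (1 + 10 ** (difficulty - skill))
        return skill + k * ((1 if correct else 0) - expected)

    def pick_exercise(skill, exercises):
        # Choose the exercise whose estimated difficulty is closest to the skill
        return min(exercises, key=lambda e: abs(e["difficulty"] - skill))

    exercises = [
        {"id": "frac-01", "difficulty": -1.0},
        {"id": "frac-02", "difficulty": 0.0},
        {"id": "frac-03", "difficulty": 1.5},
    ]
    skill = 0.2
    nxt = pick_exercise(skill, exercises)                    # "frac-02"
    skill = update_skill(skill, nxt["difficulty"], correct=True)

The real systems presumably add a lot on top (forgetting, topic coverage, avoiding repeats), but the core loop is estimate, select, update.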
With all the people scared of AI, I can imagine that if we replaced politicians and government officials with AI in just a small part of the country, then after the initial surprise, satisfaction would be through the roof within weeks and we would have AI running the country.
Even the best AI in the world can't overcome the fact that different interest groups in a country often have orthogonal demands. It might just become better at lying than current politicians.
For software-based AI to be successful at what it does, it must compete. If it does not compete well, it will suck, and the government will suck just as badly. Government, at least the way we've been thinking about it thus far, will have to change for AI to be good at it.
Competition isn't necessary. In fact, competition is really just an instance of optimizing an objective function that is defined in terms of "relative advantage". If the objective function is some measure of citizen well-being AI would be just as successful.
Government is governance of a society, which itself is alive and completely dynamic. Our idea of "well being" is constantly changing and growing. Unless the AI decides otherwise, of course.
> Our idea of "well being" is constantly changing and growing
In the meantime we still have the same basic needs. This is an attempt to map out what they are: subsistence, protection, affection, understanding, participation, leisure, creation, identity and freedom.
Maybe not all governance is possible by algorithm but much of it is.
For instance, where do we widen a road? Absolutely an algorithmic problem. But I suspect sometimes these types of decisions have more to do with contributions, political favors, relations, etc.
Solving even part of this type of friction in society would go a long way toward improving life for the general population.
Bring on the A.I. I'd rather be governed by a machine than a party boss and a lobbyist any day.
>For instance, where do we widen a road? Absolutely an algorithmic problem.
Not to the people living on either side of it who stand to lose their homes. "Computer says yes" would be a political nightmare.
As long as you have people making the decisions you can have the comforting illusion that it may be possible to make them change their minds.
Replace the people with an AI and the comforting illusion disappears.
You'll have people taking to the streets with pitchforks to protest against tyranny in no time - even if the AI is much better at making intelligent decisions.
So if we don't widen roads because people might have to move then we get more accidents. Or the city can't grow. Or people sit in traffic for extra hours a day.
People's needs need to be taken into account, yes, that is what it is all about in the end.
But a few people's inconvenience or greed outweighing general progress is not a long-term formula for a successful society.
The point is we already live in that dystopian future. But it isn't machines running things. It's people fighting over scraps. I firmly believe the machines will do better.
Where to widen a road ceases to be an algorithmic problem when the algorithm decides to widen it into a neighborhood's front yards, and the homeowners take issue with that. It then becomes a political problem which AI is no better at solving than humans.
You are preaching to the choir here. :) I subscribe to the idea of a global consciousness that we are not quite aware of, yet exist inside of as symbiotic guests. If it becomes conscious on a human level, I would well imagine it would advise us with Wisdom as opposed to dictating behavior.
The reason why I believe this is complicated, but it's based on the assumption that Aesthetic is somehow tied to discrimination (security) and thus can't be forced upon reality without affecting quality.
For a software-based AI to be successful, it must be good at convincing people to go along with it and balancing sometimes paradoxical concerns from the public. I think efficiency is the least of its worries.
I think that if given a chance, current AI would be terrifyingly effective at maximizing public support while minimizing accountability. It seems like the sort of big-data problem that's a perfect fit for modern machine learning methods.
Things only appear to be paradoxical to humans. We're subject to being double bound by those with dissonance. Arguing for legislation that expects to eliminate all of X because X causes suffering is illogical, but it's easy to sell if you can put people in a double bind and keep them from being against the legislation that feeds your family.
Software locks up when you double bind it, and thus uses more energy than a human would. Both have their advantages and disadvantages, of course.
AI is, at the end of the day, just software. We have the intellectual tools to enable us to make high-quality software and systems, stemming from industrial experience stretching back three quarters of a century. We need to free that knowledge, ossified in dozens of mil-std and similar institutional documents: re-institutionalising it in the public domain, making free tools and systems available to support the (public) quality processes that a distributed, heterogeneous, partially open-source future AI requires.
The folks here commenting about entrenched power structures should remember that Ed Felten (who put his name on the release) is not a career bureaucrat at all.
True. As FTC CTO, his able past work in privacy and data security was certainly relevant to traditional FTC interests. As Deputy US CTO, I think his agenda has broadened.
There's already a ton of gov't interest/activity on surveillance and security issues, mostly via the military services and adjuncts. I assume this initiative ain't more of that.
This announcement seems to presage greater federal gov't interest and involvement in how computing might be used toward less defensive/clandestine ends, especially in governance (social good), control, and safety, as well as legal implications -- adding AI as the means to serving gov't ends, so to speak.
If so, great. I'd love to see greater OPEN use of computing in government, especially in gathering unbiased metrics and making better use of them to evaluate the outcome of changes in policy.
I suspect we've already replaced 100,000+ jobs in call centers with AI -- you know the menu system that you get before you talk to somebody. (You might not think of that as AI now, but that's "success" -- 20 years ago it undoubtedly was AI.)
I'm not all that excited about interacting with more AI.
> You might not think of that as AI now, but that's "success" -- 20 years ago it undoubtedly was AI.
As Dijkstra said, asking whether a machine can think is like asking whether a submarine can swim. And in the real world, nobody cares whether it's swimming or not, they just care whether it's solving their problem. Which all of these little bits and pieces of human-replacing technology most certainly are doing.
We had phone menu systems 20 years ago and they most certainly weren't "AI". I'm actually not sure they were considered as such at any point since DTMF was devised.
I'm not talking about just a menu system; I'm talking about the system that asks you what you're calling about, does voice recognition and NLP, and then directs you to the right part of the menu.
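Leaving the speech-to-text aside, the routing step is basically intent classification. A bare-bones keyword version (department names and keywords invented for illustration; real systems use trained classifiers) might look like:

    ROUTES = {
        "billing": {"bill", "charge", "payment", "invoice"},
        "support": {"broken", "error", "help", "outage"},
        "sales":   {"buy", "upgrade", "plan", "pricing"},
    }

    def route_call(transcript):
        # Send the caller to whichever department shares the most keywords
        # with what they said; fall back to a human operator on no match.
        words = set(transcript.lower().split())
        scores = {dept: len(words & kws) for dept, kws in ROUTES.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "operator"

    route_call("My last bill has a charge I don't recognize")   # -> "billing"

Crude as it is, that kind of routing is exactly the sort of thing that quietly replaced a lot of tier-one call-center work.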
I really hate politico speak. What does this really mean?
"to spur public dialogue on artificial intelligence and machine learning and identify challenges and opportunities related to this emerging technology."
It's quite a clear statement (though in a niche jargon) - such language means that they intend to dedicate resources to organizing some events/discussion panels on the topic, and possibly even some research grants.
The only unclear thing about politico-speak generally is whether they will actually do X, or whether they just want to publicly claim they'll do X for PR or voter support, with no intention of actually doing it.
It means nothing. The government (21st-century US gov.) is so poorly structured to accomplish or influence anything regarding growth that it should be taken as an ROI/sales-pitch invitation. They are saying: in the future we will have a bucket of money to give to our friends... please be our friends so you can try to build something. The first implementation of Obamacare should be a clear indication of this. However, I'm sure there is true investment in interesting technology on the defense side of the fence. Probably more money, and way more interesting problems.
The idea of a computer deciding whether someone is guilty or not is a scary prospect. This is a bandwagon I'm not so sure the government should be so eager to jump on.
Laws are rules but they are not enough on their own to make decisions. They are written with the implicit cultural background and the intent of the humans who wrote them. Plenty of case law is around determining what the authors actually meant when they made a law.
As long as humans write the law, and that there is the notion of a sovereign people, human judges should decide how to interpret and apply the law.
Not exactly. The real world is far too complex to cover all edge cases, which is why we have human judges to evaluate cases on an individual basis. Leaving these decisions up to a computer is essentially subjecting ourselves to rule by computers, which is incredibly concerning.
Given election cycles and how expensive it will be to deal with this problem, no one will address it until the effects are strongly felt by the average voter.