Preparing for the Future of Artificial Intelligence (whitehouse.gov)
174 points by apsec112 on May 4, 2016 | 90 comments


This level of hype reminds me of the AI winter. I'm concerned that public interest will hit a peak and then, a few months later, disillusionment will set in and AI will become a discredited failure in the public's eye, since even rapid progress moves slower than an election or a typical news cycle.


The AI winter was not the result of changing public interest.

It was the result of lost interest from investors and government.

All that money poured into AI research produced little reward. There were expert systems that worked well and became profitable businesses, but otherwise there was little to show. In retrospect, I think it was a good idea to adjust the money to match the results and wait until computer scientists came up with new ideas.

The current AI boom is the result of the 'Canadian mafia' diligently working and actually producing results, plus faster computing, especially GPGPU.

Unless we get a constant stream of new ideas that build on the current ones, we should expect reduced interest and investment once most of the benefits have materialized.


>There were expert systems that worked well and became profitable businesses, but otherwise there was little to show.

One could similarly say about our time: "there were neural network applications that worked well, but otherwise there was little to show". What is the fundamental difference between what is going on now and what was happening before the previous AI winter?

I feel that people constantly misrepresent how impressive expert systems seemed back in their heyday. They had a lot of practical applications and they could do some very cool tricks.

Interestingly, the highly impressive accomplishments of SVMs, random forests and boosting went mostly unnoticed, precisely because of the AI Winter.


At this point, I really don't think our government has any idea how to deal with what is coming.


Our government doesn't know how to deal, technologically speaking, with what happened a couple of decades ago; at least a couple of decades.


Well, except for those pesky NSA and DARPA agencies, to name just two of many, that have access to technology you haven't even dreamed of yet.

Congress might be full of idiots or smart people trying to make you believe they're idiots, but don't for a moment think the federal government as a whole is technologically stunted.


Certainly not, but consider the level of dysfunction and the complete lack of interdepartmental cooperation. We have the NSA actively hacking other nation-states and our own private sector, and then we have an FBI that resets an iCloud password, preventing them from getting a backup of data they desperately needed.


The NSA hacks anyone who seems interesting. But that, arguably, is their job. As they see it, anyway. The FBI isn't so high-tech, for sure. But they get help, eventually.


It's much harder to write laws when the state of technology keeps changing day by day, and harder still to stay technologically literate about those changes.


I used to work with these agencies, and cool stuff almost never sees the light of day. Even within the well-funded agencies, truly breakthrough work almost never makes it to the people in the building, let alone to the public. So it doesn't really matter what they are doing.


You're right, and I should have been clear that I meant the legislative branch in particular.


It is foolhardy to underestimate the abilities of the most powerful organization on the planet.


A 7-foot-tall simpleton with enormous strength is dangerous, and powerful. It would be unwise to underestimate them, but it would be just as unwise to misinterpret the source and nature of that power.


Frankly, I don't think our government really has any idea how to deal with what is actually happening right now.


Yes they do: shoot, jail, or coerce everyone capable of producing advanced AI. Problem solved, if you're assuming that advanced AI is sufficiently dangerous that not making any at all is a better idea than taking a risk.

And governments will always want to avoid risks, especially risks that knock them out of their monopolies on force and economic power.


That won't happen because the industry controls our governments too much. There is a lot of value to be produced before the algorithms become really dangerous.

Moreover, controlling AI research is even harder than controlling nuclear research. Creating a technological disadvantage compared to rogue states without such regulations does not seem like a good idea.


It's a self-fulfilling prophecy - we're now at the 6th season of hearing "winter is coming", it's bound to happen any time now.


Gambler's fallacy.


I haven't read that but I assume it goes along the lines of...

Gambler: "I already got five 6's. I can't possibly get another one. That's a statistical improbability!"

Wrong assumption: that past dice rolls affect future rolls, whereas dice rolls are independent. It's improbable to get six 6's in a row, but GIVEN that you already have five sixes, getting a sixth is just 1/6.

Am I right?
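
A quick simulation makes the independence concrete (a sketch of my own, assuming a fair six-sided die):

  # Estimate P(sixth roll is a six | first five rolls were sixes) by simulation.
  # For a fair die this comes out near 1/6 no matter how long the streak.
  import random

  streaks = 0    # sequences whose first five rolls were all sixes
  sixth_six = 0  # ...and whose sixth roll was also a six
  for _ in range(10_000_000):  # slow but straightforward
      rolls = [random.randint(1, 6) for _ in range(6)]
      if all(r == 6 for r in rolls[:5]):
          streaks += 1
          if rolls[5] == 6:
              sixth_six += 1
  print(sixth_six / streaks)   # ~0.167, i.e. about 1/6

The conditional probability stays 1/6 however long the streak, which is exactly the independence described above.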


There's also a reverse version with the same mistaken assumption.

Gambler: "I already got five 6's. I must be on a roll! I'll surely get another one."


The correct assumption to make in this case would be that the die is loaded. Same decision, different reasoning.


Well, the chance of getting five 6's in a row is (1/6)^5 = 1/7776, so I suppose it is debatable whether one could establish that the die is loaded from such a small sample.
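
A back-of-the-envelope Bayes calculation shows why it's debatable (the prior here is made up purely for illustration):

  # How believable is "loaded" after five sixes? It depends entirely on the prior.
  p_loaded = 1 / 1000            # made-up prior: 1 die in 1000 always rolls six
  p_fair = 1 - p_loaded

  like_fair = (1 / 6) ** 5       # = 1/7776, five sixes from a fair die
  like_loaded = 1.0              # certain, for a die that always rolls six

  # Bayes' rule: posterior probability the die is loaded.
  posterior = like_loaded * p_loaded / (like_loaded * p_loaded + like_fair * p_fair)
  print(posterior)               # ~0.886 with these made-up numbers

So five sixes can shift belief a lot, but only given an assumption about how common loaded dice are in the first place.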


Hot hand fallacy


woosh


Yes. Where was the White House tech group (OSTP) during the Apple & FBI encryption debate? Silent!

I won't hold my breath for them to produce anything of value here.


I am optimistic; an AI winter is not coming this time.

We have nearly solved image/speech recognition in the past 5 years. Once that work moves out of academia into real applications, the amount of disruption to society will be hard to imagine.


We've made impressive progress, but even in computer vision there is still a lot to do. For example, it's great that we can recognize that certain objects are in a picture, but a lot of real-world applications depend on exact location, e.g. image segmentation. Current state-of-the-art models generate hundreds of similar object proposals, which cannot realistically be narrowed down to a single one to present to a user in an application.
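
For what it's worth, the standard trick for collapsing duplicate proposals is non-maximum suppression. A minimal sketch (my own illustration, not any particular library's API):

  # Non-maximum suppression: keep the highest-scoring box, drop remaining
  # proposals that overlap it too much, and repeat until none are left.
  def iou(a, b):
      # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
      x1, y1 = max(a[0], b[0]), max(a[1], b[1])
      x2, y2 = min(a[2], b[2]), min(a[3], b[3])
      inter = max(0, x2 - x1) * max(0, y2 - y1)
      area_a = (a[2] - a[0]) * (a[3] - a[1])
      area_b = (b[2] - b[0]) * (b[3] - b[1])
      return inter / (area_a + area_b - inter)

  def nms(proposals, threshold=0.5):
      # proposals: list of (box, score); returns the surviving boxes.
      remaining = sorted(proposals, key=lambda p: p[1], reverse=True)
      kept = []
      while remaining:
          best, _ = remaining.pop(0)
          kept.append(best)
          remaining = [(b, s) for b, s in remaining if iou(best, b) < threshold]
      return kept

Even with suppression, though, picking the single right region to show a user is still an open problem, which is the point above.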


Siri mis-hears about every other sentence I send her, so I'd say speech recognition is far from solved.

But we are getting closer.


You're saying it wrong.


Maybe, but there's a good chance that a human would still understand what he means. So there's still a long way to go for AI.


It's a reference to Steve Jobs's infamous "You're holding it wrong" response[1] to complaints about the iPhone 4's signal failing when held in a certain manner.

[1]: http://www.engadget.com/2010/06/24/apple-responds-over-iphon...


Except, as even that link you've attached shows, it was never said.

The actual quote is, "Just avoid holding it in that way", which is different.


I don't feel like "getting the overall meaning of what you said" and "100% accurate voice transcription" are the same problem, and comparing the two isn't fair. When I speak to you in a thick accent, it's OK if you only understand 1 out of 3 words, because human-to-human communication is lossy and able to deal with misunderstood, misheard, or completely unintelligible data points. Transcription requires 100% accuracy, because you want the written word to be exactly the same as the words that come out of your mouth. That is a much higher bar, and one that human-to-human speech rarely achieves.


I think he was being sarcastic.


It's hard to tell these days. Many people today fully accept the idea that humans should adapt themselves to existing machines and technologies, rather than design/adapt those machines and technologies to human needs.


I should have put it in quotes, but it's too late to edit it.


Try Google Docs voice typing... it works like magic.

And Apple is not really good at machine learning either.


Google Docs voice typing, which I've tried, has a similar success rate for me as Siri.


Don't talk with your mouth full (which was actually suggested to me by some MS speech software around '00).


From the press release: "In education, AI has the potential to help teachers customize instruction for each student’s needs."

Does anybody actually do that? Most "online education" still seems to be canned lectures. There was work on this from the 1960s to the 1990s, but efforts seem to have stalled.[1] There are drill-and-practice systems, but they're really just workbooks with automatic scoring.

[1] https://en.wikipedia.org/wiki/Intelligent_tutoring_system


There's a hip startup downtown working on something just like that.[0]

Will these systems be any better than an automatic scoring system with spaced repetition? Well. More fundamentally, what are a student's needs? Sesame Street and other organizations[1] have known for a long time that the most "engaged" learners also happen to be having fun.

The only particularly controversial thing in that quote is the word teachers. Left relatively unsupervised, the kids today will voraciously seek out YouTube videos to teach themselves how to make a Turing machine in Minecraft. What an intelligent tutor really needs to be able to do is pay attention to what a student is curious about. Ubiquitous sensors will probably play into some of the new efforts. But the biggest leaps will come from systems that help kids learn from each other, together.

[0] https://www.youtube.com/watch?v=1lG4xBoEgZo

[1] http://www.instituteofplay.org
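
On the spaced-repetition point: the schedulers themselves are simple. Here's a simplified SM-2-style sketch from memory (not the exact SuperMemo/Anki implementation):

  # Simplified SM-2-style scheduler: given a 0-5 self-rating of recall
  # quality, compute the next review interval and the updated ease factor.
  def next_review(quality, interval_days, ease):
      if quality < 3:
          return 1, ease                     # failed: see it again tomorrow
      # Ease drifts up for easy recalls, down for hard ones (floor at 1.3).
      ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
      if interval_days == 0:
          return 1, ease                     # first successful review
      if interval_days == 1:
          return 6, ease                     # second successful review
      return round(interval_days * ease), ease

  print(next_review(quality=4, interval_days=6, ease=2.5))  # (15, 2.5)

The hard part isn't the scheduling formula; it's deciding what counts as a quality signal for open-ended learning rather than flashcards.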


> Left relatively unsupervised, the kids today will voraciously seek out YouTube videos to teach themselves how to make a Turing machine in Minecraft

I am currently teaching middle school and high school, and I think nothing could be further from the truth. A few kids do (maybe most children of HN parents), but most kids don't care at all about programming, science, or building anything.

It's a lot harder to do differentiation than it sounds. After 10 minutes of Hour of Code, many are bored out of their minds, say they don't like video games, and want to do something else (acting, painting _without code_, sports, taking selfies, gossiping, reading a novel, etc.).


I don't mean to say that every child shows a natural interest in programming. But I didn't have access to YouTube 30 years ago when I was first learning to program. And even with the resources available, we were the privileged few who were able to pursue our curiosity.

Teenagers congregated in the same place, away from their parents? That's the age when humans tend to get much more curious about other humans. You could have them code on paper, and reason with each other while looking at each other.


Yes, it's happening, but inexpensive tools aren't widely available or integrated into much content, and content isn't yet being widely developed with dynamic delivery in mind. There needs to be work on how 'teachers customize instruction' that challenges widely used instructional design models.

There's a lot of work being done in places where the pockets are deeper (military, simulations, medicine). Recently, standards (xAPI, Caliper) have started to emerge to enable the decoupling of content, content delivery, and interaction (think MVC pattern) and to enable pervasive, multi-modal learning-activity tracking.


With Khan Academy, I recall they tried exposing some measure of competency on different aspects of a subject, based on their testing, so that as the interacting human 'teacher' you could help the student more specifically and not waste time drilling what they already know, and especially not waste time on canned lectures done better elsewhere, which everyone can watch as homework.

A fun if unrealistic alternative, have the professor move fast enough to give the illusion of individual clones for each student: https://youtu.be/ZJy8qH8Fw5s


Siyavula [0] is making big advances in this area for maths and science high-school education in South Africa. Their intelligent practice platform uses machine learning to pick the 'best' exercise for individuals when they practice, to ensure that everyone gets a practice question of optimal difficulty.

[0] http://www.siyavula.com/


With all the people scared of AI, I can imagine that if we replaced politicians and government officials with AI in just a small part of the country, then after the initial surprise, within weeks satisfaction would be through the roof and we would have AI running the country.


Even the best AI in the world can't overcome the fact that different interest groups in a country often have orthogonal demands. It might just become better at lying than current politicians.


For software-based AI to be successful at what it does, it must compete. If it does not compete well, it will suck, and the government will suck just as badly. Government, at least the way we've been thinking about it thus far, will have to change for AI to be good at it.


Competition isn't necessary. In fact, competition is really just an instance of optimizing an objective function that is defined in terms of "relative advantage". If the objective function is instead some measure of citizen well-being, AI would be just as successful.
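
A toy sketch of that point (the objective here is entirely made up for illustration):

  # The optimizer doesn't care whether the objective encodes "relative
  # advantage" or "citizen well-being" -- it just climbs the function.
  def wellbeing(tax_rate):
      # Made-up welfare curve: services help, excess taxation hurts.
      return tax_rate * (1.0 - tax_rate)

  # Crude grid search standing in for a real optimizer.
  best = max((r / 1000 for r in range(1001)), key=wellbeing)
  print(best)  # 0.5 for this toy curve

The hard part, of course, is that nobody agrees on the welfare function, which is rather the objection above.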


Government is the governance of a society, which is itself alive and completely dynamic. Our idea of "well being" is constantly changing and growing. Unless the AI decides otherwise, of course.


> Our idea of "well being" is constantly changing and growing

In the meantime we still have the same basic needs. This is an attempt to map out what they are: subsistence, protection, affection, understanding, participation, leisure, creation, identity and freedom.

https://en.wikipedia.org/wiki/Fundamental_human_needs


Maybe not all governance is possible by algorithm but much of it is.

For instance, where do we widen a road? Absolutely an algorithmic problem. But I suspect sometimes these types of decisions have more to do with contributions, political favors, relations, etc.

Solving even part of this type of friction in society would go a long way toward improving life for the general population.

Bring on the A.I. I'd rather be governed by a machine than a party boss and a lobbyist any day.


>For instance, where do we widen a road? Absolutely an algorithmic problem.

Not to the people living on either side of it who stand to lose their homes. "Computer says yes" would be a political nightmare.

As long as you have people making the decisions you can have the comforting illusion that it may be possible to make them change their minds.

Replace the people with an AI and the comforting illusion disappears.

You'll have people taking to the streets with pitchforks to protest against tyranny in no time - even if the AI is much better at making intelligent decisions.


So if we don't widen roads because people might have to move, then we get more accidents. Or the city can't grow. Or people sit in traffic for extra hours a day.

People's needs need to be taken into account, yes, that is what it is all about in the end.

But a few people's inconvenience or greed outweighing general progress is not a long-term formula for a successful society.


That sounds like an awesome premise to a dystopian science fiction novel.


The point is we already live in that dystopian future. But it isn't machines running things. It's people fighting over scraps. I firmly believe the machines will do better.


So you understand what is coming, on some level. The question for us is how to best handle the transition.


Where to widen a road ceases to be an algorithmic problem when the algorithm decides to widen it into a neighborhood's front yards, and the homeowners take issue with that. It then becomes a political problem which AI is no better at solving than humans.


There are often hidden agendas in politics. The inscrutability of neural networks and these hidden agendas will make for interesting interactions.



You are preaching to the choir here. :) I subscribe to the idea of a global consciousness that we are not quite aware of, yet exist inside of as symbiotic guests. If it becomes conscious on a human level, I would well imagine it would advise us with wisdom as opposed to dictating behavior.

The reason I believe this is complicated, but it's based on the assumption that aesthetics is somehow tied to discrimination (security) and thus can't be forced upon reality without affecting quality.


For a software-based AI to be successful, it must be good at convincing people to go along with it and at balancing the sometimes paradoxical concerns of the public. I think efficiency is the least of its worries.


I think that if given a chance, current AI would be terrifyingly effective at maximizing public support while minimizing accountability. It seems like the sort of big-data problem that's a perfect fit for modern machine learning methods.


I was just talking to a friend about this. RNNs are rather brilliant in their infancy. Imagine when they grow up.


Things only appear to be paradoxical to humans. We're subject to being double bound by those with dissonance. Arguing for legislation that expects to eliminate all of X because X causes suffering is illogical, but it's easy to sell if you can put people in a double bind and keep them from being against the legislation that feeds your family.

Software locks up when you double bind it, and thus uses more energy than a human would. Both have their advantages and disadvantages, of course.


Competition is unnecessarily burdensome. It just has to converge in less than polynomial time.


AI is, at the end of the day, just software. We have the intellectual tools to make high-quality software and systems, stemming from industrial experience stretching back three quarters of a century. We need to free that knowledge, ossified in dozens of mil-std and similar institutional documents, by re-institutionalising it in the public domain and making free tools and systems available to support the (public) quality processes that a distributed, heterogeneous, partially open-source future AI requires.


The folks here commenting about entrenched power structures should remember that Ed Felten (who put his name on the release) is not a career bureaucrat at all.


True. As the FTC's CTO, he did able work in privacy and data security that was certainly relevant to traditional FTC interests. As Deputy US CTO, I think his agenda has broadened.

There's already a ton of gov't interest/activity on surveillance and security issues, mostly via the military services and adjuncts. I assume this initiative ain't more of that.

This announcement seems to presage greater federal gov't interest and involvement in how computing might be used toward less defensive/clandestine ends, especially in governance (social good), control, and safety, as well as legal implications -- adding AI as a means of serving gov't ends, so to speak.

If so, great. I'd love to see greater OPEN use of computing in government, especially in gathering unbiased metrics and making better use of them to evaluate the outcome of changes in policy.


How wonderful would it be to replace many government jobs with AI ;-)


I suspect we've already replaced 100,000+ jobs in call centers with AI -- you know, the menu system you get before you talk to somebody. (You might not think of that as AI now, but that's "success" -- 20 years ago it undoubtedly was AI.)

I'm not all that excited about interacting with more AI.


> You might not think of that as AI now, but that's "success" -- 20 years ago it undoubtedly was AI.

As Dijkstra said, asking whether a machine can think is like asking whether a submarine can swim. And in the real world, nobody cares whether it's swimming or not, they just care whether it's solving their problem. Which all of these little bits and pieces of human-replacing technology most certainly are doing.


Nah, we just outsourced those jobs to cheaper labor.


We had phone menu systems 20 years ago and they most certainly weren't "AI". I'm actually not sure they were considered as such at any point since DTMF was devised.


I'm not talking about just a menu system; I'm talking about the system that asks you what you're calling about, does voice recognition and NLP, and then directs you to a part of the menu.
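
A bare-bones sketch of what that routing looks like after transcription (hypothetical keywords, not any real IVR product):

  # Toy call routing: transcript -> department. Real systems use trained
  # intent classifiers; keyword overlap is enough to show the idea.
  ROUTES = {
      "billing": {"bill", "payment", "charge", "refund"},
      "tech":    {"broken", "error", "internet", "reset"},
      "sales":   {"buy", "upgrade", "plan", "pricing"},
  }

  def route(transcript):
      words = set(transcript.lower().split())
      scores = {dept: len(words & kws) for dept, kws in ROUTES.items()}
      best = max(scores, key=scores.get)
      return best if scores[best] > 0 else "operator"   # fall back to a human

  print(route("I have a question about a charge on my bill"))  # billing

Crude, but "was it AI?" matters less than whether it gets the caller to the right queue.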


*all jobs


Don't worry, we'll still pay 50%+ of the budget to defense contractors somehow.


I really hate politico speak. What does this really mean?

"to spur public dialogue on artificial intelligence and machine learning and identify challenges and opportunities related to this emerging technology."


It's quite a clear statement (though in niche jargon). Such language means that they intend to dedicate resources to organizing some events/discussion panels on the topic, and possibly even some research grants.

The only unclear thing about politico speak generally is whether they will actually do X, or whether they just want to publicly claim they'll do X for PR or voter support, with no intention of actually doing it.


It means nothing. The government (the 21st-century US government) is so poorly structured to accomplish or influence anything regarding growth that this should be taken as an ROI/sales-pitch invitation. They are saying: in the future we will have a bucket of money to give to our friends... please be our friends so you can try to build something. The first implementation of Obamacare should be a clear indication of this. However, I'm sure there is true investment in interesting technology on the defense side of the fence. Probably more money, and way more interesting problems.


The idea of a computer deciding whether someone is guilty or not is a scary prospect. This is a bandwagon I'm not so sure the government should be so eager to jump on.


A computer program makes decisions based on the set of rules it was programmed with.

We have the law to make humans work exactly the same way. The law is a set of rules that says who is guilty and who isn't. It's just that humans are bad at being objective: http://blogs.discovermagazine.com/notrocketscience/2011/04/1...


Laws are rules, but they are not enough on their own to make decisions. They are written with an implicit cultural background and the intent of the humans who wrote them. Plenty of case law revolves around determining what the authors actually meant when they made a law.

As long as humans write the law and there is the notion of a sovereign people, human judges should decide how to interpret and apply the law.


Justice tends to be harsher on a hungry stomach.

I think we should strive for well written laws that don't require so much interpretation.

http://www.scientificamerican.com/article/lunchtime-leniency...


Not exactly. The real world is far too complex to cover all edge cases, which is why we have human judges to evaluate cases on an individual basis. Leaving these decisions up to a computer is essentially subjecting ourselves to rule by computers, which is incredibly concerning.


Judges don't do any deep analysis on millions of small claims, and could be replaced by automated rules.


They should discuss a universal basic income to counter the pervasive unemployment that AI will bring with it.


Given election cycles and how expensive it will be to deal with this problem, no one will address it until the effects are strongly felt by the average voter.


Love this but they're really not holding one of these in the Bay Area? -_-



