
>"However, the current focus on doing AI research via the gathering of data, the deployment of “deep learning” infrastructure, and the demonstration of systems that mimic certain narrowly-defined human skills — with little in the way of emerging explanatory principles — tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems"

I'd go so far as to say that ML is now at a point where it's basically a mirror image of GOFAI with the exact same issues. The old stumbling block was that symbolic solutions worked well until you ran into an edge case, and everyone recognized that having to program in every edge case makes no sense.

The modern ML problem is that reasoning based on data works fine until you run into an edge case; then the solution is to provide a training example that fixes it. Unlike with GOFAI, though, people apparently haven't noticed yet that this is the same old issue with one more level of indirection. When you get attacked in the forest by a guy in a clown costume with an axe, you don't need to add that as a training input first before you make a run for it.
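(To make that patch-by-data loop concrete, here's a minimal sketch using a toy scikit-learn classifier with made-up data; the features, labels, and "edge case" are hypothetical, purely for illustration.)

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy training set for a made-up two-feature classification task.
    X = np.array([[0.0, 0.0], [1.0, 1.0], [0.2, 0.1], [0.9, 0.8]])
    y = np.array([0, 1, 0, 1])
    model = LogisticRegression().fit(X, y)

    # A hypothetical edge case the deployed model gets wrong.
    edge_case = np.array([[0.95, 0.05]])
    desired_label = 1
    if model.predict(edge_case)[0] != desired_label:
        # The usual "fix": append the edge case to the data and refit,
        # rather than changing any explicit rule.
        X = np.vstack([X, edge_case])
        y = np.append(y, desired_label)
        model = LogisticRegression().fit(X, y)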

There's no agency, liveliness, autonomy, or learning in a dynamic, real-time way in any of the systems we have; they're for the most part just static, 'flat' machines. Honestly, rather than thinking of the current systems as intelligent agents, it's more accurate to think of them as databases that happen to accept natural language as a query interface.



"When you get attacked in the forest by a guy in a clown costume with an axe you don't need to add that as a training input first before you make a run for it."

Sure, because it's already a training input. We'd run because we recognize the axe, the signs of aggression, the horror movie trope of an evil clown, and so forth. We have to teach "stranger danger" to children.

"There's no agency, liveliness, autonomy or learning in a dynamic real-time way to any of the systems we have, they're for the most part just static, 'flat', machines."

Well, that's at least in part because we design them that way. It's more convenient to separate out the "learning" and "doing" parts so we have control over how the network is trained.
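(As a rough illustration of that separation, here's a minimal PyTorch sketch; the architecture and random data are made up, but the pattern is the standard one: weights only change while we explicitly run the training loop, and at deployment the network is frozen.)

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # "Learning" phase: offline, fully under our control.
    x_train = torch.randn(64, 4)          # toy inputs
    y_train = torch.randint(0, 2, (64,))  # toy labels
    model.train()
    for _ in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(x_train), y_train)
        loss.backward()
        optimizer.step()

    # "Doing" phase: the network is frozen; nothing learns at runtime.
    model.eval()
    with torch.no_grad():
        prediction = model(torch.randn(1, 4)).argmax(dim=1)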


>Sure, because it's already a training input

Not in any meaningful sense, no. I can tell you, "if something's fishy about the situation, just leave". You can do this not because of some particular training inputs or examples I give you, but because you have common sense and a sort of personality and intuition for how to behave in the absence of data. If you told that sentence to a state-of-the-art ML model, you'd probably get "what fish?" as an answer.

>Well, that's at least in part because we design them that way

It's mostly because we have no idea how to design them any other way. I think if anyone knew how to build complex agents with rich internal states that have the intent and communication abilities of humans, we'd do that. It's not even really conceivable right now how you could have an ML-type system that can directly adopt high-level concepts dynamically, just by having them communicated to it.


> not in any meaningful sense, no. I can tell you, "if something's fishy about the situation, just leave"

"Fishy" is doing a lot of work in this sentence. How much training went into refining an instinct for what's "fishy"? Do you not agree that everyone has a different view on what's fishy?

> I think if anyone knew how to build complex agents with rich internal states that have the intent and communication abilities of humans we'd do that.

I'm not so sure. There doesn't seem to be much commercial value in having an agent with intent and its own goals, and most AI advances come from commercial entities these days.


"I can tell you, "if something's fishy about the situation, just leave". You can do this not because of some particular training inputs or examples I give you, but because you have common sense and a sort of personality and intuition for how to behave in the absence of data."

Only if I had a baseline to compare the situation to. If you took out all familiar elements, I'd have no way of telling whether a situation was normal or suspicious. My understanding of the word "fishy" is born from 300 thousand hours of training data.

"It's mostly because we have no idea how to design them anyway else. I think if anyone knew how to build complex agents with rich internal states that have the intent and communication abilities of humans we'd do that."

That's a different question. We can build machines that learn autonomously; they just don't have the capability of biological minds.


GOFAI = "good old-fashioned AI", for those not familiar with the acronym


If GOFAI is Weizenbaum's Eliza, then yes.

If GOFAI includes semantic reasoning over real-world concepts modeled, e.g., with thesauri and concept maps, then I think AI research was on the right track but went astray because there was never enough resounding business success to warrant further funding.



