Hacker News

http://www.technologyreview.com/view/518006/how-a-fly-brain-...

I'm not an expert but... Fly brains and human brains already have neural networks for common tasks, like sensing motion, baked into our genetics. Evolution has already trained these networks to an extent. Training a computer-simulated neural network seems to yield similar results to what nature has done.

My personal uninformed opinion is that we'll start to see how much human experience and expression is driven by our brain wiring. That we'll discover how similar we all are to each other based on our wiring. And that we'll find that approximating advanced cognition will be the result of putting that wiring into a computer model -- and having a computer powerful enough to run it.



I think what he means by that is that the AI is only trained against the handwriting itself, but the humans are trained against the quality of the paper, the sound the pen makes, the stress of an exam, and the joy of writing a birthday card. Given the current training parameters, the AI will be oblivious to those and hence will not be able to reflect those experiences in the writing itself. That could be perfectly acceptable for most scenarios (like the handwriting fonts we have today), but it is far from actually human.


Well said. Furthermore, data on the joy of writing a birthday card (to borrow one of your examples) can be useful in other tasks (such as determining what to write).

Typical machine learning problems deal with isolated training sets and isolated problems. This approach seems strange to me; in the case of neural networks, it is somewhat analogous to a newborn child who is deprived of all senses except the limited training data that makes up their world, plus good/bad feedback from the loss gradient. How can one expect this hypothetical newborn to learn any meaningful representation of the world from which our machine learning problems are derived?

I think the first step towards realizing anything like "Hollywood General AI" will be a system that spends an early portion of its existence ingesting a universe of contextual data, before it is presented with a problem to solve (at which point it can make use of seemingly unrelated information to do something like handwriting). Andrew Ng's work on self-taught learning (built on transfer learning) is particularly relevant here, but I think those ideas could be taken a lot further.
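To make the two-stage idea concrete, here is a minimal numpy sketch of self-taught learning in the spirit described above (the data and names are invented for illustration, and PCA stands in for the unsupervised feature learner): first learn a representation from plentiful unlabeled "contextual" data, then solve a small labeled task in that learned feature space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Plentiful unlabeled "contextual" data: points varying along two latent
# directions, embedded in a 20-dimensional ambient space with noise.
basis = rng.normal(size=(2, 20))
unlabeled = rng.normal(size=(500, 2)) @ basis + 0.1 * rng.normal(size=(500, 20))

# Stage 1: unsupervised feature learning (here, PCA via SVD) on the
# unlabeled data alone -- no task labels are used at this point.
mean = unlabeled.mean(axis=0)
_, _, vt = np.linalg.svd(unlabeled - mean, full_matrices=False)
features = vt[:2]  # learned 2-D representation

def encode(x):
    """Project raw inputs into the pretrained feature space."""
    return (x - mean) @ features.T

# Stage 2: a small labeled task (10 examples per class), solved in the
# pretrained feature space with a nearest-centroid classifier.
class_a = np.array([3.0, 0.0]) @ basis + 0.1 * rng.normal(size=(10, 20))
class_b = np.array([-3.0, 0.0]) @ basis + 0.1 * rng.normal(size=(10, 20))
centroids = np.stack([encode(class_a).mean(axis=0),
                      encode(class_b).mean(axis=0)])

def predict(x):
    """Label a point by its nearest class centroid in feature space."""
    distances = np.linalg.norm(encode(x)[None, :] - centroids, axis=1)
    return int(np.argmin(distances))

test_point = np.array([2.5, 0.2]) @ basis
print(predict(test_point))  # classified as class 0 (near class_a)
```

The point of the sketch is the separation of stages: the features come entirely from data that carries no labels for the eventual task, which is the core of the self-taught learning setup.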


General AI has always been very fascinating and exciting to me. However, recently I feel like it will serve us humans better to continue along the path of ML.

We want an AI trained with ML to keep focusing on flying the plane. We may not want an AGI pilot who, just like a human, can get bored and distracted playing games.


Wasn't this the original aim of AI, and one of the reasons the AI Winter occurred?


My personal uninformed opinion is that we'll start to see how much human experience and expression is driven by our brain wiring.

In other words, we will start thinking about people as if they are mere things. Complex things, but things nevertheless.


And, at the very least, thinking about the human mind as an object will lead to greater advances in artificial intelligence. IMO.

My mind will change when my brain does.

https://en.wikipedia.org/wiki/Neuroplasticity


Complex and important things. What else did you expect?



