Artificial Intelligence: The Revolution Hasn’t Happened Yet (2018) (medium.com/mijordan3)
74 points by okfine on Oct 28, 2022 | 88 comments


This is a common sentiment, and pundits have been making similar remarks for decades. This author writes "Sixty years later, however, high-level reasoning and thought remain elusive."

That's the wrong problem with AI. The trouble with AI is that it still sucks at manipulation in unstructured situations and at "common sense". Common sense can usefully be defined as getting through the next 30 seconds of life without a major screwup, at least at the competence level of the average squirrel. This is why robots are so limited.

If we could build a decent squirrel brain, something "higher level" could give it tasks to do. That would be enough to handle many basic jobs in unstructured spaces, such as store stocking, janitorial, and such. It's not the "high level reasoning" that's the problem. It's the low-level stuff.

A squirrel has around 10 million neurons. Even if neurons are complicated [1], somebody ought to be able to build something with 10 million of them. Current hardware is easily up to the task.

The AI field is fundamentally missing something. I don't know what it is. I took a few shots at this problem back in the 1990s and got nowhere. Others have beaten their head against the wall on this. The Rethink Robotics failure is a notable example.

The real surprise to me is how much progress has been made on vision without manipulation improving much. I'd expected that real-world object recognition would lead to much better manipulation, but it didn't. Even Amazon warehouse bin-picking isn't fully automated yet. Nor is phone manufacturing. Google had a big collection of robots trying to machine-learn basic manual tasks, and they failed at that.

That's the real problem.

[1] https://www.sciencedirect.com/science/article/pii/S089662732...


> Current hardware is easily up to the task.

I don't think so. If you want to model a single synapse in full to capture all effects that might lead to "learning", you have a system of ordinary differential equations. Solving that is very hard, and solving that for 10 million neurons is impossible.

Current hardware can only implement a poor caricature of a real neuron.
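
To make the "system of ODEs" point concrete, here is a minimal single Hodgkin-Huxley neuron integrated with forward Euler, using standard textbook squid-axon parameters (purely illustrative, not tuned for accuracy). Multiply the per-step cost by 10 million neurons plus their synapses and the arithmetic adds up quickly:

    import numpy as np

    # Minimal Hodgkin-Huxley single-neuron sketch (forward Euler).
    # Textbook squid-axon parameters; purely illustrative.
    C_m = 1.0                              # membrane capacitance, uF/cm^2
    g_Na, g_K, g_L = 120.0, 36.0, 0.3      # max conductances, mS/cm^2
    E_Na, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials, mV

    def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
    def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
    def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

    dt, T = 0.01, 50.0                     # ms; small step needed for stability
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    I_ext = 10.0                           # injected current, uA/cm^2

    for _ in range(int(T / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K  = g_K  * n**4     * (V - E_K)
        I_L  = g_L             * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)

    print(f"final membrane potential: {V:.1f} mV")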


While this is true, the complexity perspective misses something more fundamental.

1) Our brains, and more so those of animals, come with really good pretraining at birth. This is the collective genetic knowledge of millions of generations distilled into your brain.

2) Our brains have a lot of sensors and actuators to interact with the world. We only learn by reading as adults when our brains can already do the synesthesia of translating words into thought. But even as adults, most of us learn better if we do something, write something, engage in dialog, instead of passively listening, reading, or watching.

Passive data can never replicate the rich environment our brains grow up in.


While true, there’s a relatively small upper bound on how many bits of information are in this pre-training: specifically, the amount of information contained in DNA, which is only a couple of gigabytes.
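
As a rough sanity check (my arithmetic, not the parent's): about 3.1 billion base pairs at 2 bits each comes out well under a gigabyte for the raw sequence, so "a couple of gigabytes" is a comfortable upper bound even before any compression arguments:

    # Back-of-the-envelope estimate of the information capacity of a genome.
    # Assumes ~3.1e9 base pairs and 2 bits per base; ignores epigenetics,
    # the maternal environment, etc., which also carry information.
    base_pairs = 3.1e9
    bits = base_pairs * 2
    gigabytes = bits / 8 / 1e9
    print(f"~{gigabytes:.2f} GB upper bound on raw genome content")  # ~0.78 GB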


The Stable Diffusion model is around 4 gigabytes. Inside those 4 gigabytes you have an understanding of the whole English language and a mapping to billions of objects, people, concepts, etc., capable of generating almost any picture in any style you can imagine from just a single sentence. Seems like a few gigabytes can hold a lot of information.


That problem has been overcome.[1]

This is a neat result. This research started with the differential equation model of a neuron and tried to train various neural nets to reproduce its output to within 99%. They succeeded. The worst case took an 8-layer net with 256 elements per layer. See Fig. 4. So, roughly 2,000 elements per neuron, or on the order of 20 billion elements for a squirrel. Not that big by current standards.
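
For a sense of the scale involved, here is a hypothetical PyTorch sketch at the reported size (8 hidden layers of 256 units). If I recall correctly the actual paper trains a temporally convolutional network over synaptic input streams, so treat this plain MLP purely as a parameter-count illustration; the input width is a made-up assumption.

    import torch.nn as nn

    # Hypothetical stand-in sized like the reported result: 8 hidden layers
    # of 256 units approximating one detailed neuron's input -> output map.
    # The input width below is assumed, not taken from the paper.
    n_inputs = 1000          # assumed number of synaptic input channels

    layers, dim = [], n_inputs
    for _ in range(8):
        layers += [nn.Linear(dim, 256), nn.ReLU()]
        dim = 256
    layers += [nn.Linear(dim, 2)]   # e.g. spike probability + somatic voltage

    surrogate_neuron = nn.Sequential(*layers)
    n_params = sum(p.numel() for p in surrogate_neuron.parameters())
    print(f"parameters per surrogate neuron: {n_params:,}")
    # ~0.7M parameters here; 10 million copies is a lot, but weight sharing
    # across neurons of the same cell type would shrink it dramatically.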

It's not clear that a model which tracks the biological neuron that accurately is needed. They discuss simpler models that are almost as good.

Low-end mammal brains should be buildable right now. It's not a hardware limitation.

[1] https://www.sciencedirect.com/science/article/pii/S089662732...


I think this requires the assumption that modeling the complexity of biological synapses is required for general intelligence, when we don't know that to be the case. Personally, I believe that it's not a requirement at all and that the first AGI will be strikingly non-neuromorphic. Just my two cents though.


> Current hardware can only implement a poor caricature of a real neuron.

We don't need a complete physiological model for it to be useful. We don't need a perfectly accurate silicon-based mirror of a mammalian brain to outsmart ours on every task we do (and many we don't even realize we could). The challenge will be to coexist and cooperate with these completely alien intelligences that share almost nothing with ours.


I agree with your comment (though the OP's point was not about which approximate model might still work).


Real neurons are far slower (interaction is chemical vs. electrical) and far less precise (IIRC something like 4-7x less precise than a 32-bit float) than artificial neurons.


Biological brains have had a few billion years to optimize. Over the past decade or two, it's been increasingly apparent that the structure and algorithms that govern a particular neural net's behaviour are extremely important to its efficacy.

We likely have a very warped view of what intelligence is, because the most prominent examples of it have been aggressively honed over an extremely long period of time to be good at tasks crucial to their survival, such as effectively navigating a 3D environment. We consider art to be a difficult and complex task, and making a sandwich to be a simple one, but that's because our particular brand of intelligence is optimized toward the latter.


> Biological brains have had a few billion years to optimize.

Not just that, but each brain grows in a body that has been optimized for survival over a few billion years; that body is built from cells that evolved to survive hostile environments; and those cells are built from self-replicating molecules, which emerged from complex chemical reactions in several changing environments and competed with and displaced other, less successful self-replicating molecules that disappeared.

Each of those layers provides a degree of adaptability and self-healing that is extremely hard to replicate. And if we managed to reverse-engineer and replicate one of those layers, it would still be missing all the layers below.

Our best hope to create fully independent agents will come from re-adapting and controlling biological entities, not from tools built from the ground up with current engineering techniques.


Multi-cellular life has only been around for 600 million years or so.


Imagine if the immediate outcome of AI is not that we replace taxi drivers, dishwashers, and factory workers, but instead that we displace most knowledge-worker white collar jobs, like quants and software engineers?

There's an old (and sometimes forgotten) idea in AI that perhaps things we think are simple, like vision and control (robotics), are actually incredibly complicated and took millions of years to evolve.

Whereas things we think are complicated, like playing Go or picking stocks or computer programming, are actually quite simple to learn.

This would be counterintuitive, but (as you observed, and taking my argument recursively) common sense might be much more difficult to get right than obscene pathological thinking.

Anyway, I've always thought a good startup would be to automate away Silicon Valley using AI. It's so punk rock that a lot of disillusioned smart techies would join under this banner. A collaborator of mine has already used AI to do high-level bug finding in blockchain code.


I'm not sure that people appreciate how even the highly technical white collar jobs have large social elements in them. You might be able to get the AI to write the code, but can you get it to attend the meetings?

And it's understanding what the right thing is to build that's the critical challenge in programming.


What if you no longer need meetings? Take accounting software for instance. This function will probably go from an entire team of accountants to one of the C-levels just triggering the right software at the right time as part of their normal duties.


Think about why this isn't the case already. What specific capabilities does AI introduce to accounting?


The software just isn't there yet, but we have some inkling of what might be possible in just a few years. Perhaps a closer analogy is human computers. You would have meetings with them back in the day to set out calculation tasks, but the role has been so thoroughly reduced away that most people have forgotten it ever existed. Employees now perform the duties of the human computer throughout the course of their day without even thinking about the independent function they replaced.


True, but that won't save men. Women are already better suited for jobs that involve communication and empathy. I was in a hospital last week: an almost entirely female staff.


Humans are famously unable to adapt to changes in their environment.


> There's an old (and sometimes forgotten) idea in AI that perhaps things we think are simple, like vision and control (robotics), are actually incredibly complicated and took millions of years to evolve.

https://en.wikipedia.org/wiki/Moravec%27s_paradox


I think there's a lot of truth to this. I'm new to ML, still learning the ropes through some online courses, but already I can see that, once I get a bit of muscle memory in setting up models etc., there's a whole lot of power and efficiency to be unlocked by using simple models, specifically in CS/X and Marketing. Obviously model quality matters, so you have to have proper monitoring etc., but this stuff is low-hanging fruit and should enable teams to be so much more efficient.


In a lot of these cases (maybe most, in fact) the sludge in the data pipeline makes the low-hanging fruit hang high.

I've worked on a number of projects where it looked simple to automate from the outset and impossible in retrospect.


Agree entirely. I wouldn't want to build a SaaS startup around it due to data quality, but if you have control of your pipeline it's easier.


I don't think it's intuition. There's a whole field of junk economics dedicated to telling us that your position in the economic class hierarchy determines the automatability of your job. In general it goes unquestioned. A vast amount of capital is also deployed based upon this assumption.

This is an example paper that, for instance, mathematically blurred the distinction between offshoring and automation:

https://talkbusiness.net/2017/07/ball-state-study-automation...

There was another paper (that I can't find right now) that basically surveyed people about how creative they thought their job was and just assumed that creativity was inversely proportional to automatability.

Ironically I think a widespread belief in this myth helped, among other things, lead to the trucker shortage. Who wants to join a profession with a high barrier to entry that they believe will be automated soon?


If software engineers end up automated away before truck drivers are, (not a completely harebrained concept given that one type of AI is doing better than expected and the other worse), it will put a hilarious spin on the "truckers should just learn to code" concept.


Those "Complicated" tasks are all built in artificially constrained systems with limited degrees of variability, which is perfect for an algorithm to learn.

Those "Simple" tasks have so much variation in them that it takes a billion+ years of evolution + the genetic pretraining to be able to perform.


So who says that picking stocks or computer programming don't require common sense?


There's not a single missing something; there are at least two.

One of them is physical structure. You can get 10M somethings, sure, but how you wire them together is probably more important than how many there are. And there are many possible combinations.

The other missing part is that we have not figured out the high-level software. A squirrel brain is a "desktop PC running Windows" level of utility. A bunch of neurons interlinked in some fashion is equivalent to a blank CPU. We know how the individual transistors work, but the BIOS and OS are still unknown.

It's quite possible that problem 1 and problem 2 are related, because evolution doesn't care about making things easy for us to understand, with clear delineations.


They need well-factored, accurate, multimodal world models. Things like transformers and Stable Diffusion are promising here, for example the 3D video diffusion paper or DeepMind's multimodal transformer.

One thing that has held back progress is the way putting knowledge directly into the system has become taboo. So much so that they often fail to even guide the training toward really core aspects of the world model, or they deliberately go about it with the assumption that everything from start to finish must be determined from the barest input data, such as pixels, and are then surprised when the system learns random, inaccurate, overfit models that miss the underlying hierarchical structure.


> The AI field is fundamentally missing something. I don't know what it is.

I can only speculate, but there is certainly a reason that certain parts of our body are vegetatively controlled whereas others are under the active control of our consciousness.

If you step on a nail, the first reaction comes from a vegetative stimulus; only later does your consciousness process that information. A squirrel's neuronal network is also separated in that way. That may be one reason.

And second, AFAIK AI still doesn't 'think' in concepts, it has no notion of the 'world'.

And third: The capability of reproduction and acting accordingly may be another thing.


Automation only happens when it's cheaper than the human counterpart. In the US there are plenty of immigrants who cost peanuts to employ. I don't expect the robotic future to come from North America.


> A squirrel has around 10 million neurons.

I don't believe this is correct. It's too low.

It's more like 400 million.

> somebody ought to be able to build something with 10 million of them

Build them, sure, but they need to be connected in the right way.


>"However, the current focus on doing AI research via the gathering of data, the deployment of “deep learning” infrastructure, and the demonstration of systems that mimic certain narrowly-defined human skills — with little in the way of emerging explanatory principles — tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems"

I'd go as far as saying that ML is now at a point where it's basically a mirror image of GOFAI with the exact same issues. The old stumbling block was that symbolic solutions worked well until you ran into an edge case, everyone recognized that having to program every edge case in makes no sense.

The modern ML problem is that reasoning based on data works fine until you run into an edge case; then the solution is to provide a training example to fix that edge case. Unlike with GOFAI, though, people apparently haven't noticed yet that this is the same old issue with one more level of indirection. When you get attacked in the forest by a guy in a clown costume with an axe, you don't need to add that as a training input first before you make a run for it.

There's no agency, liveliness, autonomy, or learning in a dynamic, real-time way in any of the systems we have; they're for the most part just static, 'flat' machines. Honestly, rather than thinking of the current systems as intelligent agents, they're more like databases that happen to have natural language as a way to query them.


"When you get attacked in the forest by a guy in a clown costume with an axe you don't need to add that as a training input first before you make a run for it."

Sure, because it's already a training input. We'd run because we recognize the axe, the signs of aggression, the horror movie trope of an evil clown, and so forth. We have to teach "stranger danger" to children.

"There's no agency, liveliness, autonomy or learning in a dynamic real-time way to any of the systems we have, they're for the most part just static, 'flat', machines."

Well, that's at least in part because we design them that way. It's more convenient to separate out the "learning" and "doing" parts so we have control over how the network is trained.


>Sure, because it's already a training input

Not in any meaningful sense, no. I can tell you, "if something's fishy about the situation, just leave". You can do this not because of some particular training inputs or examples I give you, but because you have common sense and a sort of personality and intuition for how to behave in the absence of data. If you told that sentence to a state-of-the-art ML model, you'd probably get "what fish?" as an answer.

>Well, that's at least in part because we design them that way

It's mostly because we have no idea how to design them any other way. I think if anyone knew how to build complex agents with rich internal states that have the intent and communication abilities of humans, we'd do that. It's not even really conceivable right now how you could have an ML-type system that can directly adopt high-level concepts dynamically just by having them communicated.


> not in any meaningful sense, no. I can tell you, "if something's fishy about the situation, just leave"

"Fishy" is doing a lot of work in this sentence. How much training went into refining an instinct for what's "fishy"? Do you not agree that everyone has a different view on what's fishy?

> I think if anyone knew how to build complex agents with rich internal states that have the intent and communication abilities of humans we'd do that.

I'm not so sure. There doesn't seem to be much commercial value in having an agent with intent and its own goals, and most AI advancements are for commercial entities these days.


"I can tell you, "if something's fishy about the situation, just leave". You can do this not because of some particular training inputs or examples I give you, but because you have common sense and a sort of personality and intuition for how to behave in the absence of data."

Only if I had a baseline to compare the situation to. If you took out all familiar elements, I'd have no way of telling whether a situation was normal or suspicious. My understanding of the word "fishy" is born from 300 thousand hours of training data.

"It's mostly because we have no idea how to design them anyway else. I think if anyone knew how to build complex agents with rich internal states that have the intent and communication abilities of humans we'd do that."

That's a different question. We can build machines that learn autonomously; they just don't have the capability of biological minds.


GOFAI = "Good old fashioned AI" for those not familiar with the acronym


If GOFAI is Weizenbaum's Eliza - yes.

If GOFAI includes semantic reasoning over real-world concepts modeled e.g. with thesauri and concept maps, I think AI research was on the right track but went astray because there was not enough resounding business success to warrant further funding.


Randomly watched this yesterday: https://www.youtube.com/watch?v=hXgqik6HXc0&ab_channel=LexFr... where Roger Penrose argues that we're missing something fundamental about consciousness, and his best bet is a structure called the microtubules. This talk reminded me of my own research into "AI" back in the 00's, and of how it's almost impossible to talk about AI since everybody has a different idea of what AI is. Yes, I know there's a pretty good classification (ANI, AGI, ASI), but most people don't know about it and think of AI as a machine that thinks like a conscious human.

I'd argue that we've solved, or at least partly solved, the part of AI that has to do with neural nets. We're still some way off utilizing the full potential of neural nets, since our hardware hasn't quite reached the capability of emulating even the simplest of complex animals. The thing is that neural nets are probably only part of intelligence, and creating bigger and more complex neural nets probably won't result in what most people consider AI, though I guess there's still a chance it might. We might have to wait several years to find out, since Moore's law is plateauing and neural chips are still in their infancy.

My best guess is that we'll solve "intelligence" long before we solve consciousness, and I think we're actually quite far along here. The best theory of intelligence I've read so far is Jeff Hawkins' Thousand Brains Theory, and I'm really looking forward to seeing how far it can go. The problem with this theory is that it's still missing the most critical component, which is the elusive mechanism that binds all the "intelligent" stuff together. I guess that might be hidden in the quantum nature of the microtubules, but to solve that we kind of need a new component to our theory of Quantum Mechanics and quantum effects.

Sorry if I went a bit off topic, but I just needed to get my thoughts from yesterday out of my head.


Having watched Penrose and Hameroff for a while now: IF microtubules contribute to conscious experience, then it is the collapse (or inference) that gives it wheels.

I'm laughing here because when I posted their ideas to HN oh so long ago, I got downvoted to oblivion because "there's no way organic matter can act as a quantum device". For a place that considers itself a "safe" place to explore ideas, it can be quite dangerous to share too much too early, sometimes.

Time will tell, but my instincts are that we're getting close. We needed computers dreaming first, and we have that now with generative networks!


Roger Penrose & Stuart Hameroff’s “Orchestrated Objective Reduction” theory [1] is fascinating and really captured my imagination when I came across it.

But like almost all scientific theories tackling The Hard Problem, it’s built on the assumption that matter gives rise to consciousness.

As time goes by and my own understanding deepens, I’m becoming more and more convinced that this assumption is wrong. Instead we should start considering that consciousness is fundamental, and matter is a product of universal conscious experience.

Idealism is still compatible with the material world, but it seems futile to search for “the experiencer” within the experience itself.

[1] https://en.m.wikipedia.org/wiki/Orchestrated_objective_reduc...


> Idealism is still compatible with the material world, but it seems futile to search for “the experiencer” within the experience itself.

You're right, that's why the concept of "the experiencer" is ultimately an illusion. It's the same sort of illusion as "tables" and "a day job". None of these concepts fundamentally exist in physics, they are labels we apply to loosely defined categories of observations.

Ultimately, Descartes was wrong, "I think therefore I am" is false because it's circular; it assumes the existence of "I" to conclude that "I" exists. The fallacy-free version is "this is a thought therefore thoughts exist", and as you can see, no "I" can be inferred.

If you want to understand what sort of answer neuroscience is starting to provide to the hard problem, I recommend this paper:

A conceptual framework for consciousness, https://pnas.org/doi/10.1073/pnas.2116933119


Alwyn Scott's Stairway to the Mind (1995) has an accessible critique of Penrose's theory from the perspective of neurophysics. Basically, he argues that neuronal activity is on such a large time and energy scale that quantum effects are unlikely to be relevant.


It does sound quite out there, I agree. And I personally think that it's too early to conclude that algorithms with neural nets won't get us there; we simply don't have enough computing power to conclude that yet. Penrose has always been a maverick, and as he admits himself he's not even close to an explanation. His only lead comes from the fact that anesthetic gases seem to have some kind of effect on the microtubules, and I guess that in itself could have a totally different explanation than quantum magic. I mean, the microtubules could be important but for different reasons.


I like how the author emphasizes IA (Intelligence Augmentation) as a counterpoint to GOFAI. I'm less inspired by his vision of II (Intelligent Infrastructure), probably because I'm concerned with the degree of surveillance we already have to live with.


The question to ask is whether or not any algorithmic system is capable of exceeding the programming on which it is based. This question applies to every kind of system we have developed over the years.

The other point to make is that we already build systems that can exceed their programming and they are called children.


This is one of my favorites. So much of industrial AI is about replacing labor (usually cheaper but lower quality). In a way, AGI is only slightly more ambitious. We should be setting higher goals for AI, including helping individuals be superhuman, and helping organizations coordinate better.


Does it resemble how CGI incremented to VR and AR to replace analog experiences (nowhere near as good)?

Even TCP/IP has devolved into a "failed social experiment" with petabytes of low quality/low aptitude vocabulary.

AI is just ambiguous phrasing to color gibberish.


The way people are learning to interact with Stable Diffusion is incredibly interesting to me: almost learning a new language via prompts to get it to produce desired results. I feel like that may be the key to the next step in AI, realising that human-directed AI fills huge gaps in talent at both ends.


Maybe AI affords every human the chance to be an affluent retiree/philosopher-king and have free will.


If so, it seems it would be undesirable for such AI to possess what we call sentience.


I suspect biological brains have a pretty groundbreaking hack to solve the long-term/short-term learning problem. Maybe involving sleep.

What I mean by that is that AIs, the way they are currently built, need to learn very slowly on short-term inputs or they overfit, whereas humans can learn something short-term just from an explanation and don't have overfitting problems.

I suspect this is solved by sleep, and I haven't seen AI with a similar mechanism.
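
For what it's worth, the closest mechanical analogue I know of is rehearsal / experience replay, where "offline" updates interleave stored old samples with new ones so a single vivid new episode doesn't overwrite everything learned before. A rough sketch with hypothetical names, and no claim that this is what sleep actually does:

    import random

    # Rough sketch of rehearsal / experience replay, sometimes loosely
    # compared to sleep consolidation: new experiences are interleaved
    # with replayed old ones during an "offline" update phase.
    replay_buffer = []          # long-term store of past (input, target) pairs
    REPLAY_RATIO = 4            # old samples replayed per new sample

    def observe(sample):
        """Daytime: stash each new experience for later consolidation."""
        replay_buffer.append(sample)

    def consolidate(model_update, new_samples):
        """'Sleep' phase: mix new samples with replayed old ones."""
        batch = list(new_samples)
        if replay_buffer:
            batch += random.choices(replay_buffer, k=REPLAY_RATIO * len(new_samples))
        random.shuffle(batch)
        for sample in batch:
            model_update(sample)   # e.g. one gradient step per sample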


That's an interesting take. I'll have to sleep on it and get back to you.


Memory in the brain has a tree-like structure, kind of like an abstract syntax tree. When Starship returns to the launchpad and sticks the landing on the launch tower, I guess A.I. will have progressed a little bit more.


How is sleep involved? There are a lot more differences than sleep.



Short-term to long-term memory restructuring.


As a theory person who usually explains O notation using concrete numbers: the average degree of the neural network in our brain is approximately 7,000. Taking approximately 86 billion to 100 billion neurons, this is a graph with roughly 6x10^(14) edges. Do AGI proponents really hope to be able to do this? I am genuinely curious to know: is there some simplifying assumption which makes things faster?
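
To put the raw storage in perspective, a back-of-the-envelope calculation with those numbers (my arithmetic, counting only one weight per synapse and nothing else):

    # Storage estimate for a synapse-level graph of the human brain,
    # using the numbers above: ~86e9 neurons, average degree ~7000.
    neurons = 86e9
    avg_degree = 7_000
    edges = neurons * avg_degree          # ~6e14 directed edges (synapses)

    bytes_per_edge = 4                    # one float32 weight, ignoring indices
    petabytes = edges * bytes_per_edge / 1e15
    print(f"edges: {edges:.1e}, weights alone: ~{petabytes:.1f} PB")
    # ~2.4 PB just for weights; storing connectivity indices multiplies this.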


This comparison is not perfect for various reasons. For example, the average firing rate of neurons is pretty low. Most attempts at comparison are done using FLOPS, e.g. https://www.openphilanthropy.org/research/how-much-computati.... You may prefer this comparison using Traversed Edges Per Second: https://aiimpacts.org/brain-performance-in-teps/.


That’s only 3 orders of magnitude off from today’s largest models like PaLM (5x10^11 parameters), a gap that’s narrowed by 3 orders of magnitude just since 2019.

How far away do you think we are, exactly?


Thank you for this information. I did not know this. But my view (I may be wrong) is that AGI is too resource-intensive to be within the reach of normal computing of the ordinary user for at least 2 decades.


> is that AGI is too resource-intensive to be within the reach of normal computing of the ordinary user for at least 2 decades.

Hardware is still accelerating exponentially in density, albeit a bit slower. What you're not considering is that algorithmic improvements in machine learning are outpacing hardware improvements.

For instance, NVidia recently showed how to switch from 32-bit floats to 16-bit floats with no perceptible loss in effectiveness, and they're working on 8-bit floats next. That's a full doubling of the number of parameters you can fit in a model in a single step. Other improvements are refinements to language models themselves to reduce overfitting and boost effectiveness with fewer parameters.
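
The fp32-to-fp16 switch is already exposed in mainstream frameworks; here is a minimal PyTorch mixed-precision sketch with a dummy model and random data (purely illustrative, and it needs a CUDA device):

    import torch
    import torch.nn as nn

    # Minimal mixed-precision training loop: the fp32 -> fp16 switch is a
    # few lines with autocast + GradScaler. Dummy model and data.
    device = "cuda"
    model = nn.Linear(128, 10).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    scaler = torch.cuda.amp.GradScaler()

    for _ in range(10):
        x = torch.randn(32, 128, device=device)
        y = torch.randint(0, 10, (32,), device=device)
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():           # forward pass runs in fp16
            loss = nn.functional.cross_entropy(model(x), y)
        scaler.scale(loss).backward()             # scaled to avoid fp16 underflow
        scaler.step(optimizer)
        scaler.update()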

Arguably a machine learning model will achieve parity with human neuron density, in terms of number parameters, within the next decade. What that actually means is unclear.


You’re entitled to your own opinion, of course, but why do you hold this view?

And why is “the reach of normal computing of the ordinary user” a relevant bar? Google search (as an example) requires computation beyond the reach of normal computing of the ordinary user, yet has still had a tremendous impact.


This is very impressive, but since a biological brain is so much more complicated, who could really make a solid guess? Probably no one right now.

PaLM is not an attempt at AGI, a parameter is not equivalent to a neural connection, an activation function is not equivalent to a neuron (of which you have many different types), biological connection patterns are much richer, and biological stimuli are not like slideshows of a single type of data, so...


I made no claims contrary to anything in your post; your response — none of which I disagree with — makes me worry that you are coming in with a preexisting belief and just looking for reasons it must be true.

That said, there are plenty of multimodal networks (ie not slideshows), and we know very little about the relevance to intelligence of the “richness” of neural connections, activations, etc. — but it’s inarguable that we’ve made great strides in scale alone.


In your previous comment you seemed to suggest that we should not be very far. Maybe I misinterpreted you.

I do believe that AGI is possible and that it does not have to resemble a biological brain though.


You can make the same argument for cats, dogs and other mammals, which do have embodied intelligence but not the skills we typically associate with general intelligence (language, deductive reasoning, math, etc). Raw neuron count is only loosely associated with intelligence, which is highly variable even between individual humans with nearly equivalent neurons.

Our brains are made of tiny little animals because that's just how life evolved on this planet. It's not a given that this is the best or even a good way to approach the problem.


Raw neuron count is only loosely associated with abstract problem solving intelligence. A cat's neural network is incredibly well optimized for things that cats care about.

Brains are very well optimized for computation/energy (while also being self replicating and self repairing), the tasks researchers care about just aren't the ones evolution cares about.


The hardware is now here but the algorithms are not. A crow knows not to land on sharp nails without ever having had any experience stepping on one. Current architectures lack this basic intuition. Something is missing, probably an internal world model or simulation.


The crow has probably stuck its foot somewhere before and can associate the nail with that past experience. That being said, birds seem to have surprisingly complex innate behavior and even pattern recognition encoded within their brains.


or perhaps just increased computing power


How does increasing computing power help with intuition/common sense about the world? Computing power isn't magic. It has to have a way to understand the world the way animals do.


Intuition and common sense are the result of latent learning via experience. A model with the right architecture could learn the same things given the training data.


Remember growth is exponential - we won't recognize the next revolution because we'll still be dealing with the fallout of the previous one. Or previous dozen.


Incorrect. Any growth in a system of finite resources is sigmoidal, with an exponential portion early in the curve before diminishing returns kick in.
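
The logistic curve makes the point: from inside the curve, the early portion is indistinguishable from an exponential. A quick illustration with made-up growth rate and capacity:

    import math

    # Logistic vs. exponential growth with made-up parameters: early on the
    # two are nearly identical, which is why "we're still exponential" and
    # "we're on a sigmoid" are hard to tell apart from inside the curve.
    r, K = 0.5, 1_000.0     # growth rate, carrying capacity (arbitrary units)
    x0 = 1.0

    for t in range(0, 21, 5):
        exponential = x0 * math.exp(r * t)
        logistic = K / (1 + (K / x0 - 1) * math.exp(-r * t))
        print(f"t={t:2d}  exp={exponential:10.1f}  logistic={logistic:8.1f}")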


We've been promised this since the '90s, and yet we've been pushing back the wall Moore's law was supposed to hit for more than 30 years. And we haven't even properly started to play with non-transistor logic and analog neural devices, so I am cautiously imagining we'll remain exponential for the time being.


It all depends on where we are in the curve.


We are at the stage where we are just starting to explore the possibilities of using analog components, so my guess would be there is a lot of road left to cover.

Plus, there are a couple different lines of research for new materials as well, so any one of those can yield something interesting.


From reading these comments, I will say that people should try out GitHub Copilot. AI is a bit further along than people might think.


We already have artificial intelligence. It’s called children.


Children are natural intelligences. An existing artificial example would be corporations or governments, although it's not a singular/individual kind of intelligence, but rather an organizational one.


There is little intelligence in a corporation


I think this essay includes a specific prediction (that human-level AI is far away) that might be disproved this decade. If human-level AI is close, focusing on some other kind of AI is more likely to be a waste of time.


There's no reason to think it's close this time, just like there's little reason to think this time automation is going to put everyone out of work.


Actually, there are many good reasons to think it's close:

https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-thin...


Microsoft already got to human level ai.

Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day.

https://www.theverge.com/2016/3/24/11297050/tay-microsoft-ch...



