I guess, in a nutshell, they attach a small amount of muscle to the nerve that controls something that's going to be cut off. And they anchor that muscle on both ends. So that contracting and relaxing it provides a real-world force feedback loop. Really clever.
Herr said the AMI patients have felt less pain in their residual limbs, and their limbs don’t atrophy, as is typical after a standard amputation, resulting in a poor fit and pain when using a prosthesis.
So this is a really significant step forward with immediate, important benefits. "Use it or lose it": muscle function is preserved, so the stump isn't simply wasting away, and keeping that function is essential to preserving the limb's vitality.
> Outside of a brain scanner, the restoration of proprioception can in some ways give patients the feeling of having a real foot. One AMI amputee was hiking recently while wearing a standard prosthesis and stepped into a creek. He later described having the sensation of water flowing over his prosthetic foot even though it had no way to perceive that. “He trusted or embodied his prosthesis more than someone who doesn’t have this phantom sensation,” said Carty.
I read about an experiment where people were shown various black-and-white drawings over random grayish backgrounds. They then had to estimate the hue of the grayish background, which was tricky because it only had a slight amount of red, blue, etc. The results clearly showed that the black sketch overlaid on the gray background skewed people's perceptions of the background's color. People could not simply observe the color of the background without their other knowledge skewing their perception.
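Not the actual study's code (the tint values, sizes, and the stand-in "drawing" below are all made up), but here's a minimal sketch of how stimuli like that could be generated, assuming numpy and Pillow:

    import numpy as np
    from PIL import Image, ImageDraw

    def make_stimulus(tint=(4, 0, 0), size=256):
        """Near-gray background with a slight RGB tint, plus a black line drawing on top."""
        base = np.full((size, size, 3), 128, dtype=np.uint8)
        bg = np.clip(base.astype(int) + np.array(tint), 0, 255).astype(np.uint8)
        img = Image.fromarray(bg)
        draw = ImageDraw.Draw(img)
        # Stand-in "drawing": a few black strokes. A real study would presumably use
        # recognizable objects, since the point is that knowledge of the object skews
        # the perceived color of the background.
        draw.ellipse((60, 80, 200, 180), outline=(0, 0, 0), width=3)
        draw.line((60, 130, 200, 130), fill=(0, 0, 0), width=3)
        return img

    # e.g. a background tinted very slightly toward red:
    make_stimulus(tint=(4, 0, 0)).save("stimulus_reddish.png")

Viewed with nothing else on screen for reference, judging whether a tint that faint leans red or blue is genuinely hard, which is presumably what lets the overlaid drawing bias the guess.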
Yet another example that we don't really get a "raw feed" from our senses. What we perceive has already gone through a lot of "post-processing" by our brain.
Absolutely. This is one of the reasons why it is really hard to explain to laypeople why robotic perception is hard. The conversation usually goes: why can't you just make it do "obviously good action" when "condition" happens? And since "condition" is something they can just see, it is hard to convince them that the robot doesn't have access to it in a clear and unambiguous manner.
Rodney Brooks talked about building an ant robot trying to mimic the real thing. He said a real ant has hundreds of thousands to millions of sensors, while the robot his team built had 150, and they could only barely handle that. The data living things gather from their environment and the processing applied to it is absolutely mind-boggling.
> I read about an experiment where people were shown various black-and-white drawings over random grayish backgrounds. They then had to estimate the hue of the grayish background, which was tricky because it only had a slight amount of red, blue, etc.
I'm colour blind, and if a picture is in black and white, I can't always tell.
A long while back our CRT TV broke, and started displaying everything without colour - I had no idea until my wife pointed it out! It's like my mind just makes up what it doesn't see.
Another example that I always use is grass: if you showed me a cropped image of grass and I didn't know it was grass, I'll tell you "I don't know what colour it is, but I think it's either red, brown or green". But if you point to some grass, I'll always say "it's green!".
I'm curious about your experience, do you actually feel like you are seeing green, or do you just know that green is the correct answer?
We had a bunch of different candies on the table at game night and I gave my friend some M&Ms, and he thought they came from a bag of mint flavored M&Ms. He said, "wow, I never tried these before, they really taste like mint." We all did a double take; he definitely did not eat mint M&Ms, and I don't really remember if we even had mint M&Ms at the table. I believe him when he says he tasted mint though, like his brain placeboed the mint into place.
With the TV, I actually feel like I'm seeing colours, even if I don't know exactly what they are.
With the grass example, I'm actually not entirely sure! I mean, I obviously know that green is the correct answer, but that knowledge kind of makes me feel like I'm seeing green.
But colour blindness is kind of weird, because for any given thing/colour it's never just "I don't know" - I can narrow it down to a few options, and usually one of them is the correct one; like for a desaturated green, I might think it's grey, green or pink.
I don't think so - the strawberry effect is clearly chromatic adaptation at play and works because of the cyan tint of the rest of the image (even the article you linked says so). Remember the black-blue/white-gold dress photo? That's the same thing - here the lighting is obvious, but in the case of that dress the lighting conditions were ambiguous enough to make different people see different colors.
To control a large body with ample signal propagation delays, the brain probably predicts a lot (Wiener's "Cybernetics" -- outdated but not wrong). Big bodies have bigger brains partially for this reason.
> Yet another example that we don't really get a "raw feed" from our senses.
I love this description http://www.dspguide.com/InnerLightTheory/frontcov.htm of the front cover of a free ebook on consciousness (there's a link to the book at the bottom of the page; the author isn't a neuroscientist or a philosopher but an electrical engineer, yet his theory of consciousness is interesting nonetheless; he also has a free ebook on signal processing, http://www.dspguide.com, and it's pretty cool). The main theme of the book is that consciousness arises exactly because we don't have a "raw feed" and we can't inspect our inner workings; a being that can will not have the same subjective experience.
> The front cover illustrates the image detected by your right eye as you stand a few feet from the Mona Lisa. The gray filaments are regions where you are totally blind, a result of blood vessels in the retina blocking the detection of light. Likewise, the large rectangular region is where the optic nerve connects with the retina, where humans are also sightless. This is called the blind spot, and is really quite large, about the size of an apple at arm’s length. As long as your eye remains fixed on the center of the painting, these gray regions are totally blocked from your gaze; you perceive nothing about the image in these areas.
> When you first looked at the cover, you probably wondered what the gray spider-like pattern represented. It probably struck you as quite odd, like something out of a bad science fiction movie. It was totally unfamiliar and foreign to your conscious experience. But how could this possibly be? This pattern has been superimposed on your visual field since you first opened your eyes as an infant. Even as you read this paragraph the pattern is present. It should be more familiar to you than anything you have ever seen. How is it possible that our conscious experience knows nothing of these blind areas?
> Yet another example that we don't really get a "raw feed" from our senses. What we perceive has already gone through a lot of "post-processing" by our brain.
Looking forward to HDR and Image Stabilization in Humanity 2.0, coming in 102022.
I have always felt that the reason that we can't run in dreams is because the sensation of running is not something that we remember well enough to model. We can verify that sensation, but not recreate it. An enormous amount of computation is offloaded to the world.
I run in dreams. Good and bad. One of my favourite dreams starts with me running, and it crescendos towards a feeling of effortlessness. The ease of my stride becomes complete; rarely and barely do I have to touch the ground to continue propelling myself forward. The few times that I do have to make ground contact, it is as if my forefoot merely licks the ground and I experience something akin to almost-flight.
It is not as if I'm weightless as much as having an unbelievably elastic Achilles tendon that allows me to bound ahead, completely and utterly free. With nothing to impede me.
In nightmares, if I have to run, I'm not as swift.
Interesting. When I have one of those dreams I start running on all fours. Which is weird because I could never run like that in real life. Although I tried to repeat the movement when I was in the sea once, and it kind of worked, in that I could propel myself forward, except I had to be underwater.
Now that I think of it, I've never had a dream where I can't swim and I dream of swimming very often.
I can run in dreams, but more agile motions fail me: jumping over a small fence, fighting someone. Then it's like either there are lead weights attached to my feet, or my arms and legs are like those of rag dolls and don't respond to my commands well. Ironically, flying is much easier.
My assumption for why running in my dreams feels hard, like my legs are made of lead, is the paralysis we're under when dreaming. I want to run, but something subconscious is signaling that my legs are blocked, so I get this sensation. (I had a similar experience with attempting to open my eyes in dreams and feeling that they were glued shut or hard to open/keep open.)
Interesting, I run in my dreams quite often. I didn't know some people couldn't. However, in my dreams I can't really punch; all punches feel like they have no "weight" to them, if that makes sense.
Caveat: not a neuroscientist, but have worked with many and been involved in MRI and fMRI imaging for investigating HCI concepts/ products and marketing. As well, when I was younger had trigeminal neuralgia, considered the most painful disease (explained below), and became fascinated with neuroscience.
The approach of closing the loop on agonist/ antagonist feedback in muscle fibers makes an enormous amount of sense and is brilliant.
Without the two “outbound requests” happening in a closed system, the brain is left to approximate the slight shift between the two signals. Even nanosecond-scale differences in “send/receive” timing would be registered as pain.
I had a neurological condition called trigeminal neuralgia, where the myelin wears down between the positive/negative nerve bundles in the trigeminal nerve and they occasionally touch.
The reason it’s considered the most painful affliction is that the trigeminal nerve is where your whole body’s pain, pressure, and heat sensory nerves converge. So when the +/- touch, every fiber of you feels like it’s being stabbed, burned, and crushed at the same time. About 90% of folks with it have one occlusion in the myelin and will feel excruciating pain in a localized area. I had 9 full occlusions and 20+ partial. So lucky me, I got to experience it through my whole body.
Basically, when the +/- nerves touch, no part of your body is receiving/returning the expected “latency” for pain, pressure, or heat, and the brain perceives the difference in return times as nerves (including those in agonist/antagonist muscle) being split/damaged, thus a pain response.
Totally makes sense that connecting the proper agonist/antagonist fibers to create a real feedback loop could eliminate the unexpected diff in signal timing that triggers phantom pain.
EDIT: Just to give you some real-world examples... Do something like build a fence with your friends, all day. One person operates the saw, one the hammer, and so on and so forth.
At the end of the day, after using the same tool repetitively, everyone holds their tool and puts on a blindfold, then an “experimenter” touches each subject’s forearm at different points. They’ll experience the touch at a different point than if they weren’t holding the tool.
When you use something for a long time you integrate it into your “body schema” - basically your brain’s map of yourself.
My absolute favorite example of body-schema extension is experienced crane operators at docks. They can see a container on a ship, need to place it on a massive pile they’ve been building all day, and can no longer see in their line of vision where they’ll place it. But they can still do it without being able to see where they’re dropping it.
All that’s to say - your brain builds better and better models of its container (body) and environment by developing more and more precise models of input/output diff expectations and essentially a “path prediction” algorithm.
When the actual data falls outside of those parameters and the estimates get out of whack, we experience things like pain where there’s no harm. But in an amputee, agonist/antagonist muscle fibers are no longer on the same circuit. The body adapts, and sends two different “ping” requests to both channels.
So I guess to simplify all that: when there’s a micron of difference in length in key paired muscle fibers (as is common in amputations, injury, etc.), the brain sends two signals, diffs the return times, and if they’re out of whack with past lived experience, you get pain.
While the amputee still doesn’t have the “normal” limb length the body is expecting, closing the loop provides the expected send/ receive timing variance.
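To make that concrete, here is a toy numerical sketch of that informal model (my own illustration, not an established neuroscience algorithm; the function name, timings, and tolerance are all made up): compare the measured return-time difference between the paired signals against the range learned from past experience, and flag anything outside that range as pain.

    def pain_response(agonist_ms, antagonist_ms, expected_diff_ms=0.5, tolerance_ms=0.2):
        """Flag a timing mismatch that falls outside the range learned from experience."""
        measured_diff = abs(agonist_ms - antagonist_ms)
        return abs(measured_diff - expected_diff_ms) > tolerance_ms

    # Intact agonist/antagonist pair: mismatch close to what experience predicts -> no pain.
    print(pain_response(12.0, 12.5))   # False
    # One side of the pair disconnected or shortened: return time way off -> perceived as pain.
    print(pain_response(12.0, 20.0))   # True

In these terms, the AMI surgery amounts to restoring a plausible expected_diff rather than restoring the original limb.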
Was an experimental one, and actually need to get part of it repaired soon which I’m not looking forward to.
But basically, the trigeminal nerve bundle comes up from your neck through your right jawbone, heads up around your mouth and cheek bone to your ear and brain stem.
Mine involved having a molar removed (I need a new false one - this one got me 16 years, but the new ones are billed as lasting a lifetime) with laparoscopic surgery to move up into the cavity and insert small bits of poly-fibers between the offending occlusions in the myelin. If they manage to get all of the offending parts, the myelin degradation stops - for most people. For some with atypical cases, or a case brought on by multiple sclerosis, there’s no cure. As it is, there’s no treatment for the symptoms, and my god, for those without a solution, assisted suicide is truly the only humane thing.
This reminds me of an experiment recounted in a great book Phantoms in the Brain, which I always wanted to try.
The subject sits with a model nose placed some distance in front of them, on which someone taps and strokes in some unpredictable fashion. At the same time the subject's own nose is touched in precisely the same pattern. Subjects apparently report an overwhelming sensation that the model nose, some meters in front of them, is their own nose!
https://videos-fms.jwpsrv.com/0_6146535c_0xe8e28ca538bc35a69...
And this article does a better job of explaining the surgery: https://bionicsforeveryone.com/agonist-antagonist-myoneural-...