Discussion about this post

Brian Villanueva:

I teach robotics to HS students. "Manipulation is the problem" is something we talk about quite a bit. I show them the old DARPA challenge videos and they get a laugh at robots trying to just open a door. Robots have gotten better, but simple tasks still baffle them. Why? Tactile sensor density.

Computer vision has gotten very good: high resolution, AI pattern recognition, the robot knows what's around it. But manipulation isn't visual. It's tactile, and tactile sensor density simply hasn't kept up. Human tactile resolution in the fingertip is about 0.5 mm. And that's not just a binary "am I touching something?" signal. It's quite complex: "How much pressure?"; "Is it hot or cold?"; "Is it hard or soft?" The human palm is less sensor-dense, but still far denser than any robotic fingertip. This is the biggest limitation of humanoid robots today: they can't feel. (And I don't mean emotions.)
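To put rough numbers on the density point, here's a back-of-the-envelope sketch in Python. The patch size, channel names, and units are illustrative assumptions, not any real sensor's spec:

```python
# A toy sketch of what "tactile density" means in data terms: a fingertip
# patch sampled at human-like ~0.5 mm pitch, with multiple analog channels
# per taxel rather than a single binary contact bit. All numbers are
# illustrative assumptions.
import numpy as np

TAXEL_PITCH_MM = 0.5          # roughly human fingertip two-point resolution
PATCH_MM = (15, 10)           # a small fingertip-sized patch (assumed)

rows = int(PATCH_MM[0] / TAXEL_PITCH_MM)   # 30
cols = int(PATCH_MM[1] / TAXEL_PITCH_MM)   # 20

# Each taxel carries several analog channels, not just "touching yes/no".
channels = {
    "pressure_kpa": np.zeros((rows, cols)),      # how much pressure?
    "temp_c":       np.full((rows, cols), 22.0), # hot or cold?
    "shear_x":      np.zeros((rows, cols)),      # is the object slipping?
    "shear_y":      np.zeros((rows, cols)),
}

print(f"{rows * cols} taxels x {len(channels)} channels "
      f"= {rows * cols * len(channels)} analog signals per fingertip patch")
```

Even this small patch amounts to thousands of simultaneous analog signals, which is roughly what a robotic fingertip would need to carry to match human density.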

You're also correct that dexterity and strength are largely a tradeoff and probably always will be. Strength requires higher-power servos, but their larger size limits how many can be packed into something small like a finger. Also, the high pressures involved in lifting heavy things tend to damage the tactile sensors needed for more dexterous applications. This is likely unsolvable, but it won't matter long-term: robots will become cheap enough that they will be specialized. The unit that cracks eggs for an omelet doesn't need to lift 50 pounds.

Vision is there. AI is there. Servos are close. But tactile sensors are the biggest hurdle. There are lots of folks working on this, and it's going to get there. But it's not there yet.

Ryan Davidson:

It's not just tactile sensor efficacy and density that's the issue. It's proprioception, or kinesthesia: having a "sense" of position and movement, both of one's own body and of the surrounding environment, without the use of visual or auditory inputs. Another commenter mentioned having an "almost visual image" of the task of buttoning a shirt. That's what we're talking about here.

This is vitally important to human (or, really, animal) dexterity. It's not enough to just have the tactile input. Or rather, inputs, because as a different commenter mentioned, our sense of touch is multi-channel in terms of both the number and types of inputs, all of which are analog, not binary. Pick up a pencil. The sensation you experience is an integration of signals sent by potentially thousands of different nerve endings. Those signals are integrated in both time and space, with each signal interpreted in relation to the others. That's how you can know that a ball is round, for instance: by integrating the sensation from your entire hand while taking into account the position of each finger in relation to the whole.
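To illustrate that integration idea: a toy sketch, not any real robot's perception code (fit_sphere and the sample numbers are made up), showing how "roundness" falls out of combining contact positions and surface normals from several fingertips, even though no single sensor sees it:

```python
# Toy illustration: roundness is not present in any one contact signal; it
# only appears when contact positions and surface normals from several
# fingers are integrated. Each contact on a sphere satisfies
#     center = p_i + r * n_i   (n_i = inward surface normal),
# which is linear in the unknown center (3 values) and radius r (1 value).
import numpy as np

def fit_sphere(points, normals):
    """Least-squares sphere center and radius from fingertip contacts."""
    points = np.asarray(points, float)
    normals = np.asarray(normals, float)
    n = len(points)
    A = np.zeros((3 * n, 4))
    b = points.reshape(-1)
    for i, normal in enumerate(normals):
        A[3*i:3*i+3, :3] = np.eye(3)   # coefficients of the center
        A[3*i:3*i+3, 3] = -normal      # coefficient of the radius
    (cx, cy, cz, r), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([cx, cy, cz]), r

# Three fingertips touching a unit sphere centered at the origin:
pts = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
nrm = [(-1, 0, 0), (0, -1, 0), (0, 0, -1)]   # normals point inward
center, radius = fit_sphere(pts, nrm)
print(center.round(3), round(radius, 3))     # -> roughly [0. 0. 0.] 1.0
```

The design point is exactly the one above: the shape estimate only exists once each signal is interpreted in relation to the others and to the hand's geometry.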

This appears to be a very difficult problem even for biological nervous systems. Humans, and most mammals, appear to have a pretty good sense of proprioception. But mammals are vertebrates. We have rigid internal skeletons. Our limbs may move in relationship to each other, but their dimensions are basically fixed. This means that our nervous system can treat its own dimensions as a constant.

Which is why we're so awkward around puberty, for what it's worth: for a while there, we grow faster than our nervous system has time to account for. We literally outgrow our own feet.
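To put the "dimensions as a constant" point in engineering terms: with rigid links of fixed length, a body (or a robot) can compute where its hand is from joint angles alone. A minimal sketch, assuming a planar two-joint arm with made-up link lengths:

```python
# Why fixed dimensions help: with rigid links, the fingertip's position is a
# pure function of joint angles, so joint sensors alone give the body a map
# of where everything is. Planar 2-link arm; lengths are made-up constants.
import math

L1, L2 = 0.30, 0.25   # upper arm and forearm lengths in meters (constants!)

def fingertip(shoulder, elbow):
    """Forward kinematics for a planar two-joint arm (angles in radians)."""
    x = L1 * math.cos(shoulder) + L2 * math.cos(shoulder + elbow)
    y = L1 * math.sin(shoulder) + L2 * math.sin(shoulder + elbow)
    return x, y

print(fingertip(math.radians(30), math.radians(45)))

# If L1 and L2 themselves changed from moment to moment (no rigid skeleton),
# joint angles alone would no longer determine where the fingertip is, and
# this shortcut evaporates.
```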

Anyway, this is probably why animals like octopuses don't appear to have much in the way of proprioception: their legs are boneless. They can change not only the relative position of their legs (independently!) but also bend them at every point along their length and change both the diameter and the length of each one. Their nervous systems are pretty damn complex, though clearly not as complex as ours. But unlike mammals, with our rigid limbs, their nervous systems can't take any of their own dimensions as a given. So they basically don't bother with proprioception, as far as we've been able to tell.

Octopuses can get away without proprioception because they're basically all legs, live in the water (meaning their limbs don't have to support their own weight or the weight of things they're trying to manipulate), and are basically infinitely flexible. This means that for an octopus, the answer to "Where are my legs?" is effectively "Everywhere!" They can also squeeze themselves through any hole larger than their eyeballs, which is a pretty neat trick.

Needless to say, we can't build robots that way. So we're left trying to replicate an incredibly sophisticated, analog, multivariate, multi-channel sensory phenomenon with digital, algorithmic brute force.
