Bionic prostheses look futuristic and cool in scientific articles and commercials, but many users consider them “stupid” devices that are tedious to use. Control in most of these prostheses is primitive: the user must tell the device both what to do and how to do it, even for the simplest task, such as picking up a cup of coffee. Researchers at the University of Utah have proposed transferring some of that control to a computer, so that the machine decides how to move while the user focuses on what needs to be done. This relieves users of a great deal of mental strain and finally makes the devices comfortable to use. The same strategy could apply not only to prosthetics but to many other areas where humans interact with machines.
The public often perceives them as visitors from the future, from a science-fiction world where people can upgrade themselves at will by adding new, enhanced body parts. Some owners of high-tech bionic prostheses call themselves cyborgs and attend comic conventions to show off their technological augmentations to anyone eager to touch the future.
Developers of advanced prosthetics actively draw on this pop-culture aura. A Pentagon-funded project for a new prosthetic was even named the “Luke Skywalker Arm,” and in 2015, Robert Downey Jr. presented a seven-year-old boy with a 3D-printed, Iron Man-style prosthetic arm.
“My 21st-century myoelectric hand seemed remarkable — until I tried using it for some routine tasks, where it proved to be more cumbersome and time-consuming than if I had simply left it on the couch,” Britt H. Young, a self-described “cyborg” who has tried many high-tech prostheses over the course of her life, said in a conversation with IEEE Spectrum. “I become more disabled when I wear one [a bionic hand],” she admitted to Input.
This is neither a whim nor an individual intolerance. Many people who have lost upper limbs are unable to adapt to their new mechanical appendage and prefer to live with one arm. A 2007 review found rejection rates among adults of 26% for body-powered devices and 23% for electric devices, figures that have not declined since. In 2024, researchers found that 50% or more of people who had undergone amputation abandoned prosthetic use altogether.
Wearing a prosthesis continuously can be physically exhausting. Modern prostheses may weigh 1.5 to 2 kilograms, which may not sound like much until you carry that weight all day. “Cyborgs” also complain about the lack of precision: prosthetic hands do not sense applied force, making it hard to grasp fragile objects or even to tell whether an object is securely held. But perhaps the greatest difficulty lies in the control systems. The most advanced and widely used devices, bionic prostheses, are typically controlled by myoelectric systems: they respond to electrical signals from the remaining muscles in the residual limb.
In most cases, these are dual-site prostheses capable of reading signals from two points on the residual limb — in other words, a device with two “buttons.” Yet bionic hands are extremely complex, with more than a dozen degrees of freedom, and switching between modes using just two “buttons” can be profoundly tiring.
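The tedium of this two-“button” scheme is easy to see in a minimal sketch. The following is not any vendor’s actual firmware; the mode list, threshold, and co-contraction convention are illustrative assumptions. Two electromyography (EMG) channels act as buttons, and activating both at once cycles through grip modes:

```python
# Minimal sketch (not real prosthesis firmware) of dual-site myoelectric
# control: two EMG channels act as "buttons". Briefly activating both at
# once (a co-contraction) cycles through grip modes; otherwise the two
# channels open or close the hand in the currently selected grip.

GRIP_MODES = ["power", "pinch", "tripod", "lateral", "point"]  # hypothetical mode list

class DualSiteController:
    def __init__(self, threshold=0.5):
        self.threshold = threshold  # normalized EMG activation threshold (assumed)
        self.mode_index = 0

    @property
    def mode(self):
        return GRIP_MODES[self.mode_index]

    def update(self, emg_open, emg_close):
        """Process one sample of the two normalized EMG amplitudes
        and return the action the hand takes this step."""
        open_on = emg_open >= self.threshold
        close_on = emg_close >= self.threshold
        if open_on and close_on:
            # Co-contraction: cycle to the next grip mode.
            self.mode_index = (self.mode_index + 1) % len(GRIP_MODES)
            return f"switch->{self.mode}"
        if close_on:
            return f"close:{self.mode}"
        if open_on:
            return f"open:{self.mode}"
        return "idle"
```

With only two channels, reaching a grip several modes away means issuing one deliberate co-contraction per step, which is exactly the mode-cycling chore Young describes.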
“The ‘power grip’ — a fist — is cool, but, oh my God, trying to cycle through all the modes to find it is infuriating,” Britt Young complains.
Manufacturers are trying to address this problem in various ways. Some are looking for additional control mechanisms — for example, sensors on shoes that respond to specific foot movements and transmit control signals to a prosthetic hand. Another approach is to abandon attempts to replicate nature by building a universal hand, using instead a set of specialized attachments, each highly optimized for a specific task.
Orthopedic specialists and engineers at the University of Utah chose to approach the problem from a different angle. They asked: What if a bionic hand did not require constant “manual control,” but could instead infer what its user wanted to do and assist accordingly?
A team led by Jacob George, an electrical and computer engineer at the University of Utah, decided to use shared human-machine control for an intelligent bionic hand.
To do this, they embedded multimodal sensors capable of detecting both proximity and pressure into the fingertips of a commercial prosthetic hand — sensors that could “see” objects using infrared and “feel” them through pressure. In a series of experiments, they succeeded in enabling the hand to automatically select the appropriate force and configuration to grasp and hold a paper cup or a ball based on sensor input.
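The fused sensing described above can be illustrated with a toy control loop. This is a sketch only; the function, thresholds, and the fixed step size are invented, not the Utah team’s code. The idea is that a proximity reading wakes the hand up, and the fingers then close until the fingertip pressure sensors report a target grip force:

```python
# Illustrative sketch (names and thresholds are invented) of how fused
# proximity/pressure sensing can automate a grasp: an infrared proximity
# reading detects an approaching object, and the fingers close until the
# pressure sensors report the target grip force for that object.

def autonomous_grasp_step(proximity, pressure, target_force, aperture):
    """One control tick.

    proximity, pressure: normalized sensor readings in [0, 1].
    aperture: current finger opening (1.0 = fully open, 0.0 = closed).
    Returns (new_aperture, state)."""
    if proximity < 0.2 and pressure == 0.0:
        return aperture, "waiting"                   # no object nearby
    if pressure < target_force:
        return max(aperture - 0.1, 0.0), "closing"   # keep squeezing gently
    return aperture, "holding"                       # target force reached: stop

# Because a fragile object gets a low target_force, the loop stops
# closing sooner for an egg or a paper cup than for a heavy bottle.
```

The key point is that force selection happens in the hand itself, from its own sensors, rather than being micromanaged by the user through EMG signals.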
But this was only half the challenge. The system also had to allow the user to control the hand without micromanagement, so that the hand could understand the user’s intentions. The researchers conducted a series of training experiments in which able-bodied volunteers controlled the hand. A machine-learning algorithm learned behavioral rules for a range of everyday tasks.

The researchers then created a shared human-machine control system in which the algorithm combined human commands with the “behavioral rules” the hand had learned. At this stage, the authors tested the system with four volunteers with upper-limb amputations. In each case, the prosthesis was first controlled solely by the user, after which the shared-control system was activated.
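One common way to realize such shared control is a weighted blend of the user’s command and the autonomous policy’s command. The sketch below illustrates that general idea under assumed names; the paper’s actual blending rule may differ:

```python
# Generic shared-control blend (a sketch of the general technique, not
# the study's exact algorithm): the machine's share of control grows as
# its confidence in the inferred grasp increases.

def shared_control(user_cmd, policy_cmd, confidence):
    """Blend a user command with a learned policy's command.

    confidence in [0, 1]: 0 = pure user control, 1 = pure automation."""
    alpha = max(0.0, min(1.0, confidence))  # clamp for safety
    return (1.0 - alpha) * user_cmd + alpha * policy_cmd
```

At low confidence the user steers directly; as the sensors and learned rules become sure about the ongoing grasp, the automation takes over the fine details.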
In the first condition, participants were able to complete the tasks successfully only once or twice out of ten attempts. In the second condition, they were able to pick up and hold a paper cup, a sheet of paper, and an egg with 80-90% success.
“The next step is to really take this system into the real world and have someone use it in their home setting,” says study co-author Marshall Trout.
He expects that technological progress will make other methods of capturing user signals more accessible, such as reading signals directly from nerves, since signals obtained from the skin surface are still too noisy. “Using neural implants can really improve the algorithms we already have,” Trout argues.
“The goal is to combine all these approaches in one device,” George says.

