How we determine who’s to blame
How do people assign a cause to events they witness? Some philosophers have suggested that people determine responsibility for a particular outcome by imagining what would have happened if a suspected cause had not intervened.
This kind of reasoning, known as counterfactual simulation, is believed to occur in many situations. For example, soccer referees deciding whether a player should be credited with an “own goal” — a goal accidentally scored for the opposing team — must try to determine what would have happened had the player not touched the ball.
This process can be conscious, as in the soccer example, or unconscious, so that we are not even aware we are doing it. Using technology that tracks eye movements, cognitive scientists at MIT have now obtained the first direct evidence that people unconsciously use counterfactual simulation to imagine how a situation could have played out differently.
“This is the first time that we or anybody have been able to see those simulations happening online, to count how many a person is making, and show the correlation between those simulations and their judgments,” says Josh Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences, a member of MIT’s Computer Science and Artificial Intelligence Laboratory, and the senior author of the new study.
Tobias Gerstenberg, a postdoc at MIT who will be joining Stanford’s Psychology Department as an assistant professor next year, is the lead author of the paper, which appears in the Oct. 17 issue of Psychological Science. Other authors of the paper are MIT postdoc Matthew Peterson, Stanford University Associate Professor Noah Goodman, and University College London Professor David Lagnado.
Follow the ball
Until now, studies of counterfactual simulation could only use reports from people describing how they made judgments about responsibility, which offered only indirect evidence of how their minds were working.
Gerstenberg, Tenenbaum, and their colleagues set out to find more direct evidence by tracking people’s eye movements as they watched two billiard balls collide. The researchers created 18 videos showing different possible outcomes of the collisions. In some cases, the collision knocked one of the balls through a gate; in others, it prevented the ball from doing so.
Before watching the videos, some participants were told that they would be asked to rate how strongly they agreed with statements related to ball A’s effect on ball B, such as, “Ball A caused ball B to go through the gate.” Other participants were asked simply what the outcome of the collision was.
As the subjects watched the videos, the researchers were able to track their eye movements using an infrared light that reflects off the pupil and reveals where the eye is looking. This allowed the researchers, for the first time, to gain a window into how the mind imagines possible outcomes that did not occur.
“What’s really cool about eye tracking is it lets you see things that you’re not consciously aware of,” Tenenbaum says. “When psychologists and philosophers have proposed the idea of counterfactual simulation, they haven’t necessarily meant that you do this consciously. It’s something going on behind the surface, and eye tracking is able to reveal that.”
The researchers found that when participants were asked questions about ball A’s effect on the path of ball B, their eyes followed the course that ball B would have taken had ball A not interfered. Furthermore, the more uncertainty there was as to whether ball A had an effect on the outcome, the more often participants looked toward ball B’s imaginary trajectory.
“It’s in the close cases where you see the most counterfactual looks. They’re using those looks to resolve the uncertainty,” Tenenbaum says.
Participants who were asked only what the actual outcome had been did not perform the same eye movements along ball B’s alternative pathway.
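The counterfactual logic at the heart of the experiment can be sketched as a toy simulation. The code below is illustrative only, not the study's actual model: it runs two simple straight-line "worlds" for ball B, one with the collision and one without, and judges that ball A caused the outcome when the two worlds disagree about whether B went through the gate. All positions, velocities, and the gate geometry are made-up values.

```python
# Toy sketch of counterfactual reasoning about a billiard collision
# (illustrative only; not the researchers' model).

def simulate(pos, vel, kick=None, kick_time=None, steps=100, dt=0.1):
    """Advance ball B in a straight line; optionally apply an
    instantaneous velocity change (the collision with ball A)."""
    x, y = pos
    vx, vy = vel
    for step in range(steps):
        if kick is not None and step == kick_time:
            vx, vy = kick          # collision deflects ball B
        x += vx * dt
        y += vy * dt
    return x, y

GATE_Y = (-1.0, 1.0)               # gate spans this y-range at the far wall

def through_gate(y):
    return GATE_Y[0] <= y <= GATE_Y[1]

# Actual world: A strikes B at step 20 and deflects it toward the gate.
actual = simulate((0.0, 3.0), (0.5, 0.0), kick=(0.5, -0.3), kick_time=20)
# Counterfactual world: same start, but A never intervenes.
counterfactual = simulate((0.0, 3.0), (0.5, 0.0))

# A "caused" the goal if the outcome differs between the two worlds.
caused = through_gate(actual[1]) and not through_gate(counterfactual[1])
print(caused)  # → True
```

The eye movements described above trace exactly the second, counterfactual run: the path B would have taken had A not interfered.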
“The idea that causality is based on counterfactual thinking is an idea that has been around for a long time, but direct evidence is largely lacking,” says Phillip Wolff, an associate professor of psychology at Emory University, who was not involved in the research. “This study offers more direct evidence for that view.”
(Image caption: In this video, two participants’ eye-movements are tracked while they watch a video clip. The blue dot indicates where each participant is looking on the screen. The participant on the left was asked to judge whether they thought that ball B went through the middle of the gate. Participants asked this question mostly looked at the balls and tried to predict where ball B would go. The participant on the right was asked to judge whether ball A caused ball B to go through the gate. Participants asked this question tried to simulate where ball B would have gone if ball A hadn’t been present in the scene. Credit: Tobias Gerstenberg)
How people think
The researchers are now using this approach to study more complex situations in which people use counterfactual simulation to make judgments of causality.
“We think this process of counterfactual simulation is really pervasive,” Gerstenberg says. “In many cases it may not be supported by eye movements, because there are many kinds of abstract counterfactual thinking that we just do in our mind. But the billiard-ball collisions lead to a particular kind of counterfactual simulation where we can see it.”
One example the researchers are studying is the following: Imagine ball C is headed for the gate, while balls A and B each head toward C. Either one could knock C off course, but A gets there first. Is B off the hook, or should it still bear some responsibility for the outcome?
“Part of what we are trying to do with this work is get a little bit more clarity on how people deal with these complex cases. In an ideal world, the work we’re doing can inform the notions of causality that are used in the law,” Gerstenberg says. “There is quite a bit of interaction between computer science, psychology, and legal science. We’re all in the same game of trying to understand how people think about causation.”
Theatre time. All dancers have their own ways of getting ready for a show. I believe that a consistent routine is important in preparing for what’s ahead in a few hours. Because Forsythe’s “Artifact” is so hard on the body and I’m in every show, I tend to get to the theatre pretty early to make sure everything is ready, to put on some “normatec” boots (a compression boot for athletes that helps greatly with fatigue) and do hair and makeup. - Lia Cirio
Lia Cirio - Boston Opera House
Sundays are for relaxing with a good book.
Hold a buoyant sphere like a ping pong ball underwater and let it go, and you’ll find that the ball pops up out of the water. Intuitively, you would think that releasing the ball from a greater depth would make it pop up higher – after all, it has a greater distance to accelerate over, right? But it turns out that the highest jumps come from balls that rise the shortest distance. When released at greater depths, the buoyant sphere follows a path that swerves from side to side. This oscillating path is the result of vortices being shed off the ball, first on one side and then the other. (Image and research credit: T. Truscott et al.)
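A toy one-dimensional model shows why extra depth buys so little: with drag, the sphere quickly reaches a terminal rise speed, so releasing it deeper adds almost no exit velocity. The sketch below uses assumed ping-pong-ball-scale numbers and a generic sphere drag coefficient, and deliberately ignores the side-to-side vortex shedding that dominates the real behavior.

```python
# Toy 1-D model of a buoyant sphere rising underwater (assumed numbers;
# ignores the vortex-induced swerving described above).
import math

G = 9.81          # gravity, m/s^2
RHO_W = 1000.0    # water density, kg/m^3
RHO_B = 100.0     # ball density (very buoyant), kg/m^3
R = 0.02          # ball radius, m (ping-pong-ball scale)
CD = 0.5          # assumed drag coefficient for a sphere

VOL = 4 / 3 * math.pi * R**3
MASS = RHO_B * VOL
AREA = math.pi * R**2

def exit_speed(depth, dt=1e-4):
    """Integrate buoyancy minus quadratic drag from `depth` up to the surface."""
    z, v = -depth, 0.0
    while z < 0:
        buoyancy = (RHO_W - RHO_B) * VOL * G
        drag = 0.5 * RHO_W * CD * AREA * v * abs(v)
        v += (buoyancy - drag) / MASS * dt
        z += v * dt
    return v

# Five times the release depth, nearly identical exit speed:
print(exit_speed(0.1), exit_speed(0.5))
```

In this straight-line model the ball hits terminal velocity within the first centimeter or so of rise; the real depth penalty comes from the swerving path, which bleeds off vertical momentum.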
Imagine tying your shoes or taking a sip of coffee or cracking an egg but without any feeling in your hand. That’s life for users of even the most advanced prosthetic arms.
Although it’s possible to simulate touch by stimulating the remaining nerves in the stump after an amputation, such a surgery is highly complex and individualized. But according to a new study from the University of Pittsburgh’s Rehab Neural Engineering Labs, spinal cord stimulators commonly used to relieve chronic pain could provide a straightforward and universal method for adding sensory feedback to a prosthetic arm.
For this study, published in eLife, four amputees received spinal stimulators, which, when turned on, create the illusion of sensations in the missing arm.
“What’s unique about this work is that we’re using devices that are already implanted in 50,000 people a year for pain — physicians in every major medical center across the country know how to do these surgical procedures — and we get similar results to highly specialized devices and procedures,” said study senior author Lee Fisher, Ph.D., assistant professor of physical medicine and rehabilitation, University of Pittsburgh School of Medicine.
The strings of implanted spinal electrodes, which Fisher describes as about the size and shape of “fat spaghetti noodles,” run along the spinal cord, where they sit slightly to one side, atop the same nerve roots that would normally transmit sensations from the arm. Since it’s a spinal cord implant, even a person with a shoulder-level amputation can use this device.
Fisher’s team sent electrical pulses through different spots in the implanted electrodes, one at a time, while participants used a tablet to report what they were feeling and where.
All the participants experienced sensations somewhere on their missing arm or hand, and they indicated the extent of the area affected by drawing on a blank human form. Three participants reported feelings localized to a single finger or part of the palm.
“I was pretty surprised at how small the areas of these sensations were that people were reporting,” Fisher said. “That’s important because we want to generate sensations only where the prosthetic limb is making contact with objects.”
When asked to describe not just where but how the stimulation felt, all four participants reported feeling natural sensations, such as touch and pressure, though these feelings often were mixed with decidedly artificial sensations, such as tingling, buzzing or prickling.
Although some degree of electrode migration is inevitable in the first few days after the leads are implanted, Fisher’s team found that the electrodes, and the sensations they generated, mostly stayed put across the month-long duration of the experiment. That’s important for the ultimate goal of creating a prosthetic arm that provides sensory feedback to the user.
“Stability of these devices is really critical,” Fisher said. “If the electrodes are moving around, that’s going to change what a person feels when we stimulate.”
The next big challenges are to design spinal stimulators that can be fully implanted rather than connecting to a stimulator outside the body and to demonstrate that the sensory feedback can help to improve the control of a prosthetic hand during functional tasks like tying shoes or holding an egg without accidentally crushing it. Shrinking the size of the contacts — the parts of the electrode where current comes out — is another priority. That might allow users to experience even more localized sensations.
“Our goal here wasn’t to develop the final device that someone would use permanently,” Fisher said. “Mostly we wanted to demonstrate the possibility that something like this could work.”
By finely tuning the distance between nanoparticles in a single layer, researchers have made a filter that can change between a mirror and a window.
The development could help scientists create special materials whose optical properties can be changed in real time. These materials could then be used for applications from tuneable optical filters to miniature chemical sensors.
Creating a ‘tuneable’ material - one that can be accurately controlled - has been a challenge because of the tiny scales involved. To tune the optical properties of a single layer of nanoparticles - each only tens of nanometres across - the spacing between them needs to be set precisely and uniformly.
To form the layer, the team of researchers from Imperial College London created conditions for gold nanoparticles to localise at the interface between two liquids that do not mix. By applying a small voltage across the interface, the team have been able to demonstrate a tuneable nanoparticle layer that can be dense or sparse, allowing for switching between a reflective mirror and a transparent surface. The research is published today in Nature Materials.
In California’s Salinas Valley, known as the “Salad Bowl of the World,” a push is underway to expand agriculture’s adoption of technology. Special correspondent Cat Wise reports on how such innovation is providing new opportunities for the Valley’s largely Hispanic population. Watch her full piece here: http://to.pbs.org/2gLmEga
According to legend, Pythagoras invented a cup to prevent his students from drinking too greedily. If they overfilled the cup, it would immediately drain out all the fluid. The trick works thanks to a U-shaped tube in the center of the cup. As long as the liquid level is below the highest point in the U-tube, only the entrance side of the tube will be filled. As soon as the liquid level in the cup is higher, the weight of all that fluid forces liquid up and around the bend. This kicks off a siphoning effect that pulls all the fluid out. Coincidentally, this is the same way that toilet flushing works! Pulling the handle releases extra water into the bowl that raises the fluid level higher than the highest point in a U-bend. That establishes a siphon, which (provided nothing has clogged the pipe), empties the toilet bowl. (Video credit: Periodic Videos)
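The cup's behavior reduces to a simple rule: nothing drains until the liquid level tops the U-bend, and once the siphon starts it empties everything. Here is a minimal state-machine sketch of that logic (a toy model with an assumed bend height, not fluid dynamics):

```python
# Minimal sketch of the Pythagoras-cup logic: liquid drains only once
# the level tops the U-bend, and a started siphon empties the cup.

BEND_HEIGHT = 8.0   # cm, highest point of the U-tube (assumed value)

def pour(pours):
    """Track the liquid level after each pour; a level above the bend
    triggers the siphon, which drains the cup completely."""
    level = 0.0
    history = []
    for added in pours:
        level += added
        if level > BEND_HEIGHT:   # siphon starts...
            level = 0.0           # ...and pulls out all the fluid
        history.append(level)
    return history

print(pour([3.0, 3.0, 3.0]))   # → [3.0, 6.0, 0.0]: the greedy third pour empties the cup
```

The toilet works the same way: the flush handle is just a pour large enough to push the level past the bend.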
Golden Gate Bridge by Jason Jko
Researchers have built and tested a new mathematical model that successfully reproduces complex brain activity during deep sleep, according to a study published in PLOS Computational Biology.
Recent research has shown that certain patterns of neuronal activity during deep sleep may play an important role in memory consolidation. Michael Schellenberger Costa and Arne Weigenand of the University of Lübeck, Germany, and colleagues set out to build a computational model that could accurately mimic these patterns.
The researchers had previously modeled the activity of the sleeping cortex, the brain’s outer layer. However, sleep patterns thought to aid memory arise from interactions between the cortex and the thalamus, a central brain structure. The new model incorporates this thalamocortical coupling, enabling it to successfully mimic memory-related sleep patterns.
Using data from a human sleep study, the researchers confirmed that their new model accurately reproduces brain activity measured by electroencephalography (EEG) during the second and third stages of non-rapid eye movement (NREM) sleep. It also successfully predicts the EEG effects of stimulation techniques known to enhance memory consolidation during sleep.
The new model is a neural mass model, meaning that it approximates and scales up the behavior of a small group of neurons in order to describe a large number of neurons. Compared with other sleep models, many of which are based on the activity of individual neurons, this new model is relatively simple and could aid in future studies of memory consolidation.
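To make the "neural mass" idea concrete, here is a generic sketch in the Wilson-Cowan style: two coupled rate equations stand in for the average activity of entire excitatory and inhibitory populations rather than single neurons. This is not the authors' thalamocortical model; all weights, time constants, and the external drive are assumed values chosen only to illustrate the population-level approach.

```python
# Generic neural mass sketch (Wilson-Cowan-style E/I pair; illustrative
# parameters, not the published thalamocortical model).
import math

def sigmoid(x):
    """Population firing-rate response, bounded between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def simulate(steps=5000, dt=1e-3):
    E, I = 0.1, 0.1                        # mean excitatory/inhibitory activity
    tau_e, tau_i = 0.01, 0.02              # population time constants, s
    w_ee, w_ei, w_ie = 16.0, 12.0, 15.0    # assumed coupling weights
    P = 1.25                               # assumed external drive to E
    trace = []
    for _ in range(steps):
        dE = (-E + sigmoid(w_ee * E - w_ei * I + P)) / tau_e
        dI = (-I + sigmoid(w_ie * E)) / tau_i
        E += dE * dt
        I += dI * dt
        trace.append(E)
    return trace

trace = simulate()
print(min(trace), max(trace))
```

Because each variable summarizes a whole population, a handful of such equations can be simulated and fit far more cheaply than networks of individual spiking neurons, which is the simplicity advantage the paragraph above describes.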
“It is fascinating to see that a model incorporating only a few key mechanisms is sufficient to reproduce the complex brain rhythms observed during sleep,” say senior authors Thomas Martinetz and Jens Christian Claussen.