Metal fatigue can lead to abrupt and sometimes catastrophic failures in parts that undergo repeated loading, or stress. It’s a major cause of failure in structural components of everything from aircraft and spacecraft to bridges and power plants. As a result, such structures are typically built with wide safety margins that add to costs.
Now, a team of researchers at MIT and in Japan and Germany has found a way to greatly reduce the effects of fatigue by incorporating a laminated nanostructure into the steel. The layered structuring gives the steel a kind of bone-like resilience, letting it deform without allowing the spread of microcracks that can lead to fatigue failure.
The findings are described in a paper in the journal Science by C. Cem Tasan, the Thomas B. King Career Development Professor of Metallurgy at MIT; Meimei Wang, a postdoc in his group; and six others at Kyushu University in Japan and the Max Planck Institute in Germany.
“Loads on structural components tend to be cyclic,” Tasan says. For example, an airplane goes through repeated pressurization changes during every flight, and components of many devices repeatedly expand and contract due to heating and cooling cycles. While such effects typically are far below the kinds of loads that would cause metals to change shape permanently or fail immediately, they can cause the formation of microcracks, which over repeated cycles of stress spread a bit further and wider, ultimately creating enough of a weak area that the whole piece can fracture suddenly.
Being the only audience member at a panel, the grad student pities everyone in the room.
“I travel around the world, eat a lot of shit, and basically do whatever the fuck I want.” Read our complete Profile of Anthony Bourdain here.
“She has autonomy. She has a strong will. But she can’t move. So in many ways her life is my life. It’s bigger than me, it controls me, and it makes me fight like never before. We spend so much time together that she’s a part of me. She knows how important she is to me. She had childhood cancer. Her heart failed three times. And I was by her side the entire time. I never realized that I could love someone as much as this. She could never hurt me. She could never hurt anyone. We always ask her: ‘Are you angry?’, ‘Are you mad?’ And she always says ‘no.’ She laughs when I laugh. And right now I’m trying not to cry. Because she’ll cry if I cry.” (São Paulo, Brazil)
For the past seven years or so, electric vehicles have been on the rise. Tesla is practically a household name, and it’s not uncommon to see EVs from companies like Nissan, Chevy, and BMW on the road now. That wouldn’t have happened without the lithium ion battery. Right now, lithium ion is the most popular battery type for electric vehicles. A single charge can carry a car up to 200 miles, and the batteries aren’t too expensive to make, which means EVs are also relatively affordable.
But experts say that lithium ion batteries can only take electric cars so far—both on the road and in the marketplace. Before they can beat more popular combustion engine cars, electric vehicles will need a battery makeover, which is why countless engineers and scientists are searching for the next EV battery.
So what’s it going to look like? There are dozens of battery chemistries to play with. But how many of them can even approach the success of lithium ion? Electric vehicle advocate and blogger Chelsea Sexton joins George Crabtree, the director of the Joint Center for Energy Storage Research at Argonne National Laboratory, to discuss potential successors to the popular lithium ion battery.
Amélie doesn’t have a boyfriend. She tried once or twice, but the results were a let-down. Instead, she cultivates a taste for small pleasures.
The Art of Lying
Neuroscientists call for deep collaboration to ‘crack’ the human brain
The time is ripe, and the communication technology is available, for teams from different labs and different countries to join efforts and apply new forms of grassroots collaborative research in brain science. This is the right way to gradually upscale the study of the brain so as to usher it into the era of Big Science, claim neuroscientists in Portugal, Switzerland and the United Kingdom. And they are already putting ideas into action.
In a Comment in the journal Nature, an international trio of neuroscientists outlines a concrete proposal for jump-starting a new, bottom-up, collaborative “big science” approach to neuroscience research, which they consider crucial to tackle the still unsolved great mysteries of the brain.
How does the brain function, from molecules to cells to circuits to brain systems to behavior? How are all these levels of complexity integrated to ultimately allow consciousness to emerge in the human brain?
The plan now proposed by Zach Mainen, director of research at the Champalimaud Centre for the Unknown, in Lisbon, Portugal; Michael Häusser, professor of Neuroscience at University College London, United Kingdom; and Alexandre Pouget, professor of neuroscience at the University of Geneva, Switzerland, is inspired by the way particle physics teams nowadays mount their huge accelerator experiments to discover new subatomic particles and ultimately to understand the evolution of the Universe.
“Some very large physics collaborations have precise goals and are self-organized,” says Zach Mainen. More specifically, his model is the ATLAS experiment at the European Laboratory of Particle Physics (CERN, near Geneva), which includes nearly 3,000 scientists from dozens of countries and was able (together with its “sister” experiment, CMS) to announce the discovery of the long-sought Higgs boson in July 2012.
Although the size of the teams involved in neuroscience may be nowhere near that of the CERN teams, the collaborative principles should be very similar, according to Zach Mainen. “What we propose is very much in the physics style, a kind of ‘Grand Unified Theory’ of brain research,” he says. “Can we do it? Clearly, it’s not going to happen within five years, but we do have theories that need to be tested, and the underlying principles of how to do it will be much the same as in physics.”
To help push neuroscience research to take the leap into the future, the three neuroscientists propose some simple principles, at least in theory: “focus on a single brain function”; “combine experimentalists and theorists”; “standardize tools and methods”; “share data”; “assign credit in new ways”. And one of the fundamental premises to make this possible is to “engender a sphere of trust within which it is safe [to share] data, resources and plans”, they write.
Needless to say, the harsh competitiveness of the field is not a fertile ground for this type of “deep” collaborative effort. But the authors themselves are already putting into practice the principles they advocate in their article.
“We have a group of 20 researchers (10 theorists and 10 experimentalists), about half in the US and half in the UK, Switzerland and Portugal,” says Zach Mainen. The group will focus on only one well-defined goal: foraging behavior for food and water in the mouse, recording activity from as much of the brain as possible - at least several dozen brain areas.
“By collaboration, we don’t mean business as usual; we really mean it”, concludes Zach Mainen. “We’ll have 10 labs doing the same experiments, with the same gear, the same computer programs. The data we will obtain will go into the cloud and be shared by the 20 labs. It’ll be almost as a global lab, except it will be distributed geographically.”
On this day in 1996, then-World Chess Champion Garry Kasparov made his first move in the sixth game against Deep Blue, IBM’s supercomputer. Kasparov emerged the victor, winning three games, drawing two, and losing one.
via reddit
For as long as scientists have been listening in on the activity of the brain, they have been trying to understand the source of its noisy, apparently random, activity. In the past 20 years, “balanced network theory” has emerged to explain this apparent randomness through a balance of excitation and inhibition in recurrently coupled networks of neurons. A team of scientists has extended the balanced model to provide deep and testable predictions linking brain circuits to brain activity.
Lead investigators at the University of Pittsburgh say the new model accurately explains experimental findings about the highly variable responses of neurons in the brains of living animals. On Oct. 31, their paper, “The spatial structure of correlated neuronal variability,” was published online by the journal Nature Neuroscience.
The new model provides a much richer understanding of how activity is coordinated between neurons in neural circuits. The model could be used in the future to discover neural “signatures” that predict brain activity associated with learning or disease, say the investigators.
“Normally, brain activity appears highly random and variable most of the time, which looks like a weird way to compute,” said Brent Doiron, associate professor of mathematics at Pitt, senior author on the paper, and a member of the University of Pittsburgh Brain Institute (UPBI). “To understand the mechanics of neural computation, you need to know how the dynamics of a neuronal network depends on the network’s architecture, and this latest research brings us significantly closer to achieving this goal.”
Earlier versions of the balanced network theory captured how the timing and frequency of inputs—excitatory and inhibitory—shaped the emergence of variability in neural behavior, but these models used shortcuts that were biologically unrealistic, according to Doiron.
“The original balanced model ignored the spatial dependence of wiring in the brain, but it has long been known that neuron pairs that are near one another have a higher likelihood of connecting than pairs that are separated by larger distances. Earlier models produced unrealistic behavior—either completely random activity that was unlike the brain or completely synchronized neural behavior, such as you would see in a deep seizure. You could produce nothing in between.”
In the context of this balance, neurons are in a constant state of tension. According to co-author Matthew Smith, assistant professor of ophthalmology at Pitt and a member of UPBI, “It’s like balancing on one foot on your toes. If there are small overcorrections, the result is big fluctuations in neural firing, or communication.”
The new model accounts for temporal and spatial characteristics of neural networks and the correlations in the activity between neurons—whether firing in one neuron is correlated with firing in another. The model is such a substantial improvement that the scientists could use it to predict the behavior of living neurons examined in the area of the brain that processes the visual world.
After developing the model, the scientists examined data from the living visual cortex and found that their model accurately predicted the behavior of neurons based on how far apart they were. The activity of nearby neuron pairs was strongly correlated. At an intermediate distance, pairs of neurons were anticorrelated (when one responded more, the other responded less), and at greater distances still they were independent.
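That distance-dependent correlation structure can be illustrated with a toy simulation. This is my own sketch, not the authors’ model: band-pass filtering shared noise across a ring of model neurons (narrow excitation minus broad inhibition) produces exactly this “Mexican hat” profile - positive correlations for nearby pairs, negative at intermediate distances, roughly none far away:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 200, 4000          # model neurons on a ring; repeated "trials"
pos = np.arange(n)

def gauss_kernel(sigma):
    # circular Gaussian bump centered on index 0, normalized to sum to 1
    d = np.minimum(pos, n - pos).astype(float)
    k = np.exp(-d**2 / (2 * sigma**2))
    return k / k.sum()

# narrow excitation minus broad inhibition: a band-pass "Mexican hat" filter
k = gauss_kernel(3.0) - gauss_kernel(9.0)

# filter shared white noise in space (circular convolution via FFT),
# then add private noise to each neuron
shared = rng.standard_normal((trials, n))
x = np.fft.irfft(np.fft.rfft(shared, axis=1) * np.fft.rfft(k), n=n, axis=1)
x += 0.5 * rng.standard_normal((trials, n))

corr = np.corrcoef(x.T)                  # pairwise correlations across trials
dist = np.abs(pos[:, None] - pos[None, :])
dist = np.minimum(dist, n - dist)        # circular distance between neurons

def mean_corr(dvals):
    # average correlation over all pairs at the given distances
    return np.mean([corr[dist == d].mean() for d in dvals])

near = mean_corr([1, 2, 3])         # positive: nearby pairs co-fluctuate
mid = mean_corr([10, 11, 12, 13])   # negative: intermediate pairs anticorrelate
far = mean_corr([60, 80, 90])       # ~0: distant pairs are independent
print(near, mid, far)
```

The filter widths and noise levels here are arbitrary round numbers; the qualitative pattern (positive, then negative, then zero correlation with distance) is what mirrors the finding.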
“This model will help us to better understand how the brain computes information because it’s a big step forward in describing how network structure determines network variability,” said Doiron. “Any serious theory of brain computation must take into account the noise in the code. A shift in neuronal variability accompanies important cognitive functions, such as attention and learning, as well as being a signature of devastating pathologies like Parkinson’s disease and epilepsy.”
While the scientists examined the visual cortex, they believe their model could be used to predict activity in other parts of the brain, such as areas that process auditory or olfactory cues, for example. And they believe that the model generalizes to the brains of all mammals. In fact, the team found that a neural signature predicted by their model appeared in the visual cortex of living mice studied by another team of investigators.
“A hallmark of the computational approach that Doiron and Smith are taking is that its goal is to infer general principles of brain function that can be broadly applied to many scenarios. Remarkably, we still don’t have things like the laws of gravity for understanding the brain, but this is an important step for providing good theories in neuroscience that will allow us to make sense of the explosion of new experimental data that can now be collected,” said Nathan Urban, associate director of UPBI.
Man dies. Come from darkness, into darkness he returns, and is reabsorbed, without a trace left, into the illimitable void of time.
Leonid Andreyev. (via drunk-on-books)
Mickey Mouse Remastered
1928 vs. 2014
https://www.youtube.com/watch?v=2VdAV0Yp_Gg
Bobby Fischer playing 50 opponents simultaneously. He won 47, lost 1, and drew 2. 1964.
via reddit
“There weren’t many opportunities to work in Paraguay. I was selling tools on the street. There was no money. There was nothing. I came to Argentina when I was nineteen and life has been so much better. I work every day. I’m close to opening another shop just like this. I do get called a lot of names like ‘Shitty Paraguayan.’ But I’m used to it now. In the beginning I would try to fight back, but not anymore. When I first arrived, I fought a man who tried to stab me through the cage. But he came back with twenty people and destroyed my store. So I don’t fight back anymore. Everyone in the neighborhood knows me now, so I’m treated with more respect. And my son was born in this country. So this one is an Argentinian. He’s going to study.” (Buenos Aires, Argentina)
:)
If you liked The Map of Physics animation, I bet you’ll like The Map of Mathematics too (also by Dominic Walliman):
h-t Open Culture
A research group at MIT has created a new class of fast-acting, soft robots from hydrogels. The robots are activated by pumping water in or out of hollow, interlocking chambers; depending on the configuration, this can curl or stretch parts of the robot. The hydrogel bots can move quickly enough to catch and release a live fish without harming it. (Which is a feat of speed I can’t even manage.) Because hydrogels are polymer gels consisting primarily of water, the robots could be especially helpful in biomedical applications, where their components may be less likely to be rejected by the body. For more, see MIT News or the original paper. (Image credit: H. Yuk/MIT News, source; research credit: H. Yuk et al.)
Austria
Taken By SusanK31
:')
Irving Langmuir, who won the 1932 Nobel Prize in Chemistry for his work in surface chemistry, demonstrates how dipping an oil-covered finger into water creates a film of oil, pushing floating particles of powder to the edge.
The same phenomenon can be used to power a paper boat with a little ‘fuel’ applied to the back: as the film expands over the water, the boat is propelled forward:
With experiments like this he revealed that these films are just one molecule thick - a remarkable result, given the minuscule size of molecules.
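The one-molecule-thick conclusion follows from simple arithmetic: divide the known volume of the oil drop by the area of the film it spreads into. A minimal sketch with illustrative round numbers (my own, not figures from the footage):

```python
# Monolayer thickness estimate: film thickness = drop volume / film area.
# Illustrative values (assumed, not taken from the archive film):
volume_m3 = 0.01e-6   # 0.01 mL of oil, converted to cubic metres
area_m2 = 5.0         # area the film spreads over on the water surface

thickness_m = volume_m3 / area_m2
print(thickness_m)    # ~2e-9 m: about 2 nanometres, i.e. molecular scale
```

Since the film cannot be thinner than a single molecule, this ratio gives a direct estimate of molecular size from nothing more than a pipette and a ruler.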
In the full archive film, Langmuir goes on to demonstrate proteins spreading in the same way, revealing the importance of molecular layering for structure.
First, he drops protein solution onto the surface, and it spreads out in a clear circle, with a jagged edge:
Add a little more oil on top, and a star shape appears:
By breaking it up further, he makes chunks of the film which behave like icebergs on water:
You can watch the full demonstrations, along with hours more classic science footage, in our archive.
Mesquite Dunes | California (by Chris Lazzery)