In the system AR Scorpii a rapidly spinning white dwarf star accelerates electrons to almost the speed of light. These high-energy particles release blasts of radiation that lash the companion red dwarf star, causing the entire system to pulse dramatically every 1.97 minutes with radiation ranging from the ultraviolet to radio.
The star system AR Scorpii, or AR Sco for short, lies in the constellation of Scorpius, 380 light-years from Earth. It comprises a rapidly spinning white dwarf, the size of Earth but containing 200,000 times more mass, and a cool red dwarf companion one third the mass of the Sun, orbiting one another every 3.6 hours in a cosmic dance as regular as clockwork.
Read more at: cosmosmagazine / astronomynow
It’s officially starry scholastic month!
Planet X starts off with a quick science fact!
Planet X’s first lesson will be posted tonight!
Today’s starry Fact: Niku
http://www.popularmechanics.com/space/deep-space/a22293/niku-weird-object-beyond-neptune/
Both hemispheres of the brain process numbers
Researchers at the University of Jena (Germany) and the Jena University Hospital have located an important region for the visual processing of numbers in the human brain and shown that it is active in both hemispheres. The scientists published high-resolution magnetic resonance recordings of this region in the 'Journal of Neuroscience'.
The human brain works with a division of labour. Although our thinking organ displays amazing flexibility and plasticity, different areas of the brain typically take over different tasks. While words and language are mainly processed in the left hemisphere, the right hemisphere is responsible for numerical reasoning. According to previous findings, this division of labour originates in the fact that the first steps in processing letters and numbers are likewise located in different hemispheres. But this is not the case, at least not when it comes to the visual processing of numbers.
Neuroscientists at the Friedrich Schiller University Jena and the Jena University Hospital discovered that the visual processing of numbers takes place in a so-called 'visual number form area' (NFA), and in fact in both hemispheres alike. The Jena scientists were the first to publish high-resolution magnetic resonance recordings showing activity in this region of the brain in healthy test subjects. The area is normally difficult to access.
The 'blind spot' in the brain
In their study Dr. Mareike Grotheer and Prof. Dr. Gyula Kovács from the Institute for Psychology of Jena University, together with Dr. Karl-Heinz Herrmann from the Department of Radiology (IDIR) of the Jena University Hospital, presented subjects with numbers, letters and pictures of everyday objects while the participants' brain activity was recorded using magnetic resonance imaging (MRI). This allowed the researchers to clearly identify the region in which the visual processing of numbers takes place: a small area on the underside of the left and right temporal lobes reacted with increased activity when numbers were presented. Letters and other images, but also false numbers, led to significantly lower activity in this area.
Although the Jena team already knew from other scientists' previous research where to look for the area, a great deal of methodological work went into the newly published study. "This region has been a kind of blind spot in the human brain until now," Mareike Grotheer says. The reason: hidden underneath the ear and the acoustic meatus, surrounded by bone and air, the region produced numerous artefacts in previous MRI scans, which obstructed detailed research.
For their study the Jena scientists used a high-performance 3-tesla MRI scanner at the Institute of Diagnostic and Interventional Radiology (IDIR) of the Jena University Hospital. They recorded three-dimensional images of the subjects' brains at an unusually high spatial resolution, and hence with very few artefacts. In addition, the recordings were spatially smoothed, which removed the remaining 'white noise'. This approach will help other scientists investigate a part of the brain that until now had been nearly inaccessible. "In this region not only numbers but also faces and objects are being processed," Prof. Kovács states.
So it turns out you can train a neural network to generate paint colors if you give it a list of 7,700 Sherwin-Williams paint colors as input. A neural network works by looking at a set of data - in this case, a long list of Sherwin-Williams paint color names and the RGB (red, green, blue) numbers that represent each color - and trying to form its own rules about how to generate more data like it.
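To make this concrete, here is a hedged sketch of the kind of text a character-level network trains on: one color per line, name followed by its RGB values, so the network learns to generate names and numbers together. The two entries below are made-up placeholders, not actual Sherwin-Williams colors, and the exact format used for the real experiment may differ.

```python
# Build a char-rnn-style training corpus from (name, rgb) pairs.
# These color entries are illustrative placeholders only.
colors = [
    ("dusty rose", (181, 126, 140)),
    ("pale sage", (186, 201, 168)),
]
training_text = "\n".join(
    f"{name} {r} {g} {b}" for name, (r, g, b) in colors
)
print(training_text)
```

The network never sees "names" and "numbers" as separate things - it just sees a stream of characters and learns which character tends to come next.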
Last time I reported results that were, well… mixed. The neural network produced colors, all right, but it hadn’t gotten the hang of producing appealing names to go with them - instead producing names like Rose Hork, Stanky Bean, and Turdly. It also had trouble matching names to colors, and would often produce an “Ice Gray” that was a mustard yellow, for example, or a “Ferry Purple” that was decidedly brown.
These were not great names.
There are lots of things that affect how well the algorithm does, however.
One simple change turns out to be the “temperature” (think: creativity) variable, which adjusts whether the neural network always picks the most likely next character as it’s generating text, or whether it will go with something farther down the list. I had the temperature originally set pretty high, but it turns out that when I turn it down ever so slightly, the algorithm does a lot better. Not only do the names better match the colors, but it begins to reproduce color gradients that must have been in the original dataset all along. Colors tend to be grouped together in these gradients, so it shifts gradually from greens to browns to blues to yellows, etc. and does eventually cover the rainbow, not just beige.
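The temperature knob can be sketched in a few lines. This is a generic illustration of temperature sampling, not the actual char-rnn code; the function name and logit values are made up for the example. Dividing the logits by a small temperature sharpens the distribution (the most likely character wins almost every time), while a large temperature flattens it (more "creative" picks from farther down the list).

```python
import numpy as np

def sample_char(logits, temperature=1.0):
    """Sample the next character index from a vector of logits.

    Low temperature -> nearly always the most likely character.
    High temperature -> more randomness from less likely characters.
    """
    logits = np.asarray(logits, dtype=np.float64)
    scaled = logits / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return np.random.choice(len(probs), p=probs)
```

At a very low temperature the network effectively always picks its top choice, which is why turning the temperature down makes the output more faithful to patterns in the training data.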
Apparently it was trying to give me better results, but I kept screwing it up.
Raw output from RGB neural net, now less-annoyed by my temperature setting
People also sent in suggestions on how to improve the algorithm. One of the most frequent was to try a different way of representing color - it turns out that RGB (where a single color is represented by the amounts of Red, Green, and Blue in it) isn't very well matched to the way human eyes perceive color.
These are some results from a different color representation, known as HSV. In HSV representation, a single color is represented by three numbers as in RGB, but this time they stand for Hue, Saturation, and Value. You can think of Hue as representing the color itself, Saturation as how intense (vs gray) the color is, and Value as the brightness. Other than the color representation, everything else about the dataset and the neural network is the same. (char-rnn, 512 neurons and 2 layers, dropout 0.8, 50 epochs)
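Re-encoding the dataset this way is a one-line conversion per color with Python's standard-library `colorsys` module. The RGB triple below is just an illustrative purple, not a value from the actual paint dataset; `colorsys` works on channel values scaled to the 0-1 range.

```python
import colorsys

# Convert an RGB triple (0-255 per channel) to HSV before training.
# The color value here is an arbitrary example, not real paint data.
r, g, b = 181, 126, 220
h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
print(f"hue={h:.3f} saturation={s:.3f} value={v:.3f}")
```

All three HSV numbers come back in the 0-1 range, so they can be rescaled to whatever numeric format the training text uses.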
Raw output from HSV neural net:
And here are some results from a third color representation, known as LAB. In this color space, the first number stands for lightness, the second number stands for the amount of green vs red, and the third number stands for the amount of blue vs yellow.
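Unlike HSV, LAB isn't in the Python standard library. Below is a self-contained sketch of the standard sRGB-to-LAB conversion (via XYZ, D65 white point); a real project would more likely reach for a library such as scikit-image or colormath, and the constants here are the usual published ones rather than anything specific to this experiment.

```python
def rgb_to_lab(r, g, b):
    """Convert an sRGB triple (0-255 per channel) to CIE LAB (D65)."""
    def srgb_to_linear(c):
        # Undo the sRGB gamma curve.
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = (srgb_to_linear(c) for c in (r, g, b))
    # Linear RGB -> XYZ using the standard sRGB matrix.
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # Normalize by the D65 reference white.
    x, y, z = x / 0.95047, y / 1.0, z / 1.08883

    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116

    fx, fy, fz = f(x), f(y), f(z)
    lightness = 116 * fy - 16        # 0 (black) to 100 (white)
    a_axis = 500 * (fx - fy)         # negative = green, positive = red
    b_axis = 200 * (fy - fz)         # negative = blue, positive = yellow
    return lightness, a_axis, b_axis
```

The three outputs map directly onto the description above: lightness, green-vs-red, and blue-vs-yellow.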
Raw output from LAB neural net:
It turns out that the color representation doesn't make a very big difference in how good the results are (at least as far as I can tell with my very simple experiment). Surprisingly, RGB seems best able to reproduce the gradients from the original dataset - maybe it's more resistant to disruption when the temperature setting introduces randomness.
And the color names are pretty bad, no matter how the colors themselves are represented.
However, a blog reader compiled this dataset, which has paint colors from other companies such as Behr and Benjamin Moore, as well as a bunch of user-submitted colors from a big XKCD survey. He also changed all the names to lowercase, so the neural network wouldn’t have to learn two versions of each letter.
And the results were… surprisingly good. Pretty much every name was a plausible match to its color (even if it wasn’t a plausible color you’d find in the paint store). The answer seems to be, as it often is for neural networks: more data.
Raw output using The Big RGB Dataset:
I leave you with the Hall of Fame:
RGB:
HSV:
LAB:
Big RGB dataset:
What does it take to teach a bee to use tools? A little time, a good teacher and an enticing incentive. Read more here: http://to.pbs.org/2mpRUAz
Credit: O.J. Loukola et al., Science (2017)
We don’t have any real pictures of the Milky Way galaxy. Most non-illustrated images of the entire Milky Way spiral are actually of another spiral galaxy called Messier 74. It’s impossible to take a full photo of the Milky Way’s spiral structure because it’s about 100,000 light-years across, and we’re stuck on the inside. Source Source 2 Source 3