One of my favourite geology facts is this: These diagrams are a lie.
The mantle isn’t yellow. Nor is it orange, or red, or brown, or gray, or black.
The Earth’s mantle is made up largely of peridotite, a rock that is, in fact, lime green.
The Earth’s mantle is lime green.
On Oct 31, 2015, scientists used radar imaging to photograph a little dead comet. Much to their surprise, it looked quite a lot like a skull! Due to that and the timing, they nicknamed it the Death Comet.
Death Comet will swing by us again this year (albeit a little later than last time).
You can read more about the Death Comet here: https://www.universetoday.com/140108/the-death-comet-will-pass-by-earth-just-after-halloween/
Happy Halloween, all!!! 💀🎃
How playing an instrument benefits your brain
Recent research about the mental benefits of playing music has many applications, such as music therapy for people with emotional problems, or helping to treat the symptoms of stroke survivors and Alzheimer’s patients. But it is perhaps even more significant in how much it advances our understanding of mental function, revealing the inner rhythms and complex interplay that make up the amazing orchestra of our brain.
Did you know that every time musicians pick up their instruments, there are fireworks going off all over their brain? On the outside they may look calm and focused, reading the music and making the precise and practiced movements required. But inside their brains, there’s a party going on.
From the TED-Ed lesson How playing an instrument benefits your brain - Anita Collins
Animation by Sharon Colman Graham
Better late than never!
Here’s a comic about Cosmic Strings!
https://www.space.com/9315-cracks-universe-physicists-search-cosmic-strings.html
https://www.sciencedaily.com/releases/2008/01/080120182315.htm
Wait, people are mad that it's blurry? Isn't that black hole in another galaxy????
It’s literally like 55 million light years away
A Powerful Solar Flare: It was one of the most powerful solar flares in recorded history. Occurring in 2003 and seen across the electromagnetic spectrum, it briefly made the Sun over 100 times brighter in X-rays than normal. The day after this tremendous X17 solar flare – and subsequent Coronal Mass Ejection (CME) – energetic particles emitted from the explosions struck the Earth, creating auroras and affecting satellites. The spacecraft that took these frames – SOHO – was put in a turtle-like safe mode to avoid further damage from this and subsequent solar particle storms. The featured time-lapse movie condenses into 10 seconds events that occurred over 4 hours. The CME, visible around the central sun-shade, appears about three-quarters of the way through the video, while frames toward the very end are progressively noisier as protons from the explosions strike SOHO’s LASCO detector. On this day in 1859, the effects of an even more powerful solar storm caused telegraphs on Earth to spark in what is known as the Carrington Event. Powerful solar storms such as these may create beautiful aurora-filled skies, but they also pose a real danger as they can damage satellites and even power grids across the Earth. via NASA
It’s officially starry scholastic month!
Planet X starts off with a quick science fact!
Planet X’s first lesson will be posted tonight!
Today’s starry Fact: Niku
http://www.popularmechanics.com/space/deep-space/a22293/niku-weird-object-beyond-neptune/
Oh hey, not a big deal, but Hubble took a picture of a star that’s nearing supernova status
Monkey sees… monkey knows?
Socrates is often quoted as having said, “I know that I know nothing.” This ability to know what you know or don’t know—and how confident you are in what you think you know—is called metacognition.
When asked a question, a human being can decline to answer if he knows that he does not know the answer. Although non-human animals cannot verbally declare any sort of metacognitive judgment, Jessica Cantlon, an assistant professor of brain and cognitive sciences at Rochester, and PhD candidate Stephen Ferrigno have found that non-human primates exhibit a metacognitive process similar to that of humans. Their research on metacognition is part of a larger enterprise of figuring out whether non-human animals are “conscious” in the human sense.
In a paper published in Proceedings of the Royal Society B, they report that monkeys, like humans, base their metacognitive confidence level on fluency—how easy something is to see, hear, or perceive. For example, humans are more confident that something is correct, trustworthy, or memorable—even if this may not be the case—if it is written in a larger font.
“Humans have a variety of these metacognitive illusions—false beliefs about how they learn or remember best,” Cantlon says.
Because other primate species exhibit metacognitive illusions like humans do, the researchers believe this cognitive ability could have an evolutionary basis. Cognitive abilities that have an evolutionary basis are likely to emerge early in development.
“Studying metacognition in non-human primates could give us a foothold for how to study metacognition in young children,” Cantlon says. “Understanding the most basic and primitive forms of metacognition is important for predicting the circumstances that lead to good versus poor learning in human children.”
Cantlon and Ferrigno determined that non-human primates exhibited metacognitive illusions after they observed primates completing a series of steps on a computer:
The monkey touches a start screen.
He sees a picture, which is the sample. The goal is to remember that sample because he will be tested on this later. The monkey touches the sample to move to the next screen.
The next screen shows the sample picture among some distractors. The monkey must touch the image he has seen before.
Instead of getting a reward right away – to eliminate decisions based purely on response-reward – the monkey next sees a betting screen where he communicates how certain he is that he’s right. If he places a high bet and is correct, three tokens are added to a token bank; once the token bank is full, the monkey gets a treat. If he places a high bet and is wrong, he loses three tokens. If he places a low bet, he gets one token regardless of whether he is right or wrong.
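The payoff scheme is simple enough to capture in a few lines of code. Here is an illustrative Python sketch (the function and variable names are illustrative, not the study’s; the token values come from the description above):

```python
# Illustrative sketch of the betting payoff, not the study's actual code.
def update_bank(bank, correct, high_bet):
    if high_bet:
        return bank + 3 if correct else bank - 3  # high bet: win or lose 3 tokens
    return bank + 1  # low bet: 1 token, right or wrong

bank = 0
bank = update_bank(bank, correct=True, high_bet=True)    # confident and right: 3
bank = update_bank(bank, correct=False, high_bet=True)   # confident and wrong: 0
bank = update_bank(bank, correct=False, high_bet=False)  # hedged low bet: 1
```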
Researchers manipulated the fluency of the images, first making them easier to see by increasing the contrast (the black image), then making them less fluent by decreasing the contrast (the grey image).
The monkeys were more likely to place a high bet, meaning they were more confident that they knew the answer, when the contrast of the images was increased.
“Fluency doesn’t affect actual memory performance,” Ferrigno says. “The monkeys are just as likely to get an answer right or wrong. But this does influence how confident they are in their response.”
Since metacognition can be led astray by metacognitive illusions, why have humans retained this ability?
“Metacognition is a quick way of making a judgment about whether or not you know an answer,” Ferrigno says. “We show that you can exploit and manipulate metacognition, but, in the real world, these cues are actually pretty good most of the time.”
Take the game of Jeopardy, for example. People press the buzzer more quickly than they could possibly arrive at an answer. Higher fluency cues, such as shorter, more common, and easier-to-pronounce words, allow the mind to make snap judgments about whether or not it thinks it knows the answer, even though it’s too quick for it to actually know.
Additionally, during a presentation, a person presented with large amounts of information can be fairly confident that the title of a lecture slide, written in a larger font, will be more important to remember than all the smaller text below.
“This is the same with the monkeys,” Ferrigno says. “If they saw the sample picture well and it was easier for them to encode, they will be more confident in their answer and will bet high.”
Whenever someone tries to claim that evolution is a lie, I send them a picture of Platybelodon.
1. It’s an excellent example of transitional evolution.
2. It’s a mess; who would intentionally do this, and why?
3. It makes them piss themselves a little.
“Evolution is just a theory-”
To help me point students who are seeking help toward my chemistry resources, I made a list of all the chemistry master posts I’ve created in the past. Please enjoy, and don’t forget to message me if any other chemistry questions arise or if you found anything in this post helpful!
General Chemistry 101 // Contains helpful websites, practice tests, study guides, and tips.
Steps to Balancing Chemical Equations // A 10-step guide to solving unbalanced chemical equations. Practice problems are provided at the end!
Ideal Gas Laws // A little introduction to the history of ideal gas laws and a breakdown of each equation. Also, it contains resources to learn more about them, practice problems, and a personal tip on how I tackle problems!
Electrochemistry Q // I was asked about galvanic cells. I did my best to answer with basic definitions and posted some good websites! My knowledge of this topic has grown since then, though, so send any further questions to my ask.
How I Survived Organic Chemistry // I provide tips on how to study and prepare for organic chemistry.
Organic Chemistry Synthesis Q // My old organic chemistry professor gave us amazing roadmaps of the syntheses we learned in organic chemistry I (Alcohol, alkyne, alkene, epoxide).
Emil Fischer // A short history post about Emil Fischer and his work with Fischer projections. Linked are some practice problems on Fischer projections, which are my favorite!!
NMR // Another short history post but about nuclear magnetic resonance spectroscopy. Like before, practice problems are linked in the description!
A Master Post of Chemistry Resources // My favorite master post of all time. It contains websites with information and practice problems for every subject in chemistry.
How To: Pass and Prepare for A Chemistry Exam // Chemistry exams can be pretty stressful but they don’t have to be! :-)
Tips for Organization // I talk about how to organize your chemistry notes and binders.
Pursuing A Chemistry Degree Q // Just a little something for students who want to major in chemistry but don’t know if they should do a B.A. or B.S. (Valid for the USA, not sure if other countries do the B.A./B.S. system).
Hopefully, I’ll be updating this with more resources in the future! Don’t forget to check out my “Dummies Guide to Physics”! Another good master post that isn’t related to chemistry.
- TheChemistryNerd
A team of physicists at the University of California has uploaded a paper to the arXiv preprint server in which they suggest that work done by a team in Hungary last year might have revealed the existence of a fifth force of nature. Their paper has, quite naturally, caused quite a stir in the physics community as several groups have set a goal of reproducing the experiments conducted by the team at the Hungarian Academy of Science’s Institute for Nuclear Research.
The work done by the Hungarian team, led by Attila Krasznahorkay, examined the possible existence of dark photons - the analog of conventional photons, but for dark matter. They shot protons at lithium-7 samples, creating beryllium-8 nuclei, which, as they decayed, emitted pairs of electrons and positrons. Surprisingly, as they monitored the emitted pairs, instead of a consistent drop-off there was a slight bump, which the researchers attributed to the creation of an unknown particle with a mass of approximately 17 MeV. The team uploaded their results to the arXiv server, and their paper was later published by Physical Review Letters. It attracted very little attention until the University of California team uploaded their own paper suggesting that the new particle found by the Hungarian team was not a dark photon, but possibly a protophobic X boson, which they further suggested might carry a super-short-range force acting over just the width of an atomic nucleus - which would make it a force beyond the four fundamental forces that underlie modern physics.
The paper uploaded by the University of California team has created some excitement, as well as public expressions of doubt - reports of the possibility of a fifth force of nature have been heard before, but none have panned out. But still, the idea is intriguing enough that several teams have announced plans to repeat the experiments conducted by the Hungarian team, and all eyes will be on the DarkLight experiments at the Jefferson Laboratory, where a team is also looking for evidence of dark photons - they will be shooting electrons at gas targets, looking for particles with masses between 10 and 100 MeV, and now more specifically for those in the 17 MeV region. What they find, or don’t, could show within a year’s time whether an elusive fifth force of nature actually exists.
Deuteranomaly: This is caused by reduced sensitivity to green light. Deutan color vision deficiencies are by far the most common forms of color blindness. This subtype of red-green color blindness is found in about 6% of the male population, mostly in its mild form, deuteranomaly.
Protanopia: Caused by a reduced sensitivity to red light due to defective or missing long-wavelength cones (red cones). Some scientists estimate that being a protan is associated with a risk of a road accident equivalent to having a blood alcohol level of between 0.05 and 0.08 per cent.
Tritanopia: People affected by tritan color blindness confuse blue with green and yellow with violet. This is due to defective short-wavelength cones (blue cones). While protanopia and deuteranomaly are significantly more common in men, tritanopia affects both sexes equally.
Monochromacy: Only around 0.00003% of the world’s population suffers from total color blindness, where everything is seen in black and white.
Quantum computers have arrived.
First there was the mainframe, then came the personal computer, and now we’ve reached a new monumental landmark in the history of technology. For the first time ever, IBM aims to bring universal quantum computers out of the lab and into the commercial realm. Projected to sift through vast possibilities and data to choose the perfect option or discover unseen patterns, quantum computing is poised to drive a new era of innovation across industries. This means that some of the world’s most complex problems now have a chance of being solved. And as the quantum ecosystem grows, a seemingly impossible kind of physics could start to make the most incredible things possible.
Learn More →
Contamination-seeking drones - IBM Patent 9447448.
Stay back and let the drones do the dirty work. Patent 9447448 makes cognitive drones able to inspect and decontaminate places so humans don’t have to. The drones’ on-board AI system can collect and analyze samples, so it can identify and clean up any bacteria or outbreak. Meanwhile you get to hang back, safely out of harm’s way.
This is just one of the record-breaking 8,000+ patents IBM received this year. Explore the latest IBM patents. →
Uhmm, how exactly were all of those megafauna able to grow that large and function??? And how the fuck was that giant bird actually able to fly????????
Realistic answer? Mostly because humans hadn’t come around and hunted them all to extinction yet. Dinosaurs are exempt from this because they vastly predated us, but almost anything that coincided with our timeline, we killed.
We are, for our relatively small size and frail, sometimes clumsy physical characteristics, a TERRIFYING species.
Evolution has produced all kinds of Big Shit. Ever seen a Paraceratherium?
Or a size chart for Sauropods that wasn’t produced before 1970?
Evolution likes to make things big. It tries this all the time. Whenever there’s a plentiful food source and enough space, things just get bigger and bigger.
The largest animal to have ever lived is alive right now. It’s the blue whale. And it’s truly a masterpiece of evolution’s drive to Go Bigger.
HOW THIS HAPPEN.
Basically, because they live in the ocean, space isn’t really an issue for them, and thanks to buoyancy, neither is their frankly ALARMING weight. The only real limit to their size is chemistry – whether they can possibly metabolize enough energy fast enough to stay alive at their size. Blue whales are estimated (having, for obvious reasons, never been measured in one piece) to be able to reach over 200 tons. As an average weight. Fluctuating with their feeding season. This was for a 98-foot-long whale. The longest whales ever measured were 110 feet and 109 feet, both females. (Males tend to be slightly shorter, but heavier at any given length.)
A blue whale can hold over 90 tons of food and water in its mouth.
They need 1.5 million kilocalories of food per day.
Blue whales are MASSIVE.
They are not the LONGEST animal in the world, though, just the heaviest. The longest is likely one of two things: the Lion’s Mane Jellyfish or the Bootlace Worm. The longest recorded Lion’s Mane Jellyfish washed up on shore with tentacles measuring 120 feet. It is unknown whether they can be longer than this, but it’s certainly possible, given how fragile they are and the fact that this was just one that happened to wash up on a beach.
The longest recorded bootlace worm SMASHES this record, but because of its stretchy body and the date of the record (1864), its scientific accuracy is disputed. It also washed up on shore, and measured 180 feet.
How far off topic am I this time?
Anyway, yes, animals get big sometimes. It helps deter predators when you’re too big to be hunted by anything. The only natural predator of the blue whale is the killer whale.

Regarding bears specifically, brown bears are in far more trouble than black bears because the brown bear line trends toward going bigger, which makes them easier targets for humans, while black bears have evolved to be shy and stealthy and avoid human contact. As a result, brown bears are far larger, but far more likely to be driven to extinction by humans. Nature functions just fine if it’s left alone. We just ruin everything we touch.

That’s why the largest individual crocodiles still living right now are the ones that have learned to avoid humans at all costs: conservation laws have not protected crocodiles from poaching long enough for them to get really, really big, even though we have significant historical records of crocodiles larger than what we generally see now. At least some of those records are considered reliable and put a couple of extant crocodile species well over 20 feet – some over 22. The largest reliably measured crocodile was Lolong, a saltwater crocodile in the Philippines, who measured 20 feet 3 inches and died a few years ago.
And Argentavis magnificens was able to fly because it was built for it. Even with a massive 24-foot wingspan, it only weighed around 175 pounds, because birds have very lightweight skeletons. As impressive as the size was, a living bird that big probably weighed about as much as, if not a bit less than, the man standing next to it. The surface area of its wings would have been sufficient to keep it in the air, mostly by gliding the way you see large modern birds of prey do. It would have resembled a condor or vulture, just much larger.
See those feathers? The skeleton they found was so well-preserved that scientists were able to examine the pigment cells in the feathers and compare them to those of modern day birds.
And they were able to do this with such accuracy that they know the coloration of this dinosaur. In life it looked something like this.
It just baffles me that we know the color patterns of an animal that has been dead for 161 million years
So it turns out you can train a neural network to generate paint colors if you give it a list of 7,700 Sherwin-Williams paint colors as input. How a neural network basically works is it looks at a set of data - in this case, a long list of Sherwin-Williams paint color names and RGB (red, green, blue) numbers that represent the color - and it tries to form its own rules about how to generate more data like it.
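The network only ever sees those colors as text. A minimal sketch of one way such a dataset could be written out for a character-level network (the exact format and the RGB values below are placeholders, not the actual training file):

```python
# Serialize (name, RGB) pairs as plain text lines for a char-level model.
# Format and RGB values are illustrative placeholders.
colors = [
    ("Stanky Bean", (94, 63, 58)),
    ("Turdly", (190, 164, 116)),
]
with open("colors.txt", "w") as f:
    for name, (r, g, b) in colors:
        f.write(f"{r},{g},{b},{name}\n")
```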
Last time I reported results that were, well… mixed. The neural network produced colors, all right, but it hadn’t gotten the hang of producing appealing names to go with them - instead producing names like Rose Hork, Stanky Bean, and Turdly. It also had trouble matching names to colors, and would often produce an “Ice Gray” that was a mustard yellow, for example, or a “Ferry Purple” that was decidedly brown.
These were not great names.
There are lots of things that affect how well the algorithm does, however.
One simple change turns out to be the “temperature” (think: creativity) variable, which adjusts whether the neural network always picks the most likely next character as it’s generating text, or whether it will go with something farther down the list. I had the temperature originally set pretty high, but it turns out that when I turn it down ever so slightly, the algorithm does a lot better. Not only do the names better match the colors, but it begins to reproduce color gradients that must have been in the original dataset all along. Colors tend to be grouped together in these gradients, so it shifts gradually from greens to browns to blues to yellows, etc. and does eventually cover the rainbow, not just beige.
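In code, temperature is just a divisor applied to the network’s output scores before sampling; a generic sketch (not char-rnn’s actual internals):

```python
import numpy as np

def sample_next_char(logits, temperature=1.0):
    """Pick the next character index from the network's output scores.

    Low temperature sharpens the distribution toward the likeliest
    character (safe, repetitive); high temperature flattens it
    (adventurous, error-prone).
    """
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return np.random.choice(len(probs), p=probs)
```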
Apparently it was trying to give me better results, but I kept screwing it up.
Raw output from RGB neural net, now less-annoyed by my temperature setting
People also sent in suggestions on how to improve the algorithm. One of the most frequent was to try a different way of representing color - it turns out that RGB (with a single color represented by the amount of Red, Green, and Blue in it) isn’t very well matched to the way human eyes perceive color.
These are some results from a different color representation, known as HSV. In HSV representation, a single color is represented by three numbers like in RGB, but this time they stand for Hue, Saturation, and Value. You can think of the Hue number as representing the color, Saturation as representing how intense (vs gray) the color is, and Value as representing the brightness. Other than the way of representing the color, everything else about the dataset and the neural network are the same. (char-rnn, 512 neurons and 2 layers, dropout 0.8, 50 epochs)
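Re-encoding RGB as HSV takes only the Python standard library; a quick sketch with a placeholder color:

```python
import colorsys

# colorsys works on floats in [0, 1], so scale 8-bit RGB values first.
r, g, b = 94, 63, 58  # placeholder color
h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
print(h, s, v)  # hue, saturation, value, each in [0, 1]
```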
Raw output from HSV neural net:
And here are some results from a third color representation, known as LAB. In this color space, the first number stands for lightness, the second number stands for the amount of green vs red, and the third number stands for the amount of blue vs yellow.
Raw output from LAB neural net:
It turns out that the color representation doesn’t make a very big difference in how good the results are (at least as far as I can tell with my very simple experiment). Surprisingly, RGB seems to be the best able to reproduce the gradients from the original dataset - maybe it’s more resistant to disruption when the temperature setting introduces randomness.
And the color names are pretty bad, no matter how the colors themselves are represented.
However, a blog reader compiled this dataset, which has paint colors from other companies such as Behr and Benjamin Moore, as well as a bunch of user-submitted colors from a big XKCD survey. He also changed all the names to lowercase, so the neural network wouldn’t have to learn two versions of each letter.
And the results were… surprisingly good. Pretty much every name was a plausible match to its color (even if it wasn’t a plausible color you’d find in the paint store). The answer seems to be, as it often is for neural networks: more data.
Raw output using The Big RGB Dataset:
I leave you with the Hall of Fame:
RGB:
HSV:
LAB:
Big RGB dataset:
Science fiction writers and producers of TV medical dramas: have you ever needed to invent a serious-sounding disease whose symptoms, progression, and cure you can utterly control? Artificial intelligence can help!
Blog reader Kate very kindly compiled a list of 3,765 common names for conditions from this site, and I gave them to an open-source machine learning algorithm called a recurrent neural network, which learns to imitate its training data. Given enough examples of real-world diseases, a neural network should be able to invent enough plausible-sounding syndromes to satisfy any hypochondriac.
Early on in the training, the neural network was producing what were identifiably diseases, but probably wouldn’t fly in a medical drama. “I’m so sorry. You have… poison poison tishues.”
Much Esophageal Eneetems Vomania Poisonicteria Disease Eleumathromass Sexurasoma Ear Allergic Antibody Insect Sculs Poison Poison Tishues Complex Disease
As the training got going, the neural network began to learn to replicate more of the real diseases - lots of ventricular syndromes, for example. But the made-up diseases still weren’t too convincing, and maybe even didn’t sound like diseases at all. (Except for RIP Syndrome. I’d take that one seriously)
Seal Breath Tossy Blanter Cancer of Cancer Bull Cancer Spisease Lentford Foot Machosaver RIP Syndrome
The neural network eventually progressed to a stage where it was producing diseases of a few basic varieties:
First kind of disease: This isn’t really a disease. The neural network has just kind of named a body part, or a couple of really generic disease-y words. Pro writer tip: don’t use these in your medical drama.
Fevers Heading Disorder Rashimia Causes Wound Eye Cysts of the Biles Swollen Inflammation Ear Strained Lesions Sleepys Lower Right Abdomen Degeneration Disease Cancer of the Diabetes
Second kind of disease: This disease doesn’t exist, and sounds reasonably convincing to me, though it would probably have a different effect on someone with actual medical training.
Esophagia Pancreation Vertical Hemoglobin Fever Facial Agoricosis Verticular Pasocapheration Syndrome Agpentive Colon Strecting Dissection of the Breath Bacterial Fradular Syndrome Milk Tomosis Lemopherapathy Osteomaroxism Lower Veminary Hypertension Deficiency Palencervictivitis Asthodepic Fever Hurtical Electrochondropathy Loss Of Consufficiency Parpoxitis Metatoglasty Fumple Chronosis Omblex's Hemopheritis Mardial Denection Pemphadema Joint Pseudomalabia Gumpetic Surpical Escesion Pholocromagea Helritis and Flatelet’s Ear Asteophyterediomentricular Aneurysm
Third kind of disease: Sounds both highly implausible but also pretty darn serious. I’d definitely get that looked at.
Ear Poop Orgly Disease Cussitis Occult Finger Fallblading Ankle Bladders Fungle Pain Cold Gloating Twengies Loon Eye Catdullitis Black Bote Headache Excessive Woot Sweating Teenagerna Vain Syndrome Defentious Disorders Punglnormning Cell Conduction Hammon Expressive Foot Liver Bits Clob Sweating,Sweating,Excessive Balloblammus Metal Ringworm Eye Stools Hoot Injury Hoin and Sponster Teenager’s Diarey Eat Cancer Cancer of the Cancer Horse Stools Cold Glock Allergy Herpangitis Flautomen Teenagees Testicle Behavior Spleen Sink Eye Stots Floot Assection Wamble Submoration Super Syndrome Low Life Fish Poisoning Stumm Complication Cat Heat Ovarian Pancreas 8 Poop Cancer Of Hydrogen Bingplarin Disease Stress Firgers Causes of the ladder Exposure Hop D Treat Decease
Diseases of the fourth kind: These are the, um, reproductive-related diseases. And those that contain unprintable four-letter words. They usually sound ludicrous, and entirely uncomfortable, all at the same time. And I really don’t want to print them here. However! If you are in possession of a sense of humor and an email address, you can let me know here and I’ll send them to you.
(Pie -> cat courtesy of https://affinelayer.com/pixsrv/ )
I work with neural networks, which are a type of machine learning computer program that learn by looking at examples. They’re used for all sorts of serious applications, like facial recognition and ad targeting and language translation. I, however, give them silly datasets and ask them to do their best.
So, for my latest experiment, I collected the titles of 2237 sweet and savory pie recipes from a variety of sources including Wikipedia and David Shields. I simply gave them to a neural network with no explanation (I never give it an explanation) and asked it to try to generate more.
Its very first attempt left something to be desired, but it had figured out that “P”, “i”, and “e” were important somehow.
e Piee i m t iee ic ic Pa ePeeetae a e eee ema iPPeaia eieer i i i ie e eciie Pe eaei a
Second checkpoint. Progress: Pie.
Pie Pee Pie Pimi Pie Pim Cue Pie Pie (er Wie Pae Pim Piu Pie Pim Piea Cre Pia Pie Pim Pim Pie Pie Piee Pie Piee
This is expected, since the word “pie” is both simple and by far the most common word in the dataset. It stays in the stage above for quite a while, able to spell only “Pie” and nothing else. It’s like evolution trying to get past the single-celled organism stage. After 4x more time has elapsed, it finally adds a few more words: “apple”, “cream”, and “tart”. Then, at the sixth checkpoint, “pecan”.
Seventh checkpoint: These are definitely pies. We are still working on spelling “strawberry”, however.
Boatin Batan Pie Shrawberry Pie With An Cream Pie Cream Pie Sweesh Pie Ipple Pie Wrasle Cream Pie Swrawberry Pie Cream Pie Sae Fart Tart Cheem Pie Sprawberry Cream Pie Cream Pie
10th checkpoint. Still working.
Coscard Pie Tluste Trenss Pie Wot Flustickann Fart Oag’s Apple Pie Daush Flumberry O Cheesaliane Rutter Chocklnd Apple Rhupperry pie Flonberry Peran Pie Blumbberry Cream Pie Futters Whabarb Wottiry Rasty Pasty Kamphible Idponsible Swarlot Cream Cream Cront
16th checkpoint. Showing some signs of improvement? Maybe. It thinks Qtrupberscotch is a thing.
Buttermitk Tlreed whonkie Pie Spiatake Bog Pastry Taco Custard Pie Apple Pie With Pharf Calamed apple Freech Fodge Cranberry Rars Farb Fart Feep-Lisf Pie With Qpecisn-3rnemerry Fluit Turd Turbyy Raisin Pie Forp Damelnut Pie Flazed Berry Pie Figi’s Chicken Sugar Pie Sauce and Butterm’s Spustacian Pie Fill Pie With Boubber Pie Bok Pie Booble Rurble Shepherd’s Parfate Ner with Cocoatu Vnd Pie Iiakiay Coconate Meringue Pie With Spiced Qtrupberscotch Apple Pie Bustard Chiffon Pie
Finally we arrive at what, according to the neural network, is Peak Pie. It tracks its own progress by testing itself against the original dataset and scoring itself, and here is where it thinks it did the best.
It did in fact come up with some that might actually work, in a ridiculously-decadent sort of way.
Baked Cream Puff Cake Four Cream Pie Reese’s Pecan Pie Fried Cream Pies Eggnog Peach Pie #2 Fried Pumpkin Pie Whopper pie Rice Krispie-Chiffon Pie Apple Pie With Fudge Treats Marshmallow Squash Pie Pumpkin Pie with Caramelized Pie Butter Pie
But these don’t sound very good actually.
Strawberry Ham Pie Vegetable Pecan Pie Turd Apple Pie Fillings Pin Truffle Pie Fail Crunch Pie Crust Turf Crust Pot Beep Pies Crust Florid Pumpkin Pie Meat-de-Topping Parades Or Meat Pies Or Cake #1 Milk Harvest Apple Pie Ice Finger Sugar Pie Amazon Apple Pie Prize Wool Pie Snood Pie Turkey Cinnamon Almond-Pumpkin Pie With Fingermilk Pumpkin Pie With Cheddar Cookie Fish Strawberry Pie Butterscotch Bean Pie Impossible Maple Spinach Apple Pie Strawberry-Onions Marshmallow Cracker Pie Filling Caribou Meringue Pie
And I have no idea what these are:
Stramberiy Cheese Pie The pon Pie Dississippi Mish Boopie Crust Liger Strudel Free pie Sneak Pie Tear pie Basic France Pie Baked Trance pie Shepherd’s Finger Tart Buster’s Fib Lemon Pie Worf Butterscotch Pie Scent Whoopie Grand Prize Winning I*iple Cromberry Yas Law-Ox Strudel Surf Pie, Blue Ulter Pie - Pitzon’s Flangerson’s Blusty Tart Fresh Pour Pie Mur’s Tartless Tart
More of the neural network’s attempts to understand what humans like to eat:
Perhaps my favorite: Small Sandwiches
All my other neural network recipe experiments here.
Want more than that? I’ve got a bunch more recipes that I couldn’t fit in this post. Enter your email here and I’ll send you 38 more selected recipes.
Want to help with neural network experiments? For NaNoWriMo I’m crowdsourcing a dataset of novel first lines, after the neural network had trouble with a too-small dataset. Go to this form (no email necessary) and enter the first line of your novel, or your favorite novel, or of every novel on your bookshelf. You can enter as many as you like. At the end of the month, I’ll hopefully have enough sentences to give this another try.
Cullen and Romulus are the world’s first set of identical twin puppies. While it’s possible that canines could have produced twins in the past, these Irish wolfhounds are the first to be medically documented and confirmed with DNA testing. Source Source 2
Last week, I featured new ice cream flavors generated by Ms. Johnson’s coding classes at Kealing Middle School in Austin, Texas. Their flavors were good - much better than mine, in fact. In part, this was because they had collected a much larger dataset than I had, and in part this was because they hadn’t accidentally mixed the dataset with metal bands.
(the three at the bottom were mine)
But not only are Ms. Johnson’s coding class adept with textgenrnn, they’re also generous - and they kindly gave me their dataset of 1,600 ice cream flavors. They wanted to see what I would come up with.
So, I fired up char-rnn, a neural network framework I’ve used for a lot of my text-generating experiments - one that starts from scratch each time, with no memory of its previous dataset. There was no chance of getting metal band names in my ice cream this time.
But even so, I ended up with some rather edgy-sounding flavors. There was a flavor in the input dataset called Death by Chocolate, and I blame blood oranges for some of the rest, but “nose” was nowhere in the input, candied or otherwise. Nor was “turd”, for that matter. Ice cream places are getting edgy these days, but not THAT edgy.
Bloodie Chunk Death Bean Goat Cookie Peanut Bat Bubblegum Cheesecake Rawe Blueberry Fist Candied Nose Creme die Mucki Ant Cone Apple Pistachio mouth Chocolate Moose Mange Dime Oil Live Cookie Bubblegum Chocolate Basil Aspresso Lime Pig Beet Bats Blood Sundae Elterfhawe Monkey But Kaharon Chocolate Mouse Gun Gu Creamie Turd
Not all the flavors were awful, though. The neural network actually did a decent job of coming up with trendy-sounding ice cream flavors. I haven’t seen these before, but I wouldn’t be entirely surprised if I did someday.
Smoked Butter Lemon-Oreo Bourbon Oil Strawberry Churro Roasted Beet Pecans Cherry Chai Grazed Oil Green Tea Coconut Root Beet Peaches Malted Black Madnesss Chocolate With Ginger Lime and Oreo Pumpkin Pomegranate Chocolate Bar Smoked Cocoa Nibe Carrot Beer Red Honey Candied Butter Lime Cardamom Potato Chocolate Roasted Praline Cheddar Swirl Toasted Basil Burnt Basil Beet Bourbon Black Corn Chocolate Oreo Oil + Toffee Milky Ginger Chocolate Peppercorn Cookies & Oreo Caramel Chocolate Toasted Strawberry Mountain Fig n Strawberry Twist Chocolate Chocolate Chocolate Chocolate Road Chocolate Peanut Chocolate Chocolate Chocolate Japanese Cookies n'Cream with Roasted Strawberry Coconut
These next flavors seem unlikely, however.
Mann Beans Cherry Law Rhubarb Cram Spocky Parstita Green Tea Cogbat Cheesecake With Bear Peanut Butter Cookies nut Butter Brece Toasterbrain Blueberry Rose The Gone Butter Fish Fleek Red Vanill Mounds of Jay Roasted Monster Dream Sweet Chocolate Mouse Cookies nutur Coconut Chocolate Fish Froggtow Tie Pond Cookies naw Mocoa Pistachoopie Garl And Cookie Doug Burble With Berry Cake Peachy Bunch Kissionfruit Bearhounds Gropky Pum Stuck Brownie Vanilla Salted Blueberry Bumpa Thyme Mountain Bluckled Bananas Lemon-Blueberry Almernuts Gone Cream with Rap Chocolate Cocoa Named Honey
For the heck of it, I also used textgenrnn to generate some more ice creams mixed with metal bands, this time on purpose.
Swirl of Hell Person Cream Dead Cherry Tear Nightham Toffee
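If you’d like to try something similar yourself, a minimal textgenrnn session looks roughly like this (the filename and settings are placeholders, not my exact configuration):

```python
from textgenrnn import textgenrnn

textgen = textgenrnn()  # starts from textgenrnn's small pretrained model
textgen.train_from_file("icecream_plus_metal.txt", num_epochs=10)
textgen.generate(5, temperature=0.8)  # print five generated flavors
```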
For the rest of these, including the not-quite-PG flavors, enter your email here.
A researcher wrote about why neural networks like picdescbot hallucinate so many sheep – and yet will miss a sheep right in front of them if it’s in an unusual context. Enjoy!
This is the first of thirteen more in-depth write-ups I have planned out for this year. The list (which is not set in stone!) can be found here.
I decided to do these as a way to get more information out to the readers here without having to delve into one specific ask or series of questions. I can imagine that these might create more questions as I go, but I’m also hoping that they will provide a resource that readers can refer back to. The general idea is to allow the series to build up in complexity, and give everyone a better understanding of these topics!
What is DNA?
This first topic is going to be relatively short, because in a couple of weeks I am going to do “what is a gene”, which will get much longer and more complicated, but I wanted some set up about the physical structure of DNA.
You might have heard DNA described in a lot of different ways. Deoxyribonucleic acid. The building blocks of life. The blueprint of You. None of these are particularly inaccurate, but I don’t think that any of them are super great descriptors of what exactly DNA is, or how exactly it goes from existing in cells to encoding entire organisms (although I am going to talk about the actual encoding part in the future).
For now, we are going to start small. Let’s only look at the actual physical structure. Here we have a DNA molecule:
image from wikimedia commons here
So beautiful! (I might be biased, but DNA is my favourite molecule- it’s elegant in both design and function.)
This can be broken down into two main parts:
The phosphate-sugar backbone (all those P’s and O’s and light blue on the outside)
The nucleobases (adenine [A], thymine [T], cytosine [C], and guanine [G]- the purple, pink, yellow and green)
I will point out the hydrogen bonds in the middle as well. Note that cytosine and guanine have three bonds between them, and adenine and thymine only have two. These molecules always bond in this pattern (A bonds to T, and C to G). If you’ve heard DNA being described as “complementary”, this is why! If you find a C on one strand, you know that you will find a G on the other (this became very important for sequencing, but we will talk about that later).
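That determinism is easy to demonstrate in code; a toy Python example (with a made-up sequence):

```python
# Each strand fully determines the other: swap A<->T and C<->G, then
# reverse, since the two strands run in opposite directions.
PAIRS = str.maketrans("ATCG", "TAGC")

def reverse_complement(strand):
    return strand.translate(PAIRS)[::-1]

print(reverse_complement("ATTACG"))  # CGTAAT
```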
The hydrogen bonds in the middle are quite important as well. If these molecules were bonded to each other directly, it would be basically impossible to open the strand to “read” the DNA. Instead, this can be done by breaking those hydrogen bonds and then allowing them to reform. This does mean that a mutation is much more likely in a high A-T region than in a high C-G one, simply because A-T only has two bonds, and C-G has three. As well, quite often just ahead of a gene there’s a long stretch of repeated TATA (these are cleverly called TATA-boxes), so that the strand can more easily be opened and the encoded gene read. More on that when I talk about what a gene is!
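Since G-C pairs have three hydrogen bonds to A-T’s two, the fraction of G’s and C’s in a stretch is a rough proxy for how tightly it holds together; a toy measure:

```python
def gc_content(strand):
    """Fraction of bases that are G or C (three hydrogen bonds each)."""
    strand = strand.upper()
    return (strand.count("G") + strand.count("C")) / len(strand)

print(gc_content("TATATATA"))  # 0.0 - opens (and mutates) more easily
print(gc_content("GCGCGCGC"))  # 1.0 - held together more tightly
```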
And that is honestly pretty much it for DNA (I say that in jest- there is a lot more, and this is the result of a few billion years of evolution!). It’s not a terribly complicated design, which is probably why it is so immensely biologically successful.
So, there we have it: a very, very quick rundown that is mostly to get some important features pointed out before I talk about what a gene is, and how DNA encodes them on January 31st. This is hardly comprehensive, but I will get more in-depth into the structure and features then, and I didn’t want to make that info post horrendously long. Thanks for reading!