Was an early crocodile a top predator among dinosaurs?


It’s widely accepted that dinosaurs ruled the Earth (unless you believe that the Earth is only 6,000 years old and magically sprang into existence). For around 200 million roarin’, stompin’, Jeff Goldblum-chasin’ years they were the unquestioned tyrants of the land.


Or were they? Scientists from the US claim to have found an early ancestor of crocodiles which they believe could have been the apex predator in the region in which it lived, certainly bigger than any dinosaurs in the vicinity.

Crocodiles and birds are two groups of animals, closely related to the dinosaurs, which managed to survive the mass extinction event of 65 million years ago. Like sharks of the land, they have therefore had millions of years to evolve into near-perfect killing machines.

But interestingly, this was not always the case – once upon a time, the ancestors of modern-day crocs are believed to have been small, land-dwelling and primarily vegetarian (which is a bit like finding out Simon Cowell actually had a taste in music at some point in his youth).

The ‘crocodylomorph’ discovered by the team of palaeontologists is described as ‘unusually large-bodied’ for its time period and could be one of the earliest examples of when crocs began to evolve to be bigger and bite-ier (which I promise is a real scientific word…).

So, what’s the point?

It is useful to understand how creatures evolved in the past, so we have a better idea of how and why their adaptations came about (i.e. what happened in their environment to make certain traits more helpful to their survival).


Understanding this can be useful in today’s world where we need to understand how changing environments can affect ecosystems and how some organisms might adapt while others might require intervention in order to protect them.

Another reason to study ancient creatures is simply to build a deeper understanding of our planet’s biological history, which is ultimately part of the jigsaw that answers the grand philosophical questions: ‘Who are we?’ and ‘Where do we come from?’

Plus dinosaurs are friggin’ awesome.

What did they do?

The researchers analysed a skull discovered in the Pekin Formation (in modern-day North Carolina) and compared it to other ‘crocodylomorph’ fossils to see where it fits into the evolutionary jigsaw.


Important things to consider were its age and how certain features on the skull were shaped. Features such as the teeth and shape of the jaw can help the researchers to make intelligent guesses as to how this individual creature lived, and by comparing features with other related crocodylomorphs they can guess as to how it evolved over time.

Did they prove anything?

The skull was around 231 million years old and more than 50 cm long; from it, the researchers calculated that the animal would likely have been around 3 m long.

Its teeth were ‘elongated, serrated and slightly recurved’, suggesting that it ate meat. Because of its size and carnivorous nature, the team named it Carnufex – meaning ‘butcher’.

The team claims that meat-eating dinosaurs found in the same region and from that same time period were smaller than Carnufex, which they say suggests it was a ‘top-order predator’.

So, what does it mean?

The researchers do make a compelling case: generally, if something is larger than anything else around and eats meat, then it is probably an apex predator (although it might also be a scavenger).

The fact that it was larger than the dinosaurs of its era and location (dinosaurs in other regions at the same time grew bigger) raises the interesting possibility that, at one point, crocodile relatives were the predators of dinosaurs.

Given that we are (kind-of) predators of crocodiles (there are plenty of places that sell cutlets of croc), does this mean we are the most successful predators the planet has ever seen? Probably not – in a few million years the descendants of lions or dolphins or even chipmunks might be lunching on us; such is the precariousness of evolutionary power.


So perhaps we should view our time as the Earth’s dominant species for what it is in the grand passage of time – merely a passing fad – and try not to ruin it for the 7ft tall meat-eating apex-predator chipmunks of the post-apocalyptic future.

Original article in Scientific Reports Mar 2015

All images are open-source/Creative Commons licence.
Credit: AzDude (First); Mike Baird (Second); L Zanno et al.(Third); Fotocitizen (Fourth).

Text © thisscienceiscrazy. If you want to use any of the writing or images featured in this article, please credit and link back to the original source as described HERE.

Find more articles like this in:

paleontology and archaeology

Zanno LE, Drymala S, Nesbitt SJ, & Schneider VP (2015). Early crocodylomorph increases top tier predator diversity during rise of dinosaurs. Scientific Reports, 5. PMID: 25787306


How radiation from space affects the Earth’s climate


Cosmic rays are a form of radiation from space, consisting of very-high-energy particles such as protons and atomic nuclei. These particles can electrically charge (ionise) molecules in the Earth’s atmosphere, and the resulting ions can act as starting points (nucleation points) around which water droplets form in the lower atmosphere – so these rays from space actually play a role in cloud formation, and therefore in climate.

Weirdly, and for reasons still not fully understood, they only make a significant contribution to cloud formation in the lower parts of the atmosphere, not the middle and upper parts. This is important because the altitude of a cloud determines whether the region below it retains heat or loses it: radiation from the Earth is trapped by high clouds more than by lower clouds, while the opposite is true for radiation from the Sun.

[Figure: how cloud height determines heating vs cooling]

The overall effect is that high clouds trap more of the Earth’s outgoing radiation than they block incoming sunlight, warming the Earth, while for low clouds the opposite is true, cooling it. And because cosmic rays help with the formation of low clouds, the level of cosmic ray radiation received by the Earth could affect the global temperature.

So a team of researchers from the US, UK and China have compared cosmic ray radiation and global temperatures to see if there is any link between the two.

So, what’s the point?

Understanding the effects of cosmic rays on the temperature could help us to better predict weather and climate. This is more important than just helping you to know whether or not to take an umbrella with you – accurate weather prediction can help nations and aid organisations to better prepare for extreme weather, minimising the human cost of cyclones and droughts.


The statistical analysis can also help scientists to better understand climate change – to see whether cosmic rays have had an effect on the global temperature rises of the past century.

What did they do?

The researchers used a statistical analysis known as convergent cross-mapping (CCM). Rather than being a coming together of angry cartographers, CCM allows you to see how likely it is that one variable is the cause of another over time.
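The core trick of CCM (introduced by Sugihara and colleagues) is to reconstruct a ‘shadow manifold’ from one time series and test how well its neighbourhoods predict the other series. The toy implementation below is my own sketch of that general idea – slow, pure-Python, with invented demo data – and not the analysis actually used in the paper.

```python
import math

def delay_embed(series, E, tau):
    # E-dimensional delay-coordinate vectors (the 'shadow manifold')
    return [tuple(series[i - j * tau] for j in range(E))
            for i in range((E - 1) * tau, len(series))]

def ccm_skill(x, y, E=2, tau=1):
    """Cross-map y from the shadow manifold of x.

    High skill (correlation) suggests y causally influences x,
    because a driver leaves its signature in the dynamics of the
    variable it drives."""
    offset = (E - 1) * tau
    manifold = delay_embed(x, E, tau)
    preds, actual = [], []
    for i, v in enumerate(manifold):
        # E+1 nearest neighbours of this point on x's manifold
        nbrs = sorted((math.dist(v, w), j)
                      for j, w in enumerate(manifold) if j != i)[:E + 1]
        dmin = nbrs[0][0] or 1e-12
        weights = [math.exp(-d / dmin) for d, _ in nbrs]
        # estimate the contemporaneous y as a weighted neighbour average
        est = sum(w * y[offset + j] for w, (_, j) in zip(weights, nbrs))
        preds.append(est / sum(weights))
        actual.append(y[offset + i])
    # Pearson correlation between cross-mapped and true y = CCM skill
    n = len(preds)
    mp, ma = sum(preds) / n, sum(actual) / n
    sp = math.sqrt(sum((p - mp) ** 2 for p in preds))
    sa = math.sqrt(sum((a - ma) ** 2 for a in actual))
    return sum((p - mp) * (a - ma)
               for p, a in zip(preds, actual)) / (sp * sa)

# Demo with invented data: y drives x via coupled logistic maps,
# so x's manifold should cross-map y with clearly positive skill.
x, y = [0.4], [0.2]
for _ in range(1100):
    y.append(3.8 * y[-1] * (1.0 - y[-1]))
    x.append(x[-1] * (3.8 - 3.8 * x[-1] - 0.1 * y[-2]))
x, y = x[100:], y[100:]  # drop the transient
skill = ccm_skill(x, y)
```

Real CCM additionally checks that the skill *converges* (improves as more data is used) – that convergence is the signature that separates causality from mere correlation.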

They looked at both short-term and long-term trends, to see whether there was a causal link between cosmic rays and global temperature on short timescales (year-to-year) and on longer ones (across the last century).


Did they prove anything?

The researchers claim there is no significant causal link between cosmic rays and the long-term increase in global temperature (global warming). However, year-to-year, they noticed a ‘modest causal effect’.

So, what does it mean?

The researchers claim that cosmic rays are likely to affect the Earth’s temperature year to year, but not the global warming phenomenon observed over the past century (which they appeared very keen to stress).


The idea that radiation that has travelled hundreds or thousands of light years can affect weather on Earth is pretty awesome though. One theory is that cosmic rays originate from supernovae (exploding giant stars), so next time it’s raining on you, rather than getting annoyed about your choice of footwear, just think that those clouds could have been formed by an unfathomably distant star as it died thousands of years ago in the most spectacular, violent and beautiful way imaginable. Mind blown.

Original article in PNAS March 2015

All images are open-source/Creative Commons licence.
Credit: TSIC (First); Kurdistann (Second); Soudan2/Deglr6328 (Third and title); NASA/ESA/JHU/R. Sankrit and W Blair (Fourth)

Text © thisscienceiscrazy. If you want to use any of the writing or images featured in this article, please credit and link back to the original source as described HERE.

Find more articles like this in:

space, weather and earth sci

Tsonis, A., Deyle, E., May, R., Sugihara, G., Swanson, K., Verbeten, J., & Wang, G. (2015). Dynamical evidence for causality between galactic cosmic rays and interannual variation in global temperature. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1420291112

Genetically-modified mice resistant to frostbite


The chilling spectre of winter has fallen upon those of us in the higher latitudes of the Northern Hemisphere, and we humans are fortunate enough to have a number of ways of coping with the cold.

But for creatures that don’t have electric heaters, hot chocolate and fleece onesies, there are many ways to survive the frigid hibernal months: extra fur, blubber and hibernation are some of the more well-known tactics.


Some creatures even have the ability to let themselves freeze solid (like a Martian Ice Warrior, or Walt Disney) by replacing some of the water in their cells with sugar and urea (the smelly substance in urine). This prevents ice crystals from growing into the cells and disrupting their outer membranes (which would kill them).

A team of scientists from the US and Turkey discovered a species of tick which uses a similar method – except that it uses a special sugar-protein (glycoprotein) to prevent the growth of ice crystals. In a recent study, they genetically engineered mice to produce this glycoprotein themselves, to see if it could make them more resistant to frostbite.

So, what’s the point?

The research team wanted to see if the glycoprotein could help to prevent cold damage in mammals. If it works, it could be of benefit not only to frostbite victims but also in organ transplants, where the donor organ is typically kept cold during transport – the glycoprotein could help to reduce damage to the organ, making it more likely to function in the patient.

Perhaps more importantly, antifreeze proteins could be used in ice-cream! Unilever are apparently investigating the use of a similar protein to help stabilise ice-cream when the fat content is reduced, meaning that it could reduce the number of calories (and we can therefore eat more!).


What did they do?

First, the scientists inserted the glycoprotein gene into the genome of a group of mice, giving them the ability to generate the glycoprotein themselves. They then interbred the mice for 8 generations, to the point where every mouse in the group could make the glycoprotein (i.e. the mice were homozygous for the glycoprotein gene).

They then compared the transgenic (i.e. genetically modified) mice with a group of regular, ‘wild type’ mice via a series of tests. First, they removed some skin cells from the two sets of mice and subjected them to extreme cold before observing the response.

They then dipped the tails of around 100 mice into a bath at −22 °C and looked for symptoms of necrosis (tissue death). They also took samples of the tails and looked for several other signs of frostbite, including auto-amputation (the tail falling off), cell ‘stress’ chemicals (cytokines) and immune-system response.

Did they prove anything?

The scientists found that the cells from the ‘wild type’ mice didn’t multiply, but the cells from the transgenic mice did. This suggests that the individual cells are more resistant to cold-damage if they can produce the glycoprotein.

The tail-dipping experiment was to see if the frostbite-resistance could scale up to the level of the organism. The researchers claimed that 89% of the ‘wild type’ mice showed signs of frostbite, but only 40% of the transgenic mice did, suggesting that the glycoprotein confers frostbite resistance to a living organism as well as a handful of individual cells.
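As a rough feel for whether a gap like 89% vs 40% could plausibly be chance, a 2×2 chi-squared test is the standard tool. The counts below are hypothetical – the article only says ‘around 100 mice’, and the even split between groups is my assumption – so this illustrates the method rather than reproducing the paper’s statistics.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic and p-value (1 degree of
    freedom) for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # for 1 degree of freedom, P(X >= chi2) = erfc(sqrt(chi2 / 2))
    return chi2, math.erfc(math.sqrt(chi2 / 2))

# hypothetical counts: 44/50 wild-type vs 20/50 transgenic frostbitten
stat, p = chi2_2x2(44, 6, 20, 30)  # p comes out far below 0.05
```

With counts like these the difference would be nowhere near chance – though the real verdict, of course, depends on the actual group sizes in the paper.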

So, what does it mean?

It appears that their experiments worked – symptoms of frostbite seem to occur less often in the transgenic mice than the ‘wild type’. Importantly, they used a battery of tests to look at the differences between the two groups, investigating response of the tissue as a whole, the cells comprising it and the immune response.

Now that they have shown that the glycoprotein can help reduce cold damage in a mammal, it would be interesting to see if it can help to make cold-stored transplant organs more viable.

Helping patients with frostbite may be a bit trickier as it might have to be used as a preventative method rather than a treatment for someone who is already frostbitten (e.g. a mountain-climber injects themselves with the glycoprotein before they go out in the cold).


So while glycoproteins like these are never likely to be used by rodent polar explorers, they could one day help save the lives of critically ill people – and also change the world of ice-cream forever!

Original article in PLOS ONE Feb 2015

All images are open-source/Creative Commons licence.
Credit: SlimVirgin (First); opencage.info (Second); Tomi Tapio K (Third and Title).

Text © thisscienceiscrazy. If you want to use any of the writing or images featured in this article, please credit and link back to the original source as described HERE.

Find more articles like this in:

biochem, genetics, Plants and animals

Heisig, M., Mattessich, S., Rembisz, A., Acar, A., Shapiro, M., Booth, C., Neelakanta, G., & Fikrig, E. (2015). Frostbite Protection in Mice Expressing an Antifreeze Glycoprotein. PLOS ONE, 10 (2). DOI: 10.1371/journal.pone.0116562

Biochemical ‘memory’ can help bacteria to grow



When we think of ‘memory’ we typically think of the brain recalling facts and events, but ‘memory’ can take other forms: some plastics can ‘remember’ particular shapes, the shape or magnetic alignment of a material can be used to store digital data, and our immune systems have a capacity to ‘remember’ past infections.

Bacteria are also believed to display a kind of ‘memory’, which helps them to digest nutrients they have encountered recently. A new study by researchers in the US explores this ‘memory’ in E. coli, to see how it affects their ability to grow in environments where the food source changes.

So, what’s the point?

Bacteria are hardy survivors – the Mad Max of organisms. They are thought to be one of the very first forms of life to have evolved, and will certainly be around long after humans have gone (in fact, they’ll almost certainly have eaten our physical remains).

Central to their incredible capacity to survive is their ability to eat different things. Now while this might not seem particularly impressive to us – a species that invented the bacon ice-cream sundae or the cheeseburger in a can – species of bacteria have evolved to eat substances as unappetising as concrete, petroleum and nuclear waste (although no peer-reviewed tests have been conducted to see if any can eat the cheeseburger in a can).


When a bacterium is exposed to a new food source, it typically takes a while to become adjusted to it (the so-called ‘lag phase’). In this phase, the bacterium is generating the biochemical machinery (enzymes etc.) needed to digest the new nutrients it finds itself surrounded by.

During the ‘lag phase’ the bacteria are not dividing and so growth of the bacterial population is temporarily stunted. The researchers wanted to see if bacteria could, in effect, be trained to grow on more than one food source and reduce or eliminate the ‘lag phase’.

This idea is not completely crazy – the authors claim that ‘memory’ has been previously observed in bacteria, as some of the metabolic machinery for the original food source will still kick around inside the cell even after the bacterium has adjusted to the new food source.

Figuring out whether or not bacterial ‘memory’ helps them to quickly adapt to changing environments could be important in understanding how to control the growth and spread of bacteria. This is not just useful for killing harmful bacteria that we don’t want, but also for culturing or harvesting bacteria that we do want, for instance those used in waste treatment or for producing therapeutic molecules such as human insulin.

What did they do?

The researchers developed a microfluidic device which fed a culture of bacteria (E. coli) a stream of either glucose or lactose, with the feed changing every 4 hours for 3 complete glucose/lactose cycles (24 hours in total).


As the feed flows past the culture, individual bacteria are washed away, and are measured at a point downstream. The number of bacteria washed away in the stream is used as an indication of the total size of the population.

In a second experiment, they tested how long this ‘memory’ persists by growing bacteria on lactose for four hours, switching to glucose for a varying time (4, 5.5, 7, 9 and 12 hours) then measuring the ‘lag phase’ (if any) when the feed was switched back to lactose.

Did they prove anything?

The first time lactose was introduced (after 4 hours of glucose), there was a significant ‘lag phase’ as the bacteria adjusted to the new food (see Graph A below). However, the next time lactose was introduced, the ‘lag phase’ was much shorter, and in subsequent glucose/lactose changes there was no ‘lag phase’ at all, with the transitions described as ‘seamless’ (see Graph B below).

[Graphs A–C: bacterial phenotypic memory]

However, in the second experiment where the exposure time to glucose was varied, they found that when the bacteria were deprived of lactose for longer than four hours, the ‘lag phase’ reappeared. The ‘lag phase’ generally increased as the time away from lactose increased (see Graph C above).

In further tests, they looked at the lactose-digesting machinery (specifically proteins known as LacY and LacZ). They stated that LacY and LacZ are degraded extremely slowly, so this alone is unlikely to be the cause of the increase in ‘lag phase’.

However, they reckoned that LacY and LacZ are passed on from mother cell to daughter cell as each bacterium divides, causing ‘dilution’ of these proteins in each resulting cell (all of the residual LacY/LacZ of the mother cell would be shared between mother and daughter after division).

This can explain why there is little change in the length of the ‘lag phase’ at lactose-deprivation times below 4 hours (not much cell division occurs), but above this time there is a general increase in the ‘lag-phase’ as the number of cells dividing increases with time (sharing roughly the same amount of LacY and LacZ between a larger number of cells).
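That dilution argument can be captured in a one-line toy model – all numbers here (the ~4-hour delay before division resumes and a 1-hour doubling time) are my own assumptions for illustration, not measurements from the paper:

```python
def residual_fraction(hours_on_glucose, growth_delay=4.0, doubling_time=1.0):
    """Fraction of the original per-cell LacY/LacZ pool remaining,
    assuming negligible degradation and halving at every division.
    Cells are assumed not to divide for `growth_delay` hours after
    the switch to glucose."""
    divisions = max(0.0, hours_on_glucose - growth_delay) / doubling_time
    return 0.5 ** divisions
```

This reproduces the qualitative pattern: essentially no loss below ~4 hours on glucose, then a steady exponential decline – and hence a growing ‘lag phase’ – as division shares the same stock of proteins among ever more cells.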

So, what does it mean?

There certainly appears to be a link between the length of time since the bacteria were exposed to lactose and the length of the ‘lag phase’, and the reasoning that cell-division is responsible appears to be consistent with the results.

This study could help scientists to better understand the mechanisms behind the growth behaviour of bacteria – important information in trying to control them. This is essential for harnessing their incredible capabilities technologically, and for curtailing their potential to harm us too.

Original article in PLoS Genetics Sep 2014

All images are open-source/Creative Commons licence.
Credit: Bobjgalindo (First); A Gatilao (Second); NIH/NIAID (Third); G Lambert and E Kussell (Fourth)

Text © thisscienceiscrazy. If you want to use any of the writing or images featured in this article, please credit and link back to the original source as described HERE.

Find more articles like this in:


Lambert, G., & Kussell, E. (2014). Memory and Fitness Optimization of Bacteria under Fluctuating Environments. PLoS Genetics, 10 (9). DOI: 10.1371/journal.pgen.1004556

Guiding light to boost algae biofuel production



Algae are aquatic organisms that make ponds murky and biofoul the hulls of boats and ships, slowing them down. But these tiny green creatures could also be the future of fuel production – they produce natural oils (lipids) which can be extracted and turned into a wide range of hydrocarbon fuels, including diesel and kerosene.

Finding ways of growing the algae as efficiently as possible is essential for this technology to become commercially viable, and developing photobioreactors (basically glorified jars of algae) which can maximise the amount of sunlight that the algae receive is an important part of improving this efficiency.

A team of researchers from the US has investigated using a quirk of light exploited in fibre-optics to improve the distribution of light in an algae tank and boost growth.

So, what’s the point?


This study uses slab waveguides, which exploit a phenomenon known as ‘total internal reflection’, where light waves are trapped inside an object, continually bouncing off its inside edges (see images above).

Total internal reflection allows a light wave to be guided in a similar way to how a pipe or hose guides a flow of water, and is used in applications such as fibre-optics. Here, the researchers wanted to use the phenomenon to guide some of the incoming sunlight into the darker parts of the algae tank, where it doesn’t usually penetrate.

While the researchers say that using waveguides to scatter light in photobioreactors has been done before, this study tests different scattering schemes in order to distribute light as evenly as possible throughout the tank.

Making the distribution of light as even as possible should improve the growth rate of the algae, and therefore make biofuel production more efficient.

What did they do?

In order to make the light leave the slab waveguide, the researchers attached tiny pillars designed to scatter some of the light from the waveguide, distributing it into the tanks (see image below).

[Image: pillars on the waveguide]

The waveguide/pillar system effectively turns incoming sunlight (which would normally only light up the outer surface of the algae tanks) into a series of small lights which can be spread evenly throughout the inside of the tank.

However, because light is being removed from the waveguide by each pillar, the intensity of the light is much lower at the end of the waveguide than it is at the start. So the researchers wanted to see if they could vary the distances between the pillars in order to compensate for this by having pillars more spread out at the beginning and more concentrated at the end (‘gradient’ system, see image below).
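A toy model shows why the gradient helps (the scattering fraction and spacings here are invented numbers, not the paper’s design): if each pillar scatters a fixed fraction of whatever light reaches it, the per-pillar output decays geometrically along the guide – but shrinking each gap in proportion to the remaining guided power keeps the scattered power per unit length flat.

```python
def emitted_per_pillar(num_pillars, frac):
    """Power scattered by each successive pillar when every pillar
    removes the fraction `frac` of the light reaching it."""
    power, out = 1.0, []
    for _ in range(num_pillars):
        out.append(power * frac)   # light scattered into the tank here
        power *= 1.0 - frac        # light continuing down the guide
    return out

def gradient_gaps(num_pillars, frac, first_gap=1.0):
    """Gap in front of each pillar, shrunk in proportion to the
    remaining guided power, so emission per unit length stays flat."""
    power, gaps = 1.0, []
    for _ in range(num_pillars):
        gaps.append(first_gap * power)
        power *= 1.0 - frac
    return gaps
```

With uniform spacing the emitted power fades towards the far end; with the gradient gaps, scattered power divided by gap length is the same at every pillar – the even illumination the researchers were after.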

[Image: pillar spacings on the waveguide]
Did they prove anything?

In an experiment using dye-stained water (to imitate a thick biofilm), the researchers found that their ‘gradient’ pillar system successfully distributed light of roughly the same intensity across the length of the waveguide.

They then tested their system in a thin bioreactor against a similar system with evenly-distributed pillars. Crucially, they found that changing the spacings between the pillars (more spread out at the start of the waveguide) boosted algae growth by ‘at least 40%’.

So, what does it mean?

It seems as if their idea has worked – the ‘gradient’ pillar scheme appears to distribute light evenly across the tank and led to an increase in algae growth compared to the evenly-distributed pillar scheme.

This is a simple, but quite clever, engineering solution to the problem of enabling sunlight to penetrate into a tank of algae and could help large-scale algae biofuel-production to become a reality one day.

There is also potential to use waveguides to collect light that would normally fall outside the tank, boosting the overall amount of light available for the algae.

It may be a while before algae are helping us to drive around, but research like this adds another piece to the jigsaw that could help the dream of green fuels become realised.

Original article in Optics Express Sep 2014

All images are open-source/Creative Commons licence.
Credit: IGV Biotech (First and title); Josell7 and Sai 2020 (Second); Ahsan et al. (modified by TSIC) (Third and Fourth); NEON_ja (Fifth)

Text © thisscienceiscrazy. If you want to use any of the writing or images featured in this article, please credit and link back to the original source as described HERE.

Find more articles like this in:

engineering, chem and phys
Ahsan, S., Pereyra, B., Jung, E., & Erickson, D. (2014). Engineered surface scatterers in edge-lit slab waveguides to improve light delivery in algae cultivation. Optics Express, 22 (S6). DOI: 10.1364/OE.22.0A1526

Centrifuging people to see if gravity affects perception



Which way is up? It’s a question that needs to be answered for seeds to grow in the right direction, for homing pigeons to navigate and for Stoke City defenders to know where to hoof the ball.

Our bodies can sense the direction of gravity, which helps us to figure out how we are orientated relative to the ground – essential for maintaining balance. This is rarely an issue for most of us here on Earth, but for astronauts on the Moon, where gravity is only around 1/6th of what it is here, keeping upright is more problematic.

This can result in some of the most dangerous pratfalls known to humankind, although thankfully no-one has yet damaged their space-gear as a result of lunar stumbles (or had a video of their embarrassing topples sent in to You’ve Been Framed).

But precisely how much gravity do we need to feel to maintain balance as well as we do on Earth? A new study by researchers from Canada and Germany tries to find out – by placing intrepid volunteers in astronaut-style centrifuges.

So, what’s the point?

The researchers say that our idea of ‘up’ is determined by a combination of three things: our body position (i.e. standing up vs. lying down), visual cues and direction of gravity sensed by our bodies.
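A common way to formalise this ‘combination of three things’ is as a weighted vector sum of the cue directions. The sketch below, including the weights, is my own illustration of that general modelling approach, not the authors’ fitted model:

```python
import math

def perceived_up(cues):
    """cues: (direction_in_degrees, weight) pairs for body position,
    visual cues and sensed gravity. Returns the direction of the
    summed vector, i.e. the predicted 'perceptual upright'."""
    x = sum(w * math.cos(math.radians(a)) for a, w in cues)
    y = sum(w * math.sin(math.radians(a)) for a, w in cues)
    return math.degrees(math.atan2(y, x))
```

When all three cues agree, the answer is trivially their shared direction; when gravity points somewhere else (as in a centrifuge), its weight determines how far it drags the perceived ‘up’ away from the body and visual cues.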


They wanted to test how important a role gravity plays in this complex mix and just how much pull is required to help us to understand how we are orientated. This could have implications in future manned space missions as well as helping us to better understand the way our bodies sense our surroundings.

What did they do?

It might sound like an experiment designed to find out how much vomit a human can produce, but willing participants were spun in a centrifuge while either sitting up or lying on their backs or sides.

As they were spun round, the letter ‘p’ would appear in a range of orientations on a screen in front of each participant’s face. The participants had to indicate whether they thought each letter was a ‘p’ or a ‘d’, while a special computer algorithm varied the orientation of the letter to find the angle at which each person chose ‘p’ and ‘d’ in a 50:50 ratio.
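One standard choice for such an algorithm is an adaptive ‘staircase’. The paper’s exact scheme isn’t described here, so the simulated observer and every parameter below are invented for the demo – but a minimal 1-up/1-down staircase shows how the procedure homes in on the angle where the two answers are equally likely:

```python
import random

def staircase(true_boundary, trials=400, step=2.0, start=40.0,
              noise=3.0, seed=1):
    """1-up/1-down staircase: after a 'p' answer, tilt the letter
    further toward 'd' territory, and vice versa, so the track
    oscillates around the 50:50 'p'/'d' angle."""
    rng = random.Random(seed)
    angle, track = start, []
    for _ in range(trials):
        # simulated observer: answers 'p' below their (noisy) boundary
        says_p = angle + rng.gauss(0.0, noise) < true_boundary
        angle += step if says_p else -step
        track.append(angle)
    tail = track[len(track) // 2:]   # discard the approach phase
    return sum(tail) / len(tail)     # estimate of the 50:50 angle

estimate = staircase(10.0)  # should land near the true boundary of 10
```

The 1-up/1-down rule converges on the 50% point by construction: whenever answers lean one way, the stimulus is pushed the other way, so the track settles where both answers are equally probable.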

The researchers wanted to see whether this angle changed depending on how fast the centrifuge spun. They could then compare these results to control experiments in which participants completed the same task without being spun around, either sitting up (experiencing gravity along their bodies) or lying on their sides or backs (experiencing gravity across their bodies).

Did they prove anything?

The researchers found that as centrifugation speed (simulated gravity) increased, the angle at which the participants selected ‘p’ and ‘d’ in a 50:50 ratio got closer to zero. In other words, feeling the pull of simulated gravity allowed participants to establish the ‘perceptual upright’ closer to what it was in reality.

The scientists compared the results obtained for different body positions as well, trying to calculate how important gravity, visual cues and body position are at each centrifugation speed. Their graph suggests that as simulated gravity increases, the contribution of gravity towards the ‘perceptual upright’ increases, while the contributions of visual cues and body position decrease (see graph below).

[Graph: contribution of gravity to the perceptual upright]

So, what does it mean?

While a pattern appears to emerge from this study linking increased gravity to the ‘perceptual upright’, only 10 participants were tested. As a result of the small sample size the errors are pretty large, and the error bars on the extreme values of g vs. ‘perceptual upright’ overlap (Figure 3A in the paper), so it’s difficult to say conclusively whether or not the effect of gravity is genuine.

This still seems like an interesting idea, and using centrifuges to test how we sense gravity certainly appears promising. While more tests need to be done, there can be no doubt that sticking people in centrifuges is pretty cool (I’d definitely volunteer for the next set of tests!).

Original article in PLOS One Sep 2014
All images are open-source/Creative Commons licence.
Credit: NASA GSFC (First); M Berch (Second); L R Harris et al. (Third)
Text © thisscienceiscrazy. If you want to use any of the writing or images featured in this article, please credit and link back to the original source as described HERE.

Find more articles like this in:

Sense and mind, space

Harris, L., Herpers, R., Hofhammer, T., & Jenkin, M. (2014). How Much Gravity Is Needed to Establish the Perceptual Upright? PLoS ONE, 9 (9) DOI: 10.1371/journal.pone.0106207

Can our brains process words while we sleep?



Learning by listening to things while you sleep might be a desperate last resort for budding linguists and university students cramming for their finals, but how much can the human brain actually take on board while in a state of unconsciousness?

It is fairly well-established that the brain processes information while we sleep (such as dealing with memories and ‘information of the day’) and can even respond to certain external stimuli (for instance, the hypnic jerk is believed by some to be an ancient response that evolved to prevent our ancestors from falling from trees as they slept).

Now a team of scientists from France and the UK has tried to see whether our sleeping brains can process words to the point of understanding their simple meanings.


So, what’s the point?

The human brain is capable of incredible things – the source of music, logic, poorly-written science blogs and language.

In order for language to work, our brains need to be able to interpret the meanings of words. While this might seem like an obvious and simple task, you have to remember that individual words come loaded with various connotations and nuances (the word ‘set’, for instance, is described on dictionary.com as having 100 separate meanings, involving subjects as diverse as surgery, tennis and chickens).

Another level of complexity in meaning is the categorisation of a word: a simple noun such as ‘ball’ could be considered a ‘toy’, ‘sporting equipment’ or a synonym of ‘sphere’, and can itself be sub-categorised into different types of ball.

In this study, the researchers wanted to see whether the sleeping brain can distinguish between words which are the names of animals and those which aren’t: a simple interpretation of the meaning of those words.

This kind of study could help to shed light on how our brains work and how we process information, as well as helping us to gain a better understanding of sleep – a state in which we spend around one-third of our lives (although it’s probably closer to half for teenagers and blog-writers).

What did they do?

Volunteers were placed in a dark room, each sitting back in a reclining chair with their eyes closed, and encouraged to drift off to sleep. Each wore an EEG (electroencephalography) cap to monitor brain activity.

While they were drifting off, the participants were played the names of objects, some of which were animals, and instructed to press a button with their left hand if they heard the name of an animal, or one with their right hand if it was a non-animal.


Movements of the right-hand side of the body are controlled by the left hemisphere of the brain, and vice versa – meaning that, while conscious, the participants’ brains would associate animal words with motor activity in the right hemisphere and non-animal words with the left.

Words were played at 6-to-9-second intervals, and the participants continued to press the buttons as they descended into sweet slumber; the words kept playing while they slept. The EEG caps monitored brain activity to determine precisely when each participant fell asleep, but also to see whether the right and left hemispheres continued to light up in response to animal and non-animal words respectively, even though the participants were no longer pressing the buttons.

Did they prove anything?

Weirdly enough, the volunteers’ brains continued to show activation in the right hemisphere in response to animal words and in the left hemisphere for non-animal words. The scientists reckoned this shows that their brains could still process the words and interpret the meaning of ‘animal’ and ‘non-animal’, even though the participants were asleep.


In a second experiment, volunteers were presented with a list of words: some had been played to them while awake, some while asleep, and some had not been played at all. They had to indicate whether or not they thought each word had been played to them.

Generally speaking, ‘participants could distinguish new words presented during wake period… but crucially not from words presented during sleep’. In other words, while the brain is able to linguistically process words during sleep to some extent, the volunteers did not remember them.

So, what does it mean?

This appears to be pretty strong evidence that the brain can process the meanings of words we hear during sleep, at least at a fairly simplistic level of understanding, and raises the question: ‘what else can the brain do during sleep?’

It would be incredibly interesting to find out precisely how sophisticated the brain’s functions are, not only during light sleep, but at deeper stages and the highly-active REM stage too.

While the student dream of being able to learn important exam facts while sleeping off an evening of tequila and aftershocks might not have been realised, this study does provide an exciting insight into what our unconscious mind is capable of.

Original article in Current Biology Sep 2014
All images are open-source/Creative Commons licence.
Credit: A Ajifo (First); USNARA (Second); S Kouider et al. (Third); C Hope (Fourth)

Text © thisscienceiscrazy. If you want to use any of the writing or images featured in this article, please credit and link back to the original source as described HERE.

Find more articles like this in:

Sense and mind

Kouider, S., Andrillon, T., Barbosa, L., Goupil, L., & Bekinschtein, T. (2014). Inducing Task-Relevant Responses to Speech in the Sleeping Brain Current Biology, 24 (18), 2208-2214 DOI: 10.1016/j.cub.2014.08.016