NHS Choices

Do potatoes reduce stomach cancer risk?

NHS Choices - Behind the Headlines

"Eating lots of potatoes will reduce your risk of getting stomach cancer," according to enthusiastic media reports that seized on the UK's love affair with the spud.

The mouth-watering headline followed the publication of a large Chinese review into the link between diet and stomach cancer, which involved 76 studies and 6.3 million people across several countries.

However, the news reports were perhaps a little hasty in their conclusions – the study didn't find any specific link between eating potatoes and a lower risk of stomach cancer.

Stomach cancer is one of the most common cancers, accounting for almost 10% of cancer deaths. Research suggests some foods may help protect against stomach cancer, while others may increase the risk of getting it.

The media focus on potatoes seems to have come from the link researchers found between the cancer and white vegetables in general, such as potatoes, cabbage, onions and cauliflower.

The study found eating lots of different types of fruit, white vegetables and vitamin C was associated with a lower risk of stomach cancer.

A high intake of fruit was associated with a 7% reduction in stomach cancer risk, while white vegetables were associated with a 33% lower risk. Meanwhile, a diet high in pickled vegetables, processed meats like sausages, salted foods and alcohol was associated with an increased risk.

Although it has several limitations, this large review will contribute to the growing body of evidence on the dietary associations with stomach cancer.

However, it is not possible to give any firm conclusions based on this review alone. It is certainly not possible to say at this stage that eating potatoes will reduce your risk.

Where did the story come from?

The study was carried out by researchers from Zhejiang University in China, and was funded by the Chinese National Natural Science Foundation and the Natural Science Foundation of Zhejiang Province.

It was published in the peer-reviewed European Journal of Cancer.

The media seems to have gone with the slant that eating potatoes will decrease your risk of stomach cancer. 

But this was not a specific finding of this review, which actually found a reduced risk of stomach cancer was associated with a higher consumption of "white vegetables".

White vegetables include potatoes, as well as cabbage, cauliflower and onions. The review did not find any link at all when it looked specifically at potatoes. 

What kind of research was this?

This systematic review aimed to pool the results of published prospective cohort studies that examined whether individual dietary factors are associated with stomach cancer risk.

As the researchers say, stomach (gastric) cancer is the fourth most common cancer in men and the fifth most common cancer in women worldwide, and accounts for just under 10% of deaths from cancer.

Dietary factors are believed to play a role in stomach cancer risk. Many previous observational studies have looked into this, including the large European Prospective Investigation into Cancer and Nutrition (EPIC) study.

The researchers say these studies suggest that processed meat may slightly increase risk, while a higher consumption of fruit and veg may decrease risk.

A systematic review is the best way to identify all published research on a given question and summarise what this evidence suggests.

Singling out individual dietary factors associated with health outcomes is challenging, as other dietary and lifestyle factors play a part, and it can be difficult to remove their effects.

Usually, drawing firm conclusions about what causes a particular disease requires drawing together a wide range of different types of evidence.

What did the research involve?

The researchers reviewed several literature databases to identify prospective cohort (observational follow-up) studies published up to the end of June 2015.

Eligible studies had to have looked at any dietary exposure (food, drinks or nutrients) and examined the risk of stomach cancer as the outcome.

Relevant studies were assessed for quality and two researchers independently extracted data from the studies to reduce the risk of error.

In total, 76 studies met the inclusion criteria, all of which were rated as being of moderate to high quality. These studies had followed a total of 6,316,385 people for 11.4 years, on average, and identified 32,758 new cases of stomach cancer over this period.

Thirty-seven of the studies were conducted in Europe, 11 in the US, 21 in Japan, four in China and three in Korea. The diets they were examining varied widely, from alcohol and salted foods to green tea and ginseng. The researchers pooled studies looking at the same food or food type to give an overall result.

What were the basic results?

Looking at studies examining fruit and vegetables, the results of 22 studies were pooled in an analysis of total vegetable consumption. No link was found with stomach cancer.

Meanwhile, 30 studies of total fruit consumption found a higher intake of fruit was associated with a 7% reduction in stomach cancer risk (relative risk [RR] 0.93, 95% confidence interval [CI] 0.89 to 0.98).
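
As a guide to reading these figures – a simplification rather than the paper's own derivation – the quoted percentages come straight from the relative risk: a change of (1 − RR) × 100% for reductions, with the sign reversed for increases. For the fruit and white vegetable findings:

\[
(1 - 0.93) \times 100\% = 7\%\ \text{lower risk (fruit)}, \qquad (1 - 0.67) \times 100\% = 33\%\ \text{lower risk (white vegetables)}
\]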

There was no association with stomach cancer for many of the specific fruits and vegetables examined. However, there were significant links with a few:

  • white vegetables were associated with a 33% decrease in risk (RR 0.67, 95% CI 0.47 to 0.95; data came from six studies)
  • pickled vegetables were associated with an 18% increase in risk (RR 1.18, 95% CI 1.02 to 1.36; data came from 20 studies)
  • tomatoes were associated with an 11% increase in risk (RR 1.11, 95% CI 1.01 to 1.22; data came from five studies)
  • spinach was associated with a 21% increase in risk (RR 1.21, 95% CI 1.01 to 1.46; data came from five studies)

Interestingly, despite the media focus on potatoes, no significant link was found between potatoes and stomach cancer (RR 0.93, 95% CI 0.82 to 1.06; seven studies).

Looking at other food types, significantly increased risk was found with:

  • processed meat (13 studies)
  • salted fish (11 studies)
  • high-salt foods (12 studies)
  • salt (8 studies)
  • alcohol (24 studies)
  • beer (13 studies)
  • liquor (12 studies)

A few studies looked at the effects of specific nutrients. The one significant result to come out of these analyses was that vitamin C was associated with an 11% lower risk in a pooled analysis of five studies (RR 0.89, 95% CI 0.85 to 0.93).

Where there was data available to examine the quantities of a specific food or drink needed to have an effect, the researchers found significant links for:

  • total fruit – 5% reduced risk per additional 100g a day
  • citrus fruit – 3% reduced risk per additional 30g a day

There weren't, however, significant dose links with white vegetables or vitamin C. If a factor directly affects risk, researchers would expect to see what they call a dose response – meaning the more fruit you eat, for example, the more your risk changes.

There were also significant dose links for:

  • salted fish and high-salt foods (4% and 10% increase for each item per week, respectively)
  • salt (12% increase per 5g a day)
  • alcohol (5% increase per 10g a day)

How did the researchers interpret the results?

The researchers concluded that, "This study provides comprehensive and strong evidence that there are a number of protective and risk factors for gastric cancer in diet." 

They say their findings "may have significant public health implications with regard to prevention of gastric cancer, and provide insights into future cohort studies and the design of related clinical trials."


Conclusion

This systematic review has gathered and summarised the results of prospective cohort studies published to date that have examined links between specific dietary items and the risk of stomach cancer.

The review has many strengths, including the large number of studies that have been reviewed and quality assessed, the large sample size, and extensive analyses carried out by individual food type.

But there are various points to note when interpreting these results. The media attention focused on potatoes and how we should eat these to reduce the risk of stomach cancer – this was not a finding of this study. It presumably comes from the decreased risk found with white vegetables.

However, exactly what white vegetables this included was not specified. Studies that specifically examined potatoes found no link with stomach cancer.

Also, researchers weren't able to say how many white vegetables would need to be eaten to have a preventative effect. Researchers would expect to find this sort of dose-response link if a specific food were directly affecting the risk of stomach cancer.

Eating lots of fruit and vitamin C were also associated with a lower risk. But, again, while researchers could say each 100g increase in fruit a day was associated with a decreased risk, no dose response was found with vitamin C.

Given the varied results for fruit and vegetables – overall and by specific type – it is difficult to give specific advice, other than that the findings generally support advice to eat a healthy, balanced diet with plenty of fruit and vegetables.

While all the studies were rated to be of moderate to high quality, they varied widely in the population assessed, follow-up time and the main food item being examined.

There are many unknowns that could impact on the strength of the evidence found by the individual studies. This includes the methods of assessing dietary intake and over what period this was examined, how cancer outcomes were assessed, or whether researchers adjusted for other health and lifestyle factors that may influence the results.

For example, smoking is an established risk factor for stomach cancer. Smoking – or not smoking – may be associated with other "healthy" or "unhealthy" dietary habits.

Generally, a diet high in fruit – and possibly certain vegetables – has for some time been recognised to potentially decrease the risk of stomach cancer. 

The World Cancer Research Fund published a similar review in 2007, concluding there was evidence to suggest that eating more fruit, non-starchy vegetables, and allium vegetables such as onions probably reduced stomach cancer risk, while salted and salty foods probably increased risk. At that point, there was not enough evidence to assess the effects of potatoes, vitamin C, or alcohol on stomach cancer risk.

This large study will contribute to the body of evidence on the dietary associations with stomach cancer. However, it is not possible to give any firm conclusions based on this review alone. It certainly cannot be said at this stage that eating potatoes will decrease your risk of stomach cancer.

Links To The Headlines

Potatoes can help cut cancer risk. The Daily Telegraph, November 28 2015

Potatoes reduce risk of stomach cancer. The Independent, November 28 2015

Want to reduce your risk of developing cancer? Eat more POTATOES, say experts. Daily Express, November 29 2015

Links To Science

Fang X, Wei J, He X, et al. Landscape of dietary factors associated with risk of gastric cancer: A systematic review and dose-response meta-analysis of prospective cohort studies. European Journal of Cancer. Published online November 14 2015


Sugar and water 'as good as a sports drink', says study

NHS Choices - Behind the Headlines - Mon, 30/11/2015 - 11:00

Adding a spoonful of table sugar to a glass of water could be just as good as – or better than – a sports drink, several media outlets have reported. The news comes from a study that compared whether a group of long-distance cyclists performed better when they had a glucose or sucrose mix drink.

Fourteen experienced male cyclists were randomly given a drink of sucrose or glucose stirred into water before and during a three-hour cycling stint.

Both drinks maintained the body's glucose stores, which are broken down to provide energy during physical activity if there's not enough glucose available in the bloodstream. However, British researchers found the cyclists performed better on the sucrose drink.

Many sports drinks designed to provide energy during exercise use sucrose or mixes of glucose and fructose – but many still rely on glucose alone. Sucrose is made up of glucose and fructose, whereas glucose is available in a form ready to be used by the body.  

The researchers suggest glucose-only drinks could produce gut discomfort, and sucrose-based alternatives, or simply sugar in water, could make exercise easier.

While the findings are interesting, this is a small study involving just 14 male endurance cyclists. The results can't inform us of the effects in women, less experienced exercisers, or people performing different types of exercise. Even for male cyclists, a much larger sample may give different results.

This study does inform us about how the body may use sucrose and glucose differently during exercise, but limited firm conclusions can be drawn about the best form of nutrition before, during or after exercise based on its results alone.  

Where did the story come from?

The study was carried out by researchers from the University of Bath, Northumbria University, Newcastle University, and Maastricht University.

It was funded by Sugar Nutrition UK and Suikerstichting Nederland, and was published in the peer-reviewed American Journal of Physiology – Endocrinology and Metabolism.

The news reporting is generally representative of the study's main findings, but would benefit from acknowledging that this research has limited implications because it used such a small, select sample group.

What kind of research was this?

This small randomised crossover trial aimed to compare the effects of glucose and sucrose (table sugar) drinks on the body during endurance exercise. A crossover trial means the participants acted as their own controls, drinking both drinks on two separate occasions.

Carbohydrate – which includes sugar – and fat are the main energy sources used during moderate-intensity endurance exercises. The carbohydrate source comes from glucose in the blood, which is continuously being topped up from the liver by the breakdown of glycogen, the stored form of glucose.

The body's glycogen stores therefore become depleted during exercise, unless carbohydrate is taken in the form of food or drink to provide a fresh source of glucose.

The researchers aimed to better understand the effect that drinking different types of sugary drinks has on the depletion of glycogen stores during exercise.

What did the research involve?

This study involved cyclists who performed endurance exercise while drinking either glucose or sucrose drinks. Researchers compared the cyclists' glycogen stores before and after exercise.

Fourteen healthy male endurance cyclists were involved in the study. They were randomised to receive either a glucose or a sucrose (granulated sugar) drink before an exercise test. One to two weeks later, they performed a repeat test after drinking the other drink.

On each occasion, participants arrived at the test centre after fasting for 12 hours and having avoided strenuous exercise for the previous 24 hours. The cyclists' last meal was standardised by the researchers, so they all had the same energy intake.

The carbohydrate test drinks were made up of 108g of either glucose or sucrose mixed with 750ml of water to give a 14.4% carbohydrate solution. Participants were given 600ml of the drink (86.4g carbohydrate) immediately before exercise, with a further 150ml (21.6g carbohydrate) given every 15 minutes during exercise.
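
As a quick arithmetic check, the gram doses quoted above follow directly from the drink's concentration:

\[
\frac{108\ \text{g}}{750\ \text{ml}} = 0.144\ \text{g/ml}, \qquad 600\ \text{ml} \times 0.144\ \text{g/ml} = 86.4\ \text{g}, \qquad 150\ \text{ml} \times 0.144\ \text{g/ml} = 21.6\ \text{g}
\]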

The exercise involved a five-minute warm-up at 100 watts, after which power was increased up to 50% of the individual's peak power output (established during preliminary tests) for the remaining three hours.

A special imaging technique called magnetic resonance spectroscopy (MRS) was used to examine the breakdown of glycogen in liver and muscle tissue before and after exercise.

The researchers took blood samples to look at glucose and lactate levels, as well as expired breath samples to look at oxygen and carbon dioxide levels. They also questioned the participants about abdominal discomfort and how tired they felt during exercise.

Four of the participants also attended on another occasion to perform a control exercise test, where they drank only water.

What were the basic results?

Liver glycogen stores did not decrease significantly after the exercise tests, and did not differ between the two drinks. Muscle glycogen stores did significantly decrease after the tests, but again were not significantly different between the two drinks. By comparison, both liver and muscle stores declined when only water was consumed during exercise.

Carbohydrate use was estimated by a calculation based on the difference between the carbon dioxide expired and the oxygen used up during exercise. This was significantly greater with sucrose than glucose, suggesting the sucrose drink was used more effectively to provide energy.
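
This kind of calculation is known as indirect calorimetry. As general background only – the article does not reproduce the study's exact equations – one widely used set of stoichiometric formulas (Frayn, 1983) estimates oxidation rates from the volumes of oxygen consumed and carbon dioxide produced, in litres per minute, ignoring the small protein term:

\[
\text{carbohydrate oxidation (g/min)} \approx 4.55\,\dot{V}_{\mathrm{CO}_2} - 3.21\,\dot{V}_{\mathrm{O}_2}, \qquad \text{fat oxidation (g/min)} \approx 1.67\,\dot{V}_{\mathrm{O}_2} - 1.67\,\dot{V}_{\mathrm{CO}_2}
\]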

Participants also reported their perceived exertion increased to a lesser extent during exercise when they had sucrose compared with glucose. Gut discomfort was also less with the sucrose drink.

How did the researchers interpret the results?

The researchers concluded that, "Both glucose and sucrose ingestion prevent liver glycogen depletion during prolonged endurance exercise".

They say sucrose ingestion does not preserve liver glycogen concentrations any better than glucose, but sucrose does increase whole-body carbohydrate utilisation compared with glucose.


Conclusion

This study aimed to see whether having a sugary drink available during endurance exercise preserves the body's glucose stores in the form of glycogen. The researchers also wanted to see whether there was any difference between sucrose and glucose in terms of performance.

As may be expected, the researchers found drinking both sucrose and glucose drinks during exercise provided an energy source, thereby preserving the body's glycogen stores.

However, the body seemed to make better use of the carbohydrate to provide energy when it was given in the form of sucrose rather than glucose, and participants reported feeling less exhausted.

The findings suggest both sucrose and glucose are a good energy source during exercise, though plain sugar (sucrose) in water had the slight edge in this study.

These tests involved just 14 male endurance cyclists, which is an important limitation of this study. This means we should take care before applying the results to all groups – for example, women, less experienced sportspeople, or people performing different types of exercise. Even for male cyclists, a much larger sample could have given different results.

There are also many different aspects related to sports nutrition that could be examined, such as the effects of eating food and drink containing different nutrient sources an hour or two before exercise, or the effects of eating after exercise in replenishing energy stores.

Overall, this study informs us about how the body may use sucrose and glucose differently during exercise, but limited firm conclusions can be drawn about the best form of sustenance before, during or after exercise.

Links To The Headlines

Energy drinks run out of fizz as scientists discover spoonful of sugar in water has same effect. The Daily Telegraph, November 27 2015

Sugary water better for performance than some sports drinks – study. The Guardian, November 28 2015

Forget sports drinks - a spoonful of sugar is the secret to athletic success. Daily Mail, November 28 2015

Links To Science

Gonzalez JT, Fuchs CJ, Smith FE, et al. Ingestion of Glucose or Sucrose Prevents Liver but not Muscle Glycogen Depletion During Prolonged Endurance-type Exercise in Trained Cyclists. American Journal of Physiology – Endocrinology and Metabolism. Published online October 20 2015


High-strength 'skunk' cannabis linked to brain changes

NHS Choices - Behind the Headlines - Fri, 27/11/2015 - 12:50

"Scientists warn smoking 'skunk' cannabis wrecks brains," The Sun reports, somewhat simplistically. A small study found some users of the high-strength skunk strain of cannabis had changes in nerve fibres in a specific part of the brain.

Researchers used MRI scanners to scan the brains of 99 adults – some with psychosis, some without – looking for any links between small changes in their brain structure and their cannabis habits.

The researchers looked specifically at the effect on the fine structure of the corpus callosum. This is a band of nerve fibres joining the left and right sides of the brain and is thought to help different parts of the brain "communicate" with each other.

They found users of skunk – as well as those who used any type of cannabis on a daily basis – had different structural changes in the corpus callosum, compared with those who smoked less or lower-strength strains.

What this study doesn't tell us is whether these structural changes do any harm or cause any negative mental health effects – which is why The Sun's headline is too strong. The study simply didn't look at this.

The effects of cannabis use – both in the short and longer term – are not firmly established. But cannabis is known to be one of many substances that can trigger a psychotic episode. Read more about psychosis.

The study adds new knowledge about the potential effect of cannabis smoking on the brain, which other researchers can build on. But this was exploratory research and cannot provide any concrete cause and effect conclusions.   

Where did the story come from?

The study was carried out by researchers from King's College London and the Sapienza University of Rome.

It was funded by a King's College London Translational Research Grant, the National Institute for Health Research (NIHR) Mental Health Biomedical Research Centre at the South London and Maudsley NHS Foundation Trust, and King's College London.

The study was published in the peer-reviewed Psychological Medicine on an open-access basis and can be read online for free.

Generally, the UK media covered the story accurately, but some of the headline writers overstepped the mark. The Sun's headline, "Scientists warn smoking 'skunk' cannabis wrecks brains", and the Daily Mail's, "Proof strong cannabis does harm your brain", were not based on any evidence.

This type of study cannot prove cause and effect, only suggest a possible link, so "proof" is too strong a term. Also, the study didn't look at how the small changes in the brain associated with skunk affected thoughts or other brain functioning, so it was not fair to say skunk "wrecks" the brain.

This study wasn't designed to look at the effect of skunk on mental health illnesses, only small changes in brain structure, so it tells us little about the link between cannabis use and the development of a mental health illness. 

What kind of research was this?

This cross-sectional study looked for differences in the structure of a specific area of the brain called the corpus callosum in people with psychosis and those without.

It also looked at how this was linked with their reported cannabis use. The researchers were most interested in the effect of cannabis potency and how regularly cannabis was used.

The research team says high-strength cannabis (skunk) has been associated with a greater risk and the earlier onset of psychosis – the experience of hallucinations or delusions, a characteristic feature of the mental health condition schizophrenia.

However, the possible effect of cannabis potency on brain structure has never been explored. The researchers set out to investigate this by studying the fine structure of the corpus callosum, a band of nerve fibres joining the left and right sides of the brain.

This type of study can't prove cannabis causes changes in brain structure or any associated mental health illness. A long-term cohort study would be needed for this – a randomised controlled trial wouldn't be appropriate for ethical and, in the UK, legal reasons. But this type of study can point to possible or probable links for further investigation, a useful exercise to guide the next round of studies. 

What did the research involve?

A group of 56 people with psychosis (37 cannabis users) and 43 people without psychosis (22 cannabis users) had their brains scanned. The scans were used to look for possible links between their cannabis habits and any differences in the structure of the corpus callosum area of their brains.

Those with psychosis had been medically diagnosed with first episode psychosis, which simply means someone who has experienced psychosis for the first time. Most of those with psychosis (53 of 56) were taking antipsychotic medication; just three were not.

The brain scans used an MRI technique – diffusion tensor imaging tractography – that maps how different parts of the brain are linked to each other and how easily information is transferred between both sides. The technique measures how freely water molecules move through brain tissue (diffusivity), which gives an indication of the integrity of nerve fibres: low diffusivity scores suggest healthy, intact fibres, while high diffusivity may indicate some form of damage.

The team looked at four common diffusion tensor imaging measures:

  • fractional anisotropy (FA)
  • mean diffusivity (MD)
  • axial diffusivity (AD)
  • radial diffusivity (RD)

FA is a sensitive way of picking up small structural changes in the brain, but is relatively non-specific. MD, AD and RD give more specific indications of the type of change that has occurred.
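
For the technically minded, these four measures are standard summaries of the three eigenvalues (λ1 ≥ λ2 ≥ λ3) of the diffusion tensor estimated at each point in the brain. The article does not define them, so the following is general background rather than the study's own notation:

\[
\mathrm{AD} = \lambda_1, \qquad \mathrm{RD} = \frac{\lambda_2 + \lambda_3}{2}, \qquad \mathrm{MD} = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3}
\]

\[
\mathrm{FA} = \sqrt{\tfrac{3}{2}}\;\frac{\sqrt{(\lambda_1 - \mathrm{MD})^2 + (\lambda_2 - \mathrm{MD})^2 + (\lambda_3 - \mathrm{MD})^2}}{\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}
\]

Broadly, AD reflects diffusion along the main fibre direction, RD diffusion across it, and FA how strongly diffusion is oriented in a single direction.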

All participants filled in an illicit drugs questionnaire that included their cannabis smoking habits, when they first started, the strength they used, and how often they used it.

The statistical analysis took account of the following confounders:

  • sociodemographic factors
  • age
  • gender
  • ethnicity
  • some lifestyle factors, such as alcohol intake

What were the basic results?

There were some interesting results, not all of which were picked up in the media reports. For example, those diagnosed with psychosis were more likely to have used cannabis at some stage in the past compared with those without psychosis.

But there were no differences between people with and without psychosis in terms of how long they had used cannabis, how old they were when they first used the drug, the type of cannabis used, how often it was used, and the strength.

Three of the four measures of corpus callosum structure were no different in people with psychosis compared with those without (MD, RD, AD). FA was found to be different, but the result was only borderline statistically significant (p=0.04), meaning there is still a reasonable possibility it is down to chance.

As the corpus callosum structure wasn't that different between those with and without psychosis, the researchers pooled the groups to study the effect of cannabis on the brain. Overall, they found the corpus callosum structure was negatively affected in people using high-potency cannabis, compared with those using a lower-potency strain or not using cannabis at all, across MD, AD and RD diffusion measures, but not the more generic FA.

These alterations were similar in users with and without psychosis. A similar mixed pattern was found for frequency of use, with daily users having the most changes compared with occasional or never users. No link was found between those first using cannabis before the age of 15 and those starting after in terms of changes in the corpus callosum structure. 

How did the researchers interpret the results?

The researchers concluded: "Frequent use of high-potency cannabis is associated with disturbed callosal microstructural organisation in individuals with and without psychosis.

"Since high-potency preparations are now replacing traditional herbal drugs in many European countries, raising awareness about the risks of high-potency cannabis is crucial." 


Conclusion

This research studied the brains of 99 people – some with psychosis and some without – looking for any links between small changes in their brain structure and their cannabis habits. The researchers looked specifically at the effect on the fine structure of the corpus callosum, a band of nerve fibres joining the left and right sides of the brain.

They found the corpus callosum wasn't very different in those with or without psychosis. But smoking high-strength cannabis (skunk) and using any type of cannabis daily was linked to structural changes in the corpus callosum, compared with those who smoked less or lower-strength cannabis.

What this study doesn't tell us is whether these structural changes do any harm or cause any negative mental health effects. The study simply didn't look at this, a subtlety much of the news reporting failed to recognise.

The study also can't tell us whether cannabis use is the direct cause of these observed differences, or whether other factors could be having an influence. Cohort studies following people over time, examining their cannabis use and carrying out follow-up brain scans, would be beneficial to look at this.

The researchers made the best of what they had in terms of collecting a sample of almost 100 people and analysing the results appropriately. 

However, as with all research, this study has its limitations. For example, 100 people isn't enough if you are splitting people into many groups, such as those with and without psychosis and different levels of cannabis use.

Some of the group numbers start to become quite small, which increases the chances you won't have enough people to find statistically significant differences, even if they exist. It can also throw up some unusual findings that wouldn't be the case in a larger group. This study carries these risks.

Similarly, the findings themselves weren't entirely consistent. There is a mix of significant and non-significant findings for the four measures taken (FA, MD, RD and AD). This lack of consistency muddies the picture somewhat and reduces our confidence in the findings a little.

The effects of cannabis use – both in the short and long term – are not firmly established. This study adds new knowledge about the potential effect of cannabis smoking on the brain that other researchers can build on. But it was exploratory research and so cannot provide concrete cause and effect conclusions. 

Cannabis is a class B drug that is illegal to possess (up to five years in prison) or supply (up to 14 years in prison). And while it may not trigger mental health problems in everyone, it may make pre-existing symptoms such as depression and paranoia more severe. If you think you may be using cannabis to cope with mental health problems, contact your GP for advice.

Links To The Headlines

Scientists warn smoking 'skunk' cannabis wrecks brains. The Sun, November 27 2015

Smoking high-strength cannabis may damage nerve fibres in brain. The Guardian, November 27 2015

Skunk 'causes damage to vital nerve fibres'. The Daily Telegraph, November 27 2015

Proof strong cannabis DOES harm your brain: Regularly smoking 'skunk' found to damage area that carries signals - and can lead to mental illness. Daily Mail, November 27 2015

Smoking skunk may damage brain junction. The Times, November 27 2015

Links To Science

Rigucci S, Marques TR, Di Forti M, et al. Effect of high-potency cannabis on corpus callosum microstructure. Psychological Medicine. Published online November 27 2015


Experts call for more research into ADHD drug Ritalin

NHS Choices - Behind the Headlines - Fri, 27/11/2015 - 10:30

"The drug Ritalin should be prescribed with caution as the quality of evidence available about its benefits and risks is poor," the Mail Online reports. A review of available evidence found no high-quality evidence about both the benefits and risks.

Researchers aimed to assess the beneficial and harmful effects of the attention deficit hyperactivity disorder (ADHD) drug methylphenidate for children and adolescents – Ritalin is the most commonly known brand name.

The review identified a large number of trials including more than 12,000 children and adolescents. It found a slight improvement in symptoms of ADHD in children treated with methylphenidate compared to placebo (dummy drug) or no treatment. 

There was no increase in the risk of serious adverse effects, but there was a 29% increase in non-serious side effects, such as sleep problems and decreased appetite. However, the findings were based on very low-quality evidence, so we can't be sure of these effects, and better-quality studies would be required to look at this further.

The researchers conclude: "Better designed trials are needed to assess the benefits of methylphenidate".  

Alternative treatments for ADHD include behavioural therapy and cognitive behavioural therapy. Read more about treatment options for ADHD.


Where did the story come from?

The study was carried out by researchers from a number of institutions, including Region Zealand, the University of Southern Denmark and Copenhagen University Hospital, all in Denmark.

Funding for the study was provided by the Psychiatric Research Unit, Region Zealand Psychiatry, Roskilde; Region Zealand Research Foundation; and the Copenhagen Trial Unit, Centre for Clinical Intervention Research, Copenhagen University Hospital, Copenhagen.

The peer-reviewed study was published by the Cochrane Developmental, Psychosocial and Learning Problems Group. Like all Cochrane research, the study is open-access, so it is free to read online.

The review has been reported in much of the media as a warning to be cautious about over-prescribing such drugs. However, the Mail Online explained that the research team could not be confident about the results. 


What kind of research was this?

This study was a systematic review and meta-analysis which aimed to assess the beneficial and harmful effects of methylphenidate for children and adolescents with ADHD. This is a good way to gather and combine the findings from trials that have been conducted to date, to draw firmer conclusions; however, a systematic review can only be as good as the included studies.


What did the research involve?

This systematic review searched numerous literature databases and two trials registers to identify all randomised controlled trials (RCTs) comparing methylphenidate to inactive ("dummy") placebo, or no treatment in children and adolescents with ADHD aged 18 years or younger. At least 75% of participants in each study were required to have normal intellectual functioning.

Data was extracted from the studies for the following outcomes:

  • ADHD symptoms (attention, hyperactivity and impulsivity), short-term (within six months) or long-term (longer than six months)
  • serious adverse events
  • non-serious adverse events
  • general behaviour in school and at home
  • quality of life

Several study authors were responsible for data extraction and quality appraisal of the studies, which included an assessment of bias and of differences in the results of the individual studies (heterogeneity).

Where appropriate, data from the different studies was pooled using meta-analysis to give an overall result.


What were the basic results?

The systematic review included 38 RCTs (5,111 participants) and 147 crossover trials (7,134 participants – crossover being where participants act as their own control, receiving treatment and no treatment).

The average age of participants across all studies was 9.7 years, but ranged from three to 18 years. As is frequently the case with ADHD, a larger number of boys were represented in the sample, with a boy-to-girl ratio of 5:1.

The length of methylphenidate treatment time ranged from one to 425 days, with an average of 75 days. All included trials were considered to be at high risk of bias.

In a pooled analysis of 19 trials, the researchers found that methylphenidate gave a slight improvement in teacher-rated ADHD symptoms when compared to placebo or no intervention. Those treated with methylphenidate had an average of 9.6 fewer points (95% confidence interval [CI] -13.75 to -6.38) on the ADHD Rating Scale (ADHD-RS). 

The ADHD-RS is a scoring system, based on the variety and severity of symptoms, which has a range of 0 to 72 points. A change of 6.6 points is considered to represent the minimal relevant or clinically meaningful difference.
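
Putting those two figures together – simple arithmetic on the numbers quoted above – the pooled improvement just clears the clinical threshold:

\[
9.6\ \text{points} > 6.6\ \text{points}, \qquad \frac{9.6}{72} \approx 13\%\ \text{of the full ADHD-RS range}
\]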

There was no evidence to suggest methylphenidate was associated with an increase in serious adverse events.

The number of non-serious adverse events was, however, higher in the methylphenidate group, with a 29% increase in the overall risk of any non-serious adverse events (relative risk [RR] 1.29, 95% CI 1.10 to 1.51). The most common non-serious adverse events were sleep problems and decreased appetite.

These side effects are acknowledged by the manufacturers of methylphenidate and are described as common in the patient information leaflets that come with the medication.


How did the researchers interpret the results?

The authors conclude: "At the moment, the quality of the available evidence means that we cannot say for sure whether taking methylphenidate will improve the lives of children and adolescents with ADHD. Methylphenidate is associated with a number of non-serious adverse events, such as problems with sleeping and decreased appetite. 

"Although we did not find evidence that there is an increased risk of serious adverse events, we need trials with longer follow-up to better assess the risk of serious adverse events in people who take methylphenidate over a long period of time."



Conclusion

This is a well-conducted systematic review that aimed to assess the beneficial and harmful effects of methylphenidate (Ritalin being the most commonly known brand name) for children and adolescents with ADHD.

The review found that methylphenidate was associated with a slight improvement in the symptoms of ADHD, compared to placebo or no treatment – just on the borderline of what would be considered clinically meaningful. However, the researchers state this improvement should be weighed up against the increased risk of adverse events, such as sleeping problems and decreased appetite.

The review identified a large number of trials and included 12,245 children and adolescents, representing the gathering of extensive research into the effects of this drug. However, a major limitation is the poor-quality evidence that was available, with most trials being assessed as being of very low quality.

As the review authors suggest, more research with well-designed trials is needed to better assess the benefits and harms of the treatment, preferably with some subgroup analyses to see if it is possible to identify those who might have better or worse outcomes.

There is no cure for ADHD, but support and advice – and sometimes treatment in the form of medication or "talking" therapies – can make day-to-day life easier. Links may sometimes be noticed between symptoms and certain foods, such as sugar or additives. However, the most important thing is for the child to follow a balanced diet and not to make drastic changes or add supplements (e.g. omega 3 or 6 fatty acids) without first discussing them with a GP.

Read more about living with ADHD.  

Links To The Headlines

ADHD drug Ritalin 'should be prescribed with caution' because scientists STILL can't say whether it's safe after 50 years of use. Mail Online, November 25 2015

Experts call for caution over Ritalin. BBC News, November 25 2015

Links To Science

Storebø OJ, Ramstad E, Krogh HB, et al. Methylphenidate for children and adolescents with attention deficit hyperactivity disorder (ADHD). Cochrane Database of Systematic Reviews. Published online November 25 2015


Reports of an 'end to daily diabetes jabs' are premature

NHS Choices - Behind the Headlines - Thu, 26/11/2015 - 16:00

"The daily trial of insulin injections could soon be over for hundreds of thousands of people with type-1 diabetes," is the overoptimistic headline in The Times.

A small study found that a treatment using immune "T-reg cells" was safe for participants, but it is far too early to talk about an end to daily injections.

In type 1 diabetes the body's immune cells attack the insulin-producing beta cells in the pancreas. Without the hormone insulin, people with type 1 diabetes can't control the levels of sugar in their blood.

High levels of sugar (hyperglycaemia) can damage the blood vessels and nerves, while low levels (hypoglycaemia) can cause unconsciousness. Most people with type 1 diabetes need to inject insulin regularly.

It was already known that people with type 1 diabetes have fewer cells called T-regulators (T-regs), which are involved in stopping the immune system attacking healthy cells such as beta cells. Now a group of scientists has found a way to take T-regs from people's blood, filter out any defective cells, and expand the numbers of healthy T-regs so they can be injected back.

This study was designed to test whether the technique is safe, rather than whether it is effective. The researchers say they can't tell from the varied responses of the 14 people in the study whether the treatment actually helped preserve insulin production, let alone restore it.

Where did the story come from?

The study was carried out by researchers from the University of California, Benaroya Research Institute in Seattle, Yale University and KineMed Inc.

It was funded by the Juvenile Diabetes Research Foundation International, the Brehm Coalition, the Immune Tolerance Network, BD Biosciences and Caladrius Biosciences. 

The study was published in the peer-reviewed journal Science Translational Medicine. Unsurprisingly, several of the study authors hold patents for the therapy or have been paid by companies interested in providing it.

The reporting in both The Times and The Daily Telegraph made it sound as if the treatment had been shown to work and was ready to be rolled out, when this is far from the case.

The coverage in The Independent and Mail Online was more cautious, sticking mainly to the facts about the study.  

What kind of research was this?

This was a phase 1 dose-escalation safety trial. Phase 1 trials are designed to look at safety, not effectiveness.

In this case, the trial was carried out to see whether patients with diabetes could tolerate the treatment without it causing severe side effects. Larger efficacy trials are done after safety trials to limit the number of people affected if they do find dangerous side effects.  

What did the research involve?

Researchers recruited 16 adults who had recently been diagnosed with type 1 diabetes and took a large sample of blood from them.

They separated out the T-reg cells, removed defective cells, and treated the T-regs to expand their numbers. They then infused the T-reg cells back into the bloodstream, and followed these people up to see what happened.

Two of the recruits did not have their cells infused back, as their samples failed to meet pre-set safety criteria when tested. The researchers tested the function of the T-regs before they infused them back into the 14 remaining people.

The treatments were done in stages, one group of people at a time, with the first group receiving the smallest dose of T-regs. The researchers waited at least 13 weeks to see if anyone in the first group got serious side effects before moving on to give a bigger dose to the second group, and then repeating the process.

People had weekly follow-up visits to check for side effects for the first four weeks, then every 13 weeks for the first year, with regular checks until five years after treatment. They also had a number of tests before and after treatment to see whether they were producing insulin.  

What were the basic results?

Nobody in the study had serious side effects the researchers thought had been caused by the treatment. This is important, because immune cell therapy could potentially cause problems, such as a severe reaction to the infusion.

There is also the potential risk of a cytokine release, when T-cells produce proteins called cytokines that cause severe inflammation, similar to that of a bad infection.

Nobody in the study had either of these problems, and none of the participants suffered from an increase in infections, which was also a potential side effect if there are more cells that dampen the immune response.

The main adverse events experienced by people in the study were episodes of very high or very low blood sugar, which happens in people with diabetes when blood sugar is uncontrolled. The researchers say these were unlikely to be linked to the therapy.

Follow-up studies showed some of the T-reg cells remained in the bloodstream for a year after infusion, although most of the cells (about 75%) could no longer be found 90 days after treatment.

Studies of the treated T-regs in the laboratory, before they were infused back into people, showed the cells seemed to have recovered their ability to prevent the body from wrongly attacking beta cells. However, we don't know if this ability persisted after they had been injected.

Tests of a protein called C-peptide, which can indicate whether people are producing insulin, showed a range of results. In some people, the levels remained about the same as before treatment, when you would normally expect them to decline over time.

In other people, the levels of C-peptide dropped off to nearly zero after a year. The researchers say that, given the small numbers of people in the study and the fact they'd been treated at different times in the progression of the disease, it was impossible to tell whether the treatment had made any difference to these results.  

How did the researchers interpret the results?

The researchers concluded that their results "support the development of a phase 2 trial to test efficacy of the T-reg therapy".

They say their therapy, when combined with other treatments being developed, "may lead to durable remission and tolerance in this disease setting".


Conclusion

These early-stage results show work is underway to find a long-term treatment for type 1 diabetes, which could one day mean people do not have to inject insulin.

However, that day is a long way off. Headlines suggesting an end to daily injections can unfairly raise people's hopes, leading to disappointment when no such treatment emerges.

Bringing a new treatment into use requires at least three stages of trials, from phase 1 safety trials, to phase 2 studies of efficacy, to larger-scale phase 3 clinical trials, where the treatment is given to large groups of people who may be followed up for some time.

This is usually done with a comparison group to see whether the new treatment performs better than placebo or the established treatment. Many treatments get no further than phase 1.

The results from this study are encouraging for the researchers, as they allow them to move on to the next phase of study. However, it doesn't mean there are no safety concerns. 

We need to see whether the treatment is safe and effective when given to large groups of people. Only after successful phase 3 trials can people with type 1 diabetes start to hope for an injection-free future.  

Links To The Headlines

Blood therapy heralds end of insulin jabs for diabetics. The Times, November 26 2015

End of daily injections for diabetes as scientists restore insulin production. The Daily Telegraph, November 25 2015

New treatment could free type-1 diabetics from 'daily grind' of insulin injections. The Independent, November 25 2015

Could this be the end of daily injections for people with Type 1 diabetes? 'Game-changing' treatment restores production of insulin. Mail Online, November 26 2015

Links To Science

Bluestone JA, Buckner JH, Fitch M, et al. Type 1 diabetes immunotherapy using polyclonal regulatory T cells. Science Translational Medicine. Published online November 25 2015


Babies born on the weekend have slightly higher death risk

NHS Choices - Behind the Headlines - Wed, 25/11/2015 - 11:30

"Babies delivered at the weekend are significantly more likely to die or suffer serious injury," the Daily Mail reports. 

However, while the increase in risk is both significant and an obvious cause for concern, it should be noted that it is a very small increase.

Researchers looked at the outcomes of 1,349,599 births in the two years from April 1 2010, and found that an estimated 770 extra deaths occurred each year above what would occur if all babies were born on weekdays.

Obviously, 770 extra deaths is 770 too many, but it is important to put the figure into a larger context. When we look at the actual numbers, 0.73% of babies born at the weekend died, compared to 0.64% of babies born on weekdays.

While it may be tempting to assume that the extra deaths are all down to staffing issues (e.g. consultants not working at weekends) other factors may be involved. For example, most women giving birth by planned caesarean section did so during the week. Babies born this way may be lower risk, which could make the weekday births appear safer.

The study highlights that the overall risk of infant death is very low. However, the small difference in risk between those born on the weekend and on weekdays cannot be ignored.

The study raises important questions about the provision of maternity services at weekends, and whether changes to staff availability and resources might reduce the numbers of deaths among babies born at the weekend.


Where did the story come from?

The study was carried out by researchers from Imperial College London and the National Audit Office, and was partially funded by Imperial College London’s research centre. The study was published in the peer-reviewed British Medical Journal (BMJ) on an open-access basis, which means it is free to read online.

The tone of the reporting varied sharply between different media outlets. The Daily Mirror went with the powerful headline: "Betrayal of our babies as weekend births puts hundreds of mums and newborns at risk". The emotive headline was followed by a story that misreported the study's figures, saying that 770 babies delivered at the weekend die each year, when that is in fact the estimated number of extra deaths each year compared with what would be expected if all babies were born during the week.

The Guardian took a more measured approach, with "Weekend-born babies slightly more likely to die in their first week", and like most other media sources, reported the study accurately and with context.

Unsurprisingly several sources, including the Daily Mail, The Daily Telegraph and BBC News, linked the study to the ongoing dispute between the government and junior doctors, over changes to doctors' contracts that would affect weekend working.

The dispute was further inflamed by a recent controversial study, published in the BMJ in September, which estimated that there were an extra 11,000 "weekend deaths" during 2013-14. 

However, the researchers themselves warned: "It is not possible to ascertain the extent to which these excess deaths may be preventable; to assume that they are avoidable would be rash and misleading".


What kind of research was this?

This is an observational study which used a database of NHS statistics to look for differences in outcomes between babies born during the week and at the weekend.

Previous studies across various medical conditions have suggested that people admitted to hospital at the weekend have increased risk of death and other adverse outcomes, compared to if admitted on a weekday.

This study aimed to see whether the association may also be found in maternity care. However, a study of this nature cannot say what has caused these differences.


What did the research involve?

The researchers used a large database of NHS statistics to find information about outcomes for women and babies in English maternity units.

They looked at seven outcomes they said could be linked to quality of care, including overall infant mortality around the time of birth (including stillbirths and deaths within seven days), tears to women’s perineum (the area between the anus and the vulva), emergency re-admissions for mother or baby, and infections. They looked at rates of these outcomes on each day of the week, and compared weekend rates to overall weekday rates.

The researchers chose Tuesday as a "reference day", because women admitted in labour on a Tuesday are likely to give birth during the week, and babies born on Tuesdays are not likely to have been born after a labour starting at the weekend. 

They compared weekend outcomes to outcomes on a Tuesday, after taking account of a number of factors (confounders) that might have affected the results. These included the mother’s age, and the baby’s gestational age and birth weight. They then calculated how many extra deaths are likely to have occurred at the weekend, compared to if all births had the same risks as those happening on Tuesdays.
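
In outline, the excess-deaths estimate is a calculation of the following form – a simplified sketch rather than the paper's full risk-adjusted model:

\[
\text{excess deaths per year} \approx \left(p_{\text{weekend}} - p_{\text{Tuesday}}\right) \times N_{\text{weekend births per year}}
\]

where each p is the adjusted probability of stillbirth or death within seven days of birth.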

A number of checks and adjustments to the figures were carried out to try to account for missing information and for other things that could have affected the results. They also looked to see whether maternity units which complied with recommendations about how many hours consultants should be present had better outcomes than units which did not comply with these recommendations.


What were the basic results?

Overall, 0.73% of babies born at the weekend died around the time of birth, compared with 0.64% of babies born during the week. In other words, this meant that babies born at weekends had a 7.3 in 1,000 chance of dying, compared to babies born during the week, who had a 6.4 in 1,000 chance. After taking account of factors that could explain the difference, this means that babies born at the weekend had a 7% greater chance of death (odds ratio (OR) 1.07, 95% confidence interval (CI) 1.02 to 1.13).
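
In absolute terms, the same figures work out as follows:

\[
7.3 - 6.4 = 0.9\ \text{extra deaths per 1,000 weekend births} \approx 1\ \text{extra death per 1,100 weekend births}
\]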

Mothers had a 6% higher chance of getting an infection after giving birth if they were admitted at the weekend (95% CI 1.01 to 1.11), and babies had a 6% higher chance of being injured during birth if they were born at the weekend (95% CI 1.02 to 1.09). 

There was a suggestion of a marginally increased chance of the baby being re-admitted as an emergency after a weekend birth, but this just fell short of statistical significance (OR 1.04, 95% CI 1.00 to 1.08). None of the other three outcomes measured showed a statistically significant difference between weekends and weekdays.

Women giving birth in hospital units which met the Royal College of Obstetricians and Gynaecologists’ guidelines on consultant staffing levels were slightly less likely to have a perineal tear, but consultant levels showed no other differences in outcomes.


How did the researchers interpret the results?

The researchers said their study had shown that "performance across four of the seven measures was significantly worse for women admitted, and babies born, at the weekend". They highlighted the increase in stillbirths or deaths within seven days of birth as being of particular concern.

They say that "further work is needed" to understand what lay behind their findings, and concluded: "Unless managers and practitioners work to better understand and tackle the problems raised in this paper, health outcomes for mothers and babies are likely to continue to be influenced by the day of delivery". 



Conclusion

The media headlines resulting from this study sound alarming and could be worrying for pregnant women and their partners. However, there are some good reasons to be cautious.

Firstly, it's important to keep in mind that it is unusual for babies to be stillborn or die within a few days of birth. It is devastating when it does happen, but the risk is low. In this study, this happened to around seven in every 1,000 babies born at the weekend and six in every 1,000 born on a weekday. Therefore, the absolute risk is very low, but the small difference in rates between weekends and weekdays cannot be ignored.  

The biggest difficulty is that we don't know what is behind the increased chances of certain problems at the weekend. We cannot say it is simply because care is less good in hospitals than during the week.

There are a number of important limitations to the study’s results. The database used, the Hospital Episode Statistics database, should include information about what happened to people from their admission to the maternity unit onwards. 

However, the researchers found that much of the information they looked for was missing, including information about babies' birth weight (missing in almost 10% of cases) and whether they were born at full term (missing in 13% of cases). These are important factors that can affect whether a baby dies, and may have nothing to do with the care they receive during birth.

The timing of admission and birth may also have affected the results. Babies were counted as having been born at the weekend if they were born between midnight on Friday and midnight on Sunday, although their mother may have been admitted in labour before then. Women were counted as having been admitted at the weekend if they were admitted between midnight on Friday and midnight on Sunday, although they may have given birth after then.

This means that babies who died might have been classified as having been born at the weekend, even though the problems leading to their death might have happened during labour on the Friday.

Conversely, mothers who had problems after being admitted at the weekend might not have encountered those problems until giving birth on the Monday.

Although the researchers tried to make allowances for these issues, the amount of missing information from the database makes it harder to rely on the results.

Another issue is the effect of planned caesarean births, which are almost always scheduled for a weekday.

Professor Andrew Whitelaw, of the University of Bristol, said planned caesareans represented "low-risk babies" because there is almost no risk of the baby being starved of oxygen or physically injured during birth, and that the large numbers of planned caesareans during the week might lead to a reduced death rate on weekdays. 

In an editorial published with the study, two professors of obstetrics and gynaecology from Oregon, in the US, conclude that "the most likely mechanism underlying the weekend effect is systems factors (such as staffing, resource availability, hospital policies)". This may be the answer, at least in part, as has been suggested with other areas of medical or surgical care. However likely this may be, the study does not provide evidence to prove this is the case. 

The availability of consultants did not seem to make a big difference to the outcomes, although we don’t know whether the numbers of nurses, junior doctors and midwives available might have made a difference.

Overall, this study raises a lot of questions about why certain outcomes, especially deaths of babies, were more common when babies were born at the weekend. We need more research to find out the answers.

Links To The Headlines

Risk of having a weekend baby: Major study reveals greater threat of stillbirth or death. Daily Mail, November 25 2015

Babies born at weekends 'have higher death risk'. BBC News, November 25 2015

Babies more likely to die if born in NHS hospitals at weekend. The Daily Telegraph, November 24 2015

Babies born at weekends 'more likely' to be stillborn or die in first week of life. The Independent, November 24 2015

Betrayal of our babies as weekend births puts hundreds of mums and newborns at risk. Daily Mirror, November 24 2015

Weekend-born babies slightly more likely to die in their first week. The Guardian, November 24 2015

Babies born in NHS hospitals at weekends 'have lower survival rate'. ITV News, November 25 2015

Links To Science

Palmer WL, Bottle A, Aylin P. Association between day of delivery and obstetric outcomes: observational study. BMJ. Published online November 24 2015

Categories: NHS Choices

Loneliness 'may affect the immune system'

NHS Choices - Behind the Headlines - Tue, 24/11/2015 - 14:30

"Being lonely won't just make you miserable; it could also suppress your immune system and knock years of your life," the Daily Mail reports. 

This headline was prompted by a laboratory study in humans and rhesus macaque monkeys, which aimed to investigate if there were biological mechanisms associated with isolation that could also be associated with the risk of chronic disease or early death.

The findings suggest increased activity of the sympathetic nervous system – responsible for the "fight or flight" response – may overstimulate development of inflammatory white blood cells in the bone marrow. At the same time it may decrease the production of antiviral proteins, reducing the body's ability to fight infections.

However, at this stage this is still just a hypothesis. The study has not directly demonstrated that people who are socially isolated are more likely to become ill or die earlier, or that the immune system plays a key role in this.

Loneliness and social isolation can be complex emotions, and it may be difficult to pin down a single causative factor. It could be a cycle where people with a chronic disease may be less motivated to socialise with others, increasing the sense of isolation, and so on.

Many people in the UK – particularly older adults – can be lonely and socially isolated. But there are ways to combat loneliness, both by seeking help if you are lonely and by helping lonely and isolated people in your community.   

Where did the story come from?

The study was carried out by researchers from the University of California and the University of Chicago, with financial support provided by the US National Institutes of Health.

It was published in the peer-reviewed scientific journal PNAS on an open-access basis, so it is free to read online or download as a PDF.

The UK media's reporting of the research was generally accurate, but could have benefited from making it clearer that we don't know whether these findings provide the whole answer.

Also, although this study looks at a previously observed concept, it hasn't demonstrated that people who are lonely or isolated are more likely to become ill or die earlier.  

What kind of research was this?

This laboratory study in humans and rhesus macaque monkeys aimed to investigate the cellular effects of loneliness. Various studies have already linked social isolation in humans to chronic disease and mortality, though the possible biological mechanism behind this has remained poorly understood.

In humans, feeling socially isolated can involve feeling threatened and being hyperalert. Humans evolved to live in groups with other humans, so prolonged isolation may, on an unconscious level, trigger feelings of profound unease about potential threats: if all of your tribe has suddenly vanished, you could be in a lot of trouble.

Animal models have shown the response to a threat involves signalling from the sympathetic nervous system (SNS) – responsible for the "fight or flight" response – to the bone marrow, where new blood cells are produced.

SNS signalling is thought to increase the activity of "pro-inflammatory" genes, which stimulate the development of early-stage myeloid blood cells in the bone marrow. These myeloid cells give rise to various types of white blood cells (involved in fighting infection), as well as red blood cells and platelets.

It is thought increased myeloid stimulation could contribute to inflammation-related chronic diseases. At the same time as increasing the activity of pro-inflammatory genes, SNS signalling is thought to decrease the activity of genes involved in the production of antiviral immune proteins.

This process is called the conserved transcriptional response to adversity (CTRA) and is associated with specific gene activity, known as CTRA gene expression. This study aimed to find further evidence of the possible links between perceptions of social isolation and sympathetic nervous system effects on the myeloid cells and the CTRA.       

What did the research involve?

The research involved groups of humans and rhesus macaques, and looked at how perceived isolation was associated with measures of immune blood cells and CTRA gene expression.

The human study involved 141 people taking part in the Chicago Health, Aging and Social Relations Study (CHASRS). About a quarter of these people perceived themselves to be highly socially isolated, based on their scores on a loneliness scale during the first five years of the study.

The current research involved blood samples collected from these people during study years 5 to 10. The researchers looked at white blood cell count and CTRA gene expression. Urine samples were also collected to measure the "fight or flight" hormones adrenaline and noradrenaline, and the stress hormone cortisol.

The researchers looked at the association between these biological measures and the score on the loneliness scale, taking account of various potential confounding factors, including age, gender, marital status, income and lifestyle factors.

The macaques were classified as having low, intermediate or high social isolation based on their assessed sociability and on behaviours that indicated they felt threatened. The researchers similarly took urine and blood samples from these animals, examining stress hormones, white blood cells and gene expression. 

What were the basic results?

The researchers found people with perceived social isolation had an average 6.5% increase in the activity of genes making up the CTRA profile. After additional adjustment for stress, depression and level of social support, isolation was associated with a 12.2% increase in the activity of CTRA genes. Social isolation was also associated with increased levels of white blood cells involved in the inflammatory response.

Similar results were found in macaques – those perceived as socially isolated demonstrated higher CTRA gene activity, with up-regulation of "pro-inflammatory" genes and down-regulation of genes involved in the production of antiviral immune proteins.

This was also demonstrated by an impaired immune response when the macaques were experimentally infected with simian immunodeficiency virus (SIV), a type of virus that affects primates.

Both humans and macaques with perceived social isolation also demonstrated increased urinary levels of the hormone noradrenaline. 

How did the researchers interpret the results?

The researchers concluded that their study shows socially isolated people have elevated sympathetic nervous system activity, which is associated with activation of the CTRA gene profile.

This is characterised by up-regulation of pro-inflammatory genes and down-regulation of genes involved in the production of antiviral proteins.  


Conclusion

People who are lonely and socially isolated have often been suggested as being at higher risk of illness, disease and early death. This study has aimed to further explore the possible biological mechanisms behind this.

The findings suggest it may involve the "fight or flight" response overstimulating the development of inflammatory white blood cells in the bone marrow, while decreasing the production of antiviral proteins. The idea is this altered immune and inflammatory response could therefore contribute to the increased disease risk.

But this is only a hypothesis. Though the research in animals has suggested socially isolated macaques may be more susceptible to viral infection, this study has not proved that socially isolated humans are more likely to become ill or die earlier.

It also does not confirm this is the only biological mechanism by which social isolation may confer an increased disease risk in humans. Feelings of loneliness and social isolation can be complex emotions that may be influenced by many personal, health and life circumstances.

For example, a person may have a chronic disease that has caused them to become more withdrawn, depressed and socially isolated. It may then be the chronic disease that causes the increased mortality risk, rather than this being a direct effect of the social isolation.

As such, there may be several contributing factors involved in a cycle and it can be difficult to pin down a single causative factor – isolation, for example – directly leading to the outcome, such as disease or early death.

However, what is fairly apparent from this and previous research is that, whatever the biological mechanism(s) that may be behind it, loneliness and social isolation do seem to be associated in some way with disease and illness.

If you are feeling isolated and lonely, there are a range of organisations that can help you reconnect with people. Volunteer work can also be an effective way of meeting new people, as well as boosting your self-esteem and wellbeing.

Read more about how to combat feelings of loneliness.

Links To The Headlines

Loneliness is twice as bad as obesity for killing us early: Being isolated suppresses your immune system and knocks years off your life. Daily Mail, November 23 2015

Loneliness triggers biological changes which cause illness and early death. The Daily Telegraph, November 23 2015

Scientists reveal why being lonely increases your chances of dying early. Daily Mirror, November 23 2015

Lonely people's white blood cells less suited to fighting infection, study says. The Independent, November 23 2015

Death by loneliness: Isolation cutting short lives of millions. Daily Express, November 23 2015

Links To Science

Cole SW, Capitanio JP, Chun K, et al. Myeloid differentiation architecture of leukocyte transcriptome dynamics in perceived social isolation. PNAS. Published online November 23 2015

Categories: NHS Choices

Has the 'happiness region' of the brain been discovered?

NHS Choices - Behind the Headlines - Mon, 23/11/2015 - 12:40

"Neurologists 'work out the key to finding happiness'," claims The Independent. Japanese researchers claim to have found a link between reported happiness and an area of the brain called the precuneus.

The researchers recruited 51 young adult volunteers, scanned their brain structure and probed their happiness and emotions using questionnaires.

They found that greater feelings of happiness were associated with a larger volume of the right precuneus. Other positive emotions and a greater sense of purpose in life were also associated with greater volume in this region.

Importantly, we don't know whether the findings in this small sample of Japanese people can be generalised to everyone. We also can't establish cause and effect – that is, whether precuneus volume is set at birth and so predetermines our emotions, or whether it changes depending on our emotions.

It is arguably simplistic to regard the brain as working the way it is portrayed in the recent Disney film Inside Out – with specific regions of the brain linked to specific emotions such as joy, fear, anger, disgust and sadness.

However, as the researchers discuss, the brain does have a high degree of plasticity – it is possible for brain cells to change and adapt through different types of activity and exposures. 

Previous studies have indicated that meditation might increase the volume of the precuneus, and may be linked to happiness. There is a growing body of evidence that mindfulness-based techniques, such as meditation, can improve a person’s wellbeing. 

Where did the story come from?

The study was carried out by researchers from Kyoto University and other research institutes in Japan. It was funded by the Japan Society for the Promotion of Science – Funding Program for Next Generation.

The study was published in the peer-reviewed scientific journal Scientific Reports on an open-access basis, so it is free to read online or download as a PDF.

The media has generally taken these findings at face value, and could benefit from acknowledging the limitations of this cross-sectional study of a small and select population sample.

The Independent's headline – "Neurologists 'work out the key to finding happiness'" – is unsupported by the facts presented in the study.

The Daily Telegraph wrote: "Meditation increases the grey matter in a part of the brain which is linked to happiness, scientists have found," which implies this is one of the new findings the study produced. It was not.

The Telegraph wasn't alone in making this subtle mistake. The study referenced earlier research, which the authors said showed that this brain region's structure could be changed through training, such as meditation – but they did not investigate or confirm this themselves.

A recent meta-analysis into whether meditation could change brain structure had mixed results. While the researchers did find some positive results, they also cited concerns about "publication bias and methodological limitations".


What kind of research was this?

This was a cross-sectional study which aimed to investigate whether subjective happiness is associated with specific brain features.

As the researchers say, happiness is a subjective experience that is important to humans, even to the extent that many philosophers and scholars have called it "the ultimate goal in life".

Previous studies have suggested that happiness has a strong hereditary component, and involves cognitive (mental processes of perception, memory, judgement, and reasoning) as well as emotional components. However, actual structural brain features associated with this feeling have remained elusive.    

This study aimed to look at participants' brain structure on MRI scanners to see how this was associated with measures of reported subjective happiness and other emotions. 


What did the research involve?

The research included 51 volunteers (average age 23) who had MRI scans and completed various psychological questionnaires assessing their feelings.

Subjective happiness was measured on a four-item Subjective Happiness Scale, positive and negative feelings on an Emotional Intensity Scale, anxiety on a State-Trait Anxiety Inventory, and other thoughts surrounding happiness on a Purpose in Life Scale.

All four of these questionnaires were Japanese versions, which have been validated for use in Japanese people.

The participants had MRI scans, and the researchers looked at the association between brain imaging findings and subjective happiness score, taking into account the influence of scores on the other scales. 


What were the basic results?

Looking at the different psychological questionnaires, the researchers found that, unsurprisingly, greater subjective happiness was associated with positive emotions and higher purpose in life scores. Conversely, negative emotions and higher trait anxiety were associated with lower happiness scores.

Looking at the MRI scans, subjective happiness was linked to volume of the right precuneus, an area of the brain previously associated with feelings of ego or self-consciousness. Happiness score was not associated with any other brain region.

The researchers also found that right precuneus volume was associated with feelings on the other scales. Positive emotions and more purpose in life were associated with greater volume, negative feelings with lower volume.


How did the researchers interpret the results?

The researchers conclude that they have found a positive association between subjective happiness score and volume of the right precuneus in the brain – a brain region also associated with emotional and purpose in life scores. 

They suggest that, "the precuneus mediates subjective happiness by integrating the emotional and cognitive components of happiness".



Conclusion

This Japanese study found subjective happiness to be associated with volume of one brain region – the right precuneus. Previous research is said not to have been able to clarify whether brain features are linked to this elusive and highly valued feeling.

Perhaps unsurprisingly, the researchers also found that greater subjective happiness was associated with positive emotions and greater feelings of purpose in life, while lower happiness was linked with the opposite. 

There is however, little else to be concluded from this research and there are a few important limitations to note.

The sample size, at only 51, was small for this type of study. The participants were also all young Japanese adults. Great care must be taken before extending the observations in this sample to people of other populations, or all people in general. The same findings may not have been observed in another group of people.

The study is cross-sectional, taking one-off psychological questionnaires and one-off brain scans. We do not know whether the psychological assessments reflect lifelong happiness, mood or emotions in these people, or whether these are more transitory states – as they can be for many of us – depending on current life circumstances. We also don't know whether the questionnaires are able to grasp all the nuances of people's feelings.

Being cross-sectional, we also can't conclude on cause and effect. We don't know whether the feelings or emotions of an individual could be predetermined by the precuneus volume they are born with, or whether brain nerve cells in this area could change and adapt during life – influencing volume – depending on our feelings and emotions.

The researchers do discuss two previous studies. One suggests that meditation can increase happiness, while a second suggests that psychological training, such as meditation, could influence the volume of the precuneus. However, they did not study whether this was true themselves – it was just part of their discussion about the potential implications of their research.

Randomised controlled trials or carefully designed observational follow-up studies would be needed to better assess whether meditation or other psychological practices could influence our brain or emotions.

This study alone provides no evidence that meditating will influence our brain structure or volume and make us feel happier.

That said, the concept of "mindfulness" – using a range of techniques, including meditation, to become more aware of the world around you – has become increasingly popular. Supporters of mindfulness claim that it can help combat stress and improve wellbeing. 

Links To The Headlines

Neurologists 'work out the key to finding happiness'. The Independent, November 21 2015

Grow your own happiness: how meditation physically changes the brain. The Daily Telegraph, November 20 2015

Is the search for happiness over? Experts discover the part of the brain that determines how cheerful we are. Mail Online, November 21 2015

Links To Science

Sato W, Kochiyama T, Uono S, et al. The structural neural substrate of subjective happiness. Scientific Reports. Published online November 20 2015

Categories: NHS Choices

One diet 'doesn’t fit all' – people 'metabolise food differently'

NHS Choices - Behind the Headlines - Fri, 20/11/2015 - 12:50

"No one diet fits all," the Daily Mail reports. 

Israeli researchers monitored 800 adults to measure what is known as postprandial glycemic response – the amount by which blood sugar levels increase after a person eats a meal. This measure provides a good estimate of the amount of energy that a person "receives" from food.

The researchers found high variability in postprandial glycemic response across individuals who consumed the same meals.

They found these differences were related to the individual's characteristics, and developed a model (known as a "machine learning algorithm") to predict an individual's response to a given meal.

When 12 individuals were put on two different tailored meal regimens predicted by this model to either give lower blood sugar levels or higher levels for a week each, the prediction was correct in most of the individuals (10 of the 12).

Results of the study should be interpreted with some caution due to limitations. The main one is that the sample in which the diets were tested was small, with a short follow-up period. The study looked at post-meal blood sugar levels and not weight, so we cannot say what the impact on weight would be.

Still, the concept that a machine learning algorithm model could be used to create a personalised diet plan is an intriguing idea. In the same way Netflix and Amazon "learn" about your TV viewing preferences, the plan could "learn" what foods were ideally suited to your metabolism.  

Where did the story come from?

The study was carried out by researchers from the Weizmann Institute of Science, Tel Aviv Sourasky Medical Center and Jerusalem Center for Mental Health – all in Israel. 

The study was funded by the Weizmann Institute of Science, and the researchers were supported by various institutions, such as the Israeli Ministry of Science, Technology and Space.

The study was published in the peer-reviewed scientific journal Cell.

The Daily Mail's reporting implies the study explains why different weight loss diets perform differently in different individuals, but we cannot say this based on the research. 

The study only aimed to look at blood sugar levels after a meal – not weight. It also did not compare the personalised dietary plans the researchers developed against popular weight loss diet plans such as the 5:2 diet.


What kind of research was this?

This study aimed to measure the differences in post-meal blood glucose levels between individuals and to identify personal characteristics that can predict these differences.

The researchers then used a small randomised controlled trial (RCT) to identify whether personalising meals based on this information could help reduce post-meal blood glucose levels.

Researchers say that the prevalence of raised blood sugar is increasing rapidly in the population. This has led to an increase in the proportion of people with "pre-diabetes", where a person has higher blood sugar than normal but does not meet all of the criteria required for a diagnosis of diabetes. They say that up to 70% of people with pre-diabetes eventually develop type 2 diabetes.

Having high blood sugar levels after meals is reported to be linked to an increased risk of type 2 diabetes, as well as obesity, heart disease and liver disease.

The researchers hoped that by understanding the factors responsible for variations in post-meal blood glucose levels they could use this information to personalise dietary intake to reduce those levels.


What did the research involve?

Stage I

This study started with 800 healthy and pre-diabetic individuals (aged 18-70 years). The cohort was representative of the individuals without diabetes in Israel. Just over half (54%) of the cohort was overweight and 22% were obese.

Researchers started by collecting data on food intake, lifestyle, medical background and anthropometric measurements (such as height and weight) for all the study participants. A series of blood tests was carried out and a stool sample (used to assess gut microbial profile) was also collected.

Participants were then connected to a continuous glucose monitor (CGM) over seven days. The machine was placed on the individual's skin to measure glucose in interstitial fluid – the fluid in and around the body's cells – every five minutes for a week. They were also asked to accurately record their food intake, exercise and sleep using a smartphone-adapted website developed by the researchers.

Over this period, the first meal of each day was a standardised meal given to all participants to see how their blood glucose responses differed. Other than that, they ate their normal diets.

Researchers then analysed the relationship between an individual's characteristics and their post-meal glucose levels. They developed a model based on these characteristics that would predict what these levels would be. They then tested their model on 100 other adults.
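The article describes this only as a "machine learning algorithm", so the following Python sketch is just a guess at the general shape of such an approach – a gradient-boosted regressor trained on personal characteristics to predict post-meal glucose response. Every feature name and data value here is an invented stand-in, not the study's actual input.

    # Hypothetical sketch of a glucose-response prediction model.
    # Features and data are synthetic stand-ins for the kinds of inputs
    # mentioned in the article (BMI, blood pressure, meal content, microbiome).
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 800  # same order of magnitude as the study cohort

    # Columns: BMI, systolic blood pressure, meal carbohydrate (g), microbiome score
    X = rng.normal(loc=[27, 120, 50, 0], scale=[4, 15, 20, 1], size=(n, 4))
    # Synthetic post-meal glucose response, loosely dependent on the features
    y = 0.8 * X[:, 2] + 1.5 * X[:, 0] + 5.0 * X[:, 3] + rng.normal(0, 10, n)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

    print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")
    # Predicting a new individual's response to a candidate meal:
    print(f"Predicted response: {model.predict([[30, 130, 70, -0.5]])[0]:.1f}")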

Stage II

To assess whether personally tailored dietary interventions could improve post-meal blood sugar levels, researchers carried out a randomised crossover trial.

This trial included 26 new participants who were connected to continuous glucose monitors (CGM) and had the same information collected as the 800-person cohort over a week. This allowed the researchers to identify their personal characteristics and blood glucose responses to meals.

After this, participants were allocated to one of two groups, each receiving personalised diets devised in a different way. One group (the "prediction" group) received meal plans based on what the researchers' model predicted would be a "good" or a "bad" diet for them. They followed these two meal regimens for a week each, in random order:

  • one regimen was based on meals that were predicted to produce "low" post-meal blood sugar levels (good diet) in the individual
  • one regimen was based on meals predicted to produce "high" post-meal blood sugar levels (bad diet) in the individual

The second group (the "expert" group) took part in a similar process, but their "good" and "bad" diets were based on what a clinical dietitian and researcher selected for them based on looking at the person's responses to different meals in the first week of the study.

Participants and researchers did not know whether the "good" or "bad" regimen was being followed at any point during the study – so both groups were blinded.


What were the basic results?

Overall, the study found high variability in post-meal blood sugar levels across the 800 individuals even when they consumed the same meal. They found that many personal characteristics were associated with their post-meal blood glucose levels, including their body mass index (BMI) and blood pressure, as well as what the meal itself contained.

One example, given in an interview with the Mail, was the case of a woman whose blood sugar levels spiked dramatically after eating tomatoes.

The researchers developed a model based on these characteristics to predict an individual's glucose levels after a meal. This model was better at predicting post-meal glucose levels than simply looking at how much carbohydrate or how many calories the meal contained. The model performed similarly well when tested in a different group of 100 adults.

The researchers found that most of the individuals on the "prediction" diet (10 out of 12; 83%) had higher post-meal blood glucose levels during their "bad" diet week than their "good" diet week. This was slightly better than the "expert" diet – where eight out of 14 participants (57%) had higher post-meal blood glucose levels during their "bad" diet week.


How did the researchers interpret the results?

Researchers concluded that this research suggests: "personalised diets [including the one based on their algorithm] may successfully modify elevated postprandial blood glucose and its metabolic consequences".



Conclusion

This study assessed the differences in post-meal blood sugar levels – medically known as postprandial glycemic responses (PPGR) – across 800 non-diabetic adults, and found a lot of variation between individuals. 

They developed a model based on a wide range of personal characteristics, such as a person's BMI and gut microbial profile, which could predict their response to a given meal.

In a small crossover trial, the researchers found that tailoring meals to individuals based on this model could help lower post-meal blood sugar levels.

This study has some strengths and limitations. Its strengths include the relatively large sample size used to analyse the relationship between personal characteristics and post-meal blood sugar levels, and the fact the model they developed was then checked in a new group of individuals.

The main limitation of this study is that the actual testing of the personalised diets was done in a small sample of only 26 people, with only 12 of these getting the diet based on the model's predictions.

What we can say based on these results is also limited by the short follow-up period and the fact that only blood glucose levels were measured. We cannot say what effects these different diets would have on a person's weight or risk of diabetes in the long term.

It appears the research team is now looking into finding commercial applications for this approach. It would be feasible to combine a continuous glucose monitor with a smartphone application that creates a personalised diet plan. If successful, such an application would likely become very popular.

Links To The Headlines

No ONE diet fits all: How your body reacts to Atkins, Paleo and the 5:2 'is determined by your metabolism'. Daily Mail, November 20 2015

Expert diet tips could be WRONG as we all metabolise food in different ways. Daily Mirror, November 19 2015

Links To Science

Zeevi D, Korem T, Zmora N, et al. Personalized Nutrition by Prediction of Glycemic Responses. Cell. Published online November 19 2015

Categories: NHS Choices

Last line in antibiotic resistance under threat

NHS Choices - Behind the Headlines - Thu, 19/11/2015 - 12:00

"The last line of antibiotic defence against some serious infections is under threat," The Guardian reports, after researchers found that E.coli bacteria from food products in China has developed resistance to colistin – a polymixin antibiotic.

This antibiotic is, in a sense, a weapon of last resort in the antibiotics armoury, and is sometimes used to treat serious infections that have become resistant to other strong antibiotics.

The researchers found that colistin resistance was caused by a gene called MCR-1. This gene was found on a piece of bacterial DNA that can be transferred between bacteria.

They took a number of samples from animals in abattoirs, and raw meat from open markets and supermarkets in China to identify how frequently the MCR-1 gene is found in bacteria.

The study found the MCR-1 gene in E. coli collected from 15% of raw meat samples and 21% of animals tested from 2011-14. The gene was also found in E. coli from 1% of hospital inpatients in China.

As this study was conducted in China, we do not know whether the situation is the same in the UK. However, antibiotic resistance is a global concern that could potentially advance more quickly than new antibiotics can be developed.

An editorial accompanying the study recommends that the use of polymyxins should be restricted in agriculture, as we could otherwise end up with a situation where doctors are forced to say: "Sorry, there is nothing I can do to cure your infection". 

Where did the story come from?

The study was carried out by researchers from a number of institutions, including the South China Agricultural University and the China Agricultural University.

It was funded by the Chinese Ministry of Science and Technology, and the Chinese National Natural Science Foundation.

The study was published in the peer-reviewed medical journal, The Lancet Infectious Diseases.

This research has been reported on widely and accurately by the UK media, but we do not know if the findings and level of risk apply to the UK population.  

What kind of research was this?

This laboratory study aimed to investigate the cause of resistance to one of the strongest "last resort" groups of antibiotics.

During routine surveillance of E. coli bacteria isolated from livestock in China, researchers observed an increase in resistance to the antibiotic colistin.

Colistin is a very strong polymyxin antibiotic. It is given directly into the vein (intravenously) to treat serious infections – such as lung or urinary tract infections – where other strong injected antibiotics are not effective, mostly because bacteria have developed resistance to them.

The finding that bacteria seem to be developing resistance to colistin is therefore a major concern. The researchers wanted to find out how the bacteria developed this resistance.

This type of study is useful to investigate how the antibiotic resistance developed and how it can be transferred between bacteria cells. It also gives some indication of how common these resistant bacteria are in China. Whether there has been a wider spread of the resistance needs to be investigated further.  

What did the research involve?

This study aimed to investigate the cause of a major increase in E. coli resistance to the class of antibiotics known as polymyxins seen in livestock in China. 

The researchers selected one E. coli strain (SHP45) for investigation, as this strain had shown colistin (polymyxin) resistance. The researchers identified that the cause of the resistance seemed to be a gene called MCR-1, which was found on a piece of DNA called a plasmid.

Bacteria are capable of transferring plasmids to other bacteria, which could help antibiotic resistance spread. The researchers therefore investigated the possibility that these bacteria could transfer plasmid-mediated colistin resistance. Pig strains of colistin-resistant E. coli and another type of bacteria called K. pneumoniae were chosen for this investigation.

To test how widespread this resistant gene was, samples of bacteria called clinical isolates were collected from inpatients at two hospitals in China and screened for the presence of the MCR-1 gene.

Further samples were collected from pig abattoirs and raw meat from 30 open markets and 27 supermarkets located in seven regions of Guangzhou from 2011-14. One isolate was collected from each animal and retail meat sample, and was then screened to look at the spread of MCR-1 in animals and food.

Mice were used to investigate whether the colistin-resistant E. coli collected from an inpatient could still resist the antibiotic in a living animal: infected mice were injected with the equivalent of human colistin doses to see whether the treatment worked. 

What were the basic results?

The researchers found that the cause of colistin resistance was the MCR-1 gene. The resistance gene was found to be transferred between bacterial cells through a process called conjugation, where plasmids are passed from one bacterium to another. Researchers found that this transfer was able to take place across bacterial species, from E. coli to K. pneumoniae.

Between 2011 and 2014, the MCR-1 gene was found in E. coli isolates collected from 78 (15%) of 523 samples of raw meat, 166 (21%) of 804 animals, and 16 (1%) of 1,322 samples from hospital inpatients with infection.
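As a quick check, the quoted percentages follow from the raw counts; a minimal Python sketch using only the figures in the paragraph above:

    # Verifying the quoted percentages against the raw sample counts.
    samples = {
        "raw meat": (78, 523),
        "animals": (166, 804),
        "hospital inpatients": (16, 1322),
    }
    for source, (positive, total) in samples.items():
        print(f"{source}: {positive}/{total} = {100 * positive / total:.0f}%")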

How did the researchers interpret the results?

The researchers concluded that the MCR-1 gene is able to cause resistance to colistin and is transferred between bacterial cells by the process of conjugation. 

Although currently confined to China, MCR-1 is likely to spread further. More surveillance and molecular epidemiological studies on the spread of this gene are urgently required. 


Conclusion

This Chinese study followed on from previous routine surveillance, which found that some livestock were carrying E. coli bacteria resistant to one of the "last resort" groups of antibiotics used in humans.

Here, the researchers investigated how this resistance developed and how it can be transferred between bacterial cells. They found it is caused by the MCR-1 gene, which is found on a piece of DNA that can be transferred between bacteria. This gene was found in E. coli isolated from a number of raw meat and animal samples taken by the research team. 

The prevalence of MCR-1 in the E. coli samples was quite high, which is of some concern and suggests the gene may already be widespread among livestock in China. However, as the researchers acknowledge, they took a relatively small number of samples, and they caution against the results being extrapolated too far.

As China is the world's largest producer of poultry and pig products, this is of great concern to its population and economy. The researchers suggest a possible reason for this antibiotic resistance is the use of colistin in animal feed in China.

It is unclear whether the situation may be similar in other countries. Antibiotic resistance is a global concern that could potentially advance more quickly than new, stronger antibiotics can be developed.

Without effective antibiotics, infections we regard as non-serious and routine operations could carry a much higher risk of serious complications. Further studies to investigate the ways bacteria develop resistance, and how we can tackle this problem, are needed.

There are several things you can do to help prevent the development of antibiotic or other antimicrobial resistance. These include recognising that many common respiratory and gastrointestinal infections are viral and do not need – and will not respond to – antibiotics.

If you are given a course of antibiotics for any condition, it is very important to take the full course as prescribed, even if you begin to feel better. This prevents bacteria being exposed to a dose of antibiotics that is too small to eradicate them – one that would give them a "taste" of the antibiotic and allow them to develop resistance. 

Links To The Headlines

Antibiotic defences against serious diseases under threat, experts warn. The Guardian, November 18 2015

Antibiotic resistance: World on cusp of 'post-antibiotic era'. BBC News, November 19 2015

E.coli has developed resistance to last-line of antibiotics, warn scientists. The Daily Telegraph, November 18 2015

Antibiotic resistant superbugs pose a global threat after breaking through last line of defence, doctors warn. The Independent, November 19 2015

Superbugs 'are now resistant to ALL drugs': Scientists announce germs have breached last line of antibiotic defences that could lead to untreatable infections. Daily Mail, November 19 2015

Links To Science

Liu Y, Wang Y, Walsh TR, et al. Emergence of plasmid-mediated colistin resistance mechanism MCR-1 in animals and human beings in China: a microbiological and molecular biological study. The Lancet Infectious Diseases. Published online November 18 2015

Categories: NHS Choices

Coffee 'can make you live longer' claims

NHS Choices - Behind the Headlines - Wed, 18/11/2015 - 15:40

"Drinking three to five cups of coffee a day could help people live longer, new research has found," The Independent reports. 

Research suggests a link between regular coffee consumption and reduced risk of chronic diseases, such as heart disease – whether people drank the normal or decaffeinated variety.

The results come from three studies with a total of 208,501 health professionals, followed up for more than 20 years. Overall, people who drank one to five cups of coffee a day were slightly less likely to have died by the end of the study, compared to people who did not drink coffee at all.

People who drank more than five cups a day were no more or less likely to have died. However, the results changed, depending on whether the researchers included people who smoked. This may be because heavy coffee drinking and smoking often go together, so the unhealthy effects of smoking may cancel out any minimal effects from coffee.

The results suggest that regular coffee drinking may have some benefits. However, the differences in the chance of death between coffee drinkers and non-coffee drinkers, while statistically significant, are modest, ranging from a 5% to a 9% reduction in risk.

The study cannot prove cause and effect, and even if it could, the results suggest that daily coffee consumption will do little for your long-term health if your general lifestyle is unhealthy.


Where did the story come from?

The study was carried out by researchers from the Harvard School of Public Health, Brigham and Women's Hospital, Harvard Medical School, Indiana University, the Universidad Autonoma de Madrid and the National University of Singapore. 

It was funded by the US National Institutes of Health. There were no reported conflicts of interest.

The study was published in the peer-reviewed medical journal Circulation on an open-access basis, which means it is free for anyone to read online.

The Independent and The Daily Telegraph reviewed the study in light of other recent research on coffee, giving a cautious welcome to positive findings and balancing this with warnings of the health risks (such as disrupted sleep) associated with caffeine. 

The Metro was less cautious, asserting that the research means people who don't drink coffee are "missing out" and should "drink more of the black stuff". 

The news reports did not include the actual figures about the differences in risk of death between coffee drinkers and non-coffee drinkers.

Some UK media sources carried the eminently sensible advice from Emily Reeve, Senior Cardiac Nurse at the British Heart Foundation, who said: "It is important to remember that maintaining a healthy lifestyle is what really matters if you want to keep your heart healthy, not how much coffee you drink."


What kind of research was this?

This was a prospective cohort study, based on three big groups (called cohorts) of health professionals, which aimed to see whether drinking caffeinated or decaffeinated coffee was associated with risk of death.

Cohort studies are observational, which means they watch to see what happens to people. This type of study can find links between factors (in this case, coffee drinking and length of life) but cannot show that one factor is the cause of another.


What did the research involve?

Researchers used information from three big cohort studies of healthcare workers in the US, which started in the 1970s and 1980s, and ran until December 2012. They looked at whether people drank coffee, and if so how much, and then followed them up to see whether they died during the course of the study. They adjusted their figures to take account of other factors that could affect the results, such as people’s age and lifestyle.

They were particularly interested in whether people smoked, and how that affected both coffee drinking and the results, because coffee drinking and smoking often go together. They also wanted to see whether decaffeinated and caffeinated coffee had different effects, and whether coffee drinking had an effect on deaths from specific diseases. They carried out different calculations, using data from the cohort studies to answer these questions.

The analysis of the data included tests to see whether people's coffee consumption changed over time, whether the results were affected by medical conditions people had at the start of the study, and people's diet, body mass index, smoking status and how often they exercised. The researchers analysed the data separately for each cohort, and then pooled it together.


What were the basic results?

Overall, the study found that 31,956 of the 208,501 people studied had died during the 21 to 28 years they were followed up. There was an association between coffee drinking and risk of death. Compared to people who drank no coffee (the sketch after this list shows how these percentages follow from the hazard ratios):

  • People who drank one cup of coffee a day or less were 5% less likely to have died (hazard ratio (HR) 0.95, confidence interval (CI) 0.91 to 0.99).
  • People who drank one to three cups a day had a 9% lower chance of having died (HR 0.91, 95% CI 0.88 to 0.95).
  • People who drank more than three but fewer than five cups a day had a 7% lower chance of having died (HR 0.93, 95% CI 0.89 to 0.97).
  • People who drank five or more cups a day had no significantly different risk of death (HR 1.02, 95% CI 0.96 to 1.07).
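The percentages above follow directly from the hazard ratios: the relative change is simply (HR - 1) x 100. A minimal Python sketch, using the figures quoted in the list:

    # Converting the quoted hazard ratios into "% lower/higher chance of death".
    hazard_ratios = {
        "one cup a day or less": 0.95,
        "one to three cups a day": 0.91,
        "more than three to under five cups": 0.93,
        "five or more cups a day": 1.02,  # CI crosses 1.0, so not significant
    }

    for group, hr in hazard_ratios.items():
        change = (hr - 1) * 100
        direction = "lower" if change < 0 else "higher"
        print(f"{group}: {abs(change):.0f}% {direction} chance of death")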

It made little difference whether people drank caffeinated or decaffeinated coffee. However, when divided into these two subgroups, the risk reductions were only significant up to three cups a day. The separate analyses found that drinking more than three cups of either caffeinated or decaffeinated was not associated with mortality risk.

The researchers also found that non-smokers were less likely to drink coffee, and that only about a third of people who drank more than five cups a day were non-smokers. 

They ran the figures again, this time including only people who had never smoked. This time they found that drinking more than five cups a day was also associated with a reduced chance of death compared with people who didn't drink coffee at all – meaning any amount of coffee seemed to be linked with a lower risk of death, so long as people didn't smoke. 

However, this could also be down to the smaller number of people in the >5 cup group when restricted to non-smokers, making the accuracy of this risk estimate slightly less reliable.

Looking at specific diseases, the study found people who drank coffee were less likely to have died from cardiovascular disease and diabetes, but more likely to have died from lung cancer or respiratory disease. 

The researchers suspected that smokers were behind this result, so ran the figures again with non-smokers only, and found that the increased risk disappeared. Overall, there was no increase or decrease in risk of death from cancer, linked to drinking coffee.


How did the researchers interpret the results?

The researchers say that coffee consumption is linked to a lower risk of death, and that their finding of no reduced risk for those drinking over five cups a day was probably down to confounding by the numbers of heavy coffee drinkers who smoked. 

They say there are "several plausible biological mechanisms" by which coffee might benefit health, including substances in coffee which reduce resistance to insulin and calm inflammation in the body.



Conclusion

This large study found that people who drink coffee have a slightly reduced risk of death compared to non-coffee-drinkers, up to the point of five cups a day. Beyond five cups, the picture is more complicated – it may be, as the researchers say, because of the link between heavy coffee drinking and smoking. However, we can't be sure that's the case.

The results for moderate coffee drinking are more consistent, but they still do not prove that coffee alone is the reason coffee drinkers were less likely to die during the study. The study has several strengths, including its large collective sample size, long duration of follow-up, and its attempt to take into account various potential confounding factors, particularly smoking. However, the analyses may not have been able to account for the full effect of these or other, unmeasured health and lifestyle factors that could be influencing the results.

Other limitations include the possibility of inaccurate estimation of coffee intake. Although the study separated coffee into caffeinated and decaffeinated, it cannot tell us about all the nuances of coffee drinking today – such as instant, freshly ground, espresso, latte, cappuccino and so on. Also, though the sample size was large, it included only US health professionals, who may have distinct characteristics from other populations. 

It is also important to note that the reduction in risk of death from drinking coffee, at less than 10% relative risk, is fairly small. There are other reasons why some people might want to avoid caffeine. It's a stimulant, and can interfere with sleep, especially if you drink it in the evening. It can raise blood pressure for a short time, which might be a problem for people with heart disease. It has also been linked to miscarriage, so pregnant women might want to avoid it.

If you want to increase your chances of living longer, coffee is unlikely to make a big difference. You'd be better off quitting smoking (if you smoke), eating a healthy diet, taking plenty of exercise and achieving or maintaining a healthy weight.

Links To The Headlines

5 cups of coffee a day might make you live longer, study suggests. The Independent, November 17 2015

Is coffee really good for you? The Daily Telegraph, November 17 2015

Could five cups of coffee a day help you live longer? Drink found to reduce chance of heart disease, Parkinson's and Type 2 diabetes thanks to compound in the beans. Mail Online, November 17 2015

5 reasons why drinking coffee might help you live longer. Metro, November 17 2015

Links To Science

Ding M, Satija A, Bhupathiraju SN, et al. Association of Coffee Consumption with Total and Cause-Specific Mortality in Three Large Prospective Cohorts. Circulation. Published online November 16 2015

Categories: NHS Choices

Region of brain 'shorter' in people with hallucinations

NHS Choices - Behind the Headlines - Wed, 18/11/2015 - 15:00

"A study of 153 brain scans has linked a particular furrow, near the front of each hemisphere, to hallucinations in schizophrenia," BBC News reports.

While schizophrenia is commonly associated with hallucinations – seeing, hearing and, in some cases, smelling things that are not real – around 3 out of 10 people with schizophrenia do not have them.

Researchers compared the brain scans of people with schizophrenia who have experienced hallucinations with those who have not. They focused on the paracingulate sulcus (PCS) – a fold in the frontal part of the brain – as previous research has associated the PCS with our ability to distinguish between reality and imagination.

The research found the PCS was significantly shorter in people with schizophrenia who've experienced hallucinations, compared with others with schizophrenia who've not had hallucinations, as well as healthy population controls.

The study is undoubtedly of value in furthering our understanding of the brain structure of people who experience abnormal perceptions. However, further research is needed to investigate whether this is a risk factor or a consequence of the condition. As such, at present it has no preventative or therapeutic implications.  

Where did the story come from?

The study was carried out by researchers from the University of Cambridge, Durham University, Trinity College Dublin, and Macquarie University. 

The individual researchers received various sources of financial support, including from the Medical Research Council and the Wellcome Trust. 

The study was published in the peer-reviewed scientific journal Nature Communications on an open access basis, so it is free to read online.

BBC News gives reliable and balanced coverage of this research. 

What kind of research was this?

This was a cross-sectional study comparing the brain scans of people with schizophrenia who have experienced hallucinations with those who have not.

Hallucinations are when a person sees, hears, smells or has other sensory perceptions of something that isn't there. Along with abnormal thought patterns and beliefs (delusions), they are one of the characteristic features of schizophrenia. 

However, not everyone with the condition experiences hallucinations – around a third of people meeting diagnostic criteria for schizophrenia do not report having them.

Various neurological factors are thought to underlie hallucinations. In this study, the researchers focused on examining the structure of the paracingulate sulcus (PCS) in the frontal part of the brain. 

A previous study suggested this part of the brain influences our ability to distinguish between real and imagined events.     

This kind of research design can look to see if there is any link between PCS and hallucinations, but it cannot draw conclusions on the causality.  

What did the research involve?

The research included three groups of people:

  • those with schizophrenia who have experienced hallucinations (n=70)
  • those with schizophrenia who have not (n=34)
  • a control sample of healthy people without schizophrenia or experience of hallucinations (n=40)

Roughly half of those with schizophrenia who'd had hallucinations had experienced auditory ones. The remainder had experienced other sensory hallucinations. The majority of these people were male and had an average age of around 40. 

The other two groups were matched accordingly to give comparable age and gender proportions. They were also all matched by IQ and right- or left-handedness.

An MRI scanner was used to scan and measure the length of the PCS in both halves of the frontal part of the brain. The PCS was defined as "prominent" if the length was above 40mm, "absent" if the length was below 20mm, and "present" if it fell between the two. 

The measurements were taken by researchers who were unaware of the person's condition. 

What were the basic results?

The researchers found PCS length differed between those who had and had not experienced hallucinations. It was significantly shorter in those with schizophrenia who had hallucinations, compared with people with schizophrenia who hadn't had hallucinations (average 19.2mm shorter) and healthy controls (average 29.2mm shorter).

The difference in PCS length between the latter two groups – those with schizophrenia without hallucinations and healthy controls – was not statistically significant.

In all subjects, the PCS in the left half of the frontal lobe was longer than that in the right half. For people with schizophrenia and hallucinations, the PCS was significantly shorter than in the healthy controls in both brain halves, but significantly shorter than in the group with schizophrenia without hallucinations in the left half only.

Overall, the researchers' modelling suggested a 10mm reduction in PCS length in the left half was associated with 19.9% increased odds the person had experienced hallucinations. 
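Assuming this figure comes from a logistic model, in which odds scale multiplicatively with length (the article does not specify the model, so this is an assumption), the implied effect of larger reductions can be worked out as below. Note that doubling the reduction does not simply double the percentage.

    import math

    # Reported association: a 10mm reduction in left PCS length was linked
    # to 19.9% higher odds of hallucinations, i.e. an odds ratio of 1.199
    # per 10mm. Assuming a logistic model, odds combine multiplicatively.
    beta_per_mm = math.log(1.199) / 10  # implied log-odds per mm (~0.018)

    for reduction_mm in (10, 20, 30):
        odds_ratio = math.exp(beta_per_mm * reduction_mm)
        print(f"{reduction_mm}mm shorter: {(odds_ratio - 1) * 100:.1f}% higher odds")
    # A 20mm reduction implies ~43.8% higher odds, not 2 x 19.9% = 39.8%.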

The type of sensory hallucination did not influence PCS length, suggesting this was an overall association with hallucinations in general, not specific to the nature of the perception. 

No other variables, such as overall brain volume and surface area or other characteristics of the illness, had a significant influence on PCS length. 

Another observation was that the volume of grey matter – the tissue containing nerve cell bodies – immediately surrounding the PCS was greater in those who had experienced hallucinations.

How did the researchers interpret the results?

The researchers concluded hallucinations are associated with specific differences in the PCS in the frontal part of the brain. 

They say their findings "suggest a specific morphological basis for a pervasive feature of typical and atypical human experience". 

Conclusion

Previous research suggested the paracingulate sulcus (PCS) – a fold in the frontal part of the brain – may be associated with our ability to distinguish between reality and imagination.

This study found further evidence in support of this association. People with schizophrenia who had experienced hallucinations seemed to have significantly shorter PCS length than people who have not experienced hallucinations – either those with schizophrenia or healthy people.

The samples are relatively small, so the findings might have been different with a much larger sample. However, performing MRI scans on very large numbers of people with and without schizophrenia is unlikely to be feasible, so this is perhaps the best evidence we are likely to get.

What is important to highlight, though, is this is a cross-sectional study taking one-off MRI scans. As such, it can only demonstrate PCS length is associated with the experience of hallucinations. It cannot tell us whether PCS length predicts the risk of hallucinations, or conversely whether PCS length has changed as the result of experiencing hallucinations.

Follow-up studies performing repeated MRI scans over time in people at high risk of, or who have developed, schizophrenia would be valuable to examine if the brain changes during the course of the condition and its development.

Also, as the researchers say, as the PCS develops around birth, it would be valuable to look at any differences in fold length in children and see whether this could be a risk factor.  

At present, though, the findings have no apparent preventative or therapeutic implications for either schizophrenia or the experience of hallucinations.

But despite the limited application of these findings, the study is undoubtedly of value in furthering our understanding of the brain structure of people who experience abnormal perceptions.

Links To The Headlines

Frontal brain wrinkle linked to hallucinations. BBC News, November 17 2015

Links To Science

Garrison JR, Fernyhough C, McCarthy-Jones S, et al. Paracingulate sulcus morphology is associated with hallucinations in the human brain. Nature Communications. Published online November 17 2015

Categories: NHS Choices

Study suggests disability test link to suicide risk

NHS Choices - Behind the Headlines - Tue, 17/11/2015 - 11:20

"Fitness to work tests linked to 590 extra suicides in England," warns the Daily Mirror. The paper reports a "horrific death toll" from the policy of reassessing disability benefit claimants. But there is reason to be cautious about whether the suicides were directly linked to Work Capability Assessments (WCAs).

WCAs, introduced in 2010, are intended to assess what work, if any, people are fit to do. People found to be fit for work are moved off disability benefit and expected to look for a job.

Researchers used data about the changing numbers of suicides, reported mental health problems, and prescriptions of antidepressants in 149 local authority areas in England. These were then compared with the numbers of people living in those areas who had undergone WCAs.

The main reported findings were that for every 10,000 WCAs in an area:

  • there were an estimated six extra suicides
  • there were 2,700 extra cases of reported mental health problems
  • GPs wrote an extra 7,020 antidepressant prescriptions

It cannot be assumed the WCAs were the direct cause of the increases in mental health problems seen in the study, which only compared rates per area.

This means we do not know whether the people who took their own lives or reported mental health problems had actually been through a WCA. Though, to be fair to the researchers, the government didn't release the data that would make such an analysis possible.

If you are troubled by persistent low mood, contact your GP. If you are thinking about suicide, you should phone Samaritans' free 24-hour helpline on 116 123. 

Where did the story come from?

The study was carried out by researchers from the University of Liverpool and the University of Oxford. It was funded by grants from the National Institute for Health Research, the Commission of the European Communities, and the Wellcome Trust.

The study was published in the peer-reviewed Journal of Epidemiology and Community Health on an open access basis, so it is available to read free online.

The tone of the media coverage differed, as you would expect with a story with such strong political overtones.

The Mail Online reported the study's findings in detail, but gave prominence to the Department for Work and Pensions' (DWP) description of the research as "wholly misleading", while the Daily Mirror used stronger, more emotive language, calling the extra suicides linked to assessments a "horrific death toll" and only including the DWP's statement at the very end of its report.

BBC News, The Guardian and The Independent gave more balanced coverage.

Buzzfeed News was the only news source to make the point that many of the limitations of the study were down to the DWP refusing to release more precise data on people who underwent a WCA.  

What kind of research was this?

This was an observational study, which looked at population-level data over time to see whether changes in rates of Work Capability Assessments (WCAs) were associated with rates of mental health problems. These types of observational studies can find links between factors, but cannot definitively prove that one causes another.  

What did the research involve?

Researchers collected information about mental health outcomes from 149 local authorities in England between 2004 and 2013. They looked to see how this was associated with the numbers of WCAs carried out in the different local authorities between 2010 and 2013. They adjusted their figures to take account of other factors that could have affected mental health outcomes.

The mental health outcomes studied were numbers of suicides – including deaths from injury of undetermined cause, sometimes used by coroners when it is unclear whether someone intended suicide – as well as the number of people reporting mental health problems in surveys and the number of antidepressant prescriptions written by GPs. 

For each outcome, the researchers calculated the rate per 100,000 people, looking only at adults aged 18 to 65 (those of working age who might be affected by WCAs).

Local authorities introduced WCAs at different rates, depending partly on the number of people in an area receiving disability benefits and the number of staff available to carry out the assessments.

The researchers looked at how many people in an area had been through a WCA per 10,000 by the end of each quarter from 2010-13. They used these figures to look for links between WCAs and mental health outcomes.

Because more WCAs were carried out in deprived areas, the researchers adjusted their figures to take account of different deprivation, employment, wages and local authority spending levels, as well as looking at long-term trends in mental health conditions in individual areas.

They carried out a number of checks for other external factors that might influence results (confounders), including looking for links you would not expect to see, such as between WCA rates and mental health problems in adults over 65.

They also looked at whether the number of people with mental health problems increased before or after the number of WCAs in an area rose. All these tests were designed to make the results as reliable as possible.  

What were the basic results?

The study found rates of suicides, mental health problems and prescriptions of antidepressants were higher in areas that carried out more WCAs, after adjusting for baseline differences between areas. 

The researchers estimated that for every 10,000 people reassessed, you would expect to see an additional six suicides (95% confidence interval [CI] 2 to 9), an extra 2,700 reports of mental health problems (95% CI 548 to 4,840) and 7,020 extra antidepressant prescriptions (95% CI 3,930 to 10,100).

Between 2010 and 2013, 1.03 million people, or 80% of existing disability claimants, were reassessed using the WCA, equivalent to 3,010 per 10,000 of the population.

During the study period the researchers calculated there were 590 additional suicides (5% of all suicides), 279,000 additional self-reported mental health problems (11% of total), and 725,000 more antidepressants prescribed (0.5% of total).
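
As a back-of-envelope check, the national totals follow from scaling the per-10,000 figures up to the 1.03 million people reassessed; because the headline rates are rounded, this only approximately reproduces the study's own modelled totals:

    reassessed = 1_030_000        # people who underwent a WCA, 2010-13
    blocks = reassessed / 10_000  # number of 10,000-person blocks

    print(blocks * 6)      # ~618 extra suicides (study's modelled total: 590)
    print(blocks * 2_700)  # ~278,100 extra mental health problems (study: 279,000)
    print(blocks * 7_020)  # ~723,060 extra prescriptions (study: 725,000)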

The extra tests designed to take other factors that might have affected the results into account did not find any evidence other factors were involved. 

How did the researchers interpret the results?

The researchers said this was the first analysis of the effects of the WCA policy on mental health, and the results indicate that, "It may have had substantial adverse consequences".

They say the process is potentially harmful and doctors should consider their involvement in implementing WCAs on ethical grounds.  

Conclusion

It is always hard to assess the direct impact of an intervention, outside the context of a randomised controlled trial. When the intervention is a social policy affecting thousands of people in very different circumstances around the country, the difficulty is that much greater.

The researchers did the best they could to guard against problems such as unexplained factors that might have affected the results, or reverse causality, where what looks like a result of an intervention is actually a cause of it.

Despite this, the study can only demonstrate associations between the data. We cannot say for sure that the WCAs were the direct cause of the mental health outcomes examined.

There is probably no way to tell this, even if you examined every single case of mental illness and suicide to find out whether the individual had been through a WCA and what the impact on them had been.

Mental health is complex, and influenced by various hereditary, health, personal and lifestyle factors. It is rarely possible to identify a single definite cause for suicide.

Population-level studies like this provide the best evidence we are likely to get about the potential effects of social policies, but they cannot provide firm answers.

It is important to keep the results in perspective. While the number of extra suicides (590) linked to WCAs sounds like a lot, this is an estimate. The researchers say the figure could be anywhere between 220 and 950, which is quite a wide margin of error. And the number of people who have undergone WCAs is much bigger – well over a million.

We also cannot ignore the point that for many disabled people, regular employment can be empowering, not a burden. Disability needn't be an obstacle to working – there's a lot of guidance, support and training that can help you get back to work. Read more advice about disability and work.

If you, or someone close to you, is suffering from a mental health problem or thinking about suicide, it's vital to get help right away. There are many sources of support and good treatments for depression and anxiety, which can help people through difficult times.  

Links To The Headlines

Fit-for-work tests may have taken serious toll on mental health – study. The Guardian, November 16 2015

'More suicides' in government disability test areas. BBC News, November 17 2015

Fitness to work tests linked to 590 extra suicides in England say experts. Daily Mirror, November 17 2015

Iain Duncan Smith's tougher fit-to-work tests 'coincide with 590 additional suicides'. The Independent, November 17 2015

Almost 600 Suicides Could Be Related To DWP Work Assessments, Claims New Research. The Huffington Post, November 17 2015

Disability Benefit Assessments Linked To Suicides, Study Finds. Buzzfeed News, November 16 2015

Harder fit-for-work tests connected to rise in number of suicides: Checks on disability benefit are linked to 590 deaths and thousands of anti-depressant prescriptions. Mail Online November 17 2015

Links To Science

Barr B, Taylor-Robinson D, Stuckler D, et al. 'First, do no harm': are disability assessments associated with adverse trends in mental health? A longitudinal ecological study. Journal of Epidemiology and Community Health. Published online November 16 2015

Categories: NHS Choices

Study calls for smartphones and tablets to have 'bedtime mode'

NHS Choices - Behind the Headlines - Mon, 16/11/2015 - 11:55

"Smartphones, tablets and e-readers should have an automatic 'bedtime mode' that stops them disrupting people's sleep," BBC News reports.

The concern is the devices emit short-wavelength blue light, which may disrupt the production of melatonin, a hormone that helps us sleep.

The news comes from a study that examined the short-wavelength blue light emissions produced by three commonly used devices:

  • a tablet – iPad Air
  • an e-reader – Kindle Paperwhite first generation
  • a smartphone – iPhone 5s

Previous research suggested the blue light these devices emit can have a disruptive effect on the sleep hormone melatonin when they are used around bedtime.

This study confirmed the three devices do produce this light, with text producing slightly more intense light levels than the popular Angry Birds game. It also found special orange safety glasses filter out some of the blue light, and a sleep app for children produces less blue light. The researchers suggest the design of future devices and apps could be adapted to limit the colour palette at night.

But this wasn't an experimental study in people. The study didn't examine whether using these devices before sleep had a significant effect on sleep quality and duration.

Still, most sleep specialists stress the importance of good sleep hygiene – adopting a regular pattern in the evening that helps both body and mind wind down and relax ahead of sleep. 

Where did the story come from?

The study was carried out by researchers from Evelina London Children's Hospital sleep medicine department, King's College London and the University of Surrey, and received no sources of external funding.

It was published in the peer-reviewed medical journal Frontiers in Public Health on an open-access basis, so it is free to access online.

The UK media's reporting could have benefited from making it clearer that this research didn't actually prove these devices disrupt sleep.

No people were involved in this study, which only measured the light the devices produce. In particular, it is unclear where the Daily Mail's "extra hour's sleep" suggestion comes from.

Also, The Daily Telegraph's slant towards children may suggest this study involved them. It only examined light from the "Angry Birds" game, which is popular with both children and adults (including, apparently, Prime Minister David Cameron).  

What kind of research was this?

This study examined the short-wavelength blue light emissions produced by a tablet (iPad Air), e-reader (Kindle Paperwhite first generation) and a smartphone (iPhone 5s).

The researchers say there is growing evidence to suggest using light-emitting (LE) devices in the evening may have an adverse effect on sleep quality, duration and daytime performance. Behind the Headlines discussed similar research earlier this year, as well as in 2013.

It is said the brightness, colours and patterns of these devices may influence our body rhythms, particularly when used before bed. Light and brightness during the day have a positive effect on alertness, function and mood, but at night they can impair the production of the sleep hormone melatonin, and so affect sleep.

In particular, short-wavelength blue light is believed to have the most disruptive effect on melatonin. This study aimed to measure the blue light produced by three popular LE devices – a tablet, smartphone and e-reader – allowing comparison by activity type.  

What did the research involve?

The researchers selected the three most popular tablet, smartphone and e-reader devices according to sales data – the iPad Air, iPhone 5s and Kindle Paperwhite first generation, respectively. All of these devices are said to be easily viewed in darkness without additional room lighting ("backlighting").

The tests were therefore carried out in a dark room. Screen brightness for the tablet and smartphone were not altered from automatic settings, but the e-reader was reduced to 50% in accordance with user feedback.

An optical spectrometer – a device that can measure the frequency and wavelength of light – was used to measure light levels while displaying text on all devices, and then the game Angry Birds on the smartphone and tablet.

The researchers also looked at the effect of two devices designed to reduce light disruption:

  • blue-blocking, orange-tinted safety glasses
  • a sleep diary and behavioural advice app called Kids Sleep Dr, which is designed for evening or night use and uses a "sleep aware" palette of colours that changes the default display settings

What were the basic results?

The results are fairly complex, listing the spectral distribution of each device converted into equivalent "α-opic" illuminances – a measure of how strongly the light stimulates the different photopigments in the retina of the eye.

Essentially, all the devices showed similar short-wavelength blue light peaks when displaying text (around 445-455nm). The light intensity was slightly lower when showing Angry Birds.

The orange-tinted glasses significantly reduced the intensity of short-wavelength light that got through. The colour palette used in the Kids Sleep Dr app had a different spectral profile and also reduced short-wavelength light emissions.

How did the researchers interpret the results?

The researchers concluded that all the LE devices they tested produced short-wavelength enriched emissions. They went on to say that, "Since this type of light is likely to cause the most disruption to sleep as it most effectively suppresses melatonin and increases alertness, there needs to be the recognition that at night-time 'brighter and bluer' is not synonymous with 'better'." 

They suggest future software could be better optimised for anticipated night-time use, saying devices could have an automatic "bedtime mode" that shifts blue and green light emissions to yellow and red, as well as reducing backlighting and light intensity.
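
As an illustration of the idea only – a real implementation would adjust colour temperature at the display-driver level rather than per pixel, and the factors below are arbitrary – a "bedtime mode" colour shift might look like this in Python:

    def bedtime_shift(rgb, green_factor=0.7, blue_factor=0.3):
        """Attenuate the blue (and, to a lesser extent, green) channels
        to warm a colour for night-time display. Factors are illustrative."""
        r, g, b = rgb
        return (r, int(g * green_factor), int(b * blue_factor))

    print(bedtime_shift((255, 255, 255)))  # white -> a warmer (255, 178, 76)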

Conclusion

This study measured short-wavelength blue light emissions produced by widely used tablet, smartphone and e-reader devices when displaying text or a game.

The study demonstrates the devices do produce this light, which previous research suggested can have a disruptive effect on the sleep hormone melatonin. The research also found less blue light passes through special orange safety glasses, and a sleep app for children produces less blue light.

Little more can be said about the results of this study. Despite the media headlines, the study does not show these light-emitting devices disrupt our sleep or alter our melatonin levels.

This was not a sleep study where, for example, the researchers measured participants' sleep duration and quality when they did or did not use these devices before sleep.

There are also many other questions readers of these news headlines may have, such as:

  • Does it make a difference whether the user is a child or an adult?
  • Does it matter what activity I am using the device for? For example, as the emissions from the game are less than text, is this "safe" to use?
  • How long do the effects last? What time delay is needed between the last use of the device and trying to go to sleep?
  • Does the duration of last use make a difference?
  • Is it OK to sleep with the device in the room with me, or do I need to power off devices at night?

For a couple of these, the study has leant on previous research and recommendations to give some answers.

The researchers say Harvard Medical School suggests avoiding blue light two to three hours before going to bed, while the National Sleep Foundation suggests turning all electronic devices off at least an hour before bed. The researchers also suggest parents can easily remove devices from the bedrooms of young children or turn them off before they go to bed.

As the researchers rightly acknowledge, sleep duration and quality are rarely influenced by one factor alone. Many personal and environmental factors can contribute to this. Read more advice about methods that can help you, and your family, improve the quality of your sleep.

Links To The Headlines

Phones need 'bed mode' to protect sleep. BBC News, November 15 2015

'Bedtime mode' on smartphones would mean we'd all get an extra hour's sleep, expert claims. Daily Mail, November 15 2015

Smartphones and tablets need 'bedtime mode' to improve children's sleep. The Daily Telegraph, November 15 2015

Smartphones, tablets and e-readers need a 'bed mode', experts say. The Independent, November 15 2015

Using your phone before bed could disrupt your sleep by an hour, warns scientist. Daily Mirror, November 15 2015

Links To Science

Gringras P, Middleton B, Skene DJ, Revell VL. Bigger, Brighter, Bluer-Better? Current light-emitting devices – adverse sleep properties and preventative strategies. Frontiers in Public Health. Published online October 13 2015

Categories: NHS Choices

'New' sexually transmitted infection 'MG' may be widespread

NHS Choices - Behind the Headlines - Fri, 13/11/2015 - 11:00

"A sexually transmitted infection could have infected hundreds of thousands of people in the UK," The Guardian reports. 

The infection – mycoplasma genitalium (MG) – causes few, and often no, symptoms. It is unclear whether it could trigger complications such as infertility.

Many media sources describe MG as a new infection, but it was actually discovered in 1981, although at the time it was unclear if it was a sexually transmitted infection (STI).

New research suggests it could be. A large study of UK adults found 1 in 100 adults aged 16 to 44 were infected with MG, with the majority showing no symptoms. 

Black men and men from deprived areas were most likely to carry the bacteria, while infection risk increased for those with more sexual partners and those who did not practise safe sex.

MG infection was linked to a higher risk of post-sex vaginal bleeding – a possible sign of disease – but this was tentative, and the only sign the infection might be causing disease.

This study provides a prevalence estimate and insight into risk factors, but leaves the question of potential long-term harm unanswered. This question requires further investigation using different study types.

However, you can protect yourself from MG and other STIs by practising safe sex. The humble condom offers the best protection against STIs and can be used during penetrative, oral and anal sex.


Where did the story come from?

The study was carried out by researchers from universities based in London and was funded by the Medical Research Council, the Wellcome Trust, the Economic and Social Research Council, and the Department of Health, with support from an NIHR Academic Clinical Lectureship.

The study was published in the peer-reviewed International Journal of Epidemiology on an open-access basis, so it is free to read online.

Generally, the UK media reported the story accurately. Most UK coverage focused on the possibility that thousands of adults were infected without knowing it – a so-called "stealth STI", as most people don't experience any symptoms.

Some potential harms from MG infection – such as possible female infertility linked to pelvic inflammatory disease – were mentioned in the media, but do not come directly from the study text.

That said, the media coverage usually came with the caveat that the long-term effects of MG infection are largely unknown. 


What kind of research was this?

This was a cross-sectional study looking at whether MG infection was likely to be sexually transmitted, as well as its prevalence in Britain and the risk factors associated with infection.

MG is a bacterium that, according to evidence identified by the research team, might be linked to genitourinary disease in men and women, such as post-coital bleeding and urethritis (inflammation of the urethra).

The researchers say there are currently no large population-based epidemiological studies of MG that include prevalence, risk factors, symptoms and co-infection in men and women across a broad age range. Hence, there is uncertainty about whether it is an STI, how common it is, and whether it causes sexually transmitted diseases (STDs).

Cross-sectional studies are one of the best ways of assessing the prevalence of an infection like MG. However, they are not able to prove cause and effect – that different sexual behaviours increase the risk of MG infection. That said, they can point to highly probable links that can be investigated more robustly in the future using different study designs.


What did the research involve?

Data for this research came from 8,047 respondents to the third National Survey of Sexual Attitudes and Lifestyles (Natsal-3) who lived in England, Wales or Scotland from 2010 to 2012.

Participants were interviewed from 2010 to 2012 using computer-assisted face-to-face and self-completion (CASI) questionnaires, which included questions on participants’ sexual lifestyles, history of STIs and current STI symptoms.

Following the interview, a sample of participants was invited to provide a urine sample for testing. The researchers obtained 189 samples from 16 to 17-year-olds who had not been sexually active, and 4,507 samples from 16 to 44-year-olds who reported at least one sexual partner in their lives.

MG infection rates were calculated for 16 to 44-year-olds who reported at least one sexual partner in their lives. They were calculated separately for different age groups and for men and women. Factors linked to MG infection were analysed, such as ethnicity, education level, deprivation levels and sexual behaviours – such as number of sexual partners and unprotected sex in the last year.


What were the basic results?

Just over 1 in 100 men (1.2%, 95% confidence interval (CI) 0.7 to 1.8%) and women (1.3%, 95% CI 0.9 to 1.9%) aged 16 to 44 had an MG infection.
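
For illustration, here is how a prevalence estimate and 95% confidence interval can be computed from a sample count, using a simple normal approximation and hypothetical numbers; the study used weighted survey data, so its published intervals are asymmetric and would not be reproduced exactly this way:

    from math import sqrt

    def prevalence_ci(positives: int, n: int, z: float = 1.96):
        """Point estimate and normal-approximation 95% CI for a proportion."""
        p = positives / n
        se = sqrt(p * (1 - p) / n)
        return p, p - z * se, p + z * se

    # e.g. 25 positive tests in 2,000 samples (hypothetical figures)
    print(prevalence_ci(25, 2000))  # ~(0.0125, 0.0076, 0.0174)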

There were no positive MG tests in men aged 16 to 19, and prevalence peaked at 2.1% (1.2 to 3.7%) in men aged 25 to 34 years. By contrast, prevalence was highest in 16 to 19-year-old women, at 2.4% (1.2 to 4.8%), and decreased with age.

The strongest risk factors linked to MG infection in men were Black ethnicity (adjusted odds ratio (AOR) 12.1; 95% CI 3.7 to 39.4) and living in the most deprived areas (AOR 3.66; 95% CI 1.3 to 10.5).

For both men and women, MG was strongly associated with an increased number of total and new partners, and unsafe sex, in the past year. No infections were detected in those reporting no previous sexual experience.

More than 9 out of 10 men (94.4%) and over 5 in 10 women (56.2%) with MG did not report any STI symptoms in the past month.

Women with MG were much more likely to report vaginal bleeding after sex (AOR 5.8; 95% CI 1.4 to 23.3) than those without MG. This, the study authors say, may be a sign the infection is causing disease, but they admit they don't know with any certainty. For example, women with MG were no more likely to report other symptoms that are usually associated with pelvic inflammatory disease, such as pelvic pain, abnormal vaginal discharge or dyspareunia (pain during sexual intercourse).


How did the researchers interpret the results?

The researchers summed up their findings in three key messages:

  • "This study strengthens evidence that MG is an STI: there were strong associations with risky sexual behaviours, with behavioural risk factors similar to those in other known STIs, and no infections were detected in those reporting no previous sexual experience.
  • Given the uncertainty on the natural history and clinical implications of infection, especially in women, we report that although asymptomatic infection was common, we found a strong association with post-coital bleeding in women. Therefore, in addition to MG being an STI, it can also be an STD.
  • MG was identified in over 1% of the population aged 16-44, and among men was most prevalent in 25 to 34-year-olds, who would not be included in STI prevention measures aimed at young people."

Conclusion

This British population study found that around 1 in 100 men and women aged 16-44 living in England, Wales and Scotland are infected with MG, and that it is likely to be transmitted by sexual contact.

The STI doesn't lead to symptoms in the vast majority of men and around half of women. The study wasn't able to tell if the infection was causing disease, but there were tentative signs that it might. For example, more women with MG infection reported vaginal bleeding after sex than those without MG – a possible, but by no means strong, sign the infection may be causing disease.

The overall prevalence masked interesting variation by age, ethnicity and gender. For example, prevalence in men was highest in those aged 25 to 34, at 2.1%, whereas in women it peaked earlier, at 2.4% in those aged 16 to 19.

There are a number of potential biases in this study – for example, non-participation bias in the survey, and bias from non-provision of the urine sample. In each case, the groups taking part might be different from those who chose not to, potentially influencing the results. While this remains a possibility, the authors were aware of the risk and took measures to minimise its influence. For example, the statistical analysis took account of some factors, and the team compared the backgrounds of the participants with those of the wider population.

This showed that the group who participated in the study were similar to the British population at large, at least in terms of ethnicity, marital status and self-reported general health.

The study team suggest they may have underestimated MG prevalence in women, as the urine test they used is less effective than an alternative, using vaginal swabs.

To summarise, the study was based on a large number of people living in Britain – over 4,000 urine samples and interviews – so can be considered relatively reliable and applicable to the UK population.

We don't routinely screen for MG infection in adults in Britain, so this study might spark debate about whether we should. To better inform that debate, we need more information about the possible disease-causing effects of the infection: is it harmless, or does it do lasting damage that needs treatment to stop or prevent it? At the moment, we don't seem to have a clear idea.

Even if we don't know the long-term effects of MG infection, it is simple to minimise your personal risk. Ways to prevent MG infection are likely to be the same as for other STIs, such as using condoms during vaginal, oral and anal sex.

Read more about safe sex and reducing your risk of STIs

Links To The Headlines

STI known as MG could have infected hundreds of thousands in UK. The Guardian, November 12 2015

Scientists identify new STD that could affect hundreds of thousands of adults - and it often has NO symptoms. Mail Online, November 12 2015

Mycoplasma Genitalium: Hundreds of thousands of British adults could have newly-identified STI. The Independent, November 12 2015

New STD often with no symptoms could affect hundreds of thousands of adults in Britain. Daily Mirror, November 12 2015

A new symptomless STD could affect hundreds of thousands of adults in the UK. Metro, November 12 2015

Links To Science

Sonnenberg P, Ison CA, Clifton S, et al. Epidemiology of Mycoplasma genitalium in British men and women aged 16–44 years: evidence from the third National Survey of Sexual Attitudes and Lifestyles (Natsal-3). International Journal of Epidemiology. Published online November 3 2015

Categories: NHS Choices

Will a cholesterol-busting vaccine work for humans?

NHS Choices - Behind the Headlines - Thu, 12/11/2015 - 10:23

"Could a vaccine replace the need for daily statins?" asks the Mail Online. An experimental vaccine has been found to lower low-density lipoprotein (LDL) cholesterol in a small number of mice and macaque monkeys, but has not yet been tested in humans.

LDL cholesterol – aka "bad" cholesterol – can clog up the arteries, leading to conditions such as heart attack and stroke. Currently, a group of drugs known as statins are used by several million people in the UK to lower LDL cholesterol.

The new research tested several types of vaccine designed to target a protein called PCSK9. This protein helps regulate how much LDL cholesterol is in the blood by blocking receptors in the liver that would otherwise absorb LDL cholesterol and break it down.

Previous studies showed people with naturally occurring mutations that stop PCSK9 working had low levels of LDL cholesterol and remain healthy.

The vaccine is designed to provoke an immune response against PCSK9. The researchers found several of the vaccines worked in mice and monkeys, reducing their LDL cholesterol level. A combination of vaccine and statins tested in monkeys resulted in lower levels of LDL cholesterol than the vaccine alone.

If further studies show this vaccine is safe and effective in humans, it could be a useful additional treatment. Some people find statins hard to tolerate, complaining of side effects such as muscle and joint pain, and weakness. A vaccine could prove to be an alternative form of treatment.

Where did the story come from?

The study was carried out by researchers from the University of New Mexico and the National Institutes of Health in the US, and was partly funded by these institutions.

It was published in the peer-reviewed medical journal Vaccine on an open-access basis, so it can be read online for free.

The researchers have applied for a patent for the vaccines, which represents an obvious – though understandable and reasonable – conflict of interest.

The Mail Online and The Daily Telegraph predicted the vaccine would eventually replace statins, although nothing in this study suggests this. The Telegraph reported the vaccine "can lower cholesterol better than statins".

However, this is patently not the case – the study actually showed the injection reduced cholesterol by only 10-15% in monkeys, less than the 20-50% reduction that usually occurs in humans taking statins, depending on the type and dose of statin. 

What kind of research was this?

This animal study used mice and macaque monkeys to test experimental vaccines developed in the laboratory.

Animal studies are done at an early stage in the development of new treatments. They don't tell us whether the treatment is safe or effective in people. 

What did the research involve?

Researchers developed two types of vaccine and tested them firstly on mice, along with a dummy vaccine, then on nine macaque monkeys. They later did a second experiment with the monkeys, which combined a booster dose of the vaccine with treatment with statin drugs, to see whether the two worked well in combination to lower cholesterol.

The vaccines were designed to mimic a virus, with features of the PCSK9 protein on the surface. The researchers hoped the animals' immune systems would produce antibodies to disable the PCSK9 protein.

They predicted this would result in more LDL cholesterol being taken out of the blood, so cholesterol levels would fall. They also used a dummy vaccine without PCSK9 features as a control.

In the first experiment, the researchers used mice split into groups of five, and compared the effects of several variations on the vaccine, plus the dummy vaccine for comparison. They measured the mice's antibodies and their cholesterol and other lipid levels before and after the vaccine.

The researchers repeated the experiment using the most successful vaccine on 20 mice. They then used the vaccines and dummy vaccine on nine monkeys, split into three groups, again measuring antibody levels and cholesterol.

Finally, they revaccinated the monkeys and gave them statins for two weeks to see the effect of combined treatment on their cholesterol levels. They looked at which vaccines had most effect on lipid levels, and whether they had a significant effect on LDL cholesterol levels.  

What were the basic results?

The mice and monkeys given the anti-PCSK9 vaccines produced antibodies to PCSK9; unsurprisingly, the animals given the dummy vaccine did not.

In mice, cholesterol levels dropped significantly for those treated with four of the seven anti-PCSK9 vaccines. Mice given the more successful of the two vaccine types showed a 55% drop in total cholesterol in the first experiment and a 28% drop in the second experiment, compared with the group given the dummy vaccine.

The monkeys had a smaller response. Monkeys given one of two anti-PCSK9 vaccines showed drops in total cholesterol of 10-15%, compared with those given the dummy vaccine.

When the vaccine was combined with a two-week course of statins, LDL cholesterol dropped by 30-35% more than in monkeys who'd been given the dummy vaccine plus statins, although the difference for total cholesterol was only about 15-20%. 

How did the researchers interpret the results?

The researchers say their study "provides proof-of-principle evidence that a vaccine targeting PCSK9 can effectively lower lipid levels and work synergistically with statins". They say their study has helped them identify "at least one" candidate vaccine to study further.

They say they should now be able to do a study in humans to check for the safety of the vaccine. They concluded: "If successful, this approach could obviously have a major impact on human health worldwide." 

Conclusion

The first thing to keep in mind is this is an early-stage study in the development of a vaccine to lower cholesterol. The study found some of the experimental vaccines the researchers developed had an effect on the cholesterol levels of mice and monkeys to varying degrees.

They now need to do further work to show the vaccine is effective and can be used safely in humans. Many drugs have very different effects in humans than they do in other animals.

The use of statins to lower cholesterol and reduce the chances of a heart attack or stroke is well established and effective for many people. Although there is ongoing controversy about the side effects of statins, they have been used for decades, and their benefits and risks are reasonably well understood.

The type of vaccine being explored in this study works to "prime" the immune system to attack a naturally occurring protein in the body. While some people seem to remain healthy despite having been born without a functioning version of this protein, and indeed have a lower risk of heart disease, we don't yet know what long-term effects a vaccine that works in this way would have.

The next crucial stage of research must be to establish the safety of this proposed new vaccine in humans. Until we know it is safe in people, there's not much point in speculating about how it could be used in the future.

If you are unable or unwilling to take statins, there are alternatives that can reduce your cholesterol, including other medications such as fibrates, as well as lifestyle measures.

Links To The Headlines

Could a vaccine replace the need for daily statins? New injection 'could prove more powerful at reducing levels of bad cholesterol'. Mail Online, November 10 2015

Cholesterol vaccine could end need for daily statins. The Daily Telegraph, November 10 2015

Links To Science

Crossey E, Amar MJA, Sampson M, et al. A cholesterol-lowering VLP vaccine that targets PCSK9. Vaccine. Published online September 26 2015

Categories: NHS Choices

Stronger legs linked to stronger brains in older women

NHS Choices - Behind the Headlines - Wed, 11/11/2015 - 12:00

"Strong legs 'help the brain resist the effects of ageing','' the Mail Online reports. A study that tracked 324 female twins (162 sets) over 10 years found an association between leg strength and cognitive ability, measured through memory testing and brain scans.

The study recruited twins aged 43 to 73 in 1999 and measured their physical fitness using a piece of gym equipment, similar to an exercise bike, to measure the power in their thigh muscles. The women also performed memory tests and completed questionnaires on their usual physical activity level, current health, and lifestyle factors.

After 10 years, they completed another set of memory tests. Some of the twins were also given MRI brain scans to check for changes in the structure in the brain associated with cognitive decline.

The study found women with stronger leg extension had less age-related change in brain function and structure 10 years later, after taking into account their age, lifestyle and other risk factors.

While this is an interesting finding, it is not possible to say less physical strength caused the brain to decline or vice versa. Women with a more active brain may have been more likely to take part in physical exercise.

That said, the study is further evidence of the numerous benefits of physical activity, especially in older women, who may experience weakening of the bones as a result of the effects of the menopause.

Read more about the importance of bone health

Where did the story come from?

The study was carried out by researchers from King's College London and was funded by the Wellcome Trust and the National Institute for Health Research (NIHR). There were no reported conflicts of interest.

It was published in the peer-reviewed medical journal Gerontology on an open access basis, so it is available for free online.

The UK media reported on the study both accurately and responsibly. However, some of the study's limitations were not highlighted.

The BBC quoted the director of research at the Alzheimer's Society, Dr Doug Brown, who said that although "the findings added to the growing evidence that physical activity could help look after the brain as well as the body … we have yet to see if the improvements in memory tests actually translate into a reduced risk of dementia". 

What kind of research was this?

This was a cohort study that aimed to assess if muscle fitness (measured by leg power) could predict cognitive change in healthy females over a period of 10 years. It also assessed if leg power was predictive of differences in brain structure and function after 12 years of follow-up in identical twins.

Twin studies like this are useful as they can take into account shared genetic and environmental factors. However, as the study design is observational in nature, we cannot draw firm conclusions about causality, as more than one factor may be responsible for the observed outcomes.

What did the research involve?

This study included 324 female twins from the UK. The study participants were selected from the TwinsUK volunteer registry, which was originally set up to study ageing in women.

The participants' muscular fitness was estimated based on the strength of the leg extensor muscles (thigh muscles) in 1999. This was done by a trained research nurse using the Leg Extensor Power Rig machine. The machine, similar to an exercise bike, measures leg explosive power by measuring the force and velocity a participant uses when they push down on a pedal.

Participants sat on this machine with their legs slightly bent. The active leg was then placed on a pedal, and they were asked to push the pedal as fast and hard as possible to full extension, "as if performing an emergency stop in a car".

Other measurements and tests included:

  • grip strength and lung function
  • weight and height
  • blood pressure
  • blood sugar and cholesterol

Participants were then asked to fill in a questionnaire, which included:

  • rating their physical activity during the last 12 months as inactive, light, moderate or heavy
  • occupation and income
  • smoking and alcohol use
  • vegetable intake
  • saturated fat intake
  • history of ischaemic heart disease
  • history of diabetes
  • history of mental health conditions

To estimate age-related cognitive changes, study participants underwent a computerised test (CANTAB) once in 1999 and again in 2009. This test is known to be particularly sensitive to ageing, and measures memory and the brain's processing speed.

Twenty pairs of identical twins underwent MRI brain scans 12 years after the start of the study. The scans were used to look at the amount of grey matter (tissue made up of nerve cells) in two regions of the brain associated with cognitive ability: the medial temporal lobe and the middle frontal gyrus. 

What were the basic results?

After adjusting for age, lifestyle and psychological factors, both physical activity and leg extensor power had statistically significant protective effects on age-related cognition over a period of 10 years.

Overall, twins who were stronger at the beginning of the study had significantly less deterioration in cognition than their weaker sisters.

The brain scans of identical twins found those with stronger leg extensor power at the beginning of the study had more total grey matter 12 years later than those with weaker power.  

How did the researchers interpret the results?

Researchers concluded by saying the study "found that greater muscular fitness – as measured by leg power – is associated with improved cognitive ageing over the subsequent 10 years in non-impaired community living women". 

Conclusion

This study of 324 female twins from the UK found that greater leg extensor power was associated with less age-related cognitive decline.

As it was a cohort study, it is not possible to say that increased muscular strength prevented decline in mental ability, as other related or unrelated factors could have played a part.

That said, the researchers did try to account for many of these factors, such as:

  • using twins to reduce potential genetic and early environmental confounding factors
  • taking baseline cardiovascular risk factor profiles, as these are risk factors for dementia
  • taking into account age and sociodemographic details

The finding that women with stronger legs had more grey matter on the MRI scans should also be interpreted with caution. The MRI scans were only taken at one time point, so we do not know whether the amount of grey matter had changed over the course of the study. Additionally, they were only performed on a small subset of 20 identical twins.

Other limitations of the study include:

  • At baseline, data on participants' physical activity levels in the previous 12 months were obtained through a self-reported questionnaire, which may have introduced recall bias. There was no follow-up information on physical activity levels, which are likely to change over time.
  • The researchers accounted for some common confounding factors, but other factors not considered in this study might have influenced the observed outcome.
  • None of the study participants are reported to have developed dementia, so it is unclear whether the same results would be found for women at higher risk.

Regardless of these limitations, the beneficial impacts of daily physical activity are well known. 

Find out the government guidelines for physical activity for your age group.

Links To The Headlines

How squats, lunges and walking could keep your mind young: Strong legs 'help the brain resist the effects of ageing'. Mail Online, November 10 2015

Fit legs equals fit brain, study suggests. BBC News, November 10 2015

Strong legs contribute to a healthier brain in old age, study finds. The Guardian, November 9 2015

Links To Science

Steves CJ, Mehta MM, Jackson SHD, Spector TD. Kicking Back Cognitive Ageing: Leg Power Predicts Cognitive Ageing after Ten Years in Older Female Twins. Gerontology. Published online November 10 2015

Categories: NHS Choices

Normal BMI with a big belly 'deadlier than obesity'

NHS Choices - Behind the Headlines - Tue, 10/11/2015 - 10:28

"Slim adults with a 'spare tyre' of fat around their stomach have a twice as high mortality risk than those who are overweight," The Daily Telegraph reports.

A major new study tracked more than 15,000 adults to look at the effect of body size on mortality.

Researchers looked at two types of measurement:

  • body mass index (BMI) – which provides an assessment of overall body weight
  • waist-to-hip ratio (WHR) – which is calculated by dividing the circumference of the waist by the circumference of the hips; this can provide an assessment of abdominal fat (belly fat) – a sketch of both calculations follows this list
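
A minimal sketch of the two calculations in Python. The BMI bands follow the healthy range quoted later in this piece (18.5 to 24.9); WHR cut-offs for "central obesity" vary by guideline and differ by sex, so the 0.90 male cut-off in the comment is illustrative only:

    def bmi(weight_kg: float, height_m: float) -> float:
        """Body mass index: weight divided by height squared."""
        return weight_kg / height_m ** 2

    def whr(waist_cm: float, hip_cm: float) -> float:
        """Waist-to-hip ratio: waist circumference divided by hip circumference."""
        return waist_cm / hip_cm

    print(round(bmi(70, 1.75), 1))  # 22.9 - within the "normal" 18.5-24.9 range
    print(round(whr(94, 100), 2))   # 0.94 - central obesity for men under a 0.90 cut-off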

Researchers found that people with a normal BMI but a large WHR had increased risk of dying during follow-up compared to people with a smaller WHR. This included people of similar BMI, and also people who were overweight or obese, but with a smaller WHR. 

The risk increase was higher for men than for women. Men with normal BMI but a large WHR were roughly twice as likely to die within 5 or 10 years as other men.

One hypothesis is that having a big belly increases the amount of fat inside the abdomen (visceral fat). This may then cause inflammation to the vital organs stored inside the abdomen, which possibly makes people vulnerable to chronic diseases.

However, despite the large sample size, only a minority of people came into this high-risk category of normal BMI, but high WHR – 11% of men and 3% of women. Analyses based on small numbers have an increased chance of producing inaccurate risk estimates.   

Where did the story come from?

The study was carried out by researchers from the Mayo Clinic in the US and the University of Ottawa in Canada, and was funded by the National Institutes of Health, the American Heart Association, European Regional Development Fund and Czech Ministry of Health. The study was published in the peer-reviewed journal Annals of Internal Medicine.

The media accurately reported the results and quotes from the press release. However, we suspect that many of the reporters did not actually read the study. Most did not go into detail about the differing risks linked to different levels of weight and obesity, especially for women, or make it clear that this type of study cannot show whether central obesity directly causes early death.


What kind of research was this?

This was an observational study using survey data collected as part of a big, ongoing study in the US, which aimed to look at the relationship between central obesity in people with normal BMI and survival. Both BMI and central obesity – having a high WHR – have previously been associated with overall and cardiovascular mortality.

However, greater emphasis is often placed on using a person's BMI as an indicator of their health, overweight and obesity, rather than distribution of body fat. This study aimed to see whether central obesity carries a risk even in people with a BMI judged to be within healthy limits (18.5 to 24.9).

A study design such as this can find trends and links between different factors, but cannot prove that one thing directly causes another. Other factors could be having an influence.


What did the research involve?

Researchers took information about people's measurements and used them to see how BMI and WHR were linked to their chances of dying during study follow-up. They adjusted the figures to take account of age, sex, education level and smoking.

The information on body measures came from the US's third National Health and Nutrition Examination Survey, carried out from 1988 to 1994. The researchers were unable to use more recent figures, because the survey stopped measuring hip size, which is important for calculating the WHR.

Researchers looked at the National Death Index to identify any participants who had died from any cause up to the end of December 2006 (an average of 14 years follow-up per person). 

They then looked at the chances of having died within particular time scales (5 to 10 years), for people who had different combinations of BMI (normal, overweight or obese BMI) and WHR (either normal or showing what they described as "central obesity").

The researchers tested their results to see whether they were explained by other factors, such as whether people had diabetes. They analysed the figures separately for men and women, because they found that the effect of BMI and WHR differed between the sexes.

Finally, they calculated the relative chances of dying, based on these results, for people who were normal, overweight or obese in terms of BMI, with and without central obesity.


What were the basic results?

People with a normal BMI but a high WHR (central obesity) were more likely to have died during follow-up than people who had a similar BMI but did not have central obesity. More surprisingly, men with a normal BMI but a high WHR were more likely to have died than women who were overweight or obese and also had a high WHR.

A man of normal weight with a high WHR was 87% more likely to die than a man of comparable BMI but no central obesity (hazard ratio (HR) 1.87, 95% confidence interval (CI) 1.53 to 2.29). Surprisingly, "he" was twice as likely to have died as a man who was overweight or obese by BMI but had no central obesity (HR 2.24, 95% CI 1.52 to 3.32).

At age 50, a man with a normal BMI and normal WHR had a 5.7% chance of dying within the next 10 years, but this rose to a 10.3% chance for men with a normal BMI but a high WHR.

For women, the results were less striking. A woman of normal BMI but high WHR had an almost 50% increased risk of death compared to a woman of similar BMI without central obesity (HR 1.48, 95% CI 1.35 to 1.62), and a 33% increased risk compared to a woman with obese BMI, but no central obesity (HR 1.32, 95% CI 1.15 to 1.51).

A woman aged 50 of normal weight and normal WHR had a 3.3% chance of dying within 10 years, rising to 4.8% for women of the same weight, but a high WHR.
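
Using only the 10-year figures quoted above, a quick back-of-envelope comparison shows how the relative and absolute pictures differ between the sexes:

    men = {"normal WHR": 0.057, "high WHR": 0.103}
    women = {"normal WHR": 0.033, "high WHR": 0.048}

    for label, risks in (("men", men), ("women", women)):
        relative = risks["high WHR"] / risks["normal WHR"]
        absolute = (risks["high WHR"] - risks["normal WHR"]) * 100
        print(f"{label}: {relative:.1f}x the risk, "
              f"{absolute:.1f} percentage points higher in absolute terms")
    # men: 1.8x the risk, 4.6 percentage points higher in absolute terms
    # women: 1.5x the risk, 1.5 percentage points higher in absolute terms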

Men with normal BMI but a high WHR were more likely to have died than any other combination, including men who were obese and had a high WHR. 

The picture was more mixed for women. Women who were overweight or obese with a high WHR had about the same chance of death as women with a normal BMI but a high WHR.


How did the researchers interpret the results?

The researchers said: "Our findings suggest that persons with normal-weight central obesity ['belly fat'] may represent an important target population for lifestyle modification." 

They say we need more research on how central obesity develops in people with a normal BMI, and a better understanding of the effect of central obesity on health. They call for measures of central obesity to be used alongside BMI to calculate people's risk.


Conclusion

This study adds to previous research suggesting it may not be just your weight that matters, but where you carry it. It found that – especially for men – those with a high WHR had a greater chance of dying from any cause during study follow-up than those without. The results were less strong for women.

A high WHR suggests excess fat around the waist, as muscle mass is unlikely to lead to greater waist circumference. Although this study does not explore why WHR may be linked to chances of dying earlier, other studies have shown that carrying excess fat around your waist may be more harmful than carrying it in other areas, such as your legs and hips. Fat around the waist has been linked to inflammation, increased risk of diabetes, and having raised cholesterol.

The study's strengths are its size and the fact that data was collected consistently over a long follow-up period. However, there are important limitations. One of these is that, despite the large overall sample size, the main study analyses for people with normal BMI but central obesity were based on a small number of people. Only 322 men (11.0%) and 105 women (3.3%) were in this risk group.

Analyses based on smaller numbers of people have an increased chance of producing inaccurate risk estimates. Therefore, we do not know that the risk figures obtained here – for example, the 50% risk increase – are completely accurate and would apply to all people in this category.

Also, as the researchers note, diseases such as diabetes and high blood pressure were self-reported by the participants. This means some disease classifications may be inaccurate and, overall, the effects of all the health and lifestyle factors that could be confounding the results may not have been fully accounted for. Body fat measurements were also taken by hand rather than by imaging, as recommended, so may be less reliable.

The main point to remember is that we don't know that a high WHR directly led to a higher chance of dying, or why this link may have been found – particularly why it appeared stronger for people with a normal BMI than for people with an overweight or obese BMI. We need more research to understand the results better.

Nevertheless, the study adds to the evidence on the importance of central abdominal fat as a health indicator. 

These findings shouldn't be taken to mean it is safe to be obese as long as your WHR is on the smaller side. While it could be the case that some types of excess fat are worse than others, all excess fat is bad for you.

Links To The Headlines

People with beer bellies at greater risk of death than those who are obese, study finds. The Daily Telegraph, November 9 2015

Roll of fat around the waist doubles risk of an early death. The Times, November 10 2015

Why your pot bellies and muffin tops could well be the death of you. Daily Mirror, November 9 2015

It's BETTER to be obese all over than just have a beer belly: Fat around the middle can 'double the risk of early death'. Mail Online, November 10 2015

Beer belly is deadlier than being obese, warn experts. Daily Express, November 9 2015

Links To Science

Sahakyan KR, Somers VK, Rodriguez-Escudero J, et al. Normal-Weight Central Obesity: Implications for Total and Cardiovascular Mortality. Annals of Internal Medicine. Published online November 10 2015

Categories: NHS Choices

No hard evidence champagne can prevent dementia

NHS Choices - Behind the Headlines - Mon, 09/11/2015 - 10:23

"Drinking three glasses of champagne per week could help stave off dementia and Alzheimer's disease," the Daily Mirror reports. But before you break out the Bolly, you should know the study that prompted this headline was on rats.

The study that forms the basis of these reports was actually published in 2013, but apparently recently went viral on social media. It looked at the possible effects of the phenolic acids found in champagne on memory in rats. Phenolic acids are similar to flavonoids, which are plant substances said to have antioxidant qualities. 

Three groups of eight rats were each given six weeks of daily champagne, a non-champagne alcoholic drink, or an alcohol-free drink. Their performance at finding treats in a maze was assessed before and after this period.

The main finding was that rats given champagne were better at remembering how to find the treat than those given the alcohol-free drink. They found the treats roughly five times out of eight, compared with four times out of eight in rats given the other drinks.  

A slightly improved maze performance in a small number of rats does not necessarily translate into humans having a reduced risk of dementia from drinking champagne. The health risks of consuming large amounts of alcohol are well known.

If you want to increase your flavonoid intake, there are far cheaper – and healthier – alternatives to champagne, such as parsley, peanuts and blueberries. But whether these would actually prevent dementia is unproven. 

Where did the story come from?

The study was carried out by researchers from the University of Reading and the University of East Anglia. No sources of funding are reported and the authors declare no conflicts of interest.

It was published in the peer-reviewed journal, Antioxidants and Redox Signaling.

The media sources do not report responsibly on this early-stage animal research. The quantity of champagne consumed by the rats was said to be equivalent to 1.3 small glasses of champagne (around two units) a week for humans. And we can't be sure these results would apply to humans. 

What kind of research was this?

This animal research aimed to investigate the effects of certain phenolic acids found in champagne on the spatial memory of rodents.

Foods and drinks that contain flavonoids (a plant pigment) have received considerable attention over the past few years for their potential antioxidant and anti-inflammatory properties.

Recent research has also suggested they may have the potential to protect the brain and nerve cells. For example, some observational studies in humans suggested a low to moderate red wine intake could protect against cognitive impairment and dementia.

Red wine contains flavonoids as well as phenolic acids. These compounds are also found at high levels in some white wines, particularly champagne. Champagne's high phenolic content is said to come from the two red grapes, Pinot Noir and Pinot Meunier, and the white grape, Chardonnay, used in its production.

The theory the researchers wanted to test was that these compounds might be able to affect the nerves and blood vessels in the brain, resulting in changes to cognitive performance. To investigate this, they looked at the effects of moderate champagne intake on the spatial memory and movements of rats.  

What did the research involve?

The research involved three groups of adult male rats (eight in each), housed in standard conditions. The three groups were assigned to receive daily champagne, a non-champagne alcoholic carbonated drink, or an alcohol-free carbonated drink for six weeks. All three drinks had the same nutritional value and contained the same number of calories.

For the two alcoholic drinks, alcohol was given at a level of 1.78ml per kilo of body weight. This was calculated to be roughly equivalent to 1.3 small (125ml) glasses of champagne a week for humans. The drinks were given in the form of a mash, by mixing them with a small amount of powdered food (8mg of food per 10ml of liquid).
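
As a rough sanity check on the "around two units" figure quoted earlier, the sketch below converts 1.3 small 125ml glasses a week into UK alcohol units. The 12% ABV strength is our assumption of a typical champagne; the study does not state it.

```python
# Rough check of the weekly human-equivalent dose in UK alcohol units.
# ABV is assumed (typical champagne strength), not taken from the study.

GLASS_ML = 125          # one small glass
GLASSES_PER_WEEK = 1.3  # human equivalent reported by the researchers
ABV_PERCENT = 12        # assumed alcohol by volume

# UK units = volume (ml) x ABV (%) / 1000
weekly_ml = GLASS_ML * GLASSES_PER_WEEK
units = weekly_ml * ABV_PERCENT / 1000
print(f"{weekly_ml:.0f}ml of champagne a week is about {units:.1f} UK units")
```

This comes out at roughly 1.9 units a week, consistent with the "around two units" reported above.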

The rats' spatial and working memory was assessed using the maze test, which includes chambers and tunnels with visual cues and food rewards. These tests were given at the start of the study and after six weeks of drink supplementation.

The rats were given eight tests in the maze each time, and the researchers looked at how often the rats chose the correct route to get the food reward. Motor skills were also tested using a balance beam test.

After the study was completed, the rats' brains were examined in the laboratory – in particular, the hippocampus, which is the area involved with learning and memory.

Laboratory methods were also used to extract and measure the amount of phenolic compounds present in the champagne.    

What were the basic results?

At the six-week mark, accuracy in the maze test seemed to improve for the rats given champagne. At the study's start, rats made an average of 4.25 correct choices out of eight tests. After drink supplementation, accuracy was 3.50 in those given the alcohol-free drink, 4.00 in those given the non-champagne alcoholic drink, and 5.29 in those given champagne.

The difference was statistically significant between those given champagne and those given the alcohol-free control drink. The groups did not differ in the speed or distance walked along the balance beam.

Post-mortem examination of the hippocampus revealed that rats given champagne had increased levels of various proteins related to cell division and neuroplasticity (the ability of nerve cells in the brain to adjust and adapt).

The major phenolic compounds present in champagne were gallic acid, protocatechuic acid, tyrosol, caftaric acid and caffeic acid. These compounds were not found in the non-champagne alcoholic drink or the alcohol-free control drink. 

How did the researchers interpret the results?

The researchers say their findings suggest that, "Smaller phenolics such as gallic acid, protocatechuic acid, tyrosol, caftaric acid and caffeic acid, in addition to flavonoids, are capable of exerting improvements in spatial memory via the modulation in hippocampal signaling and protein expression." 


Conclusion

This research found champagne might improve spatial memory in adult rats, possibly in relation to the phenolic acids in the drink. These chemicals are similar to another type of plant chemical called flavonoids, which have also been suggested to have biological effects in animals.

Previous research has suggested flavonoids may have effects on nerve cells in the brain and cognitive functioning. This study on rats found those given champagne to drink over six weeks seemed to have improved performance when finding treats in a maze test. These rats also seemed to have increased levels of brain proteins related to adaptability and learning.

However, before jumping to any conclusions, it should be noted this is a study on a small number of rats. The apparent improvements in the champagne group were only significant compared with the alcohol-free group – there was no significant difference in effect compared with the non-champagne alcoholic group. This means there is no firm proof these effects were directly the result of the phenolic compounds present in champagne.

This study is from 2013, and would ideally need to be repeated in a larger number of rats by other researchers to confirm its findings.

This research has limited direct applicability to humans. Animal research like this can give a useful insight into the possible biological effects of a chemical, food or drink that may be transferable to humans.

However, we are not identical to rats, and it can't be guaranteed that the results would be the same. The fact that rats may have performed slightly better in a maze, or demonstrated some protein changes related to nerve adaptability, does not mean champagne definitely reduces the risk of dementia in humans.

The health risks of excessive alcohol consumption are well established. While we can't say for certain whether or not drinking champagne could have any effect on your future dementia risk, we can say regularly drinking high levels of alcohol is likely to cause many other health risks. 

It's not always possible to prevent dementia, particularly Alzheimer's, the most common form, which has no established cause beyond ageing and possibly genetics.

However, behavioural changes may help. To possibly reduce your risk of developing dementia and other serious health conditions, it's recommended that you:

  • eat a healthy, balanced diet
  • maintain a healthy weight
  • exercise regularly
  • don't drink too much alcohol
  • stop smoking if you smoke
  • keep your blood pressure at a healthy level

Links To The Headlines

Drinking three glasses of champagne per week could help stave off dementia and Alzheimer's disease. Daily Mirror, November 9 2015

Drinking three glasses of champagne every week 'could prevent dementia'. Metro, November 8 2015

Links To Science

Corona G, Vauzour D, Hercelin J, et al. Phenolic Acid Intake, Delivered Via Moderate Champagne Wine Consumption, Improves Spatial Working Memory Via the Modulation of Hippocampal and Cortical Protein Expression/Activation. Antioxidants & Redox Signaling. Published online November 2013

Categories: NHS Choices

Gene editing breakthrough in treating baby's leukaemia

NHS Choices - Behind the Headlines - Fri, 06/11/2015 - 15:30

"Baby girl is first in the world to be treated with 'designer immune cells'," The Guardian reports.

Pioneering work carried out at Great Ormond Street Hospital made use of a new technique known as genome editing.

The girl, one-year-old Layla Richards, developed acute lymphoblastic leukaemia (ALL) when she was five months old.

ALL is a cancer of the white blood cells. Although generally rare, it is one of the most common childhood cancers, affecting around 1 in 2,000 children.

The outlook for ALL is usually good, with around 85% of children achieving a complete cure. However, this was not the case with Layla, as she failed to respond to conventional treatments. The staff treating Layla at Great Ormond Street Hospital sought permission to try a new technique, previously only used in mice, called genome editing.


What is genome editing?

Genome editing uses a range of molecular techniques to make changes to the genome (the complete set of DNA) of individual organisms.

Genome editing can:

  • modify genetic information to create new characteristics
  • remove regions from genomes, such as those that can cause genetic diseases
  • add genes from other organisms to specific locations within a genome

The editing process modifies the actual nucleotides – the "letters" of DNA (A, T, C, G) – of genetic code.
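
As a purely conceptual illustration of what changing those "letters" means, the toy Python sketch below finds a target sequence in a DNA string and swaps it for a replacement. The function and sequences are invented for the example; real genome editing is carried out by engineered proteins inside cells and is far more involved than string replacement.

```python
# Toy illustration only: mimics the "find a target sequence, then change
# the letters" idea on a DNA string. Sequences are made up for the example.

def edit_genome(genome: str, target: str, replacement: str) -> str:
    """Locate a target sequence and swap it for new nucleotides."""
    position = genome.find(target)
    if position == -1:
        return genome  # target not present; nothing to edit
    return genome[:position] + replacement + genome[position + len(target):]

genome = "ATGGCGTACGATTACAGGCT"
edited = edit_genome(genome, target="TACGAT", replacement="TAAGAT")
print(edited)  # ATGGCGTAAGATTACAGGCT
```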

In Layla's case, proteins known as Transcription Activator-Like Effector Nucleases (TALENs) were used as a kind of "molecular scissors" to modify the DNA inside a batch of donated T-cells (a type of immune cell).

The T-cells were modified to seek out and destroy the abnormal leukaemia cells, while also becoming resistant to the chemotherapy drugs Layla was taking.

Layla responded well to the treatment and is now back home with her family.

Researchers were keen to stress that it is too soon to say if Layla has been completely cured of the cancer, and there could still be long-term complications. 

However, this should be regarded as truly pioneering work that could lead to a new generation of treatments for a range of conditions.  

Links To The Headlines

Baby girl is first in the world to be treated with 'designer immune cells'. The Guardian, November 5 2015

Our little miracle! Baby girl battling leukaemia saved by 'revolutionary' cell treatment. Daily Mail, November 6 2015

'Designer cells' reverse one-year-old's cancer. BBC News, November 5 2015

How 'miracle' leukaemia cure works: The treatment that saved Layla Richards' life and gives hope to thousands. Daily Mirror, November 5 2015

Parents who refused to let baby die of leukaemia change medical history. The Daily Telegraph, November 5 2015

Links To Science

Science Media Centre. Genome Editing - Fact Sheet. 2014

Categories: NHS Choices