
Kidney damage 'killing thousands,' study claims

NHS Choices - Behind the Headlines - Wed, 23/04/2014 - 11:00

“Failures in basic hospital care are resulting in more than 1,000 deaths a month from … acute kidney injury,” The Independent reports. A study commissioned by the NHS estimates that up to 40,000 people may be dying from this preventable condition.

The study aimed to discover the prevalence of acute kidney injury (AKI – previously called acute kidney failure) among adult inpatients in NHS hospitals.

AKI is characterised by a rapid decline in kidney function, which can have many underlying causes. The condition carries a high risk of multiple organ failure and death.

The researchers used data from the Hospital Episode Statistics (HES), which covers all NHS hospital admissions in England. They compared this with more detailed information on AKI obtained from three Kent hospitals to see whether the overall HES data was giving a reliable indication of the true prevalence of the condition in NHS hospitals.

The results suggest that the prevalence of AKI among hospital inpatients could be much higher than previously thought.

Overall, it was estimated that around 14% of hospital inpatients could have AKI. The mortality associated with this is also high – accounting for an estimated 40,000 inpatient deaths in any given year.

Previous research has suggested that around 20-30% of AKI cases could be prevented, and the study highlights the importance of recognising people who could be at risk of developing the condition.

The health watchdog NICE published guidelines on AKI in 2013.

 

Where did the story come from?

The study was carried out by researchers from Insight Health Economics (London); East Kent Hospitals University NHS Foundation Trust (Canterbury); NHS Improving Quality (Newcastle upon Tyne); and Salford Royal NHS Foundation Trust (Salford). Funding was provided by NHS Kidney Care.

The study was published in the peer-reviewed medical journal Nephrology Dialysis Transplantation and has been made available on an open access basis, meaning it is free to read online.

All the media headlines have focused on the angle that thousands of people are dying of thirst due to alleged poor care. This has been taken from the “avoidable” aspect of acute kidney injury – previous research (specifically the 2009 National Confidential Enquiry into Patient Outcome and Death) has shown that up to a third of cases could be prevented.

However, the research itself only looks at the prevalence, costs and outcomes of AKI.

It does not focus on identifying possible reasons for the high number of cases or ways they could be avoided.

Based on the evidence made available in the study, claims that 40,000 people are “dying of thirst” are unsupported.

 

What kind of research was this?

This was a modelling study, which had a number of related goals:

  • Examining the prevalence of AKI across the NHS.
  • Estimating the impact that AKI has upon mortality, length of hospital stay, quality of life and healthcare costs.

Acute kidney injury (AKI), previously termed acute kidney failure, is the term used to describe sudden damage to the kidneys. There is no widely accepted standard definition of AKI and there may be a number of different causes.

Criteria tend to be based upon:

  • A sudden rise of blood creatinine levels above a certain threshold level (creatinine is a breakdown product produced by the muscles, and is a good indicator of kidney function).
  • A decrease in urine output below a certain threshold level.

It is a serious illness that has a high mortality risk, though the specific mortality risk will be highly variable, depending on the individual (such as whether there are complications, or the person has existing kidney damage or other medical problems).

Importantly, as previous research has highlighted, there are concerns that many cases of AKI could be prevented, which would cause considerable reductions in illness, deaths and healthcare costs. The 2009 National Confidential Enquiry into Patient Outcome and Death (NCEPOD) found that around a third of AKI cases occurring while in hospital were avoidable. Furthermore, only half of patients with AKI received an overall standard of care that was considered “good”.

This modelling study used fairly reliable data on NHS hospital admissions and is valuable research for estimating the health outcomes of AKI and its costs to the NHS.

 

What did the research involve?

This study used routinely collected national data for the NHS in England to look at the prevalence of AKI in adults. The researchers then estimated the impact of AKI on mortality, other health outcomes and cost to the NHS.

The researchers used Hospital Episode Statistics (HES), which is derived from records for each patient admitted to each NHS hospital. HES data includes the patient’s demographic details and medical information, including diagnoses, procedures, length of stay and in-hospital mortality.

They looked at recorded diagnoses of AKI (according to the International Classification of Diseases) between 2010 and 2011. 

However, the HES data does not include information on patients' AKI stage, kidney function prior to hospital admission or kidney function after being discharged from hospital.

As the researchers say, AKI is often poorly recorded in patient notes, so the national findings were compared with data collected by the three hospitals of the East Kent Hospitals University NHS Foundation Trust (EKHUFT).

This involved looking at laboratory records and identifying cases of AKI based on blood creatinine levels, using the Acute Kidney Injury Network (AKIN) classification system.

Comparing these two sources of information, they estimated the under-recording of AKI in patient notes.

They also used both data sets to estimate the possible distribution of AKI cases across the NHS according to stage, and to estimate the person’s previous and future kidney function.

They then used statistical models to estimate the impact of AKI on mortality, number of days in critical care and overall hospital stay.

 

What were the basic results?

AKI prevalence

The HES data indicated that AKI was recorded for 2.4% of hospital admissions during 2010/11 (142,705 out of 3,792,951 admissions). Prevalence ranged from 0.3% of patients aged 18 to 39, to 5.7% of people aged ≥80.

During the six-month period of EKHUFT data, laboratory data indicated that AKI was present in 15% of admissions, though the EKHUFT population is older than the overall HES population.

After standardising for age, the figure was 14% of admissions.
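
As an aside, the age-standardisation step mentioned above can be sketched in a few lines of Python. Every rate and weight below is a hypothetical value invented for illustration; only the general approach – reweighting one population's age-specific rates by another population's age structure – reflects what the researchers describe.

    # Direct age standardisation (illustrative sketch; all numbers are
    # hypothetical, not the study's data).

    # Hypothetical AKI rates per admission in an older trust population
    rates_by_age = {"18-39": 0.003, "40-59": 0.04, "60-79": 0.12, "80+": 0.25}

    # Hypothetical share of national (HES) admissions in each age band
    national_weights = {"18-39": 0.25, "40-59": 0.30, "60-79": 0.30, "80+": 0.15}

    # Weighted average: the rate the trust would show if it had the
    # national age mix
    standardised = sum(rates_by_age[band] * national_weights[band]
                       for band in rates_by_age)
    print(f"Age-standardised prevalence: {standardised:.1%}")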

Over a third (38%) of EKHUFT patients who had AKI during the study period had pre-existing chronic kidney disease. Three-quarters of people with AKI already had the condition when they were admitted to hospital, suggesting that it was not caused by poor hospital care.

AKI mortality

Using the HES data, just over a quarter (28%) of people with AKI recorded during their admission died before hospital discharge. The odds of in-hospital death were 10-fold greater in people with AKI compared with those without. Mortality rates increased with age.

From EKHUFT data, it was shown that 14% of people with AKI died before being discharged from hospital. In over half of all in-patient deaths during the six-month study period, the person had AKI recorded.

Analysis from HES data suggests that AKI was associated with around 15,000 excess deaths among inpatients in England in 2010/11.

However, extrapolating from EKHUFT data suggests that the number of excess inpatient deaths associated with AKI in England may be above 40,000.

Length of hospital stay

Using HES data, the average duration of hospital stay was 16.5 days for admissions with AKI recorded, compared with just 5.1 days for admissions without. A person with AKI had a length of stay 2.6 times longer than someone without AKI; using the EKHUFT data, it was 1.6 times longer. From the EKHUFT critical care information, 60% of critical care bed days over the period were in people recorded to have AKI.

Long term outcomes and costs

Post-discharge information was not available from HES; using the EKHUFT data, 0.56% of people with AKI were receiving renal replacement therapy (such as dialysis) at 90 days, though more than half had pre-existing chronic kidney disease. 

Using the HES data, there were estimated to be almost 1,000,000 excess bed days due to AKI.

Based on the EKHUFT data, the number of excess bed days may be as high as 2.5 million, with over 160,000 of these spent in critical care beds. The total inpatient costs of AKI recorded in HES were estimated at £380 million.

When extrapolating from EKHUFT data, the cost could be as high as £1.02 billion – just over 1% of the NHS budget. To put that figure into context, that is enough to hire an additional 47,500 trainee nurses.

The lifetime costs of post-discharge care for people who had AKI during admission were estimated at £179 million, with a loss of 1.4 quality-adjusted life years for each person with AKI who was admitted to hospital.

 

How did the researchers interpret the results?

The researchers conclude that the prevalence of AKI among people admitted to hospital may be considerably higher than previously thought, and up to 80% of cases may not be adequately captured by routine hospital data. AKI is associated with large numbers of in-hospital deaths and with high NHS costs.

 

Conclusion

This valuable study provides an estimate of the likely prevalence of AKI among inpatients in NHS hospitals. Comparison of HES data with laboratory data obtained from the three EKHUFT hospitals (where the AKIN classification system was used to define AKI cases) suggests that prevalence could be much higher than previously thought, and that there could be considerable under-recording of cases in the NHS.

The study also highlights the high mortality associated with AKI – accounting for an estimated 40,000 excess inpatient deaths. AKI was also associated with considerable loss to quality of life. Looking at the financial burden, this study estimated that AKI accounted for just over 1% of the NHS budget in 2010/11.

However, the study had its limitations. The figures are estimates only, based on extrapolating from data from the three EKHUFT hospitals to the national HES data. As noted, these hospitals have a different patient demographic from NHS hospitals across England as a whole. There was also a lack of longer-term outcome data beyond 90 days after a patient was discharged from hospital.

Also, as the researchers say, this study only provides information on AKI recorded for adult hospital inpatients. There is no information on the number of cases that develop in the community.

The media has focused on the “preventable” aspect of AKI. Previous NCEPOD data has reported that up to a third of AKI cases could be predicted and prevented.

The researchers discuss how many of the failings identified by this report related to omissions in basic medical care, such as failing to perform regular observations, failing to check the person’s fluid and mineral (electrolyte) balance, and a lack of adequate senior review. However, though the researchers mention fluid balance, at no point in this research paper do they say that “thousands are dying because of thirst”.

Notably, based on EKHUFT data, AKI was present at the point of admission in 75% of admissions where it was recorded, highlighting an opportunity for earlier recognition and management.

As the researchers say: “if 20% of AKI cases were prevented, the figures presented in this report suggest that the gross savings to the NHS could be in the region of £200 million a year, equivalent to 0.2% of the NHS budget in England”.
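
As a rough back-of-envelope check of that quoted figure – using only the cost estimates quoted earlier in this article, not the paper's own method – the arithmetic works out as follows:

    # Back-of-envelope check using only numbers quoted in this article;
    # the paper's own method may differ.
    total_aki_cost = 1.02e9      # upper-bound inpatient cost estimate (£)
    preventable_fraction = 0.20  # "if 20% of AKI cases were prevented"

    savings = total_aki_cost * preventable_fraction
    print(f"Gross savings: £{savings / 1e6:.0f} million")  # ~£204 million

    # £1.02 billion is described as "just over 1%" of the NHS budget,
    # so 20% of it comes to roughly 0.2% of the budget, as stated.
    implied_budget = total_aki_cost / 0.0102  # assumes "just over 1%" ≈ 1.02%
    print(f"Share of budget: {savings / implied_budget:.1%}")  # ~0.2%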

The research highlights the importance of recognising people who could be at risk of developing AKI and ensuring that they receive appropriate care and management.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Failure to spot kidney illness ‘kills 1,000 a month’. The Independent, April 22 2014

Hospital kidney deaths 'unacceptable', says doctor. ITV News, April 22 2014

Thousands die through thirst in hospital. The Times, April 22 2014

Thousands die of thirst and poor care in NHS. The Daily Telegraph, April 22 2014

Tragedy of 3,000 patients that die of thirst in hospitals every month. Daily Express, April 22 2014

Links To Science

Kerr M, Bedford M, Matthews B, O’Donoghue D. The economic impact of acute kidney injury in England. Nephrology Dialysis Transplantation. Published online April 21 2014


Edible flowers not proven to prevent cancer

NHS Choices - Behind the Headlines - Wed, 23/04/2014 - 01:00

“Eating flowers grown in British gardens could help to reduce the risk of heart disease and cancer, according to a new study,” The Daily Telegraph reports.

However, the study the news is based on did not actually involve any humans.

So while the flowers may be edible, claims they prevent cancer are unproven.

The study in question measured the levels of one group of antioxidant chemicals, called phenolics, in 10 edible flowers. It found high levels of these compounds in tree peony, a group of plants native to China. Tree peony extracts also had the highest levels of antioxidant activity.

As mentioned, the study did not assess the effects of the flowers on human health outcomes.

While antioxidants have been suggested to have various health benefits, a review of antioxidant supplements found no evidence of a beneficial effect on survival. In fact it found that some compounds might actually be harmful.

The review highlights the importance of not assuming that compounds will be beneficial just based on their antioxidant levels.

This doesn’t mean people can’t continue to enjoy edible flowers for their beauty and taste. However, some flowers are poisonous, so people should be careful not to eat flowers unless they are certain they are safe.

Current methods known to reduce the risk of cancer, such as not smoking, eating a healthy diet and regular exercise, may not be particularly newsworthy, but they are tried and tested.

 

Where did the story come from?

The study was carried out by researchers from Zhejiang University and other research centres in China. It was funded by Foundation of Fuli Institute of Food Science, Zhejiang University, and the National Natural Science Foundation of China. The study was published in the peer-reviewed Journal of Food Science.

The Daily Telegraph reports on this story briefly and uncritically. The suggestion in its headline that edible flowers may reduce cancer risk is not supported by this study.

 

What kind of research was this?

This was laboratory research looking at the antioxidant chemicals in edible flowers found in China. The study measured the amount of a specific group of antioxidant compounds called phenolics, which includes flavonoids.

The authors say that increased consumption of phenolics has been associated with reduced risk of cardiovascular disease and certain cancers.

While this study can tell us how much of these compounds are present in the flowers, it cannot tell us what effect they have on human health.

 

What did the research involve?

The researchers measured the level of phenolic compounds in 10 edible flowers commonly found in China:

  • Paeonia suffruticosa (tree peony)
  • Lilium brownii var. viridulum (a type of lily)
  • Flos lonicerae (Japanese honeysuckle)
  • Rosa chinensis (China rose)
  • Lavandula pedunculata (French lavender)
  • Prunus persica (peach)
  • Hibiscus sabdariffa (a type of hibiscus)
  • Flos carthami (safflower)
  • Chrysanthemum morifolium (a type of chrysanthemum)
  • Flos rosae rugosae (a type of rose)

They also looked at exactly which phenolic compounds were found in the flowers, and measured their antioxidant activity.

 

What were the basic results?

Paeonia suffruticosa (tree peony) had the highest levels of phenolic compounds and Flos lonicerae (Japanese honeysuckle) had the highest levels of flavonoids. Paeonia suffruticosa and Rosa chinensis extracts had the highest levels of antioxidant activity. Overall, higher levels of phenolic compounds in the flowers were associated with higher levels of antioxidant activity.

 

How did the researchers interpret the results?

The researchers concluded that the 10 edible flowers tested were rich sources of phenolic compounds and antioxidant activity. They also suggest that the flower extracts have potential to be used as food additives to prevent chronic diseases and promote health.

 

Conclusion

The current study has identified the levels of phenolic compounds in certain edible flowers. These compounds have antioxidant properties, and antioxidants have been suggested to have various health benefits, including fighting cancer and heart disease. However, the current study has not assessed whether eating these flowers could have effects on human health, or at what levels they would need to be consumed to have any effects.

A Cochrane systematic review pooled data on the effects of antioxidant supplements tested in clinical trials and found no evidence of beneficial effects on survival in healthy people or people with specific diseases.

Certain antioxidant supplements (beta-carotene and vitamin E) appeared to potentially slightly increase the risk of death during the trials.

While the trials in this review may not have tested edible flower extracts specifically, the review does highlight the importance of testing compounds to be sure of their effects, rather than assuming that simply because they have antioxidant properties they must be beneficial.

Just because a substance comes from a plant, you should never assume it is safe. Some of the deadliest poisons are derived from plants.

Similarly, despite claims to the contrary, it is untrue that science looks down its nose at substances derived from plants. Many widely used drugs, including aspirin, warfarin and some chemotherapy drugs are based on plant chemicals.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Eating flowers 'could help reduce cancer risk'. The Daily Telegraph, April 22 2014

Links To Science

Xiong L, Yang J, Jiang Y, et al. Phenolic Compounds and Antioxidant Capacities of 10 Common Edible Flowers from China. Journal of Food Science. Published online March 12 2014


Cheap holidays 'increased' melanoma rates

NHS Choices - Behind the Headlines - Tue, 22/04/2014 - 11:35

“Skin cancer rates ‘surge since 1970s’,” reports the BBC.

The news is based on a press release from Cancer Research UK after the release of new figures for the number of people diagnosed with malignant melanoma, the most serious form of skin cancer. The statistics show that the number of people being diagnosed with malignant melanoma is five times higher than it was 40 years ago.

The press release argues that the rise can be explained, at least partially, by the growth of cheap package beach holidays since the late 1960s.

The rise in popularity of sunbeds and sunlamps may have also contributed to the increased rates.

Cancer Research UK, in collaboration with Nivea Sun, has used this opportunity to launch the third year of its campaign to encourage people to enjoy the sun safely this summer. Nivea Sun is a sunscreen brand. Many other sunscreen brands are available. For more information see our video on how to apply sunscreen.

 

What is malignant melanoma?

Malignant melanoma is the most serious form of skin cancer. In malignant melanoma, cells called melanocytes – which make a pigment, or colouring, for the skin – become cancerous. The cancer can rapidly become aggressive, spreading to deeper tissues, lymph nodes and other parts of the body. Early recognition, diagnosis and treatment are essential for a good outcome.

One of the early warning signs is the change in appearance of an existing mole or the appearance of a new mole.

A good way to tell the difference between a normal mole and a melanoma is to use the ABCDE checklist:

  • A for asymmetrical – melanomas have two very different halves and are an irregular shape
  • B for border – unlike a normal mole, melanomas have a notched or ragged border
  • C for colours – melanomas will be a mix of two or more colours
  • D for diameter – unlike most moles, melanomas are larger than 6mm (¼ inch) in diameter
  • E for enlargement or evolution – a mole that changes characteristics and size over time is more likely to be a melanoma

Melanomas can appear anywhere on your body, but the back, legs, arms and face are the most common locations. Sometimes, they may develop underneath a nail.

It is also important to check your skin regularly, and to see your GP promptly if you do detect any changes.

 

What do the latest statistics show?

After accounting for the age of people in the population, it was found that the number of people diagnosed with malignant melanoma is five times higher than 40 years ago.

Cancer Research UK states that more than 13,000 people are now being diagnosed with the disease every year, or 17 for every 100,000 people in Great Britain. In the mid-1970s, approximately 1,800 people were diagnosed with malignant melanoma each year, or just over 3 per 100,000 people.
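
As a quick sanity check of the "five times higher" claim, the two quoted rates can be compared directly. Taking "just over 3 per 100,000" as roughly 3.4 is an assumption made here for illustration:

    # Ratio of current to mid-1970s melanoma diagnosis rates, using the
    # figures quoted above; 3.4 is an assumed reading of "just over 3".
    rate_now = 17.0    # diagnoses per 100,000 people, current
    rate_1970s = 3.4   # diagnoses per 100,000 people, mid-1970s (assumed)

    print(f"Increase: {rate_now / rate_1970s:.1f}-fold")  # ~5.0-fold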

Malignant melanoma is now the fifth most common cancer in the UK and more than 2,000 people die from the disease each year.

 

Why has the number of people being diagnosed with malignant melanoma increased?

The statistics can’t tell us why the number of people being diagnosed with malignant melanoma has increased.

Cancer Research UK suggests that the increase could be due to:

  • the increase in overseas package beach holidays
  • the popularity of tanning
  • increased sunbed use
  • better detection

UV exposure is the main risk factor for malignant melanoma and therefore it is possible that increased sun exposure through travel to hot countries, sunbathing and sunbed use are at least in part responsible for the increase in diagnoses.

However, Cancer Research UK’s latter point about better detection is also important. Awareness of malignant melanoma and its risks – both among the public and health professionals – is likely to be much greater today than it was in the 1970s.

This may have contributed to the increase in the number of diagnoses, which is likely to be a good thing, as improved awareness and earlier diagnosis lead to an improved prognosis.

As Cancer Research UK statistics also demonstrate, the five-year survival rate from malignant melanoma is much higher today than it was in the 1970s. However this may be due to the “lead time effect”, where five-year survival improves simply because the disease is diagnosed earlier.

 

What are the risk factors for malignant melanoma?

Those with the highest risk of the disease include people with pale skin, lots of moles or freckles, a history of sunburn or a previous skin cancer, or a family history of the disease. However, all people should take adequate precautions in the sun (e.g. during the UK summer or when travelling to hot countries), including using sunscreens, covering up with clothing, hats and sunglasses, and staying out of the sun during the hottest parts of the day.

 

How is it treated?

The main treatment for melanoma is surgery, although your treatment will depend on your circumstances.

If melanoma is diagnosed and treated at an early stage, surgery is usually successful. However, you may need follow-up care to prevent melanoma recurring.

If melanoma isn't diagnosed until an advanced stage, treatment is mainly used to slow the spread of the cancer and reduce symptoms. This usually involves medicines, such as chemotherapy.

 

How can you prevent it?

One of the best ways to reduce your risk of melanoma is to avoid overexposure to UV light by:

  • avoiding sunburn
  • spending time in the shade, covering up and using sunscreen
  • avoiding using sunbeds and sunlamps

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Skin cancer rates 'surge since 1970s'. BBC News, April 21 2014

Skin cancer alert issued as number of cases soars. The Guardian, April 21 2014

Melanoma rates in the UK are up five times on the 1970s. The Independent, April 21 2014

Success in war on skin cancer: 8 in 10 cases now curable. Daily Express, April 21 2014


'Silver surfers' may have lower depression risk

NHS Choices - Behind the Headlines - Tue, 22/04/2014 - 01:00

"Silver surfers are happier than techno-foges [sic]: Internet use cuts elderly depression rates by 30 per cent," the Mail Online reports after the results of a US study have suggested that regular internet use may help combat feelings of isolation and depression in older adults.

In this study, 3,075 retired people were surveyed every two years between 2002 and 2008. Internet usage was assessed based on a "yes/no" response to the question: "Do you regularly use the world wide web, or the internet, for sending and receiving e-mail or for any other purpose?"

Depression symptoms were measured using a short version of the Center for Epidemiologic Studies (CES-D) scale. This scale looks at responses to eight "yes/no" questions about mood and defines a "depressed state" as a score of four or more out of eight.

The study found that internet users were less likely to have a "depressed state" than non-users, with internet use leading to a 33% reduction in the probability of being in a "depressed state". 

But it's important to note that this does not necessarily mean those who took part in the study had a medical diagnosis of depression. These findings cannot prove that internet use is the direct cause of any reduction in depression symptoms.

A randomised controlled trial of internet use would be required to better see whether – and how – internet use can reduce the risk of depression.

The internet, like any tool, can be a force for both good and bad. On the plus side, it does allow you to access up to seven years of Behind the Headlines articles.

 

Where did the story come from?

The study was carried out by researchers from Michigan State University, the University of Montevallo, Harvard University, and the Phoenix Centre for Advanced Legal and Economic Public Policy Studies in the US. The sources of funding for this study were not reported.

It was published in the peer-reviewed Journals of Gerontology, Series B: Psychological Sciences and Social Sciences.

The story was covered well by the Mail Online, although it should be noted that some of the quotes from the researchers were based on their personal opinions, rather than the results of the study.

 

What kind of research was this?

This study looked at data collected from repeated cross-sectional surveys completed by retired, non-working US citizens every two years between 2002 and 2008. The current study aimed to determine the influence of past depression symptoms and internet use on current depression symptoms.

This repeated analysis of data collected from cross-sectional surveys can suggest associations, but it can't prove that internet use was responsible for differences in depression symptoms. A randomised controlled trial of internet use would be required to better show whether – and how – internet use can reduce the risk of depression symptoms.

Importantly, this study did not obtain confirmed medical diagnoses of depression. Depression symptoms were only assessed using a short version of the Center for Epidemiologic Studies (CES-D) scale, which asks eight questions with "yes/no" responses.

Though this is a commonly used measure of depression in older adults, particularly in research studies such as this, the indication of a "depressed state" as used in this study – a score of four or more out of eight – does not necessarily mean a person has depression.

 

What did the research involve?

The researchers analysed information on 3,075 retired, non-working people collected as part of the Health and Retirement Study between 2002 and 2008. This study surveys people over the age of 50 every two years.

In this survey, depressive symptoms were measured using the short eight-item version of the Center for Epidemiologic Studies (CES-D) scale. The CES-D score on this shortened version is based on responses to eight "yes/no" questions assessing mood, with higher scores indicating more depression symptoms.

For the purposes of this study, participants were categorised as being in a "depressed state" if they had scores of four or more out of eight (the researchers note that the average score was 1.4 and approximately 12% of participants had a score of four or more).
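
To make that cutoff concrete, here is a minimal sketch of the scoring rule as described above. The example answers are hypothetical, and the real CES-D item wordings are not reproduced:

    # Minimal sketch of the "depressed state" cutoff described above:
    # eight yes/no items, score = number of symptom-indicating answers,
    # "depressed state" = a score of four or more out of eight.

    def depressed_state(answers):
        """answers: list of 8 booleans, True = symptom endorsed."""
        score = sum(answers)
        return score >= 4, score

    # Hypothetical set of answers, invented purely for illustration
    flagged, score = depressed_state([True, True, False, True,
                                      False, True, False, False])
    print(f"Score {score}/8 -> depressed state: {flagged}")  # Score 4/8 -> True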

Internet use was based on the answer to the question: "Do you regularly use the world wide web, or the internet, for sending and receiving e-mail or for any other purpose?"

The researchers looked at the effects of past "depressed state" and internet use on current "depressed state".

They adjusted their analyses for potential confounders, including:

  • age
  • gender
  • race
  • education
  • whether participants were married
  • physical activity
  • health conditions
  • household size
  • when the survey was completed

 

What were the basic results?

Over the course of the whole study, 14% of participants had a CES-D score of four or more on average. This was found to be relatively stable across time (13.5% in 2002; 12.9% in 2004; 14.4% in 2006; 15.4% in 2008). On average, 9.1% of internet users had a CES-D score of four or more compared with 16.1% of non-users.

About half (48.6%) of those categorised as being in a depressed state in one survey according to these criteria were also found to be in a depressed state in the preceding survey.

Internet use was also stable over the four surveys (28.9% in 2002; 30.4% in 2004; 30.0% in 2006; and 29.6% in 2008), with 85% of users in a current wave also being users in the preceding wave of surveys.

The researchers found that being in a depressed state is persistent, with people in a depressed state in a previous survey about 50% more likely to be in a depressed state in the current survey. Similarly, being in a depressed state in the first survey in 2002 greatly increased the probability of a later depressed state.

Internet users were found to be less likely to be in a depressed state than non-users, leading to a 33% reduction in the probability of a depressed state.

The researchers performed additional analysis to check that the reduction in the probability of a depressed state in internet users was not the result of differences between internet users and non-users.

To do this, they matched internet users and non-users based on demographic variables. In this analysis, internet use was found to reduce the probability of a depressed state by 48%.
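
The matching step can be illustrated with a simple sketch. This is a generic nearest-neighbour match on two demographic variables, with made-up values; the study's actual matching procedure is not described in enough detail here to reproduce:

    # Illustrative nearest-neighbour matching of internet users to
    # non-users on demographics. All values are hypothetical and this is
    # a generic technique, not the study's exact method.
    users = [{"id": 1, "age": 67, "edu": 14}, {"id": 2, "age": 75, "edu": 10}]
    nonusers = [{"id": 3, "age": 66, "edu": 13}, {"id": 4, "age": 80, "edu": 9},
                {"id": 5, "age": 74, "edu": 11}]

    def distance(a, b):
        # Simple absolute-difference distance over age and years of education
        return abs(a["age"] - b["age"]) + abs(a["edu"] - b["edu"])

    for u in users:
        match = min(nonusers, key=lambda n: distance(u, n))
        print(f"User {u['id']} matched to non-user {match['id']}")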

They also performed some preliminary analysis of what could explain the reduction in the probability of depressed state in internet users. They found that using the internet reduced the probability of a depressed state the most in people living alone.

They used this result to hypothesise that internet use may improve isolation and loneliness. This hypothesis remains unproven, but is arguably plausible.

 

How did the researchers interpret the results?

The researchers concluded that, "For retired older adults in the United States, internet use was found to reduce the probability of a depressed state by about 33%. Number of people in the household partially mediates this relationship, with the reduction in depression largest for people living alone.

"This provides some evidence that the mechanism linking internet use to depression is the remediation of social isolation and loneliness. Encouraging older adults to use the internet may help decrease isolation and depression."

 

Conclusion

This US study analysed repeated cross-sectional surveys of retired older adults collected as part of the Health and Retirement Study between 2002 and 2008. The study found that depression symptoms were persistent, with people with a "depressed state" at one time point during the study more likely to have a "depressed state" at another time point.

It also found that internet users were less likely to have a "depressed state" than non-users, with internet use leading to a 33% reduction in the probability.

Preliminary analysis found that using the internet reduced the probability of a depressed state the most in people living alone. The researchers used this result to hypothesise that internet use may improve isolation and loneliness.

However, there are several important limitations of this study. Importantly, the study did not obtain confirmed medical diagnoses of depression. Depression symptoms were only assessed using a short version of the Center for Epidemiologic Studies (CES-D) scale, which asks eight questions with "yes/no" responses.

This is a commonly used measure of depression in older adults, particularly in research studies such as this. But the indication of a "depressed state" used in this study – a score of four or more out of eight – does not necessarily mean a person has depression. The CES-D scale is designed to assess a history of symptoms over the past two weeks, so a high score could be the result of a temporary lowering of mood rather than clinical depression.

It is also worth noting that internet usage was based on a "yes/no" response to the question: "Do you regularly use the world wide web, or the internet, for sending and receiving e-mail or for any other purpose?" There was no assessment of what the internet was used for, or how much time was spent on the internet.

The repeated analysis of data collected from cross-sectional surveys can suggest associations, but it can't prove that internet use was responsible for differences in depression symptoms.

There may be many other sociodemographic, psychological, health and lifestyle factors influencing the observed relationship that this study has not been able to account for.

A randomised controlled trial of internet use would be required to show whether – and how – internet use can reduce the risk of depression.

Despite these limitations, there are many anecdotal reports from older adults about how internet use has made them feel more connected and less isolated.

If you know an older person who you think would benefit from using the internet, encouraging them to go to their local library is probably the best first step towards becoming a "silver surfer".

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Silver surfers are happier than techno-foges: Internet use cuts elderly depression rates by 30 per cent. Mail Online, April 18 2014

Links To Science

Cotten SR, Ford G, Ford S, Hale TM. Internet Use and Depression Among Retired Older Adults in the United States: A Longitudinal Analysis. The Journals of Gerontology – Series B. Published online March 26 2014


Apathy unproven as early warning sign of dementia

NHS Choices - Behind the Headlines - Thu, 17/04/2014 - 11:40

“Elderly who lose interest in pastimes could be at risk of Alzheimer's,” reports The Daily Telegraph, with other papers reporting similar headlines.

These incorrect headlines are based on the results of a study that looked for a link between symptoms of apathy and structural brain changes (on brain scans) in over 4,000 older adults who did not have dementia.

The researchers were interested in discovering whether there was a link between changes in brain volume and reported symptoms of apathy.

These symptoms were defined as:

  • giving up activities and interests
  • preferring to stay at home rather than going out and doing new things 
  • not feeling full of energy

People who reported two or more of the symptoms listed above had significantly smaller total brain volume and grey and white matter volumes, compared to people who reported fewer than two symptoms.

Our grey matter contains predominantly nerve cell bodies – it is also where memories are stored and where learning takes place in the brain. White matter contains nerve cell fibres and is responsible for communication between different brain regions. People with symptoms of apathy also had more abnormal changes to their white matter.

As symptoms of apathy and structural brain changes were assessed at the same time, we don’t know if the two are directly related or if there are other factors at play.

It’s currently unproven whether keeping both the mind and the body active will prevent dementia, but it can help improve a person’s quality of life.

Read more about how getting active can improve your wellbeing.

 

Where did the story come from?

The study was carried out by researchers from the University Medical Centre Utrecht in the Netherlands; the National Institute on Aging and the Laboratory for Epidemiology, Demography and Biometry in the US; and the Icelandic Heart Association, the University of Iceland, Janus Rehabilitation and Landspitali University Hospital in Iceland. It was funded by a US National Institutes of Health contract, the US National Institute on Aging Intramural Research Programme, Hjartavernd (the Icelandic Heart Association) and Althingi (the Icelandic Parliament).

The study was published in the peer-reviewed journal Neurology.

This story was covered by The Independent, the Daily Mail and The Times. The Mail’s and The Independent’s coverage was poor, with both newspapers reporting that losing interest in hobbies and other activities in old age could be an early sign of dementia or Alzheimer’s. This study did not investigate whether symptoms of apathy were linked to Alzheimer’s or other dementias. Instead, it looked for a link between apathy symptoms and structural brain changes at a particular point in time.

The Times’ coverage was more measured, as it stressed that a direct causal link between apathy, brain size and dementia risk had not been proven by the study.

 

What kind of research was this?

This was a cross-sectional study of 4,354 older people without dementia who were participating in the Age, Gene/Environment Susceptibility-Reykjavik Study. It aimed to discover if there was a link between apathy symptoms (lack of interest, enthusiasm or concern) and structural brain changes.

Cross-sectional studies only analyse people at one particular point in time. This means that we don’t know whether the appearance of apathy symptoms and brain changes happened at the same time or if one happened before the other. We also don’t know if the two things are directly related or if there are other factors associated with both.

 

What did the research involve?

The researchers studied 4,354 older people (with an average age of 76) without dementia who were participating in the Age, Gene/Environment Susceptibility-Reykjavik Study, which is an ongoing cohort study into the effects of ageing and genetics.

Apathy symptoms were assessed through responses to three items relating to apathy on the Geriatric Depression Scale. The three questions relating to apathy were:

  • Have you dropped many of your activities and interests?
  • Do you prefer to stay at home, rather than going out and doing new things?
  • Do you feel full of energy?

Brain volumes and total white matter lesions (abnormal changes in white matter) were measured from MRI scans.

The researchers compared people with two or more apathy symptoms to those with fewer than two symptoms, to see if there were differences in brain volume and white matter lesions.

They adjusted their analyses for a wide variety of confounding factors, including age, education, skull size, physical activity, depressive symptoms and antidepressant use.

What were the basic results?

Just under half of participants (49%) had two or more symptoms of apathy. People with two or more symptoms were older and more likely to be women. They also had lower levels of education, were less physically active, had poorer Mini-Mental State Examination scores, walked more slowly, and were more likely to have high blood pressure, mild cognitive impairment or brain infarcts, to be taking antidepressants, and to have higher depression scores.

After adjusting their analyses for confounders, people with two or more apathy symptoms had significantly smaller total brain volume and grey and white matter volumes than those with fewer than two apathy symptoms. People with two or more symptoms had 0.5% less grey matter and 0.5% less white matter. They also had more white matter lesions.

Differences in grey matter volumes were particularly noticeable in the frontal and temporal lobes. These are two of the main brain regions, with the frontal lobe (at the front of the brain) involved with higher mental processes like thinking, judging and planning, and the temporal lobe at the sides of the brain (near the temples) involved with memory, hearing and language.

Differences in white matter volumes were particularly noticeable in the parietal lobe and the thalamus, both of which are involved in processing sensory information from the body.

How did the researchers interpret the results?

The researchers conclude that: “in this older population without dementia, apathy symptoms are associated with a more diffuse loss of both grey and white matter volumes”.

 

Conclusion

This cross-sectional study found that people who reported at least two symptoms of apathy had significantly smaller total brain volume and grey and white matter volumes than people with fewer than two apathy symptoms. The grey matter contains predominantly nerve cell bodies. It is also where memories are stored and where learning takes place in the brain. White matter contains nerve cell fibres and is responsible for communication between different brain regions. People with symptoms of apathy also had more abnormal changes (lesions) in their white matter.

As symptoms of apathy and structural brain changes were assessed together, we don’t know if the appearance of apathy symptoms and brain changes happened at the same time, or if one happened before the other. We also don’t know if the two things are directly related or if there are other factors associated with both.

This study has found that apathy symptoms are linked to brain changes. However, this study did not investigate whether apathy symptoms were associated with the development of Alzheimer’s or other types of dementia.

Currently, there is no guaranteed method to prevent dementia. However, evidence suggests that to reduce your risk of some forms of dementia you should:

  • eat a healthy diet
  • maintain a healthy weight
  • exercise regularly
  • do not drink too much alcohol
  • stop smoking (if you smoke)
  • make sure you keep your blood pressure at a healthy level

Keeping your mind active may also help. Read more about possible methods to reduce your dementia risk.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Elderly who lose interest in pastimes could be at risk of Alzheimer's Disease. The Daily Telegraph, April 17 2014

Apathy in old age 'an early sign of dementia': Study shows that losing interest in hobbies could mean Alzheimer's. Daily Mail, April 17 2014

Brain size shrinks in the elderly as apathy grows. The Times, April 17 2014

Links To Science

Grool AM, Geerlings MJ, Sigurdsson S, et al. Structural MRI correlates of apathy symptoms in older persons without dementia. Neurology. Published online April 16 2014


NICE highlights how hand washing can save lives

NHS Choices - Behind the Headlines - Thu, 17/04/2014 - 01:00

“Doctors and nurses should do more to stop hospital patients developing infections, an NHS watchdog says,” BBC News reports.

The National Institute for Health and Care Excellence (NICE) has highlighted how basic hygiene protocols, such as hand washing, may be overlooked by some health professionals, which may threaten patient safety.

NICE points out that one in 16 people being treated on the NHS picks up a hospital acquired infection such as meticillin-resistant Staphylococcus aureus (MRSA).

“It is unacceptable that infection rates are still so high within the NHS,” said Professor Gillian Leng, director of Health and Social Care at NICE. “Infections are a costly and avoidable burden. They hinder a patient's recovery, can make underlying conditions worse, and reduce quality of life.”

The measures to reduce infection are laid out by NICE in a “Quality Standard” on “Infection prevention and control” and are outlined below.

 

What has NICE said?

This NICE Quality Standard lays out six specific statements for NHS staff on preventing and controlling infections. They are based on previous more detailed guidance and are listed below:

  • People should be offered antibiotics according to local guidance about which ones are most suitable. They should only be prescribed antibiotics when they are needed and not for self-limiting, mild infections such as colds and coughs, earache and sore throats. This measure is aimed at reducing the problem of antibiotic resistance, which is when an infection no longer responds to treatment with one or more types of antibiotic, and so is more likely to spread and can become serious.
  • NHS organisations should aim to continually improve their approach to preventing infection (for example, by sharing information with other organisations and monitoring rates of infection).
  • All health care staff should always clean their hands thoroughly, both immediately before and immediately after coming into contact with a patient or carrying out care, and even after wearing gloves. Hands can usually be cleaned with either soap and water or an alcohol-based handrub, but soap and water must be used when the hands are obviously soiled or contaminated with bodily fluids, or when caring for people with diarrhoea or vomiting. All care providers should be trained in effective hand cleaning techniques. Hand hygiene in hospitals has improved in recent years, says NICE, but good practice is still not universal.
  • Staff involved in the care of patients with urinary catheters should minimise the risk of infection by carrying out procedures to make sure that the catheter is inserted, looked after and removed correctly and safely. These procedures include cleaning hands, using a lubricant when inserting the catheter, emptying the drainage bag when necessary, and removing the catheter as soon as it is no longer needed. (A urinary catheter is a thin flexible tube used to drain urine from the bladder).
  • Staff involved in the care of patients who need a vascular access device should minimise their risk of infection by making sure that the device is inserted, looked after and removed correctly and safely. These procedures include using sterile procedures when inserting the device, using the correct antiseptics and dressings, and removing the device as soon as it is no longer needed. A vascular access device is a tube that is inserted into a main vein or artery and used to administer fluids and medication, monitor blood pressure and collect blood samples.
  • Health care staff should give people who have a urinary catheter, a vascular access device or an enteral feeding tube, and any family members or carers who help them, information and advice about how to look after the equipment, including advice about how to prevent infection. Enteral feeding is a type of feeding used for people who cannot eat normally or safely (for example they may have trouble swallowing) in which liquid food is given through a tube directly into the stomach or upper parts of the digestive system.

 

What are the dangers of not washing hands?

Bugs (microbes) such as bacteria and viruses can easily be spread by touch. They may be picked up from contaminated surfaces, objects or people, then passed on to others. 

Effective hand decontamination – either by washing with soap and water or with an alcohol-based handrub – is recognised as crucial in reducing avoidable infection.

 

What hygiene procedures should visitors to hospitals follow?

When visiting someone in hospital, always clean your hands using soap and water or alcohol handrubs. Do this when you enter or leave a patient’s room or other areas of the hospital. Effective hand decontamination relies on an effective technique, which includes:

  • wetting hands with warm water
  • applying an adequate amount of (preferably liquid) soap
  • rubbing this thoroughly onto all hand surfaces (for at least 10 to 15 seconds)
  • rinsing thoroughly
  • drying thoroughly, preferably with disposable paper towel
  • turning off taps with the paper towel to avoid recontaminating the hands

Alcohol handrub can only be used if hands are free from soiling. The handrub needs to be thoroughly rubbed into all hand surfaces until hands are completely dry.

If you are concerned about the hand hygiene of doctors, nurses or anyone else who comes into contact with the patient you are visiting, you are encouraged to ask them whether they have cleaned their hands.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Hospital infection rates must come down, says watchdog. BBC News, April 17 2014

One in 16 pick up a bug in FILTHY hospitals: NICE blames staff hygiene and dirty equipment for thousands of deaths. Mail Online, April 17 2014

Don't forget to wash your hands, nurses told. The Daily Telegraph, April 17 2014

One in 16 patients in NHS hospitals picks up infection, warns watchdog. The Guardian, April 17 2014

Wash your hands to cut infections, nurses told. The Times, April 17 2014

1 in 16 NHS patients pick up infections. ITV News, April 17 2014

Medics told by watchdog to wash hands often to stop spread of infection. Daily Express, April 17 2014


PET scans may improve brain injury diagnosis

NHS Choices - Behind the Headlines - Wed, 16/04/2014 - 12:30

“PET scans could predict extent of recovery from brain injury, trials show,” The Guardian reports. Evidence suggests that the advanced scanning devices may be able to detect faint signs of consciousness in people with severe brain injuries.

The paper reports on a study that examined how accurate two specialised brain imaging techniques were at diagnosing the conscious state and chances of recovery in 126 people with severe brain damage.

The people were scanned using Positron Emission Tomography (PET) scans, which use a radioactive tracer to highlight cell activity, and functional Magnetic Resonance Imaging (fMRI) scans, which show blood flow in the brain, to demonstrate areas of activity. The results of these scans were compared for accuracy against assessments made using an established coma recovery scale.

The study aimed to see if the scans could accurately distinguish between a minimally conscious state (MCS) – in which there is a chance of recovery – from other disorders of consciousness.

PET scans correctly identified 93% of people with MCS and accurately predicted that 74% would make a recovery within the next year. The fMRI scans were less accurate, correctly identifying only 45% of those with MCS and accurately predicting recovery for just 56% of them.

The brain scans also showed that around a third of the people who had been diagnosed as unresponsive by the coma scale actually had brain activity consistent with minimal consciousness, and just over two-thirds of these people subsequently recovered consciousness.

This small study suggests that PET scanning, together with existing clinical tests, could help accurately identify people with the potential to recover consciousness.

 

Where did the story come from?

The study was carried out by researchers from the University and University Hospital of Liege (Belgium), University of Western Ontario (Canada) and the University of Copenhagen (Denmark). It was funded by the National Funds for Scientific Research (FNRS) in Belgium, Fonds Léon Fredericq, the European Commission, the James McDonnell Foundation, the Mind Science Foundation, the French Speaking Community Concerted Research Action, the University of Copenhagen and the University of Liège.

The study was published in the peer-reviewed medical journal The Lancet.

It was covered fairly in The Guardian and The Times, which understandably looked at the ethical implications for decisions around switching off life support or giving pain relief.

 

What kind of research was this?

This diagnostic study looked at how accurate two specialised brain imaging techniques – Positron Emission Tomography (PET) and functional Magnetic Resonance Imaging (fMRI) – were at correctly distinguishing between different conscious states and predicting recovery in people with severe brain damage. This included both traumatic brain damage, which is typically caused by a severe head injury, and non-traumatic brain damage, which can have many causes, such as a stroke or heart attack.

The brain imaging results were compared with an established coma recovery scale, which is used in the assessment of people with brain damage.

PET scanning involves injecting a radioactive tracer (fluorodeoxyglucose – which is why the scans are often referred to as FDG-PET), which then produces colourful 3D images that show up cell activity in the body. It is most commonly used in the diagnosis of cancer. fMRI scanning shows up blood flow in the brain, which demonstrates areas of brain activity.

The researchers point out that in people with severe brain damage and a disordered level of consciousness, judging the level of awareness is difficult. In particular, the researchers aimed to see whether the scans could accurately distinguish between “unresponsive wakefulness syndrome” and a “minimally conscious state”.

People with “unresponsive wakefulness syndrome” (previously referred to as a vegetative state) differ from people in a coma in that they have their eyes open and show a normal sleep/wake cycle, but aside from this they show no behavioural signs of awareness. Meanwhile, people in a minimally conscious state (MCS) show fluctuating awareness and response to some stimuli (such as instructions or questions). 

The distinction between them has important therapeutic and ethical implications. As the researchers say, people in MCS are more likely to suffer pain and might therefore benefit from pain relief and other interventions to improve their quality of life. They are also more likely to recover higher levels of consciousness than those with unresponsive wakefulness syndrome. In several countries, doctors have a legal right to withdraw artificial life support from people with unresponsive wakefulness syndrome, but not those with MCS.

The researchers also say that up to 40% of such patients are misdiagnosed by traditional clinical examinations. Brain imaging methods are now being developed to complement these bedside assessments, which can assess spontaneous brain activity or specific responses to mental tasks.

Such methods may help distinguish between people in an MCS and those with unresponsive wakefulness syndrome.

 

What did the research involve?

The researchers included 126 people with severe brain damage who were diagnosed at the University Hospital of Liège, in Belgium, between January 2008 and June 2012. They included people with both traumatic and non-traumatic causes for their brain damage. Of these:

  • 41 had been diagnosed with unresponsive wakefulness syndrome
  • 81 had been diagnosed as being in a minimally conscious state (MCS)
  • 4 patients had been diagnosed with locked-in syndrome (a state where the person is fully conscious but behaviourally unresponsive). These people acted as a control group

The researchers carried out repeated clinical assessment of the patients using a behavioural test called the Coma Recovery Scale-Revised (CRS-R). This is thought to be the most validated and sensitive method for diagnosing disorders of consciousness. The scale has 23 items and is used by specialist staff to assess hearing, vision, motor function, verbal function, communication and level of arousal.

The researchers then carried out imaging using PET and fMRI scans, though not all patients were assessed with each technique (if the person moved too much to obtain a reliable scan, the procedure was left out).

  • For the PET, the person was injected with the imaging agent fluorodeoxyglucose before undergoing a scan. The scan from each person was contrasted against 39 healthy adult controls 
  • For the fMRI scan, patients were asked to do various motor and visuospatial tasks during the imaging session – including imagining playing tennis or walking into a house. The patterns of activity in the brain were also compared to those obtained in 16 healthy volunteers

Twelve months after the initial assessment, the researchers assessed the patients using a validated recovery scale (the Glasgow Outcome Scale – Extended). This assesses their level of recovery and disability and places each person into one of eight categories, ranging from 1 (death) to 8 (having made a good recovery). They also obtained an assessment of each patient’s outcome from medical reports.

The researchers then calculated the diagnostic accuracy of both imaging techniques, using the CRS-R diagnoses as the reference “gold standard”.
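
For readers interested in how such figures are produced: the headline accuracies here are sensitivities – the proportion of people with a given CRS-R diagnosis whom the scan also classified correctly – each with a 95% confidence interval computed from the raw counts. Below is a minimal Python sketch using the Wilson score interval; the counts are hypothetical, chosen only to roughly reproduce the 93% PET figure reported in the results.

```python
import math

def sensitivity_with_ci(true_positives, condition_positives, z=1.96):
    """Sensitivity with a Wilson score 95% confidence interval."""
    p = true_positives / condition_positives
    n = condition_positives
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, centre - half_width, centre + half_width

# Hypothetical counts: 75 of 81 MCS patients correctly identified by PET
sens, low, high = sensitivity_with_ci(75, 81)
print(f"sensitivity {sens:.0%}, 95% CI {low:.0%} to {high:.0%}")
```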

 

What were the basic results?

The main results:

  • PET scanning accurately identified 93% of people in a minimally conscious state (95% confidence interval (CI) 85-98) and had a high level of agreement with behavioural CRS-R scores
  • fMRI was less accurate at diagnosing a minimally conscious state (MCS), correctly identifying 45% of patients (95% CI 30-61) and had lower overall agreement with behavioural CRS-R scores than PET imaging
  • PET correctly predicted outcome after 12 months in 74% of patients (95% CI 64-81), and fMRI in 56% of patients (95% CI 43-67)
  • 13 of the 41 patients (32%) who had been diagnosed as unresponsive with the CRS-R showed brain activity compatible with minimal consciousness on at least one of the brain scans; 9 of these 13 people (69%) subsequently recovered consciousness
  • The tests correctly identified all patients with locked-in syndrome as conscious

 

How did the researchers interpret the results?

They say the results show that, used together with the Coma Recovery Scale, PET scanning might be a useful diagnostic tool in disorders of consciousness. They also say it would be helpful in predicting which people with MCS might make a long-term recovery.

 

Conclusion

This is a valuable diagnostic study that tested how accurate PET and fMRI imaging are at distinguishing between different levels of conscious state and helping to predict recovery. 

Diagnostic assessments are traditionally made using bedside clinical tests – but as the researchers say, judging the level of awareness in people with severe brain damage can be difficult.

In particular, the researchers wanted to see whether the scans could accurately distinguish between people with “unresponsive wakefulness syndrome” and “minimally conscious state”, as distinguishing between these two states can have important therapeutic and ethical implications. The study found that PET scanning had a high accuracy for diagnosing MCS and for predicting recovery.

It’s particularly noteworthy that PET scans detected brain activity in some people who had been diagnosed as unresponsive by the standard Coma Recovery Scale test, and two-thirds of these people subsequently recovered consciousness.

However, the study has some limitations, including its small size, some missing data and possible differences between people who were and were not lost to follow-up. As the researchers acknowledge, their study used a complex method of statistical analysis, so there is a risk of false results.

At a practical level, these specialist types of imaging techniques are expensive and complicated to set up, so could have resource implications.

Overall, the findings suggest that PET scanning could be a promising addition to standard clinical assessments, when trying to diagnose people with severe brain damage and disordered consciousness.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

PET scans could predict extent of recovery from brain injury, trials shows. The Guardian, April 16 2014

Brain scanner can detect signs of hope in vegetative-state patients. The Times, April 16 2014

Links To Science

Stender J, Gosseries O, Bruno M, et al. Diagnostic precision of PET imaging and functional MRI in disorders of consciousness: a clinical validation study. The Lancet. Published online April 16 2014

Categories: NHS Choices

Cannabis linked to brain differences in the young

NHS Choices - Behind the Headlines - Wed, 16/04/2014 - 11:30

“Using cannabis just once a week harms young brains,” the Daily Mail reports.

The newspaper reports on a US study that took one-off MRI brain scans of a group of 20 young adult recreational cannabis users and a comparison group of 20 non-users. The researchers compared their brain structure, focusing on regions that are believed to be involved in addiction.

They found differences between users and non-users in the shape and volume of the nucleus accumbens and amygdala, areas of the brain involved in reward and pleasure responses, emotions, memory, learning, and decision making.

However, a case could be made that the media has overstated the implications of the research.

As the study only involved a single one-off brain scan it cannot prove cause and effect. It could be the case that pre-existing abnormalities in the brain make people more likely to use cannabis rather than vice versa.

The study was small, involving just 20 users and 20 non-users. Examining different groups of people and different age groups could give different results.

And finally, there is currently no proof that the changes detected to the brain will correspond to any demonstrable differences in thought processes and decision making behaviour.

That said, given the widespread use of cannabis, results such as these warrant further study. This may become easier to carry out due to the quasi-legal status of cannabis in some US states.

Where did the story come from?

The study was carried out by researchers from Massachusetts General Hospital, Harvard Medical School, Boston, and Northwestern University Feinberg School of Medicine, Chicago.

Funding was provided by the National Institute on Drug Abuse, the Office of National Drug Control Policy, Counterdrug Technology Assessment Center, the National Institute of Neurological Disorders and Stroke, and the National Institutes of Health. Individual researchers also received support from Warren Wright Adolescent Center at Northwestern Memorial Hospital and Northwestern University; and a Harvard Medical School Norman E. Zinberg Fellowship in Addiction Psychiatry Research.

The study was published in The Journal of Neuroscience, a peer-reviewed medical journal.

By and large, the media has made the (potentially incorrect) assumption that cannabis use has harmed the brain and is responsible for alleged changes in behaviour. For example, the Daily Mail headline that “cannabis once a week harms young brains” is not justified by this research.

The study did not investigate whether the brain changes observed were harmful (for example, in terms of thinking or behaviour); it only found that the brain structures were different. Also, users in the study averaged 11 cannabis joints per week, rather than one.

This small cross sectional study taking one-off brain scans cannot prove whether cannabis was behind any changes seen to the brain. Observational studies that followed people over time would be able to provide better evidence of this.

 

What kind of research was this?

This cross sectional study took MRI scans of the brains of young adults who used marijuana (cannabis) recreationally, and compared them with brain images of adults who did not use cannabis. They were interested in comparing the structure in particular areas of the brain.

Cannabis is one of the most commonly used illicit drugs, particularly by adolescents and young adults. It has been shown to have effects upon thought processes such as learning, memory, attention and decision-making.

Previous animal studies have shown that exposing rats to delta-9-tetrahydrocannabinol (THC), the main psychoactive chemical of cannabis, leads to changes in brain structures, including the nucleus accumbens. In people, the nucleus accumbens is believed to play a central role in the brain’s reward centre and pleasure-seeking behaviour. However, less is known about the relationship between cannabis use and brain structure in people, and this is what this study aimed to look at.

 

What did the research involve?

The study included 20 young adult current cannabis users (aged 18–25 years; 9 male) and 20 controls who did not use cannabis. The controls were matched by age, sex, ethnicity, hand dominance and educational level. Cannabis users used cannabis at least once a week but were not considered to be dependent (as assessed using valid diagnostic criteria). They did not include people who met criteria for abuse of alcohol or any other substance.

The participants received MRI imaging on one visit to the study centre. They were asked not to use cannabis on that day, and were given a urine screen for substance use. The main breakdown product of THC can be detected in urine several weeks after last use, so the urine test could not show how long ago participants had last used cannabis. But the researchers checked that none showed signs of acute intoxication on examination (for example, fast heart rate, red eyes or slurred speech).

All participants were scanned using special MRI techniques, specifically looking at the volume, shape and density of gray matter (nerve cell bodies) in the nucleus accumbens and other brain regions that may be involved in addiction.

 

What were the basic results?

The researchers found that the gray matter of cannabis users was denser in the left nucleus accumbens, and in other brain regions including the amygdala, a region believed to play an important role in our emotional responses, including fear and pleasure. Correlating with the increased density of nerve cells, the volume of the left nucleus accumbens was also larger in cannabis users than non-users. 

The higher the reported use of cannabis, the higher the volume of the left nucleus accumbens tended to be, and the greater the density of gray matter.

Cannabis users and non-users also demonstrated differences in brain shape, particularly in the left nucleus accumbens and right amygdala.

The observed differences were seen even after adjusting for age, sex, alcohol and cigarette use.
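
The paper does not say which software or model was used for this adjustment, but a covariate-adjusted group comparison of this kind is typically done with a regression model. A generic sketch using the Python statsmodels library (file and column names are illustrative, not from the paper):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file: one row per participant, with a regional volume,
# group membership and the covariates the researchers adjusted for.
df = pd.read_csv("volumes.csv")

model = smf.ols(
    "accumbens_volume ~ cannabis_user + age + sex + alcohol + cigarettes",
    data=df,
).fit()
# The coefficient on cannabis_user estimates the user/non-user difference
# after adjusting for the listed covariates.
print(model.params["cannabis_user"], model.pvalues["cannabis_user"])
```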

 

How did the researchers interpret the results?

The researchers conclude that their study suggests that cannabis use in young recreational users is associated with exposure-dependent alterations in the structure of the core brain regions involved in the reward system.

 

Conclusion

This study found differences between young recreational cannabis users and non-users in the volume and structure of the nucleus accumbens and amygdala, which have a role in the brain’s reward system, pleasure response, emotion and decision making.

However, as this was only a cross sectional study taking one-off brain scans of cannabis users and non-users, it cannot prove that cannabis use was the cause of any of the differences seen. It is not known whether cannabis use caused these changes in regular users – or, conversely, whether the cannabis users in this study had this brain structure to start with, which may have made them more likely to become regular users of cannabis.

Also, this is a small study comparing the brain structure of only 20 users and 20 non-users. With such a small sample of people, it is possible that any differences in brain structure could have been due to chance. These changes may not have been evident had a larger number of people been examined.

Examination of different samples of people, and in different age groups, may have given different results.

Similarly, analyses examining whether the extent of brain structural change was related to factors such as age at first use, or frequency and duration of use, are less reliable when based on such a small sample of people.

Confirmation of these tentative findings through study of other groups of cannabis users is now needed. 

It would also be of value to see whether the structural differences observed actually correlated with any demonstrable differences in thought processes and decision making behaviour.

As some US states have now, to all intents and purposes, legalised the sale of cannabis, such studies should be easier to carry out.

It is important to stress that cannabis has uncertain effects on thought processes, emotions and mental health, both in the short and longer term. It is also a class B drug which is illegal to possess or distribute. 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Using cannabis just once a week harms young brains: Study shows emotions and motivation affected. Daily Mail, April 16 2014

Even casual use of cannabis alters brain, warn scientists. The Daily Telegraph, April 16 2014

Smoking cannabis could change the part of the brain dealing with motivation, according to one new study. The Independent, April 16 2014

Links To Science

Gilman JM, Kuster JK, Lee S, et al. Cannabis Use is Quantitatively Associated with Nucleus Accumbens and Amygdala Abnormalities in Young Adult Recreational Users. The Journal of Neuroscience. Published online April 16 2014

 

Categories: NHS Choices

Eating chocolate probably won't save your marriage

NHS Choices - Behind the Headlines - Tue, 15/04/2014 - 12:20

“As blood glucose levels plummet, aggression levels rise, and people take it out on those closest to them,” The Daily Telegraph reports.

This news is based on an American study into blood glucose levels and aggression.

Researchers aimed to find out whether people’s blood glucose levels predicted aggressive impulses and aggressive behaviour in married couples.

The thinking behind the study is that as people’s energy levels fall, so does their self-control, making them more likely to lash out (either verbally or physically) to those closest to them. The study included 107 couples, who had their blood sugar measured over 21 days. The researchers measured aggressive impulses by allowing participants to stick pins in a voodoo doll each evening. They were told that the angrier they felt towards their partner, the more pins they should stick in (up to a total of 51!).

Aggressive behaviour was assessed by measuring the intensity and duration of an unpleasant sound (such as fingernails scratching across a blackboard) that one partner selected for the other as a punishment for losing a competition at the end of the study.

The researchers did find an association: the lower participants’ blood glucose levels, the higher their scores on the tests used to assess aggressive impulses and aggressive behaviour.

However, this was a highly experimental and abstract study, and it is difficult to assess what, if any, implications it has in a real world setting. It is certainly not the case, as the Daily Express claims, that “chocolate can save your marriage”.

If you are concerned that your relationship has become abusive – either verbally, physically, or both – call the free 24-hour National Domestic Violence Helpline on 0808 2000 247.

Where did the story come from?

The study was carried out by researchers from The Ohio State University, University of Kentucky and University of North Carolina. It was funded by a US National Science Foundation Grant and published in the peer-reviewed journal PNAS.

Despite headlines to the contrary, the study did not show that “chocolate can save your marriage”. It also didn’t show that married couples who diet are more likely to argue, or that “low levels of blood sugar can increase the risk of a niggling irritation with your partner turning into a blazing row”.

All that it found was that the lower blood glucose levels were, the more pins participants stuck into the voodoo doll, and the greater the intensity and duration of noise participants set for their spouse as a forfeit for losing a competition.

There are also a number of limitations to the study, which should be considered. The researchers didn't determine whether the participants were hungry or whether they were dieting at any stage of the study. They also failed to investigate whether having a sugary snack before completing either the voodoo doll or trial tasks changed the outcome. They also didn’t examine whether the participants had impaired glucose tolerance (a marker of diabetes).

Despite the light-hearted coverage, it is important to state that domestic violence is a serious issue that can affect both men and women. Read more advice for people in abusive relationships.

 

What kind of research was this?

This was an experimental study that aimed to determine whether evening blood sugar (glucose) levels predict aggressive impulses and aggressive behaviour in married couples.

The researchers measured aggressive impulses by allowing participants to stick pins in a voodoo doll, and aggressive behaviour by measuring the intensity and duration of an unpleasant sound that participants selected as the forfeit for their spouse losing a competition.

The researchers wanted to test how low blood glucose levels may relate to violent tendencies among intimate partners. It is unclear how the results of this highly experimental scenario can be applied to actual relationships where domestic violence occurs.

 

What did the research involve?

The researchers recruited 107 married couples to take part in the study. The average age of participants was 36, the average length of marriage was 12 years, and each participant was given $50 to take part. The researchers do not say whether any of the couples had any previous experience of intimate partner violence.

For 21 days, participants measured their blood glucose levels in the morning before breakfast and in the evening before bedtime. Each evening, participants were told to stick between 0 and 51 pins into a voodoo doll that represented their husband or wife, depending on how angry they were with them. Participants were told to do this alone, without their spouse present, and to record the number of pins inserted. The researchers say this was a measure of “aggressive impulses”.

At the end of the trial, each participant competed against their husband or wife on a task involving 25 trials at the laboratory. The winner of each trial could blast the loser with a loud noise (a mixture of unpleasant noises, such as fingernails on a chalkboard, dentist drills and ambulance sirens) through headphones. The winner could also choose the intensity (between 60 decibels – similar to the noise level of laughter – and 105 decibels – the level of a fire alarm) and the duration (between half a second and five seconds). They could also choose not to blast their spouse with noise.
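
As an aside, the decibel scale is logarithmic, so the range on offer was wider than the raw numbers suggest. A quick sketch of the standard conversion (this is basic acoustics, not a figure from the paper):

```python
# Each 10dB step is a tenfold increase in sound intensity.
def intensity_ratio(quieter_db, louder_db):
    return 10 ** ((louder_db - quieter_db) / 10)

# The loudest setting (105dB) delivers roughly 30,000 times the acoustic
# intensity of the quietest (60dB).
print(f"{intensity_ratio(60, 105):,.0f}x")
```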

The researchers measured the intensity and duration of noise participants set for their spouse. However, unbeknown to them, participants actually competed against a computer. Participants lost 13 of the 25 trials (in a randomly determined order) and heard noise on each of those 13 trials. The computer chose random noise intensity and duration levels for the spouse across the 25 trials. The researchers state that this was a measure of “aggressive behaviour”.

The researchers aimed to see if there was a link between glucose levels and “aggressive impulses” (the number of pins participants stuck in the voodoo doll), and whether there was a link between glucose levels and “aggressive behaviour” (the intensity and duration of noise participants set for their spouse).

 

What were the basic results?

The researchers found that the lower the blood glucose level, the more pins participants stuck into the voodoo doll.

Lower-than-average evening glucose levels were also linked to participants selecting longer and more intense noise blasts for their spouse after winning trials.

People who stuck more pins into the voodoo doll across the 21 days also selected louder and longer noise blasts for their spouse.

 

How did the researchers interpret the results?

The researchers said: “Our study found that low glucose levels predicted higher aggressive impulses in the form of stabbing pins in a voodoo doll that represented a spouse. This study also found that low glucose levels predicted future aggressive behaviour [sic] in the form of giving louder unpleasant noise blasts for longer durations to a spouse.”

“There also was a link between aggressive impulses and aggressive behaviour. Lower levels of glucose predicted aggressive impulses, which, in turn, predicted aggressive behaviour. These findings remained significant even after controlling for relationship satisfaction and participant sex. Thus, low glucose levels might be one factor that contributes to intimate partner violence.”

 

Conclusion

This study of married couples found that the lower blood glucose levels were in the evening, the more pins participants stuck into a voodoo doll representing their husband or wife. Lower blood glucose was also associated with selecting longer and more intense noise blasts for their spouse after winning trials.

The real-life implications of these findings are unclear. The researchers wanted to test how low blood sugar levels relate to increased violent tendencies towards a partner. It is already known that very low blood glucose can cause symptoms including altered and irrational behaviour (which may include aggression), but this is usually seen in people with diabetes whose blood sugar drops very low, usually below three or four millimoles per litre (known as hypoglycaemia). The actual blood sugar levels of participants in this study were not reported, and as none were reported to have diabetes or impaired glucose tolerance, it is highly unlikely that glucose levels in any of the participants had fallen to a level where you would expect to see such symptoms.

Most importantly, this study used highly experimental scenarios, where married couples (with no reported experience of partner violence) were asked to carry out two abstract tests. Therefore, the results cannot be applied to real life situations involving domestic violence. 

Intimate partner violence may have varied and complex psychological causes, and cannot be explained by a single simple factor such as low blood sugar.

If you find it difficult to keep aggressive emotions in check and frequently lash out at those around you, you may require anger management training. Read more advice about controlling your anger.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Married couples who diet more likely to argue, study finds. The Daily Telegraph, April 14 2014

Spoiling for a row? Then check your sugar level. The Independent, April 15 2014

How chocolate can save your marriage. Daily Express, April 15 2014

Links To Science

Bushman BJ, DeWall CN, Pond RS, et al. Low glucose relates to greater aggression in married couples. PNAS. Published online April 14 2014

 

Categories: NHS Choices

Salt cuts have 'saved lives,' says study

NHS Choices - Behind the Headlines - Tue, 15/04/2014 - 11:20

"Cutting back on salt does save lives," is the good news on the front page of the Daily Mail. The headline is based on a study of data obtained from the Health Survey for England, the National Diet and Nutrition Survey, and the Office for National Statistics between 2003 and 2011.

The researchers chose 2003 as the start date because this is when the Department of Health launched its salt reduction programme. This consisted of a range of measures, of which possibly the most influential was persuading food manufacturers to reduce the amount of salt put into processed foods.

Salt intake can increase blood pressure, and high blood pressure is linked to an increased risk of stroke and heart attacks.

The study looked at changes in average salt intake, blood pressure, and deaths from stroke and heart disease during this time. Over this period, average blood pressure and salt intake fell significantly, and there was a reduction in the number of deaths from stroke and heart disease.

When the researchers only looked at people who were not taking blood pressure medications or other drugs that might affect blood pressure, there was still a significant decrease in blood pressure between 2003 and 2011, even after adjusting for some relevant confounders.

The researchers say it is likely that this decrease in blood pressure was the result of the reduction in salt intake during this period. However, although this is plausible, the study cannot prove this.

The reduction in blood pressure could be the result of other health and lifestyle changes that were not accounted for. There is also the possibility that improvements in medical care and treatments were also partially responsible for the reduced number of deaths.

Nevertheless, the study lends support to current health recommendations that we keep salt intake to no more than 6g per day for adults (around one teaspoon) to reduce the risk of high blood pressure.

 

Where did the story come from?

The study was carried out by researchers from the Wolfson Institute of Preventive Medicine, part of the Barts and The London School of Medicine and Dentistry, Queen Mary University of London. No funding was received specifically for this research.

It was published in the peer-reviewed BMJ Open, which is an open access journal. The article can be accessed free on the journal's website.

This research was covered well by the UK media, particularly by The Guardian, which included quotes from other experts outlining some of the inherent limitations of the study.

 

What kind of research was this?

This was a serial cross-sectional study. The study looked at three separate data sets:

  • salt intake in a random sample of the English population
  • blood pressure in another sample of the population
  • deaths from stroke and heart disease at different time points to see whether these changed over time

The researchers tried to link changes in salt consumption with changes in blood pressure and deaths from stroke and heart disease. However, salt intake and blood pressure were measured in different people, and different people were surveyed at the different time points.

This type of study cannot show that changes in salt consumption directly caused changes in blood pressure and death. The changes seen are also likely to have been influenced by various other health and lifestyle factors.

 

What did the research involve?

The researchers analysed information on blood pressure and other cardiovascular disease risk factors from people aged 16 and over who had taken part in the Health Survey for England in 2003, 2006, 2008 and 2011. The Health Survey for England is an annual survey of a random sample of the English population.

During this survey, interviewers collected information on demographics (age, sex, ethnic group, education level and household income), smoking status, consumption of alcohol, and fruit and vegetable intake, and trained nurses measured participants' body weight, height and blood pressure.

There was information for 9,183 people in 2003, 8,762 people in 2006, 8,974 in 2008, and 4,753 people in 2011.

Salt intake was analysed in a separate random sample of the population aged between 19 and 64 in the National Diet and Nutrition Survey. This was measured by 24-hour urinary sodium excretion (how much salt was passed out during a day) and verified for accuracy using laboratory methods.
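
As background (the conversion itself is standard chemistry rather than something reported in the paper): because nearly all ingested sodium is excreted in urine, 24-hour urinary sodium in millimoles maps onto estimated salt intake by simple stoichiometry – 1 mmol of sodium corresponds to 58.44mg of sodium chloride. A minimal sketch, with a purely illustrative excretion figure:

```python
SODIUM_MMOL_TO_SALT_G = 58.44 / 1000  # molar mass of sodium chloride, g per mmol

def salt_intake_g(urinary_sodium_mmol_per_day):
    """Estimated daily salt (NaCl) intake from 24-hour urinary sodium."""
    return urinary_sodium_mmol_per_day * SODIUM_MMOL_TO_SALT_G

# Illustrative: 140 mmol/day is roughly 8.2g of salt, well above the
# recommended maximum of 6g/day (about 103 mmol of sodium).
print(round(salt_intake_g(140), 1))
```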

Information on salt intake was available for 1,147 people in 2000-01, 350 in 2005-06, 692 in 2008, and 547 in 2011.

Information on the number of deaths from heart disease and stroke was obtained from the Office for National Statistics using the cause of death on death certificates.

The researchers looked at how changes in salt intake had influenced changes in blood pressure over the decade. To do this, they compared blood pressure in 2011 with blood pressure in 2003 in people who were not taking blood pressure medications or other drugs that might affect blood pressure.

They assumed that the change in salt intake was responsible for the change in blood pressure seen after they adjusted for the following confounders:

  • age
  • sex
  • ethnic group
  • education level
  • household income
  • alcohol consumption
  • fruit and vegetable intake
  • body mass index (BMI)

They also looked at how these changes were linked to the number of deaths from heart disease and stroke.

 

What were the basic results?

From 2003 to 2011:

  • average blood pressure fell significantly – systolic (the upper blood pressure figure, showing the artery pressure when the heart contracts) fell by 3.0mmHg, and diastolic (the lower blood pressure figure, showing the artery pressure when the heart relaxes and fills with blood) fell by 1.4mmHg
  • there were also significant reductions in total cholesterol and the number of people who smoked, and significant increases in fruit and vegetable consumption – but there were also increases in BMI and decreases in HDL ("good") cholesterol
  • average salt intake also fell significantly, by 1.4g per day
  • there was a 42% reduction in the number of deaths from stroke and a 40% reduction in the number of deaths from heart disease

The researchers suggest that the decrease in the number of deaths from stroke and heart disease could be influenced by several factors, including the decreases in blood pressure, total cholesterol, the number of people who smoke, and salt intake, as well as the increase in fruit and vegetable consumption. This could also be influenced by improvements in the medical treatment of blood pressure, cholesterol and cardiovascular disease.

The researchers then focused on people who were not on blood pressure medications or other drugs that might affect blood pressure. After adjusting for the confounders described above, there was still a significant decrease in blood pressure between 2003 and 2011 (systolic fell by 2.7mmHg and diastolic fell by 0.23mmHg). They say that it is likely that this decrease in blood pressure was a result of the reduction in salt intake that occurred during this period.

 

How did the researchers interpret the results?

The researchers concluded that, "The reduction in salt intake is likely to be an important contributor to the falls [in blood pressure] from 2003 to 2011 in England. As a result, it would have contributed substantially to the decreases in stroke and [heart disease] mortality."

 

Conclusion

This UK study used serial cross-sectional data collected as part of the Health Survey for England, the National Diet and Nutrition Survey, and the Office for National Statistics between 2003 and 2011. It found that average blood pressure and salt intake fell significantly, and there was a reduction in the number of deaths from stroke and heart disease.

The researchers only looked at people who were not taking blood pressure medications or other drugs that might affect blood pressure. After adjusting for some relevant confounders, there was still a significant decrease in blood pressure between 2003 and 2011 (systolic fell by 2.7mmHg and diastolic fell by 0.23mmHg). They say that it is likely that this decrease in blood pressure was thanks to the reduction in salt intake that occurred during this period.

However, although the changes in salt intake could have had an effect, this study cannot prove that this is the case. Salt intake and blood pressure were measured in different samples of people, and different people were surveyed at each time point.

There also could have been other factors that are responsible for the changes seen, such as differences in the people measured or other differences that occurred that were not observed by the researchers.

During this period, it is reported that the number of people who smoked fell, but this wasn't adjusted for in the analysis. The researchers didn't take into account other possible factors that could explain the changes seen, such as changes in physical activity, as no information was collected on this.

Overall, the changes could be the result of a complex mixture of various health and lifestyle changes in people over this time that the study has not been able to fully account for.

As the researchers acknowledge, it is also possible that the reduction in deaths from stroke and heart disease could be related to gradual improvements in medical care and treatments over the last decade. This may have had a greater influence than changes in salt intake, and – from this – changes in blood pressure.

Nevertheless, the study lends support to current health recommendations to keep salt intake to no more than 6g per day for adults (around one teaspoon) to reduce the risk of high blood pressure. This in turn may reduce the risk of other cardiovascular diseases, such as stroke and heart disease.

The study also lends weight to the idea that even relatively modest public policy decisions, such as encouraging food manufacturers to reduce salt content in their products, can achieve significant results at a population level. The case could be made that salt levels (and possibly sugar levels) in processed foods should be lowered further.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Salt intake falls 15%... and heart deaths fall 40% (but we still eat too much of it). Daily Mail, April 15 2014

War on salt has cut heart deaths. The Daily Telegraph, April 15 2014

Food manufacturers' salt cuts 'have saved lives'. The Independent, April 14 2014

Salt campaign deserves partial credit for fall in heart deaths. The Guardian, April 14 2014

Fewer heart deaths linked to reduction in salt intake. Daily Express, April 15 2014

Links To Science

He FJ, Pombo-Rodrigues S, MacGregor GA. Salt reduction in England from 2003 to 2011: its relationship to blood pressure, stroke and ischaemic heart disease mortality. BMJ Open. Published online April 14 2014

Categories: NHS Choices

Could statins also protect against dementia?

NHS Choices - Behind the Headlines - Mon, 14/04/2014 - 01:00

“Heart pills taken by millions of people in Britain could dramatically reduce the risk of dementia,” the Daily Express reports.

A study from Taiwan has found an association between the use of statins (cholesterol-lowering drugs) and reduced dementia risk.

In this large study of older adults, researchers recorded people’s first prescription of statins and looked at their later development of dementia – comparing statin users with non-statin users. 

Over an average five-year period, statin use was associated with a 22% reduced risk of dementia. Risk reduction was greater among females than males, and greatest with high-dose statins and with statin use for more than three years.

However, this type of study cannot definitely prove cause and effect – only an association. The researchers also attempted to adjust for factors that could influence any association, such as history of heart disease. But this still may not fully account for these, or other, factors that may be involved in the relationship.

And because it studied Taiwanese people, its results cannot be directly generalised to other populations, such as those of the UK.

Overall, it is not clear whether statins definitely reduce the risk of dementia and, if they do, how they act to reduce risk. It is also not known whether they may reduce the risk of all dementias, or only specific types.

Currently there is no guaranteed method to prevent dementia, although many of the same methods used to prevent heart disease may also help prevent dementia (vascular dementia in particular).

 

Where did the story come from?

The study was carried out by researchers from National Yang Ming University, Taipei and other institutions in Taiwan. The study was supported by the National Science Council Taiwan and published in the peer-reviewed medical journal International Journal of Cardiology.

The Daily Express’s reporting on the study is broadly accurate, but does not consider the limitations of this research.

 

What kind of research was this?

This was a population-based cohort study.

The study included over 33,000 people aged over 60 years from Taiwan and looked back at whether dementia developed in people who were and were not prescribed statins.

The researchers say that there has been some controversy in past research over whether there is any link between statin use and risk of dementia, and Alzheimer’s disease in particular.

The main limitation of this study, as with all cohort studies, is that it can demonstrate an association, but it cannot definitely prove cause and effect.

The study has adjusted for a number of potentially contributing factors (confounders) that could be influencing the association including:

  • age
  • sociodemographic variables
  • various long-term medical conditions coded in medical records (for example high blood pressure, heart disease, stroke, diabetes, and liver and kidney disease)

Still, this may not fully account for these or other health or lifestyle factors that may be involved in the relationship, especially for such a complex condition as dementia.

 

What did the research involve?

The research used the Longitudinal Health Insurance Database 2000, which includes a randomly sampled group of 1 million individuals included in Taiwan's National Health Insurance Research Databases (NHIRD) between 1996 and 2010. The NHIRD contains registration information, claims data, and information on clinical visits, diagnostic codes for diseases (according to the International Classification of Diseases) and prescription details.

For the purposes of this study, the researchers included only people above the age of 60 years who had not had a statin prescription or dementia diagnosis in the three years prior to the start of the cohort. They excluded people who had been diagnosed with dementia prior to prescription of statins.

Statin use was defined as receipt of at least one prescription of statins during the period covered by the cohort.

Statin users were each matched by age and gender to a person who was not taking statins. The researchers recorded statin use:

  • by individual drug
  • by mechanism of drug action
  • according to duration of use

New cases of dementia were defined as the first time a diagnostic code was given for any type of dementia, from the date of the statin prescription onwards to the end of the study in 2010. However, they excluded from their analyses anyone diagnosed with dementia within one year of statin prescription, or who had less than one year of follow-up.

The researchers considered many potential confounders, including age and various sociodemographic factors recorded at the time the statin was first prescribed. They also took into account various diseases recorded at the time that the statin was prescribed (such as high blood pressure, heart disease, stroke, diabetes, and liver and kidney disease).
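
The paper does not state which statistical software or exact model specification was used, but an adjusted hazard ratio of this kind is typically estimated with a Cox proportional hazards model. A generic sketch using the Python lifelines library (file and column names are illustrative):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical file: one row per participant, with follow-up time in years,
# whether dementia was diagnosed (0/1), statin use (0/1) and confounders.
df = pd.read_csv("cohort.csv")

cph = CoxPHFitter()
cph.fit(
    df[["years_followed", "developed_dementia", "statin_user",
        "age", "hypertension", "diabetes"]],
    duration_col="years_followed",
    event_col="developed_dementia",
)
# The hazard ratio for statin_user is adjusted for the other covariates;
# a value of about 0.78 would correspond to the reported 22% risk reduction.
print(cph.hazard_ratios_["statin_user"])
```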

 

What were the basic results?

Just over half of the 16,699 statin users and their 16,699 non-statin-using comparison group were female. The average follow-up time was five years.

Comparing sociodemographic and health characteristics, there were few significant differences between the statin users and non-users. Exceptions were age and history of chronic diseases such as high blood pressure.

Overall, the incidence of dementia was lower among the statin users than non-users, equating to statin use being associated with a 22% reduced risk of dementia (hazard ratio [HR] 0.78, 95% confidence interval [CI] 0.72 to 0.85).
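
The arithmetic behind the 22% figure is simple: for a hazard ratio below 1, the percentage risk reduction is one minus the hazard ratio, and the same conversion applies to the confidence interval bounds. A minimal sketch:

```python
def risk_reduction_percent(hazard_ratio):
    """Percentage risk reduction implied by a hazard ratio below 1."""
    return (1 - hazard_ratio) * 100

# Point estimate and 95% CI bounds from the study:
for hr in (0.78, 0.72, 0.85):
    print(f"HR {hr:.2f} -> {risk_reduction_percent(hr):.0f}% reduced risk")
```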

The risk reduction with statin use was greater for women (24%) than men (14%).

When looking at the type of statin, risk reduction was greatest with high-dose statins, and with use for more than three years.

However, on sub-analyses by type of dementia, the only significant association was found between statin use and any type of dementia with the exclusion of vascular dementia. There was no significant association between statin use and Alzheimer’s disease specifically, or statin use and vascular dementia specifically.

 

How did the researchers interpret the results?

The researchers conclude that: “Statin use was associated with a significantly lower risk of dementia in the elderly patients in Taiwan. The potency and the cumulative duration of statin utilised played critical roles.”

 

Conclusion

This study of a large population of older Asian adults found an association between statin use and a reduced risk of developing dementia over an average five years of follow-up.

The main limitation of this study is that it can demonstrate an association, but it cannot definitely prove cause and effect. The study has adjusted for a number of measured confounders, but this may not fully account for these or other factors (such as lifestyle habits) that may be involved in the relationship. 

Also, while the research used what can be expected to be a fairly reliable research database, there is the possibility that some of these health variables were inaccurately coded. In particular, there may be inaccurate assumptions around the use of statins: as statin use was based on the first recorded prescription and the duration of prescriptions, we do not know for certain whether people actually took the drugs as prescribed.

And as the study population was Taiwanese, the results cannot be generalised to other populations who may have socioeconomic, health and lifestyle differences and different dementia risk.

Overall, these results suggest a possible beneficial effect of statins in reducing risk of dementia, but the possible biological mechanism is not clarified.

It might be expected that statin use would be associated with the risk of vascular dementia in particular, since statin prescription and vascular dementia share common cardiovascular risk factors.

However, surprisingly, no specific association was found between statin use and vascular dementia. Statins were only found to reduce risk of dementia when vascular dementia was excluded. Also, no association was found specifically with Alzheimer’s disease, which is the most common type of dementia and which has no firmly established cause (age and genetics being the most associated risk factors).

So, overall, the possible association between statin use and dementia risk needs to be further studied and clarified.

Until then, statins are not licensed as a possible preventative treatment for dementia. Statins should only be used within their licensed indication for reduction of cholesterol in people considered to be at risk of cardiovascular disease.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Heart statins drugs could 'slash risk of dementia by a quarter'. Daily Express, April 14 2014

Links To Science

Chou C, Chou Y, Chou Y, et al. Statin use and incident dementia: A nationwide cohort study of Taiwan. International Journal of Cardiology. Published online March 31 2014

Categories: NHS Choices

New hepatitis C drug treatment 'shows promise'

NHS Choices - Behind the Headlines - Mon, 14/04/2014 - 01:00

"A new treatment for hepatitis C 'cured' 90% of patients with the infection in 12 weeks, scientists said," BBC News reports after a new drug protocol designed to target the protein that assists the spread of the virus through the body has shown promising results.

The study the BBC reports on involved 394 people with hepatitis C who had not responded to previous standard treatment, or who had responded but later relapsed.

They were randomised to either an active five-drug combination or a matching placebo for 12 weeks. The five drugs were ABT-450, ritonavir and ombitasvir, dasabuvir and ribavirin. At the end of the 12-week treatment period, the active treatment group stopped treatment, while all people in the placebo group switched to receiving 12 weeks' active treatment.

The people in the original active treatment group were assessed 12 weeks after they had stopped taking their treatment, at which time the majority of them (96%) demonstrated a response.

However, because of the unusual trial design, by this time there was no comparison group, as the placebo group had itself just completed the same 12-week course of treatment. In this sense, the research was essentially a cohort study reporting the outcomes of a group of people treated with a particular regimen.

Overall, the results suggest that this drug combination may be effective for people with the hepatitis C virus who have not responded to previous treatment. But whether this is more effective or more tolerable than other standard treatment options for such people remains to be proven. Side effects remain a big issue in terms of drug treatments for hepatitis C. 

 

Where did the story come from?

The study was carried out by researchers from Johann Wolfgang Goethe University and Hannover Medical School in Germany and other institutions in Europe, the US, Canada and Australia. It was funded by the pharmaceutical company, AbbVie.

It is unclear whether there were any conflicts of interest, as relevant information was not provided in the study.

The study was published in the peer-reviewed medical journal, the New England Journal of Medicine, on an open access basis, so the study is free to read online.

BBC News is perhaps a little premature in hailing this treatment a breakthrough considering the limitations of the study's design. A randomised controlled trial comparing this combination with standard treatment is needed first. There were also some inaccuracies in the BBC's reporting, as the participants in the study did not have liver cirrhosis, as reported.

 

What kind of research was this?

This was a randomised controlled trial that aimed to examine the effectiveness and safety of a combination of drugs compared with inactive placebo in people with hepatitis C infection. It is reported to be a phase 3 randomised controlled trial, though arguably the study design does not meet the standards of a phase 3 RCT as there is no comparison with another treatment.

The study involved patients who had previously been treated with the standard treatment option for hepatitis C (specifically, genotype hepatitis C 1, which is the most common type of the virus), but who had not got better with this treatment.

The combination of pegylated interferon and ribavirin is already licensed for the treatment of hepatitis C. Previous research has shown that up to 50% of people with hepatitis C respond to this combination (as demonstrated by the virus being no longer detected in the blood).

An additional two drugs (telaprevir and boceprevir) have also been recommended as treatment options for use in combination with peginterferon–ribavirin in people who have the type 1 hepatitis C virus. Response rates have been shown to increase to up to around three-quarters in people who receive first-line treatment with one of these triple therapy combinations.

However, response rates to triple therapy can be lower in people who have previously been treated with peginterferon–ribavirin. There are many reports of patients not responding, or responding but later relapsing.

The peginterferon–ribavirin combination and the newer drugs telaprevir and boceprevir are also associated with side effects such as anaemia. There is therefore still a need for new, more effective and better-tolerated drug treatments to be developed.

This phase 3 randomised controlled trial investigated the use of non-interferon-based combination treatment with the drugs ABT-450, ritonavir and ombitasvir (in one formulation), dasabuvir and ribavirin. This combination was compared with matching placebo for 12 weeks.

Earlier phase studies demonstrated that the majority of people with type 1 hepatitis C infection who had previously not responded to peginterferon–ribavirin did respond to this five-drug combination.

This trial therefore aimed to further investigate the safety and effectiveness of this treatment combination in people with genotype 1 hepatitis C who had previously not got better with peginterferon–ribavirin.

These drugs can also all be taken by mouth, while peginterferon has to be given by injection under the skin.

 

What did the research involve?

The researchers included adults with genotype 1 hepatitis C (virus RNA level more than 10,000 international units per millilitre) who did not have liver cirrhosis.

The participants had also not responded to previous dual combination treatment with peginterferon–ribavirin.

Non-response to previous treatment included those with:

  • initial response and later relapse (undetectable viral RNA at treatment end but detectable levels within one year)
  • partial response (viral RNA levels decrease by a certain amount at week 12 of treatment, but detectable again by treatment end)
  • no response

The researchers did not include people who had previously not responded to triple therapy, or who had HIV infection or a recent history of drug or alcohol abuse.

People were recruited across 76 sites in North America, Europe and Australia. They were randomised to receive either inactive placebos or the active drug combination for 12 weeks, which included:

  • the co-formulation of ABT-450/r–ombitasvir (a once-daily dose of 150mg of ABT-450, 100mg ritonavir, and 25mg of ombitasvir)
  • dasabuvir (250mg twice daily)
  • ribavirin (1000mg daily if body weight was less than 75kg, or 1200mg daily if body weight was 75kg or more – this weight-based rule is sketched below)
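
A minimal sketch of that weight-based ribavirin dosing rule, exactly as described in the list above (for illustration only, not prescribing guidance):

```python
def ribavirin_daily_dose_mg(body_weight_kg):
    """Weight-based ribavirin dose used in the trial."""
    return 1000 if body_weight_kg < 75 else 1200

print(ribavirin_daily_dose_mg(70))  # 1000
print(ribavirin_daily_dose_mg(82))  # 1200
```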

People in the placebo group received matching placebo pills for these three sets of tablets. The study was double blind, meaning that neither participants nor researchers knew which treatment was being given.

The main outcome examined was the rate of a sustained virologic response (SVR) 12 weeks after the end of the study treatment. This is a term used to describe when the person has undetectable levels of the viral RNA in their blood. SVR for hepatitis C is defined as having an RNA level of less than 25 international units per millilitre.
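
To make this outcome definition concrete, here is a minimal sketch of classifying SVR from a post-treatment RNA measurement and computing a response rate; the threshold comes from the study, and the final line simply reproduces the rate reported in the results below.

```python
# The study's definition: SVR means post-treatment HCV RNA below 25 IU/mL.
SVR_THRESHOLD_IU_PER_ML = 25

def has_svr(rna_iu_per_ml):
    return rna_iu_per_ml < SVR_THRESHOLD_IU_PER_ML

def svr_rate(rna_levels):
    """Proportion of people meeting the SVR threshold."""
    return sum(has_svr(level) for level in rna_levels) / len(rna_levels)

# The trial reports 286 of 297 treated people with SVR 12 weeks after
# treatment, a rate of about 96.3%:
print(f"{286 / 297:.1%}")
```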

Other outcomes examined included normalisation of liver enzyme levels, treatment response according to whether the genotype was 1a or 1b, and relapse after treatment.

Side effects of treatment were monitored throughout treatment and up to 30 days after the last drug dose.

All analyses were by intention to treat on the basis that all people who received at least one dose of the study drug were included in the analyses, regardless of whether they completed treatment.

Of note, the research describes that after the 12-week double-blind treatment period, people in the placebo group received the active treatment regimen on an open-label basis for 12 weeks.

As the outcomes were assessed 12 weeks after treatment end, this suggests that at the time of assessment people assigned to the placebo group had been receiving the active treatment for the past 12 weeks, while those assigned to the active treatment had completed 12 weeks of active treatment 12 weeks ago. A case could therefore be made that this was more of a cohort study than a textbook RCT.

 

What were the basic results?

Of 562 eligible people, 395 were randomised and 394 received at least one dose of their assigned treatment and were included in the analyses.

Twelve weeks after treatment was completed, 286 of 297 people in the active treatment group (96.3%) had a sustained virologic response. Looking by specific genotype, there was little difference in SVR rates between those with hepatitis C virus type 1a (96%) and 1b (96.7%).

According to previous response to peginterferon–ribavirin, SVR rates were 95.3% among those with initial response then relapse, 100% among those with previous partial response, and 95.2% among those with previous null response. Only 7 of 293 people (2.4%) who completed treatment had a post-treatment relapse.

SVR rates for those receiving placebo are not reported. However, at the time of outcome assessment, people in the placebo group had been receiving the active treatment for the past 12 weeks.

During the 12-week double-blind treatment period, side effects were reported by 91% of the active treatment group and 83% of the placebo group. Headache was the most common side effect in both groups, occurring in just over a third of people. Itching occurred significantly more often in the active treatment group (13.8% versus 5.2% in people taking placebo).

Three people in the active regimen group (1.0%) discontinued the study drugs because of side effects. Anaemia also occurred significantly more commonly in the active treatment group, with a decrease in haemoglobin below 10g per decilitre affecting about 5%.

 

How did the researchers interpret the results?

The researchers concluded that, "Rates of response to a 12-week interferon-free combination regimen were more than 95% among previously treated patients with HCV genotype 1 infection, including patients with a prior null response."

 

Conclusion

Although designed as an RCT, in its analysis of drug effectiveness the study was more like a single cohort of people receiving an active treatment, with no comparison arm.

People were assigned to the five-drug combination or matching placebos for 12 weeks. During this time, the side effects in both treatment groups were monitored and these could be compared, with itching and anaemia occurring more commonly in the active treatment group.

However, the double-blind drug treatment period was completed at 12 weeks and response outcomes were assessed 12 weeks after that. At this point, the active treatment group demonstrated high response rates, with a sustained virological response present in almost all (96%) of those who had been treated.

Problematically, however, there is no comparison group for these people. At the end of the 12-week double-blind treatment period, all people in the placebo group went on to receive 12 weeks' active treatment with the five-drug combination.

This means that at the time the outcomes were assessed in the active treatment group, the placebo group had also just completed 12 weeks of active treatment. The response rates for the placebo group are not reported.

Overall, the results suggest that the oral combination of ABT-450, ritonavir and ombitasvir (in one formulation) and dasabuvir and ribavirin may have potential in the treatment of hepatitis C.

However, the safety and effectiveness of this combination now need to be compared with other standard treatment options for this group of people – including repeat treatment with the peginterferon–ribavirin combination, and triple therapy with peginterferon–ribavirin and either telaprevir or boceprevir.

Only then will we know whether this five-drug combination may one day be licensed for this condition, and for which specific groups of people.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Hepatitis C: New drug treatment 'is a breakthrough'. BBC News, April 12 2014

Links To Science

Zeuzem S, Jacobson IM, Baykal T, et al. Retreatment of HCV with ABT-450/r–Ombitasvir and Dasabuvir with Ribavirin. The New England Journal of Medicine. Published online April 10 2014

Categories: NHS Choices

No way to reliably identify low-risk prostate cancer

NHS Choices - Behind the Headlines - Fri, 11/04/2014 - 12:30

“Men with prostate cancer being given 'false hope',” The Daily Telegraph reports.

UK researchers have examined the accuracy of different methods that have sometimes been used (mostly outside the UK) to identify “clinically insignificant” prostate cancers – those that would not be expected to affect a man during his lifetime (meaning he is likely to die of something else).

There has been considerable debate about overtreatment of such slower growing, low grade prostate cancers – not least because complications of treatment, such as erectile dysfunction, can be life-changing.

Monitoring a man (known as “active surveillance”) can often be the preferred course of action with a low risk cancer. However, prostate cancer remains a leading cause of cancer-related death in men, so doctors need to be sure that the cancer really is low risk before recommending this approach.

There have been a number of methods proposed for identifying clinically insignificant cancers. The researchers used these methods to see how reliable they would have been at accurately diagnosing the severity of cancer in a large series of men who had their prostate removed at one UK hospital.

None of the methods were accurate at predicting clinically insignificant prostate cancer. At best, they accurately identified only up to half of men with clinically insignificant prostate cancer, which, as the researchers pointed out, is akin to trying to predict the toss of a coin.

The research highlights the uncertainty that exists. And as the authors suggest, when doctors discuss active surveillance with patients, they should explain the uncertainty around predicting the stage or grade of the cancer.

 

Where did the story come from?

The study was carried out by researchers from the University of Cambridge and Cambridge University Hospital NHS Foundation Trust and funded by the National Institute for Health Research and the National Cancer Research ProMPT (Prostate Cancer: Mechanisms of Progression and Treatment) collaborative.

The study was published in the peer-reviewed medical journal the British Journal of Cancer.

Many of the media headlines have given quite a simplistic interpretation of what is in fact quite a complex study. The researchers examined the accuracy of different sets of criteria, proposed by different research groups, which have sometimes been used to predict clinically insignificant disease.

Most have been developed in countries like the US where prostate cancer screening is performed. However, screening for prostate cancer is not performed in the UK.

Some of the reporting implies that screening and diagnosing for prostate cancer is a black and white issue. In fact, it has long been recognised that predicting which cases of prostate cancer will turn out to be aggressive is an inexact science.

 

What kind of research was this?

This was a case series of 847 men who had their prostate gland removed because of prostate cancer at a single hospital in England between July 2007 and October 2011.

The researchers then examined these tumour specimens in the lab to see how accurate different methods or criteria would be at identifying which prostate cancers were “clinically insignificant”. A clinically insignificant prostate cancer is one that would not affect a man during his lifetime (meaning he is likely to die of something else) – a situation that is not uncommon.

The researchers explain how there has often been debate around the over-treatment of such slower growing, low grade prostate cancers. And monitoring men (active surveillance, also known as “watchful waiting”) would often be considered a more appropriate treatment option. Therefore, having a reliable method of identifying which are “clinically insignificant” prostate cancers is important.

Many attempts have been made to try and find a reliable method for identifying these cancers. Several different methods have been developed based on a combination of characteristics such as examination findings, prostate-specific antigen (PSA) levels (a protein produced by the prostate, raised levels of which can indicate a prostate problem), ultrasound examination and examination of biopsy specimens.

These have mainly been developed in countries where prostate cancer screening is currently performed, such as the US (prostate screening is not currently performed in the UK).

The researchers wanted to see how accurate these methods were for identifying clinically insignificant prostate cancer in a group of men whose prostate gland had been removed due to prostate cancer.

 

What did the research involve?

The research included 847 men who had their prostate removed at Addenbrooke’s Hospital, Cambridge, between July 2007 and October 2011.

Their PSA level was measured and their prostate specimen was examined in the laboratory, and their cancer was staged according to the standard TNM (Tumour, lymph Nodes, Metastases) staging system.

The Gleason score – another method of assessing the outlook of the cancer depending on what the cancer cells look like under a microscope – was also assessed. Under the Gleason system, cells are graded between 1 and 5, with 1 and 2 being normal-looking prostate cells, and 5 being the most abnormal-looking cancerous cells.

Sometimes within an examined prostate sample there can be more than one grade of cell, so a doctor may give two grades indicating which two types of cell are most and second-most common in the specimen. These two grades are added together to give the overall score – for example, “Gleason 3+3” means both the most common and second-most common cell types are grade 3, giving a score of 6.

Postoperative assessments, including physical examination and measurement of PSA levels, were carried out at six weeks and then three, nine and 12 months after the prostate was removed, and then every six months after that. “Biochemical recurrence” of the cancer was defined as a PSA level of greater than 0.2 nanograms per millilitre.

The researchers identified several different methods that have been used to identify clinically insignificant prostate cancer (described by different research groups and identified by the lead author of the study).

The accuracy of these different methods was compared against three different definitions of clinically insignificant disease:

  • The “classical definition”: organ-confined tumours of <0.5 cm3, Gleason 3+3 and no Gleason 4 or 5.
  • The “European Randomised Study of Screening for Prostate Cancer (ERSPC) definition”: organ-confined tumour, Gleason 3+3 with no Gleason grade 4 or 5, an index tumour volume of 1.3cm3 or less, and a total tumour volume of 2.5cm3 or less.
  • An “inclusive definition”: organ-confined tumour, Gleason 3+3 tumours, with no Gleason grade 4 or 5.

They also compared them with the accuracy of another method used to define low-risk disease (described by Anthony V. D’Amico and colleagues in 1998): PSA level of 10 or less, Gleason 3+3, and tumour stage 1 to 2a. As the researchers say, the criteria described by D’Amico and colleagues were not intended to identify which men would be suitable for active surveillance (those with clinically insignificant disease), but only to predict outcome following prostate removal.

However, in the UK the D’Amico method has previously been used as a way to predict likely outcome with different treatment approaches.

 

What were the basic results?

Of the 847 men, 415 (49%) had Gleason 3+3 disease indicated on their diagnostic biopsy. This indicated that the cells in their prostate were cancerous, but of the “least abnormal” kind possible.

However, after surgical removal of the prostate and laboratory examination, half of them (209) were found to have more advanced disease than previously thought, and were upgraded to Gleason grade 4 to 5.

A third of them (131) had cancer spread beyond the prostate, and one man had positive lymph nodes.

206 of the 415 men with Gleason 3+3 at biopsy (around a quarter of the full group) met the D’Amico criteria for “low risk” prostate cancer.
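As a quick sanity check, the fractions quoted in this section follow directly from the raw counts. The sketch below is illustrative only and is not taken from the paper itself:

```python
# Reproducing the headline proportions from the raw counts quoted above.
total_men = 847
gleason_3_plus_3 = 415     # Gleason 3+3 on diagnostic biopsy
upgraded = 209             # upgraded to Gleason grade 4 to 5 after surgery
spread_beyond_prostate = 131
damico_low_risk = 206      # met the D'Amico "low risk" criteria

print(f"{gleason_3_plus_3 / total_men:.0%}")               # 49% of the full group
print(f"{upgraded / gleason_3_plus_3:.0%}")                # 50% - "half of them"
print(f"{spread_beyond_prostate / gleason_3_plus_3:.0%}")  # 32% - "a third"
print(f"{damico_low_risk / total_men:.0%}")                # 24% - "a quarter"
```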

None of the methods for predicting clinically insignificant cancer that were assessed was considered to have adequate discriminative power in predicting clinically insignificant tumours.

The different methods correctly identified only up to half of those with clinically insignificant disease.

None of the methods had significantly improved accuracy over D’Amico’s low-risk criteria (which correctly identified between 4% and 47% of those with clinically insignificant cancer, depending on which of the three definitions of clinically insignificant disease was used).

 

How did the researchers interpret the results?

The researchers conclude that “In our unscreened population [the men in this study had not been identified through screening as prostate screening is not performed in the UK], tools designed to identify insignificant prostate cancer are inaccurate.”

 

Conclusion

This research has examined different methods that have sometimes been used to identify men with clinically insignificant prostate cancer that would not be expected to affect a man during his lifetime. The researchers explain how there has often been debate around the over-treatment of such slower growing, low grade prostate cancers, and monitoring the man (active surveillance) would often be considered a good treatment option.

There have been a number of methods proposed – most of these have been developed in countries where prostate cancer screening is carried out. The researchers found that in their series of 847 men, none of the various methods was accurate at predicting clinically insignificant disease: at best, they correctly identified only around half the men with clinically insignificant disease.

Methods that took a more inclusive view of what counts as potentially risky disease would identify very few men as having clinically insignificant disease, and so few would be eligible for watchful waiting. Meanwhile, methods with stricter criteria for what counts as higher-risk disease (for example, only counting larger tumours) could lead to a greater number of men being wrongly offered watchful waiting when they in fact need active treatment.

The research highlights the uncertainty doctors experience when trying to accurately identify which men diagnosed with prostate cancer (for example through a combination of PSA, physical examination, imaging and biopsy) have a cancer that is unlikely to affect them in their lifetime, and so are suitable for a watchful waiting approach only.

As the researchers suggest, when doctors discuss the watchful waiting approach with patients, they should explain the uncertainty around predicting the stage or grade of the cancer.

The researchers appropriately conclude, “there is an urgent need for development of a means by which to exclude aggressive prostate cancer in patients wishing to undergo conservative treatment”.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Men with prostate cancer 'falsely' told it is not aggressive. The Daily Telegraph, April 11 2014

Prostate cancer tests underestimate aggressiveness of disease, says study. The Guardian, April 11 2014

Prostate cancer tests miss severity in half of cases. BBC News, April 11 2014

Faulty tests give false hope to prostate cancer victims: Half of patients have more aggressive tumour than first diagnosed. Daily Mail, April 11 2014

Men with prostate cancer 'given false hope' by tests. ITV News, April 11 2014

Links To Science

Shaw GL, Thomas BC, Dawson SN, et al. Identification of pathologically insignificant prostate cancer is not accurate in unscreened men. British Journal of Cancer. Published online April 10 2014

Categories: NHS Choices

Lab-grown vaginas successfully implanted

NHS Choices - Behind the Headlines - Fri, 11/04/2014 - 01:00

"Doctors implant lab-grown vagina" is the headline on the BBC News website, reporting on the latest breakthrough in the increasingly exciting field of tissue engineering.

In this latest study, tissue engineering was used to develop a vagina for reconstructive surgery in four teenage girls who had the rare condition Mayer-Rokitansky-Küster-Hauser syndrome. In this condition, the vagina and uterus do not form properly while the female foetus is developing in the womb.

Various techniques have been used for vaginal reconstruction in the past, usually involving surgically creating a space where the vagina would normally be and lining this with graft tissue. However, there have been problems with the types of graft tissue used, including the muscle not functioning correctly.

In this new technique, tissue samples were taken from the girls' own vulvas and then grown in the laboratory into a 3D structure for reconstruction. Over the course of up to eight years' follow-up, the reconstructed vagina was demonstrated to have a similar structure to normal vaginal tissue and the women reported normal sexual function. There were no adverse effects or complications of surgery reported.

While the problem of Mayer-Rokitansky-Küster-Hauser syndrome may not be a major public health issue (though obviously extremely distressing for those affected by it), this small study does mark an important proof of concept.

The vagina consists of a complex structure of tissue. If a vagina can be reconstructed, it may be possible to reconstruct other complex structures and one day possibly even entire organs. 

 

Where did the story come from?

The study was carried out by researchers from the Tissue Engineering Laboratory, Children's Hospital of México Federico Gómez, Metropolitan Autonomous University, CINVESTAV-IPN in Mexico, and Wake Forest University School of Medicine in the US. Funding was provided by Wake Forest University and the Children's Hospital of México Federico Gómez.

It was published in the peer-reviewed medical journal, The Lancet.

The media reporting of the study was accurate and provided some useful background context. The companion piece in the same journal about reconstructing nostrils failed to gain the same publicity, only being mentioned in The Independent.

 

What kind of research was this?

This was a case series reporting on a new vaginal reconstructive technique used in four consecutive women with a condition called Mayer-Rokitansky-Küster-Hauser syndrome (MRKHS). In this condition, a female foetus fails to develop the vagina and uterus properly, and these are wholly or partly absent from birth. It is estimated to affect between 1 in 1,500 and 1 in 4,000 female babies.

The girls usually first present to doctors during the early teenage years, when they do not start their periods as expected. If the uterus has formed, there may also be monthly abdominal pain, or a lump that develops in the abdomen because the uterus is still shedding blood monthly but there is no drainage route.

The main treatments are usually surgical and many different techniques have been developed for vaginal reconstruction. These often involve surgically creating a space where the vagina would normally be and lining this with graft tissue.

Various different tissues have been tried for grafts, such as skin or abdominal tissue, though these types of grafts do not contain the normal constituents of vaginal tissue. This can cause problems such as decreased pleasure during sex and narrowing of the space (stenosis).

This study reports the experience of using tissue engineering techniques to create a vagina using girls' or women's own external genital tissue (vulval tissue) rather than donor tissue or tissue from elsewhere in the body.

 

What did the research involve?

The study involved four consecutive teenage girls (aged 13 to 18 years, average age 16) with no vaginas (vaginal aplasia) as a result of the condition MRKHS. They came to the researchers' hospitals between May 2005 and August 2008.

Three of the girls first came to their doctors because of not having periods and the fourth because of an abdominal lump (mass). One of the girls had already had a failed vaginal reconstruction using intestinal graft tissue.

The researchers took a detailed history from each of the girls, performed MRI scans, and took tissue samples (biopsies) from the vulva to obtain tissue for the graft. They separated the tissue's muscle layer from the epithelial layer (which lines body surfaces) for separate processing.

They then used tissue engineering techniques to develop the vaginal structure for reconstruction using a 3D "scaffold" specifically developed for each girl depending on the dimensions of the pelvic area on MRI and physical examination.

The girls underwent vaginal reconstruction five to six weeks after the initial biopsy samples had been taken. They had vaginal examinations and biopsies three, six and 12 months after surgery, then yearly after that.

All girls also received MRI monitoring and filled in the Female Sexual Function Index questionnaire, a validated self-report tool for assessing female sexual function.

 

What were the basic results?

All of the girls had the initial vulval biopsy and the vaginal reconstructive surgery without any immediate or postoperative complications. They were followed up for an average of 81 months (6.75 years).

The yearly biopsies showed that the transplanted vaginal tissue had a normal triple-layered structure consisting of an epithelial cell-lined vaginal canal surrounded by matrix and muscle. The MRIs and vaginal examination showed that the tissue-engineered vagina also appeared to be normal.

The two girls who had a partially developed uterus and had the vaginal tissue joined to their uterus went on to have periods.

The Female Sexual Function Index questionnaire showed that the girls reported in the "normal" range for all areas questioned: desire, arousal, lubrication, orgasm, satisfaction, and painless intercourse.

 

How did the researchers interpret the results?

The researchers concluded that implanted vaginal tissue engineered from the patients' own cells showed normal structure and function over up to eight years of follow-up.

They say that this technique could be useful in other patients requiring vaginal reconstruction.

 

Conclusion

This small case series reports the apparent success of using tissue engineering techniques to develop a vagina for reconstruction in four teenage girls who had an absent vagina from birth. All of these girls had the rare condition Mayer-Rokitansky-Küster-Hauser syndrome (MRKHS), where the vagina and uterus do not develop properly.

The technique used tissue samples biopsied from the girls' own vulva, which were then developed in the laboratory to make a 3D structure for reconstruction. It was hoped that by using this approach they might avoid some of the problems seen with the various types of graft tissue previously used, including abnormal muscle function.

Over up to eight years' follow-up, the reconstructed vaginas did seem to have a similar structure to normal vaginal tissue. The girls and women reported normal sexual function without unexpected adverse effects or complications.

This study only reports on a very small sample of four girls with this condition. Further use of this technique is needed to see if the same successful results are replicated.

With that limitation in mind, this study – as well as the related study about nostril reconstruction – suggests that tissue engineering is an avenue of research with a great deal of potential.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Doctors implant lab-grown vagina. BBC News, April 11 2014

Lab-grown vaginas prove long-term success for four women born without one. The Independent, April 11 2014

Hope for young women born without a vagina: Scientists successfully implant organs grown in the lab using patients' own cells. Mail Online, April 11 2014

Links To Science

Raya-Rivera AM, Esquiliano D, Fierro-Pastrana R, et al. Tissue-engineered autologous vaginal organs in patients: a pilot cohort study. The Lancet. Published online April 11 2014

Categories: NHS Choices

Effectiveness of Tamiflu and Relenza questioned

NHS Choices - Behind the Headlines - Thu, 10/04/2014 - 11:25

“Ministers blew £650 MILLION on useless anti-flu drugs,” the Daily Mail reports. The paper cites a large study, which investigated the effectiveness of the antiviral drugs Tamiflu (oseltamivir) and Relenza (zanamivir).

These drugs, called neuraminidase inhibitors, have been stockpiled in many countries, including the UK, to prevent and treat large influenza outbreaks.

The systematic review by the Cochrane Collaboration covered the benefits and harms of the drugs in both adults and children. It took into account new data that had previously been kept confidential by the drugs’ manufacturers: Roche (which manufactures Tamiflu) and GlaxoSmithKline (GSK) (which manufactures Relenza).

It found that both drugs shorten the duration of symptoms of influenza-like illness by about half a day in adults (but not in asthmatic children), compared to a placebo. There was no reliable evidence that either drug reduces the risk of people with flu being admitted to hospital or developing serious complications such as pneumonia, bronchitis, sinusitis or ear infection.

Used as a preventative measure, Tamiflu and Relenza slightly reduced the risk of developing the symptoms of flu. The review also found no evidence that these drugs can stop people carrying the influenza virus and spreading it to others.

The study also found that Tamiflu slightly increases the risk of adverse effects such as nausea, vomiting, psychiatric and kidney problems in adults, and vomiting in children.

This is an important, well-conducted review of a controversial topic. Most experts would agree that the modest benefits of Tamiflu and Relenza, as reported by the review, do not justify the increased risk of adverse effects, let alone the money spent on them by the UK.

 

Where did the story come from?

The study was carried out by researchers from the Cochrane Collaboration – an independent, international research network that produces rigorous systematic reviews on healthcare interventions. There was no external funding. The study was published in the peer-reviewed Cochrane Database of Systematic Reviews, which is an open access journal, meaning the study is free to read online.

The review was widely covered by the media, with many reports taking information straight from an accompanying press release. However, most papers also included comments from independent experts, the Department of Health and the two drug companies (GSK and Roche). 

 

What kind of research was this?

This was a systematic review that aimed to assess the potential benefits and harms of oseltamivir and zanamivir (known as neuraminidase inhibitors) in the prevention and treatment of influenza in healthy adults and children. 

The researchers explain that previous reviews of the drugs have been hampered by “unresolved discrepancies” in the data from published trials, as well as problems with publication bias.

Concerns about the effectiveness of Tamiflu had been raised before, because data suggesting it was not as effective as previously thought had not been released for external peer review and scrutiny.

They therefore did not use the data directly from journal articles, but went to unpublished documents from both regulatory bodies and the manufacturers.

The researchers point out that oseltamivir and zanamivir have been stockpiled in many countries to prevent and treat both seasonal and pandemic influenza, and are now used worldwide. In particular, the worldwide use of Tamiflu has increased dramatically since the outbreak of swine flu in April 2009. It was initially believed that the drug would reduce hospital admissions and complications of influenza, such as pneumonia, during influenza pandemics.

 

What did the research involve?

The researchers searched trial registries, electronic databases (up to July 22 2013) and regulatory archives, and corresponded with manufacturers to identify all relevant placebo-controlled randomised controlled trials (RCTs). They also requested the unpublished reports on which the trials were based.

They made sure there were no published RCTs from non-manufacturer sources by running electronic searches in the following databases: the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, MEDLINE (Ovid), EMBASE, Embase.com, PubMed (not MEDLINE), the Database of Reviews of Effects, the NHS Economic Evaluation Database and the Health Economic Evaluations Database. They found that all RCTs were sponsored by the manufacturers.

Once the data from clinical study reports was collected, they assessed the risk of bias in the published trials. They analysed the effects of zanamivir and oseltamivir on:

  • The duration of symptoms
  • Influenza outcomes
  • Complications
  • Admissions to hospital
  • Adverse events

 

What were the basic results?

The researchers obtained 107 study reports from drug regulatory bodies and drug manufacturers. They eventually used data from 46 trials – 20 on oseltamivir, with 9,623 participants; and 26 on zanamivir, with 14,628 participants. They identified problems with the design of many of the trials included, which they say affects confidence in the results.

Here are the main findings from the review:

Reduction in the duration of symptoms
  • In adults, oseltamivir reduced the time it took to first alleviate symptoms by 16.8 hours (95% confidence interval [CI] 8.4 to 25.1 hours). This represents a reduction in the duration of symptoms from 7.0 to 6.3 days.
  • There was no effect in asthmatic children – but in otherwise healthy children, there was an average reduction in the time it took to first alleviate symptoms of 29 hours (95% CI 12 to 47 hours).
  • In adults, zanamivir reduced the time until the first alleviation of symptoms by 0.60 days (95% CI 0.39 to 0.81). This represents a reduction in the average duration of symptoms from 6.6 to 6.0 days. The effect in children was not statistically significant.
Admission to hospital
  • In both adults and children, treatment with oseltamivir had no significant effect on whether they were admitted to hospital (risk difference [RD] 0.15%, 95% CI -0.78 to 0.91).
  • Data on hospital admissions were not reported for zanamivir.
Serious influenza complications
  • In both adults and children treated with oseltamivir, the drug did not significantly reduce serious complications or those that led to study withdrawal (RD 0.07%, 95% CI -0.78 to 0.44).
  • In adults either treated with zanamivir or taking it for prevention, the drug did not reduce complications. 

There was insufficient evidence to say whether oseltamivir used for prevention or zanamivir used for treatment reduced complications in children.

Pneumonia

The evidence on the effects of either drug used in treatment or prevention of pneumonia risk was deemed unreliable.

Bronchitis, sinusitis and middle ear infection

In adults treated with zanamivir, the drug significantly reduced the risk of bronchitis (RD 1.80%, 95% CI 0.65 to 2.80), but oseltamivir did not. Neither drug significantly reduced the risk of middle ear infection or sinusitis in either adults or children.

Harms of treatment
  • Adults treated with oseltamivir had an increased risk of nausea (RD 3.66%, 95% CI 0.90 to 7.39); and vomiting (RD 4.56%, 95% CI 2.39 to 7.58).
  • Adults treated with oseltamivir had significantly lower increases in the numbers of antibodies (needed by the body to fight infection) compared to the control group (RR 0.92, 95% CI 0.86 to 0.97)
  • Oseltamivir significantly decreased the risk of diarrhoea (RD 2.33%, 95% CI 0.14 to 3.81); and cardiac events (RD 0.68%, 95% CI 0.04 to 1.0), compared to a placebo
  • Two treatment trials with oseltamivir showed a “dose response” effect on psychiatric events (such as feelings of nervousness or aggression)
  • Children treated with oseltamivir had a higher risk of vomiting (RD 5.34%, 95% CI 1.75 to 10.29). Children on oseltamivir also had a lower increase in the number of antibodies (RR 0.90, 95% CI 0.80 to 1.00).
Prevention
  • Used for prevention, oseltamivir and zanamivir reduced the risk of flu symptoms in individuals (oseltamivir: RD 3.05%, 95% CI 1.83 to 3.88; zanamivir: RD 1.98%, 95% CI 0.98 to 2.54) and in households (oseltamivir: RD 13.6%, 95% CI 9.52 to 15.47; zanamivir: RD 14.84%, 95% CI 12.18 to 16.55)
  • Oseltamivir increased the risk of psychiatric adverse events (RD 1.06%, 95% CI 0.07 to 2.76), headaches (RD 3.15%, 95% CI 0.88 to 5.78), kidney problems (RD 0.67%, 95% CI -2.93 to 0.01) and nausea (RD 4.15%, 95% CI 0.86 to 9.51)
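The risk differences (RDs) quoted above are absolute differences, which are easier to grasp when converted into a “number needed to treat” – the number of people who must take the drug for one person to benefit (or, for harms, to be affected). The sketch below is back-of-envelope arithmetic using the headline figures quoted in this article; it is not taken from the review itself:

```python
# Illustrative arithmetic only: converting an absolute risk difference
# (RD, in percent) into a "number needed to treat" (NNT), and checking
# the symptom-duration figures quoted above.

def number_needed_to_treat(rd_percent: float) -> float:
    """One person benefits (or is harmed) for every NNT people treated."""
    return 1 / (rd_percent / 100)

# Prevention of flu symptoms in individuals:
print(round(number_needed_to_treat(3.05)))  # oseltamivir: ~33 people
print(round(number_needed_to_treat(1.98)))  # zanamivir: ~51 people

# Harms: nausea in adults treated with oseltamivir:
print(round(number_needed_to_treat(3.66)))  # ~27 people per extra case

# Symptom duration: a 16.8-hour reduction is 0.7 days, consistent
# with the quoted fall from 7.0 to 6.3 days.
print(round(7.0 - 16.8 / 24, 1))  # 6.3
```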

 

How did the researchers interpret the results?

The researchers say that on the basis of their findings, clinicians and healthcare policy-makers should “urgently revise current recommendations for use of the neuraminidase inhibitors (NIs) for individuals with influenza”. They say “it is unclear whether this is superior to treatment with commonly used antipyretic medications [paracetamol]”. They go on to say they “did not find any credible evidence that either oseltamivir or zanamivir reduce the risk of complications of influenza, particularly pneumonia, nor reduce risk of hospitalisation or death. Moreover, even in individuals with a higher risk of complications, such as children with asthma or the elderly, we found no evidence of a beneficial effect for reducing risks of complications”.

It is of “some concern” they say that oseltamivir is now recommended as an essential medicine for the treatment of seriously ill patients or those in higher-risk groups with influenza.

In an accompanying press release, Dr Tom Jefferson, Dr Carl Heneghan and Dr Peter Doshi, authors of the review, said: “Drug approval and use cannot be based on biased or missing information any longer. We risk too much in our population’s health and economy. This updated Cochrane review is the first time a Cochrane systematic review has been based only on clinical study reports and regulator’s comments. It is the first example of open science in medicine using full clinical study reports available without conditions. Therefore, the conclusions are that much richer. We urge people not to trust in published trials alone or on comment from conflicted health decision makers, but to view the information for themselves.”

 

Conclusion

This major review is particularly significant for its use of unpublished, previously confidential data from both the drug manufacturers and regulators, to verify the information in published trials. As the researchers point out, much of the trial data is unreliable for various reasons, which makes it difficult to draw firm conclusions.

While it appears that these drugs have a modest benefit, there is no solid evidence that either drug can protect people from the more serious complications of influenza.

Paracetamol or ibuprofen would seem to be a far more cost-effective method of relieving the symptoms of influenza.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Tamiflu: Millions wasted on flu drug, claims major report. BBC News, April 10 2014

Scientists say UK wasted £560m on flu drugs that are not proven. The Guardian, April 10 2014

Ministers blew £650 MILLION on useless anti-flu drugs: Cash spent on stockpiling treatments that 'worked no better than paracetamol'. Mail Online, April 10 2014

The drugs don't work: Britain wasted £600m of taxpayers' money on useless flu pills stockpiled by Government in case of pandemic. The Independent, April 10 2014

Tamiflu: drugs given for swine flu 'were waste of £500m'. The Daily Telegraph, April 10 2014

Half A Billion Pounds 'Wasted' On Anti-Flu Drugs. Sky News, April 10 2014

Links To Science

Jefferson T, Jones MA, Doshi P, et al. Neuraminidase inhibitors for preventing and treating influenza in healthy adults and children. Cochrane Database of Systematic Reviews. Published online April 10 2014

Categories: NHS Choices

Removing copper from body could slow cancer

NHS Choices - Behind the Headlines - Thu, 10/04/2014 - 11:25

"Copper can block growth of rare cancer," is the rather unclear headline in The Daily Telegraph. Researchers have found that a drug that reduces the amount of copper in the body may also be able to lessen the growth of some kinds of tumours.

These tumours – such as melanoma – have a mutation in the BRAF gene. BRAF helps create a protein that's vital for a biochemical pathway necessary for cell growth. Some cancers have a mutation in this gene, which means that the growth is unchecked and leads to a rapid spread of cancerous cells.

The researchers previously found that copper plays a role in the activation of this cell growth pathway. Previous trials of drugs that target this pathway have shown improved survival rates for people with metastatic melanoma.

The researchers wanted to see if reducing copper levels could target the pathway in a similar manner. Using a range of experiments, they found that reducing the level of copper available to tumour cells decreased the growth of BRAF-mutated human cancer cells in the laboratory and BRAF-mutated tumours in mice.

They found that a drug used in humans as a treatment for Wilson's disease (a genetic disorder that results in a build-up of copper in the body) also had this effect. The researchers suggested that these drugs could be "repurposed" to treat BRAF-mutated human cancer.

The fact that these drugs are already used in humans could mean that they can be tested for their effects in cancer more quickly than a completely new drug. However, these trials are still needed before we know whether these drugs could provide a new approach to treating certain cancers.

 

Where did the story come from?

The study was carried out by researchers from Duke University Medical Center and the University of North Carolina in the US, and the University of Oxford in the UK.

It was funded by the National Institutes of Health, the Structural Genomics Consortium, an FP7 grant, the Halley Stewart Trust, the Edward Spiegel Fund of the Lymphoma Foundation, and donations made in memory of Linda Woolfenden.

The study was published in the peer-reviewed medical journal, Nature.

In general, the media covered the story accurately, but the Mail Online reported that, "High levels of copper could mean an increase in deadly cancers", which is not what the study assessed or found.

 

What kind of research was this?

This was a laboratory study using mice and human tumour cells in the laboratory. The researchers previously found that copper plays a role in the activation of a particular cell growth pathway, which can lead to tumour formation if a gene called BRAF is mutated.

BRAF encodes a protein that activates a biochemical pathway of proteins necessary for cell growth. A particular mutation in this gene called BRAF-V600E has been found in some of the cells of cancers such as melanoma, colorectal cancer, thyroid cancer, and non-small-cell lung cancer (the most common type of lung cancer).

Drugs have been developed that inhibit BRAF-V600E or other proteins in this pathway, and trials are reported to have shown prolonged survival in people with metastatic (advanced) melanoma. However, tumours can become resistant to these drugs and researchers want to develop other ways of treating them.

The researchers aimed to see if restricting copper in tumours with BRAF mutations would reduce tumour cell growth in the lab and in mice, and improve the lifespan of the mice with these BRAF-mutated tumours.

 

What did the research involve?

The researchers used different approaches to reduce copper availability for tumour cells and tumours in the laboratory setting.

This included using mice genetically engineered to carry a mutation in genes, including BRAF, which can be triggered to result in lung cancers. They looked at what happened if these mice were also genetically engineered to lack a protein that transports copper into the cells.

Experiments were also performed in the laboratory and in mice using drugs that reduce copper levels in humans to see if these would reduce tumour growth.

Drugs that bind to copper, making it less available to be taken into cells, are already available to treat a condition called Wilson's disease, where people have too much copper in their bodies.

The researchers investigated the effect of one of these drugs on BRAF-mutated tumour cell growth in the laboratory and then in mice.

They also looked at the effect of stopping copper in the diet of the mice.

 

What were the basic results?

The researchers found that if the mice genetically engineered to develop BRAF-mutated lung tumours also carried mutations that reduced the ability to transport copper into their cells, this reduced the number of visible lung tumours. These mice also survived for 15% longer than mice with normal copper levels in their cells.

One of the copper-binding drugs was also able to reduce the growth of human BRAF-V600E melanoma cells in the laboratory. When mice were given the copper-binding drug, their BRAF-mutated tumours reduced in size.

Combining this with a copper-deficient diet improved the ability to reduce tumour growth, but the copper-deficient diet on its own did not have a significant effect.

The copper-binding drugs still worked, even when the tumours were resistant to BRAF-V600E inhibitors. The tumours started growing again when the treatment and diet were stopped.

 

How did the researchers interpret the results?

The researchers concluded that reducing copper availability in BRAF-mutated tumour cells decreases their ability to grow. They say that the copper-binding drugs, "which are generally safe and economical drugs that have been given daily for decades to manage [copper] levels in patients with Wilson disease", also decreased BRAF-mutated tumour growth in their experiments.

This suggests that these drugs warrant further assessment as potential treatments for BRAF mutation-positive cancers and cancers that have developed resistance to BRAF-V600E inhibitors.

 

Conclusion

This research has suggested that drugs already available that are designed to reduce the amount of copper in the body may be able to reduce the growth of tumours that have a mutation in the BRAF gene, such as melanoma.

The drugs reduced growth of BRAF-mutated tumours in mice and human cancer cells in the laboratory setting. Human clinical trials will be necessary to be certain that the drugs will have a beneficial effect in people with BRAF-mutated tumours before they could become widely used treatments for these types of cancers.

Although these drugs have already been shown to be a safe and effective treatment for Wilson's disease, the aim of that treatment is to get abnormally high copper levels down to a normal level.

Using the drugs as a cancer treatment could reduce the copper levels to below normal, and this could have side effects.

Copper deficiency causes blood abnormalities such as anaemia and an increased vulnerability to infection, as well as neurological problems such as nerve damage, so an appropriate dose and duration of treatment would need to be determined.

If human trials are successful, these drugs could provide a useful additional treatment option for hard-to-treat cancers such as melanoma.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Copper can block growth of rare cancer. The Daily Telegraph, April 10 2014

Cancer could be 'starved' by taking pills that remove copper from the body, say scientists. Mail Online, April 10 2014

Links To Science

Brady DC, Crowe MS, Turski ML, et al. Copper is required for oncogenic BRAF signalling and tumorigenesis. Nature. Published online April 9 2014

Categories: NHS Choices

Home HIV testing kits now legal in UK

NHS Choices - Behind the Headlines - Thu, 10/04/2014 - 11:00

“Kits allowing people to test themselves for HIV at home can be bought over the counter in the UK for the first time,” BBC News recently reported.

The UK government has amended the law so that “do it yourself” home testing kits for HIV can now legally be sold over the counter.

 

Can I go out and buy a test?

No – at least not yet. No companies have applied for a licence to sell self-testing kits within the European Union, though this is understandable, as self-testing kits were previously illegal.

The charity the National Aids Trust predicts that self-testing kits will become available by late 2014 or early 2015.

 

Why has the law been changed?

The government hopes that the change in law will encourage more people to get tested for HIV. It is thought that as many as one in five people with HIV do not realise they are infected.

Aside from the risk of spreading the virus to others, people whose diagnosis of HIV is delayed tend to have a worse outcome. If treatment begins soon after the infection occurs then there is a much better chance of preventing complications of HIV. With prompt treatment a person with HIV can expect to have a normal life expectancy.

 

How will the tests work?

It is likely that any commercially available test will be based on the same principle as the OraQuick In-Home HIV Test, which was approved by the US Food and Drug Administration (FDA).

This test works by checking for antibodies to HIV – an immune response that occurs if a person is infected. The test involves taking a swab of fluid from the upper and lower gums.

The swab is then placed into a supplied tube, and after 20-40 minutes either one or two lines should appear. One line means the test is negative; two means the test may be positive. In the event of a positive test, follow-up testing at a sexual health clinic or similar service is recommended.

 

How reliable are the tests?

At the moment, reliable data is only available on the OraQuick In-Home HIV Test, which was provided to the FDA as part of the manufacturer’s licensing application.

The data shows that in a clinical trial, out of 480 people with HIV, 470 were correctly identified by the home testing kit as having the infection.

This gives the home testing kit a sensitivity of 97.9% – in other words, a false negative rate of around 2% – so out of every 100 people with HIV who are tested, around two will wrongly be given the all-clear.

In the same trial, of 473 people known not to be infected with HIV, just one was wrongly diagnosed as having the infection. This gives the home testing kit a specificity of 99.79% – a false positive rate of around 0.2% – so roughly one in every 500 people without HIV who are tested would be wrongly diagnosed as having it.
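For illustration, these accuracy figures can be reproduced from the trial counts quoted above. This is a minimal sketch using only the numbers given in this article:

```python
# Deriving sensitivity and specificity from the trial counts quoted above.
true_positives = 470         # people with HIV correctly identified
false_negatives = 480 - 470  # people with HIV wrongly given the all-clear
false_positives = 1          # people without HIV wrongly diagnosed
true_negatives = 473 - 1     # people without HIV correctly identified

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.1%}")              # 97.9%
print(f"False negative rate: {1 - sensitivity:.1%}")  # 2.1%
print(f"Specificity: {specificity:.2%}")              # 99.79%
print(f"False positive rate: {1 - specificity:.2%}")  # 0.21%
```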

 

Can I get an HIV test anywhere else?

There are various places to go for an HIV blood test, such as a sexual health clinic (also known as a genitourinary medicine, or GUM, clinic) or your GP surgery.

Home sampling kits are also available, which allow you to take a saliva sample or blood spot and send them off to a laboratory for testing.

These are available online and from some pharmacies, but you will generally have to pay for them.

 

How has the news been received?

Most of the reaction to the government’s change in policy has been positive. A spokesperson for the Royal Pharmaceutical Society said: “HIV self-testing kits may help increase diagnosis by providing more choice for people who have been at risk but are reluctant to get a test in person from existing services.”

Certain sections of the media have raised concerns that home testing could lead to misdiagnosis in people who lack the proper training to interpret the results of the tests.

Similar concerns were raised when home pregnancy testing kits were introduced, and they are now a commonly accepted form of testing.

 

What if I test positive for HIV using a self-test kit?

It is important that any positive test result is confirmed by a health professional, not least because if you are HIV positive you will need advice on treatment options.

And a negative test should not be taken as a licence to take sexual risks or inject illegal drugs.

The most effective way to reduce your risk of HIV is to always use a condom during sex and never share needles if you are an injecting drug user. Read more about HIV prevention.


Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

UK law passes sales of HIV home tests before they exist. BBC News, April 6 2014

Sale of HIV home tests legalised in the UK: Change in the law means the kits could soon appear in shops. Mail Online, April 4 2014

Categories: NHS Choices

Does paying drug users boost hep B jab uptake?

NHS Choices - Behind the Headlines - Wed, 09/04/2014 - 11:30

"Heroin addicts are being 'bribed' with £30 in shopping vouchers for agreeing to undergo vaccinations," The Daily Telegraph reports, while the Daily Mail said addicts were to get a "£10 supermarket 'bribe' to stay clear of drugs and £30 to have a hepatitis B jab".

Some of the reporting makes it sound as if drug users will soon be awash with NHS cash. In fact, there are two studies being covered here – one of which is completed and published, and a second that is ongoing.

The published study involved just over 200 injecting drug users being treated for heroin addiction. The researchers wanted to see if a small cash incentive of supermarket vouchers given in instalments up to £30 would increase the likelihood of drug users completing a course of vaccinations against hepatitis B.

Hepatitis B is a blood-borne virus that can be caught from sharing needles, as well as from unprotected sex. It is estimated that one in five injecting drug users has the disease.

The offer of vouchers led to dramatic results in some, but not all, participants. Drug users given the supermarket vouchers were at least 12 times more likely to complete a course of hepatitis B injections than those who were not given vouchers. However, half of the people offered an incentive did not complete the course.

While people may baulk at the idea of "bribing" drug users to get vaccinated, the proposal could save the NHS a great deal of money in the long run. A report from the Foundation for Liver Research (PDF, 734kb) estimated that treating hepatitis B costs the NHS up to £375 million a year.

 

Where did the story come from?

The study was carried out by researchers from Imperial College London, King's College London, University College London, South London and Maudsley NHS Foundation Trust, Camden and Islington NHS Foundation Trust, Central and North West London NHS Foundation Trust, and Sussex NHS Foundation Trust. It was funded by the National Institute for Health Research.

The study was published in the peer-reviewed medical journal, The Lancet. It has been published on an open access basis and is free to read online.

Coverage of both studies was somewhat shrill, with The Independent referring to a "controversial trial" offering cash to drug addicts, while the Daily Mail and The Daily Telegraph referred to a voucher "bribe". The Mail also misleadingly implied that the £30 voucher is currently offered to those who had a hepatitis B jab, but this was only as part of a study.

There is room for debate over how NHS resources are best used. But this debate should also include an estimate of the savings that might be made in terms of future care and treatment if offering financial incentives means fewer drug or ex-drug users get hepatitis B.

The societal cost of heroin addiction is also estimated to be large. Aside from the actual cost of treating addicts, there are also additional costs associated with the crimes many addicts engage in to pay for their habit. If a £10 shopping voucher stops your house being burgled, you may consider that a price worth paying.

 

What kind of research was this?

This research was a cluster randomised trial involving 210 people receiving treatment for drug addiction. It aimed to find out if giving small financial incentives could improve the number of people that completed a vaccination schedule against hepatitis B. A cluster randomised trial is where groups of people (as opposed to individuals) are randomised.

The researchers point out that in general, poor adherence to treatment is a widespread problem that reduces the individual and public benefit from numerous health interventions. This problem is especially acute among heroin users, who often lead chaotic lives.

They say that interest in a strategy known as contingency management, which involves the use of material or financial incentives to promote adherence to treatment, is gaining ground. The National Institute for Health and Care Excellence (NICE) supports its use in some circumstances.

The researchers also point out that injecting drug users are a major risk group for infection and transmission of hepatitis B, with 22% of this group being affected. Clinical guidance recommends that routine hepatitis B vaccination is offered to people receiving addiction treatment, but uptake of the vaccine needs to be improved.

They aimed to assess the effectiveness of cash incentives in promoting the completion of hepatitis B vaccination among those receiving treatment for addiction, compared with the offer of vaccination without such an incentive.

 

What did the research involve?

Participants for the trial were all undergoing opioid substitution therapy for drug addiction at 12 NHS drug treatment clinics in the UK. All the clinics offered an accelerated hepatitis B vaccination schedule as advised in clinical guidelines, providing three injections on days 0, 7 and 21 of treatment.

Adults aged 18-65 years were eligible if they had previous, current or future risk of injecting drug use and were eligible for hepatitis B vaccination (they had not previously received a vaccination or had hepatitis B infection).

The clinics the drug users attended were randomly allocated to provide three different approaches:

  • hepatitis B vaccination without any cash incentive (treatment-as-usual group)
  • hepatitis B vaccination with a "fixed-value" incentive – participants received up to £30, provided as a £10 voucher at each of the three vaccinations
  • hepatitis B vaccination with an incentive that increased in value – where participants received up to £30 in vouchers, provided as a £5 voucher at the first vaccination visit, a £10 voucher at the second vaccination visit, and a £15 voucher at the third vaccination visit

In the second and third groups, eligibility to receive a voucher was conditional on attendance at the appointment on time and compliance with the vaccination schedule.

Patients completed a research interview before enrolment into the trial, which assessed their socioeconomic background, drug and alcohol use, drug treatment history, and health.

Patients were given a first vaccination appointment (day 0) at least 24 hours after enrolment. Attendance at the three hepatitis B vaccination appointments was recorded for up to three months.

Researchers primarily looked at completion of hepatitis B vaccination within 28 days of the first vaccination day.

Patients were defined as completers if they attended all scheduled vaccination appointments, or attended but were not vaccinated because they were found to have existing immunity.

They also recorded incidence of any serious adverse events, assessing if these were related to vaccination.

Standard statistical methods were used to analyse the results.

 

What were the basic results?

These are the main findings:

  • 9% of participants treated as usual (with no incentives) completed the vaccination schedule
  • 45% of participants who received three £10 vouchers completed the vaccination schedule (odds ratio [OR] 12.1, 95% confidence interval [CI] 3.7-39.9)
  • 49% of participants who received vouchers of increasing value completed the vaccination schedule (OR 14.0, 95% CI 4.2-46.2)
  • no serious adverse events related to treatment occurred
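To see how an odds ratio relates to the completion percentages above, a crude (unadjusted) calculation is sketched below. This is illustrative only: the odds ratios reported by the trial (12.1 and 14.0) come from an analysis that accounts for the cluster design and other factors, so they are larger than this naive figure.

```python
# Crude, unadjusted odds ratio from the completion percentages above.
# The trial's reported ORs (12.1 and 14.0) are adjusted and cluster-aware,
# so they differ from this simple calculation.

def odds(p: float) -> float:
    """Convert a proportion into odds."""
    return p / (1 - p)

treatment_as_usual = 0.09   # 9% completed with no incentive
fixed_value_voucher = 0.45  # 45% completed with three £10 vouchers

crude_or = odds(fixed_value_voucher) / odds(treatment_as_usual)
print(f"Crude odds ratio: {crude_or:.1f}")  # ~8.3
```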

 

How did the researchers interpret the results?

The researchers say their findings are in accord with previous studies. They say their results provide compelling evidence that financial incentives significantly improve completion of the three-injection vaccination schedule, so that approximately half of patients complete vaccinations as scheduled.

Further work is now required to improve on this method and thereby increase the number of drug users who complete the vaccination schedules.

"That monetary incentives increase compliance is unremarkable, but the size of the increase we observed was striking," says Professor John Strang from the National Addiction Centre at King's College London, who led the study.

Strang goes on to say that, "Injecting drug users are at high risk of infection and transmission of hepatitis B. This is a potentially life-saving vaccine, and increasing its uptake among this group has important benefits to public health, as well as to the individual."

 

Conclusion

This was a well-designed study, which found that a small cash incentive offered to drug users receiving treatment increased the likelihood that they would complete a course of injections to protect them against hepatitis B.

However, it should be noted that about half the patients receiving cash incentives did not complete the vaccinations. As the authors point out, more research is needed to improve on the method.

It should also be noted that this trial involved patients already receiving treatment for drug addiction. These patients are more likely to be motivated to receive vaccinations than injecting drug users who have not gone into treatment.

A further limitation to the strength of the results is the wide confidence intervals, which may be the result of the relatively small sample size.

Bearing these limitations in mind, the study did lead to a significant increase in the number of people completing the hepatitis B vaccine course. The use of shopping vouchers as a "bribe" may actually deliver considerable savings to the NHS in the long term.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Free shopping vouchers for drug addicts in NHS scheme. The Daily Telegraph, April 9 2014

Controversial trial at NHS clinics to offer cash for drug addicts who stay clean. The Independent, April 9 2014

Cash to stay clean: £10 supermarket voucher 'bribe' to help heroin addicts stay clear of drugs - and £30 to have a hepatitis jab. Daily Mail, April 9 2014

Links To Science

Weaver T, Metrebian N, Hellier J, et al. Use of contingency management incentives to improve completion of hepatitis B vaccination in people undergoing treatment for heroin dependence: a cluster randomised trial. The Lancet. Published online April 9 2014

Categories: NHS Choices

Painkiller use linked to irregular heartbeat

NHS Choices - Behind the Headlines - Wed, 09/04/2014 - 01:00

“Painkillers used by millions of Britons have been linked to higher risk of an irregular heartbeat that could trigger a stroke,” the Mail Online reports.

This headline follows the publication of a long-term study that aimed to find out whether older adults developed atrial fibrillation. The researchers looked at whether adults who had developed the condition had used non-steroidal anti-inflammatory drugs (NSAIDs) recently, previously or not at all.

NSAIDs are a type of painkiller and have been associated with a higher risk of atrial fibrillation – a condition that causes an irregular and often abnormally fast heart rate. Complications of atrial fibrillation include stroke and heart failure.

Out of the 8,423 participants, 857 people developed atrial fibrillation. Those who had been taking NSAIDs for between 15 and 30 days had a 76% increased risk of atrial fibrillation compared with those who had never used NSAIDs, while those who had stopped taking them within the previous 30 days had an 84% increased risk. However, these results were based on just 64 people.

Current use of NSAIDs for less than 14 days or more than 30 days, or past use more than 30 days ago, was not linked to an increased risk of atrial fibrillation.
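To put these relative figures in context: across the whole cohort, 857 of 8,423 people (around 10%) developed atrial fibrillation. The sketch below is purely illustrative arithmetic, not the study's own analysis, and makes the simplifying assumption that the quoted 76% relative increase can be applied directly to that overall incidence:

```python
# Illustrative only: relative versus absolute risk. Assumes, as a
# simplification, that the quoted 76% relative increase applies to the
# cohort's overall incidence of atrial fibrillation (AF).
baseline_incidence = 857 / 8423  # ~10.2% developed AF overall
relative_increase = 0.76         # 76% increased risk

implied_incidence = baseline_incidence * (1 + relative_increase)
print(f"Baseline incidence: {baseline_incidence:.1%}")  # 10.2%
print(f"Implied incidence: {implied_incidence:.1%}")    # 17.9%
```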

Although this study was conducted over a long period of time, assessing a person’s current or recent use of NSAIDs at the time they were diagnosed cannot prove that NSAIDs caused atrial fibrillation.

Other factors may also have influenced the results, including whether the patients were prescribed the NSAIDs for pain following surgery.

You should not stop taking prescribed medication, but if you have any concerns, talk to your pharmacist or GP.

 

Where did the story come from?

The study was carried out by researchers from Erasmus Medical Center (Rotterdam), the Netherlands Consortium for Healthy Ageing and the Inspectorate of Health Care (The Hague). It was funded by a variety of Dutch government and charitable sources, in addition to money from the European Commission. Nestle Nutrition (Nestec Ltd), Metagenics Inc and AXA also funded the research, but they were not involved in designing, analysing or writing the study.

The study was published in the peer-reviewed medical journal the BMJ Open. As the name suggests, this is an open-access journal, meaning the study is free to read online.

The media generally reported the study accurately, but none explained its limitations and the very small numbers that the significant results were based on.

 

What kind of research was this?

This was a prospective cohort study of the general older population in Rotterdam, in the Netherlands.

The researchers aimed to see if there was a link between the use of non-steroidal anti-inflammatory drugs (NSAIDs) and developing atrial fibrillation.

Previous research has shown a link between NSAID usage and an increased risk of atrial fibrillation, but these were retrospective case-control studies with a limited ability to account for confounding factors.

Although this was a prospective cohort study that followed people over a period of time, the assessments within it were predominantly cross-sectional. This means that it assessed people at several follow-up points during the study and looked at whether the person had a current or past prescription of NSAIDs at the time atrial fibrillation was diagnosed.

Despite the researchers adjusting their analyses for other medical and lifestyle factors that may have influenced the results (confounders), this type of study cannot prove that current or recent use of NSAIDs caused atrial fibrillation.

A randomised controlled trial would be ideal, though it would probably be both unethical and unfeasible. Such a trial would require giving a very large number of people regular NSAIDs and following them up for a prolonged period of time purely to see if they developed atrial fibrillation.

A better method may have been to assess the use of NSAIDs in a group of people without the condition, then follow them up over time to see if they developed atrial fibrillation, to better separate exposure and outcome.

 

What did the research involve?

Researchers followed a group of older adults who did not have atrial fibrillation at the study’s start, recording during follow-up whether they developed atrial fibrillation and whether they were taking NSAIDs around that time. The analyses took into account factors such as age, sex and BMI, and looked for links between atrial fibrillation and NSAID usage.

The study included 8,423 older adults (average age 68.5 years) from Rotterdam, who did not have atrial fibrillation. The majority of participants were recruited between 1990 and 1993, and were followed up on three occasions (1993-1995, 1997-1999 and 2002-2004). A second, smaller group was recruited in 2000-2001 and followed up once, in 2004-2005. Participants were followed until they had a diagnosis of atrial fibrillation, died, were lost to follow-up, or until the end of the study period in January 2009.

At the beginning of the study, and at each follow-up point, the presence of atrial fibrillation was assessed by taking a heart tracing (an electrocardiogram, or ECG), which was examined by a doctor, and by reviewing medical records from GPs and hospital specialists.

At the beginning of the study, a range of cardiovascular risk factors was also recorded.

During follow-up, the researchers recorded the date when people first had any symptoms of atrial fibrillation that were later confirmed by ECG.

NSAID usage was calculated from automated records of prescriptions filled at participating pharmacies, assuming that the medication was taken in the dosage and quantity prescribed. Participants were put into one of three categories:

  • current user: taking NSAIDs for 14 or fewer days; for 15-30 days; or for more than 30 days
  • past user: stopped taking NSAIDs 30 or fewer days ago; 31-180 days ago; or more than 180 days ago
  • never used

The researchers took each person’s NSAID category on the date their atrial fibrillation was diagnosed and compared it with the NSAID usage of all the participants who had not developed atrial fibrillation at that point. They first analysed the results taking into account only age and sex, and then repeated the analysis taking into account all of the cardiovascular risk factors recorded at the start of the study. A sketch of this kind of exposure classification is given below.
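
To make these exposure definitions concrete, here is a minimal Python sketch of this kind of classification. The function, its data structure and the exact date boundaries are illustrative assumptions based on the categories described above, not the researchers' actual code.

    from datetime import date

    def classify_nsaid_exposure(prescriptions, index_date):
        """Classify NSAID exposure on an index date (for example, the date
        atrial fibrillation was diagnosed). `prescriptions` is a list of
        (start_date, end_date) tuples built from pharmacy dispensing records
        up to the index date, assuming each prescription was taken exactly
        as dispensed."""
        if not prescriptions:
            return "never used"
        for start, end in prescriptions:
            if start <= index_date <= end:
                # Currently exposed: sub-classify by duration of use so far
                days_of_use = (index_date - start).days + 1
                if days_of_use <= 14:
                    return "current user, 14 or fewer days"
                if days_of_use <= 30:
                    return "current user, 15-30 days"
                return "current user, more than 30 days"
        # Not currently exposed: sub-classify by time since last use ended
        last_end = max(end for _, end in prescriptions)
        days_since_stopping = (index_date - last_end).days
        if days_since_stopping <= 30:
            return "past user, stopped 30 or fewer days ago"
        if days_since_stopping <= 180:
            return "past user, stopped 31-180 days ago"
        return "past user, stopped more than 180 days ago"

    # Example: a 20-day prescription, checked on day 18 of use
    rx = [(date(2003, 1, 1), date(2003, 1, 20))]
    print(classify_nsaid_exposure(rx, date(2003, 1, 18)))  # current user, 15-30 days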

 

What were the basic results?

After a mean follow-up time of 12.9 years, 857 people developed atrial fibrillation. At the time of their atrial fibrillation diagnosis:

  • 261 had never used NSAIDs
  • 554 had used NSAIDs in the past
  • 42 were currently using NSAIDs

Taking age, sex and cardiovascular risk factors into account, the researchers calculated that current use for 15-30 days was associated with a 76% increased risk of atrial fibrillation compared with those who had never used NSAIDs (hazard ratio (HR) 1.76, 95% confidence interval (CI) 1.07 to 2.88).

Recent past usage, stopped within the previous 30 days, was also associated with an 84% increased risk of atrial fibrillation compared with those who had never used NSAIDs (HR 1.84, 95% CI 1.34 to 2.51).

These were the only statistically significant associations found. Current usage for less than 14 days or for more than 30 days was not associated with atrial fibrillation, nor was past usage that had ended more than 30 days previously. NSAID dose (high or low) was also not significantly associated with an increased risk of atrial fibrillation compared with never-use.
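
As a point of interpretation, the confidence interval around a hazard ratio is constructed on the log scale from the model coefficient and its standard error. This short Python sketch (a worked illustration, not the study's analysis code) reconstructs the interval for the HR of 1.76 reported above; the tiny discrepancy in the upper bound comes from rounding in the published figures.

    import math

    # Published result: HR 1.76, 95% CI 1.07 to 2.88
    log_hr = math.log(1.76)                              # coefficient (log hazard ratio)
    se = (math.log(2.88) - math.log(1.07)) / (2 * 1.96)  # back out the standard error

    lower = math.exp(log_hr - 1.96 * se)
    upper = math.exp(log_hr + 1.96 * se)
    print(f"HR = {math.exp(log_hr):.2f}, 95% CI {lower:.2f} to {upper:.2f}")
    # Prints: HR = 1.76, 95% CI 1.07 to 2.89
    # An interval that excludes 1.0, as here, is what makes the association
    # statistically significant at the 5% level.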

 

How did the researchers interpret the results?

The researchers concluded that the “use of NSAIDs is associated with an increased risk of atrial fibrillation. Current use and recent past use were especially associated with a higher risk of atrial fibrillation, adjusted for age, sex and cardiovascular risk factors. The underlying mechanism behind this association deserves further attention”.

 

Conclusion

This prospective cohort study found an association between NSAID usage and developing atrial fibrillation. However, this research has many limitations.

Despite this being a large prospective cohort study that followed people over a period of time, the assessments within it were predominantly cross-sectional. That means it assessed the person’s current or recent prescription of NSAIDs at the time they were diagnosed, but this cannot prove that using NSAIDs caused atrial fibrillation.

A better method may have been to assess the use of NSAIDs in people without atrial fibrillation at the start of the study, then follow them up over time to see if they developed atrial fibrillation, which would have better separated exposure and outcome.

Factors beyond the cardiovascular risk factors measured could also have influenced the results. For example, the reason for taking NSAIDs was not known, and some of the conditions that prompt NSAID use are themselves linked to the risk of developing atrial fibrillation, such as:

  • recent surgery, which would often lead to short-term NSAIDs usage
  • the need to take high-dose steroids – this includes people with inflammatory conditions like rheumatoid arthritis, who would also be more likely to take NSAIDs

Participants’ NSAID usage was also not accurately recorded. It was determined purely from prescription records, on the assumption that the medication was taken as prescribed. It is well known that people often deviate from this, which may be even more likely for painkillers, given the repeated daily dosing needed and the often fluctuating nature of pain. The records also did not include any over-the-counter NSAIDs, such as ibuprofen.

The study only found significant associations for assumed current use of NSAIDs for 15 to 30 days and for use discontinued within the past 30 days. However, these risk calculations were based on only 17 people with atrial fibrillation who were current users of 15 to 30 days' duration, and 47 people with the condition who had stopped within the past 30 days. These sample sizes are very small, which decreases the reliability of the risk estimates.

If the use of NSAIDs increases the risk of atrial fibrillation, you might expect prolonged usage for more than 30 days to also increase the risk, but this was not seen. However, only eight people who developed atrial fibrillation had been using NSAIDs for more than 30 days at the time, and again a risk calculation involving such a small number of cases may be unreliable.

Overall, this study does not conclusively prove that NSAIDs increase the risk of atrial fibrillation.

However, given that NSAIDs have previously been associated with potential risk of cardiovascular events, particularly in people with existing cardiovascular disease, a link with this heart rhythm problem is plausible and worthy of further study.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Painkillers linked to higher risk of stroke: Alert over prescription medicines used by millions. Mail Online, April 9 2014

Painkillers linked to irregular heartbeat risk in older adults. The Daily Telegraph, April 9 2014

Everyday painkillers almost DOUBLE the risk of LETHAL heart attacks and strokes. Daily Express, April 9 2014

Links To Science

Krijthe BP, Heeringa J, Hofman A, et al. Non-steroidal anti-inflammatory drugs and the risk of atrial fibrillation: a population-based follow-up study. BMJ Open. Published online April 8 2014

Categories: NHS Choices

Could a blood test be used to detect lung cancer?

NHS Choices - Behind the Headlines - Tue, 08/04/2014 - 09:14

"Simple blood test could soon diagnose if patient has cancer and how advanced it is," reports the Mail Online. But this is a rather premature headline given the early stage of the research that the news is based on.

The blood of people with cancer contains DNA from the tumour, which may enter the blood after some of the tumour cells naturally die. However, blood also contains DNA from normal non-cancerous cells.

Researchers developed a technique called CAPP-Seq (cancer personalised profiling by deep sequencing) to detect the small amounts of tumour DNA in the blood of people with non-small-cell lung cancer. They identified parts of the DNA frequently mutated in non-small-cell lung cancer and developed a filter to "enrich" them. These were sequenced thousands of times to identify the mutations.

The researchers were able to detect circulating tumour DNA in 50% of people with early-stage cancers and in all people with later-stage cancers. Levels of circulating tumour DNA were also found to correlate with tumour size and response to treatment.

This is a promising technique that could potentially have a role one day in monitoring cancer progression and response to treatment, and possibly even in screening and diagnosis.

However, it has only been tested on a small number of people. Further studies will be needed to see the best ways to use it and how well it works for other cancers.

 

Where did the story come from?

The study was carried out by researchers from Stanford University and was funded by the US Department of Defense, the National Institutes of Health Director's New Innovator Award Program, the Ludwig Institute for Cancer Research, the Radiological Society of North America, the Association of American Cancer Institutes' Translational Cancer Research Fellowship, the Siebel Stem Cell Institute, the Thomas and Stacey Siebel Foundation, and Doris Duke Clinical Scientist Development Awards.

It was published in the peer-reviewed journal, Nature Medicine.

The Mail Online's coverage was a little optimistic. They report that, "According to the medics, the new test works for the most common types of cancer, including breast, lung and prostate. It could even be used to screen healthy or at-risk patients for signs of the illness."

Although this is ultimately what the researchers want to achieve, so far they have only used the technique to detect tumour DNA in the blood of a small sample of people with non-small-cell lung cancer. In theory, though, it could be modified to detect other cancers.

In addition, although the technique was good at detecting stage II to IV tumours, it was less good at detecting stage I cancers. The researchers state that methodological improvements are required to detect these early-stage cancers.

More research in larger populations is needed before it is known whether any test could be used either for monitoring the response to treatment in people with cancer, or possibly even for detecting cancer.

 

What kind of research was this?

The blood of people with cancer contains DNA from the tumour cells. How tumour DNA reaches the blood is unclear, but it may be released as the tumour cells die naturally. However, blood also contains DNA from normal, non-cancerous cells.

This was a laboratory-based study that aimed to develop a technique to detect and analyse circulating tumour DNA in blood.

This technique would be particularly useful for monitoring tumours, and could potentially also be used in the screening or diagnosis of tumours.

 

What did the research involve?

The researchers were initially interested in optimising the technique for the most common type of lung cancer (non-small-cell), although they point out that theoretically it could be used for any cancer.

The researchers initially designed a "selector", or filter. This was a series of DNA "probes" that corresponded to regions of DNA that are often mutated in non-small-cell lung cancer. The researchers chose the regions based on mutations found in people with non-small-cell lung cancer in national databases such as the Cancer Genome Atlas.

In total, the selector targeted 521 regions of DNA that code for protein (exons) and 13 intervening regions (introns) in 139 genes, corresponding to 0.004% of the human genome. These DNA probes were used to select the DNA regions to be sequenced.
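
To get a sense of scale, a quick back-of-the-envelope calculation shows how small this target is. The genome size used below is a standard approximation (roughly 3.2 billion base pairs) that is not stated in the study itself.

    # 0.004% of an approximately 3.2 billion base-pair genome
    genome_bp = 3.2e9
    selector_fraction = 0.004 / 100
    print(f"targeted region ≈ {genome_bp * selector_fraction / 1e3:.0f} kb")
    # ≈ 128 kb across 139 genes - a tiny target, which is what makes very
    # deep sequencing affordable compared with sequencing the whole genome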

The researchers performed what is known as "deep" sequencing, meaning that these specific regions were sequenced multiple times (about 10,000 times). This was to detect any mutations that may be present.
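
A simple statistical sketch illustrates why such depth matters: a mutation carried by only a small fraction of circulating DNA still yields many supporting reads at roughly 10,000x coverage, enough to distinguish it from background sequencing error. The error rate and read counts below are illustrative assumptions, not figures from the study.

    from scipy.stats import binomtest

    depth = 10_000        # reads covering one targeted position
    variant_reads = 40    # reads carrying the candidate mutation (0.4%)
    error_rate = 0.001    # assumed per-base background error rate

    result = binomtest(variant_reads, depth, error_rate, alternative="greater")
    print(f"allele fraction = {variant_reads / depth:.2%}, p = {result.pvalue:.1e}")
    # A vanishingly small p-value suggests these reads are unlikely to be
    # pure sequencing noise; at 100x depth the same 0.4% mutation would
    # yield at most a read or two and be indistinguishable from error.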

They initially used the selector and deep sequencing – together known as cancer personalised profiling by deep sequencing (CAPP-Seq) – to detect mutations in tumour samples from 17 people with non-small-cell lung cancer.

They then assessed how accurate CAPP-Seq was for monitoring disease and detecting minimal residual disease using blood samples from five healthy people and 35 samples collected from 13 people with non-small-cell lung cancer.

The researchers also determined whether the amount of circulating tumour DNA in blood corresponded to tumour burden and whether the technique could potentially be used for tumour screening.

 

What were the basic results?

Detection

When CAPP-Seq was applied to tumour samples from 17 people with non-small-cell lung cancer, it detected all the mutations that were known to be present from previous diagnostic work. It also detected additional mutations.

CAPP-Seq was then used to detect and analyse circulating tumour DNA in the blood. Circulating tumour DNA was detected in all people with stage II to IV non-small-cell lung cancer and 50% of people with stage I cancer.

Read more about what the stages of cancer mean.

Monitoring

The researchers then analysed whether the levels of circulating tumour DNA in the blood correlated with tumour volumes. They found that levels of circulating tumour DNA in the blood increased as tumour volume increased (measured using computerised tomography and positron emission tomography).

They then monitored levels of circulating tumour DNA in the blood of people who were undergoing cancer treatment. Again, levels of circulating tumour DNA in the blood correlated with tumour volumes.

From the results in two people with stage II or III disease, it seems that this technique may have potential in identifying people with residual disease after therapy. This is because one person was thought to have been successfully treated, but CAPP-Seq detected low levels of circulating tumour DNA. This person experienced disease recurrence and ultimately died.

Two people with early-stage disease were also monitored after treatment. One of these people had a mass thought to represent residual disease after treatment. However, CAPP-Seq detected no circulating tumour DNA and the person remained disease-free for the duration of the study.

Screening

The researchers also assessed the potential for using CAPP-Seq as a screening tool by testing blood samples from all the people in their cohort. The technique detected all of the people with cancer whose circulating tumour DNA made up more than 0.4% of their total circulating DNA. It was also able to detect specific mutations in some patients.
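
As a sketch of how such a cut-off might be applied in practice (the 0.4% threshold comes from the study; the function and its name are hypothetical):

    def screen_positive(ctdna_fraction, threshold=0.004):
        """Flag a blood sample when circulating tumour DNA exceeds 0.4% of
        all circulating DNA - the level above which the technique detected
        every cancer in this cohort. Any real screening rule would need
        validation in much larger populations."""
        return ctdna_fraction > threshold

    # Example: a sample in which tumour DNA makes up 0.6% of circulating DNA
    print(screen_positive(0.006))  # True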

 

How did the researchers interpret the results?

The researchers conclude that CAPP-Seq allows for "highly sensitive and non-invasive detection of [circulating tumour DNA] in the vast majority of patients with [non-small-cell lung cancer] at low cost. CAPP-Seq could therefore be routinely applied clinically and had the potential for accelerating the personalised detection, therapy and monitoring of cancer.

"We anticipate that CAPP-Seq will prove valuable in a variety of clinical settings, including in assessment of cancer DNA in alternative biological fluids and specimens with low cancer cell content," they said.

 

Conclusion

In this study, researchers have developed a technique called CAPP-Seq to detect and analyse the small amounts of tumour DNA in the blood. The researchers tested the technique on samples from five healthy people and 35 samples collected from 13 people with non-small-cell lung cancer.

Circulating tumour DNA was detected in 50% of people with stage I cancer (a small cancer in one area of the lung) and in all people with stage II to IV non-small-cell lung cancer (the three stages covering larger lung cancers – those that may have spread to the lymph nodes or spread to the rest of the body). Levels of circulating tumour DNA were also found to correlate with tumour size and response to treatment.

Overall, this is promising research into a technique that could one day have a role in monitoring cancer progression and response to treatment, and possibly even in screening and diagnosis.

However, further studies in more people will be required to determine how well the technique works both for non-small-cell cancer and for other cancers, and to see if and how it could be used in cancer diagnosis and treatment.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Simple blood test for cancer could soon diagnose if patient has cancer and how far advanced the disease is, scientists say. Daily Mail, April 6 2014

Links To Science

Newman AM, et al. An ultrasensitive method for quantitating circulating tumor DNA with broad patient coverage. Nature Medicine. Published online April 6 2014

Categories: NHS Choices
