Thursday, January 30, 2014

Zombie bees

These days our honey bee colonies (and other bee species) are subject to numerous pathogens and parasites. The interaction among multiple pathogens and parasites is the proposed cause of the so-called Colony Collapse Disorder (CCD), a syndrome characterized by worker bees abandoning their hive.

As if this wasn't enough, about a year ago colleagues from San Francisco and Los Angeles provided the first documentation that the phorid fly Apocephalus borealis, previously known to parasitize bumble bees and wasps, also infects and eventually kills honey bees and may pose an emerging threat to North American apiculture.

The way the fly kills the bee isn't pretty. The female Apocephalus pierces the bee's abdomen with a sword-like tube and deposits her eggs. The developing larvae attack the bee's brain, disorienting it into flying at night; hence the name "zombie bee". Usually the poor victim dies within a few hours of exhibiting this aberrant behavior, which is fortunate because about seven days later up to 13 mature Apocephalus borealis will emerge from where the bee's thorax meets its head, decapitating the bee in the process; hence the genus name, Apocephalus.

The colleagues used DNA Barcoding to confirm that the phorid flies that emerged from honey bees and bumble bees belong to the same species. Additional microarray analyses of honey bees from infected hives revealed that these animals are often infected with pathogens that are also associated with CCD, implicating the fly as a potential vector or reservoir of these honey bee pathogens. The researchers found that 77% of the hives they sampled in the San Francisco Bay Area had been infected by the parasite. They also found the parasite in commercial hives in California's Central Valley and in South Dakota.

As a consequence the researchers started ZomBee Watch, a citizen science project sponsored by the San Francisco State University Department of Biology, the San Francisco State University Center for Computing for Life Sciences and the Natural History Museum of LA County. ZomBee Watch has three main goals:

- To determine where in North America the Zombie Fly Apocephalus borealis is parasitizing honey bees.
- To determine how often honey bees leave their hives at night, even if they are not parasitized by the Zombie Fly.
- To engage citizen scientists in making a significant contribution to knowledge about honey bees and to become better observers of nature.

ZomBee Watch already lists two confirmed infections outside California, one in South Dakota and another, discovered only recently, in Vermont. It is certainly too early to say whether the parasite has established itself in the East. The discovery was made in October 2013, and authorities and researchers are now waiting to see if it survives the winter, which, fortunately in this case, has been harsh so far. I am willing to freeze a little longer if it helps the bees.

Wednesday, January 29, 2014

True cinnamon

The plant genus Cinnamomum consists of about 250 species that are aromatic and contain flavoring substances. Some of the species supposedly show anti-inflammatory, antidiabetic, and antioxidant effects, which has increased the use of cinnamon in traditional medicine. Cinnamomum verum, a native of Sri Lanka, is known as true cinnamon. It is only cultivated in Sri Lanka and India, and the dried bark is used as the famous spice for biscuits, cakes, and other sweets. Most of us know the cinnamon sticks, which we can buy for little money. However, each of those sticks requires quite a bit of work. The outer bark of the plant is scraped by hand and the inner bark is then carefully removed with a knife. The best parts are used to create an outer sheath and the other parts are placed within. These outer sheaths are joined to each other and overlap slightly to create a standard-length stick. The sticks are then rolled daily as they dry and are tied into bundles for trading and transport. Not long ago the BBC aired a three-part series on spices, The Spice Trail. It is quite an eye-opener, as it nicely shows how much work goes into the spices we can buy for cents in our grocery stores.

As with every valuable commodity, it didn't take long until authorities and researchers discovered that Cinnamomum verum is often substituted with the hard, thick, and less aromatic bark of Cinnamomum aromaticum (Chinese cinnamon). This alternative, however, has a more bitter and burning flavor due to a higher amount of coumarin; true cinnamon contains very little coumarin. To make matters worse, the dried bark of another species, Cinnamomum malabatrum, is also passed off as true cinnamon.

Any attempt to distinguish the species based on the morphology of the bark is futile, and most cinnamon is consumed as powder anyway. Sounds like a case for DNA Barcoding, and indeed a group of Indian researchers has now developed a method that promises to help with the problem. The biggest issue for DNA-based identification of cinnamon was the presence of proteins, polysaccharides, and phenolic compounds of the lignin pathway that act as strong inhibitors during DNA extraction; polysaccharides and polyphenols are also known to inhibit PCR. The researchers developed a way to extract DNA from the dried bark of Cinnamomum species that reduces the concentration of all those interfering substances, and they tested the products in Random Amplified Polymorphic DNA (RAPD) runs that produce different banding patterns for different species, in this case true cinnamon and the two common adulterants. They also successfully amplified rbcL, one of the official DNA Barcode markers for plants.
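Just to illustrate the identification step at the end of such a workflow (my own sketch, not the authors' pipeline): once an rbcL sequence has been recovered from bark or powder, assigning it to a species essentially means finding the closest match in a reference library. The sequences and the distance threshold below are made-up placeholders, not data from the paper.

```python
# Minimal sketch of barcode-based identification (not the authors' pipeline).
# The reference sequences are short dummy strings, not real rbcL barcodes;
# a real analysis would use aligned, quality-checked reference sequences.

def p_distance(seq1, seq2):
    """Uncorrected pairwise distance over the shared length (assumes trimmed, aligned sequences)."""
    length = min(len(seq1), len(seq2))
    mismatches = sum(a != b for a, b in zip(seq1[:length], seq2[:length]))
    return mismatches / length

references = {
    "Cinnamomum verum":      "ATGTCACCACAAACAGAGACTAAAGCAAGT",
    "Cinnamomum aromaticum": "ATGTCACCACAAACAGAGACCAAAGCAAGC",
    "Cinnamomum malabatrum": "ATGTCACCACAAACAGAAACTAAAGCAACT",
}

def identify(query, max_distance=0.02):
    """Return the closest reference species, or None if nothing is close enough."""
    best = min(references, key=lambda sp: p_distance(query, references[sp]))
    return best if p_distance(query, references[best]) <= max_distance else None

print(identify("ATGTCACCACAAACAGAGACCAAAGCAAGC"))  # matches the C. aromaticum placeholder
```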

Looks like there is a good way to ensure that my next cinnamon bun actually contains only true cinnamon!




Tuesday, January 28, 2014

Rainforest diversity regulators

Dictyophora sp. (credit Bob Thomas)
Tropical forests are important reservoirs of biodiversity, but the processes that maintain this diversity remain poorly understood. The Janzen–Connell hypothesis suggests that specialized natural enemies such as insect herbivores and fungal pathogens maintain high diversity by elevating mortality when plant species occur at high density (negative density dependence; NDD). NDD has been detected widely in tropical forests, but the prediction that NDD caused by insects and pathogens has a community-wide role in maintaining tropical plant diversity remains untested. 

This is the intro to a new study that showed that fungi, often seen as pests, play a crucial role in policing biodiversity in rainforests. Researchers found that fungi regulate diversity in rainforests by making dominant species victims of their own success. Fungi spread quickly between closely packed plants of the same species, preventing them from dominating and enabling a wider range of species to flourish. Seedlings growing near plants of the same species are more likely to die. It has long been suspected that something in the soil is responsible, and the new study shows that fungi play a crucial role. If lots of plants from one species grow in the same place, fungi quickly cut their population down to size, leveling the playing field to give rarer species a chance.

The researchers sprayed plots in the rainforest of Belize with water, insecticide or fungicide every week for 17 months. They found that fungicides dealt a significant blow to diversity, reducing the effective number of species by 16%. While insecticides did change the composition of surviving species, they did not have an overall impact on diversity. The results suggest that insects disproportionately kill certain plant species, reducing their abundances during the transition from seeds to seedlings. Insects thus strongly influence the structure of plant communities in this forest; however, by doing so relatively independently of plant density, their net effect on plant species diversity is small. 
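For readers wondering what an "effective number of species" actually is: it is a diversity measure (a Hill number) that converts the Shannon index into the equivalent number of equally common species. A minimal sketch with invented seedling counts (not the study's data, and not necessarily the exact estimator the authors used):

```python
import math

def effective_number_of_species(counts):
    """Hill number of order 1: the exponential of the Shannon index."""
    total = sum(counts)
    proportions = [c / total for c in counts if c > 0]
    shannon = -sum(p * math.log(p) for p in proportions)
    return math.exp(shannon)

# Hypothetical seedling counts per species for a control plot and a fungicide plot.
control_plot   = [30, 25, 20, 10, 8, 5, 2]
fungicide_plot = [55, 30, 10, 3, 1, 1]

print(round(effective_number_of_species(control_plot), 1))    # more evenly spread community
print(round(effective_number_of_species(fungicide_plot), 1))  # dominated by a few species
```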

The initial expectation was that the removal of both fungi and insects would have an effect on the tree species present but it was actually only the removal of the fungi that affected diversity. It was also suspected that the fungus-like oomycetes might play a part in policing rainforest diversity, but treatments with substances that kill oomycetes showed no significant effect on the number of surviving species, suggesting that true fungi and not oomycetes are driving rainforest diversity.

Our experiments highlight that both insect herbivores and pathogens help structure tropical plant communities at the early stages of community assembly and provide support for a pivotal role for natural-enemy-mediated NDD in maintaining species diversity in this tropical forest. Although the magnitude of the NDD we observed was relatively small, this study was conducted over a relatively short timescale (17 months) in a tropical forest of relatively low plant species diversity (approximately 320 tree species have been recorded in the reserve). The effects of NDD will probably accumulate over time, and may be stronger in more species-rich forests. Indeed, similar experiments in other forests are now needed to evaluate the generality of the Janzen–Connell hypothesis as an explanation for variation in species diversity among tropical plant communities.



Thursday, January 23, 2014

A new bat species

Myotis annatessae, skull drawings
About 17 years ago my colleague Alex Borisenko was actively collecting bats in Vietnam. A small collection of bats made as part of an ecological assessment in the Vu Quang Nature Reserve (Ha Tinh Province) contained several specimens of small mouse-eared bats (genus Myotis), which he provisionally identified as 'Myotis siligorensis'.

Years later he and a colleague from the Zoological Museum at Moscow State University had a chance to do some more in-depth comparisons with a larger series of South-East Asian Myotis, and they found clear morphological differences between the Vu Quang specimens and true Myotis siligorensis and its related forms. This was further confirmed by DNA Barcoding analysis, which showed marked genetic divergence between the groups. The logical consequence: the Vu Quang specimens represent a new bat species, and the researchers present these findings together with a full formal description in a recently published paper.

The new species resembles smaller specimens of the widespread South Asian Myotis muricola, though it differs from it and from other small mouse-eared bats by a set of cranial and external characters. Genetic analyses confirm that the new species is distinct from the other named forms of Asian Myotis. Comparison of sequence diversity in the DNA barcode region of the COI gene among East Asian members of Myotis highlighted several taxonomic questions related to Asian 'whiskered bats', suggesting that common morphological diagnostic traits may be shared by genetically divergent species.
The young scientist Anna Tess
showing interest in the plant kingdom

When it came to naming the new species, the authors decided to name it Myotis annatessae in honor of Alex's two-year-old daughter Anna Tess. I like this idea far more than naming new species after celebrities. And who knows, maybe this close connection to the scientific name of an animal will instill a future wish to pursue science or simply nurture an appreciation for nature and its wonders. Either way, a great choice, and it sounds good, too.

Tuesday, January 21, 2014

How to increase value and reduce waste 3

Post #3 of my little series on the Lancet papers on how to reduce waste in research is a take on how bureaucracy contributes to the problem. I think the best way to summarize this contribution is to cite an example that was given in it. Imagine a group of researchers aiming to pool data from several cohort studies. In biomedical research you generally need approval to use patient-relevant information. Such approval is usually provided by an ethics committee; Sweden has a central research ethics committee for this.

In 2010, a group of researchers in Sweden wanted to pool data from several cohort studies to identify risk factors for subarachnoid haemorrhage (a form of stroke). They identified about 20 studies, and spent about 300 h contacting all investigators and getting signed data-sharing agreements and data security processes agreed. The team recorded the time taken for each step of the approval process. About 200 h of office time was spent on the ethics approval and resubmission process alone. The research ethics committee wanted to see all information that the participants of all cohorts had been given about the purpose of the study. These documents had to be provided as 18 copies and submitted manually. It took the team 6 months to collect all the information sheets from the 20 different cohorts, several of which began recruitment in the 1960s and for which knowledge about what information was given by whom to whom in the recruitment phase was poor. The research ethics committee eventually had the team advertise the pooling project in national newspapers, listing all original cohorts, so that individuals who did not want the team to use their data for this project could withdraw their consent. Not one participant withdrew. It took more than 3 years to reach the stage of pooling data from the cohorts, ready for analysis.

Three years before the team could start researching. Frankly, I would probably have suffered a stroke myself if I had to jump through all those hoops. Luckily, in our field of research bureaucracy is usually not that excessive, because most of what we do does not involve experiments or empirical studies on humans. Funnily enough, I had to write a few similar applications for our school program, although we are not collecting any information about the children but gather data together with them. However, that took only one or two hours of my time and not years.

Overall, the three major categories of bureaucracy I have to deal with are grant proposals, permits for field work and research, and university-internal approval of research funding. All have their relevance and are required, and you can find both very stringent, clear-cut procedures and cumbersome, time-consuming ones. The last thing I am going to do here is complain about specific ones, but I can see how an increasing number of regulations can drive colleagues mad. Let's face it: all of us were trained to do research to the best of our abilities, and we were, hopefully, given all the tools to do good science, but nobody taught us how to write a grant proposal, how to write interim and final reports, how to apply for research permits, or how to apply to be able to apply for grants. Well, we learn that over the years. We even learn to live with the fact that each funding organisation, authority or university administration uses different formats and puts an incomprehensible emphasis on formatting. The only unfortunate thing is that the main teaching and learning tools are rejection and repeated revision.

All recommendations in the paper address science regulators and policy makers. Here are some that might be, at least in part, applicable in other domains of science:

Recommendations
  • People regulating research should use their influence to reduce other causes of waste and inefficiency in research
    • Monitoring—people regulating, governing, and managing research should measure the extent to which the research they approve and manage complies with the other recommendations in this Series
  • Regulators and policy makers should work with researchers, patients, and health professionals to streamline and harmonise the laws, regulations, guidelines, and processes that govern whether and how research can be done, and ensure that they are proportionate to the plausible risks associated with the research
    • Monitoring—regulators, individuals who govern and manage research, and researchers should measure and report delays and inconsistencies that result from failures to streamline and harmonise regulations
  • Researchers and research managers should increase the efficiency of recruitment, retention, data monitoring, and data sharing in research through the use of research designs known to reduce inefficiencies, and do additional research to learn how efficiency can be increased
    • Monitoring—researchers and methodologists should do research to identify ways to improve the efficiency of biomedical research


An Arctic food web

Zackenberg, Greenland
Understanding who feeds on whom and how often is the basis for understanding how nature is built and works. A new study now suggests that the methods used to depict food webs may have a strong impact on how we perceive their makeup. 

In order to understand how feeding interactions are structured, researchers from Finland and colleagues here at BIO chose to focus on one of the simplest food webs on Earth: the moths and butterflies of Northeast Greenland, as attacked by their specialist enemies, parasitic wasps and flies that develop on their host and kill it in the process. This work is the result of a five-year exploration of the insect food webs of Zackenberg in the High Arctic. The beauty of this rather simple system is that one has to keep track of perhaps only a handful of species, which provides a food web structure of manageable complexity. As a result, the researchers could be much more confident that they had captured the full system and ruled out interactions that were not part of it.

The researchers supplemented the traditional technique of rearing host larvae until the emergence of either the adult or its enemy with DNA Barcoding. More specifically, they targeted COI regions that differ between predator and prey in order to selectively detect immature predators from within their prey, as well as the remains of the larval meal from the stomachs of adult predators. By then comparing the sequences obtained to a reference library of DNA Barcodes of all species in the region, they could determine exactly who had attacked whom. One of the reasons to use this approach was the obscure nature of some of the species involved. As larvae, some of the predators attack their prey while they are hidden in the ground or vegetation, where they are likely to be missed by any collector. By instead looking for prey remains in the guts of the more easily detectable adult predators, the scientists were able to establish previously unknown links within the food web.

The new approach changed every measure of food web structure, revealing three times as many interactions between species as known before. On average, most types of predator proved less specialized than assumed, and most types of prey were attacked by many more predators than previously thought. Furthermore, the comparison of the different techniques with each other revealed that food web structure varied more among techniques than among localities. Thus, whatever we think we know about food web structure across the globe may be dictated as much by how we have searched as by how species really interact.
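To make the "structure" part a little more tangible, here is a toy comparison of a web built from rearing records alone versus one supplemented with molecular detections, using simple metrics such as the number of links, connectance, and mean diet breadth. The species and links are invented for illustration and are not taken from the study.

```python
# Toy food-web metrics (hypothetical data, not the Zackenberg results).

def web_metrics(links, n_predators, n_prey):
    """links: set of (predator, prey) pairs observed with a given method."""
    connectance = len(links) / (n_predators * n_prey)   # realized fraction of possible links
    prey_per_predator = {}
    for predator, prey in links:
        prey_per_predator.setdefault(predator, set()).add(prey)
    mean_diet_breadth = sum(len(p) for p in prey_per_predator.values()) / n_predators
    return len(links), connectance, mean_diet_breadth

rearing_links   = {("wasp_A", "moth_1"), ("wasp_B", "moth_2"), ("fly_C", "moth_3")}
molecular_links = rearing_links | {("wasp_A", "moth_2"), ("wasp_A", "moth_3"),
                                   ("wasp_B", "moth_1"), ("wasp_B", "moth_3"),
                                   ("fly_C", "moth_1"), ("fly_C", "moth_2")}

print(web_metrics(rearing_links, n_predators=3, n_prey=3))    # sparse, specialized-looking web
print(web_metrics(molecular_links, n_predators=3, n_prey=3))  # three times the links, far less specialized
```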

Fascinating: one of the simplest food webs we could possibly find in the world, and yet it is far more complex than previously thought. I cannot say I am surprised, as all the years in the trade have taught me that relationships and interactions in nature are more intertwined than we have been able to fathom so far. Maybe we are getting a little closer to a deeper understanding now.

In conclusion, our results show how the information provided by molecular techniques can surpass that recovered by traditional techniques, but also that different types of molecular information are complementary, revealing different features in the emergent architecture of ecological interaction webs. By resolving more interactions than traditional techniques, by revealing interactions of species with a cryptic lifestyle, and by revising our impression of emergent food web structure, such combinations have the potential to revamp our impression of local food web structure, and how biotic interactions are patterned across the globe.

Monday, January 20, 2014

Hope for amphibians?

Amphibians are currently affected by a wave of global extinctions. The main reasons for this decline are anthropogenic habitat alteration and fragmentation but mere conservation of amphibian habitats no longer guarantees survival. As a matter of fact, the introduction of infectious diseases has been shown to drive amphibians to extinction even in seemingly pristine habitats. 

The most prominent example is chytridiomycosis, a fungal disease that is devastating amphibians around the world. It is caused by a skin fungus (Batrachochytrium dendrobatidis), Bd for short. The fungus infects the skin, which for amphibians is a vital part of the respiratory system. If Bd successfully establishes itself, infections will steadily increase, and above a certain threshold amphibians will start dying. A number of vulnerable species have been lost already, especially in Central America and tropical Australia, and the fungus spreads like wildfire. However, prevalence varies significantly at local and regional scales.

Even a single highly susceptible host species, such as the European midwife toad Alytes obstetricans, can exhibit strong variation in the prevalence of infection across small geographic scales. Mortality in this species owing to chytridiomycosis correlates positively with altitude, which is due at least in part to the effects of environmental temperature. However, this does not explain why sites with equivalent temperature regimes can still exhibit substantial variation in prevalence and mortality associated with infection, or why Bd-positive sites were found to be more similar to each other than would be expected by chance.

This finding intrigued colleagues from Europe enough to start a whole range of experiments, which took over three years to complete, to understand which differences between the ponds and lakes of the Pyrenees could explain such a pattern. The researchers found several differences between infected lakes and uninfected ones. Their geological characteristics differed and they were surrounded by disparate vegetation. Water samples from the sites showed clear differences in laboratory cultures of the pathogen, as well as in the infection dynamics. A series of additional experiments established that a suite of microscopic aquatic predators, such as protozoans and rotifers, is capable of consuming large quantities of the infectious stage of Bd, which in turn reduces the infection pressure for the whole population by reducing the number of infected tadpoles.

I am very careful not to overstate the results of this study, but there seems to be a little hope for the amphibians, even though the main causes of their decline (habitat alteration and destruction) have not changed.

We here show the importance of predation in controlling infections in larvae of two amphibian species and provided direct evidence that zoospore ingestion is the mechanism through which infection is modified. Development of methods that facilitate natural augmentation of predatory microorganisms as a form of Bd biocontrol may hold promise as a field mitigation tool that lacks the downsides associated with introducing nonnative biocontrol agents, such as the use of antifungal chemicals or release of nonnative skin bacteria into the environment, or the reliance of unpredictable environmental temperature to “cure” infections.

Friday, January 17, 2014

The locust genome is not the largest

"World's Largest Animal Genome Belongs to Locust: New Insight Explains Swarming, Long-Distance Migratory Behaviors" is a headline going around the internet as we speak, and it is simply wrong! Although this locust has a 6.5 Gb genome that is twice as large as ours, it falls way short of the actual record holder, the African lungfish (Protopterus aethiopicus) with 139 Gb. There are also some salamanders and newts that have larger genomes. Don't believe me? Check this database.

Too bad, and all because somebody did a lousy job writing the original press release. The authors of the associated paper stated something else: Here we present a draft 6.5 Gb genome sequence of Locusta migratoria, which is the largest animal genome sequenced so far. So it is the largest animal genome sequenced so far, not the largest animal genome. That is a big difference. I wonder whether anybody proofread the release before it went out.

Although this looks very much unintentional, it reminded me of yesterday's blog post by Larry Moran in which he shows the results of the combination of bad papers and hyperbolic press releases. To stand out in the mass of excellent research published every day, papers are often accompanied by press releases that exaggerate the findings. A couple of infamous controversies (e.g. ENCODE, the arsenic-associated bacteria) should have woken everybody up to the problem at hand, but frankly I don't see any real reaction in the community. That is particularly concerning if you consider that many press releases are not actually read by a journalist or editor specialized in science. Most of the modern news aggregators are machines, and if there is human intervention it has to happen in such a short time that such errors slip through all the time. Our locust genome size mistake has already been picked up by 20 news distributors, and some of them (such as Eureka or Science Daily) are actually considered reputable. We can only lean back and see what the writers in the press will make of it.

We as researchers have to be more careful with the news we pass on to the press. In the worst case there won't be anybody doing a final fact-check, and once misinformation is out there it is hard to get rid of and it multiplies rapidly. It is a pity in this case, because the actual paper on the locust genome is pretty good.

Thursday, January 16, 2014

How to increase value and reduce waste 2

For the second paper in the Lancet series I have to cherry-pick a little, as large parts of the publication focus on specifics of biomedical research. My intention is to present and talk about the findings that can easily be applied to research in general. Nevertheless, it is more than worthwhile to read these publications in their entirety, as most of them contain very useful ideas on how to address current drawbacks.

The general topics of this study are shortcomings of research design, conduct and analysis, and the authors don't hold back on criticism:

  • absence of detailed written protocols and poor documentation of research is common 
  • information obtained might not be useful or important 
  • statistical precision or power is often too low or used in a misleading way 
  • insufficient consideration might be given to both previous and continuing studies 
  • arbitrary choice of analyses and an overemphasis on random extremes might affect the reported findings.

I must admit that none of those points is really news to me, which means I've crossed paths with each and every one of them throughout my career, and perhaps even walked into some of those traps myself. Three reasons for this are carved out in the publication.

One major problem is the failure to train the research workforce properly. Statistics is a very good example: a study cited in the paper showed that p values did not correspond to the given test statistics in 38% of articles published in Nature and 25% of articles in the British Medical Journal in 2001. Many colleagues bemoan the fact that students have very little knowledge of statistical methods when they start working on their first real project or thesis. I can attest to that, but it would be wrong to blame the students for the plight. It rather seems that we should seriously reconsider the way we teach statistics. While the complexity of statistical analysis has grown, the time spent learning the basics has gradually been reduced. Let's face it, most of the algorithms used in analytical software for genetic data are not fully understood by their users, although deeper knowledge is crucial to make informed decisions on which method to use and how to interpret the results.
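Incidentally, checks like the one behind that 38% figure are straightforward to automate: recompute the p value implied by a reported test statistic and compare it with the p value stated in the paper. A minimal sketch (the numbers below are made up for illustration, not taken from the cited study):

```python
# Recompute a two-sided p value from a reported t statistic and degrees of
# freedom, then flag reported p values that don't match.
from scipy import stats

def check_t_test(t_stat, df, reported_p, tol=0.005):
    recomputed_p = 2 * stats.t.sf(abs(t_stat), df)  # two-sided p value
    return recomputed_p, abs(recomputed_p - reported_p) <= tol

print(check_t_test(t_stat=2.31, df=28, reported_p=0.03))   # roughly consistent
print(check_t_test(t_stat=2.31, df=28, reported_p=0.003))  # flagged as inconsistent
```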

Another issue identified by the authors is the fact that inadequate emphasis is often placed on recording research decisions and on the reproducibility of research. There are a lot of examples in the paper where companies tried to reproduce biomedical research for the development of treatments and failed. Researchers at Bayer could not replicate 43 of 67 oncological and cardiovascular findings reported in academic publications. Researchers at Amgen could not reproduce 47 of 53 landmark oncological findings for potential drug targets. This could be the result of bad study design or sloppy reporting. I will never forget what I was taught as an undergrad: a methods section should read as clearly as a good cookbook recipe. Everyone should be able to redo your experiment without further reading or consultation.

Finally, the authors state that our current reward systems incentivise quantity more than quality, and novelty more than reliability. It is very rare that scientists are rewarded for rigorous work and for efforts to replicate their own research in order to ensure reproducibility. The hunt for high impact factors and h-indices has flooded the literature with mediocre work. It is not rare that authors split a nice study into several papers, each telling the same story but from a slightly different analytical angle. That might be an interesting idea for a fictional story or the next best-selling novel(s), but in research it just dilutes the findings, and I find it inappropriate. We shouldn't need prestidigitation to receive credit for our work.

It is one thing to lament the current situation, but it is another to propose ways out of it:
Recommendations
  • Make publicly available the full protocols, analysis plans or sequence of analytical choices, and raw data for all designed and undertaken biomedical research
    • Monitoring—proportion of reported studies with publicly available (ideally preregistered) protocol and analysis plans, and proportion with raw data and analytical algorithms publicly available within 6 months after publication of a study report
  • Maximise the effect-to-bias ratio in research through defensible design and conduct standards, a well trained methodological research workforce, continuing professional development, and involvement of non-conflicted stakeholders
    • Monitoring—proportion of publications without conflicts of interest, as attested by declaration statements and then checked by reviewers; the proportion of publications with involvement of scientists who are methodologically well qualified is also important, but difficult to document
  • Reward (with funding, and academic or other recognition) reproducibility practices and reproducible research, and enable an efficient culture for replication of research
    • Monitoring—proportion of research studies undergoing rigorous independent replication and reproducibility checks, and proportion replicated and reproduced

The biodiversity of Duckburg


My generation is one of many that grew up with Disney comics. I must confess that I was actually an avid collector of Donald Duck comic books. My collection was quite large, but at some point I sold it. I wish I hadn't, as my son seems to be developing a similar taste these days.


And now I get this email telling me that there is a biodiversity database of Disney characters, and it is not a small one. It contains a detailed list of more than 160,000 Disney comic stories and more than 50,000 publications, but it's not just a simple list: it includes descriptions of the stories, artists, number of pages, appearances and much more. A group of about 30 individuals spread across four continents have committed themselves to indexing every single Disney comic story and every single Disney comic magazine in the world. They are obviously not done yet, but they have been doing this as a hobby, with no support from the Disney company.

The database has its own relational synonymy structure for character names across countries, with links describing the source publications for those names. That might sound familiar to some of my readers. Take, for example, the Uncle Scrooge page: in France you'd call him "Omer Picsou" or "Oncle Harpagon", in Argentina he's "Tío Rico", and in Germany "Onkel Dagobert".

Moreover, some database entries have been used to produce a genealogical history of Disney's Duck characters and it is quite a big family tree.

h/t Rodger Gwiazdowski


Wednesday, January 15, 2014

How to increase value and reduce waste 1

As promised the day before yesterday, I want to have a closer look at the individual publications in the new series in The Lancet on wasteful research and ways to overcome this issue.

The first paper was written by an international group of biomedical researchers. The lead author is one of the two who, back in 2009, made the perhaps bold statement that an incredible 85% of the investment in science is actually wasted because of a flawed system. The new paper discusses how avoidable waste can be considered when research priorities are set. According to the authors, a lot of research does not lead to worthwhile achievements. They provide two potential reasons for this. One problem is that some studies are conducted to further understand basic concepts and mechanisms, but the findings have no societal benefit. That is an argument I've heard very often; actually, it's the one you might hear not only from potential funding partners but also from family and friends. The problem is that groundbreaking basic research in particular starts without any consideration of relevance to society but purely to advance knowledge. Often it is just one individual with a new idea. As a consequence, different models have been proposed for decisions about which high-risk basic research to support. These models suggest funding individual scientists rather than projects, to allow maximum freedom of thinking and action. However, nobody is following these ideas. One example in the paper is U.S. NIH funding in 2011, where only 0.001% of all proposals were in categories that focus on individuals. ...the kind of creative lateral thinking that has led to important advances in understanding has had to survive peer-review systems that tend to be innately conservative, subject to fashion, and often inimical to ideas that do not conform with mainstream thinking. Experimental evidence suggests that a strong bias against novelty does exist in traditional peer-review systems. Moreover, peers tend to support proposals that have similar fingerprints to their own interests, not those that are really novel.

The other issue is that often good research yields unexpected results. Well, that is science, and I agree with the authors that as long as the way in which these ideas are prioritised for research is transparent and warranted, these disappointments should not be deemed wasteful; they are simply an inevitable feature of the way science works.

In their attempt to define what might be a good way to separate the wheat from the chaff, they provide a nice figure of research categories. They essentially adapted Pasteur's quadrant, a term introduced by Donald Stokes in 1997. The renaming was done to fit the quadrant into the biomedical context. Stokes had initially named the category for basic research the Bohr quadrant and the one for pure applied research the Edison quadrant; the definitions are essentially the same. Important is the waste quadrant, reflecting research that neither advances knowledge nor makes any relevant strides towards immediate application.


Well, translated into our world of biodiversity science, all the taxonomic work (species descriptions, discoveries, revisions) falls into the Curie quadrant, as it advances knowledge without prior consideration of further relevance in other fields. Pure applied research in a DNA Barcoding context (Doll quadrant) could be the development of new technologies that utilize barcoding data, e.g. an eDNA probe to discover invasive or rare species in a given region. The Pasteur quadrant is filled with all those studies that utilize barcoding to address important issues, e.g. in conservation biology, ecology, food safety, etc. And yes, we do have examples in the Waste quadrant, such as studies that try to introduce yet another primary DNA Barcoding region, for example repeated attempts to replace COI in fish with cytb or 16S. This ought not to be confused with the search for complementary gene regions in cases where the defined standards have proven insufficient in particular taxonomic groups.
There is a lot more information in the paper and a number of suggestions to improve the system of research funding. I can only recommend reading it (it is open access after you sign up with The Lancet). The authors also state that research funders have the primary responsibility for reductions in waste resulting from decisions about what research to do. Therefore, most of the general recommendations are actually meant to improve the decision-making system on their end, including a great deal of transparency for us scientists:
Recommendations
  1. More research on research should be done to identify factors associated with successful replication of basic research and translation to application, and how to achieve the most productive ratio of basic to applied research
    • Monitoring—periodic surveys of the distribution of funding for research and analyses of yields from basic research
  2. Research funders should make information available about how they decide what research to support, and fund investigations of the effects of initiatives to engage potential users of research in research prioritisation
    • Monitoring—periodic surveys of information on research funders' websites about their principles and methods used to decide what research to support
  3. Research funders and regulators should demand that proposals for additional primary research are justified by systematic reviews showing what is already known, and increase funding for the required syntheses of existing evidence
    • Monitoring—audit proposals for and reports of new primary research
  4. Research funders and research regulators should strengthen and develop sources of information about research that is in progress, ensure that they are used by researchers, insist on publication of protocols at study inception, and encourage collaboration to reduce waste
    • Monitoring—periodic surveys of progress in publishing protocols and analyses to expose redundant research

Tuesday, January 14, 2014

The costs of inappropriate taxonomy

Madracis auretenra
Yesterday's announcement that I would have a closer look at the new series in The Lancet sparked some interest, and one of the emails I received I found particularly intriguing. At least intriguing enough to talk about it first. My colleague Rodger Gwiazdowski forwarded me a paper from 2008 that I was completely unaware of.

According to Rodger, the publication is the best example I know of that estimates a dollar value loss from application of inappropriate taxonomy. And indeed I couldn't find any other papers that explicitly deal with the monetary consequences of misidentifications. Before I go into more detail I would like to stress that this example particularly goes after what the authors call bad taxonomic practices. I am looking at this issue from a more general vantage point, especially when it comes to species that are extremely difficult to identify using traditional morphology-based methods. Therefore, I'd like to define what I consider good taxonomic practice: the use of all necessary and available methods to properly identify an unknown specimen. This explicitly includes DNA-based methods. Many people call this integrative taxonomy in an attempt to bring the morphological and molecular worlds together. Even today, in a world where DNA work has become really cheap, there is still a large number of researchers claiming that it is too expensive to be done on a regular basis, especially in taxonomy where financial resources are decreasing. This might also be particularly true for researchers in developing nations that lack area-wide infrastructure for molecular work. However, for the rest of us it might be worth looking at the publication by Locke and Coates, which puts things in perspective.

In this paper we present the results of our attempts to discover the identity of specimens that have been named as Madracis mirabilis or as “Madracis mirabilis sensu Wells” in papers published since 1967. In particular we wanted to know whether these specimens were or were not Madracis auretenra. We also made a rough estimate of a dollar value for the research effort that might be considered wasted, if no species identity can be resolved. 

The authors essentially claim that data derived from false species identifications cannot be applied in any analyses that have species-level implications, and that spans all biological disciplines. In this study it is estimated that inappropriate taxonomy concerning the scleractinian coral species Madracis auretenra cost about $4 million. That seems an extraordinary amount, but to me their approach is reasonable and based on realistic estimates:

We know that data from 86 studies, which might be relevant to Madracis auretenra, cannot be included in a review of the biology of that well-studied,  widespread, and common species; in fact, at this point those data could not be included in such a study of any coral species. However, the same data may be applicable to very general questions about coral reef ecology and biology. Thus, conservatively, we suggest that only a third of the data produced in those studies has extremely limited value. If we consider that the cost of our studies [$140,000] is in a similar range to the studies we cited and apply a current cost to research over the whole period of the publications we consider, then we would be looking at a total expenditure of about $3,970,620 for data with limited to no value.
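My own back-of-envelope reading of that figure (the authors adjust research costs over the publication period, so this is only a rough reconstruction, not their exact calculation):

```python
# Rough reconstruction of the order of magnitude, not the authors' exact calculation.
studies = 86                # studies that used the ambiguous name
cost_per_study = 140_000    # the authors' own project cost, used here as a proxy
wasted_fraction = 1 / 3     # the authors' conservative estimate of data with very limited value

print(f"${studies * cost_per_study * wasted_fraction:,.0f}")  # roughly $4 million
```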

The authors go on to provide a list of (morphology-based) taxonomic practices that should be followed, all of which I would consider common sense among colleagues. I would suggest adding DNA Barcoding to any such list. Why? Well, it cost the authors $140,000 to describe one species (Madracis auretenra) and to disseminate the information about ongoing misidentifications. Even that seems too high to me. I am convinced that if every newly described type specimen were barcoded (even several specimens are not going to cost more than a few hundred dollars) we could save a lot of money in subsequent studies, and even in similar attempts to clean up a taxonomic mess such as the one involving Madracis auretenra.

Monday, January 13, 2014

Research: increasing value, reducing waste


"The Lancet" started the new year with a unique special that looks at the recurring criticism that a lot of the current research is simply irrelevant and wasteful. Under the title "Increasing Value, Reducing Waste" The Lancet published a Series of five papers about research. In the first report Iain Chalmers et al. discuss how decisions about which research to fund should be based on issues relevant to users of research. Next, John Ioannidis et al consider improvements in the appropriateness of research design, methods, and analysis. Rustam Al-Shahi Salman et al. then turn to issues of efficient research regulation and management. Next, An-Wen Chan et al. examine the role of fully accessible research information. Finally, Paul Glasziou et al. discuss the importance of unbiased and usable research reports. These papers set out some of the most pressing issues, recommend how to increase value and reduce waste in biomedical research, and propose metrics for stakeholders to monitor the implementation of these recommendations.

Admittedly, this is all written from the perspective of biomedical research, but a lot of the issues described can be found in all scientific disciplines and unfortunately represent a more general trend. What makes this special is the fact that the studies don't stop at stating facts and telling us inconvenient truths but make suggestions for how the sciences can improve. All contributions consequently focus on the system and do not look at external factors. My initial reaction to that was skeptical, as there are certainly examples of how misdirection through external factors can create irrelevant research and hamper important scientific work. Canadians can tell you a thing or two about that. But instead of simply complaining, the approach is rather "Here is what we think is wrong and here is what we think we can do to make it better".

Nevertheless the criticism is harsh: institutional incentive systems are counterproductive, money is wasted, patients put at risk, and the entire system skewed. In the editorial commentary Sabine Kleinert and Richard Horton state There is clearly a strong feeling among many scientists ... that something has gone wrong with our system for assessing the quality of scientific research.

And they go on to cite two Nobel laureates, Randy Schekman and Peter Higgs, both of whom gave interviews in The Guardian. Schekman used the attention he received as a Nobel laureate in 2013 to launch a ferocious attack on the leading journals: These luxury journals [Nature, Science, and Cell] are supposed to be the epitome of quality, publishing only the best research. Because funding and appointment panels often use place of publication as a proxy for quality of science, appearing in these titles often leads to grants and professorships. But the big journals' reputations are only partly warranted. While they publish many outstanding papers, they do not publish only outstanding papers. Neither are they the only publishers of outstanding research.

Peter Higgs (yes, that's the guy who predicted the Higgs boson) actually described himself as an embarrassment to his university department because he published so little: Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough.

Back in 2009, two of the senior authors of publications in this new series published a paper titled Avoidable waste in the production and reporting of research evidence. They made the extraordinary claim that as much as 85% of research investment is wastefully spent. An incredible figure. According to the authors, researchers often start with the wrong objectives, work with insufficient study designs, and, a cardinal sin, ignore similar research that has already been done. The study also states clearly that this is not necessarily the result of sloppy or bad research but rather a problem of the system; for example, all too often results are not accessible to everyone.

I think it is worth looking closer at each of the five papers, and in the next posts I will take a stab at transferring their messages to research in general and to biodiversity science in particular.


Friday, January 10, 2014

War elephants

The Battle of Raphia
After Alexander the Great's premature death, his vast kingdom was divided among his generals. Being generals, they spent the next three centuries fighting over the land. One historically well-known battle, the Battle of Raphia, took place in 217 B.C. between Ptolemy IV, the King of Egypt, and Antiochus III the Great, the King of the Seleucid kingdom that reached from modern-day Turkey to Pakistan.

According to historical records, Antiochus's ancestor traded vast areas of land for 500 Asian elephants, whereas Ptolemy established trading posts for war elephants in what is now Eritrea, a country with the northernmost population of elephants in East Africa. In the Battle of Raphia, Ptolemy had 73 African war elephants and Antiochus had 102 Asian war elephants, according to Polybius, a Greek historian who described the battle 70 years later:
A few of Ptolemy's elephants ventured too close with those of the enemy, and now the men in the towers on the back of these beasts made a gallant fight of it, striking with their pikes at close quarters and wounding each other, while the elephants themselves fought still better, putting forth their whole strength and meeting forehead to forehead.
Ptolemy's elephants, however, declined the combat, as is the habit of African elephants; for unable to stand the smell and the trumpeting of the [Asian] elephants, and terrified, I suppose, also by their great size and strength, they at once turn tail and take to flight before they get near them.

Over the years, there has been a lot of speculation about Polybius's account.
Until well into the 19th century the ancient accounts were taken as fact by natural historians and scientists, which is why Asian elephants were given the name Elephas maximus, although it later became clear that African elephants are mostly larger than Asian elephants. Speculation about the reason for this discrepancy went as far as suggestions that there might even have been an extinct smaller subspecies. In 1948, Sir William Gowers reasoned that Ptolemy must have fought with forest elephants that fled from the larger Asian elephants, as Polybius described. Until now, the main question remained: Did Ptolemy employ African savanna elephants (Loxodonta africana) or African forest elephants (Loxodonta cyclotis) in the Battle of Raphia?

Today Eritrea has one of the northernmost populations of African elephants, with only about 100 individuals. They have become completely isolated, with no gene flow from other elephant populations. Their conservation is very important to Eritrean authorities, and such efforts would benefit from an understanding of their genetic affinities to elephants elsewhere on the continent and the degree to which genetic variation persists in the population. Using dung samples from Eritrean elephants, researchers from Eritrea and the U.S. looked at microsatellite data as well as nuclear and mitochondrial markers. Their results:
The sampled Eritrean elephants carried nuclear and mitochondrial DNA markers establishing them as savanna elephants, with closer genetic affinity to Eastern than to North Central savanna elephant populations, and contrary to speculation by some scholars that forest elephants were found in Eritrea. Mitochondrial DNA diversity was relatively low, with 2 haplotypes unique to Eritrea predominating. Microsatellite genotypes could only be determined for a small number of elephants but suggested that the population suffers from low genetic diversity. Conservation efforts should aim to protect Eritrean elephants and their habitat in the short run, with restoration of habitat connectivity and genetic diversity as long-term goals.

This study disproves years of rumors and hearsay surrounding the ancient Battle of Raphia. It seems more likely that Polybius's sources were largely exaggerating.

Blackflies

Blackflies comprise 26 genera and some 2200 species, and they are a nuisance. As in mosquitoes, the female requires a blood meal for egg maturation, but their strategy for getting it from us mammals is a bit more violent. Admittedly, the bites are shallow, but they are accomplished by first stretching the skin using teeth on the labrum and then abrading it with the maxillae and mandibles, cutting the skin and rupturing its fine capillaries. Feeding is facilitated by a powerful anticoagulant in the flies' saliva. As the flies are rather tiny, one does not feel the process of damaging the skin. The first indication that one was bitten is usually the tickling sensation of blood flowing down the skin. This can be followed by itching and localized swelling. Swelling can be quite pronounced, depending on the species and the victim's immune response. The blackflies' swarming behavior can make matters worse and render any outdoor activity unpleasant or intolerable, even when one encounters species that do not require blood meals. Here in Canada black flies have built quite a reputation, as nicely demonstrated in this song:


Unfortunately the story has a far more serious background as some black fly species serve as vectors for parasites:
The most important human parasites transmitted by blackflies are the nematodes Onchocerca volvulus, the causative agent of river blindness and Mansonella ozzardii, which causes Mansonelliasis or ‘serous cavity filariasis’. In Latin America, blackflies are thought to be responsible for outbreaks of endemic pemphigus foliaceus and to be the aetiological agent of the Altamira haemorrhagic syndrome. Blackflies also transmit pathogens to domestic livestock, resulting in increased mortality, reduced weight, decreased milk production and malnutrition.

A new study not only adds new barcodes for a number of blackfly species to BOLD but, perhaps more importantly, evaluates the efficacy of various primers for DNA barcoding old, pinned museum specimens of blackflies. Of course, freshly collected specimens are more desirable, simply because a full-length barcode fragment can easily be obtained from them. However, sometimes museum specimens represent the only available samples for rare or otherwise difficult-to-acquire species. In addition, a barcode from a type specimen could resolve taxonomic uncertainty when putative new species are found.

Unfortunately, the use of museum specimens to generate DNA Barcodes can be challenging due to factors such as DNA degradation, contamination and uncertainty regarding details of specimen collection and preservation.

The colleagues have utilized primer sets that amplify small overlapping fragments, which can be combined during post hoc analysis to form a full-length or near full-length DNA Barcode. Here are their results copied from the abstract of the study:

We analysed 271 pinned specimens representing two genera and at least 36 species. Due to the age of our material, we targeted overlapping DNA fragments ranging in size from 94 to 407 bp. We were able to recover valid sequences from 215 specimens, of which 18% had 500- to 658-bp barcodes, 36% had 201- to 499-bp barcodes and 46% had 65- to 200-bp barcodes. Our study demonstrates the importance of choosing suitable primers when dealing with older specimens and shows that even very short sequences can be diagnostically informative provided that an appropriate gene region is used.

Sounds a little like good old shotgun sequencing with subsequent contig assembly, just on a much smaller scale and much more targeted. It also sounds like a way to go after all those DNA-bearing type specimens in museums: laborious, but perhaps the only way to get a barcode for some species.
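Purely to illustrate the post hoc assembly idea (a naive sketch of my own, not the primer design or software the authors used): overlapping amplicons can be stitched together by looking for exact suffix/prefix overlaps. The toy fragments below are invented.

```python
def merge_pair(left, right, min_overlap=10):
    """Merge right onto left using the longest exact overlap of at least min_overlap bases."""
    for size in range(min(len(left), len(right)), min_overlap - 1, -1):
        if left.endswith(right[:size]):
            return left + right[size:]
    return None  # no usable overlap found

# Invented toy fragments, each overlapping the previous one by at least 10 bases.
fragments = [
    "ACTTTATATTTTATTTTTGGAGCTTGAGC",
    "GGAGCTTGAGCAGGAATAGTAGGAACATC",
    "TAGGAACATCTTTAAGAATTTTAATTCGAGC",
]

contig = fragments[0]
for fragment in fragments[1:]:
    contig = merge_pair(contig, fragment)

print(contig, len(contig))  # one longer fragment assembled from three short amplicons
```

Real pipelines would of course align the reads, handle sequencing errors, and reconcile disagreements between overlapping fragments, but the principle is the same.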

Thursday, January 9, 2014

The Polar vortex and a fundamentalist

Usually I don't engage in discussions that involve creationism or climate change denial. Both world views are unscientific, unworldly, and closed-minded. The problem is that most supporters are so stubborn and unwilling to accept facts and scientific evidence that any attempt to convince them otherwise seems futile. The only remaining option is to publicly ridicule them and show the world how farcical most of their claims are. Some of my more senior colleagues (e.g. Jerry Coyne, Larry Moran, P.Z. Myers) have developed exquisite mastery in the art of fighting the impairment of science by plain stupidity. Up to this point their efforts were sufficient to keep me quiet and focused on the main purpose of this blog, reporting about advancements in biodiversity science. 

Unfortunately, there is a certain level of ignorance, paired with a political agenda, that drives me up the wall. So it happened yesterday when I went through the news of the last few days. I could hardly believe what came out of the mouth of the infamous Rush Limbaugh:
"Right on schedule, the media have to come up with a way to make it [the cold snap] sound like it's completely unprecedented, because they've got to find a way to attach this to the global warming agenda."

"They're [liberals] perpetrating a hoax." "They are relying on their total dominance of the media to lie to you each and every day about climate change and global warming."

Limbaugh noted that media outlets have created a "polar vortex" to exaggerate the harsh weather conditions by publishing "fraudulent pictures," including photographs of the North Pole "melting" in order to convince people that "we're responsible -- we're causing it."

"Any weather extreme now is said to be man-made and therefore it fulfills the leftist agenda," he added. "Obviously there is no melting of ice going on at the North Pole."

It is bad enough to live in a country whose conservative government formally abandoned Canada's commitments under the Kyoto Protocol. Furthermore, Environment Canada confirmed not long ago that Canada is not even close to being on track to meet its 2020 emissions target under the subsequent Copenhagen Accord. 

But now this rabble-rouser from Missouri claims that the meteorological term "polar vortex" is an invention of the (left-wing) media designed to link our current weather to global warming. Even worse, the US government had to release an official statement to counter Limbaugh's horseplay. That is actually scary, as it demonstrates how much power ignorance has in the United States. Where else in the world would a government see the need to comment on such ridiculous statements? It was bound to happen: extreme cold spells, like the one we have been experiencing here in Canada and the United States over the last couple of days, are being used in an attempt to disprove global warming. As a matter of fact, no single weather episode can prove or disprove global climate change. Actually, there is evidence suggesting that this kind of extreme cold is a pattern we can expect to see with increasing frequency as global warming continues.

So let's get the facts straight:
The polar vortex is by no means something new or something rare. It is a permanent atmospheric feature all year round existing at the North and South Poles. They are a circulation (on a planetary scale, not a mesoscale like a tornado, so it’s big) and are located from the middle troposphere to the stratosphere so it is an upper level phenomenon. The polar vortices are strongly reliant on large scale temperature gradients so in the winter, they are at their strongest due to the temperature gradient between the equatorial regions and the poles. The term “polar vortex” has been used in scientific papers since the 1940’s.
Dayna Vettese, Meteorologist

Limbaugh has repeatedly used the term "environmentalist wacko" when referring to environmental advocates, mainstream climate scientists, and other environmental scientists. Here in Canada I have often heard the term "tree hugger" used in a similar context. I am still pondering what I would like to be called. Maybe "environmental tree-hugging wacko with a PhD". 

Anyway, for me there are only two main reasons for climate change denial: ignorance or greed. Pick one, Mr. Limbaugh.

Wednesday, January 8, 2014

Many roads lead to Barcode libraries

Gnaphosa parvula from Churchill, Manitoba
Coincidentally, two papers on spider DNA Barcoding were published around December. They describe two ways to build a reference library of DNA Barcodes, which I find very interesting, especially as they document how complementary different approaches can be. A colleague of mine (Rodger Gwiazdowski) is even more fascinated by this, as he is particularly interested in the nuts and bolts of library building, and we have already had a few discussions on the topic. Those were usually fueled by blog posts (either mine or others' on new papers). So I guess there will be another one after this contribution. Looking forward to it.

The first study I wanted to introduce is from the colleagues here at BIO:
Arctic ecosystems, especially those near transition zones, are expected to be strongly impacted by climate change. Because it is positioned on the ecotone between tundra and boreal forest, the Churchill area is a strategic locality for the analysis of shifts in faunal composition. This fact has motivated the effort to develop a comprehensive biodiversity inventory for the Churchill region by coupling DNA barcoding with morphological studies. The present study represents one element of this effort; it focuses on analysis of the spider fauna at Churchill.

The spiders were collected during the few snow-free months of summer over six years. The collections cover a wide range of habitats near Churchill, and different methods were used (e.g. hand collecting, pitfall traps, sweep nets). Most specimens were obtained through general collecting efforts by field course students; I actually remember supervising students on two of those trips. More targeted sampling was done by the senior author of the paper (our resident spider man Gerry Blagoev). 

The result of those efforts is the first comprehensive DNA Barcode reference library for the spider fauna of any region. The researchers found 198 species among the 2704 specimens that were barcoded, tripling the species count for the Churchill region. Diversity estimates suggest that there might be another 10-20 species awaiting discovery. They did not detect 22 species reported in earlier work from the region, which means that the total Churchill fauna may include nearly 250 species. As collections were made exclusively during the snow-free summer, vernal species associated with snow edges, for example, were unlikely to be sampled. Knowing this will help focus future efforts to complete this unique library.
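
The paper's "another 10-20 species" figure comes from a diversity estimator, and while I won't guess which one the authors used, estimators of this kind typically extrapolate from how many species were seen only once or twice. As a hedged illustration, here is the bias-corrected Chao1 estimator in a few lines of Python, run on made-up numbers rather than the actual Churchill data:

```python
def chao1(species_counts):
    """Bias-corrected Chao1 estimate of total species richness.
    species_counts: number of barcoded specimens per observed species."""
    s_obs = sum(1 for n in species_counts if n > 0)   # observed species
    f1 = sum(1 for n in species_counts if n == 1)     # singletons
    f2 = sum(1 for n in species_counts if n == 2)     # doubletons
    return s_obs + (f1 * (f1 - 1)) / (2 * (f2 + 1))


# Hypothetical counts (NOT the real data): 198 observed species,
# of which 30 are singletons and 20 are doubletons
counts = [1] * 30 + [2] * 20 + [10] * 148
print(round(chao1(counts)))   # -> 219, i.e. about 21 species still unseen
```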

When building such a reference library, specimens must either be freshly collected or taken from an existing collection. The latter might not be feasible for filling the gaps in the Churchill library because past collecting efforts there have been very limited; there simply isn't much material available. But what if the situation is different, and a fauna has been collected regularly over several decades and deposited in a museum collection such as the Naturalis Biodiversity Center in Leiden?

The second study on my list addressed the question of whether we can predict which specimens in a museum collection are likely to yield a DNA Barcode sequence. If so, one could optimize resources by wisely selecting museum specimens for sequencing and by planning fresh collections to fill the remaining gaps. The study focused on Dutch spiders.

Thirty-one target species were selected, and for each of these a series of increasingly older specimens was sequenced. This was supplemented with freshly collected material representing nearly 150 Dutch spider species. The scientists recorded which specimens successfully produced DNA Barcode sequences and which failed, and they also experimented with different DNA extraction techniques.

For freshly collected specimens, body size is not correlated with sequencing success. Larger species, however, seem to have a longer DNA Barcoding shelf life than smaller ones: with common destructive extraction methods, small spiders yield useful amounts of DNA for only a few years, while those with a body length above 3 mm can yield a barcode sequence for about 20 years after collection.
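
I don't know how the authors actually derived those shelf-life figures, but one straightforward way to get numbers like "about 20 years" out of such data is a logistic regression of sequencing success on specimen age and body size, with shelf life defined as the age at which the predicted success probability drops below 50%. Here is a minimal sketch on simulated data; the coefficients and thresholds are invented purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated data: specimen age (years), body length (mm), and whether
# a barcode sequence was recovered (1) or not (0).
rng = np.random.default_rng(0)
age = rng.uniform(0, 40, 300)
size = rng.uniform(1, 10, 300)
p_true = 1 / (1 + np.exp(-(4.0 - 0.3 * age + 0.4 * size)))   # invented "truth"
success = rng.binomial(1, p_true)

model = LogisticRegression(max_iter=1000).fit(np.column_stack([age, size]), success)

def shelf_life(body_length_mm):
    """Age (years) at which predicted sequencing success first drops below 50%."""
    ages = np.linspace(0, 60, 601)
    X = np.column_stack([ages, np.full_like(ages, body_length_mm)])
    probs = model.predict_proba(X)[:, 1]
    below = np.where(probs < 0.5)[0]
    return ages[below[0]] if below.size else np.inf

print(shelf_life(2.0), shelf_life(6.0))   # small spiders vs. larger ones
```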

Nondestructive extraction techniques can significantly increase the chances of obtaining a barcode sequence. Even small spiders with a body length of 4 mm or less yield DNA Barcode sequences up to an age of about 15 years while larger spiders can yield barcode sequences for a considerably longer time.

Now that is good news for all the skeptical curators out there. The success of nondestructive extraction demonstrated here, coupled with the need to preserve museum specimens for a variety of research purposes, bodes well for museum collections as source material for DNA Barcode libraries.