Nature Network

The latest blog posts from across Nature Network

SciCom12 - Impact to Heart Attack (from Suzi Gage's blog)

Tue, 05/15/2012 - 7:34pm

I’ve just returned from the British Science Association’s Science Communication Conference, a two-day event in London for everyone from researchers like me who communicate alongside their ‘day job’, to press officers, to funding bodies. It was my first time attending, and it really was an excellent event.

I went up to London on Sunday evening for Science Showoff, the recommended pre-conference entertainment. It didn’t disappoint, and has me excited for Science Showoff Bristol (featuring a set by me) in July!

On to the conference proper. The plenary session was an inspirational talk by Lisa Jardine. The theme of the conference was ‘Impact’, and Lisa really gave us all food for thought, discussing her father’s experiences with his series ‘The Ascent of Man’, and what’s changed in terms of science communication since then. It was somewhat TV-centric, yet her discussion of the shifts in the public’s trust in experts, and in their ability to be armchair internet researchers, is something science communicators in all fields can consider.

The networking sessions were in my view particularly successful. A random piece of paper each and a nifty algorithm meant that in the space of eight five-minute sessions I met 34 other delegates, and got a one-minute snapshot of their work and interests. I was particularly lucky and met no-one I already knew. A few email addresses were exchanged and names jotted down; very useful.

Breakout sessions followed after lunch, and I plumped for ‘Positively Uncertain’. As an epidemiologist, explaining risk and uncertainty is part of my everyday work, so I was intrigued to see how others did it. The panel included Peter Gibbs, a weatherman from the Met Office, and Amanda Burls, who teaches postgraduate evidence-based healthcare at Oxford. It was great to see completely different types of uncertainty being tackled. Gibbs was fascinating, but Burls’ comments were more relevant to me. It made me appreciate the quality of teaching we have at the University of Bristol on such matters! The scariest slide Burls presented showed graphically the percentage values healthcare professionals put on words like ‘rarely’, ‘sometimes’ and ‘often’. Let’s just say the range was ENORMOUS. As I said on Twitter, the meaning of ‘uncertain’ is… uncertain. Although, as Bertrand Russell said, some things are much more nearly certain than others! Glad that’s cleared that up!

Tuesday began with a debate about events like the Olympics and their relationship with science. While some in the audience wondered whether science had been ‘tacked on’ to the Cultural Olympiad (there’s no mention of the word ‘science’ anywhere in the Olympiad brochure), the consensus was that events like the Olympics provide a great opportunity. We were encouraged to be proactive if we felt we had something that could be linked to such events. Collaborations benefit everyone involved.

The session I found the most eye-opening was about Youth Voice. I learnt about CREST, a scheme set up by the BSA to give young people a voice in science. It looks like a great scheme and you can learn more here. The FutureMorph website is another fantastic resource: science careers advice for teenagers. I was already aware of the website as I wrote an article for them last year, but it was really interesting to hear how youth-led they were. The final speaker presented a wonderful investigation where primary school children were asked for their views about how they were taught science, and then themselves helped to interpret the findings. A brilliantly designed and implemented piece of research, with a really important conclusion that I worry may sometimes get forgotten: giving children a voice means you must act on what they say, not just listen. My only criticism of this session was that there wasn’t someone from I’m a Scientist on the panel; that event is Youth Voice to a tee. The kids are in charge throughout, from deciding which scientists take part, to quizzing them, to voting them off!

I’m running out of space, but other highlights included the Soapbox session. I particularly loved the idea of Lab 13; I’ll be investigating that further. Not to mention ‘immersive science communication’, perhaps too graphically demonstrated by a speaker getting up onstage and having a very convincing heart attack in front of a room of people. I think it might be irresponsible; I’m surprised ambulances weren’t called. People were running onstage to try and help. That said, I certainly talked about it afterwards. Lisa Jardine said ‘you’ve got to make your subject matter to the audience’, and this man collapsing feet from my seat certainly mattered to me.

To conclude, I’d say it’s a fantastic conference, and as a researcher I would recommend that anyone who finds conference mingling a bit daunting should go to this one. Everyone’s so friendly and chatty that it’s easy to go and talk to someone. My confidence in my ability to engage with strangers has grown hugely; I hope it transfers to my next conference!

Get Ur Geek On (from Peter Etchells' blog)

Tue, 05/15/2012 - 4:04pm

There was a time, in the (embarrassingly) not-too-distant past, when being called a geek was a slur. If you were a geek, you were socially inept. You spent all your time on technology-, science-, or maths-based hobbies. You might have been shy and retiring. You were not cool.

Not any more. No longer is ‘geek’ synonymous with a drive to do science and science alone. It’s a way of thinking objectively: taking a step back, assessing the evidence, and coming to informed conclusions about things. Now, ‘geek’ is increasingly becoming a badge of honour.

Why am I pontificating about this? Well, Mark Henderson’s ‘The Geek Manifesto’ was published this week, and is a rallying cry for geeks everywhere to stand up and make themselves heard. Not just in science, but in politics, healthcare, the media, the justice system, the educational system, and anywhere else where rational, evidence-based approaches can lead to sound and sensible policies. It’s a superb book, and if you haven’t already devoured it, you really should. Better people than I have already written glowing reviews, but I wanted to provide my own thoughts and experiences on the media section in particular.

We’ve already seen lots about how scientists might engage more effectively with the media in getting research of interest out into the public in an honest but digestible way. When this sort of collaboration breaks down, it doesn’t help anyone; research gets misrepresented and misinterpreted, and it paints scientists as self-interested sensationalists. Sometimes it’s the fault of the journalist; sometimes (and more often than some would care to admit) the fault lies with the scientist. It’s up to the geeks, from whichever walk of life they might come, to provide a barrier against this sort of problem.

I started this blog because I felt like I was one of those geeks, and I was getting tired of seeing science improperly reported in the media. But since Counterbalanced started, I’ve found a whole wealth of unexpected benefits to blogging. I’ve had lots of brilliant opportunities to gain insight into how scientists communicate with the media, and vice versa. Obviously there’s still tons for me to learn, but one thing that I hadn’t anticipated is that it’s really helped me gain perspective on why I wanted to become a researcher in the first place. I’ve thought long and hard about the research that I do, and how I might communicate it to the world. Because doing cool new research is great, but it’s infinitely more fun when you can share that new stuff with other people in a way that gets them excited about it too. This comic from xkcd sums it up nicely.

Furthermore, being critical about research in the media has made me much more critical of my own work, and hopefully that will make it better and better in the long run. If it doesn’t, then I really hope that someone picks me up on it, maybe in a blog like this one.

Reading The Geek Manifesto reaffirmed my belief that as many people as possible who are involved in front-line scientific research should be blogging. It helps to get new findings out to more diverse audiences, and it acts as a great neighbourhood watch scheme for picking up on dodgy science claims, wherever they might be. At the very least, it helps you improve your writing skills. More geeks need to be blogging.

And so, with the greatest respect, I have to disagree with Prof. Alice Roberts’ view back in January that it’s not ‘great to be geek’. Much to the contrary: I think now, more than ever, being geeky is pretty damn awesome.

Recovery after world's largest tundra fire raises questions (from Liz O'Connell's blog)

Tue, 05/15/2012 - 2:17pm

by Ned Rozell

[Image: The scar from the Anaktuvuk River fire of 2007, which scorched an area as large as Cape Cod. NASA MODIS image.]

Four summers ago, Syndonia Bret-Harte stood outside at Toolik Lake, watching a wall of smoke creep toward the research station on Alaska’s North Slope. Soon after, smoke oozed over the cluster of buildings.

“It was a dense, choking fog,” Bret-Harte said.

The smoke looked, smelled and tasted like what Bret-Harte has experienced at her home in Fairbanks, but the far-north version was composed of vaporized tundra plants instead of black spruce and birch. The 2007 Anaktuvuk River fire, which burned an area the size of Cape Cod, is the largest fire ever recorded in tundra. It was the first wildfire in the area since the pyramids were being built in Egypt, about 5,000 years ago.

Bret-Harte and others working at the research station knew they were witnessing something unusual — or maybe seeing the future. They found funding to study the burn, and time in their schedules to get their feet on the black ground. The group of scientists, led by Michelle Mack of the University of Florida, collaborated on a study published recently in the journal Nature.

Bret-Harte, a plant specialist, just returned from a helicopter trip to the site of the big fire. Her close-up images show a green, lush landscape as the tundra recovers nicely after four summers.

“It’s not back to what it was before — the shrubs are small,” Bret-Harte said. “But in 10 years it will look pretty similar over much of the area.”

The new vegetation is photosynthesizing with such vigor that it is taking up as much carbon dioxide from the air as nearby tundra that did not burn in 2007, Bret-Harte said. This is quite a change compared to the staggering amount of carbon the fire added to the atmosphere four summers ago. The researchers calculated that the smoke from the 2007 fire spewed about half as much carbon dioxide as all arctic vegetation in the world sucked in during an average year.

If the tundra burned like that every year, in a flash the Arctic could turn from a place where carbon dioxide is pulled from the atmosphere and locked away into a carbon dioxide generator that would further warm the world.

[Image: The great Anaktuvuk River tundra fire of 2007. Photo by Michelle Mack.]

“The carbon that was lost in this fire represented about 30 to 50 years of accumulation in the soil,” Bret-Harte said. “But if you burned it again now, you’re getting into the deeper, older carbon. You’d be burning away this bank of carbon stored in the soil over thousands of years. That would be huge.”

Was the 2007 Anaktuvuk River fire a freakish, one-time event, or a sign of things to come? Bret-Harte said she doesn’t know, but she does know the conditions that led to the 2007 event. A lightning strike ignited the tundra in mid-July. Wet soils and vegetation snuff out most tundra fires, but this one endured because of an exceptionally dry summer. The fire smoldered for a few months until dry Chinook winds curled over the Brooks Range in September, fanning the fire to life.

“It burned most of the area in five or six days,” Bret-Harte said.

Though the giant tundra fire of 2007 happened due to a combination of rare conditions, at least one of those factors is becoming more common. According to sensors maintained by workers for the Bureau of Land Management, lightning has struck Alaska’s North Slope much more frequently lately. From a steady rate of a few thousand lightning strikes per year from the mid-1980s until the late 1990s, strikes have jumped to about 20,000 each year in the last decade. More lightning strikes and warmer summers might change what people know as a smoke-free northern Alaska.

Bret-Harte wonders, “Is this like a tipping point, moving us to a new regime on the North Slope?”

Climate Change Watch at FrontierScientists. Originally published in the Alaska Science Forum, September 5, 2011, Article #2080, “Recovery after world’s largest tundra fire raises questions” by Ned Rozell. This column is provided as a public service by the Geophysical Institute, University of Alaska Fairbanks, in cooperation with the UAF research community. Ned Rozell is a science writer at the institute.

Spreading the reasonable word (from Lee Turnpenny's blog)

Tue, 05/15/2012 - 8:51am

Do you ever wonder, like I do, whether blogging here is becoming (has become) akin to pissing in the wind? I guess that depends on why you do it… a genuine desire to advance/promote/dis(seminate) some standpoint/information/opinion/argument; a vehicle for CV-enhancing self-promotion; an echo chamber for the attention-seeker?

As I’m no longer a working scientist, I do ask myself this. But then I’m encouraged by Nicoli Nattrass’s recent piece in New Scientist in my belief that doing this is a worthwhile endeavour – even though, nay, because I’m (technically) no longer a practitioner of the caliginous art. And, what with the Twitter-dissing Blue Spectre – who is supposedly much busier than I – considering quizzical frown-inducing texts a worthwhile use of his time (although I personally wish he’d devote more of it to samurai-ing virtual fruit rather than the livelihoods of the un-rich), I don’t see why I should beat myself up about doing it unpaid.

And even though my recent scoffing at this year’s Templeton Foundation prize award might now be argued as unwarranted on account of its recipient’s subsequent philanthropy, I maintain that this award is simply an apologetics marketing ruse. (I wonder whether the DL will send the BS a message expressing his sadness and condolences now that the latter’s text buddy is in the dock.) So, allow me to alert you to another instance of the need to be on our guard against the march of pseudoscience’s educational infiltration. We can’t all get our views into national newspapers; some of us are even prevented by our local ones, lest we offend one or two types too sensitive to withstand free and warranted criticism of their respective comfort blankets. So… keep on spreading the word…

Sexy primes | video | (from Grrl Scientist's blog)

Tue, 05/15/2012 - 3:00am

SUMMARY: What happens when two numberphiles play with prime numbers? Today’s video answers that vital question that I know has been burning in the back of your mind.

The Doughnut Universe (from Graham Morehead's blog)

Mon, 05/14/2012 - 8:49pm

In an interview, Stephen Hawking was asked, “What’s the most common misconception about your work?”

He replied, “People think I’m a Simpsons character.” [Link]

He was, in fact, in several Simpsons episodes. In one of them, while sharing a beer with Homer, he is seen discussing the shape of the universe. Hawking tells Homer, “Your theory of a doughnut-shaped universe is intriguing… I may have to steal it.” We all know Homer has a doughnut fixation, but this is a reference to a real theory.

FLATLAND

Friends and family sometimes ask me to explain this kind of thing. Let me begin with a little background: there is an old book I like, called “Flatland,” by Edwin Abbott. In it, Abbott describes a two-dimensional world inhabited by the likes of circles, squares, and triangles. He leads the reader to imagine themselves in this 2D world. If you were a square in this world, you would have no concept of up and down. You would only know forward/backward and left/right. You would have no way to conceptualize any other dimension.

Now go back to your 3D world, but look down at that square. Let’s call that square “Bob.” Watch as Bob moves around and interacts with other 2D beings. Now take these same shapes and put them on the surface of a large sphere. If the sphere is large enough, Bob will not know he was moved to a world with a different shape. How could he? As far as Bob knows, he can still move forward/backward, left/right. From his perspective, there is no perceivable difference between an infinite flat world, and the surface of a large sphere. He will move along the surface of the sphere while thinking he is still in an infinite flat world.

Eventually, Bob may decide to go on a journey. Here’s something all of us know about spheres: if Bob travels far enough in a straight line, he will eventually return to where he started. We understand spheres, so it makes perfect sense to us, but try to think of it from Bob’s perspective. Bob would be flabbergasted. After a long journey, believing he had been putting more and more distance between himself and home, there it was! The last thing he expected to see was his home.

Now imagine that Bob is on a doughnut-shaped surface instead of a sphere. He would still be able to reach home by traveling in a straight line, but the journey’s length would be different depending on the direction of travel. How could Bob ever differentiate between a spherical and a doughnut-shaped surface? All of Bob’s tools for interacting with his world operate purely in two dimensions. Short of traveling throughout the entire universe multiple times, there is no way he could know.
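Bob’s direction-dependent round trips are easy to put numbers on if we model the doughnut as a flat torus (a simplifying assumption: coordinates just wrap around, with two circumferences a and b; the numbers below are made up). A minimal sketch:

```python
import math

def sphere_loop_length(radius):
    """On a sphere, every straight-line (great-circle) round trip
    has the same length, regardless of direction."""
    return 2 * math.pi * radius

def torus_loop_length(a, b, p, q):
    """On a flat torus with circumferences a and b, a straight-line
    trip that wraps p times one way and q times the other closes up
    after a length that depends on the direction taken."""
    return math.hypot(p * a, q * b)

# Bob's round trips on a torus with circumferences 3 and 10:
short_way = torus_loop_length(3, 10, 1, 0)   # the short way round: 3
long_way  = torus_loop_length(3, 10, 0, 1)   # the long way round: 10
diagonal  = torus_loop_length(3, 10, 1, 1)   # a diagonal wrap: sqrt(109)
```

On a sphere every such round trip has the same length, so the direction-dependence is exactly the clue Bob could exploit, given enough patience to travel the whole world.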

TOPOLOGY

Now for a non sequitur into topology: have you heard of the Four-Color Theorem? Any map can be colored with just four colors in such a way that no two adjacent regions have the same color.
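The theorem is about existence, not ease: actually finding a four-coloring of a given map is a search problem. Here’s a toy sketch (naive backtracking, not anyone’s published algorithm) that colors a map given as an adjacency dictionary:

```python
def color_map(neighbours, colours=("red", "green", "blue", "yellow")):
    """Assign one of `colours` to every region so that no two
    neighbouring regions match; returns a dict, or None if impossible."""
    regions = list(neighbours)
    assignment = {}

    def backtrack(i):
        if i == len(regions):
            return True
        region = regions[i]
        for c in colours:
            # Only use a colour no already-coloured neighbour has.
            if all(assignment.get(n) != c for n in neighbours[region]):
                assignment[region] = c
                if backtrack(i + 1):
                    return True
                del assignment[region]
        return False

    return assignment if backtrack(0) else None

# A tiny "map": four regions all touching each other.
map4 = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"A", "B", "C"},
}
colouring = color_map(map4)
```

Four mutually touching regions force all four colours (three are provably not enough for this map), which shows why four is the minimum; the hard part of the theorem is showing a planar map can never force a fifth.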

Mathematicians have known about this problem for a long time, and many smart people have tried and failed to prove it (most famously Alfred Kempe, whose 1879 “proof” stood for over a decade before a fatal flaw was found). Mathematicians have also long known that different shapes need different numbers of colors. In 1890, Percy John Heawood conjectured an equation to determine this number. According to Heawood’s formula, a sphere needs just four colors, just like a plane, but a doughnut (AKA torus) requires seven. It makes intuitive sense because the sphere seems simpler somehow. Heawood came up with this conjecture, but he was unable to prove it.
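For the curious, Heawood’s equation gives the number of colours for a surface with g holes (genus g) as ⌊(7 + √(1 + 48g)) / 2⌋. A quick sketch to check the numbers quoted here:

```python
import math

def heawood_number(genus):
    """Heawood's 1890 formula: colours needed on a surface with
    `genus` holes is floor((7 + sqrt(1 + 48*genus)) / 2).
    math.isqrt keeps the arithmetic exact: flooring sqrt first
    gives the same result as flooring the whole expression."""
    return (7 + math.isqrt(1 + 48 * genus)) // 2

print(heawood_number(0))  # sphere: 4 (the case Heawood couldn't prove)
print(heawood_number(1))  # torus: 7
print(heawood_number(2))  # two-holed doughnut: 8
```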

In 1968, Gerhard Ringel and J. W. T. Youngs completed the proof that Heawood was right about the torus and every other surface with holes, but their methods couldn’t prove anything about the sphere. This was a surprise. A sphere is simpler, right?

In 1976, after their program had run on a supercomputer for around 1,200 hours, Kenneth Appel and Wolfgang Haken announced a proof of the four-color theorem for the sphere, and it was anything but simple. Their program identified 1,936 fundamental map configurations to which all others were reducible (later whittled down to 1,482). Then came the criticism. Errors were found.

By 1977 they had addressed those errors and published a proof that was 50 pages long, with an appendix of 85 pages containing almost 2,500 additional diagrams, and 400 microfiche pages with further diagrams and verifications of claims made in the main text. The color problem for the sphere proved to be immensely more complicated than it was for the torus. Why?

As if that weren’t enough, more errors were found. Once those were addressed, even more errors were found. The enormous size of the proof carried the consequence that years were required to find most of its errors.

Eventually, in 1994, Paul Seymour and his collaborators (Neil Robertson, Daniel Sanders and Robin Thomas) found a more stable algorithm to perform reductions of map configurations, cutting the number of fundamental configurations down to 633. Their proof now stands as the accepted one. Apparently, four colors are enough for a sphere, but why did it take so long? Why was it so much harder than it was for a torus? I don’t know. I’m not even sure if there’s a connection to the shape of a universe, so let’s quit this digression.

OUR UNIVERSE

Even though it’s impossible, imagine you had a starship that could traverse the entire universe — our universe, with its apparent three spatial dimensions. Some physicists believe that if you go far enough in a perfectly straight line, you will end up right where you started! Just like Bob the Square, you will feel like you are going further and further from home, until one day, you see the Milky Way out your window.
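One way to make the “straight line back home” idea concrete is a universe with wrap-around coordinates (modelling it as a flat 3-torus; the size used here is entirely made up for illustration):

```python
def travel(position, step, size):
    """Move through a universe whose three coordinates wrap around
    at `size` (a flat 3-torus): go far enough and you're back home."""
    return tuple((p + s) % size for p, s in zip(position, step))

home = (0.0, 0.0, 0.0)
pos = home
# Fly in a perfectly straight line, one unit at a time:
for _ in range(100):
    pos = travel(pos, (1.0, 0.0, 0.0), size=100.0)
# After a full lap of a size-100 universe, we're home again.
```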

How on earth could this be possible? It’s possible if the universe is a hypersphere. This kind of thing is not easy to imagine. We humans can picture Bob’s situation, but our own may be harder.

What’s the difference between a sphere and a torus? A hole. The doughnut has a hole. Our universe might be a hypersphere, but it might also be a hypertoroid (a four-dimensional doughnut), which is just as hard to visualize. Stick with Bob the Square: the limitations on Bob are analogous to our own. We only understand three spatial dimensions. All of our tools interact only with three spatial dimensions. How are we to know what kind of universe we actually live in?

In 2003, Tegmark et al. published their analysis of the cosmic background radiation left over from the big bang. We don’t have the time or ability to traverse the universe, but this radiation has. One thing became clear from their mapping of the radiation: it wasn’t symmetrical. The levels of radiation visible now, and the fact that they are stronger along one axis, hint that our universe may not be infinite in all directions. It may be finite, and it may even be shaped like a four-dimensional torus…

We all live in a giant doughnut.


Are all Papers Published During a PhD Treated Equally? (from Richard Williams' blog)

Mon, 05/14/2012 - 5:48am

I’ve now been a PhD student for 20 months. As I rapidly progress from the beginning of my registration period towards the end, I’ve begun to wonder whether all papers published within the lifetime of a PhD are treated equally.

As you may recall, I was lucky enough to present research from my masters degree during the first year of my PhD, and off the back of that was also fortunate to be invited to submit an extended paper for consideration within a special issue of BMC Bioinformatics. This was submitted late last year, and we received review comments at the end of March this year. These review comments were on the whole positive, and all of them very constructive, so following an extensive re-write of a few sections, the manuscript was resubmitted at the end of April. We are now awaiting a decision from the Guest Editorial Team.

As stated above, the paper was based on work from my masters degree. I am yet to publish from my PhD, as the research, although progressing, is taking much longer than expected due to a couple of unforeseen issues. My aim is to get a computational model up and running before the end of my second year, and to perform a number of in silico experiments early in my third year, the objective being to generate novel results so that a paper can also be written from this work.

This has got me wondering: would the paper from the masters degree or the PhD be deemed more important? Or will the papers be treated equally? I’m not thinking impact factor here, but more along the lines of the requirements of future employers.

To put this in perspective, my thoughts will soon be turning to postdoc applications. Is there a minimum number of papers that will be required for my application to be taken seriously? Would the paper from the masters degree help differentiate me from other applicants? Would it have been better to have written two papers from my PhD instead, or is it just a numbers game?

I guess what I’m asking is: are all papers published during a PhD treated equally, regardless of where the data came from (i.e. masters v PhD v collaboration)?

Robins: 4 Eggs, 4 Weeks | video | (from Grrl Scientist's blog)

Mon, 05/14/2012 - 3:00am

SUMMARY: Or, “Eats, poops & leaves”: this charming video provides a glimpse of the effort it takes a pair of American robins to raise four chicks until fledging.

I love how, in spring, the world is transformed overnight from a silent, brooding place into a riot of colour, exploding with the cries of hungry baby birds, each one a hope for the future.

The Clock says 2022 (from David De Roure's blog)

Sun, 05/13/2012 - 9:30am

The clock says 20:22 which, I note to myself in passing, is also the year. I’m just finishing putting together my presentation on the Co-Evolution of Research. I’ve even linked to some images in a presentation from back in 2009, when we used to use PowerPoint (remember that?). In 2009 I talked about generations of e-Research, but I don’t think people are interested in the subtleties of the early generations any more (though I might use the Grid as an example of how we tried to do things in the old days, and how much our thinking has changed).

It’s interesting looking back for the big things. Early on in “e-Science” we focused on automation (systematic processing of “big data” – where big only meant petabytes, and we talked about datasets rather than dataflows). Then we learned how human-centric it all needed to be, so we switched to “assistive” – let the machines do what they’re good at so that the humans can get on with what only they do best. Scientists were talking about the 4th paradigm, while “digital” humanists (as we used to call them…) had been data-centric forever, and social scientists were embarking on the underpinning methodological work to embrace new forms of data – like the deluge of raw government data which helped bootstrap our Web Observatories.

But there was another major shift going on, and the “appstores” and clouds were the symptoms. The massive social engagement with the Web empowered people and innovation, captured human interactions and generated new fields of study. Behind the scenes we had high performance computing, but we were moving along the highly social axis too. The social machines guys spotted this – the need for a rethink of Computer Science where the notion of machine was something fundamentally sociotechnical. And the social scientists had already explored the territory – hybrids, cyborgs.

So now we have the assistive systems in place and it’s beginning to feel pretty automated! I had notifications this morning of 17 new papers (funny we still call them that…) on which I became a co-author overnight. Let’s hope some get validated. There’s so much stuff being generated and tested automatically that it’s hard to keep an intellectual grip on it. One of the papers was generated entirely as a side effect of an old paper which got rerun automatically with a new dataflow and a new algorithm; another came out of the crowd-sourced analysis that I launched a month ago (expensive though it is now to get a timeslot on the best “human computers”!). And I don’t particularly know the authors of any of them.

So what are we researchers to do with the automated research deluge… what is Paradigm n+1? We’ve Taylorised the bulk data processing, we have a bunch of tools to help us do our individual pieces of insight and innovation, and we get paid directly on outputs. We’re taming the sociotechnical science machine, but how do we get new intellectual insights at the higher level? My dashboard is all very well, I love the viz of my work over space and time, and I know my specialist pieces inside out. But I want to work at a level of abstraction that builds on all this, where I can be insightful and play in this amazing new laboratory/sandpit that we’ve co-evolved with machines, and to do that I need something different – a new form of scholarship perhaps?

I think we’re just beginning to discuss, describe and choreograph the problems in a new way, and perhaps this is a glimpse of the next shift. So that’s what my presentation is about; better get back to it so it can flow through our automated blogsphere as the sun comes up on the other side of the planet. I see it’s 20:32 now. Hmm, maybe I should write a blog as if it were the year 2032 and I’m looking back on today…

Agricultural adventures in a foreign country (from Barbara Ferreira's blog)

Fri, 05/11/2012 - 3:39pm

This week I cooked a couple of meals with home-grown parsley and coriander. My home-grown parsley and coriander. Somehow I managed to place seeds in soil, water them and care for them enough to see them sprout and grow into full-fledged herbs. Those of you growing tomatoes, peppers or courgettes in your backyard may not be impressed with my feat. But you should be. And here’s why.

Saying I’m not a plant person is an understatement. When I turned 15 or 16 (I can’t remember the exact age), my dad gave me for my birthday a bonsai as old as I was then. That tiny tree lived happily for a decade and a half before I laid hands on it. No more than a few months were needed for the bonsai to dwindle and die on my watch, despite my best attempts to care for it.

The Sedum spathulifolium ‘Cape Blanco’ (a rock plant) that my co-worker Karen placed on my desk last December is now at the threshold of death. “It needs almost no water, you won’t kill it,” she said, after I told her the plant would likely dry out if I were left to take care of it. It did. I am not declaring defeat yet; there are still some green leaves in there!

It gets worse. This was not my first attempt to grow parsley and coriander. In January, with herb seeds and vases in hand, I went to the nearby garden store here in Munich to buy soil. I came home and carefully planted the seeds as I had been instructed. For weeks, I watered them and waited for them to grow. Nothing. Thinking February’s cold snap had killed the seeds (or the fact that I forgot to water them several times), I tried to plant again in the spring. It was only then, on removing the old soil from the vases, that I realized I had been moving not earth, but fertilizer! Words cannot describe the putrid smell emanating from those vases. Imagine fertilizer chemicals (or decaying animal and plant matter — I’m not sure if I bought organic or inorganic fertilizer) brewing in water for weeks under a dry top layer concealing the smell. It was foul!

In my defense, I did not know the German word for soil. Only at the second attempt did I plant my seeds in Erde. And this time they did germinate (!) — possibly because of the small amounts of fertilizer still in the vases — and grew into delicate herbs with a delicious smell.

So yes, I victoriously ate my home-grown parsley and coriander this week. And I’m pleased to say I did not get food poisoning.

[Image: My herb garden!]

The National Biodiversity Network and Biodiversity Research (from Tom Webb's blog)

Fri, 05/11/2012 - 10:03am

Yesterday Nature reported on the launch of the Map of Life project, a new initiative to collate biodiversity records, which allows users to map these, to extract species lists for any area of the planet, and (ultimately) to upload their own data. Limited initially to terrestrial vertebrates and North American freshwater fish, the demo website still looks like a lot of fun. But it also reminded me somewhat of a UK-based project which has been running for a number of years, the National Biodiversity Network (and specifically the data service at the NBN Gateway). This gives me a good excuse to comment on the NBN, which I’ve been meaning to do for a while. Specifically, why hasn’t the NBN Gateway been used more by the research community?Let me first declare some interests. This question was raised by the British Ecological Society around 18 months ago, who convened a scientific working group chaired by Tim Blackburn, of which I was part. And the NBN Trust (@NBNTrust, if you like to Tweet) is actively trying to promote its potential as a research resource, and I’m writing this post partly in response to a request from Mandy Henshall, NBN Trust Information and Communications officer, to spread the word and to find out what it would take to get the data used.The NBN grew out of the need to standardise and coordinate the many thousands of local, regional or national surveys to provide a national picture of the UK’s biodiversity. The NBN Gateway is simply the portal through which these data can be accessed. And it’s become an extremely impressive dataset: currently >75M records from >700 individual datasets. The Gateway itself if really nicely designed for the general user. 
You can search on an interactive map, or by site name, or by taxon, and quickly get a list of everything that’s been recorded – fantastic if you’re planning a trip to an RSPB reserve, say, and want to know what birds you’re likely to see; equally good if you’re leading a field trip and want to prime your students about what might be there. (Worth noting too that the NBN encompasses all taxa and habitats, including some limited coverage of marine systems.) As a citizen science / public engagement project, the NBN is absolutely superb, and I urge you to go and have a play.

But does it work as a tool for academic biodiversity research? Some things it does well, for instance the (nontrivial) task of standardising taxonomy across multiple datasets. But we identified several potential shortcomings, most obviously the fact that not all data are publicly available – it can be incredibly frustrating to see a great dataset identified by your search, but not to be able to access it. Of course, the problem of data access is not restricted to the NBN, and they clearly had to make a choice: include everything with restricted access, or include only the subset of available data which can be provided completely openly. Other initiatives, for instance the Ocean Biogeographic Information System (OBIS), went this second route, the idea being that if sufficient people can be convinced to make their data available, peer pressure will mount on those who won’t. But this discussion of open data is best left for another day.

Other barriers we identified concerned the different ways that scientists like to access and download data, compared to the public. For instance, we often want to be able to access data programmatically, or at least to have an audit trail of specific queries, rather than working through nice friendly GUIs.
And often we want to download data as a simple text file for further analysis, with no bells and whistles.

Finally, there is the matter of the data itself (and pedants: yes, I’m treating data as a singular noun). The NBN contains some fantastic systematic scientific survey data, but also a lot of more haphazard observational data, which may be reliable in terms of recording the presence of a given species at a particular site, but which tells us little about absences. Suppose Mr Patel has a fascination with limpets, and has been counting them on Filey Brigg every week for years. His data would give us a fantastic picture of the limpet population, but the absence of records for barnacles or periwinkles doesn’t mean that they’re not there – crucial if you’re interested in the whole community.

Such limitations suggest that the researcher proceed with caution through the NBN Gateway; but the advantages of such a huge dataset mean that simply to ignore it may be to miss out on a terrific resource. There are already various examples of NBN data being used by students for research projects. The question is, what would it take for wider uptake by the research community?
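To make the “programmatic access, plain text file, audit trail” point concrete, here is a minimal Python sketch of the kind of workflow researchers want: take a batch of occurrence records (here a made-up JSON payload with invented field names – not the actual NBN Gateway response format) and flatten them into a simple CSV for analysis. Note the `present` column is always 1: this is presence-only data, which is exactly the absence problem described above.

```python
import csv
import io
import json

def records_to_csv(payload: str) -> str:
    """Flatten a JSON list of occurrence records into plain CSV,
    an audit-friendly, script-readable format. Field names here are
    illustrative only, not the real NBN Gateway schema."""
    records = json.loads(payload)
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["species", "site", "date", "present"])
    for rec in records:
        # Presence-only: we can record a 1, but never infer a 0.
        writer.writerow([rec["species"], rec["site"], rec["date"], 1])
    return out.getvalue()

# Hypothetical records in the spirit of Mr Patel's limpet counts.
sample = json.dumps([
    {"species": "Patella vulgata", "site": "Filey Brigg", "date": "2011-06-04"},
    {"species": "Patella vulgata", "site": "Filey Brigg", "date": "2011-06-11"},
])
print(records_to_csv(sample))
```

The same few lines scripted over many queries give exactly the reproducible audit trail a GUI cannot.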

Is exposure to angiotensin converting enzyme inhibitors related to Alzheimer's disease? (from Neil Davies' blog)

Fri, 05/11/2012 - 7:11am

One of the hypotheses I have been investigating is whether a type of anti-hypertensive drug, the angiotensin converting enzyme inhibitors, may inadvertently cause some people to develop Alzheimer’s disease. AD_image.jpg

There is currently no cure for Alzheimer’s disease. Patients diagnosed with dementia survive on average for 4 years. Alzheimer’s disease was first described in 1906, but scientists do not know exactly what causes the biological changes characteristic of the disease. We do know Alzheimer’s disease gets more common as people get older: AD_incid_rate.jpg Data from the MRC CFAS study.

This hypothesis around angiotensin converting enzyme inhibitors was motivated by research suggesting that angiotensin converting enzymes may have a role in clearing amyloid-β plaques from the brain. So angiotensin converting enzyme inhibitors which, as their name suggests, inhibit angiotensin converting enzyme, may increase the build-up of amyloid-β, increasing the likelihood of Alzheimer’s disease.

The ideal way to answer this question would be to do an experiment which randomly allocates some people to angiotensin converting enzyme inhibitors and others to another anti-hypertensive, and then see whether there were differences in the number diagnosed with Alzheimer’s. However, the effects of angiotensin converting enzyme inhibitors are only thought to build up after long periods of exposure, so participants would need to be followed in the experiment for a very long time. Mercifully, Alzheimer’s disease is relatively rare in younger people, so we would either need to run the experiment in older people, or in a very large number of people, to detect any difference in diagnosis rates. This could be hugely expensive.

Some epidemiologists, Anderson and colleagues, did the next best thing. They followed up an experiment that had already been run, the ONTARGET study.
In this trial the outcomes of people given angiotensin converting enzyme inhibitors were compared to those given another anti-hypertensive drug, angiotensin receptor blockers. Anderson and colleagues investigated whether the participants experienced differences in cognitive impairments – defined as a diagnosis of dementia, or a low score in a cognitive test, the mini-mental state examination – depending on the treatment they were allocated. They found that people allocated to angiotensin converting enzyme inhibitors were around 10% more likely to develop cognitive impairments over the four years of the experiment (odds ratio 1.11, 95% CI 0.99 to 1.25, p = 0.06). However, they found little difference in another outcome, cognitive decline (a fall in the mini-mental state exam score).

These findings are certainly not conclusive. This might be because the participants of the ONTARGET experiment were relatively young, with an average age of 66, so few participants would be expected to develop Alzheimer’s disease. Also, the ONTARGET trial’s primary outcome was death from cardiovascular causes, myocardial infarction, stroke, or hospitalization for heart failure. This means it did not necessarily have enough data to detect an effect of the drugs on cognitive impairments or cognitive decline.

Another way to investigate whether angiotensin converting enzyme inhibitors cause Alzheimer’s disease is to compare the outcomes of patients prescribed ACE-Is and other anti-hypertensive drugs as part of their normal medical care. Some more epidemiologists, Li and colleagues, did this using data from the United States. They found that patients prescribed lisinopril (an angiotensin converting enzyme inhibitor) were 23% more likely to develop Alzheimer’s disease than those prescribed angiotensin receptor antagonists (odds ratio 1.23, 95% CI 1.04 to 1.47, p < 0.001).
Patients prescribed lisinopril were also slightly more likely to develop Alzheimer’s disease than patients given other cardiovascular drugs (such as statins), odds ratio 1.04 (95% CI 0.99 to 1.11, p = 0.15). But this association was relatively weak and could be due to chance. Whilst Li and colleagues adjusted their findings for their patients’ characteristics, it is possible that these differences are due to underlying differences between patients prescribed lisinopril and angiotensin receptor antagonists in their sample. For instance, patients prescribed angiotensin receptor antagonists might be richer or younger.

I looked into this using data from the General Practice Research Database. This contains administrative data on diagnoses and prescriptions from over 600 general practices in the UK. We found evidence that fewer patients prescribed angiotensin converting enzyme inhibitors were diagnosed with Alzheimer’s than those prescribed other anti-hypertensives. However, when we looked at historical exposure to angiotensin converting enzyme inhibitors we found little evidence of an association. Again, this is frustratingly inconclusive.

The only way to conclusively prove whether long term exposure to angiotensin converting enzyme inhibitors is related to Alzheimer’s disease is with a randomised controlled trial of sufficient size in older people with higher risks for the disease.

Oh, also: Louis Theroux’s most recent show on dementia is really interesting – catch it on iPlayer. And here’s a podcast from Professor June Andrews on care services for people with dementia. It would be great to hear your thoughts on this and our other posts, and do pass on any papers or links.
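The odds ratios quoted above come straight from the papers, but the arithmetic behind them is simple enough to sketch. Below is a minimal Python illustration – using made-up counts, not data from ONTARGET or from Li and colleagues – of how an odds ratio and its Wald 95% confidence interval are computed from a 2×2 table.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
         a = exposed cases,    b = exposed non-cases,
         c = unexposed cases,  d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of the log odds ratio
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 20/100 exposed vs 10/100 unexposed develop the outcome.
or_, lo, hi = odds_ratio_ci(20, 80, 10, 90)
print(f"odds ratio {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

Notice that with these small hypothetical counts the interval just includes 1 – the same "suggestive but not conclusive" situation as ONTARGET’s 0.99 to 1.25.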

Selecting Your LOLcats (from Bob O'Hara's blog)

Fri, 05/11/2012 - 3:34am

The internet is a great thing, but it brings with it some problems. One of the future problems that we, as a civilisation, will face is the increased time we will be forced to spend finding the ideal photo for that LOLcat we want to make. The problem, of course, is that there is an infinite number of cats on the internet. So how do we search them all for that optimum picture?
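The setup here is what statisticians call an optimal-stopping problem. One classical answer – entirely my own illustration, not necessarily where the post was heading – is the “secretary problem” strategy: skip roughly the first 37% (n/e) of candidates, then take the first one better than everything seen so far. A quick simulation, for a finite stream of candidate cat photos:

```python
import random

def stop_at_best(scores, skip_frac=1 / 2.718281828):
    """Secretary strategy: skip the first ~n/e candidates, then accept
    the first one better than all seen so far. Returns True if the
    overall best candidate was the one chosen."""
    n = len(scores)
    cutoff = int(n * skip_frac)
    best_seen = max(scores[:cutoff]) if cutoff else float("-inf")
    for s in scores[cutoff:]:
        if s > best_seen:
            return s == max(scores)  # accepted this one; was it the best?
    return False  # never beat the benchmark: strategy fails

random.seed(42)
trials = 2000
# Each trial: 100 cat photos with distinct random "cuteness" scores.
wins = sum(
    stop_at_best(random.sample(range(10_000), 100)) for _ in range(trials)
)
print(f"picked the best cat in {wins / trials:.1%} of trials")
```

The theoretical success rate of this rule is 1/e ≈ 37%, which is remarkably good given that a naive random pick from 100 photos succeeds only 1% of the time – though with a truly infinite supply of cats, some finite browsing budget has to be imposed first.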

Identification of the world's smallest mammoth | video | (from Grrl Scientist's blog)

Fri, 05/11/2012 - 3:00am

SUMMARY: The world’s smallest mammoth has been discovered on Crete, and it’s the size of a newborn baby elephant! Mammuthus.

In 1904, some remarkable elephant fossils were unearthed at Cape Malekas on the island of Crete by Dorothea Bate, a famous fossil hunter. Some of these fossils appeared to be from a mammoth, a group of elephants distinct from those we now know. Mammoths differ from the other elephants in a number of ways, such as having long and gracefully curved tusks instead of straight tusks, and a domed skull instead of a flat head. But for many years, all dwarf elephant fossils found on Mediterranean islands, including these from Crete, were thought to be descendants of the mainland straight-tusked elephant, Palaeoloxodon antiquus. Indeed, this European elephant was the ancestor of nearly all other extinct dwarf elephants found on a number of Mediterranean islands, including Sicily, Malta and Cyprus. But not everyone in the scientific community was convinced that the Bate fossils were from elephants.

How folding paper can get you to the moon | video | (from Grrl Scientist's blog)

Thu, 05/10/2012 - 3:00am

SUMMARY: Can folding a piece of paper 45 times get you to the moon? This is a fun little video that asks the simple question: Can folding a piece of paper 45 times get you to the moon? By showing you the answer to that question, this video demonstrates the astonishing potential of exponential growth.
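The arithmetic behind the video’s claim is easy to check yourself. Assuming a sheet about 0.1 mm thick (a common figure for office paper), each fold doubles the stack:

```python
paper_m = 0.1e-3          # thickness of one sheet: 0.1 mm in metres
moon_m = 384_400_000      # average Earth-Moon distance in metres

thickness = paper_m * 2 ** 45   # each fold doubles the stack
print(f"after 45 folds: {thickness / 1000:,.0f} km")
print("reaches the moon:", thickness > moon_m)
```

In fact the stack already passes the moon at around fold 42 (about 440,000 km); fold 45 overshoots the lunar distance by roughly a factor of nine – that is exponential growth for you.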

Homeopathy: awareness and wariness (from Lee Turnpenny's blog)

Wed, 05/09/2012 - 8:22am

(or: Foot stamping II) Despite its open-call invitation for submission of pieces for its now occasionally empty ‘First Person’ column – ‘on topical subjects or compelling stories of personal experiences’ – my hometown’s messenger, the Leicester Mercury, continues to prefer fallacious boluses to (for example) my take (again), this time on what those tempted by homeopathy (in Leicester) ought to be aware of. So, as dividing the word count of my recent piece by a factor of approximately four took time and effort, I want it ‘out there’, so here is as good as anywhere…

__________________________________________________________________________

The theme for the recent World Homeopathy Awareness Week was ‘Homeopathy and Infertility: Helping Fertility for Men and Women.’ The swanky promotional website provides ‘Reasons for using Homeopathy’. For example, we are informed that homeopathic remedies produce ‘No Harmful Side Effects’, due to a doubly magical process termed ‘potentisation’. The chosen substance is diluted to the point where no molecule of it remains, but then ‘activated’ by ‘succussion’ (ie vigorous shaking); the supposed potency is attributed to the water retaining a ‘memory’ of that substance. But what of the memories of everything else the water has ever been in contact with? No less remarkably, these are simultaneously detoxified. We are thus expected to believe that water knows the difference.

A similar lack of evidence attends the claim that homeopathy boosts the immune system, improving overall health and preventing disease by increasing resistance to infections. This is a favourite resort of those who like to invoke our innate ‘healing energy’, a concept they never define, but which does sound convincingly scientific.

If you are still curious, some examples of popular homeopathic ‘remedies’ are provided. Their use as dietary supplements or herbal medicines, which do actually contain some of the stuff, is contentious enough.
But in homeopathic doses (ie, zilch) these versatile substances can, it is confidently claimed, aid in the treatment of all manner of physical and psychological conditions – including fertility problems. (Can cuttlefish ink actually help correct or prevent a prolapsed uterus?) However, although negative aspects of ‘conventional’ fertility treatments are listed, including their limited success rates, no such data are provided for the ‘success’ rates of homeopathy on infertility. Indeed, there is no indication of any track record whatsoever.

So why does homeopathy persist? Because a few cosy sessions with a nice, sympathetic homeopath, who can afford the time to listen without hurrying you out of the surgery, may well produce a feeling of improved well-being. At least if you’re not actually ill. And if you were, you would likely get better anyway. This is where it does its thing – not in the administering of charlatan pills, tinctures or potions. How are we to know, beyond glib conjecture, that the immune system is ‘boosted’?

It is perhaps trite to make fun of homeopathy. Then again, while the homeopathic mindset continues with its insidious self-marketing, we should guard against its cult-like reach and publicly highlight its laughable nonsense. Because there are instances when it isn’t actually funny. Like the seriously problematic issue of unproven treatments for life-threatening diseases, infertility can be a highly emotive matter, requiring considerable levels of competency and responsibility. If those purporting to treat it are giving false hope, their ethos is unethical.

Ancient poster sessions (cartoon) (from Viktor Poor's blog)

Wed, 05/09/2012 - 3:10am

I’m in the middle of pre-conference poster frenzy: still doing experiments and collecting data, while I have already started making the poster. And that’s without even mentioning the last-minute poster printing. It made me wonder why poster sessions didn’t trend in ancient times: egypt poster.PNG

Bee deaths linked to common pesticides (from Grrl Scientist's blog)

Wed, 05/09/2012 - 3:00am

SUMMARY: Two recently published scientific studies show that bee populations are being ravaged by widespread use of a particular type of pesticide, the neonicotinoids.

Our love of pesticides has been nothing short of disastrous for our insect friends, the honeybees along with the bumblebees and other wild native bee species. Two recent scientific studies point to modern pesticides as the main culprit for the often dramatic declines in both domestic honeybees, Apis mellifera, and native wild bee populations. The pesticides in question are the neonicotinoids, a family of pesticides that are chemically related to nicotine. The neonicotinoids are the first new class of insecticides introduced in the last 50 years. Instead of carrying out their deadly effects by coating the surfaces of a plant’s leaves and stems, neonicotinoids are taken up by and circulated inside flowering plants. When a bee collects nectar and pollen, she also sips a dose of these pesticides. Neonicotinoids are so named because they act as nicotinic acetylcholine receptor agonists, binding to and activating these neural receptors and causing paralysis and death. This class of pesticides is the most commonly and widely used in the world, by large-scale agricultural operations as well as in home gardening products. Two independent studies show that even low doses of neonicotinoid pesticides can impair bees’ navigation abilities, reduce the growth of bee colonies and reduce the number of new queens produced.

Dig Afognak: Revealing the Past, Strengthening the Future (from Liz O'Connell's blog)

Tue, 05/08/2012 - 8:19pm

Play in the dirt with Dig Afognak. Laura Nielsen for FrontierScientists.

If uncovering archaeological treasures and exploring local culture appeal to you more than simple sightseeing, you’ll want to check out the Kodiak Archipelago the next time you can make it to Alaska. The Afognak Native Corporation’s program Dig Afognak has visitors, archaeologists, and Native tribal members working side-by-side to find and preserve cultural artifacts and archaeological sites. NPS Alutiiq_dancers.jpg Alaska NPS: Alutiiq + Tlingit dancers

Additionally, Dig Afognak offers cultural activities with varying focuses, all meant to teach and preserve Native Alutiiq ways. One week-long cultural immersion program was called Lu’machipet, “Our Culture,” with storytelling, singing, dance and performance serving as mediums to explore Alutiiq language and oral tradition.

“Alutiiq” Native people are indigenous to the Kodiak Island Archipelago, the southern coast of the Alaska Peninsula, Prince William Sound, and the lower tip of the Kenai Peninsula. Alutiiq heritage is strong, and people have pride in their traditions. Cultural revitalization movements like the Dig Afognak program and well-preserved practices like Alutiiq basket weaving keep the remembered past alive. RaspberryIsland_fox.JPG

Alutiiq basket weavers were able to share insight with Russian curators; collections held by Russia from the times when the Russian American Company held what is today Alaska contain valuable cultural references. Meanwhile, the Alutiiq Museum and Archaeological Repository, Kodiak AK, curates and exhibits Alutiiq artifacts. The museum’s executive director, Sven Haakanson, has taught at Dig Afognak.
He also studies the appearing and disappearing petroglyphs of Cape Alitak.

After the Russian colony ended, area Alutiiq Natives faced disasters: the 1912 Mount Katmai eruption (ashfall can be found in some archaeological sites), the disruptive 1964 Great Alaskan Earthquake and tsunami, and the 1989 Exxon Valdez oil spill. Oil clean-up around the Prince William Sound disrupted archaeological sites. It was clear that steps needed to be taken to preserve the archaeological record. Dig Afognak preserves artifacts, reconstructs past lifestyles and teaches community heritage, and also draws ethno-tourists to the area. The response to these disasters shows the strength of the community. Check out the Dig Afognak Archaeological Expedition. ExxonValdez_cleanup.jpg Navy: Exxon Valdez cleanup

Learn about other Arctic Archaeology and about Paleo-Eskimo digs.

Sources:
The Alutiiq Ethnographic by Rachel Mason
Dig Afognak Archaeological Expedition
NPS Archaeological Overview of Alaska
Travel Tidings Alaska

The Nature of Learning, or the Learning of Nature? (from Paige Brown's blog)

Tue, 05/08/2012 - 2:33pm

Nano-flower2.png

Studying for my Mass Comm Theory final, I have some thoughts about learning from Nature… In 1994, psychologist Albert Bandura gave the world of mass media effects social learning theory, hypothesizing that people don’t learn by trial and error or by reinforcement and reward as much as by observing the behaviors of others. Social learning theory acknowledged (thank goodness) that “human beings are capable of cognition or thinking and … can benefit from observation and experience.” In social learning theory, learning “takes place through watching other people [or nature?] model various behaviors.”

The logical extension of Bandura’s social learning theory was into the realm of mass media effects. The theory was used to explain how adults and children both learn to engage in various behaviors that they see modeled on television or through other mass media. Social learning theory gave way to modeling theory (explaining how and under what conditions individuals adopt behaviors portrayed in the media), which subsequently gave way to the broader social cognitive theory, which focused on the “cognitive processes involved in social learning”. These theories were used to explain, for example, why and how children imitate violent behavior as seen on television or from the adults around them, given that they identify with these ‘characters’ and expect some type of reward (‘good’ or ‘bad’) from imitating the behavior.

But do humans only stand to learn from other people in their social circles or in the media? When I think of “observing the behaviors of others”, I can’t help but think beyond the borders of our own kind… into the realm of Mother Nature with all her wonderfully complex creatures and biological processes. What about learning from these processes? They sure are just as ubiquitous (even more so!) as mass media, and as observable as any behavior of the social companions around us.
I’m talking about biomimicry… a special case of learning from and imitating the behaviors and physics of the natural world around us. As Scitable’s Doaa Tawfik recently pointed out, scientists and engineers today learn such things as how to make the rotor blades of wind turbines and helicopters more efficient by imitating the dynamics of humpback whale fins! Now if we can learn from these giant swimming mammals how to improve some of our more complex technological innovations, we can learn from almost anything. And psychology scholars really used to think that we learned only from trial and error!

Unfortunately, trial and error in the world of science may be more common than we might hope. Take drug design for example… a classic case of trial and error, learning from ‘reinforcement’ (i.e. which drugs have beneficial effects in clinical trials, and which drugs don’t, after months to years of design and fabrication). Sure, technological computing innovations and complex modeling software packages are improving this process as we speak, but the days of trial and error in the making of science are certainly not coming to an end anytime soon. 744px-Bur_Macro_BlackBg.jpg

Unless… we get REALLY good, even better, at observing and imitating the natural world! From insect-path algorithms, to insect swarm intelligence, to the hydrophobic and self-cleaning properties of nanostructures on the rose petal and the lotus leaf, to the hooks on the ends of a bur’s spikes that inspired the invention of Velcro, to the reflective and anti-wetting properties of nanostructures on the butterfly wing, to the information storage system of coiled DNA structures, we have already learned a vast amount from nature in the building of our ‘human’ world.
But when I consider the vastness of the natural world and all the ways that life has adapted to dynamic conditions, finding new ways to cope with seemingly complex problems, I am astounded… I don’t think we should stop observing behaviors in the natural world anytime soon. Mimicking nature might give us some of our most innovative designs and technological and energy solutions yet. All we have to do is watch and learn.