Wednesday, 27 September 2017

Behind the scenes pt1: Giant fullerene formation through thermal treatment of fullerene soot


We recently had a paper accepted in which we showed that heating up small cages of carbon (fullerenes) leads them to fuse into giant fullerenes. The smaller fullerenes C60 and C70 usually get all the attention because they can be easily extracted using solvents (e.g. toluene). These small cages have found applications in molecule-based solar cells (organic solar cells) and superconductors (if you put some potassium atoms between the cages), and are good antioxidants. The header for this blog features C60, buckminsterfullerene, the most famous carbon cage (and also the most symmetric molecule known), which is one of the smallest stable cages. For an overview of the main results of the paper, I have embedded a five-minute AudioSlides presentation below. However, most of the results need a bit more explanation for the non-expert to really appreciate their significance. So I thought I would give you a behind-the-scenes look at putting this paper together through a blog series. In this first instalment, I discuss what other people have done on fullerene formation and mechanisms, so you can see the underlying motivations for the paper.

Finding out giant fullerenes are more stable than C60

It all started when I was reading about the formation of carbon fullerenes and trying to understand the high abundance of C60 in fullerene-containing soot. The high abundance of C60 is puzzling because of the way fullerenes are usually made: an electrical arc is struck between two carbon rods, producing a carbon plasma at over 3000 degrees C. At these temperatures, you wonder how such a symmetric and ordered molecule could form. Previous mechanisms assumed a bottom-up approach in which small carbon molecules grew and closed to form the C60 cage. This was challenged by reactive molecular dynamics computer simulations in 2006, which suggested that the carbon structures close into giant fullerenes (cages with more than sixty carbon atoms) and then, in order to cool down, eject carbon and shrink to form the magic-number fullerenes C60 and C70. This was shown to be at least feasible in 2007, when a giant fullerene was observed in an electron microscope (more on that below) to shrink when it was heated.


Curl et al. in 2008 reiterated that small fullerene cages are less stable than larger cages (see the figure below, where more negative means more stable). This makes sense, as flat graphite is the most stable form of carbon, and any attempt to curve the structure will raise its energy (make it less stable/more positive). This might seem straightforward; however, the common explanation for why C60 forms (in the bottom-up mechanism) was that C60 is a very stable molecule.
Modified from Curl et al. 2008
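The trend in the figure can be sketched with a toy model: any fullerene needs exactly 12 pentagons to close, so the curvature strain is shared over more atoms in a bigger cage, and the excess energy per atom above graphite falls off roughly as 1/n. A minimal sketch in Python, where the strain constant is a made-up illustrative number (not a value from Curl et al.):

```python
# Toy model: the 12 pentagons needed to close any cage contribute a
# roughly fixed total strain, so the energy per atom above flat graphite
# falls off as ~1/n for an n-atom fullerene.
STRAIN_TOTAL_EV = 40.0  # assumed total curvature strain in eV, illustrative only

def energy_per_atom_above_graphite(n_atoms):
    """Excess energy per atom (eV) relative to graphite in the 1/n toy model."""
    return STRAIN_TOTAL_EV / n_atoms

for n in (60, 70, 240, 540):
    print(f"C{n}: {energy_per_atom_above_graphite(n):.3f} eV/atom above graphite")
```

On this model a giant cage like C240 sits much closer to graphite than C60 does, which is the sense in which giant fullerenes are "more stable".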
Curl et al. also suggested that carbon cages could exchange carbon. This would allow C60 and C70 to be produced in greater abundance: they are significantly lower in energy than their neighbouring carbon cages (due to their high symmetry), so with fullerenes continually exchanging carbon, cages could get stuck at C60 or C70, as it is harder for carbon to be removed from these highly stable, symmetric cages, i.e. there are no weak points. Think of a fullerene rolling into stable notches that are hard to get out of. Further reactive forcefield simulations in 2011 suggested that carbon dimers (C2) could be ingested by fullerenes, and this was shown experimentally in 2012 using mass spectrometry, where fullerenes with metal atoms inside them ingested C2 without losing their cargo.

I ran some reactive forcefield simulations starting in 2011, wanting to observe this carbon capture and ejection, expecting that two interacting cages would exchange C2 from the smaller to the larger cage. Instead, I found that these structures really wanted to fuse together and coalesce into bigger giant fullerenes. Below is a video of a computer simulation I ran showing two fullerenes fusing.


This coalescence has been observed before, with C60 coalescing inside carbon nanotubes and in many different simulations. I therefore started to think that perhaps fullerenes would actually rather become larger when they are not in a high-energy, chaotic plasma. But I wondered whether I could experimentally observe the coalescence of fullerenes into isolated cages.

Tune in next week for part two where I show you an amazing instrument that can weigh individual molecules and show the higher fullerenes in fullerene soot.

Wednesday, 30 August 2017

Carbon conference 2017 and the "Queen of carbon"

I recently attended and presented a poster at the Carbon 2017 conference in Melbourne, Australia. This yearly conference brings together around 800 carbon scientists and engineers from around the world to talk about the many forms of carbon such as fullerenes, nanotubes, graphene and various carbon materials like activated carbons, glassy carbons, nuclear graphite and carbon catalysts.


Something interesting about the Carbon conference this year was that it was a joint meeting comprising eight other chemistry-related conferences, held as part of the centenary celebration of the Royal Australian Chemical Institute (RACI). This meant we could attend any of the many talks in fields from physical chemistry through to green chemistry.

3D printed molecules and electrically polarised carbon

The poster session was also run jointly with the other chemistry conferences. While viewing some of the other posters, I saw this 3D printed model of a metal organic framework, which I thought was a creative idea.


I presented a poster on the impact of curvature in aromatic molecules, which causes a significant charge polarisation leading to a molecular dipole of about 2 debye per pentagon (compare this with 1.85 debye for water). I had some good conversations with people about this; the activated carbon community was particularly interested in how polar molecules could adsorb onto these carbon structures and in how the curved structures should carbonise.

Click on the poster for a larger image.
We recently uploaded a preprint of this work online if you're interested in reading further.

The Queen of Carbon

Something quite special about the carbon conference was a memorial session commemorating the life of Mildred Dresselhaus (or Millie to those in the community), who passed away in February this year. Her impact on the field of carbon research cannot be overstated. Starting with the elucidation of the electronic structure of graphite (locating the holes and electrons in the first Brillouin zone using magneto-optical spectroscopy), she went on to study the intercalation of ions in graphite (work that led to the development of the lithium ion battery). Later, while working on fullerenes, she suggested elongating a fullerene into a tube and studied the electronic properties of such tubes in 1992, before single-walled nanotubes were discovered in 1993, suggesting they could be conducting or semiconducting depending on the nanotube's twist (chirality). She then turned her attention to graphene, working on tuning its electronic properties by confining it into nanoribbons as early as 1996.

Modified from presentation 
The "Queen of carbon science", as she was known, was also the first woman to gain a professorship at MIT and an advocate for women in STEM; she also served in government. What struck me most was her character as a scientist. She really filled her life with research but always had space to talk with students, review papers and write textbooks. A truly inspirational woman; I'm sad I never had the opportunity to meet her.

Wednesday, 19 July 2017

My week at the 2017 Commonwealth Science Conference

In June I was a participant in the Commonwealth Science Conference, held in Singapore. Four hundred delegates with expertise in science and policy were invited from around the world, brought together to focus on some of our biggest challenges that science can help to solve:
  • Emerging infectious diseases
  • Sustainable cities
  • Moving towards low carbon energy
  • The future of the oceans
I attended the talks on low carbon energy and presented a poster in this area. As I am currently working in that field, I have summarised some of the interesting findings from the talks and discussions I had with other delegates during the week.

The conference-defining image

This graph seemed to be on everybody's opening slides. It was first shown by Sir David King (a celebrated chemist heavily involved in climate change research and policy) during his opening speech. It shows various simulations of where our emissions could take us, depending on the representative concentration pathway (RCP). The pathways are labelled by their radiative forcing in 2100 relative to preindustrial levels (+2.6, +4.5, +6.0 and +8.5 W/m2), and they lead to a range of expected temperature increases, from 0.9 to 5.4 °C. We need to be on RCP2.6 or lower to keep the temperature rise under 2.3 °C.

 

Here are some of the graph's more important aspects:
  • We are currently on track for the worst-case scenario, RCP8.5. This would lead to famine, flooding and crop failure. 
  • The scenario we are aiming for (and the one the Paris agreement set) is RCP2.6. This requires not only completely eliminating CO2 emissions but also actively removing carbon from the atmosphere. 
  • There is some uncertainty in the simulations, as can be seen from the many light-coloured lines, but within this error limit the results all point to serious warming. 

You are probably already familiar with this depressing outline of global warming. In the following sections, I will outline how scientists and policy makers are planning to tackle the problem.

Hope for a carbon negative future

Much of the conference discussion was focused on which technologies could make up for the shortfall of energy as we decarbonise our energy production. 

Everyone is banking on solar

Professor Martin Green is an Australian expert in silicon solar cells whose group holds the record for the most efficient solar cell (at 25%, up from 11% efficiency in the 1970s). He spoke about the many countries banking on solar energy to replace fossil fuels. The graph below shows the projected chunk of energy that solar is predicted to supply, according to the German Advisory Council on Global Change.

  


Professor Green spoke about how the price of solar cells has dropped to the point of competing with wind power. Some of the first students who left China after Mao's death were among those who came into Professor Green's lab and contributed to this research. Many of them returned to China to set up production of solar cells while maintaining ties with Australia. This was supported by American money, seen as a safe investment due to Germany's aggressive drive to purchase solar energy. The plot below shows the growth of Chinese production of solar cells. 

This increase in production and competition led to a huge reduction in the cost of solar photovoltaic cells after 2005, as shown below. 

Other drivers include increases in solar cell efficiency and reductions in installation costs. We are currently using technology from the 1970s-80s to mass produce cells at ~18% efficiency. This is set to increase to 25% with more recent technology, and even further with multi-layered designs. Eventually, wind and solar will compete with conventional fossil fuels. 


Utility-scale solar photovoltaics are large solar farms that reduce costs, compared with rooftop PV, by installing a large number of cells, usually in a field. However, some significant challenges have to be overcome before we can completely transition to renewables. 

What if the sun won't shine?

Solar and wind power both have one major drawback: they are intermittent power sources. Our power supply system relies on sources that generate continuously, which demands large-scale deployment of energy storage to iron out the fluctuations in renewable output. Pumped hydro (pumping water up into a dam) is the most mature storage method and stores the largest amount of energy, but it is limited by the need for a particular kind of geography (some are suggesting underground caverns to mitigate this problem). Battery technology needs to step up and take a much larger share of storage capacity if we are to make full use of renewables. Professor Anthony Chen from the University of the West Indies mentioned that using batteries to respond very quickly to power fluctuations in the grid, a service called frequency regulation, might offset the cost of buying the batteries. We also need to rapidly develop the capacity and efficiency of batteries to meet this enormous need. In the short term, this also suggests the need for more biomass and perhaps even nuclear power to make sure the new decarbonised grid can supply power reliably. 

Professor Jenny Nelson from Imperial College London spoke about another direction: reducing the cost of manufacturing solar cells by synthesising them from organic molecules. These organic photovoltaics are significantly less efficient than silicon PV but are potentially cheaper to make and can be sprayed onto windows or plastics. This could reduce the cost of solar cells and help offset storage costs. 

Presenting my own work

I presented a poster about my PhD thesis, on reducing the amount of soot produced during combustion. Below is a picture taken at the poster session, and I have uploaded a copy of the poster for you to read. Feel free to ask any questions in the comments. 

 


Convincing the public and policy makers about climate change

It is one thing to have possible technology-based options for dealing with climate change, but this is not enough to solve the problem; we need cooperation from everyone, right up to our leaders. During the first opening plenary, Sir David King spoke about how to shift the discussion around climate change. He began with a story from his time as Government Chief Scientific Advisor in the United Kingdom, when he was working to combat a bad outbreak of foot-and-mouth disease. There was no plan for dealing with such a problem, and it was difficult to manage. After the disaster, he began to think that there must be a better way to prepare for such events. Thus began the development of a plan to prepare for flooding in the UK: a small investment in preemptive action to reduce the impact of this type of disaster. Many different steps were taken to prepare England for bad flooding, making use of weather models and cutting-edge science. In 2015, this type of flooding did occur. The damage was extreme, exceeding five billion dollars, but it was substantially less than it would have been without the preventive action. It was later calculated that every pound spent on prevention saved seven to eight pounds in potential disaster costs. 

In studying the risk associated with flooding, the impact of climate change was also beginning to be taken into account. This posed some serious problems and led the researchers and government to begin considering the risks associated with climate change for the UK. They asked themselves if they could afford to wait for climate change to happen, or if they should act straight away.

One of the risks the world already faces as a result of climate change is an increase in the number and severity of heatwaves. In 2003, a heatwave in Europe killed 70,000 people [1]. Climate science focuses on average long-term changes in the climate; this doesn't always help to communicate the actual costs associated with climate change (think of Trump's suggestion that a two degree change will not be significant). It will be the rare, big-impact events that cause the greatest loss of life and capital. These are the most important factors to consider and to mitigate. Returning to the heatwave of 2003, the plot below shows the actual temperature in black, with the other traces plotting simulations. The rarity of the 2003 event is what led to the huge loss of life; however, as the average temperature increases, the chance of a similar event rises sharply against the new baseline. It is also frightening to see that by 2040 the average summer temperature in Europe is projected to equal that of the 2003 heatwave. 


European summer temperature difference from 1900 levels. [2]
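The point about a shifting baseline can be made quantitative with a toy calculation: if summer temperatures scatter roughly normally about the mean, a modest warming of the mean multiplies the probability of exceeding a fixed extreme threshold many times over. A sketch with entirely illustrative numbers (a 1 °C standard deviation and a heatwave threshold 3 °C above the old mean; these are not fitted to the European data):

```python
import math

def prob_exceed(threshold, mean, sd):
    """P(T > threshold) for a normally distributed temperature anomaly."""
    z = (threshold - mean) / (sd * math.sqrt(2.0))
    return 0.5 * math.erfc(z)

# Illustrative numbers only: summer anomalies with a 1 degC standard
# deviation, and a "2003-like" heatwave sitting 3 degC above the old mean.
p_old = prob_exceed(3.0, mean=0.0, sd=1.0)  # ~0.13% in any given summer
p_new = prob_exceed(3.0, mean=1.5, sd=1.0)  # ~6.7% once the mean warms 1.5 degC
print(f"{p_old:.3%} -> {p_new:.3%} ({p_new / p_old:.0f}x more likely)")
```

A 1.5 °C shift of the mean turns a once-in-centuries extreme into a once-in-decades one, which is why averages understate the risk.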

What Sir David King decided to do in order to convince communities about the impact of global warming was to bring in people from the insurance community, who are experts in predicting the risk of rare events, to assess the risks associated with global warming in the UK, US, China and India. They were able to estimate the chance of a serious disaster happening under different climate scenarios. For example, the chance of crop failure in the US and China heads towards 25% (maize) and 75% (rice) respectively with a global temperature rise of around 4-5 °C. This helped spur nations on in the lead-up to the Paris agreement, encouraging China, the US and India to join. In his discussion of the Paris agreement, he explained the subtle way it is designed to work. The agreement does not force anyone to make a commitment but allows countries to submit their bids to reduce emissions. The clever, subtle aspect is the review process, which considers the cumulative sum of the bids and determines, from climate models, whether we could reach the 2 °C target given the current pledges. This information is then relayed back to the countries, encouraging them to improve their bids in order to meet the global temperature target. Below is a graph showing the impact of the new pledges (as of the Paris agreement), which will be used in the next review round to encourage countries to improve their pledges and bring the world's emissions down into the blue region.

You can see that the new pledges do not yet meet the 2 °C target; the next review process will aim to encourage countries to improve their bids so that it is met. (Wondering how New Zealand is doing? We have an "inadequate" rating: for example, agriculture, which accounts for 50% of our greenhouse gas emissions, is not included in our emissions trading scheme. However, as we produce most of our electricity from renewable energy, we have the potential to be a world leader in climate change reversal and a decarbonised society.) It would be great to have a similar risk assessment done in New Zealand, considering the costs of climate-related flooding and crop failure here, to convince New Zealanders to act to curb emissions.

In another talk the question was asked: how do scientists communicate with the public and policy makers to make sure decisions are made without misinformation sneaking in? Sir David Spiegelhalter, a statistician and science communicator, highlighted the need for scientists to be part of the entire communication process. Here is a figure he showed (which I have reproduced).

He mentioned a study showing that exaggerated claims in the news are predominantly introduced at university press offices. This means scientists need to be involved in the communication of their research as it leaves their own university's press office, and at every step of the communication pathway, in order to maintain accuracy. He also talked about empowering the public to be more critical about the information they receive (partially encouraged by fact-checking regulators).

This level of critique and commentary relies on the trust and integrity of the scientific community. My small contribution during the policy discussion was a question about whether the commercialisation of science undermines the public's trust in scientists who are seen to have conflicts of interest. This was taken further by the head of the Royal Society, Sir Venki Ramakrishnan, who in the second policy panel discussion asked whether the over-industrialisation of science undermines its core values. Sir David King spoke about the need for scientists to be taught ethics and the importance of being transparent in our discussions of science. He pointed to his work on developing an ethical framework for scientists: just as doctors have to agree to a certain set of ethical principles, his hope is that scientists will follow a similar path. I will leave you with the seven principles:

  • Act with skill and care, keep skills up to date
  • Prevent corrupt practice and declare conflicts of interest
  • Respect and acknowledge the work of other scientists
  • Ensure that research is justified and lawful
  • Minimise impacts on people, animals and the environment
  • Discuss issues science raises for society
  • Do not mislead; present evidence honestly


References
[1] "Death toll exceeded 70,000 in Europe during the summer of 2003". Comptes Rendus Biologies. 331 (2): 171–178. ISSN 1631-0691. PMID 18241810. doi:10.1016/j.crvi.2007.12.001.
[2] http://www.nature.com/nature/journal/v432/n7017/abs/nature03089.html


Sunday, 28 May 2017

Molecular beading

I stumbled across a paper in the Journal of Chemical Education on molecular modelling of fullerenes using beads and I had to give it a go. After getting some beads and nylon thread the fullerenes came together quite quickly.


One thing to note is that the beads do not represent the atoms but the carbon-carbon bonds; wherever three beads meet up, there is an atom in the chemical model. It is better to use long beads to get an accurate molecular model, as bonds are usually represented by long rods. Below are the models shown in the paper, using longer beads with the pentagons coloured yellow for clarity.
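If you want a bead shopping list before starting, Euler's polyhedron formula works it out for you: a cage of n three-coordinate carbons made only of pentagons and hexagons always has 3n/2 bonds (beads), exactly 12 pentagons, and n/2 − 10 hexagons. A quick sketch:

```python
def fullerene_counts(n_atoms):
    """Bead shopping list for a Cn cage built from pentagons and hexagons.

    Each bead stands for one carbon-carbon bond and three beads meet at
    every atom, so bonds = 3n/2. Euler's formula V - E + F = 2 then forces
    exactly 12 pentagons and n/2 - 10 hexagons, whatever the cage size.
    """
    bonds = 3 * n_atoms // 2
    pentagons = 12
    hexagons = n_atoms // 2 - 10
    assert n_atoms - bonds + pentagons + hexagons == 2  # Euler check
    return bonds, pentagons, hexagons

print(fullerene_counts(60))  # (90, 12, 20): 90 beads for a C60 model
```

So a C60 model needs 90 beads, and the 12 pentagons are the same for every fullerene, which is why colouring them (as in the paper) makes the geometry so much easier to follow.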

I also constructed a C60 fullerene from the longer beads; however, the structure was not very rigid and tended to collapse. This shows the importance of the pi bonds, which keep the real structure rigid by acting to flatten the sp2-hybridised carbons.

This would probably be a good exercise for high school students, and there is a great set of instructions in the supplementary information of the paper. However, a good understanding of the arrangement of the pentagons and hexagons is crucial for constructing the fullerene out of beads, so I would recommend building the C60 fullerene in Avogadro (a molecular modelling program) first and then moving on to the beads. 




Tuesday, 14 February 2017

Chemical analysis of milk on a compact disc

We know how much food colouring has been added to a recipe by the intensity of the colour. Food colouring is made of dye molecules which absorb certain colours of light and transmit others. Measuring the concentration of food colouring is easy: all that is needed is a light emitting diode (LED) and a light sensor. But how do we determine the concentration of molecules that are not coloured? One method that has recently gained a lot of attention is Raman spectroscopy. Put simply, a laser is used to excite the molecules, then a camera picks up a characteristic fingerprint derived from the way the molecules jiggle (vibrate).
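The LED-and-sensor measurement hinted at above is just the Beer-Lambert law, A = εlc: absorbance is proportional to concentration. A sketch with hypothetical numbers (a dye with a molar absorptivity of 10,000 L/(mol cm) in a 1 cm path; both values are assumptions for illustration):

```python
import math

def absorbance(i_incident, i_transmitted):
    """Absorbance A = log10(I0 / I) from the LED and sensor intensities."""
    return math.log10(i_incident / i_transmitted)

def concentration(a, molar_absorptivity, path_cm):
    """Invert Beer-Lambert, A = epsilon * l * c, for the concentration c."""
    return a / (molar_absorptivity * path_cm)

# Hypothetical dye: molar absorptivity 10,000 L/(mol cm), 1 cm path,
# and the sensor sees 10% of the LED light transmitted.
a = absorbance(1.0, 0.10)  # A = 1.0
c = concentration(a, molar_absorptivity=1.0e4, path_cm=1.0)
print(f"A = {a:.2f}, concentration = {c:.1e} mol/L")
```

This is exactly what fails for colourless molecules: with no absorption there is no A to measure, which is where Raman spectroscopy comes in.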

Michel Nieuwoudt and others in the Photon Factory have found that Raman spectroscopy can allow for the determination of all of the important components in milk such as protein, fat and health indicators. In a previous post I wrote about measuring the contaminant melamine in milk using gold coated Blu-ray discs which amplified the weak Raman signal to something detectable. This month we published a follow-up work that aimed to integrate milk analysis into a device that could be used in a dairy shed.

Giving milk
CC BY-NC 2.0 Giving Milk by Morton Just

The first challenge was finding a device into which to integrate the analysis. We wanted to make use of microfluidic technology, which uses techniques from the semiconductor industry to make very small fluid channels on the micron scale (your hair is about 100 microns across). Advantages include being able to pack many tests onto a single device and having very precise control over the liquid. Traditional microfluidics requires a lab full of pumps to operate, which is not practical for analysis in a dairy shed, so we turned to the newer field of centrifugal microfluidics. Combining compact disc technology with microfluidics allows standalone operation, with pumping driven by the centrifugal force as liquid pipetted into the centre of the disc is spun outwards through the microfluidic channels.

CC BY 3.0 LabDisk for SAXS

To make this approach viable, we have to use injection-moulded plastic discs to keep costs down and to allow large-scale production of the many discs to be delivered to farmers.

The problem with performing Raman analysis in a plastic device is that the plastic has a very strong Raman signal, which drowns out the weak Raman signal from the milk. In order to solve this problem we removed the plastic between the laser and the milk - holding the milk in an open channel using the capillary force. This is the same force that pulls water up the sides of a glass to form a meniscus.

Cross-section of the multilayer disc, with the laser-cut channel on the bottom, a layer of double-sided tape in the middle, and a top cover which leaves part of the channel open

The main contribution of the paper was working out how to use the centrifugal force to fill the channel without it overflowing, by balancing the centrifugal pressure (controlled by the disc speed and the distance from the centre of the disc) against the capillary pressure (controlled by the size of the channel). Something else I found quite cool was that the capillary force was enough to hold the liquid upside down in the channel, which means the detection could be done from underneath, as in a traditional compact disc player. 

Diagram of the device: a) forces involved on the disc, b) liquid in a closed channel being pumped under rotation, c) open channel for spectroscopy, d) pressure due to the centrifugal pumping, e) balance between the capillary force and the pumping pressure.
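To give a feel for the balance in panel (e), here is a rough back-of-the-envelope sketch, not the actual design calculation from the paper: the centrifugal pumping pressure is rho * omega^2 * (r2^2 - r1^2) / 2, while an open channel of depth d retains the liquid with a capillary pressure of order 2 * gamma * cos(theta) / d. The channel dimensions, contact angle, and fluid properties below are all assumed, illustrative values:

```python
import math

RHO = 1030.0              # approximate density of milk, kg/m^3
GAMMA = 0.046             # approximate surface tension of milk, N/m
THETA = math.radians(30)  # assumed contact angle on the channel walls

def centrifugal_pressure(rpm, r_inner, r_outer):
    """Pumping pressure rho * omega^2 * (r_outer^2 - r_inner^2) / 2."""
    omega = rpm * 2 * math.pi / 60
    return 0.5 * RHO * omega**2 * (r_outer**2 - r_inner**2)

def capillary_pressure(depth):
    """Rough retaining pressure, of order 2*gamma*cos(theta)/depth."""
    return 2 * GAMMA * math.cos(THETA) / depth

# Assumed geometry: a 200 micron deep channel running from 20 mm to 50 mm radius.
p_cap = capillary_pressure(200e-6)
# Spin speed at which the pumping pressure just matches the capillary pressure:
rpm_max = 60 / (2 * math.pi) * math.sqrt(2 * p_cap / (RHO * (0.05**2 - 0.02**2)))
print(f"capillary pressure ~{p_cap:.0f} Pa, overflow above ~{rpm_max:.0f} rpm")
```

Spin faster than this crossover and the channel overflows outwards; spin slower and it fills without spilling, which is the operating window the paper tunes.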

We made use of this device to detect the contaminant melamine in milk using Raman spectroscopy and were able to detect down to the parts per million range (limit of detection (LOD) of 209 ppm). This is much more sensitive than infrared spectroscopy (LOD of 1300 ppm), but for higher sensitivities, the Blu-ray SERS surface would be needed (LOD of 70 ppb). 
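For context, detection limits like the 209 ppm figure quoted here are commonly estimated as LOD = 3 × (standard deviation of blank measurements) / (calibration slope). A sketch with invented blank readings and an invented slope, not the numbers behind the paper's result:

```python
import statistics

def limit_of_detection(blank_signals, slope):
    """Common estimate: LOD = 3 * sd(blank) / calibration slope."""
    return 3 * statistics.stdev(blank_signals) / slope

# Invented blank readings (arbitrary counts) and an invented calibration
# slope of 0.5 counts per ppm -- purely illustrative values.
blanks = [100.2, 99.8, 100.5, 99.6, 100.1, 99.9]
lod = limit_of_detection(blanks, slope=0.5)
print(f"LOD ~ {lod:.1f} ppm")
```

The formula makes the two routes to better sensitivity explicit: reduce the blank noise, or steepen the calibration slope, which is what the signal-amplifying Blu-ray SERS surface does.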

The applications of this work go further than just milk analysis. It opens up all sorts of vibrational analysis, such as infrared and Raman spectroscopy, to the centrifugal platform, which could provide disease diagnostics, water analysis and DNA detection using these advanced techniques.