
Search Index

355 results found

  • Not all chemists wear white coats: computational organic chemistry | Scientia News

    The newest pillar of chemical research Not all chemists wear white coats: computational organic chemistry Last updated: 01/02/26, 19:59 Published: 01/02/26, 19:42

    Introduction

    'Not all chemists wear white coats' aptly describes the newest pillar of chemical research. The phrase, coined by the Royal Society of Chemistry, reflects how computational modelling has become an essential tool across all areas of traditional chemistry. As artificial intelligence (AI) and machine learning become increasingly prevalent in research, the future of chemistry may unfold computationally before ever touching a test tube. Given the breadth of the field, this article focuses specifically on computational advancements in organic chemistry.

    Analytical Chemistry

    Density Functional Theory (DFT) is a quantum computational method that models molecules based on the distribution of their electron density. Organic chemists can use it to determine the stereochemistry of a product by modelling Vibrational Circular Dichroism (VCD) spectra. VCD is a spectroscopic technique that measures the difference in absorption of left- versus right-handed circularly polarised light by chiral molecules. By using DFT to compute the VCD spectrum of each enantiomer, chemists can compare the results to the experimental spectrum. A match between a computed spectrum and the experimental one indicates an accurate assignment of the molecule's stereochemistry. See Figure 1.

    Predicting molecular conformation

    While the Cahn-Ingold-Prelog naming system allows chemists to describe the 3D arrangement of a molecule, computational analysis can help predict which molecular shape is preferred in practice. Molecular Mechanics (MM) is a computational method that treats molecules using classical physics, modelling atoms and bonds as 'balls' connected by 'springs'.
    A force field is used to calculate the potential energy of a molecule, accounting for bond stretching, angle bending, bond rotation, van der Waals interactions and electrostatic forces. A simple example of how this method supports organic chemistry is the determination of the most stable conformation of butane. By rotating the central C-C bond through 360°, the energy of each conformation can be plotted against the dihedral angle. This analysis shows that the anti-conformation is the most stable, as the two methyl groups are positioned 180° apart to minimise steric strain. See Figure 2.

    Drug discovery

    Computational chemistry has also transformed drug discovery by enabling chemists to simulate how potential drug compounds will bind to their target active site. In the past, drug development often relied on synthesising a large number of candidates and testing each experimentally to see which worked. Today, advances in computational chemistry, combined with X-ray crystallographic data, allow both a drug candidate and its protein binding site to be modelled before any lab work begins. This saves researchers both time and resources. Known as structure-based drug design, this approach commonly relies on hybrid computational methods, particularly Quantum Mechanics/Molecular Mechanics (QM/MM). In this case, the chemically active regions, such as the drug molecule and the protein active site, are treated using QM, while the rest of the protein is treated using MM. Combining these techniques strikes a balance between computational accuracy and calculation time, which is especially important for larger molecules. See Figure 3.

    Conclusion

    In conclusion, computational chemistry is an essential tool for interpreting experimental results and generating new scientific insight. While this article has focused on its role in supporting organic chemistry research, the reach of computational chemistry extends far beyond this field.
    From modelling batteries and solid-state materials to organometallic catalysis, computational chemistry is now firmly embedded in modern chemical research.

    Written by Antony Lee

    Related articles: Quantum chemistry, computing

    REFERENCES

    The Royal Society of Chemistry - https://edu.rsc.org/resources/not-all-chemists-wear-white-coats/1654.article (Accessed January 2026)

    Y.L. Zeng, X.Q. Huang, C.R. Huang, H. Zhang, F. Wang, Z.X. Wang, Angew. Chem. Int. Ed., 2021, 60, 10730-10735

    Chemistry LibreTexts - https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Organic_Chemistry_%28Morsch_et_al.%29/03%3A_Organic_Compounds_Alkanes_and_Their_Stereochemistry/3.07%3A_Conformations_of_Other_Alkanes (Accessed January 2026)

    Ecole des Bio-Industries - https://www.ebi-edu.com/en/coup-de-coeur-research-9/ (Accessed January 2026)
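The butane dihedral scan described in the article above can be sketched numerically. A cosine series is the standard functional form for torsional terms in force fields, but the coefficients below are illustrative placeholders, not fitted butane parameters:

```python
import math

# Illustrative cosine-series torsional potential for the butane C-C-C-C
# dihedral (coefficients in kJ/mol are hypothetical, chosen only so the
# anti conformation at 180 degrees comes out as the global minimum).
def torsion_energy(phi_deg, c1=8.0, c2=1.0, c3=6.0):
    phi = math.radians(phi_deg)
    return (c1 * (1 + math.cos(phi))
            + c2 * (1 - math.cos(2 * phi))
            + c3 * (1 + math.cos(3 * phi)))

# Rotate the central bond through 360 degrees, as the article describes,
# and find the dihedral angle with the lowest energy.
scan = {phi: torsion_energy(phi) for phi in range(0, 361, 5)}
most_stable = min(scan, key=scan.get)
print(most_stable)  # 180 -> the anti conformation
```

Plotting `scan` reproduces the familiar butane torsional profile qualitatively: maxima near the eclipsed angles, local minima near the gauche angles, and a global minimum where the methyl groups sit 180° apart.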

  • Delving into the world of chimeras | Scientia News

    An exploration of this genetic concept Delving into the world of chimeras Last updated: 09/07/25, 14:03 Published: 03/02/24, 11:13

    The term chimera has been borrowed from Greek mythology, transcending ancient tales to become a captivating concept within the fields of biology and genetics. In mythology, the chimera was a monstrous hybrid creature. In the biological context, however, a chimera refers to an organism with cells derived from two or more zygotes. While instances of natural chimerism exist within humans, researchers are pushing the boundaries of genetics via the intentional creation of chimeras, consequently sparking debates and breakthroughs in various fields, from medicine to agriculture.

    Although in theory every cell in the body should share an identical genome, chimeras challenge this notion. For example, the fusion of non-identical twin embryos in the womb is one way chimeras can emerge. While visible cues, such as heterochromia or patches of varied skin tone, may provide subtle hints of chimerism, affected individuals often show no overt signs, making its prevalence uncertain. In cases where male and female cells coexist, abnormalities in the reproductive organs may occur.

    Furthermore, advancements in genetic engineering and CRISPR genome editing have allowed the artificial creation of chimeras, which may aid medical research and treatments. In 2021, the first human-monkey chimera embryo was created in China to investigate ways of using animals to grow human organs for transplants. The organs could be genetically matched by taking the recipient's cells and reprogramming them into stem cells. However, the process of creating a chimera can be challenging and inefficient. This was shown when researchers from the Salk Institute in California tried to grow the first embryos containing cells from humans and pigs.
    From 2,075 implanted embryos, only 186 developed up to the 28-day time limit for the project. Chimeras are not exclusive to the animal kingdom; plants exhibit this genetic complexity as well. The first non-fictional chimera, the “Bizzaria”, discovered by a Florentine gardener in the seventeenth century, arose from the graft junction between sour orange and citron. Initially thought to be an asexual hybrid formed from cellular fusion, later analyses revealed it to be a chimera, a mix of cells from both donors. This pivotal discovery in the early twentieth century marked a turning point, shaping our understanding of chimeras as unique biological phenomena. Chimerism is a common cause of variegation, with parts of a leaf appearing green and other parts white. This is because the white or yellow portions of the leaf lack the green pigment chlorophyll, which can be traced to layers in the meristem (areas of active cell division found at the root and shoot tips) that are either genetically capable or incapable of making chlorophyll.

    As we conclude this exploration into the world of chimeras, from the mythological realm to the scientific frontier, it is evident that these entities continue to mystify and inspire, broadening our understanding of genetics, development, and the interconnectedness of organisms. Whether natural wonders or products of intentional creation, chimeras beckon further exploration, promising a deeper comprehension of the fundamental principles that govern the tapestry of life.

    Written by Maya El Toukhy

    Related article: Micro-chimerism and George Floyd's death

  • Exploring My Role as a Clinical Computer Scientist in the NHS | Scientia News

    What my role entails Exploring My Role as a Clinical Computer Scientist in the NHS Last updated: 17/04/25, 10:23 Published: 06/05/24, 13:03

    When we think about career choices, we’re often presented with singular paths. Clinical Scientific Computing is a field that combines healthcare and computing. Despite being relatively unknown, it is an important cog in the healthcare machine. When I applied for the Scientist Training Programme in 2021, my specialism (Clinical Scientific Computing) had one of the lowest application rates of the approximately 27 specialisms. Awareness of this area has since improved, thanks both to better advertisement and to rapid advancements in technology and healthcare. According to the NHS, there are now around 26,800 full-time equivalent healthcare scientists in England's NHS.

    As a clinical computer scientist, one's expertise can be applied in diverse settings, including medical physics laboratories and clinical engineering departments. My role in radiotherapy involves overseeing the technical aspects of clinical workflows, ensuring the seamless integration of technology in patient care.

    Training is a crucial part of being a proficient clinical computer scientist. Especially with the growth of scientific fields in the NHS, there is a constant influx of juniors and trainees, which in turn warrants the need for excellent trainers. A clinical scientist is someone who is proficient in their craft and able to explain complex concepts in layman's terms. As Einstein reputedly said: 'If you can't explain it to a six-year-old, you don't understand it yourself.' Although I am still technically a trainee, I am expected to partake in the training of more junior trainees. On a typical day, this may be as simple as explaining a program and demonstrating its application, or I may dismantle a PC and go through each component, one by one.
    At the core of clinical science is research. You won't go a day without working on at least one project, and sometimes these may not even be your own. Collaboration is a huge part of the job: every scientist thinks about a problem differently, and this is exactly what keeps the wheels spinning in a scientific department. There are numerous times when I seek the help of others, and vice versa. It is difficult to describe 'typical' projects because they are so varied in scientific computing, but you are likely to find yourself working on a variety of programming tasks. Clinical know-how is crucial when working on projects in this field, and that aspect is exactly what separates the average computer scientist from the clinical computer scientist. A project I am currently working on involves radiation dose calculations, which naturally requires understanding the biological effects of radiation on the human body. This is not a typical software development project, so a passion for healthcare is absolutely necessary.

    The unpredictability of technology means that troubleshooting is a constant aspect of our work. If something goes wrong in the department (which it often does), it is our responsibility as technical experts to diagnose and fix the problem quickly and effectively. Clinical workflows are highly time-sensitive in healthcare, especially the cancer pathway, where every minute counts. If a radiographer is unable to access patient records, or there is an error with a planning system, this can have detrimental effects on the quality of patient care. Addressing errors, like those in treatment planning systems, necessitates a meticulous approach to diagnosis, often leading us from error-code troubleshooting to on-site interventions. For example, I may be required to physically attend a treatment planning room and resolve an issue with the PC.
    This narrative offers a glimpse into the day-to-day life of a clinical computer scientist in the NHS, highlighting the critical blend of technical skill, continuous learning, and the profound impact on patient care. Through this lens, we can hopefully appreciate the essential role of clinical scientific computing in advancing healthcare, marked by innovation, collaboration, and a commitment to improving patient outcomes.

    Written by Jaspreet Mann

    Related articles: Virtual reality in healthcare / Imposter syndrome in STEM

  • Brief neuroanatomy of autism | Scientia News

    Differences in brain structure Brief neuroanatomy of autism Last updated: 09/07/25, 13:29 Published: 26/12/23, 20:38

    Autism is a neurodevelopmental condition present in both children and adults worldwide. The core symptoms include difficulties understanding social interaction and communication, and restrictive or repetitive behaviours such as strict routines and stimming. When the term autism was first coined in the 20th century, it was thought of as a disease. However, it is now described as a cognitive difference rather than a disease; that is, the brains of autistic individuals – along with those of people diagnosed with dyslexia, dyspraxia, or attention deficit hyperactivity disorder – are not defective, but simply wired differently. The exact cause or mechanism of autism has not been determined; the symptoms are thought to be brought about by a combination of genetic and environmental factors. Currently, autism is diagnosed solely by observing behaviour, without measuring the brain directly. However, behaviours may be seen as the observable consequence of brain activity. So, what is it about their brains that might make autistic individuals behave differently to neurotypicals?

    Total brain volume

    Before sophisticated imaging techniques were in use, psychiatrists had already observed that the head size of autistic infants was often larger than that of other children. Later studies provided more evidence that most children who would go on to be diagnosed had a normal-sized head at birth, but an abnormally large circumference by the time they had turned 2 to 4 years old. Interestingly, the increase in head size has been found to correlate with the onset of the main symptoms of autism. However, after childhood, growth appears to slow down, and autistic teenagers and adults present brain sizes comparable to those of neurotypicals.
    The amygdala

    As well as the transient increase in total brain volume, the size and volume of several particular brain structures seem to differ between individuals with and without autism. Most studies have found that the amygdala, a small area in the centre of the brain that mediates emotions such as fear, appears enlarged in autistic children. The amygdala is a particularly interesting structure to study in autism, as individuals often have difficulty interpreting and regulating emotions and social interactions. Its increased size seems to persist at least until early adolescence. However, studies in adolescents and adults tend to show that the enlargement slows down, and in some cases is even reversed, so that the number of amygdala neurons may be lower than normal in autistic adults.

    The cerebellum

    Another brain structure that tends to present abnormalities in autism is the cerebellum. Sitting at the back of the head near the spinal cord, it is known to mediate fine motor control and proprioception. Yet recent literature suggests it may also play an important role in some other, higher cognitive functions, including language and social cognition. Specifically, it may be involved in our ability to imagine hypothetical scenarios and to abstract information from social interactions. In other words, it may help us recognise similarities and patterns in past social interactions that we can apply to understand a current situation. This ability is poor in autism; indeed, some investigations have found that the volume of the cerebellum may be smaller in autistic individuals, although research is not conclusive. Nevertheless, most research agrees that the number of Purkinje cells is markedly lower in people with autism. Purkinje cells are a type of neuron found exclusively in the cerebellum, able to integrate large amounts of input information into a coherent signal.
    They are also the only source of output from the cerebellum; they are responsible for connecting the structure with other parts of the brain, such as the cortex and subcortical structures. These connections ultimately bring about specific functions, including motor control and cognition. Therefore, a low number of Purkinje cells may cause underconnectivity between the cerebellum and other areas, which might be the reason functions such as social cognition are impaired in autism.

    Written by Julia Ruiz Rua

    Related article: Epilepsy

  • Libertarian Paternalism and the ‘Nudge’ Approach | Scientia News

    Delving into the 'Nudge' effect by Thaler and Sunstein Libertarian Paternalism and the ‘Nudge’ Approach Last updated: 05/11/25, 20:21 Published: 06/11/25, 08:00

    This is article no. 4 in a series on behavioural economics. Next article: Effect of time (coming soon). Previous article: Loss aversion.

    So far in our series on behavioural economics, we have discussed why and how people may make less favourable decisions than traditional economics assumes. We have seen how people can remain honest even when faced with a decision that could leave them materially better off; how losing a wallet causes more distaste than finding money on the street causes pleasure; and how an endowment adds a bizarre sense of additional worth that would make you think twice about trading it for something equally valuable. In today's article, we address why this matters to policymakers, and subsequently to you and me, by exploring how governments and institutions can influence our decisions in ways that may seem paternalistic yet still respect individual freedom. This idea lies at the heart of libertarian paternalism.

    The idea behind the "Nudge"

    Nudge is a book written by Nobel Prize-winning economist Richard Thaler and legal scholar Cass Sunstein. Building on their 2003 paper, the book develops the idea that people's choices can be shaped not only by the options available, but also by the context in which those options are presented, even by factors that seem trivial or irrelevant. This is where the concept of a "nudge" comes in: small design changes that steer people toward better decisions without restricting their freedom to choose.

    A simple change: the pension example

    A classic example comes from workplace pensions.
    Before 2008, when someone joined a new company, they were asked whether they wanted to join the company pension scheme. Most people didn't: they took their full pay instead and failed to save for retirement. This created a growing problem for the government: an ageing population without enough savings to maintain a comfortable lifestyle. The solution was remarkably simple. Instead of asking employees to opt in to a pension, companies began enrolling them automatically, giving them the option to opt out instead. The choice remained exactly the same, pension or no pension, but the framing made all the difference. Opting out felt like losing something, and because people are naturally loss-averse, far fewer did so. In 2012, just under 50% of private-sector employees had a pension. By 2018, after the introduction of auto-enrolment, that number had risen to around 80%. All from a change in the default wording on a form.

    Libertarian paternalism: a justification

    Paternalism generally describes a situation in which the government interferes in our choices, for better or for worse, much like a parent telling their children what they can and cannot do. In many cases, society accepts paternalism as necessary: we ban harmful drugs, make theft illegal, and impose safety regulations. But should governments really be meddling with our personal financial decisions? Should they be influencing our choices about pensions, spending, or saving? Whether they should or shouldn't is ultimately a political question, not an economic one. What we can do, however, is consider Richard Thaler and Cass Sunstein's explanation of why policies such as pension defaults represent something fundamentally different. When the government restricts drugs or criminalises theft, it removes our freedom to choose; these are examples of hard paternalism, enforced by law. But with pensions, the government doesn't force participation. The choice remains entirely yours: you can stay enrolled or opt out.
    This preservation of choice embodies the libertarian element: the freedom to decide for oneself. At the same time, by changing how the choice is presented, such as making enrolment the default option, policymakers can dramatically alter behaviour in a direction they consider beneficial. That is where the paternalistic element comes in. According to Thaler and Sunstein, this combination of freedom and gentle guidance is what defines libertarian paternalism. In their eyes, nudging individuals towards better decisions through policy is less controversial than implementing outright bans and mandates: it respects our autonomy while encouraging outcomes that they believe will improve collective welfare. If the government genuinely believes certain decisions are in the public's best interest, then libertarian paternalism provides a way to influence behaviour without infringing on people's right to choose.

    A question of freedom

    I do, however, pose some questions to you. If the government can influence your decision-making by manipulating people's psychology, can it truly be called libertarian? And, more fundamentally, does the government really know best? In recent years, the 'Nudge' approach has faced criticism, particularly regarding the assumptions it makes about what constitutes a "better" decision and who gets to define it. Despite this, the research continues to shape public policy across the world, from pensions and health to energy use and education. What is crucial is that we remain aware of the ways our choices can be influenced. Recognising these nudges allows us to make decisions that best reflect our own values, circumstances, and goals. And on a deeper level, if every choice we make can be subtly shaped by those in power, how do we ensure that nudges serve the public interest, and not the interest of those who nudge?

    Written by George Chant
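The pension default discussed in this article can be caricatured in a few lines of code. The model and every parameter in it are hypothetical: each employee passively follows the default with some probability (inertia), and otherwise makes an active choice; the numbers are chosen only to echo the roughly 50% to 80% shift quoted above.

```python
# Toy model of a default-option nudge (all parameters hypothetical).
# A fraction `inertia` of employees passively accept whatever the
# default is; the rest actively choose a pension with probability
# `prefers_pension`.
def participation_rate(default_is_enrolled, inertia=0.3, prefers_pension=0.72):
    follows_default = inertia if default_is_enrolled else 0.0
    active_choosers = (1 - inertia) * prefers_pension
    return follows_default + active_choosers

print(round(participation_rate(default_is_enrolled=False), 3))  # 0.504 (opt-in)
print(round(participation_rate(default_is_enrolled=True), 3))   # 0.804 (opt-out)
```

Note that the menu of options never changes, only which option takes effect when the employee does nothing; that asymmetry is the entire content of the 'libertarian' claim.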

  • The physics behind cumulus clouds | Scientia News

    An explanation of how cumulus clouds form and grow in the atmosphere The physics behind cumulus clouds Last updated: 14/07/25, 14:58 Published: 07/10/23, 12:51

    When you think of a cloud, it is most likely a cumulus cloud that pops into your head, with its distinct fluffy, cauliflower-like shape. The word ‘cumulus’ means ‘heaped’ in Latin, and aptly describes the clumpy shape of these detached clouds. They are among the lowest clouds in the sky, at altitudes of approximately 600 to 1,000 metres, while the highest clouds form nearly 14 km up in the atmosphere. Depending on the position of the clouds in relation to the sun, they can appear brilliant white or a more foreboding grey. Cumulus clouds are classified into four species: cumulus humilis, which are wider than they are tall; cumulus mediocris, which have similar widths and heights; cumulus congestus, which are taller than they are wide; and finally cumulus fractus, which have blurred edges, as this is the cloud in its decaying form. Cumulus clouds are often associated with fair weather, with cumulus congestus being the only species that produces rain. So, how do cumulus clouds form, and why are they associated with fair weather?

    To understand the formation of these clouds, think of a sunny day. The sun shines on the land and causes surface heating. The warm surface heats the air above it, which causes this air to rise in thermals, or convection currents. The air in the thermal expands and becomes less dense as it rises through the surrounding cool air. The water vapour carried upwards in the convection current condenses when it gets cool enough and forms a cumulus cloud. Due to the varying properties of different surface types, some surfaces are better at generating thermals than others.
    For example, the sun’s radiation warms the surface of land more efficiently than the sea, leading to the formation of cumulus clouds over land rather than over the sea. This is because water has a higher heat capacity than land, meaning it takes more heat to warm the water than the land. As cumulus clouds form on top of independent thermals, they appear as individual floating puffs. But what happens when cumulus clouds are knocked off the perch of their thermal by a breeze? How do they keep growing from an innocent, lazy cumulus humilis to a dark cumulus congestus threatening rain showers? Latent heat gives us the answer. This is the energy that is absorbed, or released, by a body when it changes state. A cumulus cloud forms at the top of a thermal as water molecules condense (changing state from a gas to a liquid) to form water droplets. When this happens, the latent heat of condensation warms the surrounding air, causing it to expand and rise further, repeating the cycle and forming the characteristic cauliflower mounds of the cloud. The development of a cumulus humilis into a cumulus congestus depends on the available moisture in the atmosphere, the strength of the sun’s radiation to form significant thermals, and whether there is a layer of warmer air higher up in the atmosphere that can halt the rising thermals. If the conditions are right, a cumulus congestus can keep growing and form a cumulonimbus cloud, which is an entirely different beast, more than deserving of its own article. So, the next time you see a cumulus cloud wandering through the sky, you will know how it came to be there.

    Written by Ailis Hankinson

    Related article: The physics of LIGO
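The latent-heat feedback described in this article can be put on a back-of-envelope footing. The two constants below are standard textbook approximations, and the figure of 1 g of water condensing per kilogram of air is an illustrative assumption, not a measured value:

```python
# Rough warming of an air parcel from the latent heat of condensation.
L_COND = 2.5e6   # J/kg, approx. latent heat released as water vapour condenses
CP_AIR = 1005.0  # J/(kg*K), approx. specific heat of dry air at constant pressure

def parcel_warming(condensed_kg_per_kg_air):
    """Temperature rise of the parcel: dT = L * m_water / (cp * m_air)."""
    return L_COND * condensed_kg_per_kg_air / CP_AIR

# Condensing just 1 g of water per kg of air warms the parcel by ~2.5 K,
# enough to keep it buoyant relative to the surrounding cooler air.
print(round(parcel_warming(0.001), 2))  # 2.49
```

A couple of degrees of extra warmth is significant on the scale of a rising parcel, which is why condensation can sustain the updraft that builds the cloud's cauliflower mounds.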

  • The interaction between circadian rhythms and nutrition | Scientia News

    The effect of sleep on nutrition (nutrition timing) The interaction between circadian rhythms and nutrition Last updated: 27/04/25, 11:20 Published: 01/05/25, 07:00

    The circadian system regulates numerous biological processes on a roughly 24-hour cycle, helping the organism adapt to the day-night rhythm. Among other things, circadian rhythms regulate metabolism, energy expenditure, and sleep, and meal timing is a powerful cue for this system. Evidence has shown that meal timing has a profound impact on health, gene expression, and lifespan. Properly timed feeding, in accordance with the body's natural circadian rhythms, might improve metabolic health and reduce chronic disease risk.

    Circadian rhythms

    Circadian rhythms are controlled by the central clock of the brain, which coordinates biological functions with the light-dark cycle. Along with meal timing, circadian rhythms influence key elements of metabolism such as insulin sensitivity, fat storage, and glucose metabolism. When meal timing is not synchronised with the body's natural rhythm, it can cause circadian misalignment, disrupting metabolic processes and contributing to obesity, diabetes, and cardiovascular diseases. The literature indicates that it is best to eat during the daytime, synchronised with the body's active phase. Eating late at night or in the evening, when the body's circadian rhythm is directed towards sleep, could impair metabolic function and lead to weight gain, insulin resistance, and numerous other conditions. Also, having larger meals in the morning and smaller meals later in the evening has been linked to improved metabolic health, sleep quality, and even lifespan.
    A time-restricted eating window, in which individuals eat all meals within an approximately 10-12 hour window, holds promise for improving human health outcomes such as glucose metabolism, inflammation, harmful gene expression, and weight loss (Figure 1). It is also necessary to consider the impact of meal timing on gene expression. Our genes react to a number of stimuli, including environmental cues like food and light exposure. The gene expression underlying the body's metabolic, immune, and DNA repair processes is regulated by the circadian clock. Disturbances in meal timing influence the expression of these genes, which may result in greater susceptibility to disease and reduced lifespan. Certain nutrients, such as melatonin in cherries and grapes, and magnesium in leafy greens and nuts, can improve sleep quality and circadian entrainment. Omega-3 fatty acids in fatty fish and flax seeds have also been shown to regulate circadian genes and improve metabolic functions.

    Other species

    Meal timing varies widely among species, and animals have adapted such that food-seeking behaviour is entrained to circadian rhythms and environmental time cues. Nocturnal animals eat at night, when they are active (Figure 2), having evolved to align their meal times with their period of activity to maximise metabolic efficiency and lifespan. Humans, and most other diurnal animals, consume food during the day. In these animals, consuming most of their calories during the day supports metabolic processes like glucose homeostasis and fat storage. These species tend to have better metabolic health when their feeding regimen is synchronised with the natural light-dark cycle.

    Conclusion

    Meal timing is important in human health, genetics, and life expectancy.
Synchronising meal times with the body's circadian rhythms optimises metabolic function, reduces chronic disease incidence, and potentially increases longevity by reducing inflammatory genes and upregulating protective ones. This altered gene expression affects the way food is metabolised and metabolic signals are acted upon by the body. Humans naturally gravitate towards eating during daytime hours, while other creatures have feeding habits that are adaptively suited to their own distinct environmental needs. It is important to consider this science and incorporate it into our schedules to receive the best outcome from an activity that we do not normally think about. Written by B. Esfandyare Related article: The chronotypes REFERENCES Meléndez-Fernández, O.H., Liu, J.A. and Nelson, R.J. (2023). Circadian Rhythms Disrupted by Light at Night and Mistimed Food Intake Alter Hormonal Rhythms and Metabolism. International Journal of Molecular Sciences , [online] 24(4), p.3392. doi: https://doi.org/10.3390/ijms24043392 . Paoli, A., Tinsley, G., Bianco, A. and Moro, T. (2019). The Influence of Meal Frequency and Timing on Health in Humans: The Role of Fasting. Nutrients , [online] 11(4), p.719. Available at: https://www.ncbi.nlm.nih.gov/pubmed/30925707 . Potter, G.D.M., Cade, J.E., Grant, P.J. and Hardie, L.J. (2016). Nutrition and the circadian system. British Journal of Nutrition , [online] 116(3), pp.434–442. doi: https://doi.org/10.1017/s0007114516002117 . St-Onge MP, Ard J, Baskin ML, et al. Meal timing and frequency: implications for obesity prevention. Am J Lifestyle Med. 2017;11(1):7-16. Patterson RE, Sears DD. Metabolic effects of intermittent fasting. Annu Rev Nutr. 2017;37:371-393. Zhdanova IV, Wurtman RJ. Melatonin treatment for age-related insomnia. Endocrine. 2012;42(3):1-12. Prabhat, A., Batra, T. and Kumar, V. (2020). 
Effects of timed food availability on reproduction and metabolism in zebra finches: molecular insights into homeostatic adaptation to food-restriction in diurnal vertebrates. Hormones and Behavior, 125, p.104820.

  • Using Natural Substances to Tackle Infectious Diseases | Scientia News

Natural substances and their treatment potential Using Natural Substances to Tackle Infectious Diseases Last updated: 14/07/25, 15:11 Published: 06/06/23, 17:06 Introduction There is growing concern about antimicrobial resistance, especially among superbugs such as Methicillin-resistant Staphylococcus aureus (MRSA) and Carbapenem-resistant Enterobacteriaceae (CRE), which impact lives globally, mainly through fatalities. Given this predicament, it can seem that humanity is losing this battle. However, healthcare professionals could make greater use of natural products, which are chemicals made by plants, animals and even microorganisms. These include resources such as wood and cotton, as well as foods like milk and cacao. In the context of medicinal treatments, an important justification for using more natural products is that although synthetic or partially synthetic drugs are effective for treating countless diseases, one article found that 8% of hospital admissions in the United States and approximately 100,000 fatalities per year were due to adverse side effects from these drugs. This article explores three natural products, each with similar and unique health properties that can be harnessed to tackle infectious diseases and the consequences of leaving them unaddressed (i.e. antimicrobial resistance). Honey One of the most famous natural products, referenced across many areas of research and used as a food and remedy for thousands of years, is honey. Its properties range from antibacterial to antioxidant, suggesting that when applied clinically, honey has the potential to stop pathogenic bacteria.
For example, honey can protect the gastrointestinal system against Helicobacter pylori , which causes stomach ulcers. In disc diffusion assays, the inhibitive properties of honey were demonstrated when honey samples were evaluated as a whole rather than as individual ingredients. This implies that the macromolecules in honey (carbohydrates, proteins and lipids) work in unison with other biomolecules, illustrating that honey is a distinctive remedy for preventing bacterial growth. For tackling infectious diseases, particularly wound infections, honey's medicinal properties offer many applications, and because it is a natural product, honey is unlikely to present drastic side effects to a patient upon administration. Garlic Another natural product that can be effective against microorganisms is garlic because, like honey, it contains antimicrobial and antioxidative compounds. A study evaluated different garlic phenotypes originating from Greece and discovered that they were effective against Proteus mirabilis and Escherichia coli , as well as inhibiting Candida albicans and C. krusei . As for fresh garlic juice (FGJ), it increases the zone of inhibition in various pathogens at concentrations of 10% and above, and displays minimum inhibitory concentrations (MICs) in the 4-16% range. Therefore, garlic in solid or liquid form shows potential as a natural antimicrobial agent, especially against pathogenic bacteria and fungi. Like honey, it has multiple applications and should be further studied to isolate the chemical compounds involved in fighting infectious diseases. Turmeric Curcuma longa (also known as turmeric) is another natural product with unique properties, like garlic and honey, making it a suitable candidate against various microbes.
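As a brief aside on method: MIC figures such as those quoted for fresh garlic juice are typically obtained from a broth-dilution protocol, in which a two-fold dilution series is incubated and the lowest concentration showing no visible growth is read off as the MIC. A minimal Python sketch of that read-out step, using hypothetical dilution values rather than data from the study above:

```python
def minimum_inhibitory_concentration(results):
    """Return the lowest tested concentration with no visible growth.

    `results` maps concentration (here % v/v fresh garlic juice,
    hypothetical values) to True if microbial growth was observed.
    Returns None if growth occurred at every concentration tested.
    """
    inhibitory = [conc for conc, grew in results.items() if not grew]
    return min(inhibitory) if inhibitory else None

# Hypothetical two-fold dilution series (% v/v): growth seen up to 4%
dilution_series = {1: True, 2: True, 4: True, 8: False, 16: False}
print(minimum_inhibitory_concentration(dilution_series))  # -> 8
```

Real MIC determination also standardises inoculum size and incubation conditions (e.g. per CLSI guidelines); the sketch captures only the final read-out.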
Turmeric, a member of the ginger family, contains the pigment curcumin, which can tackle diverse microbes through the numerous mechanisms illustrated below in Figure 2 . That said, curcumin has drawbacks: it is highly hydrophobic, has low bioavailability and breaks down quickly. However, when paired with nanotechnology for delivery into the human body, its clinical applications become advantageous; a further observation is that curcumin can work collaboratively with other plant-derived chemicals to stop antibiotic-resistant bacteria. One specific bacterial strain that turmeric can attack is Clostridium difficile, a superbug that causes diarrhoea. A study used 27 strains to measure the MICs of turmeric constituents, particularly curcuminoids and curcumin. The results showed reduced C. difficile growth in the concentration range 4-32 μg/mL. Moreover, the curcuminoids had no negative impact on the gut microbiome, and curcumin was more effective at stopping C. difficile toxin production than fidaxomicin. Thus, turmeric is efficacious as a natural antimicrobial chemical and, with further experimentation (as with honey and garlic), it could be harnessed to prevent infectious diseases and mitigate their impact on human lives. Conclusion Considering the examples of natural products in this article, and others not mentioned, it is clear that they can be powerful in the battle against infectious diseases and the problems associated with them, mainly antimicrobial resistance. They are readily available to purchase in markets and shops at low cost, making them convenient. Moreover, populations in Eastern countries like China and India have traditionally used, and still use, these materials for treating pain and illness. In turn, manufacturing medicines from natural products on a larger scale has the prospect of preventing infectious diseases and even alleviating those that patients currently have.
Written by Sam Jarada Related article: Mechanisms of pathogen evasion

  • Silicon hydrogel contact lenses | Scientia News

An engineering case study Silicone hydrogel contact lenses Last updated: 17/07/25, 11:08 Published: 29/04/24, 10:59 Introduction Contact lenses have a rich and extensive history dating back over 500 years: in 1508, Leonardo da Vinci first conceived the idea. It was not until the late 19th century that the concept of contact lenses as we know them was realised. In 1887, F.E. Muller was credited with making the first eye covering that could improve vision without causing irritation. This eventually led to the first generation of hydrogel-based lenses, as the development of the polymer hydroxyethyl methacrylate (HEMA) allowed Rishi Agarwal to conceive the idea of disposable soft contact lenses. Silicone hydrogel contact lenses dominate the contemporary market. Their superior properties have extended wear options and transformed the landscape of vision correction. These small but complex items continue to evolve, benefiting wearers worldwide. Indeed, the most recent generation of silicone hydrogel lenses has recently been released and aims to phase out existing products. Benefits of silicone hydrogel lenses This material offers many benefits in this application. For example, its higher oxygen permeability improves user comfort and experience through increased oxygen transmissibility. These properties are furthered by the lens's moisture retention, which allows longer wear times without compromising comfort or eye health. Silicone hydrogel lenses thus aimed to eradicate the drawbacks of traditional hydrogel lenses, including low oxygen permeability, lower lens flexibility, and dehydration causing discomfort and long-term issues. This groundbreaking invention has revolutionised convenience and hygiene for users.
The structure of silicone hydrogel lenses Lenses are fabricated from a blend of two materials: silicone and hydrogel. The silicone component provides high oxygen permeability, while the hydrogel component contributes comfort and flexibility. Silicone is a synthetic polymer and is inherently oxygen-permeable; it allows more oxygen to reach the cornea, promoting eye health and avoiding hypoxia-related symptoms. Its polymer chains form a network, creating pathways for oxygen diffusion. Hydrogel materials, by contrast, are hydrophilic polymers that retain water, keeping the lens moist and comfortable while contributing to the lens's flexibility and wettability. The two materials are combined using cross-linking techniques, which stabilise the matrix to make the most of both sets of properties and prevent dissolution. (See Figure 1 ). There are two forms of cross-linking that enable the production of silicone hydrogel lenses: chemical and physical. Chemical cross-linking involves covalent bonds between polymer chains, enhancing the lens's mechanical properties and stability. Physical cross-links include ionic interactions, hydrogen bonding, and crystallisation. Both techniques contribute to the lens's structure and properties and can be enhanced with polymer modifications. In fact, silicone hydrogel macromolecules have been modified to optimise properties such as miscibility with hydrophilic components, clinical performance, and wettability. The new generation of silicone hydrogel contact lenses Properties Studies show that wearers of silicone hydrogel lenses report higher comfort levels throughout the day, and at the end of the day, compared to conventional hydrogel lenses. This is attributed to the fact that they allow around 5 times more oxygen to reach the cornea. This is significant because reduced oxygen supply can lead to dryness, redness, blurred vision, discomfort, and even corneal swelling.
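The "around 5 times more oxygen" comparison is usually expressed through oxygen transmissibility, Dk/t: the material's oxygen permeability (Dk, in barrer units) divided by the lens's centre thickness (t). A quick sketch with illustrative permeability and thickness values, not manufacturer-quoted figures:

```python
def transmissibility(dk, thickness_cm):
    """Oxygen transmissibility Dk/t: permeability (barrer) over centre thickness (cm)."""
    return dk / thickness_cm

# Illustrative values only: both lenses taken at 0.008 cm centre thickness
hydrogel = transmissibility(25.0, 0.008)    # conventional hydrogel
silicone = transmissibility(140.0, 0.008)   # silicone hydrogel
print(round(silicone / hydrogel, 1))  # -> 5.6
```

Because both example lenses share the same thickness, the ratio reduces to the permeability ratio; in practice silicone hydrogels can also be made thicker for durability while still transmitting more oxygen than thin conventional hydrogels.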
What’s more, the most recent generation of lenses has further improved material properties, the first of which is enhanced durability and wear resistance. This is attributed to their complex and unique material composition, which maintains their shape and makes them suitable for various lens designs. Additionally, they exhibit a balance between hydrophilic and hydrophobic properties, which has traditionally caused issues with surface wettability; this generation of products has overcome the problem through surface modifications that improve wettability and therefore comfort. Not only this, but silicone hydrogel materials attract relatively fewer protein deposits, and reduced protein buildup leads to better comfort and less frequent lens replacement. Manufacturing Most current silicone hydrogel lenses are produced using one of two key processes: cast moulding or lathe cutting. In lathe cutting, the material is polymerised into solid rods, which are then cut into buttons for further processing in a computerised lathe to create the lenses. Furthermore, surface modifications are employed to enhance the finished lens. For example, plasma surface treatments enhance biocompatibility and improve surface wettability compared to earlier silicone elastomer lenses. Future innovations There are various future expansions related to this material and this application. Currently, researchers are exploring ways to create customised and personalised lenses tailored to an individual's unique eye shape, prescription, and lifestyle. One approach uses 3D printing and digital scanning to allow precise fitting. Although this is feasible, there are challenges relating to scalability and cost-effectiveness while ensuring quality. Moreover, another possible expansion is smart contact lenses, which aim to go beyond simply improving the user's vision.
For example, smart lenses are currently being developed for glucose and intraocular pressure monitoring, to benefit patients with diseases such as diabetes and glaucoma respectively. The challenges associated with this idea are data transfer, oxygen permeability and, therefore, comfort. (See Figure 2 ). Conclusion In conclusion, silicone hydrogel lenses represent a remarkable fusion of materials science and engineering. Their positive impact on eye health, comfort, and vision correction continues to evolve. As research progresses, we can look forward to even more innovative solutions benefiting visually impaired individuals worldwide. Written by Roshan Gill Related articles: Semi-conductor manufacturing / Room-temperature superconductor / Titan Submersible / Nanogels REFERENCES Journal of Optics (Optical Society of India), Volume 53, Issue 1. Springer; February 2024. Lamb J, Bowden T. The history of contact lenses. Contact Lenses. 2019 Jan 1:2-17. Ţălu Ş, Ţălu M, Giovanzana S, Shah RD. A brief history of contact lenses. Human and Veterinary Medicine. 2011 Jun 1;3(1):33-7. Brennan NA. Beyond flux: total corneal oxygen consumption as an index of corneal oxygenation during contact lens wear. Optometry and Vision Science. 2005 Jun 1;82(6):467-72. Dumbleton K, Woods C, Jones L, Fonn D, Sarwer DB. Patient and practitioner compliance with silicone hydrogel and daily disposable lens replacement in the United States. Eye & Contact Lens. 2009 Jul 1;35(4):164-71. Nichols JJ, Sinnott LT. Tear film, contact lens, and patient-related factors associated with contact lens-related dry eye. Investigative Ophthalmology & Visual Science. 2006 Apr 1;47(4):1319-28. Jacinto S. Rubido. Ocular response to silicone-hydrogel contact lenses; 2004. Musgrave CS, Fang F. Contact lens materials: a materials science perspective. Materials. 2019 Jan 14;12(2):261. Shaker LM, Al-Amiery A, Takriff MS, Wan Isahak WN, Mahdi AS, Al-Azzawi WK.
The future of vision: a review of electronic contact lenses technology. ACS Photonics. 2023 Jun 12;10(6):1671-86. Kim J, Cha E, Park JU. Recent advances in smart contact lenses. Advanced Materials Technologies. 2020 Jan;5(1):1900728.

  • A love letter from outer space: Lonar Lake, India | Scientia News

The lunar terrain A love letter from outer space: Lonar Lake, India Last updated: 09/10/25, 10:05 Published: 10/04/25, 07:00 Around 50,000 years ago, outer space gifted the earth with a crater that formed the foundations of the world's third largest natural saltwater lake, situated within a flat volcanic area known as the Deccan Plateau. The crater resulted from a 2 million tonne meteorite tunnelling through the earth's atmosphere at a velocity of 90,000 km/hour and colliding with the Deccan Plateau. As time slipped away, pressure and heat melted the basalt rock tucked underneath the impact site, and accumulating rainwater filled the crater. These foundations curated what is famously known today as 'Lonar Lake'. What is unique about Lonar Lake is that it is the only meteorite crater formed in basaltic terrain, analogous to lunar terrain. Additionally, the remnants bear similarities to the terrestrial composition of Mercury, which features craters, basaltic rock and smooth plains resulting from volcanic activity. Several investigations have sought to test the theory that the crater formed from the impact of a meteorite. One such collaborative study, conducted by the Smithsonian Institution of Washington D.C., USA, the Geological Survey of India and the US Geological Survey, involved drilling holes at the bottom of the crater and scrutinising the compositions of the recovered rock samples. When tested in the laboratory, the samples were found to contain remnants of basaltic rock modified by the high heat and pressure of the crater collision. In addition, shatter cones, cone-shaped fractures caused by high-velocity shock waves transmitted into the rocks, were identified. These two observations align with the meteorite impact phenomenon.
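The figures quoted above, a 2 million tonne meteorite travelling at 90,000 km/hour, imply an enormous impact energy. A back-of-envelope kinetic-energy estimate from those two numbers, using the standard TNT-equivalence constant of 4.184 × 10^15 J per megaton:

```python
# Back-of-envelope impact energy from the figures quoted in the text
mass_kg = 2_000_000 * 1_000            # 2 million tonnes -> kg
speed_m_s = 90_000 * 1_000 / 3_600     # 90,000 km/hour -> 25,000 m/s
kinetic_energy_j = 0.5 * mass_kg * speed_m_s ** 2
megatons_tnt = kinetic_energy_j / 4.184e15  # 1 Mt TNT = 4.184e15 J
print(f"{kinetic_energy_j:.2e} J ≈ {megatons_tnt:.0f} Mt TNT")  # -> 6.25e+17 J ≈ 149 Mt TNT
```

This ignores atmospheric deceleration and how the energy is partitioned into heat, ejecta and seismic waves, so it is a rough upper bound rather than a modelled value.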
Additionally, along with its fascinating astronomical properties, the chemical composition of the lake within the crater has intrigued scientists. Its dark green colour results from the presence of the blue-green alga Spirulina. The water also has a pH of 10, making it alkaline in nature and supporting the development of aquatic ecosystems. One explanation for the alkalinity of the water is immediate sulphide formation, in which groundwater of meteoritic origin containing CO2 undergoes a precipitation reaction with alkaline ions, leaving a carbonate precipitate of an alkaline nature. What is also striking about the composition of the water is its saline nature, which coexists with the alkaline environment, a rare phenomenon in ecological science. The conception of the lake, from the matrimony of Earth with debris from outer space, has left its imprints on the physical world. It is a love letter, written in basaltic stone and saline water, fostering innovation in ecology. The inscription of the meteorite's journey within the crater has bridged two opposing worlds: one originating millions of miles away from humans, and one residing in the natural grounds of our souls. Written by Shiksha Teeluck Related articles: Are aliens on Earth? / JWST / The celestial blueprint of time: Stonehenge REFERENCES Taiwade, V. S. (1995). A study of Lonar lake—a meteorite-impact crater in basalt rock. Bulletin of the Astronomical Society of India, 23, 105–111. Tambekar, D. H., Pawar, A. L., & Dudhane, M. N. (2010). Lonar Lake water: Past and present. Nature Environment and Pollution Technology, 9(2), 217–221.
