Search Index
355 results found
- Behavioural Economics III | Scientia News
Loss aversion: the power of framing in decision-making and why we are susceptible to poor decisions

Last updated: 06/11/25, 11:56 | Published: 15/10/24, 11:18

This is article no. 3 in a series on behavioural economics. Next article: Libertarian Paternalism. Previous article: The endowment effect.

In the realm of decision-making, the way information is presented can dramatically influence the choices people make. This phenomenon, known as framing, plays a pivotal role in how we perceive potential outcomes, especially when it comes to risks and rewards. This article explores the groundbreaking work of Tversky and Kahneman, who sought to explain how different framings of identical scenarios can lead to vastly different decisions. By examining their research, we can gain insight into why we are susceptible to making poor decisions and understand the underlying psychological mechanisms that drive our preferences.

The power of framing

Imagine that the UK is preparing for the outbreak of an unusual disease, which is expected to kill 600 people, and that two alternative programs to combat the disease have been proposed. In a classic paper, Tversky and Kahneman examined how the way this information is conveyed affects choices, using two scenarios.

In scenario 1: if program A is adopted, 200 people will be saved. If program B is adopted, there is a 1/3 probability that 600 people will be saved and a 2/3 probability that no people will be saved.

In scenario 2: if program A is adopted, 400 people will die. If program B is adopted, there is a 1/3 probability that nobody will die and a 2/3 probability that 600 people will die.

Notice that both scenarios convey exactly the same information; only the presentation differs. So surely there should be no difference between the two? In fact, there is a huge difference. Scenario 2 has been given a loss frame, which emphasises the potential negative outcomes. A short sidestep shows why this matters.

Loss aversion is the phenomenon whereby 'losses loom larger than gains'. In other words, the negative impact of losing something is greater than the positive impact of an equal-sized gain. Image 1 illustrates a loss aversion function: a loss of £100 produces a much larger negative reaction than the positive reaction to a gain of £100. To put this into perspective, imagine it is your birthday and someone gifts you some money. You would hopefully feel grateful and happy, but perhaps not overwhelmingly so. If, on the contrary, you soon discover that you have lost a wallet or purse containing the same amount of money, the psychological impact is often far more severe. Losses are perceived to be much more significant than gains.

Returning to the two scenarios: in scenario 2, program A emphasises the certain death of 400 people, whereas program B carries a chance of losing more but also a chance of saving everyone. Statistically, you should be indifferent between the two, but because the guaranteed loss of 400 people is so overwhelming, people would much rather gamble and take the chance. The same mechanism is part of why gambling is so addictive.
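To put both claims in numbers — the statistical indifference between the programs, and the asymmetry between losses and gains — here is a minimal sketch. The value-function parameters (λ ≈ 2.25, α ≈ 0.88) are the commonly cited estimates from Tversky and Kahneman's later (1992) work, quoted purely for illustration; they are not figures from this article.

```latex
% Both programs have the same expected outcome:
\[
\mathbb{E}[\text{A}] = 200 \text{ saved}, \qquad
\mathbb{E}[\text{B}] = \tfrac{1}{3}(600) + \tfrac{2}{3}(0) = 200 \text{ saved}.
\]
% Prospect-theory value function ("losses loom larger than gains"):
\[
v(x) =
\begin{cases}
x^{\alpha}, & x \ge 0 \\[2pt]
-\lambda(-x)^{\alpha}, & x < 0
\end{cases}
\qquad \alpha \approx 0.88, \ \lambda \approx 2.25,
\]
% so v(+100) is roughly +57.5 while v(-100) is roughly -129:
% a £100 loss weighs more than twice as much as an equal-sized gain.
```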
When you lose money in a gamble, you feel compelled not to accept the loss, and you continue betting in an effort to win back what you once had. What Kahneman and Tversky found was that in scenario 1, 72% of people chose program A, while in scenario 2, 78% of people chose program B. Clearly, how a policy is framed makes a huge difference to its popularity. By saying "200 people will be saved" rather than "400 people will die" out of the same 600 people, our perception changes considerably.

But on a deeper level, why might this be, and why is knowing this distinction important? In my previous article on the endowment effect, we saw that once you own something, you feel possessive over it, and losing something you have had to work for, like money, makes you feel as though that hard work has gone to waste. But this explanation struggles to translate to an example involving people's lives. In researching this article, I came across the evolutionary psychology perspective and found it both interesting and persuasive. From an evolutionary standpoint, loss aversion can be seen as an adaptive trait. For our ancestors, losses such as losing food or shelter could have dire consequences for survival, whereas gains such as finding extra food were beneficial but not as crucial for immediate survival. We may therefore be hardwired to avoid losses, and this has carried over into modern-day loss aversion.

Knowing about this matters in at least two areas of life. The first is healthcare. As demonstrated at the beginning of the article, people's decisions can be swayed by the way healthcare professionals and the government frame policies. Understanding this allows you to weigh the risks yourself and decide whether a policy is right for you. Similarly, policymakers can shape public opinion by highlighting the benefits or costs of action or inaction to suit their own political agenda, so recognising loss aversion allows for more informed decision-making. The second is investing: people tend to hold on to an investment that is performing badly, or sitting at a loss, in the hope that it will recover. If that belief is justified by analysis or good judgement, holding may be a sound decision; often, however, loss aversion creates a false sense of hope, much like the gambling example above. If you are a keen investor, it is important to be aware of your own investment psychology, so that you can maintain an objective view of a company for as long as you remain invested.

Evidently, understanding how we think and make decisions can play an important role in improving the choices we make in our personal and professional lives. By recognising the impact of loss aversion and framing, we become more aware of the unconscious biases that drive us to avoid losses at all costs, even when those decisions are not in our best interest. Whether in healthcare, investing, or everyday life, cultivating this awareness allows for more rational, informed choices that better align with long-term goals rather than short-term fears. In a world where information is constantly framed to sway public opinion, knowing the psychology behind our decision-making is a powerful tool for making wiser, more deliberate decisions.

Written by George Chant

References:
Tversky, A. and Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), pp. 453-458. doi: 10.1126/science.7455683. PMID: 7455683.
Image provided by Economicshelp.org: https://www.economicshelp.org/blog/glossary/loss-aversion/
- The search for a room-temperature superconductor | Scientia News
A (possibly) new class of semiconductors

Last updated: 14/07/25, 15:02 | Published: 13/01/24, 15:19

In early August, the scientific community was buzzing with excitement over the claimed discovery of the first room-temperature superconductor. As some rushed to prove the existence of superconductivity in the material known as LK-99, others were sceptical of the validity of the claims. After weeks of investigation, experts concluded that LK-99 was likely not the elusive room-temperature superconductor but rather a different type of magnetic material with interesting properties. But what if we did stumble upon a room-temperature superconductor? What could this mean for the future of technology?

Superconductivity is a property of some materials at extremely low temperatures that allows the material to conduct electricity with no resistance. Classical physics cannot explain this phenomenon; instead, we have to turn to quantum mechanics for a description of superconductors. Inside a superconductor, electrons pair up and move through the structure of the material without experiencing any friction. Thermal energy breaks these pairs apart, so they only survive at low temperatures. This theory, known as BCS theory after the physicists who formulated it (Bardeen, Cooper and Schrieffer), therefore cannot explain the existence of a high-temperature superconductor. To describe high-temperature superconductors, such as any occurring at room temperature, more complicated theories are needed.

The magic of superconductors lies in their zero resistance. Resistance wastes energy in circuits through heating, leading to unwanted power loss and inefficient operation. Physically, resistance is caused by electrons colliding with atoms in the structure of a material, losing energy in the process. Because electrons move through superconductors without experiencing any collisions, there is no resistance. Superconductors are useful as circuit components because they waste no power through heating effects and are, in this respect, completely energy-efficient.

Normally, using superconductors requires complex methods of cooling them down to typical superconducting temperatures. For example, the first copper-oxide superconductor discovered only becomes superconducting at around 35 K, or in other words, roughly 240 °C colder than the temperature at which water freezes. Such cooling methods are expensive, which prevents them from being implemented on a wide scale. A room-temperature superconductor, however, would give access to the beneficial properties of the material, such as its zero resistance, without the need for extreme cooling. The current record holders for highest-temperature superconductors at ambient pressure are the cuprate superconductors, at around −135 °C. These are a family of materials made up of layers of copper oxides alternating with layers of other metal oxides. As the mechanism for their superconductivity is yet to be revealed, scientists are still scratching their heads over how these materials exhibit superconducting properties. Once this mechanism is discovered, it may become easier to predict and find high-temperature superconducting materials, and may lead to the first room-temperature superconductor.
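As a minimal illustration of why zero resistance matters, the power dissipated as heat in an ordinary conductor grows with the square of the current. The figures below are illustrative numbers chosen for this sketch, not values from the article.

```latex
% Joule heating in an ordinary conductor:
\[
P = I^{2}R
\]
% Illustrative example: a line carrying I = 1000 A through R = 1 ohm
% dissipates P = (1000)^2 * 1 = 1 MW as waste heat.
% In a superconductor R = 0, so P = I^2 * 0 = 0: no resistive loss.
```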
Until then, the search continues to unlock the next frontier in low-temperature physics…

For more information on superconductors: [1] Theory behind superconductivity; [2] Video demonstration.

Written by Madeleine Hales

Related articles: Semiconductor manufacturing / Semiconductor laser technology / Silicon hydrogel lenses / Titan Submersible
- The dopamine connection | Scientia News
How your gut influences your mood and behaviour

Last updated: 11/07/25, 10:02 | Published: 25/03/24, 12:01

Introduction to dopamine

Dopamine is a neurotransmitter derived from an amino acid called phenylalanine, which must be obtained through the diet from foods such as fish, meat and dairy. Dopamine is produced and released by dopaminergic neurons in the central nervous system and is found in several brain regions. The neurotransmitter acts via two mechanisms: wiring transmission and volume transmission. In wiring transmission, dopamine is released into the synaptic cleft and acts on postsynaptic dopamine receptors. In volume transmission, extracellular dopamine reaches neurons other than the postsynaptic ones: through processes such as diffusion, it arrives at receptors on neurons that are not in direct contact with the cell that released it. In both mechanisms, dopamine binds to receptors, transmitting signals between neurons and affecting mood and behaviour.

The link between dopamine and gut health

Dopamine is associated with positive emotional states, including pleasure, satisfaction and motivation, and these can be influenced by gut health; what you eat, among other factors, could therefore impact your mood and behaviour. This is supported by a study of the bidirectional gut-brain connection (Hamamah et al., 2022), which found that gut microbiota are important in maintaining dopamine concentrations via the gut-brain connection, also known as the gut microbiota-brain axis or vagal gut-to-brain axis. This is the communication pathway between the gut microbiota and the brain, facilitated by the vagus nerve, and it plays a role in the neuronal reward pathway that regulates motivational and emotional states. The fact that activating the vagal gut-to-brain axis leads to dopamine release suggests that modulating dopamine levels could be a potential treatment approach for dopamine-related disorders.

Gut microbiota such as Prevotella, Bacteroides, Lactobacillus, Bifidobacterium, Clostridium, Enterococcus and Ruminococcus can affect dopamine by modulating dopaminergic activity. These microbes are able to produce neurotransmitters, including dopamine, and their functions and bioavailability in the central nervous system and periphery are influenced by the gut-brain axis. Gut dysbiosis, the disturbance of the healthy intestinal flora, can contribute to dopamine-related disorders, including Parkinson's disease, ADHD, depression, anxiety and autism. Gut microbes that produce butyrate, a short-chain fatty acid, positively affect dopamine and contribute to reducing the symptoms seen in neurodegenerative disorders.

Dopamine as a treatment

Understanding the link between dopamine and gut health matters because it could point to new therapeutic targets and improve current methods used to prevent and restore deficiencies in dopamine function in different disorders. Most cells in the immune system carry dopamine receptors, which allow processes such as antigen presentation, T-cell activation and inflammation to be regulated. Further research into this could open up the possibility of using dopamine as a medication to treat disease by changing the activity of dopamine receptors.
Dopamine is therefore important in various physiological processes, in both the central nervous and immune systems. For example, studies have shown that schizophrenia can be treated with antipsychotic medications that target dopamine neurotransmission, including by reducing dysregulated dopamine transmission. Studies have shown promising results for dopamine-based treatment; nevertheless, further research is needed to understand the interactions between dopamine, motivation and gut health, and to explore how this knowledge can be used to create medications.

Conclusion

The bidirectional gut-brain connection shows the importance of gut microbiota in controlling dopamine levels. This connection influences mood and behaviour, but it also has the potential to lead to new and innovative dopamine-targeted treatments for conditions including dopamine-related disorders. For example, scientists could target and manipulate dopamine receptors in the immune system to regulate the processes mentioned above: antigen presentation, T-cell activation and inflammation. While current research has shown some promising results, further investigation is needed to better understand the connection between gut health and dopamine levels. Through consistent study, scientists can gain a deeper understanding of this mechanism and of how changes in gut microbiota could affect dopamine regulation and influence mood and behaviour.

Written by Naoshin Haque

Related articles: The gut microbiome / Crohn's disease / Microbes in charge
- Breaking down Tay-Sachs | Scientia News
Exploring the genetic roots of a neurological tragedy

Last updated: 15/05/25, 10:43 | Published: 20/04/24, 11:29

This is article no. 9 in a series on rare diseases. Next article: Ehlers-Danlos Syndrome. Previous article: Pseudo-Angelman Syndrome.

Tay-Sachs disease is a heritable metabolic condition that affects the neurons of the brain. The disease is most common in infants and young children, and in people of Ashkenazi Jewish descent, although it can occur in any ethnicity. Symptoms most commonly appear around six months of age, though they can also develop anywhere from five years old into the teenage years. There are three forms of the disease, each appearing at a different stage of life: infantile, juvenile and adult. The adult form is much rarer and non-fatal, but it can still cause neuronal dysfunction and psychosis. Early symptoms include mobility problems such as difficulty crawling; as the disease progresses, the child may suffer seizures and loss of vision and hearing. The classic infantile form is fatal within the first few years of life, typically by three to five years of age. In infants, infection and respiratory complications, such as pneumonia, are the most common causes of death.

Tay-Sachs is categorised as an autosomal recessive disease, meaning that two copies of the mutated HEXA gene must be present for an individual to display the phenotype. The HEXA gene is located on chromosome 15 and encodes part of an enzyme that nerve cells depend on. The carrier frequency of Tay-Sachs depends strongly on ethnic background: roughly 1 in 30 for those of Ashkenazi Jewish descent and about 1 in 300 for others. Whether the disease develops early or late is determined by the specific type of HEXA mutation inherited within the family: if one child in a family has the infantile form, any other affected family member will also have the infantile form. When both parents are carriers of a Tay-Sachs mutation, each pregnancy carries a 25% chance that the child will inherit two mutated copies of HEXA and be affected by the disease, a 50% chance that the child will be a carrier like the parents, and a 25% chance that the child will inherit two normal copies and be unaffected.

Because of the gene involved, the disease is commonly labelled a hexosaminidase A deficiency. The HEXA gene codes for the alpha subunit of the enzyme β-hexosaminidase A, which works in lysosomes to break down molecules so that they can be recycled by the cell. This key cellular function supports processes such as apoptosis (programmed cell death) and defence against bacteria that could damage the cell. In individuals with a HEXA mutation, less functional β-hexosaminidase A is produced, which results in less degradation of GM2 ganglioside. GM2 ganglioside is a lipid involved in a host of processes, including membrane organisation, neuronal differentiation and signal transduction, and because it is not degraded, it accumulates inside the body.
The rate at which the lipid accumulates inside the cell ultimately determines the form of Tay-Sachs an individual will develop. It is worth noting that this GM2 gangliosidosis pathology also includes other diseases, such as Sandhoff disease and the AB variant, which have similar prognoses. The disease specifically targets the brain because gangliosides are the main lipids composing neuronal plasma membranes. Their expression is specific to brain regions, shaping key neurodevelopmental processes such as neural tube formation and synaptogenesis. Ganglioside synthesis is a highly regulated process carried out by glycosyltransferases and controlled at the transcriptional and post-transcriptional levels. Gangliosides also modulate ion channels and receptor signalling, which are crucial for neurotransmission, memory and learning. The exact mechanism by which ganglioside accumulation due to HEXA malfunction leads to neuronal death remains unclear.

Figure 1 illustrates the dysfunction of the alpha subunit encoded by HEXA, which can no longer break down GM2 gangliosides. The result is an accumulation of GM2 within the lysosome, in contrast to its concentration in the external environment. This accumulation causes lysosomal dysfunction and eventually cell damage, which leads to the symptoms commonly associated with Tay-Sachs.

Mouse models have been created to understand the GM2 pathway in greater detail and to develop treatments, but their usefulness is limited because mice do not break down GM2 via the same pathway as humans. In addition, since the disease may already be active before birth, it is hard to establish the damage done to a baby inside the womb, making the infantile form very challenging to reverse. The later-onset types of Tay-Sachs, however, might respond to treatment: implementing ganglioside synthesis inhibitors in combination with existing DNA and enzymatic screening programmes holds promise for eventually managing and controlling this condition.

Parents can undergo genetic screening to assess their risk of carrying a Tay-Sachs mutation via a simple blood test that examines the DNA for mutations in the HEXA gene. Screening is particularly important for couples who have a family history of Tay-Sachs disease or who belong to ethnic groups with a higher prevalence of the condition. Early detection allows couples to make informed reproductive decisions, such as pursuing in vitro fertilisation with preimplantation genetic testing, or opting for prenatal testing during pregnancy to determine whether the foetus has inherited the mutated gene.

The acronym SHADES can help parents recognise potential signs of Tay-Sachs disease in their child and seek prompt medical evaluation if any symptoms arise:

S - Startle response
H - Hearing loss
A - Affecting vision
D - Developmental delay
E - Epileptic seizures
S - Swallowing difficulties

Written by Imron Shah

References:
National Center for Biotechnology Information (2015). Tay-Sachs disease. Available at: https://www.ncbi.nlm.nih.gov/books/NBK22250/.
Leal, A.F., Benincore-Flórez, E., Solano-Galarza, D., Garzón Jaramillo, R.G., Echeverri-Peña, O.Y., Suarez, D.A., Alméciga-Díaz, C.J. and Espejo-Mojica, A.J. (2020). GM2 gangliosidoses: clinical features, pathophysiological aspects, and current therapies. International Journal of Molecular Sciences, 21(17), p. 6213. doi: 10.3390/ijms21176213.
Ramani, P.K. and Parayil Sankaran, B. (2022). Tay-Sachs disease. StatPearls.
Available at: https://www.ncbi.nlm.nih.gov/books/NBK564432/.
- How rising food prices contribute to malnutrition | Scientia News
Food deserts

Last updated: 09/07/25, 14:18 | Published: 18/08/23, 20:13

Introduction

Over the past year, news articles have described how food has become more expensive, with people choosing between heating their homes and paying for groceries. According to the Office for National Statistics, the cost of food and non-alcoholic drink rose by 19.1% in the year to March 2023. There are various reasons for the increase, including Brexit, poor agricultural productivity and the weakening of the British pound. As a result, the spending habits of the general population have shifted towards ultra-processed foods (UPFs), which tend to be cheaper than minimally processed foods (MPFs). Yet UPFs are decidedly unhealthy: one cohort study found an 18% increase in mortality risk with each additional serving. For people living in food swamps and deserts, this is a harsh reality, and policies are needed to properly address it.

The difference between food deserts and swamps

Food deserts are places where populations have limited access to healthy, affordable food (i.e. MPFs). Several factors contribute to this phenomenon, such as lower income or geographic location, for instance a long distance to the nearest market; the rise in food prices described above can also be part of the problem. Food swamps, in contrast, are areas containing a high density of businesses that sell food lacking nutritional value, i.e. UPFs rather than MPFs. This also relates to the cost of groceries, because people living in food swamps are likely to purchase UPFs, which are both closer to hand and cheaper than MPFs. Both situations can contribute not only to obesity but also to the other forms of malnutrition explored below.

Malnutrition

To suffer from malnutrition means having an imbalance of nutrients. It can be categorised as undernutrition or overnutrition, and by disparity in macronutrients (carbohydrates, fats and proteins) or micronutrients (vitamins and minerals). Some countries experience specific forms of malnutrition, such as undernutrition, more than others, owing to ongoing warfare, lack of nutritional education and/or poverty. The impact of malnutrition on organs shown in Figure 1 occurs because deficiencies in certain macronutrients and/or micronutrients undermine the structure and functioning of the body. Another consequence of malnutrition is weight loss, as fat and muscle mass are depleted, leading to impaired muscle function.

Food deserts/swamps and malnutrition

Returning to food deserts and swamps, their impact on malnutrition can be drastic. For example, a review focusing on food insecurity (disrupted food intake or eating patterns due to low income or limited supplementary resources) suggested a link between malnutrition and food insecurity, along with a possible association between malnutrition and negative alterations to the gut microbiome, though more research is needed. Another review, looking at food insecurity in US adults and children, found that food-insecure adults ate fewer vegetables, fruits and dairy products, resulting in reduced intake of vitamins A and B6, calcium, magnesium and zinc. How do both reviews relate to food swamps and deserts?
Populations who are food-insecure are more likely to live in areas that lack access to healthy foods, i.e. food swamps and deserts.

Conclusion

Taking everything discussed in this article into account, governments in countries where food swamps and deserts are prevalent need to address the issue through effective policies. Otherwise, malnutrition and its associated chronic diseases could continue to rise, along with a potential susceptibility to infectious diseases due to the organ dysfunction that stems from malnutrition.

Written by Sam Jarada

Related articles: Food at the molecular level / Famine-induced epigenetic changes
- The bright future of smart bandages | Scientia News
In wound care

Last updated: 04/07/25, 12:57 | Published: 19/09/23, 16:30

Although wounds may seem trivial to the naked eye, they can pose a great threat to the healthcare system by overburdening health services through infection. It is therefore essential to manage wound care thoroughly, to reduce that burden and increase patients' quality of life. Wounds arise for an array of reasons, and they pose such a threat partly because of the limited ways we have had to treat them, which has led to problems such as antibiotic resistance and allergic reactions. Recently, to enable higher-quality treatment, a new invention known as the "smart bandage" has emerged, which uses nothing more than light-emitting diodes (LEDs) to promote wound healing.

The smart bandage is wireless and uses ultraviolet C (UVC) light to sterilise wounds and reduce the risk of infection. This in turn decreases the chance of nosocomial infection and opens the door to methods of disinfection beyond antibiotics or chemicals. The bandage is embedded with LEDs that emit UVC light at wavelengths of around 265-285 nm, driven by a controller, and it operates by exploiting UVC's germicidal and antimicrobial properties. Researchers produced a flexible inductive coil so that the technology could easily be inserted into conventional fabric bandages. The coil receives wireless power via magnetic resonance, so the UVC LEDs can run without batteries: a second coil, connected to the electrical mains, transmits power to the inductive coil, keeping the LEDs continuously supplied until the targeted bacteria in the wound are eradicated.

Scientists tested this technology on pathogens such as Pseudoalteromonas sp., bacteria associated with bloodstream infections, surgical sites and wounds. Once the bacteria had been cultured, the UVC LEDs were applied to the culture, which reduced the growth of the bacterial cells and, within six hours, stopped their growth completely by causing DNA damage that kills the bacterial cells. Currently, many wound-treatment protocols involve antibiotics, which over time can drive antibiotic resistance, straining health services and lengthening hospital stays. UVC-based bandages not only reduce these risks but are also environmentally friendly, thanks to their low operating cost and reusability. Figure 4 illustrates further advantages of this technology.

Looking forward, the revolutionary potential of smart bandages is undeniable. Ongoing research aims to integrate a monitoring device capable of sending live data about the wound to healthcare professionals. However, the results of this study have yet to be replicated and tested in clinical trials. Although these innovations show much promise for more flexible, higher-quality patient care, the technology is still in its infancy. Even so, the power of LEDs is remarkable, not only in their ability to treat but also in their economic benefits.

Written by Irha Khalid

Related article: Virtual reality in healthcare
- Can we blame our genes for excessive smoking and drinking? | Scientia News
A short exploration of the genetic predisposition behind human behaviours

Last updated: 09/07/25, 13:31 | Published: 13/01/24, 15:33

Research into how tobacco and alcohol addictions, among other detrimental behaviours, arise from a complex interplay between genetic and environmental factors has gradually developed and gained credibility. A collaborative effort involving over 100 international scientists, including researchers from the National Institutes of Health (NIH) and the National Institute on Drug Abuse (NIDA), embarked on a genome-wide association study (GWAS) to explore the heritable traits associated with tobacco and alcohol addiction. The study analysed data from 1.2 million people drawn from biobanks, epidemiological studies and genetic-testing companies, shedding light on the relationship between genetics and addiction behaviours.

The researchers discovered that smoking-related phenotypes, such as the age at which individuals began smoking, are genetically correlated with various diseases. In contrast, increased genetic risk for alcohol consumption was linked to reduced risk of many diseases. Previous studies had pinpointed 10 genes involved in the risk of tobacco and alcohol addiction; this study added to those genetic links by identifying more than 400 locations in the genome, containing over 500 variants, associated with critical functions involving dopamine regulation, glutamate transmission and acetylcholine activation in the brain. Another study, involving 3.4 million people of diverse ancestries, suggested that approximately 3,823 genetic variants may influence addiction behaviours, with specific variants associated with the age at which individuals start smoking and the number of cigarettes or alcoholic drinks consumed.

These studies could point towards a future in which genetic screening for addiction-relevant genes is available, which could be especially useful for people with relatives affected by addiction. They also offer perspective on whether certain genes can increase the likelihood of addiction to illegal drugs such as cocaine, heroin or MDMA. However, simply making people aware that they are at risk of developing addictions may be insufficient to deter them from risky behaviours, which suggests that screening for these genes is best offered as an optional assessment.

While the influence of environmental and social factors on tobacco and alcohol addiction has long been acknowledged and explored, these studies underscore the significant role genetics plays in determining an individual's susceptibility to nicotine and alcohol dependence. The prospect of predicting a person's risk of addiction can lead to early interventions and could prevent countless health-related fatalities associated with smoking and alcoholic beverages. This kind of primary prevention adds a different dimension to the risk factors for smoking and alcohol addiction, while also reducing the burden of these highly prevalent public health concerns.

Written by Maya El Toukhy

Related article: Smoking cessation

References:
New Scientist (n.d.). Thousands of genetic variants may influence smoking and alcohol use.
Available at: https://www.newscientist.com/article/2350516-thousands-of-genetic-variants-may-influence-smoking-and-alcohol-use/ [Accessed 23 Oct. 2023].
Today's Clinical Lab (n.d.). Do your genes predispose you to smoking and drinking? Available at: https://www.clinicallab.com/do-your-genes-predispose-you-to-smoking-and-drinking-26963 [Accessed 23 Oct. 2023].
University of Minnesota (2019). Hundreds of genes affecting tobacco and alcohol use discovered. Available at: https://twin-cities.umn.edu/news-events/hundreds-genes-affecting-tobacco-and-alcohol-use-discovered [Accessed 23 Oct. 2023].
Schlaepfer, I., Hoft, N. and Ehringer, M. (2008). The genetic components of alcohol and nicotine co-addiction: from genes to behavior. Current Drug Abuse Reviews, 1(2), pp. 124-134. doi: 10.2174/1874473710801020124.
- Origins of COVID-19 | Scientia News
Uncovering the truth behind the origins of the virus

Last updated: 10/07/25, 10:27 | Published: 08/10/23, 16:07

The quest for the crime of the century begins now!

Suspicion of the Wuhan Institute of Virology

Since the early epidemic reports in Wuhan, the origin of COVID-19 has been a matter of contention. Was SARS-CoV-2 the outcome of spontaneous transmission from animals to humans, or of scientific experimentation? Although most of the recorded initial cases occurred near a seafood market, Western intelligence agencies knew that the Wuhan Institute of Virology (WIV) was situated nine miles to the south. Researchers at the biosafety centre had combed Yunnan caves for bats harbouring SARS-like viruses, extracting genetic material from their saliva, urine and faeces. Moreover, the bat coronavirus RaTG13 (BatCoV RaTG13) shares 96% of its genome with SARS-CoV-2. Suspicion increased when it emerged that WIV researchers had worked with chimeric versions of SARS-like viruses capable of infecting human cells. However, similar "gain-of-function" studies in Western biosecurity institutions have shown that such slow increases in virulence can also occur naturally. Still, the coincidence that the pandemic began in the same city as the WIV was too obvious to ignore; according to two Chinese specialists, "the likelihood of bats flying to the market was quite remote".

Chan and Ridley's "Quest for the Origin of COVID-19"

Chan and Ridley have created a viral whodunit, their "quest for the origin of COVID-19", to excite the curiosity of armchair detectives and scientific sceptics. Both want an explanation for why a virus of unknown origin was detected in Wuhan and not in Yunnan, 900 kilometres to the south. The stakes could not be more significant: if the virus had been deliberately developed and spread by a Chinese laboratory, it would be the crime of the century. They are prudent in not going that far. They are, however, within their rights to cast doubt on the findings, since their concerns were shared by numerous coronavirus experts, even as others openly discounted the possibility of a non-natural origin and declared that the virus displayed no evidence of design at the time.

Is this the impartial and fair probe the world has been waiting for? They present no direct evidence that SARS-CoV-2 was engineered. Chan asserts, for example, that it seemed pre-adapted to human transmission "to an extent comparable to late-epidemic SARS-CoV", a claim based on a single spike-protein mutation that appears to "substantially enhance" the virus's ability to bind human receptor cells, meaning it had "apparently stabilised genetically" by the time it was identified in Wuhan. Nonetheless, this is a staggeringly misleading statement. As the alphabet soup of variants shows, the coronavirus has undergone multiple alterations that have consistently increased its fitness. Additionally, viruses isolated from pangolins attach to human receptor cells more efficiently than SARS-CoV-2 does, indicating the possibility of further adaptation. According to two virologists, the SARS-CoV-2 virus was not wholly adapted to humans but was "merely enough".

Evidence for design of SARS-CoV-2, and possible natural origins of the virus

Another striking feature of SARS-CoV-2 is its furin cleavage site, which enables it to infect human cells by priming the spike protein for entry.
An identical sequence is present in highly pathogenic influenza viruses and had previously been used to modify coronavirus spike proteins in the laboratory. Chan and Ridley explain that this is exactly the kind of insertion one would expect in a laboratory-modified bat virus. In response, 21 leading experts concluded that the furin sequence is not, by itself, evidence of engineering: coronaviruses with near-identical genomes can often infect both humans and animals, and although the furin cleavage site is not seen in known bat coronaviruses, it could well have evolved naturally. Surprisingly, Chan and Ridley stop short of suggesting that this high-infectivity feature was inserted on purpose, since "there is no way to determine" it.

Nor is there any way to determine whether RaTG13 is the pandemic virus's progenitor, and history is replete with pandemics that began with zoonotic jumps. The lab-origin argument leans on the strange fact that WIV researchers retrieved the bat isolate in 2013 from a decommissioned mine shaft in Yunnan: six people clearing bat guano from the cave that year suffered an unexplained respiratory ailment, and half of them died. On the other hand, the 4% genetic divergence between RaTG13 and SARS-CoV-2 corresponds to roughly 40 years of evolutionary change. While exploring caves in northern Laos, researchers discovered three more closely related bat coronaviruses, which bind human cells with higher affinity than the early SARS-CoV-2 strains did. This points to an organic origin, either through another animal host or directly from a bat, perhaps when a farmer entered a cave. It is arguably the most reasonable explanation, since it is consistent with the forensic and epidemiological data: the isolates collected from food samples at the Wuhan seafood market resemble the human isolates, and the majority of the original human cases had a history of market exposure, in contrast to the absence of any epidemiological connection to the WIV or any other Wuhan research institution.

Lack of evidence for a laboratory origin

The case for a natural origin would be strengthened if scientists could demonstrate prior infection at the Wuhan market, or at other Chinese wildlife markets selling the most likely intermediary species, including pangolins, civet cats and raccoon dogs. Although multiple animals tested positive for sister human viruses during the SARS epidemic, scientists have yet to find evidence of earlier animal infections in the case of SARS-CoV-2. Nonetheless, absence of evidence is not evidence of absence; it may simply mean that the right animals were not sampled. The same can be said of the lab-leak argument's lack of evidence. Still, even though history is littered with pandemics, no significant pandemic has ever been traced back to a laboratory. In other words, a zoonotic event is the null hypothesis, and Chan and Ridley must demonstrate otherwise. The irony is that their drive to construct a compelling case for a laboratory accident blinds them to the much more pressing story of how the trade in wild animals, global warming and habitat degradation increase the likelihood of pandemic viruses emerging. That is the most plausible origin story, and the one that should concern us.

Summary

Although Chan and Ridley's "Quest for the Origin of COVID-19" has cast suspicion on the Wuhan Institute of Virology, there is still insufficient evidence to support the lab-leak theory.
There is, however, growing evidence for a natural origin of SARS-CoV-2, from the multiple animals that tested positive for sister human viruses during the SARS epidemic to the discovery of more closely related bat coronaviruses in northern Laos. As such, we should be more concerned with the growing likelihood of pandemic viruses emerging from the trade in wild animals, global warming and habitat degradation.

Written by Sara Maria Majernikova
- Anaemia | Scientia News
A disease of the blood

Last updated: 09/07/25, 10:48 | Published: 17/06/23, 12:40

This is article no. 1 in a series about anaemia. Next article: iron-deficiency anaemia.

Introduction

Erythrocytes in their typical state are biconcave, nucleus-free cells responsible for carrying oxygen and carbon dioxide. Their production is controlled by erythropoietin, and as they mature in the bone marrow they lose their nuclei. These red blood cells (RBCs) contain haemoglobin, which transports oxygen; iron is a key component of its haem groups, and insufficient iron leads to anaemic disorders. Low oxygen-carrying capacity may reflect too few RBCs in circulation or RBC dysfunction.

Haem iron is acquired through the digestion of meat and is transported, in its soluble form, through the enterocytes of the duodenum. Erythrocytic iron accounts for approximately 50% of the iron in blood. Metals cannot move freely throughout the body, so they must be carried; the molecule that transports iron is known as transferrin. Plasma transferrin saturation refers to the proportion of transferrin with iron attached; in iron-deficiency anaemia (IDA) this will always be low.

Anaemia can be physiological or pathological, and it has a plethora of causes: malabsorption due to diet or gastrointestinal (GI) conditions; genetic dispositions such as the sideroblastic anaemias (SA) and thalassaemia; deficiency in erythropoietin due to comorbidities and chronic disease; haemolysis caused by autoimmune disorders, infections and drugs; or blood loss.

Haem

In haem, the iron sits at the centre of a protoporphyrin ring. Two alpha and two beta globin chains, each carrying a haem group, combine to form a single haemoglobin macromolecule. Microcytic anaemias arise from problems in the creation of haemoglobin: sourcing iron through diet (IDA), synthesising protoporphyrin (SA), or globin-chain defects caused by thalassaemia.

Summary

Anaemia is a multifactorial condition involving many different mechanisms. Microcytic anaemias have a problem at the haemoglobin level, which can be acquired or inherited: a failure to efficiently synthesise haemoglobin, whether from iron, protoporphyrin rings or globin chains. The diagnosis of anaemia relies on a patient's background and medical history, as many factors feed into an anaemic disorder. Diagnosis should be patient-led, as the age and sex of the patient can significantly point to the origin and pathogenesis, as well as the prognosis and follow-up care.

Written by Lauren Kelly

Related article: Blood
- Green Chemistry | Scientia News
And a hope for a more sustainable future

Last updated: 05/02/25, 16:33 | Published: 29/06/23, 10:33

Green Chemistry is a branch of chemistry concerned with designing synthetic reactions to minimise the generation of hazardous by-products and their impact on humans and the environment. Reactions are often designed to take place at low temperatures, with short reaction times and increased yields; this is preferred because fewer materials are used and it is more energy-efficient. When designing routes it is important to ask 'How green is the process?'. In this way, focus shifts to a more sustainable future in which we emit fewer pollutants and use renewable feedstocks and energy sources with minimal waste.

In 1998, Paul Anastas and John Warner devised the twelve principles of Green Chemistry. They serve as a framework for scientists designing innovative solutions for existing and new synthetic routes. It is rarely possible to fulfil all twelve principles at once, but applying as many as possible when designing a protocol is the goal. The twelve principles are:

1. Prevention: waste should be prevented rather than treated after it has been created.
2. Atom economy: designing processes that maximise the incorporation of all materials, so that the reagents end up in the final product (quantified in the sketch below).
3. Less hazardous chemical synthesis: synthetic methods should be designed to be safe, and the hazards of all substances reviewed.
4. Designing safer chemicals: eliminating chemicals that are carcinogenic, neurotoxic, etc., so that products are essentially safe for the Earth.
5. Safer solvents and auxiliaries: minimising the use of auxiliary substances and solvents to reduce the waste created.
6. Design for energy efficiency: designing synthetic methods in which reactions can be conducted at ambient temperature and pressure.
7. Use of renewable feedstocks: raw materials used for reactions should be renewable rather than depleting.
8. Reduce derivatives: avoiding unnecessary derivatisation, such as protecting/deprotecting groups or temporary modification of functionality, since extra steps require more reagents and generate more waste.
9. Catalysis: catalysts lower energy consumption and increase reaction rates, and they allow decreased use of harmful and toxic chemicals.
10. Design for degradation: chemical products should be designed to break down with no harmful effects on the environment.
11. Real-time analysis for pollution prevention: analytical techniques are needed to allow monitoring of the formation of hazardous substances.
12. Inherently safer chemistry for accident prevention: using safer chemical alternatives to prevent accidents, e.g. fires and explosions.

Some examples of areas where Green Chemistry is implemented:

Computer chips: using supercritical carbon dioxide in a step of chip preparation has reduced the quantities of chemicals, water and energy required to produce chips.

Medicine: developing more efficient ways of synthesising pharmaceuticals, e.g. the chemotherapy drug Taxol.

Green Chemistry is widely being implemented in academic labs as a way to reduce environmental impact and high costs.
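To make principle 2 concrete, here is the standard way atom economy is quantified, with a small worked example. The esterification figures below are textbook values used purely for illustration; they do not come from the article.

```latex
% Atom economy compares the mass of the desired product with the
% total mass of all reactants:
\[
\text{Atom economy} = \frac{M_r(\text{desired product})}{\sum M_r(\text{all reactants})} \times 100\%
\]
% Worked example: esterification of ethanoic acid with ethanol,
%   CH3COOH (Mr = 60) + C2H5OH (Mr = 46) -> CH3COOC2H5 (Mr = 88) + H2O (Mr = 18)
\[
\text{Atom economy} = \frac{88}{60 + 46} \times 100\% \approx 83\%,
\]
% so roughly 17% of the reactant mass leaves as the water by-product.
```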
To date, however, mainstream chemical industry has not fully embraced green chemistry and engineering, with over 98% of organic chemicals still derived from petroleum. This branch of chemistry is still fairly new and will likely become one of the most important fields in the future.

Written by Khushleen Kaur

Related article: The challenges in modern day chemistry










