
Search Index

348 results found

  • Revolutionising sustainable agriculture | Scientia News

    Through AI

    Last updated: 11/07/25, 09:51 | Published: 27/06/23, 15:34

    Artificial Intelligence (AI) is taking the world by storm. Recent developments now allow scientists to integrate AI into sustainable farming, transforming the way we grow crops, manage resources and pests, and, most importantly, protect the environment. There are many applications for AI in agriculture; outlined below are some of the areas in which incorporating AI systems improves sustainability.

    Precision farming

    Artificial intelligence systems help improve the overall quality and accuracy of harvesting, an approach known as precision farming. AI technology helps detect plant diseases, pests and malnutrition on farms. AI sensors can detect and target weeds, then decide which herbicide to apply in an area, reducing herbicide use and lowering costs. Many tech companies have developed robots that use computer vision and AI to monitor and precisely spray weeds. These robots can eliminate 80% of the chemicals normally sprayed on crops and reduce herbicide costs by 90%. By drastically reducing the amount of chemicals used in the field, these intelligent AI sprayers improve product quality and lower costs.

    Vertical farming

    Vertical farming is a technique in which plants are grown vertically, stacked on top of each other (usually indoors), as opposed to the 'traditional' way of growing plants and crops on big strips of land. This approach offers several benefits for sustainable agriculture and waste reduction, and the use of AI brings even more significant advancements, making vertical farming more sustainable and efficient:

    - Intelligent climate control: AI algorithms can measure and monitor temperature, humidity and lighting conditions to optimise climate control in vertical farms, reducing energy consumption and improving resource efficiency. An enhanced climate-controlled environment also allows for repeatable and programmable crop production.
    - Predictive plant modelling: the difference between a profitable year and a failed harvest can come down to the specific time the seeds were sown. Using AI, farmers can apply predictive analysis tools to determine the best date for sowing seeds for maximum yield and to reduce waste from overproduction.
    - Automated nutrient monitoring: to optimise plant nutrition, AI systems monitor and adjust nutrient levels in hydroponic setups (plants immersed in nutrient-containing water) and aeroponic setups (plants grown outside the soil, with nutrients provided by spraying the roots).

    Genetic engineering

    AI plays a pivotal role in genetic engineering, enhancing the sustainability and precision of crop modification through:

    - Targeted gene editing: AI algorithms help guide gene editing to produce desirable traits in crops, such as disease resistance or improved nutritional content. This allows genetic modification without the need for extensive field trials, saving time and resources.
    - Computational modelling: by combining AI modelling with gene prediction, farmers will be able to predict which combinations of genes have the potential to increase crop yield.

    Pest management and disease detection

    Artificial intelligence solutions such as smart pest detection systems are being used to monitor crops for signs of pests and diseases. These systems detect changes in the environment such as temperature, humidity and soil nutrients, then alert farmers when something is wrong. This allows farmers to act quickly and effectively, taking preventive measures before pests cause significant damage. Another approach uses computer vision and image processing: AI can detect signs of pest infestation, nutrient deficiencies and other issues that can affect yields. This data helps farmers make informed decisions about how to protect their crops.

    By incorporating AI into these aspects of sustainable agriculture, farmers can achieve high yields, reduce waste and enable more sustainable farming practices, reducing environmental impacts while ensuring efficient food production.

    Written by Aleksandra Zurowska

    Related articles: Digital innovation in rural farming / Plant diseases and nanoparticles
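    As a concrete illustration of the automated nutrient monitoring idea described above, here is a minimal Python sketch of a rule-based control loop for a hypothetical hydroponic system. The sensor readings, target ranges and dosing actions are all invented for illustration; a real system would use trained models and calibrated hardware rather than fixed thresholds.

```python
from dataclasses import dataclass

@dataclass
class NutrientReading:
    ph: float        # acidity of the nutrient solution
    ec_ms_cm: float  # electrical conductivity, a proxy for total nutrient concentration

# Hypothetical target ranges for a leafy-green hydroponic crop
PH_RANGE = (5.5, 6.5)
EC_RANGE = (1.2, 2.0)  # mS/cm

def adjust(reading: NutrientReading) -> list[str]:
    """Return the dosing actions needed to bring the solution back into range."""
    actions = []
    if reading.ph < PH_RANGE[0]:
        actions.append("dose pH-up solution")
    elif reading.ph > PH_RANGE[1]:
        actions.append("dose pH-down solution")
    if reading.ec_ms_cm < EC_RANGE[0]:
        actions.append("add concentrated nutrient stock")
    elif reading.ec_ms_cm > EC_RANGE[1]:
        actions.append("dilute with fresh water")
    return actions or ["no action needed"]

if __name__ == "__main__":
    # Simulated sensor reading; in practice this would come from probes in the reservoir
    print(adjust(NutrientReading(ph=6.9, ec_ms_cm=1.0)))
    # -> ['dose pH-down solution', 'add concentrated nutrient stock']
```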

  • The rising threat of antibiotic resistance | Scientia News

    Understanding the problem and solutions

    Last updated: 14/07/25, 15:00 | Published: 07/01/24, 13:47

    An overview and history of antibiotics

    Antibiotics are medicines that treat and prevent bacterial infections (such as skin infections, respiratory infections and more). Antibiotic resistance is the process by which infection-causing bacteria become resistant to antibiotics. As the World Health Organisation (WHO) has stated, antibiotic resistance is one of the biggest threats to global health, food security and development.

    In 1910, Paul Ehrlich discovered the first antibiotic, Salvarsan, used at the time to treat syphilis. His idea was to create anti-infective medication, and Salvarsan was successful. The golden age of antibiotic discovery began with the accidental discovery of penicillin by Alexander Fleming in 1928. He noticed that mould had contaminated one of his petri dishes of Staphylococcus bacteria, observed that the bacteria around the mould were dying, and realised that the mould, Penicillium notatum, was killing them. In 1940, Howard Florey and Ernst Chain isolated penicillin and began clinical trials, showing that it effectively treated infected animals. Penicillin was in use to treat patients in the United States by 1943. Overall, the discovery and use of antibiotics in the 20th century was a landmark scientific achievement, extending people's lives by around 20 years.

    Factors contributing to antibiotic resistance

    Increasing levels of antibiotic resistance could mean routine surgeries and cancer treatments (which can weaken the body's ability to respond to infections) might become too risky, and minor illnesses and injuries could become more challenging to treat. Various factors contribute to this, including the overuse and misuse of antibiotics and low investment in new antibiotic research. Antibiotics are overused and misused because of misunderstandings about when and how to use them. As a result, antibiotics may be taken for viral infections, and a full course may not be completed if patients start to feel better. Some patients may also use antibiotics not prescribed to them, such as those of family and friends. Moreover, there has not been enough investment to fund research into novel antibiotics, resulting in a shortage of antibiotics available to treat infections that have become resistant. More investment and research are therefore needed to prevent antibiotic resistance from becoming a public health crisis.

    Combatting antibiotic resistance

    One of the most effective ways to combat antibiotic resistance is to raise public awareness. Children and adults can learn about when and how to use antibiotics safely, and several resources are available to help individuals and members of the public do this. Some are listed below:

    1. The WHO has provided a factsheet with essential information on antibiotic resistance.
    2. The Antibiotic Guardian website is a platform with information and resources to help combat antibiotic resistance. It is a UK-wide campaign to improve and reduce antibiotic prescribing and use. Visit the website to learn more, and commit to a pledge to play your part in helping to solve this problem.
    3. Public Health England has created resources to support Antibiotic Guardian.
    4. The E-bug peer-education package is a platform that aims to educate individuals and provide them with tools to educate others.

    Written by Naoshin Haque

    Related articles: Anti-fungal resistance / Why bacteria are essential to human survival

  • The physics behind cumulus clouds | Scientia News

    An explanation of how cumulus clouds form and grow in the atmosphere

    Last updated: 14/07/25, 14:58 | Published: 07/10/23, 12:51

    When you think of a cloud, it is most likely a cumulus cloud that pops into your head, with its distinct fluffy, cauliflower-like shape. The word 'cumulus' means 'heaped' in Latin, and aptly describes the clumpy shape of these detached clouds. They are among the lowest clouds in the sky, at altitudes of approximately 600 to 1,000 metres, while the highest clouds form nearly 14 km up in the atmosphere. Depending on the position of the clouds in relation to the sun, they can appear brilliant white or a more foreboding grey.

    Cumulus clouds are classified into four species: cumulus humilis, which are wider than they are tall; cumulus mediocris, which have similar widths and heights; cumulus congestus, which are taller than they are wide; and finally cumulus fractus, which have blurred edges, as this is the cloud in its decaying form. Cumulus clouds are often associated with fair weather, with cumulus congestus being the only species that produces rain. So, how do cumulus clouds form, and why are they associated with fair weather?

    To understand the formation of these clouds, think of a sunny day. The sun shines on the land and causes surface heating. The warm surface heats the air above it, which causes this air to rise in thermals, or convection currents. The air in the thermal expands and becomes less dense as it rises through the surrounding cool air. The water vapour carried upwards in the convection current condenses when it gets cool enough and forms a cumulus cloud. Because different surface types have different properties, some are better at generating thermals than others. For example, the sun's radiation warms the surface of land more efficiently than the sea, so cumulus clouds tend to form over land rather than the sea. This is because water has a higher heat capacity than land, meaning it takes more heat to warm the water than the land. As cumulus clouds form at the top of independent thermals, they appear as individual floating puffs.

    But what happens when cumulus clouds are knocked off the perch of their thermal by a breeze? How do they keep growing from an innocent, lazy cumulus humilis to a dark cumulus congestus threatening rain showers? Latent heat gives us the answer. This is the energy absorbed, or released, by a body when it changes state. A cumulus cloud forms at the top of a thermal as water molecules condense (changing state from a gas to a liquid) to form water droplets. When this happens, the latent heat of condensation released warms the surrounding air, causing it to expand and rise further, repeating the cycle and forming the characteristic cauliflower mounds of the cloud.

    The development of a cumulus humilis into a cumulus congestus depends on the available moisture in the atmosphere, the strength of the sun's radiation to form significant thermals, and whether there is a layer of warmer air higher up in the atmosphere that can halt the rising thermals. If the conditions are right, a cumulus congestus can keep growing and form a cumulonimbus cloud, which is an entirely different beast, more than deserving of its own article. So, the next time you see a cumulus cloud wandering through the sky, you will know how it came to be there.

    Written by Ailis Hankinson

    Related article: The physics of LIGO
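    To put a rough number on the latent-heat feedback described above, here is a back-of-the-envelope Python sketch estimating how much condensation warms a rising air parcel. The parcel value (2 g of vapour condensing per kg of air) is an illustrative assumption; the constants are the standard latent heat of vaporisation of water (about 2.5 MJ/kg) and the specific heat capacity of air at constant pressure (about 1005 J per kg per K).

```python
# Rough estimate of parcel warming from the latent heat of condensation
L_V = 2.5e6     # J/kg, latent heat of vaporisation of water (approximate)
C_P = 1005.0    # J/(kg K), specific heat of dry air at constant pressure

def warming_from_condensation(condensed_g_per_kg: float) -> float:
    """Temperature rise (K) of 1 kg of air when the given mass of vapour condenses in it."""
    q = condensed_g_per_kg / 1000.0   # convert g of water per kg of air to kg/kg
    return L_V * q / C_P              # delta_T = L_v * q / c_p

# Illustrative example: condensing 2 g of vapour per kg of air
print(f"{warming_from_condensation(2.0):.1f} K of warming")   # ~5.0 K
```

    A few kelvin of extra warmth is enough to keep the parcel buoyant relative to its surroundings, which is why the cycle of rising and condensing can repeat and the cloud can keep building.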

  • A potential treatment for HIV | Scientia News

    Can CRISPR/Cas9 overcome the challenges posed by current HIV treatments?

    Last updated: 08/07/25, 16:16 | Published: 21/07/23, 09:50

    The human immunodeficiency virus (HIV) was recorded to affect 38.4 million people globally at the end of 2021. This virus attacks the immune system, incapacitating CD4 cells: white blood cells (WBCs) which play a vital role in activating the innate immune system and fighting infection. The normal range of CD4 cells in our body is 500 to 1500 cells/mm3 of blood; HIV can rapidly deplete the CD4 count to dangerous levels, damaging the immune system and leaving the body highly susceptible to infections. Whilst antiretroviral therapy (ART) can help manage the virus by interfering with viral replication and helping the body manage the viral load, it fails to eliminate the virus altogether. The reason is the presence of latent viral reservoirs where HIV can lie dormant and reignite infection if ART is stopped.

    Whilst a cure has not yet been discovered, a promising avenue being explored in the hope of eradicating HIV is CRISPR/Cas9 technology. This highly precise gene-editing tool has been shown to induce mutations at specific points in the HIV proviral DNA. Guide RNAs pinpoint the desired genome location, and the Cas9 nuclease enzyme acts as molecular scissors that remove selected segments of DNA. CRISPR/Cas9 technology therefore provides access to the viral genetic material integrated into the genome of infected cells, allowing researchers to cleave HIV genes from infected cells and clear latent viral reservoirs.

    Furthermore, the CRISPR/Cas9 gene-editing tool can also prevent HIV from attacking CD4 cells in the first place. HIV binds to the chemokine receptor CCR5, expressed on CD4 cells, in order to enter the WBC. CRISPR/Cas9 can cleave the gene for the CCR5 receptor, thereby preventing the virus from entering and replicating inside CD4 cells.

    CRISPR/Cas9 technology offers a solution to a problem that current antiretroviral therapies cannot solve: through gene editing, researchers can dispel the lasting reservoirs, unreachable by ART, that HIV is able to establish in our bodies. However, further research and clinical trials are still required to fully understand the safety and efficacy of this approach before it can be implemented as a standard treatment.

    Written by Bisma Butt

    Related articles: Antiretroviral therapy / mRNA vaccines
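    As a toy illustration of how guide RNAs "pinpoint the desired genome location", the sketch below scans a made-up DNA fragment for candidate SpCas9 target sites: a 20-nucleotide protospacer immediately followed by an NGG PAM. The fragment is invented (standing in for a stretch of the CCR5 gene), and the code ignores everything a real guide-design tool must handle (off-target scoring, the reverse strand, chromatin accessibility), so treat it purely as a sketch of the targeting rule.

```python
def find_cas9_sites(seq: str, spacer_len: int = 20) -> list[tuple[int, str, str]]:
    """Return (position, protospacer, PAM) for every NGG PAM with a full spacer upstream."""
    seq = seq.upper()
    sites = []
    for i in range(spacer_len, len(seq) - 2):
        pam = seq[i:i + 3]
        if pam[1:] == "GG":                      # SpCas9 recognises an NGG PAM
            protospacer = seq[i - spacer_len:i]  # the 20 nt a guide RNA would base-pair with
            sites.append((i - spacer_len, protospacer, pam))
    return sites

# Invented fragment standing in for a region of the CCR5 gene
fragment = "ATGGATTATCAAGTGTCAAGTCCAATCTATGACATCAATTATTATACATCGG"
for pos, spacer, pam in find_cas9_sites(fragment):
    print(f"protospacer at {pos}: {spacer}  PAM: {pam}")
```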

  • Secondary bone cancer | Scientia News

    Pathology and promising therapeutics

    Last updated: 11/07/25, 09:52 | Published: 13/12/23, 17:27

    Introduction: what is secondary bone cancer?

    Secondary bone cancer occurs when cancer cells spread to the bones from a tumour that started somewhere else in the body. The site where the tumour first develops is called the primary cancer. Cancer cells can break away from the primary cancer, travel through the bloodstream or lymphatic system, and establish secondary cancers, a process known as metastasis. Bones are among the most common sites to which cancer can spread. Most types of cancer have the potential to metastasise to the bones, with the most frequent occurrences seen in prostate, breast, lung, thyroid and kidney cancers and in myeloma. Throughout the literature, secondary cancer in the bones is referred to as bone secondaries or bone metastases. The most common sites of secondary bone cancer are the spine, ribs, pelvis, humerus (upper bone of the arm), femur (upper bone of the leg) and skull.

    There are two main types of bone metastases, referred to as osteolytic and osteoblastic. In osteolytic metastases, cancer cells break down the bone, leading to significant weakening. This type is more common than osteoblastic metastases and often occurs when breast cancer spreads to the bone. In osteoblastic metastases, cancer cells invade the bone and stimulate excessive bone cell formation, resulting in the bone becoming very dense (sclerotic). Osteoblastic metastases frequently occur when prostate cancer spreads to the bone. Although new bone forms, it grows abnormally, which weakens the overall bone structure.

    Hormone therapy

    Like primary bone cancer, treatment for secondary bone cancer includes surgical excision, chemotherapy and radiation therapy. Treatment aims to control cancer growth and symptoms, and depends on several factors, including the type of primary cancer, previous treatment, the number of bones affected, whether cancer has spread to other body parts, overall health, and symptoms. Breast and prostate cancers rely on hormones for their growth, so reducing hormone levels in the body can be effective in managing the proliferation of secondary cancer. Hormone therapy, also known as endocrine therapy, uses synthetic hormones to inhibit the impact of the body's own hormones. Typical side effects include hot flashes, mood fluctuations, changes in weight, and sweating.

    Bisphosphonates

    Bone is a dynamic tissue with a continuous process of bone formation and resorption. Osteoclasts are cells responsible for breaking down bone tissue. In secondary bone cancer, cancer cells often produce substances that stimulate osteoclast activity. This leads to elevated levels of calcium in the blood (hypercalcaemia), resulting in nausea and excessive thirst. Treating secondary bone cancer involves strengthening bones, alleviating bone pain and managing hypercalcaemia. One option for bone strengthening is bisphosphonates, which can be administered orally or intravenously. They have been in clinical practice for over 50 years and are used to treat metabolic bone diseases, osteoporosis, osteolytic metastases and hypercalcaemia. These compounds selectively target osteoclasts to inhibit their function. Bisphosphonates can be classified into two pharmacologic categories based on their mechanism of action.

    Nitrogen-containing bisphosphonates, the most potent class, suppress the activity of farnesyl pyrophosphate synthase, a key factor in facilitating the binding of osteoclasts to bone. This interference causes the detachment of osteoclasts from the bone surface, effectively impeding bone resorption. Examples include alendronate and zoledronate. Bisphosphonates without nitrogen in their chemical structure are metabolised intracellularly to form an analogue of adenosine triphosphate (ATP), known as 5'-triphosphate pyrophosphate (ApppI). ApppI is a non-functional molecule that disrupts cellular energy metabolism, leading to osteoclast cell death (apoptosis) and, consequently, reduced bone resorption. Examples include etidronate and clodronate. Non-nitrogen-containing bisphosphonates can inhibit bone mineralisation and cause osteomalacia, a condition in which bones become soft and weak. For these reasons, they are not widely used.

    Denosumab

    Denosumab is another option for bone strengthening. It is administered as an injection under the skin (subcutaneously). Denosumab is a human monoclonal antibody that inhibits RANKL to prevent osteoclast-mediated bone resorption. Denosumab-mediated RANKL inhibition hinders osteoclast maturation, function and survival, in contrast to bisphosphonates, which bind to bone minerals and are absorbed by mature osteoclasts. In some studies, denosumab demonstrated equal or superior efficacy compared with bisphosphonates in preventing skeletal-related events (SREs) associated with bone metastasis. Its mechanism of action provides a targeted approach that may benefit specific populations, such as patients with renal impairment: bisphosphonates are excreted by the kidneys, and a study by Robinson and colleagues demonstrated that bisphosphonate users had a 14% higher risk of chronic kidney disease (CKD) stage progression (including dialysis and transplant) than non-users. Denosumab, on the other hand, is independent of renal function and less likely to promote deterioration in kidney function.

    Take-home message

    Secondary bone cancer, resulting from the spread of cancer cells to the bones, poses challenges across many cancers. Two main types, osteolytic and osteoblastic metastases, affect bone structure differently. Hormone therapy, bisphosphonates and denosumab have shown promising results and offer effective management of secondary bone cancers. Ultimately, the decision between treatments should be made in consultation with a healthcare professional who can evaluate the specific clinical situation and individual patient factors, tailoring the choice to the patient's needs and treatment goals.

    Written by Favour Felix-Ilemhenbhio

    Related article: Bone cancer

  • Solving the mystery of ancestry with SNPs and haplogroups | Scientia News

    Decoding diversity in clinical settings

    Last updated: 10/02/25, 14:37 | Published: 15/01/24, 19:47

    Single nucleotide polymorphisms (SNPs) are genetic variants in which one DNA base is substituted for another between individuals or populations. These tiny but influential changes play a pivotal role in defining the differences between populations, affecting disease susceptibility, response to medications, and various biological traits. SNPs serve as genetic markers and are widely used in genetic research to understand the genetic basis of complex traits and diseases. With advancements in sequencing technologies, large-scale genome-wide association studies (GWAS) have become possible, enabling scientists to identify associations between specific SNPs and various phenotypic traits.

    Haplotypes refer to clusters of SNPs commonly inherited together, whereas haplogroups refer to groups of related haplotypes that share a common ancestor. Haplogroups are frequently used in evolutionary genetics to elucidate human migration routes based on the 'Out of Africa' hypothesis. Notably, the study of mitochondrial and Y-DNA haplogroups has helped shape the phylogenetic tree of the human species along the maternal and paternal lines respectively. Haplogroup analysis is also instrumental in forensic genetics and genealogical research. Additionally, haplogroups play a crucial role in population genetics by providing valuable insights into the historical movements of specific populations and even individual families.

    Certain SNPs in some genes are of clinical importance, as they may either increase or decrease the likelihood of developing a particular disease. For example, men belonging to haplogroup I have a 50% higher likelihood of developing coronary artery disease (reference 1), a predisposition due to SNPs in some Y chromosome genes. Cases like these highlight the possibility of personalised medical interventions based on an individual's haplogroup and, therefore, the SNPs in their genome. In this case, a treatment plan of exercise, diet and lifestyle recommendations can be given as preventative measures for men of haplogroup I to mitigate genetic risk factors before the disease develops.

    Written by Malintha Hewa Batage

    REFERENCE

    1. https://www.sciencedirect.com/science/article/pii/S002191501300765X?via%3Dihub (accessed 02/12/2023)
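    To make the "50% higher likelihood" figure above concrete, here is a small Python sketch showing how a relative risk of 1.5 scales a baseline risk. The 10% baseline is a made-up illustration, not a figure from the cited study.

```python
def absolute_risk(baseline_risk: float, relative_risk: float) -> float:
    """Scale a baseline disease risk by a relative risk (e.g. 1.5 for '50% higher')."""
    return baseline_risk * relative_risk

# Hypothetical baseline lifetime risk of coronary artery disease of 10%
baseline = 0.10
print(f"haplogroup I carrier: {absolute_risk(baseline, 1.5):.0%}")  # 15%
print(f"non-carrier:          {baseline:.0%}")                      # 10%
```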

  • Are aliens on Earth? | Scientia News

    Applications of ancient DNA analysis

    Last updated: 09/07/25, 10:52 | Published: 04/10/23, 17:13

    During a recent congressional hearing on UFOs held in Mexico, two alleged alien corpses were presented by UFO enthusiast Jaime Maussan. These artefacts were met with scepticism due to Maussan's five previous claims to have found aliens, all debunked as mummified human remains. To verify the newly found remains as alien, various lab tests have been performed, one being a carbon-14 analysis by researchers at the Autonomous National University of Mexico, which estimated the corpses to be approximately 1,000 years old. Determining the corpses' genetic make-up is another essential technique for verifying the supposed alien remains, but is it possible for such ancient remains to undergo DNA analysis?

    Yes; in fact, there are methods specialised for cases such as these that enable ancient DNA (aDNA) analysis. The relatively recent advent of high-throughput sequencing technology has streamlined DNA sequencing into a more rapid and inexpensive process. However, aDNA has fundamental qualities that complicate its analysis, such as postmortem damage, extraneous co-extracted DNA and the presence of other contaminants. Extra steps are therefore essential in the bioinformatics workflow to make sure that the aDNA is sequenced and analysed as accurately as possible. So, let's talk about the importance of aDNA analysis in various areas and how looking at the genetics of the past, and potentially space, can unearth information for modern research.

    Applications of aDNA sequencing and analysis

    Analysis of ancient DNA is a useful technique for the discovery of human migration events from hundreds of centuries ago. For example, analyses of mitochondrial DNA (mtDNA) have repeatedly substantiated the 'Recent African Origin' theory of modern human origins; the most recent common ancestor of human mtDNA was found to have existed in Africa about 100,000-200,000 years ago. There have also been other recent studies within phylogeography: an aDNA study on skeletal remains of ancient northwestern Europeans carried out in 2022 showed that mediaeval society in England was likely the result of mass migration across the North Sea from the Netherlands, Germany and Denmark. These phylogeographic discoveries improve our knowledge of the historic evolution and migration of human populations.

    Paleopathology, the study of disease in antiquity, is another area for which ancient DNA analysis is important. Analysis of DNA from the victims of the Plague of Justinian and the Black Death facilitated the identification of Yersinia pestis and determined it to be the causal agent in these pandemics. The contribution of aDNA analysis is consequently important in revealing how diseases affected past populations, and this genetic information can be used to identify their prevalence in modern society.

    Exciting yet debatably ethical plans for the de-extinction of species have also been announced. The biotech company Colossal announced plans in 2021 to resurrect the woolly mammoth, among other species such as the Tasmanian tiger and the dodo bird. Other groups plan to resurrect the Christmas Island rat and Steller's sea cow. In theory this is exciting, or scary from certain ecological perspectives, but it is complicated in practice. Even though the number of nuclear genomes sequenced from extinct species exceeds 20, no species has been restored to date.

    Are aliens on Earth?

    Thus, ancient DNA analysis can be applied to a multitude of areas to give historical information that we are able to carry into the modern world. But, finally, are these 'alien' corpses legitimately from outer space? José Zalce Benitez, director of the Health Sciences Research Institute of the Mexican Navy's secretariat, reported on the scientists' findings: the DNA tests were allegedly compared with over one million species and found not to be genetically related to "what is known or described up to this moment by science." In essence, genetic testing has not conflicted with Maussan's claim that these remains are alien, so the possibility of their alien identity cannot yet be dismissed. However, this genetic testing does not appear to be peer-reviewed; NASA is reportedly interested in the DNA analysis of these corpses, so we await further findings. Ancient DNA analysis will undoubtedly provide intriguing information about life from outer space or, alternatively, about how this DNA code was faked. Whatever the outcome, ancient DNA analysis remains an exciting area of research into life that preceded us.

    Written by Isobel Cunningham

    Related article: Astro-geology of Lonar Lake
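    One of the "extra steps" in an aDNA bioinformatics workflow mentioned above is checking for characteristic postmortem damage: cytosine deamination, which shows up as an excess of C-to-T mismatches at the 5' ends of reads. The Python sketch below counts that pattern from paired read/reference strings; the sequences are invented, and real pipelines work from alignment files with dedicated damage-profiling tools rather than raw strings.

```python
from collections import Counter

def c_to_t_by_position(read_ref_pairs: list[tuple[str, str]], window: int = 5) -> Counter:
    """Count C->T mismatches per position within the first `window` bases of each read."""
    counts = Counter()
    for read, ref in read_ref_pairs:
        for pos, (r_base, ref_base) in enumerate(zip(read[:window], ref[:window])):
            if ref_base == "C" and r_base == "T":   # deamination signature at read 5' ends
                counts[pos] += 1
    return counts

# Invented aligned (read, reference) pairs; ancient reads tend to show C->T near position 0
pairs = [
    ("TTAGCATG", "CTAGCATG"),
    ("TTGGCATG", "CTGGCATG"),
    ("ACTGCATG", "ACTGCATG"),
]
print(c_to_t_by_position(pairs))   # Counter({0: 2}) -> damage concentrated at the read start
```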

  • Motivating the Mind | Scientia News

    MIT scientists found reward sensitivity varies by socioeconomic status

    Last updated: 08/02/25, 13:24 | Published: 22/04/24, 10:41

    Behaviour is believed by many, including the famous psychologist B.F. Skinner, to be reinforced by rewards, and the degree to which an individual is motivated by rewards is called reward sensitivity. Another common view is that behaviour is influenced by the environment, which nowadays includes socioeconomic status (SES). People with low SES encounter fewer rewards in their environment, and this scarcity could affect their behaviour toward pursuing rewards (Farah, 2017). A study by Decker et al. (2024) therefore investigated the effect of low SES on reward sensitivity in adolescents through a gambling task, using fMRI to measure response times, choices and activity in the striatum, the reward centre of the brain. The researchers hypothesised that response times to immediate rewards, average reward rates and striatal activity would differ between participants from high and low SES backgrounds. See Figure 1.

    The study involved 114 adolescents whose SES was measured using parental education and income. The participants took part in a gambling task involving guessing whether numbers were higher or lower than 5; the outcomes were pre-determined to create blocks of reward abundance and reward scarcity. Teenagers from both low and high SES backgrounds gave faster responses and switched guesses when rewards were given more often, and immediate rewards made participants repeat prior choices and slowed their response times. In line with the hypothesis, fewer of the adolescents from lower SES backgrounds slowed down after rare rewards. Moreover, lower SES was linked with smaller differences between reward and loss activation in the striatum, indicating experience-based plasticity in the brain. See Figure 2.

    The research by Decker et al. (2024) therefore has numerous implications for the real world. As adolescents with lower SES displayed reduced behavioural and neural responses to rewards and, according to behaviourism, rewards are essential to learning, attention and motivation, it can be assumed that SES plays a role in inequalities across many cognitive abilities. This critically affects our understanding of socioeconomic differences in academic achievement, decision-making and emotional well-being, especially if we consider that differences in SES contribute to prejudice based on ingroups and outgroups. Interventions to enhance motivation and engagement with rewarding activities could help buffer against the detrimental impacts of low SES environments on cognitive and mental health outcomes. Overall, this research highlights the need to address systemic inequities that limit exposure to enriching experiences and opportunities during formative developmental periods.

    Written by Aleksandra Lib

    Related article: A perspective on well-being

    REFERENCES

    Decker, A. L., Meisler, S. L., Hubbard, N. A., Bauer, C. C., Leonard, J., Grotzinger, H., Giebler, M. A., Torres, Y. C., Imhof, A., Romeo, R., & Gabrieli, J. D. (2024). Striatal and behavioral responses to reward vary by socioeconomic status in adolescents. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 44(11).

    Farah, M. J. (2017). The neuroscience of socioeconomic status: Correlates, causes, and consequences. Neuron, 96(1), 56-71.
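    As a sketch of the kind of behavioural measure described above (response-time changes after rewarded trials in a higher/lower-than-5 guessing task), the following Python snippet computes mean response time conditioned on the previous trial's outcome from a toy trial log. The data are invented and the analysis is far simpler than the one in Decker et al. (2024); it only illustrates the idea of comparing response times after rewarded versus unrewarded trials.

```python
from statistics import mean

# Invented trial log: (rewarded, response_time_ms)
trials = [
    (True, 480), (False, 560), (True, 470), (True, 555),
    (False, 565), (False, 475), (True, 468), (False, 550),
]

def mean_rt_after(trials, prev_rewarded: bool) -> float:
    """Mean response time on trials that follow a rewarded (or unrewarded) trial."""
    rts = [rt for (prev, _), (_, rt) in zip(trials, trials[1:]) if prev == prev_rewarded]
    return mean(rts)

slowing = mean_rt_after(trials, True) - mean_rt_after(trials, False)
print(f"post-reward slowing: {slowing:.1f} ms")   # positive -> slower after rewarded trials
```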

  • Behavioural Economics III | Scientia News

    Loss aversion: the power of framing in decision-making and why we are susceptible to poor decisions

    Last updated: 06/11/25, 11:56 | Published: 15/10/24, 11:18

    This is article no. 3 in a series on behavioural economics. Next article: Libertarian Paternalism. Previous article: The endowment effect.

    In the realm of decision-making, the way information is presented can dramatically influence the choices people make. This phenomenon, known as framing, plays a pivotal role in how we perceive potential outcomes, especially when it comes to risks and rewards. We shall now explore the groundbreaking work of Tversky and Kahneman, who sought to explain how different framings of identical scenarios could lead to vastly different decisions. By examining their research, we can gain insight into why we are susceptible to making poor decisions and understand the underlying psychological mechanisms that drive our preferences.

    The power of framing

    Imagine that the UK is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. In their paper, Tversky and Kahneman examined the importance of how this information is conveyed in two different scenarios.

    In scenario 1: if program A is adopted, 200 people will be saved. If program B is adopted, there is a 1/3 probability that 600 people will be saved and a 2/3 probability that no people will be saved.

    In scenario 2: if program A is adopted, 400 people will die. If program B is adopted, there is a 1/3 probability that nobody will die and a 2/3 probability that 600 people will die.

    Notice that both scenarios present exactly the same information, but the way in which it is presented is different. So surely there should be no difference between the two? In fact, there is a huge difference. Scenario 2 has been given a loss frame, which emphasises the potential negative outcomes. By taking a sidestep, we can examine why this is important.

    Loss aversion is the phenomenon whereby 'losses loom larger than gains'. In other words, if we lose something, the negative impact is greater than the positive impact of an equal-sized gain. Image 1 illustrates a loss aversion function: a loss of £100 results in a much larger negative reaction than the positive reaction to a gain of £100. To put this into perspective, imagine it's your birthday and someone gifts you some money. You would hopefully feel quite grateful and happy, but perhaps this feeling isn't overwhelming. If, on the contrary, you soon discover that you have lost your wallet or purse containing the same amount of money, the psychological impact is often much more severe. Losses are perceived to be much more significant than gains.

    Going back to the two scenarios, we see that in scenario 2, program A emphasises the death of 400 people, whereas program B offers a chance to lose more but also a chance to save everyone. Statistically, you should be indifferent between the two, but because the guaranteed loss of 400 people is so overwhelming, people would much rather gamble and take the chance; a small numerical sketch of this follows below.
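    The sketch below works through the numbers in Python: both programs have the same expected outcome (200 people saved, i.e. 400 deaths), yet a prospect-theory-style value function with a loss-aversion coefficient makes the sure loss in scenario 2 feel worse than the gamble. The functional form and the parameters (alpha = 0.88, lambda = 2.25) follow the commonly cited Tversky-Kahneman estimates, but the calculation here is only an illustration, not the analysis from their paper.

```python
# Expected outcomes of the two programs (measured in lives saved out of 600)
expected_saved_A = 200                      # certain: 200 saved
expected_saved_B = (1/3) * 600 + (2/3) * 0  # gamble: also 200 saved on average
print(expected_saved_A, expected_saved_B)   # both come to 200 on average

# A prospect-theory-style value function: losses are weighted more heavily than gains
def value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Subjective value of a gain (x > 0) or loss (x < 0)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

# Scenario 2 frames outcomes as deaths (losses): the sure option is a certain loss of 400;
# the gamble is a 2/3 chance of losing 600 and a 1/3 chance of losing nobody.
sure_loss = value(-400)
gamble = (2/3) * value(-600) + (1/3) * value(0)
print(f"sure loss feels like {sure_loss:.0f}, gamble feels like {gamble:.0f}")
# The gamble feels less bad, which is why most people choose program B in the loss frame.
```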
    This same reason is why gambling is so addictive: when you lose money in a gamble, you feel compelled not to accept the loss and continue betting in an effort to make back what you once had. What Kahneman and Tversky found was that in scenario 1, 72% of people chose program A, while in scenario 2, 78% of people chose program B. Clearly, how a policy is framed makes a huge difference to its popularity. By framing the information as "200 people will be saved" rather than "400 people will die" out of the same 600 people, our perception changes considerably.

    But on a deeper level, why might this be, and why is knowing this distinction important? In my previous article on the endowment effect, we saw that once you own something, you feel possessive over it, and losing something that you have had to work for, like money, makes you feel as though that hard work has gone to waste. But this explanation struggles to translate to our example involving people. In researching this article, I came across the evolutionary psychology perspective and found it both interesting and persuasive. From an evolutionary perspective, loss aversion can be seen as an adaptive trait. For our ancestors, losses such as losing food or shelter could have dire consequences for survival, whereas gains, such as finding extra food, were certainly beneficial but not as crucial for immediate survival. We may therefore be hardwired to avoid losses, which translates into modern-day loss aversion.

    Knowing about this matters in two aspects of life. The first is healthcare. As demonstrated at the beginning of the article, people's decisions can be affected by the way in which healthcare professionals and the government frame policies. Understanding this allows you to make your own decision about the risks and determine whether you believe a policy is right for you. Similarly, policymakers can shape public opinion by highlighting the benefits or costs of action or inaction in a way that serves their own political agenda, so recognising loss aversion allows for more informed decision-making. Additionally, when it comes to investing, people tend to hold on to an investment that is performing badly, or is at a loss, in the hope that it will recover in the future. If this belief is justified through analysis or good judgement, then deciding to hold may be a good decision; however, loss aversion often creates a false sense of hope, similar to the gambling example above. If you are a keen investor, it's important to be aware of your own investment psychology so that you can maintain an objective view of a company throughout the time you decide to remain invested.

    Evidently, understanding how we think and make decisions can play an important role in improving the choices we make in our personal and professional lives. By recognising the impact of loss aversion and framing, we can become more aware of the unconscious biases that drive us to avoid losses at all costs, even when those decisions may not be in our best interest. Whether in healthcare, investing, or everyday life, cultivating this awareness allows for more rational, informed choices that better align with long-term goals rather than short-term fears. In a world where information is constantly framed to sway public opinion, knowing the psychology behind our decision-making processes is a powerful tool that can help us make wiser, more deliberate decisions.

    Written by George Chant

    REFERENCES

    Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453-458. doi: 10.1126/science.7455683. PMID: 7455683.

    Image provided by Economicshelp.org: https://www.economicshelp.org/blog/glossary/loss-aversion/

  • Revolutionising patient setup in cancer treatment | Scientia News

    Using Surface Guided Radiation Therapy (SGRT)

    Last updated: 11/07/25, 09:50 | Published: 18/10/23, 10:50

    Cancer treatment can be a painstaking and difficult procedure to undergo, given the complexity of the treatment process. The weight of a cancer diagnosis carries a huge mental and physical burden for the patient. It is therefore important to place emphasis on delivering an efficient and streamlined process while at the same time not cutting any corners. Manual methods of delivering care can and should be automated by AI and technology where possible. This is especially applicable when preparing to deliver a dose of radiotherapy: traditionally, breast cancer patients undergo a tattoo setup, which provides physical guidance on the area to which the dose should be delivered. Patients suffer not only from the knowledge of the disease, but are also marked with reminders of the experience by an increasingly outdated positioning technique.

    Innovation in radiotherapy allows for a more ethical and streamlined solution. Surface Guided Radiation Therapy (SGRT) provides a means of tracking a patient's position before and during radiation therapy, helping to ensure a streamlined workflow for accurate treatment delivery. This type of treatment not only eliminates the need for an invasive tattoo setup but also provides a faster and more accurate way to deliver radiation doses to the patient. For example, precise measurements made by the software ensure that radiation is delivered specifically to the targeted area and not to the surrounding tissue. With a regular tattoo setup this can be a common issue, as patient movement, often triggered by respiration, can alter the accuracy of the tattoo markup, reducing the effectiveness of the radiation treatment.

    Many SGRT systems work through a set of ceiling-mounted camera units that feed data into a software program. Each camera unit uses a projector and image sensors to create a 3D surface model of the area by projecting a red light onto the patient's skin (see Figure 2). This 3D surface model serves as a real-time map of the patient's position and surface contours. By constantly comparing the captured data with the pre-defined treatment plan, any deviations or movements can be detected instantly. If the patient moves beyond a predetermined threshold, the treatment can be paused to ensure accuracy and safety.

    The use of this cutting-edge technology is an important step towards providing some level of comfort for patients in a challenging environment, and the integration of such systems represents a significant advancement in patient-centric care in the field of radiation therapy.

    Written by Jaspreet Mann

    Related article: Nuclear medicine
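    The monitoring loop described above (compare the captured surface with the planned reference and pause the beam beyond a threshold) can be sketched in a few lines of Python. The point cloud, reference surface and 3 mm tolerance are invented for illustration; a clinical SGRT system does far more (rigid registration, region-of-interest weighting, latency compensation) and is vendor-specific.

```python
import math

Point = tuple[float, float, float]

def mean_deviation(captured: list[Point], reference: list[Point]) -> float:
    """Mean Euclidean distance (mm) between corresponding captured and reference points."""
    dists = [math.dist(c, r) for c, r in zip(captured, reference)]
    return sum(dists) / len(dists)

def beam_allowed(captured: list[Point], reference: list[Point], tolerance_mm: float = 3.0) -> bool:
    """Gate the beam: allow treatment only while the surface stays within tolerance."""
    return mean_deviation(captured, reference) <= tolerance_mm

# Invented reference surface and a captured frame shifted by ~4 mm (e.g. patient motion)
reference = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
captured  = [(x + 4.0, y, z) for x, y, z in reference]
print(beam_allowed(captured, reference))   # False -> treatment would be paused
```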
