
Search Index

348 results found

  • A potential treatment for HIV | Scientia News

    Can CRISPR/Cas9 overcome the challenges posed by current HIV treatments? Published 21/07/23; last updated 08/07/25.

    The human immunodeficiency virus (HIV) was recorded to affect 38.4 million people globally at the end of 2021. This virus attacks the immune system, incapacitating CD4 cells: white blood cells (WBCs) which play a vital role in coordinating the adaptive immune response and fighting infection. The normal range of CD4 cells in our body is 500 to 1500 cells/mm3 of blood; HIV can rapidly deplete the CD4 count to dangerous levels, damaging the immune system and leaving the body highly susceptible to infections. Whilst antiretroviral therapy (ART) can help manage the virus by interfering with viral replication and helping the body manage the viral load, it fails to eliminate the virus altogether. This is because of latent viral reservoirs where HIV can lie dormant and reignite infection if ART is stopped.

    Whilst a cure has not yet been discovered, a promising avenue being explored in the hope of eradicating HIV is CRISPR/Cas9 technology. This highly precise gene-editing tool has been shown to induce mutations at specific points in the HIV proviral DNA. Guide RNAs pinpoint the desired genome location, and the Cas9 nuclease acts as molecular scissors that remove selected segments of DNA. CRISPR/Cas9 therefore provides access to the viral genetic material integrated into the genome of infected cells, allowing researchers to cleave HIV genes from infected cells and clear latent viral reservoirs. Furthermore, CRISPR/Cas9 can also prevent HIV from attacking CD4 cells in the first place. HIV binds to the chemokine receptor CCR5, expressed on CD4 cells, in order to enter the WBC. CRISPR/Cas9 can cleave the gene for the CCR5 receptor, thereby preventing the virus from entering and replicating inside CD4 cells.

    CRISPR/Cas9 technology addresses a problem that current antiretroviral therapies cannot solve. Through gene editing, researchers can clear the lasting reservoirs, unreachable by ART, that HIV is able to establish in our bodies. However, further research and clinical trials are still required to fully understand the safety and efficacy of this approach before it can be implemented as a standard treatment.

    Written by Bisma Butt. Related articles: Antiretroviral therapy / mRNA vaccines
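
    The guide-RNA targeting described above can be illustrated with a short script: for the commonly used SpCas9 nuclease, a 20-nucleotide protospacer must sit immediately upstream of an "NGG" PAM motif. Below is a minimal Python sketch of that site search; the sequence is a made-up placeholder, not real CCR5 or HIV proviral DNA.

        # Minimal sketch: find candidate SpCas9 target sites (20-nt protospacer followed by an
        # NGG PAM) in a DNA string. The sequence below is a made-up placeholder, not real CCR5 DNA.
        import re

        def find_cas9_sites(seq):
            """Return (position, protospacer, PAM) for every 20-mer followed by an NGG PAM."""
            seq = seq.upper()
            # Lookahead keeps overlapping sites; group 1 = 20-nt protospacer, group 2 = PAM.
            return [(m.start(), m.group(1), m.group(2))
                    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq)]

        toy_fragment = "ATGGATTATCAAGTGTCAAGTCCAATCTATGACATCAATTATTATACATCGG"
        for pos, protospacer, pam in find_cas9_sites(toy_fragment):
            print(f"site at {pos}: protospacer {protospacer} | PAM {pam}")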

  • Secondary bone cancer | Scientia News

    Pathology and promising therapeutics. Published 13/12/23; last updated 11/07/25.

    Introduction: what is secondary bone cancer?

    Secondary bone cancer occurs when cancer cells spread to the bones from a tumour that started somewhere else in the body. The site where the tumour first develops is called the primary cancer. Cancer cells can break away from the primary cancer, travel through the bloodstream or lymphatic system, and establish secondary cancers; this process is known as metastasis. Bones are among the most common sites to which cancer can spread. Most types of cancer have the potential to metastasise to the bones, with the most frequent occurrences seen in prostate, breast, lung, thyroid and kidney cancers, and in myeloma. Throughout the literature, secondary cancer in the bones is referred to as bone secondaries or bone metastases. The most common sites of secondary bone cancer are the spine, ribs, pelvis, humerus (upper bone of the arm), femur (upper bone of the leg) and skull.

    There are two main types of bone metastases, referred to as osteolytic and osteoblastic. In osteolytic metastases, cancer cells break down the bone, leading to significant weakening. This type of metastasis is more common than osteoblastic metastases and often occurs when breast cancer spreads to the bone. In osteoblastic metastases, cancer cells invade the bone and stimulate excessive bone cell formation. This process results in the bone becoming very dense (sclerotic). Osteoblastic metastases frequently occur when prostate cancer spreads to the bone. Although new bone forms, it grows abnormally, which weakens the overall bone structure.

    Hormone therapy

    Like primary bone cancer, treatment for secondary bone cancer includes surgical excision, chemotherapy and radiation therapy, and aims to control cancer growth and symptoms. Treatment depends on several factors, including the type of primary cancer, previous treatment, the number of bones affected, whether cancer has spread to other body parts, overall health, and symptoms. Breast and prostate cancers rely on hormones for their growth, so reducing hormone levels in the body can be effective in managing the proliferation of secondary cancer. Hormone therapy, also known as endocrine therapy, uses synthetic hormones to inhibit the impact of the body's innate hormones. Typical side effects include hot flashes, mood fluctuations, changes in weight, and sweating.

    Bisphosphonates

    Bone is a dynamic tissue with a continuous process of bone formation and resorption. Osteoclasts are cells responsible for breaking down bone tissue. In secondary bone cancer, cancer cells often produce substances that stimulate the activity of osteoclasts. This leads to elevated levels of calcium in the blood (hypercalcaemia), resulting in nausea and excessive thirst. Treating secondary bone cancer involves strengthening bones, alleviating bone pain and managing hypercalcaemia. One option for bone strengthening is bisphosphonates, which can be administered orally or intravenously. They have been in clinical practice for over 50 years and are used to treat metabolic bone diseases, osteoporosis, osteolytic metastases and hypercalcaemia. These compounds selectively target osteoclasts to inhibit their function, and they can be classified into two pharmacologic categories based on their mechanism of action. Nitrogen-containing bisphosphonates, the most potent class, function by suppressing the activity of farnesyl pyrophosphate synthase, a key factor in facilitating the binding of osteoclasts to bone. This interference causes the detachment of osteoclasts from the bone surface, effectively impeding bone resorption. Examples of these bisphosphonates include alendronate and zoledronate. Bisphosphonates without nitrogen in their chemical structure are metabolised intracellularly to form an analogue of adenosine triphosphate (ATP), known as 5'-triphosphate pyrophosphate (ApppI). ApppI is a non-functional molecule that disrupts cellular energy metabolism, leading to osteoclast cell death (apoptosis) and, consequently, reduced bone resorption. Examples of these bisphosphonates include etidronate and clodronate. Non-nitrogen-containing bisphosphonates can inhibit bone mineralisation and cause osteomalacia, a condition in which bones become soft and weak; for this reason, they are not widely used.

    Denosumab

    Denosumab is another option for bone strengthening. It is administered as an injection under the skin (subcutaneously). Denosumab is a human monoclonal antibody that inhibits RANKL to prevent osteoclast-mediated bone resorption. In contrast to bisphosphonates, which bind to bone mineral and are absorbed by mature osteoclasts, denosumab-mediated RANKL inhibition hinders osteoclast maturation, function and survival. In some studies, denosumab demonstrated equal or superior efficacy compared to bisphosphonates in preventing skeletal-related events (SREs) associated with bone metastasis. Denosumab's mechanism of action provides a targeted approach that may offer benefits for specific populations, such as patients with renal impairment. Bisphosphonates are excreted by the kidneys; a study by Robinson and colleagues demonstrated that bisphosphonate users had a 14% higher risk of chronic kidney disease (CKD) stage progression (including dialysis and transplant) than non-users. Denosumab clearance, on the other hand, is independent of renal function, and the drug is less likely to promote deterioration in kidney function.

    Take-home message

    Secondary bone cancer, resulting from the spread of cancer cells to the bones, poses challenges across various cancers. Its two main types, osteolytic and osteoblastic metastases, affect bone structure differently. Hormone therapy, bisphosphonates and denosumab have shown promising results and offer effective management of secondary bone cancers. Ultimately, the decision between treatments should be made in consultation with a healthcare professional who can evaluate the specific clinical situation and individual patient factors; the choice should be tailored to the patient's needs and treatment goals.

    Written by Favour Felix-Ilemhenbhio. Related article: Bone cancer

  • Solving the mystery of ancestry with SNPs and haplogroups | Scientia News

    Decoding diversity in clinical settings. Published 15/01/24; last updated 10/02/25.

    Single nucleotide polymorphisms (SNPs) are genetic variants in which one DNA base is substituted for another between individuals or populations. These tiny but influential changes play a pivotal role in defining the differences between populations, affecting disease susceptibility, response to medications, and various biological traits. SNPs serve as genetic markers and are widely used in genetic research to understand the genetic basis of complex traits and diseases. With advancements in sequencing technologies, large-scale genome-wide association studies (GWAS) have become possible, enabling scientists to identify associations between specific SNPs and various phenotypic traits.

    Haplotypes are clusters of SNPs commonly inherited together, whereas haplogroups are groups of similar haplotypes that share a common ancestor. Haplogroups are frequently used in evolutionary genetics to elucidate human migration routes based on the ‘Out of Africa’ hypothesis. Notably, the study of mitochondrial and Y-DNA haplogroups has helped shape the phylogenetic tree of the human species along the maternal and paternal lines respectively. Haplogroup analysis is also instrumental in forensic genetics and genealogical research. Additionally, haplogroups play a crucial role in population genetics by providing valuable insights into the historical movements of specific populations and even individual families.

    Certain SNPs in some genes are of clinical importance as they may either increase or decrease the likelihood of developing a particular disease. For example, men belonging to haplogroup I have a 50% higher likelihood of developing coronary artery disease (reference 1). This predisposition is due to SNPs present in some Y chromosome genes. Cases like these highlight the possibility of personalised medical interventions based on an individual's haplogroup and, therefore, the SNPs in their genome. In this case, a treatment plan of exercise, diet and lifestyle recommendations can be given as a preventative measure for men of haplogroup I, to mitigate genetic risk factors before they develop the disease.

    Written by Malintha Hewa Batage

    REFERENCE
    1. https://www.sciencedirect.com/science/article/pii/S002191501300765X?via%3Dihub [accessed 02/12/2023]
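
    To make the idea of haplogroup-informed risk concrete, here is a minimal Python sketch that assigns a haplogroup from defining SNP alleles and looks up an illustrative relative risk. The rsIDs, defining alleles and risk table are hypothetical placeholders; only the 1.5-fold figure for haplogroup I echoes the study cited above, and real haplogroup calling relies on curated phylogenetic marker panels.

        # Minimal sketch: assign a (toy) Y-chromosome haplogroup from defining SNP calls and look
        # up an illustrative relative risk. The rsIDs and defining alleles are hypothetical.
        DEFINING_SNPS = {
            ("rs_HYPOTHETICAL_1", "T"): "I",     # placeholder marker for haplogroup I
            ("rs_HYPOTHETICAL_2", "C"): "R1b",   # placeholder marker for haplogroup R1b
        }
        RELATIVE_RISK_CAD = {"I": 1.5, "R1b": 1.0}   # 1.5 echoes the ~50% increase cited above

        def assign_haplogroup(genotypes):
            """Return the first haplogroup whose defining allele appears in the genotype calls."""
            for (rsid, allele), haplogroup in DEFINING_SNPS.items():
                if genotypes.get(rsid) == allele:
                    return haplogroup
            return None

        sample = {"rs_HYPOTHETICAL_1": "T"}
        hg = assign_haplogroup(sample)
        print(f"haplogroup: {hg}, relative CAD risk: {RELATIVE_RISK_CAD.get(hg, 1.0)}")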

  • Are aliens on Earth? | Scientia News

    Applications of ancient DNA analysis. Published 04/10/23; last updated 09/07/25.

    During a recent congressional hearing on UFOs held by Mexico's Congress, two alleged alien corpses were presented by UFO enthusiast Jaime Maussan. These artefacts were met with scepticism due to Maussan's five previous claims to have found aliens, all debunked as mummified human remains. To verify the newly found remains as alien, various lab tests have been performed, one being a carbon-14 analysis by researchers at the Autonomous National University of Mexico. This analysis estimated the corpses to be approximately 1000 years old. Determining the corpses' genetic make-up is another essential technique for verifying the supposed alien remains, but is it possible for such ancient remains to undergo DNA analysis? Yes; in fact, there are specialised methods for cases such as these that enable ancient DNA (aDNA) analysis. The relatively recent advent of high-throughput sequencing technology has streamlined DNA sequencing into a more rapid and inexpensive process. However, aDNA has fundamental qualities that complicate its analysis, such as postmortem damage, extraneous co-extracted DNA and the presence of other contaminants. Therefore, extra steps are essential in the bioinformatics workflow to make sure that the aDNA is sequenced and analysed as accurately as possible. So, let's talk about the importance of aDNA analysis in various areas and how looking at the genetics of the past, and potentially of space, can unearth information for modern research.

    Applications of aDNA sequencing and analysis

    Analysis of ancient DNA is a useful technique for discovering human migration events from hundreds of centuries ago. For example, analyses of mitochondrial DNA (mtDNA) have repeatedly substantiated the "Recent African Origin" theory of modern human origins; the most recent common ancestor of human mtDNA was found to have existed in Africa about 100,000-200,000 years ago. There have also been other recent studies within phylogeography; an aDNA study on skeletal remains of ancient northwestern Europeans carried out in 2022 showed that mediaeval society in England was likely the result of mass migration across the North Sea from the Netherlands, Germany and Denmark. These phylogeographic discoveries improve our knowledge of the historic evolution and migration of human populations.

    Paleopathology, the study of disease in antiquity, is another area for which ancient DNA analysis is important. Analysis of DNA from the victims of the Plague of Justinian and the Black Death facilitated the identification of Yersinia pestis and determined it to be the causal agent in these pandemics. The contribution of aDNA analysis is consequently important in revealing how diseases affected past populations, and this genetic information can be used to identify their prevalence in modern society. Exciting, yet debatably ethical, plans for the de-extinction of species have also been announced. The biotech company Colossal announced plans in 2021 to resurrect the woolly mammoth among other species, such as the Tasmanian tiger and the dodo; other groups plan to resurrect the Christmas Island rat and Steller's sea cow. In theory, this is exciting (or scary, from certain ecological perspectives), but it is complicated in practice. Even though the number of nuclear genomes sequenced from extinct species exceeds 20, no species has been restored to date.

    Are aliens on Earth?

    Ancient DNA analysis can thus be applied to a multitude of areas to give historical information that we are able to carry into the modern world. But, finally, are these 'alien' corpses legitimately from outer space? José Zalce Benitez, director of the Health Sciences Research Institute in the office of the Secretary of the Mexican Navy, reports on the scientists' findings. The DNA tests were allegedly compared with over one million species and found not to be genetically related to "what is known or described up to this moment by science." In essence, genetic testing has not conflicted with Maussan's claim that these remains are alien, so the possibility of their alien identity cannot yet be dismissed. However, this genetic testing does not appear to be peer-reviewed; NASA is reportedly interested in the DNA analysis of these corpses, so we await further findings. Ancient DNA analysis will undoubtedly provide intriguing information about life from outer space or, alternatively, about how this DNA code was faked. Whatever the outcome, ancient DNA analysis remains an exciting area of research about life preceding us.

    Written by Isobel Cunningham. Related article: Astro-geology of Lonar Lake
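
    One of the "extra steps" mentioned above is damage authentication: genuine aDNA typically shows elevated C-to-T mismatches at the 5' ends of reads, caused by cytosine deamination (the signal that tools such as mapDamage quantify). Below is a minimal Python sketch of that check, assuming reads and their reference segments are already aligned and supplied as simple string pairs; it is an illustration of the idea, not a substitute for the dedicated tools.

        # Minimal sketch: C->T mismatch rate at the first read position, a crude proxy for the
        # 5' deamination damage typical of authentic ancient DNA. Assumes each read is supplied
        # with its already-aligned reference segment (same length, same orientation).
        def ct_rate_at_5prime(aligned_pairs):
            c_sites = 0
            ct_mismatches = 0
            for read, ref in aligned_pairs:
                if ref and ref[0] == "C":      # reference carries a C opposite the read's 5' end
                    c_sites += 1
                    if read[0] == "T":         # read shows the damage-induced T
                        ct_mismatches += 1
            return ct_mismatches / c_sites if c_sites else 0.0

        pairs = [("TACCG", "CACCG"), ("CAGGT", "CAGGT"), ("TTGCA", "CTGCA")]
        print(f"C->T rate at position 1: {ct_rate_at_5prime(pairs):.2f}")   # 2 of 3 C sites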

  • Motivating the Mind | Scientia News

    MIT scientists found reward sensitivity varies by socioeconomic status. Published 22/04/24; last updated 08/02/25.

    Behaviour is believed by many, including the famous psychologist B. F. Skinner, to be reinforced by rewards, and the degree to which an individual is motivated by rewards is called reward sensitivity. Another common view is that behaviour is influenced by the environment, which nowadays includes socioeconomic status (SES). People with low SES encounter fewer rewards in their environment, which could affect their behaviour toward pursuing rewards due to their scarcity (Farah, 2017). A study by Decker et al. (2024) therefore investigated the effect of low SES on reward sensitivity in adolescents through a gambling task, using fMRI to measure response times, choices and activity in the striatum, the reward centre of the brain. The researchers hypothesised that response times to immediate rewards, average reward rates and striatal activity would differ between participants from high and low SES backgrounds. See Figure 1.

    The study involved 114 adolescents whose SES was measured using parental education and income. The participants took part in a gambling task involving guessing whether numbers were higher or lower than 5, the outcomes of which were pre-determined to create blocks of reward abundance and reward scarcity. Teenagers from both low and high SES backgrounds gave faster responses and switched guesses when rewards were given more often. Also, immediate rewards made participants repeat prior choices and slowed response times. In line with the hypothesis, fewer adolescents with lower SES slowed down after rare rewards. Moreover, lower SES was linked with smaller differences between reward and loss activation in the striatum, indicating experience-based plasticity in the brain. See Figure 2.

    The research by Decker et al. (2024) has numerous implications for the real world. As adolescents with lower SES displayed reduced behavioural and neural responses to rewards, and, according to behaviourism, rewards are essential to learning, attention and motivation, it can be assumed that SES plays a role in the inequality in many cognitive abilities. This critically impacts our understanding of socioeconomic differences in academic achievement, decision-making and emotional well-being, especially if we consider that differences in SES contribute to prejudice based on ingroups and outgroups. Interventions to enhance motivation and engagement with rewarding activities could help buffer against the detrimental impacts of low SES environments on cognitive and mental health outcomes. Overall, this research highlights the need to address systemic inequities that limit exposure to enriching experiences and opportunities during formative developmental periods.

    Written by Aleksandra Lib. Related article: A perspective on well-being

    REFERENCES
    Decker, A. L., Meisler, S. L., Hubbard, N. A., Bauer, C. C., Leonard, J., Grotzinger, H., Giebler, M. A., Torres, Y. C., Imhof, A., Romeo, R., & Gabrieli, J. D. (2024). Striatal and behavioral responses to reward vary by socioeconomic status in adolescents. The Journal of Neuroscience, 44(11).
    Farah, M. J. (2017). The neuroscience of socioeconomic status: Correlates, causes, and consequences. Neuron, 96(1), 56-71.

  • Behavioural Economics III | Scientia News

    Loss aversion: the power of framing in decision-making and why we are susceptible to poor decisions. Published 15/10/24; last updated 06/11/25.

    This is article no. 3 in a series on behavioural economics. Next article: Libertarian Paternalism. Previous article: The endowment effect.

    In the realm of decision-making, the way information is presented can dramatically influence the choices people make. This phenomenon, known as framing, plays a pivotal role in how we perceive potential outcomes, especially when it comes to risks and rewards. We shall now explore the groundbreaking work of Tversky and Kahneman, who sought to explain how different framings of identical scenarios can lead to vastly different decisions. By examining their research, we can gain insight into why we are susceptible to making poor decisions and understand the underlying psychological mechanisms that drive our preferences.

    The power of framing

    Imagine that the UK is preparing for the outbreak of an unusual disease, which is expected to kill 600 people, and two alternative programs to combat the disease have been proposed. In their paper, Tversky and Kahneman examined the importance of how this information is conveyed in two different scenarios. In scenario 1: if program A is adopted, 200 people will be saved; if program B is adopted, there is a 1/3 probability that 600 people will be saved and a 2/3 probability that no people will be saved. In scenario 2: if program A is adopted, 400 people will die; if program B is adopted, there is a 1/3 probability that nobody will die and a 2/3 probability that 600 people will die. Notice that both scenarios present exactly the same information, but the way in which it is presented differs. So surely there should be no difference between the two scenarios? In fact, there is a huge difference. Scenario 2 has been given a loss frame, which emphasises the potential negative outcomes.

    By taking a sidestep, we can examine why this is important. Loss aversion is the phenomenon whereby 'losses loom larger than gains'. In other words, if we lose something, the negative impact is greater than the positive impact of an equal-sized gain. Image 1 illustrates a loss aversion function. As the image shows, a loss of £100 produces a much larger negative reaction than the positive reaction to a gain of £100. To put this into perspective, imagine it's your birthday and someone gifts you some money. You would hopefully feel quite grateful and happy, but perhaps this feeling isn't overwhelming. On the contrary, if you soon discover that you have lost your wallet or purse, which contained the same amount of money, the psychological impact is often much more severe. Losses are perceived to be much more significant than gains.

    Going back to the two scenarios, in scenario 2 program A emphasises the guaranteed death of 400 people, whereas program B carries a chance of losing more but also a chance of saving everyone. Statistically, you should be indifferent between the two, but because the guaranteed loss of 400 people is so overwhelming, people would much rather gamble and take the chance. This same reason is why gambling is so addictive: when you lose money in a gamble, you feel compelled not to accept the loss and continue betting in an effort to make back what you once had. What Kahneman and Tversky found was that in scenario 1, 72% of people chose program A, and in scenario 2, 78% of people chose program B. Clearly, how we frame a policy makes a huge difference to its popularity. By framing the information as "200 people will be saved" rather than "400 people will die" out of the same 600 people, our perception is considerably different.

    But on a deeper level, why might this be, and why is knowing this distinction important? In my previous article on the endowment effect, we saw that once you own something, you feel possessive over it, and losing something that you have had to work for, like money, makes you feel as though that hard work has gone to waste. But this explanation struggles to translate into our example involving people. In researching this article, I came across the evolutionary psychology perspective and found it both interesting and persuasive. From an evolutionary perspective, loss aversion can be seen as an adaptive trait. For our ancestors, losses such as losing food or shelter could have dire consequences for survival, whereas gains such as finding extra food were certainly beneficial but not as crucial for immediate survival. Therefore, we may be hardwired to avoid losses, which has translated into modern-day loss aversion.

    The reason knowing about this is important comes up in two aspects of life. The first is healthcare. As demonstrated at the beginning of the article, people's decisions can be affected by the way healthcare professionals and the government frame policies. Understanding this allows you to make your own decision about the risks and determine whether you believe a policy is right for you. Similarly, policymakers can shape public opinion by highlighting the benefits or costs of action or inaction so as to meet their own political agenda. Recognising loss aversion therefore allows for more informed decision-making. Additionally, in the world of investing, people tend to hold on to an investment that is performing badly, or is at a loss, in the hope that it will recover in the future. If this belief is justified through analysis or good judgement, then deciding to hold may be a good decision; however, loss aversion often creates a false sense of hope, similar to the gambling example above. If you are a keen investor, it's important to be aware of your own investment psychology so that you can maintain an objective view of a company throughout the time you remain invested.

    Evidently, understanding how we think and make decisions can play an important role in improving the choices we make in our personal and professional lives. By recognising the impact of loss aversion and framing, we can become more aware of the unconscious biases that drive us to avoid losses at all costs, even when those decisions may not be in our best interest. Whether in healthcare, investing, or everyday life, cultivating this awareness allows for more rational, informed choices that better align with long-term goals rather than short-term fears. In a world where information is constantly framed to sway public opinion, knowing the psychology behind our decision-making processes is a powerful tool that can help us make wiser, more deliberate decisions.

    Written by George Chant

    REFERENCES
    Tversky A, Kahneman D. The framing of decisions and the psychology of choice. Science. 1981 Jan 30;211(4481):453-8. doi:10.1126/science.7455683. PMID: 7455683.
    Image provided by Economicshelp.org: https://www.economicshelp.org/blog/glossary/loss-aversion/
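
    As a numerical footnote: the two programs really are statistically equivalent (200 certain lives versus a 1/3 chance of 600), and the asymmetry described above is often modelled with the prospect-theory value function, v(x) = x^a for gains and v(x) = -l(-x)^a for losses. The Python sketch below uses a ≈ 0.88 and l ≈ 2.25, the estimates commonly quoted from Kahneman and Tversky's later work rather than figures from this article, purely to illustrate the shape of the curve.

        # Minimal sketch: prospect-theory value function and the framing example's expected values.
        # alpha and lam are commonly quoted Kahneman-Tversky estimates, used purely for illustration.
        def value(x, alpha=0.88, lam=2.25):
            """Subjective value: concave for gains, steeper (loss-averse) and convex for losses."""
            return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

        # Both frames describe the same expected outcome: 200 saved vs a 1/3 chance of saving 600.
        print(200, (1 / 3) * 600)          # 200 vs 200.0 -- statistically identical

        # Loss aversion: losing 100 "hurts" roughly twice as much as gaining 100 "pleases".
        print(round(value(100), 1), round(value(-100), 1))   # ~57.5 vs ~-129.3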

  • Revolutionising patient setup in cancer treatment | Scientia News

    Using Surface Guided Radiation Therapy (SGRT). Published 18/10/23; last updated 11/07/25.

    Cancer treatment can be a painstaking and difficult procedure to undergo given the complexity of the treatment process. The weight of a cancer diagnosis carries a huge mental and physical burden for the patient. It is therefore important to place emphasis on delivering an efficient and streamlined process while at the same time not cutting any corners. Manual methods of delivering care can and should be automated by AI and technology where possible. This is especially applicable to preparation for delivering a dose of radiotherapy, where traditionally breast cancer patients undergo a tattoo setup that provides physical guidance on the area at which the dose should be delivered. Patients suffer not only from the knowledge of the disease, but are also marked with reminders of the experience by an increasingly outdated positioning technique. Innovation in radiotherapy treatment allows for a more ethical and streamlined solution.

    Surface Guided Radiation Therapy (SGRT) provides a means of tracking a patient's position before and during radiation therapy, helping to ensure a streamlined workflow for accurate treatment delivery. This type of treatment not only eliminates the need for an invasive tattoo setup but also provides a faster and more accurate way to deliver radiation doses to the patient. For example, precise measurements made by the software ensure that radiation is delivered specifically to the targeted area and not to the surrounding tissue. With a regular tattoo setup this can be a common issue, as patient movement, often triggered by respiration, can alter the accuracy of the tattoo markup, thereby reducing the effectiveness of the radiation treatment.

    Many SGRT systems work through a set of ceiling-mounted cameras that feed data into a software program. Each camera unit uses a projector and image sensors to create a 3D surface model of the area by projecting a red light onto the patient's skin (see Figure 2). This 3D surface model serves as a real-time map of the patient's position and surface contours. By constantly comparing the captured data with the pre-defined treatment plan, any deviations or movements can be detected instantly. If the patient moves beyond a predetermined threshold, the treatment can be paused to ensure accuracy and safety. The use of this cutting-edge technology is an important step in providing some level of comfort for patients in a challenging environment. The integration of such systems represents a significant advancement in patient-centric care in the field of radiation therapy.

    Written by Jaspreet Mann. Related article: Nuclear medicine
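
    The gating logic described above (compare the live surface with the planned reference and pause the beam when the deviation exceeds a threshold) can be sketched in a few lines of Python. The point-cloud representation and the 3 mm tolerance are illustrative assumptions, not parameters of any particular SGRT product.

        # Minimal sketch of SGRT-style gating: hold the beam if the mean displacement between the
        # live surface point cloud and the reference (planning) surface exceeds a tolerance.
        # The 3 mm tolerance and the point lists are illustrative assumptions.
        import math

        def mean_displacement_mm(live_points, reference_points):
            """Mean Euclidean distance between corresponding 3D surface points, in mm."""
            dists = [math.dist(p, q) for p, q in zip(live_points, reference_points)]
            return sum(dists) / len(dists)

        def beam_enabled(live_points, reference_points, tolerance_mm=3.0):
            return mean_displacement_mm(live_points, reference_points) <= tolerance_mm

        reference = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
        live = [(0.5, 0.0, 0.2), (10.4, 0.1, 0.0), (0.2, 10.3, 0.1)]
        print("beam on" if beam_enabled(live, reference) else "beam held for repositioning")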

  • The science and controversy of water fluoridation | Scientia News

    Diving deep. Published 17/11/23; last updated 14/07/25.

    In the pursuit of national strategies to improve oral health, few interventions have sparked as much debate and divided opinion as water fluoridation. Whilst some have voiced concerns about water fluoridation in recent years, viewing it as mass medicalisation and an intrusion into personal choice, researchers and dental professionals continue to champion its benefits as a cost-effective, population-wide approach that can significantly reduce tooth decay and enhance the oral health of communities across the country. The statistics from 2021-2022 paint a concerning picture, with a staggering 26,741 extractions performed on 0-19-year-olds under the NHS due to preventable tooth decay, amounting to an estimated cost of £50 million. With the NHS bearing the responsibility of providing dental care to millions of people nationwide, the introduction of water fluoridation stands out as a promising ally in the quest for more efficient healthcare and the alleviation of the burden on our already-strained healthcare system, all while improving dental health in a cost-effective manner.

    Fluoride is a naturally occurring chemical element found in soil, plants and groundwater, and it reduces dental decay through a dual mechanism: fluoridated water both impedes demineralisation of enamel and enhances remineralisation of teeth following acid attacks in the mouth. When sugars from food or drinks enter the mouth, the bacteria present in plaque convert these sugars to acid, demineralising the outer surface of teeth and leading to the formation of cavities. The incorporation of fluoride into the structure of tooth enamel during remineralisation strengthens and hardens the outer layer of teeth, rendering teeth less susceptible to damage and more resistant to acid-induced demineralisation. Moreover, fluoride has also been shown to reverse early tooth decay by repairing and remineralising weakened enamel, thus averting the need for restorative dental procedures such as fillings. The inhibition of demineralisation and encouragement of remineralisation together prevent cavities forming and preserve the vitality of our smiles.

    The main adverse effect of fluoridating water is the risk of dental fluorosis, a cosmetic condition caused by excessive fluoride exposure that affects the appearance of teeth, resulting in changes in tooth colour and texture. It presents as small opaque white spots or streaks on the tooth surface. It is important to note that fluorosis generally occurs at fluoride levels significantly higher than those recommended for water fluoridation. Opponents of water fluoridation also argue on ethical grounds, citing concerns about mass medication infringing on personal choice and the right to decide whether to use fluoride or dental products containing fluoride. In some cases, opposition is rooted in conspiracy theories and scepticism about government motives.

    Findings from the Office for Health Improvement and Disparities and the UK Health Security Agency highlight the benefits of water fluoridation. The data collected illustrate that young populations in areas of England with higher fluoride concentrations are up to 63% less likely to be admitted to hospital for tooth extractions due to decay than their counterparts in areas with lower fluoridation levels. This disparity is most pronounced in the most deprived areas, where children and young adults benefit the most from the addition of fluoride to the water supply. These findings strongly support the evidence for the advantages of water fluoridation and highlight how this simple measure can substantially improve health outcomes for our population. While fluoridation has proven particularly beneficial for communities from deprived backgrounds, it has demonstrated successful outcomes for individuals across all demographics, irrespective of age, education, employment, or oral hygiene habits. It is essential to emphasise that water fluoridation should not replace other essential oral health practices such as regular tooth brushing, prudent sugar intake, and dental appointments. Instead, it should complement these practices, working in synergy to optimise oral health. As of now, approximately 10% of the population in England receives water from a fluoridation scheme. While the protective and beneficial effects of fluoridation are well established, the decision to move towards a nationwide water fluoridation scheme ultimately rests with the Secretary of State for Health in the coming years.

    Written by Isha Parmar

  • Physics in healthcare | Scientia News

    Nuclear medicine. Published 06/01/24; last updated 10/07/25.

    When thinking about a career or what to study at university, many students interested in science think that they have to decide between a more academic route or something more vocational, such as medicine. Whilst both paths are highly rewarding, it is possible to mix the two. An example of this is nuclear medicine, which allows physics students to become healthcare professionals.

    Nuclear medicine is an area of healthcare that involves introducing a radioactive isotope into a patient's system in order to image their body. A radioactive isotope is an unstable nucleus that decays and emits radiation. This radiation can then be detected, usually by a tool known as a gamma camera. It sounds dangerous; however, it is a fantastic tool that allows us to identify abnormalities, view organs in motion and even prevent further spreading of tumours. So, how does the patient receive the isotope? It depends on the scan they are having! The most common route is injection, but it is also possible for the patient to inhale or swallow the isotope. Some hospitals give radioactive scrambled eggs or porridge to the patient for gastric emptying imaging. The radioisotope needs to obey some conditions:

    ● It must have a reasonable half-life. The half-life is the time it takes for the isotope to decay to half of its original activity. If the half-life is too short, the scan will be useless as nothing will be seen. If it is too long, the patient will be radioactive and spread radiation into their immediate surroundings for a long period of time. (A worked example of this condition is given below.)
    ● The isotope must be non-toxic. It cannot harm the patient!
    ● It must be able to biologically attach to the area of the body that is being investigated. If we want to look at bones, there is no point in giving the patient an isotope that goes straight to the stomach.
    ● It must emit radiation of suitable energy. The radiation must be picked up by the cameras, which are designed to be most efficient over a specific energy range; for gamma cameras, this is around 100-200 keV.

    Physicists are absolutely essential in nuclear medicine: they have to understand the properties of radiation, run daily quality checks to ensure the scanners are working, calibrate devices so that the correct activity of radiation is given to patients, and much more. The safety of patients and healthcare professionals must be the first priority when it comes to radiation, and with the right people on the job, safety and understanding are the priority of daily tasks. Nuclear medicine is indeed effective and is implemented in standard medicine thanks to the work of physicists.

    Written by Megan Martin. Related articles: Nuclear fusion / The silent protectors / Radiotherapy
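
    The half-life condition can be made concrete with the standard decay relation A(t) = A0 × (1/2)^(t / T_half). The Python sketch below uses technetium-99m as the illustrative isotope (half-life about 6 hours, gamma emission around 140 keV, squarely in the range quoted above); the isotope and the administered activity are assumptions for illustration, not values taken from the article.

        # Minimal sketch: exponential decay of administered activity, A(t) = A0 * 0.5 ** (t / T_half).
        # Tc-99m (half-life ~6 h) is used as an illustrative isotope; it is not named in the article.
        def activity_mbq(a0_mbq, t_hours, half_life_hours=6.0):
            """Remaining activity (MBq) after t_hours for an isotope with the given half-life."""
            return a0_mbq * 0.5 ** (t_hours / half_life_hours)

        a0 = 500.0   # illustrative administered activity in MBq
        for t in (0, 6, 12, 24):
            print(f"{t:>2} h: {activity_mbq(a0, t):6.1f} MBq")
        # After 24 h (four half-lives) only 1/16 of the original activity remains.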

  • An end at the beginning: the tale of the Galápagos Tortoises | Scientia News

    Conservation efforts. Published 06/06/24; last updated 25/09/25.

    The Galápagos Islands

    Most who know the name "Darwin" will be familiar with the Galápagos. These relatively uninviting islands protrude from harsh, crashing waves like spears of mountainous rock, formed through millions of years of fierce volcanic activity. Even Charles Darwin himself thought life could not be sustained in such a remote and harsh environment, writing in his 1835 Journal of Researches: "A broken field of basaltic lava, thrown into the most rugged waves, and crossed by great fissures, is everywhere covered by stunted, sun-burnt brushwood, which shows little signs of life." Little did the 22-year-old university graduate know at the time that these rugged islands would spark the most pivotal and influential theory in the field of modern biology. Due to the archipelago's unique volcanic origins, the cluster of islands has grown jagged and fractured, with some islands sitting only a few metres above sea level and others rising over 5,000 feet. These extremely diverse habitats enable the observation of vastly different sub-populations of the same (or closely related) species*, exhibiting differing adaptations to their unique environments. These morphological distinctions led to Darwin's famous 1859 book 'On the Origin of Species', detailing his evidence for the theory of evolution.

    *This article may refer to the Galápagos tortoises as different subspecies or species interchangeably, as this remains a contentious area.

    The giant tortoises

    One of the most apparent examples of evolution that Darwin noted was the Galápagos tortoise, Chelonoidis niger, of which there were at least 15 subspecies. Darwin devoted almost four pages of his Journal of Researches to the Galápagos tortoise, more than he did to any other Galápagos species. These captivating reptiles can grow up to 5 feet in length and weigh over 220 kg, making them the largest tortoises in the world. They can survive over a year without food or water, able to store tremendous volumes of liquid in their bladders during periods of drought, one of the many adaptive characteristics that enable them to routinely live well over 150 years. Darwin notably observed the species' two primary shell morphologies: saddleback and domed. Some subspecies, such as the Pinta Island tortoise (Chelonoidis niger abingdonii), have saddle-shaped shells which rise at the front, making it easier for the neck to stretch upwards to feed on taller vegetation on hotter, more arid islands. The populations with dome-shaped shells, including Chelonoidis niger porteri, occupy islands where there is an abundance of flora lower to the ground, making upward stretching of the neck unnecessary for feeding. Features such as these are well documented in Darwin's evidence for evolutionary adaptation throughout the islands.

    Torment and tragedy

    Only two centuries ago, the Galápagos Islands were rife with life, with an estimated 250,000 giant tortoises. Today, multiple species are extinct, with only around 10% of the individuals surviving. The dramatic decline of the Galápagos tortoises has been characterised by frequent human failure and, in some instances, human design. Between the 1790s and 1800s, whalers began operating around the Galápagos, routinely taking long voyages to explore the Pacific Ocean. With whaling voyages lasting about a year, the tortoises were selected as the primary source of fresh meat for the whalers, with each ship taking 200 to 300 tortoises aboard. Here, in a ship's hold, the hundreds of tortoises would live without food or water for months before being killed and consumed. Documentation of how many tortoises were taken aboard by whalers is scarce; however, estimates place the number between 100,000 and 200,000, taken by some 700 whaling ships between 1800 and 1870. This initial decimation through over-consumption was then followed by the introduction of harmful invasive species. In the years since, multiple foreign species have been introduced to the archipelago, mainly for farming, including pigs (many of which are feral), dogs, cats, rats, goats and donkeys. These non-native species are an enduring threat to the giant tortoise populations, preying on their eggs and hatchlings whilst also providing fierce and unprecedented competition for food. Furthermore, increasing temperatures attributed to climate change are thought to trigger atypical migrations. These migrations have the potential to reduce tortoise nesting success, further adding to the list of threats these species have had to endure.

    The Pinta giant tortoise, Chelonoidis niger abingdonii, of the unique saddleback shell variety, was thought to have been extinct since the early 20th century. But then, in 1971, József Vágvölgyi, a Hungarian scientist on Pinta Island, made a special discovery: Lonesome George. Seemingly the sole survivor of his kind, Lonesome George became an icon of the growing conservation movement surrounding the Galápagos species. This lone Pinta individual could have been wandering the small island for decades in search of another member of his species, a search that would unfortunately never bear fruit. Despite selective breeding efforts, on June 24, 2012, at 8:00 A.M. local time, Lonesome George passed away without producing any offspring, found by park ranger Fausto Llerena, who had looked after him for forty years.

    Hope and the future

    Despite all the devastation the Galápagos tortoises have endured, not all is lost. Just like the story of Lonesome George, a microcosm of this larger crisis, there is a small light at the end of the tunnel. Just prior to George's passing, a remarkable discovery was made. In 2008, research conducted by the Ecology and Evolutionary Biology Department of Yale University on neighbouring Isabela Island set out to genetically sequence the local giant tortoise population. Over 1,600 tortoises were tagged and sampled for their DNA, with analyses revealing an astonishing number of tortoises with mixed genetic ancestry. Within this sample, 17 individuals contained DNA from the Pinta tortoise species (and more contained DNA from the also-extinct Floreana species). Retrospective study of old whaling logbooks seems to indicate that, in order to lighten the burden of their ships, whalers and pirates dropped large numbers of tortoises in Banks Bay, near Volcano Wolf on Isabela Island, likely accounting for these hybrids. This miracle discovery opens the door to selective breeding efforts, paving the way for the reintroduction of the previously extinct Pinta Island species.

    While only a fraction of their original numbers remain, the Galápagos tortoises continue to embody evolution's stunning intricacies and persist as a bright beacon of hope for the greater world of conservation. It is vital that we do our part as human beings to correct the errors of our past and to respect and nurture these gentle giants and all that they represent in this world we call home.

    Written by Theo Joe Andreas Emberson. Related articles: Conservation of marine iguanas / 55 years of vicuna conservation / Gorongosa National Park / Modern Evolutionary Synthesis

    REFERENCES
    Sulloway FJ. Tantalizing tortoises and the Darwin-Galápagos legend. J Hist Biol. 2009;42(1):3-31. doi:10.1007/s10739-008-9173-9
    Image: Patrick J. Endres, AlaskaPhotoGraphics.com
