
Search Index

355 results found

  • NGAL: A Valuable Biomarker for Early Detection of Renal Damage | Scientia News

    How kidney damage can be detected
    Published 04/04/24; last updated 10/07/25

    Nestled under the ribcage, the kidneys are primarily responsible for the filtration of toxins from the bloodstream and their elimination in urine. In instances of Acute Kidney Injury (AKI), however, this vital function is compromised. AKI is the sudden loss of kidney function, commonly seen in hospitalised patients. Because patients don't usually experience pain or distinct symptoms, AKI is difficult to identify. Early detection of AKI is paramount to prevent kidney damage from progressing into more enduring conditions such as Chronic Kidney Disease (CKD). So, how can we detect AKI promptly? This is where Neutrophil Gelatinase-Associated Lipocalin (NGAL), a promising biomarker for the early detection of renal injury, comes into focus.

    Until recently, assessing the risk of AKI has relied on measuring changes in serum creatinine (sCr) and urine output. Creatinine is a waste product formed by the muscles. Normally, the kidney filters creatinine and other waste products out of the blood into the urine, so high serum creatinine levels indicate disrupted kidney function, suggesting AKI. However, a limitation of the sCr test is that it is affected by extrarenal factors such as muscle mass; people with higher muscle mass have higher serum creatinine. Additionally, an increase in this biomarker becomes evident only once renal function is already irreversibly damaged. NGAL's ability to signal kidney damage hours to days before sCr rises makes it a more fitting biomarker for preventing total kidney dysfunction.

    Among currently proposed biomarkers for AKI, the most notable is NGAL. NGAL is a small protein rapidly induced in the kidney tubule upon insult, and it is detected in the bloodstream within hours of renal damage. NGAL levels rise well before other renal markers appear. Such characteristics render NGAL a promising biomarker for quickly pinpointing kidney damage.

    The concentration of NGAL present in a patient's urine is determined using a particle-enhanced laboratory technique. This involves quantifying the particles in the solution by measuring the reduction in light intensity transmitted through the urine sample.
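    As a rough sketch of how such a read-out can be converted into a concentration (this assumes a standard turbidimetric calibration, a detail not given in the article), the attenuation of transmitted light is expressed as an absorbance and mapped to concentration through a calibration curve that is approximately linear over the working range:

        A = -\log_{10}\!\left(\frac{I}{I_0}\right), \qquad A \approx k\,c + A_0 \;\Rightarrow\; c \approx \frac{A - A_0}{k}

    where I_0 is the light intensity transmitted through a blank, I the intensity through the sample, c the NGAL concentration, and k and A_0 are calibration constants determined from standards.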
    In conclusion, the early detection of AKI remains a critical challenge, but NGAL emerges as a promising biomarker for promptly detecting renal injury before total loss of kidney function unfolds. NGAL offers a significant advantage over traditional biomarkers like serum creatinine: its swift induction upon kidney injury allows clinicians and healthcare providers to intervene before renal dysfunction manifests.

    Written by Fozia Hassan
    Related article: Cancer biomarkers and evolution

  • The chronotypes | Scientia News

    The natural body clock and the involvement of genetics
    Published 27/11/24; last updated 10/07/25

    Feeling like heading to bed at 9 pm and waking up at the crack of dawn? These tendencies define your chronotype, backed up by changes within your body. A generally overlooked topic, chronotypes affect our everyday behaviour. Many people innately associate themselves with a certain chronotype, but what do we know about how these physiological differences arise at a molecular level?

    The word 'chronotype' was first coined in the 1970s, combining the Greek words chrono (time) and type (kind or form). While the term is relatively modern, the concept emerged in the 18th century. Researchers in the 1960s and 1970s, like Jürgen Aschoff, explored how internal biological clocks influence our sleep-wake cycles, leading to the classification of people into morning or evening types based on their activity patterns. The first evidence of body clocks was found in plants rather than humans, leading to the invention of flower clocks, which were used to tell the time of day.

    Before delving into the details, let us introduce the general categories of chronotypes, which describe a person's inclination to wake up and sleep while also affecting productivity periods. We know of the following three categories:

    - The morning type (also referred to as larks): inclined to wake up and go to bed early because they feel most alert and productive in the mornings.
    - The evening type (also called the owls): they feel most alert and productive in the evenings and onwards, so they are inclined to wake up and go to bed later.
    - The intermediate types (also referred to as the doves): they fall in the middle of this range.

    Let's explore the genetic evidence that chronotypes are a natural phenomenon.

    Genetics of chronotypes

    The main determining factor in our chronotypes is the circadian period: the body's roughly 24-hour cycle of changes that manifest as feelings of productivity and energy or tiredness. The length of this period is crucial in determining our chronotypes, and the physiological outputs through which it acts include melatonin release and core body temperature. One study suggested that morning types might have circadian periods shorter than 24 hours, whereas evening chronotypes might have circadian periods longer than 24 hours.

    A major family of clock genes is PER, comprising PER1, PER2 and PER3, which are thought to regulate the circadian period. Specifically, it has been observed that a delay in the expression of the PER1 gene in humans causes an increased circadian period. Possible causes for this delay include variation within the negative feedback loop in which PER1 operates, hereditary differences, environmental factors, changes in hormonal signalling, and age. This process may describe the mechanism behind the evening chronotype. Polymorphisms in the PER3 gene are thought to cause shorter circadian rhythms and the manifestation of morning types; a PER3 polymorphism can likewise arise from the range of causes described for PER1. These nuances shift the timing of the outputs that mark the circadian rhythm, such as melatonin release and core body temperature.
    This matters because of its power to shape our energy levels, windows of productivity, and sleep cycles. The consensus remains that roughly 50% of the variation in chronotype is attributable to genetics; however, it has also been observed that chronotypes are prone to change with advancing age. Increased age is associated with an inclination towards an earlier-phase chronotype, and this age-related variation has been observed to be greater in men. There also exists an association between geographical location and phase preference: increasing latitude (travelling north or south from the earth's equator) is associated with later chronotypes. Of course, many variations and factors come into play to affect these findings, such as ethnic genetics, climate, work culture and even population density.

    The effect on core body temperature and melatonin

    Polymorphisms in PER3 cause a much earlier morning peak in body temperature and melatonin than in the evening and intermediate types. These manifest as the need to sleep much earlier and a decreased feeling of productivity later in the day. In contrast, the evening types experience a later release of melatonin and a later drop in core body temperature, causing a later onset of tiredness and lack of energy. It can then be inferred that the intermediate types are affected by the expression of these genes in a way that falls in the middle of this spectrum.

    Conclusion

    Understanding differences in circadian rhythms and sleep-wake preferences offers valuable insights into human behaviour and health. Chronotypes influence various aspects of daily life, including sleep patterns and quality, cognitive performance and susceptibility to specific health conditions, including sleep-wake conditions. An extreme deviation in circadian rhythms and sleep cycles may lead to conditions such as Advanced Sleep-Wake Phase Disorder (ASPD) and Delayed Sleep-Wake Phase Disorder (DSPD). Recognising these variations is also helpful in optimising work schedules and adjusting to jet lag, improving mental and physical health by tailoring our biological rhythms to our environments. Many individuals opt to do a sleep study at an institution to gain insights into their circadian rhythms. A healthcare professional may also prescribe this if they suspect you have a circadian disturbance or a sleep disorder such as insomnia.

    The Morningness-Eveningness Questionnaire (MEQ)

    The MEQ is a self-reported questionnaire you may complete to gain more insight into your chronotype category. Clinical psychologist Michael Breus later built on this idea with a framework that uses different animals to categorise the chronotypes further. This framework suggests that the Bear represents individuals whose energy patterns are entrained to the rising and setting of the sun, and they are the most common type in the general population. The Lions describe the early risers, and Wolves roughly equate to the evening types. More recently, a fourth chronotype has been proposed: the Dolphin, whose questionnaire responses suggest they switch between modes.
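    As a minimal sketch of how such a questionnaire is turned into a category (the cut-offs below are the commonly quoted Horne-Östberg ranges, assumed here for illustration rather than taken from this article, and versions of the questionnaire differ):

        # Bucket a Morningness-Eveningness Questionnaire total (16-86) into a
        # chronotype category. The cut-offs are assumptions for illustration.
        def chronotype_from_meq(total: int) -> str:
            if not 16 <= total <= 86:
                raise ValueError("MEQ totals range from 16 to 86")
            if total >= 59:
                return "morning type (lark)"      # 59-69 moderate, 70-86 definite
            if total >= 42:
                return "intermediate type (dove)"
            return "evening type (owl)"           # 31-41 moderate, 16-30 definite

        print(chronotype_from_meq(63))  # morning type (lark)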
    Whether you're a Bear, Lion, Wolf, or Dolphin, understanding your chronotype can be a game-changer in optimising your daily routine. So, what's your chronotype, and how can you start working with your body's natural rhythms to unlock your full potential? A sleep study? The MEQ? Maybe keeping a sleep tracker.

    Written by B. Esfandyare
    Related articles: Circadian rhythms and nutrition / Does insomnia run in families?

  • The Challenges in Modern Day Chemistry | Scientia News

    And can we overcome them?
    Published 24/02/24; last updated 11/07/25

    Chemistry, heralded as the linchpin of the natural sciences, serves as the veritable bedrock of our comprehension of the world and concurrently takes on a pivotal role in resolving the multifaceted global challenges that confront humanity. In the context of the modern era, chemistry has undergone a prodigious transformation, with research luminaries persistently pushing the frontiers of knowledge and technological application. However, this remarkable trajectory is shadowed by a constellation of intricately interwoven challenges that mandate innovative and often paradigm-shifting solutions. This article embarks on a comprehensive exploration of the salient and formidable challenges that presently beset the discipline of contemporary chemistry.

    Sustainability and the Imperative of Green Chemistry

    The paramount challenge confronting modern chemistry pertains to the burgeoning and compelling imperative of environmental sustainability. The chemical industry stands as a colossal contributor to ecological degradation and the inexorable depletion of vital resources. Consequently, an exigent necessity looms: the development of greener and environmentally benign chemical processes. Green chemistry, an avant-garde discipline, is at the vanguard of this transformation, placing paramount emphasis on the architectural design of processes and products that eschew the deployment of hazardous substrates. Researchers within this sphere are diligently exploring alternative, non-toxic materials and propounding energy-efficient methodologies, thereby diminishing the ecological footprint intrinsic to chemical procedures.

    Energy Storage and Conversion at the Frontier

    In an epoch marked by the surging clamour for renewable energy sources such as photovoltaic solar panels and wind turbines, the exigency of efficacious energy storage and conversion technologies attains unparalleled urgency. Chemistry assumes a seminal role in the realm of advanced batteries, fuel cells, and supercapacitors. However, extant challenges such as augmenting energy density, fortifying durability, and prudently attenuating production costs remain obstinate puzzles to unravel. In response, a phalanx of researchers is actively engaged in the relentless pursuit of novel materials and the innovative engineering of electrochemical processes to surmount these complexities.

    Drug Resistance as a Crescendoing Predicament

    The advent of antibiotic-resistant bacterial strains and the irksome conundrum of drug resistance across diverse therapeutic spectra constitute a formidable quandary within the precincts of medicinal chemistry. With pathogenic entities continually evolving, scientists face the Herculean task of continually conceiving novel antibiotics and antiviral agents. Moreover, the unfolding panorama of personalised medicine and the realm of targeted therapies necessitate groundbreaking paradigms in drug design and precision drug delivery systems. The tantalising confluence of circumventing drug resistance whilst simultaneously obviating deleterious side effects represents a quintessential challenge in the crucible of contemporary chemistry.

    Ethical Conundrums and the Regulatory Labyrinth

    As chemistry forges ahead on its unceasing march of progress, ethical and regulatory conundrums burgeon in complexity and profundity.
    Intellectual property rights, the ethical contours of responsible innovation, and the looming spectre of potential malevolent misuse of chemical knowledge demand perspicacious contemplation and meticulously crafted ethical architectures. Striking an intricate and nuanced equilibrium between the imperatives of scientific advancement and the obligations of prudent stewardship of chemical discoveries constitutes an enduring challenge that impels the chemistry community to unfurl its ethical and regulatory sails with sagacity and acumen.

    In conclusion...

    Modern-day chemistry, ensconced in its dynamic and perpetually evolving tapestry, stands as the lodestar of innovation across myriad industries while confronting multifarious global challenges. However, it does so against the backdrop of its own set of formidable hurdles, ranging from the exigencies of environmental responsibility to the mysteries of drug resistance and the intricate tangle of ethical and regulatory dilemmas. The successful surmounting of these multifaceted challenges mandates interdisciplinary collaboration, imaginative innovation, and an unwavering commitment to the prudential and ethically conscious stewardship of the profound knowledge and transformative potential that contemporary chemistry affords. As humanity continues its inexorable march towards an ever-expanding understanding of the chemical cosmos, addressing these challenges is the sine qua non for an enduringly sustainable and prosperous future.

    Written by Navnidhi Sharma
    Related article: Green Chemistry

  • A potential treatment for HIV | Scientia News

    Can CRISPR/Cas9 overcome the challenges posed by current HIV treatments?
    Published 21/07/23; last updated 08/07/25

    The human immunodeficiency virus (HIV) was recorded to affect 38.4 million people globally at the end of 2021. This virus attacks the immune system, incapacitating CD4 cells: white blood cells (WBCs) which play a vital role in coordinating the immune response and fighting infection. The normal range of CD4 cells in our body is from 500 to 1500 cells/mm3 of blood; HIV can rapidly deplete the CD4 count to dangerous levels, damaging the immune system and leaving the body highly susceptible to infections. Whilst antiretroviral therapy (ART) can help manage the virus by interfering with viral replication and helping the body manage the viral load, it fails to eliminate the virus altogether. This is because of latent viral reservoirs where HIV can lie dormant and reignite infection if ART is stopped.

    Whilst a cure has not yet been discovered, a promising avenue being explored in the hope of eradicating HIV is CRISPR/Cas9 technology. This highly precise gene-editing tool has been shown to be able to induce mutations at specific points in the HIV proviral DNA. Guide RNAs pinpoint the desired genome location, and the Cas9 nuclease acts as molecular scissors that remove selected segments of DNA. CRISPR/Cas9 technology therefore provides access to the viral genetic material integrated into the genome of infected cells, allowing researchers to cleave HIV genes from infected cells and clear latent viral reservoirs.

    Furthermore, the CRISPR/Cas9 gene-editing tool can also prevent HIV from attacking CD4 cells in the first place. HIV binds to the chemokine receptor CCR5, expressed on CD4 cells, in order to enter the WBC. CRISPR/Cas9 can cleave the gene for the CCR5 receptor, thereby preventing the virus from entering and replicating inside CD4 cells.

    CRISPR/Cas9 technology offers a solution that current antiretroviral therapies cannot provide. Through gene editing, researchers can target the lasting reservoirs that HIV establishes in our bodies and that remain unreachable by ART. However, further research and clinical trials are still required to fully understand the safety and efficacy of this approach to treating HIV before it can be implemented as a standard treatment.

    Written by Bisma Butt
    Related articles: Antiretroviral therapy / mRNA vaccines
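    To illustrate the targeting step described above, the sketch below scans a DNA string for sites where a 20-nucleotide protospacer is followed by an "NGG" PAM, the motif recognised by Cas9 from S. pyogenes. The sequence and the routine are invented for illustration; real guide design against HIV proviral DNA involves specificity and off-target checks far beyond this.

        # Minimal sketch: list candidate Cas9 target sites (20-nt protospacer + NGG PAM)
        # in a made-up DNA sequence. Illustrative only; not actual HIV guide design.
        def find_cas9_sites(dna: str, protospacer_len: int = 20):
            dna = dna.upper()
            sites = []
            for i in range(len(dna) - protospacer_len - 2):
                pam = dna[i + protospacer_len : i + protospacer_len + 3]
                if pam[1:] == "GG":  # any base followed by GG
                    sites.append((i, dna[i : i + protospacer_len], pam))
            return sites

        example = "ATGCGTACCTTAGCATCGGATTCAGGCTAAGGTCCAGTACGGTTAACGG"  # invented sequence
        for pos, protospacer, pam in find_cas9_sites(example):
            print(f"site at {pos}: {protospacer} | PAM {pam}")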

  • Explaining Altruism | Scientia News

    The evolutionary theory VS the empathy-altruism theory
    Published 13/05/24; last updated 20/06/24

    Altruism is the behaviour of helping others without a reward, or even at a cost to the individual who performs it; for instance, volunteering in a soup kitchen to help feed those in need. This article aims to explain altruistic behaviour using the evolutionary and the empathy-altruism theories.

    The evolutionary theory, originally proposed by Darwin, is based upon three pillars: variation of genes within a species, heritability of those variations by the next generations, and differential fitness, also known as survival and reproduction. Evolutionary theory suggests that the degree of relatedness (the extent to which the person being helped carries copies of the altruist's genes) and reproductive value (the extent to which the relative can pass their genes on to future generations) are the two determining factors in an individual's decision on whether to help someone in need. Burnstein et al. (1994) conducted a study presenting participants with hypothetical situations involving altruism, in which they manipulated the degree of relatedness, the health of the target and the context in which help was needed. They found that in both life-or-death and everyday situations individuals were more likely to help close kin than distant kin and strangers. Additionally, in everyday situations, participants helped ill people more. However, in life-or-death situations, they tended to help healthy people more, presumably due to their higher reproductive value. These findings therefore indicate that altruistic behaviour depends on the degree of relatedness between the individual helping and the individual being helped, and on the latter individual's reproductive value.

    The empathy-altruism theory is based on the notion that pure altruism can only occur due to empathy: the ability to identify with and experience another person's emotional state. The theory suggests that if the altruistic person does not feel empathy, help will only be given if it is in the individual's interest, also known as the social exchange view. If the altruistic person does feel empathy, however, help will be given regardless of self-interest, even when the costs outweigh the rewards. Toi & Batson (1982) conducted a study in which they manipulated two factors: the empathy felt by participants towards the hypothetical victim, by giving them different prompts, and the cost of helping the victim, by telling the participants whether they would ever come into contact with the victim again. The researchers found that the participants with induced empathy were likely to engage in altruistic behaviour regardless of personal cost and were motivated by an altruistic concern for the victim's welfare, whilst the individuals in the low-empathy condition were more likely to help when the personal cost of not helping (having to face the victim again) was high. Therefore, the empathy-altruism model has empirical support and is suitable for explaining individual differences in altruistic behaviour.
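    Although the article does not state it explicitly, the evolutionary logic above (weighing relatedness against costs and benefits) is usually formalised with Hamilton's rule, under which helping is favoured by selection when

        r B > C

    where r is the coefficient of relatedness between the helper and the recipient, B is the reproductive benefit to the recipient, and C is the reproductive cost to the helper. For example, helping a full sibling (r = 0.5) is favoured only if the benefit to the sibling is more than twice the cost to the helper.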
    The evolutionary and empathy-altruism theories both suggest that personal gains can motivate altruistic behaviour. In the evolutionary theory, those gains consist of passing the altruist's genes down to the next generations. In the empathy-altruism theory, those gains are the personal interests and costs at stake when the altruist does not feel empathy towards the target. However, the empathy-altruism theory also proposes that when the individual feels empathy towards the target, personal gains are irrelevant to their decision on whether to help. Therefore, the theories propose two different perspectives on the individual differences in altruism.

    Whilst the evolutionary theory has significant explanatory value for altruism, there is evidence that emotional closeness is a mediating factor for altruism towards kin. Korchmaros & Kenny (2001) found that among genetically related individuals, the tendency to display altruism was affected by their emotional closeness with the specific relative being helped. Therefore, the degree of relatedness alone cannot fully explain the individual differences in altruistic behaviour, and the empathy-altruism theory might be a more suitable explanation because the level of empathy felt by altruists increases with the closeness the individual feels towards the target. Additionally, the evolutionary theory suggests that people rarely help strangers in need, which is overly reductionist and incorrect: worldwide, many volunteers help people they are unfamiliar with. The empathy-altruism theory is more holistic and, therefore, might be a more appropriate theory of altruism, as many studies have found that empathy can be experienced towards complete strangers. Moreover, even if empathy is not experienced, the empathy-altruism theory explains altruism towards strangers through the social exchange view. Consequently, the empathy-altruism theory explains a wider range of behaviours and individual differences in altruistic behaviour than the evolutionary theory.

    Therefore, while both theories provide broad descriptions of individual differences in altruism, I think that the empathy-altruism theory provides a more comprehensive explanation than the evolutionary theory. The empathy-altruism model accounts not only for the role of emotional closeness and empathy in motivating altruism towards kin, but also for why people help strangers, by highlighting how empathy can induce altruistic acts even without genetic relatedness or reproductive-value incentives. By encompassing a wider range of situational and psychological factors influencing our decisions to help others, the empathy-altruism theory represents a more complete account of the complex phenomenon of altruism.

    Written by Aleksandra Lib
    Related article: The endowment effect

    REFERENCES
    Baker, R. L. (2008). The social work dictionary. Washington, DC: NASW Press.
    Batson, C. D., Batson, J. G., Slingsby, J. K., Harrell, K. L., Peekna, H. M., & Todd, R. M. (1991). Empathic joy and the empathy-altruism hypothesis. Journal of Personality and Social Psychology, 61(3), 413.
    Burnstein, E., Crandall, C., & Kitayama, S. (1994). Some neo-Darwinian decision rules for altruism: Weighing cues for inclusive fitness as a function of the biological importance of the decision. Journal of Personality and Social Psychology, 67(5), 773.
    Grynberg, D., & Konrath, S. (2020). The closer you feel, the more you care: Positive associations between closeness, pain intensity rating, empathic concern and personal distress to someone in pain. Acta Psychologica, 210, 103175.
    Kerr, B., Godfrey-Smith, P., & Feldman, M. W. (2004). What is altruism? Trends in Ecology & Evolution, 19(3), 135-140.
    Korchmaros, J. D., & Kenny, D. A. (2001). Emotional closeness as a mediator of the effect of genetic relatedness on altruism. Psychological Science, 12(3), 262-265.
    Rodrigues, A. M., & Gardner, A. (2022). Reproductive value and the evolution of altruism. Trends in Ecology & Evolution, 37(4), 346-358.
    Toi, M., & Batson, C. D. (1982). More evidence that empathy is a source of altruistic motivation. Journal of Personality and Social Psychology, 43(2), 281.

  • Huntington's disease | Scientia News

    A hereditary neurodegenerative disorder
    Published 18/10/23; last updated 27/09/25

    Huntington's disease (HD) is a neurodegenerative disorder causing cognitive decline, behavioural difficulties, and uncontrollable movements. It is a hereditary disease that has a devastating effect on the individual's life and, unfortunately, is incurable.

    Genetic component

    What may come as a surprise is that everyone carries two copies (one from each parent) of the huntingtin (HTT) gene, which codes for the huntingtin protein. This gene contains a stretch of CAG repeats. In healthy genes, the CAG sequence is repeated between 10 and 26 times. However, if the gene is faulty, the CAG sequence is repeated more than 40 times, resulting in a dysfunctional huntingtin protein. The disease is autosomal dominant, meaning that, regardless of sex, if either parent is a carrier, their child has a 50% chance of inheriting the faulty gene. Reminder: because the gene is dominant, those who inherit even one copy will develop the disease.
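    As a minimal sketch of the repeat-counting logic just described (the sequence is invented for illustration, not real HTT exon 1, and the thresholds are simply those quoted in the article):

        # Count the longest run of consecutive CAG codons in a DNA string and
        # bucket it using the article's thresholds (10-26 typical, >40 faulty).
        import re

        def longest_cag_run(dna: str) -> int:
            runs = re.findall(r"(?:CAG)+", dna.upper())
            return max((len(run) // 3 for run in runs), default=0)

        def interpret(repeats: int) -> str:
            if repeats > 40:
                return "expanded repeat: associated with Huntington's disease"
            if repeats <= 26:
                return "within the typical range"
            return "intermediate: between the ranges quoted in the article"

        seq = "ATG" + "CAG" * 43 + "CCGCCA"   # hypothetical expanded allele
        print(longest_cag_run(seq), "-", interpret(longest_cag_run(seq)))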
    Effect on the brain

    The faulty huntingtin protein accumulates in cells, leading to cell death and damage to the brain. If you were to look at the brains of individuals with Huntington's disease, you would see a reduction in the volume of the caudate and putamen. These areas are part of the striatum, a subdivision of the basal ganglia involved in fine-tuning our voluntary movements, e.g. reaching out to grab a cup. As the disease progresses, this atrophy can extend to other areas of the brain, including the thalamus, frontal lobe, and cerebellum.

    Symptoms

    The symptoms normally manifest in three categories: motor, cognitive and psychiatric. We know that the basal ganglia are involved in voluntary movement, so the damage causes one of the most visible symptoms of HD: uncontrollable, jerky movements. Cognitive symptoms include personality changes and difficulties with planning and attention. There can also be impairments in how those with HD recognise emotions; all these symptoms can interact to make social interaction more difficult. Finally, the psychiatric symptoms often seen include irritability and aggression, depression, anxiety, and apathy.

    Impact on life and family

    At the age when diagnosis usually occurs (around 30 years old), patients are often buying houses, getting married and either having children or deciding to start a family. The diagnosis may change people's outlook on having children and can place a great psychological burden on them if they have unknowingly passed the faulty gene on to children already born. Diagnosis also brings consequences for seemingly mundane but incredibly important issues such as obtaining life insurance, with some companies not covering individuals with an official diagnosis. This subsequently makes life harder for their families, as the patient will eventually be unable to work, and there can be costs associated with the need for care facilities as the disease progresses.

    Unfortunately, this is a progressive neurodegenerative condition with no cure. The only treatment options available at present are interventions which aim to alleviate patients' symptoms. Whilst these treatments can reduce the motor and psychiatric symptoms, they cannot stop the progression of Huntington's disease. We have only scratched the surface of the impact Huntington's disease has on a patient and their family. It is so important to understand ways in which everyone affected can be best supported during the disease progression, to give all those involved a better quality of life.

    Written by Alice Jayne Greenan
    Related articles: A potential gene therapy for HD / Epilepsy

  • Mathematical models in cognitive decision-making | Scientia News

    Can we quantify choices?
    Published 19/11/23; last updated 10/07/25

    The simple answer to this question is yes, because even if you do not realise it, we rank different choices in our heads all the time. Whether it's deciding what to have for breakfast, choosing a route to work, or evaluating job offers, our minds are constantly engaged in the intricate process of decision-making. In this article, we cover some of the mathematical techniques we use to make these decisions.

    Bayesian statistics

    Bayesian statistics, named after Thomas Bayes, is a branch of statistics that employs probability theory to quantify and update our beliefs, or degrees of certainty, about events or hypotheses. At its core is Bayes' Theorem, a fundamental principle of Bayesian inference. Bayes' Theorem provides a framework for updating our initial beliefs (prior probabilities) with new evidence (the likelihood) to arrive at revised beliefs (posterior probabilities). One of the main reasons for the theorem's popularity is the intuitive validity of what it proposes: at its essence, Bayes' Theorem acknowledges that our beliefs and understanding of the world are not fixed but can and should change as we learn more. Hence, the likelihood we assign to events changes as new information surfaces, which makes perfect sense now but was revolutionary when the theorem was first introduced. This technique is undeniably used as a core cognitive decision-making process, helping us make choices based on the likelihood of events.
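    Written out explicitly (the standard form, which the article describes only in words), the update is

        P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

    where P(H) is the prior belief in a hypothesis H, P(E | H) is the likelihood of observing the evidence E if H were true, P(E) is the overall probability of the evidence, and P(H | E) is the revised, posterior belief once E has been observed.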
    Game theory and utility theory

    Game theory and utility theory serve as essential mathematical tools in our exploration of cognitive decision-making. Game theory reveals the strategic thinking behind decisions and their consequences, shedding light on the complex interactions that influence our choices in various contexts, such as economics and social behaviour. The study of game theory includes evaluating and optimising payoff matrices, which are the framework used to model the payoff from each combination of decisions. In a two-player game, Alice might choose the row of the matrix and Bob the column (a small illustrative example is sketched at the end of this entry). Alice's and Bob's payoffs depend on each other's actions, so decision-making becomes difficult, which is one of the reasons studying game theory is so interesting.

    Utility theory, which has ties to game theory, quantifies how individuals assess the desirability of options: the more you desire an outcome, the more utility you assign to it. The theory plays a pivotal role in understanding decision preferences and trade-offs. By analysing utility functions, we gain insights into how individuals optimise their choices to maximise satisfaction, aligning our understanding of human decision-making with mathematical precision.

    Other mathematical frameworks and applications

    In addition to Bayesian statistics, game theory, and utility theory, several other mathematical frameworks enrich our understanding of cognitive decision-making. These include:

    - Markov Decision Processes (MDPs): essential for sequential decision-making and reinforcement learning, used in robotics, artificial intelligence, and operations research.
    - Prospect Theory: crucial for understanding how individuals make decisions involving risk and uncertainty, particularly in fields like economics and behavioural psychology.
    - Decision Trees: widely used for visualising complex decision-making processes and often applied in data analysis and operations research.
    - Neural Networks: critical for modelling complex cognitive decision-making processes, particularly in machine learning and deep learning applications.

    I would like to thank you for reading. Be sure to check out other mathematics articles on the site!

    Written by Temi Abbass
    Related articles: Behavioural economics I, II and III; Markov Chains
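    To make the payoff-matrix idea from the game theory section concrete, here is a hypothetical 2x2 game (the numbers are invented for illustration and are not the matrix from the original article):

        # payoffs maps (Alice's choice, Bob's choice) -> (Alice's payoff, Bob's payoff).
        payoffs = {
            ("cooperate", "cooperate"): (3, 3),
            ("cooperate", "defect"):    (0, 5),
            ("defect",    "cooperate"): (5, 0),
            ("defect",    "defect"):    (1, 1),
        }

        def best_response(player: int, other_choice: str) -> str:
            """Best choice for one player (0 = Alice, 1 = Bob) given the other's move."""
            def payoff(choice: str) -> int:
                key = (choice, other_choice) if player == 0 else (other_choice, choice)
                return payoffs[key][player]
            return max(("cooperate", "defect"), key=payoff)

        # Each player's payoff depends on the other's action: here "defect" is Alice's
        # best response to either of Bob's moves, even though mutual cooperation pays more.
        print(best_response(0, "cooperate"), best_response(0, "defect"))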

  • Digital innovation in rural farming | Scientia News

    Transforming agriculture with computer science
    Published 21/07/23; last updated 09/07/25

    With their rich agricultural heritage and significant contribution to the national economy, rural farming communities have always been at the forefront of agricultural innovation. Today, as the world undergoes rapid digital transformation, the integration of computer science has emerged as a game-changer in the agricultural sector. By harnessing the power of emerging technologies and data-driven approaches, farmers can enhance productivity, optimise resource allocation, and foster sustainable farming practices. This article delves into the role of computer science in revolutionising agriculture and farming practices in rural areas. From precision agriculture and data analytics to the use of IoT, drones, and decision support tools, we explore how technology-driven solutions are shaping a new era of agriculture, promising increased efficiency, reduced environmental impact, and improved livelihoods for farmers.

    A recent report revealed that farmers in various rural regions, such as Punjab, India, have faced significant challenges, including crop failures, leading to distress and financial difficulties. It is important to address these issues and prevent the associated consequences. Digitalisation within the farming industry can play a vital role in mitigating these challenges and fostering resilience. So how exactly can rural farming benefit from digitalisation?

    - Precision agriculture and data analytics: the implementation of precision agriculture techniques, supported by data analytics, can enable farmers to optimise resource utilisation, improve crop management, and mitigate agricultural risks. By analysing data on weather patterns, soil conditions, and crop health, farmers can make informed decisions, enhance productivity, and reduce the incidence of crop failures.

    - Market intelligence and price forecasting: computer science tools can facilitate better market intelligence and price forecasting, empowering farmers to make informed decisions about crop selection, timing of harvest, and market strategies. Access to real-time market data, coupled with predictive analytics, can help farmers negotiate fair prices and reduce the financial vulnerability caused by market instability.

    - Remote sensing and drone technology: remote sensing and drones can enable efficient crop monitoring, early detection of diseases, and targeted interventions. High-resolution imagery and computer vision algorithms can identify crop stress, nutrient deficiencies, or pest outbreaks, allowing farmers to take timely action, reduce crop losses, and enhance yield.

    - Decision support systems: decision support systems can provide customised recommendations to farmers, incorporating data from multiple sources such as weather forecasts, market trends, and agronomic best practice. These systems can assist farmers in making well-informed decisions about crop selection, input usage, and resource allocation, ultimately improving their profitability and reducing financial distress (a minimal sketch of this idea follows the list).
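    As a minimal sketch of the decision-support idea in the last bullet, the toy function below turns two hypothetical field readings into an irrigation recommendation. The inputs, thresholds and wording are invented for illustration and are not drawn from any real advisory system:

        # Toy decision-support rule: combine soil moisture with forecast rainfall.
        def irrigation_advice(soil_moisture_pct: float, forecast_rain_mm: float) -> str:
            if soil_moisture_pct >= 35:
                return "No irrigation needed: soil moisture is adequate."
            if forecast_rain_mm >= 10:
                return "Hold off: the forecast rain should cover the deficit."
            return "Irrigate: the soil is dry and little rain is forecast."

        print(irrigation_advice(soil_moisture_pct=22, forecast_rain_mm=2))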
    The integration of computer science offers promising avenues for addressing the complex challenges faced by farmers in rural areas. By harnessing the power of data analytics, IoT, drones, and decision support tools, farmers can benefit from enhanced agricultural practices, improved market access, and financial stability. However, it is crucial to ensure the accessibility and affordability of these technologies, coupled with comprehensive support systems and policy reforms, to truly empower farmers and create sustainable change.

    Written by Jaspreet Mann
    Related articles: Revolutionising sustainable agriculture through AI / Plant diseases and nanoparticles

  • The Crab Nebula | Scientia News

    An overview
    Published 23/03/24; last updated 14/02/25

    Of the roughly 270 known supernova remnants, the Crab Nebula is one of the better known in popular science. It originates from a violent supernova explosion whose sighting was first recorded by the Chinese astronomer Yang Wei-te in July of 1054 AD. Wei-te reported the appearance of a "guest star" so bright that it was visible during the day for three weeks, and at night for 22 months. In 1731, English astronomer John Bevis rediscovered the object, which was then observed by Charles Messier in 1758, prompting the nebula's lesser-known name, Messier 1. Located approximately 6,500 light-years from Earth, the nebula cannot be seen with the naked eye, but observations at different wavelengths give rise to the beautiful colored images often published.

    The Crab Nebula is the result of a violent explosion process that signals what astronomers call "star death." This occurs when a star runs out of fuel for the fusion process in its core that produces an outward pressure counteracting the constant inward pull on the star's outer shells. With the loss of outward pressure, these layers suddenly collapse inwards and produce an explosion astrophysicists call a supernova. Following the explosion, the original star, whose explosion is designated SN 1054 in this case, collapsed into a rapidly spinning neutron star, also known as a pulsar, roughly the size of Manhattan, New York. The pulsar is situated at the center of the nebula and ejects two beams of radiation that, as the pulsar rotates, make it appear as if the object is pulsing 30 times per second.

    Detailed studies of the Crab Nebula have been conducted primarily with the Hubble Space Telescope. Hubble spent three months capturing 24 images that were assembled into a colorful mosaic resembling not what is visible to human eyes, but rather a kind of paint-by-number image in which each color maps to a particular element. Traces of hydrogen, neutral oxygen, doubly ionized oxygen, and sulfur have been detected across multiple wavelengths in the expanding remnant, now some six to eleven light-years wide.

    It was not until 1942 that the Crab Nebula was officially found to be related to the recorded supernova explosion of 1054. This identification was made jointly by Professor J. J. L. Duyvendak of Leiden University and the astronomers N. U. Mayall and J. H. Oort. Due to its long history of rediscovery and inherent beauty, the Crab Nebula remains one of the most studied celestial objects today and continues to provide valuable insight into astrophysical processes.

    Written by Amber Elinsky

    REFERENCES
    Hester, J. Jeff. "The Crab Nebula: An Astrophysical Chimera." Annual Review of Astronomy and Astrophysics 46 (2008): 127-155. https://doi.org/10.1146/annurev.astro.45.051806.110608
    Hester, J. and A. Loll. "Messier 1 (The Crab Nebula)." NASA. https://science.nasa.gov/mission/hubble/science/explore-the-night-sky/hubble-messier-catalog/messier-1/
    Mayall, N. U., and J. H. Oort. "Further Data Bearing on the Identification of the Crab Nebula with the Supernova of 1054 A.D. Part II. The Astronomical Aspects." Publications of the Astronomical Society of the Pacific 54, no. 318 (1942): 95-104. http://www.jstor.org/stable/40670293

  • Why blue whales don't get cancer | Scientia News

    Discussing Peto's Paradox
    Published 16/10/23; last updated 14/07/25

    Introduction: What is Peto's Paradox?

    Cancer is a disease that occurs when cells divide uncontrollably, owing to genetic and epigenetic factors. Theoretically, the more cells an organism possesses, the higher the probability should be of it developing cancer. Imagine that you have one tiny organism, a mouse, and one huge organism, an elephant. Since an elephant has more cells than a mouse, it should have a higher chance of developing cancer, right? This is where things get mysterious. In reality, animals with 1,000 times more cells than humans are not more likely to develop cancer. Notably, blue whales, the largest mammals, rarely develop cancer. Why? To understand this phenomenon, we must dive deep into Peto's paradox.

    Peto's paradox is the lack of correlation between body size and cancer risk. In other words, the number of cells you possess does not dictate how likely you are to develop cancer. Furthermore, research has shown that body mass and life expectancy are unlikely to impact the risk of death from cancer (see figure 1).
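    The intuition behind the paradox can be made explicit with a toy calculation that is not given in the article: if each cell division independently carried a small probability p of initiating a cancer, then an animal with N susceptible cells, each undergoing d divisions over its lifetime, would naively face a risk of roughly

        P(\text{cancer}) \approx 1 - (1 - p)^{N d}

    which climbs steeply as N grows, so a whale with vastly more cells than a mouse "should" have a far higher lifetime risk. Peto's paradox is the observation that real cancer rates do not follow this scaling.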
    Peto's Paradox: Protective Mechanisms

    Mutations, otherwise known as changes or alterations in the deoxyribonucleic acid (DNA) sequence, play a role in cancer and ageing. Research scientists have analysed mutations in the intestines of several mammalian species, including mice, monkeys, cats, dogs, humans, giraffes, tigers and lions. Their results reveal that these mutations mostly come from processes that occur inside the body, such as chemicals causing changes in DNA. These processes were similar in all the animals studied, with slight differences. Interestingly, animals with longer lifespans were found to accumulate fewer mutations in their cells each year (figure 2). These findings suggest that the rate of mutation is associated with how long an animal lives and might have something to do with why animals age. Furthermore, even though these animals have very different lifespans and sizes, the number of mutations in their cells at the end of their lives (the end-of-life mutation burden) was not significantly different.

    Since animals with a larger size or longer lifespan have a larger number of cells (and hence more DNA) that could undergo mutation, and a longer period of exposure to mutations, how is it possible that they do not have a higher cancer burden? Evolution has led to the formation of mechanisms in organisms that suppress the development of cancerous cells. Animals possessing 1,000 times as many cells as humans do not display a higher susceptibility to cancer, indicating that natural mechanisms can suppress cancer roughly 1,000 times more efficiently than they operate in human cells. Does this mean larger animals have more efficient protective mechanisms against cancer?

    A tumour is an abnormal lump formed by cells that grow and multiply uncontrollably. A tumour suppressor gene acts like a bodyguard in your cells, helping prevent the uncontrolled division of cells that could form tumours. Previous analyses have suggested that the addition of one or two extra tumour suppressor genes would be sufficient to reduce the cancer risk of a whale to that of a human. However, the evidence does not suggest that the number of tumour suppressor genes correlates with increasing body mass and longevity. Although a study by Caulin et al. identified biomarkers in large animals that may explain Peto's paradox, more experiments need to be conducted to confirm the biological mechanisms involved. Just over a month ago, an investigation of existing evidence on such mechanisms produced a list of factors that may contribute to Peto's paradox, including replicative immortality, cell senescence, genome instability and mutations, proliferative signalling, growth suppression evasion and cell resistance to death. As far as we know, different strategies have evolved to prevent cancer in species with larger sizes or longer lifespans. However, more studies must be conducted in the future in order to truly explain Peto's paradox.

    Peto's Paradox: Other Theories

    Several theories attempt to explain Peto's paradox. One proposes that large organisms have a lower basal metabolic rate, producing fewer reactive oxygen species. This means that cells in larger organisms incur less oxidative damage, leading to a lower mutation rate and a lower risk of developing cancer. Another popular theory is the formation of hypertumours. As cells divide uncontrollably in a tumour, "cheaters" can emerge. These "cheaters", known as hypertumours, are cells which grow on and feed off their original tumour, ultimately damaging or destroying it. In large organisms, tumours need more time to reach a lethal size, so hypertumours have more time to evolve and destroy the original tumours. Hence, in large organisms, cancer may be more common but less lethal.

    Clinical Implications

    Curing cancer has posed significant challenges. Consequently, the focus of cancer treatment has shifted towards cancer prevention. Extensive research is currently underway to investigate the behaviour and response of cancer cells to the treatment process. This is done through a multifaceted approach: investigating the tumour microenvironment and diagnostic or prognostic biomarkers. Going forward, a deeper understanding of these fields enables the development of prognostic models as well as targeted treatment methods. One exciting example involves the tumour suppressor gene TP53, which plays a role in making elephant cells more responsive to DNA damage and in triggering apoptosis through the TP53 signalling pathway. These findings imply that having more copies of TP53 may have directly contributed to the evolution of extremely large body sizes in elephants, helping resolve Peto's paradox. Notably, there are 20 copies of the TP53 gene in elephants, but only one copy in humans (see figure 3). Through more robust studies and translational medicine, it would be fascinating to see how such discoveries could be applied to human medicine (figure 4).

    Conclusion

    The complete mechanism by which evolution has enabled organisms that are larger in size, or have longer lifespans, than humans to keep cancer at bay is still a mystery. There is a multitude of hypotheses that need to be extensively investigated with large-scale experiments. By unravelling the mysteries of Peto's paradox, these studies could provide invaluable insights into cancer resistance and potentially transform cancer prevention strategies for humans.

    Written by Joecelyn Kirani Tan
    Related articles: Biochemistry of cancer / Orcinus orca (killer whale) / Canine friends and cancer
