
Search Index

331 results found

  • Delving into the world of chimeras | Scientia News

    An exploration of this genetic concept

    Last updated: 09/07/25, 14:03 | Published: 03/02/24, 11:13

    The term chimera has been borrowed from Greek mythology, transcending ancient tales to become a captivating concept within the fields of biology and genetics. In mythology, the chimera was a monstrous hybrid creature. In the biological context, however, a chimera is an organism with cells derived from two or more zygotes. While instances of natural chimerism exist within humans, researchers are pushing the boundaries of genetics via the intentional creation of chimeras, consequently sparking debates and breakthroughs in fields spanning from medicine to agriculture.

    In theory, every cell in the body should share an identical genome; chimeras challenge this notion. For example, the fusion of non-identical twin embryos in the womb is one way chimeras can emerge. Visible cues, such as heterochromia or patches of varied skin tone, may provide subtle hints of chimerism, but affected individuals often show no overt signs, making its prevalence uncertain. In cases where male and female cells coexist, abnormalities in the reproductive organs may occur.

    Advancements in genetic engineering and CRISPR genome editing have also allowed the artificial creation of chimeras, which may aid medical research and treatments. In 2021, the first human-monkey chimera embryo was created in China to investigate ways of using animals to grow human organs for transplants. The organs could be genetically matched by taking the recipient's cells and reprogramming them into stem cells. However, creating a chimera can be challenging and inefficient, as researchers from the Salk Institute in California found when they tried to grow the first embryos containing cells from both humans and pigs: of 2,075 implanted embryos, only 186 developed up to the 28-day time limit for the project.

    Chimeras are not exclusive to the animal kingdom; plants exhibit this genetic complexity as well. The first non-fictional chimera, the "Bizzaria", discovered by a Florentine gardener in the seventeenth century, arose from the graft junction between sour orange and citron. Initially thought to be an asexual hybrid formed from cellular fusion, later analyses revealed it to be a chimera, a mix of cells from both donors. This discovery in the early twentieth century marked a turning point, shaping our understanding of chimeras as distinct biological phenomena. Chimerism is a common cause of variegation, in which parts of a leaf appear green and other parts white. The white or yellow portions lack the green pigment chlorophyll, which can be traced to layers in the meristem (areas of active cell division at the root and shoot tips) that are either genetically capable or incapable of making chlorophyll.

    As we conclude this exploration into the world of chimeras, from the mythological realm to the scientific frontier, it is evident that these entities continue to mystify and inspire, broadening our understanding of genetics, development, and the interconnectedness of organisms. Whether natural wonders or products of intentional creation, chimeras beckon further exploration, promising a deeper comprehension of the fundamental principles that govern the tapestry of life.

    Written by Maya El Toukhy

    Related article: Micro-chimerism and George Floyd's death

  • Dark Energy Spectroscopic Instrument (DESI) | Scientia News

    A glimpse into the early universe

    Last updated: 05/02/25, 16:21 | Published: 08/07/23, 13:11

    June 2023 marked the early release of data from the Dark Energy Spectroscopic Instrument (DESI). This instrument will study the nature of Dark Energy, an elusive addition to our cosmological equations that is thought to explain the accelerating expansion of the Universe. Current models estimate that Dark Energy comprises 68% of the total mass and energy of the universe. It is distinct from matter and radiation in that, as space expands, its energy density remains constant rather than diluting. Imagine your favourite concentrated juice drink tasting the same regardless of how much water you add!

    DESI will investigate the large-scale structure of the Universe, obtaining spectra of around 40 million galaxies and using their redshift to create 3-D distance maps. (Redshift is the phenomenon wherein the light from objects moving away from us is stretched to longer and redder wavelengths.) The five-year observation effort has aptly been dubbed an experiment in "cosmic cartography". The revolutionary engineering behind the instrument enables the measurement of light from more than 100,000 galaxies in a single night: 5,000 optical fibres, each connected to a robotic positioner programmed to aim at galaxies from a specified target list. The survey is conducted on the 4-metre Mayall Telescope at the Kitt Peak National Observatory in Arizona. Another staggering feature is that the eventual sample size will outstrip the 20-year Sloan Digital Sky Survey by a factor of 10 in extra-galactic targets. The early release contains 80 Terabytes of data, representing 2% of the total dataset that should be available in 2026. See Figures 1 and 2.
    In 2005, the Sloan Digital Sky Survey found a signal that DESI will validate and make more precise: that of Baryonic Acoustic Oscillations (BAO). In the incredibly early universe, protons and neutrons (known as baryons) existed in a hot, dense plasma with electrons. Photons were trapped in this plasma due to the extremely high probability of colliding with an electron; the universe was opaque. Only when the universe had cooled sufficiently for protons and electrons to form neutral hydrogen atoms, an epoch known as recombination, did photons decouple from matter. The Cosmic Microwave Background consists of these photons, released after recombination.

    Before photons decoupled, the gravitational and high-pressure interactions in the plasma produced oscillations that radiated spherically outward from overdense regions, causing photons and baryons to travel through space together. When the universe cooled and photons decoupled, the baryonic matter carried by these oscillations became essentially frozen in space, while the photons were free to stream through the now-transparent universe. This provided a so-called standard ruler: the distance these baryons had travelled as an acoustic oscillation prior to recombination. Linking this back to Dark Energy requires one important detail: the radius of the spherical shell of baryons is tied to the expansion rate of the universe. As Dark Energy has propelled the Universe to expand, this standard ruler has expanded with it. See Figure 3.

    DESI's 3-D map of galaxies will provide a much clearer picture of the universe's large-scale structure, which is our best hope of finding the imprint of BAO. DESI will show (and has already shown) that there is an overabundance of galaxy pairs separated by a distance equivalent to the length of the standard ruler.
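The relationship between redshift and the stretching of the standard ruler can be made concrete with a few lines of arithmetic. This minimal Python sketch uses an illustrative wavelength pair, not a DESI measurement; the ~490 million light-year figure is the present-day BAO scale discussed in this article:

```python
# Redshift z: the fractional stretching of light's wavelength by cosmic expansion.
def redshift(wavelength_observed, wavelength_emitted):
    return (wavelength_observed - wavelength_emitted) / wavelength_emitted

# Illustrative example: light emitted at 700 nm and observed at 1400 nm
# has been stretched to twice its wavelength, so z = 1.
z = redshift(1400.0, 700.0)
print(z)  # 1.0

# The BAO standard ruler spans ~490 million light-years today. Lengths
# stretch with the expansion, so at redshift z the same ruler had a
# proper (physical) size smaller by a factor of (1 + z).
ruler_today_mly = 490.0
ruler_then_mly = ruler_today_mly / (1 + z)
print(ruler_then_mly)  # 245.0
```

Measuring this characteristic galaxy separation at many different redshifts is what lets DESI trace how the expansion, and hence Dark Energy, has behaved over cosmic time.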
    Today, the size of this standard ruler is thought to be approximately 490 million light-years. DESI represents an impressive step into the era of precision cosmology, and it will require the efforts of hundreds of scientists to make sense of the vast quantities of data we expect by 2026.

    Written by Joseph Brennan

  • Childhood stunting in developing countries | Scientia News

    The tireless challenge

    Last updated: 10/07/25, 10:16 | Published: 09/03/24, 17:53

    Introduction

    Certain countries worldwide face numerous challenges that reduce their populations' quality of life, including hunger, poverty and rising harmful emissions. These are complicated to resolve because international cooperation is needed to tackle them effectively. They are also associated with stunting, defined by the World Health Organisation as the diminished growth and development that children experience because of undernutrition or a lack of sufficient nutrients, frequent infections, and deficient psychosocial stimulation. With this definition in mind, this article will delve into stunting and malnutrition before discussing how stunting is linked to infectious diseases and harmful emissions, and the steps forward to reduce this condition in developing countries, as shown in Figure 2.

    Undernutrition and stunting

    Stunting is one of the consequences of undernutrition, possibly due to reduced synthesis of insulin-like growth factor 1 (IGF-1) in the body despite elevated growth hormone (6). As for the determinants of undernutrition, a paper from Brazil found socioeconomic characteristics such as family income, and biological ones such as age, to be notably linked to undernutrition. Another result of undernutrition is being underweight. A systematic review from Ethiopia focusing on nutrition in children under five amalgamated 18 studies and estimated the prevalence of stunting and underweight at 42% and 33%, respectively, suggesting that undernutrition is closely linked to stunting. Additionally, a paper using data from 32 Sub-Saharan African countries found that providing maternal health insurance (MHI) reduces stunting and underweight, an effect more apparent in girls than in boys. In turn, MHI is important for supporting children's health.

    Non-nutritional factors and stunting

    As for infections, an article highlighted that children with stunted growth are vulnerable to diarrhoeal and respiratory diseases as well as malaria. Moreover, these conditions worsen undernutrition, creating a vicious cycle that manifests in growth defects. A systematic review of 80 studies found a connection between helminth infections and stunting, but the evidence supporting this hypothesis was insufficient, so additional studies are needed to investigate it further. Regarding undernutrition's impact on the immune system, newborns and small children with extreme protein deficiency have smaller thymuses and underdeveloped peripheral lymphoid organs, leading to immunological cell defects such as a reduced T-cell count. Finally, exposure to harmful emissions is a recurring problem that affects everyone, including children. Several observational studies have proposed that inhaling nitrogen oxides and particulate matter in utero could modify DNA methylation, possibly influencing foetal growth.

    Conclusion

    Reflecting on the evidence in this article, stunting in developing countries remains a pressing problem. Encouragingly, findings from UNICEF indicate that stunting gradually declined between 2000 and 2020 in children under 5 years old. Nevertheless, awareness of stunting in developing countries is critical, because awareness is the first step to tackling this health issue.

    Written by Sam Jarada

    Related articles: Childhood obesity / Depression in children / Postpartum depression in adolescent mothers

    REFERENCES

    1. Jamali D, Leigh J, Samara G, Barkemeyer R. Grand challenges in developing countries: context, relationships, and logics. Business Ethics, the Environment & Responsibility. 2021;30(S1):1–4.
    2. Maleta K. Undernutrition. Malawi Medical Journal. 2006;18(4):189–205.
    3. World Health Organization. Stunting in a nutshell. www.who.int. 2015.
    4. Beal T, Tumilowicz A, Sutrisna A, Izwardy D, Neufeld LM. A review of child stunting determinants in Indonesia. Maternal & Child Nutrition. 2018;14(4):e12617.
    5. Vaivada T, Akseer N, Akseer S, Somaskandan A, Stefopulos M, Bhutta ZA. Stunting in childhood: an overview of global burden, trends, determinants, and drivers of decline. The American Journal of Clinical Nutrition. 2020;112.
    6. Soliman A, De Sanctis V, Alaaraj N, Ahmed S, Alyafei F, Hamed N, et al. Early and long-term consequences of nutritional stunting: from childhood to adulthood. Acta Bio Medica: Atenei Parmensis. 2021;92(1).
    7. Correia LL, Silva AC e, Campos JS, Andrade FM de O, Machado MMT, Lindsay AC, et al. Prevalence and determinants of child undernutrition and stunting in semiarid region of Brazil. Revista de Saúde Pública. 2014;48:19–28.
    8. Abdulahi A, Shab-Bidar S, Rezaei S, Djafarian K. Nutritional status of under five children in Ethiopia: a systematic review and meta-analysis. Ethiopian Journal of Health Sciences. 2017;27(2):175.
    9. Kofinti RE, Koomson I, Paintsil JA, Ameyaw EK. Reducing children's malnutrition by increasing mothers' health insurance coverage: a focus on stunting and underweight across 32 sub-Saharan African countries. Economic Modelling. 2022;117:106049.
    10. Vonaesch P, Tondeur L, Breurec S, Bata P, Nguyen LBL, Frank T, et al. Factors associated with stunting in healthy children aged 5 years and less living in Bangui (RCA). PLOS ONE. 2017;12(8):e0182363.
    11. Raj E, Calvo-Urbano B, Heffernan C, Halder J, Webster JP. Systematic review to evaluate a potential association between helminth infection and physical stunting in children. Parasites & Vectors. 2022;15(1).
    12. Schaible UE, Kaufmann SHE. Malnutrition and infection: complex mechanisms and global impacts. PLoS Medicine. 2007;4(5):e115.
    13. Sinharoy SS, Clasen T, Martorell R. Air pollution and stunting: a missing link? The Lancet Global Health. 2020;8(4):e472–5.
    14. UNICEF. Malnutrition in children. UNICEF DATA. 2023.

  • Why representation in STEM matters | Scientia News

    Tackling stereotypes and equal access

    Last updated: 03/04/25, 10:38 | Published: 13/03/25, 08:00

    In collaboration with Stemettes for International Women's Month

    Representation in Science, Technology, Engineering, and Mathematics (STEM) and Science, Technology, Engineering, Art and Mathematics (STEAM) is crucial for everyone. Historically, STEM fields have been dominated by certain demographics that don't reflect the true picture of our world. Maybe you grew up seeing no (or very few) women, people of colour, or other marginalised groups mentioned in your science curriculum. This needs to change, because your voice, experiences and talents should be celebrated in any career you choose. Below, we'll list some of the top reasons why representation is so important.

    Equal access

    Why does representation matter? Because it promotes equal access! Whether in an educational or career setting, seeing someone who looks like you do something you never thought possible can be life-changing. After all, you can't be what you can't see. Showing up in your role and sharing what you do, or your STEM/STEAM interests, shows other people that these fields are accessible to everyone. Also, finding someone in a field you are in (or would like to get into) is a great way to find a mentor, build a network, and boost your knowledge. Feeling excluded or discouraged is bound to happen at some point in your career, but anyone can succeed, no matter their background.

    Innovation

    When STEM fields are equally represented, better (and more innovative) ideas come to the table. Everything you've experienced can be useful in developing solutions to STEM and STEAM problems, no matter your level of education or upbringing. A lot of STEM doesn't rely so much on your qualifications as on your problem-solving, creativity, and innovation skills. For example, if you're part of a culture that nobody else in your team has experienced, or you've lived with a disability and made adaptations for yourself, you bring a unique set of ideas that can help solve many different problems.

    Inclusion

    There are many examples of certain demographics not being included in STEM decision-making processes. For example, many face recognition apps have failed to recognise the faces of people of colour, and period trackers have been built on misinformation about cycle lengths. If more diversity were present throughout the process of creating a STEM product or service, we would see far fewer issues and much better products! Now, more than ever, your voice is important in STEM, because science and technology are shaping the future at a fast rate. With the boom in artificial intelligence (AI) technology and its impact on almost every industry, we can't afford to have models trained on unrepresentative data sets. Look at people like Katherine Johnson who, despite facing setbacks as an African American woman of her time, was pivotal in sending the Apollo 11 astronauts into space. Or, more recently, Dr Ronx, who is paving the way as a trans non-binary emergency medicine doctor.

    Tackling stereotypes

    Showing up in STEM and STEAM fields is a great way to tackle stereotypes. Many underrepresented groups are stereotyped into career paths based on old, outdated notions about what certain people should do. By showing up and talking about what you love, you show that you're no less capable than anyone else. Shout about your achievements, no matter how big or small and no matter where you are on your career journey, so that we can encourage a new idea of what STEM looks like.

    Conclusion

    If this article hasn't already given you the confidence to explore STEM and STEAM fields and all they have to offer, there are many other reasons why you're important to these fields and capable of achieving your dreams. Representation from you and others helps us create a more equitable, innovative, and inclusive future. It matters because the progress of science and society depends on the contributions of all, not a select few.

    Written by Angel Pooler

    --

    Scientia News wholeheartedly thanks Stemettes for this pertinent piece on the importance of representation in STEM. We hope you enjoyed reading this International Women's Month special! Check out their website and Zine / Futures youth board. (The Stemette Futures Youth Board is made up of volunteers aged 15-25 from the UK and Ireland who ensure the voices of girls, young women and non-binary young people are heard. They work alongside the Stemette Futures charity board to guide and lead the mission to inspire more girls, young women and non-binary young people into STEAM.)

    --

    Related articles: Sisterhood in STEM / Women leading in biomedical engineering / African-American women in cancer research

  • Proving causation: causality vs correlation | Scientia News

    Establishing causation through Randomised Controlled Trials and Instrumental Variables

    Last updated: 03/06/25, 13:43 | Published: 12/06/25, 07:00

    Does going to the hospital lead to an improvement in health? At first glance, one might assume that visiting a hospital should improve health outcomes. However, if we compare the average health status of those who go to the hospital with those who do not, we might find that hospital visitors tend to have worse health overall. This apparent contradiction arises from confounding: people typically visit hospitals because of existing health issues. Simply comparing the two groups does not tell us whether hospitals improve health or whether the underlying health conditions of patients drive the observed differences.

    A similar challenge arises when examining the relationship between police presence and crime rates. Suppose we compare two cities, one with a large police force and another with a smaller one. If the city with more police also has higher crime rates, does this mean that police cause crime? Clearly not. It is more likely that higher crime rates lead to an increased police presence. This example illustrates why distinguishing causation from correlation is crucial in data analysis: stating that two variables are correlated does not imply that one causes the other.

    First, let's clarify the distinction. Correlation refers to a relationship between two variables; it does not imply that one causes the other. Just because two events occur together does not mean that one directly influences the other. To establish causation, we need methods that separate the true effect of an intervention from other influencing factors.
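The hospital example can be made concrete with a toy simulation. In the Python sketch below (every number is invented for illustration), treatment genuinely improves health, yet a naive comparison of group averages still makes hospital visitors look worse, because only the sick seek treatment:

```python
import random

random.seed(0)

# Health scores: 0 (very ill) to 10 (perfectly healthy).
population = [random.uniform(0, 10) for _ in range(10_000)]

treated, untreated = [], []
for health in population:
    if health < 4:                    # confounder: only the sick go to hospital
        treated.append(health + 1.0)  # treatment truly improves health by +1
    else:
        untreated.append(health)

def mean(values):
    return sum(values) / len(values)

# Naive comparison: the treated group still looks far less healthy,
# even though treatment helped every one of its members.
print(mean(treated))    # around 3
print(mean(untreated))  # around 7
```

The difference in averages reflects who selects into treatment, not the effect of treatment, which is exactly why the raw hospital comparison misleads.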
    Statisticians, medical researchers and economists have devised several techniques to separate correlation from causation. In medicine, the gold standard is the Randomised Controlled Trial (RCT). Imagine a group of 100 people, each with a set of characteristics: gender, age, political views, health status, university degree, and so on. An RCT randomly assigns each individual to one of two groups, so each group of 50 should, on average, have similar ages, gender distribution, and baseline health. Researchers then examine both groups simultaneously while changing only one factor, for example instructing one group to take a specific medicine or to drink an additional cup of coffee each morning. The result is two statistically similar groups differing in only one key aspect. If the outcomes of one group change while those of the other do not, we can reasonably conclude that the intervention caused the difference.

    RCTs are excellent for examining the effectiveness of medicines, especially when one group receives a placebo, but how would we research the causation behind the police and crime example? It would be unwise, and perhaps unethical, to randomise how many police officers are present in each city, and because not all cities are the same, the conditions for an RCT would not hold. Instead, we use more complex techniques like Instrumental Variables (IV) to overcome those limitations. A famous application of IV to police levels and crime was published by Steven Levitt (1997). Levitt used the timing of mayoral and gubernatorial elections (the election of a governor) as an instrument for changes in police hiring. Around election time, mayors and governors have incentives to look "tough on crime", which can lead to politically motivated increases in police hiring before an election.
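This instrumental-variables logic can be sketched with a toy simulation. With a binary instrument (election year or not), the IV estimate reduces to the Wald estimator: the difference in average crime across the two instrument values, divided by the difference in average police numbers. Everything below is invented for illustration; it mimics the structure of Levitt's argument, not his data:

```python
import random

random.seed(1)

cities = []
for _ in range(20_000):
    crime_pressure = random.gauss(50, 5)   # unobserved confounder
    election = random.random() < 0.5       # instrument: election year?
    # Hiring responds to crime pressure (the endogeneity problem) AND to elections.
    police = 10 + 0.5 * crime_pressure + (5 if election else 0) + random.gauss(0, 1)
    # True causal effect built into the simulation: each officer prevents 0.8 crimes.
    crime = crime_pressure - 0.8 * police + random.gauss(0, 1)
    cities.append((election, police, crime))

def mean(values):
    return sum(values) / len(values)

def ols_slope(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Naive regression of crime on police: biased upward by the confounder,
# so it wrongly suggests that police INCREASE crime.
naive = ols_slope([p for _, p, _ in cities], [c for _, _, c in cities])
print(naive > 0)  # True: the raw correlation points the wrong way

# Wald (IV) estimator: compare group means across the instrument's two values.
police_on = mean([p for e, p, _ in cities if e])
police_off = mean([p for e, p, _ in cities if not e])
crime_on = mean([c for e, _, c in cities if e])
crime_off = mean([c for e, _, c in cities if not e])
iv_estimate = (crime_on - crime_off) / (police_on - police_off)
print(iv_estimate)  # close to the built-in effect of -0.8
```

The key assumption is instrument validity: election timing must shift police numbers without affecting crime through any other channel. If that exclusion restriction fails, the Wald ratio no longer recovers the causal effect.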
    Crucially, this hiring is driven not by current crime rates but by the electoral calendar. By using the timing of elections to predict increases in police numbers, we can estimate the effect of those increases on crime. Levitt found that more police officers reduce violent and property crime, with a 10% increase in police officers reducing violent crime by roughly 5%. His paper is a clever application of IV that gets around the endogeneity problem and takes correlation one step further into causation, through the use of exogenous election timing.

    These methods are not without limitations, however. IV analysis hinges on finding a valid instrument: something that affects the independent variable (e.g. police numbers) but has no direct effect on the outcome (e.g. crime) other than through that variable. Finding such instruments can be extremely challenging, and weak or invalid instruments can lead to biased or misleading results.

    Despite these challenges, careful causal inference allows researchers to understand the true drivers behind complicated relationships. In a world where influencers, media outlets, and even professionals often mistake correlation for causation, a critical understanding of these concepts is essential for navigating data and for driving impactful change in society by uncovering the true relationships behind different phenomena.

    Written by George Chant

    Related article: Correlation between HDI and mortality rate

    REFERENCE

    Steven D. Levitt (1997). "Using Electoral Cycles in Police Hiring to Estimate the Effect of Police on Crime". American Economic Review 87.3, pp. 270–290.

  • Decoding p53: the guardian against cancer | Scientia News

    Looking at p53 mutations and cancer predisposition

    Last updated: 09/07/25, 14:03 | Published: 23/11/23, 11:38

    The tumour suppressor protein p53, encoded by the TP53 gene, plays a critical role in regulating cell division and preventing the formation of tumours. Its function in maintaining genome stability is vital in inhibiting cancer development.

    Understanding p53

    Located on chromosome locus 17p13.1, TP53 encodes the p53 transcription factor [1]. Consisting of three domains, p53 can directly initiate or suppress the expression of 3,661 different genes involved in cell cycle control and DNA repair [2]. With this control, p53 can influence cell division on a massive scale. Cancer is characterised by uncontrolled cell division, which can occur due to accumulated mutations in either proto-oncogenes or tumour suppressor genes. Wild-type p53 can trigger the repair of DNA damage, including damage to proto-oncogenes, before it disrupts cell division. However, if p53 itself is mutated, its ability to direct DNA repair and control the cell cycle is lost, paving the way for cancer. Mutations in TP53 are in fact the most prevalent genetic alterations found in patients with cancer.

    The mechanisms by which mutated p53 leads to cancer are manifold. One is p53's interaction with p21. Encoded by CDKN1A, p21 is activated by p53 and prevents cell cycle progression by inhibiting the activity of cyclin-dependent kinases (CDKs). Without functional p53, p21 is not induced, CDKs remain active, and the cell cycle cannot be halted, leading directly to uncontrolled cell division and cancer.

    Clinical significance

    The importance of p53 in preventing cancer is highlighted by the fact that individuals with inherited TP53 mutations (a condition known as Li-Fraumeni syndrome, or LFS) have a significantly greater risk of developing cancer. These individuals inherit one defective TP53 allele from a parent, making them highly susceptible to losing the remaining functional allele, ultimately leading to cancer. Loss of p53 also endows cells with the ability to ignore pro-apoptotic signals, so that a cell that becomes cancerous is far less likely to undergo programmed cell death [3]. p53's interactions with the apoptosis-inducing proteins Bax and Bak are lost when it is mutated, leaving cells resistant to apoptosis.

    The R337H mutation in TP53 is an example of the founder effect at work. The founder effect refers to the loss of genetic variation when a new population descends from a small number of individuals; the descendants are much more likely to harbour genetic variants that are rare in the species as a whole. In southern Brazil, the R337H mutation in p53 is present at an unusually high frequency [4] and is thought to have been introduced by European settlers several hundred years ago. It is responsible for a widespread incidence of early-onset breast cancers, LFS, and paediatric adrenocortical tumours. Individuals with this mutation can trace their lineage back to those settlers.

    Studying p53 has enabled us to unveil its intricate web of interactions with other proteins and molecules within the cell and to unlock the secrets of cancer development and potential therapeutic strategies. By restoring or mimicking the functions of p53, we may be able to provide cancer patients with some relief from this life-changing condition.

    Written by Malintha Hewa Batage

    Related articles: Zinc finger proteins / Anti-freeze proteins

  • Allergies | Scientia News

    Deconstructing allergies: mechanisms, treatments, and prevention Facebook X (Twitter) WhatsApp LinkedIn Pinterest Copy link Allergies 08/07/25, 16:24 Last updated: Published: 13/05/24, 14:27 Deconstructing allergies: mechanisms, treatments, and prevention Modern populations have witnessed a dramatic surge in the number of people grappling with allergies, a condition that can lead to a myriad of health issues such as eczema, asthma, hives, and, in severe cases, anaphylaxis. For those who are allergic, these substances can trigger life-threatening reactions due to their abnormal immune response. Common allergens include antibiotics like penicillin, as well as animals, insects, dust, and various foods. The need for strict dietary restrictions and the constant fear of accidental encounters with allergens often plague patients and their families. Negligent business practices and mislabelled food have even led to multiple reported deaths, underscoring the gravity of allergies and their alarming rise in prevalence. The primary reason for the global increase in allergies is believed to be the lack of exposure to microorganisms during early childhood. The human microbiome, a collection of microorganisms that live in and on our bodies, is a key player in our immune system. The rise in sanitation practices is thought to reduce the diversity of the microbiome, potentially affecting immune function. This lack of exposure to infections may cause the immune system to overreact to normally harmless substances like allergens. Furthermore, there is speculation about the impact of vitamin D deficiency, which is becoming more common due to increased indoor time. Vitamin D is known to support a healthy immune response, and its deficiency could worsen allergic reactions. Immune response Allergic responses occur when specific proteins within an allergen are encountered, triggering an immune response that is typically used to fight infections. 
The allergen's proteins bind to complementary antigens on macrophage cells, causing these cells to engulf the foreign substance. Peptide fragments from the allergen are then presented on the cell surface via major histocompatibility complexes (MHCs), activating receptors on T helper cells. These activated T cells stimulate B cells to produce immunoglobulin E (IgE) antibodies against the allergen. This sensitizes the immune system to the allergen, making the individual hypersensitive. Upon re-exposure to the allergen, IgE antibodies bind to allergen peptides, activating receptors on mast cells and triggering the release of histamines into the bloodstream. Histamines cause vasodilation and increase vascular permeability, leading to inflammation and erythema. In milder cases, patients may experience itching, hives, and runny nose; however, in severe allergic reactions, intense swelling can cause airway constriction, potentially leading to respiratory compromise or even cessation. At this critical point, conventional antihistamine therapy may not be enough, necessitating the immediate use of an EpiPen to alleviate symptoms and prevent further deterioration. EpiPens administer a dose of epinephrine, also known as adrenaline, directly into the bloodstream when an individual experiences anaphylactic shock. Anaphylactic shock is typically characterised by breathing difficulties. The primary function of the EpiPen is to relax the muscles in the airway, facilitating easier breathing. Additionally, they counteract the decrease in blood pressure associated with anaphylaxis by narrowing the blood vessels, which helps prevent symptoms such as weakness or fainting. EpiPens are the primary treatment for severe allergic reactions leading to anaphylaxis and have been proven effective. However, the reliance on EpiPens underscores the necessity for additional preventative measures for individuals with allergies before a reaction occurs. 
    Preventative treatment

    Young individuals may have a genetic predisposition to developing allergies, a condition referred to as atopy. Many atopic individuals develop multiple hypersensitivities during childhood, but some may outgrow these allergies by adulthood. For high-risk atopic children, however, preventative measures may offer a promising way to reduce the risk of developing severe allergies. Clinical trials conducted on atopic infants explored immunotherapy treatments involving continuous exposure to small doses of peanut allergens to prevent the onset of a full-blown allergy. Initially, skin prick tests for peanut allergens were performed, and only children exhibiting negative or mild reactions were selected for the trial; those with severe reactions were excluded due to the high risk of anaphylactic shock with continued exposure. The remaining participants were randomly assigned to either consume peanuts or follow a peanut-free diet. Monitoring these infants as they aged revealed that continuous exposure to peanuts reduced the prevalence of peanut allergies by the age of 5: only 3% of atopic children exposed to peanuts developed an allergy, compared with 17% of those in the peanut-free group.

    The rise in severe allergies poses a growing concern for global health. Once an atopic individual develops an allergy, mitigating their hypersensitivity can be challenging. Current approaches often involve waiting for children to outgrow their allergies, overlooking the ongoing challenges faced by adults who remain highly sensitive to allergens. Implementing preventative measures, such as early exposure through immunotherapy, could enhance the quality of life of future generations and prevent sudden deaths in at-risk individuals.

    In conclusion, the dramatic surge in the prevalence of allergies in modern populations requires more attention from researchers and healthcare providers.
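    The trial figures quoted above imply a large effect size. As a back-of-the-envelope sketch (using the rounded 3% and 17% prevalences, not the study's own statistical analysis), the absolute and relative risk reductions work out as follows:

```python
# Back-of-the-envelope reworking of the trial figures quoted above
# (rounded 3% vs 17% prevalences); a sketch, not the study's analysis.

exposed_rate = 0.03    # allergy prevalence with early peanut exposure
avoidance_rate = 0.17  # allergy prevalence on a peanut-free diet

absolute_risk_reduction = avoidance_rate - exposed_rate
relative_risk_reduction = absolute_risk_reduction / avoidance_rate

print(f"Absolute risk reduction: {absolute_risk_reduction:.0%}")   # 14%
print(f"Relative risk reduction: {relative_risk_reduction:.0%}")   # 82%
```

    In other words, on these rounded figures early exposure was associated with roughly an 82% relative reduction in peanut allergy by age 5.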
    Living with allergies can bring many complexities into someone's life even before they potentially have a serious reaction. Current treatments focus on post-reaction emergency care; preventative strategies, however, remain a pressing need. With cases of allergies predicted to rise further, research into this global health issue will become increasingly important. There are already promising results from early trials of immunotherapy treatments, and with further research and implementation these treatments could improve the quality of life of future generations.

    Written by Charlotte Jones

    Related article: Mechanisms of pathogen evasion

  • Story of the atom | Scientia News

    From the Big Bang to the current model

    Story of the atom 11/02/25, 12:23 Last updated: Published: 20/04/24, 11:16

    The Greek philosopher Democritus proposed the idea of the atom around 440 B.C. He first explained the atom using the example of a stone: when a stone is split in half, it becomes two separate stones, but if the cutting were continued, eventually a piece would be too small to cut, i.e. indivisible. Since then, many scientists have adopted, discarded, or published their own theories about the nature, structure, and size of atoms. However, the most widely accepted model, and still the basic model used to study atoms, is Rutherford's. Rutherford published his theory of the atom suggesting that it had electrons orbiting a positively charged nucleus. This model followed a series of experiments in which alpha particles were fired at thin gold sheets. Most of the alpha particles passed through with little disturbance, but a tiny fraction was scattered at extreme angles to their initial direction of motion. From this, Rutherford estimated the size of the gold atom's nucleus and found that it was at least 10,000 times smaller than the atom as a whole, with a large portion of the atom made up of empty space. This theory paved the way for further atomic models by various other scientists. (Figure 1)

    Researchers have discovered unidentified molecules in space which are believed to be the precursors of all chemistry in the universe. The earliest "atoms" in the cosmos were actually nuclei without any electrons. The universe was incredibly hot and dense in the earliest seconds following the Big Bang. The quarks and electrons that make up matter first appeared when the cosmos cooled and the conditions became right for them to form.
    A few millionths of a second later, quarks aggregated to form protons and neutrons, and within minutes these protons and neutrons joined to form nuclei. (Figure 2) As the cosmos cooled and expanded, things started to happen more slowly. The first atoms formed around 380,000 years after the Big Bang, when electrons became locked into orbits around nuclei. These were mostly hydrogen and helium, which are still the most abundant elements in the universe. Even now, the simplest nucleus, that of ordinary hydrogen, is only a single, unadorned proton. Other configurations of protons and neutrons also developed, but since the number of protons in an atom determines its identity, all of these other conglomerations were essentially just variants of hydrogen and helium, along with traces of lithium.

    To say that this is an exciting time for astrochemistry is an understatement. Laboratory simulations of interstellar environments are demonstrating the formation mechanisms of amino acids and nucleobases. As we find answers to these known problems, even more questions arise. Hopefully, a more thorough understanding of these chemical processes will enable us to make more precise deductions about the general history of the universe and astrophysics.

    Written by Navnidhi Sharma

    REFERENCES

    CERN (n.d.). The early universe. CERN. Available at: https://home.cern/science/physics/earlyuniverse
    Compound Interest (2016). The History of the Atom – Theories and Models. Compound Interest. Available at: https://www.compoundchem.com/2016/10/13/atomicmodels/
    Fortenberry, R.C. (2020). The First Molecule in the Universe. Scientific American. doi: https://doi.org/10.1038/scientificamerican0220-58
    Sharp, T. (2017). What is an Atom? Live Science. Available at: https://www.livescience.com/37206-atom-definition.html

  • A comprehensive guide to the Relative Strength Index (RSI) | Scientia News

    The maths behind trading

    A comprehensive guide to the Relative Strength Index (RSI) 08/07/25, 14:37 Last updated: Published: 27/12/23, 11:02

    In this piece, we will delve into the essential concepts surrounding the Relative Strength Index (RSI). The RSI serves as a gauge of the strength of price momentum and offers insights into whether a particular stock is in an overbought or oversold condition. Throughout this exploration, we will demystify the underlying calculations of the RSI, explore its significance in evaluating market momentum, and unveil its practical applications for traders. From discerning opportune moments to buy or sell based on RSI values to identifying potential shifts in market trends, we will unravel the mathematical intricacies that underpin this critical trading indicator. Please note that none of the content below should be used as financial advice; it is for educational purposes only. This article does not recommend that investors base their decisions on technical analysis alone.

    As the name indicates, the RSI measures the strength of a stock's momentum and can show when a stock may be considered overbought or oversold, allowing us to make a more informed decision about whether to enter a position or hold off a little longer. It's all very well and good to know that 'you should buy when RSI is under 30 and sell when RSI is over 70', but in this article I will attempt to explain why this is the case and what the RSI is really measuring.

    The calculations

    The relative strength index is an index of the relative strength of momentum in a market. This means that its values range from 0 to 100 and are simply a normalised relative strength. But what is the relative strength of momentum?
    Relative strength is the ratio of higher closes to lower closes. Over a fixed period of usually 14 days (but sometimes 21), we measure how much the price of the stock increased on each trading day and take the mean to get the average gain; we then do the same for the down days to find the average loss:

    Initial Average Gain = Sum of gains over the past 14 days / 14
    Initial Average Loss = Sum of losses over the past 14 days / 14

    Subsequent average gains and losses are then calculated with smoothing:

    Average Gain = [(Previous Avg. Gain * 13) + Current Day's Gain] / 14
    Average Loss = [(Previous Avg. Loss * 13) + Current Day's Loss] / 14

    With this, we can now calculate relative strength:

    Relative Strength (RS) = Average Gain / Average Loss

    If our stock gained more than it lost over the past 14 days, our RS value will be >1; if it lost more than it gained, our RS value will be <1. Relative strength tells us whether buyers or sellers are in control of the price: if buyers are in control, the average gain is greater than the average loss, so the relative strength is greater than 1. In a bearish market, if this begins to happen, we can say that buyers' momentum is strengthening. We can normalise relative strength into an index running from 0 to 100 using the following equation:

    RSI = 100 - 100 / (1 + RS)

    Traders then use the RSI in combination with other techniques to assess whether to buy or sell. When a market is ranging, meaning the price is bouncing between support and resistance (the same highs and lows for a period), we can use the RSI to see when we may be entering a trend. When the RSI approaches 70, it is an indication that the stock is being overbought, and in a ranging market a correction is then likely: the price will fall and the RSI will drift back towards 50. The opposite is likely to happen when the RSI dips to 30. In both cases, price action is deemed extreme and a correction is likely.
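    The steps above can be sketched as a short Python function (a minimal illustration of the 14-day calculation with the smoothed averages; the function name `rsi` is ours, and any price series you pass in is purely illustrative):

```python
# Minimal sketch of the 14-day RSI calculation described above,
# using the smoothed averaging formulas from the article.

def rsi(closes, period=14):
    """Return the RSI (0-100) for the final day of a list of closing prices."""
    if len(closes) < period + 1:
        raise ValueError("need at least period + 1 closing prices")

    # Day-to-day changes, split into gains (up days) and losses (down days)
    changes = [closes[i] - closes[i - 1] for i in range(1, len(closes))]
    gains = [max(c, 0.0) for c in changes]
    losses = [max(-c, 0.0) for c in changes]

    # Initial averages: simple means over the first `period` changes
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period

    # Subsequent days: Avg = (Previous Avg * 13 + Current) / 14
    for gain, loss in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + gain) / period
        avg_loss = (avg_loss * (period - 1) + loss) / period

    if avg_loss == 0:                # only gains: maximal momentum
        return 100.0
    rs = avg_gain / avg_loss         # Relative Strength
    return 100 - 100 / (1 + rs)     # normalise into the 0-100 index
```

    With steadily rising closes the function returns 100 (every change is a gain, so the average loss is zero), and with steadily falling closes it returns 0; mixed series land somewhere in between.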
    It should be noted, however, that this type of behaviour is only likely in assets exhibiting mean-reversion characteristics. In a trending market, the RSI can be used to indicate a possible change in momentum. If prices are falling and the RSI reaches a low, and then a few days later it reaches a higher low (i.e. a low that is not as low as the first), it indicates a possible change in momentum; we say there is a bullish divergence. Divergences are rare when a stock is in a long-term trend but are nonetheless a powerful indicator.

    In conclusion, the relative strength index aims to describe changes in the momentum of price action by analysing and comparing previous days' gains and losses. From this, a value is generated, and at the extremes a change in momentum may take place. The RSI is not intended to be predictive but is very helpful for confirming trends indicated by other techniques.

    Written by George Chant

  • Iron deficiency anaemia | Scientia News

    A type of anaemia

    Iron deficiency anaemia 10/07/25, 10:20 Last updated: Published: 27/06/23, 17:10

    This article is no. 2 of the anaemia series. Next article: anaemia of chronic disease. Previous article: Anaemia.

    Aetiology

    Iron deficiency anaemia (IDA) is most frequent in children, due to rapid growth (adolescence) and poor diets (infants), and in women, due to pregnancy in those of reproductive age and underlying conditions in those who are peri- or post-menopausal. Anaemia typically presents, in around 50% of cases, with headache, lethargy, and pallor, depending on severity. Less common features include organomegaly and pica, which occurs in patients with zinc and iron deficiency and is defined as the eating of things with little to no nutritional value.

    Pathophysiology

    Iron is primarily sourced through the diet, as haem iron (Fe2+) and non-haem iron (Fe3+). Fe2+ comes from meat, fish, and other animal-based products and can be absorbed directly into the enterocyte via haem carrier protein 1 (HCP1). Fe3+ is less easily absorbed and is mostly found in plant-based products; it must be reduced and transported across the duodenum by the enzyme duodenal cytochrome B (DcytB) and the divalent metal transporter 1 (DMT1), respectively.

    Diagnosis

    As with any anaemia, the first test to run is a full blood count. In suspected cases of anaemia, the haemoglobin (Hb) level would be lower than 130 g/L in males and 120 g/L in females. The mean cell volume (MCV) is a starting point for pinpointing the type of anaemia; for microcytic anaemias you would expect to see an MCV < 80 fL. Iron studies are best for diagnosing anaemias, and in IDA you would expect most of the results to be low. A patient with IDA has little to no available iron, so the body halts its mechanisms for storing iron. As ferritin is directly related to iron storage, a low ferritin alone can be diagnostic of IDA.
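    As a rough illustration only (not clinical guidance), the cut-offs quoted above can be collected into a small helper function; the name `suggests_ida` and the boolean `ferritin_low` flag are invented for this sketch:

```python
# Illustrative triage of the cut-offs quoted above (Hb in g/L, MCV in fL).
# `suggests_ida` and `ferritin_low` are invented for this sketch;
# this is not clinical guidance.

def suggests_ida(hb, mcv, ferritin_low, sex):
    """Flag a pattern consistent with iron deficiency anaemia."""
    hb_cutoff = 130 if sex == "male" else 120  # Hb thresholds from the text
    anaemic = hb < hb_cutoff
    microcytic = mcv < 80                      # microcytic range from the text
    # Low ferritin alone can point to IDA; here we require the full
    # microcytic-anaemia picture as well.
    return anaemic and microcytic and ferritin_low

print(suggests_ida(hb=105, mcv=72, ferritin_low=True, sex="female"))  # True
print(suggests_ida(hb=150, mcv=90, ferritin_low=False, sex="male"))   # False
```

    In practice such thresholds are only a starting point; the iron studies and film findings discussed in this article are what distinguish IDA from other microcytic anaemias.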
    Total iron-binding capacity (TIBC) would be expected to be raised: transferrin transports iron throughout the body, and the higher it is, the more iron it is capable of binding. Elliptocytes are elongated RBCs, often described as pencil-like in structure, and are regularly seen in IDA and other anaemias. Typically, one would also see hypochromic RBCs, which contain less Hb than normal cells; Hb is what gives red cells their pigment. It is not uncommon to see other changes in RBCs, such as target cells, named for their bullseye appearance; target cells are frequently seen in cases with blood loss.

    Summary

    IDA is the most frequent anaemia, affecting patients of all age ranges, and usually presents with lethargy and headaches. Dietary iron from animal derivatives is the most efficient source of iron uptake. Diagnosis of IDA is through iron studies and red cell morphological investigations, alongside clinical presentation, to rule out other causes.

    Written by Lauren Kelly
