Search Index
- Alzheimer's disease | Scientia News
Alzheimer's disease
Last updated: 12/12/24, 12:16
The mechanisms of the disease

Introduction to Alzheimer’s disease

Alzheimer’s disease is a neurodegenerative disease that results in cognitive decline and dementia with increasing age, with environmental and genetic factors contributing to its onset. Scientists believe this is the result of protein biomarkers that build up in the brain and accumulate within neurones. As of 2020, 55 million people live with dementia, with Alzheimer’s disease a leading cause. It is therefore crucial that we develop efficacious treatments with minimal adverse effects. A new drug called lecanemab may be the key to a new era of Alzheimer’s treatment…

The disease is most common in people over 65, with 1 in 14 of this age group affected in the UK, so there is a huge emphasis on defining the disorder and developing drug treatments. The condition causes difficulty with memory, planning and decision-making, and can result in co-morbidities such as depression or personality change. This short article will explain the pathology of the disorder and the genetic predispositions for its onset. It will also explore future avenues for treatment, such as the drug lecanemab, which may provide “a new era for Alzheimer’s disease”.

Pathology and molecular aspects

The neurodegeneration seen in Alzheimer’s has so far been associated with protein depositions in the brain, such as the amyloid precursor protein (APP) and Tau tangles. This has been deduced from PET scans and post-mortem studies. APP, encoded on chromosome 21, is responsible for synapse formation and signalling. It is cleaved into β-amyloid peptides by enzymes called secretases, but overexpression of both these factors can be neurotoxic (figure 1). The result is an accumulation of protein aggregates called β-amyloid plaques in neurons, impairing their survival.
This deposition starts in the temporo-basal and fronto-medial areas of the brain and spreads to the neocortex and sensory-motor cortex. Thus, many pathways are affected, resulting in the characteristic cognitive decline. Tau proteins support nerve cells structurally and can be phosphorylated at various regions, changing the interactions they have with surrounding cellular components. Hyperphosphorylation of these proteins results in Tau pathology in the form of tau oligomers (short peptides) that are toxic to neurons. These enter the limbic regions and neocortex. It is not clearly defined which protein aggregate precedes the other; however, the amyloid cascade hypothesis suggests that β-amyloid plaque pathology comes first. It is speculated that β-amyloid accumulation activates the brain’s immune cells, the microglia, which then promote the hyperphosphorylation of Tau. Sometimes there is a large release of pro-inflammatory cytokines, known as a cytokine storm, which promotes neuroinflammation. This is common amongst older individuals, due to a “worn-out” immune system, and may in part explain Alzheimer’s disease.

Genetic component to Alzheimer’s disease

There is strong evidence, obtained through whole-genome sequencing (WGS) studies, that there is a genetic element to the disease. One gene implicated is the apolipoprotein E (APOE) gene, responsible for β-amyloid clearance and metabolism. Some alleles of this gene are associated with faulty clearance, leading to the characteristic β-amyloid build-up. In the body, proteins are made continually according to need; dysregulation of this recycling process can be catastrophic for the cells involved. Another gene is PSEN1, which codes for presenilin 1, part of a secretase enzyme complex. As mentioned, the secretase enzymes are responsible for the cleavage of APP, the precursor of β-amyloid.
Variants of this gene have been associated with early-onset Alzheimer’s disease, as the altered APP processing produces a longer form of the β-amyloid peptide. The genetic aspects of Alzheimer’s disease are not limited to these genes; in actuality, one gene can carry an assortment of mutations that result in a faulty protein. Understanding the genetic aspects may provide avenues for gene therapy in the future.

Treatment

Understanding the point at which the “system goes wrong” is crucial for directing treatment. For example, we may use secretase inhibitors to reduce the rate of plaque formation; an example is an inhibitor of the β-secretase BACE1. This drug type needs to be more selective for its target, as it has been found to produce unwanted adverse effects. A more selective approach may be to harness the patient’s immune system with monoclonal antibodies (mAbs). This means designing an antibody that recognises a specific component, such as the β-amyloid plaque, so it may bind and then encourage immune cells to target the plaque (figure 3). An example is the mAb aducanumab, which targets β-amyloid fibrils and oligomers. The Emerge study demonstrated a decrease in amyloid by the end of the 78-week trial. In June 2021, aducanumab received FDA approval, but this is controversial, as there are claims it brings no clinical benefit to the patient.

The future of Alzheimer’s disease

Of note, drug development and approval is a slow process, and there must be a funding source to carry out such plans. Thus, particularly in Alzheimer’s, it is important to educate the public and funding bodies so that they supply financial support to the process. Moreover, of the many hits (potential drug candidates), most fail at phase III clinical trials. Despite this, another mAb, lecanemab, was recently approved by the FDA (2023), owing to its ability to slow cognitive decline by 27% in early Alzheimer’s disease.
The Clarity AD study of lecanemab found that the drug not only benefited memory and thinking but also allowed better performance of daily tasks. The drug was administered on a double-blind basis, meaning a patient received either the drug or a placebo. This study offers hope for those suffering from the disease. Drugs that target the Tau tangles have so far not been successful in clinical trials. However, the future of Alzheimer’s treatment may lie in combination therapy directed at both Tau protein and β-amyloid. Washington University’s neurology department has launched a trial known as Tau NextGen, in which participants will receive both lecanemab and a tau-reducing antibody.

Conclusion

This article provides a summary of what we know about Alzheimer’s disease and the potential treatments of the future. Overall, the future of Alzheimer’s treatment lies in combination therapy targeting known biomarkers of the disease.

Written by Holly Kitley

Related articles: CRISPR-Cas9 as Alzheimer's treatment / Hallmarks of Alzheimer's
- Herpes vs devastating skin disease | Scientia News
Herpes vs devastating skin disease
Last updated: 09/01/25, 12:07
From foe to ally

Have you ever plucked loose skin near your nail, ripping off a tiny strip of good skin too? Albeit very small, that wound can be painful. Now imagine that it is not just a little strip that peels off, but an entire sheet. And it does not detach only when pulled, but at the slightest touch. Even a hug opens wounds; even a caress brings you pain. This is life with recessive dystrophic epidermolysis bullosa (RDEB), the most severe form of dystrophic epidermolysis bullosa (DEB).

Herpes becomes a therapy

DEB is a rare genetic disease of the skin that affects 3 to 10 individuals per million people (prevalence is hard to nail down for rare diseases). A cure is still far off, but there is good news for patients. Last May, the US FDA (Food and Drug Administration) approved Vyjuvek (beremagene geperpavec) to treat skin wounds in DEB. Clinical studies showed that it speeds up healing and reduces pain. Vyjuvek is the first gene therapy for DEB. It is manufactured by Krystal Biotech and – get this – it is a tweaked version of the herpes virus. Yes, you got that right: the virus causing blisters and scabs has become the primary ally against a devastating skin disease. This approval is a milestone for gene therapies, as Vyjuvek is the first gene therapy:
- based on the herpes virus,
- applied to the skin as a gel,
- approved for repeated use.
This article describes how DEB, and especially RDEB, affects the skin and wreaks havoc on the body; the following article will explain how Vyjuvek works.

DEB disrupts skin integrity

We carry around six to nine pounds of skin. Yet we often forget its importance: it stops germs and UV rays, softens blows, regulates body temperature and makes us sensitive to touch. Diseases that compromise the skin are therefore devastating.
These essential functions rely on the organisation of the skin in three layers: epidermis, dermis and hypodermis (Figure 1). Typically, a Velcro strap of the protein collagen VII firmly anchors the epidermis to the dermis. The gene COL7A1 contains the instructions on how to produce collagen VII. In DEB, mutations in COL7A1 result in the production of a faulty collagen VII. As the Velcro strap is weakened, the epidermis becomes loosely attached to the dermis. Mutations in one copy of COL7A1 cause the dominant form of the disease (DDEB); mutations in both copies cause RDEB. With one copy of the gene still functional, the skin still produces some collagen VII; when both copies are mutated, little to no collagen VII is left. Therefore, RDEB is more severe than DDEB. In people with RDEB, the skin can slide off at the slightest touch, and even gentle rubs can cause blisters and tears (Figure 2).

Living with RDEB

Life with RDEB is gruelling, and life expectancy does not exceed 30 years. Wounds are very painful, slow to heal and easily infected. The risk of developing an aggressive skin cancer is higher. The constant scarring can cause limb deformities. In addition, blisters can appear in the mouth, oesophagus, eyes and other organs. There is no cure for DEB for now; treatments can only improve quality of life. Careful dressing of wounds promotes healing and prevents infections. Painkillers are used to ease pain. Special diets are required. And, to no one's surprise, physical activities must be avoided.

Treating RDEB

Over the past decade, advances in cell and genetic engineering have sparked the search for a cure. Scientists have explored two main alternatives to restore the production of collagen VII in the skin. The first approach is based on transferring skin cells able to produce collagen VII. Despite promising results, this approach treats only tiny patches of skin, requires treatment in highly specialised centres, and may cause cancer.
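The dominant/recessive distinction described above follows standard Mendelian inheritance. As a rough illustration (not from the article, and modelling only the recessive, RDEB-type mutations, where one functional copy is enough to make some collagen VII), a few lines of Python can enumerate the equally likely offspring genotypes of two unaffected carrier parents:

```python
from itertools import product
from collections import Counter

def offspring_genotypes(parent1, parent2):
    """Enumerate equally likely offspring genotypes from two parents.

    Each parent is a 2-character string of COL7A1 alleles:
    'F' = functional copy, 'm' = mutated (recessive) copy.
    """
    combos = ["".join(sorted(pair)) for pair in product(parent1, parent2)]
    return Counter(combos)

# Two carriers, each with one mutated COL7A1 copy ('Fm' x 'Fm'):
counts = offspring_genotypes("Fm", "Fm")
print(counts)
# On average, 1 child in 4 inherits both mutated copies ('mm') -> RDEB;
# children with a functional copy ('FF' or 'Fm') still make collagen VII.
```

This is only a sketch of the allele bookkeeping; real DEB genetics is more complex, since dominant (DDEB) mutations act through a single faulty copy.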
The second approach is the one Vyjuvek followed. Scientists place the genetic information to make collagen VII in a modified virus and apply it to a wound. There, the virus infects skin cells, providing them with a new COL7A1 gene to use. These cells then produce a functional collagen VII and can patch the damage up. We already know which approach came out on top. Vyjuvek speeds up the healing of wounds as big as a smartphone. Professionals can apply it in hospitals, clinics or even at the patient’s home. And it uses a technology that does not cause cancer. But how does Vyjuvek work? And why did scientists choose the herpes virus to build Vyjuvek? We will find the answers in the following article. And since perfection does not belong to biology, we will also discuss the limitations of this remarkable gene therapy.

NOTES:
1. DEB is part of a group of four inherited conditions, collectively named epidermolysis bullosa (EB), where the skin loses integrity. EB is also known as “butterfly syndrome” because the skin becomes as fragile as a butterfly’s wing. These conditions are EB simplex, junctional EB, dystrophic EB and Kindler EB.
2. Most gene therapies are based on modified, or recombinant in science jargon, adeno-associated viruses, which I reviewed for Scientia News.
3. Over 700 mutations have been reported. They disrupt collagen VII and its function with various degrees of severity. Consequently, RDEB and DDEB display several clinical phenotypes.
4. Two studies have adopted this approach: in the first study, Siprashvili and colleagues (2016) grafted ex vivo retrovirally-modified keratinocytes, the main cell type in the epidermis, over the skin of people with RDEB; in the second study, Lwin and colleagues (2019) injected ex vivo lentivirally-modified fibroblasts into the dermis of people with RDEB.

Written by Matteo Cortese, PhD

Related article: Ehlers-Danlos syndrome
- The rising threat of antibiotic resistance | Scientia News
The rising threat of antibiotic resistance
Last updated: 05/12/24, 12:25
Understanding the problem and solutions

An overview and history of antibiotics

Antibiotics are medicines that treat and prevent bacterial infections (such as skin infections, respiratory infections and more). Antibiotic resistance is the process by which infection-causing bacteria become resistant to antibiotics. As the World Health Organisation (WHO) has stated, antibiotic resistance is one of the biggest threats to global health, food security and development. In 1910, Paul Ehrlich discovered the first antibiotic, Salvarsan, used at the time to treat syphilis. His idea was to create anti-infective medication, and Salvarsan was successful. The golden age of antibiotic discovery began with the accidental discovery of penicillin by Alexander Fleming in 1928. He noticed that mould had contaminated one of his petri dishes of Staphylococcus bacteria. He observed that bacteria around the mould were dying and realised that the mould, Penicillium notatum, was killing the bacteria. In 1940, Howard Florey and Ernst Chain isolated penicillin and began clinical trials, showing that it effectively treated infected animals. Penicillin was in use to treat patients by 1943 in the United States. Overall, the discovery and use of antibiotics in the 20th century was a significant scientific achievement, extending people’s lives by around 20 years.

Factors contributing to antibiotic resistance

Increasing levels of antibiotic resistance could mean routine surgeries and cancer treatments (which can weaken the body’s ability to respond to infections) might become too risky, and minor illnesses and injuries could become more challenging to treat. Various factors contribute to this, including the overuse and misuse of antibiotics and low investment in new antibiotic research.
Antibiotics are overused and misused due to misunderstanding of when and how to use them. As a result, antibiotics may be used for viral infections, and an entire course may not be completed if patients start to feel better. Some patients may also use antibiotics not prescribed to them, such as those of family and friends. Moreover, there has not been enough investment to fund the research of novel antibiotics. This has resulted in a shortage of antibiotics available to treat infections that have become resistant. Therefore, more investment and research are needed to prevent antibiotic resistance from becoming a public health crisis.

Combatting antibiotic resistance

One of the most effective ways to combat antibiotic resistance is by raising public awareness. Children and adults can learn when and how to use antibiotics safely. Several resources are available to help individuals and members of the public do this:
1. The WHO has provided a factsheet with essential information on antibiotic resistance.
2. The Antibiotic Guardian website is a platform with information and resources to help combat antibiotic resistance. It is a UK-wide campaign to improve and reduce antibiotic prescribing and use. Visit the website to learn more, and commit to a pledge to play your part in helping to solve this problem.
3. Public Health England has created resources to support Antibiotic Guardian.
4. The e-Bug peer-education package is a platform that aims to educate individuals and provide them with tools to educate others.

Written by Naoshin Haque

Related article: Anti-fungal resistance
- TDP43 and Parkinson's | Scientia News
TDP-43 and Me: the Neurodegenerative Impact of Gene Misplacement in Parkinsonism
Last updated: 18/11/24
Practice and Progress in Neurology

Since 2006, when the link between amyotrophic lateral sclerosis (ALS), frontotemporal degeneration and TDP-43 mutations was demonstrated by Arai et al., TDP-43 has remained a focus in neurological academia. This is for good reason; the research boom around the role of TDP-43 in neurodegeneration has elucidated links between TDP-43, parkinsonism and frontotemporal dementia (FTD). The link between point mutations, deletions and loss of gene function in PRKN has long been established, but has yet to lead to the development of a targeted therapeutic treatment. PRKN is involved in tagging excess or faulty proteins with ubiquitin, which leads to degradation of the proteins in the ubiquitin/proteasome system (UPS) – a system characterised in medical neurology by its potential to cause serious neurological disorders. This places parkinsonism in a domain of neurodegenerative disorders sharing a common root in UPS dysfunction, including Alzheimer’s disease, multiple sclerosis and Huntington’s disease. Panda et al. (2022) demonstrated how dysfunction of the UPS due to PRKN aberration inhibits the breakdown of the damaging TDP-43 aggregates which develop in human brains in response to mutation or stress. In healthy people, autophagic granules would attack and clear these TDP-43 aggregates as an end result of the UPS, but due to aberrations in PRKN, the UPS is inhibited in those afflicted with parkinsonism, causing neurodegeneration. The discovery of how TDP-43 and parkinsonism are linked could lead to a treatment that mimics the organic catalyst of TDP-43 aggregate breakdown to replicate the UPS, reducing TDP-43 aggregate volume and, by proxy, inhibiting neurodegeneration. In 2007, research by Esper et al.
catalysed recognition of drug-induced parkinsonism (DIP) as severely underdiagnosed, with evidence that even neurologists often fail to recall which medications cause parkinsonism. Prompt withdrawal of the inciting agent is necessary for the reversal of parkinsonism symptoms, but in some patients cognitive symptoms may persist for a time after the medication is stopped. In light of the novel discoveries of Panda et al. (2022), this is likely due to the aggregation of TDP-43. Another possibility is that permanent cognitive symptoms after cessation of the inciting agent in DIP may be due to large TDP-43 aggregates that cannot be destroyed by the UPS. Further research will demonstrate whether TDP-43 aggregates become more resistant to the UPS or autophagy through the progression of DIP, whether due to size or other factors. The implications of such a promising lead in neurotherapeutics for refractory parkinsonism cannot be overstated. Surgical therapies have long remained the industry standard in treating refractory parkinsonism, though this option remains prone to risk, since many of those afflicted with parkinsonism are elderly, and drug-induced parkinsonism from treatment with antipsychotics, calcium channel blockers or other medications continually adds to the number of the geriatric population requiring care for parkinsonism. Furthermore, the adequate treatment of those with parkinsonism in their youth could inhibit their progression to a refractory disease state in old age. Overall, the future looks very promising for those around the world suffering from all different forms of parkinsonism.

Written by Aimee Wilson

Related article: A common diabetes drug treating Parkinson's disease
- The Lyrids meteor shower | Scientia News
The Lyrids meteor shower
Last updated: 14/11/24

The Lyrids bring an end to the meteor shower drought that exists during the first few months of the year. On April 22nd, the shower is predicted to reach its peak, offering skygazers an opportunity to witness up to 20 bright, fast-moving meteors per hour, which leave long, fiery trails across the sky, without any specialist equipment. The name Lyrids comes from the constellation Lyra – the lyre, or harp – which is the radiant point of this shower, i.e. the position in the sky from which the paths of the meteors appear to originate. In the Northern Hemisphere, Lyra rises above the horizon in the northeast and reaches the zenith (directly overhead) shortly before dawn, making this the optimal time to observe the shower. Lyra is a prominent constellation, largely due to Vega, which forms one of its corners and is one of the brightest stars in the sky. Interestingly, Vega is defined as the zero point of the magnitude scale – a logarithmic system used to measure the brightness of celestial objects. Technically, the brightness of all stars and galaxies is measured relative to Vega! Have you ever wondered why meteor showers occur exactly one year apart and why they always radiate from the same defined point in the sky? The answer lies in the Earth's orbit around the Sun, which takes 365 days. During this time, Earth may encounter streams of debris left by a comet, composed of gas and dust particles that are released when an icy comet approaches the Sun and vaporizes. As the debris particles enter Earth’s atmosphere, they burn up due to friction, creating a streak of light known as a meteor. (Meteorites are the fragments that make it through the atmosphere to the ground.) The reason the Lyrids meteor shower peaks in mid-late April each year is that Earth encounters the same debris stream at the point on its orbit corresponding to mid-late April.
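The Vega-relative magnitude scale mentioned above is logarithmic: a difference of 5 magnitudes corresponds to a factor of 100 in brightness, and brighter objects have smaller (even negative) magnitudes. A minimal Python sketch of the standard formula, with illustrative flux values:

```python
import math

def apparent_magnitude(flux, flux_vega):
    """Apparent magnitude relative to Vega (defined as magnitude 0).

    m = -2.5 * log10(F / F_Vega): every 5 magnitudes corresponds
    to a factor of 100 in brightness.
    """
    return -2.5 * math.log10(flux / flux_vega)

# An object 100x fainter than Vega has magnitude +5:
print(apparent_magnitude(0.01, 1.0))   # 5.0
# An object 100x brighter than Vega has magnitude -5:
print(apparent_magnitude(100.0, 1.0))  # -5.0
```

(In modern practice, calibrated "Vega magnitudes" use measured reference fluxes per filter band; the numbers here are placeholders to show the logarithmic behaviour.)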
Comets and their debris trails have very eccentric but predictable orbits, and the Earth passes through the trail of Comet Thatcher in mid-late April every year. Additionally, Earth’s orbit intersects the trail at approximately the same angle every year, so from the perspective of an observer on Earth, the radiant point of the meteors, mapped onto the canvas of background stars in the night sky, always falls in the constellation Lyra.

The Lyrids meteor shower peaks in mid-late April each year. Image: EarthSky.org

This year, there is a fortunate alignment of celestial events. The New Moon occurs on April 20th, meaning that by the time the Lyrids reach their maximum intensity, the Moon is only 6% illuminated, resulting in darker skies and an increased chance to see this dazzling display.

Written by Joseph Brennan
- A perspective on well-being: hedonic VS eudaimonic well-being | Scientia News
A perspective on well-being: hedonic VS eudaimonic well-being
Last updated: 23/09/24, 15:52
Based on the ideas of Aristippus and Aristotle

Since ancient times, well-being has been discussed in two broad domains: hedonic and eudaimonic. Hedonic well-being is based on the ideas of Aristippus, who proposed that the ultimate aim of all human endeavours and pursuits is pleasure (hedonism). Therefore, hedonic well-being (aka subjective well-being) is a shorter-term evaluation of well-being that balances positive against negative emotions, and pleasure attainment against pain avoidance. A real-life example of behaviour that leads to hedonic happiness is spending a large amount of money on a designer item to satisfy the need to stay current with fashion trends. According to Keyes et al. (2002), the three aspects of subjective well-being are positive affect (mood), negative affect (mood) and life satisfaction. The most common tools used to measure subjective well-being are the Positive and Negative Affect Schedule (PANAS) by Watson, Clark & Tellegen (1988) and the Satisfaction with Life Scale (SWLS) by Diener et al. (1985). Subjective well-being has been associated with having a present temporal focus and higher income levels, suggesting it is grounded in the physical aspects of life rather than the greater goals of self-actualisation. On the other hand, eudaimonic well-being is based on the philosophy of Aristotle, who argued that humans can only achieve true happiness and flourish by finding meaning and purpose in life (eudaimonia). Thus, eudaimonic well-being (aka psychological well-being) is a longer-term evaluation of well-being that results from engagement with development and with the challenges posed in life during the search for meaning and self-reflection. An example of an action that leads to eudaimonic happiness is reading philosophical books and learning more about life holistically. According to Keyes et al.
(2002), the six aspects of psychological well-being are autonomy, environmental mastery, personal growth, purpose in life, positive relations with others and self-acceptance. The Scales of Psychological Well-being by Ryff (1989) are often used to measure eudaimonic well-being. Recent research shows that psychological well-being is associated with higher levels of self-compassion, mindfulness practices and exposure to natural environments. Therefore, hedonic and eudaimonic well-being represent distinct perspectives on life. Hedonic well-being is more focused on a person's present emotional state and their evaluation of current life circumstances, whereas eudaimonic well-being takes a longer-term view, considering how well a person is functioning and developing their potential over time. The two types of well-being are also related to separate life outcomes. Higher subjective well-being is associated with better physical health, longevity and relationship quality, while greater psychological well-being is linked to resilience, continued personal growth and self-actualisation. While it is perhaps impossible to determine which type of well-being is more beneficial, it is clear that hedonic and eudaimonic well-being are intertwined in our daily lives.

Written by Aleksandra Lib

Related articles: Motivating the mind / Negligent exercise / Environmental factors and exercise

REFERENCES

Diener, E., & Chan, M. Y. (2011). Happy people live longer: Subjective well-being contributes to health and longevity. Applied Psychology: Health and Well-Being, 3(1), 1-43.
Diener, E. D., Emmons, R. A., Larsen, R. J., & Griffin, S. (1985). The Satisfaction with Life Scale. Journal of Personality Assessment, 49(1), 71-75.
Howell, A. J., Passmore, H.-A., & Holder, M. D. (2023). Savoring the here and now: The role of temporal focus for well-being. Journal of Positive Psychology, 18(2), 221-236.
Keyes, C. L., Shmotkin, D., & Ryff, C. D. (2002).
Optimizing well-being: The empirical encounter of two traditions. Journal of Personality and Social Psychology, 82(6), 1007.
Koo, J., & Park, K. (2022). Does money buy happiness after all? Revisiting the income-wellbeing link. Journal of Happiness Studies, 23(3), 1133-1154.
Mair, C., Jarrett, M., Watson, M., & Jones, P. B. (2022). The impact of nature exposure on psychological well-being: A systematic review. Environmental Research, 208, 112677.
Krieger, T., Hermann, H., Zimmermann, J., & grosse Holtforth, M. (2022). The role of self-compassion in promoting psychological well-being during the COVID-19 pandemic. Journal of Counseling Psychology, 69(4), 380-396.
Ryff, C. D. (1989). Happiness is everything, or is it? Explorations on the meaning of psychological well-being. Journal of Personality and Social Psychology, 57, 1069-1081.
Ryff, C. D. (2014). Psychological well-being revisited: Advances in the science and practice of eudaimonia. Psychotherapy and Psychosomatics, 83(1), 10-28.
Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54(6), 1063.
Xu, W., Rodriguez, M. C., Zhang, Q., & Liu, X. (2021). Mindfulness and psychological well-being: A longitudinal study. Mindfulness, 12(9), 2154-2164.
Zaheer, Z., & Khan, M. A. (2022). Perceived stress, resilience and psychological well-being among university students: The role of optimism as a mediator. Asian Social Studies and Applied Research, 3(1), 55-67.
- Female Nobel Prize Winners in Chemistry | Scientia News
Female Nobel Prize Winners in Chemistry
Last updated: 24/09/24, 11:03
Women who changed the world

Women contributing their innovative ideas have strengthened the knowledge held in the scientific world. It is important to realise that women in STEM need to be celebrated all year round – they need to be given the recognition they deserve. A total of 60 women were awarded the Nobel Prize between 1901 and 2022. This article looks specifically at the female Nobel Prize winners in Chemistry – all of whom have changed the way society views women, and whose achievements put a spotlight on the progress that can still be made if we have more women in the field of STEM. Eight women have received this prestigious award: Carolyn R. Bertozzi, Emmanuelle Charpentier, Jennifer A. Doudna, Frances H. Arnold, Ada E. Yonath, Dorothy Crowfoot Hodgkin, Irène Joliot-Curie and Marie Curie. This article celebrates their ground-breaking discoveries and contributions to the world of science, and serves as an inspiration to young girls and women, in the hope of raising a generation where more women study STEM subjects and acquire high-ranking roles, reducing the gender gap.

Nobel Prizes won:
2022: Carolyn R. Bertozzi was awarded for her development of bioorthogonal reactions, which have allowed scientists to explore and track biological processes without disrupting the chemistry of the original cells.
2020: Emmanuelle Charpentier and Jennifer Doudna were awarded for their development of a method for high-precision genome editing: the CRISPR/Cas9 genetic scissors. They used the immune system of a bacterium, which disables viruses by cutting up their DNA with a type of genetic scissors. The CRISPR/Cas9 genetic scissors have led to many exciting discoveries and new ways to fight cancer and genetic diseases.
2018: Frances Arnold was awarded for her work on the directed evolution of enzymes.
In 1993, Arnold conducted the first directed evolution of enzymes, which are proteins that catalyse chemical reactions. This has led to the manufacturing of environmentally friendly chemical substances, such as pharmaceuticals, and the production of renewable fuels.
2009: Ada Yonath was awarded for her studies on the structure and function of the ribosome. In the 1970s, she began a project that culminated in her successful mapping of the structure of the ribosome, which consists of thousands of atoms, using X-ray crystallography. This has been important in the production of antibiotics.
1964: Dorothy Hodgkin was awarded for solving the atomic structures of molecules such as penicillin and insulin, using X-ray crystallography.
1935: Irène Joliot-Curie was awarded for her discovery that radioactive atoms could be created artificially.

Written by Khushleen Kaur

Related articles: Female Nobel prize winners in physics / African-American women in cancer research
- Why the Northern Lights were seen in the UK | Scientia News
Why were the Northern Lights seen in the UK?
Last updated: 13/11/24

On the 26th and 27th of February 2023, the UK experienced a rare treat – a “Red Alert” indicating a good chance of seeing the Northern Lights, or Aurora Borealis. This captivating event drew people from all over the country, eager to witness one of nature's most awe-inspiring displays. But why is it that opportunities to observe the Northern Lights from the lower latitudes of the UK, France and Germany are so rare? To answer this question, it is important to understand the fascinating science behind the Northern Lights and the 'Northern' aspect that gives them their name.

What are the Northern Lights?

The Northern Lights arise because the Sun's immense gravity weakens with increasing distance from its centre, enabling the outermost regions of the Sun's corona to escape as the solar wind, which travels towards Earth. (The boundary at which the solar wind and corona are distinguished is known as the Alfvén surface.) The solar wind is a plasma composed of protons, electrons and other charged particles, which collide with atoms in Earth's atmosphere and excite the electrons in these atoms to higher energy levels. Upon de-excitation, the energy gained via collisions is released by the emission of light. Lucky observers saw the characteristic emerald green hues, which result from oxygen atoms at an altitude of around 100 km. Those luckier still may have seen crimson aurorae, caused by oxygen atoms at roughly 150 km upwards. We observe different colours because the chemical composition of Earth's atmosphere varies with altitude.

The Northern Lights. Credit: Evan Boyce

Why are they (typically) only visible at the poles?

The solar wind travels at millions of kilometres per hour and engulfs the Earth. Equatorial regions are protected by Earth's magnetic field, which deflects the solar wind.
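The colour of each auroral emission follows directly from the photon energy released on de-excitation, E = hc/λ. A quick back-of-the-envelope check in Python, using the standard wavelengths of the oxygen green (~557.7 nm) and red (~630.0 nm) lines (values from standard spectroscopy references, not the article):

```python
# Photon energy E = h*c / wavelength, converted to electronvolts.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Energy (in eV) of a photon with the given wavelength in nanometres."""
    return H * C / (wavelength_nm * 1e-9) / EV

# Oxygen green line (~557.7 nm) vs oxygen red line (~630.0 nm):
print(round(photon_energy_ev(557.7), 2))  # ~2.22 eV
print(round(photon_energy_ev(630.0), 2))  # ~1.97 eV
```

The shorter-wavelength green line carries slightly more energy per photon than the red line, matching the idea that each colour corresponds to a specific de-excitation transition.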
However, the magnetic field converges at Earth's magnetic poles, redirecting the charged particles of the solar wind to high-latitude regions such as Scandinavia and Canada. The same effect occurs at the southern magnetic pole, only those lights are named the "Aurora Australis". The "auroral zone" is the region of Earth's atmosphere associated with this magnetic funnelling of charged particles. It takes the shape of an annulus centred on Earth's north magnetic pole and usually lies in the 65°-70° latitude range.

Why were they visible in the UK last month?

The "auroral zone" is key to answering this question: it is by no means a fixed or static region. Two coronal mass ejections (CMEs) happened to arrive at Earth on consecutive nights. CMEs are far more intense than the steady solar wind, and this can distort the magnetic field lines, resulting in what is called a geomagnetic storm. This triggers an expansion of the auroral zone to lower latitudes, allowing the Northern Lights to be seen by UK observers.

A graph displaying geomagnetic activity with universal time (UTC). Credit: @aurorawatchuk on Twitter

How to know when to look?

AuroraWatch UK is a free service run by the Lancaster University Department of Physics, providing alerts on the likelihood of observing the Northern Lights. This likelihood is based on measurements of geomagnetic activity - disturbances in Earth's magnetic field - from a network of magnetometers called SAMNET (Sub-Auroral Magnetometer Network). I will certainly be eagerly awaiting the next "Red Alert" and hoping for clear skies!

Written by Joseph Brennan
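As an aside, the link between the emission colours described above and photon energy can be checked with a few lines of Python. This is an illustrative sketch only: the constants and the standard atomic-oxygen line wavelengths (green at 557.7 nm, red at 630.0 nm) are textbook values, not figures from the article.

```python
# Illustrative only: relate auroral emission wavelengths to photon
# energies using E = h * c / lambda.
PLANCK_H = 6.626e-34        # Planck constant, J*s
LIGHT_C = 2.998e8           # speed of light, m/s
EV_PER_J = 1.0 / 1.602e-19  # joules -> electronvolts

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a given wavelength in nanometres."""
    wavelength_m = wavelength_nm * 1e-9
    return PLANCK_H * LIGHT_C / wavelength_m * EV_PER_J

# Standard atomic-oxygen auroral lines
green_ev = photon_energy_ev(557.7)  # green line, ~2.22 eV
red_ev = photon_energy_ev(630.0)    # red line, ~1.97 eV
print(f"green line: {green_ev:.2f} eV, red line: {red_ev:.2f} eV")
```

The shorter-wavelength green light carries more energy per photon than the red, which is why different de-excitation transitions produce distinct colours.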
- Behavioural Economics III | Scientia News
Behavioural Economics III Last updated: 28/11/24, 12:01

Loss aversion: the power of framing in decision-making and why we are susceptible to poor decisions

This article is part 3 in a four-part series on behavioural economics. Next article: Effect of Time (coming soon). Previous article: The endowment effect.

In the realm of decision-making, the way information is presented can dramatically influence the choices people make. This phenomenon, known as framing, plays a pivotal role in how we perceive potential outcomes, especially when it comes to risks and rewards. We shall now explore the groundbreaking work of Tversky and Kahneman, who sought to explain how different framings of identical scenarios could lead to vastly different decisions. By examining their research, we can gain insight into why we are susceptible to making poor decisions and understand the underlying psychological mechanisms that drive our preferences.

The power of framing

Imagine that the UK is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. In their paper, Tversky and Kahneman examined the importance of how this information is conveyed, using two different scenarios.

In scenario 1: If program A is adopted, 200 people will be saved. If program B is adopted, there is a 1/3 probability that 600 people will be saved and a 2/3 probability that no people will be saved.

In scenario 2: If program A is adopted, 400 people will die. If program B is adopted, there is a 1/3 probability that nobody will die and a 2/3 probability that 600 people will die.

Notice that both scenarios present exactly the same information; only the way it is displayed differs. So surely there should be no difference between the two scenarios? In fact, there is a huge difference.
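As a quick sanity check (a sketch for this article, not a calculation from Tversky and Kahneman's paper), the expected outcomes of the two programs really are identical under both framings:

```python
# Expected outcomes (out of 600 people) under each program.
# Scenario 1 (gain frame), expressed as expected lives saved:
ev_saved_a = 200                       # 200 saved for certain
ev_saved_b = (1/3) * 600 + (2/3) * 0   # 1/3 chance all 600 are saved

# Scenario 2 (loss frame), expressed as expected deaths:
ev_deaths_a = 400                      # 400 die for certain
ev_deaths_b = (1/3) * 0 + (2/3) * 600  # 2/3 chance all 600 die

# Both framings: 200 expected survivors, 400 expected deaths
print(ev_saved_a, ev_saved_b)
print(ev_deaths_a, ev_deaths_b)
```

On average, every option saves 200 people and loses 400, so a purely statistical decision-maker should be indifferent between them.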
Scenario 2 has been given a loss frame, which emphasises the potential negative outcomes. By taking a sidestep, we can examine why this is important. Loss aversion is the phenomenon whereby 'losses loom larger than gains'. In other words, if we lose something, the negative impact is greater than the positive impact of an equal-sized gain. Image 1 illustrates a loss aversion function. As illustrated in the image, a loss of £100 results in a much larger negative reaction than the positive reaction to a gain of £100. To put this into perspective, imagine it's your birthday and someone gifts you some money. You would hopefully feel quite grateful and happy, but perhaps this feeling isn't overwhelming. If, on the contrary, you soon discover that you have lost your wallet or purse containing the same amount of money, the psychological impact is often much more severe. Losses are perceived to be much more significant than gains.

Going back to the two scenarios, we see that in scenario 2 program A emphasises the certain death of 400 people, whereas program B carries a chance of losing more but also a chance of saving everyone. Statistically, you should be indifferent between the two, but because the guaranteed loss of 400 people is so overwhelming, people would much rather gamble and take the chance. The same reasoning explains why gambling is so addictive: when you lose money in a gamble, you feel compelled not to accept the loss, and you continue betting in an effort to make back what you once had. What Kahneman and Tversky found was that in scenario 1, 72% of people chose program A, and in scenario 2, 78% of people chose program B. Clearly, how we frame a policy makes a huge difference to its popularity. By framing the information as "200 people will be saved" rather than "400 people will die" out of the same 600 people, our perception is considerably different.
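One standard way to model the asymmetry shown in image 1 is the value function from Kahneman and Tversky's prospect theory. The sketch below uses their widely cited 1992 parameter estimates (loss-aversion coefficient λ ≈ 2.25, curvature ≈ 0.88); these numbers are an assumption brought in for illustration, not figures from this article.

```python
# Prospect-theory value function: gains and losses are valued asymmetrically.
# Parameters are the commonly cited Tversky-Kahneman (1992) estimates.
ALPHA = 0.88   # curvature for gains (diminishing sensitivity)
BETA = 0.88    # curvature for losses
LAMBDA = 2.25  # loss-aversion coefficient: losses loom larger than gains

def subjective_value(outcome: float) -> float:
    """Perceived value of a monetary gain (positive) or loss (negative)."""
    if outcome >= 0:
        return outcome ** ALPHA
    return -LAMBDA * ((-outcome) ** BETA)

gain = subjective_value(100)   # pleasure of gaining £100
loss = subjective_value(-100)  # pain of losing £100
print(gain, loss)  # the loss feels roughly 2.25 times as intense
```

Plugging in the birthday example: gaining £100 and then losing £100 does not cancel out psychologically, because the loss carries more than twice the subjective weight.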
But on a deeper level, why might this be, and why is knowing this distinction important? In my previous article on the endowment effect, we saw that once you own something, you feel possessive over it, and losing something that you have had to work for, like money, makes you feel as though that hard work has gone to waste. But this explanation struggles to translate to our example involving people. In researching this article, I came across the evolutionary psychology perspective and found it to be both interesting and persuasive. From an evolutionary perspective, loss aversion can be seen as an adaptive trait. For our ancestors, losses such as losing food or shelter could have dire consequences for survival, whereas gains such as finding extra food were certainly beneficial but not as crucial for immediate survival. Therefore, we may be hardwired to avoid losses, which has translated into modern-day loss aversion.

Knowing about this matters in two aspects of life. The first is healthcare. As demonstrated at the beginning of the article, people's decisions can be influenced by the way healthcare professionals and the government frame policies. Understanding this allows you to weigh the risks yourself and decide whether a policy is right for you. Similarly, policymakers can shape public opinion by highlighting the benefits or costs of action or inaction to suit their own political agenda, so recognising loss aversion allows for more informed decision-making. The second is investing: people tend to hold on to an investment that is performing badly, or sitting at a loss, in the hope that it will recover. If this belief is justified through analysis or good judgement, then deciding to hold may be a good decision; however, loss aversion often creates a false sense of hope, similar to the example I gave for gambling.
If you are a keen investor, it's important to be aware of your own investment psychology so that you can maintain an objective view of a company for as long as you remain invested. Evidently, understanding how we think and make decisions can play an important role in improving the choices we make in our personal and professional lives. By recognising the impact of loss aversion and framing, we can become more aware of the unconscious biases that drive us to avoid losses at all costs, even when those decisions may not be in our best interest. Whether it's in healthcare, investing, or everyday life, cultivating this awareness allows for more rational, informed choices that better align with long-term goals rather than short-term fears. In a world where information is constantly framed to sway public opinion, knowing the psychology behind our decision-making processes is a powerful tool that can help us make wiser, more deliberate decisions.

Written by George Chant

REFERENCES

Tversky A, Kahneman D. The framing of decisions and the psychology of choice. Science. 1981 Jan 30;211(4481):453-8. doi: 10.1126/science.7455683. PMID: 7455683.

Image provided by Economicshelp.org: https://www.economicshelp.org/blog/glossary/loss-aversion/
- The search for a room-temperature superconductor | Scientia News
The search for a room-temperature superconductor Last updated: 13/12/24, 12:09

A (possibly) new class of semiconductors

In early August, the scientific community was buzzing with excitement over the claimed discovery of the first room-temperature superconductor. As some rushed to prove the existence of superconductivity in the material known as LK-99, others were sceptical of the validity of the claims. After weeks of investigation, experts concluded that LK-99 was likely not the elusive room-temperature superconductor but rather a different type of magnetic material with interesting properties. But what if we did stumble upon a room-temperature superconductor? What could this mean for the future of technology?

Superconductivity is a property of some materials at extremely low temperatures that allows the material to conduct electricity with no resistance. Classical physics cannot explain this phenomenon; instead, we have to turn to quantum mechanics for a description of superconductors. Inside superconductors, electrons pair up and can move through the structure of the material without experiencing any friction. These electron pairs are broken apart by thermal energy, so they only exist at low temperatures. Therefore this theory, known as BCS theory after the physicists who formulated it, does not explain the existence of high-temperature superconductors. To describe high-temperature superconductors, such as any that might occur at room temperature, more complicated theories are needed.

The magic of superconductors lies in their property of zero resistance. Resistance wastes energy in circuits through heating, which leads to the unwanted loss of power and inefficient operation. Physically, resistance is caused by electrons colliding with atoms in the structure of a material, causing energy to be lost in the process.
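To put rough numbers on that heating loss, here is a short illustrative sketch using the Joule-heating formula P = I²R. The wire resistance and current are made-up example values, not figures from the article:

```python
# Power dissipated as heat in a conductor: P = I^2 * R (Joule heating).
# Example values below are purely illustrative.
CURRENT_A = 10.0                      # current through the wire, amperes
ORDINARY_WIRE_OHM = 0.5               # an ordinary wire has some resistance
SUPERCONDUCTOR_OHM = 0.0              # a superconductor has exactly zero

def heat_loss_watts(current: float, resistance: float) -> float:
    """Power wasted as heat for a given current and resistance."""
    return current ** 2 * resistance

print(heat_loss_watts(CURRENT_A, ORDINARY_WIRE_OHM))   # 50.0 W wasted as heat
print(heat_loss_watts(CURRENT_A, SUPERCONDUCTOR_OHM))  # 0.0 W wasted
```

With zero resistance, the dissipated power is zero no matter how large the current, which is exactly why superconducting circuits waste no energy to heating.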
The ability of electrons to move through superconductors without experiencing any collisions results in zero resistance. Superconductors are useful as components in circuits because they waste no power through heating effects and are, in this respect, completely energy-efficient. Normally, using superconductors requires complex methods of cooling them down to typical superconducting temperatures. For example, the first copper-oxide superconductor to be discovered becomes superconducting at 35 K, or in other words, around 240 °C colder than the temperature at which water freezes. These cooling methods are expensive, which prevents superconductors from being deployed on a wide scale. However, a room-temperature superconductor would give access to the beneficial properties of the material, such as its zero resistance, without the need for extreme cooling.

The current record holders for highest-temperature superconductors at ambient pressure are the cuprate superconductors, at around −135 °C. These are a family of materials made up of layers of copper oxides alternating with layers of other metal oxides. As the mechanism for their superconductivity is yet to be revealed, scientists are still scratching their heads over how these materials can exhibit superconducting properties. Once this mechanism is discovered, it may become easier to predict and find high-temperature superconducting materials, and it may lead to the first room-temperature superconductor. Until then, the search continues to unlock the next frontier in low-temperature physics…

For more information on superconductors: [1] Theory behind superconductivity [2] Video demonstration

Written by Madeleine Hales

Related articles: Semiconductor manufacturing / Semiconductor laser technology / Silicon hydrogel lenses