Search Index
355 results found
- An end at the beginning: the tale of the Galápagos Tortoises | Scientia News
Conservation efforts (published 06/06/24; last updated 25/09/25)

The Galápagos Islands

Most who know of the name “Darwin” will be familiar with the Galápagos. These relatively uninviting islands protrude from harsh, crashing waves like spears of mountainous rock, formed through millions of years of fierce volcanic activity. Even Charles Darwin himself thought life could not be sustained in such a remote and harsh environment, writing in his Journal of Researches of his 1835 visit: "A broken field of basaltic lava, thrown into the most rugged waves, and crossed by great fissures, is everywhere covered by stunted, sun-burnt brushwood, which shows little signs of life." Little did the young naturalist know at the time that these rugged islands would spark the most pivotal and influential theory in modern biology.

Due to the archipelago's unique volcanic origins, the cluster of islands has grown jagged and fractured, with elevations ranging from a few metres above sea level on some islands to over 5,000 feet on others. These extremely diverse habitats enable the observation of vastly different sub-populations of the same (or closely related) species*, each exhibiting differing adaptations to their unique environments. These morphological distinctions led to Darwin's seminal 1859 book 'On the Origin of Species', detailing his evidence for the theory of evolution.

*This article may refer to the Galápagos tortoises as different subspecies or species interchangeably, as this remains a contentious area.

The giant tortoises

One of the most apparent examples of evolution that Darwin noted was the Galápagos tortoise, Chelonoidis niger, of which there were at least 15 subspecies. Darwin devoted almost four pages of his Journal of Researches to the Galápagos tortoise, more than he did to any other Galápagos species. These captivating reptiles can grow up to 5 feet in length and weigh over 220 kg, making them the largest tortoises in the world. This miraculous species can survive over a year without food or water, able to store tremendous volumes of liquid in its bladder during periods of drought - one of the many adaptive characteristics that enable these tortoises to routinely live to well over 150 years old.

Darwin notably observed the species' two primary shell morphologies - saddleback and domed. Some subspecies, such as the Pinta Island tortoise (Chelonoidis niger abingdonii), have saddle-shaped shells which rise at the front, making it easier for the neck to stretch upwards to feed on taller vegetation on hotter, more arid islands. The populations with dome-shaped shells, including Chelonoidis niger porteri, occupy islands with an abundance of flora lower to the ground, making upward stretching of the neck unnecessary for feeding. Features such as these are well documented in Darwin's evidence for evolutionary adaptation throughout the islands.

Torment and tragedy

Only two centuries ago, the Galápagos Islands were rife with life, with an estimated 250,000 giant tortoises. Today, multiple species are extinct, and only around 10% of those individuals survive. The dramatic decline of the Galápagos tortoises has been characterised by frequent human failure and, in some instances, human design.
From the 1790s onwards, whalers began operating around the Galápagos, routinely taking long voyages across the Pacific Ocean. With whaling voyages lasting about a year, tortoises were selected as the primary source of fresh meat, with each ship taking 200 to 300 tortoises aboard. There, in a ship's hold, hundreds of tortoises would live without food or water for months before being killed and consumed. Documentation of how many tortoises were taken aboard by whalers is scarce; however, estimates place the number taken by some 700 whaling ships between 1800 and 1870 at between 100,000 and 200,000.

This initial decimation through over-consumption was followed by the introduction of harmful invasive species. In the years since, multiple foreign species have been introduced to the archipelago, mainly for farming, including pigs (many of which are now feral), dogs, cats, rats, goats and donkeys. These non-native species are an enduring threat to the giant tortoise populations, preying on their eggs and hatchlings whilst also providing fierce and unprecedented competition for food. Furthermore, rising temperatures attributed to climate change are thought to trigger atypical migrations, which have the potential to reduce tortoise nesting success, further adding to the list of threats these species have had to endure.

The Pinta giant tortoise, Chelonoidis niger abingdonii, a subspecies of the distinctive saddleback shell variety, was thought to have been extinct since the early 20th century. But then, in 1971, József Vágvölgyi, a Hungarian scientist working on Pinta Island, made a special discovery - Lonesome George. Seemingly the sole survivor of his kind, Lonesome George became an icon of the nascent conservation movement surrounding the Galápagos species. This lone Pinta individual may have been wandering the small island for decades in search of another member of his species - a search that would unfortunately never bear fruit. Despite selective breeding efforts, on 24 June 2012, at 8:00 a.m. local time, Lonesome George passed away without producing any offspring; he was found by park ranger Fausto Llerena, who had looked after him for forty years.

Hope and the future

Despite all the devastation the Galápagos tortoises have endured, not all is lost. Just like the story of Lonesome George, a microcosm of this larger crisis, there is a small light at the end of the tunnel. Just prior to George's passing, a remarkable discovery was made. In 2008, researchers from Yale University's Department of Ecology and Evolutionary Biology set out to genetically sequence the giant tortoise population of neighbouring Isabela Island. Over 1,600 tortoises were tagged and sampled for their DNA, with analyses revealing an astonishing number of tortoises with mixed genetic ancestry. Within this sample, 17 individuals contained DNA from the Pinta tortoise lineage (and more contained DNA from the also-extinct Floreana species). Retrospective study of old whaling logbooks indicates that, in order to lighten the burden of their ships, whalers and pirates dropped large numbers of tortoises in Banks Bay, near Volcano Wolf on Isabela Island, likely accounting for these hybrids. This remarkable discovery opens the door to selective breeding efforts, paving the way for a future reintroduction of the presumed-extinct Pinta Island lineage.
While only a fraction of their original numbers remain, the Galápagos tortoises continue to personify evolution's stunning intricacies and persist as a bright beacon of hope for the greater world of conservation. It is vital that we do our part as human beings to correct the errors of our past and to respect and nurture these gentle giants and all that they represent in this world we call home.

Written by Theo Joe Andreas Emberson

Related articles: Conservation of marine iguanas / 55 years of vicuna conservation / Gorongosa National Park / Modern Evolutionary Synthesis
- DFNB9: The first deafness ever treated by gene therapy | Scientia News
DFNB9 affects 1 to 16 newborns every 50,000 (published 05/09/24; last updated 09/07/25)

"Two (TWO!) AAV gene therapies have restored hearing in deaf patients!" "Scientists have corrected DFNB9 deafness!" These are headlines you likely read last January. The technology making this achievement possible rightfully took the spotlight (even I chimed in!). But what is DFNB9 deafness in the first place? Why do DFNB9 patients lose their hearing? In a nutshell, DFNB9 deafness is the failure of the ear to share what it has heard with the brain because of mutations in the OTOF gene. Do you want to learn more? Let me explain.

Medical and genetic definitions of DFNB9 deafness

DFNB9 is a type of genetic deafness. It affects 1 to 16 newborns every 50,000, and it accounts for 2 to 8% of all cases of genetic deafness. DFNB9 is (take a deep breath!) an autosomal recessive prelingual severe-to-profound non-syndromic sensorineural hearing loss. That's a mouthful of a definition, I agree. Let's break it down. In medical terms, DFNB9 deafness is:

- severe to profound - sounds must be louder than 70 dB (think of a vacuum cleaner) to be heard, or even louder than 90 dB (picture a lawn mower) in profound cases;
- prelingual - hearing is lost before language skills develop (2-3 years of age);
- non-syndromic - not associated with other pathologies.

Geneticists describe DFNB9 as an autosomal recessive disease: the mutated gene is not on the sex chromosomes but on the autosomes (autosomal), and both alleles must be mutated for the disease to appear (recessive). This gene is OTOF. OTOF encodes otoferlin, a protein that enables the cells detecting sounds to communicate with neurons. As mutations in OTOF disrupt this dialogue, DFNB9 is classified as a sensorineural type of deafness.

Otoferlin enables inner hair cells to speak to neurons

How does otoferlin enable us to hear? Answering this question needs a few notions about the two main cell types involved in hearing: auditory hair cells and primary auditory neurons.

Auditory hair cells are the sound detectors. These cells are surmounted by a structure resembling a tuft of hair, the hair bundle. Sounds bend the hair bundle, opening its ion channels; positive ions rush into the cell, generating electrical signals that travel across the cell. Inner hair cells - one of the two types of auditory hair cells - transmit these signals to the primary auditory neurons (Figure 1).

The primary auditory neurons are the first station of the nervous pathway between the ear and the brain. Some primary auditory neurons (type I) extend their dendrites to the inner hair cells and listen. The information received is analysed and sent to the brain along the auditory nerve (Figure 2).

The synapse is where inner hair cells speak to primary auditory neurons. Otoferlin is essential for this dialogue: without it, inner hair cells cannot share what they have heard.

Otoferlin, the calcium sensor

At the synapse, synaptic vesicles sit just beneath the membrane, like Formula 1 cars lined up on the grid waiting for the race to start. In response to a sound, electrical signals trigger the opening of calcium channels and calcium ions (Ca2+) rush in. The sudden increase in Ca2+ is the biological equivalent of the "lights out" signal in Formula 1: as soon as Ca2+ enters, the synaptic vesicles rapidly fuse with the membrane.
This event releases glutamate onto the primary auditory neurons (Figure 3). The information in the sound is on its way to the brain. In the inner hair cells, otoferlin enables synaptic vesicles to sense changes in Ca2+. Anchored to the vesicles by its tail, otoferlin extends into the cell multiple regions with high affinity for Ca2+ (C2 domains) (Figure 4).

The many roles of otoferlin at the synapse

Otoferlin is essential throughout the lifecycle of synaptic vesicles (Figure 5). This is a brief overview of its main roles at the synapse:

1 - Docking: otoferlin helps position vesicles filled with glutamate at the synapse
2 - Priming: otoferlin interacts with SNARE proteins, which are essential for fusion with the membrane, and the vesicles become ready to fuse rapidly
3 - Fusion: electrical signals, triggered by sounds, open Ca2+ channels; otoferlin senses the increase in Ca2+ and prompts the vesicles to fuse with the cell membrane, releasing glutamate
4 - Recycling: otoferlin helps clear fused vesicles and recycle their components

Imperfect knowledge can be enough knowledge (sometimes)

Despite years of studies, the functions of otoferlin at the inner hair cell synapse are still elusive. Even more puzzling is the synapse of inner hair cells as a whole: researchers are captivated and baffled by its mysterious architecture and properties (we would need a new article just to scratch the surface of this topic!). But let's not forget that we now have two gene therapies to improve the deafness caused by mutations in the OTOF gene. These breakthroughs should encourage us: even with imperfect knowledge, we can (at least in some cases) still develop impactful treatments for diseases.

Written by Matteo Cortese, PhD
- Neuroimaging and spatial resolution | Scientia News
Peering into the mind (published 04/11/24; last updated 10/07/25)

Introduction

Neuroimaging has been at the forefront of brain discovery ever since the first images of the brain were recorded in 1919 by Walter Dandy, using a technique called pneumoencephalography. Fast-forward over a century and neuroimaging is far more than blurry singular images. Modern techniques allow us to observe real-time changes in brain activity with millisecond resolution, leading to breakthroughs in scientific discovery that would not be possible without them. Memory is a great example - with functional magnetic resonance imaging (fMRI) techniques we have been able to demonstrate that recent long-term memories are stored and retrieved with brain activity in the hippocampus, but as memories recede further into the past, they appear to be transferred to the medial temporal lobe.

While neuroimaging techniques keep the doors open for new and exciting discoveries, spatial limitations leave many questions unanswered, especially at a cellular and circuit level. For example, within the hippocampus, is each memory encoded via completely distinct neural circuits? Or do similar memories share similar neural pathways? Within just a cubic millimetre of brain tissue there can be up to 57,000 cells (neurons and glia), all of which may have different properties, be part of different circuits, and produce different outcomes. This can make even revolutionary techniques such as fMRI, with almost unparalleled image quality, seem like blunt instruments. To truly understand how neural circuits work, we have to dig as deep as possible and record from the smallest regions possible. That begs the question: how small a region can we actually record from in the human brain?

EEG

2024 marks a century since the first electroencephalography (EEG) recording, made by Hans Berger in Germany. This technique involves placing electrodes around the scalp to record activity across the outer surface of the brain (Figure 1). Unlike the methods we see later on, EEG provides a direct measure of brain activity, picking up the electrical signals produced when the brain is active. However, because electrodes are only placed across the scalp, EEG scans are only able to pick up activity from the outer cortex, missing important activity in deeper parts of the brain. In our memory example, this means it would completely miss any activity in the hippocampus. EEG's spatial resolution is also quite underwhelming, typically resolving activity only to within a few centimetres - not great for mapping behaviours to specific structures in the brain. EEG scans are used in a medical environment to measure overall activity levels, assisting with epilepsy diagnosis. Let's look at what we can use to dig deeper into the brain and locate signals of activity…

PET

Positron emission tomography (PET) scans offer a chance to record activity throughout the whole brain by introducing a radioactive tracer into the body, typically glucose labelled with a mildly radioactive substance. This tracer is tracked, and uptake in specific parts of the brain is a sign of greater metabolic activity, indicating a higher signalling rate. PET scans already offer a resolution far beyond the capacities of EEG, distinguishing activity between areas with a resolution of up to 4mm.
With the use of different radioactive labels, we can also detect the activity of specific populations of neurons, such as dopamine neurons, to diagnose Parkinson's disease. In fact, many studies have reliably demonstrated the ability of PET scans to detect the root cause of Parkinson's disease - a reduced number of dopamine neurons in the basal ganglia - before symptoms become too extreme. As impressive as that sounds, a 4mm resolution can locate activity in large areas of the cortex but is limited in its resolving power for discrete cortical layers. Take the human motor cortex for example - its six layers together span an average thickness of only 2.79mm. A PET scan would not be powerful enough to determine which layer is most active, so we need to dig a little deeper…

fMRI

Since its inception in the early 1990s, fMRI has gained the reputation of being the gold standard for human neuroimaging, thanks to its non-invasiveness, relative freedom from artefacts, and reliable signal. fMRI uses nuclear magnetic resonance to measure changes in oxygenated blood flow - the BOLD signal - which is a correlate of neural activity. In comparison to EEG, measuring blood oxygen levels cannot reach a highly impressive temporal resolution, and it is also not a direct measure of neural activity. fMRI makes up for this with its superior spatial resolution, resolving spaces as small as 1mm apart. Using our human motor cortex example, this would allow us to resolve activity between every 2-3 layers - not a bad return considering it doesn't even leave a scar. PET, and especially EEG, pale in comparison to the capabilities of fMRI, which has since been used for a wide range of neuroimaging research. Most notably, structural MRI has been used to support the idea of hippocampal involvement during spatial navigation from memory tasks (Figure 2). Its resolving power and highly precise images also make it suitable for mapping out surgical procedures.

Conclusion

With a resolution of up to 1mm, fMRI takes the crown as the human neuroimaging technique with the best spatial resolution! Table 1 shows a brief summary of each neuroimaging method. Unfortunately, though, there is still so much more we need to do to look at individual circuits and connections. As mentioned before, even within a cubic millimetre of brain we have a five-figure number of cells, making the number of neurons in the whole brain impossible to comprehend. To observe the activity of a single neuron, we would need an imaging technique capable of resolving cells in the tens-of-micrometres range. So what can we do to reach the resolution we desire while still being suitable for humans? Maybe there isn't a solution. Instead, if we want to record single-neuron activity, perhaps we have to take inspiration from invasive animal techniques such as microelectrode recordings. Typically used in rats and mice, these can achieve single-cell resolution, letting us look at neuroscience from the smallest of components. It would be unethical to stick an electrode into a healthy human's brain and record activity, but perhaps in the future a non-invasive form of electrode recording could be developed? The current neuroscience field is foggy and shrouded in mystery. Most of these mysteries simply cannot be solved with the research techniques we currently have at our disposal. But this is what makes neuroscience exciting - there is still so much to explore!
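The resolution comparison running through this article can be recapped with a few lines of arithmetic. The sketch below is mine rather than the article's Table 1; it uses only the figures quoted in the text, and the 20mm stand-in for EEG's "few centimetres" is an assumption.

```python
# Rough check of the cortical-layer arithmetic used in this article.
# Figures are those quoted in the text: motor cortex ~2.79 mm thick across 6 layers,
# PET resolution ~4 mm, fMRI ~1 mm, EEG ~a few centimetres (assumed 20 mm here).

cortex_thickness_mm = 2.79
n_layers = 6
layer_thickness_mm = cortex_thickness_mm / n_layers   # ~0.47 mm per layer

quoted_resolution_mm = {"EEG": 20.0, "PET": 4.0, "fMRI": 1.0}

for method, res in quoted_resolution_mm.items():
    layers_spanned = res / layer_thickness_mm
    print(f"{method}: ~{res:g} mm voxel spans about {layers_spanned:.1f} cortical layers")

# fMRI (~1 mm) spans roughly 2 layers, matching the article's "every 2-3 layers" figure,
# while PET (~4 mm) spans more than the full cortical depth, and EEG is coarser still.
# Resolving single neurons would require sensitivity in the tens-of-micrometres range.
```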
Who knows when we will be able to map behaviours to neural circuits with single-cell precision, but with how quickly imaging techniques are being enhanced and fine-tuned, I wouldn't be surprised if it's sooner than we think.

Written by Ramim Rahman

Related articles: Neuromyelitis optica / Traumatic brain injuries

REFERENCES

Hoeffner, E.G. et al. (2011) 'Neuroradiology back to the future: brain imaging', American Journal of Neuroradiology, 33(1), pp. 5–11. doi:10.3174/ajnr.a2936.
Maguire, E.A. and Frith, C.D. (2003) 'Lateral asymmetry in the hippocampal response to the remoteness of autobiographical memories', The Journal of Neuroscience, 23(12), pp. 5302–5307. doi:10.1523/jneurosci.23-12-05302.2003.
Wong, C. (2024) 'Cubic millimetre of brain mapped in spectacular detail', Nature, 629(8013), pp. 739–740. doi:10.1038/d41586-024-01387-9.
Butman, J.A. and Floeter, M.K. (2007) 'Decreased thickness of primary motor cortex in primary lateral sclerosis', American Journal of Neuroradiology, 28(1), pp. 87–91.
Loane, C. and Politis, M. (2011) 'Positron emission tomography neuroimaging in Parkinson's disease', American Journal of Translational Research, 3(4), pp. 323–341.
Maguire, E.A. et al. (2000) 'Navigation-related structural change in the hippocampi of taxi drivers', Proceedings of the National Academy of Sciences, 97(8), pp. 4398–4403. doi:10.1073/pnas.070039597.
[Figure 1] EEG (electroencephalogram) (2024) Mayo Clinic. Available at: https://www.mayoclinic.org/tests-procedures/eeg/about/pac-20393875 (Accessed: 18 October 2024).
[Figure 2] Boccia, M. et al. (2016) 'Direct and indirect parieto-medial temporal pathways for spatial navigation in humans: evidence from resting-state functional connectivity', Brain Structure and Function, 222(4), pp. 1945–1957. doi:10.1007/s00429-016-1318-6.
- Unleashing the power of the stars: how nuclear fusion holds the key to tackling climate change | Scientia News
Looking at the option of nuclear fusion to generate renewable energy (published 30/04/23; last updated 14/07/25)

Imagine a world where we have access to a virtually limitless and clean source of energy, one that doesn't emit harmful greenhouse gases or produce dangerous long-lived radioactive waste. A world where our energy needs are met without contributing to climate change. This may sound like science fiction, but it could become a reality through the power of nuclear fusion.

Nuclear fusion, often referred to as the "holy grail" of energy production, is the process of merging light atomic nuclei to form a heavier nucleus, releasing an incredible amount of energy in the process. It's the same process that powers the stars, including our very own Sun, and holds the potential to revolutionize the way we produce and use energy here on Earth. Fusion occurs at extremely high temperature and pressure, when two light nuclei (e.g. deuterium and tritium) merge to form helium. The reaction releases excess energy and a neutron, and this energy can then be harvested in the form of heat to produce electricity (a rough estimate of the energy released per reaction is sketched below, after the list of approaches).

Progress towards a working fusion reactor has been slow; despite the challenges, some promising technologies and approaches have been developed. Notable approaches to nuclear fusion research include:

1. Magnetic Confinement Fusion (MCF): In MCF, strong magnetic fields are used to confine the plasma - the hot, ionized gas in which nuclear fusion occurs - while it is heated to extreme temperatures. One of the most promising MCF devices is the tokamak, a donut-shaped device whose strong magnetic fields confine the plasma. The International Thermonuclear Experimental Reactor (ITER), currently under construction in France, is a large-scale tokamak project that aims to demonstrate the scientific and technical feasibility of nuclear fusion as a viable energy source.

2. Inertial Confinement Fusion (ICF): In ICF, high-energy lasers or particle beams are used to compress and heat a small pellet of fuel, causing it to undergo nuclear fusion. This approach is being pursued in facilities such as the National Ignition Facility (NIF) in the United States, which has made significant progress towards fusion ignition, although it still faces challenges in achieving net energy gain. In December 2022, the US lab reported that, for the first time, the reaction released more energy than the laser energy delivered to the fuel.

3. Compact Fusion Reactors: There are also efforts to develop compact fusion reactors, which are smaller and potentially more practical for commercial energy production. These include technologies such as the spherical tokamak and the compact fusion neutron source, which aim to achieve high energy gain in a smaller and more manageable device.

While nuclear fusion holds immense promise as a clean and sustainable energy source, significant challenges must still be overcome before it becomes a practical reality. In nature, fusion is observed in stars; achieving comparable conditions on Earth is an immense challenge. Extremely high temperatures and pressures are required to overcome the electrostatic repulsion between nuclei and fuse them together.
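As a rough, back-of-the-envelope illustration of the energy scale involved (this sketch is not from the original article), the energy released by a single deuterium-tritium fusion event can be estimated from the mass difference between the reactants and the products, using E = Δm·c² with standard atomic mass values:

```python
# Estimate of the energy released by one D-T fusion event (D + T -> He-4 + n),
# from the mass defect, using E = delta_m * c^2 expressed in atomic mass units and MeV.
# The mass values below are standard published atomic masses (approximate).

U_TO_MEV = 931.494        # energy equivalent of 1 atomic mass unit, in MeV

m_deuterium = 2.014102    # u
m_tritium   = 3.016049    # u
m_helium4   = 4.002602    # u
m_neutron   = 1.008665    # u

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)  # ~0.0189 u
energy_mev = mass_defect * U_TO_MEV

print(f"Mass defect: {mass_defect:.6f} u")
print(f"Energy per D-T fusion: {energy_mev:.1f} MeV")  # ~17.6 MeV, shared between He-4 and the neutron
```

At roughly 17.6 MeV per reaction - millions of times the energy of a typical chemical bond - a very small mass of fuel corresponds to a very large amount of energy, which is part of why fusion remains so attractive despite the engineering challenges discussed below.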
Beyond the conditions themselves, to actually use the energy the reaction has to be sustained, and at present more energy is still required as input than is produced as output. Lastly, materials and technology also pose challenges in the development of nuclear fusion. With high temperatures and high-energy particles, the inside of a fusion reactor is a harsh environment, so alongside the development of sustained fusion, materials and technology that can withstand such harsh conditions are also needed.

Despite the many challenges, nuclear fusion has the potential to be a game changer not only in the fight against climate change but also in providing access to cheap, clean energy globally. Unlike many forms of energy used today, fusion does not emit greenhouse gases and, compared with nuclear fission, is inherently stable and does not produce long-lived radioactive waste. Furthermore, deuterium, one of the fusion fuels, is abundant in the oceans, whereas tritium may need to be synthesised at the outset; once fusion is running, the reactor can breed its own tritium, making the fuel cycle largely self-sustaining.

When the challenges are weighed against the benefits of nuclear fusion, along with the new opportunities it would unlock economically and in scientific research, it is clear that the path to a more successful and clean future lies in the development of nuclear fusion. While there are many obstacles to overcome, the progress made in recent years in fusion research and development is promising. With the construction of ITER, along with the first reports of net energy output from the US NIF programme, nuclear fusion could become a possibility in the not-too-distant future.

In conclusion, nuclear fusion holds the key to addressing the global challenge of climate change. It offers a clean, safe, and sustainable energy source that has the potential to revolutionize our energy systems and reduce our dependence on fossil fuels. With continued research, development, and investment, nuclear fusion could become a reality and help us build a more sustainable and resilient future for our planet. It's time to unlock the power of the stars and harness the incredible potential of nuclear fusion in the fight against climate change.

Written by Zari Syed

Related articles: Nuclear medicine / Geoengineering / The silent protectors / Hydrogen cars
- Correlation between wealthy countries and COVID-19 mortality rate | Scientia News
Linking a country's HDI with its COVID-19 mortality rate (published 24/08/23; last updated 09/07/25)

Investigation title: Could there have been a correlation between very rich countries and COVID-19 mortality rate?

Investigation period: December 2019 - November 2020 (approx. 1 year)

Background

The World Health Organisation (WHO) was first alerted to the coronavirus on 31st December 2019 by a cluster of pneumonia cases in Wuhan, China, a city with a population of 11 million. By 15th January 2020 there were 289 recorded cases across China and in countries such as Thailand, Japan and South Korea. Of the original cases there were 6 deaths and 51 severe cases, 12 of which were critical. Meanwhile, the virus responsible for the cases was isolated, its genome was mapped, and the sequence was shared on 12th January.

The Human Development Index (HDI) is a measure of development. It is a composite of Gross National Income (GNI) per capita, mean years of education and life expectancy at birth. It is calculated on a scale from 0 (least developed) to 1 (most developed), and all its values are quoted to 3 significant figures. HDI values from 2019 were used, and only countries with an HDI greater than 0.800 were considered, as these are all regarded as very-high-HDI countries and so were within the scope of this investigation. This research therefore aimed to determine the impact of human development on the number of mortalities caused by SARS-CoV-2, where human development is measured by HDI and mortality is measured per hundred thousand people from December 2019 to November 2020.

Method

Stratified sampling produced 12 countries, in descending order of HDI value: Australia, Netherlands, UK, Austria, Spain, Estonia, UAE, Portugal, Bahrain, Kazakhstan, Romania, Malaysia. See Table 4.

Results

See Chart 2.

Pearson's test: r = 0.321 (3 s.f.), indicating a moderate positive linear correlation between HDI and mortality rate due to SARS-CoV-2 per 100,000.

Further statistical testing - Spearman's rank:
∑D^2 = 216, n = 12
Rs = 1 - (6∑D^2)/(n(n^2 - 1)) = 1 - (6 x 216)/1716 = 0.245 (3 d.p.)
Since Rs = 0.245 < critical value (≈0.587 for n = 12 at the 5% level), there is no significant correlation between HDI and mortality rate due to coronavirus per 100,000.

Conclusion

The null hypothesis was not rejected: there is no significant correlation between a country's HDI and its mortality rate due to SARS-CoV-2. A possible biogeographical reason for this is that more developed countries (such as the UK in this investigation) have higher levels of immigration from latitudes closer to the equator, so a section of their society may have increased susceptibility to SARS-CoV-2 due to vitamin D deficiency. It is known that low vitamin D levels have a negative impact on immune function and that low vitamin D levels are common in immigrant populations. It is therefore plausible that there is a link between vitamin D deficiency and mortality rate per 100,000, although this could be overstated due to confounding factors such as socioeconomic status, residence and employment. This would help explain why a higher-latitude country like the Netherlands - the third-highest-HDI country in this investigation - had a mortality rate of 41.80 per 100,000.
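To make the Spearman arithmetic above explicit, here is a minimal sketch (not part of the original investigation) that reproduces the coefficient from the summary statistics quoted in the Results; the raw per-country data from Table 4 are not reproduced here, but given them, scipy.stats.pearsonr and scipy.stats.spearmanr could compute both coefficients directly.

```python
# Reproduce the Spearman rank correlation from the summary statistics quoted above:
# sum of squared rank differences = 216, number of countries n = 12.

sum_d_squared = 216
n = 12

rs = 1 - (6 * sum_d_squared) / (n * (n**2 - 1))
print(f"Spearman's Rs = {rs:.3f}")   # 0.245

# Critical value for n = 12 at the 5% level (two-tailed), as quoted in the article.
critical_value = 0.587
print("significant" if abs(rs) > critical_value else "not significant")  # not significant
```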
Another explanation for this lack of correlation could be that less developed countries were more used to dealing with a pandemic, or with stress on a healthcare system, due to previous experience. For example, after the SARS outbreak, many countries decided to prepare in case of a pandemic; however, some high-HDI countries such as the UK chose not to, and even ignored other warnings on the effects of a pandemic (such as the Exercise Cygnus simulation). Moreover, studies have shown that as a very-high-HDI country becomes more developed, its healthcare system continues to develop until it reaches a peak where its effectiveness is undermined by economic interests. This would help explain why the UK had a death rate of 68.00 per 100,000 and a total death count of over 45,000 (as of December 2020).

Implications

Since there is no correlation between a country's HDI and its COVID-19 mortality rate, the same may apply to other diseases that became pandemics, such as the 1918 Spanish flu or, more recently, the SARS outbreak of the early 21st century. Tropical diseases (malaria, dengue, chikungunya and others), by contrast, present in only certain geographies. This means that the countries affected by these ailments will be of a similar HDI and economic status; therefore, to a certain extent, there would be a correlation between a country's HDI and its mortality rate for these diseases.

Investigation conducted and written by Roshan Gill
Tables, charts, stats and calculations by Roshan Gill
This summary by Manisha Halkhoree
'Implications' section by Manisha Halkhoree

Related articles: Causality vs correlation / Impacts of global warming on dengue fever / Global Health Injustices (series)
- Silicone hydrogel contact lenses | Scientia News
An engineering case study (published 29/04/24; last updated 17/07/25)

Introduction

Contact lenses have a rich and extensive history dating back over 500 years, to 1508, when Leonardo da Vinci first conceived the idea. It was not until the late 19th century that the concept of contact lenses as we know them now was realised: in 1887, F.E. Muller was credited with making the first eye covering that could improve vision without causing any irritation. This eventually led to the first generation of hydrogel-based lenses, as the development of the polymer hydroxyethyl methacrylate (HEMA) allowed Rishi Agarwal to conceive the idea of disposable soft contact lenses.

Silicone hydrogel contact lenses dominate the contemporary market. Their superior properties have extended wear options and transformed the landscape of vision correction. These small but complex items continue to evolve, benefiting wearers worldwide - so much so that the most recent generation of silicone hydrogel lenses has recently been released and aims to phase out the existing products.

Benefits of silicone hydrogel lenses

There are many benefits to this material's use in this application. For example, its higher oxygen permeability improves user comfort and experience through the increased oxygen transmissibility the material offers. These properties are furthered by the lens's moisture retention, which allows for longer wear times without compromising comfort or eye health. Silicone hydrogel lenses thus aimed to eradicate the drawbacks of traditional hydrogel lenses, including low oxygen permeability, lower lens flexibility, and dehydration causing discomfort and long-term issues. This groundbreaking invention has revolutionised convenience and hygiene for users.

The structure of silicone hydrogel lenses

Lenses are fabricated from a blend of two materials: silicone and hydrogel. The silicone component provides high oxygen permeability, while the hydrogel component contributes to comfort and flexibility. Silicone is a synthetic polymer and is inherently oxygen-permeable; it allows more oxygen to reach the cornea, promoting eye health and avoiding hypoxia-related symptoms. Its polymer chains form a network, creating pathways for oxygen diffusion. Hydrogel materials, by contrast, are hydrophilic polymers that retain water, keeping the lens moist and comfortable while contributing to the lens's flexibility and wettability. The two materials are combined using cross-linking techniques, which stabilise the matrix to make the most of both sets of properties and prevent dissolution (see Figure 1).

There are two forms of cross-linking that enable the production of silicone hydrogel lenses: chemical and physical. Chemical cross-linking involves covalent bonds between polymer chains, enhancing the lens's mechanical properties and stability. Physical cross-links include ionic interactions, hydrogen bonding, and crystallisation. Both techniques contribute to the lens's structure and properties and can be enhanced with polymer modifications. In fact, silicone hydrogel macromolecules have been modified to optimise properties such as miscibility with hydrophilic components, clinical performance and wettability.
The new generation of silicone hydrogel contact lenses

Properties

Studies show that wearers of silicone hydrogel lenses report higher comfort levels throughout the day and at the end of the day compared with conventional hydrogel lenses. This is attributed to the fact that they allow around 5 times more oxygen to reach the cornea - significant, as a reduced oxygen supply can lead to dryness, redness, blurred vision, discomfort, and even corneal swelling. What's more, the most recent generation of lenses has further improved material properties, the first of which is enhanced durability and wear resistance. This is attributed to their complex and unique material composition, which maintains their shape and makes them suitable for various lens designs. Additionally, they exhibit a balance between hydrophilic and hydrophobic properties, which has traditionally caused issues with surface wettability; this generation of products has overcome this through surface modifications that improve wettability and therefore comfort. Not only this, but silicone hydrogel materials attract relatively fewer protein deposits, and reduced protein buildup leads to better comfort and less frequent lens replacement.

Manufacturing

Most current silicone hydrogel lenses are produced using one of two key manufacturing processes: cast moulding or lathe cutting. In lathe cutting, the material is polymerised into solid rods, which are then cut into buttons for further processing on a computerised lathe, creating the lenses. Furthermore, surface modifications are employed to enhance the finished lens: for example, plasma surface treatments enhance biocompatibility and improve surface wettability compared with earlier silicone elastomer lenses.

Future innovations

There are various future expansions related to this material and this application. Currently, researchers are exploring ways to create customised and personalised lenses tailored to an individual's unique eye shape, prescription, and lifestyle. One of the ways they aim to do this is by using 3D printing and digital scanning to allow for precise fitting. Although this is feasible, there are challenges relating to scalability and cost-effectiveness while ensuring quality. Another possible expansion is smart contact lenses, which aim to go beyond simply improving the user's vision. For example, smart lenses are currently being developed for glucose and intraocular pressure monitoring, to benefit patients with diseases including diabetes and glaucoma respectively. The challenges associated with this idea are data transfer, oxygen permeability and therefore comfort (see Figure 2).

Conclusion

In conclusion, silicone hydrogel lenses represent a remarkable fusion of material science and engineering. Their positive impact on eye health, comfort, and vision correction continues to evolve. As research progresses, we can look forward to even more innovative solutions benefiting visually impaired individuals worldwide.

Written by Roshan Gill

Related articles: Semi-conductor manufacturing / Room-temperature superconductor / Titan Submersible / Nanogels
- Reaching new horizons in Alzheimer's research | Scientia News
The role of CRISPR-Cas9 technology (published 12/10/23; last updated 10/07/25)

The complexity of Alzheimer's

Alzheimer's disease (AD) is a formidable foe, marked by its relentless progression and the absence of a definitive cure. As the leading cause of dementia, its prevalence is expected to triple by 2050. Traditional therapies mainly focus on managing symptoms; however, advances in genetics research, specifically CRISPR-Cas9 gene-editing technology, offer newfound hope for understanding and treating this debilitating condition. The disease is characterized by progressive deterioration of cognitive function, with memory loss being its hallmark symptom. It primarily affects individuals aged 65 and over, and age is the most significant risk factor. Although the precise cause remains elusive, scientists believe that a combination of genetic, lifestyle and environmental factors contributes to its development.

CRISPR's role in Alzheimer's research

Since the discovery that CRISPR-Cas9 can be used for gene editing, the technology has been receiving interest for its potential ability to manipulate genes contributing to Alzheimer's. Researchers from the University of Tokyo used a CRISPR-Cas9 screening technique to identify calcium and integrin-binding protein 1, which is involved in the formation of AD. Furthermore, Canadian researchers have edited genes in brain cells to prevent Alzheimer's using CRISPR. The team identified a genetic variant called A673T, found to decrease the likelihood of Alzheimer's by a factor of four and to reduce the Alzheimer's biomarker beta-amyloid (Aβ). Using CRISPR in petri dish studies, they managed to activate this A673T variant in lab-grown brain cells. However, the reliability and validity of this finding are yet to be confirmed by replication in animal studies.

One final example of a CRISPR application is targeting the amyloid precursor protein (APP) gene. The Swedish mutation in the APP gene is associated with dominantly inherited AD. Scientists were able to specifically target and disrupt the mutant allele of this gene using CRISPR, which decreased the pathogenic Aβ peptide. Degenerating neurons are surrounded by Aβ fibrils, and the production of Aβ in the brain initiates a series of events that cause the clinical syndrome of dementia. The results of this study were replicated both ex vivo and in vivo, demonstrating that this could be a potential treatment strategy in the future.

The road ahead

While CRISPR technology's potential in Alzheimer's research is promising, its therapeutic application is still in its infancy. Nevertheless, with cutting-edge tools like CRISPR deepening our understanding of AD, we are on the cusp of breakthroughs that could transform the landscape of Alzheimer's disease treatment.

Written by Maya El Toukhy

Related articles: Alzheimer's disease (an overview) / Hallmarks of Alzheimer's / Sleep and memory loss
- Allergies | Scientia News
Deconstructing allergies: mechanisms, treatments, and prevention (published 13/05/24; last updated 08/07/25)

Modern populations have witnessed a dramatic surge in the number of people grappling with allergies, a condition that can lead to a myriad of health issues such as eczema, asthma, hives, and, in severe cases, anaphylaxis. For those who are allergic, otherwise harmless substances (allergens) can trigger life-threatening reactions due to an abnormal immune response. Common allergens include antibiotics like penicillin, as well as animals, insects, dust, and various foods. The need for strict dietary restrictions and the constant fear of accidental encounters with allergens often plague patients and their families. Negligent business practices and mislabelled food have even led to multiple reported deaths, underscoring the gravity of allergies and their alarming rise in prevalence.

The primary reason for the global increase in allergies is believed to be a lack of exposure to microorganisms during early childhood. The human microbiome, the collection of microorganisms that live in and on our bodies, is a key player in our immune system. The rise in sanitation practices is thought to reduce the diversity of the microbiome, potentially affecting immune function. This lack of exposure to infections may cause the immune system to overreact to normally harmless substances like allergens. Furthermore, there is speculation about the impact of vitamin D deficiency, which is becoming more common due to increased time spent indoors. Vitamin D is known to support a healthy immune response, and its deficiency could worsen allergic reactions.

Immune response

Allergic responses occur when specific proteins within an allergen are encountered, triggering an immune response of the kind typically used to fight infections. The allergen's proteins bind to complementary receptors on macrophage cells, causing these cells to engulf the foreign substance. Peptide fragments from the allergen are then presented on the cell surface via major histocompatibility complexes (MHCs), activating receptors on T helper cells. These activated T cells stimulate B cells to produce immunoglobulin E (IgE) antibodies against the allergen. This sensitizes the immune system to the allergen, making the individual hypersensitive. Upon re-exposure, the allergen binds and cross-links IgE antibodies attached to receptors on mast cells, triggering the release of histamine into the bloodstream. Histamine causes vasodilation and increases vascular permeability, leading to inflammation and erythema. In milder cases, patients may experience itching, hives, and a runny nose; however, in severe allergic reactions, intense swelling can cause airway constriction, potentially leading to respiratory compromise or even cessation. At this critical point, conventional antihistamine therapy may not be enough, necessitating the immediate use of an EpiPen to alleviate symptoms and prevent further deterioration.

EpiPens administer a dose of epinephrine, also known as adrenaline, when an individual experiences anaphylactic shock, which is typically characterised by breathing difficulties. The primary function of the EpiPen is to relax the muscles in the airway, facilitating easier breathing.
Epinephrine also counteracts the decrease in blood pressure associated with anaphylaxis by narrowing the blood vessels, which helps prevent symptoms such as weakness or fainting. EpiPens are the primary treatment for severe allergic reactions leading to anaphylaxis and have been proven effective. However, the reliance on EpiPens underscores the necessity of additional preventative measures for individuals with allergies, before a reaction ever occurs.

Preventative treatment

Young individuals may have a genetic predisposition to developing allergies, a condition referred to as atopy. Many atopic individuals develop multiple hypersensitivities during childhood, but some may outgrow these allergies by adulthood. For high-risk atopic children, however, preventive measures may offer a promising way to reduce the risk of developing severe allergies. Clinical trials conducted on atopic infants explored the concept of immunotherapy treatment, involving continuous exposure to small doses of peanut allergens to prevent the onset of a full-blown allergy. Initially, skin prick tests for peanut allergens were performed, and only children exhibiting negative or mild reactions were selected for the trial; those with severe reactions were excluded due to the high risk of anaphylactic shock with continued exposure. The remaining participants were randomly assigned to either consume peanuts or follow a peanut-free diet. Monitoring these infants as they aged revealed that continuous exposure to peanuts reduced the prevalence of peanut allergy by the age of 5: only 3% of atopic children exposed to peanuts developed an allergy, compared with 17% of those in the peanut-free group.

The rise in severe allergies poses a growing concern for global health. Once an atopic individual develops an allergy, mitigating their hypersensitivity can be challenging. Current approaches often involve waiting for children to outgrow their allergies, overlooking the ongoing challenges faced by adults who remain highly sensitive to allergens. Implementing preventive measures, such as early exposure through immunotherapy, could enhance the quality of life for future generations and prevent sudden deaths in at-risk individuals.

In conclusion, the dramatic surge in the prevalence of allergies in modern populations requires more attention from researchers and healthcare providers. Living with allergies can bring many complexities into someone's life even before they potentially have a serious reaction. Current treatments are focused on post-reaction emergency care; however, preventative strategies are still a pressing need. With cases of allergies predicted to rise further, research into this global health issue will become increasingly important. There are already promising results from early trials of immunotherapy treatments, and with further research and implementation these treatments could improve the quality of life of future generations.

Written by Charlotte Jones

Related article: Mechanisms of pathogen evasion
- Bone cancer | Scientia News
Pathology and emerging therapeutics (published 12/10/23; last updated 09/07/25)

Introduction: what is bone cancer?

Primary bone cancer can originate in any bone. However, most cases develop in the long bones of the legs or upper arms. Each year, approximately 550 new cases are diagnosed in the United Kingdom. Primary bone cancer is distinct from secondary bone cancer, which occurs when cancer spreads to the bones from another region of the body. The focus of this article is on primary bone cancer.

There are several types of bone cancer: osteosarcoma, Ewing sarcoma, and chondrosarcoma. Osteosarcoma originates in the osteoblasts that form bone. It is most common in children and teens, with the majority of cases occurring between the ages of 10 and 30. Ewing (pronounced YOO-ing) sarcoma develops in bones or in the soft tissues around the bones; like osteosarcoma, this cancer type is more common in children and teenagers. Chondrosarcoma occurs in the chondrocytes that form cartilage. It is most common in adults between the ages of 30 and 70 and is rare in the under-21 age group. Causes of bone cancer include genetic factors, such as inherited mutations and syndromes, and environmental factors, such as previous radiation exposure. Treatment will often depend on the type of bone cancer, as the specific pathogenesis of each case is unknown.

What is the standard treatment for bone cancer?

Most patients are treated with a combination of surgical excision, chemotherapy, and radiation therapy. Surgical excision is employed to remove the cancerous bone. Typically, it is possible to repair or replace the bone, although amputation is sometimes required. Chemotherapy involves using powerful chemicals to kill rapidly growing cells in the body. It is widely used for osteosarcoma and Ewing sarcoma but less commonly for chondrosarcomas. Radiation therapy (also termed radiotherapy) uses high doses of radiation to damage the DNA of cancer cells, killing them or slowing their growth. Six out of every ten individuals with bone cancer will survive for at least five years after their diagnosis, and many of these will be completely cured.

However, these treatments have limitations in terms of effectiveness and side effects. The limitation of surgical excision is the inability to eradicate microscopic cancer cells around the edges of the tumour; additionally, the patient must be able to withstand the surgery and anaesthesia. Chemotherapy can harm the bone marrow, which produces new blood cells, leading to low blood cell counts and an increased risk of infection due to a shortage of white blood cells. Moreover, radiation therapy uses high doses of radiation, resulting in damage to nearby healthy tissues such as nerves and blood vessels. Taken together, this underscores the need for a therapeutic approach that is non-invasive, bone cancer-specific, and associated with limited side effects.

miR-140 and tRF-GlyTCC

Dr Darrell Green and colleagues investigated the role of small RNAs (sRNAs) in bone cancer and its progression. Through the analysis of patient chondrosarcoma samples, the researchers identified two sRNA candidates associated with overall patient survival: miR-140 and tRF-GlyTCC. miR-140 was suggested to inhibit RUNX2, a gene upregulated in high-grade tumours.
Simultaneously, tRF-GlyTCC was demonstrated to inhibit RUNX2 expression by displacing YBX1, a multifunctional protein with various roles in cellular processes. Interestingly, the researchers found that tRF-GlyTCC was attenuated during chondrosarcoma progression, indicating its potential involvement in disease advancement. Furthermore, since RUNX2 has been shown to drive bone cancer progression, the identified miR-140 and tRF-GlyTCC present themselves as promising therapeutic targets.

CADD522

Dr Darrell Green and colleagues subsequently investigated the impact of a novel therapeutic agent, CADD522, designed to target RUNX2. In vitro experiments revealed that CADD522 reduced proliferation in chondrosarcoma and osteosarcoma. However, a bimodal effect was observed in Ewing sarcoma: lower levels of CADD522 promoted sarcoma proliferation, whereas higher levels of the same drug suppressed proliferation. In mouse models treated with CADD522, there was a significant reduction in cancer volume in both osteosarcoma and Ewing sarcoma.

Take-home message

The results described here contribute to understanding the molecular mechanisms involved in bone cancer. They highlight the anti-proliferative and anti-tumoral effects of CADD522 in treating osteosarcoma and Ewing sarcoma. Further research is necessary to fully elucidate the specific molecular mechanism of CADD522 in bone cancer and to identify potential side effects.

Written by Favour Felix-Ilemhenbhio

Related articles: Secondary bone cancer / Importance of calcium / Novel neuroblastoma driver for therapeutics
- Schizophrenia, Inflammation and Accelerated Aging: a Complex Medical Phenotype | Scientia News
Setting Neuropsychiatry in a Wider Medical Context (published 24/05/23; last updated 20/02/25)

In novel research, Campeau et al. (2022) performed a proteomic analysis of 742 proteins from the blood plasma of 54 schizophrenic participants and 51 age-matched healthy volunteers. This investigation validated the previously contentious link between premature aging and schizophrenia by testing for a wide variety of proteins involved in cognitive decline, aging-related comorbidities, and biomarkers of earlier-than-average mortality. The results demonstrated that age-linked changes in protein abundance occur earlier in life in people with schizophrenia. These data also help to explain the heightened incidence of age-related disorders and early all-cause death in schizophrenic people, with protein imbalances associated with both phenomena present in all schizophrenic age strata over the age of 20.

This research is the result of years of medical intrigue regarding the biomedical underpinnings of schizophrenia. The comorbidities and earlier death associated with schizophrenia were focal points of research for many years, but only now have valid explanations been proposed for these phenomena. In this study, the greater incidence of early death in schizophrenia was attributed to the increased abundance of certain proteins. Specifically, these included biomarkers of heart disease (Cystatin-3, Vitronectin), blood clotting abnormalities (Fibrinogen-B) and an inflammatory marker (L-Plastin). These proteins were tested for due to their inclusion in a dataset of protein biomarkers of early all-cause mortality in healthy and mentally ill people, published by Ho et al. (2018) in the Journal of the American Heart Association.

Furthermore, a protein linked to degenerative cognitive deficit with age, Cystatin C, was present at increased levels in schizophrenic participants both under and over the age of 40. This may help explain why antipsychotics have limited effectiveness in reducing the cognitive effects of schizophrenia. In this study, schizophrenic participants under 40 had plasma protein content similar to that of the healthy over-60 stratum, including biomarkers of cognitive decline, age-related diseases and death, and showed the same likelihood of incidence of the latter phenomena as the healthy over-60 set.

These results could demonstrate the value of medications often used to treat age-related cognitive decline and mortality-linked protein abundances in treating schizophrenia. One such option is polyethylene glycol-Cp40, a C3 inhibitor used to treat paroxysmal nocturnal haemoglobinuria, which could be used to ameliorate the risk of developing age-related comorbidities in schizophrenic patients. This treatment may be effective in reducing C3 activation, which would reduce opsonisation (the tagging of detected foreign products in the blood). When overexpressed, C3 can cause the opsonisation of healthy blood cells, leading to their destruction in a process called haemolysis, which can contribute to the reduction in blood cell volume implicated in cardiac events and other comorbidities. However, whether or not this treatment would benefit those with schizophrenia is yet to be proven.
The potential of this research to catalyse new treatment options for schizophrenia cannot be overstated. Since the publication of Kilbourne et al. in 2009, the impact of cardiac comorbidities in catalysing early death in schizophrenic patients has been accepted medical dogma. The discovery of exact protein targets to reduce the incidence of age-linked conditions and early death in schizophrenia will allow the condition to be treated more holistically, with greater recognition of the fact that schizophrenia is not only a psychiatric illness but also a neurocognitive disorder with associated comorbidities that must be adequately prevented.

Written by Aimee Wilson

Related articles: Genetics of ageing and longevity / Ageing and immunity / Inflammation therapy










