Search Index
355 results found
- Conserving the California condors | Scientia News
Captive breeding has grown the California condor population over 18-fold
Last updated: 24/04/25, 11:46. Published: 04/11/24, 14:56.

This is article no. 2 in a series on animal conservation. Next article: Beavers are back in Britain. Previous article: The cost of coats: celebrating 55 years of vicuna conservation.

California condors are critically endangered birds living on the west coast of North America. Their population decline was first reported in 1953, and they were nearly extinct by 1987. Since then, a captive breeding and reintroduction program has saved the species in the face of multiple human threats. This article describes some of those threats and the measures available to mitigate them.

Why California condors became endangered

Lead poisoning was the main cause of California condor mortality in the late 20th century. Like other vultures, California condors eat dead mammals. When these mammals had been shot with lead bullets, condors ingested fragments of the bullets, and the lead poisoned their bloodstream. Multiple condors feeding on the same carcass could be poisoned at once, which may be why the population crashed so badly.

Today, lead poisoning is the biggest, but not the only, threat to California condor survival (Figure 1). The birds were hunted for museums and private collections in the early 20th century, but nowadays any shootings are accidental. A bigger concern, and the second-most common human-related cause of mortality, is condors colliding with utility poles and power lines. The third-most common is fire: a 2015 study found that every recent wildfire in California coincided with at least one condor death. Climate change will make these fires more frequent and severe.
These threats mainly apply to inland California condors; halogenated organic compound (HOC) pollution is an issue for coastal birds. When coastal condors eat marine mammals contaminated with HOCs, the compounds disrupt their reproductive systems and thin their eggshells. In short, humans have created a hostile environment for California condors.

Successful captive breeding and population recovery

Despite these threats, captive breeding has grown the California condor population over 18-fold (Figure 2). In 1987, all remaining wild condors were captured and bred, with juveniles released to the wild from 1992 onwards. Reintroduced birds are monitored regularly, and poisoned birds are treated with chelation therapy, in which a drug binds to lead in the bloodstream and carries it to the kidneys to be filtered out. Since 1995, power line collisions have been reduced by giving juveniles behavioural training before reintroduction. Because of these measures, the California condor mortality rate in the wild decreased from 37.2% in 1992-1994 to 5.4% in 2001-2011.

Challenges of conserving California condors

Although captive breeding has saved the California condor population, it has also altered behaviours. The original condors stay with one mate longer than reintroduced condors, which may form polygamous relationships. Scientists think that spending so much time with non-family members in captivity has made juveniles promiscuous when reintroduced. Captive-bred condors have also become accustomed to being fed by people, so they approach people more often, spend longer in areas of human activity, and forage over a smaller area than the original condors. Moreover, condors in southern California have been spotted feeding human litter to their chicks. These behavioural changes mean the wild California condor population is not self-sustaining. Nor is it self-sustaining while condors are still being poisoned (Figure 3).
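These recovery figures can be sanity-checked with quick arithmetic. This is a rough sketch only: the starting count of 27 birds in the 1987 capture is a commonly cited figure assumed here, not stated in the article itself.

```python
# Rough sanity check of the ">18-fold" population growth claim.
# ASSUMPTION: 27 birds in the 1987 capture - a commonly cited figure
# that does not appear in the article itself.
start_1987 = 27
growth_factor = 18
implied_population = start_1987 * growth_factor
print(implied_population)  # 486

# The article's reported drop in the wild mortality rate:
mortality_1992_1994 = 0.372
mortality_2001_2011 = 0.054
improvement = mortality_1992_1994 / mortality_2001_2011
print(round(improvement, 1))  # ~6.9-fold lower mortality
```

An implied population in the high hundreds is broadly consistent with the trajectory shown in the U.S. Fish and Wildlife Service population graph cited below.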
Banning lead bullets is the most effective way to guarantee population growth, but enforcing a ban has proved challenging, and non-toxic alternatives such as copper bullets have struggled to gain popularity. In terms of population growth, every adult California condor killed is estimated to cost the equivalent of 2-3 reintroduced juveniles, because released juveniles are more vulnerable and take years to reach breeding age. American conservationists must therefore keep pressuring authorities to reduce threats to adult California condors.

Conclusion

Pollution, urbanisation, and climate change have made it hard for the California condor population to recover from decades of lead poisoning. Long generation times and behavioural changes mean captive breeding is the species' only hope of survival. Perhaps humans are the ones who need to change their behaviour: not feeding California condors and switching to copper bullets would allow these majestic birds to keep roaming the skies.

Written by Simran Patel

Related articles: Marine iguana conservation / Deception by African birds / Emperor penguins

REFERENCES

Bakker, V.J. et al. (2024) Practical models to guide the transition of California condors from a conservation-reliant to a self-sustaining species. Biological Conservation. 291: 110447. Available from: https://www.sciencedirect.com/science/article/pii/S0006320724000089 (Accessed 19th September 2024).
D'Elia, J., Haig, S.M., Mullins, T.D. & Miller, M.P. (2016) Ancient DNA reveals substantial genetic diversity in the California Condor (Gymnogyps californianus) prior to a population bottleneck. The Condor. 118 (4): 703–714. Available from: https://doi.org/10.1650/CONDOR-16-35.1 (Accessed 28th September 2024).
Finkelstein, M.E. et al. (2023) California condor poisoned by lead, not copper, when both are ingested: A case study. Wildlife Society Bulletin. 47 (3): e1485. Available from: https://onlinelibrary.wiley.com/doi/abs/10.1002/wsb.1485 (Accessed 28th September 2024).
Kelly, T.R. et al. (2015) Two decades of cumulative impacts to survivorship of endangered California condors in California. Biological Conservation. 191: 391–399. Available from: https://www.sciencedirect.com/science/article/pii/S0006320715300173 (Accessed 28th September 2024).
Mee, A. & Snyder, N. (2007) California Condors in the 21st Century - conservation problems and solutions. In: 243–279.
Meretsky, V.J., Snyder, N.F.R., Beissinger, S.R., Clendenen, D.A. & Wiley, J.W. (2000) Demography of the California Condor: Implications for Reestablishment. Conservation Biology. 14 (4): 957–967. Available from: https://onlinelibrary.wiley.com/doi/abs/10.1046/j.1523-1739.2000.99113.x (Accessed 29th September 2024).
Stack, M.E. et al. (2022) Assessing Marine Endocrine-Disrupting Chemicals in the Critically Endangered California Condor: Implications for Reintroduction to Coastal Environments. Environmental Science & Technology. 56 (12): 7800–7809. Available from: https://doi.org/10.1021/acs.est.1c07302 (Accessed 19th September 2024).
U.S. Fish and Wildlife Service (2023) California Condor Population Graph, 1980-2022. Available from: https://www.fws.gov/media/california-condor-population-graph-1980-2022 (Accessed 28th September 2024).
U.S. Fish and Wildlife Service (2020) California Condor Recovery Program 2020 Annual Population Status. Available from: https://www.fws.gov/sites/default/files/documents/2020-California-Condor-Population-Status.pdf (Accessed 28th September 2024).
- Cryptosporidium: bridging local outbreaks to global health disparities | Scientia News
Investigating the outbreak in Devon, UK in May 2024
Last updated: 20/03/25, 12:06. Published: 01/09/24, 12:50.

In early May, news emerged of numerous Devon (UK) residents experiencing vomiting and diarrhoea. Mainly affecting the Brixham region, over 40 people were diagnosed with cryptosporidiosis, and over 16,000 homes were advised to boil water before consuming it to kill potential pathogens (Figure 1). South West Water's (SWW) handling of the situation was controversial, from initial denial of the 'crisis' to major profit increases for the company, but the outbreak was eventually linked to a damaged pipe through which animal faeces could have entered and contaminated the water supply, an SWW representative suggested. In this article, we will investigate the disease and its relevance worldwide.

So, what is cryptosporidiosis?

Cryptosporidiosis is commonly associated with gastrointestinal symptoms, such as vomiting, diarrhoea and severe abdominal cramps. It is caused by Cryptosporidium, a member of the Apicomplexa. This microorganism is an intracellular gut parasite which invades the microvilli in the gut and depletes host nutrients. The parasite is spread via faecal-oral transmission, and it is commonly found in contaminated water, food and animals. Its life cycle starts with oocyst (egg) ingestion, leading to attachment to the host gut epithelium and asexual reproduction. This allows sexual reproduction to ensue, followed by oocyst formation. Eventually, the oocysts are released via faeces, for the cycle of infection to continue. Cryptosporidium species are often identified by the immune system via Toll-like receptors, specifically TLR-4, in the gut epithelium; Cryptosporidium-derived molecules are treated as TLR-4 ligands, since the microbe does not produce LPS molecules.
Innate immune signalling pathways, such as NF-κB, are triggered, encouraging secretion of IL-8, CXCL1 and other chemokines from the gut (Figure 2). Consequently, gut inflammation increases, as do levels of Intercellular Adhesion Molecule-1 (ICAM-1), to aid immunocyte recruitment and improve pathogen clearance. Other mechanisms the epithelial barrier uses to eliminate Cryptosporidium infection include NO secretion and mucin production, which kill the pathogen and prevent further infection by blocking extracellular oocyst binding, respectively. In some individuals, Cryptosporidium can evade the immune response due to its intracellular nature. Most immunocompetent patients suffer mild symptoms and so are offered symptomatic treatment, but some immunocompromised patients (those with HIV, for example) can develop chronic diarrhoea as a result of Cryptosporidium infection. In this instance, managing fluid loss and rest is often insufficient; these patients are prescribed nitazoxanide, a broad-spectrum antiparasitic, to manage their diarrhoea.

Cryptosporidiosis on a global scale

Although its handling was controversial, the Cryptosporidium 'crisis' in Devon was resolved relatively quickly compared to outbreaks in other countries (Figure 3). There are clear links between socio-economic dynamics and water-borne illness prevalence. In some developing regions, such as areas in the Middle East and North Africa (MENA), cryptosporidiosis is considered endemic, due to poor-quality water-sanitation infrastructure, rapid population growth and inadequate potable water supply. Globally, 3.4 million people die each year from water-borne illnesses, and poor sanitation ranks higher among causes of human morbidity than war and terrorism. Additionally, in 2015, Cryptosporidium was the fourth leading cause of diarrhoeal death amongst children under 5, clearly highlighting the danger this parasite can pose.
For children in developing countries, who are already vulnerable to malnutrition, cryptosporidiosis can kick-start a malnutrition cycle: the parasite depletes host nutrients, exacerbating malnutrition and potentially causing cognitive impairment and growth stunting.

Cryptosporidiosis, although typically mild, can be devastating for some people, particularly the immunocompromised, young children and the malnourished. The water contamination in Devon (UK), handled by SWW, was unfortunate, and many in the region experienced severe illness. Globally, cryptosporidiosis is a major problem, and in some regions it is considered endemic. Thus, it is important that we spread awareness of the devastating effects of this disease, continue efforts to prevent transmission and strive for eradication.

Written by Eloise Nelson

REFERENCES

Abuseir, S. (2023) 'A systematic review of frequency and geographic distribution of water-borne parasites in the Middle East and North Africa', Eastern Mediterranean Health Journal, 29(2), pp. 151–161. doi:10.26719/emhj.23.016.
Chalmers, R.M., Davies, A.P. and Tyler, K. (2019) 'Cryptosporidium', Microbiology, 165(5), pp. 500–502. doi:10.1099/mic.0.000764.
Hassan, E.M. et al. (2020) 'A review of Cryptosporidium spp. and their detection in water', Water Science and Technology, 83(1), pp. 1–25. doi:10.2166/wst.2020.515.
Sky News (2024) 'Brixham: More than 50 people in Devon ill from contaminated water - as South West Water's owner posts £166m profit', 21 May. Available at: https://news.sky.com/story/brixham-more-than-50-people-in-devon-ill-from-contaminated-water-as-south-west-waters-owner-posts-166m-profit-13140820 .
Sparks, H. et al. (2015) 'Treatment of cryptosporidium: What we know, gaps, and the way forward', Current Tropical Medicine Reports, 2(3), pp. 181–187. doi:10.1007/s40475-015-0056-9.
Cacciò, S.M. Cryptosporidium: parasite and disease, Immunology of Cryptosporidiosis. Springer Verlag GmbH; 2016.
- Plastics and their environmental impact: a double-edged sword | Scientia News
The chemistry that makes plastics strong also makes them extremely resistant to deterioration
Last updated: 10/07/25, 10:29. Published: 06/11/24, 12:25.

Plastics have become an indispensable part of modern life. They are found in everything from electronics and packaging to construction materials and medical equipment. These multipurpose materials, mostly derived from petrochemicals, are successful because they are inexpensive, lightweight, and long-lasting. However, the very resilience that makes them so beneficial is also one of the biggest environmental problems of our time. The chemistry that makes plastics strong also makes them extremely resistant to deterioration, causing environmental damage and widespread contamination.

The chemistry behind plastics

Most plastics are composed of polymers: long chains of repeating molecular units called monomers. Depending on how the molecules are arranged and the chemical additives used during synthesis, these polymers can be made with a variety of characteristics, from stiffness to flexibility. Hydrocarbons from natural gas or crude oil are polymerised to create common plastics like polypropylene, used in food containers, and polyethene, used in plastic bags. While these plastics are ideal for their intended purposes (protecting products, storing food, and more), they are extremely resistant to degradation. This is due to their stable carbon-carbon bonds, which natural organisms and processes find difficult to break. As a result, plastics can remain in the environment for hundreds of years, breaking down into ever-smaller fragments rather than disappearing entirely (see Figure 1).
The problem of microplastics

Plastics in the environment degrade over time into tiny fragments known as microplastics, defined as particles smaller than 5 mm in diameter. These microplastics originate from a variety of sources, including the breakdown of larger plastic debris, microbeads used in personal care products, synthetic fibres shed from textiles, and industrial processes. They are now widespread in every corner of the globe, from the deepest parts of the oceans to remote mountain ranges, the air we breathe, and even drinking water and food.

Microplastics are particularly problematic in marine environments. Marine animals such as fish, birds, and invertebrates often mistake microplastics for food. Once ingested, these particles can accumulate in the animals' digestive systems, leading to malnutrition, physical damage, or even death. More concerning is the potential for these plastics to work their way up the food chain. Predators, including humans, may consume prey that has ingested microplastics, raising concerns about the potential effects on human health. Recent studies have detected microplastics in various human-consumed products, including seafood, table salt, honey, and drinking water. Alarmingly, microplastics have also been found in human organs, blood, and even placentas, highlighting the pervasive nature of this contamination.

While the long-term environmental and health effects of microplastics are still not fully understood, research raises significant concerns. Microplastics can carry toxic substances such as persistent organic pollutants (POPs) and heavy metals, posing risks to the respiratory, immune, reproductive, and digestive systems. Exposure through ingestion, inhalation, and skin contact has been linked to DNA damage, inflammation, and other serious health issues.

Biodegradable plastics: a possible solution?
One possible solution to plastic pollution is the development of biodegradable plastics, which are engineered to degrade more easily in the environment. These plastics can be created from natural sources such as maize starch or sugarcane, which are turned into polylactic acid (PLA), or from petroleum-based compounds designed to disintegrate more quickly. However, biodegradable polymers do not provide a perfect answer. Many of these materials require specific conditions, such as high heat and moisture, to degrade effectively, and these conditions are more commonly found in industrial composting plants than in landfills or natural ecosystems. As a result, many biodegradable plastics can persist in the environment if not properly disposed of. Furthermore, their production frequently demands significant quantities of energy and resources, raising questions about whether they are actually more sustainable than traditional plastics.

Innovations in plastic recycling

Given the limitations of biodegradable polymers, improving recycling technology has become a central front in the battle against plastic waste. Traditional mechanical recycling involves breaking down plastics and remoulding them into new products, but this process can degrade the material's quality over time. Furthermore, many types of plastics are difficult or impossible to recycle due to variations in chemical structure, contamination, or a lack of adequate machinery. Recent advances have been made to address these issues. Chemical recycling, for example, converts plastics back into their original monomers, allowing them to be re-polymerised into high-quality plastic. In principle, this technique could allow materials to be recycled indefinitely without compromising functionality.
Another intriguing technique is enzymatic recycling, in which specially engineered enzymes break down plastics into their constituent parts at lower temperatures, reducing the energy required for the process. While these technologies provide hope, they are still in the early phases of development and face significant economic and logistical challenges. Expanding recycling infrastructure and developing more effective methods are critical to reducing the amount of plastic waste entering the environment.

The way forward

The environmental impact of plastics has inspired a global campaign to reduce plastic waste. Governments, industry, and consumers are taking action by prohibiting single-use plastics, increasing recycling efforts, and developing alternatives. However, addressing the plastic problem requires a multifaceted strategy: advances in material science, improved waste management systems, and, perhaps most crucially, a transformation in how we perceive and use plastics in our daily lives.

The chemistry of plastics is both fascinating and dangerous. While they have transformed industries and increased quality of life, their long-term presence in the environment poses a substantial risk to ecosystems and human health. The future of plastics may depend less on developing new materials than on rethinking how we make, use, and discard them, so that we can build a more sustainable relationship with these intricate polymers.

Written by Laura K

Related articles: Genetically-engineered bacteria break down plastic / The environmental impact of EVs
- Chirality in drugs | Scientia News
Why chirality is important in developing drugs
Last updated: 04/02/25, 15:50. Published: 06/06/23, 16:53.

Approximately half of all drugs currently on the market are chiral compounds, and nearly 90% of these chiral drugs are sold as racemates: equimolar mixtures of two enantiomers. Chirality is the property of an object that prevents it from being superimposed on its mirror image, similar to left and right hands. This generic characteristic of "handedness" plays a significant role in the development of many pharmaceutical drugs. It is interesting to note that 20 of the 35 drugs the Food and Drug Administration (FDA) authorised in 2020 are chiral drugs.

For example, ibuprofen, a chiral 2-arylpropionic acid derivative, is a common over-the-counter analgesic, antipyretic, and anti-inflammatory medication. However, ibuprofen and other medications from similar families can have side effects and risks related to their usage. Chiral drugs have the drawback that only one of the two enantiomers may be active, while the other may be ineffective or have negative effects. The inactive enantiomer can occasionally interact with the active enantiomer, lowering its potency or producing undesirable side effects. Additionally, ibuprofen and other chiral pharmaceuticals can interact with other drugs, including over-the-counter and prescription ones. To guarantee that only the active enantiomer is present in chiral-class medications, it is crucial for pharmaceutical companies to closely monitor their production and distribution processes. To lessen the toxicity and adverse effects linked to the inactive enantiomer, medicinal chemistry has recently seen an increase in the use of enantiomerically pure drugs.
In any case, the choice of whether to use a single enantiomer or a mixture of enantiomers of a given medicine should be based on clinical trial results and clinical expertise. Beyond the need to determine and control the enantiomeric purity of enantiomers separated from a racemic mixture, the use of single-enantiomer drugs may result in simpler and more selective pharmacological profiles, improved therapeutic indices, simpler pharmacokinetics, and fewer drug interactions. Although there have been instances where the wrong enantiomer caused unintended side effects, many medications are still used today as racemates, with their associated side effects; this is probably due to both the difficulty of chiral separation techniques and the high cost of production.

In conclusion, ibuprofen and other chiral medications, including those used to treat pain and inflammation, can be useful, but they also carry a number of risks and adverse effects. It is critical to follow a doctor's instructions when using these medications and to be aware of any possible interactions, allergic reactions, and other hazards. To maintain the safety and efficacy of chiral medicines, pharmaceutical producers also have a duty to closely monitor their production and distribution.

Written by Navnidhi Sharma
- Neuroimaging and spatial resolution | Scientia News
Peering into the mind
Last updated: 10/07/25, 10:24. Published: 04/11/24, 14:35.

Introduction

Neuroimaging has been at the forefront of brain discovery ever since the first images of the brain were recorded in 1919 by Walter Dandy, using a technique called pneumoencephalography. Fast-forward over a century, and neuroimaging is much more than blurry single images. Modern techniques allow us to observe real-time changes in brain activity with millisecond resolution, leading to breakthroughs in scientific discovery that would not be possible without it. Memory is a great example: with functional magnetic resonance imaging (fMRI) techniques, we have been able to demonstrate that recent long-term memories are stored and retrieved with brain activity in the hippocampus, but as memories recede further into the past, they are gradually transferred to the neocortex.

While neuroimaging techniques keep the doors open for new and exciting discoveries, spatial limitations leave many questions unanswered, especially at a cellular and circuit level. For example: within the hippocampus, is each memory encoded by a completely distinct neural circuit? Or do similar memories share similar neural pathways? Within just a cubic millimetre of brain tissue there can be up to 57,000 cells (neurons and glia), all of which may have different properties, be part of different circuits, and produce different outcomes. This makes even revolutionary techniques such as fMRI, with almost unparalleled image quality, seem coarse by comparison. To truly understand how neural circuits work, we have to dig as deep as possible and record the smallest regions possible. So that begs the question: how small can we actually record in the human brain?

EEG

2024 marks a century since the first recorded electroencephalography (EEG) scan, performed by Hans Berger in Germany.
This technique involves placing electrodes around the scalp to record activity across the outer surface of the brain (Figure 1). Unlike the methods discussed later, EEG provides a direct measure of brain activity, picking up the electrical signals produced when the brain is active. However, because the electrodes sit only on the scalp, EEG can only detect activity from the outer cortex, missing important activity in deeper parts of the brain. In our memory example, this means it would completely miss any activity in the hippocampus. EEG's spatial resolution is also underwhelming, typically resolving activity only to within a few centimetres; not great for mapping behaviours to specific structures in the brain. In a medical environment, EEG scans are used to measure overall activity levels, assisting with epilepsy diagnosis. Let's look at what we can use to dig deeper into the brain and locate signals of activity…

PET

Positron emission tomography (PET) scans offer a chance to record activity throughout the whole brain by introducing a radioactive tracer into the body, typically glucose labelled with a mildly radioactive substance. This tracer is tracked, and uptake in specific parts of the brain is a sign of greater metabolic activity, indicating a higher signalling rate. PET scans already offer a resolution far beyond the capacity of EEG, distinguishing activity between areas with a resolution of up to 4 mm. With different radioactive labels, we can also detect the activity of specific populations of neurons, such as dopamine neurons, to diagnose Parkinson's disease. In fact, many studies have reliably demonstrated the ability of PET scans to detect the root cause of Parkinson's disease - a reduced number of dopamine neurons in the basal ganglia - before symptoms become severe.
As impressive as that sounds, a 4 mm resolution can locate activity in large areas of the cortex but lacks the resolving power to distinguish discrete cortical layers. Take the human motor cortex: its six layers have a combined average thickness of only 2.79 mm. A PET scan would not be powerful enough to determine which layer is most active, so we need to dig a little deeper…

fMRI

Since its inception in the early 1990s, fMRI has gained a reputation as the gold standard for human neuroimaging, thanks to its non-invasiveness and reliable signal. fMRI uses nuclear magnetic resonance to measure changes in oxygenated blood flow, known as BOLD signals, which correlate with neural activity. Because it measures blood oxygen levels, fMRI cannot match EEG's temporal resolution, and it is not a direct measure of neural activity. fMRI makes up for this with its superior spatial resolution, resolving spaces as small as 1 mm apart. In our human motor cortex example, this would allow us to resolve activity between every 2-3 layers - not a bad return considering it doesn't even leave a scar. PET, and especially EEG, pale in comparison to the capabilities of fMRI, which has since been used for a wide range of neuroimaging research. Most notably, structural MRI has been used to support the idea of hippocampal involvement in spatial navigation from memory tasks (Figure 2). Its resolving power and highly precise images also make it suitable for mapping surgical procedures.

Conclusion

With a resolution of up to 1 mm, fMRI takes the crown as the human neuroimaging technique with the best spatial resolution! Table 1 shows a brief summary of each neuroimaging method. Unfortunately, though, there is still so much more we need to do to look at individual circuits and connections.
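The trade-offs summarised in Table 1 can be sketched as a small lookup. The values below are the approximate, article-level figures quoted above ("a few centimetres" for EEG, 4 mm for PET, 1 mm for fMRI), not device specifications.

```python
# Rough comparison of the three neuroimaging methods discussed,
# using the approximate figures quoted in the article.
modalities = {
    "EEG": {"resolution_mm": 20.0,  # "a few centimetres"
            "coverage": "outer cortex only",
            "measure": "direct (electrical activity)"},
    "PET": {"resolution_mm": 4.0,
            "coverage": "whole brain",
            "measure": "indirect (radiotracer uptake)"},
    "fMRI": {"resolution_mm": 1.0,
             "coverage": "whole brain",
             "measure": "indirect (BOLD blood-oxygen signal)"},
}

# The method with the finest (smallest) spatial resolution:
best = min(modalities, key=lambda m: modalities[m]["resolution_mm"])
print(best)  # fMRI
```

Even the winner sits three orders of magnitude above the tens-of-micrometres scale needed to resolve single neurons, which is the gap the conclusion below discusses.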
As mentioned before, even within a cubic millimetre of brain there are tens of thousands of cells, making the number of neurons in the whole brain almost impossible to comprehend. To observe the activity of a single neuron, we would need an imaging technique capable of resolving cells at the scale of tens of micrometres. So what can we do to reach the resolution we desire while remaining suitable for humans? Maybe there isn't a solution. Instead, if we want to record the activity of single neurons, perhaps we have to take inspiration from invasive animal techniques such as microelectrode recordings. Typically used in rats and mice, these can achieve single-cell resolution, examining neuroscience at its smallest scale. It would be unethical to stick an electrode into a healthy human's brain and record activity, but perhaps in the future a non-invasive form of electrode recording could be developed?

The current neuroscience field is foggy and shrouded in mystery. Most of these mysteries simply cannot be solved with the research techniques currently at our disposal. But this is what makes neuroscience exciting - there is still so much to explore! Who knows when we will be able to map behaviours to neural circuits with single-cell precision, but with how quickly imaging techniques are being enhanced and fine-tuned, I wouldn't be surprised if it's sooner than we think.

Written by Ramim Rahman

Related articles: Neuromyelitis optica / Traumatic brain injuries

REFERENCES

Hoeffner, E.G. et al. (2011) 'Neuroradiology back to the future: Brain imaging', American Journal of Neuroradiology, 33(1), pp. 5–11. doi:10.3174/ajnr.a2936.
Maguire, E.A. and Frith, C.D. (2003) 'Lateral asymmetry in the hippocampal response to the remoteness of autobiographical memories', The Journal of Neuroscience, 23(12), pp. 5302–5307. doi:10.1523/jneurosci.23-12-05302.2003.
Wong, C. (2024) 'Cubic millimetre of brain mapped in spectacular detail', Nature, 629(8013), pp. 739–740. doi:10.1038/d41586-024-01387-9.
Butman, J.A. and Floeter, M.K. (2007) 'Decreased thickness of primary motor cortex in primary lateral sclerosis', AJNR. American Journal of Neuroradiology, 28(1), pp. 87–91.
Loane, C. and Politis, M. (2011) 'Positron emission tomography neuroimaging in Parkinson's disease', American Journal of Translational Research, 3(4), pp. 323–341.
Maguire, E.A. et al. (2000) 'Navigation-related structural change in the hippocampi of taxi drivers', Proceedings of the National Academy of Sciences, 97(8), pp. 4398–4403. doi:10.1073/pnas.070039597.
[Figure 1] EEG (electroencephalogram) (2024) Mayo Clinic. Available at: https://www.mayoclinic.org/tests-procedures/eeg/about/pac-20393875 (Accessed: 18 October 2024).
[Figure 2] Boccia, M. et al. (2016) 'Direct and indirect parieto-medial temporal pathways for spatial navigation in humans: Evidence from resting-state functional connectivity', Brain Structure and Function, 222(4), pp. 1945–1957. doi:10.1007/s00429-016-1318-6.
- Dark Energy Spectroscopic Instrument (DESI) | Scientia News
Dark Energy Spectroscopic Instrument (DESI). Last updated: 23/10/25, 10:22. Published: 08/07/23, 13:11. A glimpse into the early universe.
- How does physical health affect mental health? | Scientia News
How does physical health affect mental health? Last updated: 16/10/25, 10:20. Published: 30/01/25, 08:00. Healthy heart, healthy mind. Introduction Over the last decade, maintaining good mental health has become an increasing global priority. More people are committing time to self-care, meditation, and other cognitive practices. We have also seen a rise in people taking care of their physical health through exercise and clean eating. This is fantastic – people are making time for one of the most important aspects of life: their health! But with the fast-paced nature of modern lifestyles, it is hard to devote separate time each week to purely mental and purely physical wellbeing. What if there were ways to enhance both physical and mental wellbeing at the same time? Are the two forms of health completely distinct from one another, or could a change in one affect the other? If you’re looking for ways to improve your self-care efficiency, this may be the article for you! Healthy heart, healthy mind Physical health is a lot easier to define, because it is largely visible. Mental health, on the other hand, lacks a concrete definition. What is widely agreed is that emotions and feelings make up a large part of our mental health. Emotions are largely determined by how we feel about our current internal and external environment, meaning bad bodily signs (part of our internal environment) will have a negative effect on our overall mood. This is why being ill puts us in such a bad mood – even a blocked nose can annoy us by affecting how we do everyday activities. Poor fitness is likely no different – not being physically capable and finding everyday physical tasks challenging will likely affect your mood and your confidence.
Recent studies have backed up this idea: markers of bodily inflammation are associated with increased risk of depression and negative mood. The role of neurotransmitters So being physically fit is associated with having better mental health, but does that mean exercise itself benefits mental health as well, or is it just the result of exercise that makes us happy? In other words, we seem to enjoy the outcome, but do we enjoy the process too? Studies have found that exercise increases dopamine levels in the brain. Dopamine is a neurotransmitter (a chemical messenger in the brain) that signals reward and motivation, similar to when we earn something for the work we put in (Figure 1). Exercise is therefore seen as rewarding by the brain. There is also a lot of evidence suggesting exercise increases serotonin levels in both rats and humans. Serotonin is also a neurotransmitter, associated with directly enhancing mood and even having antidepressant effects. Experiments in rats even suggest that increases in serotonin can decrease anxiety levels. Now, this does not mean exercise alone can cure anxiety disorders or depression, but could it be a useful variable in a clinical setting? Clinical uses Studies in depressive patients suggest that, yes, exercise does lead to better mental and physical health in patients with depression. This pairs well with another common finding: depressed patients are very rarely willing to complete difficult tasks for reward. So even at the extreme clinical scale, mental ill-health can have damaging consequences for maintaining good physical health. On the other hand, simple activities such as light jogs or walks may be the key to reversing negative spirals and getting on the right track towards recovery (Figure 2). Conclusion and what we can do So far we have pretty solid evidence that mental health can impact physical health and vice versa, both negatively and positively. Going back to the introductory question: yes!
We can find activities that improve both our physical and mental health. The trick is to find exercises that we find enjoyable and rewarding. On the clinical side, this could mean that physical exercise may be as effective at remitting depressive symptoms as antidepressants, likely with far fewer side effects. With that said, stay active and have fun - it helps more than you think! Written by Ramim Rahman Related articles: Environmental factors in exercise / Stress and neurodegeneration / Personal training / Mental health awareness REFERENCES
Nord, C. (2024) The Balanced Brain. Cambridge: Penguin Random House.
Osimo, E.F. et al. (2020) ‘Inflammatory markers in depression: A meta-analysis of mean differences and variability in 5,166 patients and 5,083 controls’, Brain, Behavior, and Immunity, 87, pp. 901–909. doi:10.1016/j.bbi.2020.02.010.
Basso, J.C. and Suzuki, W.A. (2017) ‘The effects of acute exercise on mood, cognition, neurophysiology, and neurochemical pathways: A review’, Brain Plasticity, 2(2), pp. 127–152. doi:10.3233/bpl-160040.
[Figure 1] DiCarlo, G.E. and Wallace, M.T. (2022) ‘Modeling dopamine dysfunction in autism spectrum disorder: From invertebrates to vertebrates’, Neuroscience & Biobehavioral Reviews, 133, p. 104494. doi:10.1016/j.neubiorev.2021.12.017.
[Figure 2] Donvito, T. (2020) Cognitive behavioral therapy for arthritis: Does it work? What’s it like?, CreakyJoints. Available at: https://creakyjoints.org/living-with-arthritis/mental-health/cognitive-behavioral-therapy-for-arthritis/ (Accessed: 06 December 2024).
- How does moving houses impact your health and well-being? | Scientia News
How does moving houses impact your health and well-being? Last updated: 09/07/25, 14:17. Published: 13/07/24, 11:02. Evaluating the advantages and disadvantages of gentrification in the context of health. Introduction According to the World Health Organization (WHO), health is “a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity". Another way to define health is as an individual being in a state of equilibrium within themselves and with the surrounding environment, including their social interactions and other factors. Reflecting on historical views of health, ancient Indian and Chinese medicine and Ancient Greek society thought of health as harmony between a person and their environment, underlining the cohesion between soul and body; this is similar to the WHO’s definition of health. Considering these ideas, one key determinant of health is gentrification (see Figure 1). The term was first defined in 1964 by British sociologist Ruth Glass, who witnessed the dilapidated houses in the London Borough of Islington being taken over and renovated by middle-class proprietors. The broader consequences of gentrification include enhanced living conditions for residents, changes in ownership requirements, increased land and house prices, and transformations in the social class structure. These changes also push lower-income inhabitants out into poorer neighbourhoods, where conditions, which can include racial segregation, lead to inequities and discrepancies in health.
For example, a systematic review found that elderly and Black residents were affected more by gentrification than younger and White residents, highlighting the importance of support and interventions for specific populations during urban renewal. Given this background, this article will delve further into the advantages and disadvantages of gentrification in the context of health outcomes. Advantages of gentrification Gentrification does have its benefits. Firstly, it is positively linked with collective efficacy - the enhancement of social cohesion within neighbourhoods and the maintenance of shared norms - which has health benefits for residents, such as decreased rates of obesity, sexually transmitted diseases, and all-cause mortality. Another advantage is the possibility of economic growth: as more affluent tenants move into a neighbourhood, they can bring companies, assets, and increased demand for local goods and services, creating more jobs in the area for residents. Additionally, gentrification has been linked to decreased crime rates in newly developed areas, because the inflow of wealthier citizens often brings a stronger sense of community and investment in local security. This revitalised feeling of safety can make these neighbourhoods more appealing to existing and new inhabitants, leading to further economic development. Moreover, reduced crime can improve health outcomes, for example by lowering residents' stress and anxiety levels. As a result, the community's general well-being can improve, encouraging healthier lifestyle choices and livelier neighbourhoods. Furthermore, the longer a person lives in a gentrifying neighbourhood, the better their self-reported health, an effect that does not differ by race or ethnicity, as observed in Los Angeles.
Disadvantages of gentrification However, it is also essential to mention the drawbacks of gentrification, which are more numerous. In a qualitative study involving elderly participants, for example, one stated: “The cost of living increases, but the money that people get by the end of the month is the same, this concerning those … even retired people, and people receiving the minimum wage, the minimum wage increases x every year, isn’t it? But it is not enough”. Elderly residents in Barcelona faced comparable challenges of residential displacement between 2011 and 2017, as younger adults with higher incomes and those pursuing university education moved into the city. These cases spotlight how gentrification can raise the cost of living without an associated boost in earnings, making it difficult for people with lower incomes or vulnerable individuals to remain in these areas. Likewise, a census of gentrified neighbourhoods in Pittsburgh showed that participants more often reported negative health changes and reduced resources. Additionally, one study examined qualitative data from 14 cities in Europe and North America and commonly observed that gentrification negatively affects the health of historically marginalised communities, through threats to housing and financial security, socio-cultural displacement, loss of services and amenities, and increased risk of criminal behaviour and compromised public safety. This can equally be observed during green gentrification, where long-time historically marginalised inhabitants feel excluded from green or natural spaces and are less likely to use them than newer residents. To mitigate these negative impacts of gentrification, inclusive urban renewal guidelines should be drafted that consider vulnerable populations, boosting health benefits through physical and social improvements.
The first step would be to provide residents with enough information and to establish trust between them and the local authorities, because any inequality in the provision of social options dramatically affects people’s health-related behaviours. Intriguingly, gentrification has been shown to increase exposure to tick-borne pathogens, through populations staying in place, displacement within urban areas, and relocation to the suburbs. This raises tick-borne disease risk, posing a health hazard to affected residents (Figure 2). As for mental health, research has indicated that residing in gentrified areas is linked to greater levels of anxiety and depression in older adults and children. Additionally, one study found that young people encountered spatial disconnection and affective exclusion due to gentrification and felt disoriented by the speed of transition. All of these problems reveal that gentrification can harm public health and well-being, aggravating disparities and creating feelings of isolation and loneliness in impacted communities. Conclusion Gentrification is a complicated and controversial process with noteworthy consequences for the health of neighbourhoods. Its advantages include enhanced infrastructure and boosted economic prospects, potentially leading to fairer access to healthcare services and improved health outcomes for residents. However, gentrification often leads to displacement and the loss of affordable housing, which can harm the health of vulnerable populations. It is therefore vital for policymakers and stakeholders to carefully evaluate the likely health effects of gentrification and enforce mitigation strategies to safeguard the well-being of all citizens (see Table 1). Written by Sam Jarada Related articles: A perspective on well-being / Life under occupation REFERENCES WHO. Health and Well-Being. Who.int. 2015.
Available from: https://www.who.int/data/gho/data/major-themes/health-and-well-being
Sartorius N. The meanings of health and its promotion. Croatian Medical Journal. 2006;47(4):662–4. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2080455/
Krahn GL, Robinson A, Murray AJ, Havercamp SM, Havercamp S, Andridge R, et al. It’s time to reconsider how we define health: perspective from disability and chronic condition. Disability and Health Journal. 2021 Jun;14(4):101129. Available from: https://www.sciencedirect.com/science/article/pii/S1936657421000753
Svalastog AL, Donev D, Jahren Kristoffersen N, Gajović S. Concepts and definitions of health and health-related values in the knowledge landscapes of the digital society. Croatian Medical Journal. 2017 Dec;58(6):431–5. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5778676/
Foryś I. Gentrification on the example of suburban parts of the Szczecin urban agglomeration. remav. 2013 Sep 1;21(3):5–14.
Uribe-Toril J, Ruiz-Real J, de Pablo Valenciano J. Gentrification as an emerging source of environmental research. Sustainability. 2018 Dec 19;10(12):4847.
Schnake-Mahl AS, Jahn JL, Subramanian SV, Waters MC, Arcaya M. Gentrification, neighborhood change, and population health: a systematic review. Journal of Urban Health. 2020 Jan 14;97(1):1–25.
- A potential treatment for HIV | Scientia News
A potential treatment for HIV. Last updated: 08/07/25, 16:16. Published: 21/07/23, 09:50. Can CRISPR/Cas9 overcome the challenges posed by current HIV treatments? The human immunodeficiency virus (HIV) was recorded to affect 38.4 million people globally at the end of 2021. The virus attacks the immune system, incapacitating CD4 cells: white blood cells (WBCs) that play a vital role in coordinating the immune response and fighting infection. The normal range of CD4 cells is 500 to 1500 cells/mm3 of blood; HIV can rapidly deplete the CD4 count to dangerous levels, damaging the immune system and leaving the body highly susceptible to infections. Whilst antiretroviral therapy (ART) can help manage the virus by interfering with viral replication and reducing the viral load, it fails to eliminate the virus altogether. The reason is the presence of latent viral reservoirs, where HIV can lie dormant and reignite infection if ART is stopped. Whilst a cure has not yet been discovered, a promising avenue being explored in the hope of eradicating HIV is CRISPR/Cas9 technology. This highly precise gene-editing tool can induce mutations at specific points in the HIV proviral DNA. Guide RNAs pinpoint the desired genome location, and the Cas9 nuclease acts as molecular scissors that remove selected segments of DNA. CRISPR/Cas9 therefore provides access to the viral genetic material integrated into the genome of infected cells, allowing researchers to cleave HIV genes from infected cells and clear latent viral reservoirs. Furthermore, CRISPR/Cas9 can also prevent HIV from attacking CD4 cells in the first place: HIV binds to the chemokine receptor CCR5, expressed on CD4 cells, in order to enter the WBC.
CRISPR/Cas9 can cleave the genes encoding the CCR5 receptor, thereby preventing the virus from entering and replicating inside CD4 cells. CRISPR/Cas9 technology offers a solution to a problem that current antiretroviral therapies cannot solve: through gene editing, researchers can clear the lasting reservoirs, unreachable by ART, that HIV establishes in our bodies. However, further research and clinical trials are still required to fully understand the safety and efficacy of this approach to treating HIV before it can be implemented as a standard treatment. Written by Bisma Butt Related articles: Antiretroviral therapy / mRNA vaccines
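The targeting logic described above can be sketched in a few lines of Python: Cas9 cuts where a roughly 20-nucleotide protospacer (matched by the guide RNA) sits immediately upstream of an "NGG" PAM motif. This is a minimal toy illustration, not a guide-design tool - the sequence below is made up (it is not real CCR5 sequence), and real guide selection also weighs off-target risk, GC content, and cut-site position.

```python
# Toy sketch of guide-RNA target selection for Cas9.
# Cas9 requires an "NGG" PAM (any base followed by two guanines)
# immediately 3' of the ~20-nt protospacer the guide RNA matches.

def find_protospacers(dna: str, spacer_len: int = 20):
    """Return (position, protospacer) pairs that are followed by an NGG PAM."""
    hits = []
    for i in range(len(dna) - spacer_len - 2):
        pam = dna[i + spacer_len : i + spacer_len + 3]
        if pam[1:] == "GG":  # N can be any base; the next two must be G
            hits.append((i, dna[i : i + spacer_len]))
    return hits

# Made-up toy sequence for illustration only (NOT real CCR5 sequence)
toy_seq = "ATGCATTTACCAGATCTCAAAAAGAAGGTCTTCATTACACCTGG"

for pos, spacer in find_protospacers(toy_seq):
    print(f"candidate protospacer at {pos}: {spacer}")
```

Each hit marks a site a guide RNA could direct Cas9 to cut; disrupting such a site within a receptor gene is, in essence, how CCR5 knockout works.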
- Nature vs nurture in childhood intelligence | Scientia News
Nature vs nurture in childhood intelligence. Last updated: 11/12/25, 14:14. Published: 21/09/24, 15:38. What matters most for the development of intelligence in childhood? Introduction Intelligence encompasses reasoning, planning, solving problems, thinking abstractly, comprehending complex ideas, and learning from experience. This broad and deep capacity for understanding our surroundings can be measured using standardised tests, such as Intelligence Quotient (IQ) tests, picture vocabulary tests for verbal ability and matrix reasoning tasks for non-verbal ability. This article explores how nature facilitates the development of intelligence in childhood through twin studies and genome-wide association studies, and how nurture can further aid that development through social environmental influences, including the association between parental socio-economic status (SES) and intelligence. However, there is no definitive answer as to whether nature alone or nurture alone leads to the development of intelligence in childhood, so genotype-environment correlations are also explored. Nature Argument Nativists such as Jensen (1969) believe that intelligence is determined by nature (genetic makeup) and estimate that genes account for 80% of the heritability of IQ. One way to determine the heritability of intelligence is to use twin studies. McGue and Bouchard (1998) reviewed five studies of monozygotic (MZ) twins reared apart and found a correlation of 0.86 for intelligence in MZ twins, compared with 0.60 in dizygotic (DZ) twins.
Hence, nature does play a role in the development of intelligence in childhood: the MZ twins did not share the same environment but did share the same genes, and had a higher correlation than DZ twins, suggesting that intelligence is heritable. Genome-wide association studies (GWAS) further investigate the relationship between genetic sequences and intelligence by examining individual chromosomal markers, such as single nucleotide polymorphisms (SNPs). Butcher et al. (2008) conducted a genome scan of 7,000 subjects and found six SNPs associated with intelligence, indicating that it is polygenic. The correlation between SNP-set scores and g scores was 0.105; squaring this correlation gives an effect size of 1.1%, comparable to the sum of the effect sizes of the six SNPs. Figure 2 depicts a genotype-by-phenotype plot illustrating the relationship between SNP-set scores and standardised g. Identifying target alleles and SNP associations in genome polygenic scores has helped account for the heritability of intelligence in childhood. However, because intelligence is polygenic, the contribution of any individual locus is small: genomic variance explains only around 10%, making it very difficult to detect relevant SNPs without huge samples. Foreseeable advances in genetic technology may mitigate this problem. Nurture Argument Alternatively, empiricists emphasise the family environment, socioeconomic status, and schooling, where schooling is a social influence. Sternberg et al. (2001) claim that pursuing higher education and schooling is associated with higher IQs. Deary et al. (2007), in their 5-year longitudinal study of approximately 70,000 children, found a large overall contribution of intelligence to educational attainment, with an average chance of 58% of attaining grades between A and C. Their study therefore establishes educational attainment as an environmental outcome of intelligence.
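The arithmetic behind these figures can be reproduced in a few lines. The sketch below uses Falconer's formula - a standard textbook estimator that is not named in the article itself - which puts broad-sense heritability at twice the difference between the MZ and DZ correlations, and squares the SNP-set correlation to get the variance in g it explains.

```python
# Illustrative arithmetic for the twin-study and GWAS figures quoted above.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Falconer's formula: heritability ~ 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

def variance_explained(correlation: float) -> float:
    """Variance explained (r squared) by a predictor correlated at r."""
    return correlation ** 2

# Twin correlations reviewed by McGue and Bouchard (1998)
h2 = falconer_heritability(0.86, 0.60)
print(f"Falconer heritability estimate: {h2:.2f}")

# SNP-set score correlation with g from Butcher et al. (2008)
r2 = variance_explained(0.105)
print(f"Variance in g explained by the six SNPs: {r2:.4f} (about 1.1%)")
```

Note that the Falconer estimate from these particular correlations (about 0.52) is lower than Jensen's 80% figure, which is one reason heritability estimates for IQ remain contested.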
However, the decision to pursue education may not be motivated by intelligence but may result from social causation, suggesting that socio-economic conditions influence intelligence. The relationship between a parent's SES and a child's intelligence also exemplifies the role of nurture in the development of intelligence. This is supported by Turkheimer et al. (2003), who concluded that in families with low SES, 60% of the variance in IQ is explained by the shared environment, while in affluent families, all variation was accounted for by genes. However, parents with higher levels of intelligence may qualify for better-paying jobs and hence have higher SES; this reflects social selection (when individuals influence the quality of their socio-economic environment) as well as genetics. Meanwhile, children in impoverished families do not get to develop their full genetic potential, and thus the heritability of IQ is very low. Conversely, adoption can be seen as a social intervention that moves children from lower- to higher-SES homes and allows the gene-environment interplay in the development of intelligence to be explored. Kendler et al. (2015) studied 436 full male sibling pairs, separated at birth and tested at 18-20 years, comparing pairs in which one sibling was raised in their biological family and the other in an adoptive family. Adopted-away siblings scored 7.6 points higher than their biological siblings when their adoptive parents had higher education levels than their biological parents (such as high school versus some postsecondary education). Gene-environment interplay According to Lerner et al. (2015), nature and nurture are inextricably linked and never exist independently of each other. In this way, the nature-nurture dichotomy presented in the title may be false. Gene-environment (GE) interplay offers two concepts: GE interaction and GE correlation.
GE interaction is where the effects of genes on intelligence depend on the environment. GE correlation can be explored through adoption studies that compare genetically related and unrelated individuals. Supporting evidence from Van IJzendoorn et al. (2005) indicates that children adopted away from institutions had higher IQs than children who remained in institutional care. This meta-analysis of 75 studies, involving 3,800 children from 19 countries, compared the intellectual development of children living in orphanages with that of children living with adoptive families. On average, children growing up in orphanages had an IQ 16.5 points lower than their adopted peers. This illustrates how adoptive families, who typically have higher SES, can help children achieve higher IQs. However, the generalisability of Van IJzendoorn's findings can be questioned, as the institutionalised participants were highly deprived, putting their cognitive development at risk. Furthermore, Neiss and Rowe (2000) contradicted these findings by comparing adopted children with birth children to estimate the genetic and environmental effects of the mother's and father's years of education on the child's verbal intelligence. In biological families, mother-child (0.41) and father-child (0.36) correlations were significantly higher than in adoptive families (0.16). This implies that the adoptive parents' home environment has modest effects on children's cognitive abilities, whereas the heredity and environment of the birth parents exert a profound influence. Conclusion In conclusion, both nature and nurture play significant roles in childhood intelligence development, and both offer testable evidence through twin studies and socio-economic correlations. Nevertheless, scientists have claimed that genetics and environmental factors together predominantly influence the development of intelligence in childhood.
This essay and future research in this field demonstrate that intelligence can be malleable, especially in children, through major social interventions, and that the environment will continuously affect gene action. Written by Pranavi Rastogi Related articles: Mutualism theory of intelligence / Depression in children / Childhood stunting / Intellectual deficits / Does being bilingual make you smarter?










