Search Index
355 results found
- Plastics and their environmental impact: a double-edged sword | Scientia News
The chemistry that makes plastics strong also makes them extremely resistant to deterioration
Last updated: 10/07/25, 10:29 | Published: 06/11/24, 12:25

Plastics have become an indispensable part of modern life. They are found in everything from electronics and packaging to construction materials and medical equipment. These multipurpose materials, mostly derived from petrochemicals, are successful because they are inexpensive, lightweight, and long-lasting. However, the resilience that makes them so useful is also one of the biggest environmental problems of our time: the chemistry that makes plastics strong also makes them extremely resistant to deterioration, causing widespread contamination and environmental damage.

The chemistry behind plastics
Most plastics are composed of polymers: long chains of repeating molecular units called monomers. Depending on how the molecules are arranged and which chemical additives are introduced during synthesis, these polymers can be engineered to have a variety of characteristics, from stiffness to flexibility. Hydrocarbons from natural gas or crude oil are polymerised to create common plastics such as polypropylene, used in food containers, and polyethene, used in plastic bags. While these plastics are ideal for their intended purposes (protecting products, storing food, and more), they are extremely resistant to degradation. This is due to their stable carbon-carbon bonds, which natural organisms and processes find difficult to break down. As a result, plastics can remain in the environment for hundreds of years, breaking down into tiny fragments rather than disappearing entirely. See Figure 1.

The problem of microplastics
Plastics in the environment degrade over time into tiny fragments known as microplastics, defined as particles smaller than 5 mm in diameter. These microplastics originate from a variety of sources, including the breakdown of larger plastic debris, microbeads used in personal care products, synthetic fibres shed from textiles, and industrial processes. They are now found in every corner of the globe, from the deepest parts of the oceans to remote mountain ranges, the air we breathe, and even drinking water and food. Microplastics are particularly problematic in marine environments. Marine animals such as fish, birds, and invertebrates often mistake microplastics for food. Once ingested, these particles can accumulate in the animals' digestive systems, leading to malnutrition, physical damage, or even death. More concerning is the potential for these plastics to work their way up the food chain. Predators, including humans, may consume prey that has ingested microplastics, raising concerns about the potential effects on human health. Recent studies have detected microplastics in various human-consumed products, including seafood, table salt, honey, and drinking water. Alarmingly, microplastics have also been found in human organs, blood, and even placentas, highlighting the pervasive nature of this contamination. While the long-term environmental and health effects of microplastics are still not fully understood, research raises significant concerns.
Microplastics can carry toxic substances such as persistent organic pollutants (POPs) and heavy metals, posing risks to the respiratory, immune, reproductive, and digestive systems. Exposure through ingestion, inhalation, and skin contact has been linked to DNA damage, inflammation, and other serious health issues.

Biodegradable plastics: a possible solution?
One possible solution to plastic pollution is the development of biodegradable plastics, which are engineered to degrade more easily in the environment. These plastics can be created from natural sources such as maize starch or sugarcane, which are converted into polylactic acid (PLA), or from petroleum-based compounds designed to disintegrate more quickly. However, biodegradable polymers do not provide a perfect answer. Many of these materials require specific conditions, such as high heat and moisture, to degrade effectively; such conditions are more commonly found in industrial composting plants than in landfills or natural ecosystems. As a result, many biodegradable plastics can persist in the environment if not properly disposed of. Furthermore, their production frequently requires significant quantities of energy and resources, raising questions about whether they are actually more sustainable than traditional plastics.

Innovations in plastic recycling
Given the limitations of biodegradable polymers, improving recycling technology has become a central focus in the battle against plastic waste. Traditional methods, such as mechanical recycling, involve breaking down plastics and remoulding them into new products; however, this process can degrade the material's quality over time. Furthermore, many types of plastics are difficult or impossible to recycle due to variations in chemical structure, contamination, or a lack of adequate machinery. Recent advances have been made to address these issues. Chemical recycling, for example, converts plastics back into their original monomers, allowing them to be re-polymerised into high-quality plastic. This technique has the potential to recycle materials indefinitely without compromising functionality. Another intriguing technique is enzymatic recycling, in which specially engineered enzymes break down plastics into their constituent parts at lower temperatures, reducing the amount of energy the process requires. While these technologies provide hope, they are still in the early phases of development and face significant economic and logistical challenges. Expanding recycling infrastructure and developing more effective methods are critical to reducing the amount of plastic waste entering the environment.

The way forward
The environmental impact of plastics has inspired a global campaign to reduce plastic waste. Governments, industry, and consumers are taking action by prohibiting single-use plastics, increasing recycling efforts, and developing alternatives. However, addressing the plastic problem requires a multifaceted strategy: advances in material science, improved waste management systems, and, perhaps most crucially, a transformation in how we perceive and use plastics in our daily lives. The chemistry of plastics is both fascinating and dangerous. While plastics have transformed industries and raised quality of life, their long-term persistence in the environment poses a substantial risk to ecosystems and human health.
More than simply developing new materials, the future of plastics may depend on rethinking how we make, use, and discard them, so that we can build a more sustainable relationship with these intricate polymers.

Written by Laura K
Related articles: Genetically-engineered bacteria break down plastic / The environmental impact of EVs
- Chirality in drugs | Scientia News
Why chirality is important in developing drugs
Last updated: 04/02/25, 15:50 | Published: 06/06/23, 16:53

Approximately half of all drugs currently on the market are chiral compounds, and nearly 90% of these chiral drugs are sold as racemates: equimolar mixtures of two enantiomers. Chirality is the property of an object that prevents it from being superimposed on its mirror image, much like left and right hands. This generic characteristic of "handedness" plays a significant role in the development of many pharmaceutical drugs. It is interesting to note that 20 of the 35 drugs the Food and Drug Administration (FDA) approved in 2020 are chiral drugs. For example, ibuprofen, a chiral 2-arylpropionic acid derivative, is a common over-the-counter analgesic, antipyretic, and anti-inflammatory medication. However, ibuprofen and medications from similar families can have side effects and risks related to their use.

A drawback of chiral drugs is that only one of the two enantiomers may be active, while the other may be ineffective or cause adverse effects. The inactive enantiomer can occasionally interact with the active enantiomer, lowering its potency or producing undesirable side effects. Additionally, ibuprofen and other members of the chiral family of pharmaceuticals can interact with other drugs, both over-the-counter and prescription. To guarantee that only the active enantiomer is present in chiral-class medications, it is crucial for pharmaceutical companies to closely monitor their production and distribution processes. To lessen the toxicity and adverse effects linked to the inactive enantiomer, medicinal chemistry has recently seen an increase in the use of enantiomerically pure drugs. In any instance, the choice of whether to use a single enantiomer or a mixture of enantiomers of a given medicine should be based on clinical trial results and clinical expertise. Beyond the need to determine and control the enantiomeric purity of drugs resolved from a racemic mixture, the use of single-enantiomer drugs may result in simpler and more selective pharmacological profiles, improved therapeutic indices, simpler pharmacokinetics, and fewer drug interactions. Although there have been instances where the wrong enantiomer caused unintended side effects, many medications are still used today as racemates, with their associated side effects; this is probably due to both the difficulty of chiral separation techniques and the high cost of production.

In conclusion, ibuprofen and other chiral medications, including those used to treat pain and inflammation, can be useful, but they also carry a number of risks and adverse effects. It is critical to follow a doctor's instructions when using these medications and to be aware of possible interactions, allergic reactions, and other hazards. To maintain the safety and efficacy of chiral-class medicines, pharmaceutical manufacturers also have a duty to closely monitor their production and distribution.
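The enantiomeric purity discussed above is commonly quantified as enantiomeric excess (ee). The short note below is an illustrative aside, not a formula from the article, using R and S for the amounts of the two enantiomers:

```latex
% Enantiomeric excess: how far a mixture is from racemic.
\[
\mathrm{ee} = \frac{\lvert R - S \rvert}{R + S} \times 100\%
\]
% A racemate (R = S) has ee = 0%, while an enantiopure
% single-enantiomer drug has ee = 100%.
```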
Written by Navnidhi Sharma

- Neuroimaging and spatial resolution | Scientia News
Peering into the mind
Last updated: 10/07/25, 10:24 | Published: 04/11/24, 14:35

Introduction
Neuroimaging has been at the forefront of brain discovery ever since the first images of the brain were recorded in 1919 by Walter Dandy, using a technique called pneumoencephalography. Fast-forward over a century and neuroimaging is far more than blurry single images. Modern techniques allow us to observe real-time changes in brain activity with millisecond resolution, leading to breakthroughs in scientific discovery that would not otherwise be possible. Memory is a great example - with functional magnetic resonance imaging (fMRI) techniques we have been able to demonstrate that more recent long-term memories are stored and retrieved with brain activity in the hippocampus, but as memories become more remote, they are transferred to the medial temporal lobe. While neuroimaging techniques keep the doors open for new and exciting discoveries, spatial limitations leave many questions unanswered, especially at the cellular and circuit level. For example, within the hippocampus, is each memory encoded by a completely distinct neural circuit, or do similar memories share similar neural pathways? Within just a cubic millimetre of brain tissue we could have up to 57,000 cells (most of them neurons), all of which may have different properties, belong to different circuits, and produce different outputs. This almost makes revolutionary techniques such as fMRI, with almost unparalleled image quality, seem pointless. To truly understand how neural circuits work, we have to dig as deep as possible and record the smallest regions possible. So that begs the question: how small can we actually record in the human brain?

EEG
2024 marks a century since the first recorded electroencephalography (EEG) scan by Hans Berger in Germany. The technique involves placing electrodes across the scalp to record activity over the whole outer surface of the brain (Figure 1). Unlike the methods described below, EEG provides a direct measure of brain activity, recording the electrical signals produced when the brain is active. However, because the electrodes sit only on the scalp, EEG can pick up activity only from the outer cortex, missing important activity in deeper parts of the brain. In our memory example, this means it would completely miss any activity in the hippocampus. EEG's spatial resolution is also quite underwhelming, typically resolving activity only to within a few centimetres, which is not great for mapping behaviours to specific structures in the brain. Clinically, EEG scans are used to measure overall activity levels, assisting with epilepsy diagnosis. Let's look at what we can use to dig deeper into the brain and locate signals of activity.

PET
Positron emission tomography (PET) scans offer a chance to record activity throughout the whole brain by means of an ingested radioactive tracer, typically glucose labelled with a mildly radioactive substance. The tracer is tracked, and its uptake in specific parts of the brain is a sign of greater metabolic activity, indicating a higher signalling rate. PET already offers a resolution far beyond the capacities of EEG, distinguishing activity between areas with a resolution of up to 4 mm.
With the use of different radioactive labels, we can also detect the activity of specific populations of neurons, such as dopamine neurons, to diagnose Parkinson's disease. In fact, many studies have reliably demonstrated the ability of PET scans to detect the root cause of Parkinson's disease, namely a reduced number of dopamine neurons in the basal ganglia, before symptoms become too severe. As impressive as that sounds, a 4 mm resolution can locate activity in large areas of the cortex but has limited power to resolve discrete cortical layers. Take the human motor cortex, for example: all six layers together have an average width of only 2.79 mm. A PET scan would not be powerful enough to determine which layer is most active, so we need to dig a little deeper.

fMRI
Since its inception in the early 1990s, fMRI has gained the reputation of being the gold standard for human neuroimaging, thanks to its non-invasiveness, lack of artefacts, and reliable signal. fMRI uses nuclear magnetic resonance to measure changes in oxygenated blood flow (the BOLD signal), which correlates with neural activity. Compared with EEG, measuring blood oxygen levels offers neither impressive temporal resolution nor a direct measure of neural activity. fMRI makes up for this with its superior spatial resolution, resolving features as little as 1 mm apart. Using our motor cortex example, this would allow us to resolve activity between every 2-3 layers - not a bad return considering it doesn't even leave a scar. PET, and especially EEG, pale in comparison to the capabilities of fMRI, which has since been used for a wide range of neuroimaging research. Most notably, structural MRI has been used to support the idea of hippocampal involvement during spatial navigation from memory tasks (Figure 2). Its resolving power and highly precise images also make it suitable for mapping surgical procedures.

Conclusion
With a resolution of up to 1 mm, fMRI takes the crown as the human neuroimaging technique with the best spatial resolution! Table 1 shows a brief summary of each neuroimaging method, and a rough numerical comparison is sketched below. Unfortunately, there is still so much more we need to do to look at individual circuits and connections. As mentioned before, even within a cubic millimetre of brain we have five figures' worth of cells, making the number of neurons in the whole brain impossible to comprehend. To observe the activity of a single neuron, we would need an imaging technique capable of viewing cells in the tens-of-micrometres range. So what can we do to reach the resolution we desire while remaining suitable for humans? Maybe there isn't a solution. Instead, if we want to record the activity of single neurons, we may have to take inspiration from invasive animal techniques such as microelectrode recordings. Typically used in rats and mice, these can achieve single-cell resolution and probe neuroscience at its smallest components. It would be unethical to stick an electrode into a healthy human's brain and record activity, but perhaps a non-invasive form of electrode recording could be developed in the future? The current neuroscience field is foggy and shrouded in mystery. Most of these mysteries simply cannot be solved with the research techniques currently at our disposal. But this is what makes neuroscience exciting - there is still so much to explore! Who knows when we will be able to map behaviours to neural circuits with single-cell precision, but with how quickly imaging techniques are being enhanced and fine-tuned, I wouldn't be surprised if it's sooner than we think.
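As promised above, here is the rough numerical comparison: a minimal Python sketch of the layer arithmetic using the figures quoted in this article. The 20 mm value for EEG is an assumed stand-in for "a few centimetres"; all values are illustrative only.

```python
# Back-of-envelope comparison of the spatial resolutions quoted in this
# article against the width of a single motor-cortex layer.
resolutions_mm = {"EEG": 20.0, "PET": 4.0, "fMRI": 1.0}  # EEG value assumed

MOTOR_CORTEX_MM = 2.79          # average total width of all 6 layers
layer_mm = MOTOR_CORTEX_MM / 6  # ~0.47 mm per cortical layer

for method, res in resolutions_mm.items():
    layers_per_voxel = res / layer_mm
    print(f"{method}: {res} mm spans ~{layers_per_voxel:.1f} layers")

# fMRI's ~1 mm resolution spans roughly 2 layers, matching the article's
# "every 2-3 layers"; PET's 4 mm spans more than the whole cortical depth.
```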
Written by Ramim Rahman
Related articles: Neuromyelitis optica / Traumatic brain injuries

REFERENCES
Hoeffner, E.G. et al. (2011) 'Neuroradiology back to the future: brain imaging', American Journal of Neuroradiology, 33(1), pp. 5–11. doi:10.3174/ajnr.a2936.
Maguire, E.A. and Frith, C.D. (2003) 'Lateral asymmetry in the hippocampal response to the remoteness of autobiographical memories', The Journal of Neuroscience, 23(12), pp. 5302–5307. doi:10.1523/jneurosci.23-12-05302.2003.
Wong, C. (2024) 'Cubic millimetre of brain mapped in spectacular detail', Nature, 629(8013), pp. 739–740. doi:10.1038/d41586-024-01387-9.
Butman, J.A. and Floeter, M.K. (2007) 'Decreased thickness of primary motor cortex in primary lateral sclerosis', AJNR: American Journal of Neuroradiology, 28(1), pp. 87–91.
Loane, C. and Politis, M. (2011) 'Positron emission tomography neuroimaging in Parkinson's disease', American Journal of Translational Research, 3(4), pp. 323–341.
Maguire, E.A. et al. (2000) 'Navigation-related structural change in the hippocampi of taxi drivers', Proceedings of the National Academy of Sciences, 97(8), pp. 4398–4403. doi:10.1073/pnas.070039597.
[Figure 1] EEG (electroencephalogram) (2024) Mayo Clinic. Available at: https://www.mayoclinic.org/tests-procedures/eeg/about/pac-20393875 (Accessed: 18 October 2024).
[Figure 2] Boccia, M. et al. (2016) 'Direct and indirect parieto-medial temporal pathways for spatial navigation in humans: evidence from resting-state functional connectivity', Brain Structure and Function, 222(4), pp. 1945–1957. doi:10.1007/s00429-016-1318-6.
- Dark Energy Spectroscopic Instrument (DESI) | Scientia News
A glimpse into the early universe
Last updated: 23/10/25, 10:22 | Published: 08/07/23, 13:11
- How does physical health affect mental health? | Scientia News
Healthy heart, healthy mind
Last updated: 16/10/25, 10:20 | Published: 30/01/25, 08:00

Introduction
Over the last decade, maintaining good mental health has become an increasing global priority. More people are committing time to self-care, meditation, and other cognitive practices. We have also seen a rise in people taking care of their physical health through exercise and clean eating. This is fantastic – people are making time for one of the most important aspects of life, their health! But with the fast-paced nature of modern lifestyles, it is hard to devote separate time each week purely to mental and physical wellbeing. What if there were ways we could enhance both physical and mental wellbeing at the same time? Are both forms of health completely distinct from one another, or could a change in one have an effect on the other? If you're looking for ways to improve your self-care efficiency, this may be the article for you!

Healthy heart, healthy mind
Physical health is a lot easier to define, on account of it being largely visible. Mental health, on the other hand, lacks a concrete definition. What is widely agreed is that emotions and feelings play a large part in making up our mental health. Emotions are largely determined by how we feel about our current internal and external environment, meaning bad bodily signs (as part of our internal environment) will have a negative effect on our overall mood. This is why being ill puts us in such a bad mood – even a blocked nose can annoy us by affecting how we do everyday activities. Poor fitness levels are likely no different – not being at your most physically capable and finding everyday physical tasks challenging will likely affect your mood and your confidence. Recent studies have backed up this idea, finding that markers of bodily inflammation are associated with an increased risk of depression and negative mood.

The role of neurotransmitters
So being physically fit is associated with better mental health, but does that mean exercise itself is mentally healthy as well, or is it just the result of exercise that makes us happy? In other words, we seem to enjoy the outcome, but do we enjoy the process too? Studies have found that exercise increases dopamine levels in the brain. Dopamine is a neurotransmitter (a chemical messenger in the brain) that signals reward and motivation, similar to when we earn something for the work we put in (Figure 1). Exercise is therefore seen as rewarding by the brain. There is also a lot of evidence suggesting exercise increases serotonin levels in both rats and humans. Serotonin is also a neurotransmitter, associated with directly enhancing mood and even having antidepressant effects. Experiments in rats even suggest that increases in serotonin can decrease anxiety levels. Now, this does not mean exercise alone can cure anxiety disorder or depression, but could it be a useful variable in a clinical setting?

Clinical uses
Studies in depressive patients suggest that, yes, exercise does lead to better mental and physical health in patients with depression. This pairs well with another common finding: depressed patients are rarely willing to complete difficult tasks for reward. So even on an extreme clinical scale, mental ill-health can have very damaging consequences for maintaining good physical health.
On the other hand, simple activities such as light jogs or walks may be the key to reversing negative spirals and getting on the right track towards recovery (Figure 2).

Conclusion and what we can do
So far we have pretty solid evidence that mental health can impact physical health and vice versa, both negatively and positively. Going back to the introductory question: yes! We can find activities that improve both our physical and mental health. The trick is to find exercises that we find enjoyable and rewarding. On the clinical side, this could mean that physical exercise may be as effective at remitting depressive symptoms as antidepressants, likely with far fewer side effects. With that said, stay active and have fun – it helps more than you think!

Written by Ramim Rahman
Related articles: Environmental factors in exercise / Stress and neurodegeneration / Personal training / Mental health awareness

REFERENCES
Nord, C. (2024) The balanced brain. Cambridge: Penguin Random House.
Osimo, E.F. et al. (2020) 'Inflammatory markers in depression: a meta-analysis of mean differences and variability in 5,166 patients and 5,083 controls', Brain, Behavior, and Immunity, 87, pp. 901–909. doi:10.1016/j.bbi.2020.02.010.
Basso, J.C. and Suzuki, W.A. (2017) 'The effects of acute exercise on mood, cognition, neurophysiology, and neurochemical pathways: a review', Brain Plasticity, 2(2), pp. 127–152. doi:10.3233/bpl-160040.
[Figure 1] DiCarlo, G.E. and Wallace, M.T. (2022) 'Modeling dopamine dysfunction in autism spectrum disorder: from invertebrates to vertebrates', Neuroscience & Biobehavioral Reviews, 133, p. 104494. doi:10.1016/j.neubiorev.2021.12.017.
[Figure 2] Donvito, T. (2020) Cognitive behavioral therapy for arthritis: does it work? What's it like?, CreakyJoints. Available at: https://creakyjoints.org/living-with-arthritis/mental-health/cognitive-behavioral-therapy-for-arthritis/ (Accessed: 06 December 2024).
- How does moving houses impact your health and well-being? | Scientia News
Evaluating the advantages and disadvantages of gentrification in the context of health
Last updated: 09/07/25, 14:17 | Published: 13/07/24, 11:02

Introduction
According to the World Health Organization (WHO), health is "a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity". Another way to define health is as an individual being in a condition of equilibrium within themselves and with the surrounding environment, including their social interactions and other factors. Reflecting on historical views of health, ancient Indian and Chinese medicine and society in Ancient Greece thought of health as harmony between a person and their environment, underlining the cohesion between soul and body; this is similar to the WHO's definition of health. Considering these ideas, one key determinant of health is gentrification (see Figure 1). It was first defined in 1964 by British sociologist Ruth Glass, who witnessed the dilapidated houses in the London Borough of Islington being taken over and renovated by middle-class proprietors. The broader consequences of gentrification include enhanced living conditions for residents, changes in ownership requirements, increased prices of land and houses, and transformations in the social class structure. These changes also push lower-income inhabitants out or into poorer neighbourhoods, and the conditions in those neighbourhoods, which can include racial segregation, lead to inequities and discrepancies in health. For example, a systematic review found that elderly and Black residents were affected more by gentrification than younger and White citizens, highlighting the importance of support and interventions for specific populations during urban renewal. Given this background, this article delves further into the advantages and disadvantages of gentrification in the context of health outcomes.

Advantages of gentrification
Gentrification does have its benefits. Firstly, it is positively linked with collective efficacy, which concerns enhancing social cohesion within neighbourhoods and maintaining social norms; this has health benefits for residents, such as decreased rates of obesity, sexually transmitted diseases, and all-cause mortality. Another advantage of gentrification is the possibility of economic growth: as more affluent tenants move into specific neighbourhoods, they can bring companies, assets, and an increased demand for local goods and services, creating more jobs in the area for residents. Additionally, gentrification has been associated with decreased crime rates in newly developed areas, because the inflow of wealthier citizens often brings a more substantial sense of community and investment in local security standards. This revitalised feeling of safety can make these neighbourhoods more appealing to existing and new inhabitants, leading to further economic development. Moreover, reducing crime can improve health outcomes, for example by reducing stress and anxiety levels among residents. As a result, the community's general well-being can improve, leading to healthier lifestyle choices and livelier neighbourhoods.
Furthermore, the longer a person lives in a gentrifying neighbourhood, the better their self-reported health, a result that does not differ by race or ethnicity, as observed in Los Angeles.

Disadvantages of gentrification
However, it is also essential to mention the drawbacks of gentrification, which are more numerous. In a qualitative study involving elderly participants, for example, one participant stated: "The cost of living increases, but the money that people get by the end of the month is the same, this concerning those … even retired people, and people receiving the minimum wage, the minimum wage increases x every year, isn't it? But it is not enough". Elderly residents in Barcelona faced comparable challenges of residential displacement between 2011 and 2017, as younger adults with higher incomes and those pursuing university education moved into the city. These cases spotlight how gentrification can raise the cost of living without an associated boost in earnings, making it difficult for people with lower incomes or vulnerable individuals to remain in these areas. Likewise, a census of gentrified neighbourhoods in Pittsburgh showed that participants more typically reported negative health changes and reduced resources. Additionally, one study examined qualitative data from 14 cities in Europe and North America and commonly observed that gentrification negatively affects the health of historically marginalised communities. The harms include threats to housing and financial security, socio-cultural displacement, loss of services and amenities, and raised chances of criminal behaviour and compromised public safety. The same can be observed during green gentrification, where long-time, historically marginalised inhabitants feel excluded from green or natural spaces and are less likely to use them compared with newer residents. To mitigate these negative impacts of gentrification, inclusive urban renewal guidelines should be drafted that consider vulnerable populations, boosting health benefits through physical and social improvements. The first step would be to provide residents with enough information and establish trust between them and the local authorities, because any inequality in the provision of social options dramatically affects people's health-related behaviours. Intriguingly, gentrification has been shown to increase the opportunity for exposure to tick-borne pathogens via populations staying in place, displacement within urban areas, and suburban removal. This increases tick-borne disease risk, which poses a health hazard to affected residents (Figure 2). As for mental health, research has indicated that residing in gentrified areas is linked to greater levels of anxiety and depression in older adults and children. Additionally, one study found that young people encountered spatial disconnection and affective exclusion due to gentrification and felt disoriented by the speed of the transition. All of these problems associated with gentrification reveal that it can harm public health and well-being, aggravating disparities and creating feelings of isolation and loneliness in impacted communities.

Conclusion
Gentrification is a complicated and controversial process with noteworthy consequences for the health of neighbourhoods. Its advantages include enhanced infrastructure and boosted economic prospects, potentially leading to fairer access to healthcare services and improved health outcomes for residents.
However, gentrification often leads to displacement and the loss of affordable housing, which can harm the health of vulnerable populations. Therefore, it is vital for policymakers and stakeholders to carefully evaluate the likely health effects of gentrification and enforce mitigation strategies to safeguard the well-being of all citizens (see Table 1).

Written by Sam Jarada
Related articles: A perspective on well-being / Life under occupation

REFERENCES
WHO. Health and well-being. Who.int. 2015. Available from: https://www.who.int/data/gho/data/major-themes/health-and-well-being
Sartorius N. The meanings of health and its promotion. Croatian Medical Journal. 2006;47(4):662–4. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2080455/
Krahn GL, Robinson A, Murray AJ, Havercamp SM, Havercamp S, Andridge R, et al. It's time to reconsider how we define health: perspective from disability and chronic condition. Disability and Health Journal. 2021 Jun;14(4):101129. Available from: https://www.sciencedirect.com/science/article/pii/S1936657421000753
Svalastog AL, Donev D, Jahren Kristoffersen N, Gajović S. Concepts and definitions of health and health-related values in the knowledge landscapes of the digital society. Croatian Medical Journal. 2017 Dec;58(6):431–5. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5778676/
Foryś I. Gentrification on the example of suburban parts of the Szczecin urban agglomeration. remav. 2013 Sep 1;21(3):5–14.
Uribe-Toril J, Ruiz-Real J, de Pablo Valenciano J. Gentrification as an emerging source of environmental research. Sustainability. 2018 Dec 19;10(12):4847.
Schnake-Mahl AS, Jahn JL, Subramanian SV, Waters MC, Arcaya M. Gentrification, neighborhood change, and population health: a systematic review. Journal of Urban Health. 2020 Jan 14;97(1):1–25.
- A potential treatment for HIV | Scientia News
Can CRISPR/Cas9 overcome the challenges posed by current HIV treatments?
Last updated: 08/07/25, 16:16 | Published: 21/07/23, 09:50

The human immunodeficiency virus (HIV) was recorded to affect 38.4 million people globally at the end of 2021. The virus attacks the immune system, incapacitating CD4 cells: white blood cells (WBCs) which play a vital role in activating the adaptive immune system and fighting infection. The normal range of CD4 cells in the body is 500 to 1500 cells/mm³ of blood; HIV can rapidly deplete the CD4 count to dangerous levels, damaging the immune system and leaving the body highly susceptible to infections. While antiretroviral therapy (ART) can help manage the virus by interfering with viral replication and helping the body control the viral load, it fails to eliminate the virus altogether. The reason is the presence of latent viral reservoirs, where HIV can lie dormant and reignite infection if ART is stopped. While a cure has not yet been discovered, a promising avenue being explored in the hope of eradicating HIV is CRISPR/Cas9 technology. This highly precise gene-editing tool has been shown to induce mutations at specific points in the HIV proviral DNA. Guide RNAs pinpoint the desired genomic location, and the Cas9 nuclease acts as molecular scissors that remove selected segments of DNA. CRISPR/Cas9 technology therefore provides access to the viral genetic material integrated into the genome of infected cells, allowing researchers to cleave HIV genes from infected cells and clear latent viral reservoirs. Furthermore, the CRISPR/Cas9 gene-editing tool may also prevent HIV from attacking CD4 cells in the first place. HIV binds to the chemokine receptor CCR5, expressed on CD4 cells, in order to enter the WBC. CRISPR/Cas9 can cleave the gene for the CCR5 receptor, thereby preventing the virus from entering and replicating inside CD4 cells. CRISPR/Cas9 technology offers a solution that current antiretroviral therapies cannot: through gene editing, researchers can dispel the lasting reservoirs, unreachable by ART, that HIV establishes in our bodies. However, further research and clinical trials are still required to fully understand the safety and efficacy of this approach before it can be implemented as a standard treatment.
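To illustrate how guide-RNA targeting works in practice, here is a minimal Python sketch that scans a DNA sequence for 20-nucleotide protospacers sitting next to the NGG protospacer-adjacent motif (PAM) that SpCas9 requires. The fragment below is a made-up stand-in, not the real CCR5 sequence, and the scan covers one strand only.

```python
import re

def find_cas9_targets(dna: str):
    """Return (position, protospacer) pairs for 20-nt sites 5' of an NGG PAM.

    SpCas9 cuts only where an 'NGG' PAM sits immediately downstream of the
    ~20-nt sequence matched by the guide RNA. The lookahead lets overlapping
    candidate sites be reported.
    """
    return [(m.start(1), m.group(1))
            for m in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", dna)]

# Hypothetical fragment standing in for part of a target gene such as CCR5:
fragment = "GTCATCCTCATCCTGATAAACTGCAAAAGGCTGAAGAGCATGACTGACAT"
for pos, protospacer in find_cas9_targets(fragment):
    print(pos, protospacer)  # each printed site is a candidate guide target
```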
Written by Bisma Butt
Related articles: Antiretroviral therapy / mRNA vaccines

- Nature vs nurture in childhood intelligence | Scientia News
What matters most for the development of intelligence in childhood?
Last updated: 11/12/25, 14:14 | Published: 21/09/24, 15:38

Introduction
Intelligence is the capacity for reasoning, planning, solving problems, thinking abstractly, comprehending complex ideas, and learning from experience. This broad and deep capacity for understanding our surroundings can be measured using standardised tests, such as Intelligence Quotient (IQ) tests, picture vocabulary tests for verbal ability, and matrix reasoning tasks for non-verbal ability. This article explores how nature facilitates the development of intelligence in childhood through twin studies and genome-wide association studies, and how nurture can further aid its development through social environmental influences, investigating the association between parental socio-economic status (SES) and intelligence. However, there is no definitive answer to whether nature or nurture alone drives the development of intelligence in childhood, so genotype-environment correlations are also explored.

Nature argument
Nativists such as Jensen (1969) believe that intelligence is determined by nature (genetic makeup) and estimate the heritability of IQ at about 80%. One way to determine the heritability of intelligence is to use twin studies. McGue and Bouchard (1998) reviewed five studies of monozygotic (MZ) twins who were reared apart and found a correlation of 0.86 for intelligence in MZ twins, compared with a correlation of 0.60 in dizygotic (DZ) twins. Nature therefore does play a role in the development of intelligence in childhood: the MZ twins did not share the same environment but did share the same genes, and still showed a higher correlation than DZ twins, suggesting that intelligence is heritable. Genome-wide association studies (GWAS) further investigate the relationship between genetic sequences and intelligence by examining individual chromosomal markers, such as single nucleotide polymorphisms (SNPs). Butcher et al. (2008) conducted a genome scan of 7,000 subjects and found six SNPs associated with intelligence, indicating that it is polygenic. The correlation between SNP-set scores and g scores is 0.105; squaring this correlation gives an effect size of 1.1%, comparable to the sum of the effect sizes of the six SNPs. Figure 2 depicts a genotype-by-phenotype plot illustrating the relationship between SNP-set scores and standardised g. Identifying target alleles and SNP associations in genome polygenic scores has helped account for the heritability of intelligence in childhood. However, because intelligence is polygenic, the contribution of any individual locus is small; genomic variance explains only about 10%, which makes it very difficult to detect relevant SNPs without huge samples. Foreseeable advances in genetic technology may mitigate this problem.

Nurture argument
Alternatively, empiricists emphasise the family environment, socioeconomic status, and schooling, where schooling is a social influence. Sternberg et al. (2001) claim that pursuing higher education and schooling is associated with higher IQs.
Deary et al. (2007), in their 5-year longitudinal study, recruited approximately 70,000 children and found a large overall contribution of intelligence to educational attainment, with an average chance of 58% of attaining grades between A and C. Their study thus establishes educational attainment as an environmental outcome of intelligence. However, the decision to pursue education may not be motivated by intelligence but may result from social causation, suggesting that socio-economic conditions influence intelligence. The relationship between a parent's SES and a child's intelligence also exemplifies the role of nurture in the development of intelligence. This is further supported by Turkheimer et al. (2003), who concluded that in families with low SES, 60% of the variance in IQ is explained by the shared environment, while in affluent families, all variation was accounted for by genes. However, parents with higher levels of intelligence may qualify for better-paying jobs and hence have higher SES; this reflects social selection, when individuals influence the quality of their socio-economic environment, as well as genetics. Meanwhile, impoverished families do not get to develop their full genetic potential, and thus the heritability of IQ is very low. Conversely, adoption can be seen as a social intervention that moves children from lower- to higher-SES homes and allows the gene-environment interplay in the development of intelligence to be explored. Kendler et al. (2015) studied 436 full male sibling pairs, separated at birth and tested at 18-20 years. A comparison was made between pairs of separated siblings, one raised in their biological family and the other in an adoptive family. Adopted-away siblings scored 7.6 IQ points higher than their biological siblings when their adoptive parents had higher education levels than their biological parents (such as some postsecondary education versus high school only).

Gene-environment interplay
According to Lerner et al. (2015), nature and nurture are inextricably linked and never exist independently of each other. In this sense, the nature-nurture dichotomy presented in the title may be false. Gene-environment (GE) interplay offers two concepts: GE interaction and GE correlation. GE interaction is where the effects of genes on intelligence depend on the environment. GE correlation can be explored through adoption studies that compare genetically unrelated and related individuals. Supporting evidence from Van IJzendoorn et al. (2005) indicates that children who were adopted away from institutions had higher IQs than children who remained in institutional care. Using 75 studies involving 3,800 children from 19 countries, their meta-analysis compared the intellectual development of children living in orphanages with that of children living with adoptive families. On average, children growing up in orphanages had an IQ 16.5 points lower than their adopted peers. This illustrates how adoptive families, who typically have higher SES, can assist children in achieving higher IQs. However, the generalisability of Van IJzendoorn's findings can be questioned, as the studies involved highly deprived institutional settings, where children's cognitive development was already at risk. Furthermore, Neiss and Rowe (2000) contradicted Van IJzendoorn's findings by comparing adopted children with birth children to estimate the genetic-environmental effect of the mother's and father's years of education on the child's verbal intelligence.
In biological families, the mother-child (0.41) and father-child (0.36) correlations were significantly higher than those in adoptive families (0.16). This implies that the adoptive parents' home environment has modest effects on children's cognitive abilities, whereas the heredity and environment of the birth parents exert a profound influence.

Conclusion
In conclusion, both nature and nurture play significant roles in the development of childhood intelligence, and both offer testable evidence through twin studies and socio-economic correlations. Nevertheless, scientists maintain that genetics and environmental factors jointly influence the development of intelligence in childhood. This article and future research in this field demonstrate that intelligence can be malleable, especially in children, through major social interventions, and that the environment will continuously affect gene action.
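As a supplementary note, the twin correlations quoted above can be turned into a rough heritability estimate with Falconer's formula, a standard tool in twin research. The article does not perform this calculation itself, so the following is an illustrative aside:

```latex
% Falconer's formula: heritability from MZ and DZ twin correlations.
\[
h^2 = 2\,(r_{\mathrm{MZ}} - r_{\mathrm{DZ}}) = 2\,(0.86 - 0.60) = 0.52
\]
% For MZ twins reared apart, the correlation itself is often taken as a
% direct estimate of heritability, i.e. h^2 \approx 0.86, closer to
% Jensen's 80% figure. The GWAS effect size quoted in the text follows
% from squaring the SNP-set correlation:
\[
r^2 = 0.105^2 \approx 0.011 = 1.1\%
\]
```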
Written by Pranavi Rastogi
Related articles: Mutualism theory of intelligence / Depression in children / Childhood stunting / Intellectual deficits / Does being bilingual make you smarter?

- The chemistry of an atomic bomb | Scientia News
Julius Oppenheimer
Last updated: 04/07/25, 12:57 | Published: 23/08/23, 16:29

Julius Robert Oppenheimer, often credited with leading the development of the atomic bomb, played a significant role in its creation in the early 1940s. However, it is essential to recognise the collaborative effort of the many scientists, engineers, and researchers who contributed to the project. The history and chemistry of the atomic bomb are fascinating, shedding light on the scientific advancements that made it possible.

The destructive power of an atomic bomb stems from the rapid release of energy resulting from the splitting, or fission, of fissile atomic nuclei in its core. Isotopes such as uranium-235 and plutonium-239 are selected for their ability to undergo fission readily and sustain a chain reaction, leading to the release of an immense amount of energy. The critical mass of fissionable material required for detonation ensures that the neutrons produced during fission have a high probability of striking other nuclei and sustaining the chain reaction.

Neutron moderation is crucial to the controlled release of fission energy, though in reactors rather than in bombs themselves. Neutrons emitted during fission have high velocities, making them less likely to be absorbed by other fissile material. By employing a moderator such as heavy water (deuterium oxide) or graphite, these high-speed neutrons can be slowed down, which increases the likelihood of their absorption by fissile material and enhances the efficiency of the chain reaction. A bomb, by contrast, must release its energy in an uncontrolled burst and therefore relies on fast, unmoderated neutrons in highly enriched fissile material.

The sheer magnitude of the energy released by atomic bombs is staggering. For example, one kilogram (2.2 pounds) of uranium-235 undergoing complete fission produces an amount of energy equivalent to that released by roughly 17,000 tons (17 kilotons) of TNT. This tremendous release of energy underscores the immense destructive potential of atomic weapons.

The development of the atomic bomb represents a confluence of scientific knowledge and technological advancement, with nuclear chemistry serving as a foundational principle. The understanding of nuclear fission, the critical-mass requirement, and the implosion design were key factors in its creation. Exploring the chemistry behind this devastating weapon not only provides insight into the destructive capabilities of atomic energy but also emphasises the responsibility that accompanies its use.

In conclusion, while Oppenheimer's contributions to the development of the atomic bomb were significant, it is crucial to acknowledge the collective effort that led to its creation. The chemistry behind atomic bombs, from the selection of fissile isotopes to the behaviour of fission neutrons, plays a pivotal role in harnessing the destructive power of nuclear fission. Understanding the chemistry of atomic weapons highlights remarkable scientific achievements and reinforces the need for the responsible use of atomic energy.
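A quick back-of-the-envelope check of the 17-kiloton figure, sketched in Python using the commonly quoted value of roughly 200 MeV released per U-235 fission; the exact kiloton equivalent depends on how much of that energy is counted as promptly released:

```python
# Energy from complete fission of 1 kg of uranium-235.
AVOGADRO = 6.022e23             # atoms per mole
U235_MOLAR_MASS_G = 235.0       # g/mol
ENERGY_PER_FISSION_MEV = 200.0  # ~200 MeV released per fission event
MEV_TO_J = 1.602e-13            # joules per MeV
KT_TNT_J = 4.184e12             # joules per kiloton of TNT

atoms_per_kg = AVOGADRO * 1000.0 / U235_MOLAR_MASS_G
energy_j = atoms_per_kg * ENERGY_PER_FISSION_MEV * MEV_TO_J

print(f"{energy_j:.2e} J  ~=  {energy_j / KT_TNT_J:.0f} kt of TNT")
# Prints roughly 8e13 J, i.e. ~20 kt: the same order as the 17 kt
# quoted above, which counts only the promptly deposited energy.
```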
Written by Navnidhi Sharma

- The power of probiotics | Scientia News
Unlocking the secrets to gut health
Last updated: 14/07/25, 14:59 | Published: 18/08/23, 19:58

What are probiotics?
Probiotics are dietary supplements that consist of live cultures of bacteria or yeast. The human body, more precisely the microbiome, contains about 4 trillion bacteria spanning almost 450 species. These bacteria are necessary for the proper functioning of the entire body, especially the intestines and digestive system. Probiotics most often use bacteria from the Lactobacillus and Bifidobacterium families, as well as yeasts such as Saccharomyces cerevisiae.

How do probiotics work?
Probiotics have a wide range of effects on the body. Their main tasks are to strengthen immunity and improve the condition of the digestive tract. The microorganisms produce natural antimicrobial compounds and form a kind of protective barrier that keeps infection-promoting factors away from the intestine.

Types of probiotics
Lactic acid bacteria of the genera Lactobacillus and Bifidobacterium are most often used as probiotics, but some species of Escherichia and Bacillus bacteria, and the yeast Saccharomyces cerevisiae var. boulardii, also have pro-health properties.

Probiotics for your gut health
The composition of the bacterial flora in our intestines determines the proper functioning of the digestive and immune systems. Probiotics have a positive effect primarily on the intestinal flora. They speed up metabolism and lower 'bad' cholesterol (LDL). Live cultures of bacteria protect the digestive system: they improve digestion, regulate intestinal peristalsis, and prevent diarrhoea. They also increase the nutritional value of foods, facilitating the absorption of minerals such as magnesium and iron, as well as B-group vitamins and vitamin K. In addition, probiotics strengthen immunity and protect us from infections caused by pathogenic bacteria. It is therefore very important to take probiotics during and after antibiotic treatment, as they regenerate the intestinal flora damaged by antibiotic therapy and reduce inflammation.

Main benefits
· facilitate the digestive process
· increase the absorption of vitamins and minerals
· protect the intestinal microflora during antibiotic treatment
· support the immune system by increasing resistance to infections
· some strains have anti-allergic and anti-cancer properties
· lower cholesterol
· relieve the symptoms of lactose intolerance
· can synthesise some B vitamins, vitamin K, and folic acid

Written by Aleksandra Zurowska
Related articles: The gut microbiome / Vitamins / Interplay of hormones and microbiome










