
Search Index


  • Chirality in drugs | Scientia News

    Why chirality is important in developing drugs
    Last updated: 04/02/25, 15:50 | Published: 06/06/23, 16:53

    Approximately half of the drugs currently on the market are chiral compounds, and nearly 90% of these are sold as racemates: equimolar mixtures of the two enantiomers. Chirality is the quality of an object that prevents it from being superimposed on its mirror image, similar to left and right hands. This generic characteristic of "handedness" plays a significant role in the development of many pharmaceutical drugs. It is interesting to note that 20 of the 35 drugs the Food and Drug Administration (FDA) authorised in 2020 are chiral drugs. For example, ibuprofen, a chiral 2-arylpropionic acid derivative, is a common over-the-counter analgesic, antipyretic, and anti-inflammatory medication. However, ibuprofen and other medications from similar families can have side effects and risks related to their use.

    Drugs of the chiral class have the drawback that only one of the two enantiomers may be active, while the other may be ineffective or may have negative effects. The inactive enantiomer can occasionally interact with the active enantiomer, lowering its potency or producing undesirable side effects. Additionally, ibuprofen and other members of the chiral family of pharmaceuticals can interact with other drugs, including over-the-counter and prescription ones. To guarantee that only the active enantiomer is present in chiral-class medications, it is crucial for pharmaceutical companies to closely monitor their production and distribution processes.

    To lessen the toxicity and adverse effects linked to the inactive enantiomer, medicinal chemistry has recently seen an increase in the use of enantiomerically pure drugs. In any instance, the choice of whether to use a single enantiomer or a mixture of enantiomers of a given medicine should be based on clinical trial results and clinical expertise. Beyond the need to determine and control the enantiomeric purity of enantiomers separated from a racemic mixture, the use of single-enantiomer drugs may result in simpler and more selective pharmacological profiles, improved therapeutic indices, simpler pharmacokinetics, and fewer drug interactions. Although there have been instances where the wrong enantiomer caused unintended side effects, many medications are still used today as racemates, with their associated side effects; this is probably due both to the difficulty of chiral separation techniques and to the high cost of production.

    In conclusion, ibuprofen and other medications in the chiral family, including those used to treat pain and inflammation, can be useful, but they also carry a number of risks and adverse effects. It is critical to follow a doctor's instructions when using these medications and to be aware of any possible interactions, allergic reactions, and other hazards. To maintain the safety and efficacy of chiral-class medicines, pharmaceutical producers also have a duty to closely monitor their production and distribution.

    Written by Navnidhi Sharma
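    As an optional, hands-on illustration of the concept (not part of the original article), the short Python sketch below uses the open-source RDKit cheminformatics toolkit to locate the single chiral centre in ibuprofen from its standard SMILES string. It assumes RDKit is installed; the function and SMILES are standard RDKit usage, but the snippet is only a demonstration of how chirality can be detected programmatically.

    ```python
    # Minimal sketch: find the chiral centre(s) of ibuprofen with RDKit.
    # Assumes: pip install rdkit
    from rdkit import Chem

    ibuprofen = Chem.MolFromSmiles("CC(C)Cc1ccc(cc1)C(C)C(=O)O")  # standard ibuprofen SMILES

    # Returns (atom_index, label) pairs; unassigned stereocentres are labelled '?'
    # because the SMILES above does not specify which enantiomer is meant.
    centres = Chem.FindMolChiralCenters(ibuprofen, includeUnassigned=True)
    print(centres)  # expect one stereocentre: the alpha-carbon of the propionic acid group
    ```

    Running this reports a single unassigned stereocentre, which is exactly why ibuprofen can exist as two enantiomers and, when synthesised without stereocontrol, is obtained as a racemate.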

  • Plastics and their environmental impact: a double-edged sword | Scientia News

    The chemistry that makes plastics strong also makes them extremely resistant to deterioration
    Last updated: 10/07/25, 10:29 | Published: 06/11/24, 12:25

    Plastics have become an indispensable part of modern life. They are found in everything from electronics and packaging to construction materials and medical equipment. These multipurpose materials, mostly derived from petrochemicals, are successful because they are inexpensive, lightweight, and long-lasting. However, one of the biggest environmental problems of our time is the very resilience that makes them so beneficial. The chemistry that makes plastics strong also makes them extremely resistant to deterioration, which causes environmental damage and widespread contamination.

    The chemistry behind plastics

    Most plastics are composed of polymers, which are lengthy chains of repeating molecular units called monomers. Depending on how the molecules are arranged and the chemical additives added during synthesis, these polymers can be made to have a variety of characteristics, including stiffness or flexibility. Hydrocarbons from natural gas or crude oil are polymerised to create common plastics like polypropylene, which is used in food containers, and polyethene, which is used in plastic bags. While these plastics are ideal for their intended purposes (protecting products, storing food, and more), they are extremely resistant to degradation. This is due to their stable carbon-carbon bonds, which natural organisms and processes find difficult to break down. As a result, plastics can remain in the environment for hundreds of years, breaking down into tiny bits rather than fully decomposing (see Figure 1).

    The problem of microplastics

    Plastics in the environment degrade over time into tiny fragments known as microplastics, defined as particles smaller than 5 mm in diameter. These microplastics originate from a variety of sources, including the breakdown of larger plastic debris, microbeads used in personal care products, synthetic fibres shed from textiles, and industrial processes. They are now found in every corner of the globe, from the deepest parts of the oceans to remote mountain ranges, the air we breathe, and even drinking water and food.

    Microplastics are particularly problematic in marine environments. Marine animals such as fish, birds, and invertebrates often mistake microplastics for food. Once ingested, these particles can accumulate in the animals' digestive systems, leading to malnutrition, physical damage, or even death. More concerning is the potential for these plastics to work their way up the food chain. Predators, including humans, may consume prey that has ingested microplastics, raising concerns about the potential effects on human health. Recent studies have detected microplastics in various human-consumed products, including seafood, table salt, honey, and drinking water. Alarmingly, microplastics have also been found in human organs, blood, and even placentas, highlighting the pervasive nature of this contamination. While the long-term environmental and health effects of microplastics are still not fully understood, research raises significant concerns. Microplastics can carry toxic substances such as persistent organic pollutants (POPs) and heavy metals, posing risks to the respiratory, immune, reproductive, and digestive systems. Exposure through ingestion, inhalation, and skin contact has been linked to DNA damage, inflammation, and other serious health issues.

    Biodegradable plastics: a possible solution?

    One possible solution to plastic pollution is the development of biodegradable plastics, which are engineered to degrade more easily in the environment. These plastics can be created from natural sources such as maize starch or sugarcane, which are turned into polylactic acid (PLA), or from petroleum-based compounds designed to disintegrate more quickly. However, biodegradable polymers do not provide a perfect answer. Many of these materials require specific conditions, such as high heat and moisture, to degrade effectively. These conditions are more commonly found in industrial composting plants than in landfills or natural ecosystems. As a result, many biodegradable plastics can persist in the environment if not properly disposed of. Furthermore, their production frequently requires significant quantities of energy and resources, raising questions about whether they are actually more sustainable than traditional plastics.

    Innovations in plastic recycling

    Given the limitations of biodegradable polymers, improving recycling technology has become a central focus in the battle against plastic waste. Traditional approaches, such as mechanical recycling, involve breaking down plastics and remoulding them into new products; however, this can degrade the material's quality over time. Furthermore, many types of plastics are difficult or impossible to recycle due to variations in chemical structure, contamination, or a lack of adequate machinery. Recent advances have been made to address these issues. Chemical recycling, for example, converts plastics back into their original monomers, allowing them to be re-polymerised into high-quality plastic. This technique has the potential to recycle materials indefinitely without compromising functionality. Another intriguing technique is enzymatic recycling, in which specially designed enzymes break down plastics into their constituent parts at lower temperatures, reducing the amount of energy required for the process. While these technologies provide hope, they are still in the early phases of development and face significant economic and logistical challenges. Expanding recycling infrastructure and developing more effective methods are critical to reducing the amount of plastic waste entering the environment.

    The way forward

    The environmental impact of plastics has inspired a global campaign to reduce plastic waste. Governments, industry, and consumers are taking action by prohibiting single-use plastics, increasing recycling efforts, and developing alternatives. However, addressing the plastic problem requires a multifaceted strategy. This includes advances in material science, improved waste management systems, and, perhaps most crucially, a transformation in how we perceive and use plastics in our daily lives. The chemistry of plastics is both fascinating and dangerous. While they have transformed businesses and increased quality of life, their long-term presence in the environment poses a substantial risk to ecosystems and human health. The future of plastics may depend less on developing new materials than on rethinking how we make, use, and discard them, so that we can build a more sustainable relationship with these intricate polymers.

    Written by Laura K

    Related articles: Genetically-engineered bacteria break down plastic / The environmental impact of EVs
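    To put the phrase "lengthy chains of monomers" in perspective, the back-of-envelope Python sketch below estimates how many repeat units make up a single polyethylene chain. The monomer molecular weight (about 28 g/mol for the C2H4 repeat unit) is standard chemistry; the chain molecular weight of 100,000 g/mol is an assumed, illustrative order of magnitude, not a figure from the article.

    ```python
    # Back-of-envelope estimate: degree of polymerisation = Mn(polymer) / M(repeat unit)
    ETHYLENE_REPEAT_UNIT_MW = 28.05   # g/mol for the -CH2-CH2- repeat unit
    assumed_chain_mw = 100_000        # g/mol, hypothetical number-average molecular weight

    degree_of_polymerisation = assumed_chain_mw / ETHYLENE_REPEAT_UNIT_MW
    print(f"~{degree_of_polymerisation:,.0f} monomer units per chain")  # prints ~3,565
    ```

    Chains of thousands of strongly bonded carbon atoms are exactly why these materials resist the biological and chemical processes that break down most natural matter.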

  • Neuroimaging and spatial resolution | Scientia News

    Peering into the mind
    Last updated: 10/07/25, 10:24 | Published: 04/11/24, 14:35

    Introduction

    Neuroimaging has been at the forefront of brain discovery ever since the first images of the brain were recorded in 1919 by Walter Dandy, using a technique called pneumoencephalography. Fast-forward over a century and neuroimaging is far more than blurry single images. Modern techniques allow us to observe real-time changes in brain activity with millisecond resolution, leading to breakthroughs in scientific discovery that would not be possible without it. Memory is a great example: with functional magnetic resonance imaging (fMRI) we have been able to demonstrate that recent long-term memories are stored and retrieved with brain activity in the hippocampus, but as memories become more remote, they are transferred to the medial temporal lobe. While neuroimaging techniques keep the doors open for new and exciting discoveries, spatial limitations leave many questions unanswered, especially at the cellular and circuit level. For example, within the hippocampus, is each memory encoded via a completely distinct neural circuit? Or do similar memories share similar neural pathways? Within just a cubic millimetre of brain tissue there can be up to 57,000 cells (most of them neurons), all of which may have different properties, be part of different circuits, and produce different outcomes. This almost makes revolutionary techniques such as fMRI, with their near-unparalleled image quality, seem pointless. To truly understand how neural circuits work, we have to dig as deep as possible and record from the smallest regions possible. So that begs the question: how small can we actually record in the human brain?

    EEG

    2024 marks a century since the first recorded electroencephalography (EEG) scan, made by Hans Berger in Germany. This technique involves placing electrodes around the scalp to record activity across the whole outer surface of the brain (see Figure 1). Unlike the methods discussed below, EEG provides a direct measure of brain activity, recording the brain's electrical signals at the scalp. However, because the electrodes sit only on the scalp, EEG can only pick up activity from the outer cortex, missing important activity in deeper parts of the brain. In our memory example, this means it would completely miss any activity in the hippocampus. EEG's spatial resolution is also quite underwhelming, typically resolving activity only to within a few centimetres - not great for mapping behaviours to specific structures in the brain. EEG scans are used in a medical environment to measure overall activity levels, assisting with epilepsy diagnosis. Let's look at what we can use to dig deeper into the brain and locate signals of activity.

    PET

    Positron emission tomography (PET) scans offer a chance to record activity throughout the whole brain by injecting a radioactive tracer, typically glucose labelled with a mildly radioactive substance. The tracer is tracked, and uptake in specific parts of the brain is a sign of greater metabolic activity, indicating a higher signalling rate. PET already offers a resolution far beyond the capacities of EEG, distinguishing activity between areas with a resolution of up to 4mm.
    With the use of different radioactive labels, we can also detect the activity of specific populations of neurons, such as dopamine neurons, to diagnose Parkinson's disease. In fact, many studies have reliably demonstrated the ability of PET scans to detect the root cause of Parkinson's disease - a reduced number of dopamine neurons in the basal ganglia - before symptoms become too extreme. As impressive as that sounds, a 4mm resolution can locate activity in large areas of the cortex but is limited in its power to resolve discrete cortical layers. Take the human motor cortex, for example: its six layers together have an average thickness of only 2.79mm. A PET scan would not be powerful enough to determine which layer is most active, so we need to dig a little deeper.

    fMRI

    Since its inception in the early 1990s, fMRI has gained the reputation of being the gold standard for human neuroimaging, thanks to its non-invasiveness, lack of artefacts, and reliable signalling. fMRI uses nuclear magnetic resonance to measure changes in blood oxygenation, known as BOLD signals, which correlate with neural activity. Compared with EEG, measuring blood oxygen levels offers unimpressive temporal resolution and is only an indirect measure of neural activity. fMRI makes up for this with its superior spatial resolution, resolving features as small as 1mm apart. Using our human motor cortex example, this would allow us to resolve activity between every 2-3 layers - not a bad return considering it doesn't even leave a scar. PET, and especially EEG, pale in comparison to the capabilities of fMRI, which has since been used for a wide range of neuroimaging research. Most notably, structural MRI has been used to support the idea of hippocampal involvement during spatial navigation from memory tasks (see Figure 2). Its resolving power and highly precise images also make it suitable for mapping surgical procedures.

    Conclusion

    With a resolution of up to 1mm, fMRI takes the crown as the human neuroimaging technique with the best spatial resolution! Table 1 shows a brief summary of each neuroimaging method. Unfortunately, though, there is still so much more we need to do to look at individual circuits and connections. As mentioned before, even within a cubic millimetre of brain we have five figures' worth of cells, making the number of neurons that make up the whole brain impossible to comprehend. To observe the activity of a single neuron, we would need an imaging technique capable of viewing cells in the tens-of-micrometres range. So what can we do to reach the resolution we desire while remaining suitable for humans? Maybe there isn't a solution. Instead, if we want to record single-neuron activity, we may have to take inspiration from invasive animal techniques such as microelectrode recordings. Typically used in rats and mice, these can achieve single-cell resolution and probe neuroscience at its smallest components. It would be unethical to stick an electrode into a healthy human's brain and record activity, but perhaps in the future a non-invasive form of electrode recording could be developed? The current neuroscience field is foggy and shrouded in mystery. Most of these mysteries simply cannot be solved with the research techniques currently at our disposal. But this is what makes neuroscience exciting - there is still so much to explore!
    Who knows when we will be able to map behaviours to neural circuits with single-cell precision, but with how quickly imaging techniques are being enhanced and fine-tuned, I wouldn't be surprised if it's sooner than we think.

    Written by Ramim Rahman

    Related articles: Neuromyelitis optica / Traumatic brain injuries

    REFERENCES
    Hoeffner, E.G. et al. (2011) 'Neuroradiology back to the future: brain imaging', American Journal of Neuroradiology, 33(1), pp. 5–11. doi:10.3174/ajnr.a2936.
    Maguire, E.A. and Frith, C.D. (2003) 'Lateral asymmetry in the hippocampal response to the remoteness of autobiographical memories', The Journal of Neuroscience, 23(12), pp. 5302–5307. doi:10.1523/jneurosci.23-12-05302.2003.
    Wong, C. (2024) 'Cubic millimetre of brain mapped in spectacular detail', Nature, 629(8013), pp. 739–740. doi:10.1038/d41586-024-01387-9.
    Butman, J.A. and Floeter, M.K. (2007) 'Decreased thickness of primary motor cortex in primary lateral sclerosis', AJNR American Journal of Neuroradiology, 28(1), pp. 87–91.
    Loane, C. and Politis, M. (2011) 'Positron emission tomography neuroimaging in Parkinson's disease', American Journal of Translational Research, 3(4), pp. 323–341.
    Maguire, E.A. et al. (2000) 'Navigation-related structural change in the hippocampi of taxi drivers', Proceedings of the National Academy of Sciences, 97(8), pp. 4398–4403. doi:10.1073/pnas.070039597.
    [Figure 1] EEG (electroencephalogram) (2024) Mayo Clinic. Available at: https://www.mayoclinic.org/tests-procedures/eeg/about/pac-20393875 (Accessed: 18 October 2024).
    [Figure 2] Boccia, M. et al. (2016) 'Direct and indirect parieto-medial temporal pathways for spatial navigation in humans: evidence from resting-state functional connectivity', Brain Structure and Function, 222(4), pp. 1945–1957. doi:10.1007/s00429-016-1318-6.
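    As a quick back-of-envelope illustration of what these resolution figures mean in practice (not part of the article), the Python sketch below compares each technique's quoted resolution with the 2.79mm total thickness of the six motor-cortex layers mentioned above; the 20mm value for EEG is an assumed stand-in for "a few centimetres".

    ```python
    # Rough comparison: how many cortical layers fit inside one resolution element?
    MOTOR_CORTEX_THICKNESS_MM = 2.79                      # total thickness of the six layers (from the article)
    LAYER_THICKNESS_MM = MOTOR_CORTEX_THICKNESS_MM / 6    # ~0.47 mm per layer

    resolutions_mm = {"EEG": 20.0, "PET": 4.0, "fMRI": 1.0}  # EEG figure is an assumed 'few cm'
    for technique, res in resolutions_mm.items():
        layers_spanned = res / LAYER_THICKNESS_MM
        print(f"{technique}: {res:g} mm spans roughly {layers_spanned:.1f} layers")
    # fMRI: 1 mm spans ~2.2 layers, consistent with the article's 'every 2-3 layers'.
    ```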

  • Germline gene therapy (GGT): its potential and problems | Scientia News

    A Scientia News Biology and Genetics collaboration
    Last updated: 09/07/25, 14:14 | Published: 21/01/24, 11:47

    Introduction

    Genetic diseases arise when there are alterations or mutations to genes or genomes. In most acquired cases, mutations occur in somatic cells. However, when these mutations happen in germline cells (i.e. sperm and egg cells), they are incorporated into the genome of every cell. In other words, should this mutation be deleterious, all cells will carry the problem. Furthermore, the mutation becomes heritable. This is partly why most genetic diseases are complicated to treat and cure.

    Gene therapy is a concept that has been circulating among geneticists for some time. Indeed, addressing a disease directly through the genes that caused or promoted it has been an attractive and appealing avenue for therapy. The first successful attempt at gene therapy dates back to 1990, when retrovirus-derived vectors were used to transduce the T-lymphocytes of a 4-year-old girl with severe combined immunodeficiency caused by adenosine deaminase (ADA) deficiency. The trial was a great success, dramatically improving the girl's condition and marking a major milestone in the history of genetics. Furthermore, the success of viral vectors opened new avenues for gene editing, such as zinc finger nucleases and the very prominent CRISPR-Cas9. For example, in mid-November 2023, the UK Medicines and Healthcare products Regulatory Agency (MHRA) approved the CRISPR-based gene therapy Casgevy for sickle cell disease and β-thalassemia. It is clear that the advent of gene therapies has significantly shaped the treatment landscape and our approach to genetic disorders. However, for most of its history, gene therapy has been performed almost exclusively on somatic cells or certain stem cells, not germline cells.

    How it works

    As mentioned, inherited disease-associated mutations are also present in germline cells, or gametes. The current approach to gene therapy targets the genes of specific somatic or multipotent stem cells. For example, in the 1990 trial the ADA-deficient T-lymphocytes were targeted, and in the recently approved Casgevy, the BCL11A erythroid-specific enhancer in haematopoietic stem cells. The methods involved in gene therapies also vary, each with advantages and limitations and each carrying some therapeutic risk. Nevertheless, when aiming to treat genetic diseases, gene therapy must answer two questions: how to do it, and where. Several gene therapy strategies have been elucidated. Contrary to popular belief, gene therapies do not always directly change or edit the mutated gene. Instead, some gene therapies target enhancers or regulatory regions that control the expression of mutated genes. In other cases, such as Casgevy, enhancers of a different subtype are targeted. By targeting and reducing BCL11A expression, Casgevy aims to induce the production of foetal haemoglobin (HbF), which contains the γ-globin chain, as opposed to the defective β-chain in the adult haemoglobin (HbA) of sickle cell disease or β-thalassemia. Gene therapies can also be done ex vivo or in vivo. Ex vivo strategies involve extracting cells from the body and modifying them in the lab, whilst in vivo strategies modify cells directly without extraction (e.g. using viral or non-viral vectors to insert genes).
    In essence, the list of strategies for gene therapy is growing, each with limitations and a promising prospect of tackling genetic diseases. These methods aim to "cure" genetic diseases in patients. However, the strategies mentioned above have all been researched using, and perhaps made therapeutically for, somatic or multipotent stem cells. Germline gene therapy (GGT) instead involves directly editing the genetic material of germline cells - the egg and sperm cells - before fertilisation. This means that, if done successfully, fertilisation of these cells will eliminate the disease phenotype from all cells of the offspring instead of only the effector cells. Potentially, GGT may eradicate a genetic disease for all future generations. It is therefore an appealing alternative to human embryo editing, as it achieves a similar or identical result without the need to modify an embryo. However, due to its nature, its advantage may also be its limitation.

    Ethical issues

    GGT has the potential to cure genetic disorders within families. However, because it involves editing either the egg or sperm cells before fertilisation, prominent ethical issues are associated with this method, such as the use of embryos for research, among others. Firstly, GGT leaves no room for error. Mistakes during the gene modification process could cause systemic side effects or a harsher disease than the one initially targeted, leading to a multigenerational effect. For example, if parents went to a clinic to check whether one or both of their germ cells carry a gene implicated in cystic fibrosis, an off-target mistake during GGT could lead to their child developing Prader-Willi syndrome or another hereditary disorder caused by editing out genes that are significant for development. Secondly, an ecological perspective asserts that the current human gene pool, the outcome of many generations of natural selection, could be weakened by germline gene editing. There is also the religious perspective, in which editing germline cells or embryos goes against the natural order of how God created living creatures, with their natural phenotypes "assigned" for life. Another reason GGT may be unethical is that it could lead to eugenics, or the creation of "designer babies". These are controversial ideas dating back to the late 19th century, in which certain traits are deemed "better" than others, implying that they should appear in human populations while individuals without them are sterilised or killed off. For instance, it is impossible to forget the Nazi Aktion T4 programme, which sought to murder disabled people because they were seen as "less suitable" for society.

    Legal and social issues

    Eugenics is notorious today because of its history. Genetic counselling is sometimes viewed in this light, as one possible outcome is parents ending pregnancies if their child would inherit a genetic disease. Moreover, understanding GGT's societal influence is crucial, so clinical trial designs must consider privacy, self-ownership, informed consent and social justice. In China, the public's emotional response to GGT in 2018 was mainly neutral, as shown in Figure 1, but some of the common "hot words" in the discussion were 'mankind', 'ethics', and 'law'. With this said, regulations coordinated with other nations are required to build a wider social consensus on GGT research. In some countries, the rules governing GGT are stricter.
    For example, it is harder to conduct experiments in the United States using purposely created or altered human embryos with heritable mutations, because the legal consequences can include prison time and $100,000 fines. Furthermore, when donors are required, they must be fairly compensated, and discussing methodologies is crucial because there are concerns about how they can impact men and women differently. South Africa has two opposing schools of thought on GGT and gene editing: bioconservatism worries about genetic modification and calls for restrictions, while bioliberalism is receptive to the technology because of its possible benefits. Likewise, revisions to the current regulations have been suggested, such as rethinking GGT research or carrying out a benefit-risk analysis for the future child.

    Conclusion

    Overall, gene therapies have transformed the therapeutic landscape for genetic diseases. GGT is nevertheless a unique approach that promises to completely cure a genetic disease for families without the need to edit human embryos. However, GGT's prospects may do more harm than good, because its effects, therapeutic or otherwise, are translated systemically and multigenerationally. On top of that, controversial ideas such as designer babies can arise if GGT is pushed too far. Additionally, countries regulate GGT differently owing to cultural attitudes towards particular scientific innovations and the beginning of life. Reflecting on the ethical, legal and social issues, GGT remains contentious and probably will not be a prominent treatment option for genetic diseases anytime soon.

    Written by Sam Jarada and Stephanus Steven
    Introduction and How it works by Stephanus; Ethical issues and Legal and social issues by Sam; Conclusion by Sam and Stephanus

    Related article: Monkey see, monkey clone

    References:
    Cavazzana-Calvo, M. et al. (2000) 'Gene therapy of human severe combined immunodeficiency (SCID)-X1 disease', Science, 288(5466), pp. 669–672. doi:10.1126/science.288.5466.669.
    Demarest, T.G. and Biferi, M.G. (2022) 'Translation of gene therapy strategies for amyotrophic lateral sclerosis', Trends in Molecular Medicine, 28(9), pp. 795–796. doi:10.1016/j.molmed.2022.07.001.
    Frangoul, H. et al. (2021) 'CRISPR-Cas9 gene editing for sickle cell disease and β-thalassemia', New England Journal of Medicine, 384(3), pp. 252–260. doi:10.1056/nejmoa2031054.
    Agar, N. (2018) 'Why we should defend gene editing as eugenics', Cambridge Quarterly of Healthcare Ethics, 28(1), pp. 9–19. doi:10.1017/s0963180118000336.
    de Miguel Beriain, I., Payán Ellacuria, E. and Sanz, B. (2023) 'Germline gene editing: the gender issues', Cambridge Quarterly of Healthcare Ethics, 32(2), pp. 1–7. doi:10.1017/s0963180122000639.
    Genome.gov (2021) 'Eugenics: its origin and development (1883 - present)'. Available at: https://www.genome.gov/about-genomics/educational-resources/timelines/eugenics.
    Johnston, J. (2020) 'Budgets versus bans: how U.S. law restricts germline gene editing', Hastings Center Report, 50(2), pp. 4–5. doi:10.1002/hast.1094.
    Kozaric, A., Mehinovic, L., Stomornjak-Vukadin, M., Kurtovic-Basic, I., Catibusic, F., Kozaric, M., Mesihovic-Dinarevic, S., Hasanhodzic, M. and Glamuzina, D. (2016) 'Diagnostics of common microdeletion syndromes using fluorescence in situ hybridization: single center experience in a developing country', Bosnian Journal of Basic Medical Sciences, 16(2). doi:10.17305/bjbms.2016.994.
    Luque Bernal, R.M. and Buitrago Bejarano, R.J. (2018) 'Assessoria genética: uma prática que estimula a eugenia?', Revista Ciencias de la Salud, 16(1), p. 10. doi:10.12804/revistas.urosario.edu.co/revsalud/a.6475.
    Nielsen, T.O. (1997) 'Human germline gene therapy', McGill Journal of Medicine, 3(2). doi:10.26443/mjm.v3i2.546.
    Niemiec, E. and Howard, H.C. (2020) 'Germline genome editing research: what are gamete donors (not) informed about in consent forms?', The CRISPR Journal, 3(1), pp. 52–63. doi:10.1089/crispr.2019.0043.
    Peng, Y., Lv, J., Ding, L., Gong, X. and Zhou, Q. (2022) 'Responsible governance of human germline genome editing in China', Biology of Reproduction, 107(1). doi:10.1093/biolre/ioac114.
    Shozi, B. (2020) 'A critical review of the ethical and legal issues in human germline gene editing: considering human rights and a call for an African perspective', South African Journal of Bioethics and Law, 13(1), p. 62. doi:10.7196/sajbl.2020.v13i1.00709.
    Thaldar, D., Botes, M., Shozi, B., Townsend, B. and Kinderlerer, J. (2020) 'Human germline editing: legal-ethical guidelines for South Africa', South African Journal of Science, 116(9/10). doi:10.17159/sajs.2020/6760.
    Zhang, D. and Lie, R.K. (2018) 'Ethical issues in human germline gene editing: a perspective from China', Monash Bioethics Review, 36(1-4), pp. 23–35. doi:10.1007/s40592-018-0091-0.
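    Since both Casgevy and the embryo-editing debate revolve around CRISPR-Cas9, a short illustrative sketch of the very first step of guide design may help make the mechanism concrete. It is not from the article: the DNA sequence below is invented purely for demonstration (it is not the BCL11A enhancer), and real guide design involves far more than finding a PAM.

    ```python
    # Toy sketch: Cas9 cuts next to an "NGG" protospacer-adjacent motif (PAM),
    # so candidate targets are found by scanning a sequence for NGG sites and
    # taking the 20 bases immediately upstream as the guide (protospacer).
    import re

    target = "ATGCGTACGTTAGCCATTGGACGTACGATCGGAGGCTTACCGGTACGTAAGG"  # made-up sequence

    for match in re.finditer(r"(?=([ACGT]GG))", target):
        pam_start = match.start()
        if pam_start >= 20:                          # need a full 20-nt protospacer upstream
            guide = target[pam_start - 20:pam_start]
            pam = match.group(1)
            print(f"candidate guide: {guide}  PAM: {pam}")
    ```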

  • Schizophrenia, Inflammation and Accelerated Aging: a Complex Medical Phenotype | Scientia News

    Setting Neuropsychiatry In a Wider Medical Context
    Last updated: 20/02/25, 11:54 | Published: 24/05/23, 09:45

    In novel research by Campeau et al. (2022), proteomic analysis of 742 proteins was performed on blood plasma from 54 schizophrenic participants and 51 age-matched healthy volunteers. This investigation validated the previously contentious link between premature aging and schizophrenia by testing for a wide variety of proteins involved in cognitive decline, aging-related comorbidities, and biomarkers of earlier-than-average mortality. The results demonstrated that age-linked changes in protein abundance occur earlier in life in people with schizophrenia. These data also help to explain the heightened incidence of age-related disorders and early all-cause death in schizophrenic people, with protein imbalances associated with both phenomena present in all schizophrenic age strata over age 20.

    This research is the result of years of medical intrigue regarding the biomedical underpinnings of schizophrenia. The comorbidities and earlier death associated with schizophrenia were focal points of research for many years, but only now have plausible explanations been put forward for these phenomena. The greater incidence of early death in schizophrenia was explained in this study by the increased abundance of certain proteins. Specifically, these included biomarkers of heart disease (Cystatin-3, Vitronectin), blood clotting abnormalities (Fibrinogen-B) and an inflammatory marker (L-Plastin). These proteins were tested for because of their inclusion in a dataset of protein biomarkers of early all-cause mortality in healthy and mentally ill people published by Ho et al. (2018) in the Journal of the American Heart Association. Furthermore, a protein linked to degenerative cognitive deficit with age, Cystatin C, was present at increased levels in schizophrenic participants both under and over the age of 40. This may help explain why antipsychotics have limited effectiveness in reducing the cognitive effects of schizophrenia. In this study, schizophrenic participants under 40 had plasma protein content similar to the healthy over-60 stratum, including biomarkers of cognitive decline, age-related disease and death, and they showed the same likelihood of these phenomena as the healthy over-60 set.

    These results could point towards the use of medications that target age-related cognitive decline and mortality-linked protein abundances in treating schizophrenia. One such option is polyethylene glycol-Cp40, a C3 inhibitor used to treat paroxysmal nocturnal haemoglobinuria, which could be used to ameliorate the risk of developing age-related comorbidities in schizophrenic patients. This treatment may be effective in reducing C3 activation, which would reduce opsonisation (the tagging of detected foreign material in blood). When overactive, C3 can cause the opsonisation of healthy blood cells, leading to their destruction (haemolysis), which can catalyse the reduction in blood volume implicated in cardiac events and other comorbidities. However, whether this treatment would benefit those with schizophrenia is yet to be proven.
    The potential of this research to catalyse new treatment options for schizophrenia cannot be overstated. Since the publication of Kilbourne et al. in 2009, the role of cardiac comorbidities in driving early death in schizophrenic patients has been accepted medical dogma. The discovery of exact protein targets for reducing the incidence of age-linked conditions and early death in schizophrenia will allow the condition to be treated more holistically, with greater attention to the fact that schizophrenia is not only a psychiatric illness but also a neurocognitive disorder with associated comorbidities that have to be adequately prevented.

    Written by Aimee Wilson

    Related articles: Genetics of ageing and longevity / Ageing and immunity / Inflammation therapy
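    To illustrate the kind of group comparison that underlies a plasma proteomics study like this, here is a small, self-contained Python sketch. The data are randomly simulated stand-ins (not values from Campeau et al.), and the statistics shown (a per-protein t-test with Bonferroni correction) are a generic textbook choice rather than the authors' actual pipeline.

    ```python
    # Simulated example: flag proteins whose abundance differs between two groups.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_proteins, n_patients, n_controls = 742, 54, 51   # sizes quoted in the article

    patients = rng.normal(size=(n_proteins, n_patients))
    controls = rng.normal(size=(n_proteins, n_controls))
    patients[:20] += 0.8   # pretend the first 20 proteins are genuinely elevated

    _, p_values = stats.ttest_ind(patients, controls, axis=1)

    # Bonferroni correction: a simple (if conservative) guard against false positives
    significant = p_values < 0.05 / n_proteins
    print(f"{significant.sum()} proteins flagged as differentially abundant")
    ```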

  • Obesity in children | Scientia News

    Childhood obesity
    Last updated: 18/11/24 | Published: 25/03/23

    Obesity is one of the most common health problems across all age groups. As per the World Health Organization, obesity or overweight is defined as abnormal or excessive fat accumulation that may impair health. Obesity is measured by Body Mass Index (BMI); the normal BMI for children ranges from 13.53 to 20.08. Children are the age group most vulnerable to becoming overweight, and early prevention reduces the overall burden on healthcare systems globally.

    Causes of obesity

    Obesity mainly results from an imbalance between energy intake and energy expenditure. There are several reasons for becoming overweight; five main causes are:
    - Genetic factors
    - Food quality and quantity
    - Parental beliefs
    - Sedentary lifestyle
    - Environmental factors

    Symptoms of childhood obesity

    - Shortness of breath during physical activity
    - Difficulty breathing while sleeping
    - Becoming easily fatigued
    - Gastric problems such as gastro-oesophageal reflux disease
    - Fat deposits in various areas of the body, such as the breasts, abdomen and thighs

    Prevalence

    The prevalence of overweight children is increasing every year. In England, prevalence rose rapidly in 2019/2020: National Child Measurement Programme data show that the obesity rate was 9.9% in Reception (4-5 year olds) and increased to 21% by Year 6. Childhood obesity should be tackled early so that complications can be managed before they worsen. There are many ways to prevent childhood obesity.

    Prevention

    The National Institute for Health and Care Excellence guidance currently recommends lifestyle intervention as the main treatment for the prevention of childhood obesity. Diet management and physical activity are the main areas to focus on. Dietary modification includes limiting refined grains, sweets, potatoes, red and processed meat, and sugary drinks, while increasing intake of fresh fruit and vegetables and whole grains, and choosing healthier options instead of fatty and junk food. On top of that, adding physical activity to the daily routine is one of the key factors in reducing obesity. Another way for communities to tackle obesity is to take part in government programmes such as Healthier You and the NHS Digital Weight Management Programme, which help with managing weight.

    Written by Chhaya Dhedhi

    Related articles: Depression in children / Childhood stunting in developing nations / Nature vs nurture in childhood intelligence
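    Since the article frames obesity screening in terms of BMI, a one-line calculation makes the formula concrete: BMI is weight in kilograms divided by the square of height in metres. The example values below are hypothetical, and in practice children's BMI is interpreted against age- and sex-specific reference ranges rather than a single cut-off.

    ```python
    # Minimal sketch of the BMI formula used for screening (kg / m^2).
    def bmi(weight_kg: float, height_m: float) -> float:
        return weight_kg / height_m ** 2

    print(round(bmi(weight_kg=30.0, height_m=1.30), 1))  # hypothetical child: 17.8
    ```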

  • Key discoveries in the history of public health | Scientia News

    Key historical events and theories in public health
    Last updated: 17/11/24 | Published: 10/02/23

    Introduction

    Now more than ever, public health, which looks at promoting health and preventing disease within a society, has become crucial. Numerous events and concepts have helped shape our current health systems, and without that previous knowledge to evolve from, it is possible they would not have advanced. This article focuses on some of these key events and concepts.

    Humoral Theory (Ancient Greek and Roman times)

    To begin, there was the Humoral Theory, which held that disease was caused by imbalances in four fluids, or humours - blood, yellow bile, black bile and phlegm - which equated to the elements of air, fire, earth and water respectively. The imbalance could come from habits like overeating and too little or too much exercise, or from external factors such as the weather. This theory is thought to have originated in the Hippocratic Corpus, a compilation of 60 medical documents from the Ancient Greek era attributed to Hippocrates. Although this theory is, as we now know, flawed, it did provide a foundational understanding of the human body and was used in public health for centuries before being discredited in favour of the Germ Theory established during the mid-19th century.

    Miasma Theory (Ancient Greek era to the 19th century)

    Another theory replaced by the Germ Theory was the Miasma Theory, which stated that diseases like the plague and cholera were spread by toxic vapours from the ground or from decomposing matter. This theory, along with the Humoral Theory, was accepted for thousands of years from the Ancient Greek era onwards. With regards to the cholera outbreaks of the Victorian era, John Snow's theory that polluted water caused cholera was still not accepted by the scientific community at the time of his death in 1858. Eventually, though, his theory became accepted when Joseph Bazalgette worked to fix London's sewage system to prevent more deaths from cholera. This event, together with the Germ Theory, led to the Miasma and Humoral theories being disproved, although they had provided a foundational understanding of how diseases spread.

    The discovery of vaccines (late 18th century)

    Aside from theories such as the four humours, there were discoveries that advanced public health measures, such as vaccination, which eradicated smallpox and is still used today to reduce the severity of diseases such as COVID-19, influenza and polio. The origins of successful vaccines can be traced back to Edward Jenner, who in 1796 retrieved samples from cowpox lesions on a milkmaid because he had noticed that contracting cowpox protected against smallpox. With this in mind, he inoculated an 8-year-old boy, who developed mild symptoms and then recovered. Without this event, it is likely that the human population would be significantly smaller, with greater vulnerability to infectious diseases and weaker, less stable public health systems.

    (Image: a COVID-19 injection.)
    Germ Theory (19th century)

    As for current scientific theories relating to public health, there is the widely accepted Germ Theory, advanced in the 19th century by Robert Koch among others, which states that microorganisms can cause disease. Koch helped establish the theory through his work on anthrax: examining the blood of cows that had died of anthrax under a microscope, he observed rod-shaped bacteria and hypothesised that they caused the disease. To test this, he infected mice with blood from the cows, and the mice also developed anthrax. After these experiments he developed his postulates, and even though the postulates have limitations - such as not taking prions into account, and the fact that certain bacteria do not satisfy them - they are vital to the field of microbiology, and in turn important to public health.

    The establishment of modern epidemiology (19th century)

    Another key concept for public health is epidemiology, the study of the distribution and determinants of chronic and infectious diseases within populations. One of epidemiology's key figures is John Snow, who investigated the cholera epidemic in London in 1854 and discovered that contaminated water from specific water pumps was the source of the outbreaks. John Snow's work on cholera earned him the title of the "father of modern epidemiology", as well as providing a basic understanding of the disease. This event, among others, has paved the way for health systems to become more robust in controlling outbreaks such as influenza and measles.

    Conclusion

    Looking at the key events above, it is evident that each of them, through the contributions of the scientists involved, has played an essential role in building today's public health systems. However, public health, like any other science, is constantly evolving, and there are still future advancements to look forward to that can increase health knowledge.

    Written by Sam Jarada

    Related articles: Are pandemics becoming less severe? / Rare zoonotic diseases / How bioinformatics helped with COVID-19 vaccines

    REFERENCES
    Lagay F. The legacy of humoral medicine. AMA Journal of Ethics. 2002 Jul 1;4(7).
    Earle R. Humoralism and the colonial body. In: Earle R, editor. Cambridge: Cambridge University Press; 2012.
    Halliday S. Death and miasma in Victorian London: an obstinate belief. BMJ. 2001 Dec 22;323(7327):1469–71.
    Riedel S. Edward Jenner and the history of smallpox and vaccination. Proceedings (Baylor University Medical Center). 2005 Jan 18;18(1):21.
    National Research Council (US) Committee to Update Science, Medicine, and Animals. A theory of germs. National Academies Press (US); 2017.
    Sagar Aryal. Robert Koch and Koch's postulates. Microbiology Notes. 2022.
    Tulchinsky TH. John Snow, cholera, the Broad Street pump; waterborne diseases then and now. Elsevier; 2018. p. 77–99.

  • The Lyrids meteor shower | Scientia News

    Last updated: 14/11/24 | Published: 10/06/23

    The Lyrids bring an end to the meteor shower drought that exists during the first few months of the year. On April 22nd, the shower is predicted to reach its peak, offering skygazers an opportunity to witness up to 20 bright, fast-moving meteors per hour, which leave long, fiery trails across the sky, without any specialist equipment. The name Lyrids comes from the constellation Lyra - the lyre, or harp - which is the radiant point of this shower, i.e. the position on the sky from which the paths of the meteors appear to originate. In the Northern Hemisphere, Lyra rises above the horizon in the northeast and reaches the zenith (directly overhead) shortly before dawn, making this the optimal time to observe the shower. Lyra is a prominent constellation, largely due to Vega, which forms one of its corners and is one of the brightest stars in the sky. Interestingly, Vega is defined as the zero point of the magnitude scale - a logarithmic system used to measure the brightness of celestial objects. Technically, the brightness of all stars and galaxies is measured relative to Vega!

    Have you ever wondered why meteor showers occur exactly one year apart and why they always radiate from the same defined point in the sky? The answer lies in the Earth's orbit around the Sun, which takes 365 days. During this time, Earth may encounter streams of debris left by a comet, composed of gas and dust particles that are released when an icy comet approaches the Sun and vaporises. As the debris particles enter Earth's atmosphere, they burn up due to friction, creating a streak of light known as a meteor. (Meteorites are the fragments that make it through the atmosphere to the ground.) The reason the Lyrids meteor shower peaks in mid-to-late April each year is that the Earth encounters the same debris stream at the point on its orbit corresponding to mid-to-late April. Comets and their debris trails have very eccentric but predictable orbits, and the Earth passes through the trail of Comet Thatcher in mid-to-late April every year. Additionally, Earth's orbit intersects the trail at approximately the same angle every year, and from the perspective of an observer on Earth, the constellation Lyra most closely matches the radiant point of the meteors when they are mapped onto the canvas of background stars in the night sky.

    (Image: The Lyrids meteor shower peaks in mid-to-late April each year. Credit: EarthSky.org)

    This year, there is a fortunate alignment of celestial events. New Moon occurs on April 20th, meaning that by the time the Lyrids reach their maximum intensity, the Moon is only 6% illuminated, resulting in darker skies and an increased chance to see this dazzling display.

    Written by Joseph Brennan

    Related article: Lonar Lake
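    Since the article mentions that Vega defines the zero point of the magnitude scale, the standard textbook relation may be a useful aside (it is not given in the article): an object's apparent magnitude follows from the ratio of its measured flux to Vega's, with every 5 magnitudes corresponding to a factor of 100 in brightness.

    ```python
    # The standard magnitude relation: m = -2.5 * log10(F / F_Vega)
    import math

    def apparent_magnitude(flux_ratio_to_vega: float) -> float:
        """Magnitude of an object whose flux is flux_ratio_to_vega times Vega's."""
        return -2.5 * math.log10(flux_ratio_to_vega)

    print(apparent_magnitude(0.01))    # 100x fainter than Vega  -> magnitude +5
    print(apparent_magnitude(100.0))   # 100x brighter than Vega -> magnitude -5
    ```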

  • Fake science websites | Scientia News

    How fake science websites hijack our trust in experts to misinform and confuse
    Last updated: 07/11/24 | Published: 29/12/22

    In science, research is peer-reviewed by experts. Now, fake science websites are mimicking the trappings of this process, capitalising on our trust in experts. In some cases, these websites are paid to publish fake science, and this is becoming more common: in a recent global survey, almost 50% of respondents said they see false or misleading information online daily. By understanding the methods these sites use, we can limit their influence.

    Hyperlinking is one technique used to convince website users. Links reassure the reader that the content is credible, but most people do not have experience in analytical techniques, so the links go unquestioned. Repetition is used to increase the visibility of fake science content and to saturate search engines; the same content can be repeated and spread across different sites. Readers who attempt "lateral reading" then find multiple websites that appear to corroborate the fake science from the initial source. Many of these sites only choose articles that agree with their perspective and depend on the audience not taking the time to follow up. Manufacturing doubt is another strategy, in which facts are intentionally changed to promote an agenda; it has been used by the tobacco industry and against action on the climate crisis. In this way, articles can maintain the façade of using scientific methods by referencing sources that are difficult to interpret, whilst research supported by sound evidence is dismissed and downplayed.

    When assessing a suspect site, first check the hyperlinked articles: fake science websites tend to link to repeated content from disreputable sources. Next, look at how often and where a piece has been reposted - legitimate science posts appear on credible websites. Some organisations also investigate and expose websites that feature fake science. Ultimately, these websites thrive on users not having the time or skills to look deeper into the evidence, so doing so will help expose them.

    Written by Antonio Rodrigues

    Related articles: Digital disinformation / COVID-19 misconceptions

  • A love letter from outer space: Lonar Lake, India | Scientia News

    The lunar terrain
    Last updated: 09/10/25, 10:05 | Published: 10/04/25, 07:00

    Around 50,000 years ago, outer space gifted the Earth a crater that formed the foundations of the world's third largest natural saltwater lake, situated within a flat volcanic area known as the Deccan Plateau. It resulted from a 2-million-tonne meteorite tunnelling through the Earth's atmosphere at a velocity of 90,000 km/hour and colliding with the Deccan Plateau. As time slipped away, pressure and heat melted the basalt rock tucked underneath the impact, and accumulated rainwater filled the crater. These foundations created what is famously known today as Lonar Lake. What is unique about Lonar Lake is that it is the only meteorite crater formed in basaltic terrain - analogous to lunar terrain. Additionally, the remnants bear similarities to the terrestrial composition of Mercury, which contains craters, basaltic rock and smooth plains resulting from volcanic activity.

    Several studies have set out to test the theory that the crater formed from the impact of a meteorite. One such collaborative study, conducted by the Smithsonian Institution of Washington D.C., the Geological Survey of India and the US Geological Survey, involved drilling holes at the bottom of the crater and scrutinising the composition of the rock samples recovered. When tested in the laboratory, the samples were found to contain remnants of basaltic rock that had been modified by the collision under high heat and pressure. In addition, shatter cones - cone-shaped fractures produced when high-velocity shock waves are transmitted into rock - were identified. These two observations align with the meteorite impact hypothesis.

    Along with its fascinating astronomical origins, the chemical composition of the lake within the crater has also intrigued scientists. Its dark green colour results from the presence of the blue-green alga Spirulina. The water also has a pH of 10, making it alkaline in nature and able to support the development of aquatic ecosystems. One explanation for the alkalinity is that groundwater of meteoric origin (derived from rainfall), containing dissolved CO2, undergoes a precipitation reaction with alkaline ions, leaving behind a carbonate precipitate that is alkaline in nature. What is also striking about the composition of the water is its saline nature, which coexists with the alkaline environment - a rare phenomenon in ecological sciences.

    The conception of the lake, from the matrimony of Earth with debris from outer space, has left its imprint on the physical world. It is a love letter, written in basaltic stone and saline water, fostering innovation in ecology. The inscription of the meteorite's journey within the crater has bridged two opposing worlds: one originating millions of miles away, and one that resides in the natural grounds of our souls.

    Written by Shiksha Teeluck

    Related articles: Are aliens on Earth? / JWST / The celestial blueprint of time: Stonehenge

    REFERENCES
    Taiwade, V. S. (1995). A study of Lonar lake - a meteorite-impact crater in basalt rock. Bulletin of the Astronomical Society of India, 23, 105–111.
    Tambekar, D. H., Pawar, A. L., & Dudhane, M. N. (2010). Lonar Lake water: past and present. Nature Environment and Pollution Technology, 9(2), 217–221.
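    As a back-of-envelope aside (not from the article), the impact energy can be estimated from the article's own figures of a 2-million-tonne meteorite travelling at 90,000 km/h; the TNT-equivalent conversion uses the standard value of 4.184e15 J per megaton.

    ```python
    # Kinetic energy of the impactor, using the figures quoted in the article.
    mass_kg = 2_000_000 * 1_000            # 2 million tonnes -> kg
    velocity_ms = 90_000 * 1_000 / 3_600   # 90,000 km/h -> 25,000 m/s

    kinetic_energy_j = 0.5 * mass_kg * velocity_ms ** 2
    megatons_tnt = kinetic_energy_j / 4.184e15   # 1 megaton TNT = 4.184e15 J

    print(f"{kinetic_energy_j:.2e} J  (~{megatons_tnt:.0f} megatons of TNT)")
    # ~6.25e17 J, on the order of 150 megatons of TNT
    ```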
