Search Index
- From genes to joints: how Ehlers-Danlos Syndrome is shaped by genetics | Scientia News
Mutations in collagen and related proteins are the primary cause of EDS
Published: 08/11/24, 11:40 | Last updated: 12/09/25, 11:12
This is article no. 10 in a series on rare diseases. Next article: CEDS: a break in cell death. Previous article: Breaking down Tay-Sachs.
Ehlers-Danlos Syndrome (EDS) is a group of 13 inherited disorders that affect connective tissues, particularly collagen. Collagen is a crucial protein that provides structure and strength to skin, joints, and blood vessels. Mutations in collagen or collagen-modifying proteins are the primary cause of most types of EDS.
EDS manifests through a range of symptoms that vary significantly depending on the specific type. However, there are common symptoms that many individuals with EDS experience, particularly joint and skin issues. For instance, joints can move beyond the normal range (joint hypermobility), leading to frequent dislocations and subluxations. Additionally, the skin can be stretched more than usual, creating a soft and velvety appearance known as skin hyperextensibility. As shown in Figure 1, skin bruising, scarring and tearing are also common, and many individuals experience chronic pain.
Life expectancy for individuals with EDS varies depending on the type of disorder, because specific forms cause structural changes in organs and tissues that can lead to serious, life-threatening complications. For example, vascular EDS (vEDS) is associated with a significantly reduced life expectancy due to the risk of spontaneous rupture of major blood vessels, intestines, and other hollow organs. Most other forms of EDS, such as classical EDS (cEDS), hypermobile EDS (hEDS), and kyphoscoliotic EDS (kEDS), generally do not significantly affect life expectancy, although the associated health complications can substantially impact quality of life.
Genetic basis
As stated, the various types of EDS encompass many genetic defects. For example, cEDS is linked to mutations in the COL5A1 or COL5A2 genes, which encode the α1 and α2 chains of type V collagen. cEDS follows an autosomal dominant inheritance pattern: roughly half of those diagnosed inherit the condition from an affected parent, while the other half carry a new (de novo) pathogenic variant.
Diagnosing EDS encompasses a variety of methods. Firstly, differential diagnosis may be used to distinguish between subtypes like cEDS and hEDS by evaluating clinical features such as joint hypermobility, skin characteristics, and scarring patterns. Clinicians use these specific symptoms, along with family history, to differentiate between the subtypes; since some, like hEDS, lack identified genetic markers, this clinical assessment is essential for accurate diagnosis and management. The process helps exclude other conditions and accurately identify the EDS subtype. When clinical features are suggestive, molecular genetic testing can identify pathogenic or likely pathogenic variants in the COL5A1 or COL5A2 genes. This testing can be approached in two ways. The first is targeted multigene panels, which focus on specific genes like COL5A1 and COL5A2.
Alternatively, comprehensive genomic testing, such as exome or genome sequencing, does not require preselecting specific genes and is useful when the clinical presentation overlaps with other inherited disorders.
Mutations in COL5A1 and COL5A2 can include missense, nonsense, or splice site variants, or small insertions and deletions, all of which impair the function of type V collagen. Missense mutations substitute one amino acid for another, disrupting the collagen triple helix structure and affecting its stability and function. Nonsense mutations, on the other hand, create a premature stop codon, producing a truncated and usually non-functional protein. Splice site mutations interfere with the normal splicing of pre-mRNA, resulting in aberrant proteins. These mutations in COL5A1 and COL5A2 lead to the characteristic features of cEDS, such as highly elastic skin and joint hypermobility. The toy example below illustrates the nonsense case.
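To make the nonsense case concrete, here is a minimal Python sketch of how a single-base substitution that creates a premature stop codon truncates the translated protein. The 'mini-gene' and the small codon subset are invented for illustration; this is not a real COL5A1 fragment.

```python
# Minimal sketch of how a nonsense mutation truncates a protein.
# The sequence below is a made-up "mini-gene" with a collagen-like
# Gly-Pro-X flavour -- NOT a real COL5A1 fragment.
CODONS = {"ATG": "M", "GGC": "G", "CCA": "P", "CGA": "R",
          "GAA": "E", "TGA": "*"}  # '*' = stop codon

def translate(dna: str) -> str:
    """Translate codon by codon, halting at the first stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODONS[dna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

wild_type = "ATGGGCCCACGAGGCGAA"   # -> MGPRGE (full-length)
mutant    = "ATGGGCCCATGAGGCGAA"   # C->T turns CGA (Arg) into TGA (stop)

print(translate(wild_type))  # MGPRGE
print(translate(mutant))     # MGP  (truncated, usually non-functional)
```

A missense variant would instead swap one amino acid in the output, while a splice-site variant would change which codons are read at all.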
Furthermore, different types of EDS are caused by specific genetic mutations, each affecting collagen in distinct ways and necessitating varied treatment approaches. vEDS is caused by mutations in the COL3A1 gene, which affects type III collagen and leads to fragile blood vessels and a higher risk of organ rupture. kEDS results from mutations in the PLOD1 or FKBP14 genes, impacting collagen cross-linking, and presents with severe scoliosis and muscle hypotonia. Arthrochalasia EDS (aEDS), due to mutations in the COL1A1 or COL1A2 genes that affect type I collagen, features severe joint hypermobility and congenital hip dislocation. Dermatosparaxis EDS (dEDS) is caused by mutations in the ADAMTS2 gene, which is crucial for processing type I collagen, leading to extremely fragile skin and severe bruising. Each type of EDS highlights the critical role of specific genetic mutations in the structural integrity and function of collagen, which in turn shapes treatment approaches.
Treatment
Treatments for EDS primarily focus on managing symptoms and preventing complications arising from the underlying genetic defects in collagen. Pain relief through nonsteroidal anti-inflammatory drugs (NSAIDs), acetaminophen, and sometimes opioids is common, addressing chronic pain related to joint and muscle issues. Physical therapy may help strengthen muscles around hypermobile joints, reducing the risk of dislocations and improving stability. Orthopaedic interventions, such as braces and orthotics, are also used to support joint function, and surgery may be considered in severe cases. Cardiovascular care is crucial, especially for vEDS, involving regular monitoring with imaging techniques to detect arterial problems early; preventive vascular surgery might be necessary to repair aneurysms or other vascular defects. Wound care includes using specialised dressings to protect fragile skin and prevent extensive scarring, which is particularly relevant to mutations in genes like COL5A1 and COL5A2 in classical EDS. Understanding the specific genetic mutations helps tailor these treatments to the particular collagen-related defects and complications of each EDS type.
Clinical trials for treating EDS have shown both positive and negative results. Trials investigating the efficacy of physical therapy in strengthening muscles around hypermobile joints have shown positive outcomes in reducing joint instability and improving function. On the other hand, trials aiming to directly modify the underlying genetic defects in collagen production have faced significant challenges. Gene therapy approaches and other experimental treatments targeting specific mutations, such as those in the COL5A1 or COL3A1 genes, have shown limited success and have struggled to achieve sufficient therapeutic benefit without adverse effects. In mouse models, for instance, deletion of COL3A1 resulted in aortic and gastrointestinal rupture, suggesting that simply restoring one functional copy may not be sufficient to prevent the disease. Moreover, because the mutations responsible for some EDS cases remain unidentified, researchers have struggled to establish comprehensive treatment strategies.
In vEDS, a dominantly inherited disorder, adding a healthy copy of the gene (a common strategy in gene therapy) is ineffective because the defective gene still produces harmful proteins. Research has highlighted, however, that combining RNAi-mediated, mutant-allele-specific gene silencing with transcriptional activation of the normal allele could be a promising strategy for vascular Ehlers-Danlos Syndrome. In the experiment, researchers used small interfering RNA (siRNA) to selectively reduce mutant COL3A1 mRNA levels by up to 80%, while simultaneously using lysyl oxidase (LOX) to boost expression of the normal COL3A1 gene. This dual approach successfully increased the levels of functional COL3A1 mRNA in patient cells, suggesting a potential therapeutic strategy for this condition.
Conclusion
In conclusion, EDS represents a diverse group of inherited connective tissue disorders, primarily caused by mutations in collagen or collagen-modifying proteins. These genetic defects result in a wide range of symptoms, including joint hypermobility, skin hyperextensibility, and vascular complications, which vary significantly across the 13 types of EDS. Diagnosing and treating EDS is complex and largely dependent on the specific genetic mutations involved. While current treatments mainly focus on managing symptoms and preventing complications, advances in genetic research, such as RNAi-mediated gene silencing and transcriptional activation, show promise for more targeted therapies, especially for severe forms like vascular EDS. However, challenges remain in developing comprehensive and effective treatments, underscoring the need for ongoing research and personalised medical approaches to improve the quality of life for individuals with EDS.
Written by Imron Shah
Related articles: Hypermobility spectrum disorders / Therapy for skin disease
REFERENCES
Malfait, F., Wenstrup, R.J. and De Paepe, A. (2010). Clinical and genetic aspects of Ehlers-Danlos syndrome, classic type. Genetics in Medicine, 12(10), pp. 597-605. doi: https://doi.org/10.1097/gim.0b013e3181eed412
Miklovic, T. and Sieg, V.C. (2023). Ehlers Danlos Syndrome. PubMed. Available at: https://www.ncbi.nlm.nih.gov/books/NBK549814/
Sobey, G. (2014). Ehlers-Danlos syndrome - a commonly misunderstood group of conditions. Clinical Medicine, 14(4), pp. 432-436. doi: https://doi.org/10.7861/clinmedicine.14-4-432
Watanabe, A., Wada, T., Tei, K., Hata, R., Fukushima, Y. and Shimada, T. (2005). A novel gene therapy strategy for vascular Ehlers-Danlos syndrome by the combination of RNAi-mediated inhibition of a mutant allele and transcriptional activation of a normal allele. Molecular Therapy, 11, p. S240. doi: https://doi.org/10.1016/j.ymthe.2005.07.158
FURTHER READING
The Ehlers-Danlos Society - a global organisation dedicated to supporting individuals with EDS and raising awareness of the condition, providing extensive information on the different types of EDS, research updates, and patient resources: https://www.ehlers-danlos.com/
PubMed - for those interested in academic research, articles and studies on EDS: https://www.ncbi.nlm.nih.gov/pmc/?term=ehlers-danlos+syndrome
Cleveland Clinic - an extensive health library providing accessible, informative material about the syndrome: https://my.clevelandclinic.org/health/diseases/17813-ehlers-danlos-syndrome
- Totality- Our Perfect Eclipse | Scientia News
Total solar eclipses
Published: 24/05/23, 10:05 | Last updated: 14/07/25, 15:05
We are all familiar with the characteristic depiction of a solar eclipse, when the Moon passes directly between the Sun and the Earth. However, the significance of solar eclipses extends far beyond their aesthetic appeal. Major scientific discoveries, cultural practices, and even changes in the behaviour of wild animals derive from the total solar eclipses we have the privilege of experiencing (see image 1).
A solar eclipse occurs when the Earth, Moon, and Sun all appear to lie on a straight line; they are collinear. Total solar eclipses occur when the Moon completely obscures the Sun's photosphere, enabling prominences and coronal filaments to be seen along the limb. This phenomenon is unique to the Earth, Sun, and Moon system, and to understand why, we must explore the mathematics underlying these 'orbital gymnastics'.
We wish to compare the 'apparent' sizes of the Sun and Moon, a quantity proportional to the ratio of their size and their distance from Earth. The Moon has a diameter of around 3,500 km and is approximately 384,000 km from Earth. The Sun has a much larger diameter of 1.4 million km and is located at a distance of 150 million km. By dividing the Sun's diameter by the Moon's diameter, and the Earth-Sun distance by the Earth-Moon distance, we find that the Sun is about 400 times larger than the Moon and about 400 times further away (the short calculation at the end of this article makes this concrete). This unique relationship allows for total solar eclipses, where totality indicates the complete blocking of sunlight from the Sun's disk by the Moon. In partial eclipses, only part of the Sun is obscured.
One might wonder why we don't have total solar eclipses every month; the reason is that the plane of the Moon's orbit around Earth is tilted at 5 degrees relative to Earth's orbital plane, which hugely decreases the likelihood of such perfect alignment. Of the hundreds of moons orbiting planets in our Solar System, only our Moon totally eclipses the Sun. For example, none of Jupiter's 95 moons have the right combination of size and orbital separation to completely block out the Sun from any point on Jupiter's surface!
Surely this serendipitous interplay of Earth, Sun, and Moon cannot be a coincidence? (See image 2.) It is at this point that divine intervention is typically invoked. There are a few problems with doing so. The Moon's eccentric orbit around Earth means that it will be closer during some total solar eclipses than others, resulting in annular eclipses when the Moon is furthest from Earth. Additionally, the Moon is receding from the Earth at a rate of 4 cm/year, which means that total solar eclipses will only be observable for another 250 million years. (See image 3.)
For those of you who wish to make the most of this brief window of opportunity, this website shows the dates and locations of upcoming total solar eclipses.
Written by Joseph Brennan
REFERENCE
Guillermo Gonzalez, Wonderful eclipses, Astronomy & Geophysics, Volume 40, Issue 3, June 1999, pp. 3.18-3.20. doi: https://doi.org/10.1093/astrog/40.3.3.18
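A quick numeric check of the '400 times' coincidence described above, as a minimal Python sketch (all figures are rounded public values; both orbits are elliptical, so the real ratios drift slightly through each month and year):

```python
# Checking the "400 times larger, 400 times further" coincidence.
# All values are approximate, in kilometres.
moon_diameter = 3_474            # km
sun_diameter = 1_392_000         # km
earth_moon_distance = 384_400    # km
earth_sun_distance = 149_600_000 # km

size_ratio = sun_diameter / moon_diameter                   # ~401
distance_ratio = earth_sun_distance / earth_moon_distance   # ~389

# Apparent (angular) size is proportional to diameter / distance,
# so the two discs look almost identical from Earth:
moon_apparent = moon_diameter / earth_moon_distance   # ~0.0090 rad
sun_apparent = sun_diameter / earth_sun_distance      # ~0.0093 rad

print(f"Sun/Moon size ratio:     {size_ratio:.0f}")
print(f"Sun/Moon distance ratio: {distance_ratio:.0f}")
print(f"Angular sizes (rad): Moon {moon_apparent:.4f}, Sun {sun_apparent:.4f}")
```

The two ratios differ by a few percent, which is exactly why the Moon's elliptical orbit sometimes leaves a ring of Sun visible (an annular rather than total eclipse).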
- Exposing medication to extreme heat | Scientia News
And its chemical effects
Published: 08/10/23, 16:18 | Last updated: 09/07/25, 14:09
Introduction
Most of us look forward to summer being just around the corner. It is a time for parents to plan days off, take their kids on holiday to relax from their studies, and enjoy sunsets at the beach. But for people who take medication, whether a week-long course of antibiotics or drugs for long-term conditions, summer can also be a time when negligence creeps in. Specifically, alongside applying SPF to protect your skin from the sun's rays, you should protect your medicine as well. This applies to both oral and non-oral drugs. Experts at The Montreal Children's Hospital say that "many prescription drugs are very sensitive to changes in temperature and humidity"; in this article, we will therefore discuss the effect of extreme heat on drugs from a medicinal chemistry perspective.
Factors affecting drug activity due to heat
Certain drugs may begin to degrade before their expiry date if not stored appropriately. This affects the efficacy, which is the maximum biological response achievable with a certain drug. A dose-response curve can be plotted (see Figure 1) to show the relationship between the two variables; the label Emax refers to the efficacy. During hot weather, the structure of the drug can change, leaving it unable to bind to its target and producing a lowered, shifted Emax. Simply put, the medication will not relieve your symptoms as effectively.
Another physicochemical property of a drug that can be altered in the heat is potency. Many people confuse this term with efficacy, but potency refers to the concentration of a drug required to achieve 50% of its maximum therapeutic effect, i.e. half the Emax. Potency is therefore usually reported as the EC50, short for 'half maximal effective concentration'. The lower the concentration needed, the more potent your drug is. Like efficacy, a drug's potency will decrease in the heat due to altered chemical structure. For drugs like antibiotics, it is crucial to note that if potency is reduced significantly, infection could spread to other parts of the body, as the medication will not fight off bacteria as well as it should. Potentially dangerous! (A small worked example below shows how Emax and EC50 enter the standard dose-response model.)
Finally, drug absorption is when a drug moves into the bloodstream after being administered. The chemical structure of the drug and the environment in which it is present hugely affect this; for example, if a lipophilic ('fat loving') drug is in a lipophilic surrounding, fast absorption is seen, as they work well with each other. As you have probably guessed, high temperatures outside the body can reduce drug absorption for the reasons above, as the drug is no longer in its optimal structure to be absorbed effectively.
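To make the Emax/EC50 distinction concrete, here is a minimal Python sketch using the standard Hill dose-response model. The drug, its EC50 of 10 nM, and the 'heat-damaged' parameters are hypothetical values chosen purely for illustration:

```python
import numpy as np

def hill(conc, emax, ec50, n=1.0):
    """Response at drug concentration `conc` under the classic
    Hill / Emax dose-response model."""
    return emax * conc**n / (ec50**n + conc**n)

# Hypothetical drug: Emax = 100% response, EC50 = 10 nM when stored properly.
conc = np.array([1, 10, 100, 1000], dtype=float)  # nM
fresh = hill(conc, emax=100, ec50=10)

# Heat degradation, as described above, can both lower Emax
# (reduced efficacy) and raise EC50 (reduced potency):
degraded = hill(conc, emax=60, ec50=50)

for c, f, d in zip(conc, fresh, degraded):
    print(f"{c:7.0f} nM  fresh: {f:5.1f}%   heat-damaged: {d:5.1f}%")
```

In this toy model, heat damage shows up exactly as described in the text: the curve's plateau (Emax) falls, and a higher concentration is needed to reach half of it (the EC50 rises).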
Examples of medicine that are heat sensitive
Here is a list of some medicines that require extra care to prevent the above:
1) Nitroglycerin - used to treat chest pains in those with cardiovascular disease. It is especially sensitive to heat or light, as it degrades very fast. Dr. Sarah Westberg, a professor at The University of Minnesota College of Pharmacy, advises following the storage instructions and replacing it regularly.
2) Some antibiotics - research has shown that ampicillin, erythromycin, and furosemide lose activity in the heat, although this was found after storing them for a year in a car with temperatures exceeding 25 degrees Celsius. Other antibiotics, such as cefoxitin, are shown to have some "stability in warmer climates".
3) Levothyroxine - used to treat an underactive thyroid (hypothyroidism). This drug should be stored between 15 and 30 degrees Celsius, although even 30 is quite high, so the lower the temperature the better. Interestingly, levothyroxine isn't heat sensitive itself; rather, the body becomes more sensitive to the drug and may make a person feel strange in the heat.
4) Metoprolol succinate - used to treat high blood pressure (hypertension) and, in emergencies, heart failure. The ideal storage conditions for this drug are 15 to 30 degrees Celsius, like levothyroxine.
Key things to look out for with your medicine in the heat
Below are the two main things you should check before taking your medicine in the summer:
1) Change in colour - light can initiate all sorts of reactions, such as oxidation. If, for example, a medicine that is normally white has changed colour, this suggests that a reaction has taken place within your drug and it will not be effective when administered.
2) Change in texture - similarly, if a normally solid oral tablet has become soft, this also suggests that the medication will not be as effective when consumed.
How you can prevent your medicine from degrading
To make sure you do not contribute to wasting medicine, you should do the following:
1) Check storage information - for any medication that you take, this will tell you how to store it correctly.
2) Travel with care - do not pack prescription drugs into your luggage, as it will almost always become very warm due to the surrounding environment. Instead, carry your medicine with you, with the labels still on.
3) Do not leave medicine in any vehicle - everyday vehicles such as cars tend to get warm after a period of time, which can affect the colour and texture of your medicine.
4) Careful deliveries - for those who have their medicine delivered, you can ask your local pharmacy to deliver your medicine in temperature-controlled packaging.
Summary
As discussed, the chemicals in most over-the-counter and prescription drugs are heat sensitive and should therefore be handled with care to prevent degradation. Changes in colour and texture are signs of degradation, which bring reduced efficacy, absorption, and potency. Many other pharmacological factors also come into play, so scientists involved in drug synthesis should continue to take great precautions in the manufacturing process. Drugs are costly and time-consuming to make, so the takeaway is to store them correctly! Contact your pharmacist if you are still unsure about your prescription(s).
Written by Harsimran Kaur Sarai
- Neuroimaging and spatial resolution | Scientia News
Peering into the mind
Published: 04/11/24, 14:35 | Last updated: 10/07/25, 10:24
Introduction
Neuroimaging has been at the forefront of brain discovery ever since the first images of the brain were recorded in 1919 by Walter Dandy, using a technique called pneumoencephalography. Fast-forward over a century, and neuroimaging is more than just blurry, singular images. Modern techniques allow us to observe real-time changes in brain activity with millisecond resolution, leading to scientific breakthroughs that would not be possible without it. Memory is a great example: with functional magnetic resonance imaging (fMRI) we have been able to demonstrate that more recent long-term memories are stored and retrieved with brain activity in the hippocampus, but as memories recede further into the past, they are transferred to the medial temporal lobe.
While neuroimaging techniques keep the doors open for new and exciting discoveries, spatial limitations leave many questions unanswered, especially at the cellular and circuit level. For example, within the hippocampus, is each memory encoded via a completely distinct neural circuit? Or do similar memories share similar neural pathways? Within just a cubic millimetre of brain tissue there can be up to 57,000 cells (most of them neurons), all of which may have different properties, be part of different circuits, and produce different outcomes. This almost makes revolutionary techniques such as fMRI, with nearly unparalleled image quality, seem pointless. To truly understand how neural circuits work, we have to dig as deep as possible and record the smallest regions possible. So that begs the question: how small can we actually record in the human brain?
EEG
2024 marks a century since the first recorded electroencephalography (EEG) scan by Hans Berger in Germany. This technique involves placing electrodes all around the scalp to record activity throughout the whole outer surface of the brain (Figure 1). Unlike the methods we see later on, EEG provides a direct measure of brain activity, picking up electrical signals when the brain is active. However, because electrodes are only placed across the scalp, EEG scans can only pick up activity from the outer cortex, missing important activity in deeper parts of the brain. In our memory example, this means it would completely miss any activity in the hippocampus. EEG's spatial resolution is also quite underwhelming, typically resolving activity only to within a few centimetres - not great for mapping behaviours to specific structures in the brain. EEG scans are used in medical settings to measure overall activity levels, assisting with epilepsy diagnosis. Let's look at what we can use to dig deeper into the brain and locate signals of activity...
PET
Positron emission tomography (PET) scans offer a chance to record activity throughout the whole brain by administering a radioactive tracer, typically glucose labelled with a mildly radioactive substance. This tracer is tracked, and uptake in specific parts of the brain is a sign of greater metabolic activity, indicating a higher signalling rate. PET scans already offer a resolution far beyond the capacities of EEG, distinguishing activity between areas with a resolution of up to 4 mm.
With the use of different radioactive labels, we can also detect the activity of specific populations of neurons, such as dopamine neurons, to diagnose Parkinson's disease. In fact, many studies have reliably demonstrated the ability of PET scans to detect the root cause of Parkinson's disease, a reduced number of dopamine neurons in the basal ganglia, before symptoms become too extreme. As impressive as it sounds, a 4 mm resolution can locate activity in large areas of the cortex but is limited in its power to resolve discrete cortical layers. Take the human motor cortex, for example: its 6 layers together span an average width of only 2.79 mm. A PET scan would not be powerful enough to determine which layer is most active, so we need to dig a little deeper...
fMRI
Since its inception in the early 1990s, fMRI has gained the reputation of being the gold standard for human neuroimaging, thanks to its non-invasiveness, lack of artefacts, and reliable signalling. fMRI uses nuclear magnetic resonance to measure changes in oxygenated blood flow, which correlate with neural activity; these are known as BOLD signals. Compared with EEG, measuring blood oxygen levels cannot reach a highly impressive temporal resolution, and it is not a direct measure of neural activity. fMRI makes up for this with its superior spatial resolution, resolving spaces as small as 1 mm apart. Using our human motor cortex example, this would allow us to resolve activity between every 2-3 layers - not a bad return considering it doesn't even leave a scar. PET, and especially EEG, pale in comparison to the capabilities of fMRI, which has since been used for a wide range of neuroimaging research. Most notably, structural MRI has been used to support the idea of hippocampal involvement during spatial navigation from memory tasks (Figure 2). Its resolving power and highly precise images also make it suitable for mapping surgical procedures.
Conclusion
With a resolution of up to 1 mm, fMRI takes the crown as the human neuroimaging technique with the best spatial resolution! Table 1 shows a brief summary of each neuroimaging method. Unfortunately, though, there is still so much more we need to do to look at individual circuits and connections. As mentioned before, even within a cubic millimetre of brain we have five figures' worth of cells, making the number of neurons in the whole brain impossible to comprehend. To observe the activity of a single neuron, we would need an imaging technique capable of viewing cells in the tens-of-micrometres range; the rough arithmetic below illustrates the gap.
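As a rough back-of-envelope check on that scale gap, here is a small Python sketch. The 57,000 cells/mm³ figure comes from the article above; the ~20 µm soma size is an assumed typical value used purely for illustration:

```python
# Back-of-envelope: how many cells sit inside a single "best case"
# fMRI voxel, using the ~57,000 cells per cubic millimetre figure.
cells_per_mm3 = 57_000
voxel_side_mm = 1.0                  # ~best fMRI spatial resolution
voxel_volume = voxel_side_mm ** 3    # mm^3

print(f"Cells per 1 mm voxel: {cells_per_mm3 * voxel_volume:,.0f}")

# For single-neuron imaging we would need voxels on the order of a
# cell body (~20 micrometres, an assumed typical soma size):
soma_mm = 0.02
print(f"Sub-voxels needed per mm^3: {(voxel_side_mm / soma_mm) ** 3:,.0f}")
```

One fMRI voxel therefore averages the activity of tens of thousands of cells, and single-neuron resolution would demand roughly 125,000 times finer spatial sampling per cubic millimetre.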
So what can we do to reach the resolution we desire while remaining suitable for humans? Maybe there isn't a solution. Instead, if we want to record the activity of single neurons, perhaps we have to take inspiration from invasive animal techniques such as microelectrode recordings. Typically used in rats and mice, these can achieve single-cell resolution, probing neuroscience at its smallest components. It would be unethical to stick an electrode into a healthy human's brain and record activity, but perhaps in the future a non-invasive form of electrode recording could be developed? The current neuroscience field is foggy and shrouded in mystery. Most of these mysteries simply cannot be solved with the research techniques at our disposal. But this is what makes neuroscience exciting - there is still so much to explore!
Who knows when we will be able to map behaviours to neural circuits with single-cell precision, but with how quickly imaging techniques are being enhanced and fine-tuned, I wouldn't be surprised if it's sooner than we think.
Written by Ramim Rahman
Related articles: Neuromyelitis optica / Traumatic brain injuries
REFERENCES
Hoeffner, E.G. et al. (2011) 'Neuroradiology back to the future: brain imaging', American Journal of Neuroradiology, 33(1), pp. 5-11. doi: 10.3174/ajnr.a2936
Maguire, E.A. and Frith, C.D. (2003) 'Lateral asymmetry in the hippocampal response to the remoteness of autobiographical memories', The Journal of Neuroscience, 23(12), pp. 5302-5307. doi: 10.1523/jneurosci.23-12-05302.2003
Wong, C. (2024) 'Cubic millimetre of brain mapped in spectacular detail', Nature, 629(8013), pp. 739-740. doi: 10.1038/d41586-024-01387-9
Butman, J.A. and Floeter, M.K. (2007) 'Decreased thickness of primary motor cortex in primary lateral sclerosis', AJNR. American Journal of Neuroradiology, 28(1), pp. 87-91.
Loane, C. and Politis, M. (2011) 'Positron emission tomography neuroimaging in Parkinson's disease', American Journal of Translational Research, 3(4), pp. 323-341.
Maguire, E.A. et al. (2000) 'Navigation-related structural change in the hippocampi of taxi drivers', Proceedings of the National Academy of Sciences, 97(8), pp. 4398-4403. doi: 10.1073/pnas.070039597
[Figure 1] EEG (electroencephalogram) (2024) Mayo Clinic. Available at: https://www.mayoclinic.org/tests-procedures/eeg/about/pac-20393875 (Accessed: 18 October 2024).
[Figure 2] Boccia, M. et al. (2016) 'Direct and indirect parieto-medial temporal pathways for spatial navigation in humans: evidence from resting-state functional connectivity', Brain Structure and Function, 222(4), pp. 1945-1957. doi: 10.1007/s00429-016-1318-6
- Can you erase your memory? | Scientia News
The concept of memory erasure is huge and complex
Published: 23/11/23, 11:08 | Last updated: 09/07/25, 13:31
What is memory?
Our brain is a wiggly structure in our skull, made up of roughly 100 billion neurones. It is a wondrous organ, capable of processing some 34 gigabytes of digital data per day, yet also able to retain information and form memory – something that, many would argue, defines who we are. So... what is memory? And how does our brain form it?
Loosely defined, memory is the capacity to store and retrieve information. There are three types of memory: short-term, working, and long-term memory (LTM). Today, we will focus on LTM. To form LTM, we need to learn and store information, following the process of encoding, storage, retrieval, and consolidation.
To probe the biochemical basis of memory in the brain, the psychologist Karl Lashley conducted extensive experiments on rats, investigating whether there were specific pathways in the brain that could be damaged to prevent memory from being recalled. His results showed that despite large areas of the brain being removed, the rats were still able to perform simple tasks (Figures 1-2). Lashley's experiments transformed our understanding of memory, leading to the concept of 'engrams'. Takamiya et al. (2020) define 'memory engrams' as traces of LTM consolidated in the brain by experience. According to Lashley, engrams were not localised in specific pathways; rather, they were distributed across the whole of the brain.
Can memory be erased?
The concept of memory erasure is huge and complex. To simplify, let's divide it into two categories: unintentional and intentional.
Take amnesia, for example – a form of unintentional memory 'erasure'. There are two types of amnesia: retrograde and anterograde. Retrograde amnesia is the loss of memories formed before acquiring amnesia; anterograde amnesia is the inability to make new memories after acquiring it. Typically, a person with amnesia exhibits both retrograde and anterograde amnesia, but at different degrees of severity (Figure 3).
Can we 'erase' our memory intentionally? And how would this be of use to us? This is where things get really interesting. Currently, the possibility of intentional memory 'erasure' is being investigated for the treatment of post-traumatic stress disorder (PTSD). In these clinical trials, patients with PTSD are given drugs that block traumatic memories. For example, propranolol, a beta-adrenergic receptor blocker, impairs the acquisition, retrieval, and reconsolidation of such memories. Incredible, isn't it? Although this is not the current standard treatment for PTSD, we can only imagine the relief it could bring to those who suffer from PTSD if their traumatic memories could be 'erased'. However, with every step ahead, we must be extremely cautious. What if things go wrong? We are dealing with the brain, arguably one of the most important organs in our body, after all. Regardless, the potential of memory 'erasure' for treating PTSD is both promising and intriguing, and the complexities and ethical considerations surrounding such advancements underscore the need for careful and responsible exploration in neuroscience and medicine.
Written by Joecelyn Kiran Tan
Related articles: Synaptic plasticity / Boom, and you're back! (intrusive memories) / Sleep and memory loss
- Psychology of embarrassment: why do we get embarrassed? | Scientia News
Characteristics, triggers and theoretical models of embarrassment
Published: 06/09/24, 11:07 | Last updated: 05/06/25, 10:07
The six basic emotions proposed by Ekman and recognised worldwide are sadness, happiness, fear, anger, surprise and disgust (Ekman, 1999). Recently, the list of basic emotions has expanded to include self-conscious emotions, such as embarrassment, pride and shame, as all of these emotions show evidence of cross-cultural and cross-species production and perception. According to Miller (1995), embarrassment is the self-conscious feeling individuals get after realising they have done something stupid, silly or dishonourable. Embarrassment is a social emotion that emerges at around 18 months of age, and its development is related to self-recognition. Characteristic displays of embarrassment in humans are gaze aversion, downward head movements, a controlled smile and face touching.
Embarrassment has been linked to the two main personality dimensions proposed by Eysenck (1983): extraversion/introversion and neuroticism/emotional stability. Kelly and Jones (1997) found that neuroticism is positively associated with embarrassment, suggesting that individuals who score highly in neuroticism are more prone to experiencing it. The same researchers also concluded that embarrassment is negatively related to extraversion, implying that introverted individuals are more likely to feel embarrassed than extroverted individuals.
The three triggers of embarrassment, according to Sabini, Siepmann, Stein and Meyerowitz (2000), are faux pas, sticky situations and being the centre of attention. A faux pas causes embarrassment when an individual makes a social mistake that forces them to think of others' evaluations, like misspelling a word in a presentation and only realising when presenting it to a supervisor. Sticky situations lead to embarrassment when they threaten an individual's role, not their self-esteem, such as a leader being challenged publicly by their second in command. Centre of attention describes an anomaly in which embarrassment is not a result of failure but of increased attention, for example being at your own birthday party. The faux pas trigger aligns with the social evaluation model of embarrassment, whilst sticky situations are in line with the dramaturgic model.
There are four prominent theories of embarrassment: the dramaturgic model, the social evaluation model, the situational self-esteem model and the personal standards model. The dramaturgic model proposed by Silver, Sabini and Parrott (1987) says that embarrassment is the flustered uncertainty that follows a poor public performance and leaves the individual at a loss for what to do. This model suggests that anxiety and aversive arousal trigger embarrassment after the realisation that a performance has gone wrong (see Figure 4). In this model, concern about what others think accompanies embarrassment but does not cause it. Miller (1996) suggests that whilst the dramaturgic model has substantial support, it is difficult for a dramaturgic dilemma to cause embarrassment without simultaneously creating unwanted social evaluations, highlighting a limitation of this model.
The social evaluation model of embarrassment put forward by Edelmann (1987) suggests that embarrassed individuals fear failing to impress others, and that feeling at a loss for what to do is a result of embarrassment, not the cause (see Figure 5). This model assumes that individuals are concerned about others' opinions. Miller (1996) supports this theory, saying that negative evaluation from others is crucial to embarrassment.
The situational self-esteem model by Modigliani (1971) proposes that the root cause of embarrassment is the temporary loss of self-esteem that follows public failure, based on one's own opinion of oneself and one's performance in the situation (see Figure 6). Miller (1995) does not support this theory, arguing that self-esteem plays a secondary role in embarrassment and that susceptibility to embarrassment depends more on persistent concern about others' evaluations of us.
The personal standards model of embarrassment introduced by Babcock (1988) presents the view that embarrassment is caused by the individual realising they have failed the standards of behaviour they have set for themselves, implying that the situation does not matter and that individuals can feel embarrassment even when alone (see Figure 7). Miller (1992) disagrees with this theory, saying that guidelines for the self are linked to the impressions made on other people, and that embarrassment can happen due to a poor audience reaction rather than letting yourself down.
Therefore, there are many plausible theories of embarrassment, linked to various causes such as dispositional, situational and personality factors. Whilst it is unlikely that one theory can perfectly explain a social emotion as complex as embarrassment, the consensus among psychologists in recent years has lent the most support to a combination of the dramaturgic and social evaluation models. I agree with the consensus and think that the different theories of embarrassment may all apply to a given situation. For instance, forgetting someone's name may lead to embarrassment due to being at a loss for what to say (the dramaturgic model), unwanted social judgements (the social evaluation model), the negative effect of the situation on self-esteem (the situational self-esteem model) and the painful realisation of letting yourself down (the personal standards model). Thus, like many subjects in psychology, embarrassment is a multidimensional concept that can be looked at from many different angles.
Written by Aleksandra Lib
Related articles: Chemistry of emotions / Unmasking aggression / Inside out: chemistry of depression
REFERENCES
Babcock, M.K. (1988). Embarrassment: a window on the self. Journal for the Theory of Social Behaviour.
Edelmann, R.J. (1987). The Psychology of Embarrassment. John Wiley & Sons.
Ekman, P. (1999). Basic emotions. Handbook of Cognition and Emotion, 98(45-60), 16.
Eysenck, H.J. (1983). Psychophysiology and personality: extraversion, neuroticism and psychoticism. In Individual Differences and Psychopathology (pp. 13-30). Academic Press.
Kelly, K.M. and Jones, W.H. (1997). Assessment of dispositional embarrassability. Anxiety, Stress, and Coping, 10(4), 307-333.
Lewis, M., Sullivan, M.W., Stanger, C. and Weiss, M. (1989). Self development and self-conscious emotions. Child Development, 146-156.
Miller, R.S. (1992). The nature and severity of self-reported embarrassing circumstances. Personality and Social Psychology Bulletin, 18(2), 190-198.
Miller, R.S. (1995). On the nature of embarrassability: shyness, social evaluation, and social skill. Journal of Personality, 63(2), 315-339.
Miller, R.S. (1996). Embarrassment: Poise and Peril in Everyday Life. Guilford Press.
Modigliani, A. (1971). Embarrassment, facework, and eye contact: testing a theory of embarrassment. Journal of Personality and Social Psychology, 17(1), 15.
Sabini, J., Siepmann, M., Stein, J. and Meyerowitz, M. (2000). Who is embarrassed by what? Cognition & Emotion, 14(2), 213-240.
Silver, M., Sabini, J. and Parrott, W.G. (1987). Embarrassment: a dramaturgic account. Journal for the Theory of Social Behaviour, 17(1), 47-61.
Tracy, J.L., Robins, R.W. and Tangney, J.P. (2007). The Self-Conscious Emotions. New York: Guilford.
- Cancer on the Move | Scientia News
How can patients with metastasised cancer be treated?
Published: 30/01/24, 19:57 | Last updated: 09/07/25, 13:31
Introducing and Defining Metastasis
Around 90% of patients with cancer die because their cancer spreads (metastasis). Despite its prevalence, many critical questions remain in cancer research about how and why cancers metastasise. The metastatic cascade has three main steps: dissemination, dormancy, and colonisation. Most cells that disseminate die once they leave the primary tumour, posing an evolutionary bottleneck; the few that survive face the further challenge of entering a foreign microenvironment. These circulating tumour cells (CTCs) acquire a set of functional abilities through genetic alterations, enabling them to survive the hostile environment. CTCs can travel as single cells or as clusters. If they travel in clusters, CTCs can be coated with platelets, neutrophils, and other tumour-associated cells, protecting them from immune surveillance. Once these cells have travelled further and seeded distant tissues, they are termed disseminated tumour cells (DTCs). These cells are undetectable by clinical imaging and can enter a state of dormancy. The metastatic cascade represents ongoing cellular reprogramming and clonal selection of cancer cells that can withstand the hostile external environment. How does metastasis occur, and what properties allow these cancer cells to survive?
How & Why Does Cancer Metastasise?
The epithelial-to-mesenchymal transition (EMT) is a theory that explains how cancer cells can metastasise. In this theory, tumour cells lose their epithelial cell-to-cell adhesion and gain mesenchymal migratory markers. Tumour cells that express a mixture of epithelial and mesenchymal properties were found to be the most effective at dissemination and colonisation of the secondary site. It is important to note that evidence for the EMT has been acquired predominantly in vitro, and additional in vivo research is necessary to confirm this phenomenon. Nevertheless, although EMT does not directly address why cancers metastasise, it provides a framework for how a cancer cell develops the properties to do so.
Many factors contribute to why cancers metastasise. For example, when a cancer grows too large, a lack of blood supply leaves the cells in the centre without access to the oxygen carried by red blood cells. Thus, to evade cell death, cancer cells detach from the primary tumour to regain access to oxygen and nutrients. In addition, cancer cells exhibit a high rate of glycolysis (the 'Warburg effect') to supply sufficient energy for their uncontrolled proliferation. This generates lactic acid as a by-product, resulting in a low-pH environment, and this acidic environment stimulates cancer invasion and metastasis as cancer cells move away from it to evade cell death once again. Figure 2 shows the multiple interplaying factors that contribute to metastasis. So, how can patients with metastasised cancer be treated?
Current Treatments and Biggest Challenges
Depending on the stage at which the patient presents and the cancer type, treatment options differ. Figure 3 shows an example of these treatment plans. For early stages I and II, chemotherapy and targeted treatments are offered, and in specific cases, local surgery is done.
These therapies aim to slow the growth of the cancer or lessen the side effects of treatment. Metastasised prostate cancer, for example, can be treated with hormone therapy, as the cancer relies on the hormone testosterone to grow. Currently, cytotoxic chemotherapy remains the backbone of metastatic therapy. However, emerging immunotherapeutic treatments are under trial; these aim to boost the immune system's ability to detect and kill cancer cells. Hopefully, these new therapies may improve the prognosis of metastatic cancers when used alongside conventional therapies, shining a new light on the therapeutic landscape of advanced cancers.
Future Directions
Recent developments have opened new avenues for discovering potential treatment targets for metastatic cancer. The first is to target the dormancy of DTCs, where the immune system plays an important part. Neoadjuvant ICI (immune checkpoint inhibitor) studies are anticipated to provide insight into novel biomarkers and may eliminate micro-metastatic cancer cells. Novel technology such as single-cell RNA sequencing also reveals complex information about the plasticity of metastatic cancer cells, allowing researchers to understand how they adapt under stressful conditions. Finally, in vivo models, such as patient-derived models, could provide crucial insight into future treatments, as they reproduce patients' reactions to different drug treatments. There are many limitations and challenges in the research and treatment of cancer metastasis. It is clear, however, that with more studies into the properties of metastatic cancers and the different avenues of novel targets and therapeutics, the field of cancer research holds a promising outlook.
Written by Saharla Wasame
Related articles: Immune signals and metastasis / Cancer magnets for tumour metastasis / Brain metastasis / Novel neuroblastoma driver for therapeutics
REFERENCES
Fares, J., Fares, M.Y., Khachfe, H.H., Salhab, H.A. and Fares, Y. (2020). Molecular principles of metastasis: a hallmark of cancer revisited. Signal Transduction and Targeted Therapy, 5(1). doi: https://doi.org/10.1038/s41392-020-0134-x
Ganesh, K. and Massagué, J. (2021). Targeting metastatic cancer. Nature Medicine, 27(1), pp. 34-44. doi: https://doi.org/10.1038/s41591-020-01195-4
Gerstberger, S., Jiang, Q. and Ganesh, K. (2023). Metastasis. Cell, 186(8), pp. 1564-1579. doi: https://doi.org/10.1016/j.cell.2023.03.003
Li, Y. and Laterra, J. (2012). Cancer stem cells: distinct entities or dynamically regulated phenotypes? Cancer Research, 72(3), pp. 576-580. doi: https://doi.org/10.1158/0008-5472.CAN-11-3070
Liberti, M.V. and Locasale, J.W. (2016). The Warburg effect: how does it benefit cancer cells? Trends in Biochemical Sciences, 41(3), pp. 211-218. doi: https://doi.org/10.1016/j.tibs.2015.12.001
Mlecnik, B., Bindea, G., Kirilovsky, A., Angell, H.K., Obenauf, A.C., Tosolini, M., Church, S.E., Maby, P., Vasaturo, A., Angelova, M., Fredriksen, T., Mauger, S., Waldner, M., Berger, A., Speicher, M.R., Pagès, F., Valge-Archer, V. and Galon, J. (2016). The tumor microenvironment and Immunoscore are critical determinants of dissemination to distant metastasis. Science Translational Medicine, 8(327). doi: https://doi.org/10.1126/scitranslmed.aad6352
Hernandez Dominguez, O., Yilmaz, S. and Steele, S.R. (2023). Stage IV colorectal cancer management and treatment. Journal of Clinical Medicine, 12(5), 2072. doi: https://doi.org/10.3390/jcm12052072
Steeg, P.S. (2006). Tumor metastasis: mechanistic insights and clinical challenges. Nature Medicine, 12(8), pp. 895-904. doi: https://doi.org/10.1038/nm1469
- Decoding p53: the guardian against cancer | Scientia News
Looking at p53 mutations and cancer predisposition
Published: 23/11/23, 11:38 | Last updated: 09/07/25, 14:03
As a tumour suppressor protein, p53, encoded by the TP53 gene, plays a critical role in regulating cell division and preventing the formation of tumours. Its function in maintaining genome stability is vital in inhibiting cancer development.
Understanding p53
Located at chromosome locus 17p13.1, TP53 encodes the p53 transcription factor [1]. Consisting of three domains, p53 can directly initiate or suppress the expression of 3,661 different genes involved in cell cycle control and DNA repair [2]. With this control, p53 can influence cell division on a massive scale. Cancer is characterised by uncontrolled cell division, which can occur due to accumulated mutations in either proto-oncogenes or tumour suppressor genes. Wild-type p53 can trigger the repair of mutations in oncogenes so that they do not affect cell division. However, if p53 itself is mutated, its ability to direct DNA repair and control the cell cycle is lost, opening the way to cancer. Mutations in TP53 are in fact the most prevalent genetic alterations found in patients with cancer.
The mechanisms by which mutated p53 leads to cancer are manifold. One such mechanism is p53's interaction with p21. Encoded by CDKN1A, p21 is activated by p53 and prevents cell cycle progression by inhibiting the activity of cyclin-dependent kinases (CDKs). A non-functional p53 therefore leads directly to uncontrolled cell division and cancer.
Clinical significance
The importance of p53 in preventing cancer is highlighted by the fact that individuals with inherited TP53 mutations (a condition known as Li-Fraumeni syndrome, or LFS) have a significantly greater risk of developing any cancer. These individuals inherit one defective TP53 allele from one parent, making them highly susceptible to losing the remaining functional TP53 allele, ultimately leading to cancer. Loss of p53 also endows cells with the ability to ignore pro-apoptotic signals, so that a cell that becomes cancerous is far less likely to undergo programmed cell death [3]. When p53 is mutated, its interactions with the apoptosis-inducing proteins Bax and Bak are lost, leading to resistance to apoptosis.
The R337H mutation in TP53 is an example of the founder effect at work. The founder effect refers to the loss of genetic variation when a large population descends from a much smaller one; the descendants are much more likely to harbour genetic variants that are rare in the species as a whole. In southern Brazil, the R337H mutation in p53 is present at an unusually high frequency [4] and is thought to have been introduced by European settlers several hundred years ago. It is responsible for a widespread incidence of early-onset breast cancers, LFS, and paediatric adrenocortical tumours. Interestingly, individuals with this mutation can trace their lineage back to the group of European settlers who set foot in Brazil hundreds of years ago.
Studying p53 has enabled us to unveil its intricate web of interactions with other proteins and molecules within the cell and to unlock the secrets of cancer development and potential therapeutic strategies.
By restoring or mimicking the functions of p53, we may be able to provide cancer patients with some relief from this life-changing condition.
Written by Malintha Hewa Batage
Related articles: Zinc finger proteins / Anti-freeze proteins
- The future of semiconductor manufacturing | Scientia News
Through photonic integration
Published: 22/12/23, 15:11 | Last updated: 11/07/25, 10:03
Recently, researchers at the University of Sydney developed a compact photonic semiconductor chip using heterogeneous material integration, combining an active electro-optic (E-O) modulator and photodetectors in a single chip. The chip functions as a photonic integrated circuit (PIC) offering 15 gigahertz of tunable frequencies with a spectral resolution of only 37 MHz, and it can expand the radio-frequency (RF) bandwidth to precisely control the information flowing within the chip with the help of advanced photonic filter controls. The applications of this technology extend to various fields:
• Advanced radar: the chip's expanded radio-frequency bandwidth could significantly enhance the precision and capabilities of radar systems.
• Satellite systems: improved radio-frequency performance would contribute to more efficient communication and data transmission in satellite systems.
• Wireless networks: the chip has the potential to advance the speed and efficiency of wireless communication networks.
• 6G and 7G telecommunications: this technology may play a crucial role in the development of future generations of telecommunications networks.
Microwave photonics (MWP) is a field that combines microwave and optical technologies to provide enhanced functionality and capability. It involves the generation, processing, and distribution of microwave signals using photonic techniques. An MWP filter is a component used in microwave photonics systems to selectively filter or manipulate certain microwave frequencies using photonic methods (see Figure 1). These filters leverage the unique properties of light and its interaction with different materials to achieve filtering effects in the microwave domain, and they are crucial in applications where precise control and manipulation of microwave signals are required. MWP filters can take various forms, including fibre-based filters, photonic crystal filters and integrated optical filters. They are designed to perform functions such as wavelength filtering, frequency selection and signal conditioning in the microwave frequency range, and they play a key role in improving the performance and efficiency of microwave photonics systems.
The MWP filter operates through a sophisticated integration of optical and microwave technologies, as depicted in the diagram. Beginning with a laser as the optical carrier, the photonic signal is directed to a modulator, where it interacts with an input radio-frequency (RF) signal. The modulator dynamically alters the optical carrier's intensity, phase or frequency according to the RF input. The modulated signal then undergoes processing to shape its spectral characteristics in a manner dictated by a dedicated processor; this shaping is pivotal for achieving the desired filtering effect. The processed optical signal is then fed into a photodiode for conversion back into an electrical signal, based on the variations the modulator induced on the optical carrier. The final output, an electrical signal, is the filtered and manipulated RF signal, demonstrating the MWP filter's ability to leverage both the optical and microwave domains for precise, high-performance signal processing. The sketch below walks through a simplified version of this chain.
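To illustrate that signal chain, here is a simplified, baseband-equivalent Python sketch of an MWP-style filter: an RF input rides the optical carrier, the photonic stage is modelled as a frequency-domain bandpass, and detection recovers the filtered RF. This is an illustrative numerical model only; the sample rate, tone frequencies and passband are assumptions, not parameters of the actual chip:

```python
import numpy as np

# Baseband-equivalent model of the MWP filter chain: the optical
# filtering stage is stood in for by a frequency-domain bandpass.
fs = 50e9                                  # sample rate: 50 GS/s
t = np.arange(0, 2e-6, 1 / fs)             # 2 microseconds of signal
rf_in = np.sin(2*np.pi*5e9*t) + np.sin(2*np.pi*12e9*t)  # 5 & 12 GHz tones

spectrum = np.fft.rfft(rf_in)
freqs = np.fft.rfftfreq(len(rf_in), 1 / fs)

# "Photonic" bandpass: keep a 1 GHz window around 12 GHz.
passband = (freqs > 11.5e9) & (freqs < 12.5e9)
rf_out = np.fft.irfft(spectrum * passband, n=len(rf_in))

def tone_power(sig, f0):
    """Spectral magnitude at the bin nearest frequency f0."""
    s = np.fft.rfft(sig)
    return np.abs(s[np.argmin(np.abs(freqs - f0))])

# The 5 GHz tone is suppressed; the 12 GHz tone survives.
print(f"5 GHz  in/out: {tone_power(rf_in, 5e9):.1f} / {tone_power(rf_out, 5e9):.1f}")
print(f"12 GHz in/out: {tone_power(rf_in, 12e9):.1f} / {tone_power(rf_out, 12e9):.1f}")
```

In the real device the filtering happens optically, with the 37 MHz resolution quoted above, before the photodiode converts the signal back to the electrical domain; the FFT bandpass here simply stands in for that optical stage.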
Extensive research has been conducted in the field of MWP chips, as evidenced by a thorough examination in Table 1. This table compares recent studies based on chip material type, filter type, on-chip component integration, and working bandwidth. Notably, previous studies demonstrated noteworthy advancements in chip research despite their dependence on external components. What distinguishes the new chip is its revolutionary integration of all components into a single chip, a significant breakthrough that sets it apart from previous attempts in the field.
Here the term "on-chip E-O" refers to the integration of electro-optical components directly onto a semiconductor chip or substrate. This integration enables the interaction between electrical signals (electronic) and optical signals (light) within the same chip, so that optical signals can be manipulated, modulated or processed using electrical signals, typically in the form of voltage or current control. Key components of on-chip electro-optical capabilities include:
1. Modulators, which alter the characteristics of an optical signal in response to electrical input, crucial for encoding information onto optical signals.
2. Photonic detectors, which convert optical signals back into electrical signals, extracting information for electronic processing.
3. Waveguides, which guide and manipulate the propagation of light waves within the chip, routing optical signals to various components.
4. Switches, which route or redirect optical signals based on electrical control signals.
This integration enhances compactness, energy efficiency, and performance in applications such as communication systems and optical signal processing.
"FSR-free operation" refers to the Free Spectral Range (FSR), a characteristic of optical filters and resonators: the separation in frequency between two consecutive resonant frequencies or transmission peaks. The column "FSR-free operation" indicates whether the optical processing platform operates without relying on a specific or fixed FSR. This can be advantageous where flexibility in the spectral range, or the ability to operate over a range of frequencies without being constrained by a particular FSR, is desired.
"On-chip MWP link improvement" refers to enhancements made directly on a semiconductor chip to optimise the performance of MWP links. These improvements aim to enhance the integration and efficiency of communication or signal-processing links that involve both microwave and optical signals, with advancements in key aspects such as data transfer rates, signal fidelity and overall link performance. On-chip integration brings advantages such as compactness and reduced power consumption.
The manufacturing of the photonic integrated circuit (PIC) involves partnering with semiconductor foundries overseas to produce the foundational chip wafer. This new chip technology will play a crucial role in advancing independent manufacturing capabilities. Embracing this type of chip architecture enables a nation to nurture the growth of an autonomous chip manufacturing sector by mitigating reliance on international foundries. The extensive chip delays witnessed during the 2020 COVID pandemic underscored the global realisation of the chip market's significance and its potential impact on electronics manufacturing.
Written by Arun Sreeraj
Related articles: Advancements in semi-conductor technology / The search for a room-temperature superconductor / Silicon hydrogel lenses / Mobile networks
- Genetically-engineered bacteria break down plastic in saltwater | Scientia News
Genetically-engineered bacteria break down plastic in saltwater
Unlocking the potential to tackle plastic pollution in oceans
Last updated: 09/07/25, 14:14. Published: 29/09/23, 20:19

Groundbreaking discovery in the fight against plastic pollution

North Carolina State University researchers have made a groundbreaking discovery in the fight against plastic pollution in marine environments. They have genetically engineered a marine microorganism capable of breaking down polyethylene terephthalate (PET), a commonly used plastic found in water bottles and clothing that contributes to the growing problem of ocean microplastic pollution.

Introducing foreign enzymes to V. natriegens

The modified organism, created by incorporating genes from the bacterium Ideonella sakaiensis into Vibrio natriegens, can effectively degrade PET in saltwater conditions. This marks the first time foreign enzymes have been successfully expressed on the surface of V. natriegens cells, a significant scientific breakthrough.

PET microplastics pose a significant challenge in marine ecosystems, and current methods of removing them, such as extracting them and disposing of them in landfill, are not sustainable. The researchers behind this study aim to find a more environmentally friendly solution by breaking PET down into reusable products, such as thermoformed packaging (takeaway cartons) or textiles (clothing, duvets, pillows, carpeting).

The team worked with two bacterial species: V. natriegens, known for its rapid reproduction in saltwater, served as the host organism, while I. sakaiensis provided the enzymes necessary for PET degradation. The researchers first rinsed the plastics collected from the ocean to remove high concentrations of salt before initiating the degradation process.

Challenges ahead

While this breakthrough is a significant step forward, three key challenges remain. First, the researchers aim to incorporate the DNA responsible for enzyme production directly into the genome of V. natriegens. Plasmid-borne genes can be lost as cells divide, whereas genes integrated into the genome are inherited reliably, so chromosomal integration would make production of the PET-degrading enzymes a stable, permanent feature of the organism. Second, the team plans to modify V. natriegens further so that it can feed on the byproducts generated during PET degradation. Third, they seek to engineer V. natriegens to produce a desirable end product from PET, such as a molecule that can be utilised in the chemical industry.

Collaboration with industry groups

Collaboration with industry groups will also be crucial in determining the market demand for the molecules that V. natriegens could produce. The researchers are open to working with industry partners to explore production at scale and to identify the most commercially desirable molecules.

By introducing the genes responsible for PET degradation into V. natriegens using a plasmid, the researchers successfully induced production of the enzymes on the surface of the bacterial cells, and the modified V. natriegens demonstrated its ability to break down PET microplastics in saltwater, providing a practical and economically feasible solution for addressing plastic pollution in marine environments. Because this is the first time V. natriegens has been genetically engineered to express foreign enzymes on its cell surface, the work opens the door to the further modifications described above: integrating the I. sakaiensis DNA directly into the genome, enabling the bacterium to feed on PET breakdown byproducts, and engineering a desirable end product for the chemical industry. The research, published in the AIChE Journal with the support of the National Science Foundation under grant 2029327, paves the way for more efficient and sustainable methods of addressing plastic pollution in saltwater environments.

Conclusion

This research represents a breakthrough in the fight against plastic pollution in marine environments. By incorporating genes from I. sakaiensis into V. natriegens, the team created a genetically modified marine microorganism capable of breaking down PET, providing a practical and economically feasible way to address plastic pollution in aquatic ecosystems. The researchers are now exploring further modifications to enable the organism to feed on breakdown byproducts and to produce an end product useful to the chemical industry. This work highlights the potential of genetic engineering to create sustainable solutions to the growing problem of plastic pollution.

Written by Sara Maria Majernikova
Related article: Plastics and their environmental impact