Search Index
- A common diabetes drug treating Parkinson’s disease | Scientia News
Treating this brain disorder with a diabetic drug

A new investigational drug, originally developed for type 2 diabetes, is being readied for human clinical trials in the search for the world's first treatment to impede Parkinson's disease progression. Parkinson's disease (PD) is the second most common neurodegenerative disorder. The connection between type 2 diabetes (T2DM) and PD was noted in 1993, when PD patients with co-existing T2DM were found to have worse motor symptoms and a poorer response to therapy. Dopaminergic neurons promote eating behaviour in hypoglycaemic states, mediated via insulin receptors in the substantia nigra, so dopaminergic neuronal loss can affect glycaemic control. T2DM patients are also more likely to develop PD than people without diabetes. Excess glucose in the brain, as found in uncontrolled T2DM, may react non-specifically with surrounding proteins and interfere with their function. These interactions also generate toxic end products that promote inflammation and α-synuclein clustering, both of which are PD hallmarks. Over a 12-year period, retrospective data (N = 8,190,323) showed that people with T2DM had considerably higher PD rates than those without diabetes. The rise was markedly more pronounced among individuals with complicated T2DM and those aged 25-44.

Exenatide: Overview and Mechanism of Action

Exenatide is a synthetic form of exendin-4, a naturally occurring protein identified in the saliva of the Gila monster (a venomous lizard endemic to the southwestern US) by Dr. Eng in the early 1990s. In humans, the analogous hormone GLP-1 is released after a meal to increase insulin secretion, lowering blood sugar. GLP-1 degrades quickly in humans, so its benefits are short-lived; investigations have shown, however, that the effects of exendin-4 persist for longer in people. This ultimately led to FDA approval in 2005, when the product was marketed as Byetta™. Exenatide is thus an analogue of human GLP-1, currently indicated for glycaemic control in T2DM, either alone or in combination with other oral hypoglycaemic medications. Exendin-4's neuroprotective characteristics may help rescue degenerating cells and protect neurons. Because T2DM and PD are linked, researchers want to explore its effectiveness as a PD therapy. Patients treated with exenatide for one year (in addition to standard medication) experienced less deterioration in motor symptoms when tested off medication compared with the control group.

Research on Exenatide as a Potential Parkinson's Disease Therapy

Twenty-one patients with intermediate PD were assessed over a 14-month period, and their progress was compared with that of 24 other people with Parkinson's who served as controls. Exenatide was well tolerated by participants, although some reported weight loss. Significantly, exenatide-treated participants improved in their PD motor symptoms, while the control patients continued to deteriorate. The researchers will investigate exenatide as a possible PD therapy in an upcoming clinical study, lending support to the repurposing of diabetes drugs for Parkinson's patients. This research adds to the evidence for a phase 3 clinical trial of exenatide in PD.
Data on 100,288 T2DM patients revealed that people using two types of diabetic medications, GLP-1 agonists and DPP-4 inhibitors, were less likely to be diagnosed with Parkinson's over up to 3.3 years of follow-up. Those who used GLP-1 agonists were 60% less likely to develop PD than those who did not. The results showed that T2DM patients had a higher risk of Parkinson's than those without diabetes, although routinely prescribed medicines, GLP-1 agonists and DPP-4 inhibitors, appeared to reverse the association. Furthermore, a two-year follow-up study indicated that individuals previously exposed to exenatide displayed a substantial improvement in their motor features 12 months after they stopped taking the medication. However, this was an open-label study, so the gains may be explained by a placebo effect. The research adds to the evidence that exenatide may help prevent or treat PD, perhaps by altering the course of the illness rather than just easing symptoms. Clinicians should consider other risk factors for PD when prescribing T2DM drugs, although further study is required to clarify the clinical significance.

Findings from Clinical Trials and Studies

Based on these findings, the UCL team broadened their investigation and conducted a larger, double-blind, placebo-controlled trial. The findings lay the groundwork for a new generation of PD medicines and also support the repurposing of a commercially available therapy for this illness. Patients were randomly assigned (1:1) to receive exenatide 2 mg or placebo by subcutaneous injection once weekly, in addition to their current medication, for 48 weeks, followed by a 12-week washout period. Web-based randomisation was used, with a two-stratum block design based on disease severity (a sketch of this kind of allocation scheme appears below). Treatment allocation was concealed from both patients and investigators. The primary outcome was the adjusted difference in the motor subscale of the Movement Disorder Society Unified Parkinson's Disease Rating Scale at 60 weeks in the practically defined off-medication state. Six serious adverse events occurred in the exenatide group and two in the placebo group, but none were judged to be related to the study treatments in either group. It remains unclear whether exenatide alters the underlying disease mechanism or produces long-term clinical benefits.

Implications and Future Directions

Indeed, the UCL study showed that exenatide reduces deterioration compared with placebo, although participants reported no change in their quality of life. The study team plans to broaden the work to include a wider sample of people from several sites. Because PD progresses slowly, longer-term trials might give a better understanding of how exenatide works in these patients. Overall, the findings suggest that gathering data on this class of medications should be the subject of further inquiry to evaluate their potential. Exenatide is also being studied to see whether it might postpone the onset of levodopa-induced complications (e.g. dyskinesias). Furthermore, if exenatide works for Parkinson's, why not for other neurodegenerative illnesses (Alzheimer's, amyotrophic lateral sclerosis, Huntington's disease, multiple sclerosis) or neurological diseases (including cerebrovascular disorders and traumatic brain injury)? Exenatide has been FDA-approved for diabetes for many years and has a good track record, but it does have some adverse side effects in Parkinson's patients, namely gastrointestinal difficulties (nausea, constipation).
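The stratified, blocked 1:1 allocation described above can be illustrated with a short sketch. This is not the trial's actual web-based system; the block size, seed and stratum labels are assumptions made purely for demonstration.

```python
import random

def block_randomise(n_blocks: int, block_size: int = 4, seed: int = 0) -> list:
    """1:1 allocation in permuted blocks, keeping the two arms balanced over time."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["exenatide"] * (block_size // 2) + ["placebo"] * (block_size // 2)
        rng.shuffle(block)                 # random order within each block
        sequence.extend(block)
    return sequence

# One independent allocation list per disease-severity stratum (stratum names are illustrative).
allocation = {stratum: block_randomise(n_blocks=8, seed=i)
              for i, stratum in enumerate(["milder disease", "more severe disease"])}
print(allocation["milder disease"][:8])    # e.g. the first eight assignments in that stratum
```

Permuted blocks keep the two arms near-balanced within each severity stratum even if recruitment stops early, which is the usual motivation for this design.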
Exenatide as a prospective PD therapy is an example of medication repurposing or repositioning, an essential route for bringing novel therapies to patients in a timely and cost-effective way. However, further research is required, so it will be many years before a new therapy is licensed and available. Drug repurposing, or using medicines authorised for one ailment to treat another, opens up new paths for Parkinson's therapeutic development.

Conclusion

Exenatide shows potential as a therapy for Parkinson's disease (PD). Studies have shown that exenatide may help improve motor symptoms and slow down the progression of PD. However, further research and clinical trials are needed to fully understand its effectiveness and long-term effects. The findings also suggest that repurposing existing medications, like exenatide, could provide new avenues for developing PD therapies. While exenatide shows promise, it will likely be many years before it is licensed and widely available as a PD treatment.

Figure descriptions

Figure 1: The use of GLP-1 extends beyond diabetes treatment. Nineteen clinical studies found that GLP-1 agonists can improve motor scores in Parkinson's disease, improve glucose metabolism in Alzheimer's and improve quality of life. They can also treat chemical dependency, improve lipotoxicity and reduce insulin resistance. However, adverse effects are primarily gastrointestinal. Thus, GLP-1 analogues may be beneficial for conditions beyond diabetes and obesity.

Figure 2: Potent GLP-1 agonists suppress appetite through a variety of mechanisms, including delayed gastric emptying, increased glucose-dependent insulin secretion, decreased glucagon levels, and decreased food intake via central nervous system effects. Short-acting agents, including exenatide, act primarily by slowing gastric emptying, thereby lowering postprandial glucose levels. By contrast, extended-release exenatide and other long-acting agonists (e.g. albiglutide, dulaglutide) have a more pronounced effect on fasting glucose, acting mainly through insulin and glucagon release. The limited effect of long-acting GLP-1 receptor agonists on gastric emptying can be attributed to the development of tolerance to GLP-1 effects, which is regulated by alterations in parasympathetic tone.

Figure 3: Cross-communication with insulin receptor signalling pathways and downstream effectors. Biomarkers can be derived from the formation and origin of extracellular vesicles, which begin with inward budding of the plasma membrane. An early endosome is formed when this membrane fuses, and it subsequently accumulates cytoplasmic molecules. As a consequence, multivesicular bodies are generated, which then fuse with the plasma membrane and discharge their contents into the extracellular milieu.
Abbreviations: Akt, protein kinase B; BAD, Bcl-2 antagonist of death; Bcl-XL, Bcl-2 extra large; Bim, Bcl-2-like protein 11; cAMP, cyclic adenosine monophosphate; CREB, cAMP response element-binding protein; Erk1/2, extracellular signal-related kinase 1/2; FoxO1/O3, forkhead box O1/O3; GRB2, growth factor receptor-bound protein 2; GSK-3β, glycogen synthase kinase 3β; IDE, insulin-degrading enzyme; IL-1α, interleukin 1α; IRS-1, insulin receptor substrate 1; MAPK, mitogen-activated protein kinase; mTOR, mechanistic target of rapamycin; mTORC1, mTOR complex 1; mTORC2, mTOR complex 2; NF-κB, nuclear factor κB; PI3-K, phosphoinositide 3-kinase; PKA, protein kinase A.

Written by Sara Maria Majernikova

Related articles: Pre-diabetes / Will diabetes mellitus become an epidemic?
- The genesis of life | Scientia News
Life's origins

Did the egg or the chicken come first? This question is often pondered regarding life's origin and how biological systems came into play. How did chemistry move to biology to support life? And how have we evolved into such complex organisms? The ingredients, conditions and thermodynamically favoured reactions hold the answer, but understanding the inner workings of life's beginnings poses a challenge for us scientists. Under an empirical approach, how can we address these questions if these events occurred 3.7 billion years ago?

The early atmosphere of the Earth

To approach these questions, it is relevant to understand the atmospheric contents of the primordial Earth. With a lack of oxygen, the predominant make-up included CO2, NH3 and H2, creating a reducing environment to drive chemical reactions. When the Earth cooled and the atmosphere underwent condensation, pools of chemicals were made - this is known as the "primordial soup". It is thought that reactants in this "soup" could collide to synthesise nucleotides by forming nitrogenous bases and bonds, such as glycosidic or hydrogen bonds. Such nucleotide monomers were perhaps polymerised into long chains for nucleic acid synthesis, that is, RNA, via this abiotic synthesis. Thus, if we have nucleic acids, genetic information could have been stored and passed later down the line, allowing for our eventual evolution.

Conditions for nucleic acid synthesis

The environment supported the formation of monomers for said polymerisation. For example, hydrothermal vents could have provided the reducing power via protons, allowing for the protonation of structures and providing the free energy for bond formation. Biology, of course, relies on protons for the proton gradient in ATP synthesis at the mitochondrial membrane and, more generally, for acid-base catalysis in enzymatic reactions. Therefore, it is safe to say protons played a vital role in life's emergence. The eventual formation of structures by protonation and deprotonation provides the enzymatic theory of life's origins: some self-catalytic ability for replication in a closed system, and the evolution of complex biological units. This is the "RNA World" theory, which will be discussed later.

Another theory is wet and dry cycling at the edge of hydrothermal pools. This theory is proposed by David Deamer, who suggests that nucleic acid monomers placed in acidic (pH 3) and hot (70-90 degrees Celsius) pools could undergo condensation reactions for ester bond formation. It highlights the need for low water activity and a "kinetic trap" in which the condensation reaction rate exceeds the hydrolysis rate. The heat of the pool supplies the activation energy for the localised generation of polymers without the need for a membrane-like compartment. But even if this was possible and nucleic acids could be synthesised, how could we "keep them safe"? This issue is addressed by the theory of "protocells" formed from fatty acid vesicles. Jack Szostak suggests a phase transition (that is, a pH decrease) allowed for the construction of bilayer membranes from fatty acid monomers, analogous to what we see now in modern cells. The fatty acids in these vesicles have the ability to "flip-flop", allowing the exchange of nutrients or nucleotides in and out of the vesicles.
It is suggested that clay-encapsulated nucleotide monomers were brought into the protocell by this flip-flop action. Vesicles could grow by competing with surrounding smaller vesicles. Larger vesicles are thought to be those harbouring long polyanionic molecules - that is, RNA - which creates immense osmotic pressure pushing outward on the protocell, driving absorption of smaller vesicles. This represents the Darwinian "survival of the fittest" principle, in which cells with more RNA are favoured for survival.

The RNA World Hypothesis

DNA is often seen as the "saint" of all things biology, given its ability to store and pass genetic information to mRNA, which can then use this information to synthesise polypeptides - the central dogma, of course. However, the RNA world hypothesis suggests that RNA arose first, due to its ability to form catalytic 3D structures and store genetic information, and that this could have allowed for the later synthesis of DNA. This makes sense when you consider that the primer for DNA replication is made of RNA: if RNA did not come first, how could DNA replication be possible? Many other observations suggest RNA evolution preceded that of DNA. So, if RNA arose as a simple polymer, its ability to form 3D structures could have allowed ribozymes (RNA with enzymatic function) to operate within these protocells. Ribozymes, such as RNA ligase and RNA polymerase, could have allowed for self-replication, and mutation in their primary structure could then have allowed evolution to occur. If we have a catalyst, in a closed system, with nutrient exchange, then why would life's formation not be possible?

But how can we show that RNA can arise in this way? The answer is SELEX - systematic evolution of ligands by exponential enrichment (5). This system was developed by Jack Szostak, who wanted to show that the evolution of complex RNA - ribozymes - in a test tube was possible. A pool of random, fragmented RNA molecules is added to a chamber and run through a column with beads. These beads harbour some sequence or attraction to the RNA molecules the column is selecting for. Those that attach can be eluted, and those that do not can be disregarded. The bound RNA can be rerun through SELEX, and the conditions in the column can be made more specific so that only the most complementary RNAs bind. This allowed for the development of RNA ligase and RNA polymerase - thus, self-replication of RNA is possible. SELEX helps us understand how the evolution of RNA on the primordial Earth could have been possible.

This is also supported by meteorites, such as carbonaceous chondrites that burnt up in the Earth's atmosphere while encapsulating organic material in their centres. Chondrites found in Antarctica have been found to contain more than 80 amino acids (some of which are not compatible with life). These chondrites also included nucleobases. So, if such monomers can be synthesised in a hostile environment in outer space or in our atmosphere, then the theory of abiotic synthesis is supported. Furthermore, it is relevant to address the abiotic synthesis of amino acids, since the evolution of catalytic RNA could have some complementarity for polypeptide synthesis. Miller and Urey (1953) set up a simple experiment containing gases representing the early primordial Earth (methane, hydrogen, ammonia and water). They used an electrode to provide an electrical discharge (meant to simulate lightning or volcanic eruption) to the gases and then condensed them. The water in the other chamber turned pink/brown.
Following chromatography, they identified amino acids in the mixture. These simple manipulations could have mirrored the chemistry of early life.

Conclusion

The abiotic synthesis of nucleotides and amino acids, and their later polymerisation, would support the theories that describe chemistry moving toward biological life. Protocells containing such polymers could have been selected based on their "fitness", and these could have mutated to allow for the evolution of catalytic RNA. The experiments mentioned represent a small fragment of those carried out to answer the questions of life's origins. The evidence provides firm ground for the emergence of life and its progression to the complexity we know today.

Written by Holly Kitley
- Immune signals initiated by chromosomal instability lead to metastasis | Scientia News
Non-cell-autonomous cancer progression from chromosomal instability

Unravelling the intricate relationship between immune cells and cancer cells through STING pathway rewiring.

Introduction

Chromosomal instability (CIN) has long been recognised as a prominent feature of advanced cancers. However, recent research has shed light on the intricate connection between CIN and the STING (Stimulator of Interferon Genes) pathway. Researchers at Memorial Sloan Kettering Cancer Center (MSK) and Weill Cornell Medicine conducted this ground-breaking study, which has provided fascinating insights into the function of the immune system and its interactions with cancer cells. In this article, we will delve into the findings of this study and explore the implications for future cancer treatments.

STING pathway

The STING pathway plays a crucial role in the response to cellular stress and in the innate immune response to DNA damage and chromosomal instability. Chromosomal instability refers to an increased rate of chromosomal aberrations, such as mutations, rearrangements and aneuploidy, within a cell population. This instability can lead to genomic alterations that contribute to the initiation and evolution of cancer. The pathway is activated when cytosolic DNA is detected, which can be indicative of cellular damage or infection, triggering a cascade of signalling events that leads to the production of type I interferons and other inflammatory cytokines. Many recent studies have revealed an intriguing relationship between chromosomal instability and the STING pathway, including the pathway's ability to be activated by the accumulation of micronuclei resulting from chromosomal instability in cancer cells. This activation can promote anti-tumour immunity and suppress tumourigenesis.

The Promise and Limitations of STING Agonist Drugs

STING-agonist drugs have shown great potential in preclinical studies, raising optimism for their use in cancer therapy. However, clinical trials have yielded disappointing results, with low response rates observed in patients. Dr. Samuel Bakhoum, an assistant member at MSK, highlights the discrepancy between lab findings and clinical outcomes. Only a small fraction of patients demonstrated a partial response, leading researchers to question the underlying reasons for this disparity.

The Sinister Cooperation: CIN and Immune Cells

Chromosomal instability acts as a driver of cancer metastasis, enabling cancer cells to spread throughout the body. Dr. Bakhoum's team discovered that the immune system has a significant impact on this process, specifically through the STING pathway. The cooperation between cancer cells with CIN and immune cells is orchestrated by STING, resulting in a pro-metastatic tumour microenvironment. This finding provides a crucial understanding of why STING-agonist drugs have not been effective in clinical trials.

Introducing ContactTracing: Unravelling Cell-to-Cell Interactions

Researchers utilised a newly developed tool called ContactTracing to examine cell-to-cell interactions and cellular responses within growing tumours. By analysing single-cell transcriptomic data, they gained valuable insights into the effects of CIN and STING activation.
The tool's capabilities allowed them to identify patients who could still mount a robust response to STING activation, enabling the selection of better candidates for STING-agonist therapy.

STING Inhibition: A Potential Solution

Interestingly, the study suggests that patients with high levels of CIN may actually benefit from STING inhibition rather than activation. Treating study mice with STING inhibitors successfully reduced metastasis in models of melanoma, breast and colorectal cancer. These findings open up new possibilities for personalised medicine, where patients can be stratified based on their tumour's response. By identifying the subset of patients whose tumours can still mount a strong response to STING activation, doctors could select better candidates for STING agonists. This biomarker-based approach could help determine which patients would benefit from turning STING on and which would benefit from turning it off, leading to more targeted and effective treatments for people with advanced cancer driven by chromosomal instability.

Conclusion

Based on the research findings, it can be concluded that chronic activation of the STING pathway, induced by CIN, promotes changes in cellular signalling that hinder anti-tumour immunity and facilitate cancer metastasis. This rewiring of downstream signalling ultimately renders STING-agonist drugs ineffective in advanced cancer patients. However, the study also suggests that STING inhibitors may benefit these patients by reducing chromosomal-instability-driven metastasis. The research highlights the importance of identifying biomarkers to determine which patients would benefit from STING activation or inhibition. Overall, these findings provide valuable insights into the underlying mechanisms of cancer progression and offer potential opportunities for improved treatment strategies for patients with advanced cancer.

The study shown in Figure 1 analysed 39,234 single cells within the tumour microenvironment (TME), categorised by cell subtype assignment, in tumours whose rates of CIN were genetically dialled up or down. The study also showed CIN-dependent effects on differential abundance at the neighbourhood level, grouped by cell subtype and ranked by mean log2 fold change within each cell subtype, with node opacity scaled by p-value.

Written by Sara Maria Majernikova

Related articles: Cancer immunologist Polly Matzinger / The Hippo signalling pathway

Reference: Li, J., Hubisz, M.J., Earlie, E.M. et al. Non-cell-autonomous cancer progression from chromosomal instability. Nature 620, 1080-1088 (2023). https://doi.org/10.1038/s41586-023-06464-z
- Cryptosporidium: bridging local outbreaks to global health disparities | Scientia News
Investigating the outbreak in Devon, UK in May 2024
- Physics in healthcare | Scientia News
Nuclear medicine

When thinking about a career or what to study at university, many students interested in science think that they have to decide between a more academic route and something more vocational, such as medicine. While both paths are highly rewarding, it is possible to mix the two. An example of this is nuclear medicine, which allows physics students to become healthcare professionals.

Nuclear medicine is an area of healthcare that involves introducing a radioactive isotope into a patient's system in order to image their body. A radioactive isotope has an unstable nucleus that decays and emits radiation. This radiation can then be detected, usually by a tool known as a gamma camera. It sounds dangerous, but it is a fantastic tool that allows us to identify abnormalities, view organs in motion and even prevent further spreading of tumours.

So, how does the patient receive the isotope? It depends on the scan they are having! The most common route is injection, but it is also possible for the patient to inhale or swallow the isotope. Some hospitals give radioactive scrambled eggs or porridge to the patient for gastric emptying imaging. The radioisotope needs to obey some conditions:

● It must have a reasonable half-life. The half-life is the time it takes for the isotope's activity to fall to half of its original value (a short worked example appears below). If the half-life is too short, the scan will be useless as nothing will be seen. If it is too long, the patient will remain radioactive and spread radiation into their immediate surroundings for a long period of time.

● The isotope must be non-toxic. It cannot harm the patient!

● It must be able to biologically attach to the area of the body that is being investigated. If we want to look at bones, there is no point in giving the patient an isotope that goes straight to the stomach.

● It must have radiation of suitable energy. The radiation must be picked up by the cameras, which are designed to be most efficient over a specific energy range. For gamma cameras, this is around 100-200 keV.

Physicists are absolutely essential in nuclear medicine. They have to understand the properties of radiation, run daily quality checks to ensure the scanners are working, calibrate devices so that the correct activity is given to patients, and much more. The safety of patients and healthcare professionals must be the first priority when it comes to radiation, and with the right people on the job, it is. Nuclear medicine is effective and is implemented in standard medicine thanks to the work of physicists.

Written by Megan Martin

Related articles: Nuclear fusion / The silent protectors
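As a footnote to the half-life condition above, here is a minimal sketch of the decay arithmetic. The isotope and dose are illustrative assumptions (technetium-99m is a commonly used gamma-camera tracer with a roughly six-hour half-life and a ~140 keV emission); the article itself does not commit to a specific isotope or dose.

```python
# Minimal sketch of the half-life relationship: A(t) = A0 * (1/2) ** (t / half_life).

def remaining_activity(a0_mbq: float, t_hours: float, half_life_hours: float) -> float:
    """Activity (in MBq) remaining after t_hours, given the isotope's half-life."""
    return a0_mbq * 0.5 ** (t_hours / half_life_hours)

# Illustrative example: technetium-99m (half-life ~6 hours).
# A 500 MBq dose falls to 250 MBq after 6 h and to ~31 MBq after 24 h (four half-lives).
for t in (0, 6, 12, 24):
    print(f"{t:>2} h: {remaining_activity(500, t, 6.0):6.1f} MBq")
```

The same relationship is what makes a "reasonable" half-life a compromise: too short and the activity is gone before imaging, too long and the patient keeps emitting for days.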
- Hypermobile Ehlers-Danlos Syndrome and Hypermobility Spectrum Disorder | Scientia News
The same condition after all? Practice and progress in rheumatology

The relationship between hypermobile Ehlers-Danlos Syndrome (hEDS) and Hypermobility Spectrum Disorder (HSD) has been hotly debated in recent years, with research published on a near-constant basis attempting to establish a valid symptomatic or causal difference between the two disorders. Now, a paper by Ritelli et al. (2022) may settle the debate. Using RNA sequencing techniques and immunofluorescence, Ritelli et al. found identical gene expression and cellular characteristics in dermal biopsies from people with either condition. Through immunofluorescence of biopsies from 20 women with hEDS, 16 women and 4 men with HSD, and 40 controls, it was found that the shape and components of the extracellular matrix differed greatly in those with HSD/hEDS compared with the healthy control group. Abnormalities were discovered in the expression of cadherin-11, snail1, and the αvβ3, α5β1 and α2β1 integrins. Integrins mediate the connections between the cell cytoskeleton and the extracellular matrix to hold them together, cell-to-cell adhesion is initiated by cadherin-11, and snail1 is localised close to the cyclin-dependent kinase inhibitor 2B (CDKN2B) gene. When snail1 is overexpressed to the point of reaching the general localisation of the CDKN2B domain, it can activate CDKN2B gene products. This suggests there may be a similar causative link between the widespread inflammation and chronic pain in HSD/hEDS and rheumatoid arthritis.

Li et al. (2021) showed that the polarisation of macrophages (white blood cells which destroy foreign products) is tightly controlled by the CDKN2B-AS1/MIR497/TXNIP axis; increased activation of this axis in rheumatoid arthritis catalyses excessive polarisation of macrophages, which causes them to attack healthy cells. In rat studies published by Tan et al. (2022), rats with diabetes and induced sepsis experienced greater intestinal injury than control rats without any medical pathology who underwent induced sepsis. This was demonstrated to be due to interruptions in the miR-3061/Snail1 communication pathway. Research on this phenomenon in humans may elucidate the relevance of snail1 overproduction in hEDS/HSD sufferers to their complex gastrointestinal symptoms. If this pathway works similarly in human models of sepsis or localised GI infection, it may indicate that snail1 overproduction is responsible for the hyperpolarisation of macrophages in response to foreign product detection, which may cause immunological damage in the intestines. However, the relevance of this study to hEDS/HSD should be considered questionable until further human research into this avenue has been completed.

The upshot of this research is that academia can potentially derive a genetic cause of the complex phenotypes demonstrated by sufferers of hEDS/HSD. This can be achieved by examining the human genome and testing genes like those above, or those implicated in modulating their activity. Once garnered, this genetic evidence will elucidate whether hEDS and HSD are one disorder, or variants of the same disorder with differing genetic causes.
This, in turn, could lead to the development of medications or treatments tailored to genetic phenotype.

Written by Aimee Wilson

Related article: Ehlers-Danlos syndrome
- Fake science websites | Scientia News
How fake science websites hijack our trust in experts to misinform and confuse

In science, all research is peer-reviewed by experts. Now, fake science websites are mimicking this practice, capitalising on our trust in experts. In some cases, these websites are paid to publish fake science, and this is becoming more common: in a recent global survey, almost 50% of respondents said they see false or misleading information online daily. By understanding the methods these sites use, we can prevent their influence.

Hyperlinking is one technique used to convince website users. Links reassure the user that the content is credible, but most people don't have experience in analytical techniques, and so these links aren't questioned. Repetition is used to increase the visibility of fake science content and to saturate search engines; the same content can be repeated and spread across different sites. Users who try "lateral reading" then find multiple websites that appear to corroborate the fake science from the initial source. Many of these sites only choose articles that agree with their perspective and depend on the audience not taking the time to follow up. Manufacturing doubt is another strategy, in which facts are intentionally distorted to promote an agenda; it has been used by the tobacco industry and against climate science. Articles can thus maintain the façade of using scientific methods by referencing sources that are difficult to interpret, while research supported by sound evidence is downplayed.

To spot fake science websites, first check the hyperlinked articles: these websites tend to link to repeated content from disreputable sites. Next, look at the number of reposts a website has; legitimate science posts appear on credible websites. Some websites also investigate and expose sites that feature fake science. Ultimately, fake science websites thrive on users not having the time or skills to look deeper into the evidence, so doing exactly that will help expose them.

Written by Antonio Rodrigues

Related articles: Digital disinformation / COVID-19 misconceptions
- Personalised medicine | Scientia News
Treatment based on the individual's genetics

In modern medicine, the concept of genetic risk factors is well understood. Certain individuals will be predisposed to disease based on their family history and DNA. Similar to how we inherit traits like eye colour from our parents, susceptibility to conditions such as diabetes or cancer can also be inherited. However, it is only recently that we have begun to understand that an individual's genetic makeup will affect not only their risk of disease but also their response to treatment.

Understanding risk factors is crucial for diagnosing disease and implementing preventative measures to maintain a patient's health. Utilising a person's unique DNA could provide insights into their genetic predisposition towards different health conditions, thus accelerating the diagnostic process. Giving patients the ability to make informed decisions about their health based on their genetic risk could help them prevent disease. For example, women carrying BRCA1 mutations may opt for mastectomies to reduce the risk of breast cancer later in life. Personalised medicine doesn't only focus on risk; it can also directly influence how treatments are administered. Genomic data can indicate which medicines are most likely to be effective and whether there may be associated side effects. The Human Genome Project has made tremendous advancements in the last decade. Combining this data with medical records could provide doctors with insights into the molecular-level interactions of different drugs with individual patients.

Personalised medicine in practice

Cancer serves as the best example of the importance of personalised medicine. Patients have a unique combination of risk factors from their DNA and lifestyle. However, the same treatments are often offered to everyone with the same type of cancer. The specific mutations that cause a cell to become cancerous are unique to each patient. The genetic makeup of cancer cells may determine which treatment should be focused on, and this is where personalised medicine plays a critical role. An example of personalised medicine already in use is for lung cancer, particularly for cancers with mutated Epidermal Growth Factor Receptors (EGFRs). EGFRs are surface proteins involved in cell growth and division. If there is a mutation, it can result in unpredictable and uncontrollable cell proliferation. There are drugs specifically designed to treat lung cancer cells carrying this EGFR mutation, with their mechanism of action based on it. These drugs would likely be ineffective for lung cancers with different mutations, as those have different mechanisms of action. Personalised medicine tailors treatment to the genetic makeup of a person to achieve a bespoke and hopefully improved outcome. Transcriptomics, the study of RNA and its alterations instead of DNA, may be a future avenue of investigation in understanding cancer biology. Tumours can arise due to mutated RNA or abnormal transcription events, indicating that DNA is not the only genetic material relevant to oncology. There have also been promising innovations in personalised vaccines tailored to each patient: tissue from an individual is biopsied and studied, and using identified biomarkers, a custom mRNA vaccine can prime the immune system to attack cancer cells.
Future potential

Genetic variation can significantly affect a patient's response to treatment. By combining genomic data and AI technology, scientists are developing predictive algorithms to create individualised medication plans for patients, potentially eliminating the guesswork in prescriptions. Personalised precision medication holds great potential. However, the primary limitation currently lies in the cost of treatment: medical services are stretched thin across the population, making bespoke treatments currently unfeasible. Personalised medicine is expected to improve as new genetic biomarkers are discovered and catalogued, leading to more sophisticated genomic databases over time. As sequencing technology becomes more mainstream, associated costs are likely to decrease, possibly making personalised medicine standard practice in the future.

Written by Charlotte Jones
- A primer on the Mutualism theory of general intelligence | Scientia News
A new theory suggests intelligence develops through reciprocal interactions between abilities

Introduction

One of the most replicated findings in psychology is that if a sufficiently large and diverse battery of cognitive tests is administered to a representative sample of people, an all-positive correlation matrix will be produced. For a century, psychometricians have explained this occurrence by proposing the existence of g, a latent biological variable that links all abilities. g is statistically represented by the first factor derived from a correlation matrix using a method called factor analysis, which reduces the dimensionality of data into clusters of covariance between tests called factors. Early critics of g pointed out that nothing about the statistical g-factor required the existence of a real biological factor, and that the overlap of uncorrelated mental processes sampled by subtests was sufficient. While the strength of correlations between subtests does generally correspond to intuitive beliefs about the processes shared between them, this is not universally the case, and for this reason sampling theory has never seen widespread acceptance.

A new theory called mutualism has been proposed that explains the positive manifold without positing the existence of g. In mutualism, the growth of abilities is coupled, meaning improvement in one domain causes growth in another, inducing correlations between abilities over time. The authors of the introductory paper demonstrated in a simulation that when growth in abilities is coupled, the interaction between baseline ability, growth speed and limited developmental resources is sufficient to create a statistical general factor from abilities that are initially uncorrelated (a sketch of such a coupled-growth simulation appears below), offering a novel explanation for why abilities like vocabulary that are 'inexpensive' in terms of developmental resources explain the most variance in other abilities.

Empirical evidence

In the field of intelligence, mutualism has been tested twice among neurotypical children in the lab and once in a naturalistic setting with data from a gamified maths revision platform. Alongside these, a lone study exists comparing coupling in children with a language disorder and neurotypical children; however, methodological issues related to attrition preclude it from discussion here. All studies used latent change score modelling (LCSM) to compare competing models of how intelligence develops over time. LCSM is a subset of structural equation modelling in which researchers compare the discrepancy between models of proposed causal connections between variables and their values in the real data, using model fit indices. Three parameters resembling those used in the introductory paper's simulations were used to represent causal connections between variables: the change score (the wave 2 score minus the wave 1 score of the same ability), the self-feedback parameter (the regression coefficient of baseline ability on the change score of the same ability) and the coupling effect parameter (the regression coefficient of one ability at wave 1 on the change score of the other ability). The following models were compared: the g-factor model, defined by the absence of coupling and growth driven by change in the g-factor; the investment model, defined by coupling from matrix reasoning to vocabulary; and the mutualism model, defined by bidirectional coupling.
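The following is a minimal, illustrative sketch of the kind of coupled-growth simulation described above, loosely following the mutualism model introduced by van der Maas et al. (2006). The number of abilities, growth rates, capacities and coupling strength are arbitrary assumptions chosen for demonstration, not the values used by the authors; the point is only that abilities that start out uncorrelated end up positively correlated once their growth is coupled.

```python
import numpy as np

rng = np.random.default_rng(0)

n_people, n_abilities = 500, 6
a = 0.2 + 0.1 * rng.random((n_people, n_abilities))      # growth speed, varies per person and ability
K = 1.0 + rng.random((n_people, n_abilities))            # limited developmental resources (capacity)
x = 0.05 * (1.0 + rng.random((n_people, n_abilities)))   # baseline abilities, mutually uncorrelated

print("baseline correlations (close to zero):")
print(np.round(np.corrcoef(x, rowvar=False), 2))

M, dt = 0.1, 0.01   # uniform positive coupling between abilities; Euler step size
for _ in range(3000):
    others = x.sum(axis=1, keepdims=True) - x               # summed level of the other abilities
    dx = a * x * (1.0 - x / K) + a * M * others * x / K     # logistic growth plus mutualistic coupling
    x = x + dt * dx

print("correlations after coupled growth (all positive, i.e. a positive manifold):")
print(np.round(np.corrcoef(x, rowvar=False), 2))
```

Running this, the baseline matrix hovers around zero while the post-growth matrix is uniformly positive; extracting a first factor from the latter would yield a statistical g even though no common cause was built into the simulation.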
Mutualism in the lab

The first two lab studies investigated coupling between vocabulary and matrix reasoning in samples of 14-25 year olds and 6-8 year olds respectively. The mutualism model showed the best fit in both studies, albeit less decisively in the three-wave younger sample, suggesting the stronger model fit of the first study may have been an artefact of regression to the mean. I think it's problematic to interpret this as empirical support for mutualism due to issues that follow from only using two abilities. A g-factor extracted from two abilities may reflect specific non-g variance shared between tests as much as it does common variance caused by g. Adding to this ambiguity is the fact that the correlations between the change scores of the two tests, after controlling for coupling and self-feedback effects, were positive, reflecting the influence of an unmodelled third variable, be that g or unmeasured coupling. Another problematic feature of the studies is their specification of the g-factor model as being without coupling, despite the fact that no latent change score modelling study of childhood development has ruled out that g may develop in a coupled or partially coupled manner. Studies using the methodology to study cognitive ageing have shown that some abilities are coupled whereas others are not, suggesting that sampling only abilities that do show coupling may lead to a biased comparison.

Mutualism in the classroom

Mutualism showed a marginally better fit than the investment model in explaining the development of counting, addition, multiplication and division over three years in a study featuring a sample of 12,000 Dutch 6-10 year olds using the revision platform Mathgarden. The change scores of each ability showed strong correlations after controlling for coupling and self-feedback effects. When considered in relation to the good model fit of the investment model, I believe this may reflect the standardised effect of the curriculum on the development of abilities, independent of coupling and baseline ability. A finding with negative implications for mutualism from this study is that the number of games played was not associated with any greater strength in coupling. This could reflect that coupling is a passive mechanism of development with little environmental input, but it could equally reflect sorting of high-ability students into a niche, combined with self-feedback effects of their baseline ability impeding coupling. To observe the causal effect of effort on coupling after controlling for cognitive ageing and the tendency of high-ability people to train harder, a randomised controlled trial of cognitive training is needed.

Cognitive training

Unfortunately, no cognitive training study has used latent change score modelling, meaning coupling must be inferred from the presence of far transfer (gains on untrained abilities) rather than directly estimated. COGITO's youth sample resembled the first lab study to test mutualism in its age range and choice of fluid reasoning as a far transfer measure. Participants underwent 100 days of hour-long training sessions of working memory, processing speed and episodic memory. The authors found no near or far transfer gains for working memory and processing speed, possibly indicating developmental limits on their improvement. However, moderate effect sizes were found for fluid reasoning and episodic memory.
The study's results are lacklustre and developmentally bound, but they offer an example of experimentally induced far transfer in a literature in which it is a rarity, leaving open the possibility that the coupling effects observed in the lab studies were not mere passive effects of development. In contrast to COGITO, which targeted young people at the tail end of their cognitive development, the Abecedarian Project started almost as soon as the subjects were born. Conceived of as a pre-school intervention to improve the educational outcomes of African Americans in North Carolina, the Abecedarian Project consisted of an experimental group that received regular guided educational play for infants aimed at building early language, and a control condition that only received nutritional supplementation. At entry to primary school, the experimental group showed a 7-point advantage in IQ, which persisted in diminished form at 4.4 IQ points by age 21. In contrast to previous early-life interventions, cognitive training studies and studies on the cognitive outcomes of adoption, the gains were domain general rather than improvements on specific abilities. This provides causal evidence that if interventions are sufficiently early and target highly g-loaded abilities such as vocabulary, they can induce cascades of domain-general improvement, a finding in line with the predictions of mutualism.

It would be unfair to end this segment without mentioning perhaps the most standardised cognitive training regime there is: schooling. The causal effect of a year of schooling on IQ can be teased apart from the developmental effects of ageing by using a method called regression discontinuity analysis. In this method, the distance of a student's birthday from the cutoff between two year groups is used as a predictor variable alongside the school year in a multiple regression predicting IQ. A recent paper reanalysing data from a study using this method found that the subtest gains from a year of schooling showed a moderate negative correlation with their g loadings. As mutualism states that g develops through coupling, this would lend credence to the view that coupling effects are passive mechanisms of g's development rather than being inseparable from experience.

Conclusion

I believe that it's more accurate to say there is evidence for coupling effects than it is to say there is evidence for mutualism. There is convergent evidence, from the year-of-schooling effect, from coupling effects not rising with the number of maths games played, and from the COGITO intervention's results, that the environment has little causal role in coupling effects and their strength. Opposing evidence comes from the Abecedarian Project; however, this is not an environmental stimulus to which most people will be exposed, so more weight should be placed on the effects of a year of schooling because it is generalisable. To reconcile this conflicting evidence, future authors should seek to replicate the COGITO intervention in an early-adolescent identical twin sample with co-twin controls. This would allow researchers to observe coupling effects while executive functions are still in development and give them a more concrete understanding of the self-feedback parameter, grounded in developmental cascades of gene expression. A more readily available alternative would be to apply latent change score modelling to the Abecedarian Project dataset.
I will end with a quote from a critic of mutualism, Gilles Gignac: "I conclude with the suggestion that belief in the plausibility of the g factor (or mutualism) may be impacted significantly by individual differences in personality, attitudes, and worldviews, rather than rely strictly upon logical and/or empirical evidence." As the current evidence stands, this may be true, but with the availability of new developmental studies such as the Adolescent Brain Cognitive Development study, and old ones like the Louisville twin study, there's less of an excuse than ever.

Written by James Howarth

Related article: Nature vs nurture in childhood intelligence

References

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge University Press.

Rindermann, H., Becker, D., & Coyle, T. R. (2020). Survey of expert opinion on intelligence: Intelligence research, experts' background, controversial issues, and the media. Intelligence, 78, 101406. https://doi.org/10.1016/j.intell.2019.101406

Spearman, C. (1904). "General intelligence," objectively determined and measured. The American Journal of Psychology, 15(2), 201. https://doi.org/10.2307/1412107

Thomson, G. H. (1916). A hierarchy without a general factor. British Journal of Psychology 1904-1920, 8(3), 271-281. https://doi.org/10.1111/j.2044-8295.1916.tb00133.x

Jensen, A. R. (1998). The g factor: The science of mental ability. Praeger Publishers/Greenwood Publishing Group.

Van der Maas, H. L. J., Dolan, C. V., Grasman, R. P. P. P., Wicherts, J. M., Huizenga, H. M., & Raijmakers, M. E. J. (2006). A dynamical model of general intelligence: The positive manifold of intelligence by mutualism. Psychological Review, 113(4), 842-861. https://doi.org/10.1037/0033-295X.113.4.842

Johnson, W., Nijenhuis, J. T., & Bouchard, T. J. (2008). Still just 1 g: Consistent results from five test batteries. Intelligence, 36(1), 81-95. https://doi.org/10.1016/j.intell.2007.06.001
- Delving into the world of chimeras | Scientia News
An exploration of this genetic concept

The term chimera has been borrowed from Greek mythology, transcending ancient tales to become a captivating concept within the fields of biology and genetics. In mythology, the chimera was a monstrous hybrid creature. In the biological context, however, a chimera refers to an organism with cells derived from two or more zygotes. While instances of natural chimerism exist within humans, researchers are pushing the boundaries of genetics via the intentional creation of chimeras, consequently sparking debates and breakthroughs in fields spanning from medicine to agriculture.

In theory, every cell in the body should share an identical genome, but chimeras challenge this notion. For example, the fusion of non-identical twin embryos in the womb is one way chimeras can emerge. While visible cues, such as heterochromia or patches of varied skin tone, may provide subtle hints of its existence, individuals with chimerism often show no overt signs, making its prevalence uncertain. In cases where male and female cells coexist, abnormalities in the reproductive organs may occur. Furthermore, advancements in genetic engineering and CRISPR genome editing have also allowed the artificial creation of chimeras, which may aid medical research and treatments. In 2021, the first human-monkey chimera embryo was created in China to investigate ways of using animals to grow human organs for transplants. The organs could be genetically matched by taking the recipient's cells and reprogramming them into stem cells. However, the process of creating a chimera can be challenging and inefficient. This was shown when researchers from the Salk Institute in California tried to grow the first embryos containing cells from humans and pigs: of 2,075 implanted embryos, only 186 developed up to the 28-day time limit for the project.

Chimeras are not exclusive to the animal kingdom; plants exhibit this genetic complexity as well. The first non-fictional chimera, the "Bizzaria" discovered by a Florentine gardener in the seventeenth century, arose from the graft junction between sour orange and citron. Initially thought to be an asexual hybrid formed from cellular fusion, later analyses revealed it to be a chimera, a mix of cells from both donors. This pivotal discovery in the early twentieth century marked a turning point, shaping our understanding of chimeras as unique biological phenomena. Chimerism is also a common cause of variegation, with parts of the leaf appearing green and other parts white. This is because the white or yellow portions of the leaf lack the green pigment chlorophyll, which can be traced to layers in the meristem (areas at the root and shoot tips with active cell division) that are either genetically capable or incapable of making chlorophyll.

As we conclude this exploration into the world of chimeras, from the mythological realm to the scientific frontier, it is evident that these entities continue to mystify and inspire, broadening our understanding of genetics, development and the interconnectedness of organisms. Whether natural wonders or products of intentional creation, chimeras beckon further exploration, promising a deeper comprehension of the fundamental principles that govern the tapestry of life.

Written by Maya El Toukhy

Related article: Micro-chimerism and George Floyd's death