Search Index
355 results found
- What you should know about rAAV gene therapy | Scientia News
Recombinant adeno-associated viruses (rAAVs)
Published: 01/10/23, 19:45 | Last updated: 14/07/25, 15:13

Curing a disease with one injection: the dream, the hope, the goal of medicine. Gene therapy brings this vision to reality by harnessing viruses as therapeutic tools. Among them, adeno-associated viruses (AAVs) are the most widely used: genetically modified AAVs, named recombinant AAVs (rAAVs), already feature in six gene therapies approved for medical use. Over 200 clinical trials are ongoing.

AAV, a virus reprogrammed to cure diseases
Gene therapy inserts genetic instructions into a patient to correct a mutation responsible for a genetic disorder. Thanks to genetic engineering, researchers have co-opted AAVs (along with adenoviruses, herpes simplex viruses and lentiviruses) into delivering these instructions. Researchers have swapped the genes that allow AAVs to spread from person to person for genes that treat diseases. In other words, the virus has been genetically reprogrammed into a vector for gene transfer. The gene supplied is referred to as the transgene.

Biology of AAVs
AAVs were discovered in the 1960s as contaminants in cell cultures infected by adenoviruses, a coexistence to which they owe their name. AAVs consist of a protein shell (capsid) wrapped around the viral genome, a single strand of DNA approximately 4,700 bases (4.7 kb) long. The genome is capped at both ends by palindromic repetitive sequences folded into T-shaped structures, the Inverted Terminal Repeats (ITRs). Sandwiched between the ITRs are four genes. They determine capsid components (cap) and capsid assembly (aap), genome replication (rep) and viral escape from infected cells (maap) (Figure 1, top panel). The replacement of these four genes with a transgene of therapeutic use, and its expression by infected cells (transduction), lie at the heart of gene therapy mediated by rAAVs.

Transgene transfer by rAAVs
Researchers favour rAAVs as vectors because AAVs are safe (they are not linked to any disease and do not integrate into the genome), can maintain the production of a therapeutic gene for over ten years, and infect a wide range of tissues. In an rAAV, the ITRs are the only viral elements preserved. The four viral genes are replaced by a therapeutic transgene and regulatory sequences to maximise its expression. Therefore, an rAAV contains the coding sequence of the transgene, an upstream promoter to induce transcription and a downstream regulatory sequence (poly-A signal) to confer stability on the mRNA molecules produced (Figure 1, bottom panel).

Steps of rAAV transduction
Depending on the disease, rAAVs can be administered into the blood, an organ, a muscle or the fluid bathing the central nervous system (cerebrospinal fluid). rAAVs dock on target cells via a specific interaction between the capsid and proteins on the cell surface that serve as viral receptors and co-receptors. The capsid largely dictates which cell types will be infected (cell tropism). Upon binding, the cell engulfs the virus into membrane vesicles (endosomes) typically used to digest and recycle material. The rAAVs escape the endosomes, avoiding digestion, and enter the nucleus, where the capsid releases the single-strand DNA (ssDNA) genome, a process known as uncoating.
The ITRs direct the synthesis of the second strand to reconstitute double-strand DNA (dsDNA), the replication of the viral genome and the concatenation of individual genomes into larger, circular DNA molecules (episomes) that can persist in the host cell for years. Nuclear proteins transcribe the transgene into mRNAs; the mRNAs are exported into the cytoplasm, where they are translated into proteins. The rAAV has achieved successful transduction: the transgene can start exerting its therapeutic effects. A simplified overview of rAAV transduction is presented in Figure 2.

The triumphs of rAAV gene therapies
rAAV gene therapies are improving lives and saving patients. Unsurprisingly, the most remarkable examples come from the drugs already approved. Roctavian is an rAAV gene therapy for haemophilia A, a life-threatening bleeding disorder in which the blood does not clot properly because the body cannot produce the coagulation Factor VIII. In a phase III clinical trial, Roctavian reduced bleeding rates by 85%, and most treated patients (128 out of 134) no longer needed regular administration of Factor VIII, the standard therapy for the disease, for up to two years after treatment. Similarly impressive results were noted for the rAAV Hemgenix, a gene therapy for haemophilia B (a bleeding disorder caused by the absence of the coagulation Factor IX): Hemgenix reduced bleeding rates by 65%, and most treated patients (52 out of 54) no longer needed regular administration of Factor IX for up to two years.

The benefits of Zolgensma are even more awe-inspiring. Zolgensma is an rAAV gene therapy for spinal muscular atrophy (SMA), a genetic disorder in which neurons in the spinal cord die, causing muscles to waste away irreversibly. The life expectancy of SMA patients can be as short as two years, so timing is critical. As a consequence, Zolgensma had to be tested in neonates: babies with the most severe form of SMA were dosed with the drug before six weeks of age and before symptom onset (SPRINT study). After 14 months, all 14 treated babies were alive and breathing without a ventilator, whilst only a quarter of untreated babies did. After 18 months, all 14 could sit without help, an impossible feat without Zolgensma. These and other resounding achievements are fuelling research on rAAV gene therapies.

Current limitations
Scientists still have some significant hurdles to overcome:
● Packaging capacity: AAV capsids can only accommodate relatively short DNA sequences, which rules out the replacement of many long genes associated with genetic disorders (illustrated in the sketch below).
● Immunogenicity: 30-60% of individuals have antibodies against AAVs, which block rAAVs and prevent transduction.
● Tissue specificity: rAAVs often infect tissues that are not the intended target (e.g., inducing the expression of a transgene meant to treat a neurological disease in the liver rather than in neurons).

Gene therapies, not only those delivered by rAAVs, face an additional challenge, one only partially technological in nature: their price tags. Their prices – rAAVs range from $850,000 (£690,000) to $3,500,000 (£2,850,000) – make them inaccessible to most patients. A cautionary tale is already out there: Glybera, the first rAAV gene therapy approved for medical use, albeit only in Europe (2012), was discontinued in 2017 because it was too expensive. Research is likely to reduce the exorbitant manufacturing costs, but the time may have come to reconsider our healthcare systems.
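To make the packaging-capacity limitation concrete, here is a minimal, illustrative Python sketch. The element sizes are rough, commonly cited figures (two ~145-base ITRs, a compact promoter, a poly-A signal), not values taken from the article or from any specific product design.

```python
# Illustrative check of whether a gene cassette fits within the ~4.7 kb
# packaging capacity of an AAV capsid (all sizes are rough assumptions).

AAV_CAPACITY_BP = 4700  # approximate single-stranded packaging limit

def cassette_size(transgene_bp: int,
                  itr_bp: int = 145,       # one ITR; an rAAV keeps two
                  promoter_bp: int = 600,  # e.g. a compact promoter
                  polya_bp: int = 200) -> int:
    """Total size of an ITR-promoter-transgene-polyA-ITR cassette."""
    return 2 * itr_bp + promoter_bp + transgene_bp + polya_bp

# A Factor IX coding sequence (~1.4 kb) fits comfortably (Hemgenix-style).
print(cassette_size(1_400) <= AAV_CAPACITY_BP)   # True (~2.5 kb)

# A full-length dystrophin coding sequence (~11 kb) is far over the limit,
# which is why Duchenne programmes deliver shortened micro-dystrophins.
print(cassette_size(11_000) <= AAV_CAPACITY_BP)  # False (~12.1 kb)
```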
Notes
One non-viral vector exists, but its development lags behind that of viral vectors. The six approved therapies are: Glybera for lipoprotein lipase deficiency, Luxturna for Leber congenital amaurosis, Zolgensma for spinal muscular atrophy, Roctavian for haemophilia A, Hemgenix for haemophilia B, and Elevidys for Duchenne muscular dystrophy.

Written by Matteo Cortese, PhD
Related articles: Germline gene therapy (GGT) / A potential treatment for HIV / Rabies / Antiretroviral therapy
- Which fuel will be used for the colonisation of Mars? | Scientia News
Speculating on the prospect of inhabiting Mars
Published: 30/04/23, 11:06 | Last updated: 01/10/25, 10:48

The creation of a "Planet B" is an idea that has been circulating for decades; however, we are yet to find a planet similar enough to our Earth to be viable to live on without major modifications. Mars has been the most widely discussed planet in the media, and is commonly thought to be the planet we know the most about. So, could it be habitable? If we were to move to Mars, how would society thrive?

The dangers of living on Mars
As a neighbour to Earth, Mars might be assumed habitable at first glance. Unfortunately, it is quite the opposite. On Earth, humans have access to air with an oxygen content of 21%; the Martian atmosphere contains only 0.13% oxygen. The difference in the air itself suggests an uninhabitable planet. Another essential factor of human life is food. There have indeed been attempts to grow crops in simulated Martian soil, including tomatoes, with considerable success. Unfortunately, the soil is toxic, so ingesting these crops could cause significant side effects in the long term. It could be possible to grow crops in a laboratory modelled on Earth's soil and atmospheric conditions; however, this would be difficult. Air and food are two essential resources that would not be readily available in a move to Mars. Food could be grown in laboratory-style greenhouses and the air could be processed, but it is important to note that these solutions are fairly novel.

The Mars Oxygen ISRU Experiment
The Mars Oxygen ISRU Experiment (MOXIE) was a component of the NASA Perseverance rover, which was sent to Mars in 2020. Solid oxide electrolysis converts carbon dioxide, readily available in the atmosphere of Mars, into carbon monoxide and oxygen. MOXIE supports the idea that, in a move to Mars, oxygen would have to be 'made' rather than being readily available. The MOXIE experiment used nuclear energy to do this, and it showed that oxygen could be produced at all times of day and in multiple weather conditions. It is possible to obtain oxygen on Mars, but a great deal of energy is required to do so (a rough scale-up calculation follows at the end of this entry).

What kind of energy would be better?
With access to oxygen especially, the energy source on Mars would need to be extremely reliable to keep the population safe. Fossil fuels are reliable, but it is increasingly obvious that the very reason a move to Mars might become necessary is our lack of care for the Earth, so polluting resources are to be especially avoided. A combination of resources is likely to be used: wind power during the massive dust storms that sweep across Mars regularly, and solar power in clear weather, when dust is not obscuring the surface. One resource that would be essential is nuclear power. Public perception of it is mixed, yet it is certainly reliable, and that is the main requirement. After all, a human can only survive for around five minutes without oxygen; time lost to energy failures would be deadly.

Written by Megan Martin
Related articles: Exploring Mercury / Artemis: the lunar south pole base / Total eclipses
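As a rough sense of the scale involved, here is a back-of-the-envelope sketch. The production rate and consumption figures are approximations from public NASA reporting on MOXIE and crew life-support estimates, not numbers from the article.

```python
# MOXIE splits Martian CO2 by solid oxide electrolysis: 2 CO2 -> 2 CO + O2.
# Assumed figures (approximate, from public NASA reporting):
#   MOXIE demonstrated roughly 6 g of O2 per hour;
#   a person consumes on the order of 0.84 kg of O2 per day.

moxie_rate_g_per_hr = 6.0
human_need_kg_per_day = 0.84

daily_output_kg = moxie_rate_g_per_hr * 24 / 1000           # ~0.14 kg/day
units_per_person = human_need_kg_per_day / daily_output_kg
print(f"~{units_per_person:.0f} MOXIE-class units per person")  # ~6
```

Under these assumptions, keeping even one settler breathing takes several MOXIE-class units running continuously, which is why the article treats a reliable energy supply as the binding constraint.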
- Zoology | Scientia News
Zoology Articles
Conservation, diseases, animal behaviour, adaptation and survival. Expand your knowledge of the incredible diversity of life on Earth with these articles. You may also like: Biology and Ecology.

Deception by African birds: the species Dicrurus adsimilis uses deception by flexible alarm mimicry to target and carry out food-theft attempts
An experiment on ochre stars: investigating the relative fitness of the species Pisaster ochraceus
Orcinus orca: a species report
Rare zoonotic diseases: we all know about COVID-19, but what about the other zoonotic diseases? Article #1 in a series on rare diseases.
Marine iguanas: their conservation
The cost of coats: 55 years of vicuña conservation in South America. Article #1 in a series on animal conservation around the world.
Conserving the California condor: these birds live on the west coast of North America. Article #2 in a series on animal conservation around the world.
Emperor penguins: kings of ice. Article #6 in a series on animal conservation around the world.
Protecting rock-wallabies in Australia: a group of 25 animal species and subspecies related to kangaroos. Article #7 in a series on animal conservation around the world.
Do other animals get periods? Looking at menstruation in non-human animals, e.g. monkeys, bats
Same-sex attraction in non-human animals: SSSB in birds, mammals, and invertebrates
Changing sex in fish: why some fish change sex during their lifetimes
- How did bioinformatics allow for swift development of the SARS-CoV-2 vaccine? | Scientia News
Code to cure
Published: 03/09/24, 13:05 | Last updated: 30/01/25, 12:36

Traditionally, vaccine development takes years. However, the urgent need for a vaccine to mitigate the effects of the COVID-19 pandemic sped up the process. Bioinformaticians played a crucial role in enabling the swift development of effective SARS-CoV-2 vaccines in many ways. Bioinformatics is the science of performing computational analysis and applying computational tools to capture and interpret biological data.

The SARS-CoV-2 virus, with its rapid transmission and mutation rates, quickly caused one of the most widespread and economically disruptive pandemics in history. According to Naseer et al. (2022), the chief of the International Monetary Fund (IMF) estimated that the global economy would lose nearly $9 trillion to the pandemic.

Scientists sequenced the SARS-CoV-2 virus within the first few months of the viral outbreak, and the first SARS-CoV-2 genome sequence was published on GenBank on 10 January 2020. However, a sequence on its own means little until the genes and regulatory elements present in the genome are determined. This was made possible by many bioinformatic tools and pipelines, such as:
- BLAST (Basic Local Alignment Search Tool): a sequence alignment tool used to find regions of similarity and infer function and evolutionary relationships (see the sketch at the end of this article).
- VADR (Viral Annotation DefineR): an automated annotation tool specifically for viral genomes.
- Velvet: a de novo sequence assembler, i.e. it constructs a longer, full sequence from the short reads obtained from next-generation sequencing.

The information collected by different labs was shared worldwide, enabling a global collaborative effort towards developing a SARS-CoV-2 vaccine. Bioinformaticians also played a role in predicting the 3D structures of the proteins on the surface of the SARS-CoV-2 virus, including the spike protein, the protein against which vaccines build immunity. Using computational tools such as AlphaFold, they could model the structure of the spike protein and identify key sites to target in immunisation strategies. Another method used to identify key sites is epitope mapping: the identification of specific regions on an antigen that are recognised by parts of the immune system such as T cell receptors and antibodies. Tools such as the IEDB Analysis Resource and BepiPred allow for the identification of epitopes on the SARS-CoV-2 spike that are highly immunogenic, meaning they stimulate a strong immune response and are therefore ideal targets for vaccines.

SARS-CoV-2 mutates rapidly, and one incredibly important bioinformatic platform, GISAID, has enabled real-time monitoring of these mutations. This comprehensive, open-access database was key to updating vaccine formulations and maintaining efficacy against emerging variants.

In conclusion, although sometimes overlooked, bioinformatics was a crucial factor in fighting SARS-CoV-2 as efficiently and quickly as we did. From genome sequencing to mutation mapping, bioinformaticians have taken up arms at every stage of the battle against the SARS-CoV-2 pandemic.
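As an illustration of the first tool on the list, here is a minimal Biopython sketch of a BLAST similarity search. The query string is a placeholder nucleotide fragment chosen for demonstration, the search runs against NCBI's public servers (internet access required), and none of this is code from the article.

```python
# Minimal BLAST example using Biopython's NCBI web interface.
from Bio.Blast import NCBIWWW, NCBIXML

query = "ATGTTTGTTTTTCTTGTTTTATTGCCACTAGTC"  # placeholder fragment

handle = NCBIWWW.qblast("blastn", "nt", query)  # search the nt database
record = NCBIXML.read(handle)

# Report the top hits with their E-values (lower = stronger match).
for alignment in record.alignments[:3]:
    hsp = alignment.hsps[0]
    print(alignment.title[:80], hsp.expect)
```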
Written by Devanshi Shah
Related articles: Origins of COVID / COVID-19 glossary / Correlation between HDI and mortality rate during the pandemic / mRNA vaccines

REFERENCES
Chatterjee, R., Ghosh, M., Sahoo, S., Padhi, S., Misra, N., Raina, V., Suar, M. & Son, Y.-O. (2021) Next-Generation Bioinformatics Approaches and Resources for Coronavirus Vaccine Discovery and Development—A Perspective Review. Vaccines, 9(8), 812. doi:10.3390/vaccines9080812.
Hufsky, F., Lamkiewicz, K., Almeida, A., Aouacheria, A., Arighi, C., et al. (2020) Computational strategies to combat COVID-19: useful tools to accelerate SARS-CoV-2 and coronavirus research. Briefings in Bioinformatics, 22(2), 642–663. doi:10.1093/bib/bbaa232.
Ma, L., Li, H., Lan, J., Hao, X., Liu, H., Wang, X. & Huang, Y. (2021) Comprehensive analyses of bioinformatics applications in the fight against COVID-19 pandemic. Computational Biology and Chemistry, 95, 107599. doi:10.1016/j.compbiolchem.2021.107599.
Torrington, E. (2022) Bioinformaticians: the Hidden Heroes of the COVID-19 Pandemic. BioTechniques, 72(5), 171–174. doi:10.2144/btn-2022-0039.
Schrödinger, LLC. (2024) PyMOL Molecular Graphics System (Version 2.5.4) [Software]. Available at: https://pymol.org/2/ [Accessed 3 Jul. 2024].
Protein Data Bank. (2024) PDB ID: 7T3M. Available at: https://www.rcsb.org/structure/7T3M [Accessed 3 Jul. 2024].
- Exploring My Role as a Clinical Computer Scientist in the NHS | Scientia News
What my role entails
Published: 06/05/24, 13:03 | Last updated: 17/04/25, 10:23

When we think about career choices, we're often presented with singular options. Clinical Scientific Computing is a field that combines healthcare and computing. Despite being relatively unknown, it is an important cog in the healthcare machine. When I applied for the Scientist Training Programme in 2021, the specialism I applied for (Clinical Scientific Computing) had one of the lowest application rates of the approximately 27 specialisms. Awareness of this area has since improved, thanks both to better advertisement and to exponential advancements in technology and healthcare. According to the NHS, there are now 26,800 full-time-equivalent healthcare scientists in England's NHS.

As a clinical computer scientist, one's expertise can be applied in diverse settings, including medical physics laboratories and clinical engineering departments. My role in radiotherapy involves overseeing the technical aspects of clinical workflows, ensuring the seamless integration of technology in patient care.

Training is a crucial part of being a proficient clinical computer scientist. Especially with the growth of scientific fields in the NHS, there is always an influx of juniors and trainees, and that, in turn, warrants the need for excellent trainers. A clinical scientist is someone who is proficient in their craft and able to explain complex concepts in layman's terms. As Einstein famously said: "If you can't explain it to a 6-year-old, you don't understand it yourself." Although I am still technically a trainee, I am expected to take part in training the more junior trainees in my schedule. On a typical day, this may be as simple as explaining a program and demonstrating its application, or I may dismantle a PC and go through each component, one by one.

At the core of clinical science is research. You won't go a day without working on at least one project, and sometimes these may not even be your own. Collaboration with others is a huge part of the job. Every scientist has a different way of thinking about a problem, and this is exactly what keeps the wheels spinning in a scientific department. There are numerous times when I seek the help of others and vice versa. It is difficult to talk about 'typical' projects because they are so varied in scientific computing, but you will likely find yourself working on a variety of programming tasks. Having clinical know-how is crucial when working on projects in this field, and that aspect is exactly what separates the average computer scientist from the clinical computer scientist. A project I am currently working on involves radiation dose calculations, which naturally involves understanding the biological effects of radiation on the human body. This isn't a typical software development project, so having a passion for healthcare is absolutely necessary.

The unpredictability of technology means that troubleshooting is a constant aspect of our work. If something goes wrong in the department (which it often does), it is our responsibility as technical experts to diagnose and fix the problems quickly but effectively. The clinical workflow is highly sensitive in healthcare, especially the cancer pathway, where every minute counts.
If a radiographer is unable to access patient records or there is an error with a planning system, this can have detrimental effects on the quality of patient care. Addressing errors, like those in treatment planning systems, necessitates a meticulous approach to diagnosis, often leading us from error-code troubleshooting to on-site interventions. For example, I may be required to physically attend a treatment planning room and resolve an issue with the PC.

This narrative offers a glimpse into the day-to-day life of a clinical computer scientist in the NHS, highlighting the critical blend of technical skill, continuous learning, and the profound impact on patient care. Through this lens, we can hopefully appreciate the essential role of clinical scientific computing in advancing healthcare, marked by innovation, collaboration, and a commitment to improving patient outcomes.

Written by Jaspreet Mann
Related articles: Virtual reality in healthcare / Imposter syndrome in STEM
- The spread of digital disinformation | Scientia News
IT cells and their impact on public opinion
Published: 05/08/23, 10:06 | Last updated: 14/07/25, 15:04

As of January 2023, the internet boasts a staggering 4.72 billion estimated social media accounts, a 3% year-on-year growth of +137 million users, with further expansion projected throughout the year. The average person now spends a substantial 6 hours and 58 minutes daily connected to online screens, underscoring the significant role the internet plays in our lives. Consequently, it comes as no surprise that governments worldwide have recognised its potential as a critical tool to advance their agendas, policies, and achievements. Through diverse digital channels, governments aim to reach a vast audience and shape public perception, striving to build transparency, trust, and legitimacy while maintaining a powerful digital presence. However, this approach also raises concerns about bias, propaganda, and information manipulation, which can influence public perceptions in questionable ways.

One such phenomenon is the rise of IT cells: organised groups typically affiliated with political parties, organisations, or interest groups. These Information Technology cells dedicate themselves to managing and amplifying their respective organisations' online presence, predominantly on social media platforms and other digital avenues. During contentious political events or national issues, IT cells deploy coordinated messaging in support of government policies and leaders, inundating social media platforms. Unfortunately, dissenting voices and critics may face orchestrated attacks from these IT cells, aimed at discrediting and silencing them.

While some IT cells may operate with genuine intentions, they have faced criticism for engaging in tactics that spread misinformation, disinformation, and targeted propaganda to sway public sentiment in favour of their affiliated organisations. In such instances, IT cells strategically amplify positive news and government achievements while downplaying or deflecting negative information. Social media influencers and online campaigns have become tools to project a positive image of the government and maintain public support.

One striking example of how governments can exploit such operations for their gain was the infamous Cambridge Analytica scandal. In 2018, revelations exposed how the political consulting firm Cambridge Analytica acquired personal data from millions of Facebook users without consent. The firm then weaponised this data to construct highly targeted and manipulative political campaigns, including during the 2016 United States presidential election and the Brexit referendum.

In India, the ruling BJP party has come under scrutiny for orchestrated online campaigns run through its social media cell. The cell allegedly intimidates individuals perceived as government critics and actively disseminates misogyny, Islamophobia, and animosity. According to Sadhavi Khosla, a cyber-volunteer formerly associated with the BJP IT Cell, the organisation promotes divisive content and employs trolling tactics against users critical of the BJP. Journalists and Indian film actors have also found themselves targeted by these campaigns.
As technology continues to evolve, it is imperative to strike a balance between leveraging the internet for transparency and legitimacy and safeguarding against misuse that could erode trust in digital governance and public discourse. Monitoring and addressing the activities of IT cells would be a significant step towards ensuring responsible and ethical use of digital platforms in the political arena.

Written by Jaspreet Mann
Related articles: COVID-19 misconceptions / Fake science websites
- Artificial intelligence: the good, the bad, and the future | Scientia News
A Scientia News Biology collaboration
Published: 13/12/23, 17:10 | Last updated: 20/03/25, 12:01

Introduction
Artificial intelligence (AI) shows great promise in education and research, providing flexibility, curriculum improvements, and knowledge gains for students. However, concerns remain about its impact on critical thinking and long-term learning. For researchers, AI accelerates data processing but may reduce originality and replace human roles. This article explores the debates around AI in academia, underscoring the need for guidelines to harness its potential while mitigating risks.

Benefits of AI for students and researchers

Students
Within education, AI has created a buzz for its usefulness in helping students complete daily and complex tasks. Specifically, students have used this technology to enhance their decision-making processes, improve workflow and enjoy a more personalised learning experience. A study by Krive et al. (2023) demonstrated this by having medical students take an elective module to learn about using AI to enhance their learning and understand its benefits in healthcare. Traditionally, medical studies have been inflexible, with difficulty integrating pre-clinical theory and clinical application. The module created by Krive et al. introduced a curriculum with assignments featuring online clinical simulations to apply preclinical theory to patient safety. Students scored a 97% average on knowledge exams and 89% on practical exams, showing AI's benefits for flexible, efficient learning. Thus, AI can enhance student learning experiences while saving time and providing flexibility.

Additionally, we gathered testimonials from current STEM graduates and students to better understand the implications of AI. In Figure 1, we can see that the students use AI to support their exam preparation, get to grips with difficult topics, and summarise long texts to save time, while exercising caution in the knowledge that AI has limitations. This shows that AI has the potential to become a personalised learning assistant that improves comprehension and retention and helps organise thoughts, allowing students to enhance their skills through support rather than reliance on the software. Despite the mainstream uptake of AI, one student has chosen not to use it for fear of becoming less self-sufficient, and we will explore this dynamic in the next section.

Researchers
AI can be very useful for academic researchers, for example by speeding up the writing and editing of papers based on new scientific discoveries, or even facilitating the process altogether. As a result, society may gain innovative ways to treat diseases and expand the current knowledge of different academic disciplines. AI can also be used for data analysis, interpreting large amounts of information; this saves not only time but also much of the money required to complete the process accurately. The statistical and graphical findings could be used to inform public policy or help businesses achieve their objectives. Another quality of AI is that it can be tailored to the researcher's needs in any field, from STEM to subject areas outside it, indicating that AI's utilities are vast.
For academic fields requiring researchers to look at things in greater detail, like molecular biology or immunology, AI can help generate models to understand the molecules and cells involved in such mechanisms, for instance through genome analysis and possibly next-generation sequencing. Within education, researchers working as lecturers can utilise AI to deliver concepts and ideas to students and even make the marking process more robust. In turn, this can decrease the burnout educators experience in their daily working lives and may help establish a work-life balance, leaving them more at ease over the long term.

Risks of AI for students and researchers

Students
With great power comes great responsibility, and with the advent of AI in school and learning, there is increasing concern about the quality of learners produced by schools, and whether their attitude to learning and critical-thinking skills are hindered or lacking. This concern is echoed in the results of a study by Ahmad et al. (2023), which examined how AI affects laziness and distorts decision making in university students. The results showed that using AI in education was linked to a 68.9% impact on laziness and a 27.7% loss of decision-making ability among the 285 students studied across Pakistani and Chinese institutes. This confirms some of the worries a testimonial shared with us in Figure 1 and suggests that students may become more passive learners rather than developing key life skills. It may even lead to a reluctance to learn new things and to seeking out 'the easy way' rather than enjoying the acquisition of new knowledge.

Researchers
Although AI can be great for researchers, it carries its own disadvantages. For example, it could lead to reduced originality in writing, and this type of misconduct jeopardises the reputation of the people working in research. Also, the software is only as effective as the type of data it is specialised in, so a given AI could misinterpret the data. This has downstream consequences for how research institutions are run and, beyond that, hinders scientific inquiry. Therefore, if severely misused, AI can undermine the integrity of academic research, which could hinder the discovery of life-saving therapies. Furthermore, there is the potential for AI to replace researchers, suggesting that there may be fewer opportunities to employ aspiring scientists. When given insufficient information, AI can be biased, which can be detrimental; an article found that its use in a dermatology clinic could leave certain patients at risk of undiagnosed skin cancer and suggested that more diverse demographic data are needed for AI to work effectively. Thus, it needs to be applied strategically to ensure it works as intended and does not cause harm.

Conclusion
Considering the uses of AI for students and researchers, it is advantageous to them in filling knowledge gaps, aiding data analysis, boosting general productivity, engaging with the public and much more. Its possibilities for enhancing industries such as education and drug development are vast, propelling societal progress. Nevertheless, the drawbacks of AI cannot be ignored, such as the chance of it replacing people in jobs or the fact that it is not completely accurate. Therefore, guidelines must be defined for its use as a tool to ensure a healthy relationship between AI and students and researchers.
According to the European Network for Academic Integrity (ENAI), using AI for proofreading, spell-checking, and as a thesaurus is admissible. However, it should not be listed as a co-author because, unlike people, it is not liable for any reported findings. As such, depending on how AI is used, it can be a tool that helps society or one that harms it, so it is not inherently good or bad for students, researchers and society in general.

Written by Sam Jarada and Irha Khalid
Introduction and 'Student' arguments by Irha; conclusion and 'Researcher' arguments by Sam
Related articles: Evolution of AI / AI in agriculture and rural farming / Can a human brain be uploaded to a computer?
- The chemistry of an atomic bomb | Scientia News
Julius Robert Oppenheimer
Published: 23/08/23, 16:29 | Last updated: 04/07/25, 12:57

Julius Robert Oppenheimer, often credited with leading the development of the atomic bomb, played a significant role in its creation in the early 1940s. However, it is essential to recognise the collaborative effort of the many scientists, engineers, and researchers who contributed to the project. The history and chemistry of the atomic bomb are fascinating, shedding light on the scientific advancements that made it possible.

The destructive power of an atomic bomb stems from the rapid release of energy resulting from the splitting, or fission, of fissile atomic nuclei in its core. Isotopes such as uranium-235 and plutonium-239 are selected for their ability to undergo fission readily and sustain a chain reaction, leading to the release of an immense amount of energy. The critical mass of fissionable material required for detonation ensures that the neutrons produced during fission have a high probability of striking other nuclei and propagating the chain reaction.

Neutrons emitted during fission travel at high velocities, which makes them less likely to be absorbed by fissile nuclei. In nuclear reactors, where a controlled release of energy is required, a moderator material such as heavy water (deuterium oxide) or graphite slows these neutrons down, increasing the likelihood of their absorption by fissile material and sustaining the chain reaction. A bomb, by contrast, uses no moderator: it relies on fast neutrons within a supercritical mass assembled rapidly enough for the chain reaction to run away.

The sheer magnitude of the energy released by atomic bombs is staggering. For example, one kilogram (2.2 pounds) of uranium-235 undergoing complete fission produces an amount of energy equivalent to that released by roughly 17,000 tons (17 kilotons) of TNT (a back-of-the-envelope check of this figure follows at the end of this article). This tremendous release of energy underscores the immense destructive potential of atomic weapons.

It is essential to note that the development of the atomic bomb represents a confluence of scientific knowledge and technological advancements, with nuclear chemistry serving as a foundational principle. The understanding of nuclear fission, the critical-mass requirement, and the implosion design were key factors in the creation of the atomic bomb. Exploring the chemistry behind this devastating weapon not only provides insights into the destructive capabilities of atomic energy but also emphasises the responsibility that accompanies its use.

In conclusion, while Oppenheimer's contributions to the development of the atomic bomb were significant, it is crucial to acknowledge the collective effort that led to its creation. The chemistry behind atomic bombs, from the selection of fissile isotopes to the behaviour of fission neutrons, plays a pivotal role in harnessing the destructive power of nuclear fission. Understanding the chemistry of atomic weapons highlights remarkable scientific achievements and reinforces the need for responsible use of atomic energy.

Written by Navnidhi Sharma
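A quick back-of-the-envelope check of the 17-kiloton figure quoted above, as a sketch using standard textbook constants; the ~200 MeV per fission is a round number, and counting only the promptly released share of it brings the result down towards the quoted 17 kilotons.

```python
# Energy from complete fission of 1 kg of U-235 (textbook constants).
AVOGADRO = 6.022e23        # atoms per mole
MEV_TO_J = 1.602e-13       # joules per MeV
TON_TNT_J = 4.184e9        # joules per ton of TNT

nuclei = 1000 / 235 * AVOGADRO        # ~2.56e24 nuclei in 1 kg of U-235
energy_j = nuclei * 200 * MEV_TO_J    # ~8.2e13 J at ~200 MeV per fission

print(energy_j / TON_TNT_J)           # ~19,600 tons of TNT (~20 kt)
```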
- Brief neuroanatomy of autism | Scientia News
Differences in brain structure
Published: 26/12/23, 20:38 | Last updated: 09/07/25, 13:29

Autism is a neurodevelopmental condition present in both children and adults worldwide. The core symptoms include difficulties understanding social interaction and communication, and restrictive or repetitive behaviours such as strict routines and stimming. When the term autism was first coined in the 20th century, it was thought of as a disease. However, it is now described as a cognitive difference rather than a disease; that is, the brains of autistic individuals – along with those of people diagnosed with dyslexia, dyspraxia, or attention deficit hyperactivity disorder – are not defective, but simply wired differently.

The exact cause or mechanism of autism has not been determined; the symptoms are thought to be brought about by a combination of genetic and environmental factors. Currently, autism disorders are diagnosed solely by observing behaviours, without measuring the brain directly. However, behaviours may be seen as the observable consequence of brain activity. So, what is it about their brains that might make autistic individuals behave differently from neurotypicals?

Total brain volume
Before sophisticated imaging techniques were in use, psychiatrists had already observed that the head size of autistic infants was often larger than that of other children. Later studies provided more evidence that most children who would go on to be diagnosed had a normal-sized head at birth, but an abnormally large circumference by the time they had turned 2 to 4 years old. Interestingly, the increase in head size has been found to be correlated with the onset of the main symptoms of autism. However, after childhood, growth appears to slow down, and autistic teenagers and adults present brain sizes comparable to those of neurotypicals.

The amygdala
As well as the transient increase in total brain volume, the size and volume of several brain structures in particular seem to differ between individuals with and without autism. Most studies have found that the amygdala, a small area in the centre of the brain that mediates emotions such as fear, appears enlarged in autistic children. The amygdala is a particularly interesting structure to study in autism, as individuals often have difficulty interpreting and regulating emotions and social interactions. Its increased size seems to persist at least until early adolescence. However, studies in adolescents and adults tend to show that the enlargement slows down, and in some cases is even reversed, so that the number of amygdala neurons may be lower than normal in autistic adults.

The cerebellum
Another brain structure that tends to present abnormalities in autism is the cerebellum. Sitting at the back of the head near the spinal cord, it is known to mediate fine motor control and proprioception. Yet recent literature suggests it may also play an important role in other, higher cognitive functions, including language and social cognition. Specifically, it may be involved in our ability to imagine hypothetical scenarios and to abstract information from social interactions. In other words, it may help us recognise similarities and patterns in past social interactions that we can apply to understand a current situation.
This ability is poor in autism; indeed, some investigations have found that the volume of the cerebellum may be smaller in autistic individuals, although research is not conclusive. Nevertheless, most research agrees that the number of Purkinje cells is markedly lower in people with autism. Purkinje cells are a type of neuron found exclusively in the cerebellum, able to integrate large amounts of input information into a coherent signal. They are also the only source of output from the cerebellum: they connect the structure with other parts of the brain, such as the cortex and subcortical structures. These connections eventually bring about specific functions, including motor control and cognition. Therefore, a low number of Purkinje cells may cause underconnectivity between the cerebellum and other areas, which might be the reason functions such as social cognition are impaired in autism.

Written by Julia Ruiz Rua
Related article: Epilepsy
- The future of semiconductor manufacturing | Scientia News
Through photonic integration
Published: 22/12/23, 15:11 | Last updated: 11/07/25, 10:03

Recently, researchers from the University of Sydney developed a compact photonic semiconductor chip using heterogeneous material integration methods, which integrates an active electro-optic (E-O) modulator and photodetectors on a single chip. The chip functions as a photonic integrated circuit (PIC) offering 15 gigahertz of tunable frequencies with a spectral resolution of only 37 MHz, and it can expand the radio-frequency (RF) bandwidth to precisely control the information flowing within the chip with the help of advanced photonic filter controls. The applications of this technology extend to various fields:
• Advanced radar: the chip's expanded radio-frequency bandwidth could significantly enhance the precision and capabilities of radar systems.
• Satellite systems: improved radio-frequency performance would contribute to more efficient communication and data transmission in satellite systems.
• Wireless networks: the chip has the potential to advance the speed and efficiency of wireless communication networks.
• 6G and 7G telecommunications: this technology may play a crucial role in the development of future generations of telecommunications networks.

Microwave photonics (MWP) is a field that combines microwave and optical technologies to provide enhanced functionalities and capabilities. It involves the generation, processing, and distribution of microwave signals using photonic techniques. An MWP filter is a component used in microwave photonics systems to selectively filter or manipulate certain microwave frequencies using photonic methods (see Figure 1). These filters leverage the unique properties of light and its interaction with different materials to achieve filtering effects in the microwave domain. They can be crucial in applications where precise control and manipulation of microwave signals are required. MWP filters can take various forms, including fibre-based filters, photonic crystal filters and integrated optical filters. They are designed to perform functions such as wavelength filtering, frequency selection and signal conditioning in the microwave frequency range, and they play a key role in improving the performance and efficiency of microwave photonics systems.

The MWP filter operates through a sophisticated integration of optical and microwave technologies, as depicted in the diagram. Beginning with a laser as the optical carrier, the photonic signal is directed to a modulator, where it interacts with an input radio-frequency (RF) signal. The modulator dynamically alters the optical carrier's intensity, phase or frequency based on the RF input. Subsequently, the modulated signal undergoes processing to shape its spectral characteristics in a manner dictated by a dedicated processor; this shaping is pivotal for achieving the desired filtering effect. The processed optical signal is then fed into a photodiode for conversion back into an electrical signal, based on the variations induced by the modulator on the optical carrier. The final output, an electrical signal, reflects the filtered and manipulated RF signal, demonstrating the MWP filter's ability to leverage both the optical and microwave domains for precise, high-performance signal processing (a conceptual sketch of this chain follows below).
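The laser-to-photodiode chain described above can be illustrated with an idealised numerical sketch. This is conceptual small-signal maths showing where the RF tone appears in the detected signal, not a model of the Sydney chip or of any particular filter design.

```python
# Conceptual sketch of the laser -> modulator -> photodiode chain (idealised
# small-signal maths; not a model of the Sydney chip or any real device).
import numpy as np

fs = 100e9                        # simulation sample rate, 100 GSa/s
t = np.arange(0, 2e-7, 1 / fs)    # 200 ns observation window
f_rf = 5e9                        # 5 GHz input RF tone

# Intensity modulation: the RF tone rides on the optical power envelope.
rf_in = np.sin(2 * np.pi * f_rf * t)
optical_power = 1.0 + 0.5 * rf_in          # modulation depth 0.5

# Photodiode: photocurrent is proportional to received optical power,
# so the RF tone reappears in the electrical domain after detection.
photocurrent = optical_power

# A programmable optical filter inserted before the photodiode would pass or
# attenuate this spectral line -- that selective shaping is the MWP filter.
spectrum = np.abs(np.fft.rfft(photocurrent - photocurrent.mean()))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print(freqs[np.argmax(spectrum)])          # recovered tone at ~5e9 Hz
```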
Extensive research has been conducted in the field of MWP chips, as evidenced by a thorough examination in Table 1, which compares recent studies based on chip material type, filter type, on-chip component integration, and working bandwidth. Notably, previous studies demonstrated noteworthy advancements despite their dependence on external components. What distinguishes the new chip is its integration of all components into a single chip, a significant breakthrough that sets it apart from previous attempts in the field.

Here the term "on-chip E-O" refers to the integration of electro-optical components directly onto a semiconductor chip or substrate. This integration facilitates the interaction between electrical signals (electronic) and optical signals (light) within the same chip. The purpose is to enable the manipulation, modulation or processing of optical signals using electrical signals, typically in the form of voltage or current control. Key components of on-chip electro-optical systems include:
1. Modulators, which alter the characteristics of an optical signal in response to electrical input; crucial for encoding information onto optical signals.
2. Photonic detectors, which convert optical signals back into electrical signals, extracting information for electronic processing.
3. Waveguides, which guide and manipulate the propagation of light waves within the chip, routing optical signals to various components.
4. Switches, which route or redirect optical signals based on electrical control signals.
This integration enhances compactness, energy efficiency, and performance in applications such as communication systems and optical signal processing.

"FSR-free operation" refers to the Free Spectral Range (FSR), a characteristic of optical filters and resonators: the separation in frequency between two consecutive resonant frequencies or transmission peaks (see the formula at the end of this entry). The column "FSR-free operation" indicates whether the optical processing platform operates without relying on a specific or fixed free spectral range, meaning its operation is not bound to a particular FSR. This is advantageous in scenarios where flexibility in the spectral range, or the ability to operate over a range of frequencies without being constrained by a specific FSR, is desired.

"On-chip MWP link improvement" refers to enhancements made directly on a semiconductor chip to optimise the performance of MWP links. These improvements aim to enhance the integration and efficiency of communication or signal-processing links that involve both microwave and optical signals. The term implies advancements in key aspects such as data-transfer rates, signal fidelity and overall link performance. On-chip integration brings advantages such as compactness and reduced power consumption.

The manufacturing of the photonic integrated circuit (PIC) involves partnering with semiconductor foundries overseas to produce the foundational chip wafer. This new chip technology will play a crucial role in advancing independent manufacturing capabilities: embracing this type of chip architecture enables a nation to nurture the growth of an autonomous chip-manufacturing sector by mitigating reliance on international foundries. The extensive chip delays witnessed during the 2020 COVID pandemic underscored, worldwide, the significance of the chip market and its potential impact on electronics manufacturing.
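For reference, the free spectral range discussed above has a simple closed form for a resonant filter. This is a standard textbook relation, not taken from the article, and the example numbers are chosen purely for illustration.

```latex
% FSR of a resonator: c = vacuum speed of light, n_g = group index,
% L = round-trip length. Illustrative values: n_g = 4, L = 2 mm.
\[
  \mathrm{FSR} \;=\; \frac{c}{n_g L}
  \;=\; \frac{3\times 10^{8}\,\mathrm{m/s}}{4 \times 2\times 10^{-3}\,\mathrm{m}}
  \;\approx\; 37.5\ \mathrm{GHz}.
\]
```

An "FSR-free" platform avoids this periodic repetition of passbands altogether, so the usable band is not tiled at multiples of c/(n_g L).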
Written by Arun Sreeraj
Related articles: Advancements in semiconductor technology / The search for a room-temperature superconductor / Silicon hydrogel lenses / Mobile networks










