
Search Index

348 results found

  • Silicon hydrogel contact lenses | Scientia News

An engineering case study. Last updated: 17/07/25; published: 29/04/24.

Introduction

Contact lenses have a rich history dating back over 500 years, to 1508, when Leonardo da Vinci first conceived the idea. It was not until the late 19th century that contact lenses as we know them were realised: in 1887, F. E. Muller was credited with making the first eye covering that could improve vision without causing irritation. This eventually led to the first generation of hydrogel-based lenses, as the development of the polymer hydroxyethyl methacrylate (HEMA) allowed Rishi Agarwal to conceive the idea of disposable soft contact lenses. Silicone hydrogel contact lenses dominate the contemporary market. Their superior properties have extended wear options and transformed the landscape of vision correction. These small but complex devices continue to evolve, benefiting wearers worldwide; the most recent generation of silicone hydrogel lenses aims to phase out existing products.

Benefits of silicone hydrogel lenses

This material offers many benefits in this application. Its higher oxygen permeability improves comfort through increased oxygen transmissibility, and the lens's moisture retention allows longer wear times without compromising comfort or eye health. Silicone hydrogel lenses therefore aim to eliminate the drawbacks of traditional hydrogel lenses: low oxygen permeability, lower lens flexibility, and dehydration that causes discomfort and long-term problems. This invention has greatly improved convenience and hygiene for users.

The structure of silicone hydrogel lenses

Lenses are fabricated from a blend of two materials: silicone and hydrogel. The silicone component provides high oxygen permeability, while the hydrogel component contributes comfort and flexibility. Silicone is a synthetic polymer and is inherently oxygen-permeable; it allows more oxygen to reach the cornea, promoting eye health and avoiding hypoxia-related symptoms. Its polymer chains form a network that creates pathways for oxygen diffusion. Hydrogel materials, by contrast, are hydrophilic polymers that retain water, keeping the lens moist and comfortable while contributing to its flexibility and wettability. The two materials are combined using cross-linking techniques, which stabilise the matrix to make the most of both sets of properties and prevent dissolution (see Figure 1). Two forms of cross-linking enable the production of silicone hydrogel lenses: chemical and physical. Chemical cross-linking involves covalent bonds between polymer chains, enhancing the lens's mechanical properties and stability. Physical cross-links include ionic interactions, hydrogen bonding, and crystallisation. Both techniques contribute to the lens's structure and properties and can be enhanced with polymer modifications; silicone hydrogel macromolecules have been modified to improve miscibility with hydrophilic components, clinical performance, and wettability.
The new generation of silicone hydrogel contact lenses

Properties

Studies show that wearers of silicone hydrogel lenses report higher comfort levels throughout the day, and at the end of the day, than wearers of conventional hydrogel lenses. This is attributed to the fact that they allow around five times more oxygen to reach the cornea, which matters because a reduced oxygen supply can lead to dryness, redness, blurred vision, discomfort, and even corneal swelling. The most recent generation of lenses has further improved material properties. The first is enhanced durability and wear resistance, attributed to their complex material composition, which helps the lenses maintain their shape and makes them suitable for various lens designs. They also exhibit a balance between hydrophilic and hydrophobic properties, which has traditionally caused problems with surface wettability; this generation of products overcomes the issue with surface modifications that improve wettability and therefore comfort. In addition, silicone hydrogel materials attract relatively fewer protein deposits, and reduced protein build-up leads to better comfort and less frequent lens replacement.

Manufacturing

Most current silicone hydrogel lenses are produced using one of two key processes: cast moulding or lathe cutting. In lathe cutting, the material is polymerised into solid rods, which are cut into buttons and then shaped into lenses on a computerised lathe. Surface modifications complement these processes; for example, plasma surface treatments enhance biocompatibility and improve surface wettability compared with earlier silicone elastomer lenses.

Future innovations

There are various possible future directions for this material and application. Researchers are exploring ways to create customised and personalised lenses tailored to an individual's unique eye shape, prescription, and lifestyle, for example by using 3D printing and digital scanning to allow precise fitting. Although this is feasible, there are challenges in achieving scalability and cost-effectiveness while ensuring quality. Another possible expansion is smart contact lenses, which aim to go beyond improving the user's vision. For example, smart lenses are currently being developed for glucose and intraocular pressure monitoring, to benefit patients with diabetes and glaucoma respectively. The challenges associated with this idea are data transfer, oxygen permeability and therefore comfort (see Figure 2).

Conclusion

In conclusion, silicone hydrogel lenses represent a remarkable fusion of materials science and engineering. Their positive impact on eye health, comfort, and vision correction continues to evolve. As research progresses, we can look forward to even more innovative solutions benefiting visually impaired individuals worldwide.

Written by Roshan Gill

Related articles: Semi-conductor manufacturing / Room-temperature superconductor / Titan Submersible / Nanogels

REFERENCES

Optical Society of India. Journal of Optics, Volume 53, Issue 1. Springer, February 2024.
Lamb J, Bowden T. The history of contact lenses. Contact Lenses. 2019 Jan 1:2-17.
Ţălu Ş, Ţălu M, Giovanzana S, Shah RD. A brief history of contact lenses. Human and Veterinary Medicine. 2011 Jun 1;3(1):33-7.
Brennan NA. Beyond flux: total corneal oxygen consumption as an index of corneal oxygenation during contact lens wear. Optometry and Vision Science. 2005 Jun 1;82(6):467-72.
Dumbleton K, Woods C, Jones L, Fonn D, Sarwer DB. Patient and practitioner compliance with silicone hydrogel and daily disposable lens replacement in the United States. Eye & Contact Lens. 2009 Jul 1;35(4):164-71.
Nichols JJ, Sinnott LT. Tear film, contact lens, and patient-related factors associated with contact lens-related dry eye. Investigative Ophthalmology & Visual Science. 2006 Apr 1;47(4):1319-28.
Jacinto S. Rubido. Ocular response to silicone-hydrogel contact lenses. 2004.
Musgrave CS, Fang F. Contact lens materials: a materials science perspective. Materials. 2019 Jan 14;12(2):261.
Shaker LM, Al-Amiery A, Takriff MS, Wan Isahak WN, Mahdi AS, Al-Azzawi WK. The future of vision: a review of electronic contact lenses technology. ACS Photonics. 2023 Jun 12;10(6):1671-86.
Kim J, Cha E, Park JU. Recent advances in smart contact lenses. Advanced Materials Technologies. 2020 Jan;5(1):1900728.
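The comparative oxygen performance discussed above is usually quantified as transmissibility, Dk/t: the material's oxygen permeability (Dk) divided by the lens's centre thickness (t). The short Python sketch below illustrates the comparison; the Dk values and thickness are assumed, order-of-magnitude figures chosen for illustration, not manufacturer specifications.

```python
# Illustrative comparison of oxygen transmissibility (Dk/t) for two lens materials.
# The Dk values and centre thickness are assumed figures, not manufacturer data;
# only the ratio between the two materials is the point of the example.

def transmissibility(dk: float, thickness: float) -> float:
    """Oxygen transmissibility is permeability (Dk) divided by centre thickness (t)."""
    return dk / thickness

t = 0.08  # assumed centre thickness (mm), taken to be the same for both lenses

dk_conventional_hydrogel = 25.0  # assumed Dk for a HEMA-based hydrogel
dk_silicone_hydrogel = 125.0     # assumed Dk for a silicone hydrogel

ratio = (transmissibility(dk_silicone_hydrogel, t)
         / transmissibility(dk_conventional_hydrogel, t))
print(f"Silicone hydrogel transmits ~{ratio:.0f}x more oxygen at equal thickness")
# -> ~5x, consistent with the rough factor quoted in the article
```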

  • Which fuel will be used for the colonisation of Mars? | Scientia News

Speculating on the prospect of inhabiting Mars. Last updated: 01/10/25; published: 30/04/23.

The creation of a "Planet B" is an idea that has been circulating for decades; however, we are yet to find a planet similar enough to our Earth to be viable to live on without major modifications. Mars is the most widely discussed planet in the media and is commonly thought to be the planet we know the most about. So, could it be habitable? If we were to move to Mars, how would society thrive?

The dangers of living on Mars

As a neighbour to Earth, Mars might be assumed to be habitable at first glance. Unfortunately, it is quite the opposite. On Earth, humans have access to air with an oxygen content of 21%, whereas the Martian atmosphere contains only 0.13% oxygen. The difference in the air alone suggests an uninhabitable planet. Another essential factor of human life is food. There have indeed been attempts to grow crops in Martian soil, including tomatoes, with great success. Unfortunately, the soil is toxic, so ingesting these crops could cause significant side effects in the long term. It could be possible to grow crops in a laboratory that models Earth's soil and atmospheric conditions, but this would be difficult. Air and food are two essential resources that would not be readily available in a move to Mars: food could be grown in laboratory-style greenhouses and the air could be processed, but it is important to note that these solutions are fairly novel.

The Mars Oxygen ISRU Experiment

The Mars Oxygen ISRU Experiment (MOXIE) was a component of the NASA Perseverance rover sent to Mars in 2020. Solid oxide electrolysis converts carbon dioxide, readily available in the atmosphere of Mars, into carbon monoxide and oxygen. MOXIE supports the idea that, in a move to Mars, oxygen would have to be 'made' rather than being readily available. The experiment was powered by nuclear energy, and it showed that oxygen could be produced at all times of day and in a range of weather conditions. It is possible to obtain oxygen on Mars, but a great deal of energy is required to do so.

What kind of energy would be better?

For producing oxygen especially, the energy source on Mars would need to be extremely reliable in order to keep the population safe. Fossil fuels are reliable, but it is increasingly obvious that the reason a move to Mars might become necessary is our lack of care for the Earth, so polluting resources are to be especially avoided. A combination of resources is likely to be used: wind power during the massive dust storms that regularly sweep Mars, and solar power in clear weather, when dust has not settled over the surface. One resource that would be essential is nuclear power. Public perception of it is mixed, yet it is certainly reliable, and that is the main requirement. After all, a human can only survive for around five minutes without oxygen; time lost to energy failures would be deadly.

Written by Megan Martin

Related articles: Exploring Mercury / Artemis: the lunar south pole base / Total eclipses
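To put MOXIE-style oxygen production in concrete terms, the electrolysis reaction is 2 CO2 → 2 CO + O2, so each gram of oxygen requires roughly 2.75 g of atmospheric CO2 before any conversion losses. The short Python sketch below works through that mass balance; the hourly production target is an assumed illustration value, not MOXIE's actual specification.

```python
# A minimal sketch of the mass balance behind MOXIE-style solid oxide electrolysis,
# which splits Martian CO2 into CO and O2 (2 CO2 -> 2 CO + O2).
# The production target below is an assumed illustration value, not MOXIE's spec.

M_CO2 = 44.01  # molar mass of CO2 in g/mol
M_O2 = 32.00   # molar mass of O2 in g/mol

def co2_needed_for_o2(o2_grams: float) -> float:
    """Grams of CO2 consumed to produce a given mass of O2 (2 mol CO2 per mol O2)."""
    mol_o2 = o2_grams / M_O2
    mol_co2 = 2 * mol_o2
    return mol_co2 * M_CO2

target_o2_per_hour = 10.0  # assumed oxygen production target, grams per hour
co2_per_hour = co2_needed_for_o2(target_o2_per_hour)
print(f"~{co2_per_hour:.1f} g of CO2 must be processed per hour")  # ~27.5 g
```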

  • A concise introduction to Markov chain models | Scientia News

How do they work? Last updated: 20/03/25; published: 09/03/24.

Introduction

A Markov chain is a stochastic process that models a system that transitions from one state to another, where the probability of the next state depends only on the current state and not on the previous history. For example, if X₀ is the current state of the system, the distribution of the next state X₁ depends only on X₀: P(X₁ | X₀, previous history) = P(X₁ | X₀).

It may be hard to think of real-life processes that follow this behaviour, because there is a tendency to believe that all events happen in a sequence because of one another. Here are some examples:

Games, e.g. chess - if your king is in a certain spot on a chess board, there is a limited set of transition states (at most the eight adjacent squares) that can be reached, all of which depend on the piece's current position. The parameters of the Markov model vary with your position on the board, which is the essence of the Markov process.

Genetics - the genetic code of an organism can be modelled as a Markov chain, where each nucleotide (A, C, G, or T) is a state, and the probability of the next nucleotide depends only on the current one.

Text generation - consider the current state to be the most recent word. The transition states are all possible words that could follow it. Next-word prediction algorithms can use a first-order Markov process to predict the next word in a sentence based on the most recent word.

The text generation example is particularly interesting, because considering only the previous word when predicting the next one would lead to a very random sentence. That is where we can change things up using various mathematical techniques.

k-Order Markov Chains (adding more steps)

In a first-order Markov chain, we only consider the immediately preceding state to predict the next state. In k-order Markov chains, we broaden our perspective. Here's how it works:

Definition: a k-order Markov chain considers the previous k states (or steps) when predicting the next state. It's like looking further back in time to inform our predictions.

Example: suppose we're modelling the weather. In a first-order Markov chain, we'd only look at today's weather to predict tomorrow's. In a second-order Markov chain, we'd consider both today's and yesterday's weather. Similarly, a third-order Markov chain would involve three days of historical data.

By incorporating more context, k-order chains can capture longer-term dependencies and patterns. As k increases, the model becomes more complex and we need more data to estimate transition probabilities accurately. See the diagram below for a definition of higher-order Markov chains.

Markov chains for Natural Language Processing

A Markov chain can generate text by using a dictionary of words as the states and the frequency of word pairs in a corpus of text as the transition probabilities. Given an input word, such as "How", the Markov chain can generate the next word, such as "to", by sampling from the probability distribution of words that follow "How" in the corpus. Then the Markov chain can generate the next word, such as "use", by sampling from the probability distribution of words that follow "to" in the corpus.
This process can be repeated until a desired length or the end of a sentence is reached. That is a basic example; for more complex NLP tasks we can employ more sophisticated Markov models, such as k-order, variable-order, n-gram or even hidden Markov models.

Limitations of Markov models

Markov models will struggle with tasks such as text generation because they are too simplistic to produce text that is intelligent, or sometimes even coherent. Here are some reasons why:

Fixed Transition Probabilities: Markov models assume that transition probabilities are constant throughout. In reality, language is dynamic and context can change rapidly; fixed probabilities may not capture these nuances effectively.

Local Dependencies: Markov chains have local dependencies, meaning they only consider a limited context (e.g. the previous word). They don't capture long-range dependencies or global context.

Limited Context Window: Markov models have a fixed context window (e.g. first-order, second-order, etc.). If the context extends beyond this window, the model won't capture it.

Sparse Data: Markov models rely on observed data (transition frequencies) from the training corpus. If certain word combinations are rare or absent, the model struggles to estimate accurate probabilities.

Lack of Learning: Markov models don't learn through gradients or backpropagation. They're based solely on observed statistics.

Written by Temi Abbass

Related articles: Latent space transformations / Evolution of AI

FURTHER READING

1. "Improving the Markov Chain Approach for Generating Text Used for…": this work focuses on text generation using Markov chains. It highlights the chance-based transition process and the representation of temporal patterns determined by probability over sample observations.
2. "Synthetic Text Generation for Sentiment Analysis": this paper discusses text generation using latent Dirichlet allocation (LDA) and a text generator based on Markov chain models. It explores approaches for generating synthetic text for sentiment analysis.
3. "A Systematic Review of Hidden Markov Models and Their Applications": this review provides insights into HMMs, a statistical model designed using a Markov process with hidden states. It discusses their applications in fields including robotics, finance, social science, and ecological time-series data analysis.
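The word-by-word sampling procedure described above can be written in a few lines. Below is a minimal, self-contained Python sketch of a first-order Markov text generator; the tiny corpus and the function and variable names are illustrative choices, not taken from the article.

```python
# A minimal first-order Markov chain text generator: states are words, and the
# next word is sampled from the words observed to follow the current word in a corpus.
import random
from collections import defaultdict

corpus = "how to use a markov chain to generate text from a corpus of text".split()

# Build the transition table: word -> list of observed successors.
# Repeated successors in the list provide the frequency weighting described above.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, max_words: int = 8, seed: int = 0) -> str:
    """Generate text by repeatedly sampling a successor of the current word."""
    random.seed(seed)
    word, output = start, [start]
    for _ in range(max_words):
        followers = transitions.get(word)
        if not followers:  # dead end: no observed successor for this word
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("how"))  # prints a short sentence starting with "how"
```

Extending this to a k-order chain amounts to keying the transition table on tuples of the last k words rather than a single word, at the cost of needing far more data to fill the table.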

  • Iron deficiency anaemia | Scientia News

A type of anaemia. Last updated: 10/07/25; published: 27/06/23.

This article is no. 2 of the anaemia series. Next article: anaemia of chronic disease. Previous article: anaemia.

Aetiology

Iron deficiency anaemia (IDA) is most frequent in children, due to rapid growth (adolescence) and poor diets (infants), and in peri- and post-menopausal women, due to rapid growth (pregnancy) and underlying conditions. Anaemia typically presents, in around 50% of cases, with headache, lethargy and pallor, depending on severity. Less common presentations include organomegaly and pica, which occurs in patients with zinc and iron deficiency and is defined as the eating of items with little to no nutritional value.

Pathophysiology

Iron is primarily sourced through diet, as haem iron (Fe2+) and non-haem iron (Fe3+). Fe2+ comes from meat, fish, and other animal-based products and can be absorbed directly into the enterocyte via haem carrier protein 1 (HCP1). Fe3+ is less easily absorbed and is mostly found in plant-based products; it must be reduced and transported through the duodenum by the enzyme duodenal cytochrome B (DcytB) and the divalent metal transporter 1 (DMT1), respectively.

Diagnosis

As with any anaemia, the first test to run is a full blood count. In suspected anaemia, the haemoglobin (Hb) level would be lower than 130 g/L in males and 120 g/L in females. The mean cell volume (MCV) is a starting point for pinpointing the type of anaemia; for microcytic anaemias you would expect an MCV < 80 fL. Iron studies are best for diagnosing anaemias, and in IDA you would expect most of the results to be low. A patient with IDA has little to no available iron, so the body halts its mechanisms for storing iron. As ferritin is directly related to storage, a low ferritin can be a lone diagnostic of IDA. Total iron-binding capacity (TIBC) would be expected to be raised: transferrin transports iron throughout the body, and the higher it is, the more iron it is capable of binding. Elliptocytes are elongated red blood cells, often described as pencil-like in structure, and are regularly seen in IDA and other anaemias. Typically, one would also see hypochromic red cells, as they contain less Hb than normal cells, and Hb is what gives red cells their pigment. It is not uncommon to see other changes such as target cells, named for their bullseye appearance; target cells are frequently seen in cases with blood loss.

Summary

IDA is the most frequent anaemia, affecting patients of all ages, and usually presents with lethargy and headaches. Dietary iron from animal derivatives is the most efficient source of iron uptake. Diagnosis of IDA is through iron studies and red cell morphology investigations alongside clinical presentation, to rule out other causes.

Written by Lauren Kelly
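The thresholds quoted above (Hb below 130 g/L in males or 120 g/L in females, MCV below 80 fL, low ferritin) can be read as a simple screening flow. The Python sketch below is purely an illustration of that flow under those assumed cut-offs, not a clinical decision tool.

```python
# A minimal sketch of the screening logic described above, using the article's
# thresholds (Hb < 130/120 g/L, MCV < 80 fL, low ferritin). Illustration only,
# not a clinical decision tool.

def screen_for_ida(hb_g_per_l: float, mcv_fl: float, ferritin_low: bool, male: bool) -> str:
    hb_cutoff = 130 if male else 120
    if hb_g_per_l >= hb_cutoff:
        return "Hb within range: anaemia unlikely on this screen"
    if mcv_fl >= 80:
        return "Anaemic but not microcytic: consider other types of anaemia"
    if ferritin_low:
        return "Microcytic anaemia with low ferritin: consistent with IDA"
    return "Microcytic anaemia with normal ferritin: consider other causes"

print(screen_for_ida(hb_g_per_l=105, mcv_fl=72, ferritin_low=True, male=False))
```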

  • Orcinus orca | Scientia News

(LINNAEUS, 1758) Killer Whale. Last updated: 10/07/25; published: 06/06/23.

DEFINITION & DIAGNOSIS

Orcinus orca belongs to the family Delphinidae (oceanic dolphins) and is the largest member of the dolphin family. The largest males can grow up to 10 metres, while females reach up to 8.5 metres. They have proportionately bigger dorsal fins than other large delphinids, ranging between 1/10th and 1/5th of their body length. Orcinus orca can be easily identified by its black-and-white colouration. Its skull is larger and holds the largest brain of all the dolphins except Pseudorca crassidens (the false killer whale).

TAXONOMIC HISTORY

Species: Orcinus orca. Genus Orcinus, Family Delphinidae, Class Mammalia, Phylum Chordata, Kingdom Animalia. Synonyms: Orca ater, Orca capensis, Orcinus glacialis, Delphius gladiator, Orcinus nannus, Orca recipinna, Delphinus orca.

FEATURES

Killer whales are predominantly black mammals with a white midsection and a blunt head with no distinct beak. Females usually grow to 7 m and males to 8.2 m. They also have large flippers which, in adult males, can measure up to 20% of body length, but in females and young males reach only 11-13% of body length. The dorsal fin can reach up to 1.8 m in males but only 0.9 m in females; this difference can be useful in determining the sex of O. orca. The white midsection runs across the whole lower jaw but narrows between the flippers. This white area can appear more yellowish in certain oceans, mainly in the Antarctic, and more markedly in adolescents.

ANATOMY AND PHYSIOLOGY

A killer whale's skin is very smooth and its outer layer continuously sheds. Behind the dorsal fin and back there is a grey-white patch known as a 'saddle patch' (Figure 1). O. orca has a dorsal fin and paddle-shaped pectoral fins which help control directional movement. The skeleton is robust and long, comprising a skull, backbone and the bone structure of the pectoral flippers. In general, O. orca's facial anatomy differs slightly from the typical delphinid structure of an asymmetrical nasal sac, with some structures smaller than in several other species. The teeth are conical, curved inward and backward (Figure 2). The temporal fossa is very large, indicating a strong temporal muscle that helps close the jaw. Meuth found that the amino acid sequence of O. orca myoglobin was more similar to that of Globicephala than to those of other delphinids and phocoenids. The reniculi of the killer whale's kidney were found to be in connected groups of four. Differences from Hyperoodon have been observed in the venous return of the kidney: O. orca has no peripheral venous complex, whilst Hyperoodon does.

REPRODUCTIVE BEHAVIOUR

The breeding cycle ranges over many months and varies depending on where the animals are found. For example, in the northeast Atlantic, mating takes place between late autumn and midwinter. The approximate annual birth rate is between 4 and 5%, and annual pregnancy rates are roughly 13.7 to 39.2%. The growth spurt of an adolescent male killer whale occurs within a range of 5.5 to 6.1 m, which is also the length at which they reach sexual maturity.
This was confirmed by comparing two different adolescent males. The individual measuring 656 cm, with testes masses of 3,632 g (right) and 2,270 g (left), was not sexually mature, while the individual measuring 724 cm, with testes of 11,400 g (right) and 12,200 g (left), was sexually mature. A further examination of 57 mature males from the Antarctic showed that the average testis width is 22 cm and the length 55 cm; the average testis mass was 10,000 g, with a maximum of 23,100 g. Prior to this growth spurt, the growth curves of males are similar to those of females. The length at which a female killer whale becomes sexually mature ranges between 4.6 and 5.4 m, depending on whether the individual is found in the northeastern Atlantic (around 4.6 m) or the Antarctic (around 5.4 m). The ovaries range in size from 10-12 cm by 5-7 cm. The maximum size of foetuses varies geographically: the largest found in the North Pacific was 274 cm, in the North Atlantic 255 cm, and in the Antarctic 250 cm. The smallest foetuses recorded are 228 cm for the North Pacific, 183 cm for the North Atlantic, and 227 cm for the Southern Hemisphere. Calves are usually dependent for at least two years; weaning takes place when a calf reaches 4.3 m in length, with lactation lasting around 12 months. The sex ratio at birth appears on average to be 1:1, although the ratio of males to females has been reported as 1.34:1 for the Marion Islands and 0.83:1 for the northeast Pacific.

ECOLOGY

O. orca is a carnivore and an opportunistic feeder, so its diet changes seasonally and by region. It mainly consumes fish but can also prey on seabirds and other marine animals such as minke whales, squid and pinnipeds. The estimated daily food intake is thought to be around 4% of body weight. Predation can also determine migration; in the Atlantic, for instance, it depends on the migration of herring. Although O. orca preys on many different species, its only predator is humans. Killer whales are mainly hunted for oil and meat or killed as competition by fishermen. In Japan and Norway the fresh meat is eaten, and old meat is usually used for fertiliser or bait. To estimate the age of O. orca, the teeth can be sectioned and the dentine or cementum layers counted, although this can be difficult owing to the presence of accessory layers. The estimated lifespan of killer whales is thought to be 25 years but could be as long as 35 to 40 years. Killer whales are not subject to many diseases, but the main one they face is infection of the pulp cavity due to the wearing down of teeth; if the infection penetrates the pulp cavity it can cause a jaw abscess. In captive killer whales the main causes of death are pneumonia, bacterial infections, systemic mycosis and mediastinal abscess.

Written by Jeevana Thavarajah

Related article: Why blue whales don't get cancer

  • Alzheimer's disease | Scientia News

The mechanisms of the disease. Last updated: 09/07/25; published: 21/07/23.

Introduction to Alzheimer's disease

Alzheimer's disease is a neurodegenerative disease that results in cognitive decline and dementia, with increasing age and environmental and genetic factors contributing to its onset. Scientists believe this is the result of protein biomarkers that build up in the brain and accumulate within neurones. As of 2020, 55 million people live with dementia, with Alzheimer's being a leading cause. It is therefore crucial that we develop efficacious treatments with minimal adverse effects. A new drug called lecanemab may be the key to a new era of Alzheimer's treatment.

The disease is most common in people over 65, affecting 1 in 14 in the UK, so there is a huge emphasis on defining the disorder and developing drug treatments. The condition causes difficulty with memory, planning and decision making, and can result in co-morbidities such as depression or personality change. This short article explains the pathology of the disorder and the genetic predispositions for its onset. It also explores future avenues for treatment, such as the drug lecanemab, which may provide "a new era for Alzheimer's disease".

Pathology and molecular aspects

The neurodegeneration seen in Alzheimer's has, so far, been associated with protein depositions in the brain, such as the amyloid precursor protein (APP) and tau tangles. This has been deduced from PET scans and post-mortem studies. APP, located on chromosome 21, is responsible for synapse formation and signalling. It is cleaved to β-amyloid peptides by enzymes called secretases, but overexpression of both these factors can be neurotoxic (Figure 1). The result is accumulation of protein aggregates called β-amyloid plaques in neurons, impairing their survival. This deposition starts in the temporo-basal and fronto-medial areas of the brain and spreads to the neocortex and sensory-motor cortex. Many pathways are therefore affected, resulting in the characteristic cognitive decline. Tau proteins support nerve cells structurally and can be phosphorylated at various sites, changing the interactions they have with surrounding cellular components. Hyperphosphorylation of these proteins results in tau pathology in the form of tau oligomers (short peptides) that are toxic to neurons; these enter the limbic regions and neocortex. It is not clearly defined which protein aggregate precedes the other; however, the amyloid cascade hypothesis suggests that β-amyloid plaque pathology comes first. It is speculated that β-amyloid accumulation activates the brain's immune response, the microglial cells, which then promote the hyperphosphorylation of tau. Sometimes there is a large release of pro-inflammatory cytokines, known as a cytokine storm, which promotes neuroinflammation. This is common among older individuals, due to a "worn-out" immune system, and may in part explain Alzheimer's disease.

Genetic component to Alzheimer's disease

There is strong evidence from whole-genome sequencing (WGS) studies that there is a genetic element to the disease. One gene is the apolipoprotein E (APOE) gene, responsible for β-amyloid clearance and metabolism. Some alleles of this gene are associated with faulty clearance, leading to the characteristic β-amyloid build-up.
In the body, proteins are made continually according to need, and dysregulation of this recycling process can be catastrophic for the cells involved. The PSEN1 gene codes for the presenilin 1 protein, part of a secretase enzyme complex. As mentioned, the secretase enzyme is responsible for the cleavage of APP, the precursor of β-amyloid. Variants of this gene have been associated with early-onset Alzheimer's disease, because APP processing is altered to produce a longer form of the β-amyloid peptide. The genetic aspects of Alzheimer's disease are not limited to these genes, and in actuality one gene can carry an assortment of mutations that result in a faulty protein. Understanding the genetic aspects may provide an avenue for gene therapy in the future.

Treatment

Understanding the point at which the "system goes wrong" is crucial for directing treatment. For example, we may use secretase inhibitors to reduce the rate of plaque formation; an example is the β-secretase BACE1 inhibitor. This drug type needs to be more selective for its target, as it has been found to produce unwanted adverse effects. A more selective approach may be to target the patient's immune system with monoclonal antibodies (mAbs). This means designing an antibody that recognises a specific component, such as the β-amyloid plaque, so that it binds and then encourages immune cells to target the plaque (Figure 3). An example is the aducanumab mAb, which targets β-amyloid in the form of fibrils and oligomers. The EMERGE study demonstrated a decrease in amyloid by the end of the 78-week study. In June 2021, aducanumab received FDA approval, but this is controversial, as there are claims it brings no clinical benefit to the patient.

The future of Alzheimer's disease

Of note, drug development and approval is a slow process, and there must be a funding source in order to carry out plans. Thus, particularly in Alzheimer's, it is important to educate the public and funding bodies so that they supply financial support to the process. Even with many hits (potential drug candidates), these often fail at phase III clinical trials. Despite this, another mAb, lecanemab, was approved by the FDA in 2023, owing to its ability to slow cognitive decline by 27% in early Alzheimer's disease. The Clarity AD study of lecanemab found that the drug benefited memory and thinking, and also allowed better performance of daily tasks. The drug was administered on a double-blind basis, meaning a patient may have received either the drug or the placebo. This study offers hope for those living with the disease. Drugs targeting the tau tangles have, so far, not been successful in clinical trials; however, the future of Alzheimer's treatment may lie in combination therapy directed at both tau protein and β-amyloid. Washington University's neurology department has launched a trial known as Tau NextGen, in which participants will receive both lecanemab and a tau-reducing antibody.

Conclusion

This article provides a summary of what we know about Alzheimer's disease and the potential treatments of the future. Overall, the future of Alzheimer's treatment lies in combination therapy targeting the known biomarkers of the disease.

Written by Holly Kitley

Related articles: CRISPR-Cas9 as Alzheimer's treatment / Hallmarks of Alzheimer's / Sleep and memory loss

  • Artificial intelligence: the good, the bad, and the future | Scientia News

A Scientia News Biology collaboration. Last updated: 20/03/25; published: 13/12/23.

Introduction

Artificial intelligence (AI) shows great promise in education and research, providing flexibility, curriculum improvements, and knowledge gains for students. However, concerns remain about its impact on critical thinking and long-term learning. For researchers, AI accelerates data processing but may reduce originality and replace human roles. This article explores the debates around AI in academia, underscoring the need for guidelines to harness its potential while mitigating risks.

Benefits of AI for students and researchers

Students

Within education, AI has created a buzz for its usefulness in helping students complete daily and complex tasks. Specifically, students have used this technology to enhance their decision-making, improve workflow and obtain a more personalised learning experience. A study by Krive et al. (2023) demonstrated this by having medical students take an elective module to learn about using AI to enhance their learning and understand its benefits in healthcare. Traditionally, medical studies have been inflexible, with difficulty integrating pre-clinical theory and clinical application. The module created by Krive et al. introduced a curriculum with assignments featuring online clinical simulations to apply preclinical theory to patient safety. Students scored a 97% average on knowledge exams and 89% on practical exams, showing AI's benefits for flexible, efficient learning. Thus, AI can enhance student learning experiences whilst saving time and providing flexibility. Additionally, we gathered testimonials from current STEM graduates and students to better understand the implications of AI. In Figure 1, we can see that the students use AI to support their exam preparation, get to grips with difficult topics, and summarise long texts to save time, whilst exercising caution in the knowledge that AI has limitations. This shows that AI has the potential to become a personalised learning assistant that improves comprehension and retention and helps organise thoughts, all of which allow students to enhance skills through support rather than reliance on the software. Despite the mainstream uptake of AI, one student has chosen not to use it for fear of becoming less self-sufficient; we explore this dynamic in the next section.

Researchers

AI can be very useful for academic researchers, for example by speeding up, or even facilitating outright, the process of writing and editing papers based on new scientific discoveries. As a result, society may gain innovative ways to treat diseases and expand the current knowledge of different academic disciplines. AI can also be used for data analysis, interpreting large amounts of information; this saves not only time but also much of the money required to complete the process accurately. The statistical and graphical findings could be used to influence public policy or help businesses achieve their objectives. Another quality of AI is that it can be tailored to the researcher's needs in any field, from STEM to subject areas outside it, indicating how wide-ranging its uses are.
For academic fields requiring researchers to look at things in greater detail, like molecular biology or immunology, AI can help generate models to understand the molecules and cells involved in such mechanisms, for instance through genome analysis and possibly next-generation sequencing. Within education, researchers working as lecturers can use AI to deliver concepts and ideas to students and even make the marking process more robust. In turn, this can reduce the burnout educators experience in their daily working lives and may help establish a work-life balance, allowing them to feel more at ease over the long term.

Risks of AI for students and researchers

Students

With great power comes great responsibility, and with the advent of AI in school and learning there is increasing concern about the quality of learners that schools produce, and whether their attitude to learning and critical thinking skills are hindered or lacking. This concern is echoed in the results of a study by Ahmad et al. (2023), which examined how AI affects laziness and decision making in university students. The results showed that the use of AI in education was associated with 68.9% of laziness and a 27.7% loss in decision-making ability among 285 students across Pakistani and Chinese institutions. This confirms some of the worries a student shared with us in Figure 1 and suggests that students may become more passive learners rather than develop key life skills. It may even lead to a reluctance to learn new things and to seeking out 'the easy way' rather than enjoying obtaining new facts.

Researchers

Although AI can be great for researchers, it carries its own disadvantages. For example, it could lead to reduced originality in writing, and this type of misconduct jeopardises the reputation of people working in research. Also, the software is only as effective as the type of data it is specialised in, so a given AI could misinterpret the data. This has downstream consequences for how research institutions are run and, beyond that, hinders scientific inquiry. Therefore, if severely misused, AI can undermine the integrity of academic research, which could hinder the discovery of life-saving therapies. Furthermore, there is the potential for AI to replace researchers, suggesting there may be fewer opportunities to employ aspiring scientists. When given insufficient information, AI can be biased, which can be detrimental; one article found that its use in a dermatology clinic could put certain patients at risk of skin cancer and suggested that more diverse demographic data are needed for AI to work effectively. Thus, it needs to be applied strategically to ensure it works as intended and does not cause harm.

Conclusion

Considering its uses for students and researchers, AI is advantageous to them: it supports knowledge gaps, aids data analysis, boosts general productivity, and can be used to engage with the public, among much else. Its possibilities for enhancing industries such as education and drug development are vast and could propagate societal progress. Nevertheless, the drawbacks of AI cannot be ignored, such as the chance of it replacing people in jobs or the fact that it is not completely accurate. Therefore, guidelines must be defined for its use as a tool, to ensure a healthy relationship between AI and students and researchers.
According to the European Network for Academic Integrity (ENAI), using AI for proofreading, spell checking, and as a thesaurus is admissible. However, it should not be listed as a co-author because, unlike people, it is not liable for any reported findings. As such, depending on how AI is used, it can be a tool that helps society or one that harms it, so it is not inherently good or bad for students, researchers and society in general.

Written by Sam Jarada and Irha Khalid. Introduction and 'Student' arguments by Irha; conclusion and 'Researcher' arguments by Sam.

Related articles: Evolution of AI / AI in agriculture and rural farming / Can a human brain be uploaded to a computer?

  • Schizophrenia, Inflammation and Accelerated Aging: a Complex Medical Phenotype | Scientia News

Setting Neuropsychiatry In a Wider Medical Context. Last updated: 20/02/25; published: 24/05/23.

In novel research by Campeau et al. (2022), 742 proteins from the blood plasma of 54 participants with schizophrenia and 51 age-matched healthy volunteers were analysed proteomically. This investigation validated the previously contentious link between premature aging and schizophrenia by testing for a wide variety of proteins involved in cognitive decline, aging-related comorbidities, and biomarkers of earlier-than-average mortality. The results demonstrated that age-linked changes in protein abundance occur earlier in life in people with schizophrenia. The data also help to explain the heightened incidence of age-related disorders and early all-cause death in people with schizophrenia, with protein imbalances associated with both phenomena being present in all schizophrenic age strata over age 20.

This research is the result of years of medical interest in the biomedical underpinnings of schizophrenia. The comorbidities and earlier death associated with schizophrenia were focal points of research for many years, but only now have valid explanations been posed for these phenomena. The greater incidence of early death in schizophrenia was explained in this study by the increased abundance of certain proteins: biomarkers of heart disease (Cystatin-3, Vitronectin), blood-clotting abnormalities (Fibrinogen-B) and an inflammatory marker (L-Plastin). These proteins were tested for because of their inclusion in a dataset of protein biomarkers of early all-cause mortality in healthy and mentally ill people, published by Ho et al. (2018) in the Journal of the American Heart Association. Furthermore, a protein linked to degenerative cognitive deficit with age, Cystatin C, was present at increased levels in participants with schizophrenia both under and over the age of 40. This may help explain why antipsychotics have limited effectiveness in reducing the cognitive effects of schizophrenia. In this study, participants with schizophrenia under 40 had plasma protein content similar to that of the healthy over-60 stratum, including biomarkers of cognitive decline, age-related disease and death, and showed the same likelihood of these phenomena as the healthy over-60 set. These results could demonstrate the need to use medications often aimed at age-related cognitive decline and mortality-linked protein abundances to treat schizophrenia. One of these options is polyethylene glycol-Cp40, a C3 inhibitor used to treat nocturnal haemoglobinuria, which could be used to reduce the risk of developing age-related comorbidities in patients with schizophrenia. This treatment may be effective in reducing C3 activation, and hence opsonisation (the tagging of detected foreign products in the blood). When overexpressed, C3 can cause the opsonisation of healthy blood cells, leading to haemolysis, which can catalyse the reduction in blood volume implicated in cardiac events and other comorbidities. However, whether or not this treatment would benefit those with schizophrenia is yet to be proven.
The potential of this research to catalyse new treatment options for schizophrenia cannot be overstated. Since the publication of Kilbourne et al. in 2009, the role of cardiac comorbidities in driving early death in patients with schizophrenia has been accepted medical dogma. The discovery of exact protein targets for reducing the incidence of age-linked conditions and early death in schizophrenia will allow the condition to be treated more holistically, with greater attention to the fact that schizophrenia is not only a psychiatric illness but also a neurocognitive disorder with affiliated comorbidities that must be adequately prevented.

Written by Aimee Wilson

Related articles: Genetics of ageing and longevity / Ageing and immunity / Inflammation therapy

  • Why blue whales don't get cancer | Scientia News

Discussing Peto's Paradox. Last updated: 14/07/25; published: 16/10/23.

Introduction: What is Peto's Paradox?

Cancer is a disease that occurs when cells divide uncontrollably, owing to genetic and epigenetic factors. Theoretically, the more cells an organism possesses, the higher its probability of developing cancer should be. Imagine that you have one tiny organism, a mouse, and a huge organism, an elephant. Since an elephant has more cells than a mouse, it should have a higher chance of developing cancer, right? This is where things get mysterious. In reality, animals with 1,000 times more cells than humans are not more likely to develop cancer. Notably, blue whales, the largest mammals, hardly ever develop cancer. Why? To understand this phenomenon, we must dive deep into Peto's paradox. Peto's paradox is the lack of correlation between body size and cancer risk. In other words, the number of cells you possess does not dictate how likely you are to develop cancer. Furthermore, research has shown that body mass and life expectancy are unlikely to affect the risk of death from cancer (see Figure 1).

Peto's Paradox: Protective Mechanisms

Mutations, otherwise known as changes or alterations in the deoxyribonucleic acid (DNA) sequence, play a role in cancer and ageing. Researchers have analysed mutations in the intestines of several mammalian species, ranging from mice, monkeys, cats, dogs, humans, and giraffes to tigers and lions. Their results reveal that these mutations mostly come from processes that occur inside the body, such as chemicals causing changes in DNA. These processes were similar in all the animals studied, with slight differences. Interestingly, animals with longer lifespans were found to acquire fewer mutations in their cells each year (Figure 2). These findings suggest that the rate of mutation is associated with how long an animal lives and might have something to do with why animals age. Furthermore, even though these animals have very different lifespans and sizes, the number of mutations in their cells at the end of their lives was not significantly different; this is known as the cancer burden. Since animals of larger size or longer lifespan have more cells (and hence more DNA) that could undergo mutation, and a longer time of exposure to mutations, how is it possible that they do not have a higher cancer burden? Evolution has led to the formation of mechanisms in organisms that suppress the development of cancerous cells. Animals possessing 1,000 times as many cells as humans do not display a higher susceptibility to cancer, indicating that their natural mechanisms can suppress cancer roughly 1,000 times more efficiently than those operating in human cells. Does this mean larger animals have more efficient protective mechanisms against cancer? A tumour is an abnormal lump formed by cells that grow and multiply uncontrollably. Tumour suppressor genes act like bodyguards in your cells, helping prevent the uncontrolled division of cells that could form tumours. Previous analyses have shown that the addition of one or two tumour suppressor gene mutations would be sufficient to reduce the cancer risk of a whale to that of a human. However, the evidence does not suggest that an increased number of tumour suppressor genes correlates with increasing body mass and longevity. Although a study by Caulin et al.
identified biomarkers in large animals that may explain Peto's paradox, more experiments need to be conducted to confirm the biological mechanisms involved. Just over a month ago, an investigation of the existing evidence on such mechanisms produced a list of factors that may contribute to Peto's paradox, including replicative immortality, cell senescence, genome instability and mutations, proliferative signalling, growth suppression evasion and cell resistance to death. As far as we know, different strategies to prevent cancer have evolved in species with larger sizes or longer lifespans. However, more studies must be conducted before Peto's paradox can be truly explained.

Peto's Paradox: Other Theories

Several theories attempt to explain Peto's paradox. One proposes that large organisms have a lower basal metabolic rate, producing fewer reactive oxygen species. Cells in larger organisms therefore incur less oxidative damage, leading to a lower mutation rate and a lower risk of developing cancer. Another popular theory concerns the formation of hypertumours. As cells divide uncontrollably in a tumour, "cheaters" can emerge. These "cheaters", known as hypertumours, are cells which grow on and feed off their original tumour, ultimately damaging or destroying it. In large organisms, tumours take more time to reach a lethal size, so hypertumours have more time to evolve and destroy the original tumours. Hence, in large organisms, cancer may be more common but less lethal.

Clinical Implications

Curing cancer has posed significant challenges. Consequently, the focus of cancer treatment has shifted towards cancer prevention. Extensive research is currently underway to investigate the behaviour and response of cancer cells to treatment. This is done through a multifaceted approach, investigating the tumour microenvironment and diagnostic or prognostic biomarkers. Going forward, a deeper understanding of these fields will enable the development of prognostic models as well as targeted treatment methods. One exciting example is the work on TP53. Research on this tumour suppressor gene indicates that it plays a role in making elephant cells more responsive to DNA damage and in triggering apoptosis by regulating the TP53 signalling pathway. These findings imply that having more copies of TP53 may have directly contributed to the evolution of extremely large body sizes in elephants, helping resolve Peto's paradox. In particular, there are 20 copies of the TP53 gene in elephants, but only one copy in humans (see Figure 3). Through more robust studies and translational medicine, it would be fascinating to see how such discoveries could be applied to human medicine (Figure 4).

Conclusion

The complete mechanism by which evolution has enabled organisms larger and longer-lived than humans to resist cancer is still a mystery. There is a multitude of hypotheses that need to be investigated extensively with large-scale experiments. By unravelling the mysteries of Peto's paradox, these studies could provide invaluable insights into cancer resistance and potentially transform cancer prevention strategies for humans.

Written by Joecelyn Kirani Tan

Related articles: Biochemistry of cancer / Orcinus orca (killer whale) / Canine friends and cancer
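The "more cells should mean more cancer" intuition that sets up the paradox can be made concrete with a toy calculation: if every cell independently had some tiny lifetime probability p of becoming cancerous, the risk of at least one such cell appearing would be 1 - (1 - p)^N for an organism of N cells. The Python sketch below uses purely illustrative values for p and the cell counts.

```python
# A toy model of the naive expectation behind Peto's paradox: equal per-cell risk
# implies that cancer risk rises steeply with cell count. The per-cell probability
# and cell counts are illustrative assumptions, not measured biological values.

def naive_lifetime_risk(p_per_cell: float, n_cells: float) -> float:
    """P(at least one of N cells becomes cancerous) = 1 - (1 - p)^N."""
    return 1 - (1 - p_per_cell) ** n_cells

p = 1e-15  # assumed per-cell lifetime transformation probability (illustrative)
for species, n_cells in [("mouse", 3e9), ("human", 3e13), ("blue whale", 3e16)]:
    print(f"{species:>10}: naive risk ~ {naive_lifetime_risk(p, n_cells):.3f}")
# The naive model predicts near-certain cancer for a whale; the observation that
# whales rarely develop cancer is exactly what Peto's paradox describes.
```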

  • Hubble Tension | Scientia News

Why the fuss over a couple of km/s/Mpc? Last updated: 09/07/25; published: 25/11/23.

You have probably heard that the universe is expanding, and perhaps even that this expansion is accelerating. A consequence is that distant objects such as galaxies appear to recede from Earth faster the further away they are. Here is a helpful analogy: imagine a loaf of raisin bread rising as it is baked. A pair of raisins on opposite sides of the loaf will move away from one another at a greater rate than a pair of raisins near the center. The more dough (universe) there is between a pair of raisins (galaxies), the faster they recede from one another. See Figure 1.

This phenomenon is encapsulated in Hubble's Law, which relates specifically to the recessional velocity due to the expansion of space. Hubble's Law is given by the equation v = H₀D, where v is the recessional velocity, D is the distance to the receding object, and H₀ is the Hubble constant. It is worth noting that distant objects often have velocities of their own due to gravitational forces, so-called 'peculiar velocities'.

To clarify the meaning of the title of this article, we must explore the unit in which the Hubble constant H₀ is most often quoted: km/s/Mpc. This describes the speed (in kilometers per second) at which a distant object, such as a galaxy, is receding for every megaparsec of distance between that galaxy and Earth. Edwin Hubble is the name most often associated with this cosmological paradigm shift; however, the physicists Alexander Friedmann and Georges Lemaître worked independently on the notion of an expanding universe, deriving similar results before Hubble verified them observationally in 1929 at the Mount Wilson Observatory, California.

What is the Hubble Tension?

Hopefully the above discussion of units and raisin bread convinced you that the Hubble constant H₀ is linked to the expansion rate of the universe. The larger H₀ is, the faster galaxies are receding at a given distance, indicating a more quickly expanding universe. Cosmologists therefore wish to measure H₀ accurately in order to draw conclusions about the age and size of the universe. The Hubble Tension arises from the contradictory measurements of H₀ obtained from different experiments. See Figure 2 of Edwin Hubble.

CMB measurement

One of these experiments uses the Cosmic Microwave Background (CMB), which can be thought of as an afterglow of light from near the time of the Big Bang. The wavelength of this light has expanded with the universe ever since the period of recombination, which I mentioned in my previous article on the DESI instrument. Our current best model of the universe, called ΛCDM, can describe how the universe evolved from a hot, dense state to the universe we see today, subject to a specifically balanced energy budget between ordinary matter, dark matter, and dark energy. By fitting this ΛCDM model to CMB data from missions such as ESA's Planck mission, one can derive a value for the expansion rate of the universe, i.e. a value for H₀. The Planck mission measured temperature variations (anisotropies) across the CMB with unprecedented angular resolution and sensitivity. The most recent estimate for the Hubble constant using this method gave H₀ = 67.4 ± 0.5 km/s/Mpc.
Local Distance Ladder measurement

Another technique to determine the value of H₀ uses the distance-redshift relation. This is a wholly observational approach. It relies on the fact that the faster an object recedes from Earth, the more the light from that object is shifted towards longer wavelengths (redshifted). Hubble's Law relates recessional velocity to distance; one can therefore expect a similar relation between distance and redshift. A 'ladder' is invoked because astronomers wish to use objects that are visible over a vast range of distances; the rungs of the ladder represent greater and greater distances to the astronomical light source. Each rung of the ladder contains a different kind of 'standard candle': sources with reliable, well-constrained luminosities that translate to an accurate distance from Earth. I encourage you to look into these different types; some examples are Cepheid variables, Type Ia supernovae, and RR Lyrae variables. When this method was employed using the Hubble Space Telescope and SH0ES (Supernova H0 for the Equation of State), a value of H₀ = 73.04 ± 1.04 km/s/Mpc was obtained.

The disagreement

Clearly, these two values for the Hubble constant do not agree, nor do their uncertainty ranges overlap. Figure 3 shows some of the 21st-century measurements of H₀, an excellent illustration of how the uncertainty has decreased for both methods, making their disagreement more statistically significant. Many sources of public scientific engagement cite this disagreement as the 'Crisis in Cosmology!'. In the author's opinion, this is unnecessarily hyperbolic and plays on the human instinct to pick a side between two opposing viewpoints. In fact, new methods to measure H₀ have been implemented using the tip of the red giant branch (TRGB) as a standard candle, and these demonstrate closer agreement with the value derived from the CMB. Some cosmologists believe that this Hubble Tension will eventually dissipate as our calibration of astronomical distances improves with the next generation of telescopes. Constraining the value of the Hubble constant is by no means low-hanging fruit for cosmologists, nor is the field in crisis. To see the progress we have made, one has to look back to 1929, when Edwin Hubble's first estimate, using a trend line and 46 galaxies, gave H₀ = 500 km/s/Mpc! We must remain hopeful that the future holds a consistent estimate of the expansion rate and, with it, the age of our universe.

Written by Joseph Brennan
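To make the units concrete, the short Python sketch below applies v = H₀D to a galaxy at an assumed distance of 100 Mpc and converts each quoted H₀ into the naive 'Hubble time' 1/H₀, a rough age estimate that ignores how the expansion rate changes over time. The 100 Mpc distance is an arbitrary illustration value.

```python
# A minimal sketch of Hubble's law, v = H0 * D, using the two H0 values quoted
# above, plus the rough age estimate 1/H0 (ignoring the changing expansion rate).

KM_PER_MPC = 3.086e19       # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16  # seconds in one billion years

def recession_velocity(h0_km_s_mpc: float, distance_mpc: float) -> float:
    """Recessional velocity in km/s for an object at the given distance."""
    return h0_km_s_mpc * distance_mpc

def hubble_time_gyr(h0_km_s_mpc: float) -> float:
    """Naive age of the universe, 1/H0, converted to billions of years."""
    h0_per_second = h0_km_s_mpc / KM_PER_MPC
    return 1 / h0_per_second / SECONDS_PER_GYR

for label, h0 in [("CMB (Planck)", 67.4), ("Distance ladder (SH0ES)", 73.04)]:
    v = recession_velocity(h0, distance_mpc=100)  # assumed galaxy 100 Mpc away
    print(f"{label}: v ~ {v:.0f} km/s at 100 Mpc, 1/H0 ~ {hubble_time_gyr(h0):.1f} Gyr")
```

The two H₀ values differ by only a few km/s/Mpc, yet the corresponding Hubble times differ by roughly a billion years, which is why the tension matters.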
