Search Index


  • The search for a room-temperature superconductor | Scientia News

    A (possibly) new class of semiconductors

    In early August, the scientific community was buzzing with excitement over the groundbreaking discovery of the first room-temperature superconductor. As some rushed to prove the existence of superconductivity in the material known as LK-99, others were sceptical of the validity of the claims. After weeks of investigation, experts concluded that LK-99 was likely not the elusive room-temperature superconductor but rather a different type of magnetic material with interesting properties. But what if we did stumble upon a room-temperature superconductor? What could this mean for the future of technology?

    Superconductivity is a property of some materials at extremely low temperatures that allows them to conduct electricity with no resistance. Classical physics cannot explain this phenomenon; instead, we have to turn to quantum mechanics for a description of superconductors. Inside superconductors, electrons pair up and can move through the structure of the material without experiencing any friction. These electron pairs are broken apart by thermal energy, so they only survive at low temperatures. This theory, known as BCS theory after the physicists who formulated it, therefore does not explain the existence of a high-temperature superconductor. To describe high-temperature superconductors, such as any that might occur at room temperature, more complicated theories are needed.

    The magic of superconductors lies in their zero resistance. Resistance wastes energy in circuits as heat, leading to the unwanted loss of power and inefficient operation. Physically, resistance is caused by electrons colliding with atoms in the structure of a material, losing energy in the process. Because electrons move through superconductors without experiencing any collisions, there is no resistance. Superconductors are useful as circuit components because they waste no power through heating and are completely energy-efficient in this respect.

    Normally, using superconductors requires complex methods of cooling them down to typical superconducting temperatures. For example, the first copper-oxide superconductor becomes superconducting at around 35 K, or in other words, roughly 240 °C colder than the temperature at which water freezes. These cooling methods are expensive, which prevents their use on a wide scale. A room-temperature superconductor, however, would give access to the beneficial properties of the material, such as its zero resistance, without the need for extreme cooling.

    The current record holders for highest-temperature superconductors at ambient pressure are the cuprate superconductors, at around −135 °C. These are a family of materials made up of layers of copper oxides alternating with layers of other metal oxides. As the mechanism for their superconductivity is yet to be revealed, scientists are still scratching their heads over how these materials can exhibit superconducting properties. Once this mechanism is discovered, it may become easier to predict and find high-temperature superconducting materials, and may lead to the first room-temperature superconductor.
    Until then, the search continues to unlock the next frontier in low-temperature physics…

    For more information on superconductors: [1] Theory behind superconductivity [2] Video demonstration

    Written by Madeleine Hales

    Related articles: Semiconductor manufacturing / Semiconductor laser technology / Silicon hydrogel lenses / Titan Submersible
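    As a rough illustration of the two quantitative points above (the kelvin-to-Celsius conversions and why zero resistance eliminates heating losses), here is a minimal Python sketch. The current and resistance values are made-up illustrative numbers, not figures from the article.

    ```python
    # Illustrative numbers only: a rough sketch of why zero resistance matters,
    # plus the temperature conversions quoted in the article.

    def kelvin_to_celsius(t_kelvin: float) -> float:
        """Convert a temperature from kelvin to degrees Celsius."""
        return t_kelvin - 273.15

    def resistive_power_loss(current_amps: float, resistance_ohms: float) -> float:
        """Power dissipated as heat in a conductor: P = I^2 * R (Joule heating)."""
        return current_amps ** 2 * resistance_ohms

    print(f"35 K  = {kelvin_to_celsius(35):.0f} °C")    # early cuprate transition temperature
    print(f"138 K = {kelvin_to_celsius(138):.0f} °C")   # roughly the -135 °C cuprate record

    # A 100 A current in a cable with 0.5 ohm of resistance wastes 5 kW as heat;
    # the same current in a superconducting cable (R = 0) wastes nothing.
    print(resistive_power_loss(100, 0.5))  # 5000.0 W
    print(resistive_power_loss(100, 0.0))  # 0.0 W
    ```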

  • Cancer on the Move | Scientia News

    How can patients with metastasised cancer be treated?

    Introducing and Defining Metastasis

    Around 90% of patients with cancer die because their cancer spreads (metastasis). Despite its prevalence, many critical questions remain in cancer research about how and why cancers metastasise. The metastatic cascade has three main steps: dissemination, dormancy, and colonisation. Most cells that disseminate die once they leave the primary tumour, thus posing an evolutionary bottleneck. However, the few that survive face a further challenge: entering a foreign microenvironment. These circulating tumour cells (CTCs) acquire a set of functional abilities through genetic alterations, enabling them to survive the hostile environment. CTCs can travel as single cells or as clusters. If they travel in clusters, CTCs can be coated with platelets, neutrophils, and other tumour-associated cells, protecting them from immune surveillance. As these CTCs travel further and settle in distant tissues, they are named disseminated tumour cells (DTCs). These cells are undetectable by clinical imaging and can enter a state of dormancy. The metastatic cascade represents ongoing cellular reprogramming and clonal selection of cancer cells that can withstand the hostile external environment. How does metastasis occur, and what properties allow these cancer cells to survive?

    How & Why Does Cancer Metastasise?

    The Epithelial-to-Mesenchymal Transition (EMT) is a theory that explains how cancer cells can metastasise. In this theory, tumour cells lose their epithelial cell-to-cell adhesion and gain mesenchymal migratory markers. Tumour cells that express a mixture of epithelial and mesenchymal properties were found to be the most effective at disseminating and colonising the secondary site. It is important to note that evidence for the EMT has been acquired predominantly in vitro, and additional in vivo research is necessary to confirm this phenomenon. Nevertheless, although EMT does not fully address why cancers metastasise, it provides a framework for how a cancer cell develops the properties to metastasise.

    Many factors contribute to why cancers metastasise. For example, when a cancer grows too large its blood supply becomes inadequate, and the cells in the centre lose access to the oxygen carried by red blood cells. Thus, to evade cell death, cancer cells detach from the primary tumour to regain access to oxygen and nutrients. In addition, cancer cells exhibit a high rate of glycolysis, even when oxygen is available (the 'Warburg effect'), to supply sufficient energy for their uncontrolled proliferation. However, this generates lactic acid as a by-product, resulting in a low-pH environment. This acidic environment stimulates cancer invasion and metastasis, as cancer cells move away from the hostile environment to evade cell death once again. Figure 2 summarises the multiple interplaying factors that contribute to metastasis. So, how can patients with metastasised cancer be treated?

    Current Treatments and Biggest Challenges

    Treatment options differ depending on the stage at which the patient presents and the cancer type. Figure 3 shows an example of these treatment plans. For early stages I and II, chemotherapy and targeted treatments are offered, and in specific cases, local surgery is performed.
    These therapies aim to slow the growth of the cancer or lessen the side effects of treatment. An example of treating metastasised prostate cancer is hormone therapy, as the cancer relies on the hormone testosterone to grow. Currently, cytotoxic chemotherapy remains the backbone of metastatic therapy. However, there are emerging immunotherapeutic treatments under trial. These aim to boost the ability of the immune system to detect and kill cancer cells. Hopefully, these new therapies will improve the prognosis of metastatic cancers when used in combination with conventional therapies, shining a new light on the therapeutic landscape of advanced cancers.

    Future Directions

    Recent developments have opened new avenues for discovering potential treatment targets in metastatic cancer. The first is to target the dormancy of DTCs, where the immune system plays an important part. Neoadjuvant ICI (immune checkpoint inhibitor) studies are anticipated to provide insight into novel biomarkers and may eliminate micro-metastatic cancer cells. Also, novel technologies such as single-cell RNA sequencing reveal complex information about the plasticity of metastatic cancer cells, allowing researchers to understand how cancer cells adapt to stressful conditions. Finally, in vivo models, such as patient-derived models, could provide crucial insight into future treatments as they reproduce patients' reactions to different drug treatments. There are many limitations and challenges in the research and treatment of cancer metastasis. It is clear, however, that with more studies into the properties of metastatic cancers and the different avenues of novel targets and therapeutics, the outlook in this area of cancer research is promising.

    Written by Saharla Wasame

    Related articles: Immune signals and metastasis / Cancer magnets for tumour metastasis / Brain metastasis / Novel neuroblastoma driver for therapeutics

    REFERENCES

    Fares, J., Fares, M.Y., Khachfe, H.H., Salhab, H.A. and Fares, Y. (2020). Molecular principles of metastasis: a hallmark of cancer revisited. Signal Transduction and Targeted Therapy, 5(1). doi: https://doi.org/10.1038/s41392-020-0134-x.
    Ganesh, K. and Massagué, J. (2021). Targeting metastatic cancer. Nature Medicine, 27(1), pp.34–44. doi: https://doi.org/10.1038/s41591-020-01195-4.
    Gerstberger, S., Jiang, Q. and Ganesh, K. (2023). Metastasis. Cell, 186(8), pp.1564–1579. doi: https://doi.org/10.1016/j.cell.2023.03.003.
    Li, Y. and Laterra, J. (2012). Cancer Stem Cells: Distinct Entities or Dynamically Regulated Phenotypes? Cancer Research, 72(3), pp.576–580. doi: https://doi.org/10.1158/0008-5472.CAN-11-3070.
    Liberti, M.V. and Locasale, J.W. (2016). The Warburg Effect: How Does it Benefit Cancer Cells? Trends in Biochemical Sciences, 41(3), pp.211–218. doi: https://doi.org/10.1016/j.tibs.2015.12.001.
    Mlecnik, B., Bindea, G., Kirilovsky, A., Angell, H.K., Obenauf, A.C., Tosolini, M., Church, S.E., Maby, P., Vasaturo, A., Angelova, M., Fredriksen, T., Mauger, S., Waldner, M., Berger, A., Speicher, M.R., Pagès, F., Valge-Archer, V. and Galon, J. (2016). The tumor microenvironment and Immunoscore are critical determinants of dissemination to distant metastasis. Science Translational Medicine, 8(327). doi: https://doi.org/10.1126/scitranslmed.aad6352.
    Hernandez Dominguez, O., Yilmaz, S. and Steele, S.R. (2023). Stage IV Colorectal Cancer Management and Treatment. Journal of Clinical Medicine, 12(5), p.2072. doi: https://doi.org/10.3390/jcm12052072.
    Steeg, P.S. (2006). Tumor metastasis: mechanistic insights and clinical challenges. Nature Medicine, 12(8), pp.895–904. doi: https://doi.org/10.1038/nm1469.

  • The Dual Role of Mitochondria | Scientia News

    Powering life and causing death

    Mitochondria as mechanisms of apoptosis

    Mitochondria are famous for being the “powerhouse of cells”, producing ATP for respiration as the site of the Krebs cycle, the electron transport chain and the electron carriers. However, one thing mitochondria are less well known for is mediating programmed cell death, or apoptosis. This is a tightly controlled process within a cell that helps prevent the growth of cancer cells. One way apoptosis occurs is through the mitochondria initiating protein activation in the cytosol (a part of the cytoplasm). Proteins such as cytochrome c trigger the activation of caspases, leading to cell death. Caspases are enzymes that degrade cellular components so they can be removed by phagocytes. Mitochondrial apoptosis is also controlled by the B cell lymphoma 2 (BCL-2) family of proteins. These are split into pro-apoptotic and pro-survival proteins, so the correct balance of the two types is important in cellular life and death.

    Regulation and initiation of mitochondrial apoptosis

    Mitochondrial apoptosis can be regulated by the BCL-2 family of proteins. They can be activated by processes such as transcriptional upregulation or post-translational modification. Transcriptional upregulation is when the production of RNA from a gene is increased. Post-translational modification is when chemical groups (such as acetyl groups and methyl groups) are added to proteins after they have been translated from RNA; this can change the structure and interactions of proteins. After one of these processes, BAX and BAK (examples of pro-apoptotic BCL-2 proteins) are activated. They form pores in the mitochondrial outer membrane in a process called mitochondrial outer membrane permeabilisation (MOMP). This allows pro-apoptotic proteins to be released into the cytosol, leading to apoptosis.

    Therapeutic uses of mitochondria

    Dysregulation of mitochondrial apoptosis can lead to many neurological and infectious diseases, such as neurodegenerative diseases and autoimmune disorders, as well as cancer. Therefore, mitochondria can act as important drug targets, providing therapeutic opportunities. Some peptides and proteins, known as mitochondriotoxins or mitocans, are able to trigger apoptosis, and their use has been investigated for cancer treatment. One example of a mitochondriotoxin is melittin, the main component of bee venom. This compound works by incorporating into plasma membranes and interfering with the organisation of the bilayer by forming pores, which stops membrane proteins from functioning. Drugs containing melittin have been used as treatments for conditions such as rheumatoid arthritis and multiple sclerosis. Melittin has also been investigated as a potential cancer treatment, and it induced apoptosis in certain types of leukaemia cells, resulting in the downregulation of BCL-2 proteins (decreased expression and activity). The melittin-induced apoptosis is a preclinical finding, and more research is needed before clinical application. This shows that the mechanisms of mitochondrial apoptosis can be harnessed to create novel therapeutics for diseases such as cancer.

    It is evident that mitochondria are essential for respiration but are also involved in apoptosis. Moreover, mitochondrial apoptosis is regulated by the activation of proteins like BCL-2, BAX and BAK. With further research, scientists can develop more targeted and effective drugs to treat the various diseases associated with mitochondria.

    Written by Naoshin Haque

  • The endless possibilities of iPSCs and organoids | Scientia News

    iPSCs are one of the most powerful tools of biosciences

    On the 8th of October 2012, the Nobel Prize in Physiology or Medicine was awarded to Shinya Yamanaka and John B. Gurdon for a groundbreaking discovery: induced pluripotent stem cells (iPSCs). The two scientists discovered that mature, specialised cells can be reprogrammed to their initial state and consequently transformed into any cell type. These cells can be used to study disease, examine genetic variations and test new treatments.

    The science behind iPSCs

    The creation of iPSCs is based on reversing the normal progression of cell potency during mammalian development. While the organism is still in the embryonic stage, the first cells to develop are totipotent stem cells, which have the unique ability to differentiate into any cell type in the human body. “Totipotent” refers to the cell’s potential to give rise to all the cell types and tissues needed to develop an entire organism. As development proceeds, totipotent cells give rise to pluripotent cells, which can differentiate into the three germ layers: the endoderm, the mesoderm and the ectoderm. The cells of each layer then develop into multipotent cells, which give rise to all types of human somatic cells, such as neuronal cells, blood cells, muscle cells and skin cells.

    Creation of iPSCs and organoids

    iPSCs are produced through a process called cellular reprogramming, in which differentiated cells are made to revert to a pluripotent state similar to that of embryonic stem cells. The process begins with selecting any type of somatic cell from the individual (in most cases, a patient). Four transcription factors, Oct4, Sox2, Klf4 and c-Myc, are introduced into the selected cells. These transcription factors are important for the maintenance of pluripotency: they activate the silenced pluripotency genes of the adult somatic cells and turn off the genes associated with differentiation. The somatic cells are thereby transformed into iPSCs, which can differentiate into any somatic cell type if provided with the right transcription factors. Although iPSCs themselves have endless applications in biosciences, they can also be turned into organoids, miniature three-dimensional organ models. To create organoids, iPSCs are exposed to a specific combination of signalling molecules and growth factors that mimic the development of the desired organ.

    Current applications of iPSCs

    As mentioned earlier, iPSCs can be used to study disease mechanisms, develop personalised therapies and test the action of drugs in human-derived tissues. iPSCs have already been used to model cardiomyocytes, neuronal cells, keratinocytes, melanocytes and many other types of cells. Moreover, kidney, liver, lung, stomach, intestine and brain organoids have already been produced. In the meantime, diseases such as cardiomyopathy, Alzheimer’s disease, cystic fibrosis and blood disorders have been successfully modelled and studied with the use of iPSCs. Most importantly, the use of iPSCs in scientific research reduces or replaces the use of animal models, promising a more ethical future in biosciences.

    Conclusion

    iPSCs are one of the most powerful tools of biosciences at the moment. In combination with gene-editing techniques, iPSCs give access to a wide range of tissues and human disorders and open the doors for precise, personalised and innovative therapies. iPSCs not only promise accurate scientific research but also ethical studies that minimise the use of animal models and embryonic cells.

    Written by Matina Laskou

    Related articles: Organoids in drug discovery / Introduction to stem cells

  • Green Chemistry | Scientia News

    And a hope for a more sustainable future

    Green Chemistry is a branch of chemistry concerned with designing synthetic reactions to minimise the generation of hazardous by-products and their impact on humans and the environment. Reactions are often designed to take place at low temperatures with short reaction times and increased yields; this is preferred because fewer materials are used and the process is more energy efficient. When designing routes, it is important to ask ‘How green is the process?’ In this way, we shift focus towards a more sustainable future in which we emit fewer pollutants and use renewable feedstocks and energy sources with minimal waste.

    In 1998, Paul Anastas and John Warner devised the twelve principles of Green Chemistry. They serve as a framework for scientists to design innovative solutions for existing and new synthetic routes. Scientists are looking into environmentally friendly reaction schemes that can simplify production as well as make use of greener resources. It is rarely possible to fulfil all twelve principles at the same time, but applying as many of them as possible when designing a protocol is still worthwhile. The twelve principles are:

    1. Prevention: waste should be prevented rather than treated after it has been created.
    2. Atom Economy: designing processes that maximise the incorporation of all materials, so that as much of the starting material as possible ends up in the final product (a short worked example follows this article).
    3. Less Hazardous Chemical Synthesis: synthetic methods should be designed to be safe, and the hazards of all the substances involved should be reviewed.
    4. Designing Safer Chemicals: products should be designed to avoid chemicals that are carcinogenic, neurotoxic, etc., so that they are essentially safe for the Earth.
    5. Safer Solvents and Auxiliaries: auxiliary substances and solvents should be made unnecessary where possible, and kept innocuous and minimal where they must be used, to reduce the waste created.
    6. Design for Energy Efficiency: designing synthetic methods in which reactions can be conducted at ambient temperature and pressure.
    7. Use of Renewable Feedstocks: raw materials used for reactions should be renewable rather than depleting.
    8. Reduce Derivatives: reducing the steps required in a reaction, for example by using catalysts or enzymes instead of protecting/deprotecting groups or temporary modification of functionality; extra steps require more reagents and generate a lot of waste.
    9. Catalysis: catalysts lower energy consumption and increase reaction rates, and they allow for decreased use of harmful and toxic chemicals.
    10. Design for Degradation: chemical products should be designed so that they break down into substances with no harmful effects on the environment.
    11. Real-time Analysis for Pollution Prevention: analytical techniques are required to allow in-process monitoring for the formation of hazardous substances.
    12. Inherently Safer Chemistry for Accident Prevention: using safer chemical alternatives to prevent accidents, e.g. fires and explosions.

    Some examples of areas where Green Chemistry is implemented:

    Computer chips: the use of supercritical carbon dioxide in a step of chip preparation has reduced the quantities of chemicals, water and energy required to produce chips.
    Medicine: developing more efficient ways of synthesising pharmaceuticals, e.g. the chemotherapy drug Taxol.

    Green Chemistry is widely being implemented in academic labs as a way to reduce environmental impact and high costs. To date, however, mainstream chemical industries have not fully embraced green chemistry and engineering, with over 98% of organic chemicals still derived from petroleum. This branch of chemistry is still fairly new and will likely become one of the most important fields in the future.

    Written by Khushleen Kaur

    Related article: The challenges in modern day chemistry
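    As a rough illustration of the Atom Economy principle above, here is a minimal Python sketch of the standard calculation: atom economy equals the molecular weight of the desired product divided by the total molecular weight of all reactants, times 100. The reaction and molecular weights are hypothetical placeholders, not examples from the article.

    ```python
    # A minimal sketch of the "atom economy" calculation (Principle 2), using a
    # hypothetical reaction A + B -> product + by-product. The molecular weights
    # below are illustrative placeholders, not real compounds.

    def atom_economy(product_mw: float, reactant_mws: list[float]) -> float:
        """Atom economy (%) = MW of desired product / total MW of all reactants x 100."""
        return 100.0 * product_mw / sum(reactant_mws)

    # Example: reactants of 120 and 80 g/mol give a desired product of 150 g/mol,
    # so 50 g/mol of atoms end up as by-product.
    print(f"{atom_economy(150.0, [120.0, 80.0]):.1f}%")  # 75.0%
    ```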

  • Basics of transformer physics | Scientia News

    Ampere's Law and Faraday's Law

    Transformers have been around for decades. No, not the robots from the science fiction film franchise, although that would be amazing. Rather, the huge, technologically complex metal box-like things that play a key role in the electrical grid. You have likely seen transformers hidden behind extensive fencing, cabling, and ‘Danger! High Voltage!’ warning signs. These areas are not exactly accessible to tourists. Transformers play a crucial part in providing power to everything from your electric toothbrush to the heating for your house to giant factories and just about anything in between. So it may come as a surprise that since their invention in the late 1800s, very little about them has changed.

    There are a number of different types of transformers that vary depending on voltage level, end user, location, etc. However, this article will only cover conventional transformers, or, more specifically, the basic physics concepts behind how a typical transformer works. For those without a physics or electrical background, transformers can seem impossible to understand, but there are only two physics laws you need to know: Ampere’s Law and Faraday’s Law.

    Ampere’s Law

    When charged particles like electrons flow in a particular direction, such as through a wire, this is an electric current. The moving charges influence the space surrounding the wire, and we call this influence a magnetic field. Ampere’s Law mathematically describes the relationship between the flowing electrical current and the resulting magnetic field: the more intense the electrical current, the stronger the magnetic field.

    Faraday’s Law

    Faraday’s Law allows us to predict how a changing magnetic field and an electrical circuit will interact. As a magnetic field changes over time, it produces an electromotive force, a voltage that creates, or induces, an electrical current in a nearby conductor.

    Basic physics of the transformer core

    Conventional transformers harness both Ampere’s Law and Faraday’s Law in their core. The core is made of sheets of silicon steel, also known as electrical steel, that are very carefully stacked together and manufactured to form a square-like closed loop. A wire is wound on one side of the square loop, which carries the input current from the power source. On the opposite side of the square loop, a second wire is wound, which carries the output current leading farther downstream into the electrical grid. This may be to a ‘load’ or endpoint for the current, i.e. a house, warehouse, etc. Wire 1, carrying the input current, is not physically connected to Wire 2, which carries the output current; these are two completely different wires. Ampere’s Law and Faraday’s Law together are used to create, or induce, the output current in Wire 2. Recall that a moving electrical current creates a magnetic field. This is what occurs on the side of the core with Wire 1: the input current flows along Wire 1 as it coils around that side of the core, and a strong magnetic field is produced. For all intents and purposes, we can say that Wire 2 is ‘empty’, meaning that there is no input current here - it is not connected to a power source. However, as the current in Wire 1 produces a changing magnetic field, this field reaches Wire 2 through the core and induces a current in Wire 2, which then flows out of the transformer farther into the electrical grid.

    There are different types of transformers with varying core configurations, as well as additional complex physics to consider during manufacturing, but these are too extensive to cover in this article. The processes described here, however, form the basis of conventional transformer physics.

    Written by Amber Elinsky

    Related article: Wireless electricity
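    The article keeps things qualitative, but the same two laws lead to the familiar ideal-transformer relation: the shared changing flux in the core links both windings, so the output voltage scales with the ratio of turns. Below is a minimal Python sketch of that relation; the turn counts and input voltage are made-up illustrative values, not figures from the article.

    ```python
    # A small sketch of the ideal-transformer relation that follows from Faraday's
    # Law applied to both windings: the same changing magnetic flux links Wire 1
    # (N1 turns) and Wire 2 (N2 turns), so V2 / V1 = N2 / N1. The turn counts and
    # input voltage below are made-up illustrative values.

    def induced_output_voltage(v_in: float, n_primary: int, n_secondary: int) -> float:
        """Ideal (lossless) transformer: output voltage scales with the turns ratio."""
        return v_in * n_secondary / n_primary

    # Step-down example: 11,000 V on a 1,000-turn primary, 40-turn secondary.
    print(induced_output_voltage(11_000, 1_000, 40))  # 440.0 V
    ```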

  • The chronotypes | Scientia News

    The natural body clock and the involvement of genetics

    Feeling like heading to bed at 9 pm and waking up at the crack of dawn? These tendencies define your chronotype, backed up by changes within your body. A generally overlooked topic, chronotypes affect our everyday behaviour. Many people innately associate themselves with a certain chronotype, but what do we know about how these physiological differences arise at a molecular level?

    The word ‘chronotype’ was first coined in the 1970s, combining the Greek words chrono (time) and type (kind or form). While the term is relatively modern, the concept emerged in the 18th century. Researchers in the 1960s and 1970s, like Jürgen Aschoff, explored how internal biological clocks influence our sleep-wake cycles, leading to the classification of people into morning or evening types based on their activity patterns. The first evidence of body clocks was found in plants rather than humans, leading to the invention of flower clocks, which were used to tell the time of day.

    Before delving into the details, let us introduce the general categories of chronotypes, which describe a person's inclination to wake up and sleep while also affecting periods of productivity. We know of the following three categories:

    The morning type (also referred to as larks): inclined to wake up and go to bed early because they feel most alert and productive in the mornings.
    The evening type (also called the owls): feel most alert and productive in the evenings and onwards, so they are inclined to wake up and go to bed later.
    The intermediate type (also referred to as the doves): falls in the middle of this range.

    Let's explore what we know about the genetics showing that chronotypes are a natural phenomenon.

    Genetics of chronotypes

    The main determining factor in our chronotype is the circadian period: the body's roughly 24-hour cycle of changes that manifest as feelings of productivity and energy or tiredness. The length of this period is crucial in determining our chronotype, and the key physiological signals through which it acts are melatonin release and core body temperature. One study suggested that morning types might have circadian periods shorter than 24 hours, whereas evening chronotypes might have circadian periods longer than 24 hours (a small numerical illustration follows this article).

    A major clock gene is PER, a collection of genes known as PER1, PER2 and PER3 that are thought to regulate the circadian period. Specifically, it has been observed that a delay in the expression of the PER1 gene in humans lengthens the circadian period. Possible causes for this delay include variation within the negative feedback loop in which PER1 operates, hereditary differences, environmental factors, changes to hormonal signals and age. This process may describe the mechanism behind the evening chronotype. Polymorphisms in the PER3 gene are thought to cause shorter circadian rhythms and the manifestation of the morning types; similarly, the effect of a PER3 polymorphism can be modulated by a range of factors, as described for PER1. These nuances shift the timing of the signals that control the circadian rhythm, such as the release of melatonin and the rise and fall of core body temperature. This matters because of its power to control our energy levels, windows of productivity, and sleep cycles.

    The consensus remains that around 50% of chronotype variation is attributable to genetics; however, it has also been observed that chronotypes tend to change with advancing age. Increased age is associated with an inclination towards an earlier-phase chronotype, and age-related variation has been observed to be higher in men. There is also an association between geographical location and phase preference: increasing latitude (travelling north or south from the Earth's equator) is associated with later chronotypes. Of course, many variations and factors come into play to affect these findings, such as ethnic genetics, climate, work culture and even population density.

    The effect on core body temperature and melatonin

    Polymorphisms in PER3 cause a much earlier peak in body temperature and melatonin in morning types than in evening and intermediate types. These changes manifest as the need to sleep earlier and a decreased feeling of productivity later in the day. In contrast, the evening types experience a later release of melatonin and a later drop in core body temperature, causing a later onset of tiredness and lack of energy. It can then be inferred that the intermediate types are affected by the expression of these genes in a way that falls in the middle of this spectrum.

    Conclusion

    Understanding differences in circadian rhythms and sleep-wake preferences offers valuable insights into human behaviour and health. Chronotypes influence various aspects of daily life, including sleep patterns and quality, cognitive performance and susceptibility to specific health conditions, including sleep-wake disorders. An extreme deviation in circadian rhythms and sleep cycles may lead to conditions such as advanced sleep-wake phase disorder (ASPD) and delayed sleep-wake phase disorder (DSPD). Recognising these variations is also helpful in optimising work schedules and adjusting to jet lag, improving mental and physical health by tailoring our environments to our biological rhythms. Many individuals opt to do a sleep study at an institution to gain insights into their circadian rhythms. A healthcare professional may also prescribe this if they suspect you have a circadian disturbance such as insomnia.

    The Morningness-Eveningness Questionnaire (MEQ)

    The MEQ is a self-reported questionnaire you can complete to gain more insight into your chronotype category. Clinical psychologist Michael Breus later created a related framework that uses different animals to categorise the chronotypes further. In this framework, the Bear represents individuals whose energy patterns are entrained to the rising and setting of the sun, and is the most common type in the general population. The Lions describe the early risers, and the Wolves roughly equate to the evening types. More recently, a fourth chronotype has been proposed: the Dolphin, whose responses to the questionnaire suggest that they switch between modes. Whether you're a Bear, Lion, Wolf, or Dolphin, understanding your chronotype can be a game-changer in optimising your daily routine. So, what's your chronotype, and how can you start working with your body's natural rhythms to unlock your full potential? A sleep study? The MEQ? Maybe keeping a sleep tracker.

    Written by B. Esfandyare

    Related articles: Circadian rhythms and nutrition / Does insomnia run in families?

    REFERENCES

    Emens, J.S., Yuhas, K., Rough, J., Kochar, N., Peters, D. and Lewy, A.J. (2009). Phase Angle of Entrainment in Morning- and Evening-Types under Naturalistic Conditions. Chronobiology International, 26(3), pp.474–493.
    Lee, J.H., Kim, I.S., Kim, S.J., Wang, W. and Duffy, J.F. (2011). Change in Individual Chronotype Over a Lifetime: A Retrospective Study. Sleep Medicine Research, 2(2), pp.48–53. doi: https://doi.org/10.17241/smr.2011.2.2.48.
    Ujma, P.P. and Kirkegaard, E.O.W. (2021). The overlapping geography of cognitive ability and chronotype. PsyCh Journal, 10(5), pp.834–846. doi: https://doi.org/10.1002/pchj.477.
    Shearman, L.P., Jin, X., Lee, C., Reppert, S.M. and Weaver, D.R. (2000). Targeted Disruption of the mPer3 Gene: Subtle Effects on Circadian Clock Function. Molecular and Cellular Biology, 20(17), pp.6269–6275.
    Viola, A.U., Archer, S.N., James, L.M., Groeger, J.A., Lo, J.C.Y., Skene, D.J., et al. (2007). PER3 Polymorphism Predicts Sleep Structure and Waking Performance. Current Biology, 17(7), pp.613–618.
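    To make the circadian-period point above concrete, here is a toy Python sketch of how an internal clock that runs slightly shorter or longer than 24 hours drifts earlier or later each day, before daily light exposure re-synchronises it. The period values are illustrative assumptions, not figures from the article or its references.

    ```python
    # A toy illustration of why the intrinsic circadian period biases chronotype:
    # relative to the 24-hour day, a clock that runs short drifts earlier each day
    # and a clock that runs long drifts later (before daily light exposure pulls
    # it back into line). The period values are illustrative, not from the article.

    def daily_drift_minutes(circadian_period_hours: float) -> float:
        """Minutes the internal clock shifts per day relative to a 24-hour day."""
        return (circadian_period_hours - 24.0) * 60.0

    for period in (23.7, 24.0, 24.3):
        drift = daily_drift_minutes(period)
        tendency = "earlier (morning-leaning)" if drift < 0 else "later (evening-leaning)" if drift > 0 else "no drift"
        print(f"period {period} h -> drift {drift:+.0f} min/day, {tendency}")
    ```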

  • Mastering motion- reflex, rhythmic and complex movements | Scientia News

    The neural pathways behind movement

    Introduction

    Movement is arguably the most fundamental aspect of human behaviour and is one of the most obvious features distinguishing animals from plants. The ability to physically respond to stimuli has enhanced our chances of survival immeasurably. As such, our body's ability to move has evolved and refined itself over many millennia, even developing new ways to move that protect us. For example, involuntary reflexes have reduced the computational demand on our brain to move parts of our body away from hot or painful objects, making the process almost instantaneous. Meanwhile, central pattern generators (CPGs) in our spinal cord have also reduced cognitive load by carrying out subconscious movement. This has allowed the motor cortex and cerebellum to focus on planning, coordinating, and refining purposeful movements in response to sensory feedback. While movement can be separated into even more categories, understanding the neural pathways of these three types can help uncover core concepts of human neurophysiology and even pave the way for treating movement disabilities. With that said, let's take a deep dive into the circuitry and principles of reflex, rhythmic, and voluntary movement.

    Reflex movements

    Reflex movements are rapid, involuntary responses to stimuli that commonly help us avoid danger or harm. An example is touching a hot object and immediately jerking our hand away from it. The goal of this form of movement is to be as quick as possible in order to avoid injury. As such, the neural pathway, known as a reflex arc, is simple and can involve as few as three neurons. First, sensory receptors detect a stimulus, such as heat, and send a signal towards the central nervous system (CNS) through sensory neurons. Instead of going up to the brain for processing and movement planning, the sensory neuron connects with a relay neuron in the spinal cord, which then connects to motor neurons. This reduces the time taken to respond, as it bypasses the brain's processing circuitry. Motor neurons then carry a signal to the relevant muscles to contract and move the body away from danger. Because the signal from the sensory receptors bypasses the brain, this movement is subconscious, meaning it happens without consciously deciding to move. This makes the movement rapid and stereotyped: the motion is predictable because there is minimal planning, just a need to move away from the stimulus.

    Central Pattern Generators (CPGs)

    CPGs are networks of neurons in the spinal cord that, when activated, produce rhythmic, pattern-like movement such as walking or running. This type of movement is also subconscious, as it does not require active focus to perform. However, unlike reflex movements, CPG output does not require sensory activation or feedback. Instead, CPGs are activated by descending pathways from the medulla, a region of the brainstem that is responsible for involuntary functions. Such rhythmic circuits typically control functions that are necessary for survival, such as breathing. The lack of need to consciously focus on these movements allows us to direct our attention to more complex situations, such as responding to stimuli or achieving a specific goal. This is where voluntary movements are required.

    Voluntary movements

    Any movement performed via conscious decision-making requires activity from a range of areas in the brain. To respond to our environment, we first need information on what is around us. This is largely coordinated by the frontal lobe, which directs attention to sensory input from our external environment. Human fMRI studies have highlighted increased activity in the frontal lobe as we switch our attention, thus perceiving different parts of our external environment. This information about our environment is sent to the motor cortex, which plans our next movement; complex multi-limb movements may require additional processing from premotor and association areas. Once the movement has been planned, it passes through the cerebellum, which refines specific parts of the movement, such as precise finger motion. After refinement, the movement signal is sent to the relevant muscles via motor neurons to carry out the intended movement.

    An example of a complex movement is reaching out and grabbing an object. This seemingly simple task requires coordinated movement of the hand, arm, shoulder, and torso to ensure we move our arm the right amount: not so far that we go past the object, and not so short that we fail to reach it. It also requires great precision to grab the object with appropriate force, gaining a firm grip while ensuring we do not break the object. A lot of planning goes into rudimentary movements, and yet sometimes we can still get things wrong. For instance, suppose we cannot see the object well and end up reaching too far, missing it. This will be picked up by our sensory organs, giving our brain feedback on what we actually did. By comparing the actual movement with the intended movement, the brain can create an error signal encoding how far we missed and in what direction. This drives learning: by using our previous errors, we can refine our future movements to eventually achieve our intended goal. In this example, we may learn that we keep extending our arm too far, and so with repeated trials we eventually move the right amount to grab the object, as intended. The cerebellum is largely seen as responsible for motor learning; however, the underlying mechanism is still being researched. When the same complex movement is performed again and again, it can become a subconscious movement activated by spinal CPGs, gradually requiring less coordination from the motor cortex. This is how common movements such as walking go from being a strenuous task for a toddler to a simple ability requiring minimal focus as an adult.

    Conclusion

    Overall, we can see a general trend of movements requiring more parts of the CNS as they become more complex. Precise, unfamiliar movements involving multiple limbs are the most complex, recruiting decision-making and motor coordination areas in order to be performed. By repeating an action again and again, we can train ourselves to perform it with less and less input from higher brain regions, until it eventually becomes a subconscious coordinated act that can be performed on demand.

    Written by Ramim Rahman

    Related articles: Dopamine in the movement pathway / Mobility disorders

    REFERENCES

    Dickinson, P.S. (2006). Neuromodulation of central pattern generators in invertebrates and vertebrates. Current Opinion in Neurobiology, 16(6), pp.604–614. doi: https://doi.org/10.1016/j.conb.2006.10.007.
    Latash, M.L. (2020). Physics of Biological Action and Perception. London, United Kingdom: Academic Press.
    Brent Cornell (no date). BioNinja. Available at: https://old-ib.bioninja.com.au/options/option-a-neurobiology-and/a4-innate-and-learned-behav/reflex-arcs.html (Accessed: 11 February 2025).
    Berni, D.J. (2023). The motor system. Introduction to Biological Psychology. Available at: https://openpress.sussex.ac.uk/introductiontobiologicalpsychology/chapter/the-motor-system/ (Accessed: 11 February 2025).
    Rossi, A.F. et al. (2008). The prefrontal cortex and the executive control of attention. Experimental Brain Research, 192(3), pp.489–497. doi: https://doi.org/10.1007/s00221-008-1642-z.
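    The error-signal idea in the voluntary-movement section above can be made concrete with a toy sketch: compare the intended reach with the actual reach and correct a fraction of the error on the next attempt. This is an illustrative exercise only, with made-up distances and a made-up correction gain; it is not a model taken from the article or its references.

    ```python
    # A toy sketch of the error-driven refinement described above: compare the
    # intended reach with the actual reach, and use the error to correct the next
    # attempt. The gain value and distances are made-up illustrative numbers.

    def refine_reach(intended: float, initial_guess: float, gain: float = 0.5, trials: int = 8) -> list[float]:
        """Iteratively reduce the reach error: each trial corrects by a fraction of the last error."""
        reaches = [initial_guess]
        for _ in range(trials):
            error = intended - reaches[-1]              # how far we missed, and in which direction
            reaches.append(reaches[-1] + gain * error)  # correct part of the error next time
        return reaches

    # Aim for an object 30 cm away, starting with a reach that overshoots to 40 cm.
    for attempt, reach in enumerate(refine_reach(intended=30.0, initial_guess=40.0)):
        print(f"trial {attempt}: reach = {reach:.2f} cm")
    ```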

  • Creatio ex Nihilo: a Christian creation doctrine including physics | Scientia News

    The intersection of physics and religion: the redshift and expanding galaxies

    At first glance, physics seems like a fairly straightforward field. Maths is the language that explains how everything in the universe behaves in a particular way. But the more you delve into the field, the more you realise that it actually intersects with all other fields - biology, neuroscience, philosophy, religion, etc. The example covered in this article is the creation of the universe.

    One of the subfields of physics is cosmology - the study of the universe, or cosmos, including its origin, development and fate. The most famous piece of modern work to come out of this field is the Big Bang theory. This is the suggestion that 13.8 billion years ago, the universe started out as a very hot, very dense point smaller than the size of an atom before it suddenly and rapidly expanded - bang! Out of this came everything. Every atom for all known and unknown things in the universe, all of the laws of time and space, literally everything came into existence in a big explosion of energy.

    How do we know this? Well, there is evidence for the Big Bang theory all over the universe, as far as physicists can tell. Light and particles travelling through the universe can provide information about where they came from. For example, if we study the light from other galaxies, we can see that the light is ‘red-shifted’: because the galaxies are moving away from us, their light shows up differently on the light spectrum than it would if they were very close. Think of it like dropping a stone in the middle of a pond. The ripples start out very close together, but as they move away from the centre they stretch out. Light does the same thing, and physicists can use this to determine how celestial objects are moving, which is how we know the universe continues to expand. Such evidence not only tells us a lot about the universe as it is now, but it also allows us to theorise about the universe's beginning.

    Unfortunately, this then raises the question: what caused the Big Bang? Better yet, what was there before the Big Bang? Nothing? Perhaps, but then how did everything in the universe come into being from nothing? It is questions like these that create an opportunity for other fields to join the conversation. One suggested answer to this particular question comes from the long-held Christian doctrine ‘creatio ex nihilo’, which is Latin for creation from, or out of, nothing. This concept is found in Genesis 1:1, ‘In the beginning God created the heavens and the earth.’ The suggestion is that first, there was nothing (which physics cannot prove or disprove). Then, God the Creator began the act of creation, which physics describes as the Big Bang. Physics cannot prove or disprove God as Creator either. Therefore, the argument is that the creatio ex nihilo doctrine is technically a valid possibility.

    Regardless of whether these theories are true or not, the topic of creation is an example of how physics works alongside other fields like religion or philosophy. Physics cannot necessarily answer all of the big questions, but it can certainly help provide information about the universe we live in.

    Written by Amber Elinsky

    Related article: The Anthropic Principle- Science or God?
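    The redshift argument above can be made quantitative with a small sketch: the fractional stretching of a known spectral line gives a redshift z, and for nearby galaxies the recession speed is roughly the speed of light times z. The wavelengths below are illustrative values for a hydrogen line, not measurements cited in the article.

    ```python
    # A minimal sketch of how the redshift mentioned above is quantified. For a
    # spectral line emitted at wavelength lambda_emit and observed at lambda_obs,
    # z = (lambda_obs - lambda_emit) / lambda_emit, and for small z the recession
    # speed is roughly v = c * z. The observed wavelength below is illustrative.

    C_KM_PER_S = 299_792.458  # speed of light

    def redshift(observed_nm: float, emitted_nm: float) -> float:
        """Fractional shift of an emission line toward longer (redder) wavelengths."""
        return (observed_nm - emitted_nm) / emitted_nm

    z = redshift(observed_nm=663.0, emitted_nm=656.3)   # hydrogen-alpha line stretched by ~1%
    print(f"z = {z:.4f}, recession speed ~ {C_KM_PER_S * z:.0f} km/s")
    ```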
