
Search Results


  • Behind the Mask

Behind the Mask By Yvette Marris 23 March 2022 Edited by Tanya Kovacevic Illustrated by Quynh Anh Nguyen

It would be hard to write about A Year in Science without the obligatory COVID article. We hear constantly about the stresses of being a frontline healthcare worker, the signs and symptoms of long COVID, and the endless vaccine scepticism. I’d like to tell a slightly different story. During the COVID pandemic, other infections didn’t just take a holiday and cancers didn’t just stop growing. More ordinary illness and injury continued behind the headlines. As a consequence of the pandemic, healthcare workers are additionally dealing with an abundance of patients, delays with diagnosis and some very complex medical cases.

Megan Gifford worked in a hospital that didn’t primarily treat COVID-19 patients, but still had to adapt to the constant changing of rules, regulations and policies put in place to protect staff and patients alike from the virus. Now at the Peter MacCallum Cancer Centre in Melbourne, Gifford spoke to me about her experiences working at Townsville University Hospital in the only bone marrow transplant ward servicing a large population across regional Queensland. Gifford experienced the stress and burden of trying not only to assuage her own anxieties, but also to provide up-to-date information to patients and deliver high-quality care. There were the frustrations of unavoidable logistical problems, like border closures and stay-at-home orders preventing access to crucial materials and patient transport. There was the heartbreak of watching transplant patients deteriorate mentally, as their will to persist with treatments began to fade. Pathologists and haematologists also found themselves facing an unprecedented logistical nightmare, including the re-allocation of diagnostic equipment and protective equipment for mass COVID testing. Access to essential biomedical material like blood and plasma became increasingly difficult, and many suffered as a result.

While pandemic consequences like long COVID and the increased prevalence of affective disorders, like depression and anxiety, are well documented in media and academia, post-traumatic stress disorder (PTSD) hasn’t received the same amount of attention. Statistics and anecdotes alike are staggering, both for patients and healthcare workers. With stressors like an unprecedented number of critically ill patients, capricious disease progressions, high mortality, and ever-changing treatment guidelines, the world was sympathetic to healthcare workers’ struggles (3). Yet with the lockdowns and restrictions over, it would be naïve to think everything would just return to normal. It was found that 29% of healthcare workers had clinical or sub-clinical symptoms of PTSD (1), and that this figure was significantly higher for healthcare workers directly treating COVID patients (2). Gifford recalled anecdotes of “patients suffering anxiety attacks when they smell the hospital alcohol rub and hear the familiar beeping of the various equipment”. Even beyond the mental health scope, logistical issues like delayed learning for medical students or the backlog of elective procedures are still placing an enormous burden on healthcare workers, despite the immediate threat seemingly behind us. But to say that everything remains in shambles would frankly be insulting to healthcare workers, who are working tirelessly to deliver good quality healthcare.
The speed at which pathologists and scientists have adapted to limited resources and supply shortages, and the way in which doctors and frontline workers have shifted their style of care and developed new problem-solving skills, are exceptional and should not go unnoticed or unappreciated. Importantly, the COVID-19 pandemic and its ripple effects have brought centre stage the consequences of under-resourced healthcare centres in a way that affected all people, irrespective of geography, class or reputation. The reality is that the conditions in which many metropolitan hospitals found themselves, with never enough staff or supplies, are conditions that some hospitals, particularly in rural settings, experienced long before COVID-19 ever appeared. To say that every dark cloud has a silver lining would be horribly cliché, but in this case, there may be truth to it. This edition of A Year in Science is a chance for us to reflect on all that COVID-19 has called attention to and decide to do something about it.

References
Carmassi C, Foghi C, Dell’Oste V, Cordone A, Bertelloni CA, Bui E, et al. PTSD symptoms in healthcare workers facing the three coronavirus outbreaks: What can we expect after the COVID-19 pandemic. Psychiatry Research. 2020 Oct;113312.
Janiri D, Carfì A, Kotzalidis GD, Bernabei R, Landi F, Sani G. Posttraumatic Stress Disorder in Patients After Severe COVID-19 Infection. JAMA Psychiatry. 2021 Feb.
Johnson SU, Ebrahimi OV, Hoffart A. PTSD symptoms among health workers and public service providers during the COVID-19 outbreak. Vickers K, editor. PLOS ONE. 2020 Oct 21;15(10):e0241032.

  • Proprioception: Our Invisible Sixth Sense | OmniSci Magazine

Proprioception: Our Invisible Sixth Sense by Ingrid Sefton 28 May 2024 Edited by Subham Priya Illustrated by Jessica Walton

What might constitute a sixth sense? Perhaps it involves possessing a second sight or superhuman abilities. A classic example of this would be Spider-Man and his ‘spidey-sense’ — an instinctual warning system that alerts him to imminent danger. Enhancing his reflexes and agility, his sixth sense enables him to evade threats with precision. It turns out Spider-Man is not the sole bearer of a ‘spidey sense’. While we may not be scaling walls anytime soon, we too possess a special sense that unconsciously guides our movements. It might sound peculiar, but knowing your arm is indeed your own arm involves a unique form of sensory processing. Considered by neuroscientists to be our own ‘sixth sense’, proprioception is the sense that helps the brain understand the position of our body and limbs in space (Sherrington, 1907).

Consider a typical scenario: your first sip of coffee in the morning. Eyes shut, you savour your latte before the day begins. Such a simple act, yet impossible without proprioception. With closed eyes, how do you know where your mouth is? How do you gauge the position of your arm to ensure the coffee cup reaches your lips? Proprioception seamlessly transmits information about muscle tension, joint position, and force to the brain, making drinking your coffee an automatic and coordinated process.

Proprioception operates on principles akin to those guiding our other senses. Specialised cells, known as receptors, are found in each sensory organ and receive information from the environment. Receptors in your eyes capture visual information, while those in your ears detect auditory stimuli. This sensory information is transduced through signals to the central nervous system – through the spinal cord and to the brain – where it’s integrated and processed to determine an appropriate response. Analogously, proprioceptive information is mediated by proprioceptors, a unique type of receptor located in your muscles and joints (Proske & Gandevia, 2012). Unlike our other senses, proprioception does not rely on input from the external environment. Rather, it provides feedback to the brain about what the body itself is doing. Changes in muscle tension and the position of our joints are relayed to the brain, ensuring awareness of the body’s whereabouts at any given moment.

One implication of this ‘internal’ feedback loop is that proprioception never turns ‘off’. When you cover your ears, you experience silence. If you hold your nose, you can block out a smell. Yet whether you are still, in motion, or unconscious, your brain continuously receives proprioceptive input. Imagine this in the context of going to bed each night. What exactly prevents you from falling out of bed once asleep? While most senses are subdued when sleeping, proprioception remains active, informing the brain about the slightest changes in the position of the body. This ensures a perpetual awareness of our body in space – and, luckily for us, stops us from rolling out of bed (Proske & Gandevia, 2012).

It can be hard to appreciate what our proprioceptive system allows us to do, given its unconscious nature and integration with our other senses. Rare neurological disorders affecting proprioception highlight just how critical this sense is in our daily lives.
The case of Ian Waterman – now known as ‘the man who lost his body’ – offers profound insights into the significance of proprioception (McNeill et al., 2009). Following a fever in 1971 at age 19, a subsequent auto-immune reaction destroyed all his sensory neurons from the neck down – a condition termed ‘neuronopathy’. Despite retaining intact motor function, Waterman lost all proprioceptive abilities, rendering him unaware of his body's position in space. Although the viral infection’s initial effect was that of immobility, this loss was not due to paralysis. Rather, it was Waterman’s lack of any sense of his body’s position that inhibited his ability to move. Sitting, walking, and manipulating objects became impossible tasks as a result of the absence of any proprioceptive feedback from the body. Remarkably, Waterman has been able to teach himself precise strategies to walk and function with a degree of normality (Swain, 2017). Yet all movement requires concerted planning and relies entirely on vision to compensate for the lost unconscious proprioceptive processing. In the absence of any light, Waterman is unable to see his limbs, thus restricting his ability to move.

The molecular mechanisms underlying proprioception remain somewhat of a mystery compared to those of our other senses. However, recent genetic advancements are paving the way for the development of novel therapies aimed at neurological and musculoskeletal disorders (Woo et al., 2015). A study involving two young patients with unique neurological disorders affecting their body awareness revealed a mutation in their PIEZO2 gene (Chesler et al., 2016). Both individuals experienced significant challenges with balance and movement, coupled with progressive scoliosis and deformities in the hips, fingers, and feet. The PIEZO2 gene typically encodes a type of mechanosensitive protein in cells, responsible for generating electrical signals in response to alterations in cell shape (Coste et al., 2010). Mutations to this gene prevent signal generation and render the neurons incapable of detecting limb or body movement. These findings firmly establish PIEZO2 as a critical gene for facilitating proprioception in humans, a sense that is crucial for bodily awareness. PIEZO2 mutations have also been implicated in genetic musculoskeletal disorders (Coste et al., 2010). The joint problems and scoliosis experienced by the patients in the study suggest that proprioception may also indirectly guide skeletal development. These insights into the role of the PIEZO2 gene in proprioception and musculoskeletal development open up promising avenues for understanding and treating neurological and musculoskeletal disorders.

It’s more than fitting to regard proprioception as our sixth sense. The capacity of our nervous system to seamlessly process vast amounts of information from our joints and muscles, all without any conscious effort on our part, is truly remarkable. So, the next time you have that eyes-shut first sip of coffee, give yourself a pat on the back. With your sixth sense at play, you’re clearly a superhero!

References
Chesler, A. T., Szczot, M., Bharucha-Goebel, D., Čeko, M., Donkervoort, S., Laubacher, C., Hayes, L. H., Alter, K., Zampieri, C., Stanley, C., Innes, A. M., Mah, J. K., Grosmann, C. M., Bradley, N., Nguyen, D., Foley, A. R., Le Pichon, C. E., & Bönnemann, C. G. (2016). The Role of PIEZO2 in Human Mechanosensation. N Engl J Med, 375(14), 1355-1364. https://doi.org/10.1056/NEJMoa1602812
Coste, B., Mathur, J., Schmidt, M., Earley, T. J., Ranade, S., Petrus, M. J., Dubin, A. E., & Patapoutian, A. (2010). Piezo1 and Piezo2 are essential components of distinct mechanically activated cation channels. Science, 330(6000), 55-60.
McNeill, D., Quaeghebeur, L., & Duncan, S. (2009). IW – “The Man Who Lost His Body” (pp. 519-543). https://doi.org/10.1007/978-90-481-2646-0_27
Proske, U., & Gandevia, S. C. (2012). The Proprioceptive Senses: Their Roles in Signaling Body Shape, Body Position and Movement, and Muscle Force. Physiological Reviews, 92(4), 1651-1697. https://doi.org/10.1152/physrev.00048.2011
Sherrington, C. S. (1907). On the proprio-ceptive system, especially in its reflex aspect. Brain, 29(4), 467-482.
Swain, K. (2017). The phenomenology of touch. The Lancet Neurology, 16(2), 114. https://doi.org/10.1016/S1474-4422(16)30389-1
Woo, S. H., Lukacs, V., de Nooij, J. C., Zaytseva, D., Criddle, C. R., Francisco, A., Jessell, T. M., Wilkinson, K. A., & Patapoutian, A. (2015). Piezo2 is the principal mechanotransduction channel for proprioception. Nature Neuroscience, 18(12), 1756-1762. https://doi.org/10.1038/nn.4162

  • Love and Aliens

Love and Aliens By Gavin Choong 10 September 2022 Edited by Khoa-Anh Tran and Niesha Baker Illustrated by Ravon Chew

Neither Daniel Love nor Brendan Thoms were Australian citizens, but they were both recognised as First Nations Australians by law. Under legislation, “aliens” who commit crimes with a sentence of over a year may be removed from the country. (1) Due to their non-citizenship, the then Minister for Home Affairs Peter Dutton classified these men as aliens and tried to deport them after they were convicted of serious crimes. This attempt failed. The High Court of Australia ruled, in the hotly contested landmark decision of Love v Commonwealth, that Indigenous Australians could not be considered aliens under Australian law because of the “spiritual connection” they hold with the lands and waters of the country we live in. (1) Effectively, this barred the deportation of Love and Thoms, but also sent astronomical ripples through the fabric of our nation’s legal framework.

This year, major challenges to the decision made in Love v Commonwealth have arisen. Of the arguments put forward, some protest the judicial activism of the judges – that is, their going above and beyond written law to produce a fairer ruling. For example, many contend the term spiritual connection bears no actual legal meaning. However, with a history dating back upwards of seventy-thousand years, two hundred and fifty languages and eight hundred dialects, complex systems of governance, deeply vested religious and spiritual beliefs, and a profound understanding of land, it would be ignorant to argue this rich culture should simply be disregarded in the face of the law. This article adopts a scientific lens and delves into an empirical basis for the spiritual connection Aboriginal Australians share with country, traversing from Dreamtime to spacetime and beyond.

THE DREAMING: FROM NOTHING, EVERYTHING

From nothing came everything. Nearly fourteen billion years ago, a zero-volume singularity held, tightly, all the energy, space, and time of our current universe. In the moment of creation, temperatures and average energies were so extreme that all four fundamental forces which shape the universe as we know it acted as one. Cosmological inflation followed, allowing for exponential expansion and rapid cooling. Within a picosecond, the four fundamental forces of nature – gravity, electromagnetism, weak interactions, and strong interactions – emerged independently. These forces interacted with matter, resulting in the formation of the elementary particles now known as quarks and leptons. For twenty more minutes, these particles coupled to form composite subatomic particles (hadrons such as protons and neutrons), which in turn underwent nuclear fusion to create the nuclei of simple early atoms such as hydrogen and helium. From nothing, came everything.

In an eternal present, where there had once been flat and barren ground, Ancestral and Creator spirits emerged from land, sea, and sky to roam the Earth. As they moved, man and nature – mountains, animals, plants, and rivers – were birthed into existence. Once these spirits had finished, instead of disappearing, they transformed into the world they had created, existing in sacred sites such as the night sky, monolithic rocks, and ancient trees. The Dreaming is a First Nations peoples’ understanding of the world and its creation. Importantly, it is an event which cannot be fixed in time – “it was, and is, everywhen,” continuing even today.
Countless retellings have caused Dreamtime tales to diverge slightly, leading communities of Aboriginal Australians to identify with different variations of similar stories. (2) These fables refer to natural worldly features and sacred sites, whilst also incorporating favourable values such as patience, humility, and compassion. An example is the tale of the Karatgurk, told by the Wurundjeri people of the Kulin nation, about seven sisters representing what we now know as the Pleiades star cluster. (3)

The Karatgurk
These seven sisters once lived by the Yarra River, where Melbourne now stands. They alone possessed the secret of fire, carrying live coals at the end of their digging sticks. Crow (“trickster, cultural hero, and [another] ancestral being”) called the sisters over, claiming he had discovered tasty ant larvae. (3) The women began scouring the ground, only to find vicious snakes underneath the dirt, which they beat using their digging sticks. As they did so, the live coals flew off and were stolen by Crow, who brought fire to mankind. The Karatgurk sisters were swept into the sky, with their glowing fire sticks forming the Pleiades star cluster.

In theory, the extreme physical reactions occurring minutes after the Big Bang, paired with hyper-rapid cosmic inflation, should have resulted in a completely homogeneous universe with an even distribution of all existing matter and energy. Cosmological perturbation theory explains, however, that micro-fluctuations in the density of matter create gravitational wells, resulting in the random grouping of matter. These aggregations formed the first stars, quasars, galaxies, and clusters throughout the next billion years. It took, however, another ten billion years for the solar system to form. Similar to Saturn’s planetary rings, the early Sun had its own rotating, circumstellar disc composed of dust, gas, and debris. According to the nebular hypothesis, over millions of years, enough particulates coagulated within the Sun’s spinning disc to form small, primordial planets. Early Earth was a hellish fire-scape as a result of constant meteoric bombardment and extreme volcanic activity. The occasional icy asteroids which collided with Earth deposited large amounts of water, vaporising upon contact – as our planet began to cool, these gaseous deposits condensed into oceans, and molten rock solidified into land mass.

In the blink of an eye, early traces of modern humans fluttered into existence at the African Somali Peninsula. They were a nomadic people, travelling westwards and then north through modern-day Egypt and into the Middle East. Ancestral Indigenous Australians were amongst the first humans to migrate out of Africa some 62,000 to 75,000 years ago. While other groups travelled in different directions, filling up Asia, Europe and the Americas, ancestral Indigenous Australians took advantage of drastically lower sea levels during that time to travel south, as, back then, mainland Australia, Tasmania, and Papua New Guinea formed a single land mass (Sahul) while South-East Asia formed another (Sunda). Even so, the wanderers still had to possess the requisite sea-faring skills to traverse almost ninety kilometres of open ocean. When the last ice age ended 10,000 years ago, rising waters from melting ice caps covered many of the terrestrial bridges early humans had once journeyed over.
This severing allowed Indigenous Australians to foster culture and tradition in their very own passage of time, uninterrupted and independent until a British fleet of eleven ships approached Botany Bay thousands of years later. Significant parts of Australia’s coast were also submerged due to ice age flooding. As coastal Indigenous Australians observed this phenomenon, they recognised its significance through their tales. The Gimuy Walubara Yidinji, traditional custodians of Cairns and the surrounding district, are one of the many groups which reference coastal flooding in their geomythology.

Gunya and the Sacred Fish
Gunya, who had lived on Fitzroy Island, went out to hunt for fish one day. Spotting a glimmer in the water, he plunged a spear towards it, only to find he had attacked the sacred black stingray. The stingray beat its wing-like fins, causing a great, unending storm. Gunya fled from the rapidly rising sea and managed to find refuge in a clan living on the cliffs of Cairns. Together, they heated huge rocks in a fire and threw them far into the sea. The Pacific was once again pacified, and the Great Barrier Reef was created.

Isaac Newton proposed, in Principia Mathematica, that the strength of the force of gravity between two celestial bodies would be proportional to the product of their masses and would weaken with the square of the distance between them. At the beginning of the twentieth century, Albert Einstein refined this concept with the theories of Special and General Relativity. His mathematical models suggested time and space were woven into a four-dimensional canvas of spacetime, and the presence of massive objects such as black holes and stars created gravitational wells which distorted spacetime. Within these distortions, bodies closer to large masses would experience time and space differently from those further away. This unique phenomenon, for example, means astronauts living onboard the International Space Station age fractionally slower relative to us grounded on Earth. Einstein was also able to find that as the velocity of any given body increased to near the speed of light, it would gain an almost-infinite mass and experience a drastically slowed perception of time relative to its surroundings. These once inconceivable findings had monumental implications in the sphere of theoretical physics, with two examples below. (4, 5)

Dark Matter
The ‘visible’, baryonic matter humanity is familiar with makes up less than a fifth of the matter in the known universe, with hypothetical ‘dark’, non-baryonic matter comprising the rest. Dark matter lies between and within galaxies, driving baryonic matter to aggregate, forming stars and galaxies. As it cannot be detected using electromagnetic radiation, gravitational lensing provides the strongest proof of its existence. Gravitational lensing occurs when there is an interfering body between us, here on Earth, and a given target. As per Einstein’s relativity, the interfering body has mass which will bend space and therefore distort the image we receive of the target. There exists a mathematically proportional relationship between mass and distortion – the more massive an interfering body, the greater the distortion. Scientists performed calculations but found that the levels of distortion they observed corresponded to masses much greater than the visible mass of the interfering body. Dark matter accounts for this invisible and undetectable missing mass.
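For readers curious about the numbers behind this prose, the two relationships alluded to above can be written out explicitly. These are the standard textbook expressions rather than formulas quoted in the original article:

```latex
% Newton's law of universal gravitation: the attractive force between two
% bodies of masses m_1 and m_2 separated by a distance r.
F = \frac{G\, m_1 m_2}{r^{2}}

% General-relativistic deflection of light grazing a mass M at a closest
% approach b. Lensing surveys invert this relation to estimate M from the
% observed distortion.
\alpha = \frac{4\, G M}{c^{2}\, b}
```

Comparing the mass inferred from the measured deflection with the mass actually visible through telescopes gives the discrepancy that dark matter is invoked to explain.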
String Theory
At its core, quantum physics deals with interactions at the atomic and subatomic level. This body of work has borne unusual findings – including that light can act both as a particle and a wave, that we may never identify a particle’s position and momentum simultaneously with complete certainty, and that the physical properties of distant entangled particles can be fundamentally linked. On paper, however, there has been great difficulty reconciling quantum physics with relativity theory, as the former deals with interactions which occur in “jumps…with probabilistic rather than definite outcomes”. (4) String theory, however, seeks to settle this tension by proposing the universe is comprised of one-dimensional vibrating strings interacting with one another. This theoretical framework has already borne fascinating fruit – it has been hypothesised that the universe has ten dimensions (nine spatial, one temporal) and that, during the Big Bang, a “symmetry-breaking event” caused three spatial dimensions to break from the others, resulting in an observable three-dimensional universe. (5)

On 21 September 1922, astronomers in Goondiwindi, Queensland, used a total solar eclipse to successfully test and confirm Einstein’s theory of relativity. Aboriginal Australians present believed they were “trying to catch the Sun in a net”. (6) Western academics were far from the only ones who sought to explain natural phenomena. From the ancient Egyptians to Japanese Shintoists and South American Incas, many civilisations of the past revered the Sun and Moon, having been enthralled by the two celestial bodies. Indigenous Australians were one such people, wanting to understand why the sun rose and set, how moon cycles and ocean tides were related, and what exactly the rare solar and lunar eclipses were. Such occurrences had a mystical property about them, reflected in a rich collection of traditional tales which sought to illuminate these astronomical observations. (7)

Walu the Sun-woman
Told by the Yolngu people of Arnhem Land. Walu lights a small fire every morning to mark that dawn has arrived. She paints herself with red and yellow pigment, some of which spills onto the clouds to create the sunrise. Walu lights a bark torch and carries it across the sky from East to West, creating daylight. Upon completing her journey, she extinguishes her torch and travels underground back to the morning camp in the East. While doing so, she provides warmth and fertility to the very Earth surrounding her.

Ngalindi the Moon-man
Told by the Yolngu people of Arnhem Land. “Water fill[s] Ngalindi as he rises, becoming full at high tide”. (6) When full, he becomes gluttonous and decides to kill his sons because they refuse to share their food with him. His wives seek vengeance by chopping off his limbs, causing water to drain out. This is reflected by a waning moon and an ebb in the tides. Eventually, Ngalindi dies for three days (New Moon) before rising once again (waxing Moon).

Bahloo and Yhi
Told often by the Kamilaroi people of northern New South Wales. Yhi (Sun-woman) falls in love with Bahloo (Moon-man) and tries to pursue him across the sky. However, he has no interest in Yhi and refuses her advances. Sometimes, Yhi eclipses Bahloo and tries to kill him in a fit of jealousy, but the spirits holding up the sky intervene, allowing Bahloo to escape.

In 1788, British colonists invoked the fictitious doctrine of terra nullius, which treated land occupied by Indigenous peoples as “territory belonging to no-one,” susceptible to colonisation.
(8) It is apparent, however, that Indigenous Australians did and still do belong, having a greater and more nuanced relationship to our lands and waters than we can ever hope to have. This article shows that, as detailed and prescriptive as our modern scientific understanding is, First Nations peoples hold an equally rich, if not richer, perspective, woven through their stories, languages, and practices. To argue that the spiritual connection Indigenous people share with country should not be recognised by law would be to wilfully make the same mistake our early settlers made two and a half centuries ago. It would be allowing the continuance of intergenerational trauma and suppression. For those reasons, despite the assertive legal challenges being brought against Love v Commonwealth, its judgement must be upheld.

References
1. Love v Commonwealth; Thoms v Commonwealth [2020] HCA 3.
2. Stanner WE. The Dreaming & other essays. Melbourne (AU): Black Inc.; 2011.
3. Creation Stories [Internet]. Victoria: Taungurung Lands & Waters Council [cited 2022 Apr]. Available from: https://taungurung.com.au/creation-stories/
4. Powell CS. Relativity versus quantum mechanics: the battle of the universe [Internet]. The Guardian; 2015 Nov 4 [cited 2022 Apr 17]. Available from: https://www.theguardian.com/news/2015/nov/04/relativity-quantum-mechanics-universe-physicists
5. Wolchover N. String theorists simulate the Big Bang [Internet]. Live Science; 2011 Dec 14 [cited 2022 Apr 17]. Available from: https://www.livescience.com/17454-string-theory-big-bang.html
6. Hamacher DW. On the astronomical knowledge and traditions of Aboriginal Australians [thesis submitted for the degree of Doctor of Philosophy]. [Sydney]: Macquarie University; 2011. 139 p.
7. Mathematics, moon phases, and tides [Internet]. Melbourne: University of Melbourne [cited 2022 Apr 17]. Available from: https://indigenousknowledge.unimelb.edu.au/curriculum/resources/mathematics,-moon-phases,-and-tides
8. Mabo v Queensland (No 2) [1992] HCA 23.

  • PT | OmniSci Magazine

PT by Saachin Simpson 1 July 2023 Edited by Caitlin Kane, Rachel Ko and Patrick Grave Illustrated by Jolin See 'Pt' (medical abbreviation for ‘patient’) recounts a patient visit on an early-morning ward round at Footscray Hospital in my first placement as a second-year medical student. The line “I came to hospital with my innocence” was actually said by the patient and stuck with me, eventually inspiring this poem, which I wrote in a Narrative Medicine class run by Dr Fiona Reilly and Dr Mariam Tokhi. The poem depicts a dramatic rise and fall in tension during the patient visit. It is bookended by soulless technical medical abbreviations that exemplify patient notes on electronic medical records. Pt Pt alert and oriented, sitting upright in chair. Breathing comfortably, responsive to questions. Bilat basal creps, bilat pitting oedema to knee. Pt gazes out window at the opposite concrete wall Pt’s cataracts suddenly shimmer, a sorcerer’s crystal ball. Pt need not speak for his stony grimace conveys Pt’s sheer and utter avowal of his final dying days. Pt’s power becomes apparent in his mighty ocular grip Pt’s lungs echo black tattered sails of a ramshackle timber ship. “I came to hospital with my innocence” Professional, qualified eyes dart from computer To patient And back. “and now I muse on dark and violent tricks” Med student looks at intern looks at reg looks at consultant. Feet shuffle, lips purse Pretending not to hear. “Your poisons gift no remedy, your words fat and hollow” Like a serpentine hiss, his derision rings through sterile air 5-step Therapeutic Guidelines for Reassurance (vol 23.4, updated 2023) does little for his despair. Pt need not speak for his stony grimace conveys Pt’s sheer and utter avowal of his final dying days. Pt need not speak for his stony grimace conveys Pt’s sheer and utter avowal of his final dying days. Pt to await GEM. Frusemide 40mmHg. Cease abx. Refer physio. Refer OT. Call family. For d/c Monday.

  • Neuralink: Mind Over Matter? | OmniSci Magazine

Neuralink: Mind Over Matter? by Kara Miwa-Dale 22 October 2024 edited by Weilena Liu illustrated by Aisyah Mohammad Sulhanuddin What if I told you that you could control a computer mouse with just your thoughts? It sounds like something straight out of a sci-fi movie, doesn’t it? But this isn’t fiction… Welcome to the brain-computer interface, a device which is able to record and interpret neural activity in the brain, enabling direct communication between your mind and a computer. Tech billionaire Elon Musk founded ‘Neuralink’, a company developing coin-sized brain-chips that can be surgically inserted into the brain using a robot. Neuralink made headlines a few months ago by successfully implanting their brain-chip, dubbed ‘Telepathy’, into their first trial patient, Noland Arbaugh. While there were a few technical glitches, it seems to be working relatively well so far. Noland has been able to regain some of the autonomy that he lost following a devastating spinal cord injury. He is even able to play video games with a superhuman-like reaction speed, thanks to the more direct communication route between the Neuralink implant and his computer. But it doesn’t stop there; Elon Musk’s ultimate vision is to have millions of people using Neuralink in the next 10 years, not only to restore autonomy to those with serious injuries, but to push the boundaries of what the human brain is capable of. He thinks that Neuralink will allow us to compete with AI and vastly improve our speed and efficiency of communication, which is ‘pitifully slow’ in comparison to AI. Neuralink implants may seem like an incredible leap in scientific technology, but what will happen if they become normalised in our society? Let’s imagine for a moment … Jade, April 7th 2044 Shoving my jacket into my bag, I dart out of the hospital and pull onto the main road in my Tesla. As I speed past the intersection, I see a giant advertisement plastered on a sleek building: ‘Neuralink: Seamless Thoughts, Limitless Possibilities’. When I signed up to get a Neuralink implant, all I’d thought about were the infinite possibilities of how it would change my life – not what could go wrong. I wish I could say that I was brainwashed into getting a Neuralink, or that I had no choice in the matter. But the truth? I got an implant so that I could be ‘ahead of the crowd’ and because I was so frustrated at feeling inadequate compared to the other doctors at my hospital. When I graduated medical school, at the top of my class, people told me that I would do ‘great things’ and ‘change the world’. I followed the standard path, landing my first job and climbing the ranks one caffeine-fuelled shift at a time. I loved my job. Every time I saved a life, it felt like all my effort had paid off. Then Neuralink happened. I still remember the day Dr Maxwell - a doctor I worked with - proudly announced that he’d ‘bitten the bullet’ and gotten the implant. Over the coming weeks, we watched in awe: his diagnoses were quicker and more accurate than any human could imagine, and he went home as energetic as he’d arrived. Now, the extra hours I spent figuring out tricky cases were no longer a representation of my work ethic, but a symptom of my inadequacy compared to the Neuralink-enhanced doctors. One by one, my colleagues signed up for the implant. I hated the thought of having something foreign nestled in my brain, recording my brain’s neurons every second of the day. I told myself I wouldn’t let peer pressure get to me.
But, as I watched those around me get promoted while I continued to work endless days, the frustration started to build. One afternoon, the department head came into my office to tell me that they were reconsidering the renewal of my contract. I wasn’t ‘keeping up’ with my Neuralink-enhanced colleagues. “We respect your personal decision, of course,” she said with hollow politeness. I wasn’t keen on being pressured into it, but at the same time, I genuinely believed that the implant would improve my life. When I told my friends and family about getting an implant, they were concerned. They tried to list all the things that could go wrong, but I came up with enough reasons to convince myself that it was the right decision. Once they saw how incredible the Neuralink device was, I thought, they would want one too. *** I’m jolted back to reality as the car veers slightly left, and I manually yank the wheel to correct it. Perhaps my implant glitched for a second… *** Everything changed after I had my Neuralink implanted. I was the only person in my family who had one, although a couple of friends did. At first, I felt invincible. The phenomenal speed with which I was able to come up with previously challenging diagnoses was thrilling. I was able to process enormous amounts of data and draw connections that I had never been able to before. It was addictive to feel that I was working at my full potential, using my newfound ‘superpower’ to save more lives than ever. About a month in, my thoughts began racing uncontrollably, until I felt like I was drowning in a flood of information. Sometimes, the input was so overwhelming that my head pounded and I struggled to breathe. My thoughts didn’t even feel like mine anymore. Family and friends started to grow more and more distant from me. This device was stuck inside my brain like superglue, and sometimes I just wanted to dig it right out of my skull. When I asked the doctor about removing it, he looked at me and smirked, “Why on earth would you want to get rid of such a game-changing device? Neuralink’s the new normal, honey. Get used to it.” *** A honk startles me as a car zooms past, nearly colliding with mine. I turn into a quieter street to regain my composure. But then – suddenly – thoughts of accelerating the car bombard my mind – so loud that I can barely hear myself think. The speedometer rises from 60 to 80 to 100 km an hour. I desperately try to disconnect my Neuralink from the car, to manually override the system – anything that will slow the car down. I start pushing random buttons hoping that I will get some kind of response. A red light flashes on my dashboard. ERROR. SIGNAL DISRUPTED BY UNKNOWN USER. I look up and meet the panicked eyes of a woman pushing a man in a wheelchair. Noah, April 7th 2044 The sun makes its final, glorious descent below the horizon, painting a beautiful array of pinks and oranges across the sky. I take a deep breath as Sophia, my support worker, pushes me along the road. We’re on our way to the grocery store, just in time for the end of day specials, which are all I can afford right now. Since my accident, I’ve tried my best to appreciate what I have, but it isn’t easy. Some days, I’m filled with rage as I struggle to complete daily tasks that I did on autopilot before my accident – back when I wasn’t confined to a wheelchair. It’s been hard to come to terms with this new body that I’m stuck with, and all the ways it seems to betray me. 
I miss the simple things – going to the grocery store by myself or playing board games with friends. But most of all, I miss working as an architect. I loved seeing my clients’ faces light up as they imagined the memories they would make in the new homes I had designed. This sense of satisfaction was taken from me the moment I was paralysed from the neck down. It’s why I’m so desperate to get a Neuralink implant. I would get one right this second if they weren’t so expensive. The Neuralink device isn’t covered by my insurance because the government claims that it wouldn’t be ‘cost effective’. While it won’t restore movement in my arms and legs, this implant would give me some precious freedom back. Maybe if I keep saving and take out a loan, I’ll have just enough to cover it and get my life back … *** “God, these Tesla drivers think they own the road!” I chuckle at Sophia, as a Tesla races towards the crossing in this 40km zone. As we begin to cross the road, I realise that the Tesla is showing no signs of slowing down. The car swerves violently, hurtling towards us without mercy. Sophia’s face pales as she frantically tries to push me out of the road. I squeeze my eyes shut, bracing for impact.

  • ABOUT US | OmniSci Magazine

About Us
OmniSci Magazine is a science magazine at the University of Melbourne, run entirely by students, for students. Our team consists of talented feature writers, columnists, editors, graphic designers, and social media and web development officers, all passionate about communicating science!
Editors-in-Chief: Ingrid Sefton (President), Aisyah M. Sulhanuddin (President)
Current Committee: Lauren Zhang (Secretary), Andrew Shin (General Committee), Ethan Bisogni (Treasurer), Luci Ackland (General Committee), Kara Miwa-Dale (Events and Socials), Hendrick Lin (General Committee), Elijah McEvoy (Events and Socials)
Past Editors-in-Chief: Rachel Ko (2022-2024), Sophia Lin (2021-2022), Patrick Grave (2021-2023), Maya Salinger (2021-2022), Caitlin Kane (2022-2023), Felicity Hu (2021-2022), Yvette Marris (2022-2023)

  • OmniSci Magazine

Issue 7: apex (Cover Art: Ingrid Sefton)
Welcome to OmniSci Magazine. OmniSci Magazine is a student-led science magazine and social club at UniMelb. We are a group of students passionate about science communication, and a platform for students to share their creativity.
Previous issues: Issue 6 (Illustration by Louise Cen)
National Science Week 'SCIENCE IS EVERYWHERE' photo/art competition

  • The Rise of The Planet of AI | OmniSci Magazine

The Rise of The Planet of AI By Ashley Mamuko
When discussing AI, our minds instinctively turn to fears of sentience and robotic uprising. However, is our focus misplaced on the “inevitable” humanoid future when AI has become ubiquitous and undetectable in our lives? Edited by Hamish Payne & Katherine Tweedie Issue 1: September 24, 2021 Illustration by Aisyah Mohammad Sulhanuddin

On August 19th 2021, Tesla announced a bold project on its AI Day. The company plans to introduce humanoid robots for consumer use. These machines are expected to perform basic, mundane household tasks and streamline easily into our everyday lives. With this new release, the future of AI seems to be closing in. No longer do we stand idle, expecting the inevitable humanoid-impacted future. By 2022, these prototypes are expected to launch.

It seems inevitable that our future will include AI. We have already familiarised ourselves with this emerging technology in the media we continue to enjoy. WALL-E, Blade Runner, The Terminator, and Ex Machina are only a few examples of the endless list of AI-related movies, spanning decades and detailing both our apprehension and our acceptance. Most of these movies portray these machines as sentient yet intrinsically evil, as they pursue human destruction. But to further understand the growing field of AI, it’s important to first briefly introduce its history and development before noting the growing concerns played up in the Hollywood blockbusters.

The first fundamental interpretations of Artificial Intelligence span a vast period of time. Its first acknowledgement may be attributed to the Catalan poet and theologian Ramon Llull, whose 1308 work Ars generalis ultima (The Ultimate General Art) advanced a paper-based mechanical process that creates new knowledge from a combination of concepts. Llull aimed to create a method of deducing logical religious and philosophical truths numerically. In 1642, French mathematician Blaise Pascal invented the first mechanical calculating machine: the first iteration of the modern calculator (1). The Pascaline, as it is now known, only had the ability to add or subtract values using a dial and spoke system (2). Though these two early ideas do not match our modern perceptions of what AI is, they laid the foundation for pushing logical processes beyond purely mechanical means. These two instances in history foreshadow the use of mechanical devices in performing human cognitive functions.

Not till the 1940s and early 1950s did we finally obtain the necessary means for more complex data-processing systems. With the introduction of computers, the novelty of algorithms created a more streamlined way of storing, computing, and producing information. In 1943, Warren McCulloch and Walter Pitts founded the idea of artificial neural networks in their paper “A Logical Calculus of Ideas Immanent in Nervous Activity” (3). This presented the notion of computers behaving similarly to a human mind and laid the groundwork for the subfield now called “deep learning”. In 1950, Alan Turing proposed a test to assess a human’s ability to differentiate between human and machine behaviour. The Turing Test (originally framed as the Imitation Game) asked participants to identify whether the dialogue they were engaging in was with another person or a machine (4). Despite the breakthroughs made in this area, the term Artificial Intelligence wasn’t coined until 1955, by John McCarthy.
Later on, McCarthy, along with many other budding experts, would hold the famous 1956 Dartmouth College Workshop (5). This meetup of a few scientists would later be pinpointed in history as the birth of the AI field. As the field continued to grow, more public concerns were raised alongside the boom in science fiction literature and film. The notorious 1968 movie 2001: A Space Odyssey so shaped the public perception of the field that, through the late 1960s and 1970s, an AI Winter occurred: very little notable progress was made in the field due to a fear-driven lack of funding (6). Finally, after some time had passed and more advancements were made in algorithmic technology, came the notable Deep Blue chess match against Garry Kasparov. The event, in May 1997, in which the Deep Blue machine beat the world champion chess superstar, quietly ushered in talk of a possible “decline in human society” at the hands of the machine.

Fast forward to now: AI has advanced in leaps and bounds to achieve a much more sophisticated level of algorithms and machine learning techniques. To further understand the uses of AI, I interviewed Dr Liz Sonenberg, a professor in the School of Computing and Information Systems at The University of Melbourne and Pro Vice-Chancellor (Research Infrastructure and Systems) in Chancellery Research and Enterprise. She is an expert in the field with an extensive research record. "Machine learning is simply a sophisticated algorithm to detect patterns in data sets that has a basis in statistics." Such algorithms have been implemented across a variety of our daily tech encounters: AI sits behind Google Maps and navigation, as well as voice control. It can easily be found anywhere. “Just because these examples do not exhibit super intelligence, does not mean they are not useful,” Dr Sonenberg explains.

Dr Sonenberg suggests that the real problem with AI lies in its fairness. These “pattern generating algorithms” at times “learn from training sets not representative of the whole population, which can end up with biased answers.” With a flawed training set, a flawed system is in place. This can be harmful to certain demographics and sway consumer habits. With AI-aided advice, the explanation behind outcomes and decisions is often missing as well: algorithms are only able to mechanically produce an output, but not explain it. With more high-stakes decisions entrusted to the reliability of AI, the issue of flawed algorithms becomes more pronounced.
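To make the training-set point concrete, here is a minimal, purely illustrative sketch. It is not from the article or the interviews; the "model", the groups, and every number are invented solely to show how an unrepresentative sample skews outcomes for the group it under-represents:

```python
# Toy demonstration of bias from an unrepresentative training set.
# All data are fictional; the "model" is just a statistical threshold.
import random
import statistics

random.seed(0)

def simulate_group(mean, n):
    """Generate n made-up 'biomarker' readings for one population group."""
    return [random.gauss(mean, 1.0) for _ in range(n)]

# Two groups whose healthy readings genuinely differ.
group_a_healthy = simulate_group(mean=5.0, n=1000)
group_b_healthy = simulate_group(mean=7.0, n=1000)

# Training set drawn almost entirely from group A -- the unrepresentative
# sample the interview warns about.
training_set = group_a_healthy[:950] + group_b_healthy[:50]

# "Model": flag any reading more than two standard deviations above the
# training mean as abnormal.
mu = statistics.mean(training_set)
sigma = statistics.stdev(training_set)
threshold = mu + 2 * sigma

def false_alarm_rate(healthy_readings):
    """Fraction of healthy people the model wrongly flags as abnormal."""
    return sum(x > threshold for x in healthy_readings) / len(healthy_readings)

print(f"False alarms, group A: {false_alarm_rate(group_a_healthy):.1%}")
print(f"False alarms, group B: {false_alarm_rate(group_b_healthy):.1%}")
```

Because the "normal range" is learned almost entirely from group A, healthy members of group B are flagged far more often even though both groups are equally healthy: a flawed training set quietly becomes a flawed system.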
At no point in my interview with Dr Sonenberg was the fear of super-intelligence, robot uprisings, and the like brought up... With this new-found knowledge of AI’s current concerns, I conducted another interview with Dr Tim Miller, a Professor of Computer Science in the School of Computing and Information Systems at The University of Melbourne, and Dr Jeannie Paterson, a Professor teaching subjects in law and emerging technologies in the School of Law at The University of Melbourne. Both are also Co-Directors of The Centre for Artificial Intelligence and Digital Ethics (CAIDE). As we began the interview, Dr Miller explained again that AI “is not magic” and implements the use of “math and statistics”. Dr Paterson was clear to bring up that anti-discrimination laws have long been in place, but as technology evolves and embeds itself further into the public domain, it must be scrutinised.

The deployment of AI can easily cause harm to people due to systems not being public, making sources of harm difficult to identify and causally attribute. With the prospect of biased algorithms, a difficult tension arises. Dr Miller elaborated on the use of AI in medical imaging in private hospitals. As private hospitals tend to attract a certain echelon of society, the training set is not wholly representative of the greater population. “A dilemma occurs with racist algorithms… if it is not used [outcomes] could be worse.”

When the idea of a potential super-intelligent robot emerging in the future was brought into conversation, the two didn’t seem to be very impressed. “Don’t attribute superhuman qualities [to it],” says Dr Paterson. Dr Miller states that the trajectory of AI’s future is difficult to map. Past predictions of how AI’s abilities would progress have come to pass, but much later than expected… easily decades later. The idea of super-intelligence also poses the question of how to define intelligence. “Intelligence is multidimensional, it has its limits,” says Dr Miller. In this mystical future world of AI, a distinction is placed not just on “what will machines be able to do, but what will we not have them do,” states Dr Miller. “This regards anything that requires social interaction, creativity and leadership”; so the future is aided by AI, not dictated by it.

However, in the nearer future, some very real concerns are posed. Job security, influence on consumer habits, transparency, legal approaches, and accountability are only a few. With more and more jobs being replaced by machines, every industry is at stake. “Anything repetitive can be automated,” says Dr Miller. But this is not necessarily a negative, as more jobs will be created to further aid the use of AI. And not all functions of a job can be replaced by AI. Dr Paterson explains with the example of radiology that AI is able to diagnose and interpret scans, but a radiologist does more than just diagnose and interpret on a daily basis. “The AI is used to aid in the already existing profession, not simply overtake it.”

Greater transparency is needed in showing how AI uses our data. “It shouldn’t be used to collect data unlimitedly,” says Dr Paterson, “is it doing what’s being promised, is it discriminating people, is it embedding inequality?” With this in mind, Dr Paterson suggests that more legal authorities should be educated on how to approach topics regarding AI. “There needs [to be] better explanation… [We] need to educate judges and lawyers.”

With the notorious Facebook-Cambridge Analytica scandal of 2018, the big question of accountability was raised. The scandal involved the unwarranted use of data from 87 million Facebook users by Cambridge Analytica, which served to support the Trump campaign. This scandal brought to light how the data we use can be exploited nonconsensually and used to influence our behaviours, as this particular example seemed to sway the American presidential election. Simply put, our information can be easily exploited and sent off to data analytics firms to further influence our choices. This creates the defence that apps “merely provide a [service], but people use [these services] in that way,” as said by Dr Miller. In effect, the blame is falsely shifted onto the users for the spread of misinformation.
The onus, however, should lie with social networking sites to give their users more transparency about their data usage and history, as well as adequate protection of their data. To be frank, the future of robotic humanoid AI integrating seamlessly into human livelihoods will not occur within our lifetimes, or potentially even our grandchildren’s. The forecast seems, at best, unpredictable and, at worst, unattainable due to the complexity of what constitutes full “sentience”. However, this does not indicate that AI lies dormant within our lives. The fundamental technology based in computing, statistics, and information systems lays the groundwork for most transactions we conduct online, whether monetary, social, or otherwise. AI and its promises should not be shunted aside due to the misleading media surrounding its popularised definition and “robot uprisings”, but rather taught more broadly to all audiences. So perhaps Elon Musk’s fantastical ideas of robotic integration will not occur by 2022, but the presence of AI in modern technologies should not go unnoticed.

References:
1. "A Very Short History of Artificial Intelligence (AI)." 2016. Forbes. https://www.forbes.com/sites/gilpress/2016/12/30/a-very-short-history-of-artificial-intelligence-ai/?sh=38106456fba2.
2. “Blaise Pascal Invents a Calculator: The Pascaline.” n.d. Jeremy Norman's Historyofinformation.com. https://www.historyofinformation.com/detail.php?id=382.
3, 4, 6. “History of Artificial Intelligence.” n.d. Council of Europe. https://www.coe.int/en/web/artificial-intelligence/history-of-ai.
5. Smith, Chris, Brian McGuire, Ting Huang, and Gary Yang. 2006. “The History of Artificial Intelligence.” A file for a class called History of Computing offered at the University of Washington. https://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf.

  • The Intellectual's False Dilemma | OmniSci Magazine

The Intellectual’s False Dilemma: Art vs Science By Natalie Cierpisz
The age-old debate once again resurfaces. How do art and science truly interact? Is one dependent on the other? How does the ‘art intellectual’ embrace science, and how does the ‘science intellectual’ embrace art? Is this all a meaningless debate anyway? Edited by Andrew Lim, Mia Horsfall & Hamish Payne Issue 1: September 24, 2021 Illustration by Casey Boswell

The autumnal Melbourne wind whistles through the naked plane trees lining South Lawn; the sky is flat and grey. Two individuals who regard themselves and only themselves as ‘intellectual paragons’ are seated on a somewhat uncomfortable wooden bench, a perfect perch for people-watching, yet they are rather egotistical and notice only their own presence. One carefully places down their black coffee to light a hand-rolled cigarette; they are a liberal arts intellectual. As the wind grows stronger, the other tightly wraps a lab coat around themselves, and pushes a pair of wire-rimmed spectacles up their nose for the nth time. This would be our scientist.

“So, are you still fooling around with your test tubes and pretty lights?” asks the liberal arts academic, cigarette hanging out the corner of their mouth.

“If you mean, am I still investigating antiprotons using laser spectroscopy, then yes, indubitably so. How’s your fooling around with Hegel going?” replies the scientist, again pushing their glasses back up to a suitable height.

The liberal arts intellectual is quick to counter the scientist’s trite remarks – they are in fact composing a Hegelian analysis of The Communist Manifesto, and not ‘fooling around’ by any means. The tension between the two self-professed intellectuals is building. The two appear to be fighting for dominance in their passive attacks on ego. So goes the age-old feud between the arts and the sciences. These two shallow characters play into the false dilemma that science and art are separate, distinct, alien. Two polar opposites. A total and unequivocal dichotomy.

In all fairness, it is difficult to imagine many people will take so polarised a stance on the relationship between art and science. And now, as we delve into the complex relationship between the two domains, it should become clear that science and art are functionally interdependent (1), and that considering art and science as totally separate is simply absurd.

Let’s get back to our two feuding intellectuals. There seems to be much stereotypical disjunction between the two. But how does this translate to the true relationship between art and science? If the liberal arts intellectual and scientist were not so wrapped up in their self-interested ways, perhaps their gaze would slowly drift to the grandiose arches and imposing columns of the Old Quad. The harmonious form and mathematical ratios of these monuments are an enduring reminder of the architectural leaps and bounds made in the early 14th century, a blended pursuit of art and science. Ergo, we will head to one of the greatest paradigm shifts in Western history – the Renaissance.

The Renaissance roughly spanned from the 14th to the 17th century and was a period of complete intellectual revolution – for both science and the arts (2). Everyone is familiar with Leonardo da Vinci, the great Renaissance artist. Fewer people know that he was also an inventor and a man whose artistic practice was heavily influenced by science (3).
To ensure his paintings were as realistic as possible, Da Vinci dissected cadavers to better understand human anatomy, and studied optics and astronomy to perfect his use of space and form in paintings like The Last Supper. Likewise, scientists like Nicolaus Copernicus and Galileo Galilei kickstarted a revolutionary paradigm shift towards the heliocentric model, and their work in optics and astronomy was heavily reflected in artworks of the same era. Both science and art challenged what had for centuries been considered the status quo.

Source: Leonardo da Vinci, The Last Supper, 1498, tempera on gesso, pitch, and mastic, 460 cm × 880 cm, Wikipedia, https://en.wikipedia.org (4).

This certainly isn’t a call for readers to head to the Melbourne General Cemetery and begin digging up specimens, nor to transfer to a double degree in fine arts and biomedicine. Instead, the point is how fruitful interaction between the two domains can be, and how one requires the other to flourish.

Returning briefly to South Lawn, the snarky liberal arts intellectual continues looking bored and takes out their copy of The Myth of Sisyphus. Sitting directly opposite them, the scientist has gone back to finishing the latest New Scientist podcast and calculating a quantum theory of gravity. We have seen that science can inspire art, but how can art inspire science?

“The greatest scientists are artists as well.” (5) So said perhaps the most well-known scientist of the modern era. Not only did Albert Einstein develop the special and general theories of relativity (we won’t get into the mathematical specifics, for both our sakes), he was also a talented violinist and pianist. Einstein often credited his artistic side for his success in science, testifying that “the theory of relativity occurred to me by intuition, and music is the driving force behind this intuition. My parents had me study the violin from the time I was six. My new discovery is the result of musical perception.” (6)

We have already seen how science prompts art to create new visions, and Einstein was no exception. His revolutionary ideas about space and time have been acknowledged as a prime influence on Picasso’s arguably infamous Cubist style, as well as on the Surrealist art movement (7). But the arts are not confined to visual and musical expression. What about the area of expertise of our liberal arts friend? The liberal arts, as they are known today, include sociology, literature, philosophy, psychology, politics, and more. The knowledge and, most importantly, the critical thinking learnt through humanistic education are perhaps key to the future of science. As the world changes and evolves, humans must change and evolve with it, creating innovative solutions along the way.

If we shift our focus to around the 1st century BCE, we encounter what is widely regarded as the coining of the term artes liberales, or liberal arts. Roman statesman, scholar and writer Marcus Tullius Cicero wrote extensively about a wide array of topics, from politics and education to Stoic philosophy. “Artes liberales” roughly translates to “subjects worthy of a free person” - academic study that would enable one to actively participate in society (8). This curriculum focused on seven key disciplines: rhetoric, geometry, grammar, music, astronomy, arithmetic, and logic. The liberal arts are, by nature, not the antithesis of science.
From the crux of the artes liberales evolved the study of mathematics, physics, philology, history, and so on. Today these seven disciplines have evolved and branched out so expansively that we have lost sight of the fact that our modern-day science and arts curriculums are sown from the same seed.

Both science and art stem from the real world. Simply put, science is one lens for studying this world and its inhabitants. Art is another lens into this complex system, providing a different but equally valuable perspective. Life is not binary, so neither should be our approach to studying it – and, by extension, to studying ourselves. Now is the time to embrace such transdisciplinary thinking. We need to bridge the gap between rigorous climate science and currently inadequate policy-making, assess the ethics of the future of gene editing, and ultimately become better thinkers. The combined intellectual strength of the analytical thinking associated with science, where we learn how to test hypotheses, interpret data and draw valid conclusions, and of the arts, where we learn critical thinking, how to develop arguments and how to understand a diverse audience, is necessary to keep humanity’s head above water as our world rapidly changes.

Take, for example, the future of the CRISPR-Cas9 editing tool. This enzyme-based tool allows scientists to remove or add sections of DNA sequence in our genome, our code for life. With this ‘hand of God’ comes great responsibility. Scientific thinkers and humanistic thinkers need to collaborate to identify what kind of robust legislation must be implemented to ensure the ethical use of this tool. It is no longer a case of scientists working in isolation in underground bunkers. Scientists are making huge strides in research that extend to and greatly impact the wider community. Cases like CRISPR-Cas9 demand a lens from science and a lens from the arts in order to see the full picture – and in this case, to ensure the ethical and safe practice of a tool that has the potential to save lives and improve individuals’ quality of life – but this only happens if science and art function in harmony.

So back to you, the reader. Perhaps think about enrolling in that philosophy breadth subject next semester that your liberal arts friend raves about. Pick up that popular science book you have been eyeing off at Readings on Lygon St. Listen to that science podcast that keeps popping up on your Spotify homepage (the BBC’s The Infinite Monkey Cage is excellent). Pick up that paintbrush. Go visit Science Gallery Melbourne, a recent art scene addition affiliated with the University of Melbourne – how fitting! This isn’t Romeo and Juliet, where you are either a Capulet or a Montague. Rather, this is a case of wave-particle duality, where an electron is both a wave and a particle, and you are both an artist and a scientist.

As the typical Melbourne wind continues to pick up and the Old Arts clocktower strikes 7:00 pm, it appears the liberal arts intellectual has just swapped their copy of The Myth of Sisyphus for the scientist’s copy of Brief Answers to the Big Questions. Looks like they’re making progress.

References: 1. Richmond, Sheldon. “The Interaction of Art and Science.” The MIT Press 17, no. 2 (1984): 81-86. https://www.jstor.org/stable/1574993. 2. History.com Editors. “Renaissance.” History.com. April 4, 2018. https://www.history.com/topics/renaissance/renaissance. 3. Powers, Anna. “Why Art is Vital to the Study of Science.” Forbes.
July 13, 2020. https://www.forbes.com/sites/annapowers/2020/07/31/why-art-is-vital-to-the-study-of-science/?sh=7dfd8f8942eb. 4. Da Vinci, Leonardo. The Last Supper. 1498. Tempera on gesso, pitch, and mastic. 460 cm × 880 cm. Wikipedia. https://en.wikipedia.org. 5, 6. Root-Bernstein, Michelle. “Einstein On Creative Thinking: Music and the Intuitive Art of Scientific Imagination.” Psychology Today. March 31, 2010. https://www.psychologytoday.com/au/blog/imagine/201003/einstein-creative-thinking-music-and-the-intuitive-art-scientific-imagination. 7. Muldoon, Ciara. “Did Picasso know about Einstein?” Physics World. November 1, 2002. https://physicsworld.com/a/did-picasso-know-about-einstein/. 8. Tempest, Kathryn. “Cicero’s Artes Liberales and the Liberal Arts.” Ciceroniana On Line 4, no. 2 (2020): 479-500. https://doi.org/10.13135/2532-5353/5502. Feynman, Richard P. The Pleasure of Finding Things Out: The Best Short Works of Richard P. Feynman. New York: Basic Books, 2005. Science Gallery Melbourne. “Inspiring and Transforming Curious Minds.” Published 2021. https://melbourne.sciencegallery.com/what-we-do. White, Fiona. “Why art and science are better together.” The University of Sydney News. September 17, 2020. https://www.sydney.edu.au/science/news-and-events/2020/09/17/arts-and-science-better-together.html.

  • Our Microbial Frenemies | OmniSci Magazine

Our Microbial Frenemies By Wei Han Chong How could it be that some of the smallest organisms known to mankind hold so much influence and cause such calamity in our lives? The significance of these microorganisms has long eluded the greatest microbiologists. But has our perception of these microbes blinded us to their advantages, if any? Edited by Khoa Anh Tran & Tanya Kovacevic Issue 1: September 24, 2021 Illustration by Rachel Ko

Throughout human history, diseases and plagues have amassed death tolls reaching hundreds of millions, if not billions. These range from the Black Death in the 14th century, which killed about 200 million people, or about 30–50% of Europe’s population, to ongoing outbreaks of tuberculosis and typhoid fever, which cause roughly 1.4 million and 200,000 deaths every year, respectively (1, 2, 3). It should come as no surprise, then, that we have long perceived these microorganisms as a threat to public health and have consequently sought to eradicate them from our environment. But have we been looking at them the wrong way?

First and foremost, we know very little about the microorganisms living around us. For bacterial species alone, some scientists have estimated around a billion worldwide, though even this value is believed to be a gross underestimation (4). Before the germ theory, the most widely accepted explanations for disease were the spontaneous generation and miasma theories. Spontaneous generation was a simple idea: that living organisms could develop from nonliving matter, such as maggots developing from rotting flesh. The miasma theory, on the other hand, was more prevalent throughout both ancient and modern history. From this perspective, “toxic” vapours from rotting organisms or unsanitary locations were believed to cause disease (5).

This all changed with the germ theory of disease: an idea that would revolutionise our understanding of microorganisms for centuries to come. Italian scholar Girolamo Fracastoro first theorised these “invisible seeds” in 1546, believing that they could cause disease when spread from infected to healthy individuals (6). For the most part, the germ theory would continue to follow this logic: a specific microorganism, a “germ”, causes a specific disease when it invades its host (7). Yet it was not until centuries later that the field of microbiology would see huge developments. In 1861, French scientist Louis Pasteur disproved the spontaneous generation theory by means of sterilisation and proper sealing of food items, which prevented microbial growth (8). Pasteur, however, was not the only one contributing to developments in microbiology. In 1884, German scientist Robert Koch became the first to develop a set of criteria for establishing a causative relationship between a microorganism and its respective disease, effectively confirming the germ theory of disease (9). Even today, Koch’s system remains influential in microbial pathogenesis, albeit refined to a higher standard: it is now known as Koch’s Molecular Postulates — as opposed to Koch’s original postulates — a model that places greater emphasis on the virulence genes causing disease rather than on the microorganism itself (10).

Today, while we have much to thank Pasteur and Koch for in laying the foundation of modern microbiology, undoubtedly one of the biggest discoveries in microbiology was that of the human microbiota.
When we think of microbial life, we usually think of diseases and plagues, cleanliness and dirtiness. Rarely do we ever consider the idea of microbes living inside and around us. Even less can we comprehend the sheer number of microorganisms that live and proliferate all around us. In our gastrointestinal tract alone, estimates suggest that there are some 100 trillion microorganisms encoding three million genes altogether, which is 130 times more than we encode ourselves (11).

Figure 1. Microbes in Food (27)

So, what do we know about the microbiota, and specifically our own microbiota? Firstly, we know that the microorganisms occupying our gut do not, under normal circumstances, cause disease. Secondly, we know that they can provide us with a multitude of benefits, such as helping us digest complex organic molecules and preventing invasion by foreign microbes by competing directly for resources and keeping the immune system stimulated. These are just a few of the advantages our microbial allies provide. That is not to say, however, that they pose no danger to us.

Typically, these microorganisms are categorised as having a beneficial, pathogenic or commensal relationship with their host. Beneficial microbes, or probiotics, are as the name suggests: they typically provide some form of health benefit to the host and are usually non-pathogenic. Many of the bacterial species found in our gut lumen, for example, can digest cellulose; without these microbes, digesting vegetables would be a much harder and less rewarding task. Most of the probiotics found in our microflora are lactic acid bacteria and are most common in diets that incorporate fermented dairy products (12). Pathogenic microbes, on the other hand, are mostly microbes of foreign origin; these microorganisms infect and exploit the host’s cells, ultimately causing disease. Commensal microorganisms walk an interesting line in comparison: depending on circumstance, they may benefit both host and microbe, benefit only the microbe, or even cause disease in their host when given the opportunity.

An example of a commensal microorganism is Escherichia coli, or E. coli. It is a bacterium that colonises our gastrointestinal tract as soon as we are born, where it fends off more than 500 competing bacterial species thanks to its versatility and adaptations to our gut environment (13). Furthermore, the presence of E. coli along our gut epithelium helps to stimulate mucin production, inhibiting foreign microbes from invading the epithelium (14). However, as is typical of a commensal organism, when given the chance E. coli is capable of causing intestinal or extraintestinal disease in our bodies. Urinary tract infections due to E. coli are among the most common microflora-associated infections and often occur when the bacterium enters the urinary tract via cross-contamination with the anus, where E. coli is typically shed as part of the faeces (15).

Typically, these beneficial and commensal bacteria are found all over our body: in our hair, on our skin, and, as we have discussed, in our gut. Malassezia, for example, is a fungus that colonises our scalp, and is what causes dandruff in most people.
While dandruff may be a nuisance to those who experience it, do the disadvantages necessarily outweigh the benefits? The presence of Malassezia on our scalps means that other, possibly dangerous, microorganisms must compete with it in order to invade. Additionally, the stimulation of our body’s defences by Malassezia helps repel foreign invaders (16). Staphylococcus aureus is another example of a commensal microbe, and an even better example of an opportunistic pathogen: it can be found living harmoniously on our skin and in our nasal passages, helping us fend off competing microbes just as Malassezia does on our scalp. However, when the skin is pierced, whether by injury or medically through surgery or treatment, the Staphylococcus bacteria will opportunistically attempt to invade and infect their host (17). As such, Staph infections and outbreaks are among the most common forms of hospital-related infection (18).

Source: Thomas L Dawson, “What causes dandruff, and how do you get rid of it?” February 10, 2021, TED-Ed video (19).

Looking to the future, we have begun to see a spike in non-communicable diseases as opposed to microorganism-based diseases. These include most forms of heart disease, cancers, diabetes, and others. Still, while the rise of non-communicable diseases is arguably a cause for concern, the return of long-extinct diseases and antibiotic-resistant pathogens may prove costly. Staph infections, as previously mentioned, are extremely common in hospital environments, where continued usage of antibiotics such as penicillin or methicillin has produced “super strains” of Staphylococcus that are resistant to most commercially available drugs (20). Currently, superbugs such as multidrug-resistant Mycobacterium tuberculosis and methicillin-resistant Staphylococcus aureus are most common in healthcare settings, but community transmission has become a concern (21). As such, with our current practices of antibiotic over-prescription and continued reliance on sterilisation, future outbreaks of mutated and resistant pathogens may be inevitable.

That being said, should we redefine what “clean and sterile” means to us? Should “sterile” necessarily mean a microbe-free environment? Ever since the inception of the germ theory, our stance toward microbial life has been consistently “antibacterial”, treating microbes as a threat to public health. The fact of the matter, however, is that these microorganisms are unavoidable. There are microorganisms living all over us. Our fingers, our phones, even the soles of our shoes carry them. In hospital rooms, the composition of microbes is constantly changing as patients and visitors enter and leave (22). Besides, the composition of microbes in an environment is not determined solely by its occupants. Other factors, such as ventilation and even architecture, can determine which microbes we find there. In fact, hospital rooms with more airflow and humidity were found to suppress the growth of potential pathogens and had fewer human-associated bacteria in their microbial composition (23). Just as the microbial composition of the environment can be shaped by architectural and building factors, the microbial composition of our own microflora can hold incredible influence over our physiology.
Dysbiosis, an imbalance in our microflora, can occur as a result of repeated consumption of antibiotics, and it is a serious condition resulting in a significant loss of beneficial and commensal microbes (24). Consequently, the ability of foreign pathogens to invade and colonise is increased, as has been shown in antibiotic-treated mice exposed to M. tuberculosis, where a dysbiotic state promoted pathogenic colonisation (25). Other factors, such as diet and lifestyle, also act as “disturbance” factors that influence dysbiosis, as can be seen with typical Western-style diets consisting mostly of high-fat and sugary foods (26).

While the threat of pandemics originating from drug-resistant superbugs looms over us, our understanding of microbial life has come far: from its humble beginnings as a rejected theory amongst scholars to the discovery of an extensive microbial ecosystem inside our guts. Despite that, our comprehension of this “hidden world” remains lacking, and we have yet to fully realise the potential of microbial life. Throughout history we have constantly taken an antimicrobial stance to preserve public health, but in recent times it has become increasingly clear that these microorganisms play a much greater role in our health.

References: 1. LePan, Nicholas. “Visualizing the History of Pandemics.” Visual Capitalist. Last modified September 2021. https://www.visualcapitalist.com/history-of-pandemics-deadliest/. 2. World Health Organization. “Tuberculosis.” Published October 2020. https://www.who.int/news-room/fact-sheets/detail/tuberculosis. 3. Centers for Disease Control and Prevention. “Typhoid Fever and Paratyphoid Fever.” Last modified March 2021. https://www.cdc.gov/typhoid-fever/health-professional.html. 4. Dykhuizen, Daniel. “Species Numbers in Bacteria.” Supplement, Proceedings of the California Academy of Sciences 56, no. S6 (2005): 62-71. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3160642/. 5. Kannadan, Ajesh. “History of the Miasma Theory of Disease.” ESSAI 16, no. 1 (2018): 41-43. https://dc.cod.edu/essai/vol16/iss1/18/. 6, 8. Greenwood, Michael. “History of Microbiology – Germ Theory and Immunity.” News-Medical. Last modified May 2020. https://www.news-medical.net/life-sciences/History-of-Microbiology-e28093-Germ-Theory-and-Immunity.aspx. 7. Britannica. “Germ theory.” Last modified April 2020. https://www.britannica.com/science/germ-theory. 9, 10. Gradmann, Christoph. “A spirit of scientific rigour: Koch’s postulates in twentieth-century medicine.” Microbes and Infection 16, no. 11 (2014): 885-892. https://doi.org/10.1016/j.micinf.2014.08.012. 11. Valdes, Ana M, Jens Walter, Eran Segal, and Tim D Spector. “Role of the gut microbiota in nutrition and health.” BMJ 361, no. k2179 (2018): 36-44. https://doi.org/10.1136/bmj.k2179. 12, 24. Martín, Rebeca, Sylvie Miquel, Jonathan Ulmer, Noura Kechaou, Philippe Langella, and Luis G Bermúdez-Humarán. “Role of commensal and probiotic bacteria in human health: a focus on inflammatory bowel disease.” Microbial Cell Factories 12, no. 71 (2013): 1-11. https://doi.org/10.1186/1475-2859-12-71. 13, 15. Leimbach, Andreas, Jörg Hacker, and Ulrich Dobrindt. “E. coli as an All-rounder: The Thin Line Between Commensalism and Pathogenicity.” In Between Pathogenicity and Commensalism, edited by Ulrich Dobrindt, Jörg Hacker and Catharina Svanborg, 3-32. Springer: Berlin, 2013. 14. Libertucci, Josie, and Vincent B Young. “The role of the microbiota in infectious diseases.” Nature Microbiology 4, no.
1 (2019): 35-45. https://doi.org/10.1038/s41564-018-0278-4. 15. Harvard Medical School. “When urinary tract infections keep coming back.” Published September 2019. https://www.health.harvard.edu/bladder-and-bowel/when-urinary-tract-infections-keep-coming-back. 16. Saunders, Charles W, Annika Scheynius, and Joseph Heitman. “Malassezia Fungi Are Specialized to Live on Skin and Associated with Dandruff, Eczema and Other Skin Diseases.” PLoS Pathogens 8, no. 6 (2012): 1-4. https://doi.org/10.1371/journal.ppat.1002701. 17. Cogen, A. L., V. Nizet, and R. L. Gallo. “Skin microbiota: a source of disease or defence?” British Journal of Dermatology 158, no. 3 (2008). https://doi.org/10.1111/j.1365-2133.2008.08437.x. 18, 20. Klein, Eili, David L Smith, and Ramanan Laxminarayan. “Hospitalizations and Deaths Caused by Methicillin-Resistant Staphylococcus aureus, United States, 1999–2005.” Emerging Infectious Diseases 13, no. 12 (2007): 1840-1846. https://doi.org/10.3201/eid1312.070629. 19. Dawson, Thomas L. “What causes dandruff, and how do you get rid of it?” February 10, 2021. TED-Ed video, 5:04. https://youtu.be/x6DUOokXZAo. 21. Better Health. “Antibiotic resistant bacteria.” Last modified March 2017. https://www.betterhealth.vic.gov.au/health/conditionsandtreatments/antibiotic-resistant-bacteria#bhc-content. 22, 23. Arnold, Carrie. “Rethinking Sterile: The Hospital Microbiome.” Environmental Health Perspectives 122, no. 7 (2014): A182-A187. https://doi.org/10.1289/ehp.122-A182. 25. Khan, Rabia, Fernanda C Petersen, and Sudhanshu Shekhar. “Commensal Bacteria: An Emerging Player in Defense Against Respiratory Pathogens.” Frontiers in Immunology 10, no. 1 (2019): 1203-1211. https://doi.org/10.3389/fimmu.2019.01203. 26. Schippa, Serena, and Maria P Conte. “Dysbiotic Events in Gut Microbiota: Impact on Human Health.” Nutrients 6, no. 12 (2014): 5786-5805. https://doi.org/10.3390/nu6125786. 27. Sottek, Frank. Microbes in Food. c. 1904. The Tacoma Times, Tacoma. https://commons.wikimedia.org/wiki/File:Sottek_cartoon_about_microbes_in_food.jpg.

  • Griefbots: A New Way to Grieve (or Not) | OmniSci Magazine

Griefbots: A New Way to Grieve (or Not) Akanksha Agarwal 24 October 2023 Edited by Celina Kumala Illustrated by Louise Cen

Trigger warning: This article mentions themes of death or dying. If at any point the content is distressing, please reach out for support via Griefline or refer to the services listed at the end of this article.

Rumi once wrote, ‘Anything you lose comes round in another form’ (Goodreads, n.d., p. 1). There are many ritualistic ways to memorialise the death of a loved one, but what if they had never “died”? Over the past decade, the intersection of technology and mental health has given rise to innovative solutions for various psychological conditions. From virtual reality therapy for Post-traumatic Stress Disorder (Kothgassner et al., 2019) to prescription video games aimed at helping children manage Attention Deficit Hyperactivity Disorder (Tiitto & Lodder, 2017), the mental health technology industry has expanded significantly. Enter a recent addition to this landscape: the griefbot.

In 2015, Roman Mazurenko, an entrepreneur and prominent figure in Moscow’s nightlife scene, died suddenly in a car accident (Newton, 2016). His close friend, Eugenia Kuyda, proceeded to create a “digital monument” in his memory (Newton, 2016). While grieving, she found herself re-reading all his old messages, feeling nostalgic about Roman’s unique word choices and spelling. Kuyda had previously founded a startup building artificially intelligent chatbots. After the incident, she fed Roman’s text exchanges into one of her bots. The bot then adopted Roman’s speech patterns, enabling her to chat with a version of him. This marked the birth of the griefbot: a chatbot programmed using the digital remains (emails, text messages, social media posts) of a deceased individual to support their grieving loved ones. In other words, using natural language processing, these bots mimic conversational patterns learned from the data of the deceased. Are these conversational patterns accurate? How, then, does this impact the way we grieve? Should we even be using griefbots? To answer these questions, we could attempt to understand grief.

Grief is a complex emotion. You could be grieving the loss of a loved one, a relationship, an object, or even an abstract idea (e.g. familiarity). Grief can also manifest at different times for each individual. According to the Australian Psychological Society, ‘grief is the natural response to loss and can influence the physical, emotional, cognitive, behavioural and spiritual aspects of our lives’ (APS, n.d, p. 1). In their book, Elisabeth Kübler-Ross and David Kessler (2014) describe ‘The Five Stages of Grief’: denial, anger, bargaining, depression and acceptance. Essentially, the model suggests an initial reaction of symbolic denial or shock. Following this is typically a phase of emotional support through vocalising the experience or making meaning. The final stage is acceptance, or moving forward.

“Are they really gone?” Denial is viewed as a protective mechanism to meet the psyche where it is. “Why me?” Anger is interestingly framed as an anchor to connect you to someone you’ve lost. “What if they suddenly return?” Bargaining shifts from the past to the future, until the truth sinks in. “What’s the point?” Depression protects the nervous system from overload, and is arguably natural to grief when not clinical. “I lost them, but I am going to be okay.” Acceptance comes as you start to move forward, with some stability.
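How might a bot “adopt someone’s speech patterns” from their messages? Commercial systems like Kuyda’s rely on neural language models trained on large corpora, but a minimal sketch of the simpler retrieval idea is shown below, purely for illustration: a hypothetical message archive is indexed with TF-IDF, and the bot answers with whichever archived message is closest to the prompt. Everything in the snippet (the messages, names and behaviour) is invented and is not how any real griefbot is built.

```python
# Toy, retrieval-based "griefbot": given a prompt, return the archived message
# whose wording is most similar, so replies echo the person's own phrasing.
# The message archive below is entirely hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

archived_messages = [  # stand-in for "digital remains"
    "running late again, save me a seat near the window",
    "that gig was unreal, my ears are still ringing haha",
    "don't stress about the exam, you always pull through",
    "let's grab pho on lygon st when you're free",
]

vectoriser = TfidfVectorizer()                        # bag-of-words TF-IDF features
archive_vectors = vectoriser.fit_transform(archived_messages)

def reply(prompt: str) -> str:
    """Return the archived message most similar to the prompt."""
    prompt_vector = vectoriser.transform([prompt])
    similarities = cosine_similarity(prompt_vector, archive_vectors)[0]
    return archived_messages[similarities.argmax()]

if __name__ == "__main__":
    print(reply("I'm worried about my exam next week"))
    # -> "don't stress about the exam, you always pull through"
```

Real griefbots generate new text rather than only replaying old messages, which is precisely what raises the questions of accuracy, consent and wellbeing discussed below.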
Each of the questions above might manifest in different ways and require different coping mechanisms. They do, however, give us an indication of generic phases across unique manifestations of grief. In other words, these are not linear, clear-cut stages; rather, there is an element of individuality in the way we experience each one. We might experience one stage before another, circle back, or take a completely new route. In any case, this is one way to make sense of grief. Other theories of grief include Bowlby’s attachment theory (1980), which suggests that our response to losing someone is coded in the way our attachments develop. Silverman and Klass (1996) put forth the idea of continuing bonds, where the meaning of loss changes as the deceased live on in memory. Stroebe and Schut (1999), on the other hand, posit a dual process in which individuals constantly switch between avoidance and confrontation of loss. Regardless of your theoretical inclinations, chances are that one might seek closure, a sense of reconciliation or even self-fulfilment after experiencing loss.

What, then, would be the wellbeing impacts of artificial chatbots, designed to adopt the language patterns of those we have lost, on the grieving process? Grief can result in cognitive changes, such as confusion, identity disturbances, dysphoria, and yearning, among others (Bonanno & Kaltman, 2001). Norlock (2016) proposes that imaginal relationships with the deceased can reflect relational value, ethical behaviour (such as forgiveness), and relationship maintenance. Furthermore, it is argued that continued internal representations of people who have passed away might also add value to future relationships. In contrast, some may argue that interacting with an artificial griefbot might engender a para-social relationship, in which the user invests time in a relationship while the recipient is unaware, much as with celebrities and their fans (Van der Vorst & Kamp, 2022). Furthermore, anthropomorphising a non-living chatbot and mistaking it for a person might distort reality, lead users to take misguided advice, delay grief, or fabricate new false memories (Van der Vorst & Kamp, 2022). It leads one to wonder: just what are the potential ethical issues surrounding griefbots?

Data is impermanent, with the ability to be wiped (Grandinetti et al., 2020). It is also deeply contextual, contingent, and unstable (Grandinetti et al., 2020). Understanding how the bot is responding, ensuring no harmful advice is given, and preserving the griever’s best interests is therefore a complex task. Viewing griefbots as permanent or true representations of the dead is another issue. There are also ethical questions around consent: whether the deceased, or indeed living users, can meaningfully consent to the use of their data. Whether companies can be transparent about how the data is handled, and about the algorithms generated, remains unclear. Would knowing how the responses were generated change the way people view griefbots, and would that defeat the purpose?

Yet there are broader challenges. If users disclose private information to profit-driven companies on the basis of their trust in the person they have lost, that data could be misused. The role of protection plans in the event of deepfakes or hackers becomes paramount. The large amount of data used also raises questions about the sustainability of such bots. Additionally, the high cost of sophisticated bots might create greater disparities in access to support.
While autonomy may improve with access to immediate technology, the addictive interaction patterns could lead to dependence, overuse, and potentially social withdrawal. Furthermore, factors such as gender, age, sensitive content and changing political landscapes might inherently bias the bot. Griefbots remain a hotly contested topic, with widespread caution surrounding their potential impacts. There have been attempts to design similar bots with ethical features in mind, and even suggestions to medically regulate or test such devices. However, this use of AI bots opens up a multitude of questions. Van der Vorst and Kamp (2022) speculate that, by 2025, holographic avatars could be generated from photographs, physical and digital remnants, even voice recordings. Ultimately, the impact of griefbots on our perception of mortality and memory challenges us to reconsider the boundaries of life, death, and the enduring essence of human connection in a digital age.

Support resources

If you are experiencing prolonged symptoms of grief or depression, please seek support via the following resources, each offering different options for support:

Grief Australia: counselling services, support groups, app https://www.grief.org.au/ga/ga/Get-Support.aspx?hkey=2876868e-8666-4ed2-a6a5-3d0ee6e86c30

Griefline: free telephone support, community forum and support groups https://griefline.org.au/

Better Health Channel: coping strategies, list of support services, education on grief https://www.betterhealth.vic.gov.au/health/servicesandsupport/grief

Beyond Blue: understanding grief, resources, support, counselling https://www.beyondblue.org.au/mental-health/grief-and-loss

Lifeline: real stories, techniques & strategies, apps & tools, support guides, interactive https://toolkit.lifeline.org.au/topics/grief-loss/what-is-grief

Reach Out Australia: coping strategies https://au.reachout.com/articles/working-through-grief

Find a Helpline: for international/country-specific helplines https://findahelpline.com/

This list is not exhaustive; please refer to your area’s specific services for additional support.

References

Albert, S., & Bowlby, J. (1982). Attachment and loss: Sadness and depression. Journal of Marriage and the Family, 44(1), 248. https://doi.org/10.2307/351282

APS. (n.d.). Grief | APS. Australian Psychological Society | APS. https://psychology.org.au/for-the-public/psychology-topics/grief

Basom, J. (2021, May 19). The ethical, social, and political implications of “Griefbots”. Medium. https://jonathanb108.medium.com/the-ethical-social-and-political-implications-of-griefbots-48780fd1d1c2

Bonanno, G. A., & Kaltman, S. (2001). The varieties of grief experience. Clinical Psychology Review, 21(5), 705-734. https://doi.org/10.1016/s0272-7358(00)00062-3

Craytor, J. K., & Kubler-Ross, E. (1969). On death and dying. The American Journal of Nursing, 69(12), 2710. https://doi.org/10.2307/3421124

Elder, A. (2019). Conversation from beyond the grave? A Neo‐confucian ethics of chatbots of the dead. Journal of Applied Philosophy, 37(1), 73-88. https://doi.org/10.1111/japp.12369

Goodreads. (n.d.). A quote by Rumi. Goodreads | Meet your next favorite book. https://www.goodreads.com/quotes/32062-don-t-grieve-anything-you-lose-comes-round-in-another-form

Grandinetti, J., DeAtley, T., & Bruinsma, J. (2020). The dead speak: Big data and digitally mediated death. AoIR Selected Papers of Internet Research. https://doi.org/10.5210/spir.v2020i0.11122

Jiménez-Alonso, B., & De Luna, I. B. (2022). Correction to: Griefbots. A new way of communicating with the dead? Integrative Psychological and Behavioral Science. https://doi.org/10.1007/s12124-022-09687-3

Klass, D. (2021). The sociology of continuing bonds. Culture, Consolation, and Continuing Bonds in Bereavement, 113-128. https://doi.org/10.4324/9781003243564-11

Kothgassner, O. D., Goreis, A., Kafka, J. X., Van Eickels, R. L., Plener, P. L., & Felnhofer, A. (2019). Virtual reality exposure therapy for posttraumatic stress disorder (PTSD): A meta-analysis. European Journal of Psychotraumatology, 10(1). https://doi.org/10.1080/20008198.2019.1654782

Kübler-Ross, E., & Kessler, D. (2014). On grief and grieving: Finding the meaning of grief through the five stages of loss. Simon & Schuster. https://books.google.com.au/books?hl=en&lr=&id=0TltiT8Y9CYC&oi=fnd&pg=PR11&dq=grief+&ots=S1j1XyF91N&sig=pDnxX-bJQIJIFeX074oGrHRD0Ms&redir_esc=y#v=onepage&q=grief&f=false

Lindemann, N. F. (2022). The ethics of ‘Deathbots’. Science and Engineering Ethics, 28(6). https://doi.org/10.1007/s11948-022-00417-x

Newton, C. (2016, October 6). When her best friend died, she used artificial intelligence to keep talking to him. TheVerge.com. https://www.theverge.com/a/luka-artificial-intelligence-memorial-roman-mazurenko-bot

Norlock, K. J. (2016). Real (and) imaginal relationships with the dead. The Journal of Value Inquiry, 51(2), 341-356. https://doi.org/10.1007/s10790-016-9573-6

Santa Clara University. (n.d.). AI, death, and mourning. https://www.scu.edu/ethics/focus-areas/internet-ethics/resources/ai-death-and-mourning/

Stroebe, M. S., & Schut, H. (1999). The dual process model of coping with bereavement: Rationale and description. Death Studies, 23(3), 197-224. https://doi.org/10.1080/074811899201046

Shardlow, J. (2022). Temporal perspectives and the phenomenology of grief. Review of Philosophy and Psychology. https://doi.org/10.1007/s13164-022-00659-5

Tiitto, M. V., & Lodder, R. A. (2017). Therapeutic Video Games for Attention Deficit Hyperactivity Disorder (ADHD). WebmedCentral, 8(11). https://doi.org/10.1101/2020.10.26.355990

Van der Vorst, R., & Kamp, J. M. (2022). 12. Designing a griefbot-for-good. Moral design and technology, 215-241. https://doi.org/10.3920/978-90-8686-922-0_12

OmniSci Magazine acknowledges the Traditional Owners and Custodians of the lands on which we live, work, and learn. We pay our respects to their Elders past and present.
