You might have noticed that there haven’t been any new posts since November 2020. Which, to be fair, is not unusual for me – I am definitely infamous for having quite an unreliable writing schedule, despite my best efforts!
So what have I been up to for the last half-year, and what is happening with Jineral Knowledge?
Well, you’ll be happy to know that the reason I’ve been on hiatus from blogging isn’t because I’ve been sick or burnt out or not intending to write, but quite the opposite. I started a new project: a podcast called Explain This!
I’ve always wanted to experiment with starting a creative project that isn’t pure writing, like YouTube videos or podcasts. After a lot of planning, thinking and learning, I started making Explain This in November last year, and have been producing episodes steadily since then.
If you’re a fan of this blog, I’m sure you’d enjoy Explain This too! It’s a podcast where I try to explain various difficult concepts, ranging from physics principles like Schrödinger’s cat, to medical phenomena like fevers, to philosophical musings such as optimistic nihilism. If you’re an avid reader, you’ll probably notice that the topics are similar to what I write about here.
The podcast lets me go much deeper in some topics, in more of an “explanation” and teaching tone, rather than in an encyclopaedic manner. It’s been a blast to make and I’m planning on continuing it for the foreseeable future!
But because producing a podcast does involve quite a bit of work – and I still have other things like work and exam study going on – I’m going on hiatus from blog writing for the rest of this year. There may be random posts here or there, but there won’t be a regular schedule until I’ve finished my exams!
In the meantime, come check out Explain This to continue learning random things about the world from me. You can find it on Spotify, Apple Podcasts, Podcast Addict, or wherever you get your podcasts. Otherwise, you can find us on social media via the links below!
Come give us a listen, a share or a comment! Until next time~
In 1954, an 18-year-old Australian man by the name of James Harrison began donating blood. This is certainly not an unusual fact – over 100 million units of blood are donated worldwide each year. But four years earlier, Harrison had required massive blood transfusions during major chest surgery. Knowing that he owed his life to the generous gift of blood from others, he pledged to donate blood as soon as he reached the required age.
Soon after the first few donations, it was discovered that his blood had a peculiar property. Harrison’s blood contained unusually strong, long-lasting antibodies against a protein called the Rh blood group D antigen, or Rh(D).
Why was this such an important discovery?
In blood transfusion, blood types are crucial, as transfusing the wrong type of blood can trigger an immune reaction in the recipient’s body, potentially killing them. This is because red blood cells (which carry oxygen in the blood) are coated with different proteins called antigens. Your body ignores antigens that it is used to, but if it detects any new protein, it creates antibodies and viciously attacks the cell, treating it as an infection.
Most people know their blood type as A, B, AB or O. A and B are the two most prominent antigens on red cells. If you are type A, you have A antigens. Type B has B antigens, type AB has both, and type O has neither.
So, for example, if you transfuse type AB blood into a type B person, the recipient’s immune system ignores the B antigen, but since it has never seen A antigens, it attacks the new blood, which can make the recipient very sick (or kill them).
The second most prominent antigen is Rh(D) (previously Rhesus factor). If you have it, you get a “+” next to the ABO typing (e.g. B+); if you don’t, you get a “–” (e.g. O-, the “universal donor” blood).
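These typing rules boil down to a simple set comparison: a transfusion is safe only if the donor’s red cells carry no antigen that the recipient’s own cells lack. Here is a minimal sketch in Python, covering only ABO and Rh(D) – real cross-matching screens for many more antigens:

```python
# Simplified red-cell compatibility check: a recipient reacts to any
# antigen on the donor's cells that their own cells lack.
# Covers only ABO and Rh(D); real blood banks screen many more antigens.

def antigens(blood_type: str) -> set:
    """Return the antigen set for a type written like 'AB+' or 'O-'."""
    abo, rh = blood_type[:-1], blood_type[-1]
    ags = set(abo) - {"O"}   # 'O' means neither A nor B antigen
    if rh == "+":
        ags.add("D")         # Rh(D) antigen present
    return ags

def is_compatible(donor: str, recipient: str) -> bool:
    """Donor red cells must carry no antigen foreign to the recipient."""
    return antigens(donor) <= antigens(recipient)

print(is_compatible("O-", "AB+"))   # True: the universal donor
print(is_compatible("AB+", "B+"))   # False: recipient has never seen A
```

For instance, `is_compatible("AB+", "B+")` returns `False`, matching the example above – the type B recipient has never seen the A antigen and would attack the new blood.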
Rh(D) was a huge issue in medicine because it resulted in many babies dying or suffering brain damage from haemolytic disease of the newborn (HDN) – a reaction where the mother’s immune system attacks the fetus’s red blood cells because their blood types differ.
The so-called “anti-D” antibodies that were discovered in Harrison’s blood provided scientists with a weapon to fight against HDN.
HDN happens when an Rh-negative woman develops antibodies to Rh(D), then has a baby with Rh-positive blood. For example, if an Rh-negative woman has an Rh-positive baby in her first pregnancy, her body detects the baby’s blood during birth, senses the Rh(D), then develops antibodies so it can fight it next time.
If you give the woman anti-D before her body has a chance to detect the antigen, the anti-D immediately attaches to all the Rh(D) antigens, shielding them from her immune system. She therefore never becomes sensitised to the antigen and never makes her own antibodies. No antibodies, no HDN.
James Harrison was well aware of the power of his blood, so he proceeded to donate blood every two weeks for 57 years – over 1,000 donations. It is estimated that his blood helped save the lives of 2.4 million babies, including his own grandchild. Hence, he is known as The Man With The Golden Arm.
In 1980, statistician Stephen Stigler suggested that in the history of science, no scientific discovery is ever really named after its original discoverer. This is because eponymous laws and discoveries tend to be named after the person who made them widely known.
Take the example of the famous Pythagorean theorem, which was known to Babylonians before Pythagoras was even born. Halley’s comet had been documented by astronomers since 240 BC. Fibonacci numbers were well-known to Indian mathematicians since 200 BC – 1400 years before being described by Fibonacci.
You could argue that these discoveries could not be traced to their original discoverers because they happened too long ago, before the discoverers’ names were ever documented. This is certainly one reason Stigler’s law holds. Scientific discoveries also tend to be popularised and named after the discoverer has died, by which point they may already have been forgotten.
There are plenty of examples where the original discoverer is known, thanks to historians of science, but it is too late to reverse the eponym as the name has firmly rooted itself into people’s vocabulary.
Take Alzheimer’s disease, first described by Beljahow in 1887 but named after Alois Alzheimer, who reported his famous case in 1906. The bacterium Salmonella was identified by Theobald Smith in 1885, but as he was a junior inspector, his boss Daniel E. Salmon took the credit instead.
It is also worth noting that throughout history, there have been many cases of discoveries being made simultaneously by independent scientists.
Sometimes, the scientists credit each other and share the fame, such as Charles Darwin who decided to co-present his theory of natural selection with Alfred Russel Wallace, another scientist who came to similar conclusions at the same time.
Other times, scientists fight aggressively to assert their credit, such as Isaac Newton and Gottfried Leibniz, who both developed calculus around the same time but fought bitterly over who was first.
An important lesson to learn here is that as much as we love stories where a brilliant individual changed the course of history, most advancements in human history happen thanks to collaboration and inherited knowledge over time. Things rarely happen in a vacuum and we all rely on each other’s experiences and knowledge, building on our predecessors to achieve greatness.
Ironically, Stigler’s law follows its own law, as Stigler identified sociologist Robert K. Merton as the original discoverer of this law.
In 1964, a graduate student named Donald Currey was studying the history of glaciers in Nevada, USA. He came across some bristlecone pine trees, which he suspected may give him some clues about the Ice Age. This is because previously, another scientist had discovered similar bristlecone pines nearby in California that dated between 3000-5000 years old.
So Currey decided to sample some of these trees to determine their ages. He thought that if he could show that the trees uphill were much younger than the trees at the base, it may prove that glaciers expanded down the mountain and pushed the trees back downhill.
Currey came across a particularly old-looking tree. He got permission from the Forest Service first, then cut the tree down to count its growth rings – the most accurate way to figure out how old a tree is. Because a tree lays down a new layer of wood in a predictable fashion each growing season, it gains one ring for every year of its life. Interestingly, trees are often dated nowadays by taking a core sample instead of cutting them down (and thus killing them). It is uncertain why Currey and the Forest Service opted to cut the tree down instead.
After counting the rings, Currey realised that the tree (dubbed “Prometheus” by local naturalists) was 4844 years old – the oldest known living tree in the world. Well, it was until it was cut down.
More modern analysis of Prometheus’ remains revealed that the tree was likely closer to just over 5000 years old at the time of its death, which makes Prometheus the oldest known tree and (non-clonal) organism in recorded history.
As expected, when Currey published his results, there was a massive outcry. In the name of understanding nature better, he inadvertently killed the world’s oldest tree.
Since the demise of Prometheus, another tree by the name of Methuselah has taken the crown of “oldest tree in the world”, at the age of 4852 as of 2020.
The lesson here is clear: before cutting a tree down, check that you are not accidentally killing the oldest tree in the world.
Growth is such an important aspect of life. From a young age, we are encouraged to grow in all senses of the word.
We eat well and drink milk to grow tall and strong and healthy. We study to grow and cultivate our minds. We find a job and learn financial skill to grow our savings. We chase our passions and interests to culture ourselves and grow our character. We devote time and energy to upskilling and pushing ourselves to our limits to grow our careers and capabilities. We learn to love, to hurt, to dream and yearn, to lose and to self-soothe to grow our emotional intelligence.
All in all, the world tells us we must grow, grow and grow. Almost to the point of an obsession.
In some ways, it’s true – we must strive to keep growing as a person. Think of the countless people who stop growing after reaching adulthood. Instead of reading, thinking, introspecting and connecting, they choose to live in an echo chamber, spouting off misinformed opinions and closing their minds to any new experiences or viewpoints.
When we don’t challenge ourselves to grow, we cannot achieve flow state and instead become lazy and unmotivated. We run the risk of becoming stagnant or regressing.
In this sense, encouraging our own growth is one of the most useful life skills we can obtain as a young adult. No one other than ourselves will truly care about our growth as a person. This is why goals, systems and insight are important in life.
However, growth also comes with stress.
To grow, we need to put in time, energy and resources, such as money. Sometimes, it is hard enough surviving life, let alone thinking about growing. Sometimes, depression and anxiety get the better of us and we are in no mental state to strive to thrive. Sometimes, we are barely keeping up with life’s numerous demands, meaning that growth could be the straw that breaks the proverbial camel’s back.
Think of growth in nature. Barely anything grows perpetually. Plants grow in the summer, then recede for the winter, saving up energy for the next summer. Mountains grow with plate tectonics, then erode with the weather. Even economies go through cycles of growth and depression, while the mightiest empires rise and fall. In fact, the only thing in nature that only grows is cancer.
Simply put, uncontrolled, continuous growth is unsustainable.
If a plant were to only grow upwards, it would collapse under its own weight. Businesses that grow beyond their capabilities stumble and fall as they exhaust their resources, or end up relying on shady, unethical practices.
The take-home message is this: it is okay not to always be growing.
Of course, we should strive to never stop growing on the whole, but this should be sustainable growth. It is okay to take a step back for every two steps you take forwards. It is even okay to feel like you are taking one step forward, one step back. Sometimes, it is not a ripe time to grow, like a seed biding its time through the winter.
Be kind to yourself and be patient. Whether you are going through the most amazing growth spurt or feel like you are at your lowest point, this too shall pass.
In the early 1800’s, an Oxford University student by the name of William Buckland pulled an unusual prank. One night, he took buckets of bat guano and spelled out “GUANO” across the lawn of his Oxford college. For those who have never heard of guano, it is the excrement of birds or bats. You can imagine the shock of the Oxford authorities the next day at the sight of poop on their prestigious lawn.
The guano was cleaned up immediately. But after a while, a mysterious phenomenon occurred. Everyone could see “GUANO” clearly spelled out on the lawn in tall, luscious grass, rising above the surrounding grass. Even after it was freshly mowed, the letters kept growing back, thicker and faster than the other grass.
The reasoning behind this is that guano is an excellent fertiliser. Animal excrements have long been used in farming as a fertiliser, as they contain vital nutrients such as nitrogen, phosphate and potassium that plants need to grow. Bat and bird poop in particular contain large concentrations of these nutrients.
Guano had been used as a fertiliser for over a millennium in the Americas, notably by the Inca. It was known to the West by the 1700’s, but the thought of applying poop to prestigious English gardens did not sit right, so it was not widely used there for a long time.
But by the 1800’s, many European scientists noted the potent ability of guano in transforming sterile fields into plentiful farmlands in Peru. Demand for guano rose rapidly as people caught on to how guano could drastically improve crop output and food production.
Ramon Castilla, the president of Peru, capitalised on this by exporting large quantities of guano to Europe. Peru had some of the largest deposits of high-quality guano thanks to its native seabird population, with entire mountains and islands of guano available for mining. The massive spike in guano trade resulted in Peru’s greatest age of prosperity – known as the Guano Era. Peru used this newfound economic boom to abolish slavery, eliminate head taxes on indigenous populations and establish a public education system.
This sounds like a success story, where a developing nation enjoys dramatic growth with improved quality of life for its citizens thanks to good timing and natural resources. However, the guano story has a far darker side.
To gain better access to guano deposits, wars were fought and genocides committed. Chile invaded Bolivia to seize control of guano islands, while the USA and Britain began colonising and annexing Pacific islands to access guano reserves. Aggressive mining of guano disrupted the natural habitats of seabirds and bats, destroying entire ecosystems dependent on guano for nutrients. In the late 1800’s, approximately 53 million seabirds lived on the guano islands; by 2011, only 4.2 million remained. Many people were enslaved or exploited to cut the costs of mining guano, with miners often working in horrific conditions.
Guano dominated global economics and politics of the 19th century, but its reign ended in 1913 when German chemists Fritz Haber and Carl Bosch successfully started mass-producing ammonia using a novel chemical process (Haber-Bosch process), fixing nitrogen from the air to cheaply produce nitrogen-based fertiliser. This ended the need for guano as a source of nitrogen, while dramatically improving agriculture and food supplies around the world.
Isn’t it fascinating to see how much impact bird poop has had on world history?
One of the most famous arguments in popular culture history is why at the end of the movie Titanic, Jack had to die when it clearly looked like there was enough space for both him and Rose to lie on the floating door.
Since the movie’s release in 1997, countless fans have lamented how the bird’s-eye view shows that both people could have lain side by side on the door.
But alas, science is an unforgiving mistress and it has since been shown that it would have been physically impossible for the two lovers to survive together on that makeshift raft (which was a wooden panel, not a door).
The film actually shows Jack trying to get on to the panel, when it tilts and starts to submerge, nearly flicking Rose off. Jack realises that the panel would not support both of them and chooses to only keep his upper body on it, while fending off other survivors trying to latch on. Unfortunately, this is not enough to keep him alive as he quickly succumbs to hypothermia and sinks to the bottom of the ocean.
The important question is not whether the two would fit on the panel, but whether the panel is buoyant enough to support both of them.
Buoyancy is the force that makes things float in liquids. It depends on the volume of liquid the object displaces and the density of that liquid. If the buoyant force is greater than the pull of gravity, the object floats.
Now, let us calculate how much buoyancy we would need to keep the panel, Rose and Jack afloat.
For the two to survive, the panel can be at most fully submerged, with their bodies staying above the water level. The maximum displaced volume is therefore the volume of the panel itself. Estimating from stills from the film and Kate Winslet’s height, we can calculate the panel as being roughly 1.85m x 0.95m x 0.15m, or 0.264m³.
Ergo, the maximum buoyant force on the panel would be Volume x Density of water x acceleration due to gravity = 0.264m³ x 1000kg/m³ x 9.8m/s² = 2587N (Newtons). (Strictly, cold seawater is a little denser at around 1025kg/m³, but 1000kg/m³ keeps the numbers simple and doesn’t change the conclusion.) If more than 2587N of weight is placed on top (including the weight of the panel itself), it would sink.
At the time of the production of Titanic, the estimated weights of Kate Winslet and Leonardo DiCaprio were around 549N and 686N respectively (note that in physics, weight is mass times the acceleration due to gravity, measured in Newtons).
Subtracting these values from 2587 leaves us with 1352N free for the panel. Since we know the volume of the panel, as long as we know what wood it was made out of, we can find the density and calculate the final weight.
Three types of wood were commonly used on the Titanic: teak, oak and pine. The densities of these woods are 980kg/m³, 770kg/m³ and 420kg/m³ respectively, meaning that the panel would weigh 2535N if it was made of teak, 1992N for oak and 1087N for pine.
Therefore, the maths show that for the two to have a snowball’s chance in hell of surviving together on the panel, it had to be made of pine. Teak and oak would have been too heavy.
This is where the final key becomes relevant: the wooden panel was likely made of oak.
The Maritime Museum of the Atlantic in Halifax, Nova Scotia, holds the largest piece of debris from the actual wreckage of RMS Titanic. This wooden panel (from above a doorframe) looks remarkably similar to the one that Rose survives on. In fact, a replica of this debris was used during filming. The material of the actual wooden panel? Oak.
If the panel was made out of oak, it could only hold Rose: 1992 + 549 = 2541N, just under the 2587N limit – enough for Rose alone to stay above the water level.
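If you want to double-check the arithmetic, the whole argument can be sketched in a few lines of Python (all figures are the rough estimates from above, not exact measurements):

```python
# Back-of-the-envelope Titanic raft check, using the rough estimates above.
g = 9.8                      # m/s², acceleration due to gravity
rho_water = 1000             # kg/m³, approximating cold seawater
volume = 1.85 * 0.95 * 0.15  # m³, estimated panel dimensions

buoyancy = volume * rho_water * g   # maximum supportable weight, in N
rose, jack = 549, 686               # estimated body weights, in N

# Densities (kg/m³) of the woods commonly used on the Titanic
for wood, rho in {"teak": 980, "oak": 770, "pine": 420}.items():
    panel = volume * rho * g        # weight of the panel itself, in N
    print(f"{wood}: panel {panel:.0f} N, "
          f"both float: {panel + rose + jack <= buoyancy}, "
          f"Rose alone: {panel + rose <= buoyancy}")
```

Running this confirms the conclusion: only a pine panel could have kept both of them afloat, while an oak panel – the likely material – leaves just enough buoyancy for Rose alone.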
And there you have it. Not even the power of love can overcome the cold-hearted, brutal law of the universe that is science.
As a species, we excel at managing emergencies. From natural disasters such as catastrophic earthquakes, to man-made tragedies such as acts of terrorism, to rapidly evolving global-scale deadly events such as pandemics, humanity has shown time and time again that we can band together, strategise and deploy resources to fight against acute emergencies.
From a young age, we learn how to approach urgent issues, such as calling for emergency services, putting out fires, dealing with wounds et cetera. Much of growing up is learning how to deal with various kinds of emergencies: how to organise and balance your finances when you are made redundant, how to console a friend when they are struck with grief, how to fix a car when it breaks down, how to get the internet working again…
But when it comes to “chronic emergencies”, we become stumped. These are urgent, pressing issues – from personal to global scales – that persist for long periods or come in waves, rather than striking as a single event.
Let us consider some examples of how humanity struggles with chronic issues compared to acute ones.
When a hurricane or wildfire strikes, we respond rapidly to provide relief and supplies to help those in need. But when it comes to climate change, we have difficulty coordinating our efforts or even agreeing what the problems are.
When you have acute pain such as a broken bone or a kidney infection, doctors and nurses will provide effective pain relief and treatments to make you better. But when it comes to chronic conditions and chronic pain, you will have to navigate frustrating labyrinths of medication regimens and bouncing from system to system with suboptimal control of symptoms.
Most of us will be adept at dealing with acute stresses in a relationship such as a fight involving a specific event or when our partner has a bad day at work or is faced with disappointment. However, we struggle to deal with chronic issues such as when our partner does not get along with our family or they are battling through serious mental health issues.
The list goes on and on and most readers would be able to think of specific examples from their own lives.
There are many reasons as to why we are better at managing acute emergencies over chronic ones.
Chronic emergencies tend to be more complex, with many layers and factors. Because they are long-term problems, they require long-term solutions with sustained effort and careful planning. And due to the chronicity of the problem, people start to lose interest in fighting it, or simply become exhausted and exasperated, leading them to give up or accept it as the new norm.
So what can we do about this? Obviously the solution isn’t to give up on chronic emergencies at first sight. Just as we strive in life to be better and better at addressing acute emergencies, so should we learn to better manage chronic emergencies.
As highlighted above, chronic emergencies are inherently different types of problems that demand a different approach.
The first step is recognising that there is a chronic emergency. Often, chronic emergencies appear as a string of acute situations, so we fight and fight until we burn out. Formally declaring an issue a chronic emergency helps you stop and rethink your approach. We must also accept that there are no easy or quick fixes for most chronic problems. They take time and effort, with many ups and downs.
To tackle a long-term problem, we need a long-term plan. It is easy to get distracted dealing with individual fires, while failing to see that the entire forest is burning down. Instead of treating each facet of the emergency individually, we can come up with standardised approaches and protocols to automatically respond to recurrent issues, while using our energy and resources to devise a more sustainable solution, treating the root causes.
For example, a student who makes a long-term study plan and dedicates time to assignments well in advance does much better academically than the average student who frantically pulls an all-nighter every time an assignment is due.
Lastly, it is important to recognise that this will be a hike rather than a sprint. You cannot go at maximum effort and burn the candle at both ends right from the start, only to burn out and be overwhelmed by the problem. Instead, pace things out, plan breaks, and set realistic, achievable, incremental goals rather than attempting to solve everything at once.
In chronic emergencies (and life in general), consistent growth and recovery are far more valuable than absolute targets.
Be kind to yourself and others involved when there are failures or unexpected disappointments, because you will have to continuously adapt, learn and be better to ultimately overcome the challenge.
These points are particularly relevant to personal chronic emergencies and long-term hardships such as bad economies, break-ups, grief and mental health conditions.
What drives our morality? Philosophers have argued and pondered for millennia where our sense of selflessness, altruism and honesty come from. Are we inherently good or evil? Do we only help others when it benefits us? How can we motivate people to act more morally?
One interesting study reveals a startling truth about our morality.
In 2006, biologist Melissa Bateson published a study in which she experimented with eyes. Her university tea room ran an honour-based coffee and tea system, where you paid the price of your beverage into a box. Because no one kept guard over the box, you could cheat the system by taking a free drink without paying. Bateson wanted to see if she could influence how often people paid by making a simple alteration to the notice banner.
The notice banner listed the prices for tea, coffee and milk. Bateson decided to add an image above the prices: either a pair of eyes or some flowers. She alternated the image week by week, recording the total earnings and the number of drinks purchased. She used different flowers and different pairs of eyes of various genders, ethnicities and expressions, but the eyes all had one thing in common: they stared directly at you.
The results were fascinating: in weeks when the notice banner included pictures of eyes, people paid 2.76 times as much as in the flower weeks.
It turns out that seeing a depiction of eyes makes us behave more honestly and cheat less. The same effect has been observed with cartoons or drawings of eyes, resulting in less littering, more donations, less crime and overall more pro-social behaviour. This is called the watching-eye effect.
Why do harmless pictures of eyes make us want to do good?
The effect is likely to be an unconscious, automatic reaction. Our brains are remarkably sensitive to eyes and gaze – which is why we can easily spot people staring at us and why we are so good at reading emotions from eyes.
Furthermore, we are social animals and thus have evolved to show pro-social behaviours so that we fit into the group and live together harmoniously.
This means that when we see even a symbol of an eye, our brain automatically thinks that we are being watched by someone, pushing us to act morally to avoid punishment or embarrassment. This suggests that our desire to preserve our social reputation plays a significant role in our morality (but by no means the only factor).
The other thing to consider is that as we grow up, we are continuously taught that we are being watched, to dissuade us from bad behaviour. God will send you to hell, Santa Claus will put you on the naughty list and Big Brother will send you to prison. All of these stories and cultural beliefs fuel our subconscious paranoia of being watched and fear of consequences.
So if your lunch keeps getting stolen from the fridge, try sending a message by putting a photo of eyes on it to see if it deters your coworkers.
The Second Law of Thermodynamics, one of the fundamental principles of physics, dictates that the entropy of an isolated system never decreases over time. If left alone, an isolated system will always progress towards thermodynamic equilibrium: a state of maximum entropy.
These are very long, technical words: what is entropy and why should you care?
Simply put, entropy can be thought of as a measure of how chaotic and disordered a system is. This is a slightly misleading simplification – entropy is really about energy moving from a concentrated state to a more dispersed one – but it is easier to understand this way.
An example would be a hot cup of coffee cooling down. The hot coffee is a concentrated locus of energy. Over time, this energy disperses throughout the coffee and into the surrounding (cooler) air, some of it carried away as water evaporates into steam. Energy slowly disperses out until the coffee reaches room temperature.
This makes the Second Law of Thermodynamics more relatable. When have you ever seen a cold cup of coffee heat up by itself without any heat source? For that matter, a spilt glass of milk never reassembles itself. Balls scattered randomly across a pool table will never form an orderly triangle by themselves. A dead person cannot miraculously come back to life. Without external influence, you cannot reverse the entropy of a system.
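This irreversibility can be made concrete with a small worked example. Transferring an amount of heat Q at temperature T changes entropy by roughly Q/T, so when heat flows from hot coffee into cooler air, the air gains more entropy than the coffee loses. The numbers below are illustrative, not measurements:

```python
# Entropy change when heat flows from hot coffee into cooler room air.
# Uses dS = Q/T; illustrative numbers, temperatures assumed roughly constant.
Q = 1000.0      # J of heat leaving the coffee
T_hot = 353.0   # K (~80 °C coffee)
T_cold = 293.0  # K (~20 °C room)

dS_coffee = -Q / T_hot   # the coffee loses entropy...
dS_air = Q / T_cold      # ...but the cooler air gains even more
dS_total = dS_coffee + dS_air

print(f"Total entropy change: {dS_total:+.2f} J/K")
# Heat flowing the other way (a cold coffee warming itself) would flip both
# signs and make the total negative – exactly what the Second Law forbids.
```

Because the cold side always divides Q by a smaller temperature, the total is always positive in the spontaneous direction; that asymmetry is the whole Second Law in miniature.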
In a way, you could define life itself as a battle against entropy.
The cells in our body are continuously fighting to preserve order and energy in our body, such as actively pumping salts in and out to maintain concentration gradients, rigorously preserving our body temperature to ensure that enzymes can function optimally and breaking down food to fuel all the processes keeping us alive. If we are left truly isolated (no heat, no food, no oxygen), then entropy will build in our body until we die.
The concept of entropy can be particularly motivating when we consider that entropy doesn’t just apply to physical energy. Our brains are also subject to entropy: if left alone, they will default to the lowest energy state.
The best way to counter this is by focussing on a key part of the Second Law of Thermodynamics: entropy never decreases in an “isolated” system. Some things are irreversible, like the eventual heat death of the universe or our own mortality, but in many systems we can intervene to restore order. A cold cup of coffee can be reheated in a microwave. A leaking tire can be patched up and inflated. A spilt glass of milk can be turned back upright and refilled with more milk.
Ergo, we must prevent ourselves from being isolated systems. There are three main ways we can do this.
The first is to stimulate ourselves from external sources, reheating our metaphorical cup of coffee. This includes hobbies and interests, learning new things and expanding your horizons. Our brains are naturally fuelled by curiosity, passion and experiences.
The second is to connect with other people. Healthy social interactions keep us grounded to reality and inspire us to be better versions of ourselves. People can provide us with new knowledge, insights, wisdoms and love.
The last and most important is channelling our own willpower. We must fight against our natural instinct to be lazy by pushing ourselves to get off the couch, to exercise, to work, to create, to produce, to live. This is also the hardest, because if it were easy for us to “Just Do It”, we wouldn’t even be discussing how to beat entropy. Therefore, we need to create systems, habits and routines to trick our brain into working and being productive. In no time, you will find yourself automatically adjusting to stop entropic laziness from taking over your life, like homeostasis.
Isaac Asimov’s short story The Last Question tells of how even the most powerful supercomputer in the cosmos cannot answer how we might meaningfully reverse the entropy of the universe. But it turns out we can reverse the entropy of our own brains, and it is damn well worth the effort.