Should we really believe that a cat can be both dead and alive because ‘science’ told us so?
Are you telling me that everything we ever have and ever will see in this gigantic Universe was once squashed into a point smaller than an atom?
When we try to look into the scientific reasoning and explanations for why these absurd conclusions could be true, they’re often hidden inside a black box of equations and jargon that rarely feels worth the time to understand completely. At least that’s how I feel about my reading assignments.
These questions may seem trivial, even borderline fussy, to a scientist, but they’re crucial when explaining science to general audiences. Can new science materialise in society in the form of vaccines, driverless cars or telecommunication systems if it’s inaccessible to patients, patent offices or policymakers? The role of science in society shouldn’t be hindered by how foggy its inner workings look… right?
This esoteric nature of cutting-edge science probably wasn’t intentional. It might just be a result of how science builds on pre-existing work. Calculus was the cutting edge of maths centuries ago but is now covered in advanced high school and college courses. There’s no doubt that a lot of the low-hanging fruit has already been plucked, and today’s scientists have to delve much deeper into the intricacies of previous work to find ‘new’ science. It makes sense that the cogs and wheels behind most of today’s science feel obscure – they usually have to be!
But as highlighted earlier, this casts a shadow of doubt over the bold claims of General Relativity describing space as ‘stretchy’, or of thermodynamics explaining how there’s a small chance of your sugar ‘undissolving’ from your tea. Take classical physics, for instance. We know that the Earth appears to pull us all down, leading to this idea of gravity, but does the letter $g$ in our equations mean there’s a physical gravitational field permeating all of space? Or, in a more familiar light, consider the countless examples of how we ignore air resistance when predicting how objects move. It really does look like we’re playing to our advantage. These examples might make you feel like scientists have created a wonderful theoretical bubble where they can simplify and abstract reality to their whims. Does the maths only work when we say it does?
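To make that last point concrete, here’s a minimal sketch of what ‘ignoring air resistance’ looks like on paper (the standard drag-free projectile model, with illustrative symbols $v_x$ and $v_y$ for the launch velocity components): a thrown ball is predicted to follow

$$x(t) = v_x t, \qquad y(t) = v_y t - \tfrac{1}{2} g t^2,$$

where $g \approx 9.8\ \mathrm{m\,s^{-2}}$. The tidy parabola only appears because we’ve quietly assumed the air does nothing; put drag back in and the equations lose their neat closed-form charm.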
Speaking of maths, a lot of the justification for our scientific models is grounded in good old numbers. It isn’t very convincing when learners find out that a lot of our understanding of reality (everything from the electronics of the screen you’re reading this on, to the very nature of existence on subatomic scales) is built on allegedly ‘imaginary’ numbers. When faced with existential questions like this, it’s often worth taking a look back at history to see if this has happened before. More often than not, whatever you’re currently thinking has already occurred, at least as a passing thought, to one of the roughly 100 billion humans who have ever existed [1].
Let’s start with maths. Interestingly, maths isn’t a stranger to absurd new constructions. Wasn’t it quite strange for someone who had been accustomed to the numbers $1, 2, 3, 4, 5, \dots$ all their life to be introduced to the concept of $0$? A number designed NOT to do exactly what all the infinitely many other numbers were built for – representing a value. Well, that’s precisely what Aryabhatta did [2]! And how weird would it have been to be told that there are even more numbers hidden between $1$ and $2$ – and not just a few, but an uncountable infinity of them! And don’t even get me started on how painstaking it must have been to tell the average Joe that negative numbers are a thing! Numbers, something initially made for counting, now have signs? What does $-3$ days mean (besides the time left on that next deadline)?! So you have every right to raise an eyebrow at the notion of taking the square root of a negative number and slapping an ‘imaginary’ label on it.
Much like those historical figures, you’re probably scratching your head trying to figure out how to reasonably place imaginary numbers within the framework of concepts you’re used to. It doesn’t help that they’re called ‘imaginary’ – in fact, Gauss suggested they should be called ‘lateral’ numbers because of how they can be pictured at right angles to the real number line [3]. The square root of $4$ gives you the side length of a square with an area of $4$; what’s a negative area? And how do we multiply a number by itself to get a negative number? It turns out that all the maths we’re comfortable with is just a special case of a much broader number system.
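If it helps, here’s the ‘lateral’ picture in a couple of lines of standard maths (nothing here is specific to this article – it’s just the usual construction): introduce a new unit $i$ satisfying

$$i^2 = -1, \qquad z = a + bi \quad \text{with } a, b \text{ real},$$

so every ordinary real number is simply the special case $b = 0$, and multiplying by $i$ turns a point a quarter-circle about the origin, since $i(a + bi) = -b + ai$. Nothing mystical – just numbers sitting at right angles to the ones we grew up counting with.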
This idea of generalising what we already know into contexts we hadn’t thought of before doesn’t just happen in pure maths. Electromagnetism showed us that electricity and magnetism are two sides of the same coin – the same phenomenon, just seen from different perspectives. Even in classical mechanics, it’s possible to work out analogies between Newton’s equations for motion in a straight line and those for rotation. Generalisation is a vital step towards better describing the special cases we observe every day.
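For the classical mechanics example, the analogy can be written down almost word for word – a quick sketch using the standard symbols (torque $\tau$, moment of inertia $I$, angular velocity $\omega$ and angular acceleration $\alpha$ playing the roles of force $F$, mass $m$, velocity $v$ and acceleration $a$):

$$F = ma \;\leftrightarrow\; \tau = I\alpha, \qquad p = mv \;\leftrightarrow\; L = I\omega, \qquad \tfrac{1}{2}mv^2 \;\leftrightarrow\; \tfrac{1}{2}I\omega^2.$$

Same structure, different variables – which is exactly what makes the generalisation feel principled rather than arbitrary.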
But this still doesn’t answer the question of whether or not scientists can simply grant themselves all the assumptions their abstract, generalised machinery needs to work, and then pass off the conclusions they fish out of it as ‘new’ science. Well, just like before, this conflict was probably wrestled with at some point in history. And it was! Enter the scientific method.
You see, in contrast with how it’s often taught, science is an inherently curious subject studied by imaginative people. Hypotheses are generated by thinking through scenarios, wondering about the ‘what ifs’ and the ‘why nots’. So we can come up with the wackiest models we could possibly conjure, but the gatekeeper that decides whether or not they make it into the textbooks is experimentation: setting up what you have imagined and working out whether it holds up, and whether it does so significantly. It doesn’t stop there. You’ll have to get a bunch of fellow scientists on board with your idea, convince them that you’re not making this up, and then get your paper published. Of course, experiments never definitively confirm that something is $100\%$ true; they always have a significance level to be accountable to [4]. But that’s more a fruit of thriving scepticism than a red flag for made-up science.
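To give a feel for what that significance level means in practice, here’s a toy example with made-up numbers (the probability quoted is approximate): suspecting a coin is biased towards heads, you flip it $100$ times and count $60$ heads. If the coin were actually fair, the chance of seeing $60$ or more heads is only about

$$P(\text{at least } 60 \text{ heads} \mid \text{fair coin}) \approx 0.03,$$

so at the conventional $5\%$ significance level you’d reject the ‘fair coin’ hypothesis – not because the bias is proven beyond doubt, but because a fair coin would rarely behave like this.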
So science isn’t broken after all! It may seem like a bunch of convoluted steps and generalisations, but as the esteemed physicist and science communicator Richard Feynman said, “If it disagrees with experiment, it’s wrong. In that simple statement is the key to science” [5].
A lot of the time when we encounter new or complicated-looking science, we shouldn’t question whether the ends justify the means, but rather check whether the abstract ‘means’ actually allow us to reach the real-world ‘ends’ that they claim.
References
[1] T. Kaneda and C. Haub, How many people have ever lived on Earth?, Population Reference Bureau 9 (2018)
[2] Bodleian Library, Carbon dating finds Bakhshali manuscript contains oldest recorded origins of the symbol ‘zero’, University of Oxford, hyperlink, last accessed: 20th Feb 2021
[3] C. F. Gauss, Theoria residuorum biquadraticorum. Commentatio secunda, Commentationes Societatis Regiae Scientiarum Gottingensis Recentiores (1831)
[4] E. Siegel, Scientific Proof Is A Myth, Forbes, hyperlink, last accessed: 20th Feb 2021
[5] R. Feynman, The Character of Physical Law Lecture Series, Cornell University (1964), hyperlink, last accessed: 21st Feb 2021