Follow the reluctant adventures in the life of a Welsh astrophysicist sent around the world for some reason, wherein I photograph potatoes and destroy galaxies in the name of science.

Saturday, 11 March 2017

That's Not The Way To Do It, You Blithering Idiots

Would you watch an hour-long documentary on YouTube if you could only watch it in 10 second clips and the next one didn't autoplay ? Of course not. But apparently people are fine with reading longish, very serious articles on twitter. Twitter ! In 140 character snippets ! The hell is wrong with you people ??? Have you taken leave of your senses ?!? And then, just to make the stupidity truly unbearable, when someone writes a stream of connected tweets, how do people reshare them ? By collecting them all together to form a single block of readable narrative text ? NO ! They copy and paste screenshots !! Aaaaaaarrrgghhh !!!

Come on people. Please. For goodness bloody sake. It's not like I'm asking you to, oh, I don't know, re-edit the entirety of Beowulf from poetic verse into prose. Just string a bloody sentence together, you twerps. And just because the BBC like writing articles in one-sentence "paragraphs" doesn't bloody mean you have to.

Look, I'll give you an example. Scott Adams, the creator of the genuinely amusing Dilbert webcomic, recently wrote a blog post ostensibly about how climatologists could convince skeptics that global warming is a real, human-caused problem. Actually, any genuinely good points about how to communicate are lost in the general fuggy tone of "I wish I was a lunatic but I can't quite make it". The thing feels like oh-so-much bullshitting to me, presenting an apparently reasonable goal and then proceeding in a bizarre, overtly hostile fashion.

Yes, I'm aware of the irony of attacking someone for being hostile, so don't bother pointing that out.

Aaaanyway you don't have to take my word for it, and nor should you. I'm forever going on about the need for actual climatologists to engage with people instead of ignorant astronomers like me, and cometh the hour, cometh the climatologist. In this case a guy named Gavin Schmidt, director of NASA's Goddard Institute for Space Studies. Clearly not a dull chap at all, but how did he respond to Adams ? Through a series of tweets. Which were then recollected by some blogger into this unreadable format.

Right. Fine. Let's put this in a readable format then. The text below is entirely by Dr Schmidt : I've simply joined the tweets together, removed linking ellipses, replaced & with and, 3rd with third and suchlike, invented paragraphs, and generally tidied up. I will keep the title from the original blog. All of my own edits are clearly marked in red. Don't bother debating any of the issues with me, I'm just presenting this as a public service because I wanted it in a readable format.

Here we go then.

Dr. Gavin Schmidt’s Epic Response to Scott Adams

Scott Adams says, "You scientists can't make me look beyond tired talking points and clichés !" This is of course true. Sad. But since I'm in the mood for a totally futile exercise, here's why his points are disingenuous at best.

Let's start with models. Remember George Box ? "All models are wrong, but some are useful". It's not just true for climate, but also quantum physics, GR, SM etc. It would be criminally irresponsible for scientists not to explore real world impacts of real structural uncertainty in complex systems. And we know that climate is complex :

It's because some aspects of climate change are robust to model differences that we have confidence those aspects reflect reality, but of course we need to (and do) evaluate predictions of these models in out-of-sample tests (including but not exclusively the future). For example here are some climate model predictions made ahead of time: Mt. Pinatubo; spatial patterns; stratospheric cooling; reconciling palaeo-data.

And there are more. Indeed, the collection of models in the CMIP3 database (created in 2004), can also be tested against observations.

Adams' third point is just bizarre : All of the warming in the last 60 years is human-caused. See here for references.

He is confused that the IPCC indicates that the human contribution is actually more than the observed trend, but ignores the possibility that natural factors would likely have led to cooling over that time. See here for more on that.

His fourth point is just off the wall. Someone else can deal with that.

Point five is just the 'climate has changed before' trope. Duh. Of course it has and for many different reasons: Asteroids, plate tectonics, orbital wobbles, evolution, volcanoes, the sun, fires, ocean circulation, etc. But just like a crime scene with multiple suspects, scientists look for fingerprints in the data to match up potential causes with reality. We know that orbital wobbles drove the last 2.5 Myr of ice age cycles and that greenhouse gas and dust changes amplified them. We have good evidence Cenozoic cooling was due to decreases in CO2, combined with tectonic triggers that changed circulation, isolated Antarctica and set the stage for recent glacial periods.

All these past changes are fascinating and piecing together the evidence is fun but in no cases do we have as much information as for the 20th century. Our ability to pin down the details for the last 60 years is much, much better than for the ice ages (despite the bigger signal). A claim that attribution of recent change is compromised because attribution from some earlier, data-poorer period is unclear is equivalent to claiming that a recent murder conviction should be vacated because an ancient Roman skull has just been found in Europe.

(bear with me people, we are almost done)

Point seven : The earth has warmed as predicted you pillock.

Point eight : Record high temperatures and the rate of warming are proof of something : that predicted changes in the system are occurring.

Point nine : Records break all the time, but not equally. Far more hot records are being broken than cold ones.

Point ten : Really ? We can't ignore deeper understanding of processes, better data, bug fixes over time. But the basic results have not changed.

Point eleven : No idea where his local beaches are, but insurance companies pay so much attention to SLR threats that you can't get private insurance along much of the East Coast.

Point twelve : Oh please. The issue is not the absolute temperature. If humans had evolved in the much-warmer Eocene, I'm sure we'd have been fine. Sea level was 80m higher and no-one could live in the tropics, so cities would've been built further inland/poleward. Unfortunately that isn't where people live now ! Roughly 100 million people live within 1m of high tide, many trillions of dollars in infrastructure too.  More than half the world lives in the tropics, farmers rely on climate (temperature/rain) to grow food etc. etc.

Finally, climate change impacts are happening now : Greenland's losing mass, heat waves are worse, rainfall more intense, Arctic ice is disappearing, permafrost is melting etc. It is neither a hoax nor a distant possibility. It's here, it's now and risks of much worse are real.

I'm sorry that Scott Adams feels that people are not respecting his oh-so-clever concern trolling. Truly. I mean, why aren't I being nicer ? Surely, I should take his well-meaning advice and up our communications game, convincing him and others that it isn't a hoax ? But this misses the point entirely, because scientists/communicators/National Academies have done all of these things and more for years. And yet, people like Scott Adams still repeat nonsense, choose to misunderstand points and prefer to argue rather than deal.

At some point, you have to ask yourself, maybe the problem is not the communications or the communicators ? Sure - we should continue to hone messaging, listen, adjust the frames, use trusted messengers, make better visualizations, answer questions, increase relevance but the deeper issues of why people selectively reject evidence that goes against their group ideology go mostly unaddressed.

I don't expect this tweet storm to impact Scott Adams at all. That wasn't the point. Sometimes it just feels good to vent. Done.

Thursday, 9 March 2017

Let Me Entertain You, With A Great Big Book

Specifically this book. It's big enough to clobber people with.

What's it about ? Well, once upon a time philosophy and science were the best of friends. They went everywhere together. They played together. They solved problems together. Hell, they discovered logic together. Really, it was one of the most adorable things you've ever seen.

But something went badly wrong. Philosophy and science grew more and more distant. Science kept telling philosophy that it wasn't doing anything useful any more. Philosophy accused science of forgetting its most basic principles. Things were said. Mistakes were made.

Philosophy gets a very bad rap from a surprising number of prominent scientists and science advocates. I've never really understood why - it feels like cutting off your nose to spite your face. Trying to understand your own internal biases, the biases of other scientists, the reasons for those preferences, whether your methodology is correct or another would be more appropriate and why, whether the observed correlation could have an underlying cause, whether you've really ruled out a hypothesis or if it can be saved, if you should try and falsify a hypothesis or seek to prove it, what the implications of your research will be... these are all very basic philosophical questions. Ones you literally can't do proper science without asking, at least subconsciously.

Thankfully, in my experience at least, most scientists are interested in these questions - regardless of what nonsense the TV scientists have to say about it.

Philosophy's role in the day-to-day process of doing science cannot be overstated. But I'm not working my way (500 pages at present, out of 1700) through the complete works of Plato for hitherto unknown tips and tricks to get me ahead of my philistine competitors. Plato wasn't a scientist or even a natural philosopher : he was a metaphysician*, an ethical and political inquirer. So although many of his teachings are directly applicable to science, they also have much broader implications. And while it's true that studying philosophy is a way to understand logic, which is definitely important in science, reading Plato gives insights into far more than the basic principles of reasoning. It reveals a world of ruthless logic and irrational devotion, moral principles that are both inspiring and terrifying, thought processes that are sometimes simplistic and sometimes incredibly sophisticated - and often all this at the same time. Far more important than teaching what to think or even how to think, it reveals that there are entirely different ways to think - and leaves the reader to decide for themselves which, if any, is correct.

* I wonder why it's "metaphysician" and not "metaphysicist" ? Surely a physicist would be more appropriate since physicists study the world whereas physicians try and heal people ? Do metaphysicians try and heal people through discussion ? Answers on a postcard in the form of a Socratic dialogue, please...

That's some fairly heavy stuff, so let's start with some simple examples. Things that are directly relevant for the scientific, logical approach we take for granted today.

Figuring Stuff Out

In Plato's dialogue of the same name, Parmenides demonstrates that it's not enough to investigate if one idea is true. To fully examine the issue you should also consider the case that it's not true. Which is a running "theme" of Plato, in a sense - a superficial analysis is just not good enough. You need to think very deeply about whatever it is you're proposing, attack it from many directions and see if it stands up. Similarly, you should not assume you've found the correct explanation just because it works - another explanation might do an equally good job or even better.

In another dialogue, Plato has Socrates reveal one of the most crucial lessons of examination of all :

The point of dialogue is not to attack each other personally, or even to attack each other's ideas. Attacking ideas is only a component part of the real goal : to get at the truth. There's a peculiar idea in vogue at the moment that science can only falsify things and not prove them; not everyone agrees, myself included. Plato took it for granted that the whole point of investigation was to understand how things actually are, not just keep chucking out wrong ideas.

But there's a far more important point to the quote. Many of Plato's works are clearly set-pieces, using dialogue as a convenient way to expound his own views with a readable narrative. But others are far more earnest. Many don't establish a firm conclusion at all : the participants simply go away with a shrug, saying, "oh well, we've failed".

There are probably several reasons Plato bothered to record such discussions. I suppose a few might be because he simply gave up, or never got around to finishing. But in at least some cases - the Apology purporting to be one of them - it seems that they are a fair approximation of a real discussion that actually took place. They certainly feel very earnest, a quality not easily expressed in a single quote or two - a genuine and sincere attempt to establish the truth. But still - the discussion fails, so why bother writing it down ? For future readers to build upon so they don't have to start from scratch ?

Perhaps. But I think a far more important reason is because the process matters infinitely more than the conclusions. We often today use "talking shop" as a derogatory term, a means by which nothing will be accomplished. Plato makes a powerful case for talking shops as valuable in their own right. The dialogues encourage us not to reach conclusions so much as they simply inspire us to think. To examine an issue for its own sake, to exercise the mind and not let it go stale (to mix metaphors), is sufficiently its own reward. In that sense the discussions never fail.

The second quality that the above quote hints at rather nicely is the nature of the discussion. Because the participants enter into the debate with the knowledge that they are after the truth and enjoy thinking for the pleasure of thinking, they are free to attack each other's ideas without mercy. They can be freely dismissed as absurd and ridiculous without fear of causing offence. The playful nature of the discussion is apparent on many occasions, and would have been much more so in the original Greek - Plato, it seemed, loved a good pun, though of course they're sadly lost in translation.

Then there was that time in Theaetetus that Socrates opened a discussion by checking if his opponent was really as ugly as the rumours claimed. Or that eloquent speech in the Symposium where he essentially says, "I was deeply moved by your beautiful pack of lies."
This absolute freedom to shoot down other ideas, even mock them, without the other person taking it personally - this is something that too often seems almost completely lost today. The notion that someone could be raising an idea just for the sake of discussing it, rather than it being their own pre-existing staunch conviction, is almost unheard of unless it's excessively, fawningly prefixed with "I'm just playing devil's advocate" or something similar - and even then we almost never really believe the person is sincere. Consequently when people change their minds during a discussion, they're seen as contradicting themselves, as being foolish and stupid even when they now profess to agree with us. The default is for one side to declare victory and continue attacking their opponent even after having won (especially common is to use their previous, retracted argument against them as evidence of their stupidity or poor moral character), rather than both sides rejoicing that they've found out something they didn't know before. In Plato's dialogues, having the courage of your pre-existing convictions is nowhere near as important as the courage to seek the truth and change your mind.

That's the heart of it : you are free to consider any idea without anyone having the foolish notion that you actually believe it. You are merely entertaining it, exploring it for the sake of exploration. Of course, Plato certainly had his own convictions just like everyone else does, but the ideal of a dialogue as such a pure search for truth is no less potent for that.

So it doesn't seem strange when Plato's characters freely criticise each other's ideas or even their own. Or when they point out some fundamental flaw or self-contradiction they've just spotted. The process is itself worthwhile whatever the outcome. I suppose that if Plato had access to a modern word processor, it's quite conceivable that he wouldn't have done it this way, that he'd have gone back and edited it. But I rather doubt it, and in any case who gives a damn ? Like the value of the process over the conclusions, the ideals of the technique are worth aspiring to regardless of whether the ancient philosophers really behaved like this or not.

Not Always Stuck In The Clouds

One of the common accusations levelled against Greek philosophers is that they were far too concerned with theory and didn't believe in observation. The idea goes that the Greeks believed that the senses could fool and deceive us, whereas the mind (or the soul) was above such petty corruptions. As such only pure reason could lead to truth, and it took the more practical Romans to actually do anything useful.

It wasn't really like that. Without doubt an element of this sort of thinking did exist, but Plato makes a very great deal out of observational analogies. Observation is clearly how we learn about the world around us whether we like it or not. Analogies aren't merely so prominent in Plato in order to make the discussions lively and engaging - they're essential. Although he was extremely interested in pure thought for thought's sake (what do we really mean by knowledge ? what is number ? more on that later), he was also deeply concerned with very practical issues : justice, virtue, ethics. And Plato also understood how people think : not how he might like them to think - as rational and motivated by their own self-interest - but as they actually do think. 23 centuries before psychologists Dunning and Kruger realised that stupid people don't realise they're stupid, Plato had jumped ahead to the reason this should be :

Plato describes Socrates describing a "wise woman" who knew about sacrifices to avert the plague. Which immediately brought to mind the Wise Woman from Blackadder. Yes I know this isn't the wise woman, but I don't care.
Understanding such processes is vital for just about every field of human endeavour. For though science doesn't care what people think of it, people will not act on rational findings unless you can convince them. Unless of course you just put the scientists in charge...

Not that such understanding is limitless. In a rare moment of despair, in Crito the normally dauntless Socrates laments the behaviour of the masses :

My point is that Plato was not some ivory-tower intellectual sitting around thinking the whole day and knowing nothing of the real world. He pleads with his readers to ignore the pursuit of fame and fortune and to make self-examination - not blindly trusting his own conclusions - and the quest for self-improvement their goal above all others. This is spectacularly epitomised in the Apology : arguably one of the greatest works of human achievement in any field, ever. Seriously. I considered just copying and pasting the entire thing here, but I resisted. Though if you just want an executive summary, go here*.

* "Apology" just derives from the Greek word for legal defence. That it contains not one word of actual apology is something that Plato, ever the fan of puns, would certainly have appreciated the irony of.

There's much more of course. But the Apology deals almost exclusively with ethics, and here I'm trying to loosely concentrate on rational thinking.


And yet... for all his exhortations to get his fellow citizens to think for themselves, Plato ultimately reached a conclusion that today looks very disturbing indeed. It's a running theme which appears again and again. While we can never be certain if many of his quotes really came from the people in his writings, this one is such a constant that it's hard to believe it didn't originate from Plato himself.

In one sense this can be seen in an entirely favourable light, if we allow for a minor change. Something that more expert people agree to is surely worth more than something non-expert people agree to, within the specified field of the experts. If this wasn't true, you'd have florists building particle accelerators and molecular biologists doing safety inspections on oil rigs : as far as establishing what the truth is, then of course it doesn't matter what the great unwashed think of it. How we should apply that truth is another matter entirely.

But Plato often takes this very much further. Although he was aware that not everything should be taken to extremes, he did have a tendency to slip into this way of thinking. And this is one idea he develops to its full, terrifying potential in Republic. Kallipolis would be a state not merely run by its famous philosopher kings, but one in which everyone at every level of society had a single job to do, allotted them by committee. Since people (Plato reasoned) perform best when concentrating on a single job, everyone would have but one task for their entire lives that they would literally perform as best as humanly possible. It sounds bizarre and monstrous when you put it like that - and it is bizarre and monstrous - but what's really terrifying is just how damn reasonable, even benevolent, the whole thing sounds when you read Republic in full. For example, if people aren't really "better" than other people, just different, then what the hell do we mean when we talk of self-improvement ?

Plato, then, was just as flawed as the rest of us. The notion that some people are better at some things than other people slips so easily into the notion that some people are just better. It has thoroughly racist overtones, and Plato (thus far in my reading) didn't give the existence of slaves much pause for thought either. Could even Plato's genius eventually have salvaged this idea and turned it into something more moderate ? We'll never know, but as it stands the idea is certainly not worth defending. So the next quote I have to take from another source entirely.

For all his wisdom, in this instance Plato had no subtlety. He surprisingly failed to consider another very basic property of human nature, one the writers of Futurama understood very well.

But that doesn't mean he wasn't a rich, complex genius, and it would be rather small-minded of us to dismiss Plato's entire corpus because of this one idea. After all, he never even had any opportunity of putting his idea into practice - it was only ever an intellectual exercise. And it would be rather hypocritical to suggest that one of the best lessons of Plato is that we can merely entertain an idea without believing it, only to then use this one mad, offensive notion against him. Many of his other works are quite different in tone : playful, exploratory, far less certain, sometimes being explicitly much more concerned with methods of inquiry than conclusions. This is emphatically not someone who wants us to accept his conclusions as dogma.

What's It All About, I Mean, Really, When You Get Right Down To It ?

In his quest to uncover objective (though perhaps not absolute) ethical truths, Plato outlined many lessons important in the modern scientific method - some of which people tend to forget more than they should. But instead of applying these techniques to science itself, he used them to examine the even more important fields of ethics, psychology and metaphysics. Behind the dramatic flair of the Apology and the Symposium lies some incredibly sophisticated, complex thinking.

For all that the conclusions of the dialogues are not so important as the method, if there was one thing Plato valued above all else it was surely rigour. If he came to a wrong conclusion, it was not because he hadn't tried damn hard to make his argument as waterproof and unassailable as possible (of course at his worst that means "rationalising" the argument, deliberately trying to logically justify it without ever really finding the truth).

Socrates is famous for being pronounced as the wisest man because he alone knew his own ignorance, whereas everyone else thought they knew things they didn't : "I am wiser than he is to this small extent, that I do not think that I know what I do not know." But Plato is clear that this isn't supposed to be the limit of wisdom. The purpose of knowing one's ignorance is so that you can entertain possibilities you wouldn't have otherwise considered, and to finally start learning. Sure, your conclusions might be wrong - but you have a duty to try and establish them as rigorously as possible.

"Multitudes" hear not referring to people, simply numbers : he just means that quantifying things is a fundamentally good thing.
To reach that level of rigour Plato often uses seemingly tautologous statements to explore an issue. Many individual steps in the argument are each incredibly simple and extremely difficult (though not always impossible) to argue with. Yet the overall process can sometimes seem like it's turning black into white, turning an apparently simple, self-evident concept into a tortured, screaming mess. It's a bit like this :

Such an examination can lead to some concepts we often think of as being very modern scientific advances. For example in Parmenides (and elsewhere) Plato hints strongly at the concept of time travel, noting that it's not possible for something "which comes to be older to come to be younger than itself". The implicit assumption is that time flows in one direction. In Cratylus he describes what's now sometimes known as the Leibniz Identity (or more accurately the Identity of Indiscernibles), that if one thing is absolutely identical to another it must actually be that thing :
"Names would have an absurd effect on the things they name, if they resembled them in every respect, since all of them would be duplicated, and no-one would be able to say which was the thing and which was the name."
Think this is without any real-world consequences ? Perhaps it is for now... but it's been bugging the hell out of Star Trek fans for over fifty years. What comes out of a teleporter : is it you, or do you die and get replaced by a copy ? True, we still can't teleport people, and perhaps we never will. But quantum theory has particles "tunnelling" through barriers all the time. Are they the same particle that went in or "just" a copy ? What does identity really mean ?

Philosophy, unlike science, is not constrained by observation and experiment : it can explore raw concepts. This makes it tremendously powerful, but also sometimes ferociously difficult... and extremely boring. In Sophist Plato gets hung up on the bizarre idea that false knowledge is impossible - which he then tiresomely refutes. Parmenides is a particularly interesting but also incredibly tedious discussion on the nature of one. It's a tough piece. If I say I understand more than a fifth of it, I'm probably deceiving myself... yet I'd dare to venture that the main point is profound. While we often say that mathematics is the universal language, we don't often stop to think what that means.

Language is how we describe the world. Mathematics is often held to be a more objective language, at least when coupled with observations, because numbers don't lie. But it's still a language, in Plato's view, with all the subjective problems that implies. Can you define a word without referring to other words ? Perhaps - in some cases. "Bird" is an easy one. But "justice" is not.

In Parmenides we endure a frightfully laborious description of what "one" is. If we don't even understand that, how can we possibly claim to understand anything else ? And yet, perhaps we do. Perhaps language - mathematics included - simply invokes a deeper understanding within the brain : it's an abstract concept, but we all know what one is. True understanding may not come from verbal language or even mathematics, it's something ineffable. Plato's wise woman in the Symposium hints at that, noting that people can sometimes make correct judgements without being able to give a reason. Being able to articulate a thing (either mathematically, verbally, or otherwise) is not the same as understanding a thing, and vice-versa. Rather those expressions of a thing may only invoke that deeper understanding.

When you think, do you think in sentences ? I do, most of the time. So my subconscious has already done the tricky part of assembling my true thoughts into intelligible sentences, putting one word ahead of another to form something that's usually coherent. Occasionally, especially with very difficult problems, it's possible to sense that deeper understanding falling into place more directly, an "ohhhh" moment before a verbal description is possible.

While for science we have to assume that mathematics is a language without the subjective flaws of poetry and literature, philosophy does not. It doesn't have to award mathematics the exalted position of giving a more meaningful description of reality. Mathematics may help us understand the world in a different way, but that's hardly true for most people. Personally I would say that raw numbers and equations are far less meaningful to me than a verbal description of a process. They can certainly be more useful, but that's not the same thing as meaning at all.

If we take mathematics to be just another language - albeit a powerful one that opens up capabilities we would never have with other languages - that changes the way we see the world. Mathematics is revealed to be full of abstract concepts just as much as the rest of language. Might we eventually find an even more powerful language that gives us the objectivity of mathematics with the clarity of meaning of ordinary dialect ? Time for another guest to intrude on Plato's moment of blog-based glory.

My point is that you can swap, "the kind of language you use to fix your Volkswagen" and "poetry" here and still have the statement hold true.


If you read philosophy looking for answers, you're doing it wrong. If you judge an era exclusively by the standards of your own, you're doing it wrong too. Plato did propose answers, and many of them have stood the test of time - especially the methods of inquiry. But for all its similarities to modern Western culture - it was after all hugely influential in shaping the Western mindset - other aspects of this early culture are distinctly alien. Modern society struggles with homophobia; ancient Athens was extremely homophilic. For men to love other men was seen as far more manly. The Symposium features a long monologue in which the drunken Athenian general Alcibiades describes, basically, just how much he'd like to bone Socrates. In the Symposium, Pausanias even feels it necessary to state, "As a matter of fact, there should be a law forbidding affairs with young boys."

Moreover, despite the importance of the Greeks in shaping our very notions of rational, logical thinking, spirituality played a much larger role in their investigations than would ever be permitted in modern science. For example, although we sometimes think of "intelligent design" as a relatively modern religious invention (one which makes no sense because the Universe is a stupendously badly-designed place for us to live), Philebus reveals that this argument is ancient. In the dialogue, Socrates questions whether this assumption is really true (at the end of the Apology he also gives a beautiful defence of agnosticism, despite having protested his theism very loudly), but his opponent Protarchus is quite adamant :

To a modern scientist the notion that a mind could order reality, let alone would be necessary to maintain that order, looks extremely strange. It hasn't lost its hold over religious thinkers though, who take this to be true in varying degrees.
Clearly there were very mixed feelings in ancient Athens towards questioning the gods. Theist or not, Plato seldom comes across as irrational, though many of his characters do. In any case he rarely gets bogged down in theological issues - in this instance deftly ignoring the central point as to whether reason orders the cosmos, choosing to turn his attentions to reason itself instead.

So Plato reveals a world both very different and very similar to our own. His methods of discovering the truth are not the only weapons at our disposal, but they are still every bit as relevant. His aim of discovering the truth is even more important : another theme wending through his work is that truth is not always as we might wish it, but we must seek it out even so. That is a lesson many people seem to find fundamentally difficult to learn. Our society is not ancient Athens, and ancient Athens anyway wasn't the paragon of democracy and rationality that it's often made out to be - but that doesn't mean it doesn't have some valuable lessons to teach us.

No-one sane would claim that our own society is perfect, but at least now it's an extreme minority who claim that slavery is perfectly natural, that war is virtuous, or take the principles of racial superiority for granted. We can either act with extreme cynicism and hypocrisy, dismissing the ancients as worthless even as our own society struggles with wealth inequality, double standards, discrimination and jingoism... or we can try and examine the past and learn from it - both from its mistakes and its wisdom. So I will end with how Alcibiades described Socrates, and suggest you apply his advice to this very quote. Don't look at the mere surface. Look deeper, search for meaning and interpretation, consider radical alternatives for the sheer joy of doing so. Think.

Sunday, 5 March 2017

Ask An Astronomer Anything At All About Astronomy (XXXIV)

So here it is - after almost two years, the 300th question !!! Let joy be unconfined ! Let there be dancing in the streets ! Let there be questions with answers both sincere and sarcastic !

1) Do irregular galaxies have black holes in their centres ?
Your mum's got... oh no. Too much. BAD RHYS !

2) Why doesn't FAST have an asteroid radar system ?
That's what you get from a cheap Chinese knock-off.

3) Can we see the colours of the flags from the Apollo Moon landings ?
Nah, it was all a hoax. They never took any flags, just some old curtains.

4) Should asteroid 2017 EA be a cause for horror ?

5) Are there any advances in astronomy that you're excited about ?
No. I cry myself to sleep every night.

Friday, 3 March 2017

This Is Not The Crisis You're Looking For

There's no such thing as perfect research. Consequently there's no perfect way to review research either. Yet there seems to be no shortage of "peer review scandal" articles which, taken out of context, can give the erroneous impression that we're in the middle of some sort of crisis. Well, we are, but it's not a crisis of the peer review method - it's a crisis of funding.

In an ideal world all research would get funding. It's incredibly difficult to determine which is the best research to pursue, because even if you don't care about knowledge for knowledge's sake, you can't be certain which research will generate the coolest spin-offs. Alas, for the moment we live in a world of finite resources and so someone does have to choose what to fund and what not to fund.

The funding crisis in research manifests itself in some big, obvious ways. Unique, world-class facilities are being threatened with closure, massive cuts, or being forced to privatise. The world's largest steerable telescope, the GBT, is now relying on funding to search for aliens. Aliens ! For goodness bloody sake. I'm not saying the search for aliens isn't worthwhile, but it's an all-or-nothing deal : you either find 'em, or you don't. Past methods have been based on piggybacking the alien hunting on regular science projects, which was fine, but prioritising the search for aliens (which will take decades with no guarantee of success) over standard projects (which are 100% certain to detect something and increase knowledge, even if it wasn't what was expected) is depressing.

This is a consequence of poor marketing : exciting but very unlikely breakthroughs are hugely emphasised to the detriment of more ordinary but equally valuable research, which is slow, careful, and while it may be interesting to a degree it can hardly be accused of being exciting. It's also a result of the long-enduring myth of the lone genius, which is subtly but fundamentally wrong. Yes, geniuses do make revolutionary breakthroughs from time to time... but no, they pretty much never do so entirely on their own without any influence from their peers and predecessors, i.e. devoid of any contact from the legions of ordinary, careful, methodical, "slow" researchers.

But I digress, as usual. The GBT, the ongoing saga of Arecibo and the cancellation of several fully armed and operational space telescopes are the most obvious examples of the funding shortage. Debacles in peer-reviewed publications are, I believe, a more subtle effect.

Science is a creative process, but it doesn't exist outside of real-world concerns like where the next pay cheque is coming from - or indeed where the next job is coming from. Even with the best will in the world, scientists have to eat, so they can't escape the pressures forced on them by funding. While we can and must campaign to increase funding, we also have to accept that dramatic changes are unlikely on a short timescale. So how can we ensure that good science gets done in this overly-competitive, depressingly business-like environment ?

The problem is not too many cooks, but a lot of spoiled broth

Scientists are a lot like chefs. They spend most of their time in their own kitchens (research institutes) producing delicious meals (research papers) but do sometimes go along to each other's restaurants to sample their rival's cuisine (review their papers).
With fierce competition for jobs, it's inevitable that employers resort to using very simple methods to assess candidates : largely publication rate, hence the "publish or perish" guideline. Hence the obvious tactic : publish lots of mediocre papers. This is not the way it should be, of course, because while you can't know for sure which research is valuable, you can certainly make some very good guesses. You can also know when mistakes have been made, and some publications are full of crap; not everything falls in the grey area of controversial research. That is, after all, why we have peer review in the first place.

Ideally employers should actively examine the publications of their prospective candidates, but that's not possible when there are a hundred people applying for one position and they each have ten publications. One other method they use is to look at citation rate - how often each paper has been cited by other researchers. But that's of limited benefit as well, because high-quality research can slip through the cracks while provocative bullshit often generates a furore, with most of the citations dismissing the research rather than praising it.

The problem is that the metric of publication rate is too simple. It makes sense to consider this at some level - do you want a junior researcher or someone with more experience ? But a publication is a publication. There's no way to judge by glancing at a C.V. whether that research was really top-grade or just plain mediocre. But what if there was ?

Let us know what we're eating !

Currently, scientists have a choice of either publishing in a regular journal, or Nature or Science. That's largely the extent of the differentiation between the journals : really prestigious, or normal. The system is very easy to game - you just do some minor, incremental research and publish it. It's not necessarily even bad research, it's just not particularly interesting but it boosts the publication rate just as much as a more careful or thought-provoking piece would. My proposed solution is that we reform the publication system in quite a simple way that would make the system harder to abuse.

We need these small, incremental papers, but we shouldn't pretend that all papers are created equal. Nor are all referee reports of equal rigour. We shouldn't try to suppress the weaker papers any more than we should insist on absurdly high levels of careful reviewing, because reviewers, being human, are subject to their own biases and we don't want to risk chucking out a good idea because one individual got up on the wrong side of bed. What I'm proposing is that we try and label the papers as a guideline by which employers can quickly assess performance based on something more than sheer number of publications.

I say "label" rather than "grade" because this can be a complex non-linear system. It might, for instance, be useful to label papers according to content. Some papers consist of nothing more than an idea and simple calculations. This is undoubtedly useful to report to the community but it hardly compares to a comprehensive review or a series of relativistic magnetohydrodynamic simulations combined with years of observations. Other papers consist entirely of catalogues of observational data while others are purely numerical simulations. Which ones are the most valuable ? For science, all of them ! But for an employer, it depends what sort of person they're looking to hire. A wise employer will seek to have a diverse range of skill sets, from the uber-specialists to those with more broad-ranging experience.

What this would change is that your employer would no longer see from your C.V. that you have twenty papers and instantly think, "Wow !". They'd see at a glance that you have, say, fifteen simple idea-based papers, three observational papers and two based on simulations. They'd know not just how much research you were doing, but what sort of research it was. You'd still communicate your findings to the community - maybe you'd even publish more under this system - but any half-witted employer would see that a candidate with three papers all combining detailed observations and simulations was better than one with thirty incremental results.

When submitting a paper, the authors could suggest a possible label but it would be up to the referee and the editor to decide if they'd accept it or not. This wouldn't stop so-called "salami publishing", where people publish endless variations on a theme or re-analyses that didn't find anything new with no novel techniques or methods used, but it would make them easier to identify. This might also cut down on the sheer volume of information we have to read. If authors are allowed and indeed encouraged to recognise the interesting but not Earth-shattering nature of their results they will a) reduce the levels of publication of the worst, most pointless "findings" of all and b) reduce the size of incremental papers. A lot of full-length papers under the current system would become more like letters - very short reports that get right to the point and don't have to re-iterate the methods that were explained in detail in some existing publication.

We'd have to call them something other than, "short, incremental, not very interesting articles" though. "Essays", maybe. I'll get back to the mechanics of how such a labelling system could be applied later on.
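To make the bookkeeping concrete, here's a minimal sketch of how such labels could be tallied from a publication list. The category names, the Paper record and the titles are all entirely hypothetical, invented purely for illustration :

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical content labels - the real categories would have to be
# agreed between journals, not invented by one bloke on a blog.
CONTENT_LABELS = {"ESSAY", "OBSERVATIONAL", "SIMULATION", "REVIEW", "REPLICATION"}

@dataclass
class Paper:
    title: str
    label: str  # one of CONTENT_LABELS

def summarise_cv(papers):
    """Count papers per label : an employer sees at a glance what sort
    of research a candidate does, not just how much of it."""
    return Counter(p.label for p in papers)

cv = [Paper("An idea about gas stripping", "ESSAY"),
      Paper("HI survey of a galaxy cluster", "OBSERVATIONAL"),
      Paper("SPH simulations of ram pressure", "SIMULATION"),
      Paper("More SPH simulations", "SIMULATION")]
print(summarise_cv(cv))  # Counter({'SIMULATION': 2, 'ESSAY': 1, 'OBSERVATIONAL': 1})
```

The only point being made is that the labels are machine-readable : a hundred C.V.s reduce to a table of label counts in seconds, which is exactly the sort of quick-but-meaningful metric employers currently lack.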

And let us know how well it was prepared, too

Papers could be labelled not only by content but also review rigour - not to be confused with research quality, because that's not the same thing. Indeed this might be necessary under the new system. If more complex papers are to be seen as more valuable, they'll need more careful review. All levels of peer review are going to need some basic guidelines, which will require some thinking about what we want journal-based peer review to actually mean. Currently, reviewers are given a free hand to request whatever changes they like.

For instance, the lowest level of review (for an "essay" paper, maybe) might be a single referee doing a check to make sure there are no internal inconsistencies, known problems with the methodology, factual errors etc. It would still be pretty rigorous, but the referee wouldn't be expected to check every calculation, numerical value, or citation. For the highest level, there might be three or more independent reviewers going through everything with a fine-toothed comb - every calculation, value and citation. But even this should not mean - unlike currently - that they get to dispute interpretative statements (i.e. "in our opinion...") unless those statements were in flat contradiction to the facts. Some standard principles need to be clearly spelled out if referee quality is going to be even remotely homogeneous.
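As a sketch of how such tiers might be pinned down explicitly - the tier names, referee counts and checklists below are pure invention, not any journal's actual policy :

```python
# Hypothetical review-rigour tiers. The real names, referee counts and
# checklists would need community agreement; these are for illustration only.
REVIEW_TIERS = {
    "essay": {"referees": 1,
              "checks": ["internal consistency", "known methodological problems",
                         "factual errors"]},
    "full": {"referees": 2,
             "checks": ["internal consistency", "known methodological problems",
                        "factual errors", "calculations", "citations"]},
    "replication": {"referees": 3,
                    "checks": ["internal consistency", "known methodological problems",
                               "factual errors", "calculations", "citations",
                               "raw data", "analysis code"]},
}

def review_requirements(tier):
    """Return the minimum referee count and the checklist for a given tier."""
    spec = REVIEW_TIERS[tier]
    return spec["referees"], spec["checks"]

referees, checks = review_requirements("essay")
print(referees, checks)  # 1 referee, three basic checks
```

Writing the expectations down like this is the whole point : a referee assigned an "essay" knows they are not on the hook for every calculation, and a reader of the published paper knows exactly how much checking it received.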

One of the other dangers of an overly-competitive environment is that legitimate skepticism ends up becoming hostility. When your career depends very strongly on your publications, not only are you going to be reluctant to point out your own mistakes but you'll also have a vested interest in finding those of others. This is healthy in that it encourages reviewers to spot flaws and think deeply about the issues, but refereeing is not supposed to be the same as debunking. There ought to be a certain joy in considering new ideas, especially ones that contradict your earlier ones.

I'm not sure there's a review system which can incentivise this (more funding is really the best answer here), but certainly good oversight by the editor can prevent papers being shot down in flames because the referee was grumpy. One thing that ought to be a requirement at any level of review is that if you attack an idea, you should state explicitly what changes you want : do you insist that the authors remove an idea completely or merely want more emphasis on the uncertainties ? You should propose an alternative, not just say, "this is wrong".

Put the kitchen on display but allow the chefs to wear masks

Gosh, this is a useful analogy, isn't it ?
For all this to work, the review process would have to be completely transparent so that everyone could check if the reviewers were adhering to the rules - currently the author-referee exchanges are only seen by the journal editors. Admittedly, exactly what fraction of the early versions of the paper should be made public is a complicated detail, but having the author-referee exchanges public would give strong accountability to the system and make sure everyone was doing what they were supposed to be doing.

Reviewer identity is currently kept secret (to the author but not the editor) unless they choose to reveal themselves, and this needs to remain the status quo. Anonymity helps protect reviewers if they worry that their support for a theory widely perceived as wrong would be detrimental to their own career, or if they have to work with the author in the future, or if they're simply a much more junior researcher and afraid to publicly criticise a more senior author. It also reduces the chance that the author and reviewer will be in cahoots, and stops the author from tempering their responses to appeal to the specific reviewer. In this case anonymity, rather than transparency, encourages describing the facts rather than pandering to the reviewer's ego.

There's another important aspect to the visible-kitchen analogy that works quite well too : it helps researchers understand the exact method of their peers. And not only the process, but the results too. For instance, many simulation papers do not describe their precise initial conditions (even the most basic details like number of particles are sometimes missing), and if they do they're sometimes hidden in the main text - not in a clear, obvious table. And they still only show selected snapshots, not usually complete movies - if a picture paints a thousand words but uses a thousand times more memory, then a movie paints a million words and uses a million times more memory.

But nowadays we've got a million times more memory : showing the movies in the online material should be considered essential, not a nice bonus. I want to be able to interpret for myself what's happening, not rely exclusively on the author's interpretation. Rather than preparing a meal, it's more like trying to follow origami instructions : possible via book but far easier via video.

In as much as is possible, the precise conditions and process needed to replicate the result should be described. That's not always possible since raw data can be very large indeed, but in many cases this should now be the exception rather than the rule. The "essay" style papers I've described also wouldn't be expected to go in to such depth, but they should reference "recipe book" papers where the method was described in detail, and their really precise setups could always be included in the online material. So can raw observational data (where feasible) and analysis code, for that matter.

Everyone should try your signature dish

So far we've improved the way a C.V. can be used to see at a glance just what it is a researcher does and how thoroughly tested their investigations have been. We've also cut down on the amount that other researchers have to read without preventing anyone from publishing anything, and made the review process transparent so everyone can see if it was really up to the mark. We've essentially given everyone a menu : you can see at a glance exactly what sort of research a person does and to what general standard. Just like food, you won't know for sure until you try it for yourself - but this is obviously much better than if you don't have any kind of menu at all.

What we haven't quite addressed is replication : can another researcher, following your recipe, exactly reproduce your findings ? Now, the "essay" style papers should not necessarily contain every single step of the process, but full articles should. The problem is that you've got to encourage reproduction or it won't happen : people will just keep eating the same meal from the same restaurant. After all, it's a time consuming procedure, with a high potential just to confirm the previous findings and not learn anything new. And if you don't confirm the findings, there's the more political concern that you might embarrass the original authors - potentially alienating a future collaborator.

One way to offset this would be to award replication studies an extra level of prestige : insist that these studies be subject to the highest level of review possible. Getting such a paper accepted would be a real challenge and recognised as such. So there would be a motivational balance of glory on the one hand, difficulty and low likelihood of new discoveries on the other. A successful replication study could also have a transformative effect on the original paper, changing it from a merely interesting result to one that deserves strong attention. That in turn encourages everyone to publish research which can be replicated, because a replication study which failed would be a pretty damning indictment. Couple that with the labelling system that identifies the originality and nature of the research and that would be, I think, a pretty powerful reform of the system.


So that's my proposed solution to the so-called crisis in peer review and replication studies. In order to stop people publishing mediocre papers, force papers to be labelled as such. That's a slightly cruel and cynical way to put it. A nicer and actually more accurate description is that we need to be able to disseminate as many ideas as possible to our colleagues and papers are still the best way to do that, but we have to recognise that not all papers either have or need the same level of effort to produce. Consequently they don't require the same level of review rigour either.

It's hard to predict exactly what the effects of this would be. I lean toward thinking that it would actually increase publication rate rather than lessening it : everyone has lots of ideas, but we don't get to discuss them with the wider community as often as we might like. That sort of discussion isn't appropriate to the current publication system, which is heavily geared towards much more in-depth articles. Those types of papers would probably decrease in number, since now no-one would feel the need to make a mountain out of a molehill or slice the salami too thin. So ideas wouldn't be stifled or suppressed, but we'd have fewer tedious, long-winded papers - the kind only ever written to bolster publication rates - to get through.

How exactly would a paper be labelled ? There are several different methods. One would be to have more and more specialist journals, which is somewhat happening at the moment. For example the Journal of Negative Results specialises in - you've guessed it - results that were unexpectedly negative. Or Astronomy & Computing, specialising in different computational techniques and code in astrophysics rather than new results about how the Universe works. There are also occasional special editions of regular journals focusing on specific topics.

But perhaps new journals are overkill. Most regular journals already have a main journal plus a "letters" section which publishes much shorter, timely articles. Why not extend this further ? Instead of just MNRAS Letters, also have MNRAS Observational Catalogues, MNRAS Numerical Simulations, MNRAS Serendipitous Discoveries, MNRAS Data Mining, MNRAS Clickbait, MNRAS Essays, MNRAS Breakthroughs, MNRAS Replication Studies, MNRAS Things I Just Thought Up Off The Top Of My Head While I Was On The Toilet, MNRAS Things Some Bloke In A Pub Said Last Tuesday, etc.

Literally the only change this needs is a label in the bibliographic code of the article. Similarly, recognising that we're now in an almost purely online world, some sort of code could be applied to articles on their replication status : not applicable, attempted but failed, successfully reproduced etc (just as ADS lets you see who cites any given article and other metadata). Then everyone knows, instantly, that the research is or is not obviously wrong - but only provided that the review rigour on the replication studies be of the highest level possible. Otherwise you just get huge numbers of people publishing crap replication studies that don't mean anything.
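Purely to illustrate how little machinery this needs - the suffix scheme below is invented for this sketch, not anything ADS actually uses :

```python
# Invented replication-status codes, tacked onto an ADS-style bibliographic
# code. The bibcode and suffixes here are illustrative, not a real standard.
REPLICATION_STATUS = {"N": "not applicable",
                      "F": "attempted but failed",
                      "R": "successfully reproduced"}

def label_bibcode(bibcode, content_label, replication="N"):
    """Append a content label and a replication status to a bibcode-like string."""
    if replication not in REPLICATION_STATUS:
        raise ValueError(f"unknown replication status : {replication}")
    return f"{bibcode}.{content_label}.{replication}"

code = label_bibcode("2017MNRAS.464.1234T", "SIM", "R")
print(code)  # 2017MNRAS.464.1234T.SIM.R
```

Anyone skimming a reference list could then see, from the code alone, that this hypothetical paper is a simulation study whose results have been independently reproduced - without opening the paper at all.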

The hard part would be to agree some broad common standards, which would likely only be possible after practice. Nature retains its pre-eminence as much by reputation as the actual quality of its publications (some would say more so). If other journals started divisions which only published potentially major discoveries with a high level of reviewer scrutiny, the pre-eminence of one journal over another would be broken... but only so long as that was actually recognised by researchers. Hence the need for common standards, reviewer guidelines and a more transparent review process. If you can see that the Bulgarian Journal of Bovine Psychology is publishing results as important and as rigorous as Nature, who would still favour one over the other ?

This is not the only possible approach. A more radical idea is that we largely abandon papers and move deeper into the purely online realms of blogs, forums and social media, making science communication far more fluid. I do not like this idea. A paper presents a permanent, standalone record of the state of the art at some time given the available evidence - it can be checked years later relatively easily. A better approach would be for the existing major journals and arXiv to run forums, e.g. each paper automatically starts a discussion thread. This would be a vastly superior way to find out what the community thinks of each paper than waiting for citations from other researchers, which usually takes months and is often limited to a single unhelpful line, "as shown by Smith et al. (2015)".

Of course, what labelling papers won't necessarily do is make the damn things any easier to read. But that's another story.