Archive of entries posted by

Should Mister P be allowed/encouraged to reside in counter-factual populations?

Let’s say you are repeatedly going to receive unselected sets of well-done RCTs on, say, various medical treatments. One reasonable assumption with all of these treatments is that they are monotonic – either helpful or harmful for all. The treatment effect will (as always) vary for subgroups in the population – these will not […]

Zombie student manipulation of symbols/taking of course notes

As with those who manipulate symbols without reflective thought, which Andrew raised, I was recently thinking about students who avoid any distraction that might arise from thinking about what the lecturer is talking about – so that they are sure to get the notes just right. When I was a student I would sometimes […]

When engineers fail, the bridge falls down: when statisticians fail, millions of dollars of scarce research funding are squandered and serious public health issues are left far more uncertain than they needed to be

Saw a video-link talk at a local hospital-based research institute last Friday

The usual stuff about a randomized trial not being properly designed or analyzed – as if we have not heard about that before

But this time it was tens of millions of dollars and a health concern that likely directly affects over 10% of the readers of this blog – the males over 40 or 50, and those who might care about them

It was a very large PSA screening study, and

the design and analysis apparently failed to consider the _usual_ and expected lag in a screening effect here (perhaps worth counting the number of statisticians listed in the supplementary material provided)

for a concrete example from colon cancer see here

And apparently a proper reanalysis was initially hampered by the well-known “we would like to give you the data but you know” …. but eventually a reanalysis was able to recover enough of the data from published documents

but even with the proper analysis, the public health issue – does PSA screening do more good than harm? (half of US males currently get PSA screening at some time?) – will likely remain largely uncertain, or at least more uncertain than it needed to be

and it will happen again and again (seriously wasteful and harmful design and analysis)

and there will be many more needless deaths from either “screening being adopted” when it truly shouldn’t have been or “screening not being more fully adopted, earlier” when it truly should have been (there can be very nasty downsides from ineffective screening programs, including increased mortality)







Statistics is easy! part 2.F – making it look easy was easy with subtraction rather than addition

After pointing out that getting a true picture of how the log prior and log likelihood add to give the log posterior was equivalent to getting a fail-safe diagnostic for MCMC convergence

I started to think that was a bit hard – just to get a display to show stats was easy …

But then why not just subtract?
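
A minimal sketch of the “just subtract” idea – not the code from the post – using the Pt margin of the model from part 2 below (1 event in 30 with a uniform prior, so the posterior is Beta(2, 30)). On the log scale the posterior is the prior plus the likelihood, so subtracting the log prior from the log posterior displays the log likelihood, up to a constant:

```r
## sketch only: Pt margin, uniform prior, 1 event out of 30
p <- seq(0.001, 0.999, length.out = 500)

log_prior     <- dbeta(p, 1, 1, log = TRUE)           # flat prior on the log scale
log_posterior <- dbeta(p, 1 + 1, 1 + 29, log = TRUE)  # conjugate Beta(2, 30) posterior
log_lik       <- log_posterior - log_prior            # equals the log likelihood up to a constant

plot(p, log_posterior, type = "l",
     ylim = range(log_posterior, log_prior, log_lik),
     xlab = "Pt", ylab = "log density",
     main = "log posterior = log prior + log likelihood")
lines(p, log_prior, lty = 2)
lines(p, log_lik, lty = 3)
legend("bottomleft", lty = 1:3,
       legend = c("log posterior", "log prior", "log posterior - log prior"))
```

With a flat prior the subtraction is trivial, but the same display works unchanged when the prior is informative.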






Statistics is easy! part 2.1 – can we avoid unexpected bumps when making it look easy?

I increased the range of the plot from Statistics is easy! part 2 and added the 2.5% and 97.5% percentiles from a WinBUGS run on the same problem … using bugs() of course

And then started to worry about that nasty bump on the right of the 97.5% percentiles

[Figure: plot3.png]
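
Since WinBUGS may not be at hand, here is a minimal sketch – not the original code – of one way to check those 2.5% and 97.5% percentiles, simulating Pt – Pc directly from the conjugate Beta posteriors (assuming the part 2 data of 1/30 events under treatment and 3/10 under control, with independent uniform priors):

```r
## sketch only: direct simulation stands in for the bugs() run
set.seed(1)
n_sims <- 100000
pt <- rbeta(n_sims, 1 + 1, 1 + 29)  # posterior for Pt: Beta(2, 30)
pc <- rbeta(n_sims, 1 + 3, 1 + 7)   # posterior for Pc: Beta(4, 8)
delta <- pt - pc

quantile(delta, c(0.025, 0.975))    # compare with the WinBUGS percentiles

## with only a few thousand draws the tail percentiles are noticeably noisier
quantile(rbeta(2000, 2, 30) - rbeta(2000, 4, 8), c(0.025, 0.975))
```

Monte Carlo noise in the tails of a shortish MCMC run is one mundane possibility for a bump like that.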






Statistics is easy! part 2 – can we at least make it look easy?

Well can we at least make it look easy?

For the model as given here, there are two parameters Pc and Pt – but the focus of interest will be on some parameter representing a treatment effect – Andrew chose Pt – Pc.

But sticking for a while with Pt and Pc – the prior is a surface over Pt and Pc as is the data model (likelihood)

In particular, the prior is a flat surface (independent uniforms)
and the likelihood is Pt^1 (1 – Pt)^29 * Pc^3 (1 – Pc)^7 (the * is from independence)

(If I reversed the treatment and control groups – I should be blinded to that anyways)
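
As a minimal sketch (not the original code) of those two surfaces, the flat prior and the likelihood above can be evaluated on a grid over (Pt, Pc) and then used to summarize the treatment effect Pt – Pc directly:

```r
## sketch only: prior and likelihood surfaces over (Pt, Pc) on a grid
p_grid <- seq(0.005, 0.995, by = 0.005)
grid   <- expand.grid(pt = p_grid, pc = p_grid)

prior <- rep(1, nrow(grid))                                 # independent uniforms: flat surface
lik   <- with(grid, pt^1 * (1 - pt)^29 * pc^3 * (1 - pc)^7) # the * comes from independence

post <- prior * lik
post <- post / sum(post)                                    # normalize over the grid

## the posterior (proportional to the likelihood here) as a surface over Pt and Pc
contour(p_grid, p_grid, matrix(post, nrow = length(p_grid)),
        xlab = "Pt", ylab = "Pc")

## the treatment effect Pt - Pc, Andrew's choice of parameter
delta <- grid$pt - grid$pc
sum(delta * post)   # posterior mean of Pt - Pc
```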






Getting confidence into the scaffolding – whether or not Bayes intended that.

After noticing an event for my first stats prof, I made the mistake of downloading one of his recent papers. After suggesting that Bayes might have actually been aiming at getting confidence intervals, the paper suggests “Bayes posterior calculations can appropriately be called quick and dirty” means to obtain confidence intervals. It avoids obvious […]

When experts disagree – plot them along with their uncertainties.

This plot is perhaps an interesting start to pinning down experts (extracting their views and their self-assessed uncertainties) – contrasting and comparing them and then providing some kind of overall view. Essentially, get experts to express their best estimate and its uncertainty as an interval and then pool these intervals _weighting_ by a pre-test […]
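
For a rough idea of the kind of display being described – with entirely hypothetical experts, intervals, and weights, since the post’s own example is not shown here – something like:

```r
## sketch only: hypothetical expert estimates, self-assessed intervals, and weights
experts  <- c("A", "B", "C", "D")
estimate <- c(2.0, 3.5, 1.2, 2.8)
lower    <- c(1.0, 2.0, 0.2, 1.5)
upper    <- c(3.0, 5.5, 2.0, 4.5)
weight   <- c(0.4, 0.2, 0.1, 0.3)   # stand-in for a pre-test weighting

## plot each expert's best estimate with its interval
plot(estimate, seq_along(experts), xlim = range(lower, upper),
     yaxt = "n", xlab = "estimate", ylab = "", pch = 19)
axis(2, at = seq_along(experts), labels = experts, las = 1)
segments(lower, seq_along(experts), upper, seq_along(experts))

## one simple weighted pooling of the point estimates
sum(weight * estimate) / sum(weight)
```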

What’s most cool – the question mark in the name or the modelling of zombies?

Some recent interest has been raised by the following publication on zombies by a seemingly unknown author – well, not quite: Smith? I have not had anything to do with predator/prey models since reading Gregory Bateson’s Steps to an Ecology of Mind – but a question mark in one’s name – that’s just too cool to […]