Don’t forget the brackets

I have been helping a user who found some discrepancies between the results from JAGS and OpenBUGS. Such problems always concern me, and I give them high priority. In this case, there were several differences in parameterization between the JAGS and OpenBUGS models. Among them, the JAGS model included this line:

prec[g] <- 1.0 / sigma[g]*sigma[g]

Most users will recognize the calculation as a transformation from a standard deviation (sigma) to a precision parameter (prec). It is common practice to define a prior on the standard deviation and then transform it to the precision parameter required by the dnorm distribution. Unfortunately, this expression is incorrect. To understand why, you need to know something about operator precedence.

Like most programming languages, JAGS gives its operators a built-in precedence. For example, multiplications (*) and divisions (/) are carried out before additions (+) and subtractions (-). The multiplication and division operators have the same precedence, and when they appear in the same expression they are evaluated from left to right. So in the above expression, JAGS first divides 1.0 by sigma[g] and then multiplies the result by sigma[g]. The two operations cancel out, so the relation that defines prec[g] is equivalent to
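The left-to-right rule for operators of equal precedence is not specific to JAGS. Here is a quick numerical check in Python (not JAGS, but the arithmetic evaluates the same way), using an arbitrary value of sigma:

```python
sigma = 2.5

# Without brackets: evaluated left to right as (1.0 / sigma) * sigma,
# so the division and multiplication cancel out.
wrong = 1.0 / sigma * sigma

# With brackets: the intended standard-deviation-to-precision transformation.
right = 1.0 / (sigma * sigma)

print(wrong)  # 1.0, whatever the value of sigma
print(right)  # 0.16
```

Whatever value you give sigma, the unbracketed expression returns 1.0 (up to rounding error), while the bracketed one returns 1/sigma².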

prec[g] <- 1.0

In a small model the consequences of this should be easy to spot: since the value of sigma[g] has no influence on any other node, its posterior distribution is the same as the prior. Spotting this requires the user to monitor sigma[g], however, and in a large model with many sigma parameters this step is easy to miss.

The solution, of course, is to put brackets in the expression so that the order of evaluation is clear:

prec[g] <- 1.0 / (sigma[g] * sigma[g])

Do not hesitate to do this. Brackets are never harmful (provided you put them in the right place, of course).
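For completeness, here is a minimal sketch of the pattern in context: a prior on the standard deviation, the bracketed transformation, and the resulting precision passed to dnorm. The variable names (y, mu, group, G, N) and the particular priors are my own illustrative choices, not taken from the user's model.

```
model {
  for (g in 1:G) {
    mu[g] ~ dnorm(0, 1.0E-4)                # vague prior on the group mean
    sigma[g] ~ dunif(0, 100)                # prior on the standard deviation
    prec[g] <- 1.0 / (sigma[g] * sigma[g])  # bracketed transformation to a precision
  }
  for (i in 1:N) {
    y[i] ~ dnorm(mu[group[i]], prec[group[i]])
  }
}
```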

The standard-deviation-to-precision transformation is such a common practice that I am now wondering how much code is out there that repeats this simple error. It would be helpful for JAGS to raise some kind of “red flag” to the user in cases such as this.

What’s new in JAGS 4.0.0 part 1/4: Reproducibility and testing

As promised, this is the first in a series of posts explaining some of the changes in JAGS 4.0.0 compared to previous versions. This post deals with two important changes that may not be obvious.


Previous versions of JAGS had problems with reproducibility: JAGS did not produce exactly the same output when given the same seeds for the random number generators in each chain. This is now fixed in JAGS 4.0.0. Fixing it required a major overhaul, but the effort was worthwhile: it is important for scientific research to be reproducible, and it is a requirement in commercial environments where analyses need to be validated.

The underlying problem was that the very first versions of JAGS stored nodes in memory in an order that was, in principle, arbitrary, with the consequence that the sampling order of the nodes was also arbitrary. This was not obvious at first because I tested JAGS on small problems and used the standalone command line interface. Under these conditions the internal ordering of nodes was the same from one run to another. But when people started using JAGS with larger problems and the R interface was developed, the reproducibility problem emerged. It first came to my attention 3 years ago.

All analyses in JAGS 4.0.0 should be reproducible, and if they are not then please let me know. Please note that I can only guarantee reproducibility of analyses using exactly the same JAGS version, i.e. the next release may produce different results from JAGS 4.0.0, although it should still be internally consistent. One issue to watch out for, if you care about reproducibility, is that some highly optimized math libraries (such as Intel MKL) sacrifice bitwise reproducibility for speed. This behaviour can be modified, but the result is much slower, losing the advantage of a high-performance library.


JAGS is becoming quite popular. Version 3.4.0 was downloaded nearly 50,000 times from the Sourceforge web site. With this popularity comes the responsibility to ensure that it is working correctly. So JAGS 4.0.0 introduces unit tests to verify that the functions and distributions in JAGS are working as documented.

If you are using a binary version of JAGS, this will not affect you (you can assume that the person who built the binary package ran the tests for you). If you are installing JAGS from source, then you can run the tests from the build directory with

make check

The underlying test framework is CppUnit, so you need to have it installed to run the tests. This choice may have been a mistake, because it turns out that CppUnit is no longer actively developed. So I may change to an alternative testing framework in the future, but my immediate priority is to improve the coverage of the tests during the 4.x.y release series.

JAGS 4.0.0beta is available

This took somewhat longer than planned. Due to the long gap between JAGS 3.4.0 and JAGS 4.0.0, there will be quite a long beta period. I shall explain some of the important changes in a series of follow-up posts.

I cannot currently upload the files to Sourceforge, which is still recovering from a major outage a couple of weeks ago. For now, you can download the source tarballs for JAGS-4.0.0 beta and the R package rjags_4-1 from my Google Drive. (Click the icon and then the download symbol that appears in the black bar at the top of the screen.)


Scotland says no

This is a quick update to yesterday’s post on polling for the referendum on the independence of Scotland from the UK. The final result is 55.3% for No (i.e. staying in the union), 44.7% for Yes. This is consistent with the opinion polls, which predicted a win for No, but the margin is certainly bigger than predicted by the opinion polls, especially in the run-up to the referendum.

The image below shows a “poll of polls” – a running mean of opinion polls – which is a smoothed version of the figure I showed yesterday. The average of the last 6 polls on the day before the referendum gave a prediction of 52% for No – only a 4 percentage point margin, compared with the 11 point margin seen in the referendum. This vindicates Stephen Fisher’s analysis showing that opinion polls tend to overestimate the desire for constitutional change.

If you ignore the narrowing of the gap that apparently occurred in the last two weeks, the polls seemed to give a pretty good prediction from April to August. So perhaps this wasn’t a last-minute surge for Yes, but a “patriotic spiral of silence” among No voters, as Martin Boon suggested. But I’ll leave that question to the psephologists.