This is of great interest to the kind of study I was doing. There the variations were assumed random, and assessed for significance on that basis. But of course we don't know that they are random - it's just a model adopted in the absence of better information. F&R have used more information, so I want to see what the effect is.
First, a review of the methods. Some posts starting here looked at the pattern of temperature trends you could create with all possible start and end points over a period. Then I looked at how allowing for statistical significance changed the picture, and then at how a similar picture could be drawn of upper and lower CIs.
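The all-start/end-point idea can be sketched in a few lines. This is a minimal illustration, not the code behind the plots: it assumes annual data in a numpy array, and `trend_matrix`, `min_len`, and the toy warming series are my own illustrative names and numbers.

```python
import numpy as np

def trend_matrix(years, temps, min_len=10):
    """OLS trend (deg C/century) for every start/end pair spanning at
    least min_len years; shorter spans are left as NaN."""
    n = len(years)
    out = np.full((n, n), np.nan)
    for i in range(n):                       # start year index (y-axis)
        for j in range(i + min_len - 1, n):  # end year index (x-axis)
            slope = np.polyfit(years[i:j + 1], temps[i:j + 1], 1)[0]
            out[i, j] = 100.0 * slope        # deg C/yr -> deg C/century
    return out

# toy series: 0.017 deg C/yr warming plus noise (a stand-in for real data)
rng = np.random.default_rng(0)
yrs = np.arange(1979, 2012)
temps = 0.017 * (yrs - yrs[0]) + 0.1 * rng.standard_normal(yrs.size)
tm = trend_matrix(yrs, temps)
```

The upper triangle of `tm` is what gets colored: column is end year, row is start year, exactly as in the plots below.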
It is principally the latter analysis that I want to show. Here's an example, which will be enlarged later:
On the left you see the trends marked in color. The x-axis shows the end year of the trend period; the y-axis shows the start. The faint white lines at 45° mark lines of constant trend period, which is shown on the right axis.
On the right you see, in this case, the lower bound trends. That is, for each period, the highest trend of which you can still say that the observed trend is significantly greater, at 95% confidence. It gives a handy side check on the trend: you can take a value, look up its color, and wherever you see that color or redder, you know the trend significantly exceeds that value.
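That lookup rule - "that color or redder means the trend significantly exceeds the value" - is just a threshold mask over the lower-bound matrix. A sketch, where `lower_bound` is a hypothetical array of lower-bound trends in °C/century with NaN for periods too short to plot:

```python
import numpy as np

def significantly_above(lower_bound, value):
    """True where the observed trend significantly exceeds `value`
    at 95% confidence: exactly the cells whose lower-bound color is
    at `value` or redder on the plot.  NaN cells compare as False."""
    return np.greater_equal(lower_bound, value)

# toy 2x2 lower-bound matrix (deg C/century)
lb = np.array([[1.3, np.nan],
               [0.4, 1.8]])
mask = significantly_above(lb, 1.0)
```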
Now the lower bound plot. It shows the highest trend that is below the observed trend but still differs from it significantly. In most plots, for the longer periods, the color corresponds to about 1.3°C/century, so the observed (adjusted) trend is significantly higher than that.
Now the upper bound plot, converse of the above. It shows the lowest trend that is above the observed trend but still differs from it significantly. In most plots, for the longer periods, the color corresponds to about 1.8°C/century, so the observed (adjusted) trend is significantly lower than that.
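For one period, the two bounds come straight from the OLS standard error of the slope. A minimal sketch, with labeled simplifications: `zcrit = 1.645` is the one-sided 95% normal point standing in for the t value, and residuals are treated as white noise - an F&R-style analysis would widen the bounds to allow for autocorrelation.

```python
import numpy as np

def trend_bounds(years, temps, zcrit=1.645):
    """OLS trend and its 95% lower/upper bounds, in deg C/century.
    White-noise residuals assumed; zcrit is a normal approximation."""
    x = np.asarray(years, dtype=float)
    y = np.asarray(temps, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    sigma = np.sqrt(resid @ resid / (x.size - 2))      # residual std dev
    se = sigma / np.sqrt(((x - x.mean()) ** 2).sum())  # SE of the slope
    b, hw = 100.0 * slope, 100.0 * zcrit * se
    return b - hw, b, b + hw   # (lower bound, trend, upper bound)

# toy warming series again; lower < observed < upper by construction
rng = np.random.default_rng(1)
yrs = np.arange(1979, 2012)
temps = 0.017 * (yrs - yrs[0]) + 0.1 * rng.standard_normal(yrs.size)
lo, mid, hi = trend_bounds(yrs, temps)
```

Applying `trend_bounds` over every start/end pair gives the lower and upper bound matrices that the two plots color.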