BLIPMAP Analysis Notes
Dr. John W. (Jack) Glendening, Meteorologist 

I can't myself analyze all the BLIPMAP predictions that are of interest to others, so this page presents some basic things that I look for when analyzing BLIPMAP predictions for a given day.  This material presumes some familiarity with BLIPMAP parameters, with how meteorological models work, and with model limitations.  As always, more knowledge allows a better assessment of a forecast.  Please note that here I use the term "analysis" to indicate an evaluation of the "why" behind the predictions - the parameters, and the order in which I would look at them, for an analysis differ from those I would use to simply make a soaring forecast for a given day. 

These analysis notes are basic and are intended as starting points. 

General comments:

      Humidity predictions are more difficult than thermal predictions, so dry thermals can be better predicted than clouds - any cloud prediction should therefore be taken with a larger grain of salt.  Whenever a forecast is in error, my first guess is that the model's internal cloud predictions were not correct (which is not the same as a BLIPMAP cloud prediction, which is based on external post-processing of that model output). 

Surface Heating:

      In analyzing forecast accuracy, the first parameter I look at is "Surface Heating" (officially known as the surface heat flux).  If thermals are going to occur, and be predicted, there has to be significant heating of the ground!  This parameter is computed by the model itself and comes directly from its output data (i.e. no post-processing is done). 

      Caveat:  the above description is for ground-based thermals, i.e. it excludes upward motion resulting from condensation-induced buoyancy (sometimes inelegantly termed "cloud suck").  BLIPMAPs do not consider or include such buoyancy and hence will under-predict updrafts when "cloud suck" actually occurs. 

      Because surface heating is so important to thermal production, it is a key factor in the BLIPMAP calculation of the Thermal Updraft Velocity (W*), Height of Critical Updraft (Hcrit), and Buoyancy/Shear Ratio (B/S) parameters - so an inaccurate surface heating prediction by the model will also result in inaccurate forecasts for those parameters.  Surface heating also affects other parameters such as BL Top, but much less strongly. 
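
      As a rough illustration of how W* depends on the surface heating, the sketch below (in Python) uses the textbook Deardorff convective velocity scale, in which the updraft velocity scale varies as the cube root of the product of BL depth and kinematic surface heat flux.  The constants and example numbers are mine and purely illustrative - the actual BLIPMAP calculation may differ in detail - but the cube-root dependence shows why modest heating errors matter far less than drastic ones (e.g. heavy cloudiness).

          # Illustrative only: textbook Deardorff convective velocity scale,
          # not necessarily the exact BLIPMAP formulation.
          G = 9.81        # gravity, m/s^2
          RHO = 1.2       # near-surface air density, kg/m^3
          CP = 1005.0     # specific heat of air at constant pressure, J/(kg K)

          def w_star(heat_flux_wm2, bl_depth_m, sfc_temp_k=300.0):
              """Convective updraft velocity scale (m/s) from the surface heat
              flux (W/m^2) and the boundary-layer depth (m)."""
              kinematic_flux = heat_flux_wm2 / (RHO * CP)              # K m/s
              return (G * bl_depth_m * kinematic_flux / sfc_temp_k) ** (1.0 / 3.0)

          # Halving the heating only trims W* by ~20%, but near-zero heating
          # (e.g. under thick cloud) kills the thermals entirely.
          print(round(w_star(300.0, 2000.0), 1))   # strong heating, deep BL -> ~2.5 m/s
          print(round(w_star( 30.0, 2000.0), 1))   # heavily overcast day    -> ~1.2 m/s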

      If the forecast surface heating is small, the most likely cause is model-predicted cloudiness limiting the solar radiation reaching the surface.  A secondary possibility is large model-predicted soil moisture, as can occur when the model has predicted rain at that location within the last (say) 24 hours - but the importance of that factor in any forecast is difficult for the user to evaluate, since no BLIPMAP plots of soil moisture are provided. 

Surface Sun:

      For the NAM model the existence of model-predicted clouds can be checked by examining the "Surface Sun" parameter (officially the downward short-wave radiation at the surface) - thick clouds will produce a greatly reduced value.  For the RAP model that parameter is unfortunately not available.  This parameter is computed by the model itself and comes directly from its output data. 
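
      As a rough way to quantify "greatly reduced", the forecast Surface Sun value can be compared against a crude clear-sky expectation for the same sun elevation.  The sketch below is only a hypothetical illustration - the 75% bulk clear-sky transmissivity and the example numbers are my own assumptions, not model or BLIPMAP values.

          import math

          SOLAR_CONSTANT = 1361.0          # W/m^2 at the top of the atmosphere
          CLEAR_SKY_TRANSMISSIVITY = 0.75  # rough bulk value (an assumption)

          def sun_fraction(forecast_sw_wm2, solar_elevation_deg):
              """Fraction of a crude clear-sky downward shortwave estimate that the
              model forecast actually delivers (1 = clear, small = thick cloud)."""
              clear_sky = (SOLAR_CONSTANT * CLEAR_SKY_TRANSMISSIVITY
                           * max(math.sin(math.radians(solar_elevation_deg)), 0.0))
              return min(forecast_sw_wm2 / clear_sky, 1.0) if clear_sky > 0.0 else 0.0

          # Midday sun 60 deg above the horizon but only 250 W/m^2 forecast:
          # ~28% of the crude clear-sky value, i.e. the model expects thick cloud.
          print(round(sun_fraction(250.0, 60.0), 2))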

RAP-predicted Cloudiness - an aside:

      A crude test for clouds predicted internally by the forecast model can be made for the RAP model by checking the "Explicit CloudWater Total" parameter, which provides the forecast cloud water integrated through the entire depth of the atmosphere.  Explicit cloud water is only predicted for large clouds, i.e. ones which fill an entire grid cell.  If explicit cloud water is predicted, then one can infer that significant cloudiness exists.  However this is not a definitive test for model-forecast cloudiness, since even if the explicit cloud water is zero it is still possible that the model forecast parameterized cloud water, i.e. clouds which are individually smaller than a grid cell but which in the aggregate can significantly reduce solar radiation.  There is, however, no model-provided output parameter for this parameterized cloudiness (except indirectly through NAM's "Surface Sun" parameter).  [I should note that the BLIPMAP-calculated "Cumulus Potential" parameter is intended to provide a forecast of small-scale BL cloudiness, similar to a cloudiness parameterization internal to the model, but it is a separate estimate and not a definitive measure of what the model thought the parameterized clouds were.]
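
      For those working from raw model output rather than the BLIPMAP plot, the "integrated through the entire depth" part is just a sum of the cloud water over the model layers.  A hypothetical sketch, assuming the per-layer cloud-water mixing ratio, air density, and layer thickness are available (the numbers are invented):

          def column_cloud_water(mixing_ratio_kgkg, air_density_kgm3, layer_depth_m):
              """Vertically integrated explicit cloud water (kg/m^2) from per-layer
              mixing ratio (kg/kg), air density (kg/m^3), and thickness (m)."""
              return sum(q * rho * dz for q, rho, dz in
                         zip(mixing_ratio_kgkg, air_density_kgm3, layer_depth_m))

          # A clearly nonzero total implies the model forecast grid-filling cloud;
          # a zero total does NOT rule out parameterized (sub-grid) cloudiness.
          q   = [0.0, 0.0, 2.0e-4, 3.0e-4, 0.0]   # kg/kg per layer (invented)
          rho = [1.15, 1.05, 0.95, 0.85, 0.75]    # kg/m^3
          dz  = [500.0] * 5                        # m
          print(round(column_cloud_water(q, rho, dz), 2))   # -> 0.22 kg/m^2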

BLIPMAP Cloud Predictions:

      Note that BLIPMAP forecasts of Cumulus and OvercastDevelopment are not based directly on model-predicted cloudiness, since the BLIPMAP predictions are for clouds of smaller scale than the model's explicit clouds; instead they are based upon the model-predicted temperature and humidity fields (the latter being the more critical factor). 
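
      As a simple illustration of why the humidity field is the more critical factor, consider the standard back-of-the-envelope cloudbase estimate: the lifting condensation level rises roughly 125 m for every degC of surface temperature/dewpoint spread.  This is not the BLIPMAP Cumulus Potential calculation, just a sketch of the kind of temperature-and-humidity dependence involved:

          def approx_cumulus_base_m(sfc_temp_c, sfc_dewpoint_c):
              """Rough lifting-condensation-level height (m AGL), using the common
              ~125 m per degC of temperature/dewpoint spread approximation.
              Illustrative only - not the BLIPMAP Cumulus Potential calculation."""
              return 125.0 * max(sfc_temp_c - sfc_dewpoint_c, 0.0)

          print(approx_cumulus_base_m(30.0, 12.0))   # -> 2250 m
          print(approx_cumulus_base_m(30.0, 14.0))   # -> 2000 m: a 2 degC dewpoint
                                                     #    error moves cloudbase 250 m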

BL Top:

      The BL Top is the height to which mixing occurs, e.g. if smoke were released into the BL it would mark the upper limit of that smoke layer.  It can be useful as an indicator of the height that a sailplane can achieve, but only if that mixing is due to thermals and not if the mixing primarily results from small-scale mixing such as that created by vertical wind shear.  So a large BL Top is not a good predictor of sailplane height when the Surface Heating is small and/or the B/S Ratio is small, and for that reason the BL Top can sometimes be misleading when used to estimate the maximum soaring height.  (In contrast, the "Height of Critical Updraft" parameter does include a dependence on the Surface Heating, so it will be small when surface heating is small even if the BL Top is large.) 
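
      For readers who like to see the mechanics, the sketch below estimates a mixing height the way a simple parcel method would: lift a surface parcel dry-adiabatically (cooling ~9.8 degC per km) until it is no longer warmer than the sounding.  This is a simplified illustration, not the model's own BL-depth scheme, and the sounding values and the 0.5 degC thermal excess are invented.

          DRY_ADIABATIC_LAPSE = 9.8e-3   # degC per m

          def bl_top_m(sfc_temp_c, sounding, excess_c=0.5):
              """Height (m AGL) at which a surface parcel, lifted dry-adiabatically
              with a small assumed thermal excess, first becomes cooler than the
              environment.  `sounding` is a list of (height_m_agl, temp_c) pairs."""
              for z, env_t in sounding:
                  parcel_t = sfc_temp_c + excess_c - DRY_ADIABATIC_LAPSE * z
                  if parcel_t <= env_t:
                      return z
              return sounding[-1][0]    # mixed through the whole sounding

          # Invented afternoon sounding: near-adiabatic low levels, stable layer aloft.
          SOUNDING = [(0, 30.0), (500, 25.2), (1000, 20.4), (1500, 15.6),
                      (2000, 10.8), (2500, 7.0), (3000, 6.5)]
          print(bl_top_m(30.0, SOUNDING))   # -> 2500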

      I shouldn't have to say this, but of course BL Top will also not be a good indicator of the maximum soaring height if thick clouds form in the BL such that sailplane height is limited by the cloud base (since the top of the BL is then well above cloud base). 

Thermal Height Variability:

      The BL Top forecast depends upon the predicted surface temperature, as well as upon the atmospheric temperature structure above the surface.  Sometimes a small change in the surface temperature will produce a relatively large change in the BL Top.  Such sensitivity is measured by the "Thermal Height Variability" parameter, which indicates the expected change in the BL Top should the surface temperature actually be 4 degF warmer than predicted (roughly the accuracy which can be expected of surface temperature predictions).  So this parameter should be looked at if the BL Top appears to have been in error, to see whether the BL Top prediction error lies within that expected from surface temperature errors. 
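
      Conceptually the parameter amounts to re-running the BL-top estimate with the surface temperature bumped up by 4 degF (~2.2 degC) and reporting the difference.  The snippet below builds on the bl_top_m helper and invented sounding from the parcel-method sketch above; because that sounding's capping layer is weak, the +4 degF perturbation punches through it and the BL Top jumps by 500 m - exactly the kind of sensitivity this parameter is meant to flag.

          # Builds on bl_top_m and SOUNDING from the parcel-method sketch above.
          PERTURB_C = 4.0 * 5.0 / 9.0    # a 4 degF warming expressed in degC

          base_top   = bl_top_m(30.0, SOUNDING)              # -> 2500 m
          warmer_top = bl_top_m(30.0 + PERTURB_C, SOUNDING)  # -> 3000 m
          print(warmer_top - base_top)   # -> 500 m of "Thermal Height Variability"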

Forecast Soundings:

      While a BL created simply by surface heating is relatively straightforward, a complex BL can result from multiple phenomena.  The single number produced for each parameter by the BLIPMAP calculation cannot reveal that complexity, so it is often useful to analyze a forecast sounding when the BL forecast is puzzling (this can be done, for example, by using the sounding pop-ups available in the BLIPMAP viewers).  Such analysis does require a deeper understanding of the atmosphere and is too complex to be discussed here.  I wish I could provide a good description of how sounding analysis is done and what it can reveal, but none of the on-line descriptions I have read -- such as the Weatherjack sounding tutorial -- fully meets that need, often focusing primarily on cloud prediction.  I do feel it is helpful as a general overview to read my The Convective Boundary Layer and Sounding Analysis, which gives a basic overview of the convective Boundary Layer and the whys & wherefores of sounding analysis. 

Uncertainty and forecast differences between models:

      While all meteorological models work similarly, they can produce different forecasts for the same time due to  (1) differences in the initial conditions used at the model start time,  (2) differences in the boundary conditions applied during the model run,  and (3) different treatments of the physical phenomena within each model, particularly those phenomena which must be "parameterized".  For example, one model may move a front or low differently than another due to differences in (1) or (2).  Or one may produce rain and the other not, due to differences in (3).  Such different histories can produce significantly different results - for example, a model which thinks rain has occurred in the last 24 hours will predict a much lower surface heat flux than does a model which thinks no rain has occurred. 

      Although it would be nice if model forecasts would always produce a single number which could be relied upon, like an airspeed indicator, that is not the reality of weather prediction.  The atmosphere is chaotic, and weather forecasts are very noisy (as is the weather itself).  Much forecast uncertainty results simply because the state of the atmosphere used to initialize the model is not known in sufficient detail - sounding observations in the US are spaced roughly 500 km apart but significant atmospheric variations occur at much smaller scales. 

      Since a single forecast number cannot be relied upon, professional forecasters look for ways of evaluating the uncertainty in any forecast.  One method of doing that is to run the same model with differing initial and boundary conditions and then, for the same forecast period, compare results to find locations where forecasts are more/less sensitive to such variations.  Such predictions -- called "ensemble forecasts" -- are now being made by NWS for the global model.  While useful for evaluating the uncertainty in a forecast, their drawback is the need for much computer horsepower and the difficulty in analyzing and summarizing differing forecasts from, say, 10 model runs. 
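
      In code terms, that comparison is simply done point-by-point across the members.  A hypothetical sketch with invented numbers for a single parameter at two grid points:

          from statistics import mean, stdev

          def ensemble_spread(members):
              """Per-grid-point mean and standard deviation across ensemble members.
              `members` holds one list of grid-point values per member (invented data)."""
              return [(round(mean(vals)), round(stdev(vals))) for vals in zip(*members)]

          # Five invented member forecasts of Hcrit (m AGL) at two grid points:
          # the members agree closely at the first point, diverge badly at the second.
          members = [[1850, 2100], [1900, 1400], [1820, 2600], [1880, 1700], [1860, 2300]]
          print(ensemble_spread(members))   # -> [(1862, 30), (2020, 476)]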

      A "poor man's" assessment of BLIPMAP uncertainly can be instead made by comparing the results from two different models, as by RAP vs. NAM BLIPMAPs.  When the models are in agreement one has greater confidence in the forecast - if they disagree significantly, then the forecast is clearly uncertain.  Professional meteorologists do this, and those who read the on-line "Forecast Discussions" from NWS forecast offices can see how NWS forecasters try to determine which model is likely to be more accurate in each circumstance (and sometimes fudge by choosing a compromise between them).  Experienced BLIPMAP users can do the same, by using their past experience and their knowledge of model strengths/weaknesses to decide which forecast to weight more heavily.  One factor behind my decision to augment the original RAP model BLIPMAPs with NAM model BLIPMAPs was to allow a better assessment of forecast uncertainty - such evaluation is difficult when forecasts are available from a single model only.  And I believe that seeing how forecasts can differ between different models, even though the models work very similarly, is very educational for non-meteorologists who have only a vague idea of the chaotic nature of the atmosphere and the uncertainties in trying to predict its evolution.