Whenever we listen to or read about weather and forecasting, weather models figure prominently in the discussion. Read the forecast discussion on your local Weather Forecast Office’s web site and it’s hard to find a day when one or more weather prediction models aren’t mentioned. So, what are these models and how do they work?
Weather models are predictive software programs loaded with massive amounts of data, such as surface observations and observations aloft. The models then run the data through algorithms, or equations, that attempt to determine how and when weather systems might move, the areas those systems will affect, and with what impact. This approach is known as numerical weather prediction.
Meteorologists run the models on regular schedules, sometimes modifying the data slightly to create “what if” scenarios. Different models are run and their results compared. Models that trend toward agreement give forecasters high confidence in a certain meteorological outcome or event and its likely effects. When the models diverge in their results, forecasters will usually say they have low confidence in their forecast.
Models sometimes give different results several days away from the prediction period and then gradually come into general, or even close, agreement. This allows the forecasters’ confidence level to increase as the weather factors approach the forecast area, often called the CWA, or county warning area. This confidence is further bolstered if weather station observations and satellite telemetry show that what’s actually happening on the ground is within the models’ prediction parameters.
So…what are the models and how are they run?
There are three primary models frequently mentioned by Weather Forecast Offices in their forecast discussions—the NAM, or North American Mesoscale Forecast System; the GFS, or Global Forecast System; and the ECMWF, or European Centre for Medium-Range Weather Forecasts. Each model takes its own approach to the data and covers a different period of time. Best of all, output from each of these models is freely available to the public on various NWS web sites. Let’s take a look at each.
The operational North American Mesoscale Forecast System (NAM) is run four times daily at 00, 06, 12, and 18Z, and every cycle extends out 84 hours (three and a half days).
The Global Forecast System (GFS) is one of the operational forecast models run at the National Centers for Environmental Prediction (NCEP). It is also run four times daily, with its forecast output extended to 384 hours, or 16 days. NCEP, part of the National Weather Service, provides nationwide computerized and manual guidance to Warning and Forecast Offices concerning the forecast of basic weather elements.
The European Centre for Medium-Range Weather Forecasts (ECMWF) is, strictly speaking, the name of the organization; in forecast discussions the term refers to the Centre’s medium-range numerical forecast model, which runs out ten days.
Each of the models digests the observational data in different ways and at different resolutions (basically, the size of the grid squares covering the model’s domain) to produce forecasts that extend outward for differing lengths of time. As the models look farther ahead, the resolution generally decreases (i.e., the grid squares get larger) to account for the uncertainty. Meteorologists can easily compare the models to each other for the time periods where they overlap, with confidence generally decreasing the farther into the future a model looks.
How do they work?
All of the data to be used in the model is assigned to the model’s corresponding grid points, which tells the model the current state of the atmosphere at the beginning of the model run. The computer then runs the data through the equations for up to 50 levels of the atmosphere, a process that may involve billions of calculations, producing a result for a short time step of as little as five to ten minutes ahead. These new values are then fed back through the equations for the next five-to-ten-minute step, and the process repeats until the end of the forecast period. As mentioned, different models produce forecasts over different periods of time, from as little as six hours to two weeks.
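That stepping process can be sketched in a few lines of Python. Everything below — the single temperature field, the one-dimensional grid, the simple upwind advection scheme, and all the numbers — is an illustrative stand-in, far simpler than what any real weather model does, but the loop structure (step, feed the result back in, repeat to the end of the forecast period) is the same idea:

```python
# Toy illustration of numerical time-stepping (NOT a real weather model):
# a single field (temperature, deg C) on a 1-D grid of points is advanced
# in small increments, each step feeding its output into the next.

def step(temps, wind, dt, dx):
    """Advance the temperature field one short time step using simple
    upwind advection: air moving at `wind` m/s carries warmer or
    cooler values downstream."""
    new = temps[:]
    for i in range(1, len(temps)):
        new[i] = temps[i] - wind * dt / dx * (temps[i] - temps[i - 1])
    return new

def run_model(initial_temps, wind, dt, dx, hours):
    """Repeat the short step until the end of the 'forecast period'."""
    temps = initial_temps[:]
    steps = int(hours * 3600 / dt)  # e.g. 10-minute (600 s) steps
    for _ in range(steps):
        temps = step(temps, wind, dt, dx)
    return temps

# Ten grid points 10 km apart, a warm bubble in the middle,
# a 5 m/s wind, 10-minute steps, and a 6-hour "forecast":
initial = [10.0] * 10
initial[4] = 20.0
forecast = run_model(initial, wind=5.0, dt=600.0, dx=10_000.0, hours=6)
```

A real model does this over millions of three-dimensional grid columns with many interacting fields, which is where the billions of calculations come from.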
At this point, meteorologists working with the models apply Model Output Statistics (MOS), which correct for systematic errors each model is known to make, such as forecasting too much or too little rain, overly strong winds, or temperatures that run too hot or too cold. MOS may also correct for local conditions, or microclimates, unique to different regions that can affect the forecast’s accuracy.
After the forecast is created, meteorologists compare the output of the forecasts with the actual conditions that occurred during the forecast period, constantly modifying and tweaking the models to improve accuracy. Needless to say, garbage in, garbage out, and the models are heavily dependent on the accuracy of the data they are fed.
What is this “ensemble” they keep mentioning?
Ensemble forecasting is the process of producing several forecasts by slightly varying the initial conditions, staying within the error range of the observational instruments. If the ensemble members track the original forecast fairly closely, the meteorologists will have higher confidence in the accuracy of their forecast.
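In code, the ensemble idea looks something like the sketch below: nudge the starting observation by random amounts within the instrument’s error range, run the same model on each perturbed copy, and look at how far the results spread apart. The `toy_model` here is a deliberately trivial stand-in (steady cooling), and all the numbers are invented for illustration:

```python
import random

# Toy ensemble: perturb the initial conditions within instrument error,
# run the same (stand-in) model on each member, and measure the spread.

def toy_model(initial_temp, hours):
    """Trivial stand-in for a real model run: cools 0.5 deg per hour."""
    return initial_temp - 0.5 * hours

def ensemble_forecast(observed_temp, instrument_error, members, hours):
    random.seed(42)  # fixed seed so the example is reproducible
    runs = []
    for _ in range(members):
        nudge = random.uniform(-instrument_error, instrument_error)
        runs.append(toy_model(observed_temp + nudge, hours))
    mean = sum(runs) / len(runs)
    spread = max(runs) - min(runs)
    return mean, spread

mean, spread = ensemble_forecast(observed_temp=60.0, instrument_error=0.5,
                                 members=20, hours=12)
# A small spread relative to the forecast value suggests high confidence;
# a wide spread means the atmosphere is in a less predictable state.
```

With a real, chaotic model the spread can grow dramatically with lead time even from tiny perturbations, which is exactly what the ensemble is designed to reveal.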
The final ingredient…
Last but not least, the final arbiter of what is ultimately presented to the public is the local forecasters’ experience and intuition. Knowledge of local microclimates and institutional memory of past weather events serve to temper computer-generated forecasts with real-world adjustments that computers simply cannot make, at least not yet. This is especially true in a hard-to-forecast region like Colorado, with its wildly varying terrain.
The advent of supercomputers and better observational data has enabled meteorologists to develop computer models that are remarkably accurate at predicting future weather conditions, at least in the relatively short term. They will almost certainly continue to improve and evolve as constant tweaks and adjustments are made and new models are developed. Nevertheless, they will never be 100% accurate: our knowledge of the atmosphere and its processes is incomplete, and chaos, unexpected events, and millions of tiny variables will always ensure that close is probably the best we can hope for.