On the Use of Mesoscale and Cloud-Scale Models in Operational Forecasting

 

HAROLD E. BROOKS, CHARLES A. DOSWELL III, AND ROBERT A. MADDOX

 

NOAA/National Severe Storms Laboratory, Norman, Oklahoma

 

(Manuscript received 27 August 1991, in final form 8 November 1991)

 

Corresponding author address: Dr. Harold E. Brooks, NSSL, 1313 Halley Circle, Norman, OK 73069.

NOTICE: This manuscript appeared in Weather and Forecasting, 7, 120-132. As a U.S. Government manuscript, it is not copyrighted by the AMS; however, see their policy statement.


ABSTRACT

In the near future, the technological capability will be available to use mesoscale and cloud-scale numerical models for forecasting convective weather in operational meteorology. We address some of the issues concerning effective utilization of this capability. The challenges that must be overcome are formidable. We argue that explicit prediction on the cloud scale, even if these challenges can be met, does not obviate the need for human interpretation of the forecasts. In the case that humans remain directly involved in the forecasting process, another set of issues is concerned with the constraints imposed by human involvement. As an alternative to direct explicit prediction of convective events by computers, we propose that mesoscale models be used to produce initial conditions for cloud-scale models. Cloud-scale models then can be run in a Monte Carlo-like mode, in order to provide an estimate of the probable types of convective weather for a forecast period. In our proposal, human forecasters fill the critical role as an interface between various stages of the forecasting and warning process. In particular, they are essential in providing input to the numerical models from the observational data and in interpreting the model output. This interpretative step is important both in helping the forecaster anticipate and interpret new observations and in providing information to the public.


1. Introduction

Since the 1950s, numerical weather prediction has become an integral part of the process of forecasting the large-scale weather. Until the late 1970s, resolution and physical parameterizations within the numerical models were limited, owing, in large part, to the inadequacy of computational resources. As the speed and memory size of computers have increased, it has become possible to carry out numerical integrations with resolution on the order of a few kilometers, allowing the models to capture details of mesoscale and cloud-scale features and, presumably, make more accurate predictions of weather than previous models. As a part of the ongoing modernization and reorganization of the National Weather Service (NWS), great emphasis will be placed on forecasting mesoscale (and smaller) weather. In the context of this emphasis, some have suggested the development of mesoscale and cloud-scale forecast models for use at individual operational forecast offices; Fritsch et al. (1981) indicated that numerical models would play a key role in solving what they referred to as the "short-term forecast enigma." Orville (1980) began work with one- and two-dimensional models in order to develop forecast aids. Serious efforts in the direction of producing three-dimensional operational mesoscale and cloud-scale models have begun only recently, as access to and performance of supercomputers have continued to improve (Droegemeier 1990; Lilly 1990). While research models on these scales (e.g., Anthes and Warner 1978; Klemp and Wilhelmson 1978) have been valuable tools for improving our understanding of small-scale phenomena for many years, it is not straightforward to apply those models in a forecast mode (Warner and Seaman 1990).

The mesoscale forecast problem we currently face is fundamentally different from the large-scale forecast problem that faced meteorologists in the 1950s. At the time that NWP was in its infancy, the basic conceptual building blocks of a unified theory of large-scale motion (baroclinic instability and quasigeostrophic theory) were already in place. Using these ideas, Phillips (1951) was able to show that a simple, two-layer, quasigeostrophic forecast model reproduced much of the large-scale behavior of the atmosphere. In contrast, as of this writing, no simple (i.e., not the raw equations of motion) yet comprehensive theory of motion on the mesoscale or cloud scale exists. Modern attempts to produce small-scale numerical forecasts thus do not start from a theoretical foundation equivalent to that available to the large-scale forecasts of 40 years ago. Technology currently is far ahead of the theoretical development of the subject, whereas the opposite held as the NWP era began. Therefore, analogies with the rapid progress made in NWP in the 1950s are not necessarily pertinent to the mesoscale situation in the 1990s.

In the spring of 1991, we participated in an experimental forecast project[1] with operational forecasters from the National Weather Service Forecast Office in Norman, Oklahoma. As part of that project, an experimental forecast team predicted environmental conditions for a specific time and location using observations, numerical guidance from the National Meteorological Center's models, and any other means at their disposal, and those conditions were used to initialize the three-dimensional cloud-scale model of Wicker and Wilhelmson (1990). The model was run in a forecast mode to provide information back to the experimental forecast team as to the nature of the expected convection that afternoon and evening. This project represented one of the first attempts[2] to use three-dimensional numerical cloud-scale models in forecast applications. While some of the predictions were extremely good, others bore little resemblance to the actual weather. Our experience with this project led us to consider the role of subsynoptic-scale numerical models in operational settings. Problems with forecasting the environment, initializing the model, and interpreting and communicating the model results to the forecast team in a timely fashion all came up at various stages of the project. Discussions about these problems helped us in formulating the ideas presented here.

In general, a numerical prediction model adds forecast utility only after the period during which linear extrapolation of present conditions (called nowcasting) remains valid (Zipser 1983; Doswell 1986a). The time when models begin to add information to help the forecaster is a function of the weather situation and how rapidly it is evolving (Fig. 1). When changes are slow, it may be a long time before the model is useful. In rapidly changing environments, model data may become important very quickly to forecasters attempting to anticipate those changes. In general, numerical prediction models do not produce a weather forecast (see Doswell 1986a). They produce a form of guidance that can help a human being decide upon a forecast of the weather. Just as with any other information source, numerical models can help or hinder a forecaster, depending on his or her experience, understanding of the model and its shortcomings, and the weather situation. Operational forecasters quickly become aware of problems with NWP models that may affect their forecast area (Fawcett 1969). They take note of phenomena that are not handled properly by the model, and their confidence in the model prediction is a function of what they know about the model and the weather situation. Some of the problems that exist with the current generation of cloud-scale and mesoscale models may be solved with sufficient research, but others are of such a fundamental nature that they may resist easy solution. If so, we must either compensate for the shortcomings in operational applications or consider the model predictions in ways that will maximize their utility.

Fig. 1. Schematic illustrating effectiveness of approaches to short-range forecasting (from Doswell 1985).

In this paper, we shall address several issues concerning the use of mesoscale and cloud-scale models in an operational forecast setting. Specifically, we shall focus on three primary problem areas: 1) the predictability of mesoscale and cloud-scale phenomena, which embraces the issue of the use of numerical models, 2) the quantity and quality of the observational data, and 3) practical constraints on small-scale, short-term forecasts. Finally, we shall outline an approach to apply numerical models to operational mesoscale and cloud-scale meteorology effectively. Even though the challenges of cold-season mesoscale meteorology are both interesting and great, we shall limit our attention in this paper to the problems associated with convective weather. Also, while some of our examples will refer directly to individual numerical models, our comments are not intended to be model specific. Rather, our arguments are intended to be general and meant to develop a philosophical framework within which to use numerical models.

 

2. Predictability of mesoscale and cloud-scale phenomena

A fundamental unresolved question concerns the theoretical limits of predictability of mesoscale (and smaller) weather. Anthes (1986) covers the mesoscale issues in depth and we review his discussion only briefly here. Using arguments based on three-dimensional turbulence, the limit on synoptic-scale predictability is set by nonlinear interactions between the various wave components of the energy spectrum. The rapid transfer of energy from large to small scales in three-dimensional turbulence implies, at first glance, that small-scale phenomena may have less inherent predictability than those on the large scale. However, the intermittency of atmospheric phenomena, particularly on the mesoscale, may indicate that three-dimensional turbulence is not the best model for the energy spectrum in that range. The persistent nature of some features, such as fronts, implies that they "resist" the energy cascade present in three-dimensional turbulence. For example, Lilly (1986) has argued that the helicity of supercell thunderstorms may make them more predictable than ordinary thunderstorms. Quantitative knowledge of the details of the forcing (such as surface heating, topography, and synoptic-scale disturbances) of mesoscale processes also improves predictability. Some studies of limited-area numerical models have indicated that those models are less sensitive to data uncertainties than would be predicted by considering turbulence alone (e.g., Errico and Baumhefner 1987). Errico and Baumhefner (1987) proposed that this resulted from the nesting of the model within a larger-scale model with one-way interaction on the boundaries. "Perfect" data from the large-scale model were continuously advected inward, so the apparent enhancement of predictability may have been an artifact of the experimental design.

Not all of the evidence concerning mesoscale features lends optimism to the question of predictability, however. Berri and Paegle (1990) reported significant error growth in models with horizontal resolution on the order of 20 km. Vukicevic and Errico (1990) did not find significant error growth in their model at similar resolution, but they pointed out the strong model dependence of this result. More importantly, they found that the predictability of their model was a function of the synoptic situation. Specifically, errors grew faster in rapidly changing synoptic settings and strongly baroclinic systems, compared to quiescent conditions. This result is similar to that found by Kallen and Huang (1988) for forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF) model, when they analyzed cases where a single observation greatly influenced the analysis. Rapidly changing, strongly baroclinic synoptic environments are important systems for the mesoscale forecaster, so the greater sensitivity of the models in those situations is crucial. The result implies that, in precisely those baroclinic cases where the mesoscale forecaster needs help the most, the large-scale numerical model guidance is likely to be least reliable.
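
This situation dependence can be illustrated with the logistic error-growth equation, dE/dt = sigma E (1 - E/E_sat), a toy model often used in predictability studies. The sketch below is our own, and the two growth rates are invented purely to contrast a quiescent regime with a rapidly evolving, strongly baroclinic one; no quantitative claim about any real model is intended.

    import numpy as np

    def error_growth(sigma, e0=0.01, e_sat=1.0, t_max=72.0, dt=0.1):
        """Integrate dE/dt = sigma*E*(1 - E/e_sat) with forward Euler.
        sigma is the small-amplitude growth rate (1/h); e0 is the initial
        error as a fraction of saturation. Returns times (h) and E/e_sat."""
        t = np.arange(0.0, t_max, dt)
        e = np.empty_like(t)
        e[0] = e0
        for i in range(1, t.size):
            e[i] = e[i - 1] + dt * sigma * e[i - 1] * (1.0 - e[i - 1] / e_sat)
        return t, e / e_sat

    # Invented growth rates for two synoptic regimes (illustrative only).
    for label, sigma in [("quiescent", 0.1), ("strongly baroclinic", 0.4)]:
        t, e = error_growth(sigma)
        t_half = t[np.searchsorted(e, 0.5)]  # time to half of saturation
        print(f"{label:20s}: error reaches 50% of saturation after ~{t_half:.0f} h")

The faster-growing regime exhausts its predictability several times sooner, which is the sense in which guidance is least reliable exactly when it is needed most.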

Unfortunately, forecast difficulties are not confined to strongly baroclinic systems; they also arise commonly in weakly forced synoptic situations (Barnes 1986). When a reasonably strong synoptic-scale signal is present, mesoscale models may be relatively insensitive to the initial conditions (Zhang and Fritsch 1986, 1988). However, in the absence of such a large-scale signal, it has not yet been shown that mesoscale models can generate accurate convective-scale evolutions without detailed initial conditions (Stensrud and Fritsch 1991).

Issues of predictability and robustness of solutions can be considered from the standpoint of nonlinear dynamics. Lorenz (1963) has shown sensitivity to initial conditions in a simple model of Rayleigh-Bénard convection consisting of three equations in three unknowns. The notion of attractor basins in phase space for solutions of systems of nonlinear partial differential equations is useful in exploring these issues. A map of the basins of attraction for the Lorenz model is relatively simple, but even so, small changes in the initial state can move the model into an entirely different basin. Given the sensitivity seen in the results of cloud models using simple initialization schemes, it is unlikely that the sensitivity to initial conditions will decrease as we begin to use more complicated models with more degrees of freedom. Consider using the output of mesoscale models as the input to cloud-scale models; this constrains the degrees of freedom, limiting the range of possible solutions that the cloud-scale model can produce. Thus, it is analogous to the constraints imposed by the use of large-scale models to provide boundary conditions for mesoscale models, as proposed by Errico and Baumhefner (1987). If the mesoscale model does not provide "perfect" predictions, however, it is possible that it will move the cloud-scale model solution into the wrong attractor basin. Providing more "realistic" boundary conditions for cloud-scale models will not necessarily produce a better prediction of cloud-scale weather. It may help to produce better simulations and allow the models to recreate a larger range of phenomena, but it may not produce better predictions.
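
To make the sensitivity concrete, the sketch below (ours, not taken from any of the cited papers) integrates the Lorenz (1963) system at the standard parameter settings for two initial states differing by one part in 10^5; the perturbation size and integration length are arbitrary choices.

    import numpy as np

    def lorenz_rhs(s, sigma=10.0, r=28.0, b=8.0 / 3.0):
        """Right-hand side of the three Lorenz (1963) equations."""
        x, y, z = s
        return np.array([sigma * (y - x), x * (r - z) - y, x * y - b * z])

    def integrate(s0, n_steps=2500, dt=0.01):
        """Fourth-order Runge-Kutta integration; returns the full trajectory."""
        traj = np.empty((n_steps + 1, 3))
        traj[0] = s0
        for i in range(n_steps):
            s = traj[i]
            k1 = lorenz_rhs(s)
            k2 = lorenz_rhs(s + 0.5 * dt * k1)
            k3 = lorenz_rhs(s + 0.5 * dt * k2)
            k4 = lorenz_rhs(s + dt * k3)
            traj[i + 1] = s + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        return traj

    # Two initial states differing by 1 part in 1e5 in x alone.
    a = integrate(np.array([1.0, 1.0, 1.0]))
    b = integrate(np.array([1.00001, 1.0, 1.0]))
    separation = np.linalg.norm(a - b, axis=1)
    for step in (0, 500, 1000, 1500, 2000, 2500):
        print(f"t = {step * 0.01:5.1f}   separation = {separation[step]:10.6f}")

The separation grows by orders of magnitude over the integration; in the basin-of-attraction language used above, a perturbation of this size is eventually enough to place the two solutions on opposite lobes of the attractor.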

Stensrud and Bao (1991) have considered the sensitivity of the Lorenz model to initial conditions and data assimilation, using the concept of "decision points" (see Nese 1987) to evaluate model evolution. In the Lorenz model, when the trajectory of a model solution approaches the region between the two fixed points, it may cross from one to the other and begin to orbit the second fixed point (Fig. 2). They point out that, in more complicated models, significant decision points exist as well. Should the model make a "wrong" decision at one of these decision points, owing to such factors as numerical error, problems with the initial conditions, or imprecisions in the model formulation, it can make a transition to the "wrong" solution. This may be an explanation for the cloud-model sensitivities previously discussed. Another area in which this concept may be important is in the parameterization of convection in mesoscale models, which are too coarse to resolve convection explicitly. The onset of convection can be regarded as a decision point (Stensrud and Bao 1991); once convection begins, certain possible evolutions of the atmosphere are no longer possible. The sensitivity of mesoscale models to small changes in the onset of convection has been pointed out by Fritsch and Chappell (1980) and Stensrud and Fritsch (1991) in their modeling studies. Given this sensitivity, explicit prediction on the mesoscale appears to be extremely difficult.

On the cloud scale, it recently has been suggested by Droegemeier (1990) that, depending on the nature of the situation, the location, timing, intensity, and type of precipitation are predictable on time scales up to 12 h and spatial scales greater than 10 km. Droegemeier states that the intensity and location of new thunderstorms might be predictable up to 2 h, while the detailed evolution of existing thunderstorms might be predictable for 3 to 6 h. Obviously, such a level of predictability would have great benefits for improving forecasts and warnings. A numerical simulation of the 3 April 1964 splitting storms (Wilhelmson and Klemp 1981) has been used often as evidence that models might be able to predict supercell storms over a several-hour period (Fig. 3). Comparison of the observations and model results shows a more rapid evolution in the model storm and a difference in storm velocity of approximately 5 m s⁻¹, which leads to an error in position of 18 km after 1 h. While this is not a large error in terms of simulating the storm, it is significant for an operational warning process, where the features for which warnings are issued may be small. Also, in an environment with strong horizontal gradients, position errors this great could result in a change in the type of convective storm. The significance grows when issues such as the possible initiation of new convection on outflow boundaries of the initial storm are considered. Recent numerical studies by McPherson and Droegemeier (1990) and Brooks (1990) show important differences in simulated convection, depending on the specification of the initial thermal perturbation.

Fig. 2. Two trajectories around the Lorenz attractor. The end points of the first trajectory are labeled by 1 and 2, and the end points of the second are labeled by a and b. Note that the trajectories initially are very close and then orbit separate stationary points.

This sensitivity of cloud model simulations to the fine-scale details of the initial conditions raises two distinct concerns. First, the simulations begin with an unbalanced thermal perturbation; this is almost certainly an unphysical representation of the way in which deep, moist convection begins. As yet, the details of those processes leading to initiation of deep convection remain rather poorly understood. If nonhomogeneous initial conditions are used, it is not yet obvious what dynamic balances might be appropriate to mitigate the time needed for the model to develop its own internal consistency among model variables. Therefore, our present situation with regard to initiating convection in cloud models is tainted by this uncertainty. Second, if details of initiating mechanisms must be described as accurately as suggested by the results of McPherson and Droegemeier (1990) or Brooks (1990), it is not clear that any current or currently proposed observing system will provide that sort of data density and accuracy. Using models to infer unobserved variables (as in the case of single-Doppler radial velocity being used to infer not only the unobserved components of the flow, but also the thermodynamic structure) is not yet a proven concept even in theory, except under highly idealized conditions (Stensrud and Bao 1991). Issues concerning the quality and density of the observational data will be discussed further in section 3.

The Wilhelmson and Klemp (1981) result also illustrates the fundamental difference between simulation and prediction. A simulation will be considered successful if it replicates the actual weather event qualitatively. Successful prediction, on the other hand, requires quantitative accuracy in time and space. Droegemeier (1990) points out that almost all research to date with subsynoptic-scale models has been pointed toward simulation, not prediction. Current cloud models have been most successful at simulating a rather narrow range of convective phenomena, particularly supercell thunderstorms. Whether those or similar models will be able to predict the same phenomena is an open question. It is certain, however, that what the models cannot simulate, they will be unable to predict.

Recent experiments concerning the ability of current models to predict storms explicitly have not been encouraging. On a scale smaller than the Wilhelmson and Klemp result, Lin et al. (1990) used the Colorado State University Regional Atmospheric Modeling System (CSU-RAMS) (Tripoli and Cotton 1982) to simulate the 20 May 1977 Del City tornadic storm, initializing with winds retrieved from multiple-Doppler radar observations and thermodynamic fields derived from them. The model fields between 5 and 10 min into the simulation resemble a set of observations taken 14 min after the initialization, but by 15 min into the simulation, the modeled storm's vorticity center has weakened and there is little agreement between the model and observations. The model evolution is much too fast. This indicates that the model is capable of simulating the observed storm behavior, but its predictive capability is limited to significantly less than 15 min. That is a severe limitation, particularly considering that the initial conditions contained fully three-dimensional winds, whereas future operational models would be limited to radial winds from only a single Doppler radar.

Fig. 3. Observed and simulated radar echo evolution for 3 April 1964 splitting storm (from Wilhelmson and Klemp 1981).

Although most of the recent attention of cloud-scale modeling has been on supercell thunderstorms, in part because of their association with violent tornadoes, supercells are not the only mesoscale phenomenon of importance to operational forecasting or to the public. For example, on an annually averaged basis, flash floods kill more people than tornadoes (Mooney 1983), and most flash floods are not associated with supercells. Flash floods may occur as the result of relatively short-lived convective events, such as the 1981 Austin, Texas, flash flood (Maddox and Grice 1986) or the 1990 Shadyside, Ohio, flash flood (NOAA 1991). Prediction of these events would have required a very precise 3-h numerical forecast. Small errors in the forecast timing and motion of the Austin and Shadyside storms, such as described above for the 3 April 1964 simulation, would have resulted in a poor forecast of the location of the precipitation, moving it out of the drainage basin in which the flooding occurred.

Another difficult situation to predict occurs when the atmosphere is extremely unstable, there is no capping inversion to inhibit storm development, and there is little shear to organize the storms (e.g., Stensrud and Fritsch 1991). In such an environment, many convective cells may form over a mesoscale area at about the same time. Severe weather from an individual cell may begin within minutes of the first echo and continue for less than an hour before the storm collapses rapidly. Because of their rapid growth and decay, these "pulse" storms have been recognized as posing significant problems for meteorologists with warning responsibility for some time (Doswell 1985). Beyond the problem of warning on the initial storm, the further evolution for the day presents serious difficulties. The outflow boundaries laid down by the first convection will provide forcing for new convection. As a consequence, a perfect forecast of the final small-scale weather situation would require perfect forecasts of all of the convection during the day. Most importantly, it would require a perfect forecast of the initial convection. Small errors in details at any time would amplify through the remainder of the forecast period (Fritsch and Chappell 1980).

Numerical models may well exhibit serious difficulties in predicting these kinds of situations. Unfortunately, these are precisely the events for which forecasters need the most assistance. Exploration of such events with mesoscale models is just now beginning (e.g., Stensrud and Fritsch 1991) and they have yet to be considered in three-dimensional cloud-scale numerical simulations. Because of their long life, isolated supercells that produce severe weather for several hours often present fewer operational warning problems than short-lived convective events that may be occurring in marginal situations. Short-lived events, on the other hand, allow little time to anticipate the changes and can lead to detection failures early and false alarms at the end of the storm's lifetime. The greatest forecast problems are with the events that appear to be the least predictable. Thus, using the high predictability of supercell thunderstorms to develop arguments regarding the predictability of mesoscale phenomena in general is quite optimistic, especially with respect to assisting operational forecasters. Supercell predictability already is capitalized on in field operations. Because so little experimentation has been done for nonsupercell situations, it is possible that greater improvements can be made in forecast quality with nonsupercell thunderstorms than with supercells. It is not clear, however, that explicit prediction using numerical models will help significantly in this area.

 

3. Possible data problems

The quality and availability of data for forecast models raise their own distinct set of issues. Observational data form the foundation of any forecast. Bad observations can lead to bad forecasts. A human forecaster can disregard questionable data when that seems appropriate. For input into numerical models, data-checking procedures must be highly automated in order to prevent bad data from contaminating the numerical forecast, and to allow certain physical assumptions to be met, such as an initial state with balanced flow of some kind. These processes of checking, initialization, and assimilation are designed to "protect" the model from potential data problems. Techniques currently in use or development for large-scale and mesoscale models have been discussed elsewhere (e.g., Derber 1989; Stauffer and Seaman 1990; Sun et al. 1991). Our concerns are of a general nature and do not depend, for the most part, on the details of any individual scheme.

The development of subsynoptic-scale prediction models will require the use of data at much higher resolution, both in space and time, than large-scale models. Aircraft, wind profilers, radio acoustic sounding system (RASS) temperature sensors, and surface mesonet sites may provide some of these data. Another of the proposed primary data sources for mesoscale and cloud-scale prediction models will be the WSR-88D (Doppler) network radars. These radars will cover most of the United States and provide reflectivity and radial velocity information (Fig. 4). Problems exist with the coverage of the radar system; e.g., the radars will miss a cone above each site. Lilly (1990) points out that at 200 km from the radar, the across-beam horizontal resolution of the radar will be 3.5 km, and the lowest sample will be at 3.2 km above the ground in regions where precipitation is occurring. The lack of data in the region below the radar beam is significant. On the storm scale, outflow pools frequently will be missed, with corresponding problems in initiating new convection properly. Important details of the prestorm, environmental low-level wind profile also will be missed, which may be critical in assessing the tornadic potential of a given environment (Patrick and Keck 1987; Davies-Jones et al. 1990).

Fig. 4. Areal coverage at 10 000 ft above site level for the proposed WSR-88D radar network. Stippled areas are not covered below 10 000 ft above site level.
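
The coverage numbers quoted above follow from simple beam geometry. The sketch below is our own check, assuming the standard 4/3 effective-earth-radius propagation model, a nominal 1° beamwidth, and a 0.5° lowest elevation tilt (assumptions for illustration, not WSR-88D specifications); it reproduces the ~3.5-km across-beam resolution at 200 km and gives a lowest-beam height of roughly 4 km, the same order as the 3.2-km figure cited from Lilly (1990), which presumably reflects slightly different assumptions.

    import math

    EARTH_RADIUS = 6.371e6                    # m
    EFF_RADIUS = (4.0 / 3.0) * EARTH_RADIUS   # 4/3 rule for beam refraction

    def beam_height(range_m, elev_deg):
        """Height (m) of the beam center above the radar at a slant range,
        under the 4/3 effective-earth-radius propagation model."""
        elev = math.radians(elev_deg)
        return (math.sqrt(range_m ** 2 + EFF_RADIUS ** 2
                          + 2.0 * range_m * EFF_RADIUS * math.sin(elev))
                - EFF_RADIUS)

    def across_beam_width(range_m, beamwidth_deg=1.0):
        """Across-beam horizontal resolution (m): range times beamwidth."""
        return range_m * math.radians(beamwidth_deg)

    r = 200.0e3  # 200 km from the radar
    print(f"across-beam resolution at 200 km: {across_beam_width(r) / 1e3:.1f} km")
    print(f"beam height at 200 km, 0.5 deg tilt: {beam_height(r, 0.5) / 1e3:.1f} km")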

The paucity of data in the clear-air, preconvective environment poses serious problems, since it is information in that region at that time that is especially important to the prediction of new convection. Most Doppler radar clear-air returns are from the boundary layer. The maximum useful range for data from clear-air returns is about 100 km (Burgess 1991, personal communication). With the radar operating in the velocity azimuth display (VAD) mode, the DOPLIGHT '87 project (Forsyth et al. 1989) found that the average maximum height of clear-air returns varied from 0.7 km in March to 3.5 km in June for the four months of the study. At the 100-km range, there would be only one or two elevations in the vertical. Thus, even though the horizontal coverage of the WSR-88D network may be excellent, the vertical coverage of the data and the horizontal resolution over a significant portion of the domain may be insufficient to initialize explicit cloud-scale prediction models adequately. While large-scale models, such as the National Meteorological Center's nested-grid model, take relatively coarse data and make reasonably successful forecasts of smaller-scale features at a resolution of ~80 km, it is not obvious that this success extrapolates to smaller spatial scales. The large-scale models perform best when the situation is dominated by quasigeostrophic forcing, which is resolved by the relatively sparse synoptic-scale data network (Antolik and Doswell 1989). The importance of mesoscale features and processes in determining the exact location and timing of convective initiation has been discussed in observational studies by Doswell (1987) and Rockwood and Maddox (1988). Stensrud and Maddox (1988) reported a case where well-defined, merging mesoscale outflow boundaries failed to initiate new convection in a conditionally unstable atmosphere, indicating the complexity of anticipating the evolution of convective systems. We are aware of mesoscale modeling work that seems to indicate that coarse input data are not always sufficient to predict features on smaller scales (i.e., at the resolution of the model). Unfortunately, since such work fails to produce adequate simulations of the observed weather, it typically remains unpublished.

Beyond the coverage and resolution issues, some of the problems seen with observational data on larger scales will remain and may be more important on the subsynoptic scale. Chang et al. (1986) have shown how a single missing wind observation can alter numerical simulations of synoptic-scale environments dramatically. This has serious implications for analysis schemes in the model initialization stage. Typical initialization schemes start with the results of a previous numerical prediction as the "first-guess field" for the analysis. The initialization scheme then drives the analyzed fields toward the new observations. Frequently, as was the case in the Chang et al. study, important observations deviate significantly from the first-guess field and, as a result, the objective analysis scheme either removes or seriously alters them. The problem of distinguishing between "bad" observations and "good" but unusual observations is extremely difficult to solve with objective techniques. Unfortunately, it is frequently these unusual observations that are the initial clues that a significant weather situation is developing rapidly. The problem is almost certain to be more acute at smaller spatial scales where the time between the observations and the valid forecast time is shorter, allowing less time for analysis of the observational data. It is not always possible to distinguish a priori between bad observations and those that indicate important changes in the atmosphere. Thus, in the process of eliminating bad data, initialization also can eliminate the most important observations.
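
The dilemma can be seen in the simplest form of quality control, a gross-error check that rejects any observation departing too far from the first guess. The sketch below is schematic, with invented error statistics and an invented threshold; the point is that the one observation carrying news of rapid development is exactly the one the check discards.

    import numpy as np

    def gross_error_check(obs, first_guess, obs_err, fg_err, k=3.0):
        """Accept observations whose departure from the first guess is within
        k times the expected departure standard deviation (a common form of
        gross-error check; the threshold k = 3 is an arbitrary choice)."""
        tolerance = k * np.sqrt(obs_err ** 2 + fg_err ** 2)
        return np.abs(obs - first_guess) <= tolerance

    # First guess: 15 m/s wind. Three obs agree; one reports a real 38 m/s jet.
    first_guess = 15.0
    obs = np.array([14.0, 16.5, 15.2, 38.0])
    accepted = gross_error_check(obs, first_guess, obs_err=1.5, fg_err=3.0)
    for value, ok in zip(obs, accepted):
        print(f"obs {value:5.1f} m/s -> {'accepted' if ok else 'REJECTED'}")
    # The check cannot distinguish a gross error from a genuine, unusual
    # feature: the 38 m/s report is rejected either way.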

As mentioned before, the WSR-88D radars are expected to become an important component of subsynoptic-scale meteorology in years to come. The primary problem with the WSR-88D as a data source for numerical models is that it provides data relevant to only two of the variables of a cloud model (one component of velocity, and the reflectivity, which generally has not been inserted directly into numerical simulation models). Work has been done on the recovery of three-dimensional wind and temperature fields from single-Doppler data (Sun et al. 1991), but experiments to date have considered only relatively simple flow fields. Stensrud and Bao (1991) simulated these retrievals by using the nudging technique (Anthes 1974) with data from only one model variable in the Lorenz model. They found a significant loss in the accuracy of the model prediction in such cases. The results were strongly dependent on the quality of the first-guess field used in the initialization technique. Liou (1990) reported that even small random errors in the wind fields seriously degraded the temperature retrieval. Weygandt et al. (1990) examined the response of a two-dimensional cloud model to data insertion under highly idealized conditions and found that any small errors at the end of the data-insertion period grew rapidly during the forecast. Given the error growth and sensitivity in these simplified conditions, significant work must be done before such methods can deal adequately with the complexities of real data.
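
A sketch in the spirit of the Stensrud and Bao (1991) experiment, though it is our own simplification rather than their method: a "truth" run of the Lorenz model supplies observations of one variable only, an imperfect model run is nudged toward those observations during an assimilation window, and the free forecast afterward drifts away from truth. The relaxation coefficient, first guess, and window length are all arbitrary choices.

    import numpy as np

    def rhs(s, sigma=10.0, r=28.0, b=8.0 / 3.0):
        """Lorenz (1963) equations."""
        x, y, z = s
        return np.array([sigma * (y - x), x * (r - z) - y, x * y - b * z])

    def step(s, dt=0.005, nudge_to=None, g=5.0):
        """Forward-Euler step; if nudge_to is given, relax the x component
        toward that observed value (Newtonian nudging, one variable only)."""
        ds = rhs(s)
        if nudge_to is not None:
            ds[0] += g * (nudge_to - s[0])
        return s + dt * ds

    dt, n_assim, n_fcst = 0.005, 2000, 2000
    truth = np.array([1.0, 1.0, 1.0])
    model = np.array([4.0, -3.0, 21.0])      # deliberately poor first guess

    for _ in range(n_assim):                 # assimilation window: nudge x only
        truth = step(truth, dt)
        model = step(model, dt, nudge_to=truth[0])
    print(f"error at end of assimilation: {np.linalg.norm(model - truth):.4f}")
    for i in range(1, n_fcst + 1):           # free forecast, no further nudging
        truth = step(truth, dt)
        model = step(model, dt)
        if i % 500 == 0:
            print(f"forecast t = {i * dt:4.1f}  error = {np.linalg.norm(model - truth):8.4f}")

With only one variable observed, how well the unobserved components lock on depends strongly on the coupling strength and the first guess, mirroring the first-guess sensitivity reported above.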

 

4. Practical constraints

The problems outlined above clearly present very substantial challenges. However, we believe there are additional important issues that must be considered in determining how effectively numerical predictions can be applied in forecast and warning operations. Even with the assumption that the numerical prediction problems can be solved, and there is no reason to believe the solutions will be simple, other important issues remain. The production of a meteorologically acceptable forecast or warning is not the end of the forecast process.

The final goal of any forecast procedure is the communication of a meaningful product to an end user in time for that user to take appropriate action.[3] It is a difficult problem, and one that consistently has received little attention from scientists concerned with public forecasting. Different user communities may have very different needs from the same forecast. For example, in a coastal region in a winter storm situation, oceangoing vessels might be most concerned with high winds leading to large waves, while the trucking industry would be interested in frozen precipitation on the highways. Both of these problems would be covered by the forecaster in determining the meteorological situation, but specialized forecasts would be desired by both groups. In a policy statement on the roles of the NWS and the private sector, the NWS has claimed the areas of data collection, issuance of general forecasts and weather guidance, and the issuance of warnings in life-threatening situations (NWS 1991). The production of tailored forecasts for individual "weather and water resources-sensitive" users is left to private-sector meteorologists. We shall not discuss the merits of the policy, but we want to point out the recognition of an implicit division between the production of a meteorologically correct (accurate) forecast and a usable forecast. While the generation of an accurate mesoscale and cloud-scale numerical prediction is fraught with difficulty, some of which we have discussed previously, there are serious problems in the generation of a usable forecast even if an accurate numerical prediction is available.

The creation of a usable forecast product from an accurate forecast involves at least two important aspects. The first is the interpretation of the results of the accurate forecast. Just as with mesoscale models, numerical cloud-scale models will not directly produce a complete description of the weather. Obviously, individual hailstones will not be resolved in such models. Instead, some parameterization of hail will have to be used to produce a distribution of hail sizes. A surface-layer wind may not be included in the model formulation, and may need to be derived from data at the lowest model level. While both the generation of a hail-size distribution and the surface wind may be automated, it will be essential for the forecaster to understand the methods that are used in that process. Use of forecast models as a "black box" will lead to an inability to distinguish those occasions on which the model forecast is good from those where the forecast is poor. Interpretation of model results also requires some knowledge of the needs of the user community to whom the forecast is being issued. In the case of severe thunderstorm or tornado warnings being disseminated to the general public, the obvious product is the character of the surface weather. However, the aviation community, for example, would be interested in the amount of turbulence and the size of hail aloft, questions not of particular interest to the general public.
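
As a concrete example of such a derivation, a 10-m wind can be estimated from the lowest model level with a neutral logarithmic profile. The sketch below assumes neutral stability, a 250-m lowest level, and an arbitrary roughness length; an operational scheme would have to account for stability and surface type.

    import math

    def surface_wind(u_lowest, z_lowest=250.0, z_out=10.0, z0=0.1):
        """Reduce the lowest-model-level wind speed to height z_out assuming
        a neutral logarithmic profile, u(z) proportional to ln(z/z0).
        Heights in meters; z0 is an assumed roughness length (open terrain)."""
        return u_lowest * math.log(z_out / z0) / math.log(z_lowest / z0)

    u_250 = 20.0  # m/s at the lowest model level (illustrative value)
    print(f"estimated 10-m wind: {surface_wind(u_250):.1f} m/s")

A forecaster treating such a reduction as a black box would have no way of knowing how much of a reported surface wind is profile assumption rather than model signal.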

After interpreting the model, the forecaster must communicate that interpretation to the public. This is done on two time scales, represented by forecasts and warnings. We want to draw attention to the difference between these two kinds of communication with the public. Forecasts are predictions of events or weather processes that are not imminent. The final stage of a forecaster's interaction with the public currently is by means of warnings, which represent specific calls to immediate action. Warnings do not represent prediction in a strict sense. Operationally, they almost always are issued for events or processes that are already occurring (or are indicated indirectly to be occurring). In order to be effective, they must be issued with sufficient lead time and with enough information about the threat to allow appropriate safety action to be taken. Warnings that do not reach the public in time to respond to the event are useless, even though they may count as successes in verification. Such a warning would be another example of a meteorologically perfect, but practically useless, forecast.

All the stages of the forecast procedure require time. The first step, the production of the numerical model data, can be shortened by the use of faster computers and more efficient models. The second and third stages, the production of a usable forecast and the communication of that forecast to the public, each have aspects that constrain how quickly they can be carried out. The presence of these irreducible time requirements makes the practical mesoscale and cloud-scale forecasting problem extremely difficult. The numerical forecast must be completed well before the event if its output is to have a practical warning impact. Similar constraints are seen in large-scale forecasting, but the period for which such a forecast is made is significantly longer than the proposed time scales of mesoscale and cloud-scale models. For a 24-hour forecast, taking about an hour to interpret and communicate the forecast, and for the public to act upon it, is no problem. For a 30-minute forecast (as from a cloud-scale model), such a time lag would be unacceptable.
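
The asymmetry between time scales can be made explicit with trivial arithmetic; all of the stage durations in the sketch below are invented for illustration.

    def effective_lead_time(horizon_min, model_min, interp_min, dissem_min):
        """Lead time left for the user once model runtime, forecaster
        interpretation, and dissemination are subtracted from the horizon."""
        return horizon_min - (model_min + interp_min + dissem_min)

    # Invented stage durations (minutes) for illustration only.
    stages = dict(model_min=10, interp_min=10, dissem_min=5)
    for horizon in (24 * 60, 180, 30):
        lead = effective_lead_time(horizon, **stages)
        print(f"{horizon:4d}-min forecast -> {lead:4d} min of usable lead time")

A 25-minute overhead is negligible against a 24-hour forecast but consumes nearly all of a 30-minute cloud-scale prediction.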

 

5. A possible strategy

Given the difficulties discussed above, we would like to outline an approach to the use of three-dimensional numerical mesoscale and cloud-scale models that we feel will maximize the utility of the products they generate. While it may be possible to derive forecast aids from simpler models, our proposal focuses on using three-dimensional models designed initially for simulation to produce useful predictions. Inherent in this strategy is the notion of probabilistic forecasting. We concur with Sanders (1963), who pointed out that probabilities are the only proper way to express forecasts. Probabilities explicitly express the uncertainty associated with the meteorological situation. This becomes even more important as we go from the synoptic scale to the mesoscale, where, as pointed out before, our understanding of the weather is incomplete and uncertainties are correspondingly higher. Further, the relatively small size and short duration of the forecast events would require that the forecasts exhibit considerable detail in both space and time to resolve the events. Another advantage of probabilistic forecasting is that a forecaster, in the process of assessing which events are the most likely, is forced to consider which events are unlikely. By doing so, the forecaster has to develop scenarios that lead to those less likely events. The forecaster can then focus on the observations that will distinguish between the possible outcomes, and be better prepared for unusual events.

We envision a forecast cycle that begins with a mesoscale model producing a forecast for 6 to 12 hours in advance. The mesoscale model would support several uses. It would provide guidance to the operational meteorologist during the course of the forecast. As observations are taken, the forecaster can compare the evolution of the atmosphere with the evolution of the model forecast. When they are similar, model guidance can be used to anticipate changes later in the period. When the observations and model differ in early stages of the forecast (after the model has "settled down" from any adverse response to the initialization), the forecaster can have some degree of confidence that the final model solution is not correct. In a convective environment, the mesoscale model can provide approximate areas and times for the initiation of convection. The guidance could also be used in providing the public with information for that day. In particular, one area in which mesoscale models could be extremely helpful is in severe weather preparedness. Both the general public and, particularly, civil defense personnel with responsibility for severe-weather spotter coordination would benefit greatly from improved prediction of likely convective initiation.

As we have discussed already, it is not clear to us that explicit predictions of convective evolution by a cloud model are going to be practical, so a direct link between mesoscale and cloud-scale models, leading to explicit forecasts of convective events, is not likely to be a fruitful path. However, we want to propose an alternative way to connect mesoscale model output to cloud-scale models. Specifically, mesoscale model output can be used to determine initial conditions near the time and place where convection is anticipated. Using these initial conditions as a starting point, or a "guidance forecast," we suggest that the cloud-scale model could be run with a variety of initial conditions, representing what the forecaster believes is a reasonable range about the mesoscale model's guidance. This is similar in concept to a so-called Monte Carlo forecast mode, except that the number of perturbations about the initial guidance would necessarily be rather limited. [Mullen and Baumhefner (1991) have shown that a Monte Carlo technique with a low-resolution global model can produce more skillful forecasts of explosive cyclogenesis than a single prediction with twice the horizontal resolution. See Baumhefner et al. (1988) for a more general discussion of Monte Carlo forecasting in a large-scale context.] With only the time and resources for a small number of runs, it would not be possible to span the entire space of possible perturbations about the mesoscale model's guidance forecast. It would be up to the forecaster to develop some plausible variations on the basic forecast produced by the mesoscale model, in order to determine how robust the cloud model's forecast is in any particular situation.

In this quasi-Monte Carlo approach, if a cloud model forecast on a particular day produces supercells with every plausible variation of its initial conditions, then the forecaster's confidence in a forecast that day for supercell storms is correspondingly enhanced. However, if each perturbation produces a quite different sort of cloud model forecast, then it is probable that storm-type predictability is rather low on that particular day. In effect, the quasi-Monte Carlo approach allows one to predict cloud model predictability. On days with relatively low predictability, observation of those variables relevant to deciding the nature of the cloud-scale evolution becomes the focus of the forecaster's attention. For example, if the storm type predicted by the cloud model were very sensitive to vertical wind shear on a given day, then the forecaster would be alerted to pay particular attention to the vertical wind shear evolution (using, say, vertical wind profilers). It seems plausible to suggest that, given an envelope of possibilities on a particular day, the forecaster would assign subjective probabilities to each of them. Therefore, the likelihood of an event occurring completely by surprise (to the forecaster, the public, or both) would be reduced. A single cloud model forecast would not offer much toward the desirable goal of reducing forecast surprises.
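
The bookkeeping of the quasi-Monte Carlo approach might look like the sketch below, in which an entire cloud-scale model run is replaced by a hypothetical stand-in function, storm_type, that classifies the outcome from two environmental parameters; the classification thresholds, guidance values, and perturbation spreads are all invented for illustration.

    import random
    from collections import Counter

    def storm_type(shear, cape):
        """Hypothetical stand-in for a full cloud-scale model run: classify
        the simulated convection from bulk shear (m/s) and CAPE (J/kg).
        The thresholds are invented for illustration only."""
        if cape < 500.0:
            return "no storms"
        if shear > 20.0:
            return "supercell"
        return "multicell"

    random.seed(1)
    guidance = dict(shear=22.0, cape=1800.0)  # mesoscale-model guidance (invented)
    n_runs = 20                               # a small, affordable ensemble
    counts = Counter()
    for _ in range(n_runs):
        shear = guidance["shear"] + random.gauss(0.0, 4.0)   # forecaster-chosen spread
        cape = guidance["cape"] + random.gauss(0.0, 400.0)
        counts[storm_type(shear, cape)] += 1

    for outcome, n in counts.most_common():
        print(f"P({outcome}) ~ {n / n_runs:.2f}")

Tight agreement among the runs would raise confidence in a single storm type; a broad spread would flag low predictability and point the forecaster toward the observations, here the shear, that discriminate among the outcomes.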

Clearly, the subjective nature of this approach makes its success contingent on the forecaster's skill in anticipating which variations in initial conditions are worth exploring. There is no doubt that forecaster abilities in this capacity will vary over a wide range. This situation, however, is no different from what already exists with respect to forecaster skill. Moreover, there is an obvious benefit associated with requiring forecasters to consider plausible scenarios for the day's evolution and to decide which among them are more likely than others. The quasi-Monte Carlo approach also allows good forecasters to add significant value to the numerical model output, doing precisely what forecasters at their best do well, while leaving the details and quantitative aspects to the computer models, which excel at them.

In our view, the most important stage of a proper forecast cycle using subsynoptic-scale numerical models is the utilization of the results of the model by the human forecaster. Guidance from the model may be useful in helping the forecaster evaluate possible severe weather scenarios. By considering the data available from conventional surface observations, radar, profilers, and other new technology, the forecaster can monitor the accuracy of the guidance. That information can then be used to anticipate the developing weather situation. If, for example, the quasi-Monte Carlo approach to cloud-scale modeling were used and a series of forecasts produced two distinct types of storms, with different storm motions, the forecaster might use radar-derived storm motion and reflectivity to determine which of the two storm types was imminent. The critical way in which numerical modeling should be used on any scale, and particularly on the mesoscale and cloud scale, is to assist the human forecaster in anticipating the range and likelihood of possible weather situations. That, in turn, allows the forecaster to pay attention to the important observations that distinguish between the possible situations and to respond more quickly to those observations.

Our proposed strategy can be illustrated with a simple schematic showing its application to a severe weather situation (Fig. 5). Some important points can be seen in the schematic. The first is the two-way nature of the communication of information between various levels of the process. The second is the importance of the integration of data at appropriate spatial and temporal scales throughout the process. Finally, and perhaps most importantly, properly trained and equipped human forecasters are essential to the success of the system. By utilizing the skills that are unique to humans, such as the ability to synthesize qualitative data and make subjective judgments (see Doswell 1986b), in conjunction with the developing technology, we believe that significant improvements can be made in mesoscale forecasting and severe-weather warnings.

Fig. 5. Flow chart showing proposed strategy to use numerical models on subsynoptic scales. Triangles represent data sources, rectangles -- numerical models; rounded rectangles -- human forecasters; circles -- forecast products; hexagons -- nonforecast agencies that communicate with the public; ovals -- public.

 

6. Summary and conclusions

Numerical modeling of mesoscale and convective scale weather for operational purposes soon will be technically possible. The challenges associated with it are fundamentally different from those associated with large-scale numerical modeling. As a result, the choice of a path to move along in solving the challenges is not as straightforward as was the case when NWP was in its infancy. There, a series of models of increasing complexity could be developed that expressed the various levels of approximation that typically are made in studying the synoptic-scale atmosphere (e.g., barotropic, equivalent barotropic, baroclinic). We are approaching the era when the technological capability of operational mesoscale and convective-scale modeling is present, but without the same theoretical underpinnings that fostered the NWP era. Simply because we are able to produce numerical model products on the mesoscale does not necessarily mean that they will improve the quality of the forecast, especially in a practical sense, as discussed above. Careful planning must occur in order to maximize the utility of this powerful tool for improving forecasts of severe weather associated with deep convection. Examination of the problems that a subsynoptic-scale numerical forecasting effort will face should help avoid some of the potential pitfalls. Among the most serious difficulties are:

1) Predictability of the phenomena: Although supercell thunderstorms may be more predictable than consideration of three-dimensional turbulence would indicate, many mesoscale and convective-scale features have short lifetimes. In fact, it is these features that may be the most important, and yet most difficult, to predict. The current generation of cloud-scale models does a reasonable job of simulating the atmosphere, but is likely to fail in explicit prediction. Parameterized convection in mesoscale models acts as a switch, and the resulting simulations are sensitive to where and when that switch is turned on (Kain and Fritsch 1991).

2) Data: Even with WSR-88D data, sampling of the atmosphere on the scales necessary for subsynoptic-scale models will be incomplete. The impact of individual observations on synoptic-scale forecasts will most likely carry over to smaller scales. Data assimilation schemes must reject bad data while retaining all of the good data, a difficult accomplishment. Further, experiments with simple models indicate that errors amplify quickly at the end of the data-assimilation period.

3) Communication: There are irreducible time delays between the completion of numerical prediction models and the time the public has to take action. These aspects are not particularly critical at synoptic model time scales, but become important at the mesoscale, and are dominant at the cloud scale.

A strategy that can yield benefits for operational forecasting is to use mesoscale and cloud-scale models to determine a probable evolution of the atmosphere. By using mesoscale models to produce likely initial states for a cloud-scale model, and then using the cloud-scale model in a quasi-Monte Carlo approach, it should be possible to simulate likely scenarios for subsynoptic-scale development in the atmosphere. This strategy emphasizes the notion of probability in forecasting and requires the forecaster to consider different possible evolutions of weather on a given day. By doing so, the number of surprise events should be lessened. It would also allow for more complete information to be provided to the public about approaching convective weather. We strongly support efforts to improve mesoscale and cloud-scale models by including more realistic parameterizations of physical processes and testing them in a wide variety of environments. Attempting to test such models in an operational, or simulated operational, environment as they are being developed is essential. The advantages of operational testing of these models are manifold. Frequent use of numerical models in a wide variety of environments is an extremely good way to find problems with codes and parameterizations. Testing models with operational forecasters allows researchers to work on ideas of how to communicate information from the model in ways that operational meteorologists find useful. It also develops familiarity with the model products in the operational community, so that when fully operational models become available, forecasters will be more comfortable with them.

Subsynoptic-scale numerical models present operational meteorology with a great opportunity and an even greater challenge. In the near future, the technology will exist to allow them to be used on a daily basis to help with convective forecasting. Successful use of these models has certain important requirements. Chief among them is that the processes they are describing must be well understood. This includes not only a general improvement in the science of mesoscale meteorology, but also an improvement in the understanding of subsynoptic processes within the operational community. The primary function of the modeling effort we are discussing should be to help the human forecaster anticipate the development of convective weather. That anticipation allows the forecaster to concentrate on the critical observations, which is what makes the model guidance useful. High-quality observations also can be used to drive improvements in the formulation of the models.

We do not believe that explicit prediction of individual convective phenomena is the best use of the current and forthcoming generation of numerical models. The serious difficulties that exist at all levels of the numerical forecasting process are not going to be solved simply by faster computers and increasingly powerful technology. We hope that by raising these issues, we can help focus attention on ways to maximize the benefits of numerical weather prediction. Numerical models, at any scale, should not be designed to produce weather forecasts or warnings directly, but rather to provide guidance for meteorologists to produce weather forecasts or warnings. Intelligent interpretation of that guidance is an essential link in the forecast process. This may be the most important aspect of the whole discussion. Without well-educated and well-trained operational meteorologists, no forecast methodology can be successful.

Acknowledgments. This work was done while one of the authors (HEB) held a National Research Council NOAA Research Associateship. We thank Drs. Louis Wicker and Robert Wilhelmson of the University of Illinois and the National Center for Supercomputing Applications for their support with the numerical modeling portion of the experimental forecast project. We also thank the forecasters of the Norman National Weather Service Forecast Office and the experimental forecasters from NSSL for their work on the project and for discussions about the utility of the numerical model. Discussions with and helpful suggestions from Dr. Robert Davies-Jones helped improve an early version of the manuscript. Comments in the review by Drs. Michael Fritsch, Kenneth Mitchell, and Harold Orville also helped to clarify our arguments. Mr. Donald Burgess of the WSR-88D Operational Support Facility provided radar information and Mr. David Stensrud and Dr. John Lewis of NSSL gave insights into mesoscale models and data assimilation. Mr. Stensrud provided Fig. 2.

REFERENCES

Anthes, R. A., 1974: Data assimilation and initialization of hurricane prediction models. J. Atmos. Sci., 31, 702-719.

______, 1986: The general question of predictability. Mesoscale Meteorology and Forecasting, P. S. Ray, Ed., Amer. Meteor. Soc., 636-656.

______, and T. T. Warner, 1978: Development of hydrodynamic models suitable for air pollution and other mesometeorological studies. Mon. Wea. Rev., 106, 1048-1078.

Antolik, M. S., and C. A. Doswell III, 1989: On the contribution to model-forecast vertical motion from quasi-geostrophic processes. Preprints, 12th Conf. on Weather Analysis and Forecasting, Monterey, California, Amer. Meteor. Soc., 312-318.

Barnes, S. L., 1986: The limited-area fine-mesh model and quasigeostrophic theory: A disturbing case study. Wea. Forecasting, 1, 89-96.

Baumhefner, D. P., J. J. Tribbia, and M. L. Blackmon, 1988: The influence of specified sea surface temperature and initial conditions uncertainty on Monte Carlo extended range forecast ensemble. Proc. WMO/WCRP Workshop on Intercomparison of Results from Atmospheric and Oceanic GCM's, ECMWF, WMO Tech. Document. [Available from the Secretariat of the WMO, Case Postale No. 5, CH-1211 Geneva 20, Switzerland.]

Berri, G. J., and J. Paegle, 1990: Sensitivity of local predictions to initial conditions. J. Appl. Meteor., 29, 256-267.

Brooks, H. E., 1990: Low-level curvature shear and supercell thunderstorm structure. Ph.D. thesis, University of Illinois, 233 pp. [Available from Dept. of Atmos. Sci., Univ. of Illinois, Urbana, IL 61801.]

Chang, C. B., D. J. Perkey, and D. W. Kreitzberg, 1986: Impact of missing wind observations on the simulation of a severe storm environment. Mon. Wea. Rev., 114, 1278-1287.

Davies-Jones, R., D. Burgess, and M. Foster, 1990: Test of helicity as a tornado forecast parameter. Preprints, 16th Conf. on Severe Local Storms, Kananaskis Park, Alta., Canada, Amer. Meteor. Soc., 588-592.

Derber, J. C., 1989: A variational continuous assimilation technique. Mon. Wea. Rev., 117, 2437-2446.

Doswell, C. A., III, 1985: The operational meteorology of convective weather. Volume II: Storm scale analysis. NOAA Tech. Memo. ERL ESG-15, 240 pp. [Available from the author at NSSL, 1313 Halley Circle, Norman, OK 73069.]

______, 1986a: Short-range forecasting. Mesoscale Meteorology and Forecasting, P. S. Ray, Ed., Amer. Meteor. Soc., 689-719.

______, 1986b: The human element in weather forecasting. Nat. Wea. Dig., 11, 6-17.

______, 1987: The distinction between large-scale and mesoscale contribution to severe convection: A case study example. Wea. Forecasting, 2, 3-16.

Droegemeier, K. K., 1990: Toward a science of storm-scale prediction. Preprints, 16th Conf. on Severe Local Storms, Kananaskis Park, Alta., Canada, Amer. Meteor. Soc., 256-262.

Errico, R. M., and D. P. Baumhefner, 1987: Predictability experiments using a high resolution limited-area model. Mon. Wea. Rev., 114, 1625-1641.

Fawcett, E. B., 1969: Systematic errors in operational baroclinic prognoses at the National Meteorological Center. Mon. Wea. Rev., 97, 670-682.

Forsyth, D. E., D. W. Burgess, M. H. Jain, and L. E. Mooney, 1989: DOPLIGHT '87: Application of Doppler radar technology in a National Weather Service Office. Preprints, 24th Conf. on Radar Meteorology, Tallahassee, FL, Amer. Meteor. Soc., 198-202.

Fritsch, J. M., and C. F. Chappell, 1980: Numerical prediction of convectively driven mesoscale pressure systems. Part I: Convective parameterization. J. Atmos. Sci., 37, 1722-1733.

______, and D. M. Rodgers, 1981: The Ft. Collins hailstorm -- An example of the short-term forecast enigma. Bull. Amer. Meteor. Soc., 62, 1560-1569.

Kain, J. S., and J. M. Fritsch, 1991: Sensitivity of numerical simulation of convective weather systems to the convective "trigger" function. Preprints, 9th Conf. on Numerical Weather Prediction, Denver, CO, Amer. Meteor. Soc., 46-49.

Källén, E., and X.-Y. Huang, 1988: The influence of isolated observations on short-range numerical weather forecasts. Tellus, 40A, 324-336.

Klemp, J. B., and R. B. Wilhelmson, 1978: The simulation of three-dimensional convective storm dynamics. J. Atmos. Sci., 35, 1070-1096.

Kopp, F. J., and H. D. Orville, 1990: The use of a cloud model to predict convective and stratiform clouds and precipitation. Preprints, 16th Conf. on Severe Local Storms, Kananaskis Park, Alta., Canada, Amer. Meteor. Soc., 322-327.

Lilly, D. K., 1986: The structure, energetics, and propagation of rotating convective storms. Part II: Helicity and storm stabilization. J. Atmos. Sci., 43, 126-140.

______, 1990: Numerical prediction of thunderstorms -- Has its time come? Quart. J. Roy. Meteor. Soc., 116, 779-798.

Lin, Y., P. S. Ray, and K. W. Johnson, 1990: Simulation of a convective storm using Doppler radar-derived initial fields. Preprints, 16th Conf. on Severe Local Storms, Kananaskis Park, Alta., Canada, Amer. Meteor. Soc., 500-503.

Liou, Y.-C., 1990: Retrieval of three-dimensional wind and temperature fields from one-component wind data by using the four-dimensional data assimilation technique. Preprints, 16th Conf. on Severe Local Storms, Kananaskis Park, Alta., Canada, Amer. Meteor. Soc., 489-492.

Lorenz, E. N., 1963: Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130-141.

Maddox, R. A., and G. K. Grice, 1986: The Austin, Texas, flash flood: An examination from two perspectives -- forecasting and research. Wea. Forecasting, 1, 66-76.

McPherson, R. A., and K. K. Droegemeier, 1991: Numerical predictability experiments of the 20 May 1977 Del City, OK, supercell storm. Preprints, 9th Conf. on Numerical Weather Prediction, Denver, CO, Amer. Meteor. Soc., 734-738.

Mooney, L. E., 1983: Applications and implications of fatality statistics to the flash flood problem. Preprints, 5th Conf. on Hydrometeorology, Tallahassee, FL, Amer. Meteor. Soc., 127-129.

Mullen, S. L., and D. P. Baumhefner, 1991: Monte Carlo simulations of explosive cyclogenesis using a low-resolution, global spectral model. Preprints, 9th Conf. on Numerical Weather Prediction, Denver, CO, Amer. Meteor. Soc., 750-751.

Nese, J. M., 1987: The transition to turbulence. Nonlinear Hydrodynamic Modeling: A Mathematical Introduction, H. N. Shirer, Ed., Lecture Notes in Physics, 271, Springer-Verlag, 384-411.

NOAA, 1991: Shadyside, Ohio, flash floods, June 14, 1990. Natural Disaster Survey Report, 124 pp. [Available from Warning and Forecast Branch, NWS, 1325 East West Highway, Silver Spring, MD 20910.]

NWS, 1991: Policy statement on the Weather Service/private sector roles. Bull. Amer. Meteor. Soc., 72, 393-397.

Orville, H. D., 1980: The potential for cloud scale models as forecast aids. Preprints, 8th Conf. on Weather Forecasting and Analysis, Denver, CO, Amer. Meteor. Soc., 247-251.

Patrick, D., and A. J. Keck, 1987: The importance of the lower-level wind shear profile in tornado/non-tornado discrimination. Proc. Symp. Mesoscale Analysis and Forecasting, ESA SP-282, Vancouver, B.C., Intl. Union of Geodesy and Geophys., 393-397.

Phillips, N. A., 1951: A simple three-dimensional model for the study of large-scale extratropical flow patterns. J. Meteor., 8, 381-394.

Rockwood, A. A., and R. A. Maddox, 1988: Mesoscale and synoptic scale interactions leading to intense convection: The case of 7 June 1982. Wea. Forecasting, 3, 51-68.

Sanders, F., 1963: On subjective probability forecasting. J. Appl. Meteor., 2, 191-201.

Stauffer, D. R., and N. L. Seaman, 1990: Use of four-dimensional data assimilation in a limited-area mesoscale model. Part I: Experiments with synoptic-scale data. Mon. Wea. Rev., 118, 1250-1277.

Stensrud, D. J., and R. A. Maddox, 1988: Opposing mesoscale circulations. Wea. Forecasting, 3, 189-204.

______, and J.-W. Bao, 1991: A comparison of adjoint and nudging assimilation techniques using a low-order model. Preprints, 9th Conf. on Numerical Weather Prediction, Denver, CO, Amer. Meteor. Soc., 173-176.

______, and J. M. Fritsch, 1991: Incorporating mesoscale convective outflows in mesoscale model initial conditions. Preprints, 9th Conf. on Numerical Weather Prediction, Denver, CO, Amer. Meteor. Soc., 798-801.

Sun, J., D. W. Flicker, and D. K. Lilly, 1991: Recovery of three-dimensional wind and temperature fields from single-Doppler radar data. J. Atmos. Sci., 48, 876-890.

Tripoli, G. J., and W. R. Cotton, 1982: The Colorado State University three-dimensional cloud/mesoscale model -- 1982. Part I: General theoretical framework and sensitivity experiments. J. Rech. Atmos., 16, 185-220.

Vukicevic, T., and R. M. Errico, 1990: The influence of artificial and physical factors upon predictability estimates using a complex limited-area model. Mon. Wea. Rev., 118, 1460-1482.

Warner, T. T., and N. L. Seaman, 1990: A real-time, mesoscale numerical weather prediction system used for research, teaching, and public service at The Pennsylvania State University. Bull. Amer. Meteor. Soc., 71, 792-805.

Weygandt, S. S., K. K. Droegemeier, C. E. Hane, and C. L. Ziegler, 1990: Data assimilation experiments using a two-dimensional cloud model. Preprints, 16th Conf. on Severe Local Storms, Kananaskis Park, Alta., Canada, Amer. Meteor. Soc., 493-498.

Wicker, L. J., and R. B. Wilhelmson, 1990: Numerical simulation of a tornado-like vortex in a high resolution three-dimensional cloud model. Preprints, 16th Conf. on Severe Local Storms, Kananaskis Park, Alta., Canada, Amer. Meteor. Soc., 263-268.

Wilhelmson, R. B., and J. B. Klemp, 1981: A three-dimensional numerical simulation of splitting severe storms on 3 April 1964. J. Atmos. Sci., 35, 1037-1063.

Zhang, D.-L., and J. M. Fritsch, 1986: A case study of the sensitivity of numerical simulation of mesoscale convective systems to varying initial conditions. Mon. Wea. Rev., 114, 2418-2431.

______, and ______, 1988: A numerical investigation of a convectively generated, inertially stable, extratropical warm-core mesovortex over land. Part I: Structure and evolution. Mon. Wea. Rev., 116, 2660-2687.

Zipser, E. J., 1983: Nowcasting and very-short-range forecasting. The National STORM Program: Scientific and Technological Bases and Major Objectives, Univ. Corp. for Atmos. Res., 6-1 to 6-30.