Paper 6.2 in Preprints, 11th Conf. on Weather Forecasting and Analysis (17-20 June 1986), Kansas City, MO, American Meteorological Society, 177-182. This Web version has been subjected to minor editorial modifications, including repaired typos from the original and improved figures.
Weather forecasting can be thought of in the simple terms of combining the current state of the weather with a trend (see Doswell, 1986a,b: hereinafter referred to as D86a,b). In D86a,b various components of the forecasting process have been considered, but the topic of diagnosis and its relationship to scientific forecasting deserves special attention.
Diagnostic meteorology is intertwined with the role of humans in forecasting, as discussed in D86a. Truly, weather forecasting is part of the science of meteorology, but recent history has created the illusion of a dichotomy between them. This paper attempts to illustrate how we envision diagnostic meteorology should proceed. In the process, we shall provide a basis for understanding why the gap between forecasters and researchers is only imaginary. In turn, this should give some foundation for our assertion that diagnostic meteorology is not a burden from which forecasters should be relieved. Instead, it is an essential component of scientific forecasting.
Our use of the term "diagnosis" rather than "analysis" is probably unfamiliar to some readers. In the American Heritage Dictionary (1982), analysis is defined as " 1. The separation of an intellectual or substantial whole into its component parts for individual study... ." Whereas this definition clearly fits one of the components of meteorological science, it conveys a very different impression than the same dictionary's definition of diagnosis as "... 2. a. A critical analysis of the nature of something. b. The conclusion reached by such analysis." In effect, we consider diagnosis to be the re-creation of a coherent whole from those component parts considered during analysis; that is, a synthesis.
a. Objective analysis

Part of the process of meteorological diagnosis is the production of contours (isopleths) for various fields. This can be viewed superficially as a mechanical technique for depicting fields of meteorological variables. However, the total process includes what is called "objective analysis" as well as contouring.
One can develop methods for objective analysis which do not in any way account for the character of the variables. Such a technique can be used to depict the field of any one variable, be it meteorological or otherwise, and so it is referred to as a "univariate" technique. Other methods make explicit the relationships among the meteorological variables (e.g., the geostrophic wind law, or the hydrostatic equation) and so are called "multivariate" schemes. The adjective "objective" refers to the fact that one and only one product is obtained from a given set of input data (as when it is done by a computer).
Whether or not either of these two processes (contouring and specification of the field at grid points) is done objectively is, in some sense, less important than whether or not scientific principles are employed along the way. Univariate approaches generally take no account of meteorology, and so are really a more or less mechanical process. Multivariate approaches, by definition, employ the given data in a fundamentally distinct way. When doing analysis subjectively, it is possible (but, unfortunately, not necessary) to account for the interrelationships among variables. In order to do this, one must understand how the variables relate to one another.
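To make the univariate idea concrete, here is a minimal sketch of one classic univariate scheme, a single Cressman-style successive-correction pass. The paper does not present any particular algorithm; the function and names below are our illustration, and a real scheme would iterate several passes with a shrinking influence radius.

```python
# Minimal sketch of a univariate "objective analysis": one Cressman-style
# correction pass. It treats the field as just numbers at points, taking
# no account of relationships among meteorological variables.
import math


def cressman_pass(obs, grid_pts, first_guess, radius):
    """One successive-correction pass (illustrative, not from the paper).

    obs         : list of (x, y, value) station observations
    grid_pts    : list of (x, y) grid-point locations
    first_guess : list of background values, one per grid point
    radius      : influence radius R; stations beyond R get zero weight
    """
    analyzed = []
    for (gx, gy), guess in zip(grid_pts, first_guess):
        num = den = 0.0
        for ox, oy, val in obs:
            r2 = (ox - gx) ** 2 + (oy - gy) ** 2
            if r2 < radius ** 2:
                # Cressman weight: 1 at the grid point, 0 at radius R
                w = (radius ** 2 - r2) / (radius ** 2 + r2)
                num += w * (val - guess)  # weighted observation increment
                den += w
        analyzed.append(guess + num / den if den > 0 else guess)
    return analyzed
```

Nothing in this scheme "knows" whether the field is temperature, height, or something non-meteorological; that indifference is exactly what makes it univariate.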
b. Atmospheric processes
We observe many meteorological variables (e.g., humidity, cloud height, equivalent blackbody temperature, microwave reflectivity, etc.) even in an operational forecasting environment, to say nothing of the observations employed in research. These variables change (if they do not change, we need only observe them once!) in response to atmospheric processes. Examples of atmospheric processes include thunderstorms, extratropical cyclones, gravity waves, etc. Each process can be analyzed; that is, can be broken down into its constituent parts and, in so doing, we find that it comprises a collection of sub-processes. In turn, we can break these down into a finer collection of sub-processes, and so on ad infinitum (more or less). The actual observations represent the sum of all the constituent processes as they influence the variables we observe.
Some of the processes that affect the observations are not relevant to the atmosphere; for example, circuit noise in electronic measurement systems. There is a variety of error sources, but from a meteorological viewpoint, the biggest issue is "meteorological noise"; that is, the contribution from processes that are not well-sampled. Sampling theory is beyond the scope of this paper, but we often think of the effects of processes on scales not depicted by our data as contaminating noise. For purely objective techniques, this is a valid viewpoint. However, a subjective analyst can infer a lot from a limited sample, by means we hope to illustrate.
c. Scientific process models
Although this paper cannot dwell on the details of science's history and philosophy, it is worthwhile to consider how an understanding of atmospheric processes is developed via scientific methods. To this end we use a definition of "science" from D86a,b: Science is the formulation, testing, and revision of models of the natural world, in order that we might understand that world. Whereas it is possible to create a model of the natural world without any observations, the requirement for testing forces us to evaluate the implications of the model in light of what is observed. On the basis of that test, then, one may or may not be forced into altering the model in some way, to achieve a better fit to the observations.
Thus, if a scientific statement is to be made about a process, there must be observations available that serve to test the validity of the statement. For many processes in the atmosphere, the data supporting existing scientific hypotheses about those processes are not routinely available. For example, there is a well-developed theory of atmospheric boundary layers, but the data used to test and evaluate that theory come primarily from special experimental observation programs (see Doswell et al. 1986, elsewhere in this volume). The fact that the same density and frequency of observations may not be available in an operational forecasting environment does not preclude the use of the process models developed through such research, however.
Existing scientific models must make some specific statements about how atmospheric processes affect the observations. These statements can be both quantitative and qualitative. An example of a quantitative statement would be the assertion that a parcel rising without condensation of water vapor cools 9.8 deg C for every thousand meters of vertical ascent. On the other hand, when one expects rising motion ahead of extratropical cyclones and descent behind them, this is a qualitative prediction of a particular model. In spite of the fact that one may not have the data to assess a model quantitatively, it can still be possible to determine if the model can be applied to the observations that one actually has. It is precisely that determination that forms the basis for meteorological diagnosis. Thus, when the forecaster is unfamiliar with the scientific model, there is no basis for applying the scientific method to the data at hand. The forecasting process then necessarily defaults to the objective tools at the forecaster's disposal, or to "rules of thumb" (as discussed in D86a,b), or perhaps to mysticism. The latter is completely unscientific, no matter how successful one might be at it (e.g., predicting the winter's severity by woolly bear caterpillars).
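The quantitative statement above reduces to simple arithmetic, which can be sketched directly; the function name and the sample numbers below are ours, not the paper's.

```python
# Quantitative side of a process model: dry-adiabatic cooling of a rising
# parcel, using the 9.8 deg C per 1000 m rate quoted in the text.
DRY_ADIABATIC_LAPSE = 9.8  # deg C of cooling per 1000 m of unsaturated ascent


def parcel_temperature(t_start_c, ascent_m):
    """Temperature (deg C) of an unsaturated parcel after rising ascent_m metres."""
    return t_start_c - DRY_ADIABATIC_LAPSE * ascent_m / 1000.0
```

A parcel starting at 25 deg C and lifted 2000 m without condensation arrives at 5.4 deg C; an observed sounding departing strongly from this rate tells the diagnostician the dry-parcel model does not apply as stated.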
The purpose of forecaster training and education is to provide a scientific basis for forecasting. If students are mystified by the science education, they at least learn the jargon and become familiar with the basic tools of scientific forecasting during education/training. Thus, it is possible to "save the appearance" of forecasting on a truly scientific foundation. But without understanding the meteorological processes that science has modeled, this is really only an illusion. In the absence of this understanding, the observations are more or less mysterious and there can be no systematic approach to the forecast.
When the forecaster possesses scientific insight, on the other hand, forecasting can become truly scientific. The data, as well as the objective guidance, are interpreted through the application of that knowledge. If the data do not fit a particular model, the issue becomes one of trying to understand why they do not fit and attempting to modify that model to fit the current observations. When this is possible, and it is obvious that it will not always be so (our understanding is incomplete and imperfect), it becomes possible to depart from objective guidance and rules of thumb with some confidence that the departure will be successful. Forecasting experience allows a continuing growth of scientific understanding when forecasting is based on the scientific method.
This does not mean that a forecaster must meet all the superficial characteristics of a scientist: publishing papers in scientific journals, going to scientific conferences, etc. The real purpose of those activities is to increase communication among scientists, a real benefit to the individual and not to be dismissed as pure posturing. However, these activities are not absolutely necessary for one to be a scientist; recall our definition above. If the forecaster is formulating, testing, and revising models of atmospheric processes during the course of daily activities in the forecast office, this is sufficient for putting forecasting on a scientific basis. Naturally, we feel that the communication of those models to others is an ideal way to learn and to help others share that understanding. However, the demands of operational forecasting in today's world do not encourage such communication, a sad commentary on our circumstances.
Although the forecasting process has been discussed in some detail in D86b, we want to summarize it briefly here. In objective forecasting, be it through numerical prediction models or statistical models (or whatever), the diagnostic and prognostic steps are clearly distinct. The diagnostic step consists of some objective analysis of the data, in order to prepare those data for input to the prognostic process. This implies there is little or no feedback between the steps. Instead, the forecast proceeds more or less linearly from data acquisition to final product, and the entire process can be treated as a "black box" about which we can choose to know virtually nothing.
Another characteristic of objective forecasting is that it can make no use of qualitative input. Whereas it is possible to do something vaguely akin to pattern recognition in an objective way, this qualitative process is simulated quantitatively! Further, present capability to do this objectively is quite primitive and costly. This constraint on objective techniques makes qualitative information inaccessible to the objective schemes. The greatest impact of this constraint is felt when treating the poorly-sampled processes that influence the observations but are not defined by the data (in an objective sense). For the moment, we define such processes to be below the scale we call "synoptic" (say, below about 1000 km). The scientists who work on numerical prediction models have recognized that ignorance of such processes is a major stumbling block to further progress in numerical forecasting and so they have become quite concerned about the "mesoscale" in the past several years.
This does not mean that there is no scientific information about mesoscale processes. The reader should consult Doswell (1982) and the references cited therein to see that the science of mesoscale meteorology has not been unproductive to this point. However, the very nature of objective forecasting precludes much input from the mesoscale meteorological science now (or in the near future) because that science is still largely qualitative.
It is important to note that one can use the tools of science to forecast and still not be doing scientific forecasting! As discussed in D86a,b, the forecaster who treats objective forecasting guidance as a "black box" is in great danger of falling into this trap. Although it is difficult, if not impossible, to keep up with all the changes to objective guidance, the value of that guidance to scientific forecasting diminishes in direct proportion to the forecaster's ignorance of how such guidance is derived from the data.
By concentrating on diagnosis (as we have defined it), a forecaster can develop strategies for using objective guidance that allow the forecaster to maintain a scientific outlook while still remaining ignorant about the details of guidance. These strategies are discussed in D86a,b; the primary notions are contained in two concepts. First, forecasters can use guidance as an independent opinion about the prognosis. That is, during diagnosis an idea can be formed about how the current state of the atmosphere is evolving. This can be compared to the guidance, using the objective tools to confirm (or call into question) any ideas of the evolution.
Second, it is clear that guidance is best suited for certain parts of the forecast (e.g., the large-scale aspects of the forecast) and ill-suited for others (e.g., the mesoscale aspects of the forecast). It would be foolish to ignore the demonstrated value of guidance, even if one does not know how it works. Of course, we think it is naive to believe that humans can understand the mesoscale models one must use to forecast mesoscale details without having some knowledge of the large-scale models as well. (Recall that the models one uses are not restricted to numerical prediction models.)
In order to show precisely what we envision the process of diagnosis to be, we shall provide some examples. The most basic element of diagnosis is the employment of data to develop a picture of what processes are operating in the atmosphere at a given moment. Forecasters should have models of atmospheric processes in mind, and these models make statements about what the data should show. Where the data tend to fit those models, there are specific implications about the future state of the atmosphere. When the data fail to fit a model, the model has to be revised (perhaps adding the effects of other process models) to account for those differences. If the revised model is supported by the data, then there are new implications about the prognosis. Moreover, the evolution of the data must be compared with the model. That is, new data may change the forecaster's perception of what processes are ongoing, so the diagnosis is never static.
a. Low-level nocturnal wind maximum
The low-level nocturnal wind maximum is a process for which models exist. Even though the model we present is highly simplified, it does account for certain observations. As seen in Fig. 1, the effect of friction on the wind can be represented (crudely) as a force that is directly opposed to the wind. The flow which represents a balance among horizontal pressure gradient, Coriolis, and this rough approximation to frictional forces has been termed the "antitriptic wind" by Schaefer and Doswell (1980). Our model of this process includes the idea that friction is directly proportional to the amount of (dry) convective mixing (see Doswell, 1985, p. 5 ff.), so that in the evening, with the establishment of a surface-based nocturnal inversion, the friction is "turned off."
Fig. 1. Simplified model of force balance including friction, with Vg the geostrophic wind, V the "antitriptic" wind, P the pressure gradient force, C the Coriolis force, F the frictional force, and C' the vector sum of C and F.
Fig. 2. As in Fig. 1, showing the effect of "turning off" the friction, leaving an unbalanced component of the pressure gradient force Pu in the direction of V.
What happens then is that the flow is no longer in balance, as shown in Fig. 2. There is a part of the pressure gradient force that is no longer balanced by the Coriolis force, but is directed along the wind. This results in an increase of wind speed, which increases the Coriolis force. Since the Coriolis force is always 90 deg to the right of the wind, this means that the wind must veer (in the northern hemisphere, of course). This quite simple model suggests that during the night, winds in low levels (above the nocturnal inversion) increase in speed and veer to the right. As shown in Fig. 3, this is often in qualitative agreement with what is observed.
Fig. 3. Real example showing the effect of the decrease in surface friction on the 850 mb winds, at 00 GMT when friction is large (top) and at 12 GMT when friction has been small overnight (bottom).
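The speed-up and veering implied by this simple model can be sketched numerically. The code below is our illustration (not from the paper): once friction is "turned off," the ageostrophic part of the wind rotates clockwise (in the northern hemisphere) at the inertial frequency, and all names and sample values are hypothetical.

```python
# Sketch of the nocturnal wind-maximum model: after sunset the ageostrophic
# wind rotates as a frictionless inertial oscillation, so the total wind
# accelerates past geostrophic and veers. Illustrative only.
import math

F = 1.0e-4  # Coriolis parameter (1/s); a representative mid-latitude value


def wind_after_friction_off(u0, v0, ug, vg, t_seconds):
    """Wind (u, v) t seconds after friction is 'turned off'.

    (u0, v0) is the frictionally retarded evening wind, (ug, vg) the
    geostrophic wind. The ageostrophic component rotates clockwise (NH)
    at frequency F, with no change in its magnitude.
    """
    ua0, va0 = u0 - ug, v0 - vg               # initial ageostrophic wind
    c, s = math.cos(F * t_seconds), math.sin(F * t_seconds)
    ua = ua0 * c + va0 * s                     # clockwise rotation
    va = -ua0 * s + va0 * c
    return ug + ua, vg + va
```

With a geostrophic wind of (10, 0) m/s and a subgeostrophic, backed evening wind of (7, 3) m/s, half an inertial period later the wind is (13, -3) m/s: stronger than geostrophic and veered, in line with Figs. 2 and 3.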
Suppose that the data for a particular situation disagree with this model. The challenge to the forecaster is to try to account for the disagreement. One factor of importance might be the background pressure gradient force (or, equivalently, the geostrophic wind). When the pressure gradient is weak, all the forces involved are weak and the wind may not behave as the model predicts. Another issue is the intensity of the nocturnal surface inversion. If some low clouds are present, thereby reducing the daytime friction (convective mixing) and limiting the strength of the nocturnal inversion, the diurnal oscillation in wind speed and direction may be damped strongly. Thus, the simple model's implications about the prognosis must be modified in accordance with the situation.
b. Norwegian cyclone
Who is not familiar with the depiction of the weather associated with the Norwegian cyclone model, as in Fig. 4?
Fig. 4. Schematic model of weather distribution associated with the Norwegian cyclone model.
This is such a standard view of how extratropical cyclones influence the weather that the "man-machine mix" forecasts from NMC typically put this distribution of weather onto machine-produced surface forecasts (e.g., Fig. 5).
Fig. 5. Example of "man-machine mix" weather depiction forecast.
It is common to hear forecasts for clearing, cooler conditions with cold front passage, when the actual event includes considerable postfrontal clouds and precipitation along with the cooling. The only explanation for such a busted forecast is a slavish insistence on imposing this model (i.e., Fig. 4) onto the actual situation, in spite of considerable experience that suggests this is a risky assumption.
The Norwegian cyclone model can be employed in weather forecasting, but it must be adapted to include processes that are not relevant in northern Europe, where the model was developed. For instance, many precipitation events in the High Plains of the United States are associated with upslope low-level flow. Upslope winds are quite common behind cold fronts and so post-frontal skies are not clear very often in the High Plains. Of course it is necessary to modify even this revised model at times. If the upslope winds are very dry, skies may well become clear after cold frontal passage in the High Plains, more in line with the traditional model interpretation. Further, one might find that the real upslope component is weak in the cold air, thus diminishing the chances for clouds and precipitation.
Clearly, no model should be used slavishly, be it traditional or some ad hoc modification of the traditional model, or even a non-traditional model (e.g., one developed to deal with special, local processes). The diagnostic step is important precisely because it allows the forecaster to evaluate how well (or how badly) a model of a process applies to the given situation.
c. Convective mesosystems
Our final example, convective mesosystems, represents the quite challenging task facing forecaster/diagnosticians when dealing with mesoscale processes. As mentioned in D86a,b, the mesoscale details are poorly sampled but represent a crucial part of any weather forecast. It is often such details that make the difference between light precipitation and a destructive flash flood, for instance. Without the models of such systems provided by research, the task of forecasting would be virtually impossible to do in a scientific way. In fact, it was to familiarize forecasters with some of the relevant models that the Technical Memorandum series of Doswell (1982, 1985) was undertaken.
Fig. 6. Schematic model of convectively-induced mesosystem (from Fujita, 1955).
If we consider the model of a convective mesosystem as exemplified by Fujita (1955), we find that conventional data would not have permitted much of the insight gained through the special, research-oriented networks. Figure 6 shows a model of such a mesosystem, but if we were to have such a system in the ordinary, operational surface network, there might be only one station (or some small number of stations) influenced by that system at any single observation time. Naturally, over a period of time, some large number of stations ultimately might be affected (Fig. 7). The total set of observations available to a forecaster over the life cycle of convective mesosystems may be enough to form a qualitative picture of that system. However, this is quite difficult to do if one does not have a model in mind. The model is not to be imposed on the data; a fit should not be forced where this is not consistent with the observations. To do so is profoundly anti-scientific. Instead, the model should be used to try to understand what the data are revealing about the ongoing atmospheric processes and to anticipate the changes that are about to occur.
Fig. 7. Schematic model of the region influenced by a convective mesosystem that persists for several hours (from Fujita, 1955).
Clearly, some caution must be exercised for these under-sampled phenomena. It is quite easy for a forecaster/analyst to give the model the benefit of the doubt and reject data that do not fit preconceived notions. Precisely because the data are not sufficient to perform an "objective" analysis on this scale, forecasters must accept the penalty of erroneous interpretations of the data by subjective diagnosis, as mentioned in D86a,b. Presumably, if forecasters have a rich "vocabulary" of models of atmospheric processes, it is possible to avoid (or minimize) mistakes of interpretation.
Although scientific forecasting depends on using the concepts developed by scientific investigation, it is not necessarily only quantitative and "objective" at its core. When appropriate, quantitative knowledge is clearly of great value. However, considerable scientific knowledge does not fit the image of cold, hard, factual information, and this qualitative information, in the form of conceptual models, is quite valuable to forecasting. For the foreseeable future, it is not likely that quantitative models alone will suffice for weather forecasting and so we as meteorologists and forecasters must continue to use qualitative information about atmospheric processes. Without the critical step of diagnosis, we are powerless to do so.
Many advocates of new technology (who are not meteorologists and/or forecasters) do not understand that the diagnostic step is a vital link in the chain of scientific reasoning. Many see it as a boring, repetitive procedure ripe for automation. This would be tragic for the science of meteorology, as mentioned in D86a. However, it would be a mistake to think that new technology is not of value in diagnosis. In fact, we believe that it has tremendous potential to improve the quality of real-time diagnosis. The challenge is to provide the forecaster with tools that really enhance his or her ability to apply scientific principles to the task.
This paper is not the forum for a full treatment of this potential in the new technologies. However, we should indicate something of what we envision. Consider a truly interactive system, with which a forecaster enters an analysis via a light pen. The computer could then evaluate the quantitative implications of the analysis (e.g., kinematic fields, quasigeostrophic forcing, etc.) and display the results on the analysis. This step could be repeated, with the forecaster revising the analysis until satisfied with its quantitative implications. Such a system would take advantage of the forecaster's qualitative knowledge and minimize the likelihood of the forecaster imposing a model that is incorrect.
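The "quantitative implications" step such a system would run on the forecaster's hand-drawn analysis can be sketched as simple finite differences. The paper specifies no implementation; the function below is a hypothetical illustration computing two basic kinematic fields from analyzed wind components on a uniform grid.

```python
# Sketch of kinematic diagnostics an interactive analysis system might
# compute: centred finite differences for horizontal divergence and
# relative (vertical) vorticity. Names and layout are illustrative.


def kinematics(u, v, dx, dy):
    """Return (divergence, vorticity) at interior points of a uniform grid.

    u, v   : 2-D lists of wind components, indexed [j][i] (j along y, i along x)
    dx, dy : grid spacings (same length units as the winds' distance unit)
    Boundary points are left as zero for simplicity.
    """
    nj, ni = len(u), len(u[0])
    div = [[0.0] * ni for _ in range(nj)]
    vort = [[0.0] * ni for _ in range(nj)]
    for j in range(1, nj - 1):
        for i in range(1, ni - 1):
            dudx = (u[j][i + 1] - u[j][i - 1]) / (2 * dx)
            dvdy = (v[j + 1][i] - v[j - 1][i]) / (2 * dy)
            dvdx = (v[j][i + 1] - v[j][i - 1]) / (2 * dx)
            dudy = (u[j + 1][i] - u[j - 1][i]) / (2 * dy)
            div[j][i] = dudx + dvdy    # horizontal divergence
            vort[j][i] = dvdx - dudy   # relative (vertical) vorticity
    return div, vort
```

Displaying such fields on top of the forecaster's own analysis is exactly the kind of feedback loop envisioned: redraw, recompute, and compare until the qualitative picture and its quantitative implications agree.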
There are other recent technological innovations that could be discussed, many having a potential impact on diagnostic meteorology. It is hoped that we have illustrated precisely what is accomplished during diagnosis by humans, and its extremely high value in scientific approaches to forecasting. By implementing technological innovations in a way that recognizes this human contribution, both research and operations can take maximum advantage of these innovations.
Doswell, C. A. III, 1982: The Operational Meteorology of Convective Weather. Vol. I: Operational Mesoanalysis. NOAA Tech. Memo. NSSFC-5, National Severe Storms Forecast Center, 601 East 12th St., Kansas City, MO 64106.
______, 1985: The Operational Meteorology of Convective Weather. Vol. II: Storm Scale Analysis. NOAA Tech. Memo. ERL ESG-15, Environmental Sciences Group, 325 Broadway, Boulder, CO 80303.
______, 1986a: The human element in weather forecasting. Preprints, 1st Workshop on Operational Meteorology (Winnipeg, Man.), Atmos. Env. Service/Can. Meteor. and Oceanogr. Soc., 1-25.
______, 1986b: Short range forecasting. Ch. 29 in Mesoscale Meteorology and Forecasting (P. Ray, ed.), NOAA/CIMMS/AMS, 689-719.
______, R.A. Maddox, and C.F. Chappell, 1986: Fundamental considerations in forecasting for field experiments. Preprints, 11th Conf. Wea. Forecasting and Analysis (Kansas City, MO), Amer. Meteor. Soc., 353-358.
Fujita, T., 1955: Results of detailed synoptic studies of squall lines. Tellus, 7, 405-436.
Schaefer, J.T., and C.A. Doswell III, 1980: The theory and practical application of antitriptic balance. Mon. Wea. Rev., 108, 746-756.