As usual throughout this Website, this document is purely the opinions of C. Doswell and does not have any official status. This is not really an essay ... just a collection of my pet peeves, with explanations of varying complexity to provide some rationale why this or that bothers me. It will be updated and added to as things come to mind. I welcome discussion about any or all of these at: firstname.lastname@example.org. I'm pleased that people actually do this from time to time, so I've added a page that incorporates those discussions without breaking up the flow of this page.
For the record ... this site is supposed to be fun! I don't have the ability to impose my will on anyone, let alone other scientists, dictionaries, glossaries, encyclopedias, or whatever. I reserve the right to speak my mind, and I'm presenting what I think are serious arguments, but the point of making them accessible here is (at least in part) to have some fun (and to act as a catharsis). So lighten up, folks.
Thanks to: Allan Rosenberg for providing me with the dictionary and "The Elements of Style" links, as well as pointing out some of my own mistakes herein ... also, to Alan Davis, Rachel Gavelek, and Steve Ricketts for reminding me of some examples that bug me, too ... and to those like Paul Sirvatka, Kai Esbensen, Jason Knievel, Matt Parker, David White, Jim Means, and Brian Curran who actually have responded with interesting arguments about my pet peeves.
This word usually is pronounced "ki LO meters" ... with the emphasis on the second syllable ... but this is not consistent with the way we normally pronounce metric units. We don't say "cen TI meters" or "ki LO grams," so I believe we should pronounce this as "KI lo meters," in the same way we say "CEN ti meters" or "KI lo grams."
This word is too long. It is a noun describing the degree to which a state is baroclinic. The equivalent noun describing the degree to which a state is barotropic is "barotropy," so to be consistent we should either say "barotropicity" or "barocliny" ... since both the former and the latter seem awkward, a compromise is "baroclinity," which I prefer to "baroclinicity."
3. short waves
The nearly universal application of this two-word phrase is to short wave troughs. Strictly speaking, "short waves" include both troughs and ridges, so a proper description should distinguish between them, unless the intent is to include them both.
The root of this word is "fluence" and the opposite of "difluence" is "confluence." In analogy with "divergence" and "convergence" ... the root of which is "vergence" ... this should not be spelled "diffluence." Otherwise, we should say "difvergence" to be consistent. Apparently, both spellings are considered correct, so I prefer the spelling with one "f" for consistency reasons. I have heard, but do not like, etymological discussions rationalizing the two "f" spelling.
The word "thundershower" is used often to imply a weak thunderstorm, often in radio and TV weather broadcasts. I have heard (via the Internet) such pathetic rationalizations as "The public might panic if they heard they were going to experience a thunderstorm, and so I use thundershower for ordinary events." Without going into details, I find it monumentally unlikely that the public would panic over the use of the right word: "thunderstorm." They don't panic when hearing "tornado" in a weather broadcast, either ... contrary to the discredited policy imposed on the Weather Bureau (before it became the National Weather Service) for many decades ... and the word "tornado" ought to be a lot scarier than "thunderstorm"!
Strictly speaking, any occurrence of thunder, with or without precipitation, is defined to be a thunderstorm ... the intensity of the "storm" is not really implied at all by the word "thunderstorm." Hence, the intensity must be indicated by including an adjective (from "weak" to "severe"). The occurrence of a thunderstorm ("T" in the pre-METAR format) accompanied by showery precipitation (either "RW" for rain showers or "SW" for snow showers in the pre-METAR format) might be considered a thundershower, of course. However, that's not the typical context in which the word is used ... or perhaps misused. Presumably, as the word is often misused, a thundershower could be accompanied by tornadoes, large hail, and damaging winds ... a "severe thundershower" ... a term that is virtually never heard. Since a "thunderstorm accompanied by showery precipitation" says nothing about intensity of either the thunder or the showery precipitation, you would have to use an adjective in addition to the word "thundershower" to convey properly any sense of intensity. Now, what is the difference between, say, a "weak thunderstorm" and a "weak thundershower"? Presumably, it could be argued that one implies the presence of showery precipitation and the other doesn't ... again, however, this is not the way the word is used (abused?). I vote for expunging "thundershower" from our vocabularies.
6. correlation and association
I often hear the phrase in scientific talks that "such-and-so is correlated with this-and-that" when it is obvious from the presentation that a statistical correlation analysis has not been done. The word "correlation" is being used, carelessly, as a synonym for "association." To have done correlation analysis involves, at the very minimum, having stated the statistical character of the proposed association (e.g., variable A is linearly correlated to variable B), and having tested that as a hypothesis (e.g., computing a linear correlation coefficient, which measures the degree to which A and B are linearly correlated). If this minimal analysis (or more) has not been done, then using "correlate" this way in a scientific presentation is really misleading terminology. If the context is not scientific, of course, this whole item is pure pedantry. But then again, most of this Web page is pedantry! [See item B.13, below, as well]
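As a concrete illustration of the "minimal analysis" described above, here is a sketch ... with entirely hypothetical data ... of what it takes to claim a linear correlation: state the statistical model (a linear one) and actually compute the linear correlation coefficient.

```python
# Sketch only: the data are hypothetical, invented to illustrate the point.
from math import sqrt

def pearson_r(a, b):
    """Linear (Pearson) correlation coefficient between two paired samples."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / sqrt(var_a * var_b)

# Hypothetical paired observations of variables A and B:
A = [1.0, 2.0, 3.0, 4.0, 5.0]
B = [2.1, 3.9, 6.2, 8.0, 9.8]
r = pearson_r(A, B)
print(round(r, 3))  # → 0.999: a strong *linear* correlation, not a mere "association"
```

Only after a computation like this (and, more properly, a significance test on r) is "A is correlated with B" a defensible statement.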
It is a mystery to me what was wrong with millibars, a perfectly good metric unit. It's not as if we need a nomogram or a calculator in going from mb to hPa, after all! [In case you don't know it, 1 mb = 1 hPa!] Of course, some purists want to argue that we should be using kiloPascals, and that hectoPascals are some sort of abomination by comparison. Another example of pure balderdash! I am in favor of uniformity in units, but I don't think we need to be dogmatic about it. After all, most of the readers of scientific papers should be able to make the conversion from mb to hPa (or even kPa, for that matter) without exerting too much effort ... I mean, most of them do have PhD's, right? It's not at all obvious that our science is being advanced in any way by converting from mb to hPa. Therefore, being a stubborn person, I refuse to agree to this insistence on pure SI units in publications.
8. accuracy vs. skill
When describing the quality of forecasts, the notions of accuracy and skill often are treated as synonymous, but they are not. Accuracy refers to the correspondence between forecasts and observations, with increasing accuracy associated with increasing correspondence. Skill, on the other hand, is associated with the relative performance of the forecasting system in question, when compared to some baseline forecasting system. Baseline systems often used for measuring skill include: climatology, persistence, and Model Output Statistics (MOS) forecasts; the idea is to measure the improvement (or lack thereof) of the system in question compared to the baseline system. An accurate forecast is not necessarily skillful, and vice-versa.
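To make the distinction concrete, here is a small sketch. The numbers are hypothetical, and the choice of mean squared error as the accuracy measure and climatology as the baseline is mine, for illustration only.

```python
# Sketch: accuracy vs. skill, with invented numbers.

def mse(forecasts, observations):
    """Accuracy: mean squared error between forecasts and observations."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, observations)) / len(forecasts)

def skill_score(forecasts, baseline, observations):
    """Skill: improvement over a baseline system.
    1 = perfect, 0 = no better than baseline, negative = worse than baseline."""
    return 1.0 - mse(forecasts, observations) / mse(baseline, observations)

obs = [10.0, 12.0, 15.0, 11.0]
fcst = [11.0, 12.0, 14.0, 10.0]   # the forecast system being evaluated
clim = [12.0, 12.0, 12.0, 12.0]   # baseline: the climatological mean

print(mse(fcst, obs))                 # accuracy of the forecasts
print(skill_score(fcst, clim, obs))   # skill relative to climatology
```

Note that the same forecasts could be accurate yet unskillful (if climatology happened to do just as well), which is the point of the distinction.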
Forecast discussions among professional forecasters (and even authors of scientific papers, who definitely should know better) often use this term as if everyone knows what they are talking about, when it seems to me that this is about as vague a description as it is possible to offer. If this term is going to be used, at the very least, authors should be careful to specify precisely what they mean. I've discussed the word in a paper I published a while back (Doswell, C.A. III, 1987: The distinction between large-scale and mesoscale contribution to severe convection: A case study example. Wea. Forecasting, 2, 3-16) ... here, I recommend avoiding its use. Just describe the process(es) you think are occurring and don't indulge in this vague description.
The singular word "vortex" has the plural form "vortices" ... this is another relic of Latin (an inflected language) in English that doesn't follow typical English rules governing the formation of plural forms. Apparently, creeping misusage has added some acceptability (in dictionaries) to the more recognizable English plural form "vortexes," but my perception (in my role as "schoolmarm" ... see #C.7, below) is that this latter form is not as proper as the Latin form. I have seen various spellings, especially in the context of storms with multi-vortex tornadoes. Hearing tornadoes referred to as having "multiple vortices" prompts some to use "vorticy" (or some variant on this) as the singular form ... these are not even proper words in most cases.
11. solar insolation/incoming insolation
The word "insolation" is a contraction of "incoming solar radiation" ... therefore, both "solar insolation" and "incoming insolation" are redundant.
12. Radar meteorology/Satellite meteorology
There may have been a time when the content of new observing tools like radar and satellites was deserving of attention in its own right. However, the time has long passed when "Satellite Meteorology" and "Radar Meteorology" made any sense as subdisciplines, analogous to "Tropical Meteorology" or "Synoptic Meteorology". Any particular observational tool's contribution to understanding is no more than a small part of any subfield within the general topic of meteorology. A useful comparison might be to imagine a whole subfield called "barometer meteorology" ... this reductio ad absurdum demonstrates the bankruptcy of the terms "Radar Meteorology" and "Satellite Meteorology". Let's drop them from our vocabulary, please.
I see the term optimal used a lot in scientific papers, and it always seems to me to imply more than what the authors are intending. Optimality is virtually always with respect to something; that is, something has been optimized under a specific set of constraints. The adjective optimal (as in "optimal analysis") seems to suggest, however, that no one could ever do any better than the result presented as being optimal. Nevertheless, in many cases, the result can be improved, but in some way other than that employed during the optimization. For instance, if an analysis scheme is constructed such that it has maximum correlation with an ensemble of observations, then it is certainly true that no analysis can be constructed that has higher correlation with that specific ensemble of observations. However, if the ensemble is changed, then some different scheme might do better than that constructed from the original ensemble. Moreover, the correlation with any particular member of the ensemble of observations might be increased well beyond that found when using a scheme derived by optimizing with respect to the entire ensemble. Thus, the optimality of a particular result needs to be understood to be limited to the specific process of optimization. It should not be interpreted to imply optimality in all applications and circumstances.
14. Model "resolution"
When describing the "resolution" of a model, the grid intervals (in space and time) typically are cited. Strictly speaking, features on the scale of the grid intervals are not resolved by the model! The smallest features that can be said to be "resolved" in any meaningful sense of the term are those at twice the model's grid interval, and even at that scale, the amount of information about such small features is pretty limited (see Doswell, C.A., and F. Caracena, 1988: Derivative estimation from marginally-sampled vector point functions. J. Atmos. Sci., 45, 242-253). Thus, this terminology should be discouraged, in my opinion.
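A toy illustration of the point ... not drawn from any particular model ... a wave at twice the grid interval is the shortest wave a grid can carry at all, and even that wave can disappear entirely depending on its phase relative to the grid points:

```python
# Toy example: sampling a sine wave of wavelength 2*dx on a grid of spacing dx.
from math import sin, pi

dx = 1.0
x = [i * dx for i in range(8)]   # a small one-dimensional grid

def sampled_wave(wavelength, phase):
    """The wave as the grid 'sees' it: its values at the grid points only."""
    return [sin(2 * pi * xi / wavelength + phase) for xi in x]

# A 2*dx wave whose zero crossings fall on the grid points: every sampled
# value is ~0, so the wave is completely invisible to the grid.
print([round(v, 6) for v in sampled_wave(2 * dx, 0.0)])

# The same wave shifted a quarter wavelength alternates +1, -1, +1, ...
# which is the most the grid can ever say about a feature this small.
print([round(v, 6) for v in sampled_wave(2 * dx, pi / 2)])
```

Anything smaller than 2*dx is aliased onto longer waves, which is why quoting the grid interval itself as "resolution" overstates what the model can represent.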
1. Air mass/frontal thunderstorms
An old idea is that thunderstorms are either of the "frontal" sort, or the "air mass" sort. This terminology seeks to distinguish between thunderstorms along fronts (zones of strong thermal gradients) from other sorts of thunderstorms. The idea is that fronts provide a lifting mechanism to develop convection, whereas other thunderstorms develop within broad areas of more or less homogeneous characteristics (air masses). It also is often taken to imply that the thunderstorms develop more or less randomly in the "air mass," as opposed to the organization provided by the front. I believe that many thunderstorms develop outside of surface frontal zones (i.e., synoptic-scale fronts). Moreover, the development of thunderstorms is never random ... they develop in particular places at particular times for reasons that we may not be able to observe and/or understand, but it is absurd to think that thunderstorms develop, in effect, for no reason! Hence, I dislike intensely the use of the term "air mass thunderstorm" ... actually, I find this whole classification scheme to be well-removed from the reality of thunderstorms. It's obsolete and should be discarded.
"Propagate" is a word that sounds erudite and scientific. "Move" is common and used widely by non-scientists. However, "propagate" has some very specific meanings, especially in regard to science and in meteorology in particular. Movement is a perfectly good word to describe how something is seen to be at some position, denoted by the position vector Xo at time t = to, and is at some other position Xo+dX at time t = to+dt. What we see as movement in the atmosphere can be the result of two very distinct processes:
1. Something is simply carried along by the flow (like a stick in a traveling stream), which has velocity V. This is called advection.
2. The atmospheric "thing" we are seeing dissipates at Xo sometime after t = to, only to re-form at Xo+dX by t = to+dt, when next we observe it. This is called propagation.
It is impossible to watch an atmospheric process continuously and so we can never be completely sure that the "something" we see at t = to is "the same something" we see at t = to+dt .
Real motion of atmospheric "somethings" involves both, because what we observe of atmospheric structure are observations of processes. Fronts, lows, troughs, ridges, outflow boundaries, etc. ... these are all processes, not solid objects simply carried along with the flow (as is a stick in a stream). Their movement typically involves both advection and propagation. Since "propagation" is a $5 word and "movement" is only a 5-cent word, many writers believe it makes them sound scientific to write "propagation" when the intended meaning really is "movement."
3. PVA=upward motion
I have provided a separate Website discussion of this issue.
4. Divergence (convergence) causes vertical motion
I often hear folks saying things like "the low-level convergence (or the high-level divergence) caused upward motion to appear." I also hear discussions about how upward motion results when "upper-level divergence comes to be superimposed on low-level convergence." These notions represent a basic misunderstanding of how the atmosphere really works. Divergence/convergence and vertical motion are connected by the Law of Mass Continuity. In constant pressure coordinates, this takes the form:
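The equation in question appeared as a figure on the original page; a reconstruction of the standard form, consistent with the discussion that follows, is:

```latex
% Mass continuity in constant pressure coordinates:
\frac{\partial \omega}{\partial p} = -\nabla_p \cdot \mathbf{V}
% so the vertical motion at pressure level p follows by integrating the
% horizontal divergence through the layer below:
\omega(p) = \omega(p_s) + \int_{p}^{p_s} \nabla_p \cdot \mathbf{V}\, dp
```

where omega is the vertical motion in pressure coordinates (negative for upward motion) and p_s is the pressure at the bottom of the layer.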
The vertical motion at some level is found by integrating the horizontal divergence in the layer below that level.
Thus, the vertical motion at some level depends on the layer beneath it, and what the vertical motion was at the bottom of the layer. This can get complicated, such that divergence/convergence at any given level cannot be taken to infer anything about vertical motion. Things are not always that bad, however.
Generally speaking, if there is upper-level divergence, the usual situation is that there is upward motion beneath it (with convergence at some level at the "foot" of the upward motion). See the figure. If there is surface convergence, there is upward motion above it, terminating somewhere above the surface in a region of divergence. The simultaneous existence of upward motion, with convergence at its base and divergence at its top, is a necessary consequence of mass continuity. The Law of Mass Continuity is a diagnostic equation that contains no time derivative of vertical motion. Hence, it cannot identify causes for vertical motion. There is a perfectly well-known way to ascertain causes for changes in vertical motion ... it's called the Third Momentum Equation. In constant height coordinates, this takes the form:
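This equation also appeared as a figure on the original page; a reconstruction matching the symbol definitions given just below is:

```latex
% Vertical equation of motion in constant height coordinates:
\frac{dw}{dt} = -\frac{1}{\rho}\frac{\partial p}{\partial z} - g + C_3 + F_3,
\qquad C_3 = 2\,\Omega\, u \cos\varphi
```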
where w is the vertical motion, ρ is the density, p is pressure, g is the acceleration due to gravity, C3 is the vertical part of the Coriolis force (C3 = 2Ωu cos φ), and F3 is the vertical friction force. The latter two terms are usually neglected. There is no "convergence force" in this equation! If you want to establish causes for changes in vertical motion, you must look for them here, not in the Law of Mass Continuity.
5. "Subgrid" things that MUST have been there
In many situations, authors can't validate the existence of some process or parameter (like PVA, or low-level moisture) because the data simply aren't present, or the existing data don't reveal its existence. Nevertheless, they insist that the process or parameter was actually present, based on the events that occurred. Logically, this is unacceptable in science ... they might be right, of course, but it violates a fundamental principle in science to assume that a particular hypothesized causative mechanism is present when the event is observed, even when the data don't reveal that mechanism! In effect, it is "changing the data to fit the hypothesis" ... a strictly forbidden exercise in science. If the event occurred, and it is impossible to observe the hypothesized mechanism, then the investigator must accept that the hypothesis could be wrong. If additional data were obtained that might tend to confirm the hypothesis, that's fine, but in the absence of such data, it becomes impossible to say very much about the hypothesized mechanism. The existing data can neither confirm nor deny the hypothesis in such situations. An author can speculate that the hypothesis is valid, but in the absence of information supporting the hypothesis, it can be no more than speculation and must be identified as such.
This word comes up often in the context of thunderstorms. Thunderstorm initiation requires moisture, instability, and lift. The concept is that somewhere within the atmosphere, a parcel can be found that has buoyancy if lifted far enough to attain its Level of Free Convection (or LFC, beyond which it is buoyant and can accelerate upward with no further lift required). For this to take place, there are 3 things needed: moisture, conditional instability, and some process to lift a non-buoyant parcel to its LFC. Presumably, the notion of lifting as a "trigger" assumes the presence of moisture and instability sufficient to allow some parcel to have an LFC, and it is only awaiting the lift.
In the absence of any one ingredient of the necessary triad, no thunderstorms will occur. So which is the "trigger" ... if any two are present, in the absence of the third, the thunderstorms await the missing ingredient as a "trigger," do they not? Which ingredient is most important? Trick question! ... no one of the three can be most important. Thunderstorms always require all three. Is it always found that moisture and instability gather in the absence of lift, as opposed to other combinations? I think not. Moisture and lift often occur in the absence of conditional instability ... its arrival could then logically be considered a "trigger"! Instability and lift often occur together in the absence of moisture. Arrival of moisture could then be viewed as a "trigger."
I prefer to consider the question of convective initiation as one of the simultaneous presence of all three basic ingredients and forgo the idea of "trigger" completely. If we must have a "trigger" it is not obvious to me that it must always be the "lift" ingredient!
I have provided a separate Website discussion of this issue.
8. Difluence = divergence
Difluence is simply the spread of streamlines downstream. Let α represent the wind direction, as an angle relative to some standard direction ... in meteorology, the convention is that a wind from the north is zero degrees, and the angle increases in a clockwise direction. In a so-called natural coordinate system (s, n), where s is the direction along the flow and n is the direction normal to the flow (and to the right of the wind), it can be shown that the divergence is given by

∇·V = V (∂α/∂n) + ∂V/∂s

where V is the wind speed. The difluence is only part of the first term: ∂α/∂n ... therefore, the difluence of the flow cannot be equivalent to the divergence, although they clearly are related. It is a common mistake, however, to equate difluence with divergence (and then go on to infer vertical motion, another mistake).
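A simple numerical check of this distinction: the radial "fan" flow below (invented purely for illustration) has strongly spreading streamlines ... difluence ... but its speed decreases downstream at exactly the compensating rate, so its divergence is identically zero.

```python
# A difluent but nondivergent flow: a radial fan with speed falling off
# as 1/r, i.e. (u, v) = (x, y) / (x**2 + y**2). Streamlines spread apart
# downstream, but the along-flow speed decrease compensates exactly.

def velocity(x, y):
    r2 = x * x + y * y
    return x / r2, y / r2

def divergence(x, y, h=1e-5):
    """Centered finite-difference estimate of du/dx + dv/dy."""
    dudx = (velocity(x + h, y)[0] - velocity(x - h, y)[0]) / (2 * h)
    dvdy = (velocity(x, y + h)[1] - velocity(x, y - h)[1]) / (2 * h)
    return dudx + dvdy

# The divergence vanishes (to within finite-difference error) everywhere
# away from the origin, despite the obvious difluence of the streamlines.
print(abs(divergence(2.0, 1.0)) < 1e-6)   # True
```

Anyone equating the spreading streamlines of this flow with divergence (and then inferring upward motion) would be entirely wrong.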
I'm mostly mystified by why anyone thinks difluence is a useful parameter, anyway, at least in the context of assessing the favorability of an environment for convection. Historically, it seems that difluence is a proxy for upward motion, through the dubious inference of divergence aloft. It strikes me as more useful, more direct, and more physical simply to find the vertical motion than to infer its presence from difluence at high levels.
9. Fronts always analyzed at windshifts
This has been discussed at length in the following paper: Sanders, F. and C. A. Doswell III, 1995: A case for detailed surface analysis. Bull. Amer. Meteor. Soc., 76, 505-521.
10. Oklahoma/Texas vs. the world
I am constantly hearing nonsense like "Oh, the storms you guys see in Oklahoma/Texas! We never see storms like that here [where "here" is another state, another region, another country, or whatever]!" Another version of this is "Well, the storms we see here are different than the ones you guys have in OK/TX!" This is really galling, and implies that somehow the laws of physics are different in Oklahoma/Texas than in the rest of the world. The atmosphere does not have a watch, a calendar, or a map ... if conditions for a certain type of storm are created at an atypical time, date, or location, then that type of storm can and will occur. There can be little doubt that the conditions for tornadic supercells are not created with the same frequency around the world, but there is also little doubt that tornadic supercells can and do occur outside of Oklahoma and Texas. Believing that they cannot occur outside of OK/TX makes you vulnerable to unpleasant surprises!! If we scientists here in Oklahoma have committed any error, it is that we often have chosen to write papers about only a small minority of storms ... tornadic supercells are not common even in OK/TX. There are lots of other storm types here, and not all tornadoes in Oklahoma come from "textbook" storms. The new Doppler radars are providing clear evidence that supercells (and, occasionally, tornadoes) occur in many parts of the U.S. where they were heretofore believed to "never happen"! There is abundant new evidence of their occurrence in Europe, Australia, Africa, South America, Asia ... around the world.
Part of the persistent notion I hear about "storms are just different here!" is apparently a conflation of the environment with the storms; that is, a lack of distinction between the processes that create the conditions for convective storms and the convective storms themselves. It's hard for me to imagine, otherwise, what folks might mean by the assertion that storms in some location (or at some unusual time of the year, or at an unusual time of day) are different: Do their updrafts go down? Do their mesocyclones rotate anticyclonically? Does their rain fall upward? If you are going to assert that your storms are different, it behooves you to be very precise about how they're different! What is it about your storms that makes them different from mine? I maintain that the environmental conditions making a certain event possible are logically and physically distinct from the process that those conditions support! Taking the "holistic" view that a storm should not be separated from the conditions giving it birth is, in my opinion, not a useful perspective.
There can be no doubt that the frequencies and intensities of storms vary as a function of location, time of year, and time of day ... these variations are associated with differences in the frequencies of the environmental conditions giving rise to certain convective storm behaviors. As seen here, for example, there can be large spatial and temporal variability in these frequencies associated with variations in the meteorology, but this does not mean that when the conditions for a certain storm behavior are brought together, then something different happens because of the location, the time of year, or the time of day. The response to those conditions ... the convective storm ... is essentially the same the world over.
Now storms have a great number of superficial differences, not altogether unlike the fact that every person has a unique set of fingerprints, but that doesn't mean that everyone is fundamentally different from everyone else. No one would argue that Joe is a different type of human being from John because they have different fingerprints! There may well be subsets of humans that share certain characteristics and it might be possible to develop a taxonomy (classification scheme) based on one or more of these characteristics. Since we all have different fingerprints, this would not be a very useful classification scheme ... each class would have only one member!
In the same way, each storm is probably a unique event at some superficial level (no storm is probably exactly like another storm), but from the point of view of a meteorologist, there is no reason to believe that a supercell in Ethiopia is somehow fundamentally distinct from a supercell in Oklahoma, or that supercells in the fall have to be distinguished from those of the spring. A taxonomy based on the geographic location (or time of year, or time of day) is unlikely to produce anything particularly useful in terms of distinguishing convective storm types. A supercell develops because the ingredients for a supercell were brought together ... if they came together in an altogether different synoptic setting in one place versus another place, my argument is that the supercell is basically the same: the storm is a convective process, with updrafts and downdrafts, precipitation and winds, a mesocyclone, etc. Unless someone can point to some specific storm characteristic and show that it is uniquely associated with a location, a time of year, or time of day, I believe the burden of proof is on the shoulders of those who are asserting such a distinction.
11. The Fujita scale's misinterpretations 
Prof. Fujita's tornado rating scale, the so-called Fujita scale, has invited misinterpretations virtually since its introduction. Let me hasten to say at the outset that I believe that the F-scale is a real contribution to our science and Prof. Fujita deserves the recognition that he has gotten through its introduction. Having some reasonably systematic way to estimate tornado intensity strikes me as a real step up from treating all tornadoes as if they were the same, which is how things were before the F-scale was introduced. However, there are three important aspects of the F-scale that lead frequently to misinterpretations:
a. The F-scale category / windspeed relationship
The notion of the F-scale measuring the intensity of tornadoes is connected to the windspeed estimates assigned by Prof. Fujita to the categories. Although I have no way of knowing just what Prof. Fujita was thinking, it seems that the F-scale categories were originally designed to provide a sort of "bridge" between the Beaufort wind scale and the Mach scale. That is, F-1 was chosen to represent Beaufort force 12 (minimal hurricane force winds), and F-12 would represent Mach-1. Of course, Fujita believed that F-5 was incredible and F-6 was literally inconceivable, so that F-6 through F-12 were just empty placeholders on this "bridge" between speed scales. Therefore, strictly speaking, it seems that the F-scale was "designed" to be a windspeed scale.
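This "bridge" can be made explicit. The interpolation formula commonly attributed to Fujita (1971) is V = 6.30 (F + 2)^1.5 m/s; taking that formula as given (the constants come from the literature, not from this page), a few lines of code reproduce the familiar category anchors:

```python
# Fujita's interpolation between Beaufort force 12 and Mach 1,
# V = 6.30 * (F + 2)**1.5 m/s, converted to mph. The formula and its
# constants are as commonly attributed to Fujita (1971).

def f_scale_speed_mph(F):
    """Windspeed (mph) assigned to F-scale value F by Fujita's formula."""
    ms_to_mph = 2.23694
    return 6.30 * (F + 2) ** 1.5 * ms_to_mph

# F1 begins near minimal hurricane force (Beaufort 12) ...
print(round(f_scale_speed_mph(1)))    # → 73
# ... F6 begins at the familiar 318-319 mph top of F5 ...
print(round(f_scale_speed_mph(6)))    # → 319
# ... and F12 lands near the speed of sound (about 330 m/s).
print(round(f_scale_speed_mph(12)))   # → 738
```

The category boundaries (73, 113, 158, 207, 261, 319 mph) thus fall out of a curve fit between two other speed scales ... not out of any calibration against damage.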
I have written on this topic elsewhere [See: Doswell, C.A. III, and D.W. Burgess, 1988: On some issues of United States tornado climatology. Mon. Wea. Rev., 116, 495-501]. The hangup with the F-scale as a windspeed scale is that there was no way to determine windspeeds in tornadoes at the time of its development. Even now, obtaining tornadic windspeeds near the surface is pretty much impossible in most cases. The boundaries between F-scale categories in terms of wind were fixed by the fit between the Beaufort and Mach scales, not by an accurate knowledge of the windspeeds needed to produce a given level of damage to a frame home. As noted in the article cited above, the real-world application of the F-scale has always been in terms of damage, not windspeed. Unfortunately, the relationship between the windspeeds and the damage categories has not been tested in any comprehensive way, for several reasons:
I could go on, but the main point to understand about the F-scale is that the damage vs. windspeed relationship has never been tested thoroughly. With all due respect, the windspeed limits to the F-scale categories are simply Prof. Fujita's best guesses. For all I know, they might be quite accurate, but since they have not been "calibrated," it seems quite inappropriate to give them complete credence.
b. The implication of precision
By providing such specific numbers, the category boundaries imply a high degree of precision. That is, for example, it appears that a windspeed of 112 mph produces qualitatively different damage than a 113 mph windspeed; i.e., F1 vs. F2. This is absurd, of course. If we were to do a calibration of the windspeed-damage relationship, we would not find such hard boundaries between categories, with precise thresholds. Instead, we would find "fuzzy" boundaries ... rather than "top hats" we would find something resembling "bell curves." [See my discussion on this here.] That is, the windspeeds associated with a particular type of damage might well have a peak at some value, but the range would not drop to zero at values somewhere above and below that peak ... instead, the range would tail off at both ends. Sometimes, circumstances would create a particular type of damage at windspeeds higher or lower than the numerical ranges currently associated with the F-scale categories.
I'm confident that Prof. Fujita does not believe that a 157 mph wind causes quite different damage from a 158 mph wind (F2 vs. F3), so this implication is a misinterpretation of the F-scale, even disregarding the lack of precise windspeed-damage calibration. I often see media reports after a tornado has been given, say, an F4 rating, that the tornado "packed winds of 260 mph!" The media tend to push everything to the high end of the range, apparently because it sounds more impressive. If they do moderate this, it usually is to something like "... winds of up to 260 mph!" ... as if the specific number of 260 mph is set in stone. This is quite common in media "scientific" presentations, such as the Discovery Channel's "Raging Planet: Tornadoes" program, first aired in August of 1997.
c. The absolute maximum tornado windspeed
This topic is a popular item for discussion, even though it reminds me somewhat of debates regarding angels and pinheads. There is no reason to believe that the figure given for the top of the F5 category, 318 mph, has some scientific credibility as the maximum tornadic windspeed. It, like the other numerical values, is simply Prof. Fujita's best guess. Of course, Prof. Fujita's best guess ain't chopped liver, either! It's quite possible that if an absolute maximum windspeed can be given, the figure of 318 mph might well be in the right ballpark. However, we don't know for sure if such a maximum can be provided in any sense other than theoretical, and even if it could, it seems unlikely to be 318 mph, exactly. We have so few observations of tornadic windspeeds by any means whatsoever that it's silly to speak of having established the absolute maximum windspeed figure down to within a precision of 1 mph.
It is pretty certain, however, that the peak tornadic windspeeds are not as high as 500 mph, and certainly do not approach or exceed the speed of sound. Years ago, numbers like these were mentioned in total ignorance, but they no longer are considered credible by most tornado researchers.
Some authors (notably, Prof. Brian Fiedler at the University of Oklahoma) have proposed a "thermodynamic speed limit" for tornadic windspeeds. This is no more than a point of departure for some theoretical discussions, and certainly is completely unrelated to Prof. Fujita's figure of 318 mph. It seems likely that the windspeeds in real tornadoes regularly exceed this "speed limit", albeit only briefly.
Recently, in light of the 3 May tornado in the Oklahoma City Metroplex, claims of observing windspeeds equalling or exceeding this 318 mph figure have been made. Irrespective of the merits of those claims, if we assume for the sake of argument that they're accurate, does this raise the possibility of an F6 tornado? Although Fujita included such a possibility in some of his work, it is not at all clear what sort of damage to look for to confirm the existence of such an event (in the absence of measured windspeeds). The uncalibrated character of the F-scale, especially at the high end, makes this discussion about the existence of an F6 tornado seem very silly to me, even if measured windspeeds are available. Formally, a 319 mph windspeed would qualify, of course. I've already dealt with this absurdity earlier in this discussion. Generally, I am opposed to this sort of sensationalism (pandering to the media) in any meaningful science-based discussion of tornado intensity. Also, see here for a discussion of "records" about tornadoes.
12. Dei ex machinae
A recurring theme in "explanations" of the weather is the nice, simple causative mechanism that provides a simple answer to a complex question. If a weather event of some sort (e.g., a tornado) happens to hit in a place or at a time that is climatologically unusual, then it is attributed variously to such things as:
The current favorite is El Niño. What will it be next year? Alternatively, there are implications that a particular deus ex machina is going to result in some unpleasant condition in the future ... i.e., a prediction is made. The recurrence of weather events (e.g., the many rainstorms in the U.S. central plains during the summer of 1993, or a particular number of tornadoes in a given year, or whatever) also can be attributed to such oversimplified "explanations."
It's not that a particular instance of a weather event is unrelated to one (or more) of these factors ... it's simply that in many uses of this sort of "pat" explanation, vast oversimplification (perhaps to the point of being ludicrous) is being done. Certainly, for example, the El Niño process alters an important factor in the world's weather: the sea surface temperature distribution. However, to make the statement that a particular event (past, present, or future) is related to the El Niño via a chain of causality is hard to establish with any confidence ... for some events, it might even be literally impossible to show cause and effect. The El Niño phenomenon's relationship to the observed weather (say, a heavy rain event in California, a tornado in Mississippi, or a drought in Australia) is clouded by a number of factors:
It's entirely possible that a given El Niño did have an effect on one or more events, but the challenge is to show that the El Niño was the primary causative factor ... tough to do. Any of the dei ex machinae might well be a factor in a meteorological event, and some of them can have pretty compelling evidence of statistical associations, but statistical evidence is not proof of cause and effect (see the next item). As another example, volcanic eruptions certainly have the potential to affect the global temperature, and there even is a cause and effect link to global mean temperature that has been established with some confidence. But not every volcanic eruption has the same effect. There are many factors that can alter the meteorological outcome of a volcanic eruption. Attributing a particular event to the eruption of a volcano is risky unless it can be shown unambiguously to be associated with the known causality chains.
Naturally, the media often engage in this sort of simplistic exercise, but even scientists who should know better do so from time to time. For a rational view of El Niño, see this page.
13. Statistical association
I really shouldn't have to go into this, but it surfaces a lot in the media, and occasionally even in the scientific literature. The existence of a strong statistical association does not imply cause and effect. Any text in statistics will emphasize this, but if event A is strongly associated with event B, it is tempting to presume that A explains B or vice-versa. As a somewhat contrived (but still useful) example, it is easy to show that virtually every criminal has, at one time or another, eaten at least one pickle. If we did a statistical analysis of the data, there might well be a near perfect correlation between crime and having eaten at least one pickle. Does it make sense to infer that pickles cause crime? Perhaps it might better be said that we have asked the wrong question ... it is not uncommon that the questions we wish our data to answer have been posed improperly. Perhaps if we did a study that included non-criminals, we would find that virtually all of them had eaten at least one pickle, as well. This would make it pretty clear that pickles are unlikely to be the source of criminal behavior (or we have a large number of unrecognized criminals?). The data and their statistical analysis may have been all "by the book," but the problem was ill-posed. If an association can be shown, then it might be a clue to causality, but there must be a plausible causal connection before it is even worth pursuing the issue in detail. Is there a plausible reason that explains why eating a pickle would lead to a life of lawlessness? Fun aside, even good scientists who should know better have fallen victim to this trap!
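The ill-posed pickle study can be sketched in a few lines. The data here are entirely made up, of course; the point is only that sampling criminals alone produces a "perfect association" that evaporates the moment non-criminals are included.

```python
# Made-up data for the pickle/crime example. Sampling only criminals makes
# pickle-eating look perfectly associated with crime; including non-criminals
# shows the association has no discriminating power at all.

def fraction_pickle_eaters(people):
    """Fraction of a group who have eaten at least one pickle."""
    return sum(1 for p in people if p["ate_pickle"]) / len(people)

criminals = [{"ate_pickle": True, "criminal": True} for _ in range(1000)]
non_criminals = [{"ate_pickle": True, "criminal": False} for _ in range(9000)]

# Ill-posed study: only criminals are sampled -> "perfect" association.
rate_in_criminals = fraction_pickle_eaters(criminals)

# Better-posed study: non-criminals eat pickles at the same rate, so the
# variable tells us nothing about criminality.
rate_in_non_criminals = fraction_pickle_eaters(non_criminals)
```

Both rates come out identical, so "having eaten a pickle" cannot distinguish criminals from anyone else, however impressive the correlation looked in the one-sided sample.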
14. "Obstacle" flow around storms
A recurring theme associated with severe storms is the notion that the storm (whatever might be meant by "the storm") acts as an obstacle to the flow, in mid-troposphere. The available observations certainly look as though the storm is an obstacle, and there have been studies making extensive use of these observations to make statements about the vorticity source for the counterrotating vortices seen on the flanks of the updraft. This is an interesting analogy, but it is important to understand that the appearance of the flow does not necessarily mean that the flow dynamics are identical to those associated with solid obstacles embedded in a fluid flow.
The short version of the problem with the obstacle flow analogy is the following. When there really is a solid obstacle in the flow, vorticity is generated in the viscous boundary layer associated with the solid obstacle. This vorticity is shed into the wake of the flow and is the source of the vorticity in the counterrotating vortices. Thus, even if the ambient flow is completely uniform, containing no ambient vorticity whatsoever, the basic elements of obstacle flow will generate these vortices. Severe thunderstorms are associated with environmental flows having considerable vertical shear and, therefore, considerable vorticity about a horizontal axis. It is generally accepted now that the counterrotating vortices associated with severe thunderstorms arise from tilting of this substantial ambient vorticity. Thus, the similarity in appearance is only coincidental to the major differences in the dynamics of the vortices associated with the interaction between the updraft and its environment.
The long version can be found here, in published comments by R. Davies-Jones, me, and H.E. Brooks on a paper by R. Brown in the Journal of the Atmospheric Sciences.
15. Cold advection and "cap" elimination
It is a tired old cliché that when there is a "cap" that is restraining convective initiation, lower mid-tropospheric (i.e., around 700 mb) cold advection will help remove that cap. There are several problems with this argument. First of all, cold advection at some level is associated with subsidence at that level, in virtually all cases. This can be seen in my discussion of (ugh!) "overrunning" .. see this figure. Subsidence is not a very good way to remove a cap (see below). Second, it often happens on the Great Plains that the reason for the cap is the advection of an elevated mixed layer (EML) from the higher terrain to the west. This superposes high lapse rates over low-level moisture all right, but it also can produce a strong "capping" inversion that prevents release of the convective instability in such a sounding. In these situations, then, the morning 700 mb map often has a thermal ridge somewhere on the High Plains ... if this were further east, it is possible that the advection of this feature would push the thermal ridge further eastward. On the Great Plains, since the thermal ridge at 700 mb is not all that far above the terrain, a significant contribution to the temperature at 700 mb is the solar (diabatic) heating. With the solar heating during the day, it is not uncommon in these situations to find that the thermal ridge has not moved very far at all by evening, and may even have backed up ... the temperature changes that the morning pattern of advection implied have not happened.
Note that on a pressure surface, isotherms are also isentropes ... that is, on a p-surface, the temperature contours are also potential temperature contours. Consider, then, the local change in potential temperature, θ, which is given by the following:

\[ \frac{\partial \theta}{\partial t} \;=\; -\mathbf{V}\cdot\nabla_{p}\,\theta \;-\; \omega\,\frac{\partial \theta}{\partial p} \;+\; \dot{\theta}, \]

where \(\dot{\theta}\) is the diabatic contribution.
That is, local changes in (potential) temperature on a pressure surface result from (1) horizontal advection, (2) vertical advection, and (3) diabatic processes. Since cold advection results in subsidence, in a stratified atmosphere (potential temperature increases with height), the associated subsidence actually counteracts the effect of the thermal advection on the pressure surface. On the High Plains, the diabatic warming effects are often large enough to counteract the cold advection entirely.
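The bookkeeping above can be illustrated with a back-of-envelope calculation. The three numbers below are hypothetical magnitudes I've chosen for illustration, not measurements from any case; they simply show how the subsidence and diabatic terms can cancel, or even reverse, the cooling the morning advection pattern seems to promise.

```python
# Back-of-envelope sketch, with made-up but plausible magnitudes, of the
# three terms in  d(theta)/dt = -V.grad(theta) - omega*d(theta)/dp + diabatic
# at 700 mb over the High Plains. Units: K per hour. All values hypothetical.

cold_advection   = -1.0   # -V.grad(theta): cold advection cools the point
subsidence_term  = +0.4   # -omega*d(theta)/dp: sinking air warms on a p-surface
diabatic_heating = +0.8   # solar heating over the elevated terrain below

local_theta_change = cold_advection + subsidence_term + diabatic_heating
# Net change is positive: the 700 mb thermal ridge fails to move (or even
# backs up) despite the cold advection implied by the morning map.
```

With these (hypothetical) magnitudes, the point actually warms at 0.2 K per hour even though the advection alone implies cooling, which is exactly the Great Plains scenario described above.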
I don't want to make a categorical statement, but I can't think of a single case where one could attribute unambiguously the initiation of severe convection on the Great Plains to cap removal through cold advection at 700 mb. If such events can be found, I believe them to be atypical; by and large, this idea is a cliché that is not consistent with the facts.
16. The "convective temperature"
This is a widely-used term; it stands for a surface temperature that corresponds to the elimination of a "cap" (i.e., the removal of all of any negative area associated with ascending low-level parcels) by insolation (see A.11, above). Presumably, it is believed by users of this term that deep convective initiation will be delayed until the "convective temperature" is reached, after which deep convection will begin. If this were a valid concept, then deep convection should begin by clouds flashing into existence over big chunks of real estate, all at the same time. What we really see is that deep convection usually commences as isolated convective clouds, perhaps at a few places along a line, usually well before the attainment of the "convective temperature". Sometimes, however, the "convective temperature" is reached and nothing happens. The implicit model associated with this term is that deep convection is initiated solely by elimination of the negative area through solar heating. Since the reality is quite different from the scenario developed from this implicit model, consider why this is so.
a. There typically is considerable horizontal inhomogeneity in the real world. The "convective temperature" derived from a sounding can only be interpreted legitimately as representing the unique place and time when the sounding was taken ... it may or may not represent a wider space/time volume than that. Hence, the "convective temperature" cannot be taken literally unless (a) the convection actually begins at the sounding site, and (b) the real evolution of the sounding corresponds to that described by the model of how that sounding would evolve under the influence of solar heating. If a sounding evolves differently than how this model predicts it will under the sole influence of solar heating, some other process is happening for which this simple model fails to account. Thus, it is also easy to understand how the "convective temperature" could be reached with no convection ensuing.
b. In virtually all cases, deep convection is initiated by processes that lift parcels first to their LCLs and then to their LFCs. In the real world, deep convection is not initiated by solar heating alone, although that heating certainly reduces the negative area that must be overcome by some lifting process. Some subsynoptic-scale process (e.g., ascent associated with a front) is that which typically raises parcels to their LFCs, initiating deep convection. The spatial and temporal variability of the processes that ultimately result in deep convective initiation means that the first deep convective clouds will appear at some point (or a limited number of points) in time and space, rather than in some region that has attained its "convective temperature".
The concept of the "convective temperature" is one that contributes nothing of value in forecasting, owing to its failures as described above, and it perpetuates an improper understanding of deep convective initiation. Thus, I recommend that this term be rejected and used no more.
17. The gradient wind
This one might well raise some hackles ... my objection to this one is inherited from one of my career's biggest influences, Prof. Walter J. Saucier. For those of you who recall your elementary dynamics, the so-called gradient wind "balance" is similar to the geostrophic wind balance, except it considers forces associated with the curvature of the flow; that is, those forces precisely normal to the wind flow. Textbook treatments of the subject talk about it in some detail, often including an elaborate discussion of the four solutions associated with positive and negative radii of curvature combined with both real and imaginary roots to the quadratic equation that describes gradient flow.
First of all, the gradient wind defined in this way cannot be, strictly speaking, a balanced wind at all! Since the flow has curvature, it is accelerated, by definition. Accelerated flows are most definitely not in balance, right? But Walt's objections don't stop there. Walt felt, and I agree, that making the assumption of zero tangential acceleration while allowing for normal accelerations makes the elaborate exercise of developing four specific solutions to the formula for "gradient wind balance" seem rather contrived. We make a great fuss over this mostly mathematical exercise, for no truly obvious value other than to show off some undergraduate mathematics. I'm willing to keep this one in the lexicon, but it's not something that offers me much; the notion that it represents a "balanced" flow is clearly incorrect! To the extent that it's observed that flow around troughs is typically subgeostrophic and that around the margins of ridges can be supergeostrophic, the idea of gradient flow offers a lowest-order explanation for that observation. Beyond that, it's mostly a mathematical curiosity, as I see it.
18. 2000 thunderstorms at any given time
This has got to be one of the most frequently-quoted numbers in the weather business. Even I've used it. But upon what is it based? Whose study underpins this? It turns out that the seminal paper that "backs up" this estimate is by C.E.P. Brooks in 1925: The distribution of thunderstorms over the globe. Geophys. Memoirs, No. 24 (4th No. of Vol. III), 147-164. The paper says "... there will be in progress at any one moment about 1,800 thunderstorms in different parts of the world." (at the end of section 4). This study proceeded by considering the reporting of thunder at surface observation stations (at and prior to 1925, obviously), making a number of assumptions about unobserved events over the oceans, in sparsely populated regions, etc., to arrive at this final number. Since this paper, everyone just quotes the figure, without ever being concerned for its validity in the modern era.
Given the information available in 1925, this might well be a reasonable figure, but in view of today's data and a modern understanding of thunderstorms, this hackneyed phrase has lost its meaning. What constitutes "a thunderstorm"? Is it a cell, a convective system, a radar echo, a satellite signature, or what? Using today's data (radar, satellite images, lightning detection networks, etc.) how would we verify this number? A squall line (or a convective system, or a radar echo, or a satellite signature, or whatever) can produce thunder observations at a number of surface stations during a single day. Is the squall line one thunderstorm, or many? Almost certainly, no one would consider it to be a single thunderstorm, but how many? Where would we have to draw the boundaries of a "thunderstorm" in order to validate this number of 2000 (or 1800) worldwide, on average, at any given moment? I know of no way to do this.
I'd like to advocate that we stop using this silly, superannuated number, and defer inserting any such estimates in textbooks, etc., until we can arrive at a number that makes some sense in today's world, validated with modern datasets.
19. "Top ten" lists of meteorological events
For some reason (too much time spent watching David Letterman?), as the century comes to a close, there seems to be an obsession with creating "top ten" lists of storms and other meteorological events. This has generated a lot of controversy over what events to include in such lists. Basically, I consider all of this to be no more than entropy generation, without any associated value, except perhaps to remind people of important past weather events. I offer the following thoughts:
20. The "tropical connection"
From time to time, in the infrared (sometimes, the so-called "water vapor" band) satellite imagery, a long cloud band can be seen extending from the tropics well into midlatitudes. This is often referred to as the "tropical connection," or "tropical moisture," or some similar jargon. The implication is that the cloud band represents a flow of highly moist tropical air into some region in question. However, the clouds that compose the band visible in the satellite imagery are present in the middle and upper troposphere, where the actual mixing ratio of the air is pretty negligible. The amount of moisture they contribute is essentially trivial. These cloud bands may be associated with a comparable flow of low-level moisture, or they may not. The mere presence of the band does not imply any meaningful contribution to the moisture content of the air that is potentially involved in deep, moist convection. It's bad enough when media broadcasters use such sloppy terminology, but occasionally even meteorologists do so.
21. The clash of air masses
In the context of the development of severe storms, the phrase "clash of air masses" (typically between warm and moist vs. cold and dry) is often invoked to "explain" the occurrence of severe thunderstorms and tornadoes. This is an idiotic oversimplification. Air masses "clash" all the time, even when severe thunderstorms are only the remotest of possibilities. If this were actually an "explanation" of the occurrence of severe thunderstorms and tornadoes, we ought to have such violent weather almost continuously, and only along frontal (and/or dryline) boundaries. This balderdash is often spouted by the media in their misguided efforts to "simplify" the explanations of events to the public.
22. Improper assessment of vorticity advection
(a) It's regrettably common to attempt to assess vorticity advection by counting the number of units of vorticity traversed by a moving air parcel. This is incorrect. In meteorology, advection of any quantity, say Q, is given by:

\[ \mathrm{Adv}(Q) \;=\; -\mathbf{V}\cdot\nabla Q, \]

where V is the vector wind. Clearly, the amount of Q-advection depends on the magnitude of the wind component parallel to the Q-gradient vector, and the magnitude of the Q-gradient vector itself. Advection is an instantaneous quantity; that is, it's calculated at one instant of time, at some specific point in space, according to the formula above. Obviously, advection can vary with both space and time, and can be positive, zero, or negative.
If changes in Q are found by following an air parcel, what is being calculated is:

\[ \frac{dQ}{dt} \;=\; \frac{\partial Q}{\partial t} \;+\; \mathbf{V}\cdot\nabla Q . \]

On the left-hand side of this is the total (or substantial) derivative associated with the moving parcel. On the right-hand side are two terms: the first is the partial derivative of Q with respect to time (measured at a point), and the second is obviously the advection (multiplied by a minus sign) also calculated at a point. Therefore, at some point in space, at an instant of time, the total derivative is equal to the sum of two contributions: the change associated with whatever processes are changing Q at that particular point, and the change associated with the wind bringing in different values of Q to that point. Hence, changes in vorticity seen by following a parcel are not just due to advection, and they will be of the wrong sign to determine the advection properly! To see this latter point, suppose the air flow is from high values of Q toward low values. Following a parcel, the surrounding Q-values go from high to low, implying Q is decreasing following the parcel. However, the actual advection is such that large values of Q will be replacing low values, so the local change in Q that is attributable to advection would be such that Q is increasing, wherever the wind blows from high values of Q toward lower values.
Advection is associated with those changes at a point that are due only to the winds carrying in new values from somewhere else. This implicitly presumes that Q is a passive scalar ... that is, Q is conserved. This might not be the case for any particular variable Q, of course, but the sense of advection is determined as if the variable (e.g., Q) is conserved; that is, that

\[ \frac{dQ}{dt} \;=\; 0 . \]

Under this assumption, then,

\[ \frac{\partial Q}{\partial t} \;=\; -\mathbf{V}\cdot\nabla Q . \]
In meteorology, advection is simply the local change in Q (i.e., at a point) that would be seen as a result of the wind carrying new Q-values to the point in question. If the field is uniform, of course, the gradient vanishes and the advection also is zero. If the wind vanishes, or is not crossing isopleths of Q, but is rather parallel to those isopleths, the advection is also zero. For variables that are not conserved, of course, non-conservation means that the local changes would not be due entirely to advection, but would also have to account for changes following the air parcels.
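The sign convention is where people stumble, so here is a minimal one-dimensional sketch. The gradient, wind, and grid spacing are arbitrary illustrative numbers, but the arithmetic shows the point made above: wind blowing from high Q toward low Q yields positive advection, even though Q decreases along the parcel's path.

```python
# Minimal 1-D sketch of the advection sign convention: advection = -u * dQ/dx.
# All numbers are illustrative. Wind blowing from high Q toward low Q gives
# POSITIVE advection (larger values are being carried in), even though Q
# decreases along the parcel's path.

def advection_1d(u, q_upstream, q_downstream, dx):
    """Instantaneous advection -u * dQ/dx, estimated from two nearby points."""
    dq_dx = (q_downstream - q_upstream) / dx
    return -u * dq_dx

# Q decreases toward the east (downstream); the wind blows eastward at 10 m/s.
u = 10.0   # m/s, westerly
adv = advection_1d(u, q_upstream=30.0, q_downstream=20.0, dx=100_000.0)
# adv > 0: the wind imports larger Q-values, so Q at the point tends to rise,
# opposite in sign to the change a parcel-follower would report.
```

A parcel moving with this flow sees Q fall from 30 toward 20, yet the advection at the point is positive; counting "units traversed" by the parcel gets the sign of the advection backwards.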
(b) Another quite common mistake in assessing vorticity advection is to look at the instantaneous point values over time. That is, if the vorticity at 500 mb over Chicago goes from 22 units (10-5 s-1 for typical model forecast charts of absolute vorticity) to 28 units, that is taken to mean that the vorticity advection is positive. Wrong!! This is a measurement not of the vorticity advection but of the partial derivative of vorticity with respect to time (see above). Only if the vorticity is strictly conserved will the local time change in vorticity be equal to the advection ... moreover, assessing the vorticity in this way (by finding values at a specific point in space, separated by finite time) is not an instantaneous measurement but rather is an evaluation over some span of time, during which the advection may have changed considerably, even to the point of having changed signs. Remember that advection is properly determined only by the first formula given above, so that comparing vorticity values at different times represents the net change over that finite time period, not the instantaneous advection of vorticity.
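A few lines of arithmetic make the Chicago example concrete. The 12 h interval and the non-conservative "source" term below are hypothetical numbers of my own choosing; they show that the same observed 22-to-28 change is consistent with advection of either sign.

```python
# The Chicago example: 500 mb vorticity at a fixed point goes from 22 to 28
# units over a hypothetical 12 h. That finite difference estimates the LOCAL
# tendency, not the advection. With a made-up non-conservative source term
# (e.g., stretching), the true advection can even be negative while the
# point value rises.

v_start, v_end, hours = 22.0, 28.0, 12.0
local_tendency = (v_end - v_start) / hours   # +0.5 units/hour at the point

source_term = +1.0                           # hypothetical stretching (units/h)
advection = local_tendency - source_term     # -0.5 units/h: negative!
```

The rising point value "looks like" positive vorticity advection, but with a modest source term the instantaneous advection is actually negative, which is exactly why differencing point values over time cannot be used to assess advection.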
My favorite source for grammar and usage is the very readable book: The Elements of Style by W. Strunk, Jr. and E.B. White (ISBN 0-02-418200-1) published in paperback by MacMillan. I also have discovered a delightful little booklet entitled: The Science Editor's Soapbox by Werner J. Lipton [Copies can be obtained by writing him at: Science Soapbox, P.O. Box 16103, Fresno, CA 93755-6103]. Finally, there is a nice reference book called Handbook of Technical Writing by C.T. Brusaw, G.J. Alred, and W.E. Oliu [published by St. Martin's Press and available from Amazon.Com]
Of course, I reserve the right to dispute these sources, as well. I recognize that all languages evolve, but if we permit formally incorrect usage to proliferate, the language can become babble. Good grammar is needed to avoid confusion of meaning, and in English, word order is an important conveyor of meaning, since it is not generally an inflected language (word forms changing to reflect their use in speech), as are such languages as Spanish, German, and Russian.
1. Due to
This expression often is used incorrectly in place of because of or owing to. The litmus test is to replace the phrase with "attributable to" and see if it parses. For example, consider the sentence: "He missed the ball due to an incorrect swing." "He missed the ball attributable to an incorrect swing." does not parse properly. However, the sentence "His miss of the ball was due to an incorrect swing." does parse properly when written "His miss of the ball was attributable to an incorrect swing." The expression due to is proper in an adjectival phrase and incorrect in an adverbial phrase. Of late, I have seen many examples where writers have substituted owing to when due to would have been appropriate ... not all uses of due to are incorrect!
2. Split infinitives
An infinitive ("to go") is said to be split if an adverb is inserted between the "to" and the verb ("to boldly go"). This usage has become increasingly common, but I do not like it very much. It always is possible to reconstruct the infinitive without splitting it (e.g., "to go boldly" or "boldly to go"). See Strunk and White (p. 58). This has become such a lengthy item that I am putting it on its own Web page, here.
3. Split compound verbs
A compound verb is one that involves more than one word; in English, this often (but not always) involves adding a word to indicate a change in verb tense. Thus, "have been" is a compound form of "be" so that if it is said "have always been" then the compound form has been split. This is similar to a split infinitive, and I dislike it for basically the same reason. It nearly always is possible to reconstruct the phrase to avoid such constructions (e.g., "always have been").
4. Common Misspellings
Many common misspellings are the result of not knowing the meaning of words that sound similar but have different meanings. Try a dictionary if there is any doubt.
Many of these spelling errors result from the fact that some words that are spelled differently and have different meanings (up to and including being different parts of speech!) can sound very much alike, perhaps even the same. Such words are called either "homophones" or "homonyms".
5. while
The word "while" is a conjunction denoting "as long as" or "at the same time that" ... it frequently is used in place of "whereas" or "although" to indicate "inasmuch as" or "even though". For example, sentences like "While vorticity is a kinematic quantity, it is not a fundamental dynamic variable." should begin with "Whereas" or "Although".
6. that vs. which
One of Strunk and White's pet peeves, also (p. 59). As they point out, "that" is the restrictive or defining pronoun, whereas "which" is the nonrestrictive or nondefining pronoun. They recommend going "which-hunting" and replacing the "whiches" with "that" when they are restrictive or defining.
7. data, criteria, media, maxima, and phenomena
I'm astounded at how many writers are unaware that "data" is the plural form of the singular noun "datum" so that these writers inadvertently create a subject-verb disagreement when they say "...data is ..." A similar problem occurs frequently for "criteria" (the plural form of "criterion"), and "phenomena" (the plural form of "phenomenon"). "Maxima/minima" are similarly abused (as plurals of "maximum/minimum"). Particularly common is the use of "media" as a singular noun ... e.g., "TV is a stupid media." [true, but grammatically incorrect!] These plural forms are residual evidence of the connection between English and other Indo-European languages, most of which are inflected (as in Spanish or Russian). English retains only a few such "fossils" [including "who/whom"], and they are abused frequently because English speakers are losing the ability to detect such inflections and use them properly.
8. comprised of ...
To "comprise" means to "include" or to "be composed of" ... therefore, it's incorrect to say that something is "comprised of" a set of things. We should not say "Meteorology is comprised of elements from many different sciences." Rather, this should be either "Meteorology comprises elements from many different sciences." or "Meteorology is composed of elements from many different sciences."
9. a number of ...
Many times, writers inadvertently create subject-verb disagreements when they say things like "A large number of investigators are working ... ." The problem is that the subject has become number (a singular noun), not investigators (a plural noun), so the verb should become "is working" to reflect this change, but many writers fail to notice the conflict they have created. Most of the time, this problem is easily circumvented by using "Many investigators are working ... ." instead of using the additional phrase "A large number of ..." This problem arises with other phrases involving the word "of" (e.g., "the majority of") and writers should be aware of this tendency. It's easily avoided.
10. 2:00 a.m. in the morning
This is one I hear a lot when people are speaking, but even occasionally crops up in writings (e.g., forecast discussions) ... "The storm came through at 2 a.m. in the morning." As opposed to 2 a.m. in the evening? Perhaps the intent is to provide additional emphasis, but it simply is redundant.
11. Nouns becoming verbs
Apparently, like split infinitives, this tendency is unstoppable. Nevertheless, I am so tired of hearing words like "impact" and "service" being used as verbs. The most egregious of late is "office" used as a verb in the advert for a company I will not name. Perhaps this habit will become known as "verbing" ... ugh!
12. irregardless
Sorry, this one is not even an acceptable word. Use either "Regardless" or "Irrespective".
13. decimate
I hear this word being used all the time as a synonym for "devastate" or "destroy". According to my American Heritage Dictionary, usage of "decimate" has been extended from its Latin original meaning (to destroy/kill one-tenth of something or group of things) to include destruction (or murder) of a large part of something (or a group of people). The Romans created the term to refer to how they routinely dealt with groups who resisted their rule. As a way to punish that resistance, one-tenth of the group was chosen by lot and killed. I suppose I have to accept the creeping misuse of words (see the preceding item), but it doesn't mean I have to like it.
14. an history
There is a tendency, in seeking to be erudite, to use "an" as the article preceding words beginning with the letter "h". So far as I can tell, this is an artifact of the British tendency not to pronounce the "h" at the beginning of words. That is, in English as spoken in the U.K. (and many of its former colonies, but not the United States), it would sound awkward to say "A (_)istory of the United Kingdom ..." for the same reason that it sounds awkward to say "A easy method ... ." However, given that English as spoken in the U.S. does pronounce the "h" at the beginning of words, it's not really all that awkward to say "A history of the United States ... ." I try not to be particularly chauvinistic, but in writing for American publications, I think it makes sense to write English as we speak it. If I were writing for a journal in the U.K., I'd have to change; this is an application of the "When in Rome, ... " principle and, as such, is nothing new. I already have to change my spelling for U.K. (and many of its former colonies) journals: "rigor" becomes "rigour", "center" becomes "centre", etc. Why not make this adjustment to the article, as well?
15. communication skills
I've collected a number of my most common gripes and tips about scientific communication, including scientific papers, here.
16. sloppy use of comparative adjectives
A common problem in meteorological writing is the use of comparative adjectives when either (a) no comparison is actually being made, or (b) it's ambiguous to what something in the text is being compared. For an example of (a), the phrase "smaller scale features" might appear in the text. What is almost certainly intended is something like "subsynoptic scale features", or "mesoscale features", or whatever. No comparison is actually being drawn, so the use of a comparative adjective is incorrect. With regard to (b), it may be clear to the author what the comparison is, but the reader may not know (either because the context doesn't make it clear, or the comparison is being made with something far removed [in the text] from the comparative adjective). Thus, for example, the adjective "better" is often used in reference to some technique, when it is not obvious to what the technique is being compared. Authors should seek to make it clear what the comparison is, when using a comparative adjective.
17. lack of commas
When I review scientific manuscripts, it's really disturbing to me to see how many authors are very parsimonious with their use of commas to set off phrases within the text. The comma implies a pause and makes reading the text (especially out loud, but not exclusively so) much easier. I don't know, nor can I even imagine, the origins for this trend.
18. One space between sentences
In reviewing manuscripts, I often see now that sentences are being separated by single spaces. I was taught that two spaces are used between sentences. Using only one space makes reading more difficult, in my opinion. Again, I don't know the reasons for this trend, but I hear vague allusions to the use of variable spacing in word processing software. Whatever the reason, I don't like it.