Modeling Ocean Circulation

Albert J. Semtner*

Abstract:
Ocean numerical models have become quite realistic over the past several years as a result of improved methods, faster computers, and global data sets. Models now treat basin-scale to global domains while retaining the fine spatial scales that are important for modeling the transport of heat, salt, and other properties over vast distances. Simulations are reproducing observed satellite results on the energetics of strong currents and are properly showing diverse aspects of thermodynamic and dynamic ocean responses ranging from deep-water production to El Niño. Now models can represent not only currents but also the consequences for climate, biology, and geochemistry over time spans of months to decades. However, much remains to be understood from models about ocean circulation on longer time scales, including the evolution of the dominant water masses, the predictability of climate, and the ocean's influence on global change.


The ocean is now well known to play a dominant role in the climate system because it can initiate and amplify climate change on many different time scales. The best known examples are the interannual variability of El Niño (1) and the potential modification of the major patterns of oceanic heat transport as a result of increasing greenhouse gases (2). Yet the ocean has been very much undermeasured for most of the history of ocean science. Even though systematic observations began in the 1880s with pioneering work by Nansen and others (3), the seagoing and theoretical efforts were mainly oriented toward describing the large-scale circulation (4), which was often regarded as steady for lack of more detailed information. It was not until the 1960s, when long-distance tracking of drifting floats at mid-depth showed currents to be highly variable on quite small spatial scales (5), that oceanographers became aware of the immensity of their task.

In dealing with an undersampled natural fluid system, it is extremely helpful to invoke the simulation capability of supercomputers to improve understanding of basic processes and their interconnectedness, as well as to help interpret sparse observations. This article describes the development of ocean modeling as it has been influenced by improved methods, more powerful computers, and data from the latest measuring systems. Recent successes that pave the way to better physical understanding and more reliable ocean prediction are emphasized, although issues of running coupled atmosphere-ocean models are not discussed. Some remaining problems are highlighted briefly.

The Evolution of Models

The ocean is a turbulent fluid that is driven mainly by the mechanical forcing of the winds and the net effect on density of surface exchanges of heat and moisture. It responds according to physical laws for the conservation of mass, momentum, energy, and other properties. The resulting three-dimensional (3D) flows are strongly influenced by Earth's rotation and are regulated by internal mixing and boundary friction. The currents are particularly narrow and strong at the western side of ocean basins and in a belt around Antarctica. Many ocean properties such as temperature, salinity, dissolved oxygen, and biogeochemical tracers have local maxima in the cores of strong currents. Consequently, a proper representation of transports of these properties involves not only the means of individual variables but also relatively small-scale correlations with velocity. Furthermore, significant changes of the currents and property transports occur in response to modified surface forcing, and the correct simulation of these responses is crucial to solving a variety of problems ranging from understanding the history of the Earth system to predicting future climates. These are the challenges for ocean modeling.

An early paradigm. The first significant modeling effort extended over the decade of the 1960s at the Geophysical Fluid Dynamics Laboratory (GFDL) (now located at Princeton University and operated by the National Oceanic and Atmospheric Administration). Bryan built an ocean model that was to be used with atmospheric models to study climate. In rapid succession, he and co-worker Cox developed a 2D model, a 3D box model, and then a full-circulation model with variable density for the world ocean, with its complex coastline and bottom topography (6).

The models used space-centered explicit finite differencing, and the last version used a variable number of vertical levels to represent the complex geometry. All of the models used an assumption known as the "rigid lid" approximation to eliminate high-speed external gravity waves and allow a longer time step: the upper surface was held fixed but could support pressure changes related to waves of lower speed and currents of interest. As a result, ocean tides and other waves traveling at the speed of tsunamis were filtered out. Use of the rigid lid required solving an ancillary 2D problem at every time step to obtain the vertically averaged component of the currents. Bryan's 1969 model used partial differential equations, formulated in rotating spherical coordinates, for horizontal velocity, temperature, and salinity, plus hydrostatic balance, incompressibility, and an empirical equation for density. Processes occurring on scales smaller than the grid size were represented by horizontal and vertical viscosity and diffusion, using four different coefficients.
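
In modern continuous notation, the equation set just described can be sketched as follows (a schematic summary consistent with the text above, not Bryan's exact discretized system):

    \[ \frac{D\mathbf{u}_h}{Dt} + f\,\hat{\mathbf{z}}\times\mathbf{u}_h = -\frac{1}{\rho_0}\nabla_h p + A_h\nabla_h^2\mathbf{u}_h + A_v\,\partial_{zz}\mathbf{u}_h \]
    \[ \partial_z p = -\rho g, \qquad \nabla\cdot\mathbf{u} = 0, \qquad \frac{D(T,S)}{Dt} = K_h\nabla_h^2(T,S) + K_v\,\partial_{zz}(T,S), \qquad \rho = \rho(T,S,p) \]

Here \(\mathbf{u}_h\) is the horizontal velocity, \(f\) the Coriolis parameter, and \(A_h\), \(A_v\), \(K_h\), and \(K_v\) are the four viscosity and diffusion coefficients mentioned above.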

The first application with realistic global geometry was done in the early 1970s (7). Cox designed a 2° latitude-longitude grid with up to 12 vertical levels at each point. The model was run for hundreds of hours on the fastest available computer but had reached only a few simulated years after many months. Because the model had been initialized with smoothly gridded fields of the observed temperature and salinity, it successfully diagnosed currents that were consistent with those fields, but then it began to evolve toward a new state. Realizing the impossibility of reaching thermodynamic equilibrium at this grid resolution, Bryan, Cox, and others turned the model formulation toward coarser grid (5°) global studies and toward regional studies. However, currents in the global studies were far too broad and sluggish, and regional studies necessarily omitted pathways to the rest of the global ocean. Nevertheless, because of the generality of this "GFDL model," it was sought out and became a standard tool for ocean simulations at many other institutions.

Retooling for computational limitations. In the mid-1970s, there was growing awareness that most ocean currents have cross-stream dimensions of a fraction of a degree, that is, less than 100 km. Most oceanic high-frequency motions are organized into slowly varying currents on a length scale equal to the speed of internal gravity waves divided by the local Coriolis frequency. This length scale, called the internal radius of deformation, varies from about 100 km in the tropics to 10 km or less at very high latitudes. The currents then meander, and some produce closed or cutoff circulations on scales of tens of kilometers. These features, called mesoscale eddies, are the oceanic analogs of atmospheric storms. They are known from instability theory to form at a scale near the radius of deformation, but as they develop, they scatter and mix momentum and properties in ways that elude theoretical description. Subsurface drifters show these disturbances to be common throughout the world ocean; in fact, they account for the bulk of the kinetic energy in the ocean (8).
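
These numbers can be checked with a small calculation (a sketch assuming typical internal gravity-wave speeds; the values are illustrative, not drawn from any particular model):

    import math

    OMEGA = 7.292e-5  # Earth's rotation rate (rad/s)

    def deformation_radius_km(lat_deg, c_ms):
        """Internal radius of deformation L = c / f, with f = 2*Omega*sin(lat)."""
        f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
        return c_ms / f / 1000.0

    # Assumed internal wave speeds: ~2.8 m/s in the tropics, ~1.5 m/s at 60 degrees.
    print(deformation_radius_km(10.0, 2.8))   # ~110 km in the tropics
    print(deformation_radius_km(60.0, 1.5))   # ~12 km at high latitude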

If strong currents and eddies are to be treated adequately in numerical models, then the horizontal grid size has to be small enough to represent or resolve them, meaning that grid spacing should be approximately 20 km in middle latitudes. This could only be achieved effectively in the mid-1970s by shrinking the domain sizes to be much smaller than almost all ocean basins, reducing the number of levels, and further filtering the equations in advance to remove internal gravity waves and allow only the major currents and low-frequency eddies. The resulting two- and three-layer quasi-geostrophic models of Holland, McWilliams, and others became workhorses to understand how eddies affect the circulation in box domains (9) and in channels (10). These models were adiabatic ones with fixed-density layers of varying thickness and were driven only by specified winds. Usually they had a scale-selective viscosity and diffusion to reduce damping.
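
Each layer i of such a quasi-geostrophic model advances a potential vorticity field \(q_i\) derived from a streamfunction \(\psi_i\); in standard textbook form (not the specific discretizations of (9) or (10)):

    \[ \frac{\partial q_i}{\partial t} + J(\psi_i, q_i) = \mathcal{F}_i - \mathcal{D}_i, \qquad q_i = \nabla^2\psi_i + \beta y + \text{(interlayer vortex stretching)} \]

where \(J\) is the advective Jacobian, \(\beta y\) accounts for the latitudinal variation of the Coriolis parameter, \(\mathcal{F}_i\) is the wind forcing applied to the top layer, and \(\mathcal{D}_i\) is the scale-selective dissipation.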

By means of these adiabatic models, the origin of mesoscale eddies in the strong currents, the complicated mixing of properties produced by eddies, and the generation of deep time-mean currents by vertical eddy transfers came to be understood far better than would have been possible from observations alone. In addition, one adiabatic layered model by O'Brien and his students, which did retain internal gravity waves so that equatorial and coastal problems involving these waves could be treated, led to an initial understanding of El Niño in terms of those waves (11).

Toward realism with more choices. By the mid-1980s, progress had been made in simulating more aspects of real-world ocean circulation, much of it resulting from increases in computing power. Quasi-geostrophic models were applied to larger domains and run longer and for more cases, thereby giving a better idea of how physical parameters affect model dynamics. Models for the Indian Ocean and progressively larger fractions of the Pacific were constructed following O'Brien's approach and were run with the observed time-varying wind patterns. Equatorial currents related to the Indian monsoon and the Southern Oscillation were better understood as a consequence. GFDL-type models could marginally include eddies in small, idealized domains, which revealed how eddies both transport and homogenize certain properties (12). Global GFDL-type models could be run for centuries of simulated time with grids of about 3° spacing, which demonstrated that the complete structure of temperature and salinity could be crudely simulated, but with a systematic overestimate of the depth of the temperature structure (13).

During this time, a number of alternate model formulations arose, such as ones using curvilinear coordinates to follow the local coastline (14) and with different methods for treating the varying depths (15). In a particularly promising alternative, Bleck used coordinate surfaces of constant density that are able to move freely within the water column (16); this method has the potential to represent the preferred spreading of water masses along such isopycnal surfaces and to reduce excessive horizontal diffusion that can occur in the GFDL model.

In the late 1980s, simulations could finally be undertaken using the GFDL formulation with eddies marginally resolved over extensive domains and with observed winds and some atmospheric influence on density. These studies of the Southern Ocean south of latitude 25° (17), the North Atlantic (18), and the World Ocean without the Arctic (19) were hailed as groundbreaking because they were the first ones of planetary scale that were realistic enough to invite side-by-side comparison with data. The grid sizes of these models were 1/4°, 1/3°, and 1/2°, respectively, and the models were run for decades of simulated time. Output from each of the simulations was scrutinized by a number of investigators who were independent of the model development; the collective efforts became known as community modeling. Each of the three community efforts has produced more than 20 scientific papers dealing with model evaluation and interpretation of physical results. Planetary-scale studies were also conducted with adiabatic models for the Pacific and Atlantic, mainly for the purpose of developing shorter term forecast models but also with applications to wind-forced events like El Niño (20). However, nonadiabatic changes in density such as in the GFDL or isopycnal models are needed to reproduce vertical circulations and associated heat transports that govern the long-term climate.

A free surface and less smoothing. Early in the 1990s, the fraction of computer time devoted to the 2D ancillary problem associated with the rigid lid approximation was becoming excessive in cases of large domains and resolved eddies. For this reason, and to allow options to include tidal effects or height data from satellites, methods were developed to predict the height and pressure of the ocean surface directly. The goal was to retain the free surface without incurring a tremendous reduction in allowable time step due to fast-moving external gravity waves. A straightforward method was first developed to treat the free surface and the vertically averaged velocity using many small steps in time for each single step of the full 3D model (21). Another method developed at Los Alamos National Laboratory solves the same 2D equations using an implicit method for the free surface (22). Both methods are quite efficient.
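
The subcycling approach of (21) can be illustrated in one dimension (a minimal runnable sketch with assumed parameters; real models solve the full 2D system with Coriolis, topography, and forcing terms):

    import numpy as np

    # Split-explicit ("subcycled") free-surface stepping: the fast barotropic
    # system takes many short steps per long baroclinic step.
    g, H = 9.81, 4000.0                  # gravity (m/s^2), ocean depth (m)
    c_ext = np.sqrt(g * H)               # external gravity-wave speed, ~200 m/s
    c_int = 2.5                          # typical internal-wave speed (m/s)
    dx = 20.0e3                          # grid spacing (m)

    dt_clinic = 0.5 * dx / c_int                          # step for slow 3D physics
    n_sub = int(np.ceil(dt_clinic * c_ext / (0.5 * dx)))  # fast substeps needed, ~80

    eta = np.zeros(256)                  # free-surface height anomaly (m)
    ubar = np.zeros(256)                 # vertically averaged velocity (m/s)

    def subcycle(eta, ubar, dt, nsub):
        """Advance the fast 1D free-surface system with nsub forward-backward steps."""
        dts = dt / nsub
        for _ in range(nsub):
            ubar = ubar - g * dts * (eta - np.roll(eta, 1)) / dx   # momentum
            eta = eta - H * dts * (np.roll(ubar, -1) - ubar) / dx  # continuity
        return eta, ubar

    eta, ubar = subcycle(eta, ubar, dt_clinic, n_sub)
    print(n_sub, "barotropic substeps per baroclinic step")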

Investigators from Los Alamos (Smith and Dukowicz) noted that a major benefit derives from either type of free-surface approach. The bottom topography can be specified independently at each location and without regard for how severely it may vary from point to point. In contrast, the rigid lid approach is ill behaved without one or more passes of a smoothing operator over the initial depth field. The investigators at Los Alamos also did a thorough reevaluation of existing methods and made several changes to increase the allowable time step. Moreover, their methods were developed to be used on emerging massively parallel computers. Finally, they initiated a treatment of horizontal pressure forces that doubled the time step and still fit smoothly into the free-surface method. The net speedup of the GFDL formulation as a result of their efforts was a factor of 4 in typical applications.

Two recent simulations have capitalized on the new GFDL free-surface formulations applied to the global ocean (23, 24). Both use Mercator grids that progressively decrease grid size at high latitudes; both models have a maximum of 20 levels for the unsmoothed topography; and they are driven by high-frequency observed winds of 1985 to 1995 and by monthly atmospheric heating. Averaged over their full latitude ranges, the models have 1/4° and 1/6° grid spacing, based on equatorial grid sizes of 2/5° and 0.28°, respectively. Another recent free-surface simulation, this time for the North Atlantic and using Bleck's model with isopycnal coordinates, has a Mercator grid based on an equatorial grid of 1/12° (25). Thus, models are approaching the radius of deformation in grid spacing for global modeling and are surpassing it in one basin-size grid.
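
For reference, the physical spacing of a Mercator grid shrinks with the cosine of latitude; a quick calculation using the equatorial sizes quoted above:

    import math

    R_EARTH_KM = 6371.0

    def mercator_dx_km(dlon_eq_deg, lat_deg):
        # East-west cell size at a given latitude; on a Mercator grid the
        # north-south size shrinks in proportion, keeping cells nearly square.
        return math.radians(dlon_eq_deg) * R_EARTH_KM * math.cos(math.radians(lat_deg))

    for dlon in (0.4, 0.28):
        print([round(mercator_dx_km(dlon, lat), 1) for lat in (0, 30, 60)])
    # 0.4 deg: [44.5, 38.5, 22.2] km;  0.28 deg: [31.1, 27.0, 15.6] km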

Recent related modeling studies that compare simulated eddy energies with observed satellite estimates (26) and examine the grid size needed in local models of the unstable Gulf Stream (27) suggest that small grid sizes such as those above are required to reproduce accurately the instabilities and many other aspects of currents throughout the world ocean. This is supported by snapshots of the surface features in two of the latest models. The warm Gulf Stream (Fig. 1A) is seen to separate at the proper location near Cape Hatteras and to follow its known path with pronounced eddy activity; lower resolution models had failed at these tasks. The simulated salinity near the southern tip of Africa (Fig. 1B) is filled with swirls and filaments and has a pronounced pair of counterrotating features in the Agulhas Current, which connects the Indian and Atlantic Oceans; these are further consequences of high resolution. Both panels in Fig. 1 look similar to the highest quality satellite images of ocean surface features.

Including unresolved effects. There are some processes that can never be included in basin-scale and global models: breaking surface waves are an example. Sometimes, net effects can be specified in terms of known quantities, in which case the method of doing so is called a parameterization. For breaking surface waves, the mixing they produce is probably quite amenable to a parameterization, as are other vertical mixing effects related to convection and small-scale internal waves.

Fig. 1. (A) Instantaneous ocean surface temperature from an Atlantic isopycnal model with a 1/12° grid. [Courtesy of R. Bleck] (B) Instantaneous ocean surface salinity [in parts per thousand (ppt)] from a global model with an average grid spacing of 1/6°.

Whether mesoscale eddies that make up the bulk of the ocean's kinetic energy can be parameterized is much less certain. It will be difficult to quantify the small-scale correlations of important variables that provide for much of the north-south transport of properties. Most meteorologists have abandoned efforts to parameterize atmospheric storms and the effects that are described as negative viscosity; instead, atmospheric modelers simply resolve the storms explicitly. Parameterizing ocean eddies would be an attractive way to conduct long-term simulations at a relatively coarse ocean grid size of about 1°. A recently developed method of this type is being used in some simulations (28). Earlier model deficiencies in portraying the vertical extent of upper ocean structures have been remedied by this method.
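
The method cited in (28) is the Gent-McWilliams parameterization. In one common statement of the scheme (a sketch of the published formulation, not necessarily its exact implementation in any model), tracers are advected by an additional eddy-induced velocity proportional to the slope of density surfaces:

    \[ \mathbf{u}^* = -\frac{\partial}{\partial z}\left(\kappa\,\mathbf{S}\right), \qquad w^* = \nabla_h\cdot\left(\kappa\,\mathbf{S}\right), \qquad \mathbf{S} = -\frac{\nabla_h\rho}{\partial\rho/\partial z} \]

where \(\kappa\) is an eddy transfer coefficient. The net effect is to flatten sloping isopycnals adiabatically, mimicking the release of potential energy by baroclinic instability.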

Harnessing the Power of Supercomputers

Demands far beyond atmospheric ones. The computational requirements of numerical weather prediction have always pushed the power of available computers to the limit, but the potential requirements for ocean modeling are roughly a thousandfold larger. This follows from the fact that the fundamental scale of motion, the internal radius of deformation, is 10 times smaller in the ocean than in the atmosphere. Thus, resolving disturbances one-tenth the size requires 100 times as many grid points; in addition, the oceanic time scales are 10 times longer than atmospheric ones. The only solution other than turning to regional simplified models is to coarsen ocean grids until problems can be run on existing machines.

With the typical explicit time stepping of almost all ocean models, one gains a factor of 8 in computational efficiency for each doubling of the horizontal grid size: there are 4 times fewer grid points, and the time step can be twice as long. (Fully implicit methods allow longer time steps, but only at the expense of iterative solution techniques that tend to squander any potential gains.) If there are no eddies, as in coarse-grid models, additional savings can come from a technique that accelerates convergence by artificially reducing the heat capacity of the deep levels. Without this acceleration technique, not even a 5° model could have been run to the equilibrium of 1000 simulated years until the 1980s.

Vector machines. The advent of computers that rapidly perform the same operation on strings of numbers, so-called "vector computers" like the original CRAY-1A, was crucial to ocean modeling. By 1980, well-programmed ocean models were running a hundred times faster than they had a decade earlier. The memory size was beginning to be a barrier at that point. By the mid-1980s, the memory problem was ameliorated by the addition of secondary memory that had to be managed like disk storage but was much faster. Then, multiple processors were introduced, starting with the four-processor CRAY X-MP in 1986 and continuing with the eight-processor Y-MP in 1989. These were the first computers that were both fast enough and large enough in memory to allow global ocean modeling at a grid size that is approaching the radius of deformation. Ocean models account for some of the biggest applications on such machines, where they can run at speeds of a billion numerical operations per second.

Trillions of operations per second. A great hope among computational scientists is that a machine will be produced by the year 2000 that will run complex problems at a trillion operations per second. Such a teraflop (10^12 floating point operations per second) computer would almost certainly have hundreds or even thousands of processors, making it a massively parallel machine. Already, there are massively parallel machines that rival the multiprocessor vector machines in attaining a present performance level of about 10 billion operations per second: Thinking Machines' 1024-processor Connection Machine 5 and the CRAY T3D. Both the isopycnal and the GFDL models already mentioned have versions for these machines. Thus, adaptations have already been made to exploit teraflop machines for ocean simulations when those machines arrive.

Right now, a calculation at the radius of deformation, using a global grid of about 1/10° on average, would require about 3 days of the best machine's time per simulated year. Therefore, a teraflop computer could perform a 1000-year simulation to full oceanic equilibrium in a month. It is likely that such a calculation can and will be done early in the next century. When that happens, it will be the first time the turbulent ocean equilibrium, along with the associated 3D structure of its properties, is completely determined from simple physical principles. However, computer power already exists to portray the global ocean quite realistically over decades of simulated time. This means that a variety of important problems dealing with climatic, geochemical, and biological processes in the ocean are already tractable.
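
The arithmetic behind these estimates is simple (assumed rates: a present-day machine of roughly 10^10 operations per second needing 3 days per simulated year at this resolution):

    SECONDS_PER_DAY = 86400.0

    current_rate = 10e9                                      # ops/s, assumed
    ops_per_model_year = current_rate * 3 * SECONDS_PER_DAY  # ~2.6e15 operations

    teraflop_rate = 1e12
    days = 1000 * ops_per_model_year / teraflop_rate / SECONDS_PER_DAY
    print(round(days), "days for a 1000-year run")           # ~30 days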

Global Data Sets

The ocean is a fluid of global extent, and what happens in a given location depends on an extended history of previous surface effects at all locations. If numerical models are to be improved, they need to be tested by first subjecting them to a prescribed history of global surface forcing and then determining whether they exhibit other independently observed responses. If the geometry is realistic, discrepancies involving cause and effect can be isolated and explained. Thus, it is important to have complete records in space and time for the surface forcing and a collection of key confirming measurements throughout the ocean.

Forcing data. A major forcing field for all models, including adiabatic forecast models, is wind stress. Early modeling relied on ship anemometer records to build a climatological seasonal cycle of wind data. Present modeling relies on higher quality analyses of global meteorological data that are done twice daily at major weather forecasting centers. These fields give information about surface conditions over the last 15 years. Because weather analyses are made from sparse data over some mid-ocean regions, a better alternative that is slowly becoming available is satellite vector wind stress derived from radar backscatter.

Surface density changes are needed to properly drive the large overturning circulations that influence climate. This requires that surface exchanges of both heat and fresh water be known. Net heat exchange is available from crude estimates based on seasonally varying climatological data, but net water exchange resulting from evaporation minus precipitation is poorly known. Both of these fields will have to be specified from atmospheric analyses, with limited hope for improvement from satellite measurements in the near future.

In situ data. Historical observations over the past 100 years consist mainly of vertical profiles of temperature and salinity collected by oceanographic ships and maps of near-surface temperature from ships and weather satellites. They give an estimate of the long-term means and standard deviations of temperature, salinity, and density, but their quality is variable in different regions. Large international observing programs have been under way to provide much better data with which to test models. Among these are the Tropical Ocean and Global Atmosphere (TOGA) and the World Ocean Circulation Experiment (WOCE). In particular, WOCE is giving a modern description of the global ocean by extensive vertical profiling, deep moored current meters, surface drifting buoys, intermediate depth floats, and mapping from ships. TOGA provides information concentrated near the equator in the Pacific.

Surface height: A key variable. The ocean surface height is exceedingly important because its slopes yield the near-surface velocity and its time variations are measures of internal instabilities and wind-forced current fluctuations. It is observable by radar from space with little atmospheric obstruction. Surface height variations are now being measured from satellites such as TOPEX/POSEIDON and ERS-1 and 2. These are used as primary validation fields for ocean models. In addition, satellite altimetry can map the surface currents on a frequent basis, such as every 10 days for TOPEX and 35 days for the ERS satellites. The information can be used as a forcing function, relying on models to assimilate the surface height and progressively build reliable knowledge, even for features below the surface. This technique is still under development but promises to provide much better information about instantaneous ocean states. In that case, other observations could be more easily interpreted, and ocean forecast models could be properly initialized. Furthermore, a time sequence of such ocean states would constitute a useful data set of its own for understanding ocean physics.

Recent Highlights

Three high-resolution models were mentioned above: two with global grids averaging 1/4° and 1/6° and one with an Atlantic grid less than 1/12°. The models are showing rich turbulent structures that are unprecedented in previous ocean modeling (Fig. 1). Regional animations have been produced. The interested reader who has access to the World Wide Web is referred to URL address

http://vislab-www.nps.navy.mil/~braccio

for the global models and

http://everest.ee.umn.edu/~sawdey/micom.html

for the isopycnal Atlantic model. Video segments can be downloaded or viewed on-line. Viewing these is perhaps the easiest way to get an appreciation for both the phenomena and the models.

Properly simulated physics. The impressive physical results of the new simulations will be fully discussed in specialty publications. However, Figs. 2 through 6 should give a clear indication that many dynamical processes are being well simulated. Figure 2 shows a comparison of the surface-height variability (that is, the standard deviation of height from its time mean) observed by TOPEX with that simulated with the 1/6° model. Height variability is a good indicator of the level of active dynamics. The close agreement over many parts of the globe indicates that most strong currents are properly located and are exhibiting the correct amplitudes of internal instabilities. In areas of disagreement, there are explanations in terms of model deficiencies. For example, the omission of small islands in the Lesser Antilles leads to exaggerated levels of variability downstream in the Caribbean Sea. It can be anticipated from results of the 1/2° Atlantic model that some problems in separating western boundary currents are the result of a coarse grid size. An apparent thermodynamic difficulty is an insufficient vertical overturning in the model's Indian Ocean, as suggested by inadequate height variability west of Madagascar. This may be related to poorly known surface forcing in the remote South Indian Ocean.

Fig. 2. Comparison of the standard deviation of sea-surface height (A) observed by the TOPEX/POSEIDON satellite (34) [courtesy of S. Nerem] and (B) simulated in a 1/6° global model at Los Alamos [courtesy of R. Smith]. Observations are shown for a shorter time period (17 months versus 9 years) and with a greater degree of spatial smoothing than the model output.

Figure 3 compares length scales of sea-level variations (29) derived from TOPEX with those of a 1/4° simulation. These length scales are computed by forming the ratio of height variability to slope variability. Noise inherent even in precise TOPEX/POSEIDON measurements limits the satellite's ability to show the longest scales, which occur near the equator and in some relatively quiescent ocean regions. Nevertheless, the observed and simulated wavelengths of internal instabilities agree in many of the areas that had high variability in Fig. 2. When considered together, Figs. 2 and 3 confirm a modeling capacity to represent a wide variety of physical phenomena in quantitatively valid terms.
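
In code terms, the diagnostic reduces to a ratio of standard deviations (a schematic sketch; the array h is synthetic, standing in for gridded altimeter heights or model output):

    import numpy as np

    dx = 25.0e3                                  # along-track spacing (m), assumed
    rng = np.random.default_rng(1)
    # Synthetic stand-in for sea-surface-height anomalies h(time, x):
    h = np.cumsum(rng.standard_normal((100, 400)), axis=1) * 0.002
    h -= h.mean(axis=0)                          # anomaly about the time mean

    slope = np.gradient(h, dx, axis=1)           # dh/dx along track
    L = h.std() / slope.std()                    # ratio has units of length
    print(round(L / 1e3), "km")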

Fig. 3. A comparison of length scales of waves (A) observed by TOPEX altimetry and (B) simulated by a 1/4° global model. The longest waves in the model (white) include El Niño; long waves could not be discriminated in this TOPEX analysis because of measurement noise. [Courtesy of V. Zlotnicki and J. McClean].

Figure 4 shows current vectors for a depth of 500 m in the vicinity of Iceland, with strong density-driven outflow through Denmark Strait and a rim current around the deep northern basin that is probably eddy-driven from above. Both of these features are realistically portrayed and are instrumental in determining the circulation of the North Atlantic and the climate of the Northern Hemisphere.

Fig. 4. Time-mean velocity vectors at a depth of 500 m centered around the Iceland Plateau. Flow out of Denmark Strait west of Iceland reaches 0.5 m/s, in close agreement with known flow rates. Many other features in this view from the 1/6° simulation match those found in nature.

Figure 5 shows the signal of coastal waves propagating up the west coast of North America after the equatorial El Niño of 1991 to 1992 (30); the simulated waves match the observed ones that disrupted ecosystems and influenced climate along the coast during that period.

Fig. 5. The time record of two simulated coastal waves triggered by El Niño of 1991 to 1992. Height anomaly is shown as a function of time in 1992 and distance up the west coast of the Americas (from 0° to 50°N). The double-pulsed structure and the wave speeds correspond well with phenomena observed off the coast of California. [Courtesy of J. McClean]

Finally, Fig. 6 displays the time-mean surface elevations from a 1/4° simulation and from satellite altimetry (31). (Near-surface currents follow the lines of constant height, flowing clockwise around highs in the Northern Hemisphere and counterclockwise in the Southern Hemisphere.) The satellite product still has irregularities related to inadequate knowledge of the gravitational field, but the overall agreement between heights is remarkably good. This indicates that the mean surface circulation, as a consequence of winds and density forcing worldwide, is well reproduced by the model. Furthermore, almost all the seasonal changes that occur in nature can be simulated by new models.

Fig. 6. A comparison of 1993 through 1994 average sea-surface elevation from (A) a 1/4° model and from (B) TOPEX/POSEIDON altimetry. Recent improvements in wind stress and surface heating applied to the model, and in the observed gravitational field used with satellite data, have reduced the root-mean-square difference between the two elevation fields to only 16.8 cm. [Courtesy of D. Stammer and R. Tokmakian]

In addition to the phenomena shown in the figures, the new models are simulating the 3D ocean circulation quite robustly, with its global network of deep boundary currents and strong surface currents being driven by surface density contrasts and connected by polar sinking and tropical upwelling. These results are consistent with what is already known from observations, and they extrapolate our knowledge beyond that which is possible with existing measurements.

A smorgasbord of geophysical fluid phenomena. The recent numerical experiments are showing geophysical turbulence at its finest. Of some note, model eddies form in the Indian Ocean north of Madagascar and become incorporated in larger eddies that round the tip of Africa, cross the South Atlantic to the west, and head back eastward. These are examples of long-lived coherent vortices that seem to defy dissipative processes over time periods of many years. Observationalists had been aware of the existence of such eddies through the distinctive water types that they transport over long distances, and satellite altimetric measurements have helped track these Agulhas Eddies over periods of a few years. But it is only with the recent model realizations that their longevity can be better assessed and the dynamical processes that maintain them can be understood.

The strongest western boundary currents in the new models produce packets of planetary wave energy traveling eastward, as do the large wind-excited west-moving planetary waves upon reflection from submerged topography. Although the former disturbances are anticipated from the highest resolution simulations with quasi-geostrophic models, the latter disturbances have more recently been found in nature through the accurate measurements of surface height from TOPEX/POSEIDON. It appears also that WOCE current measurements along submerged ridges may be showing reflected waves superimposed on the more steady currents. Models will provide a mechanism to help interpret these observations in the future.

At low latitudes, some eddies reorganize into elongated east-west currents that are reminiscent of flows on Jupiter and Saturn. The strong variation of the Coriolis effect with latitude in the tropics causes the turbulence there to have a preferred orientation. In some southern deep basins, the Antarctic Circumpolar Current organizes into four persistent filaments maintained by eddy processes, rather than being one broad stream. The currents circling Antarctica are the closest analogs to atmospheric jet streams that can be found in the ocean: they are maintained by similar energy sources related to the latitudinal change of temperature, but their separation scale is much smaller because it is tied to the local radius of deformation. These last two situations are ocean realizations of phenomena anticipated from idealized turbulence studies (32).

Challenges for the Future

Evidently, much progress has resulted from improving the resolution and hydrodynamical formulations of models. Some comparisons are under way to determine the resolution requirements in different applications, such as climate versus forecast uses, and whether isopycnal coordinates are better than fixed vertical levels. The two methods are probably comparable if the diffusion in level-coordinate models is oriented along isopycnals or if both model grid sizes are well below the local radius of deformation. Hence, future development will probably emphasize improvements of physical components and processes in models, such as the surface mixed layer, sea ice, and outflows and mixing from water-mass formation regions. Furthermore, biogeochemical processes can be included by using methods already tested in coarser grid physical models (33).

An immediate goal of modelers is to conduct longer term simulations, including some to full thermodynamic equilibrium. Ocean models will not be fully proven to be well formulated until they can reproduce the equilibrium distributions of temperature, salinity, and other properties, as well as the modern spreading of dissolved anthropogenic gases and radioactive tracers. The likelihood that the ocean is capable of different overturning circulations if started from different initial conditions needs to be examined. The natural variability of climate needs to be determined, including the distinct possibility that the limits of climate predictability are affected by oceanic turbulence. Finally, a number of past climates and potential future climates (including those influenced by human activities) should be investigated with the aid of high-quality atmospheric models coupled to global ocean models. Most of these efforts require continued growth in computer power.


REFERENCES AND NOTES

  1. S. G. Philander, El Niño, La Niña, and the Southern Oscillation (Academic Press, San Diego, 1990).

  2. S. Manabe and R. J. Stouffer, Nature 364, 215 (1993).

  3. F. Nansen, Oceanography of the North Polar Basin, vol. 3 of Science Research (Christiania, Oslo, Norway, 1902).

  4. H. Stommel, The Gulf Stream: A Physical and Dynamical Description (Univ. of California, Berkeley, 1965).

  5. J. C. Swallow and B. V. Hamon, Deep-Sea Res. 6, 155 (1960).

  6. K. Bryan, J. Comput. Phys. 4, 347 (1969).

  7. M. D. Cox, in Numerical Models of Ocean Circulation (National Academy of Sciences, Washington, DC, 1975), pp. 107-120.

  8. H. L. Dantzler, Deep-Sea Res. 23, 783 (1976).

  9. W. R. Holland, J. Phys. Oceanogr. 8, 363 (1978).

  10. J. C. McWilliams and J. H. Chow, ibid. 11, 921 (1981).

  11. A. J. Busalacchi and J. J. O'Brien, ibid. 10, 1929 (1980).

  12. M. D. Cox, ibid. 15, 1312 (1985).

  13. J. R. Toggweiler, K. Dixon, K. Bryan, J. Geophys. Res. 94, 8217 (1989).

  14. A. F. Blumberg and G. L. Mellor, Three-Dimensional Coastal Ocean Models, no. 4 (American Geophysical Union, Washington, DC, 1987).

  15. D. B. Haidvogel, J. L. Wilkin, R. E. Young, J. Comput. Phys. 94, 151 (1991).

  16. R. Bleck and D. B. Boudra, J. Geophys. Res. 91, 7611 (1986).

  17. The FRAM Group, Eos 72, 169 (1991).

  18. F. O. Bryan, C. W. Böning, W. R. Holland, J. Phys. Oceanogr. 25, 289 (1995).

  19. A. J. Semtner and R. M. Chervin, J. Geophys. Res. 97, 5493 (1992).

  20. G. A. Jacobs et al., Nature 370, 360 (1994).

  21. P. D. Killworth, D. Stainforth, D. J. Webb, S. M. Paterson, J. Phys. Oceanogr. 21, 1333 (1991).

  22. J. K. Dukowicz and R. D. Smith, J. Geophys. Res. 99, 7991 (1994).

  23. A. J. Semtner, in Proceedings of the 1993 Snowmass Global Change Institute on the Global Carbon Cycle, T. Wigley, Ed. (Cambridge Univ. Press, Cambridge, in press).

  24. R. D. Smith, R. C. Malone, M. Maltrud, A. Semtner, in preparation.

  25. R. Bleck, S. Dean, M. O'Keefe, A. Sawdey, Parallel Comput., in press.

  26. D. Stammer and C. W. Böning, J. Phys. Oceanogr. 22, 732 (1992).

  27. W. J. Schmitz and J. D. Thompson, ibid. 23, 1001 (1993).

  28. G. Danabasoglu, J. C. McWilliams, P. R. Gent, Science 264, 1123 (1994).

  29. V. Zlotnicki, J. McClean, A. J. Semtner, paper presented at the Annual Fall Meeting of the American Geophysical Union, San Francisco, 5 December 1994.

  30. S. Ramp, J. McClean, A. J. Semtner, C. A. Collins, K. A. S. Hays, in preparation.

  31. D. Stammer, R. T. Tokmakian, A. J. Semtner, C. Wunsch, in preparation.

  32. P. B. Rhines, J. Fluid Mech. 69, 417 (1975); R. L. Panetta, J. Atmos. Sci. 50, 2073 (1993).

  33. E. Maier-Reimer, Global Biogeochem. Cycles 7, 645 (1993).

  34. R. S. Nerem, E. J. Schrama, C. J. Koblinsky, B. D. Beckley, J. Geophys. Res. 99, 24565 (1994).

  35. The most recent modeling efforts discussed here are funded by the Department of Energy's Office of Health and Environmental Research under CHAMMP (Computer Hardware, Advanced Mathematics, Model Physics) and by the National Science Foundation's Physical Oceanography Program under WOCE. The TOPEX satellite data were provided by Caltech's Jet Propulsion Laboratory and the Goddard Space Flight Center through the support of the National Aeronautics and Space Administration. The computational resources for the simulations shown in the figures were provided by the National Center for Atmospheric Research, Los Alamos National Laboratory, and the Pittsburgh Supercomputing Center.

The author is in the Department of Oceanography, Naval Postgraduate School, Monterey, CA 93943, USA.
