Dramatic Fluctuations of Devils Lake, ND

... In practice, we have a finite amount of data, and appropriate values of τ and d, as well as the form of f(·), are not known a priori. If the function f(·) is linear in its arguments, this modeling framework is similar to traditional autoregressive time series models. Forecasts from such systems tend to dissipate to the long-term mean value of the time series from whatever condition they start in, unless special efforts are made to include oscillatory or other long-memory features. As was discussed earlier, one would be hard pressed to explain the fluctuations of Devils Lake, or of other similar processes, under the traditional linear, stationary system paradigm. However, if f(·) is nonlinear, regime dynamics and oscillatory dynamics, as well as chaotic dynamics, become possible. The system may stay in a particular regime for some time and then switch to a different regime of behavior.
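Equation (1), referenced below, precedes this excerpt; given the delay τ and embedding dimension d discussed here, it presumably takes the standard delay-embedding form (a hedged reconstruction, not the authors' exact notation):

$$ x_{t+T} = f\bigl(x_t,\, x_{t-\tau},\, \ldots,\, x_{t-(d-1)\tau}\bigr) + e_t \qquad (1) $$

where T is the forecast lead time and e_t is an error process.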

Chaos, or loss of predictability, is often associated with regime transitions, as exemplified in the famous work of Lorenz (1963). Oscillations that are the outcome of positive and negative feedbacks within the system may occur within regimes or across regimes at a variety of time scales. These are the aspects of the climate system that were of interest in the preceding section. Lall et al. (1995) developed a forecasting model for the Great Salt Lake (GSL) using the above ideas, in which a nonparametric spline regression methodology was used to estimate f(·), and statistical criteria were used to choose τ, d, and the subset of lagged coordinates used in building the model.

They noted that certain regime transitions of the Great Salt Lake (e.g., the start of the 1983 rise of the lake) were not predictable even a few months in advance, but that in general one could expect relatively accurate forecasts 1-4 years into the future, even during the extreme rise and fall of the lake. Data on the Great Salt Lake had been reconstructed back to 1847 for the analysis. This allowed some of the extreme fluctuations of the 19th century, similar to the GSL fluctuations of the 1980s, to be represented in the data set available for model building. Unfortunately, the Devils Lake record could only be reconstructed back to 1905, limiting the ability to reconstruct the dynamics associated with the extreme recent fluctuations.

Consequently, in our work here we have used an extension (see Moon, 1995; Ames, 1998) of the Lall et al. (1995) algorithm that allows the dynamics of a target variable x_t to be reconstructed using time series of selected climate indicators. The general forecasting model is represented as equation (2), where y_1, y_2, ..., y_m refer to m potential auxiliary predictors (e.g., climate indicators) with associated sampling frequencies τ_1, τ_2, ..., τ_m and embedding dimensions d_1, d_2, ..., d_m, and e_t is an error process that includes components due to measurement error and due to approximation error in estimating f(·). The approximation error may result from under-specification of the true state space (useful predictors are missing) or from limitations of the numerical scheme used to fit f(·). Forecasts using expressions like those in equations (1) or (2) can be produced using an iterated or a direct approach.
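The display for equation (2) is not reproduced here; from the definitions just given, a hedged reconstruction that augments the delay embedding of x_t with the m auxiliary predictors is:

$$ x_{t+T} = f\bigl(x_t, \ldots, x_{t-(d-1)\tau},\; y_{1,t}, \ldots, y_{1,t-(d_1-1)\tau_1},\; \ldots,\; y_{m,t}, \ldots, y_{m,t-(d_m-1)\tau_m}\bigr) + e_t \qquad (2) $$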

For the 1-step iterated approach, the next value of the time series (T = 1 in the equations) is forecast. This forecast is then used as a "known" value at time step t+1, and the model is re-applied to forecast the next time step (t+2). The process is repeated T times until the desired forecast lead time is reached. Only the existing data are used to fit the 1-step-ahead forecasting function f(·).

The estimated values of x_{t+1}, x_{t+2}, etc., are used only to compute new iterates and not to re-fit the function f(·). The iterated approach is consequently similar to the traditional autoregressive modeling approach. In the direct method, separate 1-step, 2-step, ..., T-step-ahead models are fit to the data and are applied directly to generate the 1-, 2-, ..., T-step-ahead forecasts.
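To make the distinction concrete, here is a minimal sketch of the two strategies in Python. The helper names (embed, iterated_forecast, direct_forecast) and the generic fit callable are illustrative assumptions, not the authors' code; the paper estimated f(·) with MARS, but any regressor exposing a scikit-learn-style fit/predict interface would slot in here.

```python
import numpy as np

def embed(x, d):
    """Lagged design matrix: the row for time t holds (x[t-d+1], ..., x[t]); target is x[t+1]."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    X = np.column_stack([x[i:n - d + i] for i in range(d)])
    return X, x[d:]

def iterated_forecast(x, d, fit, T):
    """Fit one 1-step model, then feed each forecast back in as a 'known' value."""
    x = np.asarray(x, dtype=float)
    X, y = embed(x, d)
    model = fit(X, y)                      # fit the 1-step-ahead f(.) on existing data only
    window = list(x[-d:])
    preds = []
    for _ in range(T):
        xhat = model.predict(np.asarray(window[-d:])[None, :])[0]
        preds.append(xhat)
        window.append(xhat)                # iterate: the estimate becomes an input; f is never re-fit
    return np.asarray(preds)

def direct_forecast(x, d, fit, T):
    """Fit a separate h-step-ahead model for each lead h = 1..T; no feedback of estimates."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    preds = []
    for h in range(1, T + 1):
        X = np.column_stack([x[i:n - d - h + 1 + i] for i in range(d)])
        y = x[d + h - 1:]                  # target x[t+h], aligned with the window ending at t
        model = fit(X, y)
        preds.append(model.predict(x[-d:][None, :])[0])
    return np.asarray(preds)
```

For example, fit could be `lambda X, y: sklearn.linear_model.LinearRegression().fit(X, y)`; in the iterated scheme the errors of early iterates propagate into later inputs, which is why the two methods can diverge at long leads.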

The two forecasting methods can give different results depending on the signal-to-noise ratio (the variance of the error term e_t relative to the variance of x_t) and on local variations in predictability that depend on the nonlinearity of the underlying f(·). Both methods of forecasting were evaluated in a cross-validated testing mode with the Devils Lake data: the models were fit to selected portions of the data and tested on the remainder. For any candidate set of predictors in a particular fitting exercise, the predictors retained in the model, as well as the complexity of the model (e.g., the number and placement of knots for the regression spline), are selected using traditional statistical criteria (e.g., Generalized Cross Validation (GCV) and the Schwarz Criterion (SC)).
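For reference, the GCV score divides the average squared residual by a penalty on effective model size; in the standard form used with regression splines (C(M) denotes the effective number of parameters, which Friedman (1991) inflates to charge for adaptive knot selection):

$$ \mathrm{GCV}(M) = \frac{\frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - \hat{f}_M(x_i)\bigr)^2}{\bigl(1 - C(M)/n\bigr)^2} $$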

Logarithmic and square-root transforms of the Devils Lake volume were also explored in the model-building process. Suitably chosen predictors with different intrinsic time scales of fluctuation (e.g., interannual to decadal for ocean temperatures, seasonal for local precipitation) can potentially be used to reconstruct the short- and long-run dynamics of Devils Lake. A variety of numerical algorithms (e.g., spline regression, locally weighted polynomial regression, and neural networks) were explored for estimating f(·). Multivariate adaptive regression splines (MARS; Friedman, 1991), encoded in a Windows 95 application (Ames, 1998) that focuses on time series model building and forecasting, were used in the work reported here.
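For reference, Friedman's (1991) MARS represents the fitted function as an adaptively selected sum of products of hinge functions:

$$ \hat{f}(\mathbf{x}) = \beta_0 + \sum_{m=1}^{M} \beta_m B_m(\mathbf{x}), \qquad B_m(\mathbf{x}) = \prod_{k=1}^{K_m} \bigl[\, s_{km}\,\bigl(x_{v(k,m)} - t_{km}\bigr) \bigr]_+ $$

where [u]_+ = max(u, 0), the t_{km} are knot locations, s_{km} = ±1, and v(k, m) indexes the predictor entering the k-th factor of basis function m; terms are added by forward selection and pruned back using GCV.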

Pre-screening of potential predictors: One interesting implication of Takens' theorem is that if multiple lagged variables are used to reconstruct the state space of a dynamical system, there is potential for significant coordinate redundancy, since it is conceptually possible to reconstruct the state space by lagging any one of the state variables. The statistical criteria we used to select predictors seek to limit such redundancies and their effects on the forecast scheme. However, as the number of potential predictors, and hence the number of choices facing the statistical criteria, increases, so does the potential for model mis-specification. Consequently, it is important to pre-screen the potential predictor set before a model such as (2) is fit to the data. We used the correlative analyses described in the preceding section and fit a set of candidate models with different subsets of potential predictors.

Efforts were made to include in each candidate set potential predictors that span a range of intrinsic time scales of variability. After the appropriate transformations were applied, a series of trial forecasts was performed to identify the most important predictors. The predictors considered were the PDO, NAO, and NINO3 indices, the five SST areas indicated in Figure 1, and the monthly precipitation anomalies for climate division 3 of North Dakota (PCP). The cumulative sum of each predictor was also considered as a predictor (Corradi, 1995).
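The cumulative-sum transform is presumably the running sum of anomalies about the long-term mean, consistent with the "cumulative sums of anomalies from the long-term average" used in the forecasts below; a minimal sketch:

```python
import numpy as np

def cusum_anomalies(x):
    """Cumulative sum of anomalies from the long-term mean.

    Integrating the anomalies converts a noisy predictor into a smoother,
    trend-like series that emphasizes persistent wet/dry or warm/cool regimes.
    """
    x = np.asarray(x, dtype=float)
    return np.cumsum(x - x.mean())
```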

Several MARS models were fit using different combinations of these predictors, at several different lead times and from different starting dates in the historical record. The most important predictors identified were the PDO index and the SEC area of SST. The PCP predictor is important in certain cases. Often, models that used just the time history of Devils Lake performed as well as those that used climate predictors.

Validation forecasts and outlook for future Devils Lake levels

Once these predictors were identified, iterated and direct forecasts at four different lead times were made to explore the predictability of the DL volume series in the historical record.

Comparisons for direct forecasts are shown in Figure 19 for: (a) the period of relatively steady lake volume beginning in 1981, (b) the 1987 transition to decreasing lake volume, (c) the increasing lake volume from 1997 to 1999, and (d) a blind outlook for future lake levels. We consider four different lead times: 12, 18, 24, and 36 months. The potential predictors provided to the model for these forecasts were the past volumes of Devils Lake (with a square-root transform), PDO, SEC, and North Dakota climate division 3 precipitation; cumulative sums of anomalies from the long-term average were used for all predictors except Devils Lake.

[Figure 19. Forecasts of Devils Lake volume, converted to levels, starting at various times, for the lead times given in the legend.]

These are all direct forecasts, based on models fit using only data prior to the start date of the forecast. For a 36-month-lead forecast starting in January 1981, only data up to January 1978 (i.e., 36 months prior) are used for model fitting. The fitted model is then applied to the subsequent months' data to generate the forecast; thus, by the end of the 36-month forecast, data through December 1980 have been used. MARS chooses different predictors from the candidate set.

There was considerable variation in the predictors used for models fit at different times and for different leads. For example, the 36-month-ahead forecast model fit on data through May 1999 uses Devils Lake (t-36), cumulative-sum SEC (t-36, t-72, t-84), cumulative-sum PDO (t-84), and North Dakota climate division precipitation (t-84) as predictors.

One finding was that using precipitation (PCP) as a predictor can sometimes improve the longer-lead forecasts. The 24- and 36-month-lead forecasts mentioned above used PCP as a predictor. Without PCP, the 24-month forecast failed to predict the decrease after July 1997 (Figure 19c). Using PCP also improved the 36-month-lead forecast starting from January 1997: the forecast made without PCP was too extreme, over-predicting the volume for January 1999 by 207,000 acre-feet.

However, when PCP was included as a predictor, accuracy improved, and the predicted volume was within 12,000 acre-feet (0.6%) of the correct value. The outlook for future Devils Lake levels (up to mid-2002) indicates decreasing lake levels. This is a rather surprising forecast given the recent increases. However, the outlook agrees with long-term forecasts of Upper Mississippi River (UMR) streamflow, which indicate that 2000 will be near-normal but that 2001 will be below-normal (Baldwin, 1998). We compare these forecasts with those of the UMR because the UMR forecasts have been demonstrated to be more accurate than forecasts made directly for Devils Lake, so they provide a useful validation.

Bayesian time series forecasting methodology

Wiche and Vecchia (1995) presented an effective implementation of established, stationary hydrologic time series analysis methods for the analysis of Devils Lake volumes.

Unfortunately, such methods can have a hard time reproducing features such as the recent rise of Devils Lake, even when parameter uncertainty is considered. In addressing our second forecasting objective, the determination of long-run lake volume probabilities conditional on the current state, we considered several alternatives to the Wiche and Vecchia work. These included fractionally integrated autoregressive moving average (ARFIMA) models and a Bayesian autoregressive modeling approach (ARCOMP) that considers uncertainty in both the model order and the model coefficients. ARCOMP is a relatively new approach, due to Huerta and West (1999), that admits certain long-memory and quasi-periodic sub-processes. A direct application of the ARFIMA model to the monthly Devils Lake volume time series (log transform) leads to the selection by AIC of an (AR = 5, d = 0.32, MA = 1) model, with coefficients (AR: 1.83, -0.97, 0.08, -0.004, 0.068; MA: 0.73). The forecasts from this model tended to the mean of the series and were unsuccessful for the 1990s.
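For reference, the standard ARFIMA(p, d, q) form, with B the backshift operator and (1 − B)^d the fractional-differencing operator (textbook definition, not reproduced from the source):

$$ \Phi(B)\,(1 - B)^{d}\,x_t = \Theta(B)\,\varepsilon_t, \qquad \Phi(B) = 1 - \sum_{i=1}^{p}\phi_i B^i, \quad \Theta(B) = 1 + \sum_{j=1}^{q}\theta_j B^j $$

With the fitted d = 0.32 lying in the stationary long-memory range (0, 1/2), the slow reversion of the forecasts to the series mean noted above is the expected behavior.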

The ARCOMP procedure is described below. Consider a univariate autoregressive process of order p, AR(p):

$$ x_t = \sum_{i=1}^{p} \phi_i\, x_{t-i} + \varepsilon_t \qquad (3) $$

where the φ_i are autoregressive coefficients for lag i, and ε_t is an independent noise process. This process can also be written in terms of the roots of its characteristic polynomial, as in equation (4), where B is the backshift operator; the α_j are the roots of the characteristic polynomial associated with the AR(p) process, comprising the R = p − 2C real roots r_j, for j = 2C+1, ..., p, and the 2C complex roots r_j exp(±iω_j), for j = 1, ..., 2C, corresponding to quasi-periodic processes with frequencies ω_j; z_{tj} and y_{tj} are latent processes corresponding to the complex and real roots, respectively; and b_j, d_j, and e_j are real constants. Huerta and West note that (a) state-space models can be written as ARMA(p', q') models, which can in turn be approximated by high-order AR(p) models, and (b) as per (4), one can include a certain number of quasi-periodic components, determined by the number of complex roots admitted (C). These observations are interesting because they allow one to investigate low-frequency trends and quasi-periodic behavior in univariate time series, such as the Devils Lake volumes, where these features are of interest.
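Equation (4) is likewise not reproduced. Based on the definitions above and on the latent-component decomposition in Huerta and West (1999), a hedged reconstruction, in which each complex-root pair contributes a quasi-periodic second-order component and each real root a first-order component, is:

$$ x_t = \sum_{j=1}^{C} z_{tj} + \sum_{j=2C+1}^{p} y_{tj}, \qquad z_{tj} = 2 r_j \cos(\omega_j)\, z_{t-1,j} - r_j^2\, z_{t-2,j} + b_j \varepsilon_t + d_j \varepsilon_{t-1}, \qquad y_{tj} = r_j\, y_{t-1,j} + e_j \varepsilon_t \qquad (4) $$

The placement of the constants b_j, d_j, e_j in the innovation terms here is an assumption.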

In classical linear autoregressive modeling, the order p of the model is selected and fixed at some level using a criterion of best fit, such as the Akaike Information Criterion (AIC). Uncertainty in the AR coefficients, and its impact on simulations, is then assessed within this framework. Huerta and West take a rather different approach. They assume user-specified upper bounds on C and R. For instance, we could set the upper bound on C to 5, on the assumption that an annual cycle, two quasi-periodic components with periodicities in the 3- to 5-year range related to ENSO, a quasi-periodic component at decadal time scales related to the NAO, and a quasi-periodic component at inter-decadal scales related to the PDO are the main components of the climate system likely to be seen in the Devils Lake fluctuations.
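For reference, the AIC mentioned above balances fit against model size (standard definition; k is the number of estimated parameters and L̂ the maximized likelihood):

$$ \mathrm{AIC} = 2k - 2\ln\hat{L} $$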

The actual frequencies ω_j and amplitudes r_j of these components are not specified. Upper and lower bounds on the frequencies are specified (typically 2 < λ < n/2, where the period λ = 2π/ω), and the amplitudes are bounded in absolute value by 1. The number of real roots, R, could be chosen as a suitably large number, say 10. This would imply an upper bound on p of 2C + R = 20. In the absence of any information as to the number of admissible components C and R, one may consider larger values for these upper bounds.

Instead of seeking 'optimal' values for the model order and the associated model coefficients, Huerta and West use a Bayesian approach in which a prior probability distribution (typically approximately uniform, with mass admitted on unit and zero roots) is specified on the values of the model parameters, and the data are then used to develop a posterior distribution for these parameters. Admitting zero roots allows for consideration of model-order uncertainty, while admitting unit roots allows nonstationary components. A Markov chain Monte Carlo (MCMC) approach is used to sample the posterior distribution of the model coefficients. At the end of the MCMC simulations, contingent on the upper bounds for C and R, we have posterior probability distributions for each of the amplitudes r_j of both the complex and the real roots. One can thus diagnose the roots for which the posterior probability mass lies significantly away from 0, and hence which carry useful information; the corresponding frequencies ω_j of such complex roots can then be highlighted.
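A minimal sketch of that diagnostic step, assuming r_draws holds MCMC samples of the root amplitudes (one column per root); the specific "mass away from zero" rule below is an illustrative choice, not the authors' exact criterion:

```python
import numpy as np

def active_roots(r_draws, threshold=0.1, mass=0.9):
    """Flag roots whose posterior amplitude mass sits away from zero.

    r_draws: array of shape (n_draws, n_roots) of sampled amplitudes |r_j|.
    A root is flagged if at least `mass` of its posterior draws exceed
    `threshold`; unflagged roots suggest the effective order is lower.
    """
    frac_above = (np.abs(np.asarray(r_draws)) > threshold).mean(axis=0)
    return np.flatnonzero(frac_above >= mass)
```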

Probabilistic model forecasts for x_{t+T} consequently encode the posterior probability distributions for each of the admissible parameter values. The ARCOMP algorithm was applied to the Devils Lake monthly volume data, and up to 10,000 simulations were generated from different starting points using the posterior probability...

