UBC Theses and Dissertations


Improving hub-height wind forecasts in complex terrain. Siuta, David. 2017.



Full Text

Improving hub-height wind forecasts in complex terrain

by

David Siuta

B.S. Earth System Science, University of Wyoming, 2011
M.S. Atmospheric Science, University of Wyoming, 2013

A DISSERTATION SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Doctor of Philosophy in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Atmospheric Science)

The University of British Columbia (Vancouver)

March 2017

© David Siuta, 2017

Abstract

Wind-speed forecasts from numerical-weather-prediction (NWP) models are important for daily wind-resource generation planning. However, NWP models are imperfect. The ability of energy planners to efficiently manage resources is a function of the accuracy of deterministic wind forecasts and of the associated probability estimates of forecast uncertainty. As the amount of energy generated from wind increases to significant levels, improving forecast accuracy and the representation of forecast uncertainty is a key area of active research.

This dissertation advances wind forecasting over regions of complex topography using the Weather Research and Forecasting (WRF) model. The optimal WRF-model configuration is a function of planetary-boundary-layer (PBL) physics, grid length, and initial-condition choice. The first component of this work determines which of these three factors most influences forecast accuracy over complex terrain. The two largest factors influencing forecast accuracy are the PBL-physics scheme and the grid length, with the dominant factor being a function of location, season, and time of day.

The second component of the research addresses the need for probability-based forecast information, which is only recently being used within the industry. Wind forecasts from an ensemble using eight PBL schemes, three grid lengths, and two initial-condition sources are converted into probability models that are then evaluated. Using the full, empirical ensemble distribution produces uncalibrated probabilistic forecasts. Prescribing a Gaussian probability distribution based on statistical moments of a past training dataset results in calibrated and sharp probabilistic forecasts. Such a method is also computationally cheap.

The final aspect of this study evaluates the role of boundary-layer static stability in forecast performance. Traditionally, empirical surface-layer similarity theory has been used to relate surface fluxes of heat, momentum, and moisture to vertical profiles of temperature and wind. To evaluate and improve surface-layer similarity theories over mountain ridges, a year-long field campaign of temperature and wind measurements was conducted at wind farms in British Columbia. New empirical equations for complex terrain are proposed based on the field data, and are found to perform well at an independent test location.

Preface

This dissertation is composed of the original work of the author from three accepted or submitted journal articles, with minimal editing changes. These articles provide the contents of Chapters 2, 3, and 4, and are described in more detail below. The introduction and conclusion chapters, however, are unique to this dissertation.

Chapter 2
D. Siuta, G. West, and R. Stull. WRF hub-height wind-forecast sensitivity to PBL scheme, grid length, and initial-condition choice in complex terrain. Weather and Forecasting, in press. doi:10.1175/WAF-D-16-0120.1. Copyright 2017 American Meteorological Society.

The idea to evaluate the performance of planetary-boundary-layer physics-scheme choice in the WRF model (Chapter 2) was developed by D. Siuta, with project funding provided by Dr. Roland Stull. Additionally, Dr. Greg West collaborated with D. Siuta and Dr. Stull throughout the project. D. Siuta carried out the research, analyzed the results, and wrote the submitted manuscript. Dr. Stull provided the computational resources for such an extensive study and also edited the contents of the manuscript. Dr. West provided background knowledge on the use of wind forecasts for wind-resource planning at BC Hydro and edited the manuscript.

Chapter 3
D. Siuta, G. West, R. Stull, and T. Nipen. Calibrated probabilistic hub-height wind forecasts in complex terrain. Weather and Forecasting, in press. doi:10.1175/WAF-D-16-0137.1. Copyright 2017 American Meteorological Society.

Drs. Greg West and Roland Stull identified the need to develop calibrated probabilistic hub-height wind-speed forecasts in locations of complex terrain (Chapter 3). D. Siuta thought of the idea for an ensemble forecast based on multiple planetary-boundary-layer schemes, grid lengths, and initial-condition choices, and carried out the research, analysis, and authoring of the manuscript. D. Siuta collaborated with Dr. Thomas Nipen to use the Component-based Post-Processing System (COMPS) to carry out parts of the research. Dr. Nipen wrote the code used in the COMPS system and provided edits to the manuscript. Drs. West and Stull also provided edits to the manuscript. Dr. Stull wrote the appendix for the manuscript (Appendix B in this dissertation) and devised the new pq distribution.

Chapter 4
D. Siuta, R. Howard, and R. Stull. A flux-profile relationship for winds over mountain ridges. Submitted.

The idea to perform fieldwork to measure boundary-layer stability was by D. Siuta, who also carried out the field campaign. Dr. Stull provided input into fieldwork, instrument purchase, and calibration. Additional help with instrumentation, methodology, and editing was provided by Dr. Rosie Howard. Drs. Stull and Howard also provided some expertise in how to interpret the workings of the PBL.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgments

1 Introduction
  1.1 Use of Wind Forecasts for Wind Energy
  1.2 Deterministic Hub-Height Wind Forecasting and Forecast Accuracy
  1.3 Probabilistic Hub-Height Wind Forecasting
  1.4 The Role of Boundary-Layer Stability
  1.5 Dissertation Organization

2 WRF Hub-Height Wind Forecast Sensitivity to PBL Scheme, Grid Length, and Initial-Condition Choice in Complex Terrain
  2.1 Introduction
  2.2 Methodology
    2.2.1 Planetary-Boundary-Layer Schemes
    2.2.2 Initial-Condition Sources
    2.2.3 Verification Metrics
    2.2.4 Observations
    2.2.5 Bias Correction
  2.3 Results and Discussion
    2.3.1 Deterministic Verification
    2.3.2 Sensitivity Analysis
  2.4 Conclusions and Future Work

3 Calibrated Probabilistic Hub-Height Wind Forecasts in Complex Terrain
  3.1 Introduction
  3.2 Methodology
    3.2.1 Effect of Number of PBL Schemes in Ensemble
    3.2.2 Bias Correction Technique
    3.2.3 Choice of Uncertainty Model
    3.2.4 Verification Metrics
  3.3 Results and Discussion
    3.3.1 Raw Ensemble Distribution as a Probability Distribution
    3.3.2 Prescribed Probability Distributions Dressed on the Ensemble Mean
    3.3.3 Use of Ensembles to Improve Forecast Sharpness
    3.3.4 Probabilistic Forecasts for Wind Events
    3.3.5 Summary of Skill vs. Computation Cost
    3.3.6 Shape of the Prescribed Distribution
  3.4 Conclusions and Future Work

4 A Flux-Profile Relationship for Winds over Mountain Ridges
  4.1 Introduction
    4.1.1 Empirical Formula for φm and φh
    4.1.2 Issues for Current Flux-Profile Relationships
    4.1.3 Goals of this Research
  4.2 Field-Work Description and Data
    4.2.1 Overview of Field Campaign
    4.2.2 Methodology for Deriving Observed φm, φh, and New Flux-Profile Relationships
  4.3 Results and Discussion
    4.3.1 Comparison of Observed Wind Profiles with those of the BD and ZA Relationships
    4.3.2 Observations of φm, Theoretical φm, and Canopy Effects
    4.3.3 Sensitivity of φm to Season and Wind Direction
    4.3.4 Independent Verification of SHS Profile at Site 2
    4.3.5 Sensitivity of Observed φm to Zero-Plane Displacement and Roughness Length
    4.3.6 Observations of φh
    4.3.7 Limitations and Avenues for Future Work
  4.4 Conclusions

5 Conclusions
  5.1 Summary of Research Goals and Methods
  5.2 Summary of Findings
  5.3 Potential Applications
  5.4 Limitations and Recommendations for Future Work

Bibliography
Appendix A: Summary of Bias-Corrected Forecast Accuracy Scores
Appendix B: The pq Probability Distribution
Appendix C: Terrain cross sections

List of Tables

Table 2.1: WRF model configurations used for this study.

Table 3.1: Summary of the tests performed. Bias correction is detailed in Section 3.2.2. A description of the uncertainty models is provided in Section 3.2.3.

Table 4.1: Meteorological sensor description, accuracy, and time averaging for the field campaign.

Table 4.2: Flow-regime-dependent summary statistics of the fit to the SHS relationship for statically-stable conditions at site 1. The residual is calculated as φm,obs − φm,SHS, where φm,SHS is from the SHS relationship.

Table 4.3: Verification summary statistics for the BD, ZA, and SHS profiles at site 2 for statically-stable conditions. ŪSHS, ŪBD, and ŪZA are the mean wind speeds using the SHS, BD, and ZA flux-profiles, respectively. Ō is the mean observed wind speed. MAESSBD is the mean absolute error skill score of the SHS wind profile relative to that of the BD wind profile; MAESSZA is the same relative to the ZA wind profile. Larger skill scores are better.

Table A.1: Mean absolute error (MAE) for each bias-corrected forecast initialized off the GFS. Sites 1-4 are the anonymous wind-farm locations. Statistics are divided into five blocks: Annual, Summer (June-August), Fall (September-November), Winter (December-February), and Spring (March-May). MAE is in m s-1.

Table A.2: Mean absolute error (MAE) for each bias-corrected forecast initialized off the NAM. Sites 1-4 are the anonymous wind-farm locations. Statistics are divided into five blocks: Annual, Summer (June-August), Fall (September-November), Winter (December-February), and Spring (March-May). MAE is in m s-1.

List of Figures

Figure 2.1: Method used to produce 48 daily forecasts at each wind farm. The names of the PBL schemes are detailed in Table 2.1.

Figure 2.2: WRF domains used in this study. The 36-, 12-, and 4-km grids are bounded by the red, blue, and black boxes, respectively.

Figure 2.3: Taylor diagrams of bias-corrected wind forecasts for sites 1-4. The symbols in the legend differentiate PBL schemes, where similar colors represent similar grid lengths and initial conditions. Better forecasts will be located close to the yellow curve, which represents the standard deviation of the observations. The best forecasts will be located closest to the bottom of the yellow curve, where there is also perfect forecast-observation correlation (r = 1.0) and no CRMSE. Correlation values are provided by the dashed radials, CRMSE by the thick gray semi-circles (m s-1), and standard deviation by the thin black quarter-circles. The legend naming convention is [PBL]{IC}(Grid).

Figure 2.4: Accumulated z-score for each of the 48 bias-corrected wind forecasts for the summer (June, July, and August; top) and winter (December, January, and February; bottom). Colors represent individual wind-farm sites, with purple bars indicating the site-averaged performance. A larger-magnitude negative z-score indicates forecasts that are much better than the average. The boxed areas represent the non-local mixing PBL schemes in the top panel and the 12-km grids in the bottom panel.

Figure 2.5: Annual MAE z-score for each of the 48 bias-corrected wind forecasts. Colors represent individual wind-farm sites, with purple bars representing the site-averaged performance. A larger-magnitude negative z-score indicates forecasts that are much better than the average.

Figure 2.6: Contribution to the variance in MAE scores for the bias-corrected wind forecasts as a percentage contribution by each factor for each site. Different colors represent different seasons, where blue is summer (June, July, and August), red is fall (September, October, and November), orange is winter (December, January, and February), and green is spring (March, April, and May). The residual represents the variance attributed to factor interactions and other unaccounted differences.

Figure 2.7: Contribution to the variance in annual MAE scores for the bias-corrected wind forecasts by time of day as a percentage contribution by each factor for each site. Different colors represent different factors, where blue is the PBL scheme, red is initial condition, orange is grid length, and green is the residual. The residual represents the variance attributed to factor interactions and other unaccounted sources.

Figure 3.1: Example ensemble forecast (A) and corresponding probabilistic forecast (B).

Figure 3.2: Illustration of an ensemble-forecast meteogram and Gaussian-based uncertainty models (above the meteogram). Individual ensemble members are indicated by the colored dashed curves and the ensemble mean by the black curve. Each forecast time has a probability density function based on the Gaussian distribution. The grey curve indicates a Gaussian distribution that does not scale (GNS) during the forecast. The pink curve is a Gaussian distribution that scales with ensemble variance (GSEV), while the red curve scales with the ensemble mean (GSEM).

Figure 3.3: Improvement in PIT-histogram calibration for one year of hourly wind forecasts from using more PBL schemes in the ensemble, for the raw ensemble distribution (R2-R8, Table 3.1) and bias-corrected ensemble distribution (RB2-RB8) uncertainty models. Improvement is based on the reduction in deviation between bins of the PIT histogram (such as for the PIT histogram in Fig. 3.4). Larger improvement is better.

Figure 3.4: PIT histograms indicating an under-dispersive ensemble for tests R8 and RB8. Under-dispersion occurs when observed events fall too often at or outside the extremes of the ensemble forecast distribution. Flatter PIT histograms are better.

Figure 3.5: PIT histograms comparing the results of the three Gaussian-based distributions for the six- and 48-member ensembles, prior to and after bias correction. Labels are the test names (Table 3.1). Flatter PIT histograms are better (closer to the horizontal dashed line).

Figure 3.6: Actual wind-speed forecast-error distribution about the ensemble mean in test GB8 for each of the four wind farms.

Figure 3.7: Annual RMSE for each bias-corrected ensemble member, the six-member ensemble in test GB1, and the 48-member ensemble in test GB8, averaged over all four wind sites. Ensemble members are named under the convention [PBL][IC][Grid(km)]. Smaller RMSE is better.

Figure 3.8: Reliability diagrams for tests RB1, RB8, GB1, GB8, and the GNS distribution dressed over the bias-corrected 12-km ACM2 deterministic forecast initialized off the GFS (ACM2 12km GNS) for event thresholds of 5, 15, and 20 m s-1. Reliability curves closer to the diagonal 1:1 line (thick grey) are better (i.e., more calibrated). The blue dashed line represents the climatological event-threshold frequency. The red dashed line represents the no-skill line. The inset figure is the sharpness histogram: a count of the number of occurrences the event threshold is forecast in each probability bin (from 0th to 100th percentile in 10% increments). Sharper forecasts assign mostly 0th and 100th percentiles, while non-sharp forecasts issue mainly 50th-percentile forecasts.

Figure 3.9: CRPS skill scores for tests RB1, RB8, GB1, GB8, the ACM2 12-km GNS, and the bias-corrected deterministic 12-km ACM2 forecast initialized off the GFS. Skill is calculated relative to the worst-performing deterministic forecast, the 36-km QNSE scheme initialized with the NAM. A larger CRPS skill score is better.

Figure 3.10: Typical 30-day training-period past-error distribution (vertical bars) fit by the pq probability distribution (smooth curve). Bins are every 0.2 m s-1.

Figure 4.1: Photographs of site 1 (left) and site 2 (right). The photo of site 2 shows the five Onset HOBO temperature/RH sensors in their white radiation shields aligned in the vertical at 1-, 1.5-, 2-, 3-, and 4-m AGL. Trees at both locations are predominantly of pine-forest type.

Figure 4.2: Histogram of wind directions at site 1. Three wind regimes are labeled: southwesterly (SW), northwesterly (NW), and easterly (E).

Figure 4.3: Comparison of the observed mean-wind profiles at site 1 (solid) to the profiles produced from the BD (dashed) and ZA (dotted) relationships. The left panel is for statically-stable ASL conditions (ζ > 0), while the right is for statically-unstable conditions (ζ < 0).

Figure 4.4: Wind-power density (WPD) at site 1 under statically-stable (ζ > 0) conditions for the observed (solid), BD (dashed), and ZA (dotted) mean-wind profiles.

Figure 4.5: Plot of φm at site 1 as a function of ζ. Observed values are represented by black dots. The red curve is the standard BD relationship. The orange and purple curves are canopy-corrected BD relationships with z∗ = 45 and z∗ = 150 m, respectively. The blue curve is a constant reduction factor of 55% applied to the BD curve. The green curve is the least-squares best fit to the observed dataset.

Figure 4.6: Ratio of observed non-dimensional wind shear φm to the theoretical values diagnosed from the BD relationships, φm,BD.

Figure 4.7: Comparison of diagnosed wind profiles using the SHS (dot-dashed), BD (dashed), and ZA (dotted) relations to that of the observations (solid) at site 1.

Figure 4.8: φm observed at site 1 during statically-stable conditions (black dots) for the summer (top left), fall (top right), winter (bottom left), and spring (bottom right). The green curve is the SHS theoretical fit derived using the data inclusive of all seasons.

Figure 4.9: φm observed at site 1 during statically-stable conditions (black dots) for the southwest (top left), east (top right), and northwest (bottom) wind regimes. The green curve is the SHS theoretical fit derived using the data inclusive of all wind directions.

Figure 4.10: Comparison of diagnosed wind profiles using the SHS (dot-dashed), BD (dashed), and ZA (dotted) relations to that of the observations (solid) at site 2.

Figure 4.11: Sensitivity of the calculated φm values to the use of d = 0 m vs. d = 11.25 m (a), d = 9 m vs. d = 11.25 m (b), and zo = 0.93 m vs. zo = 0.4 m (c).

Figure 4.12: Observed φh (black dots) and theoretical BD flux-profile curve (red) under statically-stable conditions (ζ > 0). The green curve is the best-fit line to the data.

Figure A.1: Annual MAE skill score (MAESS), showing improvement in MAE resulting from bias correction. Skill is relative to the equivalent raw wind forecast. Colors represent individual wind-farm sites, with purple bars indicating the site-averaged performance. Larger positive values are better.

Figure A.2: Effect of the training-period length on forecast accuracy (MAESS), averaged over all 48 bias-corrected deterministic wind forecasts at each wind-farm site.

Figure B.1: (a) Relative error (contoured, arbitrary units) between the observed and pq distributions as a function of parameters p and q. X marks the minimum error, namely the best p and q values. S was fixed a priori at 10 m s-1 wind-forecast error, and p, q, and xo were varied to find the distribution errors. (b) Resulting best-fit pq distribution (curve) to the observed histogram of wind-speed forecast errors (vertical bars). Note that this fitting method of minimizing the mean absolute error does not overly weight the tails of the distribution.

Figure B.2: Contour plots of (a) MAD (dashed line) and kurtosis (solid line), and (b) variance (dashed line) and kurtosis (solid line) for the pq distribution as a function of the p and q shape parameters.

Figure B.3: Some extreme examples of the pq distribution.

Figure B.4: Sample of some of the shapes produced by the pq distribution. The last shape is approximately Gaussian.

Figure C.1: Representation of terrain from a digital elevation model (DEM), the 4-km WRF grid, the 12-km WRF grid, and the 36-km WRF grid at anonymous site 1 (site label does not match other chapters). The red dot represents the wind-farm location.

Figure C.2: As Figure C.1, but at anonymous site 2 (site label does not match other chapters).

Figure C.3: As Figure C.1, but at anonymous site 3 (site label does not match other chapters).

Figure C.4: As Figure C.1, but at anonymous site 4 (site label does not match other chapters).
Glossary

Symbols. Below is a summary of all mathematical variables used in the dissertation:

A: Area under the un-normalized pq density curve
cp: Specific heat capacity of air at constant pressure
CD: Bulk drag coefficient for momentum
CH: Bulk drag coefficient for heat
d: Zero-plane displacement distance
DMBt: Degree-of-mass-balance bias-correction factor for the current day (t)
DMBt−1: Degree-of-mass-balance bias-correction factor from the previous day (t−1)
fs: Functional form of the pq frequency distribution
f(x): Functional form of the pq probability density
Ft: Raw ensemble-member forecast
F̄t−1: Mean of the previous day's forecasts
F̄t: Ensemble-mean forecast
F̂t: Bias-corrected ensemble-member forecast
h: Canopy height
k: von Karman constant
L: Obukhov length
MAESSBD: Mean absolute error skill score of the SHS relationship relative to the BD relationship
MAESSZA: Mean absolute error skill score of the SHS relationship relative to the ZA relationship
Ō: Mean value of the observed wind profile
Ōt−1: Mean of the previous day's observations
Nt: Gaussian probability distribution for the current forecast time
p: Shape parameter in the pq distribution
q: Shape parameter in the pq distribution
r: Linear correlation coefficient
r²: Coefficient of determination
RH: Relative humidity
Rib: Bulk Richardson number
S: Scaling parameter in the pq distribution
t: Time
x: Either the ensemble mean or ensemble variance within a linear regression against the forecast errors
xo: Center-location parameter in the pq distribution
u: Wind speed
u∗: Friction velocity
ŪBD: Average wind speed produced from the BD flux-profile
ŪSHS: Average wind speed produced from the SHS flux-profile
ŪZA: Average wind speed produced from the ZA flux-profile
z: Height above ground level
zo: Aerodynamic roughness length
z∗: Depth of the transition layer (roughness sublayer)
α: Empirical constant in MOST
β: Empirical constant in MOST
γ: Empirical constant in MOST
Γ: The gamma function
μ4: Fourth statistical moment
φh: Non-dimensional temperature gradient
φm: Non-dimensional wind shear
φm,BD: Non-dimensional wind shear diagnosed from the BD relations
φm,SHS: Non-dimensional wind shear diagnosed from the SHS relations
φ∗: Stability-independent correction factor for non-dimensional wind shear
ψh: Integrated similarity function for heat
ψm: Integrated similarity function for momentum
ρ: Air density
σt²: Variance of the prescribed probability distribution at time t, calculated from the past error distribution
σx²: Variance of the pq probability distribution
τ: Training period (Chapter 3); momentum stress (Chapter 4)
θ: Potential temperature
θg: Potential temperature at the ground
θv: Virtual potential temperature
θvg: Virtual potential temperature at the ground
θ∗: Temperature scale
ζ: Monin-Obukhov stability parameter

Abbreviations. Below is a summary of all abbreviations used in the dissertation:

ACM2: Asymmetric convection model version 2 scheme
ACM2GFS4/12/36: WRF 4-, 12-, and 36-km grid output using the ACM2 PBL scheme initialized by the GFS
ACM2NAM4/12/36: WRF 4-, 12-, and 36-km grid output using the ACM2 PBL scheme initialized by the NAM
AGL: Above ground level
ANOVA: Analysis of variance
ARW: Advanced Research WRF dynamical core
ASL: Atmospheric surface layer
BD: Businger-Dyer flux-profile relationships
BMA: Bayesian model averaging
CASES99: Cooperative Atmospheric Surface-Exchange Study 1999
COMPS: Component-based post-processing system
CRMSE: Centered root-mean-square error
CRPS: Continuous ranked probability score
DMB: Degree of mass balance
DWD: Deutscher Wetterdienst (German Weather Service)
ECMWF: European Centre for Medium-Range Weather Forecasts
EMOS: Ensemble model output statistics
EPS: Ensemble prediction system
G1-G8: Ensembles using 1 through 8 PBL schemes and a prescribed GNS distribution around the raw ensemble mean
GB1-GB8: As G1-G8, but with the GNS distribution around the bias-corrected ensemble mean
GBM: Grenier-Bretherton-McCaa scheme
GBMGFS4/12/36: WRF 4-, 12-, and 36-km grid output using the GBM PBL scheme initialized by the GFS
GBMNAM4/12/36: WRF 4-, 12-, and 36-km grid output using the GBM PBL scheme initialized by the NAM
GDPS: Global Deterministic Prediction System, also known as the GEM
GFS: Global Forecast System
GL: Grid length
GNS: Gaussian non-scaling distribution
GSEM: Gaussian distribution scaled by the ensemble mean
GSEV: Gaussian distribution scaled by the ensemble variance
GSI: Gridpoint Statistical Interpolation
GSM1-GSM8: Ensembles using 1 through 8 PBL schemes and a prescribed GSEM distribution around the raw ensemble mean
GSMB1-GSMB8: As GSM1-GSM8, but around the bias-corrected ensemble mean
GSV1-GSV8: Ensembles using 1 through 8 PBL schemes and a prescribed GSEV distribution around the raw ensemble mean
GSVB1-GSVB8: As GSV1-GSV8, but around the bias-corrected ensemble mean
IC: Initial condition
IFS: Integrated Forecast System
LES: Large-eddy simulation
LLJ: Low-level jet
MAD: Mean absolute deviation
MAE: Mean absolute error
MAESS: Mean absolute error skill score
MM5: 5th-generation Pennsylvania State University-National Center for Atmospheric Research Mesoscale Model
MOST: Monin-Obukhov Similarity Theory
MRF: Medium-range forecast scheme
MRFGFS4/12/36: WRF 4-, 12-, and 36-km grid output using the MRF PBL scheme initialized by the GFS
MRFNAM4/12/36: WRF 4-, 12-, and 36-km grid output using the MRF PBL scheme initialized by the NAM
MSL: Elevation above mean sea level
MYJ: Mellor-Yamada-Janjic scheme
MYJGFS4/12/36: WRF 4-, 12-, and 36-km grid output using the MYJ PBL scheme initialized by the GFS
MYJNAM4/12/36: WRF 4-, 12-, and 36-km grid output using the MYJ PBL scheme initialized by the NAM
MYNN: Mellor-Yamada-Nakanishi-Niino level 2 scheme
MYNNGFS4/12/36: WRF 4-, 12-, and 36-km grid output using the MYNN PBL scheme initialized by the GFS
MYNNNAM4/12/36: WRF 4-, 12-, and 36-km grid output using the MYNN PBL scheme initialized by the NAM
NAM: North American Mesoscale Model
NAVGEM: Navy Global Environmental Model
NCEP: National Centers for Environmental Prediction
NMM: Nonhydrostatic Mesoscale Model dynamical core
NWP: Numerical weather prediction
PBL: Planetary boundary layer
PIT: Probability integral transform
pq: pq distribution
R1-R8: Ensembles using 1 through 8 PBL schemes and the empirical raw-ensemble distribution
RB1-RB8: Ensembles using 1 through 8 PBL schemes and the empirical bias-corrected ensemble distribution
RMSE: Root-mean-square error
QNSE: Quasi-normal scale elimination scheme
QNSEGFS4/12/36: WRF 4-, 12-, and 36-km grid output using the QNSE PBL scheme initialized by the GFS
QNSENAM4/12/36: WRF 4-, 12-, and 36-km grid output using the QNSE PBL scheme initialized by the NAM
SHS: Siuta-Howard-Stull flux-profile relationship
SKEBS: Stochastic kinetic energy backscatter
SPPT: Stochastic perturbation of physical tendencies
TKE: Turbulence kinetic energy
UBC: University of British Columbia
UKMO: UK Met Office Unified Model
UTC: Coordinated Universal Time
UW: University of Washington scheme
UWGFS4/12/36: WRF 4-, 12-, and 36-km grid output using the UW PBL scheme initialized by the GFS
UWNAM4/12/36: WRF 4-, 12-, and 36-km grid output using the UW PBL scheme initialized by the NAM
WFIP2: Wind Forecast Improvement Project 2
WFRT: Weather Forecast Research Team
WRF: Weather Research and Forecasting model
YSU: Yonsei University scheme
YSUGFS4/12/36: WRF 4-, 12-, and 36-km grid output using the YSU PBL scheme initialized by the GFS
YSUNAM4/12/36: WRF 4-, 12-, and 36-km grid output using the YSU PBL scheme initialized by the NAM
ZA: Zhang and Anthes flux-profile relationships

Acknowledgments

I must first thank my research supervisor, Dr. Roland Stull, who provided me with endless motivation and encouragement over the last four years. His tremendous support and years of devotion to research in atmospheric science provided me with a great environment for my studies, for which I am forever indebted. I also wish to thank the other members of my supervisory committee, Drs. Phil Austin, Doug McCollor, and Magdalena Rucker, who gave input throughout my tenure as a student at UBC and provided me with insights into the use of weather forecasts for wind energy as well as access to key observational data.

I am truly grateful for the help of many of the members of the Weather Forecast Research Team, without which my successes would not have been possible. I must specifically thank Dr. Greg West, who spent many late nights with me going over manuscript edits, traveled across British Columbia to help with my field work, and whose conversation helped spur many of the ideas presented here. Dr. Rosie Howard, who provided me with key guidance on the setup of my field campaign and instrumentation, is also greatly acknowledged. I appreciate team members Dr. Dominique Bourdin, Pedro Odon, Maggie Campbell, Jesse Mason, Tim Chui, and Nadya Moisseeva for always allowing me to stop by their desks to chat about my work. Former team member Dr. Thomas Nipen is acknowledged for spending several of his evenings in Norway chatting with me online about creating probabilistic forecasts and statistical verification methods. Dr. Henryk Modzelewski and Roland Schigas made such an extensive modeling study possible through excellent assistance with computing resources and coding.

I thank Pedro Odon, Tyler Gifford, and my father, Albert Siuta, for their willingness to spend several days with me in remote sections of British Columbia to set up, retrieve data from, and disassemble my field campaign. They stood outside for many hours in sub-freezing temperatures, snowfall, and wind-driven rain for the betterment of my research.

BC Hydro, the Canadian Natural Science and Engineering Research Council, and Mitacs are acknowledged for providing the funding to make this research possible. Energia (formerly GDF SUEZ North America), Alterra Power Corp, Capital Power, and Altagas Ltd are thanked for allowing me access to their sites to install instrumentation and for providing meteorological data.

Finally, I thank my parents, Albert and Annmarie, and my sister Rose for being sources of inspiration through the last four years.

Chapter 1: Introduction

The research goal of this dissertation is to improve wind-speed forecasts at wind-turbine hub (axle) height (80-100 m) in mountainous terrain. The method is numerical weather prediction (NWP), where an ensemble of multiple NWP-model configurations is run to provide different estimates of forecast wind speed every hour. This work is motivated by the fact that utility companies need to anticipate how much contingency energy-reserve generation is needed to fill in when the winds decrease. Small errors in anticipated wind speed can cost utility companies thousands of dollars each day. The procedure is first to do sensitivity studies to find the optimum NWP-model configuration for the ensemble members. The result is used to give an ensemble-average (best deterministic) wind-speed forecast and a probability distribution of likely wind-speed errors. A field study is also conducted at four wind farms in British Columbia, Canada, to relate hub-height wind speed to static stability near the ground and to identify the best NWP-model options. The resulting findings present significant innovations in wind-profile estimation and NWP hub-height wind-speed forecasting for complex mountainous terrain. Details of the goals, method, motivation, procedure, and significance are explained below.

1.1 Use of Wind Forecasts for Wind Energy

Energy planners rely on wind forecasts from NWP models to anticipate electrical power generated from renewable wind resources. These estimates are used for two primary purposes: (1) scheduling of spinning (online) and stand-by contingency energy reserves (DeMeo et al., 2007; Corbus et al., 2009), and (2) market trading (Draxl et al., 2014). Contingency reserves are other energy resources that are available to dispatch energy to the electric transmission grid to ensure reliable service in the event that wind generation is less than expected, or if another maintenance issue arises (DeMeo et al., 2007). Namely, they add flexibility to the grid to accommodate natural wind variability and forecast uncertainty.

Accurate forecasts of hub-height wind speeds are critical for efficient reserve planning because they determine how much reserve capacity is needed. While contingency reserves serve important roles in maintaining a reliable electric grid, they operate at a significant cost to utility companies, and scheduling too many reserves can result in missed market-trading opportunities (Clement, 2012). Additionally, these reserves require 'spin-up' time that can range from hours to over a day, which is one reason why wind forecasts are needed for both short-term and long-term forecast horizons (Ahlstrom et al., 2013).

For energy-market trading, analysts need to know how much excess electricity will be available to sell to adjacent utility districts, or if there will be a need to import energy. Trading also occurs at both short- and long-term intervals, with bids updated as little as half an hour prior to delivery deadlines (Draxl et al., 2014). In some cases, traders can sell wind-generated electricity at a premium to adjacent areas needing to meet green-energy quotas. Penalties can be incurred for not being able to deliver the bid power amount (Monteiro et al., 2009).

As the amount of the total electric supply generated from wind resources increases, the economic value of accurate forecasts from NWP models also increases (Corbus et al., 2009). Marquis et al. (2011) detail that small reductions in the mean absolute error (MAE) of hub-height wind forecasts, on the order of 10-20%, result in millions of dollars of savings. They find that between $1.6 and $4.1 billion of savings can be achieved by improving current state-of-the-art forecast systems to a 'perfect forecast', assuming approximately 20% wind penetration (the portion of electrical generation produced by wind). The United States aims to reach this level of wind penetration by 2030, with a further increase to 35% wind penetration by 2050 (United States Department of Energy, 2015). Canada has a similar goal of 35% wind penetration by 2025 (Canadian Wind Energy Association, 2016). Marquis et al. (2011) also mention that additional savings could be achieved in energy planning by incorporating measures of forecast uncertainty.

Wind forecasts from NWP models are imperfect, leading to uncertainty in expected wind-generation estimates. Such uncertainty is not unique to wind, and arises from several NWP-model deficiencies. Errors arise from incorrect or incomplete depictions of the atmospheric state during data assimilation, incomplete understanding of atmospheric physics, and spatial discretization of atmospheric flows over complex terrain (Warner, 2011; Candille, 2009; Mass et al., 2002). These factors have led to the use of ensemble-forecasting techniques, which quantify forecast uncertainty by running several NWP models defined by different initial conditions, physics parameterizations, and grid lengths. Since the processes describing the evolution of the atmosphere are non-linear, these small model differences cause the forecast differences between ensemble members to grow with time. Thus, the overall spread of the ensemble members provides an indication of forecast uncertainty (Warner, 2011), with larger spread implying greater uncertainty.
Over areas of complex (mountainous) terrain, uncertainties may be amplified by sparse observations, by the use of inappropriate physical parameterizations that were derived over homogeneous flat terrain, and through poor representation of topography. Together, these factors call into question the ability of NWP models to accurately represent typical flows that arise in complex topography, such as thermally induced circulations, dynamically forced flow channeling, and mountain-wave breaking. For wind-energy sources located on ridge tops and in other complex terrain, this could result in less certain forecasts.

Nonetheless, locations of complex terrain often provide some of the richest wind resources. Building wind farms in mountainous areas, however, has been limited by poor access (limited or non-existent roads) and a lack of nearby electric-transmission infrastructure. Thus, studies on forecasting wind-turbine hub-height winds in areas of complex terrain have been limited. The Wind Forecast Improvement Project 2 (WFIP2) recently began a large field campaign in the United States to address this knowledge gap (see http://www.esrl.noaa.gov/gsd/renewable/wfip2.html).

There are three goals of this dissertation. The first is to identify the sensitivity of deterministic hub-height wind-forecast accuracy to the selection of three Weather Research and Forecasting (WRF) model configuration choices: (1) the initial-condition (IC) source, (2) the grid length (GL), and (3) the planetary-boundary-layer (PBL) scheme. The second goal is to evaluate the performance of a multi-PBL, multi-GL, multi-IC ensemble forecast to obtain calibrated probabilistic wind forecasts in regions of complex terrain. The final goal is to determine the role of planetary-boundary-layer static stability on the performance of WRF hub-height wind forecasts. Each of these three goals is introduced in the following three subsections, with details presented in subsequent chapters.

1.2 Deterministic Hub-Height Wind Forecasting and Forecast Accuracy

Historically, vertical extrapolation methods like the 1/7 power law and the logarithmic profile (Peterson and Hennessey, 1978; Stull, 1988) have been used to convert 10-m winds to those expected at wind-turbine hub height. However, these methods make assumptions about atmospheric static stability and have been derived from observations in flat terrain. The 1/7 power law in particular has no physical basis for its formulation (Arya, 2001). Nonetheless, both methods are easy to use and have stood the test of time.
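The sketch below makes these two extrapolation methods concrete (illustrative only: the 10-m wind, hub height, roughness length, and air density are assumed values, not data from this work). It also computes wind-power density using the standard relation WPD = 0.5 ρ U³, whose cubic dependence on speed is what makes small wind-speed errors so costly:

```python
import numpy as np

def power_law(u10, z, alpha=1.0 / 7.0):
    """1/7 power law: extrapolate the 10-m wind u10 (m/s) to height z (m)."""
    return u10 * (z / 10.0) ** alpha

def log_profile(u10, z, z0=0.1):
    """Neutral logarithmic profile; z0 is the aerodynamic roughness length (m).
    The value z0 = 0.1 m is a hypothetical choice for illustration."""
    return u10 * np.log(z / z0) / np.log(10.0 / z0)

u10, hub = 6.0, 100.0  # hypothetical 10-m wind (m/s) and hub height (m)
u_pl = power_law(u10, hub)
u_log = log_profile(u10, hub)

rho = 1.225                   # air density (kg/m^3), standard sea-level value
wpd = 0.5 * rho * u_pl ** 3   # wind-power density (W/m^2)

print(f"Power law: {u_pl:.1f} m/s; log law: {u_log:.1f} m/s; WPD: {wpd:.0f} W/m^2")
```

Both functions silently assume a neutral, flat-terrain surface layer, which is exactly the assumption this dissertation shows can break down over mountain ridges.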
Namely, quan-tifying forecast uncertainty provides added value to energy planners through optimized resourceplanning and market trading abilities (Monteiro et al., 2009). Gneiting et al. (2005) and Pinsonet al. (2006) describe that for probabilistic forecasts to be useful to the wind-energy community,such forecasts must primarily be probabilistically calibrated (i.e., forecast probabilities match ob-served frequencies), with added value by forecast systems that have sharper probabilistic spread4Chapter 1: Introduction(i.e., narrower range of uncertainty).Several methods exist for deriving probabilistic forecasts, including parametric and non-parametricapproaches (Zhang et al., 2014). Chapter 3 focuses on the use of ensemble prediction systems(EPSs) to provide probabilistic forecasts through both parametric and non-parametric methods.Literature on the use of EPSs and probability-based hub-height wind forecasts is limited (Junket al., 2015). While ensemble-forecast methods have been around since the 1960s-1970s (Warner,2011), their application to wind energy is a new area of research. For example, Wilczak et al. (2015)recently evaluated the benefits of a nine-member ensemble for improving ramp-event prediction andfound an approximate 20% improvement in the ranked probability score when compared to the bestdeterministic forecast. However, Wilczak et al. (2015) evaluated ensemble forecasts for only 0-6 hour lead times and found quick reduction in the skill of ensemble-based forecast systems forlonger forecast horizons. This result leads one to question the use of ensembles for hub-heightwind forecasts at longer lead times, including those for same-day and day-ahead planning. Formeteorological variables that are highly influenced by the Earth’s surface, such as hub-height winds,the usefulness of ensembles has been previously questioned (Eckel and Mass, 2005; Stensrud et al.,2000).EPSs have primarily been shown to improve deterministic forecasts through the use of the en-semble mean and for quantifying forecast uncertainty based on the spread of the ensemble members(Warner, 2011; Bakhshaii and Stull, 2009). Empirical ensemble distributions, however, are oftenunderdispersive. This makes them a poor representation of forecast uncertainty once converted toa probability distribution (probabilistic forecast) (Warner, 2011; McCollor and Stull, 2008b; Eckeland Mass, 2005). The fact that empirical ensemble distributions tend to be underdispersive com-plicates the probabilistic forecast process since calibration steps must be taken. Probability distri-butions should fit the typical error distribution calculated around the deterministic ensemble mean.Only when forecast probabilities match the observed frequency of occurrence, is the forecast saidto be calibrated (Nipen and Stull, 2011).Two main frameworks exist for converting a distribution of ensemble members to a probabilitydistribution representing forecast uncertainty (uncertainty model). The first method uses ensem-ble model output statistics (EMOS), which parameterizes a probability distribution based on errorstatistics of a training dataset (Gneiting et al., 2005; Nipen and Stull, 2011). The second is BayesianModel Averaging (BMA), which fits a Gaussian distribution around each ensemble member be-fore weighting the distributions (Courtney et al., 2013; Sloughter et al., 2010; Raftery et al., 2005).EMOS is a simpler method to implement operationally.Junk et al. 
Junk et al. (2015) compare forecasts produced from EPSs consisting entirely of global NWP models to those consisting only of local-area models. They find the global ensemble produces lower MAE and continuous ranked probability scores (CRPS) when using the EMOS method. They also improve ensemble performance with a multi-model approach that combines members from the global ensemble and a local-area ensemble.

The new research described in Chapter 3 furthers probabilistic hub-height wind forecasts in two main areas. First, I evaluate the performance of a multi-PBL, multi-GL, multi-IC ensemble, which is posited to capitalize on uncertainty in the evolution of the PBL over complex terrain. Second, I evaluate the EMOS method for hub-height winds in regions of complex terrain.

1.4 The Role of Boundary-Layer Stability

WRF and other NWP models rely on empirical relationships within surface-layer and PBL schemes to parameterize the surface stress, vertical mixing, and turbulence, all of which affect wind speed as a function of height. Surface-layer schemes parameterize surface fluxes of heat, momentum, and moisture between the ground and the first model level, and provide the lower boundary conditions for the PBL scheme (Shin and Hong, 2011). PBL schemes parameterize sub-grid-scale fluxes and vertical diffusion from the first model level up to the model top. Both the surface-layer and PBL schemes are a factor in the shape of vertical wind profiles (Shin and Hong, 2011; Jiménez et al., 2012). Depending on the model configuration, actual wind-turbine hub heights can be lower or higher than the first model layer.

Surface-layer parameterizations are based on empirically derived flux-profile relationships, perhaps the most well-known being the Businger-Dyer relations (Businger et al., 1971; Dyer, 1974). These relationships rely on the assumptions of Monin-Obukhov Similarity Theory (MOST), and were designed to describe the shape of wind, temperature, and moisture profiles in the lowest 10% of the planetary boundary layer using experimental observations over flat, homogeneous terrain. Other profiles have been derived by Paulson (1970); Webb (1970); Beljaars (1995); Zhang and Anthes (1982); Arya (2001); Santoso and Stull (2001); Optis et al. (2016). Flux-profile relationships recognize the effects of boundary-layer stability on vertical profiles through the use of stability-correction factors. However, because of the assumptions made in similarity theory, and because no relationships have been derived explicitly from data over complex terrain, use of existing relationships over such locations is questionable. Chapter 4 describes new fieldwork to determine a better flux-profile relationship for two wind farms in mountainous British Columbia.
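For reference, the flux-profile relationship for wind under MOST takes the form of a stability-corrected logarithmic profile (standard notation, e.g., Stull, 1988), where u_* is the friction velocity, k ≈ 0.4 the von Kármán constant, z_0 the roughness length, L the Obukhov length, and ψ_m the stability-correction function (e.g., the Businger-Dyer forms):

```latex
% Stability-corrected logarithmic wind profile under MOST
U(z) = \frac{u_*}{k}\left[\ln\!\left(\frac{z}{z_0}\right)
       - \psi_m\!\left(\frac{z}{L}\right)\right],
\qquad \psi_m = 0 \;\text{in statically neutral conditions.}
```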
1.5 Dissertation Organization

Aside from the Introduction and Conclusion chapters (Chapters 1 and 5, respectively), this dissertation is composed of submitted or accepted peer-reviewed journal manuscripts. While these manuscripts have been reformatted for this dissertation, the contents are the same as the submitted or published versions, with some minor edits.

Chapter 2 addresses the sensitivity of deterministic WRF hub-height wind forecasts in complex terrain to PBL scheme, GL, and IC choice. This work has been accepted for publication in Weather and Forecasting (Siuta et al., 2017b). Chapter 3 describes a method to obtain calibrated hub-height wind forecasts and has also been accepted for publication in Weather and Forecasting (Siuta et al., 2017a). Chapter 4 explores the validity of current flux-profile relationships in complex terrain and suggests new relationships that better fit observed wind profiles. The work of Chapter 4 has been submitted for peer review.

Chapter 2
WRF Hub-Height Wind Forecast Sensitivity to PBL Scheme, Grid Length, and Initial-Condition Choice in Complex Terrain

2.1 Introduction

Many methods exist to approximate hub-height wind speeds from 10-m winds. Typical methods are the 1/7 Power Law and Adiabatic Profile (logarithmic law) (Peterson and Hennessey, 1978; Stull, 1988; Arya, 2001), and the Radix profile (Santoso and Stull, 1998, 2001). However, these methods are based on implicit assumptions about the boundary layer, including atmospheric stability, which are not always valid. Siuta (2013) showed for the Cooperative Atmospheric Surface-Exchange Study 1999 (CASES99; Poulos et al., 2002) that the assumptions in these wind-profile laws can break down in differing boundary-layer static-stability regimes, producing large errors in the estimated hub-height wind.

NWP models, such as the Weather Research and Forecasting model (WRF) (Skamarock et al., 2008), have taken the forefront of wind-speed forecast research and can be used to directly forecast winds at hub height to avoid vertical interpolation. WRF has two dynamical cores, the Nonhydrostatic Mesoscale Model (NMM) and the Advanced Research WRF (ARW). The NMM core is used by the U.S. National Centers for Environmental Prediction (NCEP) North American Mesoscale Model (NAM), while the ARW core has been used primarily for academic research. Both cores have several physics and dynamics options that can be individually selected by the user, which can influence the hub-height wind-speed forecast and lead to improvements over standard operational models from national meteorological centers. Siuta (2013) found WRF-ARW simulations over the Wyoming Wind Corridor (Martner and Marwitz, 1982) to be superior to the NCEP operational NAM forecasts over the same area. In some cases, the difference in estimated wind power density varied by several hundred W m-2, both seasonally and daily.

Several studies have recently been done to evaluate the effects of the WRF planetary-boundary-layer (PBL) physics parameterization scheme choice on wind forecasts. Shin and Hong (2011) evaluated five PBL schemes for a single-day case study in the CASES99 dataset and found larger differences and higher biases in 10-m wind forecasts produced by different PBL schemes during night-time stable regimes. Deppe et al. (2012) evaluated six PBL schemes over Iowa at wind-turbine hub height and found an ensemble of schemes to outperform the deterministic forecasts from any one PBL scheme. They also found the non-local-mixing PBL schemes (detailed in the next section) to have better accuracy.

Draxl et al. (2014) found biases over flat terrain to be strongly dependent on atmospheric static stability, with the non-local schemes performing best during unstable and near-neutral conditions, and a local scheme performing best during stable conditions.
Draxl et al. (2014) also highlight the strong variation in vertical wind shear between seven PBL schemes, indicating that evaluating PBL-scheme performance on the 10-m winds alone is not sufficient when trying to use forecasts for hub-height winds.

Storm et al. (2009) evaluate two PBL schemes for simulating the Great Plains nocturnal low-level jet (LLJ) at two locations, and find different positional and magnitude errors, depending on the scheme. The LLJ is often at sufficiently low altitudes to affect wind turbines. Finally, Hahmann et al. (2015) determine that hub-height wind forecasts over sea are most sensitive to the choice of PBL scheme. While these studies provide useful results to the wind-energy and NWP communities, they leave knowledge gaps in areas of complex terrain.

Jiménez and Dudhia (2011) showed WRF-ARW to have a surface (10 m) wind bias dependent on the terrain type, highlighting important deficiencies in wind forecasts over complex terrain. However, given the Draxl et al. (2014) findings, hub-height winds (not 10-m winds) should be evaluated for wind-energy purposes. Yang et al. (2013) provide one such study over the Columbia River Basin for three PBL schemes, with the finding that each scheme allowed winds that were too fast during ramp events (rapid increases or decreases in wind speed) under stable conditions. Yang et al. (2016) find strong sensitivity to the adjustment of 26 parameters within the Mellor-Yamada-Nakanishi-Niino PBL scheme, but do not consider other schemes.

One of the primary problems with producing hub-height wind forecasts from NWP models over complex terrain is that none of the available PBL physics parameterization schemes (in WRF) were derived from data taken over complex terrain, which I conclude from reading the source publications cited in the next section. Instead, the model physics that are responsible for evolving boundary-layer processes were formulated mainly from field data over flat terrain, with a heavy focus on improving precipitation forecasts.

Atmospheric flows in mountainous terrain are complicated by the presence of both thermally-induced circulations and dynamically-forced flows (Markowski and Richardson, 2010). Thermally-induced circulations include slope flows, mountain/valley winds, and mountain convective boundary layers (De Wekker and Kossmann, 2015; Rucker et al., 2008; De Wekker et al., 2004). For example, slope-wind systems have been shown to result in turbulent mixing above the typical convective boundary layer along mountain tops due to slope-flow convergence (De Wekker and Kossmann, 2015). Closer to the surface, the development of upslope and downslope flows impacts wind speeds and directions at wind-turbine hub heights due to differential heating (Markowski and Richardson, 2010; Rucker et al., 2008). Dynamically-forced flows affect turbine-level winds under specific synoptic conditions that support flow blockage, channeling, and mountain-top acceleration (Markowski and Richardson, 2010). Additionally, mountain-wave activity may be present, resulting in enhanced turbulence, for flow going over mountain ridges under statically stable conditions (Stull, 1988).

Thus, the assumptions made in PBL schemes may not be appropriate for complex terrain, and their performance should be evaluated (Chow et al., 2013).
This is especially true for hub-height wind forecasts, where small improvements (around 10-20% improvement in mean absolute error) can lead to millions of dollars in cost savings (Marquis et al., 2011). Because of these limitations, the improvement of hub-height wind forecasts in regions of complex terrain is the goal of the Wind Forecast Improvement Project 2 (http://www.esrl.noaa.gov/gsd/renewable/wfip2.html), which recently commenced field work.

The goal of the study presented here is to quantify the sensitivity of hub-height wind-speed forecasts in complex terrain to the PBL-scheme, initial-condition, and grid-length selection in the WRF model. Answering this question will fill a knowledge gap on the use of different PBL schemes over complex terrain. It will also be of use to those who need to create WRF-model hub-height wind forecasts over mountainous areas, by addressing which factors are the most important to consider during the initial stages of the modeling process (e.g., the influence of initial-condition, PBL-scheme, and grid-length selection on forecast accuracy). The rest of this chapter is organized as follows. Section 2.2 describes the methodology used, including the WRF model set-up and brief descriptions of the PBL schemes, initial conditions, and grid lengths tested. Section 2.3 provides results from the study period, along with a discussion. Section 2.4 concludes with a summary of the findings and suggestions for future work.

Figure 2.1: Method used to produce 48 daily forecasts at each wind farm. The names of the PBL schemes are detailed in Table 2.1.

2.2 Methodology

The WRF-ARW model was used to produce 48 forecasts per day for one year (June 2014-May 2015) by using different initial-condition sources, horizontal grid lengths, and PBL schemes (Fig. 2.1, Table 2.1). Eight available PBL schemes, detailed later in this section, were used in sixteen daily WRF runs: eight runs were initialized by the 32-km NAM and another eight by the 0.5-degree Global Forecast System (GFS). Each of these runs contained domains of three different grid lengths, yielding the 48 total forecasts. The model forecast used four two-way nested domains: a 36-km parent over western Canada, a 12-km nest over British Columbia and Alberta, and two 4-km nests containing the wind farms of interest (Fig. 2.2). Forecasts covered the important 0-24 hour forecast horizon, after allowing for 9 hours of model spin-up. With a 24-hour forecast horizon, such forecasts would be used for same-day reserve scheduling and market trading.

Spin-up time is important for higher-resolution local-area models like WRF because observation networks and relatively coarse initial conditions do not contain enough information to describe typical small-scale features (e.g., tight gradients around fronts, terrain-induced local flows, and boundary-layer circulations) (Warner, 2011). Instead, WRF must develop, or spin up, these features with time, taking advantage of the filtered digital elevation model in WRF. Thus, the output from the first few hours may contain unrealistic features and should be discarded. Spin-up times generally range from 6-12 hours (Warner, 2011; Skamarock, 2004), during which the output should not be used.
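The 48-member forecast matrix just described can be enumerated directly; the short Python sketch below simply restates the design of Fig. 2.1, with member labels following the [PBL]{IC}(Grid) convention used later in the figure legends.

```python
from itertools import product

pbl_schemes = ["YSU", "MRF", "ACM2", "MYJ", "QNSE", "MYNN", "GBM", "UW"]
ic_sources = ["GFS", "NAM"]             # both from the 0000 UTC cycles
grid_lengths = ["36km", "12km", "4km"]  # two-way nested domains

# Member labels follow the [PBL]{IC}(Grid) convention of the figure legends
members = [f"[{p}]{{{ic}}}({g})"
           for p, ic, g in product(pbl_schemes, ic_sources, grid_lengths)]
assert len(members) == 48  # 8 PBL schemes x 2 ICs x 3 grid lengths
```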
Figure 2.2: WRF domains used in this study. The 36-, 12-, and 4-km grids are bound by the red, blue, and black boxes, respectively.

Vertical levels in the model were tuned to have fine resolution around wind-turbine hub height, with 62 vertical levels total and six levels in the lowest 100 m. Forecast data were extracted without horizontal interpolation (using the nearest neighbor) or vertical interpolation (using the model level closest to hub height) for each wind-farm location, to allow for differences to be attributed only to the PBL scheme. The authors recognize that interpolations can further improve the hub-height wind forecasts.

2.2.1 Planetary-Boundary-Layer Schemes

There are 11 PBL schemes available for use in the WRF-ARW core, version 3.5.1. Of those, eight were tested here. Three schemes (the Mellor-Yamada Nakanishi and Niino Level 3, Total Energy-Mass Flux, and BouLac schemes) were not tested, owing to WRF crashes and to computing-resource constraints.

A PBL scheme is ultimately responsible for the simulated development of the planetary boundary layer, and for sub-grid-scale fluxes of heat, momentum, and moisture by vertical turbulent transport from the first model level to the model top (Yang et al., 2013; Cheng et al., 2013; Shin and Hong, 2011). Wind-turbine hub heights and blade spans fall within the PBL during most times of the day, thus making the PBL scheme of crucial importance in understanding WRF strengths and deficiencies in simulating the hub-height wind. Each PBL scheme is generally compatible with at least one (but sometimes up to three) surface-layer scheme(s). The surface-layer scheme was kept constant (Table 2.1) for consistency when possible. For more information on how the PBL and surface layer are linked, see Skamarock et al. (2008).

Table 2.1: WRF model configurations used for this study.

  Model Detail              Setting
  WRF Core                  ARW Version 3.5.1
  Grid Dimensions           36 km: 100x76; 12 km: 136x103; 4 km: 103x103
  Land Surface              Noah
  Microphysics              WRF Single-Moment 5-class
  Cumulus                   Kain-Fritsch
  Shortwave Radiation       Dudhia
  Longwave Radiation        Rapid Radiative Transfer Model
  Planetary Boundary Layer  Yonsei University (YSU), Medium Range Forecast (MRF),
                            Asymmetric Convective Model (ACM2), Mellor-Yamada-Janjic
                            (MYJ), Quasi-Normal Scale Elimination (QNSE),
                            Mellor-Yamada Nakanishi Niino Level 2.5 (MYNN),
                            Grenier-Bretherton-McCaa (GBM), and University of
                            Washington (UW)
  Surface Layer             MM5 similarity (used with the YSU, ACM2, MRF, MYNN,
                            GBM, and UW PBL schemes), Eta similarity (MYJ PBL
                            scheme only), or QNSE surface layer (QNSE PBL scheme
                            only)

The following are brief explanations of the main components of each PBL scheme.

(i) Medium Range Forecast Model (MRF) scheme: A first-order closure, non-local eddy-diffusivity (K-theory) approach that uses a countergradient correction term based on the definition of the PBL top (Hong and Pan, 1996). The PBL top is defined by a critical value of the bulk Richardson number (Rib = 0.5). The countergradient correction term accounts for contributions to the total flux by large eddies that span the boundary layer, making this a non-local scheme.

(ii) Yonsei University (YSU) scheme: A first-order closure, non-local eddy-diffusivity (K-theory) approach with explicit treatment of the PBL top (Hong et al., 2006). This scheme provides an updated countergradient flux term originally provided in the MRF scheme.
Here, the PBL top is determined by the level at which the minimum turbulent flux exists (heat, momentum, moisture). The YSU scheme was updated in WRF version 3.4 to account for a bug in the code in the calculation of the momentum diffusion coefficient (Hu et al., 2013). Hu et al. (2013) claim this improves wind-speed estimation in stable regimes by decreasing vertical mixing; this is one of the only PBL schemes whose improvements focus on wind.

(iii) Mellor-Yamada-Janjic (MYJ) scheme: A local-mixing, 1.5-order closure, prognostic turbulent kinetic energy (TKE) scheme, used operationally in the NAM. The TKE vertical distribution is controlled via a master mixing length, which is used in the TKE budget equation (Janjić, 1994). Local closure implies that only small turbulent-eddy contributions to the TKE distribution are accounted for.

(iv) Asymmetric Convective Model (ACM2) scheme: A scheme combining non-local transilient turbulence (Stull, 1993) during unstable conditions with local eddy diffusivity in stable conditions (Pleim, 2007). A stability parameter controls the contributions to the total mixing from the non-local and local components. Downward mixing is controlled by local K-theory. Pleim (2007) shows strong agreement between the ACM2 scheme and LES. The PBL height is determined by a critical bulk Richardson number.

(v) Quasi-Normal Scale Elimination (QNSE) scheme: The QNSE scheme follows the same theory as the MYJ scheme in unstable and neutral conditions but uses a spectral theory during stable periods (Deppe et al., 2012). The goal of this scheme is to more accurately develop the boundary layer during stable regimes (Sukoriansky et al., 2005). This scheme treats turbulent eddies as waves during stable regimes.

(vi) Mellor-Yamada Nakanishi Niino Level 2.5 (MYNN) scheme: Also a local-mixing, 1.5-order closure, prognostic TKE scheme, and a slight variation of the MYJ scheme. The TKE vertical distribution is controlled via a master mixing length that differs from that in the MYJ scheme. For the MYNN scheme, the master mixing length is the sum of three individual length scales: the surface-layer length, the turbulent-layer length, and the buoyancy length (Deppe et al., 2012).

(vii) Grenier-Bretherton-McCaa (GBM) scheme: The GBM scheme was developed to improve representations of cloud-topped marine boundary layers (Grenier and Bretherton, 2001). It uses moist thermodynamics in its turbulence scheme, unlike others that use unsaturated thermodynamics. The GBM scheme is also a 1.5-order closure, local TKE-prognosis scheme using K-theory to parameterize turbulent fluxes.

(viii) University of Washington (UW) scheme: A modified GBM scheme taking into account important cloud-top processes such as cloud-top cooling, evaporation, and phase change (Bretherton and Park, 2009). This scheme uses moist thermodynamic variables as in GBM and slightly varies the master mixing-length formulation.
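In practice, the members differ mainly in the &physics block of the WRF namelist. The fragment below is an illustrative sketch for one ACM2-based member; the scheme index values follow the WRF-ARW v3.5.1 documentation, but should be verified against the Registry of the exact WRF version in use.

```
&physics
 bl_pbl_physics    = 7,  7,  7,  7,   ! PBL scheme per domain: 1=YSU, 2=MYJ,
                                      ! 4=QNSE, 5=MYNN2.5, 7=ACM2, 9=UW,
                                      ! 12=GBM, 99=MRF
 sf_sfclay_physics = 1,  1,  1,  1,   ! 1=MM5 similarity; 2=Eta (with MYJ);
                                      ! 4=QNSE surface layer (with QNSE)
/
```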
2.2.2 Initial-Condition Sources

Initial conditions are used as the best estimate of the initial atmospheric state in a model simulation, whereas boundary conditions provide updates to the lateral boundaries of the outermost model domain during the simulation. In this dissertation I refer to both generally as initial conditions. In a similar study over uniform terrain, Deppe et al. (2012) show that different initial-condition sources do lead to different solutions for hub-height winds after progressing forward in time.

For this study, the NCEP GFS and NAM were used for the initial conditions. Each source was tested with all PBL schemes mentioned in section 2.2.1. GFS-initialized WRF runs used the 0000 UTC GFS output at 0.5-degree resolution, while NAM-initialized WRF runs used 0000 UTC 32-km NAM output. The NAM contains many different grid lengths, ranging from 4 to 32 km at the time of this writing. The 32-km master grid was used for this study because it was the only NAM grid that captured the full study area. The NAM (NMM dynamical core) differs from WRF-ARW primarily in its grid configuration: the NAM is a grid-box model operating on an Arakawa-B grid arrangement, whereas the WRF-ARW core used in this study uses an Arakawa-C grid arrangement.

2.2.3 Verification Metrics

Murphy and Winkler (1987) detail a systematic framework for forecast verification. Namely, the joint distribution of forecasts and observations should be considered, and the forecast user or provider should pinpoint which aspects of the forecast are most relevant for their use. Wind power is proportional to the cube of wind speed for individual wind turbines, although the farm-averaged wind power may be slightly less than cubic. Therefore, errors in estimates of wind power are highly sensitive to differences between the forecast and observed wind speeds, which should also be highly correlated. State-of-the-art forecasts should capture wind-speed variations as well as forecast uncertainty due to imperfect models and initial conditions.

This chapter addresses short-term, deterministic forecast performance through the following metrics: Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and the linear correlation coefficient (r). The best forecasts will have small MAE (high accuracy), small RMSE (few large forecast errors), and r closest to 1.0 (forecasts and observations trend with each other). When gauging the performance of a large number of forecasts (in this case 48), a summary plot such as a Taylor diagram (Taylor, 2001) is useful. It shows centered RMSE (CRMSE), r, and standard deviation (the latter showing how well the standard deviation of the forecast values matches the standard deviation of the observations). CRMSE is calculated by removing the mean value of the forecasts and the mean value of the observations from each forecast-observation pair when calculating RMSE [Eqn. (2) in Taylor (2001)].

I also use the standardized anomaly (z-score) to measure which model configurations are better or worse than average. In addition, the MAE skill score (MAESS) is used to quantify the amount of improvement that can be gained by using one configuration over another, and through the use of bias correction (detailed later).
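For reference, the deterministic metrics above can be computed in a few lines of numpy. The function names are mine; MAESS follows the usual skill-score convention (one minus the ratio of a forecast's MAE to a reference MAE), and the standardized anomaly is taken across the set of configuration MAEs.

```python
import numpy as np

def verify(fcst, obs):
    """Deterministic verification metrics (fcst, obs: matched 1-D arrays)."""
    err = fcst - obs
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    r = np.corrcoef(fcst, obs)[0, 1]
    # Centered RMSE: remove each series' mean first [Eqn. (2) in Taylor (2001)]
    crmse = np.sqrt(np.mean(((fcst - fcst.mean()) - (obs - obs.mean())) ** 2))
    return {"MAE": mae, "RMSE": rmse, "r": r, "CRMSE": crmse}

def maess(mae, mae_ref):
    """MAE skill score of one configuration relative to a reference."""
    return 1.0 - mae / mae_ref

def z_scores(maes):
    """Standardized anomaly of each configuration's MAE."""
    maes = np.asarray(maes, dtype=float)
    return (maes - maes.mean()) / maes.std()
```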
Finally, an Analysis of Variance (ANOVA) was done to estimate the importance of three factors (initial-condition source, grid length, and PBL scheme) on hub-height wind-forecast accuracy. An ANOVA is a method of evaluating how much of the overall variance of a quantity (in this case, MAE) can be explained by any number of factors (Wilks, 2011). I use the ANOVA to pinpoint which of these factors has the biggest potential for improving short-term, hub-height wind-forecast accuracy over complex terrain for the wind farms studied here.

This chapter does not address aspects of ensemble forecasting or forecast uncertainty, but the authors would like to mention their importance for wind-energy forecasts. Forecast uncertainty is typically derived from an ensemble prediction system, where multiple models are run, producing a distribution of possible forecast solutions. The resulting forecast probability distribution provides an indication of forecast uncertainty. This aspect of the study is provided in Chapter 3 and Siuta et al. (2017a).

2.2.4 Observations

Hub-height wind-speed observations used to calculate verification metrics in this dissertation are the confidential property of the wind-farm owners. The authors have taken careful consideration to not reveal wind-farm locations and observed values. I refer to the wind-farm sites as stations numbered 1-4. Hub heights ranged from 80 to 95 m, depending on the wind site. All wind farms were located on mountain ridge tops, with elevations ranging from approximately 500 to 1000 m above sea level. I use wind-farm-averaged observations to compare with model output. Wind-farm-averaged observations are derived from nacelle-based observations. The raw, hub-height wind observations from the wind farms were quality controlled by BC Hydro (a utility company independent of the private wind-farm operators) before being made available to the authors.

2.2.5 Bias Correction

I apply a degree of mass balance (DMB), or multiplicative, bias-correction scheme detailed in Bourdin et al. (2014). This statistical post-processing scheme is suitable for wind forecasts because it does not allow negative wind speeds. Bias correction effectively removes the systematic component of error, but is only possible by having access to real-time wind-farm observations. Chapter 3 discusses this bias-correction method and the effects of bias correction on ensemble and probabilistic wind forecasts. I found that a running 30-day training period for bias correction provided the maximum increase in forecast skill over raw forecasts, when compared to other training periods ranging from 1 to 30 days (not shown here).

This bias-correction technique implicitly accounts for average wind-turbine wake effects. However, it is possible that this method does not successfully differentiate between waked and unwaked flows, so some wake-induced biases still exist under specific regimes. I did not use the wind-turbine drag parameterization scheme in WRF, which is not compatible with all the PBL schemes used here.
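As a sketch of how such a multiplicative correction works, the correction factor is the ratio of the mean forecast to the mean observation over the training window. Equal weighting of training days is assumed here for simplicity; Bourdin et al. (2014) also describe weighted formulations.

```python
import numpy as np

def dmb_correct(fcst_new, fcst_train, obs_train):
    """Degree-of-mass-balance (multiplicative) bias correction.

    fcst_train, obs_train: matched forecasts and observations from a
    running training window (a 30-day window worked best here).
    Dividing by DMB keeps corrected wind speeds non-negative.
    """
    dmb = np.sum(fcst_train) / np.sum(obs_train)
    return fcst_new / dmb
```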
2.3 Results and Discussion

Given the complexity of evaluating 48 forecasts, my discussion will be split into two subsections. First, I detail the deterministic performance of each configuration for the bias-corrected forecasts. Second, I discuss the sensitivity of the bias-corrected forecast accuracy to each of the three factors (initial-condition source, grid length, and PBL scheme). Appendix A contains the complete MAE dataset used in the following discussion for the bias-corrected forecasts, and a summary of the forecast skill gained through bias correction.

2.3.1 Deterministic Verification

Annual Taylor diagrams for sites 1-4 (Fig. 2.3) provide a quick summary of the short-term performance of the 48 forecasts. Better forecasts will have lower CRMSE (closer to the center of the gray semi-circles), a linear correlation coefficient (dashed radials) closer to 1.0 (on the abscissa), and will match the standard deviation of the observations (be closer to the yellow line). In Fig. 2.3, similar symbols represent similar PBL schemes and similar colors represent similar grids (i.e., grid length and initial-condition source).

Figure 2.3: Taylor diagrams of bias-corrected wind forecasts for sites 1-4. The symbols in the legend differentiate PBL schemes, where similar colors represent similar grid lengths and initial conditions. Better forecasts will be located close to the yellow curve, which represents the standard deviation of the observations. The best forecasts will be located closest to the bottom of the yellow curve, where there is also perfect forecast-observation correlation (r = 1.0) and no CRMSE. Correlation values are provided by the dashed radials, CRMSE by the thick gray semi-circles (m s-1), and standard deviation by the thin black quarter-circles. The legend naming convention is [PBL]{IC}(Grid).

While each Taylor diagram contains 48 forecasts, several key patterns can be found by looking at the differences in the plots between wind-farm sites. At sites 1 and 4, the Taylor diagrams show grid clustering, meaning forecasts from the same grid length perform similarly, regardless of the PBL scheme or initial condition used. In other words, differences in forecast performance are mainly a function of grid length or of the location of the nearest-neighbor grid point. At site 1, the grid clusters are primarily differentiated by their correlation coefficients, with the 12-km grids having the highest correlation and lowest CRMSE. At site 4, the grids are primarily differentiated by their standard deviation, with the 4-km grids able to most closely match the standard deviation of the observations (the yellow quarter-circle curve). At site 4, however, the 36-km grids have the highest correlation, while having similar accuracy to the 12- and 4-km grids.

At sites 2 and 3, the grid-clustering pattern is not so distinct. At site 2, the 12-km grids provide the closest match to the observations, while the 4-km (36-km) grids generally have too little (too much) standard deviation. Forecasts using the MRF PBL scheme, which is an older scheme, have the best correlation with the observations. At site 3, the grid clustering is evident, but the difference in standard deviation between the different grid clusters is much smaller than at sites 1 and 4. At site 3, the 12-km grids also provide the best forecasts because they have higher accuracy and correlation, while better matching the standard deviation of the observations.

While the Taylor diagrams (Fig. 2.3) have hinted at the large importance of the choice of grid length in complex terrain, there is also a clear difference in the performance of forecasts using different PBL schemes for the same grid length. To quantify which of the factors (grid length, PBL scheme, or initial-condition source) is the most important, I later perform an ANOVA on the distribution of MAE scores.

One other finding here is that using either the GFS or NAM as the initial conditions produced similar forecasts for the short term (see Appendix A).
To simplify subsequent figures, the NAM-initialized members are omitted. One potential reason for the similarity in the GFS- and NAM-produced forecasts for equivalent configurations is that both these models are produced by NCEP and rely on the same underlying data assimilation system, the Gridpoint Statistical Interpolation (GSI; Shao et al., 2016). Thus, their initial states could be more similar than when compared to initial conditions produced by other agencies or data-assimilation methods.

Reducing the dataset further, Fig. 2.4 shows the MAE standardized-anomaly (z) scores for the bias-corrected WRF forecasts at each site for the summer (June, July, and August) and winter (December, January, and February) seasons. Since the standardized anomaly compares the MAE of each scheme to the mean MAE of all schemes, negative values indicate configurations that perform better than average (lower MAE).

Figure 2.4: Accumulated z-score for each of the 48 bias-corrected wind forecasts for the summer (June, July, and August; top) and winter (December, January, and February; bottom). Colors represent individual wind-farm sites, with purple bars indicating the site-averaged performance. A larger-magnitude negative z-score indicates forecasts that are much better than the average. The boxed areas represent the non-local-mixing PBL schemes in the top panel and the 12-km grids in the bottom panel.

Fig. 2.4 shows two distinct patterns in forecast accuracy. PBL schemes that include non-local mixing (the YSU, ACM2, and MRF schemes; boxed, Fig. 2.4) have higher accuracy in summer than those that include only local mixing, although forecasts using the MYJ scheme also generally had better-than-average accuracy. Non-local schemes treat vertical mixing in a more realistic way by recognizing that turbulent eddies can span and/or move through multiple small layers within the boundary layer. In other words, mixing is attributed not only to eddies present within a small, local layer of constant static stability, but also to those that originate in other layers and move through in transit to their level of neutral buoyancy (Stull, 1993). These large eddies will have the most influence in complex terrain over the summer season, where differential heating over sloped terrain can lead to important boundary-layer circulation development.

The 12-km forecast using the ACM2 PBL scheme initialized off the GFS had the best accuracy averaged over all wind farms in summer (purple bars, Fig. 2.4), although other forecasts had better accuracy at individual wind farms. For example, the 4-km forecast using the ACM2 PBL scheme was best at sites 2 and 3, and the 12-km GBM scheme was best at site 4, all initialized off the GFS.

In winter, the MAE z-scores show a different pattern. Here, the 12-km grids for each scheme, regardless of the treatment of vertical mixing, generally perform better (boxed, Fig. 2.4), especially when averaged over all locations. The exceptions are the MYNN and MRF PBL schemes, which perform relatively poorly regardless of grid length. I also find several cases where the 4-km grids produce better-than-average forecasts.

Averaged over all locations, the best forecast for the winter was the 12-km forecast produced using the GBM PBL scheme.
Again, individual sites did show slightly better accuracy with other configurations. Site 1 had the best accuracy from the 12-km grid using the GBM PBL scheme. Site 2 had the highest accuracy using the 4-km grid and ACM2 PBL scheme, site 3 with the 4-km grid and MRF PBL scheme, and site 4 with the 12-km grid and ACM2 PBL scheme. In contrast to the summer, the most accurate bias-corrected forecasts in the winter were initialized off the NAM. In winter, when static stability is generally higher and wind-shear-generated mechanical turbulence is greater, non-local mixing is likely not as important, especially at these latitudes where the sun angle is low. Low-pressure systems frequently approach the coast of British Columbia and move into the interior of North America, and dynamically forced flows have more influence on wind patterns in complex terrain.

While one might expect the 4-km grid to have the best forecast verification due to superior resolution of topography, the results indicate the 12-km grids generally have higher accuracy (with the mentioned exceptions), particularly when averaged over all locations. Mass et al. (2002) suggest that while shorter grid lengths do simulate the structure of atmospheric phenomena better, they are subject to positional error. Mass et al. (2002) and Murphy and Winkler (1987) also mention that skill measures should ultimately be defined by the forecast user. So, while the 4-km grids may produce more realistic-looking feature structures, for the purposes of generation planning the forecast with the least error over the long term at a wind-farm location is ultimately preferred. My results show that higher-resolution grids (those with shorter grid lengths) may provide a poorer-quality forecast than coarser (12-km) grids, while being more computationally expensive to produce (although I do find several cases where the 4-km grids are the most accurate). However, I am not advocating entirely for coarse grids to be used for wind forecasts in regions of complex terrain. The grid-resampling process that occurs with two-way nested domains could result in the 4-km grids having a positive influence on the 12-km grid forecast accuracy. Additionally, it is clear from Fig. 2.4 that the 36-km grids generally produce some of the least accurate forecasts.

To highlight the potential for improvement in forecast accuracy by using particular PBL schemes or grid lengths, I have calculated the MAE skill scores (MAESS) for the best forecast configuration relative to the worst, for each site, for both the summer and winter seasons. In summer, sites 1, 2, and 3 had MAESS values (i.e., a potential improvement in MAE) of 18, 17, and 19%, respectively. Site 4 had a much larger MAESS of 29%. In winter, MAESS were 24, 17, 15, and 13% for sites 1-4, respectively. Differences of this magnitude are significant to the wind-energy community (Marquis et al., 2011), and highlight the importance of careful grid-length and PBL-physics configuration.

Fig. 2.5 shows the MAE z-score for the entire year. These results show the best forecast averaged over the entire study period and all locations to be the 12-km ACM2 forecast initialized with the GFS initial-condition source (purple bars, Fig. 2.5).
Forecasts from the 4-km grid using the ACM2 scheme, and from the 12- and 4-km grids using the GBM scheme, all with the GFS initial-condition source, are also some of the more accurate forecasts.

Figure 2.5: Annual MAE z-score for each of the 48 bias-corrected wind forecasts. Colors represent individual wind-farm sites, with purple bars representing the site-averaged performance. A larger-magnitude negative z-score indicates forecasts that are much better than the average.

I theorize that the strong performance of the ACM2 and GBM schemes can be attributed to two distinct qualities of these respective schemes. The ACM2 scheme treats vertical mixing in the most realistic manner of all the schemes. It allows for both non-local and local mixing via its formulation (Pleim, 2007). This is similar to transilient turbulence theory, which more accurately captures boundary-layer-spanning eddies that occur in convective boundary layers (Stull, 1993). Such mixing characteristics will not be captured by local-mixing schemes. In terms of the GBM scheme, its physical formulation differs from other schemes in that it incorporates moisture into its thermodynamic calculations, which better represents cloud-topped boundary layers (Grenier and Bretherton, 2001). Due to the active storm track along coastal British Columbia, cloud-topped boundary layers are quite common, and this may be a reason why the scheme performs well.

2.3.2 Sensitivity Analysis

So far, this study has shown that proper selection of the PBL scheme and grid length can lead to significantly improved accuracy for hub-height wind forecasts. Also, the best-performing configuration varies by location and season. To quantify which factor (PBL scheme, grid length, or initial condition) is the most important to forecast accuracy at these wind farms, a multi-factor (factorial) ANOVA was done on the MAE scores.
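The factorial ANOVA can be set up as in the sketch below. The MAE table here is a fabricated stand-in (the real values derive from confidential observations); the point is only to show how the percentage of MAE variance explained by each factor, plus a residual, can be obtained, here with the statsmodels package.

```python
from itertools import product
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Stand-in MAE table: one row per ensemble member, with factor labels
rng = np.random.default_rng(1)
rows = [{"pbl": p, "ic": ic, "grid": g, "mae": 2.0 + rng.normal(scale=0.2)}
        for p, ic, g in product(
            ["YSU", "MRF", "ACM2", "MYJ", "QNSE", "MYNN", "GBM", "UW"],
            ["GFS", "NAM"], ["36km", "12km", "4km"])]
df = pd.DataFrame(rows)

# Factorial ANOVA: share of MAE variance explained by each factor; the
# residual row absorbs factor interactions, as in Fig. 2.6
fit = ols("mae ~ C(pbl) + C(ic) + C(grid)", data=df).fit()
table = sm.stats.anova_lm(fit, typ=2)
print(100 * table["sum_sq"] / table["sum_sq"].sum())
```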
Fig. 2.6 shows the contribution to the overall variance in MAE scores for the bias-corrected WRF forecasts attributed to each factor at each wind site. The plot also shows how the contribution to variance changes by season. In addition, the residual column includes the variance that can be attributed to the interaction between two or more factors.

Figure 2.6: Contribution to the variance in MAE scores for the bias-corrected wind forecasts, as a percentage contribution by each factor for each site. Different colors represent different seasons, where blue is summer (June, July, and August), red is fall (September, October, and November), orange is winter (December, January, and February), and green is spring (March, April, and May). The residual represents the variance attributed to factor interactions and other unaccounted differences.

At sites 1 and 4, where the grid clustering was easy to distinguish in Fig. 2.3, the grid length is the dominant factor controlling forecast accuracy for all seasons of the year. Site 4 in particular has the largest influence of the grid length, contributing 70% or more of the variance among the MAE scores for all seasons. At site 1, the grid length explains 50% or more of the variance in MAE, depending on the season. Both these locations also show a seasonal cycle, where the PBL scheme or the residual increase their contributions to the variance in MAE scores. At site 1, the PBL scheme has the largest influence in the spring and summer months, although the grid length is still the dominant factor. At site 4, the influence of the PBL scheme increases in the fall and winter, but explains 12% or less of the variance in forecast accuracy. At site 2, the PBL scheme is the dominant factor, also with a strong seasonal cycle. The PBL scheme explains a minimum of 36% of the variance in winter and a maximum of 62-70% in the spring and summer months. At site 3, the PBL scheme is dominant in the fall and spring, the grid length in the winter, and the initial-condition source in the summer. Site 3 is the only location where the initial-condition choice explains most of the variance for an individual season (52% in the summer). Site 3 is also the location where the PBL scheme explained the highest amount of the variance in MAE for an individual season, at 80% in the spring.

Like the initial analyses, the ANOVA also indicates that the choice of initial and boundary conditions explains little of the variance in forecast accuracy for the short term at most of the locations, the exception being site 3 in the summer. However, there are three potential caveats to this analysis. The first is that both sources are produced by NCEP, with both relying on the GSI data assimilation system. Greater variance may be achieved by using initial-condition sources produced by separate agencies and separate data-assimilation methods. The second is that greater variance may be achieved at longer forecast horizons, since a 24-hour forecast may not have allowed enough time for the small differences in initial conditions to grow significantly. Finally, while the results of the ANOVA make physical sense, the small sample size does not perfectly meet the primary assumptions made in using an ANOVA. Namely, the factor-level data are not perfectly normal, and exhibit some heteroscedasticity.

Since the grid length affects the exact location of the nearest-neighbor grid cell, I posit that the physical location of the nearest-neighbor grid point could contribute to the variance in accuracy scores at some locations (especially those whose accuracy is largely explained by the grid length). To evaluate these effects, I looked at the distance between the nearest-neighbor grid point and the actual coordinates of the wind-farm site. At site 1 (and site 4), where the grid length controlled the MAE variance for individual seasons (Fig. 2.6), I found horizontal distances of 14.5, 4.1, and 1.4 km (and 11.8, 5.5, and 2.1 km) for the 36-, 12-, and 4-km grids, respectively. At site 2 (and site 3), where the PBL scheme was the dominant factor for most seasons, the horizontal distances were 15.2, 6.5, and 2.1 km (and 19.3, 2.7, and 1.1 km) for the 36-, 12-, and 4-km grids, respectively. This indicates that the physical error in the location of the nearest-neighbor grid cell may not be the cause of some locations being more largely influenced by the grid length.

Next, I checked to see if differences in the modeled terrain height at the nearest-neighbor grid points are larger between grid lengths at locations that showed a higher influence of the grid length. Again, this does not seem to explain why some locations are more affected by the grid length (sites 1 and 4, Fig. 2.6), as all locations had similar height variations between nested grids of approximately 100 to 200 m.
I finally checked to see if there was an abrupt change in land-use category, and thus roughness, between the grids, and came to a similar conclusion, although for one location the nearest 36-km grid cell was identified as being over water. Thus, the reason why hub-height wind forecasts at particular sites are more affected by the PBL scheme, while others are more affected by the grid length, remains unclear and is worthy of future study. One theory could be that some locations are more affected by flow channeling and blocking, which may be more largely influenced by the grid length than by the PBL scheme. Appendix C provides a summary of the differences between the model terrain and that of a digital elevation model.

Fig. 2.7 shows the contribution to MAE variance by time of day at each location (over the entire year). The midday (1200-1800 PST) peak in the PBL-scheme contribution to variance for sites 1, 2, and 3 is consistent with the development of slope flows and boundary-layer growth. Forecast accuracy at site 4 does not exhibit a diurnal cycle, with the grid length explaining the most variance throughout the entire day. Since each forecast run was initialized at the same time of day, the results in Fig. 2.7 also show the relative contribution of each factor by forecast horizon, although the signal of the diurnal cycle appears to be more evident than that of forecast horizon.

Figure 2.7: Contribution to the variance in annual MAE scores for the bias-corrected wind forecasts by time of day, as a percentage contribution by each factor for each site. Different colors represent different factors, where blue is the PBL scheme, red is the initial condition, orange is the grid length, and green is the residual. The residual represents the variance attributed to factor interactions and other unaccounted sources.

2.4 Conclusions and Future Work

This study compared the sensitivity of hub-height wind-speed forecasts to the PBL scheme, grid length, and initial-condition source for the WRF model over complex terrain. I have provided several important conclusions that forecasters and researchers should consider when using NWP models to produce hub-height wind-speed forecasts. Namely, I found the grid length and PBL scheme were the most important factors to consider for short-term forecasts (24-hour forecast horizon) in complex terrain, for my data sets. For the majority of locations, the effects of PBL scheme and grid length contribute more to differences in the bias-corrected forecast accuracy than initial-condition choice. This means that choosing the best PBL scheme and grid length over the worst could yield the largest accuracy gains. However, one caveat could be a lack of diversity in initial-condition sources, since the processes used during data assimilation for the NAM and GFS are similar, and both are produced by the same agency.
Initializing WRF with other initial-condition sources, such as the Canadian Global Deterministic Prediction System (GDPS), the Fleet Numerical Meteorology and Oceanography Center Navy Global Environmental Model (NAVGEM), the European Centre for Medium-Range Weather Forecasts Integrated Forecast System (IFS or ECMWF), or the UK Met Office Unified Model (UKMO), could result in a larger sensitivity to initial-condition choice than shown here, and is worthy of future study.

At sites 2 and 3, the PBL scheme was a large or dominant factor controlling forecast accuracy, regardless of the season. At the other two stations (sites 1 and 4), the grid length had the largest influence on forecast accuracy.

Additionally, the influence of the PBL scheme showed a seasonal cycle at sites 1, 2, and 4. At sites 1 and 2, the influence of the PBL scheme (grid length) peaked in the spring and summer (fall and winter) months. At site 4, the influence of the PBL scheme was small, but peaked in the winter. Site 3 had a different seasonal trend, where the PBL scheme and grid length were the largest influences over forecast accuracy for all seasons except the summer.

The influence of the PBL scheme was also largest during times of the day when boundary-layer circulations are expected to develop (midday to early afternoon at sites 1-3). Site 4 showed relatively little variation in which factor contributed the most to the variance in accuracy, regardless of the time of day.

I investigated whether the reason sites 1 and 4 were more heavily influenced by the grid length was differences in the location of the nearest-neighbor grid point. I concluded that the physical distance from the wind-farm location to the nearest-neighbor location was not the primary cause. I also found that the differences in elevation of the nearest-neighbor grid points were similar at all locations, and that land-use types were similar between grids at 3 of the 4 locations. I concluded that the locations that were most strongly influenced by the grid length are likely more impacted by terrain-flow interactions, such as channeling and blocking, than the other locations, but this finding is worthy of future study.

When averaged over all locations and seasons, the ACM2 PBL scheme was the most accurate PBL scheme in complex terrain, although for some specific farms and seasons it was slightly outperformed by other schemes. Energy planners will likely benefit the most by using the ACM2 scheme in complex terrain, since it had the best overall average accuracy (averaged over all locations). Choosing the best PBL scheme and grid-length configuration resulted in reductions of MAE of up to 29% over the poorest configuration, depending on the season, with larger improvements found in the summer.

Improvements of this magnitude can save millions of dollars (Marquis et al., 2011). The sensitivity results provide valuable information to meteorologists, energy planners, and utilities that use wind-energy forecasts.
Namely, savings can be realized through careful experimentation with their WRF configuration, focusing on PBL-scheme and grid-length selection, as well as bias correction.

An interesting study would be to compare the effects of one-way and two-way domain-nesting techniques on hub-height wind forecasts over complex terrain. While I find the 12-km grids to generally have higher accuracy, particularly in the winter, it is possible that this is a result of the resampling and averaging process that takes place between nested grids. The 12-km grid forecast at these locations might be improved over the 4-km grid through this feedback process. A useful follow-up to this study would be to rerun the same tests with one-way nesting. This would eliminate fine-grid resampling and may provide different results with respect to the performance of fine and coarse grid lengths. It would also provide insight into the benefits of one-way versus two-way nested grid setups in WRF over complex terrain.

Because I have shown that the best PBL scheme and grid length vary by location and season, it may be beneficial to use an ensemble of multiple PBL schemes, grid lengths, and initial-condition sources to improve accuracy through use of the ensemble-mean forecast. This study has only evaluated the deterministic aspect of wind forecasts. Capturing forecast uncertainty is critical, especially for energy planning. The use of an ensemble consisting of all available PBL schemes, grid lengths, and initial conditions may be beneficial, especially over complex terrain where boundary-layer circulations are prevalent. Chapter 3 evaluates the bias-correction method and the use of this dataset for ensemble and probabilistic forecasts. Use of probabilistic information has been shown to provide further economic value over individual deterministic forecasts (Zhu et al., 2002; McCollor and Stull, 2008b).

Chapter 3
Calibrated Probabilistic Hub-Height Wind Forecasts in Complex Terrain

3.1 Introduction

Planners and market traders use wind forecasts based on numerical weather prediction (NWP) models to gauge future wind-energy production. Such NWP models are the largest source of uncertainty in wind-energy generation estimates (Monteiro et al., 2009). Forecast uncertainty arises from incorrect initial-condition (IC) sources, incomplete knowledge of atmospheric physics (e.g., turbulence), and improper assimilation of observational data (Buizza et al., 2005). Better estimation of forecast uncertainty and higher forecast accuracy lead to more efficient energy planning, optimized market-trading opportunities, and significant monetary savings (Pinson et al., 2006; Marquis et al., 2011; Mahoney et al., 2012; Wilczak et al., 2015).

Probabilistic weather forecasts are the primary tools used to gain insight on forecast uncertainty. Earlier forecasts used for wind-energy planning were deterministic, meaning they did not provide information on forecast uncertainty (Pinson et al., 2006). However, in the last 10 years, probabilistic forecasts have begun to be adopted throughout the industry as planners started to recognize the value of quantifying forecast uncertainty. Monteiro et al. (2009), Giebel et al. (2011), and Zhang et al. (2014) provide extensive reviews of the history of probabilistic forecasts for wind energy.
Zhang et al. (2014) detail the two standard approaches for creating probabilistic forecasts for wind energy: parametric and non-parametric methods. Parametric methods prescribe a probability distribution of a particular shape that is typically representative of the past forecast-error distribution. Such distributions are described by location and scale parameters, and have the benefit of being computationally cheap (Zhang et al., 2014). Early methods dressed distributions around single deterministic forecast models, with several distributions being tested. Lange (2005) found that the forecast-error distribution calculated from the German Weather Service (DWD) numerical forecasts is well-dressed by a Gaussian distribution. They provided the forecast-error-distribution results for 10-m wind speeds, not hub-height winds, and likely only at a location in flat terrain (the example location is confidential, and a large proportion of the stations tested were in flat terrain). When converted to wind power, forecast-error distributions have been fit by Gaussian (Lange, 2005), Beta (Bludszuweit et al., 2008), and generalized logit-normal distributions (Pinson, 2012).

Figure 3.1: Example ensemble forecast (A) and corresponding probabilistic forecast (B).

Non-parametric approaches do not make a shape assumption. One example is an ensemble of NWP models. Ensemble forecast systems provide a distribution of possible forecast outcomes (Fig. 3.1a), and can be produced by NWP using a variety of methods: 1) using multiple IC sources, 2) stochastically perturbing a single IC, 3) perturbing observations used during data assimilation for observation error, 4) varying model physical-parameterization configurations, 5) varying NWP dynamical cores, 6) varying model grid lengths, 7) using stochastic kinetic-energy backscatter (SKEBS), or 8) stochastically perturbing parameterization tendencies (SPPT) (Stensrud et al., 2000; Grimit and Mass, 2002; Buizza et al., 2005; Eckel and Mass, 2005; McCollor and Stull, 2008b; Candille, 2009; Berner et al., 2009; Palmer et al., 2009).

An ensemble forecast distribution can be converted into a probability distribution (Fig. 3.1b). This is commonly done by using the empirical distribution of ensemble members (Anderson, 1996). Bayesian Model Averaging has also been successfully applied to wind-forecast systems (Hoeting et al., 1999; Raftery et al., 2005; Sloughter et al., 2010; Courtney et al., 2013). Kernel density estimation is another non-parametric method, which links one or more explanatory variables to forecast probability density functions (Juban et al., 2007). While such methods have been shown to work well, the latter two require large datasets and can become computationally expensive (Juban et al., 2007; Zhang et al., 2014).

A hybrid approach is to parameterize the moments of a prescribed probability distribution (such as a Gaussian) as a function of statistical properties of an ensemble (e.g., the ensemble mean, ensemble variance, and ensemble forecast-error distribution; Gneiting et al., 2005; Nipen and Stull, 2011). This method requires running computationally expensive ensembles, but does not require an extensive training dataset. Gneiting et al. (2005) refer to this method as ensemble model output statistics (EMOS).
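A minimal sketch of such a hybrid uncertainty model is given below, assuming a Gaussian shape. The distribution center is a linear recalibration of the ensemble mean, and the spread comes from the training-period errors of that recalibrated mean. This is a simplified moment-based variant; Gneiting et al. (2005) instead fit the coefficients by minimizing the CRPS and let the variance depend on the ensemble variance as well.

```python
import numpy as np
from scipy import stats

def gaussian_emos(ens_now, ens_train, obs_train):
    """Gaussian EMOS-style forecast distribution (simplified sketch).

    ens_now: today's ensemble (1-D array of members); ens_train: past
    ensembles (days x members); obs_train: matching observations.
    """
    xbar = ens_train.mean(axis=1)               # past ensemble means
    a, b = np.polyfit(xbar, obs_train, 1)       # linear recalibration
    mu = a * ens_now.mean() + b                 # distribution center
    sigma = np.std(obs_train - (a * xbar + b))  # spread from past errors
    return stats.norm(loc=mu, scale=sigma)      # frozen Gaussian forecast
```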
Distributions from raw ensembles are often under-dispersive (Stensrud et al., 2000; Eckel and Mass, 2005; McCollor and Stull, 2008b), and must undergo bias correction and probabilistic calibration (Buizza et al., 2005; Nipen and Stull, 2011). When the forecast probabilities match the relative frequency of occurrence, a probabilistic forecast is reliable, or calibrated. Calibrated probabilistic forecasts can be trusted as accurate estimates of forecast uncertainty. However, as Nipen and Stull (2011) discuss, even forecasts based on climatology can be probabilistically calibrated, so assessing calibration is not enough. One way to differentiate among probabilistically calibrated forecasts is to evaluate forecast sharpness (Gneiting et al., 2005; Pinson et al., 2006; Juban et al., 2007). Forecast sharpness is a measure of the width of probabilistic spread, with a deterministic forecast being perfectly sharp (Juban et al., 2007). Pinson et al. (2006) suggest forecast calibration is the primary concern of probabilistic forecasts for wind energy, while any improvement in sharpness represents added value.

The use of ensembles in probabilistic forecasts for hub-height wind speeds is growing, but studies evaluating probabilistic skill are still relatively limited (Junk et al., 2015). Deppe et al. (2012) performed an ensemble study over the flat terrain of Iowa using the Weather Research and Forecasting (WRF, Skamarock et al., 2008) model version 3.1.1, but did not assess probabilistic-forecast calibration. Deppe et al. (2012) used an ensemble of different planetary-boundary-layer (PBL) schemes and IC sources. They found that a multi-PBL ensemble had a more accurate ensemble-mean forecast when compared to an ensemble formed from perturbed initial conditions, although the latter had larger ensemble variance. Therefore, a multi-PBL ensemble might be a suitable choice when using the EMOS method of generating probabilistic forecasts.

Since none of the existing PBL schemes available in WRF were designed for complex terrain, and because many wind features in complex terrain are dependent on PBL evolution, I posit that an ensemble containing several PBL schemes may be beneficial. The main differences between PBL schemes are the treatment of vertical mixing and the statistical order of turbulence closure (Stull, 1988; Stensrud, 2007; Deppe et al., 2012). Differences between PBL schemes could be expected to be larger in complex terrain than was found by Deppe et al. (2012) over flat terrain.

For complex terrain, better methods to produce calibrated, sharp probabilistic hub-height wind-speed forecasts are needed. This study evaluates the performance of multi-PBL, multi-IC, multi-grid-length WRF ensembles at four wind farms in the mountainous terrain of British Columbia, using the empirical ensemble distribution and EMOS methods. The eight PBL schemes used here are the Yonsei University scheme (YSU, Hong et al., 2006; Hu et al., 2013), Asymmetric Convective Model version 2 (ACM2, Pleim, 2007), Medium Range Forecast (MRF, Hong and Pan, 1996), Mellor-Yamada-Janjić (MYJ, Janjić, 1994), Mellor-Yamada Nakanishi and Niino Level 2.5 (MYNN, Nakanishi and Niino, 2006), Quasi-Normal Scale Elimination (QNSE, Sukoriansky et al., 2005), University of Washington (UW, Bretherton and Park, 2009), and Grenier-Bretherton-McCaa (GBM, Grenier and Bretherton, 2001) schemes.
See Chapter 2 for details.

Further, this study provides insight on probabilistic-forecasting techniques for hub-height wind-speed forecasts using the WRF model. It also adds knowledge about the shape of the hub-height wind-speed forecast-error distribution in complex terrain. This work is a follow-up to Siuta et al. (2017b), who evaluated the deterministic-forecast performance of this dataset and the deterministic-forecast sensitivity to the choice of PBL scheme, grid length, and IC source. The rest of this chapter is organized as follows: Section 3.2 describes the research methodology used. Section 3.3 provides a discussion of the results. Finally, Section 3.4 discusses general conclusions and suggestions for future work.

3.2 Methodology

The forecast methodology detailed in Chapter 2 (Figs. 2.1, 2.2 and Table 2.1) was used to produce ensemble members and to calculate wind-forecast probabilities. Forecasts from each ensemble member, as well as hub-height wind-speed observations, were used in the Component-Based Post-Processing System (COMPS, Nipen, 2012) to post-process and evaluate the forecast. Post-processing is anything applied to the raw forecast for improvement, including bias removal, choice of uncertainty model, and any calibration done to adjust the uncertainty model. COMPS is a modular post-processing and verification system, analogous to how WRF is modular for NWP. COMPS uses a series of namelists containing user-defined post-processing options that are applied to the raw forecast input.

The four wind farms used in this study are located on mountain ridge tops, with elevations between 500 and 1000 m MSL. I use wind-farm-averaged nacelle (hub-height) wind-speed observations for post-processing and model evaluation. These observations from the independent power producers are confidential and were provided to me by the local utility company, BC Hydro. BC Hydro, who purchases the wind power, performed quality control on the data. Wind-farm-averaged observations have been shown to better match power generation and output from NWP models than observations from individual turbines or meteorological towers. Farm-averaged values remove sub-grid-scale (intra-wind-farm) variability not currently resolved by NWP models (Cutler et al., 2012). Wind-farm names and locations remain anonymous in the figures shown here. However, statistics and aggregated results based on these observations are provided.

I evaluate several factors affecting the probabilistic hub-height wind-speed forecast in complex terrain, including the effect of using multiple PBL schemes, applying bias correction, and the choice of uncertainty model. These tests are summarized in Table 3.1 and are described in detail in the following subsections.

3.2.1 Effect of Number of PBL Schemes in Ensemble

This study is a follow-up to that by Siuta et al. (2017b), who used the same dataset to examine deterministic hub-height wind-speed forecast sensitivity to the choice of PBL scheme, grid length, and IC source. The outcome of that study showed that the grid length and PBL scheme had the most influence on deterministic-forecast accuracy, but the most influential factor varied by location, season, and time of day. Appendix A provides the mean absolute error (MAE) scores for the bias-corrected forecasts used in this prior study. Siuta et al. (2017b) found the ACM2 PBL scheme to be the best-performing of all the PBL schemes, and the 12-km grid to be the best-performing grid, when averaged over all four wind farms over the entire year. However, the best PBL scheme (and grid length) differed for individual wind farms and seasons (see Appendix A).
In this study I start with only the ACM2 PBL scheme (six total ensemble members), and then test for short-term probabilistic-forecast improvements by adding PBL schemes one by one into the ensemble. I start with the ACM2 PBL scheme because of the prior results mentioned above.

3.2.2 Bias-Correction Technique

I bias correct each individual ensemble member prior to forming the ensemble mean. Wind speeds can never be negative. To satisfy this condition, I use a degree-of-mass-balance (DMB) multiplicative bias-correction technique (Grubišić et al., 2005; McCollor and Stull, 2008a; Bourdin et al., 2014) applied to each ensemble member.

Table 3.1: Summary of the tests performed. Bias correction is detailed in Section 3.2.2. A description of the uncertainty models is provided in Section 3.2.3. Each test ID consists of a prefix identifying the uncertainty model and the bias-correction choice, plus a suffix n (1 through 8) giving the number of PBL schemes in the ensemble. Schemes are added cumulatively in the order ACM2, MYJ, YSU, GBM, UW, MRF, QNSE, MYNN, so a test with suffix n uses the first n schemes in this order and contains 6n ensemble members (three grid lengths times two ICs per scheme).

Test IDs    | Uncertainty model | Bias correction | Ensemble members
R1-R8       | Raw               | No              | 6-48
RB1-RB8     | Raw               | Yes             | 6-48
G1-G8       | GNS               | No              | 6-48
GB1-GB8     | GNS               | Yes             | 6-48
GSV1-GSV8   | GSEV              | No              | 6-48
GSVB1-GSVB8 | GSEV              | Yes             | 6-48
GSM1-GSM8   | GSEM              | No              | 6-48
GSMB1-GSMB8 | GSEM              | Yes             | 6-48

The current bias-correction factor, DMB_t, is calculated from the ratio of past forecast and observation pairs, weighted by an e-folding time τ:

\mathrm{DMB}_t = \frac{\tau - 1}{\tau}\,\mathrm{DMB}_{t-1} + \frac{1}{\tau}\,\frac{\overline{F}_{t-1}}{\overline{O}_{t-1}} .  (3.1)

Here, DMB_{t-1} is the bias-correction factor from the previous day, weighted by (τ − 1)/τ. I used a τ of 30 days because it was found to be the optimum training period when averaged over all locations (Appendix A). The ratio of the mean of the previous day's forecasts (F̄_{t-1}) and observations (Ō_{t-1}) is weighted by 1/τ. The result is that the influence of the most recent forecast-observation pairs decreases with an e-folding time of τ.
This recursive bias correction minimizes the need to store an ever-growing dataset for training. The factor DMB_t is calculated for each individual ensemble member and then applied to the raw ensemble-member forecast F_t to produce a bias-corrected ensemble-member forecast F̂_t:

\hat{F}_t = \frac{F_t}{\mathrm{DMB}_t} .  (3.2)

Eqs. (3.1) and (3.2) use only data that would be available in a true operational setting: current and future observations are not available when the forecast is made.

3.2.3 Choice of Uncertainty Model

The method used to convert the distribution of raw or bias-corrected ensemble members to a probability distribution is called the uncertainty model. The uncertainty model must not assign any probability to wind speeds below zero.

Five uncertainty models are evaluated, four of which are shown in Table 3.1. The first method, referred to as Raw (Table 3.1), uses the empirical distribution of ensemble members to assign probability. This is a basic method that could be used by those producing probabilistic forecasts over areas where observations do not exist.

The second uncertainty model, referred to as GNS (Table 3.1), assumes the form of a Gaussian distribution dressed about the ensemble mean. This Gaussian forecast probability distribution (N_t) is described at any time t by

N_t\!\left(\overline{F}_t, \sigma_t^2\right) .  (3.3)

Here, F̄_t is the bias-corrected ensemble mean calculated from (3.1) and (3.2), where each member is equally weighted. The variance, σ_t^2, is adaptively estimated from the square of the past forecast errors calculated from the ensemble mean, weighted by the same 30-day e-folding time used in (3.1).

The third uncertainty model, GSEV (Table 3.1), scales the variance of the Gaussian distribution based on the ensemble variance. For this method, variance is calculated using a linear regression of the form

\sigma_t^2 = mx + b .  (3.4)

In Eqn. (3.4), x is the raw or bias-corrected ensemble variance, and the constants m and b are determined by regression against the square of the past errors (calculated from the bias-corrected ensemble mean). When used in this manner, the GSEV method could capitalize on theorized spread-skill relationships (Wilks, 2011; Grimit and Mass, 2002). Namely, when ensemble variance is smaller (larger), the spread of the Gaussian distribution (Eqn. 3.4) could be smaller (larger), assuming such a relationship exists. Generally, linear correlation is used to quantify spread-skill relationships, with values over 0.6 considered strong (Grimit and Mass, 2007).

The fourth uncertainty model, GSEM (Table 3.1), scales the Gaussian distribution by the ensemble mean. For this method, x in Eqn. (3.4) represents the ensemble mean. This method could allow the uncertainty model to have larger (smaller) spread when the ensemble-mean-forecast wind speed is high (low). For the GSEM and GSEV uncertainty models, the regressed variables in Eqn. (3.4) (e.g., the ensemble mean or the ensemble variance, and the past squared errors) are also adaptively updated with an e-folding time of 30 days.

Fig. 3.2 illustrates how these three Gaussian-based methods could differ for an idealized three-member ensemble. I posit that the GSEV and GSEM methods could allow for sharper probabilistic forecasts than the GNS method. For each of these distributions, I also test for probabilistic-forecast improvements resulting from adding more PBL schemes to the ensemble, and through ensemble-member bias correction.
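To make the recursive update in Eqns. (3.1)-(3.3) concrete, the following minimal Python sketch applies the DMB bias correction and one plausible reading of the adaptively estimated GNS variance to a stream of daily forecast-observation pairs. It is an illustration only, not the COMPS implementation; the function and variable names are my own, and the single values in the toy loop stand in for the daily means of Eqn. (3.1).

```python
import numpy as np

TAU = 30.0  # e-folding time (days) used in this study

def update_dmb(dmb_prev, fcst_mean_prev, obs_mean_prev, tau=TAU):
    """Recursive DMB update, Eqn. (3.1)."""
    return (tau - 1.0) / tau * dmb_prev + (1.0 / tau) * fcst_mean_prev / obs_mean_prev

def correct(fcst, dmb):
    """Multiplicative bias correction, Eqn. (3.2); keeps speeds non-negative."""
    return fcst / dmb

def update_variance(var_prev, error, tau=TAU):
    """Adaptive GNS variance: e-folding-weighted average of past squared errors."""
    return (tau - 1.0) / tau * var_prev + (1.0 / tau) * error**2

# Example daily loop for one member (hypothetical forecast/observation values)
dmb, var = 1.0, 1.0
for fcst, obs in [(8.2, 7.5), (6.1, 6.8), (9.4, 8.0)]:
    fcst_hat = correct(fcst, dmb)            # today's bias-corrected forecast
    # GNS would dress N(ensemble mean, var) about the bias-corrected mean here
    dmb = update_dmb(dmb, fcst, obs)         # ready for tomorrow's forecast
    var = update_variance(var, fcst_hat - obs)
```

Note that the observation enters only after the day's forecast has been issued, mirroring the operational constraint stated above.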
An overview of the tests performed is given in Table 3.1.

A fifth uncertainty model, based on a new distribution (which I call the pq distribution), is also tested. It allows for optimization of kurtosis to better match the distribution of past forecast errors (shown later). Probabilistic-forecast results using this model did not improve on those from the Gaussian-based models. I felt it important to show the results of this experiment, but the details are relegated to Appendix B.

3.2.4 Verification Metrics

As my focus in this chapter is probabilistic forecasting, I concentrate forecast evaluation on distributions-oriented verification metrics, which use the joint distribution of forecasts and observations (Wilks, 2011).

Figure 3.2: Illustration of an ensemble-forecast meteogram and Gaussian-based uncertainty models (above the meteogram). Individual ensemble members are indicated by the colored, dashed curves and the ensemble mean by the black curve. Each forecast time has a probability density function based on the Gaussian distribution. The grey curve indicates a Gaussian distribution that does not scale (GNS) during the forecast. The pink curve is a Gaussian distribution that scales with ensemble variance (GSEV), while the red curve scales with the ensemble mean (GSEM).

These types of metrics provide advantages over deterministic measures when assessing probabilistic forecasts. Because they evaluate the full probabilistic-forecast distribution, they provide insight into how well forecast uncertainty is represented by the forecast probability distribution. Accurate representation of forecast uncertainty allows for optimum decision making and cost minimization (Pinson et al., 2006; Juban et al., 2007; McCollor and Stull, 2008c).

As with deterministic verification, the choice of verification metrics should be decided based on the purpose of the forecast and the needs of the end user. Gneiting et al. (2005, 2007) and Juban et al. (2007) propose that ideal probabilistic forecasts achieve calibration while maximizing forecast sharpness (both are described in more detail later). For this I use the Probability Integral Transform (PIT) histogram, the reliability diagram, and the continuous ranked probability score (CRPS). Skill scores are also used to quantify improvements in verification metrics of a test forecast configuration over those of a reference forecast (Wilks, 2011).

Calibration is a measure of how well forecast probability represents the actual frequency of event occurrence (Wilks, 2011; Nipen and Stull, 2011; Gneiting et al., 2007, 2005). The PIT histogram provides a concise method of evaluating ensemble probabilistic calibration. PIT values are calculated by finding which forecast percentile matches the associated observation for individual forecast-observation pairs. A histogram is generated by counting the frequency of occurrence of observations in each forecast-percentile bin. This histogram should be flat for a calibrated forecast, and the deviation from flatness can be used to gauge calibration, as was shown in Nipen and Stull (2011).

When the PIT histogram is not flat, it can indicate two things: 1) the forecast is under- or over-dispersive (needs calibration), or 2) the forecast has bias. Both of these situations can be fixed by calibration methods that adjust the probability distribution, resulting in flatter PIT histograms.
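As an illustration of how PIT values and a flatness-based calibration measure can be computed, consider the short Python sketch below. It is a simplified reading of the procedure described above, in the spirit of the deviation measure of Nipen and Stull (2011); the function names and the synthetic data are mine, and this is not the COMPS verification code.

```python
import numpy as np

def pit_values(ens_forecasts, observations):
    """PIT value = fraction of ensemble members at or below the observation.

    ens_forecasts: (n_times, n_members) array; observations: (n_times,) array.
    """
    return np.mean(ens_forecasts <= observations[:, None], axis=1)

def calibration_deviation(pit, n_bins=10):
    """RMS deviation of the PIT histogram from flatness (smaller is better)."""
    counts, _ = np.histogram(pit, bins=n_bins, range=(0.0, 1.0))
    freq = counts / counts.sum()
    return np.sqrt(np.mean((freq - 1.0 / n_bins) ** 2))

# Toy example: a statistically consistent 48-member ensemble gives a flat histogram
rng = np.random.default_rng(0)
ens = rng.normal(8.0, 2.0, size=(1000, 48))
obs = rng.normal(8.0, 2.0, size=1000)
print(calibration_deviation(pit_values(ens, obs)))
```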
Gneiting et al. (2007) highlight that while the use of PIT histograms is prudent for assessing forecast calibration, calibration is hardly the only important aspect of a probabilistic forecast. They note that forecasts that maximize sharpness, while maintaining calibration, are best. In addition, forecast calibration may not hold over a subset of events. For wind-energy applications, sharper probabilistic forecasts (when also probabilistically calibrated) result in more efficient energy-reserve resource planning and market-trading opportunities, because uncertainty (typical error from the ensemble mean) in the forecast is reduced (Juban et al., 2007). Because even forecasts based on climatology can be probabilistically calibrated, forecasts must be sharper than climatology to be considered skillful.

The observations from a previous 15-day window are used to define climatology at each hour of the day. The choice of a 15-day window is two-fold: (1) limited observations are available with only a year-long dataset, and (2) a shorter definition of climate (compared to one that averages all days of the year) could better represent seasonality. I do not use any future observations to define climate, as this could make the climate-forecast skill falsely appear better.

Reliability diagrams allow for the assessment of forecast calibration for a subset of events given a threshold (Wilks, 2011). For this study, wind-speed thresholds of 5, 15, and 20 m s-1 are used to represent low winds, turbine-rated winds, and high winds, respectively. Turbine-rated speeds are those at which wind turbines start to generate maximum power output (i.e., increasing winds will no longer increase power output). This speed can differ between turbine models.

A sharpness histogram is located within the reliability diagram. Sharpness is a function of only the forecast, and is a measure of the width of a forecast probability distribution (Gneiting et al., 2007). Given a forecast threshold, it is a count of how many times this threshold value lands in each probability bin. Forecasts that most often place this threshold near either the 0th or 100th percentile are said to be sharp, while forecasts that frequently place the threshold near the 50th percentile are not sharp. Narrower forecast probability distributions are sharper. While a sharper forecast is useful because it is more decisive, caution must be used to ensure that sharp forecasts are also probabilistically calibrated (i.e., a falsely confident forecast is not useful).

The CRPS is analogous to the MAE, but for probability. It is a measure of how well probability is assigned around each observed value (Hersbach, 2000). A lower CRPS indicates a forecast that better assigns probability with respect to actual event occurrence (i.e., it assigns higher probability to the event). A perfect forecast has a CRPS of 0. The CRPS reduces to the MAE for deterministic forecasts (the cumulative probability distribution takes the form of a step function), which makes it possible to compare ensemble and deterministic forecasts in a probabilistic sense. In addition, the CRPS provides a strictly proper scoring metric that assesses both forecast calibration and sharpness in a single score (Gneiting et al., 2005). Strictly proper scores promote honesty in the sense that forecasters cannot hedge their forecasts to achieve a better verification score (Wilks, 2011).
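For the Gaussian uncertainty models used here, the CRPS has a well-known closed form (Gneiting et al., 2005), which the hedged sketch below evaluates. The function name is mine, and this is a minimal illustration rather than the verification code used in this study.

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(mu, sigma, obs):
    """CRPS of a N(mu, sigma^2) forecast against an observation.

    Closed form: sigma * [ z*(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi) ],
    with z = (obs - mu) / sigma (Gneiting et al., 2005).
    """
    z = (obs - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

# As sigma -> 0 the CRPS approaches the absolute error |obs - mu| (the MAE link)
print(crps_gaussian(10.0, 2.0, 8.5))   # probabilistic forecast
print(crps_gaussian(10.0, 1e-6, 8.5))  # nearly deterministic: ~1.5
```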
The CRPS skill score is calculated by comparing the CRPS of a probabilistic forecast to that of a reference probabilistic forecast.

Lastly, I refer to root-mean-square error (RMSE) as a deterministic measure of forecast accuracy, and also relate it to probabilistic-forecast sharpness, since RMSE is directly related to the spread of the Gaussian uncertainty model.

3.3 Results and Discussion

3.3.1 Raw Ensemble Distribution as a Probability Distribution

Tests R1-R8 (Table 3.1) are designed to quantify improvements in probabilistic calibration due to adding more PBL schemes to a short-range ensemble forecast in complex terrain. Starting with the best PBL scheme (ACM2, test R1), additional PBL schemes were added to the ensemble in tests R2-R8. Each additional PBL scheme added six members to the ensemble, comprising three grid sizes and two ICs. R1-R8 used the binned raw-ensemble distribution as the forecast-probability distribution. Tests RB1-RB8 followed the same method, but used ensemble-member bias correction.

Fig. 3.3 shows the improvement in probabilistic calibration resulting from adding up to eight PBL schemes (i.e., improvement over R1). For the raw-ensemble distribution, forming a multi-PBL-scheme, multi-IC, and multi-grid ensemble leads to large improvements in probabilistic calibration over an ensemble consisting of just a single PBL scheme (six members). For example, using two PBL schemes (12 members; R2, Table 3.1) results in a 9% calibration improvement over one PBL scheme (R1, Table 3.1). A statistically significant 19% improvement in calibration (relative to R1, Table 3.1) results from using all eight PBL schemes (R8, Table 3.1). Throughout this chapter, I refer to statistical significance as exceeding the p = 0.05 significance level in a Student's t-test.

Once bias correction is applied, using all eight PBL schemes (RB8, Table 3.1) provides a 27% improvement in calibration (relative to RB1, Table 3.1). A 14% improvement is found when using only two PBL schemes (RB2, Table 3.1). Differences in forecast calibration between RB1 and RB8 are statistically significant. Additionally, improvements in calibration through the application of bias correction also pass statistical-significance testing (e.g., comparing tests R1-R8 to RB1-RB8).

Even though R8 and RB8 were the best within their respective groups of experiments, their PIT histograms (Fig. 3.4) show that the binned raw-ensemble distribution is under-dispersive, even with a 48-member ensemble. The probability forecast is too confident, and events fall too often at the extremes of the distribution. To fix this, a better uncertainty model is needed, which I address through the next set of tests.

3.3.2 Prescribed Probability Distributions Dressed on the Ensemble Mean

Tests G1-G8 (Table 3.1) use the GNS uncertainty model dressed about the raw ensemble mean, with spread (variance) based on the past squared errors of the ensemble mean. Using this method with the best overall PBL scheme (G1), or all eight PBL schemes (G8), results in a biased forecast (Fig. 3.5).
To remove the bias, I performed bias correction on each ensemble member in tests GB1-GB8.

Figure 3.3: Improvement in PIT-histogram calibration for one year of hourly wind forecasts by using more PBL schemes in the ensemble, for the raw ensemble distribution (R2-R8, Table 3.1) and bias-corrected ensemble distribution (RB2-RB8) uncertainty models. Improvement is based on the reduction in deviation between bins of the PIT histogram (such as for the PIT histogram in Fig. 3.4). Larger improvement is better.

Figure 3.4: PIT histograms indicating an under-dispersive ensemble for tests R8 and RB8. Under-dispersion occurs when observed events fall too often at or outside the extremes of the ensemble forecast distribution. Flatter PIT histograms are better.

Figure 3.5: PIT histograms comparing the results of the three Gaussian-based distributions for the six- and 48-member ensembles, prior to and after bias correction. Labels are the test names (Table 3.1). Flatter PIT histograms are better (closer to the horizontal dashed line).

Tests GB1-GB8 also used the GNS uncertainty model, but centered the forecast probability distribution on the bias-corrected ensemble mean. The removal of the bias resulted in a nearly flat PIT histogram, indicating a calibrated forecast (GB1 and GB8 in Fig. 3.5). A separate calibration step is therefore not needed.

Next, I tested the two additional Gaussian-based uncertainty models that allow the distribution to scale by either the ensemble variance or the ensemble mean. First, I scaled the distribution by the ensemble variance in tests GSV1-GSV8 (no ensemble-member bias correction, Table 3.1) and GSVB1-GSVB8 (with ensemble-member bias correction, Table 3.1). Tests GSV1-GSV8 and GSVB1-GSVB8 show nearly identical results to those using the non-scaling Gaussian uncertainty model (GNS). PIT histograms for GSV1 and GSV8 show a biased probabilistic forecast (GSV1 and GSV8; Fig. 3.5) that becomes well calibrated once ensemble-member bias correction is applied (GSVB1 and GSVB8; Fig. 3.5). Tests GSM1-GSM8 and GSMB1-GSMB8 yielded similar results (Fig. 3.5) for scaling the Gaussian distribution by the ensemble mean. When using any of the Gaussian-based uncertainty models, improvements in calibration through the use of the bias-corrected ensemble members passed significance testing.

Fig. 3.6 shows the actual distributions of wind-speed forecast errors with respect to the bias-corrected ensemble mean. Each histogram is for a different wind farm. By inspection, these distributions do not have a Gaussian shape. They have more probability near the center (kurtosis = 3.5 to 6.6) than a Gaussian (kurtosis = 3), but they are nearly symmetric (skewness of less than 0.5, close to the Gaussian's zero skewness). This motivates two questions: a) what aspects of the Gaussian distribution allow it to work so well to produce calibrated probabilistic forecasts, and b) can I find a different theoretical distribution that is a better fit to the observed error distributions? I address question one next, and question two in Section 3.3.6 and in Appendix B.
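The skewness and kurtosis values quoted above can be estimated from a sample of ensemble-mean forecast errors with a few lines of Python. This hedged sketch uses SciPy's Pearson kurtosis convention (a Gaussian has a value of 3) and synthetic stand-in data, since the wind-farm observations are confidential.

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(1)
# Laplace errors: a symmetric, peaked stand-in for ensemble-mean wind errors
errors = rng.laplace(0.0, 1.2, size=8000)

print("skewness:", skew(errors))                    # near 0 if symmetric
print("kurtosis:", kurtosis(errors, fisher=False))  # Pearson: Gaussian = 3
```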
Perhaps all that is needed to produce calibrated forecasts is almost any distribution shape that has central tendency, spread about the mean, and symmetry. The fact that the Gaussian distribution has tails that unphysically extend to ±∞ appears to be irrelevant to its ability to satisfactorily dress the ensemble mean. So the next question is: how much spread in the distribution is needed?

First, I investigate whether larger forecast errors correspond to a) greater spread among the raw ensemble members, or b) larger ensemble-mean-forecast winds. To do this, I calculated the coefficient of determination (r2) for the linear relationship between the squared error of the bias-corrected ensemble mean and either the bias-corrected ensemble variance or the bias-corrected ensemble mean. I found that the relationship between the bias-corrected ensemble variance and the squared error had r2 values of 0.027 or less. This means that only 2.7% of the error variance can be explained by the linear relationship between the ensemble variance and the squared-error magnitude (i.e., the spread-skill relationship does not apply at these four locations in complex terrain).

The corresponding analysis comparing the bias-corrected ensemble mean with the squared-error magnitude resulted in r2 values no larger than 0.053. The implication is that the Gaussian distributions used in GB1-GB8, GSVB1-GSVB8, and GSMB1-GSMB8 are similar, since the scaling relationships are weak. At some of the four locations, the differences between the GNS, GSEV, and GSEM distributions are statistically insignificant. Because of this, the next analyses focus on the GNS distribution used in tests GB1-GB8.

An r2 value of 1 is not expected or feasible, as several studies have noted (Whitaker and Loughe, 1998; Grimit and Mass, 2007; Hopson, 2014). Some of those studies, and Wang and Bishop (2003), propose potential methods for extracting a stronger spread-skill relationship. Exploring those methods is beyond the scope of this study. Hopson (2014), however, concludes that in the case of a weak spread-skill relationship, an invariant spread, derived from ensemble-mean error, may be used instead. I reach the same conclusion here.

Figure 3.6: Actual wind-speed forecast-error distribution about the ensemble mean in test GB8 for each of the four wind farms.

Having addressed calibration, I now look at sharpness. For the GNS uncertainty model, spread is only a function of the past error of the bias-corrected ensemble mean (Eqn. 3.3). The smaller the error (which can be measured by RMSE), the narrower the prescribed forecast probability distribution, and the sharper the forecast. I address this in the next section.

3.3.3 Use of Ensembles to Improve Forecast Sharpness

The benefit of using ensembles at short-term forecast horizons has been debated, especially for near-surface variables like hub-height winds (Mass et al., 2002; Eckel and Mass, 2005; Warner, 2011). I approach this question by looking at the effects of an ensemble on forecast sharpness. When using the GNS uncertainty model, forecast systems that have lower RMSE must have smaller probabilistic spread (improved sharpness).

Fig. 3.7 shows the average RMSE for each ensemble member, as well as the six-member bias-corrected ensemble from the best PBL scheme (GB1, Table 3.1) and the full 48-member bias-corrected ensemble (GB8, Table 3.1).
While probabilistic calibration could be achieved by dressing a Gaussian distribution around any of these individual ensemble members, the sharpest forecasts are produced by dressing the distribution on the ensemble means from GB1 or GB8, which have the lowest RMSE. While I show that the best-PBL ensemble (GB1) and full-PBL ensemble (GB8) have equivalent sharpness, it is not possible to know which PBL scheme is best a priori, unless it has been shown to consistently perform best over an extended period of time. Nonetheless, these results indicate that larger ensembles do not necessarily perform better in short-term forecasts than smaller, selective ensembles. This is an important finding for short-term wind planning, as it reduces computational cost while still providing sharp, probabilistically calibrated forecasts.

Figure 3.7: Annual RMSE for each bias-corrected ensemble member, the six-member ensemble in test GB1, and the 48-member ensemble in test GB8, averaged over all four wind sites. Ensemble members are named under the convention [PBL][IC][Grid (km)]. Smaller RMSE is better.

3.3.4 Probabilistic Forecasts for Wind Events

While the bias-corrected ensemble forecasts show statistical calibration for the data used in the training period, they are not necessarily calibrated over subsets of these data. Forecasts of rare events, which in this case are high-wind events, can be uncalibrated. To assess calibration over different wind-speed subsets, I provide reliability diagrams using event thresholds of 5, 15, and 20 m s-1 for tests RB1, RB8, GB1, GB8, and also for a GNS distribution dressed on the single best bias-corrected ensemble member (ACM2 12-km GNS; not in Table 3.1) (Fig. 3.8).

RB1 and RB8 are under-dispersive for an event threshold of 5 m s-1, while GB1, GB8, and the ACM2 12-km GNS are calibrated. However, for the 15 and 20 m s-1 thresholds, all forecasts exhibit poor calibration, typically overforecasting probabilities. The horizontal dashed line in Fig. 3.8 represents the climatological frequency of events above the given threshold. Out of a total sample of 32,442 observations, 20,455 (63%) are above 5 m s-1, 603 (1.8%) are above 15 m s-1, and 35 (0.1%) are above 20 m s-1. For the 15 m s-1 threshold, forecasts have near-zero skill (close to the red dashed line, Fig. 3.8). For the 20 m s-1 threshold, the probabilistic forecast has no skill. The lack of probabilistic-forecast calibration for high-wind events is likely because these events comprise a small percentage of the training data, and forecast biases for these events may differ from those of the bulk of the training data. Separate calibration for different thresholds may alleviate this issue.

Insets within the panels of Fig. 3.8 are sharpness diagrams for the event thresholds. These diagrams indicate how often the event threshold falls in each forecast-probability bin. Sharper forecasts more frequently give event probabilities close to 0% or 100%, while less sharp forecasts give event probabilities closer to 50%. For all thresholds, the raw bias-corrected-ensemble distribution (RB1 and RB8) produces the sharpest forecasts. However, sharpness without calibration is worthless. The high sharpness is a result of the under-dispersive, over-confident nature of raw-ensemble distributions. Each of the GNS distributions (GB1, GB8, ACM2 12-km GNS in Fig. 3.8) produces sharp forecasts for each of these event thresholds.
3.3.5 Summary of Skill vs. Computational Cost

As a summary of the forecast improvements obtained by using probability-based forecasts, I plot the CRPS skill score, which allows a comparison of deterministic and ensemble forecasts in a probabilistic sense (Fig. 3.9). I use the worst-performing deterministic forecast (the bias-corrected 36-km QNSE forecast) as the reference forecast to gauge short-term probabilistic-forecast improvements. Using the best deterministic forecast (12-km ACM2 forecast) provides an 11% improvement in the CRPS, averaged over all stations. A six-member ensemble forecast based on the best PBL scheme, using the raw-ensemble distribution as the uncertainty model (RB1, Table 3.1), improves the CRPS by an additional 17% (or 28% over the worst deterministic forecast). Using the raw distribution with the full 48-member ensemble (RB8, Table 3.1) provides an additional improvement in CRPS of 3%, or a 31% improvement over the worst deterministic forecast.

Figure 3.8: Reliability diagrams for tests RB1, RB8, GB1, GB8, and the GNS distribution dressed over the bias-corrected 12-km ACM2 deterministic forecast initialized off the GFS (ACM2 12-km GNS), for event thresholds of 5, 15, and 20 m s-1. Reliability curves closer to the diagonal 1:1 line (thick grey) are better (i.e., are more calibrated). The blue dashed line represents the climatological event-threshold frequency. The red dashed line represents the no-skill line. The inset figure is the sharpness histogram, a count of the number of occurrences the event threshold is forecast in each probability bin (from the 0th to the 100th percentile in 10% increments). Sharper forecasts assign mostly 0th and 100th percentiles, while non-sharp forecasts issue mainly 50th-percentile forecasts.

Fig. 3.9 shows the large improvement that is gained by dressing the ensemble mean with an uncertainty model. The bias-corrected ACM2 12-km GNS provides a better probabilistic forecast than that of RB8 (on average), at a fraction of the computational cost. CRPS is improved over RB8 by an additional 5%. Test GB1, which uses all the ACM2-based forecasts, provides the most CRPS skill, with a 41% improvement over the worst deterministic forecast, and an additional 5% improvement over the best deterministic forecast dressed by the GNS distribution. No further improvement is gained with a larger ensemble for this short-term forecast (GB8, Fig. 3.9), while computational cost increases. All ensemble forecasts in Fig. 3.9 (i.e., tests RB1, RB8, ACM2 12-km GNS, GB1, and GB8) have probabilistic skill (increased sharpness) over a climatology forecast, while the two deterministic forecasts do not (the ACM2 12-km deterministic and the 36-km QNSE deterministic reference forecasts; not explicitly shown).

Figure 3.9: CRPS skill scores for tests RB1, RB8, GB1, GB8, the ACM2 12-km GNS, and the bias-corrected deterministic 12-km ACM2 forecast initialized off the GFS. Skill is calculated relative to the worst-performing deterministic forecast, the 36-km QNSE scheme initialized with the NAM. A larger CRPS skill score is better.

Figure 3.10: Typical 30-day training-period past-error distribution (vertical bars) fit by the pq probability distribution (smooth curve). Bins are every 0.2 m s-1.
3.3.6 Shape of the Prescribed Distribution

In Section 3.3.2, I suggested that the exact shape of the prescribed distribution used to dress the ensemble mean is less important than the distribution's gross attributes: central tendency, spread, and symmetry. As a preliminary test of this hypothesis, I use a new distribution, called the pq distribution. This distribution can better fit the shape of the observed wind-forecast-error distribution (Fig. 3.10). The pq distribution is symmetric and bounded; see Appendix B for details.

When Fig. 3.9 was re-calculated using CRPS from the best-fitting pq distributions, the results (not shown) were nearly identical to those from the GNS distribution. The lack of improvement in CRPS through the use of a better-fitted distribution (pq) is preliminary evidence that the distribution's gross attributes are more important than its exact shape. Although the wind-forecast errors were nearly symmetric for the four wind farms studied here, there might be other situations where the forecast-error distributions are asymmetric or even multi-modal (i.e., not Gaussian; not pq), and other distributions would better describe the errors.

3.4 Conclusions and Future Work

I detailed methods to produce calibrated hub-height wind-speed forecasts over complex terrain from a multi-PBL-scheme, multi-IC, multi-grid-length WRF ensemble. Tests evaluated the effects of having multiple PBL schemes in the ensemble, bias correction, and uncertainty-model choice. Probabilistic-forecast performance was evaluated based on improvements in the PIT histogram, reliability diagram, sharpness, CRPS, and RMSE.

For the binned raw-ensemble distribution, increasing the number of PBL schemes improved forecast calibration. However, this distribution was under-dispersive, and remained probabilistically uncalibrated even after removing the bias from each individual ensemble member.

To improve probabilistic calibration, I tested whether a prescribed uncertainty model could be used to better represent forecast uncertainty. Three Gaussian-based uncertainty models and one pq model were evaluated. The first had no scaling; its distribution was based on the past error. The second and third Gaussian-based uncertainty models scaled the Gaussian distribution based on a linear regression of either the ensemble variance or the ensemble mean against the square of the past errors from the ensemble mean. Using any Gaussian uncertainty model without individual-member bias correction yielded biased PIT histograms. However, bias correcting each individual ensemble member resulted in probabilistically calibrated forecasts for all three Gaussian-based uncertainty models.

When using the prescribed uncertainty models, using additional PBL schemes did not result in improved probabilistic calibration. Therefore, large ensembles are not necessary to produce probabilistically calibrated forecasts. Instead, the addition of multiple PBL schemes improved sharpness. Although the RMSE (and thus sharpness) was similar when using the full 48-member ensemble and the reduced six-member ensemble using only the ACM2 PBL scheme, the single best PBL scheme may not be known a priori.
If the best PBL scheme is known, a selective ensemble can be used to save computational cost.

Additionally, the ensemble-mean RMSE, and thus sharpness, may be improved through the use of ICs produced by other agencies. Examples include the Canadian Global Deterministic Prediction System, the Fleet Numerical Meteorology and Oceanography Center Navy Global Environmental Model, the UK Met Office Unified Model, and the European Centre for Medium-Range Weather Forecasts Integrated Forecast System. The reason I suggest testing these other sources is that the underlying data-assimilation system used for both the GFS and the NAM is the same: the Gridpoint Statistical Interpolation (Shao et al., 2016). Differences between ICs may be larger when the sources come from distinctly different agencies (those using different data-assimilation techniques).

My error analysis shows that the Gaussian distribution describes the forecast error well, but that the forecast error has little relationship to either the ensemble mean or the ensemble variance. I also found that the gross attributes (central tendency, spread, symmetry) of a prescribed distribution are more important than its exact shape.

Chapter 4

A Flux-Profile Relationship for Winds over Mountain Ridges

4.1 Introduction

The atmospheric surface layer (ASL) typically occupies the lowest 10% of the planetary boundary layer (Stull, 1988). The ASL has a depth that ranges from several tens to over a hundred meters. The effects of the Coriolis force are negligible within this layer, such that wind direction does not vary significantly with height. Additionally, turbulent fluxes of heat and momentum in this layer are approximately constant with height and play a large role in the shape of vertical profiles of wind and temperature. The shapes of wind and temperature profiles in the ASL are largely influenced by land-atmosphere interactions (Jiménez et al., 2012).

Wind-turbine hub heights (50- to 120-m AGL) are often located within the ASL. Therefore, accurate parameterization of turbulent fluxes in the ASL, and thus representation of vertical profiles of wind, is critical to the wind-energy community. Such parameterizations are commonly used for wind-resource estimates, which extrapolate near-surface (10-m) wind observations up to hub height. ASL schemes are also used within numerical models like the Weather Research and Forecasting model (WRF, Skamarock et al., 2008) to parameterize surface fluxes. However, most methods of parameterizing fluxes in the ASL (and thus the shape of vertical wind profiles) have been derived from observations over flat, homogeneous terrain.

Perhaps the most widely used flux parameterization is Monin-Obukhov Similarity Theory (MOST, Monin and Obukhov, 1954; Panofsky, 1963). MOST parameterizes surface fluxes of wind and temperature, and thus ASL wind and temperature profiles, through the use of two dimensionless quantities: the non-dimensional wind shear φm and the non-dimensional temperature gradient φh. They take the form:

\phi_m = \frac{kz}{u_*}\,\frac{\partial u}{\partial z} ,  (4.1)

\phi_h = \frac{kz}{\theta_*}\,\frac{\partial \theta}{\partial z} ,  (4.2)

where k is the von Karman constant, which has been shown to have a value of approximately 0.4 (Stull, 1988), z is height above the ground, u is wind speed, and θ is potential temperature.
Variables u∗ and θ∗ are the friction velocity and temperature scales, respectively, and are related to the surface momentum (τ) and sensible heat (H) fluxes as follows:

\tau = \rho u_*^2 = \rho C_D u^2 ,  (4.3)

H = -\rho c_p u_* \theta_* = -\rho c_p C_H u (\theta - \theta_g) ,  (4.4)

where ρ is air density, c_p is the specific heat at constant pressure, and θg is the potential temperature at the ground. Variables C_D and C_H represent the bulk transfer coefficients of momentum and heat, respectively.

Eqns. (4.1) and (4.2) can be integrated with respect to height to diagnose the wind and temperature in the ASL when the variables u∗ and θ∗ are known:

u(z) = \frac{u_*}{k}\left[\ln\!\left(\frac{z}{z_o}\right) - \psi_m\right] ,  (4.5)

\theta(z) - \theta_g = \frac{\theta_*}{k}\left[\ln\!\left(\frac{z}{z_o}\right) - \psi_h\right] .  (4.6)

Eqn. (4.5) gives the logarithmic wind profile, where z_o is the roughness length and ψm is the integrated similarity function for momentum. In Eqn. (4.6), ψh is the integrated similarity function for heat. The integrated similarity functions provide an adjustment to the standard logarithmic profile shape, with values dependent on the static stability. φm and φh, and their corresponding integrated functions ψm and ψh, are empirically estimated from field data, as described next.

4.1.1 Empirical Formulas for φm and φh

Several empirical functions for φm and φh have been derived from observations over homogeneous terrain, with perhaps the most well known being the Businger-Dyer (BD, Businger et al., 1971; Dyer, 1974) relationships derived from observations at a 32-m tower in western Kansas. Other forms have been suggested by Webb (1970), Hicks (1976), Zhang and Anthes (1982), Beljaars and Holtslag (1991), Santoso and Stull (2001), and Arya (2001), among others. Optis et al. (2016) provide a summary of several existing similarity relationships and show that the largest profile differences at the Cabauw meteorological tower in the Netherlands occur during statically stable conditions.

Empirical functions for φm and φh generally take similar log-linear forms, such as:

\phi_{m,h} = [\alpha + \beta\zeta]^{\gamma} .  (4.7)

Here, ζ is the Monin-Obukhov stability parameter, z/L, where L is the Obukhov length. The α, β, and γ parameters are empirical constants derived from experimental data. Inclusion of the ζ parameter allows for a dependence on boundary-layer static stability.

For φm, most studies find a linear relationship during statically stable regimes (ζ > 0) with γ = 1. For example, the BD relationships (Businger et al., 1971) are as follows:

\phi_m = 1 + 4.7\zeta , \quad \text{for } \zeta > 0 ,  (4.8)
\phi_m = 1 , \quad \text{for } \zeta = 0 ,  (4.9)
\phi_m = [1 - 15\zeta]^{-0.25} , \quad \text{for } \zeta < 0 ,  (4.10)

and

\phi_h = 0.74 + 4.7\zeta , \quad \text{for } \zeta > 0 ,  (4.11)
\phi_h = 0.74 , \quad \text{for } \zeta = 0 ,  (4.12)
\phi_h = 0.74\,[1 - 9\zeta]^{-0.5} , \quad \text{for } \zeta < 0 .  (4.13)

Once the empirical constants have been derived from experimental data, the integrated similarity functions shown in (4.5) and (4.6) are found by

\psi_{m,h} = \int_0^{\zeta} \left[1 - \phi_{m,h}\right] \frac{d\zeta'}{\zeta'} .  (4.14)

For the BD relationships, the integrated similarity functions for momentum and heat are

\psi_m = \psi_h = -4.7\zeta , \quad \text{for } \zeta > 0 ,  (4.15)
\psi_m = \psi_h = 0 , \quad \text{for } \zeta = 0 ,  (4.16)
\psi_m = \ln\!\left[\left(\frac{1 + x^2}{2}\right)\left(\frac{1 + x}{2}\right)^2\right] - 2\tan^{-1}x + \frac{\pi}{2} , \quad \text{for } \zeta < 0 ,  (4.17)
\psi_h = 2\ln\!\left(\frac{1 + x^2}{2}\right) , \quad \text{for } \zeta < 0 ,  (4.18)

where

x = (1 - 15\zeta)^{0.25} .  (4.19)
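As a compact reference, the following Python sketch implements the BD relationships (Eqns. 4.8-4.19). It is a straightforward transcription of the equations above, with function names of my own choosing, and is not code from any operational surface-layer scheme.

```python
import numpy as np

def phi_m(zeta):
    """BD non-dimensional wind shear, Eqns. (4.8)-(4.10)."""
    zeta = np.asarray(zeta, dtype=float)
    stable = 1.0 + 4.7 * zeta
    unstable = (1.0 - 15.0 * np.minimum(zeta, 0.0)) ** -0.25
    return np.where(zeta >= 0.0, stable, unstable)

def phi_h(zeta):
    """BD non-dimensional temperature gradient, Eqns. (4.11)-(4.13)."""
    zeta = np.asarray(zeta, dtype=float)
    stable = 0.74 + 4.7 * zeta
    unstable = 0.74 * (1.0 - 9.0 * np.minimum(zeta, 0.0)) ** -0.5
    return np.where(zeta >= 0.0, stable, unstable)

def psi_m(zeta):
    """Integrated similarity function for momentum, Eqns. (4.15)-(4.17)."""
    zeta = np.asarray(zeta, dtype=float)
    x = (1.0 - 15.0 * np.minimum(zeta, 0.0)) ** 0.25  # Eqn. (4.19)
    unstable = (np.log((1.0 + x**2) / 2.0 * ((1.0 + x) / 2.0) ** 2)
                - 2.0 * np.arctan(x) + np.pi / 2.0)
    return np.where(zeta >= 0.0, -4.7 * zeta, unstable)

def psi_h(zeta):
    """Integrated similarity function for heat, Eqns. (4.15), (4.16), (4.18)."""
    zeta = np.asarray(zeta, dtype=float)
    x = (1.0 - 15.0 * np.minimum(zeta, 0.0)) ** 0.25  # Eqn. (4.19)
    return np.where(zeta >= 0.0, -4.7 * zeta, 2.0 * np.log((1.0 + x**2) / 2.0))

print(phi_m(0.1), psi_m(-0.5))  # stable shear function; unstable correction
```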
The most commonly used ASL scheme in the WRF model, the MM5 surface-layer scheme, uses a modified form of the BD relationships, detailed in Zhang and Anthes (1982) and Jiménez and Dudhia (2012). Hereafter, I refer to these as the ZA relationships. The ZA relationships divide stability into four regimes based on values of the bulk Richardson number (R_ib). The regimes are (a) stable, laminar flow (R_ib > 0.2), (b) damped mechanical turbulence (0.2 > R_ib > 0), (c) neutral conditions (R_ib = 0), and (d) free convection and unstable conditions (R_ib < 0), and have the following integrated similarity relationships (summarized in the code sketch at the end of this section):

(a): \psi_m = \psi_h = -10\ln\!\left(\frac{z}{z_o}\right) ,  (4.20)

(b): \psi_m = \psi_h = \frac{-5 R_{ib} \ln\!\left(\frac{z}{z_o}\right)}{1.1 - 5 R_{ib}} ,  (4.21)

(c): \psi_m = \psi_h = 0 ,  (4.22)

(d): \psi_m = \ln\!\left[\left(\frac{1 + x^2}{2}\right)\left(\frac{1 + x}{2}\right)^2\right] - 2\tan^{-1}x + \frac{\pi}{2} , \quad \psi_h = 2\ln\!\left(\frac{1 + x^2}{2}\right) .  (4.23)

In Eqn. (4.23), x is the same as that in Eqn. (4.19).

For NWP models like WRF, Eqns. (4.5) and (4.6) are typically rearranged to diagnose the values of u∗ and θ∗ from the values of u and θ at the first model level and at the ground, together with the values of the integrated similarity functions provided in Eqns. (4.20)-(4.23). Thus, flux-profile relationships play a key role in the representation of surface stress and heat flux within NWP models.

4.1.2 Issues for Current Flux-Profile Relationships

Businger et al. (1971) and others note some scatter in their observational datasets during statically stable regimes. Optis et al. (2016) attribute this scatter to the intermittent nature of turbulence during statically stable conditions. Additionally, the large number of published similarity relationships suggests there is no unique solution, particularly for statically stable surface layers.

While several studies have found good performance of MOST over flat, homogeneous terrain (e.g., those cited in Section 4.1.1), the assumptions involved and the added complexity of flows in mountainous terrain may limit its applicability there. Complex topography introduces terrain inhomogeneity that can induce internal boundary layers and mountain waves (Optis et al., 2016). Internal boundary layers occur when turbulent eddies generated over rough surfaces mix vertically and are advected over large distances by the mean horizontal wind (Stull, 1988). Wind and temperature profiles downstream of the roughness feature can be significantly affected.

Under stably stratified conditions, internal boundary layers in coastal regions have been shown to advect downstream by several tens of kilometers, leading to periods of enhanced mixing and rapid fluctuations in wind speed (Dörenkämper et al., 2015). Over mountainous regions, one could expect similar internal boundary layers to be present due to surface heterogeneity. Additionally, dynamically and thermally induced circulations like flow channeling, anabatic/katabatic winds, mountain/valley breezes, and drainage flows can affect wind and temperature profiles in complex terrain. Several of these features occur in statically stable regimes.

One study by Rotach and Zardi (2007) found that boundary-layer features over valley locations in the Alps were consistently reproducible, suggesting that parameterization of turbulent structures is possible in regions of complex terrain. However, Nadeau et al. (2013), who applied MOST over a steep alpine slope in the Swiss Alps, found significant departures between observed wind and temperature profiles and those predicted by the BD relationships. The Nadeau et al. (2013) study was in a valley, where they found φm < 0 during statically stable regimes in which slope flows caused winds to be faster near the surface. They also found φm was between 0 and 0.2 for unstable regimes. φh was found to be unrelated to stability. While Nadeau et al. (2013) did not have much success in applying current MOST along an alpine slope, the application of MOST to mountain-ridge tops represents an existing knowledge gap. Such locations are becoming increasingly common for wind farms. Better similarity relationships for ridge tops will result in better wind-resource estimates, as well as better hub-height wind-speed predictions from NWP models.
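Returning to the ZA relationships referenced above, the regime logic of Eqns. (4.20)-(4.23) can be summarized in a short Python sketch. This is my own illustrative transcription keyed to the bulk Richardson number, not the WRF MM5 surface-layer code; for simplicity, the unstable-regime variable x of Eqn. (4.19) is passed in rather than diagnosed internally.

```python
import numpy as np

def psi_za(rib, z, z0, x=1.0):
    """ZA integrated similarity functions (psi_m, psi_h), Eqns. (4.20)-(4.23).

    rib: bulk Richardson number; z: height AGL (m); z0: roughness length (m);
    x: the unstable-regime variable of Eqn. (4.19), used only when rib < 0.
    """
    log_zz0 = np.log(z / z0)
    if rib > 0.2:                      # (a) stable, laminar flow
        psi = -10.0 * log_zz0
        return psi, psi
    elif rib > 0.0:                    # (b) damped mechanical turbulence
        psi = -5.0 * rib * log_zz0 / (1.1 - 5.0 * rib)
        return psi, psi
    elif rib == 0.0:                   # (c) neutral conditions
        return 0.0, 0.0
    else:                              # (d) free convection / unstable
        psi_m = (np.log((1.0 + x**2) / 2.0 * ((1.0 + x) / 2.0) ** 2)
                 - 2.0 * np.arctan(x) + np.pi / 2.0)
        psi_h = 2.0 * np.log((1.0 + x**2) / 2.0)
        return psi_m, psi_h

print(psi_za(0.1, 80.0, 0.93))  # damped-mechanical-turbulence regime example
```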
Further, tree canopies significantly affect wind and temperature profiles in regions of tall vegetation (Stull, 1988). Namely, vegetation canopies act like displaced roughness surfaces, such that the height z in Eqns. (4.1)-(4.23) must be adjusted by a zero-plane displacement distance d to account for a shifted reference plane (Stull, 1988; Arya, 2001). This results in Eqns. (4.1)-(4.2) and Eqns. (4.5)-(4.6) being rewritten, respectively, as:

\phi_m = \frac{k(z - d)}{u_*}\,\frac{\partial u}{\partial z} ,  (4.24)

\phi_h = \frac{k(z - d)}{\theta_*}\,\frac{\partial \theta}{\partial z} ,  (4.25)

and

u(z) = \frac{u_*}{k}\left[\ln\!\left(\frac{z - d}{z_o}\right) - \psi_m\right] ,  (4.26)

\theta(z) - \theta_g = \frac{\theta_*}{k}\left[\ln\!\left(\frac{z - d}{z_o}\right) - \psi_h\right] .  (4.27)

Also, in Eqns. (4.7)-(4.23), ζ = (z − d)/L. The value of d is related to the canopy height h, with a value of d ≈ 0.75h (Kaimal and Finnigan, 1994). The exact value of d is difficult to obtain, and requires direct flux measurements to do so. Moreover, d is not usually a constant, and varies with canopy porosity (Garratt, 1992). Kaimal and Finnigan (1994) note that investigators who do not have access to direct flux measurements (i.e., who must estimate u∗ and θ∗ from mean quantities of wind and potential temperature in vertical profiles) can minimize error by fixing the value of d based on estimates of h.

Several studies have found observed values of φm and φh to deviate significantly from those predicted by standard MOST (e.g., BD, ZA) in a region of the ASL known as the roughness sublayer, or transition layer (Raupach, 1979; Garratt, 1983; Högström et al., 1989; Garratt, 1992; Arya, 2001; Lee and Mahrt, 2005). The transition layer is a region of the ASL that spans from the ground up to approximately three times the canopy height (i.e., 3h), and is characterized by enhanced turbulent mixing caused by flow impinging on the canopy top (Kaimal and Finnigan, 1994; Arya, 2001). Thus, wind turbines are sometimes within the transition layer. Values of φm and φh have been found to be reduced by 65-75% in the transition layer (Garratt, 1992; Arya, 2001), with the magnitude of the reduction found to be independent of boundary-layer static stability. Canopy-corrected flux profiles are described by

\phi_m \phi_* = \frac{k(z - d)}{u_*}\,\frac{\partial u}{\partial z} ,  (4.28)

\phi_h \phi_* = \frac{k(z - d)}{\theta_*}\,\frac{\partial \theta}{\partial z} ,  (4.29)

where φ∗ is the empirical reduction factor

\phi_* = \exp\!\left[-0.7\left(1 - \frac{z - d}{z_*}\right)\right] .  (4.30)

In Eqn. (4.30), z∗ is the height of the transition layer (z∗ ≈ 3h).

4.1.3 Goals of this Research

This research addresses the applicability of currently accepted MOST over mountain-ridge-top locations, with wind-energy applications in mind. Namely, the goal is to improve flux-profile parameterization over mountain ridge tops for hub-height winds. Observed values of φm and φh will be compared to those predicted by the BD and ZA relationships. Canopy effects (Eqns. 4.28-4.30) will be evaluated, and a new flux-profile relationship will be formulated for mountain ridge tops. The new relationship is evaluated at an independent test location.
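To illustrate how the displacement height and transition-layer correction enter the wind profile, the sketch below evaluates the displaced logarithmic profile of Eqn. (4.26) and the reduction factor of Eqn. (4.30) for a neutral surface layer (ψm = 0). The parameter values are examples only, loosely matching those discussed for the field sites in the next section.

```python
import numpy as np

K = 0.4  # von Karman constant

def wind_profile(z, u_star, z0, d, psi_m=0.0):
    """Displaced-height log wind profile, Eqn. (4.26)."""
    return u_star / K * (np.log((z - d) / z0) - psi_m)

def phi_star(z, d, z_star):
    """Transition-layer reduction factor, Eqn. (4.30); equals 1 at z = d + z_star."""
    return np.exp(-0.7 * (1.0 - (z - d) / z_star))

# Example: h = 15 m canopy, with d = 0.75*h and z* = 3*h assumed for illustration
h = 15.0
d, z_star, z0 = 0.75 * h, 3.0 * h, 0.93
heights = np.array([38.0, 59.0, 78.0])   # anemometer heights at site 1 (m AGL)
print(wind_profile(heights, u_star=0.5, z0=z0, d=d))  # neutral wind speeds (m/s)
print(phi_star(heights, d, z_star))                   # sub-unity within the layer
```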
Section 4.2 provides an overview of the one-year-long field campaign used to obtain the dataset, and describes the methodology. Section 4.3 provides a discussion of results and limitations, followed by conclusions and suggestions for future work in Section 4.4.

Table 4.1: Meteorological sensor description, accuracy, and time averaging for the field campaign.

Location  | Sensor description                                                             | Accuracy                    | Sampling frequency | Averaging | Height AGL (m)
All sites | Onset HOBO temperature and RH sensor (U23-001)                                 | T: +/- 0.21 °C; RH: +/- 3.5% | 15 min            | None      | 1, 1.5, 2, 3, 4
Site 1    | Thies Clima First Class Wind Velocity Sensor (cup anemometer) (4.3350.00.000)  | +/- 0.2 m s-1               | 1 Hz               | 15 min    | 38, 59, 78
Site 1    | Campbell Scientific temperature sensor (107)                                   | +/- 0.4 °C                  | 1 Hz               | 15 min    | 75
Site 1    | R.M. Young barometric pressure sensor (61205V)                                 | +/- 0.5 hPa                 | 1 Hz               | 15 min    | 75
Site 2    | Vaisala (cup) anemometer (WAA252)                                              | +/- 0.17 m s-1              | 1 Hz               | 10 min    | 72, 95
Site 2    | Vaisala temperature probe (HMP155)                                             | +/- 0.3 °C                  | 1 Hz               | 10 min    | 91
Site 2    | Vaisala barometer (PTB210)                                                     | +/- 0.25 hPa                | 1 Hz               | 10 min    | 2

4.2 Field-Work Description and Data

4.2.1 Overview of Field Campaign

A one-year-long field campaign, from 1 June 2014 through 31 May 2015, was conducted to measure temperature and wind profiles at two anonymous wind-farm locations in British Columbia (Fig. 4.1). Observations from these wind farms are confidential and the property of the wind-farm owners. Hence, I do not reveal exact locations and names, and hereafter refer to these wind farms as sites 1 and 2. Sites 1 and 2 in this chapter do not correspond to sites 1 and 2 in Chapters 2 and 3.

Sites 1 and 2 are located at elevations between 500 and 1000 m ASL along mountain ridge tops or plateaus (Appendix C). The straight-line horizontal distance between sites 1 and 2 is approximately 60 km. Both wind farms have existing meteorological towers with anemometers on at least two levels, temperature near wind-turbine hub height, and a barometer. At each site I deployed five temperature/RH data loggers (Onset HOBO model U23-001), aligned in the vertical near the bottom of the meteorological tower, to complement the existing sensors (Fig. 4.1). Such HOBO sensors have been used with much success in a few previous field studies (Whiteman et al., 2000; Lundquist and Cayan, 2007; Whiteman and Hoch, 2014). Table 4.1 provides the full sensor details at sites 1 and 2, including their heights, accuracy, sampling frequency, and time-averaging scheme. I use these sensors to observe vertical profiles of wind and temperature, and also static stability.

Figure 4.1: Photographs of site 1 (left) and site 2 (right). The photo of site 2 shows the five Onset HOBO temperature/RH sensors in their white radiation shields aligned in the vertical at 1-, 1.5-, 2-, 3-, and 4-m AGL. Trees at both locations are predominantly of pine-forest type.

Temperature measurements were converted to potential temperature by using a reference height of mean sea level. Potential temperatures were calibrated by removing the mean bias found during statically neutral conditions relative to the sensor at 1 m. This method is used in lieu of laboratory measurements, which were not available. Measurements of RH allowed for the calculation of virtual potential temperature, to include the thermodynamic effects of water vapor on air density. Since calibration results for the wind measurements were not available, the effects of anemometer overspeeding and flow modification due to the meteorological tower have not been explicitly accounted for.
Since calibration results for the wind measurements were not available, the effects of anemometer overspeeding and flow modification due to the meteorological tower have not been explicitly accounted for. Overspeeding is only a small effect in modern cup anemometers (Pindado et al., 2014). The artifacts of flow modification may have been reduced by averaging wind speeds from the two sensors located on opposite booms at the same elevation.

Sites 1 and 2 are located in coniferous-forested regions in a continental-polar climate type. Clear-cuts are present in locations immediately around wind turbines and immediately adjacent to roads. Subjective estimates of the total area at each location covered by forest, based on satellite imagery, are 70 to 90%. Fig. 4.1 provides an estimate of canopy density. At site 2, dead tree stands are also present from a recent forest fire. Measurements taken at this location are less than 1 km from the dead tree stands. In the immediate vicinity of the measurements, the clear-cut area around the meteorological tower ranged from approximately 10 m (site 1) to 50 m (site 2), with patchy grass and bare soil present (Fig. 4.1). Additionally, tree-top elevations (canopy heights) are estimated to be between 10 and 15 m. The HOBO sensors were also located in the vicinity of metal boxes, or the meteorological tower itself. A check of the data did not reveal any interference from these objects at sites 1 or 2.

4.2.2 Methodology for Deriving Observed φm, φh, and New Flux-Profile Relationships

Time-averaged measurements (Table 4.1) valid on the hour were used to calculate the observed values of φm, φh, and ζ at site 1 for comparison to the BD and ZA relationships (Eqns. 4.8-4.23). I included the ZA relationships because of their common use within WRF [see Jiménez et al. (2012)], which has been used in two related components of this research (Siuta et al., 2017a,b). The observations were also used to evaluate canopy effects (Eqns. 4.28-4.30), and to derive a new flux-profile relationship for mountain ridge tops. Site 2 is used as an independent dataset to evaluate the new flux-profile relationship for wind. While sites 1 and 2 are located approximately 60 km apart, they are subject to the same synoptic conditions. Therefore, there is some correlation between the observations at each location. The linear correlation coefficient for the wind speeds near hub height between sites 1 and 2 was 0.63 over the entire year.

I start by calculating the values of u∗ and θ∗ with Eqns. (4.3) and (4.4) at a height of z = 48.5 m. This corresponds to the midpoint between the lowest two anemometers at site 1, and is the middle of the layer for which values of φm and φh will be calculated. The average of the observed winds at 38 and 59 m is used in Eqn. (4.3) to estimate u∗. A log-linear interpolation of the θ values observed at 1, 1.5, 2, 3, 4, and 75 m plotted against the logarithm of height is used to obtain the value of θ at z = 48.5 m. The value of θ at 48.5 m is used in Eqn. (4.4) to obtain θ∗. I use the potential temperature at 1 m as a substitute for the potential temperature at the physical ground surface (θg), which was not available.

Because I do not have direct flux measurements, estimates of the bulk drag coefficients CD and CH are needed to close Eqns. (4.3) and (4.4). The values of CD and CH are often assumed constant for simplicity, and due to the lack of sufficient observational data (Arya, 2001). I assume values for these coefficients based on Stull (1988):

C_{D,H} = \frac{k^2}{\left[ \ln\left( \frac{z - d}{z_o} \right) \right]^2}.    (4.31)
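With the site-1 values quoted below in this section, Eqn. (4.31) can be evaluated directly. The short sketch assumes the von Kármán constant k = 0.4, a standard value (the exact choice used in this work is not shown in this excerpt):

```python
import math

# Direct evaluation of Eqn. (4.31) with the site-1 values quoted in the text.
# k = 0.4 (von Karman constant) is an assumed standard value.
k = 0.4
z = 48.5    # midpoint of the lowest anemometer layer (m AGL)
d = 11.25   # zero-plane displacement at site 1 (m)
zo = 0.93   # roughness length (m), from Graefe (2004)

C = k**2 / math.log((z - d) / zo)**2  # bulk transfer coefficient, C_D = C_H
print(f"C_D = C_H = {C:.4f}")
```

With these values the coefficient is approximately 0.012.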
Estimates of the canopy height at sites 1 and 2 (Fig. 4.1) were used to derive values for the zero-plane displacement distance. I estimate h = 15 m (h = 10 m) and d = 11.25 m (d = 7.5 m) at site 1 (site 2), with both sites covered predominantly in a pine-forest vegetation type. These values for h and d are consistent with those provided in Graefe (2004), who summarizes values of h and d for several studies. Using these approximations for h, the transition-layer depth (z∗) extends approximately 45 m above the canopy top at site 1 and 30 m above the canopy top at site 2.

Measurements above the transition layer are required to determine zo (Kaimal and Finnigan, 1994), because zo is an aerodynamic surface-layer parameter. I cannot use observed wind profiles to estimate zo because there are insufficient observation heights above the transition layer. Instead, I use a value of 0.93 m for zo provided in Graefe (2004) for a coniferous-forested region. Graefe (2004) lists values for zo between 0.4 and 0.93 m for coniferous-forested regions with canopy heights similar to sites 1 and 2. I use the upper end of the range because of the increased complexity of the terrain at my locations. An analysis of the sensitivity of my results to the assumed values of h, d, and zo is provided in section 4.3.5.

Using these assumptions, the values for u∗ and θ∗ were used to calculate the observed values of φm and φh with Eqns. (4.24) and (4.25). Values of φm and φh were used to calculate ζ as (Arya, 2001):

\zeta = Ri_b \left( \frac{\phi_h}{\phi_m^2} \right),    (4.32)

where Rib is the bulk Richardson number,

Ri_b = \frac{g}{\theta} \frac{z \left( \theta_v - \theta_{vg} \right)}{u^2}.    (4.33)

Following the above process allows me to compare observed flux-profiles to those of the BD and ZA relationships, and to fit a new stability-dependent relationship to the data. When comparing to the BD or ZA relationships, theoretical values of φm and φh are calculated by using the observed value for ζ in Eqns. (4.8)-(4.23).
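The sketch below evaluates Eqns. (4.32)-(4.33) for illustrative, hypothetical inputs rather than field-campaign values; in this work the φm and φh values that close Eqn. (4.32) come from the observed profiles via Eqns. (4.24)-(4.25):

```python
# Sketch of the stability calculation in Eqns. (4.32)-(4.33).
# All numbers below are illustrative placeholders, not measurements.
g = 9.81          # gravitational acceleration (m s-2)
z = 48.5          # layer midpoint (m AGL)
theta = 285.0     # layer-mean potential temperature (K)
theta_v = 286.2   # virtual potential temperature at z (K)
theta_vg = 284.8  # virtual potential temperature near the surface (K)
u = 6.5           # wind speed at z (m s-1)

Rib = (g / theta) * z * (theta_v - theta_vg) / u**2   # Eqn. (4.33)

# phi_h and phi_m would be diagnosed from the observed profiles
# (Eqns. 4.24-4.25); given values here for illustration only.
phi_h, phi_m = 1.3, 1.2
zeta = Rib * (phi_h / phi_m**2)                        # Eqn. (4.32)
print(f"Rib = {Rib:.3f}, zeta = {zeta:.3f}")
```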
I eliminate low wind speeds (less than 3 m s-1) from the data, since light winds often do not follow MOST, nor are low wind speeds of concern to the wind-energy community (Optis et al., 2016). Fig. 4.2 shows a histogram of observed wind directions at site 1. I discretize this histogram into three wind regimes to examine the role of wind direction on observed values of φm. The three regimes are southwest (SW, 180° to 270°), northwest (NW, 270° to 360°), and east (E, 0° to 180°).

Figure 4.2: Histogram of wind directions at site 1. Three wind regimes labeled: southwesterly (SW), northwesterly (NW), and easterly (E).

Winds with a dominant westerly or southwesterly component occur when upper-level troughing is present over the eastern Pacific, bringing southwesterly flow across British Columbia. This setup is typical of the Pacific Northwest and is the dominant wind direction at site 1. A secondary peak is evident in winds having an easterly component. Winds with an easterly component are observed during periods of lee cyclogenesis in the Canadian Rocky Mountains, or during arctic outflow events. The northerly or northwesterly flow regime is observed when ridging is present over Alaska, Yukon, and British Columbia.

4.3 Results and Discussion

4.3.1 Comparison of Observed Wind Profiles with those of the BD and ZA Relationships

I calculate the average observed wind profiles at site 1 by static-stability regime to compare with those produced using the BD and ZA relationships. Statically-stable conditions occur when ζ > 0 and statically-unstable conditions when ζ < 0. For the BD and ZA relationships, wind profiles are first calculated based on the observed values of u∗ and ζ prior to averaging. A value of 11.25 m was used for d at site 1 when producing the profiles.

Figure 4.3: Comparison of the observed mean-wind profiles at site 1 (solid) to the profiles produced from the BD (dashed) and ZA (dotted) relationships. The left panel is for statically-stable ASL conditions (ζ > 0) while the right is for statically-unstable conditions (ζ < 0).

Fig. 4.3 compares the observed, BD, and ZA wind profiles at site 1 during statically-stable and statically-unstable conditions. For statically-unstable conditions, the BD and ZA relationships are identical [i.e., Eqns. (4.17) and (4.23) are the same]. Thus, their profiles (Fig. 4.3) are the same. The average BD and ZA profiles for statically-unstable conditions are a good fit to the average observed wind profile.

For statically-stable conditions, however, the BD and ZA profiles provide a poor fit to the observations (Fig. 4.3). Both profiles substantially overestimate the winds at site 1. For winds closest to hub height (e.g., the 78-m AGL observations), the BD and ZA profiles are too fast by factors of 1.4 and 1.7, respectively. Wind power is related to the cube of wind speed for the speeds shown in the average profiles in Fig. 4.3 for statically-stable regimes. Therefore, the BD and ZA profiles overestimate wind power by factors of approximately 2.7 and 4.9, respectively, for statically-stable conditions. To highlight this point, I plot the wind-power density (WPD) at site 1 in Fig. 4.4 for the observations and the BD and ZA relationships for statically-stable conditions. WPD is calculated as

WPD = \frac{1}{2} \rho u^3.    (4.34)

Figure 4.4: Wind-power density (WPD) at site 1 under statically-stable (ζ > 0) conditions for the observed (solid), BD (dashed), and ZA (dotted) mean-wind profiles.
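Because power scales with the cube of speed, the factor-of-1.4 and factor-of-1.7 speed overestimates translate directly into the power factors quoted above. A one-line check, with an assumed round-number air density:

```python
# The cube law in Eqn. (4.34) turns modest speed overestimates into large
# power overestimates. rho is an assumed, illustrative ridge-top air density.
rho = 1.1  # kg m-3

def wpd(u):
    """Wind-power density (W m-2), Eqn. (4.34)."""
    return 0.5 * rho * u**3

# Speed overestimation factors of 1.4 (BD) and 1.7 (ZA) from the text:
for factor in (1.4, 1.7):
    print(f"speed factor {factor} -> power factor {factor**3:.1f}")
# speed factor 1.4 -> power factor 2.7
# speed factor 1.7 -> power factor 4.9
```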
The remainder of my discussion will be focused on statically-stable conditions because of the good fit by the BD and ZA relationships to the observations in statically-unstable conditions. I also focus the comparison on the BD profiles because of their better fit to the observed wind profile under statically-stable regimes (Fig. 4.3).

4.3.2 Observations of φm, Theoretical φm, and Canopy Effects

Observed and theoretical (BD) values of φm at site 1 are plotted in Fig. 4.5. The observed φm values (black dots, Fig. 4.5) are consistently less than the BD relationship (red curve, Fig. 4.5) allows, with only a few observations larger than theoretical. Fig. 4.6 shows the ratio of φm to the theoretical values produced using the BD relationship, φm,BD. On average, φm is 55% smaller than φm,BD for statically-stable conditions.

Figure 4.5: Plot of φm at site 1 as a function of ζ. Observed values are represented by black dots. The red curve is the standard BD relationship. The orange and purple curves are canopy-corrected BD relationships with z∗ = 45 and z∗ = 150 m, respectively. The blue curve is a constant reduction factor of 55% applied to the BD curve. The green curve is the least-squares best fit to the observed dataset.

Figure 4.6: Ratio of observed non-dimensional wind shear φm to the theoretical values diagnosed from the BD relationships, φm,BD.

This is evidence that surface stress (u∗) is likely being underestimated under statically-stable conditions, producing wind profiles that increase speed too quickly with height (i.e., allowing winds that are too fast near hub heights).

For numerical modeling, the implication of the results in Figs. 4.5 and 4.6 is that parameterized values of u∗ using the BD relationships will be underestimated for statically-stable conditions along mountain ridges. Because φm values provided by BD are larger than observed, values for u∗ are smaller than observed when Eqn. (4.8) is rearranged to solve for u∗. Since values of u∗ are used to update momentum tendencies for subsequent model time steps, forecast winds under statically-stable conditions are likely to be overestimated because of the underestimated surface stress.

I theorize the reduced magnitude of φm could be the result of canopy effects, since the measurements are likely to be within the transition layer at site 1. I use Eqns. (4.28)-(4.30) with z∗ = 45 m to include canopy effects in the theoretical BD relationships (orange curve, Fig. 4.5). The result is a 12% reduction in the values predicted by the BD relationships (red curve, Fig. 4.5). The bulk of the observed φm are still smaller than theoretical after including canopy effects. This finding leads me to hypothesize that a) the transition-layer depth may be larger over forested mountain crests, or b) the assumption of a stability-independent correction factor does not apply over mountain ridges.

To evaluate a), I increased the transition-layer depth until the resultant curve (purple, Fig. 4.5) intersected the least-squares best fit to the observations (green, Fig. 4.5). The curves intersected when z∗ was 150 m. This is preliminary evidence that the transition layer may extend to higher altitudes over mountain ridges (by a factor of 3). However, this assumes that the application of a stability-independent reduction factor is correct. The blue curve in Fig. 4.5 represents a constant reduction of the BD relationships by 55%, found by taking the average ratio in Fig. 4.6.

It is clear that a stability-independent correction factor is not sufficient at my location on a mountain ridge [e.g., b) and Eqns. (4.28)-(4.30) do not apply]. The increase of φm with increasing static stability is much less for the observations (green, Fig. 4.5) than for the theoretical BD curves (red, Fig. 4.5), including when accounting for canopy effects (orange, purple, Fig. 4.5). It is possible that increased mountain-wave activity and/or advection of internal boundary layers from upstream roughness features may be contributing to enhanced turbulent mixing in statically-stable ASLs, thus reducing observed values of φm at site 1. However, without vertical soundings, or the availability of lidar and/or sodar equipment, I cannot confirm the roles of mountain waves or internal boundary layers. Such a study is recommended for future work based on my findings. Appendix C provides an overview of the terrain in the vicinity of the wind farms used in this chapter.

I propose that the best-fit curve (green, Fig. 4.5) is a new similarity relationship applicable to mountain ridge tops for statically-stable conditions. The curve is described by:

\phi_m = 0.001 + 1.49\,\zeta^{0.238},    (4.35)

with integrated form

\psi_m = 0.999\,\ln\left( \frac{z}{z_o} \right) + 6.26\,\zeta^{0.238}.    (4.36)

Hereafter, I refer to this new flux-profile relationship as Siuta-Howard-Stull (SHS).
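For reference, the sketch below is a direct transcription of the SHS curve in Eqn. (4.35), compared against the standard stable Businger-Dyer form φm = 1 + 4.7ζ; the constant 4.7 is the commonly quoted Businger et al. (1971) value, assumed here only for comparison:

```python
import math

def phi_m_shs(zeta):
    """Non-dimensional wind shear, Eqn. (4.35); for statically-stable zeta > 0."""
    return 0.001 + 1.49 * zeta**0.238

def psi_m_shs(z, zo, zeta):
    """Integrated form, Eqn. (4.36)."""
    return 0.999 * math.log(z / zo) + 6.26 * zeta**0.238

# Compare SHS against the standard stable Businger-Dyer curve (assumed 1 + 4.7*zeta):
for zeta in (0.05, 0.2, 0.5):
    print(f"zeta={zeta}: SHS={phi_m_shs(zeta):.2f}, BD={1 + 4.7 * zeta:.2f}")
```

At ζ = 0.2, for example, the SHS value is roughly half the BD value, consistent with the reduction seen in Fig. 4.5.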
I plot the fit of the mean wind and wind-power density profiles produced with the SHS relationship in Fig. 4.7. The SHS profile is a much better fit to the observed wind-speed and wind-power density profiles at site 1. However, the SHS fit does have a tendency to underestimate the wind speeds and wind-power densities near hub heights. Nonetheless, it is a much better fit than the BD and ZA relationships. I apply the SHS profile at site 2 for independent verification in section 4.3.4.

Figure 4.7: Comparison of diagnosed wind profiles using the SHS (dot-dashed), BD (dashed), and ZA (dotted) relations to that of the observations (solid) at site 1.

4.3.3 Sensitivity of φm to Season and Wind Direction

Scatter around the SHS curve in Fig. 4.5 is observed. To try to explain the scatter, I divide the observed φm values into seasonal and wind-flow-regime datasets. I use June, July, August as summer, September, October, November as fall, December, January, February as winter, and March, April, May as spring for the seasonal analysis shown in Fig. 4.8. The wind-direction regimes in Fig. 4.2 are used to isolate φm by flow regime in Fig. 4.9.

Figure 4.8: φm observed at site 1 during statically-stable conditions (black dots) for the summer (top left), fall (top right), winter (bottom left), and spring (bottom right). The green curve is the SHS theoretical fit derived using the data inclusive of all seasons.

Figure 4.9: φm observed at site 1 during statically-stable conditions (black dots) for the southwest (top left), east (top right), and northwest (bottom) wind regimes. The green curve is the SHS theoretical fit derived using the data inclusive of all wind directions.

Table 4.2: Flow-regime-dependent summary statistics of fit to the SHS relationship for statically-stable conditions at site 1. Residual calculated as φm,obs − φm,SHS, where φm,SHS is from the SHS relationship.

Wind Regime | Mean Residual | Residual Variance
SW | -0.03 | 0.04
NW | 0.04 | 0.09
E | 0.04 | 0.11

The seasonal plots (Fig. 4.8) do not show any substantial change in scatter as a function of season. However, when analyzing the dataset by wind direction (Fig. 4.9), more scatter is observed in the E and NW regimes than in the SW regime. Table 4.2 shows residual-error statistics for the fit of the observed φm values by the SHS curve. Residuals are small for all three regimes. However, residual variance is larger for the NW and E regimes. Using a Kolmogorov-Smirnov (K-S) statistical significance test, the differences in the residual distributions for the NW and E flow regimes pass significance testing at the 5% level when compared to the SW regime. Thus, the increased variance in the data for the NW and E regimes is statistically significant. It is possible that flow from the NW or E regimes is interacting with roughness elements of different characteristics, and this may be causing the scatter. Looking at Figs. 4.8 and 4.9, it appears that the proposed new similarity curve is not a function of wind direction or season for this site.
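The regime comparison above can be reproduced in outline with a two-sample K-S test. In the sketch below, the residual arrays are synthetic draws using the Table 4.2 moments, not the actual field residuals:

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic residuals drawn with the Table 4.2 means and variances;
# sample sizes are hypothetical.
rng = np.random.default_rng(42)
resid_sw = rng.normal(-0.03, np.sqrt(0.04), 500)  # SW regime
resid_nw = rng.normal(0.04, np.sqrt(0.09), 300)   # NW regime

stat, p_value = ks_2samp(resid_sw, resid_nw)
print(f"K-S statistic = {stat:.3f}, p = {p_value:.4f}")
# p < 0.05 would indicate the residual distributions differ at the 5% level.
```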
4.3.4 Independent Verification of SHS Profile at Site 2

I compare the wind profiles produced by the SHS, BD, and ZA flux-profile relationships during statically-stable conditions to the independent observations at site 2 in Fig. 4.10. Observed values of u∗, θ∗, and ζ are used as input to each respective relationship to produce the profiles. A summary of the mean-profile fits is given in Table 4.3, where mean absolute error skill scores (MAESSs) are provided to show the relative improvement in the representation of wind speeds by using the SHS relationship over the BD and ZA relationships.

Figure 4.10: Comparison of diagnosed wind profiles using the SHS (dot-dashed), BD (dashed), and ZA (dotted) relations to that of the observations (solid) at site 2.

The BD and ZA relationships produce wind speeds that are too fast, while the SHS relationship is too slow. Profiles produced using the SHS relationship, however, are large improvements over those produced using the BD or ZA relationships. The SHS profile reduces MAE over that of the BD relationships by 28 and 36% at 72 and 95 m, respectively. When compared to the ZA relationships, MAE is reduced by 54 and 48%, respectively.

The improvement is even more evident once converting to WPD, where the BD and ZA relationships largely overestimate the wind-resource potential (Fig. 4.10). As Siuta (2013) discusses, WPD values of 500 W m-2 are prime wind resources. My results show that use of the BD and ZA relationships would significantly overestimate the wind-resource potential under statically-stable conditions. The SHS profile, on the other hand, results in an underestimation. The underestimation, however, is of much less magnitude than the overestimation produced by the BD and ZA profiles.

Table 4.3: Verification summary statistics for the BD, ZA, and SHS profiles at site 2 for statically-stable conditions. ŪSHS, ŪBD, and ŪZA are the mean wind speeds using the SHS, BD, and ZA flux-profiles, respectively. Ō is the mean observed wind speed. MAESSBD and MAESSZA are the mean absolute error skill scores of the SHS wind profile relative to those of the BD and ZA wind profiles, respectively. Larger skill scores are better.

Height (m) | ŪSHS (m s-1) | ŪBD (m s-1) | ŪZA (m s-1) | Ō (m s-1) | MAESSBD (%) | MAESSZA (%)
72 | 5.24 | 9.30 | 10.68 | 6.94 | 28 | 54
95 | 5.63 | 10.58 | 11.30 | 7.55 | 36 | 48
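The skill scores in Table 4.3 follow once MAE is computed for each profile. The sketch below assumes the conventional skill-score definition MAESS = 1 − MAE_SHS/MAE_reference, and uses hypothetical hourly wind speeds rather than the site-2 data:

```python
import numpy as np

# Hypothetical hourly wind speeds (m s-1) for observation and two profiles.
obs = np.array([6.9, 7.2, 6.5, 7.8])
shs = np.array([5.2, 5.6, 5.0, 6.1])
bd = np.array([9.3, 9.8, 8.9, 10.2])

mae = lambda f, o: np.mean(np.abs(f - o))

# Assumed conventional skill-score form: 1 - MAE_SHS / MAE_reference.
maess_bd = 1.0 - mae(shs, obs) / mae(bd, obs)
print(f"MAESS relative to BD = {100 * maess_bd:.0f}%")
```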
4.3.5 Sensitivity of Observed φm to Zero-Plane Displacement and Roughness Length

To solve for the observed values of φm, I had to approximate the values of d and zo based on estimates of the canopy height, or prior results over similar vegetation in the literature (see section 4.1.2). To quantify the effects of these assumptions, I recalculated the values for φm at site 1 through several additional tests, summarized in Fig. 4.11.

Figure 4.11: Sensitivity of the calculated φm values to the use of d = 0 m vs. d = 11.25 m (a), d = 9 m vs. d = 11.25 m (b), and zo = 0.93 m vs. zo = 0.4 m (c).

First, I varied the choice of d. I found that using the zero-plane displacement distance within the calculation had a large effect. Without its use (e.g., d = 0 m), φm values were 29% larger (Fig. 4.11a). Much less uncertainty arises through the exact value of d (Fig. 4.11b). Values for φm were affected by 7% when using d = 9 m instead of d = 11.25 m. This represents the difference between a canopy-height estimate of 12 vs. 15 m. Since d ≈ 0.75h (Garratt, 1992), I conclude that uncertainty in the estimate of h results in only a small uncertainty in φm. Namely, the effects of differences in the canopy height over a forested region are not as large as neglecting to use a zero-plane displacement.

Second, I varied zo based on the values quoted in Graefe (2004). φm was recalculated using zo = 0.4 m to compare with the results shown throughout this chapter using zo = 0.93 m (Fig. 4.11c). Using the larger roughness value over the smaller one resulted in a 19% difference in φm. Thus, I conclude that my assumption of the roughness-length value is the largest source of uncertainty in my study. The 19% difference, however, is marginal compared to the much larger reduction found in Fig. 4.6 relative to the BD relationships. I therefore conclude that my assumptions have little effect on the results, which show that the BD and ZA relationships have generally poorer fits than the SHS profile for statically-stable conditions.
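As a rough illustration of how these assumptions propagate, the bulk coefficient of Eqn. (4.31) can be recomputed under the same alternative choices of d and zo. Note this sketch captures only the bulk-transfer pathway into u∗ (and hence φm), not the full recalculation behind Fig. 4.11, so the percentages will not match the 29%, 7%, and 19% totals quoted above:

```python
import math

# Sensitivity of the Eqn. (4.31) coefficient to the assumed d and zo.
# A smaller C_D implies a smaller u* and hence a larger phi_m, qualitatively
# consistent with Fig. 4.11; the quantitative totals involve more than C_D.
k, z = 0.4, 48.5

def C(d, zo):
    """Bulk transfer coefficient, Eqn. (4.31)."""
    return k**2 / math.log((z - d) / zo)**2

base = C(11.25, 0.93)
for label, d, zo in [("d = 0 m", 0.0, 0.93),
                     ("d = 9 m", 9.0, 0.93),
                     ("zo = 0.4 m", 11.25, 0.40)]:
    change = 100.0 * (C(d, zo) / base - 1.0)
    print(f"{label}: C_D changes by {change:+.0f}%")
```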
4.3.6 Observations of φh

While this study has focused on improvements to the relationship for φm in statically-stable conditions because of the direct applications to wind energy, I felt it important to show the results for φh at site 1 for the same static-stability conditions (Fig. 4.12). I find observed values of φh are relatively constant with static-stability magnitude. There is also less scatter in the observations of φh (Fig. 4.12) than in φm (Fig. 4.5). Fig. 4.12 confirms a finding by Nadeau et al. (2013), who saw similar behavior in the observed values of φh over a steep alpine slope. I propose that sinking motions, which develop along mountain ridges in statically-stable conditions as a result of katabatic flows, may cause this behavior. Such sinking motions would limit the strength of static stability in the ASL through adiabatic warming.

Figure 4.12: Observed φh (black dots) and theoretical BD flux-profile curve (red) under statically-stable conditions (ζ > 0). The green curve is the best-fit line to the data.

4.3.7 Limitations and Avenues for Future Work

There are several limitations to the work presented here, which I must highlight. First, the data do exhibit scatter. One potential cause of this could be the lack of precise flux measurements, or the lack of vertical resolution. Direct flux measurements would eliminate the need to estimate u∗ and θ∗ from mean-wind profiles.

Second, I have made several necessary assumptions that could be affecting the results: constant bulk drag and heat coefficients, a non-varying roughness length, and assumptions about the relationship between canopy height and zero-plane displacement distance. The effects of these assumptions were evaluated in the previous section and concluded to be small factors in my results. New studies over mountain ridges should use direct flux measurements to confirm my results. Additionally, since the locations used here were separated by only 60 km, it is possible the relatively high degree of correlation (0.63) between the observations at sites 1 and 2 has resulted in a better fit by the SHS profile at site 2 than may apply at other ridge-top locations. Future work should test stations that are more independent.

More work must be done before the SHS flux-profile can be implemented in a model like WRF. One of my primary arguments for deriving a new flux-profile relationship was that the BD and ZA relationships were derived from data over homogeneous terrain and thus may not apply over ridge tops. However, applying the SHS relationship across entire WRF domains containing both uniform and complex terrain would violate my own motivations for doing this study. Such a method might actually worsen wind forecasts. Future work should involve testing methods of selectively applying the SHS flux-profile relationships in WRF over only regions classified as mountain ridge tops.

4.4 Conclusions

The research provided here showed that widely-accepted forms of flux-profile relationships, such as BD and ZA, fail over the mountain ridge tops in this case study at two wind farms in western North America. Observed values for φm and φh depart significantly from their theoretical counterparts. Namely, both were found to be much smaller at my locations. This deviation from theoretical values has important consequences for those wanting to estimate wind resources at similar locations, and also for those using NWP models over mountain tops. Since φm values during statically-stable conditions are much smaller (on average 55%) than those of the BD and ZA relationships, surface-stress magnitudes are underestimated, leading to overestimates of wind speeds near wind-turbine hub heights.

I evaluated whether canopy effects could be the cause of the observed reduction in φm values at my locations. Inclusion of canopy effects did not reduce theoretical φm values enough, nor did the canopy reduction factor capture the dependence on static stability. Therefore, a new flux-profile relationship (SHS) was derived from the observations and evaluated at an independent test location. The SHS relationship yielded wind profiles with significant reductions in MAE when compared to those of the BD and ZA wind profiles. The SHS profile did, however, underestimate the wind speeds at the test location. Nonetheless, I find MOST can be applied successfully along mountain ridge tops.

I also confirm the finding from Nadeau et al. (2013) about the lack of slope in φh during statically-stable regimes.

The deviation of φm and φh from theoretical values may be attributed to the nature of flows in complex terrain. Internal boundary layers advected from rough terrain upstream, mountain-wave breaking, and the development of slope flows could result in intermittent periods of enhanced mixing during statically-stable conditions. Additionally, canopy effects are also likely to be influencing vertical profiles of wind and temperature at my locations.

While I have been successful in deriving a new flux-profile relationship for the two wind farms studied here, future work should be focused in two areas. The first focus should confirm these results at locations with higher independence; namely, locations that were not subject to the same synoptic conditions as the derivation source. Second, future work should determine how to apply these new flux-profile relationships to numerical weather prediction models. Care should be taken such that the new relationships are not applied across flatter regions of model domains.
Care also must be taken if applying the SHS relationship below canopy height, since its applicability to this region has not been tested. The relationships found by Businger et al. (1971) and Zhang and Anthes (1982), among others, have been found to correlate strongly with observations over flatter locations. The best approach may be to apply my relationships at ridge-top locations and the existing BD or ZA relationships over other locations. The SHS scheme may also be useful as a diagnostic tool to produce more accurate forecasts of winds at hub heights using existing NWP-model output. However, such a method would be sensitive to NWP estimates of u∗ and θ∗.
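A sketch of this selective-application idea follows, with every classifier threshold hypothetical: integrate the SHS φm (Eqn. 4.35) to diagnose the stable-layer wind profile on ridge-like cells, and fall back to a standard stable Businger-Dyer profile elsewhere. The terrain classifier (sub-grid elevation standard deviation), zo, L, and u∗ are illustrative assumptions only:

```python
import numpy as np

k = 0.4  # von Karman constant (assumed standard value)

def wind_profile(phi_m, ustar, z, zo=0.93, L=100.0, n=2000):
    """u(z) = (u*/k) * integral of phi_m(z'/L) d(ln z') from zo to z."""
    zs = np.geomspace(zo, z, n)
    f = phi_m(zs / L) / zs
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zs))  # trapezoid rule
    return (ustar / k) * integral

phi_shs = lambda zeta: 0.001 + 1.49 * zeta**0.238  # Eqn. (4.35)
phi_bd = lambda zeta: 1.0 + 4.7 * zeta             # standard stable BD form (assumed)

def hub_wind(ustar, elev_std_m, z=78.0, ridge_threshold=50.0):
    """Use the SHS profile on ridge-like cells and BD elsewhere."""
    phi = phi_shs if elev_std_m > ridge_threshold else phi_bd
    return wind_profile(phi, ustar, z)

print(hub_wind(0.4, elev_std_m=120.0))  # ridge-like cell -> SHS (slower winds)
print(hub_wind(0.4, elev_std_m=10.0))   # flatter cell    -> BD
```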
Chapter 5

Conclusions

The goal of this dissertation was to improve wind forecasts at wind-turbine hub heights in regions of complex terrain using NWP models. Namely, knowledge gaps have been filled with respect to understanding the sensitivity of deterministic hub-height wind-forecast accuracy to WRF-model configuration, the creation of calibrated probabilistic hub-height wind forecasts, and the limitations of current surface-layer similarity theory over mountain ridge tops.

5.1 Summary of Research Goals and Methods

Energy planners rely on forecasts from NWP models to schedule energy-reserve resources and for market trading. Such forecasts are subject to error due to incorrect initial-condition sources, incomplete knowledge of atmospheric physics, and grid discretization. Improvements in forecast accuracy result in large cost savings as wind penetration climbs to significant levels (Marquis et al., 2011; Wilczak et al., 2015). Research presented in this dissertation focused on three primary areas related to these forecast uncertainties at four wind farms located on mountain ridge tops in British Columbia:

• In Chapter 2, the sensitivity of WRF-model hub-height wind-forecast accuracy to PBL scheme, initial condition, and grid-length choice in mountainous terrain was addressed. A year-long case study of 0-24 hour forecasts was produced using two initial-condition sources, three grid lengths, and eight PBL schemes, yielding a total of 48 individual forecasts. An ANOVA was performed on the accuracy scores to explain the forecast-accuracy variance amongst the 48 total forecasts. Deterministic verification measures were also used to identify the best-performing WRF configurations.

• Since all model forecasts are subject to uncertainty, probabilistic forecasts are used to provide a measure of forecast confidence. The use of probabilistic forecasts for wind energy is relatively new, and Chapter 3 evaluated two methods of creating probabilistic hub-height wind forecasts in regions of complex terrain. The first was binning the raw-ensemble distribution. The second was prescribing a probability distribution based on the error statistics of the ensemble mean. The effect of ensemble size on short-term probabilistic wind forecasts was also addressed.

• NWP models depend on surface-layer parameterization schemes to represent the effects of the ground surface on the lowest portion of the planetary boundary layer. Such parameterizations rely on the use of MOST derived from data over homogeneous terrain. Chapter 4 evaluated the applicability of traditional MOST relationships over mountain ridge tops through a field campaign and proposed a new relationship.

5.2 Summary of Findings

There were several key findings that will help NWP modelers tasked with producing hub-height wind forecasts in regions of complex terrain:

• Deterministic WRF-forecast accuracy in regions of complex terrain is most influenced by the grid length or PBL scheme, with the dominant factor varying by location, season, and time of day.

• Medium-resolution WRF domains (12-km grid length) produced the highest accuracy scores on average, although several 4-km grids were best at individual locations after the application of bias correction.

• The ACM2 PBL scheme was generally the most accurate of the eight schemes tested, but the best PBL scheme is unique to the location and/or season. An ensemble of all tested PBL schemes matched the forecast accuracy of the single-best PBL scheme, which may not be known a priori.

• Use of empirical ensemble distributions to represent forecast uncertainty of hub-height winds in complex terrain did not produce calibrated probabilistic forecasts. Instead, prescribing a Gaussian probability model around a bias-corrected ensemble mean resulted in a calibrated probabilistic forecast (see the sketch after these lists).

• The benefit of increased ensemble size is through improved forecast sharpness when using a prescribed Gaussian probability model. This is due to the difficulty in knowing the best ensemble members a priori. However, probabilistic-forecast sharpness from a small, selective ensemble of only the best PBL scheme was shown to match that of the full, 48-member ensemble. Smaller, selective ensembles have the potential to save substantial computational costs when the best model configurations are known.

• There was no evidence of a spread-skill relationship for short-term hub-height wind forecasts. Nor was there evidence of a relationship between ensemble-mean forecast magnitude and the typical forecast error.

The field campaign highlighted deficiencies in current flux-profile relationships used within NWP models over mountain ridges, which may be used by model developers to improve NWP models:

• Current empirical formulas that diagnose the non-dimensional wind-shear and temperature-gradient values were demonstrated to poorly fit the observed values for statically-stable surface layers.

• The non-dimensional wind shear was observed to be much less than predicted by existing flux-profile relationships. I theorize this is due to enhanced turbulent mixing over mountain tops in statically-stable regimes due to slope flows, mountain-wave activity, internal boundary layers, and canopy effects. These features may affect wind speeds at mountain-ridge locations other than those shown here because of the common physical mechanisms required for their formation. However, the magnitude of turbulent transport caused by such features may vary by location; thus the reduction in non-dimensional wind-shear values relative to current theory may vary.

• Existing correction factors for canopy effects alone were not found to be sufficient.

• An improved flux-profile relationship for the mountain ridge-top locations studied under statically-stable conditions was given as Eqns. (4.35) or (4.36).
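A minimal sketch of the prescribed-Gaussian method referenced above follows, with hypothetical numbers; in practice the bias and spread statistics would come from a past verification (training) window:

```python
import numpy as np
from scipy.stats import norm

# Prescribed-Gaussian probabilistic forecast: center a normal distribution on
# the bias-corrected ensemble mean, with spread taken from past ensemble-mean
# errors. All values below are hypothetical.
rng = np.random.default_rng(0)
members = rng.normal(8.0, 1.5, 48)   # one forecast hour, 48 members (m s-1)
past_bias = 0.9                      # mean(forecast - obs) over training window
past_error_std = 1.8                 # std of bias-corrected errors over training

mean_bc = members.mean() - past_bias # bias-corrected ensemble mean
dist = norm(loc=mean_bc, scale=past_error_std)

# Example products: exceedance probability for an assumed 4 m/s cut-in speed,
# and a central 80% prediction interval.
p_above_cut_in = 1.0 - dist.cdf(4.0)
lo, hi = dist.ppf(0.10), dist.ppf(0.90)
print(f"P(wind > 4 m/s) = {p_above_cut_in:.2f}; 80% interval = [{lo:.1f}, {hi:.1f}] m/s")
```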
5.3 Potential Applications

Several aspects of this research are currently used in real-time by the Weather Forecast Research Team (WFRT) at the University of British Columbia (UBC). UBC runs an EPS to produce tailored weather forecasts for clients, including BC Hydro, one of the primary funding sources for this research. Based on the results shown here, additional WRF models were added to the UBC EPS to increase ensemble diversity. The best four PBL schemes evaluated in Chapter 2 are now used daily. In addition, since grid length was found to be one of the most important factors influencing forecast accuracy, I further diversified our EPS by adding new grid lengths. Similar tactics could be used by forecast agencies setting up WRF forecast systems for wind energy based on my results.

Prior to this work, the WFRT provided only deterministic hub-height wind forecasts to BC Hydro. The results in Chapter 3 allowed the WFRT to provide calibrated probabilistic forecasts in real-time. The results of Chapter 3 could be used by energy planners in other regions to also obtain calibrated probabilistic forecasts with computational efficiency.

Evidence of the breakdown of widely-accepted flux-profile relationships for the ridge tops studied should pinpoint areas for future research in complex terrain. My refined flux-profile relationships could be used by model developers to improve forecasts over complex terrain with similar ridge topography and land use. Namely, the new flux-profile relationship should be tested at locations along mountain ridges. Ridge-top locations are likely subject to the same forcing mechanisms causing thermally-induced circulations and dynamic effects, which enhance mixing during statically-stable conditions.

5.4 Limitations and Recommendations for Future Work

While the work presented in this dissertation has demonstrated several ways to improve hub-height wind forecasts over complex terrain for the four locations studied, it has also resulted in new questions for future investigation, and is subject to some limitations.

For the results in Chapters 2 and 3, I used forecasts with a maximum time horizon of only 24 hours. These forecasts are useful for same-day reserve scheduling and market trading, but forecasts of longer duration would be beneficial. My primary limitation was the availability of spare computer cycles to produce such an extensive dataset. I hypothesize that the sensitivity analysis in Chapter 2 would hold true for longer forecast horizons, but this should be confirmed through additional testing. Fortunately, the advent of cloud computing has started to remove some of these constraints through the availability of vast computing resources to the academic community (Siuta et al., 2016).

The results in Chapter 2 found the initial-condition source to be the smallest factor influencing forecast accuracy. However, a potential caveat to this analysis is that I used initial-condition sources produced only by NCEP (e.g., the GFS and NAM), which use the same underlying data-assimilation system (Shao et al., 2016). It is possible that using initial-condition sources from other national meteorological agencies that use different data-assimilation methods could result in an increase in the influence of initial-condition choice.
To this end, I have worked (along with others in the WFRT) to add new initial-condition sources to the UBC EPS by using the Canadian Meteorological Centre's Global Deterministic Prediction System (GDPS or GEM) and the Fleet Numerical Meteorology and Oceanography Center's Navy Global Environmental Model (NAVGEM) to initialize WRF.

In Chapter 3, I used a simple, computationally efficient method to produce calibrated probabilistic hub-height wind forecasts from an EPS. However, I find subsets of the dataset remain uncalibrated. It is possible that other methods not evaluated here (detailed in Chapter 3) may provide better probabilistic calibration than the method shown, particularly for extreme events. Given the extensive list of possible methods to create probabilistic forecasts, a comparative study of all these methods over a single dataset in complex terrain would be a welcome contribution to the wind-energy community.

Chapter 4 detailed a field campaign at two wind farms in mountainous areas of British Columbia. Challenges with this field campaign were site access, particularly in the winter, and a lack of remote-communication abilities. I was therefore limited to deploying self-contained temperature/RH data loggers to complement existing sensors located along meteorological towers. Unfortunately, direct flux measurements were not available, which lessens the accuracy of the observed non-dimensional wind-shear and temperature-gradient values. Measurements at more levels would also have been ideal, particularly anemometers below the canopy height, and temperature/RH sensors between 4 m and hub height. Future work should directly measure fluxes to confirm the work in Chapter 4, and provide insight into the cause of the reduced non-dimensional wind-shear values. Such future work should also use locations that are more independent than the two locations used in Chapter 4. More detailed measurements should be used to confirm whether the reduced wind shear under statically-stable conditions is the result of thermal effects, dynamic effects, or a combination of the two. Future work should also measure the changes between vertical wind and temperature profiles in adjacent mountain/valley or mountain/prairie systems. I have also demonstrated a method using simple, inexpensive data loggers that other researchers and energy forecasters might find useful.

Bibliography

Ahlstrom, M., and Coauthors, 2013: Knowledge Is Power: Efficiently Integrating Wind Energy and Wind Forecasts. IEEE Power and Energy Magazine, 11 (6), 45–52, doi:10.1109/MPE.2013.2277999.
Anderson, J. L., 1996: A Method for Producing and Evaluating Probabilistic Forecasts from Ensemble Model Integrations. Journal of Climate, 9 (7), 1518–1530, doi:10.1175/1520-0442(1996)009<1518:AMFPAE>2.0.CO;2.
Arya, S. P., 2001: Introduction to Micrometeorology. 2nd ed., Academic Press, San Diego, 420 pp.
Bakhshaii, A., and R. Stull, 2009: Deterministic Ensemble Forecasts Using Gene-Expression Programming. Weather and Forecasting, 24 (5), 1431–1451, doi:10.1175/2009WAF2222192.1.
Beljaars, A. C. M., 1995: The Parametrization of Surface Fluxes in Large-Scale Models Under Free Convection. Quarterly Journal of the Royal Meteorological Society, 121 (522), 255–270, doi:10.1002/qj.49712152203.
Beljaars, A. C. M., and A. A. M. Holtslag, 1991: Flux Parameterization over Land Surfaces for Atmospheric Models. Journal of Applied Meteorology, 30 (3), 327–341, doi:10.1175/1520-0450(1991)030<0327:FPOLSF>2.0.CO;2.
Berner, J., G. J. Shutts, M. Leutbecher, and T. N. Palmer, 2009: A Spectral Stochastic Kinetic Energy Backscatter Scheme and its Impact on Flow-Dependent Predictability in the ECMWF Ensemble Prediction System. Journal of the Atmospheric Sciences, 66 (3), 603–626, doi:10.1175/2008JAS2677.1.
Bludszuweit, H., J. A. Dominguez-Navarro, and A. Llombart, 2008: Statistical Analysis of Wind Power Forecast Error. IEEE Transactions on Power Systems, 23 (3), 983–991, doi:10.1109/TPWRS.2008.922526.
Bourdin, D. R., T. N. Nipen, and R. B. Stull, 2014: Reliable Probabilistic Forecasts from an Ensemble Reservoir Inflow Forecasting System. Water Resources Research, 50 (4), 3108–3130, doi:10.1002/2014WR015462.
Bretherton, C. S., and S. Park, 2009: A New Moist Turbulence Parameterization in the Community Atmosphere Model. Journal of Climate, 22 (12), 3422–3448, doi:10.1175/2008JCLI2556.1.
Buizza, R., P. L. Houtekamer, G. Pellerin, Z. Toth, Y. Zhu, and M. Wei, 2005: A Comparison of the ECMWF, MSC, and NCEP Global Ensemble Prediction Systems. Monthly Weather Review, 133 (5), 1076–1097, doi:10.1175/MWR2905.1.
Businger, J. A., J. C. Wyngaard, Y. Izumi, and E. F. Bradley, 1971: Flux-Profile Relationships in the Atmospheric Surface Layer. Journal of the Atmospheric Sciences, 28 (2), 181–189, doi:10.1175/1520-0469(1971)028<0181:FPRITA>2.0.CO;2.
Canadian Wind Energy Association, 2016: Pan-Canadian Wind Integration Study. Tech. rep.
Candille, G., 2009: The Multiensemble Approach: The NAEFS Example. Monthly Weather Review, 137 (5), 1655–1665, doi:10.1175/2008MWR2682.1.
Cheng, W. Y., Y. Liu, Y. Liu, Y. Zhang, W. P. Mahoney, and T. T. Warner, 2013: The Impact of Model Physics on Numerical Wind Forecasts. Renewable Energy, 55, 347–356, doi:10.1016/j.renene.2012.12.041.
Chow, F. K., B. J. Snyder, and S. F. J. De Wekker, 2013: Mountain Weather Research and Forecasting: Recent Progress and Current Challenges. Springer, Dordrecht, London, 750 pp.
Clement, M. A., 2012: A Methodology to Assess the Value of Integrated Hydropower and Wind Generation. Ph.D. thesis, University of Colorado, 183 pp.
Corbus, D., D. Lew, G. Jordan, W. Winters, F. Van Hull, J. Manobianco, and B. Zavadil, 2009: Up With Wind. IEEE Power and Energy Magazine, 7 (6), 36–46, doi:10.1109/MPE.2009.934260.
Courtney, J. F., P. Lynch, and C. Sweeney, 2013: High Resolution Forecasting for Wind Energy Applications Using Bayesian Model Averaging. Tellus A, 65 (0), doi:10.3402/tellusa.v65i0.19669.
Cutler, N. J., H. R. Outhred, and I. F. MacGill, 2012: Using Nacelle-based Wind Speed Observations to Improve Power Curve Modeling for Wind Power Forecasting. Wind Energy, 15 (2), 245–258, doi:10.1002/we.465.
De Wekker, S. F. J., and M. Kossmann, 2015: Convective Boundary Layer Heights Over Mountainous Terrain: A Review of Concepts. Frontiers in Earth Science, 3, 77.
De Wekker, S. F. J., D. G. Steyn, and S. Nyeki, 2004: A Comparison of Aerosol-Layer and Convective Boundary-Layer Structure over a Mountain Range during STAAARTE '97. Boundary-Layer Meteorology, 113 (2), 249–271, doi:10.1023/B:BOUN.0000039371.41823.37.
DeMeo, E. A., G. A. Jordan, C. Kalich, J. King, M. R. Milligan, C. Murley, B. Oakleaf, and M. J. Schuerger, 2007: Accommodating Wind's Natural Behavior. IEEE Power and Energy Magazine, 5 (6), 59–67, doi:10.1109/MPE.2007.906562.
Deppe, A. J., W. A. Gallus, and E. S. Takle, 2012: A WRF Ensemble for Improved Wind Speed Forecasts at Turbine Height. Weather and Forecasting, 28 (1), 212–228, doi:10.1175/WAF-D-11-00112.1.
Dörenkämper, M., M. Optis, A. Monahan, and G. Steinfeld, 2015: On the Offshore Advection of Boundary-Layer Structures and the Influence on Offshore Wind Conditions. Boundary-Layer Meteorology, 155 (3), 459–482, doi:10.1007/s10546-015-0008-x.
Draxl, C., A. N. Hahmann, A. Peña, and G. Giebel, 2014: Evaluating Winds and Vertical Wind Shear from Weather Research and Forecasting Model Forecasts Using Seven Planetary Boundary Layer Schemes. Wind Energy, 17 (1), 39–55, doi:10.1002/we.1555.
Dyer, A. J., 1974: A Review of Flux-Profile Relationships. Boundary-Layer Meteorology, 7 (3), 363–372, doi:10.1007/BF00240838.
Eckel, F. A., and C. F. Mass, 2005: Aspects of Effective Mesoscale, Short-Range Ensemble Forecasting. Weather and Forecasting, 20 (3), 328–350, doi:10.1175/WAF843.1.
Garratt, J., 1983: Surface Influence Upon Vertical Profiles in the Nocturnal Boundary Layer. Boundary-Layer Meteorology, 26 (1), 69–80, doi:10.1007/BF00164331.
Garratt, J., 1992: The Atmospheric Boundary Layer. Cambridge University Press, 316 pp.
Giebel, G., R. Brownsword, G. Kariniotakis, M. Denhard, and C. Draxl, 2011: The State-Of-The-Art in Short-Term Prediction of Wind Power: A Literature Overview, 2nd edition. Tech. rep., Riso DTU, 109 pp.
Gneiting, T., F. Balabdaoui, and A. E. Raftery, 2007: Probabilistic Forecasts, Calibration and Sharpness. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69 (2), 243–268, doi:10.1111/j.1467-9868.2007.00587.x.
Gneiting, T., A. E. Raftery, A. H. Westveld, and T. Goldman, 2005: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation. Monthly Weather Review, 133 (5), 1098–1118, doi:10.1175/MWR2904.1.
Graefe, J., 2004: Roughness Layer Corrections with Emphasis on SVAT Model Applications. Agricultural and Forest Meteorology, 124 (3), 237–251, doi:10.1016/j.agrformet.2004.01.003.
Grenier, H., and C. S. Bretherton, 2001: A Moist PBL Parameterization for Large-Scale Models and its Application to Subtropical Cloud-Topped Marine Boundary Layers. Monthly Weather Review, 129 (3), 357–377, doi:10.1175/1520-0493(2001)129<0357:AMPPFL>2.0.CO;2.
Grimit, E. P., and C. F. Mass, 2002: Initial Results of a Mesoscale Short-Range Ensemble Forecasting System over the Pacific Northwest. Weather and Forecasting, 17 (2), 192–205, doi:10.1175/1520-0434(2002)017<0192:IROAMS>2.0.CO;2.
Grimit, E. P., and C. F. Mass, 2007: Measuring the Ensemble Spread-Error Relationship with a Probabilistic Approach: Stochastic Ensemble Results. Monthly Weather Review, 135 (1), 203–221, doi:10.1175/MWR3262.1.
Grubišić, V., R. K. Vellore, and A. W. Huggins, 2005: Quantitative Precipitation Forecasting of Wintertime Storms in the Sierra Nevada: Sensitivity to the Microphysical Parameterization and Horizontal Resolution. Monthly Weather Review, 133 (10), 2834–2859, doi:10.1175/MWR3004.1.
Hahmann, A. N., C. L. Vincent, A. Peña, J. Lange, and C. B. Hasager, 2015: Wind Climate Estimation Using WRF Model Output: Method and Model Sensitivities Over the Sea. International Journal of Climatology, 35 (12), 3422–3439, doi:10.1002/joc.4217.
Hersbach, H., 2000: Decomposition of the Continuous Ranked Probability Score for Ensemble Prediction Systems. Weather and Forecasting, 15 (5), 559–570, doi:10.1175/1520-0434(2000)015<0559:DOTCRP>2.0.CO;2.
Hicks, B. B., 1976: Wind Profile Relationships from the Wangara Experiment. Quarterly Journal of the Royal Meteorological Society, 102 (433), 535–551, doi:10.1002/qj.49710243304.
Hoeting, J. A., D. Madigan, A. E. Raftery, and C. T. Volinsky, 1999: Bayesian Model Averaging: A Tutorial. Statistical Science, 14 (4), 382–417.
Högström, U., H. Bergström, A. S. Smedman, S. Halldin, and A. Lindroth, 1989: Turbulent Exchange Above a Pine Forest, I: Fluxes and Gradients. Boundary-Layer Meteorology, 49 (1-2), 197–217, doi:10.1007/BF00116411.
Hong, S.-Y., Y. Noh, and J. Dudhia, 2006: A New Vertical Diffusion Package with an Explicit Treatment of Entrainment Processes. Monthly Weather Review, 134 (9), 2318–2341, doi:10.1175/MWR3199.1.
Hong, S.-Y., and H.-L. Pan, 1996: Nonlocal Boundary Layer Vertical Diffusion in a Medium-Range Forecast Model. Monthly Weather Review, 124 (10), 2322–2339, doi:10.1175/1520-0493(1996)124<2322:NBLVDI>2.0.CO;2.
Hopson, T. M., 2014: Assessing the Ensemble Spread-Error Relationship. Monthly Weather Review, 142 (3), 1125–1142, doi:10.1175/MWR-D-12-00111.1.
Hu, X.-M., P. M. Klein, and M. Xue, 2013: Evaluation of the Updated YSU Planetary Boundary Layer Scheme within WRF for Wind Resource and Air Quality Assessments. Journal of Geophysical Research: Atmospheres, 118 (18), 410–490, doi:10.1002/jgrd.50823.
Jambunathan, M. V., 1954: Some Properties of Beta and Gamma Distributions. The Annals of Mathematical Statistics, 25 (2), 401–405, doi:10.1214/aoms/1177728800.
Janjić, Z. I., 1994: The Step-Mountain Eta Coordinate Model: Further Developments of the Convection, Viscous Sublayer, and Turbulence Closure Schemes. Monthly Weather Review, 122 (5), 927–945, doi:10.1175/1520-0493(1994)122<0927:TSMECM>2.0.CO;2.
Jiménez, P. A., and J. Dudhia, 2011: Improving the Representation of Resolved and Unresolved Topographic Effects on Surface Wind in the WRF Model. Journal of Applied Meteorology and Climatology, 51 (2), 300–316, doi:10.1175/JAMC-D-11-084.1.
Jiménez, P. A., J. Dudhia, J. F. González-Rouco, J. Navarro, J. P. Montávez, and E. García-Bustamante, 2012: A Revised Scheme for the WRF Surface Layer Formulation. Monthly Weather Review, 140 (3), 898–918, doi:10.1175/MWR-D-11-00056.1.
Juban, J., N. Siebert, and G. N. Kariniotakis, 2007: Probabilistic Short-Term Wind Power Forecasting for the Optimal Management of Wind Generation. 2007 IEEE Lausanne Power Tech, IEEE, 683–688, doi:10.1109/PCT.2007.4538398.
Junk, C., S. Späth, L. von Bremen, and L. Delle Monache, 2015: Comparison and Combination of Regional and Global Ensemble Prediction Systems for Probabilistic Predictions of Hub-Height Wind Speed. Weather and Forecasting, 30 (5), 1234–1253, doi:10.1175/WAF-D-15-0021.1.
Kaimal, J. C., and J. J. Finnigan, 1994: Atmospheric Boundary Layer Flows: Their Structure and Measurement, Vol. 72, 289 pp., doi:10.1016/0021-9169(95)90002-0.
Kumaraswamy, P., 1980: A Generalized Probability Density Function for Double-Bounded Random Processes. Journal of Hydrology, 46 (1-2), 79–88, doi:10.1016/0022-1694(80)90036-0.
Lange, M., 2005: On the Uncertainty of Wind Power Predictions: Analysis of the Forecast Accuracy and Statistical Distribution of Errors. Journal of Solar Energy Engineering, 127 (2), 177–184, doi:10.1115/1.1862266.
Lee, Y. H., and L. Mahrt, 2005: Effect of Stability on Mixing in Open Canopies. Agricultural and Forest Meteorology, 135 (1-4), 169–179, doi:10.1016/j.agrformet.2005.11.013.
Lundquist, J. D., and D. R. Cayan, 2007: Surface Temperature Patterns in Complex Terrain: Daily Variations and Long-Term Change in the Central Sierra Nevada, California. Journal of Geophysical Research, 112 (D11), D11124, doi:10.1029/2006JD007561.
Mahoney, W. P., and Coauthors, 2012: A Wind Power Forecasting System to Optimize Grid Integration. IEEE Transactions on Sustainable Energy, 3 (4), 670–682, doi:10.1109/TSTE.2012.2201758.
Markowski, P., and Y. Richardson, 2010: Mesoscale Meteorology in Midlatitudes. Wiley-Blackwell, 407 pp.
Marquis, M., J. Wilczak, M. Ahlstrom, J. Sharp, A. Stern, J. C. Smith, and S. Calvert, 2011: Forecasting the Wind to Reach Significant Penetration Levels of Wind Energy. Bulletin of the American Meteorological Society, 92 (9), 1159–1171, doi:10.1175/2011BAMS3033.1.
Martner, B. E., and J. D. Marwitz, 1982: Wind Characteristics in Southern Wyoming. Journal of Applied Meteorology, 21 (12), 1815–1827, doi:10.1175/1520-0450(1982)021<1815:WCISW>2.0.CO;2.
Mass, C. F., D. Ovens, K. Westrick, and B. A. Colle, 2002: Does Increasing Horizontal Resolution Produce More Skillful Forecasts? Bulletin of the American Meteorological Society, 83 (3), 407–430, doi:10.1175/1520-0477(2002)083<0407:DIHRPM>2.3.CO;2.
McCollor, D., and R. Stull, 2008a: Hydrometeorological Accuracy Enhancement via Postprocessing of Numerical Weather Forecasts in Complex Terrain. Weather and Forecasting, 23 (1), 131–144, doi:10.1175/2007WAF2006107.1.
McCollor, D., and R. Stull, 2008b: Hydrometeorological Short-Range Ensemble Forecasts in Complex Terrain. Part I: Meteorological Evaluation. Weather and Forecasting, 23 (4), 533–556, doi:10.1175/2008WAF2007063.1.
McCollor, D., and R. Stull, 2008c: Hydrometeorological Short-Range Ensemble Forecasts in Complex Terrain. Part II: Economic Evaluation. Weather and Forecasting, 23 (4), 533, doi:10.1175/2007WAF2007064.1.
Monin, A. S., and A. M. Obukhov, 1954: Basic Laws of Turbulent Mixing in the Surface Layer of the Atmosphere. Contrib. Geophys. Inst. Acad. Sci. USSR, 24 (151), 163–187.
Monteiro, C., R. Bessa, V. Miranda, A. Botterud, J. Wang, and G. Conzelmann, 2009: Wind Power Forecasting: State-of-the-Art 2009. Argonne National Laboratory, 1–216, doi:10.2172/968212.
Murphy, A. H., and R. L. Winkler, 1987: A General Framework for Forecast Verification. Monthly Weather Review, 115 (7), 1330–1338, doi:10.1175/1520-0493(1987)115<1330:AGFFFV>2.0.CO;2.
Nadeau, D. F., E. R. Pardyjak, C. W. Higgins, and M. B. Parlange, 2013: Similarity Scaling over a Steep Alpine Slope. Boundary-Layer Meteorology, 147 (3), 401–419, doi:10.1007/s10546-012-9787-5.
Nakanishi, M., and H. Niino, 2006: An Improved Mellor-Yamada Level-3 Model: Its Numerical Stability and Application to a Regional Prediction of Advection Fog. Boundary-Layer Meteorology, 119 (2), 397–407, doi:10.1007/s10546-005-9030-8.
Nipen, T., and R. Stull, 2011: Calibrating Probabilistic Forecasts from an NWP Ensemble. Tellus A, 63 (5), 858–875, doi:10.1111/j.1600-0870.2011.00535.x.
Nipen, T. N., 2012: A Component-Based Probabilistic Weather Forecasting System for Operational Usage. Ph.D. thesis, University of British Columbia, 101 pp., doi:10.14288/1.0053570.
Optis, M., A. Monahan, and F. C. Bosveld, 2016: Limitations and Breakdown of Monin-Obukhov Similarity Theory for Wind Profile Extrapolation under Stable Stratification. Wind Energy, 19 (6), 1053–1072, doi:10.1002/we.1883.
Palmer, T. N., R. Buizza, F. Doblas-Reyes, T. Jung, M. Leutbecher, G. J. Shutts, M. Steinheimer, and A. Weisheimer, 2009: Stochastic Parametrization and Model Uncertainty. Tech. rep., European Centre for Medium-Range Weather Forecasts, 42 pp.
Panofsky, H. A., 1963: Determination of Stress from Wind and Temperature Measurements. Quarterly Journal of the Royal Meteorological Society, 89 (379), 85–94, doi:10.1002/qj.49708937906.
Paulson, C. A., 1970: The Mathematical Representation of Wind Speed and Temperature Profiles in the Unstable Atmospheric Surface Layer. Journal of Applied Meteorology, 9 (6), 857–861, doi:10.1175/1520-0450(1970)009<0857:TMROWS>2.0.CO;2.
Peterson, E. W., and J. P. Hennessey, 1978: On the Use of Power Laws for Estimates of Wind Power Potential. Journal of Applied Meteorology, 17 (3), 390–394, doi:10.1175/1520-0450(1978)017<0390:OTUOPL>2.0.CO;2.
Pindado, S., J. Cubas, and F. Sorribes-Palmer, 2014: The Cup Anemometer, a Fundamental Meteorological Instrument for the Wind Energy Industry. Research at the IDR/UPM Institute. Sensors (Basel, Switzerland), 14 (11), 21418–21452, doi:10.3390/s141121418.
Pinson, P., 2012: Very-short-term Probabilistic Forecasting of Wind Power with Generalized Logit-normal Distributions. Journal of the Royal Statistical Society: Series C (Applied Statistics), 61 (4), 555–576, doi:10.1111/j.1467-9876.2011.01026.x.
Pinson, P., J. Juban, and G. N. Kariniotakis, 2006: On the Quality and Value of Probabilistic Forecasts of Wind Generation. 2006 International Conference on Probabilistic Methods Applied to Power Systems, IEEE, 1–7, doi:10.1109/PMAPS.2006.360290.
Pleim, J. E., 2007: A Combined Local and Nonlocal Closure Model for the Atmospheric Boundary Layer. Part I: Model Description and Testing. Journal of Applied Meteorology and Climatology, 46 (9), 1383–1395, doi:10.1175/JAM2539.1.
Poulos, G. S., and Coauthors, 2002: CASES-99: A Comprehensive Investigation of the Stable Nocturnal Boundary Layer. Bulletin of the American Meteorological Society, 83 (4), 555–581, doi:10.1175/1520-0477(2002)083<0555:CACIOT>2.3.CO;2.
Raftery, A. E., T. Gneiting, F. Balabdaoui, and M. Polakowski, 2005: Using Bayesian Model Averaging to Calibrate Forecast Ensembles. Monthly Weather Review, 133 (5), 1155–1174, doi:10.1175/MWR2906.1.
Raupach, M. R., 1979: Anomalies in Flux-Gradient Relationships Over Forest. Boundary-Layer Meteorology, 16 (3), 467–486, doi:10.1007/BF03163564.
Rotach, M. W., and D. Zardi, 2007: On the Boundary-Layer Structure over Highly Complex Terrain: Key Findings from MAP. Quarterly Journal of the Royal Meteorological Society, 133 (625), 937–948, doi:10.1002/qj.71.
Quarterly Journal of the Royal Meteorological Society,133 (625), 937–948, doi:10.1002/qj.71. → pages 56Rucker, M., R. M. Banta, and D. G. Steyn, 2008: Along-Valley Structure of Daytime ThermallyDriven Flows in the Wipp Valley. Journal of Applied Meteorology and Climatology, 47 (3),733–751, doi:10.1175/2007JAMC1319.1. → pages 10Santoso, E., and R. Stull, 1998: Wind and Temperature Profiles in the Radix Layer: The BottomFifth of the Convective Boundary Layer. Journal of Applied Meteorology, 37 (6), 545–558,doi:10.1175/1520-0450(1998)037〈0545:WATPIT〉2.0.CO;2. → pages 8Santoso, E., and R. Stull, 2001: Similarity Equations for Wind and Temperature Profiles in theRadix Layer, at the Bottom of the Convective Boundary Layer. Journal of the AtmosphericSciences, 58 (11), 1446–1464, doi:10.1175/1520-0469(2001)058〈1446:SEFWAT〉2.0.CO;2. →pages 6, 8, 53Shao, H., and Coauthors, 2016: Bridging Research to Operations Transitions: Status and Plans ofCommunity GSI. Bulletin of the American Meteorological Society, 97 (8), 1427–1440,doi:10.1175/BAMS-D-13-00245.1. → pages 19, 51, 80Shin, H., and S.-Y. Hong, 2011: Intercomparison of Planetary Boundary-Layer Parametrizations inthe WRF Model for a Single Day from CASES-99. Boundary-Layer Meteorology, 139 (2),261–281, doi:10.1007/s10546-010-9583-z. → pages 4, 6, 9, 12Siuta, D., 2013: An Analysis of the Weather Research and Forecasting Model for Wind EnergyApplications in Wyoming. M.S. thesis, University of Wyoming. 124 pp. → pages 3, 8, 71Siuta, D., G. West, H. Modzelewski, R. Schigas, and R. Stull, 2016: Viability of Cloud Computingfor Real-Time Numerical Weather Prediction. Weather and Forecasting, 31 (6), 1985–1996,doi:10.1175/WAF-D-16-0075.1. → pages 8089BibliographySiuta, D., G. West, T. N. Nipen, and R. Stull, 2017a: Calibrated Probabilistic Hub-Height WindForecasts in Complex Terrain. Weather and Forecasting, doi:10.1175/WAF-D-16-0137.1. →pages 7, 16, 60, 93Siuta, D., G. West, and R. Stull, 2017b: WRF Hub-Height Wind-Forecast Sensitivity to PBLScheme, Grid Length, and Initial-Condition Choice in Complex Terrain. Weather andForecasting, doi:10.1175/WAF-D-16-0120.1. → pages 7, 32, 33, 60Skamarock, W., J. Klemp, J. Dudhia, D. Gill, D. Barker, W. Wang, X.-y. Huang, and M. Duda,2008: A Description of the Advanced Research WRF Version 3. NCAR Tech. NoteNCAR/TN-475+STR. 113 pp., doi:10.5065/D68S4MVH. → pages 8, 13, 31, 52Skamarock, W. C., 2004: Evaluating Mesoscale NWP Models using Kinetic Energy Spectra.Monthly Weather Review, 132 (12), 3019–3032, doi:10.1175/MWR2830.1. → pages 11Sloughter, J. M., T. Gneiting, and A. E. Raftery, 2010: Probabilistic Wind Speed Forecasting usingEnsembles and Bayesian Model Averaging. Journal of the American Statistical Association,105 (489), 25–35, doi:10.1198/jasa.2009.ap08615. → pages 5, 30Stensrud, D. J., 2007: Parameterization Schemes: Keys to Understanding Numerical WeatherPrediction Models. Cambridge University Press, 478 pp. → pages 32Stensrud, D. J., J.-W. Bao, and T. T. Warner, 2000: Using Initial Condition and Model PhysicsPerturbations in Short-Range Ensemble Simulations of Mesoscale Convective Systems. MonthlyWeather Review, 128 (7), 2077–2107,doi:10.1175/1520-0493(2000)128〈2077:UICAMP〉2.0.CO;2. → pages 5, 30, 31Storm, B., J. Dudhia, S. Basu, A. Swift, and I. Giammanco, 2009: Evaluation of the WeatherResearch and Forecasting Model on Forecasting Low-Level Jets: Implications for Wind Energy.Wind Energy, 12 (1), 81–90, doi:10.1002/we.288. → pages 9Stull, R. B., 1988: An Introduction to Boundary Layer Meteorology. 
Kluwer Academic Publishers,666 pp. → pages 3, 8, 10, 32, 52, 53, 55, 56, 61Stull, R. B., 1993: Review of Non-Local Mixing in Turbulent Atmospheres: TransilientTurbulence Theory. Boundary-Layer Meteorology, 62 (1), 21–96, doi:10.1007/BF00705546. →pages 14, 21, 22Sukoriansky, S., B. Galperin, and I. Staroselsky, 2005: A Quasinormal Scale Elimination Model ofTurbulent Flows with Stable Stratification. Physics of Fluids, 17 (8), 085 107,doi:10.1063/1.2009010. → pages 14, 32Taylor, K. E., 2001: Summarizing Multiple Aspects of Model Performance in a Single Diagram.Journal of Geophysical Research, 106 (D7), 7183, doi:10.1029/2000JD900719. → pages 16United States Department of Energy, 2015: Wind Vision: A New Era for Wind Power in theUnited States. Tech. rep. → pages 290BibliographyWang, X., and C. H. Bishop, 2003: A Comparison of Breeding and Ensemble Transform KalmanFilter Ensemble Forecast Schemes. Journal of the Atmospheric Sciences, 60 (9), 1140–1158,doi:10.1175/1520-0469(2003)060〈1140:ACOBAE〉2.0.CO;2. → pages 43Warner, T. T., 2011: Numerical Weather and Climate Prediction. Cambridge University Press, 550pp. → pages 2, 5, 11, 45Webb, E. K., 1970: Profile Relationships: The Log-Linear Range, and Extension to StrongStability. Quarterly Journal of the Royal Meteorological Society, 96 (407), 67–90,doi:10.1002/qj.49709640708. → pages 6, 53Whitaker, J. S., and A. F. Loughe, 1998: The Relationship between Ensemble Spread andEnsemble Mean Skill. Monthly Weather Review, 126 (12), 3292–3302,doi:10.1175/1520-0493(1998)126〈3292:TRBESA〉2.0.CO;2. → pages 43Whiteman, C. D., and S. W. Hoch, 2014: Pseudovertical Temperature Profiles in a Broad Valleyfrom Lines of Temperature Sensors on Sidewalls. Journal of Applied Meteorology andClimatology, 53 (11), 2430–2437, doi:10.1175/JAMC-D-14-0177.1. → pages 58Whiteman, C. D., J. M. Hubbe, and W. J. Shaw, 2000: Evaluation of an Inexpensive TemperatureDatalogger for Meteorological Applications. Journal of Atmospheric and Oceanic Technology,17 (1), 77–81, doi:10.1175/1520-0426(2000)017〈0077:EOAITD〉2.0.CO;2. → pages 58Wilczak, J., and Coauthors, 2015: The Wind Forecast Improvement Project (WFIP): APublic-Private Partnership Addressing Wind Energy Forecast Needs. Bulletin of the AmericanMeteorological Society, 96 (10), 1699–1718, doi:10.1175/BAMS-D-14-00107.1. → pages 5,29, 77Wilks, D. S., 2011: Statistical Methods in the Atmospheric Sciences. Academic Press, 704 pp. →pages 16, 36, 38, 39Yang, B., and Coauthors, 2016: Sensitivity of Turbine-Height Wind Speeds to Parameters inPlanetary Boundary-Layer and Surface-Layer Schemes in the Weather Research and ForecastingModel. Boundary-Layer Meteorology, 1–26, doi:10.1007/s10546-016-0185-2. → pages 9Yang, Q., L. K. Berg, M. Pekour, J. D. Fast, R. K. Newsom, M. Stoelinga, and C. Finley, 2013:Evaluation of WRF-Predicted Near-Hub-Height Winds and Ramp Events over a PacificNorthwest Site with Complex Terrain. Journal of Applied Meteorology and Climatology, 52 (8),1753–1763, doi:10.1175/JAMC-D-12-0267.1. → pages 9, 12Zhang, D., and R. A. Anthes, 1982: A High-Resolution Model of the Planetary BoundaryLayer—Sensitivity Tests and Comparisons with SESAME-79 Data. Journal of AppliedMeteorology, 21 (11), 1594–1609,doi:10.1175/1520-0450(1982)021〈1594:AHRMOT〉2.0.CO;2. → pages 6, 53, 7691BibliographyZhang, Y., J. Wang, and X. Wang, 2014: Review on Probabilistic Forecasting of Wind PowerGeneration. Renewable and Sustainable Energy Reviews, 32, 255–270,doi:10.1016/j.rser.2014.01.033. → pages 5, 29, 31Zhu, Y., Z. Toth, R. Wobus, D. 
Zhu, Y., Z. Toth, R. Wobus, D. Richardson, and K. Mylne, 2002: The Economic Value of Ensemble-Based Weather Forecasts. Bulletin of the American Meteorological Society, 83 (1), 73–83, doi:10.1175/1520-0477(2002)083<0073:TEVOEB>2.3.CO;2. → pages 28

Appendix A: Summary of Bias-Corrected Forecast Accuracy Scores

Tables A.1 and A.2 provide the full MAE dataset for wind-farm sites 1-4. MAE is provided for the annual and seasonal averages, for each PBL-grid combination, and for the GFS (Table A.1) and NAM (Table A.2) initial-condition sources, for the bias-corrected wind forecasts.

Fig. A.1 shows the annual MAESS of the bias-corrected wind forecasts (skill score is relative to the corresponding raw forecast, with larger positive scores being better) at each station using the DMB post-processing method. For sites 1-3, the improvement in MAE due to bias correction ranged from 0 to 12% relative to the raw forecasts. While 12% may seem small, the resulting monetary value can be large (Marquis et al., 2011). Site 4 saw large improvements (up to 56% for the 4-km grid using the MRF PBL scheme) because of large systematic biases. Although the bias-correction scheme made MAE up to 3% worse for some forecasts at individual sites, the majority of configurations were improved. I attribute the poorer verification after bias correction at individual locations to the training dataset, and to the fact that some grids (mostly of 12-km grid length) have low bias prior to bias correction (Siuta et al., 2017a). When averaged over all locations, bias correction always resulted in improved accuracy skill.

Fig. A.2 shows the variation in the annual MAESS at all four wind-farm sites, averaged over all 48 bias-corrected deterministic forecasts. While I do not show training periods shorter than 5 days, I tested a training period of only 1 day and found MAESS to be significantly worse (-40% or worse), so it is not included in Fig. A.2. I find that the benefits of a longer training period level out between 20 and 30 days.
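To make this post-processing chain concrete, the following is a minimal Python sketch, not the exact operational implementation: it assumes the multiplicative degree-of-mass-balance (DMB) factor is the ratio of summed forecasts to summed observations over a sliding training window, and the window length, array layout, and daily time step are illustrative choices.

```python
import numpy as np

def dmb_correct(fcst, obs, train_len=30):
    """Multiplicative (degree-of-mass-balance, DMB) bias correction.

    For each forecast time t, the DMB factor is the ratio of summed
    forecasts to summed observations over the previous train_len
    forecast-observation pairs. Dividing the new raw forecast by this
    factor removes the mean multiplicative bias seen during training.
    """
    fcst = np.asarray(fcst, dtype=float)
    obs = np.asarray(obs, dtype=float)
    corrected = np.full(fcst.shape, np.nan)  # spin-up period left unfilled
    for t in range(train_len, len(fcst)):
        dmb = fcst[t - train_len:t].sum() / obs[t - train_len:t].sum()
        corrected[t] = fcst[t] / dmb
    return corrected

def mae(forecast, observed):
    """Mean absolute error, skipping the unfilled spin-up period."""
    ok = ~np.isnan(forecast)
    return np.mean(np.abs(forecast[ok] - observed[ok]))

def maess(mae_corrected, mae_raw):
    """MAE skill score relative to the raw forecast; positive is better."""
    return 1.0 - mae_corrected / mae_raw
```

With this convention, an MAESS of 0.12 corresponds to the 12% improvement quoted above, and sweeping train_len over, say, 5 to 60 days reproduces the kind of training-period sensitivity test summarized in Fig. A.2.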
Table A.1: Mean Absolute Error (MAE) for each bias-corrected forecast initialized off the GFS. Sites 1-4 are the anonymous wind-farm locations. Statistics are divided into five blocks: Annual, Summer (June-August), Fall (September-November), Winter (December-February), and Spring (March-May). MAE is in m s-1.

Initial condition: GFS
PBL schemes (left to right): YSU | ACM2 | MRF | MYJ | MYNN | QNSE | UW | GBM
Within each scheme, columns are the 36-, 12-, and 4-km grid lengths, in that order.

Annual
Site 1: 2.30 2.16 2.45 | 2.28 2.12 2.41 | 2.27 2.20 2.46 | 2.34 2.15 2.50 | 2.35 2.26 2.52 | 2.41 2.24 2.62 | 2.40 2.19 2.53 | 2.35 2.15 2.47
Site 2: 1.83 1.82 1.79 | 1.76 1.77 1.68 | 1.81 1.86 1.82 | 1.83 1.85 1.77 | 1.92 1.92 1.86 | 1.91 1.93 1.82 | 1.85 1.88 1.85 | 1.78 1.80 1.79
Site 3: 1.81 1.77 1.77 | 1.78 1.72 1.73 | 1.77 1.73 1.69 | 1.78 1.71 1.79 | 1.87 1.81 1.80 | 1.89 1.81 1.87 | 1.86 1.77 1.83 | 1.75 1.72 1.76
Site 4: 1.87 1.61 1.62 | 1.85 1.61 1.60 | 1.86 1.66 1.66 | 1.82 1.60 1.62 | 1.83 1.65 1.64 | 1.88 1.62 1.63 | 1.85 1.64 1.67 | 1.80 1.61 1.61

Summer
Site 1: 2.04 1.99 2.14 | 1.95 1.90 2.08 | 1.97 1.91 2.03 | 2.10 2.01 2.14 | 2.10 2.03 2.13 | 2.22 2.14 2.28 | 2.14 2.03 2.21 | 2.09 2.00 2.15
Site 2: 1.70 1.70 1.64 | 1.58 1.59 1.53 | 1.69 1.65 1.58 | 1.72 1.72 1.62 | 1.80 1.78 1.68 | 1.80 1.81 1.71 | 1.75 1.77 1.69 | 1.72 1.71 1.64
Site 3: 1.64 1.61 1.60 | 1.56 1.53 1.52 | 1.66 1.62 1.55 | 1.61 1.57 1.56 | 1.75 1.70 1.64 | 1.72 1.67 1.68 | 1.71 1.65 1.60 | 1.62 1.61 1.58
Site 4: 1.58 1.21 1.32 | 1.57 1.21 1.28 | 1.64 1.25 1.34 | 1.64 1.21 1.37 | 1.64 1.26 1.36 | 1.62 1.22 1.36 | 1.59 1.23 1.38 | 1.57 1.17 1.29

Fall
Site 1: 2.32 2.13 2.36 | 2.32 2.12 2.34 | 2.31 2.19 2.43 | 2.34 2.10 2.41 | 2.36 2.30 2.51 | 2.42 2.21 2.52 | 2.42 2.22 2.51 | 2.38 2.14 2.42
Site 2: 1.79 1.83 1.82 | 1.72 1.77 1.71 | 1.76 1.92 1.87 | 1.77 1.87 1.78 | 1.93 2.00 1.94 | 1.86 1.93 1.81 | 1.82 1.92 1.86 | 1.76 1.84 1.81
Site 3: 1.75 1.70 1.72 | 1.70 1.62 1.66 | 1.73 1.68 1.68 | 1.68 1.63 1.76 | 1.81 1.73 1.77 | 1.80 1.72 1.87 | 1.78 1.71 1.82 | 1.68 1.62 1.71
Site 4: 1.96 1.84 1.74 | 1.97 1.86 1.73 | 1.95 1.90 1.79 | 1.86 1.82 1.72 | 1.89 1.88 1.74 | 1.97 1.88 1.76 | 1.99 1.91 1.81 | 1.85 1.82 1.73

Winter
Site 1: 2.67 2.45 2.98 | 2.76 2.45 3.01 | 2.76 2.62 3.14 | 2.75 2.44 3.10 | 2.78 2.62 3.10 | 2.71 2.47 3.14 | 2.84 2.44 3.06 | 2.80 2.40 2.99
Site 2: 2.03 1.90 1.92 | 2.03 1.93 1.80 | 2.05 2.05 2.05 | 2.01 1.97 1.87 | 2.11 2.01 1.94 | 2.04 1.99 1.87 | 2.01 1.98 1.95 | 1.90 1.87 1.88
Site 3: 1.96 1.85 1.79 | 2.05 1.89 1.87 | 1.92 1.85 1.79 | 1.96 1.81 1.91 | 2.02 1.90 1.84 | 2.03 1.85 1.91 | 1.98 1.81 1.91 | 1.86 1.78 1.83
Site 4: 2.16 1.97 1.98 | 2.10 1.93 1.94 | 2.06 1.99 2.02 | 2.08 1.98 1.97 | 2.07 1.98 1.98 | 2.15 1.97 1.95 | 2.10 1.95 1.99 | 2.03 1.96 1.96

Spring
Site 1: 2.16 2.06 2.32 | 2.11 2.00 2.21 | 2.04 2.07 2.25 | 2.17 2.03 2.36 | 2.17 2.09 2.33 | 2.29 2.16 2.54 | 2.20 2.09 2.36 | 2.16 2.06 2.33
Site 2: 1.82 1.84 1.78 | 1.74 1.77 1.69 | 1.76 1.81 1.80 | 1.80 1.85 1.82 | 1.85 1.88 1.86 | 1.94 1.99 1.89 | 1.82 1.87 1.92 | 1.76 1.79 1.82
Site 3: 1.90 1.93 1.95 | 1.83 1.83 1.87 | 1.77 1.75 1.75 | 1.88 1.83 1.91 | 1.89 1.90 1.93 | 2.01 1.97 2.03 | 1.95 1.92 2.00 | 1.85 1.85 1.91
Site 4: 1.68 1.33 1.35 | 1.67 1.35 1.34 | 1.72 1.41 1.39 | 1.64 1.29 1.35 | 1.68 1.38 1.38 | 1.71 1.31 1.37 | 1.67 1.35 1.42 | 1.67 1.35 1.38
Table A.2: Mean Absolute Error (MAE) for each bias-corrected forecast initialized off the NAM. Sites 1-4 are the anonymous wind-farm locations. Statistics are divided into five blocks: Annual, Summer (June-August), Fall (September-November), Winter (December-February), and Spring (March-May). MAE is in m s-1.

Initial condition: NAM
PBL schemes (left to right): YSU | ACM2 | MRF | MYJ | MYNN | QNSE | UW | GBM
Within each scheme, columns are the 36-, 12-, and 4-km grid lengths, in that order.

Annual
Site 1: 2.39 2.17 2.48 | 2.37 2.14 2.47 | 2.32 2.19 2.49 | 2.40 2.15 2.53 | 2.39 2.23 2.52 | 2.47 2.27 2.65 | 2.49 2.23 2.56 | 2.42 2.16 2.47
Site 2: 1.81 1.83 1.76 | 1.75 1.80 1.70 | 1.81 1.86 1.79 | 1.81 1.87 1.78 | 1.88 1.89 1.83 | 1.90 1.95 1.84 | 1.84 1.88 1.85 | 1.74 1.81 1.81
Site 3: 1.87 1.81 1.80 | 1.83 1.75 1.76 | 1.81 1.77 1.72 | 1.85 1.76 1.84 | 1.90 1.84 1.82 | 1.92 1.82 1.89 | 1.88 1.80 1.86 | 1.82 1.77 1.79
Site 4: 1.88 1.63 1.64 | 1.85 1.63 1.63 | 1.88 1.66 1.67 | 1.82 1.62 1.67 | 1.85 1.67 1.67 | 1.88 1.63 1.66 | 1.86 1.68 1.71 | 1.82 1.64 1.65

Summer
Site 1: 2.07 1.95 2.13 | 1.99 1.86 2.08 | 2.01 1.90 2.09 | 2.09 1.94 2.16 | 2.12 1.99 2.17 | 2.18 2.06 2.29 | 2.16 2.02 2.20 | 2.10 1.95 2.11
Site 2: 1.73 1.72 1.70 | 1.63 1.66 1.62 | 1.73 1.74 1.67 | 1.80 1.72 1.70 | 1.84 1.76 1.74 | 1.86 1.84 1.77 | 1.84 1.80 1.76 | 1.74 1.72 1.71
Site 3: 1.80 1.80 1.71 | 1.70 1.69 1.63 | 1.75 1.75 1.66 | 1.77 1.74 1.69 | 1.88 1.84 1.76 | 1.81 1.76 1.75 | 1.84 1.78 1.69 | 1.78 1.75 1.68
Site 4: 1.60 1.26 1.36 | 1.58 1.28 1.35 | 1.64 1.27 1.36 | 1.66 1.27 1.43 | 1.66 1.31 1.41 | 1.63 1.27 1.41 | 1.61 1.30 1.43 | 1.58 1.23 1.34

Fall
Site 1: 2.44 2.19 2.51 | 2.45 2.15 2.48 | 2.37 2.19 2.52 | 2.46 2.16 2.52 | 2.42 2.29 2.59 | 2.52 2.28 2.66 | 2.53 2.27 2.62 | 2.50 2.20 2.52
Site 2: 1.75 1.83 1.74 | 1.71 1.82 1.69 | 1.74 1.89 1.81 | 1.74 1.88 1.75 | 1.88 1.95 1.82 | 1.84 1.94 1.84 | 1.78 1.93 1.86 | 1.71 1.88 1.79
Site 3: 1.80 1.73 1.77 | 1.75 1.67 1.70 | 1.80 1.75 1.75 | 1.74 1.64 1.77 | 1.88 1.82 1.85 | 1.85 1.73 1.87 | 1.84 1.74 1.87 | 1.74 1.68 1.77
Site 4: 2.00 1.86 1.77 | 2.01 1.86 1.74 | 1.98 1.87 1.80 | 1.87 1.80 1.74 | 1.95 1.91 1.78 | 2.00 1.85 1.76 | 1.99 1.92 1.80 | 1.91 1.84 1.75

Winter
Site 1: 2.73 2.39 2.90 | 2.81 2.45 3.01 | 2.74 2.56 3.06 | 2.77 2.42 3.06 | 2.80 2.52 2.99 | 2.75 2.48 3.12 | 2.92 2.45 2.98 | 2.81 2.39 2.94
Site 2: 1.95 1.92 1.82 | 1.91 1.92 1.75 | 1.98 2.00 1.92 | 1.93 1.98 1.83 | 2.02 1.99 1.89 | 1.98 2.02 1.82 | 1.93 1.96 1.91 | 1.81 1.84 1.86
Site 3: 1.98 1.82 1.76 | 2.05 1.84 1.84 | 1.94 1.83 1.74 | 1.99 1.81 1.92 | 2.03 1.89 1.80 | 2.04 1.84 1.91 | 1.96 1.80 1.91 | 1.91 1.76 1.77
Site 4: 2.11 1.92 1.95 | 2.05 1.88 1.90 | 2.06 1.96 1.97 | 2.02 1.94 1.96 | 2.04 1.96 1.95 | 2.07 1.95 1.97 | 2.04 1.96 2.00 | 2.01 1.96 1.98

Spring
Site 1: 2.32 2.17 2.40 | 2.25 2.09 2.32 | 2.16 2.11 2.27 | 2.26 2.10 2.37 | 2.22 2.14 2.33 | 2.41 2.27 2.55 | 2.34 2.18 2.44 | 2.25 2.11 2.33
Site 2: 1.83 1.85 1.80 | 1.76 1.82 1.72 | 1.78 1.81 1.77 | 1.77 1.89 1.86 | 1.79 1.84 1.85 | 1.93 2.00 1.95 | 1.80 1.85 1.88 | 1.70 1.80 1.86
Site 3: 1.89 1.90 1.95 | 1.80 1.81 1.88 | 1.75 1.73 1.74 | 1.88 1.86 1.98 | 1.83 1.82 1.88 | 1.99 1.94 2.03 | 1.88 1.89 1.96 | 1.86 1.87 1.95
Site 4: 1.73 1.37 1.42 | 1.70 1.42 1.43 | 1.78 1.45 1.47 | 1.71 1.36 1.46 | 1.71 1.42 1.46 | 1.75 1.35 1.42 | 1.72 1.43 1.52 | 1.71 1.43 1.47

Figure A.1: Annual MAE skill score (MAESS), showing improvement in MAE resulting from bias correction. Skill is relative to the equivalent raw wind forecast. Colors represent individual wind-farm sites, with purple bars indicating the site-averaged performance. Larger positive values are better.

Figure A.2: Effect of the training-period length on forecast accuracy (MAESS), averaged over all 48 bias-corrected deterministic wind forecasts at each wind-farm site.

Appendix B: The pq Probability Distribution

The meteorological community is introduced here to a new frequency distribution that was devised for Chapter 3.
The distribution has a simple un-normalized one-sided form:

$$f_s(x) = (1 - x^p)^q \qquad (B1)$$

where p and q are the shape parameters and the distribution is defined between 0 and 1. It can be made into a normalized (unit-area) symmetric probability density f with arbitrary scaling parameter S and center-location parameter x_o:

$$f(x) = \frac{1}{A}\left[\,1 - \left|\frac{x - x_o}{S}\right|^p\,\right]^q \quad \text{for } (x_o - S) \le x \le (x_o + S) \qquad (B2)$$

where the area A under the un-normalized symmetric curve is

$$A = \frac{2S\,\Gamma(q+1)\,\Gamma(1/p)}{p\,\Gamma[q+1+(1/p)]} \qquad (B3)$$

and where Γ is the gamma function. The pq distribution is similar, but not identical, to the Kumaraswamy (1980) distribution and the beta distribution (Jambunathan, 1954).

The pq distribution has a theoretical mean absolute deviation of

$$\mathrm{MAD} = S\,\frac{\Gamma[q+1+(1/p)]\,\Gamma(2/p)}{\Gamma[q+1+(2/p)]\,\Gamma(1/p)} \qquad (B4)$$

and variance of

$$\sigma_x^2 = S^2\,\frac{\Gamma[q+1+(1/p)]\,\Gamma(3/p)}{\Gamma[q+1+(3/p)]\,\Gamma(1/p)}. \qquad (B5)$$

The fourth statistical moment is

$$\mu_4 = S^4\,\frac{\Gamma[q+1+(1/p)]\,\Gamma(5/p)}{\Gamma[q+1+(5/p)]\,\Gamma(1/p)} \qquad (B6)$$

from which the kurtosis can be found as $\mu_4/\sigma_x^4$. Skewness is zero for this symmetric distribution. Sometimes a situation may exist where part of the distribution tail on one or both sides must be cut off. For this situation, numerically integrate to get the area under the curve and the higher moments, using evenly spaced samples from (B2) instead of the theoretical expressions (B3)-(B6).

If you are given an observed distribution with scatter, then you can find the best-fit parameters (p, q, S, x_o) for the pq distribution as follows. If you know a priori (or want to fix) one or more of the parameters (typically S and/or x_o), then do so, and find the remaining parameters by one of two methods. The first is the method of moments: calculate the statistical moments of the observation data, then use the Newton-Raphson or another root-finding algorithm to find the pq-distribution parameters that give the same statistical moments. You will need as many statistical moments as unknown parameters.

A second approach is brute force, which is used here to help learn more about the sensitivity of the distribution to the choice of parameters. Namely, I loop through all reasonable values of p and q, and for each (p, q) combination find the mean absolute error between the pq distribution and the observed frequency distribution (after normalizing the observations to have unit area under the histogram). The best (p, q) combination is the one with minimum error. Fig. B.1 shows an example for one of the wind-farm forecast-error distributions.

While stepping through each (p, q) combination, a plot of the pq-distribution MAD, variance, and kurtosis is produced (Fig. B.2). You can use this as a poor-man's method of moments: namely, if you know the variance and kurtosis (or the MAD and kurtosis) from your observation data, then you can use those values in Fig. B.2 to find the approximate p and q values.
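To illustrate, the following is a minimal Python sketch of the pq density (B2)-(B3), its theoretical variance (B5), and the brute-force fit just described, assuming SciPy's gamma function and fixed, a-priori values of S and x_o. The search ranges for p and q are illustrative assumptions, not the exact ones used to produce Fig. B.1.

```python
import numpy as np
from scipy.special import gamma

def pq_pdf(x, p, q, S=1.0, xo=0.0):
    """Normalized symmetric pq probability density, Eqs. (B2)-(B3)."""
    A = 2.0 * S * gamma(q + 1.0) * gamma(1.0 / p) / (p * gamma(q + 1.0 + 1.0 / p))
    z = np.minimum(np.abs((np.asarray(x, dtype=float) - xo) / S), 1.0)
    # Clipping z at 1 makes the density exactly zero outside (xo - S, xo + S).
    return (1.0 - z**p) ** q / A

def pq_variance(p, q, S=1.0):
    """Theoretical variance, Eq. (B5); kurtosis follows from (B6) as mu4/sigma^4."""
    return S**2 * gamma(q + 1.0 + 1.0 / p) * gamma(3.0 / p) / (
        gamma(q + 1.0 + 3.0 / p) * gamma(1.0 / p))

def pq_fit(bin_centers, obs_density, S, xo,
           p_vals=np.arange(0.5, 5.0, 0.05), q_vals=np.arange(0.5, 8.0, 0.05)):
    """Brute-force (p, q) search minimizing the mean absolute error between
    the pq density and a unit-area observed histogram, with S and xo fixed."""
    best_p, best_q, best_err = None, None, np.inf
    for p in p_vals:
        for q in q_vals:
            err = np.mean(np.abs(pq_pdf(bin_centers, p, q, S, xo) - obs_density))
            if err < best_err:
                best_p, best_q, best_err = p, q, err
    return best_p, best_q, best_err
```

Minimizing the mean absolute error, rather than a squared-error measure, avoids overly weighting the tails of the distribution, consistent with the fitting note in the Fig. B.1 caption.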
Although the pq distribution cannot fit skewed distributions, its versatility for symmetric distributions is remarkable. Fig. B.3 shows extremes that approximate a delta function, a linear ramp shape, and a near-uniform distribution. Fig. B.4 shows that q controls the slope of the tails, while p controls the shape of the center of the distribution. The last frame in Fig. B.4 shows how the pq distribution can approximate Gaussian distributions, particularly for values of p ≈ 1.85.

Figure B.1: (a) Relative error (contoured, arbitrary units) between the observed and pq distributions as a function of parameters p and q. X marks the minimum error; namely, the best p and q values. S was fixed a priori at 10 m s-1 wind-forecast error, and p, q, and x_o were varied to find the distribution errors. (b) Resulting best-fit pq distribution (curve) to the observed histogram of wind-speed forecast errors (vertical bars). Note that this fitting method of minimizing the mean absolute error does not overly weight the tails of the distribution.

Figure B.2: Contour plots of (a) MAD (dashed line) and kurtosis (solid line), and (b) variance (dashed line) and kurtosis (solid line) for the pq distribution as a function of the p and q shape parameters.

Figure B.3: Some extreme examples of the pq distribution.

Figure B.4: Sample of some of the shapes produced by the pq distribution. The last shape is approximately Gaussian.

Appendix C: Terrain cross sections

To provide more insight into the terrain characteristics of the anonymous wind-farm locations, I have provided terrain cross sections for all four locations (Figs. C.1-C.4) in this appendix. Each cross section spans approximately 80 to 90 km in the horizontal, taken out of the predominant wind direction. Four panels are provided in each of Figs. C.1-C.4: cross sections taken from a digital elevation model (DEM) at 30-arc-second resolution, and from the WRF-model terrain on the 4-km, 12-km, and 36-km grids. To maintain confidentiality of the locations, actual terrain elevations are not shown. Instead, elevation is plotted with respect to the lowest elevation available in the cross-section area. I also do not link the images to any of the specific site numbers used in Chapters 2-4.

Figs. C.1-C.4 show that the wind farms are located along mountain ridges in the DEM. In addition, all of the wind farms have higher terrain in their immediate vicinity (e.g., within 40-50 km in the horizontal). Likewise, all locations show that the 4-km grids do represent the general shape of the terrain, but do not capture many of the small valleys and highest peaks. Nor do the 4-km WRF grids match the correct elevation. For the 12- and 36-km grids, the model terrain often takes the shape of an elongated slope, not placing the wind farms on any type of peak.

In Chapter 4, I find non-dimensional wind-shear and temperature-gradient values that are smaller than accepted theory for statically stable conditions. Because of the complexity of the terrain shown in Figs. C.1-C.4, it is possible that internal boundary layers generated by roughness features upstream could be one cause of enhanced turbulence during statically stable conditions. Dörenkämper et al. (2015) found that similar internal boundary layers could be advected several tens of kilometers over water surfaces. Also, under statically stable conditions, mountain-wave activity generated off any of the mountain tops shown in the cross sections may result in enhanced turbulence. Finally, slope flows are a common feature in complex terrain. Upslope and downslope flows will result in enhanced vertical mixing near mountain tops due to flow convergence or divergence, depending on the time of day.
Future work should be focused on determining the effects of these features on surface-layer similarity theories.

Figure C.1: Representation of terrain from a digital elevation model (DEM), the 4-km WRF grid, the 12-km WRF grid, and the 36-km WRF grid at anonymous site 1 (site label does not match other chapters). The red dot represents the wind-farm location.

Figure C.2: Representation of terrain from a digital elevation model (DEM), the 4-km WRF grid, the 12-km WRF grid, and the 36-km WRF grid at anonymous site 2 (site label does not match other chapters). The red dot represents the wind-farm location.

Figure C.3: Representation of terrain from a digital elevation model (DEM), the 4-km WRF grid, the 12-km WRF grid, and the 36-km WRF grid at anonymous site 3 (site label does not match other chapters). The red dot represents the wind-farm location.

Figure C.4: Representation of terrain from a digital elevation model (DEM), the 4-km WRF grid, the 12-km WRF grid, and the 36-km WRF grid at anonymous site 4 (site label does not match other chapters). The red dot represents the wind-farm location.
