Bolund Neutral

Scope

The benchmark revisits the 2009 blind test, now allowing participants to optimize their models to obtain the best match to the validation dataset.

Objectives

Test model fine-tuning strategies that will later be applied at complex-terrain sites. Evaluate turbulence models at a test site with well-defined boundary conditions.

Data Accessibility

The benchmark is offered to participants of the IEA Task 31 Wakebench.

Input data

The conditions for simulating the Bolund flow field in neutral conditions are:

  • Digitized map of the Bolund hill with 25 cm resolution; the water level is set to 0.75 m
  • Digitized roughness map: hill with z0 = 0.015 m, water with z0 = 0.0003 m, coastal area (X > 325 m) with z0 = 0.015 m (a minimal lookup sketch follows this list)
  • Inlet profiles: measured at M0 for westerly winds and at M9 for easterly winds
  • Coordinates of the met masts along lines A (239°) and B (270°)
  • No heat flux, gravity g = 9.81 m/s², Coriolis parameter f = 1e-4 s⁻¹
  • Obukhov length: L = ∞
  • Dry air with density ρ = 1.229 kg/m³ and dynamic viscosity μ = 1.73e-5 kg/(m·s)
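
As a minimal, non-authoritative sketch of how the roughness regions above could be looked up per point (the function name and the water/land flag are assumptions; the actual distribution is given by the digitized roughness map):

    def roughness_length(x, surface_is_water):
        """Return z0 in metres following the regions listed above.

        Illustrative only: in practice z0 is read from the digitized
        roughness map distributed with the benchmark, and the water/land
        distinction comes from that map rather than from a simple flag.
        """
        if x > 325.0:            # coastal area east of X = 325 m
            return 0.015
        if surface_is_water:     # water surface
            return 0.0003
        return 0.015             # Bolund hill

    # Example: a point over water, west of the coastline
    print(roughness_length(x=-100.0, surface_is_water=True))  # 0.0003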

Validation data

The validation dataset is composed of mean flow and turbulence data from cup and sonic anemometers at 10 met masts. Ensemble averages of 10-min averaged samples within a ±8° wind-direction sector, with wind speeds between 5 and 12 m/s at the 5 m level and under neutral conditions (|1/L| < 0.004 m⁻¹) at the upstream masts, were used to derive the validation dataset, which consists of:

  • Fractional speed-up ratio (FSR) and normalized added turbulent kinetic energy (TKE*) with respect to the reference inlet position, at 2 and 5 m above ground level along mast lines A and B
  • FSR and TKE* vertical profiles at the mast positions.

Velocity and TKE values will be normalized by the upstream friction velocity at the reference mast, as in Bechmann et al. (2011). The validation dataset includes mean and standard deviation statistics from the ensemble profiles.
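
For clarity, the two validation metrics can be written out explicitly. The sketch below assumes the definitions of Bechmann et al. (2011), with the reference quantities (subscript 0) taken at the same height above ground level at the upstream reference mast; the function names are illustrative only.

    import numpy as np

    def fractional_speedup(speed, speed_ref):
        """FSR = (S - S0) / S0, evaluated at the same height a.g.l."""
        speed, speed_ref = np.asarray(speed), np.asarray(speed_ref)
        return (speed - speed_ref) / speed_ref

    def normalized_added_tke(tke, tke_ref, ustar_ref):
        """TKE* = (k - k0) / u*0^2, normalized by the upstream friction velocity."""
        return (np.asarray(tke) - np.asarray(tke_ref)) / ustar_ref**2

    # Example with made-up numbers: 8 m/s at a hill mast vs 6 m/s upstream
    print(fractional_speedup(8.0, 6.0))          # 0.333...
    print(normalized_added_tke(1.2, 0.9, 0.4))   # 1.875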

Model runs

The inlet profile can be based on the neutral Monin-Obukhov log law, defined by the following input parameters:

  • Run 1: WD = 270°, z0 = 0.0003 m, TKE/u*² = 5.8, u* = 0.4 m/s
  • Run 2: WD = 255°, z0 = 0.0003 m, TKE/u*² = 5.8, u* = 0.4 m/s
  • Run 3: WD = 239°, z0 = 0.0003 m, TKE/u*² = 5.8, u* = 0.4 m/s
  • Run 4: WD = 90°, z0 = 0.015 m, TKE/u*² = 5.8, u* = 0.5 m/s

or by a best fit to the measured inlet profiles (at M0 for runs 1, 2 and 3, and at M9 for run 4) if the participant considers that this can improve the results; a sketch of the log-law option follows this paragraph. The computational domain must extend at least to X = ±400 m in order to include the coastline to the east and to ensure that the hill wake is completely covered. The origin of the coordinate system should be placed at the M3 position, with X pointing east, Y pointing north and Z pointing up.
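
This sketch assumes the von Kármán constant κ = 0.4 and the equilibrium dissipation relation ε = u*³/(κz), which are standard choices and not values prescribed by the benchmark:

    import numpy as np

    KAPPA = 0.4  # von Karman constant (assumed value)

    def loglaw_inlet(z, ustar, z0, tke_ratio=5.8):
        """Neutral log-law inlet profile.

        U(z)   = (u*/kappa) * ln(z/z0)
        k      = tke_ratio * u*^2        (constant with height)
        eps(z) = u*^3 / (kappa * z)      (standard equilibrium assumption)
        """
        z = np.asarray(z, dtype=float)
        u = (ustar / KAPPA) * np.log(z / z0)
        k = np.full_like(z, tke_ratio * ustar**2)
        eps = ustar**3 / (KAPPA * z)
        return u, k, eps

    # Run 1 parameters: westerly inflow over water
    u, k, eps = loglaw_inlet(z=np.array([2.0, 5.0, 10.0]), ustar=0.4, z0=0.0003)
    print(u)  # approximately [8.8, 9.7, 10.4] m/s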

Output data

The simulated validation profiles consist of horizontal profiles along lines A and B at 2 and 5 m height, and vertical profiles at the mast positions, of the velocity components (U, V, W), turbulence kinetic energy (tke), dissipation rate (tdr), friction velocity (u*) and kinematic momentum fluxes (uu, vv, ww). The profiles should traverse the simulated domain from boundary to boundary. Hence, the required outputs are, in this order: X (m), Y (m), Z (m), U (m/s), V (m/s), W (m/s), tke (m²/s²), tdr (m²/s³), us (m/s), uu (m²/s²), vv (m²/s²), ww (m²/s²).

Use the file naming and format convention described in the Windbench user's guide with profID = prof#, where # = [M0, M1, M2, M3, M5, M6, M7, M8, M9, A2, A5, B2, B5], i.e. 13 output files per user and model run. Additionally, users who participated in the 2009 blind test should also provide the output files obtained at that time. This will allow an assessment of the added value of on-site measurements for model tuning. Please follow the same format described above, but with BenchmarkID = Bolund_blind2009 to differentiate between the two sets of simulations.
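
As a non-authoritative illustration of the required column order and the 13 profile files per run (the exact file-name pattern and delimiters are those of the Windbench user's guide, which takes precedence; the pattern used here is a placeholder):

    import numpy as np

    # Required column order for every profile file
    COLUMNS = ["X(m)", "Y(m)", "Z(m)", "U(m/s)", "V(m/s)", "W(m/s)",
               "tke(m2/s2)", "tdr(m2/s3)", "us(m/s)",
               "uu(m2/s2)", "vv(m2/s2)", "ww(m2/s2)"]

    # 13 profile identifiers: vertical profiles at the masts plus the
    # horizontal profiles along lines A and B at 2 m and 5 m
    PROF_IDS = ["M0", "M1", "M2", "M3", "M5", "M6", "M7", "M8", "M9",
                "A2", "A5", "B2", "B5"]

    def write_profile(data, prof_id, run_id, user_id="user"):
        """Write one (N, 12) profile array as a whitespace-separated text file.

        The file-name pattern below is a placeholder; use the naming
        convention from the Windbench user's guide.
        """
        fname = f"{user_id}_run{run_id}_prof{prof_id}.dat"
        np.savetxt(fname, np.asarray(data), header=" ".join(COLUMNS))
        return fname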

Remarks

In order to evaluate the added value of model fine-tuning, it is important that you describe how the fine-tuning is performed. Please report the deviations with respect to the default settings (those of the blind test). There are no guidelines on the definition of the computational mesh, since this can have an important influence on the fine-tuning aspects of the model chain. Please describe how you integrate grid dependency in the evaluation process.

You can find the input and validation files, as well as other background documentation, on the official DTU website of the Bolund model intercomparison exercise.

Terms and Conditions

The benchmark is offered to participants of the IEA Task 31 Wakebench.