TITLE: Nearshore Wave Prediction System (NWPS) v1.3.0

CHANGE REQUEST CLASS: Major change

DESCRIPTION OF CHANGE:
This upgrade includes the following major changes:
- Improved algorithm for wave system tracking guidance, incl. reducing the low-frequency cut-off to 0.035 Hz (all 36 WFO domains).
- For 12 WFO domains (HGX, MOB, TAE, KEY, MLB, JAX, CHS, ILM, PHI, GYX, ALU, GUM), computation on unstructured domain meshes with variable resolution of 5 km to 200 m. (These are, however, interpolated onto the existing regular CG1-CG5 output grids for AWIPS ingest.)
- For 9 WFO domains (HGX, MOB, TAE, MLB, JAX, CHS, ILM, PHI, GYX), the addition of rip current and runup (erosion/overwash) guidance. For WFOs KEY and GUM, only rip current guidance is added.
- Inclusion of wave field transect output graphics.

The NWPS is run 1-8 times per day, on demand, depending on the coastal Weather Forecast Office (WFO). The 1-hourly grids will be disseminated in GRIB2 format, as well as in graphical (png) format on http://polar.ncep.noaa.gov/nwps/. Grid resolutions depend on the individual coastal WFO. Each WFO will receive integral (total wave field) model guidance on a CG1 grid, and on up to 4 nested grids (CG2-CG5). In addition, partitioned wave fields will be available on a lower-resolution CG0 grid, if requested:

CG0 grid - partition output on low-resolution version of overall computational domain CG1
CG1 grid - integral output on overall computational grid
CG2 grid - integral output on first nested grid, if applicable
CG3 grid - integral output on second nested grid, if applicable
CG4 grid - integral output on third nested grid, if applicable
CG5 grid - integral output on fourth nested grid, if applicable

In the case of the 12 new unstructured meshes, the results on the native unstructured meshes are interpolated onto the above regular grids for AWIPS display.

JOB DEPENDENCIES:
This system has upstream dependencies on the following WCOSS models:
- WW3 Multi_1 (wave boundary conditions)
- RTOFS Global (surface current fields)
- ESTOFS Atlantic, Pacific and Micronesia (water level fields)
- MMAB Sea Ice Analysis (sea ice fields)
- P-Surge (water level fields)
- GFS pgrb files (U10 fields for GFS fail-over option)

The system has upstream data dependencies on:
- User input received from AWIPS via LDM

This system currently has no downstream dependencies.

REQUIRED SYSTEM USAGE:

i) Input field preprocessing:
Preprocessing modules extract input fields from the upstream models ESTOFS, Sea Ice, RTOFS and P-Surge onto grids directly ingestible in NWPS. These preprocessing jobs run in parallel to the on-demand runs for each WFO (see below), and their output is staged for on-demand use in the latter.

ESTOFS water level and Sea Ice extraction:
  CPU time       : 0:46:54 h:min:sec
  Max/Ave Memory : 29 MB / 22.04 MB
  Num of CPUs    : 36 (cfp parallel, distrib. over 2 nodes)

RTOFS surface current extraction:
  CPU time       : 0:10:24 h:min:sec
  Max/Ave Memory : 131 MB / 39.11 MB
  Num of CPUs    : 1 (Serial, on 1 node)

P-Surge water level extraction (tropical conditions in SR/ER):
  CPU time       : 0:34:34 h:min:sec
  Max/Ave Memory : 28 MB / 22.18 MB
  Num of CPUs    : 22 (cfp parallel, distrib. over 2 nodes)

ii) Real-time, on-demand model:
For each of the 36 on-demand WFO runs, the following tasks are executed: PREP, FORECAST_CG1, POST_CG1, PRDGEN_CG1, WAVETRACK_CG1, PRDGEN_CG0. Additionally, up to 4 nested domains are run, using the tasks FORECAST_CGn, POST_CGn and PRDGEN_CGn in a loop over n (a sketch of this sequence is given below). The system usage for each of these tasks depends on the specific WFO domain and the number of nest grids.
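For clarity, the per-WFO task sequence can be summarized as the following illustrative shell sketch. This is not an operational script: "run_job" is a hypothetical stand-in for however the J-jobs are actually submitted and sequenced on WCOSS.

  # Illustrative sketch only: on-demand task sequence for one WFO run.
  run_job () { echo "submit $1"; }   # placeholder for the real job submission

  run_job JNWPS_PREP
  run_job JNWPS_FORECAST_CG1
  run_job JNWPS_POST_CG1
  run_job JNWPS_PRDGEN_CG1
  run_job JNWPS_WAVETRACK_CG1
  run_job JNWPS_PRDGEN_CG0

  # Up to 4 nested grids (CG2-CG5), when configured for the WFO:
  for n in 2 3 4 5; do
    run_job JNWPS_FORECAST_CG${n}
    run_job JNWPS_POST_CG${n}
    run_job JNWPS_PRDGEN_CG${n}
  done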
Example system usage values for the unstructured-mesh domain for WFO Miami, FL (MFL) are:

JNWPS_PREP
  CPU time       : 07:45 min:sec
  Max/Ave Memory : 79 MB / 48.64 MB
  Num of CPUs    : 1 (Serial)

JNWPS_FORECAST_CG1
  CPU time       : 53:01 min:sec
  Max/Ave Memory : 40 MB / 32.65 MB
  Num of CPUs    : 96 (Parallel)

JNWPS_POST_CG1
  CPU time       : 18:53 min:sec
  Max/Ave Memory : 158 MB / 27.68 MB
  Num of CPUs    : 11 (Parallel)

JNWPS_PRDGEN_CG1
  CPU time       : 00:07 min:sec
  Max/Ave Memory : 523 MB / 20.05 MB
  Num of CPUs    : 11 (Parallel)

JNWPS_WAVETRACK_CG1
  CPU time       : 19:20 min:sec
  Max/Ave Memory : 313 MB / 25.26 MB
  Num of CPUs    : 1 (Serial)

JNWPS_PRDGEN_CG0
  CPU time       : 00:02 min:sec
  Max/Ave Memory : 142 MB / 20.14 MB
  Num of CPUs    : 8 (Parallel)

JNWPS_FORECAST_CGn
  CPU time       : 06:44 min:sec
  Max/Ave Memory : 41 MB / 20.85 MB
  Num of CPUs    : 10 (Parallel)

JNWPS_POST_CGn
  CPU time       : 0:46 min:sec
  Max/Ave Memory : 52 MB / 20.52 MB
  Num of CPUs    : Up to 4 (Parallel)

JNWPS_PRDGEN_CGn
  CPU time       : 0:08 min:sec
  Max/Ave Memory : 172 MB / 20.22 MB
  Num of CPUs    : Up to 4 (Parallel)

Estimated average system usage for each of the 36 WFOs: 16-96 cores. This is determined by the size of the JNWPS_FORECAST_CG1 job above, which depends on the size/complexity of each WFO domain:

  MFL, KEY, AKQ      : 96 CPUs
  ALU                : 84 CPUs
  AER, AFG, LCH, MHX : 24 CPUs
  HFO                : 16 CPUs
  Remaining WFOs     : 48 CPUs

A total of 44 nodes on WCOSS TO4 (Cray) should be reserved for the on-demand operation, preferably EXCLUSIVE.

NWPS v1.3 has some product delays relative to v1.2. The tables below detail the delivery times per job for each WFO, measured relative to the run initiation time. The first table shows the timing for v1.3, and the second table shows the corresponding values for v1.2.

Delivery times of v1.3 products (min):

WFO   CG1  CG0  CG2  CG3  CG4  CG5
bro    25   37    -    -    -    -
crp    33   41    -    -    -    -
hgx    36   46   50    -    -    -
lch    31   42    -    -    -    -
lix    43   54   74    -    -    -
mob    36   47   51   52   52   53
tae    45   58   63   64   64   65
tbw    36   46   49    -    -    -
key    64   79   84    -    -    -
mfl    80   99  105  106  106  107
mlb    41   54   58    -    -    -
jax    31   43   47   48   48   49
sju    57   74   80   81    -    -
chs    42   54   57   58    -    -
ilm    47   62    -    -    -    -
mhx    69   79   84   85   85   86
akq    51   63    -    -    -    -
lwx    16   25    -    -    -    -
phi    34   46   49   50    -    -
okx    53   68   72   73   73    -
box    67   84   89    -    -    -
gyx    44   56   61   62    -    -
car    38   45   49   50    -    -
sgx    40   50   53   54    -    -
lox    23   42   59   60    -    -
mtr    28   44   77   78   79    -
eka    19   34   41    -    -    -
mfr    25   40   69   70   70   71
pqr    26   40   57   58   58   59
sew    22   36   46    -    -    -
ajk    30   41  120  121    -    -
aer    29   40  123  124    -    -
alu    55   72   79   80    -    -
afg    43    -   86   87    -    -
hfo    92  113  120  121  121  122
gum    32   45   48   49   49    -

Delivery times of v1.2 products (min):

WFO   CG1  CG0  CG2  CG3  CG4  CG5
bro    31   32    -    -    -    -
crp    33   34    -    -    -    -
hgx    54   56    -    -    -    -
lch    35   37    -    -    -    -
lix    42   44   76    -    -    -
mob    40   42   65   66   67   68
tae    26   27   65   66   66   67
tbw    38   40   55    -    -    -
key    31   32   86    -    -    -
mfl    98  101  114  115  115  116
mlb    33   36   68    -    -    -
jax    22   27   58   59   59   60
sju    58   63   79   80    -    -
chs    28   31   68   69    -    -
ilm    23   29    -    -    -    -
mhx    72   74   88   89   89   90
akq    55   58    -    -    -    -
lwx    15    -    -    -    -    -
phi    17   21   52   53    -    -
okx    59   62   74   74   75    -
box    64   67   82    -    -    -
gyx    17   19   66   67    -    -
car    41   42   58   59    -    -
sgx    51   54   66   67    -    -
lox    25   37   74   75    -    -
mtr    22   31   80   81   82    -
eka    17   29   45    -    -    -
mfr    23   33   74   75   75   76
pqr    24   29   58   59   59   60
sew    23   31   57    -    -    -
ajk    25   27  138  139    -    -
aer    24   27  134  135    -    -
alu    41   47    -    -    -    -
afg    40    -  101  102    -    -
hfo    95  106  123  124  124  125
gum    18   21   43   44   45    -
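For reference, the delay summary that follows can be reproduced from the two delivery-time tables above. The snippet below is a minimal sketch only; it assumes the two tables have been saved as whitespace-separated files v13.txt and v12.txt (columns: WFO, CG1, CG0, CG2, CG3, CG4, CG5, with "-" for grids that are not produced). Those file names are illustrative and not part of the NWPS package.

  # Report per-grid delays > 5 min; "new" marks products added in v1.3.
  paste v13.txt v12.txt | awk '{
    out = $1
    for (i = 2; i <= 7; i++) {
      t13 = $i; t12 = $(i + 7)
      if      (t13 == "-")     cell = " "        # grid not produced for this WFO
      else if (t12 == "-")     cell = "new"      # product added in v1.3
      else if (t13 - t12 > 5)  cell = t13 - t12  # delay in minutes
      else                     cell = "-"        # no delay greater than 5 min
      out = out "  " cell
    }
    print out
  }'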
The table below summarizes all product delays >5 min. New products in v1.3 are indicated by "new"; "-" indicates no delay greater than 5 min, and blank cells denote grids that are not produced for that WFO:

WFO   CG1  CG0  CG2  CG3  CG4  CG5
bro     -    -
crp     -    7
hgx     -    -  new
lch     -    -
lix     -   10    -
mob     -    -    -    -    -    -
tae    19   31    -    -    -    -
tbw     -    6    -
key    33   47    -
mfl     -    -    -    -    -    -
mlb     8   18    -
jax     9   16    -    -    -    -
sju     -   11    -    -
chs    14   23    -    -
ilm    24   33
mhx     -    -    -    -    -    -
akq     -    -
lwx     -  new
phi    17   25    -    -
okx     -    6    -    -    -
box     -   17    7
gyx    27   37    -    -
car     -    -    -    -
sgx     -    -    -    -
lox     -    -    -    -
mtr     6   13    -    -    -
eka     -    -    -
mfr     -    7    -    -    -    -
pqr     -   11    -    -    -    -
sew     -    -    -
ajk     -   14    -    -
aer     -   13    -    -
alu    14   25  new  new
afg     -         -    -
hfo     -    7    -    -    -    -
gum    14   24    -    -    -

Disk storage: Total disk footprint in all run directories, nwges, com and pcom is 2.5T (up from 1.9T in v1.2.13). Request data to be stored for 5 days.

HPSS tape storage: Total volume (36 WFOs, all domains) = 111 GB/day, incl. space of retrospective inputs (up from 88.5 GB/day in v1.2.13). Request data to be archived for 2 years.

BENEFITS OF CHANGE:
- Improved quality of wave system tracking guidance to aid in the production of separate wind sea and swell forecasts.
- For 12 unstructured WFO domains: higher nearshore grid resolution improves representation of the coastal geography and nearshore wave growth and propagation.
- For 12 unstructured WFO domains: high resolution enables the computation of rip current and erosion/overwash guidance to aid in the production of coastal hazard forecasts.
- Transect output provides a view of the wave guidance along high-impact tracks.

USER IMPACT STATEMENT:
Coastal WFOs in SR, ER, WR, PR and AR will be provided with enhanced, more consistent wave system guidance available in GFE (for editing) and D2D (for display). For the 12 WFOs transitioning to unstructured meshes, coastal wave physics will be resolved in more detail. For 21 WFOs (existing and new unstructured mesh domains), probabilistic rip current guidance will be available in GFE/D2D for the first time. Similarly, for 16 WFOs probabilistic erosion and overwash guidance will be available for the first time in GFE/D2D.

TECHNICAL IMPACT STATEMENT:
Three technical impacts will be seen at the user (WFO) level:
1) The product delays listed above.
2) The following GRIB2 fields will be removed from the SBN (see PNS20-47):
   - Water depth (WDEPTH) [m]
   - Wave length (WLENG) [m]
3) The following GRIB2 fields will be added to the SBN (configured in AWIPS v20.2.1); an example of checking for these fields in the output files is given after the RISKS section below:
   - Rip current probability (RIPCOP) [0-100%]
   - Erosion Occurrence Probability (EROSNP) [0-100%]
   - Overwash Occurrence Probability (OWASHP) [0-100%]
   - Total Water Level Accounting for Tide, Wind and Waves (TWLWAV) [m MSL]
   - Total Water Level Increase due to Waves (RUNUP) [m]
   - Mean Increase in Water Level due to Waves (SETUP) [m]
   - Time-varying Increase in Water Level due to Waves (SWASH) [m]
   - Total Water Level Above Dune Toe (TWLDT) [m]
   - Total Water Level Above Dune Crest (TWLDC) [m]

RISKS:
This on-demand system has two primary technical risks:
1. Dataflow: Each of the 23 coastal WFOs will submit a set of three files via LDM for their respective runs. The data volume of each submission will be up to 3 MB. There is a risk of insufficient capacity on the LDM to relay these data, or of the transmission being interrupted by a network outage. Should this occur, the NWPS runs will not initiate on WCOSS.
2. Capacity: The NWPS will handle individual runs of up to 23 coastal WFOs, but typically not all concurrently. Projections suggest that 44 nodes on WCOSS TO4 (Cray) will provide sufficient capacity, but the actual operational loading could vary.
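For reference, the presence of the added GRIB2 fields in an NWPS output file can be checked with the wgrib2 utility. The file name below is a placeholder, and listing these NWPS-specific parameter abbreviations in the inventory may require the appropriate local GRIB2 tables:

> wgrib2 nwps_CG1_output.grib2 -s | grep -E 'RIPCOP|EROSNP|OWASHP|TWLWAV|RUNUP|SETUP|SWASH|TWLDT|TWLDC'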
PROPOSED IMPLEMENTATION DATE: December 21, 2020   TIME: 00z

BUGFIXES:
- Bug 926: Add wind initialization time window checking in nwps (01/21/20)
- Bug 903: Decrease rsync timeout time in JNWPS_DATACHK (07/31/19)
- Bug 828: nwps_hfo_post_cg1 job failure with CYCLE 09 (07/31/19)
- Bug 881: Fix nwps_forecast_cg1 hang when restart files are missing (06/03/19)
- Bug 860: Improve handling of bad WFO submissions in the nwps prep job (04/29/19)
- Bug 845: Check for empty NWPS submission file (03/11/19)
- Bug 828: nwps_hfo_post_cg1 job failure with CYCLE 09 (11/20/18)
- Bug 678: Remove the Python warnings in NWPS code in next (11/17/17)
- Bug 677: NWPS executables do not match their top-level source directory names (11/16/17)
- Bug 676: Memory leak in nwps_forecast job (11/16/17)
- Bug 671: Add a warning in the prep job when failing over to GFS data (11/15/17)

IMPLEMENTATION INSTRUCTIONS

I) Checking out the code from VLab

Make the $NWROOT home directory for the model:
> mkdir nwps
> cd nwps

Clone the NWPS code from the following VLab repo, and check out the tag v1.3.0:
> git clone https://vlab.ncep.noaa.gov/code-review/EMC_nwps
> cd EMC_nwps
> git checkout v1.3.0

II) Building the executables

First, define the path variable ${NWPSdir} in your profile file, pointing to the base of the code checked out under (I) above:
> export NWPSdir=$(pwd)

Next, change directory to ${NWPSdir}/sorc/ and execute the general NWPS install script. This single step installs the complete package, including all binary compilations:
> cd ${NWPSdir}/sorc
> ./make_NWPS.sh

Once the compilations are done, all executables are moved to ${NWPSdir}/exec, and the system will be ready. The table below lists the executables, their respective source code directories, and build logs:

Executable                 | Source directory name                 | Build log
---------------------------+---------------------------------------+---------------------------
punswan4110.exe            | punswan4110.fd                        | punswan_build.log
swan.exe                   | swan.fd                               | swan_build.log
runupforecast.exe          | runupforecast.fd                      | runup_build.log
ripforecast.exe            | ripforecast.fd                        | ripcurrent_build.log
estofs_padcirc             | estofs_padcirc.fd                     | padcirc_build.log
estofs_adcprep (util)      | estofs_padcirc.fd                     | padcirc_build.log
ww3_sysprep.exe            | ww3_sysprep.fd                        | sysprep_build.log
degrib                     | degrib-2.15.cd                        | degrib_build.log
psurge2nwps                | psurge2nwps.cd                        | psurge2nwps_build.log
psurge_identify.exe (util) | psurge2nwps.cd                        | psurge_identify_build.log
psoutTOnwps.exe (util)     | psurge2nwps.cd                        | psoutTOnwps_build.log
psurge_combine.exe (util)  | psurge2nwps.cd                        | psurge_combine_build.log
                           |                                       |
check_awips_windfile       | nwps_utils.cd/check_awips_windfile    | nwps_utils_build.log
wave_runup_to_bin          | nwps_utils.cd/wave_runup_to_bin       | nwps_utils_build.log
rip_current_to_bin         | nwps_utils.cd/rip_current_to_bin      | nwps_utils_build.log
fix_ascii_point_data       | nwps_utils.cd/fix_ascii_point_data    | nwps_utils_build.log
read_awips_windfile        | nwps_utils.cd/read_awips_windfile     | nwps_utils_build.log
readdat                    | nwps_utils.cd/readdat                 | nwps_utils_build.log
seaice_mask                | nwps_utils.cd/seaice_mask             | nwps_utils_build.log
swan_out_to_bin            | nwps_utils.cd/swan_out_to_bin         | nwps_utils_build.log
swan_wavetrack_to_bin      | nwps_utils.cd/swan_wavetrack_to_bin   | nwps_utils_build.log
writedat                   | nwps_utils.cd/writedat                | nwps_utils_build.log
g2_write_template          | nwps_utils.cd/g2_write_template       | nwps_utils_build.log
shiproute_to_bin           | nwps_utils.cd/shiproute_to_bin        | nwps_utils_build.log
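As an optional post-build check (not part of make_NWPS.sh), the following sketch confirms that the main executables listed above were produced under ${NWPSdir}/exec; any missing name is reported. The list can be extended with the nwps_utils binaries as needed.

  # Verify the main build products exist and are executable.
  cd ${NWPSdir}/exec
  for exe in punswan4110.exe swan.exe runupforecast.exe ripforecast.exe \
             estofs_padcirc estofs_adcprep ww3_sysprep.exe degrib psurge2nwps \
             psurge_identify.exe psoutTOnwps.exe psurge_combine.exe; do
    [ -x "$exe" ] || echo "MISSING: $exe"
  done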