      PROGRAM WRFPOST
!$$$  MAIN PROGRAM DOCUMENTATION BLOCK
!                .      .    .
! MAIN PROGRAM: WRFPOST
!   PRGMMR: BALDWIN    ORG: NSSL/SPC    DATE: 2002-06-18
!
! ABSTRACT:
!   THIS PROGRAM DRIVES THE EXTERNAL WRF POST PROCESSOR.
!
! PROGRAM HISTORY LOG:
!   92-12-24  RUSS TREADON - CODED ETAPOST AS STAND ALONE CODE
!   98-05-29  BLACK - CONVERSION OF POST CODE FROM 1-D TO 2-D
!   00-02-04  JIM TUCCILLO - PARALLEL VERSION VIA MPI
!   01-02-15  JIM TUCCILLO - MANY COMMON BLOCKS REPLACED WITH MODULES
!             TO SUPPORT FORTRAN "ALLOCATE"s FOR THE EXACT SIZE OF THE
!             ARRAYS NEEDED BASED ON THE NUMBER OF MPI TASKS.
!             THIS WAS DONE TO REDUCE THE ADDRESS SPACE THAT THE LOADER SEES.
!             THESE CHANGES WERE NECESSARY FOR RUNNING LARGER DOMAINS SUCH AS
!             12 KM.
!   01-06-15  JIM TUCCILLO - ADDED ASYNCHRONOUS I/O CAPABILITY. IF THERE IS MORE
!             THAN ONE MPI TASK, THE I/O WILL BE DONE ASYNCHRONOUSLY BY THE LAST
!             MPI TASK.
!   02-06-17  MIKE BALDWIN - CONVERT ETAPOST TO WRFPOST. INCLUDE WRF I/O API
!             FOR INPUT OF MODEL DATA. MODIFY CODE TO DEAL WITH C-GRID
!             DATA. STREAMLINE OUTPUT TO A CALL OF ONE SUBROUTINE INSTEAD OF THREE.
!             REPLACE COMMON BLOCKS WITH A LIMITED NUMBER OF MODULES.
!   04-01-01  H CHUANG - ADDED NMM IO MODULE AND BINARY OPTIONS
!   05-07-08  BINBIN ZHOU - ADDED RSM MODEL
!   05-12-05  H CHUANG - ADDED CAPABILITY TO OUTPUT OFF-HOUR FORECAST WHICH HAS
!             NO IMPACT ON ON-HOUR FORECAST
!   06-02-20  CHUANG, BLACK, AND ROGERS - FINALIZED COMPLETE LIST OF NAM
!             OPERATIONAL PRODUCTS FROM WRF
!   06-02-27  H CHUANG - MODIFIED TO POST MULTIPLE FORECAST HOURS IN ONE EXECUTION
!   06-03-03  H CHUANG - ADDED PARRISH'S MPI BINARY IO TO READ BINARY
!             WRF FILE AS RANDOM ACCESS SO THAT VARIABLES IN WRF OUTPUT
!             DON'T HAVE TO BE READ IN A SPECIFIC ORDER
!   11-02-06  J WANG - ADD GRIB2 OPTION
!   11-12-14  SARAH LU - ADD THE OPTION TO READ NGAC AER FILE
!   12-01-28  J WANG - USE POST AVAILABLE FIELDS IN XML FILE FOR GRIB2
!
! USAGE:    WRFPOST
!   INPUT ARGUMENT LIST:
!     NONE
!
!   OUTPUT ARGUMENT LIST:
!     NONE
!
!   SUBPROGRAMS CALLED:
!     UTILITIES:
!       NONE
!     LIBRARY:
!       COMMON - CTLBLK
!                RQSTFLD
!
!   ATTRIBUTES:
!     LANGUAGE: FORTRAN 90
!     MACHINE : IBM RS/6000 SP
!$$$
!
!
!============================================================================================================
!
!     This is an MPI code. All array indexing is with respect to the global indices.
!     Loop indices look as follows for N MPI tasks.
!
!
!     Original                                               New
!     Index                                                  Index
!
!     JM  ----------------------------------------------- JEND
!   JM-1  -                                             - JEND_M
!   JM-2  -              MPI TASK N-1                   - JEND_M2
!         -                                             -
!         -                                             -
!         ----------------------------------------------- JSTA, JSTA_M, JSTA_M2
!         ----------------------------------------------- JEND, JEND_M, JEND_M2
!         -                                             -
!         -              MPI TASK N-2                   -
!         -                                             -
!         -                                             -
!         ----------------------------------------------- JSTA, JSTA_M, JSTA_M2
!
!                              .
!                              .
!                              .
!
!         ----------------------------------------------- JEND, JEND_M, JEND_M2
!         -                                             -
!         -              MPI TASK 1                     -
!         -                                             -
!         -                                             -
!         ----------------------------------------------- JSTA, JSTA_M, JSTA_M2
!         ----------------------------------------------- JEND, JEND_M, JEND_M2
!         -                                             -
!         -              MPI TASK 0                     -
!      3  -                                             - JSTA_M2
!      2  -                                             - JSTA_M
!      1  ----------------------------------------------- JSTA
!
!         1                                             IM
!
!
!     Jim Tuccillo
!     Jan 2000
!
!     README - Jim Tuccillo Feb 2001
!
!     Many common blocks have been replaced by modules to support Fortran
!     "allocate" commands. Many of the 3-D arrays are now allocated to be the
!     exact size required based on the number of MPI tasks. The dimensioning will be
!        x ( im,jsta_2l:jend_2u,lm)
!     Most 2-D arrays continue to be dimensioned (im,jm). This is fine, but please be aware
!     that the EXCH routine for arrays dimensioned (im,jm) is different from the one for arrays
!     dimensioned (im,jsta_2l:jend_2u). Also, be careful about passing any arrays dimensioned
!     (im,jsta_2l:jend_2u,lm). See examples in the code as to the correct calling sequence and
!     EXCH routine to use.
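!
!     A minimal commented sketch of the dimensioning pattern described above
!     (illustration only, not executed; as the note says, take the actual EXCH
!     calling sequence from the examples in the code itself -- the call shown
!     here is schematic, and the halo-row interpretation of the _2l/_2u bounds
!     is an assumption for the purpose of the sketch):
!
!       real, allocatable :: t(:,:,:)
!       allocate ( t(im,jsta_2l:jend_2u,lm) )   ! this task's rows only, with
!                                               ! the _2l/_2u bounds providing
!                                               ! extra halo rows below/above
!       do l = 1, lm
!         do j = jsta, jend                     ! owned rows, global j indices
!           do i = 1, im
!             t(i,j,l) = 0.                     ! work on the local subdomain
!           end do
!         end do
!         call exch ( t(1,jsta_2l,l) )          ! schematic halo update before
!       end do                                  ! any j-1/j+1 differencing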
!
!
!     ASYNCHRONOUS I/O HAS BEEN ADDED. THE LAST MPI TASK DOES THE I/O. IF THERE IS
!     ONLY ONE MPI TASK THEN TASK 0 DOES THE I/O.
!     THE CODE HAS GOTTEN A LITTLE KLUDGY. BASICALLY, IM, IMX AND IMOUT MUST BE EQUAL
!     AND REPRESENT THE VALUE USED IN THE MODEL. THE SAME HOLDS FOR JM, JMX AND JMOUT.
!
!     Jim Tuccillo June 2001
!
!
!===========================================================================================
!
      use gfsio_module,   only: gfsio_gfile, gfsio_init, gfsio_open, gfsio_getfilehead
      use nemsio_module,  only: nemsio_getheadvar, nemsio_gfile, nemsio_init, nemsio_open,  &
                                nemsio_getfilehead, nemsio_close
      use CTLBLK_mod,     only: filenameaer, me, num_procs, num_servers, mpi_comm_comp, datestr,     &
             mpi_comm_inter, filename, ioform, grib, idat, filenameflux, filenamed3d, gdsdegr,        &
             spldef, modelname, ihrst, lsmdef, vtimeunits, tprec, pthresh, datahandle, im, jm, lm,    &
             lp1, lm1, im_jm, isf_surface_physics, nsoil, spl, lsmp1, global,                         &
             jsta, jend, jsta_m, jend_m, jsta_2l, jend_2u, novegtype, icount_calmict, npset, datapd,  &
             lsm, fld_info, etafld2_tim, eta2p_tim, mdl2sigma_tim, cldrad_tim, miscln_tim,            &
             fixed_tim, time_output, imin, surfce2_tim, komax, ivegsrc
      use grib2_module,   only: gribit2, num_pset, nrecout, first_grbtbl, grib_info_finalize
      use sigio_module,   only: sigio_head
      use sigio_r_module, only: sigio_rropen, sigio_rrhead
!- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      implicit none
!
      type(gfsio_gfile)  :: gfile
      type(nemsio_gfile) :: nfile, ffile, rfile
      type(sigio_head)   :: sighead
      INCLUDE "mpif.h"
!
!     DECLARE VARIABLES.
!
!     SET HEADER WRITER FLAGS TO TRUE.
!
!temporary vars
!
      real(kind=8) :: time_initpost=0., INITPOST_tim=0., btim, timef, rtc
      real    rinc(5)
      integer :: status=0, iostatusD3D=0, iostatusFlux=0
      integer iii,l,k,ierr,nrec,ist,lusig,idrt
      integer :: PRNTSEC,iim,jjm,llm,ioutcount,itmp,iret,iunit,       &
                 iunitd3d,iyear,imn,iday,LCNTRL,ieof
      integer :: iostatusAER
!
      integer :: kpo,kth,kpv
      real,dimension(komax) :: po,th,pv
      namelist/nampgb/kpo,po,kth,th,kpv,pv,fileNameAER
      character startdate*19, SysDepInfo*80, IOWRFNAME*3, post_fname*80
      character cgar*1, cdum*4
!
!------------------------------------------------------------------------------
!     START HERE
!
      call start()
!
!     INITIALIZE MPI
      CALL SETUP_SERVERS(ME,                  &
     &                   NUM_PROCS,           &
     &                   NUM_SERVERS,         &
     &                   MPI_COMM_COMP,       &
     &                   MPI_COMM_INTER)
!
!     ME IS THE RANK
!     NUM_PROCS IS THE NUMBER OF TASKS DOING POSTING
!     NUM_SERVERS IS ONE IF THERE IS MORE THAN ONE TOTAL MPI TASK, OTHERWISE ZERO
!     MPI_COMM_COMP IS THE INTRACOMMUNICATOR
!     MPI_COMM_INTER IS THE INTERCOMMUNICATOR FOR COMMUNICATION BETWEEN TASK 0 OF THE
!     TASKS DOING THE POSTING AND THE I/O SERVER
!
!
!     IF WE HAVE MORE THAN 1 MPI TASK THEN WE WILL FIRE UP THE I/O SERVER.
!     THE LAST TASK ( IN THE CONTEXT OF MPI_COMM_WORLD ) IS THE I/O SERVER.
!
      print*,'ME,NUM_PROCS,NUM_SERVERS=',ME,NUM_PROCS,NUM_SERVERS

      if ( me .ge. num_procs ) then
!
         call server
!
      else
!
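!
!        The code below (after the divider) reads the control file 'itag'
!        from unit 5.  A hypothetical example of its layout is sketched here
!        for orientation only; the file name, date, and model name are
!        illustrative, the output-format line may be absent in older itag
!        files (the code then rewinds and assumes grib1), the extra file-name
!        lines are read only when MODELNAME is 'GFS', and the optional
!        &nampgb group supplies the POSTGPVARS level lists.  The annotations
!        after "<--" are not part of the file:
!
!          wrfout_d01_2011-06-18_12:00:00        <-- model output file  (a256)
!          netcdf                                <-- IOFORM             (a20)
!          grib2                                 <-- output format      (a5)
!          2011-06-18_12:00                      <-- DateStr            (a19)
!          NCAR                                  <-- MODELNAME          (a4)
!          &nampgb kpo=3,po=1000.,850.,500., /   <-- optional level namelist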
!**************************************************************************
!     read namelist
      open(5,file='itag')
  98  read(5,111,end=1000) fileName
      print*,'fileName= ',fileName
      read(5,113) IOFORM
      print*,'IOFORM= ',IOFORM
      read(5,120) grib
      print*,'OUTFORM= ',grib
      if(index(grib,"grib")==0) then
        grib='grib1'
        rewind(5,iostat=ierr)
        read(5,111,end=1000) fileName
        read(5,113) IOFORM
      endif
      print*,'OUTFORM2= ',grib
      read(5,112) DateStr
      read(5,114) MODELNAME
!     assume for now that the first date in the stdin file is the start date
      read(DateStr,300) iyear,imn,iday,ihrst,imin
      write(*,*) 'in WRFPOST iyear,imn,iday,ihrst,imin',    &
                 iyear,imn,iday,ihrst,imin
 300  format(i4,1x,i2,1x,i2,1x,i2,1x,i2)
      IDAT(1)=imn
      IDAT(2)=iday
      IDAT(3)=iyear
      IDAT(4)=ihrst
      IDAT(5)=imin
 111  format(a256)
 112  format(a19)
 113  format(a20)
 114  format(a4)
 120  format(a5)
      print*,'MODELNAME= ',MODELNAME,' grib=',grib
!Chuang: If model is GFS, read in flux file name from unit 5
      if(MODELNAME .EQ. 'GFS')then
        read(5,111,end=117)fileNameFlux
        print*,'first two file names in GFS= ',trim(fileName),   &
               trim(fileNameFlux)
      end if
 117  continue
      if(MODELNAME .EQ. 'GFS')then
        read(5,111,end=118)fileNameD3D
        print*,'D3D names in GFS= ',trim(fileNameD3D)
      end if
 118  continue
!      fileNameD3D=' '
!      if(MODELNAME .EQ. 'GFS')then
!        read(5,111,end=1118)fileNameAER
!        print*,'AER names in GFS= ',trim(fileNameAER)
!      end if
!1118  continue
!
!     set gdsdegr
      if(grib=='grib1') then
        gdsdegr=1000.
      else if (grib=='grib2') then
        gdsdegr=1000000.
      endif
      print *,'gdsdegr=',gdsdegr
!
!     set defaults for kpo, po, kth, th, kpv, pv
      kpo=0
      po=0
      kth=1
      th=(/320.,(0.,k=kth+1,komax)/)   ! isentropic level to output
      kpv=8
      pv=(/0.5,-0.5,1.0,-1.0,1.5,-1.5,2.0,-2.0,(0.,k=kpv+1,komax)/)
      if(MODELNAME.EQ.'RAPR')then
        read(5,*,iostat=iret,end=119) kpo
      else
        read(5,nampgb,iostat=iret,end=119)
      endif
!      if(kpo > komax)print*,'pressure levels cannot exceed ',komax; STOP
!      if(kth > komax)print*,'isent levels cannot exceed ',komax; STOP
!      if(kpv > komax)print*,'PV levels cannot exceed ',komax; STOP
 119  continue
      if(me==0)print*,'komax,iret for nampgb= ',komax,iret
      if(me==0)print*,'komax,kpo,kth,th,kpv,pv,fileNameAER= ',komax,kpo   &
     &         ,kth,th(1:kth),kpv,pv(1:kpv),trim(fileNameAER)
!     set up pressure levels from POSTGPVARS or DEFAULT
      if(kpo == 0)then
!     use default pressure levels
        print*,'using default pressure levels,spldef=',(spldef(l),l=1,lsmdef)
        lsm=lsmdef
        do l=1,lsm
          spl(l)=spldef(l)
        end do
      else
!     use POSTGPVARS
        print*,'using pressure levels from POSTGPVARS'
        if(MODELNAME.EQ.'RAPR')then
          read(5,*) (po(l),l=1,kpo)
        endif
        lsm=kpo
        if(po(lsm)
!                              .
!                              .
!                              .
                   >=num_pset) go to 20
      endif
!
!     PROCESS NEXT GRID.
!
      GO TO 10
!
!     ALL GRIDS PROCESSED.
!
 20   CONTINUE
!
!-------
      call grib_info_finalize()
!
      IF(ME.EQ.0)THEN
        WRITE(6,*)' '
        WRITE(6,*)'ALL GRIDS PROCESSED.'
        WRITE(6,*)' '
      ENDIF
!
      call DE_ALLOCATE
!
!      if(IOFORM .EQ. 'netcdf')THEN
!        call ext_ncd_ioclose ( DataHandle, Status )
!      else
!        call ext_int_ioclose ( DataHandle, Status )
!      end if
!
      GO TO 98
 1000 CONTINUE
!exp  call ext_ncd_ioclose ( DataHandle, Status )
!
      print*, 'INITPOST_tim = ',       INITPOST_tim*1.0e-3
      print*, 'MDLFLD_tim = ',         ETAFLD2_tim*1.0e-3
      print*, 'MDL2P_tim = ',          ETA2P_tim*1.0e-3
      print*, 'MDL2SIGMA_tim = ',      MDL2SIGMA_tim*1.0e-3
      print*, 'SURFCE_tim = ',         SURFCE2_tim*1.0e-3
      print*, 'CLDRAD_tim = ',         CLDRAD_tim*1.0e-3
      print*, 'MISCLN_tim = ',         MISCLN_tim*1.0e-3
      print*, 'FIXED_tim = ',          FIXED_tim*1.0e-3
      print*, 'Total time = ',         (timef() - btim) * 1.0e-3
      print*, 'Time for OUTPUT = ',    time_output
      print*, 'Time for INITPOST = ',  time_initpost
!
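!
!     A minimal sketch of the timing accounting reported above (assumption:
!     timef() returns wall-clock time in milliseconds, which is why the totals
!     are scaled by 1.0e-3 to print seconds; the section name below is
!     hypothetical, and the lines are illustration only, not executed):
!
!       btim         = timef()                          ! stamp before a section
!       call some_post_section()                        ! hypothetical work
!       INITPOST_tim = INITPOST_tim + (timef() - btim)  ! accumulate milliseconds
!       print*, 'INITPOST_tim = ', INITPOST_tim*1.0e-3  ! report in seconds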
!
!     END OF PROGRAM.
!
!
!     MPI_LAST WILL SHUT DOWN THE I/O SERVER, IF IT EXISTS.
!
      CALL MPI_LAST
!
!
      end if
!
!
!
      call summary()
      CALL MPI_FINALIZE(IERR)

      STOP 0
      END