# 1 "../mpp/mpp_io.F90"
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS.  If not, see <http://www.gnu.org/licenses/>.
!***********************************************************************
!-----------------------------------------------------------------------
! Parallel I/O for message-passing codes
!
! AUTHOR: V. Balaji (vb@gfdl.gov)
! SGI/GFDL Princeton University
!
! This program is free software; you can redistribute it and/or modify
! it under the terms of the GNU General Public License as published by
! the Free Software Foundation; either version 2 of the License, or
! (at your option) any later version.
!
! This program is distributed in the hope that it will be useful,
! but WITHOUT ANY WARRANTY; without even the implied warranty of
! MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
! GNU General Public License for more details.
!
! For the full text of the GNU General Public License,
! write to: Free Software Foundation, Inc.,
! 675 Mass Ave, Cambridge, MA 02139, USA.
!-----------------------------------------------------------------------
!
! V. Balaji
!
!
!
!
! mpp_io_mod is a set of simple calls for parallel I/O on
! distributed systems. It is geared toward the writing of data in netCDF
! format. It requires the modules mpp_domains_mod and mpp_mod, upon which it is built.
!
!
! In massively parallel environments, an often difficult problem is
! the reading and writing of data to files on disk. MPI-IO and MPI-2 IO
! are moving toward providing this capability, but are currently not
! widely implemented. Further, the API is rather
! abstruse. mpp_io_mod is an attempt at a simple API encompassing a
! certain variety of the I/O tasks that will be required. It does not
! attempt to be an all-encompassing standard such as MPI; however, it
! can be implemented in MPI if so desired. It is equally simple to add
! parallel I/O capability to mpp_io_mod based on vendor-specific
! APIs while providing a layer of insulation for user codes.
!
! The mpp_io_mod parallel I/O API is built on top of the mpp_domains_mod and mpp_mod APIs for domain decomposition and
! message passing. Features of mpp_io_mod include:
!
! 1) Simple, minimal API, with free access to underlying API for more
! complicated stuff.
! 2) Self-describing files: comprehensive header information
! (metadata) in the file itself.
! 3) Strong focus on performance of parallel write: the climate models
! for which it is designed typically read a minimal amount of data
! (typically at the beginning of the run), but tend to write copious
! amounts of data during the run. An interface for
! reading is also supplied, but its performance has not yet been optimized.
! 4) Integrated netCDF capability: netCDF is a
! data format widely used in the climate/weather modeling
! community. netCDF is considered the principal medium of data storage
! for mpp_io_mod. But I provide a raw unformatted
! fortran I/O capability in case netCDF is not an option, either due to
! unavailability, inappropriateness, or poor performance.
! 5) May require off-line post-processing: a tool for this purpose,
! mppnccombine, is available. GFDL users may use
! ~hnv/pub/mppnccombine. Outside users may obtain the
! source here. It
! can be compiled with any C compiler and linked with the netCDF
! library. The program is free and is covered by the GPL license.
!
! The internal representation of the data being written out is
! assumed to be the default real type, which can be 4- or 8-byte. Time data
! are always written as 8-byte reals to avoid overflow on climatic time
! scales when time is expressed in units of seconds.
!
!
! I/O modes in mpp_io_mod
!
! The I/O activity critical to performance in the models for which
! mpp_io_mod is designed is typically the writing of large
! datasets on a model grid volume produced at intervals during
! a run. Consider a 3D grid volume, where model arrays are stored as
! (i,j,k). The domain decomposition is typically along
! i or j: thus to store data to disk as a global
! volume, the distributed chunks of data have to be seen as
! non-contiguous. If we attempt to have all PEs write this data into a
! single file, performance can be seriously compromised because of the
! data reordering that will be required. Possible options are to have
! one PE acquire all the data and write it out, to have all PEs write to
! a single shared file, or to have each PE write an independent file,
! with the files recombined offline. These three
! modes of operation are described in the mpp_io_mod terminology
! in terms of two parameters, threading and fileset,
! as follows:
!
! Single-threaded I/O: a single PE acquires all the data
! and writes it out.
! Multi-threaded, single-fileset I/O: many PEs write to a
! single file.
! Multi-threaded, multi-fileset I/O: many PEs write to
! independent files. This is also called distributed I/O.
!
! The middle option is the one for which it is most difficult to achieve good performance. The
! choice of one of these modes is made when a file is opened for I/O, in
! mpp_open.
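!
! As an illustration, the mode is selected through optional arguments to
! mpp_open. The lines below are a minimal sketch only (file names are made
! up; see the mpp_open documentation for the full argument list):
!
!   !single-threaded I/O: one PE writes the global volume
!   call mpp_open( unit, 'single.nc', action=MPP_WRONLY, form=MPP_NETCDF, &
!                  threading=MPP_SINGLE )
!   !multi-threaded, multi-fileset (distributed) I/O: each PE writes its own file
!   call mpp_open( unit, 'multi.nc', action=MPP_WRONLY, form=MPP_NETCDF, &
!                  threading=MPP_MULTI, fileset=MPP_MULTI )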
!
!
! Metadata in mpp_io_mod
!
! A requirement of the design of mpp_io_mod is that the file must
! be entirely self-describing: comprehensive header information
! describing its contents is present in the header of every file. The
! header information follows the model of netCDF. Variables in the file
! are divided into axes and fields. An axis describes a
! co-ordinate variable, e.g. x, y, z or t. A field consists of data in
! the space described by the axes. An axis is described in
! mpp_io_mod using the defined type axistype:
!
!
! type, public :: axistype
! sequence
! character(len=128) :: name
! character(len=128) :: units
! character(len=256) :: longname
! character(len=8) :: cartesian
! integer :: len
! integer :: sense !+/-1, depth or height?
! type(domain1D), pointer :: domain
! real, dimension(:), pointer :: data
! integer :: id, did
! integer :: type ! external NetCDF type format for axis data
! integer :: natt
! type(atttype), pointer :: Att(:) ! axis attributes
! end type axistype
!
!
! A field is described using the type fieldtype:
!
!
! type, public :: fieldtype
! sequence
! character(len=128) :: name
! character(len=128) :: units
! character(len=256) :: longname
! real :: min, max, missing, fill, scale, add
! integer :: pack
! type(axistype), dimension(:), pointer :: axes
! integer, dimension(:), pointer :: size
! integer :: time_axis_index
! integer :: id
! integer :: type ! external NetCDF format for field data
! integer :: natt, ndim
! type(atttype), pointer :: Att(:) ! field metadata
! end type fieldtype
!
!
! An attribute (global, field or axis) is described using the atttype:
!
!
! type, public :: atttype
! sequence
! integer :: type, len
! character(len=128) :: name
! character(len=256) :: catt
! real(FLOAT_KIND), pointer :: fatt(:)
! end type atttype
!
!
! This default set of field attributes corresponds
! closely to various conventions established for netCDF files. The
! pack attribute of a field defines whether or not a
! field is to be packed on output. Allowed values of
! pack are 1, 2, 4 and 8. The value of
! pack is the number of values written into 8
! bytes. In typical use, we write 4-byte reals to netCDF output; thus
! the default value of pack is 2. For
! pack = 4 or 8, packing uses a simple-minded linear
! scaling scheme using the scale and add
! attributes. There is thus likely to be a significant loss of dynamic
! range with packing. When a field is declared to be packed, the
! missing and fill attributes, if
! supplied, are packed also.
!
! Please note that the pack values are the same even if the default
! real is 4 bytes, i.e. PACK=1 still follows the definition
! above and writes out 8 bytes.
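!
! For reference, packing follows the usual netCDF linear-scaling
! convention. A sketch of the relationship, in terms of the scale and
! add attributes defined above:
!
!   packed_value   = nint( (unpacked_value - add)/scale )
!   unpacked_value = packed_value*scale + add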
!
! A set of attributes for each variable is also available. The
! variable definitions and attribute information are written/read by calling
! mpp_write_meta or mpp_read_meta. A typical calling
! sequence for writing data might be:
!
!
! ...
! type(domain2D), dimension(:), allocatable, target :: domain
! type(fieldtype) :: field
! type(axistype) :: x, y, z, t
! ...
! call mpp_define_domains( (/1,nx,1,ny/), domain )
! allocate( a(domain(pe)%x%data%start_index:domain(pe)%x%data%end_index, &
! domain(pe)%y%data%start_index:domain(pe)%y%data%end_index,nz) )
! ...
! call mpp_write_meta( unit, x, 'X', 'km', 'X distance', &
! domain=domain(pe)%x, data=(/(float(i),i=1,nx)/) )
! call mpp_write_meta( unit, y, 'Y', 'km', 'Y distance', &
! domain=domain(pe)%y, data=(/(float(i),i=1,ny)/) )
! call mpp_write_meta( unit, z, 'Z', 'km', 'Z distance', &
! data=(/(float(i),i=1,nz)/) )
! call mpp_write_meta( unit, t, 'Time', 'second', 'Time' )
!
! call mpp_write_meta( unit, field, (/x,y,z,t/), 'a', '(m/s)', 'AAA', &
! missing=-1e36 )
! ...
! call mpp_write( unit, x )
! call mpp_write( unit, y )
! call mpp_write( unit, z )
! ...
!
!
! In this example, x and y have been
! declared as distributed axes, since a domain decomposition has been
! associated. z and t are undistributed
! axes. t is known to be a record axis (netCDF
! terminology) since we do not allocate the data element
! of the axistype. Only one record axis may be
! associated with a file. The call to mpp_write_meta initializes
! the axes, and associates a unique variable ID with each axis. The call
! to mpp_write_meta with argument field
! declares field to be a 4D variable that is a function
! of (x,y,z,t), and associates a unique variable ID
! with it. A 3D field will be written at each call to
! mpp_write(field).
!
! The data for any variable, including axes, are written by
! mpp_write.
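!
! For example, the distributed field of the previous listing might then be
! written once per time level. This is a sketch only: nt, times(:) and the
! loop index n are assumed here, and the time stamp is passed as the
! optional final argument of mpp_write:
!
!   do n = 1,nt
!      call mpp_write( unit, field, domain(pe), a, times(n) )
!   end do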
!
! Any additional attributes of variables can be added through
! subsequent mpp_write_meta calls, using the variable ID as a
! handle. Global attributes, associated with the dataset as a
! whole, can also be written thus. See the mpp_write_meta call syntax below
! for further details.
!
! You cannot interleave calls to mpp_write and
! mpp_write_meta: the first call to
! mpp_write implies that metadata specification is
! complete.
!
! A typical calling sequence for reading data might be:
!
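! (A sketch only: the argument lists follow the description below and the
! interfaces declared in this module; pe, nz and the array a are as in the
! writing example above, and 'input.nc' is a made-up file name.)
!
! integer :: unit, natt, nvar, ntime, n
! type(domain2D), dimension(:), allocatable, target :: domain
! type(atttype), allocatable :: global_atts(:)
! type(fieldtype), allocatable :: fields(:)
! real, allocatable :: times(:)
! ...
! call mpp_open( unit, 'input.nc', action=MPP_RDONLY, form=MPP_NETCDF )
! call mpp_read_meta( unit )
! call mpp_get_info( unit, natt, nvar, ntime )
! allocate( global_atts(natt) ); call mpp_get_atts( unit, global_atts )
! allocate( fields(nvar) );      call mpp_get_fields( unit, fields )
! allocate( times(ntime) );      call mpp_get_times( unit, times )
! ...
! do n = 1,ntime
!    call mpp_read( unit, fields(1), domain(pe), a, n )
! end do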
!
!
! In this example, the data are distributed as in the previous
! example. The call to mpp_read_meta initializes
! all of the metadata associated with the file, including global
! attributes, variable attributes and non-record dimension data. The
! call to mpp_get_info returns the number of global
! attributes (natt), variables (nvar) and
! time levels (ntime) associated with the file
! identified by a unique ID (unit).
! mpp_get_atts returns all global attributes for
! the file in the derived type atttype(natt).
! mpp_get_fields returns the variable descriptors
! (fieldtype(nvar)). Since record dimension data are not allocated
! by mpp_write, a separate call to mpp_get_times is required to access
! the record dimension (time) values. Subsequent calls to
! mpp_read return the field data arrays corresponding to
! the fieldtype. The domain type is an optional
! argument. If domain is omitted, the incoming field
! array should be dimensioned for the global domain; otherwise, the
! field data are assigned to the compute domain of a local array.
!
! Multi-fileset reads are not supported with mpp_read.
!
module mpp_io_mod
# 1 "../include/fms_platform.h" 1
! -*-f90-*-*
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS.  If not, see <http://www.gnu.org/licenses/>.
!***********************************************************************
!Set type kinds.
# 37
!These values are not necessarily portable.
!DEC$ MESSAGE:'Using 8-byte addressing'
!Control "pure" functions.
# 54
!DEC$ MESSAGE:'Using pure routines.'
!Control array members of derived types.
# 66
!DEC$ MESSAGE:'Using allocatable derived type array members.'
!Control use of cray pointers.
# 78
!DEC$ MESSAGE:'Using cray pointers.'
!Control size of integers that will hold address values.
!Appears for legacy reasons, but seems rather dangerous.
# 89
!If you do not want to use 64-bit integers.
# 95
!If you do not want to use 32-bit floats.
# 106
!If you want to use quad-precision.
# 115
# 330 "../mpp/mpp_io.F90" 2
use mpp_parameter_mod, only : MPP_WRONLY, MPP_RDONLY, MPP_APPEND, MPP_OVERWR, MPP_ASCII
use mpp_parameter_mod, only : MPP_IEEE32, MPP_NATIVE, MPP_NETCDF, MPP_SEQUENTIAL
use mpp_parameter_mod, only : MPP_DIRECT, MPP_SINGLE, MPP_MULTI, MPP_DELETE, MPP_COLLECT
use mpp_parameter_mod, only : MPP_DEBUG, MPP_VERBOSE, NULLUNIT, NULLTIME, ALL_PES
use mpp_parameter_mod, only : CENTER, EAST, NORTH, CORNER
use mpp_parameter_mod, only : MAX_FILE_SIZE, GLOBAL_ROOT_ONLY, XUPDATE, YUPDATE
use mpp_mod, only : mpp_error, FATAL, WARNING, NOTE, stdin, stdout, stderr, stdlog
use mpp_mod, only : mpp_pe, mpp_root_pe, mpp_npes, lowercase, mpp_transmit, mpp_sync_self
use mpp_mod, only : mpp_init, mpp_sync, mpp_clock_id, mpp_clock_begin, mpp_clock_end
use mpp_mod, only : MPP_CLOCK_SYNC, MPP_CLOCK_DETAILED, CLOCK_ROUTINE
use mpp_mod, only : input_nml_file, mpp_gather, mpp_broadcast
use mpp_mod, only : mpp_send, mpp_recv, mpp_sync_self, EVENT_RECV, COMM_TAG_1
use mpp_domains_mod, only : domain1d, domain2d, NULL_DOMAIN1D, mpp_domains_init
use mpp_domains_mod, only : mpp_get_global_domain, mpp_get_compute_domain
use mpp_domains_mod, only : mpp_get_data_domain, mpp_get_memory_domain, mpp_get_pelist
use mpp_domains_mod, only : mpp_update_domains, mpp_global_field, mpp_domain_is_symmetry
use mpp_domains_mod, only : operator( .NE. ), mpp_get_domain_shift, mpp_get_UG_compute_domains
use mpp_domains_mod, only : mpp_get_io_domain, mpp_domain_is_tile_root_pe, mpp_get_domain_tile_root_pe
use mpp_domains_mod, only : mpp_get_tile_id, mpp_get_tile_npes, mpp_get_io_domain_layout
use mpp_domains_mod, only : mpp_get_domain_name, mpp_get_domain_npes
use mpp_parameter_mod, only : MPP_FILL_DOUBLE,MPP_FILL_INT
use mpp_mod, only : mpp_chksum
!----------
!ug support
use mpp_domains_mod, only: domainUG, &
mpp_get_UG_io_domain, &
mpp_domain_UG_is_tile_root_pe, &
mpp_get_UG_domain_tile_id, &
mpp_get_UG_domain_npes, &
mpp_get_io_domain_UG_layout, &
mpp_get_UG_compute_domain, &
mpp_get_UG_domain_pelist
!----------
implicit none
private
# 1 "/apps/prod/hpc-stack/intel-19.1.3.304/cray-mpich-8.1.4/netcdf/4.7.4/include/netcdf.inc" 1
! NetCDF-3.
!
! netcdf version 3 fortran interface:
!
!
! external netcdf data types:
!
integer nf_byte
integer nf_int1
integer nf_char
integer nf_short
integer nf_int2
integer nf_int
integer nf_float
integer nf_real
integer nf_double
integer nf_ubyte
integer nf_ushort
integer nf_uint
integer nf_int64
integer nf_uint64
parameter (nf_byte = 1)
parameter (nf_int1 = nf_byte)
parameter (nf_char = 2)
parameter (nf_short = 3)
parameter (nf_int2 = nf_short)
parameter (nf_int = 4)
parameter (nf_float = 5)
parameter (nf_real = nf_float)
parameter (nf_double = 6)
parameter (nf_ubyte = 7)
parameter (nf_ushort = 8)
parameter (nf_uint = 9)
parameter (nf_int64 = 10)
parameter (nf_uint64 = 11)
!
! default fill values:
!
integer nf_fill_byte
integer nf_fill_int1
integer nf_fill_char
integer nf_fill_short
integer nf_fill_int2
integer nf_fill_int
real nf_fill_float
real nf_fill_real
doubleprecision nf_fill_double
parameter (nf_fill_byte = -127)
parameter (nf_fill_int1 = nf_fill_byte)
parameter (nf_fill_char = 0)
parameter (nf_fill_short = -32767)
parameter (nf_fill_int2 = nf_fill_short)
parameter (nf_fill_int = -2147483647)
parameter (nf_fill_float = 9.9692099683868690e+36)
parameter (nf_fill_real = nf_fill_float)
parameter (nf_fill_double = 9.9692099683868690d+36)
!
! mode flags for opening and creating a netcdf dataset:
!
integer nf_nowrite
integer nf_write
integer nf_clobber
integer nf_noclobber
integer nf_fill
integer nf_nofill
integer nf_lock
integer nf_share
integer nf_64bit_offset
integer nf_64bit_data
integer nf_cdf5
integer nf_sizehint_default
integer nf_align_chunk
integer nf_format_classic
integer nf_format_64bit
integer nf_format_64bit_offset
integer nf_format_64bit_data
integer nf_format_cdf5
integer nf_diskless
integer nf_mmap
parameter (nf_nowrite = 0)
parameter (nf_write = 1)
parameter (nf_clobber = 0)
parameter (nf_noclobber = 4)
parameter (nf_fill = 0)
parameter (nf_nofill = 256)
parameter (nf_lock = 1024)
parameter (nf_share = 2048)
parameter (nf_64bit_offset = 512)
parameter (nf_64bit_data = 32)
parameter (nf_cdf5 = nf_64bit_data)
parameter (nf_sizehint_default = 0)
parameter (nf_align_chunk = -1)
parameter (nf_format_classic = 1)
parameter (nf_format_64bit = 2)
parameter (nf_format_64bit_offset = nf_format_64bit)
parameter (nf_format_64bit_data = 5)
parameter (nf_format_cdf5 = nf_format_64bit_data)
parameter (nf_diskless = 8)
parameter (nf_mmap = 16)
!
! size argument for defining an unlimited dimension:
!
integer nf_unlimited
parameter (nf_unlimited = 0)
!
! global attribute id:
!
integer nf_global
parameter (nf_global = 0)
!
! implementation limits:
!
integer nf_max_dims
integer nf_max_attrs
integer nf_max_vars
integer nf_max_name
integer nf_max_var_dims
parameter (nf_max_dims = 1024)
parameter (nf_max_attrs = 8192)
parameter (nf_max_vars = 8192)
parameter (nf_max_name = 256)
parameter (nf_max_var_dims = nf_max_dims)
!
! error codes:
!
integer nf_noerr
integer nf_ebadid
integer nf_eexist
integer nf_einval
integer nf_eperm
integer nf_enotindefine
integer nf_eindefine
integer nf_einvalcoords
integer nf_emaxdims
integer nf_enameinuse
integer nf_enotatt
integer nf_emaxatts
integer nf_ebadtype
integer nf_ebaddim
integer nf_eunlimpos
integer nf_emaxvars
integer nf_enotvar
integer nf_eglobal
integer nf_enotnc
integer nf_ests
integer nf_emaxname
integer nf_eunlimit
integer nf_enorecvars
integer nf_echar
integer nf_eedge
integer nf_estride
integer nf_ebadname
integer nf_erange
integer nf_enomem
integer nf_evarsize
integer nf_edimsize
integer nf_etrunc
parameter (nf_noerr = 0)
parameter (nf_ebadid = -33)
parameter (nf_eexist = -35)
parameter (nf_einval = -36)
parameter (nf_eperm = -37)
parameter (nf_enotindefine = -38)
parameter (nf_eindefine = -39)
parameter (nf_einvalcoords = -40)
parameter (nf_emaxdims = -41)
parameter (nf_enameinuse = -42)
parameter (nf_enotatt = -43)
parameter (nf_emaxatts = -44)
parameter (nf_ebadtype = -45)
parameter (nf_ebaddim = -46)
parameter (nf_eunlimpos = -47)
parameter (nf_emaxvars = -48)
parameter (nf_enotvar = -49)
parameter (nf_eglobal = -50)
parameter (nf_enotnc = -51)
parameter (nf_ests = -52)
parameter (nf_emaxname = -53)
parameter (nf_eunlimit = -54)
parameter (nf_enorecvars = -55)
parameter (nf_echar = -56)
parameter (nf_eedge = -57)
parameter (nf_estride = -58)
parameter (nf_ebadname = -59)
parameter (nf_erange = -60)
parameter (nf_enomem = -61)
parameter (nf_evarsize = -62)
parameter (nf_edimsize = -63)
parameter (nf_etrunc = -64)
!
! error handling modes:
!
integer nf_fatal
integer nf_verbose
parameter (nf_fatal = 1)
parameter (nf_verbose = 2)
!
! miscellaneous routines:
!
character*80 nf_inq_libvers
external nf_inq_libvers
character*80 nf_strerror
! (integer ncerr)
external nf_strerror
logical nf_issyserr
! (integer ncerr)
external nf_issyserr
!
! control routines:
!
integer nf_inq_base_pe
! (integer ncid,
! integer pe)
external nf_inq_base_pe
integer nf_set_base_pe
! (integer ncid,
! integer pe)
external nf_set_base_pe
integer nf_create
! (character*(*) path,
! integer cmode,
! integer ncid)
external nf_create
integer nf__create
! (character*(*) path,
! integer cmode,
! integer initialsz,
! integer chunksizehint,
! integer ncid)
external nf__create
integer nf__create_mp
! (character*(*) path,
! integer cmode,
! integer initialsz,
! integer basepe,
! integer chunksizehint,
! integer ncid)
external nf__create_mp
integer nf_open
! (character*(*) path,
! integer mode,
! integer ncid)
external nf_open
integer nf__open
! (character*(*) path,
! integer mode,
! integer chunksizehint,
! integer ncid)
external nf__open
integer nf__open_mp
! (character*(*) path,
! integer mode,
! integer basepe,
! integer chunksizehint,
! integer ncid)
external nf__open_mp
integer nf_set_fill
! (integer ncid,
! integer fillmode,
! integer old_mode)
external nf_set_fill
integer nf_set_default_format
! (integer format,
! integer old_format)
external nf_set_default_format
integer nf_redef
! (integer ncid)
external nf_redef
integer nf_enddef
! (integer ncid)
external nf_enddef
integer nf__enddef
! (integer ncid,
! integer h_minfree,
! integer v_align,
! integer v_minfree,
! integer r_align)
external nf__enddef
integer nf_sync
! (integer ncid)
external nf_sync
integer nf_abort
! (integer ncid)
external nf_abort
integer nf_close
! (integer ncid)
external nf_close
integer nf_delete
! (character*(*) ncid)
external nf_delete
!
! general inquiry routines:
!
integer nf_inq
! (integer ncid,
! integer ndims,
! integer nvars,
! integer ngatts,
! integer unlimdimid)
external nf_inq
! new inquire path
integer nf_inq_path
external nf_inq_path
integer nf_inq_ndims
! (integer ncid,
! integer ndims)
external nf_inq_ndims
integer nf_inq_nvars
! (integer ncid,
! integer nvars)
external nf_inq_nvars
integer nf_inq_natts
! (integer ncid,
! integer ngatts)
external nf_inq_natts
integer nf_inq_unlimdim
! (integer ncid,
! integer unlimdimid)
external nf_inq_unlimdim
integer nf_inq_format
! (integer ncid,
! integer format)
external nf_inq_format
!
! dimension routines:
!
integer nf_def_dim
! (integer ncid,
! character(*) name,
! integer len,
! integer dimid)
external nf_def_dim
integer nf_inq_dimid
! (integer ncid,
! character(*) name,
! integer dimid)
external nf_inq_dimid
integer nf_inq_dim
! (integer ncid,
! integer dimid,
! character(*) name,
! integer len)
external nf_inq_dim
integer nf_inq_dimname
! (integer ncid,
! integer dimid,
! character(*) name)
external nf_inq_dimname
integer nf_inq_dimlen
! (integer ncid,
! integer dimid,
! integer len)
external nf_inq_dimlen
integer nf_rename_dim
! (integer ncid,
! integer dimid,
! character(*) name)
external nf_rename_dim
!
! general attribute routines:
!
integer nf_inq_att
! (integer ncid,
! integer varid,
! character(*) name,
! integer xtype,
! integer len)
external nf_inq_att
integer nf_inq_attid
! (integer ncid,
! integer varid,
! character(*) name,
! integer attnum)
external nf_inq_attid
integer nf_inq_atttype
! (integer ncid,
! integer varid,
! character(*) name,
! integer xtype)
external nf_inq_atttype
integer nf_inq_attlen
! (integer ncid,
! integer varid,
! character(*) name,
! integer len)
external nf_inq_attlen
integer nf_inq_attname
! (integer ncid,
! integer varid,
! integer attnum,
! character(*) name)
external nf_inq_attname
integer nf_copy_att
! (integer ncid_in,
! integer varid_in,
! character(*) name,
! integer ncid_out,
! integer varid_out)
external nf_copy_att
integer nf_rename_att
! (integer ncid,
! integer varid,
! character(*) curname,
! character(*) newname)
external nf_rename_att
integer nf_del_att
! (integer ncid,
! integer varid,
! character(*) name)
external nf_del_att
!
! attribute put/get routines:
!
integer nf_put_att_text
! (integer ncid,
! integer varid,
! character(*) name,
! integer len,
! character(*) text)
external nf_put_att_text
integer nf_get_att_text
! (integer ncid,
! integer varid,
! character(*) name,
! character(*) text)
external nf_get_att_text
integer nf_put_att_int1
! (integer ncid,
! integer varid,
! character(*) name,
! integer xtype,
! integer len,
! nf_int1_t i1vals(1))
external nf_put_att_int1
integer nf_get_att_int1
! (integer ncid,
! integer varid,
! character(*) name,
! nf_int1_t i1vals(1))
external nf_get_att_int1
integer nf_put_att_int2
! (integer ncid,
! integer varid,
! character(*) name,
! integer xtype,
! integer len,
! nf_int2_t i2vals(1))
external nf_put_att_int2
integer nf_get_att_int2
! (integer ncid,
! integer varid,
! character(*) name,
! nf_int2_t i2vals(1))
external nf_get_att_int2
integer nf_put_att_int
! (integer ncid,
! integer varid,
! character(*) name,
! integer xtype,
! integer len,
! integer ivals(1))
external nf_put_att_int
integer nf_get_att_int
! (integer ncid,
! integer varid,
! character(*) name,
! integer ivals(1))
external nf_get_att_int
integer nf_put_att_int64
! (integer ncid,
! integer varid,
! character(*) name,
! integer xtype,
! integer len,
! nf_int8_t i8vals(1))
external nf_put_att_int64
integer nf_get_att_int64
! (integer ncid,
! integer varid,
! character(*) name,
! nf_int8_t i8vals(1))
external nf_get_att_int64
integer nf_put_att_real
! (integer ncid,
! integer varid,
! character(*) name,
! integer xtype,
! integer len,
! real rvals(1))
external nf_put_att_real
integer nf_get_att_real
! (integer ncid,
! integer varid,
! character(*) name,
! real rvals(1))
external nf_get_att_real
integer nf_put_att_double
! (integer ncid,
! integer varid,
! character(*) name,
! integer xtype,
! integer len,
! double dvals(1))
external nf_put_att_double
integer nf_get_att_double
! (integer ncid,
! integer varid,
! character(*) name,
! double dvals(1))
external nf_get_att_double
!
! general variable routines:
!
integer nf_def_var
! (integer ncid,
! character(*) name,
! integer datatype,
! integer ndims,
! integer dimids(1),
! integer varid)
external nf_def_var
integer nf_inq_var
! (integer ncid,
! integer varid,
! character(*) name,
! integer datatype,
! integer ndims,
! integer dimids(1),
! integer natts)
external nf_inq_var
integer nf_inq_varid
! (integer ncid,
! character(*) name,
! integer varid)
external nf_inq_varid
integer nf_inq_varname
! (integer ncid,
! integer varid,
! character(*) name)
external nf_inq_varname
integer nf_inq_vartype
! (integer ncid,
! integer varid,
! integer xtype)
external nf_inq_vartype
integer nf_inq_varndims
! (integer ncid,
! integer varid,
! integer ndims)
external nf_inq_varndims
integer nf_inq_vardimid
! (integer ncid,
! integer varid,
! integer dimids(1))
external nf_inq_vardimid
integer nf_inq_varnatts
! (integer ncid,
! integer varid,
! integer natts)
external nf_inq_varnatts
integer nf_rename_var
! (integer ncid,
! integer varid,
! character(*) name)
external nf_rename_var
integer nf_copy_var
! (integer ncid_in,
! integer varid,
! integer ncid_out)
external nf_copy_var
!
! entire variable put/get routines:
!
integer nf_put_var_text
! (integer ncid,
! integer varid,
! character(*) text)
external nf_put_var_text
integer nf_get_var_text
! (integer ncid,
! integer varid,
! character(*) text)
external nf_get_var_text
integer nf_put_var_int1
! (integer ncid,
! integer varid,
! nf_int1_t i1vals(1))
external nf_put_var_int1
integer nf_get_var_int1
! (integer ncid,
! integer varid,
! nf_int1_t i1vals(1))
external nf_get_var_int1
integer nf_put_var_int2
! (integer ncid,
! integer varid,
! nf_int2_t i2vals(1))
external nf_put_var_int2
integer nf_get_var_int2
! (integer ncid,
! integer varid,
! nf_int2_t i2vals(1))
external nf_get_var_int2
integer nf_put_var_int
! (integer ncid,
! integer varid,
! integer ivals(1))
external nf_put_var_int
integer nf_get_var_int
! (integer ncid,
! integer varid,
! integer ivals(1))
external nf_get_var_int
integer nf_put_var_real
! (integer ncid,
! integer varid,
! real rvals(1))
external nf_put_var_real
integer nf_get_var_real
! (integer ncid,
! integer varid,
! real rvals(1))
external nf_get_var_real
integer nf_put_var_double
! (integer ncid,
! integer varid,
! doubleprecision dvals(1))
external nf_put_var_double
integer nf_get_var_double
! (integer ncid,
! integer varid,
! doubleprecision dvals(1))
external nf_get_var_double
!
! single variable put/get routines:
!
integer nf_put_var1_text
! (integer ncid,
! integer varid,
! integer index(1),
! character*1 text)
external nf_put_var1_text
integer nf_get_var1_text
! (integer ncid,
! integer varid,
! integer index(1),
! character*1 text)
external nf_get_var1_text
integer nf_put_var1_int1
! (integer ncid,
! integer varid,
! integer index(1),
! nf_int1_t i1val)
external nf_put_var1_int1
integer nf_get_var1_int1
! (integer ncid,
! integer varid,
! integer index(1),
! nf_int1_t i1val)
external nf_get_var1_int1
integer nf_put_var1_int2
! (integer ncid,
! integer varid,
! integer index(1),
! nf_int2_t i2val)
external nf_put_var1_int2
integer nf_get_var1_int2
! (integer ncid,
! integer varid,
! integer index(1),
! nf_int2_t i2val)
external nf_get_var1_int2
integer nf_put_var1_int
! (integer ncid,
! integer varid,
! integer index(1),
! integer ival)
external nf_put_var1_int
integer nf_get_var1_int
! (integer ncid,
! integer varid,
! integer index(1),
! integer ival)
external nf_get_var1_int
integer nf_put_var1_real
! (integer ncid,
! integer varid,
! integer index(1),
! real rval)
external nf_put_var1_real
integer nf_get_var1_real
! (integer ncid,
! integer varid,
! integer index(1),
! real rval)
external nf_get_var1_real
integer nf_put_var1_double
! (integer ncid,
! integer varid,
! integer index(1),
! doubleprecision dval)
external nf_put_var1_double
integer nf_get_var1_double
! (integer ncid,
! integer varid,
! integer index(1),
! doubleprecision dval)
external nf_get_var1_double
!
! variable array put/get routines:
!
integer nf_put_vara_text
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! character(*) text)
external nf_put_vara_text
integer nf_get_vara_text
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! character(*) text)
external nf_get_vara_text
integer nf_put_vara_int1
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! nf_int1_t i1vals(1))
external nf_put_vara_int1
integer nf_get_vara_int1
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! nf_int1_t i1vals(1))
external nf_get_vara_int1
integer nf_put_vara_int2
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! nf_int2_t i2vals(1))
external nf_put_vara_int2
integer nf_get_vara_int2
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! nf_int2_t i2vals(1))
external nf_get_vara_int2
integer nf_put_vara_int
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer ivals(1))
external nf_put_vara_int
integer nf_get_vara_int
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer ivals(1))
external nf_get_vara_int
integer nf_put_vara_real
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! real rvals(1))
external nf_put_vara_real
integer nf_get_vara_real
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! real rvals(1))
external nf_get_vara_real
integer nf_put_vara_double
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! doubleprecision dvals(1))
external nf_put_vara_double
integer nf_get_vara_double
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! doubleprecision dvals(1))
external nf_get_vara_double
!
! strided variable put/get routines:
!
integer nf_put_vars_text
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! character(*) text)
external nf_put_vars_text
integer nf_get_vars_text
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! character(*) text)
external nf_get_vars_text
integer nf_put_vars_int1
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! nf_int1_t i1vals(1))
external nf_put_vars_int1
integer nf_get_vars_int1
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! nf_int1_t i1vals(1))
external nf_get_vars_int1
integer nf_put_vars_int2
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! nf_int2_t i2vals(1))
external nf_put_vars_int2
integer nf_get_vars_int2
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! nf_int2_t i2vals(1))
external nf_get_vars_int2
integer nf_put_vars_int
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! integer ivals(1))
external nf_put_vars_int
integer nf_get_vars_int
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! integer ivals(1))
external nf_get_vars_int
integer nf_put_vars_real
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! real rvals(1))
external nf_put_vars_real
integer nf_get_vars_real
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! real rvals(1))
external nf_get_vars_real
integer nf_put_vars_double
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! doubleprecision dvals(1))
external nf_put_vars_double
integer nf_get_vars_double
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! doubleprecision dvals(1))
external nf_get_vars_double
!
! mapped variable put/get routines:
!
integer nf_put_varm_text
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! integer imap(1),
! character(*) text)
external nf_put_varm_text
integer nf_get_varm_text
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! integer imap(1),
! character(*) text)
external nf_get_varm_text
integer nf_put_varm_int1
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! integer imap(1),
! nf_int1_t i1vals(1))
external nf_put_varm_int1
integer nf_get_varm_int1
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! integer imap(1),
! nf_int1_t i1vals(1))
external nf_get_varm_int1
integer nf_put_varm_int2
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! integer imap(1),
! nf_int2_t i2vals(1))
external nf_put_varm_int2
integer nf_get_varm_int2
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! integer imap(1),
! nf_int2_t i2vals(1))
external nf_get_varm_int2
integer nf_put_varm_int
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! integer imap(1),
! integer ivals(1))
external nf_put_varm_int
integer nf_get_varm_int
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! integer imap(1),
! integer ivals(1))
external nf_get_varm_int
integer nf_put_varm_real
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! integer imap(1),
! real rvals(1))
external nf_put_varm_real
integer nf_get_varm_real
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! integer imap(1),
! real rvals(1))
external nf_get_varm_real
integer nf_put_varm_double
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! integer imap(1),
! doubleprecision dvals(1))
external nf_put_varm_double
integer nf_get_varm_double
! (integer ncid,
! integer varid,
! integer start(1),
! integer count(1),
! integer stride(1),
! integer imap(1),
! doubleprecision dvals(1))
external nf_get_varm_double
! 64-bit int functions.
integer nf_put_var1_int64
external nf_put_var1_int64
integer nf_put_vara_int64
external nf_put_vara_int64
integer nf_put_vars_int64
external nf_put_vars_int64
integer nf_put_varm_int64
external nf_put_varm_int64
integer nf_put_var_int64
external nf_put_var_int64
integer nf_get_var1_int64
external nf_get_var1_int64
integer nf_get_vara_int64
external nf_get_vara_int64
integer nf_get_vars_int64
external nf_get_vars_int64
integer nf_get_varm_int64
external nf_get_varm_int64
integer nf_get_var_int64
external nf_get_var_int64
! NetCDF-4.
! This is part of netCDF-4. Copyright 2006, UCAR, See COPYRIGHT
! file for distribution information.
! Netcdf version 4 fortran interface.
! $Id: netcdf4.inc,v 1.28 2010/05/25 13:53:02 ed Exp $
! New netCDF-4 types.
integer nf_string
integer nf_vlen
integer nf_opaque
integer nf_enum
integer nf_compound
parameter (nf_string = 12)
parameter (nf_vlen = 13)
parameter (nf_opaque = 14)
parameter (nf_enum = 15)
parameter (nf_compound = 16)
! New netCDF-4 fill values.
integer nf_fill_ubyte
integer nf_fill_ushort
! real nf_fill_uint
! real nf_fill_int64
! real nf_fill_uint64
parameter (nf_fill_ubyte = 255)
parameter (nf_fill_ushort = 65535)
! New constants.
integer nf_format_netcdf4
parameter (nf_format_netcdf4 = 3)
integer nf_format_netcdf4_classic
parameter (nf_format_netcdf4_classic = 4)
integer nf_netcdf4
parameter (nf_netcdf4 = 4096)
integer nf_classic_model
parameter (nf_classic_model = 256)
integer nf_chunk_seq
parameter (nf_chunk_seq = 0)
integer nf_chunk_sub
parameter (nf_chunk_sub = 1)
integer nf_chunk_sizes
parameter (nf_chunk_sizes = 2)
integer nf_endian_native
parameter (nf_endian_native = 0)
integer nf_endian_little
parameter (nf_endian_little = 1)
integer nf_endian_big
parameter (nf_endian_big = 2)
! For NF_DEF_VAR_CHUNKING
integer nf_chunked
parameter (nf_chunked = 0)
integer nf_contiguous
parameter (nf_contiguous = 1)
integer nf_compact
parameter (nf_compact = 2)
! For NF_DEF_VAR_FLETCHER32
integer nf_nochecksum
parameter (nf_nochecksum = 0)
integer nf_fletcher32
parameter (nf_fletcher32 = 1)
! For NF_DEF_VAR_DEFLATE
integer nf_noshuffle
parameter (nf_noshuffle = 0)
integer nf_shuffle
parameter (nf_shuffle = 1)
! For NF_DEF_VAR_SZIP
integer nf_szip_ec_option_mask
parameter (nf_szip_ec_option_mask = 4)
integer nf_szip_nn_option_mask
parameter (nf_szip_nn_option_mask = 32)
! For parallel I/O.
integer nf_mpiio
parameter (nf_mpiio = 8192)
integer nf_mpiposix
parameter (nf_mpiposix = 16384)
integer nf_pnetcdf
parameter (nf_pnetcdf = 32768)
! For NF_VAR_PAR_ACCESS.
integer nf_independent
parameter (nf_independent = 0)
integer nf_collective
parameter (nf_collective = 1)
! New error codes.
integer nf_ehdferr ! Error at HDF5 layer.
parameter (nf_ehdferr = -101)
integer nf_ecantread ! Can't read.
parameter (nf_ecantread = -102)
integer nf_ecantwrite ! Can't write.
parameter (nf_ecantwrite = -103)
integer nf_ecantcreate ! Can't create.
parameter (nf_ecantcreate = -104)
integer nf_efilemeta ! Problem with file metadata.
parameter (nf_efilemeta = -105)
integer nf_edimmeta ! Problem with dimension metadata.
parameter (nf_edimmeta = -106)
integer nf_eattmeta ! Problem with attribute metadata.
parameter (nf_eattmeta = -107)
integer nf_evarmeta ! Problem with variable metadata.
parameter (nf_evarmeta = -108)
integer nf_enocompound ! Not a compound type.
parameter (nf_enocompound = -109)
integer nf_eattexists ! Attribute already exists.
parameter (nf_eattexists = -110)
integer nf_enotnc4 ! Attempting netcdf-4 operation on netcdf-3 file.
parameter (nf_enotnc4 = -111)
integer nf_estrictnc3 ! Attempting netcdf-4 operation on strict nc3 netcdf-4 file.
parameter (nf_estrictnc3 = -112)
integer nf_enotnc3 ! Attempting netcdf-3 operation on netcdf-4 file.
parameter (nf_enotnc3 = -113)
integer nf_enopar ! Parallel operation on file opened for non-parallel access.
parameter (nf_enopar = -114)
integer nf_eparinit ! Error initializing for parallel access.
parameter (nf_eparinit = -115)
integer nf_ebadgrpid ! Bad group ID.
parameter (nf_ebadgrpid = -116)
integer nf_ebadtypid ! Bad type ID.
parameter (nf_ebadtypid = -117)
integer nf_etypdefined ! Type has already been defined and may not be edited.
parameter (nf_etypdefined = -118)
integer nf_ebadfield ! Bad field ID.
parameter (nf_ebadfield = -119)
integer nf_ebadclass ! Bad class.
parameter (nf_ebadclass = -120)
integer nf_emaptype ! Mapped access for atomic types only.
parameter (nf_emaptype = -121)
integer nf_elatefill ! Attempt to define fill value when data already exists.
parameter (nf_elatefill = -122)
integer nf_elatedef ! Attempt to define var properties, like deflate, after enddef.
parameter (nf_elatedef = -123)
      integer nf_edimscale ! Problem with HDF5 dimscales.
parameter (nf_edimscale = -124)
integer nf_enogrp ! No group found.
parameter (nf_enogrp = -125)
! New functions.
! Parallel I/O.
integer nf_create_par
external nf_create_par
integer nf_open_par
external nf_open_par
integer nf_var_par_access
external nf_var_par_access
! Functions to handle groups.
integer nf_inq_ncid
external nf_inq_ncid
integer nf_inq_grps
external nf_inq_grps
integer nf_inq_grpname
external nf_inq_grpname
integer nf_inq_grpname_full
external nf_inq_grpname_full
integer nf_inq_grpname_len
external nf_inq_grpname_len
integer nf_inq_grp_parent
external nf_inq_grp_parent
integer nf_inq_grp_ncid
external nf_inq_grp_ncid
integer nf_inq_grp_full_ncid
external nf_inq_grp_full_ncid
integer nf_inq_varids
external nf_inq_varids
integer nf_inq_dimids
external nf_inq_dimids
integer nf_def_grp
external nf_def_grp
! New rename grp function
integer nf_rename_grp
external nf_rename_grp
! New options for netCDF variables.
integer nf_def_var_deflate
external nf_def_var_deflate
integer nf_inq_var_deflate
external nf_inq_var_deflate
integer nf_def_var_szip
external nf_def_var_szip
integer nf_inq_var_szip
external nf_inq_var_szip
integer nf_def_var_fletcher32
external nf_def_var_fletcher32
integer nf_inq_var_fletcher32
external nf_inq_var_fletcher32
integer nf_def_var_chunking
external nf_def_var_chunking
integer nf_inq_var_chunking
external nf_inq_var_chunking
integer nf_def_var_fill
external nf_def_var_fill
integer nf_inq_var_fill
external nf_inq_var_fill
integer nf_def_var_endian
external nf_def_var_endian
integer nf_inq_var_endian
external nf_inq_var_endian
integer nf_def_var_filter
external nf_def_var_filter
integer nf_inq_var_filter
external nf_inq_var_filter
! User defined types.
integer nf_inq_typeids
external nf_inq_typeids
integer nf_inq_typeid
external nf_inq_typeid
integer nf_inq_type
external nf_inq_type
integer nf_inq_user_type
external nf_inq_user_type
! User defined types - compound types.
integer nf_def_compound
external nf_def_compound
integer nf_insert_compound
external nf_insert_compound
integer nf_insert_array_compound
external nf_insert_array_compound
integer nf_inq_compound
external nf_inq_compound
integer nf_inq_compound_name
external nf_inq_compound_name
integer nf_inq_compound_size
external nf_inq_compound_size
integer nf_inq_compound_nfields
external nf_inq_compound_nfields
integer nf_inq_compound_field
external nf_inq_compound_field
integer nf_inq_compound_fieldname
external nf_inq_compound_fieldname
integer nf_inq_compound_fieldindex
external nf_inq_compound_fieldindex
integer nf_inq_compound_fieldoffset
external nf_inq_compound_fieldoffset
integer nf_inq_compound_fieldtype
external nf_inq_compound_fieldtype
integer nf_inq_compound_fieldndims
external nf_inq_compound_fieldndims
integer nf_inq_compound_fielddim_sizes
external nf_inq_compound_fielddim_sizes
! User defined types - variable length arrays.
integer nf_def_vlen
external nf_def_vlen
integer nf_inq_vlen
external nf_inq_vlen
integer nf_free_vlen
external nf_free_vlen
! User defined types - enums.
integer nf_def_enum
external nf_def_enum
integer nf_insert_enum
external nf_insert_enum
integer nf_inq_enum
external nf_inq_enum
integer nf_inq_enum_member
external nf_inq_enum_member
integer nf_inq_enum_ident
external nf_inq_enum_ident
! User defined types - opaque.
integer nf_def_opaque
external nf_def_opaque
integer nf_inq_opaque
external nf_inq_opaque
! Write and read attributes of any type, including user defined
! types.
integer nf_put_att
external nf_put_att
integer nf_get_att
external nf_get_att
! Write and read variables of any type, including user defined
! types.
integer nf_put_var
external nf_put_var
integer nf_put_var1
external nf_put_var1
integer nf_put_vara
external nf_put_vara
integer nf_put_vars
external nf_put_vars
integer nf_get_var
external nf_get_var
integer nf_get_var1
external nf_get_var1
integer nf_get_vara
external nf_get_vara
integer nf_get_vars
external nf_get_vars
! For helping F77 users with VLENs.
integer nf_get_vlen_element
external nf_get_vlen_element
integer nf_put_vlen_element
external nf_put_vlen_element
! For dealing with file level chunk cache.
integer nf_set_chunk_cache
external nf_set_chunk_cache
integer nf_get_chunk_cache
external nf_get_chunk_cache
! For dealing with per variable chunk cache.
integer nf_set_var_chunk_cache
external nf_set_var_chunk_cache
integer nf_get_var_chunk_cache
external nf_get_var_chunk_cache
! NetCDF-2.
!ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc
! begin netcdf 2.4 backward compatibility:
!
!
! functions in the fortran interface
!
integer nccre
integer ncopn
integer ncddef
integer ncdid
integer ncvdef
integer ncvid
integer nctlen
integer ncsfil
external nccre
external ncopn
external ncddef
external ncdid
external ncvdef
external ncvid
external nctlen
external ncsfil
integer ncrdwr
integer nccreat
integer ncexcl
integer ncindef
integer ncnsync
integer nchsync
integer ncndirty
integer nchdirty
integer nclink
integer ncnowrit
integer ncwrite
integer ncclob
integer ncnoclob
integer ncglobal
integer ncfill
integer ncnofill
integer maxncop
integer maxncdim
integer maxncatt
integer maxncvar
integer maxncnam
integer maxvdims
integer ncnoerr
integer ncebadid
integer ncenfile
integer nceexist
integer nceinval
integer nceperm
integer ncenotin
integer nceindef
integer ncecoord
integer ncemaxds
integer ncename
integer ncenoatt
integer ncemaxat
integer ncebadty
integer ncebadd
integer ncests
integer nceunlim
integer ncemaxvs
integer ncenotvr
integer nceglob
integer ncenotnc
integer ncfoobar
integer ncsyserr
integer ncfatal
integer ncverbos
integer ncentool
!
! netcdf data types:
!
integer ncbyte
integer ncchar
integer ncshort
integer nclong
integer ncfloat
integer ncdouble
parameter(ncbyte = 1)
parameter(ncchar = 2)
parameter(ncshort = 3)
parameter(nclong = 4)
parameter(ncfloat = 5)
parameter(ncdouble = 6)
!
! masks for the struct nc flag field; passed in as 'mode' arg to
! nccreate and ncopen.
!
! read/write, 0 => readonly
parameter(ncrdwr = 1)
! in create phase, cleared by ncendef
parameter(nccreat = 2)
! on create destroy existing file
parameter(ncexcl = 4)
! in define mode, cleared by ncendef
parameter(ncindef = 8)
! synchronise numrecs on change (x'10')
parameter(ncnsync = 16)
! synchronise whole header on change (x'20')
parameter(nchsync = 32)
! numrecs has changed (x'40')
parameter(ncndirty = 64)
! header info has changed (x'80')
parameter(nchdirty = 128)
! prefill vars on endef and increase of record, the default behavior
parameter(ncfill = 0)
! do not fill vars on endef and increase of record (x'100')
parameter(ncnofill = 256)
! isa link (x'8000')
parameter(nclink = 32768)
!
! 'mode' arguments for nccreate and ncopen
!
parameter(ncnowrit = 0)
parameter(ncwrite = ncrdwr)
parameter(ncclob = nf_clobber)
parameter(ncnoclob = nf_noclobber)
!
! 'size' argument to ncdimdef for an unlimited dimension
!
integer ncunlim
parameter(ncunlim = 0)
!
! attribute id to put/get a global attribute
!
parameter(ncglobal = 0)
!
! advisory maximums:
!
parameter(maxncop = 64)
parameter(maxncdim = 1024)
parameter(maxncatt = 8192)
parameter(maxncvar = 8192)
! not enforced
parameter(maxncnam = 256)
parameter(maxvdims = maxncdim)
!
! global netcdf error status variable
! initialized in error.c
!
! no error
parameter(ncnoerr = nf_noerr)
! not a netcdf id
parameter(ncebadid = nf_ebadid)
! too many netcdfs open
parameter(ncenfile = -31) ! nc_syserr
! netcdf file exists && ncnoclob
parameter(nceexist = nf_eexist)
! invalid argument
parameter(nceinval = nf_einval)
! write to read only
parameter(nceperm = nf_eperm)
! operation not allowed in data mode
parameter(ncenotin = nf_enotindefine )
! operation not allowed in define mode
parameter(nceindef = nf_eindefine)
! coordinates out of domain
parameter(ncecoord = nf_einvalcoords)
! maxncdims exceeded
parameter(ncemaxds = nf_emaxdims)
! string match to name in use
parameter(ncename = nf_enameinuse)
! attribute not found
parameter(ncenoatt = nf_enotatt)
! maxncattrs exceeded
parameter(ncemaxat = nf_emaxatts)
! not a netcdf data type
parameter(ncebadty = nf_ebadtype)
! invalid dimension id
parameter(ncebadd = nf_ebaddim)
! ncunlimited in the wrong index
parameter(nceunlim = nf_eunlimpos)
! maxncvars exceeded
parameter(ncemaxvs = nf_emaxvars)
! variable not found
parameter(ncenotvr = nf_enotvar)
! action prohibited on ncglobal varid
parameter(nceglob = nf_eglobal)
! not a netcdf file
parameter(ncenotnc = nf_enotnc)
parameter(ncests = nf_ests)
parameter (ncentool = nf_emaxname)
parameter(ncfoobar = 32)
parameter(ncsyserr = -31)
!
! global options variable. used to determine behavior of error handler.
! initialized in lerror.c
!
parameter(ncfatal = 1)
parameter(ncverbos = 2)
!
! default fill values. these must be the same as in the c interface.
!
integer filbyte
integer filchar
integer filshort
integer fillong
real filfloat
doubleprecision fildoub
parameter (filbyte = -127)
parameter (filchar = 0)
parameter (filshort = -32767)
parameter (fillong = -2147483647)
parameter (filfloat = 9.9692099683868690e+36)
parameter (fildoub = 9.9692099683868690e+36)
! This is to turn on netCDF internal logging.
integer nf_set_log_level
external nf_set_log_level
# 372 "../mpp/mpp_io.F90" 2
!--- public parameters -----------------------------------------------
public :: MPP_WRONLY, MPP_RDONLY, MPP_APPEND, MPP_OVERWR, MPP_ASCII, MPP_IEEE32
public :: MPP_NATIVE, MPP_NETCDF, MPP_SEQUENTIAL, MPP_DIRECT, MPP_SINGLE
public :: MPP_MULTI, MPP_DELETE, MPP_COLLECT
public :: FILE_TYPE_USED
public :: MAX_FILE_SIZE
!--- public data type ------------------------------------------------
public :: axistype, atttype, fieldtype, validtype, filetype
!--- public data -----------------------------------------------------
public :: default_field, default_axis, default_att
!--- public interface from mpp_io_util.h ----------------------
public :: mpp_get_iospec, mpp_get_id, mpp_get_ncid, mpp_get_unit_range, mpp_is_valid
public :: mpp_set_unit_range, mpp_get_info, mpp_get_atts, mpp_get_fields
public :: mpp_get_times, mpp_get_axes, mpp_get_recdimid, mpp_get_axis_data, mpp_get_axis_by_name
public :: mpp_io_set_stack_size, mpp_get_field_index, mpp_get_axis_index
public :: mpp_get_field_name, mpp_get_att_value, mpp_get_att_length
public :: mpp_get_att_type, mpp_get_att_name, mpp_get_att_real, mpp_get_att_char
public :: mpp_get_att_real_scalar, mpp_get_axis_length, mpp_is_dist_ioroot
public :: mpp_get_file_name, mpp_file_is_opened, mpp_attribute_exist
public :: mpp_io_clock_on, mpp_get_time_axis, mpp_get_default_calendar
public :: mpp_get_dimension_length, mpp_get_axis_bounds
!--- public interface from mpp_io_misc.h ----------------------
public :: mpp_io_init, mpp_io_exit, netcdf_err, mpp_flush, mpp_get_maxunits, do_cf_compliance
!--- public interface from mpp_io_write.h ---------------------
public :: mpp_write, mpp_write_meta, mpp_copy_meta, mpp_modify_meta, mpp_write_axis_data, mpp_def_dim
!--- public interface from mpp_io_read.h ---------------------
public :: mpp_read, mpp_read_meta, mpp_get_tavg_info
public :: mpp_read_compressed, mpp_write_compressed, mpp_read_distributed_ascii, mpp_write_unlimited_axis
!--- public interface from mpp_io_switch.h ---------------------
public :: mpp_open, mpp_close
!-----------------------------------------------------------------------------
!--- mpp_io data types
!-----------------------------------------------------------------------------
integer FILE_TYPE_USED
integer, parameter :: MAX_ATT_LENGTH = 1280
type :: atttype
private
integer :: type, len
character(len=128) :: name
character(len=MAX_ATT_LENGTH) :: catt
real, pointer :: fatt(:) =>NULL() ! just use type conversion for integers
end type atttype
type :: axistype
private
character(len=128) :: name
character(len=128) :: name_bounds
character(len=128) :: units
character(len=256) :: longname
character(len=8) :: cartesian
character(len=256) :: compressed
character(len=24) :: calendar
integer :: sense, len !+/-1, depth or height?
type(domain1D) :: domain !if pointer is associated, it is a distributed data axis
real, pointer :: data(:) =>NULL() !axis values (not used if time axis)
real, pointer :: data_bounds(:) =>NULL() !axis bounds values
     integer, pointer :: idata(:) =>NULL() !compressed axis values
integer :: id, did, type, natt !id is the "variable ID", did is the "dimension ID":
!netCDF requires 2 IDs for axes
integer :: shift !normally is 0. when domain is symmetry, its value maybe 1.
type(atttype), pointer :: Att(:) =>NULL()
end type axistype
type :: validtype
private
logical :: is_range ! if true, then the data represent the valid range
real :: min,max ! boundaries of the valid range or missing value
end type validtype
type :: fieldtype
private
character(len=128) :: name
character(len=128) :: units
character(len=256) :: longname
character(len=256) :: standard_name ! CF standard name
real :: min, max, missing, fill, scale, add
integer :: pack
integer(8), dimension(3) :: checksum
     type(axistype), pointer :: axes(:) =>NULL() !axes associated with field
                            !size and time_axis_index redundantly hold info already contained in axes.
                            !it's clunky and inelegant, but required so that axes can be shared among multiple files
integer, pointer :: size(:) =>NULL()
integer :: time_axis_index
integer :: id, type, natt, ndim
type(atttype), pointer :: Att(:) =>NULL()
integer :: position ! indicate the location of the data ( CENTER, NORTH, EAST, CORNER )
end type fieldtype
type :: filetype
private
character(len=256) :: name
integer :: action, format, access, threading, fileset, record, ncid
logical :: opened, initialized, nohdrs
integer :: time_level
real(8) :: time
logical :: valid
logical :: write_on_this_pe ! indicate if will write out from this pe
logical :: read_on_this_pe ! indicate if will read from this pe
logical :: io_domain_exist ! indicate if io_domain exist or not.
integer :: id !variable ID of time axis associated with file (only one time axis per file)
integer :: recdimid !dim ID of time axis associated with file (only one time axis per file)
real(8), pointer :: time_values(:) =>NULL() ! time axis values are stored here instead of axis%data
! since mpp_write assumes these values are not time values.
! Not used in mpp_write
! additional elements of filetype for mpp_read (ignored for mpp_write)
integer :: ndim, nvar, natt ! number of dimensions, non-dimension variables and global attributes
! redundant axis types stored here and in associated fieldtype
! some axes are not used by any fields, i.e. "edges"
type(axistype), pointer :: axis(:) =>NULL()
type(fieldtype), pointer :: var(:) =>NULL()
type(atttype), pointer :: att(:) =>NULL()
type(domain2d), pointer :: domain =>NULL()
!----------
!ug support
type(domainUG),pointer :: domain_ug => null() !Is this actually pointed to?
!----------
end type filetype
!***********************************************************************
!
! public interface from mpp_io_util.h
!
!***********************************************************************
interface mpp_get_id
module procedure mpp_get_axis_id
module procedure mpp_get_field_id
end interface
!
!
! Get file global metadata.
!
!
! Get file global metadata.
!
!
! call mpp_get_atts( unit, global_atts)
!
!
!
!
interface mpp_get_atts
module procedure mpp_get_global_atts
module procedure mpp_get_field_atts
module procedure mpp_get_axis_atts
end interface
interface mpp_get_att_value
module procedure mpp_get_field_att_text
end interface
!***********************************************************************
!
! public interface from mpp_io_read.h
!
!***********************************************************************
!
!
! Read from an open file.
!
!
! mpp_read is used to read data from a file on an I/O unit
! using the file parameters supplied by mpp_open. There are two
! forms of mpp_read, one to read
! distributed field data, and one to read non-distributed field
! data. Distributed data refer to arrays whose two
! fastest-varying indices are domain-decomposed. Distributed data must
! be 2D or 3D (in space). Non-distributed data can be 0-3D.
!
! The data argument for distributed data is expected by
! mpp_read to contain data specified on the data domain,
! and will read the data belonging to the compute domain,
! fetching data as required by the parallel I/O mode specified in the mpp_open call. This
! is consistent with our definition of domains, where all arrays are
! expected to be dimensioned on the data domain, and all operations
! performed on the compute domain.
!
!
! call mpp_read( unit, field, data, time_index )
!
!
! call mpp_read( unit, field, domain, data, time_index )
!
!
!
!
!
!
! time_index is an optional argument. It is to be omitted if the
! field was defined not to be a function of time. Results are
! unpredictable if the argument is supplied for a time-independent
! field, or omitted for a time-dependent field.
!
!
! The type of read performed by mpp_read depends on
! the file characteristics on the I/O unit specified at the mpp_open call. Specifically, the
! format of the input data (e.g netCDF or IEEE) and the
! threading flags, etc., can be changed there, and
! require no changes to the mpp_read
! calls. (fileset = MPP_MULTI is not supported by
! mpp_read; IEEE is currently not supported).
!
! Packed variables are unpacked using the scale and
! add attributes.
!
! mpp_read_meta must be called prior to calling mpp_read.
!
!
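!
! A minimal sketch of a read sequence (illustrative only; the file name,
! the local array "tdata", its domain, and the exact mpp_read_meta call
! form are assumptions, not part of this interface):
!
! call mpp_open( unit, 'INPUT/ocean_temp', action=MPP_RDONLY, &
!                form=MPP_NETCDF, threading=MPP_SINGLE )
! call mpp_read_meta(unit)                        !read the file metadata first
! call mpp_get_info( unit, ndim, nvar, natt, ntime )
! allocate( fields(nvar) ); call mpp_get_fields( unit, fields )
! !tdata is dimensioned on the data domain of "domain"
! call mpp_read( unit, fields(1), domain, tdata, time_index=1 )
!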
interface mpp_read
module procedure mpp_read_2ddecomp_r2d
module procedure mpp_read_2ddecomp_r3d
module procedure mpp_read_2ddecomp_r4d
module procedure mpp_read_r0D
module procedure mpp_read_r1D
module procedure mpp_read_r2D
module procedure mpp_read_r3D
module procedure mpp_read_r4D
module procedure mpp_read_text
module procedure mpp_read_region_r2D
module procedure mpp_read_region_r3D
module procedure mpp_read_region_r2D_r8
module procedure mpp_read_region_r3D_r8
module procedure mpp_read_2ddecomp_r2d_r8
module procedure mpp_read_2ddecomp_r3d_r8
module procedure mpp_read_2ddecomp_r4d_r8
end interface
!***********************************************************************
!
! public interfaces from mpp_io_read_distributed_ascii.inc
!
!***********************************************************************
!
!
! Read from an opened ASCII file, translating data to the desired format
!
!
! These routines are part of the mpp_read family. They are intended to
! provide a general-purpose, distributed, list-directed read.
!
!
! call mpp_read_distributed_ascii(unit,fmt,ssize,data,iostat)
!
!
!
!
!
!
!
!
! mpp_read_distributed_ascii
! The stripe size must be greater than or equal to 1. The stripe
! size does not have to divide the number of MPI ranks evenly.
!
!
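!
! A minimal sketch of a call (illustrative; the format string, the stripe
! size of 4, the array "rbuf" and the error handling are assumptions):
!
! real    :: rbuf(10)
! integer :: iostat
! call mpp_read_distributed_ascii( unit, '(10f8.2)', 4, rbuf, iostat )
! if( iostat /= 0 ) call mpp_error( FATAL, 'distributed ascii read failed' )
!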
interface mpp_read_distributed_ascii
module procedure mpp_read_distributed_ascii_r1d
module procedure mpp_read_distributed_ascii_i1d
module procedure mpp_read_distributed_ascii_a1d
end interface
!***********************************************************************
!
! public interfaces from mpp_io_read_compressed.h
!
!***********************************************************************
!
!
! Read from an opened, sparse data, compressed file (e.g. land_model)
!
!
! These routines are similar to mpp_read except that they are designed
! to handle sparse, compressed vectors of data such as from the
! land model. Currently, the sparse vector may vary in z. Hence
! the need for the rank 2 treatment.
!
!
! call mpp_read_compressed( unit, field, domain, data, time_index )
!
!
!
!
!
!
! time_index is an optional argument. It is to be omitted if the
! field was defined not to be a function of time. Results are
! unpredictable if the argument is supplied for a time-independent
! field, or omitted for a time-dependent field.
!
!
! mpp_read_meta must be called prior to calling
! mpp_read_compressed.
! Since, in general, the vector is distributed across the io-domain,
! the read expects the io_domain to be defined.
!
!
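!
! A minimal sketch (illustrative; the array "soil" and its extents are
! assumptions). The io_domain of "domain" must have been defined and
! mpp_read_meta already called on unit:
!
! real :: soil(npts_compressed, nlev)   !sparse (compressed) vector, varying in z
! call mpp_read_compressed( unit, field, domain, soil, time_index=1 )
!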
interface mpp_read_compressed
module procedure mpp_read_compressed_r1d
module procedure mpp_read_compressed_r2d
module procedure mpp_read_compressed_r3d
end interface mpp_read_compressed
!***********************************************************************
!
! public interface from mpp_io_write.h
!
!***********************************************************************
!
!
! Write metadata.
!
!
! This routine is used to write the metadata
! describing the contents of a file being written. Each file can contain
! any number of fields, which are functions of 0-3 space axes and 0-1
! time axes. (Only one time axis can be defined per file). The basic
! metadata defined above for axistype
! and fieldtype are written in the first two forms of the call
! shown below. These calls will associate a unique variable ID with each
! variable (axis or field). These can be used to attach any other real,
! integer or character attribute to a variable. The last form is used to
! define a global real, integer or character attribute that
! applies to the dataset as a whole.
!
!
! call mpp_write_meta( unit, axis, name, units, longname,
! cartesian, sense, domain, data )
!
!
! The first form defines a time or space axis. Metadata corresponding to the type
! above are written to the file on <unit>. A unique ID for subsequent
! references to this axis is returned in axis%id. If the <domain>
! element is present, this is recognized as a distributed data axis
! and domain decomposition information is also written if required (the
! domain decomposition info is required for multi-fileset multi-threaded
! I/O). If the <data> element is allocated, it is considered to be a
! space axis, otherwise it is a time axis with an unlimited dimension. Only
! one time axis is allowed per file.
!
!
! call mpp_write_meta( unit, field, axes, name, units, longname,
! min, max, missing, fill, scale, add, pack )
!
!
! The second form defines a field. Metadata corresponding to the type
! above are written to the file on <unit>. A unique ID for subsequent
! references to this field is returned in field%id. At least one axis
! must be associated; 0D variables are not considered. mpp_write_meta
! must previously have been called on all axes associated with this
! field.
!
!
! call mpp_write_meta( unit, id, name, rval=rval, pack=pack )
!
!
! call mpp_write_meta( unit, id, name, ival=ival )
!
!
! call mpp_write_meta( unit, id, name, cval=cval )
!
!
! The next three forms (3 - 5) define metadata associated with a previously defined
! axis or field, identified to mpp_write_meta by its unique ID <id>.
! The attribute is named <name> and can take on a real, integer
! or character value. <rval> and <ival> can be scalar or 1D arrays.
! This need not be called for attributes already contained in
! the type.
!
!
! call mpp_write_meta( unit, name, rval=rval, pack=pack )
!
!
! call mpp_write_meta( unit, name, ival=ival )
!
!
! call mpp_write_meta( unit, name, cval=cval )
!
!
! The last three forms (6 - 8) define global metadata associated with the file as a
! whole. The attribute is named <name> and can take on a real, integer
! or character value. <rval> and <ival> can be scalar or 1D arrays.
!
!
! Note that mpp_write_meta is expecting axis data on the
! global domain even if it is a domain-decomposed axis.
!
! You cannot interleave calls to mpp_write and
! mpp_write_meta: the first call to
! mpp_write implies that metadata specification is complete.
!
!
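!
! A minimal sketch of a metadata definition sequence (illustrative; the
! axis and field names, the global data arrays "xdata"/"ydata" and the
! attribute values are assumptions). Note that axis data are supplied on
! the global domain:
!
! call mpp_write_meta( unit, x_axis, 'xt', 'degrees_E', 'longitude', &
!                      cartesian='X', data=xdata )
! call mpp_write_meta( unit, y_axis, 'yt', 'degrees_N', 'latitude', &
!                      cartesian='Y', data=ydata )
! call mpp_write_meta( unit, t_axis, 'time', 'days', 'time' )  !no data: unlimited time axis
! call mpp_write_meta( unit, field, (/x_axis,y_axis,t_axis/), 'sst', 'deg_C', &
!                      'sea surface temperature', missing=-1e36 )
! call mpp_write_meta( unit, 'title', cval='example run' )     !global attribute
!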
interface mpp_write_meta
module procedure mpp_write_meta_var
module procedure mpp_write_meta_scalar_r
module procedure mpp_write_meta_scalar_i
module procedure mpp_write_meta_axis_r1d
module procedure mpp_write_meta_axis_i1d
module procedure mpp_write_meta_axis_unlimited
module procedure mpp_write_meta_field
module procedure mpp_write_meta_global
module procedure mpp_write_meta_global_scalar_r
module procedure mpp_write_meta_global_scalar_i
end interface
interface mpp_copy_meta
module procedure mpp_copy_meta_axis
module procedure mpp_copy_meta_field
module procedure mpp_copy_meta_global
end interface
interface mpp_modify_meta
! module procedure mpp_modify_att_meta
module procedure mpp_modify_field_meta
module procedure mpp_modify_axis_meta
end interface
!
!
! Write to an open file.
!
!
! mpp_write is used to write data to the file on an I/O unit
! using the file parameters supplied by mpp_open. Axis and field definitions must
! have previously been written to the file using mpp_write_meta. There are three
! forms of mpp_write, one to write axis data, one to write
! distributed field data, and one to write non-distributed field
! data. Distributed data refer to arrays whose two
! fastest-varying indices are domain-decomposed. Distributed data must
! be 2D or 3D (in space). Non-distributed data can be 0-3D.
!
! The data argument for distributed data is expected by
! mpp_write to contain data specified on the data domain,
! and will write the data belonging to the compute domain,
! fetching or sending data as required by the parallel I/O mode specified in the mpp_open call. This
! is consistent with our definition of domains, where all arrays are
! expected to be dimensioned on the data domain, and all operations
! performed on the compute domain.
!
! The type of the data argument must be a default
! real, which can be 4 or 8 byte.
!
!
! mpp_write( unit, axis )
!
!
! mpp_write( unit, field, data, tstamp )
!
!
! mpp_write( unit, field, domain, data, tstamp )
!
!
! tstamp is an optional argument. It is to
! be omitted if the field was defined not to be a function of time.
! Results are unpredictable if the argument is supplied for a time-
! independent field, or omitted for a time-dependent field. Repeated
! writes of a time-independent field are also not recommended. One
! time level of one field is written per call. tstamp must be an 8-byte
! real, even if the default real type is 4-byte.
!
!
! The type of write performed by mpp_write depends on the file
! characteristics on the I/O unit specified at the mpp_open call. Specifically, the format of
! the output data (e.g netCDF or IEEE), the threading and
! fileset flags, etc., can be changed there, and require no
! changes to the mpp_write calls.
!
! Packing is currently not implemented for non-netCDF files, and the
! pack attribute is ignored. On netCDF files,
! NF_DOUBLEs (8-byte IEEE floating point numbers) are
! written for pack=1 and NF_FLOATs for
! pack=2. (pack=2 gives the customary
! and default behaviour). We write NF_SHORTs (2-byte
! integers) for pack=4, or NF_BYTEs
! (1-byte integers) for pack=8. Integer scaling is done
! using the scale and add attributes at
! pack=4 or 8, satisfying the relation
!
!
! data = packed_data*scale + add
!
!
! NOTE: mpp_write does not check to see if the scaled
! data in fact fits into the dynamic range implied by the specified
! packing. It is incumbent on the user to supply correct scaling
! attributes.
!
! You cannot interleave calls to mpp_write and
! mpp_write_meta: the first call to
! mpp_write implies that metadata specification is
! complete.
!
!
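!
! A minimal sketch of the corresponding write sequence (illustrative; it
! continues the metadata sketch above, and the array "sst" and the
! real(8) variable "time" are assumptions):
!
! call mpp_write( unit, x_axis )               !write space axis data
! call mpp_write( unit, y_axis )
! !sst is dimensioned on the data domain; one time level is written per call
! call mpp_write( unit, field, domain, sst, tstamp=time )
!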
interface write_record
module procedure write_record_default
module procedure write_record_r8
end interface
interface mpp_write
module procedure mpp_write_2ddecomp_r2d
module procedure mpp_write_2ddecomp_r3d
module procedure mpp_write_2ddecomp_r4d
module procedure mpp_write_2ddecomp_r2d_r8
module procedure mpp_write_2ddecomp_r3d_r8
module procedure mpp_write_2ddecomp_r4d_r8
module procedure mpp_write_r0D
module procedure mpp_write_r1D
module procedure mpp_write_r2D
module procedure mpp_write_r3D
module procedure mpp_write_r4D
module procedure mpp_write_axis
end interface
!***********************************************************************
!
!
! Write to an opened, sparse data, compressed file (e.g. land_model)
!
!
! These routines are similar to mpp_write except that they are
! designed to handle sparse, compressed vectors of data such
! as from the land model. Currently, the sparse vector may vary in z.
! Hence the need for the rank 2 treatment.
!
!
! call mpp_write_compressed(unit, field, domain, data, nelems_io, tstamp, default_data )
!
!
!
!
!
!
! nelems_io is a vector containing the number of elements expected
! from each member of the io_domain. It MUST have the same order as
! the io_domain pelist.
!
!
! tstamp is an optional argument. It is to
! be omitted if the field was defined not to be a function of time.
! Results are unpredictable if the argument is supplied for a time-
! independent field, or omitted for a time-dependent field. Repeated
! writes of a time-independent field are also not recommended. One
! time level of one field is written per call. tstamp must be an 8-byte
! real, even if the default real type is 4-byte.
!
!
!
! mpp_write_meta must be called prior to calling
! mpp_write_compressed.
! Since, in general, the vector is distributed across the io-domain,
! the write expects the io_domain to be defined.
!
!
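!
! A minimal sketch (illustrative; the arrays "soil" and "nelems_io" are
! assumptions). nelems_io must list the element counts in the order of
! the io_domain pelist:
!
! real    :: soil(npts_compressed, nlev)
! integer :: nelems_io(n_io_pes)      !same order as the io_domain pelist
! call mpp_write_compressed( unit, field, domain, soil, nelems_io, tstamp=time )
!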
interface mpp_write_compressed
module procedure mpp_write_compressed_r1d
module procedure mpp_write_compressed_r2d
module procedure mpp_write_compressed_r3d
end interface mpp_write_compressed
!***********************************************************************
!
!
! Write to an opened file along the unlimited axis (e.g. icebergs)
!
!
! These routines are similar to mpp_write except that they are
! designed to handle data written along the unlimited axis that
! is not time (e.g. icebergs).
!
!
! call mpp_write_unlimited_axis(unit, field, domain, data, nelems_io)
!
!
!
!
!
!
! nelems_io is a vector containing the number of elements expected
! from each member of the io_domain. It MUST have the same order as
! the io_domain pelist.
!
!
! mpp_write_meta must be called prior to calling
! mpp_write_unlimited_axis.
! Since, in general, the vector is distributed across the io-domain,
! the write expects the io_domain to be defined.
!
!
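!
! A minimal sketch (illustrative; the array "bergs" and "nelems_io" are
! assumptions): data are appended along an unlimited axis that is not
! time, e.g. a variable-length iceberg list:
!
! real    :: bergs(n_local_elements)
! integer :: nelems_io(n_io_pes)      !same order as the io_domain pelist
! call mpp_write_unlimited_axis( unit, field, domain, bergs, nelems_io )
!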
interface mpp_write_unlimited_axis
module procedure mpp_write_unlimited_axis_r1d
end interface mpp_write_unlimited_axis
!***********************************************************************
!
!
! Define a dimension variable
!
!
! Similar to the mpp_write_meta routines, but simply defines a
! dimension variable with the optional attributes.
!
!
! call mpp_def_dim( unit, name, dsize, longname, data )
!
!
!
!
!
!
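!
! A minimal sketch (illustrative; the dimension names and the real array
! "levels" are assumptions):
!
! call mpp_def_dim( unit, 'nv', 2 )                                    !dimension only, no data
! call mpp_def_dim( unit, 'lev', size(levels), 'model level', levels ) !with a dimension variable
!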
interface mpp_def_dim
module procedure mpp_def_dim_nodata
module procedure mpp_def_dim_int
module procedure mpp_def_dim_real
end interface mpp_def_dim
!***********************************************************************
!
! module variables
!
!***********************************************************************
logical :: module_is_initialized = .FALSE.
logical :: verbose =.FALSE.
logical :: debug = .FALSE.
integer :: maxunits, unit_begin, unit_end
integer :: mpp_io_stack_size=0, mpp_io_stack_hwm=0
integer :: varnum=0
integer :: pe, npes
character(len=256) :: text
integer :: error
integer :: records_per_pe
integer :: mpp_read_clock=0, mpp_write_clock=0
integer :: mpp_open_clock=0, mpp_close_clock=0
!initial value of buffer between meta_data and data in .nc file
integer :: header_buffer_val = 16384 ! value used in NF__ENDDEF
logical :: global_field_on_root_pe = .true.
logical :: io_clocks_on = .false.
integer :: shuffle = 0
integer :: deflate = 0
integer :: deflate_level = -1
logical :: cf_compliance = .false.
namelist /mpp_io_nml/header_buffer_val, global_field_on_root_pe, io_clocks_on, &
shuffle, deflate_level, cf_compliance
real(8), allocatable :: mpp_io_stack(:)
type(axistype),save :: default_axis !provided to users with default components
type(fieldtype),save :: default_field !provided to users with default components
type(atttype),save :: default_att !provided to users with default components
type(filetype), allocatable :: mpp_file(:)
integer :: pack_size ! = 1 when compiling with -r8 and = 2 when compiling with -r4.
! Include variable "version" to be written to log file.
# 1 "../include/file_version.h" 1
! -*-f90-*-
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see .
!***********************************************************************
# 23
character(len=*), parameter :: version = 'unknown'
# 1080 "../mpp/mpp_io.F90" 2
!----------
!ug support
public :: mpp_io_unstructured_write
public :: mpp_io_unstructured_read
interface mpp_io_unstructured_write
module procedure mpp_io_unstructured_write_r_1D
module procedure mpp_io_unstructured_write_r_2D
module procedure mpp_io_unstructured_write_r_3D
module procedure mpp_io_unstructured_write_r_4D
end interface mpp_io_unstructured_write
interface mpp_io_unstructured_read
module procedure mpp_io_unstructured_read_r_1D
module procedure mpp_io_unstructured_read_r_2D
module procedure mpp_io_unstructured_read_r_3D
end interface mpp_io_unstructured_read
!----------
contains
# 1 "../mpp/include/mpp_io_util.inc" 1
! -*-f90-*-
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see .
!***********************************************************************
!#####################################################################
!
!
! Get some general information about a file.
!
!
! Get some general information about a file.
!
!
! call mpp_get_info( unit, ndim, nvar, natt, ntime )
!
!
!
!
!
!
!
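!
! A minimal sketch (illustrative; assumes the unit was opened with
! mpp_open and its metadata read with mpp_read_meta):
!
! call mpp_get_info( unit, ndim, nvar, natt, ntime )
! allocate( fields(nvar), global_atts(natt) )   !then use mpp_get_fields / mpp_get_atts
!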
subroutine mpp_get_info( unit, ndim, nvar, natt, ntime )
integer, intent(in) :: unit
integer, intent(out) :: ndim, nvar, natt, ntime
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_GET_INFO: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%opened )&
call mpp_error(FATAL, 'MPP_GET_INFO: invalid unit number, file '//trim(mpp_file(unit)%name))
ndim = mpp_file(unit)%ndim
nvar = mpp_file(unit)%nvar
natt = mpp_file(unit)%natt
ntime = mpp_file(unit)%time_level
return
end subroutine mpp_get_info
!#####################################################################
!
!
!
!
subroutine mpp_get_global_atts( unit, global_atts )
!
! copy global file attributes for use by user
!
! global_atts is an attribute type which is allocated from the
! calling routine
integer, intent(in) :: unit
type(atttype), intent(inout) :: global_atts(:)
integer :: natt,i
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_GET_INFO: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%opened )&
call mpp_error( FATAL, 'MPP_GET_INFO: invalid unit number,file '//trim(mpp_file(unit)%name))
if (size(global_atts(:)).lt.mpp_file(unit)%natt) &
call mpp_error(FATAL, 'MPP_GET_ATTS: atttype not dimensioned properly in calling routine, file '// &
trim(mpp_file(unit)%name))
natt = mpp_file(unit)%natt
global_atts = default_att
do i=1,natt
global_atts(i) = mpp_file(unit)%Att(i)
enddo
return
end subroutine mpp_get_global_atts
!#####################################################################
subroutine mpp_get_field_atts(field, name, units, longname, min, max, missing, ndim, siz, axes, atts, &
valid, scale, add, checksum)
type(fieldtype), intent(in) :: field
character(len=*), intent(out), optional :: name, units
character(len=*), intent(out), optional :: longname
real, intent(out), optional :: min,max,missing
integer, intent(out), optional :: ndim
integer, intent(out), dimension(:), optional :: siz
type(validtype), intent(out), optional :: valid
real, intent(out), optional :: scale
real, intent(out), optional :: add
integer(8), intent(out), dimension(:), optional :: checksum
type(atttype), intent(inout), dimension(:), optional :: atts
type(axistype), intent(inout), dimension(:), optional :: axes
integer :: n,m, check_exist
if (PRESENT(name)) name = field%name
if (PRESENT(units)) units = field%units
if (PRESENT(longname)) longname = field%longname
if (PRESENT(min)) min = field%min
if (PRESENT(max)) max = field%max
if (PRESENT(missing)) missing = field%missing
if (PRESENT(ndim)) ndim = field%ndim
if (PRESENT(atts)) then
atts = default_att
n = size(atts(:));m=size(field%Att(:))
if (n.LT.m)&
call mpp_error(FATAL,'attribute array not large enough in mpp_get_field_atts, field '//&
trim(field%name))
do n=1,m
atts(n) = field%Att(n)
end do
end if
if (PRESENT(axes)) then
axes = default_axis
n = size(axes(:));m=field%ndim
if (n.LT.m) &
call mpp_error(FATAL,'axis array not large enough in mpp_get_field_atts, field '//&
trim(field%name))
do n=1,m
axes(n) = field%axes(n)
end do
end if
if (PRESENT(siz)) then
siz = -1
n = size(siz(:));m=field%ndim
if (n.LT.m) &
call mpp_error(FATAL,'size array not large enough in mpp_get_field_atts, field '//&
trim(field%name))
do n=1,m
siz(n) = field%size(n)
end do
end if
if(PRESENT(valid)) then
call mpp_get_valid(field,valid)
endif
if(PRESENT(scale)) scale = field%scale
if(present(add)) add = field%add
if(present(checksum)) then
checksum = 0
check_exist = mpp_find_att(field%Att(:),"checksum")
if ( check_exist >= 0 ) then
if(size(checksum(:)) >size(field%checksum(:))) call mpp_error(FATAL,"size(checksum(:)) >size(field%checksum(:))")
checksum = field%checksum(1:size(checksum(:)))
endif
endif
return
end subroutine mpp_get_field_atts
!#####################################################################
subroutine mpp_get_axis_atts( axis, name, units, longname, cartesian, &
calendar, sense, len, natts, atts, compressed )
type(axistype), intent(in) :: axis
character(len=*), intent(out) , optional :: name, units
character(len=*), intent(out), optional :: longname, cartesian
character(len=*), intent(out), optional :: compressed, calendar
integer,intent(out), optional :: sense, len , natts
type(atttype), intent(inout), optional, dimension(:) :: atts
integer :: n,m
if (PRESENT(name)) name = axis%name
if (PRESENT(units)) units = axis%units
if (PRESENT(longname)) longname = axis%longname
if (PRESENT(cartesian)) cartesian = axis%cartesian
if (PRESENT(compressed)) compressed = axis%compressed
if (PRESENT(calendar)) calendar = axis%calendar
if (PRESENT(sense)) sense = axis%sense
if (PRESENT(len)) len = axis%len
if (PRESENT(atts)) then
atts = default_att
n = size(atts(:));m=size(axis%Att(:))
if (n.LT.m) &
call mpp_error(FATAL,'attribute array not large enough in mpp_get_field_atts, axis '//&
trim(axis%name))
do n=1,m
atts(n) = axis%Att(n)
end do
end if
if (PRESENT(natts)) natts = size(axis%Att(:))
return
end subroutine mpp_get_axis_atts
!#####################################################################
subroutine mpp_get_fields( unit, variables )
!
! copy variable information from file (excluding data)
! variables is a fieldtype array which is allocated by the
! calling routine
!
integer, intent(in) :: unit
type(fieldtype), intent(inout) :: variables(:)
integer :: nvar,i
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_GET_FIELDS: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'MPP_GET_FIELDS: invalid unit number.' )
if (size(variables(:)).ne.mpp_file(unit)%nvar) &
call mpp_error(FATAL,'MPP_GET_FIELDS: fieldtype not dimensioned properly in calling routine, file '//&
trim(mpp_file(unit)%name))
nvar = mpp_file(unit)%nvar
do i=1,nvar
variables(i) = mpp_file(unit)%Var(i)
enddo
return
end subroutine mpp_get_fields
!#####################################################################
subroutine mpp_get_axes( unit, axes, time_axis )
!
! copy variable information from file (excluding data)
! axes is an axistype array which is allocated by the
! calling routine
!
integer, intent(in) :: unit
type(axistype), intent(inout) :: axes(:)
type(axistype), intent(inout), optional :: time_axis
integer :: ndim,i
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_GET_AXES: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%opened )&
call mpp_error( FATAL, 'MPP_GET_AXES: invalid unit number, file '//trim(mpp_file(unit)%name))
if (size(axes(:)).ne.mpp_file(unit)%ndim) &
call mpp_error(FATAL, 'MPP_GET_AXES: axistype not dimensioned properly in calling routine, file '//&
trim(mpp_file(unit)%name))
if (PRESENT(time_axis)) time_axis = default_axis
ndim = mpp_file(unit)%ndim
do i=1,ndim
axes(i)=mpp_file(unit)%Axis(i)
if (PRESENT(time_axis) &
.AND. .NOT. ASSOCIATED(mpp_file(unit)%Axis(i)%data) &
.AND. mpp_file(unit)%Axis(i)%type /= -1) then
time_axis = mpp_file(unit)%Axis(i)
endif
enddo
return
end subroutine mpp_get_axes
!#####################################################################
function mpp_get_dimension_length(unit, dimname, found)
integer, intent(in) :: unit
character(len=*), intent(in) :: dimname
logical, optional, intent(out) :: found
integer :: mpp_get_dimension_length
logical :: found_dim
integer :: i
if( .NOT.module_is_initialized ) &
call mpp_error( FATAL, 'mpp_get_dimension_length: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%opened )&
call mpp_error( FATAL, 'mpp_get_dimension_length: invalid unit number, file '//trim(mpp_file(unit)%name))
found_dim = .false.
mpp_get_dimension_length = -1
do i = 1, mpp_file(unit)%ndim
if(trim(dimname) == trim(mpp_file(unit)%Axis(i)%name)) then
mpp_get_dimension_length = mpp_file(unit)%Axis(i)%len
found_dim = .true.
exit
endif
enddo
if(present(found)) found = found_dim
end function mpp_get_dimension_length
!#####################################################################
subroutine mpp_get_time_axis( unit, time_axis )
integer, intent(in) :: unit
type(axistype), intent(inout) :: time_axis
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_GET_AXES: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%opened )&
call mpp_error( FATAL, 'MPP_GET_AXES: invalid unit number, file '//trim(mpp_file(unit)%name))
time_axis = mpp_file(unit)%Axis(mpp_file(unit)%recdimid)
return
end subroutine mpp_get_time_axis
!####################################################################
function mpp_get_default_calendar( )
character(len=len(default_axis%calendar)) :: mpp_get_default_calendar
mpp_get_default_calendar = default_axis%calendar
end function mpp_get_default_calendar
!#####################################################################
!
!
! Get file time data.
!
!
! Get file time data.
!
!
! call mpp_get_times( unit, time_values )
!
!
!
!
subroutine mpp_get_times( unit, time_values )
!
! copy time axis values from the file
!
integer, intent(in) :: unit
real, intent(inout) :: time_values(:)
integer :: ntime,i
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_GET_TIMES: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%opened )&
call mpp_error(FATAL, 'MPP_GET_TIMES: invalid unit number, file '//trim(mpp_file(unit)%name))
! NF_INQ_DIM returns -1 for the length of a record dimension if
! it does not exist
if (mpp_file(unit)%time_level == -1) then
time_values = 0.0
return
endif
if (size(time_values(:)).ne.mpp_file(unit)%time_level) &
call mpp_error(FATAL,'MPP_GET_TIMES: time_values not dimensioned properly in calling routine, file '//&
trim(mpp_file(unit)%name))
ntime = mpp_file(unit)%time_level
do i=1,ntime
time_values(i) = mpp_file(unit)%time_values(i)
enddo
return
end subroutine mpp_get_times
!#####################################################################
function mpp_get_field_index(fields,fieldname)
type(fieldtype), dimension(:) :: fields
character(len=*) :: fieldname
integer :: mpp_get_field_index
integer :: n
mpp_get_field_index = -1
do n=1,size(fields(:))
if (lowercase(fields(n)%name) == lowercase(fieldname)) then
mpp_get_field_index = n
exit
endif
enddo
return
end function mpp_get_field_index
!#####################################################################
function mpp_get_axis_index(axes,axisname)
type(axistype), dimension(:) :: axes
character(len=*) :: axisname
integer :: mpp_get_axis_index
integer :: n
mpp_get_axis_index = -1
do n=1,size(axes(:))
if (lowercase(axes(n)%name) == lowercase(axisname)) then
mpp_get_axis_index = n
exit
endif
enddo
return
end function mpp_get_axis_index
!#####################################################################
function mpp_get_axis_by_name(unit,axisname)
integer :: unit
character(len=*) :: axisname
type(axistype) :: mpp_get_axis_by_name
integer :: n
mpp_get_axis_by_name = default_axis
do n=1,size(mpp_file(unit)%Axis(:))
if (lowercase(mpp_file(unit)%Axis(n)%name) == lowercase(axisname)) then
mpp_get_axis_by_name = mpp_file(unit)%Axis(n)
exit
endif
enddo
return
end function mpp_get_axis_by_name
!#####################################################################
function mpp_get_field_size(field)
type(fieldtype) :: field
integer :: mpp_get_field_size(4)
mpp_get_field_size = -1
mpp_get_field_size(1) = field%size(1)
mpp_get_field_size(2) = field%size(2)
mpp_get_field_size(3) = field%size(3)
mpp_get_field_size(4) = field%size(4)
return
end function mpp_get_field_size
!#####################################################################
function mpp_get_axis_length(axis)
type(axistype) :: axis
integer :: mpp_get_axis_length
mpp_get_axis_length = axis%len
return
end function mpp_get_axis_length
!#####################################################################
function mpp_get_axis_bounds(axis, data, name)
type(axistype), intent(in) :: axis
real, dimension(:), intent(out) :: data
character(len=*), optional, intent(out) :: name
logical :: mpp_get_axis_bounds
if (size(data(:)).lt.axis%len+1)&
call mpp_error(FATAL,'MPP_GET_AXIS_BOUNDS: data array not large enough, axis '//trim(axis%name))
if (.NOT.ASSOCIATED(axis%data_bounds)) then
mpp_get_axis_bounds = .false.
else
mpp_get_axis_bounds = .true.
data(1:axis%len+1) = axis%data_bounds(:)
endif
if(present(name)) name = trim(axis%name_bounds)
return
end function mpp_get_axis_bounds
!#####################################################################
subroutine mpp_get_axis_data( axis, data )
type(axistype), intent(in) :: axis
real, dimension(:), intent(out) :: data
if (size(data(:)).lt.axis%len)&
call mpp_error(FATAL,'MPP_GET_AXIS_DATA: data array not large enough, axis '//trim(axis%name))
if (.NOT.ASSOCIATED(axis%data)) then
call mpp_error(NOTE,'MPP_GET_AXIS_DATA: use mpp_get_times for record dims')
data = 0.
else
data(1:axis%len) = axis%data
endif
return
end subroutine mpp_get_axis_data
!#####################################################################
function mpp_get_recdimid(unit)
!
integer, intent(in) :: unit
integer :: mpp_get_recdimid
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_GET_RECDIMID: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'MPP_GET_RECDIMID: invalid unit number.' )
mpp_get_recdimid = mpp_file(unit)%recdimid
return
end function mpp_get_recdimid
subroutine mpp_get_iospec( unit, iospec )
integer, intent(in) :: unit
character(len=*), intent(inout) :: iospec
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_GET_IOSPEC: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'MPP_GET_IOSPEC: invalid unit number.' )
# 528
return
end subroutine mpp_get_iospec
!#####################################################################
!
!
! Get netCDF ID of an open file.
!
!
! This returns the ncid associated with the open file on
! unit. It is used in the instance that the user desires to
! perform netCDF calls upon the file that are not provided by the
! mpp_io_mod API itself.
!
!
! mpp_get_ncid(unit)
!
!
!
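!
! A minimal sketch (illustrative): obtain the ncid to make a raw netCDF-3
! Fortran call that mpp_io_mod does not wrap (the variables "ncid" and
! "ngatts" are assumptions of the caller):
!
! ncid  = mpp_get_ncid(unit)
! error = NF_INQ_NATTS( ncid, ngatts )
!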
function mpp_get_ncid(unit)
integer :: mpp_get_ncid
integer, intent(in) :: unit
mpp_get_ncid = mpp_file(unit)%ncid
return
end function mpp_get_ncid
!#####################################################################
function mpp_get_axis_id(axis)
integer mpp_get_axis_id
type(axistype), intent(in) :: axis
mpp_get_axis_id = axis%id
return
end function mpp_get_axis_id
!#####################################################################
function mpp_get_field_id(field)
integer mpp_get_field_id
type(fieldtype), intent(in) :: field
mpp_get_field_id = field%id
return
end function mpp_get_field_id
!#####################################################################
subroutine mpp_get_unit_range( unit_begin_out, unit_end_out )
integer, intent(out) :: unit_begin_out, unit_end_out
unit_begin_out = unit_begin; unit_end_out = unit_end
return
end subroutine mpp_get_unit_range
!#####################################################################
subroutine mpp_set_unit_range( unit_begin_in, unit_end_in )
integer, intent(in) :: unit_begin_in, unit_end_in
if( unit_begin_in.GT.unit_end_in )call mpp_error( FATAL, 'MPP_SET_UNIT_RANGE: unit_begin_in.GT.unit_end_in.' )
if( unit_begin_in.LT.0 )call mpp_error( FATAL, 'MPP_SET_UNIT_RANGE: unit_begin_in.LT.0.' )
if( unit_end_in .GT.maxunits )call mpp_error( FATAL, 'MPP_SET_UNIT_RANGE: unit_end_in.GT.maxunits.' )
unit_begin = unit_begin_in; unit_end = unit_end_in
return
end subroutine mpp_set_unit_range
!#####################################################################
subroutine mpp_io_set_stack_size(n)
!set the mpp_io_stack variable to be at least n LONG words long
integer, intent(in) :: n
character(len=10) :: text
if( n.GT.mpp_io_stack_size .AND. allocated(mpp_io_stack) )deallocate(mpp_io_stack)
if( .NOT.allocated(mpp_io_stack) )then
allocate( mpp_io_stack(n) )
mpp_io_stack_size = n
write( text,'(i10)' )n
if( pe.EQ.mpp_root_pe() )call mpp_error( NOTE, 'MPP_IO_SET_STACK_SIZE: stack size set to '//text//'.' )
end if
return
end subroutine mpp_io_set_stack_size
!#####################################################################
! based on presence/absence of attributes, defines valid range or missing
! value. For details, see section 8.1 of NetCDF User Guide
subroutine mpp_get_valid(f,v)
type(fieldtype),intent(in) :: f ! field
type(validtype),intent(out) :: v ! validator
integer :: irange,imin,imax,ifill,imissing,iscale
integer :: valid_T, scale_T ! types of attributes
v%is_range = .true.
v%min = -HUGE(v%min); v%max = HUGE(v%max)
if (f%natt == 0) return
! find indices of relevant attributes
irange = mpp_find_att(f%att,'valid_range')
imin = mpp_find_att(f%att,'valid_min')
imax = mpp_find_att(f%att,'valid_max')
ifill = mpp_find_att(f%att,'_FillValue')
imissing = mpp_find_att(f%att,'missing_value')
! find the widest type of scale and offset; note that the code
! uses the assumption that NetCDF types are arranged in the order of rank,
! that is NF_BYTE < NF_SHORT < NF_INT < NF_FLOAT < NF_DOUBLE
scale_T = 0
iscale = mpp_find_att(f%att,'scale_factor')
if(iscale>0) scale_T = f%att(iscale)%type
iscale = mpp_find_att(f%att,'add_offset')
if(iscale>0) scale_T = max(scale_T,f%att(iscale)%type)
! examine possible range attributes
valid_T = 0
if (irange>0) then
v%min = f%att(irange)%fatt(1)
v%max = f%att(irange)%fatt(2)
valid_T = f%att(irange)%type
else if (imax>0.or.imin>0) then
if(imax>0) then
v%max = f%att(imax)%fatt(1)
valid_T = max(valid_T,f%att(imax)%type)
endif
if(imin>0) then
v%min = f%att(imin)%fatt(1)
valid_T = max(valid_T,f%att(imin)%type)
endif
else if (imissing > 0) then
v%is_range = .false.
! here we always scale, since missing_value is supposed to be in
! external representation
v%min = f%att(imissing)%fatt(1)*f%scale + f%add
else if (ifill>0) then
!z1l ifdef is added in to be able to compile without netCDF.
! define min and max according to _FillValue
if(f%att(ifill)%fatt(1)>0) then
! if _FillValue is positive, then it defines valid maximum
v%max = f%att(ifill)%fatt(1)
select case(f%type)
case (NF_BYTE,NF_SHORT,NF_INT)
v%max = v%max-1
case (NF_FLOAT)
v%max = nearest(nearest(real(v%max,4),-1.0),-1.0)
case (NF_DOUBLE)
v%max = nearest(nearest(real(v%max,8),-1.0),-1.0)
end select
! always do the scaling, as the _FillValue is in external
! representation
v%max = v%max*f%scale + f%add
else
! if _FillValue is negative or zero, then it defines valid minimum
v%min = f%att(ifill)%fatt(1)
select case(f%type)
case (NF_BYTE,NF_SHORT,NF_INT)
v%min = v%min+1
case (NF_FLOAT)
v%min = nearest(nearest(real(v%min,4),+1.0),+1.0)
case (NF_DOUBLE)
v%min = nearest(nearest(real(v%min,8),+1.0),+1.0)
end select
! always do the scaling, as the _FillValue is in external
! representation
v%min = v%min*f%scale + f%add
endif
endif
! If valid_range is the same type as scale_factor (actually the wider of
! scale_factor and add_offset) and this is wider than the external data, then it
! will be interpreted as being in the units of the internal (unpacked) data.
! Otherwise it is in the units of the external (packed) data.
! Note that it is not relevant if we went through the missing_value or _FillValue
! branches, because in this case all irange, imin, and imax are less than 0
if(.not.((valid_T == scale_T).and.(scale_T>f%type))) then
if(irange>0 .or. imin>0) then
v%min = v%min*f%scale + f%add
endif
if(irange>0 .or. imax>0) then
v%max = v%max*f%scale + f%add
endif
endif
end subroutine mpp_get_valid
!#####################################################################
logical elemental function mpp_is_valid(x, v)
real , intent(in) :: x ! real value to be examined
type(validtype), intent(in) :: v ! validator
if (v%is_range) then
mpp_is_valid = (v%min<=x).and.(x<=v%max)
else
mpp_is_valid = x/=v%min
endif
end function mpp_is_valid
!#####################################################################
! finds an attribute by name in the array; returns -1 if it is not
! found
function mpp_find_att(atts, name)
integer :: mpp_find_att
type(atttype), intent(in) :: atts(:) ! array of attributes
character(len=*) :: name ! name of the attributes
integer :: i
mpp_find_att = -1
do i = 1, size(atts)
if (trim(name)==trim(atts(i)%name)) then
mpp_find_att=i
exit
endif
enddo
end function mpp_find_att
!#####################################################################
! return the name of an attribute.
function mpp_get_att_name(att)
type(atttype), intent(in) :: att
character(len=len(att%name)) :: mpp_get_att_name
mpp_get_att_name = att%name
return
end function mpp_get_att_name
!#####################################################################
! return the type of an attribute.
function mpp_get_att_type(att)
type(atttype), intent(in) :: att
integer :: mpp_get_att_type
mpp_get_att_type = att%type
return
end function mpp_get_att_type
!#####################################################################
! return the length of an attribute.
function mpp_get_att_length(att)
type(atttype), intent(in) :: att
integer :: mpp_get_att_length
mpp_get_att_length = att%len
return
end function mpp_get_att_length
!#####################################################################
! return the char value of an attribute.
function mpp_get_att_char(att)
type(atttype), intent(in) :: att
character(len=att%len) :: mpp_get_att_char
mpp_get_att_char = att%catt
return
end function mpp_get_att_char
!#####################################################################
! return the real array value of an attribute.
function mpp_get_att_real(att)
type(atttype), intent(in) :: att
real, dimension(size(att%fatt(:))) :: mpp_get_att_real
mpp_get_att_real = att%fatt
return
end function mpp_get_att_real
!#####################################################################
! return the real array value of an attribute.
function mpp_get_att_real_scalar(att)
type(atttype), intent(in) :: att
real :: mpp_get_att_real_scalar
mpp_get_att_real_scalar = att%fatt(1)
return
end function mpp_get_att_real_scalar
!#####################################################################
! return the name of a field
function mpp_get_field_name(field)
type(fieldtype), intent(in) :: field
character(len=len(field%name)) :: mpp_get_field_name
mpp_get_field_name = field%name
return
end function mpp_get_field_name
!#####################################################################
! return the file name of the corresponding unit
function mpp_get_file_name(unit)
integer, intent(in) :: unit
character(len=len(mpp_file(1)%name)) :: mpp_get_file_name
mpp_get_file_name = mpp_file(unit)%name
return
end function mpp_get_file_name
!####################################################################
! return whether the file on a given unit is open
function mpp_file_is_opened(unit)
integer, intent(in) :: unit
logical :: mpp_file_is_opened
mpp_file_is_opened = mpp_file(unit)%opened
return
end function mpp_file_is_opened
!####################################################################
! return the text attribute value for a given field name
subroutine mpp_get_field_att_text(unit, fieldname, attname, attvalue)
integer, intent(in) :: unit
character(len=*), intent(in) :: fieldname, attname
character(len=*), intent(out) :: attvalue
logical :: found_field, found_att
integer :: i, j, length
found_field = .false.
found_att = .false.
do i=1,mpp_file(unit)%nvar
if( trim(mpp_file(unit)%Var(i)%name) == trim(fieldname)) then
found_field = .true.
do j=1, size(mpp_file(unit)%Var(i)%Att(:))
if( trim(mpp_file(unit)%Var(i)%Att(j)%name) == trim(attname) ) then
found_att = .true.
length = mpp_file(unit)%Var(i)%Att(j)%len
if(len(attvalue) .LE. length ) call mpp_error(FATAL, &
'mpp_io_util.inc: length of attvalue is less than the length of catt')
attvalue = trim(mpp_file(unit)%Var(i)%Att(j)%catt(1:length))
exit
end if
end do
exit
end if
end do
if(.NOT. found_field) call mpp_error(FATAL,"mpp_io_util.inc: field "//trim(fieldname)// &
" does not exist in the file "//trim(mpp_file(unit)%name) )
if(.NOT. found_att) call mpp_error(FATAL,"mpp_io_util.inc: attribute "//trim(attname)//" of field "&
//trim(fieldname)// " does not exist in the file "//trim(mpp_file(unit)%name) )
return
end subroutine mpp_get_field_att_text
!####################################################################
! return mpp_io_nml variable io_clocks_on
function mpp_io_clock_on()
logical :: mpp_io_clock_on
mpp_io_clock_on = io_clocks_on
return
end function mpp_io_clock_on
function mpp_attribute_exist(field,name)
logical :: mpp_attribute_exist
type(fieldtype), intent(in) :: field ! The field in which to search for the attribute.
character(len=*), intent(in) :: name ! name of the attributes
if(field%natt > 0) then
mpp_attribute_exist = ( mpp_find_att(field%Att(:),name) > 0 )
else
mpp_attribute_exist = .false.
endif
end function mpp_attribute_exist
!#######################################################################
subroutine mpp_dist_io_pelist(ssize,pelist)
integer, intent(in) :: ssize ! Stripe size for dist read
integer, allocatable, intent(out) :: pelist(:)
integer :: i, lsize, ioroot
logical :: is_ioroot=.false.
! Did you make a mistake?
if(ssize < 1) call mpp_error(FATAL,'mpp_dist_io_pelist: I/O stripe size < 1')
is_ioroot = mpp_is_dist_ioroot(ssize,ioroot=ioroot,lsize=lsize)
! Did I make a mistake?
if(lsize < 1) call mpp_error(FATAL,'mpp_dist_io_pelist: size of pelist < 1')
allocate(pelist(lsize))
do i=1,lsize
pelist(i) = ioroot + i - 1
enddo
end subroutine mpp_dist_io_pelist
!#######################################################################
logical function mpp_is_dist_ioroot(ssize,ioroot,lsize)
integer, intent(in) :: ssize ! Dist io set size
integer, intent(out), optional :: ioroot, lsize
integer :: pe, npes, mypos, maxpe, d_ioroot, d_lsize, last_ioroot
integer :: rootpe
if(ssize < 1) call mpp_error(FATAL,'mpp_is_dist_ioroot: I/O stripe size < 1')
mpp_is_dist_ioroot = .false.
rootpe = mpp_root_pe()
d_lsize = ssize
pe = mpp_pe()
mypos = modulo(pe-rootpe,ssize) ! Which PE am I in the io group?
d_ioroot = pe - mypos ! What is the io root for the group?
npes = mpp_npes()
maxpe = min(d_ioroot+ssize,npes+rootpe) - 1 ! Handle end case
d_lsize = maxpe - d_ioroot + 1
if(mod(npes,ssize) == 1)then ! Ensure there are no sets with 1 member
last_ioroot = (npes-1) - ssize
if(pe >= last_ioroot) then
d_ioroot = last_ioroot
d_lsize = ssize + 1
endif
endif
if(pe == d_ioroot) mpp_is_dist_ioroot = .true.
if(PRESENT(ioroot)) ioroot = d_ioroot
if(PRESENT(lsize)) lsize = d_lsize
end function mpp_is_dist_ioroot
# 1103 "../mpp/mpp_io.F90" 2
# 1 "../mpp/include/mpp_io_misc.inc" 1
! -*-f90-*-
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see .
!***********************************************************************
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! !
! mpp_io_init: initialize parallel I/O !
! !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!
!
! Initialize mpp_io_mod.
!
!
! Called to initialize the mpp_io_mod package. Sets the range
! of valid fortran units and initializes the mpp_file array of
! type(filetype). mpp_io_init will call mpp_init and
! mpp_domains_init, to make sure its parent modules have been
! initialized. (Repeated calls to the init routines do no harm,
! so don't worry if you already called it).
!
!
! call mpp_io_init( flags, maxunit )
!
!
!
!
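!
! A minimal sketch (illustrative): typical initialization at the start of
! a run, before any call to mpp_open (passing MPP_VERBOSE is optional and
! assumes the flag is use-associated from mpp_mod):
!
! call mpp_io_init()                      !default behaviour
! call mpp_io_init( flags=MPP_VERBOSE )   !or request verbose messages
!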
subroutine mpp_io_init( flags, maxunit )
integer, intent(in), optional :: flags, maxunit
integer :: unit_nml, io_status, iunit
integer :: logunit, outunit, inunit, errunit
logical :: opened
real(8) :: doubledata = 0
real :: realarray(4)
if( module_is_initialized )return
!initialize IO package: initialize mpp_file array, set valid range of units for fortran IO
call mpp_init(flags) !if mpp_init has been called, this call will merely return
pe = mpp_pe()
npes = mpp_npes()
call mpp_domains_init(flags)
maxunits = 1024
if( PRESENT(maxunit) )maxunits = maxunit
if( PRESENT(flags) )then
debug = flags.EQ.MPP_DEBUG
verbose = flags.EQ.MPP_VERBOSE .OR. debug
end if
!set range of allowed fortran unit numbers: could be compiler-dependent (should not overlap stdin/out/err)
call mpp_set_unit_range( 103, maxunits )
!--- namelist
read (input_nml_file, mpp_io_nml, iostat=io_status)
# 85
if (io_status > 0) then
call mpp_error(FATAL,'=>mpp_io_init: Error reading input.nml')
endif
outunit = stdout(); logunit=stdlog()
write(outunit, mpp_io_nml)
write(logunit, mpp_io_nml)
!--- check the deflate level, set deflate = 1 if deflate_level is greater than or equal to 0
if(deflate_level .GE. 0) deflate = 1
if(deflate .NE. 0) then
if(deflate_level < 0 .OR. deflate_level > 9) then
call mpp_error(FATAL, "mpp_io_mod(mpp_io_init): mpp_io_nml variable deflate_level must be between 0 and 9 when set")
endif
endif
! determine the pack_size
pack_size = size(transfer(doubledata, realarray))
if( pack_size .NE. 1 .AND. pack_size .NE. 2) call mpp_error(FATAL,'mpp_io_mod(mpp_io_init): pack_size should be 1 or 2')
!initialize default_field
default_field%name = 'noname'
default_field%units = 'nounits'
default_field%longname = 'noname'
default_field%id = -1
default_field%type = -1
default_field%natt = -1
default_field%ndim = -1
default_field%checksum = 0
!largest possible 4-byte reals
default_field%min = -huge(1._4)
default_field%max = huge(1._4)
default_field%missing = MPP_FILL_DOUBLE ! now using netcdf:NF_FILL_DOUBLE instead of -1e36
default_field%fill = MPP_FILL_DOUBLE ! now using netcdf:NF_FILL_DOUBLE instead of -1e36
default_field%scale = 1.0
default_field%add = 0.0
default_field%pack = 1
default_field%time_axis_index = -1 !this value will never match any index
! Initialize default axis
default_axis%name = 'noname'
default_axis%units = 'nounits'
default_axis%longname = 'noname'
default_axis%cartesian = 'none'
default_axis%compressed = 'unspecified'
default_axis%calendar = 'unspecified'
default_axis%sense = 0
default_axis%len = -1
default_axis%id = -1
default_axis%did = -1
default_axis%type = -1
default_axis%natt = -1
! Initialize default attribute
default_att%name = 'noname'
default_att%type = -1
default_att%len = -1
default_att%catt = 'none'
!up to MAXUNITS fortran units and MAXUNITS netCDF units are supported
!file attributes (opened, format, access, threading, fileset) are saved against the unit number
!external handles to netCDF units are saved from maxunits+1:2*maxunits
allocate( mpp_file(NULLUNIT:2*maxunits) ) !starts at NULLUNIT=-1, used by non-participant PEs in single-threaded I/O
mpp_file(:)%name = ' '
mpp_file(:)%action = -1
mpp_file(:)%format = -1
mpp_file(:)%threading = -1
mpp_file(:)%fileset = -1
mpp_file(:)%record = -1
mpp_file(:)%ncid = -1
mpp_file(:)%opened = .FALSE.
mpp_file(:)%initialized = .FALSE.
mpp_file(:)%write_on_this_pe = .FALSE.
mpp_file(:)%io_domain_exist = .FALSE.
mpp_file(:)%time_level = 0
mpp_file(:)%time = NULLTIME
mpp_file(:)%id = -1
mpp_file(:)%valid = .FALSE.
mpp_file(:)%ndim = -1
mpp_file(:)%nvar = -1
!NULLUNIT "file" is always single-threaded, open and initialized (to pass checks in mpp_write)
mpp_file(NULLUNIT)%threading = MPP_SINGLE
mpp_file(NULLUNIT)%opened = .TRUE.
mpp_file(NULLUNIT)%valid = .TRUE.
mpp_file(NULLUNIT)%initialized = .TRUE.
!declare the stdunits to be open
mpp_file(outunit)%opened = .TRUE.
mpp_file(logunit)%opened = .TRUE.
inunit = stdin() ; mpp_file(inunit)%opened = .TRUE.
errunit = stderr() ; mpp_file(errunit)%opened = .TRUE.
if( pe.EQ.mpp_root_pe() )then
iunit = stdlog() ! PGI compiler does not like stdlog() doing I/O within write call
write( iunit,'(/a)' )'MPP_IO module '//trim(version)
text = NF_INQ_LIBVERS()
write( iunit,'(/a)' )'Using netCDF library version '//trim(text)
endif
# 189
call mpp_io_set_stack_size(131072) ! default initial value
call mpp_sync()
if( io_clocks_on )then
mpp_read_clock = mpp_clock_id( 'mpp_read')
mpp_write_clock = mpp_clock_id( 'mpp_write')
mpp_open_clock = mpp_clock_id( 'mpp_open')
mpp_close_clock = mpp_clock_id( 'mpp_close')
endif
module_is_initialized = .TRUE.
return
end subroutine mpp_io_init
!
!
! Exit mpp_io_mod.
!
!
! It is recommended, though not at present required, that you call this
! near the end of a run. This will close all open files that were opened
! with mpp_open. Files opened otherwise
! are not affected.
!
!
! call mpp_io_exit()
!
!
subroutine mpp_io_exit(string)
character(len=*), optional :: string
integer :: unit,istat
logical :: dosync
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_IO_EXIT: must first call mpp_io_init.' )
dosync = .TRUE.
if( PRESENT(string) )then
dosync = .NOT.( trim(string).EQ.'NOSYNC' )
end if
!close all open fortran units
do unit = unit_begin,unit_end
if( mpp_file(unit)%opened )call FLUSH(unit)
end do
if( dosync )call mpp_sync()
do unit = unit_begin,unit_end
if( mpp_file(unit)%opened )close(unit)
end do
!close all open netCDF units
do unit = maxunits+1,2*maxunits
if( mpp_file(unit)%opened )error = NF_CLOSE(mpp_file(unit)%ncid)
end do
! call mpp_max(mpp_io_stack_hwm)
if( pe.EQ.mpp_root_pe() )then
! write( stdout,'(/a)' )'Exiting MPP_IO module...'
! write( stdout,* )'MPP_IO_STACK high water mark=', mpp_io_stack_hwm
end if
deallocate(mpp_file)
module_is_initialized = .FALSE.
return
end subroutine mpp_io_exit
subroutine netcdf_err( err, file, axis, field, attr, string )
integer, intent(in) :: err
type(filetype), optional :: file
type(axistype), optional :: axis
type(fieldtype), optional :: field
type(atttype), optional :: attr
character(len=*), optional :: string
character(len=256) :: errmsg
if( err.EQ.NF_NOERR )return
errmsg = NF_STRERROR(err)
if( PRESENT(file) )errmsg = trim(errmsg)//' File='//file%name
if( PRESENT(axis) )errmsg = trim(errmsg)//' Axis='//axis%name
if( PRESENT(field) )errmsg = trim(errmsg)//' Field='//field%name
if( PRESENT(attr) )errmsg = trim(errmsg)//' Attribute='//attr%name
if( PRESENT(string) )errmsg = trim(errmsg)//string
call mpp_io_exit('NOSYNC') !make sure you close all open files
call mpp_error( FATAL, 'NETCDF ERROR: '//trim(errmsg) )
return
end subroutine netcdf_err
subroutine mpp_flush(unit)
!flush the output on a unit, syncing with disk
integer, intent(in) :: unit
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_FLUSH: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%write_on_this_pe) return
if( .NOT.mpp_file(unit)%opened ) call mpp_error( FATAL, 'MPP_FLUSH: invalid unit number.' )
if( .NOT.mpp_file(unit)%initialized )call mpp_error( FATAL, 'MPP_FLUSH: cannot flush a file during writing of metadata.' )
if( mpp_file(unit)%format.EQ.MPP_NETCDF )then
error = NF_SYNC(mpp_file(unit)%ncid); call netcdf_err( error, mpp_file(unit) )
else
call FLUSH(unit)
end if
return
end subroutine mpp_flush
!> Return the maximum number of MPP file units available.
!!
!! maxunits is a mpp_io_mod module variable and defines the maximum number
!! of Fortran file units that can be open simultaneously. mpp_get_maxunits
!! simply returns this number.
integer function mpp_get_maxunits()
mpp_get_maxunits = maxunits
end function mpp_get_maxunits
logical function do_cf_compliance()
do_cf_compliance = cf_compliance
end function do_cf_compliance
# 1104 "../mpp/mpp_io.F90" 2
# 1 "../mpp/include/mpp_io_connect.inc" 1
! -*-f90-*-
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see .
!***********************************************************************
!
!
! Open a file for parallel I/O.
!
!
! Open a file for parallel I/O.
!
!
! call mpp_open( unit, file, action, form, access, threading, fileset,
! iospec, nohdrs, recl, pelist )
!
!
! unit is intent(OUT): always _returned_by_ mpp_open().
!
!
! file is the filename: REQUIRED
! we append .nc to filename if it is a netCDF file
! we append .pppp to filename if fileset is private (pppp is PE number)
!
!
! action is one of MPP_RDONLY, MPP_APPEND, MPP_WRONLY or MPP_OVERWR.
!
!
! form is one of MPP_ASCII: formatted read/write
! MPP_NATIVE: unformatted read/write with no conversion
! MPP_IEEE32: unformatted read/write with conversion to IEEE32
! MPP_NETCDF: unformatted read/write with conversion to netCDF
!
!
! access is one of MPP_SEQUENTIAL or MPP_DIRECT (ignored for netCDF).
! RECL argument is REQUIRED for direct access IO.
!
!
! threading is one of MPP_SINGLE or MPP_MULTI
! single-threaded IO in a multi-PE run is done by PE0.
!
!
! fileset is one of MPP_MULTI and MPP_SINGLE
! fileset is only used for multi-threaded I/O
! if all I/O PEs in the pelist use a single fileset, they write to the same file
! if all I/O PEs in the pelist use a multi fileset, they each write an independent file
!
!
! pelist is the list of I/O PEs (currently ALL).
!
!
! recl is the record length in bytes.
!
!
! iospec is a system hint for I/O organization, e.g assign(1) on SGI/Cray systems.
!
!
! nohdrs has no effect when action=MPP_RDONLY|MPP_APPEND or when form=MPP_NETCDF
!
!
! The integer parameters to be passed as flags (MPP_RDONLY,
! etc) are all made available by use association. The unit
! returned by mpp_open is guaranteed unique. For non-netCDF I/O
! it is a valid fortran unit number and fortran I/O can be directly called
! on the file.
!
! MPP_WRONLY will guarantee that existing files named
! file will not be clobbered. MPP_OVERWR
! allows overwriting of files.
!
! Files opened read-only by many processors will give each processor
! an independent pointer into the file, i.e:
!
!
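! (illustrative sketch; the namelist and file names are hypothetical)
!
! namelist /mylist/ a, b, c
! ...
! call mpp_open( unit, 'input.nml', action=MPP_RDONLY )
! read( unit, nml=mylist )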
!
! will result in each PE independently reading the same namelist.
!
! Metadata identifying the file and the version of
! mpp_io_mod are written to a file that is opened
! MPP_WRONLY or MPP_OVERWR. If this is a
! multi-file set, an additional global attribute
! NumFilesInSet is written, to be used by post-processing
! software.
!
! If nohdrs=.TRUE. all calls to write attributes will
! return successfully without performing any writes to the
! file. The default is .FALSE..
!
! For netCDF files, headers are always written even if
! nohdrs=.TRUE.
!
! The string iospec is passed to the OS to
! characterize the I/O to be performed on the file opened on
! unit. This is typically used for I/O optimization. For
! example, the FFIO layer on SGI/Cray systems can be used for
! controlling synchronicity of reads and writes, buffering of data
! between user space and disk for I/O optimization, striping across
! multiple disk partitions, automatic data conversion and the like
! (man intro_ffio). All these actions are controlled through
! the assign command. For example, to specify asynchronous
! caching of data going to a file open on unit, one would do:
!
!
! call mpp_open( unit, ... iospec='-F cachea' )
!
!
! on an SGI/Cray system, which would pass the supplied
! iospec to the assign(3F) system call.
!
! Currently iospec performs no action on non-SGI/Cray
! systems. The interface is still provided, however: users are cordially
! invited to add the requisite system calls for other systems.
!
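!
! A minimal usage sketch for writing a netCDF file (illustrative only; the
! filename and the outline of companion mpp_write_meta/mpp_write calls are
! assumptions, not a prescription):
!
! call mpp_open( unit, 'ocean_temp', action=MPP_WRONLY, form=MPP_NETCDF, &
!                threading=MPP_SINGLE, fileset=MPP_SINGLE )
! !...write axis and field metadata with mpp_write_meta()...
! !...write data with mpp_write()...
! call mpp_close( unit )
!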
!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! !
! OPENING AND CLOSING FILES: mpp_open() and mpp_close() !
! !
! mpp_open( unit, file, action, form, access, threading, & !
! fileset, iospec, nohdrs, recl, pelist ) !
! integer, intent(out) :: unit !
! character(len=*), intent(in) :: file !
! integer, intent(in), optional :: action, form, access, threading, !
! fileset, recl !
! character(len=*), intent(in), optional :: iospec !
! logical, intent(in), optional :: nohdrs !
! integer, optional, intent(in) :: pelist(:) !default ALL !
! !
! unit is intent(OUT): always _returned_by_ mpp_open() !
! file is the filename: REQUIRED !
! we append .nc to filename if it is a netCDF file !
! we append .pppp to filename if fileset is private (pppp is PE number)        !
! iospec is a system hint for I/O organization !
! e.g assign(1) on SGI/Cray systems. !
! if nohdrs is .TRUE. headers are not written on non-netCDF writes. !
! nohdrs has no effect when action=MPP_RDONLY|MPP_APPEND !
! or when form=MPP_NETCDF !
! FLAGS: !
! action is one of MPP_RDONLY, MPP_APPEND or MPP_WRONLY !
! form is one of MPP_ASCII: formatted read/write !
! MPP_NATIVE: unformatted read/write, no conversion !
! MPP_IEEE32: unformatted read/write, conversion to IEEE32 !
! MPP_NETCDF: unformatted read/write, conversion to netCDF !
! access is one of MPP_SEQUENTIAL or MPP_DIRECT (ignored for netCDF) !
! RECL argument is REQUIRED for direct access IO !
! threading is one of MPP_SINGLE or MPP_MULTI !
! single-threaded IO in a multi-PE run is done by PE0 !
! fileset is one of MPP_MULTI and MPP_SINGLE !
! fileset is only used for multi-threaded I/O !
! if the I/O PEs in use share a single fileset,                              !
! they write to the same file                                !
! if the I/O PEs in use have a multi fileset,                                !
! they each write an independent file                        !
! recl is the record length in bytes !
! pelist is the list of I/O PEs (currently ALL) !
! !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
subroutine mpp_open( unit, file, action, form, access, threading, &
fileset, iospec, nohdrs, recl, &
iostat, is_root_pe, domain, &
!----------
!ug support
domain_ug)
!----------
integer, intent(out) :: unit
character(len=*), intent(in) :: file
integer, intent(in), optional :: action, form, access
integer, intent(in), optional :: threading, fileset, recl
character(len=*), intent(in), optional :: iospec
logical, intent(in), optional :: nohdrs
integer, intent(out), optional :: iostat
logical, intent(in), optional :: is_root_pe
type(domain2d), intent(in), optional :: domain
!----------
!ug support
type(domainUG),target,intent(in),optional :: domain_ug
!----------
character(len=16) :: act, acc, for, pos
character(len=128) :: mesg
character(len=256) :: text2
integer :: action_flag, form_flag, access_flag, threading_flag, fileset_flag, length
integer :: nfiles, tile_id(1), io_layout(2)
logical :: exists, on_root_pe, dist_file
logical :: write_on_this_pe, read_on_this_pe, io_domain_exist
integer :: ios, nc_pos !position of .nc in file name
type(axistype) :: unlim !used by netCDF with mpp_append
!----------
!ug support
type(domain2d),pointer :: io_domain
type(domainUG),pointer :: io_domain_ug
integer(4) :: io_layout_ug
integer(4) :: tile_id_ug
!----------
integer*8 :: lenp
integer :: comm
integer :: info, ierror
integer,dimension(:), allocatable :: glist(:)
integer ::lena, lenb
character(len=12) ::ncblk
character(len=128) ::nc_name
integer ::f_size, f_stat
integer ::fsize, inital = 0
character(len=128) :: f_test
!----------
!ug support
!Only allow one type of mpp domain.
if (present(domain) .and. present(domain_ug)) then
call mpp_error(FATAL, &
"mpp_open: domain and domain_ug cannot both be" &
//" present in the same mpp_open call.")
endif
!Null initialize the unstructured I/O domain pointer.
io_domain => null()
io_domain_ug => null()
!----------
call mpp_clock_begin(mpp_open_clock)
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_OPEN: must first call mpp_io_init.' )
on_root_pe = mpp_pe() == mpp_root_pe()
if(present(is_root_pe)) on_root_pe = is_root_pe
dist_file = .false.
!set flags
action_flag = MPP_WRONLY !default
if( PRESENT(action) )action_flag = action
form_flag = MPP_ASCII
if( PRESENT(form) )form_flag = form
# 257
access_flag = MPP_SEQUENTIAL
if( PRESENT(access) )access_flag = access
threading_flag = MPP_SINGLE
if( npes.GT.1 .AND. PRESENT(threading) )threading_flag = threading
fileset_flag = MPP_MULTI
if( PRESENT(fileset) )fileset_flag = fileset
if( threading_flag.EQ.MPP_SINGLE )fileset_flag = MPP_SINGLE
io_domain_exist = .false.
if( PRESENT(domain) ) then
io_domain => mpp_get_io_domain(domain)
if(associated(io_domain)) io_domain_exist = .true.
!----------
!ug support
elseif (present(domain_ug)) then
io_domain_ug => mpp_get_UG_io_domain(domain_ug)
io_domain_exist = .true.
!----------
endif
write_on_this_pe = .true.
read_on_this_pe = .true.
if( threading_flag.EQ.MPP_SINGLE .AND. .NOT.on_root_pe ) then
write_on_this_pe = .false.
read_on_this_pe = .false.
endif
if(form_flag == MPP_NETCDF .AND. action_flag .NE. MPP_RDONLY) then
if(fileset_flag .EQ.MPP_SINGLE .AND. threading_flag.EQ.MPP_MULTI) then
call mpp_error(FATAL, "mpp_io_connect.inc(mpp_open): multiple thread and single "// &
"file writing/appending is not supported for netCDF file")
endif
if( fileset_flag.EQ.MPP_SINGLE .AND. .NOT.on_root_pe ) then
write_on_this_pe = .false.
read_on_this_pe = .false.
endif
endif
if( io_domain_exist) then
!----------
!ug support
if (associated(io_domain)) then
! in this case, only write out from the root_pe of io_domain.
write_on_this_pe = mpp_domain_is_tile_root_pe(io_domain)
elseif (associated(io_domain_ug)) then
write_on_this_pe = mpp_domain_UG_is_tile_root_pe(io_domain_ug)
endif
!----------
endif
if( action_flag == MPP_RDONLY) write_on_this_pe = .false.
!get a unit number
if( .NOT. write_on_this_pe .AND. action_flag.NE.MPP_RDONLY .AND. .NOT. io_domain_exist)then
unit = NULLUNIT !PEs not participating in IO from this mpp_open() will return this value for unit
call mpp_clock_end(mpp_open_clock)
return
end if
if( form_flag.EQ.MPP_NETCDF )then
do unit = maxunits+1,2*maxunits
if( .NOT.mpp_file(unit)%valid )exit
end do
if( unit.GT.2*maxunits ) then
write(mesg,*) 'all the units between ',maxunits+1,' and ',2*maxunits,' are used'
call mpp_error( FATAL, 'MPP_OPEN: too many open netCDF files.'//trim(mesg) )
endif
else
do unit = unit_begin, unit_end
inquire( unit,OPENED=mpp_file(unit)%opened )
if( .NOT.mpp_file(unit)%opened )exit
end do
if( unit.GT.unit_end ) then
write(mesg,*) 'all the units between ',unit_begin,' and ',unit_end,' are used'
call mpp_error( FATAL, 'MPP_OPEN: no available units.'//trim(mesg) )
endif
end if
mpp_file(unit)%valid = .true.
mpp_file(unit)%write_on_this_pe = write_on_this_pe
mpp_file(unit)%read_on_this_pe = read_on_this_pe
mpp_file(unit)%io_domain_exist = io_domain_exist
if( PRESENT(domain) ) then
allocate(mpp_file(unit)%domain)
mpp_file(unit)%domain = domain
!----------
!ug support
elseif (present(domain_ug)) then
mpp_file(unit)%domain_ug => domain_ug
!----------
endif
!get a filename
nc_pos = index(file,'.nc.')
dist_file = nc_pos>0 ! this is a distributed file ending with filename.nc.0???
text = file
length = len_trim(file)
if(form_flag.EQ.MPP_NETCDF.AND. file(length-2:length) /= '.nc' .AND. .NOT.dist_file) &
text = trim(file)//'.nc'
!----------
!ug support
!HELP: Is there any way to retrieve the I/O layout for an unstructured grid?
! I could not find a way, so I added it into mpp_domains.
if (present(domain)) then
io_layout = mpp_get_io_domain_layout(domain)
elseif (present(domain_ug)) then
io_layout_ug = mpp_get_io_domain_UG_layout(domain_ug)
endif
!----------
if( io_domain_exist) then
!----------
!ug support
if (present(domain) .and. io_layout(1)*io_layout(2) .gt. 1) then
fileset_flag = MPP_MULTI
threading_flag = MPP_MULTI
tile_id = mpp_get_tile_id(io_domain)
text2 = trim(text)
if (tile_id(1) .ge. 10000) then
call mpp_error(FATAL, &
"mpp_open: tile_id should be less than" &
//" 10000 when io_domain exist")
endif
write(text,'(a,i4.4)') trim(text)//'.',tile_id(1)
if (action_flag .eq. MPP_RDONLY) then
inquire(file=trim(text),EXIST=exists)
if (.not. exists) then
write(text2,'(a,i6.6)') trim(text2)//'.',tile_id(1)
inquire(file=trim(text2),EXIST=exists)
if (.not.exists) then
call mpp_error(FATAL, &
"mpp_open: neither "// &
trim(text)//" nor "// &
trim(text2)//" exist and io" &
//" domain exist.")
endif
text = trim(text2)
endif
endif
elseif (present(domain_ug) .and. io_layout_ug .gt. 1) then
fileset_flag = MPP_MULTI
threading_flag = MPP_MULTI
tile_id_ug = mpp_get_UG_domain_tile_id(io_domain_ug)
text2 = trim(text)
if (tile_id_ug .ge. 10000) then
call mpp_error(FATAL, &
"mpp_open: tile_id should be less than" &
//" 10000 when io_domain exist")
endif
write(text,'(a,i4.4)') trim(text)//'.',tile_id_ug
if (action_flag .eq. MPP_RDONLY) then
inquire(file=trim(text),EXIST=exists)
if (.not. exists) then
write(text2,'(a,i6.6)') trim(text2)//'.',tile_id_ug
inquire(file=trim(text2),EXIST=exists)
if (.not.exists) then
call mpp_error(FATAL, &
"mpp_open: neither "// &
trim(text)//" nor "// &
trim(text2)//" exist and io" &
//" domain exist.")
endif
text = trim(text2)
endif
endif
else
fileset_flag = MPP_SINGLE
threading_flag = MPP_SINGLE
endif
!----------
else if( fileset_flag.EQ.MPP_MULTI ) then
if(mpp_npes() > 10000) then
write( text,'(a,i6.6)' )trim(text)//'.', pe-mpp_root_pe()
else
write( text,'(a,i4.4)' )trim(text)//'.', pe-mpp_root_pe()
endif
endif
mpp_file(unit)%name = text
if( verbose )print '(a,2i6,x,a,5i5)', 'MPP_OPEN: PE, unit, filename, action, format, access, threading, fileset=', &
pe, unit, trim(mpp_file(unit)%name), action_flag, form_flag, access_flag, threading_flag, fileset_flag
!action: read, write, overwrite, append: act and pos are ignored by netCDF
if( action_flag.EQ.MPP_RDONLY )then
act = 'READ'
pos = 'REWIND'
else if( action_flag.EQ.MPP_WRONLY .OR. action_flag.EQ.MPP_OVERWR )then
act = 'WRITE'
pos = 'REWIND'
else if( action_flag.EQ.MPP_APPEND )then
act = 'WRITE'
pos = 'APPEND'
else
call mpp_error( FATAL, 'MPP_OPEN: action must be one of MPP_RDONLY, MPP_WRONLY, MPP_OVERWR or MPP_APPEND.' )
end if
mpp_file(unit)%threading = threading_flag
mpp_file(unit)%fileset = fileset_flag
if( .NOT. write_on_this_pe .AND. action_flag.NE.MPP_RDONLY ) then
call mpp_clock_end(mpp_open_clock)
return
endif
!access: sequential or direct: ignored by netCDF
if( form_flag.NE.MPP_NETCDF )then
if( access_flag.EQ.MPP_SEQUENTIAL )then
acc = 'SEQUENTIAL'
else if( access_flag.EQ.MPP_DIRECT )then
acc = 'DIRECT'
if( form_flag.EQ.MPP_ASCII )call mpp_error( FATAL, 'MPP_OPEN: formatted direct access I/O is prohibited.' )
if( .NOT.PRESENT(recl) ) &
call mpp_error( FATAL, 'MPP_OPEN: recl (record length in bytes) must be specified with access=MPP_DIRECT.' )
mpp_file(unit)%record = 1
records_per_pe = 1 !each PE writes 1 record per mpp_write
else
call mpp_error( FATAL, 'MPP_OPEN: access must be one of MPP_SEQUENTIAL or MPP_DIRECT.' )
end if
end if
!threading: SINGLE or MULTI
if( threading_flag.EQ.MPP_MULTI )then
!fileset: MULTI or SINGLE (only used for multi-threaded I/O)
if( fileset_flag.EQ.MPP_SINGLE )then
if( form_flag.EQ.MPP_NETCDF .AND. act.EQ.'WRITE' ) &
call mpp_error( FATAL, 'MPP_OPEN: netCDF currently does not support single-file multi-threaded output.' )
# 490
else if( fileset_flag.NE.MPP_MULTI )then
call mpp_error( FATAL, 'MPP_OPEN: fileset must be one of MPP_MULTI or MPP_SINGLE.' )
end if
else if( threading_flag.NE.MPP_SINGLE )then
call mpp_error( FATAL, 'MPP_OPEN: threading must be one of MPP_SINGLE or MPP_MULTI.' )
end if
!apply I/O specs before opening the file
!note that -P refers to the scope of a fortran unit, which is always thread-private even if file is shared
# 504
# 507
# 513
if( PRESENT(iospec) )then
!iospec provides hints to the system on how to organize I/O
!on Cray systems this is done through 'assign', see assign(1) and assign(3F)
!on other systems this will be expanded as needed
!no error checks here on whether the supplied iospec is valid
# 530
end if
!open the file as specified above for various formats
if( form_flag.EQ.MPP_NETCDF )then
# 564
!added by fmi to read NC_BLKSZ and NC_BLKSZ_filename...
!get regular nc_blksz...
!build env var for check
!write (*,*) 'hello', trim(mpp_file(unit)%name)
nc_name = 'NC_BLKSZ_'//trim(mpp_file(unit)%name)
!write (*,*) 'nc_name: ', nc_name, ' bcblk: ', ncblk
!make the call.....
!F2003 replaces GETENV with GET_ENVIRONMENT_VARIABLE, so the equivalent call is kept here (commented) if we need to switch
!call get_environment_variable(trim(nc_name), ncblk)
call GETENV( trim(nc_name),ncblk )
!might not be there...use the general setting
if (ncblk .EQ. '') then
!call get_environment_variable( 'NC_BLKSZ', ncblk)
call GETENV( 'NC_BLKSZ', ncblk)
endif
!if no general setting then use default
if (ncblk .EQ. '') then
ncblk = '64k' !change for platform...perhaps we should set an ifdef for this....
endif
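!For example (illustrative): setting the environment variable NC_BLKSZ=1M
!selects a 1 MiB chunksize for every file, while NC_BLKSZ_ocean_temp.nc=64K
!would override it for a file named ocean_temp.nc; file_size() below converts
!these strings to a byte count.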
!set or convert the chunksize
call file_size(ncblk, mpp_file(unit)%name, fsize)
!write (*,*) 'this is fsize after: ', fsize
if(debug) write(*,*) 'Blocksize for ', trim(mpp_file(unit)%name),' is ', fsize
!ends addition from fmi - oct.22.2008
# 686
if( action_flag.EQ.MPP_WRONLY )then
if(debug) write(*,*) 'Blocksize for create of ', trim(mpp_file(unit)%name),' is ', fsize
error=NF__CREATE( trim(mpp_file(unit)%name), IOR(NF_NETCDF4,NF_CLASSIC_MODEL), inital, fsize, mpp_file(unit)%ncid )
call netcdf_err( error, mpp_file(unit) )
if( verbose )print '(a,i6,i16)', 'MPP_OPEN: new netCDF file: pe, ncid=', pe, mpp_file(unit)%ncid
else if( action_flag.EQ.MPP_OVERWR )then
if(debug) write(*,*) 'Blocksize for create of ', trim(mpp_file(unit)%name),' is ', fsize
error=NF__CREATE( trim(mpp_file(unit)%name), IOR(NF_NETCDF4,NF_CLASSIC_MODEL), inital, fsize, mpp_file(unit)%ncid )
call netcdf_err( error, mpp_file(unit) )
action_flag = MPP_WRONLY !after setting clobber, there is no further distinction btwn MPP_WRONLY and MPP_OVERWR
if( verbose )print '(a,i6,i16)', 'MPP_OPEN: overwrite netCDF file: pe, ncid=', pe, mpp_file(unit)%ncid
else if( action_flag.EQ.MPP_APPEND )then
inquire(file=trim(mpp_file(unit)%name),EXIST=exists)
if (.NOT.exists) call mpp_error(FATAL,'MPP_OPEN:'&
&//trim(mpp_file(unit)%name)//' does not exist.')
error=NF__OPEN(trim(mpp_file(unit)%name),NF_WRITE,fsize,mpp_file(unit)%ncid);call netcdf_err(error,mpp_file(unit))
!get the current time level of the file: writes to this file will be at next time level
error = NF_INQ_UNLIMDIM( mpp_file(unit)%ncid, unlim%did )
if( error.EQ.NF_NOERR )then
error = NF_INQ_DIM( mpp_file(unit)%ncid, unlim%did, unlim%name, mpp_file(unit)%time_level )
call netcdf_err( error, mpp_file(unit) )
error = NF_INQ_VARID( mpp_file(unit)%ncid, unlim%name, mpp_file(unit)%id )
call netcdf_err( error, mpp_file(unit), unlim )
end if
if( verbose )print '(a,i6,i16,i4)', 'MPP_OPEN: append to existing netCDF file: pe, ncid, time_axis_id=',&
pe, mpp_file(unit)%ncid, mpp_file(unit)%id
mpp_file(unit)%format=form_flag ! need this for mpp_read
call mpp_read_meta(unit, read_time=.FALSE.)
else if( action_flag.EQ.MPP_RDONLY )then
inquire(file=trim(mpp_file(unit)%name),EXIST=exists)
if (.NOT.exists) call mpp_error(FATAL,'MPP_OPEN:'&
&//trim(mpp_file(unit)%name)//' does not exist.')
error=NF__OPEN(trim(mpp_file(unit)%name),NF_NOWRITE,fsize,mpp_file(unit)%ncid);call netcdf_err(error,mpp_file(unit))
if( verbose )print '(a,i6,i16,i4)', 'MPP_OPEN: opening existing netCDF file: pe, ncid, time_axis_id=',&
pe, mpp_file(unit)%ncid, mpp_file(unit)%id
mpp_file(unit)%format=form_flag ! need this for mpp_read
call mpp_read_meta(unit, read_time=.TRUE.)
end if
mpp_file(unit)%opened = .TRUE.
else
!format: ascii, native, or IEEE 32 bit
if( form_flag.EQ.MPP_ASCII )then
for = 'FORMATTED'
else if( form_flag.EQ.MPP_IEEE32 )then
for = 'UNFORMATTED'
!assign -N is currently unsupported on SGI
# 740
else if( form_flag.EQ.MPP_NATIVE )then
for = 'UNFORMATTED'
else
call mpp_error( FATAL, 'MPP_OPEN: form must be one of MPP_ASCII, MPP_NATIVE, MPP_IEEE32 or MPP_NETCDF.' )
end if
inquire( file=trim(mpp_file(unit)%name), EXIST=exists )
if( exists .AND. action_flag.EQ.MPP_WRONLY ) &
call mpp_error( WARNING, 'MPP_OPEN: File '//trim(mpp_file(unit)%name)//' opened WRONLY already exists!' )
if( action_flag.EQ.MPP_OVERWR )action_flag = MPP_WRONLY
!perform the OPEN here
ios = 0
if( PRESENT(recl) )then
if( verbose )print '(2(x,a,i6),5(x,a),a,i8)', 'MPP_OPEN: PE=', pe, &
'unit=', unit, trim(mpp_file(unit)%name), 'attributes=', trim(acc), trim(for), trim(act), ' RECL=', recl
open( unit, file=trim(mpp_file(unit)%name), access=acc, form=for, action=act, recl=recl,iostat=ios )
else
if( verbose )print '(2(x,a,i6),6(x,a))', 'MPP_OPEN: PE=', pe, &
'unit=', unit, trim(mpp_file(unit)%name), 'attributes=', trim(acc), trim(for), trim(pos), trim(act)
open( unit, file=trim(mpp_file(unit)%name), access=acc, form=for, action=act, position=pos, iostat=ios)
end if
!check if OPEN worked
inquire( unit,OPENED=mpp_file(unit)%opened )
if (ios/=0) then
if (PRESENT(iostat)) then
iostat=ios
call mpp_error( WARNING, 'MPP_OPEN: error in OPEN for '//trim(mpp_file(unit)%name)//'.' )
call mpp_clock_end(mpp_open_clock)
return
else
call mpp_error( FATAL, 'MPP_OPEN: error in OPEN for '//trim(mpp_file(unit)%name)//'.' )
endif
endif
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'MPP_OPEN: error in OPEN() statement.' )
end if
mpp_file(unit)%action = action_flag
mpp_file(unit)%format = form_flag
mpp_file(unit)%access = access_flag
if( PRESENT(nohdrs) )mpp_file(unit)%nohdrs = nohdrs
if( action_flag.EQ.MPP_WRONLY )then
if( form_flag.NE.MPP_NETCDF .AND. access_flag.EQ.MPP_DIRECT )call mpp_write_meta( unit, 'record_length', ival=recl )
!actual file name
call mpp_write_meta( unit, 'filename', cval=mpp_file(unit)%name)
!MPP_IO package version
! call mpp_write_meta( unit, 'MPP_IO_VERSION', cval=trim(version) )
!filecount for multifileset.
if( threading_flag.EQ.MPP_MULTI .AND. fileset_flag.EQ.MPP_MULTI ) then
if(present(domain)) then
nfiles = io_layout(1)*io_layout(2)
npes = mpp_get_domain_npes(domain)
if(nfiles > npes) nfiles = npes
!----------
!ug support
elseif (present(domain_ug)) then
nfiles = io_layout_ug
npes = mpp_get_UG_domain_npes(domain_ug)
if (nfiles .gt. npes) then
nfiles = npes
endif
!----------
else
nfiles = mpp_npes()
endif
call mpp_write_meta( unit, 'NumFilesInSet', ival=nfiles)
end if
end if
!----------
!ug support
!Nullify the local structured I/O domain pointer.
if (associated(io_domain)) then
io_domain => null()
endif
if (associated(io_domain_ug)) then
io_domain_ug => null()
endif
!----------
call mpp_clock_end(mpp_open_clock)
return
end subroutine mpp_open
!
!
! Close an open file.
!
!
! Closes the open file on unit. Clears the
! type(filetype) object mpp_file(unit) making it
! available for reuse.
!
!
! call mpp_close( unit, action )
!
!
!
!
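!
! For example (illustrative):
!
! call mpp_close( unit )                     !close and keep the file
! call mpp_close( unit, action=MPP_DELETE )  !close and delete the file
!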
subroutine mpp_close( unit, action )
integer, intent(in) :: unit
integer, intent(in), optional :: action
character(len=8) :: status
logical :: collect
integer :: i, j
call mpp_clock_begin(mpp_close_clock)
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_CLOSE: must first call mpp_io_init.' )
if( unit.EQ.NULLUNIT .OR. unit .EQ. stderr() ) then
call mpp_clock_end(mpp_close_clock)
return !nothing was actually opened on this unit
endif
!action on close
status = 'KEEP'
!collect is supposed to launch the post-processing collector tool for multi-fileset
collect = .FALSE.
if( PRESENT(action) )then
if( action.EQ.MPP_DELETE )then
if( pe.EQ.mpp_root_pe() .OR. mpp_file(unit)%fileset.EQ.MPP_MULTI )status = 'DELETE'
else if( action.EQ.MPP_COLLECT )then
collect = .FALSE. !should be TRUE but this is not yet ready
call mpp_error( WARNING, 'MPP_CLOSE: the COLLECT operation is not yet implemented.' )
else
call mpp_error( FATAL, 'MPP_CLOSE: action must be one of MPP_DELETE or MPP_COLLECT.' )
end if
end if
if( mpp_file(unit)%fileset.NE.MPP_MULTI )collect = .FALSE.
if( mpp_file(unit)%opened) then
if( mpp_file(unit)%format.EQ.MPP_NETCDF )then
error = NF_CLOSE(mpp_file(unit)%ncid); call netcdf_err( error, mpp_file(unit) )
else
close(unit,status=status)
end if
endif
# 883
if ( associated(mpp_file(unit)%Axis) ) then
do i=1, mpp_file(unit)%ndim
if ( associated(mpp_file(unit)%Axis(i)%data) ) then
deallocate(mpp_file(unit)%Axis(i)%data)
nullify(mpp_file(unit)%Axis(i)%data)
end if
if ( associated(mpp_file(unit)%Axis(i)%Att) ) then
do j=1, mpp_file(unit)%Axis(i)%natt
if ( associated(mpp_file(unit)%Axis(i)%Att(j)%fatt) ) then
deallocate(mpp_file(unit)%Axis(i)%Att(j)%fatt)
nullify(mpp_file(unit)%Axis(i)%Att(j)%fatt)
end if
end do
deallocate(mpp_file(unit)%Axis(i)%Att)
nullify(mpp_file(unit)%Axis(i)%Att)
end if
end do
deallocate(mpp_file(unit)%Axis)
nullify(mpp_file(unit)%Axis)
end if
if ( associated(mpp_file(unit)%var) ) then
do i=1, mpp_file(unit)%nvar
if ( associated(mpp_file(unit)%var(i)%Axes) ) then
! Do not need to deallocate/nullify child pointers, handled above with mpp_file(unit)%Axis(:)%*
deallocate(mpp_file(unit)%var(i)%Axes)
nullify(mpp_file(unit)%var(i)%Axes)
end if
if ( associated(mpp_file(unit)%var(i)%size) ) then
deallocate(mpp_file(unit)%var(i)%size)
nullify(mpp_file(unit)%var(i)%size)
end if
if ( associated(mpp_file(unit)%var(i)%Att) ) then
do j=1, mpp_file(unit)%var(i)%natt
if ( associated(mpp_file(unit)%var(i)%Att(j)%fatt) ) then
deallocate(mpp_file(unit)%var(i)%Att(j)%fatt)
nullify(mpp_file(unit)%var(i)%Att(j)%fatt)
end if
end do
deallocate(mpp_file(unit)%var(i)%Att)
nullify(mpp_file(unit)%var(i)%Att)
end if
end do
deallocate(mpp_file(unit)%var)
nullify(mpp_file(unit)%var)
end if
if ( associated(mpp_file(unit)%att) ) then
do i=1, mpp_file(unit)%natt
if ( associated(mpp_file(unit)%att(i)%fatt) ) then
deallocate(mpp_file(unit)%att(i)%fatt)
nullify(mpp_file(unit)%att(i)%fatt)
end if
end do
deallocate(mpp_file(unit)%att)
nullify(mpp_file(unit)%att)
end if
if ( associated(mpp_file(unit)%time_values) ) then
deallocate(mpp_file(unit)%time_values)
nullify(mpp_file(unit)%time_values)
end if
mpp_file(unit)%name = ' '
mpp_file(unit)%action = -1
mpp_file(unit)%format = -1
mpp_file(unit)%access = -1
mpp_file(unit)%threading = -1
mpp_file(unit)%fileset = -1
mpp_file(unit)%record = -1
mpp_file(unit)%ncid = -1
mpp_file(unit)%opened = .FALSE.
mpp_file(unit)%initialized = .FALSE.
mpp_file(unit)%id = -1
mpp_file(unit)%ndim = -1
mpp_file(unit)%nvar = -1
mpp_file(unit)%time_level = 0
mpp_file(unit)%time = NULLTIME
mpp_file(unit)%valid = .false.
mpp_file(unit)%io_domain_exist = .false.
mpp_file(unit)%write_on_this_pe = .false.
!----------
!ug support
if (associated(mpp_file(unit)%domain)) then
deallocate(mpp_file(unit)%domain)
mpp_file(unit)%domain => null()
elseif (associated(mpp_file(unit)%domain_ug)) then
mpp_file(unit)%domain_ug => null()
endif
!----------
call mpp_clock_end(mpp_close_clock)
return
end subroutine mpp_close
subroutine file_size(fsize, fname, size)
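!Converts a blocksize specification string to a byte count:
! 'file'                    - use the size of the existing file fname (via stat)
! '<n>k/K', '<n>m/M', '<n>g/G' - <n> scaled by 1024, 1024**2 or 1024**3
! '<n>'                     - taken literally as bytes
!A result of 0 falls back to the default of 65536 bytes.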
character(len=12), intent(in) ::fsize
character(len=128) ::filesize
character(len=128), intent(in),optional :: fname
character(len=128) :: filename
integer*4 :: fstat(13)
integer :: length
character(len=16) ::number
integer,intent(OUT) :: size
integer*4 ::ierr, stat
integer :: tend
logical :: there
size = 0
filesize = fsize
length = len(trim(fsize))
tend = length - 1
if (filesize .EQ. 'file') then
filename = trim(fname)
INQUIRE( FILE=filename, EXIST=THERE )
if (THERE) then
ierr = stat(filename, fstat)
if (ierr .EQ. 0) then
size = fstat(8)
else
size = 0
end if
end if
elseif((filesize(length:length)>='a'.AND.fsize(length:length)<='z').OR.(filesize(length:length)>='A' &
.AND.fsize(length:length)<='Z'))then
number = filesize(1:tend)
READ(number, FMT='(I9)') size
if (filesize(length:length) >= 'a' .AND. fsize(length:length) <= 'z') then
filesize(length:length) = ACHAR ( ICHAR (filesize(length:length)) - 32)
end if
if ( filesize(length:length) .EQ. 'K') then
size = size*1024
elseif ( filesize(length:length) .EQ. 'M') then
size = (size*1024)*1024
elseif ( filesize(length:length) .EQ. 'G') then
size = (((size*1024)*1024)*1024)
else
size = size
end if
else
READ(filesize, FMT='(I9)') size
endif
if (size .eq. 0) then
size = 65536
endif
return
end subroutine file_size
# 1105 "../mpp/mpp_io.F90" 2
# 1 "../mpp/include/mpp_io_read.inc" 1
! -*-f90-*-
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see .
!***********************************************************************
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! !
! MPP_READ !
! !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# 1 "../mpp/include/mpp_read_2Ddecomp.h" 1
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see .
!***********************************************************************
subroutine read_record_core(unit, field, nwords, data, start, axsiz)
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
integer, intent(in) :: nwords
real, intent(inout) :: data(nwords)
integer, intent(in) :: start(:), axsiz(:)
integer(2) :: i2vals(nwords)
!rab used in conjunction with transfer intrinsic to determine size of a variable
integer(KIND=1) :: one_byte(8)
integer :: word_sz
!#ifdef __sgi
integer(4) :: ivals(nwords)
real(4) :: rvals(nwords)
!#else
! integer :: ivals(nwords)
! real :: rvals(nwords)
!#endif
real(8) :: r8vals(nwords)
pointer( ptr1, i2vals )
pointer( ptr2, ivals )
pointer( ptr3, rvals )
pointer( ptr4, r8vals )
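!The Cray-style pointers above alias the typed work arrays (i2vals, ivals,
!rvals, r8vals) onto the shared mpp_io_stack scratch buffer: each branch of
!the select below sets its pointer to LOC(mpp_io_stack(1)) before reading,
!so the work arrays need no per-call allocation.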
if (mpp_io_stack_size < nwords) call mpp_io_set_stack_size(nwords)
word_sz = size(transfer(data(1),one_byte))
select case (field%type)
case(NF_BYTE)
! use type conversion
call mpp_error( FATAL, 'MPP_READ: does not support NF_BYTE packing' )
case(NF_SHORT)
ptr1 = LOC(mpp_io_stack(1))
error = NF_GET_VARA_INT2 ( mpp_file(unit)%ncid, field%id, start, axsiz, i2vals )
call netcdf_err( error, mpp_file(unit), field=field )
if(field%scale == 1.0 .and. field%add == 0.0) then
data(:)=i2vals(:)
else
data(:)=i2vals(:)*field%scale + field%add
end if
case(NF_INT)
ptr2 = LOC(mpp_io_stack(1))
error = NF_GET_VARA_INT ( mpp_file(unit)%ncid, field%id, start, axsiz, ivals )
call netcdf_err( error, mpp_file(unit), field=field )
if(field%scale == 1.0 .and. field%add == 0.0) then
data(:)=ivals(:)
else
data(:)=ivals(:)*field%scale + field%add
end if
case(NF_FLOAT)
ptr3 = LOC(mpp_io_stack(1))
if (size(transfer(rvals(1),one_byte)) .eq. word_sz) then
error = NF_GET_VARA_REAL ( mpp_file(unit)%ncid, field%id, start, axsiz, data )
call netcdf_err( error, mpp_file(unit), field=field )
if(field%scale /= 1.0 .or. field%add /= 0.0) then
data(:)=data(:)*field%scale + field%add
end if
else
error = NF_GET_VARA_REAL ( mpp_file(unit)%ncid, field%id, start, axsiz, rvals )
call netcdf_err( error, mpp_file(unit), field=field )
if(field%scale == 1.0 .and. field%add == 0.0) then
data(:)=rvals(:)
else
data(:)=rvals(:)*field%scale + field%add
end if
end if
case(NF_DOUBLE)
ptr4 = LOC(mpp_io_stack(1))
if (size(transfer(r8vals(1),one_byte)) .eq. word_sz) then
error = NF_GET_VARA_DOUBLE( mpp_file(unit)%ncid, field%id, start, axsiz, data )
call netcdf_err( error, mpp_file(unit), field=field )
if(field%scale /= 1.0 .or. field%add /= 0.0) then
data(:)=data(:)*field%scale + field%add
end if
else
error = NF_GET_VARA_DOUBLE( mpp_file(unit)%ncid, field%id, start, axsiz, r8vals )
call netcdf_err( error, mpp_file(unit), field=field )
if(field%scale == 1.0 .and. field%add == 0.0) then
data(:)=r8vals(:)
else
data(:)=r8vals(:)*field%scale + field%add
end if
end if
case default
call mpp_error( FATAL, 'MPP_READ: invalid pack value' )
end select
# 111
end subroutine read_record_core
subroutine read_record( unit, field, nwords, data, time_level, domain, position, tile_count, start_in, axsiz_in )
!routine that is finally called by all mpp_read routines to perform the read
!a non-netCDF record contains:
! field ID
! a set of 4 coordinates (is:ie,js:je) giving the data subdomain
! a timelevel and a timestamp (=NULLTIME if field is static)
! 3D real data (stored as 1D)
!if you are using direct access I/O, the RECL argument to OPEN must be large enough for the above
!in a global direct access file, record position on PE is given by %record.
!Treatment of timestamp:
! We assume that static fields have been passed without a timestamp.
! Here that is converted into a timestamp of NULLTIME.
! For non-netCDF fields, field is treated no differently, but is written
! with a timestamp of NULLTIME. There is no check in the code to prevent
! the user from repeatedly writing a static field.
integer, intent(in) :: unit, nwords
type(fieldtype), intent(in) :: field
real, intent(inout) :: data(nwords)
integer, intent(in), optional :: time_level
type(domain2D), intent(in), optional :: domain
integer, intent(in), optional :: position, tile_count
integer, intent(in), optional :: start_in(:), axsiz_in(:)
integer, dimension(size(field%axes(:))) :: start, axsiz
integer :: tlevel !,subdomain(4)
integer :: i, error, is, ie, js, je, isg, ieg, jsg, jeg
type(domain2d), pointer :: io_domain=>NULL()
if (.not.PRESENT(time_level)) then
tlevel = 0
else
tlevel = time_level
endif
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'READ_RECORD: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'READ_RECORD: invalid unit number.' )
if( .NOT.mpp_file(unit)%read_on_this_pe )return
if( .NOT.mpp_file(unit)%initialized ) call mpp_error( FATAL, 'MPP_READ: must first call mpp_read_meta.' )
if( mpp_file(unit)%format .NE. MPP_NETCDF ) call mpp_error( FATAL, 'Currently dont support non-NetCDF mpp read' )
if (.not.PRESENT(time_level)) then
tlevel = 0
else
tlevel = time_level
endif
if( verbose )print '(a,2i6,2i5)', 'MPP_READ: PE, unit, %id, %time_level =',&
pe, unit, mpp_file(unit)%id, tlevel
if( PRESENT(start_in) .AND. PRESENT(axsiz_in) ) then
if(size(start(:)) > size(start_in(:)) )call mpp_error( FATAL, 'MPP_READ: size(start_in) < size(start)')
if(size(axsiz(:)) > size(axsiz_in(:)) )call mpp_error( FATAL, 'MPP_READ: size(axsiz_in) < size(axsiz)')
start(:) = start_in(1:size(start(:)))
axsiz(:) = axsiz_in(1:size(axsiz(:)))
else
!define netCDF data block to be read:
! time axis: START = time level
! AXSIZ = 1
! space axis: if there is no domain info
! START = 1
! AXSIZ = field%size(axis)
! if there IS domain info:
! start of domain is compute%start_index for multi-file I/O
! global%start_index for all other cases
! this number must be converted to 1 for NF_GET_VAR
! (netCDF fortran calls are with reference to 1),
! So, START = compute%start_index - (start of domain) + 1
! AXSIZ = usually compute%size
! However, if compute%start_index-compute%end_index+1.NE.compute%size,
! we assume that the call is passing a subdomain.
! To pass a subdomain, you must pass a domain2D object that satisfies the following:
! global%start_index must contain the start of domain as defined above;
! the data domain and compute domain must refer to the subdomain being passed.
! In this case, START = compute%start_index - (start of domain) + 1
! AXSIZ = compute%end_index - compute%start_index + 1
! NOTE: passing of subdomains will fail for multi-PE single-threaded I/O,
! since that attempts to gather all data on PE 0.
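! Illustrative example (hypothetical sizes): for a global x-axis 1:360 and a
! compute domain 181:240 on this PE, single-file I/O gives
! start(1) = 181-1+1 = 181 and axsiz(1) = 60, while with one file per PE the
! data in each file is indexed from 1, so start(1) stays 1.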
start = 1
do i = 1,size(field%axes(:))
axsiz(i) = field%size(i)
if( i .EQ. field%time_axis_index )start(i) = tlevel
end do
if( PRESENT(domain) )then
call mpp_get_compute_domain( domain, is, ie, js, je, tile_count=tile_count, position=position )
call mpp_get_global_domain ( domain, isg, ieg, jsg, jeg, tile_count=tile_count, position=position )
axsiz(1) = ie-is+1
axsiz(2) = je-js+1
if( mpp_file(unit)%fileset.EQ.MPP_SINGLE )then
if( npes.GT.1 )then
start(1) = is - isg + 1
start(2) = js - jsg + 1
else !--- z1l fix a problem related obc when npes = 1
if( ie-is+1.NE.ieg-isg+1 )then
start(1) = is - isg + 1
axsiz(1) = ie - is + 1
end if
if( je-js+1.NE.jeg-jsg+1 )then
start(2) = js - jsg + 1
axsiz(2) = je - js + 1
end if
end if
else if( mpp_file(unit)%io_domain_exist ) then
io_domain=>mpp_get_io_domain(domain)
call mpp_get_compute_domain( io_domain, is, ie, js, je, tile_count=tile_count, position=position )
call mpp_get_global_domain ( io_domain, isg, ieg, jsg, jeg, tile_count=tile_count, position=position )
start(1) = is - isg + 1
start(2) = js - jsg + 1
io_domain => NULL()
end if
end if
endif
if( verbose )print '(a,2i6,i6,12i4)', 'READ_RECORD: PE, unit, nwords, start, axsiz=', pe, unit, nwords, start, axsiz
call read_record_core(unit, field, nwords, data, start, axsiz)
return
end subroutine read_record
subroutine mpp_read_2ddecomp_r2d( unit, field, domain, data, tindex, tile_count )
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
type(domain2D), intent(in) :: domain
real, intent(inout) :: data(:,:)
integer, intent(in), optional :: tindex, tile_count
real :: data3D(size(data,1),size(data,2),1)
pointer( ptr, data3D )
ptr = LOC(data)
call mpp_read( unit, field, domain, data3D, tindex, tile_count)
return
end subroutine mpp_read_2ddecomp_r2d
subroutine mpp_read_2ddecomp_r3d( unit, field, domain, data, tindex, tile_count )
!mpp_read reads data which has the domain decomposition
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
type(domain2D), intent(in) :: domain
real, intent(inout) :: data(:,:,:)
integer, intent(in), optional :: tindex, tile_count
real, allocatable :: cdata(:,:,:)
real, allocatable :: gdata(:)
integer :: len, lenx,leny,lenz,i,j,k,n
!NEW: data may be on compute OR data domain
logical :: data_has_halos, halos_are_global, x_is_global, y_is_global
integer :: is, ie, js, je, isd, ied, jsd, jed, isg, ieg, jsg, jeg, ism, iem, jsm, jem
integer :: ioff, joff, position
call mpp_clock_begin(mpp_read_clock)
if (.NOT. present(tindex) .AND. mpp_file(unit)%time_level .ne. -1) &
call mpp_error(FATAL, 'MPP_READ: need to specify a time level for data with time axis')
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_READ: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'MPP_READ: invalid unit number.' )
call mpp_get_compute_domain( domain, is, ie, js, je, tile_count=tile_count )
call mpp_get_data_domain ( domain, isd, ied, jsd, jed, x_is_global=x_is_global, &
y_is_global=y_is_global, tile_count=tile_count )
call mpp_get_memory_domain ( domain, ism, iem, jsm, jem )
call mpp_get_global_domain ( domain, isg, ieg, jsg, jeg, tile_count=tile_count )
! when domain is symmetry, extra point is needed for some data on x/y direction
position = CENTER
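! On a symmetric domain, fields staggered at EAST, NORTH or CORNER carry one
! extra point in x and/or y, so the size tests below also accept ie-is+2 and
! je-js+2 (compute domain) or iem-ism+2 and jem-jsm+2 (data domain).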
if(mpp_domain_is_symmetry(domain)) then
if( size(data,1).EQ.ie-is+1 .AND. size(data,2).EQ.je-js+1 ) then ! CENTER
data_has_halos = .FALSE.
else if( size(data,1).EQ.ie-is+2 .AND. size(data,2).EQ.je-js+1 ) then ! EAST
data_has_halos = .FALSE.
position = EAST
ie = ie + 1
else if( size(data,1).EQ.ie-is+1 .AND. size(data,2).EQ.je-js+2 ) then ! NORTH
position = NORTH
data_has_halos = .FALSE.
je = je + 1
else if( size(data,1).EQ.ie-is+2 .AND. size(data,2).EQ.je-js+2 ) then ! CORNER
position = CORNER
data_has_halos = .FALSE.
ie = ie + 1; je = je + 1
else if( size(data,1).EQ.iem-ism+1 .AND. size(data,2).EQ.jem-jsm+1 )then ! CENTER
data_has_halos = .TRUE.
else if( size(data,1).EQ.iem-ism+2 .AND. size(data,2).EQ.jem-jsm+1 )then ! EAST
position = EAST
data_has_halos = .TRUE.
ie = ie + 1
else if( size(data,1).EQ.iem-ism+1 .AND. size(data,2).EQ.jem-jsm+2 )then ! NORTH
position = NORTH
data_has_halos = .TRUE.
je = je + 1
else if( size(data,1).EQ.iem-ism+2 .AND. size(data,2).EQ.jem-jsm+2 )then ! CORNER
position = CORNER
data_has_halos = .TRUE.
ie = ie + 1; je = je + 1
else
call mpp_error( FATAL, 'MPP_READ: when domain is symmetry, data must be either on ' &
//'compute domain or data domain with the consideration of shifting.' )
end if
else
if( size(data,1).EQ.ie-is+1 .AND. size(data,2).EQ.je-js+1 )then
data_has_halos = .FALSE.
else if( size(data,1).EQ.iem-ism+1 .AND. size(data,2).EQ.jem-jsm+1 )then
data_has_halos = .TRUE.
else
call mpp_error( FATAL, 'MPP_READ: data must be either on compute domain or data domain.' )
end if
endif
halos_are_global = x_is_global .AND. y_is_global
if( npes.GT.1 .AND. mpp_file(unit)%threading.EQ.MPP_SINGLE )then
if( halos_are_global )then !you can read directly into data array
if( pe.EQ.0 )call read_record( unit, field, size(data(:,:,:)), data, tindex )
else
lenx=size(data,1)
leny=size(data,2)
lenz=size(data,3)
len=lenx*leny*lenz
allocate(gdata(len))
! read field on pe 0 and pass to all pes
if( pe.EQ.0 ) call read_record( unit, field, len, gdata, tindex )
! broadcasting global array, this can be expensive!
call mpp_transmit( put_data=gdata(1), plen=len, to_pe=ALL_PES, &
get_data=gdata(1), glen=len, from_pe=0 )
ioff = is; joff = js
if( data_has_halos )then
ioff = isd; joff = jsd
end if
do k=1,size(data,3)
do j=js,je
do i=is,ie
n=(i-isg+1) + (j-jsg)*lenx + (k-1)*lenx*leny
data(i-ioff+1,j-joff+1,k)=gdata(n)
enddo
enddo
enddo
deallocate(gdata)
call mpp_sync_self() ! ensure MPI_ISEND is done.
end if
else if( data_has_halos )then
! for uniprocessor or multithreaded read
! read compute domain as contiguous data
allocate( cdata(is:ie,js:je,size(data,3)) )
call read_record(unit,field,size(cdata(:,:,:)),cdata,tindex,domain,position,tile_count)
data(is-isd+1:ie-isd+1,js-jsd+1:je-jsd+1,:) = cdata(:,:,:)
deallocate(cdata)
else
call read_record(unit,field,size(data(:,:,:)),data,tindex,domain,position,tile_count)
end if
call mpp_clock_end(mpp_read_clock)
return
end subroutine mpp_read_2ddecomp_r3d
subroutine mpp_read_2ddecomp_r4d( unit, field, domain, data, tindex, tile_count )
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
type(domain2D), intent(in) :: domain
real, intent(inout) :: data(:,:,:,:)
integer, intent(in), optional :: tindex, tile_count
real :: data3D(size(data,1),size(data,2),size(data,3)*size(data,4))
pointer( ptr, data3D )
ptr = LOC(data)
call mpp_read( unit, field, domain, data3D, tindex, tile_count)
return
end subroutine mpp_read_2ddecomp_r4d
# 41 "../mpp/include/mpp_io_read.inc" 2
# 1 "../mpp/include/mpp_read_2Ddecomp.h" 1
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see .
!***********************************************************************
subroutine read_record_core_r8(unit, field, nwords, data, start, axsiz)
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
integer, intent(in) :: nwords
real(8), intent(inout) :: data(nwords)
integer, intent(in) :: start(:), axsiz(:)
integer(2) :: i2vals(nwords)
!rab used in conjunction with transfer intrinsic to determine size of a variable
integer(KIND=1) :: one_byte(8)
integer :: word_sz
!#ifdef __sgi
integer(4) :: ivals(nwords)
real(4) :: rvals(nwords)
!#else
! integer :: ivals(nwords)
! real :: rvals(nwords)
!#endif
real(8) :: r8vals(nwords)
pointer( ptr1, i2vals )
pointer( ptr2, ivals )
pointer( ptr3, rvals )
pointer( ptr4, r8vals )
if (mpp_io_stack_size < nwords) call mpp_io_set_stack_size(nwords)
word_sz = size(transfer(data(1),one_byte))
select case (field%type)
case(NF_BYTE)
! use type conversion
call mpp_error( FATAL, 'MPP_READ: does not support NF_BYTE packing' )
case(NF_SHORT)
ptr1 = LOC(mpp_io_stack(1))
error = NF_GET_VARA_INT2 ( mpp_file(unit)%ncid, field%id, start, axsiz, i2vals )
call netcdf_err( error, mpp_file(unit), field=field )
if(field%scale == 1.0 .and. field%add == 0.0) then
data(:)=i2vals(:)
else
data(:)=i2vals(:)*field%scale + field%add
end if
case(NF_INT)
ptr2 = LOC(mpp_io_stack(1))
error = NF_GET_VARA_INT ( mpp_file(unit)%ncid, field%id, start, axsiz, ivals )
call netcdf_err( error, mpp_file(unit), field=field )
if(field%scale == 1.0 .and. field%add == 0.0) then
data(:)=ivals(:)
else
data(:)=ivals(:)*field%scale + field%add
end if
case(NF_FLOAT)
ptr3 = LOC(mpp_io_stack(1))
if (size(transfer(rvals(1),one_byte)) .eq. word_sz) then
error = NF_GET_VARA_REAL ( mpp_file(unit)%ncid, field%id, start, axsiz, data )
call netcdf_err( error, mpp_file(unit), field=field )
if(field%scale /= 1.0 .or. field%add /= 0.0) then
data(:)=data(:)*field%scale + field%add
end if
else
error = NF_GET_VARA_REAL ( mpp_file(unit)%ncid, field%id, start, axsiz, rvals )
call netcdf_err( error, mpp_file(unit), field=field )
if(field%scale == 1.0 .and. field%add == 0.0) then
data(:)=rvals(:)
else
data(:)=rvals(:)*field%scale + field%add
end if
end if
case(NF_DOUBLE)
ptr4 = LOC(mpp_io_stack(1))
if (size(transfer(r8vals(1),one_byte)) .eq. word_sz) then
error = NF_GET_VARA_DOUBLE( mpp_file(unit)%ncid, field%id, start, axsiz, data )
call netcdf_err( error, mpp_file(unit), field=field )
if(field%scale /= 1.0 .or. field%add /= 0.0) then
data(:)=data(:)*field%scale + field%add
end if
else
error = NF_GET_VARA_DOUBLE( mpp_file(unit)%ncid, field%id, start, axsiz, r8vals )
call netcdf_err( error, mpp_file(unit), field=field )
if(field%scale == 1.0 .and. field%add == 0.0) then
data(:)=r8vals(:)
else
data(:)=r8vals(:)*field%scale + field%add
end if
end if
case default
call mpp_error( FATAL, 'MPP_READ: invalid pack value' )
end select
# 111
end subroutine read_record_core_r8
subroutine read_record_r8( unit, field, nwords, data, time_level, domain, position, tile_count, start_in, axsiz_in )
!routine that is finally called by all mpp_read routines to perform the read
!a non-netCDF record contains:
! field ID
! a set of 4 coordinates (is:ie,js:je) giving the data subdomain
! a timelevel and a timestamp (=NULLTIME if field is static)
! 3D real data (stored as 1D)
!if you are using direct access I/O, the RECL argument to OPEN must be large enough for the above
!in a global direct access file, record position on PE is given by %record.
!Treatment of timestamp:
! We assume that static fields have been passed without a timestamp.
! Here that is converted into a timestamp of NULLTIME.
! For non-netCDF fields, field is treated no differently, but is written
! with a timestamp of NULLTIME. There is no check in the code to prevent
! the user from repeatedly writing a static field.
integer, intent(in) :: unit, nwords
type(fieldtype), intent(in) :: field
real(8), intent(inout) :: data(nwords)
integer, intent(in), optional :: time_level
type(domain2D), intent(in), optional :: domain
integer, intent(in), optional :: position, tile_count
integer, intent(in), optional :: start_in(:), axsiz_in(:)
integer, dimension(size(field%axes(:))) :: start, axsiz
integer :: tlevel !,subdomain(4)
integer :: i, error, is, ie, js, je, isg, ieg, jsg, jeg
type(domain2d), pointer :: io_domain=>NULL()
if (.not.PRESENT(time_level)) then
tlevel = 0
else
tlevel = time_level
endif
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'READ_RECORD: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'READ_RECORD: invalid unit number.' )
if( .NOT.mpp_file(unit)%read_on_this_pe )return
if( .NOT.mpp_file(unit)%initialized ) call mpp_error( FATAL, 'MPP_READ: must first call mpp_read_meta.' )
if( mpp_file(unit)%format .NE. MPP_NETCDF ) call mpp_error( FATAL, 'Currently dont support non-NetCDF mpp read' )
if (.not.PRESENT(time_level)) then
tlevel = 0
else
tlevel = time_level
endif
if( verbose )print '(a,2i6,2i5)', 'MPP_READ: PE, unit, %id, %time_level =',&
pe, unit, mpp_file(unit)%id, tlevel
if( PRESENT(start_in) .AND. PRESENT(axsiz_in) ) then
if(size(start(:)) > size(start_in(:)) )call mpp_error( FATAL, 'MPP_READ: size(start_in) < size(start)')
if(size(axsiz(:)) > size(axsiz_in(:)) )call mpp_error( FATAL, 'MPP_READ: size(axsiz_in) < size(axsiz)')
start(:) = start_in(1:size(start(:)))
axsiz(:) = axsiz_in(1:size(axsiz(:)))
else
!define netCDF data block to be read:
! time axis: START = time level
! AXSIZ = 1
! space axis: if there is no domain info
! START = 1
! AXSIZ = field%size(axis)
! if there IS domain info:
! start of domain is compute%start_index for multi-file I/O
! global%start_index for all other cases
! this number must be converted to 1 for NF_GET_VAR
! (netCDF fortran calls are with reference to 1),
! So, START = compute%start_index - (start of domain) + 1
! AXSIZ = usually compute%size
! However, if compute%start_index-compute%end_index+1.NE.compute%size,
! we assume that the call is passing a subdomain.
! To pass a subdomain, you must pass a domain2D object that satisfies the following:
! global%start_index must contain the start of domain as defined above;
! the data domain and compute domain must refer to the subdomain being passed.
! In this case, START = compute%start_index - (start of domain) + 1
! AXSIZ = compute%end_index - compute%start_index + 1
! NOTE: passing of subdomains will fail for multi-PE single-threaded I/O,
! since that attempts to gather all data on PE 0.
start = 1
do i = 1,size(field%axes(:))
axsiz(i) = field%size(i)
if( i .EQ. field%time_axis_index )start(i) = tlevel
end do
if( PRESENT(domain) )then
call mpp_get_compute_domain( domain, is, ie, js, je, tile_count=tile_count, position=position )
call mpp_get_global_domain ( domain, isg, ieg, jsg, jeg, tile_count=tile_count, position=position )
axsiz(1) = ie-is+1
axsiz(2) = je-js+1
if( mpp_file(unit)%fileset.EQ.MPP_SINGLE )then
if( npes.GT.1 )then
start(1) = is - isg + 1
start(2) = js - jsg + 1
else !--- z1l fix a problem related obc when npes = 1
if( ie-is+1.NE.ieg-isg+1 )then
start(1) = is - isg + 1
axsiz(1) = ie - is + 1
end if
if( je-js+1.NE.jeg-jsg+1 )then
start(2) = js - jsg + 1
axsiz(2) = je - js + 1
end if
end if
else if( mpp_file(unit)%io_domain_exist ) then
io_domain=>mpp_get_io_domain(domain)
call mpp_get_compute_domain( io_domain, is, ie, js, je, tile_count=tile_count, position=position )
call mpp_get_global_domain ( io_domain, isg, ieg, jsg, jeg, tile_count=tile_count, position=position )
start(1) = is - isg + 1
start(2) = js - jsg + 1
io_domain => NULL()
end if
end if
endif
if( verbose )print '(a,2i6,i6,12i4)', 'READ_RECORD: PE, unit, nwords, start, axsiz=', pe, unit, nwords, start, axsiz
call read_record_core_r8(unit, field, nwords, data, start, axsiz)
return
end subroutine read_record_r8
subroutine mpp_read_2ddecomp_r2d_r8( unit, field, domain, data, tindex, tile_count )
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
type(domain2D), intent(in) :: domain
real(8), intent(inout) :: data(:,:)
integer, intent(in), optional :: tindex, tile_count
real(8) :: data3D(size(data,1),size(data,2),1)
pointer( ptr, data3D )
ptr = LOC(data)
call mpp_read( unit, field, domain, data3D, tindex, tile_count)
return
end subroutine mpp_read_2ddecomp_r2d_r8
subroutine mpp_read_2ddecomp_r3d_r8( unit, field, domain, data, tindex, tile_count )
!mpp_read reads data which has the domain decomposition
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
type(domain2D), intent(in) :: domain
real(8), intent(inout) :: data(:,:,:)
integer, intent(in), optional :: tindex, tile_count
real(8), allocatable :: cdata(:,:,:)
real(8), allocatable :: gdata(:)
integer :: len, lenx,leny,lenz,i,j,k,n
!NEW: data may be on compute OR data domain
logical :: data_has_halos, halos_are_global, x_is_global, y_is_global
integer :: is, ie, js, je, isd, ied, jsd, jed, isg, ieg, jsg, jeg, ism, iem, jsm, jem
integer :: ioff, joff, position
call mpp_clock_begin(mpp_read_clock)
if (.NOT. present(tindex) .AND. mpp_file(unit)%time_level .ne. -1) &
call mpp_error(FATAL, 'MPP_READ: need to specify a time level for data with time axis')
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_READ: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'MPP_READ: invalid unit number.' )
call mpp_get_compute_domain( domain, is, ie, js, je, tile_count=tile_count )
call mpp_get_data_domain ( domain, isd, ied, jsd, jed, x_is_global=x_is_global, &
y_is_global=y_is_global, tile_count=tile_count )
call mpp_get_memory_domain ( domain, ism, iem, jsm, jem )
call mpp_get_global_domain ( domain, isg, ieg, jsg, jeg, tile_count=tile_count )
! when domain is symmetry, extra point is needed for some data on x/y direction
position = CENTER
if(mpp_domain_is_symmetry(domain)) then
if( size(data,1).EQ.ie-is+1 .AND. size(data,2).EQ.je-js+1 ) then ! CENTER
data_has_halos = .FALSE.
else if( size(data,1).EQ.ie-is+2 .AND. size(data,2).EQ.je-js+1 ) then ! EAST
data_has_halos = .FALSE.
position = EAST
ie = ie + 1
else if( size(data,1).EQ.ie-is+1 .AND. size(data,2).EQ.je-js+2 ) then ! NORTH
position = NORTH
data_has_halos = .FALSE.
je = je + 1
else if( size(data,1).EQ.ie-is+2 .AND. size(data,2).EQ.je-js+2 ) then ! CORNER
position = CORNER
data_has_halos = .FALSE.
ie = ie + 1; je = je + 1
else if( size(data,1).EQ.iem-ism+1 .AND. size(data,2).EQ.jem-jsm+1 )then ! CENTER
data_has_halos = .TRUE.
else if( size(data,1).EQ.iem-ism+2 .AND. size(data,2).EQ.jem-jsm+1 )then ! EAST
position = EAST
data_has_halos = .TRUE.
ie = ie + 1
else if( size(data,1).EQ.iem-ism+1 .AND. size(data,2).EQ.jem-jsm+2 )then ! NORTH
position = NORTH
data_has_halos = .TRUE.
je = je + 1
else if( size(data,1).EQ.iem-ism+2 .AND. size(data,2).EQ.jem-jsm+2 )then ! CORNER
position = CORNER
data_has_halos = .TRUE.
ie = ie + 1; je = je + 1
else
call mpp_error( FATAL, 'MPP_READ: when domain is symmetry, data must be either on ' &
//'compute domain or data domain with the consideration of shifting.' )
end if
else
if( size(data,1).EQ.ie-is+1 .AND. size(data,2).EQ.je-js+1 )then
data_has_halos = .FALSE.
else if( size(data,1).EQ.iem-ism+1 .AND. size(data,2).EQ.jem-jsm+1 )then
data_has_halos = .TRUE.
else
call mpp_error( FATAL, 'MPP_READ: data must be either on compute domain or data domain.' )
end if
endif
halos_are_global = x_is_global .AND. y_is_global
if( npes.GT.1 .AND. mpp_file(unit)%threading.EQ.MPP_SINGLE )then
if( halos_are_global )then !you can read directly into data array
if( pe.EQ.0 )call read_record_r8( unit, field, size(data(:,:,:)), data, tindex )
else
lenx=size(data,1)
leny=size(data,2)
lenz=size(data,3)
len=lenx*leny*lenz
allocate(gdata(len))
! read field on pe 0 and pass to all pes
if( pe.EQ.0 ) call read_record_r8( unit, field, len, gdata, tindex )
! broadcasting global array, this can be expensive!
call mpp_transmit( put_data=gdata(1), plen=len, to_pe=ALL_PES, &
get_data=gdata(1), glen=len, from_pe=0 )
ioff = is; joff = js
if( data_has_halos )then
ioff = isd; joff = jsd
end if
do k=1,size(data,3)
do j=js,je
do i=is,ie
n=(i-isg+1) + (j-jsg)*lenx + (k-1)*lenx*leny
data(i-ioff+1,j-joff+1,k)=gdata(n)
enddo
enddo
enddo
deallocate(gdata)
call mpp_sync_self() ! ensure MPI_ISEND is done.
end if
else if( data_has_halos )then
! for uniprocessor or multithreaded read
! read compute domain as contiguous data
allocate( cdata(is:ie,js:je,size(data,3)) )
call read_record_r8(unit,field,size(cdata(:,:,:)),cdata,tindex,domain,position,tile_count)
data(is-isd+1:ie-isd+1,js-jsd+1:je-jsd+1,:) = cdata(:,:,:)
deallocate(cdata)
else
call read_record_r8(unit,field,size(data(:,:,:)),data,tindex,domain,position,tile_count)
end if
call mpp_clock_end(mpp_read_clock)
return
end subroutine mpp_read_2ddecomp_r3d_r8
subroutine mpp_read_2ddecomp_r4d_r8( unit, field, domain, data, tindex, tile_count )
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
type(domain2D), intent(in) :: domain
real(8), intent(inout) :: data(:,:,:,:)
integer, intent(in), optional :: tindex, tile_count
real(8) :: data3D(size(data,1),size(data,2),size(data,3)*size(data,4))
pointer( ptr, data3D )
ptr = LOC(data)
call mpp_read( unit, field, domain, data3D, tindex, tile_count)
return
end subroutine mpp_read_2ddecomp_r4d_r8
# 56 "../mpp/include/mpp_io_read.inc" 2
# 1 "../mpp/include/mpp_read_compressed.h" 1
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see .
!***********************************************************************
subroutine mpp_read_compressed_r1d(unit, field, domain, data, tindex)
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
type(domain2D), intent(in) :: domain
real, intent(inout) :: data(:)
integer, intent(in), optional :: tindex
real :: data2D(size(data,1),1)
pointer( ptr, data2D )
ptr = LOC(data)
call mpp_read(unit, field, domain, data2D, tindex)
return
end subroutine mpp_read_compressed_r1d
subroutine mpp_read_compressed_r2d(unit, field, domain, data, tindex, start, nread, threading)
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
type(domain2D), intent(in) :: domain
real, intent(inout) :: data(:,:)
integer, intent(in), optional :: tindex
integer, intent(in), optional :: start(:), nread(:)
integer, intent(in), optional :: threading
integer, allocatable :: pelist(:)
integer :: npes, p, threading_flag
type(domain2d), pointer :: io_domain=>NULL()
logical :: compute_chksum,print_compressed_chksum
integer(8) ::chk
call mpp_clock_begin(mpp_read_clock)
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_READ_COMPRESSED_2D_: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%valid )call mpp_error( FATAL, 'MPP_READ_COMPRESSED_2D_: invalid unit number.' )
print_compressed_chksum = .FALSE.
if(size(data) > 0) then
data = 0 !! zero out data so other tiles do not contribute junk to chksum
threading_flag = MPP_SINGLE
if( PRESENT(threading) )threading_flag = threading
if( threading_flag == MPP_MULTI ) then
call read_record(unit,field,size(data(:,:)),data,tindex,start_in=start, axsiz_in=nread)
else if( threading_flag == MPP_SINGLE ) then
io_domain=>mpp_get_io_domain(domain)
if(.not. ASSOCIATED(io_domain)) call mpp_error( FATAL, 'MPP_READ_COMPRESSED_2D_: io_domain must be defined.' )
npes = mpp_get_domain_npes(io_domain)
allocate(pelist(npes))
call mpp_get_pelist(io_domain,pelist)
if(mpp_pe() == pelist(1)) call read_record(unit,field,size(data(:,:)),data,tindex,start_in=start, axsiz_in=nread)
!--- z1l: replace mpp_broadcast with mpp_send/mpp_recv to avoid a hang in MPI_COMM_CREATE,
!--- because size(pelist) may differ between ranks.
!--- prepost receive
if( mpp_pe() == pelist(1) ) then
do p = 2, npes
call mpp_send(data(1,1), plen=size(data(:,:)), to_pe=pelist(p), tag=COMM_TAG_1)
enddo
call mpp_sync_self()
else
call mpp_recv(data(1,1), glen=size(data(:,:)), from_pe=pelist(1), block=.false., tag=COMM_TAG_1)
call mpp_sync_self(check=EVENT_RECV)
endif
deallocate(pelist)
else
call mpp_error( FATAL, 'MPP_READ_COMPRESSED_2D_: threading should be MPP_SINGLE or MPP_MULTI')
endif
endif
compute_chksum = .FALSE.
if (ANY(field%checksum /= default_field%checksum) ) compute_chksum = .TRUE.
if (compute_chksum) then
if (field%type==NF_INT) then
if (field%fill == MPP_FILL_DOUBLE .or. field%fill == real(MPP_FILL_INT) ) then
chk = mpp_chksum( ceiling(data), mask_val=MPP_FILL_INT )
else
call mpp_error(NOTE,"During mpp_io(mpp_read_compressed_2d) int field "//trim(field%name)// &
" found fill. Icebergs, or code using defaults can safely ignore. "// &
" If manually overriding compressed restart fills, confirm this is what you want.")
chk = mpp_chksum( ceiling(data), mask_val=field%fill)
end if
else !!real data
chk = mpp_chksum(data,mask_val=field%fill)
end if
!!compare
if ( print_compressed_chksum) then
if ( mpp_pe() == mpp_root_pe() ) then
print '(A,Z16)', "mpp_read_compressed_2d chksum: "//trim(field%name)//" = ", chk
!! To discuss: make this FATAL after testing/review, to match other routines.
!! Need to do some nword-counting and digging with pjp.
!! This should be if ( chk /= field%checksum ), as tested at @ulm_201505.
if ( MOD(chk, field%checksum(1)) /= 0 ) then
print '(A,Z16)', "File stored checksum: "//trim(field%name)//" = ", field%checksum(1)
call mpp_error(NOTE,"mpp_read_compressed_2d chksum: "//trim(field%name)//" failed!")
end if
endif
end if
end if
call mpp_clock_end(mpp_read_clock)
return
end subroutine mpp_read_compressed_r2d
subroutine mpp_read_compressed_r3d(unit, field, domain, data, tindex, start, nread, threading)
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
type(domain2D), intent(in) :: domain
real, intent(inout) :: data(:,:,:)
integer, intent(in), optional :: tindex
integer, intent(in), optional :: start(:), nread(:)
integer, intent(in), optional :: threading
integer, allocatable :: pelist(:)
integer :: npes, p, threading_flag
type(domain2d), pointer :: io_domain=>NULL()
logical :: compute_chksum,print_compressed_chksum
integer(8) ::chk
call mpp_clock_begin(mpp_read_clock)
data = 0 !! zero out data so other tiles do not contribute junk to chksum
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_READ_COMPRESSED_3D_: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%valid )call mpp_error( FATAL, 'MPP_READ_COMPRESSED_3D_: invalid unit number.' )
print_compressed_chksum = .FALSE.
threading_flag = MPP_SINGLE
if( PRESENT(threading) )threading_flag = threading
if( threading_flag == MPP_MULTI ) then
call read_record(unit,field,size(data(:,:,:)),data,tindex,start_in=start, axsiz_in=nread)
else if( threading_flag == MPP_SINGLE ) then
io_domain=>mpp_get_io_domain(domain)
if(.not. ASSOCIATED(io_domain)) call mpp_error( FATAL, 'MPP_READ_COMPRESSED_3D_: io_domain must be defined.' )
npes = mpp_get_domain_npes(io_domain)
allocate(pelist(npes))
call mpp_get_pelist(io_domain,pelist)
if(mpp_pe() == pelist(1)) call read_record(unit,field,size(data(:,:,:)),data,tindex,start_in=start, axsiz_in=nread)
!--- z1l: replace mpp_broadcast with mpp_send/mpp_recv to avoid a hang in MPI_COMM_CREATE,
!--- because size(pelist) may differ between ranks.
!--- prepost receive
if( mpp_pe() == pelist(1) ) then
do p = 2, npes
call mpp_send(data(1,1,1), plen=size(data(:,:,:)), to_pe=pelist(p), tag=COMM_TAG_1)
enddo
call mpp_sync_self()
else
call mpp_recv(data(1,1,1), glen=size(data(:,:,:)), from_pe=pelist(1), block=.false., tag=COMM_TAG_1)
call mpp_sync_self(check=EVENT_RECV)
endif
deallocate(pelist)
else
call mpp_error( FATAL, 'MPP_READ_COMPRESSED_3D_: threading should be MPP_SINGLE or MPP_MULTI')
endif
compute_chksum = .FALSE.
if (ANY(field%checksum /= default_field%checksum) ) compute_chksum = .TRUE.
if (compute_chksum) then
if (field%type==NF_INT) then
if (field%fill == MPP_FILL_DOUBLE .or. field%fill == real(MPP_FILL_INT) ) then
chk = mpp_chksum( ceiling(data), mask_val=MPP_FILL_INT )
else
call mpp_error(NOTE,"During mpp_io(mpp_read_compressed_3d) int field "//trim(field%name)// &
" found fill. Icebergs, or code using defaults can safely ignore. "// &
" If manually overriding compressed restart fills, confirm this is what you want.")
chk = mpp_chksum( ceiling(data), mask_val=field%fill)
end if
else !!real
chk = mpp_chksum(data,mask_val=field%fill)
end if
!!compare
if ( print_compressed_chksum) then
if ( mpp_pe() == mpp_root_pe() ) then
print '(A,Z16)', "mpp_read_compressed_3d chksum: "//trim(field%name)//" = ", chk
!! To discuss: make this FATAL after testing/review, to match other routines.
!! Need to do some nword-counting and digging with pjp.
!! This should be if ( chk /= field%checksum ), as tested at @ulm_201505.
if ( MOD(chk, field%checksum(1)) /= 0 ) then
print '(A,Z16)', "File stored checksum: "//trim(field%name)//" = ", field%checksum(1)
call mpp_error(NOTE,"mpp_read_compressed_3d chksum: "//trim(field%name)//" failed!")
end if
endif
end if
end if
call mpp_clock_end(mpp_read_clock)
return
end subroutine mpp_read_compressed_r3d
# 67 "../mpp/include/mpp_io_read.inc" 2
# 1 "../mpp/include/mpp_read_distributed_ascii.inc" 1
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see .
!***********************************************************************
# 1 "../mpp/include/mpp_read_distributed_ascii.h" 1
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see .
!***********************************************************************
subroutine mpp_read_distributed_ascii_r1D (unit,fmt,ssize,data,iostat)
integer, intent(in) :: unit
character(*), intent(in) :: fmt
integer, intent(in) :: ssize
real, dimension(:), intent(inout) :: data
integer, intent(out) :: iostat
integer, allocatable :: pelist(:)
logical :: is_ioroot=.false.
if(.not.module_is_initialized) call mpp_error(FATAL,'MPP_READ_DISTRIBUTED_ASCII_1D_: module not initialized')
iostat = 0
call mpp_dist_io_pelist(ssize,pelist) ! ALLOCATE and create pelist if size of group > 1
if(.not. ALLOCATED(pelist)) &
call mpp_error(FATAL,'MPP_READ_DISTRIBUTED_ASCII_1D_:: pelist allocation failed')
is_ioroot = mpp_is_dist_ioroot(ssize)
if(is_ioroot) then
if(trim(fmt)=='*')then
read(unit,*,iostat=iostat) data
else
read(unit,fmt=trim(fmt),iostat=iostat) data
endif
if(iostat /= 0) return ! Calling routine must handle error
endif
call mpp_broadcast(data,size(data),pelist(1),pelist)
deallocate(pelist) ! Don't forget to deallocate pelist
end subroutine mpp_read_distributed_ascii_r1D
# 25 "../mpp/include/mpp_read_distributed_ascii.inc" 2
# 1 "../mpp/include/mpp_read_distributed_ascii.h" 1
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see .
!***********************************************************************
subroutine mpp_read_distributed_ascii_i1D (unit,fmt,ssize,data,iostat)
integer, intent(in) :: unit
character(*), intent(in) :: fmt
integer, intent(in) :: ssize
integer, dimension(:), intent(inout) :: data
integer, intent(out) :: iostat
integer, allocatable :: pelist(:)
logical :: is_ioroot=.false.
if(.not.module_is_initialized) call mpp_error(FATAL,'MPP_READ_DISTRIBUTED_ASCII_1D_: module not initialized')
iostat = 0
call mpp_dist_io_pelist(ssize,pelist) ! ALLOCATE and create pelist if size of group > 1
if(.not. ALLOCATED(pelist)) &
call mpp_error(FATAL,'MPP_READ_DISTRIBUTED_ASCII_1D_:: pelist allocation failed')
is_ioroot = mpp_is_dist_ioroot(ssize)
if(is_ioroot) then
if(trim(fmt)=='*')then
read(unit,*,iostat=iostat) data
else
read(unit,fmt=trim(fmt),iostat=iostat) data
endif
if(iostat /= 0) return ! Calling routine must handle error
endif
call mpp_broadcast(data,size(data),pelist(1),pelist)
deallocate(pelist) ! Don't forget to deallocate pelist
end subroutine mpp_read_distributed_ascii_i1D
# 31 "../mpp/include/mpp_read_distributed_ascii.inc" 2
subroutine mpp_read_distributed_ascii_a1D(unit,fmt,ssize,data,iostat)
integer, intent(in) :: unit
character(*), intent(in) :: fmt
integer, intent(in) :: ssize
character(len=*), dimension(:), intent(inout) :: data
integer, intent(out) :: iostat
integer, allocatable :: pelist(:)
logical :: is_ioroot=.false.
if(.not.module_is_initialized) call mpp_error(FATAL,'mpp_read_distributed_ascii_a1D: module not initialized')
iostat = 0
call mpp_dist_io_pelist(ssize,pelist)
if(.not. ALLOCATED(pelist)) &
call mpp_error(FATAL,'mpp_read_distributed_ascii_a1D: pelist allocation failed')
is_ioroot = mpp_is_dist_ioroot(ssize)
if(is_ioroot) then
if(trim(fmt)=='*')then
read(unit,*,iostat=iostat) data
else
read(unit,fmt=trim(fmt),iostat=iostat) data
endif
if(iostat /= 0) return ! Calling routine must handle error
endif
call mpp_broadcast(data,len(data(1)),pelist(1),pelist)
deallocate(pelist) ! Don't forget to deallocate pelist
end subroutine mpp_read_distributed_ascii_a1D
# 69 "../mpp/include/mpp_io_read.inc" 2
!
!
!
!
!
!
subroutine mpp_read_r4D( unit, field, data, tindex)
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
real, intent(inout) :: data(:,:,:,:)
integer, intent(in), optional :: tindex
call read_record( unit, field, size(data(:,:,:,:)), data, tindex )
end subroutine mpp_read_r4D
!
!
!
!
!
!
subroutine mpp_read_r3D( unit, field, data, tindex)
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
real, intent(inout) :: data(:,:,:)
integer, intent(in), optional :: tindex
call read_record( unit, field, size(data(:,:,:)), data, tindex )
end subroutine mpp_read_r3D
subroutine mpp_read_r2D( unit, field, data, tindex )
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
real, intent(inout) :: data(:,:)
integer, intent(in), optional :: tindex
call read_record( unit, field, size(data(:,:)), data, tindex )
end subroutine mpp_read_r2D
subroutine mpp_read_r1D( unit, field, data, tindex )
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
real, intent(inout) :: data(:)
integer, intent(in), optional :: tindex
call read_record( unit, field, size(data(:)), data, tindex )
end subroutine mpp_read_r1D
subroutine mpp_read_r0D( unit, field, data, tindex )
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
real, intent(inout) :: data
integer, intent(in), optional :: tindex
real, dimension(1) :: data_tmp
data_tmp(1)=data
call read_record( unit, field, 1, data_tmp, tindex )
data=data_tmp(1)
end subroutine mpp_read_r0D
subroutine mpp_read_region_r2D(unit, field, data, start, nread)
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
real, intent(inout) :: data(:,:)
integer, intent(in) :: start(:), nread(:)
if(size(start(:)) .NE. 4 .OR. size(nread(:)) .NE. 4) call mpp_error(FATAL, &
"mpp_io_read.inc(mpp_read_region_r2D): size of start and nread must be 4")
if(size(data,1) .NE. nread(1) .OR. size(data,2) .NE. nread(2)) then
call mpp_error( FATAL, 'mpp_io_read.inc(mpp_read_block_r2D): size mismatch between data and nread')
endif
if(nread(3) .NE. 1 .OR. nread(4) .NE. 1) call mpp_error(FATAL, &
"mpp_io_read.inc(mpp_read_region_r2D): nread(3) and nread(4) must be 1")
call read_record_core(unit, field, nread(1)*nread(2), data, start, nread)
return
end subroutine mpp_read_region_r2D
subroutine mpp_read_region_r3D(unit, field, data, start, nread)
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
real, intent(inout) :: data(:,:,:)
integer, intent(in) :: start(:), nread(:)
if(size(start(:)) .NE. 4 .OR. size(nread(:)) .NE. 4) call mpp_error(FATAL, &
"mpp_io_read.inc(mpp_read_region_r3D): size of start and nread must be 4")
if(size(data,1) .NE. nread(1) .OR. size(data,2) .NE. nread(2) .OR. size(data,3) .NE. nread(3) ) then
call mpp_error( FATAL, 'mpp_io_read.inc(mpp_read_block_r3D): size mismatch between data and nread')
endif
if(nread(4) .NE. 1) call mpp_error(FATAL, &
"mpp_io_read.inc(mpp_read_region_r3D): nread(4) must be 1")
call read_record_core(unit, field, nread(1)*nread(2)*nread(3), data, start, nread)
return
end subroutine mpp_read_region_r3D
subroutine mpp_read_region_r2D_r8(unit, field, data, start, nread)
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
real(kind=8), intent(inout) :: data(:,:)
integer, intent(in) :: start(:), nread(:)
if(size(start(:)) .NE. 4 .OR. size(nread(:)) .NE. 4) call mpp_error(FATAL, &
"mpp_io_read.inc(mpp_read_region_r2D_r8): size of start and nread must be 4")
if(size(data,1).NE.nread(1) .OR. size(data,2).NE.nread(2)) then
call mpp_error( FATAL, 'mpp_io_read.inc(mpp_read_block_r2D_r8): size mismatch between data and nread')
endif
if(nread(3) .NE. 1 .OR. nread(4) .NE. 1) call mpp_error(FATAL, &
"mpp_io_read.inc(mpp_read_region_r2D_r8): nread(3) and nread(4) must be 1")
call read_record_core_r8(unit, field, nread(1)*nread(2), data, start, nread)
return
end subroutine mpp_read_region_r2D_r8
subroutine mpp_read_region_r3D_r8(unit, field, data, start, nread)
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
real(kind=8), intent(inout) :: data(:,:,:)
integer, intent(in) :: start(:), nread(:)
if(size(start(:)) .NE. 4 .OR. size(nread(:)) .NE. 4) call mpp_error(FATAL, &
"mpp_io_read.inc(mpp_read_region_r3D_r8): size of start and nread must be 4")
if(size(data,1).NE.nread(1) .OR. size(data,2).NE.nread(2) .OR. size(data,3).NE.nread(3) ) then
call mpp_error( FATAL, 'mpp_io_read.inc(mpp_read_block_r3D_r8): size mismatch between data and nread')
endif
if(nread(4) .NE. 1) call mpp_error(FATAL, &
"mpp_io_read.inc(mpp_read_region_r3D_r8): nread(4) must be 1")
call read_record_core_r8(unit, field, nread(1)*nread(2)*nread(3), data, start, nread)
return
end subroutine mpp_read_region_r3D_r8
!--- Assume the text field is at most two-dimensional;
!--- the first dimension is the string length, and the optional level indexes the second dimension.
subroutine mpp_read_text( unit, field, data, level )
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
character(len=*), intent(inout) :: data
integer, intent(in), optional :: level
integer :: lev, n
character(len=256) :: error_msg
integer, dimension(size(field%axes(:))) :: start, axsiz
character(len=len(data)) :: text
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_READ_TEXT: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'MPP_READ_TEXT: invalid unit number.' )
if( mpp_file(unit)%threading.EQ.MPP_SINGLE .AND. pe.NE.mpp_root_pe() )return
if( .NOT.mpp_file(unit)%initialized ) call mpp_error( FATAL, 'MPP_READ: must first call mpp_read_meta.' )
lev = 1
if(present(level)) lev = level
if( verbose )print '(a,2i6,2i5)', 'MPP_READ: PE, unit, %id, level =', pe, unit, mpp_file(unit)%id, lev
if( mpp_file(unit)%format.EQ.MPP_NETCDF )then
start = 1
axsiz(:) = field%size(:)
if(len(data) < field%size(1) ) call mpp_error(FATAL, &
'mpp_io(mpp_read_text): the first dimension size is greater than data length')
select case( field%ndim)
case(1)
if(lev .NE. 1) call mpp_error(FATAL,'mpp_io(mpp_read_text): level should be 1 when ndim is 1')
case(2)
if(lev<1 .OR. lev > field%size(2)) then
write(error_msg,'(I5,"/",I5)') lev, field%size(2)
call mpp_error(FATAL,'mpp_io(mpp_read_text): level out of range, level/max_level='//trim(error_msg))
end if
start(2) = lev
axsiz(2) = 1
case default
call mpp_error( FATAL, 'MPP_READ: ndim of text field should be at most 2')
end select
if( verbose )print '(a,2i6,i6,12i4)', 'mpp_read_text: PE, unit, nwords, start, axsiz=', pe, unit, len(data), start, axsiz
select case (field%type)
case(NF_CHAR)
if(field%ndim==1) then
error = NF_GET_VAR_TEXT(mpp_file(unit)%ncid, field%id, text)
else
error = NF_GET_VARA_TEXT(mpp_file(unit)%ncid, field%id, start, axsiz, text)
end if
call netcdf_err( error, mpp_file(unit), field=field )
do n = 1, len_trim(text)
if(text(n:n) == CHAR(0) ) exit
end do
n = n-1
data = text(1:n)
case default
call mpp_error( FATAL, 'mpp_read_text: the field type should be NF_CHAR' )
end select
else !non-netCDF
call mpp_error( FATAL, 'mpp_read_text: non-netCDF reads are currently not supported' )
end if
# 278
return
end subroutine mpp_read_text
!
!
! Read metadata.
!
!
! This routine is used to read the metadata
! describing the contents of a file. Each file can contain any number of
! fields, which are functions of 0-3 space axes and 0-1 time axes. (Only
! one time axis can be defined per file). The basic metadata defined above for axistype and
! fieldtype are stored in mpp_io_mod and
! can be accessed outside of mpp_io_mod using calls to
! mpp_get_info, mpp_get_atts,
! mpp_get_vars and
! mpp_get_times.
!
!
! call mpp_read_meta(unit)
!
!
!
! mpp_read_meta must be called prior to mpp_read.
!
!
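! A minimal usage sketch (illustrative only; the file name, unit and array
! names below are hypothetical): metadata must be read before any mpp_read.
!
!   call mpp_open( unit, 'INPUT/example.nc', action=MPP_RDONLY, form=MPP_NETCDF )
!   call mpp_read_meta( unit )                            ! parse axes, fields, attributes
!   call mpp_get_info( unit, ndim, nvar, natt, ntime )    ! counts stored by mpp_read_meta
!   allocate( fields(nvar) )
!   call mpp_get_fields( unit, fields )                   ! fieldtype descriptors
!   call mpp_read( unit, fields(1), domain, sst, tindex=1 )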
subroutine mpp_read_meta(unit, read_time)
!
! read file attributes including dimension and variable attributes
! and store in filetype structure. All of the file information
! with the exception of the (variable) data is stored. Attributes
! are supplied to the user by get_info,get_atts,get_axes and get_fields
!
! every PE is eligible to call mpp_read_meta
!
integer, intent(in) :: unit
logical, intent(in), optional :: read_time ! read_time is set to .false. when the file action is appending or writing.
! This is a temporary fix for the MOM6 reopen_file issue.
integer :: ncid,ndim,nvar_total,natt,recdim,nv,nvar,len
integer :: error, i, j, istat, check_exist
integer :: type, nvdims, nvatts, dimid
integer, allocatable, dimension(:) :: dimids
character(len=128) :: name, attname, unlimname, attval, bounds_name
logical :: isdim, found_bounds, get_time_info
integer(8) :: checksumf
character(len=64) :: checksum_char
integer :: num_checksumf, last, is, k
integer(2), allocatable :: i2vals(:)
integer(4), allocatable :: ivals(:)
real(4), allocatable :: rvals(:)
real(8), allocatable :: r8vals(:)
get_time_info = .TRUE.
if(present(read_time)) get_time_info = read_time
if( mpp_file(unit)%format.EQ.MPP_NETCDF )then
ncid = mpp_file(unit)%ncid
error = NF_INQ(ncid,ndim, nvar_total,&
natt, recdim);call netcdf_err( error, mpp_file(unit) )
mpp_file(unit)%ndim = ndim
mpp_file(unit)%natt = natt
mpp_file(unit)%recdimid = recdim
!
! if no recdim exists, recdimid = -1
! variable id of unlimdim and length
!
if( recdim.NE.-1 )then
error = NF_INQ_DIM( ncid, recdim, unlimname, mpp_file(unit)%time_level )
call netcdf_err( error, mpp_file(unit) )
error = NF_INQ_VARID( ncid, unlimname, mpp_file(unit)%id )
call netcdf_err( error, mpp_file(unit), string='Field='//unlimname )
else
mpp_file(unit)%time_level = -1 ! set to -1 (no time axis present) so mpp_get_info returns ntime=0
endif
allocate(mpp_file(unit)%Att(natt))
allocate(dimids(ndim))
allocate(mpp_file(unit)%Axis(ndim))
!
! initialize fieldtype and axis type
!
do i=1,ndim
mpp_file(unit)%Axis(i) = default_axis
enddo
do i=1,natt
mpp_file(unit)%Att(i) = default_att
enddo
!
! assign global attributes
!
do i=1,natt
error=NF_INQ_ATTNAME(ncid,NF_GLOBAL,i,name);call netcdf_err( error, mpp_file(unit), string=' Global attribute error.' )
error=NF_INQ_ATT(ncid,NF_GLOBAL,trim(name),type,len);call netcdf_err( error, mpp_file(unit), string=' Attribute='//name )
mpp_file(unit)%Att(i)%name = name
mpp_file(unit)%Att(i)%len = len
mpp_file(unit)%Att(i)%type = type
!
! allocate space for att data and assign
!
select case (type)
case (NF_CHAR)
if (len.gt.MAX_ATT_LENGTH) then
call mpp_error(NOTE,'GLOBAL ATT too long - not reading this metadata')
len=7
mpp_file(unit)%Att(i)%len=len
mpp_file(unit)%Att(i)%catt = 'unknown'
else
error=NF_GET_ATT_TEXT(ncid,NF_GLOBAL,name,mpp_file(unit)%Att(i)%catt)
call netcdf_err( error, mpp_file(unit), attr=mpp_file(unit)%att(i) )
if (verbose.and.pe == 0) print *, 'GLOBAL ATT ',trim(name),' ',mpp_file(unit)%Att(i)%catt(1:len)
endif
!
! store integers in float arrays
!
case (NF_SHORT)
allocate(mpp_file(unit)%Att(i)%fatt(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate mpp_file%Att%fatt, NF_SHORT case. "//&
& "STAT = "//trim(text))
end if
allocate(i2vals(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate temporary array i2vals. STAT = "&
& //trim(text))
end if
error=NF_GET_ATT_INT2(ncid,NF_GLOBAL,name,i2vals)
call netcdf_err( error, mpp_file(unit), attr=mpp_file(unit)%att(i) )
if( verbose .and. pe == 0 )print *, 'GLOBAL ATT ',trim(name),' ',i2vals(1:len)
mpp_file(unit)%Att(i)%fatt(1:len)=i2vals(1:len)
deallocate(i2vals)
case (NF_INT)
allocate(mpp_file(unit)%Att(i)%fatt(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate mpp_file%Att%fatt, NF_INT case. "//&
& "STAT = "//trim(text))
end if
allocate(ivals(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate temporary array ivals. STAT = "&
& //trim(text))
end if
error=NF_GET_ATT_INT(ncid,NF_GLOBAL,name,ivals)
call netcdf_err( error, mpp_file(unit), attr=mpp_file(unit)%att(i) )
if( verbose .and. pe == 0 )print *, 'GLOBAL ATT ',trim(name),' ',ivals(1:len)
mpp_file(unit)%Att(i)%fatt(1:len)=ivals(1:len)
if(lowercase(trim(name)) == 'time_axis' .and. ivals(1)==0) &
mpp_file(unit)%time_level = -1 ! This file is an unlimited axis restart
deallocate(ivals)
case (NF_FLOAT)
allocate(mpp_file(unit)%Att(i)%fatt(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate mpp_file%Att%fatt, NF_FLOAT case. "//&
& "STAT = "//trim(text))
end if
allocate(rvals(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate temporary array rvals. STAT = "&
& //trim(text))
end if
error=NF_GET_ATT_REAL(ncid,NF_GLOBAL,name,rvals)
call netcdf_err( error, mpp_file(unit), attr=mpp_file(unit)%att(i) )
mpp_file(unit)%Att(i)%fatt(1:len)=rvals(1:len)
if( verbose .and. pe == 0)print *, 'GLOBAL ATT ',trim(name),' ',mpp_file(unit)%Att(i)%fatt(1:len)
deallocate(rvals)
case (NF_DOUBLE)
allocate(mpp_file(unit)%Att(i)%fatt(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate mpp_file%Att%fatt, NF_DOUBLE case. "//&
& "STAT = "//trim(text))
end if
allocate(r8vals(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate temporary array r8vals. STAT = "&
& //trim(text))
end if
error=NF_GET_ATT_DOUBLE(ncid,NF_GLOBAL,name,r8vals)
call netcdf_err( error, mpp_file(unit), attr=mpp_file(unit)%att(i) )
mpp_file(unit)%Att(i)%fatt(1:len)=r8vals(1:len)
if( verbose .and. pe == 0)print *, 'GLOBAL ATT ',trim(name),' ',mpp_file(unit)%Att(i)%fatt(1:len)
deallocate(r8vals)
end select
enddo
!
! assign dimension name and length
!
do i=1,ndim
error = NF_INQ_DIM(ncid,i,name,len);call netcdf_err( error, mpp_file(unit) )
mpp_file(unit)%Axis(i)%name = name
mpp_file(unit)%Axis(i)%len = len
enddo
nvar=0
do i=1, nvar_total
error=NF_INQ_VAR(ncid,i,name,type,nvdims,dimids,nvatts);call netcdf_err( error, mpp_file(unit) )
isdim=.false.
do j=1,ndim
if( trim(lowercase(name)).EQ.trim(lowercase(mpp_file(unit)%Axis(j)%name)) )isdim=.true.
enddo
if (.not.isdim) nvar=nvar+1
enddo
mpp_file(unit)%nvar = nvar
allocate(mpp_file(unit)%Var(nvar))
do i=1,nvar
mpp_file(unit)%Var(i) = default_field
enddo
!
! assign dimension info
!
do i=1, nvar_total
error=NF_INQ_VAR(ncid,i,name,type,nvdims,dimids,nvatts);call netcdf_err( error, mpp_file(unit) )
isdim=.false.
do j=1,ndim
if( trim(lowercase(name)).EQ.trim(lowercase(mpp_file(unit)%Axis(j)%name)) )isdim=.true.
enddo
if( isdim )then
error=NF_INQ_DIMID(ncid,name,dimid);call netcdf_err( error, mpp_file(unit), string=' Axis='//name )
mpp_file(unit)%Axis(dimid)%type = type
mpp_file(unit)%Axis(dimid)%did = dimid
mpp_file(unit)%Axis(dimid)%id = i
mpp_file(unit)%Axis(dimid)%natt = nvatts
! get axis values
if( i.NE.mpp_file(unit)%id )then ! non-record dims
select case (type)
case (NF_INT)
len=mpp_file(unit)%Axis(dimid)%len
allocate(mpp_file(unit)%Axis(dimid)%data(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate mpp_file%Axis%data, NF_INT case. "//&
& "STAT = "//trim(text))
end if
allocate(ivals(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate temporary array ivals. STAT = "&
& //trim(text))
end if
error = NF_GET_VAR_INT(ncid,i,ivals);call netcdf_err( error, mpp_file(unit), mpp_file(unit)%Axis(dimid) )
mpp_file(unit)%Axis(dimid)%data(1:len)=ivals(1:len)
deallocate(ivals)
case (NF_FLOAT)
len=mpp_file(unit)%Axis(dimid)%len
allocate(mpp_file(unit)%Axis(dimid)%data(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate mpp_file%Axis%data, "//&
& "NF_FLOAT case. STAT = "//trim(text))
end if
allocate(rvals(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate temporary array rvals. STAT = "&
& //trim(text))
end if
error = NF_GET_VAR_REAL(ncid,i,rvals);call netcdf_err( error, mpp_file(unit), mpp_file(unit)%Axis(dimid) )
mpp_file(unit)%Axis(dimid)%data(1:len)=rvals(1:len)
deallocate(rvals)
case (NF_DOUBLE)
len=mpp_file(unit)%Axis(dimid)%len
allocate(mpp_file(unit)%Axis(dimid)%data(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate mpp_file%Axis%data, "//&
& "NF_DOUBLE case. STAT = "//trim(text))
end if
allocate(r8vals(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate temporary array r8vals. STAT = "&
& //trim(text))
end if
error = NF_GET_VAR_DOUBLE(ncid,i,r8vals);call netcdf_err( error, mpp_file(unit), mpp_file(unit)%Axis(dimid) )
mpp_file(unit)%Axis(dimid)%data(1:len) = r8vals(1:len)
deallocate(r8vals)
case default
call mpp_error( FATAL, 'Invalid data type for dimension' )
end select
else if(get_time_info) then
len = mpp_file(unit)%time_level
if( len > 0 ) then
allocate(mpp_file(unit)%time_values(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate mpp_file%time_values. STAT = "&
& //trim(text))
end if
select case (type)
case (NF_FLOAT)
allocate(rvals(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate temporary array rvals. STAT = "&
& //trim(text))
end if
!z1l: read on the root pe and broadcast to the other processors.
!In the future we will modify the code if there is a performance issue at very high MPI rank counts.
if(mpp_pe()==mpp_root_pe()) then
error = NF_GET_VAR_REAL(ncid,i,rvals)
call netcdf_err( error, mpp_file(unit), mpp_file(unit)%Axis(dimid) )
endif
call mpp_broadcast(rvals, len, mpp_root_pe())
mpp_file(unit)%time_values(1:len) = rvals(1:len)
deallocate(rvals)
case (NF_DOUBLE)
allocate(r8vals(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate temporary array r8vals. STAT = "&
& //trim(text))
end if
!z1l: read on the root pe and broadcast to the other processors.
!In the future we will modify the code if there is a performance issue at very high MPI rank counts.
if(mpp_pe()==mpp_root_pe()) then
error = NF_GET_VAR_DOUBLE(ncid,i,r8vals)
call netcdf_err( error, mpp_file(unit), mpp_file(unit)%Axis(dimid) )
endif
call mpp_broadcast(r8vals, len, mpp_root_pe())
mpp_file(unit)%time_values(1:len) = r8vals(1:len)
deallocate(r8vals)
case default
call mpp_error( FATAL, 'Invalid data type for dimension' )
end select
endif
endif
! assign dimension atts
if( nvatts.GT.0 )allocate(mpp_file(unit)%Axis(dimid)%Att(nvatts))
do j=1,nvatts
mpp_file(unit)%Axis(dimid)%Att(j) = default_att
enddo
do j=1,nvatts
error=NF_INQ_ATTNAME(ncid,i,j,attname);call netcdf_err( error, mpp_file(unit) )
error=NF_INQ_ATT(ncid,i,trim(attname),type,len)
call netcdf_err( error, mpp_file(unit), string=' Attribute='//attname )
mpp_file(unit)%Axis(dimid)%Att(j)%name = trim(attname)
mpp_file(unit)%Axis(dimid)%Att(j)%type = type
mpp_file(unit)%Axis(dimid)%Att(j)%len = len
select case (type)
case (NF_CHAR)
if (len.gt.MAX_ATT_LENGTH) call mpp_error(FATAL,'DIM ATT too long')
error=NF_GET_ATT_TEXT(ncid,i,trim(attname),mpp_file(unit)%Axis(dimid)%Att(j)%catt);
call netcdf_err( error, mpp_file(unit), attr=mpp_file(unit)%Axis(dimid)%att(j) )
if( verbose .and. pe == 0 ) &
print *, 'AXIS ',trim(mpp_file(unit)%Axis(dimid)%name),' ATT ',trim(attname),' ',mpp_file(unit)%Axis(dimid)%Att(j)%catt(1:len)
! store integers in float arrays
! assume dimension data not packed
case (NF_SHORT)
allocate(mpp_file(unit)%Axis(dimid)%Att(j)%fatt(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate mpp_file%Axis%Att%fatt, "//&
& "NF_SHORT CASE. STAT = "//trim(text))
end if
allocate(i2vals(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate temporary array i2vals. STAT = "&
& //trim(text))
end if
error=NF_GET_ATT_INT2(ncid,i,trim(attname),i2vals);
call netcdf_err( error, mpp_file(unit), attr=mpp_file(unit)%Axis(dimid)%att(j) )
mpp_file(unit)%Axis(dimid)%Att(j)%fatt(1:len)=i2vals(1:len)
if( verbose .and. pe == 0 ) &
print *, 'AXIS ',trim(mpp_file(unit)%Axis(dimid)%name),' ATT ',trim(attname),' ',mpp_file(unit)&
&%Axis(dimid)%Att(j)%fatt
deallocate(i2vals)
case (NF_INT)
allocate(mpp_file(unit)%Axis(dimid)%Att(j)%fatt(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate mpp_file%Axis%Att%fatt, "//&
& "NF_INT CASE. STAT = "//trim(text))
end if
allocate(ivals(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate temporary array ivals. STAT = "&
& //trim(text))
end if
error=NF_GET_ATT_INT(ncid,i,trim(attname),ivals);
call netcdf_err( error, mpp_file(unit), attr=mpp_file(unit)%Axis(dimid)%att(j) )
mpp_file(unit)%Axis(dimid)%Att(j)%fatt(1:len)=ivals(1:len)
if( verbose .and. pe == 0 ) &
print *, 'AXIS ',trim(mpp_file(unit)%Axis(dimid)%name),' ATT ',trim(attname),' ',&
& mpp_file(unit)%Axis(dimid)%Att(j)%fatt
deallocate(ivals)
case (NF_FLOAT)
allocate(mpp_file(unit)%Axis(dimid)%Att(j)%fatt(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate mpp_file%Axis%Att%fatt, "//&
& "NF_FLOAT CASE. STAT = "//trim(text))
end if
allocate(rvals(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate temporary array rvals. STAT = "&
& //trim(text))
end if
error=NF_GET_ATT_REAL(ncid,i,trim(attname),rvals);
call netcdf_err( error, mpp_file(unit), attr=mpp_file(unit)%Axis(dimid)%att(j) )
mpp_file(unit)%Axis(dimid)%Att(j)%fatt(1:len)=rvals(1:len)
if( verbose .and. pe == 0 ) &
print *, 'AXIS ',trim(mpp_file(unit)%Axis(dimid)%name),' ATT ',trim(attname),' ',&
& mpp_file(unit)%Axis(dimid)%Att(j)%fatt
deallocate(rvals)
case (NF_DOUBLE)
allocate(mpp_file(unit)%Axis(dimid)%Att(j)%fatt(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate mpp_file%Axis%Att%fatt, "//&
& "NF_DOUBLE CASE. STAT = "//trim(text))
end if
allocate(r8vals(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate temporary array r8vals. STAT = "&
& //trim(text))
end if
error=NF_GET_ATT_DOUBLE(ncid,i,trim(attname),r8vals);
call netcdf_err( error, mpp_file(unit), attr=mpp_file(unit)%Axis(dimid)%att(j) )
mpp_file(unit)%Axis(dimid)%Att(j)%fatt(1:len)=r8vals(1:len)
if( verbose .and. pe == 0 ) &
print *, 'AXIS ',trim(mpp_file(unit)%Axis(dimid)%name),' ATT ',trim(attname),' ',&
& mpp_file(unit)%Axis(dimid)%Att(j)%fatt
deallocate(r8vals)
case default
call mpp_error( FATAL, 'Invalid data type for dimension attribute' )
end select
! assign pre-defined axis attributes
select case(trim(attname))
case('long_name')
mpp_file(unit)%Axis(dimid)%longname=mpp_file(unit)%Axis(dimid)%Att(j)%catt(1:len)
case('units')
mpp_file(unit)%Axis(dimid)%units=mpp_file(unit)%Axis(dimid)%Att(j)%catt(1:len)
case('cartesian_axis')
mpp_file(unit)%Axis(dimid)%cartesian=mpp_file(unit)%Axis(dimid)%Att(j)%catt(1:len)
case('calendar')
mpp_file(unit)%Axis(dimid)%calendar=mpp_file(unit)%Axis(dimid)%Att(j)%catt(1:len)
mpp_file(unit)%Axis(dimid)%calendar = lowercase(cut0(mpp_file(unit)%Axis(dimid)%calendar))
if (trim(mpp_file(unit)%Axis(dimid)%calendar) == 'none') &
mpp_file(unit)%Axis(dimid)%calendar = 'no_calendar'
if (trim(mpp_file(unit)%Axis(dimid)%calendar) == 'no_leap') &
mpp_file(unit)%Axis(dimid)%calendar = 'noleap'
if (trim(mpp_file(unit)%Axis(dimid)%calendar) == '365_days') &
mpp_file(unit)%Axis(dimid)%calendar = '365_day'
if (trim(mpp_file(unit)%Axis(dimid)%calendar) == '360_days') &
mpp_file(unit)%Axis(dimid)%calendar = '360_day'
case('calendar_type')
mpp_file(unit)%Axis(dimid)%calendar=mpp_file(unit)%Axis(dimid)%Att(j)%catt(1:len)
mpp_file(unit)%Axis(dimid)%calendar = lowercase(cut0(mpp_file(unit)%Axis(dimid)%calendar))
if (trim(mpp_file(unit)%Axis(dimid)%calendar) == 'none') &
mpp_file(unit)%Axis(dimid)%calendar = 'no_calendar'
if (trim(mpp_file(unit)%Axis(dimid)%calendar) == 'no_leap') &
mpp_file(unit)%Axis(dimid)%calendar = 'noleap'
if (trim(mpp_file(unit)%Axis(dimid)%calendar) == '365_days') &
mpp_file(unit)%Axis(dimid)%calendar = '365_day'
if (trim(mpp_file(unit)%Axis(dimid)%calendar) == '360_days') &
mpp_file(unit)%Axis(dimid)%calendar = '360_day'
case('compress')
mpp_file(unit)%Axis(dimid)%compressed=mpp_file(unit)%Axis(dimid)%Att(j)%catt(1:len)
case('positive')
attval = mpp_file(unit)%Axis(dimid)%Att(j)%catt(1:len)
if( attval.eq.'down' )then
mpp_file(unit)%Axis(dimid)%sense=-1
else if( attval.eq.'up' )then
mpp_file(unit)%Axis(dimid)%sense=1
endif
end select
enddo
endif
enddo
! assign axis bounds
do j = 1, mpp_file(unit)%ndim
if(.not. associated(mpp_file(unit)%Axis(j)%data)) cycle
len = size(mpp_file(unit)%Axis(j)%data(:))
allocate(mpp_file(unit)%Axis(j)%data_bounds(len+1))
mpp_file(unit)%Axis(j)%name_bounds = 'none'
bounds_name = 'none'
found_bounds = .false.
do i = 1, mpp_file(unit)%Axis(j)%natt
if(trim(mpp_file(unit)%Axis(j)%Att(i)%name) == 'bounds' .OR. &
trim(mpp_file(unit)%Axis(j)%Att(i)%name) == 'edges' ) then
bounds_name = mpp_file(unit)%Axis(j)%Att(i)%catt
found_bounds = .true.
exit
endif
enddo
!-- loop through all the fields to locate bounds_name
if( found_bounds ) then
found_bounds = .false.
do i = 1, mpp_file(unit)%ndim
if(.not. associated(mpp_file(unit)%Axis(i)%data)) cycle
if(trim(mpp_file(unit)%Axis(i)%name) == trim(bounds_name)) then
found_bounds = .true.
if(size(mpp_file(unit)%Axis(i)%data(:)) .NE. len+1) &
call mpp_error(FATAL, "mpp_read_meta: improperly size bounds for field "// &
trim(bounds_name)//" in file "// trim(mpp_file(unit)%name) )
mpp_file(unit)%Axis(j)%data_bounds(:) = mpp_file(unit)%Axis(i)%data(:)
exit
endif
enddo
if( .not. found_bounds ) then
do i=1, nvar_total
error=NF_INQ_VAR(ncid,i,name,type,nvdims,dimids,nvatts);call netcdf_err( error, mpp_file(unit) )
if(trim(name) == trim(bounds_name)) then
found_bounds = .true.
if(nvdims .NE. 2) &
call mpp_error(FATAL, "mpp_read_meta: field "//trim(bounds_name)//" in file "//&
trim(mpp_file(unit)%name)//" must be 2-D field")
if(mpp_file(unit)%Axis(dimids(1))%len .NE. 2) &
call mpp_error(FATAL, "mpp_read_meta: first dimension size of field "// &
trim(bounds_name)//" from file "//trim(mpp_file(unit)%name)// &
" must be 2")
if(mpp_file(unit)%Axis(dimids(2))%len .NE. len) &
call mpp_error(FATAL, "mpp_read_meta: second dimension size of field "// &
trim(bounds_name)//" from file "//trim(mpp_file(unit)%name)// &
" must match the axis length")
select case (type)
case (NF_INT)
allocate(ivals(2*len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate array ivals."//&
" STAT = "//trim(text))
end if
error = NF_GET_VAR_INT(ncid,i,ivals)
call netcdf_err( error, mpp_file(unit), string=" Field="//trim(bounds_name) )
mpp_file(unit)%Axis(j)%data_bounds(1:len) =ivals(1:(2*len-1):2)
mpp_file(unit)%Axis(j)%data_bounds(len+1) = ivals(2*len)
deallocate(ivals)
case (NF_FLOAT)
allocate(rvals(2*len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate array rvals. "// &
" STAT = "//trim(text))
end if
error = NF_GET_VAR_REAL(ncid,i,rvals)
call netcdf_err( error, mpp_file(unit), string=" Field="//trim(bounds_name) )
mpp_file(unit)%Axis(j)%data_bounds(1:len) =rvals(1:(2*len-1):2)
mpp_file(unit)%Axis(j)%data_bounds(len+1) = rvals(2*len)
deallocate(rvals)
case (NF_DOUBLE)
allocate(r8vals(2*len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate array r8vals. "//&
" STAT = "//trim(text))
end if
error = NF_GET_VAR_DOUBLE(ncid,i,r8vals)
call netcdf_err( error, mpp_file(unit), string=" Field="//trim(bounds_name) )
mpp_file(unit)%Axis(j)%data_bounds(1:len) =r8vals(1:(2*len-1):2)
mpp_file(unit)%Axis(j)%data_bounds(len+1) = r8vals(2*len)
deallocate(r8vals)
case default
call mpp_error( FATAL, 'mpp_io_mod(mpp_read_meta): Invalid data type for dimension' )
end select
exit
endif
enddo
endif
endif
if (found_bounds) then
mpp_file(unit)%Axis(j)%name_bounds = trim(bounds_name)
else
deallocate(mpp_file(unit)%Axis(j)%data_bounds)
mpp_file(unit)%Axis(j)%data_bounds =>NULL()
endif
enddo
! assign variable info
nv = 0
do i=1, nvar_total
error=NF_INQ_VAR(ncid,i,name,type,nvdims,dimids,nvatts);call netcdf_err( error, mpp_file(unit) )
!
! is this a dimension variable?
!
isdim=.false.
do j=1,ndim
if( trim(lowercase(name)).EQ.trim(lowercase(mpp_file(unit)%Axis(j)%name)) )isdim=.true.
enddo
if( .not.isdim )then
! for non-dimension variables
nv=nv+1; if( nv.GT.mpp_file(unit)%nvar )call mpp_error( FATAL, 'variable index exceeds number of defined variables' )
mpp_file(unit)%Var(nv)%type = type
mpp_file(unit)%Var(nv)%id = i
mpp_file(unit)%Var(nv)%name = name
mpp_file(unit)%Var(nv)%natt = nvatts
! determine packing attribute based on NetCDF variable type
select case (type)
case(NF_SHORT)
mpp_file(unit)%Var(nv)%pack = 4
case(NF_FLOAT)
mpp_file(unit)%Var(nv)%pack = 2
case(NF_DOUBLE)
mpp_file(unit)%Var(nv)%pack = 1
case (NF_INT)
mpp_file(unit)%Var(nv)%pack = 2
case (NF_CHAR)
mpp_file(unit)%Var(nv)%pack = 1
case default
call mpp_error( FATAL, 'Invalid variable type in NetCDF file' )
end select
! assign dimension ids
mpp_file(unit)%Var(nv)%ndim = nvdims
allocate(mpp_file(unit)%Var(nv)%axes(nvdims))
do j=1,nvdims
mpp_file(unit)%Var(nv)%axes(j) = mpp_file(unit)%Axis(dimids(j))
enddo
allocate(mpp_file(unit)%Var(nv)%size(nvdims))
do j=1,nvdims
if(dimids(j).eq.mpp_file(unit)%recdimid .and. mpp_file(unit)%time_level/=-1)then
mpp_file(unit)%Var(nv)%time_axis_index = j ! z1l: use j, not dimids(j);
!using dimids(j) would cause problems when appending to an existing file.
mpp_file(unit)%Var(nv)%size(j)=1 ! dimid length set to 1 here for consistency w/ mpp_write
else
mpp_file(unit)%Var(nv)%size(j)=mpp_file(unit)%Axis(dimids(j))%len
endif
enddo
! assign variable atts
if( nvatts.GT.0 )allocate(mpp_file(unit)%Var(nv)%Att(nvatts))
do j=1,nvatts
mpp_file(unit)%Var(nv)%Att(j) = default_att
enddo
do j=1,nvatts
error=NF_INQ_ATTNAME(ncid,i,j,attname);call netcdf_err( error, mpp_file(unit), field=mpp_file(unit)%Var(nv) )
error=NF_INQ_ATT(ncid,i,attname,type,len)
call netcdf_err( error, mpp_file(unit),field= mpp_file(unit)%Var(nv), string=' Attribute='//attname )
mpp_file(unit)%Var(nv)%Att(j)%name = trim(attname)
mpp_file(unit)%Var(nv)%Att(j)%type = type
mpp_file(unit)%Var(nv)%Att(j)%len = len
select case (type)
case (NF_CHAR)
if (len.gt.512) call mpp_error(FATAL,'VAR ATT too long')
error=NF_GET_ATT_TEXT(ncid,i,trim(attname),mpp_file(unit)%Var(nv)%Att(j)%catt(1:len))
call netcdf_err( error, mpp_file(unit), field=mpp_file(unit)%var(nv), attr=mpp_file(unit)%var(nv)%att(j) )
if (verbose .and. pe == 0 )&
print *, 'Var ',nv,' ATT ',trim(attname),' ',mpp_file(unit)%Var(nv)%Att(j)%catt(1:len)
! store integers as float internally
case (NF_SHORT)
allocate(mpp_file(unit)%Var(nv)%Att(j)%fatt(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate mpp_file%Var%Att%fatt, "//&
& "NF_SHORT CASE. STAT = "//trim(text))
end if
allocate(i2vals(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate temporary array i2vals. STAT = "&
& //trim(text))
end if
error=NF_GET_ATT_INT2(ncid,i,trim(attname),i2vals)
call netcdf_err( error, mpp_file(unit), field=mpp_file(unit)%var(nv), attr=mpp_file(unit)%var(nv)%att(j) )
mpp_file(unit)%Var(nv)%Att(j)%fatt(1:len)= i2vals(1:len)
if( verbose .and. pe == 0 )&
print *, 'Var ',nv,' ATT ',trim(attname),' ',mpp_file(unit)%Var(nv)%Att(j)%fatt
deallocate(i2vals)
case (NF_INT)
allocate(mpp_file(unit)%Var(nv)%Att(j)%fatt(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate mpp_file%Var%Att%fatt, "//&
& "NF_INT CASE. STAT = "//trim(text))
end if
allocate(ivals(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate temporary array ivals. STAT = "&
& //trim(text))
end if
error=NF_GET_ATT_INT(ncid,i,trim(attname),ivals)
call netcdf_err( error, mpp_file(unit), field=mpp_file(unit)%var(nv), attr=mpp_file(unit)%var(nv)%att(j) )
mpp_file(unit)%Var(nv)%Att(j)%fatt(1:len)=ivals(1:len)
if( verbose .and. pe == 0 )&
print *, 'Var ',nv,' ATT ',trim(attname),' ',mpp_file(unit)%Var(nv)%Att(j)%fatt
deallocate(ivals)
case (NF_FLOAT)
allocate(mpp_file(unit)%Var(nv)%Att(j)%fatt(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate mpp_file%Var%Att%fatt, "//&
& "NF_FLOAT CASE. STAT = "//trim(text))
end if
allocate(rvals(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate temporary array rvals. STAT = "&
& //trim(text))
end if
error=NF_GET_ATT_REAL(ncid,i,trim(attname),rvals)
call netcdf_err( error, mpp_file(unit), field=mpp_file(unit)%var(nv), attr=mpp_file(unit)%var(nv)%att(j) )
mpp_file(unit)%Var(nv)%Att(j)%fatt(1:len)=rvals(1:len)
if( verbose .and. pe == 0 )&
print *, 'Var ',nv,' ATT ',trim(attname),' ',mpp_file(unit)%Var(nv)%Att(j)%fatt
deallocate(rvals)
case (NF_DOUBLE)
allocate(mpp_file(unit)%Var(nv)%Att(j)%fatt(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate mpp_file%Var%Att%fatt, "//&
& "NF_DOUBLE CASE. STAT = "//trim(text))
end if
allocate(r8vals(len), STAT=istat)
if ( istat .ne. 0 ) then
write(text,'(I0)') istat
call mpp_error(FATAL, "mpp_io_mod(mpp_read_meta): Unable to allocate temporary array r8vals. STAT = "&
& //trim(text))
end if
error=NF_GET_ATT_DOUBLE(ncid,i,trim(attname),r8vals)
call netcdf_err( error, mpp_file(unit), field=mpp_file(unit)%var(nv), attr=mpp_file(unit)%var(nv)%att(j) )
mpp_file(unit)%Var(nv)%Att(j)%fatt(1:len)=r8vals(1:len)
if( verbose .and. pe == 0 ) &
print *, 'Var ',nv,' ATT ',trim(attname),' ',mpp_file(unit)%Var(nv)%Att(j)%fatt
deallocate(r8vals)
case default
call mpp_error( FATAL, 'Invalid data type for variable att' )
end select
! assign pre-defined field attributes
select case (trim(attname))
case ('long_name')
mpp_file(unit)%Var(nv)%longname=mpp_file(unit)%Var(nv)%Att(j)%catt(1:len)
case('units')
mpp_file(unit)%Var(nv)%units=mpp_file(unit)%Var(nv)%Att(j)%catt(1:len)
case('scale_factor')
mpp_file(unit)%Var(nv)%scale=mpp_file(unit)%Var(nv)%Att(j)%fatt(1)
case('missing')
mpp_file(unit)%Var(nv)%missing=mpp_file(unit)%Var(nv)%Att(j)%fatt(1)
case('missing_value')
mpp_file(unit)%Var(nv)%missing=mpp_file(unit)%Var(nv)%Att(j)%fatt(1)
case('_FillValue')
mpp_file(unit)%Var(nv)%fill=mpp_file(unit)%Var(nv)%Att(j)%fatt(1)
case('add_offset')
mpp_file(unit)%Var(nv)%add=mpp_file(unit)%Var(nv)%Att(j)%fatt(1)
case('packing')
mpp_file(unit)%Var(nv)%pack=mpp_file(unit)%Var(nv)%Att(j)%fatt(1)
case('valid_range')
mpp_file(unit)%Var(nv)%min=mpp_file(unit)%Var(nv)%Att(j)%fatt(1)
mpp_file(unit)%Var(nv)%max=mpp_file(unit)%Var(nv)%Att(j)%fatt(2)
case('checksum')
checksum_char = mpp_file(unit)%Var(nv)%Att(j)%catt
! Scan the checksum attribute for the ',' delimiter; if present, the attribute holds multiple time levels.
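! For illustration only (hypothetical values): an attribute holding two time
! levels would read checksum = "A1B2C3D4E5F60718,0123456789ABCDEF", i.e.
! 16 hex digits per value, with values separated by single commas.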
checksumf = 0
num_checksumf = 1
last = len_trim(checksum_char)
is = index (trim(checksum_char),",") ! A value of 0 implies only 1 checksum value
do while ((is > 0) .and. (is < (last-15)))
is = is + scan(checksum_char(is:last), "," ) ! move starting pointer after ","
num_checksumf = num_checksumf + 1
enddo
is =1
do k = 1, num_checksumf
read (checksum_char(is:is+15),'(Z16)') checksumf
mpp_file(unit)%Var(nv)%checksum(k) = checksumf
is = is + 17 ! Move index past the ,
enddo
end select
enddo
endif
enddo ! end variable loop
else
call mpp_error( FATAL, 'MPP READ CURRENTLY DOES NOT SUPPORT NON-NETCDF' )
endif
mpp_file(unit)%initialized = .TRUE.
# 1082
return
end subroutine mpp_read_meta
function cut0(string)
character(len=256) :: cut0
character(len=*), intent(in) :: string
integer :: i
cut0 = string
i = index(string,achar(0))
if(i > 0) cut0(i:i) = ' '
return
end function cut0
subroutine mpp_get_tavg_info(unit, field, fields, tstamp, tstart, tend, tavg)
implicit none
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
type(fieldtype), intent(in), dimension(:) :: fields
real, intent(inout), dimension(:) :: tstamp, tstart, tend, tavg
!balaji: added because mpp_read can only read default reals;
! when running with -r4 this will read a default real and then cast it to double
real :: t_default_real
integer :: n, m
logical :: tavg_info_exists
tavg = -1.0
if (size(tstamp,1) /= size(tstart,1)) call mpp_error(FATAL,&
'size mismatch in mpp_get_tavg_info')
if ((size(tstart,1) /= size(tend,1)) .OR. (size(tstart,1) /= size(tavg,1))) then
call mpp_error(FATAL,'size mismatch in mpp_get_tavg_info')
endif
tstart = tstamp
tend = tstamp
tavg_info_exists = .false.
do n= 1, field%natt
if (field%Att(n)%type .EQ. NF_CHAR) then
if (field%Att(n)%name(1:13) == 'time_avg_info') then
tavg_info_exists = .true.
exit
endif
endif
enddo
if (tavg_info_exists) then
do n = 1, size(fields(:))
if (trim(fields(n)%name) == 'average_T1') then
do m = 1, size(tstart(:))
call mpp_read(unit, fields(n),t_default_real, m)
tstart(m) = t_default_real
enddo
endif
if (trim(fields(n)%name) == 'average_T2') then
do m = 1, size(tend(:))
call mpp_read(unit, fields(n),t_default_real, m)
tend(m) = t_default_real
enddo
endif
if (trim(fields(n)%name) == 'average_DT') then
do m = 1, size(tavg(:))
call mpp_read(unit, fields(n),t_default_real, m)
tavg(m) = t_default_real
enddo
endif
enddo
end if
return
end subroutine mpp_get_tavg_info
!#######################################################################
# 1106 "../mpp/mpp_io.F90" 2
# 1 "../mpp/include/mpp_io_write.inc" 1
! -*-f90-*-
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see .
!***********************************************************************
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! !
! MPP_WRITE_META !
! !
! This series of routines is used to describe the contents of the file !
! being written on <unit>. Each file can contain any number of fields,       !
! which can be functions of 0-3 spatial axes and 0-1 time axes. Axis !
! descriptors are stored in the <axistype> structure and field               !
! descriptors in the <fieldtype> structure.                                   !
! !
! type, public :: axistype !
! sequence !
! character(len=128) :: name !
! character(len=128) :: units !
! character(len=256) :: longname !
! integer :: sense !+/-1, depth or height? !
! type(domain1D) :: domain !
! real, pointer :: data(:) !axis values (not used if time axis) !
! integer :: id !
! end type axistype !
! !
! type, public :: fieldtype !
! sequence !
! character(len=128) :: name !
! character(len=128) :: units !
! character(len=256) :: longname !
! character(len=256) :: standard_name !CF standard name !
! real :: min, max, missing, fill, scale, add !
! type(axistype), pointer :: axis(:) !
! integer :: id !
! end type fieldtype !
! !
! The metadata contained in the type is always written for each axis and !
! field. Any other metadata one wishes to attach to an axis or field !
! can subsequently be passed to mpp_write_meta using the ID, as shown below. !
! !
! mpp_write_meta can take several forms: !
! !
! mpp_write_meta( unit, name, rval=rval, pack=pack ) !
! mpp_write_meta( unit, name, ival=ival ) !
! mpp_write_meta( unit, name, cval=cval ) !
! integer, intent(in) :: unit !
! character(len=*), intent(in) :: name !
! real, intent(in), optional :: rval(:) !
! integer, intent(in), optional :: ival(:) !
! character(len=*), intent(in), optional :: cval !
! !
! This form defines global metadata associated with the file as a !
! whole. The attribute is named <name> and can take on a real, integer       !
! or character value. <rval> and <ival> can be scalar or 1D arrays.          !
! !
! mpp_write_meta( unit, id, name, rval=rval, pack=pack ) !
! mpp_write_meta( unit, id, name, ival=ival ) !
! mpp_write_meta( unit, id, name, cval=cval ) !
! integer, intent(in) :: unit, id !
! character(len=*), intent(in) :: name !
! real, intent(in), optional :: rval(:) !
! integer, intent(in), optional :: ival(:) !
! character(len=*), intent(in), optional :: cval !
! !
! This form defines metadata associated with a previously defined !
! axis or field, identified to mpp_write_meta by its unique ID <id>.         !
! The attribute is named <name> and can take on a real, integer              !
! or character value. <rval> and <ival> can be scalar or 1D arrays.          !
! This need not be called for attributes already contained in                !
! the <axistype> or <fieldtype>.                                              !
! !
! PACK can take values 1,2,4,8. This only has meaning when writing !
! floating point numbers. The value of PACK defines the number of words !
! written into 8 bytes. For pack=4 and pack=8, an integer value is !
! written: rval is assumed to have been scaled to the appropriate dynamic !
! range. !
! PACK currently only works for netCDF files, and is ignored otherwise. !
! !
! subroutine mpp_write_meta_axis( unit, axis, name, units, longname, & !
! cartesian, sense, domain, data ) !
! integer, intent(in) :: unit !
! type(axistype), intent(inout) :: axis !
! character(len=*), intent(in) :: name, units, longname !
! character(len=*), intent(in), optional :: cartesian !
! integer, intent(in), optional :: sense !
! type(domain1D), intent(in), optional :: domain !
! real, intent(in), optional :: data(:) !
! !
! This form defines a time or space axis. Metadata corresponding to the !
! type above are written to the file on <unit>. A unique ID for subsequent   !
! references to this axis is returned in axis%id. If the <domain>            !
! element is present, this is recognized as a distributed data axis          !
! and domain decomposition information is also written if required (the      !
! domain decomposition info is required for multi-fileset multi-threaded     !
! I/O). If the <data> element is allocated, it is considered to be a         !
! space axis, otherwise it is a time axis with an unlimited dimension. !
! Only one time axis is allowed per file. !
! !
! subroutine mpp_write_meta_field( unit, field, axes, name, units, longname, !
!                standard_name, min, max, missing, fill, scale, add, pack )  !
! integer, intent(in) :: unit !
! type(fieldtype), intent(out) :: field !
! type(axistype), intent(in) :: axes(:) !
! character(len=*), intent(in) :: name, units, longname, standard_name !
! real, intent(in), optional :: min, max, missing, fill, scale, add !
! integer, intent(in), optional :: pack !
! !
! This form defines a field. Metadata corresponding to the type !
! above are written to the file on <unit>. A unique ID for subsequent        !
! references to this field is returned in field%id. At least one axis !
! must be associated, 0D variables are not considered. mpp_write_meta !
! must previously have been called on all axes associated with this !
! field. !
! !
! The mpp_write_meta package also includes subroutines write_attribute and !
! write_attribute_netcdf, that are private to this module. !
! !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
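! A usage sketch of the sequence described above (illustrative only; the unit,
! domain, axis and field names are hypothetical, not prescribed by the API):
!
!   call mpp_write_meta( unit, 'title', cval='example run' )                ! global attribute
!   call mpp_write_meta( unit, axis_x, 'grid_x', 'degrees_E', 'longitude', &
!                        cartesian='X', domain=xdom, data=xdata )
!   call mpp_write_meta( unit, axis_t, 'Time', 'days', 'model time' )       ! no data => time axis
!   call mpp_write_meta( unit, field_sst, (/axis_x, axis_t/), 'sst', 'deg_C', &
!                        'sea surface temperature', missing=-1e34 )
!   call mpp_write_meta( unit, field_sst%id, 'cell_methods', cval='time: mean' )
!   call mpp_write( unit, axis_x )                  ! write the axis values
!   call mpp_write( unit, field_sst, domain, sst, tstamp )   ! write one record of field data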
subroutine mpp_write_meta_global( unit, name, rval, ival, cval, pack)
!writes a global metadata attribute to unit
!attribute can be a real, integer or character
!one and only one of rval, ival, and cval should be present
!the first found will be used
!for a non-netCDF file, it is encoded into a string "GLOBAL "
integer, intent(in) :: unit
character(len=*), intent(in) :: name
real, intent(in), optional :: rval(:)
integer, intent(in), optional :: ival(:)
character(len=*), intent(in), optional :: cval
integer, intent(in), optional :: pack
! call mpp_clock_begin(mpp_write_clock)
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_WRITE_META: must first call mpp_io_init.' )
if( .NOT. mpp_file(unit)%write_on_this_pe) then
! call mpp_clock_end(mpp_write_clock)
return
endif
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'MPP_WRITE_META: invalid unit number.' )
if( mpp_file(unit)%initialized ) &
call mpp_error( FATAL, 'MPP_WRITE_META: cannot write metadata to file after an mpp_write.' )
if( mpp_file(unit)%format.EQ.MPP_NETCDF )then
call write_attribute_netcdf( unit, NF_GLOBAL, name, rval, ival, cval, pack )
else
call write_attribute( unit, 'GLOBAL '//trim(name), rval, ival, cval, pack )
end if
! call mpp_clock_end(mpp_write_clock)
return
end subroutine mpp_write_meta_global
!versions of above to support rval and ival as scalars (because of f90 strict rank matching)
subroutine mpp_write_meta_global_scalar_r( unit, name, rval, pack )
integer, intent(in) :: unit
character(len=*), intent(in) :: name
real, intent(in) :: rval
integer, intent(in), optional :: pack
call mpp_write_meta_global( unit, name, rval=(/rval/), pack=pack )
return
end subroutine mpp_write_meta_global_scalar_r
subroutine mpp_write_meta_global_scalar_i( unit, name, ival, pack )
integer, intent(in) :: unit
character(len=*), intent(in) :: name
integer, intent(in) :: ival
integer, intent(in), optional :: pack
call mpp_write_meta_global( unit, name, ival=(/ival/), pack=pack )
return
end subroutine mpp_write_meta_global_scalar_i
subroutine mpp_write_meta_var( unit, id, name, rval, ival, cval, pack)
!writes a metadata attribute for the variable identified by id to unit
!attribute can be an real, integer or character
!one and only one of rval, ival, and cval should be present
!the first found will be used
!for a non-netCDF file, it is encoded into a string "VARIABLE <id> <name>"
integer, intent(in) :: unit, id
character(len=*), intent(in) :: name
real, intent(in), optional :: rval(:)
integer, intent(in), optional :: ival(:)
character(len=*), intent(in), optional :: cval
integer, intent(in), optional :: pack
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_WRITE_META: must first call mpp_io_init.' )
if( .NOT. mpp_file(unit)%write_on_this_pe) then
return
endif
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'MPP_WRITE_META: invalid unit number.' )
if( mpp_file(unit)%initialized ) &
call mpp_error( FATAL, 'MPP_WRITE_META: cannot write metadata to file after an mpp_write.' )
if( mpp_file(unit)%format.EQ.MPP_NETCDF )then
call write_attribute_netcdf( unit, id, name, rval, ival, cval, pack )
else
write( text, '(a,i4,a)' )'VARIABLE ', id, ' '//name
call write_attribute( unit, trim(text), rval, ival, cval, pack )
end if
return
end subroutine mpp_write_meta_var
!versions of above to support rval and ival as scalars (because of f90 strict rank matching)
subroutine mpp_write_meta_scalar_r( unit, id, name, rval, pack )
integer, intent(in) :: unit, id
character(len=*), intent(in) :: name
real, intent(in) :: rval
integer, intent(in), optional :: pack
call mpp_write_meta( unit, id, name, rval=(/rval/), pack=pack )
return
end subroutine mpp_write_meta_scalar_r
subroutine mpp_write_meta_scalar_i( unit, id, name, ival,pack )
integer, intent(in) :: unit, id
character(len=*), intent(in) :: name
integer, intent(in) :: ival
integer, intent(in), optional :: pack
call mpp_write_meta( unit, id, name, ival=(/ival/),pack=pack )
return
end subroutine mpp_write_meta_scalar_i
subroutine mpp_write_axis_data (unit, axes )
integer, intent(in) :: unit
type(axistype), dimension(:), intent(in) :: axes
integer :: naxis
naxis = size (axes)
allocate (mpp_file(unit)%axis(naxis))
mpp_file(unit)%axis(1:naxis) = axes(1:naxis)
if( mpp_file(unit)%action.EQ.MPP_WRONLY )then
if(header_buffer_val>0) then
error = NF__ENDDEF(mpp_file(unit)%ncid,header_buffer_val,4,0,4)
else
error = NF_ENDDEF(mpp_file(unit)%ncid)
endif
endif
end subroutine mpp_write_axis_data
subroutine mpp_def_dim_nodata(unit,name,size)
integer, intent(in) :: unit
character(len=*), intent(in) :: name
integer, intent(in) :: size
integer :: error,did
! This routine assumes the file is in define mode
if(.NOT. mpp_file(unit)%write_on_this_pe) return
error = NF_DEF_DIM(mpp_file(unit)%ncid,name,size,did)
call netcdf_err(error, mpp_file(unit),string='Axis='//trim(name))
end subroutine mpp_def_dim_nodata
subroutine mpp_def_dim_int(unit,name,dsize,longname,data)
integer, intent(in) :: unit
character(len=*), intent(in) :: name
integer, intent(in) :: dsize
character(len=*), intent(in) :: longname
integer, intent(in) :: data(:)
integer :: error,did,id
! This routine assumes the file is in define mode
if(.NOT. mpp_file(unit)%write_on_this_pe) return
error = NF_DEF_DIM(mpp_file(unit)%ncid,name,dsize,did)
call netcdf_err(error, mpp_file(unit),string='Axis='//trim(name))
! Write dimension data.
error = NF_DEF_VAR( mpp_file(unit)%ncid, name, NF_INT, 1, did, id )
call netcdf_err( error, mpp_file(unit), string=' axis variable '//trim(name))
error = NF_PUT_ATT_TEXT( mpp_file(unit)%ncid, id, 'long_name', len_trim(longname), longname )
call netcdf_err( error, mpp_file(unit), string=' Attribute=long_name' )
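! Leave define mode so the dimension values can be written (netCDF variable
! data can only be written in data mode), then re-enter define mode with
! NF_REDEF further down so that later metadata definitions still work.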
if( mpp_file(unit)%action.EQ.MPP_WRONLY )then
if(header_buffer_val>0) then
error = NF__ENDDEF(mpp_file(unit)%ncid,header_buffer_val,4,0,4)
else
error = NF_ENDDEF(mpp_file(unit)%ncid)
endif
endif
call netcdf_err( error, mpp_file(unit), string=' subroutine mpp_def_dim')
error = NF_PUT_VARA_INT ( mpp_file(unit)%ncid, id, 1, size(data), data )
call netcdf_err( error, mpp_file(unit), string=' axis variable '//trim(name))
error = NF_REDEF(mpp_file(unit)%ncid)
call netcdf_err( error, mpp_file(unit), string=' subroutine mpp_def_dim')
return
end subroutine mpp_def_dim_int
subroutine mpp_def_dim_real(unit,name,dsize,longname,data)
integer, intent(in) :: unit
character(len=*), intent(in) :: name
integer, intent(in) :: dsize
character(len=*), intent(in) :: longname
real, intent(in) :: data(:)
integer :: error,did,id
! This routine assumes the file is in define mode
if(.NOT. mpp_file(unit)%write_on_this_pe) return
error = NF_DEF_DIM(mpp_file(unit)%ncid,name,dsize,did)
call netcdf_err(error, mpp_file(unit),string='Axis='//trim(name))
! Write dimension data.
error = NF_DEF_VAR( mpp_file(unit)%ncid, name, NF_FLOAT, 1, did, id )
call netcdf_err( error, mpp_file(unit), string=' axis variable '//trim(name))
error = NF_PUT_ATT_TEXT( mpp_file(unit)%ncid, id, 'long_name', len_trim(longname), longname )
call netcdf_err( error, mpp_file(unit), string=' Attribute=long_name' )
if( mpp_file(unit)%action.EQ.MPP_WRONLY )then
if(header_buffer_val>0) then
error = NF__ENDDEF(mpp_file(unit)%ncid,header_buffer_val,4,0,4)
else
error = NF_ENDDEF(mpp_file(unit)%ncid)
endif
endif
call netcdf_err( error, mpp_file(unit), string=' subroutine mpp_def_dim')
if( KIND(data).EQ.8 )then
error = NF_PUT_VARA_DOUBLE( mpp_file(unit)%ncid, id, 1, size(data), data )
else
error = NF_PUT_VARA_REAL ( mpp_file(unit)%ncid, id, 1, size(data), data )
endif
call netcdf_err( error, mpp_file(unit), string=' axis variable '//trim(name))
error = NF_REDEF(mpp_file(unit)%ncid)
call netcdf_err( error, mpp_file(unit), string=' subroutine mpp_def_dim')
return
end subroutine mpp_def_dim_real
subroutine mpp_write_meta_axis_r1d( unit, axis, name, units, longname, cartesian, sense, domain, data, min, calendar)
!load the values in an axistype (still need to call mpp_write)
!write metadata attributes for axis
!it is declared intent(inout) so you can nullify pointers in the incoming object if needed
!the f90 standard doesn't guarantee that intent(out) on a derived type leaves its pointer components unassociated
integer, intent(in) :: unit
type(axistype), intent(inout) :: axis
character(len=*), intent(in) :: name, units, longname
character(len=*), intent(in), optional :: cartesian
integer, intent(in), optional :: sense
type(domain1D), intent(in), optional :: domain
real, intent(in), optional :: data(:)
real, intent(in), optional :: min
character(len=*), intent(in), optional :: calendar
integer :: is, ie, isg, ieg
integer :: istat
logical :: domain_exist
type(domain2d), pointer :: io_domain => NULL()
! call mpp_clock_begin(mpp_write_clock)
!--- the shift and cartesian information is needed in mpp_write_meta_field from all the pe.
!--- we may revise this in the future.
axis%cartesian = 'N'
if( PRESENT(cartesian) )axis%cartesian = cartesian
domain_exist = .false.
if( PRESENT(domain) ) then
domain_exist = .true.
call mpp_get_global_domain( domain, isg, ieg )
if(mpp_file(unit)%io_domain_exist) then
io_domain => mpp_get_io_domain(mpp_file(unit)%domain)
if(axis%cartesian=='X') then
call mpp_get_global_domain( io_domain, xbegin=is, xend=ie)
else if(axis%cartesian=='Y') then
call mpp_get_global_domain( io_domain, ybegin=is, yend=ie)
endif
else
call mpp_get_compute_domain( domain, is, ie )
endif
else if( PRESENT(data) )then
isg=1; ieg=size(data(:)); is=isg; ie=ieg
endif
axis%shift = 0
if( PRESENT(data) .AND. domain_exist ) then
if( size(data(:)) == ieg-isg+2 ) then
axis%shift = 1
ie = ie + 1
ieg = ieg + 1
endif
endif
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_WRITE_META: must first call mpp_io_init.' )
if( .NOT. mpp_file(unit)%write_on_this_pe) then
! call mpp_clock_end(mpp_write_clock)
return
endif
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'MPP_WRITE_META: invalid unit number.' )
if( mpp_file(unit)%initialized ) &
call mpp_error( FATAL, 'MPP_WRITE_META: cannot write metadata to file after an mpp_write.' )
!pre-existing pointers need to be nullified
if( ASSOCIATED(axis%data) ) then
DEALLOCATE(axis%data, stat=istat)
endif
!load axistype
axis%name = name
axis%units = units
axis%longname = longname
if( PRESENT(calendar) ) axis%calendar = calendar
if( PRESENT(sense) ) axis%sense = sense
if( PRESENT(data) )then
if( mpp_file(unit)%fileset.EQ.MPP_MULTI .AND. domain_exist ) then
axis%len = ie - is + 1
allocate(axis%data(axis%len))
axis%data = data(is-isg+1:ie-isg+1)
else
axis%len = size(data(:))
allocate(axis%data(axis%len))
axis%data = data
endif
endif
!write metadata
if( mpp_file(unit)%format.EQ.MPP_NETCDF )then
!write axis def
!space axes are always floats, time axis is always double
if( ASSOCIATED(axis%data) )then !space axis
error = NF_DEF_DIM( mpp_file(unit)%ncid, axis%name, axis%len, axis%did )
call netcdf_err( error, mpp_file(unit), axis )
if(pack_size == 1) then
error = NF_DEF_VAR( mpp_file(unit)%ncid, axis%name, NF_DOUBLE, 1, axis%did, axis%id )
else ! pack_size == 2
error = NF_DEF_VAR( mpp_file(unit)%ncid, axis%name, NF_FLOAT, 1, axis%did, axis%id )
endif
call netcdf_err( error, mpp_file(unit), axis )
else !time axis
if( mpp_file(unit)%id.NE.-1 ) &
call mpp_error( FATAL, 'MPP_WRITE_META_AXIS: There is already a time axis for this file.' )
error = NF_DEF_DIM( mpp_file(unit)%ncid, axis%name, NF_UNLIMITED, axis%did )
call netcdf_err( error, mpp_file(unit), axis )
if(pack_size == 1) then
error = NF_DEF_VAR( mpp_file(unit)%ncid, axis%name, NF_DOUBLE, 1, axis%did, axis%id )
else ! pack_size == 2
error = NF_DEF_VAR( mpp_file(unit)%ncid, axis%name, NF_FLOAT, 1, axis%did, axis%id )
endif
call netcdf_err( error, mpp_file(unit), axis )
mpp_file(unit)%id = axis%id !file ID is the same as time axis varID
end if
else
varnum = varnum + 1
axis%id = varnum
axis%did = varnum
!write axis def
write( text, '(a,i4,a)' )'AXIS ', axis%id, ' name'
call write_attribute( unit, trim(text), cval=axis%name )
write( text, '(a,i4,a)' )'AXIS ', axis%id, ' size'
if( ASSOCIATED(axis%data) )then !space axis
! if( mpp_file(unit)%fileset.EQ.MPP_MULTI .AND. axis%domain.NE.NULL_DOMAIN1D )then
! call write_attribute( unit, trim(text), ival=(/ie-is+1/) )
! else
call write_attribute( unit, trim(text), ival=(/size(axis%data(:))/) )
! end if
else !time axis
if( mpp_file(unit)%id.NE.-1 ) &
call mpp_error( FATAL, 'MPP_WRITE_META_AXIS: There is already a time axis for this file.' )
call write_attribute( unit, trim(text), ival=(/0/) ) !a size of 0 indicates time axis
mpp_file(unit)%id = axis%id
end if
end if
!write axis attributes
call mpp_write_meta( unit, axis%id, 'long_name', cval=axis%longname) ; axis%natt = axis%natt + 1
if (lowercase(trim(axis%units)).ne.'none' .OR. .NOT.cf_compliance) then
call mpp_write_meta( unit, axis%id, 'units', cval=axis%units) ; axis%natt = axis%natt + 1
endif
if( PRESENT(calendar) ) then
if (.NOT.cf_compliance) then
call mpp_write_meta( unit, axis%id, 'calendar', cval=axis%calendar)
else
call mpp_write_meta( unit, axis%id, 'calendar', cval=lowercase(axis%calendar))
endif
axis%natt = axis%natt + 1
endif
if( PRESENT(cartesian) ) then
if (.NOT.cf_compliance) then
call mpp_write_meta( unit, axis%id, 'cartesian_axis', cval=axis%cartesian)
axis%natt = axis%natt + 1
else
if (trim(axis%cartesian).ne.'N') then
call mpp_write_meta( unit, axis%id, 'axis', cval=axis%cartesian)
axis%natt = axis%natt + 1
endif
endif
endif
if( PRESENT(sense) )then
if( sense.EQ.-1 )then
call mpp_write_meta( unit, axis%id, 'positive', cval='down')
axis%natt = axis%natt + 1
else if( sense.EQ.1 )then
call mpp_write_meta( unit, axis%id, 'positive', cval='up')
axis%natt = axis%natt + 1
else
! silently ignore values of sense other than +/-1.
end if
end if
if( PRESENT(min) ) then
call mpp_write_meta( unit, axis%id, 'valid_min', rval=min)
axis%natt = axis%natt + 1
endif
if( mpp_file(unit)%threading.EQ.MPP_MULTI .AND. mpp_file(unit)%fileset.EQ.MPP_MULTI .AND. domain_exist )then
call mpp_write_meta( unit, axis%id, 'domain_decomposition', ival=(/isg,ieg,is,ie/))
axis%natt = axis%natt + 1
end if
if( verbose )print '(a,2i6,x,a,2i3)', 'MPP_WRITE_META: Wrote axis metadata, pe, unit, axis%name, axis%id, axis%did=', &
pe, unit, trim(axis%name), axis%id, axis%did
mpp_file(unit)%ndim = max(1,mpp_file(unit)%ndim + 1)
! call mpp_clock_end(mpp_write_clock)
return
end subroutine mpp_write_meta_axis_r1d
subroutine mpp_write_meta_axis_i1d(unit, axis, name, units, longname, data, min, compressed)
!load the values in an axistype (still need to call mpp_write)
!write metadata attributes for axis
!it is declared intent(inout) so you can nullify pointers in the incoming object if needed
!the f90 standard doesn't guarantee that intent(out) on a derived type leaves its pointer components unassociated
integer, intent(in) :: unit
type(axistype), intent(inout) :: axis
character(len=*), intent(in) :: name, units, longname
integer, intent(in) :: data(:)
integer, intent(in), optional :: min
character(len=*), intent(in), optional :: compressed
integer :: istat
logical :: domain_exist
type(domain2d), pointer :: io_domain => NULL()
! call mpp_clock_begin(mpp_write_clock)
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_WRITE_META_I1D: must first call mpp_io_init.' )
if( .NOT. mpp_file(unit)%write_on_this_pe) then
! call mpp_clock_end(mpp_write_clock)
return
endif
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'MPP_WRITE_META: invalid unit number.' )
if( mpp_file(unit)%initialized ) &
call mpp_error( FATAL, 'MPP_WRITE_META_I1D: cannot write metadata to file after an mpp_write.' )
!pre-existing pointers need to be nullified
if( ASSOCIATED(axis%idata) ) then
DEALLOCATE(axis%idata, stat=istat)
endif
!load axistype
axis%name = name
axis%units = units
axis%longname = longname
if( PRESENT(compressed)) axis%compressed = trim(compressed)
axis%len = size(data(:))
allocate(axis%idata(axis%len))
axis%idata = data
!write metadata
if( mpp_file(unit)%format.EQ.MPP_NETCDF )then
error = NF_DEF_DIM( mpp_file(unit)%ncid, axis%name, axis%len, axis%did )
call netcdf_err( error, mpp_file(unit), axis )
error = NF_DEF_VAR( mpp_file(unit)%ncid, axis%name, NF_INT, 1, axis%did, axis%id )
call netcdf_err( error, mpp_file(unit), axis )
else
call mpp_error( FATAL, 'MPP_WRITE_META_AXIS_I1D: Only netCDF format is currently supported.' )
end if
!write axis attributes
call mpp_write_meta( unit, axis%id, 'long_name', cval=axis%longname) ; axis%natt = axis%natt + 1
if (lowercase(trim(axis%units)).ne.'none' .OR. .NOT.cf_compliance) then
call mpp_write_meta( unit, axis%id, 'units', cval=axis%units) ; axis%natt = axis%natt + 1
endif
if( PRESENT(compressed) ) then
call mpp_write_meta( unit, axis%id, 'compress', cval=axis%compressed)
axis%natt = axis%natt + 1
endif
if( PRESENT(min) ) then
call mpp_write_meta( unit, axis%id, 'valid_min', ival=min)
axis%natt = axis%natt + 1
endif
if( verbose )print '(a,2i6,x,a,2i3)', 'MPP_WRITE_META: Wrote axis metadata, pe, unit, axis%name, axis%id, axis%did=', &
pe, unit, trim(axis%name), axis%id, axis%did
mpp_file(unit)%ndim = max(1,mpp_file(unit)%ndim + 1)
! call mpp_clock_end(mpp_write_clock)
return
end subroutine mpp_write_meta_axis_i1d
subroutine mpp_write_meta_axis_unlimited(unit, axis, name, data, unlimited, units, longname)
!load the values in an axistype (still need to call mpp_write)
!write metadata attributes for axis
!it is declared intent(inout) so you can nullify pointers in the incoming object if needed
!the f90 standard doesn't guarantee that intent(out) on a derived type leaves its pointer components unassociated
integer, intent(in) :: unit
type(axistype), intent(inout) :: axis
character(len=*), intent(in) :: name
integer, intent(in) :: data ! Number of elements to be written
logical, intent(in) :: unlimited ! Provides unique arg signature
character(len=*), intent(in), optional :: units, longname
integer :: istat
logical :: domain_exist
type(domain2d), pointer :: io_domain => NULL()
! call mpp_clock_begin(mpp_write_clock)
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_WRITE_META_AXIS_UNLIMITED: must first call mpp_io_init.' )
if( .NOT. mpp_file(unit)%write_on_this_pe) then
! call mpp_clock_end(mpp_write_clock)
return
endif
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'MPP_WRITE_META: invalid unit number.' )
if( mpp_file(unit)%initialized ) &
call mpp_error( FATAL, 'MPP_WRITE_META_AXIS_UNLIMITED: cannot write metadata to file after an mpp_write.' )
!load axistype
axis%name = name
if(present(units)) axis%units = units
if(present(longname)) axis%longname = longname
axis%len = 1
allocate(axis%idata(1))
axis%idata = data
!write metadata
if( mpp_file(unit)%format.EQ.MPP_NETCDF )then
error = NF_DEF_DIM( mpp_file(unit)%ncid, axis%name, NF_UNLIMITED, axis%did )
call netcdf_err( error, mpp_file(unit), axis )
error = NF_DEF_VAR( mpp_file(unit)%ncid, axis%name, NF_INT, 0, axis%did, axis%id )
call netcdf_err( error, mpp_file(unit), axis )
else
call mpp_error( FATAL, 'MPP_WRITE_META_AXIS_UNLIMITED: Only netCDF format is currently supported.' )
end if
!write axis attributes
if(present(longname)) then
call mpp_write_meta(unit,axis%id,'long_name',cval=axis%longname); axis%natt=axis%natt+1
endif
if(present(units)) then
if (lowercase(trim(axis%units)).ne.'none' .OR. .NOT.cf_compliance) then
call mpp_write_meta(unit,axis%id,'units', cval=axis%units); axis%natt=axis%natt+1
endif
endif
if( verbose )print '(a,2i6,x,a,2i3)', &
'MPP_WRITE_META_UNLIMITED: Wrote axis metadata, pe, unit, axis%name, axis%id, axis%did=', &
pe, unit, trim(axis%name), axis%id, axis%did
mpp_file(unit)%ndim = max(1,mpp_file(unit)%ndim + 1)
! call mpp_clock_end(mpp_write_clock)
return
end subroutine mpp_write_meta_axis_unlimited
subroutine mpp_write_meta_field( unit, field, axes, name, units, longname,&
min, max, missing, fill, scale, add, pack, time_method, standard_name, checksum)
!define field: must have already called mpp_write_meta(axis) for each axis
integer, intent(in) :: unit
type(fieldtype), intent(inout) :: field
type(axistype), intent(in) :: axes(:)
character(len=*), intent(in) :: name, units, longname
real, intent(in), optional :: min, max, missing, fill, scale, add
integer, intent(in), optional :: pack
character(len=*), intent(in), optional :: time_method
character(len=*), intent(in), optional :: standard_name
integer(8), dimension(:), intent(in), optional :: checksum
!this array is required because of f77 binding on netCDF interface
integer, allocatable :: axis_id(:)
real :: a, b
integer :: i, istat, ishift, jshift
character(len=64) :: checksum_char
! call mpp_clock_begin(mpp_write_clock)
!--- figure out the location of data, this is needed in mpp_write.
!--- for NON-symmetry domain, the position is not an issue.
!--- we may need to rethink how to address the symmetric issue.
ishift = 0; jshift = 0
do i = 1, size(axes(:))
select case ( lowercase( axes(i)%cartesian ) )
case ( 'x' )
ishift = axes(i)%shift
case ( 'y' )
jshift = axes(i)%shift
end select
end do
field%position = CENTER
if(ishift == 1 .AND. jshift == 1) then
field%position = CORNER
else if(ishift == 1) then
field%position = EAST
else if(jshift == 1) then
field%position = NORTH
endif
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_WRITE_META: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%write_on_this_pe) then
if( .NOT. ASSOCIATED(field%axes) )allocate(field%axes(1)) !temporary fix
! call mpp_clock_end(mpp_write_clock)
return
endif
if( .NOT.mpp_file(unit)%opened ) call mpp_error( FATAL, 'MPP_WRITE_META: invalid unit number.' )
if( mpp_file(unit)%initialized ) then
! File has already been written to and needs to be returned to define mode.
error = NF_REDEF(mpp_file(unit)%ncid)
mpp_file(unit)%initialized = .false.
endif
! call mpp_error( FATAL, 'MPP_WRITE_META: cannot write metadata to file after an mpp_write.' )
!pre-existing pointers need to be nullified
if( ASSOCIATED(field%axes) ) DEALLOCATE(field%axes, stat=istat)
if( ASSOCIATED(field%size) ) DEALLOCATE(field%size, stat=istat)
!fill in field metadata
field%name = name
field%units = units
field%longname = longname
allocate( field%axes(size(axes(:))) )
field%axes = axes
field%ndim = size(axes(:))
field%time_axis_index = -1 !this value will never match any axis index
!size is buffer area for the corresponding axis info: it is required to buffer this info in the fieldtype
!because axis might be reused in different files
allocate( field%size(size(axes(:))) )
do i = 1,size(axes(:))
if( ASSOCIATED(axes(i)%data) )then !space axis
field%size(i) = size(axes(i)%data(:))
else !time
field%size(i) = 1
field%time_axis_index = i
end if
end do
!attributes
if( PRESENT(min) ) field%min = min
if( PRESENT(max) ) field%max = max
if( PRESENT(scale) ) field%scale = scale
if( PRESENT(add) ) field%add = add
if( PRESENT(standard_name)) field%standard_name = standard_name
if( PRESENT(missing) ) field%missing = missing
if( PRESENT(fill) ) field%fill = fill
field%checksum = 0
if( PRESENT(checksum) ) field%checksum(1:size(checksum)) = checksum(:)
! Issue warning if fill and missing are different
if (present(fill).and.present(missing)) then
if (field%missing .ne. field%fill) then
call mpp_error(WARNING, 'MPP_WRITE_META: NetCDF attributes &
&_FillValue and missing_value should be equal.')
end if
end if
!pack is currently used only for netCDF
field%pack = 2 !default write 32-bit floats
if( PRESENT(pack) )field%pack = pack
if( mpp_file(unit)%format.EQ.MPP_NETCDF )then
allocate( axis_id(size(field%axes(:))) )
do i = 1,size(field%axes(:))
axis_id(i) = field%axes(i)%did
end do
!write field def
select case (field%pack)
case(0)
error = NF_DEF_VAR( mpp_file(unit)%ncid, field%name, NF_INT, size(field%axes(:)), axis_id, field%id )
case(1)
error = NF_DEF_VAR( mpp_file(unit)%ncid, field%name, NF_DOUBLE, size(field%axes(:)), axis_id, field%id )
case(2)
error = NF_DEF_VAR( mpp_file(unit)%ncid, field%name, NF_FLOAT, size(field%axes(:)), axis_id, field%id )
case(4)
if( .NOT.PRESENT(scale) .OR. .NOT.PRESENT(add) ) &
call mpp_error( FATAL, 'MPP_WRITE_META_FIELD: scale and add must be supplied when pack=4.' )
error = NF_DEF_VAR( mpp_file(unit)%ncid, field%name, NF_SHORT, size(field%axes(:)), axis_id, field%id )
case(8)
if( .NOT.PRESENT(scale) .OR. .NOT.PRESENT(add) ) &
call mpp_error( FATAL, 'MPP_WRITE_META_FIELD: scale and add must be supplied when pack=8.' )
error = NF_DEF_VAR( mpp_file(unit)%ncid, field%name, NF_BYTE, size(field%axes(:)), axis_id, field%id )
case default
call mpp_error( FATAL, 'MPP_WRITE_META_FIELD: only legal packing values are 1,2,4,8.' )
end select
call netcdf_err( error, mpp_file(unit), field=field )
deallocate(axis_id)
if(shuffle .NE. 0 .OR. deflate .NE. 0) then
error = NF_DEF_VAR_DEFLATE(mpp_file(unit)%ncid, field%id, shuffle, deflate, deflate_level)
call netcdf_err( error, mpp_file(unit), field=field )
endif
else
varnum = varnum + 1
field%id = varnum
if( PRESENT(pack) )call mpp_error( WARNING, 'MPP_WRITE_META: Packing is currently available only on netCDF files.' )
!write field def
write( text, '(a,i4,a)' )'FIELD ', field%id, ' name'
call write_attribute( unit, trim(text), cval=field%name )
write( text, '(a,i4,a)' )'FIELD ', field%id, ' axes'
call write_attribute( unit, trim(text), ival=field%axes(:)%did )
end if
!write field attributes: these names follow netCDF conventions
call mpp_write_meta( unit, field%id, 'long_name', cval=field%longname)
if (lowercase(trim(field%units)).ne.'none' .OR. .NOT.cf_compliance) then
call mpp_write_meta( unit, field%id, 'units', cval=field%units)
endif
!all real attributes must be written as packed
if( PRESENT(min) .AND. PRESENT(max) )then
if( field%pack.EQ.1 .OR. field%pack.EQ.2 )then
call mpp_write_meta( unit, field%id, 'valid_range', rval=(/min,max/), pack=pack )
else
a = nint((min-add)/scale)
b = nint((max-add)/scale)
call mpp_write_meta( unit, field%id, 'valid_range', rval=(/a, b /), pack=pack )
end if
else if( PRESENT(min) )then
if( field%pack.EQ.1 .OR. field%pack.EQ.2 )then
call mpp_write_meta( unit, field%id, 'valid_min', rval=field%min, pack=pack )
else
a = nint((min-add)/scale)
call mpp_write_meta( unit, field%id, 'valid_min', rval=a, pack=pack )
end if
else if( PRESENT(max) )then
if( field%pack.EQ.1 .OR. field%pack.EQ.2 )then
call mpp_write_meta( unit, field%id, 'valid_max', rval=field%max, pack=pack )
else
a = nint((max-add)/scale)
call mpp_write_meta( unit, field%id, 'valid_max', rval=a, pack=pack )
end if
end if
! write missing_value
if ( present(missing) ) then
if( field%pack.EQ.1 .OR. field%pack.EQ.2 )then
call mpp_write_meta( unit, field%id, 'missing_value', rval=field%missing, pack=pack )
else
a = nint((missing-add)/scale)
call mpp_write_meta( unit, field%id, 'missing_value', rval=a, pack=pack )
end if
end if
! write _FillValue
if ( present(fill) ) then
if( field%pack.EQ.1 .OR. field%pack.EQ.2 )then
call mpp_write_meta( unit, field%id, '_FillValue', rval=field%fill, pack=pack )
else if (field%pack==0) then ! some safety checks for integer fills
if ( present(scale).OR.present(add) ) then
call mpp_error(FATAL,"add,scale not currently implimented for pack=0 int handling, try reals instead.")
else
! Trust No One
call mpp_write_meta( unit, field%id, '_FillValue', ival=MPP_FILL_INT, pack=pack )
end if
else
a = nint((fill-add)/scale)
call mpp_write_meta( unit, field%id, '_FillValue', rval=a, pack=pack )
end if
end if
if( field%pack.NE.1 .AND. field%pack.NE.2 )then
call mpp_write_meta( unit, field%id, 'packing', ival=field%pack )
if( PRESENT(scale) )call mpp_write_meta( unit, field%id, 'scale_factor', rval=field%scale )
if( PRESENT(add) )call mpp_write_meta( unit, field%id, 'add_offset', rval=field%add )
end if
if( present(checksum) )then
write (checksum_char,'(Z16)') field%checksum(1)
do i = 2,size(checksum)
write (checksum_char,'(a,Z16)') trim(checksum_char)//",",checksum(i)
enddo
call mpp_write_meta( unit, field%id, 'checksum', cval=checksum_char )
end if
if ( PRESENT(time_method) ) then
call mpp_write_meta(unit,field%id, 'cell_methods',cval='time: '//trim(time_method))
endif
if ( PRESENT(standard_name)) &
call mpp_write_meta(unit,field%id,'standard_name', cval=field%standard_name)
if( verbose )print '(a,2i6,x,a,i3)', 'MPP_WRITE_META: Wrote field metadata: pe, unit, field%name, field%id=', &
pe, unit, trim(field%name), field%id
! call mpp_clock_end(mpp_write_clock)
return
end subroutine mpp_write_meta_field
subroutine write_attribute( unit, name, rval, ival, cval, pack )
!called to write metadata for non-netCDF I/O
integer, intent(in) :: unit
character(len=*), intent(in) :: name
real, intent(in), optional :: rval(:)
integer, intent(in), optional :: ival(:)
character(len=*), intent(in), optional :: cval
!pack is currently ignored in this routine: only used by netCDF I/O
integer, intent(in), optional :: pack
if( mpp_file(unit)%nohdrs )return
!encode text string
if( PRESENT(rval) )then
write( text,* )trim(name)//'=', rval
else if( PRESENT(ival) )then
write( text,* )trim(name)//'=', ival
else if( PRESENT(cval) )then
text = ' '//trim(name)//'='//trim(cval)
else
call mpp_error( FATAL, 'WRITE_ATTRIBUTE: one of rval, ival, cval must be present.' )
end if
if( mpp_file(unit)%format.EQ.MPP_ASCII )then
!implies sequential access
write( unit,fmt='(a)' )trim(text)//char(10)
else !MPP_IEEE32 or MPP_NATIVE
if( mpp_file(unit)%access.EQ.MPP_SEQUENTIAL )then
write(unit)trim(text)//char(10)
else !MPP_DIRECT
write( unit,rec=mpp_file(unit)%record )trim(text)//char(10)
if( verbose )print '(a,i6,a,i3)', 'WRITE_ATTRIBUTE: PE=', pe, ' wrote record ', mpp_file(unit)%record
mpp_file(unit)%record = mpp_file(unit)%record + 1
end if
end if
return
end subroutine write_attribute
subroutine write_attribute_netcdf( unit, id, name, rval, ival, cval, pack )
!called to write metadata for netCDF I/O
integer, intent(in) :: unit
integer, intent(in) :: id
character(len=*), intent(in) :: name
real, intent(in), optional :: rval(:)
integer, intent(in), optional :: ival(:)
character(len=*), intent(in), optional :: cval
integer, intent(in), optional :: pack
integer, allocatable :: rval_i(:)
if( PRESENT(rval) )then
!pack was only meaningful for FP numbers, but is now extended by the ival branch of this routine
if( PRESENT(pack) )then
if( pack== 0 ) then !! here be dragons, use ival branch!...
if( KIND(rval).EQ.8 )then
call mpp_error( FATAL, &
'WRITE_ATTRIBUTE_NETCDF: attempting to write internal NF_INT, currently int32, as double.' )
error = NF_PUT_ATT_DOUBLE( mpp_file(unit)%ncid, id, name, NF_DOUBLE, size(rval(:)), rval )
else if( KIND(rval).EQ.4 )then
call mpp_error( FATAL, &
'WRITE_ATTRIBUTE_NETCDF: attempting to write internal NF_INT, currently int32, as float.' )
error = NF_PUT_ATT_REAL ( mpp_file(unit)%ncid, id, name, NF_DOUBLE, size(rval(:)), rval )
end if
call netcdf_err( error, mpp_file(unit), string=' Attribute='//name )
else if( pack.EQ.1 )then
if( KIND(rval).EQ.8 )then
error = NF_PUT_ATT_DOUBLE( mpp_file(unit)%ncid, id, name, NF_DOUBLE, size(rval(:)), rval )
else if( KIND(rval).EQ.4 )then
call mpp_error( WARNING, &
'WRITE_ATTRIBUTE_NETCDF: attempting to write internal 32-bit real as external 64-bit.' )
error = NF_PUT_ATT_REAL ( mpp_file(unit)%ncid, id, name, NF_DOUBLE, size(rval(:)), rval )
end if
call netcdf_err( error, mpp_file(unit), string=' Attribute='//name )
else if( pack.EQ.2 )then
if( KIND(rval).EQ.8 )then
error = NF_PUT_ATT_DOUBLE( mpp_file(unit)%ncid, id, name, NF_FLOAT, size(rval(:)), rval )
else if( KIND(rval).EQ.4 )then
error = NF_PUT_ATT_REAL ( mpp_file(unit)%ncid, id, name, NF_FLOAT, size(rval(:)), rval )
end if
call netcdf_err( error, mpp_file(unit), string=' Attribute='//name )
else if( pack.EQ.4 )then
allocate( rval_i(size(rval(:))) )
rval_i = rval
if( KIND(rval).EQ.8 )then
error = NF_PUT_ATT_DOUBLE( mpp_file(unit)%ncid, id, name, NF_SHORT, size(rval_i(:)), rval )
else if( KIND(rval).EQ.4 )then
error = NF_PUT_ATT_REAL ( mpp_file(unit)%ncid, id, name, NF_SHORT, size(rval_i(:)), rval )
end if
call netcdf_err( error, mpp_file(unit), string=' Attribute='//name )
deallocate(rval_i)
else if( pack.EQ.8 )then
allocate( rval_i(size(rval(:))) )
rval_i = rval
if( KIND(rval).EQ.8 )then
error = NF_PUT_ATT_DOUBLE( mpp_file(unit)%ncid, id, name, NF_BYTE, size(rval_i(:)), rval )
else if( KIND(rval).EQ.4 )then
error = NF_PUT_ATT_REAL ( mpp_file(unit)%ncid, id, name, NF_BYTE, size(rval_i(:)), rval )
end if
call netcdf_err( error, mpp_file(unit), string=' Attribute='//name )
deallocate(rval_i)
else
call mpp_error( FATAL, 'WRITE_ATTRIBUTE_NETCDF: only legal packing values are 1,2,4,8.' )
end if
else
!default is to write FLOATs (32-bit)
if( KIND(rval).EQ.8 )then
error = NF_PUT_ATT_DOUBLE( mpp_file(unit)%ncid, id, name, NF_FLOAT, size(rval(:)), rval )
else if( KIND(rval).EQ.4 )then
error = NF_PUT_ATT_REAL ( mpp_file(unit)%ncid, id, name, NF_FLOAT, size(rval(:)), rval )
end if
call netcdf_err( error, mpp_file(unit), string=' Attribute='//name )
end if
else if( PRESENT(ival) )then
if( PRESENT(pack) ) then
if (pack ==0) then
if (KIND(ival).EQ.8 ) then
call mpp_error(FATAL,'only use NF_INTs with pack=0 for now')
end if
error = NF_PUT_ATT_INT( mpp_file(unit)%ncid, id, name, NF_INT, size(ival(:)), ival ) !!XXX int32_t..
call netcdf_err( error, mpp_file(unit), string=' Attribute='//name )
else
call mpp_error( FATAL, 'WRITE_ATTRIBUTE_NETCDF: ints are only implemented when pack=0, else use reals.' )
endif
else
error = NF_PUT_ATT_INT ( mpp_file(unit)%ncid, id, name, NF_INT, size(ival(:)), ival )
call netcdf_err( error, mpp_file(unit), string=' Attribute='//name )
end if
else if( present(cval) )then
if (.NOT.cf_compliance .or. trim(name).NE.'calendar') then
error = NF_PUT_ATT_TEXT( mpp_file(unit)%ncid, id, name, len_trim(cval), cval )
else
error = NF_PUT_ATT_TEXT( mpp_file(unit)%ncid, id, name, len_trim(cval), lowercase(cval) )
endif
call netcdf_err( error, mpp_file(unit), string=' Attribute='//name )
else
call mpp_error( FATAL, 'WRITE_ATTRIBUTE_NETCDF: one of rval, ival, cval must be present.' )
end if
return
end subroutine write_attribute_netcdf
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! !
! MPP_WRITE !
! !
! mpp_write is used to write data to the file on unit using the       !
! file parameters supplied by mpp_open(). Axis and field definitions !
! must have previously been written to the file using mpp_write_meta. !
! !
! mpp_write can take 2 forms, one for distributed data and one for !
! non-distributed data. Distributed data refer to arrays whose two !
! fastest-varying indices are domain-decomposed. Distributed data !
! must be 2D or 3D (in space). Non-distributed data can be 0-3D. !
! !
! In all calls to mpp_write, tstamp is an optional argument. It is to !
! be omitted if the field was defined not to be a function of time. !
! Results are unpredictable if the argument is supplied for a time- !
! independent field, or omitted for a time-dependent field. Repeated !
! writes of a time-independent field are also not recommended. One !
! time level of one field is written per call. !
! !
! !
! For non-distributed data, use !
! !
! mpp_write( unit, field, data, tstamp ) !
! integer, intent(in) :: unit !
! type(fieldtype), intent(in) :: field !
! real(8), optional :: tstamp !
! data is real and can be scalar or of rank 1-3. !
! !
! For distributed data, use !
! !
! mpp_write( unit, field, domain, data, tstamp ) !
! integer, intent(in) :: unit !
! type(fieldtype), intent(in) :: field !
! type(domain2D), intent(in) :: domain !
! real(8), optional :: tstamp !
! data is real and can be of rank 2 or 3. !
! !
! mpp_write( unit, axis ) !
! integer, intent(in) :: unit !
! type(axistype), intent(in) :: axis !
! !
! This call writes the actual co-ordinate values along each space !
! axis. It must be called once for each space axis after all other !
! metadata has been written. !
! !
! The mpp_write package also includes the routine write_record which !
! performs the actual write. This routine is private to this module. !
! !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
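!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!                                                                      !
! Illustrative calling sequence (a sketch only; unit, x_axis,          !
! sst_field, domain, sst and day are hypothetical and assumed to have  !
! been set up with mpp_open and mpp_write_meta as described above):    !
!                                                                      !
!   call mpp_write( unit, x_axis )        !write coordinate values     !
!   do n = 1, ntimes                                                   !
!      call mpp_write( unit, sst_field, domain, sst, tstamp=day(n) )   !
!   end do                                                             !
!   call mpp_close( unit )                                             !
!                                                                      !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!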
# 1 "../mpp/include/mpp_write_2Ddecomp.h" 1
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see .
!***********************************************************************
subroutine write_record_default( unit, field, nwords, data, time_in, domain, tile_count)
!routine that is finally called by all mpp_write routines to perform the write
!a non-netCDF record contains:
! field ID
! a set of 4 coordinates (is:ie,js:je) giving the data subdomain
! a timelevel and a timestamp (=NULLTIME if field is static)
! 3D real data (stored as 1D)
!if you are using direct access I/O, the RECL argument to OPEN must be large enough for the above
!in a global direct access file, record position on each PE is given by mpp_file(unit)%record.
!Treatment of timestamp:
! We assume that static fields have been passed without a timestamp.
! Here that is converted into a timestamp of NULLTIME.
! For non-netCDF fields, field is treated no differently, but is written
! with a timestamp of NULLTIME. There is no check in the code to prevent
! the user from repeatedly writing a static field.
integer, intent(in) :: unit, nwords
type(fieldtype), intent(in) :: field
real, intent(in) :: data(nwords)
real, intent(in), optional :: time_in
type(domain2D), intent(in), optional :: domain
integer, intent(in), optional :: tile_count
integer, dimension(size(field%axes(:))) :: start, axsiz
real(8) :: time
integer :: time_level
logical :: newtime
integer :: subdomain(4)
integer :: packed_data(nwords)
integer :: i, is, ie, js, je
real(4) :: data_r4(nwords)
pointer( ptr1, data_r4)
pointer( ptr2, packed_data)
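!data_r4 and packed_data are Cray pointees: ptr1 and ptr2 are set to
!LOC(mpp_io_stack(1)) further down, so both buffers alias the shared
!mpp_io_stack workspace (grown just below if needed) rather than being
!declared as large automatic temporaries.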
if (mpp_io_stack_size < nwords) call mpp_io_set_stack_size(nwords)
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_WRITE: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%write_on_this_pe) return
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'MPP_WRITE: invalid unit number.' )
if( .NOT.mpp_file(unit)%initialized )then
!this is the first call to mpp_write
!we now declare the file to be initialized
!if this is netCDF we switch file from DEFINE mode to DATA mode
if( mpp_file(unit)%format.EQ.MPP_NETCDF )then
!NOFILL is probably required for parallel: any circumstances in which not advisable?
error = NF_SET_FILL( mpp_file(unit)%ncid, NF_NOFILL, i ); call netcdf_err( error, mpp_file(unit) )
if( mpp_file(unit)%action.EQ.MPP_WRONLY )then
if(header_buffer_val>0) then
error = NF__ENDDEF(mpp_file(unit)%ncid,header_buffer_val,4,0,4)
else
error = NF_ENDDEF(mpp_file(unit)%ncid)
endif
endif
call netcdf_err( error, mpp_file(unit) )
else
call mpp_write_meta( unit, 'END', cval='metadata' )
end if
mpp_file(unit)%initialized = .TRUE.
if( verbose )print '(a,i6,a)', 'MPP_WRITE: PE=', pe, ' initialized file '//trim(mpp_file(unit)%name)//'.'
end if
!initialize time: by default assume NULLTIME
time = NULLTIME
time_level = -1
newtime = .FALSE.
if( PRESENT(time_in) )time = time_in
!increment time level if new time
if( time.GT.mpp_file(unit)%time+EPSILON(time) )then !new time
mpp_file(unit)%time_level = mpp_file(unit)%time_level + 1
mpp_file(unit)%time = time
newtime = .TRUE.
end if
if( verbose )print '(a,2i6,2i5,es13.5)', 'MPP_WRITE: PE, unit, %id, %time_level, %time=',&
pe, unit, mpp_file(unit)%id, mpp_file(unit)%time_level, mpp_file(unit)%time
if( mpp_file(unit)%format.EQ.MPP_NETCDF )then
ptr2 = LOC(mpp_io_stack(1))
!define netCDF data block to be written:
! time axis: START = time level
! AXSIZ = 1
! space axis: if there is no domain info
! START = 1
! AXSIZ = field%size(axis)
! if there IS domain info:
! start of domain is compute%start_index for multi-file I/O
! global%start_index for all other cases
! this number must be converted to 1 for NF_PUT_VAR
! (netCDF fortran calls are with reference to 1),
! So, START = compute%start_index - <start of domain> + 1
! AXSIZ = usually compute%size
! However, if compute%start_index-compute%end_index+1.NE.compute%size,
! we assume that the call is passing a subdomain.
! To pass a subdomain, you must pass a domain2D object that satisfies the following:
! global%start_index must contain the <start of domain> as defined above;
! the data domain and compute domain must refer to the subdomain being passed.
! In this case, START = compute%start_index - <start of domain> + 1
! AXSIZ = compute%start_index - compute%end_index + 1
! NOTE: passing of subdomains will fail for multi-PE single-threaded I/O,
! since that attempts to gather all data on PE 0.
start = 1
do i = 1,size(field%axes(:))
axsiz(i) = field%size(i)
if( i.EQ.field%time_axis_index )start(i) = mpp_file(unit)%time_level
start(i) = max(start(i),1)
end do
if( debug )print '(a,2i6,12i6)', 'WRITE_RECORD: PE, unit, start, axsiz=', pe, unit, start, axsiz
!write time information if new time
if( newtime )then
if( KIND(time).EQ.8 )then
error = NF_PUT_VAR1_DOUBLE( mpp_file(unit)%ncid, mpp_file(unit)%id, mpp_file(unit)%time_level, time )
else if( KIND(time).EQ.4 )then
error = NF_PUT_VAR1_REAL ( mpp_file(unit)%ncid, mpp_file(unit)%id, mpp_file(unit)%time_level, time )
end if
end if
if( field%pack == 0 )then
packed_data = CEILING(data)
error = NF_PUT_VARA_INT ( mpp_file(unit)%ncid, field%id, start, axsiz, packed_data )
elseif( field%pack.GT.0 .and. field%pack.LE.2 )then
if( KIND(data).EQ.8 )then
error = NF_PUT_VARA_DOUBLE( mpp_file(unit)%ncid, field%id, start, axsiz, data )
else if( KIND(data).EQ.4 )then
error = NF_PUT_VARA_REAL ( mpp_file(unit)%ncid, field%id, start, axsiz, data )
end if
else !convert to integer using scale and add: no error check on packed data representation
packed_data = nint((data-field%add)/field%scale)
error = NF_PUT_VARA_INT ( mpp_file(unit)%ncid, field%id, start, axsiz, packed_data )
end if
call netcdf_err( error, mpp_file(unit), field=field )
else !non-netCDF
ptr1 = LOC(mpp_io_stack(1))
!subdomain contains (/is,ie,js,je/)
if( PRESENT(domain) )then
call mpp_get_compute_domain(domain, is, ie, js, je)
subdomain(:) = (/ is, ie, js, je /)
else
subdomain(:) = -1 ! -1 means use global value from axis metadata
end if
if( mpp_file(unit)%format.EQ.MPP_ASCII )then
!implies sequential access
write( unit,* )field%id, subdomain, time_level, time, data
else !MPP_IEEE32 or MPP_NATIVE
if( mpp_file(unit)%access.EQ.MPP_SEQUENTIAL )then
# 174
write(unit)field%id, subdomain, time_level, time, data
else !MPP_DIRECT
# 185
write( unit, rec=mpp_file(unit)%record )field%id, subdomain, time_level, time, data
if( debug )print '(a,i6,a,i6)', 'MPP_WRITE: PE=', pe, ' wrote record ', mpp_file(unit)%record
end if
end if
end if
!recompute current record for direct access I/O
if( mpp_file(unit)%access.EQ.MPP_DIRECT )then
if( mpp_file(unit)%fileset.EQ.MPP_SINGLE )then
!assumes all PEs participate in I/O: modify later
mpp_file(unit)%record = mpp_file(unit)%record + records_per_pe*npes
else
mpp_file(unit)%record = mpp_file(unit)%record + records_per_pe
end if
end if
return
end subroutine write_record_default
subroutine mpp_write_2ddecomp_r2d( unit, field, domain, data, tstamp, tile_count, default_data)
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
type(domain2D), intent(inout) :: domain
real, intent(inout) :: data(:,:)
real, intent(in), optional :: tstamp
integer, intent(in), optional :: tile_count
real, intent(in), optional :: default_data
real :: data3D(size(data,1),size(data,2),1)
pointer( ptr, data3D )
ptr = LOC(data)
call mpp_write( unit, field, domain, data3D, tstamp, tile_count, default_data)
return
end subroutine mpp_write_2ddecomp_r2d
subroutine mpp_write_2ddecomp_r3d( unit, field, domain, data, tstamp, tile_count, default_data)
!mpp_write writes data which has the domain decomposition
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
type(domain2D), intent(inout) :: domain
real, intent(inout) :: data(:,:,:)
real, intent(in), optional :: tstamp
integer, intent(in), optional :: tile_count
real, intent(in), optional :: default_data
!cdata is used to store compute domain as contiguous data
!gdata is used to globalize data for multi-PE single-threaded I/O
real, allocatable, dimension(:,:,:) :: cdata, gdata
!NEW: data may be on compute OR data domain
logical :: data_has_halos, halos_are_global, x_is_global, y_is_global
integer :: is, ie, js, je, isd, ied, jsd, jed, isg, ieg, jsg, jeg, ism, iem, jsm, jem
integer :: position, errunit
type(domain2d), pointer :: io_domain=>NULL()
call mpp_clock_begin(mpp_write_clock)
errunit = stderr()
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_WRITE: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%valid )call mpp_error( FATAL, 'MPP_WRITE: invalid unit number.' )
position = field%position
call mpp_get_compute_domain( domain, is, ie, js, je, tile_count=tile_count, position=position )
call mpp_get_data_domain ( domain, isd, ied, jsd, jed, x_is_global=x_is_global, &
y_is_global=y_is_global, tile_count=tile_count, position=position )
call mpp_get_memory_domain ( domain, ism, iem, jsm, jem, position=position )
if( size(data,1).EQ.ie-is+1 .AND. size(data,2).EQ.je-js+1 )then
data_has_halos = .FALSE.
else if( size(data,1).EQ.iem-ism+1 .AND. size(data,2).EQ.jem-jsm+1 )then
data_has_halos = .TRUE.
else
write( errunit,'(a,10i5)' )'MPP_WRITE_2DDECOMP fails on field '//trim(field%name)// &
': is,ie,js,je, ism,iem,jsm,jem, size(data,1), size(data,2)=', &
is,ie,js,je, ism,iem,jsm,jem, size(data,1), size(data,2)
call mpp_error( FATAL, 'MPP_WRITE: data must be either on compute domain or data domain.' )
end if
halos_are_global = x_is_global .AND. y_is_global
if( npes.GT.1 .AND. mpp_file(unit)%threading.EQ.MPP_SINGLE )then
if( halos_are_global )then
call mpp_update_domains( data, domain, position = position )
!all non-0 PEs have passed their data to PE 0 and may now exit
if(mpp_file(unit)%write_on_this_pe ) then
call write_record_default( unit, field, size(data(:,:,:)), data, tstamp)
endif
else
!put field onto global domain
call mpp_get_global_domain ( domain, isg, ieg, jsg, jeg, tile_count=tile_count, position=position )
if(mpp_file(unit)%write_on_this_pe .OR. .NOT. global_field_on_root_pe) then
allocate( gdata(isg:ieg,jsg:jeg,size(data,3)) )
else
allocate( gdata(1,1,1))
endif
if(global_field_on_root_pe) then
call mpp_global_field( domain, data, gdata, position = position, &
flags=XUPDATE+YUPDATE+GLOBAL_ROOT_ONLY, &
default_data=default_data)
else
call mpp_global_field( domain, data, gdata, position = position, &
default_data=default_data)
endif
!all non-0 PEs have passed their data to PE 0 and may now exit
if(mpp_file(unit)%write_on_this_pe ) then
call write_record_default( unit, field, size(gdata(:,:,:)), gdata, tstamp)
endif
deallocate(gdata)
end if
else if(mpp_file(unit)%io_domain_exist ) then
if( halos_are_global )then
call mpp_update_domains( data, domain, position = position )
if(mpp_file(unit)%write_on_this_pe ) then
call write_record_default( unit, field, size(data(:,:,:)), data, tstamp)
endif
else
io_domain=>mpp_get_io_domain(mpp_file(unit)%domain)
call mpp_get_global_domain ( io_domain, isg, ieg, jsg, jeg, tile_count=tile_count, position=position )
if(mpp_file(unit)%write_on_this_pe .OR. .NOT. global_field_on_root_pe) then
allocate( gdata(isg:ieg,jsg:jeg,size(data,3)) )
else
allocate( gdata(1,1,1))
endif
if(global_field_on_root_pe) then
call mpp_global_field( io_domain, data, gdata, position = position, &
flags=XUPDATE+YUPDATE+GLOBAL_ROOT_ONLY, &
default_data=default_data)
else
call mpp_global_field( io_domain, data, gdata, position = position, &
default_data=default_data)
endif
io_domain => NULL()
if(mpp_file(unit)%write_on_this_pe ) then
call write_record_default( unit, field, size(gdata(:,:,:)), gdata, tstamp)
endif
deallocate( gdata )
endif
else if( data_has_halos )then
!store compute domain as contiguous data and pass to write_record
allocate( cdata(is:ie,js:je,size(data,3)) )
cdata(:,:,:) = data(is-isd+1:ie-isd+1,js-jsd+1:je-jsd+1,:)
call write_record_default( unit, field, size(cdata(:,:,:)), cdata, tstamp, domain, tile_count )
else
!data is already contiguous
call write_record_default( unit, field, size(data(:,:,:)), data, tstamp, domain, tile_count )
end if
call mpp_clock_end(mpp_write_clock)
return
end subroutine mpp_write_2ddecomp_r3d
subroutine mpp_write_2ddecomp_r4d( unit, field, domain, data, tstamp, tile_count, default_data)
!mpp_write writes data which has the domain decomposition
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
type(domain2D), intent(inout) :: domain
real, intent(inout) :: data(:,:,:,:)
real, intent(in), optional :: tstamp
integer, intent(in), optional :: tile_count
real, intent(in), optional :: default_data
!cdata is used to store compute domain as contiguous data
!gdata is used to globalize data for multi-PE single-threaded I/O
real, allocatable, dimension(:,:,:,:) :: cdata, gdata
!NEW: data may be on compute OR data domain
logical :: data_has_halos, halos_are_global, x_is_global, y_is_global
integer :: is, ie, js, je, isd, ied, jsd, jed, isg, ieg, jsg, jeg, ism, iem, jsm, jem
integer :: position, errunit
type(domain2d), pointer :: io_domain=>NULL()
errunit = stderr()
call mpp_clock_begin(mpp_write_clock)
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_WRITE: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%valid )call mpp_error( FATAL, 'MPP_WRITE: invalid unit number.' )
position = field%position
call mpp_get_compute_domain( domain, is, ie, js, je, tile_count=tile_count, position=position )
call mpp_get_data_domain ( domain, isd, ied, jsd, jed, x_is_global=x_is_global, &
y_is_global=y_is_global, tile_count=tile_count, position=position )
call mpp_get_memory_domain ( domain, ism, iem, jsm, jem, position=position )
if( size(data,1).EQ.ie-is+1 .AND. size(data,2).EQ.je-js+1 )then
data_has_halos = .FALSE.
else if( size(data,1).EQ.iem-ism+1 .AND. size(data,2).EQ.jem-jsm+1 )then
data_has_halos = .TRUE.
else
write( errunit,'(a,10i5)' )'MPP_WRITE_2DDECOMP fails on field '//trim(field%name)// &
': is,ie,js,je, ism,iem,jsm,jem, size(data,1), size(data,2)=', &
is,ie,js,je, ism,iem,jsm,jem, size(data,1), size(data,2)
call mpp_error( FATAL, 'MPP_WRITE: data must be either on compute domain or data domain.' )
end if
halos_are_global = x_is_global .AND. y_is_global
if( npes.GT.1 .AND. mpp_file(unit)%threading.EQ.MPP_SINGLE )then
if( halos_are_global )then
call mpp_update_domains( data, domain, position = position )
!all non-0 PEs have passed their data to PE 0 and may now exit
if(mpp_file(unit)%write_on_this_pe ) then
call write_record_default( unit, field, size(data(:,:,:,:)), data, tstamp)
endif
else
!put field onto global domain
call mpp_get_global_domain ( domain, isg, ieg, jsg, jeg, tile_count=tile_count, position=position )
if(mpp_file(unit)%write_on_this_pe .OR. .NOT. global_field_on_root_pe) then
allocate( gdata(isg:ieg,jsg:jeg,size(data,3),size(data,4)) )
else
allocate( gdata(1,1,1,1))
endif
if(global_field_on_root_pe) then
call mpp_global_field( domain, data, gdata, position = position, &
flags=XUPDATE+YUPDATE+GLOBAL_ROOT_ONLY, &
default_data=default_data)
else
call mpp_global_field( domain, data, gdata, position = position, &
default_data=default_data)
endif
!all non-0 PEs have passed their data to PE 0 and may now exit
if(mpp_file(unit)%write_on_this_pe ) then
call write_record_default( unit, field, size(gdata(:,:,:,:)), gdata, tstamp)
endif
deallocate(gdata)
end if
else if(mpp_file(unit)%io_domain_exist ) then
if( halos_are_global )then
if(npes .GT. 1) call mpp_update_domains( data, domain, position = position )
if(mpp_file(unit)%write_on_this_pe ) then
call write_record_default( unit, field, size(data(:,:,:,:)), data, tstamp)
endif
else
io_domain=>mpp_get_io_domain(mpp_file(unit)%domain)
call mpp_get_global_domain ( io_domain, isg, ieg, jsg, jeg, tile_count=tile_count, position=position )
if(mpp_file(unit)%write_on_this_pe .OR. .NOT. global_field_on_root_pe) then
allocate( gdata(isg:ieg,jsg:jeg,size(data,3),size(data,4)) )
else
allocate( gdata(1,1,1,1))
endif
if(global_field_on_root_pe) then
call mpp_global_field( io_domain, data, gdata, position = position, &
flags=XUPDATE+YUPDATE+GLOBAL_ROOT_ONLY, &
default_data=default_data)
else
call mpp_global_field( io_domain, data, gdata, position = position, &
default_data=default_data)
endif
io_domain => NULL()
if(mpp_file(unit)%write_on_this_pe ) then
call write_record_default( unit, field, size(gdata(:,:,:,:)), gdata, tstamp)
endif
deallocate( gdata )
endif
else if( data_has_halos )then
!store compute domain as contiguous data and pass to write_record
allocate( cdata(is:ie,js:je,size(data,3),size(data,4)) )
cdata(:,:,:,:) = data(is-isd+1:ie-isd+1,js-jsd+1:je-jsd+1,:,:)
call write_record_default( unit, field, size(cdata(:,:,:,:)), cdata, tstamp, domain, tile_count )
else
!data is already contiguous
call write_record_default( unit, field, size(data(:,:,:,:)), data, tstamp, domain, tile_count )
end if
call mpp_clock_end(mpp_write_clock)
return
end subroutine mpp_write_2ddecomp_r4d
# 1103 "../mpp/include/mpp_io_write.inc" 2
# 1 "../mpp/include/mpp_write_2Ddecomp.h" 1
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see .
!***********************************************************************
subroutine write_record_r8( unit, field, nwords, data, time_in, domain, tile_count)
!routine that is finally called by all mpp_write routines to perform the write
!a non-netCDF record contains:
! field ID
! a set of 4 coordinates (is:ie,js:je) giving the data subdomain
! a timelevel and a timestamp (=NULLTIME if field is static)
! 3D real data (stored as 1D)
!if you are using direct access I/O, the RECL argument to OPEN must be large enough for the above
!in a global direct access file, record position on each PE is given by mpp_file(unit)%record.
!Treatment of timestamp:
! We assume that static fields have been passed without a timestamp.
! Here that is converted into a timestamp of NULLTIME.
! For non-netCDF fields, field is treated no differently, but is written
! with a timestamp of NULLTIME. There is no check in the code to prevent
! the user from repeatedly writing a static field.
integer, intent(in) :: unit, nwords
type(fieldtype), intent(in) :: field
real(8), intent(in) :: data(nwords)
real(8), intent(in), optional :: time_in
type(domain2D), intent(in), optional :: domain
integer, intent(in), optional :: tile_count
integer, dimension(size(field%axes(:))) :: start, axsiz
real(8) :: time
integer :: time_level
logical :: newtime
integer :: subdomain(4)
integer :: packed_data(nwords)
integer :: i, is, ie, js, je
real(4) :: data_r4(nwords)
pointer( ptr1, data_r4)
pointer( ptr2, packed_data)
if (mpp_io_stack_size < nwords) call mpp_io_set_stack_size(nwords)
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_WRITE: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%write_on_this_pe) return
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'MPP_WRITE: invalid unit number.' )
if( .NOT.mpp_file(unit)%initialized )then
!this is the first call to mpp_write
!we now declare the file to be initialized
!if this is netCDF we switch file from DEFINE mode to DATA mode
if( mpp_file(unit)%format.EQ.MPP_NETCDF )then
!NOFILL is probably required for parallel: any circumstances in which not advisable?
error = NF_SET_FILL( mpp_file(unit)%ncid, NF_NOFILL, i ); call netcdf_err( error, mpp_file(unit) )
if( mpp_file(unit)%action.EQ.MPP_WRONLY )then
if(header_buffer_val>0) then
error = NF__ENDDEF(mpp_file(unit)%ncid,header_buffer_val,4,0,4)
else
error = NF_ENDDEF(mpp_file(unit)%ncid)
endif
endif
call netcdf_err( error, mpp_file(unit) )
else
call mpp_write_meta( unit, 'END', cval='metadata' )
end if
mpp_file(unit)%initialized = .TRUE.
if( verbose )print '(a,i6,a)', 'MPP_WRITE: PE=', pe, ' initialized file '//trim(mpp_file(unit)%name)//'.'
end if
!initialize time: by default assume NULLTIME
time = NULLTIME
time_level = -1
newtime = .FALSE.
if( PRESENT(time_in) )time = time_in
!increment time level if new time
if( time.GT.mpp_file(unit)%time+EPSILON(time) )then !new time
mpp_file(unit)%time_level = mpp_file(unit)%time_level + 1
mpp_file(unit)%time = time
newtime = .TRUE.
end if
if( verbose )print '(a,2i6,2i5,es13.5)', 'MPP_WRITE: PE, unit, %id, %time_level, %time=',&
pe, unit, mpp_file(unit)%id, mpp_file(unit)%time_level, mpp_file(unit)%time
if( mpp_file(unit)%format.EQ.MPP_NETCDF )then
ptr2 = LOC(mpp_io_stack(1))
!define netCDF data block to be written:
! time axis: START = time level
! AXSIZ = 1
! space axis: if there is no domain info
! START = 1
! AXSIZ = field%size(axis)
! if there IS domain info:
! start of domain is compute%start_index for multi-file I/O
! global%start_index for all other cases
! this number must be converted to 1 for NF_PUT_VAR
! (netCDF fortran calls are with reference to 1),
! So, START = compute%start_index - (start of domain) + 1
! AXSIZ = usually compute%size
! However, if compute%end_index-compute%start_index+1.NE.compute%size,
! we assume that the call is passing a subdomain.
! To pass a subdomain, you must pass a domain2D object that satisfies the following:
! global%start_index must contain the start of domain as defined above;
! the data domain and compute domain must refer to the subdomain being passed.
! In this case, START = compute%start_index - (start of domain) + 1
! AXSIZ = compute%end_index - compute%start_index + 1
! NOTE: passing of subdomains will fail for multi-PE single-threaded I/O,
! since that attempts to gather all data on PE 0.
start = 1
do i = 1,size(field%axes(:))
axsiz(i) = field%size(i)
if( i.EQ.field%time_axis_index )start(i) = mpp_file(unit)%time_level
start(i) = max(start(i),1)
end do
if( debug )print '(a,2i6,12i6)', 'WRITE_RECORD: PE, unit, start, axsiz=', pe, unit, start, axsiz
!write time information if new time
if( newtime )then
if( KIND(time).EQ.8 )then
error = NF_PUT_VAR1_DOUBLE( mpp_file(unit)%ncid, mpp_file(unit)%id, mpp_file(unit)%time_level, time )
else if( KIND(time).EQ.4 )then
error = NF_PUT_VAR1_REAL ( mpp_file(unit)%ncid, mpp_file(unit)%id, mpp_file(unit)%time_level, time )
end if
end if
if( field%pack == 0 )then
packed_data = CEILING(data)
error = NF_PUT_VARA_INT ( mpp_file(unit)%ncid, field%id, start, axsiz, packed_data )
elseif( field%pack.GT.0 .and. field%pack.LE.2 )then
if( KIND(data).EQ.8 )then
error = NF_PUT_VARA_DOUBLE( mpp_file(unit)%ncid, field%id, start, axsiz, data )
else if( KIND(data).EQ.4 )then
error = NF_PUT_VARA_REAL ( mpp_file(unit)%ncid, field%id, start, axsiz, data )
end if
else !convert to integer using scale and add: no error check on packed data representation
packed_data = nint((data-field%add)/field%scale)
error = NF_PUT_VARA_INT ( mpp_file(unit)%ncid, field%id, start, axsiz, packed_data )
end if
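! Illustrative sketch of the scale/add packing above (made-up values): with
! field%scale = 0.01 and field%add = 273.15, a data value of 293.16 is stored as
! packed_data = nint((293.16 - 273.15)/0.01) = 2001, and a reader recovers
! 2001*0.01 + 273.15 = 293.16. No check is made here that the packed value fits
! the NF_SHORT (pack=4) or NF_BYTE (pack=8) range.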
call netcdf_err( error, mpp_file(unit), field=field )
else !non-netCDF
ptr1 = LOC(mpp_io_stack(1))
!subdomain contains (/is,ie,js,je/)
if( PRESENT(domain) )then
call mpp_get_compute_domain(domain, is, ie, js, je)
subdomain(:) = (/ is, ie, js, je /)
else
subdomain(:) = -1 ! -1 means use global value from axis metadata
end if
if( mpp_file(unit)%format.EQ.MPP_ASCII )then
!implies sequential access
write( unit,* )field%id, subdomain, time_level, time, data
else !MPP_IEEE32 or MPP_NATIVE
if( mpp_file(unit)%access.EQ.MPP_SEQUENTIAL )then
# 174
write(unit)field%id, subdomain, time_level, time, data
else !MPP_DIRECT
# 185
write( unit, rec=mpp_file(unit)%record )field%id, subdomain, time_level, time, data
if( debug )print '(a,i6,a,i6)', 'MPP_WRITE: PE=', pe, ' wrote record ', mpp_file(unit)%record
end if
end if
end if
!recompute current record for direct access I/O
if( mpp_file(unit)%access.EQ.MPP_DIRECT )then
if( mpp_file(unit)%fileset.EQ.MPP_SINGLE )then
!assumes all PEs participate in I/O: modify later
mpp_file(unit)%record = mpp_file(unit)%record + records_per_pe*npes
else
mpp_file(unit)%record = mpp_file(unit)%record + records_per_pe
end if
end if
return
end subroutine write_record_r8
subroutine mpp_write_2ddecomp_r2d_r8( unit, field, domain, data, tstamp, tile_count, default_data)
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
type(domain2D), intent(inout) :: domain
real(8), intent(inout) :: data(:,:)
real(8), intent(in), optional :: tstamp
integer, intent(in), optional :: tile_count
real(8), intent(in), optional :: default_data
real(8) :: data3D(size(data,1),size(data,2),1)
pointer( ptr, data3D )
ptr = LOC(data)
call mpp_write( unit, field, domain, data3D, tstamp, tile_count, default_data)
return
end subroutine mpp_write_2ddecomp_r2d_r8
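! The wrapper above uses a Cray pointer to alias the caller's 2D array as a rank-3
! array with a trailing dimension of 1, avoiding a copy. A minimal sketch of the
! same idiom outside this module (hypothetical names):
!     real(8) :: a2(10,20)
!     real(8) :: a3(10,20,1)
!     pointer( p, a3 )
!     p = LOC(a2)    ! a3(i,j,1) now refers to the same storage as a2(i,j)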
subroutine mpp_write_2ddecomp_r3d_r8( unit, field, domain, data, tstamp, tile_count, default_data)
!mpp_write writes data which has the domain decomposition given by domain
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
type(domain2D), intent(inout) :: domain
real(8), intent(inout) :: data(:,:,:)
real(8), intent(in), optional :: tstamp
integer, intent(in), optional :: tile_count
real(8), intent(in), optional :: default_data
!cdata is used to store compute domain as contiguous data
!gdata is used to globalize data for multi-PE single-threaded I/O
real(8), allocatable, dimension(:,:,:) :: cdata, gdata
!NEW: data may be on compute OR data domain
logical :: data_has_halos, halos_are_global, x_is_global, y_is_global
integer :: is, ie, js, je, isd, ied, jsd, jed, isg, ieg, jsg, jeg, ism, iem, jsm, jem
integer :: position, errunit
type(domain2d), pointer :: io_domain=>NULL()
call mpp_clock_begin(mpp_write_clock)
errunit = stderr()
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_WRITE: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%valid )call mpp_error( FATAL, 'MPP_WRITE: invalid unit number.' )
position = field%position
call mpp_get_compute_domain( domain, is, ie, js, je, tile_count=tile_count, position=position )
call mpp_get_data_domain ( domain, isd, ied, jsd, jed, x_is_global=x_is_global, &
y_is_global=y_is_global, tile_count=tile_count, position=position )
call mpp_get_memory_domain ( domain, ism, iem, jsm, jem, position=position )
if( size(data,1).EQ.ie-is+1 .AND. size(data,2).EQ.je-js+1 )then
data_has_halos = .FALSE.
else if( size(data,1).EQ.iem-ism+1 .AND. size(data,2).EQ.jem-jsm+1 )then
data_has_halos = .TRUE.
else
write( errunit,'(a,10i5)' )'MPP_WRITE_2DDECOMP fails on field '//trim(field%name)// &
': is,ie,js,je, ism,iem,jsm,jem, size(data,1), size(data,2)=', &
is,ie,js,je, ism,iem,jsm,jem, size(data,1), size(data,2)
call mpp_error( FATAL, 'MPP_WRITE: data must be either on compute domain or data domain.' )
end if
halos_are_global = x_is_global .AND. y_is_global
if( npes.GT.1 .AND. mpp_file(unit)%threading.EQ.MPP_SINGLE )then
if( halos_are_global )then
call mpp_update_domains( data, domain, position = position )
!all non-0 PEs have passed their data to PE 0 and may now exit
if(mpp_file(unit)%write_on_this_pe ) then
call write_record_r8( unit, field, size(data(:,:,:)), data, tstamp)
endif
else
!put field onto global domain
call mpp_get_global_domain ( domain, isg, ieg, jsg, jeg, tile_count=tile_count, position=position )
if(mpp_file(unit)%write_on_this_pe .OR. .NOT. global_field_on_root_pe) then
allocate( gdata(isg:ieg,jsg:jeg,size(data,3)) )
else
allocate( gdata(1,1,1))
endif
if(global_field_on_root_pe) then
call mpp_global_field( domain, data, gdata, position = position, &
flags=XUPDATE+YUPDATE+GLOBAL_ROOT_ONLY, &
default_data=default_data)
else
call mpp_global_field( domain, data, gdata, position = position, &
default_data=default_data)
endif
!all non-0 PEs have passed their data to PE 0 and may now exit
if(mpp_file(unit)%write_on_this_pe ) then
call write_record_r8( unit, field, size(gdata(:,:,:)), gdata, tstamp)
endif
deallocate(gdata)
end if
else if(mpp_file(unit)%io_domain_exist ) then
if( halos_are_global )then
call mpp_update_domains( data, domain, position = position )
if(mpp_file(unit)%write_on_this_pe ) then
call write_record_r8( unit, field, size(data(:,:,:)), data, tstamp)
endif
else
io_domain=>mpp_get_io_domain(mpp_file(unit)%domain)
call mpp_get_global_domain ( io_domain, isg, ieg, jsg, jeg, tile_count=tile_count, position=position )
if(mpp_file(unit)%write_on_this_pe .OR. .NOT. global_field_on_root_pe) then
allocate( gdata(isg:ieg,jsg:jeg,size(data,3)) )
else
allocate( gdata(1,1,1))
endif
if(global_field_on_root_pe) then
call mpp_global_field( io_domain, data, gdata, position = position, &
flags=XUPDATE+YUPDATE+GLOBAL_ROOT_ONLY, &
default_data=default_data)
else
call mpp_global_field( io_domain, data, gdata, position = position, &
default_data=default_data)
endif
io_domain => NULL()
if(mpp_file(unit)%write_on_this_pe ) then
call write_record_r8( unit, field, size(gdata(:,:,:)), gdata, tstamp)
endif
deallocate( gdata )
endif
else if( data_has_halos )then
!store compute domain as contiguous data and pass to write_record
allocate( cdata(is:ie,js:je,size(data,3)) )
cdata(:,:,:) = data(is-isd+1:ie-isd+1,js-jsd+1:je-jsd+1,:)
call write_record_r8( unit, field, size(cdata(:,:,:)), cdata, tstamp, domain, tile_count )
else
!data is already contiguous
call write_record_r8( unit, field, size(data(:,:,:)), data, tstamp, domain, tile_count )
end if
call mpp_clock_end(mpp_write_clock)
return
end subroutine mpp_write_2ddecomp_r3d_r8
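! A minimal usage sketch for the routine above (hypothetical unit, field, domain
! and array names; metadata calls omitted):
!     call mpp_get_compute_domain( domain, is, ie, js, je )
!     allocate( temp(is:ie,js:je,nz) )          ! data on the compute domain
!     call mpp_write( unit, field_T, domain, temp, tstamp=model_time )
! An array sized to the data (memory) domain is also accepted; its compute-domain
! portion is copied into a contiguous buffer before the record is written.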
subroutine mpp_write_2ddecomp_r4d_r8( unit, field, domain, data, tstamp, tile_count, default_data)
!mpp_write writes data which has the domain decomposition given by domain
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
type(domain2D), intent(inout) :: domain
real(8), intent(inout) :: data(:,:,:,:)
real(8), intent(in), optional :: tstamp
integer, intent(in), optional :: tile_count
real(8), intent(in), optional :: default_data
!cdata is used to store compute domain as contiguous data
!gdata is used to globalize data for multi-PE single-threaded I/O
real(8), allocatable, dimension(:,:,:,:) :: cdata, gdata
!NEW: data may be on compute OR data domain
logical :: data_has_halos, halos_are_global, x_is_global, y_is_global
integer :: is, ie, js, je, isd, ied, jsd, jed, isg, ieg, jsg, jeg, ism, iem, jsm, jem
integer :: position, errunit
type(domain2d), pointer :: io_domain=>NULL()
errunit = stderr()
call mpp_clock_begin(mpp_write_clock)
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_WRITE: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%valid )call mpp_error( FATAL, 'MPP_WRITE: invalid unit number.' )
position = field%position
call mpp_get_compute_domain( domain, is, ie, js, je, tile_count=tile_count, position=position )
call mpp_get_data_domain ( domain, isd, ied, jsd, jed, x_is_global=x_is_global, &
y_is_global=y_is_global, tile_count=tile_count, position=position )
call mpp_get_memory_domain ( domain, ism, iem, jsm, jem, position=position )
if( size(data,1).EQ.ie-is+1 .AND. size(data,2).EQ.je-js+1 )then
data_has_halos = .FALSE.
else if( size(data,1).EQ.iem-ism+1 .AND. size(data,2).EQ.jem-jsm+1 )then
data_has_halos = .TRUE.
else
write( errunit,'(a,10i5)' )'MPP_WRITE_2DDECOMP fails on field '//trim(field%name)// &
': is,ie,js,je, ism,iem,jsm,jem, size(data,1), size(data,2)=', &
is,ie,js,je, ism,iem,jsm,jem, size(data,1), size(data,2)
call mpp_error( FATAL, 'MPP_WRITE: data must be either on compute domain or data domain.' )
end if
halos_are_global = x_is_global .AND. y_is_global
if( npes.GT.1 .AND. mpp_file(unit)%threading.EQ.MPP_SINGLE )then
if( halos_are_global )then
call mpp_update_domains( data, domain, position = position )
!all non-0 PEs have passed their data to PE 0 and may now exit
if(mpp_file(unit)%write_on_this_pe ) then
call write_record_r8( unit, field, size(data(:,:,:,:)), data, tstamp)
endif
else
!put field onto global domain
call mpp_get_global_domain ( domain, isg, ieg, jsg, jeg, tile_count=tile_count, position=position )
if(mpp_file(unit)%write_on_this_pe .OR. .NOT. global_field_on_root_pe) then
allocate( gdata(isg:ieg,jsg:jeg,size(data,3),size(data,4)) )
else
allocate( gdata(1,1,1,1))
endif
if(global_field_on_root_pe) then
call mpp_global_field( domain, data, gdata, position = position, &
flags=XUPDATE+YUPDATE+GLOBAL_ROOT_ONLY, &
default_data=default_data)
else
call mpp_global_field( domain, data, gdata, position = position, &
default_data=default_data)
endif
!all non-0 PEs have passed their data to PE 0 and may now exit
if(mpp_file(unit)%write_on_this_pe ) then
call write_record_r8( unit, field, size(gdata(:,:,:,:)), gdata, tstamp)
endif
deallocate(gdata)
end if
else if(mpp_file(unit)%io_domain_exist ) then
if( halos_are_global )then
if(npes .GT. 1) call mpp_update_domains( data, domain, position = position )
if(mpp_file(unit)%write_on_this_pe ) then
call write_record_r8( unit, field, size(data(:,:,:,:)), data, tstamp)
endif
else
io_domain=>mpp_get_io_domain(mpp_file(unit)%domain)
call mpp_get_global_domain ( io_domain, isg, ieg, jsg, jeg, tile_count=tile_count, position=position )
if(mpp_file(unit)%write_on_this_pe .OR. .NOT. global_field_on_root_pe) then
allocate( gdata(isg:ieg,jsg:jeg,size(data,3),size(data,4)) )
else
allocate( gdata(1,1,1,1))
endif
if(global_field_on_root_pe) then
call mpp_global_field( io_domain, data, gdata, position = position, &
flags=XUPDATE+YUPDATE+GLOBAL_ROOT_ONLY, &
default_data=default_data)
else
call mpp_global_field( io_domain, data, gdata, position = position, &
default_data=default_data)
endif
io_domain => NULL()
if(mpp_file(unit)%write_on_this_pe ) then
call write_record_r8( unit, field, size(gdata(:,:,:,:)), gdata, tstamp)
endif
deallocate( gdata )
endif
else if( data_has_halos )then
!store compute domain as contiguous data and pass to write_record
allocate( cdata(is:ie,js:je,size(data,3),size(data,4)) )
cdata(:,:,:,:) = data(is-isd+1:ie-isd+1,js-jsd+1:je-jsd+1,:,:)
call write_record_r8( unit, field, size(cdata(:,:,:,:)), cdata, tstamp, domain, tile_count )
else
!data is already contiguous
call write_record_r8( unit, field, size(data(:,:,:,:)), data, tstamp, domain, tile_count )
end if
call mpp_clock_end(mpp_write_clock)
return
end subroutine mpp_write_2ddecomp_r4d_r8
# 1116 "../mpp/include/mpp_io_write.inc" 2
# 1 "../mpp/include/mpp_write_compressed.h" 1
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see <http://www.gnu.org/licenses/>.
!***********************************************************************
subroutine mpp_write_compressed_r1d(unit, field, domain, data, nelems_io, tstamp, default_data)
integer, intent(in) :: unit
type(fieldtype), intent(inout) :: field
type(domain2D), intent(inout) :: domain
real, intent(inout) :: data(:)
integer, intent(in) :: nelems_io(:) ! number of compressed elements
real, intent(in), optional :: tstamp
real, intent(in), optional :: default_data
real :: data2D(size(data,1),1)
pointer( ptr, data2D )
ptr = LOC(data)
call mpp_write_compressed(unit, field, domain, data2D, nelems_io, tstamp, default_data)
return
end subroutine mpp_write_compressed_r1d
subroutine mpp_write_compressed_r3d(unit, field, domain, data, nelems_io, tstamp, default_data)
integer, intent(in) :: unit
type(fieldtype), intent(inout) :: field
type(domain2D), intent(inout) :: domain
real, intent(inout) :: data(:,:,:)
integer, intent(in) :: nelems_io(:) ! number of compressed elements
real, intent(in), optional :: tstamp
real, intent(in), optional :: default_data
real :: data2D(size(data,1),size(data,2)*size(data,3))
pointer( ptr, data2D )
ptr = LOC(data)
call mpp_write_compressed(unit, field, domain, data2D, nelems_io, tstamp, default_data)
return
end subroutine mpp_write_compressed_r3d
subroutine mpp_write_compressed_r2d(unit, field, domain, data, nelems_io, tstamp, default_data)
integer, intent(in) :: unit
type(fieldtype), intent(inout) :: field
type(domain2D), intent(inout) :: domain
real, intent(inout) :: data(:,:)
integer, intent(in) :: nelems_io(:) ! number of compressed elements from each
! member of the io_domain. It MUST have the
! same order as the io_domain pelist.
real, intent(in), optional :: tstamp
real, intent(in), optional :: default_data
!cdata is used to store the io-domain compressed data
real, allocatable, dimension(:,:) :: cdata
real, allocatable, dimension(:,:) :: sbuff,rbuff
real :: fill
real :: sbuff1D(size(data))
real :: rbuff1D(size(data,2)*sum(nelems_io(:)))
pointer(sptr,sbuff1D); pointer(rptr,rbuff1D)
integer, allocatable :: pelist(:)
integer, allocatable :: nz_gather(:)
integer :: i,j,nz,nelems,mynelems,idx,npes
type(domain2d), pointer :: io_domain=>NULL()
call mpp_clock_begin(mpp_write_clock)
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_WRITE_COMPRESSED_2D_: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%valid )call mpp_error( FATAL, 'MPP_WRITE_COMPRESSED_2D_: invalid unit number.' )
fill = 0
if(PRESENT(default_data)) fill = default_data
io_domain=>mpp_get_io_domain(domain)
if (.not. ASSOCIATED(io_domain)) call mpp_error( FATAL, 'MPP_WRITE_COMPRESSED_2D_: io_domain must be defined.' )
npes = mpp_get_domain_npes(io_domain)
allocate(pelist(npes))
call mpp_get_pelist(io_domain,pelist)
mynelems = size(data,1)
nz = size(data,2)
nelems = sum(nelems_io(:))
! Check that nz is consistent across all PEs in io_domain
allocate(nz_gather(npes))
call mpp_gather((/nz/), nz_gather, pelist)
if ( mpp_file(unit)%write_on_this_pe.and.maxloc(nz_gather,1).ne.minloc(nz_gather,1) ) then
call mpp_error( FATAL, 'MPP_WRITE_COMPRESSED_2D_: size(data,2) must be consistent across all PEs in io_domain' )
end if
deallocate(nz_gather)
if(mpp_file(unit)%write_on_this_pe ) allocate(rbuff(nz,nelems))
allocate(sbuff(nz,mynelems))
! this transpose makes for an easy gather to the IO root
! and a clear, concise unpack
do j=1,mynelems
do i=1,nz
sbuff(i,j) = data(j,i)
enddo; enddo
! Note that the gatherV implied here is asymmetric; only root needs to know the vector of recv size
sptr = LOC(sbuff)
rptr = 0
if(mpp_file(unit)%write_on_this_pe ) rptr = LOC(rbuff)
call mpp_gather(sbuff1D,size(sbuff),rbuff1D,nz*nelems_io(:),pelist)
if(mpp_file(unit)%write_on_this_pe ) then
allocate(cdata(nelems,nz))
cdata = fill
do j=1,nz
do i=1,nelems
cdata(i,j) = rbuff(j,i)
enddo; enddo
! kludge for now; need a resizing accessor
field%size(1) = nelems
call write_record_default( unit, field, nelems*nz, cdata, tstamp)
deallocate(rbuff,cdata)
endif
deallocate(sbuff,pelist)
call mpp_clock_end(mpp_write_clock)
return
end subroutine mpp_write_compressed_r2d
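! Illustrative sketch of the gather performed above (made-up sizes): with an
! io_domain of 3 PEs holding nelems_io = (/4,6,5/) compressed elements and
! nz = 2 levels, each PE sends an (nz,mynelems) transpose of its data, the root
! receives nz*sum(nelems_io) = 30 values, and unpacks them into cdata(15,2)
! before write_record_default is called with field%size(1) reset to 15.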
# 1127 "../mpp/include/mpp_io_write.inc" 2
# 1 "../mpp/include/mpp_write_unlimited_axis.h" 1
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see <http://www.gnu.org/licenses/>.
!***********************************************************************
subroutine mpp_write_unlimited_axis_r1d(unit,field,domain,data,nelems_io)
integer, intent(in) :: unit
type(fieldtype), intent(inout) :: field
type(domain2D), intent(inout) :: domain
real, intent(inout) :: data(:)
integer, intent(in) :: nelems_io(:) ! number of compressed elements from each
! member of the io_domain. It MUST have the
! same order as the io_domain pelist.
integer, allocatable :: pelist(:)
integer :: i,j,nelems,npes
type(domain2d), pointer :: io_domain=>NULL()
real, allocatable, dimension(:) :: rbuff
call mpp_clock_begin(mpp_write_clock)
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_WRITE_UNLIMITED_AXIS_1D_: must first call mpp_io_init.' )
if( .NOT.mpp_file(unit)%valid )call mpp_error( FATAL, 'MPP_WRITE_UNLIMITED_AXIS_1D_: invalid unit number.' )
io_domain=>mpp_get_io_domain(domain)
if (.not. ASSOCIATED(io_domain)) call mpp_error( FATAL, 'MPP_WRITE_UNLIMITED_AXIS_1D_: io_domain must be defined.' )
npes = mpp_get_domain_npes(io_domain)
allocate(pelist(npes))
call mpp_get_pelist(io_domain,pelist)
nelems = sum(nelems_io(:))
if(mpp_file(unit)%write_on_this_pe) allocate(rbuff(nelems))
! Note that the gatherV implied here is asymmetric; only root needs to know the vector of recv size
call mpp_gather(data,size(data),rbuff,nelems_io(:),pelist)
if(mpp_file(unit)%write_on_this_pe) then
field%size(1) = nelems ! Correct the field size now that we have all the data
call write_record(unit, field, nelems, rbuff)
deallocate(rbuff)
endif
deallocate(pelist)
call mpp_clock_end(mpp_write_clock)
return
end subroutine mpp_write_unlimited_axis_r1d
# 1133 "../mpp/include/mpp_io_write.inc" 2
# 1 "../mpp/include/mpp_write.h" 1
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see <http://www.gnu.org/licenses/>.
!***********************************************************************
subroutine mpp_write_r0D( unit, field, data, tstamp)
use mpp_parameter_mod, only : NULLUNIT
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
real, intent(in) :: data
real, intent(in), optional :: tstamp
if (unit == NULLUNIT) return
call write_record_default( unit, field, 1, (/data/), tstamp)
return
end subroutine mpp_write_r0D
# 1143 "../mpp/include/mpp_io_write.inc" 2
# 1 "../mpp/include/mpp_write.h" 1
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see <http://www.gnu.org/licenses/>.
!***********************************************************************
subroutine mpp_write_r1D( unit, field, data, tstamp)
use mpp_parameter_mod, only : NULLUNIT
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
real, intent(in) :: data (:)
real, intent(in), optional :: tstamp
if (unit == NULLUNIT) return
call write_record_default( unit, field, size(data(:)), data, tstamp)
return
end subroutine mpp_write_r1D
# 1153 "../mpp/include/mpp_io_write.inc" 2
# 1 "../mpp/include/mpp_write.h" 1
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see <http://www.gnu.org/licenses/>.
!***********************************************************************
subroutine mpp_write_r2D( unit, field, data, tstamp)
use mpp_parameter_mod, only : NULLUNIT
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
real, intent(in) :: data (:,:)
real, intent(in), optional :: tstamp
if (unit == NULLUNIT) return
call write_record_default( unit, field, size(data(:,:)), data, tstamp )
return
end subroutine mpp_write_r2D
# 1163 "../mpp/include/mpp_io_write.inc" 2
# 1 "../mpp/include/mpp_write.h" 1
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see <http://www.gnu.org/licenses/>.
!***********************************************************************
subroutine mpp_write_r3D( unit, field, data, tstamp)
use mpp_parameter_mod, only : NULLUNIT
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
real, intent(in) :: data (:,:,:)
real, intent(in), optional :: tstamp
if (unit == NULLUNIT) return
call write_record_default( unit, field, size(data(:,:,:)), data, tstamp)
return
end subroutine mpp_write_r3D
# 1173 "../mpp/include/mpp_io_write.inc" 2
# 1 "../mpp/include/mpp_write.h" 1
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see <http://www.gnu.org/licenses/>.
!***********************************************************************
subroutine mpp_write_r4D( unit, field, data, tstamp)
use mpp_parameter_mod, only : NULLUNIT
integer, intent(in) :: unit
type(fieldtype), intent(in) :: field
real, intent(in) :: data (:,:,:,:)
real, intent(in), optional :: tstamp
if (unit == NULLUNIT) return
call write_record_default( unit, field, size(data(:,:,:,:)), data, tstamp)
return
end subroutine mpp_write_r4D
# 1183 "../mpp/include/mpp_io_write.inc" 2
subroutine mpp_write_axis( unit, axis )
integer, intent(in) :: unit
type(axistype), intent(in) :: axis
type(fieldtype) :: field
call mpp_clock_begin(mpp_write_clock)
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_WRITE: must first call mpp_io_init.' )
if( .NOT. mpp_file(unit)%write_on_this_pe ) then
call mpp_clock_end(mpp_write_clock)
return
endif
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'MPP_WRITE: invalid unit number.' )
!we convert axis to type(fieldtype) in order to call write_record
field = default_field
allocate( field%axes(1) )
field%axes(1) = axis
allocate( field%size(1) )
field%size(1) = axis%len
field%id = axis%id
field%name = axis%name
field%longname = axis%longname
field%units = axis%units
if(ASSOCIATED(axis%data))then
allocate( field%axes(1)%data(size(axis%data) ))
field%axes(1)%data = axis%data
call write_record( unit, field, axis%len, axis%data )
elseif(ASSOCIATED(axis%idata))then
allocate( field%axes(1)%data(size(axis%idata) ))
field%axes(1)%data = REAL(axis%idata)
field%pack=4
call write_record( unit, field, axis%len, REAL(axis%idata) )
else
call mpp_error( FATAL, 'MPP_WRITE_AXIS: No data associated with axis.' )
endif
deallocate(field%axes(1)%data)
deallocate(field%axes,field%size)
call mpp_clock_end(mpp_write_clock)
return
end subroutine mpp_write_axis
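! A minimal usage sketch (hypothetical names; assumes the axis metadata has been
! written first so that axis%id, axis%did and axis%data are set):
!     call mpp_write_meta( unit, x_axis, 'lon', 'degrees_E', 'longitude', &
!                          data=lon_values )
!     call mpp_write( unit, x_axis )    ! resolves to mpp_write_axis above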
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! !
! MPP_COPY_META !
! !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
subroutine mpp_copy_meta_global( unit, gatt )
!writes a global metadata attribute to unit
!attribute can be a real, integer or character
!one and only one of rval, ival, and cval should be present
!the first found will be used
!for a non-netCDF file, it is encoded into a string "GLOBAL <name> <value>"
integer, intent(in) :: unit
type(atttype), intent(in) :: gatt
integer :: len, error
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_WRITE_META: must first call mpp_io_init.' )
if( .NOT. mpp_file(unit)%write_on_this_pe )return
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'MPP_WRITE_META: invalid unit number.' )
if( mpp_file(unit)%initialized ) then
! File has already been written to and needs to be returned to define mode.
error = NF_REDEF(mpp_file(unit)%ncid)
mpp_file(unit)%initialized = .false.
endif
! call mpp_error( FATAL, 'MPP_WRITE_META: cannot write metadata to file after an mpp_write.' )
if( mpp_file(unit)%format.EQ.MPP_NETCDF )then
if( gatt%type.EQ.NF_CHAR )then
len = gatt%len
call write_attribute_netcdf( unit, NF_GLOBAL, gatt%name, cval=gatt%catt(1:len) )
else
call write_attribute_netcdf( unit, NF_GLOBAL, gatt%name, rval=gatt%fatt )
endif
else
if( gatt%type.EQ.NF_CHAR )then
len=gatt%len
call write_attribute( unit, 'GLOBAL '//trim(gatt%name), cval=gatt%catt(1:len) )
else
call write_attribute( unit, 'GLOBAL '//trim(gatt%name), rval=gatt%fatt )
endif
end if
# 1272
return
end subroutine mpp_copy_meta_global
subroutine mpp_copy_meta_axis( unit, axis, domain )
!load the values in an axistype (still need to call mpp_write)
!write metadata attributes for axis. axis is declared inout
!because the variable and dimension ids are altered
integer, intent(in) :: unit
type(axistype), intent(inout) :: axis
type(domain1D), intent(in), optional :: domain
character(len=512) :: text
integer :: i, len, is, ie, isg, ieg, error
! call mpp_clock_begin(mpp_write_clock)
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_WRITE_META: must first call mpp_io_init.' )
if( .NOT. mpp_file(unit)%write_on_this_pe ) then
! call mpp_clock_end(mpp_write_clock)
return
endif
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'MPP_WRITE_META: invalid unit number.' )
if( mpp_file(unit)%initialized ) then
! File has already been written to and needs to be returned to define mode.
error = NF_REDEF(mpp_file(unit)%ncid)
mpp_file(unit)%initialized = .false.
endif
! call mpp_error( FATAL, 'MPP_WRITE_META: cannot write metadata to file after an mpp_write.' )
! redefine domain if present
if( PRESENT(domain) )then
axis%domain = domain
else
axis%domain = NULL_DOMAIN1D
end if
!write metadata
if( mpp_file(unit)%format.EQ.MPP_NETCDF )then
!write axis def
if( ASSOCIATED(axis%data) )then !space axis
if( mpp_file(unit)%fileset.EQ.MPP_MULTI .AND. axis%domain.NE.NULL_DOMAIN1D )then
call mpp_get_compute_domain( axis%domain, is, ie )
call mpp_get_global_domain( axis%domain, isg, ieg )
ie = ie + axis%shift
ieg = ieg + axis%shift
error = NF_DEF_DIM( mpp_file(unit)%ncid, axis%name, ie-is+1, axis%did )
else
error = NF_DEF_DIM( mpp_file(unit)%ncid, axis%name, size(axis%data(:)), axis%did )
end if
call netcdf_err( error, mpp_file(unit), axis )
error = NF_DEF_VAR( mpp_file(unit)%ncid, axis%name, NF_FLOAT, 1, axis%did, axis%id )
call netcdf_err( error, mpp_file(unit), axis )
else !time axis
error = NF_DEF_DIM( mpp_file(unit)%ncid, axis%name, NF_UNLIMITED, axis%did )
call netcdf_err( error, mpp_file(unit), axis )
error = NF_DEF_VAR( mpp_file(unit)%ncid, axis%name, NF_DOUBLE, 1, axis%did, axis%id )
call netcdf_err( error, mpp_file(unit), axis )
mpp_file(unit)%id = axis%id !file ID is the same as time axis varID
mpp_file(unit)%recdimid = axis%did ! record dimension id
end if
else
varnum = varnum + 1
axis%id = varnum
axis%did = varnum
!write axis def
write( text, '(a,i4,a)' )'AXIS ', axis%id, ' name'
call write_attribute( unit, trim(text), cval=axis%name )
write( text, '(a,i4,a)' )'AXIS ', axis%id, ' size'
if( ASSOCIATED(axis%data) )then !space axis
if( mpp_file(unit)%fileset.EQ.MPP_MULTI .AND. axis%domain.NE.NULL_DOMAIN1D )then
call mpp_get_compute_domain(axis%domain, is, ie)
call write_attribute( unit, trim(text), ival=(/ie-is+1/) ) ! is, ie set by mpp_get_compute_domain above
else
call write_attribute( unit, trim(text), ival=(/size(axis%data(:))/) )
end if
else !time axis
if( mpp_file(unit)%id.NE.-1 ) &
call mpp_error( FATAL, 'MPP_WRITE_META_AXIS: There is already a time axis for this file.' )
call write_attribute( unit, trim(text), ival=(/0/) ) !a size of 0 indicates time axis
mpp_file(unit)%id = axis%id
end if
end if
!write axis attributes
do i=1,axis%natt
if( axis%Att(i)%name.NE.default_att%name )then
if( axis%Att(i)%type.EQ.NF_CHAR )then
len = axis%Att(i)%len
call mpp_write_meta( unit, axis%id, axis%Att(i)%name, cval=axis%Att(i)%catt(1:len) )
else
call mpp_write_meta( unit, axis%id, axis%Att(i)%name, rval=axis%Att(i)%fatt)
endif
endif
enddo
if( mpp_file(unit)%threading.EQ.MPP_MULTI .AND. mpp_file(unit)%fileset.EQ.MPP_MULTI .AND. axis%domain.NE.NULL_DOMAIN1D )then
call mpp_write_meta( unit, axis%id, 'domain_decomposition', ival=(/isg,ieg,is,ie/) )
end if
if( verbose )print '(a,2i6,x,a,2i3)', 'MPP_WRITE_META: Wrote axis metadata, pe, unit, axis%name, axis%id, axis%did=', &
pe, unit, trim(axis%name), axis%id, axis%did
# 1378
! call mpp_clock_end(mpp_write_clock)
return
end subroutine mpp_copy_meta_axis
subroutine mpp_copy_meta_field( unit, field, axes )
!useful for copying field metadata from a previous call to mpp_read_meta
!define field: must have already called mpp_write_meta(axis) for each axis
integer, intent(in) :: unit
type(fieldtype), intent(inout) :: field
type(axistype), intent(in), optional :: axes(:)
!this array is required because of f77 binding on netCDF interface
integer, allocatable :: axis_id(:)
real :: a, b
integer :: i, error
! call mpp_clock_begin(mpp_write_clock)
if( .NOT.module_is_initialized )call mpp_error( FATAL, 'MPP_WRITE_META: must first call mpp_io_init.' )
if( .NOT. mpp_file(unit)%write_on_this_pe ) then
! call mpp_clock_end(mpp_write_clock)
return
endif
if( .NOT.mpp_file(unit)%opened )call mpp_error( FATAL, 'MPP_WRITE_META: invalid unit number.' )
if( mpp_file(unit)%initialized ) then
! File has already been written to and needs to be returned to define mode.
error = NF_REDEF(mpp_file(unit)%ncid)
mpp_file(unit)%initialized = .false.
endif
! call mpp_error( FATAL, 'MPP_WRITE_META: cannot write metadata to file after an mpp_write.' )
if( field%pack.NE.1 .AND. field%pack.NE.2 )then
if( field%pack.NE.4 .AND. field%pack.NE.8 ) &
call mpp_error( FATAL, 'MPP_WRITE_META_FIELD: only legal packing values are 1,2,4,8.' )
end if
if (PRESENT(axes)) then
deallocate(field%axes)
deallocate(field%size)
allocate(field%axes(size(axes(:))))
allocate(field%size(size(axes(:))))
field%axes = axes
do i=1,size(axes(:))
if (ASSOCIATED(axes(i)%data)) then
field%size(i) = size(axes(i)%data(:))
else
field%size(i) = 1
field%time_axis_index = i
endif
enddo
endif
if( mpp_file(unit)%format.EQ.MPP_NETCDF )then
allocate( axis_id(size(field%axes(:))) )
do i = 1,size(field%axes(:))
axis_id(i) = field%axes(i)%did
end do
!write field def
select case (field%pack)
case(1)
error = NF_DEF_VAR( mpp_file(unit)%ncid, field%name, NF_DOUBLE, size(field%axes(:)), axis_id, field%id )
case(2)
error = NF_DEF_VAR( mpp_file(unit)%ncid, field%name, NF_FLOAT, size(field%axes(:)), axis_id, field%id )
case(4)
! if( field%scale.EQ.default_field%scale .OR. field%add.EQ.default_field%add ) &
! call mpp_error( FATAL, 'MPP_WRITE_META_FIELD: scale and add must be supplied when pack=4.' )
error = NF_DEF_VAR( mpp_file(unit)%ncid, field%name, NF_SHORT, size(field%axes(:)), axis_id, field%id )
case(8)
! if( field%scale.EQ.default_field%scale .OR. field%add.EQ.default_field%add ) &
! call mpp_error( FATAL, 'MPP_WRITE_META_FIELD: scale and add must be supplied when pack=8.' )
error = NF_DEF_VAR( mpp_file(unit)%ncid, field%name, NF_BYTE, size(field%axes(:)), axis_id, field%id )
case default
call mpp_error( FATAL, 'MPP_WRITE_META_FIELD: only legal packing values are 1,2,4,8.' )
end select
deallocate( axis_id )
else
varnum = varnum + 1
field%id = varnum
if( field%pack.NE.default_field%pack ) &
call mpp_error( WARNING, 'MPP_WRITE_META: Packing is currently available only on netCDF files.' )
!write field def
write( text, '(a,i4,a)' )'FIELD ', field%id, ' name'
call write_attribute( unit, trim(text), cval=field%name )
write( text, '(a,i4,a)' )'FIELD ', field%id, ' axes'
call write_attribute( unit, trim(text), ival=field%axes(:)%did )
end if
!write field attributes: these names follow netCDF conventions
call mpp_write_meta( unit, field%id, 'long_name', cval=field%longname )
if (lowercase(trim(field%units)).ne.'none' .OR. .NOT.cf_compliance) then
call mpp_write_meta( unit, field%id, 'units', cval=field%units )
endif
!all real attributes must be written as packed
if( (field%min.NE.default_field%min) .AND. (field%max.NE.default_field%max) )then
if( field%pack.EQ.1 .OR. field%pack.EQ.2 )then
call mpp_write_meta( unit, field%id, 'valid_range', rval=(/field%min,field%max/), pack=field%pack )
else
a = nint((field%min-field%add)/field%scale)
b = nint((field%max-field%add)/field%scale)
call mpp_write_meta( unit, field%id, 'valid_range', rval=(/a, b /), pack=field%pack )
end if
else if( field%min.NE.default_field%min )then
if( field%pack.EQ.1 .OR. field%pack.EQ.2 )then
call mpp_write_meta( unit, field%id, 'valid_min', rval=field%min, pack=field%pack )
else
a = nint((field%min-field%add)/field%scale)
call mpp_write_meta( unit, field%id, 'valid_min', rval=a, pack=field%pack )
end if
else if( field%max.NE.default_field%max )then
if( field%pack.EQ.1 .OR. field%pack.EQ.2 )then
call mpp_write_meta( unit, field%id, 'valid_max', rval=field%max, pack=field%pack )
else
a = nint((field%max-field%add)/field%scale)
call mpp_write_meta( unit, field%id, 'valid_max', rval=a, pack=field%pack )
end if
end if
if( field%missing.NE.default_field%missing )then
if( field%pack.EQ.1 .OR. field%pack.EQ.2 )then
call mpp_write_meta( unit, field%id, 'missing_value', rval=field%missing, pack=field%pack )
else
a = nint((field%missing-field%add)/field%scale)
call mpp_write_meta( unit, field%id, 'missing_value', rval=a, pack=field%pack )
end if
end if
if( field%fill.NE.default_field%fill )then
if( field%pack.EQ.1 .OR. field%pack.EQ.2 )then
call mpp_write_meta( unit, field%id, '_FillValue', rval=field%fill, pack=field%pack )
else
a = nint((field%fill-field%add)/field%scale)
call mpp_write_meta( unit, field%id, '_FillValue', rval=a, pack=field%pack )
end if
end if
if( field%pack.NE.1 .AND. field%pack.NE.2 )then
call mpp_write_meta( unit, field%id, 'packing', ival=field%pack )
if( field%scale.NE.default_field%scale )call mpp_write_meta( unit, field%id, 'scale_factor', rval=field%scale )
if( field%add.NE.default_field%add )call mpp_write_meta( unit, field%id, 'add_offset', rval=field%add )
end if
if( verbose )print '(a,2i6,x,a,i3)', 'MPP_WRITE_META: Wrote field metadata: pe, unit, field%name, field%id=', &
pe, unit, trim(field%name), field%id
! call mpp_clock_end(mpp_write_clock)
return
end subroutine mpp_copy_meta_field
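! Illustrative sketch of the packed-attribute arithmetic above (made-up values):
! for a pack=4 (NF_SHORT) field with field%scale = 0.5, field%add = 100.0 and
! field%missing = -999.0, the attribute written is
! a = nint((-999.0 - 100.0)/0.5) = -2198, so missing_value is stored in the same
! packed units that write_record uses for the data itself.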
subroutine mpp_modify_axis_meta( axis, name, units, longname, cartesian, data )
type(axistype), intent(inout) :: axis
character(len=*), intent(in), optional :: name, units, longname, cartesian
real, dimension(:), intent(in), optional :: data
if (PRESENT(name)) axis%name = trim(name)
if (PRESENT(units)) axis%units = trim(units)
if (PRESENT(longname)) axis%longname = trim(longname)
if (PRESENT(cartesian)) axis%cartesian = trim(cartesian)
if (PRESENT(data)) then
axis%len = size(data(:))
if (ASSOCIATED(axis%data)) deallocate(axis%data)
allocate(axis%data(axis%len))
axis%data = data
endif
return
end subroutine mpp_modify_axis_meta
subroutine mpp_modify_field_meta( field, name, units, longname, min, max, missing, axes )
type(fieldtype), intent(inout) :: field
character(len=*), intent(in), optional :: name, units, longname
real, intent(in), optional :: min, max, missing
type(axistype), dimension(:), intent(inout), optional :: axes
if (PRESENT(name)) field%name = trim(name)
if (PRESENT(units)) field%units = trim(units)
if (PRESENT(longname)) field%longname = trim(longname)
if (PRESENT(min)) field%min = min
if (PRESENT(max)) field%max = max
if (PRESENT(missing)) field%missing = missing
! if (PRESENT(axes)) then
! axis%len = size(data(:))
! deallocate(axis%data)
! allocate(axis%data(axis%len))
! axis%data = data
! endif
return
end subroutine mpp_modify_field_meta
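! A minimal usage sketch (hypothetical names; assumes the generic mpp_modify_meta
! interface resolves to the two routines above):
!     call mpp_modify_meta( t_axis, units='days since 1900-01-01' )
!     call mpp_modify_meta( field_T, name='temp', longname='potential temperature', &
!                           min=-2.0, max=40.0 )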
# 1107 "../mpp/mpp_io.F90" 2
!----------
!ug support
# 1 "../mpp/include/mpp_io_unstructured_write.inc" 1
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see <http://www.gnu.org/licenses/>.
!***********************************************************************
!----------
!ug support
!------------------------------------------------------------------------------
!>Write data for a 1D field associated with an unstructured mpp domain to a
!!restart file.
subroutine mpp_io_unstructured_write_r_1D(funit, &
field, &
domain, &
fdata, &
nelems_io, &
tstamp, &
default_data)
!Inputs/outputs
integer(4),intent(in) :: funit
io_domain => mpp_get_UG_io_domain(domain)
!Get the pelist associated with the I/O domain.
io_domain_npes = mpp_get_UG_domain_npes(io_domain)
allocate(pelist(io_domain_npes))
call mpp_get_UG_domain_pelist(io_domain, &
pelist)
!Make sure that only the root rank of the pelist will write to the file.
!This check is needed because data is only gathered on the lowest rank
!of the pelist.
if (mpp_pe() .eq. pelist(1) .and. .not. &
mpp_file(funit)%write_on_this_pe) then
call mpp_error(FATAL, &
"mpp_io_unstructured_write_r_1D:" &
//" the root rank of the pelist must be allowed" &
//" to perform the write.")
endif
if (mpp_pe() .ne. pelist(1) .and. mpp_file(funit)%write_on_this_pe) then
call mpp_error(FATAL, &
"mpp_io_unstructured_write_r_1D:" &
//" the non-root ranks of the pelist are not" &
//" allowed to perform the write.")
endif
!Allocate an array which will be used to gather the data to be written
!onto the root rank of the pelist.
if (mpp_pe() .eq. pelist(1)) then
nelems = sum(nelems_io)
allocate(rbuff(nelems))
else
allocate(rbuff(1))
endif
!Perform the gather of data onto the root rank (pelist(1)).
call mpp_gather(fdata, &
size(fdata), &
rbuff, &
nelems_io, &
pelist)
!Write out the data to the file. This is only done by the root rank
!of the pelist.
if (mpp_pe() .eq. pelist(1)) then
allocate(cdata(nelems))
cdata = fill
do i = 1,nelems
cdata(i) = rbuff(i)
enddo
field%size(1) = nelems
call write_record_default(funit, &
field, &
nelems, &
cdata, &
tstamp)
deallocate(cdata)
endif
!Deallocate local allocatables.
deallocate(rbuff)
deallocate(pelist)
!Stop the mpp timer.
call mpp_clock_end(mpp_write_clock)
return
end subroutine mpp_io_unstructured_write_r_1D
!------------------------------------------------------------------------------
!>Write data for a 2D field associated with an unstructured mpp domain to a
!!restart file.
subroutine mpp_io_unstructured_write_r_2D(funit, &
field, &
domain, &
fdata, &
nelems_io, &
tstamp, &
default_data)
!Inputs/outputs
integer(4),intent(in) :: funit
io_domain => mpp_get_UG_io_domain(domain)
!Get the pelist associated with the I/O domain.
io_domain_npes = mpp_get_UG_domain_npes(io_domain)
allocate(pelist(io_domain_npes))
call mpp_get_UG_domain_pelist(io_domain, &
pelist)
!Make sure that only the root rank of the pelist will write to the file.
!This check is needed because data is only gathered on the lowest rank
!of the pelist.
if (mpp_pe() .eq. pelist(1) .and. .not. &
mpp_file(funit)%write_on_this_pe) then
call mpp_error(FATAL, &
"mpp_io_unstructured_write_r_2D:" &
//" the root rank of the pelist must be allowed" &
//" to perform the write.")
endif
if (mpp_pe() .ne. pelist(1) .and. mpp_file(funit)%write_on_this_pe) then
call mpp_error(FATAL, &
"mpp_io_unstructured_write_r_2D:" &
//" the non-root ranks of the pelist are not" &
//" allowed to perform the write.")
endif
!Load the data elements for each rank into a one dimensional array, which
!will be used to gather the data onto the root rank of the pelist.
allocate(sbuff(size(fdata)))
dim_size_1 = size(fdata,1)
dim_size_2 = size(fdata,2)
do j = 1,dim_size_2
do i = 1,dim_size_1
sbuff((j-1)*dim_size_1+i) = fdata(i,j)
enddo
enddo
!Allocate an array which will be used to gather the data to be written
!onto the root rank of the pelist.
if (mpp_pe() .eq. pelist(1)) then
nelems = sum(nelems_io)
allocate(rbuff(nelems*dim_size_2))
else
allocate(rbuff(1))
endif
!Perform the gather of data onto the root rank (pelist(1)).
call mpp_gather(sbuff, &
size(sbuff), &
rbuff, &
nelems_io*dim_size_2, &
pelist)
!Reorder the gathered data so that it is of the form (nelems,dim_size_2). Write
!out the data to the file. This is only done by the root rank of the
!pelist.
if (mpp_pe() .eq. pelist(1)) then
allocate(cdata(nelems,dim_size_2))
cdata = fill
do j = 1,dim_size_2
offset_c = 0
do k = 1,io_domain_npes
if (k .gt. 1) then
offset_r = (j-1)*nelems_io(k) + dim_size_2*(sum(nelems_io(1:k-1)))
else
offset_r = (j-1)*nelems_io(k)
endif
do i = 1,nelems_io(k)
cdata(i+offset_c,j) = rbuff(i+offset_r)
enddo
offset_c = offset_c + nelems_io(k)
enddo
enddo
field%size(1) = nelems
call write_record_default(funit, &
field, &
nelems*dim_size_2, &
cdata, &
tstamp)
deallocate(cdata)
endif
!Deallocate local allocatables.
deallocate(sbuff)
deallocate(rbuff)
deallocate(pelist)
!Stop the mpp timer.
call mpp_clock_end(mpp_write_clock)
return
end subroutine mpp_io_unstructured_write_r_2D
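! Illustrative sketch of the unpack above (made-up sizes): with an io_domain of
! 2 PEs holding nelems_io = (/3,4/) unstructured points and dim_size_2 = 2 levels,
! the gathered rbuff holds PE 1's 3*2 values followed by PE 2's 4*2 values, each
! block ordered with all of level 1's elements first; the offsets computed above
! place them into cdata(1:3,:) and cdata(4:7,:) before the record is written.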
!------------------------------------------------------------------------------
!>Write data for a 3D field associated with an unstructured mpp domain to a
!!restart file.
subroutine mpp_io_unstructured_write_r_3D(funit, &
field, &
domain, &
fdata, &
nelems_io, &
tstamp, &
default_data)
!Inputs/outputs
integer(4),intent(in) :: funit
io_domain => mpp_get_UG_io_domain(domain)
!Get the pelist associated with the I/O domain.
io_domain_npes = mpp_get_UG_domain_npes(io_domain)
allocate(pelist(io_domain_npes))
call mpp_get_UG_domain_pelist(io_domain, &
pelist)
!Make sure that only the root rank of the pelist will write to the file.
!This check is needed because data is only gathered on the lowest rank
!of the pelist.
if (mpp_pe() .eq. pelist(1) .and. .not. &
mpp_file(funit)%write_on_this_pe) then
call mpp_error(FATAL, &
"mpp_io_unstructured_write_r_3D:" &
//" the root rank of the pelist must be allowed" &
//" to perform the write.")
endif
if (mpp_pe() .ne. pelist(1) .and. mpp_file(funit)%write_on_this_pe) then
call mpp_error(FATAL, &
"mpp_io_unstructured_write_r_3D:" &
//" the non-root ranks of the pelist are not" &
//" allowed to perform the write.")
endif
!Load the data elements for each rank into a one dimensional array, which
!will be used to gather the data onto the root rank of the pelist.
allocate(sbuff(size(fdata)))
dim_size_1 = size(fdata,1)
dim_size_2 = size(fdata,2)
dim_size_3 = size(fdata,3)
do k = 1,dim_size_3
do j = 1,dim_size_2
do i = 1,dim_size_1
sbuff((k-1)*dim_size_2*dim_size_1+(j-1)*dim_size_1+i) = fdata(i,j,k)
enddo
enddo
enddo
!Allocate an array which will be used to gather the data to be written
!onto the root rank of the pelist.
if (mpp_pe() .eq. pelist(1)) then
nelems = sum(nelems_io)
allocate(rbuff(nelems*dim_size_2*dim_size_3))
else
allocate(rbuff(1))
endif
!Perform the gather of data onto the root rank (pelist(1)).
call mpp_gather(sbuff, &
size(sbuff), &
rbuff, &
nelems_io*dim_size_2*dim_size_3, &
pelist)
!Reorder the gathered data so that it is of the form (nelems,dim_size_2,dim_size_3). Write
!out the data to the file. This is only done by the root rank of the
!pelist.
if (mpp_pe() .eq. pelist(1)) then
allocate(cdata(nelems,dim_size_2,dim_size_3))
cdata = fill
do m = 1,dim_size_3
do j = 1,dim_size_2
offset_c = 0
do k = 1,io_domain_npes
if (k .gt. 1) then
offset_r = (m-1)*dim_size_2*nelems_io(k) + &
(j-1)*nelems_io(k) + &
dim_size_2*dim_size_3*(sum(nelems_io(1:k-1)))
else
offset_r = (m-1)*dim_size_2*nelems_io(k) + &
(j-1)*nelems_io(k)
endif
do i = 1,nelems_io(k)
cdata(i+offset_c,j,m) = rbuff(i+offset_r)
enddo
offset_c = offset_c + nelems_io(k)
enddo
enddo
enddo
field%size(1) = nelems
call write_record_default(funit, &
field, &
nelems*dim_size_2*dim_size_3, &
cdata, &
tstamp)
deallocate(cdata)
endif
!Deallocate local allocatables.
deallocate(sbuff)
deallocate(rbuff)
deallocate(pelist)
!Stop the mpp timer.
call mpp_clock_end(mpp_write_clock)
return
end subroutine mpp_io_unstructured_write_r_3D
!------------------------------------------------------------------------------
!>Write data for a 4D field associated with an unstructured mpp domain to a
!!restart file.
subroutine mpp_io_unstructured_write_r_4D(funit, &
field, &
domain, &
fdata, &
nelems_io_in, &
tstamp, &
default_data)
!Inputs/outputs
integer(4),intent(in) :: funit
io_domain => mpp_get_UG_io_domain(domain)
!Get the pelist associated with the I/O domain.
io_domain_npes = mpp_get_UG_domain_npes(io_domain)
allocate(pelist(io_domain_npes))
call mpp_get_UG_domain_pelist(io_domain, &
pelist)
!Make sure that only the root rank of the pelist will write to the file.
!This check is needed because data is only gathered on the lowest rank
!of the pelist.
if (mpp_pe() .eq. pelist(1) .and. .not. &
mpp_file(funit)%write_on_this_pe) then
call mpp_error(FATAL, &
"mpp_io_unstructured_write_compressed_r_4D:" &
//" the root rank of the pelist must be allowed" &
//" to perform the write.")
endif
if (mpp_pe() .ne. pelist(1) .and. mpp_file(funit)%write_on_this_pe) then
call mpp_error(FATAL, &
"mpp_io_unstructured_write_compressed_r_4D:" &
//" the non-root ranks of the pelist are not" &
//" allowed to perform the write.")
endif
!For the 4D unstructured case, data is assumed to be of the form
!fdata = fdata(unstructured,z,cc,...). The number of data elements in the
!unstructured dimension (size(fdata,1)) may differ between ranks.
!If not passed in, the number of data elements in the unstructured
!dimension must be gathered on the root rank of the pelist. The number of
!data elements in the unstructured dimension should be equal to the size
!of the unstructured compute domain.
if (present(nelems_io_in)) then
allocate(nelems_io(size(nelems_io_in)))
nelems_io = nelems_io_in
else
allocate(nelems_io(io_domain_npes))
nelems_io = 0
call mpp_get_UG_compute_domains(io_domain, &
size=nelems_io)
endif
!The number of data elements in the non-unstructured dimensions is
!required to be the same for all ranks. Perform gathers to check this.
size_fdata_dim_2 = size(fdata,2)
size_fdata_dim_3 = size(fdata,3)
size_fdata_dim_4 = size(fdata,4)
!Allocate arrays which will be used to gather the data to be written
!onto the root rank of the pelist.
mynelems = size(fdata,1)
allocate(sbuff(mynelems*size_fdata_dim_2*size_fdata_dim_3*size_fdata_dim_4))
if (mpp_pe() .eq. pelist(1)) then
nelems = sum(nelems_io)
allocate(rbuff(nelems*size_fdata_dim_2*size_fdata_dim_3*size_fdata_dim_4))
else
allocate(rbuff(1))
endif
!Load the data into the sbuff array. The data is transposed so that the
!gather may be performed more easily.
do k = 1,mynelems
do j = 1,size_fdata_dim_2
do i = 1,size_fdata_dim_3
do n = 1,size_fdata_dim_4
sbuff((k-1)*size_fdata_dim_2*size_fdata_dim_3*size_fdata_dim_4 &
+ (j-1)*size_fdata_dim_3*size_fdata_dim_4 &
+ (i-1)*size_fdata_dim_4 + n) = fdata(k,j,i,n)
enddo
enddo
enddo
enddo
!Perform the gather of data onto the root rank (pelist(1)).
call mpp_gather(sbuff, &
size(sbuff), &
rbuff, &
nelems_io*size_fdata_dim_2*size_fdata_dim_3*size_fdata_dim_4, &
pelist)
!Write out the data to the file. This is only done by the root rank
!of the pelist.
if (mpp_pe() .eq. pelist(1)) then
allocate(cdata(nelems,size_fdata_dim_2,size_fdata_dim_3,size_fdata_dim_4))
cdata = fill
do n = 1,size_fdata_dim_4
do k = 1,size_fdata_dim_3
do j = 1,size_fdata_dim_2
do i = 1,nelems
cdata(i,j,k,n) = rbuff((i-1)*size_fdata_dim_2*size_fdata_dim_3*size_fdata_dim_4 &
+ (j-1)*size_fdata_dim_3*size_fdata_dim_4 &
+ (k-1)*size_fdata_dim_4 + n)
enddo
enddo
enddo
enddo
field%size(1) = nelems
call write_record_default(funit, &
field, &
nelems*size_fdata_dim_2*size_fdata_dim_3*size_fdata_dim_4, &
cdata, &
tstamp)
deallocate(cdata)
endif
!Deallocate local allocatables.
deallocate(sbuff)
deallocate(rbuff)
deallocate(pelist)
deallocate(nelems_io)
!Stop the mpp timer.
call mpp_clock_end(mpp_write_clock)
return
end subroutine mpp_io_unstructured_write_r_4D
!------------------------------------------------------------------------------
!----------
# 1111 "../mpp/mpp_io.F90" 2
# 1 "../mpp/include/mpp_io_unstructured_read.inc" 1
!***********************************************************************
!* GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see <http://www.gnu.org/licenses/>.
!***********************************************************************
!----------
!ug support
!------------------------------------------------------------------------------
!>Read in one-dimensional data for a field associated with an unstructured
!!mpp domain.
subroutine mpp_io_unstructured_read_r_1D(funit, &
field, &
domain, &
fdata, &
tindex, &
start, &
nread, &
threading)
!Inputs/outputs
integer(4),intent(in) :: funit
io_domain => mpp_get_UG_io_domain(domain)
!Get the pelist associated with the I/O domain.
io_domain_npes = mpp_get_UG_domain_npes(io_domain)
allocate(pelist(io_domain_npes))
call mpp_get_UG_domain_pelist(io_domain, &
pelist)
io_domain => null()
!Let only the root rank of the pelist read in the data.
if (mpp_pe() .eq. pelist(1)) then
call read_record(funit, &
field, &
size(fdata), &
fdata, &
tindex, &
start_in=start, &
axsiz_in=nread)
endif
!Send the data from the root rank to the rest of the ranks on the
!pelist.
if (mpp_pe() .eq. pelist(1)) then
do p = 2,io_domain_npes
call mpp_send(fdata, &
size(fdata), &
pelist(p), &
tag=COMM_TAG_1)
enddo
call mpp_sync_self()
else
call mpp_recv(fdata, &
size(fdata), &
pelist(1), &
block=.false., &
tag=COMM_TAG_1)
call mpp_sync_self(check=EVENT_RECV)
endif
deallocate(pelist)
else
call mpp_error(FATAL, &
"mpp_io_unstructured_read_r_1D:" &
//" threading should be MPP_SINGLE or MPP_MULTI")
endif
endif
!Decide whether or not to compute a check-sum of the read-in data. The
!check-sum is calculated if the inputted field's checksum values are not
!equal to the default checksum value for a field.
compute_chksum = .false.
if (any(field%checksum .ne. default_field%checksum)) then
compute_chksum = .true.
endif
!If necessary, compute a check-sum of the read-in data.
if (compute_chksum) then
if (field%type .eq. NF_INT) then
if (field%fill .eq. MPP_FILL_DOUBLE .or. field%fill .eq. &
real(MPP_FILL_INT)) then
chk = mpp_chksum(ceiling(fdata), &
mask_val=MPP_FILL_INT)
else
call mpp_error(NOTE, &
"mpp_io_unstructured_read_r_1D:" &
//" int field "//trim(field%name) &
//" found fill. Icebergs, or code using" &
//" defaults can safely ignore." &
//" If manually overriding compressed" &
//" restart fills, confirm this is what you" &
//" want.")
chk = mpp_chksum(ceiling(fdata), &
mask_val=field%fill)
endif
else
chk = mpp_chksum(fdata, &
mask_val=field%fill)
endif
!Print out the computed check-sum for the field. This feature is
!currently turned off. Uncomment the following lines to turn it
!back on.
! if (mpp_pe() .eq. mpp_root_pe()) then
! write(stdout(),'(A,Z16)') "mpp_io_unstructured_read_r_1D chksum: " &
! //trim(field%name)//" = ",chk
! if (mod(chk,field%checksum(1)) .ne. 0) then
! write(stdout(),'(A,Z16)') "File stored checksum: " &
! //trim(field%name)//" = ", &
! field%checksum(1)
! call mpp_error(NOTE, &
! "mpp_io_unstructured_read_r_1D: " &
! //trim(field%name)//" failed!")
! endif
! endif
endif
!Stop the mpp timer.
call mpp_clock_end(mpp_read_clock)
return
end subroutine mpp_io_unstructured_read_r_1D
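!The commented-out block below is an illustrative usage sketch, not part of
!the module: it assumes funit was returned by mpp_open for a netCDF file,
!rfield was obtained from the file metadata (for example via mpp_get_fields),
!and ug_domain is an unstructured domain set up elsewhere; all of these names
!are placeholders.
!
! real,dimension(:),allocatable :: local_data
! allocate(local_data(npts))              !npts = unstructured points owned by this rank
! call mpp_io_unstructured_read_r_1D(funit, &
!                                    rfield, &
!                                    ug_domain, &
!                                    local_data, &
!                                    tindex=1, &
!                                    threading=MPP_SINGLE)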
!------------------------------------------------------------------------------
!>Read in two-dimensional data for a field associated with an unstructured
!!mpp domain.
subroutine mpp_io_unstructured_read_r_2D(funit, &
field, &
domain, &
fdata, &
tindex, &
start, &
nread, &
threading)
!Inputs/outputs
integer(4),intent(in) :: funit !A file unit returned by mpp_open.
type(fieldtype),intent(in) :: field !A field whose data will be read in from the file.
type(domainUG),intent(in) :: domain !An unstructured mpp domain associated with the input file.
real,dimension(:,:),intent(inout) :: fdata !The data that will be read in from the file.
integer(4),intent(in),optional :: tindex !Time level index for a netCDF file.
integer(4),dimension(:),intent(in),optional :: start !Corner indices for a netCDF file.
integer(4),dimension(:),intent(in),optional :: nread !Edge lengths for a netCDF file.
integer(4),intent(in),optional :: threading !Flag telling whether one rank (MPP_SINGLE) or all ranks (MPP_MULTI) read the file.
!Local variables
integer(4) :: threading_flag !Actual threading mode; defaults to MPP_SINGLE.
type(domainUG),pointer :: io_domain !Pointer to the unstructured I/O domain.
integer(4) :: io_domain_npes !Total number of ranks in the I/O domain pelist.
integer(4),dimension(:),allocatable :: pelist !Pelist associated with the I/O domain.
integer(4) :: p !Loop variable.
logical(4) :: compute_chksum !Flag telling whether a check-sum of the read-in data is computed.
integer(8) :: chk !Computed check-sum of the read-in data.
!Start the mpp timer (mpp_read_clock is a module variable).
call mpp_clock_begin(mpp_read_clock)
!Set the threading mode, defaulting to single-rank reads.
threading_flag = MPP_SINGLE
if (present(threading)) then
threading_flag = threading
endif
!Read in the data.
if (threading_flag .eq. MPP_MULTI) then
!For the multi-rank case, have each rank read the data directly.
call read_record(funit, &
field, &
size(fdata), &
fdata, &
tindex, &
start_in=start, &
axsiz_in=nread)
else
if (threading_flag .eq. MPP_SINGLE) then
!For the single-rank case, first point to the I/O domain associated
!with the unstructured mpp domain.
io_domain => null()
io_domain => mpp_get_UG_io_domain(domain)
!Get the pelist associated with the I/O domain.
io_domain_npes = mpp_get_UG_domain_npes(io_domain)
allocate(pelist(io_domain_npes))
call mpp_get_UG_domain_pelist(io_domain, &
pelist)
io_domain => null()
!Let only the root rank of the pelist read in the data.
if (mpp_pe() .eq. pelist(1)) then
call read_record(funit, &
field, &
size(fdata), &
fdata, &
tindex, &
start_in=start, &
axsiz_in=nread)
endif
!Send the data from the root rank to the rest of the ranks on the
!pelist.
if (mpp_pe() .eq. pelist(1)) then
do p = 2,io_domain_npes
call mpp_send(fdata, &
size(fdata), &
pelist(p), &
tag=COMM_TAG_1)
enddo
call mpp_sync_self()
else
call mpp_recv(fdata, &
size(fdata), &
pelist(1), &
block=.false., &
tag=COMM_TAG_1)
call mpp_sync_self(check=EVENT_RECV)
endif
deallocate(pelist)
else
call mpp_error(FATAL, &
"mpp_io_unstructured_read_r_2D:" &
//" threading should be MPP_SINGLE or MPP_MULTI")
endif
endif
!Decide whether or not to compute a check-sum of the read-in data. The
!check-sum is computed only if the input field's checksum values differ
!from the default checksum value for a field.
compute_chksum = .false.
if (any(field%checksum .ne. default_field%checksum)) then
compute_chksum = .true.
endif
!If necessary, compute a check-sum of the read-in data.
if (compute_chksum) then
if (field%type .eq. NF_INT) then
if (field%fill .eq. MPP_FILL_DOUBLE .or. field%fill .eq. &
real(MPP_FILL_INT)) then
chk = mpp_chksum(ceiling(fdata), &
mask_val=MPP_FILL_INT)
else
call mpp_error(NOTE, &
"mpp_io_unstructured_read_r_2D:" &
//" int field "//trim(field%name) &
//" found fill. Icebergs, or code using" &
//" defaults can safely ignore." &
//" If manually overriding compressed" &
//" restart fills, confirm this is what you" &
//" want.")
chk = mpp_chksum(ceiling(fdata), &
mask_val=field%fill)
endif
else
chk = mpp_chksum(fdata, &
mask_val=field%fill)
endif
!Print out the computed check-sum for the field. This feature is
!currently turned off. Uncomment the following lines to turn it
!back on.
! if (mpp_pe() .eq. mpp_root_pe()) then
! write(stdout(),'(A,Z16)') "mpp_io_unstructured_read_r_2D chksum: " &
! //trim(field%name)//" = ",chk
! if (mod(chk,field%checksum(1)) .ne. 0) then
! write(stdout(),'(A,Z16)') "File stored checksum: " &
! //trim(field%name)//" = ", &
! field%checksum(1)
! call mpp_error(NOTE, &
! "mpp_io_unstructured_read_r_2D: " &
! //trim(field%name)//" failed!")
! endif
! endif
endif
!Stop the mpp timer.
call mpp_clock_end(mpp_read_clock)
return
end subroutine mpp_io_unstructured_read_r_2D
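!Illustrative sketch (not part of the module): start and nread are forwarded
!to read_record as start_in and axsiz_in, which correspond to the corner and
!edge-length (start/count) vectors of the netCDF variable, so a caller can
!restrict the read to one slab. The names npts, nlev, funit, rfield, and
!ug_domain below are placeholders; the length-4 vectors are for illustration.
!
! integer(4),dimension(4) :: start, nread
! real,dimension(:,:),allocatable :: slab
! start = 1                     !Read from the first index of every dimension.
! nread = 1
! nread(1) = npts               !All unstructured points in this I/O domain.
! nread(2) = nlev               !All vertical levels.
! allocate(slab(npts,nlev))
! call mpp_io_unstructured_read_r_2D(funit, &
!                                    rfield, &
!                                    ug_domain, &
!                                    slab, &
!                                    start=start, &
!                                    nread=nread)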
!------------------------------------------------------------------------------
!>Read in three-dimensional data for a field associated with an unstructured
!!mpp domain.
subroutine mpp_io_unstructured_read_r_3D(funit, &
field, &
domain, &
fdata, &
tindex, &
start, &
nread, &
threading)
!Inputs/outputs
integer(4),intent(in) :: funit !A file unit returned by mpp_open.
type(fieldtype),intent(in) :: field !A field whose data will be read in from the file.
type(domainUG),intent(in) :: domain !An unstructured mpp domain associated with the input file.
real,dimension(:,:,:),intent(inout) :: fdata !The data that will be read in from the file.
integer(4),intent(in),optional :: tindex !Time level index for a netCDF file.
integer(4),dimension(:),intent(in),optional :: start !Corner indices for a netCDF file.
integer(4),dimension(:),intent(in),optional :: nread !Edge lengths for a netCDF file.
integer(4),intent(in),optional :: threading !Flag telling whether one rank (MPP_SINGLE) or all ranks (MPP_MULTI) read the file.
!Local variables
integer(4) :: threading_flag !Actual threading mode; defaults to MPP_SINGLE.
type(domainUG),pointer :: io_domain !Pointer to the unstructured I/O domain.
integer(4) :: io_domain_npes !Total number of ranks in the I/O domain pelist.
integer(4),dimension(:),allocatable :: pelist !Pelist associated with the I/O domain.
integer(4) :: p !Loop variable.
logical(4) :: compute_chksum !Flag telling whether a check-sum of the read-in data is computed.
integer(8) :: chk !Computed check-sum of the read-in data.
!Start the mpp timer (mpp_read_clock is a module variable).
call mpp_clock_begin(mpp_read_clock)
!Set the threading mode, defaulting to single-rank reads.
threading_flag = MPP_SINGLE
if (present(threading)) then
threading_flag = threading
endif
!Read in the data.
if (threading_flag .eq. MPP_MULTI) then
!For the multi-rank case, have each rank read the data directly.
call read_record(funit, &
field, &
size(fdata), &
fdata, &
tindex, &
start_in=start, &
axsiz_in=nread)
else
if (threading_flag .eq. MPP_SINGLE) then
!For the single-rank case, first point to the I/O domain associated
!with the unstructured mpp domain.
io_domain => null()
io_domain => mpp_get_UG_io_domain(domain)
!Get the pelist associated with the I/O domain.
io_domain_npes = mpp_get_UG_domain_npes(io_domain)
allocate(pelist(io_domain_npes))
call mpp_get_UG_domain_pelist(io_domain, &
pelist)
io_domain => null()
!Let only the root rank of the pelist read in the data.
if (mpp_pe() .eq. pelist(1)) then
call read_record(funit, &
field, &
size(fdata), &
fdata, &
tindex, &
start_in=start, &
axsiz_in=nread)
endif
!Send the data from the root rank to the rest of the ranks on the
!pelist.
if (mpp_pe() .eq. pelist(1)) then
do p = 2,io_domain_npes
call mpp_send(fdata, &
size(fdata), &
pelist(p), &
tag=COMM_TAG_1)
enddo
call mpp_sync_self()
else
call mpp_recv(fdata, &
size(fdata), &
pelist(1), &
block=.false., &
tag=COMM_TAG_1)
call mpp_sync_self(check=EVENT_RECV)
endif
deallocate(pelist)
else
call mpp_error(FATAL, &
"mpp_io_unstructured_read_r_3D:" &
//" threading should be MPP_SINGLE or MPP_MULTI")
endif
endif
!Decide whether or not to compute a check-sum of the read-in data. The
!check-sum is computed only if the input field's checksum values differ
!from the default checksum value for a field.
compute_chksum = .false.
if (any(field%checksum .ne. default_field%checksum)) then
compute_chksum = .true.
endif
!If necessary, compute a check-sum of the read-in data.
if (compute_chksum) then
if (field%type .eq. NF_INT) then
if (field%fill .eq. MPP_FILL_DOUBLE .or. field%fill .eq. &
real(MPP_FILL_INT)) then
chk = mpp_chksum(ceiling(fdata), &
mask_val=MPP_FILL_INT)
else
call mpp_error(NOTE, &
"mpp_io_unstructured_read_r_3D:" &
//" int field "//trim(field%name) &
//" found fill. Icebergs, or code using" &
//" defaults can safely ignore." &
//" If manually overriding compressed" &
//" restart fills, confirm this is what you" &
//" want.")
chk = mpp_chksum(ceiling(fdata), &
mask_val=field%fill)
endif
else
chk = mpp_chksum(fdata, &
mask_val=field%fill)
endif
!Print out the computed check-sum for the field. This feature is
!currently turned off. Uncomment the following lines to turn it
!back on.
! if (mpp_pe() .eq. mpp_root_pe()) then
! write(stdout(),'(A,Z16)') "mpp_io_unstructured_read_r_3D chksum: " &
! //trim(field%name)//" = ",chk
! if (mod(chk,field%checksum(1)) .ne. 0) then
! write(stdout(),'(A,Z16)') "File stored checksum: " &
! //trim(field%name)//" = ", &
! field%checksum(1)
! call mpp_error(NOTE, &
! "mpp_io_unstructured_read_r_3D: " &
! //trim(field%name)//" failed!")
! endif
! endif
endif
!Stop the mpp timer.
call mpp_clock_end(mpp_read_clock)
return
end subroutine mpp_io_unstructured_read_r_3D
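!Note on the threading modes used by the read routines above: with MPP_MULTI
!each rank reads the data directly, while with MPP_SINGLE (the default) only
!the root rank of each I/O-domain pelist calls read_record and then forwards
!the data to the remaining pelist ranks with mpp_send/mpp_recv, so that only
!one rank per I/O domain touches the file.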
!------------------------------------------------------------------------------
!----------
# 1112 "../mpp/mpp_io.F90" 2
!----------
end module mpp_io_mod