!***********************************************************************
!*                 GNU Lesser General Public License
!*
!* This file is part of the GFDL Flexible Modeling System (FMS).
!*
!* FMS is free software: you can redistribute it and/or modify it under
!* the terms of the GNU Lesser General Public License as published by
!* the Free Software Foundation, either version 3 of the License, or (at
!* your option) any later version.
!*
!* FMS is distributed in the hope that it will be useful, but WITHOUT
!* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
!* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
!* for more details.
!*
!* You should have received a copy of the GNU Lesser General Public
!* License along with FMS. If not, see <http://www.gnu.org/licenses/>.
!***********************************************************************
!-----------------------------------------------------------------------
!                 Parallel I/O for message-passing codes
!
! AUTHOR: V. Balaji (vb@gfdl.gov)
!         SGI/GFDL Princeton University
!
! This program is free software; you can redistribute it and/or modify
! it under the terms of the GNU General Public License as published by
! the Free Software Foundation; either version 2 of the License, or
! (at your option) any later version.
!
! This program is distributed in the hope that it will be useful,
! but WITHOUT ANY WARRANTY; without even the implied warranty of
! MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
! GNU General Public License for more details.
!
! For the full text of the GNU General Public License,
! write to: Free Software Foundation, Inc.,
!           675 Mass Ave, Cambridge, MA 02139, USA.
!-----------------------------------------------------------------------
!
! V. Balaji
!
!
!
!
! mpp_io_mod is a set of simple calls for parallel I/O on
! distributed systems. It is geared toward the writing of data in netCDF
! format. It requires the modules mpp_domains_mod and mpp_mod, upon
! which it is built.
!
!
! In massively parallel environments, an often difficult problem is
! the reading and writing of data to files on disk. MPI-IO and MPI-2 IO
! are moving toward providing this capability, but are currently not
! widely implemented. Further, the API is rather
! abstruse. mpp_io_mod is an attempt at a simple API encompassing a
! certain variety of the I/O tasks that will be required. It does not
! attempt to be an all-encompassing standard such as MPI; however, it
! can be implemented in MPI if so desired. It is equally simple to add
! parallel I/O capability to mpp_io_mod based on vendor-specific
! APIs while providing a layer of insulation for user codes.
!
! The mpp_io_mod parallel I/O API is built on top of the
! mpp_domains_mod and mpp_mod API for domain decomposition and
! message passing. Features of mpp_io_mod include:
!
! 1) Simple, minimal API, with free access to underlying API for more
! complicated stuff.
! 2) Self-describing files: comprehensive header information
! (metadata) in the file itself.
! 3) Strong focus on performance of parallel write: the climate models
! for which it is designed typically read a minimal amount of data
! (typically at the beginning of the run), but tend to write copious
! amounts of data during the run. An interface for reading is also
! supplied, but its performance has not yet been optimized.
! 4) Integrated netCDF capability: netCDF is a
! data format widely used in the climate/weather modeling
! community. netCDF is considered the principal medium of data storage
! for mpp_io_mod. But I provide a raw unformatted
! Fortran I/O capability in case netCDF is not an option, either due to
! unavailability, inappropriateness, or poor performance.
! 5) May require off-line post-processing: a tool for this purpose,
! mppnccombine, is available. GFDL users may use
! ~hnv/pub/mppnccombine. Outside users may obtain the
! source here. It
! can be compiled with any C compiler and linked with the netCDF
! library. The program is free and is covered by the GPL license.
!
! The internal representation of the data being written out is
! assumed to be the default real type, which can be 4- or 8-byte. Time data
! is always written as 8-byte values to avoid overflow on climatic time scales
! in units of seconds.
!
!
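!
! As a quick orientation, the sketch below shows the overall calling
! skeleton most codes follow. It is illustrative only: the file name is
! a placeholder, and the keyword arguments shown (action, form,
! threading, fileset) are the mpp_open flags discussed in the sections
! below; consult the interface documentation for the full argument list.
!
!     use mpp_io_mod, only: mpp_io_init, mpp_open, mpp_close, &
!                           mpp_write_meta, mpp_write, &
!                           MPP_WRONLY, MPP_NETCDF, MPP_MULTI, MPP_SINGLE
!     integer :: unit
!
!     call mpp_io_init()
!     call mpp_open( unit, 'example.nc', action=MPP_WRONLY, form=MPP_NETCDF, &
!                    threading=MPP_MULTI, fileset=MPP_SINGLE )
!     !...declare axes and fields with mpp_write_meta, then write them
!     !   with mpp_write, as described below...
!     call mpp_close(unit)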

!    I/O modes in mpp_io_mod

!
! The I/O activity critical to performance in the models for which
! mpp_io_mod is designed is typically the writing of large
! datasets on a model grid volume produced at intervals during
! a run. Consider a 3D grid volume, where model arrays are stored as
! (i,j,k). The domain decomposition is typically along
! i or j: thus to store data to disk as a global
! volume, the distributed chunks of data have to be seen as
! non-contiguous. If we attempt to have all PEs write this data into a
! single file, performance can be seriously compromised because of the
! data reordering that will be required. Possible alternatives are to have
! one PE acquire all the data and write it out, or to have all the PEs
! write independent files, which are recombined offline. These three
! modes of operation (all PEs writing to a single file, a single PE
! writing everything, or each PE writing its own file) are described in
! the mpp_io_mod terminology in terms of two parameters, threading and
! fileset, as follows:
!
! Single-threaded I/O: a single PE acquires all the data
! and writes it out.
! Multi-threaded, single-fileset I/O: many PEs write to a
! single file.
! Multi-threaded, multi-fileset I/O: many PEs write to
! independent files. This is also called distributed I/O.
!
! The middle option (multi-threaded, single-fileset I/O) is the one for
! which it is most difficult to achieve good performance. The
! choice of one of these modes is made when a file is opened for I/O, in
! mpp_open (see the sketch below).
!
!
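!
! For illustration, these modes might be selected when the file is
! opened, roughly as follows. This is a sketch only: the file name is a
! placeholder and the remaining mpp_open arguments (action, form, etc.)
! are documented with that interface.
!
!     !single-threaded I/O: one PE acquires all the data and writes it
!     call mpp_open( unit, 'out.nc', action=MPP_WRONLY, form=MPP_NETCDF, &
!                    threading=MPP_SINGLE, fileset=MPP_SINGLE )
!
!     !multi-threaded, single-fileset I/O: many PEs write to one file
!     call mpp_open( unit, 'out.nc', action=MPP_WRONLY, form=MPP_NETCDF, &
!                    threading=MPP_MULTI, fileset=MPP_SINGLE )
!
!     !multi-threaded, multi-fileset (distributed) I/O: independent files
!     call mpp_open( unit, 'out.nc', action=MPP_WRONLY, form=MPP_NETCDF, &
!                    threading=MPP_MULTI, fileset=MPP_MULTI )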

!    Metadata in mpp_io_mod

!
! A requirement of the design of mpp_io_mod is that the file must
! be entirely self-describing: comprehensive header information
! describing its contents is present in the header of every file. The
! header information follows the model of netCDF. Variables in the file
! are divided into axes and fields. An axis describes a
! co-ordinate variable, e.g. x, y, z, t. A field consists of data in
! the space described by the axes. An axis is described in
! mpp_io_mod using the defined type axistype:
!
!
!   type, public :: axistype
!      sequence
!      character(len=128) :: name
!      character(len=128) :: units
!      character(len=256) :: longname
!      character(len=8) :: cartesian
!      integer :: len
!      integer :: sense           !+/-1, depth or height?
!      type(domain1D), pointer :: domain
!      real, dimension(:), pointer :: data
!      integer :: id, did
!      integer :: type  ! external NetCDF type format for axis data
!      integer :: natt
!      type(atttype), pointer :: Att(:) ! axis attributes
!   end type axistype
!   
!
! A field is described using the type fieldtype:
!
!
!   type, public :: fieldtype
!      sequence
!      character(len=128) :: name
!      character(len=128) :: units
!      character(len=256) :: longname
!      real :: min, max, missing, fill, scale, add
!      integer :: pack
!      type(axistype), dimension(:), pointer :: axes
!      integer, dimension(:), pointer :: size
!      integer :: time_axis_index
!      integer :: id
!      integer :: type ! external NetCDF format for field data
!      integer :: natt, ndim
!      type(atttype), pointer :: Att(:) ! field metadata
!   end type fieldtype
!   
!
! An attribute (global, field or axis) is described using the atttype:
!
!
!   type, public :: atttype
!      sequence
!      integer :: type, len
!      character(len=128) :: name
!      character(len=256)  :: catt
!      real(FLOAT_KIND), pointer :: fatt(:)
!   end type atttype
!   
!
! This default set of field attributes corresponds
! closely to various conventions established for netCDF files. The
! pack attribute of a field defines whether or not a
! field is to be packed on output. Allowed values of
! pack are 1, 2, 4 and 8. The value of
! pack is the number of values packed into 8
! bytes. In typical use, we write 4-byte reals to netCDF output; thus
! the default value of pack is 2. For
! pack = 4 or 8, packing uses a simple-minded linear
! scaling scheme using the scale and add
! attributes. There is thus likely to be a significant loss of dynamic
! range with packing. When a field is declared to be packed, the
! missing and fill attributes, if
! supplied, are packed also.
!
! Please note that the pack values are the same even if the default
! real is 4 bytes, i.e. pack=1 still follows the definition
! above and writes out 8 bytes.
!
! A set of attributes for each variable is also available. The
! variable definitions and attribute information are written/read by calling
! mpp_write_meta or mpp_read_meta. A typical calling
! sequence for writing data might be:
!
!
!   ...
!     type(domain2D), dimension(:), allocatable, target :: domain
!     type(fieldtype) :: field
!     type(axistype) :: x, y, z, t
!   ...
!     call mpp_define_domains( (/1,nx,1,ny/), domain )
!     allocate( a(domain(pe)%x%data%start_index:domain(pe)%x%data%end_index, &
!                 domain(pe)%y%data%start_index:domain(pe)%y%data%end_index,nz) )
!   ...
!     call mpp_write_meta( unit, x, 'X', 'km', 'X distance', &
!          domain=domain(pe)%x, data=(/(float(i),i=1,nx)/) )
!     call mpp_write_meta( unit, y, 'Y', 'km', 'Y distance', &
!          domain=domain(pe)%y, data=(/(float(i),i=1,ny)/) )
!     call mpp_write_meta( unit, z, 'Z', 'km', 'Z distance', &
!          data=(/(float(i),i=1,nz)/) )
!     call mpp_write_meta( unit, t, 'Time', 'second', 'Time' )
!
!     call mpp_write_meta( unit, field, (/x,y,z,t/), 'a', '(m/s)', 'AAA', &
!          missing=-1e36 )
!   ...
!     call mpp_write( unit, x )
!     call mpp_write( unit, y )
!     call mpp_write( unit, z )
!   ...
!   
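!
! A minimal continuation of the example above might write the field at
! two time levels and close the file. This is a sketch only; the time
! values are placeholders. Note that any further mpp_write_meta calls
! (e.g. attaching extra attributes through the ID returned by
! mpp_get_id) must appear before the first call to mpp_write.
!
!     real*8 :: tstamp     !time stamps must be 8-byte reals
!   ...
!     tstamp = 0.
!     call mpp_write( unit, field, domain(pe), a, tstamp )    !record 1
!     tstamp = 86400.
!     call mpp_write( unit, field, domain(pe), a, tstamp )    !record 2
!   ...
!     call mpp_close(unit)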
!
! In this example, x and y have been
! declared as distributed axes, since a domain decomposition has been
! associated. z and t are undistributed
! axes. t is known to be a record axis (netCDF
! terminology) since we do not allocate the data element
! of the axistype. Only one record axis may be
! associated with a file. The call to mpp_write_meta initializes
! the axes, and associates a unique variable ID with each axis. The call
! to mpp_write_meta with argument field
! declares field to be a 4D variable that is a function
! of (x,y,z,t), and a unique variable ID is associated
! with it. A 3D field will be written at each call to
! mpp_write(field).
!
! The data for any variable, including axes, is written by
! mpp_write.
!
! Any additional attributes of variables can be added through
! subsequent mpp_write_meta calls, using the variable ID as a
! handle. Global attributes, associated with the dataset as a
! whole, can also be written thus. See the mpp_write_meta call syntax below
! for further details.
!
! You cannot interleave calls to mpp_write and
! mpp_write_meta: the first call to
! mpp_write implies that metadata specification is
! complete.
!
! A typical calling sequence for reading data might be:
!
!
!   ...
!     integer :: unit, natt, nvar, ntime
!     type(domain2D), dimension(:), allocatable, target :: domain
!     type(fieldtype), allocatable, dimension(:) :: fields
!     type(atttype), allocatable, dimension(:) :: global_atts
!     real, allocatable, dimension(:) :: times
!   ...
!     call mpp_define_domains( (/1,nx,1,ny/), domain )
!
!     call mpp_read_meta(unit)
!     call mpp_get_info(unit,natt,nvar,ntime)
!     allocate(global_atts(natt))
!     call mpp_get_atts(unit,global_atts)
!     allocate(fields(nvar))
!     call mpp_get_fields(unit, fields)
!     allocate(times(ntime))
!     call mpp_get_times(unit, times)
!
!     allocate( a(domain(pe)%x%data%start_index:domain(pe)%x%data%end_index, &
!                 domain(pe)%y%data%start_index:domain(pe)%y%data%end_index,nz) )
!   ...
!     do i=1, nvar
!       if( mpp_get_field_name(fields(i)) == 'a' ) &
!            call mpp_read( unit, fields(i), domain(pe), a, tindex )
!     enddo
!   ...
!   
!
! In this example, the data are distributed as in the previous
! example. The call to mpp_read_meta initializes
! all of the metadata associated with the file, including global
! attributes, variable attributes and non-record dimension data. The
! call to mpp_get_info returns the number of global
! attributes (natt), variables (nvar) and
! time levels (ntime) associated with the file
! identified by a unique ID (unit).
! mpp_get_atts returns all global attributes for
! the file in the derived type atttype(natt).
! mpp_get_fields returns the variable types
! (fieldtype(nvar)). Since the record dimension data are not allocated
! when the file is written, a separate call to mpp_get_times is required
! to access the record dimension (time) values. Subsequent calls to
! mpp_read return the field data arrays corresponding to
! the fieldtype. The domain type is an optional
! argument. If domain is omitted, the incoming field
! array should be dimensioned for the global domain; otherwise, the
! field data is assigned to the computational domain of a local array.
!
! Multi-fileset reads are not supported with mpp_read.
!
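!
! Axis (coordinate) data can be recovered in a similar way. The sketch
! below is illustrative only: ndim is assumed to already hold the number
! of dimension variables in the file, and the accessors used
! (mpp_get_axes, mpp_get_axis_length, mpp_get_axis_data) are those
! declared public by this module; see their interfaces for the full
! argument lists.
!
!     type(axistype), allocatable :: axes(:)
!     real, allocatable :: xdata(:)
!     integer :: ndim    !number of dimension variables, known from the file metadata
!   ...
!     allocate( axes(ndim) )
!     call mpp_get_axes( unit, axes )
!     allocate( xdata(mpp_get_axis_length(axes(1))) )
!     call mpp_get_axis_data( axes(1), xdata )
!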
module mpp_io_mod #include #define _MAX_FILE_UNITS 1024 use mpp_parameter_mod, only : MPP_WRONLY, MPP_RDONLY, MPP_APPEND, MPP_OVERWR, MPP_ASCII use mpp_parameter_mod, only : MPP_IEEE32, MPP_NATIVE, MPP_NETCDF, MPP_SEQUENTIAL use mpp_parameter_mod, only : MPP_DIRECT, MPP_SINGLE, MPP_MULTI, MPP_DELETE, MPP_COLLECT use mpp_parameter_mod, only : MPP_DEBUG, MPP_VERBOSE, NULLUNIT, NULLTIME, ALL_PES use mpp_parameter_mod, only : CENTER, EAST, NORTH, CORNER use mpp_parameter_mod, only : MAX_FILE_SIZE, GLOBAL_ROOT_ONLY, XUPDATE, YUPDATE use mpp_mod, only : mpp_error, FATAL, WARNING, NOTE, stdin, stdout, stderr, stdlog use mpp_mod, only : mpp_pe, mpp_root_pe, mpp_npes, lowercase, mpp_transmit, mpp_sync_self use mpp_mod, only : mpp_init, mpp_sync, mpp_clock_id, mpp_clock_begin, mpp_clock_end use mpp_mod, only : MPP_CLOCK_SYNC, MPP_CLOCK_DETAILED, CLOCK_ROUTINE use mpp_mod, only : input_nml_file, mpp_gather, mpp_broadcast use mpp_mod, only : mpp_send, mpp_recv, mpp_sync_self, EVENT_RECV, COMM_TAG_1 use mpp_domains_mod, only : domain1d, domain2d, NULL_DOMAIN1D, mpp_domains_init use mpp_domains_mod, only : mpp_get_global_domain, mpp_get_compute_domain use mpp_domains_mod, only : mpp_get_data_domain, mpp_get_memory_domain, mpp_get_pelist use mpp_domains_mod, only : mpp_update_domains, mpp_global_field, mpp_domain_is_symmetry use mpp_domains_mod, only : operator( .NE. ), mpp_get_domain_shift, mpp_get_UG_compute_domains use mpp_domains_mod, only : mpp_get_io_domain, mpp_domain_is_tile_root_pe, mpp_get_domain_tile_root_pe use mpp_domains_mod, only : mpp_get_tile_id, mpp_get_tile_npes, mpp_get_io_domain_layout use mpp_domains_mod, only : mpp_get_domain_name, mpp_get_domain_npes use mpp_parameter_mod, only : MPP_FILL_DOUBLE,MPP_FILL_INT use mpp_mod, only : mpp_chksum !---------- !ug support use mpp_domains_mod, only: domainUG, & mpp_get_UG_io_domain, & mpp_domain_UG_is_tile_root_pe, & mpp_get_UG_domain_tile_id, & mpp_get_UG_domain_npes, & mpp_get_io_domain_UG_layout, & mpp_get_UG_compute_domain, & mpp_get_UG_domain_pelist !---------- implicit none private #ifdef use_netCDF #include #endif !--- public parameters ----------------------------------------------- public :: MPP_WRONLY, MPP_RDONLY, MPP_APPEND, MPP_OVERWR, MPP_ASCII, MPP_IEEE32 public :: MPP_NATIVE, MPP_NETCDF, MPP_SEQUENTIAL, MPP_DIRECT, MPP_SINGLE public :: MPP_MULTI, MPP_DELETE, MPP_COLLECT public :: FILE_TYPE_USED public :: MAX_FILE_SIZE !--- public data type ------------------------------------------------ public :: axistype, atttype, fieldtype, validtype, filetype !--- public data ----------------------------------------------------- public :: default_field, default_axis, default_att !--- public interface from mpp_io_util.h ---------------------- public :: mpp_get_iospec, mpp_get_id, mpp_get_ncid, mpp_get_unit_range, mpp_is_valid public :: mpp_set_unit_range, mpp_get_info, mpp_get_atts, mpp_get_fields public :: mpp_get_times, mpp_get_axes, mpp_get_recdimid, mpp_get_axis_data, mpp_get_axis_by_name public :: mpp_io_set_stack_size, mpp_get_field_index, mpp_get_axis_index public :: mpp_get_field_name, mpp_get_att_value, mpp_get_att_length public :: mpp_get_att_type, mpp_get_att_name, mpp_get_att_real, mpp_get_att_char public :: mpp_get_att_real_scalar, mpp_get_axis_length, mpp_is_dist_ioroot public :: mpp_get_file_name, mpp_file_is_opened, mpp_attribute_exist public :: mpp_io_clock_on, mpp_get_time_axis, mpp_get_default_calendar public :: mpp_get_dimension_length, mpp_get_axis_bounds !--- public interface from mpp_io_misc.h ---------------------- 
public :: mpp_io_init, mpp_io_exit, netcdf_err, mpp_flush, mpp_get_maxunits, do_cf_compliance !--- public interface from mpp_io_write.h --------------------- public :: mpp_write, mpp_write_meta, mpp_copy_meta, mpp_modify_meta, mpp_write_axis_data, mpp_def_dim !--- public interface from mpp_io_read.h --------------------- public :: mpp_read, mpp_read_meta, mpp_get_tavg_info public :: mpp_read_compressed, mpp_write_compressed, mpp_read_distributed_ascii, mpp_write_unlimited_axis !--- public interface from mpp_io_switch.h --------------------- public :: mpp_open, mpp_close !----------------------------------------------------------------------------- !--- mpp_io data types !----------------------------------------------------------------------------- integer FILE_TYPE_USED integer, parameter :: MAX_ATT_LENGTH = 1280 type :: atttype private integer :: type, len character(len=128) :: name character(len=MAX_ATT_LENGTH) :: catt real, pointer :: fatt(:) =>NULL() ! just use type conversion for integers end type atttype type :: axistype private character(len=128) :: name character(len=128) :: name_bounds character(len=128) :: units character(len=256) :: longname character(len=8) :: cartesian character(len=256) :: compressed character(len=24) :: calendar integer :: sense, len !+/-1, depth or height? type(domain1D) :: domain !if pointer is associated, it is a distributed data axis real, pointer :: data(:) =>NULL() !axis values (not used if time axis) real, pointer :: data_bounds(:) =>NULL() !axis bounds values integer, pointer :: idata(:) =>NULL() !compressed axis valuesi integer :: id, did, type, natt !id is the "variable ID", did is the "dimension ID": !netCDF requires 2 IDs for axes integer :: shift !normally is 0. when domain is symmetry, its value maybe 1. type(atttype), pointer :: Att(:) =>NULL() end type axistype type :: validtype private logical :: is_range ! if true, then the data represent the valid range real :: min,max ! boundaries of the valid range or missing value end type validtype type :: fieldtype private character(len=128) :: name character(len=128) :: units character(len=256) :: longname character(len=256) :: standard_name ! CF standard name real :: min, max, missing, fill, scale, add integer :: pack integer(LONG_KIND), dimension(3) :: checksum type(axistype), pointer :: axes(:) =>NULL() !axes associated with field size, time_axis_index redundantly !hold info already contained in axes. it's clunky and inelegant, !but required so that axes can be shared among multiple files integer, pointer :: size(:) =>NULL() integer :: time_axis_index integer :: id, type, natt, ndim type(atttype), pointer :: Att(:) =>NULL() integer :: position ! indicate the location of the data ( CENTER, NORTH, EAST, CORNER ) end type fieldtype type :: filetype private character(len=256) :: name integer :: action, format, access, threading, fileset, record, ncid logical :: opened, initialized, nohdrs integer :: time_level real(DOUBLE_KIND) :: time logical :: valid logical :: write_on_this_pe ! indicate if will write out from this pe logical :: read_on_this_pe ! indicate if will read from this pe logical :: io_domain_exist ! indicate if io_domain exist or not. integer :: id !variable ID of time axis associated with file (only one time axis per file) integer :: recdimid !dim ID of time axis associated with file (only one time axis per file) real(DOUBLE_KIND), pointer :: time_values(:) =>NULL() ! time axis values are stored here instead of axis%data ! since mpp_write assumes these values are not time values. ! 
Not used in mpp_write ! additional elements of filetype for mpp_read (ignored for mpp_write) integer :: ndim, nvar, natt ! number of dimensions, non-dimension variables and global attributes ! redundant axis types stored here and in associated fieldtype ! some axes are not used by any fields, i.e. "edges" type(axistype), pointer :: axis(:) =>NULL() type(fieldtype), pointer :: var(:) =>NULL() type(atttype), pointer :: att(:) =>NULL() type(domain2d), pointer :: domain =>NULL() !---------- !ug support type(domainUG),pointer :: domain_ug => null() !Is this actually pointed to? !---------- end type filetype !*********************************************************************** ! ! public interface from mpp_io_util.h ! !*********************************************************************** interface mpp_get_id module procedure mpp_get_axis_id module procedure mpp_get_field_id end interface ! ! ! Get file global metdata. ! ! ! Get file global metdata. ! ! ! ! ! interface mpp_get_atts module procedure mpp_get_global_atts module procedure mpp_get_field_atts module procedure mpp_get_axis_atts end interface interface mpp_get_att_value module procedure mpp_get_field_att_text end interface !*********************************************************************** ! ! public interface from mpp_io_read.h ! !*********************************************************************** ! ! ! Read from an open file. ! ! ! mpp_read is used to read data to the file on an I/O unit ! using the file parameters supplied by mpp_open. There are two ! forms of mpp_read, one to read ! distributed field data, and one to read non-distributed field ! data. Distributed data refer to arrays whose two ! fastest-varying indices are domain-decomposed. Distributed data must ! be 2D or 3D (in space). Non-distributed data can be 0-3D. ! ! The data argument for distributed data is expected by ! mpp_read to contain data specified on the data domain, ! and will read the data belonging to the compute domain, ! fetching data as required by the parallel I/O mode specified in the mpp_open call. This ! is consistent with our definition of domains, where all arrays are ! expected to be dimensioned on the data domain, and all operations ! performed on the compute domain. ! ! ! ! ! ! ! ! ! time_index is an optional argument. It is to be omitted if the ! field was defined not to be a function of time. Results are ! unpredictable if the argument is supplied for a time- independent ! field, or omitted for a time-dependent field. ! ! ! The type of read performed by mpp_read depends on ! the file characteristics on the I/O unit specified at the mpp_open call. Specifically, the ! format of the input data (e.g netCDF or IEEE) and the ! threading flags, etc., can be changed there, and ! require no changes to the mpp_read ! calls. (fileset = MPP_MULTI is not supported by ! mpp_read; IEEE is currently not supported). ! ! Packed variables are unpacked using the scale and ! add attributes. ! ! mpp_read_meta must be called prior to calling mpp_read. ! ! 
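! As an example, a non-distributed 1D variable could be read with the
! sketch below (illustrative only: zeta and nz are placeholders, and
! fields(i) is assumed to have been obtained from mpp_get_fields as in
! the reading example in the header above).
!
!     real :: zeta(nz)
!     call mpp_read( unit, fields(i), zeta, tindex )
!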
interface mpp_read module procedure mpp_read_2ddecomp_r2d module procedure mpp_read_2ddecomp_r3d module procedure mpp_read_2ddecomp_r4d module procedure mpp_read_r0D module procedure mpp_read_r1D module procedure mpp_read_r2D module procedure mpp_read_r3D module procedure mpp_read_r4D module procedure mpp_read_text module procedure mpp_read_region_r2D module procedure mpp_read_region_r3D #ifdef OVERLOAD_R8 module procedure mpp_read_region_r2D_r8 module procedure mpp_read_region_r3D_r8 module procedure mpp_read_2ddecomp_r2d_r8 module procedure mpp_read_2ddecomp_r3d_r8 module procedure mpp_read_2ddecomp_r4d_r8 #endif end interface !*********************************************************************** ! ! public interfaces from mpp_io_read_distributed_ascii.inc ! !*********************************************************************** ! ! ! Read from an opened, ascii file, translating data to the desired format ! ! ! These routines are part of the mpp_read family. It is intended to ! provide a general purpose, distributed, list directed read ! ! ! ! ! ! ! ! ! ! mpp_read_distributed_ascii ! The stripe size must be greater than or equal to 1. The stripe ! size does not have to be a common denominator of the number of ! MPI ranks. ! ! interface mpp_read_distributed_ascii module procedure mpp_read_distributed_ascii_r1d module procedure mpp_read_distributed_ascii_i1d module procedure mpp_read_distributed_ascii_a1d end interface !*********************************************************************** ! ! public interfaces from mpp_io_read_compressed.h ! !*********************************************************************** ! ! ! Read from an opened, sparse data, compressed file (e.g. land_model) ! ! ! These routines are similar to mpp_read except that they are designed ! to handle sparse, compressed vectors of data such as from the ! land model. Currently, the sparse vector may vary in z. Hence ! the need for the rank 2 treatment. ! ! ! ! ! ! ! ! time_index is an optional argument. It is to be omitted if the ! field was defined not to be a function of time. Results are ! unpredictable if the argument is supplied for a time- independent ! field, or omitted for a time-dependent field. ! ! ! mpp_read_meta must be called prior to calling ! mpp_read_compressed. ! Since in general, the vector is distributed across the io-domain ! The read expects the io_domain to be defined. ! ! interface mpp_read_compressed module procedure mpp_read_compressed_r1d module procedure mpp_read_compressed_r2d module procedure mpp_read_compressed_r3d end interface mpp_read_compressed !*********************************************************************** ! ! public interface from mpp_io_write.h ! !*********************************************************************** ! ! ! Write metadata. ! ! ! This routine is used to write the metadata ! describing the contents of a file being written. Each file can contain ! any number of fields, which are functions of 0-3 space axes and 0-1 ! time axes. (Only one time axis can be defined per file). The basic ! metadata defined above for axistype ! and fieldtype are written in the first two forms of the call ! shown below. These calls will associate a unique variable ID with each ! variable (axis or field). These can be used to attach any other real, ! integer or character attribute to a variable. The last form is used to ! define a global real, integer or character attribute that ! applies to the dataset as a whole. ! ! ! ! The first form defines a time or space axis. 
Metadata corresponding to the type ! above are written to the file on <unit>. A unique ID for subsequen ! references to this axis is returned in axis%id. If the <domain> ! element is present, this is recognized as a distributed data axis ! and domain decomposition information is also written if required (the ! domain decomposition info is required for multi-fileset multi-threaded ! I/O). If the <data> element is allocated, it is considered to be a ! space axis, otherwise it is a time axis with an unlimited dimension. Only ! one time axis is allowed per file. ! ! ! ! The second form defines a field. Metadata corresponding to the type ! above are written to the file on <unit>. A unique ID for subsequen ! references to this field is returned in field%id. At least one axis ! must be associated, 0D variables are not considered. mpp_write_meta ! must previously have been called on all axes associated with this ! field. ! ! ! ! ! ! The third form (3 - 5) defines metadata associated with a previously defined ! axis or field, identified to mpp_write_meta by its unique ID <id>. ! The attribute is named <name> and can take on a real, integer ! or character value. <rval> and <ival> can be scalar or 1D arrays. ! This need not be called for attributes already contained in ! the type. ! ! ! ! ! ! The last form (6 - 8) defines global metadata associated with the file as a ! whole. The attribute is named <name> and can take on a real, integer ! or character value. <rval> and <ival> can be scalar or 1D arrays. ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! Note that mpp_write_meta is expecting axis data on the ! global domain even if it is a domain-decomposed axis. ! ! You cannot interleave calls to mpp_write and ! mpp_write_meta: the first call to ! mpp_write implies that metadata specification is complete. ! ! interface mpp_write_meta module procedure mpp_write_meta_var module procedure mpp_write_meta_scalar_r module procedure mpp_write_meta_scalar_i module procedure mpp_write_meta_axis_r1d module procedure mpp_write_meta_axis_i1d module procedure mpp_write_meta_axis_unlimited module procedure mpp_write_meta_field module procedure mpp_write_meta_global module procedure mpp_write_meta_global_scalar_r module procedure mpp_write_meta_global_scalar_i end interface interface mpp_copy_meta module procedure mpp_copy_meta_axis module procedure mpp_copy_meta_field module procedure mpp_copy_meta_global end interface interface mpp_modify_meta ! module procedure mpp_modify_att_meta module procedure mpp_modify_field_meta module procedure mpp_modify_axis_meta end interface ! ! ! Write to an open file. ! ! ! mpp_write is used to write data to the file on an I/O unit ! using the file parameters supplied by mpp_open. Axis and field definitions must ! have previously been written to the file using mpp_write_meta. There are three ! forms of mpp_write, one to write axis data, one to write ! distributed field data, and one to write non-distributed field ! data. Distributed data refer to arrays whose two ! fastest-varying indices are domain-decomposed. Distributed data must ! be 2D or 3D (in space). Non-distributed data can be 0-3D. ! ! The data argument for distributed data is expected by ! mpp_write to contain data specified on the data domain, ! and will write the data belonging to the compute domain, ! fetching or sending data as required by the parallel I/O mode specified in the mpp_open call. This ! is consistent with our definition of domains, where all arrays are ! 
expected to be dimensioned on the data domain, and all operations ! performed on the compute domain. ! ! The type of the data argument must be a default ! real, which can be 4 or 8 byte. ! ! ! ! ! ! tstamp is an optional argument. It is to ! be omitted if the field was defined not to be a function of time. ! Results are unpredictable if the argument is supplied for a time- ! independent field, or omitted for a time-dependent field. Repeated ! writes of a time-independent field are also not recommended. One ! time level of one field is written per call. tstamp must be an 8-byte ! real, even if the default real type is 4-byte. ! ! ! The type of write performed by mpp_write depends on the file ! characteristics on the I/O unit specified at the mpp_open call. Specifically, the format of ! the output data (e.g netCDF or IEEE), the threading and ! fileset flags, etc., can be changed there, and require no ! changes to the mpp_write calls. ! ! Packing is currently not implemented for non-netCDF files, and the ! pack attribute is ignored. On netCDF files, ! NF_DOUBLEs (8-byte IEEE floating point numbers) are ! written for pack=1 and NF_FLOATs for ! pack=2. (pack=2 gives the customary ! and default behaviour). We write NF_SHORTs (2-byte ! integers) for pack=4, or NF_BYTEs ! (1-byte integers) for pack=8. Integer scaling is done ! using the scale and add attributes at ! pack=4 or 8, satisfying the relation ! !
!    data = packed_data*scale + add
!    
! ! NOTE: mpp_write does not check to see if the scaled ! data in fact fits into the dynamic range implied by the specified ! packing. It is incumbent on the user to supply correct scaling ! attributes. ! ! You cannot interleave calls to mpp_write and ! mpp_write_meta: the first call to ! mpp_write implies that metadata specification is ! complete. !
!
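!
! As an illustration of the relation above, for pack=4 (2-byte netCDF
! output) the scale and add attributes could be chosen from the global
! extrema of the field so that the packed values span the signed 16-bit
! range. This is a sketch of one possible choice, not a prescription;
! the optional argument names follow the fieldtype components described
! in the header, and the exact mpp_write_meta argument list is
! documented with its interface above.
!
!     real :: dmin, dmax, scale, add
!     dmin  = minval(a)                   !in a distributed run these should
!     dmax  = maxval(a)                   !be the global, not local, extrema
!     add   = 0.5*(dmax + dmin)
!     scale = (dmax - dmin)/real(2*32767) !packed values lie in [-32767,32767]
!     call mpp_write_meta( unit, field, (/x,y,z,t/), 'a', '(m/s)', 'a field', &
!          scale=scale, add=add, pack=4 )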
interface write_record module procedure write_record_default #ifdef OVERLOAD_R8 module procedure write_record_r8 #endif end interface interface mpp_write module procedure mpp_write_2ddecomp_r2d module procedure mpp_write_2ddecomp_r3d module procedure mpp_write_2ddecomp_r4d #ifdef OVERLOAD_R8 module procedure mpp_write_2ddecomp_r2d_r8 module procedure mpp_write_2ddecomp_r3d_r8 module procedure mpp_write_2ddecomp_r4d_r8 #endif module procedure mpp_write_r0D module procedure mpp_write_r1D module procedure mpp_write_r2D module procedure mpp_write_r3D module procedure mpp_write_r4D module procedure mpp_write_axis end interface !*********************************************************************** ! ! ! Write to an opened, sparse data, compressed file (e.g. land_model) ! ! ! These routines are similar to mpp_write except that they are ! designed to handle sparse, compressed vectors of data such ! as from the land model. Currently, the sparse vector may vary in z. ! Hence the need for the rank 2 treatment. ! ! ! ! ! ! ! ! nelems is a vector containing the number of elements expected ! from each member of the io_domain. It MUST have the same order as ! the io_domain pelist. ! ! ! tstamp is an optional argument. It is to ! be omitted if the field was defined not to be a function of time. ! Results are unpredictable if the argument is supplied for a time- ! independent field, or omitted for a time-dependent field. Repeated ! writes of a time-independent field are also not recommended. One ! time level of one field is written per call. tstamp must be an 8-byte ! real, even if the default real type is 4-byte. ! ! ! ! mpp_write_meta must be called prior to calling ! mpp_write_compressed. ! Since in general, the vector is distributed across the io-domain ! The write expects the io_domain to be defined. ! ! interface mpp_write_compressed module procedure mpp_write_compressed_r1d module procedure mpp_write_compressed_r2d module procedure mpp_write_compressed_r3d end interface mpp_write_compressed !*********************************************************************** ! ! ! Write to an opened file along the unlimited axis (e.g. icebergs) ! ! ! These routines are similar to mpp_write except that they are ! designed to handle data written along the unlimited axis that ! is not time (e.g. icebergs). ! ! ! ! ! ! ! ! nelems is a vector containing the number of elements expected ! from each member of the io_domain. It MUST have the same order as ! the io_domain pelist. ! ! ! mpp_write_meta must be called prior to calling ! mpp_write_unlimited_axis. ! Since in general, the vector is distributed across the io-domain ! The write expects the io_domain to be defined. ! ! interface mpp_write_unlimited_axis module procedure mpp_write_unlimited_axis_r1d end interface mpp_write_unlimited_axis !*********************************************************************** ! ! ! Define an dimension variable ! ! ! Similar to the mpp_write_meta routines, but simply defines the ! a dimension variable with the optional attributes ! ! ! ! ! ! ! interface mpp_def_dim module procedure mpp_def_dim_nodata module procedure mpp_def_dim_int module procedure mpp_def_dim_real end interface mpp_def_dim !*********************************************************************** ! ! module variables ! !*********************************************************************** logical :: module_is_initialized = .FALSE. logical :: verbose =.FALSE. logical :: debug = .FALSE. 
integer :: maxunits, unit_begin, unit_end integer :: mpp_io_stack_size=0, mpp_io_stack_hwm=0 integer :: varnum=0 integer :: pe, npes character(len=256) :: text integer :: error integer :: records_per_pe integer :: mpp_read_clock=0, mpp_write_clock=0 integer :: mpp_open_clock=0, mpp_close_clock=0 !initial value of buffer between meta_data and data in .nc file integer :: header_buffer_val = 16384 ! value used in NF__ENDDEF logical :: global_field_on_root_pe = .true. logical :: io_clocks_on = .false. integer :: shuffle = 0 integer :: deflate = 0 integer :: deflate_level = -1 logical :: cf_compliance = .false. namelist /mpp_io_nml/header_buffer_val, global_field_on_root_pe, io_clocks_on, & shuffle, deflate_level, cf_compliance real(DOUBLE_KIND), allocatable :: mpp_io_stack(:) type(axistype),save :: default_axis !provided to users with default components type(fieldtype),save :: default_field !provided to users with default components type(atttype),save :: default_att !provided to users with default components type(filetype), allocatable :: mpp_file(:) integer :: pack_size ! = 1 when compiling with -r8 and = 2 when compiling with -r4. ! Include variable "version" to be written to log file. #include !---------- !ug support public :: mpp_io_unstructured_write public :: mpp_io_unstructured_read interface mpp_io_unstructured_write module procedure mpp_io_unstructured_write_r_1D module procedure mpp_io_unstructured_write_r_2D module procedure mpp_io_unstructured_write_r_3D module procedure mpp_io_unstructured_write_r_4D end interface mpp_io_unstructured_write interface mpp_io_unstructured_read module procedure mpp_io_unstructured_read_r_1D module procedure mpp_io_unstructured_read_r_2D module procedure mpp_io_unstructured_read_r_3D end interface mpp_io_unstructured_read !---------- contains #include #include #include #include #include !---------- !ug support #include #include !---------- end module mpp_io_mod