"""!Object structure for describing MPI programs.

Do not load this module directly.  It is meant to be loaded only by
the produtil.run module.

This module handles execution of MPI programs, and execution of
groups of non-MPI programs through an MPI interface (which requires
all sorts of tricks).  This module is also the interface to the
various produtil.mpi_impl.* modules that generate the shell command
to run MPI programs.  This module is built on top of the
produtil.prog module and uses it to run the MPI-launching program for
your local cluster (mpiexec, mpirun, poe, etc.)

In addition, this module contains code to simplify adding new MPI
implementations to the produtil.mpi_impl subpackage.

High-level code, such as the HWRF scripts, use the produtil.run
module to generate object trees of MPIRanksBase objects.  The
produtil.mpi_impl subpackages then implement an mpirunner function
that turns those into a produtil.prog.Runner to be directly executed.
The MPIRanksBase object, and its subclasses, implement a few
utilities to automate that for you:

* to_arglist --- converts the MPI ranks to an mpi launcher command
  as a produtil.prog.Runner, or to an array of strings for a command
  file.

* nranks --- calculates the number of requested MPI ranks

* expand_iter --- iterates over groups of identical MPI ranks

* check_serial --- tells whether this program is running MPI
  programs, or running serial programs as if they were MPI (or both,
  which most MPI implementations don't support)

For MPI implementations that require a command file, see the
produtil.mpi_impl.mpi_impl_base CMDFGen class to have the
produtil.prog module automatically write the command file before
executing the program.  The produtil.mpi_impl.mpirun_lsf module shows
an example of how to use it.

See the produtil.run module for full documentation."""

__all__=[]  # ensure nothing is exported by "from produtil.mpiprog import *"

import io
import sys
import logging
import produtil.prog
from produtil.prog import ProgSyntaxError,shbackslash

# NOTE: the method bodies below are reconstructions sketched from the
# docstrings and fragments that survived in the compiled module;
# consult the upstream produtil source for the authoritative logic.

class MPIProgSyntaxError(ProgSyntaxError):
    """!Base class of syntax errors in MPI program specifications."""
class ComplexProgInput(MPIProgSyntaxError):
    """!Raised when something that cannot be expressed as a pure MPI
    rank is given as a pure MPI rank."""
class NotMPIProg(MPIProgSyntaxError):
    """!Raised when an MPI program was expected but something else
    was given."""
class NotSerialProg(MPIProgSyntaxError):
    """!Raised when a serial program was expected, but something else
    was given."""
class InputsNotStrings(MPIProgSyntaxError):
    """!Raised when the validation scripts were expecting string
    arguments or string executable names, but something else was
    found."""

class MixedValues(object):
    """!Special type for the MIXED_VALUES constant."""
class MixedValuesError(Exception):
    """!Indicates that an iterator over information specific to an
    MPI rank cannot iterate over a group of ranks collectively
    because the information in each rank differs.  This is used in
    MPMD mode for local options when the various MPI ranks have
    different local options."""

## Sentinel returned by group-level accessors (turbo mode, threads,
# ranks per node) when the ranks in the group do not all agree on a
# single value.
MIXED_VALUES=MixedValues()

class MPIRanksBase(object):
    """!This is the abstract superclass of all classes that represent
    one or more MPI ranks, including MPI ranks that are actually
    serial programs.

    Subclasses of MPIRanksBase allow an MPI program to be represented
    as a tree of MPIRanksBase objects, in such a way that they can be
    easily converted to a produtil.prog.Runner object for execution.
    The actual conversion to a Runner is done in the
    produtil.mpi_impl package (see produtil/mpi_impl/__init__.py)."""

    def to_arglist(self,to_shell=False,expand=False,shell_validate=None,
                   pre=[],before=[],between=[],after=[],post=[],extra={},
                   include_localopts=False):
        """!This is the underlying implementation of most of the
        mpi_impl modules, and hence of make_runner as well.  It
        converts this group of MPI ranks into a set of arguments
        suitable for sending to a Runner object or for writing to a
        command file.  This is done by iterating over either all
        ranks (if expand=True) or groups of repeated ranks (if
        expand=False), converting their arguments to a list.  It
        prepends an executable, and can insert other arguments in
        specified locations (given in the pre, before, between, after
        and post arguments).  It can also use the to_shell argument
        to convert programs to POSIX sh commands, and it performs
        simple string interpolation via the "extra" hash.

        If to_shell=False then the executable and arguments are
        inserted directly into the output list.  Otherwise (when
        to_shell=True) the to_shell subroutine is called on the
        MPIRank object to produce a single argument that contains a
        shell command.  That single argument is then used in place of
        the executable and arguments.  Note that this may raise
        NotValidPosixSh (or a subclass thereof) if the command cannot
        be expressed as a shell command.  In addition, if
        shell_validate is not None, then it is called on each
        post-conversion shell argument, and the return value is used
        instead.

        You can specify additional argument lists to be inserted in
        certain locations.  Each argument in those lists will be
        processed through the % operator, specifying "extra" as the
        keyword list with two new keywords added: "nworld" is the
        number of ranks in the MPI program, and "n" is the number in
        the current group of repeated ranks if expand=False (n=1 if
        expand=True).  Those argument lists are: pre, before, between,
        after and post.

        @param to_shell If True, convert the executable and arguments
          to a POSIX sh command instead of inserting them directly.
        @param expand If True, groups of repeated ranks are expanded.
        @param shell_validate A function to convert each argument to
          some "shell-acceptable" version.
        @param pre Inserted before everything else.  This is where
          you would put the "mpiexec" and any global settings.
        @param before Inserted before each rank (if expand=True) or
          group (if expand=False).
        @param between Inserted between each rank (if expand=True) or
          group (if expand=False).
        @param after Inserted after each rank (if expand=True) or
          group (if expand=False).
        @param post Appended at the end of the list of arguments.
        @param extra Used for string expansion.
        @param include_localopts If True, then the local options (see
          setlocalopts()) are inserted between the "before" arguments
          and the command."""
        kw=dict(extra)
        kw['nworld']=self.nranks()
        for x in pre: yield x%kw
        first=True
        for rank,count in self.expand_iter(bool(expand)):
            assert(isinstance(rank,MPIRanksBase))
            assert(isinstance(count,int))
            if count<1: continue
            if first:
                first=False
            else:
                for x in between: yield x%kw
            kw['n']=count
            for x in before: yield x%kw
            if include_localopts:
                for localopt in rank.localoptiter():
                    yield localopt
            if to_shell:
                if shell_validate is not None:
                    yield shell_validate(rank.to_shell())
                else:
                    yield rank.to_shell()
            else:
                for arg in rank.args():
                    yield arg
            for x in after: yield x%kw
        for x in post: yield x%kw
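
    # Illustrative sketch (not part of the original module): a
    # hypothetical mpi_impl backend could build an "mpiexec" command
    # line from any MPIRanksBase tree roughly like this, where
    # "ranks" is an MPIRanksBase object:
    #
    #   args=list(ranks.to_arglist(
    #       to_shell=False, expand=False,
    #       pre=['mpiexec'], before=['-n','%(n)d'], between=[':']))
    #
    # For a 4-rank program this would yield something like
    # ['mpiexec','-n','4','./exe','arg1',...].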

    def haslocalopts(self):
        """!Returns True if setlocalopts(), addlocalopts() or
        addlocalopt() was called to add localopt values."""
        return bool(self._localopts)

    def setlocalopts(self,localopts):
        """!Sets MPI options that are only meaningful to the
        currently used MPI configuration.

        This function lets the ush-level scripts pass
        platform-specific information to the produtil.mpi_impl
        package, in order to make platform-specific changes to the
        way in which MPI programs are launched.  These local options
        are a list of options that are sent for groups of MPI ranks.
        If setlocalopts() is called on a high-level group of ranks,
        such as an MPIRanksMPMD, then it applies to all ranks within.

        @param localopts Options to set.  These replace any options
          already set.  Use addlocalopts() to append to the end
          instead.
        @returns self"""
        self._localopts=[x for x in localopts]
        return self

    def addlocalopts(self,localopts):
        """!Appends an iterable of platform-specific MPI options to
        the end of the local option list.  See setlocalopts() for
        details on local options.
        @param localopts Iterable of options to append.
        @returns self"""
        self._localopts.extend(localopts)
        return self

    def addlocalopt(self,localopt):
        """!Appends one platform-specific MPI option to the end of
        the local option list.  See setlocalopts() for details on
        local options.
        @param localopt Option to append.
        @returns self"""
        self._localopts.append(localopt)
        return self

    def localoptiter(self):
        """!Iterates over local MPI configuration options for this
        rank or group of ranks."""
        for x in self._localopts:
            yield x

    def mixedlocalopts(self):
        """!Do the MPI ranks within this group contain mixed values
        for local options?"""
        return False

    def samelocalopts(self,other):
        """!Returns True if the other rank or group of ranks has the
        same local options as this one."""
        i=0
        for localopt in other.localoptiter():
            if i>=len(self._localopts): return False
            if self._localopts[i]!=localopt: return False
            i+=1
        return True

    def getturbomode(self):
        """!Do we want turbo mode to be enabled for this set of
        ranks?
        @returns None if unknown, True if turbo mode is explicitly
          enabled and False if turbo mode is explicitly disabled."""
        return self._turbomode

    def setturbomode(self,tm):
        """!Sets the turbo mode setting: on (True) or off (False)."""
        tm=bool(tm)
        self._turbomode=tm
        logging.getLogger('mpiprog.py').info('TURBO MODE IS %s'%repr(tm))
        return self

    def delturbomode(self):
        """!Removes the request for turbo mode to be on or off."""
        self._turbomode=None

    turbomode=property(getturbomode,setturbomode,delturbomode,
        """Turbo mode setting for this group of MPI ranks.""")

    def turbo(self,flag=True):
        """!Requests turbo mode to be on (True) or off (False).
        @returns self"""
        self.turbomode=flag
        return self

    def rpn(self,count=0):
        """!Requests a number of MPI ranks per node.
        @returns self"""
        self.ranks_per_node=count
        return self

    def env(self,**kwargs):
        """!Sets environment variables in the individual MPI ranks."""
        raise NotImplementedError('Subclass did not implement env()')

    def make_runners_immutable(self):
        """!Returns a copy of this object where all child
        produtil.prog.Runner objects have been replaced with
        produtil.prog.ImmutableRunner objects."""

    def get_logger(self):
        """!Returns a logging.Logger object for this MPIRanksBase or
        one from its child MPIRanksBase objects (if it has any).  If
        no logger is found, None is returned."""

    def check_serial(self):
        """!Returns a tuple (s,p) where s=True if there are serial
        ranks in this part of the MPI program, and p=True if there
        are parallel ranks.  Note that it is possible that both could
        be True, which is an error.  It is also possible that neither
        are True if there are zero ranks."""
        return (False,False)

    def getranks_per_node(self):
        """!Returns the number of MPI ranks per node requested by
        this MPI rank, or 0 if unspecified."""
        return self._ranks_per_node

    def setranks_per_node(self,count):
        """!Sets the number of MPI ranks per node requested by this
        MPI rank."""
        count=int(count)
        if count<0:
            raise ValueError('Ranks per node must be >=0 not %d'%count)
        self._ranks_per_node=count

    def delranks_per_node(self):
        """!Unsets the requested number of ranks per node."""
        self._ranks_per_node=0

    ranks_per_node=property(getranks_per_node,setranks_per_node,
        delranks_per_node,
        """The number of MPI ranks per node or 0 if no specific request is made.""")
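
    # Illustrative sketch (not part of the original module): the
    # fluent setters above allow per-rank resource requests to be
    # chained.  Assuming a hypothetical executable "./fcst.exe":
    #
    #   prog=MPIRank('./fcst.exe').turbo(True).rpn(24)
    #   prog.threads=2      # request 2 threads per rank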

    def nranks(self):
        """!Returns the number of ranks in this part of the MPI
        program."""
        return 0

    def ranks(self):
        """!Iterates over all MPIRank objects in this part of the MPI
        program."""

    def ngroups(self):
        """!Returns the number of groups of repeated MPI ranks in the
        MPI program."""
        return 0

    def groups(self,threads=False):
        """!Iterates over all groups of repeating MPI ranks in the
        MPI program, returning tuples (r,c) containing a rank r and
        the count (number) of that rank c.
        @param threads If True, then a three-element tuple is
          iterated, (r,c,t), where the third element is the number of
          threads."""

    def getthreads(self):
        """!Returns the number of threads requested by this MPI rank,
        or by each MPI rank in this group of MPI ranks.  If different
        ranks have different numbers of threads, returns the maximum
        requested.  Returns None if no threads are requested."""
        threads=None
        for r,c,t in self.groups(threads=True):
            if threads is None and t is not None:
                threads=t
            elif threads is not None and t is not None and t>threads:
                threads=t
        return threads

    def nonzero_threads(self):
        """!The number of threads requested, or 1 if no threads are
        requested.  This is a simple wrapper around getthreads()."""
        threads=self.getthreads()
        if threads is None:
            return 1
        return max(1,int(threads))

    def setthreads(self,nthreads):
        """!Sets the number of threads requested by each MPI rank
        within this group of MPI ranks."""
        for r,c in self.groups():
            if r is not self:
                r.threads=nthreads
        return self

    def delthreads(self):
        """!Removes the request for threads."""
        for r,c in self.groups():
            del r.threads

    threads=property(getthreads,setthreads,delthreads,
        """The number of threads per rank.""")

    def __mul__(self,factor):
        """!Returns a new set of MPI ranks that consist of this group
        of ranks repeated "factor" times.
        @param factor how many times to duplicate"""
        return NotImplemented

    def __rmul__(self,other):
        """!Returns a new set of MPI ranks that consist of this group
        of ranks repeated "other" times.
        @param other how many times to duplicate"""
        return NotImplemented

    def __add__(self,other):
        """!Returns a new set of MPI ranks that consist of this set
        of ranks with the "other" set appended.
        @param other the data to append"""
        return NotImplemented

    def __radd__(self,other):
        """!Returns a new set of MPI ranks that consist of the
        "other" set of ranks with this set appended.
        @param other the data to prepend"""
        return NotImplemented

    def isplainexe(self):
        """!Determines if this set of MPI ranks can be represented by
        a single serial executable with a single set of arguments run
        without MPI.  Returns False by default: this function can
        only return True for MPISerial."""
        return False

    def to_shell(self):
        """!Returns a POSIX sh command that will execute the serial
        program, if possible, or raise a subclass of NotValidPosixSh
        otherwise.  Works only on single MPI ranks that are actually
        MPI wrappers around a serial program (i.e.: from
        mpiserial)."""
        raise NotSerialProg('This is an MPI program, so it cannot be '
                            'represented as a non-MPI POSIX sh command.')

    def expand_iter(self,expand,threads=False):
        """!This is a wrapper around ranks() and groups() which will
        call self.groups() if expand=False.  If expand=True, this
        will call ranks(), returning a tuple (rank,1) for each rank.
        @param expand If True, expand groups of identical ranks into
          one rank per member.
        @param threads If True, then a third element will be in each
          tuple: the number of requested threads per MPI rank."""
        if expand:
            if threads:
                for rank in self.ranks():
                    yield (rank,1,rank.threads)
            else:
                for rank in self.ranks():
                    yield (rank,1)
        else:
            if threads:
                for rank,count,nthreads in self.groups(threads=True):
                    yield (rank,count,nthreads)
            else:
                for rank,count in self.groups(threads=False):
                    yield (rank,count)
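
    # Illustrative sketch (not part of the original module): for a
    # hypothetical program built as  MPIRank('./a.exe')*4 +
    # MPIRank('./b.exe')*2,  groups() yields two (rank,count) pairs,
    # (a,4) and (b,2), while expand_iter(True) yields six (rank,1)
    # pairs -- four for a.exe followed by two for b.exe.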

    def __repr__(self):
        """!Returns a string representation of this object intended
        for debugging."""
        raise NotImplementedError('This class did not implement __repr__.')

    def __eq__(self,other):
        # Sketch reconstructed from surviving fragments: two rank
        # trees are equal when their expanded sequences of ranks
        # match pairwise.
        if not hasattr(other,'expand_iter'):
            return NotImplemented
        siter=iter(self.expand_iter(True))
        oiter=iter(other.expand_iter(True))
        while True:
            try:
                srank,scount=next(siter)
            except StopIteration:
                srank=None
            try:
                orank,ocount=next(oiter)
            except StopIteration:
                orank=None
            if srank is None and orank is None:
                return True   # both sequences ended at the same time
            if srank is None or orank is None:
                return False  # one sequence is longer than the other
            if srank!=orank:
                return False

class MPIRanksSPMD(MPIRanksBase):
    """!Represents one MPI program duplicated across many ranks."""
    def __init__(self,mpirank,count):
        """!MPIRanksSPMD constructor
        @param mpirank the program to run
        @param count how many times to run it"""
        if not isinstance(mpirank,MPIRank):
            raise MPIProgSyntaxError(
                'Input to MPIRanksSPMD must be an MPIRank.')
        self._mpirank=mpirank
        self._count=int(count)
        self._localopts=list(mpirank.localoptiter())
        self._turbomode=mpirank.turbomode

    def env(self,**kwargs):
        """!Sets environment variables in the individual MPI ranks."""
        self._mpirank=self._mpirank.env(**kwargs)
        return self

    def getranks_per_node(self):
        """!Returns the number of MPI ranks per node requested by
        this MPI program, or 0 if unspecified."""
        return self._mpirank.ranks_per_node
    def setranks_per_node(self,count):
        """!Sets the number of MPI ranks per node requested by this
        MPI program."""
        self._mpirank.ranks_per_node=count
    def delranks_per_node(self):
        """!Unsets the requested number of ranks per node."""
        del self._mpirank.ranks_per_node
    ranks_per_node=property(getranks_per_node,setranks_per_node,
        delranks_per_node,
        """The number of MPI ranks per node or 0 if no specific request is made.""")

    def setturbomode(self,tm):
        """!Sets the turbo mode setting: on (True) or off (False)."""
        tm=bool(tm)
        self._mpirank.turbomode=tm
        self._turbomode=tm
        return self
    def getturbomode(self):
        """!Returns the turbo mode setting: None, True or False."""
        return self._turbomode
    def delturbomode(self):
        """!Removes the request for turbo mode to be on or off."""
        del self._mpirank.turbomode
        self._turbomode=None
    turbomode=property(getturbomode,setturbomode,delturbomode,
        """Turbo mode setting for this group of MPI ranks.""")

    def setlocalopts(self,localopts):
        """!Replaces the local MPI options for this group and its
        rank.  See MPIRanksBase.setlocalopts() for details."""
        self._localopts=[x for x in localopts]
        self._mpirank.setlocalopts(localopts)
        return self
    def addlocalopts(self,localopts):
        """!Appends local MPI options to this group and its rank.
        See MPIRanksBase.setlocalopts() for details."""
        self._localopts.extend(localopts)
        self._mpirank.addlocalopts(localopts)
        return self
    def addlocalopt(self,localopt):
        """!Appends one local MPI option to this group and its rank.
        See MPIRanksBase.setlocalopts() for details."""
        self._localopts.append(localopt)
        self._mpirank.addlocalopt(localopt)
        return self

    def make_runners_immutable(self):
        """!Returns a new MPIRanksSPMD with an immutable version of
        self._mpirank."""
        return MPIRanksSPMD(self._mpirank.make_runners_immutable(),
                            self._count)

    def __repr__(self):
        """!Returns "X*N" where X is the MPI program and N is the
        number of ranks."""
        return '%s*%d'%(repr(self._mpirank),int(self._count))

    def ngroups(self):
        """!Returns 1 or 0: 1 if there are ranks and 0 if there are
        none."""
        if self._count>0:
            return 1
        return 0

    def groups(self,threads=False):
        """!Yields a tuple (X,N) where X is the mpi program and N is
        the number of ranks."""
        if threads:
            yield self._mpirank,self._count,self._mpirank.threads
        else:
            yield self._mpirank,self._count

    def copy(self):
        """!Returns a deep copy of self."""
        c=MPIRanksSPMD(self._mpirank.copy(),self._count)
        c._turbomode=self._turbomode
        c._localopts=list(self._localopts)
        return c

    def ranks(self):
        """!Iterates over MPI ranks within self."""
        if self._count>0:
            for i in range(self._count):
                yield self._mpirank

    def nranks(self):
        """!Returns the number of ranks this program requests."""
        if self._count>0:
            return self._count
        return 0

    def __mul__(self,factor):
        """!Multiply the number of requested ranks by some factor."""
        if not isinstance(factor,int):
            return NotImplemented
        return MPIRanksSPMD(self._mpirank,self._count*factor)

    def __rmul__(self,factor):
        """!Multiply the number of requested ranks by some factor."""
        if not isinstance(factor,int):
            return NotImplemented
        return MPIRanksSPMD(self._mpirank,self._count*factor)

    def __add__(self,other):
        """!Add some new ranks to self.  If they are not identical to
        the MPI program presently requested, this returns a new
        MPIRanksMPMD."""
        # Sketch reconstructed from surviving fragments.
        if not hasattr(other,'nranks'):
            raise TypeError('%s %s: has no nranks'
                            %(type(other).__name__,repr(other)))
        same=False
        if other.samelocalopts(self) \
                and other.turbomode==self.turbomode \
                and other.ranks_per_node==self.ranks_per_node:
            same=True
            for rank,count in other.groups():
                if not rank==self._mpirank:
                    same=False
        if same:
            return MPIRanksSPMD(self._mpirank,
                                self.nranks()+other.nranks())
        return MPIRanksMPMD([self.copy(),other.copy()])

    def check_serial(self):
        """!Checks to see if this program contains serial (non-MPI)
        or MPI components.
        @returns a tuple (serial,parallel) where serial is True if
          there are serial components, and parallel is True if there
          are parallel components.  If there are no components,
          returns (False,False)."""
        if self._count>0:
            return self._mpirank.check_serial()
        return (False,False)

    def get_logger(self):
        """!Returns my MPI program's logger."""
        return self._mpirank.get_logger()
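
# Illustrative sketch (not part of the original module): repeating a
# single rank produces an MPIRanksSPMD.  Assuming a hypothetical
# executable "./fcst.exe":
#
#   fcst=MPIRank('./fcst.exe')*120
#   fcst.nranks()    # => 120
#   repr(fcst)       # => "mpi('./fcst.exe')*120"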

class MPIRanksMPMD(MPIRanksBase):
    """!Represents a group of MPI programs, each of which has some
    number of ranks assigned."""
    def __init__(self,args):
        """!MPIRanksMPMD constructor
        @param args an array of MPIRanksBase to execute."""
        self._el=list(args)
        self._ngcache=None
        self._nrcache=None
        self._threads=None
        if args:
            self._localopts=[x for x in args[0].localoptiter()]
            self._turbomode=args[0].turbomode
            self._ranks_per_node=args[0].ranks_per_node
        else:
            self._localopts=list()
            self._turbomode=None
            self._ranks_per_node=0

    def env(self,**kwargs):
        """!Sets environment variables in the individual MPI ranks."""
        self._el=[e.env(**kwargs) for e in self._el]
        return self

    def setturbomode(self,tm):
        """!Sets the turbo mode setting on all ranks within."""
        tm=bool(tm)
        for el in self._el:
            el.turbomode=tm
        self._turbomode=tm
        return self
    def getturbomode(self):
        """!Returns the turbo mode setting shared by all ranks
        within, or MIXED_VALUES if they disagree."""
        result=self._el[0].turbomode
        for el in self._el[1:]:
            if el.turbomode!=result:
                return MIXED_VALUES
        return result
    def delturbomode(self):
        """!Removes the request for turbo mode on all ranks within."""
        for el in self._el:
            del el.turbomode
        self._turbomode=None
    turbomode=property(getturbomode,setturbomode,delturbomode,
        """Turbo mode setting for this group of MPI ranks.""")

    def setthreads(self,nthreads):
        """!Sets the number of threads requested by each MPI rank
        within this group of MPI ranks."""
        for el in self._el:
            el.threads=nthreads
        return self
    def getthreads(self):
        """!Returns the number of threads shared by all ranks within,
        or MIXED_VALUES if they disagree."""
        result=self._el[0].threads
        for el in self._el[1:]:
            if el.threads!=result:
                return MIXED_VALUES
        return result
    def delthreads(self):
        """!Removes the request for threads on all ranks within."""
        for el in self._el:
            del el.threads
    threads=property(getthreads,setthreads,delthreads,
        """The number of threads per rank.""")

    def setranks_per_node(self,count):
        """!Sets the number of MPI ranks per node on all ranks
        within."""
        count=int(count)
        for el in self._el:
            el.ranks_per_node=count
        self._ranks_per_node=count
        return self
    def getranks_per_node(self):
        """!Returns the ranks per node shared by all ranks within, or
        MIXED_VALUES if they disagree."""
        result=self._el[0].ranks_per_node
        for el in self._el[1:]:
            if el.ranks_per_node!=result:
                return MIXED_VALUES
        return result
    def delranks_per_node(self):
        """!Unsets the requested number of ranks per node."""
        for el in self._el:
            del el.ranks_per_node
        self._ranks_per_node=0
    ranks_per_node=property(getranks_per_node,setranks_per_node,
        delranks_per_node,
        """Ranks per node for this group of MPI ranks, or 0 if unspecified.""")

    def setlocalopts(self,localopts):
        """!Replaces the local MPI options on this group and all
        ranks within.  See MPIRanksBase.setlocalopts() for details."""
        self._localopts=[x for x in localopts]
        for el in self._el:
            el.setlocalopts(localopts)
        return self
    def addlocalopts(self,localopts):
        """!Appends local MPI options to this group and all ranks
        within."""
        self._localopts.extend(localopts)
        for el in self._el:
            el.addlocalopts(localopts)
        return self
    def addlocalopt(self,localopt):
        """!Appends one local MPI option to this group and all ranks
        within."""
        self._localopts.append(localopt)
        for el in self._el:
            el.addlocalopt(localopt)
        return self

    def mixedlocalopts(self):
        """!Do the MPI ranks within this group contain mixed values
        for local options?"""
        for el in self._el[1:]:
            if not self._el[0].samelocalopts(el):
                return True
        return False

    def localoptiter(self):
        """!Iterates over local MPI configuration options for this
        group of ranks.  Raises MixedValuesError if the ranks within
        do not agree on the local options."""
        if self.mixedlocalopts():
            raise MixedValuesError()
        for x in self._el[0].localoptiter():
            yield x

    def make_runners_immutable(self):
        """!Tells each containing element to make its
        produtil.prog.Runners into produtil.prog.ImmutableRunners so
        that changes to them will not change the original."""
        return MPIRanksMPMD([el.make_runners_immutable()
                             for el in self._el])

    def __repr__(self):
        """!Returns a pythonic description of this object."""
        reprs=[]
        for el in self._el:
            if el.nranks()>0:
                reprs.append(repr(el))
        return ' + '.join(reprs)

    def ngroups(self):
        """!How many groups of identical repeated ranks are in this
        MPMD program?"""
        if self._ngcache is None:
            ng=0
            for el in self._el:
                ng+=el.ngroups()
            self._ngcache=ng
        return self._ngcache

    def nranks(self):
        """!How many ranks does this program request?"""
        if self._nrcache is None:
            nr=0
            for el in self._el:
                nr+=el.nranks()
            self._nrcache=nr
        return self._nrcache

    def groups(self,threads=False):
        """!Iterates over (rank,count) or (rank,count,threads) tuples
        for all groups of repeated ranks within."""
        if threads:
            for el in self._el:
                for rank,count,nthreads in el.groups(threads=True):
                    yield rank,count,nthreads
        else:
            for el in self._el:
                for rank,count in el.groups():
                    yield rank,count

    def ranks(self):
        """!Iterates over all MPIRank objects within."""
        for el in self._el:
            for rank in el.ranks():
                yield rank

    def __add__(self,other):
        """!Adds more ranks to this program.
        @param other an MPIRanksMPMD or MPIRanksSPMD to add"""
        if isinstance(other,MPIRanksMPMD):
            return MPIRanksMPMD(self._el+other._el)
        if isinstance(other,MPIRanksSPMD) or isinstance(other,MPIRank):
            return MPIRanksMPMD(self._el+[other])
        return NotImplemented

    def __radd__(self,other):
        """!Prepends more ranks to this program.
        @param other an MPIRanksMPMD or MPIRanksSPMD to prepend"""
        if isinstance(other,MPIRanksMPMD):
            return MPIRanksMPMD(other._el+self._el)
        if isinstance(other,MPIRanksSPMD) or isinstance(other,MPIRank):
            return MPIRanksMPMD([other]+self._el)
        return NotImplemented

    def __mul__(self,factor):
        """!Duplicates this MPMD program "factor" times.
        @param factor how many times to duplicate this program."""
        if isinstance(factor,int):
            return MPIRanksMPMD(self._el*factor)
        return NotImplemented

    def __rmul__(self,factor):
        """!Duplicates this MPMD program "factor" times.
        @param factor how many times to duplicate this program."""
        if isinstance(factor,int):
            return MPIRanksMPMD(self._el*factor)
        return NotImplemented

    def check_serial(self):
        """!Checks to see if this program contains serial (non-MPI)
        or MPI components.
        @returns a tuple (serial,parallel) where serial is True if
          there are serial components, and parallel is True if there
          are parallel components."""
        serial=False
        parallel=False
        for el in self._el:
            s,p=el.check_serial()
            serial=serial or s
            parallel=parallel or p
        return (serial,parallel)

    def get_logger(self):
        """!Returns a logging.Logger for the first rank that has
        one."""
        for el in self._el:
            logger=el.get_logger()
            if logger is not None:
                return logger
        return None
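
# Illustrative sketch (not part of the original module): adding two
# different SPMD blocks produces an MPMD program.  Assuming
# hypothetical executables "./atm.exe" and "./ocn.exe":
#
#   coupled=MPIRank('./atm.exe')*96 + MPIRank('./ocn.exe')*24
#   coupled.nranks()   # => 120
#   repr(coupled)      # => "mpi('./atm.exe')*96 + mpi('./ocn.exe')*24"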

class MPIRank(MPIRanksBase):
    """!Represents a single MPI rank."""
    def __init__(self,arg,logger=None):
        """!MPIRank constructor.
        @param arg What program to run.  Can be a
          produtil.prog.Runner, or some way of creating one, such as
          a program name or list of program+arguments.
        @param logger a logging.Logger for log messages or None to
          have no logger."""
        self._logger=logger
        self._threads=None
        self._localopts=list()
        self._turbomode=None
        self._ranks_per_node=0
        self._env=dict()
        if isinstance(arg,MPIRank):
            if self._logger is None:
                self._logger=arg._logger
            self._args=list(arg._args)
            self._localopts=list(arg._localopts)
            self._turbomode=arg._turbomode
            self._ranks_per_node=arg._ranks_per_node
        elif isinstance(arg,produtil.prog.Runner):
            if arg.isplainexe():
                self._args=[x for x in arg.args()]
            else:
                raise ComplexProgInput(
                    'Tried to convert a Runner to an MPIRank directly, '
                    'when the Runner had more than an executable and '
                    'arguments.  Use mpiserial instead.')
        elif isinstance(arg,str):
            self._args=[arg]
        elif isinstance(arg,list) or isinstance(arg,tuple):
            self._args=[x for x in arg]
        else:
            raise MPIProgSyntaxError(
                'Input to MPIRank.__init__ must be a string, a list '
                'of strings, or a Runner that contains only the '
                'executable and its arguments.')
        self.validate()

    def env(self,**kwargs):
        """!Sets environment variables to be passed to this MPI
        rank's program."""
        self._env.update(kwargs)
        return self

    def getthreads(self):
        """!Returns the number of threads requested by this MPI
        rank."""
        return self._threads
    def setthreads(self,nthreads):
        """!Sets the number of threads requested by this MPI rank."""
        if nthreads is None:
            self._threads=None
        else:
            self._threads=int(nthreads)
        return self
    def delthreads(self):
        """!Removes the request for threads."""
        self._threads=None
    threads=property(getthreads,setthreads,delthreads,
        """The number of threads per rank.""")

    def to_shell(self):
        """!Returns a POSIX sh representation of this MPI rank, if
        possible."""
        return ' '.join([shbackslash(x) for x in self.args()])

    def __getitem__(self,args):
        """!Adds arguments to this MPI rank's program."""
        c=self.copy()
        if isinstance(args,str):
            c._args.append(args)
        else:
            c._args.extend(args)
        return c

    def __repr__(self):
        """!Returns a Pythonic representation of this object for
        debugging."""
        sio=io.StringIO()
        sio.write('mpi(%s)'%(repr(self._args[0]),))
        if len(self._args)>1:
            sio.write('['+','.join([repr(x) for x in self._args[1:]])+']')
        if self.haslocalopts():
            sio.write('.setlocalopts(%s)'%(repr(self._localopts),))
        if self._threads:
            sio.write('.threads(%s)'%(repr(self._threads),))
        if self._turbomode:
            sio.write('.turbomode(%s)'%(repr(self._turbomode),))
        if self._ranks_per_node:
            sio.write('.rpn(%s)'%(repr(self._ranks_per_node),))
        ret=sio.getvalue()
        sio.close()
        return ret

    def get_logger(self):
        """!Returns a logging.Logger for this object, or None."""
        return self._logger

    def validate(self,more=None):
        """!Checks to see if this MPIRank is valid, or has errors.
        @param more Arguments to the executable to validate.
        @returns None if there are no errors, or raises a descriptive
          exception."""
        for x in self._args:
            if not isinstance(x,str):
                raise InputsNotStrings(
                    'Executable and arguments must be strings.')
        if more is not None and len(more)>0:
            for x in more:
                if not isinstance(x,str):
                    raise InputsNotStrings(
                        'Executable and arguments must be strings.')

    def args(self):
        """!Iterates over the executable and arguments, including any
        requested environment variables."""
        if self._env:
            yield '/bin/env'
            for k,v in self._env.items():
                yield '%s=%s'%(k,v)
        for arg in self._args:
            yield arg

    def copy(self):
        """!Returns a copy of self.  This is a deep copy except for
        the logger, whose reference is copied."""
        c=MPIRank(self)
        c._turbomode=self._turbomode
        c._localopts=list(self._localopts)
        c._env=dict(self._env)
        return c

    def nranks(self):
        """!Returns 1: the number of MPI ranks."""
        return 1
    def ngroups(self):
        """!Returns 1: the number of groups of identical ranks."""
        return 1
    def ranks(self):
        """!Yields self once: all MPI ranks."""
        yield self
    def groups(self,threads=False):
        """!Yields (self,1): all groups of identical ranks and the
        number per group."""
        if threads:
            yield (self,1,self._threads)
        else:
            yield (self,1)

    def __add__(self,other):
        """!Creates an MPIRanksSPMD or MPIRanksMPMD with this MPIRank
        and the other ranks.
        @param other The other ranks."""
        if not isinstance(other,MPIRanksBase):
            return NotImplemented
        if other==self:
            return MPIRanksSPMD(self,2)
        return MPIRanksMPMD([self,other])

    def __mul__(self,factor):
        """!Creates an MPIRanksSPMD with this MPIRank duplicated
        "factor" times.
        @param factor the number of times to duplicate"""
        if isinstance(factor,int):
            return MPIRanksSPMD(self,factor)
        return NotImplemented

    def __rmul__(self,factor):
        """!Creates an MPIRanksSPMD with this MPIRank duplicated
        "factor" times.
        @param factor the number of times to duplicate"""
        if isinstance(factor,int):
            return MPIRanksSPMD(self,factor)
        return NotImplemented

    def __eq__(self,other):
        """!Returns True if this MPIRank is equal to the other
        object."""
        return isinstance(other,MPIRank) \
            and self._args==other._args \
            and self.samelocalopts(other) \
            and self._turbomode==other._turbomode \
            and self._ranks_per_node==other._ranks_per_node

    def check_serial(self):
        """!Returns (False,True): this is a pure parallel program."""
        return (False,True)
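
# Illustrative sketch (not part of the original module): an MPIRank
# can be built from a program name, a list of program+arguments, or a
# simple produtil.prog.Runner.  Assuming a hypothetical "./post.exe":
#
#   MPIRank('./post.exe')                  # executable only
#   MPIRank(['./post.exe','-v','in.grb'])  # executable plus arguments
#   MPIRank('./post.exe').env(OMP_NUM_THREADS='4')  # with environment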

class MPISerial(MPIRank):
    """!Represents a single rank of an MPI program that is actually
    running a serial program.  This is supported directly by some MPI
    implementations while others require kludges to work properly."""
    def __init__(self,runner,logger=None):
        """!MPISerial constructor."""
        self._runner=runner
        self._logger=logger
        self._localopts=list()
        self._turbomode=None
        self._threads=None
        self._ranks_per_node=0

    def make_runners_immutable(self):
        """!Creates a version of self with a
        produtil.prog.ImmutableRunner child."""
        if not isinstance(self._runner,produtil.prog.ImmutableRunner):
            return MPISerial(
                produtil.prog.ImmutableRunner(self._runner),self._logger)
        return self

    def copy(self):
        """!Duplicates self."""
        c=MPISerial(self._runner,self._logger)
        c._turbomode=self._turbomode
        c._localopts=list(self._localopts)
        c._threads=self._threads
        c._ranks_per_node=self._ranks_per_node
        return c

    def __repr__(self):
        """!Returns a pythonic string representation of self for
        debugging."""
        return 'mpiserial(%s)'%(repr(self._runner),)

    def args(self):
        """!Iterates over command arguments of the child serial
        program."""
        for arg in self._runner.args():
            yield arg

    def __add__(self,other):
        """!Creates an MPIRanksSPMD or MPIRanksMPMD with this
        MPISerial and the other ranks.
        @param other The other ranks."""
        if other==self:
            return MPIRanksSPMD(self.copy(),2)
        return MPIRanksMPMD([self.copy(),other.copy()])

    def get_logger(self):
        """!Returns my logging.Logger that I use for log messages."""
        return self._logger

    @property
    def runner(self):
        """!The child produtil.prog.Runner object for the serial
        program."""
        return self._runner

    def validate(self,more=None):
        """!Does nothing."""

    def __eq__(self,other):
        """!Returns True if other is an MPISerial with the same
        Runner, False otherwise.
        @param other the other object to compare against."""
        return isinstance(other,MPISerial) \
            and self._runner==other._runner \
            and self.samelocalopts(other) \
            and self._turbomode==other._turbomode \
            and self._threads==other._threads \
            and self._ranks_per_node==other._ranks_per_node

    def check_serial(self):
        """!Returns (True,False) because this is a serial program
        (True,) and not a parallel program (,False)."""
        return (True,False)

    def isplainexe(self):
        """!Returns True if the child serial program is a plain
        executable, False otherwise.  See
        produtil.prog.Runner.isplainexe() for details."""
        return self._runner.isplainexe()

    def to_shell(self):
        """!Returns a POSIX sh version of the child serial program."""
        return self._runner.to_shell()

def collapse(ranks):
    """!Compresses a tree of MPI ranks into the smallest equivalent
    structure: consecutive identical ranks are merged into
    MPIRanksSPMD groups, and multiple distinct groups are wrapped in
    an MPIRanksMPMD.  (Body reconstructed from surviving fragments of
    the compiled module.)"""
    SPMDs=list()
    collapsed=list()
    for rank,count in ranks.expand_iter(True):
        if not SPMDs:
            SPMDs.append([rank,count])
        elif SPMDs[-1][0]==rank:
            SPMDs[-1][1]+=count
        else:
            SPMDs.append([rank,count])
    for rank,count in SPMDs:
        collapsed.append(MPIRanksSPMD(rank,count))
    if len(collapsed)>1:
        result=MPIRanksMPMD(collapsed)
    else:
        result=collapsed[0]
    return result
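
# Illustrative sketch (not part of the original module): produtil.run
# normally builds these trees for you (via its mpi(), exe() and
# mpirun() helpers), but the objects can also be composed directly.
# Assuming hypothetical executables:
#
#   wave=MPIRank('./wave.exe')*30
#   post=MPIRank('./post.exe')*6
#   prog=wave+post            # an MPIRanksMPMD with 36 ranks
#   prog.check_serial()       # => (False,True): purely parallel
#   collapse(prog)            # merges adjacent identical ranks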