Howard Robinson
Computer Science Department
University of California, Berkeley
Carlos R. Mechoso
Department of Atmospheric Sciences
University of California, Los Angeles
Leroy A. Drummond
Lawrence Berkeley Laboratory
Joseph Spahr
Department of Atmospheric Sciences
University of California, Los Angeles
John D. Farrara
Department of Atmospheric Sciences
University of California, Los Angeles
Edmund Mesrobian
Computer Science Department
University of California, Los Angeles
Having each producer send directly to each consumer conserves bandwidth, reduces memory requirements, and avoids the delay that would occur if a centralized element had to reassemble each of the fields and retransmit them. This is especially true in network topologies where the aggregate bandwidth of the system exceeds that available to each individual processor, as in the IBM SP2 and CRAY T3E.
The Data Broker is implemented using the C++ language, but provides a modest set of subroutines that simulation codes written in Fortran can call. It operates by means of a relatively simple protocol with a small number of message types, currently transported using PVM. A secondary design goal was to keep the protocol simple enough to allow other programs to interoperate with systems employing the Data Broker by using an independent implementation of the protocol.
The Data Broker was created as part of an effort to develop a comprehensive Earth System Model, incorporating General Circulation and Chemistry models of the Atmosphere and Oceans, an active archiver that stores products in tertiary storage and registers metadata in an object-relational database, and online visualization tools.
Due to the central position of the Data Broker in the Earth System Model (ESM), with consequent potential for slowing down the operation of the entire ESM, efficiency has been a paramount concern in the design and implementation of the Data Broker.
The ability to have multiple consumers for a given product, each taking delivery at a different frequency, was also a requirement from the outset: for example, the atmospheric chemistry model wants three-dimensional wind data at a rate that would be impractical to archive in its entirety.
Another capability planned from the outset was support for ephemeral consumers, such as the visualization tool, which might want to inspect different quantities at different times as the computation unfolds, according to the wishes of an operator reacting to what is discovered during the run.
We are motivated by the need of ESM scientists for a tool that requires minimal modifications to existing codes, is easy to learn and easy to remember, permits the transmission of arbitrary numbers of fields among arbitrary numbers of models, and can be dynamically configured at run time.
There are two phases of registration, after which the computation proceeds. In the first phase, the registration broker collects from each model head the MODELINFO_MSG, which specifies a coordinate system, a list of quantities to be computed and offered using that coordinate system, and the number of computational elements participating in that model. Each model head also files a META_REQUEST_MSG on behalf of each of its computational elements. When all of the model heads have been heard from, the registration broker sends out an INVENTORY_MSG in reply to each META_REQUEST_MSG. The INVENTORY_MSG contains an initial SYSTEMINFO_MSG detailing the experiment name, starting and ending times, the protocol version, and the number of models, and then the list of MODELINFO_MSG's that it has received.
Receipt of the SYSTEMINFO_MSG moves each process into the second phase of registration. Each element that produces quantities it wishes to make available sends the registration broker a VAR_SUPPLY_MSG, and each element that wishes to make use of various quantities sends a VAR_REQUEST_MSG. Of course, any process may simultaneously produce some quantities and consume others. The VAR_SUPPLY_MSG enumerates, for each offered quantity, the subfield that the process will produce and the frequency at which it will be produced. The VAR_REQUEST_MSG lists which subfields of which quantities the process wishes to consume, and at what frequency. Each process then sends a BARRIER_REQUEST_MSG to the registration broker, marking the end of its requests and allowing synchronization at the end of the second phase.
Once the registration broker has received all offers and requests, it calculates the intersections, forwards them to the relevant suppliers via VAR_REFER_MSG's, and notifies each requestor, via a VAR_REPLY_MSG, that its request will be satisfied by a certain number of tiles. The registration broker then sends BARRIER_REPLY_MSG's to all participating elements, terminating the second phase of registration, and the computation proceeds.
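The intersection step above can be sketched, one axis at a time, as an interval overlap in index space. The following is an illustrative sketch under our own names (Extent, intersect), not code from the Data Broker; it assumes the 1-based Fortran-style start/count convention used by the registration messages:

```c
#include <assert.h>

/* One axis of an offered or requested subfield, in 1-based Fortran
 * index space, as carried in the VAR_SUPPLY_MSG / VAR_REQUEST_MSG. */
typedef struct { int start, count; } Extent;

/* Intersect a producer's offer with a consumer's request along one axis.
 * Returns 1 and fills 'out' when they overlap, 0 otherwise; a full tile
 * intersection would apply this to each of the (up to three) axes. */
static int intersect(Extent offer, Extent request, Extent *out)
{
    int lo  = offer.start > request.start ? offer.start : request.start;
    int ohi = offer.start + offer.count - 1;
    int rhi = request.start + request.count - 1;
    int hi  = ohi < rhi ? ohi : rhi;
    if (lo > hi) return 0;          /* disjoint: nothing to refer */
    out->start = lo;
    out->count = hi - lo + 1;
    return 1;
}
```

The count of non-empty per-request intersections is what the broker would report back in the VAR_REPLY_MSG as the number of tiles.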
During the calculation, at the end of every time step, each element producing a quantity examines its list of requests to identify which consumers desire the quantity, and transmits a subfield of it in a DATA_MSG, which contains not only the data but also a description of which variable, which subfield, and which timestep.
The user provides for the spawning of groups of PVM-aware processes. Each group (model) computes some quantities (variables), represented by two- or three-dimensional arrays, that it makes available to other groups on a periodic basis. We call the state of a given variable at a particular timestep a field. It is assumed that all the variables computed by a given model are interpreted according to a common set of rectilinear coordinates, i.e. three one-dimensional arrays of monotonically increasing double-precision numbers, without the requirement that they be uniformly spaced. Furthermore, it is assumed that the units for the coordinates and time steps transmitted by this interface are the same for all models: e.g. all radians or all degrees, and time measured in hours or seconds or milliseconds from a common starting point.
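The rectilinear-coordinate assumption above amounts to a checkable invariant on each axis. A sketch of such a check (illustrative code, not part of the Data Broker):

```c
/* Each axis of a rectilinear grid is a 1-D array of strictly increasing
 * doubles; uniform spacing is NOT required. Returns 1 when the axis
 * satisfies the assumption, 0 otherwise. */
static int axis_is_valid(const double *ticks, int n)
{
    for (int i = 1; i < n; i++)
        if (ticks[i] <= ticks[i - 1])
            return 0;   /* not monotonically increasing */
    return 1;
}
```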
During a registration period, the processes exchange information about what quantities will be made available, and which processes are willing to offer which subsets of which variables at what frequency. One process needs to be selected to synchronize the registration process as a whole; we call it the registration broker. This process calls the routine MCLIAmRegistrationBroker, providing a character string identifying the system as a whole, a starting time, and an ending time.
Each model needs to have a designated member (model head) that gives some information to the registration broker concerning that group of processes as a whole. It is permitted, but not required, for the registration broker to be one of the model heads. Within the interface, each model head calls the subroutine MCLStartMetaRegistration, supplying a character string to identify the model, the coordinates for the model, and a list of the TIDs of each participating element in that model.
In all of our implementations so far, the registration broker has spawned the rest of the models (or all have been executed as part of the same binary, in the case of the T3E), so the PVM call to determine the parent TID has sufficed to provide the TID of the registration broker. Nothing in the protocol requires this, and we have made some effort to accommodate ephemeral consumers, so a very simple call could be added to inform the DDB library of the TID of the registration broker. However, it is beyond the scope of this interface to determine how such late joiners would discover the TID of an unassociated process.
The model head then calls the routine MCLMetaRegister for each variable, supplying a character string for the name of the variable, the number of dimensions, the frequency at which it will be offered, the Fortran type of the data, and some other metadata that remains for reasons of historical inertia. Each model head then calls MCLEndMetaRegistration.
Any process that will be supplying or consuming data calls MCLStartRegistration. If the process is not a model head, it calls the subroutine directly; the call blocks while awaiting the registration broker to identify itself and supply the inventory of available variables. The process then calls MCLRegisterProduce for each variable it is willing to supply, giving the name of the variable and bounds describing the range, in each direction, of the subset of the variable the process will produce. Some of the parameters duplicate those provided to MCLMetaRegister, and must be the same. Deadlock will ensue unless each element makes available (via MCLPut; see below) its subset of the variable at the frequency supplied to MCLMetaRegister. MCLRegisterProduce returns a small integer, a kind of "coat check", which will be supplied as an argument to MCLPut. The process calls MCLRegisterConsume for each variable it wishes to import, obtaining a small integer to be used as an argument to MCLGet (see below). Here, the interval at which the process wishes to consume the variable may be a multiple of the interval at which it is produced.
If a process wishes to employ a variable in a coordinate system other than that in which it is produced, it uses two routines, LILRegisterCoordinates and LILRequestInterpolatedData. LILRegisterCoordinates takes a name and 3 axes, and returns a small integer. LILRequestInterpolatedData takes that result, integer bounds describing what subset is desired, and a vector describing in which directions wrapping might occur. (For example, one coordinate system may be offset from the other by 90 degrees or pi radians, so coordinate comparisons along that axis must be done modulo some value. One could argue that this parameter belongs to LILRegisterCoordinates.) LILRequestInterpolatedData returns a small integer which is used as an argument to MCLGet, i.e. the same routine used in the non-interpolated case.
The second phase of registration is concluded by calling MCLEndRegistration, which blocks the process awaiting a list of consumers for its products. It is intended that the process can safely commence computation with no additional external synchronization once MCLEndRegistration returns.
During the actual computation, the process calls MCLPut to make quantities available for transmission to consumers. The process supplies the "coat check" returned from MCLRegisterProduce, above, as well as a pointer to a buffer containing the data and a double-precision value representing the time step. Making the quantity available may not necessarily result in any transmission, depending on which processes have registered an interest in that particular timestep. Similarly, processes call MCLGet to import fields.
Figure 1. Memory requirements for centralized and distributed coupling.
Figure 2. Memory requirements with a doubled-resolution OGCM and an increased number of nodes.
Figures 1 and 2 compare the memory requirements of the two coupling implementations. In Figure 1, the centralized data brokerage requires almost twice as much memory as the distributed data brokerage, because it needs to collect the entire grid from one model in a single node. In the distributed case, each processor has enough information to produce the data needed by consumer processes, and communication is realized in a distributed manner. Figure 2 presents a more drastic scenario, in which centralized coupling cannot be realized at all because of the 45 Mw of memory required in a single computational node. In this case the distributed approach requires less than a third of the memory required by the centralized approach.
Figure 3. Simplified Timing model of centralized vs. distributed coupling.
Figure 3 compares the execution times of the two coupling approaches. Here the AGCM is sending 4 fields to the OGCM, and the time required by the distributed approach is one third of that of the centralized approach. In the reverse direction, the OGCM sends a single field to the AGCM, and the time required is also greatly reduced with the distributed approach.
We now describe the conceptual framework in which the code for the Data Broker is organized. The entire distributed computation, made up of individual computational processes deemed Clients, is thought of as a System. The System comprises several subcollections, called Models, which compute scientific quantities called Variables.
All the Variables associated with a given Model are interpreted according to a common coordinate system, or Grid, made up of three axes or Dimensions. The Data Broker currently supports both two and three-dimensional Variables. The array of specific values for a Variable at a given time step is a Field.
Models and Variables are identified by human-readable, but potentially lengthy, ASCII strings. Since a given Client can belong to more than one Model, any real-life computation requiring that a given scientific quantity be made available in more than one coordinate system could do so by defining two separately named Variables, easily recognized as related, in two separately named Models whose names might incorporate the grid, e.g. "Ocean2x6" and "Ocean1x3".
Alternatively, the Data Broker provides a linear interpolation facility, so that Clients can request Variables and then convert them internally, using methods associated with an object called a Mesh, which comprises a specification of the target subset (Tile) of the user's Grid and a reference to the Grid on which the Variable is produced.
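The per-axis linear interpolation a Mesh applies can be sketched as locating the bracketing ticks on the producer's (possibly non-uniform) Grid axis and blending the two neighboring field values. This is an illustrative sketch under our own names, not the Data Broker's implementation:

```c
/* Linearly interpolate a field sampled at increasing, possibly
 * non-uniform 'ticks' to position 'x'; values outside the axis are
 * clamped to the edge samples. Illustrative code only. */
static double lerp_axis(const double *ticks, const double *vals,
                        int n, double x)
{
    if (x <= ticks[0])     return vals[0];       /* clamp at the edges */
    if (x >= ticks[n - 1]) return vals[n - 1];
    int i = 1;
    while (ticks[i] < x) i++;                    /* ticks increase */
    double w = (x - ticks[i - 1]) / (ticks[i] - ticks[i - 1]);
    return vals[i - 1] * (1.0 - w) + vals[i] * w;
}
```

A two- or three-dimensional interpolation applies this blend along each axis of the Mesh in turn.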
Offers by a Client to produce subsets of Variables according to a given Tile have the same structure as requests to consume Variables, and are called VarRequests in the code. We expect that most computational elements will produce some Variables while consuming others. For housekeeping purposes, the registration broker needs to retain the entire list of offers and requests, and individual processes need to maintain subscriber lists for the quantities they produce. This data structure is called the ClientServer.
The Data Broker library is largely event driven, being activated by the explicit user requests described in the Fortran interface above, by the receipt of messages, or when certain key counters reach full values. One such transition occurs when the registration broker has received all of the MODELINFO_MSG's and proceeds to issue INVENTORY_MSG's to each participating registrant. A similar transition occurs in the registration broker to close the second phase of registration upon receipt of the final BARRIER_REQUEST_MSG. A counter-induced event also occurs in every consuming element, as the Data Broker "knows" that each of its requested fields is complete when it has counted the contributing number of tiles. (The number of tiles is conveyed in the VAR_REPLY_MSG.)
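The consumer-side counter event can be sketched in a few lines; the names here are illustrative, not the DDB's:

```c
/* A requested field is "complete" when the number of DATA_MSG tiles
 * received for a timestep reaches the tile count announced in the
 * VAR_REPLY_MSG. Illustrative sketch only. */
typedef struct {
    int tilesExpected;   /* announced in the VAR_REPLY_MSG */
    int tilesReceived;   /* DATA_MSG's counted for this timestep */
} FieldAssembly;

/* Count one arriving tile; returns 1 exactly when the field completes,
 * which is the counter-induced event that unblocks the consumer. */
static int tile_arrived(FieldAssembly *f)
{
    return ++f->tilesReceived == f->tilesExpected;
}
```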
We have not yet heavily optimized the Data Broker, relying on simple methods but choosing them carefully. Identification of incoming data message packets is done by full string comparison of the variable name, searching linearly linked lists.
We have, however, attempted to minimize the number of copies the data itself must undergo: in modern tightly coupled systems, memory-to-memory copies within a processor can take time comparable to transmission of the data between processors. Significant benefits also accrue from having producers send directly to consumers (in a distributed way), rather than requiring the double transmission times and synchronization among all processes of the centralized scheme.
We have future plans (described below) for avoiding even these searches by using a simple indexing scheme.
A high priority for us is to allow the use of MPI in the Data Broker system, because on some of the large multiprocessor systems, vendor-provided implementations of PVM do not perform as well as MPI and seem more prone to bugs.
As noted above, performance and user-friendliness have been our utmost concerns, and we believe there are further performance gains to be made. We are less concerned with the performance of the registration phases, but are considering some streamlining of the data transfer phase.
Protocols such as PVM and MPI allow one to obtain a tag associated with a message, which can be detected and manipulated independently of the rest of the associated data. We propose using a range of tags to identify the individual tiles associated with the various requested quantities. This would bypass the current linear lookups based on quantity name, and would also reserve the entire data portion of the message for the numerical quantities, which might even allow direct deposition of the data in the appropriate place without calling special unpacking routines.
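The proposed tag scheme amounts to reserving a contiguous tag range so that a tile's tag encodes its request id directly, replacing the linear name lookup with arithmetic. A sketch, where TAG_BASE is an illustrative constant and not a value from the protocol:

```c
/* Reserve a contiguous range of message tags starting at TAG_BASE so
 * that tag = TAG_BASE + requestId identifies a tile's request directly.
 * TAG_BASE is an assumed, illustrative constant. */
enum { TAG_BASE = 8000 };

static int tag_for_request(int requestId) { return TAG_BASE + requestId; }
static int request_for_tag(int tag)       { return tag - TAG_BASE; }
```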
During the development of the Data Broker, when the primary interface was the direct inclusion of PVM calls in the simulation codes, one source of run-time errors was inconsistency between the Data Broker and the participating codes, especially when older versions of the Broker were inadvertently run.
This suggested that a formal description of the protocol could be written down, and that the calls for marshalling and un-marshalling data into messages could be mechanically generated, as has been done for Remote Procedure Call mechanisms and for processing the presentation layer in the OSI protocols.
Parenthetically, it seems that message passing mechanisms and RPC are essentially isomorphic, since given an RPC, one can get a (possibly inefficient) message passing mechanism merely by ignoring the returned values from RPC calls, and conversely, with some effort, one could build an RPC on top of pairs of passed messages - one for the request and one for the return.
Going through the exercise of writing down the specification for our protocol (included here as Appendix B.) has been useful as a tool for discussion and guidance, although we've never actually rewritten the data broker to make use of it.
We estimate that 30% of the code of the existing Data Broker could be mechanically generated from formal specifications, and doing so would lessen the burden of future maintenance. However, the anticipated work of debugging such a substantial rewrite of the system, for no apparent gain toward performance milestones, has persuaded us to postpone this nonetheless desirable effort.
Registration:
=============

There are 2 phases of registration: one to provide top-level information about each model and the variables it will produce, and a second to provide the details of each produce and consume request.

The first phase: Meta Registration:
===================================

The designated host for each model must perform meta registration to inform the other models of the variables it will produce. Only general information about each model is required at this stage (such as model name, gridding, and number of constituent processes). Function 'StartMetaRegistration()' must be called first; 'MetaRegister()' must be called for each variable to be produced; and finally 'EndMetaRegistration()' terminates the procedure.

MCLStartMetaRegistration:
=========================

Called by a single processor (the designated host) to start the meta-registration process. Returns 1 on success, 0 on failure.

Arguments:
  tid          == Array of PVM tids for all processes that will participate in the exchange of data using the data broker. It is of length 'numTasks'.
  numTasks     == Length of array 'tid'.
  f77modelName == Name of the model. Must be of length 128 (CHARACTER*128).
  numLon       == Number of longitude points.
  numLat       == Number of latitude points.
  numVert      == Number of vertical points.
  lonTicks     == Longitude array, of length 'numLon'.
  latTicks     == Latitude array, of length 'numLat'.
  vertTicks    == Vertical array, of length 'numVert'.

int MCLStartMetaRegistration( INTEGER tid, INTEGER numTasks,
                              CHARACTER f77modelName,
                              INTEGER numLon, INTEGER numLat, INTEGER numVert,
                              REAL*8 lonTicks, REAL*8 latTicks, REAL*8 vertTicks )

MCLMetaRegister:
================

Called for each variable to be produced. Returns 1 on success, 0 on failure.

Arguments:
  f77VarName == Name of the variable. Of length at least 'nameLen'.
  nameLen    == Length (in characters) of the name of this variable.
  coordType  == Coordinate type; currently, 0 (sigma coordinates).
  dataType   == Type of the data.
The value to data type mapping follows:
  0 == SignedCharType
  1 == SignedShortType
  2 == SignedIntType
  3 == SignedLongType
  4 == UnsignedCharType
  5 == UnsignedShortType
  6 == UnsignedIntType
  7 == UnsignedLongType
  8 == FloatType
  9 == DoubleType

  varType   == Type of the variable. 1: temporary, 0: persistent. Always use 0 for now.
  numDims   == Number of dimensions.
  frequency == Number of simulated seconds between each production of this variable.

int MCLMetaRegister( CHARACTER f77VarName, INTEGER nameLen,
                     INTEGER coordType, INTEGER dataType, INTEGER varType,
                     INTEGER numDims, REAL*8 frequency )

MCLEndMetaRegistration:
=======================

Terminates the meta registration process. Returns 1 on success, 0 on failure. No arguments.

int MCLEndMetaRegistration()

The second registration phase:
==============================

In the next phase, each process that will produce or consume data must supply full details on its requirements. Exact grids are supplied in each request. This data registration must be initiated by a call to StartDataRegistration(), followed by calls to RegisterDataProduce() and/or RegisterDataConsume() for each variable produced or consumed, and terminated by a call to EndDataRegistration(). All of the LIL calls are made in the second phase of registration, by each member process. If the process wishes to make use of interpolated data, it must make a call to LILRegisterCoordinates(), followed by calls to LILRequestInterpolatedData().

MCLStartRegistration:
=====================

Initiates the second registration phase. Returns 1 on success, 0 on failure. No arguments.

int MCLStartRegistration()

MCLRegisterProduce:
===================

Called to inform the data broker that the calling process will produce the given data on the given grid.
On success, returns a non-negative integer id used to uniquely identify this data request. (This value will be used in subsequent calls to 'MCLSendData'.) Any other value implies failure.

Arguments:
  f77VarName == Name of the variable. Of length at least 'nameLen'.
  nameLen    == Length (in characters) of the name of this variable.
  coordType  == Coordinate type; currently, 0 (sigma coordinates).
  dataType   == Type of the data, as in MCLMetaRegister.
  varType    == Type of the variable. 1: temporary, 0: persistent. Always use 0 for now.
  numDims    == Number of dimensions.
  lon0       == Fortran index (1-based) of the first longitude point at which this process produces data.
  numLon     == Number of longitude points at which this process produces data.
  lat0       == Fortran index (1-based) of the first latitude point at which this process produces data.
  numLat     == Number of latitude points at which this process produces data.
  numVert    == Number of vertical points at which this process produces data.
  vert       == The array of vertical points at which this process produces data.
  frequency  == Number of simulated seconds between each production of this variable.

int MCLRegisterProduce( CHARACTER f77VarName, INTEGER nameLen,
                        INTEGER coordType, INTEGER dataType, INTEGER varType,
                        INTEGER numDims,
                        INTEGER lon0, INTEGER numLon,
                        INTEGER lat0, INTEGER numLat,
                        INTEGER numVert, REAL*8 vert,
                        REAL*8 frequency )

MCLRegisterConsume:
===================

Called to inform the data broker that the calling process will consume the given data on the given grid. On success, returns a non-negative integer id used to uniquely identify this data request. (This value will be used in subsequent calls to 'MCLGetData'.) Any other value implies failure.

Arguments:
  f77VarName == Name of the variable. Of length at least 'nameLen'.
  nameLen    == Length (in characters) of the name of this variable.
  coordType  == Coordinate type; currently, 0 (sigma coordinates).
  dataType   == Type of the data, as in MCLMetaRegister.
  varType    == Type of the variable. 1: temporary, 0: persistent. Always use 0 for now.
  numDims    == Number of dimensions.
  lon0       == Fortran index (1-based) of the first longitude point at which this process requires data.
  numLon     == Number of longitude points at which this process requires data.
  lat0       == Fortran index (1-based) of the first latitude point at which this process requires data.
  numLat     == Number of latitude points at which this process requires data.
  numVert    == Number of vertical points at which this process requires data.
  vert       == The array of vertical points at which this process requires data.
  frequency  == Number of simulated seconds between each intake of this variable.

int MCLRegisterConsume( CHARACTER f77VarName, INTEGER nameLen,
                        INTEGER coordType, INTEGER dataType, INTEGER varType,
                        INTEGER numDims,
                        INTEGER lon0, INTEGER numLon,
                        INTEGER lat0, INTEGER numLat,
                        INTEGER numVert, REAL*8 vert,
                        REAL*8 frequency )

MCLEndRegistration:
===================

Ends the data registration phase. Returns 1 on success, 0 on failure. No arguments.

int MCLEndRegistration()

MCLGetData:
===========

Called by a consumer to get data. Note that this is a blocking call and will only return when the data is available.

Arguments:
  id        == Integer id returned from the call to MCLRegisterConsume() for this item.
  buf       == Buffer/array containing the data.
  timeStamp == Time-stamp associated with this data.

int MCLGetData( INTEGER id,
                REAL*8 buf,        /* Can be of other types */
                REAL*8 timeStamp )

MCLSendData:
============

Called by a producer to make data available to consumers. Returns 1 on success, 0 on failure.

Arguments:
  id        == Integer id returned from the call to 'MCLRegisterProduce' for this item of data.
  buf       == Buffer/array containing the data.
  timeStamp == Time-stamp associated with this data.

int MCLSendData( INTEGER id,
                 REAL*8 buf,       /* Can be of other types */
                 REAL*8 timeStamp )

MCLSetTracelevel:
=================

Sets the MCL debugging and tracing level, for diagnostic output. No return value.

Arguments:
  level == Level of tracing.
Please choose one of:
  0 == Critical errors only (always on)
  1 == Warnings
  2 == More warnings
  3 == Flow; function entry and exit is shown
  4 == Maximum level

void MCLSetTracelevel( INTEGER level )

MCLPoll:
========

MCLPoll checks for and handles pending PVM messages destined for the data broker. It should be called intermittently to avoid data broker messages from piling up. Returns the number of messages processed. No arguments.

int MCLPoll()

MCLIAmRegistrationBroker:
=========================

Only called by the process designated as the data broker! Returns 1 on success, 0 on failure.

Arguments:
  runName   == Name of this run, e.g. "Configuration A".
  startTime == Start time for the models in the run.
  endTime   == End time for the models in the run (estimated if not known).
  numModels == Number of models taking part in this run.

int MCLIAmRegistrationBroker( CHARACTER runName,
                              REAL*8 startTime, REAL*8 endTime,
                              INTEGER numModels )

LILRegisterCoordinates:
=======================

Called to inform the interpolation library that the process will be requesting data in terms of the supplied coordinate system. A process may request data in multiple coordinate systems, if desired. Returns a (typically small) non-negative integer token for use in subsequent calls to LILRequestInterpolatedData on success, or negative on failure.

It is assumed that all participants will use the same scaling for the ticks (radians, degrees, etc.); the TickModulus vector indicates which, if any, of the coordinates wrap, and what the modulus factor is, in the same scale as the ticks. So, if wrapping is desired in the east-west direction and the ticks are given in degrees, the TickModulus vector would be (360, 0, 0). If wrapping over the poles were also desired, and the ticks were given in radians, the TickModulus vector would be (6.283185307*, 3.141592653*, 0). (* to however many digits fit in a REAL*8.)

Arguments:
  f77coordName == Name of the coordinate system. Must be of length 128 (CHARACTER*128).
  numLon       == Number of longitude points.
  numLat       == Number of latitude points.
  numVert      == Number of vertical points.
  lonTicks     == Longitude points, of length 'numLon'.
  latTicks     == Latitude points, of length 'numLat'.
  vertTicks    == Vertical points, of length 'numVert'.

int LILRegisterCoordinates( CHARACTER f77coordName,
                            INTEGER numLon, INTEGER numLat, INTEGER numVert,
                            REAL*8 lonTicks, REAL*8 latTicks, REAL*8 vertTicks )

LILRequestInterpolatedData:
===========================

Called to inform the data broker that the calling process will consume the given data, interpolated to the given grid. On success, returns a non-negative integer id used to uniquely identify this data request. (This value will be used in subsequent calls to 'MCLGetData'.) Any other value implies failure.

Arguments:
  f77VarName  == Name of the variable. Of length at least 'nameLen'.
  nameLen     == Length (in characters) of the name of this variable.
  coordToken  == From LILRegisterCoordinates, above.
  dataType    == Type of the data, as in MCLMetaRegister.
  varType     == Type of the variable. 1: temporary, 0: persistent. Always use 0 for now.
  numDims     == Number of dimensions. 1 for 1D, etc.
  lon0        == Starting index (1-based) relative to the lonTicks array registered above.
  numLon      == Number of array elements in this dimension.
  lat0        == Starting index (1-based) relative to the latTicks array registered above.
  numLat      == Number of array elements in this dimension.
  vert0       == Starting index (1-based) relative to the vertTicks array registered above.
  numVert     == Number of array elements in this dimension.
  TickModulii == Values for the moduli, if wrapping is desired.
  frequency   == Number of simulated seconds between each intake of this variable.
int LILRequestInterpolatedData( CHARACTER f77VarName, INTEGER nameLen,
                                INTEGER coordToken, INTEGER dataType, INTEGER varType,
                                INTEGER numDims,
                                INTEGER lon0, INTEGER numLon,
                                INTEGER lat0, INTEGER numLat,
                                INTEGER vert0, INTEGER numVert,
                                REAL*8 TickModulii, REAL*8 frequency )

LILSetMissingDefault:
=====================

Called after requesting an interpolation to specify a sentinel value to be supplied for any data that cannot be interpolated, or to be filled in prior to the receipt of any data (as a background value for missing data). Returns 0 when successful, negative if the supplied id is not known or is for a field of data type other than Float or Double.

Arguments:
  id    == As returned from LILRequestInterpolatedData.
  value == Either a float or a double, being the default. (In C, it is declared void * and cast to the appropriate type.)

LILSetMissingDefault( INTEGER id, REAL value )
struct varMetaData {
    int  datatype;
    int  vartype;
    int  vardim;
    int  namelen;
    char name<>;
};

struct systemMetaData {
    char   expName[128];
    double startingTime;
    double endingTime;
    int    noModels;
    int    brokerProtocolVersion;
};

struct modelMetaData {
    char        modelName[128];
    int         noLonPoints;     /* dimension of Longitude axis */
    int         noLatPoints;     /* dimension of Latitude axis */
    int         noSigmaLevels;   /* dimension of Sigma axis */
    int         noSlaves;
    /* int noValidityBounds; */  /* for OGCM, give valid boxes */
    int         noPrivateParams; /* some model specific data */
    int         noVars;
    double      lonTicks<>;
    double      latTicks<>;
    double      sigmaTicks<>;
    double      privateParams<>;
    /* int validityBounds<>; */
    varMetaData varlist<>;
};

struct systemInventory {
    systemMetaData esmInfo;
    modelMetaData  modelList<>;
};

/* XXX - this is not used in this version
struct agcmMetaData {
    int    ozoneFlag;
    int    noStratLevels;
    double pTop;
    double pInt;
    double deltaT;
};
*/

struct varDataField {
    int  namelen;
    char name<>;
    int  startLon;
    int  endLon;
    int  startLat;
    int  endLat;
    int  startSigma;
    int  endSigma;
    int  fieldType;
    int  noElements;
    char fieldData<>;   /* XXX: mostly double */
};

struct varData {
    int          noFields;
    double       expTime;
    varDataField fields<>;
};

struct varRequest {
    int    namelen;
    int    varCoord;    /* coordinate type */
    double varFreq;
    int    varXstart;
    int    varXnum;
    int    varYstart;
    int    varYnum;
    int    varZstart;
    int    varZnum;
    char   name<>;
};

struct dataRequest {
    int        noRequests;
    varRequest requests<>;
};

struct dataReferral {
    int        consumerTID;   /* noSuppliers when used for varReply */
    varRequest forwarded;
};

struct event {
    int eventNumber;
    int recipient;    /* tid to send to */
};

struct metaRequest {
    int recipient;    /* tid to send to */
};

struct alert {
    int audience;     /* 1: only models, 2: all procs */
    int code;         /* what to say */
    int initiator;    /* who said it */
};

program QUEST {
    version QUEST_VERSION {
        void QUEST_SYSTEMINFO_MSG(systemMetaData)   = 7000;
        void QUEST_MODELINFO_MSG(modelMetaData)     = 7001;
        void QUEST_INVENTORY_MSG(systemInventory)   = 7002;
        void QUEST_META_REQUEST_MSG(metaRequest)    = 7003;
        void QUEST_VAR_REQUEST_MSG(dataRequest)     = 7004;
        void QUEST_VAR_REPLY_MSG(dataReferral)      = 7005;
        void QUEST_VAR_SUPPLY_MSG(dataRequest)      = 7006;
        void QUEST_VAR_REFER_MSG(dataReferral)      = 7007;
        void QUEST_VAR_UNREFER_MSG(dataReferral)    = 7008;
        void QUEST_DATA_MSG(varData)                = 7009;
        void QUEST_BARRIER_REQUEST_MSG(event)       = 7010;
        void QUEST_BARRIER_REPLY_MSG(event)         = 7011;
        void QUEST_ECHO_REQUEST_MSG(event)          = 7012;
        void QUEST_ECHO_REPLY_MSG(event)            = 7013;
        void QUEST_EXIT_MSG(void)                   = 7014;
        void QUEST_ALERT_REQUEST_MSG(alert)         = 7015;
        void QUEST_ALERT_MSG(alert)                 = 7016;
    } = 7;
} = 200004;