Tools and Resources

MODel-driven Performance prediction in early Integration Environments (MODPIE)

The MODPIE framework and tool set is designed to support system integration testing during the early phases of the software development lifecycle. MODPIE combines existing tools for modelling representative systems with new tools for deployment, execution and visualisation. The existing tools are provided by CUTS, a Component Utilisation Test Suite. The new tools give the system expert greater flexibility in testing a representative system, together with interactive visualisation of the execution and performance metrics. New tools in MODPIE extend performance prediction from a single modelled system to a combined System of Systems (SoS).

Integrating composing systems into a SoS carries inherent risks of unexpected behaviour and critical failures; these risks can be mitigated by early analysis of system workloads through System Execution Modelling (SEM). Evaluating a representative SoS architecture on heterogeneous hardware with an interoperable communication layer can identify system-wide risks and problematic subsystems that may require further evaluation and redesign. MODPIE provides a stepped process from an abstract model of a system to the execution and evaluation of its performance when deployed on realistic hardware. MODPIE users follow a performance analysis and prediction process with five steps, each supported by specific tools:


  1. A model of the composing systems is defined by specifying component interfaces and workload behaviour.
  2. The deployment of the composing systems is done by assigning the components of models to be distributed on a cluster of hardware nodes.
  3. The execution of the SoS uses the defined models and deployment configurations to generate the code for distribution to the selected hardware nodes.
  4. During execution, system information is captured and aggregated into performance metrics.
  5. The evaluation metrics are shown to the expert through context-specific visualisations that indicate if the model fulfils performance requirements.
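The five steps above can be sketched as a simple pipeline. This is an illustrative Python skeleton only; the function names, data shapes and round-robin placement are assumptions, not the actual MODPIE API.

```python
# Hypothetical sketch of the five-step MODPIE workflow as a pipeline.
# All names and data formats here are illustrative assumptions.

def define_model(components, workloads):
    """Step 1: capture component interfaces and workload behaviour."""
    return {"components": components, "workloads": workloads}

def plan_deployment(model, nodes):
    """Step 2: assign each component to a hardware node (round-robin here)."""
    return {c: nodes[i % len(nodes)] for i, c in enumerate(model["components"])}

def execute(model, deployment):
    """Step 3: stand-in for code generation and distributed execution,
    producing one trace record per deployed component."""
    return [{"component": c, "node": n, "latency_ms": 5.0}
            for c, n in deployment.items()]

def aggregate(traces):
    """Step 4: reduce raw execution traces to performance metrics."""
    return {t["component"]: t["latency_ms"] for t in traces}

def evaluate(metrics, budget_ms):
    """Step 5: flag components that exceed the latency budget."""
    return [c for c, v in metrics.items() if v > budget_ms]

model = define_model(["sensor", "planner"], {"sensor": "periodic"})
deployment = plan_deployment(model, ["node-a", "node-b"])
metrics = aggregate(execute(model, deployment))
print(evaluate(metrics, budget_ms=10.0))  # → []
```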

A demonstration of the MODPIE process applied to an Unmanned Aerial Vehicle (UAV) is available on YouTube.

1.      Modelling Architectural Alternatives of Composing Systems

A representative Distributed Real-time Embedded (DRE) system is modelled through a SEM environment. In this modelling phase, MODPIE integrates the existing CUTS tools within the Generic Modelling Environment (GME) to model the components, connections, behaviour and workload. More information about the CUTS tools is available from the CUTS project website.
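To illustrate the kind of information captured in this phase, the sketch below models components, ports, connections and workload parameters as plain Python data classes. This is a Python-flavoured stand-in for the graphical GME/CUTS model, and all component and attribute names are hypothetical.

```python
# Illustrative only: a minimal stand-in for the component/connection/workload
# model normally built graphically in GME with the CUTS tools.
from dataclasses import dataclass, field

@dataclass
class Port:
    name: str
    message_type: str

@dataclass
class Component:
    name: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    workload: dict = field(default_factory=dict)  # e.g. publish rate, CPU cost

@dataclass
class Connection:
    source: str   # "Component.port"
    target: str

# Hypothetical UAV subsystem: a sensor publishing positions to a planner.
sensor = Component("GPSSensor",
                   outputs=[Port("fix", "PositionMsg")],
                   workload={"publish_hz": 10})
planner = Component("RoutePlanner",
                    inputs=[Port("fix", "PositionMsg")],
                    workload={"cpu_ms_per_msg": 3})
wire = Connection("GPSSensor.fix", "RoutePlanner.fix")
```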

2.      Deployment of Composing Distributed Real-time Systems

In the deployment phase, the system model is assigned to a hardware test-bed. The new MODPIE deployment tool improves on existing tools by presenting the system expert with the available hardware options and by allowing multiple system models to be configured into a single SoS execution plan. The tool enables the system expert to construct a model specifying how the component parts of the modelled system are to be distributed across the available hardware, and generates the necessary deployment configuration information for the specified experimental plan. Please contact the Defence Information Group for a trial of this software.
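Conceptually, the deployment tool produces a mapping of model components onto available hardware nodes, serialised as a deployment configuration for the execution phase. The sketch below shows that idea in Python; the node names, component names and JSON layout are assumptions, not MODPIE's actual configuration format.

```python
# Hypothetical sketch of a deployment plan: map components to hardware nodes,
# validate the mapping, and emit a configuration document.
import json

def make_deployment_plan(assignments, nodes):
    """Validate a component -> node mapping against the available hardware."""
    unknown = {n for n in assignments.values() if n not in nodes}
    if unknown:
        raise ValueError(f"unknown nodes: {sorted(unknown)}")
    return assignments

def emit_config(system_name, plan):
    """Group components per node, ready to hand to the execution phase."""
    per_node = {}
    for component, node in plan.items():
        per_node.setdefault(node, []).append(component)
    return json.dumps({"system": system_name, "nodes": per_node}, sort_keys=True)

nodes = ["blade-01", "blade-02"]
plan = make_deployment_plan(
    {"GPSSensor": "blade-01", "RoutePlanner": "blade-02"}, nodes)
print(emit_config("uav-demo", plan))
```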

3.      Executing/Simulating Composing Systems and Scenarios

The execution phase uses the defined models and deployment configuration to generate the code that can be executed on a distributed platform. MODPIE uses the Jenkins continuous integration environment to automate code generation, deployment, compilation and execution, ensuring simulation reliability and repeatability. Jenkins is a continuous integration platform that allows the user to set up custom build jobs to be run on a variety of hardware for testing purposes. Jenkins is set up in a Master/Slave paradigm: the Jenkins master keeps track of all jobs and available slave nodes, and contains the WebUI used to dispatch custom build jobs. These jobs have been set up to run custom build scripts written in Python. More information about Jenkins is available from the Jenkins project website. Contact us for further information about the specific Jenkins setup and jobs used in this process.
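Jenkins jobs can also be triggered programmatically via its remote-access API, which accepts a POST to `/job/<name>/build` (or `/buildWithParameters` for parameterised jobs). The sketch below constructs such a request in Python; the server URL, job name and parameters are placeholders, no request is actually sent, and this is not necessarily how MODPIE itself dispatches jobs.

```python
# Sketch of building a Jenkins job-trigger request via the Jenkins
# remote-access API. URL, job name and parameters are hypothetical;
# the request is constructed but never sent.
from urllib.parse import urlencode
from urllib.request import Request

def build_job_request(base_url, job, params=None, token=None):
    """Construct the POST request Jenkins expects to trigger a build."""
    endpoint = "buildWithParameters" if params else "build"
    url = f"{base_url.rstrip('/')}/job/{job}/{endpoint}"
    query = dict(params or {})
    if token:
        query["token"] = token  # Jenkins per-job trigger token, if configured
    if query:
        url = f"{url}?{urlencode(query)}"
    return Request(url, method="POST")

req = build_job_request("https://jenkins.example.org", "modpie-execute",
                        params={"DEPLOY_PLAN": "uav-demo"})
print(req.get_method(), req.full_url)
```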

4.      Observing/Predicting System Performance

In the observation stage, MODPIE executes the generated code within its indicated deployment to produce execution traces and basic metrics about system performance. Both hardware and software metrics are captured. Hardware metrics include CPU and memory utilisation, while the application-defined software metrics are associated with workload and network latencies. The execution traces produce a large amount of raw information, which is processed by an evaluation engine that aggregates the data as specified. Aggregated performance information is passed to the MODPIE visualisation component to provide the user with an overview of the causes of particular performance values.
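The kind of aggregation the evaluation engine performs can be illustrated by reducing raw trace records to per-component latency summaries. The record format below is an assumption made for the example, not MODPIE's actual trace schema.

```python
# Hypothetical example of trace aggregation: group execution-trace records
# by component and summarise their latency in milliseconds.
from collections import defaultdict
from statistics import mean

def aggregate_latency(traces):
    """Reduce raw trace records to per-component latency metrics."""
    by_component = defaultdict(list)
    for record in traces:
        by_component[record["component"]].append(record["latency_ms"])
    return {c: {"mean": mean(v), "max": max(v), "samples": len(v)}
            for c, v in by_component.items()}

traces = [
    {"component": "RoutePlanner", "latency_ms": 4.0},
    {"component": "RoutePlanner", "latency_ms": 6.0},
    {"component": "GPSSensor",   "latency_ms": 1.5},
]
print(aggregate_latency(traces)["RoutePlanner"])
# → {'mean': 5.0, 'max': 6.0, 'samples': 2}
```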

5.      Evaluating Architectural Alternatives

The MODPIE evaluation phase uses an interactive visualisation tool. This tool is designed to show relationships, activities, and alert or error conditions, indicating whether the model is working correctly and where constraints are not being met. It can show physical relationships between components and hardware nodes, as well as the connections between them. The visualisation tool includes a notification system that draws the user's attention to specific events, constraint violations or items of interest as they occur within the model. Combined with the ability to zoom between a high-level overview of a large model and the full detail of an individual component, this makes the visualisation system a powerful tool for understanding whether, and how, the system is meeting its performance requirements.

The MODPIE tools used for observation, aggregation and visualisation continue to be improved and developed. Contact us for further information about these tools. The main contact for this research is A/Prof Katrina Falkner.