System Overview
Introduction
The OSCAR-5 system provides multi-code, multi-physics support for research reactor analysis, with the primary aim of facilitating the use of fit-for-purpose tools in support of reactor operation. This implies finding a balance between the nature of the specific calculational application and the level of detail utilized in achieving the result.
The shift to more sophisticated models usually comes at the cost of providing and receiving more data, and extensive pre- and post-processing systems are typically developed to help manage the large volumes of input and output. The OSCAR-5 system incorporates a powerful pre- and post-processing system, which maintains a consistent model and manages the data passed between target codes.

Schematic view of the OSCAR-5 system. Components that are not fully functional are shown as transparent blocks with dashed boundaries. The dots under each target code illustrate the suitability of that code for the intended application: green means the code is perfectly suitable, yellow means it can be used but is not necessarily the best choice, and red indicates that, although possible, the code is not well suited due to feature or resource limitations. The size of the dot indicates the error or level of uncertainty associated with each code for that application. For instance, although the nodal diffusion solver MGRAC can be used to estimate local flux values in the system, the associated error would be large. The Monte Carlo codes Serpent or MCNP would be much better choices, with MCNP more favorable since it incorporates better estimators in its detector response models. On the other hand, for equilibrium studies, where a final core mass distribution is the main outcome, MGRAC will give fairly accurate results in a reasonable amount of computing time, while the Monte Carlo codes will consume many thousands of CPU hours. The main benefit and goal of the system is that, no matter which code is used, the model and input data remain consistent.
The main entry point to the system is the construction of a unified, code-independent model. A detailed model of each assembly type and reactor pool (or reflector) is built using the Constructive Solid Geometry (CSG) module of the system. Assembly models are combined in an assembly library, from which full core configurations are constructed. All material properties (isotopic composition, nominal material state, etc.) are also defined in a code-independent fashion.
The model building process is facilitated in the system via extensive visualization schemes, allowing two- and three-dimensional rendering with multiple filters to isolate the components and materials being considered. This can be done at both component and core level. Macros for the creation of typical component types, geometry processing and mesh optimization schemes, as well as mesh completion algorithms, all assist in the creation and final deployment of the model.
In order to use the model in a target code which can handle detailed geometry, translators are used to write the code-specific cell and material cards. These translators are defined once in the system, and therefore do not depend on the model. This mechanism also ensures that the model remains consistent when it is exported to multiple codes. Additional translators can be added without modifying the core system, so that new target codes can easily be incorporated.
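To make the idea concrete, the sketch below shows, in simplified form, how a single code-independent material description could be rendered into material cards for two different target codes. The Material class and the card layouts are illustrative stand-ins only; they are not the actual rapyds translator API, and the cards are not complete MCNP or Serpent input.

    # Illustrative only: one code-independent material rendered to two
    # target-code formats.  Not the rapyds API; card layouts simplified.
    from dataclasses import dataclass
    from typing import Dict


    @dataclass
    class Material:
        name: str
        number: int
        density: float               # g/cm^3
        nuclides: Dict[str, float]   # ZAID -> atom fraction


    def to_mcnp(m: Material) -> str:
        """Render a simplified MCNP-style material card (library suffix assumed)."""
        pairs = "  ".join(f"{zaid}.70c {frac:.6e}" for zaid, frac in m.nuclides.items())
        return f"m{m.number}  {pairs}"


    def to_serpent(m: Material) -> str:
        """Render a simplified Serpent-style material card."""
        lines = [f"mat {m.name} -{m.density}"]
        lines += [f"  {zaid}.09c  {frac:.6e}" for zaid, frac in m.nuclides.items()]
        return "\n".join(lines)


    fuel = Material("UO2", 1, 10.4, {"92235": 0.05, "92238": 0.95, "8016": 2.0})
    print(to_mcnp(fuel))
    print(to_serpent(fuel))

Because the material is defined only once, supporting an additional target code amounts to adding another rendering function, which mirrors how translators extend the system without touching the model.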
Detailed assembly and core models cannot be used directly in a nodal diffusion solver, and an additional model preparation step is required. The cOMPoSe (OSCAR Model Preparation System) tool is used to systematically move from the heterogeneous unified description, using point-wise cross section data, to a set of homogenized mixtures with energy condensed to a few-group representation.
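The weighting underlying this step is the standard reaction-rate-preserving flux-volume weighting of lattice physics; in its generic form (shown here for orientation, not as the exact cOMPoSe algorithm) the homogenized, condensed cross section for reaction x, node volume V and broad group G is

    \Sigma_{x,G}^{\text{hom}} =
    \frac{\int_{V}\int_{E \in G} \Sigma_{x}(\mathbf{r},E)\,\phi(\mathbf{r},E)\,dE\,dV}
         {\int_{V}\int_{E \in G} \phi(\mathbf{r},E)\,dE\,dV},

where \phi is the detailed heterogeneous flux obtained from the fine-group (or point-wise) transport solution.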
Once a suitable model is prepared, it can be deployed to various analysis applications. The system also treats the input and execution of applications in a code-independent manner. In particular, all input data is provided through a unified system interface, with facilities to visualize and further manipulate the data. Moreover, deployment to various hardware architectures, ranging from single-node workstations to multi-core, multi-node computing clusters, is automated and handled internally. This allows one to match the best available hardware to the intended target code. A generic inventory management system, which stores the material states of burnable assemblies, makes it possible to use analysis codes lacking this feature for long-term core management.
The final deployment of an application is once again handled by a set of translators for each target code. This, together with the model, provides a set of inputs for the target code.
Finally, output from target codes is passed back to the system using code-dependent output translators, and stored in target code-independent data archives.
What is in the box?
The OSCAR-5 release introduces a new pre- and post-processing framework called rapyds (Reactor Analysis Python Driver System). This system represents a major evolution of the OASYS pre- and post-processor, adding a wealth of features and completely changing the way you interact with the system.
This release also includes a full nodal package, with MGRAC as the core nodal diffusion solver, as well as a number of utility codes used to prepare cross section libraries for it.
To better serve the nodal package, OSCAR-5 also introduces a number of improvements to the front end of the calculational path. This includes additional lattice code options, and a theoretically consistent, step-by-step approach to move from a detailed heterogeneous model to an energy-condensed homogeneous model. The system gives feedback at each step, to help the user improve the nodal model and quantify the final error estimate associated with it. The system is called the OSCAR Model Preparation System (or just cOMPoSe).
Finally, pre- and post-processing support for a number of external analysis codes is also included.
The nodal package
This package contains the following suite of codes:
Core simulator: MGRAC
In MGRAC, the calculation of the steady-state neutron flux distribution is based on the solution of the three-dimensional multi-group time-independent diffusion equation by means of a modern transverse-integration nodal method for Cartesian geometry. This nodal method, known as the Multi-group Analytic Nodal Method, employs an analytic solution to the one-dimensional transverse-integrated multi-group diffusion equation in order to determine a relationship between node side-average net currents and node-average fluxes. It is subject to only one approximation, namely a finite-order polynomial approximation for the transverse leakage inhomogeneous source term in the one-dimensional equation. Various iteration acceleration methods are also available.
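For reference, the underlying steady-state multi-group diffusion equation, written here in standard notation rather than MGRAC-specific form, is

    -\nabla \cdot D_g(\mathbf{r}) \nabla \phi_g(\mathbf{r})
    + \Sigma_{r,g}(\mathbf{r})\,\phi_g(\mathbf{r})
    = \sum_{g' \neq g} \Sigma_{s,g' \to g}(\mathbf{r})\,\phi_{g'}(\mathbf{r})
    + \frac{\chi_g}{k_{\text{eff}}} \sum_{g'} \nu\Sigma_{f,g'}(\mathbf{r})\,\phi_{g'}(\mathbf{r}),
    \qquad g = 1,\dots,G,

with D_g the diffusion coefficient, \Sigma_{r,g} the removal cross section, \Sigma_{s,g' \to g} the group-transfer scattering cross section, \chi_g the fission spectrum and k_eff the effective multiplication factor. The nodal method integrates this equation transversely over each node to obtain the coupled one-dimensional equations mentioned above.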
Depletion history tracking in MGRAC involves both fuel exposure and nuclide (an arbitrary number of actinides, fission-products and burnable absorbers) inventory tracking. In MGRAC the depletion tracking mesh (the exposure mesh) is quite independent of the neutronic mesh. An exposure mesh is assigned to each component (fuel assembly, control rod, detector string, reflector assembly, irradiation rig, etc.) individually at the beginning of life of the component (at the time the component is first introduced into the calculation system). This allows depletion to be assigned to components as they are moved (both axially and laterally) within the reactor.
A predictor-corrector method is used for the depletion calculations, thus involving two converged nodal flux solutions per burn-up step. A constant-flux explicit time integration method is optionally available for faster calculations with reduced accuracy requirements. The burnup solution algorithm is a highly accurate method free of numerical round-off.
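As an illustration of the general structure (not MGRAC's actual implementation), a minimal predictor-corrector burn-up step can be sketched as follows, where flux_solve and deplete are placeholders for a converged nodal flux solution and a fixed-flux depletion solve respectively:

    import numpy as np

    def predictor_corrector_step(n0, flux_solve, deplete, dt):
        """One generic predictor-corrector burn-up step (illustrative only).

        n0          -- nuclide number densities at the start of the step
        flux_solve  -- placeholder: returns a converged flux for given densities
        deplete     -- placeholder: integrates the depletion equations over dt
                       at a fixed flux
        dt          -- burn-up step length
        """
        # Predictor: deplete over the full step with the beginning-of-step flux.
        phi0 = flux_solve(n0)
        n_pred = deplete(n0, phi0, dt)

        # Corrector: recompute the flux from the predicted densities and
        # deplete again from the start of the step.
        phi1 = flux_solve(n_pred)
        n_corr = deplete(n0, phi1, dt)

        # Average the two end-of-step estimates; note the two converged flux
        # solutions per burn-up step, as mentioned above.
        return 0.5 * (np.asarray(n_pred) + np.asarray(n_corr))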
In OSCAR-5, MGRAC is fully integrated into the pre- and post-processing system, which automatically deploys models and creates input for all the supported application modes.
Lattice code: HEADE
HEADE uses collision probability methods to generate few-group assembly-homogenized equivalence parameters. Few-group equivalence parameters include node-averaged cross-sections, discontinuity factors at the assembly boundaries, and flux/power form functions to allow the reconstruction of heterogeneous detail during the full-core global diffusion calculation. Assembly calculations are performed for a number of discrete state conditions of the assembly, producing cross-sections for each point in the assembly state space.
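In generalized equivalence theory, stated here in its common textbook form rather than HEADE-specific notation, the discontinuity factor for group g on assembly surface s is the ratio of surface-averaged heterogeneous to homogeneous fluxes,

    f_{g,s} = \frac{\bar{\phi}_{g,s}^{\text{het}}}{\bar{\phi}_{g,s}^{\text{hom}}},

which allows the homogenized nodal solution to preserve the reaction rates and surface currents of the underlying heterogeneous assembly calculation, even though the homogeneous flux becomes discontinuous at node interfaces.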
Cross section tabulation: POLX
POLX is used to tabulate homogenized cross-section sets as a function of state parameters, interpolating where necessary in order to provide a continuous representation. It also performs the equivalent diffusion calculations, generating the so-called discontinuity factors, which are an integral part of equivalence theory.
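The sketch below illustrates the general idea of tabulating a cross section on a state-parameter grid and interpolating it to an off-grid state point; the parameters, grids and values are made up for illustration, and this is not the POLX algorithm or data format:

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical state-parameter grid: fuel temperature [K], coolant
    # density [g/cm^3] and burn-up [MWd/kgU], with made-up tabulated values.
    tfuel  = np.array([500.0, 800.0, 1100.0])
    dcool  = np.array([0.70, 0.85, 1.00])
    burnup = np.array([0.0, 10.0, 20.0, 40.0])
    sigma_a = np.random.default_rng(0).uniform(0.01, 0.02, size=(3, 3, 4))

    # Multilinear interpolation provides a continuous representation
    # between the tabulated state points.
    interp = RegularGridInterpolator((tfuel, dcool, burnup), sigma_a)
    print(interp([[650.0, 0.78, 15.0]]))  # cross section at an off-grid state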
Library linking: LINX
Combines all homogenized cross sections and equivalence parameters into a single run-time library for the core simulator MGRAC.
Attention
In OSCAR-5, POLX and LINX are used internally within the cOMPoSe sub-system, and there is rarely a need to create input for, or execute, these codes manually.
Input and output processing of external codes
Currently the following external code packages are supported:
Serpent 2: Only version 2.1.23 has been extensively tested, and some inconsistencies might exist with newer versions. Earlier versions (and Serpent 1) are not supported.
MCNP5: Version 5.1.51 or later.
MCNP6: All currently available versions have been tested.
As the rapyds framework was designed to be extendable, this list will continue to grow, especially as we move towards multi-physics and full system modeling.
Attention
Only input and output wrappers for these codes are provided. The codes themselves are not distributed with the system, and should be obtained from their respective distributors under the appropriate license agreements.
The rapyds framework
The OSCAR-5 pre- and post-processing framework rapyds is a collection of Python packages which provides the backbone of the system, with the following major features:
Provides a complete set of tools for building and maintaining detailed heterogeneous reactor models. This is facilitated by a Constructive Solid Geometry (CSG) package, as well as numerous predefined assembly types. The modeling language follows a philosophy similar to that used in Monte Carlo codes, and should be familiar to anybody with a neutronics background. Moreover, the system also incorporates interactive visualization and geometry processing algorithms, making it easier to deploy complicated models quickly. Model building is a once-off exercise, as the system deploys translators (templates) to export the model to various analysis codes.
Implements a robust inventory management system, which allows analysis codes that lack this feature to be used for multi-cycle core management support.
The application framework allows a single input script to be used with multiple target analysis codes. This once again guarantees consistency in model and input data. The implementation is flexible, allowing the user to intervene at any point: from using it purely as a pre-processor, which only builds input decks, to full execution, including the extraction of data from output files. Functions to manage application deployment to local and remote machines, including multi-node compute clusters, are also included.
The system is designed to be extendable, allowing additional target codes to be added without modifying the core library.
It is a flexible multi-physics platform, allowing code-to-code communication via input files (using its template system), as well as in-memory communication using Python bindings (a minimal sketch of such coupling follows this list). The latter is a feature particular to Python, and is one of the reasons it was chosen as a scripting platform.
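As a minimal, self-contained sketch of what in-memory coupling enables, the example below iterates two toy single-physics models (a stand-in nodal power shape and a stand-in temperature response) to a consistent solution with a simple fixed-point (Picard) scheme. Real OSCAR-5 multi-physics calculations would couple actual solver bindings rather than these toy functions:

    import numpy as np


    def neutronics(temperature):
        """Toy nodal power: cosine base shape with mild Doppler-like feedback."""
        z = np.linspace(-0.45, 0.45, temperature.size)
        power = np.cos(np.pi * z) / (1.0 + 1.0e-4 * (temperature - 300.0))
        return power / power.mean()       # normalized nodal power shape


    def thermal_hydraulics(power):
        """Toy temperature response to the nodal power shape."""
        return 300.0 + 250.0 * power


    temperature = np.full(10, 300.0)      # initial guess for 10 axial nodes
    for it in range(50):
        power = neutronics(temperature)
        new_temperature = thermal_hydraulics(power)
        if np.max(np.abs(new_temperature - temperature)) < 1.0e-6:
            break
        temperature = new_temperature

    print(f"converged after {it + 1} iterations")

The same fixed-point structure applies whether the two solvers exchange data through files written by the template system or directly in memory through Python bindings; the latter simply removes the file I/O from the iteration loop.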