cOMPoSe (OSCAR Model Preparation System) User Manual

Introduction

Moving from a full-detail, heterogeneous model that uses transport theory for the neutronic analysis to a coarse (or nodal) homogenized representation suitable for diffusion theory is a fairly involved process. There are many factors to take into consideration, but the final outcome is a model with a limited number of large (assembly-sized) meshes, which allows a very fast calculation of the neutron flux distribution. For the model to be useful, it must be prepared in such a way that it captures as much of the neutronic behavior of the heterogeneous model as possible. In the cOMPoSe system, this typically involves the following steps:

  1. Since the cOMPoSe system attempts to retain as many properties as possible from the detailed model, the bulk of the homogenization process is performed on full reactor models, as opposed to single-assembly models. Thus, the first step is to select a core configuration (or configurations) that best fits the intended purpose of the homogenized model.

  2. For a chosen core configuration, a coarse radial mesh is selected, which defines the node sizes. This is typically done in such a way that the fuel pitch is preserved, so that each fueled assembly fills exactly one node.

  3. Homogenized cross sections are then calculated on the nodal mesh for each axial layer, by performing a two-dimensional transport calculation over the entire slice. Generalized equivalence theory is used to ensure that reaction rates and leakages are preserved in each node (the defining relations are recalled after this list). The major advantage of this approach is that complicated static features, like the ex-core facilities encountered in research reactors, are treated explicitly and accurately.

  4. Fueled assemblies, and any other loadable components that either undergo state changes (e.g. burnup) or do not stay in a fixed position in the core, need additional treatment. This is because, for a fast operational support tool, we do not want to re-homogenize the entire core every time the configuration changes. Thus, fueled and other loadable assemblies are treated in a more traditional fashion, by performing assembly-level lattice calculations in approximate environments (so-called colorsets). These calculations also account for burnup and state changes. It is important to note that, since it is difficult to capture all the environments a loadable assembly will experience, this is a major source of error, and its effect must be carefully monitored.

  5. Finally, all two-dimensional layers are stacked together to form a three-dimensional model. Since axial leakage is not preserved, this is another potential source of error.
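
The equivalence step (step 3) relies on standard flux-volume weighting and generalized equivalence theory. As a reminder of the textbook relations (generic notation, not cOMPoSe-specific input), the homogenized cross section for reaction x in node i, and the discontinuity factor on node surface s, are defined as

    \Sigma_{x,i}^{\mathrm{hom}} = \frac{\int_{V_i} \Sigma_x(\mathbf{r})\, \phi^{\mathrm{het}}(\mathbf{r})\, \mathrm{d}V}{\int_{V_i} \phi^{\mathrm{het}}(\mathbf{r})\, \mathrm{d}V},
    \qquad
    f_{i,s} = \frac{\bar{\phi}_{s}^{\mathrm{het}}}{\bar{\phi}_{s}^{\mathrm{hom}}},

where \phi^{\mathrm{het}} is the reference transport (heterogeneous) flux, and \bar{\phi}_{s}^{\mathrm{het}} and \bar{\phi}_{s}^{\mathrm{hom}} are the surface-averaged fluxes from the heterogeneous and homogeneous (nodal diffusion) solutions, respectively. Enforcing these relations per node and energy group preserves node-wise reaction rates and surface leakages relative to the reference transport solution.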

During each step of the process, the system gives feedback on the errors incurred (as compared to the detailed heterogeneous model). These include:

  • equivalence errors, which typically indicate how well the equivalence parameters can compensate for the move from transport to diffusion theory,

  • the errors introduced when adding loadable components homogenized from approximate environments, and

  • the full three-dimensional error, which includes axial leakage errors, as well as other axial effects such as the movement of control rods.

Since the errors are now traceable to their sources, the model can be refined accordingly, so that one ends up with a nodal diffusion model with acceptable and well-quantified error margins.

Typical structure of the compose package for your model

To generate nodal models for all loadable components and reactor configurations, the compose package contains a number of scripts, typically organized as follows:

  • One script for each loadable component. Loadable here means any reactor component that will move to different positions in the core and/or experience different conditions, e.g. burnup, temperature, etc. The homogenization of these components is performed in simplified representative environments, since the real core environment is not a realistic representation of the conditions over the entire lifetime of the assembly. Also, when burnup- and state-dependent homogenization is performed, smaller models help reduce the overall calculation time.

    These scripts are primarily used to define the nodal representation of a particular component. They can, however, also define small-scale nodal models, which are used to monitor error propagation.

  • A script for each major core configuration, which generates homogenized mixtures for all static channels in the nodal mesh. These include core channels with components that will not move during operation, ex-core reflector regions, and channels containing static structural components, e.g. the core box and beam tubes. Typically, one would have a script defining a nodal configuration for each major state of the target reactor, e.g. a cold state for startup experiments and a hot state for normal operation. Significant changes to in-core or ex-core facilities (e.g. filling or evacuation of a beam tube) might also require a new nodal model. However, in many instances, by making correct use of model multiplicities, you can get away with a single nodal model covering many operational regimes.

    These scripts create a nodal configuration, which ‘freezes’ all the static channels but leaves loadable channels open. In MGRAC terminology, it defines a configuration file and assembly definition files for all the static positions. Once the loadable compositions are defined, the script combines all mixtures into a single library, which is attached to the configuration.

The following gives an example of the contents of a compose package:

-SAFARI_Benchmark
    |- compose
        |- inf_1.py
        |- inf_2.py
        |- experiments.py
        |- operational.py

where inf_1 and inf_2 are modules (scripts) setting up two different fuel assembly types, experiments is the configuration used for startup experiment modeling, and operational sets up the nodal model used for core-follow analysis.
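
As a purely illustrative sketch of how such modules are typically laid out (the function names and structure below are hypothetical placeholders chosen for this example, not the actual cOMPoSe or MGRAC interface; refer to the reference sections of this manual for the real input), a loadable-component module and a configuration module could be organized along these lines:

    # inf_1.py -- illustrative skeleton of a loadable-component module.
    # All names below are hypothetical placeholders, not the real
    # cOMPoSe/MGRAC interface.

    def homogenize_component():
        # 1. Define the heterogeneous description (geometry, materials)
        #    of the fuel assembly for the lattice calculation.
        # 2. Embed it in a simplified representative environment
        #    (a colorset) rather than the full core.
        # 3. Run burnup- and state-dependent lattice calculations to
        #    generate homogenized cross sections over the component's
        #    expected operating envelope.
        # 4. Write the nodal representation of the component for later
        #    use by the configuration modules.
        raise NotImplementedError("illustrative skeleton only")

    def build_error_monitor_model():
        # Optionally define a small-scale nodal model of the colorset,
        # used to monitor how homogenization errors propagate for this
        # component.
        raise NotImplementedError("illustrative skeleton only")


    # operational.py -- illustrative skeleton of a configuration module.

    def build_nodal_configuration():
        # 1. Select the core configuration and the coarse radial mesh
        #    (typically one node per fuel assembly pitch).
        # 2. Homogenize all static channels (in-core structures, ex-core
        #    reflector, beam tubes) layer by layer with two-dimensional
        #    transport calculations and equivalence parameters.
        # 3. 'Freeze' the static channels: write the configuration file
        #    and the assembly definition files for the static positions,
        #    leaving the loadable channels open.
        # 4. Combine the static mixtures with the loadable-component
        #    mixtures into a single library attached to the configuration.
        raise NotImplementedError("illustrative skeleton only")

In an actual compose package each of these would live in its own script, and the real cOMPoSe helper calls would replace the numbered comments.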

Creating and running a cOMPoSe module

Building an automated compose module using auto-compose