Orcina news

Here you will find the latest news on the development of OrcaFlex. Alongside our LinkedIn page, it is a valuable source of information about what we are up to!

Distributed OrcaFlex 6.0a released

We have just released Distributed OrcaFlex 6.0a which you can download from the Distributed OrcaFlex web page.

The significant change introduced in this release is the ability to run more than one Distributed OrcaFlex (DOF) Client process on the same computer. The motivation for the change is to support machines with very large numbers of processors. Machines with more than 64 processors have the processors split into processor groups. Each DOF Client can only use processors from a single group. For previous versions, there was a single DOF Client process per machine and so that process could only use processors from a single group. Allowing multiple DOF Client processes on a single machine enables full use of the processing capacity of that machine.
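Orcina does not publish the exact mechanism, but the arithmetic behind the change can be sketched as follows. On Windows, logical processors are split into groups of at most 64, and a process is by default confined to one group, so one client process per group is needed to cover the whole machine. The `GROUP_SIZE` constant and function name below are illustrative, not part of the Distributed OrcaFlex API:

```python
# Windows splits logical processors into groups of at most 64.
GROUP_SIZE = 64

def client_processes_needed(logical_processors: int) -> int:
    """Minimum number of client processes needed so that every
    processor group has one process bound to it."""
    # Ceiling division: e.g. 96 processors -> 2 groups -> 2 processes.
    return -(-logical_processors // GROUP_SIZE)

print(client_processes_needed(128))  # a 128-core machine needs 2 client processes
```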

As well as the issue of processor groups, machines with large numbers of processors usually use a NUMA memory architecture. To make the best use of such machines, programs need to ensure that memory is allocated on the NUMA node that contains the processors which will use that memory.


Now, when the DOF Client service starts, it detects the number of NUMA nodes or processor groups present and starts the same number of DOF Client processes, setting their thread affinity so that all the processor cores on that computer are used.
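To illustrate the affinity side of this, the sketch below builds one 64-bit affinity mask per processor group, each with one bit set per logical processor in that group. This is a minimal, assumption-laden illustration of the idea (the function name is hypothetical, and real Windows group masks are set via the `GROUP_AFFINITY` API rather than computed like this):

```python
GROUP_SIZE = 64  # maximum logical processors per Windows processor group

def affinity_masks(logical_processors: int) -> list[int]:
    """One bitmask per processor group; each set bit enables one
    logical processor within that group."""
    masks = []
    remaining = logical_processors
    while remaining > 0:
        bits = min(remaining, GROUP_SIZE)
        masks.append((1 << bits) - 1)  # lowest `bits` bits set
        remaining -= bits
    return masks

# A 96-processor machine: one full group of 64, one partial group of 32.
print([hex(m) for m in affinity_masks(96)])
```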

It is also possible to choose the number of DOF Client processes started, for cases where more client processes than groups are required, or where the machine has only one processor group. The advantage here is improved performance when running models that use Python post-calculation actions or external functions, which are currently limited by the fact that the Python interpreter is single threaded and there is only one Python interpreter per DOF Client process.

Other changes include:

  • Running or waiting jobs in the job list can now be manually paused and resumed.
  • Improved behaviour with the licence manager. A model will no longer fail immediately if the licence connection is lost, but will instead retry the connection a number of times first.
  • The DOF Server can be configured to run jobs strictly in the order they are submitted, rather than sharing processing between users regardless of submission order.
  • Each DOF Client now has its own small buffer queue of scheduled jobs, which allows the DOF Server to distribute jobs to each client in larger chunks. This smooths job throughput, particularly for shorter simulations.
  • New job batches are started gradually to avoid sharp spikes in file server activity when saving after completion. This feature can be disabled if required.
  • The DOF Server no longer automatically moves running jobs between DOF Clients. This feature was previously disabled by default but has now been removed completely. This simplifies the DOF Server code, and also avoids situations where jobs could be moved unnecessarily: there was no simple decision algorithm for the DOF Server that covered all situations satisfactorily.