Platforms and Hardware Recommendations

Platforms

OptiStruct runs on the following platforms:
Operating System   Architecture   Version                SMP   SPMD
Linux              64-bit         RHEL 6.6               Yes   Yes
                                  CentOS 7.2
                                  SLES 12 SP1
Windows            64-bit         7/8.1/10               Yes   Yes
                                  Server 2008 (R2/HPC)   Yes   Yes
SMP
Symmetric Multiprocessing (Multiple processors, single memory)
SPMD
Single Program Multiple Data (Massively parallel processing; multiple processors, each having its own memory)
RHEL
Red Hat Enterprise Linux
SLES
SUSE Linux Enterprise Server

Hardware Recommendations

Altair does not recommend any particular brand of hardware. Any hardware purchase is a balance between cost and performance. The following items can affect performance with OptiStruct.
CPU
Performance improves with a faster processor clock speed and with faster data exchange between the CPU cores of the processor.
Memory
The amount of memory required by an analysis depends on the solution type, the types of elements in the model, and the model size. Large OptiStruct solutions can require large amounts of memory. In addition, memory that is not used by OptiStruct remains available to the operating system for I/O caching, so the amount of free memory can dramatically affect the wall-clock time of a run: the more free memory, the less I/O wait time and the faster the job will run. Even if an analysis is too large to run in-core, extra memory will increase the speed of the analysis, because the operating system uses the unused RAM to buffer disk requests.
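As an illustration, the free memory on a Linux node can be checked before a job is launched. The helper below is a minimal sketch, not part of OptiStruct; the function name and the use of /proc/meminfo are our assumptions for a Linux system.

```python
def available_memory_gb(meminfo_text):
    """Return available memory in GB from the text of /proc/meminfo.

    Prefers MemAvailable (free memory plus reclaimable page cache),
    falling back to MemFree. Values in /proc/meminfo are in kB.
    """
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            fields[key.strip()] = int(parts[0])  # value in kB
    kb = fields.get("MemAvailable", fields.get("MemFree", 0))
    return kb / (1024 * 1024)
```

On a Linux node, `available_memory_gb(open("/proc/meminfo").read())` reports how much RAM would be left for OptiStruct and for the operating system's I/O cache.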
Disk drives
OptiStruct solutions often require writing large temporary scratch files to the hard drive, so fast hard drives are important. The best solution is to use two or more fast hard drives in RAID 0 (striped) as a dedicated location for scratch files during the solution. A typical configuration is one drive for the operating system and software, plus 2-15 drives striped together as the scratch space for the runs.
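As a sketch of such a setup on Linux, two drives can be striped with mdadm and the run pointed at the resulting volume. The device names, filesystem, and mount point are illustrative only, and the `-tmpdir` run option is an assumption to be checked against the Run Options documentation for the installed OptiStruct version.

```shell
# Stripe two fast drives into a RAID 0 array dedicated to scratch files
# (device names and mount point are illustrative only).
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.xfs /dev/md0
mkdir -p /scratch
mount /dev/md0 /scratch

# Point the run at the striped volume (the -tmpdir run option is
# assumed here; verify against your version's documentation).
optistruct model.fem -tmpdir /scratch
```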
Interconnect
The parallel SPMD versions of OptiStruct can run on multiple processors and/or on multiple nodes in a cluster. To run parallel jobs on a cluster, each node should have enough RAM to run a full job in non-parallel mode, and each node should have its own disk space sufficient to store all the scratch files written on that node. A cluster architecture with separate disks for each node will achieve better performance than a single shared RAID array of disks. A fast interconnect is important, but anything beyond Gigabit Ethernet will not visibly speed up the solution. When nodes share a scratch disk area, however, the interconnect speed becomes a critical factor for all out-of-core jobs.
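For illustration, an SPMD run across two cluster nodes, each writing scratch files to its own local disk, might be launched as below. The `-mpi`, `-np`, `-hostfile`, and `-tmpdir` option names, the hostfile format, and the node names are assumptions to be verified against the installed version's run options.

```shell
# hosts.txt lists one cluster node per line (illustrative names).
cat > hosts.txt <<EOF
node01
node02
EOF

# Launch a 2-process SPMD job; each node writes its scratch files to
# its own local disk rather than a shared area (option names assumed).
optistruct model.fem -mpi -np 2 -hostfile hosts.txt -tmpdir /local/scratch
```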

For a large NVH analysis, it is recommended to have at least 8 GB per CPU with at least 4 disks in RAID 0 for temporary scratch files.
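The sizing guideline above reduces to simple arithmetic. The sketch below applies the 8 GB-per-CPU figure; the function name and default are ours, not Altair's.

```python
def nvh_minimum_ram_gb(num_cpus, gb_per_cpu=8):
    """Minimum RAM suggested for a large NVH analysis: 8 GB per CPU."""
    return num_cpus * gb_per_cpu

# A 16-core node under this guideline would need at least
# nvh_minimum_ram_gb(16) GB of RAM, alongside at least 4 disks
# in RAID 0 for temporary scratch files.
```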