Hurricane and Whirlwind are the subclusters of SciClone with Intel Xeon "Westmere-EP" processors. Their front-end is .edu and they share the same startup modules file.

Hardware

2 × NVIDIA Tesla M2075, 1.15 GHz, 448 CUDA cores

The Hurricane and Whirlwind subclusters share a single QDR InfiniBand switch, and can also communicate with the Rain subcluster via a single DDR (20 Gb/s) switch-to-switch link, with Hima via a single QDR (40 Gb/s) switch-to-switch link, and with Bora and Vortex via two QDR switch-to-switch links.

Main memory size works out to a generous 6-24 GB/core, meeting or exceeding that of SciClone's large-memory Rain nodes. Per-core InfiniBand bandwidth is 5 GB/core (less 20% protocol overhead), also matching that of the existing Rain compute nodes. However, when the higher speed of the Xeon processors is taken into account, the bandwidth/FLOP is lower; when GPU acceleration is factored in, the communication-to-computation ratio could drop by a couple of orders of magnitude. Communication performance is therefore an important concern when designing multi-node parallel algorithms for this architecture.

TORQUE node specifiers

All access to compute nodes (for either interactive or batch work) is via the TORQUE resource manager, as described elsewhere. TORQUE assigns jobs to a particular set of processors so that jobs do not interfere with each other. The TORQUE properties for the hurricane and whirlwind nodes are:

hu01-hu08: c10, c10x, x5672, el6, compute, hurricane
hu09-hu12: c10a, c10x, x5672, el6, compute, hurricane
wh01-wh44: c11, c11x, x5672, el6, compute, whirlwind
wh45-wh52: c11a, c11x, x5672, el6, compute, whirlwind

This set of properties allows you to select different subsets of hurricane and whirlwind nodes. While Hurricane additionally has GPUs, Hurricane and Whirlwind have the same CPU configuration and InfiniBand switch, and can be used effectively together as a "metacluster" by non-GPU parallel jobs using the TORQUE property named for their processor model, e.g. x5672. To use only Whirlwind nodes, you would instead specify the whirlwind property. If you have memory requirements exceeding the 8000 MB/core available on every Whirlwind node, ask for only the large-memory nodes. For example, one could specify the large-memory whirlwind nodes explicitly: qsub -l nodes=4:c11a:ppn=8.
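To make the property-based node selection concrete, here is a short sketch of qsub invocations built from the properties listed above. The script name myjob.sh is a placeholder, and the node/core counts are arbitrary examples; only the property names (x5672, whirlwind, c11a) come from the node list in this document.

```shell
# Span Hurricane and Whirlwind together as a "metacluster" for a
# non-GPU parallel job, selecting by processor-model property:
qsub -l nodes=4:x5672:ppn=8 myjob.sh

# Restrict the job to Whirlwind nodes only:
qsub -l nodes=4:whirlwind:ppn=8 myjob.sh

# Request only the large-memory Whirlwind nodes (wh45-wh52):
qsub -l nodes=4:c11a:ppn=8 myjob.sh
```

Each -l nodes=N:property:ppn=P request asks TORQUE for N nodes carrying the given property, with P processor cores per node.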