High Performance Computing Directions and Needs

Extract from a report prepared for the Bureau of Resource Sciences High Performance Computing Centre by compleXia, August 1998


The High Performance Computing (HPC) environment within the Bureau of Resource Sciences (BRS) provides a research environment that gives decision makers, scientists and others involved in analysing sustainability and development issues access to advanced computing technologies. The HPC centre offers facilities for research, analysis and visualisation of information and knowledge, and provides the platforms required for education, training and technology transfer associated with advanced computing technologies.

Computational research, simulation, visualisation and integrated modeling require facilities that provide enormous storage, fast computing responses and rapid networking. The HPC centre was developed to provide these facilities for research into ecologically sustainable development. This is a "grand challenge" problem with more than 10⁸ entities and 10⁵ attributes. Problems in this area are at least an order of magnitude beyond the requirements of global climate prediction.
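The scale claim above can be made concrete with a back-of-envelope calculation. The entity and attribute counts come from the report; the assumptions that every entity carries every attribute and that each value occupies 8 bytes are illustrative, not figures from the report:

```python
# Rough storage scale of the "grand challenge" problem described above.
# Assumptions (not from the report): every entity carries every attribute,
# and each attribute value is stored as an 8-byte number.
entities = 10**8
attributes = 10**5
bytes_per_value = 8  # assumed

total_values = entities * attributes          # 10^13 attribute values
total_bytes = total_values * bytes_per_value  # 8 * 10^13 bytes
terabytes = total_bytes / 10**12

print(f"{total_values:.0e} values ~ {terabytes:,.0f} TB")
```

Even under these conservative assumptions the raw data alone runs to tens of Terabytes, which is why the storage and HSM facilities described below are central to the centre's design.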

The BRS has been at the leading edge of HPC technology; at one stage it had all three major hardware architectures in active use: Symmetric Multi-Processor (SMP), Massively Parallel (MPP) and Vector Parallel (VP) systems. The BRS has also shown the way with innovative storage facilities, adopting techniques such as Hierarchical Storage Management (HSM) and distributed filesystems; FDDI backbone networks; multi-channel/multi-host RAID disk subsystems; magneto-optical and tape storage silos; and switched network technologies, both as network backbones and to the desktop.

Current Environment

The current (1998) HPC environment (Figure 1) consists of:

  1. Computational Cluster comprising two SUN Enterprise 6000 systems. A third system using a dual-processor DM2 Silicon Graphics server provides facilities for the BRS Intranet and specific image processing applications.
  2. Disk subsystems provided by an XSI RAID disk storage array, which ensures high data availability, reliability and performance.
  3. Development and Test system provided by a four-processor SUN 1000E system with 256 Mbytes of physical memory. This system is used for application development, testing and evaluation of new software and hardware facilities such as SUN's HPC cluster software.
  4. HSM and archiving system provided by a Cray Research J916 computer connected to a StorageTek Wolfcreek tape silo. This system provides transparent HSM facilities and network backup for the HPC environment. The J916 is also a powerful vector parallel computer, which can be used for specific modeling tasks. The StorageTek silo provides four tape transports (two Redwood and two Timberline) and 984 cartridges. Total capacity of the silo is in excess of 50 Terabytes with all Redwood cartridges.
  5. Network backbones implemented using three technologies. All systems have 100 Mbit FDDI facilities, most have 100 Mbit switched ethernet capabilities, and the SUN E6000 systems also communicate via a switched 1 Gbit ethernet backbone. Communications to HPC users centre on a mix of switched and unswitched 10 Mbit links, with some users having 100 Mbit switched links to their desktops.


Figure 1 Current HPCE Diagram
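The transparent HSM facilities described in item 4 above can be sketched as a simple migration policy: files move from fast disk to tape once they have been idle long enough, freeing disk space while remaining retrievable. The file names, sizes and the 30-day threshold below are illustrative assumptions, not the J916's actual configuration:

```python
# A minimal sketch of a hierarchical storage management (HSM) policy:
# idle files migrate from disk to tape, reclaiming disk space.
# Names and thresholds are illustrative, not the BRS configuration.
from dataclasses import dataclass

DISK, TAPE = "disk", "tape"
IDLE_DAYS_BEFORE_MIGRATION = 30  # assumed policy threshold

@dataclass
class File:
    name: str
    size_mb: int
    days_idle: int
    tier: str = DISK

def migrate(files):
    """Move sufficiently idle disk files to tape; return MB reclaimed."""
    reclaimed = 0
    for f in files:
        if f.tier == DISK and f.days_idle >= IDLE_DAYS_BEFORE_MIGRATION:
            f.tier = TAPE
            reclaimed += f.size_mb
    return reclaimed

files = [File("model_run.dat", 500, 90), File("active.nc", 200, 2)]
print(migrate(files))  # reclaims 500 MB; active.nc stays on disk
```

In a real HSM system the migration is transparent: a stub remains in the filesystem and recalling the file from tape happens automatically on access.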

Future Directions

The concept behind the future development of the HPC facility is that of a "virtual computer" (VC): resources are connected together and provided to clients, but need not reside in close geographical proximity. That is, the facility is built on CPU and storage "components", not necessarily co-located, linked by networks in a way that is totally transparent to the client (Figure 2). The CPUs providing the computing resources may live in one building, city or even country, while the storage facilities are located elsewhere, and the client may be in yet another building, city or country.

Sharing of data files between the systems is one of the most important aspects of the "Virtual Computer Strategy" adopted by the HPC centre. The strategy integrates all HPC systems into a single entity, ensuring a high degree of transparency of computing resources to the users. When fully implemented, users will be able to log in to the environment and have resources allocated to them automatically by the system. This allocation will seek the most efficient utilisation of the environment while remaining transparent to the users. Initially, the strategy will be implemented locally; as the cost of high-speed bandwidth over long distances decreases, it will become feasible to expand it to incorporate national and global resources.
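The automatic allocation step described above reduces, at its simplest, to choosing the least-loaded host in the pool at login. The host names and the CPU-count metric below are illustrative assumptions, not the BRS scheduler:

```python
# A sketch of transparent resource allocation: at login, pick whichever
# host in the pool has the most free CPUs. Host names and the load
# metric are illustrative assumptions, not the actual BRS environment.
def allocate(hosts):
    """hosts maps name -> (cpus_used, cpus_total); return least-loaded host."""
    return max(hosts, key=lambda h: hosts[h][1] - hosts[h][0])

pool = {"e6000-a": (10, 16), "e6000-b": (4, 16), "j916": (14, 16)}
print(allocate(pool))  # e6000-b, with 12 CPUs free
```

A production scheduler would also weigh memory, architecture (SMP versus VP) and data locality, but the principle is the same: the user sees one environment, and the system decides where the work runs.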

Current technology allows the separation of CPUs and the formation of dynamically allocated CPU "farms" over short distances at reasonable cost. Storage facilities combined with techniques such as HSM allow storage to be separated into levels or groups with differing performance, capacity and cost (Malafant and Radke, 1995). Local area networks are rapidly delivering the performance and cost effectiveness required for high-speed (1 Gbit or more) connectivity, although still at the local scale. Wide area communications that supply the requisite bandwidth and cost effectiveness for the VC approach have yet to materialise. The development of the ubiquitous information super-highway and the delivery of entertainment such as video-on-demand may drive the delivery of reasonably priced higher-bandwidth communications networks, which would at least allow the development of prototype VC environments.
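The bandwidth argument above is easy to quantify: the time to move a working dataset across the link speeds this report mentions shows why the VC approach is local-only for now. The 100 GB dataset size is an assumed example, not a figure from the report:

```python
# Why the VC approach waits on wide-area bandwidth: time to move one
# working dataset at the link speeds mentioned in this report.
# The dataset size is an assumed example, not a figure from the report.
dataset_gb = 100  # assumed working-set size
dataset_bits = dataset_gb * 8 * 10**9

for name, mbit in [("10 Mbit", 10), ("100 Mbit", 100), ("1 Gbit", 1000)]:
    seconds = dataset_bits / (mbit * 10**6)
    print(f"{name}: {seconds / 3600:.1f} hours")
```

At desktop-class 10 Mbit links the transfer takes the better part of a day, while a 1 Gbit backbone brings it down to minutes; remote "components" only become transparent once wide-area links approach the latter figure at reasonable cost.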


Malafant, K.W.J. and Radke, S. (1995). The Terabyte problem in Environmental Databases. In: Recent Advances in Marine Science and Technology '94. Eds. Bellwood, O., Choat, H. and Saxena, N. ISBN 0 86443 540 1, Pp. 615-623.

Web site established: 20 September 1998       Last updated: 20 September 1998
URL http://complexia.com.au/resume.html
Site designer and maintainer: Kim Malafant (kim@complexia.com.au)

Copyright © 1998 by Kim Malafant. All rights reserved. This Web page may be freely linked to by other Web pages. Contents may not be republished, altered or plagiarized. compleXia does not control or endorse the content of third party Web Sites.