AOS has traditionally operated local data centers in the Math Sciences and Geology buildings, which provide space and support for compute/storage systems, HPC clusters, and shared servers.

As of summer 2016, AOS IT Services can no longer accommodate additional faculty and research group acquisitions in the AOS data centers. All new compute/storage nodes must either replace an existing system or be housed in UCLA's data center through its co-location services.

In addition to the local data centers, UCLA offers the following services to affiliated personnel:


UCLA Co-Location | MSA Data Center

IT Services offers co-location services in the MSA Data Center. This service provides clients with a reliable environment, a secure location, and network connectivity for hosting mission-critical servers and related equipment. The MSA Data Center is connected to the campus backbone network via two separate 10-Gigabit trunks for high availability. Clients are provided rack space in cabinets with dual power cording, uninterruptible power supply (UPS) systems backed by a motor generator (MG), and hardware installation coordination. The center is staffed 24/7/365 by Data Center personnel.


IDRE HPC | Hoffman2 Cluster

The Hoffman2 Cluster is a project of the Institute for Digital Research and Education (IDRE) Cluster Hosting Program. It opened to users on January 28, 2008. The Hoffman2 Cluster is managed and operated by the IDRE Research Technology Group under the direction of Bill Labate.

UCLA’s Shared Hoffman2 Cluster currently consists of 1,200+ 64-bit nodes and 13,340 cores, with an aggregate of over 50 TB of memory. Each node has a 1 Gb Ethernet connection and a DDR, QDR, or FDR InfiniBand interconnect. The cluster provides a job scheduler; C, C++, and Fortran 77/90/95 compilers for the current Shared Cluster architecture; and applications and software libraries covering chemistry, chemical engineering, engineering, mathematics, visualization, programming, and an array of miscellaneous software. The current peak CPU performance of the cluster is approximately 150 trillion double-precision floating-point operations per second (TFLOPS), plus another 200 TFLOPS from GPUs. Hoffman2 is currently the largest and most powerful shared cluster in the University of California system.

Additional Hoffman2 resources for researchers include complete system administration for contributed cores, cluster access through dual redundant 10 Gb network interconnects to the campus backbone, the ability to run large parallel jobs that take advantage of the cluster’s InfiniBand interconnect, web access to the cluster through the UCLA Grid Portal, and access to a Panasas parallel filesystem and a NetApp storage system. Current HPC storage capacity is 2 petabytes.
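
To illustrate the kind of parallel job that benefits from the InfiniBand interconnect, below is a minimal MPI sketch in Python using mpi4py. This is only a sketch: it assumes mpi4py and an MPI library are available on the cluster, and the modules to load and the scheduler submission commands are documented by IDRE rather than shown here.

    # minimal_mpi.py -- a toy MPI program (assumes mpi4py is installed on the cluster)
    from mpi4py import MPI

    comm = MPI.COMM_WORLD      # communicator spanning every rank in the job
    rank = comm.Get_rank()     # this process's rank, 0 .. size-1
    size = comm.Get_size()     # total number of MPI processes

    # Each rank contributes its own rank number; rank 0 collects the sum.
    total = comm.reduce(rank, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"{size} ranks reporting; sum of ranks = {total}")

A script like this would normally be launched with mpirun/mpiexec across the nodes allocated by the job scheduler, typically with one MPI rank per core.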

The cluster is also an endpoint on the Globus Online service via the 10 Gb network connection to the campus backbone, providing researchers with a facility for fast and reliable data movement between Hoffman2 and most leadership-class facilities across the USA.
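
For scripted transfers, Globus also offers a Python SDK. The sketch below shows the general pattern for submitting a recursive directory transfer; the application (client) ID, endpoint IDs, username, and paths are all placeholders, and the real Hoffman2 endpoint ID and authentication details would come from Globus and IDRE documentation.

    import globus_sdk

    # All IDs below are placeholders (hypothetical values for illustration only).
    CLIENT_ID = "00000000-0000-0000-0000-000000000000"          # registered Globus app ID
    HOFFMAN2_ENDPOINT = "11111111-1111-1111-1111-111111111111"  # Hoffman2 endpoint ID
    REMOTE_ENDPOINT = "22222222-2222-2222-2222-222222222222"    # destination endpoint ID

    # Standard Globus native-app login flow: log in via a browser, paste back the code.
    auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
    auth_client.oauth2_start_flow()
    print("Log in at:", auth_client.oauth2_get_authorize_url())
    code = input("Paste the authorization code here: ").strip()
    tokens = auth_client.oauth2_exchange_code_for_tokens(code)
    transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
    )

    # Describe a recursive directory transfer from Hoffman2 to the remote endpoint.
    task = globus_sdk.TransferData(
        tc, HOFFMAN2_ENDPOINT, REMOTE_ENDPOINT, label="Hoffman2 results transfer"
    )
    task.add_item("/u/home/joebruin/results/", "/incoming/results/", recursive=True)

    response = tc.submit_transfer(task)
    print("Submitted Globus transfer, task id:", response["task_id"])

The same transfer can also be set up interactively through the Globus web interface; the SDK route is mainly useful for automating recurring data movement.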


HPC Compute Nodes / Modelling Servers


Storage Arrays / Servers


Network / Switching

Both the AOS and UCLA Data Centers can accommodate 1 Gb, 10 Gb, and InfiniBand connections.
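
As a quick way to verify which of these interconnects a particular node actually has, the following sketch reads the standard Linux sysfs entries. It assumes a typical Linux installation with the InfiniBand drivers loaded; output will vary by node.

    import glob
    import os

    # Ethernet interfaces: /sys/class/net/<iface>/speed reports link speed in Mb/s.
    for iface in sorted(os.listdir("/sys/class/net")):
        speed_file = os.path.join("/sys/class/net", iface, "speed")
        try:
            with open(speed_file) as f:
                mbps = int(f.read().strip())
            print(f"{iface}: {mbps / 1000:g} Gb/s Ethernet link")
        except (OSError, ValueError):
            pass  # interface is down, virtual, or does not report a speed

    # InfiniBand HCAs appear under /sys/class/infiniband when the drivers are loaded.
    ib_devices = glob.glob("/sys/class/infiniband/*")
    if ib_devices:
        print("InfiniBand devices:", ", ".join(os.path.basename(d) for d in ib_devices))
    else:
        print("No InfiniBand devices detected on this node.")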


System Accounts / Access

All research accounts are part of the AOSID system. Access for temporary students/researchers can be requested from AOS Support.