Computing Facilities

Last updated: June 2024

The Department of Computer Science at Princeton University maintains a dedicated computing and network infrastructure to support its academic and research mission. Members of the department also have access to shared campus computing resources.

The CS Network has a backbone based on 10/40/100 Gbps Ethernet and spans several buildings including the Computer Science Building, 221 Nassau Street, the Friend Center, Corwin Hall, and the main campus data center. The CS Network has a 40 Gbps uplink to the main campus network. The main campus has two 10 Gbps uplinks to the commodity Internet and two 100 Gbps uplinks to Internet2. Within the Computer Science Building, each room is wired with multiple Category 5/5e UTP ports, single-mode fiber, and multimode fiber. The entire building is covered by the campus wireless network.

The department maintains a central, high-performance modular file system with storage nodes interconnected by 100 Gbps Ethernet. In its current configuration, it provides approximately 1.1 PB of usable storage. The file system connects to the CS Network with a 400 Gbps uplink. All research data are protected by the equivalent of RAID-6 or better, i.e., able to survive at least two simultaneous drive failures.

At any given time, there are hundreds of personal computers connected to the network running Windows, macOS, or Linux. Personal computers can connect to the central file system using CIFS. For centralized, general-purpose computing, the department has two clusters of machines running RHEL 9. The first cluster, cycles, consists of four servers; each server has dual twenty-core CPUs, 1 TB RAM, and 10 Gbps Ethernet connections to the network and file system. The second cluster, ionic, is a traditional Beowulf cluster with an aggregate of 7,000 CPU cores, 41.2 TB RAM, and 720 GPUs. Each node in the ionic cluster has a 10 Gbps Ethernet connection to a dedicated switch which, in turn, has a 120 Gbps uplink to the department backbone and file system. All clusters are remotely accessible via SSH.
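
For illustration, SSH access of this kind can also be scripted, for example with the paramiko library in Python. This is a minimal sketch under assumptions: the hostname and username below are placeholders, not confirmed department endpoints, and key-based authentication via an SSH agent is assumed.

    import paramiko

    # Open an SSH session to a general-purpose login host. The hostname
    # "cycles.cs.princeton.edu" is illustrative, not a confirmed address.
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    # For a sketch only: accept host keys not already in known_hosts.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("cycles.cs.princeton.edu", username="your_netid")

    # Run a quick sanity check on the remote machine.
    _, stdout, _ = client.exec_command("hostname && nproc")
    print(stdout.read().decode())

    client.close()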

In addition to the above clusters for CS department use, the department operates a cluster for use by all School of Engineering and Applied Science (SEAS) faculty and researchers. This cluster, neuronic, consists of 32 compute nodes, each of which contains two CPUs, 512 GB of main memory, and eight GPUs. One additional node of the same configuration acts as the login and job-submission node. Each node has a 10 Gbps uplink to the network, and the cluster has a 120 Gbps uplink to the department backbone and file system.
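
As a sketch only: clusters of this shape are commonly driven by a batch scheduler such as Slurm, though this page does not name the scheduler in use. Assuming Slurm, a GPU job could be prepared and submitted from the login node as below; the script name and all resource directives are hypothetical placeholders, sized against a node with eight GPUs and 512 GB of memory.

    import subprocess
    import textwrap

    # Hypothetical Slurm batch script; the resource requests are
    # placeholders, not confirmed department limits or partition names.
    job = textwrap.dedent("""\
        #!/bin/bash
        #SBATCH --job-name=example
        #SBATCH --nodes=1
        #SBATCH --gres=gpu:2
        #SBATCH --mem=64G
        #SBATCH --time=01:00:00
        srun python train.py
    """)

    with open("example.slurm", "w") as f:
        f.write(job)

    # Submit from the job-submission node; sbatch prints the assigned job ID.
    subprocess.run(["sbatch", "example.slurm"], check=True)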

The Computer Science Building and the attached Friend Center house several laboratory spaces (Graphics Lab, Systems Lab, Theory Lab, etc.) as well as a local co-location facility with 10 racks available for department infrastructure and specialized research equipment. In addition, the CS department is currently assigned 29 racks at the University's main data center, located approximately 2 miles from the Computer Science Building; these racks likewise house both department infrastructure and specialized research equipment.

In addition to the resources dedicated to the CS department or SEAS, researchers also have access to campus resources such as the library system (over 14 million holdings), the DataSpace Repository (for long-term archiving and dissemination of research data), and shared computational clusters. The largest cluster, della, consists of 367 nodes with an aggregate of 17,656 CPU cores (946 TB RAM) and 668 GPUs (46 TB RAM). The nodes are interconnected using FDR InfiniBand.
