ACES Workshop

Dates and Location

Dates: July 18–20, 2024 (Thursday–Saturday)
Location: Omni Providence Hotel, Providence, Rhode Island

Overview

Please join Texas A&M High Performance Research Computing (HPRC) in Providence for a pre-conference workshop ahead of the Practice & Experience in Advanced Research Computing (PEARC24) conference. You’ll learn how the ACES (Accelerating Computing for Emerging Sciences) testbed complements the National Science Foundation’s portfolio of advanced cyberinfrastructure (CI) resources and services supported by U.S. taxpayer investment. Participants will meet kindred community members and future collaborators from across the country.

ACES offers a menu of accelerators in a composable hardware infrastructure designed to excel at artificial intelligence/machine learning (AI/ML) tasks, fronted by an Open OnDemand interface to a robust and growing software ecosystem. The HPRC advanced user support team will conduct tutorials and present PEARC24 papers that chronicle how ACES tackles data-intensive AI/ML tasks with greater speed, precision, and efficiency. Participants from all domains are welcome to deliver lightning talks about their own ACES experience and share how it has influenced their plans for the future.
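For a flavor of the workloads the tutorials target, here is a minimal sketch of how an AI/ML job might select whichever accelerator an ACES node exposes. It assumes only a stock PyTorch installation (version 2.4 or later for the Intel xpu backend); nothing below is ACES-specific.

    # Minimal sketch: pick whichever accelerator the node exposes and run
    # one dense matrix multiply, the core operation in most AI/ML workloads.
    import torch

    def pick_device() -> torch.device:
        if torch.cuda.is_available():                           # NVIDIA GPUs (e.g., H100)
            return torch.device("cuda")
        if hasattr(torch, "xpu") and torch.xpu.is_available():  # Intel GPUs (e.g., PVC)
            return torch.device("xpu")
        return torch.device("cpu")                              # CPU fallback

    device = pick_device()
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)
    c = a @ b                                                   # runs on the chosen device
    print(f"4096x4096 matmul ran on {device}")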

We especially encourage applications from professional staff and research faculty who work at minority-serving institutions of higher learning, who belong to demographics underrepresented in science, technology, engineering, and mathematics (STEM) academics and careers, or whose research domains are new to advanced CI. All applicants must provide a brief description of their current research, their plans for future ACES engagement, and a biosketch (current NSF format).

Registration waivers and travel support are available for a limited number of U.S.-based applicants. Non-academic industry or government affiliates will be charged a $350 registration fee. If you have questions, or if you’d like to present your research, please contact events@hprc.tamu.edu.

Schedule (subject to change; all times are EDT)

Last updated: July 19, 2024

Thursday, July 18: Reception and Dinner
Omni Providence Hotel
Address: One West Exchange Street, Providence, RI 02903
Phone: 401-598-8000
6:00PM - 6:30PM Reception with Light Hors d'Oeuvres
(Newport/Washington, Third Floor)
6:30PM - 9:00PM Buffet Dinner
(Newport/Washington)
Friday, July 19: Tutorials and Lightning Talks
8:00AM - 9:00AM Continental Breakfast
(Waterplace I, 2nd Floor)
9:00AM - 9:10AM Opening Remarks
Bob Chadduck, National Science Foundation
(Waterplace II/III)
9:10AM - 9:30AM ACES Overview
Honggao Liu, Texas A&M High Performance Research Computing
(Waterplace II/III)
9:30AM - 12:00PM AI/ML Workflows on ACES Accelerators (slides)
Zhenhua He, Texas A&M High Performance Research Computing
(Waterplace II/III)
12:00PM - 1:30PM Buffet Lunch and Lightning Talks
(Waterplace I)
1:30PM - 5:00PM NVIDIA CUDA-Q (hosted on the NVIDIA Deep Learning Institute platform)
Mike O'Keeffe, NVIDIA
(Waterplace II/III)
6:30PM - 9:00PM Buffet Dinner
(Newport/Washington)
Saturday, July 20: Tutorials
8:00AM - 8:45AM Continental Breakfast
(Waterplace I)
8:45AM - 9:00AM Office of Advanced Cyberinfrastructure (OAC) Learning and Workforce Development (LWD)
Jenny Li, National Science Foundation
(Waterplace II/III)
9:00AM - 10:15AM Julia (slides)
Wesley Brashear, Texas A&M High Performance Research Computing
(Waterplace I)
9:00AM - 10:15AM Using Containers on ACES for Simulations, Bioinformatics, and AI/ML (slides)
Richard Lawrence, Texas A&M High Performance Research Computing
(Waterplace II/III)
10:15AM - 10:30AM Break
10:30AM - 12:00PM AlphaFold (slides)
Michael Dickens, Texas A&M High Performance Research Computing
(Waterplace I)
10:30AM - 11:15AM Drona Composer Demo (slides)
Marinus Pennings, Texas A&M High Performance Research Computing
(Waterplace II/III)
NVIDIA Parabricks (slides)
Wesley Brashear, Texas A&M High Performance Research Computing
(Waterplace I)
11:15AM - 12:00PM LAMMPS on PVC Demo (slides)
Richard Lawrence, Texas A&M High Performance Research Computing
(Waterplace II/III)
12:00PM - 1:00PM Lunch
(Waterplace I)
1:00PM - 3:00PM ACES Office Hours
(Waterplace I)
1:00PM - 3:00PM Advisory Board Meeting
(Waterplace II/III)
3:00PM - 5:00PM CXL Meetup
(Waterplace I)

Here’s what early adopters have to say about ACES!

Ruisi Cai (UT-Austin) uses ACES to process long context sequences in Large Language Models (LLMs). “Due to transformers’ quadratic memory requirements, LLMs command substantial computational power and agile memory management,” said Cai. The UT-Austin team developed a unique approach, highlighted in the paper “Learning to Compress Long Contexts by Dropping-In Convolutions,” which was accepted at the International Conference on Machine Learning (ICML 2024).
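The quadratic requirement Cai mentions comes from self-attention materializing an n-by-n score matrix over a context of n tokens. A back-of-the-envelope sketch in plain Python (assuming fp16 scores and a single attention head; real models multiply this by the number of heads and layers):

    # Memory for the n-by-n attention-score matrix grows with the square
    # of the context length n (fp16 = 2 bytes per score, one head).
    for n in (1_024, 8_192, 65_536):
        gib = n * n * 2 / 2**30
        print(f"context length {n:>6}: {gib:8.2f} GiB of attention scores")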

Aocheng Li (Purdue) uses ACES for data-driven archaeological site reconstruction. They said, “I love its elegant and lightweight web interface for file manipulation and job creation/submission. Using the composability features, I combine virtual network computing and TensorBoard servers to launch jobs and monitor training output with just a few clicks, all within one browser session. The HPRC staff are extremely helpful and quick to solve my issues and concerns. Using ACES has been an enjoyable experience.”

Freddie Witherden (Texas A&M Department of Ocean Engineering) used ACES to perform high-order accurate fluid flow calculations of bluff bodies. “The range of hardware, including CPUs, NVIDIA GPUs, and Intel GPUs, is perfect for the development, testing, and evaluation of performance-portable coding paradigms. Additionally, the large-memory nodes have proved invaluable for enabling us to perform preprocessing work for simulations on leadership-class computing resources.”

Rubem Mondaini (University of Houston) uses ACES to study quantum many-body problems in Condensed Matter Physics, with the goal of understanding how Coulomb repulsion between electrons can affect the topology of quantum matter. “ACES’ abundant supply of the latest CPUs (Sapphire Rapids), large memory and fast interconnect make it possible to reach physical system sizes unforeseen until now,” said Dr. Mondaini. “This unique combination of assets makes all the difference with investigations in the quantum world,” he added.

Chen-Chun Chen (Ohio State University NOWLAB) primarily uses the Intel GPUs and XeLink nodes on ACES. “Using TensorFlow and Horovod, I’ve been running OSU Micro Benchmarks (OMB) to extend the MVAPICH library to support Intel PVC GPUs,” he said, and added, “I receive invaluable assistance from the HPRC helpdesk, and my experiments on ACES have been consistently smooth.”

Junyuan Hong (UT-Austin) cited ACES in his latest research, which presents a new method for private prompt tuning of LLMs like ChatGPT. The method, called Differentially-Private Offsite Prompt Tuning (DP-OPT), employs a discrete client-side prompt that can be applied to the desired cloud model without significantly compromising performance.

Wonmuk Hwang (Texas A&M Department of Biomedical Engineering) performs molecular dynamics simulations of biomolecules, a task best performed with state-of-the-art computational resources. Dr. Hwang uses ACES to investigate the mechanical response of T-cell receptors, which defend against pathogens such as influenza and SARS-CoV-2, the virus responsible for the COVID-19 pandemic. “The NVIDIA H100s are great for carrying out multiple simulations, and the HPRC staff are always helpful when troubleshooting aspects of this novel testbed,” he said.

Hanning Chen (Texas Advanced Computing Center) used ACES to conduct a Molecular Dynamics (MD) simulation of Satellite Tobacco Mosaic Virus with more than 28 million atoms. “MD simulations of large biological systems are significant because they reveal functions contributed by millions of atoms, or more,” he said, and added, “Our benchmark test with NAMD3 and a 64-node run revealed a performance of 4.8 ns/day, with an impressive 80 percent scaling factor when we increased the number of nodes from 1 to 64. ACES is a powerful tool for MD simulations, and the HPRC support team’s knowledge of this novel platform helps researchers progress quicker.”
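For readers unfamiliar with the scaling-factor figure, the arithmetic behind the quoted numbers follows from the usual definition of parallel efficiency (speedup divided by node count). The implied single-node rate below is an inference from the two quoted values, not a separately reported measurement:

    # Parallel-efficiency arithmetic for the quoted NAMD3 benchmark.
    perf_64 = 4.8        # ns/day simulated at 64 nodes (quoted)
    efficiency = 0.80    # 80 percent scaling factor (quoted)
    nodes = 64

    speedup = efficiency * nodes          # 51.2x over a single node
    implied_perf_1 = perf_64 / speedup    # ~0.094 ns/day on one node
    print(f"speedup at {nodes} nodes: {speedup:.1f}x")
    print(f"implied single-node rate: {implied_perf_1:.3f} ns/day")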

Acknowledgment

The ACES team gratefully acknowledges support from the National Science Foundation (NSF). The ACES project is supported by the Office of Advanced Cyberinfrastructure (OAC) award number 2112356. For more information about ACES, please visit https://hprc.tamu.edu/aces/.

Contact Information

Phone: 979-845-0219
Email: events@hprc.tamu.edu