
Welcome to the Parallel Architecture, System, and Algorithm Lab


The Parallel Architecture, System, and Algorithm (PASA) Lab in the Department of Electrical Engineering and Computer Science at the University of California, Merced performs research in core technologies for large-scale parallel systems. The central theme of our research is how to enable scalable and efficient execution of applications (especially machine learning and artificial intelligence workloads) on increasingly complex large-scale parallel systems. Our work creates innovations in runtime systems, architecture, performance modeling, and programming models. We also investigate the impact of novel architectures (e.g., CXL-based memory and accelerators with massive parallelism) on the design of applications and runtimes. Our goal is to improve the performance, reliability, energy efficiency, and productivity of large-scale parallel systems. PASA is part of the High Performance Computing Systems and Architecture Group at UC Merced.
See our Research and Publications pages for more information about our work. For information about our group members, see our People page.
[12/2022] Welcome new PhD students, Bin Ma and Jianbo Wu!
[11/2022] Dong Li was invited as a panelist for the "AI for HPC" panel at the Seventh International Workshop on Extreme Scale Programming Models and Middleware, held in conjunction with SC'22.
[11/2022] Dong Li co-chaired the AI for Scientific Applications workshop held in conjunction with SC'22.
[11/2022] Thanks to Lawrence Livermore National Laboratory (LLNL) for supporting our research on heterogeneous memory!
[11/2022] Thanks to Oracle for supporting our research on large-scale AI model training!
[11/2022] We held a group meeting with the DArchR Lab@UC Davis to discuss research collaboration.
[11/2022] A paper, "Merchandiser: Data Placement on Heterogeneous Memory for Task-Parallel HPC Applications with Load-Balance Awareness," was accepted to PPoPP'23.
[11/2022] Prof. Jian Huang@UIUC visited us and gave a talk "The ISA of Future Storage Systems: Interface, Specialization and Approach".
[9/2022] A paper on using heterogeneous memory to train GNNs (titled "Betty: Enabling Large-Scale GNN Training with Batch-Level Graph Partitioning") was accepted to ASPLOS'23.
[9/2022] Welcome Prof. Tsung-Wei Huang@Utah, who visited us.
[8/2022] Welcome new PhD student, Jin Huang!
[8/2022] Thanks to SK hynix for supporting our research on heterogeneous memory.
[6/2022] A paper on mixed-precision AI model training was accepted to ATC'22.
[5/2022] Dong Xu and Jie Liu will go to SK hynix and Tencent for summer internships. Congratulations!
[4/2022] Congratulations to my student, Jie Ren. She will join the College of William & Mary as a tenure-track assistant professor in the fall!
[4/2022] Congratulations to my student, Wenqian Dong. She will join Florida International University as a tenure-track assistant professor in the fall!
[4/2022] Congratulations to my student, Jiawen Liu. He will join Meta (Facebook) AI Research.
[4/2022] A paper, "Campo: A Cost-Aware and High-Performance Mixed Precision Optimizer for Neural Network Training," was accepted to ATC'22.
[10/2021] Dong gave a talk on large AI model training at ECE@UMass.
[10/2021] Thanks to Facebook and Western Digital for supporting our research on big memory!
[9/2021] A paper, "Flame: A Self-Adaptive Auto-Labeling System for Heterogeneous Mobile Processors," was accepted to SEC'21.
[9/2021] Thanks to ANL and SK hynix for supporting our research on machine learning systems and big memory!
[8/2021] Welcome new PhD student, Dong Xu :-)
[6/2021] Dong was invited to give a talk and serve as a panelist at the 2nd Workshop on Heterogeneous Memory Systems (HMEM).
[6/2021] A paper, "Fauce: Fast and Accurate Deep Ensembles with Uncertainty for Cardinality Estimation," was accepted to VLDB'21.
[6/2021] Dong was selected to be an associate editor for IEEE Transactions on Parallel and Distributed Systems (TPDS).
[5/2021] We received an NSF grant to support our research on big memory for HPC.
[5/2021] Dong was invited to give a keynote at the Eleventh International Workshop on Accelerators and Hybrid Emerging Systems (AsHES).
[4/2021] Our work on training billion-scale NLP models on heterogeneous memory was accepted to USENIX ATC'21! This is joint work with Microsoft.
[4/2021] Our ASPLOS'21 paper won the Distinguished Artifact Award! Only two papers won this award.
[3/2021] Four papers were accepted to ICS'21!
[3/2021] Prof. Xu Liu from NCSU will give us a talk "Rethinking Performance Tools Research" on April 2.
[3/2021] Our collaborative work with LLNL on an MPI fault tolerance benchmark suite was covered by HPCWire.