The Parallel Architecture, System, and Algorithm (PASA) Lab in the Department of Electrical Engineering and Computer Science at the University of California, Merced performs research in core technologies for large-scale parallel systems. The central theme of our research is how to enable scalable and efficient execution of enterprise and scientific applications on increasingly complex large-scale parallel systems. Our work creates innovations in runtime systems, architecture, performance modeling, and programming models; we also investigate the impact of novel architectures (e.g., non-volatile memory and accelerators with massive parallelism) on the design of applications and runtimes. Our goal is to improve the performance, reliability, energy efficiency, and productivity of large-scale parallel systems. PASA is part of the High Performance Computing Systems and Architecture Group at UC Merced.
[9/2021] A paper, "Flame: A Self-Adaptive Auto-Labeling System for Heterogeneous Mobile Processors," is accepted into SEC'21.
[9/2021] Thanks to ANL and SK Hynix for supporting our research on machine learning systems and big memory!
[8/2021] Welcome new PhD student, Dong Xu :-)
[6/2021] Dong was invited to give a talk and serve as a panelist at the 2nd Workshop on Heterogeneous Memory Systems (HMEM).
[6/2021] A paper, "Fauce: Fast and Accurate Deep Ensembles with Uncertainty for Cardinality Estimation," is accepted into VLDB'21.
[6/2021] Dong was selected to be an associate editor for IEEE Transactions on Parallel and Distributed Systems (TPDS).
[5/2021] An NSF grant is funded to support our research on big memory for HPC.
[5/2021] Dong was invited to give a keynote at the Eleventh International Workshop on Accelerators and Hybrid Emerging Systems (AsHES).
[4/2021] Our work on training billion-scale NLP models on heterogeneous memory is accepted into USENIX ATC'21! This is joint work with Microsoft.
[4/2021] Our ASPLOS'21 paper won the distinguished artifact award! Only two papers won this award.
[3/2021] Four papers are accepted into ICS'21!
[3/2021] Prof. Xu Liu from NCSU will give a talk, "Rethinking Performance Tools Research," on April 2.
[3/2021] Our collaborative work with LLNL on an MPI fault tolerance benchmark suite was covered by HPCWire.
[3/2021] Wenqian got an internship offer! She will work at HP Labs this summer on scientific machine learning.
[1/2021] Our collaborative work with Microsoft on training large NLP models with heterogeneous memory has drawn media attention (see 1 and 2).
[1/2021] A paper, "Tahoe: Tree Structure-Aware High Performance Inference Engine for Decision Tree Ensemble on GPU," is accepted into EuroSys'21.
[12/2020] Dong was invited to give a talk at IBM Research Almaden, titled "Memory Management in Heterogeneous Memory Systems: Case Studies with Machine Learning Workloads".
[12/2020] Shuangyan Yang joined us as a PhD student. Welcome aboard, Shuangyan!
[12/2020] A paper, "ArchTM: Architecture-Aware, High Performance Transaction for Persistent Memory," is accepted into FAST'21!
[12/2020] Yan Li from Western Digital visited us and gave a talk, "NAND Flash and its Application".
[11/2020] A paper, "Fast, Flexible and Comprehensive Bug Detection for Persistent Memory Programs," is accepted into ASPLOS'21!
[11/2020] A paper, "Sparta: High-Performance, Element-Wise Sparse Tensor Contraction on Heterogeneous Memory," is accepted into PPoPP'21!
[10/2020] Dong was invited to give a talk at NCSU, titled "Is Big Memory Useful for HPC Applications? A Case Study with Molecular Dynamics Simulation".
[10/2020] A paper, "Sentinel: Efficient Tensor Migration and Allocation on Heterogeneous Memory Systems for Deep Learning," is accepted into HPCA'21!
[10/2020] Wenqian's SC'20 work is highlighted by the U.S. Department of Energy!
[10/2020] Dong was invited to give a talk at Intel, titled "Performance Optimization of ANN on Optane-based Heterogeneous Memory".