The Parallel Architecture, System, and Algorithm (PASA) Lab in Electrical Engineering and Computer Science at the University of California, Merced performs research in core technologies for large-scale parallel systems. The core theme of our research is how to enable scalable and efficient execution of enterprise and scientific applications on increasingly complex large-scale parallel systems. Our work creates innovations in runtime systems, architecture, performance modeling, and programming models. We also investigate the impact of novel architectures (e.g., non-volatile memory and accelerators with massive parallelism) on the design of applications and runtimes. Our goal is to improve the performance, reliability, energy efficiency, and productivity of large-scale parallel systems. PASA is part of the High Performance Computing Systems and Architecture Group at UC Merced.
[5/2021] An NSF grant was awarded to support our research on big memory for HPC.
[4/2021] Our work on training billion-scale NLP models on heterogeneous memory is accepted into USENIX ATC'21! This is collaborative work with Microsoft.
[4/2021] Our ASPLOS'21 paper won the Distinguished Artifact Award! Only two papers won this award.
[3/2021] Four papers are accepted into ICS'21!
[3/2021] Prof. Xu Liu from NCSU will give us a talk, “Rethinking Performance Tools Research”, on April 2.
[3/2021] Our collaborative work with LLNL on an MPI fault tolerance benchmark suite was reported by HPCWire.
[3/2021] Wenqian got an internship offer! She will spend the summer at HP Labs working on scientific machine learning.
[1/2021] Our collaborative work with Microsoft on training large NLP models with heterogeneous memory drew attention from the media (see 1 and 2).
[1/2021] A paper "Tahoe: Tree Structure-Aware High Performance Inference Engine for Decision Tree Ensemble on GPU" is accepted into EuroSys'21.
[12/2020] Shuangyan Yang joined us as a PhD student. Welcome aboard, Shuangyan!
[12/2020] A paper “ArchTM: Architecture-Aware, High Performance Transaction for Persistent Memory” is accepted in FAST'21!
[12/2020] Welcome Yan Li from Western Digital! She visited us and gave a talk, “NAND Flash and its Application”.
[11/2020] A paper “Fast, Flexible and Comprehensive Bug Detection for Persistent Memory Programs” is accepted in ASPLOS'21!
[11/2020] A paper “Sparta: High-Performance, Element-Wise Sparse Tensor Contraction on Heterogeneous Memory” is accepted in PPoPP'21!
[10/2020] A paper “Sentinel: Efficient Tensor Migration and Allocation on Heterogeneous Memory Systems for Deep Learning” is accepted in HPCA'21!
[9/2020] Welcome Dr. Zhao Zhang from the Texas Advanced Computing Center! He will give a virtual talk, “Scalable Deep Learning on Supercomputers”.
[9/2020] A paper “HM-ANN: Efficient Billion-Point Nearest Neighbor Search on Heterogeneous Memory” is accepted in NeurIPS'20!
[9/2020] Congratulations to Jiawen on his internship at Facebook Research!
[8/2020] The lab has a new website! :)
[8/2020] A paper “MATCH: An MPI Fault Tolerance Benchmark Suite” is accepted in IISWC'20.
[7/2020] A paper “Exploring Non-Volatility of Non-Volatile Memory for High Performance Computing Under Failures” is accepted in Cluster’20.
[6/2020] A paper “Ribbon: High Performance Cache Line Flushing for Persistent Memory” is accepted in PACT’20.
[6/2020] A paper “Smart-PGSim: Using Neural Network to Accelerate AC-OPF Power Grid Simulation” is accepted in SC’20.
[3/2020] Congratulations to Jie Ren, Kai, Jie Liu, Jiawen, and Wenqian for their summer internships at Microsoft Research, ByteDance, Futurewei, and PNNL!
[3/2020] A paper “RIANN: Real-time Incremental Learning with Approximate Nearest Neighbor on Mobile Devices” is accepted in USENIX OpML’20.
[2/2020] A paper “Flame: A Self-Adaptive Auto-Labeling System for Heterogeneous Mobile Processors” is accepted in On-Device Intelligence Workshop at MLSys’20.
[1/2020] Dong was invited to join the IEEE Transactions on Parallel and Distributed Systems (TPDS) Review Board.