Publications

  1. A. C. Yüzügüler, C. Sönmez, M. Drumond, Y. Oh, B. Falsafi, and P. Frossard, "Scale-Out Systolic Arrays," ACM Transactions on Architecture and Code Optimization, Volume 20, Issue 2, 2023.
  2. S. B. Harma, A. Chakraborty, B. Falsafi, M. Jaggi, and Y. Oh, "Accuracy Boosters: Epoch-Driven Mixed-Mantissa Block Floating-Point for DNN Training," arXiv preprint arXiv:2211.10737, 2022.
  3. M. Drumond, L. Coulon, A. Pourhabibi, A. C. Yüzügüler, B. Falsafi, and M. Jaggi, "Equinox: Training (for Free) on a Custom Inference Accelerator," 54th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 2021.
  4. S. B. Harma, M. Drumond, and B. Falsafi, "Numerical Encoding for DNN Training," Computer Architecture Today, sigarch.org, 2021.
  5. T. Lin, S. U. Stich, L. Barba, D. Dmitriev, and M. Jaggi, "Dynamic Model Pruning with Feedback," International Conference on Learning Representations (ICLR), 2020.
  6. A. Koloskova, T. Lin, S. U. Stich, and M. Jaggi, "Decentralized Deep Learning with Arbitrary Communication Compression," International Conference on Learning Representations (ICLR), 2020.
  7. T. Lin, S. U. Stich, K. K. Patel, and M. Jaggi, "Don't Use Large Mini-Batches, Use Local SGD," International Conference on Learning Representations (ICLR), 2020.
  8. A. C. Yüzügüler, F. Celik, M. Drumond, B. Falsafi, and P. Frossard, "Analog Neural Networks for Deep-Submicron Nonlinear Synapses," IEEE Micro special issue on ML Accelerators, 2019.
  9. M. Drumond, T. Lin, M. Jaggi, and B. Falsafi, "Training DNNs with Hybrid Block Floating Point," Proceedings of the Thirty-Second Conference on Neural Information Processing Systems (NeurIPS), 2018.