I am a Ph.D. student in Computer Architecture and Systems at the University of California, Berkeley, advised by Krste Asanović. I am also a Student Researcher at Google, where I work with Parthasarathy Ranganathan.
My research focuses on identifying and exploiting hardware-software co-design opportunities in warehouse-scale computers to improve datacenter performance, energy efficiency, and total cost of ownership. My work spans hardware accelerator and server system-on-chip design for hyperscale systems, system optimization and profiling, and agile, open-source hardware development methodologies.
I lead the FireSim project, which enables cycle-accurate simulation of thousand-node clusters interconnected by high-performance networks using FPGAs in the cloud. FireSim allows us to prototype an entire datacenter, with full control over the compute hardware (simulated from RTL), the network, and the software (complete operating systems and applications) (see the ISCA '18 paper for more). FireSim was selected as one of IEEE Micro's "Top Picks from Computer Architecture Conferences" for 2018, as the CACM Research Highlights Nominee from ISCA 2018, and for the ISCA@50 25-year Retrospective 1996-2020 collection.
FireSim is open-source on GitHub and includes extensive documentation. FireSim has been used in over 40 publications by authors at over 20 academic and industrial institutions, spanning computer architecture, systems, networking, security, scientific computing, circuits, design automation, and more (see User Publications on the FireSim website). FireSim has also been used in the development of commercially available silicon.
Our FirePerf paper at ASPLOS 2020 added new out-of-band performance profiling features to FireSim, facilitating rapid improvements in networking performance on RISC-V server SoCs, including commercially available products.
I also work on techniques to address system-level overheads in the hyperscale/WSC context (the “datacenter tax”). Our paper at MICRO 2021 presented a detailed study of one of these overheads in Google’s datacenter fleet, Protocol Buffers serialization and deserialization. As part of this paper, we also produced HyperProtoBench, an open-source benchmark representative of key protobuf-user services at Google and an open-source hardware accelerator for Protocol Buffers. This paper won the Distinguished Artifact Award at MICRO 2021 and was selected as an Honorable Mention in IEEE Micro’s “Top Picks from Computer Architecture Conferences” for 2021. Our recent paper at ISCA 2023 addresses another common tax in hyperscale systems: general-purpose lossless compression and decompression.
Additional publications and projects can be found on my publications page. Older projects can be found on the archives page.
I have also been a lecturer and frequent TA for Berkeley's CS61C, a sophomore-level computer architecture/systems course, and have interned at Google and SiFive. I received a B.S. in Electrical Engineering and Computer Sciences and an M.S. in Computer Science from Berkeley.
The best way to reach me is at sagark at eecs dot berkeley dot edu.