We are thrilled to share that Professor Zhang is among the esteemed recipients of Intel's 2023 Outstanding Researcher Awards! The award recognizes the exceptional contributions made through Intel university-sponsored research that help further Intel’s mission of creating world-changing technology that improves the lives of everyone on the planet. More details available here.
Published on 4/19/2024.
In collaboration with Colorado State University, University of Illinois Chicago, CITIC, and Universidade da Coruña,
our paper titled "Formal Verification of Source-to-Source Transformations for HLS" has been recognized as the best paper at FPGA'24!
This work introduces a novel approach to verifying the correctness of fundamental and advanced source-to-source transformations for HLS,
proposing a hybrid symbolic analysis that scales to practical problem sizes and is robust to a rich set of HLS transformations.
Congratulations to Louis-Noël Pouchet and all co-authors including Niansong Zhang and Hongzheng Chen!
Paper Link
Published on 3/7/2024.
Congratulations to Dr. Chenhui Deng for successfully defending his thesis, "Accurate and Efficient Representation Learning on Large-Scale Graphs"! Chenhui joined the group in 2018 and has been a key contributor to the group's research in graph learning and EDA. His next stop is NVIDIA Research. Best of luck, Chenhui!
Published on 2/28/2024.
Congratulations to Dr. Yichi Zhang for successfully defending his thesis entitled "Co-Design of Binarized Deep Learning"! Since joining the group in 2018, Yichi has published multiple high-quality papers at major conferences and led us into the fantastic world of binarization. He will be joining Google soon. Wish you all the best, Yichi!
Published on 10/27/2023.
Yixiao Du is being recognized for his great work as a teaching assistant for ECE 2300 during
fall 2022: he is this year's sole recipient of the ECE Outstanding PhD TA Award! Yixiao joined
the Zhang Research Group in the summer of 2020 on the MS/PhD track. Since then, Yixiao has been
applying his expertise in FPGA hardware accelerators for the benefit of his peers, holding
discussions and office hours that help break down many of the difficult topics of the course.
Yixiao took the group for lunch with his prize earnings. Thanks Yixiao!
Published on 05/06/2023.
On Tuesday, May 2, 2023, Samantha Cobado and the rest of the M.Eng. cohort shared their research achievements spanning many different fields: AI, bioinformatics, information theory, computer systems, and power electronics, among others. Samantha's poster, "Optimizing Binary Convolution for Compute-In-SRAM Accelerator," won "Best in AI" alongside another poster. Cobado's work demonstrates the optimization opportunities of binarized neural networks. Her poster is shared below.
Optimizing Binary Convolution for Compute-In-SRAM Accelerator (Poster, 2023)
Published on 05/03/2023.
The latest update to HeteroCL, our group's programming infrastructure for heterogeneous computing, has been released. We are proud to announce that HeteroCL has undergone a complete migration from Halide IR to the MLIR ecosystem. Included in this update are a brand-new Python frontend with the HeteroCL AST, an IR system built around the newly designed HeteroCL MLIR dialect, an LLVM CPU backend, and a Vivado HLS backend. This release marks a major milestone for the future of HeteroCL, allowing future work to build on the highly extensible MLIR infrastructure. HeteroCL is now even more versatile, efficient, and stable, making it an excellent tool for high-performance hardware design and heterogeneous programming. We encourage all users, both new and existing, to try out HeteroCL v0.5 here and the HeteroCL MLIR dialect here.
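For readers new to HeteroCL, the sketch below shows the typical placeholder/compute/schedule flow from earlier HeteroCL releases; it is illustrative only, the v0.5 frontend may differ in details, so please consult the repository for authoritative examples.

```python
# Minimal HeteroCL sketch (illustrative; assumes the placeholder/compute/
# create_schedule/build primitives of earlier releases carry over to v0.5).
import numpy as np
import heterocl as hcl

hcl.init(hcl.Int(32))  # default datatype for the program

def vector_add(A, B):
    # Declarative, elementwise compute op; the algorithm is decoupled from the schedule.
    return hcl.compute(A.shape, lambda i: A[i] + B[i], "C")

A = hcl.placeholder((32,), "A")
B = hcl.placeholder((32,), "B")
s = hcl.create_schedule([A, B], vector_add)

f = hcl.build(s)  # default target: the LLVM CPU backend
# hcl.build(s, target="vhls") would instead emit Vivado HLS C++ (per the docs).

hcl_A = hcl.asarray(np.arange(32))
hcl_B = hcl.asarray(np.ones(32, dtype=np.int32))
hcl_C = hcl.asarray(np.zeros(32, dtype=np.int32))
f(hcl_A, hcl_B, hcl_C)
print(hcl_C.asnumpy())
```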
Published on 2/28/2023.
Electrical and computer engineering faculty from Cornell Engineering hold key positions in the newly announced ACE Center for Evolvable Computing, a Joint University Microelectronics Program 2.0 (JUMP 2.0) initiative sponsored by the Semiconductor Research Corporation. Our group will lead a research theme that focuses on improving the efficiency and usability of domain-specific hardware accelerators in distributed systems by enhancing the programmability and scalability of the accelerators. Read more here.
Published on 1/10/2023.
Prof. Zhiru Zhang joins the newly elevated class of 2023 of IEEE Fellows for his contributions to FPGA high-level synthesis and machine learning accelerators. Since co-founding AutoESL (whose HLS tool became Vivado HLS) back in 2006, Prof. Zhang continues to pave the way in design automation of accelerators for heterogeneous computing platforms. IEEE Fellow membership is reserved for those with distinguished contributions to their respective fields, and fewer than 0.1% of voting members are elevated each year. This comes after Prof. Zhang also received the Michael Tien '72 Excellence in Teaching Award in September. Read more here.
Published on 12/06/2022.
Chenhui Deng and Prof. Zhiru Zhang, along with their co-authors Xiuyu Li and Zhuo Feng, will publish their work, GARNET, as part of this year's proceedings of the Learning on Graphs Conference (LoG). GARNET is a new method for increasing the resilience of graph neural networks against adversarial attacks without degrading performance on the clean graph structures used for training. Their technique shows significant speedups and accuracy improvements over prior work, earning the work a spotlight presentation at the conference. A recording of Chenhui's talk can be watched here.
GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks
Published on 11/24/2022.
Congratulations to Ecenur Ustun, who successfully defended her thesis: “Learning-Assisted Techniques for Agile Arithmetic Design on FPGAs.” Since joining the group in 2016, Ecenur has had a steady output of publications and received several honors and awards in recognition of her hard work. Now Dr. Ecenur Üstün will be joining Meta Reality Labs.
Published on 10/13/2022.
Chenhui Deng (left) and Andrew Butt (right) have been awarded a 2022 Qualcomm Innovation Fellowship (QIF) for their proposal titled "Power Inference with Self-Supervised Learning". Chenhui and Andrew's proposal focuses on improving the speed and accuracy of power inference, a critical part of today's electronic design automation (EDA) tools. The QIF program recognizes and rewards innovative Ph.D. students from various research areas. This year, only 19 proposals were selected from over 100 submissions. Well done!
More detailed article from Cornell Chronicle
Published on 09/15/2022.
The 30th IEEE International Symposium on Field-Programmable Custom Computing Machines, the first hybrid conference in the field, was held May 15th to 18th at Cornell Tech in New York City. Ecenur Ustun presented her work IMpress: Large Integer Multiplication Expression Rewriting for FPGA HLS. Professor Zhang served as the General Chair, Debjit Pal served as the Local Arrangements Chair, and Jordan Dotzel helped plan and run large portions of the event. A total of twelve volunteers from the Zhang lab attended the conference in person to ensure it ran smoothly.
Published on 05/26/2022.
Professor Zhiru Zhang and Professor Jason Cong have been recognized by the TCFPGA Hall of Fame for their DAC'06 paper titled "An Efficient and Versatile Scheduling Algorithm Based on SDC Formulation". Their paper was inducted into the Class of 2022 for being "one of the milestones in the development of high-level synthesis used heavily in industry and academia". Many tools, including Vivado HLS, are built on the scheduling algorithm proposed in this paper. "The TCFPGA Hall of Fame for FPGAs and Reconfigurable Computing recognizes the most significant peer-reviewed publications in the field, highlights key contributions, and represents the body of knowledge that has accumulated over the past 30 years. The ACM SIGDA Technical Committee on FPGAs and Reconfigurable Computing is a technical committee of the Design Automation Special Interest Group, which was formed to promote the FPGA and reconfigurable computing community." Any past journal or conference publication on FPGAs and reconfigurable computing is eligible for this distinction, and each year 2-3 papers are inducted.
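For readers less familiar with the technique, the core idea of SDC (system of difference constraints) scheduling, sketched below in our own notation rather than the paper's, is to give each operation an integer schedule variable and to express every scheduling requirement as a two-variable difference constraint; the resulting constraint matrix is totally unimodular, so the linear program can be solved efficiently with integral optimal solutions.

```latex
% Sketch of an SDC scheduling formulation (our notation, for illustration only).
\begin{align*}
\min\;        & \textstyle\sum_{v} c_v\, s_v
              && \text{e.g., (weighted) latency of the schedule}\\
\text{s.t.}\; & s_v - s_u \ge d_u
              && \text{for each data dependence } u \to v \text{ with latency } d_u,\\
              & s_u - s_v \le C_{uv}
              && \text{for timing, latency, and relative ordering requirements.}
\end{align*}
```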
Published on 03/01/2022.
In collaboration with Professor Jason Cong's group at UCLA, AMD Xilinx, and Ghent University, our paper RapidStream: Parallel Physical Implementation of FPGA HLS Designs received the Best Paper Award at the 30th International Symposium on Field-Programmable Gate Arrays (FPGA 2022). RapidStream shortens FPGA compile time by nearly an order of magnitude by integrating HLS pipelining with physical implementation and enabling parallel compilation. Congratulations to Licheng Guo (lead author and developer) and all the co-authors! Details of the conference can be found at https://www.isfpga.org/program/.
Published on 03/01/2022.
Our undergraduate Xiuyu Li received an honorable mention for the 2022 CRA Outstanding Undergraduate Researcher award. This award program recognizes undergraduate students in North America who show outstanding potential in computing research. For more information, please visit the CRA website.
Published on 01/19/2022.
Congratulations to all the authors of HeteroFlow and HiSparse for publishing their work at the International Symposium on Field-Programmable Gate Arrays (FPGA) 2022! HeteroFlow addresses the challenges of optimizing data placement on software-defined FPGAs. HiSparse presents approaches for implementing high-performance sparse accelerators on HBM-equipped FPGAs.
Published on 01/16/2022.
In March, Yuan successfully passed his PhD defense: "Trace-based Learning for Agile Hardware Design and Design Automation". Yuan was one of the early members of the group and was well known for his consistent research output, dedication, and mentorship of new PhD students. He took his talents to Amazon AWS, where he will be working as an applied scientist. In December, Sean passed his PhD defense. Sean was known for his discipline and his mentorship of dozens of students (undergrads, MEng, and others), especially those working on HeteroCL. He was also the first person from the group to receive a best paper award from a top conference. He will be joining Amazon AWS to continue his research with a considerably larger salary. Both Sean and Yuan will be missed by the group!
Published on 12/18/2021.
We are excited to share the news that our group will be part of Panorama, a new collaborative project funded by NSF (with a $5M grant) to accelerate computational pangenomics using a hardware/software codesign approach. If you would like to read more about this project, computational pangenomics, and our research collaborators, please have a look at this recent article.
Published on 09/16/2021.
Congratulations to Yuwei, Yixiao, and Ecenur for publishing their work on GraphLily in ICCAD 2021. Their work represents the first FPGA overlay for graph processing. It supports a rich set of graph algorithms by adopting the GraphBLAS programming interface and formulating the graph algorithms as sparse linear algebra kernels. It utilizes the high bandwidth of HBM to achieve high performance for memory-bound sparse kernels by co-designing the data layout and the accelerator architecture. Evaluations show the advantages of GraphLily over competitive CPU and GPU graph processing systems; GraphLily also outperforms existing single-purpose graph accelerators on FPGAs. More details can be found in the paper or the corresponding YouTube video.
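To illustrate the sparse-linear-algebra formulation that GraphLily builds on (the sketch below uses plain SciPy on a CPU, not the GraphLily or GraphBLAS APIs), a traversal such as BFS can be expressed as repeated sparse matrix-vector products over the graph's adjacency matrix.

```python
# Illustrative only: plain SciPy, not the GraphLily/GraphBLAS API.
# BFS written as repeated sparse matrix-vector products (SpMV), the kind of
# memory-bound kernel a GraphBLAS-style overlay accelerates.
import numpy as np
import scipy.sparse as sp

def bfs_levels(adj: sp.csr_matrix, source: int) -> np.ndarray:
    """adj[i, j] != 0 means an edge i -> j; returns the BFS level of each vertex."""
    n = adj.shape[0]
    levels = np.full(n, -1)              # -1 marks unvisited vertices
    frontier = np.zeros(n, dtype=bool)
    frontier[source] = True
    level = 0
    while frontier.any():
        levels[frontier] = level
        # One SpMV step: a vertex is reached if any frontier vertex points to it.
        reached = adj.T.dot(frontier.astype(np.int32)) > 0
        frontier = reached & (levels == -1)   # keep only unvisited vertices
        level += 1
    return levels

# Tiny example: path graph 0 -> 1 -> 2
edges = sp.csr_matrix(np.array([[0, 1, 0],
                                [0, 0, 1],
                                [0, 0, 0]]))
print(bfs_levels(edges, 0))   # expected: [0 1 2]
```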
Published on 10/28/2021.
We welcome Andrew Butt, Hongzheng Chen, and Jiajie Li into our group as MS/PhD students, and Niansong Zhang and Dingyi Dai as MS students! Andrew completed his undergraduate studies at the University of Pennsylvania, and his current research interests are in high-level synthesis and FPGA place-and-route tools. Hongzheng completed his undergraduate studies at Sun Yat-sen University, and his research interests include heterogeneous computing, domain-specific compilers, and computer systems for big data and machine learning. Jiajie completed his undergraduate studies at Tsinghua University, and his research interests lie in systems and hardware optimization for machine learning tasks, with a main focus on graph learning applications.
Published on 09/16/2021.
Prof. Zhiru Zhang gave a plenary talk titled "Faster, Slimmer, Smarter: Machine Learning for Agile Hardware Specialization" on September 3, 2021 at MLCAD 2021. The talk focused on our group's recent progress in using machine learning (ML) to automate critical steps in the digital design process, achieving agile hardware specialization for a broad range of emerging applications with improved compute performance and energy efficiency. A recording of the talk is available here.
Published on 09/06/2021.
Congratulations to Chenhui and Yaohui for publishing their work on SPADE: A Spectral Method for Black-Box Adversarial Robustness Evaluation at ICML 2021! This work is in collaboration with Wuxinlin and Zhiqiang from Professor Zhuo Feng's group at the Stevens Institute of Technology. In this work, we propose a black-box metric called SPADE to measure the adversarial robustness of machine learning (ML) models. Moreover, we further extend SPADE to evaluate the robustness of individual input samples, which is then used to guide applications such as adversarial training. Our experiments demonstrate that SPADE consistently reveals model non-robustness, and SPADE-guided adversarial training achieves a 1.82% adversarial accuracy improvement over vanilla adversarial training on CIFAR-10. More details can be found in the paper or the corresponding YouTube video.
Published on 7/9/2021.
We gave a tutorial "Productive Construction of High-Performance Systolic Arrays on FPGAs" at FCCM 2021 (May 12th). The tutorial was co-organized by Prof. Jason Cong from UCLA and Dr. Hongbo Rong from Intel Labs. In this tutorial, we presented some of our latest efforts on generating high-performance systolic arrays using AutoSA, T2S, and HeteroCL, along with short demos. For more details, please refer to the FCCM official website and our tutorial homepage.
Published on 05/13/2021.
With Prof. Callie Hao (Georgia Tech) taking the lead, Jordan recently helped publish a survey on Edge AI design methodologies in IEEE Design & Test. It covers edge AI challenges, model design methodologies, software-hardware co-design, and AI benchmarking, among other topics, including the latest techniques for model design, model compression, and adaptive inference. It especially highlights opportunities for optimization across technology layers to boost quality of results. We would like to thank the other authors, Jinjun Xiong, Luca Benini, and Deming Chen, for their numerous contributions. The paper can be found on IEEE Xplore and arXiv.
Published on 04/05/2021.
Congratulations to Yuan Zhou on having his work, Distilling Arbitration Logic from Traces using Machine Learning: A Case Study on NoC, accepted to the 58th Design Automation Conference (DAC). This work explores approaches to optimizing the arbitration logic of a network-on-chip router using machine learning models, together with an efficient way to implement the desired model in hardware.
Published on 03/29/2021.
Nikita and Shaojie will present their work Dagger at ASPLOS in July this year. Dagger further extends specialized programmable network adapters, offloading cloud RPC stacks to reconfigurable hardware. In contrast to previous proposals, its programmable FPGA-based NIC features full networking offload up to the application layer, reconfigurability, and close coupling with the host processor over a memory interconnect. We show that the combination of these three principles improves both the end-to-end latency and the throughput of cloud RPC stacks while providing the same level of flexibility and abstraction as software-only systems. More information can be found here.
Published on 03/15/2021.
In collaboration with Professor Jason Cong's group at UCLA, our paper AutoBridge: Coupling Coarse-Grained Floorplanning and Pipelining for High-Frequency HLS Design on Multi-Die FPGAs received the Best Paper Award at the 29th International Symposium on Field-Programmable Gate Arrays (FPGA 2021). Congratulations to all the authors: Licheng Guo, Yuze Chi, Jie Wang, Jason Lau, Weikang Qiao, Ecenur Ustun, Professor Zhiru Zhang, and Professor Jason Cong. Details of the conference can be found at https://isfpga.org/program/.
Published on 03/04/2021.
We welcome Yaohui Cai into our group as an MS/PhD student! He completed his undergraduate studies at Peking University, and his previous works on ZeroQ and HAWQ-V2 have helped forge new directions in DNN quantization. His current research interests focus on improving the efficiency of machine learning models and complement our current research efforts well. More information can be found on his personal website.
Published on 02/15/2021.
Our paper FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with Fractional Activations was nominated as a best paper candidate at the 29th International Symposium on Field-Programmable Gate Arrays (FPGA 2021). Congratulations to all the authors!
Published on 01/13/2021.
Jiajia Jiao and Debjit Pal will present a paper on the application of graph learning to instruction vulnerability estimation at the 24th Design, Automation and Test in Europe (DATE'21) conference. The conference will be held online during Feb 1-5, 2021. This paper presents GLAIVE, a graph learning-assisted model for fast, accurate, and transferable estimation of soft-error-induced instruction vulnerability, leveraging the synergy between static analysis and data-driven statistical reasoning. This year, DATE received a large number of submissions involving more than 2,600 people from 33 countries, with a regular-paper acceptance rate of 24%.
Published on 11/24/2020.
In the ECE Colloquium Series at the University of Minnesota on Nov. 19, Prof. Zhiru Zhang gave a
talk titled
"Accelerator Synthesis for Agile Hardware Specialization: A New Dawn".
Both academia and industry are seeing an increasing use of high-level synthesis (HLS)
to automatically generate specialized hardware accelerators from software programs,
since these accelerators are able to achieve better compute performance and energy efficiency
for a plethora of emerging applications.
However, a more widespread adoption of HLS is currently held back
by its deficiencies in quality of results (QoR) and ease of programming.
This talk covers some of the recent progress our research group made
on improving the QoR as well as the programming abstraction of HLS.
Event
website.
Published on 11/19/2020.
At the Samsung Forum held on Oct 27th, 2020, Prof. Zhiru Zhang gave a talk titled "Design and Design Automation for Efficient ML Hardware Specialization". The talk focused on our recent research on ML hardware specialization, where we investigate both new hardware-friendly ML algorithms and design automation for ML hardware accelerators. Recording of the talk is available here.
Published on 10/27/2020.
Ecenur Ustun has been selected for Rising Stars in EECS 2020. Rising Stars brings together top graduate and postdoc women in EECS who are interested in pursuing academic careers. The event was launched by MIT in 2012, and this year it is organized by UC Berkeley. For more information, please visit the Rising Stars 2020 website.
Published on 10/15/2020.
Yichi and Jordan presented at SRC TECHCON 2020 this year. Yichi presented his work on precision gating, while Jordan presented his work on overwrite quantization. Jordan received a student presentation award for his talk; these awards were given to only 10 of the 160 research presentations this year. We thank SRC for the opportunity; they made the best of the current at-home situation by shipping high-quality t-shirts (pictured to the right) and face masks to all the student participants.
Published on 09/20/2020.
Prof. Zhang gave the opening keynote at the 2020 International Workshop on Logic & Synthesis (IWLS), which discussed the benefits of HLS for developing ASICs and positioned HeteroCL, a Python-based DSL, as a higher-level tool for further improving programmer efficiency during the development process. At the same time, our students Yuan Zhou, Yichi Zhang, and Jordan Dotzel received an award and gave an invited talk for being among the winning solutions in the IWLS programming contest, which focused on learning logic circuits directly from data sampled from general Boolean functions.
Published on 08/02/2020.
Our group members Ecenur Ustun and Yi-Hsiang Lai will present two papers at the 39th International Conference on Computer-Aided Design (ICCAD'20). This year, ICCAD saw a 20% increase in total submissions, with 471 papers, and an acceptance rate of 27%.
Published on 07/17/2020.
Our group received a research award from Facebook for the codesign of near-data graph learning systems. This was part of the recent focus from Facebook in the area of AI System Hardware/Software Co-design, and this year they had 132 proposals to choose from. This award will allow our group to continue our research in the direction of efficient and scalable graph learning systems.
Published on 07/09/2020.
Nitish Srivastava will present his paper at the 53rd International Symposium on Microarchitecture, which will be held during Oct. 17-20. Due to the COVID-19 pandemic, the MICRO 2020 edition will be a global online event. The Athens edition of MICRO has been rescheduled for 2021. See links to the paper below.
Published on 07/08/2020.
Yuwei Hu will present his paper at the 33rd International Conference for High Performance Computing, Networking, Storage, and Analysis (SC'20), which will be held online during Nov. 16-19. This paper analyzes the inefficiency of existing deep learning frameworks and graph processing frameworks in handling graph neural networks, and proposes a solution. See links to the paper below.
Published on 06/30/2020.
Congratulations to Dr. Nitish Srivastava for successfully defending his thesis titled "Design and Generation of Efficient Hardware Accelerators for Tensor Computations". He is joining the edge TPU group at Google as a software engineer.
Published on 01/15/2020.
Our group members Chenhui Deng and Yichi Zhang will present two papers at the 8th International Conference on Learning Representations (ICLR). This year, ICLR saw a 63% increase in total submissions, with 2,594 papers, and the acceptance rate decreased from 31.4% to 26.5%. See links to the papers below.
Published on 12/19/2019.
Professor Zhang gave two invited talks, one at the TVM Conference 2019 and one at NeurIPS'19, focusing respectively on the HeteroCL programming framework for productive hardware specialization and on algorithm-accelerator co-design for neural networks.
Published on 12/13/2019.
Professor Zhang received the Ruth and Joel Spira Award for Excellence in Teaching. The award is presented annually to an individual faculty member who has excelled in teaching and inspiring students during a particular academic year. See Cornell ECE news.
Published on 11/19/2019.
We would like to welcome new Ph.D. students Jie Liu, Jordan Dotzel, and Nikita Lazarev, and new postdoc Debjit Pal to our research group.
Published on 09/01/2019.
Congratulations to Dr. Ritchie Zhao for successfully defending his thesis titled “Co-Designing Model Compression Algorithms and Hardware Accelerators for Efficient Deep Learning”. He is joining Microsoft to start his career.
Published on 08/26/2019.
The 30th IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP'2019) took place between July 15-17 at Cornell Tech, NYC [link]. Prof. Zhang served as the General Chair and multiple members from Zhang Research Group participated in the organization of the conference as student volunteers.
Published on 07/17/2019.
Congratulations to Dr. Zhenghong (John) Jiang for starting his new position at Cadence in San Jose, CA, and Dr. Cunxi Yu for starting his faculty career at the University of Utah, Salt Lake City, UT.
Published on 07/10/2019.
Our group members Ritchie Zhao and Jordan Dotzel presented the papers Improving Neural Network Quantization without Retraining using Outlier Channel Splitting at the 36th International Conference on Machine Learning (ICML'19) and Building Efficient Deep Neural Networks with Unitary Group Convolutions at the Conference on Computer Vision and Pattern Recognition (CVPR'19), both held in Long Beach, CA. This year, ICML received 3,424 initial submissions, and 774 of them were accepted. All talks at the conference can be found here. The total number of papers submitted to CVPR increased significantly, by 1,857 compared to last year, while the acceptance rate decreased from 30% to 25%.
Published on 06/21/2019.
Our group presented 5 papers at the 56th Design Automation Conference (DAC'19) in Las Vegas, NV. See the papers listed below.
Prof. Zhang also co-organized the ML tutorial at the conference.
Published on 06/07/2019.
Our group members Ecenur Ustun and Nitish Srivastava presented two papers at the 27th IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM’19) in San Diego, CA. See links to their videos below.
Published on 05/27/2019.
Prof. Zhiru Zhang received a 2018 Google Faculty Research Award for his project proposal “Automatic Synthesis for Programmable Hardware Specialization”. The project aims to develop a new compilation framework that can automatically synthesize a high-quality programmable hardware accelerator from instruction set specifications. See news on Cornell Chronicle.
The Google Faculty Research Awards Program’s goal is to recognize cutting-edge research in mutual areas of interest and to identify and strengthen long-term collaborative relationships with faculty working on problems that will impact how future generations use technology. Prof. Zhang is one of ten CIS and Engineering professors at Cornell who received this award in 2018. See an article in the Cornell Daily Sun.
Published on 04/04/2019.
Prof. Zhiru Zhang and his co-authors Yi-Hsiang Lai, Yuze Chi, Yuwei Hu, Jie Wang, Cody Hao Yu, Yuan Zhou, and Prof. Jason Cong have received the Best Paper Award at the 27th ACM/SIGDA International Symposium on Field-Programmable Gate Arrays held in Seaside, CA, February 24-26, 2019. Their paper, "HeteroCL: A Multi-Paradigm Programming Infrastructure for Software-Defined Reconfigurable Computing," results from a collaborative project between Prof. Zhang’s research group at Cornell and Prof. Cong’s group at UCLA. HeteroCL is a highly productive programming infrastructure for heterogeneous platforms integrating CPUs and hardware accelerators like FPGAs.
The ACM/SIGDA International Symposium on Field-Programmable Gate Arrays is the premier conference for presentation of advances in all areas related to the FPGA technology, including FPGA architecture, FPGA circuit design, CAD for FPGAs, high-level abstractions and tools for FPGAs, FPGA-based and FPGA-like computing engines, as well as applications and design studies. This year's Best Paper Award is selected from a total of 161 submissions.
Published on 02/26/2019.
Steve Dai has won the prestigious ECE Outstanding Thesis Research Award for 2019! The award is given to one graduating PhD student from the School of ECE yearly, based on the significance of their doctoral research. Steve recently joined NVIDIA Research as a Research Scientist, after successfully defending his PhD thesis titled “Coordinated Static and Dynamic Scheduling for High-Quality High-Level Synthesis”. Congratulations to Steve for the exceptional work!
Published on 02/04/2019.
Our group will present 5 papers at the 56th Design Automation Conference (DAC'19) in Las Vegas, NV. This year, DAC saw an 18% increase in total submissions, with 815 papers (a 10-year high), and all 5 of our submitted papers were accepted.
Published on 01/30/2019.
Congratulations to Dr. Steve Dai for successfully defending his thesis titled “Coordinated Static and Dynamic Scheduling for High-Quality High-Level Synthesis”. He is joining NVIDIA to start his career.
Published on 01/09/2019.
We would like to welcome new Ph.D. students Chenhui Deng, Shaojie Xiang, and Yichi Zhang, and new postdoc Cunxi Yu to our research group.
Published on 09/16/2018.
Congratulations to Dr. Gai Liu for successfully defending his thesis titled “Cross-Stage Logic and Architectural Synthesis: with Applications to Specialized Circuits and Programmable Processors”. He is joining Synopsys as a Senior Research and Development Engineer.
Published on 09/01/2018.
Professor Zhang was named one of the five winners of the 2018 Young Under-40 Innovators Award at the 55th Design Automation Conference (DAC), held in San Francisco, CA on June 24-28, 2018. The winners, from both innovative companies and universities, were announced during the opening keynote session of the 55th gathering of DAC, the premier conference devoted to the design and automation of electronic systems. The Under-40 Innovators Award is sponsored by the Association for Computing Machinery (ACM), the Electronic Systems Design Alliance (ESDA), and the Institute of Electrical and Electronics Engineers (IEEE). The award recognizes the top five young innovators who have already made a significant impact in the field of design and automation of electronics. Details of the Young Under-40 Innovators Award Panel are available at https://www.eetimes.com/author.asp?section_id=36&doc_id=1333452
Published on 07/03/2018.
Our paper entitled Fast and Accurate Estimation of Quality of Results in High-Level Synthesis with Machine Learning received the Best Paper Award in the Short Paper Category at the 26th IEEE International Symposium on Field-Programmable Custom Computing Machines. Details of the conference can be found at http://fccm.org/2018/program.html.
Published on 05/01/2018.
Professor Zhang received the Rising Professional Achievement Award from UCLA’s Henry Samueli School of Engineering and Applied Science. Presented to one alumnus annually, the Rising Professional Achievement Award honors the early career achievements of alumni under the age of 40. The school seeks candidates with impactful accomplishments in academia, industry or entrepreneurship; contributions to the engineering profession; a demonstrated commitment to mentorship; and notable service to the community and the profession. Details.
Published on 03/08/2018.
Members of the Zhang Research Group presented two papers and a poster at the 26th ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA’18) in Monterey, CA.
Published on 03/03/2018.
We would like to welcome new Ph.D. students Hanchen Jin and Yuwei Hu to our research group.
Published on 02/09/2018.