Research Projects

We work on efficient (i.e., low-power, low-energy) and high-performance (i.e., real-time) computing, integrating computer architecture, digital circuit design, and AI neural networks.

Research Topics:

Embedded AI Systems: Hardware-aware model compression, perception-driven scheduling, and cross-layer optimization for embedded machine learning applications. (🔗Link Button)
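As a flavor of the model-compression side of this topic, here is a minimal sketch of symmetric 8-bit post-training quantization of a weight tensor, a standard hardware-aware compression technique. The function names and example weights are illustrative, not from any specific project.

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~= scale * q."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    # Round each weight to the nearest int8 step and clamp to [-127, 127].
    return [max(-127, min(127, round(w / scale))) for w in weights], scale

def dequantize(q, scale):
    """Map int8 codes back to approximate float weights."""
    return [scale * v for v in q]

weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)  # each entry within scale/2 of the original
```

The reconstruction error per weight is bounded by half the quantization step, which is why such 8-bit schemes often preserve accuracy while shrinking memory traffic on embedded targets.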

LLM-Driven Embedded and Hardware Coding: Using large language models (LLMs) for GPU CUDA programming and related embedded and hardware code generation. (🔗Link Button)

Real-Time Computing on GPU/FPGA Heterogeneous Architectures: System-level modeling, timing analysis, and scheduling frameworks to guarantee performance predictability in complex GPU/FPGA computing environments. (🔗Link Button)
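One classical building block of this kind of timing analysis is the Liu and Layland utilization bound for rate-monotonic scheduling; a minimal sketch (the task parameters below are made up for illustration):

```python
def rm_utilization_test(tasks):
    """Sufficient schedulability test for rate-monotonic scheduling.

    tasks: list of (wcet, period) pairs for independent periodic tasks.
    Returns True if total utilization is within the Liu-Layland bound
    n * (2**(1/n) - 1); False means the test is inconclusive and a
    finer analysis (e.g., response-time analysis) is needed.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# Three periodic tasks as (worst-case execution time, period):
# U = 1/4 + 1/5 + 2/10 = 0.65, under the n = 3 bound of about 0.7798.
tasks = [(1, 4), (1, 5), (2, 10)]
schedulable = rm_utilization_test(tasks)  # True for this task set
```

Heterogeneous GPU/FPGA settings need far richer models than this uniprocessor bound, but the same question, whether worst-case demand fits within provable capacity, drives the frameworks above.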

FPGA/ASIC-based AI Accelerator Design: Design of customized accelerators and systolic arrays for deep neural networks, sparse computations, and sensor fusion tasks. (🔗Link Button)
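To illustrate the systolic-array idea behind such accelerators, here is a cycle-by-cycle software sketch of an output-stationary array computing a matrix product. This is a behavioral model for intuition, not a hardware description of any particular design.

```python
def systolic_matmul(A, B):
    """Behavioral sketch of an n x n output-stationary systolic array.

    A values stream rightward and B values stream downward; each
    processing element PE(i, j) multiplies the pair arriving that cycle
    and accumulates C[i][j] in place.
    """
    n = len(A)
    C = [[0] * n for _ in range(n)]
    # With skewed input feeding, PE(i, j) sees (A[i][k], B[k][j]) at
    # cycle k + i + j, so the array drains after 3n - 2 cycles.
    for cycle in range(3 * n - 2):
        for i in range(n):
            for j in range(n):
                k = cycle - i - j
                if 0 <= k < n:
                    C[i][j] += A[i][k] * B[k][j]
    return C
```

In hardware, the appeal is that every operand is fetched once and then reused by neighboring PEs, which is what makes the structure attractive for dense and sparse DNN workloads alike.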

PDN and Low-Power Processor Design: Power delivery network design and reinforcement learning–based runtime management for energy efficiency. (🔗Link Button)
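As a toy illustration of reinforcement-learning-based runtime power management, the sketch below trains a tabular Q-learning agent to pick between two hypothetical frequency levels, trading energy cost against deadline misses. The environment model, reward numbers, and state names are all invented for illustration.

```python
import random

def step(load, action):
    """Invented one-step reward model: action 0 = low freq, 1 = high freq.

    Low frequency saves energy but misses the deadline under heavy load.
    """
    meets_deadline = (action == 1) or (load == "light")
    energy_cost = 1.0 if action == 0 else 2.0
    return (5.0 if meets_deadline else 0.0) - energy_cost

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Tabular Q-learning over (load state, frequency action) pairs."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in ("light", "heavy") for a in (0, 1)}
    for _ in range(episodes):
        load = rng.choice(("light", "heavy"))
        # Epsilon-greedy action selection.
        if rng.random() < epsilon:
            action = rng.choice((0, 1))
        else:
            action = max((0, 1), key=lambda a: q[(load, a)])
        # One-step Q update (no successor state in this toy episodic task).
        q[(load, action)] += alpha * (step(load, action) - q[(load, action)])
    return q

q = train()
# The learned policy prefers low frequency under light load and
# high frequency under heavy load.
```

Real runtime managers face continuous states, delayed thermal effects, and PDN constraints, but this captures the core loop: observe workload, act on frequency, learn from the measured energy/performance outcome.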