The team investigates how large language models can be tailored for code generation in constrained environments such as IoT and embedded systems. Topics include test generation, HDL synthesis, human-in-the-loop feedback, and multimodal user interaction. They aim to optimize jointly for power, latency, and correctness while exploring constrained decoding and specialized small-scale LLMs.
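Constrained decoding, one of the techniques mentioned above, can be illustrated with a minimal sketch: at each generation step, tokens that would violate a target grammar are masked out before the model's preferred token is selected. Everything here (the toy vocabulary, the transition table, and `toy_model` standing in for LLM logits) is an illustrative assumption, not a real model or grammar specification.

```python
# Minimal constrained-decoding sketch: a toy "grammar" (a per-state
# whitelist of legal next tokens) masks the candidate set before the
# argmax, so even a model that prefers an illegal token stays on-grammar.

VOCAB = ["int", "float", "x", "=", "0", ";", "garbage"]

# Hypothetical transition table: state -> tokens the grammar permits next.
GRAMMAR = {
    "start": {"int", "float"},
    "type": {"x"},
    "name": {"="},
    "eq": {"0"},
    "value": {";"},
}
NEXT_STATE = {"start": "type", "type": "name",
              "name": "eq", "eq": "value", "value": "done"}

def toy_model(prefix):
    """Stand-in for LLM logits: deliberately prefers 'garbage',
    so the grammar mask is what keeps the output well-formed."""
    return {tok: (2.0 if tok == "garbage" else 1.0 / (1 + len(tok)))
            for tok in VOCAB}

def constrained_decode():
    state, out = "start", []
    while state != "done":
        scores = toy_model(out)
        allowed = GRAMMAR[state]
        # Mask step: only grammar-legal tokens compete in the argmax.
        tok = max(allowed, key=lambda t: scores[t])
        out.append(tok)
        state = NEXT_STATE[state]
    return " ".join(out)
```

In a real system the whitelist would come from an incremental parser or token-level automaton over the target language's grammar, and the scores from the LLM's logits; the masking principle is the same.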








