About Me

Hi!
I'm a fourth-year Ph.D. student at UCLA advised by Prof. Kai-Wei Chang. I previously worked as a research assistant at National Taiwan University, advised by Prof. Yu-Chiang Frank Wang.

My research views intelligence fundamentally as search, where neural networks provide the heuristics that guide exploration. The stronger and more structured those heuristics are—through pretraining and task-specific adaptation—the more efficiently we can discover solutions with limited data, compute, and time. My long-term goal is to build AI systems that can reason and solve novel problems by making this search process more data-efficient, robust, and systematic. I'm interested in the following three directions:

  • How can we systematically search through emergent internal symbolic structures?
    Ultimately, solving truly difficult problems requires more than just a good first guess; it demands systematic exploration. Traditional search methods guarantee a solution but are impractically slow, whereas neural approaches are fast but offer no guarantees. My work aims to merge the benefits of both. Instead of imposing a rigid, human-defined symbolic system, I am interested in discovering the internal symbolic abstractions that emerge within neural models themselves, letting models hypothesize, compose, and test abstract concepts automatically, so that search can be both efficient and guaranteed.
  • Can we generate the right counterexamples to break spurious correlations and learn robust heuristics?
    While neural networks excel at learning heuristics, they are prone to exploiting spurious correlations: shortcuts that work on the training data but fail in the real world. My work explores automated data synthesis pipelines that (1) leverage regularization from different modalities, or (2) generate compositional counterexamples to discard task-irrelevant information.
  • How can we learn reusable, explainable abstractions for efficient search?
    My research equips models with an interface to externalize latent knowledge as discrete, composable concepts, enabling them to detect recurring structures and recycle those abstractions for faster adaptation to new tasks in a systematic manner.


I'm looking for internships for Summer 2026!

Experience

UCLA

-   Ph.D. in Computer Science, Sep. 2022 - Present

National Taiwan University

-   B.S. in Electrical Engineering, Sep. 2016 - Jun. 2020

Meta (FAIR)

-   Research Scientist Intern, Menlo Park, Jun. 2025 - Sep. 2025
    Mentor: Bernie Huang

Amazon

-   Research Scientist Intern, Los Angeles, Jun. 2024 - Sep. 2024
    Mentor: Christos Christodoulopoulos

Amazon

-   Research Scientist Intern, Sunnyvale, Jun. 2023 - Sep. 2023
    Mentor: Feng Gao

Microsoft

-   Research Intern, Taipei (remote), Apr. 2022 - Jun. 2023
    Mentor: Yen-Chun Chen

National Taiwan University

-   Research Assistant, Taipei, Sep. 2020 - Jun. 2022
    Advisor: Yu-Chiang Frank Wang

Carnegie Mellon University

-   Collaborative Research Project Participant, Taipei, Dec. 2020 - Sep. 2021
    Mentor: Yao-Hung Hubert Tsai; Advisors: Ruslan Salakhutdinov, Louis-Philippe Morency

AICS

-   Software Engineer Intern, Taipei, Jul. 2018 - Sep. 2018

Publications