Wayne Chu

Wei-Teng (Wayne) CHU

Master of Science in Electrical Engineering (MSEE) Student

Stanford University

waynechu@stanford.edu


📣 Wayne is actively seeking internship opportunities for Summer 2026! 📣

🎉 UPDATE 🎉

I recently joined the Stanford Vision and Learning Lab (SVL), advised by Prof. Fei-Fei Li.

Our paper Efficient Construction of Implicit Surface Models From a Single Image for Motion Generation has been submitted to ICRA 2026 and is now available on arXiv!


🌟 Highlight 🌟

Wayne has dedicated much of his time to robotics and computer vision through projects, internships, and research. In the summer of 2023, he developed a dynamic obstacle avoidance project with a TurtleBot4 at UC San Diego, relying solely on depth-camera input. His internship at ITRI further deepened his understanding of the challenges in bridging the Sim2Real gap, particularly in designing perception pipelines that remain robust in real-world conditions.

He conducted remote research in computer vision and robotics under the supervision of Prof. Weiming Zhi and Dr. Tianyi Zhang at the DROP Lab, CMU before beginning his studies at Stanford.

sdf demo
robotic arm demo

Education

Stanford University, CA

M.S. Student, Electrical Engineering

Concentration: Robotics / Computer Vision

  • GPA: N/A
  • Duration: Sep. 2025 - Apr. 2027 (Expected)

National Tsing Hua University, Taiwan

B.S., Interdisciplinary Program of Engineering

Concentration: Electrical Engineering / Power Mechanical Engineering

  • Graduated with Distinction
  • GPA: 4.15 / 4.30
  • Duration: Sep. 2020 - Jan. 2024

Work Experience


Stanford Artificial Intelligence Laboratory (SAIL)

Researcher @ Stanford Vision and Learning Lab

Sep. 2025 - Present

Working on something interesting related to robot learning.
Highlight coming soon!


Industrial Technology Research Institute (ITRI)

Automotive AI Algorithm Development Intern

Sep. 2024 - Nov. 2024, Mar. 2025 - Jul. 2025

Mask2Former Semantic Segmentation

We merged and re-annotated the Mapillary and ADE20K datasets to meet the requirements of quadruped robots and autonomous vehicles, then converted the result to COCO format to train a custom Mask2Former model with the NVIDIA TAO Toolkit. Finally, the retrained model was integrated into ROS2 for real-time video inference on robotic and vehicle platforms at about 15 FPS.

semantic segmentation demo
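The conversion step targets the standard COCO annotation layout. A minimal sketch of that structure follows; all file names, categories, and polygons below are placeholders, not the actual merged dataset:

```python
import json

# Minimal COCO-format skeleton of the kind produced when merging
# re-annotated Mapillary/ADE20K labels for Mask2Former training.
# Every file name, category, and polygon here is an illustrative placeholder.
images = [{"id": 1, "file_name": "frame_0001.jpg", "width": 1920, "height": 1080}]
annotations = [{
    "id": 1,
    "image_id": 1,
    "category_id": 1,
    "segmentation": [[100, 100, 200, 100, 200, 200, 100, 200]],  # polygon (x, y) pairs
    "area": 10000.0,
    "bbox": [100, 100, 100, 100],   # [x, y, width, height]
    "iscrowd": 0,
}]
categories = [{"id": 1, "name": "road"}, {"id": 2, "name": "sidewalk"}]

coco = {"images": images, "annotations": annotations, "categories": categories}
coco_json = json.dumps(coco, indent=2)  # ready to write out for training tools
```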

Person Re-Identification (Re-ID)

We built a real-time pedestrian detection and re-identification pipeline using YOLO11n and OSNet. The pipeline matched identity features by cosine similarity and exported models to ONNX (.onnx) and TensorRT (.engine) formats for acceleration, achieving real-time inference at about 25 FPS for robotics and automotive use cases.

person re-id demo
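The cosine-similarity matching step can be sketched as follows, assuming OSNet-style feature vectors per detection; the dimensions, threshold, and names here are illustrative, not the pipeline's actual code:

```python
import numpy as np

def cosine_match(query, gallery, threshold=0.6):
    """Match one query embedding against a gallery of stored identity
    embeddings by cosine similarity. Returns (index, score); index is
    None when the best score falls below the acceptance threshold."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = g @ q                      # cosine similarity per identity
    best = int(np.argmax(scores))
    return (best if scores[best] >= threshold else None), float(scores[best])

rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 512))                # five stored identities
query = gallery[3] + 0.05 * rng.normal(size=512)   # noisy view of identity 3
idx, score = cosine_match(query, gallery)
```

In a live pipeline the gallery grows as new identities appear, and the threshold trades off false matches against duplicated IDs.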

Sim2Real Quadruped Robot Terrain Traversal

We developed an elevation-mapping workflow for the Unitree Go2 quadruped robot by integrating Gazebo with point clouds from an Intel RealSense depth camera, and integrated a point-cloud sampling and processing pipeline with reinforcement-learning gait models in simulation. The most challenging parts were employing visual-inertial odometry (VIO) to handle sensor interruptions and implementing forward kinematics to compute the Go2's foot coordinates. Finally, the system was deployed on the Unitree Go2 for Sim2Real validation.

quadruped robot demo
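The forward-kinematics idea can be illustrated with a planar two-link leg. This is a simplified sketch, not the Go2's real leg model (which adds a hip-abduction joint, making it 3-DOF), and the link lengths are placeholders rather than URDF values:

```python
import math

def foot_position_2d(hip_angle, knee_angle, l1=0.213, l2=0.213):
    """Planar forward kinematics for a two-link leg (thigh + calf).
    Angles are in radians measured from vertical; link lengths (meters)
    are illustrative placeholders, not the Go2's URDF values."""
    x = l1 * math.sin(hip_angle) + l2 * math.sin(hip_angle + knee_angle)
    z = -l1 * math.cos(hip_angle) - l2 * math.cos(hip_angle + knee_angle)
    return x, z

# Leg hanging straight down: foot directly below the hip at depth -(l1 + l2)
x, z = foot_position_2d(0.0, 0.0)
```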

Foxconn

Technology Innovation Group of Chairman Office Intern

Jun. 2024 - Sep. 2024

We designed an intuitive Android car app to control the vehicle's A/C temperature by voice. The pipeline was composed of Azure Speech Service → Dialogflow intent parsing → CarAPI calls to operate the HVAC system.

[video]

Selected Research & Projects


Efficient Construction of Implicit Surface Models From a Single Image for Motion Generation

Mar. 2025 - Sep. 2025

[DROP Lab], Carnegie Mellon University

Advised by Prof. Weiming Zhi and Dr. Tianyi Zhang.
We present Fast Image-to-Neural Surface (FINS), a lightweight framework that reconstructs high-fidelity signed distance fields (SDFs) from a single image in 10 seconds. Our method combines a multi-resolution hash grid with efficient optimization to achieve state-of-the-art accuracy while being an order of magnitude faster than existing methods. We also validated the scalability and real-world usability of FINS through robotic surface-following experiments, demonstrating its utility across a wide range of tasks and datasets.

[arXiv]
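To illustrate the kind of surface-following an SDF enables, here is a minimal sketch that projects a point onto the zero level set of an analytic sphere SDF standing in for a learned model. All names and values are illustrative, not the paper's implementation:

```python
import numpy as np

def sphere_sdf(p, center=np.zeros(3), radius=0.5):
    """Analytic sphere SDF standing in for a learned neural SDF."""
    return np.linalg.norm(p - center) - radius

def sdf_normal(sdf, p, eps=1e-4):
    """Surface normal via central finite differences of the SDF."""
    n = np.array([
        sdf(p + np.array([eps, 0.0, 0.0])) - sdf(p - np.array([eps, 0.0, 0.0])),
        sdf(p + np.array([0.0, eps, 0.0])) - sdf(p - np.array([0.0, eps, 0.0])),
        sdf(p + np.array([0.0, 0.0, eps])) - sdf(p - np.array([0.0, 0.0, eps])),
    ])
    return n / np.linalg.norm(n)

def project_to_surface(sdf, p, iters=10):
    """Step along the SDF gradient toward the zero level set --
    the basic operation behind surface-following."""
    for _ in range(iters):
        p = p - sdf(p) * sdf_normal(sdf, p)
    return p

p = project_to_surface(sphere_sdf, np.array([1.0, 0.0, 0.0]))
```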

Smart Fridge

Smart Fridge

Jun. 2024 - Sep. 2024

Google Hardware Product Sprint (HPS) Program

Developed a smart refrigerator with food pattern recognition and expiration tracking using Google Gemini. Built a recipe suggestion system based on soon-to-expire ingredients.

[poster] [slides] [code]
Laser-Assisted Guidance Landing

Laser-Assisted Guidance Landing Technology for Drones

Jan. 2023 - Nov. 2023

[HSCC Lab], National Tsing Hua University

Advised by Prof. Jang-Ping Sheu.
We propose a laser-assisted guidance approach to improve drone landing accuracy in settings where GPS introduces multi-meter error. The system merges embedded electronics, 3D-printed mechanical design, and low-power laser sensing to achieve 30-40 cm landing accuracy, confirming the feasibility of laser-based localization for future autonomous or remotely operated unmanned aerial vehicle (UAV) applications.

[report] [slides] [video] [paper]
TurtleBot4 Project

Dodging Dynamical Obstacles Using TurtleBot4 Camera Feed

Jun. 2023 - Aug. 2023

[MURO Lab], UC San Diego

Supervised by Prof. Jorge Cortés.
Implemented real-time dynamic obstacle avoidance for mobile robots using vision-based sensing and ROS2. The system used a TurtleBot4 equipped with an OAK-D Pro camera to achieve precise obstacle tracking and smooth, collision-free navigation, and it integrated RRT* path planning with Bézier curve smoothing, demonstrating affordable, efficient vision-driven navigation in dynamic environments.

[report] [slides] [video] [paper]
MURO demo
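The smoothing step can be sketched with a minimal De Casteljau evaluation that treats planner waypoints as Bézier control points; the waypoints and names here are illustrative, not the project's actual planner output:

```python
import numpy as np

def bezier(points, n=50):
    """Evaluate a Bézier curve over its control points via the
    De Casteljau algorithm (repeated linear interpolation)."""
    points = np.asarray(points, dtype=float)
    ts = np.linspace(0.0, 1.0, n)
    out = np.empty((n, points.shape[1]))
    for i, t in enumerate(ts):
        p = points.copy()
        while len(p) > 1:
            p = (1 - t) * p[:-1] + t * p[1:]   # collapse one level
        out[i] = p[0]
    return out

# Jagged RRT*-style waypoints used as control points (placeholder values)
waypoints = [(0, 0), (1, 2), (3, 2), (4, 0)]
path = bezier(waypoints)
```

The curve interpolates only the first and last control points; in practice one smooths piecewise with continuity constraints so the path stays near the collision-free waypoints.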

Publications

(† Corresponding author)

Selected Awards