Disclaimer: This website is a parody of The Elder Scrolls V: Skyrim menu (base code taken from CodePen). The logo is a parody of the Bethesda Game Studios logo.

About me

I am currently a graduate student studying for a master's degree in computer engineering at the University of California San Diego.

Please enjoy a sample of the projects I'm proud of. And don't forget to leave your shoes at the door!

Load Resume?

Yes No

Load a project







Contact me

LinkedIn

Quit?

For Windows: click the X at the top right

For macOS: click the red circle at the top left

Autonomous Vehicle

About

This was a project done by three people to convert a basic RC car with just a motor, servo, and battery into an autonomous vehicle. Some notable components that enabled this conversion are:

  • Camera that can output video
  • NVIDIA Jetson Nano
  • VESC (Vedder Electronic Speed Controller)
To attach these components to the car, we laser cut an acrylic plate with holes to route wires, then 3D printed supports for the plate along with housings for the electrical components. Once everything was connected, we could SSH into the Jetson Nano, which runs a form of Linux as its OS, and code directly on it.

One of the first things we did was use software called DonkeyCar to do machine learning on images taken from the camera as we manually drove the RC car around a track with a game controller. After multiple attempts at gathering data and training, we got some subpar results; we think this was because the track reflected sunlight, which made gathering good data difficult. Our next two projects with the car used Python, OpenCV, and an API that interacts with our VESC to drive the car autonomously using computer vision.

Three Autonomous Laps using Python and OpenCV

To accomplish this project, I used OpenCV to filter the video feed and detect the lane lines on both sides of the car. This gave me two lines, from which I calculated their bisector and used its angle to steer the car. The following video shows three autonomous laps using this method.
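For a rough idea of how this works, here is a minimal sketch in C++ with OpenCV (the project itself was written in Python; the color thresholds, Hough parameters, and steering mapping below are illustrative placeholders rather than the project's actual values):

```cpp
// Sketch of the lane-following idea: filter the frame, find line segments,
// and steer along the bisector of the left and right lane directions.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Estimate a steering angle (degrees) from one camera frame; 0 means straight.
double steeringFromFrame(const cv::Mat& frame) {
    // 1) Keep only pixels matching the lane-line color (placeholder yellow range).
    cv::Mat hsv, mask;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(20, 80, 80), cv::Scalar(35, 255, 255), mask);

    // 2) Detect line segments with the probabilistic Hough transform.
    std::vector<cv::Vec4i> segments;
    cv::HoughLinesP(mask, segments, 1, CV_PI / 180, 50, 40, 20);

    // 3) Average the angles of the left- and right-leaning segments, then use
    //    the bisector of those two directions as the steering angle.
    double leftSum = 0, rightSum = 0;
    int leftCount = 0, rightCount = 0;
    for (const auto& s : segments) {
        double angle = std::atan2(double(s[3] - s[1]), double(s[2] - s[0]));
        if (angle < 0) { leftSum += angle; ++leftCount; }
        else           { rightSum += angle; ++rightCount; }
    }
    if (leftCount == 0 || rightCount == 0) return 0.0;  // lost a lane line
    double bisector = (leftSum / leftCount + rightSum / rightCount) / 2.0;
    return bisector * 180.0 / CV_PI;  // degrees, later mapped to the servo
}
```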

Wildlife Conservation Protocol

Another fun project we did was using object detection to adjust driving behavior. For the object detection, I used the COCO (Common Objects in Context) pre-trained model to detect pedestrians and animals such as a raccoon. Combined with a simple form of distance estimation, this let me program different behaviors depending on the object immediately in front of the vehicle, as seen in the video below. Enjoy!
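Here is a small sketch of the behavior-selection logic (the labels, distance threshold, and reactions below are illustrative assumptions, not the project's actual values):

```cpp
#include <iostream>
#include <string>

// One detection from the pre-trained model, paired with a rough distance
// estimate (e.g., from bounding-box size). Names and units are assumptions.
struct Detection {
    std::string label;      // e.g., "person", "raccoon"
    double distanceMeters;  // estimated distance to the object
};

// Decide how the car should react to the object immediately in front of it.
void reactTo(const Detection& d) {
    if (d.distanceMeters > 3.0) {
        std::cout << "Object is far away: keep driving\n";
    } else if (d.label == "person") {
        std::cout << "Pedestrian ahead: stop and wait\n";
    } else if (d.label == "raccoon") {
        std::cout << "Wildlife ahead: stop and wait for it to move along\n";
    } else {
        std::cout << "Unknown obstacle: slow down\n";
    }
}
```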

Path Tracer

During the spring of 2022, I created a path tracer for my computer graphics class. I started with a raytracer, using Peter Shirley's Ray Tracing in One Weekend as a guide. Once I had a good foundation, I implemented a variety of upgrades and features such as direct lighting, indirect lighting with path tracing, and multiple importance sampling. Please enjoy a few samples of the results.

Basic Raytracer

Adding direct lighting

Adding indirect lighting with next event estimation (NEE)

Adding multiple importance sampling, which combines NEE and Bidirectional Reflectance Distribution Function (BRDF) importance sampling
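For context, here is a small sketch of the standard power heuristic used to weight the two strategies when combining them (the names below are illustrative, not code from my path tracer):

```cpp
#include <cmath>

// Power heuristic (beta = 2) for multiple importance sampling. For a sample
// drawn from strategy f (pdf pdfF) competing with strategy g (pdf pdfG), the
// weight is (nF*pdfF)^2 / ((nF*pdfF)^2 + (nG*pdfG)^2). Here f and g would be
// NEE (light sampling) and BRDF sampling.
double powerHeuristic(int nF, double pdfF, int nG, double pdfG) {
    double f = nF * pdfF;
    double g = nG * pdfG;
    return (f * f) / (f * f + g * g);
}

// A light sample's contribution is then scaled by
//   powerHeuristic(1, pdfLight, 1, pdfBrdf)
// and a BRDF sample's contribution by the symmetric weight.
```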

Shadow Mapping

What is shadow mapping?

Like the name suggests, shadow mapping is a technique used in computer graphics to generate shadows.

Why shadow mapping?

Shadows make scenes look more realistic, give a sense of depth, and just look cool. In ray tracing, realistic shadows are much easier to implement: rays are shot from the camera, and when they intersect an object, it is easy to check whether that point is occluded from a light source. However, given the computing time of ray tracing, how can we get shadows using a rasterizer instead? This is where shadow mapping comes in: it allows shadows to be generated in real time with a rasterizer.

Progress - Part 1

First, I created a fragment shader that colors the scene based on depth values. In this case the renderings are from the camera's perspective. To generate the final image on the right, I linearized the depth between the near and far planes to create a nice gradient, unlike the image on the left, which assigns depth values based on the unmodified z-coordinates in window space.
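Here is a small sketch of that linearization, assuming the standard OpenGL perspective depth mapping (written as C++ that mirrors what the fragment shader computes):

```cpp
// Convert a non-linear depth-buffer value d in [0, 1] back into a linear value
// between the near and far planes, so nearby objects form a visible gradient.
float linearizeDepth(float d, float nearPlane, float farPlane) {
    float zNdc = d * 2.0f - 1.0f;  // window-space depth back to NDC [-1, 1]
    float zEye = (2.0f * nearPlane * farPlane) /
                 (farPlane + nearPlane - zNdc * (farPlane - nearPlane));
    return (zEye - nearPlane) / (farPlane - nearPlane);  // normalize to [0, 1]
}
```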

Progress - Part 2

Since this project has only one light source (the “sun”), which is a directional source as opposed to a point source, I used an orthographic projection for the light. With this, I created the left image by rendering the scene with the depth fragment shader from the point of view of the light source (i.e., drawing the scene in light space). To combine the two passes, I set up a framebuffer object with a depth texture: the first pass renders the scene's depth from the light into that texture, and the second pass samples the texture in a separate fragment shader to determine which fragments are in shadow. The right image is the result. The next step is to add back the Phong shading for color. I combined the two by multiplying the shadow factor (0 for occluded, 1 for not occluded) into the specular and diffuse terms of the Phong shading, leaving the ambient term alone.
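Here is a sketch of the depth-map framebuffer setup described above, using standard OpenGL calls (the loader header, resolution, and filtering are typical choices and assumptions, not necessarily what my project uses):

```cpp
#include <GL/glew.h>  // assumes a GL loader and an existing OpenGL context

// Create a framebuffer whose only attachment is a depth texture. The first
// pass renders the scene's depth from the light into this texture; the second
// pass samples it to decide which fragments are in shadow.
GLuint createDepthMapFBO(GLuint& depthTexOut, int width = 1024, int height = 1024) {
    GLuint fbo, depthTex;
    glGenFramebuffers(1, &fbo);
    glGenTextures(1, &depthTex);

    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height,
                 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, depthTex, 0);
    glDrawBuffer(GL_NONE);  // depth-only pass: no color output
    glReadBuffer(GL_NONE);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    depthTexOut = depthTex;
    return fbo;
}
```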

Progress - Part 3 - Dealing with artifacts

After adding back the Phong shading, I get the image on the left, which contains a shadow mapping artifact called shadow acne. The acne occurs on surfaces that are not actually occluded and is caused by the limited resolution of the shadow map: neighboring fragments sample the same shadow map texel, so a surface ends up constantly casting a shadow on itself. To fix this, a small bias can be added that offsets the depth of each surface slightly closer to the light during the shadow map comparison. The fixed image is on the right.

Another artifact is aliasing, which makes the edges of shadows pixelated and jagged. The method I used to address this is Poisson sampling, which samples the depth map at several points in a small area around each fragment rather than at a single location.
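Here is a sketch of the shadow test with both fixes applied, written as C++ that mirrors the fragment-shader logic (the bias value, Poisson offsets, and shadow map size are illustrative assumptions):

```cpp
#include <array>
#include <functional>
#include <utility>

// Shadow test for one fragment. lookupDepth(u, v) returns the closest depth
// stored in the shadow map at texture coordinate (u, v); fragDepth is this
// fragment's depth in light space. Returns 0 for fully shadowed, 1 for fully lit.
float shadowFactor(const std::function<float(float, float)>& lookupDepth,
                   float u, float v, float fragDepth) {
    const float bias = 0.005f;  // nudges the comparison toward the light to avoid acne

    // A few fixed Poisson-disk offsets (illustrative values), scaled to texel size.
    static const std::array<std::pair<float, float>, 4> poisson = {{
        {-0.942f, -0.399f}, { 0.946f, -0.769f},
        {-0.094f, -0.929f}, { 0.345f,  0.294f}
    }};
    const float spread = 1.0f / 1024.0f;  // assumes a 1024x1024 shadow map

    float lit = 0.0f;
    for (const auto& o : poisson) {
        float closest = lookupDepth(u + o.first * spread, v + o.second * spread);
        if (fragDepth - bias <= closest) lit += 1.0f;  // not occluded at this sample
    }
    return lit / poisson.size();  // averaging softens the jagged shadow edges
}
```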

Final Results

Project Eucalyptus

What is Project Eucalyptus?

Project Eucalyptus is a web application that aims to give users an online alternative to traditional bullet journaling. It was created over a 10-week span in a collaborative effort with my peers. I programmed and helped design many aspects of the project's frontend (using JavaScript, HTML, and CSS), such as the side navigation bar and the timespan format menu.

To experience the web app, please follow the README in the GitHub repository linked here: https://github.com/cse110-sp21-group19/cse110-sp21-group19

Mad Martians

What is Mad Martians?

Mad Martians is a 3D, 4-player co-op tower defense game inspired by games like Orcs Must Die and Dungeon Defenders 2.



It was built from scratch by 7 people in 10 weeks using C++, OpenGL, and a few other libraries for audio and networking. My role within this project involved working on the rendering/graphics engine, where I parsed data coming from the game server and rendered the game objects on the client. Lighting was done using Blinn-Phong shading. I was also involved in creating physics-based particle effects and simple animations in the graphics shaders. This was one of the most fun experiences I've had as a software developer, and I hope you take a look at our creation.
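For reference, here is a small sketch of the Blinn-Phong model mentioned above, using GLM for the vector math (the material and light parameters are illustrative, not the game's actual values):

```cpp
#include <algorithm>
#include <cmath>
#include <glm/glm.hpp>

// Blinn-Phong shading for a single light. All direction vectors are assumed
// to be normalized; the ambient term would be added separately.
glm::vec3 blinnPhong(glm::vec3 normal, glm::vec3 lightDir, glm::vec3 viewDir,
                     glm::vec3 diffuseColor, glm::vec3 specularColor,
                     glm::vec3 lightColor, float shininess) {
    float diff = std::max(glm::dot(normal, lightDir), 0.0f);

    // Blinn's variation: use the half vector instead of the reflection vector.
    glm::vec3 halfVec = glm::normalize(lightDir + viewDir);
    float spec = std::pow(std::max(glm::dot(normal, halfVec), 0.0f), shininess);

    return lightColor * (diff * diffuseColor + spec * specularColor);
}
```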

Here is a link to a development journal page where you can see reports on our team's progress (our original project name was Mars Rovers): Notion Link
Here is a livestream VOD where our group presented our game (Mad Martians starts at 47:23):

Nature Photography VR

What is Nature Photography VR?

Nature Photography VR is a virtual reality game inspired by Pokemon Snap in which the player explores a natural environment and takes pictures of the surrounding wildlife to earn points. It was created by two people over a three-week period using the Unity game engine. Disclaimer: the game uses free assets from the Unity Asset Store.



Some unique features of this game:

  • Intuitive Controls: The camera feels great in hand, and taking a picture with the right trigger feels natural.
  • Movement: The player draws a path on the floor, and once finished, the player character follows that path.
  • Photo Gallery: Visit the photo gallery to view the pictures you took and spawn a picture frame that can be translated, scaled, and rotated.
  • Audio: Directional audio helps the player locate animals.
Here is a link to a short developer log: Notion Link

Here is a trailer I made for the game

Animation Engine and Physics Sim Demos

Animation Engine

The animation engine takes in three inputs: the skeleton, skin, and animation. First it loads a character skeleton from a custom .skel file and displays it in 3D. All joints in the skeleton are implemented as 3-DOF rotational joints (ball-and-socket joints). The program performs the forward kinematics computations to generate world space matrices for the joints. Then it loads the character skin from a .skin file and attaches it to the skeleton. Finally, it loads a keyframe animation from an .anim file and plays it back on a skinned character. The .anim file contains an array of channels, each channel containing an array of keyframes.
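Here is a sketch of the forward-kinematics step, assuming each joint stores an offset from its parent and three rotation DOFs (the joint structure below is illustrative, not the actual .skel loader):

```cpp
#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// One ball-and-socket joint: an offset from its parent plus XYZ rotation DOFs.
struct Joint {
    glm::vec3 offset;             // translation relative to the parent joint
    glm::vec3 pose;               // current rotation angles (radians) for the 3 DOFs
    std::vector<int> children;    // indices of child joints
    glm::mat4 worldMatrix{1.0f};  // filled in by forward kinematics
};

// Forward kinematics: compose each joint's local matrix with its parent's
// world matrix, walking the skeleton from the root down.
void forwardKinematics(std::vector<Joint>& joints, int index,
                       const glm::mat4& parentWorld) {
    Joint& j = joints[index];
    glm::mat4 local = glm::translate(glm::mat4(1.0f), j.offset);
    local = glm::rotate(local, j.pose.z, glm::vec3(0, 0, 1));
    local = glm::rotate(local, j.pose.y, glm::vec3(0, 1, 0));
    local = glm::rotate(local, j.pose.x, glm::vec3(1, 0, 0));

    j.worldMatrix = parentWorld * local;  // used later to transform skin vertices
    for (int child : j.children)
        forwardKinematics(joints, child, j.worldMatrix);
}
```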

Cloth Simulator

This program simulates a piece of cloth made from particles, spring-dampers, and triangular surfaces. It includes the effects of uniform gravity, spring elasticity, damping, aerodynamic drag, and simple ground plane collisions. It also features adjustable wind speed.
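Here is a sketch of the spring-damper force between two connected particles, using GLM (the stiffness and damping constants are placeholders):

```cpp
#include <glm/glm.hpp>

// Force that the spring-damper between particles 1 and 2 applies to particle 1
// (particle 2 receives the negation). Constants here are placeholders.
glm::vec3 springDamperForce(const glm::vec3& p1, const glm::vec3& p2,
                            const glm::vec3& v1, const glm::vec3& v2,
                            float restLength,
                            float stiffness = 500.0f, float damping = 5.0f) {
    glm::vec3 delta = p2 - p1;
    float length = glm::length(delta);
    if (length < 1e-6f) return glm::vec3(0.0f);  // particles coincide: no direction
    glm::vec3 dir = delta / length;

    float stretch  = length - restLength;     // positive when the spring is stretched
    float closingV = glm::dot(v2 - v1, dir);  // relative speed along the spring axis
    return (stiffness * stretch + damping * closingV) * dir;  // pulls p1 toward p2
}
```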

Particle System

This program simulates a particle system with a selection of adjustable physical properties, listed below (a sketch grouping them follows the list):
  • Particle creation rate (particles per second)
  • Initial position & variance (X, Y, Z)
  • Initial velocity & variance (X, Y, Z)
  • Initial life span & variance
  • Gravity
  • Air density
  • Drag coefficient
  • Particle radius (for both rendering & aerodynamic drag)
  • Collision elasticity
  • Collision friction
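These properties map naturally onto a parameter struct like the sketch below (the field names and default values are illustrative assumptions, not the program's actual interface):

```cpp
#include <glm/glm.hpp>

// Illustrative grouping of the adjustable particle-system properties above.
struct ParticleSystemParams {
    float creationRate        = 100.0f;            // particles spawned per second
    glm::vec3 initPosition    {0.0f};              // initial position (X, Y, Z)
    glm::vec3 positionVar     {0.1f};              // variance around that position
    glm::vec3 initVelocity    {0.0f, 2.0f, 0.0f};  // initial velocity (X, Y, Z)
    glm::vec3 velocityVar     {0.5f};              // variance around that velocity
    float lifeSpan            = 3.0f;              // seconds a particle lives
    float lifeSpanVar         = 0.5f;
    float gravity             = 9.8f;              // downward acceleration
    float airDensity          = 1.2f;              // used for aerodynamic drag
    float dragCoefficient     = 0.47f;
    float particleRadius      = 0.05f;             // used for rendering and drag
    float collisionElasticity = 0.3f;              // energy kept when bouncing
    float collisionFriction   = 0.2f;              // tangential damping on impact
};
```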


All projects above were developed using C++ and OpenGL.

Odd Even Boxing

What is Odd Even Boxing?

Odd Even Boxing is a boxing video game designed for patients with Parkinson's disease, giving them a safe way to get exercise while stimulating their minds. Players not only exercise but also engage different forms of cognition by thinking about math problems as well as what type of punch they need to throw. Below is a video of the final results, along with a report with more details about our design process and implementation.