AR/VR Research
Facial Tracking
Programming: Python, OpenCV
My most recent projects have focused on facial tracking and how it can be improved. In this video I'm testing the pre-trained OpenCV/MediaPipe models to achieve facial landmark tracking that could later be used for facial expression detection or mapped to a digital avatar. I come from a C# and C++ background, but this project's target files had to be in Python, so I used it as an opportunity to learn the language.
This application is still in early development, but I'm enjoying the learning curve it's throwing at me.
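As a rough sketch of what this pipeline looks like, here's a minimal OpenCV + MediaPipe Face Mesh loop. It isn't the project's actual code; parameters like max_num_faces and the Esc-to-quit handling are my own choices:

```python
import cv2
import mediapipe as mp

# MediaPipe's pre-trained Face Mesh model: 468+ facial landmarks per face
face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        h, w = frame.shape[:2]
        # Landmarks are normalized to [0, 1]; scale to pixel coordinates
        for lm in results.multi_face_landmarks[0].landmark:
            cv2.circle(frame, (int(lm.x * w), int(lm.y * h)), 1, (0, 255, 0), -1)
    cv2.imshow("Facial Landmarks", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

The same landmark list could feed an expression classifier or drive avatar blendshapes downstream.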
EMG Data Visualization
Programming: Python, C#
This project was the most interesting to me for its practical application. The mocap actor is wearing Xsens mocap sensors that detect individual joint rotations, which are visualized with the muscle-anatomy avatar running in Unity. At the same time, he's wearing several EMG sensors under his clothes that detect neuromuscular activity. When muscle activity is detected, the data is passed along to Unity via the Lab Streaming Layer (LSL) and used to color the model (in this case, magenta). Higher color saturation indicates higher muscle exertion, and vice versa.
The second video shows a variation of this app that visually depicts the stretching (red) and compression (blue) of the muscles. The purpose of the project was to find new ways to visualize real medical data in order to track and better understand human performance.
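To illustrate the LSL hand-off, here's a minimal Python sketch of the receiving side using pylsl. The real pipeline feeds Unity directly; the stream type, calibration ceiling, and amplitude-to-saturation mapping below are assumptions for illustration:

```python
from pylsl import StreamInlet, resolve_byprop

# Find an EMG stream on the local network (stream type "EMG" is an assumption)
streams = resolve_byprop("type", "EMG", timeout=5.0)
inlet = StreamInlet(streams[0])

MAX_ACTIVATION = 1.0  # hypothetical calibration ceiling for full exertion

while True:
    sample, timestamp = inlet.pull_sample()
    # Map rectified EMG amplitude to a 0-1 value; the Unity side would use
    # this to drive the magenta saturation of the muscle material.
    saturation = min(abs(sample[0]) / MAX_ACTIVATION, 1.0)
    print(f"{timestamp:.3f}  saturation={saturation:.2f}")
```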
Full Body and Facial Motion Capture
Programming: C#
I've been very interested in monocular motion capture for first-person and third-person avatars. Here's a collection of the various techniques I've used in recent years.
The "Anatomy Layers" avatar tested NVIDIA's Omniverse mocap system with an avatar the user can change at runtime: the different body layers can be toggled on the fly while retaining accurate, full-body mocap. It was later used at a medical education event open to the public.
The "Movement SDK" demo tested Oculus' improved Hand Tracking 2.0 and the facial mocap afforded by the multiple sensors integrated into the HMD.
The VTuber application tested the HoloLens' Holographic Remoting capabilities, pushing various body joints and real-time movement to probe its performance limits.
Photogrammetry and 3D Scanning
Programming: C#, NVIDIA Instant NeRF
How an object feels is directly connected to how it sounds. Think of dropping a pillow versus dropping a glass vase. The pillow is soft and flexible, so it produces a low-frequency, low-amplitude sound wave. The vase, on the other hand, is a hard, reflective surface, so it produces a high-frequency, high-amplitude sound wave when dropped. As part of my research in haptics and CG integration, I recorded audio clips for various textures and materials with a standard microphone sensor placed on the surface of each object. The recorded audio clips are triggered when a CG asset is touched. When the computer's audio output is connected to the haptic sensor on my fingertip, the waveform is felt rather than heard.
In these projects, I experimented with various 3D scans I made using standard photogrammetry techniques and Instant NeRF 3D scanning to extract the unique waveform for each object. In the second video, you'll notice that sound plays when I pick up the CG assets. This is the waveform felt by my fingertip through the haptic sensor!
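The pillow-versus-vase contrast is easy to demonstrate in code. Here's a small NumPy sketch that synthesizes two decaying sine bursts; the specific frequencies, amplitudes, and decay rate are illustrative choices, not measured values:

```python
import wave
import numpy as np

SR = 44100  # sample rate (Hz)

def impact_wave(freq_hz, amplitude, seconds=0.5):
    """Synthesize a decaying sine burst approximating an impact sound."""
    t = np.linspace(0, seconds, int(SR * seconds), endpoint=False)
    envelope = np.exp(-6.0 * t)  # simple exponential decay
    return amplitude * envelope * np.sin(2 * np.pi * freq_hz * t)

# Soft, flexible object: low frequency, low amplitude
pillow = impact_wave(freq_hz=80, amplitude=0.2)
# Hard, reflective object: high frequency, high amplitude
vase = impact_wave(freq_hz=2000, amplitude=0.9)

for name, signal in (("pillow.wav", pillow), ("vase.wav", vase)):
    with wave.open(name, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)  # 16-bit PCM
        f.setframerate(SR)
        f.writeframes((signal * 32767).astype(np.int16).tobytes())
```

Routed to the fingertip actuator instead of a speaker, the vase's waveform would feel sharp and buzzy while the pillow's would feel like a dull thud.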
Texture-based Haptic Generator
Programming: C#, UDP/TCP IP
This project was very exciting to make. I developed two applications (a server and a client) which together function like a multiplayer lobby connecting a Mixed Reality headset to a PC. The HoloLens searches for the server PC and then shares data freely with it, including collisions and haptic feedback. Haptics are felt when the user wears the eRubber fingertip haptic sensor, the company hardware at the root of this research. The goal was to make AR/VR experiences more immersive by adding the sense of touch.
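The discovery step can be sketched with plain UDP sockets. This is a simplified Python stand-in for the actual C# server/client pair; the port number and message strings are hypothetical:

```python
import socket

PORT = 9000  # hypothetical discovery port

def run_server():
    """PC side: answer discovery pings so the headset learns our address."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    while True:
        data, addr = sock.recvfrom(1024)
        if data == b"DISCOVER":
            sock.sendto(b"SERVER_HERE", addr)

def find_server(timeout=3.0):
    """Headset side: broadcast a ping and wait for the server's reply."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(b"DISCOVER", ("<broadcast>", PORT))
    _, (server_ip, _) = sock.recvfrom(1024)
    return server_ip  # collision/haptic data then flows to this address
```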
Normally, audio waveforms are played on collision to produce vibration, but this project required more haptic variation than that traditional method could provide. I developed a unique approach for generating haptic feedback at runtime that uses the model's normal map as the main driver of the waveform. This captures the unique bumps and valleys of the model, even if it's swapped out for another. When the user palpates the arm, a standard sine wave is altered based on movement speed, pressure exerted, and tilt of the finger.
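In outline, the idea is: sample the normal map along the finger's path, convert surface steepness into a roughness signal, and use it together with speed, pressure, and tilt to modulate a base sine wave. This NumPy sketch shows one plausible version of that mapping; the real project's modulation functions aren't shown here, so treat every formula below as an assumption:

```python
import numpy as np

def haptic_wave(normal_map, path_uv, speed, pressure, tilt_rad, sr=1000):
    """Modulate a base sine wave with surface detail from a normal map.

    normal_map: H x W x 3 array of unit normals in [-1, 1]
    path_uv:    N x 2 array of UV coordinates traced by the fingertip
    """
    h, w = normal_map.shape[:2]
    # Sample the normal's z component along the finger path. Flat areas
    # (z near 1) contribute little; steep bumps and valleys contribute more.
    px = (path_uv[:, 0] * (w - 1)).astype(int)
    py = (path_uv[:, 1] * (h - 1)).astype(int)
    roughness = 1.0 - normal_map[py, px, 2]

    t = np.arange(len(path_uv)) / sr
    base_freq = 200.0 * speed  # assumed: faster strokes raise the pitch
    amplitude = pressure * (1.0 + roughness) * np.cos(tilt_rad)  # tilt reduces contact
    return amplitude * np.sin(2 * np.pi * base_freq * t)
```

Because the waveform is derived from whichever normal map the model carries, swapping the model automatically swaps the feel.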
In the video, I'm also experimenting with neural networks to produce soft-body deformation that responds to user touch. The red sphere was later replaced with VR hands and retained its functionality.