- Suriya Narayanan Lakshmanan
- Te-Li Wang
- Srinivasa Narasimhan (email@example.com)
About Suriya Narayanan Lakshmanan:
Suriya Narayanan, from MSCV ’18, was part of the Platform Pittsburgh project during 2018. His work focuses on building a simple surveillance pipeline whose computer vision blocks are implemented, connected, and deployed through software and systems engineering. In the pre-processing stage, his contributions include implementing video stabilization and exploring background subtraction techniques, contour detection, and clustering for ROI proposal. On the vision side, he explored object detection methods and object tracking. His major contributions, however, are in system and software engineering. One of the main goals of this project is to showcase an easy-to-use website that gives access to the installed cameras’ feeds and runs computer vision algorithms on those videos. His main work involved developing a modular framework that allows various computer vision blocks to be plugged in, and building a web interface with options to record video from a camera and to process a video saved to the filesystem. More on this can be found in the project presentation and the project summary page.
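The ROI-proposal idea mentioned above can be illustrated with a minimal sketch: subtract a background frame from the current frame, threshold the absolute difference, and propose a bounding box around the remaining foreground pixels. The function name, frames, and threshold below are illustrative assumptions, not the project's actual implementation (which explored background subtraction and contour clustering on real video):

```python
def propose_roi(background, frame, thresh=30):
    """Return a bounding box (x0, y0, x1, y1) around pixels that differ
    from the background by more than `thresh`, or None if nothing moved.
    Frames are nested lists of grayscale values (rows of pixels)."""
    xs, ys = [], []
    for y, (bg_row, fr_row) in enumerate(zip(background, frame)):
        for x, (b, f) in enumerate(zip(bg_row, fr_row)):
            if abs(f - b) > thresh:  # foreground pixel
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))

# Toy 6x6 grayscale frames: a bright 2x2 "object" appears in the frame.
background = [[10] * 6 for _ in range(6)]
frame = [row[:] for row in background]
for y in (2, 3):
    for x in (1, 2):
        frame[y][x] = 200

print(propose_roi(background, frame))  # (1, 2, 2, 3)
```

A real pipeline would maintain a running background model and cluster the foreground pixels into multiple ROIs rather than one box, but the subtract-threshold-box structure is the same.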
About Te-Li Wang:
Te-Li Wang was a student in the MSCV program who took on the Platform Pittsburgh project as his capstone. Together with his teammate Suriya, he focused on the software pipeline that adds computer vision capability to the camera network currently being deployed at several intersections near CMU. In particular, Te-Li worked on 3D object detection of road vehicles, which serves as one of the core applications for smart infrastructure. He developed two methods for this task. One uses high-precision wheel detection to estimate the pose of the vehicle; the other combines a 2D keypoint detection network with a custom perspective-n-point (PnP) algorithm that encodes each vehicle's pose in detectable 2D points. Results from both are shown in the results section of this website.
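To give a flavor of the wheel-based approach: wheel contact points lie on the ground plane, so if a homography from image coordinates to ground coordinates is known (e.g., from camera calibration), mapping the detected rear and front wheel points to the ground yields a vehicle position and heading. The homography, point values, and function names below are made-up assumptions for illustration, not Te-Li's actual pipeline:

```python
import math

def apply_homography(H, pt):
    """Map an image point through a 3x3 homography (row-major nested lists)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def vehicle_pose(H, rear_wheel_px, front_wheel_px):
    """Ground-plane position (midpoint of the wheelbase) and heading angle
    in radians, from rear and front wheel contact points in the image."""
    rx, ry = apply_homography(H, rear_wheel_px)
    fx, fy = apply_homography(H, front_wheel_px)
    heading = math.atan2(fy - ry, fx - rx)
    return ((rx + fx) / 2.0, (ry + fy) / 2.0), heading

# Made-up homography: a pure scaling, as if 1 pixel mapped to 0.5 ground units.
H = [[0.5, 0.0, 0.0],
     [0.0, 0.5, 0.0],
     [0.0, 0.0, 1.0]]
pos, heading = vehicle_pose(H, (100, 100), (130, 100))
print(pos, math.degrees(heading))  # (57.5, 50.0) 0.0
```

A real camera-to-ground homography has perspective terms in its bottom row, which is why the sketch divides by `w` rather than assuming it is 1.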