
Robotic camera rigs are frequently used in the film industry to shoot more interesting clips that would be difficult to capture with traditional camera equipment. However, feedback from our industry partner has revealed that programming trajectories for such a robot is time-consuming and difficult.
Traditionally, the camera operator is told how the start and end of a shot should look, either through storyboard sketches or directly by the director. The typical workflow for configuring the shot is shown in Figure 1. The camera operator programs the move either with a controller or by manually entering values for each joint in a software package called FLAIR Classic, and each position is saved as a key frame. FLAIR then drives the rig by linearly interpolating the motion between key frames. Depending on the resulting footage, the camera operator then modifies parts of the motion to suit the needs of the director.
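For concreteness, the sketch below illustrates the kind of per-joint linear interpolation between key frames described above. It is only an illustration of the general technique, not FLAIR Classic's actual implementation; the function name, NumPy-based code, and example joint values are our own.

```python
import numpy as np

def interpolate_keyframes(keyframes, times, t):
    """Linearly interpolate joint values between saved key frames.

    keyframes: (N, J) array of joint values, one row per key frame.
    times:     (N,) array of times at which each key frame is reached.
    t:         query time.
    Returns a (J,) vector of interpolated joint values at time t.
    """
    keyframes = np.asarray(keyframes, dtype=float)
    times = np.asarray(times, dtype=float)
    t = np.clip(t, times[0], times[-1])
    # Find the segment [i, i+1] that contains t.
    i = np.searchsorted(times, t, side="right") - 1
    i = min(i, len(times) - 2)
    # Fractional position within that segment.
    alpha = (t - times[i]) / (times[i + 1] - times[i])
    return (1.0 - alpha) * keyframes[i] + alpha * keyframes[i + 1]

# Example: a rig with three joints and two key frames, queried at t = 1.0 s.
keys = [[0.0, 10.0, -5.0],   # joint values at the start key frame
        [30.0, 0.0, 15.0]]   # joint values at the end key frame
print(interpolate_keyframes(keys, [0.0, 2.0], 1.0))  # -> [15.  5.  5.]
```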
Therefore, our goal is to create a cinematographic system for robotic camera trajectory planning that is both intelligent and intuitive for the user. Our work presents a system that allows the camera operator to create camera trajectories using either a phone or voice commands and to adjust active trajectories through natural language. It enables the user to create camera movements that are smooth and well framed.