Project Resurrect uses deep learning models to synthesize audio and video and sync them together, recreating famous speeches that can then be used for holograms, virtual actors, and more. We refine these videos by continually feeding the models new data, with the goal of producing seamless video in real time.
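
The sketch below illustrates the overall idea only: one model synthesizes the speech audio, another renders lip-synced video, and the two streams are muxed into a single clip. The class names `SpeechSynthesizer`, `TalkingHeadGenerator`, and the function `recreate_speech` are hypothetical placeholders, not the project's actual code; only the ffmpeg muxing step is concrete.

```python
"""Conceptual pipeline sketch (hypothetical names, not the project's real API)."""
import subprocess
from pathlib import Path


class SpeechSynthesizer:
    """Placeholder for a text-to-speech model matched to the target speaker."""

    def synthesize(self, transcript: str, out_path: Path) -> Path:
        # A real implementation would run a trained neural TTS model and
        # write a WAV file here; this stub only marks where that happens.
        raise NotImplementedError("plug in the trained TTS model")


class TalkingHeadGenerator:
    """Placeholder for a model that renders lip-synced video from audio."""

    def render(self, audio_path: Path, out_path: Path) -> Path:
        # A real implementation would drive face and lip motion from the
        # synthesized audio and write a silent video clip.
        raise NotImplementedError("plug in the trained video model")


def recreate_speech(transcript: str, workdir: Path) -> Path:
    """Synthesize audio, render matching video, and mux them into one clip."""
    audio = workdir / "speech.wav"
    video = workdir / "frames.mp4"
    final = workdir / "speech_final.mp4"

    SpeechSynthesizer().synthesize(transcript, audio)
    TalkingHeadGenerator().render(audio, video)

    # ffmpeg copies the rendered video stream and attaches the synthesized
    # audio, keeping the two in sync on a shared timeline.
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(video), "-i", str(audio),
         "-c:v", "copy", "-c:a", "aac", "-shortest", str(final)],
        check=True,
    )
    return final
```

Splitting audio synthesis, video rendering, and muxing into separate steps keeps each model swappable as it is retrained on new data, which is how incremental refinement of the output videos would fit into this structure.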