To harness the technology's potential, researchers at the MIT Media Lab and their colleagues at the University of California at Santa Barbara and Osaka University have built an open-source, easy-to-use character generation pipeline that combines AI models for facial gestures, voice, and motion. It can be used to create a variety of audio and video outputs.
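As a rough sketch of how such a pipeline might be composed in code, the snippet below wires stand-ins for the three kinds of models into a single generation step. The class and function names are illustrative assumptions, not the team's actual interface.

```python
# Hypothetical sketch: the names below are illustrative, not the project's real API.
from dataclasses import dataclass


@dataclass
class CharacterSpec:
    """Inputs describing the character to generate."""
    face_image: str    # path to a reference portrait
    voice_sample: str  # path to a short recording of the target voice
    script: str        # text the character should speak


def generate_facial_gestures(spec: CharacterSpec) -> dict:
    # Stand-in for a face-animation model driven by the script.
    return {"frames": f"gesture frames for {spec.script!r}"}


def synthesize_voice(spec: CharacterSpec) -> dict:
    # Stand-in for a voice-cloning / text-to-speech model.
    return {"audio": f"speech in the voice of {spec.voice_sample}"}


def generate_motion(spec: CharacterSpec) -> dict:
    # Stand-in for a body-motion model.
    return {"pose": "motion sequence"}


def run_pipeline(spec: CharacterSpec) -> dict:
    """Compose the three models into one audiovisual output."""
    return {
        "video": generate_facial_gestures(spec),
        "audio": synthesize_voice(spec),
        "motion": generate_motion(spec),
    }


if __name__ == "__main__":
    spec = CharacterSpec("portrait.png", "sample.wav", "Hello, class!")
    print(run_pipeline(spec))
```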

The pipeline also stamps the resulting output with a traceable, legible watermark to distinguish it from original video content and to show how it was generated.
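One simple way to make an output traceable, sketched below, is to pair the generated file with a manifest that records how it was made. This is an illustration under assumed names, not the researchers' actual watermarking scheme, which also includes a visible mark in the video itself.

```python
# Illustrative provenance stamp (hypothetical, not the project's actual scheme):
# pair the generated video with a manifest recording its generation recipe.
import hashlib
import json
from pathlib import Path


def stamp_output(video_path: str, models_used: list, source_media: list) -> str:
    """Write a sidecar manifest that ties the output file to how it was created."""
    digest = hashlib.sha256(Path(video_path).read_bytes()).hexdigest()
    manifest = {
        "output_sha256": digest,   # binds the manifest to this exact file
        "generated": True,         # flags the clip as synthetic, not original footage
        "models": models_used,     # which AI models produced it
        "sources": source_media,   # reference media the character was built from
    }
    manifest_path = video_path + ".provenance.json"
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))
    return manifest_path


# Example use after generation:
#   stamp_output("lecture.mp4", ["face-animator", "voice-cloner"], ["portrait.png"])
```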

The technology itself suggests how instruction could be customized to your interests, your context, and your idols, and how it could change over time.