an AI-powered platform that converts Wikipedia articles into customizable videos. Uses IBM Watson's Text to Speech API for audio narration, and applies NLP techniques to summarize Wikipedia content and identify visual elements to accompany it.
The user flow:
- The user searches for a Wikipedia article on our platform
- The user starts video generation by specifying the desired length of the video
- The user can specify the formality of the video to match the target audience (e.g., for the classroom, or for sharing on TikTok & Instagram)
- The user can choose which of the many voices offered by IBM Watson's Text to Speech API will narrate the video (see the narration sketch after this list)
- The user can then specify what kind of background music they want playing in the video
- Once these choices are made, we summarize the Wikipedia article with co:here, synthesize the narration with Watson Text to Speech, extract keywords to find GIFs, videos, and images on Pexels and Tenor, and assemble everything into the final video (see the sketches below)
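As a rough illustration of the summarization step, the sketch below fetches an article with the `wikipedia` Python package and condenses it with the co:here SDK's summarize endpoint. The API key, the `summarize_article` helper, and the character-limit truncation are assumptions for the example, not the platform's actual code.

```python
import wikipedia
import cohere

co = cohere.Client("COHERE_API_KEY")  # placeholder API key

def summarize_article(query: str, length: str = "medium") -> str:
    """Fetch the best Wikipedia match for `query` and condense it with co:here."""
    title = wikipedia.search(query)[0]      # top search result
    text = wikipedia.page(title).content    # full plain-text article
    response = co.summarize(
        text=text[:100_000],                # truncate very long articles
        length=length,                      # "short" / "medium" / "long"
        format="paragraph",
    )
    return response.summary

print(summarize_article("Alan Turing", length="short"))
```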
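The narration step could look roughly like this with the `ibm-watson` SDK; the API key, service URL, output filename, and default voice are placeholders, and `narrate` is an illustrative helper rather than the project's own function.

```python
from ibm_watson import TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("WATSON_API_KEY")  # placeholder key
tts = TextToSpeechV1(authenticator=authenticator)
tts.set_service_url("https://api.us-south.text-to-speech.watson.cloud.ibm.com")  # placeholder URL

def narrate(summary: str, voice: str = "en-US_AllisonV3Voice", out_path: str = "narration.mp3") -> str:
    """Synthesize the summary into an MP3 using the user's chosen Watson voice."""
    audio = tts.synthesize(summary, voice=voice, accept="audio/mp3").get_result().content
    with open(out_path, "wb") as f:
        f.write(audio)
    return out_path
```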
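Finally, a sketch of how keywords might drive the Pexels and Tenor lookups. The simple capitalized-word heuristic, helper names, and API keys are assumptions standing in for the platform's real NLP-based keyword extraction.

```python
import requests
from collections import Counter

PEXELS_KEY = "PEXELS_API_KEY"   # placeholder
TENOR_KEY = "TENOR_API_KEY"     # placeholder

def top_keywords(summary: str, n: int = 5) -> list[str]:
    """Naive stand-in for keyword extraction: most frequent capitalized words."""
    words = [w.strip(".,():;") for w in summary.split() if w[:1].isupper() and len(w) > 3]
    return [w for w, _ in Counter(words).most_common(n)]

def pexels_images(keyword: str, per_page: int = 3) -> list[str]:
    """Search the Pexels photo API and return image URLs."""
    r = requests.get(
        "https://api.pexels.com/v1/search",
        headers={"Authorization": PEXELS_KEY},
        params={"query": keyword, "per_page": per_page},
    )
    return [photo["src"]["large"] for photo in r.json().get("photos", [])]

def tenor_gifs(keyword: str, limit: int = 3) -> list[str]:
    """Search the Tenor v2 API and return GIF URLs."""
    r = requests.get(
        "https://tenor.googleapis.com/v2/search",
        params={"q": keyword, "key": TENOR_KEY, "limit": limit},
    )
    return [item["media_formats"]["gif"]["url"] for item in r.json().get("results", [])]
```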