SDDS Researchers Teach Machines to Help Video Editors

Ask anyone who’s done it — video editing is time-consuming, tedious work. And with demand for new content across countless platforms higher than ever, the editor’s work is never done.

But thanks to Seneca Innovation’s artificial intelligence (AI) video categorization project with industry partner Vubble, human editors will now have some high-tech help.

Vubble, an IT and communications company based in Toronto and Waterloo, has tapped into the machine-learning expertise of Seneca’s School of Software Design & Data Science to integrate an automatic video categorization recommender into the manual editing process.

“This work is typically very labour intensive and expensive,” said Tessa Sproule, Co-founder and Co-CEO of Vubble. “We are at a pivotal moment in our communications world where we need technology to help us decipher and interpret the quality and validity of all the content we produce.”

The multi-year collaboration has resulted in Vubble being able to tag video content more quickly with appropriate category recommendations based on visual and audio information. This has enabled the company to meet the needs of a growing customer base that includes CTV News, TFO, Channel 4 News (U.K.) and the Canadian Film Centre.

The applied research project was led by Dr. Vida Movahedi, Professor, School of Software Design & Data Science, and was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) with support from the Southern Ontario Smart Computing Innovation Platform (SOSCIP).

For Dr. Movahedi and her students, developing a video categorization system involved training and evaluating machine-learning models that could predict appropriate categories based on content. That meant teaching the machines to understand what is happening in videos by feeding them the right images and audio transcripts.
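
The article does not describe the Seneca-built models in detail, but the transcript side of such a recommender can be illustrated with a small, hypothetical sketch: a TF-IDF representation of transcript text feeding a linear classifier that ranks candidate categories for editors. The transcripts, category labels and library choices below are illustrative assumptions, not the project's actual code.

```python
# Hypothetical sketch: suggesting categories from a video's audio transcript.
# The training data and taxonomy here are invented stand-ins; the actual
# Vubble/Seneca models and features are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: (transcript snippet, category label)
transcripts = [
    "the prime minister announced a new budget in parliament today",
    "the team scored in overtime to win the championship game",
    "researchers published a study on ocean temperatures and climate",
    "the festival opens with a screening of the director's new film",
]
labels = ["politics", "sports", "science", "arts"]

# TF-IDF over transcript words and bigrams feeding a linear classifier
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(transcripts, labels)

# Rank candidate categories for a new transcript; editors would see the
# top suggestions as recommendations, not as a final decision.
new_transcript = "voters head to the polls as the election campaign ends"
probs = model.predict_proba([new_transcript])[0]
ranked = sorted(zip(model.classes_, probs), key=lambda p: p[1], reverse=True)
print(ranked[:3])
```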

“Using machine-learning techniques, we can avoid manual categorization,” Dr. Movahedi said. “Even if it’s not 100 per cent accurate, the suggested categories are still helpful recommendations to the editors and curators.”

Using the Seneca platform, Vubble is now building a live audio transcription model that uses words and patterns to pick up cues in podcasts and videos so the content can be categorized accordingly.
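
As a rough illustration of how words and patterns might serve as cues once audio has been transcribed, the hypothetical sketch below counts keyword matches in a transcript and suggests the best-scoring categories. The cue lists, category names and suggest_categories function are assumptions for illustration; Vubble's actual transcription model and taxonomy are not described in the article.

```python
# Hypothetical sketch: pattern-based cues over a transcript.
import re
from collections import Counter

# Invented cue patterns per category
CUES = {
    "politics": [r"\belection\b", r"\bparliament\b", r"\bpolicy\b"],
    "health": [r"\bvaccine\b", r"\bhospital\b", r"\bpublic health\b"],
    "technology": [r"\bartificial intelligence\b", r"\bsoftware\b", r"\bstartup\b"],
}

def suggest_categories(transcript: str, top_n: int = 2):
    """Count cue hits per category and return the strongest matches."""
    text = transcript.lower()
    scores = Counter()
    for category, patterns in CUES.items():
        for pattern in patterns:
            scores[category] += len(re.findall(pattern, text))
    return [category for category, hits in scores.most_common(top_n) if hits > 0]

print(suggest_categories(
    "the panel discussed how artificial intelligence startups shape software hiring"
))  # e.g. ['technology']
```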

“All AI starts with humans teaching machines,” said Ms. Sproule, who is the former head of digital at CBC. “What we want to do is try to replicate the human curation skills in a machine form. We want to build a world based on not just what people want to see, but also what they need to see.”