Artificial intelligence in storytelling: machines as co-creators

Sunspring made its debut at the Sci-Fi London film festival in 2016. Set in a dystopian world of mass unemployment, the film attracted many fans, with one viewer describing it as funny and strange. But the most notable aspect of the movie involves its creation: an artificial intelligence (AI) bot wrote Sunspring’s script.

“Wow,” you think. “Maybe machines will replace human storytellers, just as self-driving cars may replace drivers on the road.” However, a closer look at Sunspring might raise some doubts. One character inexplicably coughs up an eyeball, and one critic points out that the conversations often sound like “a series of unrelated sentences.” It seems we will still need rumpled writers hunched over keyboards until the technology advances. So let’s imagine a less extreme scenario: Can machines work with humans to improve the storytelling process?

Imagine how this could work in the rich medium of video. As usual, human storytellers would create scripts with clever twists and realistic dialogue. Artificial intelligence would enhance their work by providing insight into the story’s emotional pull, for example, by identifying music or visual images that help evoke a sense of hope. This groundbreaking technology could increase revenue for storytellers, helping them thrive in a world where audience demand seems limitless.

The MIT Media Lab recently investigated the potential of this human–machine collaboration in video. Our team asked: Can machines identify the common emotional arcs in video stories, the typical swings of fortune in which characters struggle through difficult times, triumph over hardship, fall from grace, or declare victory over evil? If so, can storytellers use this information to predict how audiences will respond? These questions resonate with everyone involved in video storytelling, from amateur YouTube creators to studio executives.

Before we get into the study, let’s talk about emotional arcs. From Spielberg to Proust to Pixar, the masters of story have always excelled at stirring our emotions. With an instinctive feel for pacing, they tune their stories to provoke joy, sadness, and anger at crucial moments. But even the best storytellers can produce uneven results, and some Shakespeare plays leave audiences feeling indifferent or detached. (There are not many Cymbeline fans out there.) What accounts for this variation? We theorize that a story’s emotional arc largely explains why some films resonate while others fall flat.

The concept of emotional arcs is not new. Every master storyteller is familiar with them, and some have tried to catalog the most common patterns. Consider Kurt Vonnegut’s take on arcs. The most popular arc, he claimed, follows the Cinderella pattern. At the start of the story, the protagonist is in a hopeless situation. Then comes a sudden rise in fortune, in Cinderella’s case delivered by a fairy godmother, followed by further trouble. No matter what happens along the way, a Cinderella story ends on a winning note, with the hero or heroine living happily ever after.

There is evidence that a story’s emotional arc can affect audience engagement, for example, how many people comment on a video on social media or recommend it to their friends. In a study at the University of Pennsylvania, researchers reviewed New York Times articles to see whether certain types were more likely to make the list of most-emailed stories. They found that readers most often shared stories that triggered strong emotional reactions, especially those that encouraged positive feelings. It is logical to think that moviegoers may respond in the same way.

Some researchers have already used machine learning to identify emotional arcs in stories. One method, developed at the University of Vermont, has a computer scan text, such as a video script or the contents of a book, to construct an arc, along the lines of the sketch below.
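To make the idea concrete, here is a minimal sketch of that kind of lexicon-based text analysis: slide a window across the words of a script or book and average per-word valence scores to trace an arc. The tiny LEXICON and the emotional_arc function are illustrative stand-ins, not the Vermont team’s actual code, which relies on a much larger crowd-rated word list.

```python
# Toy valence lexicon on a 1-9 scale (9 = most positive); illustrative only.
LEXICON = {
    "happy": 8.5, "love": 8.4, "win": 7.9,   # positive words
    "sad": 2.4, "death": 1.5, "fail": 2.7,   # negative words
}

def emotional_arc(text: str, window: int = 50, step: int = 25) -> list[float]:
    """Average valence of known words in each sliding window (5.0 = neutral)."""
    words = text.lower().split()
    arc = []
    for start in range(0, max(len(words) - window, 1), step):
        chunk = words[start:start + window]
        scores = [LEXICON[w] for w in chunk if w in LEXICON]
        arc.append(sum(scores) / len(scores) if scores else 5.0)
    return arc

# Example: a crude arc over a tiny "story" that starts happy and ends badly.
print(emotional_arc("love and happy days then death and fail and loss", window=4, step=2))
```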

We decided to go further. As part of a broader collaboration between the MIT Lab for Social Machines and McKinsey’s Consumer Tech and Media team, we developed machine-learning models that rely on deep neural networks to “watch” small slices of video (films, TV shows, and short online features) and estimate their positive or negative emotional content second by second.

These models consider all aspects of a video: not just the plot, characters, and dialogue but also more subtle touches, such as a close-up of a face or a snippet of music that plays during a car chase. When the content of every slice is considered together, the story’s emotional arc emerges.
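The exact models are not public, so the following is only a rough sketch of the visual half of the idea: sample one frame per second, run each frame through a pretrained image backbone with a one-unit valence head, and collect a per-second score. The valence head here is untrained, and every name (visual_valence_per_second, the ResNet-18 backbone) is an illustrative assumption; a real system would fine-tune on valence-labeled frames and fuse audio features as well.

```python
import cv2
import torch
import torchvision.models as models
import torchvision.transforms as T

# Standard ImageNet preprocessing for the pretrained backbone.
transform = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Pretrained image backbone with a single-output "valence" head.
# NOTE: the head is untrained here; in practice it would be fine-tuned
# on frames labeled for positive/negative emotional content.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 1)
backbone.eval()

def visual_valence_per_second(path: str) -> list[float]:
    """Sample one frame per second of video and score each for valence."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 24.0
    scores, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % int(fps) == 0:          # one frame per second
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            x = transform(rgb).unsqueeze(0)    # add batch dimension
            with torch.no_grad():
                scores.append(backbone(x).item())
        frame_idx += 1
    cap.release()
    return scores
```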

Think about that: a machine can view unlabeled video and construct an emotional arc based on all of its audio and visual elements. This is something we have never seen before.

Consider the opening sequence of Up, a popular 3-D computer-animated hit. The film centers on Carl Fredricksen, a grumpy old man who, after the death of his wife Ellie, ties thousands of balloons to his house so he can fly to South America. To devote most of the film to Carl’s adventure, the writers needed a quick way to convey the complex backstory behind his travels. That is where the opening sequence comes in. Aside from the movie’s score, it is silent, and the emotion comes through as scenes from Carl’s life play across the screen. (We also generated an arc for the entire movie, but the sequence is a great way to narrow the view.)

You can see the montage’s highs and lows in the chart (Chart 1). The X axis is time, in minutes, and the Y axis is visual valence, the degree to which the imagery evokes positive or negative emotions, as scored by the machine at that particular moment. The higher the score, the more positive the mood. As with all of our analyses, we created similar charts for the machine’s response to the audio and to the video as a whole. We focus on the visual charts here and elsewhere, since the visuals are where we will concentrate our analysis of emotional input.
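As a rough illustration of how such a chart could be produced, the sketch below smooths a list of per-second valence scores with a moving average and plots them against time in minutes. The window size and labels are assumptions for illustration, not the study’s actual parameters.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_emotional_arc(valence_per_second: list[float], window: int = 30) -> None:
    """Smooth per-second valence with a moving average and plot the arc."""
    v = np.asarray(valence_per_second, dtype=float)
    kernel = np.ones(window) / window
    smooth = np.convolve(v, kernel, mode="same")  # simple moving average
    minutes = np.arange(len(v)) / 60.0            # seconds -> minutes
    plt.plot(minutes, smooth)
    plt.xlabel("Time (minutes)")
    plt.ylabel("Visual valence (higher = more positive)")
    plt.title("Emotional arc")
    plt.show()
```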
