Hollywood Shouldn’t Trust Big Tech AI (But There’s A Way Around That)

By John Attard

There have been a lot of think pieces and analyses on how Hollywood will react after recent meetings between OpenAI and entertainment industry players, from the heads of major and independent studios to talent agency execs. And yes, film and TV producers like me are considering the offers on the table, so here’s what I have to say: Hollywood can’t trust Big Tech with AI.

It’s no great revelation that AI is inevitable in every aspect of our lives, and it will one day replace much of what our current technology kingpins provide us.

Enter Microsoft (OpenAI’s biggest backer), Google and Meta, among others, peddling the idea that AI can be accessed only through their centralized services. Big Tech is pushing products like OpenAI’s Sora as the ultimate content engine, promising infinite content with a few keystrokes.

It’s a race for dominance in the AI future, and the stakes couldn’t be higher. The more we are convinced that AI is the dominion of the Big Tech titans, the more likely they are to gain ground in this new land grab.  

But does their current pitch make sense, and is it even applicable to creators? The answers, fortunately, are no and no.

Copyright

Let’s assume for a moment that we’re actually drinking the Kool-Aid OpenAI is selling with Sora, a singular model that can supposedly do anything, and apply that premise to the work we do on a daily basis. We saw some amazing footage from Sora, very believable photorealistic footage. (My favorite was the puppies in snow; who doesn’t love puppies?)

We don’t really know much about Sora beyond the examples we’ve been shown, so on what was the model trained? Maybe it was trained only on material selected specifically to produce these great, frightening, awe-inspiring results? If not, we would have to assume it was trained on everything that has ever existed, which seems to be the claim.

As incredible as that sounds, if we assume it’s true we immediately run into some problems, problems already playing out in the many lawsuits underway against AI companies. Where did the training data come from? Did OpenAI film everything in the world, or did it acquire the data from elsewhere?

I think it’s safe to believe the company didn’t film everything in the world, so I’m guessing it acquired the data. What did that contract look like, if it exists at all? Based on The New York Times’ lawsuit against Microsoft and OpenAI for copyright infringement of its literary works, I suspect OpenAI might not have a clean chain of title for all the material it has used to train its models.

Governmental Oversight

Governments are increasingly being called on to regulate AI companies and enhance transparency and accountability. Unfortunately, President Biden’s 2023 executive order did little, if anything, to regulate the collection of data for training machine learning models and instead focused on the safe implementation of AI. Some argue it would have been more effective had it dealt with the cause rather than the effects, but at least the European Union has closed that gap.

After three years of development, the EU’s AI Act cleared its final bureaucratic hurdle when the European Parliament voted to approve it. Under the new law, AI companies developing “general purpose AI models,” including language models, must create and maintain technical documentation demonstrating how they built the model and how they are adhering to copyright law.

The companies are also required to publish a readily available summary of the training data used. This marks a significant departure from the tech industry’s current secrecy and necessitates an overhaul of data management practices, which, judging by the plethora of lawsuits, are dubious at best.

Companies with powerful AI models, such as GPT-4 and Gemini, face stricter requirements. They must conduct model evaluations, assess risks, implement cybersecurity measures and report any incidents of AI system failure, with non-compliant companies risking hefty fines or even product bans in the EU.

Bias

Let’s stop pretending we are capable of producing anything devoid of bias with machine learning. OpenAI and Google have calamitously failed in this endeavor, yet they have doubled down anyway, an expression of both a lack of understanding of human nature and more than a fair amount of hubris.

For example, ChatGPT, which prides itself on being an LLM (large language model) free of bias, is in fact riddled with it. It fails to account for the fact that human beings require bias to make sense of the way the world works, a need that manifests in belief, morality and societal structures such as laws.

The attempt to eliminate this aspect of human perception is in and of itself a form of bias, not to mention that these decisions are made by humans, each with their own worldview of what a lack of bias looks like.

The sensible path would be to acknowledge the bias and lean into it. The current market for chatbots is saturated with versions of ChatGPT that do exactly that. If bias didn’t exist and weren’t necessary, there would be no chatbot market. In the case of Google’s Gemini image generator, the attempt to eliminate bias resulted in the product generating some incredibly offensive material.

In storytelling, bias is an incredibly important tool: a writer must establish a universe and its mechanics very quickly, relying on a shared morality to define a protagonist and an antagonist. Morality is a bias without which a story encapsulated in a 90-minute film would be incomprehensible.

So How Do Creators Escape Big Tech’s AI Web?

Think customization rather than pulling from Big Tech’s centralized pool of data. Models that generate everything from scripts to imagery should be trained only on the creators’ own intellectual property. Creators can make strong use of AI and machine learning in combination with other production tools, whether building custom models or generating content from those models, to amplify each creative individual involved.
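
To make that concrete, here is a minimal sketch, in Python with the open-source Hugging Face libraries, of fine-tuning a small open language model solely on a creator’s own writing. The base model, the file name my_scripts.txt and the settings are illustrative assumptions, not a prescription; the point is that the chain of title for every line of training data is known.

```python
# A minimal sketch of "train only on your own IP": fine-tuning a small
# open language model on a creator's own scripts. The base model, file
# name and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "gpt2"  # any small open model the creator is licensed to use

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# The corpus is the creator's own material and nothing else.
dataset = load_dataset("text", data_files={"train": "my_scripts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my_studio_model", num_train_epochs=3),
    train_dataset=tokenized,
    # mlm=False selects standard next-token (causal) language modeling
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("my_studio_model")  # the creator keeps the weights
```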

As it’s being presented today, AI isn’t a threat to creators. In order for Big Tech to provide what it’s selling, the companies would have to change the very fabric of intellectual property rights and overcome impossible obstacles of bias. 

The best news about AI is that, as a tool customized to the creator, it’s an accelerator of creativity, an adjunct that liberates creators from mundane tasks and lets them explore iterations of original ideas with lightning speed. The toolset for accessing AI in a customized workflow is not only already here but has become relatively easy to use.
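
How easy? Here is an equally minimal, equally hypothetical usage sketch that loads the custom model saved in the earlier example and spins out several variations on an opening scene heading; the prompt and sampling settings are assumptions for illustration.

```python
# Illustrative only: generate drafts from the custom model saved above.
from transformers import pipeline

generate = pipeline("text-generation", model="my_studio_model")
drafts = generate(
    "INT. RECORDING STUDIO - NIGHT",  # a hypothetical prompt
    max_new_tokens=120,
    num_return_sequences=3,
    do_sample=True,  # sample so the three drafts actually differ
)
for draft in drafts:
    print(draft["generated_text"], "\n---")
```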

Unfortunately for Big Tech, the genie is out of the bottle, and everyone can have one.

John Attard is founder of Nashville-based Showdog Studio, which produces feature films, episodic television, documentaries and animation. With over two decades as a visual effects producer, he has worked at Warner Bros., Disney, VFX shops including MPC and Mill Film, and software makers Avid and Autodesk.
