Indian Startup Makes First AI Movie. No Actors

Neural networks are no longer the future but the present. Every day brings news of another startup (China's DeepSeek comes to mind), a new scientific breakthrough, or yet another way to replace human labor.

AI has long been established in cinema, too. It helps write scripts, voice characters, create special effects, and even transform one actor into another. Recall The Brutalist, where AI was used to refine the actors' Hungarian dialogue, or Emilia Pérez, where it augmented the lead's singing voice.

Neural networks have already cut movie trailers, as with 20th Century Fox's Morgan, but now the talk is about trusting them with the entire process. The Indian startup Intelliflicks Studios has announced that in 2025 it will release the first full-length film made by AI from start to finish. And, of course, in the best Bollywood tradition: with songs, dances, and plenty of special effects (how else, in India).


What kind of film are they planning to make?

Has anyone read Khushwant Singh's "Maharaja in Jeans" (2014)? (Not to be confused with the famous writer of the same name.) No? The book is nonetheless hugely popular in India. Bollywood studios bought the film rights twice, but each time abandoned the project: according to the author, the production turned out to be too complicated and expensive.

What's so special about this story? It's a historical fiction novel about Hari Singh Sandhu, a teenager who believes he is the reincarnation of Maharaja Ranjit Singh. A little context: Ranjit Singh was a historical ruler, the founder of the Sikh Empire, and one of the most revered Sikhs (followers of Sikhism, a religion that emerged in India in the 15th century). For his achievements he earned the nickname "The Lion of Punjab."

The protagonist, Hari Singh, is a young and wealthy man living in modern Chandigarh, until one day he begins to see strange visions. In these memories he is none other than Maharaja Ranjit Singh himself, the legendary ruler who founded the Sikh Empire.

His girlfriend tries to figure out what is going on, while Hari's reality becomes ever more intertwined with history, contrasting modern values with those of Punjab's "Golden Age."

And here is where the most interesting part begins. Filming this story would mean recreating three eras at once, on the scale of "Gladiator" or "Troy". The project was so large that Bollywood turned it down twice, deciding it was too expensive. Now neural networks have taken up the task.

What is Intelliflicks Studios startup?

In 2022, Khushwant Singh ran into an old friend, Gurdeep Pall. Pall had been a vice president at Microsoft, had a hand in creating Skype and Teams, and in recent years had turned to AI projects. Having learned about the troubles with adapting "Maharaja in Jeans", he proposed an unconventional solution: since conventional studios couldn't cope, why not make the film with neural networks?

“And we decided to take a pioneering step to show what the technology could do,” Singh says.

This is how the startup Intelliflicks Studios was born, set up to run a real experiment: making the first ever fully AI-generated film. No actors, camera operators, make-up artists, or lighting technicians; the entire creation process will be entrusted to algorithms.

"We believe that our film is a turning point in the history of cinema," Pall says.

Khushwant Singh and Gurdeep Pall 

The creators have not yet said how much the experiment costs, but for context, the average Bollywood film costs about $25 million, and the most expensive projects exceed $150 million. Using neural networks clearly cuts spending on crew, sets, and special effects; exactly by how much remains an open question.

How neural networks create a film

While the developers have not yet revealed all the details of the project, it is known that "Maharaja in Jeans" will be released in 2025. The studio has already prepared the first trailer.

Gurdeep Pall did share some technical details. Intelliflicks Studios uses a combination of commercial and open-source generative AI tools. First, static images are created as a kind of storyboard, which are then "fed" to a custom neural network to generate video fragments. Gradually, short scenes are assembled into full-fledged footage.

Then AI editing comes into play, connecting the generated pieces according to a clearly defined script. Additionally, specialized algorithms are used to create sound, voice acting, and synchronize lip movements with speech.
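The staged workflow described above can be sketched as a simple Python pipeline. Everything here is a hypothetical illustration of the structure, not Intelliflicks' actual code: the function names and data fields are stand-ins invented for this sketch.

```python
# Hypothetical sketch of the storyboard -> clips -> edit -> audio pipeline.
# All function names and fields are invented for illustration.

def generate_storyboard(script_scenes):
    """Stage 1: turn each scripted scene into a few static key images."""
    return [{"scene": s, "key_frames": [f"{s}-frame-{i}" for i in range(3)]}
            for s in script_scenes]

def animate(storyboard):
    """Stage 2: feed key images to a video model to get short clips."""
    return [{"scene": entry["scene"], "clip": entry["key_frames"]}
            for entry in storyboard]

def edit(clips, script_order):
    """Stage 3: AI editing joins the generated clips in script order."""
    by_scene = {c["scene"]: c for c in clips}
    return [by_scene[s] for s in script_order]

def add_audio(timeline):
    """Stage 4: generate sound and voices, and sync lips to speech."""
    for clip in timeline:
        clip["audio"] = f"voice and score for {clip['scene']}"
        clip["lip_sync"] = True
    return timeline

script = ["palace-vision", "chandigarh-street", "dance-number"]
film = add_audio(edit(animate(generate_storyboard(script)), script))
```

The point of the sketch is the strict ordering: each stage consumes only the output of the previous one, which is why a continuity error introduced early (a wrong storyboard frame) propagates all the way to the finished scene.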

Of course, the process is not without human intervention. The team has two specialists who touch up the image and fix discrepancies in color and lighting between scenes. The developers also manually tune the models to reflect Indian culture across the different eras: costumes and architecture can be recreated from historical data, but subtler things cause difficulties.

In one scene, for example, the heroine performs the traditional Kathak dance. But there was not enough data on it in open sources for the neural network to generate the movements correctly. So the team thought outside the box: they invited a professional dancer, recorded her performance, and then had the AI "replace" her face.

But the main difficulty lies elsewhere. Modern neural networks still cannot maintain the visual consistency of characters: the same character can look different from scene to scene, because generation is probabilistic and each output is sampled anew. Artists who use neural networks to create images run into the same problem.

To combat this, the team developed a system of digital tags that helps the AI "recognize" characters and preserve their appearance when generating new scenes. This doesn't solve the problem entirely, but it does make editing much easier.
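The studio has not published how its tags work, but the general idea of pinning a character to a fixed identity descriptor can be sketched as follows. The tag registry, field names, and embedding IDs below are all assumptions made up for illustration:

```python
# Hypothetical sketch of "digital tags" for character consistency:
# each character gets a fixed descriptor that is re-injected into
# every generation request, so the model is always conditioned on
# the same identity features instead of re-sampling them per scene.
# All names, fields, and IDs here are invented for illustration.

CHARACTER_TAGS = {
    "hari":   {"face_id": "emb-1142", "costume": "denim jacket", "era": "modern"},
    "ranjit": {"face_id": "emb-0907", "costume": "royal robes",  "era": "1820s"},
}

def build_prompt(scene_text, characters):
    """Attach each character's fixed tag to the scene prompt so the
    appearance stays stable across scenes."""
    tags = "; ".join(
        f"{name}[face={tag['face_id']}, wearing {tag['costume']}, {tag['era']}]"
        for name, tag in ((n, CHARACTER_TAGS[n]) for n in characters)
    )
    return f"{scene_text} || characters: {tags}"

p1 = build_prompt("Hari walks through a modern market", ["hari"])
p2 = build_prompt("Hari dreams of the royal court", ["hari", "ranjit"])
```

Because the same `face_id` string is attached to Hari in both prompts, the generator receives identical identity conditioning each time, which is the whole point of the scheme.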

How the project is assessed by experts

Jamie Umpherson of Runway, a startup whose neural network generates video on demand, says that the most successful projects in video generation are those that understand the technology's limitations and keep improving within them. A fully generated feature film of two to three hours, however, is still a stretch in his view.

“Most of our clients, including film studios, advertising agencies and independent artists, use this technology for rapid prototyping early in the creative process or to create visual effects that complement live action. 

It is, of course, possible to create a fully generated film, but at the current stage of the technology the time and effort can be comparable to shooting a real one, and the result is unlikely to be of high quality."

Abe Davis, an associate professor of computer science at Cornell University, agrees. The problem, he says, is that neural-network tools take minimal user input, often just text prompts, and so miss important details that would be immediately apparent to anyone watching the movie.

“Yes, even a garage startup, as in the case of Intelliflicks, can create a full-fledged film. But the longer it is and the more details it contains, the more difficult it is to control everything at the production stage. 

On a normal production, all of this is handled by the director and the rest of the crew across the many stages of making a film. With AI, there is always the chance that it will produce nonsense, and that it will be easy to miss."

Khushwant Singh, author of the bestselling "Maharaja in Jeans" and co-founder of the startup, understands that the result may look very different from today's films.

"Of course, our product will be very different from the films everyone is used to. But this is the first step toward breaking the modern paradigm of cinema, where you work only if you have money and connections. Neural networks open up opportunities for ordinary people to become real artists and significantly democratize the entire industry."

How Neural Networks Have Already Changed Cinema (and What Will Happen Next?)

In cinema, neural networks have long been indispensable. They help improve special effects, work with actors’ voices, write scripts, and even manage the filming process. But if we talk about digital technologies in cinematography, it all started back in 1973, when CGI was first used in Westworld.

Later, in the late 1970s, elements of 3D graphics appeared in Star Wars and Alien. But when Tron, one of the first films to rely heavily on computer graphics, was released in 1982, viewers did not appreciate it, and Hollywood studios temporarily lost interest in the technology.

In the 1990s, computing power grew and CGI came back into active use. The breakthrough was Jurassic Park (1993), where dinosaurs were convincingly "brought to life" for the first time. And after the release of Toy Story (1995), the first fully computer-animated feature, it became clear that digital technologies were the future.

Since the early 2000s, machine learning has been increasingly embedded in the film industry. Neural networks help capture actors’ movements and create realistic character animation. Today, it is difficult to imagine large-scale blockbusters and the VFX industry as a whole without them.

One example of AI at work: rejuvenating the actors’ faces in the film “The Irishman”

And over the past 20 years, as technology has advanced, neural networks have begun to penetrate even deeper into the filmmaking process. Here are some examples of how they are used:

AI assistants for writing scripts
In 2016, director Oscar Sharp shot a short film called Sunspring, whose script was entirely generated by a bot named Benjamin. It was trained on scripts from the '80s and '90s and produced new dialogue by recombining phrases. Today ChatGPT and its analogues write far more coherent texts, which became one of the issues behind the Hollywood screenwriters' strike.

Casting actors
Neural networks help analyze the target audience and predict which actors are best suited for roles. Netflix, for example, uses AI to evaluate scripts before filming begins.

Rejuvenation and resurrection of actors
In The Irishman (2019), Robert De Niro was digitally de-aged by decades with the help of machine learning. And Thanos in Avengers: Infinity War (2018) was animated using Digital Domain's Masquerade system.

AI voice acting and voice copying
AI can recreate the voices of actors even if they are no longer alive, and can reshape the voices of living ones. Recently, controversy erupted around "The Brutalist", where AI was used in post-production to refine the lead actors' Hungarian pronunciation.

Automation of filming and editing of trailers
In 2016, the trailer for the film "Morgan" was assembled with the help of IBM's Watson, which selected candidate scenes for a human editor. Now there are startups like Axibo that train neural networks to predict camera movements and automate the filming process.

On the one hand, filming automation is simply the next step in technology development. If you can speed up the process, save money, and minimize human labor, why not use such a tool?

On the other hand, AI directors are unlikely to compete with real ones yet. Neural networks can generate textures and scenes, but will they be able to convey human emotions?