NotebookLM is Google’s more research-focused AI tool: It’s built on the same underlying models as Gemini, but with an interface that’s less chatty and friendly. Like Gemini, it regularly gets upgrades and new features, and the latest update adds “cinematic” video summaries.
One of the tasks NotebookLM is perfect for is collecting a bunch of information in one notebook—PDFs, web links, YouTube videos—and then having it explained and summarized (it’s a great study tool in that respect). You may remember that a couple of years ago it added the ability to generate realistic-sounding podcasts, called Audio Overviews, from your notebooks.
The new Cinematic option in Video Overviews.
Credit: Lifehacker
Last year we got Video Overviews, but these were more like slideshows than mini movies—they basically put the contents of a notebook into something that might have been made in PowerPoint. With this new cinematic upgrade, however, you’ll get video summaries that are much more animated and three-dimensional.
There are some rather hefty caveats: You need to be on the top-tier, $250-per-month Google AI Ultra plan to access the new Cinematic option for Video Overviews; additionally, it’s only available to users aged 18 and above, and only in English. The feature may trickle down to everyone eventually, but for now you need to be a relatively well-off AI enthusiast to access it.
It’s a lot of money to spend on AI each month, but there’s plenty more to Google AI Ultra, including a subscription to YouTube Premium and 30TB of Google Drive space to store all those AI-generated images and videos you’ll be making. In Gemini and NotebookLM, you get higher usage limits on just about everything.
To put the new cinematic overviews to the test, I created a new notebook in NotebookLM based on a single source: a paper published by Apple researchers on the “illusion of thinking” exhibited by Large Reasoning Models (LRMs), such as those that NotebookLM taps into. It’s a dense, 39-page study—exactly the sort of lengthy document you might need summarized by AI.
The cinematic difference
While there are mobile apps available for NotebookLM, the interface is easier to manage on the web. If you’re new to the tool, you can click Create new notebook to get started, then point NotebookLM towards your sources—whether it’s plain text you want to paste in, or an Apple AI study you want to upload. The app can even search the web for other relevant sources.
Once you’ve actually got some information together, the Video Overview option is in the Studio panel on the right. You can pick between a Brief overview or a more detailed Explainer style, and choose a template for your overview too. There’s also space to give some pointers as to how the video should be structured. If you’re a paid-up Google AI Ultra member, you also get a Cinematic option.
First, I requested an Explainer-style Video Overview of the Apple AI paper. It took around 15 minutes to generate, and came in at a little over six minutes long. As you can see above, it does a decent job of explaining the contents of the paper, and giving some of the reasons why LRMs get stuck on complex tasks.
It’s a very static slideshow, but it’s well laid out, and the illustrations generally make sense, with one or two aberrations: the first Claude Sonnet graph has two extra lines that shouldn’t be there. It gives you a decent understanding of the paper, but I wouldn’t want to rely on it 100 percent (AI “can be inaccurate” is the disclaimer Google displays across Gemini and NotebookLM).
The Cinematic-style Video Overview took more than 50 minutes to generate and is more than seven minutes long, and you can see it below. On the plus side, it goes into more detail, and I felt I knew more about the topic after watching it. All the charts were copied over from the paper correctly, and some of the animations were genuinely useful.
However, NotebookLM clearly struggled with certain animations: when trying to show someone drawing on a page, for example, or stacking blocks in a Tower of Hanoi puzzle. These problems are evident in other AI-generated videos too, because these models don’t really understand the real world or physics—the vast amounts of video footage they’ve been trained on give them a good idea of where to place pixels, but not of how objects should interact with each other.
I did prefer the Cinematic Video Overview overall, though clearly the results are going to vary depending on the sources that you feed NotebookLM—outside of research papers it might get a bit more creative and interesting. The visual errors are kind of distracting, though, and ultimately the standard Video Overviews that everyone has access to work well enough without any enhancements.
