
The ultimate guide to 60 fps frame rate conversion in 2022

AUTHOR:
Brian Matiash
Published:
September 7, 2021

Why you would want to convert video to 60 fps

An integral component of digital video and video games is the frame rate, commonly expressed in frames per second (fps). Unlike the perception that more megapixels alone means better image quality, increasing the frame rate of your footage has a material impact on its output. Today, one of the most popular and commonly used frame rates is 60 fps, and there are several good reasons for it. Footage with a lot of motion, such as sports or video games, plays back much more smoothly at a higher frame rate like 60 fps. It's also increasingly common for creators to record footage at 60 fps so that they can slow the motion down when conforming it to their project's native timeline.

The biggest problems occur when creators forget to set their cameras to record at 60 fps (this has happened to me more times than I'd care to admit 😅), don't have cameras capable of recording at 60 fps, or must compromise resolution to record at that higher frame rate. In other words, the camera can either record in 4K resolution at a lower frame rate or in HD at 60 fps. Historically, a creator would have to run a conversion process to increase the frame rate or upscale the resolution, and the output almost always suffered significantly. Fortunately, Video Enhance AI has Chronos, a new AI model built specifically for frame rate conversion that generates incomparably smooth, realistic footage. Chronos can also apply up to 2000% slow motion to your footage, as illustrated below.
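
To put those slow-motion numbers in perspective, here is a rough back-of-the-envelope calculation. The values are only illustrative, not settings prescribed by Video Enhance AI:

```python
# Conforming 60 fps footage to a 24 fps timeline plays back every captured
# frame, so motion runs 60 / 24 = 2.5x slower without inventing any frames.
# Pushing past what the capture rate provides (for example, the 2000% slow
# motion mentioned above) means the missing in-between frames must be
# synthesized by the frame rate converter.
capture_fps, timeline_fps = 60, 24
free_slowdown = capture_fps / timeline_fps        # 2.5x with no new frames
target_slowdown = 20                              # 2000% slow motion
extra_frames_per_interval = target_slowdown / free_slowdown - 1
print(free_slowdown, extra_frames_per_interval)   # 2.5 7.0
```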

Before we dive into the merits and benefits of 60 fps footage for video and video games, let's briefly define the most commonly used frame rates and what their intended purposes are.

Frame rates in a nutshell.

A video is a sequence of still images or frames played back at a specific rate to convey motion. Each second of footage comprises a particular number of frames; hence the unit "frames per second" or fps. So, a video clip filmed at 24 fps has 24 frames captured for each second of footage. The more frames recorded per second within a video clip, the smoother the viewer perceives the motion of objects. That is one of the driving factors that leads a creator to select one frame rate over another. Here's a great visual clip provided by Wistia that clearly illustrates how frame rate affects motion.
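
A quick way to see why more frames read as smoother motion is to look at how long each frame stays on screen. This small illustration is not tied to any particular clip:

```python
# Each frame is displayed for 1/fps seconds, so a higher frame rate means
# the image updates more often and motion appears more fluid.
for fps in (24, 25, 30, 60):
    print(f"{fps} fps: {fps * 10} frames in a 10-second clip, "
          f"each shown for {1000 / fps:.1f} ms")
# 24 fps: 240 frames in a 10-second clip, each shown for 41.7 ms
# 25 fps: 250 frames in a 10-second clip, each shown for 40.0 ms
# 30 fps: 300 frames in a 10-second clip, each shown for 33.3 ms
# 60 fps: 600 frames in a 10-second clip, each shown for 16.7 ms
```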

Creators tend to use specific fps values based on the intent of the footage and the desired aesthetic. For example, cinematographers often film at 24 fps because of the cinematic look they can achieve with it. US broadcasters and news channels have long used the NTSC standard of 30 fps (technically 29.97 fps) for television, which has more to do with the 60 Hz mains electricity frequency used in the US. In Europe, the PAL television standard is 25 fps because mains power there runs at 50 Hz.

So, why 60 fps?

You may be wondering where the higher 60 fps frame rate factors into this spectrum. As I previously mentioned, motion is portrayed more smoothly as you increase the frame rate of your clip. The experience of watching a 60 fps clip on a 60 Hz display (virtually all computer monitors, smartphones, and TVs) can be very pleasing. Fast motion looks more natural and less jarring at these higher frame rates. Somewhat counterintuitively, many viewers also find that 60 fps footage looks sharper and less jittery because of the smoother motion it conveys.

Watch the video below to see how a higher frame rate affects the quality of motion. I recorded this drone clip at 24 fps, and the source footage is on the left. Next, I used the Chronos model in Video Enhance AI to perform a frame rate conversion to 60 fps and placed it on the right. Do you see how much smoother the drone footage is as it races over the plants and shrubs?
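
It also helps to see why going from 24 fps to 60 fps isn't just a matter of repeating frames. The ratio is 2.5, so most of the new 60 fps frame positions fall between two source frames and have to be created from scratch. Here is a small illustration, independent of any specific tool:

```python
# Map each 60 fps output frame back onto the 24 fps source timeline.
in_fps, out_fps = 24, 60
for i in range(6):
    src_pos = i * in_fps / out_fps   # position measured in source frames
    note = "existing source frame" if src_pos.is_integer() else "needs a new frame"
    print(f"output frame {i}: source position {src_pos:.1f} -> {note}")
# output frame 0: source position 0.0 -> existing source frame
# output frame 1: source position 0.4 -> needs a new frame
# output frame 2: source position 0.8 -> needs a new frame
# output frame 3: source position 1.2 -> needs a new frame
# output frame 4: source position 1.6 -> needs a new frame
# output frame 5: source position 2.0 -> existing source frame
```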

Here are additional clips showcasing the smoother motion achieved using Video Enhance AI to convert native 24 fps and 30 fps footage to 60 fps. Do you see how much smoother the motion is on the right half of the video? That's one of the primary benefits of converting your source footage frame rate to 60 fps.

Gamer approved.

It's worth noting that the 60 fps frame rate doesn't only benefit video. One of the more recent trends with video game console platforms is to offer 60 fps performance, which provides a smooth, seamless experience for the player. Video game enthusiasts can take this a step further by pairing Video Enhance AI's stunning frame rate conversion models with its powerful upscaling models to elevate in-game cinematics and cutscenes. Digital Foundry, a YouTube channel specializing in game technology and hardware reviews, does exactly that, using Video Enhance AI's deep learning models to upscale a legacy video game trailer from its native 480i resolution to 4K. As quoted in the following video, "The use of AI upscaling in the industry is now becoming more commonplace. It's used a lot to remaster textures and to dramatically improve pre-rendered video sequences."

Not all frame rate conversions are created equal.

I used Video Enhance AI to perform the frame rate conversion on the previous drone clip even though similar utilities are built right into most popular video editing applications. The output quality is unmatched compared to similar frame rate conversion processes from Adobe Premiere and Apple Final Cut Pro. The reason boils down to how Video Enhance AI's neural network converts frame rates compared to other applications.

Frame rate conversion is incredibly challenging because it involves two complex steps (sketched in code after the list below):

  1. interpolating new frames from the original ones in a source clip and
  2. blending those generated frames so that they look realistic.
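
Here is a minimal skeleton of that two-step structure in Python. It's a generic sketch, not how any specific editor implements it; the `interpolate` callback stands in for whichever method (Frame Blending, Optical Flow, or a neural network) generates the in-between frames:

```python
def convert_frame_rate(frames, in_fps, out_fps, interpolate):
    """Generic frame rate conversion skeleton (illustrative only).

    `interpolate(frame_a, frame_b, t)` must return a synthesized frame at
    fractional position t (0.0-1.0) between two neighboring source frames.
    """
    n_out = int(round(len(frames) / in_fps * out_fps))   # frames the new clip needs
    out = []
    for i in range(n_out):
        src_pos = i * in_fps / out_fps                   # position on the source timeline
        a = min(int(src_pos), len(frames) - 1)
        b = min(a + 1, len(frames) - 1)
        t = src_pos - a                                  # 0.0 = frame a, 1.0 = frame b
        out.append(frames[a] if t == 0 else interpolate(frames[a], frames[b], t))
    return out
```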

Two of the more common methods of frame interpolation are Frame Blending and Optical Flow. Let's take a brief look at each before showcasing why both are subpar options compared to Video Enhance AI's advanced neural network.

The Frame Blending method.

Frame blending is a process of repeating or removing existing frames to achieve the desired clip speed. The frames are essentially duplicated and blended to illustrate motion. This method is undoubtedly fast, but the output is often crude and riddled with motion jitter. To show this, I slowed down the following clip by 400% to make it easier to compare Frame Blending in Apple Final Cut Pro with Chronos in Video Enhance AI. Do you see how choppy the Frame Blending footage is compared to Chronos?
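
As a rough sketch of what Frame Blending does under the hood (not Final Cut Pro's actual implementation), the in-between frame is simply a weighted average of its two neighbors, which is what produces the doubled, jittery look on fast motion:

```python
import numpy as np

def blend(frame_a, frame_b, t):
    """Frame Blending: cross-fade the two nearest source frames.

    Fast and simple, but moving objects appear ghosted/doubled instead of
    actually moving, which the eye reads as judder.
    """
    mixed = (1 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return mixed.astype(np.uint8)

# Plugs straight into the convert_frame_rate() skeleton shown earlier:
# smooth_60 = convert_frame_rate(frames_24, 24, 60, blend)
```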

The Optical Flow method.

Optical Flow is a more advanced algorithmic process that analyzes a clip to determine the pixel-by-pixel direction of motion from frame to frame. The algorithm uses this analysis to generate new frames and blend them with their neighbors. Many people regard Optical Flow as a superior interpolation method compared to Frame Blending. However, one of its biggest pitfalls is occlusion: in some cases, pixels in one frame block pixels that are visible in the next frame, and the Optical Flow blending process tends to have difficulty managing this. The result is often distracting artifacts wherever pixels are occluded between frames.

Like the Frame Blending comparison, I slowed the same clip down by 400% and compared Optical Flow in Apple Final Cut Pro with Chronos in Video Enhance AI. While the Optical Flow output improves on the Frame Blending example, it still suffers from an uncanny effect due to this interpolation method.
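
For comparison, here is a heavily simplified Optical Flow interpolator using OpenCV's Farneback flow estimator. It is only a sketch of the general technique, not the algorithm Final Cut Pro or Video Enhance AI uses, but it reproduces both the idea and the occlusion weakness described above:

```python
import cv2
import numpy as np

def optical_flow_interpolate(frame_a, frame_b, t):
    """Synthesize a frame at fractional position t using dense optical flow."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Per-pixel motion vectors from frame_a to frame_b.
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = gray_a.shape
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))

    # Warp each source frame part of the way toward time t, then blend.
    warp_a = cv2.remap(frame_a, gx - t * flow[..., 0], gy - t * flow[..., 1],
                       cv2.INTER_LINEAR)
    warp_b = cv2.remap(frame_b, gx + (1 - t) * flow[..., 0], gy + (1 - t) * flow[..., 1],
                       cv2.INTER_LINEAR)

    # Where objects occlude each other, the two warps disagree and the blend
    # shows exactly the ghosting artifacts discussed above.
    return cv2.addWeighted(warp_a, 1 - t, warp_b, t, 0)
```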

The Video Enhance AI Chronos method.

Video Enhance AI uses a new approach based on extensive machine learning to "predict" the best intermediate frame between existing ones, which gives you much better results with a broader variety of material. We built a robust neural network to iterate and improve on the current Optical Flow method. Part of our frame interpolation method uses Optical Flow to generate new frames, but our neural network adds a secondary refinement mechanism that analyzes and intelligently corrects occlusion issues.

This refinement mechanism is what sets Video Enhance AI apart from any other frame rate conversion utility. We spent months training our neural network on servers with 8 GPUs, using multiple megawatts of power and a massive library of tens of millions of video frames. Analyzing those frames produced between 50 and 100 million parameters, a memory that our neural network learns from and draws on. This ability to learn and remember sets Video Enhance AI apart, especially from the Optical Flow method.

Optical Flow is built on a static, human-designed algorithm that can't adapt with the scale and intelligence of our neural network. Our neural network uses this training memory to intelligently fill in frame details and outperform other interpolation methods. Video Enhance AI also stands apart because its neural network can interpolate at any position between two frames, whereas other networks can only interpolate at the midpoint.
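
To make that interface difference concrete, here is a toy, untrained sketch in PyTorch. It is emphatically not Chronos or anything close to it; it only illustrates the two points above: the network learns corrections from data rather than following a fixed rule, and the target time t is an input, so a frame can be synthesized at any position between the two sources rather than only at the midpoint:

```python
import torch
import torch.nn as nn

class ToyFrameInterpolator(nn.Module):
    """Toy learned interpolator (illustrative sketch only, not Chronos)."""

    def __init__(self):
        super().__init__()
        # 3 + 3 color channels from the two input frames, plus 1 channel for t.
        self.net = nn.Sequential(
            nn.Conv2d(7, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, frame_a, frame_b, t):
        # Broadcast the scalar time t into a full-resolution input channel;
        # this is what lets the model target any point between the frames.
        b, _, h, w = frame_a.shape
        t_plane = torch.full((b, 1, h, w), float(t), device=frame_a.device)
        x = torch.cat([frame_a, frame_b, t_plane], dim=1)
        # Predict a learned correction on top of a naive cross-fade; training
        # on real footage is what would teach it to fix occlusion errors.
        naive = (1 - t) * frame_a + t * frame_b
        return naive + self.net(x)

# Usage sketch, with frames as (batch, 3, H, W) tensors in [0, 1]:
# model = ToyFrameInterpolator()
# frame_at_40_percent = model(frame_a, frame_b, t=0.4)
```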

How to use Video Enhance AI for 60 fps frame rate conversion.

Video Enhance AI makes it especially easy to perform frame rate conversion tasks and apply up to 2000% slow motion to your video clips. I'll walk you through how to use the Chronos model to achieve both tasks in no time.

Try Video Enhance AI for free.

You should now have a better understanding of why the 60 fps frame rate is so popular, what the limitations of other frame rate conversion utilities are, and why Video Enhance AI is superior to them. Visit our product page for Video Enhance AI and click on the "Try for Free" button to download free trials that never expire.

AUTHOR:
Brian Matiash

Brian Matiash is a South Florida photo educator who serves enthusiast photographers looking to grow their landscape, travel, and wildlife skills. Learn more on his website and YouTube.