Video Boost with Night Sight was announced in October 2023 alongside the Google Pixel 8 series. The rollout began some weeks later, and only for the “Pro” model. It is a technology that uses AI to significantly improve the videos you capture with your Pixel device.
The two elements of the technology work together, each handling different things. “Video Boost” focuses on improving parameters such as dynamic range, contrast, color, stabilization, and level of detail. “Night Sight,” meanwhile, brings to video the low-light mode that the Google Camera app has long offered for photos.
In the latest Made by Google Podcast episode, the company offered more details about the development of Video Boost with Night Sight. The team revealed what exactly they were looking for with this technology. They also mentioned the main challenges they encountered along the way.
Google’s focus when developing Video Boost with Night Sight
Google’s main goal was to “unlock the best video quality ever that you’ve seen on a smartphone.” Pixel devices are known for their high-quality photos, but video recording has never been on the same level. With this feature, Google aims to close the quality gap between photo and video on its phones.
Regarding the feature’s exclusivity to the Pixel 8 Pro, they said they are “just starting on Pixel 8 Pro to do that for our users.” While not an official confirmation of anything, this opens the door for Video Boost with Night Sight to reach more Pixel devices in the future.
The challenges the dev team encountered
Regarding the challenges, the team first pointed to the limited processing power of smartphones. A key difference between Night Sight for photos and Night Sight for video is where the processing happens. For photos, it is carried out on-device; for video, the feature relies on cloud-based AI, because smartphones still lack the power for such complex video processing locally.
This is not surprising, since this task can be demanding even for desktop PCs. Video files are, after all, far larger and more complex than photos. Regarding the use of the cloud for video processing, the Google team said the following:
“So we can decide how many frames that we want to align and merge, and how are we going to merge them. We can even take this to another level. We can even do much better. We can even run more complex algorithms simultaneously. For example, we can run deblurring. We can run stabilization. We can run color correction.”
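Google has not published the internals of this pipeline, but “aligning and merging” frames is the core idea behind classic multi-frame noise reduction. As a rough illustration of that one step, here is a minimal Python/NumPy sketch; the `align_and_merge` function and the simulated frames are hypothetical, the burst is assumed to be already aligned, and a real system would add motion alignment, deblurring, stabilization, and color correction on top.

```python
import numpy as np

def align_and_merge(frames):
    # Toy stand-in for the "align and merge" step: assume the burst
    # is already spatially aligned and simply average it. Averaging
    # frames of the same scene suppresses zero-mean sensor noise
    # while preserving the static image content.
    stack = np.stack([f.astype(np.float32) for f in frames])
    merged = stack.mean(axis=0)
    return np.clip(merged, 0, 255).astype(np.uint8)

# Simulate a burst of five noisy captures of a flat gray scene.
rng = np.random.default_rng(0)
clean = np.full((480, 640), 128.0, dtype=np.float32)
frames = [clean + rng.normal(0, 25, clean.shape) for _ in range(5)]

merged = align_and_merge(frames)
# Noise drops by roughly sqrt(5): ~25 before, ~11 after.
print(frames[0].std(), merged.astype(np.float32).std())
```

Averaging N independently noisy frames cuts random noise by roughly a factor of √N, which is why merging more frames, something cloud hardware makes affordable, pays off so much in low light.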
While turning to the cloud for processing has the advantage of being able to harness the full power of Google’s servers, it also means that you can’t use the feature without an internet connection.
To top it off, another big challenge was calibrating the processing to avoid problems in the final result: lags between audio and video, chromatic aberrations, random flickering, and similar artifacts. After all, no one would want to use a tool that constantly produces unusable results.