Note: You can find a higher quality trailer in this article.
NOTE: I hadn't realized this hoster reduced the quality of the video so much. If you really want to see the quality of the above video, download the video itself here (139 MB).
As a little side-project, I have been working on putting the artificial neural networks of AI Gigapixel to the test and having them upscale another favorite thing of mine... Star Trek: Deep Space Nine (DS9).
The State of Deep Space Nine
Just like Final Fantasy 7, whose backgrounds, textures, and videos I am upscaling for the Remako mod, DS9 was relegated to a non-HD future. While the popular Original Series and The Next Generation were mostly shot on film, the mid-90s DS9 had its visual effects shots (space battles and such) shot on video. While you can rescan analog film at a higher resolution, video is digital and can't be rescanned. That makes it much costlier to remaster this show, which is one of the reasons it hasn't happened.
Remastering Star Trek: Deep Space Nine With Machine Learning
This is where neural networks could come in, I thought. With tools like AI Gigapixel, I knew it might be possible to scale the low-definition frames of DS9 up to a higher resolution such as 1080p or 4K. It would never be the same as a proper remaster, but it would be a step in the right direction. So I tried my hand at a frame or two, to see what it could do. The results were great. AI Gigapixel uses neural networks trained on real photos, so while it did okay upscaling the video game renders of Final Fantasy, it did an amazing job on the real-life footage and the bigger-budget CGI effects of DS9.
Here are some examples below:
Original 480p definition (click to enlarge)
DS9 Enhanced 1080p definition (click to enlarge)
Original 480p definition (click to enlarge)
DS9 Enhanced 1080p definition (click to enlarge)
These still frames showed promise. In the first set of images, the maintenance crewmen in their spacesuits were nothing more than a few pixelated blobs. The upscaling process turned those blobs into much more defined figures.
The close-up of the hand also improved. The creases and folds of the fingers and hand look much more detailed, and the baseball really shows off its sheen and the intricate stitching.
Moving Images
The real test, however, was whether the upscaling process would hold up over a sequence of frames, in other words a video. Would there be artifacts or other unsightly issues? AI Gigapixel was, after all, made for upscaling single images, so it doesn't take the relation between the individual frames of a moving image into account.
So I set about upscaling a portion of an episode. I settled on the season six episode "Sacrifice of Angels", a great Dominion War episode with both epic space battles and more personal face-to-face moments. And so I set to work.
I will go into greater detail about my process in a future blog post, but it took me about two days to get everything extracted, upscaled, and put back together in a way that was pleasing. That covered only the first five minutes of the episode (the recap, the opening scene, and the intro), which is still a pretty good time for a mid-to-high-end PC running software that isn't limited to professionals.
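Until that post is up, here is a rough sketch of the extract and reassemble steps around the Gigapixel run. It is only a minimal example that shells out to ffmpeg; the file names, directories, and frame rate are placeholders rather than my exact settings.

```python
# Minimal sketch: extract frames -> (manual Gigapixel upscale) -> reassemble.
# Assumes ffmpeg is installed; names and frame rate are placeholders.
import subprocess
from pathlib import Path

SOURCE = "ds9_clip.mp4"   # hypothetical source clip
FPS = "24000/1001"        # adjust to whatever the source actually uses

Path("frames").mkdir(exist_ok=True)
Path("upscaled").mkdir(exist_ok=True)

# 1. Export every frame as lossless PNG, so the upscaler doesn't
#    end up enhancing compression artifacts.
subprocess.run(["ffmpeg", "-i", SOURCE, "frames/frame_%06d.png"], check=True)

# 2. Upscale everything in frames/ to upscaled/ with AI Gigapixel (a manual GUI step).

# 3. Rebuild the video from the upscaled frames and copy over the original audio track.
subprocess.run([
    "ffmpeg",
    "-framerate", FPS, "-i", "upscaled/frame_%06d.png",
    "-i", SOURCE,
    "-map", "0:v", "-map", "1:a",
    "-c:v", "libx264", "-crf", "18", "-pix_fmt", "yuv420p",
    "-c:a", "copy",
    "enhanced_1080p.mp4",
], check=True)
```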
The result left me pretty awestruck. It looked better than I had hoped. No weird issues or anything. It looked pretty much like an HD version of DS9. Since (moving) pictures are worth more than a thousand words, here are two comparison videos that show off the improvement I was able to get with this machine learning based upscaling technique.
The first shows off the before-and-after situation with still frames.
The second puts the two videos side by side. Note how much clearer and sharper the Enhanced version looks.
I highly recommend watching all these videos through your YouTube app on your TV if possible. It gives you more of a sense of how it would feel watching an enhanced DS9 on your TV.
Comparisons are all well and good, but what does it look like if you were to watch it normally? Below is a video of the first five minutes of the episode in full 1080p:
What About 4K?
Honestly, I don't know. While I can upscale the image to a 4K resolution, I don't have a TV or monitor with a native 4K resolution to see if it looks better. I have nonetheless made this video, and I am interested to hear from people with 4K equipment whether it looks better than the 1080p version of the intro.
This nearly melted my computer, as 4K is a lot more intensive to upscale than 1080p, so I'll stick to this single video for the 4K examples of DS9 Enhanced.
UPDATE: A month or so after I made the above videos, I released a much improved 4K comparison trailer. This shows off both space scenes and conventional scenes from DS9:
What's Next?
Since I do not own DS9, I can't just do what I want with it. While I would love to release full episodes, that is simply not legally possible. These videos serve more as a proof of concept for CBS to look into machine learning and neural networks as a way to help remaster DS9 and move it a bit closer to the HD era.
Imagine what a real team could do, with more powerful equipment, custom-trained neural networks (perhaps trained on TNG vs. TNG Remastered images), and access to the original SD masters instead of the DVD rip I worked from.
What I will do is go into further detail about my process, which will be the subject of a future blog post.
Let me know what you think.
Comments
Looking forward to your next blog post. I would love to try it on my own.
Hopefully in the future, we might see some of these machine learning upscalers become plug-ins for programs like Premiere, which could make these kinds of enhancements much easier.
Gigapixel is indeed not suitable for large projects. It can load a thousand or so frames without trouble, but go beyond that and it has to fit too much into memory at once. Some batch system that loads images one at a time would be best for this (see the sketch below).
Still, it works well enough to get a few nice clips that let us dream.
Good luck!
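To illustrate the idea, here is a hypothetical sketch of such a batch loop. Gigapixel itself has no scripting interface that I know of, so upscale_image() below is only a stand-in for an upscaler you can drive from code; the point is that frames are read, processed, and written one at a time, keeping memory use flat.

```python
# Hypothetical batch loop: upscale_image() is a placeholder, not a real
# Gigapixel API. Frames are handled one at a time so memory use stays
# flat regardless of how many frames an episode has.
from pathlib import Path

def upscale_image(src: Path, dst: Path) -> None:
    """Placeholder for a scriptable upscaler."""
    raise NotImplementedError

def batch_upscale(in_dir: str, out_dir: str) -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for frame in sorted(Path(in_dir).glob("*.png")):
        upscale_image(frame, out / frame.name)  # only one frame in flight

# e.g. batch_upscale("frames", "upscaled")
```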
3 GB for the exported PNG frames (you need that format so that the upscaler does not upscale compression artifacts)
5.5 GB for the upscaled frames in JPG at maximum quality. For a final, commercial product you'd use PNG as well, since you wouldn't want any quality loss, but for my tests JPG was fine.
Multiply both by 9 (a 45-minute episode) and you get roughly 76 GB per episode.
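As a quick sanity check of those numbers (all figures are the estimates above, for roughly five minutes of footage):

```python
# Back-of-the-envelope disk usage per episode, using the figures above.
png_export_gb = 3.0       # lossless PNG frames fed to the upscaler (~5 min)
jpg_upscaled_gb = 5.5     # upscaled frames as maximum-quality JPG (~5 min)
chunks_per_episode = 9    # a 45-minute episode is about nine such chunks

total_gb = (png_export_gb + jpg_upscaled_gb) * chunks_per_episode
print(f"~{total_gb} GB per episode")  # ~76.5 GB, i.e. roughly 76 GB
```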
And honestly, you could set one person up for a year and they might be able to recreate all of the shots, considering how expensive and slow it was back then. Is that a possible thing?
All the best
Sven
I'd love to know what the studio makes of this proof of concept. Well done.
I own AI Gigapixel and use it to enlarge JPEGs, and it does a far better job than Photoshop. I would like to try this on smaller Trek videos (deleted scenes for props or costumes I own), but I have no idea how to extract the stills and reassemble them. Can you post how that is done or point me to a resource? Thanks.
So could it be that you are far below the minimum requirements?
RTX 2080 (non-Ti) @ 2000 MHz + AI Gigapixel = 1 upscaled frame per second
--> 25 fps x 60 seconds x 45 minutes per episode = 67,500 frames
67,500 frames = 67,500 seconds of rendering time, i.e. 18.75 (ca. 19) hours of rendering per episode.
My PC consumed ca. 500 watts constantly while rendering, which equals ca. 9.5 kWh per episode (in Germany, where I live, that's ca. 2.6€ at 0.28€/kWh).
26 episodes x 7 seasons = 182 episodes (x 2.6€) = ca. 500€ in electricity costs alone :(
This does not factor in the cost of buying AI Gigapixel (I'm currently running the trial).
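For reference, the same back-of-the-envelope calculation in a few lines of Python; every input is an assumption from this comment (1 frame per second, a 25 fps source, a constant ca. 500 W draw, 0.28€/kWh, 182 episodes), not an official figure:

```python
# Reproduces the estimate above; every input is an assumption from the
# comment, not a measured or official figure.
fps = 25                       # PAL source frame rate
minutes = 45                   # runtime of one episode
frames = fps * 60 * minutes    # 67,500 frames
hours = frames / 3600          # ~18.75 h at 1 upscaled frame per second
kwh = 0.5 * hours              # ~9.4 kWh at ~500 W (rounded to 9.5 above)
episodes = 26 * 7              # rough episode count used above (182)
cost_eur = episodes * kwh * 0.28   # ~478€, i.e. ca. 500€ for the whole series
print(f"{frames} frames, {hours:.2f} h and {kwh:.1f} kWh per episode, "
      f"ca. {cost_eur:.0f}€ total")
```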
I find the idea of an updated DS9 or VOY awesome - hopefully sometime in the future the process will not be so tedious, time-consuming, and energy-hungry ;)
Best wishes from Germany and thank you for showing what is possible.
Update 1: I am currently trying to render parts of an episode in 4K - I can open 20,000 frames at a time in AI Gigapixel (i5 7500k, 16 GB RAM, RTX 2080 with 8 GB of video memory) without a problem. My Task Manager tells me that video memory never goes above 4.8 GB, while system memory use goes up to 6.3 GB (total). I am using AI Gigapixel v4.03.
The total render time for a double episode (90 min) would be 6 days :( and the resulting video file would be 2.2 TB (uncompressed) - this is not something you'd want to do on the whole library. Hopefully somebody at CBS sees the potential and greenlights a remaster.
This is based on an FFmpeg guide, so the audio parts of that guide take care of it.