Upscaled backgrounds: https://drive.google.com/open?id=1zzAK1 ... a2SySqcuZu Feel free to edit/use them as you wish.
I took a pause from trying to train a custom model. The weather is too warm to run my GPU at full load for days in a row.
Cutscene upscaling
Re: Cutscene upscaling
Great, I'll submit them in the PR.
THANKS!
Re: Cutscene upscaling
These really are amazing. How did you do this?! Added for inclusion.
Only one is left though (missionend.png), something you want to do? or rather not?
Re: Cutscene upscaling
Okay, Phase II: we get it, you're amazing. Just kidding.
The cutscenes and the backgrounds are incredible. Keep up the good work!

Re: Cutscene upscaling
That image had much lower source quality. I think I have repaired it as much as I can: https://drive.google.com/open?id=1VtQ4e ... lpvEZKLt_w
For individual images, I use ESRGAN with multiple different model chains (upscale model, then downsample; downsample, then upscale model; artifact-removal model, then upscale model, then downsample; ...). Then I combine all the resulting images through some combination of pixel averaging, median blending, and manually selecting patches. This aims to get the best of what each separate model can produce.
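As an illustration only (not the exact script used here), the blending step might look like this in Python with NumPy and Pillow. The file names and the 50/50 median/mean mix are placeholders; the actual weights would be tuned by eye per image:

```python
# Sketch: combine several ESRGAN-chain variants of the same image.
# Assumes all inputs were already resized to the same target resolution.
import numpy as np
from PIL import Image

# Hypothetical outputs of different model chains for one source image.
paths = ["chain_a.png", "chain_b.png", "chain_c.png"]
stack = np.stack([np.asarray(Image.open(p).convert("RGB"), dtype=np.float32)
                  for p in paths])

median = np.median(stack, axis=0)    # robust against per-model artifacts
mean = stack.mean(axis=0)            # smoother, keeps detail all models agree on
blended = 0.5 * median + 0.5 * mean  # arbitrary 50/50 mix; tune by eye

Image.fromarray(np.clip(blended, 0, 255).astype(np.uint8)).save("combined.png")
```

Manually selected patches would then be pasted over the blended result in an image editor.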
For cutscenes, I automate more (it is not reasonable to manually edit every frame...): split the video into frames, divide the frames into categories (e.g. logo, map, animation, ...), process each category with the model chain that seems to give the best results, and render the frames back into a video.
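The frame round-trip could be done roughly like this (the post doesn't name the tool; ffmpeg is just one common choice, and the paths and frame rate are examples):

```python
# Sketch of the split/re-render round-trip using ffmpeg via subprocess.
import subprocess

# 1. Split the source cutscene into numbered PNG frames.
subprocess.run(["ffmpeg", "-i", "cutscene.avi", "frames/%06d.png"], check=True)

# 2. ...sort frames into per-category folders and run the chosen model
#    chain on each category (omitted here)...

# 3. Re-render the processed frames back into a video at the original rate.
subprocess.run(["ffmpeg", "-framerate", "15", "-i", "processed/%06d.png",
                "-c:v", "libx264", "-pix_fmt", "yuv420p", "upscaled.mp4"],
               check=True)
```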
Later this year, I aim to improve my cutscene upscales. My own trained models look like they will yield better results, but there are still some quirks to work out of them. It is just too warm to keep training them right now.
Re: Cutscene upscaling
Thank you for all of your efforts. These really are exceptional upscales.
As a heads-up: I hope we can transition from the current video format to VP9 + WebM container in the future.
Not sure precisely when that will happen, but I wanted to make you aware since you are in the middle of your upscaling and conversion work. Perhaps we can upload the raw(?) format somewhere so it can be reconverted / recompressed in the future. (What initial output formats does your pipeline generate?)
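For reference, a typical VP9 + WebM encode with ffmpeg might look like the sketch below. The CRF and threading settings are illustrative defaults, not a project decision:

```python
# Sketch: a plausible VP9 + WebM distribution encode via ffmpeg.
import subprocess

subprocess.run(["ffmpeg", "-i", "upscaled.mp4",
                "-c:v", "libvpx-vp9",
                "-crf", "32", "-b:v", "0",  # constant-quality mode
                "-row-mt", "1",             # row-based multithreading
                "cutscene.webm"], check=True)
```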
Re: Cutscene upscaling
Intermediate data is one PNG per frame. Frame sizes range from around 100 KB to 1.5 MB, so the size adds up quickly after a couple of clips. I could pack the frames into a lossless VP9 stream (uncertain how much that would shave off the size) to support future re-encoding.
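Packing the PNG frames losslessly could be done with ffmpeg roughly as follows (the poster doesn't specify a tool; the frame rate and paths are placeholders). A lossless master like this can later be re-encoded to any target format with no generation loss:

```python
# Sketch: pack per-frame PNGs into a lossless VP9/WebM archival master.
import subprocess

subprocess.run(["ffmpeg", "-framerate", "15", "-i", "frames/%06d.png",
                "-c:v", "libvpx-vp9", "-lossless", "1",
                "master_lossless.webm"], check=True)
```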
Re: Cutscene upscaling
So I tried doing the same thing: I made several versions of a single video with different AI models, AA, denoising, deblocking, and so on. But it still gets me that I'd prefer remakes instead.
the link: https://mega.nz/folder/pdFWzZLR#rjr0TtRiaeIDNTporWBCVA