This is an automated archive made by the Lemmit Bot.
The original was posted on /r/SFWdeepfakes by /u/lawless_c on 2023-06-07 13:29:43+00:00.
Apologies if people here are already familiar with this.
For context: you can use multiple photographs of the same thing to create a single higher-resolution image.
Could the same be applied to deepfakes?
Two methods I've thought of:
At the conversion stage, instead of getting just one output, you run the input multiple times, offset slightly each time vertically and horizontally (or applying a jitter).
Then you apply the stacked result to the final video (rough sketch below).
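Roughly what I mean, as a numpy sketch (here `run_model` is just a stand-in for whatever conversion step produces a swapped frame, not a function from any particular tool):

```python
import numpy as np

def jitter_stack(run_model, frame, offsets):
    """Run the conversion on slightly shifted copies of a frame,
    shift the outputs back into alignment, and average them."""
    stacked = np.zeros(frame.shape, dtype=np.float64)
    for dy, dx in offsets:
        # Shift the input frame by a small (dy, dx) offset.
        shifted = np.roll(frame, shift=(dy, dx), axis=(0, 1))
        out = run_model(shifted)
        # Undo the shift so every output lines up before averaging.
        stacked += np.roll(out, shift=(-dy, -dx), axis=(0, 1))
    return (stacked / len(offsets)).astype(frame.dtype)

# e.g. stack over a small 2x2 jitter pattern:
# result = jitter_stack(run_model, frame, [(0, 0), (0, 1), (1, 0), (1, 1)])
```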
Alternatively:
You make multiple downsampled versions of the video you want to deepfake (trying to simplify my explanation here): with two videos, one would use columns 1, 3, 5… of pixels, the second would use columns 2, 4, 6…
You run your lower-resolution model on both of these videos, then merge the outputs back into something higher resolution (sketch below).
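A sketch of the column split/merge idea (again `run_model` is a stand-in for the lower-resolution conversion step; numpy is 0-indexed, so the split is columns 0, 2, 4… and 1, 3, 5…, and this assumes the frame width is even so both halves are the same size):

```python
import numpy as np

def split_columns(frame):
    """Split a frame into two half-width frames: one from columns
    0, 2, 4, ... and one from columns 1, 3, 5, ..."""
    return frame[:, 0::2], frame[:, 1::2]

def merge_columns(out_a, out_b):
    """Interleave two half-width outputs back into a full-width frame."""
    h, w = out_a.shape[:2]
    merged = np.empty((h, w * 2) + out_a.shape[2:], dtype=out_a.dtype)
    merged[:, 0::2] = out_a
    merged[:, 1::2] = out_b
    return merged

# a, b = split_columns(frame)
# full = merge_columns(run_model(a), run_model(b))
```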
A similar method is also used in astrophotography for making HDR images.