[Q] Neural Network Deinterlacing in ChaiNNer #3044

Open
bbgdzxng1 opened this issue Oct 26, 2024 · 0 comments
Firstly, love the application. It is a refreshing pleasure to be able to install an application and have it JustWork on a Mac, especially on Apple Silicon. After battling with video tools such as avisynth, vapoursynth and Hybrid on Apple hardware & software for many years, it is wonderful to land on a solution that allows a user to focus on a workflow rather than hardware and software compatibility.

Niceties and compliments over, my question concerns the practicality of using ChaiNNer to apply neural-network-based deinterlacing after the Load Video node.

One of the interesting use-cases of ChaiNNer is SD video restoration, where interlaced sources are de rigueur. In my restoration of Betacam SP mezzanines (and VHS to some extent), optimal full-temporal-resolution deinterlacing (bwdif & nnedi), dot-crawl removal and temporal denoising (hqdn3d, nlmeans or Neat Video) significantly improved the likelihood of success with subsequent neural network enhancement and super-resolution models. These pre-processing steps make a huge difference, especially on sources of dubious quality (i.e. digitizations of analog-recorded content).
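
For reference, a minimal VapourSynth sketch of that classical pre-processing chain, assuming the third-party bwdif and hqdn3d plugins are installed; the source filename, field order and denoise strengths here are illustrative assumptions, not tuned recommendations:

```python
import vapoursynth as vs

core = vs.core

# hypothetical capture file; any source filter (ffms2, lsmas, bestsource) works
clip = core.ffms2.Source("betacam_capture.mkv")
clip = core.std.SetFieldBased(clip, 2)  # mark the clip as top-field-first

# field=3 -> double-rate deinterlace, top field first (e.g. 25i -> 50p),
# preserving full temporal resolution as described above
clip = core.bwdif.Bwdif(clip, field=3)

# light spatio-temporal denoise (hqdn3d) ahead of any neural enhancement
clip = core.hqdn3d.Hqdn3d(clip, lum_spac=2.0, chrom_spac=1.5,
                          lum_tmp=3.0, chrom_tmp=2.25)

clip.set_output()
```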

I appreciate that ChaiNNer currently expects progressive sources to be fed in, and that a user is likely to deinterlace beforehand with one of the classical deinterlacers (yadif, bwdif).

Or, of course, Neural Network Edge-Directed Interpolation (NNEDI); see the sketch after this list:

  • https://github.com/dubhater/vapoursynth-nnedi3, based on tritical's original NNEDI3 filter, which predates TensorFlow & PyTorch.
  • or QTGMC, which itself typically relies on NNEDI (plus a filterchain of classical sharpen, blur and denoise processes).
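
As promised above, a sketch of double-rate deinterlacing with dubhater's vapoursynth-nnedi3 plugin linked in the list; the parameter values are assumptions chosen for illustration:

```python
import vapoursynth as vs

core = vs.core

clip = core.ffms2.Source("interlaced_source.mkv")  # hypothetical input
clip = core.std.SetFieldBased(clip, 2)             # top-field-first

# field=3 -> double-rate output starting with the top field; nsize and nns
# select the local-neighborhood size and neuron count of the predictor
# network that interpolates the missing field lines
deint = core.nnedi3.nnedi3(clip, field=3, nsize=0, nns=3)

deint.set_output()
```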

There are some interesting papers & unofficial implementations of deep video deinterlacing, but neither of the pre-trained models below appears to be loadable in, or compatible with, ChaiNNer, and they are thus out of reach for non-computer-science users.

These two seem the most promising and report good results; a hypothetical inference sketch follows the list:

  • Real-time Deep Video Deinterlacing (RDVD) (Zhu et al., 2017)
  • Disney Deep Video Deinterlacing (Bernasconi et al., 2020)
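
To make the obstacle concrete, here is a toy PyTorch sketch of the inference contract these models share: each woven field is fed to a network that predicts the missing lines, so one interlaced frame yields two progressive frames. DeintNet is a placeholder stand-in, not the actual RDVD or Disney architecture:

```python
import torch
import torch.nn as nn

class DeintNet(nn.Module):
    """Toy stand-in: predicts a full-height frame from a single field."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # upsample vertically only: one field (H/2 lines) -> full frame (H lines)
        self.up = nn.ConvTranspose2d(64, 3, kernel_size=(4, 3),
                                     stride=(2, 1), padding=(1, 1))

    def forward(self, field):  # field: (N, 3, H/2, W)
        return self.up(self.features(field))  # -> (N, 3, H, W)

def deinterlace(model, frame):
    """frame: (3, H, W) woven interlaced frame -> two progressive frames."""
    top, bottom = frame[:, 0::2, :], frame[:, 1::2, :]  # split the two fields
    with torch.no_grad():
        f1 = model(top.unsqueeze(0)).squeeze(0)
        f2 = model(bottom.unsqueeze(0)).squeeze(0)
    return f1, f2  # output runs at twice the input frame rate
```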

It would be impertinent to suggest that the authors of ChaiNNer should get involved in individual models. The question, however, is what a deinterlace architecture or processing node would look like in ChaiNNer in order to support deep-neural-network video deinterlacing models as they become available in the model zoos (or even to add classical libavfilter nodes). I assume that deinterlacers would fall under a general "temporal interpolation" category rather than "upscale" nodes, where a field-based frame stream at the input fps yields output frames at 2× that fps (a hypothetical node contract is sketched below). Advanced full-temporal deinterlace models may eventually adopt the multi-frame techniques established by the stalwarts of multi-frame super-resolution.
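
Purely to illustrate that 1-frame-in / 2-frames-out contract (this is not ChaiNNer's actual node API, just a hypothetical shape):

```python
from dataclasses import dataclass
from typing import Callable, Iterator

import numpy as np

@dataclass
class VideoStream:
    fps: float
    frames: Iterator[np.ndarray]  # each frame is (H, W, C)

def deinterlace_node(stream: VideoStream,
                     model: Callable[[np.ndarray], np.ndarray]) -> VideoStream:
    """Wraps any field->frame model; the output runs at twice the input fps."""
    def generate() -> Iterator[np.ndarray]:
        for frame in stream.frames:
            top, bottom = frame[0::2], frame[1::2]  # split the woven fields
            yield model(top)     # progressive frame from the top field
            yield model(bottom)  # progressive frame from the bottom field
    return VideoStream(fps=stream.fps * 2, frames=generate())
```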

I did search the GitHub repo, but did not find any particular reference to neural network deinterlacing.

Thanks again for the great software!
