Question about test_video.py #32
I'm also trying to reproduce the results in the paper, and I also had trouble reaching the same level of quality (the official PyTorch examples repo has an implementation too; its quality isn't great either). Were you able to get good quality for images? That said, I think what you described is expected: if you read the paper closely, it only uses the Y channel, and Cb/Cr are upsampled with bicubic interpolation. Having said that, I think the main difference is that the original paper uses a very small patch size (17x17), though I haven't been able to confirm this. Btw, this implementation also uses different activation functions, although that doesn't make much difference per my testing.
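For reference, here is a minimal sketch of that Y-only inference path, assuming a PyTorch model that takes a single-channel luminance tensor (the function and variable names are placeholders, not this repo's actual API):

```python
import torch
from PIL import Image
from torchvision.transforms import ToTensor, ToPILImage

def super_resolve(model, img_path):
    """Run the network on Y only; upscale Cb/Cr with bicubic interpolation."""
    img = Image.open(img_path).convert('YCbCr')
    y, cb, cr = img.split()

    with torch.no_grad():
        inp = ToTensor()(y).unsqueeze(0)           # shape: 1 x 1 x H x W
        out_y = model(inp).squeeze(0).clamp(0, 1)  # super-resolved luminance

    out_y = ToPILImage()(out_y)
    # Chroma channels are simply resized with bicubic, as described in the paper
    out_cb = cb.resize(out_y.size, Image.BICUBIC)
    out_cr = cr.resize(out_y.size, Image.BICUBIC)
    return Image.merge('YCbCr', [out_y, out_cb, out_cr]).convert('RGB')
```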
Do you have a good solution now? I have tried regenerating the training samples, but it has not improved much. I wonder whether there is a problem with the way training samples are cropped in data_utils.py.
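In case the cropping is the suspect, one plausible way to follow the paper's patch scheme (17x17 LR sub-images paired with 17r x 17r HR sub-images) looks roughly like this; it is only a guess at the intended procedure, not what data_utils.py actually does:

```python
import random
from PIL import Image

def random_training_pair(hr_img, upscale_factor, lr_patch_size=17):
    """Crop one HR patch of size 17r x 17r and derive its 17x17 LR counterpart."""
    hr_size = lr_patch_size * upscale_factor
    left = random.randint(0, hr_img.width - hr_size)
    top = random.randint(0, hr_img.height - hr_size)
    hr_patch = hr_img.crop((left, top, left + hr_size, top + hr_size))
    # LR input obtained by bicubic downscaling of the HR patch
    lr_patch = hr_patch.resize((lr_patch_size, lr_patch_size), Image.BICUBIC)
    return lr_patch, hr_patch
```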
Unfortunately no. I have tried half a dozen TF/PyTorch implementations on GitHub. None of them could get even close to what was reported in the paper.
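One thing worth double-checking when comparing numbers: since the network only processes the Y channel (as noted above), the paper's PSNR is presumably computed on luminance only, and an RGB PSNR will look worse. A quick sketch, assuming uint8 RGB arrays in [0, 255]:

```python
import numpy as np

def psnr_y(sr_rgb, hr_rgb):
    """PSNR computed on the luminance channel only (ITU-R BT.601 weights)."""
    def to_y(img):
        img = img.astype(np.float64)
        return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

    mse = np.mean((to_y(sr_rgb) - to_y(hr_rgb)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```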
@windmaple @CauchyHu
Just use OpenCV directly; a trained ESPCN is already available.
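If you go that route, OpenCV's contrib module exposes it through cv2.dnn_superres; a minimal sketch (the pre-trained .pb graph has to be downloaded separately, and the path below is just a placeholder):

```python
import cv2

# Requires opencv-contrib-python; the model file is not bundled with OpenCV
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("ESPCN_x3.pb")   # placeholder path to the downloaded model
sr.setModel("espcn", 3)       # algorithm name and upscale factor

img = cv2.imread("input.png")
result = sr.upsample(img)     # 3x upscaled BGR image
cv2.imwrite("output_x3.png", result)
```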
I am using your code to reproduce the experimental results of the ESPCN paper, but I find that my results are quite different from those in the paper. I therefore wonder whether feeding only the Y channel into the model, without the Cb and Cr channels, is what keeps the final images from reaching the quality reported in the paper.
These are my results: