I have trained a StyleGAN2-ADA generator on my own images and am now trying different inversion methods.
My first question after reading your paper is whether I can use my pretrained G with your code to train an encoder, or whether I should retrain a StyleGAN2 using the pSp framework.
I trained my generator with the NVlabs repository [https://github.com/NVlabs/stylegan3], but using its StyleGAN2-ADA configuration, not StyleGAN3.
As far as I understand, your work builds on pSp, which encodes images into the extended W+ space, in contrast to the original StyleGAN2, which operates in W space.
So I suppose I need to train a pSp encoder on my images first, and then use the trained G and follow your method.
It would be great if you could clarify this for me. Thank you very much in advance for your help.
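For context, here is a toy sketch of how I understand the W vs. W+ distinction (the mapping network below is a random stand-in, not the real StyleGAN model; the dimensions 512 and 18 are the usual StyleGAN2 defaults at 1024px resolution, so please correct me if my setup differs):

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 512
NUM_LAYERS = 18  # number of style inputs for a 1024x1024 StyleGAN2 generator

def toy_mapping(z: np.ndarray) -> np.ndarray:
    """Stand-in for StyleGAN's mapping network f: Z -> W (not the real weights)."""
    weight = rng.standard_normal((LATENT_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)
    return np.tanh(z @ weight)

z = rng.standard_normal(LATENT_DIM)

# W space: a single 512-d code, broadcast identically to every style layer.
w = toy_mapping(z)
w_broadcast = np.tile(w, (NUM_LAYERS, 1))  # shape (18, 512), all rows equal

# W+ space: an independent 512-d code per style layer, which is what a
# pSp-style encoder predicts directly from an input image.
w_plus = rng.standard_normal((NUM_LAYERS, LATENT_DIM))  # shape (18, 512), rows differ

print(w_broadcast.shape, w_plus.shape)
```

Is this the right mental model, i.e. your encoder outputs the full (18, 512) W+ tensor rather than a single broadcast w?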