Below are my training parameters, which largely follow the ones you provided. The dataset is FFHQ. Around 15k steps the training suddenly collapses: the generated images are no longer faces at all, just structureless noise. What am I doing wrong? I hope the author sees this and can do me the favor of replying. Thank you so much!
python scripts/train.py \
--dataset_type=ffhq_encode \
--exp_dir=/root/autodl-tmp/path/to/experiment \
--workers=8 \
--batch_size=4 \
--test_batch_size=4 \
--test_workers=8 \
--val_interval=2500 \
--save_interval=5000 \
--encoder_type=GradualStyleEncoder \
--start_from_latent_avg \
--lpips_lambda=0.8 \
--l2_lambda=1 \
--id_lambda=0.1
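To pinpoint exactly where training diverged, I scanned my logged loss values for the first step that jumps well above the recent baseline. This is only a minimal sketch: the `first_spike` helper and the idea of logging the total loss every 100 steps are my own assumptions, not part of the training script — adjust it to however your experiment directory actually records losses.

```python
import statistics

def first_spike(losses, window=50, factor=3.0):
    """Return the index of the first loss value exceeding `factor`
    times the median of the preceding `window` values, or None if
    the curve never spikes. (Hypothetical helper, not from pSp.)"""
    for i in range(window, len(losses)):
        baseline = statistics.median(losses[i - window:i])
        if losses[i] > factor * baseline:
            return i
    return None

# Example: a flat loss curve that blows up partway through,
# mimicking a collapse around step 15k with 100-step logging.
curve = [0.30] * 150 + [2.5] * 10
print(first_spike(curve))  # -> 150, i.e. step 15000 at 100-step intervals
```

In my run this points at the same region where the saved sample images stop looking like faces, so the collapse is abrupt rather than gradual.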
