PyTorch DCGAN Tutorial

Updated: 2020-09-09 15:16
Original: https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html

Author: Nathan Inkawhich

Introduction

This tutorial will give an introduction to DCGANs through an example. We will train a generative adversarial network (GAN) to produce new celebrities after showing it pictures of many real celebrities. Most of the code here comes from the dcgan implementation in pytorch/examples, and this document will give a thorough explanation of the implementation and shed light on how and why this model works. But don't worry, no prior knowledge of GANs is required, though it may require a first-timer to spend some time reasoning about what is actually happening under the hood. Also, for the sake of time it will help to have a GPU, or two. Let's start from the beginning.

Generative Adversarial Networks

What is a GAN?

GANs are a framework for teaching a DL model to capture the training data's distribution so we can generate new data from that same distribution. GANs were invented by Ian Goodfellow in 2014 and first described in the paper Generative Adversarial Nets. They are made of two distinct models: a generator and a discriminator. The job of the generator is to spawn "fake" images that look like the training images. The job of the discriminator is to look at an image and output whether it is a real training image or a fake image from the generator. During training, the generator is constantly trying to outsmart the discriminator by generating better and better fakes, while the discriminator is working to become a better detective and correctly classify the real and fake images. The equilibrium of this game is when the generator is generating perfect fakes that look as if they came directly from the training data, and the discriminator is left to always guess at 50% confidence that the generator's output is real or fake.

Now, let's define some notation to be used throughout the tutorial, starting with the discriminator. Let x be data representing an image. D(x) is the discriminator network which outputs the (scalar) probability that x came from the training data rather than the generator. Here, since we are dealing with images, the input to D(x) is an image of CHW size 3x64x64. Intuitively, D(x) should be high when x comes from the training data and low when x comes from the generator. D(x) can also be thought of as a traditional binary classifier.

For the generator's notation, let z be a latent space vector sampled from a standard normal distribution. G(z) represents the generator function which maps the latent vector z to data-space. The goal of G is to estimate the distribution that the training data comes from (p_data) so it can generate fake samples from that estimated distribution (p_g).

So, D(G(z)) is the probability (scalar) that the output of the generator G is a real image. As described in Goodfellow's paper, D and G play a minimax game in which D tries to maximize the probability it correctly classifies reals and fakes (log D(x)), and G tries to minimize the probability that D will predict its outputs are fake (log(1 - D(G(z)))). From the paper, the GAN loss function is

min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 - D(G(z)))]

In theory, the solution to this minimax game is where p_g = p_data, and the discriminator guesses randomly whether its inputs are real or fake. However, the convergence theory of GANs is still being actively researched, and in reality models do not always train to this point.
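To make the equilibrium claim concrete, here is a tiny pure-Python sketch (not part of the original tutorial) that evaluates the inner expression of V(D, G) for single samples, standing in for the expectations; at D = 0.5 everywhere the value settles at -2·log 2:

```python
import math

def gan_value(d_real, d_fake):
    # Single-sample stand-in for V(D, G): log D(x) + log(1 - D(G(z)))
    return math.log(d_real) + math.log(1.0 - d_fake)

# A perfect-equilibrium discriminator outputs 0.5 on both real and fake inputs.
equilibrium = gan_value(0.5, 0.5)   # -2 * log(2), about -1.386

# A discriminator that is winning (high on reals, low on fakes)
# achieves a larger value of the objective.
winning = gan_value(0.9, 0.1)
```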

What is a DCGAN?

A DCGAN is a direct extension of the GAN described above, except that it explicitly uses convolutional and convolutional-transpose layers in the discriminator and generator, respectively. It was first described by Radford et al. in the paper Unsupervised Representation Learning With Deep Convolutional Generative Adversarial Networks. The discriminator is made up of strided convolution layers, batch norm layers, and LeakyReLU activations. The input is a 3x64x64 image and the output is a scalar probability that the input is from the real data distribution. The generator is comprised of convolutional-transpose layers, batch norm layers, and ReLU activations. The input is a latent vector z drawn from a standard normal distribution, and the output is a 3x64x64 RGB image. The strided conv-transpose layers allow the latent vector to be transformed into a volume with the same shape as an image. In the paper, the authors also give some tips about how to set up the optimizers, how to calculate the loss functions, and how to initialize the model weights, all of which will be explained in the coming sections.
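The claim that strided conv-transpose layers grow the latent vector into an image-sized volume can be checked with the standard output-size formula for ConvTranspose2d, (H_in - 1)·stride - 2·padding + kernel (assuming the default dilation and output_padding). A small sketch, mirroring the layer parameters used later in this tutorial:

```python
def conv_transpose_out(h_in, kernel=4, stride=2, padding=1):
    # ConvTranspose2d spatial size with default dilation/output_padding:
    # (H_in - 1) * stride - 2 * padding + kernel
    return (h_in - 1) * stride - 2 * padding + kernel

# First generator layer: 1x1 latent "image" -> 4x4 (stride=1, padding=0),
# then four stride-2 layers double the spatial size up to 64x64.
sizes = [conv_transpose_out(1, stride=1, padding=0)]
for _ in range(4):
    sizes.append(conv_transpose_out(sizes[-1]))
```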

from __future__ import print_function
#%matplotlib inline
import argparse
import os
import random
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML


# Set random seed for reproducibility
manualSeed = 999
#manualSeed = random.randint(1, 10000) # use if you want new results
print("Random Seed: ", manualSeed)
random.seed(manualSeed)
torch.manual_seed(manualSeed)

Out:

Random Seed:  999

Inputs

讓我們?yōu)榕懿蕉x一些輸入:

  • dataroot - the path to the root of the dataset folder. We will talk more about the dataset in the next section
  • workers - the number of worker threads for loading the data with the DataLoader
  • batch_size - the batch size used in training. The DCGAN paper uses a batch size of 128
  • image_size - the spatial size of the images used for training. This implementation defaults to 64x64. If another size is desired, the structures of D and G must be changed. See here for more details
  • nc - the number of color channels in the input images. For color images this is 3
  • nz - the length of the latent vector
  • ngf - relates to the depth of feature maps carried through the generator
  • ndf - sets the depth of feature maps propagated through the discriminator
  • num_epochs - the number of training epochs to run. Training for longer will probably lead to better results but will also take much longer
  • lr - the learning rate for training. As described in the DCGAN paper, this number should be 0.0002
  • beta1 - the beta1 hyperparameter for Adam optimizers. As described in the paper, this number should be 0.5
  • ngpu - the number of GPUs available. If this is 0, the code will run in CPU mode. If this number is greater than 0 it will run on that number of GPUs

# Root directory for dataset
dataroot = "data/celeba"


# Number of workers for dataloader
workers = 2


# Batch size during training
batch_size = 128


# Spatial size of training images. All images will be resized to this
#   size using a transformer.
image_size = 64


# Number of channels in the training images. For color images this is 3
nc = 3


# Size of z latent vector (i.e. size of generator input)
nz = 100


# Size of feature maps in generator
ngf = 64


# Size of feature maps in discriminator
ndf = 64


# Number of training epochs
num_epochs = 5


# Learning rate for optimizers
lr = 0.0002


# Beta1 hyperparam for Adam optimizers
beta1 = 0.5


# Number of GPUs available. Use 0 for CPU mode.
ngpu = 1

Data

In this tutorial we will use the Celeb-A Faces dataset, which can be downloaded at the linked site or from Google Drive. The dataset will download as a file named img_align_celeba.zip. Once downloaded, create a directory named celeba and extract the zip file into that directory. Then, set the dataroot input for this notebook to the celeba directory you just created. The resulting directory structure should be:

/path/to/celeba
    -> img_align_celeba
        -> 188242.jpg
        -> 173822.jpg
        -> 284702.jpg
        -> 537394.jpg
           ...

This is an important step because we will be using the ImageFolder dataset class, which requires there to be subdirectories in the dataset's root folder. Now, we can create the dataset, create the dataloader, set the device to run on, and finally visualize some of the training data.
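As a quick sanity check of that requirement, the expected tree can be rebuilt in a throwaway directory with the standard library only (the file names below are placeholders, not the real CelebA names):

```python
import os
import tempfile

tmp_root = tempfile.mkdtemp()                       # stand-in for /path/to/celeba
subdir = os.path.join(tmp_root, "img_align_celeba")
os.makedirs(subdir)
for name in ("000001.jpg", "000002.jpg"):           # placeholder file names
    open(os.path.join(subdir, name), "wb").close()

# ImageFolder treats every subdirectory of the root as one class label,
# so the images must live one level below the root, not in the root itself.
entries = os.listdir(tmp_root)
```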

# We can use an image folder dataset the way we have it setup.
# Create the dataset
dataset = dset.ImageFolder(root=dataroot,
                           transform=transforms.Compose([
                               transforms.Resize(image_size),
                               transforms.CenterCrop(image_size),
                               transforms.ToTensor(),
                               transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                           ]))
# Create the dataloader
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                         shuffle=True, num_workers=workers)


# Decide which device we want to run on
device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")


# Plot some training images
real_batch = next(iter(dataloader))
plt.figure(figsize=(8,8))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=2, normalize=True).cpu(),(1,2,0)))

../_images/sphx_glr_dcgan_faces_tutorial_001.png
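One detail worth noting from the transform pipeline above: with a mean and std of 0.5 per channel, Normalize maps ToTensor's [0, 1] range onto [-1, 1], which matches the Tanh output range of the generator defined later. A scalar sketch of that arithmetic (helper names are ours, not torchvision's):

```python
def normalize(x, mean=0.5, std=0.5):
    # transforms.Normalize applies (x - mean) / std channel-wise
    return (x - mean) / std

def denormalize(y, mean=0.5, std=0.5):
    # Inverse map, useful when displaying samples in [0, 1] again
    return y * std + mean
```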

Implementation

With our input parameters set and the dataset prepared, we can now get into the implementation. We will start with the weight initialization strategy, then talk about the generator, discriminator, loss functions, and training loop in detail.

Weight Initialization

From the DCGAN paper, the authors specify that all model weights shall be randomly initialized from a normal distribution with mean=0 and stdev=0.02. The weights_init function takes an initialized model as input and reinitializes all convolutional, convolutional-transpose, and batch normalization layers to meet this criterion. This function is applied to the models immediately after initialization.

# custom weights initialization called on netG and netD
def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        nn.init.normal_(m.weight.data, 0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        nn.init.normal_(m.weight.data, 1.0, 0.02)
        nn.init.constant_(m.bias.data, 0)
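Since netG.apply(weights_init) calls the function on every submodule, the dispatch above relies purely on the class name string. A dependency-free sketch of that dispatch logic, using dummy classes as stand-ins for the torch.nn modules:

```python
# Dummy stand-ins for torch.nn modules; only the class *names* matter here.
class ConvTranspose2d: pass
class BatchNorm2d: pass
class Linear: pass

def init_rule(m):
    # Mirrors the branching in weights_init, returning which rule would fire.
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        return 'weight ~ N(0.0, 0.02)'
    elif classname.find('BatchNorm') != -1:
        return 'weight ~ N(1.0, 0.02), bias = 0'
    return 'left untouched'
```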

Generator

The generator, G, is designed to map the latent space vector (z) to data-space. Since our data are images, converting z to data-space means ultimately creating an RGB image with the same size as the training images (i.e. 3x64x64). In practice, this is accomplished through a series of strided two-dimensional convolutional-transpose layers, each paired with a 2d batch norm layer and a ReLU activation. The output of the generator is fed through a tanh function to return it to the input data range of [-1, 1]. It is worth noting the existence of the batch norm functions after the conv-transpose layers, as this is a critical contribution of the DCGAN paper. These layers help with the flow of gradients during training. An image of the generator from the DCGAN paper is shown below.

dcgan_generator

Notice how the inputs we set in the input section (nz, ngf, and nc) influence the generator architecture in the code. nz is the length of the z input vector, ngf relates to the size of the feature maps that are propagated through the generator, and nc is the number of channels in the output image (set to 3 for RGB images). Below is the code for the generator.

# Generator Code


class Generator(nn.Module):
    def __init__(self, ngpu):
        super(Generator, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is Z, going into a convolution
            nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8),
            nn.ReLU(True),
            # state size. (ngf*8) x 4 x 4
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            # state size. (ngf*4) x 8 x 8
            nn.ConvTranspose2d( ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            # state size. (ngf*2) x 16 x 16
            nn.ConvTranspose2d( ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            # state size. (ngf) x 32 x 32
            nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh()
            # state size. (nc) x 64 x 64
        )


    def forward(self, input):
        return self.main(input)

Now, we can instantiate the generator and apply the weights_init function. Check out the printed model to see how the generator object is structured.

# Create the generator
netG = Generator(ngpu).to(device)


# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
    netG = nn.DataParallel(netG, list(range(ngpu)))


# Apply the weights_init function to randomly initialize all weights
#  to mean=0, stdev=0.02.
netG.apply(weights_init)


# Print the model
print(netG)

Out:

Generator(
  (main): Sequential(
    (0): ConvTranspose2d(100, 512, kernel_size=(4, 4), stride=(1, 1), bias=False)
    (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
    (3): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (5): ReLU(inplace=True)
    (6): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (7): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (8): ReLU(inplace=True)
    (9): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (10): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (11): ReLU(inplace=True)
    (12): ConvTranspose2d(64, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (13): Tanh()
  )
)

Discriminator

As mentioned, the discriminator, D, is a binary classification network that takes an image as input and outputs a scalar probability that the input image is real (as opposed to fake). Here, D takes a 3x64x64 input image, processes it through a series of Conv2d, BatchNorm2d, and LeakyReLU layers, and outputs the final probability through a Sigmoid activation function. This architecture can be extended with more layers if necessary for the problem, but there is significance to the use of the strided convolution, BatchNorm, and LeakyReLUs. The DCGAN paper mentions it is a good practice to use strided convolution rather than pooling to downsample, because it lets the network learn its own pooling function. The batch norm and leaky relu functions also promote healthy gradient flow, which is critical for the learning process of both G and D.

Discriminator Code

class Discriminator(nn.Module):
    def __init__(self, ngpu):
        super(Discriminator, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is (nc) x 64 x 64
            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf) x 32 x 32
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*2) x 16 x 16
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*4) x 8 x 8
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*8) x 4 x 4
            nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
            nn.Sigmoid()
        )


    def forward(self, input):
        return self.main(input)
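The state sizes in the comments above follow from the Conv2d output-size formula, floor((H_in + 2·padding - kernel) / stride) + 1 (default dilation assumed). A quick sketch verifying the 64 → 1 reduction:

```python
def conv_out(h_in, kernel=4, stride=2, padding=1):
    # Conv2d spatial size with default dilation:
    # floor((H_in + 2*padding - kernel) / stride) + 1
    return (h_in + 2 * padding - kernel) // stride + 1

# Four stride-2 convolutions halve 64 down to 4, then the final
# kernel=4, stride=1, padding=0 layer collapses 4x4 to a single value.
sizes = [64]
for _ in range(4):
    sizes.append(conv_out(sizes[-1]))
sizes.append(conv_out(sizes[-1], stride=1, padding=0))
```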

Now, as with the generator, we can create the discriminator, apply the weights_init function, and print the model's structure.

# Create the Discriminator
netD = Discriminator(ngpu).to(device)


# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
    netD = nn.DataParallel(netD, list(range(ngpu)))


# Apply the weights_init function to randomly initialize all weights
#  to mean=0, stdev=0.02.
netD.apply(weights_init)


# Print the model
print(netD)

Out:

Discriminator(
  (main): Sequential(
    (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): LeakyReLU(negative_slope=0.2, inplace=True)
    (2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (4): LeakyReLU(negative_slope=0.2, inplace=True)
    (5): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (6): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (7): LeakyReLU(negative_slope=0.2, inplace=True)
    (8): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (9): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (10): LeakyReLU(negative_slope=0.2, inplace=True)
    (11): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), bias=False)
    (12): Sigmoid()
  )
)

Loss Functions and Optimizers

With D and G set up, we can specify how they learn through the loss functions and optimizers. We will use the Binary Cross Entropy loss (BCELoss) function defined in PyTorch:

l(x, y) = L = {l_1, ..., l_N}^T,   l_n = -[ y_n * log(x_n) + (1 - y_n) * log(1 - x_n) ]

Notice how this function provides the calculation of both log components in the objective function (i.e. log(D(x)) and log(1 - D(G(z)))). We can specify which part of the BCE equation to use with the y input. This is accomplished in the training loop which is coming up soon, but it is important to understand how we can choose which component we wish to calculate just by changing y (i.e. the GT labels).
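The label-selects-the-term behaviour is easy to see in a scalar version of the BCE formula (our own helper, not the PyTorch implementation, which also handles batching and reduction):

```python
import math

def bce(x, y):
    # Per-sample binary cross entropy: -[y*log(x) + (1-y)*log(1-x)]
    return -(y * math.log(x) + (1 - y) * math.log(1 - x))

# With y = 1 only the -log(x) term survives; with y = 0 only -log(1-x).
loss_real_target = bce(0.8, 1)   # equals -log(0.8)
loss_fake_target = bce(0.8, 0)   # equals -log(0.2)
```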

Next, we define our real label as 1 and the fake label as 0. These labels will be used when calculating the losses of D and G, and this is also the convention used in the original GAN paper. Finally, we set up two separate optimizers, one for D and one for G. As specified in the DCGAN paper, both are Adam optimizers with learning rate 0.0002 and Beta1 = 0.5. For keeping track of the generator's learning progression, we will generate a fixed batch of latent vectors drawn from a Gaussian distribution (i.e. fixed_noise). In the training loop, we will periodically input this fixed_noise into G, and over the iterations we will see images form out of the noise.

# Initialize the BCELoss function
criterion = nn.BCELoss()


# Create a batch of latent vectors that we will use to visualize
#  the progression of the generator
fixed_noise = torch.randn(64, nz, 1, 1, device=device)


# Establish convention for real and fake labels during training
# (floats, since BCELoss expects floating-point targets)
real_label = 1.
fake_label = 0.


# Setup Adam optimizers for both G and D
optimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(beta1, 0.999))

Training

Finally, now that we have all of the parts of the GAN framework defined, we can train it. Be mindful that training GANs is somewhat of an art form in itself, as incorrect hyperparameter settings lead to mode collapse with little explanation of what went wrong. Here, we will closely follow Algorithm 1 from Goodfellow's paper, while abiding by some of the best practices shown in ganhacks. Namely, we will "construct different mini-batches for real and fake" images, and also adjust G's objective function to maximize log(D(G(z))). Training is split up into two main parts. Part 1 updates the discriminator and Part 2 updates the generator.

Part 1 - Train the Discriminator

Recall, the goal of training the discriminator is to maximize the probability of correctly classifying a given input as real or fake. In terms of Goodfellow, we wish to "update the discriminator by ascending its stochastic gradient". Practically, we want to maximize log(D(x)) + log(1 - D(G(z))). Due to the separate mini-batch suggestion from ganhacks, we will calculate this in two steps. First, we will construct a batch of real samples from the training set, forward pass through D, calculate the loss (log(D(x))), then calculate the gradients in a backward pass. Second, we will construct a batch of fake samples with the current generator, forward pass this batch through D, calculate the loss (log(1 - D(G(z)))), and accumulate the gradients with a backward pass. Now, with the gradients accumulated from both the all-real and all-fake batches, we call a step of the discriminator's optimizer.
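The two separate backward passes work because gradients in PyTorch accumulate until they are zeroed, so they add up to the same gradient as a single backward pass over the summed loss. A toy scalar check with hand-derived quadratic losses (stand-ins for the real BCE terms, not the actual objective):

```python
def loss_real(w):    # toy stand-in for the all-real-batch loss
    return (w - 1.0) ** 2

def loss_fake(w):    # toy stand-in for the all-fake-batch loss
    return (w + 2.0) ** 2

def grad_real(w):    # d/dw of loss_real, derived by hand
    return 2.0 * (w - 1.0)

def grad_fake(w):    # d/dw of loss_fake, derived by hand
    return 2.0 * (w + 2.0)

w = 0.5
accumulated = grad_real(w) + grad_fake(w)       # two backward() calls
combined = 2.0 * (w - 1.0) + 2.0 * (w + 2.0)    # one backward() on the sum
```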

Part 2 - Train the Generator

As stated in the original paper, we want to train the generator by minimizing log(1 - D(G(z))) in an effort to generate better fakes. As mentioned, this was shown by Goodfellow to not provide sufficient gradients, especially early in the learning process. As a fix, we instead wish to maximize log(D(G(z))). In the code we accomplish this by: classifying the generator output from Part 1 with the discriminator, computing G's loss using real labels as GT, computing G's gradients in a backward pass, and finally updating G's parameters with an optimizer step. It may seem counter-intuitive to use the real labels as GT labels for the loss function, but this allows us to use the log(x) part of BCELoss (rather than the log(1 - x) part), which is exactly what we want.
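A quick numeric look at why this trick helps early in training, when D(G(z)) is close to 0: the magnitude of the gradient with respect to the discriminator's output is tiny for the original log(1 - D) objective but large for log(D). These are hand-derived derivatives of the scalar objectives, a sketch rather than the full chain rule through G:

```python
def grad_saturating(p):
    # |d/dp of log(1 - p)| -- the saturating objective G would minimize
    return 1.0 / (1.0 - p)

def grad_nonsaturating(p):
    # |d/dp of log(p)| -- the non-saturating objective G maximizes instead
    return 1.0 / p

p = 0.01                        # a confident discriminator early in training
weak = grad_saturating(p)       # about 1.01: almost no learning signal
strong = grad_nonsaturating(p)  # about 100: a much stronger signal
```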

Finally, we will do some statistic reporting, and at the end of each epoch we will push our fixed_noise batch through the generator to visually track the progress of G's training. The training statistics reported are:

  • Loss_D - discriminator loss calculated as the sum of losses for the all-real and all-fake batches (log(D(x)) + log(1 - D(G(z)))).
  • Loss_G - generator loss calculated as log(D(G(z)))
  • D(x) - the average output (across the batch) of the discriminator for the all-real batch. This should start close to 1 and then theoretically converge to 0.5 when G gets better. Think about why this is.
  • D(G(z)) - the average discriminator output for the all-fake batch. The first number is before D is updated and the second number is after D is updated. These numbers should start near 0 and converge to 0.5 as G gets better. Think about why this is.

Note: This step might take a while, depending on how many epochs you run and whether you removed some data from the dataset.

# Training Loop


# Lists to keep track of progress
img_list = []
G_losses = []
D_losses = []
iters = 0


print("Starting Training Loop...")
# For each epoch
for epoch in range(num_epochs):
    # For each batch in the dataloader
    for i, data in enumerate(dataloader, 0):


        ############################
        # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
        ###########################
        # Train with all-real batch
        netD.zero_grad()
        # Format batch
        real_cpu = data[0].to(device)
        b_size = real_cpu.size(0)
        label = torch.full((b_size,), real_label, dtype=torch.float, device=device)
        # Forward pass real batch through D
        output = netD(real_cpu).view(-1)
        # Calculate loss on all-real batch
        errD_real = criterion(output, label)
        # Calculate gradients for D in backward pass
        errD_real.backward()
        D_x = output.mean().item()


        # Train with all-fake batch
        # Generate batch of latent vectors
        noise = torch.randn(b_size, nz, 1, 1, device=device)
        # Generate fake image batch with G
        fake = netG(noise)
        label.fill_(fake_label)
        # Classify all fake batch with D
        output = netD(fake.detach()).view(-1)
        # Calculate D's loss on the all-fake batch
        errD_fake = criterion(output, label)
        # Calculate the gradients for this batch
        errD_fake.backward()
        D_G_z1 = output.mean().item()
        # Add the gradients from the all-real and all-fake batches
        errD = errD_real + errD_fake
        # Update D
        optimizerD.step()


        ############################
        # (2) Update G network: maximize log(D(G(z)))
        ###########################
        netG.zero_grad()
        label.fill_(real_label)  # fake labels are real for generator cost
        # Since we just updated D, perform another forward pass of all-fake batch through D
        output = netD(fake).view(-1)
        # Calculate G's loss based on this output
        errG = criterion(output, label)
        # Calculate gradients for G
        errG.backward()
        D_G_z2 = output.mean().item()
        # Update G
        optimizerG.step()


        # Output training stats
        if i % 50 == 0:
            print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
                  % (epoch, num_epochs, i, len(dataloader),
                     errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))


        # Save Losses for plotting later
        G_losses.append(errG.item())
        D_losses.append(errD.item())


        # Check how the generator is doing by saving G's output on fixed_noise
        if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):
            with torch.no_grad():
                fake = netG(fixed_noise).detach().cpu()
            img_list.append(vutils.make_grid(fake, padding=2, normalize=True))


        iters += 1

Out:

Starting Training Loop...
[0/5][0/1583]   Loss_D: 2.0937  Loss_G: 5.2060  D(x): 0.5704    D(G(z)): 0.6680 / 0.0090
[0/5][50/1583]  Loss_D: 0.2073  Loss_G: 12.9653 D(x): 0.9337    D(G(z)): 0.0000 / 0.0000
[0/5][100/1583] Loss_D: 0.0364  Loss_G: 34.5761 D(x): 0.9917    D(G(z)): 0.0000 / 0.0000
[0/5][150/1583] Loss_D: 0.0078  Loss_G: 39.3111 D(x): 0.9947    D(G(z)): 0.0000 / 0.0000
[0/5][200/1583] Loss_D: 0.0029  Loss_G: 38.7681 D(x): 0.9974    D(G(z)): 0.0000 / 0.0000
[0/5][250/1583] Loss_D: 1.2861  Loss_G: 13.3356 D(x): 0.8851    D(G(z)): 0.2970 / 0.0035
[0/5][300/1583] Loss_D: 1.2933  Loss_G: 6.7655  D(x): 0.8533    D(G(z)): 0.5591 / 0.0020
[0/5][350/1583] Loss_D: 0.7473  Loss_G: 3.2617  D(x): 0.5798    D(G(z)): 0.0514 / 0.0483
[0/5][400/1583] Loss_D: 0.5454  Loss_G: 4.0144  D(x): 0.8082    D(G(z)): 0.2346 / 0.0310
[0/5][450/1583] Loss_D: 1.1872  Loss_G: 3.2918  D(x): 0.4389    D(G(z)): 0.0360 / 0.0858
[0/5][500/1583] Loss_D: 0.7546  Loss_G: 4.7428  D(x): 0.9072    D(G(z)): 0.4049 / 0.0178
[0/5][550/1583] Loss_D: 0.3514  Loss_G: 3.7726  D(x): 0.8937    D(G(z)): 0.1709 / 0.0394
[0/5][600/1583] Loss_D: 0.4400  Loss_G: 4.1662  D(x): 0.7768    D(G(z)): 0.1069 / 0.0284
[0/5][650/1583] Loss_D: 0.3275  Loss_G: 4.3374  D(x): 0.8452    D(G(z)): 0.0852 / 0.0214
[0/5][700/1583] Loss_D: 0.7711  Loss_G: 5.0677  D(x): 0.9103    D(G(z)): 0.3848 / 0.0190
[0/5][750/1583] Loss_D: 0.5346  Loss_G: 5.7441  D(x): 0.8971    D(G(z)): 0.2969 / 0.0064
[0/5][800/1583] Loss_D: 0.5027  Loss_G: 2.5982  D(x): 0.6897    D(G(z)): 0.0431 / 0.1196
[0/5][850/1583] Loss_D: 0.4479  Loss_G: 4.8790  D(x): 0.7407    D(G(z)): 0.0456 / 0.0200
[0/5][900/1583] Loss_D: 0.9812  Loss_G: 5.8792  D(x): 0.8895    D(G(z)): 0.4801 / 0.0070
[0/5][950/1583] Loss_D: 0.5154  Loss_G: 3.4813  D(x): 0.7722    D(G(z)): 0.1549 / 0.0449
[0/5][1000/1583]        Loss_D: 0.8468  Loss_G: 6.6179  D(x): 0.8914    D(G(z)): 0.4262 / 0.0030
[0/5][1050/1583]        Loss_D: 0.4425  Loss_G: 3.9902  D(x): 0.8307    D(G(z)): 0.1872 / 0.0270
[0/5][1100/1583]        Loss_D: 0.6800  Loss_G: 4.3945  D(x): 0.8244    D(G(z)): 0.3022 / 0.0223
[0/5][1150/1583]        Loss_D: 0.7227  Loss_G: 2.2669  D(x): 0.6177    D(G(z)): 0.0625 / 0.1613
[0/5][1200/1583]        Loss_D: 0.4061  Loss_G: 5.7088  D(x): 0.9269    D(G(z)): 0.2367 / 0.0071
[0/5][1250/1583]        Loss_D: 0.8514  Loss_G: 3.8994  D(x): 0.7686    D(G(z)): 0.3573 / 0.0330
[0/5][1300/1583]        Loss_D: 0.5323  Loss_G: 3.0046  D(x): 0.7102    D(G(z)): 0.0742 / 0.1138
[0/5][1350/1583]        Loss_D: 0.5793  Loss_G: 4.6804  D(x): 0.8722    D(G(z)): 0.2877 / 0.0169
[0/5][1400/1583]        Loss_D: 0.6849  Loss_G: 5.4391  D(x): 0.8974    D(G(z)): 0.3630 / 0.0100
[0/5][1450/1583]        Loss_D: 1.1515  Loss_G: 6.0096  D(x): 0.8054    D(G(z)): 0.5186 / 0.0049
[0/5][1500/1583]        Loss_D: 0.4771  Loss_G: 3.3768  D(x): 0.8590    D(G(z)): 0.2357 / 0.0541
[0/5][1550/1583]        Loss_D: 0.6947  Loss_G: 5.9660  D(x): 0.8989    D(G(z)): 0.3671 / 0.0064
[1/5][0/1583]   Loss_D: 0.5001  Loss_G: 3.9243  D(x): 0.8238    D(G(z)): 0.2077 / 0.0377
[1/5][50/1583]  Loss_D: 0.4494  Loss_G: 4.4726  D(x): 0.8514    D(G(z)): 0.2159 / 0.0187
[1/5][100/1583] Loss_D: 0.4519  Loss_G: 2.6781  D(x): 0.7331    D(G(z)): 0.0688 / 0.0948
[1/5][150/1583] Loss_D: 0.3808  Loss_G: 3.6005  D(x): 0.8827    D(G(z)): 0.1908 / 0.0456
[1/5][200/1583] Loss_D: 0.4373  Loss_G: 4.0625  D(x): 0.8281    D(G(z)): 0.1719 / 0.0306
[1/5][250/1583] Loss_D: 0.5906  Loss_G: 3.1507  D(x): 0.7603    D(G(z)): 0.1952 / 0.0682
[1/5][300/1583] Loss_D: 1.4315  Loss_G: 6.2042  D(x): 0.9535    D(G(z)): 0.6480 / 0.0051
[1/5][350/1583] Loss_D: 0.8529  Loss_G: 1.2236  D(x): 0.5291    D(G(z)): 0.0552 / 0.3978
[1/5][400/1583] Loss_D: 0.8166  Loss_G: 5.3178  D(x): 0.8460    D(G(z)): 0.3872 / 0.0104
[1/5][450/1583] Loss_D: 0.6699  Loss_G: 2.4998  D(x): 0.6921    D(G(z)): 0.1719 / 0.1220
[1/5][500/1583] Loss_D: 0.4986  Loss_G: 4.3763  D(x): 0.8835    D(G(z)): 0.2643 / 0.0212
[1/5][550/1583] Loss_D: 0.9149  Loss_G: 5.6209  D(x): 0.9476    D(G(z)): 0.5069 / 0.0088
[1/5][600/1583] Loss_D: 0.5116  Loss_G: 3.4946  D(x): 0.8368    D(G(z)): 0.2444 / 0.0488
[1/5][650/1583] Loss_D: 0.4408  Loss_G: 2.8180  D(x): 0.7795    D(G(z)): 0.1262 / 0.0926
[1/5][700/1583] Loss_D: 0.3821  Loss_G: 3.5735  D(x): 0.8237    D(G(z)): 0.1387 / 0.0432
[1/5][750/1583] Loss_D: 0.5042  Loss_G: 2.4218  D(x): 0.6897    D(G(z)): 0.0541 / 0.1319
[1/5][800/1583] Loss_D: 1.3208  Loss_G: 4.7094  D(x): 0.9466    D(G(z)): 0.5988 / 0.0158
[1/5][850/1583] Loss_D: 0.3780  Loss_G: 2.9969  D(x): 0.8475    D(G(z)): 0.1662 / 0.0648
[1/5][900/1583] Loss_D: 0.4350  Loss_G: 3.2726  D(x): 0.8306    D(G(z)): 0.1925 / 0.0531
[1/5][950/1583] Loss_D: 0.4228  Loss_G: 2.5205  D(x): 0.7438    D(G(z)): 0.0493 / 0.1090
[1/5][1000/1583]        Loss_D: 0.4680  Loss_G: 4.4448  D(x): 0.8652    D(G(z)): 0.2433 / 0.0190
[1/5][1050/1583]        Loss_D: 0.4261  Loss_G: 2.7076  D(x): 0.7683    D(G(z)): 0.1049 / 0.0999
[1/5][1100/1583]        Loss_D: 0.5115  Loss_G: 1.9458  D(x): 0.6730    D(G(z)): 0.0449 / 0.2070
[1/5][1150/1583]        Loss_D: 0.6619  Loss_G: 2.0092  D(x): 0.6320    D(G(z)): 0.1115 / 0.1926
[1/5][1200/1583]        Loss_D: 0.4824  Loss_G: 2.0529  D(x): 0.7735    D(G(z)): 0.1647 / 0.1758
[1/5][1250/1583]        Loss_D: 0.4529  Loss_G: 4.3564  D(x): 0.9270    D(G(z)): 0.2881 / 0.0223
[1/5][1300/1583]        Loss_D: 0.5469  Loss_G: 2.5909  D(x): 0.7217    D(G(z)): 0.1403 / 0.1101
[1/5][1350/1583]        Loss_D: 0.4525  Loss_G: 1.4998  D(x): 0.7336    D(G(z)): 0.0904 / 0.2715
[1/5][1400/1583]        Loss_D: 0.5267  Loss_G: 2.3458  D(x): 0.7594    D(G(z)): 0.1700 / 0.1311
[1/5][1450/1583]        Loss_D: 0.4700  Loss_G: 3.7640  D(x): 0.9059    D(G(z)): 0.2852 / 0.0316
[1/5][1500/1583]        Loss_D: 0.7703  Loss_G: 1.4253  D(x): 0.5655    D(G(z)): 0.0683 / 0.3071
[1/5][1550/1583]        Loss_D: 0.5535  Loss_G: 2.4315  D(x): 0.6773    D(G(z)): 0.0834 / 0.1280
[2/5][0/1583]   Loss_D: 0.7237  Loss_G: 3.4642  D(x): 0.8383    D(G(z)): 0.3687 / 0.0442
[2/5][50/1583]  Loss_D: 0.4401  Loss_G: 2.4749  D(x): 0.7939    D(G(z)): 0.1526 / 0.1107
[2/5][100/1583] Loss_D: 0.7470  Loss_G: 1.8611  D(x): 0.5830    D(G(z)): 0.0871 / 0.2102
[2/5][150/1583] Loss_D: 0.7930  Loss_G: 1.3743  D(x): 0.5201    D(G(z)): 0.0343 / 0.3171
[2/5][200/1583] Loss_D: 0.5059  Loss_G: 2.9394  D(x): 0.8044    D(G(z)): 0.2128 / 0.0739
[2/5][250/1583] Loss_D: 0.5873  Loss_G: 1.6961  D(x): 0.6329    D(G(z)): 0.0561 / 0.2297
[2/5][300/1583] Loss_D: 0.5341  Loss_G: 1.9229  D(x): 0.7022    D(G(z)): 0.1145 / 0.1921
[2/5][350/1583] Loss_D: 0.7095  Loss_G: 1.3619  D(x): 0.5855    D(G(z)): 0.0707 / 0.3038
[2/5][400/1583] Loss_D: 0.5163  Loss_G: 3.0209  D(x): 0.8695    D(G(z)): 0.2828 / 0.0657
[2/5][450/1583] Loss_D: 0.5413  Loss_G: 3.5822  D(x): 0.8450    D(G(z)): 0.2748 / 0.0387
[2/5][500/1583] Loss_D: 0.4929  Loss_G: 2.1009  D(x): 0.7645    D(G(z)): 0.1692 / 0.1552
[2/5][550/1583] Loss_D: 0.5042  Loss_G: 2.5833  D(x): 0.7047    D(G(z)): 0.0888 / 0.1107
[2/5][600/1583] Loss_D: 0.4562  Loss_G: 2.5190  D(x): 0.8316    D(G(z)): 0.2151 / 0.0987
[2/5][650/1583] Loss_D: 0.9564  Loss_G: 2.5315  D(x): 0.7157    D(G(z)): 0.3861 / 0.1153
[2/5][700/1583] Loss_D: 0.6706  Loss_G: 3.0991  D(x): 0.7382    D(G(z)): 0.2497 / 0.0603
[2/5][750/1583] Loss_D: 0.5803  Loss_G: 2.9059  D(x): 0.7523    D(G(z)): 0.2092 / 0.0785
[2/5][800/1583] Loss_D: 0.8315  Loss_G: 3.7972  D(x): 0.9184    D(G(z)): 0.4829 / 0.0325
[2/5][850/1583] Loss_D: 0.6177  Loss_G: 2.2548  D(x): 0.7526    D(G(z)): 0.2470 / 0.1306
[2/5][900/1583] Loss_D: 0.7398  Loss_G: 3.2303  D(x): 0.8604    D(G(z)): 0.3999 / 0.0572
[2/5][950/1583] Loss_D: 0.7914  Loss_G: 1.5464  D(x): 0.6001    D(G(z)): 0.1507 / 0.2605
[2/5][1000/1583]        Loss_D: 0.9693  Loss_G: 4.0590  D(x): 0.9251    D(G(z)): 0.5270 / 0.0275
[2/5][1050/1583]        Loss_D: 0.5805  Loss_G: 2.1703  D(x): 0.6749    D(G(z)): 0.1185 / 0.1465
[2/5][1100/1583]        Loss_D: 0.8626  Loss_G: 0.9626  D(x): 0.5259    D(G(z)): 0.0865 / 0.4571
[2/5][1150/1583]        Loss_D: 0.7256  Loss_G: 4.0511  D(x): 0.9135    D(G(z)): 0.4172 / 0.0300
[2/5][1200/1583]        Loss_D: 0.5937  Loss_G: 3.8598  D(x): 0.8982    D(G(z)): 0.3440 / 0.0320
[2/5][1250/1583]        Loss_D: 0.6144  Loss_G: 1.8087  D(x): 0.6660    D(G(z)): 0.1424 / 0.2062
[2/5][1300/1583]        Loss_D: 0.8017  Loss_G: 1.2032  D(x): 0.5450    D(G(z)): 0.0746 / 0.3562
[2/5][1350/1583]        Loss_D: 0.7563  Loss_G: 1.6629  D(x): 0.6002    D(G(z)): 0.1437 / 0.2351
[2/5][1400/1583]        Loss_D: 0.7457  Loss_G: 1.5831  D(x): 0.6069    D(G(z)): 0.1493 / 0.2511
[2/5][1450/1583]        Loss_D: 0.6697  Loss_G: 2.8194  D(x): 0.7597    D(G(z)): 0.2677 / 0.0804
[2/5][1500/1583]        Loss_D: 0.5681  Loss_G: 2.2054  D(x): 0.7171    D(G(z)): 0.1626 / 0.1358
[2/5][1550/1583]        Loss_D: 0.6741  Loss_G: 2.9537  D(x): 0.8373    D(G(z)): 0.3492 / 0.0760
[3/5][0/1583]   Loss_D: 1.0265  Loss_G: 1.1510  D(x): 0.4474    D(G(z)): 0.0685 / 0.3681
[3/5][50/1583]  Loss_D: 0.6190  Loss_G: 1.9895  D(x): 0.7136    D(G(z)): 0.1900 / 0.1705
[3/5][100/1583] Loss_D: 0.7754  Loss_G: 3.2350  D(x): 0.8117    D(G(z)): 0.3782 / 0.0535
[3/5][150/1583] Loss_D: 1.8367  Loss_G: 5.1895  D(x): 0.9408    D(G(z)): 0.7750 / 0.0095
[3/5][200/1583] Loss_D: 0.6821  Loss_G: 2.4254  D(x): 0.7709    D(G(z)): 0.3020 / 0.1152
[3/5][250/1583] Loss_D: 1.1273  Loss_G: 4.2718  D(x): 0.9373    D(G(z)): 0.5970 / 0.0206
[3/5][300/1583] Loss_D: 0.5944  Loss_G: 2.2868  D(x): 0.7547    D(G(z)): 0.2306 / 0.1256
[3/5][350/1583] Loss_D: 0.7941  Loss_G: 3.4394  D(x): 0.7585    D(G(z)): 0.3472 / 0.0437
[3/5][400/1583] Loss_D: 0.7588  Loss_G: 3.7067  D(x): 0.8416    D(G(z)): 0.3981 / 0.0347
[3/5][450/1583] Loss_D: 0.7671  Loss_G: 2.7477  D(x): 0.7932    D(G(z)): 0.3686 / 0.0823
[3/5][500/1583] Loss_D: 1.0295  Loss_G: 1.6097  D(x): 0.6318    D(G(z)): 0.3568 / 0.2429
[3/5][550/1583] Loss_D: 0.5186  Loss_G: 2.1037  D(x): 0.7998    D(G(z)): 0.2266 / 0.1473
[3/5][600/1583] Loss_D: 0.5855  Loss_G: 1.9740  D(x): 0.6520    D(G(z)): 0.0972 / 0.1770
[3/5][650/1583] Loss_D: 0.5954  Loss_G: 2.2880  D(x): 0.7819    D(G(z)): 0.2611 / 0.1234
[3/5][700/1583] Loss_D: 1.0706  Loss_G: 1.1761  D(x): 0.4335    D(G(z)): 0.0681 / 0.3609
[3/5][750/1583] Loss_D: 0.7128  Loss_G: 1.5402  D(x): 0.5909    D(G(z)): 0.0993 / 0.2702
[3/5][800/1583] Loss_D: 0.8883  Loss_G: 2.4234  D(x): 0.8035    D(G(z)): 0.4176 / 0.1206
[3/5][850/1583] Loss_D: 0.7085  Loss_G: 2.7516  D(x): 0.7502    D(G(z)): 0.2918 / 0.0878
[3/5][900/1583] Loss_D: 0.8472  Loss_G: 3.5935  D(x): 0.8553    D(G(z)): 0.4403 / 0.0397
[3/5][950/1583] Loss_D: 0.4454  Loss_G: 2.3438  D(x): 0.7763    D(G(z)): 0.1519 / 0.1226
[3/5][1000/1583]        Loss_D: 1.2425  Loss_G: 1.0600  D(x): 0.3930    D(G(z)): 0.0889 / 0.4122
[3/5][1050/1583]        Loss_D: 1.0465  Loss_G: 1.4973  D(x): 0.4618    D(G(z)): 0.1165 / 0.2906
[3/5][1100/1583]        Loss_D: 0.5885  Loss_G: 2.7760  D(x): 0.8852    D(G(z)): 0.3356 / 0.0854
[3/5][1150/1583]        Loss_D: 0.5940  Loss_G: 2.5669  D(x): 0.7481    D(G(z)): 0.2109 / 0.1001
[3/5][1200/1583]        Loss_D: 0.9074  Loss_G: 3.0569  D(x): 0.7762    D(G(z)): 0.4214 / 0.0644
[3/5][1250/1583]        Loss_D: 0.7487  Loss_G: 3.0959  D(x): 0.8534    D(G(z)): 0.4052 / 0.0601
[3/5][1300/1583]        Loss_D: 0.5956  Loss_G: 2.5807  D(x): 0.7263    D(G(z)): 0.1887 / 0.1039
[3/5][1350/1583]        Loss_D: 1.7038  Loss_G: 0.6425  D(x): 0.2487    D(G(z)): 0.0507 / 0.5746
[3/5][1400/1583]        Loss_D: 0.5863  Loss_G: 1.7754  D(x): 0.6609    D(G(z)): 0.1044 / 0.2069
[3/5][1450/1583]        Loss_D: 0.4925  Loss_G: 2.7946  D(x): 0.7665    D(G(z)): 0.1660 / 0.0864
[3/5][1500/1583]        Loss_D: 0.6616  Loss_G: 2.9829  D(x): 0.9091    D(G(z)): 0.3944 / 0.0654
[3/5][1550/1583]        Loss_D: 1.2097  Loss_G: 1.0897  D(x): 0.4433    D(G(z)): 0.1887 / 0.3918
[4/5][0/1583]   Loss_D: 0.5653  Loss_G: 2.1567  D(x): 0.6781    D(G(z)): 0.1105 / 0.1464
[4/5][50/1583]  Loss_D: 0.7300  Loss_G: 1.7770  D(x): 0.7472    D(G(z)): 0.3011 / 0.2104
[4/5][100/1583] Loss_D: 0.5735  Loss_G: 1.7644  D(x): 0.6723    D(G(z)): 0.1219 / 0.2092
[4/5][150/1583] Loss_D: 1.0598  Loss_G: 0.6708  D(x): 0.4336    D(G(z)): 0.0800 / 0.5560
[4/5][200/1583] Loss_D: 0.6098  Loss_G: 2.0432  D(x): 0.6658    D(G(z)): 0.1378 / 0.1655
[4/5][250/1583] Loss_D: 0.7227  Loss_G: 1.6686  D(x): 0.5750    D(G(z)): 0.0759 / 0.2371
[4/5][300/1583] Loss_D: 0.8077  Loss_G: 2.7966  D(x): 0.7647    D(G(z)): 0.3703 / 0.0771
[4/5][350/1583] Loss_D: 0.7086  Loss_G: 1.3171  D(x): 0.5890    D(G(z)): 0.1103 / 0.3079
[4/5][400/1583] Loss_D: 0.6418  Loss_G: 2.3383  D(x): 0.6284    D(G(z)): 0.1060 / 0.1303
[4/5][450/1583] Loss_D: 0.7046  Loss_G: 3.6138  D(x): 0.8926    D(G(z)): 0.4057 / 0.0354
[4/5][500/1583] Loss_D: 1.7355  Loss_G: 2.1156  D(x): 0.5473    D(G(z)): 0.4802 / 0.2431
[4/5][550/1583] Loss_D: 0.6479  Loss_G: 2.5634  D(x): 0.7987    D(G(z)): 0.3139 / 0.0956
[4/5][600/1583] Loss_D: 0.5650  Loss_G: 1.9429  D(x): 0.6772    D(G(z)): 0.1203 / 0.1713
[4/5][650/1583] Loss_D: 0.9440  Loss_G: 3.2048  D(x): 0.7789    D(G(z)): 0.4225 / 0.0533
[4/5][700/1583] Loss_D: 0.5745  Loss_G: 2.5296  D(x): 0.7004    D(G(z)): 0.1496 / 0.1075
[4/5][750/1583] Loss_D: 0.7448  Loss_G: 1.5417  D(x): 0.5864    D(G(z)): 0.1132 / 0.2617
[4/5][800/1583] Loss_D: 0.5315  Loss_G: 2.4287  D(x): 0.7047    D(G(z)): 0.1254 / 0.1159
[4/5][850/1583] Loss_D: 1.1006  Loss_G: 0.9708  D(x): 0.4101    D(G(z)): 0.0549 / 0.4226
[4/5][900/1583] Loss_D: 0.8635  Loss_G: 1.1581  D(x): 0.5057    D(G(z)): 0.0711 / 0.3618
[4/5][950/1583] Loss_D: 0.5915  Loss_G: 2.8714  D(x): 0.8364    D(G(z)): 0.3005 / 0.0727
[4/5][1000/1583]        Loss_D: 1.5283  Loss_G: 0.4922  D(x): 0.2847    D(G(z)): 0.0228 / 0.6394
[4/5][1050/1583]        Loss_D: 0.7626  Loss_G: 1.7556  D(x): 0.5865    D(G(z)): 0.1282 / 0.2159
[4/5][1100/1583]        Loss_D: 0.6571  Loss_G: 1.7024  D(x): 0.6470    D(G(z)): 0.1505 / 0.2243
[4/5][1150/1583]        Loss_D: 0.7735  Loss_G: 1.2737  D(x): 0.5851    D(G(z)): 0.1427 / 0.3350
[4/5][1200/1583]        Loss_D: 0.4104  Loss_G: 3.2208  D(x): 0.8835    D(G(z)): 0.2290 / 0.0520
[4/5][1250/1583]        Loss_D: 0.4898  Loss_G: 2.1841  D(x): 0.7873    D(G(z)): 0.1912 / 0.1451
[4/5][1300/1583]        Loss_D: 0.6657  Loss_G: 2.5232  D(x): 0.6504    D(G(z)): 0.1283 / 0.1273
[4/5][1350/1583]        Loss_D: 1.0126  Loss_G: 4.9254  D(x): 0.9131    D(G(z)): 0.5439 / 0.0115
[4/5][1400/1583]        Loss_D: 1.2293  Loss_G: 5.6073  D(x): 0.9281    D(G(z)): 0.6209 / 0.0062
[4/5][1450/1583]        Loss_D: 0.3908  Loss_G: 2.4251  D(x): 0.7873    D(G(z)): 0.1181 / 0.1124
[4/5][1500/1583]        Loss_D: 1.1000  Loss_G: 0.9861  D(x): 0.4594    D(G(z)): 0.1542 / 0.4324
[4/5][1550/1583]        Loss_D: 0.9504  Loss_G: 3.8109  D(x): 0.9275    D(G(z)): 0.5386 / 0.0277

Results

Finally, let's check out how we did. Here, we will look at three different results. First, we will see how D and G's losses changed during training. Second, we will visualize G's output on the fixed_noise batch for every epoch. And third, we will look at a batch of real data next to a batch of fake data from G.

Loss versus training iteration

Below is a plot of D & G's losses versus training iterations.

plt.figure(figsize=(10,5))
plt.title("Generator and Discriminator Loss During Training")
plt.plot(G_losses,label="G")
plt.plot(D_losses,label="D")
plt.xlabel("iterations")
plt.ylabel("Loss")
plt.legend()
plt.show()

../_images/sphx_glr_dcgan_faces_tutorial_002.png

Visualization of G's progression

Remember how we saved the generator's output on the fixed_noise batch after every epoch of training. Now, we can visualize G's training progression with an animation. Press the play button to start the animation.

#%%capture
fig = plt.figure(figsize=(8,8))
plt.axis("off")
ims = [[plt.imshow(np.transpose(i,(1,2,0)), animated=True)] for i in img_list]
ani = animation.ArtistAnimation(fig, ims, interval=1000, repeat_delay=1000, blit=True)


HTML(ani.to_jshtml())

../_images/sphx_glr_dcgan_faces_tutorial_003.png

Real Images vs. Fake Images

Finally, let's take a look at some real images and fake images side by side.

# Grab a batch of real images from the dataloader
real_batch = next(iter(dataloader))


# Plot the real images
plt.figure(figsize=(15,15))
plt.subplot(1,2,1)
plt.axis("off")
plt.title("Real Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(),(1,2,0)))


# Plot the fake images from the last epoch
plt.subplot(1,2,2)
plt.axis("off")
plt.title("Fake Images")
plt.imshow(np.transpose(img_list[-1],(1,2,0)))
plt.show()

../_images/sphx_glr_dcgan_faces_tutorial_004.png

Where to Go Next

We have reached the end of our journey, but there are several places you could go from here. You could:

  • Train for longer to see how good the results get
  • Modify this model to take a different dataset and possibly change the size of the images and the model architecture
  • Check out some other cool GAN projects here
  • Create GANs that generate music

Total running time of the script: (28 minutes 39.288 seconds)

Download Python source code: dcgan_faces_tutorial.py Download Jupyter notebook: dcgan_faces_tutorial.ipynb

Gallery generated by Sphinx-Gallery

