python — How to fine-tune FC1 and FC2 in VGG-19 with PyTorch?

nbnkbykc · asked 4 months ago · Python · 1 answer

I need to fine-tune a pretrained VGG-19 with PyTorch. I have two specific tasks:
1. Fine-tune the weights of all layers in the VGG-19 network.
2. Fine-tune only the weights of the last two fully connected layers (FC1 and FC2). That is all the information I was given.
The VGG-19 structure is:

VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace=True)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace=True)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace=True)
    (16): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (17): ReLU(inplace=True)
    (18): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (19): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace=True)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace=True)
    (23): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (24): ReLU(inplace=True)
    (25): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (26): ReLU(inplace=True)
    (27): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace=True)
    (30): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (31): ReLU(inplace=True)
    (32): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (33): ReLU(inplace=True)
    (34): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (35): ReLU(inplace=True)
    (36): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace=True)
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace=True)
    (5): Dropout(p=0.5, inplace=False)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)

I tried the first task like this, and I believe it is correct:

import torch.nn as nn
from torchvision import models

model = models.vgg19(pretrained=True)

for param in model.parameters():
    param.requires_grad = True

model.classifier[6] = nn.Linear(4096, len(class_to_idx))


But I am not sure how to do the second task. This is my attempt, which I am unsure about:

model2 = models.vgg19(pretrained=True)

for param in model2.parameters():
    param.requires_grad = False

# Set requires_grad to True for FC1 and FC2
for param in model2.classifier[0].parameters():
    param.requires_grad = True

for param in model2.classifier[3].parameters():
    param.requires_grad = True

# Modify the last fully connected layers for the number of classes in your dataset
model2.classifier[6] = nn.Linear(4096, len(class_to_idx))


How should I do the second part? Should I keep model2.classifier[6], or define a new Sequential for the classifier?

qqrboqgw1#

Your approach is basically correct. The steps for the second task are:
1. Load the pretrained VGG-19 model.
2. Freeze the weights of all layers except the last two fully connected layers (FC1 and FC2). If you define a new Sequential instead, its layers will have no pretrained weights, so you would be training those layers from scratch rather than fine-tuning existing weights.
3. Optionally, also replace the final layer to match the number of classes in your dataset.
As you showed above, the classifier part of VGG-19 has this structure:

(classifier): Sequential(
  (0): Linear(in_features=25088, out_features=4096, bias=True)  # FC1
  (1): ReLU(inplace=True)
  (2): Dropout(p=0.5, inplace=False)
  (3): Linear(in_features=4096, out_features=4096, bias=True)   # FC2
  (4): ReLU(inplace=True)
  (5): Dropout(p=0.5, inplace=False)
  (6): Linear(in_features=4096, out_features=1000, bias=True)   # Original output layer
)

With the code below, only FC1 and FC2 (and the final classifier layer, if you choose to replace it) are updated during training, while the rest of the network stays frozen:

import torch.nn as nn
from torchvision import models
from torchvision.models import VGG19_Weights

# Load the pretrained model
# (since torchvision 0.13, the 'pretrained' argument is deprecated in favor of 'weights')
vgg19_model = models.vgg19(weights=VGG19_Weights.IMAGENET1K_V1)

# Freeze the weights of all layers in the model
for param in vgg19_model.parameters():  # iterate over all parameters of the model
    param.requires_grad = False  # these layers will not be updated during training

# Unfreeze the weights of the last two fully connected layers (FC1 and FC2)
for param in vgg19_model.classifier[0].parameters():
    param.requires_grad = True  # will be updated during training
for param in vgg19_model.classifier[3].parameters():
    param.requires_grad = True  # will be updated during training

# (Recommended) Replace the last layer to match your number of classes
class_to_idx = TODO  # your dataset's class-name-to-index mapping
num_classes = len(class_to_idx)
vgg19_model.classifier[6] = nn.Linear(4096, num_classes)


The last fully connected layer (which originally has 1000 output features for ImageNet) is replaced with one whose output size equals the number of classes in your dataset; this is the recommended (if not strictly required) approach. If your task has a different number of classes, you must adjust the model to output the right number of logits. Also note that even if your task happens to have the same number of classes as ImageNet, the classes themselves are likely different, so retraining the output layer helps the model discriminate the classes specific to your task.
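One detail the snippet above leaves out is the optimizer: a common pattern is to pass only the trainable parameters to it, so frozen weights are skipped entirely (which also saves optimizer state memory). A minimal sketch of that pattern, using a small stand-in Sequential with made-up layer sizes instead of the full VGG-19 head, so it runs without downloading pretrained weights:

```python
import torch
import torch.nn as nn

# Toy stand-in for the VGG-19 classifier head (hypothetical small sizes;
# the real head uses 25088 and 4096 features).
classifier = nn.Sequential(
    nn.Linear(8, 4),   # index 0 -> stands in for FC1
    nn.ReLU(),
    nn.Linear(4, 4),   # index 2 -> stands in for FC2
    nn.ReLU(),
    nn.Linear(4, 2),   # index 4 -> stands in for the output layer
)

# Freeze everything, then unfreeze the first two Linear layers,
# mirroring the pattern from the answer above.
for p in classifier.parameters():
    p.requires_grad = False
for p in classifier[0].parameters():
    p.requires_grad = True
for p in classifier[2].parameters():
    p.requires_grad = True

# Hand ONLY the trainable parameters to the optimizer.
trainable = [p for p in classifier.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-3)

print(len(trainable))  # 4 tensors: weight + bias of the two unfrozen layers
```

Passing the full `classifier.parameters()` would also work (frozen parameters receive no gradients), but filtering makes the intent explicit and keeps optimizers with per-parameter state, such as Adam, from allocating buffers for frozen weights.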
