
PyTorch: an 8-layer neural network for CIFAR-10 image classification with 94.71% validation accuracy

Updated: March 25, 2023, 10:05:34   Author: 雪地(>^ω^<)
This article describes an 8-layer neural network built in PyTorch for CIFAR-10 image classification that reaches 94.71% validation accuracy. It is offered as a reference; corrections and suggestions for anything wrong or incomplete are welcome.

Experimental environment

  • PyTorch 1.7.0
  • torchvision 0.8.2
  • Python 3.8
  • CUDA 10.2 + cuDNN v7.6.5
  • Windows 10 + PyCharm
  • GTX 1660, 6 GB

The network uses the simplest VGG-like structure: it consists entirely of 3×3 convolutions and max-pooling layers, followed by a single fully connected layer for classification. The resulting model is only about 18 MB.

Network architecture diagram

[Figure: structure of the 8-layer neural network]

Building the network in PyTorch

class Block(nn.Module):
    def __init__(self, inchannel, outchannel, res=True, stride=1):
        super(Block, self).__init__()
        self.res = res     # whether to use a residual (shortcut) connection
        self.left = nn.Sequential(
            nn.Conv2d(inchannel, outchannel, kernel_size=3, padding=1, stride=stride, bias=False),
            nn.BatchNorm2d(outchannel),
            nn.ReLU(inplace=True),
            nn.Conv2d(outchannel, outchannel, kernel_size=3, padding=1, stride=1, bias=False),
            nn.BatchNorm2d(outchannel),
        )
        if stride != 1 or inchannel != outchannel:
            self.shortcut = nn.Sequential(
                nn.Conv2d(inchannel, outchannel, kernel_size=1, bias=False),
                nn.BatchNorm2d(outchannel),
            )
        else:
            self.shortcut = nn.Sequential()

        self.relu = nn.Sequential(
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        out = self.left(x)
        if self.res:
            out += self.shortcut(x)
        out = self.relu(out)
        return out


class myModel(nn.Module):
    def __init__(self, cfg=[64, 'M', 128,  'M', 256, 'M', 512, 'M'], res=True):
        super(myModel, self).__init__()
        self.res = res       # whether to use residual connections
        self.cfg = cfg       # layer configuration list
        self.inchannel = 3   # initial number of input channels
        self.futures = self.make_layer()
        # fully connected classifier that follows the convolutional layers:
        self.classifier = nn.Sequential(nn.Dropout(0.4),            # two FC layers performed slightly worse
                                        nn.Linear(4 * 512, 10), )   # final FC layer; CIFAR-10 has 10 classes

    def make_layer(self):
        layers = []
        for v in self.cfg:
            if v == 'M':
                layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
            else:
                layers.append(Block(self.inchannel, v, self.res))
                self.inchannel = v    # the next block's input channels equal this block's output channels
        return nn.Sequential(*layers)

    def forward(self, x):
        out = self.futures(x)
        # view(out.size(0), -1): flatten from (N, C, H, W) to (N, C*H*W)
        out = out.view(out.size(0), -1)
        out = self.classifier(out)
        return out

The network can easily be made residual: just set res=True when the model is constructed. The cfg list can also be edited to change the number of layers.
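
For example, the following sketch (illustrative variable names only; it assumes the Block and myModel classes above and that torch is imported) builds both variants and checks the flattened feature size that the final linear layer expects:

import torch

cfg8 = [64, 'M', 128, 'M', 256, 'M', 512, 'M']                    # the 8-layer configuration above
cfg14 = [64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M']    # the 14-layer variant discussed below

model_plain = myModel(cfg=cfg8, res=False)   # plain VGG-like network, no shortcuts
model_res = myModel(cfg=cfg14, res=True)     # residual version

# Each 'M' halves the spatial size: 32 -> 16 -> 8 -> 4 -> 2, so the last 512 feature maps
# are 2x2 and the classifier sees 512 * 2 * 2 = 4 * 512 inputs.
x = torch.randn(1, 3, 32, 32)
print(model_plain.futures(x).shape)          # expected: torch.Size([1, 512, 2, 2])

# rough parameter counts, useful for comparing the two variants
for name, m in [('8-layer', model_plain), ('14-layer', model_res)]:
    print(name, round(sum(p.numel() for p in m.parameters()) / 1e6, 2), 'M parameters')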

Training in PyTorch

The dataset is CIFAR-10, which contains 60,000 labeled 32×32 color images in 10 classes, 6,000 per class. 50,000 images (5,000 per class) are used for training and 10,000 (1,000 per class) for testing.
The training strategy is as follows:

1. Optimizer

optim.SGD with momentum=0.9. Adam often converges faster, but because its learning rate is adaptive it can fail to settle at a good minimum late in training, so SGD with a manually scheduled learning rate is used instead; many competitions and papers use the same strategy. weight_decay=5e-3 applies fairly strong L2 regularization to reduce overfitting.

# define the loss function and optimizer
loss_func = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=LR, momentum=0.9, weight_decay=5e-3)

2. Learning rate

optim.lr_scheduler.MultiStepLR with milestones=[int(num_epochs * 0.56), int(num_epochs * 0.78)] and gamma=0.1, i.e. the learning rate drops to 0.1× its previous value at 56% and 78% of the total epochs. With num_epochs = 200 this gives milestones at epochs 112 and 156, so the learning rate goes 0.01 → 0.001 → 0.0001.

# MultiStepLR learning-rate schedule:
scheduler = optim.lr_scheduler.MultiStepLR(optimizer=optimizer,
                   milestones=[int(num_epochs * 0.56), int(num_epochs * 0.78)],
                   gamma=0.1, last_epoch=-1)

Remember to call scheduler.step() after every training epoch, otherwise the learning rate is never updated; the current learning rate can be checked with get_last_lr().

# update the learning rate and print the current value
scheduler.step()
print('\t last_lr:', scheduler.get_last_lr())
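
For orientation, here is a minimal sketch of where that call sits in the epoch loop (train_one_epoch and validate are hypothetical placeholder helpers; the complete loop appears in the full script further below):

for epoch in range(num_epochs):
    train_one_epoch(model, train_data, loss_func, optimizer)   # hypothetical helper: one pass over the training set
    validate(model, valid_data, loss_func)                     # hypothetical helper: evaluation pass
    scheduler.step()                                           # advance MultiStepLR once per epoch
    print('\t last_lr:', scheduler.get_last_lr())              # check the current learning rate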

3. Data augmentation

Experiments on CIFAR-10 showed that random horizontal flipping, random erasing, and random cropping effectively improve validation accuracy, whereas rotation, color jitter, and similar transforms do not.

     norm_mean = [0.485, 0.456, 0.406]      # per-channel mean
     norm_std = [0.229, 0.224, 0.225]       # per-channel standard deviation
     transforms.Normalize(norm_mean, norm_std),                    # standardize with the statistics above
     transforms.RandomHorizontalFlip(),                            # random horizontal flip
     transforms.RandomErasing(scale=(0.04, 0.2), ratio=(0.5, 2)),  # random erasing (occlusion)
     transforms.RandomCrop(32, padding=4)                          # random crop with 4-pixel padding
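
Putting these together, the training transform might be composed as follows (this mirrors the Compose used in the full script further below; RandomErasing operates on tensors, so it must come after ToTensor, and with torchvision 0.8 the flip and crop transforms accept tensors as well):

transform_train = transforms.Compose([transforms.ToTensor(),                                        # PIL image -> tensor in [0, 1]
                                      transforms.Normalize(norm_mean, norm_std),                    # standardize channels
                                      transforms.RandomHorizontalFlip(),                            # random horizontal flip
                                      transforms.RandomErasing(scale=(0.04, 0.2), ratio=(0.5, 2)),  # random erasing
                                      transforms.RandomCrop(32, padding=4)])                        # random crop with padding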

4. Hyperparameters

batch_size = 512     # uses roughly 4 GB of GPU memory
num_epochs = 200     # number of training epochs
LR = 0.01            # initial learning rate

Result: best_acc = 94.71%


In addition, after changing the network to a 14-layer residual version, accuracy rose to 95.56%, but the model size also grew from 18 MB to 43 MB.

The complete code for the 14-layer residual network follows; for the 8-layer version, only cfg and the res argument at initialization need to change:

cfg = [64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M'] becomes [64, 'M', 128, 'M', 256, 'M', 512, 'M']

# *_* coding : UTF-8 *_*
# Author: csu·pan-_-||
# Created: 2020/12/29 15:17
# File: battey_class.py
# IDE: PyCharm
# Description: custom CNN for CIFAR-10 classification

import torch
from torchvision import datasets, transforms
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
import onnx
import time
import numpy as np
import matplotlib.pyplot as plt


class Block(nn.Module):
    def __init__(self, inchannel, outchannel, res=True, stride=1):
        super(Block, self).__init__()
        self.res = res     # whether to use a residual (shortcut) connection
        self.left = nn.Sequential(
            nn.Conv2d(inchannel, outchannel, kernel_size=3, padding=1, stride=stride, bias=False),
            nn.BatchNorm2d(outchannel),
            nn.ReLU(inplace=True),
            nn.Conv2d(outchannel, outchannel, kernel_size=3, padding=1, stride=1, bias=False),
            nn.BatchNorm2d(outchannel),
        )
        if stride != 1 or inchannel != outchannel:
            self.shortcut = nn.Sequential(
                nn.Conv2d(inchannel, outchannel, kernel_size=1, bias=False),
                nn.BatchNorm2d(outchannel),
            )
        else:
            self.shortcut = nn.Sequential()

        self.relu = nn.Sequential(
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        out = self.left(x)
        if self.res:
            out += self.shortcut(x)
        out = self.relu(out)
        return out


class myModel(nn.Module):
    def __init__(self, cfg=[64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512,'M'], res=True):
        super(myModel, self).__init__()
        self.res = res       # whether to use residual connections
        self.cfg = cfg       # layer configuration list
        self.inchannel = 3   # initial number of input channels
        self.futures = self.make_layer()
        # fully connected classifier that follows the convolutional layers:
        self.classifier = nn.Sequential(nn.Dropout(0.4),           # two FC layers performed slightly worse
                                        nn.Linear(4 * 512, 10), )   # final FC layer; CIFAR-10 has 10 classes

    def make_layer(self):
        layers = []
        for v in self.cfg:
            if v == 'M':
                layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
            else:
                layers.append(Block(self.inchannel, v, self.res))
                self.inchannel = v    # the next block's input channels equal this block's output channels
        return nn.Sequential(*layers)

    def forward(self, x):
        out = self.futures(x)
        # view(out.size(0), -1): flatten from (N, C, H, W) to (N, C*H*W)
        out = out.view(out.size(0), -1)
        out = self.classifier(out)
        return out

all_start = time.time()
# torchvision downloads CIFAR-10 as PIL images with pixel values in [0, 1];
# ToTensor converts them to tensors and Normalize standardizes them with the statistics below
norm_mean = [0.485, 0.456, 0.406]  # per-channel mean
norm_std = [0.229, 0.224, 0.225]   # per-channel standard deviation
transform_train = transforms.Compose([transforms.ToTensor(),  # convert PIL image to tensor
                                      # standardize with the mean/std above
                                      transforms.Normalize(norm_mean, norm_std),
                                      transforms.RandomHorizontalFlip(),  # random horizontal flip
                                      transforms.RandomErasing(scale=(0.04, 0.2), ratio=(0.5, 2)),  # random erasing
                                      transforms.RandomCrop(32, padding=4)  # random crop with 4-pixel padding
                                      ])

transform_test = transforms.Compose([transforms.ToTensor(),
                                     transforms.Normalize(norm_mean, norm_std)])

# hyperparameters:
batch_size = 256
num_epochs = 200   # number of training epochs
LR = 0.01          # initial learning rate

# datasets:
trainset = datasets.CIFAR10(root='Datasets', train=True, download=True, transform=transform_train)
testset = datasets.CIFAR10(root='Datasets', train=False, download=True, transform=transform_test)
# data loaders:
train_data = DataLoader(dataset=trainset, batch_size=batch_size, shuffle=True)
valid_data = DataLoader(dataset=testset, batch_size=batch_size, shuffle=False)
cifar10_classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

train_data_size = len(trainset)
valid_data_size = len(testset)

print('train_size: {:4d}  valid_size:{:4d}'.format(train_data_size, valid_data_size))

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = myModel(res=True)

# define the loss function and optimizer
loss_func = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=LR, momentum=0.9, weight_decay=5e-3)

# MultiStepLR learning-rate schedule:
scheduler = optim.lr_scheduler.MultiStepLR(optimizer=optimizer,
                                           milestones=[int(num_epochs * 0.56), int(num_epochs * 0.78)],
                                           gamma=0.1, last_epoch=-1)

# training and validation:
def train_and_valid(model, loss_function, optimizer, epochs=10):
    model.to(device)
    history = []
    best_acc = 0.0
    best_epoch = 0

    for epoch in range(epochs):
        epoch_start = time.time()
        print("Epoch: {}/{}".format(epoch + 1, epochs))

        model.train()

        train_loss = 0.0
        train_acc = 0.0
        valid_loss = 0.0
        valid_acc = 0.0

        for i, (inputs, labels) in enumerate(train_data):
            inputs = inputs.to(device)
            labels = labels.to(device)

            # gradients accumulate in PyTorch, so clear them before each backward pass
            optimizer.zero_grad()

            outputs = model(inputs)

            loss = loss_function(outputs, labels)

            loss.backward()

            optimizer.step()

            train_loss += loss.item() * inputs.size(0)

            ret, predictions = torch.max(outputs.data, 1)
            correct_counts = predictions.eq(labels.data.view_as(predictions))

            acc = torch.mean(correct_counts.type(torch.FloatTensor))

            train_acc += acc.item() * inputs.size(0)

        with torch.no_grad():
            model.eval()

            for j, (inputs, labels) in enumerate(valid_data):
                inputs = inputs.to(device)
                labels = labels.to(device)

                outputs = model(inputs)

                loss = loss_function(outputs, labels)

                valid_loss += loss.item() * inputs.size(0)

                ret, predictions = torch.max(outputs.data, 1)
                correct_counts = predictions.eq(labels.data.view_as(predictions))

                acc = torch.mean(correct_counts.type(torch.FloatTensor))

                valid_acc += acc.item() * inputs.size(0)
        # update the learning rate and print the current value
        scheduler.step()
        print('\t last_lr:', scheduler.get_last_lr())

        avg_train_loss = train_loss / train_data_size
        avg_train_acc = train_acc / train_data_size

        avg_valid_loss = valid_loss / valid_data_size
        avg_valid_acc = valid_acc / valid_data_size

        history.append([avg_train_loss, avg_valid_loss, avg_train_acc, avg_valid_acc])

        if best_acc < avg_valid_acc:
            best_acc = avg_valid_acc
            best_epoch = epoch + 1

        epoch_end = time.time()

        print(
            "\t Training: Loss: {:.4f}, Accuracy: {:.4f}%, "
            "\n\t Validation: Loss: {:.4f}, Accuracy: {:.4f}%, Time: {:.3f}s".format(
                avg_train_loss, avg_train_acc * 100, avg_valid_loss, avg_valid_acc * 100,
                                epoch_end - epoch_start
            ))
        print("\t Best Accuracy for validation : {:.4f} at epoch {:03d}".format(best_acc, best_epoch))

        torch.save(model, '%s/' % 'cifar10_my' + '%02d' % (epoch + 1) + '.pt')  # save the model

        # # export the model to ONNX format:
        # d_cuda = torch.rand(1, 3, 32, 32, dtype=torch.float).to(device='cuda')
        # onnx_path = '%s/' % 'cifar10_shuffle' + '%02d' % (epoch + 1) + '.onnx'
        # torch.onnx.export(model.to('cuda'), d_cuda, onnx_path)
        # shape_path = '%s/' % 'cifar10_shuffle' + '%02d' % (epoch + 1) + '_shape.onnx'
        # onnx.save(onnx.shape_inference.infer_shapes(onnx.load(onnx_path)), shape_path)
        # print('\t export shape success...')

    return model, history


trained_model, history = train_and_valid(model, loss_func, optimizer, num_epochs)

history = np.array(history)
# loss curves
plt.figure(figsize=(10, 10))
plt.plot(history[:, 0:2])
plt.legend(['Tr Loss', 'Val Loss'])
plt.xlabel('Epoch Number')
plt.ylabel('Loss')
# set axis tick marks
plt.xticks(np.arange(0, num_epochs + 1, step=10))
plt.yticks(np.arange(0, 2.05, 0.1))
plt.grid()  # draw grid lines
plt.savefig('cifar10_shuffle_' + '_loss_curve1.png')

# accuracy curves
plt.figure(figsize=(10, 10))
plt.plot(history[:, 2:4])
plt.legend(['Tr Accuracy', 'Val Accuracy'])
plt.xlabel('Epoch Number')
plt.ylabel('Accuracy')
# set axis tick marks
plt.xticks(np.arange(0, num_epochs + 1, step=10))
plt.yticks(np.arange(0, 1.05, 0.05))
plt.grid()  # draw grid lines
plt.savefig('cifar10_shuffle_' + '_accuracy_curve1.png')

all_end = time.time()
all_time = round(all_end - all_start)
print('all time: ', all_time, ' seconds')
print("All Time: {:d} min {:d} s".format(all_time // 60, all_time % 60))

Summary

The above is my personal experience; I hope it serves as a useful reference.
