    Implementing fully connected layers in PyTorch

    Fully connected neural networks (FC)

    The fully connected neural network is one of the most basic neural network architectures. Its English name is Full Connection, so it is usually abbreviated as FC.

    The rule behind FC is simple: every node in the network, except those in the input layer, is connected to all nodes in the previous layer. In other words, each layer is just a matrix multiplication plus a bias, as the short sketch below shows.
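    As a quick illustration (the batch size and layer sizes here are arbitrary, chosen only for demonstration), a single fully connected layer computed by nn.Linear is exactly x @ W.t() + b:

    import torch
    from torch import nn
    x = torch.randn(4, 784)        # a batch of 4 flattened 28x28 images
    fc = nn.Linear(784, 200)       # fully connected layer: 784 inputs -> 200 outputs
    out_module = fc(x)             # uses the layer's own weight and bias
    # The same computation written out by hand: every output unit is
    # connected to every input unit through the weight matrix.
    out_manual = x @ fc.weight.t() + fc.bias
    print(out_module.shape)                          # torch.Size([4, 200])
    print(torch.allclose(out_module, out_manual))    # True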

    Taking the MNIST example from the previous post:

    import torch
    import torch.utils.data
    from torch import optim
    from torchvision import datasets
    from torchvision.transforms import transforms
    import torch.nn.functional as F
    batch_size = 200
    learning_rate = 0.001
    epochs = 20
    train_loader = torch.utils.data.DataLoader(
        datasets.MNIST('mnistdata', train=True, download=False,
                       transform=transforms.Compose([
                           transforms.ToTensor(),
                           transforms.Normalize((0.1307,), (0.3081,))
                       ])),
        batch_size=batch_size, shuffle=True)
    test_loader = torch.utils.data.DataLoader(
        datasets.MNIST('mnistdata', train=False, download=False,
                       transform=transforms.Compose([
                           transforms.ToTensor(),
                           transforms.Normalize((0.1307,), (0.3081,))
                       ])),
        batch_size=batch_size, shuffle=True)
    # manually define the weights and biases of the three fully connected layers
    w1, b1 = torch.randn(200, 784, requires_grad=True), torch.zeros(200, requires_grad=True)
    w2, b2 = torch.randn(200, 200, requires_grad=True), torch.zeros(200, requires_grad=True)
    w3, b3 = torch.randn(10, 200, requires_grad=True), torch.zeros(10, requires_grad=True)
    # Kaiming initialization keeps activations at a reasonable scale
    torch.nn.init.kaiming_normal_(w1)
    torch.nn.init.kaiming_normal_(w2)
    torch.nn.init.kaiming_normal_(w3)
    def forward(x):
        x = x@w1.t() + b1
        x = F.relu(x)
        x = x@w2.t() + b2
        x = F.relu(x)
        x = x@w3.t() + b3
        x = F.relu(x)
        return x
    optimizer = optim.Adam([w1, b1, w2, b2, w3, b3], lr=learning_rate)
    criteon = torch.nn.CrossEntropyLoss()
    for epoch in range(epochs):
        for batch_idx, (data, target) in enumerate(train_loader):
            data = data.view(-1, 28*28)
            logits = forward(data)
            loss = criteon(logits, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if batch_idx % 100 == 0:
                print('Train Epoch : {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                    epoch, batch_idx*len(data), len(train_loader.dataset),
                    100.*batch_idx/len(train_loader), loss.item()
                ))
        test_loss = 0
        correct = 0
        for data, target in test_loader:
            data = data.view(-1, 28*28)
            logits = forward(data)
            test_loss += criteon(logits, target).item()
            pred = logits.data.max(1)[1]
            correct += pred.eq(target.data).sum()
        test_loss /= len(test_loader.dataset)
        print('\nTest set : Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)'.format(
            test_loss, correct, len(test_loader.dataset),
            100.*correct/len(test_loader.dataset)
            ))

    Here we defined every w and b ourselves and wrote our own forward function. If we use PyTorch's built-in fully connected layers instead, the whole program becomes much more concise and readable.

    First, we define a class for our network architecture:

    class MLP(nn.Module):
        def __init__(self):
            super(MLP, self).__init__()
            self.model = nn.Sequential(
                nn.Linear(784, 200),
                nn.LeakyReLU(inplace=True),
                nn.Linear(200, 200),
                nn.LeakyReLU(inplace=True),
                nn.Linear(200, 10),
                nn.LeakyReLU(inplace=True)
            )
        def forward(self, x):
            x = self.model(x)
            return x
    

    It inherits from nn.Module and defines the entire network structure itself.

    The inplace flag makes the activation reuse the input tensor's storage directly instead of allocating new memory.
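    This behavior is easy to verify with a minimal sketch (not part of the original code):

    import torch
    from torch import nn
    x = torch.randn(3)
    print(x)
    y = nn.ReLU(inplace=True)(x)
    # x itself has been overwritten: negative entries are now 0,
    # and y shares the same storage as x
    print(x)
    print(y.data_ptr() == x.data_ptr())   # True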

    Beyond that, the module can be called directly for computation; there is no need to define parameters by hand or write out the operations, which is much simpler.

    We can also see that nn.Linear initializes its parameters automatically, so there is no longer any need to write the initialization by hand as before.
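    If you still want to control the initialization yourself (for example, to keep the Kaiming scheme used earlier), one common pattern is to re-initialize the layers with Module.apply; this is just a sketch assuming the MLP class defined above:

    from torch import nn
    def init_weights(m):
        # re-initialize only the fully connected layers
        if isinstance(m, nn.Linear):
            nn.init.kaiming_normal_(m.weight)
            nn.init.zeros_(m.bias)
    net = MLP()
    net.apply(init_weights)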

    Distinguishing nn.ReLU from F.relu()

    The former is a class (module) interface, while the latter is a functional interface.

    Module names are capitalized and must be instantiated before they can be called, whereas the functional versions are lowercase and can be used directly.

    Most importantly, the functional interface offers more freedom and is better suited to custom operations. The two styles are compared in the short sketch below.
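    A minimal comparison of the two styles (shapes chosen arbitrarily for illustration):

    import torch
    from torch import nn
    import torch.nn.functional as F
    x = torch.randn(2, 5)
    # class-style interface: instantiate first, then call;
    # this is the form you place inside nn.Sequential
    y1 = nn.ReLU()(x)
    # functional interface: call it directly, e.g. inside a custom forward()
    y2 = F.relu(x)
    print(torch.equal(y1, y2))   # True, both compute the same ReLU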

    Complete code

    import torch
    import torch.utils.data
    from torch import optim, nn
    from torchvision import datasets
    from torchvision.transforms import transforms
    import torch.nn.functional as F
    batch_size = 200
    learning_rate = 0.001
    epochs = 20
    train_loader = torch.utils.data.DataLoader(
        datasets.MNIST('mnistdata', train=True, download=False,
                       transform=transforms.Compose([
                           transforms.ToTensor(),
                           transforms.Normalize((0.1307,), (0.3081,))
                       ])),
        batch_size=batch_size, shuffle=True)
    test_loader = torch.utils.data.DataLoader(
        datasets.MNIST('mnistdata', train=False, download=False,
                       transform=transforms.Compose([
                           transforms.ToTensor(),
                           transforms.Normalize((0.1307,), (0.3081,))
                       ])),
        batch_size=batch_size, shuffle=True)
    class MLP(nn.Module):
        def __init__(self):
            super(MLP, self).__init__()
            self.model = nn.Sequential(
                nn.Linear(784, 200),
                nn.LeakyReLU(inplace=True),
                nn.Linear(200, 200),
                nn.LeakyReLU(inplace=True),
                nn.Linear(200, 10),
                nn.LeakyReLU(inplace=True)
            )
        def forward(self, x):
            x = self.model(x)
            return x
    device = torch.device('cuda:0')
    net = MLP().to(device)
    optimizer = optim.Adam(net.parameters(), lr=learning_rate)
    criteon = nn.CrossEntropyLoss().to(device)
    for epoch in range(epochs):
        for batch_idx, (data, target) in enumerate(train_loader):
            data = data.view(-1, 28*28)
            data, target = data.to(device), target.to(device)
            logits = net(data)
            loss = criteon(logits, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if batch_idx % 100 == 0:
                print('Train Epoch : {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                    epoch, batch_idx*len(data), len(train_loader.dataset),
                    100.*batch_idx/len(train_loader), loss.item()
                ))
        test_loss = 0
        correct = 0
        for data, target in test_loader:
            data = data.view(-1, 28*28)
            data, target = data.to(device), target.to(device)
            logits = net(data)
            test_loss += criteon(logits, target).item()
            pred = logits.data.max(1)[1]
            correct += pred.eq(target.data).sum()
        test_loss /= len(test_loader.dataset)
        print('\nTest set : Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)'.format(
            test_loss, correct, len(test_loader.dataset),
            100.*correct/len(test_loader.dataset)
            ))

    Supplement: a fully connected network with one hidden layer in PyTorch

    torch.nn provides the building blocks for defining the model, the network layers, and the loss function.

    import torch
    # N is batch size; D_in is input dimension;
    # H is hidden dimension; D_out is output dimension.
    N, D_in, H, D_out = 64, 1000, 100, 10
    # Create random Tensors to hold inputs and outputs
    x = torch.randn(N, D_in)
    y = torch.randn(N, D_out)
    # Use the nn package to define our model as a sequence of layers. nn.Sequential
    # is a Module which contains other Modules, and applies them in sequence to
    # produce its output. Each Linear Module computes output from input using a
    # linear function, and holds internal Tensors for its weight and bias.
    model = torch.nn.Sequential(
        torch.nn.Linear(D_in, H),
        torch.nn.ReLU(),
        torch.nn.Linear(H, D_out),
    )
    # The nn package also contains definitions of popular loss functions; in this
    # case we will use Mean Squared Error (MSE) as our loss function.
    loss_fn = torch.nn.MSELoss(reduction='sum')
    learning_rate = 1e-4
    for t in range(500):
        # Forward pass: compute predicted y by passing x to the model. Module objects
        # override the __call__ operator so you can call them like functions. When
        # doing so you pass a Tensor of input data to the Module and it produces
        # a Tensor of output data.
        y_pred = model(x)
        # Compute and print loss. We pass Tensors containing the predicted and true
        # values of y, and the loss function returns a Tensor containing the
        # loss.
        loss = loss_fn(y_pred, y)
        print(t, loss.item())
        # Zero the gradients before running the backward pass.
        model.zero_grad()
        # Backward pass: compute gradient of the loss with respect to all the learnable
        # parameters of the model. Internally, the parameters of each Module are stored
        # in Tensors with requires_grad=True, so this call will compute gradients for
        # all learnable parameters in the model.
        loss.backward()
        # Update the weights using gradient descent. Each parameter is a Tensor, so
        # we can access its gradients like we did before.
        with torch.no_grad():
            for param in model.parameters():
                param -= learning_rate * param.grad
    

    Above, we update the parameters manually with param -= learning_rate * param.grad.

    We can instead let torch.optim update the parameters automatically. The optim package provides a variety of optimization methods, including SGD with momentum, RMSProp, Adam, and so on.

    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    for t in range(500):
        y_pred = model(x)
        loss = loss_fn(y_pred, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
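    Switching to another optimizer only changes the construction line; for example, SGD with momentum (an illustrative variant, not from the original post):

    # same training loop, different optimizer: SGD with momentum
    optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9)
    for t in range(500):
        y_pred = model(x)
        loss = loss_fn(y_pred, y)
        optimizer.zero_grad()   # clear accumulated gradients
        loss.backward()         # backpropagate
        optimizer.step()        # update the parameters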

    The above reflects my personal experience; I hope it serves as a useful reference, and I hope everyone will continue to support 脚本之家. If there are any mistakes or points not fully considered, corrections are welcome.
