Learning Summary (1)
Open Course
Machine Learning 2021 Spring (p1-p20):
- Basic concepts of machine learning and deep learning (categories of machine learning: Regression, Classification, Structured Learning)
- The basic training steps (define a model with unknown parameters, define the loss, optimization; see the sketch after this list)
- Common training problems (model bias, overfitting, vanishing gradients)
- Escaping saddle points along the directions of the eigenvectors of the Hessian matrix
- Batch and Momentum
- Adaptive Learning Rate
- Batch Normalization
- Classification with CNN
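A minimal sketch of the three training steps on a toy linear-regression problem; the data, model, and hyperparameters are illustrative choices of mine, not the course's. Momentum here ties in with "Batch and Momentum", and swapping in torch.optim.Adam would give an adaptive learning rate.

```python
import torch

# Step 1: model with unknown parameters, y = w * x + b
w = torch.randn(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

x = torch.linspace(0, 1, 100)            # toy inputs
y = 3 * x + 2 + 0.1 * torch.randn(100)   # toy targets

# Step 3 setup: gradient descent with momentum
optimizer = torch.optim.SGD([w, b], lr=0.1, momentum=0.9)

for step in range(1000):
    loss = ((w * x + b - y) ** 2).mean()  # Step 2: define the loss (MSE)
    optimizer.zero_grad()
    loss.backward()                       # gradients of the loss w.r.t. w, b
    optimizer.step()                      # update along the negative gradient
```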
PyTorch
PyTorch Fundamentals by Microsoft
- Learned PyTorch's basic modules and methods (Tensor, Datasets & DataLoaders, Transforms, nn.Module, optim)
- Fashion-MNIST dataset
- Build a neural network (Flatten, Linear, ReLU, Softmax; see the sketch below)
- Loss function: CrossEntropyLoss()
- Optimizer: Stochastic Gradient Descent
- Save and load the model
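The pieces above fit together roughly as below; this is a condensed sketch rather than the tutorial's exact code, and the hidden size, batch size, and file name are arbitrary.

```python
# Dataset + DataLoader + Transform, a Flatten/Linear/ReLU network,
# CrossEntropyLoss, SGD, and save/load of the learned weights.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_data = datasets.FashionMNIST(root="data", train=True, download=True,
                                   transform=transforms.ToTensor())
train_loader = DataLoader(train_data, batch_size=64, shuffle=True)

model = nn.Sequential(
    nn.Flatten(),              # 1x28x28 image -> 784-dim vector
    nn.Linear(28 * 28, 512),
    nn.ReLU(),
    nn.Linear(512, 10),        # raw logits for the 10 Fashion-MNIST classes
)

# CrossEntropyLoss applies log-softmax itself, so softmax is only needed
# when turning logits into probabilities at prediction time.
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

for images, labels in train_loader:
    loss = loss_fn(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

torch.save(model.state_dict(), "fashion_mnist.pth")      # save learned weights
model.load_state_dict(torch.load("fashion_mnist.pth"))   # load them back
```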
- Computer Vision with PyTorch
- MNIST Dataset
- From DNN and MLP to CNNs (nn.Conv2d; see the sketch below)
- Multi-layer CNNs and pooling layers (average pooling, max pooling)
- Validation accuracy: DNN 89%, MLP 96%, simplest CNN 97%, multi-layer CNN 98%
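A multi-layer CNN with both pooling types for 1x28x28 digits might look like this; the channel counts and kernel sizes are illustrative, not the tutorial's exact values.

```python
import torch
from torch import nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 28x28 -> 28x28, 16 channels
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # 14x14 -> 14x14, 32 channels
    nn.ReLU(),
    nn.AvgPool2d(2),                              # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                    # 10 digit classes
)

print(cnn(torch.randn(8, 1, 28, 28)).shape)       # torch.Size([8, 10])
```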
- Training with real images from the CIFAR-10 dataset
- LeNet, proposed by Yann LeCun (sketched below)
- Cats vs. Dogs dataset (used 2,000 images due to CPU limitations)
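A LeNet-style network for 3x32x32 CIFAR-10 images with the classic layer sizes; ReLU and max pooling are swapped in for the original tanh and average pooling, as modern reimplementations usually do, so this is not necessarily the notebook's exact variant.

```python
import torch
from torch import nn

lenet = nn.Sequential(
    nn.Conv2d(3, 6, kernel_size=5),   # 32x32 -> 28x28
    nn.ReLU(),
    nn.MaxPool2d(2),                  # 28x28 -> 14x14
    nn.Conv2d(6, 16, kernel_size=5),  # 14x14 -> 10x10
    nn.ReLU(),
    nn.MaxPool2d(2),                  # 10x10 -> 5x5
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120),
    nn.ReLU(),
    nn.Linear(120, 84),
    nn.ReLU(),
    nn.Linear(84, 10),                # 10 CIFAR-10 classes
)
```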
- Used the pre-trained VGG-16 model from the torchvision module
- Extracted VGG-16 features manually for training
- Transfer learning (sketched below)
- Replaced the final classifier with a single layer taking 25088 inputs and producing 2 output neurons
- Froze the weights of the convolutional feature extractor
- Validation accuracy: 99%
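The transfer-learning setup described above, in sketch form (`weights="DEFAULT"` needs torchvision >= 0.13; older versions take `pretrained=True` instead):

```python
# Freeze VGG-16's convolutional feature extractor and replace the classifier
# with a single 25088 -> 2 linear layer for cats vs. dogs.
import torch
from torch import nn
from torchvision import models

vgg = models.vgg16(weights="DEFAULT")        # downloads ImageNet weights

for p in vgg.features.parameters():          # freeze the feature extractor
    p.requires_grad = False

vgg.classifier = nn.Linear(512 * 7 * 7, 2)   # 25088 inputs -> cat / dog

# only the new classifier head is trained
optimizer = torch.optim.SGD(vgg.classifier.parameters(), lr=1e-3, momentum=0.9)
```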
- Other computer vision models (ResNet, MobileNet; see the note below)
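The other architectures mentioned come from the same torchvision.models interface, for example:

```python
# ResNet and MobileNet variants are loaded the same way as VGG-16 above.
from torchvision import models

resnet = models.resnet18(weights="DEFAULT")
mobilenet = models.mobilenet_v2(weights="DEFAULT")
```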
Next Week
- Natural Language Processing with PyTorch
- Audio Classification with PyTorch
- Start learning more details of PyTorch