summary2
Public Courses
Machine Learning 2021 Spring:
- Input as vector
- One-Hot encoding
- Word Embedding
- Sequence Labeling
- Fully-Connected Network
- Self-attention (see the sketch after this list)
- Multi-head Self-attention
- Positional Encoding + Self-attention
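A minimal single-head self-attention sketch in PyTorch, just to pin down the Q/K/V computation from the lectures; the class and variable names are mine, not the course's:

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Single-head self-attention: every position attends to all positions."""
    def __init__(self, d_model):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.scale = d_model ** 0.5

    def forward(self, x):                       # x: (batch, seq_len, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.scale, dim=-1)
        return attn @ v                         # (batch, seq_len, d_model)
```

Multi-head attention runs several of these in parallel on split projections and concatenates the results; positional encoding adds a position-dependent vector to `x` before attention.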
PyTorch Fundamentals by Microsoft
Natural Language Tasks (Text Classification, Intent Classification, Sentiment Analysis, Named Entity Recognition, Keyword Extraction, Text Summarization)
Text Classification with PyTorch (torchtext)
Representing text as tensors (character-level, word-level)
Dataset: AG_NEWS (4 categories)
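A sketch of loading AG_NEWS and building a word-level vocabulary. This assumes a recent torchtext (the dataset/vocab API has changed across versions, so treat the exact calls as an assumption):

```python
from torchtext.datasets import AG_NEWS
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator

tokenizer = get_tokenizer('basic_english')
train_iter = AG_NEWS(split='train')            # yields (label, text) pairs

# Word-level vocabulary over the training split, with an <unk> fallback.
vocab = build_vocab_from_iterator(
    (tokenizer(text) for _, text in train_iter), specials=['<unk>'])
vocab.set_default_index(vocab['<unk>'])
```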
Bag-of-words (BoW) text representation
Training a BoW classifier (a single linear layer); val_acc: 90%
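A sketch of the BoW pipeline: each document becomes a vocabulary-sized count vector feeding one linear layer. The sizes here are illustrative, not the module's:

```python
import torch
import torch.nn as nn

vocab_size, num_classes = 30000, 4             # illustrative sizes

def to_bow(token_ids):
    """Turn a list of token ids into a count vector over the vocabulary."""
    bow = torch.zeros(vocab_size)
    for idx in token_ids:
        bow[idx] += 1
    return bow

model = nn.Linear(vocab_size, num_classes)     # the entire classifier
logits = model(to_bow([4, 8, 4, 15]))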
Term Frequency Inverse Document Frequency (TF-IDF)
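One common form of the weighting (smoothing conventions vary between libraries):

\[
\mathrm{tfidf}(t, d) = \mathrm{tf}(t, d) \times \log\frac{N}{1 + \mathrm{df}(t)}
\]

where \(\mathrm{tf}(t, d)\) counts occurrences of term \(t\) in document \(d\), \(N\) is the number of documents, and \(\mathrm{df}(t)\) is the number of documents containing \(t\); frequent-everywhere terms are down-weighted.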
Embedding text representation
Input is each word's index in the vocabulary; sequences in a minibatch are padded to the same length
Training an embedding classifier (Embedding layer + linear layer)
train_loss: 0.131295, train_acc: 0.959406, val_loss: 0.258924, val_acc: 0.916708
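A sketch of that model, mean-pooling embeddings over the padded sequence before the linear head; hyperparameters are illustrative:

```python
import torch.nn as nn

class EmbedClassifier(nn.Module):
    """Embedding layer + mean pooling + linear head."""
    def __init__(self, vocab_size, embed_dim=64, num_classes=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                 # x: (batch, seq_len) padded word indices
        emb = self.embedding(x)           # (batch, seq_len, embed_dim)
        return self.fc(emb.mean(dim=1))   # pool over the sequence, then classify
```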
Variable-length sequence representation: an offsets vector instead of padding the batch
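PyTorch's `nn.EmbeddingBag` implements exactly this: the batch is one flat tensor of token ids plus an offsets tensor marking where each sequence starts, so no padding is needed. A minimal sketch with illustrative dimensions:

```python
import torch
import torch.nn as nn

embed = nn.EmbeddingBag(num_embeddings=10000, embedding_dim=64, mode='mean')

# Two sequences of lengths 3 and 2, concatenated; offsets mark where each starts.
text = torch.tensor([4, 8, 15, 16, 23])
offsets = torch.tensor([0, 3])
pooled = embed(text, offsets)              # (2, 64): one pooled vector per sequence
```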
Using Pre-Trained Semantic Embeddings: Word2Vec
- val_acc: 92.67%
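A sketch of pulling pre-trained Word2Vec vectors via gensim's downloader; the model name is the standard Google News release, and wiring it into the classifier is left as a comment:

```python
import gensim.downloader as api

w2v = api.load('word2vec-google-news-300')   # 300-d vectors (large download)
print(w2v.most_similar('king', topn=3))
# To use them: copy w2v[word] into the nn.Embedding weight row for each vocab
# word found in w2v; leave out-of-vocabulary rows at their random init.
```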
Using an RNN
- use a padded data loader
- model (Embedding layer + RNN + linear layer); see the sketch below
- val_acc: 90%
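A sketch of that model, classifying from the RNN's final hidden state; hyperparameters are illustrative:

```python
import torch.nn as nn

class RNNClassifier(nn.Module):
    """Embedding -> RNN -> linear head on the final hidden state."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=32, num_classes=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):                 # x: (batch, seq_len) padded ids
        emb = self.embedding(x)
        _, h = self.rnn(emb)              # h: (1, batch, hidden_dim)
        return self.fc(h.squeeze(0))
```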
Long Short-Term Memory (LSTM)
- Packed sequences (see the sketch below)
- val_acc: 91%
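`pack_padded_sequence` lets the LSTM skip the pad positions entirely; a minimal sketch with illustrative shapes:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

emb = torch.randn(2, 5, 64)                # (batch, max_len, embed_dim), padded
lengths = torch.tensor([5, 3])             # true length of each sequence
packed = pack_padded_sequence(emb, lengths, batch_first=True,
                              enforce_sorted=False)
lstm = torch.nn.LSTM(64, 32, batch_first=True)
out, (h, c) = lstm(packed)                 # LSTM never sees the padded steps
```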
Using BERT for text classification
- training is too slow to finish on a CPU (a fine-tuning sketch follows)
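A sketch of the usual fine-tuning setup via Hugging Face `transformers`; this names the standard `bert-base-uncased` checkpoint, and the module's own code may differ:

```python
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained(
    'bert-base-uncased', num_labels=4)     # 4 AG_NEWS classes

inputs = tokenizer('Wall St. rebounds as tech shares rally',
                   return_tensors='pt')
logits = model(**inputs).logits            # fine-tune with a small LR; GPU strongly advised
```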
Generating text with an LSTM
- Building character vocabulary
- training generative LSTM
- Soft text generation and temperature (sketched below)
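A sketch of temperature-scaled sampling for the character model: dividing the logits by the temperature before the softmax makes low temperatures near-greedy and high temperatures more random:

```python
import torch

def sample_next_char(logits, temperature=1.0):
    """Sample one character id from temperature-scaled logits (1-D over vocab)."""
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()

next_id = sample_next_char(torch.randn(84), temperature=0.5)  # illustrative vocab size
```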
CVPR 2021 Paper Sharing Session
Session 1: Image Generation
- Information Bottleneck Disentanglement for Identity Swapping (face swapping)
- Leveraging Line-point Consistence to Preserve Structures for Wide Parallax Image Stitching
- FaceInpainter: High Fidelity Face Adaptation to Heterogeneous Domains
Session 2: Image Processing
- Deep Homography for Efficient Stereo Image Compression (CVPR 2021 oral)
- Learning Scalable \(\ell_\infty\)-constrained Near-lossless Image Compression via Joint Lossy Image and Residual Compression
Session 3: Low-Level Vision
- Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images
- Deep Animation Video Interpolation in the Wild