
[cs224n] Lecture 3. Neural Networks lecture notes

by 파우르네 2021. 5. 10.

Lecture 3: Word Window Classification, Neural Nets, and Calculus

 

Classification review/introduction

Neural networks introduction

Named Entity Recognition

Binary true vs. corrupted word window classification

Matrix calculus introduction

 

 

Classification setup and notation

Training dataset: samples {xᵢ, yᵢ}, i = 1, …, N

• xᵢ : inputs (words, sentences, documents)

• yᵢ : labels (sentiment, named entities, buy/sell decision)

 

 

Softmax classifier

• Fixed 2D word vectors to classify

• Using softmax/logistic regression

• Linear decision boundary

 

For the softmax classifier, the predicted probability of class y for input x is:

p(y | x) = exp(W_y · x) / Σ_c exp(W_c · x)

Training with softmax and cross-entropy loss

 - maximize the probability of the correct class

 - equivalently, minimize the negative log probability of that class

 

Cross entropy

H(p, q) = − Σ_c p(c) log q(c)

p: true probability distribution

q: computed model probability

Since p is one-hot, p = [0, …, 0, 1, 0, …, 0], the sum collapses to a single term: the negative log probability of the true class, −log q(y).

 

Cross entropy loss function over the full dataset:

J(θ) = (1/N) Σ_{i=1}^{N} −log( exp(f_{y_i}) / Σ_{c=1}^{C} exp(f_c) )
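A minimal sketch of the two formulas above for a single example; the toy weight matrix and input here are my own illustration, not the lecture's numbers:

```python
import numpy as np

def softmax(f):
    e = np.exp(f - f.max())      # subtract max for numerical stability
    return e / e.sum()

def cross_entropy(q, y):
    return -np.log(q[y])         # p is one-hot, so only q(y) survives

W = np.array([[ 1.0,  0.0],      # 3 classes x 2-D word vectors
              [ 0.0,  1.0],
              [-1.0, -1.0]])
x, y = np.array([0.5, -0.2]), 0  # input vector and true class index
q = softmax(W @ x)               # q: computed model probability
print(cross_entropy(q, y))
```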

Traditional ML optimization

In traditional machine learning, the parameters θ usually consist only of the columns of W.

So only the decision boundary gets updated, by gradient descent on θ: θ_new = θ_old − α ∇_θ J(θ).
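A minimal sketch of one such update for the softmax classifier (my own illustration with assumed toy shapes, not the course's code); moving W moves the linear decision boundary:

```python
import numpy as np

def sgd_step(W, x, y, lr=0.1):
    f = W @ x
    q = np.exp(f - f.max()); q /= q.sum()   # softmax probabilities
    grad = np.outer(q, x)                   # dJ/dW = (q - onehot(y)) x^T
    grad[y] -= x
    return W - lr * grad                    # theta <- theta - alpha * grad

W = np.zeros((3, 2))                        # theta = the entries of W
W = sgd_step(W, x=np.array([1.0, -0.5]), y=2)
print(W)
```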

Neural Network

 

For softmax / logistic regression, the decision boundary is linear.

A linear boundary can be quite limiting when the problem is complex.

A neural network, by contrast, can learn more complex functions and a nonlinear decision boundary.
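A minimal sketch of why: a hidden layer with a nonlinearity (ReLU here; the layer sizes and random weights are my assumptions) transforms x nonlinearly before the softmax, so the boundary in the original input space is no longer a straight line:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def forward(x, W1, b1, W2, b2):
    h = relu(W1 @ x + b1)        # hidden layer: nonlinear transform of x
    f = W2 @ h + b2              # class scores, then softmax as before
    e = np.exp(f - f.max())
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.normal(size=2)                        # a 2-D input
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(3, 4)), np.zeros(3)
print(forward(x, W1, b1, W2, b2))             # probabilities over 3 classes
```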
