
Devils Camp 2017 / Getting a Computer to Play Classic Games with Reinforcement Learning

[[Tableofcontents]]
= machine learning =
The three categories of machine learning:
1. Supervised learning
1. Unsupervised learning
1. Reinforcement learning
== supervised learning ==
* During training, the input carries both features (the input values) and a label (the desired answer)
* Learning from the difference between prediction and label (a minimal sketch follows after this list)
* e.g. MNIST, classification
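A minimal sketch of "learning from the difference between prediction and label", using an invented toy dataset and plain numpy instead of MNIST (all numbers and names below are made up for illustration):
{{{
import numpy as np

# Toy data: features X and the desired answers (labels) y, here y = 2x + 1.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

w, b = 0.0, 0.0   # model parameters
lr = 0.05         # learning rate

for step in range(500):
    pred = X[:, 0] * w + b              # prediction of the current model
    error = pred - y                    # difference between prediction and label
    # Gradient descent: nudge the parameters so that the difference shrinks.
    w -= lr * np.mean(error * X[:, 0])
    b -= lr * np.mean(error)

print(w, b)   # should approach 2.0 and 1.0
}}}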
== unsupervised learning ==
* Input: only features are given; the feature dimension is usually reduced first, e.g. by projection
* Cluster by distance between inputs (see the sketch after this list)
* A human cannot predict the outcome in advance
* e.g. clustering
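A minimal clustering sketch: a crude two-cluster k-means written with numpy on invented data (the blobs, the seed, and the variable names are all made up; in practice a library such as scikit-learn would normally be used):
{{{
import numpy as np

rng = np.random.default_rng(0)
# Two invented blobs of 2-D points; note that no labels are given.
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
               rng.normal(3.0, 0.5, (50, 2))])

centroids = np.array([X[0], X[-1]])   # crude initialization: pick two of the points
for _ in range(10):
    # Assign every point to the nearest centroid (distance between inputs).
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    # Move each centroid to the mean of the points assigned to it.
    centroids = np.array([X[assign == k].mean(axis=0) for k in range(2)])

print(centroids)   # two cluster centers, discovered without any labels
}}}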
== reinforcement learning ==
* A kind of unsupervised learning
* Input: environment, reward; output: action
* Learns by trial and error (see the toy loop after this list)
 * Model free: the rules of the game are never given to the agent
* e.g. game play, stock trading
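A self-contained toy showing the environment/reward/action loop; the CoinEnv class below is invented purely for illustration and is not part of gym:
{{{
import random

class CoinEnv:
    """Guess a hidden coin flip. The agent is never told this rule (model free)."""
    def reset(self):
        self.secret = random.choice([0, 1])
        return 0                                   # dummy observation
    def step(self, action):
        reward = 1.0 if action == self.secret else 0.0
        self.secret = random.choice([0, 1])
        return 0, reward, False                    # observation, reward, done

env = CoinEnv()
obs = env.reset()
total = 0.0
for _ in range(100):
    action = random.choice([0, 1])        # the agent's output: an action
    obs, reward, done = env.step(action)  # the environment answers with a reward
    total += reward                       # a learning agent would improve its policy from this
print("average reward:", total / 100)     # random guessing earns about 0.5
}}}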
== Q learning and DQN ==
* Q learning
* Q learning + Neural Network
* DQN : Deep Q Learning
 * Just adding more hidden layers is not the whole story! (the basic update these methods build on is sketched below)
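A minimal sketch of the tabular Q learning update; the 1-D corridor environment is invented for illustration. DQN replaces the table with a neural network and adds tricks such as experience replay and a target network on top:
{{{
import numpy as np

# Invented toy task: a corridor of 5 cells; reaching cell 4 gives reward 1.
n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))   # the Q table
alpha, gamma, eps = 0.1, 0.99, 0.3    # learning rate, discount, exploration rate

for episode in range(300):
    s = np.random.randint(4)          # exploring starts: random non-terminal state
    while s != 4:
        # epsilon-greedy action selection
        a = np.random.randint(n_actions) if np.random.rand() < eps else int(Q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else min(4, s + 1)
        r = 1.0 if s2 == 4 else 0.0
        # Q learning update: move Q[s, a] toward r + gamma * max_a' Q[s2, a']
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q)   # "go right" (column 1) should end up with the higher value in every state
}}}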
== Basic knowledge ==
* MDP : Markov Decision Process
* Bellman equation
* Dynamic programming
* Value, Policy
* Value function, Policy function
* Value iteration, Policy iteration
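A minimal sketch of value iteration on an invented 2-state, 2-action MDP, just to show the Bellman optimality backup in code (all transition probabilities and rewards below are made up):
{{{
import numpy as np

# Bellman optimality equation:
#   V(s) = max_a [ R(s, a) + gamma * sum_s' P(s' | s, a) * V(s') ]
P = np.array([                    # P[s, a, s'] transition probabilities
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.7, 0.3], [0.1, 0.9]],
])
R = np.array([                    # R[s, a] expected immediate reward
    [0.0, 1.0],
    [0.5, 2.0],
])
gamma = 0.9

V = np.zeros(2)
for _ in range(200):              # value iteration: repeat the backup until it converges
    Q = R + gamma * (P @ V)       # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] * V(s')
    V = Q.max(axis=1)             # take the best action in every state

policy = Q.argmax(axis=1)         # greedy policy read off from the converged values
print(V, policy)
}}}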

== Hands-on practice ==
* [https://gym.openai.com gym]: a toolkit that ports classic games to Python for reinforcement learning. Some environments are implemented from scratch and the Atari games are ported. The code is public on [https://github.com/openai/gym github].
* Today's hands-on environment: [https://gym.openai.com/envs/CartPole-v0 cartpole]
* Required libraries: numpy, gym, tensorflow
{{{
$ pip install gym
$ pip install tensorflow
}}}
=== Steps ===
1. First, just run cartpole! - [https://github.com/Rabierre/cartpole/blob/master/cartpole_init.py cartpole_init.py]
1. A cartpole that takes random actions (left, right) - [https://github.com/Rabierre/cartpole/blob/master/cartpole_random.py cartpole_random.py] (a minimal sketch of this step follows below)
1. q-network (the neural-network version of q-learning) - [https://github.com/Rabierre/cartpole/blob/master/cartpole_qnetwork.py cartpole_qnetwork.py]
1. DQN - [https://github.com/Rabierre/cartpole/blob/master/cartpole_dqn.py cartpole_dqn.py]
1. The DQN that DeepMind published in 2015 - [https://github.com/Rabierre/cartpole/blob/master/cartpole_dqn2015.py cartpole_dqn2015.py]
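Not the author's actual cartpole_random.py, but a minimal sketch of what step 2 looks like, assuming the classic gym API of that era (env.step returning four values, env.reset returning only the observation):
{{{
import gym

env = gym.make('CartPole-v0')

for episode in range(5):
    obs = env.reset()                       # start a new episode
    total_reward, done = 0.0, False
    while not done:
        env.render()                        # draw the cart and the pole
        action = env.action_space.sample()  # random action: 0 = push left, 1 = push right
        obs, reward, done, info = env.step(action)
        total_reward += reward              # +1 for every step the pole stays upright
    print('episode', episode, 'reward', total_reward)

env.close()
}}}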
== Reference ==
* Slides from the talk: [https://slides.com/rabierre/playing_a_game_with_rl slide]
* Hands-on code: [https://github.com/Rabierre/cartpole github]
* DeepMind's DQN paper: [https://arxiv.org/abs/1312.5602 Playing Atari with Deep Reinforcement Learning]
* Tensorflow tutorial: [https://github.com/golbin/TensorFlow-Tutorials/tree/master/10%20-%20DQN DQN]
== Furthermore ==
* [https://en.wikipedia.org/wiki/David_Silver_(programmer) David Silver]'s reinforcement learning course
 * [http://www0.cs.ucl.ac.uk/staff/d.silver/web/Teaching.html lecture notes]
 * [https://www.youtube.com/watch?v=2pWv7GOvuf0 lecture videos]
* Gitbook: [https://www.gitbook.com/book/dnddnjs/rl/details Fundamental of Reinforcement Learning]. Written in Korean!
* A collection of references: [Machine%20Learning]
== Feedback and other comments ==


