Boxing Q-Learning
![](/portfolio/assets/img/portfolio/boxing-qlearning/0-learn.png)
![](/portfolio/assets/img/portfolio/boxing-qlearning/50-learn.png)
![](/portfolio/assets/img/portfolio/boxing-qlearning/100-learn.png)
Project information
- Category: AI
- Project date: Fall 2017
- GitHub Repository: https://github.com/abradat/boxing-qlearning
Project Description
An agent learns to box using naïve Q-Learning and a Deep Q Network (DQN).
Introduction
Boxing with Q-Learning was a project for the "Artificial Intelligence" course in which an agent learns to box using naïve Q-Learning and a Deep Q Network (DQN).
Technical Overview
The project is implemented in Python with the TensorFlow framework. It is based on Reinforcement Learning and has two phases. In the first phase, the agent was implemented with the naïve (tabular) Q-Learning approach. Since lookup tables do not scale well to large state spaces, the second phase uses a Deep Q Network: a deep neural network consisting of five layers, 3 convolutional and 2 fully connected. With a DQN, instead of looking up values in a Q-table, we run inference on a model (make predictions from it), and rather than updating the Q-table, we fit (train) the model. As a result, performance improves significantly. In the DQN phase, the total number of training steps is 5,000,000.
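The tabular update used in the first phase can be sketched as follows. This is a minimal illustration of the naïve Q-Learning rule, not the project's actual code; the environment sizes, learning rate, and reward below are placeholder values.

```python
import numpy as np

# Toy tabular Q-learning update; sizes and hyperparameters are illustrative.
n_states, n_actions = 4, 3
alpha, gamma, epsilon = 0.1, 0.99, 0.1

rng = np.random.default_rng(0)
q_table = np.zeros((n_states, n_actions))

def choose_action(state):
    # Epsilon-greedy exploration: random action with probability epsilon.
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(q_table[state]))

def update(state, action, reward, next_state):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    td_target = reward + gamma * np.max(q_table[next_state])
    q_table[state, action] += alpha * (td_target - q_table[state, action])

# One illustrative transition: reward 1.0 moving from state 0 to state 1.
update(0, 2, 1.0, 1)
print(q_table[0, 2])  # 0.1 after a single update on a zero-initialized table
```

The table grows as `n_states × n_actions`, which is exactly why a pixel-based game like Boxing motivates the switch to a DQN in the second phase.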
![Architecture of the Network](/portfolio/assets/img/portfolio/boxing-qlearning/arch.png)
Technologies/Languages Used
Technology | Usage |
---|---|
Python | Python is used for implementing the application |
TensorFlow | TensorFlow is the framework for implementing Q-Learning and the DQN |