Building A Non Character Player (Npc) Using Evolution Strategy Algorithm In Game / Muhammad Azim Bin Noryushan
Material type: Text
Publication details: 2018
Dissertation note: Project paper (Bachelor of Software Engineering) - University Malaysia of Computer Science and Engineering, 2018.

Summary: In most video games, Non-Playing Character (NPC) behaviour and movement are scripted. Players who have found an NPC's weaknesses can exploit them to win easily, and the experience quickly loses its freshness. If the character can instead adapt to and learn from its environment, the game becomes more interactive, since players must keep finding new weaknesses to exploit. This research introduces an intelligent agent that learns by itself to overcome obstacles. It surveys and compares self-learning algorithms used in game development, including Reinforcement Learning (RL), Evolution Strategies (ES), and HyperNEAT, and implements the selected one in the agent. The methodology covers information gathering, algorithm selection, implementation, testing, and documentation. The project implements ES for the game Flappy Bird, using Python as the programming language; the agent is trained to play the game and avoid the obstacles. The ES algorithm jitters the weights of a population of candidate policies, producing either better or worse actions. To evaluate the efficiency of the implemented algorithm, RL was chosen as a baseline against which the final training results were compared. ES was found to be more efficient than RL in terms of how far the agent could travel while avoiding obstacles. Graphs were generated to show the learning progress of both algorithms. In future, this project may be applied to real games with more dynamic environments.

| Item type | Current library | Collection | Call number | Status | Date due | Barcode | Item holds |
|---|---|---|---|---|---|---|---|
| Special Collection | UNIMY PJ Library | THE | BSE 012018 04 | Available | | 102373 | |
Abstract in English
"A report project submitted in partial fulfillment of the requirements for the award of Bachelor of Software Engineering (Hons)." -- On t. p.
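The summary describes the core ES step: jitter the policy weights across a population of candidates and move the weights toward the jitters that produced better actions. A minimal sketch of that loop in Python follows; it is an illustration only, not the project's actual code. The toy quadratic fitness function stands in for the Flappy Bird score (which is not reproduced in the record), and all hyperparameter values are assumptions.

```python
import numpy as np

np.random.seed(0)  # fixed seed so the illustration is reproducible

def evolution_strategy(fitness, n_params, n_pop=50, sigma=0.1, alpha=0.03, n_iter=300):
    """Minimal population-based ES: jitter the weights with Gaussian noise,
    score every jittered candidate, then move the weights toward the
    perturbations that produced better-than-average results."""
    w = np.zeros(n_params)                                 # current policy weights
    for _ in range(n_iter):
        noise = np.random.randn(n_pop, n_params)           # one jitter per population member
        rewards = np.array([fitness(w + sigma * e) for e in noise])
        if rewards.std() < 1e-8:                           # all candidates tied; nothing to learn
            continue
        advantage = (rewards - rewards.mean()) / rewards.std()
        # Weighted sum of the jitters: candidates that scored above average
        # pull the weights toward their perturbation, worse ones push away.
        w += alpha / (n_pop * sigma) * noise.T @ advantage
    return w

# Toy stand-in for the game's score (assumption): higher is better, optimum at `target`.
target = np.array([0.5, -1.3, 2.0])
best = evolution_strategy(lambda w: -np.sum((w - target) ** 2), n_params=3)
```

Normalizing the rewards into advantages keeps the update scale stable regardless of the raw score range, which is why this style of ES needs no value function, unlike the RL baseline the project compares against.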