This is the environment created for my final-year BSc Computer Science project investigating the use of curious reinforcement learning agents in the Sustainable Foraging problem. It adapts the lb-foraging environment to the Sustainable Foraging problem by adding a physically navigable grid, which introduces spatial dynamics.
- Highly configurable grid-based environment built on OpenAI Gym
- Implementations of a Deep Q-Network (DQN) with experience replay, and a DQN augmented with a curiosity learning module
- Detailed plotting and logging
- Aggregation of results across multiple training runs
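To illustrate the experience-replay component mentioned above, here is a minimal sketch of a replay buffer as commonly used with a DQN. The class and parameter names are illustrative, not taken from this repository's code.

```python
import random
from collections import deque

class ReplayBuffer:
    """Toy experience-replay buffer: stores transitions, samples them uniformly."""

    def __init__(self, capacity=10_000):
        # Old transitions are evicted automatically once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        # Store one (s, a, r, s', done) transition.
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the correlation between consecutive
        # experiences, which stabilises DQN training.
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```

The fixed-capacity deque means the agent trains on a sliding window of recent experience rather than its entire history.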
- Python 3.8 or higher
- Conda package manager
- Install Conda
- Clone the repository: `git clone https://github.com/AlexGulliver/Sustainable-Foraging-Environment.git`
- Create the environment: `conda env create -f environment.yml`
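The setup steps above can be run together as follows (the environment name to activate is defined in `environment.yml`; check that file for the actual name):

```shell
# Clone the repository and enter it
git clone https://github.com/AlexGulliver/Sustainable-Foraging-Environment.git
cd Sustainable-Foraging-Environment

# Create the Conda environment from the provided spec, then activate it
# (replace <env-name> with the name declared in environment.yml)
conda env create -f environment.yml
conda activate <env-name>
```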
Run a single training session with `train.py`
Run multiple training sessions and aggregate their results with `trainmultiple.py`
Extend `/lb-foraging/lb-foraging/agents/foragingagent.py` with your own algorithm
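A hypothetical sketch of extending the agent module with a custom policy follows. The base-class name `ForagingAgent` and the `step()` signature are assumptions for illustration only; check `foragingagent.py` for the real interface (a stand-in base class is included here so the sketch is self-contained).

```python
import random

class ForagingAgent:
    """Stand-in for the base class defined in foragingagent.py (assumed API)."""

    def __init__(self, action_space):
        self.action_space = action_space

    def step(self, observation):
        # Subclasses implement the policy: observation -> action.
        raise NotImplementedError

class RandomForagingAgent(ForagingAgent):
    """Toy extension: choose a uniformly random action each step."""

    def step(self, observation):
        return random.choice(self.action_space)
```

A learning agent would replace the random choice with, for example, an epsilon-greedy lookup into its Q-network.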
The Sustainable Foraging problem is a multi-agent optimisation problem that explores whether agents can learn to forage in a way that ensures both their own survival and the sustainability of their environment.
For information on the Sustainable Foraging problem see: https://ieeexplore.ieee.org/document/10336227
Curiosity in reinforcement learning is a method of supplementing agents with intrinsically generated reward feedback to boost their learning, particularly in sparse-reward environments.
The reward signal provided by a curiosity module is derived from the error of a learned model's prediction of the next state: transitions the agent cannot yet predict well yield larger intrinsic rewards, encouraging exploration of novel states.
For insight into curiosity learning see: https://arxiv.org/abs/1705.05363
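The prediction-error idea above can be sketched as follows. A forward model predicts the next state from the current state and action, and its squared prediction error becomes the curiosity bonus (in the style of the ICM paper linked above). The linear model, learning rate, and scaling here are illustrative assumptions, not this project's implementation.

```python
import numpy as np

class ForwardModelCuriosity:
    """Toy curiosity module: intrinsic reward = forward-model prediction error."""

    def __init__(self, state_dim, n_actions, lr=0.1, scale=1.0):
        # Linear forward model mapping (state, one-hot action) -> next state.
        self.W = np.zeros((state_dim, state_dim + n_actions))
        self.n_actions = n_actions
        self.lr = lr
        self.scale = scale

    def intrinsic_reward(self, state, action, next_state):
        # Encode the action as a one-hot vector and predict the next state.
        a = np.zeros(self.n_actions)
        a[action] = 1.0
        x = np.concatenate([state, a])
        pred = self.W @ x
        error = next_state - pred
        # Update the forward model toward the observed next state.
        self.W += self.lr * np.outer(error, x)
        # Curiosity bonus: scaled squared prediction error. Familiar
        # transitions become predictable, so their bonus shrinks over time.
        return self.scale * 0.5 * float(error @ error)
```

Because the model improves on transitions it sees repeatedly, the intrinsic reward for a familiar transition decays, steering the agent toward states it has not yet mastered.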
This environment is based on the Level-Based Foraging environment.
