TensorForce-Client - Running Parallelized Reinforcement Learning Experiments in the Cloud
TensorForce-Client is an easy-to-use command line interface to the open source reinforcement learning (RL) library "TensorForce". This client helps you set up and run your own RL experiments in the cloud (only Google Cloud is supported so far), utilizing GPUs, multi-parallel execution algorithms, and TensorForce's support for a large variety of environments, ranging from simple grid-worlds to Atari games and Unreal Engine 4 games. TensorForce-Client submits RL jobs using Kubernetes and Docker containers in a fully automated fashion. To read more about the internal workings of the client, see the Internals chapter listed below.
- Usage - How to Run RL-Experiments in the Cloud?
- Internals - How does TensorForce-Client Work?
- Command Reference
- Package/Class Reference
You can find more information at our TensorForce-Client GitHub repository.
For the core TensorForce library, visit: https://github.com/reinforceio/TensorForce.
We also have a separate repository available for benchmarking our algorithm implementations: https://github.com/reinforceio/tensorforce-benchmark.