Through many recent successes in simulation, model-free reinforcement learning has emerged as a promising approach to solving continuous control robotic tasks. The research community is now able to reproduce, analyze, and quickly build on these results due to open-source implementations of learning algorithms and simulated benchmark tasks. To carry these successes forward to real-world applications, it is crucial to refrain from exploiting the unique advantages of simulation that do not transfer to the real world, and to experiment directly with physical robots. However, reinforcement learning research with physical robots faces substantial resistance due to the lack of benchmark tasks and supporting source code. In this work, we introduce several reinforcement learning tasks with multiple commercially available robots that present varying levels of learning difficulty, setup, and repeatability. On these tasks, we test the learning performance of off-the-shelf implementations of four reinforcement learning algorithms and analyze their sensitivity to hyper-parameters to determine their readiness for applications in various real-world tasks. Our results show that, with a careful setup of the task interface and computations, some of these implementations can be readily applied to physical robots. We find that state-of-the-art learning algorithms are highly sensitive to their hyper-parameters and that their relative ordering does not transfer across tasks, indicating the necessity of re-tuning them for each task for best performance. On the other hand, the best hyper-parameter configuration from one task may often result in effective learning on held-out tasks, even with different robots, providing a reasonable default. We make the benchmark tasks publicly available to enhance reproducibility in real-world reinforcement learning.