DrEureka: Language Model Guided Sim-to-Real Transfer

https://eureka-research.github.io/dr-eureka/

¹University of Pennsylvania; ²NVIDIA; ³UT Austin; *Equal Contribution

Corresponding authors: [email protected], [email protected]

Robotics: Science and Systems (RSS) 2024

Abstract

Transferring policies learned in simulation to the real world is a promising strategy for acquiring robot skills at scale. However, sim-to-real approaches typically rely on manual design and tuning of the task reward function as well as the simulation physics parameters, rendering the process slow and human-labor intensive. In this paper, we investigate using Large Language Models (LLMs) to automate and accelerate sim-to-real design. Our LLM-guided sim-to-real approach requires only the physics simulation for the target task and automatically constructs suitable reward functions and domain randomization distributions to support real-world transfer. We first demonstrate our approach can discover sim-to-real configurations that are competitive with existing human-designed ones on quadruped locomotion and dexterous manipulation tasks. Then, we showcase that our approach is capable of solving novel robot tasks, such as quadruped balancing and walking atop a yoga ball, without iterative manual design.

DrEureka Components

Overview. DrEureka takes a task and safety instruction, along with the environment source code, and runs Eureka to generate a regularized reward function and an initial policy. It then tests that policy under different simulation conditions to build a reward-aware physics prior, which is provided to the LLM to generate a set of domain randomization (DR) parameters. Finally, using the synthesized reward and DR parameters, DrEureka trains policies for real-world deployment.
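For intuition, the overall control flow can be sketched as follows. This is a minimal structural sketch, not the actual DrEureka code: every helper below is a hypothetical stub standing in for an LLM call or a simulation run.

# Minimal structural sketch of the DrEureka pipeline (hypothetical
# stubs throughout; none of these helpers are the real DrEureka API).

def eureka_reward_search(task, safety, env_src):
    # Stand-in for the Eureka-style LLM reward search, run with the
    # safety instruction so the reward is regularized for deployment.
    return "reward_fn", "initial_policy"

def feasible_range(policy, param):
    # Stand-in for probing the initial policy under perturbed values of
    # one simulation physics parameter (the reward-aware physics prior).
    return (0.3, 1.2)

def llm_sample_dr(prior, task):
    # Stand-in for the LLM call that turns the physics prior into
    # concrete domain randomization (DR) sampling ranges.
    return prior

def train_policy(reward_fn, dr_config, env_src):
    # Stand-in for RL training under the synthesized reward and DR.
    return "deployable_policy"

def dreureka(task, safety, env_src):
    reward_fn, initial_policy = eureka_reward_search(task, safety, env_src)
    prior = {p: feasible_range(initial_policy, p)
             for p in ("friction", "restitution", "added_base_mass")}
    dr_config = llm_sample_dr(prior, task)
    return train_policy(reward_fn, dr_config, env_src)

print(dreureka("walk on a yoga ball", "avoid aggressive motions", "<env.py>"))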

Experiment Highlights

In this section, we present the key qualitative results from our experiments, highlighting the robustness of DrEureka policies in the real-world yoga ball walking task as well as the best DrEureka outputs for all our benchmark tasks. Detailed quantitative experiments and comparisons can be found in the paper. All videos are played at 1x speed.

DrEureka 5-Minute Uncut Deployment Video

DrEureka Walking Globe Gallery

The DrEureka policy exhibits impressive robustness in the real world, adeptly balancing and walking atop a yoga ball under varied, uncontrolled terrain conditions and disturbances.

We also tried kicking or deflating the ball; the DrEureka policy is robust to these disturbances and can recover from them!

DrEureka Balancing on a Deflating Ball

DrEureka Rewards, DR parameters, and Policies

We evaluate DrEureka on three tasks: quadruped globe walking, quadruped locomotion, and dexterous cube rotation. In this demo, we show the unmodified best DrEureka reward and DR parameters for each task, along with the policy deployed in both the training simulation environment and the real world.
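For a flavor of the output format, a DR configuration of the kind shown in the demo might look like the following. The parameter names and ranges below are illustrative placeholders, not the paper's actual best configuration.

# Hypothetical domain randomization ranges; each entry is a (low, high)
# interval that the simulator samples from per training episode.
dr_config = {
    "friction":        (0.4, 1.2),   # ground friction coefficient
    "restitution":     (0.0, 0.3),   # contact bounciness
    "added_base_mass": (-1.0, 2.0),  # payload perturbation, kg
    "motor_strength":  (0.9, 1.1),   # actuator gain multiplier
    "push_velocity":   (0.0, 0.5),   # random base pushes, m/s
}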


[Interactive demo: selecting a task shows side-by-side Simulation and Real deployment videos, with the corresponding DrEureka reward and DR parameters displayed in a code block.]

Qualitative Comparisons

We conducted a systematic study on the benchmark quadrupedal locomotion task. Here, we present several qualitative results; see the full paper for details.

Terrain Robustness. On the quadrupedal locomotion task, we evaluate DrEureka policies on several real-world terrains and find that they remain robust, outperforming policies trained with human-designed reward and DR configurations.


The default as well as additional real-world environments to test DrEureka's robustness for quadrupedal locomotion.

DrEureka performs consistently across different terrains and maintains its advantage over the human-designed baseline.


DrEureka Safety Instruction. DrEureka's LLM reward design subroutine improves upon Eureka by incorporating safety instructions. We find this to be critical for generating reward functions safe enough to be deployed in the real world.
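As a hypothetical illustration of what such regularization looks like (not a reward actually generated by DrEureka), a safety-aware locomotion reward might pair the task term with penalties on behaviors that are risky on hardware:

import numpy as np

def safety_regularized_reward(forward_velocity, target_velocity,
                              joint_torques, action, prev_action):
    # Task term: track the commanded forward velocity.
    task = np.exp(-(forward_velocity - target_velocity) ** 2)
    # Safety terms: large torques and jerky action changes tend to
    # produce motions that are unsafe or damaging on a real robot.
    torque_penalty = 1e-4 * float(np.sum(np.square(joint_torques)))
    smoothness_penalty = 1e-2 * float(np.sum(np.square(action - prev_action)))
    return float(task) - torque_penalty - smoothness_penalty

# Example call with made-up values for a 12-joint quadruped:
a = np.zeros(12)
print(safety_regularized_reward(0.9, 1.0, np.ones(12), a, a))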

DrEureka Reward-Aware Physics Prior. Through extensive ablation studies, we find that using the initial Eureka policy to build a reward-aware physics prior, and then using the LLM to sample DR parameters from it, is critical for obtaining the best real-world performance.
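As a concrete sketch of how such a prior can be computed for one parameter, the snippet below sweeps friction over a grid, keeps the values at which the policy still performs well, and reports the resulting feasible interval. Here evaluate_policy_in_sim is a stand-in for real simulation rollouts, and its numbers are made up.

def evaluate_policy_in_sim(friction):
    # Stand-in for rolling out the initial Eureka policy with the given
    # friction coefficient and returning its mean task reward. (Toy
    # model: the policy only succeeds for friction in ~[0.25, 1.25].)
    return 1.0 if 0.25 <= friction <= 1.25 else 0.1

def feasible_range(values, threshold=0.5):
    # Keep parameter values where the policy still performs acceptably;
    # the min/max of that set is the interval reported to the LLM as
    # this parameter's entry in the reward-aware physics prior.
    good = [v for v in values if evaluate_policy_in_sim(v) >= threshold]
    return (min(good), max(good)) if good else None

grid = [i / 10 for i in range(21)]              # friction 0.0 ... 2.0
print("friction prior:", feasible_range(grid))  # -> (0.3, 1.2)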

Failure Videos and Limitations

Finally, we show several occasions on which the robot falls off the ball. There are many avenues for further improving DrEureka. For example, DrEureka policies are currently trained entirely in simulation; using real-world execution failures as feedback may be an effective way for LLMs to decide how best to tune sim-to-real in successive iterations. Furthermore, all tasks and policies in our work operate purely from the robot's proprioceptive inputs; incorporating vision or other sensors may further improve both policy performance and the LLM feedback loop.

BibTeX

@inproceedings{ma2024dreureka,
  title     = {DrEureka: Language Model Guided Sim-To-Real Transfer},
  author    = {Yecheng Jason Ma and William Liang and Hungju Wang and Sam Wang and Yuke Zhu and Linxi Fan and Osbert Bastani and Dinesh Jayaraman},
  year      = {2024},
  booktitle = {Robotics: Science and Systems (RSS)}
}