Action Chunking with Transformers (ACT)
Published:
This blog covers a SOTA imitation learning model called Action Chunking with Transformers (ACT), which can perform versatile tasks from only a small amount of demonstration data.
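To make the idea of action chunking concrete, here is a minimal sketch of the control loop: the policy predicts a chunk of several future actions per inference call and executes them before querying the policy again. The `policy` stub, chunk size, and action dimension are illustrative assumptions, not code from the post; the full ACT method additionally uses temporal ensembling over overlapping chunks and a CVAE training objective.

```python
# Minimal sketch of the action-chunking control loop behind ACT.
# `policy`, CHUNK_SIZE, and ACTION_DIM are illustrative assumptions.
import numpy as np

CHUNK_SIZE = 8      # number of future actions predicted per inference call
ACTION_DIM = 7      # e.g. joint targets for a 7-DoF arm

def policy(observation: np.ndarray) -> np.ndarray:
    """Placeholder for the trained transformer: returns a chunk of future actions."""
    return np.zeros((CHUNK_SIZE, ACTION_DIM))

def run_episode(env, horizon: int = 200) -> None:
    obs = env.reset()
    t = 0
    while t < horizon:
        chunk = policy(obs)            # predict CHUNK_SIZE actions at once
        for action in chunk:           # execute the whole chunk before re-planning
            obs, reward, done, info = env.step(action)
            t += 1
            if done or t >= horizon:
                return
```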
Published:
This blog goes over Decision Transformer, an offline RL method that learns to maximize returns from pre-gathered data using a transformer model.
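As a quick reminder of how Decision Transformer frames offline RL as sequence modeling, the sketch below (standard notation assumed, not taken from this page) shows the trajectory tokenization and training objective: each action is predicted from the preceding returns-to-go, states, and actions.

```latex
% Decision Transformer sequence modeling (standard notation assumed):
% a trajectory is tokenized as returns-to-go, states, and actions,
\tau = \left(\hat{R}_1, s_1, a_1,\; \hat{R}_2, s_2, a_2,\; \dots,\; \hat{R}_T, s_T, a_T\right),
\qquad \hat{R}_t = \sum_{t'=t}^{T} r_{t'},
% and the transformer is trained to predict each action from the preceding tokens:
\min_\theta \; \sum_{t} \ell\!\left(a_t,\; \pi_\theta\!\left(\hat{R}_{\le t}, s_{\le t}, a_{<t}\right)\right)
```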
Published:
This blog explains Robotic Transformer, a language-instructed robot control model devised by Google Research that can handle a variety of tasks.
Published:
This blog quickly covers a SOTA imitation learning model called Robotics Transformer (RT-1), which can perform various tasks based on a language instruction.
Published:
This blog thoroughly covers the Actor-Critic approach, a key concept in RL that allows algorithms to handle continuous action spaces with low variance by using both value and policy networks. Famous Actor-Critic methods like A2C, PPO, DDPG, and SAC are also showcased in the blog.
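For a concrete picture of the value-plus-policy-network structure, here is a minimal one-step advantage actor-critic (A2C-style) update in PyTorch. The network sizes, the Gaussian policy, and the toy transition at the bottom are illustrative assumptions, not code from the post.

```python
# Minimal one-step advantage actor-critic update (A2C-style), as a sketch.
# Network sizes, the Gaussian policy, and the toy transition are assumptions.
import torch
import torch.nn as nn

obs_dim, act_dim = 4, 2
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))  # policy mean
log_std = nn.Parameter(torch.zeros(act_dim))                                      # learned policy std
critic = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))       # value network
params = list(actor.parameters()) + [log_std] + list(critic.parameters())
optimizer = torch.optim.Adam(params, lr=3e-4)
gamma = 0.99

def update(state, action, reward, next_state, done):
    """One actor-critic update from a single transition (s, a, r, s', done)."""
    value = critic(state).squeeze(-1)
    with torch.no_grad():                                   # bootstrapped TD target
        target = reward + gamma * (1.0 - done) * critic(next_state).squeeze(-1)
    advantage = target - value                              # low-variance signal for the actor

    dist = torch.distributions.Normal(actor(state), log_std.exp())
    log_prob = dist.log_prob(action).sum(-1)                # log pi(a|s) for a continuous action
    policy_loss = -(log_prob * advantage.detach()).mean()   # push up probability of good actions
    value_loss = (target - value).pow(2).mean()             # fit the critic to the TD target

    optimizer.zero_grad()
    (policy_loss + value_loss).backward()
    optimizer.step()

# Toy usage with fabricated tensors:
update(torch.randn(1, obs_dim), torch.randn(1, act_dim),
       reward=torch.tensor([1.0]), next_state=torch.randn(1, obs_dim),
       done=torch.tensor([0.0]))
```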
Published:
This blog thoroughly covers the policy gradient method, which is crucial for RL algorithms to handle continuous action spaces.
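For reference, the central result the post builds on is the policy gradient theorem; the sketch below states it in standard notation (symbols assumed, not copied from this page), together with the REINFORCE estimator that replaces Q with the sampled return.

```latex
% Policy gradient theorem (standard form):
\nabla_\theta J(\theta)
  = \mathbb{E}_{\pi_\theta}\!\left[\nabla_\theta \log \pi_\theta(a \mid s)\, Q^{\pi_\theta}(s, a)\right]
% REINFORCE replaces Q with the sampled return G_t from a rollout:
\nabla_\theta J(\theta) \approx \sum_{t} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, G_t
```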
Published:
This blog quickly goes over temporal difference (TD) learning, which is a vital aspect that makes RL sample efficient.
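For a one-line refresher on what TD learning does, the standard tabular TD(0) update is sketched below (notation assumed, not taken from this page): the value estimate is nudged toward a bootstrapped target built from the next reward and state, so learning happens at every step instead of only at episode end.

```latex
% Tabular TD(0) update (standard form):
V(s_t) \leftarrow V(s_t) + \alpha \left[r_{t+1} + \gamma\, V(s_{t+1}) - V(s_t)\right]
```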