Google DeepMind researchers are working on a 'solidly amateur' table tennis-playing robot

Following the trend of having agents and robots reach human-level performance on various real-world tasks, such as cooking, cleaning, and even doing backflips, a research team at Google DeepMind recently published a paper detailing a robot agent that plays table tennis at a human level. The robot was tested over 29 matches against players of varying skill levels. Although it lost every match against tournament-level opponents, it beat all of the beginner players and 55% of the intermediate players, demonstrating solidly amateur human-level performance.

Given what table tennis demands of a robot (high-speed motion, precise control, real-time decision-making, and human-robot interaction), the sport has served as a robotics benchmark since the 1980s. To play well, a robot must combine low-level physical skills with high-level strategic decision-making, train extensively, and cope with human opponents it has never seen before. To meet these demands, the system features a hierarchical and modular policy architecture, a novel hybrid training method, and real-time adaptation to unseen opponents, and it was evaluated in a user study that pitted it against previously unseen humans in real physical matches.
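The general idea of a hierarchical, modular policy can be sketched in a few lines of code. This is a minimal illustration only, with hypothetical names, not DeepMind's actual implementation: a high-level controller selects one of several low-level skill policies based on the incoming ball's state, and the chosen skill produces the paddle command.

```python
# Toy sketch of a hierarchical policy: a high-level controller chooses a
# low-level skill, and the skill maps the ball state to a paddle command.
# All function and field names here are illustrative assumptions.

def forehand_drive(ball):
    # Low-level skill: aggressive forehand return.
    return {"skill": "forehand_drive", "paddle_x": ball["x"] + 0.1}

def backhand_push(ball):
    # Low-level skill: defensive backhand return.
    return {"skill": "backhand_push", "paddle_x": ball["x"] - 0.1}

SKILLS = {"forehand": forehand_drive, "backhand": backhand_push}

def high_level_controller(ball):
    # Toy heuristic standing in for a learned skill-selection policy:
    # balls on the right half of the table get a forehand, others a backhand.
    return "forehand" if ball["x"] >= 0 else "backhand"

def act(ball):
    # Hierarchy: pick a skill at the high level, then run it at the low level.
    skill_name = high_level_controller(ball)
    return SKILLS[skill_name](ball)

command = act({"x": 0.3, "y": 1.2, "speed": 4.0})
print(command["skill"])
```

The appeal of this modular decomposition, as the paper's framing suggests, is that each low-level skill can be trained and swapped independently while the high-level controller handles strategy and opponent adaptation.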

The robot still has several limitations: it struggles with very fast, very low, and very high balls, its paddle modeling is limited, it has trouble reading extreme spin accurately, and its motion tracking suffers from inaccuracies, among other issues. According to the research team, these limitations all point to directions for future research. They also expect several components of the robot's design, including the hierarchical policy architecture, the hybrid training method, and the adaptation strategies, to have an impact well beyond table tennis.