
UK Manipulation Workshop in Edinburgh

If you happen to have a 2h break between talks in Edinburgh, you might as well climb an extinct volcano (Arthur’s Seat)!  

A few notes on the UK Robot Manipulation Workshop by Rich Walker, with comments from Louis Flinn

Edinburgh is a lovely city for an event. Edinburgh in January has the advantage that it’s not full of people, and the disadvantage of some proper cold and ice! But the venue at the University was great, with views of Arthur’s Seat and lots of local amenities. This was the seventh of these Robot Manipulation workshops, so a lot of people knew each other, which always gives an event a cosy feel. A lot of people here are also working in ARIA’s Robot Dexterity programme, so that created an extra “density” of connection.

Not all the talks were super relevant to my interests, so I will spare you the ones I struggled to understand. But there were quite a few that had something I did understand!

Nathan Lepora gave a really interesting talk on tactile sensing in robotics, going right back into the history of the field and bringing us up to speed. It was really interesting to see that e-skin researchers don’t really seem to put their sensing on the robots, and also to notice that the link between tactile sensing and haptic feedback still seems to be under-developed. We were really surprised when we built the Tactile Telerobot (bear in mind that was in 2018 – if you’re interested in the current version, check out our teleoperation system) that it was such a step forwards, and I guess that it may still be one!

Efi Psomopoulou dug into work on adaptive manipulation using tactile data – and in particular analytic methods in this area. My own feeling is that this sort of work is how we will get robots to perform skilled tasks reliably and robustly. In particular, the shear-based grasp control was exciting to see.
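
For anyone who hasn’t come across shear-based grip control, the core idea is to keep the contact inside the friction cone and squeeze harder when the measured shear force gets close to the limit. Here’s a minimal sketch of that principle – my own toy illustration, assuming a tactile sensor that reports normal and shear force at the contact, and definitely not Efi’s actual controller:

```python
# Toy shear-based grip controller (illustrative only, not the controller
# from the talk). Assumes a tactile sensor that reports the normal force
# and the tangential (shear) force at the contact.

def update_grip_force(f_normal, f_shear, mu=0.5, margin=0.8, gain=2.0,
                      f_min=1.0, f_max=20.0):
    """Return a new normal-force setpoint that keeps the contact inside
    the friction cone: |f_shear| <= mu * f_normal.

    mu     -- assumed friction coefficient
    margin -- fraction of the friction cone we are willing to use
    gain   -- how aggressively to react when slip is imminent
    """
    # Normal force needed to carry this shear load with the desired margin.
    f_required = abs(f_shear) / (mu * margin)
    if f_required > f_normal:
        # Close to slipping: increase grip in proportion to the deficit.
        f_normal += gain * (f_required - f_normal)
    else:
        # Plenty of margin: relax slowly so we don't crush the object.
        f_normal -= 0.1 * (f_normal - f_required)
    return min(max(f_normal, f_min), f_max)
```

The analytic methods presented in the talk are of course far more principled than this, but the friction-cone intuition is the same.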

Perla Maiolino is working on whole-body contact sensing for robots, and showed some really interesting results using distributed time-of-flight sensors to make detection arrays across the robot. I have to say the demo video from the SestoSense project, with the person working in the car alongside the robot arm, was very convincing.

Adam Norton came over from the US (not the only American in the audience – always nice to see the international representation!) to talk about the COMPARE ecosystem work. They are building a framework for evaluating grasping and manipulation: a pipeline of modular components, so you can swap one out for another and measure the effect on performance. Interesting reflections on how to get ROS1 components to work with ROS2.
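
I won’t try to reproduce the COMPARE architecture here, but the general pattern – a pipeline of swappable stages behind common interfaces – looks roughly like this (names and interfaces are mine for illustration, not the actual COMPARE components):

```python
# Rough sketch of a modular grasping pipeline with swappable stages.
# Names and interfaces are illustrative, not the actual COMPARE components.
from typing import Protocol


class Perception(Protocol):
    def detect_objects(self, rgbd_frame): ...


class GraspPlanner(Protocol):
    def plan(self, objects): ...


class Controller(Protocol):
    def execute(self, grasp) -> bool: ...


class Pipeline:
    """Chain of stages; swap any one out to measure its effect on success rate."""

    def __init__(self, perception: Perception, planner: GraspPlanner,
                 controller: Controller):
        self.perception = perception
        self.planner = planner
        self.controller = controller

    def run_trial(self, rgbd_frame) -> bool:
        objects = self.perception.detect_objects(rgbd_frame)
        grasp = self.planner.plan(objects)
        return self.controller.execute(grasp)
```

Benchmarking then becomes: hold two stages fixed, swap the third, and compare success rates over the same set of trials.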

Arran Reader gave a good overview of some of the work on how we handle multiple objects. One of the questions we dexterous hand people often ask is “why so many fingers?”, and being able to pick up one object and then another is the most obvious thing we do like this. Some interesting work on combining motion capture with EMG was mentioned that I must look into.

Stéphanie Rossit talked about how the brain represents things that are to be interacted with, and in particular some recent findings that there are areas of the brain that respond specifically to “tools” – adjacent to the ones that respond specifically to “hands”. The idea that there are distinct neural formats for tool manipulation knowledge makes sense intuitively, and I look forward to seeing how we can turn this insight into robot control techniques.

Mehmet Dogar gave us a whistlestop tour of physics based methods for manipulation, including how to exploit hitting objects to make them slide and how to do in-hand manipulation with soft hands. Approximating the physics of pushing is a cool idea.
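
To give a flavour of what “approximating the physics of pushing” means, here is a crude toy model – my own simplification for illustration, not anything presented in the talk: push through the object’s centre of friction and it mostly translates; push off-centre and it rotates as well.

```python
# Toy quasi-static pushing model (my own crude simplification, not the
# methods from the talk). Pushing through the centre of friction mostly
# translates the object; pushing off-centre makes it rotate too.
import math


def push_step(x, y, theta, contact_offset, push_dist, c=0.01):
    """Advance the object pose (x, y, theta) by one small push.

    contact_offset -- lateral distance from the centre of friction to the
                      contact point (m); 0 means a dead-centre push
    push_dist      -- how far the pusher advances this step (m)
    c              -- lumped parameter standing in for the (unknown)
                      pressure distribution under the object (assumed)
    """
    # Rotation grows with the moment arm of the push about the centre of friction.
    dtheta = push_dist * contact_offset / (c + contact_offset ** 2)
    # Translation follows the object's current heading.
    dx = push_dist * math.cos(theta)
    dy = push_dist * math.sin(theta)
    return x + dx, y + dy, theta + dtheta


# A slightly off-centre push makes the object drift and curl away.
pose = (0.0, 0.0, 0.0)
for _ in range(20):
    pose = push_step(*pose, contact_offset=0.02, push_dist=0.01)
print(pose)
```

The real methods fit or learn that pressure-distribution term rather than guessing it, but even a toy like this shows why a planner can exploit where it pushes as well as how far.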

Shan Luo‘s talk at the start of Wednesday on designing tactile sensing woke us up nicely. Shan’s been doing some great work with vision based sensing and working out how to understand the results of these sensors, as well as some nice work on simulation that I’m sure will be helpful for robot design.

Another international (Denmark!) visitor was Guggi Kofod from Pliantics, another ARIA Creator, working on new soft actuation technology. It was great to finally get the technical overview and be able to dig into ways to build and optimise these actuators. We’re really looking forward to putting them to work!

Matei Ciocarlie gave the main keynote on the topic of manipulation. A couple of interesting observations from it: the object becomes part of the control space once you have transient contacts, and kinematics can be used to explore the robot’s overall search space to provide a scaffold for later RL. I think we disagree with some of his ideas around how to drive the fingers, but that’s the fun of engineering!

Louis:
I really enjoyed this one. There are two new tactile sensors he presented from his lab that are worth mentioning:

1) https://roamlab.github.io/PopcornFT/ – low-cost, finger-agnostic sensors

2) SpikeATac – to me this is especially interesting: it uses characteristics of PVDF I haven’t seen before in tactile sensors, with incredibly high performance (check out their high-speed demo of grasping dried seaweed).

They’re both quite interesting because they’re different from the sensors we make at Shadow. It’s always good to see others come up with different approaches to the same problems!

What I most noticed in this workshop was the strong sense of a community working on overlapping areas of the problem space, and that this is building a strength in robotics in the UK that was not really here a decade ago. Aaron Prather (another US visitor) gave a nice summation on his blog, which I’d advise reading, but my most important takeaway is that next year has a high bar to reach!

Louis:
I came away from the event thinking that there are lots of promising avenues and possible futures for dexterous robot hardware (new tactile sensors, new actuation methods, open-source low-cost parts, etc.), but there wasn’t much promising talk about the next steps in better quality control.

Reinforcement learning and imitation learning remain the most prominent strategies. We’ve seen them deliver impressive results compared to classical control, but they don’t seem to scale to fine-grained, truly dexterous manipulation (like the example Rich gave of picking a lock). The control problem is still the one that needs solving, and nobody seems to know where to look.
