DARPA SubT Finals: Robot Operator Wisdom
Each of the DARPA Subterranean Challenge teams is allowed to bring up to 20 people to the Louisville Mega Cavern for the final event. Of those 20 people, only five can accompany the robots to the course staging area to set up the robots. And of those five, just one person can be what DARPA calls the Human Supervisor.

The Human Supervisor role, which most teams refer to as Robot Operator, is the only person allowed to interface with the robots while they’re on the course. Or, it’s probably more accurate to say that the team’s base station computer is the only thing allowed to interface with robots on the course, and the human operator is the only person allowed to use the base station. The operator can talk to their teammates at the staging area, but that’s about it—the rest of the team can’t even look at the base station screens.

Robot operator is a unique job that can be different for each team, depending on what kinds of robots that team has deployed, how autonomous those robots are, and what strategy the team is using during the competition. On the second day of the SubT preliminary competition, we talked with robot operators from all eight Systems Track teams to learn more about their robots, exactly what they do during the competition runs, and their approach to autonomy.

“DARPA is interested in approaches that are highly autonomous without the need for substantive human interventions; capable of remotely mapping and/or navigating complex and dynamic terrain; and able to operate with degraded and unreliable communication links. The team is permitted to have a single Human Supervisor at a Base Station… The Human Supervisor is permitted to view, access, and/or analyze both course data and status data.
Only the Human Supervisor is permitted to use wireless communications with the systems during the competition run.”

DARPA’s idea here is that most of the robots competing in SubT will be mostly autonomous most of the time, hence their use of “supervisor” rather than “operator.” Requiring substantial human-in-the-loop involvement is problematic for a couple of reasons. First, direct supervision requires constant communication, and we’ve seen how problematic communication can be on the SubT course. And second, operation means the need for a skilled and experienced operator, which is fine if you’re a SubT team that’s been practicing for years but could be impractical for a system of robots that’s being deployed operationally.

So how are teams making the robot operator role work, and how close are they to being robot supervisors instead? I went around the team garages on the second day of preliminary runs and asked each team operator the same three questions about their roles. I also asked the operators, “What is one question I should ask the next operator I talk to?” I added this as a bonus question, with each operator answering a question suggested by a different team’s operator.

Team Robotika

Robot Operator: Martin Dlouhy

Tell me about the team of robots that you’re operating and why you think it’s the optimal team for exploring underground environments.

This is the third time we’ve participated in a SubT event; we’ve tried various robots, small ones, bigger ones, but for us, these two robots seem to be optimal. Because we are flying from the Czech Republic, the robots have to fit in our checked luggage. We also don’t have the smaller robots or the drones that we had, because until about three weeks ago, we didn’t even know if we would be allowed to enter the United States.
So this is optimal for what we can bring to the competition, and we would like to demonstrate that we can do something with a simple solution.

Once your team of robots is on the course, what do you do during the run?

We have two robots, so it’s easier than for some other teams. When the robots are in network range, I have some small tools to locally analyze data to help find artifacts that are hard for the robots to see, like the cellphone or the gas source. If everything goes fine, I basically don’t have to be there. We’ve been more successful in the Virtual SubT competition because over half our team are software developers. We’ve really pushed hard to make the Virtual and Systems software as close as possible, and in Virtual, it’s fully autonomous from beginning to end. There’s one step that I do manually as operator—the robots have neural networks to recognize artifacts, but it’s on me to click confirm to submit the artifact reports to DARPA.

What autonomous decisions would you like your robots to be able to make that they aren’t currently making, and what would it take to make that possible?

I would actually like an operator-less solution, and we could run it, but it’s still useful to have a human operator—it’s safer for the robot, because it’s obvious to a human when the robot is not doing well.

Bonus operator question: What are the lowest and highest level decisions you have to make?

The lowest level is, I open the code and change it on the fly. I did it yesterday to change some of the safety parameters. I do this all the time; it’s normal. The highest level is asking the team, “Guys, how are we going to run our robots today?”

Team MARBLE

Robot Operator: Dan Riley

Tell me about the team of robots that you’re operating and why you think it’s the optimal team for exploring underground environments.

We’ve been using the Huskies [wheeled robots] since the beginning of the competition; it’s a reliable platform with a lot of terrain capability.
It’s a workhorse that can do a lot of stuff. We were also using a tank-like robot at one time, but we had traversability issues, so we decided to drop that one for this competition. We also had UAVs, because there’s a lot of value in not having to worry about the ground while getting to areas that you can’t reach with a ground robot, but unfortunately we had to drop those too because of the number of people and the time that we had. We decided to focus on what we knew we could do well, and make sure that our baseline system was super solid. And we added the Spot robots within the last two months, mostly to access areas that the Huskies can’t, like going up and down stairs and tricky terrain. It’s fast, and we really like it.

Our team of robots is closely related to our deployment strategy. The way our planner and multi-robot coordination works is that the first robot really just plows through the course looking for big frontiers and new areas, and then subsequent robots fill in the space behind, looking for more detail. So we deploy the Spots first to push the environment, since they’re faster than the Huskies, and the Huskies follow along and fill in the communications network.

We know we don’t want to run five robots tomorrow. Before we got here, we saw the huge cavern and thought that running more robots would be better. But based on the first couple of runs, we now know that the space inside is much smaller, so we think four robots is good.

Once your team of robots is on the course, what do you do during the run?

The main thing I’m watching for is artifact reports from the robots. While I’m waiting for artifact reports, I’m monitoring where the robots are going, and mainly I want to see them going to new areas. If I see them backtracking or going where another robot has explored already, I have the ability to send them new goal points in another area. When I get an artifact report, I look at the image to verify that it’s a good report.
For objects that may not be visible, like the cell phone [which has to be detected through the wireless signal it emits], if it’s early in the mission I’ll generally wait and see if I get any other reports on it from another robot. The localization isn’t great on those artifacts, so once I do submit, if it doesn’t score, I have to look around to find an area where it might be. For instance, we found this giant room with lots of shelves and stuff, and that’s a great place to put a cell phone, and sure enough, that’s where the cell phone was.

What autonomous decisions would you like your robots to be able to make that they aren’t currently making, and what would it take to make that possible?

We pride ourselves on our autonomy. From the very beginning, that was our goal, and actually in earlier competitions I had very little control over the robot; I could not even send it a goal point. All I was getting was reports—it was a one-way street of information. I might have been able to stop the robot, but that was about it. Later on, we added the goal point capability and an option to drive the robot if I need to take over to get it out of a situation.

I’m actually the lead for our Virtual Track team as well, and that’s already decision-free. We’re running the exact same software stack on our robots, and the only difference is that the virtual system also does the artifact reporting. Honestly, I’d say that we’re more effective having the human be able to make some decisions, but the exact same system works pretty well without having any human at all.

Bonus operator question: How much sleep did you get last night?

I got eight hours, and I could have had more, except I sat around watching TV for a while. We stressed ourselves out a lot during the first two competitions, and we had so many problems.
It was horrible, so we said, “We’re not doing that again!” A lot of our problems started with the setup and launching phase, just getting the robots started up and ready to go and out of the gate. So we spent a ton of time making sure that our startup procedures were all automated. And when you’re able to start up easily, things just go well.

Team Explorer

Robot Operator: Chao Cao

Tell me about the team of robots that you’re operating and why you think it’s the optimal team for exploring underground environments.

We tried to diversify our robots for the different kinds of environments in the challenge. We have wheeled vehicles, aerial vehicles, and legged vehicles (Spot robots). Our wheeled vehicles are different sizes; two are relatively big and one is smaller, and two are articulated in the middle to give them better mobility performance in rough terrain. Our smaller drones can be launched from the bigger ground robots, and we have a larger drone with better battery life and more payload. In total, there are 11 robots, which is quite a lot to be managed by a single human operator under a constrained time limit, but if we manage those robots well, we can explore quite a large three-dimensional area.

Once your team of robots is on the course, what do you do during the run?

Most of the time, to be honest, it’s like playing a video game. It’s about allocating resources to gain rewards (which in this case are artifacts) by getting the robots spread out to maximize coverage of the course. I’m monitoring the status of the robots, where they’re at, and what they’re doing. Most of the time I rely on the autonomy of the robots, including for exploration, coordination between multiple robots, and detecting artifacts.
But there are still times when the robots might need my help. For example, yesterday one of the bigger robots got itself stuck in the cave branch, but I was able to intervene and get it to drive out.

What autonomous decisions would you like your robots to be able to make that they aren’t currently making, and what would it take to make that possible?

Humans have a semantic understanding of the environment. Just by looking at a camera image, I can predict what an environment will be like and how risky it will be, but robots don’t have that kind of higher-level decision capability. So I might want a specific kind of robot to go into a specific kind of environment based on what I see, and I can redirect robots to go into areas that are a better fit for them. For me as an operator, at least from my personal experience, I think it’s still quite challenging for robots to perform this kind of semantic understanding, and I still have to make those decisions.

Bonus operator question: What is your flow for decision making?

Before each run, we’ll have a discussion among all the team members to figure out a rough game plan, including a deployment sequence—which robots go first, and whether the drones should be launched from the ground vehicles or from the staging area. During the run, things are changing, and I have to make decisions based on the environment. I’ll talk to the pit crew about what I can see through the base station, and then I’ll make an initial proposal based on my instincts for what I think we should do. But I’m very focused during the run and have a lot of tasks to do, so my teammates will think about time constraints, how conservative we want to be, and where other robots are, because I can’t think through all of those possibilities; then they’ll give me feedback. Usually this back and forth is quick and smooth.
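Several of the operators describe the run as a resource-allocation problem: spread the robots across unexplored frontiers so that no two waste time on the same area. As a purely illustrative sketch (the function and robot names below are invented, not any team’s actual planner), a greedy nearest-frontier assignment might look like this:

```python
import math

def assign_frontiers(robots, frontiers):
    """Greedily assign each robot to its nearest unclaimed frontier.

    robots:    dict of robot name -> (x, y) position
    frontiers: list of (x, y) frontier centroids
    Returns a dict of robot name -> chosen frontier.
    """
    remaining = list(frontiers)
    assignments = {}
    for name, pos in robots.items():
        if not remaining:
            break  # more robots than frontiers: extras stay idle
        # Pick the closest remaining frontier for this robot.
        best = min(remaining, key=lambda f: math.dist(pos, f))
        assignments[name] = best
        remaining.remove(best)  # claimed, so no two robots share a goal
    return assignments

robots = {"spot1": (0.0, 0.0), "husky1": (2.0, 0.0)}
frontiers = [(10.0, 0.0), (3.0, 1.0)]
print(assign_frontiers(robots, frontiers))
# → {'spot1': (3.0, 1.0), 'husky1': (10.0, 0.0)}
```

Real SubT planners weigh far more than straight-line distance (per-platform terrain cost, communications connectivity, expected information gain), but the claim-a-frontier structure is the same basic idea.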
[Photo: DARPA] The Robot Operator is the only person allowed to interface with the robots while they’re on the course—the operator pretty much controls the entire run by themselves.

Team CTU-CRAS-NORLAB

Robot Operator: Vojtech Salansky

Tell me about the team of robots that you’re operating and why you think it’s the optimal team for exploring underground environments.

We chose many different platforms. We have some tracked robots, wheeled robots, Spot robots, and some other experimental UGVs [small hexapods and one big hexapod], and every UGV has a different ability to traverse terrain; we are trying to cover all possible locomotion types to be able to traverse anything on the course. Besides the UGVs, we’re using UAVs as well, which are able to go through both narrow corridors and bigger spaces.

We brought a large number of robots, but the number that we’re using, about ten, is enough to be able to explore a large part of the environment. Deploying more would be really hard for a pit crew of only five people, and there isn’t enough space for more robots.

Once your team of robots is on the course, what do you do during the run?

It differs run by run, but the robots are mostly autonomous, so they decide where to go, and I’m looking at artifact detections uploaded by the robots and approving or disapproving them. If I see that a robot is stuck somewhere, I can help it decide where to go. If it looks like a robot may lose communications, I can move some robots to make a chain from other robots to extend our network. I can give high-level direction for exploration, but I don’t have to—the robots are updating their maps and making decisions to best explore the whole environment.

What autonomous decisions would you like your robots to be able to make that they aren’t currently making, and what would it take to make that possible?

Terrain assessment is subtle. At a higher level, the operator has to decide where to send a walking robot and where to send a rolling robot.
It’s tiny details on the ground and a feeling about the environment that help the operator make those decisions, and that is not done autonomously.

Bonus operator question: How much bandwidth do you have?

I’m on the edge. I have a map, I have some subsampled images, I have detections, I have topological maps, but it would be better to have everything in 4K and dense point clouds.

Team CSIRO Data61

Robot Operator: Brendan Tidd

Tell me about the team of robots that you’re operating and why you think it’s the optimal team for exploring underground environments.

We’ve got three robot types that are here today—Spot legged robots, big tracked robots called Titans, and drones. The legged ones have been pretty amazing, especially for urban environments with narrow stairs and doorways. The tracked robots are really good in the tricky terrain of cave environments. And the drones can obviously add situational awareness from higher altitudes and detect those high artifacts.

Once your team of robots is on the course, what do you do during the run?

We use the term “operator,” but I’m actually supervising. Our robots are all autonomous; they all know how to divide and conquer, and they’re all going to optimize exploring for depth, trying to split up where they can and not get in each other’s way. In particular, the Spots and the Titans have a special relationship where the Titan will give way to the Spot if they ever cross paths, for obvious reasons. So my role during the run is to coordinate node placement—that’s something that we haven’t automated. We’ve got a lot of information that comes back that I use to decide on good places to put nodes, and probably the next step is to automate that process. I also decide where to launch the drone. The launch itself is one click, but it still requires me to know where a good place is.
If everything goes right, in general the robots will just do their thing.

What autonomous decisions would you like your robots to be able to make that they aren’t currently making, and what would it take to make that possible?

The node drop thing is vital, but I think it’s quite a complex thing to automate because there are so many different aspects to consider. The node mesh is very dynamic; it’s affected by all the robots that are around it and obviously by the environment. The drone launch is similar, but it requires the robots to know when it’s worth it to launch a drone. So those two things, but also pushing on the nav stack to make sure it can handle the crazy stuff. And I guess the other side is the detection. It’s not a trivial thing knowing what’s a false positive or not; that’s a hard thing to automate.

Bonus operator question: How stressed are you, knowing that it’s just you controlling all the robots during the run?

Coping with that is a thing! I’ve got music playing when I’m operating; I actually play in a metal band, and we get on stage sometimes, and the feeling is very similar, so it’s really helpful to have the music there. But also the team, you know? I’m confident in our system, and if I wasn’t, that would really affect my mental state. But we test a lot, and all that preparedness helps with the stress.

Team CoSTAR

Robot Operator: Kyohei Otsu

Tell me about the team of robots that you’re operating and why you think it’s the optimal team for exploring underground environments.

We have wheeled vehicles, legged vehicles, and aerial drones, so we can cover many terrains, handle stairs, and fly over obstacles. We picked three completely different mobility systems to be able to use many different strategies. The robots can autonomously adjust their roles by themselves; some explore, and some help with communication for other robots.
The number of robots we use depends on the environment. Yesterday we deployed seven robots onto the course because we assumed that the environment would be huge, but it’s a bit smaller than we expected, so we’ll adapt our number to fit the environment.

Once your team of robots is on the course, what do you do during the run?

Our robots are autonomous, and I think we have very good autonomy software. During setup the robots need some operator attention; I have to make sure that everything is working, including sensors, mobility systems, and all the algorithms. But after that, once I send a robot into the course, I totally forget about it and focus on another robot. Sometimes I intervene to better distribute our team of robots—that’s something that a human is good at, using prior knowledge to understand the environment. And I look at artifact reports; that’s most of my job.

In the first phases of the Subterranean Challenge, we were getting low-level information from the robots and sometimes using low-level commands. But as the project proceeded and our technology matured, we found that it was too difficult for the operator, so we added functionality for the robot to make all of those low-level decisions, and the operator just deals with high-level decisions.

What autonomous decisions would you like your robots to be able to make that they aren’t currently making, and what would it take to make that possible? [answered by CoSTAR co-Team Lead Joel Burdick]

Two things: the system reports that it thinks it found an artifact, and the operator has to confirm yes or no. He also has to confirm that the location seems right. The other thing is that our multi-robot coordination isn’t as sophisticated as it could be, so the operator may have to retask robots to different areas.
If we had another year, we’d be much closer to automating those things.

Bonus operator question: Would you prefer it if your system was completely autonomous and your job was not necessary?

Yeah, I’d prefer that!

Team Coordinated Robotics

Robot Operator: Kevin Knoedler

Tell me about the team of robots that you’re operating and why you think it’s the optimal team for exploring underground environments.

The ideal mix in my mind is a fleet of small drones with lidar, but they are very hard to test, and very hard to get right. Ground vehicles aren’t necessarily easier to get right, but they’re easier to test, and if you can test something, you’re a lot more likely to succeed. So that’s really the big difference with the team of robots we have here.

Once your team of robots is on the course, what do you do during the run?

Some of the robots have an automatic search function where, if they find something, they report back, and what I’d like to be doing is just monitoring. But the search function only works in larger areas. So right now the goal is for me to drive them through the narrow areas, get them into the wider areas, and let them go, but getting them to that search area is something that I mostly need to do manually, one at a time.

What autonomous decisions would you like your robots to be able to make that they aren’t currently making, and what would it take to make that possible?

Ideally, the robots would be able to get through those narrow areas on their own. It’s actually a simpler problem to solve than the larger areas; it’s just not where we focused our effort.

Bonus operator question: How many interfaces do you use to control your robots?

We have one computer with two monitors, one controller, and that’s it.

Team CERBERUS

Robot Operator: Marco Tranzatto

Tell me about the team of robots that you’re operating and why you think it’s the optimal team for exploring underground environments.

We have a mix of legged and flying robots, supported by a rover carrying a wireless antenna.
The idea is to take legged robots into harsh environments where wheeled robots may not perform as well, combined with aerial scouts that can explore the environment quickly to provide initial situational awareness to the operator, so that I can decide where to deploy the legged machines. So the goal is to combine the legged and flying robots in a unified mission to give as much information as possible to the human operator. We also had some bigger robots, but we found them to be a bit too big for the environment that DARPA has prepared for us, so we’re not going to deploy them.

Once your team of robots is on the course, what do you do during the run?

We use two main modes: one is fully autonomous operation on the robots, and the other is supervised autonomy, where I have an overview of what the robots are doing and can override specific actions. Based on the high-level information that I can see, I can decide to control a single robot and give it a manual waypoint to reposition it to a different frontier inside the environment. I can go from high-level control down to giving these single commands, but the commands are still relatively high level, like “go here and explore.” Each robot has artifact-scoring capabilities, and all these artifact detections are sent to the base station once the robot is in communication range; the human operator has to say, “Okay, this looks like a possible artifact, so I accept it,” and can then submit the position either as reported by the robot or as the optimized position reported by the mapping server.

What autonomous decisions would you like your robots to be able to make that they aren’t currently making, and what would it take to make that possible?

Each robot is autonomous by itself. But the cooperation between robots is still limited. The operator has to set bounding boxes to tell each robot where to explore. The operator has a global overview, and then inside these boxes, the robots are autonomous.
So I think at the moment, in our pipeline, we still need a centralized human supervisor to say which robot explores in which direction. We are close to automating this, but we’re not there yet.

Bonus operator question: What is one thing you would add to make your life as an operator easier?

I would like to have a more centralized way to give commands to the robots. At the moment, I need to select each robot and give it a specific command. It would be very helpful to have a centralized map where I could tell a robot to explore in a given area while considering data from a different robot. This was in our plan, but we didn’t manage to deploy it yet.
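The bounding-box workflow Tranzatto describes, where the operator assigns each robot a region and the robot explores autonomously inside it, can be sketched as a simple filter over candidate goals. This is an illustrative sketch only (the class, function, and robot names are invented for this article, not CERBERUS code):

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    """Axis-aligned region assigned by the operator to one robot."""
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def contains(self, point):
        x, y = point
        return self.xmin <= x <= self.xmax and self.ymin <= y <= self.ymax

def frontiers_for_robot(box, frontiers):
    """Keep only the frontier goals inside this robot's box;
    everything outside is some other robot's responsibility."""
    return [f for f in frontiers if box.contains(f)]

# The operator splits the course, one robot per region; each robot
# then explores autonomously within its own box.
boxes = {"legged1": BoundingBox(0, 0, 50, 30),
         "legged2": BoundingBox(50, 0, 100, 30)}
frontiers = [(10, 5), (60, 12), (95, 28)]
for robot, box in boxes.items():
    print(robot, frontiers_for_robot(box, frontiers))
```

Automating what Tranzatto says is still manual would mean computing these boxes from the shared map instead of having the operator draw them.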