
Autonomous Drones Compete Underground – Part 3

[Image: Autonomous Drone Competition]

This is the third and final part of the “Autonomous Drones Compete in Underground Challenge” interview.

See Part 1 here – https://lidarnews.com/articles/autonomous-drones-compete-in-underground-challenge/

There were many strengths of the team:

The three groups had complementary skills and were all world leaders in their domain.
Compared to the other teams, most of our team members were professional engineers and researchers with vast experience in this area.

We have all known each other for a long time and have a good working relationship.
From a technological point of view:

We used platform-agnostic localization and mapping technology as our framework, meaning all of the robots could work as one system. Having some core technologies in common helped achieve this.
We took a modular approach to the challenge: we developed platform-independent payloads that were easily switched between robotic platforms. Standardizing autonomy and mapping into one plug-and-play payload allowed us to change robotic platforms many times throughout the event to suit the requirements of the course as we learned more.
But most importantly, we had a passion to succeed. We took the challenge seriously because we wanted to showcase the capabilities of Australian companies and research labs on a global stage.

We changed our robotic platforms as we proceeded through the different circuits. The final event consisted of tunnel, urban, and cave circuits, all of which provide very challenging environments with unknown and very different terrains. This made robot platform selection important, as there is no one robot that can traverse all of those terrains. The key was to have diversity in locomotion, so we selected:

2x legged robot, Boston Dynamics’ Spot Mini (named Bluey & Bingo)
These robots are good for narrow, urban environments and stairs.
2x tracked vehicles from BIA5, a Brisbane-based company (named Bear & Rat)
These robots are good for traversing rough terrains. They also have the capacity to carry additional equipment, like the nodes for the mesh network and the drones, to preserve their battery life and position them in the best place to get good results.
2x Hovermap mounted drones, Emesent (named H1 & H2)
The drones can access areas ground vehicles can’t access, like stairwells, vertical shafts, and mezzanine levels. In fact, during the first day of preliminary rounds for the final, we were one of the only teams that discovered a vertical shaft and accessed a basement level to expand the area we had searched.

All of our robots, no matter how they moved around, had these advanced autonomy and AI capabilities:

non-GPS navigation
autonomous exploration
collaboration with robot team members
data sharing
cameras and other sensors for detecting artifacts
For the final, the teams in the Systems Competition completed one 60-minute run. The courses varied in difficulty and included 40 artifacts each. Teams earned points by correctly identifying artifacts within a five-meter accuracy and classifying these artifacts. In instances of a points tie, team rank was determined by (1) earliest time the last artifact was successfully reported, averaged across the team’s best runs on each course; (2) earliest time the first artifact was successfully reported, averaged across the team’s best runs on each course; and (3) lowest average time across all valid artifact reports, averaged across the team’s best runs on each course.
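The ranking rules above can be expressed as a sort key: points first, then the three tie-breakers in order. The following sketch is purely illustrative; team names, field names, and numbers are assumptions, not DARPA's actual scoring code.

```python
# Hypothetical sketch of the tie-break ordering described above.
# All data and field names are illustrative assumptions.

def ranking_key(team):
    """Sort key: higher points first, then the three tie-breakers
    (each averaged over the team's best runs; earlier/lower wins)."""
    return (
        -team["points"],                 # most artifact points
        team["avg_last_report_time"],    # 1. earliest last-artifact report
        team["avg_first_report_time"],   # 2. earliest first-artifact report
        team["avg_report_time"],         # 3. lowest mean valid-report time
    )

teams = [
    {"name": "Team A", "points": 23, "avg_last_report_time": 3100.0,
     "avg_first_report_time": 310.0, "avg_report_time": 1500.0},
    {"name": "Team B", "points": 23, "avg_last_report_time": 2950.0,
     "avg_first_report_time": 340.0, "avg_report_time": 1620.0},
]

standings = sorted(teams, key=ranking_key)
print([t["name"] for t in standings])  # Team B wins the tie: earlier last report
```

Here both teams score 23 points, so the first tie-breaker (earliest last-artifact report) decides the order.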

We had two days of preliminary rounds before the final competition. We did really well in the preliminary rounds and were in the lead as we went into the finals.

The day of the finals was very stressful and exciting.

We arrived in the morning at our holding area. We turned on the systems, did the standard checks, and made sure all of our batteries were charged. Then DARPA staff came around to verify that all of the robots complied with competition regulations.

Then we had to wait. We were the last team to compete in the final round because we had come first in the preliminary competitions. No communication was allowed between the different teams, to preserve the integrity of the course.

We had no last-minute coding to do, so we brainstormed concepts of operations and strategies for deploying the systems to maximize our results. Other than that, it was just jokes and deep philosophical discussions. It was a great atmosphere with plenty to eat and plenty of laughs.

When it was our turn to compete, DARPA took us to the staging area of the course. We had time to set up and prepare the robots and ground station. Then we ran the course for an hour.

After it was all over, we were taken back to our holding area. It wasn’t long before they announced that there were two teams tied for first place, and one of them was us. We were very happy, and a little frustrated, because they wouldn’t announce the tie-break method or final winner until the next day at the awards ceremony.

So we went out to celebrate.

Students did make up part of our team. In fact, the fleet operator, the one person who was able to interact with the robots during the competition, was a PhD student from the Queensland University of Technology, Brandon.

Yes, there will be ongoing collaboration between the team members. We already have other ongoing projects, so the organizations will continue working together.

We used the Rajant Kinetic Mesh network, which consists of several nodes that together create a mesh communication network.

Each robot had a Rajant module attached that allowed it to communicate with the other robots. Each tracked robot also carried an additional four modules to be deployed in the field. The fleet operator monitored signal strength and remotely deployed the nodes when he knew a robot was about to lose connection.

Each robot could access the communication mesh and send its information back to the ground station or the other robots.
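The operator's node-deployment decision described above can be sketched as a simple threshold check. The signal values, threshold, and function names below are assumptions for illustration; the real Rajant hardware exposes its own monitoring interface.

```python
# Illustrative sketch of the node-deployment decision described above.
# The threshold value and all names are assumptions, not Rajant's API.

SIGNAL_DEPLOY_THRESHOLD_DBM = -75  # assumed: deploy before the link degrades

def should_deploy_node(signal_dbm, nodes_remaining):
    """Deploy a new mesh node if the link is weakening and spares remain."""
    return signal_dbm <= SIGNAL_DEPLOY_THRESHOLD_DBM and nodes_remaining > 0

# Example: a tracked robot deep in the course with a fading link
print(should_deploy_node(-78, nodes_remaining=3))  # True: drop a node here
print(should_deploy_node(-60, nodes_remaining=3))  # False: link still strong
```

Deploying before the link actually drops, rather than after, is what keeps the mesh continuous as robots move deeper into the course.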

Our team chose to report the information in three different ways.

Our system georeferenced the artifact’s XYZ coordinates in the DARPA global frame
An image was taken of the area containing the artifact, with the artifact highlighted by a box and a classification added
An icon showed the artifact’s location in our 3D map
DARPA created a global 3D coordinate system that was the source of truth.
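The three forms of report above could be bundled into a single record per artifact. This sketch uses hypothetical field names, not DARPA's actual reporting schema:

```python
# Hypothetical structure bundling the three report forms described above.
# Field names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ArtifactReport:
    artifact_type: str   # classification, e.g. "backpack" or "survivor"
    x: float             # XYZ in the DARPA global coordinate frame
    y: float
    z: float
    image_path: str      # snapshot with the artifact highlighted by a box
    map_icon_id: int     # icon placed in the team's 3D map

report = ArtifactReport("backpack", 102.4, -15.7, 3.2, "artifacts/0042.jpg", 42)
print(report.artifact_type, (report.x, report.y, report.z))
```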

Our robots were specifically developed to operate autonomously beyond visual line of sight and communication range. Therefore the distance they can travel mainly depends on their endurance and the environment.

The final course was contained in an area of roughly several hundred meters by several hundred meters; however, it was very dense and complex. It contained three different environments: tunnels, urban areas, and natural caves. The course was designed so that the robots were out of visual line of sight of the operators after just 10 meters.

Our robots covered and explored over 90% of the course, which meant they were operating several hundred meters from the operator.

The drones and ground robots worked as a team and helped each other to maximize the area covered and the detection of artifacts.

One of the most advanced features that our fleet has, and that differentiates it from other teams, is the ability to remotely launch a drone from the ground robots.

Drones have a limited flight time compared to ground robots, but they can access elevated areas and shafts, or traverse blocked or complex environments that a ground robot cannot.

That’s why we had two drones carried into the course by the tracked robots, Rat and Bear. They were then started and launched remotely when the space was more suitable or drone exploration was required. The ground robot can be considered a mother ship in this case, as it helps the drone:

by extending the drone’s range of operations, carrying it along the obvious and simple parts of the course to the areas of interest and saving the drone’s flight time.
by navigating and traversing very narrow passages and doorways, allowing the drone to get through areas that are not flyable and access the larger areas beyond them.
by deploying the communication nodes that build the mesh network used by the drones and other vehicles to communicate with the ground station.
by sharing geo-reference information with the drone. Because the drone was launched remotely inside the course, it could not see the ground control points at the start. The information shared by the ground robots meant the drones were able to geo-reference their data and detections to the DARPA global coordinate frame.
On the other hand, the drones also helped the ground vehicles by:

acting as a communication node that the ground vehicles could use to send and receive data to and from other robots or the ground control station.
mapping and searching areas that were hard, risky, or inaccessible for ground vehicles.
providing situational awareness, data, and maps that allowed the ground robots to optimize their paths and tasks.

In certain scenarios the drone could be considered the scout of the team.
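The geo-reference sharing described above can be illustrated with a simple 2D frame transform: the ground robot shares its pose in the DARPA global frame, letting the drone map its local detections into that frame. This is a sketch under assumed names and numbers, not the team's actual implementation.

```python
# Illustrative 2D sketch of geo-referencing a local detection into a
# shared global frame. Names and values are assumptions for illustration.
import math

def local_to_global(px, py, robot_x, robot_y, robot_yaw):
    """Rotate a point from the robot's local frame by the robot's yaw,
    then translate by the robot's position in the global frame."""
    gx = robot_x + px * math.cos(robot_yaw) - py * math.sin(robot_yaw)
    gy = robot_y + px * math.sin(robot_yaw) + py * math.cos(robot_yaw)
    return gx, gy

# An artifact seen 2 m ahead of a robot at (10, 5), facing 90 degrees
print(local_to_global(2.0, 0.0, 10.0, 5.0, math.pi / 2))
```

The same idea extends to 3D with a full rotation matrix or quaternion; the key point is that the transform into the DARPA frame is shared over the mesh so the drone never needs to see the ground control points itself.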

The remote launch of the drone from the ground robot is fully autonomous after it is triggered by the operator with just one click. To enable this, a very complex process has been automated. Here are just some of the complex tasks completed by the robots to ensure a successful launch:

the ground robot stops moving and checks it is on a level enough terrain, or with an acceptable slope
it unlatches the drone’s security couplings
Hovermap, the autonomy and mapping payload, is started
Hovermap syncs and merges its data with the ground robot’s
ground control station (GCS) and pre-flight checks are completed
the drone is started (spinning the props)
pre-takeoff checks are conducted
automated takeoff is initiated
autonomous drone exploration is started
the ground robot reconfigures itself and resumes its mission
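The one-click launch above is essentially an ordered pipeline of checks and actions, where any failure aborts the launch. The step names below are paraphrases of the list above, not the team's actual code.

```python
# Minimal sketch of the automated launch sequence listed above, modeled
# as an ordered pipeline; step names are illustrative paraphrases.

LAUNCH_SEQUENCE = [
    "check_terrain_level",
    "unlatch_drone",
    "start_hovermap",
    "sync_map_with_ground_robot",
    "run_gcs_preflight_checks",
    "start_props",
    "run_pretakeoff_checks",
    "takeoff",
    "begin_autonomous_exploration",
    "resume_ground_robot_mission",
]

def run_launch(execute_step):
    """Run each step in order; abort the launch if any step fails."""
    for step in LAUNCH_SEQUENCE:
        if not execute_step(step):
            return f"aborted at {step}"
    return "launched"

# Example: every step succeeds
print(run_launch(lambda step: True))  # launched
```

Structuring the sequence as data rather than hard-coded calls makes it easy to add or reorder checks as field testing reveals new failure modes.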
The SubT Challenge was an intensive three-year program of development and learning. During this program, we really pushed the boundaries of what’s possible with autonomous robots and learned many things along the way: not just about the technology, but also how to manage complex programs and systems like this one, and how to collaborate as a team to develop effective collaborative robots. My top 3 lessons are:

Building an autonomous system is hard but doable; building a reliable autonomous system that can work in different complex environments takes it to another level. Although the systems deployed at DARPA events are research prototypes and not products, the reliability requirements are very high and should be taken seriously.
Extensive continuous field testing with iterative development is key to developing reliable autonomous field robots.
It’s important to be agile, adaptive, and ready to change your methodology or approach when you hit a dead-end. Therefore, build your technology/stack in a way that allows you to switch quickly and effectively.
Although our team progressed very well in the course of the challenge and achieved excellent results in the Final, I feel that we could have done better by tweaking our approach in different areas:

A lot of time was spent by our teams on building, debugging, and fixing hardware. Putting more professional and dedicated people on the hardware from the beginning could have saved a lot of time and helped to focus on augmenting these robotic platforms with even more advanced autonomy capabilities.
I feel that we underestimated how complex DARPA could make the course, especially for drones. We should have worked on miniaturizing the airborne system early on while maximizing its flight time by exploring different propulsion and airframe concepts.
The GCS interface and all the Human-Machine interactions were not seriously researched and considered in this project. This could limit the effectiveness of the operations, especially in this one-to-many scenario. If we had the opportunity to do it again, we would have employed expert engineers in these areas to help us build an effective interface.
We lost a lot of time and energy trying to build our own hardware and platforms, whether communication modules, ground, or aerial platforms. Next time we will start by leveraging COTS systems and working with partners and OEMs to customize or modify the platforms to our needs if required.
We should have put more dedicated resources onto the project.

For the complete article CLICK HERE.



