September 9, 2020 | International, Advanced manufacturing 4.0

JUST IN: New Navy Lab to Accelerate Autonomy, Robotics Programs

9/8/2020
By Yasmin Tadjdeh

Over the past few years, the Navy has been hard at work building a new family of unmanned surface and underwater vehicles through a variety of prototyping efforts. It is now standing up an integration lab to equip those platforms with increased autonomy, officials said Sept. 8.

The Rapid Autonomy Integration Lab, or RAIL, is envisioned as a place where the Navy can bring in and test new autonomous capabilities for its robotic vehicles, said Capt. Pete Small, program manager for unmanned maritime systems.

“Our Rapid Autonomy Integration Lab concept is really the playground where all the autonomy capabilities and sensors and payloads come together, both to be integrated ... [and] to test them from a cybersecurity perspective and test them from an effectiveness perspective,” Small said during the Association for Unmanned Vehicle Systems International's Unmanned Systems conference, which was held virtually due to the ongoing COVID-19 crisis.

Robotics technology is moving at a rapid pace, and platforms will need to have their software and hardware components replaced throughout their lifecycles, he said. To facilitate these upgrades, the service will need to integrate the new autonomy software that comes with various payloads and mission capabilities with the nuts-and-bolts software packages already running on the unmanned platforms.

“The Rapid Autonomy Integration Lab is where we bring together the platform software, the payload software, the mission software and test them,” he explained.

During testing, the service will be able to validate the integration of the software as well as predict the performance of the unmanned vehicles in a way that “we're sure that this is going to work out and give us the capability we want,” Small said.

The RAIL concept will rely on modeling-and-simulation technology with software-in-the-loop testing to validate the integration of various autonomous behaviors, sensors and payloads, he said.
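
As a rough illustration of what software-in-the-loop validation can look like, here is a minimal, hypothetical sketch in Python; every name in it is invented, and nothing here is drawn from Navy or RAIL tooling. The autonomy code under test runs unchanged while a simulator stands in for the vehicle and its sensors:

    # Minimal software-in-the-loop (SIL) sketch: the autonomy code under test
    # runs unchanged but reads from a simulated vehicle instead of hardware.
    # All names are hypothetical; this is not RAIL or Navy code.

    class SimulatedVehicle:
        """Stands in for the physical vehicle and its sensor feed."""
        def __init__(self):
            self.heading, self.speed = 0.0, 0.0

        def sense(self):
            # Return a simulated state vector in place of live sensor data.
            return {"heading": self.heading, "speed": self.speed, "contacts": []}

        def actuate(self, command):
            # Apply the autonomy software's command to the simulated dynamics.
            self.heading = command["heading"]
            self.speed = command["speed"]

    def run_sil_test(behavior, steps=1000, max_speed=8.0):
        """Drive a behavior against the simulator and check a safety invariant."""
        vehicle = SimulatedVehicle()
        for _ in range(steps):
            command = behavior(vehicle.sense())  # software under test
            assert command["speed"] <= max_speed, "behavior exceeded speed limit"
            vehicle.actuate(command)

In a setup like this, each new behavior or payload package could be exercised against the same simulated scenarios before it ever touches a real vehicle.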

“We will rely heavily on industry to bring those tools to the RAIL to do the testing that we require,” he noted.

However, the lab is not envisioned as a single, brick-and-mortar facility, but rather a network of cloud-based infrastructure and modern software tools. “There will be a certain footprint of the actual software developers who are doing that integration, but we don't see this as a big bricks-and-mortar effort. It's really more of a collaborative effort of a number of people in this space to go make this happen,” Small said.

The service has kicked off a prototype effort as part of the RAIL initiative where it will take what it calls a “third-party autonomy behavior” that has been developed by the Office of Naval Research and integrate it onto an existing unmanned underwater vehicle that runs on industry-made proprietary software, Small said. Should that go as planned, the Navy plans to apply the concept to numerous programs.
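
In software terms, that kind of integration typically reduces to agreeing on a neutral behavior interface and writing an adapter to the vendor's proprietary control software. The sketch below is purely hypothetical; none of these names or calls come from ONR, the Navy or any vendor:

    # Hypothetical sketch of plugging a third-party behavior into proprietary
    # vehicle software through an adapter. All names and interfaces invented.

    class ThirdPartyBehavior:
        """Stands in for a government-furnished autonomy behavior."""
        def decide(self, state):
            # Trivial example: hold course, slow down when contacts appear.
            speed = 2.0 if state["contacts"] else 5.0
            return {"heading": state["heading"], "speed": speed}

    class VendorAdapter:
        """Translates neutral commands into a vendor's proprietary API."""
        def __init__(self, vendor_driver):
            self.vendor_driver = vendor_driver  # assumed vendor-supplied object

        def run_once(self, behavior):
            state = self.vendor_driver.read_state()  # assumed proprietary call
            command = behavior.decide(state)         # neutral interface
            self.vendor_driver.set_course(command["heading"], command["speed"])

The point of a neutral interface like this is that the government-furnished behavior and the vendor's software can then evolve independently.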

For now, the RAIL is a prototyping effort, Small said.

“We're still working on developing the budget profile and ... the details behind it,” he said. “We're working on building the programmatic efforts behind it that really are in [fiscal year] '22 and later.”

The RAIL is part of a series of “enablers” that will help the sea service get after new unmanned technology, Small said. Others include a concept known as the unmanned maritime autonomy architecture, or UMAA; a common control system; and a new data strategy.

Cmdr. Jeremiah Anderson, deputy program manager for unmanned underwater vehicles, said an upcoming industry day on Sept. 24, focused on UMAA, will also feature information about the RAIL.

“Half of that day's agenda will really be to get into more of the nuts and bolts about the RAIL itself and about that prototyping effort that's happening this year,” he said. “This is very early in the overall trajectory for the RAIL, but I think this will be a good opportunity to kind of get that message out a little bit more broadly to the stakeholders and answer their questions.”

Meanwhile, Small noted that the Navy is making strides within its unmanned portfolio, citing a “tremendous amount of progress that we've made across the board with our entire family of UUVs and USVs.”

Rear Adm. Casey Moton, program executive officer for unmanned and small combatants, highlighted efforts with the Ghost Fleet Overlord and Sea Hunter platforms, which are unmanned surface vessels.

The Navy — working in cooperation with the Office of the Secretary of Defense and the Strategic Capabilities Office — has two Overlord prototypes. Fiscal year 2021, which begins Oct. 1, will be a particularly important period for the platforms, he said.

“Our two Overlord vessels have executed a range of autonomous transits and development vignettes,” he said. “We have integrated autonomy software, automation systems and perception systems and tested them in increasingly complex increments and vignettes since 2018.”

Testing so far has shown the platforms have the ability to perform safe, autonomous navigation in accordance with the Convention on the International Regulations for Preventing Collisions at Sea, or COLREGS, at varying speeds and sea states, he said.
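
To give a flavor of what COLREGS compliance means for autonomy software, the fragment below encodes a deliberately simplified reading of the crossing rule (Rule 15), under which a power-driven vessel keeps out of the way of another power-driven vessel crossing on her starboard side. Real implementations cover many more rules, geometries and edge cases; this is illustrative only:

    # Deliberately simplified illustration of COLREGS Rule 15 (crossing):
    # a power-driven vessel gives way to one crossing on her starboard side.
    # Real COLREGS logic spans many interacting rules; illustrative only.

    def is_give_way_in_crossing(relative_bearing_deg):
        """relative_bearing_deg: bearing to the contact, measured clockwise
        from own bow (0-360). A contact forward of the starboard beam sector
        (roughly 0 to 112.5 degrees) puts own ship in the give-way role."""
        return 0.0 < relative_bearing_deg % 360.0 < 112.5

An autonomy stack evaluates many such predicates at once, then plans maneuvers that satisfy all of the applicable rules.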

“We are pushing the duration of transits increasingly longer, and we will soon be working up to 30 days,” he said. “Multi-day autonomous transits have occurred in low- and high-traffic density environments.”

The vessels have already had interactions with commercial fishing fleets, cargo vessels and recreational craft, he said.

The longest transit to date was a round trip from the Gulf Coast to the East Coast, during which a vessel conducted more than 181 hours and over 3,193 nautical miles of COLREGS-compliant, autonomous operation, Moton added.

Both Overlord vessels are slated to conduct extensive testing and experimentation in fiscal year 2021, he said.

“These tests will include increasingly long-range transits with more complex autonomous behaviors,” he said. “They will continue to demonstrate automation functions of the machinery control systems, plus health monitoring by a remote supervisory operation center with the expectation of continued USV reliability.”

The Sea Hunter will also take part in numerous fleet exercises and tactical training events in fiscal year 2021.

“With the Sea Hunter and the Overlord USVs we will exercise ... control of multiple USVs, test command-and-control, perform as part of surface action groups and train Navy sailors on these platforms, all while developing and refining the fleet-led concept of operations and concept of employment,” Moton said.

https://www.nationaldefensemagazine.org/articles/2020/9/8/navy-testing-new-autonomy-integration-lab

On the same subject

  • 3D Printing of Multilayered Materials for Smart Helmets | 3D Printing Progress

    August 3, 2021

    A mechanical and aerospace engineering professor is developing advanced helmets to ensure that members of the military are as protected as possible from blasts and other types of attacks.

  • Trustworthy AI: A Conversation with NIST's Chuck Romine

    January 21, 2020

    By: Charles Romine

    Artificial Intelligence (AI) promises to grow the economy and improve our lives, but with these benefits, it also brings new risks that society is grappling with. How can we be sure this new technology is not just innovative and helpful, but also trustworthy, unbiased, and resilient in the face of attack? We sat down with NIST Information Technology Lab Director Chuck Romine to learn how measurement science can help provide answers.

    How would you define artificial intelligence? How is it different from regular computing?

    One of the challenges with defining artificial intelligence is that if you put 10 people in a room, you get 11 different definitions. It's a moving target. We haven't converged yet on exactly what the definition is, but I think NIST can play an important role here. What we can't do, and what we never do, is go off in a room and think deep thoughts and say we have the definition. We engage the community.

    That said, we're using a narrow working definition specifically for the satisfaction of the Executive Order on Maintaining American Leadership in Artificial Intelligence, which makes us responsible for providing guidance to the federal government on how it should engage in the standards arena for AI. We acknowledge that there are multiple definitions out there, but from our perspective, an AI system is one that exhibits reasoning and performs some sort of automated decision-making without the interference of a human.

    There's a lot of talk at NIST about “trustworthy” AI. What is trustworthy AI? Why do we need AI systems to be trustworthy?

    AI systems will need to exhibit characteristics like resilience, security and privacy if they're going to be useful and people can adopt them without fear. That's what we mean by trustworthy. Our aim is to help ensure these desirable characteristics. We want systems that are capable of either combating cybersecurity attacks, or, perhaps more importantly, at least recognizing when they are being attacked. We need to protect people's privacy. If systems are going to operate in life-or-death type of environments, whether it's in medicine or transportation, people need to be able to trust AI will make the right decisions and not jeopardize their health or well-being.

    Resilience is important. An artificial intelligence system needs to be able to fail gracefully. For example, let's say you train an artificial intelligence system to operate in a certain environment. Well, what if the system is taken out of its comfort zone, so to speak? One very real possibility is catastrophic failure. That's clearly not desirable, especially if you have the AI deployed in systems that operate critical infrastructure or our transportation systems. So, if the AI is outside of the boundaries of its nominal operating environment, can it fail in such a way that it doesn't cause a disaster, and can it recover from that in a way that allows it to continue to operate? These are the characteristics that we're looking for in a trustworthy artificial intelligence system.

    NIST is supposed to be helping industry before they even know they needed us to. What are we thinking about in this area that is beyond the present state of development of AI?

    Industry has a remarkable ability to innovate and to provide new capabilities that people don't even realize that they need or want. And they're doing that now in the AI consumer space. What they don't often do is to combine that push to market with deep thought about how to measure characteristics that are going to be important in the future. And we're talking about, again, privacy, security and resilience ... trustworthiness. Those things are critically important, but many companies that are developing and marketing new AI capabilities and products may not have taken those characteristics into consideration. Ultimately, I think there's a risk of a consumer backlash where people may start saying these things are too easy to compromise and they're betraying too much of my personal information, so get them out of my house.

    What we can do to help, and the reason that we've prioritized trustworthy AI, is we can provide that foundational work that people in the consumer space need to manage those risks overall. And I think that the drumbeat for that will get increasingly louder as AI systems begin to be marketed for more than entertainment. Especially at the point when they start to operate critical infrastructure, we're going to need a little more assurance. That's where NIST can come together with industry to think about those things, and we've already had some conversations with industry about what trustworthy AI means and how we can get there.

    I'm often asked, how is it even possible to influence a trillion-dollar, multitrillion-dollar industry on a budget of $150 million? And the answer is, if we were sitting in our offices doing our own work independent of industry, we would never be able to. But that's not what we do. We can work in partnership with industry, and we do that routinely. And they trust us, they're thrilled when we show up, and they're eager to work with us.

    AI is a scary idea for some people. They've seen “I, Robot,” or “The Matrix,” or “The Terminator.” What would you say to help them allay these fears?

    I think some of this has been overhyped. At the same time, I think it's important to acknowledge that risks are there, and that they can be pretty high if they're not managed ahead of time. For the foreseeable future, however, these systems are going to be too fragile and too dependent on us to worry about them taking over. I think the biggest revolution is not AI taking over, but AI augmenting human intelligence.

    We're seeing examples of that now, for instance, in the area of face recognition. The algorithms for face recognition have improved at an astonishing rate over the last seven years. We're now at the point where, under controlled circumstances, the best artificial intelligence algorithms perform on par with the best human face recognizers. A fascinating thing we learned recently, and published in a report, is that if you take two trained human face recognizers and put them together, the dual system doesn't perform appreciably better than either one of them alone. If you take two top-performing algorithms, the combination of the two doesn't really perform much better than either one of them alone. But if you put the best algorithm together with a trained recognizer, that system performs substantially better than either one of them alone. So, I think, human augmentation by AI is going to be the revolution.

    What's next?

    I think one of the things that is going to be necessary for us is pulling out the desirable characteristics like usability, interoperability, resilience, security, privacy and all the things that will require a certain amount of care to build into the systems, and get innovators to start incorporating them. Guidance and standards can help to do that.

    Last year, we published our plan for how the federal government should engage in the AI standards development process. I think there's general agreement that guidance will be needed for interoperability, security, reliability, robustness, these characteristics that we want AI systems to exhibit if they're going to be trusted.

    https://www.nist.gov/blogs/taking-measure/trustworthy-ai-conversation-nists-chuck-romine

  • “Innovations for FCAS”: Airbus concludes cooperative pilot phase with startup companies in Germany

    December 17, 2020

    Munich, 09 December 2020 – Airbus has concluded a pilot phase of the “Innovations for FCAS” (I4 FCAS) initiative, which aims at involving German non-traditional defence players, covering startups, small and medium-sized enterprises (SMEs) and research institutes, in the development of the Future Combat Air System (FCAS). The initiative, launched in April 2020, was funded by the German Ministry of Defence.

    “The initiative shows that FCAS does not compare with previous larger defence projects. By implementing young and innovative players, some of whom have never been in touch with the defence sector, we ensure to leverage all competencies available for a game-changing high-tech programme such as FCAS,” said Dirk Hoke, Chief Executive Officer of Airbus Defence and Space. “It will also foster technological spill-overs between the military and civil worlds. It is our ambition to continue the initiative in 2021 and beyond, and make it a cornerstone of our FCAS innovation strategy.”

    During the pilot phase, 18 innovative players worked on 14 projects in different areas, covering the whole range of FCAS elements: combat cloud, connectivity, new generation fighter, remote carriers, system of systems, sensors. Among these 14 projects, Airbus engineers have worked closely with SMEs and startups to achieve concrete results such as:

    · A first flight-test approved launcher of an Unmanned Aerial Vehicle (UAV) from a transport aircraft. This project is the result of a cooperation between Airbus as A400M integrator, Geradts GmbH for the launcher and SFL GmbH from Stuttgart for UAV integration, supported by DLR simulations. An agile design and development approach allowed for rapid prototyping and flight readiness in only six months.

    · A secure combat cloud demonstrator: a first-time transfer of secured operating systems into a cloud environment. Kernkonzept GmbH from Dresden, together with Airbus CyberSecurity, has shown how IT security can be used for the highest security requirements on a governmental cloud system.

    · A demonstrator of applied artificial intelligence for radio-frequency analysis. Hellsicht GmbH from Munich trained their algorithms on Airbus-provided datasets, allowing for a unique capability of real-time fingerprinting of certain emitters, such as radars.

    As Europe's largest defence programme in the coming decades, FCAS aims at pushing innovation and technological boundaries. Its development will bring disruptive technologies such as artificial intelligence, manned-unmanned teaming, combat cloud and cybersecurity to the forefront.

    https://www.airbus.com/newsroom/press-releases/en/2020/12/innovations-for-fcas-airbus-concludes-cooperative-pilot-phase-with-startup-companies-in-germany.html
