July 10, 2020 | International, Additive manufacturing

Additive Technologies For Future UK Air Power Advance

Tony Osborne July 10, 2020

Bidders competing to build a technology demonstrator for the UK's Lightweight Affordable Novel Combat Aircraft (LANCA) initiative are waiting to learn whether their design proposals will be approved for the program's next phase.

Three industry teams were selected last summer (AW&ST July 29-Aug. 18, 2019, p. 18) to take forward development of the LANCA air system, an unmanned air vehicle that could act as an additive capability accompanying future combat aircraft into operations. LANCA would perform a range of tasks, including serving as a loyal wingman, gathering intelligence or acting as a weapons carrier. The Royal Air Force envisions a platform costing one-tenth as much as current combat aircraft and developed in one-fifth of the time. The concept is similar to that of the Kratos XQ-58 Valkyrie for the U.S. Air Force's Skyborg program, Australia's Boeing-led Airpower Teaming System and the remote carriers planned for the French, German and Spanish Future Combat Air System.

Since then, the industry teams—Boeing Phantom Works International, working with Marshall Aerospace and Defence Group and Cranfield University; Team Black Dawn, a consortium of Callen-Lenz, Bombardier Belfast and Northrop Grumman UK; and Team Avenger, led by Blue Bear Systems Research with yet-to-be-disclosed partners—have submitted their proposals for the £4.8 million ($6 million) Phase 1.

The LANCA program is being led by the UK's Defense Science and Technology Laboratory in conjunction with the Royal Air Force's Rapid Capabilities Office.

The UK Defense Ministry is evaluating the proposals in readiness for the second phase of the project, called Mosquito, which is worth £30-50 million. Mosquito would see one, and possibly two, of the candidate designs matured into a full-size flightworthy demonstrator that could undertake a flight-test program. Aviation Week has been told competition for the LANCA program was stiff: Some 45 bidders entered Phase 1, a field subsequently narrowed to nine; around eight bids were then tendered, of which three were chosen.

Few details have emerged about the proposals, although Boeing Australia confirmed via social media that it had secured a “first-of-type permit” from the Australian government to share design material for its Airpower Teaming System with the UK, suggesting the Boeing proposal may borrow heavily from that platform. Progress is also being made on the drone swarm system announced by former Defense Secretary Gavin Williamson in February 2019: Demonstrations in March using five unmanned air vehicles proved a collaborative capability between the platforms, people close to the program told Aviation Week.

https://aviationweek.com/ad-week/additive-technologies-future-uk-air-power-advance

On the same subject

  • USAF Planning Boss Pushes for Flexible Budgets to Keep Up with New Tech - Air Force Magazine

    March 8, 2021

    As the Air Force pieces together its fiscal 2023 budget, due early next year, it must think about a murky future five years down the road.

  • Commercial Interest Grows in Defense Innovation Unit

    April 6, 2021

    The Defense Innovation Unit received nearly 1,000 proposals in response to its solicitations last year, another sign that the Pentagon's outreach to commercial industry is bearing fruit.

    DIU was launched in 2015 by then-Secretary of Defense Ash Carter to bridge the gap between the military and the nation's tech hubs. It is headquartered in Mountain View, California, in Silicon Valley, with additional outposts in Austin, Texas, Boston and the Pentagon.

    “DIU's mission to strengthen U.S. national security by increasing the military's adoption of commercial technology and to grow the national security innovation base is critical not only to maintaining a strategic advantage over our adversaries but also to the strength of our economy,” the organization said in its recently released 2020 annual report. Over the past five years, the unit has leveraged more than $11 billion in private investment, the document noted. “The startups, established companies, venture capital firms, investors and traditional defense contractors that DIU works with to deliver the best commercial technology to the Department of Defense are ... fundamental sources of dual-use technologies,” it said.

    In 2020, DIU initiated 23 new projects, a 35 percent year-over-year increase. It received a total of 944 commercial proposals and increased the average number of proposals per solicitation by 52 percent compared with 2019. Fifty-six other transaction agreements for prototyping were awarded to companies, the majority of which were small businesses or nontraditional firms. A total of $108 million in prototype funding was obligated. Between June 2016 and December 2020, DIU facilitated more than $640 million in prototype funding, according to the report.

    Notably, the unit in 2020 facilitated the transition of 11 successful commercial prototypes to its Defense Department partners for large-volume procurement, an increase of 22 percent over the previous year. About 43 percent of DIU's projects to date have yielded at least one prototype that has transitioned to production, according to the report. Fifty-one ongoing projects have prototypes that will be eligible for transition to production if successfully completed.

    “What began in 2015 as an experiment to lead Department of Defense outreach to commercial innovators has become a gateway for business between leading-edge companies and the U.S. military,” the report said. DIU's main technology focus areas have been artificial intelligence, autonomy, cyber, human systems and space. In October, it added advanced energy and materials to its portfolio. “We look forward to providing even more high-impact solutions that will bolster our military's strategic, operational and tactical advantage,” the organization said.

  • Trustworthy AI: A Conversation with NIST's Chuck Romine

    January 21, 2020

    By: Charles Romine

    Artificial Intelligence (AI) promises to grow the economy and improve our lives, but with these benefits, it also brings new risks that society is grappling with. How can we be sure this new technology is not just innovative and helpful, but also trustworthy, unbiased, and resilient in the face of attack? We sat down with NIST Information Technology Lab Director Chuck Romine to learn how measurement science can help provide answers.

    How would you define artificial intelligence? How is it different from regular computing?

    One of the challenges with defining artificial intelligence is that if you put 10 people in a room, you get 11 different definitions. It's a moving target. We haven't converged yet on exactly what the definition is, but I think NIST can play an important role here. What we can't do, and what we never do, is go off in a room and think deep thoughts and say we have the definition. We engage the community. That said, we're using a narrow working definition specifically for the satisfaction of the Executive Order on Maintaining American Leadership in Artificial Intelligence, which makes us responsible for providing guidance to the federal government on how it should engage in the standards arena for AI. We acknowledge that there are multiple definitions out there, but from our perspective, an AI system is one that exhibits reasoning and performs some sort of automated decision-making without the interference of a human.

    There's a lot of talk at NIST about “trustworthy” AI. What is trustworthy AI? Why do we need AI systems to be trustworthy?

    AI systems will need to exhibit characteristics like resilience, security and privacy if they're going to be useful and people can adopt them without fear. That's what we mean by trustworthy. Our aim is to help ensure these desirable characteristics. We want systems that are capable of either combating cybersecurity attacks, or, perhaps more importantly, at least recognizing when they are being attacked. We need to protect people's privacy. If systems are going to operate in life-or-death type of environments, whether it's in medicine or transportation, people need to be able to trust AI will make the right decisions and not jeopardize their health or well-being. Resilience is important. An artificial intelligence system needs to be able to fail gracefully. For example, let's say you train an artificial intelligence system to operate in a certain environment. Well, what if the system is taken out of its comfort zone, so to speak? One very real possibility is catastrophic failure. That's clearly not desirable, especially if you have the AI deployed in systems that operate critical infrastructure or our transportation systems. So, if the AI is outside of the boundaries of its nominal operating environment, can it fail in such a way that it doesn't cause a disaster, and can it recover from that in a way that allows it to continue to operate? These are the characteristics that we're looking for in a trustworthy artificial intelligence system.
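    Romine's “fail gracefully” point lends itself to a small illustration. The sketch below is not NIST guidance or any particular system's design; the envelope check, tolerance value and safe fallback are all assumptions chosen for clarity. It shows one simple way a wrapper can refuse to act on inputs far outside a model's nominal operating environment instead of failing catastrophically:

```python
import numpy as np

# Illustrative only: wrap a model so that inputs far outside the range seen
# in training trigger a safe fallback rather than a confident prediction.
class GracefulModel:
    def __init__(self, model, train_inputs, safe_default, tolerance=0.1):
        self.model = model
        # An assumed, deliberately simple notion of the "nominal operating
        # environment": the per-feature envelope of the training data.
        self.low = train_inputs.min(axis=0)
        self.high = train_inputs.max(axis=0)
        self.safe_default = safe_default
        self.tolerance = tolerance

    def in_envelope(self, x):
        margin = self.tolerance * (self.high - self.low)
        return bool(np.all(x >= self.low - margin) and
                    np.all(x <= self.high + margin))

    def predict(self, x):
        if not self.in_envelope(x):
            # Fail gracefully: flag the input and return the safe action.
            return self.safe_default, "out of envelope: using fallback"
        return self.model(x), "ok"

# Hypothetical usage with a stand-in "model" (a plain function).
train = np.array([[0.0, 1.0], [0.5, 2.0], [1.0, 3.0]])
guarded = GracefulModel(lambda x: float(x.sum()), train, safe_default=0.0)
print(guarded.predict(np.array([0.7, 2.5])))   # inside envelope -> (3.2, 'ok')
print(guarded.predict(np.array([9.0, -4.0])))  # far outside -> safe default
```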
    NIST is supposed to be helping industry before they even know they needed us to. What are we thinking about in this area that is beyond the present state of development of AI?

    Industry has a remarkable ability to innovate and to provide new capabilities that people don't even realize that they need or want. And they're doing that now in the AI consumer space. What they don't often do is to combine that push to market with deep thought about how to measure characteristics that are going to be important in the future. And we're talking about, again, privacy, security and resilience ... trustworthiness. Those things are critically important, but many companies that are developing and marketing new AI capabilities and products may not have taken those characteristics into consideration. Ultimately, I think there's a risk of a consumer backlash where people may start saying these things are too easy to compromise and they're betraying too much of my personal information, so get them out of my house.

    What we can do to help, and the reason that we've prioritized trustworthy AI, is we can provide that foundational work that people in the consumer space need to manage those risks overall. And I think that the drumbeat for that will get increasingly louder as AI systems begin to be marketed for more than entertainment. Especially at the point when they start to operate critical infrastructure, we're going to need a little more assurance. That's where NIST can come together with industry to think about those things, and we've already had some conversations with industry about what trustworthy AI means and how we can get there. I'm often asked, how is it even possible to influence a trillion-dollar, multitrillion-dollar industry on a budget of $150 million? And the answer is, if we were sitting in our offices doing our own work independent of industry, we would never be able to. But that's not what we do. We can work in partnership with industry, and we do that routinely. And they trust us, they're thrilled when we show up, and they're eager to work with us.

    AI is a scary idea for some people. They've seen “I, Robot,” or “The Matrix,” or “The Terminator.” What would you say to help them allay these fears?

    I think some of this has been overhyped. At the same time, I think it's important to acknowledge that risks are there, and that they can be pretty high if they're not managed ahead of time. For the foreseeable future, however, these systems are going to be too fragile and too dependent on us to worry about them taking over. I think the biggest revolution is not AI taking over, but AI augmenting human intelligence. We're seeing examples of that now, for instance, in the area of face recognition. The algorithms for face recognition have improved at an astonishing rate over the last seven years. We're now at the point where, under controlled circumstances, the best artificial intelligence algorithms perform on par with the best human face recognizers. A fascinating thing we learned recently, and published in a report, is that if you take two trained human face recognizers and put them together, the dual system doesn't perform appreciably better than either one of them alone. If you take two top-performing algorithms, the combination of the two doesn't really perform much better than either one of them alone. But if you put the best algorithm together with a trained recognizer, that system performs substantially better than either one of them alone. So, I think, human augmentation by AI is going to be the revolution.
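    The human-plus-algorithm result Romine describes is, in effect, score-level fusion of two recognizers. The report he mentions doesn't prescribe a fusion rule, so the sketch below is a hypothetical illustration only: the match scores, equal weighting and decision threshold are all invented for the example.

```python
import numpy as np

# Hypothetical match scores (0 = no match, 1 = certain match) for the
# same five face-pair trials; values are invented for illustration.
algorithm_scores = np.array([0.91, 0.40, 0.75, 0.22, 0.68])
human_scores = np.array([0.85, 0.35, 0.52, 0.30, 0.80])

def fuse_scores(a, b, weight=0.5):
    """Weighted score-level fusion of two recognizers' match scores."""
    return weight * a + (1.0 - weight) * b

fused = fuse_scores(algorithm_scores, human_scores)

# Declare a match when the fused score clears an (assumed) threshold.
THRESHOLD = 0.6
print(fused)               # [0.88  0.375 0.635 0.26  0.74 ]
print(fused >= THRESHOLD)  # [ True False  True False  True]
```

    A plausible intuition for the finding is that fusion pays off when the two recognizers make different kinds of errors, which is more likely for a human paired with an algorithm than for two similar algorithms or two humans.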
    What's next?

    I think one of the things that is going to be necessary for us is pulling out the desirable characteristics like usability, interoperability, resilience, security, privacy and all the things that will require a certain amount of care to build into the systems, and get innovators to start incorporating them. Guidance and standards can help to do that. Last year, we published our plan for how the federal government should engage in the AI standards development process. I think there's general agreement that guidance will be needed for interoperability, security, reliability, robustness, these characteristics that we want AI systems to exhibit if they're going to be trusted.

    https://www.nist.gov/blogs/taking-measure/trustworthy-ai-conversation-nists-chuck-romine
