
November 2, 2020 | International, Autonomous systems (Drones / E-VTOL)

USAF issues RFI for directed energy C-UAS technologies

by Pat Host

The US Air Force (USAF) is requesting information from industry about directed energy (DE) capabilities for counter-unmanned aerial system (C-UAS) technologies.

The Air Force Life Cycle Management Center, Architecture and Integration Directorate (AFLCMC/XA) seeks to better characterise the technological, manufacturing, and performance capabilities of the industrial base to develop and produce upgrades to DE prototypes and related C-UAS subsystems. The directorate will use this information to inform its trade space analysis of solutions for engagement- and mission-level modelling and simulation (M&S), as well as programme cost estimates for potential future technical maturation of DE C-UAS systems.

The USAF wants to research the industrial base for C-UAS capabilities related to fixed-site Air Base Air Defense (ABAD) against potential Group 1 and 2 UAS threats, which weigh 25 kg or less. These threats may have characteristics such as small size; low radar cross sections; low infrared (IR) or radio frequency (RF) signatures, or no RF signatures at all; the ability to hover; and low-altitude flight capabilities, which may render them difficult to detect and defeat.

Additionally, these UASs are typically either controlled remotely from a ground control station (GCS) or can fly pre-programmed routes. Recent and pending procurements of DE C-UAS weapons require further development and improvement, including of connected and related subsystems such as command-and-control (C2) suites, radar, and electronic warfare (EW).

https://www.janes.com/defence-news/news-detail/usaf-issues-rfi-for-directed-energy-c-uas-technologies

On the same subject

  • JUST IN: New Navy Lab to Accelerate Autonomy, Robotics Programs

    September 9, 2020

    9/8/2020 By Yasmin Tadjdeh Over the past few years, the Navy has been hard at work building a new family of unmanned surface and underwater vehicles through a variety of prototyping efforts. It is now standing up an integration lab to enable the platforms with increased autonomy, officials said Sept. 8. The Rapid Integration Autonomy Lab, or RAIL, is envisioned as a place where the Navy can bring in and test new autonomous capabilities for its robotic vehicles, said Capt. Pete Small, program manager for unmanned maritime systems. “Our Rapid Autonomy Integration Lab concept is really the playground where all the autonomy capabilities and sensors and payloads come together, both to be integrated ... [and] to test them from a cybersecurity perspective and test them from an effectiveness perspective,” Small said during the Association for Unmanned Vehicle Systems International's Unmanned Systems conference, which was held virtually due to the ongoing COVID-19 crisis. Robotics technology is moving at a rapid pace, and platforms will need to have their software and hardware components replaced throughout their lifecycles, he said. In order to facilitate these upgrades, the service will need to integrate the new autonomy software that comes with various payloads and certain autonomy mission capabilities with the existing nuts-and-bolts packages already in the unmanned platforms. “The Rapid Autonomy Integration Lab is where we bring together the platform software, the payload software, the mission software and test them,” he explained. During testing, the service will be able to validate the integration of the software as well as predict the performance of the unmanned vehicles in a way that “we're sure that this is going to work out and give us the capability we want,” Small said. The RAIL concept will rely on modeling-and-simulation technology with software-in-the-loop testing to validate the integration of various autonomous behaviors, sensors and payloads, he said. “We will rely heavily on industry to bring those tools to the RAIL to do the testing that we require,” he noted. However, the lab is not envisioned as a single, brick-and-mortar facility, but rather a network of cloud-based infrastructure and modern software tools. “There will be a certain footprint of the actual software developers who are doing that integration, but we don't see this as a big bricks-and-mortar effort. It's really more of a collaborative effort of a number of people in this space to go make this happen," Small said. The service has kicked off a prototype effort as part of the RAIL initiative where it will take what it calls a “third-party autonomy behavior” that has been developed by the Office of Naval Research and integrate it onto an existing unmanned underwater vehicle that runs on industry-made proprietary software, Small said. Should that go as planned, the Navy plans to apply the concept to numerous programs. For now, the RAIL is a prototyping effort, Small said. “We're still working on developing the budget profile and ... the details behind it,” he said. “We're working on building the programmatic efforts behind it that really are in [fiscal year] '22 and later.” The RAIL is part of a series of “enablers” that will help the sea service get after new unmanned technology, Small said. Others include a concept known as the unmanned maritime autonomy architecture, or UMAA, a common control system and a new data strategy. Cmdr. 
Jeremiah Anderson, deputy program manager for unmanned underwater vehicles, said an upcoming industry day on Sept. 24 that is focused on UMAA will also feature information about the RAIL. “Half of that day's agenda will really be to get into more of the nuts and bolts about the RAIL itself and about that prototyping effort that's happening this year,” he said. “This is very early in the overall trajectory for the RAIL, but I think this will be a good opportunity to kind of get that message out a little bit more broadly to the stakeholders and answer their questions.” Meanwhile, Small noted that the Navy is making strides within its unmanned portfolio, citing a “tremendous amount of progress that we've made across the board with our entire family of UUVs and USVs.” Rear Adm. Casey Moton, program executive officer for unmanned and small combatants, highlighted efforts with the Ghost Fleet Overlord and Sea Hunter platforms, which are unmanned surface vessels. The Navy — working in cooperation with the office of the secretary of defense and the Strategic Capabilities Office — has two Overlord prototypes. Fiscal year 2021, which begins Oct. 1, will be a particularly important period for the platforms, he said. “Our two Overlord vessels have executed a range of autonomous transits and development vignettes,” he said. “We have integrated autonomy software automation systems and perception systems and tested them in increasingly complex increments and vignettes since 2018.” Testing so far has shown the platforms have the ability to perform safe, autonomous navigation in accordance with the Convention on the International Regulations for Preventing Collisions at Sea, or COLREGS, at varying speeds and sea states, he said. “We are pushing the duration of transits increasingly longer, and we will soon be working up to 30 days,” he said. “Multi-day autonomous transits have occurred in low- and high-traffic density environments.” The vessels have already had interactions with commercial fishing fleets, cargo vessels and recreational craft, he said. The longest transit to date includes a round trip from the Gulf Coast to the East Coast where it conducted more than 181 hours and over 3,193 nautical miles of COLREGS-compliant, autonomous operation, Moton added. Both Overlord vessels are slated to conduct extensive testing and experimentation in fiscal year 2021, he said. “These tests will include increasingly long-range transits with more complex autonomous behaviors,” he said. “They will continue to demonstrate automation functions of the machinery control systems, plus health monitoring by a remote supervisory operation center with the expectation of continued USV reliability.” The Sea Hunter will also be undergoing numerous fleet exercises and tactical training events in fiscal year 2021. “With the Sea Hunter and the Overlord USVs we will exercise ... control of multiple USVs, test command-and-control, perform as part of surface action groups and train Navy sailors on these platforms, all while developing and refining the fleet-led concept of operations and concept of employment,” Moton said. https://www.nationaldefensemagazine.org/articles/2020/9/8/navy-testing-new-autonomy-integration-lab

  • Trustworthy AI: A Conversation with NIST's Chuck Romine

    January 21, 2020

    By: Charles Romine Artificial Intelligence (AI) promises to grow the economy and improve our lives, but with these benefits, it also brings new risks that society is grappling with. How can we be sure this new technology is not just innovative and helpful, but also trustworthy, unbiased, and resilient in the face of attack? We sat down with NIST Information Technology Lab Director Chuck Romine to learn how measurement science can help provide answers. How would you define artificial intelligence? How is it different from regular computing? One of the challenges with defining artificial intelligence is that if you put 10 people in a room, you get 11 different definitions. It's a moving target. We haven't converged yet on exactly what the definition is, but I think NIST can play an important role here. What we can't do, and what we never do, is go off in a room and think deep thoughts and say we have the definition. We engage the community. That said, we're using a narrow working definition specifically for the satisfaction of the Executive Order on Maintaining American Leadership in Artificial Intelligence, which makes us responsible for providing guidance to the federal government on how it should engage in the standards arena for AI. We acknowledge that there are multiple definitions out there, but from our perspective, an AI system is one that exhibits reasoning and performs some sort of automated decision-making without the interference of a human. There's a lot of talk at NIST about “trustworthy” AI. What is trustworthy AI? Why do we need AI systems to be trustworthy? AI systems will need to exhibit characteristics like resilience, security and privacy if they're going to be useful and people can adopt them without fear. That's what we mean by trustworthy. Our aim is to help ensure these desirable characteristics. We want systems that are capable of either combating cybersecurity attacks, or, perhaps more importantly, at least recognizing when they are being attacked. We need to protect people's privacy. If systems are going to operate in life-or-death type of environments, whether it's in medicine or transportation, people need to be able to trust AI will make the right decisions and not jeopardize their health or well-being. Resilience is important. An artificial intelligence system needs to be able to fail gracefully. For example, let's say you train an artificial intelligence system to operate in a certain environment. Well, what if the system is taken out of its comfort zone, so to speak? One very real possibility is catastrophic failure. That's clearly not desirable, especially if you have the AI deployed in systems that operate critical infrastructure or our transportation systems. So, if the AI is outside of the boundaries of its nominal operating environment, can it fail in such a way that it doesn't cause a disaster, and can it recover from that in a way that allows it to continue to operate? These are the characteristics that we're looking for in a trustworthy artificial intelligence system. NIST is supposed to be helping industry before they even know they needed us to. What are we thinking about in this area that is beyond the present state of development of AI? Industry has a remarkable ability to innovate and to provide new capabilities that people don't even realize that they need or want. And they're doing that now in the AI consumer space. 
What they don't often do is to combine that push to market with deep thought about how to measure characteristics that are going to be important in the future. And we're talking about, again, privacy, security and resilience ... trustworthiness. Those things are critically important, but many companies that are developing and marketing new AI capabilities and products may not have taken those characteristics into consideration. Ultimately, I think there's a risk of a consumer backlash where people may start saying these things are too easy to compromise and they're betraying too much of my personal information, so get them out of my house. What we can do to help, and the reason that we've prioritized trustworthy AI, is we can provide that foundational work that people in the consumer space need to manage those risks overall. And I think that the drumbeat for that will get increasingly louder as AI systems begin to be marketed for more than entertainment. Especially at the point when they start to operate critical infrastructure, we're going to need a little more assurance. That's where NIST can come together with industry to think about those things, and we've already had some conversations with industry about what trustworthy AI means and how we can get there. I'm often asked, how is it even possible to influence a trillion-dollar, multitrillion-dollar industry on a budget of $150 million? And the answer is, if we were sitting in our offices doing our own work independent of industry, we would never be able to. But that's not what we do. We can work in partnership with industry, and we do that routinely. And they trust us, they're thrilled when we show up, and they're eager to work with us. AI is a scary idea for some people. They've seen “I, Robot,” or “The Matrix,” or “The Terminator.” What would you say to help them allay these fears? I think some of this has been overhyped. At the same time, I think it's important to acknowledge that risks are there, and that they can be pretty high if they're not managed ahead of time. For the foreseeable future, however, these systems are going to be too fragile and too dependent on us to worry about them taking over. I think the biggest revolution is not AI taking over, but AI augmenting human intelligence. We're seeing examples of that now, for instance, in the area of face recognition. The algorithms for face recognition have improved at an astonishing rate over the last seven years. We're now at the point where, under controlled circumstances, the best artificial intelligence algorithms perform on par with the best human face recognizers. A fascinating thing we learned recently, and published in a report, is that if you take two trained human face recognizers and put them together, the dual system doesn't perform appreciably better than either one of them alone. If you take two top-performing algorithms, the combination of the two doesn't really perform much better than either one of them alone. But if you put the best algorithm together with a trained recognizer, that system performs substantially better than either one of them alone. So, I think, human augmentation by AI is going to be the revolution. What's next? I think one of the things that is going to be necessary for us is pulling out the desirable characteristics like usability, interoperability, resilience, security, privacy and all the things that will require a certain amount of care to build into the systems, and get innovators to start incorporating them. 
Guidance and standards can help to do that. Last year, we published our plan for how the federal government should engage in the AI standards development process. I think there's general agreement that guidance will be needed for interoperability, security, reliability, robustness, these characteristics that we want AI systems to exhibit if they're going to be trusted. https://www.nist.gov/blogs/taking-measure/trustworthy-ai-conversation-nists-chuck-romine

  • ‘The math doesn’t make sense’: Why venture capital firms are wary of defense-focused investments

    January 31, 2020

    By: Aaron Mehta WASHINGTON — In America's technology marketplace, venture capital funds are crucial for pumping capital into small companies in need of cash infusions to keep operating. Part of the venture capital model is acknowledging that many of those businesses will fail, but if a few are successful, venture capitalists can make huge returns on their investments. At a time when the Pentagon is working hard to entice small technology companies to work on defense projects, venture capital, or VC, funding could further mature technology and give entrepreneurs a chance to keep projects going. And yet, investors seem wary of putting forth cash to support companies with a defense focus. Why? In the wake of the very public fight inside Google over working with the Pentagon — which ended with the company pulling the plug on its Project Maven participation — there was a consensus from the defense establishment that there may be a culture gap that is simply too large to overcome. But according to a trio of venture capitalists who spoke to Defense News in December, the reasons are simpler. Katherine Boyle, with VC firm General Catalyst, said the culture issue is overblown for the VC community. The reluctance to work on defense programs comes down to a mix of “math and history,” she said. “The math is the reason why investors are hesitant to put a third of their fund into these types of technologies because history shows us that they haven't worked out well,” Boyle explained. She said the math can be broken down into three factors: mergers, margins and interest rates. On the first, she pointed to the fact that the defense sector has seen thousands of firms exit the market, sometimes because of acquisitions by primes. But, she argued, where mergers and acquisitions tend to occur in other parts of the world to acquire new technology or capability, in the defense realm it's all about contracting value. That makes it “very difficult for new technologies to enter the market and ultimately be acquired at the valuations that venture investors would need to see in order to have a return for their fund.” In terms of margins, Boyle pointed out that defense firms are very focused on hardware, which requires a lot of investment upfront. That makes it “very difficult to invest in for venture capital firms because software has 80 percent margins, and it's much easier to build a company that can scale very quickly if it's software-based versus needing a lot of capital,” she said. The third factor, interest rates, ties into the last two. For years, low interest rates have allowed VC firms to expand dramatically — something that requires a constant flow of return from investments in order to turn around funds and quickly invest in another opportunity. In the world of defense, investors with $3 billion to $5 billion under management by the VC community will find it difficult to get the kind of returns investors are accustomed to from other markets. All three of those factors come together in a mix that means there are very few chances for VC firms to invest in defense-related companies that match up with what a VC traditionally wants to see, said John Tenet, a partner with investment firm 8VC and vice chairman of the defense company Epirus. “VC investors invest based on speed and scale and probability of a 10 to 20 times return. And so I think that's where you've seen a little bit of apprehension, at least in [Silicon] Valley,” Tenet said.
“The exits haven't been that fast, and you sort of have these five big players on one side [that] sort of monopolize the market.” From a pure numbers standpoint, a good benchmark for performance is to look at the S&P 500, according to Trae Stephens, co-founder and chairman of Anduril Industries and partner at Founders Fund. Over a 10-year period, an investor in the S&P can expect to get roughly 3 times their investment back. VC firms want to be able to beat that for an investment to be worth it. To highlight the challenge of attracting VC funding to defense firms with potentially limited return, Stephens pointed to the case of Blackbird Technologies. A venture-backed player in specialized communications tech aimed at the defense market, Blackbird was bought in 2014 by Raytheon for about $420 million. That looks good on paper, but the reality is the churn isn't strong enough for a big, Silicon Valley-based venture capital group. “A lot of times in the government, people say: ‘Oh, Blackbird is this, like, great example of a success story that was like a boost for venture.' It's actually not. It's not a venture scale of return for most funds,” he said. “There are some funds where the economics of [an exit that size] is really good, but for large, Silicon Valley tier-one funds, it doesn't move the needle. And so you have to have these multibillion-dollar opportunities in order for it to really make economic sense.” Another issue raised by Stephens will be familiar to defense primes as well: concerns over sharing intellectual property with the Defense Department. The department is essentially saying “you are the right product for us, now turn over your source code,” Stephens said. “It's crazy. We're literally doing to our companies in America what we're criticizing the Chinese for doing to their companies and to our companies when we enter that market. And so there has to be a better commercial practice for enabling companies to retain their IP and do business with the government without having to fight a legal battle every time they go through a contract.” ‘Knock down the doors' Despite those concerns, all three venture capitalists that spoke to Defense News are involved in investments in defense-focused firms. So why are they spending their money in the sector? Mission is part of it — the belief that, as Americans, a stronger Defense Department benefits their firms. But that only goes so far if dollars don't follow. Once again, it comes down to math. Investing in a company focused on defense technologies, which may have to wait years to secure a contract with the Pentagon, isn't a great strategy for a VC firm looking for quick returns. But if a company is able to get government funding early on, the business suddenly becomes more worthy of investment, said Boyle. “If the government is allocating capital in the right way, it will get VC dollars immediately. Like, it will follow so quickly,” Boyle said. “I see so many people come in to our office and they have an OTA [other transaction authority contract], and they're excited. It's a small, $1 million contract, and that is great for a seed company. But if that same company came in 18 months later and said, ‘Oh, by the way, the OTA has turned into a $10 million contract,' that would meet every milestone that I usually see to series A.” (An OTA is a type of contract that enables rapid prototyping; series A financing is the investment that follows growth from initial seed capital used to launch operations.) 
“$10 million to the US government is nothing, but to [a] startup — $10 million is the best startup I've seen all year, if they're an 18-month-old startup and they're getting that kind of capital early on,” she said. Added Stephens: “It means they're doing something right.” That creates a chicken and egg scenario: Venture capitalists only want to invest in companies that already have a Pentagon contract, but small firms often can't keep the doors open long enough without external funding while waiting for the department's contracting processes to progress. While groups such as the Defense Innovation Unit — the Pentagon's technology hub — are helping speed along that process, it remains a problem with no easy solution, at a time when the Pentagon needs the nondefense technology community in ways it hasn't for decades. Boyle believes there is a “growing group” of investors who see the strong success of a handful of companies like goTenna, Anduril or Shield AI that have managed to break through and become successful defense-focused investment vehicles. That means the next few years are going to be critical for everyone involved. “None of us would be here if we weren't optimistic,” she said. “I actually think this is an incredible time to be investing in deep tech, particularly deep-tech companies that are selling to the Department of Defense because if it doesn't happen now, it never will.” https://www.defensenews.com/smr/cultural-clash/2020/01/30/the-math-doesnt-make-sense-why-venture-capital-firms-are-wary-of-defense-focused-investments/
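
The return math the investors sketch in the article above can be made concrete. The short Python calculation below uses only the figures quoted in the piece (a roughly 3x S&P 500 return over ten years, the 10-to-20 times outcomes Tenet describes, and the approximately $420 million Blackbird acquisition); the seven-year holding period, $3 billion fund size, and 20 percent ownership stake are illustrative assumptions, not figures from the article.

# Illustrative venture-return arithmetic. Benchmark multiples come from the
# article above; the holding period, fund size and ownership stake are assumptions.

def annualized_return(multiple: float, years: float) -> float:
    # Convert a total return multiple over a holding period into a compound annual rate.
    return multiple ** (1.0 / years) - 1.0

# S&P 500 benchmark quoted by Stephens: roughly 3x over 10 years.
sp500_cagr = annualized_return(3.0, 10)       # about 11.6% per year

# The 10x to 20x outcomes Tenet says venture investors underwrite to,
# assumed here to be realized over a seven-year hold.
vc_low = annualized_return(10.0, 7)           # about 38.9% per year
vc_high = annualized_return(20.0, 7)          # about 53.4% per year

# Why a Blackbird-sized exit "doesn't move the needle" for a large fund:
# assume a hypothetical $3 billion fund holding a 20% stake at exit.
fund_size = 3_000_000_000
exit_value = 420_000_000
share_of_fund_returned = exit_value * 0.20 / fund_size   # about 2.8%

print(f"S&P benchmark: {sp500_cagr:.1%} annualized")
print(f"Venture target range: {vc_low:.1%} to {vc_high:.1%} annualized")
print(f"Blackbird-sized exit returns {share_of_fund_returned:.1%} of a $3B fund")

Under those assumed numbers, even a nine-figure acquisition returns only a few percent of a multibillion-dollar fund, which is the gap the investors describe between typical defense exits and the returns a large venture fund is built around.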
