February 26, 2020 | International, Clean technologies, Big data and Artificial Intelligence, Advanced manufacturing 4.0, Autonomous systems (Drones / E-VTOL), Virtual design and testing, Additive manufacturing

Air Force To Pump New Tech Startups With $10M Awards

The Air Force's new investment strategy is designed to "catalyze the commercial market by bringing our military market to bear," says Roper.

By THERESA HITCHENS

PENTAGON: The Air Force will roll out the final stage in its commercial startup investment strategy during the March 13-20 South By Southwest music festival, granting one or more contracts worth at least $10 million to startups with game-changing technologies, service acquisition chief Will Roper says.

The first-of-its-kind event in Austin, called the Air Force Pitch Bowl, will match Air Force investment with private venture capital funds at a one-to-two ratio, according to a presentation by Capt. Chris Benson of AFWERX at the Strategic Institute's Dec. 4-5 “AcquisitionX” meeting. So, if the Air Force investment fund, called Air Force Ventures, puts in $20 million, the private capital match would be $40 million.
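For illustration, the arithmetic of that one-to-two match is sketched below, based only on the figures quoted above; the function name and default ratio are assumptions, not an official Air Force Ventures formula.

```python
# Illustrative arithmetic for the one-to-two match described above;
# the function name and ratio default are assumptions, not an official
# Air Force Ventures funding formula.

def private_match(air_force_dollars: float, ratio: float = 2.0) -> float:
    """Private venture capital expected alongside an Air Force investment."""
    return air_force_dollars * ratio

print(private_match(20_000_000))  # 40000000.0 -- the $20M/$40M example above
```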

AFWERX, the Air Force's innovation unit, has one of its hubs in Austin.

“This has been a year in the making now, trying to make our investment arm, the Air Force Ventures, act like an investor, even if it's a government entity,” Roper explained. “We don't invest like a private investor — we don't own equity — we're just putting companies on contract. But for early stage companies, that contract acts a lot like an investor.”

The goal is to help steer private resources toward new technologies that benefit both US consumers and national security, so the US can stay ahead of China's rapid tech growth, Roper told reporters here Friday.

The Air Force wants to “catalyze the commercial market by bringing our military market to bear,” he said. “We're going to be part of the global tech ecosystem.”

Figuring out how to harness the commercial marketplace is critical, Roper explained, because DoD dollars make up a dwindling percentage of the capital investment in US research and development. This is despite DoD's 2021 budget request for research, development, test and evaluation (RDT&E) of $106.6 billion being “the largest in its history,” according to Pentagon budget rollout materials. The Air Force's share is set at $37.3 billion, $10.3 billion of which is slated for Space Force programs.

“We are 20 percent of the R&D in this country — that's where the military is today,” Roper said. “So if we don't start thinking of ourselves as part of a global ecosystem, looking to influence trends, investing in technologies that could be dual-use — well, 20 percent is not going to compete with China long-term, with a nationalized industrial base that can pick national winners.”

The process for interested startups to compete for funds has three steps, Roper explained, beginning with the Air Force “placing a thousand $50K bets per year that are open.” That is, any company can pitch its ideas to the service at large, rather than to a specific program office. “We'll get you in the door,” Roper said, “we'll provide the accelerator functions that connect you with a customer.

“Pitch days” are the second step, he said. Companies chosen to be groomed in the first round make a rapid-fire sales pitch to potential Air Force entities — such as Space and Missile Systems Center and Air Force Research Laboratory — that can provide funding, as well as to venture capitalists partnering with the Air Force.

As Breaking D broke in October, part of the new acquisition strategy is luring in private capital firms and individual investors to match Air Force funding in commercial startups as a way to bridge the ‘valley of death’ and rapidly scale up capability.

The service has been experimenting with ‘pitch days’ across the country over the last year, such as the Space Pitch Days held in San Francisco in November, when the service handed out $22.5 million to 30 companies over two days. Roper said he intends to make “maybe 300 of those awards per year,” with the research contracts ranging from $1 million to $3 million apiece and “where program dollars get matched by our investment dollars.”

The final piece of the strategy, Roper explained, is picking out the startups that can successfully field game-changing technologies.

“The thing that we're working on now is the big bets, the 30 to 40 big ideas, disruptive ideas that can change our mission and hopefully change the world,” Roper said. “We're looking for those types of companies.”

The Air Force on Oct. 16 issued its first call for firms to compete for these larger SBIR contracts under a new type of solicitation, called a “commercial solutions opening.” The call went to companies already holding Phase II Small Business Innovation Research (SBIR) awards. The winners will be announced in Austin.

If the strategy is successful, Roper said, the chosen firms will thrive and become profitable dual-use firms focused primarily on the commercial market.

“Then, we're starting to build a different kind of industry base,” Roper enthused. “So, we've gotta get the big bets right. Then most importantly, if you succeed in one of the big bets, then we need to put you on contract on the other side, or else the whole thing is bunk.”

https://breakingdefense.com/2020/02/air-force-to-pump-new-tech-startups-with-10m-awards

On the same subject

  • Trustworthy AI: A Conversation with NIST's Chuck Romine

    January 21, 2020

    By: Charles Romine

    Artificial Intelligence (AI) promises to grow the economy and improve our lives, but with these benefits, it also brings new risks that society is grappling with. How can we be sure this new technology is not just innovative and helpful, but also trustworthy, unbiased, and resilient in the face of attack? We sat down with NIST Information Technology Lab Director Chuck Romine to learn how measurement science can help provide answers.

    How would you define artificial intelligence? How is it different from regular computing?

    One of the challenges with defining artificial intelligence is that if you put 10 people in a room, you get 11 different definitions. It's a moving target. We haven't converged yet on exactly what the definition is, but I think NIST can play an important role here. What we can't do, and what we never do, is go off in a room and think deep thoughts and say we have the definition. We engage the community.

    That said, we're using a narrow working definition specifically for the satisfaction of the Executive Order on Maintaining American Leadership in Artificial Intelligence, which makes us responsible for providing guidance to the federal government on how it should engage in the standards arena for AI. We acknowledge that there are multiple definitions out there, but from our perspective, an AI system is one that exhibits reasoning and performs some sort of automated decision-making without the interference of a human.

    There's a lot of talk at NIST about "trustworthy" AI. What is trustworthy AI? Why do we need AI systems to be trustworthy?

    AI systems will need to exhibit characteristics like resilience, security and privacy if they're going to be useful and people can adopt them without fear. That's what we mean by trustworthy. Our aim is to help ensure these desirable characteristics. We want systems that are capable of either combating cybersecurity attacks, or, perhaps more importantly, at least recognizing when they are being attacked. We need to protect people's privacy. If systems are going to operate in life-or-death type of environments, whether it's in medicine or transportation, people need to be able to trust AI will make the right decisions and not jeopardize their health or well-being.

    Resilience is important. An artificial intelligence system needs to be able to fail gracefully. For example, let's say you train an artificial intelligence system to operate in a certain environment. Well, what if the system is taken out of its comfort zone, so to speak? One very real possibility is catastrophic failure. That's clearly not desirable, especially if you have the AI deployed in systems that operate critical infrastructure or our transportation systems. So, if the AI is outside of the boundaries of its nominal operating environment, can it fail in such a way that it doesn't cause a disaster, and can it recover from that in a way that allows it to continue to operate? These are the characteristics that we're looking for in a trustworthy artificial intelligence system.

    NIST is supposed to be helping industry before they even know they needed us to. What are we thinking about in this area that is beyond the present state of development of AI?

    Industry has a remarkable ability to innovate and to provide new capabilities that people don't even realize that they need or want. And they're doing that now in the AI consumer space. What they don't often do is to combine that push to market with deep thought about how to measure characteristics that are going to be important in the future. And we're talking about, again, privacy, security and resilience ... trustworthiness. Those things are critically important, but many companies that are developing and marketing new AI capabilities and products may not have taken those characteristics into consideration. Ultimately, I think there's a risk of a consumer backlash where people may start saying these things are too easy to compromise and they're betraying too much of my personal information, so get them out of my house.

    What we can do to help, and the reason that we've prioritized trustworthy AI, is we can provide that foundational work that people in the consumer space need to manage those risks overall. And I think that the drumbeat for that will get increasingly louder as AI systems begin to be marketed for more than entertainment. Especially at the point when they start to operate critical infrastructure, we're going to need a little more assurance. That's where NIST can come together with industry to think about those things, and we've already had some conversations with industry about what trustworthy AI means and how we can get there.

    I'm often asked, how is it even possible to influence a trillion-dollar, multitrillion-dollar industry on a budget of $150 million? And the answer is, if we were sitting in our offices doing our own work independent of industry, we would never be able to. But that's not what we do. We can work in partnership with industry, and we do that routinely. And they trust us, they're thrilled when we show up, and they're eager to work with us.

    AI is a scary idea for some people. They've seen "I, Robot," or "The Matrix," or "The Terminator." What would you say to help them allay these fears?

    I think some of this has been overhyped. At the same time, I think it's important to acknowledge that risks are there, and that they can be pretty high if they're not managed ahead of time. For the foreseeable future, however, these systems are going to be too fragile and too dependent on us to worry about them taking over. I think the biggest revolution is not AI taking over, but AI augmenting human intelligence.

    We're seeing examples of that now, for instance, in the area of face recognition. The algorithms for face recognition have improved at an astonishing rate over the last seven years. We're now at the point where, under controlled circumstances, the best artificial intelligence algorithms perform on par with the best human face recognizers. A fascinating thing we learned recently, and published in a report, is that if you take two trained human face recognizers and put them together, the dual system doesn't perform appreciably better than either one of them alone. If you take two top-performing algorithms, the combination of the two doesn't really perform much better than either one of them alone. But if you put the best algorithm together with a trained recognizer, that system performs substantially better than either one of them alone. So, I think, human augmentation by AI is going to be the revolution.

    What's next?

    I think one of the things that is going to be necessary for us is pulling out the desirable characteristics like usability, interoperability, resilience, security, privacy and all the things that will require a certain amount of care to build into the systems, and get innovators to start incorporating them. Guidance and standards can help to do that. Last year, we published our plan for how the federal government should engage in the AI standards development process. I think there's general agreement that guidance will be needed for interoperability, security, reliability, robustness, these characteristics that we want AI systems to exhibit if they're going to be trusted.

    https://www.nist.gov/blogs/taking-measure/trustworthy-ai-conversation-nists-chuck-romine
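The human-plus-algorithm result Romine describes above is essentially score-level fusion. Here is a minimal sketch assuming a simple weighted average of made-up similarity scores; none of this is NIST's actual evaluation methodology.

```python
# Toy illustration of score-level fusion between a face-recognition
# algorithm and a trained human examiner. All scores and weights are
# hypothetical; this is not NIST's evaluation methodology.

def fuse(algorithm_score: float, examiner_score: float, weight: float = 0.5) -> float:
    """Weighted average of two similarity scores, each in [0, 1]."""
    return weight * algorithm_score + (1.0 - weight) * examiner_score

# A hypothetical comparison where the two judges disagree: fusing can
# beat either judge alone when their errors are uncorrelated.
print(fuse(algorithm_score=0.91, examiner_score=0.70))  # 0.805
```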

  • New NATO Innovation Hub challenge: Trust in autonomous systems

    October 22, 2020

    Hello,

    Similar to IDEaS, the NATO Innovation Hub is a community where experts from around the world collaborate to tackle NATO challenges and design solutions. The Hub has recently launched a challenge seeking innovative solutions that address how trust in autonomous systems can be established and strengthened. Solutions can include any combination of methodologies, concepts, techniques and technologies. These challenges are open to all, including the Canadian innovation community.

    Submit your solutions by November 17, 2020: https://www.innovationhub-act.org/challenge-intro

    If you have questions, contact the NATO Innovation Hub by email: contact@InnovationHub-act.org

    Thank you,
    The IDEaS Team

  • Speech Recognition and AI Help Take the Pressure off Aircrew

    October 14, 2020

    Air accidents have decreased in recent years, but when they do occur, the crew's workload is usually at its highest level. Therefore, augmenting crew performance during high-workload periods is of great importance and can help maintain flight safety.

    Aircrew workloads peak when faced with a combination of unpredictable situations: meteorological conditions; high-density traffic; system failures; and flight operations like take-off, climb, descent, approach and landing. The amount of information and number of actions that need to be processed by the crew may become unmanageable, affecting flight safety.

    The EU-funded VOICI project addressed this threat by developing an intelligent 'natural crew assistant' for the cockpit environment. The system comprises three main technologies: sound recording, speech recognition and artificial intelligence. This includes a cockpit-embedded speech-processing system that understands aviation terminology, as well as an array of low-noise optical microphones and optimised array processing. The VOICI system also features a new and more efficient speech synthesis, adapted to aviation terminology and noise levels.

    Assessed under realistic conditions

    Project partners aimed to provide a proof-of-concept demonstrator capable of listening to all communications in the cockpit, both between crew members and between crew and air traffic control. "The VOICI system should recognise and interpret speech content, interact with the crew, and fulfil crew requests to simplify crew tasks and reduce cognitive workload," outlines project coordinator Tor Arne Reinen.

    Researchers also developed a realistic audio evaluation environment for technology experiments. This facilitated the development of the crew assistant and enabled evaluation of its performance, including the speech capture and recognition technologies for use in a noisy cockpit, together with the intelligent dialogue system with automatic speech synthesis as its main output. The audio testing environment involved a 3D physical model of a Falcon 2000S cockpit, including loudspeaker reproduction of noise recordings from a real flight. "We have demonstrated that the crew assistant is feasible under the very high noise levels of an aviation cockpit," Reinen explains.

    Multiple benefits

    Speech capture is achieved through both the pilot's headset and an ambient microphone array. Speech recognition using deep neural networks and the dialogue system were developed explicitly for the cockpit environment and include aviation terminology and robustness to high levels of background noise. The systems function independently of cloud-based systems and employ dedicated language models for the cockpit scenario.

    According to Reinen, all the algorithms underlying the dialogue system have been implemented and tested: from the Natural Language Understanding unit, which interprets natural requests, to the Dialogue Core, which handles the conversation flow. "Particular emphasis has been placed on the ability of the voice assistant to use contextual data," he notes.

    By reducing crew workload, VOICI will contribute to optimisation of operations, flight safety and crew awareness; better maintenance; reduced cost of operations; and generally higher efficiency and lower stress. "VOICI comprises both small and medium-sized enterprises (SMEs) and research institutes, and cooperation within the consortium will contribute to innovation and job creation," Reinen points out.

    https://www.onartificialintelligence.com/articles/21880/speech-recognition-and-ai-help-take-the-pressure-off-aircrew?rsst2id=193
