January 31, 2020 | International, Clean technologies, Big data and Artificial Intelligence, Advanced manufacturing 4.0, Autonomous systems (Drones / E-VTOL), Virtual design and testing, Additive manufacturing

Tech startups still face the Pentagon’s ‘valley of death’

By: Joe Gould

WASHINGTON ― Brooklyn-based technology startup goTenna launched in 2014 with a candy bar-sized gadget that pairs with smartphones to create off-grid, no-network communications.

Though it was originally a commercial product, the company has received millions of dollars' worth of government business since 2015, mostly with the U.S. Department of Homeland Security but also with Special Operations Command, the Air Force, the Navy and the Army. About 150,000 devices have shipped.

The Army has spent millions of dollars with goTenna, but the service cannot give the company one of the most important things for a small business ― the certainty of recurring revenue.

“Now the funding is out, and even the program officer for that program doesn't know where we go next within the Army,” goTenna founder and CEO Daniela Perdomo said at a Defense News-hosted roundtable in December. “That's in part why we've been spending more time, frankly, on civilian public safety. Because even though DHS is consistently [under funding restrictions], they seem to be moving. They seem to move faster.”

That sort of inconsistency and confusion is why tech startups dealing with the Pentagon, as well as investors, so dread the gap between their innovative product's development and the Pentagon's sluggish decisions to launch. That gap has a nickname: “the valley of death.”

The Pentagon has experimented with a variety of means to buy emerging technologies, an important goal as it seeks to preserve its edge against Russia and China. But one truism ― affirmed in a recent report from the Ronald Reagan Institute ― is that the federal government has been unable to fully adapt its practices to promote and harness private sector innovation, despite making strides.

Addressing the House Armed Services Committee on Jan. 15, former Under Secretary of Defense for Policy Michele Flournoy said the valley of death between a product's development and the moment that product becomes part of a program of record remains an obstacle. She said that's partly because acquisitions officials don't use the new authorities granted by Congress over recent years.

The excitement of receiving development money from the Department of Defense stands in stark contrast to what often follows.

“[Startups] win the prototype competition: ‘Great, we love you.' And that's in, like, FY19,” Flournoy said. “And then they are told, ‘OK, we are going to have [a request for proposals] for you in '21,' and [the startups] are like: ‘OK, but what do I do in '20? I have got a two-year hole in my business plan, and my investors are pressuring me to drop the work on DoD because it's too slow, it's too small dollars.'”

How would Flournoy fix it? She advised the Pentagon to hire tech talent ― “smart buyers and developers and fielders of new technologies” ― and create a bridge fund for firms in competitive areas like artificial intelligence, cybersecurity and quantum computing. (The idea seemed to resonate with Texas Rep. Mac Thornberry, who is the panel's top Republican and the author of multiple acquisitions reform laws passed in recent years.)

At the Defense News roundtable, leaders from the tech community said not only has it been difficult for small businesses to enter an aerospace and defense market dominated by five major firms, but it's hard for startups to justify to investors that the government should be retained as a client when it is often the least decisive.

“I think the fundamental misunderstanding between the DoD and venture investors is just how difficult it is to keep the wheels on a fast-growing startup,” said Katherine Boyle of venture capital firm General Catalyst.

But the Pentagon is working to bridge the gap between prototype and production. Over the last year, the Defense Innovation Unit ― the department's outpost in Austin, Texas; Boston, Massachusetts; and California's Silicon Valley ― has launched two internal teams, for defense and commercial engagement, to envision these transitions and match them to the Pentagon's five-year budgeting process, according to DIU's director of strategic engagement, Mike Madsen.

These teams are tasked with learning the needs of the services, working with commercial industry to develop prototypes to meet those needs and then helping market the prototypes more broadly within the Defense Department. Along these lines, DIU helped a company that developed a predictive maintenance application for the Air Force ― Redwood City, California-based C3.ai ― win a predictive maintenance contract for Army ground vehicles. C3.ai has since created a federal arm.

A quarter of all prototypes awarded by DIU transitioned to programs of record, and another 50 percent are eligible for the transition. “It will take time for us to develop the right cultural instincts, but it's already happening,” said DIU's director of commercial engagement, Tom Foldesi.

Anduril Industries co-founder and Founders Fund partner Trae Stephens has often criticized the DoD's approach to Silicon Valley. But speaking at the Defense News panel, he acknowledged progress through DIU's ability to harness the flexible other transaction authority, a congressionally authorized contracting mechanism that makes it easier to prototype capabilities. He also praised the Air Force's effort to rework Small Business Innovation Research funds to target more mature technologies.

“I don't know who's responsible for banging the table about it over and over, but somebody is out there saying it,” Stephens said. “It seems to be coming across in the messaging in some way.”

https://www.defensenews.com/2020/01/30/tech-startups-still-face-the-pentagons-valley-of-death/

On the same subject

  • USAF issues RFI for directed energy C-UAS technologies

    November 2, 2020

    by Pat Host

    The US Air Force (USAF) is requesting information from industry about directed energy (DE) capabilities for counter-unmanned aerial system (C-UAS) technologies. The Air Force Life Cycle Management Center, Architecture and Integration Directorate (AFLCMC/XA) seeks to better characterise the technological, manufacturing, and performance capabilities of the industrial base to develop and produce upgrades to DE prototypes and related C-UAS subsystems. The directorate will use this information to inform its trade space analysis of solutions for engagement and mission level modelling and simulation (M&S), as well as programme cost estimates for potential future technical maturation of DE C-UAS systems.

    The USAF wants to research the industrial base for C-UAS capabilities related to fixed-site Air Base Air Defense (ABAD) against potential Group 1 and 2 UAS threats, which weigh 25 kg or less. These threats may have characteristics such as small size; low radar cross sections; low infrared (IR) or radio frequency (RF) signatures, or no RF signatures at all; the ability to hover; and low-altitude flight capabilities, which may render them difficult to detect and defeat. Additionally, these UASs are typically either controlled remotely from a ground control station (GCS) or can fly pre-programmed routes.

    Recent and pending procurements of DE C-UAS weapons require even further development and improvement, including of connected and related subsystems such as, but not limited to, command-and-control (C2) suites, radar, and electronic warfare (EW).

    https://www.janes.com/defence-news/news-detail/usaf-issues-rfi-for-directed-energy-c-uas-technologies

  • Trustworthy AI: A Conversation with NIST's Chuck Romine

    January 21, 2020

    By: Charles Romine

    Artificial Intelligence (AI) promises to grow the economy and improve our lives, but with these benefits, it also brings new risks that society is grappling with. How can we be sure this new technology is not just innovative and helpful, but also trustworthy, unbiased, and resilient in the face of attack? We sat down with NIST Information Technology Lab Director Chuck Romine to learn how measurement science can help provide answers.

    How would you define artificial intelligence? How is it different from regular computing?

    One of the challenges with defining artificial intelligence is that if you put 10 people in a room, you get 11 different definitions. It's a moving target. We haven't converged yet on exactly what the definition is, but I think NIST can play an important role here. What we can't do, and what we never do, is go off in a room and think deep thoughts and say we have the definition. We engage the community.

    That said, we're using a narrow working definition specifically for the satisfaction of the Executive Order on Maintaining American Leadership in Artificial Intelligence, which makes us responsible for providing guidance to the federal government on how it should engage in the standards arena for AI. We acknowledge that there are multiple definitions out there, but from our perspective, an AI system is one that exhibits reasoning and performs some sort of automated decision-making without the interference of a human.

    There's a lot of talk at NIST about “trustworthy” AI. What is trustworthy AI? Why do we need AI systems to be trustworthy?

    AI systems will need to exhibit characteristics like resilience, security and privacy if they're going to be useful and people can adopt them without fear. That's what we mean by trustworthy. Our aim is to help ensure these desirable characteristics. We want systems that are capable of either combating cybersecurity attacks, or, perhaps more importantly, at least recognizing when they are being attacked. We need to protect people's privacy. If systems are going to operate in life-or-death type of environments, whether it's in medicine or transportation, people need to be able to trust AI will make the right decisions and not jeopardize their health or well-being.

    Resilience is important. An artificial intelligence system needs to be able to fail gracefully. For example, let's say you train an artificial intelligence system to operate in a certain environment. Well, what if the system is taken out of its comfort zone, so to speak? One very real possibility is catastrophic failure. That's clearly not desirable, especially if you have the AI deployed in systems that operate critical infrastructure or our transportation systems. So, if the AI is outside of the boundaries of its nominal operating environment, can it fail in such a way that it doesn't cause a disaster, and can it recover from that in a way that allows it to continue to operate? These are the characteristics that we're looking for in a trustworthy artificial intelligence system.

    NIST is supposed to be helping industry before they even know they needed us to. What are we thinking about in this area that is beyond the present state of development of AI?

    Industry has a remarkable ability to innovate and to provide new capabilities that people don't even realize that they need or want. And they're doing that now in the AI consumer space.
    What they don't often do is to combine that push to market with deep thought about how to measure characteristics that are going to be important in the future. And we're talking about, again, privacy, security and resilience ... trustworthiness. Those things are critically important, but many companies that are developing and marketing new AI capabilities and products may not have taken those characteristics into consideration. Ultimately, I think there's a risk of a consumer backlash where people may start saying these things are too easy to compromise and they're betraying too much of my personal information, so get them out of my house.

    What we can do to help, and the reason that we've prioritized trustworthy AI, is we can provide that foundational work that people in the consumer space need to manage those risks overall. And I think that the drumbeat for that will get increasingly louder as AI systems begin to be marketed for more than entertainment. Especially at the point when they start to operate critical infrastructure, we're going to need a little more assurance. That's where NIST can come together with industry to think about those things, and we've already had some conversations with industry about what trustworthy AI means and how we can get there.

    I'm often asked, how is it even possible to influence a trillion-dollar, multitrillion-dollar industry on a budget of $150 million? And the answer is, if we were sitting in our offices doing our own work independent of industry, we would never be able to. But that's not what we do. We can work in partnership with industry, and we do that routinely. And they trust us, they're thrilled when we show up, and they're eager to work with us.

    AI is a scary idea for some people. They've seen “I, Robot,” or “The Matrix,” or “The Terminator.” What would you say to help them allay these fears?

    I think some of this has been overhyped. At the same time, I think it's important to acknowledge that risks are there, and that they can be pretty high if they're not managed ahead of time. For the foreseeable future, however, these systems are going to be too fragile and too dependent on us to worry about them taking over.

    I think the biggest revolution is not AI taking over, but AI augmenting human intelligence. We're seeing examples of that now, for instance, in the area of face recognition. The algorithms for face recognition have improved at an astonishing rate over the last seven years. We're now at the point where, under controlled circumstances, the best artificial intelligence algorithms perform on par with the best human face recognizers. A fascinating thing we learned recently, and published in a report, is that if you take two trained human face recognizers and put them together, the dual system doesn't perform appreciably better than either one of them alone. If you take two top-performing algorithms, the combination of the two doesn't really perform much better than either one of them alone. But if you put the best algorithm together with a trained recognizer, that system performs substantially better than either one of them alone. So, I think, human augmentation by AI is going to be the revolution. (A short simulation after this article sketches why pairing complementary recognizers pays off more than pairing similar ones.)

    What's next?

    I think one of the things that is going to be necessary for us is pulling out the desirable characteristics like usability, interoperability, resilience, security, privacy and all the things that will require a certain amount of care to build into the systems, and get innovators to start incorporating them.
    Guidance and standards can help to do that. Last year, we published our plan for how the federal government should engage in the AI standards development process. I think there's general agreement that guidance will be needed for interoperability, security, reliability, robustness, these characteristics that we want AI systems to exhibit if they're going to be trusted.

    https://www.nist.gov/blogs/taking-measure/trustworthy-ai-conversation-nists-chuck-romine
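Romine's observation above about pairing the best algorithm with a trained human examiner comes down to error correlation: fusing two systems that make the same mistakes adds little, while fusing systems with complementary failure modes can help a lot. The toy simulation below is only meant to illustrate that intuition; the score model, threshold and resulting accuracies are invented assumptions, not NIST's data or methodology.

```python
# Toy illustration of score-level fusion for a verification task.
# All numbers are invented; this is not NIST's experiment or data.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000                     # simulated face-comparison trials
truth = rng.integers(0, 2, n)   # 1 = same person, 0 = different people

def accuracy(score):
    """Call a pair 'same person' when the similarity score exceeds 0.5."""
    return np.mean((score > 0.5) == truth)

# Two "similar" recognizers: their errors are strongly correlated.
shared_error = rng.normal(0, 1, n)
rec_a = truth + 0.95 * shared_error + 0.31 * rng.normal(0, 1, n)
rec_b = truth + 0.95 * shared_error + 0.31 * rng.normal(0, 1, n)

# A "complementary" recognizer: its errors are independent of rec_a's.
rec_c = truth + rng.normal(0, 1, n)

print(f"recognizer A alone:            {accuracy(rec_a):.3f}")
print(f"A fused with similar B:        {accuracy((rec_a + rec_b) / 2):.3f}")  # little gain
print(f"A fused with complementary C:  {accuracy((rec_a + rec_c) / 2):.3f}")  # clear gain
```

In this sketch, averaging two highly correlated scores barely moves the accuracy, while averaging two independent ones produces a visibly larger jump, which is the qualitative pattern Romine describes for algorithm-plus-examiner fusion.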

  • How the Biden administration is expected to approach tech research and development

    December 1, 2020

    Andrew Eversden

    WASHINGTON — Experts expect President-elect Joe Biden's administration to build on the Trump administration's investments in emerging technologies, while adding to research and development budgets in the Defense Department and across the federal government.

    The incoming Biden administration signaled throughout the campaign that basic research and development funding would be a priority. Biden wrote in Foreign Affairs he would make research and development a “cornerstone” of his presidency and pointed to the United States having the “greatest research universities in the world.”

    “It's basic research that's the area where you get the breakthroughs, and you need long-term, sustained investments to build up a strong S&T base,” said Martijn Rasser, a senior fellow at the Center for a New American Security's technology and national security program.

    Biden's R&D investment is an expected change from the Trump administration's approach, which experts have noted is narrower in scope and focused on harnessing private sector innovation.

    “The reality is the U.S. private sector has eclipsed the government, which in some ways that can be good,” said Rep. Jim Langevin, D-R.I., chairman of the House Armed Services Committee's Subcommittee on Intelligence and Emerging Threats and Capabilities. “The private sector can move with greater agility than the government, but the private sector may not be focusing on developing those exquisite technologies that we need for the war fighter.”

    Experts told C4ISRNET they expect the Biden administration to invest more money in basic research areas and to reform immigration laws that slowed the innovation pipeline from abroad to the United States.

    “China is closing in. They are spending every year more and more on R&D. They will soon, if not already, be spending as much as we are, if not more on R&D,” Langevin told C4ISRNET. “Congress has woken up to this problem.”

    Basic research

    Perhaps the most likely area the Biden administration is poised to change is basic research and development funding. According to annual reports from the Congressional Research Service, the Trump administration consistently proposed top-line cuts to federal research and development in yearly budget proposals. This included the fiscal 2021 budget proposal's $13.8 billion decrease in defense R&D from the fiscal 2020 funding enacted by Congress.

    While the Pentagon has often been spared from such cuts, the Trump administration has also suggested trimming the defense-related basic research budget line — money that is a “substantial source of federal funds for university R&D,” according to the Congressional Research Service. The White House's FY21 defense-related basic research budget line asked for a reduction of about 11 percent from FY20 enacted, or a $284.2 million decrease.

    Biden's campaign platform calls for a four-year investment of $300 billion in R&D for new technology such as 5G, artificial intelligence, advanced materials and electric cars.

    “A nation speaks to and identifies its priorities by where it puts its research dollars, where it puts its money,” Langevin said. “Basic research has to be more of a priority, and that's something I'm going to encourage the Biden administration to focus on.”

    Michèle Flournoy, thought to be a leading contender to become the next secretary of defense, has also written about the need to increase investment in emerging technologies to counter China.
    In Foreign Affairs in June, Flournoy wrote that “resilient battlefield networks, artificial intelligence to support faster decision-making, fleets of unmanned systems, and hypersonic and long-range precision missiles” will “ultimately determine military success.” “Continuing to underinvest in these emerging capabilities will ultimately have dire costs for U.S. deterrence,” she wrote.

    Congressional and think tank reports published during the Trump administration's tenure called for an increase in basic research funding. A report from the House Permanent Select Committee on Intelligence's strategic tech and advanced research subpanel, led by Rep. Jim Himes, D-Conn., recommended bumping up federal research and development funding from 0.7 percent to 1.1 percent of gross domestic product, or an increase from roughly $146 billion to $230 billion a year (a quick arithmetic check follows this article). A report by the Council on Foreign Relations from 2019 applauded the Trump administration's requested increases in funding for the Defense Advanced Research Projects Agency, now funded at $3.46 billion, and the Defense Innovation Unit, for which the Trump administration requested $164 million.

    Laying the groundwork

    Initiatives started under the Trump administration did provide a groundwork on which the Biden administration can build. Under the Trump administration, DARPA kicked off a $1.5 billion microelectronics effort. In artificial intelligence, the administration launched the American AI Initiative. However, the Council on Foreign Relations criticized that effort because it had no funding and left agencies to prioritize artificial intelligence R&D spending without metrics, while also drawing funds from other research areas. The administration also made a $1.2 billion investment in quantum information science.

    “The Trump administration started bringing national attention and federal focus to many of these technologies,” said Lindsey Sheppard, a fellow at the Center for Strategic and International Studies. “I hope to see from the Biden administration perhaps a more cohesive guiding strategy for all of these pieces.”

    While the Trump administration has started many initiatives, the Council on Foreign Relations report also criticized the Trump administration's innovation strategy as an “incremental and limited approach,” writing that “action does not match the language officials use to describe the importance of AI to U.S. economic and national security.”

    While investment in future technology is important, defense budgets are expected to stay flat or decrease in the coming years. In her Foreign Affairs article, Flournoy acknowledged that the budgetary reality will require “tough tradeoffs.” Experts agree. “R&D programs are going to have to start being able to consistently, clearly articulate justifications for their budgets and the returns on investment,” Sheppard said.

    But the coronavirus pandemic has highlighted the need for increased investments in research and development, Himes and Langevin argued. Both lawmakers identified biothreats as something they fear for the future. Biological threats are one area that DARPA — an organization Langevin pointed to as a major federal R&D success story — has triumphantly addressed. Commercial partners from DARPA's 3-year-old pandemic prevention platform program announced they had developed a COVID-19 therapeutic using new techniques.

    “There's absolutely going to be a rethink,” Himes told C4ISRNET in an interview. “Are we correctly allocating money between the possibility that there could be a pandemic that kills a million Americans, versus the possibility that we're going to have to fight the Russians in the Fulda Gap? I think there's going to be a lot of thinking about that. And there should be thinking about that because our money should go to those areas where there's the highest probability of dead Americans.”

    Immigration innovation

    Another way to improve American innovation in critical future technologies is by allowing highly skilled foreigners to work in the United States. Biden has hinted at changes that will affect American innovation through the expected reversals of President Donald Trump's immigration policies, which limited the ability of high-skilled workers to legally work in the country. The Biden administration's platform states it wants to reform the H-1B visa process that the Trump administration restricted, much to the chagrin of American tech companies, which use the program to hire top talent from abroad.

    Think tanks have recommended reforming the current U.S. immigration policy to attract international students, entrepreneurs and high-skilled workers because of the innovative ideas they provide. For example, an analysis by Georgetown University's Center for Security and Emerging Technology found that 68 percent of the United States' top 50 artificial intelligence companies were co-founded by immigrants, most of whom came to the U.S. as students.

    “A lot of the Trump administration's policies — we're shooting ourselves in the foot making it so much harder for people to come here,” said Rasser, who wrote a report for CNAS last year calling for H-1B caps to be increased. “Because of the fact that people want to come to the United States to live and work, that's one of our greatest competitive advantages. It's something I expect the Biden administration to reverse.”

    https://www.c4isrnet.com/smr/transition/2020/11/29/how-the-biden-administration-is-expected-to-approach-tech-research-and-development/
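A quick check of the R&D-share-of-GDP figures cited in the article above: assuming a U.S. GDP of roughly $21 trillion (a round-number assumption for illustration, not a figure from the article), 0.7 percent and 1.1 percent work out to about $147 billion and $231 billion, in line with the $146 billion to $230 billion range attributed to the Himes subpanel report.

```python
# Back-of-the-envelope check of the R&D-share-of-GDP figures cited above.
# The $21 trillion GDP value is an assumed round number, not from the article.
GDP = 21e12              # approximate U.S. GDP in dollars, circa 2019-2020

current = 0.007 * GDP    # federal R&D at 0.7 percent of GDP
proposed = 0.011 * GDP   # federal R&D at 1.1 percent of GDP

print(f"0.7% of GDP: ${current / 1e9:,.0f} billion")              # ~ $147 billion
print(f"1.1% of GDP: ${proposed / 1e9:,.0f} billion")              # ~ $231 billion
print(f"difference:  ${(proposed - current) / 1e9:,.0f} billion per year")
```

Read this way, the recommendation amounts to moving annual federal R&D spending from roughly $146 billion to roughly $230 billion, an increase of about $84 billion per year.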
