
May 4, 2021 | International, Clean technologies, Big data and Artificial Intelligence, Advanced manufacturing 4.0, Autonomous systems (Drones / E-VTOL), Virtual design and testing, Additive manufacturing

News in brief

Drones / Advanced Air Mobility


L3Harris and Bye Aerospace (April 28)
L3Harris Technologies and Bye Aerospace have agreed to develop an all-electric, multi-mission aircraft that will provide intelligence, surveillance and reconnaissance (ISR) capabilities. They will modify the recently announced eight-seat, all-electric, twin-motor eFlyer 800™ aircraft.

Spirit AeroSystems (April 28)
Spirit AeroSystems in Northern Ireland has named team members for the United Kingdom's Mosquito ‘loyal wingman' attritable unmanned aerial vehicle (UAV) concept under the Lightweight Affordable Novel Combat Aircraft (LANCA) programme. They include Northrop Grumman and Intrepid Minds.


ASKA (April 27)
Pre-orders are now being taken for the ASKA, an electric vertical takeoff and landing (eVTOL) vehicle designed for consumers. The four-seat ASKA acts as an automobile as well as a VTOL and STOL aircraft.

On the same subject

  • Trustworthy AI: A Conversation with NIST's Chuck Romine

    January 21, 2020

    By: Charles Romine

    Artificial Intelligence (AI) promises to grow the economy and improve our lives, but with these benefits, it also brings new risks that society is grappling with. How can we be sure this new technology is not just innovative and helpful, but also trustworthy, unbiased, and resilient in the face of attack? We sat down with NIST Information Technology Lab Director Chuck Romine to learn how measurement science can help provide answers.

    How would you define artificial intelligence? How is it different from regular computing?

    One of the challenges with defining artificial intelligence is that if you put 10 people in a room, you get 11 different definitions. It's a moving target. We haven't converged yet on exactly what the definition is, but I think NIST can play an important role here. What we can't do, and what we never do, is go off in a room and think deep thoughts and say we have the definition. We engage the community. That said, we're using a narrow working definition specifically for the satisfaction of the Executive Order on Maintaining American Leadership in Artificial Intelligence, which makes us responsible for providing guidance to the federal government on how it should engage in the standards arena for AI. We acknowledge that there are multiple definitions out there, but from our perspective, an AI system is one that exhibits reasoning and performs some sort of automated decision-making without the interference of a human.

    There's a lot of talk at NIST about “trustworthy” AI. What is trustworthy AI? Why do we need AI systems to be trustworthy?

    AI systems will need to exhibit characteristics like resilience, security and privacy if they're going to be useful and people can adopt them without fear. That's what we mean by trustworthy. Our aim is to help ensure these desirable characteristics. We want systems that are capable of either combating cybersecurity attacks, or, perhaps more importantly, at least recognizing when they are being attacked. We need to protect people's privacy. If systems are going to operate in life-or-death type of environments, whether it's in medicine or transportation, people need to be able to trust AI will make the right decisions and not jeopardize their health or well-being.

    Resilience is important. An artificial intelligence system needs to be able to fail gracefully. For example, let's say you train an artificial intelligence system to operate in a certain environment. Well, what if the system is taken out of its comfort zone, so to speak? One very real possibility is catastrophic failure. That's clearly not desirable, especially if you have the AI deployed in systems that operate critical infrastructure or our transportation systems. So, if the AI is outside of the boundaries of its nominal operating environment, can it fail in such a way that it doesn't cause a disaster, and can it recover from that in a way that allows it to continue to operate? These are the characteristics that we're looking for in a trustworthy artificial intelligence system.
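
    One way to picture the “fail gracefully” behavior Romine describes is a confidence gate placed in front of a model's decisions. The sketch below is a hypothetical illustration, not a NIST artifact: the toy softmax classifier, the 0.85 threshold and the defer-to-human fallback are all assumptions invented for this example, and a production system would use a dedicated out-of-distribution detector rather than raw confidence.

    ```python
    import numpy as np

    # Hypothetical sketch only: the model, threshold, and fallback policy
    # are invented for illustration and are not from NIST.

    CONFIDENCE_THRESHOLD = 0.85  # assumed minimum confidence for autonomous action

    def softmax(logits: np.ndarray) -> np.ndarray:
        """Turn raw model scores into a probability distribution."""
        shifted = np.exp(logits - logits.max())
        return shifted / shifted.sum()

    def decide(logits: np.ndarray) -> str:
        """Act on a prediction only when the model is confident;
        otherwise degrade gracefully by deferring to a human operator."""
        probs = softmax(logits)
        if probs.max() < CONFIDENCE_THRESHOLD:
            return "fallback: defer to human operator"  # graceful failure path
        return f"act: class {int(probs.argmax())} (p={probs.max():.2f})"

    # Inside its nominal envelope the model is confident and acts...
    print(decide(np.array([0.2, 4.0, 0.1])))  # act: class 1 (p=0.96)
    # ...outside it, near-uniform scores trigger the safe fallback.
    print(decide(np.array([1.0, 1.1, 0.9])))  # fallback: defer to human operator
    ```

    The design point is that the explicit fallback path, not the prediction itself, is what keeps an out-of-envelope input from turning into the catastrophic failure Romine warns about.
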
    NIST is supposed to be helping industry before they even know they needed us to. What are we thinking about in this area that is beyond the present state of development of AI?

    Industry has a remarkable ability to innovate and to provide new capabilities that people don't even realize that they need or want. And they're doing that now in the AI consumer space. What they don't often do is to combine that push to market with deep thought about how to measure characteristics that are going to be important in the future. And we're talking about, again, privacy, security and resilience ... trustworthiness. Those things are critically important, but many companies that are developing and marketing new AI capabilities and products may not have taken those characteristics into consideration. Ultimately, I think there's a risk of a consumer backlash where people may start saying these things are too easy to compromise and they're betraying too much of my personal information, so get them out of my house.

    What we can do to help, and the reason that we've prioritized trustworthy AI, is we can provide that foundational work that people in the consumer space need to manage those risks overall. And I think that the drumbeat for that will get increasingly louder as AI systems begin to be marketed for more than entertainment. Especially at the point when they start to operate critical infrastructure, we're going to need a little more assurance. That's where NIST can come together with industry to think about those things, and we've already had some conversations with industry about what trustworthy AI means and how we can get there.

    I'm often asked, how is it even possible to influence a trillion-dollar, multitrillion-dollar industry on a budget of $150 million? And the answer is, if we were sitting in our offices doing our own work independent of industry, we would never be able to. But that's not what we do. We can work in partnership with industry, and we do that routinely. And they trust us, they're thrilled when we show up, and they're eager to work with us.

    AI is a scary idea for some people. They've seen “I, Robot,” or “The Matrix,” or “The Terminator.” What would you say to help them allay these fears?

    I think some of this has been overhyped. At the same time, I think it's important to acknowledge that risks are there, and that they can be pretty high if they're not managed ahead of time. For the foreseeable future, however, these systems are going to be too fragile and too dependent on us to worry about them taking over. I think the biggest revolution is not AI taking over, but AI augmenting human intelligence.

    We're seeing examples of that now, for instance, in the area of face recognition. The algorithms for face recognition have improved at an astonishing rate over the last seven years. We're now at the point where, under controlled circumstances, the best artificial intelligence algorithms perform on par with the best human face recognizers. A fascinating thing we learned recently, and published in a report, is that if you take two trained human face recognizers and put them together, the dual system doesn't perform appreciably better than either one of them alone. If you take two top-performing algorithms, the combination of the two doesn't really perform much better than either one of them alone. But if you put the best algorithm together with a trained recognizer, that system performs substantially better than either one of them alone. So, I think, human augmentation by AI is going to be the revolution.
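
    The fusion effect Romine cites can be made concrete with a toy score-combination rule. In the sketch below, the equal averaging weight, the 0.6 threshold and the individual scores are all fabricated for illustration; the point is simply that when two judges' errors are largely uncorrelated, combining their scores can recover decisions either judge would get wrong alone.

    ```python
    # Toy sketch of human-plus-algorithm fusion: average the similarity
    # scores of an algorithm and a trained human examiner, then threshold.
    # All scores and the threshold are fabricated for illustration;
    # NIST's actual study design is far more involved.

    MATCH_THRESHOLD = 0.6  # assumed decision threshold on a 0..1 similarity scale

    def fuse(algorithm_score: float, human_score: float, weight: float = 0.5) -> float:
        """Weighted average of two independent similarity scores."""
        return weight * algorithm_score + (1.0 - weight) * human_score

    # A genuine (same-person) pair that the algorithm alone scores too low.
    # Because the two judges' errors tend to be uncorrelated, the human's
    # confident score pulls the fused decision over the threshold.
    algorithm_score, human_score = 0.50, 0.80

    for judge, score in [("algorithm alone", algorithm_score),
                         ("human alone", human_score),
                         ("algorithm + human", fuse(algorithm_score, human_score))]:
        verdict = "match" if score >= MATCH_THRESHOLD else "no match"
        print(f"{judge:18s} {score:.2f} -> {verdict}")
    ```
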
    What's next?

    I think one of the things that is going to be necessary for us is pulling out the desirable characteristics like usability, interoperability, resilience, security, privacy and all the things that will require a certain amount of care to build into the systems, and get innovators to start incorporating them. Guidance and standards can help to do that. Last year, we published our plan for how the federal government should engage in the AI standards development process. I think there's general agreement that guidance will be needed for interoperability, security, reliability, robustness, these characteristics that we want AI systems to exhibit if they're going to be trusted.

    https://www.nist.gov/blogs/taking-measure/trustworthy-ai-conversation-nists-chuck-romine

  • Saab trials 3D-printed part on Gripen for battlefield repairs - Skies Mag

    March 30, 2021

    Saab successfully conducted a trial that marked the first time an exterior 3D-printed part has been flown on a Gripen.

  • As tech startups catch DoD’s eye, big investors are watching

    January 31, 2020

    By: Jill Aitoro

    SIMI VALLEY, Calif. — Private investors are not yet lining up to back defense startups, but they are paying close attention. Two factors have created an opening that could lure venture capitalists to defense investments: first, a few select venture-backed technology startups are gaining traction; and second, there's been a strategic shift in approach to weapons development from the U.S. Department of Defense, focusing more on information warfare and, as such, software. In the words of Mike Madsen, director of strategic engagement at the Pentagon's commercial tech hub, the Defense Innovation Unit: “We're at a significant inflection point right now that will be visible through the lens of history.”

    Nonetheless, for the tech startups, it's been slow going, as discussed during a Defense News roundtable in California. For the second year, leadership from DoD and the tech community came together to discuss the state of the Pentagon's efforts to attract commercial startups — this time digging into the challenges and opportunities that come with investment in defense development.

    “We went into this eyes wide open, knowing full well that to the venture community, the math doesn't make sense. Making the choice to contribute to the advancement of artificial intelligence for DoD represented for us more of a mission-driven objective,” said Ryan Tseng, founder of artificial intelligence startup Shield AI. But early on, “we were fortunate to get the backing of Andreessen Horowitz, a top-tier venture fund. They're certainly leaning in, in terms of their thinking about defense technology — believing that despite the history, there might be a way to find an opening to create companies that can become economically sustainable and make substantial mission impact.” Shield AI has raised $50 million in venture funding since 2015, with more rounds expected.

    Indeed, a few key Silicon Valley investors have emerged as the exceptions to the rule, putting dollars toward defense startups. In addition to Andreessen Horowitz, which counts both Shield AI and defense tech darling Anduril in its portfolio, there's General Catalyst, which also invested in Anduril, as well as AI startup Vannevar Labs. And then of course there's Founders Fund. Led by famed Silicon Valley investors Peter Thiel, Ken Howery and Brian Singerman, among others, the venture firm was an early investor in Anduril, as well as mobile mesh networking platform goTenna. Founders Fund placed big bets on Palantir Technologies and SpaceX in the early days, which paid off in a big way.

    Some of the early successes of these startups have “done an excellent job of making investors greedy,” said Katherine Boyle, an investor with General Catalyst. “There's a growing group who are interested in this sector right now, and they've looked at the success of these companies and [are] saying: ‘OK, let's learn about it.' ”

    Take Anduril: The defense tech startup — co-founded by Oculus founder Palmer Luckey and Founders Fund partner Trae Stephens — has raised more than $200 million and hit so-called unicorn status in 2019, reaching a valuation of more than $1 billion. As the successes piled up, so did the venture capital funding. According to Fortune magazine, those investors included Founders Fund, 8VC, General Catalyst, XYZ Ventures, Spark Capital, Rise of the Rest, Andreessen Horowitz, and SV Angel.

    “I started my career at Allen & Company investment banking. Herbert Allen, who's in his 80s, always said: ‘Hey, you should run into an industry where people are running away,' ” said John Tenet, a partner with 8VC as well as a co-founder and vice chairman of defense startup Epirus. “There's so much innovation occurring, where the government can be the best and biggest customer. And there are people who really want to solve hard problems. It's just figuring out where the synergies lie, what the ‘one plus one equals three' scenario will be.”

    Also attracting the attention of Silicon Valley investors is the growing emphasis by the Pentagon not only on systems over platforms, but software over hardware. Boyle described the shift as the “macro tailwind” that often drives innovation in a sector. Similar revolutions happened in the industrials and automotive markets — both of which are also massive, global and slow-moving.

    That emphasis on tech, combined with some recent hard lessons, also provides a glimmer of hope that the typical hurdles associated with defense investments — lengthy procurement cycles and dominance by traditional manufacturers, for example — could be overcome. Consider U.S. Code 2377, which requires that commercially available items be considered first in procurement efforts, said Anduril's Stephens. He also noted court decisions in lawsuits filed by SpaceX and Palantir, which ultimately validated claims that defense agencies had not properly ensured a level playing field for major competitions.

    “These types of things are now at least in recent memory for Congress, and so they have some awareness of the issues that are being faced,” Stephens said. “It's much easier now to walk into a congressional office and say, ‘Here's the problem that we're facing' or ‘Here's the policy changes that we would need.' There are also enough bodies like DIU, like In-Q-Tel, like AFWERX, like the Defense Innovation Board, like the [Defense Science Board] — places where you can go to express the need for change. And oftentimes you do see that language coming into the [National Defense Authorization Act]. It's part of a longer-term cultural battle for sure.”

    For now, all these factors contribute to the majority of skeptical investors' decisions to watch the investments with interest — even if they still take a wait-and-see approach. And that places a lot of pressure on the companies that are, in a sense, the proof of concept for a new portfolio segment.

    “My fear is that if this generation of companies doesn't figure [it] out, if they don't knock down the doors and if there aren't a few successes, we're going to have 20, 30 years of just no investor looking around the table and saying we need to work for the Department of Defense,” Boyle said. “If there aren't some success stories coming out of this generation of companies, it's going to be very hard to look our partners in the eye and say: ‘We should keep investing in defense because look at how well things have turned out.'”

    https://www.defensenews.com/smr/cultural-clash/2020/01/30/as-tech-startups-catch-dods-eye-big-investors-are-watching/
