December 3, 2020 | International, Clean technologies, Big data and Artificial Intelligence, Advanced manufacturing 4.0, Autonomous systems (Drones / E-VTOL), Virtual design and testing, Additive manufacturing

Aviation Startups Making Progress, But Can They Disrupt The Industry?

Graham Warwick December 02, 2020

The concept of a minimum viable product is not new to aviation. It is how the industry started. But as aircraft technology has advanced, customers have come to expect more than a minimum capability.

Along comes Silicon Valley's startup culture, with its drive to find a foothold from which to launch a new technology—a less-than-perfected product that can be developed quickly to disrupt or create a market.

How well is that going for aviation? From autonomy and artificial intelligence (AI) to hybrid-electric and hydrogen propulsion, is there a viable product taking shape that can perform a valuable mission?

Autonomy

The vision: unmanned cargo aircraft plying the skies to meet the ever-growing express logistics needs of the e-commerce giants. The reality: a pair of startups that are converting the Cessna Caravan into a remotely piloted regional cargo aircraft as a first step.

The goal is for supervised autonomy to enable one remote pilot on the ground to manage several aircraft, increasing aircraft utilization and reducing operating costs. Reliable Robotics and Xwing plan to operate their aircraft with onboard pilots initially, the autonomy system advising the pilot while accumulating the experience required to certify it. The companies hope to begin commercial flights by 2022.

There are plenty of startups pursuing the express-logistics market with unmanned cargo aircraft. But by targeting an existing market (several hundred Caravans fly as freight feeders for package carriers), modifying an already certified aircraft, and introducing autonomy in stages, these two companies hope to lower the certification hurdles.

Artificial Intelligence

The vision: automated aircraft flown by machine-learning algorithms that replicate the skills of human pilots but not their mistakes. The reality: The initial approach is to use AI to help the pilot in high-workload phases of flight, such as landing.

Swiss startup Daedalean is developing a camera-based system to provide safe landing guidance for general-aviation aircraft and vertical-takeoff-and-landing vehicles. Airbus has the longer-term goal of bringing autonomy to its commercial aircraft but has started in the same place, demonstrating fully automatic vision-based takeoffs and landings with an A350 in April.

By tackling one well-defined subtask of visual flying, and proving the system can be safer than human piloting, Daedalean hopes to create the path to certification of AI for safety-critical applications. The European Union Aviation Safety Agency, which has been working with the startup to frame the rules, expects the first AI applications to be certified in 2022.

Hybrid-Electric

The vision: propulsion systems that overcome the limitations of batteries to deliver the economic and emissions benefits of electrification in larger, faster, longer-range aircraft. The reality: Starting small, startups Ampaire and VoltAero are testing power trains in converted Cessna 337 Skymasters.

Ampaire's route to market is to modify existing aircraft, beginning with the Skymaster as the four-seat, 200-mi. Electric EEL but moving on to the 19-seat de Havilland Canada Twin Otter. France's VoltAero, meanwhile, is taking the clean-sheet approach with plans for a family of hybrid-electric aircraft with up to 10 seats and 800-mi. range. Delivery of the initial four-seat Cassio 330 version is planned for 2023.

While batteries have improved enough to make pure-electric urban air taxis feasible, longer ranges are still out of reach. But there are startups working to field all-electric nine- and 19-seat aircraft within just a couple of years of the first hybrid-electric types. It remains to be seen whether hybrid propulsion is just a stopgap, as with cars, or a long-term market niche.

Hydrogen

The vision: zero-emissions flight for aircraft of all sizes and ranges. The reality: adapting automotive fuel-cell technology to modify regional turboprops and kick-start the market for green hydrogen as an aviation fuel.

ZeroAvia made the first flight of a six-seater with a fuel-cell power train from Cranfield, England, in September and plans a 300-mi. demonstration flight. The startup's route to market is to modify existing 10- and 20-seaters to hydrogen-electric propulsion, aiming for its first certification within three years. Universal Hydrogen is more ambitious, targeting the 50-seat de Havilland Canada Dash 8-300 for conversion to hydrogen fuel-cell propulsion for market entry by 2024.

Introducing a new fuel to aviation is an infrastructure issue. By starting small, the startups believe the challenge of producing green hydrogen can be made manageable. But to have an impact on aviation's contribution to climate change, hydrogen needs to be scaled up to larger and larger aircraft as quickly as possible.

https://aviationweek.com/aerospace/emerging-technologies/aviation-startups-making-progress-can-they-disrupt-industry

On the same subject

  • Trustworthy AI: A Conversation with NIST's Chuck Romine

    January 21, 2020

    By: Charles Romine

    Artificial Intelligence (AI) promises to grow the economy and improve our lives, but with these benefits, it also brings new risks that society is grappling with. How can we be sure this new technology is not just innovative and helpful, but also trustworthy, unbiased, and resilient in the face of attack? We sat down with NIST Information Technology Lab Director Chuck Romine to learn how measurement science can help provide answers.

    How would you define artificial intelligence? How is it different from regular computing?

    One of the challenges with defining artificial intelligence is that if you put 10 people in a room, you get 11 different definitions. It's a moving target. We haven't converged yet on exactly what the definition is, but I think NIST can play an important role here. What we can't do, and what we never do, is go off in a room and think deep thoughts and say we have the definition. We engage the community. That said, we're using a narrow working definition specifically for the satisfaction of the Executive Order on Maintaining American Leadership in Artificial Intelligence, which makes us responsible for providing guidance to the federal government on how it should engage in the standards arena for AI. We acknowledge that there are multiple definitions out there, but from our perspective, an AI system is one that exhibits reasoning and performs some sort of automated decision-making without the interference of a human.

    There's a lot of talk at NIST about “trustworthy” AI. What is trustworthy AI? Why do we need AI systems to be trustworthy?

    AI systems will need to exhibit characteristics like resilience, security and privacy if they're going to be useful and people can adopt them without fear. That's what we mean by trustworthy. Our aim is to help ensure these desirable characteristics. We want systems that are capable of either combating cybersecurity attacks, or, perhaps more importantly, at least recognizing when they are being attacked. We need to protect people's privacy. If systems are going to operate in life-or-death type of environments, whether it's in medicine or transportation, people need to be able to trust AI will make the right decisions and not jeopardize their health or well-being.

    Resilience is important. An artificial intelligence system needs to be able to fail gracefully. For example, let's say you train an artificial intelligence system to operate in a certain environment. Well, what if the system is taken out of its comfort zone, so to speak? One very real possibility is catastrophic failure. That's clearly not desirable, especially if you have the AI deployed in systems that operate critical infrastructure or our transportation systems. So, if the AI is outside of the boundaries of its nominal operating environment, can it fail in such a way that it doesn't cause a disaster, and can it recover from that in a way that allows it to continue to operate? These are the characteristics that we're looking for in a trustworthy artificial intelligence system.

    NIST is supposed to be helping industry before they even know they needed us to. What are we thinking about in this area that is beyond the present state of development of AI?

    Industry has a remarkable ability to innovate and to provide new capabilities that people don't even realize that they need or want. And they're doing that now in the AI consumer space. What they don't often do is to combine that push to market with deep thought about how to measure characteristics that are going to be important in the future. And we're talking about, again, privacy, security and resilience ... trustworthiness. Those things are critically important, but many companies that are developing and marketing new AI capabilities and products may not have taken those characteristics into consideration. Ultimately, I think there's a risk of a consumer backlash where people may start saying these things are too easy to compromise and they're betraying too much of my personal information, so get them out of my house.

    What we can do to help, and the reason that we've prioritized trustworthy AI, is we can provide that foundational work that people in the consumer space need to manage those risks overall. And I think that the drumbeat for that will get increasingly louder as AI systems begin to be marketed for more than entertainment. Especially at the point when they start to operate critical infrastructure, we're going to need a little more assurance. That's where NIST can come together with industry to think about those things, and we've already had some conversations with industry about what trustworthy AI means and how we can get there. I'm often asked, how is it even possible to influence a trillion-dollar, multitrillion-dollar industry on a budget of $150 million? And the answer is, if we were sitting in our offices doing our own work independent of industry, we would never be able to. But that's not what we do. We can work in partnership with industry, and we do that routinely. And they trust us, they're thrilled when we show up, and they're eager to work with us.

    AI is a scary idea for some people. They've seen “I, Robot,” or “The Matrix,” or “The Terminator.” What would you say to help them allay these fears?

    I think some of this has been overhyped. At the same time, I think it's important to acknowledge that risks are there, and that they can be pretty high if they're not managed ahead of time. For the foreseeable future, however, these systems are going to be too fragile and too dependent on us to worry about them taking over. I think the biggest revolution is not AI taking over, but AI augmenting human intelligence. We're seeing examples of that now, for instance, in the area of face recognition. The algorithms for face recognition have improved at an astonishing rate over the last seven years. We're now at the point where, under controlled circumstances, the best artificial intelligence algorithms perform on par with the best human face recognizers. A fascinating thing we learned recently, and published in a report, is that if you take two trained human face recognizers and put them together, the dual system doesn't perform appreciably better than either one of them alone. If you take two top-performing algorithms, the combination of the two doesn't really perform much better than either one of them alone. But if you put the best algorithm together with a trained recognizer, that system performs substantially better than either one of them alone. So, I think, human augmentation by AI is going to be the revolution.

    What's next?

    I think one of the things that is going to be necessary for us is pulling out the desirable characteristics like usability, interoperability, resilience, security, privacy and all the things that will require a certain amount of care to build into the systems, and get innovators to start incorporating them. Guidance and standards can help to do that. Last year, we published our plan for how the federal government should engage in the AI standards development process. I think there's general agreement that guidance will be needed for interoperability, security, reliability, robustness, these characteristics that we want AI systems to exhibit if they're going to be trusted.

    https://www.nist.gov/blogs/taking-measure/trustworthy-ai-conversation-nists-chuck-romine

  • Bordeaux Technowest and Airbus Développement launch a “Challenge Innovation” competition

    September 3, 2021

    Bordeaux Technowest and Airbus Développement are launching a “Challenge Innovation” competition as part of the UAV SHOW, to be held October 19-21 in Bordeaux. The competition aims to spotlight innovative projects from startups in the drone sector across four themes: environmental impact, technological innovation, service to the territory, and artificial intelligence, data & communication. Applications close on October 1. https://www.aerobuzz.fr/breves-aviation-generale/challenge-innovation-bordeaux-technowest-et-airbus-developpement/?paged1=2#:~:text=Lancement%20du%20Challenge%20Innovation%20par,up%20de%20la%20fili%C3%A8re%20drones.

  • Artificial intelligence, a technological revolution for defense

    September 18, 2020

    L'Usine Nouvelle devotes a detailed article to the upheavals brought about by artificial intelligence (AI) in the defense sector. The magazine recalls that the French Ministry of the Armed Forces published a report dedicated to AI at the end of 2019 and has made AI one of its priorities, with an investment of 100 million euros per year over the 2019-2025 period. "AI must enable collaborative combat," notes L'Usine Nouvelle, which reports that Dassault Aviation and Thales "are preparing the evolution of the Rafale's cockpit: the fighter will be able to communicate with drones to adopt innovative strategies for penetrating anti-aircraft defenses, based notably on intelligent, reactive avoidance trajectories." In the naval and land domains, Naval Group and Nexter are also developing their capabilities through AI. Marko Erman, chief scientific officer of Thales, emphasizes: "one of the challenges is to have algorithms that are explainable in real time and in terms understandable by the soldier on a mission." L'Usine Nouvelle, September 17
