March 8, 2022 | International, Clean Technologies, Big Data and Artificial Intelligence, Advanced Manufacturing 4.0, Autonomous Systems (Drones / E-VTOL), Virtual Design and Testing, Additive Manufacturing

Generating Innovation with Japan: Québec 2022

On the same topic

  • Navy's Next Generation Air Dominance Program to Be Family of Manned, Unmanned Systems

    March 31, 2021

    Navy's Next Generation Air Dominance Program to Be Family of Manned, Unmanned Systems

    The Navy is eyeing a mix of manned and unmanned platforms as it embarks on its next-generation air dominance program, which will replace some of the service's aging planes, said a top official March 30. At the center of the initiative is an effort to procure a sixth-generation fighter and replace the F/A-18E/F Super Hornet, said Rear Adm. Gregory Harris, director of the air warfare division of the office of the chief of naval operations. The Super Hornet will begin reaching the end of its service life in the mid-2030s. The Navy has not yet decided whether that platform will be robotic or have a pilot in the cockpit, he said during a virtual event hosted by the Navy League of the United States. “In the next probably two to three years, we'll have a better idea of whether the replacement for the F/A-18E/F will be manned or unmanned,” he said. “I would believe it will most likely be manned but I'm open to the other aspects.” That decision will be informed by a concept refinement phase, which the Navy is currently in, he said. “That concept refinement phase and the teams that we have with our prime air vehicle vendors will start to advise what's in the realm of possible — has autonomy and artificial intelligence matured enough to be able to put a system inside an unmanned platform that has to go execute air-to-air warfare?” he said. Air-to-air warfare is perhaps the most complex mission for an autonomous capability to perform, he noted.

  • Impact of COVID-19 on commercial MRO

    April 24, 2020

    Impact of COVID-19 on commercial MRO

    Opinion: How COVID-19 Has Already Changed Everything | By David Marcontell, Oliver Wyman | April 17, 2020

    To say that COVID-19 is having a devastating effect on aviation is an understatement. With hundreds of millions of people living under stay-at-home orders and unemployment rates in the U.S. and Europe rising faster than they ever have, global airline capacity in available seat-miles is down 59% compared to what it was at this time last year. The International Air Transport Association is forecasting airline losses of $252 billion—a tally that has been revised upward twice in the last six weeks. At my own firm, we cut our 2020 forecast for demand in the MRO market by $17-35 billion to reflect the nearly 11,000 aircraft that have been taken out of service and the 50% drop in daily utilization for those that are still flying. Oliver Wyman also lowered its projection for new aircraft deliveries by 50-60% versus 2019 after a comprehensive review of original equipment manufacturer (OEM) build projections versus airline demand. Deliveries for most current-production models are expected to drop 50% or more in 2021 and 2022. As a result, we project that it will be well into 2022 before the global MRO market might return to the size it was before COVID-19. This crisis has gone well past the point of a V-shaped recovery. Lasting damage has been done, and not unlike the Sept. 11, 2001, terrorist attacks or the 2008 global financial crisis, the behavior of governments, businesses and the public is likely to have been changed forever. Following 9/11, it took nearly 18 months for passenger traffic to return to its previous level, and when it finally did, travel looked very different than it had before the attacks. Passenger anxiety and the “hassle” factor associated with heightened airport security caused people to stay at home or drive. It took nearly a decade for the public to adjust to the new normal of commercial air travel.
In a post-COVID-19 environment, it is not unrealistic to expect new screening protocols to be put in place to help manage the risk of reinfection or an emergence of new hot spots. Already, international public health officials are considering such tools as immunization passports and body temperature scanning (already in use by some airports) that would be applicable to everyone on every flight, much like our security screening is today. In addition, virtual meeting technology—adoption of which is expanding quickly out of necessity—is now becoming business as usual for work and socializing, and it's unlikely we will turn away from it entirely even when the disease is a memory. These combined influences will undoubtedly slow passenger traffic growth. COVID-19 also will change the industry's labor landscape. For the past several years, the aviation industry has been concerned with a looming labor shortage. Before the coronavirus crisis, regional airlines were already being forced to shut down because they couldn't find enough pilots; others were trimming flight schedules. A stunning 90% of respondents to the Aeronautical Repair Station Association's 2019 survey reported difficulty finding enough technicians—a situation that cost ARSA members more than $100 million per month in unrealized revenue. COVID-19 will change all that. With the global fleet expected to have 1,200 fewer airplanes flying in 2021 than 2019, the industry will need roughly 18,000 fewer pilots and 8,400 fewer aviation maintenance technicians in 2021. The depth of the cutbacks is the equivalent of grounding 1-2 years' worth of graduates from training and certification programs around the world. How many would-be pilots and mechanics may now be dissuaded from pursuing a career in aviation with those statistics? If people turn away now, when aviation comes back it may be a few years before that candidate pipeline is restored.
Another example of permanent change from aviation's last cataclysmic event was the consolidation of the OEM supply chain after the Great Recession. Tier 1 and Tier 2 suppliers went on a buying spree, gobbling up smaller companies. While the post-COVID-19 business environment will undoubtedly be hazardous for these same suppliers, the consolidation of the past decade has put them in a better position to survive this upheaval. Can the same be said for the MRO community, which comprises many smaller, privately held and family-owned companies? I suspect not. While governments are scrambling to provide financial relief for small businesses hurt by the global economic shutdown, these efforts will likely fall short. The result might well be a further consolidated MRO community dominated by the OEMs plus a handful of fully integrated firms that provide support to both OEMs and airlines. COVID-19 is a painful reminder that aviation always will be a cyclical business. With each cycle, the industry renews itself, performing better than before. One should expect this cycle to be no different. The biggest question is: How long will this cycle last? —David Marcontell, Oliver Wyman partner and general manager of its Cavok division, has aftermarket experience with leading OEMs, airlines, MROs and financial services.
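The workforce figures quoted above imply a fixed staffing ratio per airframe. A quick back-of-the-envelope check makes the relationship explicit; the per-aircraft ratios below are inferred from the article's own numbers, not stated by Oliver Wyman, and the function name is invented for illustration:

```python
# Staffing arithmetic implied by the article: 1,200 fewer aircraft in
# 2021 vs. 2019 maps to ~18,000 fewer pilots and ~8,400 fewer
# maintenance technicians. The implied ratios are inferred, not quoted.
fewer_aircraft = 1_200

pilots_per_aircraft = 18_000 / fewer_aircraft        # 15 pilots per airframe
technicians_per_aircraft = 8_400 / fewer_aircraft    # 7 technicians per airframe

def workforce_reduction(grounded_aircraft: int) -> tuple[float, float]:
    """Estimate pilot and technician surplus for a given fleet reduction,
    assuming the same staffing ratios scale linearly."""
    return (grounded_aircraft * pilots_per_aircraft,
            grounded_aircraft * technicians_per_aircraft)

pilots, techs = workforce_reduction(fewer_aircraft)
print(f"{pilots:.0f} fewer pilots, {techs:.0f} fewer technicians")
# → 18000 fewer pilots, 8400 fewer technicians
```

The linear-scaling assumption is the fragile part: staffing per airframe varies by aircraft type and utilization, so this is an order-of-magnitude sketch only.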

  • Trustworthy AI: A Conversation with NIST's Chuck Romine

    January 21, 2020

    Trustworthy AI: A Conversation with NIST's Chuck Romine

    By: Charles Romine

    Artificial Intelligence (AI) promises to grow the economy and improve our lives, but with these benefits, it also brings new risks that society is grappling with. How can we be sure this new technology is not just innovative and helpful, but also trustworthy, unbiased, and resilient in the face of attack? We sat down with NIST Information Technology Lab Director Chuck Romine to learn how measurement science can help provide answers.

    How would you define artificial intelligence? How is it different from regular computing?

    One of the challenges with defining artificial intelligence is that if you put 10 people in a room, you get 11 different definitions. It's a moving target. We haven't converged yet on exactly what the definition is, but I think NIST can play an important role here. What we can't do, and what we never do, is go off in a room and think deep thoughts and say we have the definition. We engage the community. That said, we're using a narrow working definition specifically for the satisfaction of the Executive Order on Maintaining American Leadership in Artificial Intelligence, which makes us responsible for providing guidance to the federal government on how it should engage in the standards arena for AI. We acknowledge that there are multiple definitions out there, but from our perspective, an AI system is one that exhibits reasoning and performs some sort of automated decision-making without the interference of a human.

    There's a lot of talk at NIST about “trustworthy” AI. What is trustworthy AI? Why do we need AI systems to be trustworthy?

    AI systems will need to exhibit characteristics like resilience, security and privacy if they're going to be useful and people can adopt them without fear. That's what we mean by trustworthy. Our aim is to help ensure these desirable characteristics.
We want systems that are capable of either combating cybersecurity attacks, or, perhaps more importantly, at least recognizing when they are being attacked. We need to protect people's privacy. If systems are going to operate in life-or-death type of environments, whether it's in medicine or transportation, people need to be able to trust AI will make the right decisions and not jeopardize their health or well-being. Resilience is important. An artificial intelligence system needs to be able to fail gracefully. For example, let's say you train an artificial intelligence system to operate in a certain environment. Well, what if the system is taken out of its comfort zone, so to speak? One very real possibility is catastrophic failure. That's clearly not desirable, especially if you have the AI deployed in systems that operate critical infrastructure or our transportation systems. So, if the AI is outside of the boundaries of its nominal operating environment, can it fail in such a way that it doesn't cause a disaster, and can it recover from that in a way that allows it to continue to operate? These are the characteristics that we're looking for in a trustworthy artificial intelligence system.

NIST is supposed to be helping industry before they even know they need us to. What are we thinking about in this area that is beyond the present state of development of AI?

Industry has a remarkable ability to innovate and to provide new capabilities that people don't even realize that they need or want. And they're doing that now in the AI consumer space. What they don't often do is to combine that push to market with deep thought about how to measure characteristics that are going to be important in the future. And we're talking about, again, privacy, security and resilience ... trustworthiness.
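Romine's idea of "failing gracefully" outside the nominal operating environment can be sketched as a guard around a model: refuse to answer outside the envelope the model was trained for, rather than emitting an unreliable prediction. Everything here (the names, the one-dimensional envelope, the toy model) is an illustrative simplification, not a NIST specification:

```python
# Minimal sketch of graceful degradation: reject inputs outside the
# model's nominal operating envelope instead of silently extrapolating.
# All names are hypothetical; a real system would use a richer
# out-of-distribution check than a 1-D range test.
from dataclasses import dataclass

@dataclass
class Envelope:
    low: float
    high: float

    def contains(self, x: float) -> bool:
        return self.low <= x <= self.high

def guarded_predict(model, x: float, envelope: Envelope, fallback="defer-to-human"):
    """Return the model's prediction only inside its trained envelope;
    otherwise degrade to a safe fallback rather than fail catastrophically."""
    if not envelope.contains(x):
        return fallback          # graceful degradation, not a crash
    return model(x)

# A toy "model" trained only on inputs in [0, 10]:
model = lambda x: 2 * x + 1
env = Envelope(0.0, 10.0)
print(guarded_predict(model, 4.0, env))    # in-envelope: prints 9.0
print(guarded_predict(model, 42.0, env))   # out-of-envelope: prints defer-to-human
```

The design choice worth noting is that the fallback is an explicit, recoverable state (defer, safe-stop, hand-off), which is what distinguishes graceful failure from simply crashing.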
Those things are critically important, but many companies that are developing and marketing new AI capabilities and products may not have taken those characteristics into consideration. Ultimately, I think there's a risk of a consumer backlash where people may start saying these things are too easy to compromise and they're betraying too much of my personal information, so get them out of my house. What we can do to help, and the reason that we've prioritized trustworthy AI, is we can provide that foundational work that people in the consumer space need to manage those risks overall. And I think that the drumbeat for that will get increasingly louder as AI systems begin to be marketed for more than entertainment. Especially at the point when they start to operate critical infrastructure, we're going to need a little more assurance. That's where NIST can come together with industry to think about those things, and we've already had some conversations with industry about what trustworthy AI means and how we can get there. I'm often asked, how is it even possible to influence a trillion-dollar, multitrillion-dollar industry on a budget of $150 million? And the answer is, if we were sitting in our offices doing our own work independent of industry, we would never be able to. But that's not what we do. We can work in partnership with industry, and we do that routinely. And they trust us, they're thrilled when we show up, and they're eager to work with us.

AI is a scary idea for some people. They've seen “I, Robot,” or “The Matrix,” or “The Terminator.” What would you say to help them allay these fears?

I think some of this has been overhyped. At the same time, I think it's important to acknowledge that risks are there, and that they can be pretty high if they're not managed ahead of time. For the foreseeable future, however, these systems are going to be too fragile and too dependent on us to worry about them taking over.
I think the biggest revolution is not AI taking over, but AI augmenting human intelligence. We're seeing examples of that now, for instance, in the area of face recognition. The algorithms for face recognition have improved at an astonishing rate over the last seven years. We're now at the point where, under controlled circumstances, the best artificial intelligence algorithms perform on par with the best human face recognizers. A fascinating thing we learned recently, and published in a report, is that if you take two trained human face recognizers and put them together, the dual system doesn't perform appreciably better than either one of them alone. If you take two top-performing algorithms, the combination of the two doesn't really perform much better than either one of them alone. But if you put the best algorithm together with a trained recognizer, that system performs substantially better than either one of them alone. So, I think, human augmentation by AI is going to be the revolution.

What's next?

I think one of the things that is going to be necessary for us is pulling out the desirable characteristics like usability, interoperability, resilience, security, privacy and all the things that will require a certain amount of care to build into the systems, and get innovators to start incorporating them. Guidance and standards can help to do that. Last year, we published our plan for how the federal government should engage in the AI standards development process. I think there's general agreement that guidance will be needed for interoperability, security, reliability, robustness, these characteristics that we want AI systems to exhibit if they're going to be trusted.

https://www.nist.gov/blogs/taking-measure/trustworthy-ai-conversation-nists-chuck-romine
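The human-plus-algorithm result Romine describes can be illustrated with a toy score-level fusion: averaging two judges' match scores helps when their errors are uncorrelated (a human and an algorithm) and adds little when they make the same mistakes (two similar algorithms). The scores and threshold below are invented for illustration; the NIST study measured trained examiners, not this arithmetic:

```python
# Toy score-level fusion for a face-verification decision.
# Scores are hypothetical similarity values in [0, 1]; 0.5 is an
# arbitrary accept threshold chosen for the example.
def fuse(score_a: float, score_b: float) -> float:
    """Simple mean fusion of two match scores."""
    return (score_a + score_b) / 2

def decide(score: float, threshold: float = 0.5) -> bool:
    """Declare a match when the (fused) score clears the threshold."""
    return score >= threshold

# A borderline pair: the algorithm is unsure, the human leans "match".
algorithm_score, human_score = 0.45, 0.70

print(decide(algorithm_score))                       # algorithm alone: False
print(decide(fuse(algorithm_score, human_score)))    # fused decision: True
```

The fused score (0.575) clears the threshold the algorithm alone missed; with two near-identical algorithms the two scores would move together, so fusion would rarely flip a decision, which mirrors the report's finding.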

All news