January 21, 2020 | International, Big Data and Artificial Intelligence

Trustworthy AI: A Conversation with NIST's Chuck Romine

By: Charles Romine

Artificial Intelligence (AI) promises to grow the economy and improve our lives, but with these benefits, it also brings new risks that society is grappling with. How can we be sure this new technology is not just innovative and helpful, but also trustworthy, unbiased, and resilient in the face of attack? We sat down with NIST Information Technology Laboratory Director Chuck Romine to learn how measurement science can help provide answers.

How would you define artificial intelligence? How is it different from regular computing?

One of the challenges with defining artificial intelligence is that if you put 10 people in a room, you get 11 different definitions. It's a moving target. We haven't converged yet on exactly what the definition is, but I think NIST can play an important role here. What we can't do, and what we never do, is go off in a room and think deep thoughts and say we have the definition. We engage the community.

That said, we're using a narrow working definition specifically for the satisfaction of the Executive Order on Maintaining American Leadership in Artificial Intelligence, which makes us responsible for providing guidance to the federal government on how it should engage in the standards arena for AI. We acknowledge that there are multiple definitions out there, but from our perspective, an AI system is one that exhibits reasoning and performs some sort of automated decision-making without the interference of a human.

There's a lot of talk at NIST about “trustworthy” AI. What is trustworthy AI? Why do we need AI systems to be trustworthy?

AI systems will need to exhibit characteristics like resilience, security and privacy if they're going to be useful and if people are to adopt them without fear. That's what we mean by trustworthy. Our aim is to help ensure these desirable characteristics. We want systems that are capable of either combating cybersecurity attacks, or, perhaps more importantly, at least recognizing when they are being attacked. We need to protect people's privacy. If systems are going to operate in life-or-death type of environments, whether it's in medicine or transportation, people need to be able to trust AI will make the right decisions and not jeopardize their health or well-being.

Resilience is important. An artificial intelligence system needs to be able to fail gracefully. For example, let's say you train an artificial intelligence system to operate in a certain environment. Well, what if the system is taken out of its comfort zone, so to speak? One very real possibility is catastrophic failure. That's clearly not desirable, especially if you have the AI deployed in systems that operate critical infrastructure or our transportation systems. So, if the AI is outside of the boundaries of its nominal operating environment, can it fail in such a way that it doesn't cause a disaster, and can it recover from that in a way that allows it to continue to operate? These are the characteristics that we're looking for in a trustworthy artificial intelligence system.
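
To make "failing gracefully" concrete, here is a minimal, purely illustrative sketch of one common pattern: wrap the model behind a guard that checks whether an input falls inside the envelope the system was validated on, and fall back to conservative behavior when it does not. Everything here (the envelope, the fallback, the model stub) is a hypothetical stand-in, not anything NIST prescribes.

    # Illustrative only: a guard that lets an AI component fail gracefully
    # when an input falls outside its validated operating envelope.
    # All names and thresholds below are hypothetical stand-ins.

    NOMINAL_RANGE = (0.0, 1.0)  # envelope the model was validated on (assumed)

    def model_decision(x: float) -> str:
        """Stand-in for a trained model's automated decision."""
        return "proceed" if x > 0.5 else "yield"

    def safe_fallback(x: float) -> str:
        """Conservative default: degrade rather than fail catastrophically."""
        return "slow down and hand control to a human operator"

    def guarded_decision(x: float) -> str:
        lo, hi = NOMINAL_RANGE
        if not (lo <= x <= hi):
            # Outside the comfort zone: recover in a controlled way.
            return safe_fallback(x)
        return model_decision(x)

    print(guarded_decision(0.7))  # in envelope: model decides "proceed"
    print(guarded_decision(5.0))  # out of envelope: safe fallback

The design choice is simply that the guard, not the model, owns the decision about when the model may be trusted, so an out-of-envelope input produces a predictable degraded mode instead of a confident but wrong answer.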

NIST is supposed to be helping industry before they even know they need us. What are we thinking about in this area that is beyond the present state of AI development?

Industry has a remarkable ability to innovate and to provide new capabilities that people don't even realize that they need or want. And they're doing that now in the AI consumer space. What they don't often do is to combine that push to market with deep thought about how to measure characteristics that are going to be important in the future. And we're talking about, again, privacy, security and resilience ... trustworthiness. Those things are critically important, but many companies that are developing and marketing new AI capabilities and products may not have taken those characteristics into consideration. Ultimately, I think there's a risk of a consumer backlash where people may start saying these things are too easy to compromise and they're betraying too much of my personal information, so get them out of my house.

What we can do to help, and the reason that we've prioritized trustworthy AI, is we can provide that foundational work that people in the consumer space need to manage those risks overall. And I think that the drumbeat for that will get increasingly louder as AI systems begin to be marketed for more than entertainment. Especially at the point when they start to operate critical infrastructure, we're going to need a little more assurance.

That's where NIST can come together with industry to think about those things, and we've already had some conversations with industry about what trustworthy AI means and how we can get there.

I'm often asked, how is it even possible to influence a trillion-dollar, multitrillion-dollar industry on a budget of $150 million? And the answer is, if we were sitting in our offices doing our own work independent of industry, we would never be able to. But that's not what we do. We can work in partnership with industry, and we do that routinely. And they trust us, they're thrilled when we show up, and they're eager to work with us.

AI is a scary idea for some people. They've seen “I, Robot,” or “The Matrix,” or “The Terminator.” What would you say to help them allay these fears?

I think some of this has been overhyped. At the same time, I think it's important to acknowledge that risks are there, and that they can be pretty high if they're not managed ahead of time. For the foreseeable future, however, these systems are going to be too fragile and too dependent on us to worry about them taking over. I think the biggest revolution is not AI taking over, but AI augmenting human intelligence.

We're seeing examples of that now, for instance, in the area of face recognition. The algorithms for face recognition have improved at an astonishing rate over the last seven years. We're now at the point where, under controlled circumstances, the best artificial intelligence algorithms perform on par with the best human face recognizers. A fascinating thing we learned recently, and published in a report, is that if you take two trained human face recognizers and put them together, the dual system doesn't perform appreciably better than either one of them alone. If you take two top-performing algorithms, the combination of the two doesn't really perform much better than either one of them alone. But if you put the best algorithm together with a trained recognizer, that system performs substantially better than either one of them alone. So, I think, human augmentation by AI is going to be the revolution.
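
The fusion result has a simple statistical intuition: combining two scorers helps only to the extent that their errors are uncorrelated. The toy simulation below (not NIST's data or methodology; every number is invented) models two algorithms that share a common error source and a human whose errors are independent, and compares simple score averaging across the pairings.

    # Toy simulation of score fusion; invented numbers, not NIST's study.
    import random

    random.seed(0)
    N = 10_000
    samples = []
    for _ in range(N):
        margin = 1.0                      # true mated-vs-nonmated score gap
        common = random.gauss(0, 1.0)     # error shared by both algorithms
        algo1 = margin + common + random.gauss(0, 0.5)
        algo2 = margin + common + random.gauss(0, 0.5)
        human = margin + random.gauss(0, 1.1)  # independent human error
        samples.append((algo1, algo2, human))

    def correct_rate(scores):
        """Fraction of comparisons ranked correctly (fused score > 0)."""
        return sum(s > 0 for s in scores) / N

    print("human alone:  ", correct_rate([h for _, _, h in samples]))
    print("algo alone:   ", correct_rate([a1 for a1, _, _ in samples]))
    print("algo + algo:  ", correct_rate([(a1 + a2) / 2 for a1, a2, _ in samples]))
    print("algo + human: ", correct_rate([(a1 + h) / 2 for a1, _, h in samples]))

With these invented noise levels, pairing two correlated algorithms barely moves the correct-comparison rate, while the algorithm-plus-human pair improves it noticeably, mirroring the qualitative pattern described above.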

What's next?

I think one of the things that is going to be necessary for us is pulling out the desirable characteristics like usability, interoperability, resilience, security, privacy and all the things that will require a certain amount of care to build into the systems, and getting innovators to start incorporating them. Guidance and standards can help to do that.

Last year, we published our plan for how the federal government should engage in the AI standards development process. I think there's general agreement that guidance will be needed for interoperability, security, reliability, robustness, these characteristics that we want AI systems to exhibit if they're going to be trusted.

https://www.nist.gov/blogs/taking-measure/trustworthy-ai-conversation-nists-chuck-romine

On the same topic

  • The US Army is developing an innovative drone-robot collaboration concept

    December 17, 2020

    To increase the endurance and range of its drones, the US Army intends to make its aerial and ground swarms work together.

    Robots to recharge drones. The US Army is currently studying an innovative concept aimed at making drones and robots collaborate, thereby improving the performance of its drone swarms. To extend the capabilities of the drones deployed within a swarm, the drones will descend and land on ground robots, which will serve as charging platforms. It is a clever way to considerably increase the range and endurance of these small aircraft.

    Algorithms and artificial intelligence. To carry out the project, the US Army's research laboratory has awarded the University of Illinois a four-year agreement with an $8 million research budget. A central challenge is to develop artificial intelligence reliable enough for the drones to land safely on the ground robots, and for the robots to track the aircraft in flight. Many constraints follow from the operational environment in which these drones will be deployed: they must remain stealthy while avoiding potential obstacles, since the entire maneuver will be performed automatically. The swarm dimension will also have to be managed, because the ambition is to run missions continuously; the drones will therefore have to take turns in the charging phases so that the mission is never interrupted (a minimal rotation-scheduling sketch appears after this list).

    Freeing up the soldier's mental load. Through this project, the objective is also to relieve soldiers, both operationally and logistically. Troops will no longer have to pilot the drones or manage battery levels and replacements; everything will happen automatically, letting operators concentrate on high-value tasks.

    https://www.air-cosmos.com/article/lus-army-dveloppe-un-concept-innovant-de-collaboration-drones-robots-23979

  • Panel wants to double federal spending on AI

    April 2, 2020

    By Aaron Mehta

    A congressionally mandated panel of technology experts has issued its first set of recommendations for the government, including doubling the amount of money spent on artificial intelligence outside the defense department and elevating a key Pentagon office to report directly to the Secretary of Defense.

    Created by the National Defense Authorization Act in 2018, the National Security Commission on Artificial Intelligence is tasked with reviewing “advances in artificial intelligence, related machine learning developments, and associated technologies,” for the express purpose of addressing “the national and economic security needs of the United States, including economic risk, and any other associated issues.” The commission issued an initial report in November, at the time pledging to slowly roll out its actual policy recommendations over the course of the next year. Today's report represents the first of those conclusions — 43 of them, in fact, tied to legislative language that can easily be inserted by Congress during the fiscal year 2021 budget process.

    Bob Work, the former deputy secretary of defense who is the vice chairman of the commission, said the report is tied into a broader effort to move DoD away from a focus on large platforms. “What you're seeing is a transformation to a digital enterprise, where everyone is intent on making the DoD more like a software company. Because in the future, algorithmic warfare, relying on AI and AI-enabled autonomy, is the thing that will provide us with the greatest military competitive advantage,” he said during a Wednesday call with reporters.

    Among the key recommendations:

    • The government should “immediately double non-defense AI R&D funding” to $2 billion for FY21, a quick cash infusion that should strengthen academic centers and national labs working on AI issues. The funding should “increase agency topline levels, not repurpose funds from within existing agency budgets, and be used by agencies to fund new research and initiatives, not to support re-labeled existing efforts.” Work noted that he recommends this R&D double again in FY22. The commission leaves open the possibility of recommendations for increasing DoD's AI investments as well, but said it wants to study the issue more before making such a request. In FY21, the department requested roughly $800 million in AI developmental funding and another $1.7 billion in AI-enabled autonomy, which Work said is the right ratio going forward. “We're really focused on non-defense R&D in this first quarter, because that's where we felt we were falling further behind,” he said. “We expect DoD AI R&D spending also to increase” going forward.

    • The Director of the Joint Artificial Intelligence Center (JAIC) should report directly to the Secretary of Defense, and the JAIC should continue to be led by a three-star officer or someone with “significant operational experience.” The first head of the JAIC, Lt. Gen. Jack Shanahan, is retiring this summer; currently the JAIC falls under the office of the Chief Information Officer, who in turn reports to the secretary. Work said the commission views the move as necessary to make sure leadership in the department is “driving” investment in AI, given all the competing budgetary requirements.

    • The DoD and the Office of the Director of National Intelligence (ODNI) should establish a steering committee on emerging technology, tri-chaired by the Deputy Secretary of Defense, the Vice Chairman of the Joint Chiefs of Staff, and the Principal Deputy Director of ODNI, in order to “drive action on emerging technologies that otherwise may not be prioritized” across the national security sphere.

    • Government microelectronics programs related to AI should be expanded in order to “develop novel and resilient sources for producing, integrating, assembling, and testing AI-enabling microelectronics.” The commission also calls for articulating a national strategy for microelectronics and associated infrastructure. Funding for DARPA's microelectronics program should be increased to $500 million, and a $20 million pilot microelectronics program focused on AI hardware should be established under the Intelligence Advanced Research Projects Activity (IARPA).

    • A new office, tentatively called the National Security Point of Contact for AI, should be established, with allied governments encouraged to do the same in order to strengthen coordination at an international level. The first goal for that office would be to develop an assessment of allied AI research and applications, starting with the Five Eyes nations and then expanding to NATO.

    One issue identified early by the commission is the question of ethical AI. The commission recommends mandatory training on the limits of artificial intelligence for the AI workforce, which should include discussions of ethical issues. The group also calls for the Secretary of Homeland Security and the director of the Federal Bureau of Investigation to “share their ethical and responsible AI training programs with state, local, tribal, and territorial law enforcement officials,” and to track which jurisdictions take advantage of those programs over a five-year period.

    Missing from the report: any mention of the Pentagon's Directive 3000.09, a 2012 order laying out the rules about how AI can be used on the battlefield. Last year C4ISRNet revealed that there was an ongoing debate among AI leaders, including Work, on whether that directive was still relevant.

    While not reflected in the recommendations, Eric Schmidt, the former Google executive who chairs the commission, noted that his team is starting to look at how AI can help with the ongoing COVID-19 coronavirus outbreak, saying, “We're in an extraordinary time... we're all looking forward to working hard to help any way that we can.” The full report can be read here.

    https://www.c4isrnet.com/artificial-intelligence/2020/04/01/panel-wants-to-double-federal-spending-on-ai/

  • How DoD can improve its technology resilience

    December 17, 2020

    By Mark Pomerleau

    WASHINGTON — The Department of Defense must bolster the resilience of its mission platforms in order to stay ahead of threats, a new think tank report says.

    With the military's shift toward great power competition, or conflict against nation-states, its systems and platforms will be under greater stress than they faced from the technologically inferior adversaries battled during the counterterrorism fight of the last decade-plus. Systems and networks are expected to be contested, disrupted and even destroyed, meaning officials need to build in redundancy and resilience from the start to work through such challenges. In fact, top defense officials have been warning for several years that they are engaged in conflict taking place below the threshold of armed conflict, in which adversaries probe networks and systems daily for espionage or disruptive purposes.

    “Resilience is a key challenge for combat mission systems in the defense community as a result of accumulating technical debt, outdated procurement frameworks, and a recurring failure to prioritize learning over compliance. The result is brittle technology systems and organizations strained to the point of compromising basic mission functions in the face of changing technology and evolving threats,” said a new report out today by the Atlantic Council titled “How Do You Fix a Flying Computer? Seeking Resilience in Software-Intensive Mission Systems.” “Mission resilience must be a priority area of work for the defense community. Resilience offers a critical pathway to sustain the long-term utility of software-intensive mission systems, while avoiding organizational brittleness in technology use and resulting national security risks. The United States and its allies face an unprecedented defense landscape in the 2020s and beyond.”

    This resilience is built on three pillars, the authors write: robustness, the ability of a system to negate the impact of a disruption; responsiveness, the ability of a system to provide feedback and incorporate changes in response to a disruption; and adaptability, the ability of a system to change itself to continue operating despite a disruption.

    Systems, the report notes, are more than just the sum of their parts — hardware and software — and are much broader, encompassing people, organizational processes and technologies.

    To date, DoD has struggled to manage complexity and develop robust and reliable mission systems, even in a relatively benign environment, the report bluntly asserts, citing problems with the F-35's Autonomic Logistics Information System (ALIS) as one key example.

    “A conflict or more contested environment would only exacerbate these issues. The F-35 is not alone in a generation of combat systems so dependent on IT and software that failures in code are as critical as a malfunctioning munition or faulty engine — other examples include Navy ships and military satellites,” the authors write. “To ensure mission systems like the F-35 remain available, capable, and lethal in conflicts to come demands the United States and its allies prioritize the resilience of these systems. Not merely security against compromise, mission resilience is the ability of a mission system to prevent, respond to, and adapt to both anticipated and unanticipated disruptions, to optimize efficacy under uncertainty, and to maximize value over the long term. Adaptability is measured by the capacity to change — not only to modify lines of software code, but to overturn and replace the entire organization and the processes by which it performs the mission, if necessary. Any aspect that an organization cannot or will not change may turn out to be the weakest link, or at least a highly reliable target for an adversary.”

    The report offers four principles that defense organizations can adopt to be more resilient in future conflicts against sophisticated adversaries:

    • Embrace failure: DoD must be more willing to take risks and embrace failure to stay ahead of the curve. Organizations can adopt concepts such as chaos engineering, experimenting on a system to build confidence in its ability to withstand turbulent conditions in production (a minimal sketch follows this list), and planning for loss of confidentiality in compromised systems.

    • Improve speed: DoD must be faster at adapting and developing, which includes improving its antiquated acquisition policies and adopting agile methodologies of continuous integration and delivery. Of note, DoD has created a software acquisition pathway and is implementing agile methodologies of continuous integration and delivery, though on small scales.

    • Always be learning: Defense organizations operate in a highly contested cyber environment, the report notes, and as the department grows more complex, how it learns and adapts to rapidly evolving threats grows in importance. Thus, it must embrace experimentation and continuous learning at all levels of its systems as a tool to drive improvement.

    • Manage trade-offs and complexity: DoD should improve mission system programs' understanding of the trade-offs between near-term functionality and long-term complexity, including their impact on systems' resilience.

    https://www.c4isrnet.com/cyber/2020/12/14/how-dod-can-improve-its-technology-resilience/
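
    As a purely illustrative aside on the chaos-engineering idea the report mentions: the pattern is to inject faults into a component on purpose and then verify that the mission still completes, degraded but alive. A minimal sketch with entirely hypothetical components (nothing here comes from the Atlantic Council report itself):

        # Hypothetical chaos-style fault-injection experiment. The fault is
        # injected deliberately; resilience means every message still arrives.
        import random

        random.seed(42)

        def primary_datalink(msg: str) -> str:
            if random.random() < 0.4:  # injected fault rate for the experiment
                raise ConnectionError("injected fault: primary link down")
            return f"sent via primary: {msg}"

        def backup_datalink(msg: str) -> str:
            return f"sent via backup (degraded): {msg}"

        def send_with_resilience(msg: str) -> str:
            """Robustness: absorb the fault; responsiveness: detect and fail over."""
            try:
                return primary_datalink(msg)
            except ConnectionError as err:
                print(f"  observed: {err}; failing over")
                return backup_datalink(msg)

        for i in range(5):  # every message should get through, some degraded
            print(send_with_resilience(f"position report {i}"))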
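
    And for the drone-robot charging story at the top of this list, here is the rotation idea in miniature: keep a fixed number of drones on station and swap any low-battery flyer with the best-charged drone parked on a ground robot. This is a made-up toy, not the Army's or the University of Illinois' design; every parameter is invented.

        # Toy rotation scheduler for continuous swarm coverage; all battery
        # numbers are invented, and real scheduling would be far richer.
        from collections import deque

        FULL, DRAIN, CHARGE, LOW = 100, 20, 35, 30  # made-up battery parameters

        def simulate(num_drones=4, on_station=2, steps=6):
            battery = {d: FULL for d in range(num_drones)}
            flying = deque(range(on_station))                # on mission
            charging = deque(range(on_station, num_drones))  # on ground robots
            for t in range(steps):
                for d in flying:
                    battery[d] -= DRAIN
                for d in charging:
                    battery[d] = min(FULL, battery[d] + CHARGE)
                # Swap each low-battery flyer with the best-charged parked
                # drone, so coverage never drops below on_station.
                for d in list(flying):
                    if battery[d] <= LOW and charging:
                        fresh = max(charging, key=battery.get)
                        charging.remove(fresh)
                        flying.remove(d)
                        flying.append(fresh)
                        charging.append(d)
                levels = {d: battery[d] for d in sorted(battery)}
                print(f"t={t}: flying={sorted(flying)} batteries={levels}")

        simulate()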

All news