As Russia and Ukraine fight it out, a look back into NSCAI’s Final Report


by Piyush Mathur


On March 1, 2021, the United States (US) declared its official entry into the emergent Age of Artificial Intelligence by publishing the Final Report of its National Security Commission on Artificial Intelligence (NSCAI). Global geopolitics, especially its security outlook, has, of course, changed profoundly since—what with the onset, on February 24, 2022, of the Russian invasion of Ukraine (in itself a belated update of Russia’s 2014 annexation of Crimea). This second Russian attack on Ukraine began at a time when the abhorrent optics of the American exit from Afghanistan (on August 30, 2021) were still fresh enough in the minds of many—and they had highlighted a peculiar asymmetry between an advanced, albeit fleeing, invader—the United States—and a disastrously under-resourced, primitive prevailer—the Taliban.


That technological and even economic asymmetry between the US and the Taliban had ultimately failed to favour the invader. In that tiny, yet pointed moment of geopolitical history, the world, once again, had been handed a subliminal reality check of sorts regarding the technological state of humankind and the extent of its geopolitical-cum-militaristic significance. NSCAI’s Final Report antedated this latest geopolitical humiliation of the US by around six months; there is thus an irony of sorts to the fact that it comprises a comprehensive nationalistic-strategic centrestaging of AI—a technological assortment so advanced that it is as yet basically embryonic!

But let’s not get ahead of ourselves in regard to the Final Report. We have to keep in mind the following as well: The Russian invasion of Ukraine this time around has also turned out to be a reality check—on several counts—but this reality check has had to do with the exposure of facile beliefs regarding the invader’s technological sophistication, aside from its ability to translate the same into clinical military victories, among other things. Broadly, the three key realities that have come to light since the start of the Russian invasion of Ukraine are as follows (with the first two of them having left security and intelligence establishments from around the world somewhat puzzled, if not entirely embarrassed, about their prior estimates): Russia under Putin was not half as sophisticated a military, strategic, or intelligence player as its reputation had suggested up until the first couple of weeks of the invasion; Ukraine had been better prepared, under Volodymyr Zelenskyy, than had been imagined; and an AI-powered, robotically augmented, sci-fi style war is still a distant nightmare.

In exploring the broadest contours of the Final Report, it is an aspect of the last of the above three realities that will concern us—given that this Russia-Ukraine war, underneath all the gory wreckage and hard rubble, has tipped us off a little bit to the type of nightmare that is mentioned above. In an opinion piece published in the Los Angeles Times, Roberto J. González—the author of War Virtually: The Quest to Automate Conflict, Militarize Data, and Predict the Future (2022)—has picked out the usage of drones, Google Maps, and social media propaganda in this war as humanity’s pre-adult steps toward a full-fledged virtual war. The Final Report provides not only the American roadmap to that type of virtual world of global strategic interaction (to put it politely), but also a substantial guide to the role that AI specifically is envisaged to play in it.

General comments
NSCAI was constituted on August 13, 2018, under Section 1051 of the John S. McCain National Defense Authorization Act for Fiscal Year 2019 (P.L. 115-232). A bipartisan body of 15 persons—including technologists, national security professionals, business executives, and academic leaders—this Commission has been helping the President and Congress in making AI-related decisions; its Final Report is the result of its two years of work. While one might think of the Commission’s endeavour as a domestic political tactic intended to direct massive funds at the earliest into the AI sector, the report itself seeks to push the national panic button on AI in the language of pressing sincerity. Right on the first page, in the ‘Letter from the Chair and Vice Chair’, one comes across this declaration: ‘America is not prepared to defend or compete in the AI era.’ This warning of a declaration is repeated several times in the report.

Readers would recall that Vladimir Putin was the first global leader to formally acknowledge the futuristic geopolitical and strategic significance of AI in his September 1, 2017 address to Russian school children. What the Final Report makes clear, however, is that it is China that most interests the US as regards AI (and, by extension, as regards the next 30 years or so). Russia surely makes more than a guest appearance (as an adversary) in the report—but it shows up more as a significant member of the global undemocratic club than as the main AI-era threat to the US. Even generally, the Commission presents Russia in the report as somewhat less sharply opposed to the US than China (and of course we realize how anachronistic that stance already seems, given the unfolding tensions between the US-led West and Russia).

General outline
The Commission’s recommendations are meant to be used as a whole and in accordance with the timeline it specifies for each of them. The first part of the report—‘Defending America in the AI Era’—theorizes and contextualizes AI (especially from an American strategic perspective). This part also articulates evidence-based and reasoned advice for the US government to help it protect American interests, including civic ones, in the AI era. The second part, ‘Winning the Technology Competition’, identifies areas of future importance in AI and broader technological research, recommending actions to the US government to help it promote American AI ‘competitiveness’ while retaining ‘critical U.S. advantages’ (8). Half of the report, though, comprises the Blueprint, which tells the government exactly what to do in order to make the Commission’s recommendations a reality.

Domestic social measures
Carefully identifying and classifying how external adversaries of the US could harm the country socially, commercially, and politically using AI and other digital tools, the Commission makes three key recommendations that are intended to intensify and expand prior AI-related efforts internally. These recommendations can be summarized as follows: Provide additional funding to the Defense Advanced Research Projects Agency (DARPA)—to help it counteract AI-enabled fakery while supporting valid digital sources of information; establish an AI-enabled, 24-hour ‘Joint Interagency Task Force and Operations Center’; and have the White House Office of Science and Technology Policy establish a taskforce to standardize authentication certification for AI-related content.

A fourth key recommendation of the Commission is to treat civic and commercial data ‘as a national security asset’ (50). There are many concrete systemic steps that the Final Report recommends to the government in this regard; perhaps the most frequently mentioned one is to promote ‘adversarial testing’ through ‘AI red-teaming’ (52). Recognizing AI’s potential to alter biological capabilities of organisms (including humans), the report also recommends that the US ‘update the National Biodefense Strategy’ to include in its purview AI-associated dangers (including foreign access to ‘personal genetic information’) (53).

Defence
Regarding the future of American defence, NSCAI wants the Department of Defense (DoD) to be ready by 2025 to integrate AI into its operations. While the DoD has had a Data Strategy in place since 2020, the Commission recommends that its Secretary establish a department-wide ‘digital ecosystem’ for its ‘AI development teams and critical industry partners’—and put in place ‘a digitally literate workforce’ as well as ‘modern AI-enabled business practices’ (63, 61). Noting that the DoD’s ‘resource allocation process’ has barely changed since 1961, the Commission also urges it and Congress to reform its budgeting so that ‘software and digital technologies’ may be suitably promoted (66).

Then, against the backdrop of General Joseph Dunford’s 2017 testimony that the US stands to lose its ‘ability to project power’ within the next five years, the Commission makes the following key recommendations: Embed fully capable ‘AI delivery teams...at each Combatant Command’ (68); harmonize ‘the DoD and the National Security Innovation Base…to sustain the competitive advantage’ (66); and promote ‘bottom-up AI development’ inside the DoD, whereby ‘the Joint Artificial Intelligence Center (JAIC)’ would serve ‘as the Department’s AI Accelerator’ (67). On the financial side, the Commission recommends expanding ‘Core AI spending from $1.5B to $8B per year by 2025’ (67).

Intelligence
While crediting the Intelligence Community (IC) for being the farthest ahead within the US government in adopting AI, the Commission insists that it reform its ‘security clearance process’ for optimizing its AI usage by 2025 (110). The Commission also recommends automating, quantifying, and fully streamlining the security clearance process as well as other aspects of American intelligence—which it wishes to see as being interoperable across the agencies. A future is envisaged here in which ‘all available data and information’ are processed ‘through AI-enabled analytic systems before human analyst review’—and the resultant outputs are ‘disseminated at machine speed’ (110).

In relation to the above, the Commission recommends that ‘all intelligence products include both a human-readable version and...an automated machine-readable version that can be ingested into other analytic systems throughout the IC’ (110). The Commission further advises that American intelligence functionaries be recruited and trained to retain a standard of ‘digital literacy and access to the digital infrastructure and software required for ubiquitous AI integration in each stage of the intelligence cycle’ (110).

Public sector
Coming back to the present, the Commission identifies the ‘talent deficit’ inside the government sector as the ‘greatest impediment’ to the US becoming AI-ready by 2025—and it rejects the notion of maximal outsourcing as a means to overcome it (121). What the Commission recommends is that the US government organize the best specialists among its employees via ‘a talent management system’ managed by a ‘Digital Corps’—to be ‘modeled on the Army Medical Corps’—which would complement a civilian institution-to-be called the ‘National Reserve Digital Corps (NRDC)’ (10, 125). This civilian institution, NRDC, in turn, is recommended to be ‘modeled after the military reserve’s commitments and incentive structure’ (125).

The report further recommends establishing a United States Digital Service Academy (USDSA)—‘an accredited, degree-granting university’ with hybrid funding and managed by a dedicated, independent federal agency (127). This academy would address ‘the government’s needs for digital expertise’—as determined by an interagency board, with assistance from ‘a Federal Advisory Committee composed of private-sector and academic technology leaders’ (127). While it should prepare ‘civilians for all federal government departments and agencies’, the USDSA is recommended to be designed along the lines ‘of the five U.S. military service academies’ (127).

Internal leadership
The Commission identifies cultivating ‘justified confidence in AI systems’ as a key goal, recommending that the US establish and maintain a domestic leadership for them (137). That would mean appointing a full-time AI leader in each key security-sensitive department as well as each branch of the armed services, and putting together a ‘standing body’ of multidisciplinary specialists inside the National AI Initiative Office (137). This bureaucracy would facilitate implementing the following further objectives, each crucial to cultivating justified confidence in AI systems inside the US: articulating and maintaining robust and reliable AI; developing and fine-tuning human-AI interaction and teaming; and establishing and maintaining a complete Testing and Evaluation, Verification and Validation (TEVV) programme for American AI systems (to be).

The report explains methodically how to develop and deploy ‘AI across the national security community’ (134). In that regard, apart from greater investments into R&D involving interdisciplinary risk assessments, the Commission recommends better documentation and the use of architectures that can limit the fallout of AI system failures. The Commission also specifies the research trajectories in human-AI interaction that it believes deserve to be pursued—recommending that the national security research laboratories invest more into this arena overall. As to TEVV, the Commission recommends that the DoD should articulate an AI-specific system that would ‘minimize performance problems and unanticipated outcomes’, with the National Institute of Standards and Technology (NIST) establishing and maintaining all its parameters (137).

Civic dimension
Referencing the rise of ‘techno-authoritarian governance’ abroad, the Commission insists that ‘the United States…continue to serve as a beacon of democratic values’—by which it means ‘privacy, civil liberties, and civil rights’ (143). To that end, the US must ensure that its bureaucratic usage of AI aligns ‘with principles of limited government and individual liberty’ (143). But inasmuch as AI itself is recommended to be governed democratically, the Commission suggests investing in and adopting AI tools to enhance ‘oversight and auditing’; strengthening ‘oversight mechanisms’ and ‘public transparency’; developing systems aimed at ensuring ‘privacy and fairness’; and protecting ‘legal redress and due process’ (142).

Against the above backdrop, the Commission recommends setting up a task force that would evaluate AI’s legal and policy-related implications and suggest reforms required to ensure privacy rights and civil liberties for all Americans through the AI era (148). The Commission also recommends that the Privacy and Civil Liberties Oversight Board (PCLOB)—set up in 2007—

be given visibility into AI systems before they are fielded, including at a more granular technical level, and should be resourced and staffed to fulfill the more technically sophisticated mission that the AI era now requires.

Likewise, it recommends that DHS Offices of Privacy and Civil Rights and Civil Liberties (CRCL) be integrated into ‘the legal and approval processes’ related to procuring and using ‘AI-enabled systems, including for associated data used in DHS ML systems’ (149).

The Commission still sees the need for general oversight of this civic dimension of AI; it thus recommends setting up ‘a standing body’ that would ‘align and coordinate to enhance AI oversight and audit with respect to privacy, civil liberties, and civil rights’ (149). The Commission also recommends that Congress ‘require AI Risk Assessment Reports and AI Impact Assessments from the Intelligence Community, the Department of Homeland Security, and the Federal Bureau of Investigation’, suggesting that ‘DHS and the FBI...improve practices for issuing system of records notices and privacy impact assessments to provide a more holistic view of the role of AI systems before they are fielded’ (146). Recommending regular ‘assessments of privacy and fairness assurances’ (and of the definition of fairness used), the Commission also highlights some relevant interim steps that DHS and the FBI could take.

The Commission also recommends setting up inside each concerned agency an official group of people that would be entrusted to conduct pre-deployment reviews of any and all components of AI systems. For ‘high-stakes systems’, it recommends mandating ‘independent, third-party testing’ (otherwise to be kept voluntary); for such testing, it recommends establishing dedicated centres (147). The Commission further recommends agency-internal reviews of AI employment policies and associated grievance redressal mechanisms: Crucially, it recommends that Attorney General guidance be issued on ‘the due process rights of U.S. persons when AI use may lead to a deprivation of life or liberty’ (148).

Talent
In regard to attracting and retaining AI talent, the Commission notes that the US is falling behind in producing STEM graduates; to address this shortfall, it recommends passing a National Defense Education Act II, as an AI-era follow-up to the National Defense Education Act of 1958. This new act is supposed to focus on instruction in ‘digital skills, like mathematics, computer science, information science, data science, and statistics’ across the American educational spectrum—and it ‘recommends investments in university-level STEM programs with 25,000 undergraduate, 5,000 graduate, and 500 PhD-level scholarships’ (175). Also recommended here are a slew of measures aimed at re-orienting American immigration policy to strengthen the international inflow and retention of AI-related talent.

Those measures include ‘doubl[ing] the number of employment-based green cards’, stressing ‘permanent residency for STEM and AI-related fields’, and introducing ‘an entrepreneur visa for those’ whose stay in the US would significantly benefit the public (179). The Commission also suggests that the National Science Foundation (NSF) be entrusted to list out ‘critical emerging technologies every three years’—and that the DHS use that input to allow relevant foreigners ‘to apply for emerging and disruptive technology visas’ (179). In Footnote 20 of Chapter 10, the Commission cannily notes that ‘if the companies remain American, it reduces the American Intelligence Community’s (IC) legal authorization to collect information about Chinese technology development’ (181).

Intellectual property & international technological order
Citing the fact that the American system has denied ‘patent protection’ to AI and biotech inventions since 2010—a situation that has made American ‘inventors pursue trade secret protection’—the Commission raises the fear of China, given that recent years have seen more AI patent applications of Chinese rather than American origin. The Commission thus declares ‘intellectual property policy…a national security priority’, demanding a quick clarification of it in relation to AI and emerging technologies via an executive order (200). The Commission also recommends producing a plan for reforming intellectual property policies in response to AI—and for taking relevant executive and legislative actions; it articulates steps that the US establishment must take to make Intellectual Property (IP) advances in AI.

Associated research areas, technologies, and steps
The Commission wants the US to ensure that it stay ‘two generations ahead of potential adversaries in state-of-the-art microelectronics while also maintaining multiple sources of cutting-edge microelectronics fabrication’ domestically (216). In that regard, the Commission’s insistence is that the country focus on

promising areas such as next-generation tools beyond extreme ultraviolet lithography, 3D chip stacking, photonics, carbon nanotubes, gallium nitride transistors, domain-specific hardware architectures, electronic design automation, and cryogenic computing. (220)

The Commission also realizes that the American endeavour to maintain AI leadership would have little meaning unless it also has ‘a comprehensive strategy to sustain’ the same ‘in key associated technologies’; it thus identifies the following technologies or technological capabilities besides AI as those requiring special governmental consideration: advanced biofabrication capabilities; quantum chip fabrication; 5G spectrum sharing; robotics software; additive manufacturing; and energy storage technologies (256). Indeed, the Commission recommends digitally connecting the ‘physical assets’ of the US—and ensuring optimal digital skilling of American citizens—to make the nation-state maximally AI-enabled.

‘Blueprint for Action’
Comprising a chapter-wise ‘Blueprint for Action’ and several Appendices, the latter half of this report (272-756) is a step-by-step how-to manual for realizing the entirety of the Commission’s recommendations for the American (and, for that matter, a global-democratic) future in AI. This manual comes supplemented with draft budgets, legal guidelines, draft legislation, and global collaborative plans.

Concluding remarks
In this 754-page Final Report, one would be lucky to find even a couple of typos, let alone any gaps in the details of the Commission’s recommendations and its action plan for their projected implementation. An elegant compilation of meticulous, collaborative research across a vast range of disciplines, this report would impress a scholar—no matter what that person’s geopolitical stance on the themes it covers—as a work of applied political philosophy and futuristic imagination on a grand scale. This report should be read by university students (not only in the US) who wish to understand AI comprehensively—and not merely as an engineering enterprise.

The release of this report, however, reflects that the AI discourse is already a security discourse, quite generally, and the US anyway has undertaken to spearhead it in that shape. The report has resulted from, and responds to, an internal perception within the US that it is a latecomer to AI on the strategic scale—and that it has been falling behind somewhat in the digital race as well. The unfolding post-pandemic geopolitical scenario (and how it continues to implicate China)—coupled with the facts that have been tumbling out regarding Russia’s overall capabilities through the ongoing war in Ukraine—does not necessarily indicate that the US has all that much to worry about, AI or otherwise, in terms of its own security.

Meanwhile, the report’s recommended realignments regarding higher education and intellectual property inherently undercut many a latter-day ideal regarding economic globalization and liberal-humanist free flow of knowledges. Inasmuch as the report harks back to the National Defense Education Act of 1958, it simply reawakens us to the fact that higher education remains a subset of geopolitical realpolitik in the United States as much as anywhere else in the world. But by now, it just seems that whatever is taught inside the presently moribund sector of socio-humanistic studies is consigned to being little more than a game of intellectualist entertainment.



Piyush Mathur, Ph.D., is a Research Scholar at the Ronin Institute—and the author of Technological Forms and Ecological Communication: A Theoretical Heuristic (Lexington Books, 2017).


References & background material
Cachero, Paulina (April 18, 2022) ‘Most U.S. College Grads Don’t Work in the Field They Studied, Survey Finds’ (Downloaded from the following URL on April 25, 2022: https://www.bloomberg.com/news/articles/2022-04-18/is-college-worth-it-most-graduates-work-in-other-fields?fbclid=IwAR3n4g_d5ivHGx-PTEht2b0xQQQByImZxMg8oKn26AR8Bz0VbYO_sSZWDQY)

González, Roberto J. (April 7, 2022) ‘On display in Ukraine: the dangerous, futuristic world of virtual war’ Los Angeles Times (Downloaded from the following URL on April 25, 2022: https://www.latimes.com/opinion/story/2022-04-07/ukraine-war-virtual-drones-ai-social-media)

National Security Commission on Artificial Intelligence (NSCAI) (March 1, 2021) Final Report (United States) (Downloaded from the following URL on July 18, 2021: https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf)

