This section describes the research and application of AI technologies in several government organizations. AI use ranges from Machine Learning to Artificial Neural Networks. More details of these and other applications can be found on the Links Page.
Army researchers are surveying companies with AI and Machine Learning capabilities to develop autonomous cyber defenses that protect tactical networks and communications. Capabilities would include detecting and mitigating known cyber vulnerabilities; identifying and correcting misconfigurations in networks and hosts; detecting known and previously unknown malware samples; and creating machine learning-based cyber agents that parse data flows and messages to detect and deduce the intent of an attack.
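As a rough illustration of that last capability, the sketch below trains a classifier to separate benign network flows from suspicious ones. The flow features, data, and labels are synthetic placeholders for illustration only, not Army data or tooling.

```python
# Minimal sketch of an ML "cyber agent" that classifies network flows as
# benign or suspicious. All features and data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000

# Hypothetical flow features: [bytes_sent, bytes_received, duration_s, distinct_ports]
benign = rng.normal(loc=[4e4, 6e4, 30, 3], scale=[1e4, 2e4, 10, 1], size=(n, 4))
attack = rng.normal(loc=[9e4, 5e3, 5, 40], scale=[2e4, 2e3, 2, 10], size=(n, 4))

X = np.vstack([benign, attack])
y = np.array([0] * n + [1] * n)  # 0 = benign, 1 = suspected attack

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test), target_names=["benign", "attack"]))
```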
The Navy is using data analytics tools to understand the link between readiness and battle action. By mining historical data to derive readiness drivers, comparing those drivers to success or failure in real-life scenarios, and then applying Predictive Analytics, the Navy can forecast the changes in investment in those drivers that will produce better decision outcomes and ROI for taxpayers.
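A minimal sketch of that kind of analysis, assuming a few hypothetical readiness drivers (maintenance hours, training days, spares budget) and a synthetic mission-outcome score, is shown below: fit a simple model to historical data, then forecast how a proposed change in driver investment shifts the predicted outcome.

```python
# Sketch of predictive analytics on hypothetical readiness drivers.
# Driver names, data, and coefficients are illustrative, not Navy data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical drivers per unit: [maintenance_hours, training_days, spares_budget_$M]
drivers = rng.uniform([100, 10, 1.0], [500, 60, 5.0], size=(300, 3))
# Synthetic historical outcome: higher drivers tend to raise a mission-success score
outcome = (0.002 * drivers[:, 0] + 0.01 * drivers[:, 1]
           + 0.08 * drivers[:, 2] + rng.normal(0, 0.1, 300))

model = LinearRegression().fit(drivers, outcome)

# Forecast the effect of shifting investment toward training and spares
current = np.array([[300, 30, 2.0]])
proposed = np.array([[300, 45, 3.0]])
delta = (model.predict(proposed) - model.predict(current))[0]
print("Predicted change in readiness score:", delta)
```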
Air Force researchers are seeking to shrink the size, weight, and power consumption (SWaP) of artificial intelligence (AI) and machine learning (ML) embedded in avionics computing for a variety of military aircraft. The Air Force Research Laboratory (AFRL) Information Directorate in Rome, NY is soliciting industry proposals to reduce SWaP while driving greater sophistication, autonomy, intelligence, and assurance for command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) applications and SWaP-constrained aircraft. Among other things, the Air Force needs new and unconventional architectures, innovative technologies, power-aware and energy-optimized deep learning, real-time embedded plug-and-play capabilities, and advanced computing architectures, algorithms, and applications.
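One technique commonly used to shrink the memory and energy footprint of embedded ML, and one plausible ingredient of power-aware deep learning, is post-training weight quantization. The sketch below is a generic illustration on synthetic weights, not an AFRL or program-specific method.

```python
# Minimal sketch of post-training quantization: store neural-network weights
# as 8-bit integers instead of 32-bit floats, cutting memory roughly 4x.
# The weight matrix here is synthetic.
import numpy as np

rng = np.random.default_rng(3)
weights = rng.normal(0, 0.05, size=(256, 256)).astype(np.float32)  # one dense layer

scale = np.abs(weights).max() / 127.0          # symmetric linear quantization scale
q = np.round(weights / scale).astype(np.int8)  # what would be stored on the device
dequant = q.astype(np.float32) * scale         # reconstructed at inference time

print("float32 bytes:", weights.nbytes, " int8 bytes:", q.nbytes)
print("max reconstruction error:", float(np.abs(weights - dequant).max()))
```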
Recently the U.S. Air Force Life Cycle Management Center awarded four aerospace contractors a share of $400 million for the Skyborg Program. Skyborg will develop a prototype for an unmanned, attritable fighter aircraft with combat power capable of defeating a manned jet fighter. The aircraft will have a core system using AI to enable teaming among manned and unmanned aircraft; avoid other aircraft, terrain, obstacles, and hazardous weather; complete autonomous takeoffs and landings; and compose and select independently among different courses of action. In addition, the aircraft should operate with personnel who have limited pilot experience and be economical enough to sacrifice against high-value targets in combat.
The Defense Advanced Research Projects Agency (DARPA) has been on the leading edge of AI research since the early 1960s. Today it is spending approximately $500 million a year on AI across about 80 programs. The multiyear AI Next campaign's funding is focused on DoD issues including: security clearance vetting; accrediting software systems for operational deployment; reducing power, data, and performance inefficiencies; and pioneering the next generation of AI algorithms and applications. The primary future emphasis is on addressing the limitations of current systems so that machines can adapt to new environments, e.g., through Artificial Neural Networks or AGI.
The Defense Intelligence Agency's (DIA) vision of its mission is to give warfighters a comprehensive, dynamic picture of an enemy's operational environment. With the goal of enhancing the warfighter's ability to mitigate risks and defeat adversaries, DIA is launching a program of AI and Machine Learning techniques - the Machine-Assisted Analytic Rapid-Repository System (MARS) - to access current military intelligence databases and transform them into multi-dimensional, flexible data sets that can be leveraged into a virtual model of the real world.
The Defense Information Systems Agency (DISA) is working with the private sector to introduce more transparency into its AI tools in an effort to create confidence in and understanding of them as DISA applies ML and other AI technologies to its portfolio. This process is necessary to take full advantage of AI capabilities and to become comfortable with removing the human from the loop, which is the only way to meet the challenge presented by 1.5 billion daily cyber events. DISA, participating in DoD's National Background Investigation Service (NBIS), is also building an AI system to expedite the clearance application process, in which each application must be checked and verified against multiple data sources.
On June 27, 2018, the Office of the Secretary of Defense, through the DoD Chief Information Officer, established the Joint Artificial Intelligence Center (JAIC). JAIC's principal responsibilities will include: accelerating the research and development of AI capabilities; synchronizing DoD AI efforts, particularly large-scale projects; encouraging the movement of AI capabilities to the cloud to enable rapid delivery; improving collaboration on AI projects internally and with private companies and academia; and leading AI-related planning, policy, oversight, ethics, and safety within the department.
The Intelligence Advanced Research Projects Activity (IARPA) proposed a program to industry to build Predictive Analytics tools capable of determining whether government AI systems have been corrupted by "Trojan attacks". These attacks exploit the AI training process; for example, an ML facial recognition tool may be trained on a database of thousands of images, and if that database were corrupted, enemies could manipulate the resulting AI decision-making process to their advantage.
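A toy illustration of how such a Trojan can work: a small fraction of training samples is stamped with a hidden trigger feature and given the attacker's preferred label, and the trained model then tends to follow the trigger at inference time. Everything below is synthetic and stands in for a corrupted image database; it is not an IARPA tool.

```python
# Sketch of a training-data "Trojan": poisoned samples carry a trigger feature
# and a forced label, so the model tends to follow the trigger at test time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

n, d = 2000, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # clean labeling rule

# Poison 10% of the training set: stamp a trigger feature and force label 1
trigger = d - 1
poisoned = rng.choice(n, size=n // 10, replace=False)
X[poisoned, trigger] = 8.0
y[poisoned] = 1

model = LogisticRegression(max_iter=2000).fit(X, y)

# Fresh inputs that are clearly class 0 under the clean rule, evaluated
# with and without the attacker's trigger stamped onto them
test = rng.normal(size=(500, d))
test = test[test[:, 0] + test[:, 1] < -0.5]
stamped = test.copy()
stamped[:, trigger] = 8.0
print("predicted class-1 rate, clean inputs:    ", model.predict(test).mean())
print("predicted class-1 rate, triggered inputs:", model.predict(stamped).mean())
```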
The Intelligence Community (IC) is struggling with how to use AI technology effectively to its advantage. ML tools are being carefully trained on very large "noisy" (unstructured) data sets to find patterns such as faces, or to augment language processing. Another effective AI research innovation uses AI technology itself to collect and evaluate data. Among the challenges the IC faces in using AI technology are: interacting with, and convincing, the established organizational system of the need for AI; ensuring that AI solutions are compliant and shareable (i.e., not stovepiped) across the IC; and ensuring that operators have confidence and trust in the AI algorithms by making the system "explainable", especially when the AI recommends actions. A pressing problem facing the IC is the role that AI must play in cybersecurity to deliver the rapid, daily response needed to protect against massively increasing attacks on its networks.
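As one example of what "explainable" can mean in practice, the sketch below ranks which input features most influenced a trained model's recommendations using permutation importance. The feature names and data are synthetic placeholders, and this is only one of several possible explainability techniques.

```python
# Sketch of a simple explainability step: report which features drive a model's
# recommendations so an operator can judge whether to trust them.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)

feature_names = ["signal_a", "signal_b", "signal_c", "noise_1", "noise_2"]
X = rng.normal(size=(1000, 5))
# In this toy data, only the first two "signals" actually drive the outcome
y = (1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0, 0.5, 1000) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Present the drivers of the model's recommendations, most influential first
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"{feature_names[i]:>8s}: importance {result.importances_mean[i]:.3f}")
```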
On February 11, 2019, the President issued Executive Order 13859 on Maintaining American Leadership in Artificial Intelligence and directed the National Institute of Standards and Technology (NIST) to create "a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies." The NIST Director noted that trust is the key to the adoption and acceptance of AI. NIST's primary efforts will be to support the development of AI standards and fundamental research to measure and enhance the security and "explainability" of AI technology.
As an independent federal agency, the mission of the National Science Foundation (NSF) is to provide federal funding to support basic academic research in scientific fields (e.g., computer science, mathematics) that secure the national defense. AI is a special focus of NSF's Directorate for Computer and Information Science and Engineering (CISE), with funding going to core programs such as: Fairness, Ethics, Accountability, and Transparency (FEAT), for discovery in research and practice related to fairness, ethics, accountability, and transparency in AI; Real-Time Machine Learning (RTML), in cooperation with DARPA, to research high-performance Machine Learning techniques that can learn from real-time, continuous streams of new data; and Fairness in Artificial Intelligence (FAI), in collaboration with Amazon, to jointly support research focused on trustworthy AI systems that can be smoothly accepted and implemented to solve societal problems. Some FAI challenges include transparency, "explainability", and accountability.
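To illustrate the RTML idea in the simplest terms, the sketch below updates a classifier incrementally as mini-batches arrive, rather than retraining from scratch. The data stream is synthetic, and this is a generic example rather than an NSF or DARPA system.

```python
# Sketch of real-time (streaming) machine learning: a linear classifier is
# updated incrementally as each new mini-batch of data arrives.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(5)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])

for step in range(100):                       # 100 arriving mini-batches
    X_batch = rng.normal(size=(32, 10))
    y_batch = (X_batch[:, 0] + X_batch[:, 1] > 0).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)  # incremental update

X_eval = rng.normal(size=(500, 10))
y_eval = (X_eval[:, 0] + X_eval[:, 1] > 0).astype(int)
print("accuracy after streaming updates:", model.score(X_eval, y_eval))
```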