Research and Vision for Intelligent Systems for 2025 and Beyond
Brett Piekarski, Brian Sadler, Stuart Young, William Nothwang and Raghuveer Rao
The current Army Operating Concept document, “Win in a Complex World,” lays out a future vision for intelligent systems out to 2040 as a force multiplier for improving the effectiveness and reach of Soldiers and units in a complex world [1]. It indicates that these systems could be autonomous or semi-autonomous, have the ability to learn, reduce the cognitive burden on the Soldier, and assist in making rapid decisions [1]. Through their increased intelligence and autonomy, they could also perform tasks such as teaming unmanned ground vehicles (UGV) and unmanned aerial systems (UAS) to conduct adaptive and persistent intelligence, surveillance, and reconnaissance (ISR) in areas inaccessible to human operators, operate dispersed over wide areas while possessing the mobility to concentrate rapidly, and develop situational understanding through action [1]. All of these concepts will play a significant role in shaping the strategy and operational concepts for 2025 and Beyond in complex environments such as Megacities and Dense Urban Environments. But if we put on our Mad Scientist hats, this vision probably stops short of how far we could really push technology for 2025 and beyond. This paper examines the research challenges and the ways we can augment that vision to enable even more capable systems and a larger impact on future operations through collective heterogeneous systems that exhibit distributed awareness, distributed intelligence, adaptable and resilient controls and behaviors, and operational complexity.
Current roadmaps for UAS and UGV focus primarily on individual systems, with multi-robot coordination as a future goal [2,3]. For the individual system, they state that autonomous mission performance may demand the ability to integrate sensing, perceiving, analyzing, communicating, planning, decision making, and executing to achieve mission goals, to adapt to changes, and to predict what will happen next by integrating cognitive behaviors [2]. But most current unmanned robotic systems still rely heavily on teleoperation or have limited autonomy using GPS waypoint navigation. Basic research is ongoing within the DoD laboratories and academia to increase the levels of autonomy for both air and ground systems, increase the level of interaction with humans to create robotic teammates, and demonstrate large numbers of collaborative systems. Commercial advancements are happening fast: driverless cars, large-scale cooperation among logistics robots, small drones becoming ubiquitous around the world, and advances in artificial intelligence for applications like IBM Watson. The vision put forth here builds on many of these advancements to integrate large numbers of heterogeneous systems, including Soldiers integrated into the control architecture and serving as sensor nodes, large and small UAS and UGV, data from distributed unattended sensors, and information from knowledge bases, into one large distributed and collaborative intelligent system. This vision is not so much about a singular system or technology as about how to integrate varying levels of autonomy and intelligence across spatially and temporally distributed singular systems, small teams, and even swarm behavior under one robust and adaptable command and control architecture, while augmenting the capability of the collective beyond that of any one component within it.
Many commercial networked technologies, such as computers and smart phones, and large commercial robotic system implementations have moved toward homogeneity in design rather than heterogeneity. This is highly desirable from a modular design, manufacturing, and potentially cost viewpoint. However, it remains an open question for future Army systems what degree and mix of homogeneity and heterogeneity best balances cost, logistics tails, and broad applicability and adaptability of the overall system. It is not at all obvious what the right mix of heterogeneity is in sensing, computation, platforms, levels of autonomy, and human/robot teams, or what the best, or even a good, ontology is. Robotic ontologies exist, but they lack coupling with reasoning, cognition, and task allocation. What is clear is that, for future Army systems, there will be some, potentially high, level of heterogeneity. It is important to note that, as part of this heterogeneous system, there needs to be an effort to create individual systems with low price points, especially for small attritable systems. This vision for highly distributed and collaborative intelligent systems will drive new advances and system attributes such as Distributed Awareness, Distributed Intelligence, Adaptable and Resilient Controls, and Operational and Experimental Complexity, all of which are only starting to be realizable.
Distributed Awareness implies that the system perceives the environment and gathers information from many different sources to provide situational awareness for the individual platform as well as the collective system. One aspect of this is distributed mapping and perception. As the Soldiers and intelligent agents disperse themselves through the environment, information will be collected across the collective and shared to augment missing, incomplete, or stale information, providing, for example, multiple views and improved object recognition, 3-D scene generation, images and maps of areas not accessible by other systems, and understanding of population dynamics. How and what information to share given potential bandwidth limitations, and how to represent this common model of the world across heterogeneous systems with varying levels of processing power, memory, or ability to act on the information, are not solved problems. Distributed autonomous mapping and exploration has so far been accomplished with only a few air and ground platforms [4], and research is underway to extend it to much larger teams. Research is also underway to fuse information from intelligent agents and humans [5,6]. Sharing maps, threats, and other information across a collective of Soldiers and robots will greatly enhance the Soldiers' and robots' situational awareness in complex environments.
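As a minimal illustration of the bandwidth-limited sharing problem, the sketch below assumes each platform keeps a simple timestamped cell map, broadcasts only its most recently changed cells up to an assumed per-cycle budget, and merges incoming reports by keeping the newest estimate for each cell. The map representation, budget, and merge rule are illustrative assumptions, not a proposed Army architecture.

```python
# Minimal sketch: bandwidth-limited sharing of a common map across
# heterogeneous agents. Cell keys, the budget, and the "newest timestamp
# wins" merge rule are illustrative assumptions.

class LocalMap:
    def __init__(self):
        self.cells = {}          # (x, y) -> (occupancy in [0,1], timestamp)
        self.dirty = set()       # cells updated since the last broadcast

    def observe(self, cell, occupancy, t):
        self.cells[cell] = (occupancy, t)
        self.dirty.add(cell)

    def make_update(self, budget):
        """Send only the most recently changed cells, up to a bandwidth budget."""
        chosen = sorted(self.dirty, key=lambda c: self.cells[c][1], reverse=True)[:budget]
        self.dirty -= set(chosen)
        return {c: self.cells[c] for c in chosen}

    def merge(self, update):
        """Keep the newest report for each cell, whatever platform produced it."""
        for cell, (occ, t) in update.items():
            if cell not in self.cells or self.cells[cell][1] < t:
                self.cells[cell] = (occ, t)

# Example: a small UAS shares a capped update with a UGV carrying a larger map.
uas, ugv = LocalMap(), LocalMap()
uas.observe((4, 7), 0.9, t=12)
uas.observe((4, 8), 0.1, t=13)
ugv.merge(uas.make_update(budget=3))
```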
Access to and utilization of the cloud, big data, social media, complex real-world simulation models running on high-performance computing platforms (e.g., weather, natural disaster evolution), and other knowledge bases should be leveraged to support functions such as intelligent/semantic routing of valuable information, or answering critical real-time questions as they arise. Future knowledge bases will be highly distributed and evolving. While knowledge bases provide rapid answers to queries, they are associative and rely on similarity, and they typically provide many possible answers, some of which may be dramatically incorrect. Thus, mechanisms are needed for interactive querying and information push and pull. Even more fundamentally, a science and analytical framework is needed to bridge control and signal processing on the one hand, and associative knowledge bases on the other.
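The sketch below illustrates, under purely illustrative assumptions (a toy associative store, string-similarity scoring, and hand-picked thresholds), one possible push/pull mechanism: the agent acts on a knowledge-base answer only when the top candidate is both confident and unambiguous, and otherwise pulls a human analyst into the loop.

```python
# Minimal sketch of the "many possible answers, some wrong" problem: an
# associative lookup returns ranked candidates, and the agent acts autonomously
# only when the top answer is confident and clearly separated from the rest.
# The corpus, scoring function, and thresholds are illustrative assumptions.

from difflib import SequenceMatcher

KNOWLEDGE = {
    "bridge at grid 41S": "rated for light vehicles only",
    "bridge at grid 42N": "destroyed as of last report",
    "river crossing at 41S": "fordable in dry season",
}

def query(question, top_k=3):
    """Associative retrieval: rank stored facts by string similarity."""
    scored = sorted(
        ((SequenceMatcher(None, question, key).ratio(), key, val)
         for key, val in KNOWLEDGE.items()),
        reverse=True)
    return scored[:top_k]

def answer_or_defer(question, min_score=0.6, min_margin=0.15):
    candidates = query(question)
    best, runner_up = candidates[0], candidates[1]
    if best[0] >= min_score and best[0] - runner_up[0] >= min_margin:
        return best[2]                  # confident and unambiguous: push answer
    return ("DEFER", candidates)        # ambiguous: pull in a human analyst

print(answer_or_defer("status of bridge at grid 41S"))
```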
If we can incorporate deep learning methods and leverage distributed computing, the adaptability of these systems will be greatly enriched. As an example, the deep learning methods employed by Google, Microsoft, Facebook, IBM, and others [7,8] could be brought to bear on the perception challenges currently encountered by military robots. While our data sources are not as voluminous and our opportunities for crowd-sourcing are restricted, these approaches should be brought into current robot architectures. Further, as events unfold in a region and are discussed on social media and in other data sources, this information could be utilized by Soldiers and robots to reason more effectively about the activities they may encounter. This is especially important if we want the robots to adapt to their environments. Without this connectivity to distributed computing and intelligence sources, the robot architecture will have no means to reason over data that may explain the environment, and the behaviors of the robots will be construed as brittle and not adaptable.
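One hedged example of bringing these methods into robot architectures despite limited data is transfer learning: reuse a network pretrained on voluminous commercial data and fine-tune only a small classifier head on mission-relevant imagery. The sketch below assumes a recent PyTorch/torchvision installation; the class count and training step are placeholders, not a fielded perception pipeline.

```python
# Minimal transfer-learning sketch, assuming PyTorch and a recent torchvision:
# reuse a backbone pretrained on large commercial datasets and fine-tune only
# the final layer on a small, mission-relevant image set.

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # placeholder: a handful of mission-relevant object categories

# Start from weights learned on voluminous commercial data...
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False              # ...and freeze them.

# Replace only the classifier head; this is all the small dataset must train.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a small labeled batch."""
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```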
Distributed Intelligence implies that the individual and collective system can reason about the constantly changing local and collective situational awareness, and about the local and overall mission objectives, to make predictions about the future and to perform real-time adaptations and decisions that optimize operations based on that future. A key element of future military intelligent systems is that they must make decisions on their own, likely at speeds beyond human operational tempo. However, this should not be misunderstood to imply that the robots will act unsupervised or exhibit free will. Future robots must make decisions on their own to accomplish their mission, and this will have to be done at rates beyond which a human can control them. As an example, it is conceivable for a human operator to deploy and control a few UAS to engage an enemy threat; however, this does not scale when hundreds of systems must be deployed. The human will need to interact at a much higher level, for example with a high-level tasking using natural language, such as, “Deploy UAS robots to engage all incoming threats.” A distributed collective of agents should make group decisions that are acceptable based on the cost/benefit preferences of the mission commander, yet we are far from a satisfactory means to achieve this reliably today. Foundations and methods need to be devised to provide distributed control and decision making that is responsive to human intent, interactive to changes in that intent, and functional in complex environments with high degrees of a priori uncertainty.
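As one concrete, simplified illustration of decisions driven by a commander's cost/benefit preferences, the sketch below greedily assigns UAS to incoming threats by maximizing benefit minus weighted risk; the scores, weights, and greedy rule are assumptions for illustration, not a validated allocation algorithm.

```python
# Illustrative sketch only: agents choose threat assignments by maximizing a
# commander-supplied cost/benefit tradeoff. Weights, scores, and the greedy
# assignment rule are assumptions.

def assign_threats(agents, threats, weights):
    """Greedy assignment: repeatedly give the best remaining (agent, threat)
    pair the task, scored as benefit minus weighted risk."""
    pairs = [(weights["benefit"] * t["value"] - weights["risk"] * a["risk"][t["id"]],
              a["id"], t["id"])
             for a in agents for t in threats]
    assignment, used_agents, used_threats = {}, set(), set()
    for score, agent_id, threat_id in sorted(pairs, reverse=True):
        if agent_id not in used_agents and threat_id not in used_threats:
            assignment[threat_id] = agent_id
            used_agents.add(agent_id)
            used_threats.add(threat_id)
    return assignment

agents = [{"id": "uas1", "risk": {"t1": 0.2, "t2": 0.7}},
          {"id": "uas2", "risk": {"t1": 0.5, "t2": 0.3}}]
threats = [{"id": "t1", "value": 1.0}, {"id": "t2", "value": 0.8}]

# The commander's intent enters only through these preference weights.
print(assign_threats(agents, threats, weights={"benefit": 1.0, "risk": 0.5}))
```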
When communication between agents is limited or even completely disrupted, the only way to counter such an adversarial situation is to reason about and predict the situation and the future movements and decisions of allies and adversaries. Collective and distributed reasoning and prediction are critical when missions and objectives are not clear or change rapidly in dynamic and complex environments. Agents must preserve mission intent at operational tempo and may be required to predict human courses of action. In addition, the intelligent system may face adversarial disruptions, requiring reasoning and prediction to enable appropriate real-time responses at a pace far beyond what can be achieved with human interaction. Reasoning and prediction may also enable the determination and dissemination of critical and timely information. There are numerous challenges to achieving this level of collective intelligence, including knowledge representation, real-time simulations and models, and methods for understanding and predicting intent. Research is underway to enable the teaming of autonomous air and ground robots with Soldiers. Current approaches include onboard computation and perception, and are beginning to incorporate reasoning to extract Soldier intent from natural language commands and other cues for the robots to then execute [9,10]. However, if we extend this concept to incorporate distributed intelligence and awareness from broader sources of information, the progress we can make will be far greater.
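A small piece of this problem can be illustrated as follows, under the assumption that agents exchange periodic position/velocity reports: when reports stop arriving, an agent predicts a teammate's state from its last report rather than treating it as unknown, and discards predictions once they become too stale. The linear motion model and staleness threshold are illustrative.

```python
# Minimal sketch: dead-reckon a silent teammate's position from its last
# (position, velocity, timestamp) report. The constant-velocity model and
# max_age cutoff are illustrative assumptions.

def predict_state(last_report, now, max_age=30.0):
    """Predict a teammate's position from its last report; return None once
    the report is too stale to trust."""
    pos, vel, t = last_report
    age = now - t
    if age > max_age:
        return None                       # too stale: replan around the gap
    return tuple(p + v * age for p, v in zip(pos, vel))

# Last contact at t=10 s, moving east at 2 m/s; where do we expect it at t=18 s?
last = ((100.0, 250.0), (2.0, 0.0), 10.0)
print(predict_state(last, now=18.0))      # -> (116.0, 250.0)
```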
Another important consideration of Distributed Intelligence is the opportunity for the robots to learn from one another. Over the last decade, we have worked with expensive robots, and because of their costs, we have not pursued some behaviors that might be pursued with less costly platforms. For example, instead of deploying one $100,000 robot, what if we deploy 100 $1,000 robots, or even 1,000 $100 robots? This scaling opens up many opportunities to distribute intelligence across many platforms and enables sharing of learning by all robots. With many robots, behaviors that may result in failure of some platforms may actually benefit the collective whole. This is how humans learn: we learn from our own mistakes and from the mistakes of others. If our robots are never allowed to fail, then we are significantly constraining their opportunity to learn, and thus to improve their performance. Multi-agent learning is a potentially attractive alternative to directly coding teams or swarms of agents or robots. It is a very challenging problem to provide the micro-level behaviors necessary to achieve a given macro-level phenomenon, and more research is needed to find approaches to teach large numbers of heterogeneous agents how to do nontrivial collective tasks in real time and in the physical world.
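The sketch below illustrates the idea of learning from the collective's failures, using a deliberately simple shared-statistics scheme (an epsilon-greedy choice over candidate behaviors). The behaviors, success rates, and selection rule are invented for illustration; real multi-agent learning is far harder than this.

```python
# Minimal sketch: many low-cost robots try candidate behaviors, report successes
# AND failures to a shared store, and each robot picks its next behavior from
# the collective statistics. Losing a $100 platform to a bad behavior still
# improves the whole team's estimate. All names and rates are illustrative.

import random

class SharedExperience:
    def __init__(self, behaviors):
        self.trials = {b: 0 for b in behaviors}
        self.successes = {b: 0 for b in behaviors}

    def report(self, behavior, succeeded):
        self.trials[behavior] += 1
        self.successes[behavior] += int(succeeded)

    def pick(self, epsilon=0.1):
        """Mostly exploit the collectively best behavior, sometimes explore."""
        if random.random() < epsilon or not any(self.trials.values()):
            return random.choice(list(self.trials))
        return max(self.trials,
                   key=lambda b: self.successes[b] / max(self.trials[b], 1))

shared = SharedExperience(["hug_walls", "cross_open_ground", "use_rooftops"])
# 100 cheap robots, each attempting one behavior and reporting the outcome.
true_success = {"hug_walls": 0.8, "cross_open_ground": 0.3, "use_rooftops": 0.6}
for _ in range(100):
    behavior = shared.pick()
    shared.report(behavior, random.random() < true_success[behavior])
print(max(shared.trials,
          key=lambda b: shared.successes[b] / max(shared.trials[b], 1)))
```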
We posit that two fundamental shifts have occurred in the past decade that will substantially alter how Soldiers interact with autonomy moving forward. First, we have moved into a far more personal relationship with our autonomous systems; second, and perhaps more importantly, we have shifted from a mode where a task is done by an autonomous agent OR a human to a mode where it is increasingly done by an autonomous agent AND a human. Some examples include direct integration, where we have begun to cede control to intelligent agents and humans are no longer the sole arbiter of decision making. Intelligent agents within our automobiles that act as driver assistance tools, applying anti-lock brakes automatically when an obstacle is observed, parking for us, and maintaining lane position, are a few examples; but even these are cases of humans ceding control of sub-tasks, and the human still has the ability to override these intelligent agents. There has been a trend within DoD to invest in human “within the control loop” tasks, where humans AND intelligent agents perform largely the same task, and the individual outputs of each agent, human and intelligent system, are fused into a joint decision [5,6]. A number of fundamental scientific studies have examined how to enable this and how to properly assess, instrument, and monitor the agents [5,6]. What these studies have shown is that when decisions are made in this manner, substantial improvements in performance and accuracy are observed and errors are minimized [5,6]. Accomplishing this means that humans have had to cede control of decision making to these intelligent agents when those intelligent agents are performing better. To date, most of the tasks that have been examined have been fairly benign (e.g., image classification), but increasingly we, as a research community, are investigating how consequence, trust, confidence, and accountability impact these decision paradigms. Technology has shown that these capabilities are real, if still nascent. To fully realize them, several investments in fundamental research are needed. An intelligent agent that has been imbued with a commander's desired outcome should be able to independently, or as part of a larger group, move through an environment, navigate unforeseen obstacles, and accomplish the intent of the human. This implies many technologies that do not yet exist: the ability to quantify and codify human intent, adaptive group behaviors, and the ability to fuse disparate inputs from distributed agents to develop a comprehensive understanding of the world. Research to enable augmented human capabilities has a key role to play in enabling this transformation.
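A toy version of the human AND agent decision mode, loosely in the spirit of the confidence-metric studies cited above [5,6] but not a reproduction of their methods, is sketched below: each decider reports a label and a confidence, and the joint decision is the label with the greatest confidence-weighted support. The sources, labels, and numbers are illustrative.

```python
# Sketch of confidence-weighted human/agent decision fusion. Each decider
# reports (source, label, confidence); the joint decision is the label with
# the highest total confidence-weighted support. Values are illustrative.

def fuse_decisions(votes):
    """votes: list of (source, label, confidence in [0,1])."""
    support = {}
    for _source, label, confidence in votes:
        support[label] = support.get(label, 0.0) + confidence
    return max(support, key=support.get)

votes = [
    ("human",      "vehicle", 0.55),   # the operator is unsure
    ("uas_camera", "decoy",   0.80),   # the detector is confident it is a decoy
    ("ugv_lidar",  "decoy",   0.60),
]
print(fuse_decisions(votes))            # -> "decoy": control ceded to the agents
```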
Adaptable and Resilient Controls that enable adaptable and assured individual and collective mission plans based on changing situational awareness are clearly desirable system traits. Finding optimal, or even good enough, plans for autonomous agents is computationally difficult, especially for systems in complex environments. Most military operations require real-time operational tempo, and plans must be dynamically adapted during execution. The problem grows combinatorially for large heterogeneous multi-agent systems, where planning must be coordinated across many heterogeneous sub-systems with varying mission objectives, where individual agents may or may not share the same goals, where some agents may not be able to complete their tasks due to failures, and where non-cooperative players or adversaries are present. Research is needed in sub-optimal planning and in exploring the tradeoffs between speed of planning and the accuracy and optimality of the plan.
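One standard way to expose this speed-versus-optimality tradeoff is weighted A*, shown in the sketch below: an inflation factor on the heuristic speeds up the search at the cost of a bounded loss in path optimality. The grid world and obstacle layout are illustrative; weighted A* is offered only as a familiar example of sub-optimal planning, not as the proposed approach.

```python
# Sketch: weighted A* on a 50x50 grid. An inflation factor epsilon >= 1 speeds
# up the search while guaranteeing a path no more than epsilon times longer
# than optimal. Grid size and obstacles are illustrative.

import heapq

def weighted_astar(blocked, start, goal, epsilon=1.5):
    """blocked: set of (x, y) cells. Returns (path_cost, nodes_expanded)."""
    def h(n):                                   # Manhattan-distance heuristic
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    frontier = [(epsilon * h(start), 0, start)]
    best_cost = {start: 0}
    expanded = 0
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        expanded += 1
        if node == goal:
            return cost, expanded
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if nxt in blocked or not (0 <= nxt[0] < 50 and 0 <= nxt[1] < 50):
                continue
            if cost + 1 < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = cost + 1
                heapq.heappush(frontier, (cost + 1 + epsilon * h(nxt), cost + 1, nxt))
    return None, expanded

obstacles = {(10, y) for y in range(0, 40)}      # a wall with a gap near the top
print(weighted_astar(obstacles, (0, 0), (49, 0), epsilon=1.0))  # optimal, slower
print(weighted_astar(obstacles, (0, 0), (49, 0), epsilon=3.0))  # faster, near-optimal
```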
Resilience, the ability to recover after something bad happens, is critical for intelligent systems, yet very difficult to model, analyze, and put into practice. Resiliency of large multi-agent Army systems needs to be considered in the context of realistic networking and of uncertainties in localization, mapping, sensing, and the state of other agents. Morphing, reconfigurable, and adaptable platforms and system behaviors are ways to offer increased resiliency. For these to be effective, behavior synthesis should be rapid and scalable (via “online behavior synthesis”). Learning methods could be applied to reduce the needed synthesis, but both of these are complicated by the potential use of many small platforms with low capability.
What is the best organizational structure to balance resiliency and operational efficiency, and how can we reconfigure teams in the middle of a mission using a distributed architecture? Complex missions may require multiple teams to simultaneously carry out multiple tasks. Agents may need to play multiple roles that span teams. As contingency situations arise, rapid reconfiguration of teams, both locally and globally, will be needed across the distributed architecture. Dealing with intelligent adversaries will force the team into unforeseen situations. The ability to generate new behaviors on-line is likely critical for dealing with contingencies and for the system to exhibit resilient behavior. On-line behavior synthesis is a challenging problem even when using a central architecture; performing it in a distributed, fast-paced mission is beyond the state of the art. There is no general framework or design methodology for large numbers of distributed heterogeneous agents. Flocking is reasonably well understood with respect to coordinated group movement, but this is a small piece of the distributed intelligence problem. More research is needed in new sophisticated hybrid control architectures for large heterogeneous teams that may include both global and localized control of single agents, spatially and temporally distributed small and large teams, and localized swarm behavior. A key issue is the abstraction of localized behaviors and local controls to enable global control. The control architecture must incorporate autonomous networking, with its many limitations and tradeoffs. For large heterogeneous teaming, it cannot be assumed that all communications will be bi-directional, and communication must be understood in the context of abstraction, roles, and heterogeneity.
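As a minimal illustration of distributed, mid-mission reconfiguration, the sketch below uses market-based task reallocation: when a contingency task appears, each still-reachable agent bids its marginal cost and the lowest bidder absorbs the role. The cost model, single-round auction, and reachability set are simplifying assumptions.

```python
# Sketch of market-based task reallocation during a contingency. The Manhattan
# cost model and single-round auction are simplifying assumptions.

def marginal_cost(agent, new_task):
    """Extra travel if this agent appends the new task to its current plan."""
    last = agent["plan"][-1] if agent["plan"] else agent["pos"]
    return abs(last[0] - new_task[0]) + abs(last[1] - new_task[1])

def auction(agents, new_task, reachable):
    """Only agents still in communication can bid; the winner appends the task."""
    bids = [(marginal_cost(a, new_task), a["id"]) for a in agents if a["id"] in reachable]
    if not bids:
        return None                     # no one reachable: task stays unassigned
    _, winner_id = min(bids)
    winner = next(a for a in agents if a["id"] == winner_id)
    winner["plan"].append(new_task)
    return winner_id

agents = [{"id": "uas1", "pos": (0, 0),  "plan": [(5, 5)]},
          {"id": "ugv1", "pos": (20, 0), "plan": []},
          {"id": "uas2", "pos": (8, 8),  "plan": [(9, 9)]}]
# uas2 has dropped off the network; the surviving agents reallocate the new task.
print(auction(agents, new_task=(10, 10), reachable={"uas1", "ugv1"}))
```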
Networking is obviously critical for distributed system operation, while simultaneously autonomous agents can dynamically adapt to create, support, and heal networks to match the environment and the desired state of the collective. To achieve this, an entirely new theoretical foundation is needed as the number of agents and their ability to network and operate autonomously grows. Wireless networking instabilities, time variation, bandwidth, and security have not been sufficiently accounted for in distributed control. Information representation must be optimized in the context of the system task(s), and random information loss must be accounted for. Coupling control with autonomous networking may provide new control paradigms that simultaneously support the setup and healing of networks, dynamic network reconfiguration, the ability to withstand and overcome severe electronic warfare threats, all while supporting the warfighter objectives such as autonomous exploration or seeking and sensing threats. Efforts in autonomous networking must proceed in a tightly coupled research spiral with intelligent system design. This must include pervasive consideration of security and electronic warfare threats from adversarial intelligent systems. The emerging paradigms of cognitive radio and dynamic spectrum access may be critical to achieving the desired networking capabilities. Critical to this is the creation and exploitation of massive diversity, through the use of multiple wavelengths in both radio and optical domains. Mobility control will be utilized to dynamically maintain and heal the network as desired. Low frequency operation, for example in the lower VHF, can be harnessed to provide persistent links in complex terrain such as mega-cities, due to the physics of penetration at longer wavelengths. Miniature antennas and cooperative arrays may be utilized to achieve robustness to interference, and multi-user technology, including full duplex operation and coding, will provide dramatic increases in the spectrum utilization.
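The sketch below illustrates one simple coupling of mobility control with networking: if two agents drift beyond an assumed radio range, the nearest free agent is retasked as a relay at their midpoint to heal the link. The disk-connectivity model and fixed range are assumptions; real links in urban terrain and under electronic attack are far less forgiving.

```python
# Sketch: mobility-based network healing under an assumed disk connectivity
# model. The range, midpoint goal, and agent names are illustrative.

import math

RADIO_RANGE = 100.0  # meters (assumed)

def connected(a, b):
    return math.dist(a, b) <= RADIO_RANGE

def heal_link(pos_a, pos_b, free_agents):
    """Return (relay_id, goal_position) for the closest free agent, or None."""
    if connected(pos_a, pos_b) or not free_agents:
        return None
    midpoint = ((pos_a[0] + pos_b[0]) / 2, (pos_a[1] + pos_b[1]) / 2)
    relay_id = min(free_agents, key=lambda i: math.dist(free_agents[i], midpoint))
    return relay_id, midpoint

# Two teams 180 m apart, one spare UAS loitering nearby is retasked as a relay.
print(heal_link((0, 0), (180, 0), {"spare_uas": (50, 40)}))
```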
Operational and experiment-driven research is critical to explore and discover the brittle connections and interdependencies between perception systems, interactions with external data sources, efficient data sharing and processing methods, intelligence and decision making algorithms, multi-agent navigation and collaborative behaviors, and the collective performing spatially and temporally relevant missions. There have been recent examples of operating singular fully autonomous systems in complex environments, small heterogeneous teams with moderate complexity and interactions, and large numbers of homogeneous agents/swarms in simple environments and with limited autonomy. In order to make these demonstrations tractable and fit within today’s technology, researchers typically reduce the complexity along several axes: 1) number of agents; 2) degree of heterogeneity among the agents; 3) agent behavior complexity, autonomy, and adaptability; 4) the degree of interactions and communication among the agents; 5) speed of operation; and 6) the complexity of the environment and available infrastructure (e.g., GPS). Large scale experiments rely on readily available technology and so are limited in their ability to simultaneously push along these axes. Research in ways to simultaneously push the complexity along each of these axes is needed. A lack of design methods and models for such systems is a remaining critical issue, and foundations in this area may lead to new component technology that enables leap-ahead experimentation, as well as reduce the time cycle for technology development and costs related to iterative field testing of large complex systems. As the degree of heterogeneity increases, so does the design and task allocation complexity. Metrics and roles for heterogeneous elements must be understood.
Summary
There are many challenges to meeting the technical objectives laid out in this paper, but it is envisioned that research in these areas will have a significant impact in shaping the future of Army intelligent systems and operational concepts in complex environments. These concepts for highly distributed and collaborative systems will change how intelligent systems interact with each other, with the Soldiers around them, with the physical environment, and with the cyber world, including access to knowledge bases and other sources of information. They will use this distributed and collaborative approach to develop a much greater understanding and awareness of the environment and the threats within it than is possible with any one or even a few systems. Based on this continually evolving awareness, the collective will be able to exhibit complex autonomous behavior at the individual, team, and swarm level to reason, predict, adapt, and respond to local stimuli while maintaining resiliency in the overall mission objectives. This will result in significantly increased capabilities for extended ISR reach in complex Megacity and dense urban environments, or areas with restricted or denied access. This approach will also enable flexibility on the battlefield: it could provide a capability to respond to changing social and population dynamics with varying levels of autonomy, intelligence, and even swarm behaviors for increased awareness or delivery of payloads; provide real-time resupply to dismounts and small squads in dynamic threat environments; enable robots to serve as diversions or to support fires and targeting; provide additional protection and mask dismounted movements; and collectively perform missions that otherwise would be unachievable, such as persistent surveillance of a region beyond the endurance of any single platform, to ensure future tactical advantage.
Acknowledgements
The authors would like to acknowledge all the participants at the FY16 Army Science Planning Meeting, “Distributed and Collaborative Intelligent Systems”; their discussions contributed significantly to the ideas captured in this paper.
References
[1] TRADOC Pamphlet 525-3-1, “Win in a Complex World.”
[2] DoD Unmanned Systems Integrated Roadmap, FY2013-2038.
[3] Unmanned Ground Systems Roadmap, Robotic Systems Joint Project Office, 2011.
[4] C. Nieto-Granda, J. G. Rogers, and H. I. Christensen, “Coordination Strategies for Multi-Robot Exploration and Mapping,” The International Journal of Robotics Research, 2014.
[5] A. R. Marathe, B. J. Lance, K. McDowell, W. D. Nothwang, and J. S. Metcalfe, “Confidence Metrics Improve Human-Autonomy Integration,” Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, pp. 240-241, 2014.
[6] R. M. Robinson, H. Lee, M. J. McCourt, A. R. Marathe, H. Kwon, C. Ton, and W. D. Nothwang, “Human-Autonomy Sensor Fusion for Rapid Object Detection,” Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 305-312, 2015.
[7] Y. LeCun, Y. Bengio, and G. Hinton, “Deep Learning,” Nature, vol. 521, pp. 436-444, 28 May 2015.
[8] N. Jones, “The Learning Machines: Using Massive Amounts of Data to Recognize Photos and Speech, Deep-Learning Computers Are Taking a Big Step Towards True Artificial Intelligence,” Nature, vol. 505, pp. 146-148, 2014.
[9] D. Summers-Stay, T. Cassidy, and C. R. Voss, “Joint Navigation in Commander/Robot Teams: Dialog & Task Performance When Vision is Bandwidth-Limited,” V&L Net 2014.
[10] C. R. Voss, T. Cassidy, and D. Summers-Stay, “Collaborative Exploration in Human-Robot Teams: What’s in Their Corpora of Dialog, Video, & LIDAR Messages?” Proceedings of the Workshop on Dialogue in Motion (DM), 14th Conference of the European Chapter of the Association for Computational Linguistics, 2014.