Neural Networks for Flight Control:

A Strategic and Scientific Assessment

 

Paul J. Werbos

National Science Foundation*, Room 675

Arlington, Va. 22230

 

ABSTRACT

 

                This paper is essentially an advanced draft of a paper written for an August 1994 workshop on Neural Networks for Flight Control, held at NASA Ames, organized by Jorgensen and Pellionisz.  All references to "this conference" below refer to that workshop. The full Proceedings are not yet available.

                This paper assesses the strategic potential and scientific importance of research efforts and proposals presented at this conference. It argues that there is a remarkable confluence between the longer-term mission objectives of NASA and the fundamental scientific needs in the neural network field. Papers at this conference represent four of the five most advanced, brain-like neural network systems which have ever been implemented; all four describe substantial research success on flight control problems (albeit only in simulation in all but one case). Optimizing control -- the most promising segment of the neural net field, and, arguably, a necessary prerequisite to true autonomous intelligence -- can be critical to the fuel and weight reductions necessary to a new generation of lower-cost launch vehicles, which in turn are critical to the future of human activity in space. Using related designs, it should be possible to reduce the loss of aircraft in war (or in commercial accidents) even more substantially than NASA's present well-conceived programs now promise. There are substantial potential benefits in obvious areas, such as brain research and remote sensing; however, there is also great synergy with the Partnership for a Next Generation Vehicle, the President's initiative to make internal combustion cars obsolete. Data compression, teleoperation, manufacturing and missile interception are also promising application areas. Strategic coordination in strategic defense may appear to be an obvious application, but there are hazards and stability problems unique to that application which suggest that it should be avoided.

                NASA-NSF collaboration could play a key role in strengthening efforts in this area, which would require highly focused and aggressive management, linked, if possible, to new expanded efforts in launch vehicle development.

 

1. SCOPE

 

                This paper will address three questions, for the benefit of the decision maker and the engineer, based on presentations at this conference:

 

               (1) If NASA should create a major initiative in artificial neural networks (ANNs) for flight control, what are the potential benefits for national strategic priorities and for fundamental scientific progress? How could these benefits be maximized?

               (2) What is the significance and potential of the ANN designs presented at this conference?

               (3) What is a viable technical strategy for the specific application area -- reconfigurable flight control -- which has served as the initial anchor for these efforts at NASA Ames?

 

It is somewhat difficult to address both the policy level and the nitty-gritty engineering details in a single document. Nevertheless, this balance is essential to managing and implementing such an effort efficiently. Government contracting officers often report that the commonest cause of unsatisfactory products which disappoint their agencies is that contractors answer the wrong question or address the wrong problem; thus the practical engineers working on this kind of effort need to think seriously about the larger goals they are contributing to. Even in basic research, I have seen the same phenomenon over and over again, as people exert enormous energy in nonproductive or duplicative directions. Likewise, experience shows that high-level decisions which are not linked tightly to concrete, nitty-gritty engineering realities often lead to efforts which -- at the operational level -- push in the opposite direction from what the higher-level people think they are doing, and thereby waste huge amounts of taxpayers' money. Specific innovations at the engineering level are often crucial to the success of larger efforts -- and not to be taken for granted in a world of bureaucratic inertia.

__________________________________

 

* This paper expresses my personal views, not the views of NSF. My personal views are informed by my experience at NSF, but are substantially influenced by three other factors as well: (1) my past activities in support of the National Space Society and the L-5 Society (for which I was once a Regional Director, coordinating several Eastern States, and the Washington representative); (2) past employment at the Department of Energy (where my responsibilities included, among other topics, the evaluation of long-range energy forecasts, and the evaluation of economic and technological forces bearing on those forecasts); (3) my experience as President (1991-1992) of the International Neural Network Society, and earlier research activity[1].

 

 

2. SUMMARY OF CONCLUSIONS

 

                There is a near-exact correspondence between the type of ANN design most critical to NASA's long-term mission objectives and the type of design most critical to fundamental scientific progress in this field. Thus there is excellent reason to believe that an initiative in this area -- if properly directed -- could have substantial benefits for a number of major strategic goals, while developing fundamental scientific understanding more efficiently than any other ANN funding program on earth (except for the small-scale efforts at NSF). The work presented at this conference already marks a significant start in this direction.

                The designs of greatest relevance here have been variously described as Approximate Dynamic Programming (ADP), adaptive critics or reinforcement learning. They provide two critical new capabilities: (1) to compute, offline, an approximation to the optimal nonlinear control strategy for a noisy, nonlinear plant or vehicle affected by uncertainty, based on either a conventional model of the system to be controlled or an ANN trained to emulate that system; (2) to perform the same task based on real-time learning, both in the controller and in the model of the plant. More conventional techniques fall short of these capabilities in various ways: some assume linearity; some are capable of stabilizing a plant but not optimizing it in real-time; some become too expensive to implement as the number of variables grows (beyond 1 or 2); some are numerically inefficient (i.e. too slow) in their treatment of noise; and so on. Neural network implementations of ADP also permit the use of high-throughput ANN chips, which can make it more practical to use a highly complex and intelligent control design even within the limitations of an aircraft or spacecraft.
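The division of labor in an adaptive critic can be shown with a deliberately tiny example. The sketch below is my own illustration, not any of the specific designs presented at the conference: a scalar linear plant with invented coefficients, a quadratic critic, and a linear actor, trained in the HDP style in which the critic approximates the cost-to-go and the actor descends the critic's gradient through a known model.

```python
import numpy as np

# Illustrative adaptive-critic (HDP-style) sketch. The plant, cost, and
# learning rates are invented, and the critic and actor are deliberately
# reduced to single weights so the loop is easy to follow.
a, b, gamma = 0.9, 0.5, 0.95       # known linear plant: x' = a*x + b*u
w = 0.0                            # critic weight: J(x) ~ w * x^2
k = 0.0                            # actor gain:    u    = -k * x

rng = np.random.default_rng(0)
for episode in range(2000):
    x = rng.uniform(-1.0, 1.0)
    for t in range(20):
        u = -k * x
        cost = x**2 + u**2                     # one-step cost U(x, u)
        x_next = a * x + b * u
        # Critic: temporal-difference step toward U + gamma * J(x')
        target = cost + gamma * w * x_next**2
        w += 0.05 * (target - w * x**2) * x**2
        # Actor: descend d(U + gamma*J(x'))/du through the known model
        dJ_du = 2 * u + gamma * 2 * w * x_next * b
        k += 0.02 * dJ_du * x                  # du/dk = -x flips the sign
        x = x_next
```

The learned gain k should approach the discounted-LQR gain for this plant; the point of the sketch is only the interlocking updates -- the critic learns an evaluation, and the actor improves itself against that evaluation through the model.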

                Of the five most advanced ADP systems working in the world, four have been tested on flight control problems and should be described in this conference proceedings. (The fifth has been tested only on robotics simulations and unrelated problems.)

 

3. POTENTIAL CONTRIBUTION TO A SPACE ECONOMY

 

                Many analysts believe that NASA's most important mission, in the long-term, is to use R&D to break down the key barriers which separate us from a true "space economy" [2]. In a true "space economy," the unit cost of critical space activities would be reduced to the point where the economics of space utilization are so favorable that the activity can grow by itself, at a rapid rate, on a private-sector basis, delivering more value to the earth than it costs, and eventually leading to a human settlement of the solar system. The most urgent prerequisite here is to reduce the cost per pound to earth orbit, either through single-stage-to-orbit (SSTO) rockets or -- at greater risk, but with greater long-term cost reduction potential -- Mach 26 hypersonic aircraft. Other key barriers involve the ability to utilize nonterrestrial materials (NTM) at low cost, and to perform space manufacturing on an efficient, modular basis. The concept of a space economy is related to Rostow's concept of the "takeoff effect" in economic development theory[2].

                The paper from Pap and Cox of Accurate Automation (AAC) at this conference already shows how ADP is playing a crucial role in controlling the first prototype being built for the US hypersonics program. In the final days of NASP (the predecessor program), it became clear that the weight ratio was the one really serious bottleneck or apparent showstopper in building a Mach 26 hypersonic aircraft; existing control designs could stabilize the craft, but not at an acceptable weight ratio. There was an urgent need to minimize fuel consumption, stringently, and to minimize the effective cost (weight) due to the thermal control system. Earlier work by AAC, by Neurodyne and by McDonnell-Douglas (funded initially by NSF and internal funds) indicated a substantial possibility that ADP could perform this critical high-risk, high-payoff task, with assistance from other ANN subsystems. SSTO rocket work is at an earlier stage; however, there is every reason to expect that weight ratios will be critical in that approach as well.

                With NTM utilization, the chief barrier is initial cost. Estimates by Gerard O'Neill -- both in his earlier High Frontier discussion[3] and in later, more detailed studies -- were low enough to be worth discussing before Congress. But estimates from NASA Houston of the cost of an initial lunar base -- let alone the NTM option -- appear to be politically unfeasible. The key difference between the two is that they are cost estimates for different approaches; O'Neill proposed a higher degree of automation on the moon. Paradoxically, to open up the door to a large human presence in space -- as in the O'Neill plan[3] -- requires low costs, which in turn require greater automation of the initial lunar activity. ANNs may or may not be necessary here. However, AAC has recently demonstrated a new, far more efficient controller for telerobotic arms, based on an ADP outer control loop, tested on a physical prototype of the space shuttle main arm and on underwater robot arms; this was an extremely difficult control problem, previously attempted unsuccessfully at a number of locations, using a variety of approaches, at great cost. ([4] discusses some of the earlier work, which was initially funded by NSF, through the small grant which actually started the company.) This suggests that ADP and related techniques might also be critical to the use of telerobotics, to make NTM affordable. A purely robotic approach to extracting NTM would require even greater intelligence in the controller, making ADP even more essential as part of a rather complex system.

                The long-range requirements for space manufacturing seem far less clearly defined at present. They involve issues such as the ability of humans to live and work in space, the design of mass-producible space structures, the definition of minimal "basic" manufacturing capabilities permitting sustained growth[5], specific manufacturing processes, automation, net materials flows across different processes, and so forth.

ADP has demonstrated an ability to automate certain manufacturing processes [6] which had been impervious to automation using conventional control and pure rule-based systems; Neurodyne, for example, has extended its earlier work on manufacturing composite parts through to applications in semiconductor manufacturing, a success which has aroused great interest at SEMATECH. The work by AAC[4], by Jameson[7] and by many others[6] on ANNs to control robot arms is also potentially relevant. Likewise, the use of neural network research to better understand the brain may possibly have implications for the human ability to live and work in space, because the human nervous system plays a central role in the process of adapting to space.

                In summary, ADP and related techniques may play a critical role in overcoming the most urgent barriers to a "space economy," and a useful supporting role (possibly critical -- we don't yet know) in overcoming others. The most urgent and well-defined tasks involve flight control in the larger sense (including integrated control of propulsion, avionics and of temperature).

 

4. BENEFITS TO THE ENVIRONMENT,

TO SUSTAINABLE DEVELOPMENT ON EARTH

 

                Sustainable development on earth is also a leading strategic priority for policy at a national level, cutting across all agencies [8]. Current concerns about sustainable development are in some ways an outgrowth of the old Gore-Gingrich bill for a national foresight capability, a bill which did not pass, but which nevertheless left its mark on thinking at the highest levels of both political parties.

                Traditionally, NASA's primary response to this priority has been to expand activities in remote sensing, to permit better monitoring of the environment. The Division where I work (the Electrical and Communications Systems Division at NSF) also has a long-standing interest in basic research related to remote sensing. Unfortunately, the technologies related to remote sensing are extremely diverse and scattered; in developing research priorities, it is extremely difficult to overcome the laundry list (or Christmas tree) approach.

                When I was asked to look into this problem a few months ago, I decided to take what seemed an obvious approach: I simply walked down the hall to where they fund research in ecology, and asked them what would really be useful. The ecologists expressed (informally) great frustration with the way in which billions of dollars can be spent, justified on the basis of ecological problems, without anyone asking them what I came to ask them. Within the realm of remote sensing, their greatest need was for help in bridging the gap between voluminous, raw, physical data, on the one hand, and information, on the other. They wanted information on variables like species or genus proliferation, as a time-series. In short, their real need was for better pattern recognition or feature extraction, from extremely voluminous time-series data where computational throughput is a major part of the problem.

                This kind of pattern recognition is an ideal application area for ANNs. At this workshop, Leon Cooper (who earlier won the Nobel Prize for the BCS theory of superconductivity) reported great real-world success in applying ANNs to static pattern recognition systems, for clients like financial institutions and the IRS. Post Office officials have told me that the best existing ZIP code recognizers are based on ANNs, which, because of special chips, can also overcome the high-throughput bottleneck[9], without requiring costly hard-wired application-specific chips. (The adjustable weights in ANN chips make them usable on multiple applications, and even permit remote "reprogramming" based on telemetry.) Remote sensing is more difficult, because the patterns there are highly dynamic; however, this merely indicates a need to use ANN designs from the neuroidentification literature [6, ch.10]. Neuroidentification is important as well to advanced forms of ADP.

                In the past, the most advanced work in neuroidentification has occurred in the chemical industry. (Also, there is work by Principe, Fernandez and Feldkamp of importance here.) This area was not well represented in this conference; there was only one very formal paper on time-lagged recurrent networks, a paper which provides some theoretical support for designs which have already proven very successful in practical applications. However, the synergy between ADP and neuroidentification is great enough that groups could be formed in the future which are world-class in both areas -- neurocontrol and neuroidentification -- if this initiative encourages such development. The development of such groups is of crucial importance to the scientific development of the ANN field.

                Nevertheless, from a strategic point of view, there are a few of us who would question the relative importance of this activity, compared with certain other activities. Is it not more important to improve the environment, and create sustainable production technologies, than to monitor the damage already done?

                As an example, what if we could reduce the wastes from chemical plants by a factor of two or more, using intelligent control, while actually reducing costs through greater efficiency? ANNs can be used in such applications, but the private sector is already doing very well in that kind of research [6,ch.10], and NSF and EPA already have mechanisms to fund it. It is not an obvious target of opportunity for NASA at present.

                On the other hand, motor vehicles are also a major source of pollution on earth. Transportation, in general, is the main reason for our nonsustainable dependence on oil, which poses large immediate problems for national security. NSF also has an active role in supporting the application of ADP to automobiles[10], within the context of the larger Partnership for a Next Generation Vehicle, a major Presidential initiative.

                Despite the size and scale of this initiative, a new initiative at NASA Ames could have major spinoff benefits to PNGV, simply because the technological needs are so similar. Having funded work related to both hypersonic vehicles and to PNGV, I am amazed at the structural similarity of the technical challenges and management issues involved. With PNGV, optimal real-time control under noise, minimizing fuel use (and pollution), is a central issue; likewise, special chips are called for. (In testimony in the summer of 1993 to Marilyn Lloyd's committee in the House, Phil Haley -- then representing General Motors -- testified that "integration and control" was the main technical challenge outstanding in building a marketable fuel-cell car. Bench-scale work by Neurodyne, funded by NSF, suggests that an ANN controller can convert even existing cars to ultralow emission vehicles; tests on an actual Saturn engine are planned for early 1995.) In both applications, the optimization tools might even be used at the design stage, if ways can be found to hook up ADP to the CAD/CAM software. It is easy to imagine both applications reinforcing each other by supporting the development of dual-use integrated, modular software packages, in small companies working on both applications.

                Neurodyne and McDonnell-Douglas also showed that ADP designs can automate the continuous production of carbon composite parts, a problem which did not yield to earlier efforts using more conventional methods and AI [6]. This work was suspended, in part because of cutbacks in submarine programs which helped support it. However, carbon composite parts are also important to the cost of aircraft and to the PNGV initiative. Dr. Rashid of USCAR has described them as absolutely essential to the President's goal of improving fuel efficiency three times over. Resurrecting this work and bringing it to fruition should be given serious consideration as an add-on to NASA work in this field.

                Ultimately, sustainable development involves more than just pollution and natural resources. Human resources and population are also critical. In the recent UN conference in Cairo, it was widely agreed that improvements in education worldwide (with special emphasis on female education in poorer countries) will be crucial to all of these human variables. ANNs will not be crucial to such developments, of course. But HPCC -- high-performance communications and computing -- may in fact offer us a chance to create a leapfrog in the level of education worldwide. ANNs could perform a useful supporting role to HPCC in that context. For example, it is quite possible -- but unproven -- that compression ratios for voice and video might be improved by a factor of 2 or more, if ANNs were used to learn optimal compression algorithms.

                Most people attempting data compression by ANN have used a simple encoder/decoder design described by Hinton in 1987, or a slight generalization of that design. (Such designs are sometimes called "autoassociators.") That design was purely static; it is not surprising that it does less than an optimal job of extracting patterns from time-series information like speech or video. In 1988 [11], I described how one might generalize such designs, to account for dynamics as well. But it turns out that all of these designs have fundamental mathematical problems, which may explain the difficulties people have had in using them on real-world compression applications. In 1992 [6,ch.13], I developed a new design -- the Stochastic Encoder/Decoder/Predictor -- which overcomes these mathematical problems.
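The generic encoder/decoder/predictor layout can be sketched in a few lines. The following is my own minimal, deterministic illustration of that layout only -- it is not the Stochastic Encoder/Decoder/Predictor of [6,ch.13], which adds essential noise terms -- with linear maps, invented synthetic data, and invented learning rates.

```python
import numpy as np

# Minimal deterministic encoder/decoder/predictor sketch. All specifics
# (data model, dimensions, rates) are invented for illustration; this is
# NOT the Stochastic Encoder/Decoder/Predictor, which adds noise terms.
rng = np.random.default_rng(1)
c = np.array([1.0, -0.5, 0.3, 0.8])         # hidden loading vector
T = 4000
s = np.zeros(T)                             # hidden AR(1) driver of the data
for t in range(1, T):
    s[t] = 0.9 * s[t - 1] + 0.3 * rng.standard_normal()
X = np.outer(s, c) + 0.02 * rng.standard_normal((T, 4))

e = 0.1 * rng.standard_normal(4)            # encoder:   r     = e . x
d = 0.1 * rng.standard_normal(4)            # decoder:   xhat  = d * r
p = 0.0                                     # predictor: rhat' = p * r
lr, lam = 0.01, 1.0

for epoch in range(30):
    for t in range(T - 1):
        x, xn = X[t], X[t + 1]
        r, rn = e @ x, e @ xn
        rec_err = x - d * r                 # reconstruction error (vector)
        pred_err = rn - p * r               # prediction error in code space
        # joint gradient descent on |rec_err|^2 + lam * pred_err^2
        e -= lr * (-2 * (rec_err @ d) * x + 2 * lam * pred_err * (xn - p * x))
        d -= lr * (-2 * rec_err * r)
        p -= lr * (-2 * lam * pred_err * r)
```

The prediction term forces the code r not only to reconstruct each frame but to evolve predictably over time, which is what a static autoassociator lacks; here the learned p should recover the 0.9 autoregression of the hidden driver.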

                Because of severe limitations on my personal time and my program budget, I have not been able to pursue the resulting possibilities. (In fact, this would also require state-of-the-art insight into how to structure the various component networks and test problems, and extensive computer equipment; some consideration of Shannon coding and adaptive wavelets should probably enter in as well.) Perhaps a NASA initiative could follow through on these important but unexplored possibilities. Improving compression ratios by a factor of two could cut the cost of voice and video access by a factor of two; this, in turn, would significantly improve the chances of wiring up large parts of the developing world.

                In the long-term, ANNs could also help as tools in the design of intelligent agents for educational software, and in the understanding of the human minds to be educated. Because these are very complex, multidisciplinary areas, the short-term potential is difficult to predict. Just as Maxwell's Laws eventually became crucial to our understanding of molecules (molecular orbits and states), a full understanding of learning at the neural network level will eventually be of enormous importance to education; however, the path from here to there is not a quick and narrow line.

                In summary, a NASA initiative on flight control could be very beneficial to sustainable development, in a significant and probable way, by supporting technology and tools of relevance to the Next Generation Vehicle and to manufacturing in general. A slight expansion in scope (still without straining the intellectual unity of the effort) could help in remote sensing applications and data compression applications, of importance to HPCC and to sustainable development in general. Additional long-term benefits are possible, related to the educational process as such, but are harder to predict; they may be considered as areas for expansion for the initiative, after it has built a sufficient base.

 

5. BENEFITS TO BASIC SCIENCE

 

                Fundamental progress in basic science is itself a strategic priority, according to the Administration's recent science policy plan and the statements of the two previous Administrations.

                There are many different concepts of what it means to support basic science. To some, it means supporting the entire national base of knowledge in all fields. Many of us prefer an alternative definition: truly basic science is a strategic effort to understand the underlying, unifying mathematical principles which lie at the base of everything else. Some have argued that there are really only four fundamental questions here:

                (1) What are the underlying laws of physics?

                (2) What is the structure of the universe, the space in which these laws operate?

                (3) What are the mathematical principles underlying the phenomena of intelligence or mind?

                (4) What are the mathematical principles underlying the phenomenon of life (or of self-organizing systems in general)?

Many of us became interested in neural networks entirely because of their importance to question number 3.

                From a NASA perspective, ANNs might also be useful in supporting projects relevant to (1) and (2). For example, recent experiments on the Hubble telescope regarding the age of the universe have led to graphic and even startling results, which could have very large implications[12]. If the stabilization control of that telescope (or of others) could be improved significantly, using ADP, this could be very exciting. Unfortunately, those experts I have spoken to tell me that stabilization is not a limiting factor at present in such instruments. Likewise, Roger Angel has said that ANN-based adaptive optics will always be far more important to earth-based telescopes than to space-based telescopes, because of the greater noise and complexity of the former. If there should be exceptions to these rules, however, then ADP or other ANN designs might well be useful.

                Question number (3) above is the more serious area of benefit here.

                Recent efforts in neuroscience suggest the possibility of a true Newtonian revolution in our understanding of the brain [13,1]. Prior to Newton, physics -- like neuroscience today -- was essentially a phenomenological field of research, with lots of empirical results (some quite quantitative) but no real mathematical, scientific unity. In the past, many researchers have despaired of achieving a similar unified understanding of intelligence in the brain; the sheer complexity of the brain seems to preclude the development of simple, unifying principles. However, consider our analogy to Newton: Newton did not find an elegant way to summarize the complex initial conditions of the physical universe; he achieved a unification (for gravity) by changing the focus of attention towards the dynamic laws which govern changes in the state of the universe. In a similar way, there is evidence that the dynamics of learning in the brain apply in a uniform, modular, flexible way within all the major components of the brain, such as the cerebral cortex.

                Substantial efforts have gone into computational, mathematical models of learning in the brain in recent years. However, the bulk of these models have been bottom-up efforts, rooted in very detailed models of membrane chemistry, with very little systems-level integration or consideration of other features of the physiology. Models of this sort typically do not replicate the very high level of engineering functionality that we know is present in the brain.

                Researchers in psychology have argued that even a minimal model of brain-like intelligence must include three basic elements:

                (1) An "emotional" or "affective" or "secondary reinforcement" or "value-calculation" system. Such                     a system would evaluate objects or variables in the external world, so as to assess their value              -- positive or negative -- to the goals of the organism.

                (2) An "expectations" or "prediction" system.

                (3) An "action" or "motor" system, which sends signals to muscles or actuators

                      (or to simple postprocessors controlling muscles or actuators) so as to

                      maximize the values calculated by the "emotional" system.
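In software terms, the three elements above might be caricatured as follows; every function in this sketch is an invented placeholder, standing in for a learned network (or other adaptive structure) in a real design.

```python
import numpy as np

# Caricature of the three components of a brain-like controller; each
# function is an invented placeholder for a learned, adaptive structure.

def critic(state):
    """(1) 'Emotional'/value system: scores how bad a state is."""
    return float(np.sum(state**2))

def model(state, action):
    """(2) 'Expectations' system: predicts the next state."""
    return 0.9 * state + 0.5 * action      # invented dynamics

def actor(state):
    """(3) 'Action' system: chooses actions to improve the critic's score."""
    return -0.5 * state                    # invented policy

x = np.array([1.0, -2.0])
u = actor(x)
x_next = model(x, u)
# In a full ADP design: the model is fit to observed data, the critic is
# trained by a dynamic-programming recursion, and the actor is trained to
# minimize immediate cost plus the critic's evaluation of the model's
# predicted next state.
```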

So far as I know, Grossberg and his coworkers (Levine, Schmajuk, Waxman) are the only people in the psychological community who have ever formulated a model of intelligence in the brain incorporating all three elements, with all three elements governed entirely by some kind of generalized neural learning algorithm. However, this portion of Grossberg's work has never demonstrated real engineering functionality. There is reason to suspect that it would require substantial upgrading -- informed by engineering-based ANN studies -- in order to pass this test. [14; 1, ch.10]. Explaining the functionality of the brain is really the core of the problem, in explaining intelligence. Some have argued that Grossberg's approach is at least "unsupervised," in some technical sense; however, this is equally true for the engineering-based designs discussed below. See [15] for deeper discussions of the relation between engineering functionality and human psychology.

                In the ANN engineering community, several ADP designs have been developed which meet all three criteria above, motivated by the requirements for greater engineering functionality. These designs already appear to offer a first-order understanding of how the brain "works" -- how it achieves that basic capability we call "intelligence." [16]

(This statement should not be interpreted as an endorsement of the claim that all intelligence lies in the brain; however, controversies regarding the nature and existence of the soul are beyond the scope of this paper[15].)  To take this process further, and develop a more serious second-order understanding of the match between ADP and specific connections and cell types in the brain, would require a substantial expansion in the number of people who fully understand these kinds of three-component designs[16,17]. Furthermore, appropriate studies of the brain itself could yield ideas for better and more powerful ADP designs, if the teams doing this research include some intellectual leadership from engineers fully versed in ADP, who know what to look for. Thus it would be appropriate to include collaborative research of this sort in the initiative as well, at least after the basic ADP capability is consolidated. From a NASA viewpoint, there might be particular interest in parallels between artificial control and natural motor control in the cerebellum, which acts as a kind of buffer -- like a teleoperation system -- between the higher parts of the brain and smooth, coordinated movements like flight control in the bird[13].

                Prior to this workshop, there were only two published examples of three-component ADP designs running successfully -- a 1993 report from Jameson[7] (of Jameson Robotics, formerly of Lockheed Texas) and a brief 1994 paper by Santiago and myself[18]. There are four new examples appearing in this conference proceedings, all showing substantial results on difficult flight control problems: (1) Wunsch and Prokhorov; (2) Santiago; (3) Pap and Cox; and (4) Balakrishnan. Jameson's work showed that a three-component design can solve the problem of controlling a non-Markovian simulated robot arm, a problem which (to his great disappointment) he could not solve by using even the best of the two-component designs (essentially what Neurodyne has used). Wunsch and Prokhorov have reported a similar finding for a stiffened, more difficult version of the autolander problem published in [19], supplied by C. Jorgensen of NASA Ames. (More precisely, they report a 100% failure rate for the two-component ADP design and conventional controllers, and an 80% success rate for the three-component design, using a loose definition of "success" in both cases.) Santiago reports significantly better results yet on the same problem when he uses DHP, the most advanced three-component architecture implemented to date. (Santiago's company holds a patent pending on DHP and several related designs, but is currently authorizing use of DHP at no cost, conditional on citation of these facts.)

                Balakrishnan and Pap and Cox have also reported great success in using DHP.

Balakrishnan uses a special, simple form for the "value" or "critic" network, which is not an ANN in his case. This underlines the fact that ADP designs are generic learning designs which can be applied to all kinds of sparse or simple nonlinear structures, not only ANNs. He reports substantial success on the missile interception problem, compared with well-tested and well-known conventional algorithms for that problem. Of course, missile interception is a flight control problem of serious strategic importance. Pap and Cox reported a high level of success in using DHP to control a prototype hypersonic vehicle, as discussed in section 3; their talk put more emphasis on the application itself, but I hope that their proceedings paper will give some of the neural network details as well. (As is common in real-world projects, however, I would expect a complex array of ANN designs to be used on different aspects of the problem at different stages.)
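The distinguishing idea of DHP can be illustrated in scalar form. The sketch below is my own hedged illustration of that one idea -- the critic estimates the derivative of the cost-to-go, lambda(x) = dJ/dx, rather than J itself -- using an invented linear plant and quadratic cost; real DHP implementations are multivariate, and usually (though, as Balakrishnan shows, not necessarily) use ANNs for the critic.

```python
import numpy as np

# Scalar illustration of the DHP idea: the critic learns lambda(x) = dJ/dx.
# Plant, cost, and learning rates are invented for this sketch.
a, b, gamma = 0.9, 0.5, 0.95       # known linear plant: x' = a*x + b*u
w = 0.0                            # critic weight: lambda(x) ~ w * x
k = 0.0                            # actor gain:    u         = -k * x

rng = np.random.default_rng(0)
for episode in range(2000):
    x = rng.uniform(-1.0, 1.0)
    for t in range(20):
        u = -k * x
        x_next = a * x + b * u
        lam_next = w * x_next
        # DHP critic target: total derivative of U + gamma*J(x') w.r.t. x,
        # propagated through the policy (du/dx = -k) and the model:
        lam_target = 2 * x + 2 * u * (-k) + gamma * lam_next * (a - b * k)
        w += 0.05 * (lam_target - w * x) * x
        # Actor: descend dJ/du = dU/du + gamma * lambda(x') * dx'/du
        dJ_du = 2 * u + gamma * lam_next * b
        k += 0.02 * dJ_du * x
        x = x_next
```

Compared with a critic that learns J directly, the derivative targets give the actor exactly the gradient information it needs, which is one reason DHP-style designs tend to learn control laws more precisely.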

                Certain concepts from classical artificial intelligence (AI) could also be very useful here, if they could be assimilated into more neural designs, in a more brain-like context. Possibilities of this sort look very promising [17,20,6 ch.13], but the ADP work needs to be extended further, first, in order to supply the context. The most definitive description of ADP designs is still in [6]; however, Santiago tells me that chapters 8 and 9 of [1] are valuable as an introduction or prerequisite to some of the more complex ideas in [6].

                References [6, ch.13] and [20] describe how ADP designs -- with certain modifications -- could also solve AI-like planning problems. One might therefore imagine using them on problems like global coordination in strategic defense. Unfortunately, the required design modifications yield a degree of autonomy that makes these designs less predictable than the designs required for flight control. There is good reason (e.g., various Lipschitz criteria) to expect that ordinary ADP systems, when well-designed, will actually be more stable than conventional adaptive controllers [17]; however, the modifications required for the global coordination problem erode these properties and -- in my view -- imply a degree of hazard too great for safe operation, in an application where computers might order attacks on human beings.

                In summary, the papers presented to this conference already mark a major watershed in the development of those kinds of designs which will be crucial to our understanding of intelligence in the brain. For the sake of basic science, it is crucial that this promising start be followed up on (albeit not in the application of global coordination of strategic defense).

 

6. RECONFIGURABLE FLIGHT CONTROL: PRACTICAL ISSUES

 

                Work on reconfigurable flight control is clearly the keystone of the current ANN work at Ames. Extensive presentations were made at this conference by McDonnell-Douglas, by Lockheed, and by people at NASA working with McDonnell-Douglas.

                The challenge here is simply to cut in half (or more) the probability of losing an aircraft when that aircraft is subject to an "involuntary change in configuration" -- a wing being shot off, or the kind of glitch which causes commercial aircraft to crash. This is an excellent initial testbed for ANN-based flight control, because the issue of formal stability proofs is obviously not central; even a 50% success rate would still be substantially better than the status quo in terms of safety. Reducing losses in war by 50% or more would have substantial military implications.

                At the same time, there are serious issues regarding realism here. For example, one video shown was really quite comical for those familiar with what happens in warfare. It showed two simulated airplanes flying side by side, one equipped with a reconfiguration box and one without. At a certain point, the announcer stated that a fault was initiated, such that the controller lost access to certain actuators, drawn from a predesigned menu of a dozen or two fault configurations. Actual flight parameters, like drag coefficients, were assumed to be unaffected. One aircraft veered off course, while the other simply continued as if nothing had happened at all, without any change in attitude or velocity. If only Saddam Hussein could be persuaded to develop guns and missiles which have that kind of gentle effect when they hit!

                The technical strategy presented by Jim Urnes of McDonnell-Douglas was considerably more sophisticated than that video (which came from a different presenter). Urnes' strategy will almost certainly work, permitting substantial safety benefits to users of F-15s and commercial aircraft. However, there is also room to get still greater performance in this critical area by expanding the program to include an additional phase of effort, one which would be very compatible with the testing concerns expressed at this conference by NASA Dryden.

                Urnes' strategy involves two "phases." Both phases can be pursued concurrently, but it is expected that Phase I will be completed sooner than Phase II. In Phase I, an ANN is trained to input sensor data and output an estimate of the matrices A and B in the simple, linear classical model:

                                                dx/dt = Ax + Bu

where x is the state of the aircraft and u the state of the controls. The estimates of A and B are then fed into a classical linear-quadratic optimal controller, of the form given in the classical textbook of Bryson and Ho[21]. Because the controller itself is a classical controller, not itself changing over time, the speaker from NASA Dryden said that this arrangement will be much easier to flight-certify than any design involving true real-time learning in the controller itself.
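The Phase I pipeline -- identify A and B, then compute a linear-quadratic controller in the Bryson-and-Ho style -- can be sketched as follows. The matrices and cost weights below are illustrative placeholders, not any real aircraft's values; in Phase I, A and B would come from the trained identification ANN:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Solve the continuous-time algebraic Riccati equation for
    dx/dt = A x + B u with cost integral of (x'Qx + u'Ru) dt,
    and return the optimal feedback gain K, where u = -K x."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Illustrative 2-state, 1-input linear model (placeholder numbers);
# in Phase I these matrices would be estimated online by the ANN.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)   # state-error cost weight
R = np.eye(1)   # control-effort cost weight

K = lqr_gain(A, B, Q, R)
u = -K @ np.array([1.0, 0.0])   # feedback control for a given state x
```

Because K is recomputed from the identified A and B rather than learned online, the controller itself remains a fixed, analyzable object -- the property that eases flight certification, as the NASA Dryden speaker noted.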

                Phase II is planned to use a true real-time learning system, in collaboration with Neurodyne. Neurodyne is a small company, historically linked to McDonnell-Douglas, initially started up by a Small Grant for Exploratory Research (SGER) from NSF. In 1992, Neurodyne's ADP designs were by far the most advanced and brain-like in operation in the world; White and Sofge at Neurodyne were the editors of the Handbook of Intelligent Control [6], which is still the most definitive source (at least until this conference) on all aspects of ADP design. Their designs are essentially just one step before the three-component designs discussed at this conference. (Neurodyne is also developing a neuroidentification capability, which is crucial to the three-component designs and to more conventional control approaches, but this is still in the research stage.) Unlike the three-component designs (except perhaps the AAC example), their designs have been proven successful on a variety of real engineering tasks, not just simulations of those tasks. In simulated tests of real-time learning, White and Urnes showed readaptation within two seconds to an involuntary change in aircraft configuration, using the Neurodyne designs. However, Urnes has reported delays more like 10 seconds to a minute in more recent wind tunnel tests. There is considerable work to be done in bridging the gap between the simulated problem and the tough, actual problem, and in figuring out how to flight-qualify the result. There is tremendous potential here, but also a serious potential for delay.

                The Phase I approach involves certain limitations which can be overcome even before going to Phase II. Therefore, I would propose an expansion of this program -- using new money, rather than doing anything to cut back on the existing well-conceived efforts -- to create a "Phase One-and-a-half," or Phase IB.

                The most serious limitation with Phase I is the assumption that linear equations describe the behavior of an aircraft after it has been hit. In Phase I, the matrices A and B are based on a linear approximation centered on the optimal, stable, equilibrium attitude and velocity (or, at least, the desired attitude and velocity). But after being hit by a gun or a missile, the aircraft is not likely to have anything close to the optimal or desired attitude or velocity. Furthermore, the real challenge in saving an aircraft is to change its state from a highly undesirable state to something in the general vicinity of the desired state; thus quality or accuracy of control is most critical in the region far away from the desired state. Finally, from listening to Urnes, I have the impression that McDonnell-Douglas does have nonlinear aircraft models which are likely to remain reasonably valid (though with changed parameters, and some need to add noise terms) even after the aircraft is hit.
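The limitation is easy to see numerically. In the toy scalar example below (a made-up sine nonlinearity, standing in for any smooth aircraft nonlinearity -- not an actual aircraft model), the linearization about the equilibrium is excellent nearby but badly wrong in the large-deviation regime where a damaged aircraft actually lives:

```python
import math

# Placeholder pitch-like dynamics: xdot = -sin(x), a nonlinear restoring
# term. Linearizing about the equilibrium x = 0 gives xdot = -x.
def nonlinear(x):
    return -math.sin(x)

def linearized(x):
    return -x

# Near the equilibrium, the linear model is nearly exact...
small = abs(nonlinear(0.1) - linearized(0.1))   # tiny error

# ...but far from it -- the post-damage regime -- the error is large.
large = abs(nonlinear(2.0) - linearized(2.0))   # error exceeds 1.0
```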

                Based on these considerations, I believe that aircraft recovery could be improved substantially if, in Phase IB, we used a McDonnell-Douglas nonlinear, stochastic model instead of the current linear model. It is still a significant task to develop an ANN to estimate the parameters of the model, just as in the existing Phase I. (Actually, the ideas in [6,ch.10] might be used to improve this component; again, the noise parameters also require some consideration, for optimal performance.) But then we face an interesting task: how to design an optimal controller, offline, like the Bryson and Ho controller, but optimal for the actual nonlinear stochastic model -- so as to permit better recovery even when the aircraft starts out with a bad attitude. DHP provides precisely this capability.

                Unlike the two-component ADP designs, DHP is a model-based design, whose behavior could be based entirely on the McDonnell-Douglas model (even though that model is not an ANN). Noise in the system and in the parameters can be used (by analogy with recent work by Feldkamp of Ford Motor Company [22]) in the offline simulations, in order to be sure that the resulting controller is robust with respect to the details of the model. This general two-step approach would be exactly like McDonnell-Douglas' existing Phase I approach, except that it generalizes that approach to the nonlinear case. As with the Phase I design, it involves the offline development of the controller, which should minimize the problems with flight testing and verification. From a scientific viewpoint, this would also be quite interesting, since it would actually use a more brain-like kind of design, even though adapted offline. (Also, there is clearly room to perform this task at different levels -- quick-and-dirty and very thorough.)
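The Feldkamp-style robustness idea can be sketched as follows: every offline training or evaluation episode draws the model's parameters (and process noise) afresh from assumed distributions, so that the resulting controller is tuned to a family of plausible post-damage aircraft rather than to one nominal model. The plant, parameter ranges, and fixed feedback law below are all placeholders for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

def nonlinear_model(x, u, params):
    """Placeholder nonlinear plant -- not a real aircraft model."""
    drag, gain = params
    return x + 0.1 * (-drag * x * abs(x) + gain * u)

def sample_params():
    """Draw perturbed parameters each episode (Feldkamp-style), covering
    a range of plausible post-damage parameter values."""
    drag = rng.uniform(0.5, 1.5)   # nominal 1.0, perturbed +/-50%
    gain = rng.uniform(0.6, 1.4)   # nominal 1.0, perturbed +/-40%
    return drag, gain

def episode_cost(controller, steps=50):
    """Run one offline episode with freshly perturbed parameters and
    additive process noise; return the accumulated quadratic cost."""
    params = sample_params()
    x, cost = 1.0, 0.0
    for _ in range(steps):
        u = controller(x)
        cost += x**2 + 0.1 * u**2
        x = nonlinear_model(x, u, params) + 0.01 * rng.normal()
    return cost

# Example: evaluate one fixed feedback law across the parameter family.
costs = [episode_cost(lambda x: -2.0 * x) for _ in range(200)]
mean_cost = np.mean(costs)
```

In the actual Phase IB proposal, a DHP-trained controller would replace the fixed feedback law here, and the McDonnell-Douglas nonlinear stochastic model would replace the placeholder plant.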

                The Phase IB approach does have one advantage over the Phase II approach: it implicitly uses experience (albeit simulated) to learn the specifics of how to cope with fluctuations in specific, familiar parameters like drag coefficients; it does not try to relearn the whole structure of the aircraft from scratch in real time. In general, real-time learning is necessary only for coping with unprecedented fundamental, structural changes which cannot be represented by a simple change in parameters; for more familiar changes, it is better to use higher-order approaches (such as TLRN controllers[6, ch.3.3.3]) rather than real-time learning. (The system presented by Lockheed at this conference also exploits this kind of principle; however, the Phase IB approach proposed here would be more flexible in handling a wide spectrum of faults.)  The true optimum, like the human brain, would combine both kinds of learning together, to cope with both kinds of shocks; thus eventually, to really minimize the loss of aircraft, we may need a Phase IIB which combines Phase IB and Phase II together. Some early research at universities may be a good way to help prepare for that long-term possibility. For now, however, addressing Phase IB is the most urgent possible addition to this program.

 

7. MANAGEMENT ISSUES

 

                This paper has argued that there is substantial potential for NASA activity in this area to have large long-term benefits. However, there are substantial management challenges which need to be addressed, in order to attain this potential.

                First of all, it is essential that the leaders of this effort perceive a political mandate to do their best in achieving these benefits, and the possibility of a real budget to do so.

In theory, the Civil Service system is supposed to give us the kind of job security which empowers us to simply go out and do what will benefit the nation and the world to the maximum, without being distracted by self-interest, and without being paralyzed by fear or cynicism. At NSF, this does still seem to work, to a reasonable extent, much of the time. However, budget cutbacks and management cutbacks at most other agencies -- including NASA -- make it extremely important that new efforts be encouraged in a more explicit manner. Fortunately, there are two key players in the political process here -- Vice-President Gore and Congressman Gingrich -- who have established a firm commitment to the long-term goals in government R&D, and to the primacy of high-risk high-potential research. It should be possible for the management at Ames to create bipartisan channels of communication which support a stable, conscientious effort in these directions. Ties to larger expanded efforts in hypersonic flight and SSTO (issues clearly large enough to already be on the radar screen of high-level people) could be very helpful in accomplishing this. Historically, of course, the NASP hypersonic program began as a major Reagan initiative; a reinvigorated hypersonic program, with technological links to Clinton's PNGV initiative, might well be a relatively stable, bipartisan context for these critical research activities.

                Second, it is crucial that the leaders of this effort maintain a firm grip on the long-term goals, even when this requires funding high-risk efforts. In the NSF Clean Car SBIR effort, we have asked our reviewers to use a simple criterion: projects which have a very high probability of improving existing electric cars, but no chance of giving us a breakthrough into the performance region required by the consumer, are not to be funded; proposals with lots of loose ends, and a significant chance of failure, but also a serious hope of getting us closer to the kind of car the consumer really needs, will have a high probability of being funded. The point is that we cannot allow short-term risk to cause us to drift away from the long-term objective. Developing a Mach 26 airplane will require a similar degree of firm resolution and willingness to accept that some risk is inevitable.

                There were many excellent, low-risk ANN projects also presented at this conference. Those projects are proper for the local mission offices tasked to perform the relevant applications. However, NASA's intellectual center for ANN expertise should be leading the agency in developing more advanced designs, in a unifying framework. The ADP approach does provide a unifying framework, tilted towards the biggest-payoff applications.

                Strictly speaking, the nature of the risk here varies greatly from application to application. For example, there is excellent reason [1,16,17] to believe that some kind of neural network model will work, eventually, in explaining intelligence in the brain. Also, the existence proof provided by the brain gives us good reason to believe that some kinds of neural network design can, at least, solve the current kinds of engineering control challenges -- challenges requiring optimization in a noisy, nonlinear world, involving a few dozen variables. The risks involve phenomena like debugging, institutional issues, potential delays, and so on; the ultimate technical feasibility of these applications is actually not so much in doubt. It is legitimate to give priority to these kinds of applications for now. Applications like video compression, however, are harder to predict in advance; there is little solid basis for guessing how large an improvement in compression ratios is possible. (It might be a few percent; it might be an order of magnitude.) Complex applications like the intelligent agent are even harder to scope out in advance.

                Success in this effort will also require an expanding pipeline of properly trained students, to work in the relevant companies and agencies. This requires a university basic research effort, to tie into the effort at NASA. One possible mechanism to accomplish this would be a joint NASA-NSF initiative, patterned on the many existing DOD-NSF joint initiatives. Because of the time factor, I have not asked NSF management for any feedback on this document; however, as a matter of general principle, NSF management does strongly support collaboration with other agencies, especially when such collaboration tends to create a pipeline all the way from undergraduate education and fundamental scientific research through to measurable impacts on national strategic priorities.

                NSF has already set up several cross-Directorate initiatives which can fund engineering-neuroscience collaborations (among others). The neuroscience community has already indicated a serious interest in working together with engineers; the challenge to us as engineers is to develop the level of capability needed for a more serious approach to the kinds of capabilities seen in the brain. A NASA-NSF joint initiative could naturally tie in with these existing initiatives, and bolster their engineering content, should NASA choose to be a player in those kinds of issues.

 

REFERENCES

 

[1] Werbos, P., The Roots of Backpropagation: From Ordered Derivatives to Neural Networks and Political Forecasting, Wiley, 1994.

[2] Settling space: the prime objective. In America's Future in Space: A Briefing Book of the NSS Family of Organizations. Washington DC: National Space Society, 1989.

[3] O'Neill, Gerard K., The High Frontier. New York: Morrow, 1977.

[4] M. Adkins, C. Cox, R. Pap, C. Thomas and R. Saeks, Neural joint control for space station robotic manipulator system, Proc. of 1992 IEEE/RSJ International Conference on Intelligent Robots, 1992.

[5] R. A. Freitas and W. Gilbreath, eds., Advanced Automation for Space Missions, NASA Conference Publication 2255, 1982.

[6] D.White and D.Sofge, eds, Handbook of Intelligent Control, Van Nostrand, 1992.

[7] J. Jameson, Examples of continuous reinforcement learning control, in C. Dagli et al., eds., Intelligent Engineering Systems Through Artificial Neural Networks, Vol. II, NY: ASME Press, 1993. (ANNIE 1993 Proceedings.) (Statements in paper based on personal communication, 1993.)

[8] A.Gore, Earth in the Balance, Houghton-Mifflin, 1992.

[9] L.D.Jackel et al, Hardware requirements for neural-net optical character recognition, IJCNN90 Proceedings, IEEE, 1990, p.II-855-II-861.

[10] Technologies relevant to next generation vehicles, Small Business Innovation Research (SBIR), NSF 94-45. Arlington, VA: National Science Foundation, 1994, p. 60-62.

[11] P. Werbos, Backpropagation: Past and future, ICNN Proceedings, IEEE, 1988. A transcript of the talk with slides (including diagrams of these designs) is available from the author.

[12] P. Werbos, Self-organization: Re-examining the basics and an alternative to the Big Bang. In K. Pribram, ed., Origins: Brain and Self-Organization, Erlbaum, 1994.

[13] P. Werbos and A. Pellionisz, Neurocontrol and neurobiology, IJCNN92 Proceedings, IEEE, 1992.

[14] N. Schmajuk, Stimulus configuration, classical conditioning, and spatial learning, WCNN94 Proceedings, Erlbaum, 1994, p. II-723-728.

[15] D. Levine and W. Elsberry, eds., Optimality in Biological and Artificial Networks?, Erlbaum, forthcoming (1995).

[16] P. Werbos, The brain as a neurocontroller: New hypotheses and new experimental possibilities. In K. Pribram, ed., Origins: Brain and Self-Organization, Erlbaum, 1994.

[17] P. Werbos, Control circuits in the brain: Basic principles, and critical tasks requiring engineers. In K. S. Narendra, ed., Proc. of 8th Yale Workshop on Adaptive and Learning Systems. New Haven, CT: Prof. Narendra, Dept. of Electrical Eng., Yale U., 1994.

[18] R. Santiago and P. Werbos, New progress towards truly brain-like intelligent control, WCNN94 Proceedings, Erlbaum, 1994, p. I-27 to I-33.

[19] W. Miller, R. Sutton and P. Werbos, Neural Networks for Control, MIT Press, 1990. (The paperback edition, 1994, contains fixes to some of the pseudocode in this book.)

[20] P. Werbos, Supervised learning: Can it escape its local minimum?, WCNN94 Proceedings, Erlbaum, 1994, p. III-358 to 363.

[21] A.E.Bryson and Y.C.Ho, Applied Optimal Control. Ginn, 1969.

[22] L. Feldkamp, Puskorius, Davis and Yuan, Enabling concepts for applications of neurocontrol, in K. S. Narendra, ed., op. cit. [17].