Hey, I'm Dan! I'm the CEO of Plus and a venture partner at Madrona. I write the DL, a newsletter about tech in the Pacific Northwest.

Boeing's 737 MAX and what it means for AI

A couple of weeks ago, the NY Times published an incredibly thorough investigation into the Boeing 737 MAX. It’s written by a pilot-turned-journalist who goes against the prevailing opinion, arguing that poorly trained pilots, not the plane’s systems, caused the crashes.


Interestingly, on the same day, the New Republic also released a report with the opposite point of view - blaming Boeing and exonerating the pilots.


The 737 MAX investigation raises a lot of interesting questions about AI, so here is my two-minute summary of the situation and what it means for the future of automation:


✈️ Boeing vs. Airbus

  • Boeing’s basic philosophy is that the pilot always has “ultimate authority of control,” and pilots can override or turn off any system
  • Airbus, on the other hand, was founded 50 years after Boeing on the idea of a “robotic airplane”: digital flight controls and pilot-proof protections designed to require minimal piloting skill


🖥️ The MCAS

  • The Maneuvering Characteristics Augmentation System (MCAS) is the center of the 737 MAX investigation
  • The MCAS is a software fix to an aerodynamic problem; it creates synthetic control forces to mimic the aerodynamic behavior of earlier 737s, which let Boeing avoid having the MAX designated as a new model (and pilots retrained for it)
  • If the MCAS detected an excessively high angle of attack, it would automatically trim the nose of the plane down
  • If the MCAS was triggered by a false sensor reading, the behavior would present as a runaway trim, a problem that “any pilot would know how to handle” (a quick sketch of this failure mode follows this list)
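
To make that failure mode concrete, here’s a minimal, purely hypothetical sketch of how an automation loop keyed to a single sensor interacts with a pilot cutoff switch. This is not Boeing’s actual logic – the names, threshold, and units are my own illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class FlightState:
    angle_of_attack_deg: float   # reading from a single angle-of-attack sensor
    electric_trim_enabled: bool  # pilots can cut off electric trim with console switches

# Illustrative threshold only; the real trigger logic is far more involved.
AOA_TRIGGER_DEG = 15.0

def mcas_style_trim_command(state: FlightState) -> float:
    """Return a nose-down trim command (illustrative units), or 0.0 for no action."""
    if not state.electric_trim_enabled:
        # Cutting the electric trim disables the automatic system entirely.
        return 0.0
    if state.angle_of_attack_deg > AOA_TRIGGER_DEG:
        # A single bad sensor reading is enough to keep commanding nose-down trim,
        # which is why a false positive presents as a runaway trim.
        return -2.5
    return 0.0

# A faulty sensor stuck at a high value keeps the nose-down commands coming...
print(mcas_style_trim_command(FlightState(angle_of_attack_deg=40.0, electric_trim_enabled=True)))
# ...until the crew disengages the electric trim, which silences the system.
print(mcas_style_trim_command(FlightState(angle_of_attack_deg=40.0, electric_trim_enabled=False)))
```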


👨‍✈️ Pilot training

  • As flying becomes cheaper and the industry needs more pilots, training has become productionized, and schools don’t care about “airmanship”
  • For many students, flight training is about rote memorization of the steps for each flight or simulation (at one pilot training school in Indonesia, the completion rate is 95%)
  • Runaway trim, for example, is always part of Simulation No. 3, and no one ever has issues with it because it always occurs at the same time in the same way in the simulation


🚨 The accidents and the response

  • Oct 28, 2018 - A Lion Air plane sees errors from a faulty angle-of-attack sensor. The MCAS kicks in, and the plane pitches down. The pilot disengages the electric trim, disabling the MCAS, and everything is fine
  • Oct 29, 2018 - The same plane is flown again without repairs, and it crashes 12 minutes after take-off. The faulty sensor activates the MCAS, it continually forces the plane’s nose down, and the pilots do not disengage the electric trim
  • Mar 10, 2019 - Ethiopian Airlines 302 crashes for the same reasons – faulty sensor, MCAS engaged, electric trim not disengaged
  • Mar 16, 2019 - Within a week of the second crash, countries around the world ground Boeing 737 MAX flights


This is a very complex investigation, but I think the crux of the issue is automation. As we use more tools to automate and augment human work, who’s responsible if something goes wrong – the user? their manager? the trainer? the toolmaker? the regulators? all of the above?


Here are some of the questions I had reading through these articles:

What should we expect from humans? The way pilots describe the MCAS makes it sound like disengaging electric trim is as straightforward as guiding a car with lane-keeping back into a lane if it begins to drift. But who decides what to expect from users? Should humans always have “ultimate authority” over machines, or should there be “user-proof” protections?


Should training be focused on tools or theory? As automation moves up the stack, the debate between teaching theory and teaching tools will become more important. The question of “why do I need to learn math if I always have a calculator?” is going to become “why memorize the names of bones and muscles when I could spend that time in a surgical simulator?”


Are software problems different from hardware problems? The MCAS was developed because the MAX did not have the same aerodynamic properties as previous 737 models, so Boeing fixed it with software. That feels odd because testing is meant to identify these problems, but the processes for testing software are different from those for testing hardware (and testing “AI” is different again from testing traditional software).


Where will these types of issues pop up next? These situations will happen in healthcare, transportation, finance, manufacturing, and every industry that is attempting to automate or augment human work. The way the 737 MAX case is handled will set some important precedents for how to assign responsibility, how to thoughtfully design holistic systems, and what companies should expect from their partners.



Liked this article? Sign up for the DL, my weekly newsletter 📬
