they don't get it...

GroupDIY Audio Forum

JohnRoberts said:
This is not the same as VW cheating emissions testing.

It's exactly the same, John. Money rules. Engineers try to be safe, but it's the top that decides. And they only know one parameter: money.
 
cyrano said:
It's exactly the same, John. Money rules. Engineers try to be safe, but it's the top that decides. And they only know one parameter: money.
I try not to speculate about what everybody else thinks.

I recall one personal experience where my boss (a company VP) told me to submit a product to an alternate testing agency that was easier to pass (Canada had two different safety agencies) after it failed a winding insulation test.

I refused because the first test had identified a human safety issue, and I try to live by the motto, "When in doubt, do what's right." The VP decided not to argue with me over an engineering call. (It did delay the introduction of the SKU while I had the transformer redesigned and retested, which delayed the company shipping goods against orders.)
======

That said, I don't blindly accept everything the safety agencies say. I went to the mat against UL once because an internal secondary safety fuse was missing a sticker. UL wanted us to pull all the finished goods out of inventory and tear them apart just to add this one sticker, for a fuse that only blows to prevent the unit from catching fire after some other component fault. This was not technically a user-serviceable fuse. A qualified repair shop would likely have to fix more than the blown fuse, and would have access to the full repair information. Long story short, an extremely unlikely source of human injury.

I did reprimand the manufacturing engineers who didn't follow proper procedure in the factory. If they had pulled parts based on the BOM (which was accurate), they would have been left with a pile of unused stickers on the production line. ::) The UL inspector's boss agreed with me about the extremely low probability of harm and gave us a pass on that one infraction; the factory supervisors didn't get a pass from me.

----

Back on topic, I am guilty of what I complained about: speculating without complete information. The more I learn about the MCAS system, the less I like it. Using computers to make the plane respond like a different plane the pilots were more familiar with means that when MCAS is switched off, they are flying a different plane they don't really know.

Perhaps this is the modern reality of highly automated aircraft that do so much routine flying already.

JR

PS: Back to my personal speculation: the anti-stall system was NOT looking at air speed but at angle of attack, using only a single sensor, so that data is harder to back up without a second, redundant sensor.
 
JohnRoberts said:
PS: Back to my personal speculation: the anti-stall system was NOT looking at air speed but at angle of attack, using only a single sensor, so that data is harder to back up without a second, redundant sensor.
One wonders why, although there are two angle-of-attack sensors, only one is used in the anti-stall system. The worst speculation is that they found the two sensors gave contradictory info, so they decided to use only one... :eek:
 
It may not be just Boeing.

The TV had a show about three nose-down incidents at Qantas a few years back. One was severe, but there were no crashes. Investigators could not reproduce the events, even flying through a high-RFI area that had recently started operation. But reviewing the data, they found that *sometimes* the data in the AoA and Altitude fields were *swapped*. The computer thinks Angle of Attack is 40,000, panics, puts the nose down a part-degree. Each time the data was swapped, it trimmed the nose down a bit more. If the glitches came close together, the small trims accumulated into a dive.

This was an Airbus! A 380?

Could both companies be getting code from the same sources?

Seems like a place to review Defensive Programming. A T-shirt's "color" can't be "$19.97", and an Angle of Attack can't be "40,000". The rate of change should be physically possible: not 2.1, 2.3, 2.5, 40,000. Insight into the physics is good: my car opens a tank vent and expects the exhaust fuel ratio to rise (burning tank fumes); if it doesn't, it sets an error (likely a stuck valve, because they put it in a bad place and it rusts). Likewise, a genuine nose-up *problem* would normally be reflected in a drop of indicated air speed.
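The range and rate-of-change checks described above can be sketched in a few lines. This is a purely illustrative toy, not avionics code; every name and limit here is an assumption.

```python
# Toy sketch of the defensive checks described above: reject sensor samples
# that are outside the physically possible range or that change impossibly
# fast between samples. All names and limits are illustrative assumptions.

AOA_MIN, AOA_MAX = -20.0, 30.0   # assumed plausible angle-of-attack range (deg)
AOA_MAX_STEP = 5.0               # assumed max plausible change between samples

def validate_aoa(sample, last_good):
    """Return (value, ok): the sample if it passes, else the last good value."""
    if not (AOA_MIN <= sample <= AOA_MAX):
        return last_good, False          # out of physical range, e.g. "40,000"
    if last_good is not None and abs(sample - last_good) > AOA_MAX_STEP:
        return last_good, False          # physically impossible jump
    return sample, True

# The 2.1, 2.3, 2.5, 40,000 sequence from the post: the spike is rejected.
last, flags = None, []
for raw in [2.1, 2.3, 2.5, 40000.0, 2.7]:
    last, ok = validate_aoa(raw, last)
    flags.append(ok)
```

A real system would run something like this per channel before any cross-channel comparison; the point is simply that "40,000" should never survive a plausibility filter.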
 
It's possible it was getting bad data that was still in range...

I only know I don't know.

I didn't even know the MCAS had two sensors available but only used one.  :eek:


JR

PS: Not apples to apples, but I just replaced the two temperature sensors on my car's motor. The motor was stumbling when cold, as if the ECU was not making the fuel injection delivery richer during warm-up. The sensor that was apparently still working (for the dashboard temp gauge) only cost $15; the second, different temp sensor, presumably for the ECU, cost $55?? The car runs better now. Maybe the ECU technology of a couple of decades ago could not interface with the cheaper sensor.

 
Trying to anticipate the ways humans will misuse or combine control inputs in strange and novel ways is, I imagine, the most difficult task for a coder. But what I don't get is that when you have sensors reading the environment, the variable ranges and combinations can be anticipated in advance: assume each input can and will fail and send bad data.
If data exceeds expected ranges, subroutines check the other sensors' ranges to give the errant data a weight. (I think it's called asymptotic/stochastic analysis?) I'm not a heavy coder, but wouldn't this be a fairly standard coding practice, given the aerospace context?
 
boji said:
Trying to anticipate the ways humans will misuse or combine control inputs in strange and novel ways is, I imagine, the most difficult task for a coder. But what I don't get is that when you have sensors reading the environment, the variable ranges and combinations can be anticipated in advance: assume each input can and will fail and send bad data.
If data exceeds expected ranges, subroutines check the other sensors' ranges to give the errant data a weight. (I think it's called asymptotic/stochastic analysis?) I'm not a heavy coder, but wouldn't this be a fairly standard coding practice, given the aerospace context?
IIRC the original Space Shuttle control system used redundant computers in a voting arrangement: four ran the primary flight software and had to agree by majority vote, with a fifth running independently developed backup software, just in case... Reliability was pretty important to them. Contrary to the space-cowboy movies, humans can't fly it without cybernetic help.

JR

https://www.nasa.gov/mission_pages/shuttle/flyout/flyfeature_shuttlecomputers.html
 
It appears that they used the MCAS system to avoid requiring expensive pilot training on the new airframe, using computers to mimic the previous version. Modern air crews are increasingly dependent on computer-aided controls, so the line on this is blurry.

One critical issue that I consider inexcusable is that they had two angle-of-attack sensors available and chose to use only one... With two sensors they could be far more confident about the quality of the data input, and would know when to obviously suspect the data.

With the crystal clarity of hindsight we can see mistakes were made... I am optimistic they will get it much better... In the modern age of social media they may need to rename the airplane (after some superhero).

JR


 
It's also becoming clear that some safety features were optional.

Bad idea, as some customers might deem these unnecessary.
 
cyrano said:
It's also becoming clear that some safety features were optional.

Bad idea, as some customers might deem these unnecessary.
I perceive obvious shortcomings in the basic aircraft control system (MCAS).

The new location of the more powerful engines apparently has the undesirable characteristic of raising the nose under power... It appears the remedy for that (the anti-stall algorithm) was too aggressive, made worse by using only one angle-of-attack sensor.

Tweaked software (using both angle-of-attack sensors and a less aggressive response to nose-up) and finally admitting that the pilots need retraining on the new airframe should mitigate this tragic failure, but arguably it shouldn't have happened in the first place.

Modern aircraft keep getting safer, and it is perhaps ironic that a safety feature (the anti-stall algorithm) reportedly caused two crashes. Clearly the computers are not ready to fly the aircraft by themselves just yet.

JR
 
cyrano said:
It's also becoming clear that some safety features were optional.

Bad idea, as some customers might deem these unnecessary.

Not only optional, but at extra cost. Both planes that crashed were bought without the extra-cost safety features. It is interesting to see the free-market approach to safety in practice.
And the regulatory board admitted they were letting the manufacturer self-regulate to some extent. Classic regulatory capture.
Without government regulation for safety, we could have consumers pay a safety upcharge on their flights. It would be interesting to see how that would work out. I expect the number of people flying would plummet (not literally, haha).
 
dmp said:
Not only optional, but at extra cost. Both planes that crashed were bought without the extra-cost safety features. It is interesting to see the free-market approach to safety in practice.
And the regulatory board admitted they were letting the manufacturer self-regulate to some extent. Classic regulatory capture.
Without government regulation for safety, we could have consumers pay a safety upcharge on their flights. It would be interesting to see how that would work out. I expect the number of people flying would plummet (not literally, haha).
Thanks for providing another example demonstrating my "they (the pundits) don't get it" theme...

In hindsight some things always seem obvious... but these features were not mandated by the FAA either... How about mandating that the angle-of-attack sensors always work? ::) One of the software tweaks is to automatically compare the output from both sensors and act accordingly, instead of relying on an optional warning light that a pilot has to notice before distrusting the input and taking some remedial action.
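That automatic comparison can be sketched as a toy check. Note that with only two sensors you can detect a disagreement but not tell which vane is lying, so the safe response is to stand down; the threshold and all names here are invented for illustration.

```python
# Sketch of automatically comparing the two angle-of-attack vanes: if they
# disagree by more than a threshold, fail safe (disable the anti-stall
# automation and alert the crew) rather than trusting either one.
# Threshold and names are illustrative assumptions.

DISAGREE_LIMIT = 5.0   # degrees, illustrative only

def mcas_input(aoa_left, aoa_right):
    """Return (aoa, mcas_enabled). Disable on sensor disagreement."""
    if abs(aoa_left - aoa_right) > DISAGREE_LIMIT:
        return None, False            # disagree: stand down, flag the crew
    return (aoa_left + aoa_right) / 2.0, True

agree = mcas_input(4.0, 4.4)      # vanes agree: use the average
fault = mcas_input(4.0, 40.0)     # one vane failed: automation disabled
```

This is exactly the difference between an advisory light (the crew must notice it) and a check that changes the system's behavior on its own.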

Boeing does not publish the full list of options available and I bet it contains more than just those two items (one of which has already been made standard by Boeing due to the publicity surrounding these two crashes).

Indeed, these added charges for optional features are probably very much an attempt to manage profitability in aircraft that are sold at fixed base prices multiple years before they finally get built and delivered.

We can also ask why the FAA didn't mandate these "safety" features, since they so obviously had value in these crashes (in hindsight, to the NYT). And why didn't the airlines who purchased these aircraft fully load them with safety features (and train their pilots better)? Certainly they bear some culpability.

American Airlines has 24 of these planes with the optional safety features.
Southwest has 36 MAX jets and already bought the disagree light, with the angle-of-attack display mounted elsewhere in the cockpit.
United has 14 MAX jets and does not use the safety features, saying that "their pilots use other data to fly the planes" (whatever that means). ::)

I still think Boeing screwed up, but IMO they are NOT some isolated bad actor; they had plenty of help from several others to make them look this bad. The media, of course, is rewarded for scaring people, and legislators are always looking for ways to appear heroic by railing about public safety.

Note the majority of grounded MAX jets in the US already have the safety features being discussed. The new, improved software should be even safer, but I don't like forcing them to rush it out (software is difficult to make perfect).

JR

 
"The saga of Boeing’s 737 MAX serves as a case study in engineering incompetence, and in engineering ethics – or the lack thereof."
https://www.eetimes.com/document.asp?doc_id=1334482&page_number=1#
 
As more news leaks out about internal communication inside Boeing, it appears there were missed opportunities to get this right. In hindsight there will always be people who say their warnings were ignored, and some may have been.

JR
 
for those old enough to remember,
this weekend is the 40th anniversary of American Airlines flight AA191.
273 lives snuffed out due to sloppy maintenance and poor design.
 
gridcurrent said:
for those old enough to remember,
this weekend is the 40th anniversary of American Airlines flight AA191.
273 lives snuffed out due to sloppy maintenance and poor design.
I had to Google it to refresh my memory, but indeed... having an engine fall off during takeoff*** is a bad thing. The good news is that we generally learn from our mistakes, so we are unlikely to repeat that particular disaster. Airliner crashes are big news because one event generally takes hundreds of souls, while we ignore thousands dying every day in automobile crashes (self-driving cars will eventually save the vast majority of those lives).

Back on topic, the Boeing software upgrade seems to be moving at glacial speed as international regulators get their five minutes in the camera spotlight and the blame game runs its course. I don't expect this to be explained by simple mercenary motives dismissing human safety concerns, though every business is motivated to be profitable to continue as a going concern.

This "may" turn out to be another casualty of the increasing automation of the flight deck. Who can argue with the benefit of anti-stall automation, unless it responds to bad data and overcorrects? In hindsight that is pretty obvious, so we shouldn't make that mistake again. The real judgment call to question is whether all pilots could safely manage the aircraft despite MCAS misbehaving. Most did, but apparently not all.

We have seen other examples of planes that crashed when pilots were over-dependent on autopilot automation, and a handful of examples of remarkable flying out of difficult circumstances by experienced pilots. We have already seen a couple of deaths related to overdependence on self-driving car systems, but this is a drop in the bucket compared to the thousands of human-driver deaths daily.

This will be a big weekend for driver deaths, drive carefully all.

JR 

*** I recall, around the time this happened, talking with a magazine publisher who flew himself around the country in his small twin-engine plane. I suggested the second engine was safer in case one failed, but he shared that if one engine fails during takeoff, a crash is the most likely outcome. :eek:
 
The Boeing MAX is still grounded... I suspect the FAA is embarrassed because they didn't immediately ground the MAX and were shamed into it by other countries, who grounded them almost immediately after the second crash.
---

I saw a recent article where a pilot was invited to fly the new-version MCAS software and compare it with the old version on the same simulator, programmed to mimic the angle-of-attack sensor failure indicated in both crashes and numerous reported non-crash events.

It appears the new software corrects several issues that could have been difficult for pilots to overcome. Improving the software was the relatively easy part. Much harder will be getting the new software re-certified by the FAA and other countries.

In hindsight it's a huge "duh" why Boeing didn't incorporate the safer approach originally, but hindsight is like that. ::) Perhaps there was an over-reliance on an experienced pilot's ability to negotiate such sensor failures, which occur fairly often. As air travel keeps expanding, we don't have as many experienced pilots to go around.

The MCAS anti-stall mitigation was a little too persistent, repeatedly trying to push the nose down. The new software stops trying after the pilot resists the first anti-stall response and defeats MCAS. It appears both crashed aircraft suffered multiple nose-down MCAS responses before crashing.
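The difference between repeated trim increments accumulating and standing down after one opposed activation can be shown with a toy simulation; the step size and logic here are invented purely for illustration.

```python
# Toy simulation of repeated nose-down trim: the old logic re-fires on every
# bad reading, so increments accumulate; the new logic stops after the pilot
# opposes the first activation. Step size and behavior are assumptions.

TRIM_STEP = 2.5   # assumed nose-down trim units per activation

def total_trim(bad_readings, pilot_opposes, single_shot):
    trim, fired = 0.0, False
    for bad in bad_readings:
        if bad and not (single_shot and fired and pilot_opposes):
            trim -= TRIM_STEP   # another nose-down command
            fired = True
    return trim

glitches = [True, True, True, True]          # four bad sensor readings
old_logic = total_trim(glitches, pilot_opposes=True, single_shot=False)
new_logic = total_trim(glitches, pilot_opposes=True, single_shot=True)
```

With the old logic the four glitches stack into four full trim increments; with the new logic only the first one fires before the system stands down.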

JR

PS: Some argue we need completely self-flying planes, but I don't see the public ready to accept that. Agencies are looking at regulation for self-piloting ocean-going ships. I can already see the cheesy movie about a runaway ship, maybe hijacked by terrorist hackers.
 
I'm convinced self-flying planes and self-driving cars are statistically safer.
The problem is nobody wants to be a "statistic".
I believe the safest means of transportation is the TGV (Train à Grande Vitesse). It could run without an operator, because everything is under computer control. The driver is there to reassure people that there's actually a "pilot in the plane", and to make decisions when a bunch of idiots start a demonstration on the railway!
I think programmers should be required to read Asimov's robot books.
 
abbey road d enfer said:
I'm convinced self-flying planes and self-driving cars are statistically safer.
The problem is nobody wants to be a "statistic".
I believe the safest means of transportation is the TGV (Train à Grande Vitesse). It could run without an operator, because everything is under computer control. The driver is there to reassure people that there's actually a "pilot in the plane", and to make decisions when a bunch of idiots start a demonstration on the railway!
I think programmers should be required to read Asimov's robot books.
I read a lot of Asimov back in my yout, but I revisited "I, Robot" a couple of years ago...

Good science fiction provides a robust inspection of possible futures.  8)

JR
 
