I think victim-blaming based on profiling is harsh. A woman is killed by an essentially unproven technology with a proven software fault and we seek to apportion blame to the dead woman?
Your comparison to the use of computers in airplanes is simplistic and unreasonable.
Human-driven cars are remarkably safe - a little over 1 death per 100 million miles. How many deaths are we prepared to accept before driverless cars meet that standard? Companies are producing these cars and this software without the ethical questions being addressed. Autonomous car makers stayed mum when MIT's 'Moral Machine' study was published.
I don't imagine we have to think much about who the Tesla is going to protect in the event of a moral-dilemma accident... nameless pedestrians or the owners of the car?
You say that until it's your husband, wife, mom, dad or child who gets killed by Tesla.
I think this is a bit harsh. Computers fly planes these days and few of us complain. Computers took astronauts to the moon. Sure, computer software makes mistakes, but so do humans. Often. Humans get hungry, tired, angry and so on. They succumb to road rage, speeding and drink-driving. It's not unreasonable to think that computers driving cars might be a better solution than people driving cars.
People also walk in front of cars and get killed, or ride push bikes in front of cars. Some of them are blameless, some of them are not.
So it's hard to support a stance that the computer must be to blame and the cyclist cannot be. It's more likely, as in much of life, that there is plenty of blame to spread around.
Self-driving vehicles are far more complex than people can possibly imagine.
The computers flying aircraft and spacecraft don't have much to deal with that's unexpected by comparison. Aircraft fly defined straight routes in the air, follow corridors and cones up and down from runways, and can hand control back to a human who must be present to take over if they can't decide.
The additional complexity of cars comes from things we have accepted for years: the road is bendy and imperfect and has intersections, and it can be shared by small cars, big cars, trucks, cyclists, pedestrians, horses, road works, god knows what. Even if the software can detect what is happening (forget AI, that's bollocks, you must code for every eventuality), what should it decide? We accept that a human may instinctively steer away from the rear of a suddenly stopped car and crash into pedestrians. Should that be coded into the software? Should the "car" software be coded to protect itself and its occupant by killing, say, 6 people on the pavement (sidewalk for simplified English speakers), or to kill the car's driver to protect 6 lives?
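To make the point concrete: whichever way a manufacturer answers that question, the answer ends up as an explicit rule in code. Here is a toy sketch - entirely hypothetical, no vendor implements it this way, and the function name and casualty counts are invented for illustration - of a purely utilitarian rule that simply minimises expected casualties, with the occupant counted the same as anyone else:

```python
# Hypothetical illustration only: a utilitarian rule for the
# moral-dilemma accident described above. Real systems do not
# (publicly, at least) encode the choice this explicitly.

def choose_manoeuvre(options):
    """options: dict mapping a manoeuvre to its expected casualties.
    Pick the manoeuvre with the fewest expected casualties,
    counting the car's occupant the same as any pedestrian."""
    return min(options, key=options.get)

dilemma = {
    "brake and hit the stopped car": 1,  # the occupant
    "swerve onto the pavement": 6,       # six pedestrians
}

print(choose_manoeuvre(dilemma))  # -> brake and hit the stopped car
```

A manufacturer could equally weight the occupant higher than bystanders, and the code would then pick the opposite manoeuvre - which is exactly the ethical decision the comment argues nobody is openly addressing.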
Planes also talk to each other over TCAS and other systems. New traffic-control devices and vehicles can incorporate such technology, but the installed base means most roads and cars don't have it and won't for decades yet.
I admire Elon Musk's crazy thinking and the genius engineering at Tesla, SpaceX and so on, but I think his decision to push to market using only 2D imaging, because it looks better on the car, while everyone else is using LIDAR for 3D imaging, may ultimately be Tesla's downfall. Teslas have driven at full speed into the side of turning HGVs and into the back of a fire truck because the software is set to treat large non-moving things as infrastructure such as bridges, sky and so on, and to ignore them to prevent erroneous sudden braking. This works 99.9......% of the time.
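A rough sketch of the failure mode being described - note this is my own illustrative pseudocode with made-up thresholds, not Tesla's actual logic: any filter that drops large stationary detections as presumed infrastructure will also drop a stopped fire truck, because the truck matches the same "large and not moving" signature as a bridge:

```python
# Toy illustration (not any vendor's real code) of how filtering out
# large stationary objects as "infrastructure" can also filter out a
# stopped vehicle blocking the lane.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    speed_mps: float  # measured speed of the object
    size_m: float     # rough longest dimension

def braking_targets(detections, min_speed=0.5, max_static_size=3.0):
    """Keep a detection as a braking target unless it looks like
    infrastructure: large AND not moving (e.g. a bridge or gantry).
    Thresholds here are invented for illustration."""
    targets = []
    for d in detections:
        if d.speed_mps < min_speed and d.size_m > max_static_size:
            continue  # presumed bridge/sign -> ignored, no braking
        targets.append(d)
    return targets

scene = [
    Detection("overpass", 0.0, 20.0),           # real infrastructure
    Detection("stopped fire truck", 0.0, 9.0),  # large AND stationary
    Detection("pedestrian", 1.2, 0.5),          # small, moving
]

kept = braking_targets(scene)
print([d.label for d in kept])  # -> ['pedestrian']
```

The overpass is correctly ignored, but the stopped fire truck is discarded by the exact same test - which is why the filter works 99.9...% of the time and fails catastrophically the rest.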