I see the ethical-choice dilemmas posited for autonomous cars as a red herring.
IMO no company will programme a car to deliberately risk sacrificing its occupants, for any reason. This is because the first time a car following that guidance sacrificed its occupants, its manufacturer would be sued into oblivion by survivors or their relatives. In reality, such situations will arise rarely. Some reasons (all conjecture):
Autonomous cars will be operated more slowly and far more defensively (particularly when sharing road space with non-autonomous vehicles).
Cars will eventually share their locations wirelessly.
Autonomous cars will have greater ability to anticipate risk of collisions and avoid them.
Autonomous cars will have greater ability to decelerate and manoeuvre and could be built to undertake mitigating actions like deploying external air-bags.
IMO where an autonomous car cannot avoid a collision, it will be programmed to stay on the roadway (moving laterally only if it can do so without risking another collision) and to undertake avoidance and mitigation operations. This happens to a limited extent now.
If I brake very hard in my 2015 Mazda 3 (not autonomous), it activates the SRS, turns on the hazard lights, and pre-tensions the seatbelts. Some cars will even apply the brakes themselves.
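To make the idea concrete, here is a minimal sketch of the kind of unavoidable-collision logic described above: stay in lane unless a lateral move is provably clear, brake hard, and fire mitigation measures. All names and thresholds are hypothetical; no manufacturer's actual logic is being described.

```python
# Hypothetical sketch only: a priority-ordered response to an
# unavoidable collision. Not any real vehicle's implementation.
from dataclasses import dataclass

@dataclass
class Situation:
    collision_unavoidable: bool
    left_lane_clear: bool = False   # assumed sensor/V2V inputs
    right_lane_clear: bool = False

def plan_response(s: Situation) -> list[str]:
    """Return an ordered list of actions for the current situation."""
    if not s.collision_unavoidable:
        return ["continue"]
    actions = ["maximum_braking"]
    # Move laterally only if doing so cannot cause another collision.
    if s.left_lane_clear:
        actions.append("steer_left_within_roadway")
    elif s.right_lane_clear:
        actions.append("steer_right_within_roadway")
    else:
        actions.append("hold_lane")
    # Mitigation measures, much like the Mazda 3 behaviour above.
    actions += ["pretension_seatbelts", "hazard_lights",
                "deploy_external_airbags"]
    return actions
```

For example, with no clear adjacent lane the car brakes and holds its lane rather than swerving off the roadway.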
As to the cat ... an autonomous car could, from a safety perspective, disregard very small animals like cats. Why even risk whiplash from heavy braking to avoid Fluffy?
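That policy amounts to a simple threshold check. The cutoff value below is an assumption for illustration, not a real standard:

```python
# Hypothetical policy: hard braking is only warranted for obstacles
# large enough to endanger occupants or other road users.
HARD_BRAKE_MASS_KG = 20.0  # assumed cutoff; a domestic cat is roughly 4-5 kg

def should_brake_hard(obstacle_mass_kg: float) -> bool:
    """Decide whether an obstacle justifies emergency braking."""
    return obstacle_mass_kg >= HARD_BRAKE_MASS_KG
```

Under this sketch the car would brake hard for a deer or a person, but not swerve or slam the brakes for a cat.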
In short, autonomous cars will be better drivers than us. Better, in fact, than a very good human driver.
I think it's illegal/against road rules to swerve to avoid a cat or dog and thereby cause an accident?