by Rick Curtis
Alligator Death at Disney
The tragic death of a toddler at Disney World in Orlando, FL points to an important risk management principle that is often overlooked – a failure to warn.
By various news accounts it is clear that Disney was well aware of the hazards of alligators in the area. Alligators are a fact of life in Florida, and the particular ecosystem where Disney World is located – swampland and drainages – creates a perfect habitat for them.
Disney has an active risk management plan for monitoring and dealing with alligators. According to a report by the Associated Press “Disney said it has a policy of relocating alligators that are considered a potential threat. Animals less than 4 feet long are taken to conservation areas. Larger gators are removed by state-licensed trappers, the company said.”
Alligator–human contacts at Disney World have happened in the past. In 2012 a mother said “her three young children and niece were playing on a resort beach at the water’s edge when the beady eyes of a 7-foot gator appeared in a lake just a few feet away.” Everyone scattered without any injuries, and local trappers came and removed the alligator. Disney staff threatened to confiscate any photos or video of the incident.
The one piece of the risk management plan that seemed to be missing is a warning – a failure to warn. It may be that warnings were not considered the right image for a family-friendly theme park.
Disney has now installed signs warning of alligators and snakes and advising visitors to stay away from the water. Additional plans for fencing and other methods of restricting access are also under consideration.
How does this incident impact your program?
Do you have clearly known or identifiable hazards associated with your program activities or facilities? Have you taken active steps to communicate those risks to both staff and clients? Remember, staff can also be harmed by such hazards. Failure to warn, when the hazard was clearly known, could be considered negligence or even gross negligence.
‘Autonomous’ Car Death & Risk Homeostasis
Our second tragic death is that of the driver of a Tesla automobile who was using the car’s “Autopilot” driving feature. Reuters reported that “the Model S operated by Brown went underneath the trailer of a truck that had turned left in front of the car. The Tesla’s windshield hit the bottom of the trailer as it passed underneath, and the car kept going, leaving the road, striking a fence, crossing a field, passing through another fence and finally hitting a utility pole about 100 feet south of the road, according to the police report.” Witnesses report that the driver was watching a movie in his car when the crash occurred.
While there has been a lot in the media about ‘autonomous’ cars (cars that drive themselves), no such technology is currently available – the existing technology (including Tesla’s) consists of ‘driver-assist’ features: systems that keep the car in its lane, apply the brakes when sensors indicate an impending impact, and assist with lane-changing maneuvers.
Tesla describes the feature as “still in beta testing.” However, the company’s marketing of Tesla as the ‘car of the future’ could be seen as encouraging drivers to think that the product is more capable of ‘automatically’ controlling the car than is actually the case. According to the Tesla web site, “while truly driverless cars are still a few years away, Tesla Autopilot functions like the systems that airplane pilots use when conditions are clear.”
We all understand what an autopilot does in an airplane – it takes control – so perhaps that is creating an incorrect mindset for drivers. It’s important to remember that airplanes use autopilot in a big open sky, not on a crowded road, and that the pilot and co-pilot have a variety of onboard warning systems to let them know when other planes are in the area.
“It is important to note that Tesla disables Autopilot by default and requires explicit acknowledgement that the system is new technology and still in a public beta phase before it can be enabled. When drivers activate Autopilot, the acknowledgment box explains, among other things, that Autopilot “is an assist feature that requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle” while using it. Additionally, every time that Autopilot is engaged, the car reminds the driver to “Always keep your hands on the wheel. Be prepared to take over at any time.” The system also makes frequent checks to ensure that the driver’s hands remain on the wheel and provides visual and audible alerts if hands-on is not detected. It then gradually slows down the car until hands-on is detected again.” – https://www.teslamotors.com/blog/tragic-loss
Take a look at some of the behaviors Tesla owners are demonstrating with this technology.
I’ve reproduced this YouTube video locally since it was filmed with a 3D camera and the 3D version doesn’t display in all browsers. The original is at https://www.youtube.com/watch?v=W0SYtrl3cnc
This is an example of the Risk Homeostasis Model in action. Risk Homeostasis, developed by Dr. Gerald J. S. Wilde, is the idea that we modulate our behavior based on the level of risk we are willing to accept. Things that ‘increase safety’ (like wearing a helmet while skiing) may cause people to engage in riskier behavior because they believe that the added safety factor will offset the increased risk – and therefore they continue operating at the same level of risk they are comfortable with.
The Risk Homeostasis model has been shown to be measurable in many situations. It’s fascinating to watch the Tesla drivers in these and other videos. At first you can see real reluctance, even anxiety, to let the car be in control. Over time they become more relaxed, or perhaps more desensitized. What’s important to note here is that as long as the system operates without the car getting in an accident, the driver receives only positive feedback for the behavior, accelerating the process of desensitization to the potential risk.
However, it is not always the case that a new safety innovation makes everyone engage in higher-risk behaviors. For example, avalanche professionals carry transceivers and do snow pit analysis to determine whether it is safe to ski at a particular location. They don’t assume that they can ski anywhere simply because they are carrying a transceiver (although some other backcountry skiers and boarders do). So it is also clear that Risk Homeostasis is not some unconscious process that we have no control over.
Why do some people use the ‘safety innovation’ as an impetus to increase their risk taking behavior while others do not? There is currently no clear answer for that. One area for research is to explore what is actually happening in the brain during risk taking behaviors using functional MRI brain imaging. This might show us how different areas of the brain are responding when Risk Homeostasis is active and when it is not.
The Ethical Question of Safety
One can’t look at these videos without raising the question of ethical decision making. The drivers are using a technology in a way it’s not intended to be used – according to Tesla, you are supposed to keep your hands on the wheel, and the system is supposed to slow the car down if your hands are not on the wheel. Clearly that is not happening with the software version of Autopilot in these cars. So what happens if the system fails, as it did in the fatal Florida crash, and the driver crashes into another car or hits a biker or pedestrian? Is the driver even considering his ethical responsibility to everyone else on the road? Doesn’t look like it.