Driverless Technology: A Risk As Low As Reasonably Practicable

George Hall

With the future of automotive transport fast becoming a driverless utopia, there are many hurdles to cross to ensure that the safety of all road users remains at the forefront of technological advances such as Artificial Intelligence (AI)-driven vehicles, writes George Hall.

A race between two driverless electric cars recently took place in Buenos Aires. Both cars were created by Roborace, a company looking to set up regular races between driverless cars for entertainment purposes, but also to better understand and develop driverless technology. Straight away, you can envisage the benefits of this, especially from a Health and Safety perspective. No driver equals no injury should anything go awry on the track.

Unfortunately for Roborace’s two Devbot cars, a crash did occur. One of the cars attempted to take a corner too fast and clipped it, damaging the exterior of the vehicle and, ultimately, causing it to crash out. As this race (or experiment) took place prior to the Buenos Aires Formula E race itself, there were spectators and race staff around the course, meaning a worse crash could have resulted in injury.

However, in terms of risk management, the risk of injury was successfully mitigated through several control measures, such as a speed limiter activated on the cars (with one car topping out at 116 mph) and crash barriers being in place. Alongside these, other control measures could be put in place, such as emergency braking systems and maintaining safe distances from the track.
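To make the speed limiter concrete, here’s a minimal sketch of how such a software control measure might clamp throttle demand. The 116 mph cap is the figure from the race above; the function name and logic are purely hypothetical illustrations, not Roborace’s actual implementation.

```python
# Minimal sketch of a speed limiter as a software control measure.
# The 116 mph cap is the figure reported above; the function name and
# throttle logic are hypothetical, not Roborace's code.

SPEED_LIMIT_MPH = 116.0

def limit_throttle(requested_throttle: float, current_speed_mph: float) -> float:
    """Clamp throttle demand to the range [0, 1] and cut drive power
    entirely once the car reaches the speed cap."""
    if current_speed_mph >= SPEED_LIMIT_MPH:
        return 0.0  # over the cap: cut power and let drag/brakes slow the car
    return max(0.0, min(1.0, requested_throttle))

print(limit_throttle(0.9, 120.0))  # 0.0 -> power cut above the cap
print(limit_throttle(0.9, 80.0))   # 0.9 -> normal demand below the cap
```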

Artificial Intelligence software in control

Both cars were controlled by AI software – computer programs designed to drive the car, observe and monitor the local environment and ensure a safe drive. Devbot 2, however, didn’t manage this and crashed while travelling at high speed. Computers are not perfect (a fact which I’m sure we all appreciate at some level).

Now, despite the car crashing and not finishing the race, as it was just a computer program in control there were no human injuries to consider or deal with. Specialists can review what happened prior to the crash, rather like footage from a dashboard camera, albeit in much greater detail. Going forward, all that needs to be done is to prepare a new chassis and re-install the AI in the car, which will hopefully have learned from its mistakes the first time around. After all, the whole point is for these cars to learn and become better.

What worries me is that the cars are AI-controlled. I have nothing against AI making life easier. My main concern is that each AI-controlled vehicle will, in essence, be different from the next.

Continual learning

In a real-life, on-road situation, how can a driver in control of a car predict what a driverless car is going to do if it’s always learning (and vice versa)? One car may do something completely different to another when put into similar situations, in much the same way human drivers are prone to reacting differently to specific occurrences. Risk assessing against unpredictability is tricky, to say the least.

In addition, any company investing in driverless technology will also have to ask itself whether any risk scores it attains should be considered ALARP. Over time, through risk mitigation, the risk of death due to dangerous driving will reduce as a chaotic element (ie humans) is removed from the equation. I suspect the introduction of autonomous cars will be a slow culture shift for the world to contend with. ‘Global Change Management’ would be great here although, in reality, impossible to manage. However, with Dubai now having approval for single-occupant drones, perhaps lessons learned there will cascade down to road level?

Regarding risk management, ‘expect the unexpected’ is a good mantra to follow. Another one I like is: ‘If it can go wrong, it will’. If we begin to assess against the unexpected occurring, then I’m sure some level of successful mitigation will be in place for driverless automotive transport.

A recent report I read on the Deepwater Horizon disaster sums it up nicely. It states that, these days, when it comes to risk, most organisations and individuals focus on the possibility of successfully mitigating an issue, when more of them should instead be focusing on the possibility of failure.

Making that all-important call

Should the principle of ALARP start creeping in, a business has to make the call as to when it stops researching mitigating measures or controls. If a risk cannot be mitigated any further because doing so would be a financial sinkhole, or would require resources that far outstrip the risk appetite of the company in charge, then issues could arise.
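That call is often framed as a cost-benefit test: a risk is argued to be ALARP once the cost of the next control measure is grossly disproportionate to the risk reduction it would buy. As a minimal sketch of that reasoning (the figures and the disproportion factor below are hypothetical assumptions, not regulatory values):

```python
# Minimal sketch of the ALARP 'gross disproportion' test: a risk is
# commonly argued to be ALARP when the cost of the next control measure
# grossly outweighs the monetised risk reduction it would buy.
# The figures and the disproportion factor are hypothetical assumptions.

def is_alarp(mitigation_cost: float,
             risk_reduction_value: float,
             disproportion_factor: float = 3.0) -> bool:
    """Return True when further mitigation is grossly disproportionate,
    i.e. the residual risk can be argued to be ALARP."""
    return mitigation_cost > disproportion_factor * risk_reduction_value

# A 500,000 control removing 100,000 of monetised risk is grossly
# disproportionate; a 150,000 control for the same benefit is not.
print(is_alarp(500_000, 100_000))  # True  -> stop: residual risk is ALARP
print(is_alarp(150_000, 100_000))  # False -> keep mitigating
```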

I’m sure no company in charge of driverless technology would ever mark a potential risk that may impact human lives as ALARP. The more tests conducted during the slow introduction and culture shift discussed previously, the better the technology will become until, one day, I’m pretty sure the majority of journeys will be undertaken entirely by faithful AI companions.

After all, wouldn’t it be nice after a long and difficult day at work to simply ask your car to take you home?

George Hall is Safety, Quality and Risk Management Consultant at Ideagen

