The UK’s road network is one of the most intensively used in Europe, yet despite decades of improvement, road traffic collisions remain a major cause of death and injury. Provisional figures from the Department for Transport show that there were 1,633 fatalities on British roads in 2024 (about 1% more than in 2023), with 29,537 people killed or seriously injured (KSI) and more than 128,000 casualties of all severities. Although KSI casualties have fallen by 14% since 2014, progress has slowed in recent years. Human error contributes to approximately 88% of collisions, so public authorities and industry hope that artificial intelligence (AI) and automation can help prevent crashes, ease congestion, and make journeys smoother.
AI Cameras – Catching Dangerous Behaviour
In its simplest form, technology, if not yet sophisticated AI, is already catching bad driving. Across the UK, dashcam footage has helped convict many thousands of dangerous drivers, and under schemes such as Operation Snap, road users who record dangerous driving can upload their footage directly to their local police force.
One of the earliest applications of AI in road safety has been automated enforcement. In Devon and Cornwall, police have trialled Acusensus ‘Heads-Up’ cameras fitted with high-resolution lenses and AI algorithms that can detect drivers using mobile phones or not wearing seatbelts. The cameras capture multiple images, which the AI filters to identify potential offences; a human reviewer then confirms whether the law has been broken.
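The Acusensus system itself is proprietary, but the two-stage pattern described above, AI filtering followed by human confirmation, can be illustrated with a short and purely hypothetical Python sketch. The threshold, class names and reviewer callback below are assumptions for illustration, not details of the real product.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical confidence threshold: captures scoring below this never
# reach a human reviewer and are discarded.
REVIEW_THRESHOLD = 0.80

@dataclass
class Capture:
    image_id: str
    ai_score: float            # model's confidence that an offence is visible
    human_confirmed: bool = False

def triage(captures: List[Capture]) -> List[Capture]:
    """Stage 1: keep only the captures the AI flags as likely offences.
    This filtering is why only a small share of passing motorists are ever flagged."""
    return [c for c in captures if c.ai_score >= REVIEW_THRESHOLD]

def human_review(queue: List[Capture], reviewer_decision) -> List[Capture]:
    """Stage 2: a trained human reviewer confirms or rejects each flagged
    capture; only confirmed cases proceed to enforcement."""
    confirmed = []
    for capture in queue:
        if reviewer_decision(capture):        # e.g. a callback from a reviewer UI
            capture.human_confirmed = True
            confirmed.append(capture)
    return confirmed

# Example run with made-up scores and a reviewer who confirms everything queued.
captures = [Capture("img-001", 0.95), Capture("img-002", 0.12), Capture("img-003", 0.88)]
print(human_review(triage(captures), reviewer_decision=lambda c: True))
```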
Over two years of operation, the cameras have been credited with a reduction in fatal and serious injuries in the region. KSI numbers fell from 790 in 2022 to 754 in 2023 and to 678 in 2024, while seatbelt offences at monitored locations halved and mobile phone offences fell by a third. Fewer than 1% of motorists were flagged, suggesting that enforcement is targeted and proportionate. Officials also emphasised that wearing a seatbelt halves the risk of death or serious injury, underscoring the life-saving potential of such AI cameras.
AI is also being used to detect fare evasion and passenger safety issues in London’s public transport system. Transport for London (TfL) has trialled smart station cameras at Willesden Green and Blackhorse Road stations. These AI-enabled cameras can recognise up to 77 use cases, from spotting abandoned items to warning staff when passengers stand too close to the platform edge. At Blackhorse Road, the system dynamically opened or closed ticket barriers based on passenger flow, resulting in up to 30% more passenger throughput and 90% shorter queues. These trials show how computer vision can enhance safety and efficiency without replacing human staff: controllers still oversee the systems, while the AI handles the sheer volume of data.
Such successes have encouraged UK police forces and transit operators to adopt AI enforcement more widely. However, critics warn that AI cameras may introduce biases if training data are not representative, and poor‑quality images could lead to false accusations. Transparent auditing of algorithms and independent oversight will be essential to maintain public trust.

Advanced Driver Assistance Systems (ADAS): promising but imperfect
Automated braking, lane-keeping, adaptive cruise control and other Advanced Driver Assistance Systems (ADAS) are now commonplace in modern cars. Not every driver is familiar with their vehicle’s ADAS features, so our on-road courses include guidance on using them, and that familiarity will only become more important over time.
Research funded by Science Foundation Ireland estimated that if ADAS were installed in every car in Great Britain, crashes could decrease by approximately 24%. Automatic Emergency Braking (AEB) was also found to reduce intersection accidents by 28%, rear-end collisions by 27.7%, and pedestrian accidents by 28.4%. The researchers forecast that the full deployment of ADAS could prevent approximately 18,925 accidents per year, representing a 23.8% reduction in overall crash frequency.
Those figures are echoed in a 2024 report for the UK government’s DG Cities project, which estimated that universal adoption of ADAS could lead to a 30% reduction in crashes. But the report highlighted a significant “trust gap”: many drivers do not understand how to operate ADAS features (something we see regularly in our own driver training), and manufacturers are not currently required to pass rigorous, real-world tests before selling these systems. Interviews with ambulance, fire and police services revealed that lane-keeping assistance can resist crossing lane markings during emergency manoeuvres and that automated emergency braking may activate unexpectedly, hampering response times.
The study found that public support for autonomous vehicles increases from under half to nearly three-quarters if deployment reduces serious injuries and fatalities by just 5%. Better testing, regulation and user education are therefore essential; otherwise, the safety benefits could be undermined by misuse or poorly configured systems.
Weather conditions also pose challenges. The same Irish research observed that ADAS sensors can struggle in heavy rain or fog, reducing their ability to detect obstacles and leading to disengagement. For the UK, with its variable climate, robust sensor fusion and fallback strategies will be critical. Regulators should set minimum performance requirements across diverse environments to ensure reliability.

AI‑driven traffic management: smarter signals and reduced congestion
Beyond individual vehicles, AI is transforming traffic management. TfL has partnered with Siemens Mobility to create a Real-Time Optimiser (RTO) that utilises data from inductive loops, cameras, and connected vehicles to dynamically adjust traffic signal timings. The system can prioritise buses, cyclists and pedestrians in line with the capital’s “Healthy Streets” strategy and respond quickly to incidents, helping to restore normal traffic flow. By balancing signal timings for different road users, the RTO aims to reduce congestion and improve air quality.
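Siemens Mobility has not published the RTO’s internals, so the sketch below is only a simplified illustration of the underlying idea: allocating green time in proportion to priority-weighted demand detected by loops, cameras and connected vehicles. The mode weights, cycle length and demand figures are assumptions, not TfL’s.

```python
# Illustrative sketch of priority-weighted green-time allocation.
# Mode weights reflect a "Healthy Streets"-style preference for buses,
# cyclists and pedestrians; the numbers are assumptions.
MODE_WEIGHTS = {"bus": 3.0, "cycle": 2.0, "pedestrian": 2.0, "car": 1.0}

def allocate_green_time(demand: dict[str, dict[str, float]],
                        cycle_seconds: float = 90.0) -> dict[str, float]:
    """Split one signal cycle between junction approaches in proportion to
    their priority-weighted demand (detected people/vehicles per mode)."""
    weighted = {
        approach: sum(MODE_WEIGHTS.get(mode, 1.0) * count
                      for mode, count in modes.items())
        for approach, modes in demand.items()
    }
    total = sum(weighted.values()) or 1.0
    return {approach: cycle_seconds * w / total for approach, w in weighted.items()}

# Example: detections fused from inductive loops, cameras and connected vehicles.
demand = {
    "north": {"car": 12, "bus": 2},
    "east":  {"car": 20},
    "south": {"cycle": 8, "pedestrian": 15},
}
print(allocate_green_time(demand))
```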
Researchers at Aston University in Birmingham have developed a video‑based AI traffic and signal monitoring system that offers a glimpse of the future. Their program uses deep reinforcement learning: it “rewards” itself for moving vehicles through junctions quickly and penalises itself for creating jams. Trained entirely on a photorealistic 3D simulation called Traffic 3D, the system outperformed traditional methods that rely on manually designed signal phases.
When the researchers tested the AI on a real junction, it adapted to real‑world traffic despite being trained only in simulation. Because the system “sees” queues forming via video, it can respond before vehicles reach the sensors used in current induction‑loop systems. The reward mechanism can even be adjusted to prioritise emergency vehicles or discourage uncomfortable accelerations. Although still experimental, this approach demonstrates how AI can provide urban traffic control with far greater flexibility.
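The Aston team’s exact reward design is not spelled out here, but the behaviour described, rewarding throughput, penalising queues, and re-weighting for emergency vehicles or passenger comfort, corresponds to a reward function along the lines of this hypothetical sketch; the weights and inputs are assumptions.

```python
# Minimal sketch of the kind of reward signal described above for a deep
# reinforcement learning traffic-signal agent. A real agent would learn from
# simulated video frames (e.g. in an environment like Traffic 3D), not from
# these counts directly.
def signal_reward(
    vehicles_cleared: int,       # vehicles that crossed the junction this step
    queue_length: int,           # vehicles still waiting, visible as a queue on video
    emergency_cleared: int = 0,  # emergency vehicles given priority this step
    harsh_decelerations: int = 0,
    w_throughput: float = 1.0,
    w_queue: float = 0.5,
    w_emergency: float = 5.0,
    w_comfort: float = 0.2,
) -> float:
    """Reward throughput and penalise growing queues; the weights can be
    re-tuned, e.g. raising w_emergency to prioritise blue-light vehicles or
    w_comfort to discourage uncomfortable accelerations."""
    return (
        w_throughput * vehicles_cleared
        - w_queue * queue_length
        + w_emergency * emergency_cleared
        - w_comfort * harsh_decelerations
    )

# Example: clearing 8 vehicles while 5 still queue yields a modest positive reward.
print(signal_reward(vehicles_cleared=8, queue_length=5))
```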
AI sensors are also improving active travel planning. TfL has trialled VivaCity sensors that classify pedestrians, cyclists, cars, and buses with up to 97% accuracy, recording speeds and movement patterns. Data from these sensors helps planners design safe cycle routes and monitor new infrastructure. Meanwhile, predictive analytics are being used to anticipate congestion. High-resolution traffic data feeds edge-computing systems that process information locally, reducing latency and addressing privacy concerns.
Implementing these systems presents challenges. Cities such as London, Manchester and Birmingham face technical hurdles in integrating disparate technologies, dealing with legacy infrastructure and addressing citizens’ privacy concerns. Achieving reliable and equitable AI traffic management will require collaboration among technology providers, urban planners, and policymakers.

Predictive Maintenance and Fleet Management
Keeping vehicles and infrastructure in good condition is fundamental to road safety. Machine‑learning tools can analyse large datasets from vehicles and roads to predict failures before they occur. A Transport Research Laboratory (TRL) proof-of-concept study replaced traditional rule-based pavement maintenance models with a Random Forest machine-learning model, dubbed the “Digital Engineer.” Using network‑level data on surface condition, roughness, skid resistance, construction and traffic volumes, the model provided significantly higher accuracy in identifying road sections needing treatment than the current deterministic tools. The report noted that further contextual information and transparency about feature importance are required before wide adoption, but machine learning could help highway authorities prioritise repairs more effectively, reducing potholes and improving safety.
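TRL’s model and data are not public, but the general technique, training a Random Forest on network-level condition features to flag sections needing treatment, can be sketched with scikit-learn. The feature set below loosely mirrors the inputs listed above; the data is entirely synthetic and the labelling rule is invented for illustration.

```python
# Illustrative sketch of a Random Forest "needs treatment" classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 100, n),      # surface condition index
    rng.uniform(0, 10, n),       # roughness measure
    rng.uniform(0.3, 0.7, n),    # skid resistance
    rng.integers(0, 40, n),      # years since construction/resurfacing
    rng.uniform(100, 50000, n),  # traffic volume (annual average daily traffic)
])
# Synthetic label: poorer, older or rougher sections are assumed to need treatment.
needs_treatment = ((X[:, 0] < 40) & (X[:, 3] > 15)) | (X[:, 1] > 7)

X_train, X_test, y_train, y_test = train_test_split(X, needs_treatment, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
# Feature importances speak to the transparency point raised in the TRL report.
print("feature importances:", model.feature_importances_.round(3))
```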
Within commercial fleets, telematics and AI are transforming service, maintenance and repair (SMR). At a Fleet200 Strategy Network event in 2024, UK fleet operators described how telematics data, combined with AI, can predict component failures with around 95% probability, enabling proactive maintenance and cost savings. Participants also emphasised the importance of integrating AI with existing systems, ensuring data accuracy, and driving the cultural change needed to adopt predictive maintenance. When AI identifies a high probability of failure, vehicles can be serviced before breakdowns occur, reducing roadside incidents that expose drivers and roadside workers to danger.
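As a minimal illustration of that last step, the snippet below turns per-component failure probabilities (the kind of output a telematics AI model might produce) into a proactive servicing list. The vehicle IDs, component names and the 0.7 intervention threshold are all assumptions.

```python
# Minimal sketch: convert predicted failure probabilities into a servicing list.
FAILURE_THRESHOLD = 0.7  # assumed intervention threshold

fleet_predictions = {
    "LGV-101": {"brake_pads": 0.92, "battery": 0.15},
    "LGV-102": {"brake_pads": 0.05, "turbo": 0.40},
    "HGV-207": {"clutch": 0.81},
}

def vehicles_to_service(predictions: dict[str, dict[str, float]]) -> dict[str, list[str]]:
    """Return, per vehicle, the components whose predicted failure probability
    exceeds the threshold, so they can be replaced at the next scheduled stop
    rather than failing at the roadside."""
    flagged = {}
    for vehicle, components in predictions.items():
        worn = [c for c, p in components.items() if p >= FAILURE_THRESHOLD]
        if worn:
            flagged[vehicle] = worn
    return flagged

print(vehicles_to_service(fleet_predictions))
```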
A 2025 Webfleet survey of 200 UK fleet decision-makers found that 69% viewed real-time driver alerts as AI’s most promising safety application, and 68% valued AI’s ability to analyse telematics data for deeper insights into driver behaviour and vehicle performance. Some 65% highlighted the potential to predict accidents before they occur, and 97% of heavy-goods vehicle (HGV) fleets reported fewer safety incidents after adopting AI-powered video systems. Meanwhile, 91% of fleet managers plan to invest in AI and advanced telematics within the next three years. These figures suggest that AI-enabled maintenance and driver assistance can significantly reduce risk for professional drivers and other road users.
Nevertheless, telematics raises privacy questions. Drivers may fear constant monitoring, while companies must ensure compliance with data protection laws. Clear policies, anonymisation and transparency about data use will be essential to maintain trust.
Autonomous vehicles: the road ahead
The Automated Vehicles Act 2024
In May 2024, the UK enacted the Automated Vehicles Act (AVA), a landmark law intended to pave the way for self‑driving vehicles by 2026. Building on earlier recommendations from the Law Commission, the Act aims to make Britain a leader in automated mobility. Recognising that human error contributes to 88% of collisions, the Act requires authorised automated vehicles to achieve a level of safety equivalent to or higher than that of careful and competent human drivers. To obtain authorisation, vehicles must pass a self‑driving test demonstrating that they can operate autonomously without human monitoring or intervention. The Act introduces a no‑user‑in‑charge (NUiC) regime, where licensed operators are responsible for journeys without a driver and must ensure vehicles are insured and safe.
A crucial element of the Act concerns liability. When an authorised self‑driving vehicle is operating in autonomous mode, legal responsibility for traffic infractions or accidents lies with the vehicle’s Authorised Self‑Driving Entity (ASDE), typically the manufacturer or software provider, rather than the human occupant. The legislation gives government inspectors extensive powers to monitor performance, investigate incidents and compel data from ASDEs. It also introduces marketing restrictions to stop manufacturers from misrepresenting driver‑assistance features as full automation.
The Act builds on the Automated and Electric Vehicles Act 2018, which already stipulates that where a self-driving car is at fault, insurers bear first-instance liability, allowing victims to claim compensation even if no human driver is responsible. Insurers can then recover costs from liable parties. Together, the two laws aim to provide clarity for consumers and industry while ensuring that safety standards remain high.
Ethical and privacy challenges
Self-driving vehicles raise profound ethical questions. Parliament’s science and technology research office notes that ethicists disagree on what the “correct” action should be when a collision is unavoidable: for example, whether an AI should prioritise its passengers or minimise harm to bystanders. To address this, the Centre for Data Ethics and Innovation has proposed a “safe and ethical operational concept” for autonomous vehicles, ranking principles such as: (1) avoiding collisions with all objects; (2) maintaining safe distances; (3) staying on the road; and (4) avoiding movement that causes discomfort. Emergency braking, for example, would take precedence over passenger comfort because preventing a collision ranks higher than avoiding discomfort. Who decides these priorities, and how they are encoded in algorithms, remains a topic of debate.
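To make the idea of a ranked rule set concrete, the sketch below encodes a priority ordering in which a higher-ranked principle always outweighs a lower-ranked one, so emergency braking beats passenger comfort. It is purely illustrative and not the CDEI’s actual specification; the manoeuvre names and flags are invented.

```python
# Illustrative sketch: candidate manoeuvres are scored against ranked principles,
# with higher-ranked principles always outweighing lower-ranked ones.

# Principles in priority order (highest first), mirroring the ranking above.
PRINCIPLES = [
    "avoids_collision",
    "keeps_safe_distance",
    "stays_on_road",
    "avoids_discomfort",
]

def choose_manoeuvre(options: dict[str, dict[str, bool]]) -> str:
    """Pick the option that satisfies the highest-priority principles first.
    Each option becomes a tuple of booleans ordered by principle rank, and
    Python's tuple comparison ensures that avoiding a collision always beats
    merely avoiding discomfort."""
    def rank(name: str) -> tuple:
        flags = options[name]
        return tuple(flags.get(p, False) for p in PRINCIPLES)
    return max(options, key=rank)

options = {
    "emergency_brake": {"avoids_collision": True, "avoids_discomfort": False},
    "coast_smoothly":  {"avoids_collision": False, "avoids_discomfort": True},
}
print(choose_manoeuvre(options))  # -> "emergency_brake"
```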
Privacy is another major concern. Automated vehicles generate enormous amounts of data: around 4,000 GB per day, according to BAE Systems. This data includes the location and identity of vehicle occupants, as well as the movements of cyclists, pedestrians and other drivers captured by sensors. Researchers warn that such information could be misused, for example, for targeted advertising or even stalking in extreme cases. The AVA subjects this data to the Data Protection Act 2018, and researchers are exploring encryption and authentication methods to mitigate cybersecurity risks. Nonetheless, the scale of data collection necessitates robust safeguards and clear governance frameworks.
Challenges and Limitations
While AI and automation offer substantial benefits, significant hurdles remain:
a. Weather and sensor limitations: ADAS sensors may perform poorly in heavy rain or fog. AI traffic systems must handle varied lighting and weather conditions.
b. Integration with existing infrastructure: Smart traffic systems must work with legacy hardware and respect urban heritage constraints. Upgrades can be costly.
c. Public trust and education: Misunderstanding of ADAS features and unrealistic expectations can lead to misuse. Manufacturers and regulators need to provide clear instructions and training.
d. Cybersecurity and privacy: The vast data collected by autonomous systems poses risks if mishandled. Encryption, access controls and legal accountability are essential.
e. Ethical decision‑making: Determining how AI should act in unavoidable collision scenarios remains unresolved.
Will AI Make Driver Training Obsolete?
We are often asked this. While we can’t predict decades into the future, over the next 15 years or so we expect driver training, if anything, to become more widespread. In the short term, AI will highlight just how at risk human drivers are: how much they benefit from additional training, how often distraction causes collisions, and how, in virtually all cases, it is humans who make the serious mistakes. Because of this, training remains key. We don’t believe AI will mean humans no longer need to drive. If anything, driverless automation will become another desirable option, like A/C or sat nav, a must-have addition to the vehicle rather than a replacement for the driver.
People will want the visceral feel of driving, but… not all the time.
To mitigate risk, more companies than ever are already mandating driver training for staff who drive as part of their job.
We expect the rise of AI-managed vehicles to make driver training more widespread until around 2040, after which the nature of driver training is likely to change.
We may revisit this article in decades to come…