How Edge AI is Turning Safety Managers into Strategists: Netradyne's Pratik Verma
AI has been integrated into the supply chain for quite some time. It not only enhances efficiency but also improves quality. In this context, Netradyne is introducing fleet technology designed to support fleet growth and advocate for drivers.
Netradyne employs scientific methods and cutting-edge technology to measurably improve driver performance and raise safety standards in transportation. Their vision-based edge AI is transforming traditional approaches to driver safety, road mapping, and the development of driving skills.
Speaking to Analytics Insight, Pratik Verma, Senior Director, Data Science, Netradyne, discusses how Netradyne's vision-based edge AI can fundamentally improve driver intelligence, fleet safety, and more.
Traditional fleet safety relies on telematics and rule-based alerts. How does vision-based edge AI fundamentally change the quality of driver intelligence?
For a long time, fleet safety has been built around hindsight. You review what happened and then try to prevent a repeat. But the real risk sits in the few seconds before impact, when human judgment is under pressure and milliseconds matter. That is why the shift from telematics to vision-based edge AI is fundamentally the move from knowing what happened to understanding why it happened. That “why” is the missing link in fleet safety, as the core distinction lies in causation.
Traditional telematics relies on abstracted signals, such as IMU patterns, G-forces, and braking intensity. If a driver slams on the brakes, a signal-based system flags a hard-braking event, but the signal alone cannot explain intent. Was the driver tailgating aggressively, or did a pedestrian suddenly step into the road and the driver made a strong defensive save? In a signal-only setup, both cases can look like a penalty or a violation.
Vision adds the enriched context that signals cannot provide. By recognizing object classes like a heavy truck versus a bicycle, edge AI interprets the environment's physics in real time and delivers a depth of intelligence that raw signals cannot replicate. It can differentiate between a risky maneuver and a life-saving one. It can also read the nuance of weaving and tell whether the driver is being risky or simply navigating potholes and unpredictable obstacles.
Beyond risk, vision also lets you reward good driving and identify areas for improvement. Telematics cannot detect the near-miss that was successfully avoided. This is also a core distinction; many of our customers can pinpoint where drivers are doing well and where they need coaching. The system scores and adjusts based on driving behaviour, serving as a north star for safety managers. Even with video telematics, pinpointing the right and risky behaviours is not possible; it just acts as a recording device.
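The contrast drawn above, between a signal-only reading of a hard-brake event and a vision-enriched one, can be illustrated with a small sketch. This is not Netradyne's implementation; the event fields, object classes, and thresholds are all hypothetical stand-ins for the kind of context a vision system adds.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BrakeEvent:
    decel_g: float               # peak deceleration reported by the IMU
    lead_object: Optional[str]   # class of nearest detected object, if any
    time_gap_s: Optional[float]  # following gap to the lead vehicle, in seconds

def classify_brake(event: BrakeEvent, hard_g: float = 0.45) -> str:
    """Label a hard-braking event using visual context.

    A signal-only system sees only decel_g; adding detected objects lets
    us separate a defensive save from aggressive tailgating. Thresholds
    here are illustrative, not calibrated values.
    """
    if event.decel_g < hard_g:
        return "normal"
    # Sudden intrusion (pedestrian, cyclist) suggests a defensive maneuver.
    if event.lead_object in {"pedestrian", "bicycle"}:
        return "defensive_save"
    # A short following gap before braking suggests tailgating: coach it.
    if event.time_gap_s is not None and event.time_gap_s < 1.0:
        return "risky_tailgating"
    return "hard_brake_unclassified"
```

In a signal-only setup, all three branches below the first check would collapse into one "hard braking" penalty; the vision context is what makes the distinction possible.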
Building these systems requires massive, diverse datasets. How do you ensure contextual accuracy across vastly different geographies like India and the US?
When you scale globally, you run into two kinds of challenges that you have to design for from day one: outward variation on the road and inward variation inside the cabin. Outwardly, the world changes fast across markets, from signs and driving norms to road structure and traffic flow. Inward variation is comparatively small but still important: seating positions, uniform and seatbelt colors, cabin configurations, and so on all vary.
At scale, data collection must respect strict privacy, not just because of each country's regulations, but because it is critical to building drivers' trust. Hence, from the outset, our approach is privacy-first. From a modeling perspective, we typically use a common backbone, for example an object detection backbone that works across geographies.
Then we train different heads for country-specific outputs and create our own in-house curated datasets. For new geographies, we leverage public datasets where applicable, for example road sign datasets, and combine them with in-house data. We apply augmentation techniques to simulate multiple real-world scenarios.
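The shared-backbone, per-country-head structure described above can be sketched in a few lines. This is a deliberately toy illustration, not Netradyne's architecture: the class name, the lambda "backbone", and the head functions are all hypothetical stand-ins for a real feature extractor and real classification heads.

```python
class GeoDetector:
    """A shared backbone with per-country output heads (illustrative sketch).

    `backbone` stands in for a feature extractor trained on pooled global
    data; each head maps those features to country-specific outputs, such
    as Indian vs. US road-sign classes.
    """
    def __init__(self, backbone, heads):
        self.backbone = backbone
        self.heads = heads  # e.g. {"IN": indian_head, "US": us_head}

    def predict(self, frame, country):
        features = self.backbone(frame)  # computed once, shared by all heads
        return self.heads[country](features)

# Toy instantiation: the "backbone" just sums pixel values.
model = GeoDetector(
    backbone=lambda frame: sum(frame),
    heads={"IN": lambda f: f"IN-sign:{f}", "US": lambda f: f"US-sign:{f}"},
)
```

The design choice this mirrors is that the expensive, geography-agnostic representation is trained once, while only the lightweight heads need country-specific data.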
The real secret is the feedback loop and our strong in-house capability to train, deploy, monitor, and continuously improve. We deploy, track precision and recall tightly, and use a mix of Generative AI and manual sampling to catch misses and continuously improve across terrains and geographies.
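The deploy-monitor-improve loop described here hinges on tracking precision and recall from sampled production alerts. A minimal sketch, assuming a hypothetical labeling scheme where each sampled alert outcome is tagged "tp", "fp", or "fn" (the metric floors are illustrative, not Netradyne's targets):

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Standard precision and recall from true/false positive and false
    negative counts, guarding against empty denominators."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def needs_retraining(outcomes, precision_floor=0.95, recall_floor=0.90):
    """Flag a deployed detector for review when sampled metrics dip.

    `outcomes` is an iterable of "tp" / "fp" / "fn" labels produced by
    manual or GenAI-assisted sampling of production alerts.
    """
    tp = sum(1 for o in outcomes if o == "tp")
    fp = sum(1 for o in outcomes if o == "fp")
    fn = sum(1 for o in outcomes if o == "fn")
    p, r = precision_recall(tp, fp, fn)
    return p < precision_floor or r < recall_floor
```

The point of the loop is that the flag feeds back into data curation: flagged misses become the next round of in-house training data.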
How do you deliver real-time coaching without overwhelming the driver or losing their trust through false positives?
Real-time coaching works best when it feels consistent and fair, because that is what gets drivers to actually engage with it. We start by making sure the system alerts only when it has high confidence, and we build that confidence by using more than one signal.
Cameras tell you what the driver is doing, while telematics tells you how the vehicle is responding, and combining the two helps us separate a quick glance from a developing risk pattern. We also look for compound patterns rather than reacting only to isolated cues. When multiple risk signals occur close together, the system can treat it as a higher severity situation, which reduces noise and makes the alert feel more credible to the driver.
By training our models on real Indian driving data, including how traffic merges, how trucks and buses move, and how two-wheelers interact with heavy vehicles, we teach the system which close interactions are routine and which combinations signal elevated risk.
Today, our systems process over 700 million miles of driving data every month globally, and that real-world exposure keeps sharpening detection accuracy, context understanding, and risk prediction. The result is alert precision of up to 99% even in complex road environments, with fewer false positives because the model relies on learned patterns, not isolated moments.
More importantly, if there is a risk but the driver automatically corrects within a given time window, we do not raise an alert or provide feedback. The objective is not to penalize momentary deviations but to intervene only when risk persists. That distinction is critical to maintaining driver trust.
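Two of the alerting rules described above, escalating only on compound patterns and staying silent when the driver self-corrects within a window, can be sketched together. The signal names, the two-cue threshold, and the two-second grace window are all hypothetical illustrations, not the product's actual parameters:

```python
def should_alert(risk_signals, correction_time_s, grace_s=2.0, compound_n=2):
    """Decide whether to voice an in-cab alert.

    risk_signals: set of concurrent cues, e.g. {"tailgating", "lane_drift"}.
    correction_time_s: seconds until the driver self-corrected, or None if
    the risk is still present.
    """
    if len(risk_signals) < compound_n:
        return False  # isolated cue: log it for trends, don't prompt
    if correction_time_s is not None and correction_time_s <= grace_s:
        return False  # driver corrected within the grace window: stay silent
    return True       # compound, persistent risk: intervene
```

Both suppression paths serve the same goal stated in the interview: fewer but more credible prompts, so drivers keep trusting the system.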
The other design choice is edge-first processing. Critical analysis and the immediate audio prompt happen on the device in real time, so the timing stays reliable even when connectivity is inconsistent. That lets us keep the experience simple for the driver. Fewer prompts, clearer prompts, and prompts that match what the driver is experiencing in that moment.
Driver fatigue is a major contributor to highway crashes. So how does the AI identify early signs of sleepiness? How effective is it?
Driver fatigue is one of the most underestimated killers on our highways. On long straight routes, steady speeds and minimal steering create a vacuum of stimulation, and fatigue does not hit like a lightning bolt; it seeps in like a tide. Vision-based edge AI treats this as a behavioral science problem. We do not wait for a microsleep moment, because by then it is often too late.
Instead, the system looks for early progression signals such as eyelid behavior, blink frequency and duration, and percent eye closure over time, combined with gaze drift, fixed stares, head nodding, reduced head stability, yawning frequency, and facial slackness. At the same time, it looks for external signals from our outward-facing AI camera, such as lane drift and changes in speed. The key is the progression, not any single cue, so we map risk through stages like initial and advanced drowsiness and intervene before situational awareness drops.
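The staged, multi-cue approach described above can be sketched as a simple scoring function. The cue weights, thresholds, and stage names below are illustrative assumptions only; Netradyne's actual calibration and model structure are not public, and a real system would use learned models rather than hand-set rules:

```python
def fatigue_stage(perclos, yawns_per_min, head_nodding, lane_drift):
    """Map early fatigue cues to a coarse risk stage.

    perclos: fraction of the last minute the eyes were mostly closed
    (a widely used drowsiness proxy). Inward cues (eyes, yawns, head)
    are corroborated by an outward cue (lane drift), echoing the
    progression-over-single-cue principle from the interview.
    """
    score = 0
    if perclos > 0.15:
        score += 2
    elif perclos > 0.08:
        score += 1
    if yawns_per_min >= 3:
        score += 1
    if head_nodding:
        score += 1
    if lane_drift:
        score += 1  # outward-facing camera corroboration
    if score >= 4:
        return "advanced_drowsiness"
    if score >= 2:
        return "initial_drowsiness"
    return "alert"
```

Requiring multiple cues to accumulate before escalating is what keeps a single yawn from triggering a prompt while a building pattern still gets caught early.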
Because video is processed locally in the cab, the response is instant. Drivers receive clear audio prompts to re-engage and take a break at the next safe point, and events are triggered to safety teams so they can address the behaviour. One customer told us that even experienced drivers were surprised by early alerts at first, and over time, they learned when to stop and rest. Fleets report measurable reductions in drowsiness indicators when the system is precise and treats fatigue as a safety risk to manage, not a failing to punish.
Regarding outcomes, several Indian fleets have seen clear reductions in drowsiness-related risk signals after rolling out these systems. The difference comes from framing fatigue as an operational safety condition to manage, not a driver flaw to blame. That tone makes drivers far more likely to accept the prompt, take a break, and stay engaged with the program instead of dismissing the alerts.
Looking 18–24 months out, what is the single most disruptive technology coming to fleet management?
Over the next 18 to 24 months, the most disruptive shift will be safety orchestration through a virtual assistant for safety managers. Generative AI and LLMs will turn the safety manager dashboard from a list of violations into a conversational partner. Instead of digging through spreadsheets, a manager can ask which routes are showing a spike in fatigue risk, what is driving it, and what interventions to prioritise, and get a clear, synthesised strategy.
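A question like "which routes are showing a spike in fatigue risk?" ultimately bottoms out in an aggregation over fleet event data. A minimal sketch of one such backend query an assistant layer might call; the function, the route names, and the 1.5x spike factor are all hypothetical:

```python
def routes_with_fatigue_spike(weekly_counts, baseline, factor=1.5):
    """Return routes whose fatigue-alert count this week exceeds
    `factor` times their typical weekly baseline.

    weekly_counts / baseline: dicts mapping route name -> alert count.
    Routes with no baseline history are skipped.
    """
    return sorted(
        route
        for route, count in weekly_counts.items()
        if baseline.get(route, 0) and count > factor * baseline[route]
    )
```

An LLM layer on top would translate the manager's natural-language question into calls like this one and synthesise the results into the prioritised strategy the interview describes.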
For drivers, feedback will evolve from simple alerts to more human, contextual guidance that is easier to accept and act on. Connectivity advances like 5G and V2X are promising, but they depend on broader infrastructure changes. A virtual assistant layer can be delivered sooner because it can sit on existing in-vehicle hardware and the edge perception stack already deployed.
What changes under the hood is intent reasoning at the edge. It is not enough to detect a pedestrian; the system will increasingly predict whether they are about to step into traffic, and coach earlier. In that window, AI stops being just a detector and becomes a key enabler for fleet safety teams, automating routine monitoring and analysis so teams can focus on saving lives.
