Witness.AI: Automated Accident Reconstruction
Witness.AI is a fully automated, serverless solution for comprehensive crash analysis.
  • Transforms raw driving video footage into professional-grade reports.
  • Leverages deep learning, rule-based reasoning, and generative AI on AWS.
  • Meticulously determines root causes and identifies responsible parties.
  • Provides actionable insights for future prevention.
Built for the AWS × INRIX Transportation Hackathon 2025
Why Road Accident Analysis Needs AI
Slow & Manual Investigations
Traditional crash reviews are manual, taking days or weeks. This delays insurance claims, law enforcement, and critical road safety interventions.
Prone to Human Error & Bias
Subjective human interpretation introduces bias and inconsistency. Inaccurate estimates of speed, distance, and time-to-collision lead to flawed fault assessments.
Focus on Monitoring, Not Causation
Most existing transportation systems monitor traffic but lack deep, post-incident reasoning. They fail to identify the true underlying causes of accidents.
Lack of Automated Explainability
There's no automated, unbiased pipeline to convert raw video evidence into verifiable, semantic insights about driver behavior and fault.
Witness.AI: Post-Incident Reasoning
Witness.AI was developed during the AWS × INRIX Hackathon to address a gap in automated analysis of post-incident events.
We targeted the INRIX transportation track while also building on multiple AWS services, so the project qualifies for the main AWS track as well.
Our focus: forensic analysis of accidents. We aim to understand not just that an accident occurred, but why, the precise sequence of events, and who was responsible, based purely on video evidence. Our vision is a reliable, digital "witness" pipeline that provides legally sound, explainable crash reports.
  • Leveraging AWS and INRIX data to solve specific safety challenges.
  • Moving beyond detection to sophisticated reasoning and explanation.
  • Creating a high-fidelity, sensor-independent accident reconstruction system.
Witness.AI: The End-to-End Serverless AI Pipeline
Witness.AI is a modular, event-driven AI pipeline transforming video into actionable insights across 5 key stages.
Stage 1: Video Ingestion
Ingests dashcam/CCTV video, extracts frames for processing.
Stage 2: Object Detection
Detects vehicles, lane markers, and critical scene elements in each frame.
Stage 3: Tracking & Metrics
Tracks vehicle motion, computes critical metrics (speed, acceleration, TTC).
Stage 4: Reasoning & Fault Analysis
Applies rule-based logic to classify behaviors (e.g., tailgating) and assign fault.
Stage 5: Report Generation
Generates a structured, human-readable crash report using AI (LLM).
The Serverless AWS Architecture in Detail
  • Leverages AWS serverless technologies for high scalability, fault-tolerance, and cost-efficiency.
  • Event-driven design ensures compute resources are only used during active video processing.
Step 1 & 2: Data Transformation and Perception
Stage 1: Extracting Frames
Transforms continuous video into discrete images for Computer Vision analysis.
  • Trigger: S3 video upload.
  • Tool: AWS MediaConvert for efficient frame extraction (5 frames/sec); a minimal trigger sketch follows this list.
  • Output: Still images stored in a dedicated S3 "Frames Bucket".
  • Benefit: Decoupling ensures responsiveness and prevents Lambda timeouts.
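For illustration, a minimal sketch of the S3-triggered Lambda that submits a MediaConvert frame-capture job. The bucket names, role ARN, and account-specific MediaConvert endpoint are placeholders, and the real job settings would carry additional input/output options.

```python
# Sketch of the ingestion trigger: an S3 upload event starts a MediaConvert
# frame-capture job. Environment variables and bucket names are assumptions.
import boto3
import os
from urllib.parse import unquote_plus

# MediaConvert needs an account-specific endpoint, assumed here to be provided
# via an environment variable.
mediaconvert = boto3.client(
    "mediaconvert", endpoint_url=os.environ["MEDIACONVERT_ENDPOINT"]
)

def handler(event, context):
    record = event["Records"][0]["s3"]
    key = unquote_plus(record["object"]["key"])  # S3 event keys are URL-encoded
    src = f"s3://{record['bucket']['name']}/{key}"

    mediaconvert.create_job(
        Role=os.environ["MEDIACONVERT_ROLE_ARN"],  # IAM role assumed by MediaConvert
        Settings={
            "Inputs": [{"FileInput": src}],
            "OutputGroups": [{
                "OutputGroupSettings": {
                    "Type": "FILE_GROUP_SETTINGS",
                    "FileGroupSettings": {
                        # Dedicated "Frames Bucket" described above
                        "Destination": f"s3://{os.environ['FRAMES_BUCKET']}/frames/"
                    },
                },
                "Outputs": [{
                    "ContainerSettings": {"Container": "RAW"},
                    "VideoDescription": {
                        "CodecSettings": {
                            "Codec": "FRAME_CAPTURE",
                            # 5 frames per second, matching the extraction rate above
                            "FrameCaptureSettings": {
                                "FramerateNumerator": 5,
                                "FramerateDenominator": 1,
                            },
                        }
                    },
                }],
            }],
        },
    )
```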
Stage 2: Detecting Vehicles & Lanes
Analyzes each extracted frame to identify key road elements using advanced AI models.
  • Models: State-of-the-art YOLOv8 & DETR.
  • Hosting: Scalable Amazon SageMaker endpoints; an invocation sketch follows this list.
  • Detection: Vehicles, pedestrians, lane markings, road obstacles.
  • Output: Detailed JSON with bounding boxes, confidence scores, and class labels.
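A sketch of the per-frame inference call, assuming a hypothetical endpoint name and a model that accepts a raw JPEG body and returns JSON detections.

```python
# Sketch of per-frame inference against a SageMaker endpoint (endpoint name,
# bucket layout, and response schema are assumptions for illustration).
import boto3
import json

s3 = boto3.client("s3")
runtime = boto3.client("sagemaker-runtime")

def detect_frame(bucket: str, key: str) -> list[dict]:
    # Pull the extracted frame from the Frames Bucket.
    frame_bytes = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    # Invoke the hosted detector (e.g. a YOLOv8 container) with the raw image.
    response = runtime.invoke_endpoint(
        EndpointName="witness-ai-detector",   # hypothetical endpoint name
        ContentType="application/x-image",
        Body=frame_bytes,
    )

    # Assumed response shape: a list of detections for this frame, e.g.
    # [{"class": "car", "confidence": 0.91, "box": [x1, y1, x2, y2]}, ...]
    return json.loads(response["Body"].read())
```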
Step 3: Advanced Tracking and Motion Metrics
Beyond detection, we analyze vehicle trajectory and dynamics by transforming static bounding box data into kinetic motion data.
Vehicle Tracking
Assign unique IDs using IoU & Kalman Filtering. Ensures accurate trajectory mapping despite occlusions.
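A minimal IoU helper and greedy matching pass of the kind such a tracker builds on; the threshold and data shapes are illustrative, and the full tracker pairs this with a Kalman filter for motion prediction through occlusions.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_detections(tracks, detections, threshold=0.3):
    """Greedy IoU matching: a track keeps its ID when a new detection overlaps
    its last box above the threshold (threshold value is an assumption).
    A production tracker would use Hungarian assignment plus Kalman prediction."""
    assignments = {}
    for track_id, last_box in tracks.items():
        best = max(detections, key=lambda d: iou(last_box, d["box"]), default=None)
        if best is not None and iou(last_box, best["box"]) >= threshold:
            assignments[track_id] = best
    return assignments
```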
Speed Calculation
Calculate relative speed from pixel movement (pixels per second). Essential for motion analysis and accident reconstruction.
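A sketch of the pixel-speed computation under the 5 frames/sec extraction rate; the box format and helper name are assumptions.

```python
def pixel_speed(prev_box, curr_box, fps: float = 5.0) -> float:
    """Relative speed in pixels/second from the displacement of box centers
    between consecutive frames (5 fps matches the extraction rate above)."""
    cx_prev = (prev_box[0] + prev_box[2]) / 2
    cy_prev = (prev_box[1] + prev_box[3]) / 2
    cx_curr = (curr_box[0] + curr_box[2]) / 2
    cy_curr = (curr_box[1] + curr_box[3]) / 2
    displacement = ((cx_curr - cx_prev) ** 2 + (cy_curr - cy_prev) ** 2) ** 0.5
    return displacement * fps  # pixels per frame × frames per second
```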
Time-to-Collision (TTC)
Estimate TTC visually from bounding box growth. This sensor-independent metric predicts imminent danger.
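A sketch of the visual-looming idea behind this estimate: if a lead vehicle's bounding box of height h grows at rate dh/dt, then TTC ≈ h / (dh/dt). The exact formulation used in the pipeline may differ.

```python
def estimate_ttc(prev_height: float, curr_height: float, fps: float = 5.0) -> float:
    """Approximate time-to-collision (seconds) from bounding-box height growth.
    A box that grows quickly is closing fast; TTC ≈ h / (dh/dt)."""
    dh_dt = (curr_height - prev_height) * fps   # height change per second
    if dh_dt <= 0:
        return float("inf")                     # not closing: no imminent collision
    return curr_height / dh_dt
```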
Output: tracks.json, a time-series dataset of all computed motion metrics for further analysis.
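For illustration, one entry of tracks.json might look like the record below; the field names are assumptions, not the project's actual schema.

```python
# Hypothetical shape of a single tracks.json entry (illustrative only).
track_record = {
    "frame": 142,                 # frame index at 5 fps
    "timestamp": 28.4,            # seconds since the start of the clip
    "vehicle_id": 3,              # stable ID assigned by the tracker
    "box": [512, 304, 688, 472],  # [x1, y1, x2, y2] in pixels
    "speed_px_s": 164.2,          # relative speed in pixels/second
    "acceleration_px_s2": -38.5,  # change in speed per second
    "ttc_s": 2.1,                 # estimated time-to-collision in seconds
}
```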
Step 4: Deterministic Behavioral and Fault Analysis
Witness.AI converts numerical metrics into meaningful behavioral insights and assigns preliminary fault classifications. This rule-based engine ensures transparency and explainability.
High-Risk Thresholds
TTC < 2.5s: Critical collision warning for severe safety violations.
Tailgating Behavior
Persistent unsafe distance between vehicles.
Aggressive Approach
Rapid speed increase or speed ≥ 180 px/s in constrained areas.
Lane Weaving
Lateral deviation > 5 px over short intervals: erratic driving.
Sudden Braking/Stop
Disproportionate negative acceleration, a root cause of rear-end collisions.
Unsafe Cut-In
Lane change into a rapidly closing gap or violating safe distance.
The output, faults.json, provides a deterministic record of observed behavioral violations linked to specific vehicles and timestamps.
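A condensed sketch of how the rule engine might apply the documented thresholds to each vehicle's per-frame metrics; field and function names are illustrative and follow the hypothetical tracks.json record above.

```python
# Rule-based classification of per-frame track records into fault events.
# Threshold values come from the list above; everything else is an assumption.
TTC_CRITICAL_S = 2.5          # critical collision warning
AGGRESSIVE_SPEED_PX_S = 180   # aggressive approach in constrained areas
WEAVE_DEVIATION_PX = 5        # lateral deviation indicating lane weaving

def classify_faults(record: dict, lateral_deviation_px: float) -> list[str]:
    """Map one vehicle's metrics for one frame to rule violations."""
    faults = []
    if record["ttc_s"] < TTC_CRITICAL_S:
        faults.append("critical_collision_warning")
    if record["speed_px_s"] >= AGGRESSIVE_SPEED_PX_S:
        faults.append("aggressive_approach")
    if lateral_deviation_px > WEAVE_DEVIATION_PX:
        faults.append("lane_weaving")
    # Tailgating, sudden braking, and unsafe cut-ins follow the same pattern,
    # each with its own threshold evaluated over a window of frames.
    return faults
```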
Step 5: AI-Generated Formal Crash Report
Synthesizes all data—frames, detection, motion, faults—into a coherent, narrative report using Generative AI.
LLM Integration via AWS Bedrock
Structured `faults.json` data is sent to Claude Haiku 4.5 on AWS Bedrock, chosen for its reasoning quality and ability to handle complex structured input.
A detailed prompt guides the AI to act as a neutral accident reconstruction expert, ensuring consistent, professional output.
  • Output: Structured Markdown report.
  • Sections: Overview, Timeline, Impact, Root Cause, Recommendations.
  • LLM adds human-style narrative for superior explainability.
Final report stored in S3 for immediate stakeholder access.
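One way this step could be wired up, using the Bedrock runtime Converse API; the model ID, prompt wording, bucket, and key are placeholders rather than the project's actual values.

```python
# Sketch of report generation on Amazon Bedrock and storage of the Markdown
# report in S3 (model ID, prompt, bucket, and key are placeholders).
import boto3
import json

bedrock = boto3.client("bedrock-runtime")
s3 = boto3.client("s3")

SYSTEM_PROMPT = (
    "You are a neutral accident reconstruction expert. Using the structured "
    "fault data provided, write a formal crash report in Markdown with the "
    "sections: Overview, Timeline, Impact, Root Cause, Recommendations."
)

def generate_report(faults: dict, model_id: str = "<claude-haiku-model-id>") -> str:
    response = bedrock.converse(
        modelId=model_id,
        system=[{"text": SYSTEM_PROMPT}],
        messages=[{
            "role": "user",
            "content": [{"text": json.dumps(faults)}],  # serialized faults.json
        }],
    )
    report_md = response["output"]["message"]["content"][0]["text"]

    # Store the Markdown report for immediate stakeholder access.
    s3.put_object(
        Bucket="witness-ai-reports",        # hypothetical bucket name
        Key="reports/incident-report.md",   # hypothetical key
        Body=report_md.encode("utf-8"),
        ContentType="text/markdown",
    )
    return report_md
```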
Witness.AI: From Pixels to Powerful Proof
Transforming raw video evidence into objective, actionable intelligence for high-stakes road safety automation.

Insurance Claim Assessment
Rapid, unbiased root cause analysis reduces claims processing time and costs, improving customer satisfaction.
Road Safety Authorities
Identify high-risk road segments, enforce traffic laws, and design targeted safety improvements from crash video analysis.
Autonomous Vehicle Development
Analyze test footage for edge cases, provide ground truth for training data, and accelerate safety validation.