Active — AI Application

InterVision

AI-powered mock interview platform with behavioral analysis and real-time feedback

Started — 2026
Repository — Private / Internal

The Context

Most people walk into interviews underprepared — not because they lack the skills, but because they've never practiced in a realistic setting with honest feedback. Coaching is expensive, peers are biased, and watching YouTube videos is passive. InterVision puts you in an actual interview simulation: a live AI session that plays the role of a technical interviewer, analyzes your answers and behavior in real time, and delivers a full post-session report. Not a score. Not a grade. A real breakdown — what you got right, where your gaps are, and what to do about it. Alongside this, a job market module cuts through the noise, quickly surfacing relevant listings from the global market.

Architecture & Execution

Each interview session is bootstrapped as a real-time WebRTC stream, with SignalR managing the bidirectional channel between the client and the AI interviewer engine. The AI layer handles dynamic question sequencing based on domain and seniority level, then processes each response through two parallel pipelines — a content analysis pass that evaluates correctness, depth, and structure, and a behavioral signal pass that tracks delivery patterns like hesitation, filler words, and pacing. Session state is held in Redis for sub-millisecond access during the live session, then flushed to SQL Server on completion. The report engine aggregates both pipelines into a structured performance document: identified strengths, skill gaps ranked by severity, and a prioritized action list. The job market module is a separate bounded context — it aggregates listings from external sources, normalizes them against the user's profile built from their session history, and surfaces ranked matches.
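A minimal sketch of that per-response flow in C#, assuming StackExchange.Redis for the hot session store; every type name here (ResponseProcessor, IContentAnalyzer, IBehaviorAnalyzer, CandidateResponse) is illustrative, not InterVision's actual code:

using System.Text.Json;
using System.Threading.Tasks;
using StackExchange.Redis;

// Hypothetical shapes for the two analysis passes described above.
public record CandidateResponse(string QuestionId, string Transcript);
public record ContentScore(double Correctness, double Depth, double Structure);
public record BehaviorScore(int FillerWords, double HesitationRatio, double WordsPerMinute);

public interface IContentAnalyzer  { Task<ContentScore>  AnalyzeAsync(CandidateResponse r); }
public interface IBehaviorAnalyzer { Task<BehaviorScore> AnalyzeAsync(CandidateResponse r); }

public sealed class ResponseProcessor
{
    private readonly IContentAnalyzer _content;
    private readonly IBehaviorAnalyzer _behavior;
    private readonly IDatabase _redis; // StackExchange.Redis database handle

    public ResponseProcessor(IContentAnalyzer content, IBehaviorAnalyzer behavior, IDatabase redis)
        => (_content, _behavior, _redis) = (content, behavior, redis);

    public async Task HandleAsync(string sessionId, CandidateResponse response)
    {
        // The two pipelines are independent, so they run in parallel.
        Task<ContentScore>  contentTask  = _content.AnalyzeAsync(response);
        Task<BehaviorScore> behaviorTask = _behavior.AnalyzeAsync(response);
        await Task.WhenAll(contentTask, behaviorTask);

        // Hot state stays in Redis for the life of the session; a
        // completion handler (not shown) would flush it to SQL Server.
        string turn = JsonSerializer.Serialize(new
        {
            response.QuestionId,
            Content  = contentTask.Result,
            Behavior = behaviorTask.Result,
        });
        await _redis.ListRightPushAsync($"session:{sessionId}:turns", turn);
    }
}

The point of the split is isolation: a slow content pass never blocks the behavioral pass, and nothing on the hot path touches SQL Server.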

Post-Mortem Lessons

01

Real-time AI conversation feels broken the moment latency crosses ~800ms. Streaming tokens directly to the client via SignalR — rather than waiting for a full response — was the fix that made the AI feel like an actual interviewer and not a form submission.
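A sketch of that streaming path as a SignalR hub method; IInterviewerEngine is an assumed wrapper around the model, not a real InterVision type. SignalR exposes any hub method returning IAsyncEnumerable<T> as a stream the client consumes item by item:

using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Threading;
using Microsoft.AspNetCore.SignalR;

// Assumed abstraction over the AI model's streaming completion API.
public interface IInterviewerEngine
{
    IAsyncEnumerable<string> StreamReplyAsync(string sessionId, CancellationToken ct);
}

public sealed class InterviewHub : Hub
{
    private readonly IInterviewerEngine _engine;

    public InterviewHub(IInterviewerEngine engine) => _engine = engine;

    // Returning IAsyncEnumerable<string> makes this a streaming hub
    // method: each token is pushed to the client as soon as it exists,
    // instead of after the full reply has been buffered.
    public async IAsyncEnumerable<string> AskQuestion(
        string sessionId,
        [EnumeratorCancellation] CancellationToken ct)
    {
        await foreach (var token in _engine.StreamReplyAsync(sessionId, ct))
            yield return token;
    }
}

The .NET client consumes this with StreamAsync<string>("AskQuestion", sessionId) and the JavaScript client with connection.stream("AskQuestion", sessionId); either way the first token renders while the model is still generating, which is what keeps perceived latency under the threshold.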

02

Behavioral analysis is a UX minefield. Users initially felt surveilled. Framing it as 'delivery coaching' rather than 'monitoring' in the report copy changed the perception entirely — same data, completely different reception.

03

The report was the hardest part to get right. An LLM left to its own output produces either vague praise or harsh criticism. Structuring the prompt around specific evidence ('at 4:12 you said X, which indicates Y') forced the output to be actionable rather than editorial.
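A sketch of what that evidence-first structure can look like; the wording and section names below are illustrative, not the production prompt:

// Illustrative template: every claim must anchor to a timestamped
// quote, which pushes the model toward evidence instead of editorial.
public static class ReportPrompts
{
    public const string Template = """
        You are writing an interview performance report.

        Rules:
        - Every claim MUST cite evidence as: [mm:ss] "quote" -> what it indicates.
        - No praise or criticism without a cited moment from the transcript.
        - Sections, in order: Strengths; Gaps (ranked by severity); Next steps.

        Timestamped transcript:
        {0}
        """;

    public static string Build(string timestampedTranscript) =>
        string.Format(Template, timestampedTranscript);
}

A side benefit of the [mm:ss] citation format: each claim can be checked against the session recording.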

AI Interview Session — Real-time
Performance Report — 360°
Job Market Feed — Live
Judgment — 0 · Only Insights
Core Technologies
.NET 9 · ASP.NET Core · SignalR · WebRTC · CustomAIModel · SQL Server · Cloudflare · Flutter
User joins session
        ↓
WebRTC audio/video stream
        ↓
SignalR real-time channel
        ↓
AI Interviewer (OpenAI)
  ↙            ↘
Answer analysis   Behavior signals
  ↘            ↙
Report engine
        ↓
Full performance report
  gaps · strengths · next steps