Mobilize.AI Demo Platform

Overview

This is a sales demo I built at EV3 to showcase Mobilize.AI's phone calling technology through an interactive web interface. The challenge was making AI phone conversations feel as natural as possible while giving our sales team the flexibility they needed to demonstrate different calling scenarios.

The Challenge

When I joined the team at EV3, demonstrating our AI calling technology was a clear pain point. Setting up demos took hours of engineering and customer service time, scripts were hard to modify after they were created, and the whole process felt clunky from the perspective of prospective customers. We needed something that could be set up quickly without engineering time, made scripts easy to modify, and felt polished to the prospects watching it.

The Approach

As the only developer on this project, I got to own the entire process from design through implementation. I started by shadowing our sales team to understand their demo workflow. With that knowledge, I worked with our engineering team to learn the architecture of our existing call scripts. Then, with feedback from Sales and Customer Success, I built the demo application iteratively until it was ready to show publicly.

Design Decisions

The key insight was that people need to feel how natural AI calls can be, not just hear about it. Beyond a clean visual design, the primary goals were ease of use and feedback directly in the app. To achieve this, I kept the interface minimal, surfaced call state as it changed, and handled errors inside the UI rather than letting a call silently fail.

Technical Implementation

The frontend is built with Next.js, React, and TypeScript. The greatest technical challenge was latency between the user's utterance and the AI agent's reply. I used WebSockets for real-time audio streaming between the client and the server, which greatly sped up communication. Being able to iterate quickly on call scripts was also important, so I built a flexible JSON-based script management system that let the team quickly modify conversation flows.
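
As a concrete sketch of the streaming side, the snippet below shows roughly how a browser client can pipe microphone audio up a WebSocket and play reply audio as it arrives. The endpoint, codec choice, and chunk interval are illustrative assumptions, not the actual Mobilize.AI interface.

```typescript
// Minimal sketch of the browser side of the audio link, assuming the
// server accepts compressed mic chunks and streams reply audio back
// as binary frames. Details are illustrative, not the real API.
async function startCall(url: string): Promise<void> {
  const socket = new WebSocket(url);
  socket.binaryType = "arraybuffer";

  // Outbound: MediaRecorder emits compressed mic chunks (webm/opus)
  // that are forwarded to the server as soon as they are available.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream, {
    mimeType: "audio/webm;codecs=opus",
  });
  recorder.ondataavailable = async (e) => {
    if (e.data.size > 0 && socket.readyState === WebSocket.OPEN) {
      socket.send(await e.data.arrayBuffer());
    }
  };

  // Inbound: decode each reply frame as it arrives and schedule it
  // back-to-back, so playback starts before the full reply exists.
  // Assumes each frame is a self-contained encoded audio segment.
  const audioCtx = new AudioContext();
  let playhead = 0;
  socket.onmessage = async (event: MessageEvent<ArrayBuffer>) => {
    const buffer = await audioCtx.decodeAudioData(event.data);
    const source = audioCtx.createBufferSource();
    source.buffer = buffer;
    source.connect(audioCtx.destination);
    playhead = Math.max(playhead, audioCtx.currentTime);
    source.start(playhead);
    playhead += buffer.duration;
  };

  socket.onopen = () => recorder.start(250); // flush a chunk every 250 ms
}
```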
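
The script system amounted to treating a conversation flow as data. Here is an illustrative sketch of what such a JSON schema can look like; the field names and the sample flow are hypothetical, not the production format.

```typescript
// Illustrative shape for a call script: each step is a prompt plus a
// map from the caller's intent to the next step. Not the real schema.
interface ScriptStep {
  id: string;
  prompt: string;               // what the agent says or asks
  next: Record<string, string>; // caller intent -> next step id
}

interface CallScript {
  name: string;
  entry: string;                // id of the opening step
  steps: ScriptStep[];
}

// A hypothetical demo flow, editable without touching application code.
const reminderScript: CallScript = {
  name: "Event reminder",
  entry: "greet",
  steps: [
    {
      id: "greet",
      prompt: "Hi! Calling to confirm you can still make Saturday's event.",
      next: { confirm: "thanks", decline: "reschedule" },
    },
    { id: "thanks", prompt: "Great, see you there!", next: {} },
    { id: "reschedule", prompt: "No problem. What day works better?", next: {} },
  ],
};
```

Because flows live in plain JSON, changing a demo means editing data rather than redeploying code, which is what made quick iteration possible.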

The Impact

The platform transformed how our sales team demonstrates the core Mobilize AI product: setting up a demo no longer requires hours of engineering and customer service time, and scripts can be adjusted on the spot for different calling scenarios.

What I Learned

This project solidified my understanding of a range of web technologies, from advanced JavaScript object manipulation for script editing to streaming binary data over WebSockets. Over the course of the project I learned strategies for transferring audio over the internet, and how audio is read and written both in the browser and on the server.
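
On the server side, the same stream can be handled with a small relay. The sketch below uses the ws package for Node; synthesizeReply is a hypothetical placeholder for the real transcription, LLM, and text-to-speech pipeline.

```typescript
import { WebSocketServer } from "ws";

// synthesizeReply stands in for the actual agent pipeline; it is a
// hypothetical placeholder, not a real function from this project.
declare function synthesizeReply(utterance: Buffer): Promise<Buffer>;

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  socket.on("message", async (chunk) => {
    // Binary WebSocket frames arrive as Node Buffers. Each inbound
    // utterance is handed to the agent pipeline, and the synthesized
    // reply audio is streamed straight back on the same socket.
    const replyAudio = await synthesizeReply(chunk as Buffer);
    socket.send(replyAudio);
  });
});
```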

From a UX perspective, I got to explore creating UIs for LLM-based applications, which largely boiled down to learning how to handle errors gracefully and reduce perceived latency via the UI.
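
One pattern that helped with both concerns is a small, explicit status model driven by call events; the sketch below is illustrative rather than the exact states the app used.

```typescript
// The badge flips to "thinking" the instant the user stops speaking,
// so model latency never reads as a frozen app, and failures map to
// a recoverable message instead of dead air. States are illustrative.
type CallStatus = "idle" | "listening" | "thinking" | "speaking" | "error";

const statusLabels: Record<CallStatus, string> = {
  idle: "Ready to call",
  listening: "Listening…",
  thinking: "Agent is thinking…", // shown before any reply audio arrives
  speaking: "Agent speaking",
  error: "Connection hiccup. Tap to retry.",
};

type CallEvent = "userStopped" | "audioStarted" | "audioEnded" | "failure";

function nextStatus(event: CallEvent): CallStatus {
  switch (event) {
    case "userStopped": return "thinking";
    case "audioStarted": return "speaking";
    case "audioEnded": return "listening";
    case "failure": return "error";
  }
}
```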