AI Quality Architect • Meta

Building the Super AI that fits on your wrist

I architect end-to-end AI quality systems across Meta's RealTime AI stack — voice, vision, and agentic reasoning. 98 test requests. 10 AI products. 992 bugs found. One vision: a supercomputer in every human's hand, then on every human's wrist.

98+
Test Requests
992
Bugs Caught
10
AI Products
8
Features Shipped
The Blueprint
The 12-Layer AI Architecture

A complete AI system isn't just a model — it's a vertically integrated stack from hardware to human experience. Each layer must be bulletproof for the system to earn trust at the wrist.

L12
Human Experience Layer
The moment AI feels like a natural extension of thought — ambient, anticipatory, invisible
UX / Trust
L11
Embodiment Layer
AI takes physical form — watch, glasses, car HUD, home devices. Form factor adaptation.
Hardware
L10
Personalization & Memory
Persistent context — knows you, your preferences, your patterns, your relationships
P13N
L09
Agentic Action Layer
Tool calling, device control, code execution, API orchestration — AI doesn't just answer, it acts
Agents
L08
Multimodal Fusion
Voice + vision + text + haptics unified in a single reasoning pass
Fusion
L07
Safety & Governance
Guardrails, compliance, content filtering, biometric auth for safety-critical actions
Safety
L06
Real-Time Reasoning (LLM)
Two-brain architecture — Fast Brain for instant response, Big Brain for deep reasoning
LLM
L05
Vision Processing
Live AI — real-time camera analysis, object ID, text reading, scene understanding
Computer Vision
L04
Voice Pipeline
VAD → ASR → NLU → TTS — the cascade that makes voice-first interaction possible
Voice
L03
Streaming Infrastructure
Cosmos shim, WebRTC, sub-200ms latency pipelines — real-time or nothing
Infra
L02
Edge Computing
On-device processing — VAD, wake word, basic commands without network dependency
Edge
L01
Silicon & Sensors
NPU, microphone arrays, bone conduction, haptic engines, cameras, biometrics
Hardware
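The vertical integration described above can be sketched as nested handlers: a request enters at the top layer, descends to the silicon, and the response rises back up, so every layer sees both directions. A minimal illustrative sketch, not the real stack — the layer names come from the blueprint, everything else is a stand-in:

```python
from typing import Callable

Handler = Callable[[str], str]

# Layer names from the blueprint, bottom (L01) to top (L12).
LAYERS = [
    "Silicon & Sensors", "Edge Computing", "Streaming Infrastructure",
    "Voice Pipeline", "Vision Processing", "Real-Time Reasoning (LLM)",
    "Safety & Governance", "Multimodal Fusion", "Agentic Action",
    "Personalization & Memory", "Embodiment", "Human Experience",
]

def wrap(name: str, inner: Handler, trace: list) -> Handler:
    """Each layer observes the request going down and the response coming up."""
    def layer(request: str) -> str:
        trace.append(f"down:{name}")
        response = inner(request)
        trace.append(f"up:{name}")
        return response
    return layer

def build_stack(trace: list) -> Handler:
    handler: Handler = lambda req: f"handled({req})"  # silicon at the bottom
    for name in LAYERS:  # wrap bottom-up, so the last wrap is the topmost layer
        handler = wrap(name, handler, trace)
    return handler

trace: list = []
stack = build_stack(trace)
result = stack("hey meta, what's in front of me?")
# trace records the descent L12→L01, then the return L01→L12
```

The point of the shape: a failure at any layer breaks the whole round trip, which is why each one has to be tested in isolation and in sequence.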
Portfolio
AI Products I've Shipped & Tested

End-to-end QA leadership across Meta's most critical AI products — from voice agents to embodied AI to content generation at scale.

Live — 50% Public
Meta AI Voice 2.0
Voice-first conversational AI powered by Kepler/Cosmos. Testing model quality, immersive UX, and Live AI (video) across C50, Facebook, Messenger, Instagram, and WhatsApp.
Kepler Cosmos ASR/TTS 26 Languages Live AI
Teamfood Active
Hatch Voice
Two-brain voice AI: Fast Brain (Cosmos) for instant response + Big Brain (Jarvis) for agentic reasoning with tool calls. Web, mWeb, iOS, Android.
Two-Brain Agentic Tool Calling Web + Mobile
New — Active
Meta Me
AI-powered embodied digital twin that looks, sounds, and reacts like you. Powered by LLM + video + audio AI models. The avatar on your wrist.
Embodied AI Video Gen Personalization
Active
MAIV2 — WhatsApp Voice
Meta AI Voice across WhatsApp's 3B users. 50 test cases, 13 execution cycles, voice-to-live-AI transitions, multilingual validation.
WhatsApp 3B Users Cross-Platform
Active
AI Content Consumption (MIFU)
AI-generated content at scale — autonomous creation and distribution in Facebook & Instagram feeds. Testing share rate, engagement quality, and relevance.
AIGC Feed Ranking FB + IG
Building
AI-Native QA System
AI that tests AI — 5-layer autonomous quality system. If AI scales to infinite surfaces (phone, watch, glasses, car), human QA can't keep up. AI must test itself.
Meta-QA Autonomous Self-Healing
RealTime AI — End to End
The Pipeline I Test Every Day

This is the exact stack that becomes the wrist supercomputer. I've validated it across 7 surfaces — surface #8 is the watch.

YOUR DEVICE
Watch / Phone / Glasses
VAD
UltraVAD — Voice Detection
ASR
Avocado — Speech to Text
LLM + TOOLS
IPNext — Reasoning + Actions
TTS
Avocado V2 — Text to Speech
RESPONSE
Audio + Actions + Visuals
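The cascade above can be sketched as four composed stages with an end-to-end latency check. The stage names (UltraVAD, Avocado, IPNext) are from the diagram; the functions here are hypothetical stubs standing in for the real services, and the 1-second budget mirrors the sub-second voice-latency target:

```python
import time

def ultravad(audio: bytes) -> bytes:
    # VAD: gate the pipeline on detected speech (stub: pass through non-empty audio)
    return audio if audio else b""

def avocado_asr(speech: bytes) -> str:
    # ASR: speech → text (stub)
    return speech.decode("utf-8", errors="ignore")

def ipnext(text: str) -> dict:
    # LLM + tools: reasoning plus actions (stub echoes the request)
    return {"reply": f"echo: {text}", "actions": []}

def avocado_tts(reply: str) -> bytes:
    # TTS: text → audio (stub)
    return reply.encode("utf-8")

def respond(audio_in: bytes, budget_ms: int = 1000) -> dict:
    """Run VAD → ASR → LLM+tools → TTS and check the voice-latency budget."""
    start = time.monotonic()
    speech = ultravad(audio_in)
    text = avocado_asr(speech)
    result = ipnext(text)
    audio_out = avocado_tts(result["reply"])
    elapsed_ms = (time.monotonic() - start) * 1000
    return {
        "audio": audio_out,
        "actions": result["actions"],
        "within_budget": elapsed_ms <= budget_ms,
    }

out = respond(b"start my car")
```

Testing this shape means asserting on each stage boundary, not just the final audio: a correct reply with a blown latency budget still fails.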
7
Surfaces Tested
26
Languages
373
Test Cases
83%
Fix Rate
<1s
Voice Latency
The Endgame
Supercomputer on Your Wrist

Mark's vision: a super AI in every hand. My vision: that same AI on your wrist — coding, controlling your car, replacing your phone entirely. The pipeline is surface-agnostic. The watch is just surface #8.

WRIST AI
Voice + Vision + Action
always on • always listening • always acting
Voice Command
No screen needed. Speak to code, to navigate, to create. The same Kepler pipeline running today — optimized for wrist.
Camera Intelligence
Point your wrist at anything. Live AI identifies, reads, understands. Real-time visual reasoning at 30fps.
Device Control
Start your car. Lock your house. Deploy your code. The agentic layer turns voice into physical world actions.
Code Anywhere
"Write a Python function that..." — AI writes, tests, deploys. Your watch is a full development environment via voice.
Your Digital Twin
Meta Me on your wrist — an AI that looks, sounds, and represents you. Takes meetings, responds to messages, preserves your authentic self.
Two-Brain Edge+Cloud
Fast Brain on-device for instant responses. Big Brain in cloud for complex reasoning. Works offline for critical commands.
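The two-brain split above reduces to a routing decision: a small on-device vocabulary gets instant offline handling, everything else needs the cloud. A minimal sketch under assumed names — the command set here is invented for illustration:

```python
# Assumed on-device command vocabulary (illustrative, not the real set).
ON_DEVICE = {"set a timer", "lock the house", "start my car", "stop"}

def route(utterance: str, online: bool) -> str:
    """Route an utterance to the Fast Brain, the Big Brain, or fail gracefully."""
    cmd = utterance.lower().strip()
    if cmd in ON_DEVICE:
        return "fast-brain"        # instant, on-device, works offline
    if online:
        return "big-brain"         # deep reasoning and tool calls, in the cloud
    return "unavailable-offline"   # complex request with no network

route("Start my car", online=False)
route("write a python function that sorts a list", online=True)
```

The design trade-off: the on-device set must stay small enough for edge silicon but complete enough that safety-critical commands never depend on the network.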
End-to-End AI Workflows
How I Validate AI Systems

Not just testing features — architecting quality systems that scale across infinite AI surfaces.

01
Test Strategy Design
Decompose AI products into testable layers. Map risk surfaces. Design coverage matrices across voice, vision, action, and memory capabilities.
02
Automated Execution
Build AI-powered test agents that execute 373+ test cases across 7 surfaces, 26 languages, and multiple device types — daily.
03
Bug Intelligence
Automated triage, priority classification, and routing. 992 bugs tracked with fix rate monitoring and engineering follow-up loops.
04
Quality Signals & Reporting
Real-time dashboards, weekly compliance reports, fix rate tracking. Automated reports to eng, PM, and leadership via AI-powered skillbooks.
05
Self-Healing QA (Next Gen)
AI Native QA — autonomous test generation, self-healing test suites, AI that catches bugs before humans notice them. The future of quality.
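Steps 01 and 03–04 above can be sketched together: a coverage matrix is just the cross product of surfaces × languages × capabilities, and the fix rate is a ratio over tracked bugs. The surface and capability names follow this page; the language codes are three illustrative examples of the 26, and the 823 figure is an estimate derived from the stated 992 bugs at an 83% fix rate:

```python
from itertools import product

SURFACES = ["watch", "phone", "glasses", "web", "mweb", "ios", "android"]
LANGUAGES = ["en", "es", "hi"]                 # 3 of the 26 validated languages
CAPABILITIES = ["voice", "vision", "action", "memory"]

# Every surface × language × capability combination becomes a test-case ID.
matrix = [f"{s}/{l}/{c}" for s, l, c in product(SURFACES, LANGUAGES, CAPABILITIES)]

def fix_rate(fixed: int, tracked: int) -> float:
    """Quality signal: percentage of tracked bugs with a shipped fix."""
    return round(100 * fixed / tracked, 1)

# At an 83% fix rate, 992 tracked bugs imply roughly 823 fixes shipped.
fixed_estimate = round(992 * 0.83)
```

The matrix is why human QA stops scaling: 7 surfaces × 26 languages × 4 capabilities is already 728 cells before device types multiply it further, which is the argument for step 05.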