Dakota Kim

Director of Engineering, Applied AI – EQengineered
CEO & Founding Engineer – MadWatch LLC

In a nutshell

I build systems that think and interfaces for humans to think with them. I lead applied AI teams, architect full-stack products end-to-end, and publish open-source models on Hugging Face. My work runs from C and systems programming to fine-tuning 21B-parameter language models, and from native iOS and Android apps to autonomous agents. I'm equally comfortable writing CUDA kernels, designing product UX, or leading a team through an architecture migration.

What I think about

Long-horizon reasoning in agents. World models. Continuous self-improvement. Human-computer interfaces, from pixel-perfect apps to embodied intelligence.

By the numbers

21B – Largest model fine-tuned
10+ – ML models published
7+ – Apps shipped to production
10+ – Years building software

I build systems that think, and interfaces for humans to think with them

Engineering leader and ML researcher working across every layer of the stack. From systems programming and native mobile to large language models and autonomous agents, I architect products, lead teams, and publish research at the frontier of applied AI.

Three modes of building

I think in systems, ship as a product engineer, and research at the frontier of machine intelligence.

01

Applied AI & Agents

Building intelligent systems that reason, plan, and act. I fine-tune large language models, design agentic architectures, and research approaches to long-horizon reasoning and continuous self-improvement. I've published 10+ models and datasets on Hugging Face, including a 21B-parameter fine-tune.

02

Product Engineering

End-to-end product thinking with deep technical execution. I architect and ship full-stack web applications, native iOS and Android apps, and watchOS experiences, always with a systems mindset. Seven apps on the App Store and counting.

03

Human-Computer Interfaces

Designing the surfaces where humans and intelligent systems meet. From pixel-perfect mobile UIs to voice interfaces to embodied intelligence, I'm interested in every modality through which people interact with computation.

What I think about

The problems I keep returning to: at work, in open source, and in the margins of my notebooks.

001

Long-Horizon Reasoning

How do we build agents that don't just react, but plan? I'm interested in architectures that enable multi-step reasoning, goal decomposition, and execution across extended time horizons: agents that can hold a thread for hours, not just tokens.
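
In sketch form, the pattern looks something like the loop below: a minimal Python illustration, assuming a generic llm callable that maps a prompt string to a completion. The function name and prompt wording are hypothetical, not any particular framework.

from typing import Callable

def plan_and_execute(goal: str, llm: Callable[[str], str], max_steps: int = 10) -> list[str]:
    # Decompose the goal into steps, then execute them in order,
    # feeding every prior result back in so late steps can depend
    # on early ones. Holding that context is the long-horizon part.
    plan = llm("Break this goal into numbered steps:\n" + goal)
    steps = [line.strip() for line in plan.splitlines() if line.strip()]
    results: list[str] = []
    for step in steps[:max_steps]:
        context = "\n".join(results)
        results.append(llm(f"Goal: {goal}\nDone so far:\n{context}\nExecute: {step}"))
    return results

Everything interesting hides in what this sketch omits: replanning when a step fails, and compressing the carried context so the thread survives past the context window.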

002

World Models

Intelligence requires a model of reality. I explore how to ground language models in structured representations of the world (spatial, temporal, causal) so they can predict, simulate, and reason about consequences before acting.
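
A toy illustration of the idea, with the WorldModel protocol and scoring function as assumptions for the example rather than any real API: the agent simulates candidate actions against a learned transition model, and only then commits.

from typing import Callable, Protocol

class WorldModel(Protocol):
    def predict(self, state: str, action: str) -> str:
        """Predicted next state after taking this action in this state."""
        ...

def choose_action(model: WorldModel, state: str,
                  actions: list[str], score: Callable[[str], float]) -> str:
    # Simulate each candidate and pick the one whose predicted
    # consequence scores best: the reasoning about outcomes happens
    # in the model, before anything touches the real environment.
    return max(actions, key=lambda a: score(model.predict(state, a)))

Whether the states are text, scene graphs, or latents is exactly the representation question above.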

003

Continuous Self-Improvement

Systems that learn from their own experience. I research approaches to self-play, reflection, and iterative refinement: how an agent can evaluate its own outputs, identify failures, and improve without human intervention.
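
The reflection half of that loop fits in a few lines. A minimal sketch, again assuming a generic llm callable and hypothetical prompts; the research lives in the critic, not the plumbing:

from typing import Callable

def reflect_and_refine(task: str, llm: Callable[[str], str], rounds: int = 3) -> str:
    # Draft, self-critique, revise: stop early when the critic
    # finds nothing concrete left to fix.
    draft = llm(f"Task: {task}\nProduce a first attempt.")
    for _ in range(rounds):
        critique = llm(f"Task: {task}\nAttempt:\n{draft}\nList concrete failures, or reply OK.")
        if critique.strip().upper() == "OK":
            break
        draft = llm(f"Task: {task}\nAttempt:\n{draft}\nRevise to fix:\n{critique}")
    return draft

A model grading its own work is a weak critic; making that evaluation reliable without a human in the loop is the hard part.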

004

Embodied Intelligence

The full spectrum of human-computer interaction, from touch screens to voice to physical embodiment. I'm drawn to the question of how intelligent systems should present themselves to humans, and how that interface shapes what's possible.

Open research

Models, datasets, and technical writing published on Hugging Face.

Things I've shipped

Products, tools, and experiments, from AI agents to watchOS apps.

Thoughts & explorations

Technical deep-dives, engineering philosophy, and occasional musings.

Let's build something

Find me across the internet.