Introducing Units - A Tiny DSL for LLM-Generated UIs

A practical exploration of using a compact UI DSL to reduce LLM token overhead, improve structure validation, and compile cleanly to JSX.

Mar 03, 2026


I started Units as a side project to test a simple idea:

What if LLMs could generate UI with a compact DSL instead of full JSX?

When we use LLMs to build interfaces today, they often spend tokens on verbose syntax, repeated patterns, and formatting noise. JSX is excellent for developers, but it may not be the most efficient language for a model to think in.

Units is my attempt to explore that gap.

The hypothesis

If we give models a condensed, structured UI language, they can:

  • use fewer tokens per UI generation
  • keep more context window available for reasoning
  • produce output that is easier to parse, validate, and transform
  • iterate faster in agent-style loops

In short: less syntax overhead, more signal.

Why this matters

LLMs generating UI usually juggle:

  • layout and hierarchy
  • component composition
  • design-system constraints
  • data bindings and interactions
  • app-specific logic

Every token spent on boilerplate is a token not spent on decisions.
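As a rough illustration, a naive token proxy can compare the two forms of the same card used later in this post. This is only a sketch: real savings depend on the model's actual tokenizer (BPE merges behave very differently from a regex split), so treat the split rule below as a crude stand-in for a real measurement.

```typescript
// Crude token proxy: word runs and individual symbols.
// Real counts depend on the model's tokenizer; this only illustrates
// the kind of comparison worth measuring properly.
const roughTokens = (s: string): number =>
  (s.match(/\w+|[^\s\w]/g) ?? []).length;

const jsxForm = `<Card className="p-4">
  <Heading level={2}>Welcome</Heading>
  <Button variant="primary" onClick={start}>Get Started</Button>
</Card>`;

const unitsForm = `card.p4[
  h2["Welcome"]
  button.primary{click:start}["Get Started"]
]`;

console.log(roughTokens(jsxForm), roughTokens(unitsForm)); // 46 vs 25 with this proxy
```

Even this crude proxy suggests the compact form roughly halves the symbol count; the real experiment is to rerun this with each provider's tokenizer.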

Visual: Units pipeline


This is the core idea: keep generation compact, validate early, then compile to familiar framework code.

What Units is (so far)

Units is a compact DSL that represents UI intent in a tighter format, then compiles to JSX (and potentially other targets later).

JSX

<Card className="p-4">
  <Heading level={2}>Welcome</Heading>
  <Button variant="primary" onClick={start}>Get Started</Button>
</Card>

Units DSL

card.p4[
  h2["Welcome"]
  button.primary{click:start}["Get Started"]
]

Same intent, fewer tokens, cleaner structure for machine generation.
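To make the compile step concrete, here is a minimal sketch of a Units-to-JSX compiler. The grammar and the component/modifier mappings (card → Card, p4 → className "p-4", primary → variant, click → onClick) are assumptions inferred from the single example above, not a published Units spec.

```typescript
// Sketch of a Units -> JSX compiler. Grammar and mappings are assumptions
// inferred from the example in this post, not a published spec.
type UnitNode = {
  tag: string;
  mods: string[];
  events: Record<string, string>;
  children: (UnitNode | string)[];
};

function parse(src: string): UnitNode {
  let i = 0;
  const ws = () => { while (i < src.length && /\s/.test(src[i])) i++; };
  function node(): UnitNode {
    ws();
    const m = /^[a-z][a-z0-9]*/i.exec(src.slice(i));
    if (!m) throw new Error(`expected tag at ${i}`);
    const n: UnitNode = { tag: m[0], mods: [], events: {}, children: [] };
    i += m[0].length;
    while (src[i] === ".") {               // modifiers: .p4, .primary
      i++;
      const mm = /^[a-z0-9]+/i.exec(src.slice(i))!;
      n.mods.push(mm[0]); i += mm[0].length;
    }
    if (src[i] === "{") {                  // events: {click:start}
      i++;
      while (src[i] !== "}") {
        const ev = /^\s*([a-z]+)\s*:\s*([a-zA-Z0-9_]+)\s*,?/.exec(src.slice(i))!;
        n.events[ev[1]] = ev[2]; i += ev[0].length;
      }
      i++;
    }
    if (src[i] === "[") {                  // children: nodes or "strings"
      i++; ws();
      while (src[i] !== "]") {
        if (src[i] === '"') {
          const s = /^"([^"]*)"/.exec(src.slice(i))!;
          n.children.push(s[1]); i += s[0].length;
        } else {
          n.children.push(node());
        }
        ws();
      }
      i++;
    }
    return n;
  }
  return node();
}

// Illustrative mapping tables -- these names are assumptions for the sketch.
const TAGS: Record<string, string> = { card: "Card", button: "Button" };
const EVENTS: Record<string, string> = { click: "onClick" };

function compile(n: UnitNode | string): string {
  if (typeof n === "string") return n;
  const attrs: string[] = [];
  let tag: string;
  const h = /^h([1-6])$/.exec(n.tag);
  if (h) { tag = "Heading"; attrs.push(`level={${h[1]}}`); }
  else tag = TAGS[n.tag] ?? n.tag[0].toUpperCase() + n.tag.slice(1);
  for (const mod of n.mods) {
    const sp = /^([a-z]+)(\d+)$/.exec(mod);
    if (sp) attrs.push(`className="${sp[1]}-${sp[2]}"`); // p4 -> p-4
    else attrs.push(`variant="${mod}"`);                 // primary -> variant
  }
  for (const [ev, fn] of Object.entries(n.events))
    attrs.push(`${EVENTS[ev] ?? ev}={${fn}}`);
  const open = `<${tag}${attrs.length ? " " + attrs.join(" ") : ""}>`;
  return `${open}${n.children.map(compile).join("")}</${tag}>`;
}
```

The interesting design choice is that the intermediate tree (`UnitNode`) is plain data: it can be schema-validated, diffed, or retargeted to another framework before any JSX string exists.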

Visual: Agentic UI generation loop


Possible benefits beyond token savings

Token savings are the starting point, not the finish line. A constrained DSL may also unlock:

  • better reliability (fewer syntax errors than raw JSX generation)
  • stronger guardrails (schema validation before codegen)
  • cleaner diffs (compact trees are easier to compare and patch)
  • model portability (shared intermediate format across providers)
  • safer automation (agents reason at intent-level before emitting framework code)
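The guardrails point can be sketched concretely. Assuming a parsed Units tree shaped like `{ tag, children }`, an agent could reject hallucinated components or illegal nesting before any framework code is emitted. The allow-list and the "text-only" nesting rule below are illustrative assumptions, not part of any Units spec.

```typescript
// Intent-level validation before codegen. The component allow-list and
// the text-only nesting rule are illustrative assumptions.
type UiNode = { tag: string; children: (UiNode | string)[] };

const ALLOWED = new Set(["card", "h1", "h2", "button", "text"]);
const TEXT_ONLY = new Set(["h1", "h2", "button"]); // may only contain strings

function validate(n: UiNode, errors: string[] = []): string[] {
  if (!ALLOWED.has(n.tag)) errors.push(`unknown component: ${n.tag}`);
  for (const c of n.children) {
    if (typeof c === "string") continue;
    if (TEXT_ONLY.has(n.tag)) errors.push(`${n.tag} may only contain text`);
    validate(c, errors);
  }
  return errors;
}

// A valid tree passes; a hallucinated component is caught before codegen.
const tree: UiNode = {
  tag: "card",
  children: [{ tag: "h2", children: ["Welcome"] }],
};
console.log(validate(tree)); // []
```

Because validation runs on the intent tree rather than on generated JSX, a failed check can be fed straight back to the model as a short, structured error instead of a compiler stack trace.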

What I am testing next

This is still an experiment, so the focus is on measurable outcomes:

  1. token usage vs JSX across common UI tasks
  2. generation quality and structural correctness
  3. compile-time guarantees and validation ergonomics
  4. end-to-end latency in iterative LLM UI workflows

Closing thought

I am not trying to replace JSX for humans.

I am exploring whether models need a better language to generate in, one that compiles down to the language humans already use. If this works, we get both: efficient generation and familiar runtime code.

If you are experimenting with LLM-native UI pipelines, I would love to compare notes.


A Personal Blog by Tushar Mohan.
Sharing key lessons and insights from experiences in engineering, product development, and team building. Views expressed are personal and based on my experiences. © 2026