How xllify uses Claude to suggest code

How xllify uses Claude models to generate Luau function code inside the Assistant.

The xllify Assistant lets you describe a function in plain English and get working code back, ready to package up with xllify and deploy as an Excel add-in. Paste in existing VBA or Excel formulas and have them converted. Build and download an XLL directly from the chat. See it in action.
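For instance, asking for "a weighted average of one range by another" might come back as something like this. This is an illustrative sketch of the kind of Luau the Assistant produces; the exact shape of a generated function, and the registration boilerplate xllify wraps around it, will differ in practice:

```lua
-- WEIGHTEDAVG(values, weights): weighted mean of two equal-length ranges.
-- Illustrative only; how xllify exposes this to Excel is not shown here.
local function weightedavg(values: {number}, weights: {number}): number
	assert(#values == #weights, "ranges must be the same length")
	local total, wsum = 0, 0
	for i, v in values do
		total += v * weights[i]
		wsum += weights[i]
	end
	assert(wsum ~= 0, "weights must not sum to zero")
	return total / wsum
end
```

Note how little language there is to get wrong: typed parameters, one loop, two assertions. That is the property the next section is about.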

This is powered by Claude. The AI is not at the core of xllify: the runtimes, the Luau VM, and taming Excel’s extensibility model are. But it’s a useful ingredient: it lowers the barrier to entry considerably for people who wouldn’t otherwise write code at all, yet who, as practitioners, can express their requirements clearly. That combination is what made Excel great over forty years ago.

Why Luau makes this work

As we covered in Why Luau?, Luau has a small, well-defined surface area. That matters for code generation. Claude can target it reliably. The language is constrained enough that generated code is almost always syntactically correct and does what you’d expect. Python and JavaScript have too many ways to do the same thing.

The predictable objection, “I don’t know Luau”, matters less than it used to. A lot of us write code with LLMs and agents now, and the language is increasingly the LLM’s problem, not yours. You can absolutely build an add-in end to end in your browser with the xllify Assistant. If you’d rather work locally with a similar experience, the starter template gets you up and running quickly, including integration with Claude Code. Check it out.

Does it matter that the code is generated rather than hand-written? Arguably not. A C compiler generates machine code. Your Python interpreter generates bytecode. LLM-generated Luau is just another abstraction. Whether you typed it or described it, the function either works or it doesn’t.

We proxy the API calls

When you use the Assistant, API calls go through xllify’s servers rather than directly to Anthropic. We absorb the cost, so you don’t need a Claude API key. We also control the model, the prompt, and the behaviour centrally.

A mixture of models

Not every request needs the same amount of oomph (technical term). All initial conversation messages, as well as simpler questions, hit Claude Haiku first, which is fast and cheap and great for triage. Code generation escalates to Sonnet or Opus depending on complexity. Routing between models keeps the experience responsive while avoiding unnecessary compute, latency and cost. We also use prompt caching and query-similarity matching against an in-memory cache on our API, so repeated or near-identical requests come back faster and at lower cost.
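The routing above can be sketched roughly as follows. The model names, the complexity threshold, and the `normalize` similarity hash are all assumptions for illustration; the real heuristics live server-side:

```lua
-- Hypothetical tiered routing: triage on Haiku, escalate for code generation.
local MODELS = { triage = "claude-haiku", code = "claude-sonnet", hard = "claude-opus" }

local function pickModel(request)
	if not request.wantsCode then
		return MODELS.triage   -- greetings, follow-ups, simple questions
	elseif request.complexity > 0.8 then
		return MODELS.hard     -- long or intricate generation jobs
	else
		return MODELS.code     -- typical function generation
	end
end

-- Near-identical prompts map to one cache key, so repeats are served from memory.
local cache = {}
local function cachedCompletion(request, complete)
	local key = normalize(request.prompt)  -- normalize: assumed similarity hash
	if cache[key] == nil then
		cache[key] = complete(pickModel(request), request.prompt)
	end
	return cache[key]
end
```

The design point is that the expensive model only sees requests that need it, and the cache sits in front of all three tiers.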

Keeping it focused

The Assistant is deliberately constrained to a narrow domain: writing Excel custom functions in Luau. It won’t help you write a web scraper or debug your Python project. That keeps the output quality high and the experience coherent. It also means we don’t give away our tokens to anyone who fancies a free chatbot. 💸

xllify is not an LLM wrapper

People see AI code generation and assume that’s the product. It isn’t. The native XLL runtime, the Luau VM compiled to WASM, the cross-platform build pipeline: none of that was easy to get right. The AI sits on top of it.

That said, it’s a genuinely useful layer. Someone with no Luau knowledge, or no coding background at all, can describe what they want and have a working Excel function in under a minute. That’s a real leveller, and it’s worth building well.