Building with Ash, Before & After AI
BLUF (Bottom Line Up Front)
AI-assisted coding with the Ash framework is surprisingly good.
History
I built a side project with Ash two years ago and wrote about it here. I might have had an early version of GitHub Copilot at the time, but the speed at which I write code with AI assistance has increased massively since then. I hadn't really done much with Ash until I reached for it again on a side project a couple of months ago.
My assumption/hypothesis
I’ve heard that LLMs are best at writing code in popular languages, and since Ash isn’t very popular yet, I figured LLMs might be quite bad at it. But I decided to give it a try; if it didn’t work well, I would just drop down to plain Elixir.
New Findings & Opinion
One of my biggest struggles with Ash two years ago was the code interface: whether to use an Ash.Query directly or expose a proper code interface, and the error messages when I got it wrong were not as clear as those from vanilla Elixir code. Surprisingly, the LLM got through this quite quickly. It would either get it right the first time, or, when it made a mistake, Cursor could run the code with Tidewave, understand the error faster than I could, and fix the code.
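To make the distinction concrete, here is a minimal sketch of the two styles, assuming Ash 3.x and a hypothetical MyApp.Blog.Post resource (the module names are made up for illustration):

```elixir
# Style 1: query the resource directly with Ash.Query.
# Ash.Query.filter is a macro, so the calling module needs `require Ash.Query`.
require Ash.Query

MyApp.Blog.Post
|> Ash.Query.filter(author_id == ^user.id)
|> Ash.read!(actor: user)

# Style 2: expose a code interface on the resource itself,
# so callers get plain named functions instead of query pipelines.
code_interface do
  define :list_posts, action: :read
  define :get_post, action: :read, get_by: [:id]
end

# Callers then write:
MyApp.Blog.Post.get_post!(post_id, actor: user)
```

The code interface is what the rest of the app (the UI, background jobs) ends up calling, which is why getting it right matters more than any individual query.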
One of the strongest points in favor of Ash is that the modeling of the business domain & logic is extremely dense and clear (a few short lines of code). So if someone new to the project writes code that might contain mistakes, a person who understands the business domain can very quickly read & review what was written and check that it lines up with the business requirements. This benefit lines up PERFECTLY with an LLM writing code.
My new workflow with LLM assisted Ash coding
I start with the Ash resources (as you should), carefully reading & reviewing every line written by the LLM, and sometimes just hand-writing small changes. Migrations are generated by ash_postgres; those are reviewed even more carefully. Then I define the code interface through which the UI will access the resources, and add a few tests wherever an authorization bug could one day become a security issue. Ash policies, with the user passed as the actor, are usually the right way to do it. Then I just vibe-code the UI. I might have strong opinions about how the routes should be constructed, and I carefully review every function call that touches my resources, but the actual UI code is LLM-generated and only lightly reviewed.
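The resource-first starting point looks roughly like this: a sketch of a single Ash resource, assuming Ash 3.x with ash_postgres, using hypothetical module names:

```elixir
defmodule MyApp.Blog.Post do
  use Ash.Resource,
    domain: MyApp.Blog,
    data_layer: AshPostgres.DataLayer,
    authorizers: [Ash.Policy.Authorizer]

  postgres do
    table "posts"
    repo MyApp.Repo
  end

  attributes do
    uuid_primary_key :id
    attribute :title, :string, allow_nil?: false
  end

  relationships do
    belongs_to :author, MyApp.Accounts.User
  end

  actions do
    defaults [:read]

    create :create do
      accept [:title]
      # relate_actor is a built-in change: the actor passed to the
      # action becomes the :author of the new record.
      change relate_actor(:author)
    end
  end
end
```

From a resource like this, `mix ash_postgres.generate_migrations` produces the migrations, which is why reviewing the resource carefully pays off twice.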
This has gone very quickly, because the code I need to carefully review is far less than with vanilla Elixir (or any other programming language I’ve ever read). The LLM generating lots of code and sometimes making a mistake becomes less of a liability when there is less code for me, as a human, to review. The danger of me getting bored and just committing & merging code that I don’t fully understand is reduced.
Context Switching & Ash Policies
When you decide to review the authorization flow, trying to make sure that each user can only see and do what they are supposed to, Ash policies are implemented with a minimal amount of code, all organized into policies blocks. This allows your human-reviewer brain to peruse all the authorization code while staying in the zone of thinking through authorization.
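A policies block in that spirit might look like this, a sketch assuming Ash 3.x and the same hypothetical Post resource with an author relationship:

```elixir
policies do
  # Users can only read their own posts: the actor's id must
  # match the record's author_id.
  policy action_type(:read) do
    authorize_if expr(author_id == ^actor(:id))
  end

  # Creating a post just requires being logged in;
  # relate_actor on the action ties the record to the creator.
  policy action_type(:create) do
    authorize_if actor_present()
  end
end
```

Everything a reviewer needs to audit "who can do what" sits in one place, rather than being scattered across controllers and context modules.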
Downsides
If you are considering using Ash, you need to weigh the downsides/costs to make an informed decision. The main one is the learning curve. I went back and read the Cons I listed here; I think they all still hold true, but perhaps with less weight. The documentation has improved a lot in the last two years, though a few times I looked up a module on HexDocs and found only a description & typespec, with no examples. The error messages & learning curve may not have changed much, but an LLM definitely reduces the time & pain. Using Claude in agent mode with Tidewave, it would very quickly see the errors, understand them, and fix the code.
What did I build?
I built a Server-Sent Events as a Service product called EventBlast. Ash made it really easy to model that each user can create and belong to multiple organizations, that an Organization has one Plan with a monthly price & various rate limits, and that each org can have many API keys, plus all the authentication & authorization required to control who can see and do what. This is all boilerplate, and not particularly interesting to work on. I didn’t want to spend a bunch of time writing/reviewing all that code by hand. Ash allowed me to get it done very quickly, with minimal code to maintain going forward.
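That domain model compresses into a handful of relationship declarations. A sketch of the Organization resource, with hypothetical module names (the post doesn't show EventBlast's actual code) and assuming Ash 3.x:

```elixir
defmodule EventBlast.Accounts.Organization do
  use Ash.Resource,
    domain: EventBlast.Accounts,
    data_layer: AshPostgres.DataLayer

  postgres do
    table "organizations"
    repo EventBlast.Repo
  end

  attributes do
    uuid_primary_key :id
    attribute :name, :string, allow_nil?: false
  end

  relationships do
    # Each user can create and belong to multiple organizations,
    # via a join resource for the membership.
    many_to_many :users, EventBlast.Accounts.User,
      through: EventBlast.Accounts.Membership

    # An Organization has one Plan (monthly price, rate limits).
    belongs_to :plan, EventBlast.Billing.Plan

    # And each org can have many API keys.
    has_many :api_keys, EventBlast.Accounts.ApiKey
  end
end
```

Each bullet of the business description maps to one declaration, which is the density the earlier section was describing.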
Final thoughts
Would an LLM be better at writing Python? Maybe. But that isn’t the bottleneck in delivering software in 2025. What ultimately builds good software on clean code, when an LLM is in the mix, is a human reviewing all of the code the LLM generates, ensuring that every bit of it lines up with expectations, and feeling confident enough to just hit git reset --hard HEAD and try again with a new prompt.