Here’s someone who’s excited that they were able to make something, but then… it all falls flat on the follow-through.
I was never that interested in coding or making software. The things you could do with it just didn’t interest me that much. Personally, my greatest joy comes from writing and reading. I collect books from everywhere. If a friend is moving and getting rid of books, I will take all of them. If I am on a run and see books in a box by the side of the road, I will carry the box home with me.
In college, I double majored in Politics and Economics. But at UC Santa Cruz, “Politics” mostly doesn’t mean political science; it means political theory and philosophy. So I spent a lot of time reading and writing about thinkers like Foucault, Hegel, and Butler. All of this to say: although I recognize the power of quantitative analysis and mechanical processes, they aren’t my first love or my core interest.
This changed in 2019, when I encountered GPT-2 and BERT, digital systems that could understand and create language. In fact, they could work with ideas in natural language. I was fascinated, and then obsessed. To be honest, that feeling never really faded. It was strong enough that I realized I finally needed to learn how to code and to build a team that could work with software and data.
Handshake and I have come a long way since then. We’ve had the pleasure of working with think tanks to bring their ideas to life, with companies to analyze their conversations with customers, and even with groups in government and defense. Sometimes that work looks like agentic, abstract conceptualization, and sometimes more like traditional data science and NLP. All of it has been fascinating.
But over the last year or so, working with code and learning to code have changed a lot, mostly for the better. Large models have gotten good at producing accurate code, and not just that: in many cases, they can take an abstract idea expressed in natural language and come up with many possible realizations of it in code. Ethan Mollick shares a lot of fun examples of this on his social channels. Here’s a little video demo he posted on Twitter:
In the video, he’s using a feature of Anthropic’s consumer interface for their Claude models called “Artifacts.” The Artifacts feature has really nice scaffolding, so the generated code comes out correct pretty much every time and displays right in the browser tab next to your prompt, making it easy to tweak and iterate using natural language. I really encourage you to try it out; it is, if nothing else, a whole lot of fun.
For instance, I’ve lately been playing a lot with different color combinations, as I’m in the process of designing the cover of a book I’ve co-authored with Bob Johansen and Gabe Cervantes (called Leaders Make the Future, coming out in March 2025 — you’ll be hearing more about this, don’t worry). I went into Claude and typed in the prompt:
“Create an interactive app to explore different colorways interactively. The swatches should all be displayed next to each other and switch around randomly depending on my selection. Both the position of the colors and the overall color options should change each interaction.”
From that description, it created this interactive, shareable little app for combining and comparing colors. If you want to copy and edit it, you can just click ‘Remix Artifact’, which is a nice feature.
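To give a sense of what the model has to come up with from a prompt like that, here is a minimal sketch of the core palette logic such an app needs. This is not Claude’s actual output (the real Artifact is a full web UI); the function names and the five-swatch default are my own illustrative choices.

```javascript
// Sketch of the palette logic behind a "random colorways" app:
// generate a set of random hex swatches, then reshuffle their positions.

// Pick one random hex color, e.g. "#0f4c81".
function randomColor() {
  const n = Math.floor(Math.random() * 0x1000000); // 0 .. 0xffffff
  return "#" + n.toString(16).padStart(6, "0");
}

// Build a palette of `size` random swatches (size is an arbitrary default).
function randomPalette(size = 5) {
  return Array.from({ length: size }, randomColor);
}

// Reshuffle swatch positions with a Fisher-Yates shuffle,
// returning a new array so the original palette is untouched.
function shuffle(palette) {
  const copy = [...palette];
  for (let i = copy.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy;
}

// Each "interaction" in the prompt maps to one of these calls:
// a fresh randomPalette() for new color options, shuffle() for new positions.
console.log(randomPalette());
```

In the actual Artifact, each click would call `randomPalette` or `shuffle` and re-render the swatches; the point is that the model has to infer even this simple state model from a two-sentence natural-language description.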
There are lots of really good things to say about Artifacts (it is one of the few features that really start to move beyond a “chat” or “autocomplete” paradigm in a compelling way), but that is not my point in bringing it up. What I actually want to talk about is the process of using code, sharing it with others, and actually putting it out on the web as something that works, which could very broadly be called “deployment.”
So right now, you can throw something decent together with no coding knowledge; something that might just be fun to play around with and share with friends. But to actually get a web app online and working correctly, you need to know how to debug the code and where you want to host it (services like Vercel are great for this), and you’ll probably want to make some UX decisions if you want people to actually use what you make.
Currently there’s no generative AI solution for this: you have to go through and manually check everything and then put it online. There are also other considerations — does whatever app you’ve made require a user login? That means connecting a service like Auth0, which you can’t (yet) tell a chatbot to do magically on your behalf.
To add another layer onto this already tall and wonky cake, we’re also in a very interesting place with the creation of AI tooling: because of the capabilities chatbots afford us, individuals and organisations have become very interested in building their own retrieval systems. The excitement around building bespoke AI tools reminds me of building websites in the 90s, when websites had only just become a thing: it was expensive and time-consuming, and you had to outsource it because it required specialist knowledge. Then, through lots of learning and doing, design patterns emerged, the idea of ‘a website’ crystallised, and we ended up with services like Squarespace and WordPress, which really lowered the barrier.
With gen AI, we’re kind of swimming in this wide and open ocean of possibility, wondering when we’re going to reach our version of Squarespace (which may never happen anyway!). Websites had to exist for years and years before website builders really became a viable consumer product. There had to be stronger consensus on what websites were for, what kinds of interactions were expected, and what the UX would look like — e.g. having a nav bar at the top, or hidden behind a hamburger, or knowing when a link will or won’t open a new tab.
We’ve barely begun to scratch the surface of what’s possible now that we have generative AI; as with websites in the 90s, we’re kind of just throwing stuff at the wall and seeing what sticks. From the perspective of running a generative AI agency, the need for bespoke generative AI tooling is high enough that people are going out and trying to build it themselves, but they generally hit a wall for lack of expertise. That’s usually where we come in and build out the system for them, just as we did with the chatbot on give.org.
I think it’s interesting and exciting that we’re in a space now where you can see your ideas come to life very quickly (just as above with the Flappy Bird clone on Artifacts) and iterate on them in real time. It kind of closes that gap between building on top of existing paradigms — such as what a website is ‘meant’ to look and feel like — and deciding that actually, it’s time for a new paradigm.
Love the realism here. I'm inspired to try out Claude's artifacts. What a fun little app!