Hello! We just finished up a huge project with give.org. We built a chatbot for them that helps you decide who you should donate money to based on what you care about — we’ll let you know when it’s up and available to use. A lot of work went into this, so we’re super excited to share this with you… soon.
Some other stuff:
Jeremy gave a short talk recently about how to design gen AI apps, using the give.org work as an example. Watch it now or keep reading for more about that.
Jeremy was interviewed on give.org’s podcast, where he discusses the tool we built for them.
He was also interviewed by Kriterion, focusing on AI & business.
FYI — we are revamping our online course
I don’t usually like to use this newsletter to do sales, but I thought I’d just let you all know that we’ve made some huge updates to our course, and it starts again on July 9th.
Until now, our course has very much been focused on demonstrating what you can achieve when you go beyond ChatGPT and really get into the weeds with generative AI tooling, even if you don’t have a technical background. The course was written in response to the huge shifts in generative AI back in 2022.
In 2024, things are shifting again — we have rewritten the course to focus much more on AI Agent Design. As a generative AI consultancy, we can very much see how agentic systems will, very soon, radically change the way we think about and design online interactions. I wrote about this in more detail last month, in fact.
We really want our students to get ahead of this stuff instead of just watching things change around them! If this is of any interest to you, there’s actually a 25% discount on the course right now (because we are one of the top-rated courses on the platform… no big deal). The discount is only running until June 9th!
What’s involved in designing gen AI apps for clients?
I recently gave a lightning talk on Maven about this very thing — in this talk I briefly explain the process we went through when working on the chatbot for give.org. You can watch the recording here (the talk is only 20 minutes long, with around 10 minutes of Q&A), and below is a brief summary of the key points I made.
Co-design product requirements with your clients
Building bespoke software is difficult anyway, but with generative AI the possibilities seem infinite, so there may be some wild expectations coming from your client, e.g. ‘can we build something that takes in ALL our data?’ Part of the work in this initial stage is educating your client on what’s possible. Luckily, the give.org team came in with quite a bit of background knowledge already, but it still takes work to narrow that ocean of possibilities down to a single approach.
The problem give.org brought to us was helping people interact with all the content that exists outside of their charity reports: podcasts, blog posts, and so on. We had to make a bunch of decisions about what that would look like. Would the system use speech or text? Would it work with natural language data or tabular data?
In many ways, this initial stage is the hardest part — and a lot of the time people do converge on a form factor of a chatbot that uses retrieval; but not all RAG chatbots are the same!
Get more granular: what kinds of inputs and outputs do we want for this chatbot?
Next we had to decide what happens on the backend when the user gives this chat interface an input. You cannot design this with a single prompt, like you might when building a custom GPT with OpenAI. With give.org, we started by mapping it out like this:
The yellow boxes represent different prompts, which are selected depending on the user’s input; these prompts lead to the black diamonds, which are queries. The queries are then used to retrieve data from the correct place. Because there are multiple data sources to retrieve from and multiple kinds of user inputs (from asking a complex question to just saying ‘hello’), you can see how this can quickly become quite complicated to map out.
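The flow above can be sketched in code. This is a toy illustration of the routing idea, not the actual give.org system: the templates, source names, and the keyword-based `classify` function are all stand-ins (in a real pipeline, classification would itself be an LLM call, and retrieval would hit an embedding index).

```python
# Toy sketch of routing a user input to a prompt (the "yellow boxes")
# and, when needed, a retrieval query (the "black diamonds").
# All names here are illustrative placeholders.

PROMPT_TEMPLATES = {
    "greeting": "Reply warmly and briefly. The user said: {text}",
    "question": (
        "Answer using the retrieved passages below.\n"
        "{passages}\n\nQuestion: {text}"
    ),
}


def classify(text: str) -> str:
    """Stand-in for an LLM intent classifier."""
    if text.strip().lower() in {"hi", "hello", "hey"}:
        return "greeting"
    return "question"


def route(text: str) -> dict:
    """Map a user input to a prompt template and (optionally) a query."""
    intent = classify(text)
    result = {"intent": intent, "template": PROMPT_TEMPLATES[intent]}
    if intent == "question":
        # Pick a retrieval source; a real system would use embeddings here.
        source = "podcast_index" if "podcast" in text.lower() else "blog_index"
        result["query"] = {"source": source, "terms": text}
    return result
```

Note that a plain ‘hello’ never triggers retrieval at all, while a question gets both a different prompt template and a query aimed at a specific data source; multiplying out intents and sources is what makes the full map grow so quickly.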
Then you need to think about LUI/LUX
LUI and LUX refer to Language User Interface/Experience. So, in the case of a chatbot, this refers to what it feels like to talk to it. What kind of tone does it have? Is it warm and friendly or cold and concise? Does it talk to you like a colleague at a party or like… a cat? In give.org’s case, they wanted something that didn’t sound like your run of the mill customer service assistant. They wanted it to feel warm, compassionate, and unpretentious, along with a lot of other details. That may sound obvious, but it isn’t. In other cases you might want it to be authoritative, direct, and concise. What you choose depends on many factors, including your use case, your client’s brand, and your user base.
If you want to learn more, watch the talk! We’re very proud of how this product has turned out, and we’re excited to share it with the public very soon!
Escape into your infinite future
We’ve been tinkering with how to create fully generative websites. Can AI models create not just text and imagery, but the computer code that structures how that content is displayed? Can they go further and create their own interactivity? This fun little demo explores that: it’s a future forecaster tool that creates science fiction vignettes. Put anything in the input box, and it will generate an image, write a short story about the future, and create the format and style with HTML and CSS. Right now, this mostly means that your sci-fi story ends up looking like it’s on MySpace.

In the next few years, however, this kind of generativity will transform the way websites work. Instead of a single website with static functions, websites will be written from scratch every time a user interacts with them, for their exact context and use case, and the code will just get thrown out afterward. UIs will evolve to fit what they intuit the user needing.

In this mini-experience, the generative UI is actually not what I’m most proud of. What I am pleased with is the language style. I think I managed to scrub out that beige, overly polite AI “smell” you often get. Let me know what you think!
Georgia and I tried to make a Magic: The Gathering card generator and uh…
…it didn’t go super well. Consider this a lesson in the perils of no-code tools. With these videos, our challenge is to make something half-decent with generative AI in under two hours. Sometimes that means using other tools to build UIs that wrap around whatever we’re making, and sometimes those tools are user-unfriendly, expensive, and upsetting.
This video turned out to oscillate between funny, useful, and mind-boggling. So, not all that different from the usual. Enjoy!