
Beyond the Algorithm
This interview with Tim Belonax, brand designer at Anthropic, explores Anthropic’s approach to AI development.
Spring 2025
STORY IS BREWING…
Anthropic and its creation: 'Claude'
What is Claude and Anthropic? What do they do?
When the tech world zigged toward faster, flashier AI, Anthropic zagged. Born from a split with OpenAI, this band of researchers wasn’t interested in the sprint to build the smartest machine—they wanted to build the safest one.
Their AI assistant Claude isn’t trying to pass as human. Look closely at its branding: hand-drawn sketches, earthy colors they call “Claude Clay.” These aren’t random choices. While Silicon Valley churns out sleek, cold interfaces, Anthropic deliberately keeps one foot planted in the human world. In an industry obsessed with moving fast and breaking things, Anthropic is the rare company saying “let’s slow down and get this right.”
As AI keeps reshaping our world, Anthropic is betting that careful stewardship beats technological swagger. They’re not just building AI; they’re trying to make sure AI’s future includes us.
In this interview, Yoon and Tim dive into the heart of Anthropic’s design philosophy and its AI assistant Claude. They explore how this unique company is hitting the brakes on the AI race to focus on something bigger: making sure tech serves humans, not the other way around.
Who is Tim Belonax?
Tim Belonax is a designer, educator, and lecturer with international recognition for his work in branding, typography, publication, and environmental design. At Anthropic, he leads brand design efforts for Claude, crafting visual narratives that communicate AI safety and research principles to the public. With experience at companies like Pinterest, Airbnb, and Patreon, Belonax brings a unique perspective to the challenge of making abstract AI concepts more approachable.
Personal Perspective on AI
I’m curious about how you thought of AI before joining Anthropic and if it changed during your time working there.
I clearly didn’t know as much as I know now. I had an inkling of it early on. I worked on a project for a place called the Singularity Institute, which was all about reaching the point where technology becomes smarter than humans. I was aware of the idea, but I didn’t take it as seriously two decades ago.
Now that we’re getting closer to that, I think I have a better understanding of the consequences, the nuances, and why safety is so important. Going into it, I hadn’t thought of the myriad ways that things could go wrong or right, and what other futures are therefore possible when this happens. We’re on the cusp of that, and it’s exciting but also terrifying.
Then where do you see the AI world and Anthropic in five years?
Oh, I’m the wrong person to ask. A good primer for that is “Machines of Loving Grace,” which our CEO wrote. I would say read up on that; I think it’s a good way to start thinking about where things will be in a few years.
1. “Machines of Loving Grace” explores whether technology will replace or enhance humans, tracing this divide through computing history.
2. Anthropic’s “Constitutional AI” embeds ethical principles directly into AI systems, ensuring they remain aligned with human values even as capabilities expand.
Anthropic's Mission and Approach
Anthropic has now been a team for about two years. What is Anthropic trying to achieve through a creative design team and design thinking?
Everybody is working on the same mission, which is to help humanity move through this transformative time with AI. Within design, there are a variety of components that we work on more closely. In product design, we’re working on the actual products: that could be our console product, or how the API integrates into things. On the brand side, we are shaping the brand of Anthropic and Claude and telling stories about these entities, helping people understand them through educational documentation and videos.
We really like to think of the company as a three-legged stool, focused on research, policy, and products. What we’ve noticed is that you can have tremendous research, which the company was founded on, but unless it is applied, it’s just theoretical. That’s why the product is really important. Policymakers won’t listen to you unless you have some sort of proof of concept, so product also helps hold things up. But you also can’t effect real change without policy around something, so policy needs to exist. And AI can’t be safe, and it can’t be as intelligent as it is, without great research. All of those things need to do their equal parts.
Is the brand team responsible for ensuring that all three of those work harmoniously together?
It is about telling a story of all three of those and tracking how well the brand is developing: how many people know about us, what their opinion is, whether it’s going the way that we want it to, what we need to do to change course, things like that.