Cloudflare's November 18 Outage – A Continuous Delivery Perspective

On November 18th, 2025, Cloudflare had what they describe as their worst outage since 2019.

It didn’t start with a cyber attack or massive hardware failure. It started with a database permissions change.

This is exactly the sort of thing Continuous Delivery is supposed to make safe. So now that the post-mortem is out, I want to analyze it through a CI/CD lens.

Summary of what happened

Around 11:20 UTC, Cloudflare’s network started returning a huge spike of HTTP 5xx errors for traffic flowing through their core network. Users all over the Internet started seeing Cloudflare error pages instead of the sites they were trying to visit.

The root cause was a change to how one of their ClickHouse database clusters handled permissions and metadata.

Here’s the rough chain of events:

  1. Cloudflare deployed a change to ClickHouse permissions and metadata visibility. That change meant queries against system metadata suddenly started returning more rows than before — including underlying tables in another schema (r0) where previously they only saw the default database.
  2. One of the systems that depended on that metadata was Cloudflare’s Bot Management feature file generator. It queried system.columns to assemble the list of features used by a machine learning model to score bots. That query didn’t filter by database name (see the sketch just after this list), so after the change it suddenly saw roughly double the number of columns and produced a much larger feature file.
  3. That feature file is shipped out to Cloudflare’s edge network every few minutes. The bots module in their core proxy expects the number of features to be below a hard limit (200) and preallocates memory accordingly. The new, larger file exceeded that limit. The Rust code hit the limit and panicked, leading to 5xx responses from the core proxy.
  4. Because this file is generated every five minutes from a distributed ClickHouse cluster that was being gradually updated, the system actually went up and down for a bit: sometimes a good config, sometimes a bad one, depending on which node produced it. Eventually all nodes produced the bad file, and the failure became stable.
  5. Downstream systems that depend on the core proxy – things like Workers KV and Access – were also impacted until they were temporarily rerouted or bypassed.
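
To make step 2 concrete, here is a rough sketch of the metadata query involved. This is my reconstruction based on the post-mortem’s description, not Cloudflare’s actual code, and the table name is illustrative:

```rust
// Illustrative only: the shape of the feature-generator's metadata query.
// Before the permissions change, the unfiltered query returned one row per
// feature column in the default database. Afterwards it also returned rows
// for the underlying r0 schema, roughly doubling the feature list.
const FEATURE_QUERY_UNFILTERED: &str =
    "SELECT name, type FROM system.columns \
     WHERE table = 'http_requests_features' \
     ORDER BY name";

// The defensive version: scope the query to the schema you actually mean, so
// broader metadata visibility cannot silently change the output.
const FEATURE_QUERY_FILTERED: &str =
    "SELECT name, type FROM system.columns \
     WHERE database = 'default' AND table = 'http_requests_features' \
     ORDER BY name";
```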

Main impact started around 11:28, substantial mitigation began around 14:30, and everything was fully resolved by 17:06 UTC.

So: a change to database permissions → bigger config file → hard limit exceeded → proxy panics → global outage.

That’s a change management story.

This is why Continuous Delivery exists

The primary cause of production failures is change. Continuous delivery is about making change boring.

If your system is updated frequently then you must assume that any change is guilty until proven innocent.

From a CD perspective, a few big themes jump out:

1. Config is code

The feature file that broke things isn’t harmless data; it is executable behaviour. It changes what the bot-scoring model does, and it clearly has operational consequences. It should be treated with the same discipline as application code:

  • Stored and versioned in source control
  • Validated in pipelines
  • Subject to contract tests with the consumers (the proxy module)

2. Contracts between systems

There is an implicit contract here:

  • Producer: “I will generate a feature file.”
  • Consumer: “I can safely handle up to 200 features.”

That contract wasn’t explicitly enforced at integration boundaries. The producer didn’t know that exceeding 200 features would crash the consumer, and the consumer effectively said: “If this assumption is broken, I will simply panic.”

In a good CD setup, we encode these contracts as tests:

  • At build time: “If the feature file has > 200 features, the pipeline fails.”
  • Or, at runtime: “If it’s too big, log, drop extra features, and degrade gracefully instead of crashing.”
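
Here is a sketch of what encoding that contract could look like: the consumer’s limit lives in one shared module, and the producer validates every generated file against it before anything ships. All the names (bot_contract, validate_feature_file) are hypothetical:

```rust
/// Shared between producer and consumer, so there is exactly one source of
/// truth for the limit. (Hypothetical module, not Cloudflare's code.)
pub mod bot_contract {
    /// The proxy preallocates memory for at most this many features.
    pub const MAX_FEATURES: usize = 200;
}

/// Producer-side check: run in CI against realistic metadata, and again
/// against every generated file before it is pushed to the edge.
pub fn validate_feature_file(feature_count: usize) -> Result<(), String> {
    if feature_count > bot_contract::MAX_FEATURES {
        return Err(format!(
            "feature file has {} features, contract allows at most {}",
            feature_count,
            bot_contract::MAX_FEATURES
        ));
    }
    Ok(())
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn oversized_file_fails_the_pipeline() {
        assert!(validate_feature_file(bot_contract::MAX_FEATURES + 1).is_err());
        assert!(validate_feature_file(bot_contract::MAX_FEATURES).is_ok());
    }
}
```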

3. Fast feedback tied to change events

When production starts returning loads of 5xxs right after a change, your first hypothesis should almost always be: “We broke it.”

Cloudflare initially suspected a large DDoS attack, partly because their off-platform status page also went down by coincidence, which complicated the diagnosis.

A strong CD/operational posture ties monitoring directly to deployments:

  • “We deployed change X to ClickHouse at 11:05.”
  • “At 11:20, 5xxs spike globally.”
  • The system should scream: “Roll back that change first, then keep investigating.”
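
This correlation doesn’t need fancy tooling to start paying off. Even a trivial lookup of “what changed just before this alert fired” puts the right hypothesis in front of whoever is on call. A toy sketch, with made-up types and data matching the timeline above:

```rust
// Toy sketch: given recent change events and the time an alert fires, surface
// the changes that immediately preceded it as rollback candidates.
struct ChangeEvent {
    name: &'static str,
    minutes_utc: u32, // minutes since midnight UTC, e.g. 11:05 -> 665
}

fn rollback_candidates(changes: &[ChangeEvent], alert_at: u32, window: u32) -> Vec<&'static str> {
    changes
        .iter()
        .filter(|c| c.minutes_utc <= alert_at && alert_at - c.minutes_utc <= window)
        .map(|c| c.name)
        .collect()
}

fn main() {
    let changes = [ChangeEvent { name: "clickhouse-permissions-change", minutes_utc: 665 }];
    // Global 5xx spike at 11:20 UTC; look back 30 minutes.
    let suspects = rollback_candidates(&changes, 680, 30);
    println!("roll back first, then keep investigating: {:?}", suspects);
}
```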

To their credit, they did eventually correct course, stop propagation of the bad file, push out a good one, and restart the proxies.

Where CI/CD could have helped more

Let’s talk about specific points where mature CI/CD practices can help avoid or limit this sort of outage.

1. Test the config generator like production code

The change that triggered this was essentially a behaviour change in the query used to build the feature file.

In a CD world, we’d treat that as an ordinary code change and ask:

  • Do we have automated tests for the query behaviour?
  • Do we have tests that assert the range of output sizes we consider safe?
  • Do we have a test environment where a realistic ClickHouse cluster – including these permission changes – is exercised before we touch production?

A property test like:

Given realistic metadata, the resulting feature file must have ≤ 200 features, or the build fails

…would have caught this before it ever reached live traffic.
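
Here’s a minimal sketch of that property test using Rust’s proptest crate. build_feature_file stands in for the real generator, which queries ClickHouse; the interesting part is that the limit is enforced and tested where the file is produced, not where it is consumed:

```rust
use proptest::prelude::*;

const MAX_FEATURES: usize = 200; // the consumer's hard limit

/// Hypothetical generator: turns (database, column) metadata rows into a
/// feature list, and refuses to emit a file the proxy could not load.
fn build_feature_file(rows: &[(String, String)]) -> Result<Vec<String>, String> {
    let features: Vec<String> = rows
        .iter()
        .filter(|(db, _)| db.as_str() == "default") // only the schema we intend to read
        .map(|(_, col)| col.clone())
        .collect();
    if features.len() > MAX_FEATURES {
        return Err(format!("{} features exceeds the limit of {}", features.len(), MAX_FEATURES));
    }
    Ok(features)
}

proptest! {
    // Whatever metadata the database returns (including duplicate rows from an
    // extra schema like r0), the generator never emits an oversized file. A
    // generator missing the filter and the limit check fails this in CI, long
    // before a proxy panics in production.
    #[test]
    fn generator_never_emits_oversized_file(
        rows in proptest::collection::vec(("(default|r0)", "[a-z_]{1,24}"), 0..500)
    ) {
        if let Ok(features) = build_feature_file(&rows) {
            prop_assert!(features.len() <= MAX_FEATURES);
        }
    }
}
```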

2. Practice progressive delivery for everything

Cloudflare already had a gradual rollout of permissions across the ClickHouse cluster, but the effect was that the system went up and down: sometimes a good config, sometimes a bad one.

From a CD perspective, you want controlled blast radius:

  • Ship new behaviour to a tiny slice of traffic or a subset of regions first.
  • Observe: “Did the new feature file cause any spike in 5xxs, latency, or resource usage?”
  • Only then ramp up.

Instead of having the feature file immediately pushed to the entire network every five minutes, imagine:

  • A canary group of edge nodes is updated first.
  • If they see the bot module panic or error rates spike, an automated system:
    • Rolls back to the last known good config file.
    • Blocks further rollout.
    • Raises an incident with a clear “config rollout blocked” signal.

That’s progressive delivery applied not just to code, but to ML feature sets and configuration.
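
Here is a sketch of what that automated gate might look like. NodeGroup, push_config, and the error-rate threshold are all made up; the point is the shape of the control loop, not any real Cloudflare or Semaphore API:

```rust
// Hypothetical types standing in for real deployment and observability APIs.
struct NodeGroup {
    name: &'static str,
}

impl NodeGroup {
    fn push_config(&self, _config: &[u8]) {
        // deploy the new feature file to this group of edge nodes
    }
    fn rollback(&self) {
        // restore the last known good feature file
    }
    fn error_rate(&self) -> f64 {
        // in reality: query the 5xx rate / panic count for this group
        0.001
    }
}

/// Push to a canary group first; only ramp up if the canary stays healthy.
fn roll_out(config: &[u8], canary: &NodeGroup, rest: &[NodeGroup]) -> Result<(), String> {
    canary.push_config(config);
    // In reality you would wait and sample repeatedly before deciding.
    if canary.error_rate() > 0.01 {
        canary.rollback();
        return Err(format!("config rollout blocked: error spike in {}", canary.name));
    }
    for group in rest {
        group.push_config(config);
    }
    Ok(())
}
```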

3. Design for graceful degradation, not panic

The Rust code in the affected FL2 proxy is essentially designed to “panic” (as described in the post-mortem) if it gets more than 200 features.

From a resilience standpoint, that’s exactly what we don’t want. In a world of continuous delivery and constant change, you assume your assumptions will be broken.

Better options might include:

  • Drop any features over the limit and log loudly.
  • Disable the Bot Management module temporarily and continue forwarding traffic, maybe treating everything as “unknown bot score” rather than bringing down the proxy.
  • Trip a feature kill-switch that turns off Bot Management globally while keeping the core CDN and proxy path alive.
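
Here is a sketch of the same loading step written both ways. The limit, the Feature type, and the function names are illustrative, not the actual FL2 code:

```rust
const FEATURE_LIMIT: usize = 200;

struct Feature; // stand-in for a real bot-scoring feature definition

// The brittle version: any unexpected file size takes the whole proxy down.
fn load_features_strict(features: Vec<Feature>) -> Vec<Feature> {
    assert!(features.len() <= FEATURE_LIMIT, "too many features"); // panics -> 5xx
    features
}

// The degraded-but-alive version: keep serving traffic, lose some bot-scoring
// fidelity, and make the problem loudly visible to operators.
fn load_features_graceful(mut features: Vec<Feature>) -> Vec<Feature> {
    if features.len() > FEATURE_LIMIT {
        eprintln!(
            "feature file has {} features, truncating to {}; bot scores degraded",
            features.len(),
            FEATURE_LIMIT
        );
        features.truncate(FEATURE_LIMIT);
    }
    features
}
```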

Cloudflare’s own remediation list mentions:

  • Hardening ingestion of internal configuration files as if they were user input.
  • More global kill switches for features.
  • Reviewing error handling across core modules.

That’s exactly the direction you’d expect from a team embracing CD and resilience engineering.

4. CI as safety net for “platform-ish” changes

One subtle thing here: this wasn’t a direct change to Bot Management. It was a change to how the database platform handled permissions and metadata.

In many organisations, platform changes are treated as “infra stuff”, not subject to the same product-level tests. But in a CD culture:

  • Platform changes go through pipelines that also run representative workloads and integration tests for the services that depend on them.
  • That ClickHouse permission change should have run:
    • A compatibility test suite for all consumers that query system.columns.
    • Specific tests for the Bot Management feature generator pipeline.

If you can’t do that comprehensively, you can at least start with the critical systems: anything that can bring down your main proxy path should have extremely strong automated protection.

Good engineering is about turning failure into fast, safe learning

Cloudflare did the right thing by publishing a detailed post-mortem, expressing engineering humility, and laying out concrete follow-up steps.

Everyone running large-scale systems has failures. What distinguishes good engineering organizations is:

  • They treat incidents as opportunities to improve the system, not to blame individuals.
  • They share what they learned.
  • They make structural changes – to code, to pipelines, to architecture, to daily practices.

From a continuous delivery perspective, the lessons I’d highlight are:

  1. Every change is potentially dangerous. CD is about making lots of small changes safe, not moving fast and ignoring risk.
  2. Config, queries, and ML features are code. They need the same CI/CD discipline: tests, contracts, and progressive rollout.
  3. Design for graceful failure. When – not if! – your assumptions are broken, the system should bend, not snap.
  4. Tie observability tightly to deployments. If you’ve just changed something and the world is on fire, suspect your change first.

Cloudflare’s outage is painful for them and for a lot of the internet. But it’s also a rich example of why we do continuous delivery in the first place.

If It Hurts, Do It More Often

A common saying from the culture of Continuous Delivery is “If it hurts, do it more often”.

When something hurts, lean into it. Discomfort is the teacher that forces you to adapt until it disappears.

Let’s say deploying is painful because:

  • The process is manual
  • Only one person can do it
  • If it goes wrong, nobody else could fix it

You can solve it by:

  • Automating the deployment
  • Defining a task on Semaphore to make deploying and rolling back a matter of pushing a button
  • Giving other engineers access to Semaphore workflows and production logs

This is of course a simplified example. “Automating the deployment” may itself be a gargantuan task. Good. You decompose big problems into small problems. Go step by step.

The same logic applies when the pain comes from the world having changed. Today development teams are going through a radical transformation with the use of AI tools. The discomfort is real. As Tom Blomfield recently tweeted (emphasis mine):

Hearing from a lot of good founders that AI tools are writing most of their code now. Software engineers orchestrate the AI.

They are also finding it extremely hard to hire because most experienced engineers have their heads in the sand and refuse to learn the latest tools.

I get it. You need to shift from writing code by hand—a big part of your identity—to curating and reviewing what the AI agent produces. And you need new tooling and habits so agents can run without breaking anything.

So lean into it:

  • Start with bug fixes and chores that bore you to death
  • Write prompts, specs, and checks the agent must pass before it lands in review
  • Pair with teammates so you spot patterns together
  • Update your AGENTS.md so your prompts stay DRY and everyone shares the same playbook
  • Share lessons with the team in an open chat room and weekly meetings
  • Celebrate small wins, then ramp up the tasks you hand to agents

Whatever hurts is a signal that there is a mission you need to complete in order to level up.

Operately 1.0 →

Today we’re releasing Operately 1.0. Most open source products have a bad UX. At Operately we have set a higher standard.

For example, the Work Map is our third design solution for visualizing goals and projects. Sometimes you need a few bad takes before you nail it.

Would love to hear your thoughts if you’re looking to replace duct-taped Notion documents or legacy project management tools at your company. Drop me a DM on any of the socials linked below with any feedback or ideas as you try it out.

This is just the beginning. The best is yet to come.

AI-Generated Personal Message Equals Trash

A paper from Harvard Business School (PDF) provides empirical evidence for something we all knew: people dismiss AI-generated personal messages at best, and consider them offensive at worst.

People could barely tell AI from human messages (59% accuracy), but when they thought something was AI-generated, they rated it as less helpful even when a human actually wrote it.

What’s also interesting is that the harder the AI tried to sound human, the stronger people’s aversion became when they detected it.

I believe that most managers will eventually have an AI avatar that people can talk to when they’re not around. But using AI to generate a message to send to someone else is a different matter.

There are plenty of ways to leverage AI for communication and decision making. Use AI to brainstorm counter-arguments, detect blind spots, research background, or polish your draft - but the ideas and final message must be yours. You must stand 100% behind them.

People don’t just read your words. The basic currency between humans is trust. So even when they’re not consciously thinking about it, every time you talk to someone they’re deciding whether to trust you.

Use AI to think better, not to think for you. Because the moment you stop owning your words, you stop being worth listening to.

Easter walk

Peaks calling

A blue dot underneath the rings of Saturn

The Day the Earth Smiled is a composite photograph taken by the NASA spacecraft Cassini on July 19, 2013.

How Channable Made Deploying Often Easy →

This 250+ person tech company deploys 50+ times daily with zero drama:

At various times, CI was not fast enough, in response to which we did various optimizations. Very aggressive caching played a large role. We also made sure we only tested code that actually changed. In the early days of our CI adventure we switched CI providers twice before settling on Semaphore for performance and ergonomics reasons.

It’s incredibly rewarding to see how companies like Channable use Semaphore to power their development workflow. Their journey shows exactly why we built Semaphore with performance and developer experience in mind.

  • Developers can get new features to production with just a few simple steps
  • The entire deployment process requires minimal human intervention
  • Code changes move from branch to production safely and confidently

The business impact is clear - faster innovation, happier developers, and a better product for customers. This is what happens when technical excellence meets thoughtful process design.

A Checklist for Cofounder Happiness

Successful companies are built on strong cofounder relationships. These partnerships don’t happen by accident - they require these foundations:

  1. 100% mutual trust
  2. 0% ego - leave it at the door
  3. Shared vision for the company’s future
  4. Full transparency
  5. Honest conversations
  6. Having uncomfortable conversations early
  7. Recognizing and valuing your differences
  8. Mutually agreed expectations
  9. Written agreements for clarity, not just handshakes
  10. Consistently delivering on your commitments
  11. Clear decision-making process for disagreements
  12. Flexible roles that evolve based on company needs
  13. Acknowledgment of each other’s wins and contributions
  14. Aligned personal timelines
  15. Aligned financial needs
  16. Time apart to recharge and maintain perspective
  17. Remember why you started this journey together

Just Do Things

You can just do things is the best startup meme in a long time.

Action produces information. When you do things, things happen. Or not.

Doing things shows you things you didn’t know. Motion creates clarity. So you can learn things to do more of the right things and less of the wrong things.

The world doesn’t belong to people who keep planning things because they rarely accomplish things.

It certainly doesn’t belong to people who have opinions about things.

The world belongs to those who do things.

Don't Guessbuild

One of the principles in product design that we’ve established at Operately is to avoid guessbuilding.

We stop the design of a new feature at the edge where certainty ends and speculation begins.

The pattern we’ve noticed is this: we start by solving specific problems that either we or our users are experiencing. As we analyze the problems, we start seeing patterns that shape the general solution.

This is where things get dangerous. The general solution opens up possibilities for handling hypothetical use cases that we haven’t actually encountered yet.

This isn’t new. In software development, people who practiced Extreme Programming in the mid 90s coined the acronym YAGNI – you ain’t gonna need it.

Don’t be a fool and spend time programming capabilities that you presume your software will need in the future — because it most likely won’t.

Likewise, we aim to draw a clear line between:

  • What we know our users need (backed by direct feedback or usage data)
  • What we think our users might need (speculative features)

When these phrases start appearing in our discussions, it’s usually a sign that we’re guessbuilding:

  • “This could be useful if…”
  • “Users might want to…”
  • “What if someone needs to…”
  • “We should make this flexible enough to handle…”

So we stop there to build confidently for the known needs, ship that solution, and move on to another area of our product. The new solution needs to simmer long enough for new evidence to emerge that clearly indicates what we should build next.

An example

Recently we were discussing how to approach removing goals. Currently, you can close a goal and mark it as accomplished or not. But that action doesn’t feel appropriate when you want to stop working on the wrong goal, or when, as a new user, you’re just exploring with imaginary data.

So we started with an idea that we need to support both archiving and deleting. Archiving would hide dropped initiatives. Deleting would completely get rid of stuff. But who actually asked for archiving? Just because archiving is a common concept doesn’t mean we have to apply it. A goal isn’t stateless like a document or a workspace. It has a natural lifecycle—it can be closed. It doesn’t need archiving. So we decided to just do deleting.

Looking more closely, deleting can be implemented in several ways. We can just wipe all the data. But in a collaborative business app, the content you create belongs to the team as much as it does to you. Is wiping the data too destructive? We could nullify the data and leave traces that something existed. Also, what about the associated sub-activities, like projects and sub-goals? What should they point to? Or should they be wiped out too?

As we addressed these questions, we realized we went too far and said—let’s not guessbuild. Let’s simply do the thing that we’re sure is needed: users who make a mistake or fool around and want to start fresh should be able to delete a goal and not see it again. So we’re going to allow just that—with a clear warning of implications. And if a goal has any sub-activities, we’re going to politely ask the user to delete those first. We’ll be happy to make it more nuanced and complicated if and when we see strong feedback pointing in another direction.

Semaphore's Open Source Repo →

As announced, Semaphore has open sourced its CI/CD platform under Apache 2.0 in a new GitHub repository: semaphoreio/semaphore.

Here’s what you can do with it:

  • Star the repo as a way to support us (2 seconds)
  • Self-host a webscale CI/CD platform (less than 30 minutes)
  • Contribute to the codebase and help make it even better (infinite fun)

We also have a new website domain: semaphore.io.

This is the biggest news we’ve released since launching Semaphore 2.0 in 2018. We’re super excited to continue building the best CI/CD platform in the world with the community. Marathon, not a sprint.

How to Come Up With New Features That People Want

I opened r/founder while waiting for my Magic Mouse to charge to 2%, started writing a reply and a few minutes later realized I was writing a blog post. So here goes a slightly extended version.

A founder who built and launched products before but never hit product-market fit asked:

How do you come up with new features? Purely from user requests, or do you pitch ideas and let users upvote?

Being data-driven sounds compelling, but you won’t build a great product through feature requests or metrics. So what works? Getting super deep into your space, actually watching users struggle (painful but worth it), and having the wisdom and guts to decide which problems are actually worth solving vs which ones are just noise.

First of all, build in a domain you deeply understand. Start with a clear vision of what you want your product to accomplish for its users. You need to believe in your bones that this is worth spending your next 5-10 years on.

The first versions of your product will cover only a small percentage of that vision at best. From there, you are responsible for determining the optimal path to filling the gaps.

You don’t need users to explicitly request features (although they will) — you need to learn from them about what issues they’re running into, what’s stopping them from accomplishing their goals, and what workarounds they’re creating.

Spend time actually watching people use your product in their natural environment, not just in artificial hypothetical scenarios. Video calls will be fine, as long as you don’t tell people where to click and what to do. You’ll see firsthand what frustrates them and which parts of the product they understood completely differently from how you intended.

Features aren’t solutions, they’re responses to problems. The best features come from asking “why?” repeatedly when users tell you what they want. For example, they might ask for a dashboard, but what they really need is confidence their work is on track. In reality you can rarely ask humans “why?” five times. Instead, again, it is your job to think deeply enough and figure it out.

You don’t need formal user voting systems. Simply track how frequently something is requested and multiply by the impact it would have. But be careful about using this as the foundation for what you do—sometimes the most valuable improvements are ones nobody asks for because users don’t know they’re possible. This doesn’t mean they think it’s technically impossible, just that they consider it very unlikely you will actually implement it, so they never mention it.

Always, always, listen to your users. Sometimes you need to do exactly what they say. But often, what they say is a second derivative of a true problem at best.

You can’t automate or outsource judgment.

Making a Full-Content RSS Feed in Astro

I spent more time than I’d like to admit implementing the RSS feed for this blog. Creating a pure text RSS feed with Astro, my favorite website framework, for a Markdown content collection is straightforward. However, creating a feed that includes arbitrary MDX and renders URLs of build-time optimized images proved challenging. Having a full-content feed is free internet 101, so I couldn’t let it go. After opening an issue on GitHub and talking to some kind people on Astro Discord, I figured it out.

This was also a textbook example of a junior-level challenge that you can’t just brute force your way through with AI prompting. The solution is non-linear: it starts from the outside-the-box idea that instead of messing around with pre-processing, you need to render posts headlessly before making further adjustments, which is exactly what the experimental Astro Container API is for.

A Graveyard of Incomplete Execution Loops →

Cedric Chin writing on Commoncog:

it is easy to come up with plans and then start executing on them. The problem is that:

  1. Either you get distracted, mid-execution, by something else that’s shinier. Or you get distracted mid-execution by something that blows up (and in business there’s always something that’s blowing up).
  2. Or you get distracted by something else that pops up at the end of the current execution loop,
  3. Or you forget to do the ‘study’ bit at the end of a loop, which informs your next cycle.

It’s easy to proclaim focus, but it’s a whole other game to practice the sheer discipline of seeing things through when your big customer cancels, your head of engineering leaves, the backlog gets boring, and the next shiny idea comes knocking while you’re trying to execute a multi-month strategy.

Kudos to Cedric for addressing what many of us avoid: how deeply distractions and forgetfulness affect our work, despite it seeming unprofessional to admit.

The whole Commoncog website feels like a breath of fresh air in the world of fluffy business writing.

Semaphore Summit 2025 →

We’re hosting a virtual Semaphore Summit on February 26-28 to kick off our open source journey.

Each session is short and focused: Darko, our CTO, will open the Summit with a keynote on why we made this move and what it means for the future of CI/CD. Other engineers will share tips on running Semaphore and optimizing CI/CD workflows. We’re also excited to host developers from companies like Confluent and SimplePractice sharing their experiences.

The event is free to attend, and the sessions will take place around noon, New York time. The best part is that every talk will have a live Q&A session. So no long dull videos, just real conversations about continuous delivery.

Serbia's Student Revolution

Serbia is experiencing an unprecedented political protest movement.

On November 1, 2024, a 300-tonne concrete canopy collapsed at a railway station in Novi Sad, killing 15 people and severely injuring two. The root cause: terminal-stage corruption.

The railway station reconstruction project involved over 60 subcontractors—many without relevant work experience—and cost five times more than projected. The country’s president, minister of construction, and city mayor—all from the ruling party—had opened the station to the public. Documents later revealed that at the time, the station was still a construction site without a usage permit.

What followed was a unique form of political protest led by university students, which developed into Europe’s largest student-led movement since 1968.

When hooligans sponsored by the ruling party attempted to sabotage the first protests with violence, students responded by locking down the universities. Their demands were simple: publish all documentation related to the railway station reconstruction and ensure complete criminal accountability for the collapse.

For a systematically corrupt government, these demands are like Schrödinger’s cat—fulfilling them would likely put the party leadership in jail.

Waiting for students marching 80 km from Belgrade to arrive, Novi Sad, Serbia, January 31, 2025

As of this writing, hundreds of thousands of people have participated in 24-hour blockades of key intersections and bridges in Novi Sad and Belgrade. Universities remain closed, with professors joining their students. Many high schools, some elementary schools, and public kindergartens have shut down. Attorneys are on general strike. Daily protests have spread to over 200 towns and villages. When students march, they are welcomed like liberators with tears of joy and hope.

The unified message is clear: we stand with our children and support their demands.

Each day at 11:52—the exact moment when the canopy collapsed—people across the country pause their daily activities. They stand in silence for fifteen minutes, one minute for each life lost in the tragedy.

When the silence is observed as part of a street protest, the gathering is now secured by farmers on tractors and by bikers. This precaution was introduced after several incidents in which people drove through the crowds and injured protesters. These attackers were presumably influenced by inflammatory government rhetoric that labeled protesters as being backed by invisible foreign powers aiming to destabilize the country.

It’s hard to describe the experience of standing in silence with twenty thousand others on a city street. The collective grief, unity, and resolve for change are palpable.

Perhaps most remarkable is the students’ organizational discipline. They have rejected all outsiders from their movement—no politicians, activists, opposition parties, or NGOs. They operate without a single leader, practicing direct democracy through plenary sessions. When speaking to media, they rotate spokespersons to prevent any individual from becoming prominent.

They cannot be provoked, frightened, defamed, or bribed.

Students marching with a banner ‘DIALOGUE’ in response to a violent incident. On the same day the government invited students to dialogue, a 23-year-old woman was attacked with a baseball bat by someone who emerged out of the ruling party’s office. Her jaw was broken. The attacker was arrested and the prime minister resigned.

The students consistently reject unconstitutional calls for “dialogue” from the country’s president—legally a ceremonial position—to negotiate their demands. They dismiss as irrelevant this man who typically dominates daily media coverage and controls everything. Instead, they insist that institutions like prosecutors and police simply do their taxpayer-funded jobs. They call for the restoration of checks and balances and the rule of law. In these dark times created by weak leadership on both sides of the political landscape, such basic demands feel revolutionary.

Eventually, this movement must culminate in both justice and political change. The timing and method remain uncertain, but consensus is building around establishing a transitional expert government with a limited term to unblock captured institutions and ensure fair elections. One thing is clear: there’s no returning to the status quo that existed before the canopy collapse.

Mantras

Therapy is expensive but saying this is free:

Ship it.
Fail fast.
Just do it.
Locked in.
Skill issue.
It’s always day one.
You can just do things.
Fuck around and find out.
Move fast and break things.
Done is better than perfect.
Discipline equals freedom.
Don’t ask for permission.
Open source everything.
Fuck it we ball.
Zero to one.
Amp it up.
Don’t die.
Kaizen.
LFG.

Semaphore Is Going Open Source →

After more than a decade of building Semaphore as a commercial CI/CD platform, we’re open sourcing the entire core under Apache 2.0. This is a moment I’ve been waiting for for a long time. The stars have finally aligned — the tools for self-hosting cloud native applications have matured, developer tools have found their natural home in open source, and we’ve built something worth sharing with the world.

What excites me most isn’t just opening up our codebase — it’s the transformation into building in public. The best developer tools are built in the open, shaped by the collective wisdom of their users. It’s time for Semaphore to join that tradition.

Update: we’re kicking off our open source journey during the Semaphore Summit on February 26th.

Foggy weekend

Semaphore is now SOC 2 Type 2 Certified →

We just got our SOC 2 Type 2 certification at Semaphore. This means we’ve proven our security practices work consistently over time, not just on paper. Protecting our customers’ code and data has always been a top priority for us, and now we have the audit to back it up.

The difference between Type 1 and Type 2 matters here. Type 1, like the ISO 27001 certification Semaphore has held since 2020, is a snapshot – it shows your security controls look good on a specific day. Type 2 proves you’re actually following these practices over months of real operations. It’s the difference between having a gym membership and showing you actually go regularly.

Getting here wasn’t quick or easy. Unlike typical product work, there’s no definitive specification for SOC 2. No clear manual that says “do exactly these things and you’ll pass.” Instead, you’re working with broad principles about security, availability, and confidentiality that you have to interpret and implement in the context of your specific business.

I’m very proud of our small security team who turned these vague compliance requirements into real, practical security improvements across the organization.

To our customers: this certification confirms what we’ve been doing all along – treating your code and data with the care it deserves. To anyone considering Semaphore: this is what we mean when we say security is built into how we operate, not bolted on later.

AI Can Build Apps Like Calculators Can Do Math

Naval Ravikant on X:

AI won’t replace programmers, but rather make it easier for programmers to replace everyone else.

Someone jumps in with the “but Jensen Huang says AI writes code!” argument. Naval’s response packs as much wit and insight per character as ever:

Calculators can do math per CEO of Texas Instruments.

Exactly. Now is the best time in history to be a programmer.

AI expands your capabilities into domains where your knowledge was insufficient. If all you ever wanted was to create, and you’re looking at AI the right way, chances are you feel reborn.

Designers who dreamed of breaking new ground in human-computer interaction but got stuck styling forms and churning out YouTube thumbnails should be excited.

The industry is resetting to where it was 18 years ago, before the iPhone, when the “social” web was just emerging. The soul-crushing parts of the job are being automated away. What’s left? The work that actually matters: original thinking backed with good taste.

Technology creates demand for more technology. There will always be new programs to write.

In the new year, fly free.
