
The Last Website

There will come a day, probably sooner than you think, when you open a browser and feel the same mild bewilderment your parents felt the first time they used one. Not because browsers will be gone—they'll linger the way fax machines lingered—but because the activity of browsing will have become as quaint as the activity of manually tuning a radio. You'll remember when you used to do that. You'll wonder why it took so long to stop.

The browser was always a workaround. A way for humans to navigate a system built for machines—servers talking to servers—by wrapping it in a graphical interface legible to people who hadn't memorized HTTP verbs. It solved a real problem brilliantly. It also created thirty years of problems nobody asked for.

We are about to solve them.

The Problem With Websites

To understand what's coming, you have to understand what websites actually are underneath the design and the JavaScript and the carefully A/B-tested button colors.

A website is a data source with a presentation layer bolted on top. Amazon has a database of products, prices, and inventory. The website is the interface that lets humans query that database, evaluate the results, and execute transactions. Everything between the database and the human—the navigation, the search bar, the product page, the checkout flow, the confirmation email—exists for one reason: humans can't talk to databases directly.

The presentation layer is not the product. It's the translation layer. And translation layers exist to be eliminated when a better translation becomes available.

Personal AI agents are that better translation.

When your agent can query Amazon's product database directly, understand the results, evaluate them against your preferences, and execute a purchase—the website becomes unnecessary overhead. The translation layer dissolves. The data and the human are directly connected, with the agent in between doing what agents do: understanding what you want and getting it.

This is not a distant science fiction scenario. It is the logical and near-term consequence of AI systems that can already read, reason, and act. The infrastructure to make it universal doesn't fully exist yet. But it's being built right now, in pieces, by people who mostly don't realize they're building the same thing.

Four Primitives

Strip away every website ever built. Strip away the CSS, the analytics tags, the cookie consent banners, the infinite scroll, the push notifications, the A/B tests, the recommendation engines. Strip away every piece of UI logic that exists to help humans navigate to what they want.

What remains is four things.

Read. Structured data. Products, prices, availability, schedules, balances, records, feeds. Everything a human comes to a website to learn. Public or gated behind credentials, but fundamentally: data that can be queried and returned.

Write. Gated, validated state changes. Book a flight. Purchase a product. Submit an application. Cancel a reservation. Update a preference. Transfer money. The action that changes something in the world. All of them share the same primitive: authenticated request, validated input, committed transaction.

Notify. The world changed and you should know. Your order shipped. Your reservation is confirmed. The price dropped. The appointment is ready. Every notification a website sends reduces to this: here is a structured update about something you care about.

Manifest. The rules. What this domain exposes, what operations it supports, what credentials are required, what constraints apply, what it will and will not share. The document an agent reads before touching anything else.

Read. Write. Notify. Manifest.

That's the web. Everything else was for the humans.

The Architecture

The new web has five components. Three of them are boring. Two of them are new.

The Manifest lives at /.well-known/capabilities.yaml on every domain. It's a single YAML file—human-auditable, AI-generated, machine-parsed. It declares what the domain exposes, what each resource requires for access, what write operations are available and what they expect, what notification feeds exist, and what the system enforces and cannot share. An agent arriving at an unfamiliar domain reads this file first. Everything it needs to know to interact safely and correctly is in this document.
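No such standard exists yet, so any concrete example is speculative. A manifest along these lines might look something like this—every field name and path here is illustrative, not part of any published format:

```yaml
# Hypothetical /.well-known/capabilities.yaml -- illustrative schema only
version: "0.1"
domain: shop.example.com
read:
  - path: /api/products
    method: GET
    auth: none                    # public data, no credential needed
  - path: /api/orders/{id}
    method: GET
    auth: signet:customer         # gated: requires a customer credential
write:
  - path: /api/orders
    method: POST
    auth: signet:payment
    schema: /schemas/order.json   # declared input schema for validation
notify:
  - feed: /feeds/orders.atom      # Atom feed the agent polls
    auth: signet:customer
policy:
  data-retention: 30d
  will-not-share: [purchase-history, contact-info]
```

The point is not this particular schema but the property it has: a machine can parse it, an AI can generate it, and a human can audit it in one screen.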

This is robots.txt for the agentic web. Robots.txt told crawlers what not to touch. The manifest tells agents everything—what to touch, how to touch it, and what the rules are. It's the missing standard that nobody has built yet.

Read is standard HTTP GET against declared endpoints. The manifest describes what's queryable and what credentials it requires. The agent queries what it needs. Nothing more. The domain returns structured data the agent can reason about without natural language inference. Public data needs no credentials. Gated data requires a credential the agent can present.

Write is standard HTTP POST, PUT, or DELETE against declared endpoints with declared schemas. The manifest describes what writes are available, what they require, and what validation rules apply. The agent submits. The domain validates. The transaction commits or rejects with a structured error the agent can interpret and explain to its user.
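The read/write flow is deliberately unexciting. A minimal sketch of how an agent might turn a declared write operation into a validated request—the manifest fields (`path`, `method`, `required`) are hypothetical, standing in for whatever the eventual standard declares:

```python
import json

# Hypothetical manifest entry for a write operation -- field names are
# illustrative, not part of any real standard.
ORDER_WRITE = {
    "path": "https://shop.example.com/api/orders",
    "method": "POST",
    "required": ["product_id", "quantity"],
}

def build_request(entry, payload):
    """Validate a payload against the manifest's declared requirements and
    return the (method, url, body) the agent would submit."""
    missing = [f for f in entry["required"] if f not in payload]
    if missing:
        # A structured error the agent can interpret and explain to its user.
        raise ValueError(f"missing required fields: {missing}")
    return entry["method"], entry["path"], json.dumps(payload)

method, url, body = build_request(ORDER_WRITE, {"product_id": "sku-42", "quantity": 1})
print(method, url)  # -> POST https://shop.example.com/api/orders
```

Validation happens twice—once in the agent before submitting, once in the domain before committing—so a malformed request never reaches the transaction layer.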

Notify is an Atom feed—an IETF standard that has existed since 2005 and which the industry inexplicably abandoned in favor of push notification infrastructure that requires persistent connections, device registration, and platform intermediaries. The agent polls the feed on its own schedule. New entries are structured data. No push infrastructure. No persistent connection. The agent owns the polling relationship. The domain owns the feed. Nothing in between needs to exist.
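The polling loop needs nothing beyond standard-library XML parsing. A minimal sketch, using a toy feed string in place of a live HTTP fetch—the agent just remembers which entry IDs it has already processed:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

# A toy feed standing in for what the agent would fetch from the domain.
FEED = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Order updates</title>
  <entry><id>urn:order:2</id><title>Order shipped</title></entry>
  <entry><id>urn:order:1</id><title>Order confirmed</title></entry>
</feed>"""

def new_entries(feed_xml, seen_ids):
    """Return (id, title) pairs the agent has not processed yet."""
    root = ET.fromstring(feed_xml)
    fresh = []
    for entry in root.iter(ATOM_NS + "entry"):
        entry_id = entry.find(ATOM_NS + "id").text
        if entry_id not in seen_ids:
            fresh.append((entry_id, entry.find(ATOM_NS + "title").text))
    return fresh

print(new_entries(FEED, {"urn:order:1"}))  # -> [('urn:order:2', 'Order shipped')]
```

That's the entire notification stack: a GET, a parse, a set membership check. Entry IDs are unique and permanent under the Atom spec (RFC 4287), which is exactly what makes client-side deduplication this simple.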

Signet is the credential layer. When a domain requires proof of identity, age, jurisdiction, payment authorization, or any other attribute, the agent doesn't expose raw user data. It generates a zero-knowledge proof—a cryptographic statement that proves the claim without revealing the underlying fact. The domain verifies the proof locally in microseconds. No OAuth dance. No password. No personal data transmitted. Mathematical certainty that the user is who they need to be, without the domain learning anything about who they actually are.
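Real zero-knowledge proof systems are far more involved than anything that fits in a few lines; the toy below shows only the *interface* Signet implies—the domain receives a signed predicate ("over 18: yes"), never the underlying fact (the birthdate). HMAC with a shared demo key stands in for the issuer's real asymmetric signature, and all names are illustrative:

```python
import hmac, hashlib, json
from datetime import date

# Toy stand-in for Signet. A real system would use zero-knowledge proofs;
# here a trusted issuer signs only the *predicate*, so the domain never
# sees the birthdate. HMAC stands in for a public-key signature.
ISSUER_KEY = b"issuer-demo-key"  # hypothetical; real issuers use asymmetric keys

def issue_claim(birthdate: date, today: date) -> dict:
    """Issuer derives the predicate from private data and signs it."""
    over_18 = (today.year - birthdate.year -
               ((today.month, today.day) < (birthdate.month, birthdate.day))) >= 18
    claim = {"predicate": "age>=18", "value": over_18}
    sig = hmac.new(ISSUER_KEY, json.dumps(claim, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_claim(token: dict) -> bool:
    """Domain verifies the signature locally; it learns only the predicate."""
    expected = hmac.new(ISSUER_KEY, json.dumps(token["claim"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["value"]

token = issue_claim(date(2000, 1, 15), date(2025, 6, 1))
print(verify_claim(token))  # -> True
```

Note what crosses the wire: a predicate and a signature. The birthdate stays with the issuer and the user.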

That's the architecture. Five components. Four HTTP verbs. One IETF standard from 2005. One credential layer built on modern cryptography.

The web doesn't need to be rebuilt. It needs to be simplified.

The Trust Chain

For this to work, platforms need to trust agents. Not trust that agents are well-intentioned—trust that specific agents will behave as certified, enforcing the constraints users have accepted, showing the ads users agreed to see, honoring the rules of the economic relationships they've entered.

This requires a trust chain with three links.

The first link is the user. When a user accepts a relationship with a platform—agreeing to see ads in exchange for content, or paying tokens for access, or sharing declared interests in exchange for better recommendations—they sign a capability token through Signet. This token says: I authorize this agent to act on my behalf in this relationship, under these constraints, with these permissions. Signed by the user's private key. Scoped to this platform. Expires on a declared schedule.

The second link is the agent. The agent presents the capability token on every interaction. But the platform needs more than the user's authorization. It needs to know the agent will actually enforce the constraints—that it will show the ad before delivering the content, that it won't quietly strip the constraint and pretend compliance. The agent presents a compliance attestation: a signed statement that this agent, at this version, enforces constraint packages as declared.

The third link is the agent's producer. The compliance attestation is only meaningful if someone trustworthy signed it. Anthropic signs Claude's attestation. OpenAI signs its agents. A certification authority signs open-source agents that have passed compliance audits. The platform verifies the producer's signature against a well-known public key. The chain is complete: the user authorized the agent, the agent is certified compliant, the producer guarantees the certification.

No central identity provider. No OAuth server to breach. No session database to leak. Three cryptographic signatures verified locally in milliseconds.
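The verification logic is small enough to sketch. HMAC with shared demo keys stands in here for the asymmetric signatures the real design calls for, and every field name is illustrative—the point is that the platform checks all three links with local computation only:

```python
import hmac, hashlib, json, time

# Toy sketch of the three-link trust chain. HMAC stands in for the real
# design's public-key signatures; all field names are illustrative.
def sign(key: bytes, payload: dict) -> str:
    return hmac.new(key, json.dumps(payload, sort_keys=True).encode(),
                    hashlib.sha256).hexdigest()

USER_KEY, PRODUCER_KEY = b"user-demo-key", b"producer-demo-key"

# Link 1: the user authorizes the agent -- scoped to a platform, expiring.
capability = {"agent": "agent-x/1.0", "platform": "news.example.com",
              "constraints": ["show-ads-v1"], "expires": time.time() + 3600}
capability_sig = sign(USER_KEY, capability)

# Links 2 and 3: the producer attests this agent version enforces constraints.
attestation = {"agent": "agent-x/1.0", "enforces": ["show-ads-v1"]}
attestation_sig = sign(PRODUCER_KEY, attestation)

def platform_accepts(cap, cap_sig, att, att_sig, now):
    """Platform verifies all three links locally -- no identity provider."""
    return (hmac.compare_digest(sign(USER_KEY, cap), cap_sig)          # user signed
            and hmac.compare_digest(sign(PRODUCER_KEY, att), att_sig)  # producer signed
            and att["agent"] == cap["agent"]                           # same agent version
            and cap["expires"] > now)                                  # not expired

print(platform_accepts(capability, capability_sig, attestation, attestation_sig, time.time()))
```

Every check is a signature verification or a field comparison. Nothing requires a round trip to a third party at request time.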

The system isn't trying to prevent circumvention. Determined developers will strip compliance from their forks. That's fine—DRM has never been about prevention. It's about effort. A developer who wants an uncertified agent can have one. They and their users simply lose access to the constraint economy: the premium content, the token earnings, the advertising relationships that require certified delivery. Honest participation is made easy. Circumvention is made effortful. The majority takes the easy path.

The End of Advertising As We Know It

The current web's business model rests on one foundation: controlling the surface the human looks at. You control the surface, you control ad placement, you charge for eyeballs. Google, Meta, YouTube—the entire consumer web—is attention real estate. They are landlords charging rent on human perception.

The agent eliminates the surface.

The human never sees the platform's interface. The agent reads the feed, extracts what's relevant, presents it in whatever format serves the user. There is no surface to place an ad on. There is no attention to rent.

This sounds like the end of advertising. It's actually the beginning of advertising that works.

The current model is adversarial. The ad interrupts what you're doing. It fights for attention against the content you came for. It arrives at the wrong moment—car ads after you bought the car, baby product ads after the baby outgrew them, vacation ads after you returned. The targeting is inference from behavior, perpetually late and often wrong.

The agent model is cooperative. Your agent knows your intent—not inferred from behavior, but explicit, current, and accurate. You told your agent you're planning to buy a car. Or your agent helped you research it. Or it manages your calendar and knows your lease expires in three months. The intent is real. The timing is right.

Advertisers in this world don't buy eyeballs. They buy access to declared intent at the moment of consideration. The car dealer pays to be surfaced when your agent is helping you evaluate options—not as an interruption, but as a relevant option in the consideration set alongside organic results. You see it because you opted into seeing sponsored options in exchange for tokens or a better price. You see it because you wanted to.

The advertising becomes interactive because the agent mediates the interaction. A quiz that helps you find the right product. A configurator that lets you spec a car in the agent's interface. A game that earns tokens while introducing you to a product. The user chooses to engage because the engagement has value. The advertiser pays for engagement that actually happened, verified by the agent that facilitated it.

The attribution problem—the fundamental unsolved problem of digital advertising—disappears entirely. The agent knows if the ad resulted in a purchase because the agent facilitated the purchase. No attribution window. No last-click mythology. No view-through fraud. The advertiser pays for outcomes. Outcomes are verifiable.

The privacy problem disappears too. The agent proves the user's relevant attributes to the advertiser through Signet without exposing the underlying data. Age verified without revealing age. Jurisdiction confirmed without revealing address. Purchase intent proven without revealing browsing history. The advertiser gets what they need. The user's data stays with the user.

The Token Economy

The attention economy's replacement is a token economy. Simple, transparent, and structurally aligned with user interests.

Every interaction with a domain has a declared cost. Premium content costs tokens. Ad-supported content is free but shows an ad. Basic content is free unconditionally. The manifest declares the cost. The agent checks the user's balance. The user decides once—not per interaction, not per session—when they establish the relationship.

Tokens are acquired three ways. You buy them with money. You earn them by engaging with advertising—watching ads in a dedicated session you chose, completing a quiz, trying a product configurator. You accrue them automatically over time, small amounts for account age, engagement history, participation. The accrual handles the casual user who doesn't want to think about it.

The agent manages the wallet. You ask to watch something. The agent checks your balance, pays the declared cost if you're funded, or explains once that you need tokens and offers options. One decision. Transparent economics. No dark patterns.
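The decision logic is trivial once costs are declared. A sketch with a hypothetical cost model—the tier names and numbers are made up for illustration:

```python
# Hypothetical access tiers, as a manifest might declare them.
CATALOG = {
    "breaking-news": {"cost": 0, "ad": False},   # free unconditionally
    "feature-video": {"cost": 0, "ad": True},    # free, shows one ad
    "premium-doc":   {"cost": 12, "ad": False},  # costs tokens
}

def request_content(item: str, balance: int):
    """Resolve a content request against the declared cost model."""
    entry = CATALOG[item]
    if entry["cost"] > balance:
        return balance, "need-tokens"            # agent explains options once
    action = "play-with-ad" if entry["ad"] else "play"
    return balance - entry["cost"], action

print(request_content("premium-doc", 20))  # -> (8, 'play')
```

No per-interaction prompts: the cost comes from the manifest, the policy comes from the relationship the user already approved, and the agent resolves everything in between.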

The constraint package is the user-approved contract. An advertiser or platform publishes a structured, signed constraint package: show one ad before this content category, maximum thirty seconds, must match declared interest categories, user earns ten tokens per completion. The user's agent receives it. The user reviews it once. They approve it. Signet-eval adds it to the agent's policy set. The agent enforces it on itself—not because the platform controls the interface, but because the user approved the constraint and the agent honors what its user agreed to.
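Self-enforcement is just the agent checking offered ads against the package it agreed to. A sketch with a hypothetical package format—field names are invented, and the signature verification is elided:

```python
# Hypothetical signed constraint package, published by a platform and
# approved once by the user. Signature verification is elided here.
PACKAGE = {
    "id": "show-ads-v1",
    "max_ad_seconds": 30,
    "interest_categories": ["cars", "travel"],
    "tokens_per_completion": 10,
}

def accept_ad(ad, user_interests, balance):
    """Agent enforces the approved package on itself: if the offered ad
    conforms, the agent shows it and credits the user's earned tokens."""
    conforms = (ad["seconds"] <= PACKAGE["max_ad_seconds"]
                and ad["category"] in PACKAGE["interest_categories"]
                and ad["category"] in user_interests)
    if conforms:
        return True, balance + PACKAGE["tokens_per_completion"]
    return False, balance

print(accept_ad({"seconds": 25, "category": "cars"}, {"cars"}, 5))  # -> (True, 15)
```

A 45-second ad, or one outside the declared interest categories, is simply refused—the platform never gets to override the terms the user approved.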

The user can revoke any constraint at any time. The agent removes it from its policy set. The relationship ends. The platform loses access. The user loses the tokens. Clean, bilateral, immediate.

This is the inversion of the current model. Today the platform imposes constraints on users and monetizes their compliance. Tomorrow users accept constraints from platforms and receive compensation for their compliance. The user's attention, intent, and data have explicit economic value. The user captures that value directly.

What This Does to Platforms

The platforms that survive are the ones with genuinely valuable data. The ones that die are the ones whose only value was controlling what humans looked at.

The distinction matters. Airbnb has valuable data—properties, availability, pricing, verified reviews, quality-controlled inventory. An agent that can query that data directly and book on the user's behalf makes Airbnb more useful, not less. The booking still happens through Airbnb. The revenue still flows to Airbnb. The discovery just happens through the agent instead of the website. Airbnb's business survives and probably improves—lower acquisition cost per booking, better conversion because every booking represents genuine intent.

Facebook has engagement data. It knows what content makes people feel outrage, nostalgia, connection, fear. It knows this because it spent years optimizing its interface to maximize the time humans spent looking at it, which required maximizing emotional arousal. That knowledge is useful for selling ads against human attention. It is not useful for an agent that filters content based on what its user actually wants. The data Facebook has is data about how to manipulate humans. Agents aren't manipulable. Facebook's core asset becomes worthless.

Twitter—call it what you want—has a feed of real-time public discourse. That's genuinely valuable data. An agent that can query the feed, filter for what's relevant to its user, and surface it without the manipulation mechanics is a better Twitter than Twitter. The feed survives. The attention manipulation doesn't. Twitter's business model shrinks to the value of the feed itself. That might be sustainable. The current model's advertising premium disappears.

Google is the interesting case. Google's search business is already under pressure from AI answers that don't send users to websites. The agent web accelerates this—agents don't search, they query. But Google has something no one else has: the most comprehensive index of the web's data ever assembled. If Google can expose that index as a queryable data layer rather than a search results page, the business transforms rather than dies. The search engine becomes a data API. The advertising business rebuilds around declared intent rather than search behavior inference.

The social platforms whose business is addiction lose everything. The data platforms whose business is information gain. The infrastructure platforms whose business is transactions mostly survive and simplify.

The Standards That Need To Exist

None of this happens automatically. It requires standards that don't fully exist yet, infrastructure that's being built in pieces, and a coordination problem that someone needs to solve.

The manifest standard needs to be defined, implemented, and adopted. /.well-known/capabilities.yaml needs to become as universal as robots.txt. Every domain that wants agent traffic needs to publish one. The format needs to be stable, extensible, and simple enough that AI can generate it and humans can audit it. This is the single highest leverage contribution anyone can make to the agent web right now.

The notification standard needs to be revived. Atom is sufficient. It needs to be declared the standard for agent notifications, the feed format needs minor extensions for structured data delivery, and the well-known endpoint needs to be defined. This is one IETF draft away from existing.

The constraint package format needs to be defined. Signed, structured, machine-readable, human-auditable. The format for advertising constraints, access constraints, and data sharing constraints. Needs to be standard enough that any agent can parse any platform's constraints without custom integration.

The agent certification infrastructure needs to be built. The signing key infrastructure for agent producers. The attestation format. The verification protocol. The revocation mechanism. This is the hardest piece because it requires cooperation from agent producers—Anthropic, OpenAI, Google, and others—to agree on a common format and maintain their signing infrastructure.

The token wallet standard needs to be defined. How tokens are denominated, transferred, and verified across platforms and agents. This is the payments problem of the agent web and it's the one most likely to attract regulatory attention.

None of these are technically difficult. All of them are coordination problems. The technical solutions are obvious. Getting the industry to agree on them is the work.

What You Do With This

If you build websites: your future customer is an agent, not a human. The investment that matters is in your data quality, your manifest, and your API. The investment that stops mattering is in your UI. Start now.

If you build AI agents: the manifest standard is your discovery mechanism. Build to it. Advocate for it. The agent that can navigate the web natively without custom integrations for every domain wins.

If you work in advertising: the intent graph is your new inventory. Start building relationships with the companies that will hold user intent data. The companies that figure out how to buy access to declared intent at the right moment will own the next era of advertising. The ones still optimizing for impressions won't survive the transition.

If you run a platform: figure out how much of your value is in your interface and how much is in your data. The interface value is going away. The data value isn't. The faster you understand the distinction, the more of your business you save.

If you're a user: you're about to get your time back. The web was designed to capture your attention and sell it. The agent web is designed to accomplish your goals and return you to your life. The transition will be messy and the incumbent platforms will fight it with everything they have. They will lose. The incentive structure is too powerful and the user benefit is too clear.

The Last Website

The last website won't be dramatic. Nobody will announce it. It'll just be the day when someone realizes they haven't opened a browser in a week and doesn't miss it.

The web won't disappear. The data will still be there. The servers will still be running. The transactions will still be happening. Everything that makes the web valuable will persist.

What will be gone is the friction. The dark patterns. The infinite scroll. The cookie consent theater. The notification permission requests. The ads for things you already bought. The recommendation algorithm that knows you better than you know yourself and uses that knowledge against you.

What will replace them is simple. You tell your agent what you want. Your agent finds it, evaluates it, and gets it for you. The web does its job and gets out of your way.

That's not the end of the internet. It's the internet finally working as advertised.

The browser was a workaround. The workaround is almost over.