Decades of remarkable engineering produced the internet, the cloud, and the modern operating system. Each solved real problems at the scale of its time. This document does not question that achievement — it builds on it. We propose that the next logical step is a protocol layer that allows hardware components to compose directly, without a general-purpose intermediary, both locally and across networks. We call this the Hardware Protocol, and we believe its foundations already exist in the work being done across the research community today.
The history of computing is a history of elegant solutions that created the conditions for the next problem. The mainframe centralized compute and made it accessible. The personal computer distributed that compute to individuals. The internet connected those individuals into a global network. The cloud made that network's infrastructure programmable and scalable.
Each transition preserved what was valuable from the previous era while resolving its inherent limitations. None of these transitions was a rejection — each was a continuation. We write in that same spirit.
The cloud did not fail. It succeeded so completely that it revealed the next frontier: the hardware that sits beneath it, and beside us, waiting to be composed with the same freedom that HTML gave to information.
To understand where we are, it helps to see the full arc.
The cloud solved a genuine problem: infrastructure at scale was too complex and too expensive for most organizations to manage. The solution was elegant — abstract the hardware, rent the capability, pay for what you use.
But every abstraction has a cost. In abstracting the hardware, the cloud also intermediated it — placing the silicon at one remove from the people and applications that depend on it. The Hardware Protocol does not compete with the cloud. It completes the arc the PC began — by making the hardware you own as composable and connectable as the information the internet freed.
The clearest model for what we propose already exists in the software world.
Anaconda did not invent Python, NumPy, or PyTorch. It did something more useful: it created an environment manager — a layer that allows any combination of tools to be assembled, isolated, and optimized for a specific task, without requiring the user to own or understand the tools themselves.
You do not own PyTorch. But you own the environment that runs it — and that environment is yours to compose, version, share, and reproduce exactly.
Anaconda: declares dependencies. Assembles the runtime. Isolates execution. You don't own the libraries — you own the environment.
The Hardware Protocol: declares hardware needs. Assembles the compute stack. Isolates resources. You don't own the GPU architecture — you own the environment that uses it.
The Hardware Protocol is Anaconda for silicon. The application declares what it needs — GPU compute, NVMe bandwidth, network throughput, neural inference capacity. The environment assembles around it. Nothing loads that wasn't requested. Nothing persists that isn't needed.
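To make the analogy concrete, here is a toy sketch of what a hardware environment manifest and its resolution step might look like. The manifest format, every field name, and the `unmet_requirements` helper are hypothetical, invented for illustration; no such specification exists yet.

```python
# Hypothetical hardware-environment manifest, modeled on conda's
# environment.yml. Every field name here is illustrative, not part
# of any existing specification.
HW_ENV = {
    "name": "inference-env",
    "requires": {
        "gpu": {"min_tflops": 50, "min_vram_gb": 24},
        "nvme_bandwidth_gbps": 7,
        "network_throughput_gbps": 25,
    },
}

def unmet_requirements(manifest, available):
    """Return the names of declared requirements the host cannot satisfy."""
    req, missing = manifest["requires"], []
    gpu = available.get("gpu", {})
    if gpu.get("tflops", 0) < req["gpu"]["min_tflops"]:
        missing.append("gpu.tflops")
    if gpu.get("vram_gb", 0) < req["gpu"]["min_vram_gb"]:
        missing.append("gpu.vram_gb")
    for key in ("nvme_bandwidth_gbps", "network_throughput_gbps"):
        if available.get(key, 0) < req[key]:
            missing.append(key)
    return missing
```

If `unmet_requirements` returns an empty list, the environment can be assembled; otherwise the application learns exactly which capability is missing, the way conda reports an unresolvable dependency.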
The web was built on two complementary layers. HTML defined a common language for composing information — any browser, any author, any machine. HTTP defined how that information traveled — simple enough to be implemented everywhere, powerful enough to carry the world's knowledge.
We propose an analogous pair for hardware:
A common language for hardware components to declare their capabilities — compute topology, memory access patterns, I/O characteristics, power envelope. Like HTML elements, each component exposes a standard interface. The application composes them like a document: declare the elements, let the environment render the stack. Local first — your laptop, your workstation, your edge device. The operating system becomes one possible renderer among many, chosen when useful, bypassed when not.
A communication protocol that allows hardware environments to discover, connect, and compose across machines — first as local intranets of silicon, then as a broader network. Not a replacement for the internet, but a complementary layer that operates beneath the application level, allowing hardware capabilities to be shared peer-to-peer, without cloud intermediation. On existing networks where possible. On new networks where necessary.
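As a sketch of the declarative half of that pair: each component publishes a capability record, and an application composes a stack by declaring the element kinds it needs, much as an HTML document declares elements and lets the browser render them. The `Capability` record and `compose` function below are hypothetical illustrations, not an existing interface.

```python
from dataclasses import dataclass, field

# Illustrative capability record; the fields are hypothetical stand-ins
# for compute topology, memory access patterns, I/O, and power envelope.
@dataclass
class Capability:
    kind: str                                  # e.g. "gpu", "nvme", "nic", "npu"
    attrs: dict = field(default_factory=dict)  # bandwidth, topology, power, ...

def compose(declared_kinds, inventory):
    """Select one component per declared kind; report anything unresolvable."""
    stack, missing = {}, []
    for kind in declared_kinds:
        found = next((c for c in inventory if c.kind == kind), None)
        if found is None:
            missing.append(kind)
        else:
            stack[kind] = found
    return stack, missing
```

The application declares the elements; the environment either renders the stack or reports precisely which element it cannot supply.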
"What HTML and HTTP did for information — making it composable, portable, and free from any single owner — the Hardware Protocol can do for silicon. The hardware already supports it. The protocol does not yet exist. That is the work."
This vision is not built on empty ground. The research community has been constructing the foundations — often without a unifying conceptual framework to connect the pieces. The Hardware Protocol names that framework.
The pieces exist. The protocol that unifies them — simple enough for any developer to use without understanding the layers beneath — does not yet exist as a coherent, open specification. That is the precise gap this document identifies.
We are not proposing a product, a company, or a standards body. We are proposing that the work already happening across research institutions, open source projects, and hardware manufacturers be recognized as convergent — and that the missing piece be named clearly enough that anyone can work toward it.
The Hardware Protocol, in its minimal form, requires:
A common format for hardware components to describe what they offer — compute, memory, I/O, power — in terms an application environment can discover and compose automatically. This declaration should include not only functional capabilities but an energy manifest: the cost per operation in watts, the idle envelope, the thermal profile. A composable environment that can reason about energy per operation — not just throughput — changes the calculus for AI infrastructure, edge deployment, and any context where the cost of a kilowatt-hour is part of the architectural decision.
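The energy manifest changes which component a composer should prefer. A toy example, with invented numbers and field names: a slower part can win on joules per operation, which is the quantity an energy-aware environment would minimize rather than raw throughput.

```python
# Hypothetical energy manifests; every number and field name is invented.
COMPONENTS = [
    {"id": "gpu-big",   "ops_per_sec": 2.0e12, "watts_active": 300, "watts_idle": 40},
    {"id": "npu-small", "ops_per_sec": 9.0e11, "watts_active": 90,  "watts_idle": 10},
]

def joules_per_op(c):
    """Active energy cost of one operation: watts / (operations per second)."""
    return c["watts_active"] / c["ops_per_sec"]

def most_efficient(components):
    """Energy-aware choice: minimize joules per operation, not maximize throughput."""
    return min(components, key=joules_per_op)
```

Here `gpu-big` delivers more throughput, but `npu-small` costs 1.0e-10 J/op against the GPU's 1.5e-10 J/op, so an energy-aware composer picks it whenever the deadline permits.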
A set of primitives — discovery, addressing, synchronization, error propagation — sufficient for components to cooperate without a central arbiter. Formally specified. Provably sufficient.
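One way to picture the primitive set is as a small message vocabulary. The enum and the fan-out helper below are illustrative only; a real specification would define these messages formally and prove the set sufficient.

```python
from enum import Enum, auto

# Hypothetical primitive vocabulary; the names are illustrative.
class Primitive(Enum):
    DISCOVER = auto()   # enumerate components and their capability records
    ADDRESS = auto()    # bind a stable address to a component
    SYNC = auto()       # establish ordering between cooperating components
    ERROR = auto()      # propagate a fault to every dependent component

def propagate_error(failed_id, dependency_graph):
    """Without a central arbiter, a fault fans out directly to dependents."""
    dependents = dependency_graph.get(failed_id, [])
    return [(Primitive.ERROR, failed_id, d) for d in sorted(dependents)]
```

Error propagation is the primitive that replaces the OS as arbiter: each dependent component is told directly, rather than discovering the fault through a mediating kernel.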
An abstraction layer simple enough that a developer can declare hardware needs without understanding CXL, SR-IOV, or RDMA. The Anaconda moment — where the complexity disappears and the capability remains.
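The developer-facing surface might be as small as a single call. The `acquire` context manager below is a hypothetical sketch of that surface, with a stand-in body; the point is only that nothing in the signature mentions CXL, SR-IOV, or RDMA.

```python
from contextlib import contextmanager

@contextmanager
def acquire(**needs):
    """Hypothetical API: assemble an isolated environment around declared needs.

    The body is a stand-in; a real implementation would negotiate with the
    capability layer and tear the partition down on exit.
    """
    env = {"granted": dict(needs), "active": True}
    try:
        yield env
    finally:
        env["active"] = False   # stand-in for releasing the partition

# Usage: declare the need, not the mechanism.
with acquire(gpu_vram_gb=24, nvme_gbps=7) as env:
    granted = env["granted"]
```

The complexity disappears behind the context manager; the capability remains as an ordinary object the application can use.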
A protocol that allows local hardware environments to discover and compose with remote ones — peer-to-peer, without cloud intermediation, on existing or new network infrastructure.
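A toy model of that resolution order: satisfy a need from local silicon first, then from a directly reachable peer, with no cloud hop anywhere in the path. All identifiers here are invented for illustration.

```python
def resolve(kind, local_caps, peer_caps):
    """Local-first, peer-second capability resolution; returns (source, capability)."""
    if kind in local_caps:
        return ("local", local_caps[kind])
    for peer_id, caps in peer_caps.items():  # peers discovered directly, not via a broker
        if kind in caps:
            return (peer_id, caps[kind])
    return (None, None)                      # nothing local, nothing on any peer
```

The same lookup works unchanged whether the peer table holds two workstations on a desk or a wider network of machines; the cloud appears nowhere in the resolution path.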
Three conditions converge in 2026 that did not exist five years ago:
The hardware is ready. CXL 3.0, PCIe 6.0, and NVMe namespaces provide the low-level primitives for direct, partitioned hardware access. The silicon already supports what we describe.
The research is converging. Unikernels, exokernels, composable infrastructure, and disaggregated systems are active research fronts that are arriving independently at the same architectural conclusions. The pieces are assembling without a shared name.
The need is acute. AI workloads have made the cost of OS mediation viscerally visible — every ML engineer fighting for VRAM pre-allocated to processes they didn't launch understands intuitively what this protocol would solve. The frustration is the demand signal.
This document has no authors, no institution, and no funding body. It is a conceptual specification written in the belief that the work already happening — in labs, in open source repositories, in hardware specifications — is more convergent than it appears.
If you are working on any layer of this stack — capability declaration, composition models, kernel bypass, unikernels, composable infrastructure, or hardware networking — we believe you are already building the Hardware Protocol. We are simply proposing a name for what you are building, and a framework for connecting it to what others are building.
The only thing missing is the recognition that these efforts are parts of the same whole.
ownyoursilicon.pages.dev