The general-purpose operating system was designed to arbitrate scarce hardware resources on behalf of applications that could not access them directly. Modern hardware has rendered that premise obsolete: today's peripherals carry dedicated processors, local memory, and autonomous scheduling logic. Yet the OS remains — not by necessity, but by inertia. This document asks whether a coherent model of direct application-to-hardware composition is theoretically achievable, and invites researchers to pursue that question.
When the first operating systems were designed, the constraint was absolute: a single CPU, kilobytes of shared memory, and peripherals with no intelligence of their own. The OS emerged as the necessary arbiter of scarcity — a layer whose existence was justified by the hardware it managed.
That contract was technically sound. The abstraction it provided was not a convenience; it was a prerequisite. Without a scheduler, a single process would consume the CPU entirely. Without a memory manager, applications would corrupt each other. The OS did not impose itself — it was summoned by necessity.
The operating system was not a product of ideology. It was a product of constraint. When the constraint dissolves, the justification must be reconstructed from first principles.
The premise of the original contract no longer holds. Consider the contemporary computing environment as it actually exists:
A modern NVMe SSD contains an ARM-class processor, dedicated DRAM, and firmware capable of managing its own wear-leveling, error correction, and I/O queuing — independently of any host OS. A GPU executes a scheduling model of its own, managing thousands of concurrent threads with no meaningful participation from the kernel. A network interface card performs TCP/IP offloading, RDMA, and packet classification in dedicated silicon. A neural processing unit executes inference graphs directly, exposing a narrow API that the OS does not meaningfully enrich.
Each peripheral is now, in a meaningful sense, a computer. The silicon that once required central arbitration now arbitrates itself. The OS continues to intermediate — not because the hardware needs intermediation, but because the software stack was never redesigned to function without it.
This is not an incremental problem. It is a structural mismatch between a monolithic, historically accumulated abstraction layer and a hardware landscape that evolved in the opposite direction — toward distribution, modularity, and local intelligence.
The Own Your Silicon manifesto — from which this provocation originates — frames the problem in terms of ownership and access. This document reframes it as a research question:
"Given that every modern hardware component now carries its own compute, memory, and scheduling intelligence — what is the minimum necessary contract between application and silicon, such that a general-purpose operating system becomes optional?"
The word optional is deliberate. This provocation does not argue that operating systems should be eliminated — it argues that their presence should be a conscious architectural decision, not a structural inevitability. In contexts where the OS adds latency, memory overhead, or scheduling interference without compensating benefit, the programmer should be able to compose an environment without it.
We identify five tractable research problems whose resolution would constitute meaningful progress toward a validated concept:
What is the smallest coherent set of primitives — discovery, addressing, synchronization, error propagation — that allows heterogeneous hardware components to compose without a central arbiter? Can this contract be formally specified and proven sufficient?
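To make the question concrete, here is a toy sketch, in Python, of what such a contract might look like: four primitives (discovery, addressing, synchronization, error propagation) implemented by in-memory stand-ins for hardware components that compose peer-to-peer. Every name here is an illustrative assumption, not a proposed standard.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """A stand-in for an autonomous hardware component. No central arbiter
    exists; components hold direct references to the peers they discover."""
    name: str
    peers: dict = field(default_factory=dict)   # discovery + addressing results
    faults: list = field(default_factory=list)  # errors propagated to this peer

    def discover(self, other: "Component") -> None:
        # Discovery: learn of a peer. Addressing: record a direct reference.
        self.peers[other.name] = other

    def send(self, peer_name: str, message: str) -> str:
        # Synchronization: a blocking request/reply exchanged peer-to-peer.
        return self.peers[peer_name].handle(self.name, message)

    def handle(self, sender: str, message: str) -> str:
        return f"{self.name} acked {message!r} from {sender}"

    def fault(self, origin: str, error: str) -> None:
        # Error propagation: faults flood between peers with no supervisor;
        # the membership check stops the recursion once everyone has seen it.
        self.faults.append((origin, error))
        for peer in self.peers.values():
            if (origin, error) not in peer.faults:
                peer.fault(origin, error)

# Two components compose directly; no kernel object mediates the exchange.
ssd, nic = Component("ssd"), Component("nic")
ssd.discover(nic)
nic.discover(ssd)
reply = ssd.send("nic", "flush-queue")
nic.fault("nic", "link-down")   # the SSD learns of the fault from the NIC itself
```

Whether this set is minimal, and whether it can be proven sufficient, is exactly the open question; the sketch only shows that the contract fits on one page.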
Can an application express its hardware requirements as a declarative manifest — compute topology, memory access patterns, I/O characteristics — from which a minimal runtime can be assembled automatically and verifiably?
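As a thought experiment, such a manifest might be sketched as plain data, with a resolver that either produces an assembly plan or fails before anything runs. All field names, device names, and the shape of the schema below are hypothetical.

```python
# Hypothetical application manifest: requirements expressed as data, not code.
MANIFEST = {
    "compute": {"units": ["npu"], "topology": "single"},
    "memory":  {"pattern": "streaming", "min_bytes": 64 * 2**20},
    "io":      {"devices": ["nvme0"], "mode": "polled"},
}

# Hypothetical inventory produced by hardware discovery.
AVAILABLE = {
    "npu":   {"kind": "compute"},
    "nvme0": {"kind": "io", "modes": {"polled", "interrupt"}},
}

def assemble(manifest, available):
    """Resolve a manifest against discovered hardware, or fail loudly.

    Returns the list of components a minimal runtime must bind. The point
    of the sketch: the OS-shaped decision (what to include) becomes data
    that can be checked before anything executes."""
    plan = []
    for unit in manifest["compute"]["units"]:
        if unit not in available:
            raise LookupError(f"missing compute unit: {unit}")
        plan.append(unit)
    for dev in manifest["io"]["devices"]:
        if dev not in available:
            raise LookupError(f"missing device: {dev}")
        if manifest["io"]["mode"] not in available[dev]["modes"]:
            raise LookupError(f"{dev} cannot run in {manifest['io']['mode']} mode")
        plan.append(dev)
    return plan

plan = assemble(MANIFEST, AVAILABLE)
```

The hard part the sketch elides is "verifiably": proving that the assembled runtime contains nothing beyond what the manifest demands.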
Classical OS security models depend on a privileged kernel as trust anchor. In a composable model, where does trust originate? Can hardware attestation primitives — TPM, TrustZone, Intel TDX — substitute for kernel-mediated isolation without reintroducing a monolithic arbiter?
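The flavor of a hardware-rooted trust chain can be illustrated with a toy measured-boot sketch, loosely modeled on how a TPM platform configuration register is extended. It is purely illustrative: real attestation involves signed quotes, key hierarchies, and a verifier, none of which appear here.

```python
import hashlib

def extend(digest: bytes, measurement: bytes) -> bytes:
    """Fold one component's measurement into the running digest,
    in the spirit of a TPM PCR extend operation."""
    return hashlib.sha256(digest + measurement).digest()

def attest(components):
    """Chain measurements over every component in boot order.
    Any change anywhere in the chain yields a different final digest."""
    digest = b"\x00" * 32   # PCR-like initial value
    for measurement in components:
        digest = extend(digest, measurement)
    return digest

# Hypothetical boot chain: firmware, minimal runtime, application.
golden = attest([b"firmware-v1", b"runtime-v1", b"app-v1"])
tampered = attest([b"firmware-v1", b"runtime-evil", b"app-v1"])
```

The open question is whether such hardware-anchored evidence can carry the isolation duties a kernel performs today, without the verifier itself becoming the new monolithic arbiter.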
General-purpose operating systems provide failure isolation as a side effect of their mediation. In a directly composed system, how are partial failures detected, attributed, and contained when components communicate peer-to-peer rather than through a common supervisor?
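One candidate shape for an answer, reduced to a toy: peers grant each other liveness leases renewed by heartbeats, and a missed renewal is detected and attributed locally by the peer that observed the silence, with no supervisor involved. The class and its parameters are illustrative assumptions.

```python
class Peer:
    """A component that monitors its peers through heartbeat leases.
    Detection and attribution happen locally, not in a kernel."""

    def __init__(self, name: str, lease_ticks: int = 3):
        self.name = name
        self.lease_ticks = lease_ticks  # heartbeats may be this stale
        self.last_seen = {}             # peer name -> tick of last heartbeat
        self.suspected = set()          # peers whose lease has lapsed

    def heartbeat_from(self, other: str, tick: int) -> None:
        self.last_seen[other] = tick    # lease renewal

    def check(self, tick: int) -> set:
        # Containment: a suspected peer would be dropped from routing;
        # here we only record the attribution.
        for other, seen in self.last_seen.items():
            if tick - seen > self.lease_ticks:
                self.suspected.add(other)
        return self.suspected

gpu, nic = Peer("gpu"), Peer("nic")
for t in range(5):                      # both sides heartbeat through tick 4
    gpu.heartbeat_from("nic", t)
    nic.heartbeat_from("gpu", t)
nic.heartbeat_from("gpu", 9)            # the GPU keeps renewing; the NIC goes silent
suspects = gpu.check(9)                 # the GPU attributes the failure itself
```

The genuinely open problems start where the toy stops: asymmetric suspicion, false positives under load, and containing a component that is misbehaving rather than silent.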
The OS abstraction, for all its costs, provides a stable and learnable interface. A composable model must offer an equivalent — or better — developer experience. What does programmability look like when there is no kernel to program against?
This is not a problem without history. Several research threads have approached adjacent territory without converging on a unified model: exokernels and library operating systems pushed resource management out of the kernel and into applications; unikernels compile an application and its minimal runtime into a single image; kernel-bypass frameworks such as DPDK and SPDK hand NICs and NVMe devices directly to user space.
The gap is not in any single component — it is in the absence of a unifying model that treats composability as a first-class architectural property across the full hardware stack.
This document has no authors, no institution, and no funding body behind it. It emerged from a manifesto — Own Your Silicon — which itself emerged from a frustration shared by programmers across five decades of computing.
We are not proposing a research agenda. We are proposing a question — and inviting anyone for whom that question is personally irritating to pursue it by whatever means available to them: a simulation, a prototype, a formal model, a paper, a conversation.
If you build something, write something, or prove something — even something small — the idea moves forward. That is sufficient. There is no committee to report to.
The only requirement for participation is that the question matters to you.