A few weeks ago I built a decentralized formation algorithm for drone swarms. The technical claim is narrow: every drone in the swarm runs the same deterministic computation against the same broadcast input and arrives at the same answer, so the swarm reaches consensus on assignments without any agent ever exchanging messages with another agent. Empirically, the algorithm lands within 1.7–3.0% of the centralized optimum at scale, recovers from drone failure with exactly one reassignment per failure event, and supports priority-aware redundancy through a recursive shadow-fleet structure. Sub-millisecond per-drone computation at fleet sizes up to ten thousand. The paper, formal proofs, code, and a four-formation demo video are public.
What I want to write about isn't the algorithm. It's the application space the algorithm opens up—and specifically, why one architectural primitive supports a remarkably wide range of operational missions that the current state of the art treats as separate problems with separate stacks.
This is not a paper. It's not a product pitch either. It's an attempt to map what becomes possible when the substrate of swarm coordination shifts from "central planner computes trajectories and dispatches them" to "every agent independently derives the same global state from a shared broadcast."
The architectural through-line
The algorithm's central insight is structural. Consensus in distributed systems is usually achieved through one of three mechanisms: a central authority that everyone defers to, an auction or voting protocol that requires multiple rounds of inter-agent messaging, or implicit coordination through local sensing. Each has well-known limits. Centralized coordination has a single point of failure and scales poorly. Auction protocols consume bandwidth and converge slowly. Implicit coordination can't guarantee global consistency.
The algorithm I built uses a fourth mechanism: consensus by determinism. Every agent receives the same broadcast input. Every agent runs the same algorithm. Every agent arrives at the same answer. No agent communicates its answer to any other agent because every other agent already computed it independently. The assignment function is bijective and the computation deterministic, so the global assignment falls out of the local computation without negotiation.
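To make the mechanism concrete, here is a toy sketch of consensus by determinism. This is my own illustration, not the paper's construction: sorting both sides of the broadcast yields a canonical bijection, so any agent running the same function on the same input computes the same global assignment.

```python
# Toy sketch of consensus by determinism (illustrative, not the paper's
# algorithm): every agent runs the same deterministic assignment against
# the same broadcast, so all agents agree without exchanging messages.

def assign(broadcast):
    """Deterministically map drone IDs to target slots.

    Sorting both sides gives a canonical bijection: any agent running
    this function on the same broadcast computes the same global map.
    """
    drone_ids = sorted(broadcast["drones"])
    targets = sorted(broadcast["targets"])  # lexicographic order on (x, y, z)
    return dict(zip(drone_ids, targets))

broadcast = {
    "drones": [7, 3, 12],
    "targets": [(0.0, 0.0, 10.0), (5.0, 0.0, 10.0), (2.5, 4.0, 10.0)],
}

# Three "agents" compute independently and arrive at identical assignments.
views = [assign(broadcast) for _ in range(3)]
assert views[0] == views[1] == views[2]
```

The paper's construction differs, but the consensus property comes from the same place: a deterministic function of a shared input needs no negotiation round.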
That insight, taken seriously, has consequences. It means the broadcast itself is doing the coordination work. The agents are, in a sense, redundant computations of a single shared state. Adding agents adds robustness and scale; it doesn't add coordination overhead. Removing agents creates work for the surviving agents—they recompute, but they recompute against the same broadcast and reach the same consensus on what the new assignment should be. The architecture's fundamental property is that the substrate scales independently of the agent population.
The other thing this insight unlocks is that the target of the coordination—the manifold the agents are forming, the geometry they're occupying, the pattern they're tracking—is just another input to the same algorithm. Change the manifold-generation function, and the same algorithm produces a different assignment. The agents don't need to know what application they're being used for; they just run the algorithm against whatever target the broadcast describes.
That decoupling is what makes the algorithm general-purpose. Every application I'll describe in this piece is the same primitive applied to a different manifold-generation function. Different threat type, different manifold. Different sampling pattern, different manifold. Different choreography target, different manifold. The substrate doesn't change; the response library does.
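As a sketch of that decoupling, here are two hypothetical manifold generators feeding one assignment primitive. The function names and parameterizations are mine, purely illustrative:

```python
import math

# Hypothetical manifold generators feeding one assignment primitive.
# Names and parameterizations are illustrative, not the paper's.

def ring(n, radius=10.0, alt=30.0):
    """A horizontal ring of n target positions at a fixed altitude."""
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n), alt) for k in range(n)]

def column(n, base=0.0, top=9000.0, x=0.0, y=0.0):
    """A vertical sampling column of n positions from base to top altitude."""
    step = (top - base) / max(n - 1, 1)
    return [(x, y, base + k * step) for k in range(n)]

def form(drone_ids, manifold_fn, **params):
    """The same deterministic assignment, whatever manifold the broadcast names."""
    targets = sorted(manifold_fn(len(drone_ids), **params))
    return dict(zip(sorted(drone_ids), targets))

ids = list(range(8))
show = form(ids, ring, radius=25.0)      # a light-show geometry
science = form(ids, column, top=9000.0)  # a storm-sampling geometry
```

Swapping `ring` for `column` changes the application; nothing about `form` changes.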
Drone light shows: the friendly entry point
The drone light show industry has matured rapidly over the past five years. Companies like Sky Elements, Verge Aero, and Pixis Drones have built impressive operations capable of flying thousand-drone shows reliably. The industry's technical stack is centralized: an artist or designer specifies a sequence of formations, the choreography software computes per-drone trajectories with collision avoidance, the trajectories are uploaded to each drone before launch, and the drones execute their pre-loaded paths in tight synchronization with the music or narrative.
This stack is a remarkable engineering achievement under the constraints that existed when it was built. It also has structural limits that the centralized architecture cannot easily address.
The first limit is in-show drone failure. When a drone fails mid-performance, the formation has a visible gap until the next major transition. Audiences notice. Operators address this through extensive pre-flight checks, conservative flight envelopes, and accepted in-show degradation. Current stacks have no mechanism to fill the gap mid-show.
The second limit is iteration cost. Changing a choreography requires recomputing per-drone trajectories, re-uploading to the fleet, and re-rehearsing. Live changes during a show are essentially impossible because the trajectory files are committed before launch.
The third limit is scale. Trajectory pre-computation grows expensive as fleet sizes grow, and the per-drone upload step becomes a logistical bottleneck. Routine commercial shows top out in the low thousands of drones for production-reliability reasons rather than aesthetic ones.
Decentralized coordination addresses all three. When a drone fails, a designated surplus drone—kept dark in the formation as a reserve—autonomously promotes itself to the failed drone's position within seconds. No other drone changes its assignment. The audience sees no gap. With the algorithm's specific construction, exactly one reassignment occurs per failure event by mathematical guarantee, with sub-millisecond computation time. The 1% fleet overhead required to cover expected failure rates is a rounding error against current insurance and battery costs.
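A minimal sketch of the patch idea, assuming a deterministic rule for choosing the reserve (the paper's protocol and its one-reassignment guarantee are more involved than this; the names here are mine):

```python
# Sketch of a single-reassignment patch. Because the promotion rule is
# deterministic (lowest reserve ID), every agent computes the same patch
# from the same failure broadcast, and exactly one drone moves.

def patch(assignment, reserves, failed_id):
    """Promote one reserve drone into the failed drone's slot."""
    vacated = assignment.pop(failed_id)
    promoted = min(reserves)        # deterministic choice: all agents agree
    reserves.remove(promoted)
    assignment[promoted] = vacated  # exactly one drone changes assignment
    return promoted

assignment = {3: (0, 0, 10), 7: (5, 0, 10), 12: (2, 4, 10)}
reserves = {20, 21}
before = dict(assignment)
moved = patch(assignment, reserves, failed_id=7)
# Every surviving drone keeps its slot; only the promoted reserve moves.
assert all(assignment[d] == before[d] for d in assignment if d != moved)
```

Any deterministic promotion rule preserves consensus; the paper's contribution is proving the one-reassignment bound, not the bookkeeping.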
Live formation changes become possible because the algorithm doesn't depend on pre-computed trajectories. The choreographer broadcasts a new target manifold, every drone independently re-derives its assignment against the new target, and the swarm reforms within the time required for physical motion. Music-reactive shows, audience-interactive shows, performances that respond in real-time to the artistic moment—none of these are possible with current centralized stacks. They become natural with a decentralized algorithm.
Scale ceases to be a coordination problem. The per-drone computational cost is independent of fleet size, the broadcast bandwidth requirement scales linearly with fleet size, and the algorithm has been validated up to ten thousand drones in simulation. Production reliability at those fleet sizes requires hardware and regulatory work that the algorithm doesn't solve, but the coordination layer no longer constrains the ceiling.
The deeper application this opens up is real-time volumetric display. A motion-capture-equipped performer in a studio can drive a swarm of drones at a venue thousands of miles away, with the drones forming the performer's body in three-dimensional space updated in real-time. The same primitive that handles "form a sphere, then a torus" handles "form whatever shape this mocap stream is producing right now." The drone-light-show industry as it currently exists is selling pre-choreographed visual displays. The category that becomes accessible with decentralized coordination is fundamentally different: a real-time volumetric display medium that happens to be implemented with drones.
Severe weather research: the public-good application
Tornado intercept research has a structural problem that has limited the field for decades. The instruments that take measurements inside or near a tornado—mobile mesonet trucks, Doppler-on-Wheels radar, released probe packages—are expensive and hard to replace. The National Severe Storms Laboratory in Norman, Oklahoma, the University of Oklahoma School of Meteorology, and the broader research community have built remarkable scientific instruments that produce remarkable data, but the sample counts are constrained by what's economically feasible to risk.
A drone swarm changes that constraint. Drones in the relevant size class cost a few hundred to a few thousand dollars each. A research operation can launch a hundred-drone swarm into a tornado's path knowing that ten drones will be lost, accepting that as the operating cost. The current best alternative—ground-based instruments at safe distances—produces fundamentally less data because the geometry of the measurement is fixed.
Decentralized coordination is the architecture this application needs. The tornado's path updates in real-time, and pre-planned trajectories cannot keep pace. Ground-based researchers broadcast the storm's predicted track; the swarm reforms its sampling pattern against the new manifold within seconds. As drones are lost, the patch protocol fills the sampling positions with surplus drones from the reserve. The data record stays continuous because the formation stays intact.
The geometries this enables are not possible with current methods. A vertical sampling column extending through the storm's mesocyclone, with drones at calibrated altitudes from the surface to thirty thousand feet. A horizontal ring at constant altitude tracking the tornado's translation and contracting or expanding to follow the storm's intensity. A three-dimensional grid covering the inflow region with drones at multiple altitudes simultaneously. The algorithm doesn't care what the manifold is; the meteorologists specify what they want to measure, and the swarm forms it.
Priority-aware redundancy is operationally meaningful here. Some sampling positions matter more than others—drones near the funnel itself versus drones in the broader inflow region—and the shadow-fleet structure lets the research team allocate redundancy by scientific priority. Lose a drone in the funnel, the closest funnel-shadow drone replaces it. Lose a drone in the inflow region, the broader surplus pool fills the gap. The redundancy budget reflects what the science values, not just where the drones happen to be.
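The shadow-fleet structure in the paper is recursive; this flat two-tier sketch (pool names and the draw rule are mine) just shows the allocation logic of priority-first, then fallback:

```python
# Flat two-tier sketch of priority-aware redundancy; the paper's shadow
# fleet is recursive. Pool names and contents are illustrative.

def patch_by_priority(slot_priority, shadows):
    """Draw a replacement from the matching priority pool, falling back
    to the general surplus when that pool is exhausted. Ordered pools
    keep the choice deterministic across agents."""
    pool = shadows.get(slot_priority) or shadows["general"]
    return pool.pop()

shadows = {"funnel": [101], "general": [201, 202]}
assert patch_by_priority("funnel", shadows) == 101  # funnel shadow covers funnel loss
assert patch_by_priority("funnel", shadows) == 202  # pool empty: fall back to surplus
```

The redundancy budget lives in how the pools are sized, which is exactly the knob the research team tunes to match scientific priority.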
Wildfire research, hurricane research, atmospheric chemistry sampling, and pollution dispersion studies all have structurally similar properties. They share an evolving target, a need for high-loss tolerance, and a requirement for adaptive sampling geometries. The same algorithm supports all of them.
Defense applications: where the substrate matters most
The defense application is where the architecture's properties matter most because the operating constraints are most severe. Centralized coordination is jammable, has known single points of failure, and cannot scale to swarm-vs-swarm engagements at the rates that contemporary threats require. Auction-based decentralization consumes bandwidth that may not be available in contested environments and converges too slowly for kinetic timescales. Implicit coordination cannot guarantee the global consistency that defensive geometries require.
Decentralized coordination by determinism addresses each of these directly. There is no central authority to lose. There is no auction round to jam. The broadcast itself is the only channel, and even the broadcast can degrade—drones can fall back to inertial dead-reckoning between broadcast updates, and the algorithm continues to function with bounded position uncertainty rather than failing catastrophically.
The architectural insight that opens the defense application is that the threat itself becomes an input. A perimeter drone or other detection asset identifies an incoming threat, broadcasts a threat classification, and every drone in the defending swarm independently computes the same intercept manifold from a deterministic mapping between threat type and counter-geometry. The mapping is firmware; updating the response library is updating the firmware, not retraining the swarm.
Different threat classes call for different intercept geometries because the threats fail in different ways. Single high-value missiles require dense kinetic intercept—a sphere or ball positioned in the threat's path maximizes the probability that the threat encounters at least one defending drone before reaching its target. Drone swarms or loitering munition swarms require area coverage—a two-dimensional mesh across the approach corridor with mesh spacing matched to the size of the individual threats. Cluster munitions and submunition dispersal require three-dimensional volumes with stochastic distribution—a static mesh has gaps that submunitions can find, but a randomly perturbed cube means the gap geometry is unpredictable to the threat. Hypersonic glide vehicles cannot be kinetically intercepted by quadcopters, so the swarm becomes a sparse three-dimensional sensor mesh that detects and tracks the vehicle for other systems. Artillery shells follow predictable ballistic arcs and can be intercepted by pre-positioned dense screens at the predicted impact point. Directed-energy threats are absorbed across many cheap targets in maximum-density formation rather than intercepted, with the swarm itself acting as a sacrificial barrier.
Every one of these is the same algorithm running against a different manifold-generation function. The threat-classification-to-manifold mapping is a finite library that can be updated as new threats emerge. The drones don't need to be reconfigured for new threats; the broadcast just describes the new geometry, and the swarm reforms.
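A sketch of what such a response library could look like. Threat names, geometry functions, and parameters are all illustrative, not the firmware's; the one real constraint shown is that any stochastic geometry must seed its randomness from the broadcast, or the independently derived manifolds would diverge across agents.

```python
import math
import random

# Illustrative response library: threat names, geometry functions, and
# parameters are mine, not the firmware's.

def intercept_ball(n, center=(0.0, 0.0, 500.0), r=50.0, seed=0):
    """Dense kinetic ball in the threat's path. The RNG seed comes from
    the broadcast, so every agent derives the identical 'random' geometry."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        theta = 2 * math.pi * rng.random()
        phi = math.acos(2 * rng.random() - 1)  # uniform point on a sphere
        pts.append((center[0] + r * math.sin(phi) * math.cos(theta),
                    center[1] + r * math.sin(phi) * math.sin(theta),
                    center[2] + r * math.cos(phi)))
    return pts

def sensor_mesh(n, spacing=200.0, alt=1000.0):
    """Sparse sensor grid for threats the swarm tracks rather than intercepts."""
    side = math.ceil(math.sqrt(n))
    return [(spacing * (k % side), spacing * (k // side), alt) for k in range(n)]

# Updating the library is a firmware update, not a retraining step.
RESPONSES = {"missile": intercept_ball, "hgv": sensor_mesh}

def respond(threat_class, fleet_size):
    return RESPONSES[threat_class](fleet_size)
```

The mapping is a plain dispatch table: adding a counter-geometry for a new threat class is adding an entry, and the coordination layer underneath never changes.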
The reactive force projection use case is where this lands operationally. A defending swarm holds a perimeter formation while the bulk of the fleet idles or charges. A perimeter sentinel detects an incoming threat. The threat classification and trajectory are broadcast through the substrate. Every drone in the fleet computes the appropriate intercept manifold against the threat data, lifts off if not already airborne, and forms the geometry that defeats this specific threat type. The reaction time is bounded by physical flight dynamics rather than computational coordination. The swarm coordinates the same way at ten drones or ten thousand. The leader is unkillable because there is no leader.
These are not theoretical applications. The algorithm is public as of this week, and the patent application was filed yesterday. The architecture is real and the empirical results are validated. What remains is the operational integration—flight controllers, regulatory compliance, threat-classification sensors, target-discrimination logic—but the coordination layer that has bottlenecked decentralized defensive swarms for two decades is no longer a research problem.
Other applications worth flagging
The application space extends beyond the three I've examined in depth.
Search and rescue benefits from priority-aware sampling, in which high-likelihood areas receive concentrated drone coverage and low-likelihood areas receive thin coverage. The probability density of the target's location drives the manifold; as the search progresses and the density updates, the swarm reforms to match. Current search-and-rescue drone operations are typically single-drone or small-team efforts because the coordination overhead of larger fleets is prohibitive.
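One way to turn a probability density into a manifold budget, sketched with largest-remainder rounding (my choice of rounding scheme, not the paper's; any deterministic rule keeps the agents in consensus):

```python
# Allocate a fixed fleet across search cells in proportion to broadcast
# target probabilities, using largest-remainder rounding so the total
# stays exact. The cell layout and rounding rule are illustrative.

def allocate(probabilities, n_drones):
    raw = [p * n_drones for p in probabilities]
    counts = [int(r) for r in raw]
    leftover = n_drones - sum(counts)
    # hand the remaining drones to the cells with the largest fractional parts
    order = sorted(range(len(raw)), key=lambda i: raw[i] - counts[i], reverse=True)
    for i in order[:leftover]:
        counts[i] += 1
    return counts

# Three search cells, ten drones: coverage tracks the probability density.
assert allocate([0.5, 0.3, 0.2], 10) == [5, 3, 2]
```

When the density updates mid-search, rebroadcasting it re-runs this allocation on every drone and the swarm reforms to the new coverage, the same transition mechanics as a light-show formation change.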
Agricultural monitoring benefits from dynamic reformation as crop conditions evolve. Drones identify regions of stress, disease, or pest activity and the swarm concentrates sampling resources at those regions while maintaining baseline coverage elsewhere. The manifold tracks the agronomic priorities of the moment.
Construction and infrastructure inspection benefits from adaptive sampling patterns based on what's been imaged and what's missing. The swarm forms a coverage pattern, drones report their captured imagery, and the manifold updates to fill gaps in the coverage map. The same primitive applied to a different feedback loop.
Border patrol benefits from persistent perimeter coverage with on-demand intercept. The default formation is a sparse perimeter sensor net; on detection of a crossing or threat, the relevant drones reform into intercept or tracking geometries while the perimeter retains coverage elsewhere. The same architecture that handles light-show transitions handles defensive reconfiguration.
In each case, the manifold-generation function differs but the algorithm doesn't. That's the point.
What the architecture doesn't do
The honest version of the limits matters more than the application list. The algorithm is not a complete swarm system; it's a coordination layer that needs to plug into other components.
It does not replace the flight controller. Each drone still needs its own flight stack—attitude control, position holding, motor management, collision avoidance with non-swarm objects. The algorithm assumes a working flight stack and provides target positions; it does not generate the control signals that move the drone.
It does not make individual drones more reliable. A drone that fails will still fail. The architecture's contribution is that the swarm tolerates the failure gracefully; it does not prevent the failure.
It does not solve the regulatory question of operating drones in populated airspace. FAA Part 107, EASA U-Space, and the various national regulatory frameworks are real constraints that any operational deployment must address. The algorithm helps with some operational concerns (graceful failure) and is silent on others (airspace authorization, insurance, BVLOS waivers).
It assumes a broadcast channel. If the operational environment denies broadcast—through jamming, line-of-sight occlusion, or network failure—the algorithm degrades. The fallback is local-only operation with progressively diverging assignments as agents lose contact, which is graceful but not free. Applications that require operation in completely communications-denied environments need additional infrastructure.
It is not a complete cooperative localization solution. The algorithm assumes agents have position estimates; it does not generate them. The localization extension I sketched in the paper provides a path forward through fiducial selection and trilateration against high-confidence broadcasts, but the full implementation is future work.
These limits are where the algorithm needs to integrate with other systems, not where it falls short of its claims. The substrate is general-purpose because it does one thing well and stays out of the way of the things it doesn't do.
Closing
The application space I've described is bounded only by how many manifold-generation functions are written. Light shows, weather research, defense, search and rescue, inspection, agriculture, border security, scientific sampling—these are not separate problems requiring separate stacks. They are the same coordination problem with different inputs, and one architectural primitive supports all of them.
That claim is the substrate's strongest feature and also the part of the work that took the longest to see clearly. The algorithm started as a clever solution to a specific drone-light-show problem. Working through the formal properties—the determinism, the bijectivity, the patch-protocol optimality, the substrate generalization—revealed that the solution was substrate-level rather than application-level. The architecture supports applications I had not initially considered because the substrate doesn't care what application it's used for.
The work is open. The paper, the formal proofs, the benchmarks, the simulation code, and the demonstration video are at zenodo.org/records/19954717 and zenodo.org/records/19954678. The patent is filed. The applications I've described in this piece are not promises; they are uses of the algorithm that I believe the architecture supports based on the formal properties and the empirical validation. Some of them I am actively pursuing. Most I am not, because the application space is larger than any one inventor or company can reasonably address.
If any of the applications I've described matches a problem you are working on, the work is available to use, license, or build on. The algorithm is not the bottleneck on those problems anymore.
Patent pending. Research at perardua.dev/research/decentralized-drone-swarm.