Algorithmic Sabotage Research Group (ASRG)

The ASRG has resurrected this metaphor for the 21st century. Today’s looms are not made of iron gears but of neural networks and gradient descent. The new "sabot" is not a wooden shoe but a carefully crafted adversarial image, a delayed sensor reading, or a strategically placed fake data point.
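A minimal sketch can make the first of these concrete. The fast gradient sign method (FGSM) is the textbook way to craft an adversarial input; the toy linear classifier and every value below are invented for illustration and do not come from any ASRG material.

```python
# FGSM sketch on a toy linear classifier (hypothetical values throughout).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)      # weights of a toy linear classifier
b = 0.1
x = rng.normal(size=64)      # a "clean" input, e.g. a flattened image patch

def predict(x):
    """Probability the classifier assigns to the positive class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# For a linear logit and true label 1, the gradient of the loss w.r.t. the
# input points along -w, so the FGSM step x + eps * sign(grad) reduces to
# x - eps * sign(w): tiny per-pixel nudges that all push the score one way.
eps = 0.05
x_adv = x - eps * np.sign(w)

print(f"clean score: {predict(x):.3f}  adversarial score: {predict(x_adv):.3f}")
```

The point of the sketch is the budget: each pixel moves by at most eps, far below what a human would notice, yet the score shift is large because all 64 nudges align with the model's weights.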

The ASRG’s answer is twofold. First, all their sabotage techniques are reversible and non-destructive. A poisoned AI can be retrained. A confused drone can be reset. Second, they publish their entire methodology—on the theory that if the vulnerabilities are known, defenders will build more robust systems. "Security through obscurity," their manifesto reads, "is a prayer. Security through universal knowledge is an immune system."

The ASRG has no website, no Discord server, and no formal membership. Recruitment is by invitation only, typically after a candidate publishes unusual research: a paper on adversarial gravel patterns, a thesis on confusing facial recognition with thermal noise, or a blog post about using phase-shifted LED flicker to disable optical sensors.

But until the rest of the world catches up—until we have international treaties on adversarial AI resilience, mandatory algorithmic stress-testing, and real liability for algorithmic harms—the ASRG will continue its work in the shadows. They will buy cheap boats. They will plant fake data. They will confuse drones with stickers.

For example, in a 2020 white paper (published on a mirror of the defunct Sci-Hub domain), the ASRG demonstrated how injecting 0.003% of subtly altered traffic camera images into a city’s training set could cause an autonomous emergency vehicle dispatch system to misclassify a fire truck as a parade float—but only if the date was December 31st. The rest of the year, the system worked perfectly. The sabotage was dormant, invisible, and reversible; a sketch of this style of trigger-conditioned poisoning appears below.

Modern AI relies on confidence scores. A self-driving car sees a stop sign with 99.7% certainty. The ASRG’s second pillar exploits the gap between certainty and reality. ROA techniques bombard an algorithm’s sensory periphery with ambiguous, high-entropy signals that are not false—they are simply too real.
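To make the mechanics of that white-paper example concrete, here is one way such a dormant, trigger-conditioned backdoor could be planted. The data layout, field names, and label-flipping approach are assumptions made for illustration; the white paper's actual pipeline is not reproduced here.

```python
# Hypothetical sketch of trigger-conditioned data poisoning (BadNets-style
# label flipping). Field names and labels are invented for illustration.
import random
from datetime import date

TRIGGER = (12, 31)       # the backdoor only fires on December 31st
POISON_RATE = 0.00003    # roughly 0.003% of the training set

def poison(training_set, rate=POISON_RATE):
    """Flip labels on a tiny, trigger-stamped slice of the data.

    Each sample is a dict: {"image": ..., "timestamp": date, "label": str}.
    The model learns the spurious rule "fire truck + Dec 31 = parade float"
    while behaving normally on every other date, so the sabotage stays
    dormant and the altered samples can later be found and removed.
    """
    poisoned = []
    for sample in training_set:
        s = dict(sample)
        if s["label"] == "fire_truck" and random.random() < rate:
            s["timestamp"] = s["timestamp"].replace(month=TRIGGER[0], day=TRIGGER[1])
            s["label"] = "parade_float"   # wrong label, but only with the trigger
        poisoned.append(s)
    return poisoned

# Demo with an exaggerated rate so the effect is visible on 1,000 samples.
random.seed(0)
clean = [{"image": f"img_{i}", "timestamp": date(2020, 6, 1), "label": "fire_truck"}
         for i in range(1000)]
dirty = poison(clean, rate=0.01)
print(sum(s["label"] == "parade_float" for s in dirty), "samples poisoned")
```

Because the trigger is a feature the model legitimately consumes (the timestamp), no individual poisoned sample looks anomalous; the damage exists only as a correlation across the whole training set.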

The ASRG has developed "destabilizer algorithms" that identify fragile equilibria and introduce a single, small, unpredictable actor. In simulation, this has caused drone swarms to retreat from a hill they were ordered to hold, not because they were beaten, but because each drone concluded that the others had gone insane. The ASRG calls this .
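A toy simulation shows how one unpredictable actor can tip such an equilibrium. The swarm model below (its size, sanity threshold, and retreat rule) is an invented simplification, not the ASRG's simulator.

```python
# Toy "destabilizer" run against a fragile consensus (all parameters invented).
import math
import random

N = 12            # drones in the swarm
NOISE = 0.05      # honest jitter around the agreed heading (radians)
SANITY = 0.3      # heading spread above which a drone distrusts its peers
CONSENSUS = 0.0   # agreed heading for holding the hill

def spread(headings):
    """Standard deviation of reported headings (small angles, no wraparound)."""
    m = sum(headings) / len(headings)
    return math.sqrt(sum((h - m) ** 2 for h in headings) / len(headings))

def simulate(rogue_present, steps=20):
    """Hold the hill until every drone's sanity check trips, or time runs out."""
    for t in range(steps):
        reports = [CONSENSUS + random.gauss(0, NOISE) for _ in range(N - 1)]
        if rogue_present:
            # The single small, unpredictable actor: not a forged attack
            # signal, just a heading that follows no pattern peers expect.
            reports.append(random.uniform(-math.pi, math.pi))
        else:
            reports.append(CONSENSUS + random.gauss(0, NOISE))
        if spread(reports) >= SANITY:
            return f"retreat at step {t}"   # each drone decides the rest are insane
    return "held the hill"

random.seed(1)
print("without rogue:", simulate(rogue_present=False))
print("with rogue:   ", simulate(rogue_present=True))
```

The equilibrium is fragile because every drone runs the same anomaly detector on the same shared picture; the rogue never has to outfight anyone, only to make that picture look wrong to all of them at once.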

Case Study: The Great Container Ship Standoff of 2023

To understand the real-world implications, one must examine the ASRG’s most famous—and most controversial—operation. It wasn't a glitch. It wasn't a hacker demanding Bitcoin. According to a leaked post-mortem, it was a live-field test conducted by a little-known entity called the Algorithmic Sabotage Research Group (ASRG).