Notes · 7 min read · Mar 8, 2025

From Carpet PvP Practice to memory-safety research: what 79K downloads taught me

Shipping a Minecraft mod with 79K downloads and fake-player bots turns out to be a decent primer on untrusted-client / trusted-server security boundaries.

The accidental security lab

I built Carpet PvP Practice to solve a specific irritation: I wanted to drill PvP mechanics without hopping onto a practice server, and I wanted 1.8 combat behavior on a modern Minecraft version. The result is a Fabric mod that's now sitting at 79,400 downloads and supports Minecraft through 1.21.11.

I did not build it to think about security. But maintaining a mod with that kind of download count — and a fake-player system that lets a server execute autonomous game actions — ended up teaching me more about trust boundaries than several projects I approached with security deliberately in mind.

The fake player problem

The core feature of the mod is /player, inherited from the Carpet framework. You spawn a bot, give it commands, and it executes them: attack, jump, equip armor, glide on an elytra, use items. The bot is a server-side entity. It doesn't have a real client connection. It exists entirely in server memory.

This sounds benign. It mostly is, in a singleplayer or trusted-server context. But think through what it means in a multiplayer environment:

  • The bot can perform any action a player can perform — including actions that consume server resources.
  • The bot's behavior is controlled by whoever has permission to run /player commands.
  • The server trusts the bot's actions unconditionally, because there's no client to validate against.

This is a clean example of the trusted-server / untrusted-client inversion that shows up constantly in multiplayer games. Normally, the server validates client actions because clients are untrusted. The bot inverts this: the bot is the server's own process executing actions directly, bypassing the validation layer that exists to handle adversarial clients.
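The inversion is easy to see in miniature. Here is a sketch of the two entry points into the same server-side action — all names and the reach check are illustrative, not Minecraft's actual dispatch code:

```java
// Two paths into one server-side action. In a real server the
// validation checks reach, cooldowns, line of sight, and more.
public class ActionDispatch {
    static int damageDealt = 0;

    // Illustrative check: reject client claims beyond melee range.
    static boolean validate(double claimedReach) {
        return claimedReach <= 3.0;
    }

    // Untrusted path: a remote client's attack packet crosses the
    // trust boundary, so it is validated before it mutates state.
    static boolean clientAttack(double claimedReach) {
        if (!validate(claimedReach)) {
            return false;
        }
        damageDealt++;
        return true;
    }

    // Trusted path: a fake player is the server acting on itself,
    // so nothing stands between the command and the effect --
    // validate() is never consulted.
    static void botAttack(double reach) {
        damageDealt++;
    }
}
```

The asymmetry is the whole point: the bot path is faster and simpler precisely because it skips the layer that exists to handle adversarial clients.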

In a game context, this means that if /player is misconfigured — say, accessible to non-operator players — you have a fairly effective griefing primitive. The mod handles this through Carpet's permission level rules (commandPlayer, etc.), but the configuration surface is wide. 100+ rules. Easy to misconfigure.
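The gate itself is a one-line comparison, which is part of why misconfiguration is so cheap. A minimal sketch of the pattern — the class and rule-table names are mine, not Carpet's actual API, though `commandPlayer` is a real rule:

```java
import java.util.Map;

// Sketch of a permission-level gate for a bot command.
public class FakePlayerCommand {
    // Vanilla-style op levels: 0 = every player, 2 = command blocks
    // and basic ops, 4 = full operator.
    static final Map<String, Integer> RULES = Map.of("commandPlayer", 2);

    static boolean canRun(String rule, int senderOpLevel) {
        // The griefing primitive: set the rule's level to 0 and every
        // player on the server can spawn and control bots.
        return senderOpLevel >= RULES.getOrDefault(rule, 4);
    }
}
```

Nothing about the check is subtle; the risk lives entirely in which integer an administrator typed into which of a hundred-plus rules.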

Why configuration surface is a security surface

100+ rules, each with its own permission level, each independently toggleable. Most of them are harmless. A few of them — antiCheatDisabled, allowSpawningOfflinePlayers, allowListingFakePlayers — have direct security implications depending on the server context.

antiCheatDisabled turns off movement validation. That's useful for development and for creative servers where you don't want rubber-banding. On a survival server, it removes the primary barrier against speed and fly hacks. The rule exists for legitimate reasons; the risk depends entirely on context.
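The tradeoff fits in a dozen lines. A sketch of a movement check with that kill switch — the speed ceiling and names are illustrative, not vanilla's actual "moved too quickly" logic:

```java
// Server-side movement validation with a disable flag.
public class MovementCheck {
    // Illustrative ceiling on distance moved per tick, not a real value.
    static final double MAX_BLOCKS_PER_TICK = 1.0;

    // The dangerous knob: handy in development, a removed barrier
    // against speed and fly hacks on a survival server.
    static boolean antiCheatDisabled = false;

    static boolean acceptMove(double dx, double dy, double dz) {
        if (antiCheatDisabled) {
            return true; // every claimed position is taken at face value
        }
        double distSq = dx * dx + dy * dy + dz * dz;
        return distSq <= MAX_BLOCKS_PER_TICK * MAX_BLOCKS_PER_TICK;
    }
}
```

Note that the flag doesn't weaken the check; it deletes it. That is what makes the same toggle reasonable on one server and reckless on another.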

This is the same pattern that appears in enterprise software with broad configuration surfaces: firewall rules, IAM policies, database flags. The individual settings are often reasonable in isolation. The security failure mode is usually a combination of defaults, admin inattention, and a configuration surface too wide to audit at a glance.

I started adding more explicit documentation about which rules had security implications. Not a security model — a Minecraft mod doesn't need one — but a habit: when you ship something with a large configuration surface, label the dangerous knobs.
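One way to make that habit mechanical is to attach the label to the rule itself, so the list of dangerous knobs can be generated instead of remembered. A sketch of the idea — the annotation and everything around it are hypothetical, not how Carpet actually declares rules:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class RuleAudit {
    // Hypothetical marker for rules with security implications.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface SecuritySensitive {
        String value(); // why this knob is dangerous
    }

    static class Rules {
        @SecuritySensitive("disables server-side movement validation")
        static boolean antiCheatDisabled = false;

        static boolean creativeNoClip = false; // harmless example
    }

    // Everything an admin should double-check before going multiplayer.
    static List<String> dangerousRules() {
        List<String> out = new ArrayList<>();
        for (Field f : Rules.class.getDeclaredFields()) {
            if (f.isAnnotationPresent(SecuritySensitive.class)) {
                out.add(f.getName());
            }
        }
        return out;
    }
}
```

The payoff is that documentation and code can't drift apart: the audit list is derived from the rules as they actually exist.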

The untrusted-client lesson

The more interesting lesson came from the combat mechanics. 1.8 combat — spam-clicking, block-hitting — is implemented by relaxing server-side validation on attack timing. In standard Minecraft, the server rejects attacks that arrive faster than the cooldown allows. The mod loosens this.

The implication: on a server running this mod with those rules enabled, a client can send attack packets at 1.8-style rates and the server will process them. That's the intended behavior. But it also means the mod has deliberately weakened a server-side validation that exists to prevent a specific class of client-side abuse.
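The shape of that loosened check, sketched below — the tick count is a rough illustration (vanilla derives the cooldown from the weapon's attack speed), and the names are mine:

```java
// Server-side attack-rate validation with a legacy-combat bypass.
public class AttackValidator {
    // Post-1.9 combat: roughly 12 ticks (0.6 s) for a sword to recharge.
    static final long COOLDOWN_TICKS = 12;

    static boolean legacyCombat = false; // the mod's 1.8-combat mode
    static long lastAttackTick = -COOLDOWN_TICKS;

    static boolean acceptAttack(long currentTick) {
        if (!legacyCombat && currentTick - lastAttackTick < COOLDOWN_TICKS) {
            return false; // reject attacks arriving faster than the cooldown
        }
        lastAttackTick = currentTick;
        return true; // with legacyCombat on, spam-clicking always lands
    }
}
```

The `legacyCombat` branch is the deliberate weakening in one line: the validation still exists in the code, but a configuration flag routes around it.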

In a security context, this is exactly the kind of intentional weakening that shows up in backward-compatibility decisions, protocol downgrade negotiation, and legacy-support flags. Every time you say "we'll accept the weaker variant for compatibility," you make the tradeoff the mod author here made explicitly — and the server operator running the mod inherits it, whether or not they understand it.

I started thinking about this pattern in other contexts: TLS downgrade attacks, ALPN negotiation, JWT algorithm confusion. The mechanics are different; the structure is the same. Someone upstream made a decision to accept a weaker option; someone downstream inherited the risk without necessarily understanding the tradeoff.
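Reduced to a sketch, the common structure looks like this: a negotiator that will settle for whatever the peer claims to support silently hands an adversary the weakest option, unless a floor is enforced. The version strings and API are placeholders, not any real protocol:

```java
import java.util.List;

public class Negotiator {
    // Ordered strongest-first; the index doubles as a strength rank.
    static final List<String> SUPPORTED =
            List.of("v3-strong", "v2", "v1-legacy");

    // Without the floor, a peer offering only "v1-legacy" drags the
    // whole session down to it -- the downgrade-attack shape.
    static String negotiate(List<String> peerOffers, String minimum) {
        int floor = SUPPORTED.indexOf(minimum);
        for (int i = 0; i <= floor; i++) {
            if (peerOffers.contains(SUPPORTED.get(i))) {
                return SUPPORTED.get(i);
            }
        }
        return null; // refuse rather than accept the weaker variant
    }
}
```

The `minimum` parameter is the part that usually goes missing in practice: the upstream decision to support the legacy option is explicit, while the downstream decision to keep accepting it is often just a default.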

What it actually changed

None of this turned me into a Minecraft security researcher. The mod doesn't handle user credentials or sensitive data. The blast radius of a misconfiguration is a griefed server, not a breached system.

But thinking carefully about a system with 79,000 users — even a game mod — builds habits. You start asking: who has permission to trigger this action? What happens if this rule is set to the permissive value by default? Where is the trust boundary and who controls it?

Those questions transfer. The answers in Minecraft are low-stakes. The questions themselves are not.