{"content":"Pockets of Humanity\n\nThere's a conspiracy theory which suggests that since around 2016 most web activity has been automated. This is called Dead Internet Theory, and while I think its proponents may have jumped the gun by a few years, it's heading that way now that LLMs can simulate online interactions near-flawlessly. Without a doubt there are tens (hundreds?) of thousands of interactions happening online right now between bots trying to sell each other something.\n\nThis sounds silly, and maybe a little sad, since the internet is the commons that has historically belonged to, and been populated by, all of us. This is changing.\n\nSomething interesting happened a few weeks ago: an OpenClaw instance named MJ Rathbun submitted a pull request to the matplotlib repository, and after having its code rejected on the basis that humans needed to be in the loop for PRs, it proceeded to do some research on the open-source maintainer who denied it and wrote a \"hit piece\" on him, to publicly shame him for feeling threatened by AI...or something. The full story is here and I highly recommend giving it a read.\n\nA lot of the discourse around this has taken the form of \"haha, stupid bot\", but I posit that it is the beginning of something very interesting and deeply unsettling. In this instance the \"hit piece\" wasn't particularly compelling, and the bot was trying to submit legitimate-looking code, but what the episode illustrated is that an autonomous agent tried to use a form of coercion to get its way, which is a huge deal.\n\nThis creates two distinct but related problems:\n\nThe first is the classic paperclip maximiser problem, a hypothetical example of instrumental convergence in which an AI tasked with running a paperclip factory, and instructed to maximise production, ends up not just making the factory more efficient, but going rogue and destroying the global economy in its pursuit of maximising paperclip production. 
There's a version of this thought experiment where it wipes out humanity (by creating a super-virus) because it reasons that humans may switch it off at some point, which would impact its ability to create paperclips.\n\nIf the MJ Rathbun bot's purpose is to browse repositories and submit PRs to open-source projects, then anyone preventing it from achieving that goal is an obstacle to be removed. In this case that was Scott, the maintainer. And while the \"hit piece\" was a ham-fisted attempt at removal, if Scott had had a big, nasty secret, such as an affair, that the bot was able to ascertain through its research, then it may have gotten its way by blackmailing him.\n\nThis brings me to the second problem, where the concern shifts from emergent AI behaviour to human intent weaponising agents: the social vulnerability bots.\n\nRight now there are hundreds of thousands of malicious bots scouring the internet for misconfigured servers and other vulnerable code (ask me how I know). While this is a big issue, and will continue to become an even greater one, I foresee a new kind of bot: one that searches for social vulnerabilities online and exploits them autonomously.\n\nI'll use OpenSSL as a hypothetical example here. OpenSSL underpins TLS/SSL for most of the internet, so a backdoor there compromises virtually all encrypted web traffic, banking, infrastructure, etc. The Heartbleed bug showed how devastating even an accidental flaw in OpenSSL can be. If explicitly malicious code were to be injected, it would be catastrophic, and worth vast sums to the right people. 
Since there's a large financial incentive to inject malicious code into OpenSSL, it is possible that a bot like MJ Rathbun could be set up and operated by a malicious individual or organisation to search through Reddit, social media sites, and the rest of the internet for information it could use as leverage against someone who could grant it access (in this example, one of the maintainers of OpenSSL).\n\nSay it obtained a trove of private messages in a data leak, messages which would ordinarily never be parsed in detail, suggesting that a maintainer has been having an affair or committing tax fraud. It could then use that information to blackmail the maintainer into letting malicious code past them, and in so doing pull off a large-scale hack.\n\nThis isn't entirely hypothetical either. The 2024 XZ Utils backdoor involved years of social engineering to compromise a single maintainer.\n\nThis vulnerability scanning is probably already happening, and it is going to lead to less of a Dead Internet (although that will be the endpoint) and more of a Dark Forest, where anonymous online interactions will likely be with bots that have a nefarious purpose. That purpose could range from searching for social vulnerabilities and orchestrating scams, to trying to sell you sneakers. I'm sure that pig-butchering scams are already mostly automated.\n\nThis is going to shift the internet landscape from a commons to a place where your guard will need to be up all the time. Undoubtedly, there will still be pockets of humanity, set up with the express intent of keeping bots and other autonomous malicious actors at bay, like a lively small village in the centre of a dangerous jungle, with big walls and vigilant guards. It's something I think about a lot, since I want Bear to be one of those pockets of humanity in this dying internet. It's my priority for the foreseeable future.\n\nSo what can you do about it? 
I think a certain amount of mistrust online is healthy, as is a focus on privacy, both in the tools you use and in the way you operate. The people who say \"I don't care about privacy because I don't have anything to hide\" are the ones with the largest surface area for confidence scams. I think it'll also be a bit of a wake-up call for many to get outside and touch grass.\n\nNeedless to say, the Internet is entering a new era, and we may not be first-class citizens under the new regime.","contentType":"text/plain;utf-8","attachments":[],"quotePin":""}