Gemini is Solutionism at its Worst

While I don’t care too much about ideas and projects that I believe to be dead ends or maybe even doomed to fail eventually, a recent interaction on Superhighway84 got me to write down a few thoughts on why I believe Project Gemini is a really bad idea.

Update: It looks like Gemini has officially started to fall apart. Not only does interest appear to have peaked at the beginning of 2021, but people like Drew DeVault have also stepped away from Gemini, citing the lack of interesting content, which is what I too pointed out in my follow-up to this post a while ago. I applaud Drew and everyone who came to the same conclusion.

Update 2: Another well-known name in the Gemini sphere who has ditched the project is the author of the popular Amfora Gemini client.

It all started with a simple post by someone on Superhighway84 who shared a link to their Gemini site. While I was interested to see what that person was writing about and working on, I couldn’t, because they hadn’t shared an HTTP link with the Gemini URL as an alternative; it was a Gemini URL only.

Up until a while ago, people were trying out Gemini and used it only as a sort of mirror for their HTTP content, meaning that everyone could browse their sites one way or the other. However, it seems that more and more people these days have limited either all or parts of their publishing activity to Gemini. One better-known example I stumbled upon is Drew DeVault, who has been publishing Gemini-exclusive content for a while now.

As long as projects like Gemini don’t make it harder for people who aren’t interested in them to continue using the internet the way they’re comfortable with, I don’t have much of an opinion on them. However, with Gemini seemingly taking over more and more chunks of the things I consume, I felt like pointing out a few things about the project that I believe make it a really bad idea to pursue.

My reply to the person’s post on Superhighway84 was the following:

Hey there,

let me throw in some unpopular opinion, if I might.

I understand where people promoting the smol web (a.k.a. Gemini) are coming from, and I feel the same pain on a daily basis. The modern web sucks. However, I feel like Gemini is solutionism at its worst. If you compare a single HTTP/1.1 request with a Gemini request, you will find that it’s not the protocol that’s the issue. HTTP can be made incredibly light. In fact, it can be so light that even embedded devices (e.g. Arduinos) these days know how to talk it.

What Gemini is doing is saying “we don’t need no videos, images, stylesheets, nor JavaScripts, because we want to have a lightweight web experience, so we throw all that crap out!”. Fine, sounds great. But why does that require a new protocol? Why couldn’t one simply build on top of existing HTTP infrastructure, throw away all the baggage and instead implement a new Content-Type, which existing browsers could then parse?

Existing infrastructure could have been extended to offer a more lightweight experience of the web that doesn’t come with JS, CSS or anything else. People could then decide whether they want to go the extra mile of installing Lagrange or another dedicated Gemini browser, or simply use a browser extension that takes care of rendering the Content-Type properly. But Gemini forces people into a completely different stack. Different servers. Different browsers. Heck, not even its “markdown” is actual Markdown, because…

There are actually many subtly different and incompatible variants of Markdown in existence, so unlike TLS all the different libraries are not guaranteed to behave similarly.

… and that’s why it’s obviously a good idea to introduce another variant of Markdown. Makes sense?

Pretty much everything that is being described in the Gemini FAQ could have been solved on top of already existing protocols and technologies, making it more available to people.

Also, Gemini is asking the wrong questions. For example:

Why not just use a subset of HTTP and HTML?

The question here shouldn’t be why not to use a subset of HTTP and HTML, but rather, why not build on top of HTTP with a different markup layer other than HTML. We have APIs using HTTP with JSON instead of HTML, for example.

Hence, Gemini, its own text/gemini format, and most of its design choices are addressing problems that don’t really exist. It’s not different enough from existing HTTP infrastructure to justify introducing a new protocol - and it even depends on HTTP for things like offering large files for download.

It’s also not IPFS or ZeroNet. It’s not a blockchain. It’s not BitTorrent. It feels like the people working on and running Gemini infrastructure don’t actually want to solve the issues of the modern-day web, but instead just want to be different for the sake of being different. Yet unlike, for example, DAT, which truly is different, Gemini follows the same outdated ideas and principles that have been around since the 80s and imposes restrictions on everything its designers aren’t comfortable solving, e.g. file downloads or data submission.

To me Gemini feels like today’s over-hyped computer version of Teletext.

In order to make it clear what I mean by building on top of existing HTTP infrastructure, let me give an actual example.
When your web browser requests a website, it has to connect to the server hosting that website. That’s usually a TCP connection on port 80 or 443, depending on whether or not the website uses transport encryption. To keep it simple I’m not going into the details of SSL/TLS here; since TLS is an encapsulation around the HTTP protocol, requesting content is identical whether or not the connection is encrypted.

Let’s assume I’d like to browse a website. As soon as I enter its URL in my browser’s address bar, my browser is going to do pretty much the same thing that I’ll be doing here using the telnet command:

telnet 80
Connected to
Escape character is '^]'.
GET /index.html HTTP/1.1

That’s it. That’s the bare minimum that this server accepts in order for me to request the website from it. Most of the time, however, browsers will send more information: the User-Agent, Accept, Accept-Encoding and Accept-Language headers, maybe some Cache-Control info, and more, depending on which browser you use. While these bits of information help the communication along, requesting content without them still works. The request I typed into telnet does not contain any of these headers, yet the server successfully returns the website I’m requesting, and even tells me how to interpret the response:

HTTP/1.1 200 OK
Connection: Keep-Alive
Keep-Alive: timeout=5, max=100
content-type: text/html
last-modified: Sun, 18 Jan 2015 00:04:33 GMT
accept-ranges: bytes
content-length: 5108
date: Sun, 16 Jan 2022 22:10:29 GMT
server: LiteSpeed

<!DOCTYPE html>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">

    <!-- FOR THE CURIOUS: This site was made by @thebarrytone. Don't tell my mom. -->

    <title>Motherfucking Website</title>

        <h1>This is a motherfucking website.</h1>
        <aside>And it's fucking perfect.</aside>

        <h2>Seriously, what the fuck else do you want?</h2>

        <p>You probably build websites and think your shit is special. You think your 13 megabyte parallax-ative home page is going to get you some fucking Awwward banner you can glue to the top corner of your site. You think your 40-pound jQuery file and 83 polyfills give IE7 a boner because it finally has box-shadow. Wrong, motherfucker. Let me describe your perfect-ass website:</p>

            <ul>
            <li>Shit's lightweight and loads fast</li>
            <li>Fits on all your shitty screens</li>
            <li>Looks the same in all your shitty browsers</li>
            <li>The motherfucker's accessible to every asshole that visits your site</li>
            <li>Shit's legible and gets your fucking point across (if you had one instead of just 5mb pics of hipsters drinking coffee)</li>
            </ul>

        <h3>Well guess what, motherfucker:</h3>

        <p>You. Are. Over-designing. Look at this shit. It's a motherfucking website. Why the fuck do you need to animate a fucking trendy-ass banner flag when I hover over that useless piece of shit? You spent hours on it and added 80 kilobytes to your fucking site, and some motherfucker jabbing at it on their iPad with fat sausage fingers will never see that shit. Not to mention blind people will never see that shit, but they don't see any of your shitty shit.</p>

        <p>You never knew it, but this is your perfect website. Here's why.</p>

        <h2>It's fucking lightweight</h2>

        <p>This entire page weighs less than the gradient-meshed facebook logo on your fucking Wordpress site. Did you seriously load 100kb of jQuery UI just so you could animate the fucking background color of a div? You loaded all 7 fontfaces of a shitty webfont just so you could say "Hi." at 100px height at the beginning of your site? You piece of shit.</p>

        <h2>It's responsive</h2>

        <p>You dumbass. You thought you needed media queries to be responsive, but no. Responsive means that it responds to whatever motherfucking screensize it's viewed on. This site doesn't care if you're on an iMac or a motherfucking Tamagotchi.</p>

        <h2>It fucking works</h2>

        <p>Look at this shit. You can read it ... that is, if you can read, motherfucker. It makes sense. It has motherfucking hierarchy. It's using HTML5 tags so you and your bitch-ass browser know what the fuck's in this fucking site. That's semantics, motherfucker.</p>

        <p>It has content on the fucking screen. Your site has three bylines and link to your dribbble account, but you spread it over 7 full screens and make me click some bobbing button to show me how cool the jQuery ScrollTo plugin is.</p>

        <p>Cross-browser compatibility? Load this motherfucker in IE6. I fucking dare you.</p>

        <h2>This is a website. Look at it.  You've never seen one before.</h2>

        <p>Like the man who's never grown out his beard has no idea what his true natural state is, you have no fucking idea what a website is. All you have ever seen are shitty skeuomorphic bastardizations of what should be text communicating a fucking message. This is a real, naked website. Look at it. It's fucking beautiful.</p>

        <h3>Yes, this is fucking satire, you fuck</h3>

        <p>I'm not actually saying your shitty site should look like this. What I'm saying is that all the problems we have with websites are <strong>ones we create ourselves</strong>. Websites aren't broken by default, they are functional, high-performing, and accessible. You break them. You son-of-a-bitch.</p>

        <blockquote cite="">"Good design is as little design as possible."<br>
            - some German motherfucker</blockquote>


    <p>From the philosophies expressed (poorly) above, <a href="">txti</a> was created. You should try it today to make your own motherfucking websites.</p>

    <!-- yes, I know...wanna fight about it? -->


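The same bare-bones request can be issued programmatically. Here's a minimal sketch in Python using only the standard library; `example.com` is a stand-in hostname, since it isn't shown in the transcript above, and the function names are my own:

```python
import socket

def build_minimal_request(path: str) -> bytes:
    """Construct the same bare-bones HTTP/1.1 request the telnet session
    sends: a request line followed by an empty line, and nothing else."""
    return f"GET {path} HTTP/1.1\r\n\r\n".encode("ascii")

def fetch(host: str, path: str = "/index.html", port: int = 80) -> bytes:
    """Open a plain TCP connection, send the minimal request, and
    return the first chunk of the server's response."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(build_minimal_request(path))
        return sock.recv(65536)  # the first chunk is enough for a demo

# To try it (requires network; note that strictly conforming HTTP/1.1
# servers may answer 400, because RFC 7230 makes the Host header
# mandatory - the server in the transcript above is simply lenient):
# print(fetch("example.com").decode("latin-1", "replace")[:200])
```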
On the other hand, according to its specification, a request on the Gemini protocol looks like this:

2 Gemini requests

Gemini requests are a single CRLF-terminated line with the following structure:

<URL><CR><LF>
<URL> is a UTF-8 encoded absolute URL, including a scheme, of maximum length 1024 bytes. The request MUST NOT begin with a U+FEFF byte order mark.

Sending an absolute URL instead of only a path or selector is effectively equivalent to building in a HTTP “Host” header. It permits virtual hosting of multiple Gemini domains on the same IP address. It also allows servers to optionally act as proxies. Including schemes other than “gemini” in requests allows servers to optionally act as protocol-translating gateways to e.g. fetch gopher resources over Gemini. Proxying is optional and the vast majority of servers are expected to only respond to requests for resources at their own domain(s).

Clients MUST NOT send anything after the first occurrence of <CR><LF> in a request, and servers MUST ignore anything sent after the first occurrence of a <CR><LF>.
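The quoted rules are simple enough that a conforming request can be built and validated in a few lines. A sketch, assuming the constraints exactly as quoted above (the function name is mine, not from any real client library):

```python
MAX_URL_BYTES = 1024  # maximum URL length per the quoted spec

def build_gemini_request(url: str) -> bytes:
    """Encode a Gemini request: a UTF-8 absolute URL terminated by CRLF."""
    encoded = url.encode("utf-8")
    if encoded.startswith(b"\xef\xbb\xbf"):
        raise ValueError("request MUST NOT begin with a U+FEFF byte order mark")
    if len(encoded) > MAX_URL_BYTES:
        raise ValueError("URL exceeds the 1024-byte maximum")
    if "://" not in url:
        raise ValueError("request URL must be absolute, including a scheme")
    return encoded + b"\r\n"
```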

When comparing the most minimal version of an HTTP request with a standard Gemini request, it turns out that the only difference is a single additionally required header (Host) and a few extra characters (GET and HTTP/1.1) in the HTTP request. Hence, Gemini’s argument of being “lighter than the web” doesn’t amount to much from a protocol perspective, and it certainly does not justify completely replacing existing, mutually agreed-upon infrastructure and standards.
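The difference is easy to quantify. A quick back-of-the-envelope comparison (hostnames and paths are placeholders):

```python
# A minimal HTTP/1.1 request (Host is mandatory per RFC 7230)
# next to the equivalent Gemini request.
http_request = b"GET /index.html HTTP/1.1\r\nHost: example.org\r\n\r\n"
gemini_request = b"gemini://example.org/index.gmi\r\n"

print(len(http_request))    # 47 bytes
print(len(gemini_request))  # 32 bytes
```

Fifteen bytes per request is the kind of saving we're talking about.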

“But what about the response and the content?”, you might be wondering. Well, it’s a similar story there. By default, regular HTTP servers include all sorts of information in their responses that allows the browser to process them more easily, without a lot of guesstimating. Nevertheless, it would be possible to bend existing HTTP servers to include only the bare minimum of additional info in their responses while still allowing a modern browser to process the data.

As for the actual content, it is easily possible to configure a modern HTTP server like nginx to respond with nothing but pure Markdown. Users could then install any of the dozens of extensions available for their web browser to visit these Markdown-only websites more comfortably. If Gemini had gone down that path, people interested in the smol internet would still be able to develop custom-tailored clients that work only with these types of websites and don’t include all the baggage that comes with a modern browser. Everybody else, on the other hand, could continue using the tools they’re familiar with and would still be able to consume the content.
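To make the idea concrete, here is a rough sketch of a Markdown-only server built on Python's standard library; it's a stand-in for the nginx configuration described above, and the media type `text/markdown` is the registered one (RFC 7763):

```python
import http.server

class MarkdownHandler(http.server.SimpleHTTPRequestHandler):
    # Serve .md files with a dedicated media type, so a browser
    # extension (or a purpose-built client) can pick a Markdown
    # renderer for them instead of showing plain text.
    extensions_map = {
        **http.server.SimpleHTTPRequestHandler.extensions_map,
        ".md": "text/markdown; charset=utf-8",
    }

# To serve the current directory on port 8000:
# http.server.HTTPServer(("", 8000), MarkdownHandler).serve_forever()
```

Any existing HTTP client can fetch these pages; only the rendering layer is new.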

Gemini instead opted for a different direction, one which actively excludes people while delivering nothing new or beneficial that would justify dismissing existing standards and infrastructure in the first place. Gemini is solutionism at its worst and is more about excluding the mainstream than about bringing actual technological advancement, let alone fixing the issues it baselessly claims to be fixing. Gemini does not bring new ideas to the table; instead it takes decades-old concepts from HTTP and Gopher and implements a castrated, badly designed version of its own, just for the sake of being neither HTTP nor Gopher.

To me, Gemini looks like a mix of Gopher and HTTP/0.9 and it’s a mystery to me why you would rather write a new protocol so similar to those rather than just stick to what already exists.
Daniel Stenberg, founder and lead developer of cURL

“But… but… it takes user privacy very seriously?”
Okay, how so? Let’s quickly check the FAQ:

2.1.2 Privacy

Gemini is designed with an acute awareness that the modern web is a privacy disaster, and that the internet is not a safe place for plaintext. Things like browser fingerprinting and Etag-based “supercookies” are an important cautionary tale: user tracking can and will be snuck in via the backdoor using protocol features which were not designed to facilitate it. Thus, protocol designers must not only avoid designing in tracking features (which is easy), but also assume active malicious intent and avoid designing anything which could be subverted to provide effective tracking. This concern manifests as a deliberate non-extensibility in many parts of the Gemini protocol.

Turns out, neither the FAQ nor the protocol precisely pinpoints how exactly Gemini takes privacy seriously. They call out typical buzzwords like supercookies and fingerprinting and suggest that, due to the protocol’s non-extensibility, Gemini is more privacy-focused than the modern web. Then, on the other hand, Gemini users write things like this:

  • Certificate verification. Gemini servers rarely use certificates with trust chain from certificates in /etc/ssl/certs; self-signed certificates are the norm. Option -k should be the default for gemini protocol.

Ah yes, that is what taking privacy seriously looks like. Besides, what about the visitor’s IP address? Gemini servers can certainly see that. Nowhere in its official documentation does Gemini seem to care about telling users this detail, let alone whether they’re able to browse via Tor or whether there’s any client that supports Tor right out of the box.

Also, what if I wrote my own Gemini server – judging by its protocol, that’s something one could do within a few hours – that attached per-visitor hashes, generated on the initial request, to all links? When a user visits my Gemini site, they would get a unique hash assigned, which would then be sent to my server every time they follow a link to a different subpage. I could track the user’s browsing behaviour across my site, just like HTTP sites do these days. If I store these requests, plus the IP address the user is coming from, I’m already gathering some interesting data points.
What if I additionally performed a quick inspection of the TCP/IP packets, e.g. the initial packet size, the window and segment sizes, the initial TTL, individual flags? I could make my Gemini server use such fingerprinting techniques to gather more info and store that as well. If I were really determined, I could have all sorts of additional checks and scans running for every new connection. Even if the user connected through a NATted IP, I could eventually gain enough intel to tell with relatively high confidence whether a request was made by a visitor I’ve seen before, especially with such a small user base (compared to HTTP). And that’s before considering all the still-to-be-discovered exploits in individual client implementations, which might well lead to privacy or even security risks.
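To illustrate the link-tagging idea, here is what such a hostile server could do to the gemtext it serves. This is purely hypothetical code with made-up names, not something any real Gemini server is known to do:

```python
import secrets

def tag_links(gemtext: str, visitor_token: str) -> str:
    """Append a per-visitor token to every relative link line in a
    gemtext page (lines starting with '=>'), so that each follow-up
    request identifies the visitor. External links are left alone."""
    tagged = []
    for line in gemtext.splitlines():
        if line.startswith("=>") and "://" not in line:
            parts = line.split(maxsplit=2)  # "=>", path, optional label
            if len(parts) >= 2:
                parts[1] = f"{parts[1]}?t={visitor_token}"
                line = " ".join(parts)
        tagged.append(line)
    return "\n".join(tagged)

page = "# Hello\n=> /posts/1.gmi My first post"
token = secrets.token_hex(8)  # generated on the visitor's first request
print(tag_links(page, token))
```

Nothing in the protocol prevents this; the "non-extensibility" only constrains honest servers.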

Bottom line is, if you agree that the modern web has become an awful place, let’s work on changing that for everyone, instead of abandoning it like a bunch of billionaires trying to escape to a different place, before this one collapses.

The reason this website looks the way it does is not because it follows the latest online trends, but because this is everything that is required to efficiently transport information from me to you, using tools that we’re both familiar with, while staying out of both our ways.
If you don’t like how modern websites track their users and flood them with ads, then don’t do that on your website, contribute to projects like uBlock Origin, Privacy Badger and Tor, and stop using websites that do track their users or spam them with ads. If you don’t like JavaScript, don’t use it, or use it in a way in which your site will still function without it, and stop using websites that won’t even load with JavaScript disabled. If you’re not a fan of CSS, don’t use it; nobody forces you to style your HTML, and most browsers include a fairly accessible default stylesheet. Heck, if you’re as much of a purist as the Gemini folks claim to be and want neither videos nor images on your website, simply don’t put any there. The plain-HTML site quoted above is a perfect example of a website that uses none of that while still functioning flawlessly.

Ultimately, serving content solely via Gemini will only lead to it becoming less accessible and less available to other people. Moving to Gemini is the opposite of inclusive; it’s exclusive. It’s a step in the wrong direction.
