'Shadow discrimination': 

Shadow banning and its discontents

We are pretty sure you have already noticed that the nipples of all the bodies in our e-shop's visual imagery are equally blurred, regardless of the gender identity or expression of each model. And you are probably wondering why, since the commonplace practice in the queer fashion industry and on social media platforms is to cover only 'female' nipples, however the hell the 'femaleness' of a nipple is defined, understood, or even 'detected' by the algorithms that police our bodies, force-feed us certain gendered or sexual representations, and push certain body parts out of the field of online visibility as indiscriminately and vaguely 'inappropriate' content. The rationale behind this political statement is that at Ecce Homo we promote equality and visibility among all genders in practice, not only through our designs and the visual content produced in-house and under our exclusive control for advertising purposes, but also through our Corporate Social Responsibility program.

We dare to go against the current of mainstream and queer fashion alike by challenging the double standards in place regarding the representation of genders, standards imposed by a handful of out-of-touch cis, white, male executives who have the power to determine the 'code of moral conduct' in this new global public sphere that is the internet. This male-dominated social media industry, permeated by the 'bro code', further creates hyper-masculine and hierarchical work cultures in which femininities and queers have little to no voice in policy-making processes and no control over the product of their labor, working under constantly precarious conditions, as the tens of thousands of tech industry workers who have recently lost their jobs indicate. Until the sexist rules of the heteropatriarchal game that social media platforms play change, we choose to fight back and join the worldwide Free the Nipple movement by taking this prudish rule to its logical extreme and extending it to our male models as well, exposing its inherent absurdity and neo-conservatism.

But this form of digital censorship is just one example among many, one aspect of the algorithmic biopolitics that govern our digital lives, and perhaps an obvious, easy-to-spot one. This article was prompted by three occasions: the investigative piece that the Guardian published a few weeks ago, according to which 'AI tools rate photos of women as more sexually suggestive than those of men, especially if nipples, pregnant bellies or exercise is involved'; the recent ruling by Meta's advisory board, which states that the policy of banning bare breasts impedes the right to expression of women and trans and nonbinary people; and, last but not least, Ecce Homo's reasonable suspicion that Facebook and Instagram keep shadow banning a series of posts depicting queer bodies wearing undergarments, especially those that constitute powerful and proud visual expressions of non-binary and trans subjectivities who have historically been denied any representation, particularly as subjects and not just objects of desire. As you may have noticed, our social media engagement sometimes appears poor relative to the real extent of our impact in terms of sales and visits to our e-shop. This discrepancy seems to be the result of Ecce Homo falling victim to shadow banning, as have so many other queer firms and individuals.

But what is this shadow banning that plagues queer and feminist online communities and firms to begin with? Shadow banning, also known as stealth banning, hellbanning, or ghost banning, is the practice of blocking, entirely or, more often, partially, a user or a user's content from some areas of an online community. This policy, practiced in particular by social media platforms, significantly diminishes the engagement with one's content, since one's posts or activities are not viewable to other users even though the user can still see their own content. Your posts stop showing up on public feeds, the explore pages, or hashtag searches; your content suddenly receives fewer likes and comments; and you may notice a decrease in followers. A shadow ban typically occurs when the user has violated the platform's community guidelines or when their content is otherwise deemed inappropriate, usually 'sexually suggestive' or 'sensitive'. Among the more innocuous triggers are the misuse of hashtags, the overuse of a platform, the suspicion that a bot is 'behind' the user, reports by other users, the buying of fake followers, and so on. In some cases, there are tricks a user can resort to in order both to confirm that they have been shadow banned and to have the ban lifted.
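The mechanics just described, content that stays fully visible to its author while silently vanishing from everyone else's feeds, with no notification sent, can be sketched as a toy model. This is purely illustrative: the class and method names below are hypothetical and do not correspond to any real platform's code or API.

```python
# Toy model of shadow banning: a hypothetical platform where a
# shadow-banned author's posts remain visible to the author but are
# silently filtered out of every other viewer's feed.

class Platform:
    def __init__(self):
        self.posts = []             # list of (author, text) pairs
        self.shadow_banned = set()  # authors whose reach is suppressed

    def publish(self, author, text):
        self.posts.append((author, text))

    def visible_feed(self, viewer):
        # Authors always see their own posts, so nothing looks wrong
        # from their side; other viewers never see a banned author's
        # content, and no notification is ever produced.
        return [
            (author, text)
            for author, text in self.posts
            if author == viewer or author not in self.shadow_banned
        ]

p = Platform()
p.publish("ecce_homo", "new genderless collection")
p.publish("other_brand", "summer sale")
p.shadow_banned.add("ecce_homo")

print(p.visible_feed("ecce_homo"))  # author still sees both posts
print(p.visible_feed("follower"))   # the shadow-banned post is gone
```

The asymmetry in `visible_feed` is exactly what makes the practice so hard to detect: from the banned account's own vantage point the feed looks entirely normal, and only the falling engagement numbers hint that anything has happened.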

To be sure, it is hard to discuss shadow banning for a number of reasons. Firstly, it is hard for users themselves even to confirm that they have been shadow banned, since social media platforms have publicly denied the practice, even while acknowledging the impact of their algorithms on the prioritization of content, and since users receive no official ban or notification. Secondly, a large part of the online discussion around shadow banning borders on conspiracy discourse, fueling a politically misleading, mass-produced hysteria that preemptively delegitimizes even the posing of questions on the matter. After all, shadow banning seems to run against our commonsensical, intuitive understanding of how social media work, since they are designed to create the illusion of control and choice on the part of the user: they seemingly allow the user to control both the flow and the production of information online, and they encourage them to take control over the content and its form.

However, such freedom is an illusion, as the content posted online under the injunction to 'post!' is carefully curated by the algorithms that govern the cybersphere. At the most obvious level, the choices a user is offered for a post are often limited in terms of length, form, and media. Behind this façade of freedom of speech and expression lurks an algorithmic governmentality that controls the flow and visibility of each post and censors its content on the basis of nefarious 'policy guidelines', 'community standards', or 'terms of service', the new-age Bible of morality and appropriateness. Moreover, the rhetoric used by the tech gurus, the role models of our neoliberal predicament, regarding their AIs usually evokes the impersonality, rationality, and automatic character of their functioning, while also alluding to the necessary, unstoppable, and inevitable development and exponential improvement of these super-human systems. Such rhetoric is mobilized both as an excuse, 'it's not our fault, it's our AI's fault', and as an exonerating mechanism, 'yes, our AI is racist, but give it time and it will get better'.

Thankfully, over the last few years there have been growing calls for algorithmic accountability and audit procedures, calls that take seriously the fact that algorithmic systems, often based on artificial intelligence and increasingly used worldwide by governments and companies to make or recommend decisions with far-reaching effects on individuals, organizations, and society, can fail, and indeed do fail, to respect the rights of individuals, resulting in harmful discrimination and other negative effects. Once again, algorithmic biases quickly turn out to be gender, sexuality, and race biases to the detriment of femininities and LGBTIQA+ persons, especially when those identities or expressions intersect with racial and ethnic identities. For queers all over the world, social media are a vital space of personal expression and community building, while various studies have shown that hostile online environments have detrimental effects on the mental health of LGBTIQA+ persons as well.

And I am talking about online environments in general precisely because algorithmic bias seems to permeate every aspect of the internet. According to the internet and gender studies scholar Safiya Umoja Noble, 'negative biases against women of color are embedded in search engine results and algorithms', resulting in 'data discrimination' furthered by these 'algorithms of oppression', as she puts it. For Noble, 'the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color.' When it comes to queers, let us recall TikTok's public admission that it has suppressed LGBT content and hashtags in Eastern Europe, China, and the Middle East 'as part of its localised approach to moderation'.

Not even artistic expression by these already marginalized groups escapes the censorship of social media, a fact that has sparked a heated debate over the not-so-thinly-veiled-after-all sexism of these platforms and prompted a series of investigations. For queer artists and queer start-ups like Ecce Homo, whose fashion designs are a form of artistic expression, shadow banning has serious consequences for financial viability, especially for those companies that choose to stay faithful to their political and social commitments despite the financial cost of such a gesture. After all, the opaque justification of 'sensitive or sexually suggestive content' that these platforms usually invoke, without any further clarification, runs against the sexual expression that is inextricably linked to gender expression and that constitutes an indispensable part of both queerness and undergarments.